Research Engineer – Formal Methods & Reasoning
San Francisco, CA | Onsite
Early-stage AI research lab | Revenue-generating
An AI research lab focused on alignment, interpretability, and reinforcement learning is hiring engineers to explore how ideas from formal methods, programming languages, and verification can help researchers better understand and constrain model behavior.
This is a highly research-driven role focused on bringing rigorous systems thinking into frontier AI alignment work.
You'll work on:
- Applying formal verification & program analysis ideas to model internals
- Building structured reasoning frameworks for interpretability research
- Exploring compiler-style abstractions for neural computations
- Developing tooling that combines interpretability with verification-inspired constraints
- Prototyping "verification-adjacent" safety guarantees for AI systems
- Shaping an entirely new research direction inside the lab
Strong backgrounds include:
- Programming languages research
- Compilers or systems engineering
- Formal verification / theorem proving
- Security research or operating systems
- Research engineering with strong abstraction skills
A PhD is strongly preferred, but deep technical creativity matters most.
This is not:
- Traditional enterprise verification work
- Pure theory with no implementation
- Large-scale ML infrastructure or training engineering
The environment is small, technical, and highly experimental – ideal for people who enjoy building research systems from scratch and applying rigorous reasoning tools to frontier AI problems.
Interested? Hit Apply and drop me a message!