Staff AI Engineer

Elliptic

Software Engineering, Data Science

Posted on May 7, 2026

Location

London, United Kingdom

Employment Type

Full time

Location Type

Hybrid

Department

Engineering

The impact you will have:

As Staff AI Engineer, you will be one of the most impactful early hires in Elliptic's next stage of AI expansion. You will join at a moment when Elliptic is actively forming its approach to AI foundations: tooling decisions are being made, agentic patterns are being established, and the kernel of a centralised AI platform is being laid out. Your role is to govern the quality and coherence of those decisions before they crystallise.

You will initially work across our AgentForce and Investigations & AI teams, holding the architectural bar on tooling evaluations, keeping the stack decision open and well-reasoned, and ensuring that the internal agentic patterns being developed today are genuinely inheritable by the customer-facing AI products of tomorrow. You will act as a strong advocate for AI adoption, AI technical best practices, and AI enablement across product and engineering.

This is a role for someone who is comfortable with ambiguity, energised by the challenge of making decisions that others will build on for years, and confident enough to hold a strong technical position without needing a team beneath them to do it.

What you will do:

  • Serve as the architectural conscience for Elliptic's early AI decisions, evaluating our current tooling explorations (including the LangSmith ecosystem and Databricks) against the requirements of production-scale, customer-facing AI products, and producing a clear, evidence-based recommendation

  • Work consultatively with the Investigations & AI technical lead and AgentForce engineering to ensure that internal agentic patterns, prompt architectures, and evaluation frameworks are being designed with customer-facing scale and regulatory auditability in mind

  • Hold the AI stack decision open responsibly: document trade-offs, establish evaluation criteria, and prevent pragmatic local choices from becoming the default answer before the right person is in place to make the decision

  • Define and uphold engineering standards for AI systems across the organisation: model observability and tracing, prompt versioning and registry, cost governance, evaluation harnesses, and agent reliability patterns

  • Produce the technical foundation documents that set out a coherent architectural position, a clear view of decisions made and decisions deferred, and an honest assessment of what the architecture can accomplish

You will be a great fit here if you:

  • Are energised by the challenge of bringing rigour to early-stage technical decisions, and understand that preventing a bad architectural choice is often more valuable than shipping a feature

  • Can hold a strong, well-reasoned technical position without needing formal authority to make it stick. You influence through clarity, evidence, and the quality of your thinking

  • Think about AI infrastructure the way the best platform engineers think about data infrastructure: as a set of foundations with internal customers whose needs must be understood and balanced

  • Are comfortable operating in ambiguity and working across teams without a fixed mandate, and know how to make yourself useful in a way that doesn't create dependency or territorial friction

  • Care about the trustworthiness of AI systems, not just their capability, and understand why explainability, auditability, and reliability matter, especially in a regulated compliance context

Our ideal candidate has:

  • Made production AI architectural decisions, including evaluation framework selection, LLM integration patterns, prompt management and versioning at scale, and model observability. You can speak to what went well, what you would do differently, and why

  • Worked across the boundary between internal tooling and customer-facing AI products, with an understanding of how requirements differ across those contexts, particularly in relation to reliability, auditability, and cost

  • Built or significantly shaped an AI evaluation or observability framework in a production environment, and has strong opinions on what good looks like

  • Operated effectively without a team beneath them, as a Staff IC whose impact comes from technical leadership and cross-team influence rather than people management and team workstream prioritisation

Bonus Points for:

  • Experience building agentic systems in a production context, including orchestration patterns, tool use, memory management, and agent reliability at scale

  • Familiarity with one of the major AI ecosystems, such as LangSmith, MLflow, or Databricks ML

  • Having navigated a transition from a scrappy, point-to-point AI integration to a well-engineered, reusable AI platform, with an understanding of the organisational as well as technical challenges that transition involves

  • An interest in the crypto ecosystem and the mission of making digital assets safer and more accessible

Job Benefits

> How we work:

  • Hybrid working and the option to work from almost anywhere for up to 90 days per year

  • £500 Remote working budget to set up your home office space

> Learning & Development:

  • $1,000 Learning & Development budget to use on anything (agreed with your manager) that contributes to your growth and development

> Vacation / Leave:

  • Holidays: 25 days of annual leave + bank holidays

  • An extra day for your birthday

  • Enhanced parental leave: we provide eligible employees, regardless of gender or whether they become a parent by birth or adoption, 16 weeks of fully paid leave.

> Benefits:

  • Private Health Insurance - we use Vitality!

  • Full access to Spill Mental Health Support

  • Life Assurance: we hope you will never need this, but our cover is four times your salary, paid to your beneficiaries

  • £100 Crypto for you!

  • Cycle to Work Scheme