A Personal Note on Temporal and Building Models
Introduction
Recently, I made the decision to move from building large language models to joining Temporal. I don’t see it as leaving so much as broadening my perspective — stepping into new industries while continuing to serve the one I know best. In my personal time, I plan to keep doing something I care deeply about: helping AI scientists and their supporting engineers understand why Temporal is the tool they don’t yet realize they desperately need.
Over the past seven years, I’ve watched brilliant scientists, engineers, and machine learning experts struggle against systems that fail to respect their time or their potential for innovation.
MLOps at scale is difficult
This is personal because I’ve seen, over and over again, how much human potential is wasted when tools lag behind ambition. In 2018, I spent five or six evenings a week helping a scientist launch a batch of new Alexa features. Each launch took twelve to eighteen hours, often failed gating tests, and caused enormous stress for the scientist involved. The monolithic pipeline we relied on was never designed to serve more than a handful of teams, yet we kept pushing it past its limits. In place of real automation, my role became simply holding the system together long enough for innovation to happen.
It’s 2025 now, and while the models and methods have evolved, the underlying pain remains. Science leaders are more focused than ever on maximizing experiment velocity, yet so much of their teams’ energy is still consumed not by discovery, but by the friction of unreliable infrastructure, fragile pipelines, and mounting operational debt. Once again, teams are just trying to hold the system together long enough for innovation to happen.
Temporal feels different…
Temporal represents something different. Today, moving from initial experimental code to a reliable production pipeline is immensely painful. Temporal lets scientists write and scale experimental code, with engineers helping on workflows and supporting systems, without having to manage state, orchestrate distributed scaling, or reinvent durability. It can wrap existing ML libraries, add resilience, and, because it is polyglot, let teams work in whatever language fits them best.
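To make that concrete, here is a minimal sketch using Temporal's Python SDK. The train_model activity and ExperimentWorkflow are hypothetical stand-ins for whatever experimental code a team already has; the point is that retries, timeouts, and durable state come from the workflow definition rather than from hand-rolled pipeline glue. Actually running it would also require a Temporal worker and client, omitted here for brevity.

```python
from datetime import timedelta

from temporalio import activity, workflow
from temporalio.common import RetryPolicy


@activity.defn
async def train_model(config: dict) -> str:
    # Wrap an existing ML library call here (PyTorch, scikit-learn, etc.).
    # Temporal retries this activity on failure according to the workflow's
    # retry policy, so the training code itself stays experiment-focused.
    ...
    return "model-artifact-uri"  # hypothetical artifact reference


@workflow.defn
class ExperimentWorkflow:
    @workflow.run
    async def run(self, config: dict) -> str:
        # The workflow reads like ordinary Python, but each step is durably
        # recorded: if a worker crashes mid-experiment, execution resumes
        # where it left off instead of restarting a twelve-hour run.
        return await workflow.execute_activity(
            train_model,
            config,
            start_to_close_timeout=timedelta(hours=2),
            retry_policy=RetryPolicy(maximum_attempts=3),
        )
```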
Why This Matters to Me
After years of offering scientists only partial solutions, I find it redemptive to be able to point them toward Temporal.
That’s why this matters to me. Exploring and building this case is my way of reconciling two parts of myself — the technologist who understands how hard distributed systems can be, and the strategist who sees where AI infrastructure is headed. Hopefully, it will also help Temporal solidify its place in the story of how science itself evolves.
Let’s work together!
Going forward, it’s a personal mission of mine to help close the gap between the systems people and the science people — so that both can finally move at the speed their ideas deserve. I’ll be starting a blog where I explore specific use cases in more depth and sharing links here. In the meantime, please reach out if you have questions.
Note: Views here are my own, not necessarily those of my employer.