Trigger.dev (YC W23) is currently hiring for 3 roles:
- Developer Relations Engineer | Hybrid UK or Europe remote | Full-time
- Support Engineer | Hybrid UK or remote | Full-time
- Senior Backend Engineer | Hybrid UK or Europe remote | Full-time
We are an open source developer platform for building and running managed workflows and AI agents, serving thousands of teams and handling hundreds of millions of executions per month.
We are looking for passionate devs who love solving challenging technical problems and want to directly contribute to a large commercial open source project.
Trigger.dev (YC W23) | Senior Backend Engineer | Hybrid UK or Remote (EET to PST) | Full-time
We are an open source developer platform for building and running managed workflows and AI agents, serving thousands of teams and handling hundreds of millions of executions per month.
The role: build SDK and platform features, design backend systems, optimize database performance, create scalable APIs, implement monitoring, support customers, and more.
Hi, I saw you had a couple of openings at your company. Are both remote positions for those in the UK only, or are you open to those in the U.S. as well?
Mastra and Trigger.dev solve different but complementary problems. Mastra is great for building AI agents, with abstractions for reasoning, memory, tools and observability. Trigger.dev is an agent-agnostic cloud runtime, capable of running long-running compute defined in TypeScript, with retries, queues, waitpoints, and observability.
We do plan on adding our own AI primitives in the future, but we will also always aim to be framework agnostic. Frameworks like Mastra, Vercel’s AI SDK, etc, pair really well with us; you get agent features on top of our execution layer that’s reliable at scale. We think the best solution for developers gives them optionality to use the tools they’re already familiar with.
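To make the division of labor concrete, here's a rough sketch in plain TypeScript of "agent framework on top of an execution layer". Every name here is illustrative; this is not the real Trigger.dev, Mastra, or AI SDK API.

```typescript
// Framework layer: the agent logic is just an async function and knows
// nothing about retries. Runtime layer: a wrapper that retries any step
// with exponential backoff. (Illustrative names throughout.)

type AgentStep = () => Promise<string>;

// Runtime concern: retry a step up to maxAttempts times with backoff.
// In a real runtime the delay would be durable and survive restarts.
async function runWithRetries(step: AgentStep, maxAttempts = 3): Promise<string> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await step();
    } catch (err) {
      lastError = err;
      await new Promise((r) => setTimeout(r, 2 ** attempt * 10));
    }
  }
  throw lastError;
}

// Framework concern: agent logic, oblivious to the retry policy.
// This step fails twice before succeeding, to exercise the retries.
let calls = 0;
const flakyAgentStep: AgentStep = async () => {
  calls++;
  if (calls < 3) throw new Error("model timeout");
  return "final answer";
};

const result = await runWithRetries(flakyAgentStep);
```

The point is that `flakyAgentStep` could be produced by any framework (Mastra, AI SDK, hand-rolled); the runtime only needs a function it can execute, retry, and observe.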
> FWIW, more than 1/2 of our customers are AI agents, so it does make sense.
We’ve seen the same thing, even more so. Our positioning wasn’t led by our investors; we figured that if the main thing our customers were doing on the platform was building complex/long-running AI agents (and ‘agentic’ workflows), it made sense to lean into it. Fortunately (or unfortunately) for us, it does actually fit what we do pretty well, even if you can also do a lot of other great stuff with the platform which isn’t covered under that umbrella. It’s a tricky thing to get right.
That's great to hear, and thanks for the kind words.
Sorry you had some issues migrating. You're right, it was our biggest docs update so far, and unfortunately a few things did get missed which we have (hopefully) since rectified. Please do let us know if there's anything else we missed and we'll get it sorted.
Very common issues were: forgetting to put non-deterministic code inside of steps (deterministic code can live outside steps, but non-deterministic = boom), and incorrect use of cache keys (people would put dynamic data inside the cache key). Another issue we hit pretty frequently was a single step taking longer than the timeout of the serverless function it was running on (this was before we ran our own compute platform). Another was speed, especially with more complicated tasks with hundreds of steps: the amount of data that needed serializing/deserializing became pretty huge. Oh yeah, that reminds me of another one (it's coming back to me now): there were lots of fun surprises about data being serialized/deserialized and changing shape (e.g. a step couldn't just return a class instance), which caused tons of bugs.
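The "changing shape" bug class is easy to reproduce in isolation: any step result that crosses a JSON serialize/deserialize boundary comes back as a plain object, even if it went in as a class instance. A minimal sketch (no workflow engine involved, just the round-trip itself):

```typescript
// A step returns a class instance...
class Invoice {
  constructor(public id: string, public cents: number) {}
  total(): string {
    return `$${(this.cents / 100).toFixed(2)}`;
  }
}

// ...a step-based engine persists the return value as JSON...
const stepResult = new Invoice("inv_1", 1999);
const persisted = JSON.stringify(stepResult);

// ...and on replay you get back a plain object: same data, no methods,
// no prototype. Calling replayed.total() would throw.
const replayed = JSON.parse(persisted);

const isStillInvoice = replayed instanceof Invoice;       // false
const hasTotal = typeof replayed.total === "function";    // false
```

The same boundary silently turns `Date`s into strings and drops `Map`s and `Set`s entirely, which is why these bugs only surface on replay, not on the first run.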
You keep saying deterministic… What are you actually trying to say?
Based on your docs: I mean googling “deterministic temporal.dev” brings up nothing. I found other libraries and I get it. I assume you mean, “replayable by our engine,” but that would give away: replayability of most code in all the languages you support is hopelessly out of scope. “Pure” isn’t even a useful constraint - lots of code looks pure, but basically none of Python is, because you will alloc by so much as sneezing and hence you can OOM. Why promise the world?
Determinism means that the control logic should lead to the same set of behaviours every time with the same set of inputs under otherwise normal runtime conditions.
And if you're gonna come back with "but that's not real determinism!" then I would remind you that actually no code is deterministic because of quantum uncertainty, and that quantum uncertainty isn't necessarily real because of solipsism, and that there's more to life than rhetorical definition interrogations.
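That working definition can be shown with a toy replay engine, which is roughly how step-based durable execution works: step outputs are recorded on the first run, and on replay the engine returns the recorded values instead of re-executing. This is a simplified sketch, not any particular vendor's implementation:

```typescript
// A recorded log of step outputs, in execution order.
type Log = unknown[];

// First run: execute each step and record its output.
// Replay: skip execution and hand back the recorded value by position.
function makeRunner(log: Log, replay: boolean) {
  let i = 0;
  return function step<T>(fn: () => T): T {
    if (replay) return log[i++] as T;
    const out = fn();
    log.push(out);
    return out;
  };
}

function workflow(step: <T>(fn: () => T) => T): number {
  // Fine: the nondeterminism lives inside a step, so its value is recorded.
  const roll = step(() => Math.floor(Math.random() * 6) + 1);
  // Fine: the control logic between steps only depends on recorded values,
  // so replay takes the same branch and reads the log in the same order.
  return roll > 3 ? step(() => roll * 10) : step(() => roll * -10);
}

const log: Log = [];
const first = workflow(makeRunner(log, false));   // records two entries
const replayed = workflow(makeRunner(log, true)); // replays them
// first === replayed every time, despite Math.random().
```

If the branch had instead called `Math.random()` directly (outside a step), replay could take a different path and read the wrong log entries; that's the failure the "deterministic control logic" rule exists to prevent.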
That's actually how we started back during YC W23, as a "Zapier for developers", but we pivoted to "async workflows" later in 2023. Since then we've been used less like Zapier and more like the core part of an app's infra/backend, which has taken off with AI applications building AI workflows and agents, including those examples you quoted there.
Currently we have two deployment models: self-host the entire platform, or use our cloud offering, where we host the platform and your workloads. We've had a lot of feedback from users and potential users that they'd like to run workloads themselves and have us host the platform side. We definitely plan on offering this deployment model (and allowing payloads and outputs to stay on-prem as well) eventually, but we're waiting until we can do it properly, with support for all the features of the cloud (snapshot/restore, warm starts, atomic versioning). We're planning to offer this alongside the release of our MicroVM runtime later this year/early next year.
- We're not really an agent framework, but more like an agent runtime that is agnostic to which framework you choose to run on our infra. We have lots of people running LangChain, Mastra, AI SDK, hand-rolled agents, etc. on top of us, since we are just a compute platform. We have the building blocks needed for running any kind of agent or AI workflow: the ability to run system packages (anything from Chrome to ffmpeg), long-running execution (i.e. no timeouts), and realtime updates to your frontend (including streaming tokens). We also provide queues and concurrency limits for things like multitenant concurrency, observability built on OpenTelemetry, and schedules for ETL/ELT data work (including multitenant schedules).
- We are TS first and believe the future of agents and AI Applications will be won by TS devs.
- We have a deep integration with snapshotting so code can be written in a natural way but still exhibit continuation-style behavior. For example, you can trigger another agent or task or tool (let's say an agent that specializes in browser use) and wait for the result as a tool call result. Instead of having to introduce a serialization boundary so you can stop compute while waiting, then rehydrate and resume through skipped "steps" or activities, we snapshot the process, kill it, and resume it later, continuing from the exact same process state as before. This is all handled under the hood and managed by us. We're currently using CRIU for this but will be moving to whole-VM snapshots with our MicroVM release.
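For contrast, here is a sketch of the serialization boundary that process snapshotting avoids. Without snapshots, a workflow waiting on a sub-task must reduce itself to plain serializable state, stop, and later be reconstructed from that state. All names here are hypothetical; this illustrates the pattern being avoided, not any real API:

```typescript
// Everything the resume needs must fit in this plain-data record.
// Closures, class instances, open sockets, and local variables that
// weren't explicitly saved are all lost at this boundary.
type Suspended = { stage: "awaiting-browser-agent"; query: string };

// Run until the workflow needs to wait, then emit serializable state.
function runUntilWait(query: string): Suspended {
  return { stage: "awaiting-browser-agent", query };
}

// Later, possibly in a different process: rebuild from the saved state
// plus the sub-task's result, rather than from live process memory.
function resume(state: Suspended, toolResult: string): string {
  return `answer for "${state.query}": ${toolResult}`;
}

const suspended = runUntilWait("latest pricing");
const json = JSON.stringify(suspended);     // state crosses the boundary
const final = resume(JSON.parse(json), "$29/mo");
```

With a process (or whole-VM) snapshot, none of this bookkeeping exists in user code: the wait point is just an `await`, and the engine preserves the entire process state around it.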
Our stack: Node.js, TypeScript, Postgres, Redis, Remix, AWS
Learn more and apply here: https://jobs.ashbyhq.com/triggerdev