Engineering Jan 02, 2026 8 min read

    Scaling to 1 Million Tasks: A Technical Deep Dive


    Sarah Jones

    Head of Engineering

    Running one AI task is easy. Running one million scheduled tasks reliably is an engineering challenge.

    The Scheduling Problem

We use Trigger.dev with a distributed queue system to manage task execution. This lets us dynamically throttle execution rates per organization, per model, and per provider.

    // Task scheduling with rate limits
    const task = await scheduler.create({
      prompt: userPrompt,
      schedule: "0 8 * * *", // Daily at 8am
      modelId: "gpt-4o",
      organizationId: org.id
    });
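The per-dimension throttling described above can be sketched as a token bucket keyed by organization, model, and provider. This is an illustrative sketch, not the production implementation: the key dimensions come from the article, but the `TokenBucket` class, the `allow()` helper, and the limits (10-request burst, 2 requests/second) are assumptions for demonstration.

```typescript
// Hypothetical sketch: token-bucket throttling keyed per org/model/provider.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(private capacity: number, private refillPerSec: number) {
    this.tokens = capacity;       // start full
    this.lastRefill = Date.now();
  }

  tryConsume(): boolean {
    const now = Date.now();
    // Refill proportionally to elapsed time, capped at capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.lastRefill) / 1000) * this.refillPerSec,
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false; // caller should requeue or delay the task
  }
}

const buckets = new Map<string, TokenBucket>();

// Illustrative limits: 10-request burst, refilling at 2 requests/second.
function allow(orgId: string, modelId: string, provider: string): boolean {
  const key = `${orgId}:${modelId}:${provider}`;
  let bucket = buckets.get(key);
  if (!bucket) {
    bucket = new TokenBucket(10, 2);
    buckets.set(key, bucket);
  }
  return bucket.tryConsume();
}
```

Because each (org, model, provider) triple gets its own bucket, one noisy tenant exhausting its quota never starves another tenant's tasks.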

    Model Gateway

The secret sauce is our model gateway, built on OpenRouter. It abstracts away provider-specific rate limits, retries, and fallbacks: if one model is overloaded, requests are automatically routed to an equivalent alternative.
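The fallback behavior can be sketched as a loop over an ordered list of equivalent models, trying the next candidate whenever a call fails. This is a minimal sketch under assumptions: the `callModel` callback and the candidate-list shape are hypothetical stand-ins for whatever the real gateway exposes.

```typescript
// Hedged sketch of fallback routing across equivalent models.
// `callModel` is a hypothetical helper representing a single provider call.
async function callWithFallback(
  prompt: string,
  candidates: string[],
  callModel: (model: string, prompt: string) => Promise<string>,
): Promise<string> {
  let lastError: unknown;
  for (const model of candidates) {
    try {
      return await callModel(model, prompt); // first healthy model wins
    } catch (err) {
      lastError = err; // overloaded or erroring: fall through to the next
    }
  }
  // Every candidate failed; surface the last error to the caller.
  throw lastError;
}
```

Ordering the candidate list from preferred to acceptable lets the gateway degrade gracefully instead of failing a scheduled run outright.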

    Database Architecture

    To store millions of tasks and billions of runs, we optimized our Postgres schema with careful indexing on nextRunAt timestamps. This ensures that the scheduler can efficiently find and execute due tasks every minute.
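One way the "find due tasks every minute" step described above is commonly structured in Postgres is a partial index on the run timestamp plus a claim query using `FOR UPDATE SKIP LOCKED`, so concurrent scheduler workers never double-execute a task. This is a sketch, not the actual schema: the table and column names (`tasks`, `next_run_at`, `status`) and the batch size are assumptions.

```typescript
// Hypothetical DDL for the hot-path index (names are illustrative):
//
//   CREATE INDEX tasks_due_idx ON tasks (next_run_at)
//     WHERE status = 'active';
//
// A claim query a scheduler tick might run; SKIP LOCKED lets multiple
// workers poll concurrently without contending on the same rows.
const claimDueTasks = `
  UPDATE tasks
     SET status = 'running'
   WHERE id IN (
     SELECT id
       FROM tasks
      WHERE status = 'active'
        AND next_run_at <= now()
      ORDER BY next_run_at
      LIMIT 100
      FOR UPDATE SKIP LOCKED
   )
  RETURNING id;
`;
```

The partial index keeps only active tasks in the index, so the due-task scan stays fast even as completed runs accumulate into the billions.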


    © 2026 0ct Inc. All rights reserved.