7 steps · ~10 minutes · Zero code required

Build Temporal Workers
Without Writing Code

A step-by-step guide to designing activities, workflows, workers, and clients using the drag-and-drop visual builder — then generating production-ready Python code.

How It Works

The builder turns your visual design into a YAML config, then the API generates fully-typed Python Temporal code.

1. Design Visually

Drag types from the palette, configure activities and workflows through forms — no YAML or code to write.

2. Validate & Preview

The builder catches naming errors, missing fields, and broken references before you generate — with clickable error links.

3. Generate & Download

One click generates types, activities, workflows, workers, clients, Dockerfile, and docker-compose — all ready to run.

0

Launch the Builder

Start the application with Docker Compose. The UI runs on port 3000 and the API on port 8000.

Running app: https://temporal-build-flows.vercel.app/builder.html

# Clone and start
unzip temporal-builder.zip
cd temporal-builder
docker compose up --build

# Open in browser
open http://localhost:3000

Tip: On first visit, a sample Order Processing System loads automatically with 6 types, 5 activities, 2 workflows, 3 workers, and 2 clients. Explore it or click Clear All to start fresh.

[Builder UI at localhost:3000] The sidebar palette offers primitive types (string, int, float, bool) and draggable components (⚡ Activity, 🔄 Workflow, ⚙ Worker), plus project metadata fields (e.g. OrderSystem, orders-prod, order-processing, Python). Tabs across the top — Types, Activities, Workflows, Workers, Clients, Results — switch the workspace, which fills in as you add items.
1

Define Your Types

Types are the data structures your activities and workflows pass around. You can create structs (like dataclasses), enums, and aliases.

STRUCT Building a struct

  1. Click + Add Type on the Types tab
  2. Name it in PascalCase — e.g. OrderInput
  3. Drag a string from the sidebar onto the drop zone — a field appears
  4. Name the field order_id (snake_case) and check required
  5. Repeat — drag int, bool, or your own types as fields

ENUM Building an enum

  1. Add a type, then change the kind dropdown to enum
  2. Click + Add Value for each variant
  3. Enter PENDING / pending as name/wire value

Naming rules: Type names must be PascalCase. Field names must be snake_case. Enum variants should be UPPER_SNAKE. The builder validates these on generate.

This generates typed Python dataclasses:

@dataclass
class OrderInput:
    order_id: str
    customer_id: str
    items: list[str]
    total: Money
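
An enum built the same way generates a standard Python Enum. A sketch of the output — the PENDING variant comes from the steps above, while the class name OrderStatus and the other variants are illustrative:

```python
from enum import Enum

# Generated from an enum type defined in the builder.
# PENDING matches the example above; other variants are illustrative.
class OrderStatus(Enum):
    PENDING = "pending"                    # wire value as entered in the builder
    PAYMENT_CAPTURED = "payment_captured"
    FULFILLED = "fulfilled"
```

The wire value is what crosses the network, so renaming a Python-side variant later doesn't break in-flight workflows.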
2

Add Activities

Activities are the individual units of work — each one does a specific task like validating data, calling an API, or sending an email.

Sync Activities BLOCKING

The worker executes the function and returns the result directly. Use for fast operations.

Database lookups
Input validation
Simple API calls

Async Activities HEARTBEAT

Long-running operations that must heartbeat periodically to prove liveness. Can complete out-of-band.

Payment processing
File processing / ETL
Warehouse fulfillment

Configuring an activity:

  1. Switch to the Activities tab and click + Add Activity
  2. Choose sync or async from the mode dropdown
  3. Name it in snake_case — e.g. capture_payment
  4. Set Input/Output Types — your custom types appear in the dropdown
  5. Set the Timeout (e.g. 30s, 5m, 2h) and Max Retries
  6. For async: configure the Heartbeat Timeout and optional Task Queue Override

Heartbeat timeout: If your async activity doesn't heartbeat within this window, Temporal considers it failed and may retry. Set it to something reasonable — e.g. 30s for payment, 5m for warehouse ops.
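
In the YAML the builder emits, an async activity configured this way might look like the sketch below. Key names follow this guide's schema reference; the PaymentResult type and the payments queue are assumptions for illustration:

```yaml
activities:
  - name: capture_payment
    mode: async
    input: {type: OrderInput}
    output: {type: PaymentResult}   # assumed type, for illustration
    start_to_close_timeout: "2m"
    heartbeat_timeout: "30s"        # matches the payment guidance above
    task_queue: payments            # optional queue override
```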

3

Build Workflows

Workflows orchestrate your activities into a reliable process. Add steps that reference the activities you defined — they run in sequence.

sync — Caller blocks until complete
async — Fire-and-forget, returns a handle
cron — Runs on a schedule (e.g. 0 3 * * *)

Adding workflow steps:

  1. Click + Add Step inside the workflow card
  2. Give the step an ID (e.g. validate)
  3. Select the step kind — usually activity
  4. Pick the activity from the dropdown (only your defined activities appear)
  5. Name the output variable (e.g. validated_order) — used by later steps

Example step sequence: validate_order → reserve_inventory → capture_payment → fulfill_order → send_notification

Tip: Other step kinds include child_workflow (call another workflow), timer (durable sleep), and condition (branching). Advanced kinds like parallel and continue_as_new can be configured in the generated YAML.
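
As a sketch of how those step kinds look in the underlying YAML — step IDs are illustrative, and the duration and workflow keys are assumptions based on this guide's schema reference:

```yaml
steps:
  - id: validate
    kind: activity
    activity: validate_order
    output_var: validated_order
  - id: cooloff
    kind: timer                  # durable sleep, survives worker restarts
    duration: "10m"              # assumed key name
    depends_on: [validate]
  - id: fulfill
    kind: child_workflow         # call another workflow
    workflow: FulfillOrder       # illustrative workflow name
    depends_on: [cooloff]
```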

4

Configure Workers

Workers are the running processes that poll Temporal for work. Each worker hosts a set of activities and workflows on a specific task queue.

  1. Switch to the Workers tab and click + Add Worker
  2. Name it in kebab-case — e.g. order-worker
  3. Set the Task Queue it polls (e.g. order-processing)
  4. Check the boxes for which activities and workflows this worker hosts
  5. Tune concurrency (max activities, max workflow tasks) and resources (replicas, CPU, memory)

Why multiple workers? Separate workers let you scale independently — e.g. 3 replicas for the order-worker but only 1 for notifications. Activities on different task queues must be on separate workers.

Validation check: The builder warns you if any activity or workflow isn't assigned to a worker — unassigned items won't execute. Each worker must host at least one activity or workflow.
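
A pair of worker entries in the generated config might look like this sketch — key names follow the schema reference later in this guide, and the ProcessOrder workflow name and resource values are illustrative:

```yaml
workers:
  - name: order-worker
    task_queue: order-processing
    activities: [validate_order, reserve_inventory, capture_payment]
    workflows: [ProcessOrder]      # illustrative workflow name
    max_concurrent_activities: 20
    runtime: {replicas: 3, cpu: "1", memory: "1Gi"}
  - name: notification-worker
    task_queue: notifications
    activities: [send_notification]
    workflows: []                  # activities only
    runtime: {replicas: 1, cpu: "0.5", memory: "512Mi"}
```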

5

Set Up Clients

Clients are how your application code starts workflows, sends signals, and queries state. Each client gets type-safe generated stubs.

  1. Switch to Clients and click + Add Client
  2. Name it api-client (kebab-case)
  3. Set the Target — the Temporal server address (e.g. localhost:7233)
  4. Choose default mode: async (returns handle) or sync (blocks)
  5. Check the workflows this client is allowed to invoke — this is a type-safe allow-list
  6. Optionally enable TLS for production connections

The generator produces typed methods for each allowed workflow:

# Generated client methods
await client.start_process_order(input, workflow_id)
await client.execute_process_order(input, workflow_id)
await client.signal_cancel_order(workflow_id, payload)
await client.query_get_status(workflow_id)
6

Generate & Download

You're ready! Preview your YAML, then hit Generate to produce the full project.

Before generating

  1. Click Preview YAML to inspect the config
  2. Check for any red dots on tabs — they indicate validation errors
  3. Click Generate — errors show in a panel with clickable links
  4. Fix any issues and regenerate

Generated files

📦 types.py — Dataclasses and enums
⚡ activities.py — Activity stubs with retry policies
🔄 workflows.py — Workflow classes with steps
⚙ worker_*.py — Runnable worker entry points
🔌 client_*.py — Typed client classes
📄 Dockerfile — Container image
📄 docker-compose.yml — Worker services
📄 requirements.txt — Python deps
📄 config.yaml — Your source config

Running your generated project:

# Download the ZIP from the Results tab, then:
cd your-project
pip install -r requirements.txt

# Start a worker
python worker_order_worker.py

# Or use the generated docker-compose
docker compose up --build

Next step: The generated activities have NotImplementedError stubs. Open each activities.py function and replace the stub with your actual business logic — the Temporal wiring, types, retry policies, and heartbeat scaffolding are already in place.
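
For example, a stub and its filled-in version might look like this sketch. Names are illustrative, and the OrderInput dataclass is inlined here to keep it self-contained; in the real generated file the function carries the @activity.defn decorator and imports its types from types.py:

```python
from dataclasses import dataclass

@dataclass
class OrderInput:
    order_id: str
    customer_id: str

# As generated (simplified; the real file adds @activity.defn):
async def validate_order_stub(order: OrderInput) -> OrderInput:
    raise NotImplementedError("fill in business logic")

# After replacing the stub with real business logic:
async def validate_order(order: OrderInput) -> OrderInput:
    if not order.order_id:
        raise ValueError("order_id is required")
    return order
```

The signature stays exactly as generated — only the body changes — so the workflow and retry wiring keep working untouched.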

[Results tab] Shows the project name (e.g. OrderProcessingSystem), a build ID, a Download ZIP button, and the generated files: 📦 types.py, ⚡ activities.py, 🔄 workflows.py, per-worker entry points, and 🔌 per-client modules.

Quick Reference

Naming Conventions

PascalCase    Types, Workflows     OrderInput, ProcessOrder
snake_case    Activities, Fields   validate_order, order_id
kebab-case    Workers, Clients     order-worker, api-client
UPPER_SNAKE   Enum variants        PENDING, PAYMENT_CAPTURED

Duration Format

500ms   Milliseconds
30s     Seconds
5m      Minutes
2h      Hours
1d      Days

Builder Actions

Load Sample    Pre-fill with an Order Processing System
Preview YAML   Inspect the generated config before submitting
Clear All      Reset everything to blank
Generate       Validate + send to API + show results

API Endpoints

POST  /api/generate                  Submit YAML config
GET   /api/projects                  List all projects
GET   /api/projects/{id}/download    Download ZIP
GET   /api/health                    Health check
Schema Reference (TIDS v1.0)

The Temporal Interface Definition Schema (TIDS) is the YAML format that defines your entire Temporal application. Here's every top-level key and what it controls.

Required Sections

metadata — Project name, namespace, default task queue, target language
activities — Activity definitions with sync/async mode, I/O types, timeouts, retries
workflows — Workflow orchestrations with steps, signals, queries, updates
workers — Worker processes with task queues, concurrency, resource limits
clients — Client configs with TLS, allowed workflows, connection targets

Optional Sections

types — Reusable structs, enums, and aliases for I/O contracts
retry_policies — Named retry presets (backoff, max attempts, non-retryable errors)
signals — Signal definitions reusable across workflows
queries — Query definitions for reading workflow state
updates — Update handlers with optional validators
pipelines — Multi-workflow orchestration with stages and fan-out
observability — Metrics, tracing, and logging configuration

Key Schema Patterns

Activity I/O Contract

# Each activity declares its
# typed input and output
input:
  type: DocumentInput
output:
  type: ExtractedText

Workflow Step Graph

# Steps form a DAG with
# depends_on edges
steps:
  - id: extract
    kind: activity
    depends_on: [upload]

Error Handling

# Per-step error strategy
on_error:
  strategy: fallback
  fallback_step: compensate
  max_retries: 3

Type Expression Syntax

Types used in input.type, output.type, and fields[].type follow this grammar:

type_expr = primitive | reference | generic
primitive = "string" | "int" | "float" | "bool" | "datetime" | "duration"
reference = PascalCaseName   # refers to a types entry
generic   = "list<" type_expr ">" | "map<" type_expr "," type_expr ">" | "optional<" type_expr ">"
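
For instance, a struct's fields can mix primitives, references, and generics — the field names here are illustrative:

```yaml
fields:
  - {name: tags, type: "list<string>", required: false}
  - {name: totals_by_currency, type: "map<string, Money>", required: true}
  - {name: coupon_code, type: "optional<string>", required: false}
  - {name: created_at, type: datetime, required: true}
```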
EXAMPLE

Manual Schema: AI Document Pipeline

Let's write a complete YAML schema by hand for a real-world AI workflow: a document-processing pipeline that uploads documents, extracts text, stores embeddings in a vector database, and serves an LLM-powered chat agent — with Temporal orchestrating every step.

Two workflows in this example: (1) A document ingestion pipeline that processes uploaded files into a searchable context database, and (2) a chat agent session where each user message triggers a Temporal workflow that retrieves context and calls the LLM.

Architecture Overview

Ingestion pipeline: Upload Doc (PDF / DOCX) → Extract Text (sync activity) → Chunk + Embed (sync activity) → Store in VectorDB (async activity)

Chat turn: User Message (chat input) → Retrieve Context (vector search) → Call LLM (async + heartbeat) → Response (streamed back)

A. Metadata & Types

Start every schema with the version, project metadata, and the data structures your activities will pass around.

schema_version: "1.0.0"

metadata:
  name: AIDocumentPipeline
  namespace: ai-prod
  default_task_queue: ai-pipeline
  language: [python]

types:
  - name: DocumentInput
    kind: struct
    fields:
      - {name: doc_id, type: string, required: true}
      - {name: file_url, type: string, required: true}
      - {name: mime_type, type: string, required: true}
      - {name: user_id, type: string, required: true}
  - name: ExtractedText
    kind: struct
    fields:
      - {name: doc_id, type: string, required: true}
      - {name: text, type: string, required: true}
      - {name: page_count, type: int, required: true}
  - name: EmbeddingResult
    kind: struct
    fields:
      - {name: doc_id, type: string, required: true}
      - {name: chunk_count, type: int, required: true}
      - {name: collection_name, type: string, required: true}
  - name: ChatMessage
    kind: struct
    fields:
      - {name: session_id, type: string, required: true}
      - {name: user_message, type: string, required: true}
      - {name: collection_name, type: string, required: true}
  - name: LLMResponse
    kind: struct
    fields:
      - {name: response_text, type: string, required: true}
      - {name: tokens_used, type: int, required: true}
      - {name: context_chunks, type: int, required: true}

B. Activities — The Building Blocks

Each activity is a single unit of work. Notice the two modes: sync for fast operations and async for LLM/DB calls that need heartbeats.

retry_policies:
  - name: standard
    initial_interval: "1s"
    backoff_coefficient: 2.0
    max_interval: "30s"
    max_attempts: 5
  - name: llm-retry
    initial_interval: "2s"
    backoff_coefficient: 1.5
    max_interval: "15s"
    max_attempts: 3
    non_retryable_error_types: [ContentFilterError]

activities:
  # ── Document Ingestion Activities ──
  - name: extract_text              # PDF/DOCX → plain text
    mode: sync
    input: {type: DocumentInput}
    output: {type: ExtractedText}
    start_to_close_timeout: "2m"
    retry_policy: standard
  - name: chunk_and_embed           # Split text → generate embeddings
    mode: sync
    input: {type: ExtractedText}
    output: {type: EmbeddingResult}
    start_to_close_timeout: "5m"
    retry_policy: standard
  - name: store_in_vectordb         # Write embeddings to Pinecone/Chroma
    mode: async                     # Heartbeats during bulk upsert
    input: {type: EmbeddingResult}
    output: {type: bool}
    start_to_close_timeout: "10m"
    heartbeat_timeout: "30s"
    retry_policy: standard
  # ── Chat / Agent Activities ──
  - name: retrieve_context          # Vector similarity search
    mode: sync
    input: {type: ChatMessage}
    output: {type: string}          # Concatenated context chunks
    start_to_close_timeout: "15s"
    retry_policy: standard
  - name: call_llm                  # Send prompt + context to LLM API
    mode: async                     # Heartbeats while streaming tokens
    input: {type: string}           # Assembled prompt
    output: {type: LLMResponse}
    start_to_close_timeout: "2m"
    heartbeat_timeout: "20s"
    retry_policy: llm-retry
    task_queue: llm-inference       # Dedicated GPU-backed queue

Why async for LLM calls? LLM inference can take 10-60+ seconds. The async mode tells the generator to wire heartbeat scaffolding — the activity reports activity.heartbeat("streaming...") as tokens arrive. If the worker crashes mid-stream, Temporal detects the missing heartbeat and retries on another worker.
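
Stripped of the Temporal wiring, the heartbeat pattern is just "report progress on every chunk". A library-free sketch, where the heartbeat parameter stands in for temporalio's activity.heartbeat:

```python
def stream_with_heartbeats(token_stream, heartbeat):
    """Collect streamed LLM tokens, reporting liveness per token.

    In the generated activity, `heartbeat` would be
    temporalio.activity.heartbeat; here it is any callable.
    """
    chunks = []
    for token in token_stream:
        chunks.append(token)
        heartbeat("streaming...")   # proves liveness while tokens arrive
    return "".join(chunks)
```

If the process dies between two heartbeats, no heartbeat arrives within the 20s window and Temporal reschedules the activity elsewhere.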

C. Workflows — Orchestrating the Pipeline

Two workflows: one for ingestion (async, fire-and-forget), one for each chat turn (sync, blocks until response).

workflows:
  # ── Pipeline: Document → Text → Embeddings → VectorDB ──
  - name: IngestDocument
    mode: async                     # Fire-and-forget from upload API
    id_pattern: "ingest-{{input.doc_id}}"
    id_reuse_policy: reject_duplicate
    input: {type: DocumentInput}
    output: {type: EmbeddingResult}
    execution_timeout: "30m"
    steps:
      - id: extract
        kind: activity
        activity: extract_text
        output_var: extracted
      - id: embed
        kind: activity
        activity: chunk_and_embed
        output_var: embeddings
        depends_on: [extract]
      - id: store
        kind: activity
        activity: store_in_vectordb
        output_var: stored
        depends_on: [embed]
        on_error:
          strategy: retry
          max_retries: 3
  # ── Agent: Each chat message = one workflow execution ──
  - name: ChatTurn
    mode: sync                      # API gateway blocks until LLM responds
    id_pattern: "chat-{{input.session_id}}-{{uuid}}"
    input: {type: ChatMessage}
    output: {type: LLMResponse}
    execution_timeout: "3m"
    steps:
      - id: context
        kind: activity
        activity: retrieve_context
        output_var: context_text
      - id: llm
        kind: activity
        activity: call_llm
        output_var: response
        depends_on: [context]
        on_error:
          strategy: fallback
          fallback_step: fallback_response
      # Fallback: return a graceful error if LLM fails
      - id: fallback_response
        kind: local_activity
        local_activity_fn: return_error_message
        output_var: response

Chat turn as sync workflow: The API gateway calls execute_workflow(ChatTurn, ...) which blocks until the LLM responds. This gives you Temporal's full retry/timeout guarantees on every single chat message — if the LLM worker crashes mid-response, it retries automatically. The user just sees a slightly delayed reply, never a broken connection.

D. Workers & Clients

Separate workers for CPU-bound text extraction, GPU-bound LLM inference, and the API client.

workers:
  - name: ingestion-worker
    task_queue: ai-pipeline
    activities: [extract_text, chunk_and_embed, store_in_vectordb]
    workflows: [IngestDocument]
    max_concurrent_activities: 10
    runtime: {replicas: 2, cpu: "2", memory: "4Gi"}
  - name: llm-worker
    task_queue: llm-inference       # Matches call_llm.task_queue
    activities: [call_llm]
    workflows: []                   # Activities only — no workflows
    max_concurrent_activities: 4    # Limited by GPU memory
    runtime: {replicas: 1, cpu: "4", memory: "16Gi"}
  - name: chat-worker
    task_queue: ai-pipeline
    activities: [retrieve_context]
    workflows: [ChatTurn]
    max_concurrent_activities: 100
    max_concurrent_workflow_tasks: 200
    runtime: {replicas: 3, cpu: "1", memory: "1Gi"}

clients:
  - name: api-gateway
    target: "temporal.internal:7233"
    default_mode: async
    allowed_workflows: [IngestDocument, ChatTurn]

Why three workers? The llm-worker runs on GPU machines with max_concurrent_activities: 4 (limited by VRAM). The ingestion-worker runs on CPU instances for text parsing. The chat-worker handles fast vector lookups at high concurrency. Temporal routes work to the right queue automatically.

E. Chat Session Flow — Temporal in Every Interaction

Here's how every single chat message flows through Temporal. The API gateway is just a thin client — all reliability lives in the workflow.

1
User sends a message

The frontend POSTs to /api/chat. The API gateway uses the generated client to call client.execute_chat_turn(message, workflow_id) — this blocks until the workflow completes.

2
ChatTurn workflow starts

Temporal schedules the workflow on the ai-pipeline queue. Step 1: retrieve_context runs as a sync activity — queries the vector DB with the user's message, returns the top-K matching document chunks.

3
LLM call with context

Step 2: The workflow assembles a prompt (user message + retrieved context) and dispatches call_llm to the llm-inference queue. The LLM worker picks it up, starts streaming tokens, and heartbeats every few seconds. If it crashes, Temporal retries on another GPU worker.

4
Response returned

The LLM activity completes with an LLMResponse. The workflow finishes. The execute_workflow call unblocks and the API gateway returns the response to the user. Total time: typically 2-10 seconds, with full retry guarantees on every step.

Agent pattern: For multi-turn agents that use tools (search, code execution, API calls), each tool invocation becomes an activity. The agent's "reasoning loop" is the workflow — it calls call_llm, checks if the LLM wants to use a tool, executes the tool activity, feeds the result back, and loops via continue_as_new if the conversation exceeds history limits. Temporal makes the entire agent loop durable and retryable.
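
A minimal, library-free sketch of that reasoning loop — in the real workflow, llm and run_tool would be activity invocations and the turn cap would trigger continue_as_new rather than raise; the message shapes are assumptions for illustration:

```python
def agent_loop(llm, run_tool, user_message, max_steps=8):
    """Drive an LLM tool-use loop until it produces a final answer.

    `llm` maps a message history to a dict: either
    {"text": ..., "tool": None} (final answer) or
    {"tool": name, "args": ...} (tool request).
    Both callables are stand-ins for Temporal activities.
    """
    history = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        reply = llm(history)
        if reply.get("tool") is None:        # final answer, no tool requested
            return reply["text"]
        result = run_tool(reply["tool"], reply.get("args"))
        history.append({"role": "tool", "content": result})
    # Real workflow: workflow.continue_as_new(trimmed_history)
    raise RuntimeError("turn limit reached")
```

Because each llm and run_tool call is an activity, every tool invocation gets its own timeout and retry policy, and a crash at any point resumes the loop exactly where it left off.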

Try it yourself

Copy any of the YAML blocks above into a file called config.yaml, then POST it to the builder API:

curl -X POST http://localhost:8000/api/generate \
  -H "Content-Type: text/yaml" \
  -d @config.yaml

Or paste the complete YAML into the builder UI's Preview YAML modal (coming soon: YAML import), configure any remaining fields visually, and hit Generate.