
Building a Two-Agent AI Workflow for Customer Support Operations

Here’s the concrete version of how this is built and how I run it.

What I mean by “orchestrator” and “engineering” agent

In this setup, I keep the workspace orchestration rules in one place and use that file to decide what gets delegated. I split responsibilities by role:

- Orchestrator: reads inbound signals, normalizes requests, triages impact, and drafts decision requests for a human.
- Engineer: validates hypotheses against logs and tests, proposes a minimal fix, and opens the PR.

What the “SOUL” looks like in practice

Instead of “a mysterious assistant,” I treat system guidance as a versioned file and a pair of prompts:

1) “SOUL” file (identity and posture)

My SOUL-like layer defines tone, boundaries, and escalation preferences.

# SOUL.md (example)
# Identity
- Name: Personal Orchestrator
- Tone: direct, practical, concise
- Escalation: always ask before touching production

# Core Rules
- Keep a short decision log for every task
- Ask for human confirmation before merging
- Never create irreversible actions automatically
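Because the SOUL layer is just a versioned file, loading it is trivial. A minimal sketch (the helper name is mine; only `SOUL.md` comes from the example above) that also hashes the file, so each decision-log entry can record which version of the identity rules a run used:

```python
import hashlib
from pathlib import Path


def load_soul(path: str = "SOUL.md") -> tuple[str, str]:
    """Read the SOUL file and return (content, short version hash).

    Logging the hash alongside each decision-log entry makes it easy
    to tell which version of the identity rules a given run used.
    """
    content = Path(path).read_text(encoding="utf-8")
    version = hashlib.sha256(content.encode("utf-8")).hexdigest()[:12]
    return content, version
```

The hash is cheaper than a git lookup and works even when the file lives outside a repository.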

2) “Agent system” file (role behavior)

I keep one role definition per lane.

# orchestrator.md
- Read inbound: GitHub issues, notifications, calendar, Slack/Telegram mentions
- Normalize request schema: source, urgency, component, reporter, links
- Create/update task state
- Draft root-cause hypothesis list
- Produce a human-readable decision request
# engineer.md
- Validate assumptions with logs/tests
- Propose minimal fix set
- Produce diff plan and validation commands
- Create branch and PR with summary, risks, and rollback
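One way to wire the lanes together is to prepend the shared SOUL file to whichever role file a task needs. This is a sketch under my assumptions; the file names come from the examples above, but the helper and the dispatch table are mine:

```python
from pathlib import Path

# One role definition per lane, as described above.
ROLE_FILES = {
    "orchestrator": "orchestrator.md",
    "engineer": "engineer.md",
}


def build_system_prompt(role: str, soul_path: str = "SOUL.md") -> str:
    """Compose a system prompt: shared identity first, then role behavior."""
    if role not in ROLE_FILES:
        raise ValueError(f"unknown role: {role!r}")
    soul = Path(soul_path).read_text(encoding="utf-8")
    role_rules = Path(ROLE_FILES[role]).read_text(encoding="utf-8")
    return f"{soul}\n\n{role_rules}"
```

Keeping identity and role behavior in separate files means a tone change edits one file, not every lane.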

Concrete request → execution flow

Here is a real sequence I use:

  1. Capture: notification arrives from board + email mentioning a failing checkout flow.
    Task record created with fields:
    • id: task-1173
    • source: issue comment
    • urgency: high
    • component: checkout/fulfillment
    • links: failing job URL, commit range
  2. Orchestrator pass: runs a fast triage prompt and writes:
    • impact: customers unable to complete checkout
    • blast radius: two stores + one webhook worker
    • hypotheses: schema regression, env mismatch, API contract drift
  3. Engineer pass: reproduces against logs and recent commits, then prepares:
    • a minimal patch plan
    • expected tests
    • rollback note
  4. Human gate: I confirm scope and acceptance criteria before execution.
  5. Implementation: engineer creates PR, includes:
    • commands run
    • before/after screenshots or log snippets
    • risk and what to monitor
  6. Post-merge: the orchestrator updates the task thread and tracks observability signals for 24 hours.
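The six steps above can be sketched as a small state machine over the task record. Field names follow the task-1173 example; the transition rules are my reading of the flow, including the explicit human gate before implementation:

```python
from dataclasses import dataclass, field

# Allowed transitions, one per step of the flow above.
TRANSITIONS = {
    "captured": {"triaged"},
    "triaged": {"validated"},
    "validated": {"approved"},      # human gate: explicit confirmation
    "approved": {"implemented"},
    "implemented": {"monitoring"},  # 24-hour observability window
}


@dataclass
class Task:
    id: str
    source: str
    urgency: str
    component: str
    links: list[str] = field(default_factory=list)
    state: str = "captured"

    def advance(self, new_state: str) -> None:
        """Move the task forward; refuse any skipped or backward step."""
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"cannot go {self.state} -> {new_state}")
        self.state = new_state
```

The point of the hard transition table is that no agent can jump from triage straight to implementation: the approval state is unavoidable.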

What this looks like in prompts

These are the high-level pseudo-templates I use in prompts and agent handoff messages.

Orchestrator handoff:
{task_id}
{source}
{urgency}
{impact_summary}
{hypotheses}
{evidence_snippets}
{next_questions}
{human_approval_needed: true/false}
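The handoff fields map naturally onto a typed payload. A sketch: the `TypedDict` and the gate helper are mine, and the rule that high-urgency work always goes through the gate is my addition, consistent with the "ask before touching production" rule in the SOUL file:

```python
from typing import TypedDict


class OrchestratorHandoff(TypedDict):
    task_id: str
    source: str
    urgency: str
    impact_summary: str
    hypotheses: list[str]
    evidence_snippets: list[str]
    next_questions: list[str]
    human_approval_needed: bool


def needs_human_gate(handoff: OrchestratorHandoff) -> bool:
    """High-urgency work always hits the approval gate,
    even if the orchestrator forgot to set the flag."""
    return handoff["human_approval_needed"] or handoff["urgency"] == "high"
```

Typing the handoff means a malformed message fails at the boundary between agents instead of mid-implementation.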
Engineer response format:
- Root cause hypothesis map (ranked)
- Validation checklist
- Reproduction steps
- Patch plan
- PR title + body draft
- Rollback plan
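Before accepting an engineer response, a cheap completeness check against the required sections catches truncated outputs early. The section names come from the format above; the helper is a sketch of mine:

```python
# Required sections of an engineer response, per the format above.
REQUIRED_SECTIONS = [
    "Root cause hypothesis map",
    "Validation checklist",
    "Reproduction steps",
    "Patch plan",
    "PR title",
    "Rollback plan",
]


def missing_sections(response: str) -> list[str]:
    """Return the required sections absent from an engineer response."""
    lowered = response.lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in lowered]
```

A non-empty result means the response goes back to the engineer lane instead of on to the human gate.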

Monitoring and observability are non-negotiable

I wire the following to be visible in each task thread:

- task state changes and decision-log entries
- PR status, validation commands, and their output
- post-merge observability signals for the 24-hour watch window

This system works because every agent output is auditable and every escalation path has a checkpoint.

What to include if you want to copy this

My recommendation is to start small: one source input, one orchestrator, one engineer, and one strong approval gate. Expand from there.
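At that starting size, the whole loop fits in a few lines. This is a sketch, not my production wiring: the agent calls are stubbed as plain callables, since the model client is whatever you already use:

```python
from typing import Callable


def run_pipeline(
    event: dict,
    orchestrate: Callable[[dict], dict],
    engineer: Callable[[dict], dict],
    approve: Callable[[dict, dict], bool],
) -> dict:
    """Minimal loop: triage, plan, gate, execute.

    `orchestrate` and `engineer` stand in for your agent calls;
    `approve` is the human confirmation step.
    """
    triage = orchestrate(event)        # impact, blast radius, hypotheses
    plan = engineer(triage)            # patch plan, tests, rollback note
    if not approve(triage, plan):      # the one strong approval gate
        return {"status": "blocked", "reason": "human declined"}
    return {"status": "approved", "plan": plan}
```

Everything else in this post, multiple sources, richer task state, post-merge monitoring, is an extension of this loop, not a replacement for it.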

About the Author

Jackson Tomlinson is an AI Engineer who believes the best code is the code you don't have to explain twice. When he's not optimizing AI workflows, he's probably updating a CLAUDE.md file somewhere.
