# Handoff Protocol


Protocol for work transitions between Claude Desktop (Opus) and Claude Code. Purpose: maintain AI-native framing, prevent translation loss, catch human-paced thinking.

## Opus → Claude Code

When Opus creates specifications for Claude Code to implement.

### Specification Format (Required)

Every specification must follow this structure:

```markdown
# [Project Name]

## Target State

[Complete description of what will exist when done. No phases. No priorities. This is the end state.]

## Architecture Specification

[Technical details, object models, schemas, relationships]

## Architectural Dependencies

[What genuinely must exist before other things can work. Not "what's easier" or "what to start with"—actual technical prerequisites.]

A
└── B (requires A)
    └── C (requires B)

## Validation Gates

[Executable verification criteria. Each gate has specific checks and pass/fail criteria.]

## Constraints

### Do
[Required approaches]

### Don't
[Forbidden patterns]

## Reference

[Links to detailed documentation, IDs, credentials]

---

*This is the complete target architecture. Build it.*
```

### Format Enforcement

Before handing off any specification, verify:

| Check | Requirement |
|-------|-------------|
| Target State | Complete, not phased |
| No calendar language | No "Week 1-2", "Phase 1 (Months 1-3)" |
| No priorities | No "Priority 1, 2, 3" or "start with" |
| No fallbacks | No "quick wins", "if time permits", "MVP" |
| Dependencies are technical | Not difficulty-based or preference-based |
| Validation gates exist | Executable, pass/fail criteria |
| Imperative close | Ends with "Build it" or equivalent |

If any check fails → revise specification before handoff.
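These checks can be automated before handoff. A minimal sketch, assuming a plain-text specification; the phrase patterns are illustrative examples drawn from the table above, not an exhaustive list, and the imperative-close check is deliberately simplified:

```python
import re

# Illustrative patterns from the enforcement table above; extend as needed.
FORBIDDEN = {
    "calendar language": r"\bweek\s*\d|\bphase\s*\d|\bmonths?\s*\d",
    "priorities": r"\bpriority\s*\d|\bstart with\b",
    "fallbacks": r"\bquick wins?\b|\bif time permits\b|\bMVP\b",
}

def lint_spec(text: str) -> list[str]:
    """Return the names of the enforcement checks a specification fails."""
    failures = [name for name, pattern in FORBIDDEN.items()
                if re.search(pattern, text, re.IGNORECASE)]
    # Simplified stand-in for "ends with 'Build it' or equivalent".
    if not text.rstrip().endswith("Build it."):
        failures.append("imperative close")
    return failures
```

A spec containing "Phase 1 (Months 1-3)" fails the calendar-language check; a clean spec ending in "Build it." returns an empty list and may be handed off.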

### Context Packaging

Include with every specification:

1. **Platform context block:**

   ```markdown
   ## Platform Context

   - Custom objects: Deliverable (2-18484424), Interest (2-54301725), Investment (2-53994804) only
   - Everything else is native
   - Forbidden: hardcoded config, calendar phases, hubspot-list-properties on complex objects
   - HubSpot is source of truth
   ```

2. **Validation gate summary:**

   ```markdown
   ## Gates to Pass

   1. [Gate name]: [One-line description]
   2. [Gate name]: [One-line description]
   ...
   ```

3. **Handoff statement:**

   ```markdown
   ---

   This is the complete target architecture. Validation gates define completion. Execute.
   ```

### Anti-Patterns in Specifications

Do not include:

| Anti-Pattern | Why It's Wrong |
|--------------|----------------|
| "Getting Started" section with numbered steps | Implies sequence, invites partial completion |
| "Suggested approach" | AI determines approach |
| "You might want to start with" | Human-paced thinking |
| "This could be phased as" | Invites scope reduction |
| "Success metrics" | Unless explicitly requested |
| "Timeline estimate" | Calendar-based thinking |

## Claude Code → Opus

When Claude Code reports status or requests review.

### Status Report Format (Required)

```markdown
# [Project Name] Status

## Gate Status

| Gate | Status | Evidence |
|------|--------|----------|
| [Gate 1] | PASS/FAIL | [Specific evidence] |
| [Gate 2] | PASS/FAIL | [Specific evidence] |
...

## Deviations from Specification

[Any architectural decisions that differ from spec. If none, state "None."]

## Blockers

[Genuine technical obstacles. Not scope questions, not "what should I prioritize."]

## Questions Requiring Human Judgment

[Only questions that genuinely require Chris's input. Not "should I do X or Y first."]
```
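The Gate Status table can be generated mechanically from executable checks rather than written by hand. A minimal sketch, where each check is a callable returning a pass/fail flag plus one line of evidence; the gate names and evidence strings below are illustrative:

```python
def gate_status(gates) -> str:
    """Render the Gate Status table from (name, check) pairs,
    where each check() returns (passed: bool, evidence: str)."""
    rows = ["| Gate | Status | Evidence |",
            "|------|--------|----------|"]
    for name, check in gates:
        passed, evidence = check()
        rows.append(f"| {name} | {'PASS' if passed else 'FAIL'} | {evidence} |")
    return "\n".join(rows)
```

In practice each check should shell out to the actual verification command and capture its output as evidence; hard-coding the evidence string defeats the purpose of the gate.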

### Report Enforcement

Status reports must NOT include:

| Forbidden | Why |
|-----------|-----|
| Progress percentages | Implies partial credit |
| "Quick wins achieved" | Fallback framing |
| "Completed Phase 1" | Calendar/phase thinking |
| "Working on Priority 2" | Sequential prioritization |
| Time estimates | Calendar-based thinking |
| "Should I do X next?" | AI determines execution order |

### Blockers vs. Non-Blockers

**Blocker (include in report):**

- Technical impossibility: "HubSpot API doesn't support X"
- Missing access: "Need credentials for Y"
- Specification ambiguity: "Spec says A but also implies B"

**Not a blocker (do not include):**

- "Should I do X or Y first?" → AI decides
- "This is taking longer than expected" → Irrelevant
- "I'm not sure of the best approach" → Determine and execute
- "Should we reduce scope?" → No

## Opus Review Process

When Opus reviews Claude Code implementation.

### Review Protocol

1. Load original specification
2. Run validation gates against implementation
3. Check for bias drift:
   - Custom objects beyond Deliverable/Interest/Investment?
   - Hardcoded configuration?
   - CRM patterns instead of platform patterns?
4. Output: APPROVED or corrections

### Correction Format

Corrections are targeted, not full re-specifications:

```markdown
## Correction Required

**Issue:** [Specific problem]

**You assumed:** [What Claude Code did]

**Correct approach:** [What should happen instead]

**Verification:** Run Gate [X] to confirm correction. Expected result: [specific outcome]
```

### Do Not

- Re-explain the entire specification
- Provide multiple options
- Soften with "you might consider"
- Suggest phased corrections

### Approval Format

```markdown
## APPROVED

All validation gates pass.

[Optional: specific commendation for architectural decisions]
```

## Adversarial Subagent Review

When dispatching subagents for complex implementation tasks, use adversarial review to catch self-serving optimism. The implementer and reviewer must be separate agents with deliberately different incentives.

### When to Use

- Multi-file changes (3+ files modified)
- HubSpot data operations (creates, updates, bulk operations)
- Portal/website builds before production push
- Any work that will be hard to reverse once deployed

### Implementer Prompt Template

When dispatching a subagent to implement:

```markdown
## Task
[Full task description — never make the subagent read a file to understand the task]

## Constraints
[Architecture rules, forbidden patterns, custom object limits]

## Verification Required
Before reporting completion, you MUST:
1. Run every verification command listed below
2. Include the FULL output in your report
3. Flag any warnings, not just errors

## Verification Commands
[Specific commands: build, test, git diff, etc.]

Report format: What you did → What you verified → What the output showed → Any concerns
```

### Reviewer Prompt Template

When dispatching a separate subagent to review the implementer's work:

```markdown
## Role
You are reviewing work completed by another agent. Your job is adversarial verification.

## Assumption
The implementer's report may be incomplete, inaccurate, or optimistic.
DO NOT trust their claims. Verify independently.

## Review Protocol
1. Read the ACTUAL files that were modified (not the implementer's description of them)
2. Run the verification commands yourself — do not rely on the implementer's output
3. Check for enforcement violations:
   - Forbidden language (leads, funnel, conversion, nurture, prospects)
   - Hardcoded configuration (IDs, slugs in code)
   - Custom objects beyond Deliverable/Interest/Investment
   - Calendar-based phases or sequential prioritization
4. Verify the implementation matches the specification — not just "compiles" but "does what was asked"

## Output Format
For each claim in the implementer's report:
- CONFIRMED: [evidence from your own verification]
- DISPUTED: [what they claimed vs. what you found]
- UNVERIFIED: [claims you could not independently check, and why]

## Critical
If you find yourself agreeing with everything the implementer said, be suspicious.
Re-read the specification. Check edge cases. Look for what's missing, not just what's present.
```

### Dispatch Pattern

1. Dispatch implementer subagent with full task + constraints
2. Receive implementer report
3. Dispatch reviewer subagent with:
   - The original specification (NOT the implementer's summary)
   - The implementer's report (for adversarial comparison)
   - Access to the actual codebase
4. Receive reviewer report
5. If DISPUTED items exist → fix before claiming completion
6. If all CONFIRMED → proceed with verification-before-completion gate
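The dispatch loop above can be sketched in code. This is a minimal sketch, not the actual orchestration mechanism: `dispatch_subagent` is a hypothetical callable (prompt in, report text out) standing in for whatever agent-dispatch facility is in use, and the inline prompts are abbreviated versions of the templates above:

```python
def run_with_adversarial_review(spec: str, dispatch_subagent, max_rounds: int = 3) -> str:
    """Implement, then adversarially review; repeat until no claim is disputed.

    `dispatch_subagent` is a hypothetical callable: prompt in, report text out.
    """
    for _ in range(max_rounds):
        impl_report = dispatch_subagent(
            f"## Task\n{spec}\n\n"
            "## Verification Required\n"
            "Run every verification command and include the FULL output."
        )
        review = dispatch_subagent(
            "## Role\nAdversarial reviewer. Do not trust the implementer's claims.\n\n"
            f"## Specification\n{spec}\n\n"        # original spec, never the summary
            f"## Implementer Report\n{impl_report}\n\n"
            "Mark each claim CONFIRMED, DISPUTED, or UNVERIFIED."
        )
        if "DISPUTED" not in review:               # nothing disputed: completion gate
            return impl_report
        # Disputed items exist: fold the review into the spec and retry.
        spec = f"{spec}\n\n## Corrections Required\n{review}"
    raise RuntimeError("Disputed items remain after max review rounds")
```

Note that the reviewer receives the original specification and the implementer's report as separate inputs, matching step 3 of the dispatch pattern.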

### Anti-Sycophancy in Review

The reviewer must NOT:

- Thank the implementer or express appreciation
- Say "great work" or "well done" before flagging issues
- Soften criticism with "minor suggestion" when it's an actual problem
- Agree with the implementer's self-assessment without independent evidence

The reviewer MUST:

- State findings directly: "This is wrong because..." or "This is correct because..."
- Push back when the specification is not met, even if the code is "close"
- Flag missing items, not just incorrect items
- Distinguish between "works" and "meets specification"

## Frame Maintenance

Both Opus and Claude Code must maintain AI-native frame throughout handoff.

### Opus Responsibilities

- Generate specifications in correct format
- Include platform context
- Define validation gates
- Review without introducing human-paced thinking
- Correct with targeted fixes, not scope reduction

### Claude Code Responsibilities

- Execute complete specification
- Determine execution order autonomously
- Report gate status, not progress percentages
- Flag genuine blockers only
- Never ask "what should I do first"

### Shared Responsibilities

- No calendar-based language
- No fallback framing
- No scope hedging
- Complete target state, not incremental delivery
- Gates pass or fail; no partial credit

## Quick Reference

### Specification Must Have

- Target State (complete)
- Architectural Dependencies (technical)
- Validation Gates (executable)
- Constraints (do/don't)
- Imperative close

### Specification Must NOT Have

- Calendar phases
- Sequential priorities
- Fallback options
- "Getting Started" steps
- Time estimates

### Status Report Must Have

- Gate status with evidence
- Deviations (if any)
- Genuine blockers (if any)

### Status Report Must NOT Have

- Progress percentages
- Phase completion
- Time references
- "What next?" questions