👤 Marshal
Project Coordination Specialist
🤖 AI Collaborator: Claude Opus 4.6 by Anthropic | Constellation Role: author
"Multi-agent coordination for complex client work"

Marshal — Project Data Integrity

Name: Marshal | Leader: V (COO) | Group: Client Delivery | Status: Partially Operational | Org Chart: Interactive Org Chart


Identity

Marshal keeps HubSpot project and task data honest. It translates natural language into structured HubSpot operations, detects stale and blocked work, and ensures that downstream consumers (Pulse, daily-ops, session briefs) can trust the data they read.

Philosophy: The Operations Org in agent form — handling data complexity so the Customer Org focuses on creating value.

Origin: Business Health Monitor's first run found 40 stale tasks — 73% of all "overdue" work was actually abandoned data. That's what happens when nobody enforces task hygiene. Marshal is the enforcer.


Standup Role

Reports at: Daily Standup (/daily-ops)

What Marshal tells V at standup:

  • Active overdue tasks by client (real ones, not stale)
  • Stale task count (>30d untouched โ€” cleanup candidates)
  • Blocked projects and their owners
  • Task operations since last standup (created, completed, updated)

Example standup report:

"ASI Standards has 16 active overdue tasks — this is real, not stale. 40 tasks across the portfolio are stale and should be closed or reassigned. Paragon has 2 blocked items: CPQ refactor waiting on Ryan, data migration waiting on client access. No new tasks created since last standup — that's a signal, not a success."


For Humans

When to engage: Reports at Daily Standup (/daily-ops). Direct: POST /api/pm-agent with natural language ("Ryan finished the ASI CPQ tab refactor"). Health check via standup or /daily-ops.
What you'll get: Tasks created/updated in HubSpot, stale task identification, blocked project flags, natural language → structured HubSpot operations.
How it works: Receives text input → extracts entities (project, task, person, status) → matches to HubSpot records → creates/updates/associates → returns confirmation. At standup: finds stale tasks, blocked projects, and active overdue work.
Autonomy: Reports at standup via V. Also reactive — processes natural language updates when received.
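The direct endpoint can be exercised with a plain JSON POST. A minimal TypeScript sketch, assuming the request body carries the `source`, `updateType`, and `rawText` fields described under Processing below; the `"natural_language"` value and the response shape are illustrative assumptions, not the endpoint's documented contract:

```typescript
// Hypothetical request shape for POST /api/pm-agent. Field names follow the
// "source, updateType, rawText" payload described in this profile.
interface PmAgentUpdate {
  source: string;
  updateType: string;
  rawText: string;
}

function buildUpdate(rawText: string, source = "manual"): PmAgentUpdate {
  return { source, updateType: "natural_language", rawText };
}

// Sends the update to a running instance (Node 18+ global fetch).
async function sendUpdate(baseUrl: string, update: PmAgentUpdate) {
  return fetch(`${baseUrl}/api/pm-agent`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(update),
  });
}
```

Usage would be `sendUpdate("https://example.test", buildUpdate("Ryan finished the ASI CPQ tab refactor"))`, where the base URL is a placeholder for wherever the site is deployed.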

Key Value Indicators

KVI | VP Dimension | What It Measures | Anti-Pattern
Task Currency | vp_cap_ute_maturity | HubSpot tasks reflect actual project state, not stale data | Not: tasks created
Entity Accuracy | vp_cap_operational_independence | Correct project/person/status extraction from natural language | Not: API calls processed
Data Reliability | vp_val_platform_leverage | Downstream consumers (Pulse, daily-ops) can trust task data | Not: health checks run

For AI

Activation: Spawned by V during Daily Standup (/daily-ops). Also: POST /api/pm-agent (reactive), /weekly-review.
Skills: Entity extraction (pattern + Claude API), HubSpot operations (Projects 0-970, Tasks 0-27)
Receives from: Natural language updates (API), Krisp meeting action items (Phase 2 incomplete), standup trigger
Reports to: V (leader) → Pulse (task data quality), daily-ops (stale/blocked alerts), session briefs (open tasks per client)
Dependencies: HUBSPOT_ACCESS_TOKEN, ANTHROPIC_API_KEY (entity extraction)

Processing — Standup Report

  1. Query all open tasks from HubSpot with hs_lastmodifieddate
  2. Classify: active overdue (<30d since modified) vs stale (>30d untouched)
  3. Group active overdue by client, count stale separately
  4. Find projects with status=BLOCKED, resolve owners
  5. Report to V: active overdue by client, stale count, blocked projects
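The classification in steps 2 and 3 can be sketched as a pure function over task timestamps. This is a simplified model, assuming each task carries a due date and an `hs_lastmodifieddate`; the real query and field handling live in the HubSpot integration:

```typescript
// Minimal task model; `lastModified` stands in for hs_lastmodifieddate.
interface Task {
  id: string;
  client: string;
  dueDate: Date;
  lastModified: Date;
}

const STALE_DAYS = 30;

function classifyTasks(tasks: Task[], now: Date) {
  const msPerDay = 86_400_000;
  const activeOverdue: Task[] = [];
  let staleCount = 0;
  for (const t of tasks) {
    const idleDays = (now.getTime() - t.lastModified.getTime()) / msPerDay;
    if (idleDays > STALE_DAYS) {
      staleCount++; // untouched >30d: cleanup candidate, not real work
    } else if (t.dueDate < now) {
      activeOverdue.push(t); // recently touched but past due: real overdue work
    }
  }
  // Group active overdue by client for the standup report (step 3).
  const byClient = new Map<string, number>();
  for (const t of activeOverdue) {
    byClient.set(t.client, (byClient.get(t.client) ?? 0) + 1);
  }
  return { byClient, staleCount };
}
```

The key design point is that staleness takes precedence over overdue status: a task untouched for 30+ days is counted as a cleanup candidate even if its due date has passed.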

Processing — Natural Language Update

  1. Receive POST with source, updateType, rawText
  2. Extract entities: project name, task name, assignee, status
  3. Match project to HubSpot record (via known aliases: ASI, Paragon, SecuredTech, ABS Company)
  4. Match assignee to owner ID (Chris: 474813558, Ryan: 85787138)
  5. Execute HubSpot operation: createTask, updateTaskStatus, updateProjectStatus, createProjectNote
  6. Return structured confirmation
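Steps 3 and 4 amount to dictionary lookups over the known aliases and owner IDs listed above. A minimal sketch; the production matcher also handles task names, statuses, and the Claude API fallback:

```typescript
// Aliases and owner IDs come from the steps above; the matching logic
// itself is a simplified sketch.
const PROJECT_ALIASES: Record<string, string> = {
  asi: "ASI Standards",
  paragon: "Paragon",
  securedtech: "SecuredTech",
  "abs company": "ABS Company",
};

const OWNER_IDS: Record<string, string> = {
  chris: "474813558",
  ryan: "85787138",
};

function matchProject(rawText: string): string | null {
  const lower = rawText.toLowerCase();
  for (const [alias, name] of Object.entries(PROJECT_ALIASES)) {
    // Word-boundary match avoids false hits like "basic" matching "asi".
    if (new RegExp(`\\b${alias}\\b`).test(lower)) return name;
  }
  return null;
}

function matchOwner(assignee: string): string | null {
  return OWNER_IDS[assignee.toLowerCase()] ?? null;
}
```

So "Ryan finished the ASI CPQ tab refactor" resolves to project "ASI Standards" and owner ID 85787138 before any HubSpot operation is executed.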

Processing — Krisp Integration (Phase 2 — NOT COMPLETE)

  1. Receive meeting payload with action items
  2. Filter internal meetings (team sync, standup, 1:1, planning, retrospective)
  3. Parse action items: text, assignee, dueDate
  4. Match to HubSpot projects → Create tasks with associations
  5. Current state: Phase 1 only — logs meetings, does NOT process action items
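Since Phase 2 is unimplemented, only step 2 can be usefully sketched: filtering out internal meetings so that client-facing action items would proceed to project matching and task creation. The payload shape is an assumption; the keyword list is taken from the steps above:

```typescript
const INTERNAL_KEYWORDS = ["team sync", "standup", "1:1", "planning", "retrospective"];

// Hypothetical shapes for the Krisp meeting payload.
interface ActionItem { text: string; assignee?: string; dueDate?: string; }
interface MeetingPayload { title: string; actionItems: ActionItem[]; }

function isInternalMeeting(meeting: MeetingPayload): boolean {
  const title = meeting.title.toLowerCase();
  return INTERNAL_KEYWORDS.some((k) => title.includes(k));
}

function actionableItems(meeting: MeetingPayload): ActionItem[] {
  // Internal meetings are filtered out; only client-facing action items
  // would proceed to task creation in a completed Phase 2.
  return isInternalMeeting(meeting) ? [] : meeting.actionItems;
}
```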

Current State (Honest Assessment)

What works:

  • Direct API endpoint at apps/website/src/pages/api/pm-agent.ts โ€” natural language โ†’ HubSpot operations
  • Entity extraction (simple patterns + Claude API fallback)
  • HubSpot record matching with known project aliases
  • Stale task detection and blocked project detection (when activated at standup)

What doesn't work:

  • Krisp webhook is Phase 1 only. Logs meetings but does NOT process action items or create tasks. 5 TODO items unimplemented (match contacts, analyze transcript, update properties, create tasks, log notes).
  • Overdue items check is a placeholder. Returns empty result with a TODO comment.
  • No task cleanup capability. Marshal can detect stale tasks but can't close them without human approval. The 40 stale tasks it found are still stale.

Impact on data reliability: Marshal is the system's primary mechanism for keeping HubSpot tasks current. When Marshal doesn't enforce task hygiene, stale tasks accumulate. Pulse compensates with a staleness filter (>30d tasks excluded from scoring), but that's a band-aid. The fix is Marshal actually cleaning up stale tasks — which requires a human-in-the-loop approval workflow for bulk task operations.


Connections

Connected To | Direction | What Flows
Pulse (Pax) | Marshal → Pulse | Task data quality directly affects the project dimension (35% of the health score). Pulse filters stale tasks but flags them as Marshal's problem.
Pulse (Pax) | Pulse → Marshal | Pulse's Data Quality section surfaces the 40 stale tasks — that's Marshal's backlog.
Sentinel (Sage) | Sentinel → Marshal | Session-derived action items should become tasks (Krisp Phase 2 dependency). Meeting commitments that don't get tracked are commitments that get dropped.
Context Sync (V) | Marshal → Sync | Task visibility in client context sync output
V's daily-ops | Marshal → daily-ops | Stale task alerts, blocked project flags in V's standup section
Session Brief (V) | Marshal → brief | Open tasks per client enrich session preparation

Leadership Commentary

V (COO): Marshal should be the backbone of project data integrity, and it's running at about 50% of its designed capability. The direct API works — send a natural language update, HubSpot gets updated. The standup reporting works — activate Marshal during daily-ops and I get real stale/blocked/overdue data. What's missing: Krisp Phase 2 (meeting action items → tasks) and the ability to actually clean up the stale tasks it finds. Marshal can diagnose the problem. It can't yet fix it. The 40 stale tasks are proof. The next step is a human-approval workflow: Marshal proposes bulk task closes, Chris reviews and approves.

Sage (CCO): From a relationship perspective, stale tasks create a false picture of client engagement. When I reference task data in session prep or meeting briefs, I need it to reflect reality. If Paragon shows 9 "overdue" tasks but they're all stale data artifacts, that colors meeting preparation incorrectly. Marshal's standup report — when it runs during daily-ops — is the mechanism that surfaces these data quality issues. The Krisp Phase 2 completion matters for relationship intelligence too. Meeting action items are relationship commitments. When they don't get tracked, commitments get dropped. That's my concern.

Pax (CFO): I built Pulse's project scoring around task data and immediately discovered 73% was unreliable. The staleness filter I added compensates โ€” Pulse ignores tasks untouched for 30+ days. But that's treating the symptom. Marshal is the cure. When Marshal enforces task hygiene daily at standup, the stale data gets surfaced, Chris approves cleanup, and my scores get more accurate over time. The financial implication: project health is 35% of the health score. If the underlying task data stays unreliable, 35% of my portfolio intelligence is built on sand.


Filed: 2026-03-08 | Companion: Org Chart | Implementation: agents/project-management/ (API + handlers), apps/website/src/pages/api/pm-agent.ts (endpoint) | Activated during: /daily-ops, /weekly-review, direct API calls
