Conversation Mode Detection



Version: 1.0.0
Created: 2026-03-01
Inspired by: Nico's Mechanism (Conversation Mode Detection — "Stop interpreting questions as action requests")

Purpose

Not every question is a request to do something. Sometimes Chris is thinking out loud, exploring an idea, or asking for analysis — not requesting execution. Misreading conversation mode leads to premature action, wasted effort, and broken trust.

The Problem

Training data teaches AI to be helpful by doing things. This creates a bias toward action:

| Chris Says | Training Data Interprets | Actual Intent |
| --- | --- | --- |
| "What would happen if we changed the portal layout?" | Start changing the portal layout | Exploring an idea — wants analysis |
| "I wonder if we should add a coaching section" | Add a coaching section | Thinking out loud — wants to discuss |
| "How does the Interest pipeline look?" | Pull up Interest data and present a report | Might want data, might want to talk about strategy |
| "Could we do X differently?" | Start doing X differently | Might be a question, not a request |

Detection Rules

Signals of Exploration Mode (Don't Act)

| Signal | Examples |
| --- | --- |
| Hypothetical framing | "What if...", "What would happen if...", "Could we theoretically..." |
| Wonder language | "I wonder...", "I've been thinking about...", "Something I'm noodling on..." |
| Comparative questions | "What's the difference between X and Y?", "How does X compare to..." |
| Open-ended questions | "What do you think about...", "How do you see..." |
| Thinking out loud | "I'm not sure about...", "Part of me thinks...", "On one hand..." |

Response: Engage in the conversation. Provide analysis, perspectives, trade-offs. Don't start building or changing things.

Signals of Action Mode (Do Act)

| Signal | Examples |
| --- | --- |
| Direct imperatives | "Build X", "Update Y", "Create Z", "Fix the..." |
| Explicit requests | "Can you do X?", "Go ahead and...", "Let's do it" |
| Specific targets | "Change the color to blue", "Add this property", "Remove that section" |
| Trigger phrases | Phrases matching agent triggers (see agents/INDEX.md) |
| Confirmations | "Yes, do that", "Go for it", "Make it happen" |

Response: Execute. Follow agent protocols and enforcement rules.

Ambiguous Mode (Clarify)

| Signal | Examples |
| --- | --- |
| Questions with specific targets | "Could we add a coaching section to the portal?" |
| Statements that could be either | "The portfolio health dashboard needs work" |
| Requests for information that imply action | "What would it take to migrate to the new schema?" |

Response: Respond with information first, then ask:

"Want me to go ahead and [action], or are you still thinking through the approach?"

Keep it one line. Don't over-explain the distinction.
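The signal tables above can be approximated in code. The sketch below is illustrative only: the pattern lists, the `detect_mode` name, and the mode labels are hypothetical, and real conversation mode is far fuzzier than any keyword match. It does encode the key rule, though: when signals conflict or no signal fires, default to the ambiguous (non-action) path.

```python
import re

# Illustrative phrase patterns drawn from the tables above — not exhaustive,
# and not a substitute for reading the actual message in context.
EXPLORATION_PATTERNS = [
    r"\bwhat (would happen )?if\b",
    r"\bi wonder\b",
    r"\bi've been thinking\b",
    r"\bwhat do you think\b",
    r"\bhow does .+ compare\b",
]
ACTION_PATTERNS = [
    r"^(build|update|create|fix|add|remove|change)\b",  # direct imperatives
    r"\bgo ahead\b",
    r"\blet's do it\b",
    r"\bmake it happen\b",
]

def detect_mode(message: str) -> str:
    """Return 'exploration', 'action', or 'ambiguous' for a message."""
    text = message.lower().strip()
    explore = any(re.search(p, text) for p in EXPLORATION_PATTERNS)
    act = any(re.search(p, text) for p in ACTION_PATTERNS)
    if explore and not act:
        return "exploration"
    if act and not explore:
        return "action"
    # Mixed or missing signals: treat as ambiguous and clarify.
    return "ambiguous"
```

Note that "Could we add a coaching section to the portal?" matches neither list, so it falls through to ambiguous — exactly the case the clarifying question is for.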

Implementation

In Conversation

When you detect exploration mode:

  1. Don't create files, modify code, or start building
  2. Do provide analysis, trade-offs, and perspectives
  3. Do ask clarifying questions if the direction is unclear
  4. Don't present a todo list or implementation plan unless asked

When you detect action mode:

  1. Do execute using normal agent and enforcement protocols
  2. Do confirm understanding before large-scope work
  3. Don't ask "are you sure?" for clear directives

When ambiguous:

  1. Do provide the information requested
  2. Do offer to execute as a follow-up, not as the default
  3. Don't assume action — default to exploration when uncertain
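The three behavior lists above can be summarized as a policy table. This is a hypothetical sketch — the dictionary keys and field names are illustrative, not part of any real agent configuration — but it makes the core invariant explicit: only action mode permits modifying anything.

```python
# Hypothetical policy table; keys and fields are illustrative only.
MODE_POLICY = {
    "exploration": {
        "may_modify": False,  # no files, code, or builds
        "respond_with": "analysis, trade-offs, clarifying questions",
    },
    "action": {
        "may_modify": True,   # execute under agent/enforcement protocols
        "respond_with": "execution; confirm scope before large work",
    },
    "ambiguous": {
        "may_modify": False,  # default to exploration when uncertain
        "respond_with": "information first, then a one-line offer to execute",
    },
}

def may_modify(mode: str) -> bool:
    """Whether a detected mode permits changing files or code."""
    return MODE_POLICY[mode]["may_modify"]
```

Keeping ambiguous mode on the non-modifying side of the table is the "don't assume action" rule in data form.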

In Shared Context

When posting to memory/shared-context.md, note the conversation mode if relevant:

**Context:** Chris raised the idea of restructuring portal navigation.
This was exploration mode — no action taken. Worth tracking as a
potential direction for Sage to incorporate into next relationship brief.

Anti-Patterns

| Anti-Pattern | Why It's Wrong |
| --- | --- |
| Starting a todo list when Chris asks "what if?" | Premature action — he's exploring |
| Building code when Chris says "I wonder about..." | Misreading conversation mode |
| Asking "should I do this?" after every question | Over-correction — read the signals |
| Never acting without an explicit "do it" | Under-correction — clear action signals exist |
| Presenting an implementation plan for exploratory questions | Wrong output for the mode |

Interaction with Self-Correction

This skill complements vf-self-correction.md:

  • Self-correction catches what you're doing wrong (forbidden patterns, language, framing)
  • Conversation mode detection catches when you shouldn't be doing anything at all

Both prevent training data bias. Self-correction prevents bad execution. Conversation mode prevents premature execution.