
vt-c-ralph-wiggum-loop

[DEPRECATED] Convergence-based verification — core logic moved to /vt-c-3-build Step 6.5a (SPEC-108)

Plugin: core-standards
Category: Other
Command: /vt-c-ralph-wiggum-loop


ralph-wiggum-loop Skill

DEPRECATED: The core convergence verification logic has been integrated into /vt-c-3-build Step 6.5a (SPEC-108). Tests are now automatically run before the build gate is marked COMPLETE. This standalone skill is retained for reference but should not be invoked directly.

Purpose: Enforce convergence-based success - work is only "done" when verification passes.

Iron Law

NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE

Core Concept

The Accountability Inversion: Human-defined criteria decide completion, not agent promises.

Convergence-Based Success: Repeated loops until verification passes, not first-pass success.

How It Works

  1. User defines verification criteria upfront (e.g., "Tests must pass")
  2. Agent executes work
  3. Wrapper runs verification commands
  4. If fail: Loop back to fix issues
  5. If pass: Emit completion
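The loop above can be sketched in shell. The function and variable names here are illustrative, not the skill's actual interface, and the agent work step is omitted:

```shell
# Sketch of the convergence loop: run verification, retry on failure,
# stop at the safety limit. POSIX sh; names are illustrative.
run_convergence_loop() {
  verify_cmd="$1"          # e.g. "bin/rails test"
  max_iterations="${2:-5}" # safety limit from the spec
  loop_count=0
  while [ "$loop_count" -lt "$max_iterations" ]; do
    loop_count=$((loop_count + 1))
    # Step 2: agent executes work here (omitted in this sketch)
    # Step 3: run verification, capturing combined output
    if output=$($verify_cmd 2>&1); then
      echo "PASS after $loop_count iteration(s)"   # Step 5: emit completion
      return 0
    fi
    echo "Verification failed (iteration $loop_count): $output"  # feed back, loop
  done
  echo "Max iterations ($max_iterations) exceeded; requesting user guidance"
  return 1
}
```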

Verification Commands Database

Load from: references/verification-commands.md

Supports automatic detection for:

  • Ruby/Rails: bin/rails test, bundle exec rspec, rubocop
  • JavaScript/Node: npm test, npm run lint, npm run build
  • Python: pytest, mypy
  • Go: go test ./...

Max Iterations

Safety limit: 5 iterations

If exceeded: Present failure summary, ask for user guidance

Execution Steps

Step 1: Load Verification Criteria

Detect from project files:

  • Ruby/Rails: Rakefile, bin/rails test
  • JavaScript/Node: package.json scripts.test
  • Python: pytest.ini, setup.py test
  • Go: go.mod with go test

Or use user-specified criteria.
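A hedged sketch of that detection, assuming the project files listed above are the only signals checked (the function name is illustrative):

```shell
# Pick a verification command from marker files in the current directory.
# Prints nothing when no marker is found, so the caller can warn the user.
detect_verify_cmd() {
  if [ -f Rakefile ] || [ -x bin/rails ]; then
    echo "bin/rails test"
  elif [ -f package.json ] && grep -q '"test"' package.json; then
    echo "npm test"
  elif [ -f pytest.ini ] || [ -f setup.py ]; then
    echo "pytest"
  elif [ -f go.mod ]; then
    echo "go test ./..."
  fi
}
```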

Step 2: Execute Work

Allow wrapped task to execute normally.

Step 3: Run Verification

# Example: Rails tests (run once; capture output and exit code together)
OUTPUT=$(bin/rails test 2>&1)
EXIT_CODE=$?

Step 4: Analyze Results

Pass Criteria:

  • Exit code == 0
  • Output does NOT contain failure keywords

Fail Criteria:

  • Exit code != 0
  • OR output contains: "failures", "errors", "failed"
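These criteria can be expressed as a small shell predicate (the function name is illustrative):

```shell
# Returns 0 only when the exit code is 0 AND the output contains none of
# the failure keywords listed above. Note: a real implementation should
# avoid matching benign summaries such as "0 failures, 0 errors".
verification_passed() {
  exit_code="$1"
  output="$2"
  [ "$exit_code" -eq 0 ] || return 1
  echo "$output" | grep -qiE 'failures|errors|failed' && return 1
  return 0
}
```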

Step 5: Loop or Complete

If PASS:

  • Log to metrics/verification-loops.json
  • Emit completion message
  • Exit loop

If FAIL:

  • Extract failure details
  • Present to agent: "Verification failed: {details}"
  • Loop back to Step 2
  • Increment loop_count

Step 6: Track Metrics

# Log verification result
Read: metrics/verification-loops.json

# Append entry
{
  "timestamp": "{current}",
  "command": "bin/rails test",
  "loop_count": 2,
  "final_status": "passed",
  "false_claims_prevented": 1
}

Write: metrics/verification-loops.json
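One way to implement the read/append/write above is with jq; jq is an assumption here, since the skill itself does not mandate a specific JSON tool, and the function name is illustrative:

```shell
# Append one verification entry to a JSON-array metrics file.
# Creates the file as an empty array on first use.
log_verification() {
  metrics_file="$1"
  [ -f "$metrics_file" ] || echo '[]' > "$metrics_file"
  jq --arg ts "$(date -u +%FT%TZ)" \
     --arg cmd "$2" \
     --argjson loops "$3" \
     --arg status "$4" \
     '. + [{timestamp: $ts, command: $cmd, loop_count: $loops, final_status: $status}]' \
     "$metrics_file" > "$metrics_file.tmp" && mv "$metrics_file.tmp" "$metrics_file"
}
```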

Integration Points

Invoked by:

  • workflows:work with --autonomous flag
  • Can wrap any command execution
  • Manual invocation for specific verification needs

Output:

  • Verification status (PASS/FAIL)
  • Loop count
  • Failure details if applicable
  • Metrics logged to JSON

Success Criteria

  • ✅ Verification runs after work
  • ✅ Loop continues on failure
  • ✅ Max iterations prevents infinite loops
  • ✅ Metrics logged for ROI tracking
  • ✅ Clear failure feedback to agent

Comparison: Native /loop vs ralph-wiggum-loop

Claude Code's native /loop command provides interval-based recurring prompts (e.g., /loop 5m /command). Our skill differs in several key ways:

Feature | Native /loop | ralph-wiggum-loop
--- | --- | ---
Scheduling | Interval-based (cron-like) | Convergence-based (loop until pass)
Completion | Runs indefinitely on interval | Exits when verification passes
Safety limit | None built-in | Max 5 iterations
Metrics | None | Logs to verification-loops.json
False claim prevention | None | Core feature (Iron Law)

Recommendation: Use native /loop for periodic monitoring (e.g., check deploy status). Use ralph-wiggum-loop for convergence verification (e.g., "keep fixing until tests pass"). They complement each other: /loop 5m /vt-c-ralph-wiggum-loop combines interval scheduling with convergence verification.

Error Handling

Missing verification command:

  • Fall back to user-specified command
  • Warn if no verification possible

Infinite loop risk:

  • Hard limit at 5 iterations
  • Prompt user for manual intervention

Verification command failure:

  • Distinguish between test failures and command errors
  • Provide clear error context
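Distinguishing a test failure from a broken command can lean on standard shell exit codes: shells report 127 for command-not-found and 126 for found-but-not-executable. This classifier is a sketch, not part of the skill:

```shell
# Classify a verification exit code so command errors surface as tooling
# problems rather than test failures.
classify_verification_exit() {
  case "$1" in
    0)       echo "pass" ;;
    126|127) echo "command-error" ;;  # missing or non-executable command
    *)       echo "test-failure" ;;   # any other nonzero code
  esac
}
```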