# claudemd-evolve

Plugin: core-standards
Category: Other
Command: `/claudemd-evolve`
## CLAUDE.md Lifecycle Management

Keeps CLAUDE.md lean and current through four subcommands:
| Subcommand | Purpose |
|---|---|
| audit | Measure tokens per section, propose condensations |
| collect | Scan journals/reviews for uncovered patterns |
| propose | Review and apply pending proposals from collect |
| prune | Identify stale or redundant rules for removal |
## Step 0: Locate CLAUDE.md Files

- Set `GLOBAL_CLAUDEMD=~/.claude/CLAUDE.md`
- Set `PROJECT_CLAUDEMD=./CLAUDE.md` (current working directory)
- Read both files. If either does not exist, note it and continue with the one that exists.
- If neither exists, report "No CLAUDE.md files found" and exit.
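As a minimal sketch, the Step 0 lookup could look like this in Python (the function name `locate_claudemd_files` is illustrative, not part of the skill definition):

```python
from pathlib import Path

# Default candidate locations, per Step 0.
GLOBAL_CLAUDEMD = Path.home() / ".claude" / "CLAUDE.md"
PROJECT_CLAUDEMD = Path.cwd() / "CLAUDE.md"

def locate_claudemd_files(candidates=(GLOBAL_CLAUDEMD, PROJECT_CLAUDEMD)):
    """Return the CLAUDE.md files that actually exist, in order."""
    found = [p for p in candidates if Path(p).is_file()]
    if not found:
        print("No CLAUDE.md files found")
    return found
```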
## Step 1: Argument Dispatch

Read the argument passed to the skill:

- `audit` → go to Audit
- `collect` → go to Collect
- `propose` → go to Propose
- `prune` → go to Prune
- No argument or unrecognized → display this help:
```
/claudemd-evolve <subcommand>

Subcommands:
  audit   — Measure token usage per section, propose condensations
  collect — Scan journals and reviews for recurring patterns
  propose — Review and apply pending proposals from collect
  prune   — Identify stale or redundant rules for removal

Examples:
  /claudemd-evolve audit
  /claudemd-evolve collect
  /claudemd-evolve collect --days 60
  /claudemd-evolve propose
  /claudemd-evolve prune
```
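The dispatch in Step 1 is a simple lookup; a minimal sketch, where the handler names are hypothetical stand-ins for the four subcommand flows:

```python
# Hypothetical dispatch table mirroring Step 1. Handler names are
# illustrative only; the skill itself just jumps to the matching section.
HANDLERS = {
    "audit": "run_audit",
    "collect": "run_collect",
    "propose": "run_propose",
    "prune": "run_prune",
}

def dispatch(argument):
    """Map a subcommand to its handler, falling back to the help text."""
    return HANDLERS.get((argument or "").strip().lower(), "show_help")
```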
## Audit

Measures CLAUDE.md token usage and proposes condensations for sections that can be made more concise without losing rule coverage.
### Audit Step 1: Token Measurement

- Read the global CLAUDE.md file
- Split content by `##` headings into sections (each `##` starts a new section)
- For each section:
  a. Write the section content to a temp file
  b. Run `wc -c < tempfile` to get the character count
  c. Compute estimated tokens: `chars ÷ 4`
- Display a per-section breakdown:
```
CLAUDE.md Token Audit: ~/.claude/CLAUDE.md
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Section                                         Tokens
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
## Part 1: Universal Standards                     450
## Part 2: Engineering Standards                   320
## Part 3: Git Workflow                            280
## Part 4: Workflow Commands                        60
## Part 5: Deployed Skills                         150
## Before Every Response                            40
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total                                            1,300
Budget                                           2,500
Status                                   WITHIN BUDGET
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
- If a project CLAUDE.md exists, repeat for that file (no fixed budget — report only)
- EC-4: If total is under 2,500 tokens → report "Within budget" and skip condensation proposals. Ask user if they want to audit the project CLAUDE.md instead, or exit.
- If over budget → proceed to Audit Step 2.
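Audit Step 1's section splitting and `chars ÷ 4` estimate can be sketched in Python (using `len` in place of `wc -c`; the function name is illustrative):

```python
def audit_sections(markdown, chars_per_token=4):
    """Split a CLAUDE.md body on '##' headings and estimate tokens per
    section as character count // 4, as in Audit Step 1."""
    sections = {}
    current = "(preamble)"
    buf = []
    for line in markdown.splitlines(keepends=True):
        # '### ' subsections do NOT start new sections; only '## ' does.
        if line.startswith("## "):
            if buf:
                sections[current] = sum(len(l) for l in buf)
            current = line.strip()
            buf = [line]
        else:
            buf.append(line)
    if buf:
        sections[current] = sum(len(l) for l in buf)
    return {name: chars // chars_per_token for name, chars in sections.items()}
```

Summing the values gives the total to compare against the 2,500-token budget.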
### Audit Step 2: Condensation Candidate Identification

For each section over 100 tokens, analyze whether paragraph prose can become concise imperative directives:

- Condensable: Section has 3+ sentence paragraphs explaining rules that can be reduced to single-line directives without losing coverage.
  - Generate a proposed condensed version
  - Calculate token savings
- EC-1 — Irreducible: A rule cannot be shortened without losing meaning. Mark as "irreducible" in the report. Do not propose condensation.
- EC-2 — Overlapping: Two rules overlap or contradict. Flag for manual resolution with both rule texts shown. Do not auto-merge.
- EC-3 — Project-specific in global: A global rule is project-specific. Flag for extraction to the project CLAUDE.md with the specific text.
Display candidates:

```
Condensation Candidates
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[1] ## Part 1: Universal Standards → Rule 2
    Current (42 tokens):
      "If information is missing or ambiguous, ASK for clarification.
       NEVER guess library versions, API formats, implementation details,
       business logic, or user requirements.
       State clearly: 'I need clarification on [specific point]...'
       Ask specific, targeted questions rather than proceeding with assumptions"
    Proposed (18 tokens):
      "Missing info → ASK, never guess. State what you need before proceeding"
    Savings: 24 tokens

[2] ...

Irreducible:
  • Rule 10 (Security) — cannot condense without losing specific checks

Overlapping:
  (none found)

Project-specific in global:
  (none found)
```
### Audit Step 3: Approval Flow

- For each condensation candidate, use AskUserQuestion:
  - Approve — apply this condensation
  - Reject — keep current version
  - Edit — modify the proposed text before applying
- NF-4: Before applying ANY approved edits, create a git snapshot.
- Apply approved condensations via the Edit tool.
- For each condensation applied: if the original text has useful detail, extract it to `docs/claudemd-reference/<section-slug>.md` with a link from the condensed rule.
- Display before/after token counts.
- EC-6: If the user rejects ALL candidates → "No changes applied" and exit.
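NF-4 does not pin down the snapshot mechanics; one plausible sketch, assuming a plain commit of the working tree (the function and its `dry_run` flag are hypothetical):

```python
import subprocess

def git_snapshot(message, repo_dir=".", dry_run=False):
    """Stage everything and commit, so approved edits can be reverted.
    Returns the command list so callers can inspect it before running."""
    cmds = [
        ["git", "-C", repo_dir, "add", "-A"],
        ["git", "-C", repo_dir, "commit", "-m", message],
    ]
    if not dry_run:
        for cmd in cmds:
            # check=False: the commit is allowed to be a no-op
            subprocess.run(cmd, check=False)
    return cmds
```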
## Collect

Scans session journals, review findings, and critical patterns for recurring themes not yet covered by CLAUDE.md rules.
### Collect Step 1: Scan Sources

Scan these sources for learnings and findings:

- Session journals (`docs/vt-c-journal/`):
  - Read entries from the last 30 days (or `--days N` if specified)
  - Extract decision entries, problem entries, and learning entries
- Review gate (`.review-gate.md`):
  - Extract review findings and pattern observations
- Review history (`review-history.md`):
  - Extract recurring finding types across reviews
- Critical patterns (`docs/solutions/patterns/critical-patterns.md`):
  - Extract promoted patterns
### Collect Step 2: Identify Recurring Patterns

A finding is "recurring" when it appears in:

- 2+ separate review sessions, OR
- 3+ journal entries with similar keywords

For each finding:

1. Extract the core lesson or directive
2. Check if CLAUDE.md already covers it (grep for overlapping keywords in section headings and rule text)
3. If already covered → skip
4. If NOT covered → generate a proposal
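The recurrence threshold and the coverage check above can be sketched as (function names are illustrative; the coverage check is a deliberately crude keyword grep, as the step describes):

```python
def is_recurring(review_sessions, journal_hits):
    """Collect Step 2 threshold: 2+ separate review sessions OR
    3+ journal entries with similar keywords."""
    return review_sessions >= 2 or journal_hits >= 3

def already_covered(finding_keywords, claudemd_text):
    """Crude coverage check: does any finding keyword already appear
    in the CLAUDE.md headings or rule text?"""
    text = claudemd_text.lower()
    return any(kw.lower() in text for kw in finding_keywords)
```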
### Collect Step 3: Generate Proposals

For each uncovered recurring pattern, create a proposal:

```yaml
- id: proposal-001
  source: "journal/2026-03-05, journal/2026-03-07, review-gate 2026-03-06"
  pattern: "Repeatedly forgot to check hook compatibility after skill changes"
  proposed_rule: "After modifying a skill, verify related hooks still function correctly"
  rationale: "Pattern appeared 3 times in 1 week — should be a permanent rule"
  target_section: "## Part 2: Engineering Standards"
  token_impact: "+12 tokens"
```
### Collect Step 4: Write or Exit

- EC-5: If no uncovered patterns are found → report this and exit cleanly.
- If proposals are found → write to `docs/claudemd-reference/pending-proposals.md`:

  ```markdown
  # Pending CLAUDE.md Proposals
  Generated: YYYY-MM-DD
  Source scan: N journal entries, M review findings

  ## Proposal 1: [pattern summary]
  - **Source**: [citations]
  - **Proposed rule**: [text]
  - **Rationale**: [why]
  - **Target section**: [where in CLAUDE.md]
  - **Token impact**: [+N tokens]

  ## Proposal 2: ...
  ```

- Report: "Found N proposals. Run `/claudemd-evolve propose` to review and apply."
## Propose

Reviews pending proposals from collect and applies approved ones to CLAUDE.md.

### Propose Step 1: Load Proposals

- Read `docs/claudemd-reference/pending-proposals.md`
- If the file does not exist or is empty → report this and exit.
### Propose Step 2: Present Each Proposal

For each proposal, display:

```
Proposal 1 of N: [pattern summary]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Source:   journal/2026-03-05, review-gate 2026-03-06
Pattern:  [what was observed]
Proposed: [rule text to add]
Section:  [target section in CLAUDE.md]
Impact:   +12 tokens (total would be 2,462/2,500)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
Use AskUserQuestion for each:

- Approve — add this rule
- Reject — skip, keep in pending
- Edit — modify text before adding

US-4: If adding this proposal would push the total over 2,500 tokens, warn:

```
WARNING: Adding this rule would bring total to 2,580/2,500 tokens (+80 over budget).
Consider running /claudemd-evolve audit to condense before adding new rules.
```
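The US-4 budget check is simple arithmetic; a sketch assuming the same 2,500-token budget used by Audit (the function name and return convention are illustrative):

```python
BUDGET = 2500  # global CLAUDE.md token budget used throughout this skill

def budget_warning(current_total, token_impact, budget=BUDGET):
    """US-4: return a warning string if adding a proposal would push the
    total over budget, else None."""
    new_total = current_total + token_impact
    if new_total <= budget:
        return None
    over = new_total - budget
    return (f"WARNING: Adding this rule would bring total to "
            f"{new_total:,}/{budget:,} tokens (+{over} over budget).")
```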
### Propose Step 3: Apply Approved Proposals

- NF-4: Before applying, create a git snapshot.
- Insert approved rules into the appropriate section of CLAUDE.md via the Edit tool.
- Display before/after token counts.
- Move applied proposals from `pending-proposals.md` to `docs/claudemd-reference/applied-proposals.md` with a timestamp.
- Keep rejected proposals in `pending-proposals.md` for future review.
- EC-6: If ALL proposals are rejected → "No changes applied" and exit.
## Prune

Identifies stale, redundant, or superseded CLAUDE.md rules and proposes removal.

### Prune Step 1: Parse Rules

- Read CLAUDE.md
- Parse into individual rules:
  - Each numbered item (e.g., `### 1. Scope Discipline`) is a rule
  - Each `###` subsection within a `##` section is a rule
- Build a list of rules with their text and section context
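Prune Step 1's parse can be sketched as follows (the dict shape is illustrative; the skill only requires rule text plus section context):

```python
def parse_rules(claudemd):
    """Treat each '###' subsection as a rule, tracking the enclosing
    '##' section for context, per Prune Step 1."""
    rules = []
    section = None
    current = None
    for line in claudemd.splitlines():
        if line.startswith("## "):          # new top-level section
            section = line.strip()
            current = None
        elif line.startswith("### "):       # new rule within the section
            current = {"heading": line.strip(), "section": section, "text": ""}
            rules.append(current)
        elif current is not None:           # body text belongs to the rule
            current["text"] += line + "\n"
    return rules
```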
### Prune Step 2: Apply Pruning Criteria

For each rule, check four criteria:

- Stale reference: Rule mentions a specific library, tool, or file path.
  - Grep the codebase for that reference
  - If zero matches → flag as stale with evidence
- Hook duplicate: Rule describes behavior that is enforced by a hook.
  - Read the `.claude/hooks/` directory
  - If a hook enforces the same check → flag as redundant
- One-time resolved: Rule references a specific bug or incident.
  - Check if the architectural fix (code change) makes the rule unnecessary
  - Flag with evidence of the fix
- Superseded: Rule has the same heading prefix or 3+ keyword overlap with another rule.
  - Flag as potentially superseded with the overlapping rule reference
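The "3+ keyword overlap" test for the Superseded criterion can be sketched as shared content words between two rule texts (the stopword list is an assumption; the doc does not specify how keywords are extracted):

```python
import re

# Minimal stopword list; an assumption, not specified by the skill.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "for"}

def keyword_overlap(rule_a, rule_b):
    """Count content words shared by two rule texts."""
    def words(text):
        return set(re.findall(r"[a-z]+", text.lower())) - STOPWORDS
    return len(words(rule_a) & words(rule_b))

def possibly_superseded(rule_a, rule_b):
    """Prune Step 2: 3+ keyword overlap flags a potential supersession."""
    return keyword_overlap(rule_a, rule_b) >= 3
```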
### Prune Step 3: Present Candidates

```
Prune Candidates
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[1] Rule: "### 12. TypeScript Strictness"
    Criterion: Stale reference
    Evidence:  No TypeScript files found in current project
    Savings:   -35 tokens

[2] Rule: "### 10. Security — never commit secrets"
    Criterion: Hook duplicate
    Evidence:  Enforced by .claude/hooks/secret-scanner
    Savings:   -28 tokens

No candidates: 13 rules passed all criteria.
```
### Prune Step 4: Approval and Removal

- Use AskUserQuestion for each candidate:
  - Remove — delete this rule
  - Keep — retain the rule
  - Keep + note — retain with a note about why it stays despite the flag
- NF-4: Before removing, create a git snapshot.
- Remove approved rules via the Edit tool.
- Archive removed rules to `docs/claudemd-reference/archived-rules.md`.
- Display before/after token counts.
- EC-6: If ALL candidates are rejected → "No changes applied, pruning report saved for reference" and exit.