[FEATURE] CLAUDE.md instructions are read but not reliably followed - need enforcement mechanism
## Problem

Instructions in `CLAUDE.md` (and `.claude/*.md` files) are loaded into context at session start, but Claude does not consistently follow them. The model acknowledges that the rules exist and can repeat them back verbatim, yet it drifts toward task completion over process compliance.
This defeats the purpose of project-specific configuration. Users document rules to prevent repeated mistakes, but without enforcement, the same errors recur across sessions.
## User Context: Non-Developer Power User

I'm not a software developer; I'm a real estate professional who has built an 82,000+ line TypeScript/React application almost entirely through Claude Code, often running sessions headlessly without reviewing the code directly.
Before Claude Code, I spent over $150,000 on human developers over several years trying to build this product. I scrapped all of that when I found Claude Code.
This changes the dynamic:
- I can't catch Claude's mistakes by reading the code
- I rely on documented rules being followed as my QA process
- `CLAUDE.md` isn't a nice-to-have; it's how I maintain engineering standards without being an engineer
- Session notes, changelogs, and inline documentation *are* my institutional memory
Most Claude Code users are probably developers using AI to accelerate their work. I'm a domain expert using AI as my entire development team. That requires a higher standard of instruction adherence than the tool currently delivers.
## Specific Examples

In my project, `CLAUDE.md` explicitly requires:
- **TSDoc comments on significant changes** - with the date, what was broken, and why it was fixed (format sketched below)
- **`CHANGELOG.md` updates** - for every significant change
- **Local backup before commits** - to recover from corrupted commits (this rule exists because I lost work when Claude didn't follow it)
- **Use established patterns** - e.g., the `useTableInput` hook instead of inline state for tables
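For concreteness, this is roughly what the TSDoc rule asks for on a bug fix. The component name, hook signature, and import path below are simplified placeholders for illustration, not code from my actual project:

```tsx
// Illustrative only: names, signature, and path are placeholders.
import { useTableInput } from "../hooks/useTableInput"; // hypothetical path

/**
 * Fix: table cell edits were lost whenever the parent re-rendered.
 *
 * 2025-01-18 - What was broken: the cell kept its draft value in inline
 * useState, so any parent re-render reset it mid-edit.
 * Why it was fixed this way: switched to the established useTableInput
 * hook (per CLAUDE.md) so draft state survives re-renders.
 */
export function RateCell({ rowId }: { rowId: string }) {
  const { value, onChange } = useTableInput(rowId);
  return <input value={value} onChange={(e) => onChange(e.target.value)} />;
}
```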
**What happens:**
- Claude reads these instructions (confirmed: it can quote them back verbatim)
- Claude proceeds to fix bugs/add features
- Claude skips the documentation steps
- Claude creates duplicate code instead of using established patterns
- I discover the missing documentation later or, worse, lose work when a commit is corrupted
When confronted, Claude acknowledges the failure and apologizes, but the next session repeats the pattern. The cycle is:
1. Bad thing happens
2. I create a rule to prevent it
3. Claude doesn't follow the rule
4. Bad thing happens again
5. I remind Claude
6. Claude apologizes
7. Next session: back to step 3
## Expected Behavior

Instructions marked as required in `CLAUDE.md` should be treated as non-negotiable. The model should do at least one of the following:
- Follow them automatically
- Have a self-check before completing a task
- Refuse to proceed if it can't comply
## Proposed Solutions

Any of these would help:

1. **Priority/enforcement syntax** - a way to mark rules as mandatory vs. advisory:

   ```markdown
   <!-- ENFORCE -->
   - Always add TSDoc comments on fixes
   - Always update CHANGELOG.md
   - Always create local backup before commits
   <!-- /ENFORCE -->
   ```

2. **Pre-completion checklist** - the model reviews project rules before saying "done"

3. **Hooks integration** - a pre-commit hook that Claude must satisfy before committing (see the sketch after this list)

4. **Explicit compliance confirmation** - the model must confirm rule compliance in its response
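To make the hooks idea concrete, here is a sketch of a plain git pre-commit check that enforces two of my documented rules. The `.backups/latest` path and the rule set are assumptions for illustration; this is ordinary git tooling, not an existing Claude Code feature:

```typescript
// Sketch of a pre-commit check, invoked from .git/hooks/pre-commit
// (e.g., via tsx). Paths and rules below are hypothetical.
import { execSync } from "node:child_process";
import { existsSync } from "node:fs";

const staged = execSync("git diff --cached --name-only", { encoding: "utf8" })
  .split("\n")
  .filter(Boolean);

const failures: string[] = [];

// Rule: every significant change must update CHANGELOG.md.
if (staged.length > 0 && !staged.includes("CHANGELOG.md")) {
  failures.push("CHANGELOG.md was not updated in this commit.");
}

// Rule: a local backup must exist before committing.
if (!existsSync(".backups/latest")) {
  failures.push("No local backup found at .backups/latest.");
}

if (failures.length > 0) {
  console.error("Commit blocked by project rules:");
  for (const f of failures) console.error(`  - ${f}`);
  process.exit(1); // non-zero exit makes git abort the commit
}
```

Because git aborts the commit on a non-zero exit, Claude would have to actually satisfy the rules before any commit lands, instead of merely acknowledging them.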
## Impact

- **Lost work** - commits without backups have caused unrecoverable data loss
- **Technical debt** - duplicate code accumulates because architectural patterns aren't followed
- **Wasted money** - re-explaining rules, re-doing work, tokens spent on preventable mistakes
- **Eroded trust** - I can't rely on documented configuration being respected
For users like me who depend on Claude Code as their entire development capability (not just an accelerator for existing skills), unreliable instruction-following is a fundamental barrier to trust.
## Environment

- Claude Code CLI (VS Code extension)
- Model: `claude-opus-4-5-20250120`
- Extensive `CLAUDE.md` with project-specific rules
- Multiple `.claude/*.md` files with architectural documentation
- Long-running project with many sessions over months
## Summary
The current behavior is: "I read your rules, I understand your rules, I don't follow your rules."
For developers, this is annoying. For non-developers who depend on Claude Code as their entire engineering team, it's a critical reliability gap.
Users need a way to make certain instructions genuinely mandatory, not just suggestions that get deprioritized when the model focuses on the immediate task.