I’ve run hundreds of Claude sessions. Many have:
- Tried to reinvent my entire architecture
- Forgotten what we were building midway through
- Built entire apps and features I never asked for
- Tried to use deprecated methods and outdated patterns
- Written way too many comments and docstrings (this happens every time)
I’ve developed a system for keeping AI on a leash. It starts with treating AI engineering as just engineering: applying decades-old principles to new tools.
Why Most AI Sessions Fail
The root cause? Every AI session starts fresh. No memory. No context. Clean slate.
You explain your patterns, coding style, architecture decisions. The AI gets it—for that session. Tomorrow? Groundhog Day.
Even worse, AI invents new patterns mid-session. One minute it’s following your Django conventions, the next it’s refactoring to FastAPI. This “creativity” kills velocity and breeds bugs.
The Solution
Start with one file: CLAUDE.md. This controls every AI session.
```markdown
## 📋 User's Preferred Workflow
1. Read existing code before writing new
2. Follow existing patterns exactly
3. Ask before architectural changes
4. Keep changes minimal - don't refactor unless asked
5. No comments in code unless explicitly asked

## 📚 Required Reading
Go read these files first:
- API_REFERENCE.md - All endpoints and schemas
- ARCHITECTURE.md - Service boundaries
- DATA_MODEL.md - Database schema
- FEATURES.md - Business logic

## 🗂️ Key Locations
- Backend: backend/apps/*/api.py, */models.py, */services.py
- Frontend: frontend/src/components/, src/services/api/
```
Warning: It’s easy to go overboard. Too much documentation overwhelms the AI’s context window and dilutes focus. Every word in CLAUDE.md should earn its place. Brief, focused, actionable.
The Superpower: Add CLAUDE.md files to subfolders, such as tests/CLAUDE.md, backend/CLAUDE.md, and frontend/CLAUDE.md. The AI loads them lazily when working in those directories. The root CLAUDE.md stays lean; folder-specific patterns live where they're needed.
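For example, a tests/CLAUDE.md might hold only the testing conventions (contents illustrative, not from any particular project):

```markdown
## 🧪 Testing Conventions
- Use pytest, never unittest
- One file per service: tests/test_<service>.py
- Reuse factories from tests/factories.py
- Integration tests hit a real test database - no mocking the ORM
```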
Making It Production-Ready
Documentation tells AI what to write. But that’s only half the equation.
Imagine coding on paper without ever running it—how many bugs would you create? That’s AI without validation tools.
Add ruff.toml, mypy.ini, pre-commit hooks, and Claude hooks. Now the AI writes code, validates it, and fixes issues with no human intervention.
```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9  # pin to the latest tag
    hooks: [{ id: ruff, args: [--fix] }, { id: ruff-format }]
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.11.2
    hooks: [{ id: mypy }]
```
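Claude hooks can close the validation loop by running the linter after every edit. A minimal sketch of .claude/settings.json using Claude Code's PostToolUse hook event; the matcher and command here are illustrative, so adapt them to your toolchain:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "ruff check --fix . && mypy ." }
        ]
      }
    ]
  }
}
```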
The Testing Strategy That Actually Works
Validation catches syntax errors. For logic? Tests encode requirements.
Backend: Less Is More
AI wants to write 50 tests. Cut to 5 that matter:
- Critical paths users depend on
- Edge cases you’re worried about
- Integration points between services
For routine feature testing, get the AI to follow existing patterns:
"Write tests for OrderService following tests/test_shopping_service.py"
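That prompt assumes a pattern file already exists for the AI to imitate. For illustration, here is the shape such a test file might take; OrderService and its methods are hypothetical stand-ins, not code from a real project:

```python
from dataclasses import dataclass, field


@dataclass
class OrderService:
    """Minimal stand-in for a real service-layer class."""
    orders: dict = field(default_factory=dict)

    def place_order(self, order_id: str, total: float) -> None:
        # Edge case worth encoding: reject nonsense totals
        if total <= 0:
            raise ValueError("total must be positive")
        self.orders[order_id] = total


def test_place_order_critical_path():
    svc = OrderService()
    svc.place_order("o-1", 42.0)
    assert svc.orders["o-1"] == 42.0


def test_place_order_rejects_non_positive_total():
    svc = OrderService()
    try:
        svc.place_order("o-2", 0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

Five tests in this style beat fifty auto-generated ones: each encodes a requirement you actually care about.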
Coverage as a dead-code detector: Run coverage reports after each session. AI loves writing "helper" methods and "utility" functions that nothing actually uses. These show up as 0% covered. Delete them. Every line of code should earn its place through actual usage. This only works if you follow the less-is-more strategy above.
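In practice the loop can be as simple as this, assuming pytest and the coverage package:

```shell
coverage run -m pytest          # run the small suite you kept
coverage report --show-missing  # 0%-covered helpers are deletion candidates
```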
Frontend: Visual Validation Loop
Sketch → Build → Validate
- Draw rough UI in Excalidraw (or your tool of choice)
- AI builds with shadcn/ui components (not custom HTML)
- Playwright takes screenshots
- AI validates against its own screenshots
The magic: AI can see what it built and self-correct. Component libraries like shadcn/ui prevent AI from reinventing wheels.
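The screenshot step can be as simple as one CLI call, assuming Playwright is installed and your dev server is running; the URL, viewport, and filename here are illustrative:

```shell
npx playwright screenshot --viewport-size=1280,720 http://localhost:3000/orders orders.png
```

Feed the screenshot back to the AI and ask it to compare against your Excalidraw sketch.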
Project Management for Parallel AI
With documentation, validation, and tests in place, I can run 4 AI agents simultaneously. Here’s how I orchestrate them.
Phase 1: Feature Planning with Claude Desktop
I use CopyChat to copy all relevant context—existing code, patterns, tests—into Claude Desktop (Opus). We discuss the feature and break it into isolated tasks.
The output: A task list in docs/project-management/feature-name.md with exact prompts for each parallel session. Each task is independent. No dependencies. That's the key to parallelization.
Phase 2: Parallel Execution
I open 4 Claude Code windows (Sonnet), each with its assigned task. The ESC key is my kill switch—any session going rogue gets terminated immediately.
Critical: Git worktrees keep them isolated. Each AI works in its own worktree, can’t see or mess with the others:
```shell
git worktree add ../project-models feature/models
git worktree add ../project-service feature/service
git worktree add ../project-api feature/api
git worktree add ../project-frontend feature/frontend
```
Now each session has its own branch, own working directory:
- Window 1 (../project-models): Models + migrations (~150 lines) → PR #1
- Window 2 (../project-service): Service layer (~500 lines) → PR #2
- Window 3 (../project-api): API endpoints (~500 lines) → PR #3
- Window 4 (../project-frontend): Frontend component (~400 lines) → PR #4
The 500-line rule: Each PR stays under 500 lines. Easier to review, easier to revert, easier to merge.
The reality: Not every session succeeds. I throw away maybe 1 in 5 PRs entirely—just close it and move on. Others need fixes: either I make quick edits or spin up a new session with “Fix the type errors in PR #3.” The goal isn’t perfection, it’s velocity with consistent patterns. AI-written, human-vetted, ready to scale.
Pro tip: Add a sound notification via Claude hooks to know when each session completes. Four parallel sessions means four completion chimes.
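A sketch of what that can look like in .claude/settings.json, assuming macOS (afplay) and Claude Code's Stop hook event; swap in any command your OS uses to play a sound:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "afplay /System/Library/Sounds/Glass.aiff" }
        ]
      }
    ]
  }
}
```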
Pro tip: Give Claude access to the GitHub CLI (gh) so it can handle the full PR workflow: add files, commit, push, and open the PR. Each session becomes truly autonomous.
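The end of each session then looks something like this; the branch name, commit message, and PR title are illustrative:

```shell
git add -A
git commit -m "Add OrderService layer"
git push -u origin feature/service
gh pr create --title "Add OrderService layer" --body "AI-written, human-vetted. ~500 lines."
```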
The Results Are Measurable
Before this system:
- AI “improvements” breaking existing code
- Constant context switching
- Maybe 1 feature per day
After:
- Consistent patterns across all sessions
- 4 features progressing simultaneously
- 3-4 features completed daily
Real project: I built Kinmel (Django + Next.js SaaS) entirely this way. 10+ features, all following consistent patterns.
Start Small, Scale Fast
Start with CLAUDE.md. Document your patterns. Parallelize with git worktrees. Watch your velocity quadruple.
The win isn’t just speed—it’s that AI becomes an extension of your coding style rather than fighting against it.
Your patterns. Your architecture. Your code. Just 4x faster.