
Crowdsourced Wisdom: How Developers Are Actually Getting Better at AI-Assisted Coding

Note: This is an AI-assisted summary compiled for my own reference, and hopefully useful to others. The ideas below belong to their original authors. Sources: Hacker News discussion and Sleuth Diaries.


The Question Everyone’s Asking

A developer on Hacker News posed a question that resonated with many: they were migrating a jQuery + Django project to SvelteKit and found that “simple prompting just isn’t able to get AI’s code quality within 90% of what I’d write by hand.”

The thread exploded with practical advice. Here’s the distilled wisdom.

Advice Straight from the Claude Code Team

Boris (bcherny) from the Claude Code team offered four concrete tips:

  1. Use CLAUDE.md religiously — If Claude repeatedly gets something wrong or spends excessive tokens on certain tasks, document it in your CLAUDE.md. Boris adds to his team’s file multiple times a week.

  2. Plan mode is a multiplier — Press shift-tab twice to enter plan mode. Iterate on the plan until you’re satisfied before letting Claude execute. Boris claims this “easily 2-3x’s results for harder tasks.”

  3. Give the model ways to check its work — For frontend work, consider the Puppeteer MCP server so Claude can verify its changes in the browser. This is another 2-3x improvement.

  4. Use Opus 4.5 — It’s a “step change” from earlier models.

On that third point about giving Claude better context, I’ve found Context7 MCP invaluable when Claude makes wrong assumptions about library APIs from outdated training data.
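
For reference, here is a minimal sketch of what wiring those two servers into a project-scoped `.mcp.json` might look like. The file format and the package names (`@modelcontextprotocol/server-puppeteer`, `@upstash/context7-mcp`) are my assumptions, so verify them against the current Claude Code and MCP docs before copying:

```json
{
  "mcpServers": {
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    },
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```

Once a browser tool is available, "check your work" becomes literal: you can ask Claude to load the page it just changed and confirm the result before it reports the task as done.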

Building Your Rules File Iteratively

Several commenters emphasized the power of building rules files through iteration rather than upfront specification.

serial_dev suggests: refactor one template with the AI, then when you’re happy with the result, ask it to write a rules file based on that conversation. Include examples and rationale. For the next route, just say “refactor” and let the rules guide it. When something’s off, fix it and update the rules.

hurturue takes a similar approach: put a before/after example directly in your prompt—the original Django file and the rewritten SvelteKit version—then ask it to convert another file using that as a template.
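
Combining those two ideas, a rules-file entry might look something like the sketch below. Everything in it is hypothetical (my wording, not serial_dev's or hurturue's); the point is the shape: a rule, a before/after pair, and the rationale.

```markdown
## Rule: data loading in migrated routes

Fetch data in `+page.server.ts` `load()` functions, never in component `onMount`.

**Before (Django/jQuery):** server-rendered template, then jQuery fetches extra data on document ready.
**After (SvelteKit):** `load()` returns everything the page needs; the component only renders `data`.

**Why:** keeps requests on the server, avoids loading flicker, and matches the routes already migrated.
```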

rdrd goes further, adding more structure to the rules file itself.

I’ve written extensively about my own CLAUDE.md evolution in how I run 4 AI agents in parallel. The key insight: every frustration with Claude becomes a new rule.

Voice Prompting: The Underrated Technique

bogtog shared a technique I hadn’t considered: voice transcription for prompts.

“I’m often voicing 500-word prompts. If you talk in a winding way that looks awkward when in text, that’s fine. The model will almost certainly be able to tell what you mean.”

The reasoning: a typing speed of 70-90 WPM drops quickly once you also have to think about what you’re saying, while speaking at 100-120 WPM feels natural and removes the friction of fully expressing what you want.

Tools mentioned: Whispering with the OpenAI API, VoiceInk for Mac, or just your OS’s built-in dictation.

One communication pattern that surfaced repeatedly: instead of “if you have questions, ask, otherwise go ahead,” try “make a plan and ask me any questions you may have.” The latter produces more questions and better results.

The Three Modes of AI Development

Nisal Periyapperuma’s Sleuth Diaries piece frames AI-assisted development as three distinct modes:

  1. “Jesus take the wheel” mode — Tools like Lovable, Replit, Figma Make. Great for prototyping, but don’t expect to ship the result. As Nisal puts it: “The idea that an amateur with no experience can put together a complete application that’s more complex than a simple CRUD app is a fantasy.”

  2. AI-powered software engineering — Cursor agents, Claude Code, Codex. The sweet spot for experienced developers.

  3. Ambient AI helpers — Tab completion, intelligent search, refactoring assistance. So ubiquitous now they barely register.

The key insight: “Writing code has never been the bottleneck.” The hard part is understanding tradeoffs, reasoning about system limits, and making design decisions. That expertise becomes more valuable, not less, when managing AI agents.

Plan First, Execute Once

mirsadm’s workflow: break everything into very small tasks, review the plan for each, execute one step at a time. This gives you control over the whole process.

rdrd adds a critical point: if the first attempt doesn’t get you 99% of the way there, don’t try to guide fixes iteratively. Revise your plan.md and start fresh. “You will bang your head against the wall attempting to guide it like you would a junior developer.”
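
For what that plan.md might contain, here is a hedged sketch (structure and step wording are mine, not rdrd’s): small, verifiable steps plus explicit out-of-scope notes, so a failed run means editing the plan and restarting rather than coaching the agent mid-flight.

```markdown
# Plan: migrate /settings to SvelteKit

## Out of scope
- Do not modify the Django API endpoints.
- Do not change shared CSS.

## Steps
1. Add `src/routes/settings/+page.server.ts` with a `load()` that calls the existing API.
2. Port the settings template to `+page.svelte`, reusing existing form components.
3. Handle submission with a SvelteKit form action; no client-side fetch.
4. Verify: page renders with seeded data, submit round-trips, `npm run check` passes.
```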

coryvirok shared a clever hack for SvelteKit specifically: have Claude translate to Next.js/React first, debug and verify, then translate to SvelteKit/Svelte. The intermediate step through better-trained territory produces cleaner results.

For managing complex sessions, I developed what I call the Golden Context workflow—treating Claude sessions like git branches with named session IDs you can return to.

Testing Becomes Mandatory

Here’s a perspective shift from Sleuth Diaries: when you’re using AI agents, “even if you’re working alone, you’re in the position of managing a team made of a group of very efficient, smart, but unreliable intelligent beings.”

This changes the calculus on testing. Alan01252 found that unit tests alone led to frequent regressions, but adding BDD with Cucumber stabilized things significantly.
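
To make that concrete, here is a minimal sketch of the Cucumber layer in a TypeScript project. The feature wording, step names, and the `createTodoList` module are hypothetical; the `Given`/`When`/`Then` imports are `@cucumber/cucumber`’s standard API.

```gherkin
# features/todo.feature
Feature: Todo list
  Scenario: Completing an item
    Given a todo list with 2 items
    When I complete the first item
    Then 1 item remains open
```

```typescript
// features/steps/todo.steps.ts
import assert from "node:assert";
import { Given, When, Then } from "@cucumber/cucumber";
import { createTodoList, type TodoList } from "../../src/todo"; // hypothetical module under test

let list: TodoList;

Given("a todo list with {int} items", function (count: number) {
  list = createTodoList(count);
});

When("I complete the first item", function () {
  list.complete(0);
});

Then("{int} item remains open", function (openCount: number) {
  // Assert on observable behavior, not implementation details
  assert.strictEqual(list.openItems().length, openCount);
});
```

Scenarios at this level describe behavior the agent has to preserve end to end, which makes them harder to satisfy accidentally than isolated unit tests.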

The Sleuth Diaries piece puts it bluntly: tests that seemed optional for solo work become mandatory when your contributors are unreliable.

I’ve experienced this firsthand. In Strategic Test Design, I documented what I call “The Mock Circus”—where Claude mocks everything so thoroughly that tests pass while actual code remains broken.
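
As a hypothetical TypeScript illustration of the pattern (the names and the Vitest setup are mine, not from that post): the test mocks out the data layer the function depends on, then asserts on what the mock returns, so it passes regardless of whether the real code path works.

```typescript
// user-service.test.ts — an over-mocked test that cannot really fail
import { describe, it, expect, vi } from "vitest";
import { getUserName } from "./user-service"; // hypothetical function under test

// Replace the entire data layer with a canned response
vi.mock("./db", () => ({
  findUser: vi.fn().mockResolvedValue({ id: 1, name: "Ada" }),
}));

describe("getUserName", () => {
  it("returns the user's name", async () => {
    // This only proves the mock returns what we told it to;
    // a broken query or mapping bug in the real ./db module still ships.
    await expect(getUserName(1)).resolves.toBe("Ada");
  });
});
```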

Core Principles

Synthesizing across the discussion, a few principles emerge:

Never lose the mental model. As Sleuth Diaries warns: “The moment you lose the mental model of how your code works, you lose.” This is the difference between AI-assisted development and vibe coding into oblivion.

Treat AI output as drafts. thinkingtoilet puts it simply: “I have to review every line that it generates.” Addy Osmani (cited in Sleuth Diaries) expands: “Hold AI-written code to the same standards as human teammates.”

Build strong primitives first. Before letting AI loose, establish your reusable components, common services, and documented patterns. Claude will reach for one-off solutions if you don’t give it better options.

Experience matters more than ever. halfcat frames it well: “The expert gets promoted to manager, and the replacement worker can’t deliver even 90% of what the manager used to.” AI doesn’t change this dynamic. You still need to know what good looks like to recognize when you’re not getting it.

What’s Working for Me

After months of iterating on AI workflows, my stack is: Claude Code with Opus 4.5, a living CLAUDE.md that grows with every frustration, plan mode for anything non-trivial, and dedicated sessions for different features.

The meta-lesson: getting good at AI-assisted coding is its own skill. As Alan01252 noted, “prompting is a skill in itself. It takes time to learn how to get the best out of an Agent.”

What techniques are working for you? I’m still figuring this out and would love to hear what others have learned.

