Engineering / Vol. 01

Claude Code found bugs I wasn't looking for

January 2025 · 5 min read
Documentation / Debug

I asked Claude to document my Rails models. It found a bug instead.

The prompt was simple: perform a reverse engineering analysis of each model for potential reconstruction of the data layer. I expected clean markdown—schema details, associations, validations, callbacks. I got that… plus a section at the bottom of a few files that stopped me cold:

Potential Gotchas: is_purple_team? always returns true (hardcoded || true).

That's not documentation. That's a bug hiding in plain sight—probably for months. A conditional that never branches. The kind of thing that passes code review because the line looks normal until you actually read it.
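The post doesn't show the model itself, so here's a minimal sketch of the shape of that bug, with a hypothetical flag attribute standing in for the real one:

class User < ApplicationRecord
  # Intended: membership depends on the flag.
  # Actual: the `|| true` fallback means the flag is read but never matters,
  # so the method returns true for every record.
  def is_purple_team?
    purple_team_flag || true
  end
end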

The same pass flagged another issue: set_team? triggering on multiple attribute changes and recalculating team membership repeatedly. Not a correctness bug, but a performance leak nobody noticed because the tests passed.
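Again, the real callback isn't reproduced in the post; a hypothetical reconstruction of the pattern looks roughly like this:

class User < ApplicationRecord
  # The guard matches on several attributes, so every save that touches any
  # of them re-runs the (potentially expensive) team recalculation.
  before_save :set_team, if: :set_team?

  private

  def set_team?
    will_save_change_to_department? ||
      will_save_change_to_role? ||
      will_save_change_to_region?
  end

  def set_team
    self.team = Team.for(self) # hypothetical lookup, recomputed on every matching save
  end
end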

The documentation exercise that became a debugging session

Here's the prompt I used:

You are performing a reverse engineering analysis of the Rails model
at {{MODEL_PATH}} as part of a comprehensive effort to document the
entire application's data layer for potential reconstruction.

Run that across every model and you get markdown files with schema breakdowns, relationships, validations, callbacks, and concerns. Useful on their own for onboarding and architecture discussions.
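One way to run it across every model is a small Ruby script like the sketch below. The doc/models output path and the use of Claude Code's non-interactive claude -p print mode are my assumptions, not something the post prescribes:

#!/usr/bin/env ruby
require "fileutils"

TEMPLATE = <<~PROMPT
  You are performing a reverse engineering analysis of the Rails model
  at {{MODEL_PATH}} as part of a comprehensive effort to document the
  entire application's data layer for potential reconstruction.
PROMPT

FileUtils.mkdir_p("doc/models")

Dir.glob("app/models/**/*.rb").sort.each do |path|
  prompt = TEMPLATE.gsub("{{MODEL_PATH}}", path)
  out    = File.join("doc/models", "#{File.basename(path, '.rb')}.md")
  # Print mode writes the response to stdout; redirect it into one file per model.
  system("claude", "-p", prompt, out: [out, "w"])
end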

But Claude doesn't just transcribe. It reads. And when it reads, it notices the stuff humans skim past.

The same approach worked on background jobs

We also have dozens of background jobs writing JSON "details" into the same column (detail.content). Each job invented its own key structure. Over time, "close enough" became a querying nightmare:

user_email vs email_address vs userEmail

Same concept. Three representations.
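The jobs aren't reproduced in the post, but the drift looks something like this (job names and extra keys are hypothetical; Detail#content is the shared JSON column):

class WelcomeEmailJob < ApplicationJob
  def perform(user)
    Detail.create!(content: { "user_email" => user.email, "event" => "welcome_sent" })
  end
end

class InvoiceExportJob < ApplicationJob
  def perform(user)
    Detail.create!(content: { "email_address" => user.email, "event" => "invoice_exported" })
  end
end

class CrmSyncJob < ApplicationJob
  def perform(user)
    Detail.create!(content: { "userEmail" => user.email, "event" => "crm_synced" })
  end
end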

So I pointed Claude at every job that saves detail records and asked it to map the keys being written. No database access needed—just code.

The output wasn't just a list of keys. It was a map: which jobs write which keys, which keys are really the same concept under different names, and a proposed canonical schema to converge on.

That last item is the payoff. Once we implement the schema, querying becomes reliable—and we have a standard for new jobs going forward.
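The post doesn't publish the schema itself, so here's one hypothetical shape for that standard: a single writer that maps the legacy spellings onto canonical keys and rejects anything it doesn't recognize:

module DetailSchema
  CANONICAL_KEYS = %w[user_email event occurred_at payload].freeze
  LEGACY_ALIASES = {
    "email_address" => "user_email",
    "userEmail"     => "user_email"
  }.freeze

  # Jobs call this instead of Detail.create! directly.
  def self.write!(attrs)
    normalized = attrs.to_h { |key, value| [LEGACY_ALIASES.fetch(key.to_s, key.to_s), value] }
    unknown = normalized.keys - CANONICAL_KEYS
    raise ArgumentError, "unknown detail keys: #{unknown.join(', ')}" if unknown.any?

    Detail.create!(content: normalized)
  end
end

New jobs get the standard for free, and the old spellings keep working while call sites are migrated.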

Why this works

Claude doesn't get bored. It doesn't skim. It gives the fortieth model file the same attention as the first—which is exactly how it catches the || true your eyes slide past.

The trick is the frame. "Reverse engineering for reconstruction" implies the system has to be understood deeply enough to rebuild. That naturally produces different output than "summarize this model."


Try it on your codebase. Require a Potential Gotchas section. You might find documentation. You might find bugs. Probably both.