A lightweight system for documenting why our LLM interactions work — so the next developer doesn't have to rediscover what we already know.
"If the developer who built this left tomorrow, could someone else maintain it?"
Teams building with LLMs are creating a new category of institutional knowledge: prompt strategies, model configurations, and hard-won lessons about what works and what doesn't. Today, most of that knowledge lives in one person's head.
Architecture Decision Records (ADRs) solved this problem for system design over a decade ago. Prompt Decision Records (PDRs) apply the same pattern to LLM interactions — capturing not just the prompt, but the reasoning, the failures, and the runtime configuration that makes it work.
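To make that concrete, here is a minimal sketch of what a PDR file might contain. The section names echo the classic ADR layout; they are illustrative, not a fixed standard.

```markdown
# PDR-0003: Ticket classifier prompt

## Context
What this LLM interaction does, where it runs, and what it depends on.

## Decision
The prompt strategy and the runtime configuration it requires
(model, temperature, max tokens, stop sequences, retry policy).

## What we tried and rejected
Failed variants and the evidence that killed them.

## Consequences
Known failure modes, eval results, and triggers for revisiting.
```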
Same prompt. Same test set. The only change: temperature 0.2 → 0.0.
These are the kinds of discoveries teams make through iteration — and lose when someone leaves. PDRs capture them.
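A finding like this is easiest to keep when the configuration itself points at the record. A minimal Python sketch of that idea; `client.complete`, the function name, and the PDR path are hypothetical stand-ins for whatever your stack uses.

```python
TICKET_PROMPT = "Classify this support ticket as bug, billing, or other:\n\n{ticket}"

def classify_ticket(client, ticket_text: str) -> str:
    """Classify a support ticket with the LLM.

    Decision record: docs/pdr/0003-ticket-classifier.md
    (rationale and eval results behind the pinned configuration).
    """
    response = client.complete(  # hypothetical client interface
        prompt=TICKET_PROMPT.format(ticket=ticket_text),
        temperature=0.0,  # pinned at 0.0; see PDR-0003 before changing
        max_tokens=256,
    )
    return response.text
```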
PDRs live alongside prompts and skills in the repo. A prompt references its PDR, a PDR references its prompt. Developers starting from either direction find the full picture — including the configuration the prompt depends on to function correctly.
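One lightweight way to wire up that two-way link, assuming prompts live in Python modules; all paths and names here are illustrative.

```python
# prompts/ticket_classifier.py
#
# Decision record: docs/pdr/0003-ticket-classifier.md
# The PDR covers both the prompt text below and the runtime
# configuration (model, temperature, max tokens) it depends on.

TICKET_PROMPT = """\
You are a support triage assistant.
Classify the ticket below as bug, billing, or other.
"""

# The PDR's header links back with a line such as
#   Prompt: prompts/ticket_classifier.py
# so a reader starting from either file reaches the other in one hop.
```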
Start small. Pick your most complex prompt and write one PDR for the LLM interaction that would be hardest for someone else to maintain. One real example is worth more than a process document.
Automate the scaffolding. An LLM can generate 80% of a PDR draft from an existing prompt and its surrounding code. The developer fills in the tribal knowledge — what failed, what to watch for.
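A sketch of what that scaffolding could look like. `ask_llm` is a hypothetical `(str -> str)` wrapper around whichever model client your team uses, and the section list matches the illustrative template above.

```python
from pathlib import Path

PDR_SECTIONS = ["Context", "Decision", "What we tried and rejected", "Consequences"]

def draft_pdr(prompt_path: Path, ask_llm) -> str:
    """Generate a first-draft PDR from an existing prompt file.

    The model drafts the mechanical parts; TODO markers flag
    where a developer must supply the tribal knowledge.
    """
    prompt_source = prompt_path.read_text()
    instructions = (
        "Draft a Prompt Decision Record for the prompt file below. "
        f"Use these sections: {', '.join(PDR_SECTIONS)}. "
        "Insert TODO wherever intent, failure history, or "
        "configuration rationale cannot be inferred from the code.\n\n"
        f"--- {prompt_path.name} ---\n{prompt_source}"
    )
    return ask_llm(instructions)
```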
Make it part of code review. If a PR changes a prompt strategy, it should include a PDR update. No new tooling required — just a convention that reviewers enforce.
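The convention is also cheap to backstop in CI. A minimal sketch, assuming prompts live under `prompts/` and records under `docs/pdr/`; both paths are placeholders for your repo's layout.

```python
import subprocess
import sys

def changed_files(base: str = "origin/main") -> list[str]:
    """List files changed between the base branch and HEAD."""
    result = subprocess.run(
        ["git", "diff", "--name-only", base, "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

def main() -> None:
    files = changed_files()
    prompt_changes = [f for f in files if f.startswith("prompts/")]
    pdr_changes = [f for f in files if f.startswith("docs/pdr/")]
    if prompt_changes and not pdr_changes:
        print("Prompt changes without a PDR update:")
        for path in prompt_changes:
            print(f"  {path}")
        sys.exit(1)

if __name__ == "__main__":
    main()
```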
Standardize across teams. As LLM usage scales, PDRs become the shared documentation language for prompt strategies. Every team building with agents uses the same format.
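A shared format is also a checkable one. A small lint sketch, reusing the illustrative section names from the template above, that each team could run over its own `docs/pdr/` directory.

```python
from pathlib import Path

REQUIRED_HEADINGS = (
    "## Context",
    "## Decision",
    "## What we tried and rejected",
    "## Consequences",
)

def missing_sections(pdr_path: Path) -> list[str]:
    """Return the required headings absent from one PDR file."""
    text = pdr_path.read_text()
    return [h for h in REQUIRED_HEADINGS if h not in text]

def lint_pdrs(pdr_dir: Path = Path("docs/pdr")) -> int:
    """Lint every PDR; return the number of non-conforming files."""
    failures = 0
    for pdr in sorted(pdr_dir.glob("*.md")):
        missing = missing_sections(pdr)
        if missing:
            failures += 1
            print(f"{pdr}: missing {', '.join(missing)}")
    return failures
```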