compound-engineering
Summit: Erin Ahmed, Faye Zhang
The summit identified learning & memory as the biggest missing dimension across all 6 original repos. compound-engineering's ce:compound skill (530 lines, 3 modes, parallel subagent dispatch) provided the reference implementation. We stripped it to convergence's lean style: the agent reads recent git context and session artifacts, drafts a structured learning with searchable YAML frontmatter, and presents it for human correction. The human's effort drops from "write from scratch" to "fix what's wrong." Overlap detection prevents duplicates.
Core Instructions (12 instructions)
- Gather context: Read recent git log, diff, and convergence session artifacts (review findings, debug output)
- Draft learning: Pre-fill title, what happened, root cause, fix, rule, and YAML frontmatter (problem_type, module, severity, tags)
- Check overlap: Grep existing learnings for matching tags/module. If match found, ask human: update existing or create new?
- Present for correction: Show the full draft. The human corrects or approves. The Rule field will often be wrong; that's fine, the correction is the highest-value moment
- Write artifact to `docs/convergence/learnings/YYYY-MM-DD-<slug>.md`
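The loop above can be sketched in a few functions. The frontmatter fields (`problem_type`, `module`, `severity`, `tags`), the artifact sections, and the `docs/convergence/learnings/` path come from the instructions; the helper names and the substring-based overlap check are illustrative assumptions, not the skill's actual implementation.

```python
import re
from datetime import date
from pathlib import Path

# Assumed location of learning artifacts, per the path in the instructions.
LEARNINGS_DIR = Path("docs/convergence/learnings")


def slugify(title: str) -> str:
    """Lowercase, replace punctuation runs with hyphens, trim the ends."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")


def draft_learning(title, what_happened, root_cause, fix, rule, *,
                   problem_type, module, severity, tags):
    """Pre-fill the artifact the human will correct: YAML frontmatter, then sections."""
    frontmatter = "\n".join([
        "---",
        f"problem_type: {problem_type}",
        f"module: {module}",
        f"severity: {severity}",
        f"tags: [{', '.join(tags)}]",
        "---",
    ])
    body = (f"# {title}\n\n## What happened\n{what_happened}\n\n"
            f"## Root cause\n{root_cause}\n\n## Fix\n{fix}\n\n## Rule\n{rule}\n")
    return frontmatter + "\n\n" + body


def find_overlaps(module, tags, directory=LEARNINGS_DIR):
    """Grep-style overlap check: flag existing learnings sharing the module or a tag."""
    hits = []
    paths = sorted(directory.glob("*.md")) if directory.exists() else []
    for path in paths:
        text = path.read_text()
        if f"module: {module}" in text or any(tag in text for tag in tags):
            hits.append(path)  # ask the human: update this one, or create new?
    return hits


def artifact_path(title, on=None):
    """Date-stamped filename: YYYY-MM-DD-<slug>.md under the learnings directory."""
    d = on or date.today()
    return LEARNINGS_DIR / f"{d.isoformat()}-{slugify(title)}.md"
```

If `find_overlaps` returns matches, the agent pauses to ask the human whether to update the existing learning or create a new one; otherwise it writes the draft to `artifact_path(title)` and presents it for correction.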
Summit insight (Erin Ahmed, Cleric): "Agent capabilities are commoditized — the next horizon of differentiation is learning." Three principles: make correction easy, reward corrections with visible improvement, absorb context continuously. This skill implements all three.
Why compound-engineering: This was the only gap the summit explicitly validated that none of the original 6 repos addressed. compound-engineering's ce:compound + learnings-researcher agent provided the working pattern. Most of compound-engineering's other patterns (42 skills, 50+ agents, 1200-line plans) contradict summit findings on instruction budgets and plan leverage — but the knowledge compounding loop was the exception.