When AI codes faster than you can specify, your methodology must evolve. MADD provides the framework to control drift and drive convergence between intention and implementation.
Each agent has a unique role. None validates its own work. Domain-specialized Maker/Breaker pairs ensure deep expertise.
The Workflow Conductor. Coordinates agents, detects domains, manages fraction-based iteration cycles.
The Intention Architect. Formalizes the "why" and "what", annotates tasks with domains.
The Maker. Domain-specialized implementation guided by skills.
The Quality Gate. Automated build, lint, type-check, and test validation between Maker and Breaker.
The Breaker. Domain-specialized verification. Challenges without complacency.
The Witness. Documents objective reality.
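The Quality Gate's role between Maker and Breaker can be sketched as an ordered series of automated checks that must all pass before the Breaker is invoked. A minimal illustration, assuming checks are modeled as plain callables; the check names and their logic here are invented placeholders for real build/lint/type-check/test tools:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CheckResult:
    name: str
    passed: bool

def quality_gate(code: str, checks: list[tuple[str, Callable[[str], bool]]]) -> list[CheckResult]:
    """Run each check in order; stop at the first failure so the
    Maker gets fast feedback before the Breaker is ever engaged."""
    results = []
    for name, check in checks:
        ok = check(code)
        results.append(CheckResult(name, ok))
        if not ok:
            break  # fail fast: later checks are skipped
    return results

# Hypothetical stand-ins for real build, lint, and test tooling.
checks = [
    ("build", lambda code: "syntax error" not in code),
    ("lint", lambda code: "TODO" not in code),
    ("tests", lambda code: "assert" in code),
]

results = quality_gate("def f():\n    assert True", checks)
gate_passed = len(results) == len(checks) and all(r.passed for r in results)
```

The gate hands off to the Breaker only when `gate_passed` is true; anything less goes straight back to the Maker.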
The Conductor detects technical domains from task annotations and invokes specialized agent pairs. Each domain has dedicated Maker and Breaker skills.
| Aspect | Maker produces | Breaker audits |
|---|---|---|
| Database | Normalization, RLS, audit logging, temporal patterns, indexing | Missing normalization, RLS, indexes, constraints, cascade risks |
| API | REST patterns, validation, rate limiting, pagination, error format | Missing validation, auth, CORS, N+1 queries, pagination |
| Frontend | Component architecture, state management, accessibility | Performance, bundle size, accessibility, XSS vectors |
| Security | OIDC/JWKS, RBAC, encryption, mTLS, security headers | OWASP Top 10, privilege escalation, secrets exposure |
| Infrastructure | Docker multi-stage, Compose, Terraform, secret management | Running as root, missing healthchecks, hardcoded secrets, SPOF |
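The dispatch described above can be sketched as a simple lookup from task annotations to skill pairs. A hedged sketch; the skill names and task shape are hypothetical, not MADD's actual wire format:

```python
# Hypothetical mapping from annotated domains to domain-scoped skill pairs.
DOMAIN_SKILLS = {
    "database": ("db-maker-skill", "db-breaker-skill"),
    "api": ("api-maker-skill", "api-breaker-skill"),
    "frontend": ("fe-maker-skill", "fe-breaker-skill"),
    "security": ("sec-maker-skill", "sec-breaker-skill"),
    "infrastructure": ("infra-maker-skill", "infra-breaker-skill"),
}

def detect_domains(task: dict) -> list[str]:
    """Keep only the annotated domains the Conductor knows how to staff."""
    return [d for d in task.get("domains", []) if d in DOMAIN_SKILLS]

def dispatch(task: dict) -> list[dict]:
    """Pair a domain-specialized Maker and Breaker skill per detected domain."""
    return [
        {"domain": d, "maker": DOMAIN_SKILLS[d][0], "breaker": DOMAIN_SKILLS[d][1]}
        for d in detect_domains(task)
    ]

pairs = dispatch({"title": "Add user search", "domains": ["database", "api"]})
```

A task annotated with two domains yields two independent Maker/Breaker pairs, each auditing only its own specialty.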
AI produces code that seems correct but subtly deviates from your intention. Without knowing it, you accumulate invisible technical debt.
The agent says "done" but has left TODOs, partial implementations, and ignored edge cases behind. The code passes tests but doesn't solve the problem.
Each session starts from scratch. Architectural decisions get lost. Conventions erode. The project becomes an incoherent patchwork.
The agent that generates the code is the same one that documents it. It's the fox guarding the henhouse. Self-evaluation masks problems.
Intention is not a forgotten conversation. It's a versioned, structured document that answers "why" and "what" before any "how".
→ Before generating code, intention must be formalized and validated.
A specification that can't be automatically verified isn't a contract, it's a wish. Every requirement has automatable validation criteria.
→ Tests, assertions, schemas: the contract verifies itself.
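What "the contract verifies itself" can look like in practice, as a minimal sketch; the requirement and criterion names are invented for illustration:

```python
# A requirement expressed as automatable validation criteria: each
# criterion is a named predicate over the observable result, so any
# agent can replay the contract without trusting the Maker's word.
contract = {
    "requirement": "Search returns at most 20 results, newest first",
    "criteria": [
        ("bounded", lambda results: len(results) <= 20),
        ("sorted_desc", lambda results: results
            == sorted(results, key=lambda r: r["created_at"], reverse=True)),
    ],
}

def verify(contract: dict, results: list) -> dict[str, bool]:
    """Evaluate every criterion and report pass/fail per name."""
    return {name: predicate(results) for name, predicate in contract["criteria"]}

sample = [{"created_at": 3}, {"created_at": 1}]
report = verify(contract, sample)
```

A wish says "search should feel fast and relevant"; a contract says exactly which predicates must hold.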
An agent that generates code cannot be the source of truth on that code's quality. Validation must always come from an agent that didn't participate in production.
→ Mandatory architecture with 6 roles.
What was actually implemented must be documented objectively, independently of what the development agent claims to have done.
→ An independent Witness agent analyzes the code and produces the truth.
When marginal development cost decreases, investing in architecture becomes rational. We prioritize what de-risks what comes next.
→ Sequencing favors solidity, not immediate impressions.
AI has outdated training data. Skills inject up-to-date knowledge and define quality criteria between phases.
→ Each transition is mediated by an explicit skill.
| Aspect | Scrum / Kanban / SAFe | MADD |
|---|---|---|
| Bottleneck | Development time | Intention clarity |
| Work unit | Story / Ticket | Intention + Contract |
| Estimation | Points / Days | Obsolete (development is too fast to warrant it) |
| Validation | Human code review | Independent agent audit |
| Documentation | Often neglected | Automatic retro-spec |
| Rhythm | Fixed sprints (2-3 weeks) | Adaptive cycles |
| Coordination | Team ceremonies | Inter-agent contracts |
| Project memory | In people's heads | Versioned retro-spec |
MADD shares the "specification-first" philosophy with other methodologies, but differs in its approach to validation and memory.
| Aspect | BMAD | SpecKit | MADD |
|---|---|---|---|
| Philosophy | Structured prompts | Living specifications | Executable contracts |
| Validation | Manual review | Spec conformance | Independent agent audit |
| Memory | Prompt history | Spec versioning | Retro-specification |
| Drift detection | Manual | Diff-based | Automated by Witness |
| Agent separation | Optional | Not enforced | Mandatory (6 roles) |
| Self-validation | Allowed | Allowed | Forbidden |
| Documentation | Developer-written | Spec-derived | Independent analysis |
| Objectivity | Dev perspective | Spec perspective | Code reality |
Use 2-3 different models: Gemini for specs, Claude for dev, GPT for audit. Never let an agent validate its own work.
Before each dev session, write an Intention Document: why, what, success criteria. This is your contract.
Pass the produced code to your audit agent with a due diligence skill. Never merge without independent audit.
A 4th agent (or you) analyzes what was actually implemented and updates the documentation. This is your project memory.
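The four-step solo setup can be enforced mechanically rather than by discipline. A sketch under assumed conventions: each stage is bound to a model name (placeholders here), and construction fails if the same model would validate its own output:

```python
class MaddPipeline:
    """Minimal solo-mode pipeline: intention -> dev -> audit -> retro-spec.
    Model names are placeholders; any assignment works as long as the
    auditing and witnessing models differ from the developing model."""

    def __init__(self, spec_model: str, dev_model: str,
                 audit_model: str, witness_model: str):
        # The core MADD rule: no agent validates or documents its own work.
        if dev_model == audit_model:
            raise ValueError("audit model must differ from dev model: no self-validation")
        if dev_model == witness_model:
            raise ValueError("witness model must differ from dev model")
        self.stages = {
            "intention": spec_model,
            "dev": dev_model,
            "audit": audit_model,
            "retro_spec": witness_model,
        }

# Example assignment mirroring the "2-3 different models" advice.
pipeline = MaddPipeline("gemini", "claude", "gpt", "gpt")
```

The constraint lives in the constructor, so a misconfigured pipeline never runs at all.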
Intention Architect (PM/PO), Agent Conductor (senior dev), Alignment Arbiter (QA or peer). Each role is responsible for a phase.
A shared repo with all skills: Spec Skill, Dev Skill, Audit Skill, Retro-Spec Skill. Versioned like code.
The retro-specification is shared and updated each cycle. It's the starting point for every new iteration.
No more fixed duration. A cycle starts when an intention is formalized and ends when the retro-spec is validated.
Each story becomes an Intention Document with executable validation criteria. No more "looks good to me".
The DoD becomes a formal skill that the audit agent executes. Verifiable, reproducible, non-negotiable.
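A Definition of Done expressed as a skill might be a set of reproducible predicates the audit agent executes verbatim. A sketch with invented criteria; real checks would call actual tooling:

```python
# "Done" is the conjunction of executable checks, never a claim.
# These criteria are illustrative stand-ins for real tool invocations.
DOD_CHECKS = {
    "no_todo_markers": lambda src: "TODO" not in src and "FIXME" not in src,
    "has_tests": lambda src: "def test_" in src,
    "documented": lambda src: '"""' in src,
}

def definition_of_done(source: str) -> dict[str, bool]:
    """Run every check and derive the overall verdict from the results."""
    report = {name: check(source) for name, check in DOD_CHECKS.items()}
    report["done"] = all(report[name] for name in DOD_CHECKS)
    return report

report = definition_of_done(
    'def test_search():\n    """Covers the happy path."""\n    assert True'
)
```

Because the verdict is computed, any agent (or human) re-running the skill gets the same answer.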
No more "how to improve the process". We consolidate retro-specs and improve skills.
MADD is open source. Join the community and help define the future of AI-assisted development.