---
title: Documentation, References, and Model Agent Use
version: 1.1
alwaysApply: true
scope: code, project-plans
---
# Directive on Documentation, References, and Model Agent Use in Code and Project Plans
To ensure clarity, efficiency, and high-value documentation within code and project plans—and to leverage **model agents** (AI- or automation-based assistants) effectively—contributors must follow these rules:
---
## 1. Documentation and References Must Add Clear Value
- Only include documentation, comments, or reference links when they provide _new, meaningful information_ that assists understanding or decision-making.
- Avoid duplicating content already obvious in the codebase, version history, or linked project documents.
---
## 2. Eliminate Redundant or Noisy References
- Remove references that serve no purpose beyond filling space.
- Model agents may automatically flag and suggest removal of trivial references (e.g., links to unchanged boilerplate or self-evident context).
---
## 3. Explicit Role of Model Agents
Model agents are **active participants** in documentation quality control. Their tasks include:
- **Relevance Evaluation**: Automatically analyze references for their substantive contribution before inclusion.
- **Redundancy Detection**: Flag duplicate or trivial references across commits, files, or tasks (a minimal sketch appears at the end of this section).
- **Context Linking**: Suggest appropriate higher-level docs (designs, ADRs, meeting notes) when a code change touches multi-stage or cross-team items.
- **Placement Optimization**: Recommend centralization of references (e.g., in plan overviews, ADRs, or merge commit messages) rather than scattered low-value inline references.
- **Consistency Monitoring**: Ensure references align with team standards (e.g., ADR template, architecture repo, or external policy documents).
Contributors must treat agent recommendations as **first-pass reviews** and remain accountable for the final judgment.
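
For illustration only, the following is a minimal sketch of how an agent might flag trivial or duplicated references. The link patterns, the `TRIVIAL_PATTERNS` list, and the heuristics are assumptions to be adapted to the team's actual tooling, not a prescribed implementation.

```python
import re
from collections import Counter

# Hypothetical heuristics; adjust the patterns to the team's actual repositories.
TRIVIAL_PATTERNS = [
    r"(^|/)README\.md$",        # links to unchanged boilerplate
    r"(^|/)CONTRIBUTING\.md$",  # self-evident project context
]

MARKDOWN_LINK = re.compile(r"\[[^\]]*\]\(([^)]+)\)")

def flag_references(text: str) -> dict:
    """Return reference targets in `text` that look trivial or duplicated."""
    targets = MARKDOWN_LINK.findall(text)
    counts = Counter(targets)
    trivial = sorted({t for t in targets
                      if any(re.search(p, t) for p in TRIVIAL_PATTERNS)})
    duplicates = sorted(t for t, n in counts.items() if n > 1)
    return {"trivial": trivial, "duplicates": duplicates}

if __name__ == "__main__":
    sample = ("See [the design](docs/adr/0007-storage.md), "
              "[readme](README.md), and again [readme](README.md).")
    print(flag_references(sample))
    # {'trivial': ['README.md'], 'duplicates': ['README.md']}
```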
---
## 4. Contextual References for Complex Items
- Use **centralized references** for multi-stage features (e.g., architectural docs, research threads).
- Keep inline code comments light; push broader context into centralized documents.
- Model agents may auto-summarize complex chains of discussion and attach them as a single reference point.
---
## 5. Centralization of Broader Context
- Store overarching context (design docs, proposals, workflows) in accessible, well-indexed places.
- Model agents should assist by **generating reference maps** that track where docs are cited across the codebase.
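
The sketch below shows one possible shape for such a reference map, assuming docs live under a `docs/` directory and are cited via Markdown links; the scanned file extensions and the output format are illustrative assumptions, not a mandated format.

```python
from collections import defaultdict
from pathlib import Path
import re

# Assumed convention: centralized docs live under docs/ and are cited as Markdown links.
DOC_LINK = re.compile(r"\((docs/[^)]+\.md)\)")

def build_reference_map(repo_root: str = ".") -> dict[str, list[str]]:
    """Map each doc under docs/ to the source and plan files that cite it."""
    ref_map: dict[str, list[str]] = defaultdict(list)
    for path in Path(repo_root).rglob("*"):
        if not path.is_file() or path.suffix not in {".md", ".py", ".ts", ".go"}:
            continue
        text = path.read_text(encoding="utf-8", errors="ignore")
        for doc in set(DOC_LINK.findall(text)):
            ref_map[doc].append(str(path))
    return dict(ref_map)

if __name__ == "__main__":
    for doc, citing in sorted(build_reference_map().items()):
        print(f"{doc} is cited by {len(citing)} file(s): {', '.join(citing)}")
```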
---
## 6. Focused Documentation
- Documentation should explain **why** and **how** decisions are made, not just what was changed.
- Model agents can auto-generate first-pass explanations from commit metadata, diffs, and linked issues—but humans must refine them for accuracy and intent.
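
As a sketch of what such a first pass might look like, the snippet below drafts a why/how stub from commit metadata using the standard `git` CLI; the output template and its section headings are hypothetical and exist only to show the idea.

```python
import subprocess

def draft_explanation(rev: str = "HEAD") -> str:
    """Draft a first-pass 'why/how' note from commit metadata and diff stats.

    Humans must still edit the result for accuracy and intent.
    """
    # Subject line and body of the commit message.
    subject, body = subprocess.run(
        ["git", "log", "-1", "--format=%s%n%b", rev],
        capture_output=True, text=True, check=True,
    ).stdout.split("\n", 1)
    # Diffstat only; --format= suppresses the commit message.
    stat = subprocess.run(
        ["git", "show", "--stat", "--format=", rev],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return (
        f"## Why\n{subject}\n\n"
        f"## How\n{body.strip() or 'TODO: describe the approach.'}\n\n"
        f"## Touched files\n{stat}\n"
    )

if __name__ == "__main__":
    print(draft_explanation())
```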
---
## 7. Review and Accountability
- Reviewers and team leads must reject submissions containing unnecessary or low-quality documentation.
- Model agent outputs are aids, not replacements—contributors remain responsible for **final clarity and relevance**.
---
## 8. Continuous Improvement and Agent Feedback Loops
- Encourage iterative development of model agents so their evaluations become more precise over time.
- Contributions should include **feedback on agent suggestions** (e.g., accepted, rejected, or corrected) so that future suggestions improve.
- Agents should log patterns of “rejected” suggestions for refinement.
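
One possible sketch of such a feedback log is shown below; the field names, the `agent_feedback.jsonl` file, and the JSON Lines format are assumptions, chosen only so that rejection patterns can be mined later.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class SuggestionFeedback:
    """One contributor decision on one agent suggestion."""
    suggestion_id: str
    kind: str        # e.g. "redundancy", "placement", "context-link"
    decision: str    # "accepted" | "rejected" | "corrected"
    note: str = ""   # optional comment explaining the decision

def log_feedback(entry: SuggestionFeedback, path: str = "agent_feedback.jsonl") -> None:
    """Append the decision as one JSON line for later pattern analysis."""
    record = {"timestamp": datetime.now(timezone.utc).isoformat(), **asdict(entry)}
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    log_feedback(SuggestionFeedback(
        suggestion_id="ref-042", kind="redundancy",
        decision="rejected", note="Link is needed for the external policy audit.",
    ))
```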
---
## 9. Workflow Overview (Mermaid Diagram)
```mermaid
flowchart TD
A[Contributor] -->|Writes Code & Draft Docs| B[Model Agent]
B -->|Evaluates References| C{Relevant?}
C -->|Yes| D[Suggest Placement & Context Links]
C -->|No| E[Flag Redundancy / Noise]
D --> F[Contributor Refines Docs]
E --> F
F --> G[Reviewer]
G -->|Approves / Requests Revisions| H[Final Documentation]
G -->|Feedback on Agent Suggestions| B
```
---
✅ **Outcome:** By integrating disciplined contributor standards with **model agent augmentation**, the team achieves documentation that is consistently _relevant, concise, centralized, and decision-focused_. Model agents provide coverage and noise reduction, while humans provide precision and judgment.