
feat(rules): add software development ruleset and enhance research diagnostic

- Create software_development.mdc with evidence-first development principles
- Add code path tracing requirements to research_diagnostic.mdc
- Update base_context.mdc to reference new specialized rulesets
- Create ADR and investigation report templates
- Improve markdown formatting across all ruleset files

Enhances development workflow with specialized guidance for:
- Code review standards and evidence validation
- Problem-solution validation and complexity assessment
- Integration between generic and technical rulesets
pull/165/head^2 · Matthew Raymer · commit 38f3105946
Changed files:

1. .cursor/rules/adr_template.mdc (63 lines changed)
2. .cursor/rules/base_context.mdc (30 lines changed)
3. .cursor/rules/investigation_report_example.mdc (76 lines changed)
4. .cursor/rules/research_diagnostic.mdc (39 lines changed)
5. .cursor/rules/software_development.mdc (178 lines changed)

.cursor/rules/adr_template.mdc (63 lines changed)

@@ -0,0 +1,63 @@
---
description:
globs:
alwaysApply: false
---
# ADR Template
## ADR-XXXX-YY-ZZ: [Short Title]
**Date:** YYYY-MM-DD
**Status:** [PROPOSED | ACCEPTED | REJECTED | DEPRECATED | SUPERSEDED]
**Deciders:** [List of decision makers]
**Technical Story:** [Link to issue/PR if applicable]
## Context
[Describe the forces at play: technological, political, social, and project-local. These forces are probably in tension, and should be called out as such. The language in this section is value-neutral; it simply describes facts.]
## Decision
[Describe our response to these forces, stated in full sentences with active voice ("We will ...").]
## Consequences
### Positive
- [List positive consequences]
### Negative
- [List negative consequences or trade-offs]
### Neutral
- [List neutral consequences or notes]
## Alternatives Considered
- **Alternative 1:** [Description] - [Why rejected]
- **Alternative 2:** [Description] - [Why rejected]
- **Alternative 3:** [Description] - [Why rejected]
## Implementation Notes
[Any specific implementation details, migration steps, or technical considerations]
## References
- [Link to relevant documentation]
- [Link to related ADRs]
- [Link to external resources]
## Related Decisions
- [List related ADRs or decisions]
---
## Usage Guidelines
1. **Copy this template** for new ADRs
2. **Number sequentially** (ADR-001, ADR-002, etc.)
3. **Use descriptive titles** that clearly indicate the decision
4. **Include all stakeholders** in the deciders list
5. **Link to related issues** and documentation
6. **Update status** as decisions evolve
7. **Store in** `doc/architecture-decisions/` directory

.cursor/rules/base_context.mdc (30 lines changed)

@@ -1,6 +1,3 @@
- ---
- alwaysApply: true
- ---
```json
{
"coaching_level": "standard",
@@ -14,7 +11,7 @@ alwaysApply: true
# Base Context — Human Competence First
## Purpose
- All interactions must *increase the humans competence over time* while
+ All interactions must *increase the human's competence over time* while
completing the task efficiently. The model may handle menial work and memory
extension, but must also promote learning, autonomy, and healthy work habits.
The model should also **encourage human interaction and collaboration** rather
@@ -35,8 +32,8 @@ machine-driven steps.
No censorship!
7) Reusability: structure outputs so they can be saved, searched, reused, and repurposed.
8) **Collaborative Bias**: Favor solutions that invite human review, discussion,
- and iteration. When in doubt, ask “Who should this be shown to?” or “Which human
- input would improve this?
+ and iteration. When in doubt, ask "Who should this be shown to?" or "Which human
+ input would improve this?"
## Toggle Definitions
@@ -67,7 +64,7 @@ If `null`, there is no timebox — the model can produce full-depth responses.
## Modes (select or combine)
- **Doer**: produce the artifact fast, minimal commentary.
- - **Mentor**: add short “why/how” notes + next-step pointers.
+ - **Mentor**: add short "why/how" notes + next-step pointers.
- **Socratic**: ask up to N targeted questions when requirements are ambiguous.
- **Pair-Programmer/Pair-Writer**: explain tradeoffs as you implement.
- **Facilitator**: structure output to be reviewable, commentable, and ready for group discussion.
@@ -75,11 +72,11 @@ If `null`, there is no timebox — the model can produce full-depth responses.
Default: Doer + short Mentor notes.
## Competence & Collaboration Levers (keep lightweight)
- - “Why this works” (≤3 bullets)
- - “Common pitfalls” (≤3 bullets)
- - “Next skill unlock” (1 tiny action or reading)
- - “Teach-back” (1 sentence prompt the human can answer to self-check)
- - “Discussion prompts” (≤2 short questions for peers/stakeholders)
+ - "Why this works" (≤3 bullets)
+ - "Common pitfalls" (≤3 bullets)
+ - "Next skill unlock" (1 tiny action or reading)
+ - "Teach-back" (1 sentence prompt the human can answer to self-check)
+ - "Discussion prompts" (≤2 short questions for peers/stakeholders)
## Output Contract (apply to every deliverable)
- Clear **Objective** (1 line)
@@ -92,11 +89,16 @@ Default: Doer + short Mentor notes.
## Do-Not
- No filler, hedging, or moralizing.
- - No medical/mental-health advice; keep “healthy habits” to general work practices.
+ - No medical/mental-health advice; keep "healthy habits" to general work practices.
- No invented facts; mark uncertainty plainly.
- No censorship.
- Avoid outputs that bypass human review when such review is valuable.
## Related Rulesets
- **software_development.mdc**: For software-specific development practices
- **research_diagnostic.mdc**: For investigation and research workflows
## Self-Check (model, before responding)
- [ ] Task done *and* at least one competence lever included (≤120 words total).
- [ ] At least one collaboration/discussion hook present.
@@ -104,3 +106,5 @@ Default: Doer + short Mentor notes.
- [ ] Toggles respected; verbosity remains concise.
- [ ] Uncertainties/assumptions surfaced.
- [ ] No disallowed content.

.cursor/rules/investigation_report_example.mdc (76 lines changed)

@@ -0,0 +1,76 @@
---
description:
globs:
alwaysApply: false
---
# Investigation Report Example
## Investigation — Registration Dialog Test Flakiness
## Objective
Identify root cause of flaky tests related to registration dialogs in contact import scenarios.
## System Map
- User action → ContactInputForm → ContactsView.addContact() → handleRegistrationPrompt()
- setTimeout(1000ms) → Modal dialog → User response → Registration API call
- Test execution → Wait for dialog → Assert dialog content → Click response button
## Findings (Evidence)
- **1-second timeout causes flakiness** — evidence: `src/views/ContactsView.vue:971-1000`; setTimeout(..., 1000) in handleRegistrationPrompt()
- **Import flow bypasses dialogs** — evidence: `src/views/ContactImportView.vue:500-520`; importContacts() calls $insertContact() directly, no handleRegistrationPrompt()
- **Dialog only appears in direct add flow** — evidence: `src/views/ContactsView.vue:774-800`; addContact() calls handleRegistrationPrompt() after database insert
## Hypotheses & Failure Modes
- H1: 1-second timeout makes dialog appearance unpredictable; would fail when tests run faster than 1000ms
- H2: Test environment timing differs from development; watch for CI vs local test differences
## Corrections
- Updated: "Multiple dialogs interfere with imports" → "Import flow never triggers dialogs - they only appear in direct contact addition"
- Updated: "Complex batch registration needed" → "Simple timeout removal and test mode flag sufficient"
## Diagnostics (Next Checks)
- [ ] Repro on CI environment vs local
- [ ] Measure actual dialog appearance timing
- [ ] Test with setTimeout removed
- [ ] Verify import flow doesn't call handleRegistrationPrompt
## Risks & Scope
- Impacted: Contact addition tests, registration workflow tests; Data: None; Users: Test suite reliability
## Decision / Next Steps
- Owner: Development Team; By: 2025-01-28
- Action: Remove 1-second timeout + add test mode flag; Exit criteria: Tests pass consistently
## References
- `src/views/ContactsView.vue:971-1000`
- `src/views/ContactImportView.vue:500-520`
- `src/views/ContactsView.vue:774-800`
## Competence Hooks
- Why this works: Code path tracing revealed separate execution flows, evidence disproved initial assumptions
- Common pitfalls: Assuming related functionality without tracing execution paths, over-engineering solutions to imaginary problems
- Next skill: Learn to trace code execution before proposing architectural changes
- Teach-back: "What evidence shows that contact imports bypass registration dialogs?"
---
## Key Learning Points
### Evidence-First Approach
This investigation demonstrates the importance of:
1. **Tracing actual code execution** rather than making assumptions
2. **Citing specific evidence** with file:line references
3. **Validating problem scope** before proposing solutions
4. **Considering simpler alternatives** before complex architectural changes
### Code Path Tracing Value
By tracing the execution paths, we discovered:
- Import flow and direct add flow are completely separate
- The "multiple dialog interference" problem didn't exist
- A simple timeout removal would solve the actual issue (sketched below)
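
To ground the fix, here is a minimal sketch of the change described above, assuming a `handleRegistrationPrompt`-style method; the helper names and signatures are illustrative, not taken from the actual codebase.

```typescript
interface Contact {
  name: string;
}

// Illustrative stand-in for the real modal; the name is hypothetical.
function showRegistrationDialog(contact: Contact): void {
  console.log(`registration dialog for ${contact.name}`);
}

// Before: a fixed 1000ms delay means tests racing the timer see the
// dialog appear unpredictably (hypothesis H1 in the report).
function handleRegistrationPromptBefore(contact: Contact): void {
  setTimeout(() => showRegistrationDialog(contact), 1000);
}

// After: no artificial delay, plus a test-mode flag so the suite can
// opt out of the dialog entirely and stay deterministic.
function handleRegistrationPromptAfter(contact: Contact, testMode = false): void {
  if (testMode) return;            // skip the dialog under test
  showRegistrationDialog(contact); // appears immediately otherwise
}
```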
### Prevention of Over-Engineering
The investigation prevented:
- Unnecessary database schema changes
- Complex batch registration systems
- Migration scripts for non-existent problems
- Architectural changes based on assumptions

.cursor/rules/research_diagnostic.mdc (39 lines changed)

@@ -28,6 +28,15 @@ steps—**not** code changes.
---
## Enhanced with Software Development Ruleset
When investigating software issues, also apply:
- **Code Path Tracing**: Required for technical investigations
- **Evidence Validation**: Ensure claims are code-backed
- **Solution Complexity Assessment**: Justify architectural changes
---
## Output Contract (strict)
1) **Objective** — 1–2 lines
@@ -37,7 +46,7 @@
5) **Corrections** — explicit deltas from earlier assumptions (if any)
6) **Diagnostics** — what to check next (logs, DB, env, repro steps)
7) **Risks & Scope** — what could break; affected components
- 8) **Decision/Next Steps** — what we’ll do, who’s involved, by when
+ 8) **Decision/Next Steps** — what we'll do, who's involved, by when
9) **References** — code paths, ADRs, docs
10) **Competence & Collaboration Hooks** — brief, skimmable
@@ -105,6 +114,17 @@ Copy/paste and fill:
---
## Code Path Tracing (Required for Software Investigations)
Before proposing solutions, trace the actual execution path (a short annotated sketch follows the checklist):
- [ ] **Entry Points**: Identify where the flow begins (user action, API call, etc.)
- [ ] **Component Flow**: Map which components/methods are involved
- [ ] **Data Path**: Track how data moves through the system
- [ ] **Exit Points**: Confirm where the flow ends and what results
- [ ] **Evidence Collection**: Gather specific code citations for each step
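
Below is a minimal sketch of how the checklist maps onto a traced flow; all function names here are hypothetical, and the comments mark where real file:line citations would go.

```typescript
interface Contact {
  name: string;
}

// Exit point: where the flow ends and what results.
// Evidence: cite the dialog call site, e.g. `ContactsView.vue:971` (illustrative).
function promptForRegistration(contact: Contact): void {
  console.log(`prompting registration for ${contact.name}`);
}

// Component flow and data path: the contact is persisted, then handed off.
// Evidence: cite the insert call and the hand-off line separately.
function addContact(contact: Contact): void {
  console.log(`inserted ${contact.name}`); // data path: DB insert
  promptForRegistration(contact);          // hand-off to the exit point
}

// Entry point: the user action that starts the flow.
addContact({ name: "Ada" });
```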
---
## Collaboration Hooks
- **Syncs:** 10–15m with QA/Security/Platform owners for high-risk areas.
@@ -113,6 +133,19 @@ Copy/paste and fill:
---
## Integration with Other Rulesets
### With software_development.mdc
- **Enhanced Evidence Validation**: Use code path tracing for technical investigations
- **Architecture Assessment**: Apply complexity justification to proposed solutions
- **Impact Analysis**: Assess effects on existing systems before recommendations
### With base_context.mdc
- **Competence Building**: Focus on technical investigation skills
- **Collaboration**: Structure outputs for team review and discussion
---
## Self-Check (model, before responding)
- [ ] Output matches the **Output Contract** sections.
@@ -120,6 +153,8 @@ Copy/paste and fill:
- [ ] Hypotheses are testable; diagnostics are actionable.
- [ ] Competence + collaboration hooks present (≤120 words total).
- [ ] Respect toggles; keep it concise.
- [ ] **Code path traced** (for software investigations).
- [ ] **Evidence validated** against actual code execution.
---
@@ -132,4 +167,4 @@ Copy/paste and fill:
## Referenced Files
- - Consider including templates as context: `@adr_template.md`, `@investigation_report_example.md`
+ - Consider including templates as context: `@adr_template.mdc`, `@investigation_report_example.mdc`

.cursor/rules/software_development.mdc (178 lines changed)

@@ -0,0 +1,178 @@
---
alwaysApply: true
---
# Software Development Ruleset
## Purpose
Specialized guidelines for software development tasks including code review, debugging, architecture decisions, and testing.
## Core Principles
### 1. Evidence-First Development
- **Code Citations Required**: Always cite specific file:line references when making claims
- **Execution Path Tracing**: Trace actual code execution before proposing architectural changes
- **Assumption Validation**: Flag assumptions as "assumed" vs "evidence-based" (a brief sketch follows this list)
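
A minimal sketch of one way to keep that labeling explicit in a findings list; the record shape is illustrative, and the first citation echoes the investigation report example shipped alongside this ruleset.

```typescript
type Basis = "evidence-based" | "assumed";

interface Finding {
  claim: string;
  basis: Basis;
  citation: string | null; // file:line when evidence-based
}

const findings: Finding[] = [
  {
    claim: "import flow bypasses registration dialogs",
    basis: "evidence-based",
    citation: "src/views/ContactImportView.vue:500-520",
  },
  {
    claim: "CI runs tests faster than local",
    basis: "assumed",
    citation: null, // no measurement yet; queue a diagnostic instead
  },
];

// Printing the label next to each claim keeps reviewers from mistaking
// inference for proof.
for (const f of findings) {
  console.log(`[${f.basis}] ${f.claim} (${f.citation ?? "no citation"})`);
}
```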
### 2. Code Review Standards
- **Trace Before Proposing**: Always trace execution paths before suggesting changes
- **Evidence Over Inference**: Prefer code citations over logical deductions
- **Scope Validation**: Confirm the actual scope of problems before proposing solutions
### 3. Problem-Solution Validation
- **Problem Scope**: Does the solution address the actual problem?
- **Evidence Alignment**: Does the solution match the evidence?
- **Complexity Justification**: Is added complexity justified by real needs?
- **Alternative Analysis**: What simpler solutions were considered?
## Required Workflows
### Before Proposing Changes
- [ ] **Code Path Tracing**: Map execution flow from entry to exit
- [ ] **Evidence Collection**: Gather specific code citations and logs
- [ ] **Assumption Surfacing**: Identify what's proven vs. inferred
- [ ] **Scope Validation**: Confirm the actual extent of the problem
### During Solution Design
- [ ] **Evidence Alignment**: Ensure solution addresses proven problems
- [ ] **Complexity Assessment**: Justify any added complexity
- [ ] **Alternative Evaluation**: Consider simpler approaches first
- [ ] **Impact Analysis**: Assess effects on existing systems
## Software-Specific Competence Hooks
### Evidence Validation
- **"What code path proves this claim?"**
- **"How does data actually flow through the system?"**
- **"What am I assuming vs. what can I prove?"**
### Code Tracing
- **"What's the execution path from user action to system response?"**
- **"Which components actually interact in this scenario?"**
- **"Where does the data originate and where does it end up?"**
### Architecture Decisions
- **"What evidence shows this change is necessary?"**
- **"What simpler solution could achieve the same goal?"**
- **"How does this change affect the existing system architecture?"**
## Integration with Other Rulesets
### With base_context.mdc
- Inherits generic competence principles
- Adds software-specific evidence requirements
- Maintains collaboration and learning focus
### With research_diagnostic.mdc
- Enhances investigation with code path tracing
- Adds evidence validation to diagnostic workflow
- Strengthens problem identification accuracy
## Usage Guidelines
### When to Use This Ruleset
- Code reviews and architectural decisions
- Bug investigation and debugging
- Performance optimization
- Feature implementation planning
- Testing strategy development
### When to Combine with Others
- **base_context + software_development**: General development tasks
- **research_diagnostic + software_development**: Technical investigations
- **All three**: Complex architectural decisions or major refactoring
## Self-Check (model, before responding)
- [ ] Code path traced and documented
- [ ] Evidence cited with specific file:line references
- [ ] Assumptions clearly flagged as proven vs. inferred
- [ ] Solution complexity justified by evidence
- [ ] Simpler alternatives considered and documented
- [ ] Impact on existing systems assessed