refactor(cursor-rules): restructure rules architecture with meta-rule system

- Reorganize cursor rules into logical domain-based directories
- Implement meta-rule system for workflow-specific rule bundling
- Move core rules to dedicated /core directory
- Consolidate development rules under /development namespace
- Add architectural patterns and implementation examples
- Create workflow-specific meta-rules for common development tasks
- Remove deprecated standalone rule files
- Update package dependencies for new rule structure

BREAKING CHANGE: Cursor rules file structure has been reorganized.
Files moved from the root rules directory to domain-specific subdirectories.
Author: Matthew Raymer
Date: 2025-08-23 13:04:09 +00:00
Parent: b9ca59a718
Commit: a224aced85
61 changed files with 8944 additions and 2847 deletions


@@ -0,0 +1,105 @@
---
description: when doing anything with capacitor assets
alwaysApply: false
---
# Asset Configuration Directive
**Author**: Matthew Raymer
**Date**: 2025-08-19
**Status**: 🎯 **ACTIVE** - Asset management guidelines
*Scope: Assets Only (icons, splashes, image pipelines) — not overall build
orchestration*
## Intent
- Version **asset configuration files** (optionally dev-time generated).
- **Do not** version platform asset outputs (Android/iOS/Electron); generate
them **at build-time** with standard tools.
- Keep existing per-platform build scripts unchanged.
## Source of Truth
- **Preferred (Capacitor default):** `resources/` as the single master source.
- **Alternative:** `assets/` is acceptable **only** if `capacitor-assets` is
explicitly configured to read from it.
- **Never** maintain both `resources/` and `assets/` as parallel sources.
Migrate and delete the redundant folder.
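A small dev-time guard can enforce this rule by failing fast when both folders exist. A minimal sketch, assuming a hypothetical script path (not an existing project file):

```typescript
// scripts/dev/check-asset-source.ts (hypothetical path): fail fast when
// both candidate asset sources exist in the repo root.
import { existsSync } from 'node:fs';

if (existsSync('resources') && existsSync('assets')) {
  process.stderr.write(
    'Both resources/ and assets/ exist: migrate to one source and delete the other.\n',
  );
  process.exit(1);
}
```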
## Config Files
- Live under: `config/assets/` (committed).
- Examples:
- `config/assets/capacitor-assets.config.json` (or the path the tool
expects)
- `config/assets/android.assets.json`
- `config/assets/ios.assets.json`
- `config/assets/common.assets.yaml` (optional shared layer)
- **Dev-time generation allowed** for these configs; **build-time
generation is forbidden**.
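Since dev-time generation is allowed, a config can be emitted by a small script that a developer runs and commits. A minimal sketch; the field names below are illustrative assumptions, not the capacitor-assets schema:

```typescript
// Dev-time config generation sketch: run manually, commit the output,
// never invoke at build time. Field names are illustrative only.
import { writeFileSync } from 'node:fs';

const shared = {
  iconSource: 'resources/icon.png',
  splashSource: 'resources/splash.png',
};

for (const platform of ['android', 'ios'] as const) {
  const path = `config/assets/${platform}.assets.json`;
  writeFileSync(path, JSON.stringify({ platform, ...shared }, null, 2) + '\n');
}
```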
## Build-Time Behavior
- Build generates platform assets (not configs) using the standard chain:
```bash
npm run build:capacitor # web build via Vite (.mts)
npx cap sync
npx capacitor-assets generate # produces platform assets; not committed
# then platform-specific build steps
```
---
**Status**: Active asset management directive
**Priority**: Medium
**Estimated Effort**: Ongoing reference
**Dependencies**: capacitor-assets toolchain
**Stakeholders**: Development team, Build team
## Model Implementation Checklist
### Before Asset Configuration
- [ ] **Source Review**: Identify current asset source location (`resources/` or
`assets/`)
- [ ] **Tool Assessment**: Verify capacitor-assets toolchain is available
- [ ] **Config Planning**: Plan configuration file structure and location
- [ ] **Platform Analysis**: Understand asset requirements for all target platforms
### During Asset Configuration
- [ ] **Source Consolidation**: Ensure single source of truth (prefer `resources/`)
- [ ] **Config Creation**: Create platform-specific asset configuration files
- [ ] **Tool Integration**: Configure capacitor-assets to read from correct source
- [ ] **Build Integration**: Integrate asset generation into build pipeline
### After Asset Configuration
- [ ] **Build Testing**: Verify assets generate correctly at build time
- [ ] **Platform Validation**: Test asset generation across all platforms
- [ ] **Documentation**: Update build documentation with asset generation steps
- [ ] **Team Communication**: Communicate asset workflow changes to team


@@ -0,0 +1,177 @@
# Complexity Assessment — Evaluation Frameworks
> **Agent role**: Reference this file for
complexity evaluation frameworks when assessing project complexity.
## 📊 Complexity Assessment Framework
### **Technical Complexity Factors**
#### **Code Changes**
- **Simple**: Text, styling, configuration updates
- **Medium**: New components, refactoring existing code
- **Complex**: Architecture changes, new patterns, integrations
- **Unknown**: New technologies, APIs, or approaches
#### **Platform Impact**
- **Single platform**: Web-only or mobile-only changes
- **Two platforms**: Web + mobile or web + desktop
- **Three platforms**: Web + mobile + desktop
- **Cross-platform consistency**: Ensuring behavior matches across all platforms
#### **Testing Requirements**
- **Basic**: Unit tests for new functionality
- **Comprehensive**: Integration tests, cross-platform testing
- **User acceptance**: User testing, feedback integration
- **Performance**: Load testing, optimization validation
### **Dependency Complexity**
#### **Internal Dependencies**
- **Low**: Self-contained changes, no other components affected
- **Medium**: Changes affect related components or services
- **High**: Changes affect core architecture or multiple systems
- **Critical**: Changes affect data models or core business logic
#### **External Dependencies**
- **None**: No external services or APIs involved
- **Low**: Simple API calls or service integrations
- **Medium**: Complex integrations with external systems
- **High**: Third-party platform dependencies or complex APIs
#### **Infrastructure Dependencies**
- **None**: No infrastructure changes required
- **Low**: Configuration updates or environment changes
- **Medium**: New services or infrastructure components
- **High**: Platform migrations or major infrastructure changes
## 🔍 Complexity Evaluation Process
### **Step 1: Technical Assessment**
1. **Identify scope of changes** - what files/components are affected
2. **Assess platform impact** - which platforms need updates
3. **Evaluate testing needs** - what testing is required
4. **Consider performance impact** - will this affect performance
### **Step 2: Dependency Mapping**
1. **Map internal dependencies** - what other components are affected
2. **Identify external dependencies** - what external services are involved
3. **Assess infrastructure needs** - what infrastructure changes are required
4. **Evaluate risk factors** - what could go wrong
### **Step 3: Complexity Classification**
1. **Assign complexity levels** to each factor
2. **Identify highest complexity** areas that need attention
3. **Plan mitigation strategies** for high-complexity areas
4. **Set realistic expectations** based on complexity assessment
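To make Step 3 concrete, the classification can be represented as data so the highest-complexity factors surface automatically. A minimal sketch; the levels mirror the dependency scale above, and the ranking is an assumption:

```typescript
// Complexity classification sketch: rank each factor, then surface the
// highest-complexity areas that need mitigation planning.
type Level = 'Low' | 'Medium' | 'High' | 'Critical';

const RANK: Record<Level, number> = { Low: 0, Medium: 1, High: 2, Critical: 3 };

interface FactorAssessment {
  factor: string; // e.g. 'Platform Impact', 'External Dependencies'
  level: Level;
}

function highestComplexity(assessments: FactorAssessment[]): FactorAssessment[] {
  const max = Math.max(...assessments.map((a) => RANK[a.level]));
  return assessments.filter((a) => RANK[a.level] === max);
}
```

Mitigation planning can then target whatever `highestComplexity()` returns instead of treating every factor equally.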
## 📋 Complexity Assessment Checklist
- [ ] Technical scope identified and mapped
- [ ] Platform impact assessed across all targets
- [ ] Testing requirements defined and planned
- [ ] Internal dependencies mapped and evaluated
- [ ] External dependencies identified and assessed
- [ ] Infrastructure requirements evaluated
- [ ] Risk factors identified and mitigation planned
- [ ] Complexity levels assigned to all factors
- [ ] Realistic expectations set based on assessment
## 🎯 Complexity Reduction Strategies
### **Scope Reduction**
- Break large features into smaller, manageable pieces
- Focus on core functionality first, add polish later
- Consider phased rollout to reduce initial complexity
### **Dependency Management**
- Minimize external dependencies when possible
- Use abstraction layers to isolate complex integrations
- Plan for dependency failures and fallbacks
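As a sketch of the abstraction-layer strategy above (the service name, endpoint, and cache policy are illustrative assumptions):

```typescript
// Isolate an external integration behind an interface, and degrade to the
// last cached value when the dependency fails.
interface GeocodingService {
  lookup(query: string): Promise<{ lat: number; lon: number } | null>;
}

class ExternalGeocoder implements GeocodingService {
  async lookup(query: string) {
    const res = await fetch(
      `https://api.example.com/geocode?q=${encodeURIComponent(query)}`,
    );
    if (!res.ok) return null; // recoverable: caller falls back
    return (await res.json()) as { lat: number; lon: number };
  }
}

class FallbackGeocoder implements GeocodingService {
  private cache = new Map<string, { lat: number; lon: number }>();
  constructor(private primary: GeocodingService) {}

  async lookup(query: string) {
    const hit = await this.primary.lookup(query);
    if (hit) {
      this.cache.set(query, hit);
      return hit;
    }
    return this.cache.get(query) ?? null; // last known value, if any
  }
}
```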
### **Testing Strategy**
- Start with basic testing and expand coverage
- Use automated testing to reduce manual testing complexity
- Plan for iterative testing and feedback cycles
## **See also**
- `.cursor/rules/development/realistic_time_estimation.mdc` for the core principles
- `.cursor/rules/development/planning_examples.mdc` for planning examples
## Model Implementation Checklist
### Before Complexity Assessment
- [ ] **Problem Scope**: Clearly define the problem to be assessed
- [ ] **Stakeholder Identification**: Identify all parties affected by complexity
- [ ] **Context Analysis**: Understand technical and business context
- [ ] **Assessment Criteria**: Define what factors determine complexity
### During Complexity Assessment
- [ ] **Technical Mapping**: Map technical scope and platform impact
- [ ] **Dependency Analysis**: Identify internal and external dependencies
- [ ] **Risk Evaluation**: Assess infrastructure needs and risk factors
- [ ] **Complexity Classification**: Assign complexity levels to all factors
### After Complexity Assessment
- [ ] **Mitigation Planning**: Plan strategies for high-complexity areas
- [ ] **Expectation Setting**: Set realistic expectations based on assessment
- [ ] **Documentation**: Document assessment process and findings
- [ ] **Stakeholder Communication**: Share results and recommendations


@@ -0,0 +1,177 @@
# Dependency Management — Best Practices
> **Agent role**: Reference this file for dependency management strategies and
best practices when working with software projects.
## Dependency Management Best Practices
### Pre-build Validation
- **Check Critical Dependencies**:
Validate essential tools before executing build
scripts
- **Use npx for Local Dependencies**: Prefer `npx tsx` over direct `tsx` to
avoid PATH issues
- **Environment Consistency**: Ensure all team members have identical dependency
versions
### Common Pitfalls
- **Missing npm install**: Team members cloning without running `npm install`
- **PATH Issues**: Direct command execution vs. npm script execution differences
- **Version Mismatches**: Different Node.js/npm versions across team members
### Validation Strategies
- **Dependency Check Scripts**: Implement pre-build validation for critical
dependencies
- **Environment Requirements**:
Document and enforce minimum Node.js/npm versions
- **Onboarding Checklist**: Standardize team member setup procedures
### Error Messages and Guidance
- **Specific Error Context**:
Provide clear guidance when dependency issues occur
- **Actionable Solutions**: Direct users to specific commands (`npm install`,
`npm run check:dependencies`)
- **Environment Diagnostics**: Implement comprehensive environment validation
tools
### Build Script Enhancements
- **Early Validation**: Check dependencies before starting build processes
- **Graceful Degradation**: Continue builds when possible but warn about issues
- **Helpful Tips**: Remind users about dependency management best practices
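A pre-build validation script along these lines can implement the early-validation behavior described above. A minimal sketch; the tool list and script location are assumptions:

```typescript
// scripts/check-dependencies.ts (hypothetical path): verify critical tools
// before any build step runs, with actionable guidance on failure.
import { execSync } from 'node:child_process';

const REQUIRED_COMMANDS = ['node --version', 'npm --version', 'npx tsx --version'];

let allPresent = true;
for (const cmd of REQUIRED_COMMANDS) {
  try {
    execSync(cmd, { stdio: 'ignore' });
  } catch {
    process.stderr.write(
      `Missing or broken dependency: "${cmd}". Try "npm install" first.\n`,
    );
    allPresent = false;
  }
}
if (!allPresent) process.exit(1);
```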
## Environment Setup Guidelines
### Required Tools
- **Node.js**: Minimum version requirements and LTS recommendations
- **npm**: Version compatibility and global package management
- **Platform-specific tools**: Android SDK, Xcode, etc.
### Environment Variables
- **NODE_ENV**: Development, testing, production environments
- **PATH**: Ensure tools are accessible from command line
- **Platform-specific**: Android SDK paths, Xcode command line tools
### Validation Commands
```bash
# Check Node.js version
node --version
# Check npm version
npm --version
# Check global packages
npm list -g --depth=0
# Validate platform tools
npx capacitor doctor
```
## Dependency Troubleshooting
### Common Issues
1. **Permission Errors**: Use `sudo` sparingly, prefer `npm config set prefix`
2. **Version Conflicts**: Use `npm ls` to identify dependency conflicts
3. **Cache Issues**: Clear npm cache with `npm cache clean --force`
4. **Lock File Issues**: Delete `package-lock.json` and `node_modules`,
then reinstall
### Resolution Strategies
- **Dependency Audit**: Run `npm audit` to identify security issues
- **Version Pinning**: Use exact versions for critical dependencies
- **Peer Dependency Management**: Ensure compatible versions across packages
- **Platform-specific Dependencies**: Handle different requirements per platform
## Best Practices for Teams
### Onboarding
- **Environment Setup Script**: Automated setup for new team members
- **Version Locking**: Use `package-lock.json` and `yarn.lock` consistently
- **Documentation**: Clear setup instructions with troubleshooting steps
### Maintenance
- **Regular Updates**: Schedule dependency updates and security patches
- **Testing**: Validate changes don't break existing functionality
- **Rollback Plan**: Maintain ability to revert to previous working versions
**See also**:
`.cursor/rules/development/software_development.mdc` for core development principles.
**Status**: Active dependency management guidelines
**Priority**: Medium
**Estimated Effort**: Ongoing reference
**Dependencies**: software_development.mdc
**Stakeholders**: Development team, DevOps team
## Model Implementation Checklist
### Before Dependency Changes
- [ ] **Current State Review**: Check current dependency versions and status
- [ ] **Impact Analysis**: Assess impact of dependency changes on codebase
- [ ] **Compatibility Check**: Verify compatibility with existing code
- [ ] **Security Review**: Review security implications of dependency changes
### During Dependency Management
- [ ] **Version Selection**: Choose appropriate dependency versions
- [ ] **Testing**: Test with new dependency versions
- [ ] **Documentation**: Update dependency documentation
- [ ] **Team Communication**: Communicate changes to team members
### After Dependency Changes
- [ ] **Comprehensive Testing**: Test all functionality with new dependencies
- [ ] **Documentation Update**: Update all relevant documentation
- [ ] **Deployment Planning**: Plan and execute deployment strategy
- [ ] **Monitoring**: Monitor for issues after deployment


@@ -2,8 +2,32 @@
globs: **/src/**/*
alwaysApply: false
---
✅ use system date command to timestamp all interactions with accurate date and
time
✅ python script files must always have a blank line at their end
✅ remove whitespace at the end of lines
✅ use npm run lint-fix to check for warnings
✅ do not use npm run dev let me handle running and supplying feedback
## Model Implementation Checklist
### Before Development Work
- [ ] **System Date Check**: Use system date command for accurate timestamps
- [ ] **Environment Setup**: Verify development environment is ready
- [ ] **Linting Setup**: Ensure npm run lint-fix is available
- [ ] **Code Standards**: Review project coding standards and requirements
### During Development
- [ ] **Timestamp Usage**: Include accurate timestamps in all interactions
- [ ] **Code Quality**: Use npm run lint-fix to check for warnings
- [ ] **File Standards**: Ensure Python files have blank line at end
- [ ] **Whitespace**: Remove trailing whitespace from all lines
### After Development
- [ ] **Linting Check**: Run npm run lint-fix to verify code quality
- [ ] **File Validation**: Confirm Python files end with blank line
- [ ] **Whitespace Review**: Verify no trailing whitespace remains
- [ ] **Documentation**: Update relevant documentation with changes

View File

@@ -0,0 +1,119 @@
# Historical Comment Management — Code Clarity Guidelines
> **Agent role**: When encountering historical comments about removed
> methods, deprecated patterns, or architectural changes, apply these
> guidelines to maintain code clarity and developer guidance.
## Overview
Historical comments should either be **removed entirely** or **transformed
into actionable guidance** for future developers. Avoid keeping comments
that merely state what was removed without explaining why or what to do
instead.
## Decision Framework
### When to Remove Comments
- **Obsolete Information**: Comment describes functionality that no
longer exists
- **Outdated Context**: Comment refers to old patterns that are no
longer relevant
- **No Actionable Value**: Comment doesn't help future developers
make decisions
### When to Transform Comments
- **Migration Guidance**: Future developers might need to understand
the evolution
- **Alternative Approaches**: The comment can guide future
implementation choices
- **Historical Context**: Understanding the change helps with
current decisions
## Transformation Patterns
### 1. **Removed Method** → **Alternative Approach**
```typescript
// Before: Historical comment
// turnOffNotifyingFlags method removed - notification state is now
// managed by NotificationSection component
// After: Actionable guidance
// Note: Notification state management has been migrated to
// NotificationSection component
```
### 2. **Deprecated Pattern** → **Current Best Practice**
```typescript
// Before: Historical comment
// Database access has been migrated from direct IndexedDB calls to
// PlatformServiceMixin
// After: Actionable guidance
// This provides better platform abstraction and consistent error
// handling across web/mobile/desktop
// When adding new database operations, use this.$getContact(),
// this.$saveSettings(), etc.
```
## Best Practices
### 1. **Use Actionable Language**: Guide future decisions, not just
document history
### 2. **Provide Alternatives**: Always suggest what to use instead
### 3. **Update Related Docs**: If removing from code, consider
adding to documentation
### 4. **Keep Context**: Include enough information to understand
why the change was made
## Integration Points
- Apply these rules when reviewing code changes
- Use during code cleanup and refactoring
- Include in code review checklists
---
**See also**:
- `.cursor/rules/development/historical_comment_patterns.mdc` for detailed
transformation examples and patterns
**Status**: Active comment management guidelines
**Priority**: Medium
**Estimated Effort**: Ongoing reference
**Dependencies**: None
**Stakeholders**: Development team, Code reviewers
## Model Implementation Checklist
### Before Comment Review
- [ ] **Code Analysis**: Review code for historical or outdated comments
- [ ] **Context Understanding**: Understand the current state of the codebase
- [ ] **Pattern Identification**: Identify comments that need transformation or removal
- [ ] **Documentation Planning**: Plan where to move important historical context
### During Comment Management
- [ ] **Transformation**: Convert historical comments to actionable guidance
- [ ] **Alternative Provision**: Suggest current best practices and alternatives
- [ ] **Context Preservation**: Maintain enough information to understand changes
- [ ] **Documentation Update**: Move important context to appropriate documentation
### After Comment Management
- [ ] **Code Review**: Ensure transformed comments provide actionable value
- [ ] **Documentation Sync**: Verify related documentation is updated
- [ ] **Team Communication**: Share comment transformation patterns with team
- [ ] **Process Integration**: Include comment management in code review checklists


@@ -0,0 +1,139 @@
# Historical Comment Patterns — Transformation Examples
> **Agent role**: Reference this file for specific patterns and
examples when transforming historical comments into actionable guidance.
## 🔄 Transformation Patterns
### 1. From Removal Notice to Migration Note
```typescript
// ❌ REMOVE THIS
// turnOffNotifyingFlags method removed -
// notification state is now managed by NotificationSection component

// ✅ TRANSFORM TO THIS
// Note: Notification state management has been migrated to the
// NotificationSection component, which handles its own lifecycle and
// persistence via PlatformServiceMixin
```
### 2. From Deprecation Notice to Implementation Guide
```typescript
// ❌ REMOVE THIS
// This will be handled by the NewComponent now
// No need to call oldMethod() as it's no longer needed
// ✅ TRANSFORM TO THIS
// Note: This functionality has been migrated to NewComponent
// which provides better separation of concerns and testability
```
### 3. From Historical Note to Architectural Context
```typescript
// ❌ REMOVE THIS
// Old approach: used direct database calls
// New approach: uses service layer
// ✅ TRANSFORM TO THIS
// Note: Database access has been abstracted through service layer
// for better testability and platform independence
```
## 🚫 Anti-Patterns to Remove
- Comments that only state what was removed
- Comments that don't explain the current approach
- Comments that reference non-existent methods
- Comments that are self-evident from the code
- Comments that don't help future decision-making
## 📚 Examples
### Good Historical Comment (Keep & Transform)
```typescript
// Note: Database access has been migrated from direct IndexedDB calls to
// PlatformServiceMixin. This provides better platform abstraction and
// consistent error handling across web/mobile/desktop.
// When adding new database operations, use this.$getContact(),
// this.$saveSettings(), etc.
```
### Bad Historical Comment (Remove)
```typescript
// Old method getContactFromDB() removed - now handled by PlatformServiceMixin
// No need to call the old method anymore
```
## 🎯 When to Use Each Pattern
### Migration Notes
- Use when functionality has moved to a different component/service
- Explain the new location and why it's better
- Provide guidance on how to use the new approach
### Implementation Guides
- Use when patterns have changed significantly
- Explain the architectural benefits
- Show how to implement the new pattern
### Architectural Context
- Use when the change represents a system-wide improvement
- Explain the reasoning behind the change
- Help future developers understand the evolution
---
**See also**: `.cursor/rules/development/historical_comment_management.mdc` for
the core decision framework and best practices.
## Model Implementation Checklist
### Before Comment Review
- [ ] **Code Analysis**: Review code for historical or outdated comments
- [ ] **Pattern Identification**: Identify comments that need transformation or removal
- [ ] **Context Understanding**: Understand the current state of the codebase
- [ ] **Transformation Planning**: Plan how to transform or remove comments
### During Comment Transformation
- [ ] **Pattern Selection**: Choose appropriate transformation pattern
- [ ] **Content Creation**: Create actionable guidance for future developers
- [ ] **Alternative Provision**: Suggest current best practices and approaches
- [ ] **Context Preservation**: Maintain enough information to understand changes
### After Comment Transformation
- [ ] **Code Review**: Ensure transformed comments provide actionable value
- [ ] **Pattern Documentation**: Document transformation patterns for team use
- [ ] **Team Communication**: Share comment transformation patterns with team
- [ ] **Process Integration**: Include comment patterns in code review checklists


@@ -0,0 +1,178 @@
# Investigation Report Example
**Author**: Matthew Raymer
**Date**: 2025-08-19
**Status**: 🎯 **ACTIVE** - Investigation methodology example
## Investigation — Registration Dialog Test Flakiness
## Objective
Identify root cause of flaky tests related to registration dialogs in contact
import scenarios.
## System Map
- User action → ContactInputForm → ContactsView.addContact() →
handleRegistrationPrompt()
- setTimeout(1000ms) → Modal dialog → User response → Registration API call
- Test execution → Wait for dialog → Assert dialog content → Click response
button
## Findings (Evidence)
- **1-second timeout causes flakiness** — evidence:
`src/views/ContactsView.vue:971-1000`; setTimeout(..., 1000) in
handleRegistrationPrompt()
- **Import flow bypasses dialogs** — evidence:
`src/views/ContactImportView.vue:500-520`; importContacts() calls
$insertContact() directly, no handleRegistrationPrompt()
- **Dialog only appears in direct add flow** — evidence:
`src/views/ContactsView.vue:774-800`; addContact() calls
handleRegistrationPrompt() after database insert
## Hypotheses & Failure Modes
- H1: 1-second timeout makes dialog appearance unpredictable; would fail when
tests run faster than 1000ms
- H2: Test environment timing differs from development; watch for CI vs local
test differences
## Corrections
- Updated: "Multiple dialogs interfere with imports" → "Import flow never
triggers dialogs - they only appear in direct contact addition"
- Updated: "Complex batch registration needed" → "Simple timeout removal and
test mode flag sufficient"
## Diagnostics (Next Checks)
- [ ] Repro on CI environment vs local
- [ ] Measure actual dialog appearance timing
- [ ] Test with setTimeout removed
- [ ] Verify import flow doesn't call handleRegistrationPrompt
## Risks & Scope
- Impacted: Contact addition tests, registration workflow tests; Data: None;
Users: Test suite reliability
## Decision / Next Steps
- Owner: Development Team; By: 2025-01-28
- Action: Remove 1-second timeout + add test mode flag; Exit criteria: Tests
pass consistently
## References
- `src/views/ContactsView.vue:971-1000`
- `src/views/ContactImportView.vue:500-520`
- `src/views/ContactsView.vue:774-800`
## Competence Hooks
- Why this works: Code path tracing revealed separate execution flows,
evidence disproved initial assumptions
- Common pitfalls: Assuming related functionality without tracing execution
paths, over-engineering solutions to imaginary problems
- Next skill: Learn to trace code execution before proposing architectural
changes
- Teach-back: "What evidence shows that contact imports bypass registration
dialogs?"
## Key Learning Points
### Evidence-First Approach
This investigation demonstrates the importance of:
1. **Tracing actual code execution** rather than making assumptions
2. **Citing specific evidence** with file:line references
3. **Validating problem scope** before proposing solutions
4. **Considering simpler alternatives** before complex architectural changes
### Code Path Tracing Value
By tracing the execution paths, we discovered:
- Import flow and direct add flow are completely separate
- The "multiple dialog interference" problem didn't exist
- A simple timeout removal would solve the actual issue
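A minimal sketch of that fix, with illustrative names (`VITE_TEST_MODE` and the helper are assumptions, not the actual ContactsView code):

```typescript
// Gate the artificial delay on a test-mode flag so dialog timing is
// deterministic under test and unchanged for real users.
const PROMPT_DELAY_MS =
  import.meta.env.VITE_TEST_MODE === 'true' ? 0 : 1000;

function scheduleRegistrationPrompt(showPrompt: () => void): void {
  setTimeout(showPrompt, PROMPT_DELAY_MS);
}
```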
### Prevention of Over-Engineering
The investigation prevented:
- Unnecessary database schema changes
- Complex batch registration systems
- Migration scripts for non-existent problems
- Architectural changes based on assumptions
---
**Status**: Active investigation methodology
**Priority**: High
**Estimated Effort**: Ongoing reference
**Dependencies**: software_development.mdc
**Stakeholders**: Development team, QA team
## Model Implementation Checklist
### Before Investigation
- [ ] **Problem Definition**: Clearly define the problem to investigate
- [ ] **Scope Definition**: Determine investigation scope and boundaries
- [ ] **Methodology Planning**: Plan investigation approach and methods
- [ ] **Resource Assessment**: Identify required resources and tools
### During Investigation
- [ ] **Evidence Collection**: Gather relevant evidence and data systematically
- [ ] **Code Path Tracing**: Map execution flow for software investigations
- [ ] **Analysis**: Analyze evidence using appropriate methods
- [ ] **Documentation**: Document investigation process and findings
### After Investigation
- [ ] **Synthesis**: Synthesize findings into actionable insights
- [ ] **Report Creation**: Create comprehensive investigation report
- [ ] **Recommendations**: Provide clear, actionable recommendations
- [ ] **Team Communication**: Share findings and next steps with team


@@ -0,0 +1,358 @@
---
alwaysApply: false
---
# Logging Migration — Patterns and Examples
> **Agent role**: Reference this file for specific migration patterns and
examples when converting console.* calls to logger usage.
## Migration — AutoRewrites (Apply Every Time)
### Exact Transforms
- `console.debug(...)` → `logger.debug(...)`
- `console.log(...)` → `logger.log(...)` (or `logger.info(...)` when
clearly stateful)
- `console.info(...)` → `logger.info(...)`
- `console.warn(...)` → `logger.warn(...)`
- `console.error(...)` → `logger.error(...)`
### Multi-arg Handling
- First arg becomes `message` (stringify safely if non-string).
- Remaining args map 1:1 to `...args`:
`console.info(msg, a, b)` → `logger.info(String(msg), a, b)`
### Sole `Error`
- `console.error(err)` → `logger.error(err.message, err)`
### Object-wrapping Cleanup
Replace `{ userId, meta }` wrappers with separate args:
`logger.info('User signed in', userId, meta)`
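For bulk migrations, the exact transforms above are mechanical enough to script. A naive codemod sketch (string-based; a real pass would use an AST tool such as jscodeshift):

```typescript
// Rewrite console.* calls to logger.* and add the import when needed.
// String replacement only: assumes simple call sites, no aliased consoles.
import { readFileSync, writeFileSync } from 'node:fs';

const LEVEL_MAP: Record<string, string> = {
  'console.debug(': 'logger.debug(',
  'console.log(': 'logger.log(',
  'console.info(': 'logger.info(',
  'console.warn(': 'logger.warn(',
  'console.error(': 'logger.error(',
};

export function migrateFile(path: string): void {
  let source = readFileSync(path, 'utf8');
  for (const [from, to] of Object.entries(LEVEL_MAP)) {
    source = source.split(from).join(to);
  }
  if (source.includes('logger.') && !source.includes("from '@/utils/logger'")) {
    source = `import { logger } from '@/utils/logger';\n${source}`;
  }
  writeFileSync(path, source, 'utf8');
}
```

Treat this only as a first pass; review each rewrite for the correct level per the guidelines below.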
## Level Guidelines with Examples
### DEBUG Examples
```typescript
logger.debug('[HomeView] reloadFeedOnChange() called');
logger.debug('[HomeView] Current filter settings',
settings.filterFeedByVisible,
settings.filterFeedByNearby,
settings.searchBoxes?.length ?? 0);
logger.debug('[FeedFilters] Toggling nearby filter',
this.isNearby, this.settingChanged, this.activeDid);
```
**Avoid**: Vague messages (`'Processing data'`).
### INFO Examples
```typescript
logger.info('[StartView] Component mounted', process.env.VITE_PLATFORM);
logger.info('[StartView] User selected new seed generation');
logger.info('[SearchAreaView] Search box stored',
searchBox.name, searchBox.bbox);
logger.info('[ContactQRScanShowView] Contact registration OK',
contact.did);
```
**Avoid**: Diagnostic details that belong in `debug`.
### WARN Examples
```typescript
logger.warn('[ContactQRScanShowView] Invalid scan result: no value',
resultType);
logger.warn('[ContactQRScanShowView] Invalid QR format: no JWT in URL');
logger.warn('[ContactQRScanShowView] JWT missing "own" field');
```
**Avoid**: Hard failures (those are `error`).
### ERROR Examples
```typescript
logger.error('[HomeView Settings] initializeIdentity() failed', err);
logger.error('[StartView] Failed to load initialization data', error);
logger.error('[ContactQRScanShowView] Error processing contact QR',
error, rawValue);
```
**Avoid**: Expected user cancels (use `info`/`debug`).
## Context Hygiene Examples
### Component Context
```typescript
const log = logger.withContext('UserService');
log.info('User created', userId);
log.error('Failed to create user', error);
```
If not using `withContext`, prefix message with `[ComponentName]`.
### Emoji Guidelines
Recommended set for visual scanning:
- Start/finish: 🚀 / ✅
- Retry/loop: 🔄
- External call: 📡
- Data/metrics: 📊
- Inspection: 🔍
## Quick Before/After
### **Before**
```typescript
console.log('User signed in', user.id, meta);
console.error('Failed to update profile', err);
console.info('Filter toggled', this.hasVisibleDid);
```
### **After**
```typescript
import { logger } from '@/utils/logger';
logger.info('User signed in', user.id, meta);
logger.error('Failed to update profile', err);
logger.debug('[FeedFilters] Filter toggled', this.hasVisibleDid);
```
## Checklist (for every PR)
- [ ] No `console.*` (or properly pragma'd in the allowed locations)
- [ ] Correct import path for `logger`
- [ ] Rest-parameter call shape (`message, ...args`)
- [ ] Right level chosen (debug/info/warn/error)
- [ ] No secrets / oversized payloads / throwaway context objects
- [ ] Component context provided (scoped logger or `[Component]` prefix)
**See also**:
`.cursor/rules/development/logging_standards.mdc` for the core standards and rules.
## Model Implementation Checklist
### Before Logging Migration
- [ ] **Code Review**: Identify all `console.*` calls in the codebase
- [ ] **Logger Import**: Verify logger utility is available and accessible
- [ ] **Context Planning**: Plan component context for each logging call
- [ ] **Level Assessment**: Determine appropriate log levels for each call
### During Logging Migration
- [ ] **Import Replacement**: Replace `console.*` with `logger.*` calls
- [ ] **Context Addition**: Add component context using scoped logger or prefix
- [ ] **Level Selection**: Choose appropriate log level (debug/info/warn/error)
- [ ] **Parameter Format**: Use rest-parameter call shape (`message, ...args`)
### After Logging Migration
- [ ] **Console Check**: Ensure no `console.*` methods remain (unless pragma'd)
- [ ] **Context Validation**: Verify all logging calls have proper context
- [ ] **Level Review**: Confirm log levels are appropriate for each situation
- [ ] **Testing**: Test logging functionality across all platforms


@@ -0,0 +1,176 @@
# Agent Contract — TimeSafari Logging (Unified, MANDATORY)
**Author**: Matthew Raymer
**Date**: 2025-08-19
**Status**: 🎯 **ACTIVE** - Mandatory logging standards
## Overview
This document defines unified logging standards for the TimeSafari project,
ensuring consistent, rest-parameter logging style using the project logger.
No `console.*` methods are allowed in production code.
## Scope and Goals
**Scope**: Applies to all diffs and generated code in this workspace unless
explicitly exempted below.
**Goal**: One consistent, rest-parameter logging style using the project
logger; no `console.*` in production code.
## NonNegotiables (DO THIS)
- You **MUST** use the project logger; **DO NOT** use any `console.*`
methods.
- Import exactly as:
- `import { logger } from '@/utils/logger'`
- If `@` alias is unavailable, compute the correct relative path (do not
fail).
- Call signatures use **rest parameters**: `logger.info(message, ...args)`
- Prefer primitives/IDs and small objects in `...args`; **never build a
throwaway object** just to "wrap context".
- Production defaults: Web = `warn+`, Electron = `error`, Dev/Capacitor =
`info+` (override via `VITE_LOG_LEVEL`).
- **Database persistence**: `info|warn|error` are persisted; `debug` is not.
Use `logger.toDb(msg, level?)` for DB-only.
## Available Logger API (Authoritative)
- `logger.debug(message, ...args)` — verbose internals, timings, input/output
shapes
- `logger.log(message, ...args)` — synonym of `info` for general info
- `logger.info(message, ...args)` — lifecycle, state changes, success paths
- `logger.warn(message, ...args)` — recoverable issues, retries, degraded mode
- `logger.error(message, ...args)` — failures, thrown exceptions, aborts
- `logger.toDb(message, level?)` — DB-only entry (default level = `info`)
- `logger.toConsoleAndDb(message, isError)` — console + DB (use sparingly)
- `logger.withContext(componentName)` — returns a scoped logger
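Taken together, the list above implies roughly this shape. A type sketch for reference, derived from the listed calls and not the actual source of `@/utils/logger`:

```typescript
// Assumed surface of the project logger, derived from the API list above.
type LogLevel = 'info' | 'warn' | 'error';

interface Logger {
  debug(message: string, ...args: unknown[]): void;
  log(message: string, ...args: unknown[]): void; // synonym of info
  info(message: string, ...args: unknown[]): void;
  warn(message: string, ...args: unknown[]): void;
  error(message: string, ...args: unknown[]): void;
  toDb(message: string, level?: LogLevel): void; // DB-only, default 'info'
  toConsoleAndDb(message: string, isError: boolean): void;
  withContext(componentName: string): Logger; // scoped logger
}
```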
## Level Guidelines (Use These Heuristics)
### DEBUG
Use for method entry/exit, computed values, filters, loops, retries, and
external call payload sizes.
### INFO
Use for user-visible lifecycle and completed operations.
### WARN
Use for recoverable issues, fallbacks, unexpected-but-handled conditions.
### ERROR
Use for unrecoverable failures, data integrity issues, and thrown
exceptions.
## Context Hygiene (Consistent, Minimal, Helpful)
- **Component context**: Prefer scoped logger.
- **Emojis**: Optional and minimal for visual scanning.
- **Sensitive data**: Never log secrets (tokens, keys, passwords) or
payloads >10KB. Prefer IDs over objects; redact/hash when needed.
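Where a value is sensitive but still needs a stable reference in logs, hashing is one option. A minimal sketch; the `redact` helper is an assumption, not an existing project utility:

```typescript
// Log a short, non-reversible digest instead of the secret itself.
import { createHash } from 'node:crypto';
import { logger } from '@/utils/logger';

export function redact(value: string): string {
  return createHash('sha256').update(value).digest('hex').slice(0, 8);
}

// Usage: the digest is stable enough for correlation but useless to an attacker.
logger.info('[AuthService] Token refreshed', redact('raw-token-value'));
```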
## DB Logging Rules
- `debug` **never** persists automatically.
- `info|warn|error` persist automatically.
- For DB-only events (no console), call `logger.toDb('Message',
'info'|'warn'|'error')`.
## Exceptions (Tightly Scoped)
Allowed paths (still prefer logger):
- `**/*.test.*`, `**/*.spec.*`
- `scripts/dev/**`, `scripts/migrate/**`
To intentionally keep `console.*`, add a pragma on the previous line:
```typescript
// cursor:allow-console reason="short justification"
console.log('temporary output');
```
## CI & Diff Enforcement
- Do not introduce `console.*` anywhere outside allowed, pragma'd spots.
- If an import is missing, insert it and resolve alias/relative path
correctly.
- Enforce rest-parameter call shape in reviews; replace object-wrapped
context.
- Ensure environment log level rules remain intact (`VITE_LOG_LEVEL`
respected).
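One way to enforce this in CI is a small check over changed files that honors both the allowed paths and the pragma. A sketch; the file-list source and script location are assumptions:

```typescript
// Fail CI when console.* appears outside allowed paths without a
// cursor:allow-console pragma on the preceding line.
import { readFileSync } from 'node:fs';

const ALLOWED = [/\.test\./, /\.spec\./, /^scripts\/dev\//, /^scripts\/migrate\//];

export function findConsoleViolations(files: string[]): string[] {
  const violations: string[] = [];
  for (const file of files) {
    if (ALLOWED.some((pattern) => pattern.test(file))) continue;
    const lines = readFileSync(file, 'utf8').split('\n');
    lines.forEach((line, i) => {
      if (!/\bconsole\.(debug|log|info|warn|error)\(/.test(line)) return;
      if ((lines[i - 1] ?? '').includes('cursor:allow-console')) return;
      violations.push(`${file}:${i + 1}`);
    });
  }
  return violations;
}
```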
---
**See also**:
`.cursor/rules/development/logging_migration.mdc` for migration patterns and examples.
**Status**: Active and enforced
**Priority**: Critical
**Estimated Effort**: Ongoing reference
**Dependencies**: TimeSafari logger utility
**Stakeholders**: Development team, Code review team
## Model Implementation Checklist
### Before Adding Logging
- [ ] **Logger Import**: Import logger as `import { logger } from
'@/utils/logger'`
- [ ] **Log Level Selection**: Determine appropriate log level
(debug/info/warn/error)
- [ ] **Context Planning**: Plan what context information to include
- [ ] **Sensitive Data Review**: Identify any sensitive data that needs redaction
### During Logging Implementation
- [ ] **Rest Parameters**: Use `logger.info(message, ...args)` format, not object
wrapping
- [ ] **Context Addition**: Include relevant IDs, primitives, or small objects in
args
- [ ] **Level Appropriateness**: Use correct log level for the situation
- [ ] **Scoped Logger**: Use `logger.withContext(componentName)` for
component-specific logging
### After Logging Implementation
- [ ] **Console Check**: Ensure no `console.*` methods are used (unless in
allowed paths)
- [ ] **Performance Review**: Verify logging doesn't impact performance
- [ ] **DB Persistence**: Use `logger.toDb()` for database-only logging if needed
- [ ] **Environment Compliance**: Respect `VITE_LOG_LEVEL` environment
variable


@@ -0,0 +1,160 @@
# Planning Examples — No Time Estimates
> **Agent role**: Reference this file for detailed planning examples and
anti-patterns when creating project plans.
## 🎯 Example Planning (No Time Estimates)
### **Example 1: Simple Feature**
```
Phase 1: Core implementation
- Basic functionality
- Single platform support
- Unit tests
Phase 2: Platform expansion
- Multi-platform support
- Integration tests
Phase 3: Polish
- User testing
- Edge case handling
```
### **Example 2: Complex Cross-Platform Feature**
```
Phase 1: Foundation
- Architecture design
- Core service implementation
- Basic web platform support
Phase 2: Platform Integration
- Mobile platform support
- Desktop platform support
- Cross-platform consistency
Phase 3: Testing & Polish
- Comprehensive testing
- Error handling
- User experience refinement
```
## 🚫 Anti-Patterns to Avoid
- **"This should take X days"** - Red flag for time estimation
- **"Just a few hours"** - Ignores complexity and testing
- **"Similar to X"** - Without considering differences
- **"Quick fix"** - Nothing is ever quick in software
- **"No testing needed"** - Testing always takes effort
## ✅ Best Practices
### **When Planning:**
1. **Break down everything** - no work is too small to plan
2. **Consider all platforms** - web, mobile, desktop differences
3. **Include testing strategy** - unit, integration, and user testing
4. **Account for unknowns** - there are always surprises
5. **Focus on dependencies** - what blocks what
### **When Presenting Plans:**
1. **Show the phases** - explain the logical progression
2. **Highlight dependencies** - what could block progress
3. **Define milestones** - clear success criteria
4. **Identify risks** - what could go wrong
5. **Suggest alternatives** - ways to reduce scope or complexity
## 🔄 Continuous Improvement
### **Track Progress**
- Record planned vs. actual phases completed
- Identify what took longer than expected
- Learn from complexity misjudgments
- Adjust planning process based on experience
### **Learn from Experience**
- **Underestimated complexity**: Increase complexity categories
- **Missed dependencies**: Improve dependency mapping
- **Platform surprises**: Better platform research upfront
## 🎯 Integration with Harbor Pilot
This rule works in conjunction with:
- **Project Planning**: Focuses on phases and milestones
- **Resource Allocation**: Based on complexity, not time
- **Risk Management**: Identifies blockers and dependencies
- **Stakeholder Communication**: Sets progress-based expectations
---
**See also**: `.cursor/rules/development/realistic_time_estimation.mdc` for
the core principles and framework.
## Model Implementation Checklist
### Before Planning
- [ ] **Requirements Review**: Understand all requirements completely
- [ ] **Stakeholder Input**: Gather input from all stakeholders
- [ ] **Complexity Assessment**: Evaluate technical and business complexity
- [ ] **Platform Analysis**: Consider requirements across all target platforms
### During Planning
- [ ] **Phase Definition**: Define clear phases and milestones
- [ ] **Dependency Mapping**: Map dependencies between tasks
- [ ] **Risk Identification**: Identify potential risks and challenges
- [ ] **Testing Strategy**: Plan comprehensive testing approach
### After Planning
- [ ] **Stakeholder Review**: Review plan with stakeholders
- [ ] **Documentation**: Document plan clearly with phases and milestones
- [ ] **Team Communication**: Communicate plan to team
- [ ] **Progress Tracking**: Set up monitoring and tracking mechanisms


@@ -0,0 +1,128 @@
# Realistic Time Estimation — Development Guidelines
> **Agent role**: **DO NOT MAKE TIME ESTIMATES**. Instead, use phases,
> milestones, and complexity levels. Time estimates are consistently wrong
> and create unrealistic expectations.
## Purpose
Development time estimates are consistently wrong and create unrealistic
expectations. This rule ensures we focus on phases, milestones, and
complexity rather than trying to predict specific timeframes.
## Critical Rule
**NEVER provide specific time estimates** (hours, days, weeks) for
development tasks. Instead, use:
- **Complexity levels** (Low, Medium, High, Critical)
- **Phases and milestones** with clear acceptance criteria
- **Platform-specific considerations** (Web, Mobile, Desktop)
- **Testing requirements** and validation steps
## Planning Framework
### Complexity Assessment
- **Low**: Simple changes, existing patterns, minimal testing
- **Medium**: New features, moderate testing, some integration
- **High**: Complex features, extensive testing, multiple platforms
- **Critical**: Core architecture changes, full regression testing
### Platform Categories
- **Web**: Browser compatibility, responsive design, accessibility
- **Mobile**: Native APIs, platform-specific testing, deployment
- **Desktop**: Electron integration, system APIs, distribution
### Testing Strategy
- **Unit tests**: Core functionality validation
- **Integration tests**: Component interaction testing
- **E2E tests**: User workflow validation
- **Platform tests**: Cross-platform compatibility
## Process Guidelines
### Planning Phase
1. **Scope Definition**: Clear requirements and acceptance criteria
2. **Complexity Assessment**: Evaluate technical and business complexity
3. **Phase Breakdown**: Divide into logical, testable phases
4. **Milestone Definition**: Define success criteria for each phase
### Execution Phase
1. **Phase 1**: Foundation and core implementation
2. **Phase 2**: Feature completion and integration
3. **Phase 3**: Testing, refinement, and documentation
4. **Phase 4**: Deployment and validation
### Validation Phase
1. **Acceptance Testing**: Verify against defined criteria
2. **Platform Testing**: Validate across target platforms
3. **Performance Testing**: Ensure performance requirements met
4. **Documentation**: Update relevant documentation
## Remember
**Your first estimate is wrong. Your second estimate is probably still
wrong. Focus on progress, not deadlines.**
---
**See also**:
- `.cursor/rules/development/planning_examples.mdc` for detailed planning examples
- `.cursor/rules/development/complexity_assessment.mdc` for complexity evaluation
**Status**: Active development guidelines
**Priority**: High
**Estimated Effort**: Ongoing reference
**Dependencies**: None
**Stakeholders**: Development team, Project managers
## Model Implementation Checklist
### Before Time Estimation
- [ ] **Requirements Analysis**: Understand all requirements and acceptance criteria
- [ ] **Complexity Assessment**: Evaluate technical and business complexity
- [ ] **Platform Review**: Identify requirements across all target platforms
- [ ] **Stakeholder Input**: Gather input from all affected parties
### During Time Estimation
- [ ] **Phase Breakdown**: Divide work into logical, testable phases
- [ ] **Complexity Classification**: Assign complexity levels (Low/Medium/High/Critical)
- [ ] **Platform Considerations**: Account for platform-specific requirements
- [ ] **Testing Strategy**: Plan comprehensive testing approach
### After Time Estimation
- [ ] **Milestone Definition**: Define success criteria for each phase
- [ ] **Progress Tracking**: Set up monitoring and tracking mechanisms
- [ ] **Documentation**: Document estimation process and assumptions
- [ ] **Stakeholder Communication**: Share estimation approach and progress focus


@@ -0,0 +1,262 @@
---
description: Use this workflow when doing **pre-implementation research, defect
investigations with uncertain repros, or clarifying system architecture and
behaviors**.
alwaysApply: false
---
```json
{
  "coaching_level": "light",
  "socratic_max_questions": 2,
  "verbosity": "concise",
  "timebox_minutes": null,
  "format_enforcement": "strict"
}
```
# Research & Diagnostic Workflow (R&D)
## Purpose
Provide a **repeatable, evidence-first** workflow to investigate features and
defects **before coding**. Outputs are concise reports, hypotheses, and next
steps—**not** code changes.
## When to Use
- Pre-implementation research for new features
- Defect investigations (repros uncertain, user-specific failures)
- Architecture/behavior clarifications (e.g., auth flows, merges, migrations)
---
## Enhanced with Software Development Ruleset
When investigating software issues, also apply:
- **Code Path Tracing**: Required for technical investigations
- **Evidence Validation**: Ensure claims are code-backed
- **Solution Complexity Assessment**: Justify architectural changes
---
## Output Contract (strict)
1) **Objective** — 1–2 lines
2) **System Map (if helpful)** — short diagram or bullet flow (≤8 bullets)
3) **Findings (Evidence-linked)** — bullets; each with file/function refs
4) **Hypotheses & Failure Modes** — short list, each testable
5) **Corrections** — explicit deltas from earlier assumptions (if any)
6) **Diagnostics** — what to check next (logs, DB, env, repro steps)
7) **Risks & Scope** — what could break; affected components
8) **Decision/Next Steps** — what we'll do, who's involved, by when
9) **References** — code paths, ADRs, docs
10) **Competence & Collaboration Hooks** — brief, skimmable
> Keep total length lean. Prefer links and bullets over prose.
---
## Quickstart Template
Copy/paste and fill:
```md
# Investigation — <short title>
## Objective
<one or two lines>
## System Map
- <module> → <function> → <downstream>
- <data path> → <db table> → <api>
## Findings (Evidence)
- <claim> —
evidence: `src/path/file.ts:function` (lines X–Y); log snippet/trace id
- <claim> — evidence: `...`
## Hypotheses & Failure Modes
- H1: <hypothesis>; would fail when <condition>
- H2: <hypothesis>; watch for <signal>
## Corrections
- Updated: <old statement> → <new statement with evidence>
## Diagnostics (Next Checks)
- [ ] Repro on <platform/version>
- [ ] Inspect <table/store> for <record>
- [ ] Capture <log/trace>
## Risks & Scope
- Impacted: <areas/components>; Data: <tables/keys>; Users: <segments>
## Decision / Next Steps
- Owner: <name>; By: <date> (YYYY-MM-DD)
- Action: <spike/bugfix/ADR>; Exit criteria: <binary checks>
## References
- `src/...`
- ADR: `docs/adr/xxxx-yy-zz-something.md`
- Design: `docs/...`
## Competence Hooks
- Why this works: <≤3 bullets>
- Common pitfalls: <≤3 bullets>
- Next skill: <≤1 item>
- Teach-back: "<one question>"
```
---
## Evidence Quality Bar
- **Cite the source** (file:func, line range if possible).
- **Prefer primary evidence** (code, logs) over inference.
- **Disambiguate platform** (Web/Capacitor/Electron) and **state** (migration,
auth).
- **Note uncertainty** explicitly.
---
## Code Path Tracing (Required for Software Investigations)
Before proposing solutions, trace the actual execution path:
- [ ] **Entry Points**:
Identify where the flow begins (user action, API call, etc.)
- [ ] **Component Flow**: Map which components/methods are involved
- [ ] **Data Path**: Track how data moves through the system
- [ ] **Exit Points**: Confirm where the flow ends and what results
- [ ] **Evidence Collection**: Gather specific code citations for each step
---
## Collaboration Hooks
- **Syncs:** 10–15 min with QA/Security/Platform owners for high-risk areas.
- **ADR:** Record major decisions; link here.
- **Review:** Share repro + diagnostics checklist in PR/issue.
---
## Integration with Other Rulesets
### With software_development.mdc
- **Enhanced Evidence Validation**:
Use code path tracing for technical investigations
- **Architecture Assessment**:
Apply complexity justification to proposed solutions
- **Impact Analysis**: Assess effects on existing systems before recommendations
### With base_context.mdc
- **Competence Building**: Focus on technical investigation skills
- **Collaboration**: Structure outputs for team review and discussion
---
## Self-Check (model, before responding)
- [ ] Output matches the **Output Contract** sections.
- [ ] Each claim has **evidence** or **uncertainty** is flagged.
- [ ] Hypotheses are testable; diagnostics are actionable.
- [ ] Competence + collaboration hooks present (≤120 words total).
- [ ] Respect toggles; keep it concise.
- [ ] **Code path traced** (for software investigations).
- [ ] **Evidence validated** against actual code execution.
---
## Optional Globs (examples)
> Uncomment `globs` in the header if you want auto-attach behavior.
- `src/platforms/**`, `src/services/**` —
attach during service/feature investigations
- `docs/adr/**` — attach when editing ADRs
## Referenced Files
- Consider including templates as context: `@adr_template.mdc`,
`@investigation_report_example.mdc`
## Model Implementation Checklist
### Before Investigation
- [ ] **Problem Definition**: Clearly define the research question or issue
- [ ] **Scope Definition**: Determine investigation scope and boundaries
- [ ] **Methodology Planning**: Plan investigation approach and methods
- [ ] **Resource Assessment**: Identify required resources and tools
### During Investigation
- [ ] **Evidence Collection**: Gather relevant evidence and data systematically
- [ ] **Code Path Tracing**: Map execution flow for software investigations
- [ ] **Analysis**: Analyze evidence using appropriate methods
- [ ] **Documentation**: Document investigation process and findings
### After Investigation
- [ ] **Synthesis**: Synthesize findings into actionable insights
- [ ] **Report Creation**: Create comprehensive investigation report
- [ ] **Recommendations**: Provide clear, actionable recommendations
- [ ] **Team Communication**: Share findings and next steps with team


@@ -0,0 +1,227 @@
---
alwaysApply: false
---
# Software Development Ruleset
**Author**: Matthew Raymer
**Date**: 2025-08-19
**Status**: 🎯 **ACTIVE** - Core development guidelines
## Purpose
Specialized guidelines for software development tasks including code review,
debugging, architecture decisions, and testing.
## Core Principles
### 1. Evidence-First Development
- **Code Citations Required**: Always cite specific file:line references when
making claims
- **Execution Path Tracing**: Trace actual code execution before proposing
architectural changes
- **Assumption Validation**: Flag assumptions as "assumed" vs "evidence-based"
### 2. Code Review Standards
- **Trace Before Proposing**: Always trace execution paths before suggesting
changes
- **Evidence Over Inference**: Prefer code citations over logical deductions
- **Scope Validation**: Confirm the actual scope of problems before proposing
solutions
### 3. Problem-Solution Validation
- **Problem Scope**: Does the solution address the actual problem?
- **Evidence Alignment**: Does the solution match the evidence?
- **Complexity Justification**: Is added complexity justified by real needs?
- **Alternative Analysis**: What simpler solutions were considered?
### 4. Dependency Management & Environment Validation
- **Pre-build Validation**:
Always validate critical dependencies before executing
build scripts
- **Environment Consistency**: Ensure team members have identical development
environments
- **Dependency Verification**: Check that required packages are installed and
accessible
- **Path Resolution**: Use `npx` for local dependencies to avoid PATH issues
## Required Workflows
### Before Proposing Changes
- [ ] **Code Path Tracing**: Map execution flow from entry to exit
- [ ] **Evidence Collection**: Gather specific code citations and logs
- [ ] **Assumption Surfacing**: Identify what's proven vs. inferred
- [ ] **Scope Validation**: Confirm the actual extent of the problem
- [ ] **Dependency Validation**: Verify all required dependencies are available
and accessible
### During Solution Design
- [ ] **Evidence Alignment**: Ensure solution addresses proven problems
- [ ] **Complexity Assessment**: Justify any added complexity
- [ ] **Alternative Evaluation**: Consider simpler approaches first
- [ ] **Impact Analysis**: Assess effects on existing systems
- [ ] **Environment Impact**: Assess how changes affect team member setups
## Software-Specific Competence Hooks
### Evidence Validation
- **"What code path proves this claim?"**
- **"How does data actually flow through the system?"**
- **"What am I assuming vs. what can I prove?"**
### Code Tracing
- **"What's the execution path from user action to system response?"**
- **"Which components actually interact in this scenario?"**
- **"Where does the data originate and where does it end up?"**
### Architecture Decisions
- **"What evidence shows this change is necessary?"**
- **"What simpler solution could achieve the same goal?"**
- **"How does this change affect the existing system architecture?"**
### Dependency & Environment Management
- **"What dependencies does this feature require and are they properly
declared?"**
- **"How will this change affect team member development environments?"**
- **"What validation can we add to catch dependency issues early?"**
## Integration with Other Rulesets
### With base_context.mdc
- Inherits generic competence principles
- Adds software-specific evidence requirements
- Maintains collaboration and learning focus
### With research_diagnostic.mdc
- Enhances investigation with code path tracing
- Adds evidence validation to diagnostic workflow
- Strengthens problem identification accuracy
## Usage Guidelines
### When to Use This Ruleset
- Code reviews and architectural decisions
- Bug investigation and debugging
- Performance optimization
- Feature implementation planning
- Testing strategy development
### When to Combine with Others
- **base_context + software_development**: General development tasks
- **research_diagnostic + software_development**: Technical investigations
- **All three**: Complex architectural decisions or major refactoring
## Self-Check (model, before responding)
- [ ] Code path traced and documented
- [ ] Evidence cited with specific file:line references
- [ ] Assumptions clearly flagged as proven vs. inferred
- [ ] Solution complexity justified by evidence
- [ ] Simpler alternatives considered and documented
- [ ] Impact on existing systems assessed
- [ ] Dependencies validated and accessible
- [ ] Environment impact assessed for team members
- [ ] Pre-build validation implemented where appropriate
---
**See also**: `.cursor/rules/development/dependency_management.mdc` for
detailed dependency management practices.
**Status**: Active development guidelines
**Priority**: High
**Estimated Effort**: Ongoing reference
**Dependencies**: base_context.mdc, research_diagnostic.mdc
**Stakeholders**: Development team, Code review team
## Model Implementation Checklist
### Before Development Work
- [ ] **Code Path Tracing**: Map execution flow from entry to exit
- [ ] **Evidence Collection**: Gather specific code citations and logs
- [ ] **Assumption Surfacing**: Identify what's proven vs. inferred
- [ ] **Scope Validation**: Confirm the actual extent of the problem
### During Development
- [ ] **Evidence Alignment**: Ensure solution addresses proven problems
- [ ] **Complexity Assessment**: Justify any added complexity
- [ ] **Alternative Evaluation**: Consider simpler approaches first
- [ ] **Impact Analysis**: Assess effects on existing systems
### After Development
- [ ] **Code Path Validation**: Verify execution paths are correct
- [ ] **Evidence Documentation**: Document all code citations and evidence
- [ ] **Assumption Review**: Confirm all assumptions are documented
- [ ] **Environment Impact**: Assess how changes affect team member setups

View File

@@ -0,0 +1,146 @@
# Time Handling in Development Workflow
**Author**: Matthew Raymer
**Date**: 2025-08-17
**Status**: 🎯 **ACTIVE** - Production Ready
## Overview
This guide establishes **how time should be referenced and used** across the
development workflow. It is not tied to any one project, but applies to **all
feature development, issue investigations, ADRs, and documentation**.
## General Principles
- **Explicit over relative**: Always prefer absolute dates (`2025-08-17`) over
relative references like "last week."
- **ISO 8601 Standard**: Use `YYYY-MM-DD` format for all date references in
docs, issues, ADRs, and commits.
- **Time zones**: Default to **UTC** unless explicitly tied to user-facing
behavior.
- **Precision**: Only specify as much precision as needed (date vs. datetime vs.
timestamp).
- **Consistency**: Align time references across ADRs, commits, and investigation
reports.
## In Documentation & ADRs
- Record decision dates using **absolute ISO dates**.
- For ongoing timelines, state start and end explicitly (e.g., `2025-08-01` →
`2025-08-17`).
- Avoid ambiguous terms like *recently*, *last month*, or *soon*.
- For time-based experiments (e.g., A/B tests), always include:
- Start date
- Expected duration
- Review date checkpoint
## In Code & Commits
- Use **UTC timestamps** in logs, DB migrations, and serialized formats.
- In commits, link changes to **date-bound ADRs or investigation docs**.
- For migrations, include both **applied date** and **intended version window**.
- Use constants for known fixed dates; avoid hardcoding arbitrary strings.
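A minimal sketch of the constants rule (the dates and names below are
illustrative):

```typescript
// Name known fixed dates once; never scatter raw date strings through code.
const ROLLOUT_START_UTC = "2025-08-01T00:00:00Z";
const ROLLOUT_REVIEW_UTC = "2025-08-21T00:00:00Z";

function isRolloutActive(now: Date = new Date()): boolean {
  return now >= new Date(ROLLOUT_START_UTC) && now < new Date(ROLLOUT_REVIEW_UTC);
}
```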
## In Investigations & Research
- Capture **when** an issue occurred (absolute time or version tag).
- When describing failures: note whether they are **time-sensitive** (e.g.,
after migrations, cache expirations).
- Record diagnostic timelines in ISO format (not relative).
- For performance regressions, annotate both **baseline timeframe** and
**measurement timeframe**.
## Collaboration Hooks
- During reviews, verify **time references are clear, absolute, and
standardized**.
- In syncs, reframe relative terms ("this week") into shared absolute
references.
- Tag ADRs with both **date created** and **review by** checkpoints.
## Self-Check Before Submitting
- [ ] Did I check the time using the **developer's actual system time and
timezone**?
- [ ] Am I using absolute ISO dates?
- [ ] Is UTC assumed unless specified otherwise?
- [ ] Did I avoid ambiguous relative terms?
- [ ] If duration matters, did I specify both start and end?
- [ ] For future work, did I include a review/revisit date?
---
**See also**:
- `.cursor/rules/development/time_implementation.mdc` for
detailed implementation instructions
- `.cursor/rules/development/time_examples.mdc` for practical examples and patterns
**Status**: Active time handling guidelines
**Priority**: High
**Estimated Effort**: Ongoing reference
**Dependencies**: None
**Stakeholders**: Development team, Documentation team
**Maintainer**: Matthew Raymer
**Next Review**: 2025-09-17
## Model Implementation Checklist
### Before Time-Related Work
- [ ] **Time Context**: Understand what time information is needed
- [ ] **Format Review**: Review time formatting standards (UTC, ISO 8601)
- [ ] **Platform Check**: Identify platform-specific time requirements
- [ ] **User Context**: Consider user's timezone and preferences
### During Time Implementation
- [ ] **UTC Usage**: Use UTC for all system and log timestamps
- [ ] **Format Consistency**: Apply consistent time formatting patterns
- [ ] **Timezone Handling**: Properly handle timezone conversions
- [ ] **User Display**: Format times appropriately for user display
### After Time Implementation
- [ ] **Validation**: Verify time formats are correct and consistent
- [ ] **Testing**: Test time handling across different scenarios
- [ ] **Documentation**: Update relevant documentation with time patterns
- [ ] **Review**: Confirm implementation follows time standards

View File

@@ -0,0 +1,243 @@
# Time Examples — Practical Patterns
> **Agent role**: Reference this file for practical examples and
patterns when working with time handling in development.
## Examples
### Good
- "Feature flag rollout started on `2025-08-01` and will be reviewed on
`2025-08-21`."
- "Migration applied on `2025-07-15T14:00Z`."
- "Issue reproduced on `2025-08-17T09:00-05:00 (local)` /
`2025-08-17T14:00Z (UTC)`."
### Bad
- "Feature flag rolled out last week."
- "Migration applied recently."
- "Now is August, so we assume this was last month."
### More Examples
#### Issue Reports
- ✅ **Good**: "User reported login failure at `2025-08-17T14:30:00Z`. Issue
persisted until `2025-08-17T15:45:00Z`."
- ❌ **Bad**: "User reported login failure earlier today. Issue lasted for a
while."
#### Release Planning
- ✅ **Good**: "Feature X scheduled for release on `2025-08-25`. Testing
window: `2025-08-20` to `2025-08-24`."
- ❌ **Bad**: "Feature X will be released next week after testing."
#### Performance Monitoring
- ✅ **Good**: "Baseline performance measured on `2025-08-10T09:00:00Z`.
Regression detected on `2025-08-15T14:00:00Z`."
- ❌ **Bad**: "Performance was good last week but got worse this week."
## Technical Implementation Examples
### Database Storage
```sql
-- ✅ Good: TIMESTAMP columns are converted to UTC for storage (MySQL)
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
-- ❌ Bad: DATETIME stores the session-local wall-clock time as-is
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
```
### API Responses
```json
// ✅ Good: Include both UTC and local time
{
"eventTime": "2025-08-17T14:00:00Z",
"localTime": "2025-08-17T10:00:00-04:00",
"timezone": "America/New_York"
}
// ❌ Bad: Only local time
{
"eventTime": "2025-08-17T10:00:00-04:00"
}
```
### Logging
```python
# ✅ Good: Log an aware UTC timestamp (datetime.utcnow() is deprecated)
logger.info(f"User action at {datetime.now(timezone.utc).isoformat()} (UTC)")
# ❌ Bad: Log naive local time with no timezone context
logger.info(f"User action at {datetime.now()}")
```
## Timezone Handling Examples
### Good Timezone Usage
```typescript
// ✅ Good: Store UTC, convert for display
const eventTime = new Date().toISOString(); // Store in UTC
const localTime = new Date().toLocaleString('en-US', {
  timeZone: 'America/New_York'
}); // Convert for display
// ✅ Good: Include timezone context
const timestamp = {
utc: "2025-08-17T14:00:00Z",
local: "2025-08-17T10:00:00-04:00",
timezone: "America/New_York"
};
```
### Bad Timezone Usage
```typescript
// ❌ Bad: Assume timezone
const now = new Date(); // Assumes system timezone
// ❌ Bad: Mix formats
const timestamp = "2025-08-17 10:00 AM"; // Ambiguous format
```
## Common Patterns
### Date Range References
```typescript
// ✅ Good: Explicit date ranges
const dateRange = {
start: "2025-08-01T00:00:00Z",
end: "2025-08-31T23:59:59Z"
};
// ❌ Bad: Relative ranges
const dateRange = {
start: "beginning of month",
end: "end of month"
};
```
### Duration References
```typescript
// ✅ Good: Specific durations
const duration = {
  value: 30,
  unit: "days",
  startDate: "2025-08-01T00:00:00Z"
};
// ❌ Bad: Vague durations
const duration = "about a month";
```
### Version References
```typescript
// ✅ Good: Version with date
const version = {
number: "1.2.3",
releaseDate: "2025-08-17T10:00:00Z",
buildDate: "2025-08-17T09:30:00Z"
};
// ❌ Bad: Version without context
const version = "latest";
```
## References
- [ISO 8601 Date and Time Standard](https://en.wikipedia.org/wiki/ISO_8601)
- [IANA Timezone Database](https://www.iana.org/time-zones)
- [ADR Template](./adr_template.md)
- [Research & Diagnostic Workflow](./research_diagnostic.mdc)
---
**Rule of Thumb**: Every time reference in development artifacts should be
**clear in 6 months without context**, and aligned to the **developer's actual
current time**.
**Technical Rule of Thumb**: **Store in UTC, display in local time, always
include timezone context.**
---
**See also**:
- `.cursor/rules/development/time.mdc` for core principles
- `.cursor/rules/development/time_implementation.mdc` for implementation instructions
**Status**: Active examples and patterns
**Priority**: Medium
**Estimated Effort**: Ongoing reference
**Dependencies**: time.mdc, time_implementation.mdc
**Stakeholders**: Development team, Documentation team
## Model Implementation Checklist
### Before Time Implementation
- [ ] **Time Context**: Understand what time information needs to be implemented
- [ ] **Format Review**: Review time formatting standards (UTC, ISO 8601)
- [ ] **Platform Check**: Identify platform-specific time requirements
- [ ] **User Context**: Consider user's timezone and display preferences
### During Time Implementation
- [ ] **UTC Storage**: Use UTC for all system and log timestamps
- [ ] **Format Consistency**: Apply consistent time formatting patterns
- [ ] **Timezone Handling**: Properly handle timezone conversions
- [ ] **User Display**: Format times appropriately for user display
### After Time Implementation
- [ ] **Format Validation**: Verify time formats are correct and consistent
- [ ] **Cross-Platform Testing**: Test time handling across different platforms
- [ ] **Documentation**: Update relevant documentation with time patterns
- [ ] **User Experience**: Confirm time display is clear and user-friendly

View File

@@ -0,0 +1,285 @@
# Time Implementation — Technical Instructions
> **Agent role**: Reference this file for detailed implementation instructions
when working with time handling in development.
## Real-Time Context in Developer Interactions
- The model must always resolve **"current time"** using the **developer's
actual system time and timezone**.
- When generating timestamps (e.g., in investigation logs, ADRs, or examples),
the model should:
- Use the **developer's current local time** by default.
- Indicate the timezone explicitly (e.g., `2025-08-17T10:32-05:00`).
- Optionally provide UTC alongside if context requires cross-team clarity.
- When interpreting relative terms like *now*, *today*, *last week*:
- Resolve them against the **developer's current time**.
- Convert them into **absolute ISO-8601 values** in the output.
## LLM Time Checking Instructions
**CRITICAL**: The LLM must actively query the system for current time rather
than assuming or inventing times.
### How to Check Current Time
#### 1. **Query System Time (Required)**
- **Always start** by querying the current system time using available tools
- **Never assume** what the current time is
- **Never use** placeholder values like "current time" or "now"
#### 2. **Available Time Query Methods**
- **System Clock**: Use `date` command or equivalent system time function
- **Programming Language**: Use language-specific time functions (e.g.,
`Date.now()`, `datetime.now()`)
- **Environment Variables**: Check for time-related environment variables
- **API Calls**: Use time service APIs if available
#### 3. **Required Time Information**
When querying time, always obtain:
- **Current Date**: YYYY-MM-DD format
- **Current Time**: HH:MM:SS format (24-hour)
- **Timezone**: Current system timezone or UTC offset
- **UTC Equivalent**: Convert local time to UTC for cross-team clarity
#### 4. **Time Query Examples**
```bash
# Example: Query system time
$ date
# Expected output: Sun Aug 17 10:32:45 EDT 2025
# Example: Query UTC time
$ date -u
# Expected output: Sun Aug 17 14:32:45 UTC 2025
```
```python
# Example: Python time query (timezone-aware; datetime.utcnow() is deprecated)
from datetime import datetime, timezone
current_time = datetime.now().astimezone()  # local time with UTC offset
utc_time = datetime.now(timezone.utc)       # aware UTC time
print(f"Local: {current_time.isoformat()}")
print(f"UTC: {utc_time.isoformat()}")
```
```javascript
// Example: JavaScript time query
const now = new Date();
const utc = new Date().toISOString();
console.log(`Local: ${now}`);
console.log(`UTC: ${utc}`);
```
#### 5. **LLM Time Checking Workflow**
1. **Query**: Actively query system for current time
2. **Validate**: Confirm time data is reasonable and current
3. **Format**: Convert to ISO 8601 format
4. **Context**: Provide both local and UTC times when helpful
5. **Document**: Show the source of time information
#### 6. **Error Handling for Time Queries**
- **If time query fails**: Ask user for current time or use "unknown time"
with explanation
- **If timezone unclear**: Default to UTC and ask for clarification
- **If time seems wrong**: Verify with user before proceeding
- **Always log**: Record when and how time was obtained
#### 7. **Time Query Verification**
Before using queried time, verify:
- [ ] Time is recent (within last few minutes)
- [ ] Timezone information is available
- [ ] UTC conversion is accurate
- [ ] Format follows ISO 8601 standard
## Model Behavior Rules
- **Never invent a "fake now"**: All "current time" references must come from
the real system clock available at runtime.
- **Check developer time zone**: If ambiguous, ask for clarification (e.g.,
"Should I use UTC or your local timezone?").
- **Format for clarity**:
- Local time: `YYYY-MM-DDTHH:mm±hh:mm`
- UTC equivalent (if needed): `YYYY-MM-DDTHH:mmZ`
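A sketch of producing the local format above from a JavaScript `Date`, using
only the standard library (the helper name is illustrative):

```typescript
// Format a Date as local ISO 8601 with an explicit UTC offset:
// YYYY-MM-DDTHH:mm±hh:mm
function toLocalIsoWithOffset(d: Date): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  // Shift the epoch so toISOString() renders the local wall-clock time
  const local = new Date(d.getTime() - d.getTimezoneOffset() * 60_000)
    .toISOString()
    .slice(0, 16); // keep YYYY-MM-DDTHH:mm
  const offsetMin = -d.getTimezoneOffset(); // minutes east of UTC
  const sign = offsetMin >= 0 ? "+" : "-";
  const abs = Math.abs(offsetMin);
  return `${local}${sign}${pad(Math.floor(abs / 60))}:${pad(abs % 60)}`;
}

// toLocalIsoWithOffset(new Date()) -> e.g. "2025-08-17T10:32-04:00"
```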
## Technical Implementation Notes
### UTC Storage Principle
- **Store all timestamps in UTC** in databases, logs, and serialized formats
- **Convert to local time only for user display**
- **Use ISO 8601 format** for all storage: `YYYY-MM-DDTHH:mm:ss.sssZ`
### Common Implementation Patterns
#### Database Storage
```sql
-- ✅ Good: Store in UTC
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
```
#### API Responses
```json
// ✅ Good: Include both UTC and local time
{
"eventTime": "2025-08-17T14:00:00Z",
"localTime": "2025-08-17T10:00:00-04:00",
"timezone": "America/New_York"
}
```
#### Logging
```python
# ✅ Good: Log an aware UTC timestamp (datetime.utcnow() is deprecated)
logger.info(f"User action at {datetime.now(timezone.utc).isoformat()} (UTC)")
```
### Timezone Handling Best Practices
#### 1. Always Store Timezone Information
- Include IANA timezone identifier (e.g., `America/New_York`)
- Store UTC offset at time of creation
- Handle daylight saving time transitions automatically
#### 2. User Display Considerations
- Convert UTC to user's preferred timezone
- Show timezone abbreviation when helpful
- Use relative time for recent events ("2 hours ago")
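A display-conversion sketch for the first two points above, using the standard
`Intl` API (the locale and timezone are illustrative user preferences):

```typescript
// Convert a stored UTC timestamp to a user's IANA timezone for display.
const storedUtc = "2025-08-17T14:00:00Z";

const display = new Intl.DateTimeFormat("en-US", {
  timeZone: "America/New_York", // the user's preferred timezone
  year: "numeric",
  month: "short",
  day: "numeric",
  hour: "numeric",
  minute: "2-digit",
  timeZoneName: "short", // appends the abbreviation, e.g. EDT
}).format(new Date(storedUtc));

console.log(display); // "Aug 17, 2025, 10:00 AM EDT"
```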
#### 3. Edge Case Handling
- **Daylight Saving Time**: Use timezone-aware libraries
- **Leap Seconds**: Handle gracefully (rare but important)
- **Invalid Times**: Validate before processing
### Common Mistakes to Avoid
#### 1. Timezone Confusion
- ❌ **Don't**: Assume server timezone is user timezone
- ✅ **Do**: Always convert UTC to user's local time for display
#### 2. Format Inconsistency
- ❌ **Don't**: Mix different time formats in the same system
- ✅ **Do**: Standardize on ISO 8601 for all storage
#### 3. Relative Time References
- ❌ **Don't**: Use relative terms in persistent storage
- ✅ **Do**: Convert relative terms to absolute timestamps immediately
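For example, a relative reference should be resolved to an absolute timestamp
at the moment it is recorded (a sketch; the helper name is illustrative):

```typescript
// Resolve "N days ago" into an absolute ISO 8601 timestamp immediately.
function daysAgoIso(days: number, now: Date = new Date()): string {
  return new Date(now.getTime() - days * 86_400_000).toISOString();
}

// Persist daysAgoIso(3), never the string "3 days ago".
```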
---
**See also**:
- `.cursor/rules/development/time.mdc` for core principles
- `.cursor/rules/development/time_examples.mdc` for practical examples
**Status**: Active implementation guidelines
**Priority**: Medium
**Estimated Effort**: Ongoing reference
**Dependencies**: time.mdc
**Stakeholders**: Development team, DevOps team
## Model Implementation Checklist
### Before Time Implementation
- [ ] **Time Context**: Understand what time information needs to be implemented
- [ ] **Format Review**: Review time formatting standards (UTC, ISO 8601)
- [ ] **Platform Check**: Identify platform-specific time requirements
- [ ] **User Context**: Consider user's timezone and display preferences
### During Time Implementation
- [ ] **UTC Storage**: Use UTC for all system and log timestamps
- [ ] **Format Consistency**: Apply consistent time formatting patterns
- [ ] **Timezone Handling**: Properly handle timezone conversions
- [ ] **User Display**: Format times appropriately for user display
### After Time Implementation
- [ ] **Format Validation**: Verify time formats are correct and consistent
- [ ] **Cross-Platform Testing**: Test time handling across different platforms
- [ ] **Documentation**: Update relevant documentation with time patterns
- [ ] **User Experience**: Confirm time display is clear and user-friendly

View File

@@ -2,7 +2,9 @@
description: when dealing with types and TypeScript
alwaysApply: false
---
```json
{
"coaching_level": "light",
"socratic_max_questions": 7,
@@ -10,6 +12,7 @@ alwaysApply: false
"timebox_minutes": null,
"format_enforcement": "strict"
}
```
# TypeScript Type Safety Guidelines
@@ -25,18 +28,25 @@ Practical rules to keep TypeScript strict and predictable. Minimize exceptions.
## Core Rules
1. **No `any`**
- Use explicit types. If unknown, use `unknown` and **narrow** via guards.
2. **Error handling uses guards**
- Reuse guards from `src/interfaces/**` (e.g., `isDatabaseError`,
`isApiError`).
- Catch with `unknown`; never cast to `any`.
3. **Dynamic property access is type-safe**
- Use `keyof` + `in` checks:
```ts
obj[k as keyof typeof obj]
```
- Avoid `(obj as any)[k]`.
@@ -46,24 +56,45 @@ Practical rules to keep TypeScript strict and predictable. Minimize exceptions.
### Core Type Safety Rules
- **No `any` Types**: Use explicit types or `unknown` with proper type guards
- **Error Handling Uses Guards**:
Implement and reuse type guards from `src/interfaces/**`
- **Dynamic Property Access**:
Use `keyof` + `in` checks for type-safe property access
### Type Guard Patterns
- **API Errors**: Use `isApiError(error)` guards for API error handling
- **Database Errors**:
Use `isDatabaseError(error)` guards for database operations
- **Axios Errors**:
Implement `isAxiosError(error)` guards for HTTP error handling
### Implementation Guidelines
- **Avoid Type Assertions**:
Replace `as any` with proper type guards and interfaces
- **Narrow Types Properly**: Use type guards to narrow `unknown` types safely
- **Document Type Decisions**: Explain complex type structures and their purpose
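As a sketch of what such a guard can look like (the error shape here is
illustrative; the project's real guards live in `src/interfaces/**`):

```typescript
// Narrow an unknown catch value to a known error shape without `any`.
interface DatabaseError {
  name: "DatabaseError";
  message: string;
}

function isDatabaseError(e: unknown): e is DatabaseError {
  return (
    typeof e === "object" &&
    e !== null &&
    (e as { name?: unknown }).name === "DatabaseError" &&
    typeof (e as { message?: unknown }).message === "string"
  );
}
```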
## Minimal Special Cases (document in PR when used)
- **Vue refs / instances**: Use `ComponentPublicInstance` or specific
component types for dynamic refs.
- **3rd-party libs without types**: Narrow immediately to a **known
interface**; do not leave `any` hanging.
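A sketch for the Vue-ref case above (Vue 3 assumed; the ref and handler names
are illustrative):

```typescript
// Type a dynamic template ref without falling back to `any`.
import { shallowRef } from "vue";
import type { ComponentPublicInstance } from "vue";

const rowRefs = shallowRef<Record<string, ComponentPublicInstance | null>>({});

function setRowRef(
  id: string,
  el: Element | ComponentPublicInstance | null
): void {
  // Template ref callbacks may also receive a plain Element;
  // keep only component instances.
  rowRefs.value[id] = el instanceof Element ? null : el;
}
```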
## Patterns (short)
@@ -71,31 +102,38 @@ Practical rules to keep TypeScript strict and predictable. Minimize exceptions.
### Database errors
```ts
try { await this.$addContact(contact); }
catch (e: unknown) {
  if (isDatabaseError(e) && e.message.includes("Key already exists")) {
    /* handle duplicate */
  }
}
```
### API errors
```ts
try { await apiCall(); }
catch (e: unknown) {
  if (isApiError(e)) {
    const msg = e.response?.data?.error?.message;
  }
}
```
### Dynamic keys
```ts
const keys = Object.keys(newSettings).filter(
  k => k in newSettings &&
    newSettings[k as keyof typeof newSettings] !== undefined
);
```
## Checklists
@@ -103,28 +141,38 @@ const keys = Object.keys(newSettings).filter(
**Before commit**
- [ ] No `any` (except documented, justified cases)
- [ ] Errors handled via guards
- [ ] Dynamic access uses `keyof`/`in`
- [ ] Imports point to correct interfaces/types
**Code review**
- [ ] Hunt hidden `as any`
- [ ] Guard-based error paths verified
- [ ] Dynamic ops are type-safe
- [ ] Prefer existing types over reinventing
## Tools
- `npm run lint-fix` — lint & auto-fix
- `npm run type-check` — strict type compilation (CI + pre-release)
- IDE: enable strict TS, ESLint/TS ESLint, Volar (Vue 3)
## References
- TS Handbook — <https://www.typescriptlang.org/docs/>
- TSESLint — <https://typescript-eslint.io/rules/>
- Vue 3 + TS — <https://vuejs.org/guide/typescript/>
---
@@ -134,6 +182,31 @@ const keys = Object.keys(newSettings).filter(
**Dependencies**: TypeScript, ESLint, Vue 3
**Stakeholders**: Development team
## Model Implementation Checklist
### Before Type Implementation
- [ ] **Type Analysis**: Understand current type definitions and usage
- [ ] **Interface Review**: Review existing interfaces and types
- [ ] **Error Handling**: Plan error handling with type guards
- [ ] **Dynamic Access**: Identify dynamic access patterns that need type safety
### During Type Implementation
- [ ] **Type Safety**: Ensure types provide meaningful safety guarantees
- [ ] **Error Guards**: Implement proper error handling with type guards
- [ ] **Dynamic Operations**: Use `keyof`/`in` for dynamic access
- [ ] **Import Validation**: Verify imports point to correct interfaces/types
### After Type Implementation
- [ ] **Linting Check**: Run `npm run lint-fix` to verify code quality
- [ ] **Type Check**: Run `npm run type-check` for strict type compilation
- [ ] **Code Review**: Hunt for hidden `as any` and type safety issues
- [ ] **Documentation**: Update type documentation and examples