Compare commits

5 commits: `notificati...playwright`

| Author | SHA1 | Date |
|---|---|---|
| | 4391cb2881 | |
| | 0b9c243969 | |
| | 6afe1c4c13 | |
| | f31eb5f6c9 | |
| | 9f976f011a | |
@@ -202,3 +202,6 @@ Follow this exact order **after** the Base Contract’s **Objective → Result
 **Notes for Implementers:**
 - Respect Base *Do-Not* (no filler, no invented facts, no censorship).
 - Prefer clarity over completeness when timeboxed; capture unknowns explicitly.
+- Apply historical comment management rules (see `.cursor/rules/historical_comment_management.mdc`)
+- Apply realistic time estimation rules (see `.cursor/rules/realistic_time_estimation.mdc`)
+- Apply Playwright test investigation rules (see `.cursor/rules/playwright_test_investigation.mdc`)
.cursor/rules/historical_comment_management.mdc (new file, 236 lines)
@@ -0,0 +1,236 @@
---
description: when comments are generated by the model
alwaysApply: false
---

# Historical Comment Management — Harbor Pilot Directive

> **Agent role**: When encountering historical comments about removed methods, deprecated patterns, or architectural changes, apply these guidelines to maintain code clarity and developer guidance.

## 🎯 Purpose

Historical comments should either be **removed entirely** or **transformed into actionable guidance** for future developers. Avoid keeping comments that merely state what was removed without explaining why or what to do instead.

## 📋 Decision Framework

### Remove Historical Comments When:

- **Obsolete Information**: Comment describes functionality that no longer exists
- **No Action Required**: Comment doesn't help future developers make decisions
- **Outdated Context**: Comment refers to old patterns that are no longer relevant
- **Self-Evident**: The current code clearly shows the current approach

### Transform Historical Comments When:

- **Architectural Context**: The change represents a significant pattern shift
- **Migration Guidance**: Future developers might need to understand the evolution
- **Decision Rationale**: The "why" behind the change is still relevant
- **Alternative Approaches**: The comment can guide future implementation choices
## 🔄 Transformation Patterns

### 1. From Removal Notice to Migration Note

```typescript
// ❌ REMOVE THIS
// turnOffNotifyingFlags method removed - notification state is now managed by NotificationSection component

// ✅ TRANSFORM TO THIS
// Note: Notification state management has been migrated to NotificationSection component
// which handles its own lifecycle and persistence via PlatformServiceMixin
```

### 2. From Deprecation Notice to Implementation Guide

```typescript
// ❌ REMOVE THIS
// This will be handled by the NewComponent now
// No need to call oldMethod() as it's no longer needed

// ✅ TRANSFORM TO THIS
// Note: This functionality has been migrated to NewComponent
// which provides better separation of concerns and testability
```

### 3. From Historical Note to Architectural Context

```typescript
// ❌ REMOVE THIS
// Old approach: used direct database calls
// New approach: uses service layer

// ✅ TRANSFORM TO THIS
// Note: Database access has been abstracted through service layer
// for better testability and platform independence
```
## 🚫 Anti-Patterns to Remove

- Comments that only state what was removed
- Comments that don't explain the current approach
- Comments that reference non-existent methods
- Comments that are self-evident from the code
- Comments that don't help future decision-making
## ✅ Best Practices

### When Keeping Historical Context:

1. **Explain the "Why"**: Why was the change made?
2. **Describe the "What"**: What is the current approach?
3. **Provide Context**: When might this information be useful?
4. **Use Actionable Language**: Guide future decisions, not just document history

### When Removing Historical Context:

1. **Verify Obsoleteness**: Ensure the information is truly outdated
2. **Check for Dependencies**: Ensure no other code references the old approach
3. **Update Related Docs**: If removing from code, consider adding to documentation
4. **Preserve in Git History**: The change is preserved in version control
## 🔍 Implementation Checklist

- [ ] Identify historical comments about removed/deprecated functionality
- [ ] Determine if comment provides actionable guidance
- [ ] Transform useful comments into migration notes or architectural context
- [ ] Remove comments that are purely historical without guidance value
- [ ] Ensure remaining comments explain current approach and rationale
- [ ] Update related documentation if significant context is removed
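To make the first checklist item actionable, here is a minimal scanner sketch. It is a hypothetical helper, not part of the rule set: the filename, the scanned extensions, and the keyword pattern are all assumptions to adapt to your codebase.

```typescript
// scan-historical-comments.ts: hypothetical helper for flagging candidates
import { readFileSync, readdirSync, statSync } from "fs";
import { join } from "path";

// Keywords that often mark historical comments worth reviewing
const PATTERN = /\/\/.*\b(removed|deprecated|no longer|old (?:method|approach))\b/i;

function scan(dir: string): void {
  for (const entry of readdirSync(dir)) {
    const path = join(dir, entry);
    if (statSync(path).isDirectory()) {
      scan(path); // recurse into subdirectories
    } else if (path.endsWith(".ts") || path.endsWith(".vue")) {
      readFileSync(path, "utf8")
        .split("\n")
        .forEach((line, i) => {
          if (PATTERN.test(line)) {
            console.log(`${path}:${i + 1}: ${line.trim()}`);
          }
        });
    }
  }
}

scan("src");
```

Each hit still requires the human judgment described in the Decision Framework; the scanner only narrows the search.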
## 📚 Examples

### Good Historical Comment (Keep & Transform)

```typescript
// Note: Database access has been migrated from direct IndexedDB calls to PlatformServiceMixin
// This provides better platform abstraction and consistent error handling across web/mobile/desktop
// When adding new database operations, use this.$getContact(), this.$saveSettings(), etc.
```

### Bad Historical Comment (Remove)

```typescript
// Old method getContactFromDB() removed - now handled by PlatformServiceMixin
// No need to call the old method anymore
```
## 🎯 Integration with Harbor Pilot

This rule works in conjunction with:

- **Component Creation Ideals**: Maintains architectural consistency
- **Migration Patterns**: Documents evolution of patterns
- **Code Review Guidelines**: Ensures comments provide value
## 📝 Version History

### v1.0.0 (2025-08-21)

- Initial creation based on notification system cleanup
- Established decision framework for historical comment management
- Added transformation patterns and anti-patterns
- Integrated with existing Harbor Pilot architecture rules
.cursor/rules/playwright-test-investigation.mdc (new file, 356 lines)
@@ -0,0 +1,356 @@
---
description: when working with playwright tests either generating them or using them to test code
alwaysApply: false
---

# Playwright Test Investigation — Harbor Pilot Directive

**Author**: Matthew Raymer
**Date**: 2025-08-21T14:22Z
**Status**: 🎯 **ACTIVE** - Playwright test debugging guidelines

## Objective

Provide a systematic approach for investigating Playwright test failures, with a focus on UI element conflicts, timing issues, and selector ambiguity.
## Context & Scope

- **Audience**: Developers debugging Playwright test failures
- **In scope**: Test failure analysis, selector conflicts, UI state investigation, timing issues
- **Out of scope**: Test writing best practices, CI/CD configuration

## Artifacts & Links

- Test results: `test-results/` directory
- Error context: `error-context.md` files with page snapshots
- Trace files: `trace.zip` files for failed tests
- HTML reports: Interactive test reports with screenshots

## Environment & Preconditions

- OS/Runtime: Linux/Windows/macOS with Node.js
- Versions: Playwright test framework, browser drivers
- Services: Local test server (localhost:8080), test data setup
- Auth mode: None required for test investigation
## Architecture / Process Overview

Playwright test investigation follows a systematic diagnostic workflow that leverages built-in debugging tools and error context analysis.

```mermaid
flowchart TD
    A[Test Failure] --> B[Check Error Context]
    B --> C[Analyze Page Snapshot]
    C --> D[Identify UI Conflicts]
    D --> E[Check Trace Files]
    E --> F[Verify Selector Uniqueness]
    F --> G[Test Selector Fixes]
    G --> H[Document Root Cause]

    B --> I[Check Test Results Directory]
    I --> J[Locate Failed Test Results]
    J --> K[Extract Error Details]

    D --> L[Multiple Alerts?]
    L --> M[Button Text Conflicts?]
    M --> N[Timing Issues?]

    E --> O[Use Trace Viewer]
    O --> P[Analyze Action Sequence]
    P --> Q[Identify Failure Point]
```
## Interfaces & Contracts

### Test Results Structure

| Component | Format | Content | Validation |
|---|---|---|---|
| Error Context | Markdown | Page snapshot in YAML | Verify DOM state matches test expectations |
| Trace Files | ZIP archive | Detailed execution trace | Use `npx playwright show-trace` |
| HTML Reports | Interactive HTML | Screenshots, traces, logs | Check browser for full report |
| JSON Results | JSON | Machine-readable results | Parse for automated analysis |

### Investigation Commands

| Step | Command | Expected Output | Notes |
|---|---|---|---|
| Locate failed tests | `find test-results -name "*test-name*"` | Test result directories | Use exact test name patterns |
| Check error context | `cat test-results/*/error-context.md` | Page snapshots | Look for UI state conflicts |
| View traces | `npx playwright show-trace trace.zip` | Interactive trace viewer | Analyze exact failure sequence |
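As one way to act on the "parse for automated analysis" row above, a short sketch follows. It assumes the JSON reporter is enabled and writes to `test-results/results.json`; the exact output path and report shape depend on your Playwright version and `playwright.config.ts`, so treat the field names as assumptions to verify against a real report.

```typescript
// parse-results.ts: list failed specs from a Playwright JSON report (sketch)
import { readFileSync } from "fs";

const report = JSON.parse(readFileSync("test-results/results.json", "utf8"));

// Walk top-level suites; nested suites are omitted here for brevity
for (const suite of report.suites ?? []) {
  for (const spec of suite.specs ?? []) {
    const failed = (spec.tests ?? []).some((t: any) =>
      (t.results ?? []).some((r: any) => r.status === "failed"),
    );
    if (failed) console.log(`FAILED: ${suite.title} > ${spec.title}`);
  }
}
```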
## Repro: End-to-End Investigation Procedure

### 1. Locate Failed Test Results

```bash
# Find all results for a specific test
find test-results -name "*test-name*" -type d

# Check for error context files
find test-results -name "error-context.md" | head -5
```

### 2. Analyze Error Context

```bash
# Read error context for specific test
cat test-results/test-name-test-description-browser/error-context.md

# Look for UI conflicts in page snapshot
grep -A 10 -B 5 "button.*Yes\|button.*No" test-results/*/error-context.md
```

### 3. Check Trace Files

```bash
# List available trace files
find test-results -name "*.zip" | grep trace

# View trace in browser
npx playwright show-trace test-results/test-name/trace.zip
```

### 4. Investigate Selector Issues

```typescript
// Check for multiple elements with same text
await page.locator('button:has-text("Yes")').count(); // Should be 1

// Use more specific selectors
await page.locator('div[role="alert"]:has-text("Register") button:has-text("Yes")').click();
```
## What Works (Evidence)

- ✅ **Error context files** provide page snapshots showing exact DOM state at failure
  - **Time**: 2025-08-21T14:22Z
  - **Evidence**: `test-results/60-new-activity-New-offers-for-another-user-chromium/error-context.md` shows both alerts visible
  - **Verify at**: Error context files in test results directory

- ✅ **Trace files** capture detailed execution sequence for failed tests
  - **Time**: 2025-08-21T14:22Z
  - **Evidence**: `trace.zip` files available for all failed tests
  - **Verify at**: Use `npx playwright show-trace <filename>`

- ✅ **Page snapshots** reveal UI conflicts like multiple alerts with duplicate button text
  - **Time**: 2025-08-21T14:22Z
  - **Evidence**: YAML snapshots show registration + export alerts simultaneously
  - **Verify at**: Error context markdown files
## What Doesn't (Evidence & Hypotheses)

- ❌ **Generic selectors** fail with multiple similar elements at `test-playwright/testUtils.ts:161`
  - **Time**: 2025-08-21T14:22Z
  - **Evidence**: `button:has-text("Yes")` matches both "Yes" and "Yes, Export Data"
  - **Hypothesis**: Selector ambiguity due to multiple alerts with conflicting button text
  - **Next probe**: Use more specific selectors or dismiss alerts sequentially

- ❌ **Timing-dependent tests** fail due to alert stacking at `src/views/ContactsView.vue:860,1283`
  - **Time**: 2025-08-21T14:22Z
  - **Evidence**: Both alerts use identical 1000ms delays, ensuring simultaneous display
  - **Hypothesis**: Race condition between alert displays creates UI conflicts
  - **Next probe**: Implement alert queuing or prevent overlapping alerts
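A minimal sketch of the "alert queuing" probe follows. The names (`enqueueAlert`, the `Alert` shape) are hypothetical and do not reflect the app's actual notification API; the point is only that showing one alert at a time removes the overlap that confuses selectors.

```typescript
// alertQueue.ts: show alerts one at a time instead of stacking them (sketch)
type Alert = {
  title: string;
  show: () => void;               // render the alert
  dismissed: () => Promise<void>; // resolves once the user closes it
};

const queue: Alert[] = [];
let draining = false;

export async function enqueueAlert(alert: Alert): Promise<void> {
  queue.push(alert);
  if (draining) return; // the active drain loop will pick this up
  draining = true;
  while (queue.length > 0) {
    const next = queue.shift()!;
    next.show();
    await next.dismissed(); // block the next alert until this one is closed
  }
  draining = false;
}
```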
## Risks, Limits, Assumptions

- **Trace file size**: Large trace files may impact storage and analysis time
- **Browser compatibility**: Trace viewer requires specific browser support
- **Test isolation**: Shared state between tests may affect investigation results
- **Timing sensitivity**: Tests may pass/fail based on system performance

## Next Steps

| Owner | Task | Exit Criteria | Target Date (UTC) |
|---|---|---|---|
| Development Team | Fix test selectors for multiple alerts | All tests pass consistently | 2025-08-22 |
| Development Team | Implement alert queuing system | No overlapping alerts with conflicting buttons | 2025-08-25 |
| Development Team | Add test IDs to alert buttons | Unique selectors for all UI elements | 2025-08-28 |
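For the "add test IDs" task, the intended end state might look like the sketch below. The `data-testid` values are hypothetical until the alert buttons actually carry them, but once they exist the tests stop depending on button text entirely.

```typescript
import { test } from "@playwright/test";

test("dismiss stacked alerts via test IDs", async ({ page }) => {
  await page.goto("http://localhost:8080");
  // Hypothetical IDs; exact selection regardless of visible button text
  await page.getByTestId("register-alert-no").click();
  await page.getByTestId("export-alert-no").click();
});
```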
## References

- [Playwright Trace Viewer Documentation](https://playwright.dev/docs/trace-viewer)
- [Playwright Test Results](https://playwright.dev/docs/test-reporters)
- [Test Investigation Workflow](./research_diagnostic.mdc)
## Competence Hooks

- **Why this works**: Systematic investigation leverages Playwright's built-in debugging tools to identify root causes
- **Common pitfalls**: Generic selectors fail with multiple similar elements; timing issues create race conditions; alert stacking causes UI conflicts
- **Next skill unlock**: Implement unique test IDs and handle alert dismissal order in test flows
- **Teach-back**: "How would you investigate a Playwright test failure using error context, trace files, and page snapshots?"

## Collaboration Hooks

- **Reviewers**: QA team, test automation engineers
- **Sign-off checklist**: Error context analyzed, trace files reviewed, root cause identified, fix implemented and tested

## Assumptions & Limits

- Test results directory structure follows Playwright conventions
- Trace files are enabled in configuration (`trace: "retain-on-failure"`)
- Error context files contain valid YAML page snapshots
- Browser environment supports trace viewer functionality
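For reference, the `trace` assumption above corresponds to a config fragment like the following. This is a sketch of the relevant options only, assuming defaults elsewhere; check the project's actual `playwright.config.ts` before relying on it.

```typescript
// playwright.config.ts (fragment, assumed settings)
import { defineConfig } from "@playwright/test";

export default defineConfig({
  use: {
    trace: "retain-on-failure", // keep trace.zip only for failed tests
  },
  // html gives the interactive report; json enables automated parsing
  reporter: [["html"], ["json", { outputFile: "test-results/results.json" }]],
});
```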
---

**Status**: Active investigation directive
**Priority**: High
**Maintainer**: Development team
**Next Review**: 2025-09-21
.cursor/rules/realistic_time_estimation.mdc (new file, 348 lines)
@@ -0,0 +1,348 @@
---
description: when generating text that has project task work estimates
alwaysApply: false
---

# No Time Estimates — Harbor Pilot Directive

> **Agent role**: **DO NOT MAKE TIME ESTIMATES**. Instead, use phases, milestones, and complexity levels. Time estimates are consistently wrong and create unrealistic expectations.

## 🎯 Purpose

Development time estimates are consistently wrong and create unrealistic expectations. This rule ensures we focus on phases, milestones, and complexity rather than trying to predict specific timeframes.

## 🚨 Critical Rule

**DO NOT MAKE TIME ESTIMATES**

- **Never provide specific time estimates** - they are always wrong
- **Use phases and milestones** instead of days/weeks
- **Focus on complexity and dependencies** rather than time
- **Set expectations based on progress, not deadlines**
## 📊 Planning Framework (Not Time Estimates)

### **Complexity Categories**

- **Simple**: Text changes, styling updates, minor bug fixes
- **Medium**: New features, refactoring, component updates
- **Complex**: Architecture changes, integrations, cross-platform work
- **Unknown**: New technologies, APIs, or approaches

### **Platform Complexity**

- **Single platform**: Web-only or mobile-only changes
- **Two platforms**: Web + mobile or web + desktop
- **Three platforms**: Web + mobile + desktop
- **Cross-platform consistency**: Ensuring behavior matches across all platforms

### **Testing Complexity**

- **Basic**: Unit tests for new functionality
- **Comprehensive**: Integration tests, cross-platform testing
- **User acceptance**: User testing, feedback integration
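A plan built from this framework can be captured without a single time field. The sketch below models phases, complexity, and dependencies in TypeScript; the type and field names are illustrative, not an existing API.

```typescript
// plan.ts: a time-free plan model (illustrative sketch)
type Complexity = "simple" | "medium" | "complex" | "unknown";

interface Milestone {
  name: string;
  exitCriteria: string; // an observable outcome, never a date
}

interface Phase {
  name: string;
  complexity: Complexity;
  dependsOn: string[]; // phases that must finish first
  milestones: Milestone[];
}

const plan: Phase[] = [
  {
    name: "Foundation",
    complexity: "medium",
    dependsOn: [],
    milestones: [
      { name: "Milestone 1", exitCriteria: "Basic functionality working" },
    ],
  },
  {
    name: "Platform Integration",
    complexity: "complex",
    dependsOn: ["Foundation"],
    milestones: [
      { name: "Milestone 2", exitCriteria: "All platforms supported" },
    ],
  },
];

// Phases with no unmet dependencies are the ones ready to start
console.log(plan.filter((p) => p.dependsOn.length === 0).map((p) => p.name));
```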
## 🔍 Planning Process (No Time Estimates)

### **Step 1: Break Down the Work**

- Identify all subtasks and dependencies
- Group related work into logical phases
- Identify critical path and blockers

### **Step 2: Define Phases and Milestones**

- **Phase 1**: Foundation work (basic fixes, core functionality)
- **Phase 2**: Enhancement work (new features, integrations)
- **Phase 3**: Polish work (testing, user experience, edge cases)

### **Step 3: Identify Dependencies**

- **Technical dependencies**: What must be built first
- **Platform dependencies**: What works on which platforms
- **Testing dependencies**: What can be tested when

### **Step 4: Set Progress Milestones**

- **Milestone 1**: Basic functionality working
- **Milestone 2**: All platforms supported
- **Milestone 3**: Fully tested and polished
## 📋 Planning Checklist (No Time Estimates)

- [ ] Work broken down into logical phases
- [ ] Dependencies identified and mapped
- [ ] Milestones defined with clear criteria
- [ ] Complexity levels assigned to each phase
- [ ] Platform requirements identified
- [ ] Testing strategy planned
- [ ] Risk factors identified
- [ ] Success criteria defined

## 🎯 Example Planning (No Time Estimates)

### **Example 1: Simple Feature**

```
Phase 1: Core implementation
- Basic functionality
- Single platform support
- Unit tests

Phase 2: Platform expansion
- Multi-platform support
- Integration tests

Phase 3: Polish
- User testing
- Edge case handling
```

### **Example 2: Complex Cross-Platform Feature**

```
Phase 1: Foundation
- Architecture design
- Core service implementation
- Basic web platform support

Phase 2: Platform Integration
- Mobile platform support
- Desktop platform support
- Cross-platform consistency

Phase 3: Testing & Polish
- Comprehensive testing
- Error handling
- User experience refinement
```
## 🚫 Anti-Patterns to Avoid

- **"This should take X days"** - Red flag for time estimation
- **"Just a few hours"** - Ignores complexity and testing
- **"Similar to X"** - Without considering differences
- **"Quick fix"** - Nothing is ever quick in software
- **"No testing needed"** - Testing always takes effort

## ✅ Best Practices

### **When Planning:**

1. **Break down everything** - no work is too small to plan
2. **Consider all platforms** - web, mobile, desktop differences
3. **Include testing strategy** - unit, integration, and user testing
4. **Account for unknowns** - there are always surprises
5. **Focus on dependencies** - what blocks what

### **When Presenting Plans:**

1. **Show the phases** - explain the logical progression
2. **Highlight dependencies** - what could block progress
3. **Define milestones** - clear success criteria
4. **Identify risks** - what could go wrong
5. **Suggest alternatives** - ways to reduce scope or complexity
## 🔄 Continuous Improvement

### **Track Progress**

- Record planned vs. actual phases completed
- Identify what took longer than expected
- Learn from complexity misjudgments
- Adjust planning process based on experience

### **Learn from Experience**

- **Underestimated complexity**: Increase complexity categories
- **Missed dependencies**: Improve dependency mapping
- **Platform surprises**: Better platform research upfront
## 🎯 Integration with Harbor Pilot

This rule works in conjunction with:

- **Project Planning**: Focuses on phases and milestones
- **Resource Allocation**: Based on complexity, not time
- **Risk Management**: Identifies blockers and dependencies
- **Stakeholder Communication**: Sets progress-based expectations

## 📝 Version History

### v2.0.0 (2025-08-21)

- **Major Change**: Completely removed time estimation approach
- **New Focus**: Phases, milestones, and complexity-based planning
- **Eliminated**: All time multipliers, estimates, and calculations
- **Added**: Dependency mapping and progress milestone framework

### v1.0.0 (2025-08-21)

- Initial creation based on user feedback about estimation accuracy
- ~~Established realistic estimation multipliers and process~~
- ~~Added comprehensive estimation checklist and examples~~
- Integrated with Harbor Pilot planning and risk management

---

## 🚨 Remember

**DO NOT MAKE TIME ESTIMATES. Use phases, milestones, and complexity instead. Your first estimate is wrong, and your second estimate is probably still wrong. Focus on progress, not deadlines.**
@@ -23,10 +23,11 @@ test('New offers for another user', async ({ page }) => {
 await page.getByPlaceholder('URL or DID, Name, Public Key').fill(autoCreatedDid + ', A Friend');
 await expect(page.locator('button > svg.fa-plus')).toBeVisible();
 await page.locator('button > svg.fa-plus').click();
-await page.locator('div[role="alert"] button:has-text("No")').click(); // don't register
-await expect(page.locator('div[role="alert"] h4:has-text("Success")')).toBeVisible();
-await page.locator('div[role="alert"] button > svg.fa-xmark').click(); // dismiss info alert
+await expect(page.locator('div[role="alert"] h4:has-text("Success")')).toBeVisible(); // wait for info alert to be visible…
+await page.locator('div[role="alert"] button > svg.fa-xmark').click(); // …and dismiss it
 await expect(page.locator('div[role="alert"] button > svg.fa-xmark')).toBeHidden(); // ensure alert is gone
+await page.locator('div[role="alert"] button:text-is("No")').click(); // Dismiss register prompt
+await page.locator('div[role="alert"] button:text-is("No, Not Now")').click(); // Dismiss export data prompt

 // show buttons to make offers directly to people
 await page.getByRole('button').filter({ hasText: /See Actions/i }).click();
@@ -158,10 +158,10 @@ export async function generateAndRegisterEthrUser(page: Page): Promise<string> {
   .fill(`${newDid}, ${contactName}`);
 await page.locator("button > svg.fa-plus").click();
 // register them
-await page.locator('div[role="alert"] button:has-text("Yes")').click();
+await page.locator('div[role="alert"] button:text-is("Yes")').click();
 // wait for it to disappear because the next steps may depend on alerts being gone
 await expect(
-  page.locator('div[role="alert"] button:has-text("Yes")')
+  page.locator('div[role="alert"] button:text-is("Yes")')
 ).toBeHidden();
 await expect(page.locator("li", { hasText: contactName })).toBeVisible();

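The substance of both hunks is the selector swap: `:has-text()` does substring matching, so `button:has-text("Yes")` also matches a "Yes, Export Data" button when two alerts stack, while `:text-is()` requires an exact text match. A quick way to confirm the difference, as a fragment in the style of the snippets above:

```typescript
// With both alerts on screen, substring matching is ambiguous:
await page.locator('button:has-text("Yes")').count(); // may be > 1
// Exact matching pins down the intended button:
await page.locator('button:text-is("Yes")').count(); // expected: 1
```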