Compare commits

8 Commits

electron-b...didview-in

| Author | SHA1 | Date |
|---|---|---|
| | 1a77dfb750 | |
| | 1365adad92 | |
| | baccb962cf | |
| | 4391cb2881 | |
| | 0b9c243969 | |
| | 74c70c7fa0 | |
| | f31eb5f6c9 | |
| | 9f976f011a | |
207
.cursor/rules/harbor_pilot_universal.mdc
Normal file
@@ -0,0 +1,207 @@
---
alwaysApply: true
inherits: base_context.mdc
---

```json
{
  "coaching_level": "standard",
  "socratic_max_questions": 2,
  "verbosity": "concise",
  "timebox_minutes": 10,
  "format_enforcement": "strict"
}
```

# Harbor Pilot — Universal Directive for Human-Facing Technical Guides

**Author**: System/Shared
**Date**: 2025-08-21 (UTC)
**Status**: 🚢 ACTIVE — General ruleset extending *Base Context — Human Competence First*

> **Alignment with Base Context**
> - **Purpose fit**: Prioritizes human competence and collaboration while delivering reproducible artifacts.
> - **Output Contract**: This directive **adds universal constraints** for any technical topic while **inheriting** the Base Context contract sections.
> - **Toggles honored**: Uses the same toggle semantics; defaults above can be overridden by the caller.

---

## Objective
Produce a **developer-grade, reproducible guide** for any technical topic that onboards a competent practitioner **without meta narration** and **with evidence-backed steps**.

## Scope & Constraints
- **One Markdown document** as the deliverable.
- Use **absolute dates** in **UTC** (e.g., `2025-08-21T14:22Z`) — avoid “today/yesterday”.
- Include at least **one diagram** (Mermaid preferred). Choose the most fitting type:
  - `sequenceDiagram` (protocols/flows), `flowchart`, `stateDiagram`, `gantt` (timelines), or `classDiagram` (schemas).
- Provide runnable examples where applicable (see the sketch after this list):
  - **APIs**: `curl` + one client library (e.g., `httpx` for Python).
  - **CLIs**: literal command blocks and expected output snippets.
  - **Code**: minimal, self-contained samples (language appropriate).
- Cite **evidence** for *Works/Doesn’t* items (timestamps, filenames, line numbers, IDs/status codes, or logs).
- If something is unknown, output `TODO:<missing>` — **never invent**.
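
For instance, here is a minimal sketch of the kind of runnable sample the **Code** bullet asks for. The endpoint, token variable, and response shape are hypothetical placeholders, not a real service:

```typescript
// Hypothetical API sample: fetch one resource and report the status code.
// BASE_URL and API_TOKEN are placeholders; take real values from the
// guide's "Environment & Preconditions" section.
const BASE_URL = "https://api.example.com";
const TOKEN = process.env.API_TOKEN ?? "TODO:<missing>"; // never hardcode secrets

async function main(): Promise<void> {
  const res = await fetch(`${BASE_URL}/v1/items/42`, {
    headers: { Authorization: `Bearer ${TOKEN}` },
  });
  console.log("status:", res.status); // expected output: "status: 200"
  console.log(await res.json()); // expected output: a small JSON body
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```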

## Required Sections (extends Base Output Contract)
Follow this exact order **after** the Base Contract’s **Objective → Result → Use/Run** headers:

1. **Context & Scope**
   - Problem statement, audience, in/out-of-scope bullets.
2. **Artifacts & Links**
   - Repos/PRs, design docs, datasets/HARs/pcaps, scripts/tools, dashboards.
3. **Environment & Preconditions**
   - OS/runtime, versions/build IDs, services/endpoints/URLs, credentials/auth mode (describe acquisition, do not expose secrets).
4. **Architecture / Process Overview**
   - Short prose + **one diagram** selected from the list above.
5. **Interfaces & Contracts (choose one)**
   - **API-based**: Endpoint table (*Step, Method, Path/URL, Auth, Key Headers/Params, Sample Req/Resp ref*).
   - **Data/Files**: I/O contract table (*Source, Format, Schema/Columns, Size, Validation rules*).
   - **Systems/Hardware**: Interfaces table (*Port/Bus, Protocol, Voltage/Timing, Constraints*).
6. **Repro: End-to-End Procedure**
   - Minimal copy-paste steps with code/commands and **expected outputs**.
7. **What Works (with Evidence)**
   - Each item: **Time (UTC)** • **Artifact/Req IDs** • **Status/Result** • **Where to verify**.
8. **What Doesn’t (Evidence & Hypotheses)**
   - Each failure: locus (file/endpoint/module), evidence snippet; short hypothesis and **next probe**.
9. **Risks, Limits, Assumptions**
   - SLOs/limits, rate/size caps, security boundaries (CORS/CSRF/ACLs), retries/backoff/idempotency patterns.
10. **Next Steps (Owner • Exit Criteria • Target Date)**
    - Actionable, assigned, and time-bound.
11. **References**
    - Canonical docs, specs, tickets, prior analyses.

> **Competence Hooks (per Base Context; keep lightweight):**
> - *Why this works* (≤3 bullets) — core invariants or guarantees.
> - *Common pitfalls* (≤3 bullets) — the traps we saw in evidence.
> - *Next skill unlock* (1 line) — the next capability to implement/learn.
> - *Teach-back* (1 line) — prompt the reader to restate the flow/architecture.

> **Collaboration Hooks (per Base Context):**
> - Name reviewers for **Interfaces & Contracts** and the **diagram**.
> - Short **sign-off checklist** before merging/publishing the guide.

## Do / Don’t (Base-aligned)
- **Do** quantify progress only against a defined scope with acceptance criteria.
- **Do** include minimal sample payloads/headers or I/O schemas; redact sensitive values.
- **Do** keep commentary lean; if timeboxed, move depth to **Deferred for depth**.
- **Don’t** use marketing language or meta narration (“Perfect!”, “tool called”, “new chat”).
- **Don’t** include IDE-specific chatter or internal rules unrelated to the task.

## Validation Checklist (self-check before returning)
- [ ] All Required Sections present and ordered.
- [ ] Diagram compiles (basic Mermaid syntax) and fits the problem.
- [ ] If API-based, **Auth** and **Key Headers/Params** are listed for each endpoint.
- [ ] Repro section includes commands/code **and expected outputs**.
- [ ] Every Works/Doesn’t item has **UTC timestamp**, **status/result**, and **verifiable evidence**.
- [ ] Next Steps include **Owner**, **Exit Criteria**, **Target Date**.
- [ ] Unknowns are `TODO:<missing>` — no fabrication.
- [ ] Base **Output Contract** sections satisfied (Objective/Result/Use/Run/Competence/Collaboration/Assumptions/References).

## Universal Template (fill-in)
```markdown
# <Title> — Working Notes (As of YYYY-MM-DDTHH:MMZ)

## Objective
<one line>

## Result
<link to the produced guide file or say “this document”>

## Use/Run
<how to apply/test and where to run samples>

## Context & Scope
- Audience: <role(s)>
- In scope: <bullets>
- Out of scope: <bullets>

## Artifacts & Links
- Repo/PR: <link>
- Data/Logs: <paths or links>
- Scripts/Tools: <paths>
- Dashboards: <links>

## Environment & Preconditions
- OS/Runtime: <details>
- Versions/Builds: <list>
- Services/Endpoints: <list>
- Auth mode: <Bearer/Session/Keys + how acquired>

## Architecture / Process Overview
<short prose>
```mermaid
<one suitable diagram: sequenceDiagram | flowchart | stateDiagram | gantt | classDiagram>
```

## Interfaces & Contracts
### If API-based
| Step | Method | Path/URL | Auth | Key Headers/Params | Sample |
|---|---|---|---|---|---|
| <…> | <…> | <…> | <…> | <…> | below |

### If Data/Files
| Source | Format | Schema/Columns | Size | Validation |
|---|---|---|---|---|
| <…> | <…> | <…> | <…> | <…> |

### If Systems/Hardware
| Interface | Protocol | Timing/Voltage | Constraints | Notes |
|---|---|---|---|---|
| <…> | <…> | <…> | <…> | <…> |

## Repro: End-to-End Procedure
```bash
# commands / curl examples (redacted where necessary)
```
```python
# minimal client library example (language appropriate)
```
> Expected output: <snippet/checks>

## What Works (Evidence)
- ✅ <short statement>
  - **Time**: <YYYY-MM-DDTHH:MMZ>
  - **Evidence**: file/line/log or request id/status
  - **Verify at**: <where>

## What Doesn’t (Evidence & Hypotheses)
- ❌ <short failure> at `<component/endpoint/file>`
  - **Time**: <YYYY-MM-DDTHH:MMZ>
  - **Evidence**: <snippet/id/status>
  - **Hypothesis**: <short>
  - **Next probe**: <short>

## Risks, Limits, Assumptions
<bullets: limits, security boundaries, retries/backoff, idempotency, SLOs>

## Next Steps
| Owner | Task | Exit Criteria | Target Date (UTC) |
|---|---|---|---|
| <name> | <action> | <measurable outcome> | <YYYY-MM-DD> |

## References
<links/titles>

## Competence Hooks
- *Why this works*: <≤3 bullets>
- *Common pitfalls*: <≤3 bullets>
- *Next skill unlock*: <1 line>
- *Teach-back*: <1 line>

## Collaboration Hooks
- Reviewers: <names/roles>
- Sign-off checklist: <≤5 checks>

## Assumptions & Limits
<bullets>

## Deferred for depth
<park deeper material here to respect timeboxing>
```

---

**Notes for Implementers:**
- Respect Base *Do-Not* (no filler, no invented facts, no censorship).
- Prefer clarity over completeness when timeboxed; capture unknowns explicitly.
- Apply historical comment management rules (see `.cursor/rules/historical_comment_management.mdc`)
- Apply realistic time estimation rules (see `.cursor/rules/realistic_time_estimation.mdc`)
- Apply Playwright test investigation rules (see `.cursor/rules/playwright-test-investigation.mdc`)
356
.cursor/rules/playwright-test-investigation.mdc
Normal file
@@ -0,0 +1,356 @@
---
description: Use when generating Playwright tests or when using them to test code
alwaysApply: false
---

# Playwright Test Investigation — Harbor Pilot Directive

**Author**: Matthew Raymer
**Date**: 2025-08-21T14:22Z
**Status**: 🎯 **ACTIVE** - Playwright test debugging guidelines

## Objective
Provide a systematic approach for investigating Playwright test failures, with focus on UI element conflicts, timing issues, and selector ambiguity.

## Context & Scope
- **Audience**: Developers debugging Playwright test failures
- **In scope**: Test failure analysis, selector conflicts, UI state investigation, timing issues
- **Out of scope**: Test writing best practices, CI/CD configuration

## Artifacts & Links
- Test results: `test-results/` directory
- Error context: `error-context.md` files with page snapshots
- Trace files: `trace.zip` files for failed tests
- HTML reports: Interactive test reports with screenshots

## Environment & Preconditions
- OS/Runtime: Linux/Windows/macOS with Node.js
- Versions: Playwright test framework, browser drivers
- Services: Local test server (localhost:8080), test data setup
- Auth mode: None required for test investigation

## Architecture / Process Overview
Playwright test investigation follows a systematic diagnostic workflow that leverages built-in debugging tools and error context analysis.

```mermaid
flowchart TD
    A[Test Failure] --> B[Check Error Context]
    B --> C[Analyze Page Snapshot]
    C --> D[Identify UI Conflicts]
    D --> E[Check Trace Files]
    E --> F[Verify Selector Uniqueness]
    F --> G[Test Selector Fixes]
    G --> H[Document Root Cause]

    B --> I[Check Test Results Directory]
    I --> J[Locate Failed Test Results]
    J --> K[Extract Error Details]

    D --> L[Multiple Alerts?]
    L --> M[Button Text Conflicts?]
    M --> N[Timing Issues?]

    E --> O[Use Trace Viewer]
    O --> P[Analyze Action Sequence]
    P --> Q[Identify Failure Point]
```

## Interfaces & Contracts

### Test Results Structure
| Component | Format | Content | Validation |
|---|---|---|---|
| Error Context | Markdown | Page snapshot in YAML | Verify DOM state matches test expectations |
| Trace Files | ZIP archive | Detailed execution trace | Use `npx playwright show-trace` |
| HTML Reports | Interactive HTML | Screenshots, traces, logs | Check browser for full report |
| JSON Results | JSON | Machine-readable results | Parse for automated analysis (see sketch below) |
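
As one possible automation of that last row, a sketch that lists failing specs from a JSON report. It assumes the JSON reporter is enabled (e.g. `reporter: [["json", { outputFile: "results.json" }]]` in `playwright.config.ts`); the `suites`/`specs`/`ok` field names follow that reporter's output, but verify them against your Playwright version:

```typescript
// List failing specs from a Playwright JSON report (sketch).
import { readFileSync } from "node:fs";

interface Spec { title: string; ok: boolean; }
interface Suite { title?: string; suites?: Suite[]; specs?: Spec[]; }

// Walk nested suites and collect titles of specs that did not pass.
function collectFailures(suite: Suite, out: string[] = []): string[] {
  for (const spec of suite.specs ?? []) {
    if (!spec.ok) out.push(spec.title);
  }
  for (const child of suite.suites ?? []) {
    collectFailures(child, out);
  }
  return out;
}

const report = JSON.parse(readFileSync("results.json", "utf8"));
const failures = (report.suites as Suite[]).flatMap((s) => collectFailures(s));
console.log(`Failing specs (${failures.length}):`);
for (const title of failures) console.log(` - ${title}`);
```

Run it after a test run, e.g. with `npx tsx list-failures.ts`.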

### Investigation Commands
| Step | Command | Expected Output | Notes |
|---|---|---|---|
| Locate failed tests | `find test-results -name "*test-name*"` | Test result directories | Use exact test name patterns |
| Check error context | `cat test-results/*/error-context.md` | Page snapshots | Look for UI state conflicts |
| View traces | `npx playwright show-trace trace.zip` | Interactive trace viewer | Analyze exact failure sequence |

## Repro: End-to-End Investigation Procedure

### 1. Locate Failed Test Results
```bash
# Find all results for a specific test
find test-results -name "*test-name*" -type d

# Check for error context files
find test-results -name "error-context.md" | head -5
```

### 2. Analyze Error Context
```bash
# Read error context for specific test
cat test-results/test-name-test-description-browser/error-context.md

# Look for UI conflicts in page snapshot
grep -A 10 -B 5 "button.*Yes\|button.*No" test-results/*/error-context.md
```

### 3. Check Trace Files
```bash
# List available trace files
find test-results -name "*.zip" | grep trace

# View trace in browser
npx playwright show-trace test-results/test-name/trace.zip
```

### 4. Investigate Selector Issues
```typescript
// Count elements matching the text; more than one match means the
// selector is ambiguous and will fail in Playwright's strict mode.
const yesButtons = await page.locator('button:has-text("Yes")').count(); // should be 1

// Use more specific selectors that scope to the intended alert.
await page.locator('div[role="alert"]:has-text("Register") button:has-text("Yes")').click();
```
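
When the count confirms ambiguity, exact-text or test-id selectors remove it. A minimal sketch; the `data-testid` value is an assumption about the markup, in line with the Next Steps item about adding test IDs:

```typescript
// ':has-text()' does substring matching, so "Yes" also matches "Yes, Export Data".
// ':text-is()' requires an exact match and therefore disambiguates the buttons.
await page.locator('div[role="alert"] button:text-is("Yes")').click();

// Stronger still: a unique test id keeps the selector stable across copy changes.
// Assumes the button is rendered with data-testid="register-yes" (hypothetical).
await page.getByTestId('register-yes').click();
```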

## What Works (Evidence)
- ✅ **Error context files** provide page snapshots showing exact DOM state at failure
  - **Time**: 2025-08-21T14:22Z
  - **Evidence**: `test-results/60-new-activity-New-offers-for-another-user-chromium/error-context.md` shows both alerts visible
  - **Verify at**: Error context files in test results directory

- ✅ **Trace files** capture detailed execution sequence for failed tests
  - **Time**: 2025-08-21T14:22Z
  - **Evidence**: `trace.zip` files available for all failed tests
  - **Verify at**: Use `npx playwright show-trace <filename>`

- ✅ **Page snapshots** reveal UI conflicts like multiple alerts with duplicate button text
  - **Time**: 2025-08-21T14:22Z
  - **Evidence**: YAML snapshots show registration + export alerts simultaneously
  - **Verify at**: Error context markdown files

## What Doesn't (Evidence & Hypotheses)
- ❌ **Generic selectors** fail with multiple similar elements at `test-playwright/testUtils.ts:161`
  - **Time**: 2025-08-21T14:22Z
  - **Evidence**: `button:has-text("Yes")` matches both "Yes" and "Yes, Export Data"
  - **Hypothesis**: Selector ambiguity due to multiple alerts with conflicting button text
  - **Next probe**: Use more specific selectors or dismiss alerts sequentially

- ❌ **Timing-dependent tests** fail due to alert stacking at `src/views/ContactsView.vue:860,1283`
  - **Time**: 2025-08-21T14:22Z
  - **Evidence**: Both alerts use identical 1000ms delays, ensuring simultaneous display
  - **Hypothesis**: Race condition between alert displays creates UI conflicts
  - **Next probe**: Implement alert queuing or prevent overlapping alerts (sketched below)
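
One possible shape for that probe is sketched below. This is illustrative only; `NotificationQueue` and its methods are hypothetical, not an existing app API:

```typescript
// Hypothetical alert queue: show one alert at a time so prompts with
// similar button text ("Yes" vs "Yes, Export Data") can never overlap.
type QueuedAlert = { title: string; show: () => Promise<void> };

class NotificationQueue {
  private queue: QueuedAlert[] = [];
  private draining = false;

  enqueue(alert: QueuedAlert): void {
    this.queue.push(alert);
    void this.drain();
  }

  private async drain(): Promise<void> {
    if (this.draining) return; // only one drain loop at a time
    this.draining = true;
    while (this.queue.length > 0) {
      const next = this.queue.shift()!;
      await next.show(); // resolves when the alert is dismissed
    }
    this.draining = false;
  }
}
```

Serializing `show()` calls removes the simultaneous-display race described in the evidence above.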

## Risks, Limits, Assumptions
- **Trace file size**: Large trace files may impact storage and analysis time
- **Browser compatibility**: Trace viewer requires specific browser support
- **Test isolation**: Shared state between tests may affect investigation results
- **Timing sensitivity**: Tests may pass/fail based on system performance

## Next Steps
| Owner | Task | Exit Criteria | Target Date (UTC) |
|---|---|---|---|
| Development Team | Fix test selectors for multiple alerts | All tests pass consistently | 2025-08-22 |
| Development Team | Implement alert queuing system | No overlapping alerts with conflicting buttons | 2025-08-25 |
| Development Team | Add test IDs to alert buttons | Unique selectors for all UI elements | 2025-08-28 |

## References
- [Playwright Trace Viewer Documentation](https://playwright.dev/docs/trace-viewer)
- [Playwright Test Results](https://playwright.dev/docs/test-reporters)
- [Test Investigation Workflow](./research_diagnostic.mdc)

## Competence Hooks
- **Why this works**: Systematic investigation leverages Playwright's built-in debugging tools to identify root causes
- **Common pitfalls**: Generic selectors fail with multiple similar elements; timing issues create race conditions; alert stacking causes UI conflicts
- **Next skill unlock**: Implement unique test IDs and handle alert dismissal order in test flows
- **Teach-back**: "How would you investigate a Playwright test failure using error context, trace files, and page snapshots?"

## Collaboration Hooks
- **Reviewers**: QA team, test automation engineers
- **Sign-off checklist**: Error context analyzed, trace files reviewed, root cause identified, fix implemented and tested

## Assumptions & Limits
- Test results directory structure follows Playwright conventions
- Trace files are enabled in configuration (`trace: "retain-on-failure"`)
- Error context files contain valid YAML page snapshots
- Browser environment supports trace viewer functionality

---

**Status**: Active investigation directive
**Priority**: High
**Maintainer**: Development team
**Next Review**: 2025-09-21
@@ -273,6 +273,7 @@ import {
   didInfoForContact,
   displayAmount,
   getHeaders,
+  isDid,
   register,
   setVisibilityUtil,
 } from "../libs/endorserServer";

@@ -289,6 +290,7 @@ import {
   NOTIFY_REGISTRATION_ERROR,
   NOTIFY_SERVER_ACCESS_ERROR,
   NOTIFY_NO_IDENTITY_ERROR,
+  NOTIFY_CONTACT_INVALID_DID,
 } from "@/constants/notifications";
 import { THAT_UNNAMED_PERSON } from "@/constants/entities";
@@ -380,22 +382,29 @@ export default class DIDView extends Vue {

   /**
    * Determines which DID to display based on URL parameters
    * Falls back to active DID if no parameter provided
+   * Validates DID format and shows error for invalid DIDs
    */
   private async determineDIDToDisplay() {
     const pathParam = window.location.pathname.substring("/did/".length);
     let showDid = pathParam;

     if (!showDid) {
+      // No DID provided in URL, use active DID
       showDid = this.activeDid;
-      this.notifyDefaultToActiveDID();
+      if (showDid) {
+        this.notifyDefaultToActiveDID();
+      }
+    } else {
+      // DID provided in URL, validate it
+      const decodedDid = decodeURIComponent(showDid);
+      if (!isDid(decodedDid)) {
+        // Invalid DID format - show error and redirect
+        this.notify.error(NOTIFY_CONTACT_INVALID_DID.message, TIMEOUTS.LONG);
+        this.$router.push({ name: "home" });
+        return;
+      }
+      showDid = decodedDid;
     }

-    if (showDid) {
-      this.viewingDid = decodeURIComponent(showDid);
-    }
+    this.viewingDid = showDid;
   }

   /**
@@ -71,6 +71,7 @@
 import { test, expect } from '@playwright/test';
 import { UNNAMED_ENTITY_NAME } from '../src/constants/entities';
 import { deleteContact, generateAndRegisterEthrUser, importUser } from './testUtils';
+import { NOTIFY_CONTACT_INVALID_DID } from '../src/constants/notifications';

 test('Check activity feed - check that server is running', async ({ page }) => {
   // Load app homepage

@@ -170,6 +171,19 @@ test('Confirm test API setting (may fail if you are running your own Time Safari
   await expect(page.locator('#apiServerInput')).toHaveValue(endorserServer);
 });

+test('Check invalid DID shows error and redirects', async ({ page }) => {
+  await importUser(page, '00');
+
+  // Navigate to an invalid DID URL
+  await page.goto('./did/0');
+
+  // Should show error message about invalid DID format
+  await expect(page.getByText(NOTIFY_CONTACT_INVALID_DID.message)).toBeVisible();
+
+  // Should redirect to homepage
+  await expect(page).toHaveURL(/.*\/$/);
+});
+
 test('Check User 0 can register a random person', async ({ page }) => {
   await importUser(page, '00');
   const newDid = await generateAndRegisterEthrUser(page);
@@ -23,10 +23,11 @@ test('New offers for another user', async ({ page }) => {
   await page.getByPlaceholder('URL or DID, Name, Public Key').fill(autoCreatedDid + ', A Friend');
   await expect(page.locator('button > svg.fa-plus')).toBeVisible();
   await page.locator('button > svg.fa-plus').click();
-  await page.locator('div[role="alert"] button:has-text("No")').click(); // don't register
-  await expect(page.locator('div[role="alert"] h4:has-text("Success")')).toBeVisible();
-  await page.locator('div[role="alert"] button > svg.fa-xmark').click(); // dismiss info alert
+  await expect(page.locator('div[role="alert"] h4:has-text("Success")')).toBeVisible(); // wait for info alert to be visible…
+  await page.locator('div[role="alert"] button > svg.fa-xmark').click(); // …and dismiss it
+  await expect(page.locator('div[role="alert"] button > svg.fa-xmark')).toBeHidden(); // ensure alert is gone
+  await page.locator('div[role="alert"] button:text-is("No")').click(); // Dismiss register prompt
+  await page.locator('div[role="alert"] button:text-is("No, Not Now")').click(); // Dismiss export data prompt

   // show buttons to make offers directly to people
   await page.getByRole('button').filter({ hasText: /See Actions/i }).click();
@@ -158,10 +158,10 @@ export async function generateAndRegisterEthrUser(page: Page): Promise<string> {
     .fill(`${newDid}, ${contactName}`);
   await page.locator("button > svg.fa-plus").click();
   // register them
-  await page.locator('div[role="alert"] button:has-text("Yes")').click();
+  await page.locator('div[role="alert"] button:text-is("Yes")').click();
   // wait for it to disappear because the next steps may depend on alerts being gone
   await expect(
-    page.locator('div[role="alert"] button:has-text("Yes")')
+    page.locator('div[role="alert"] button:text-is("Yes")')
   ).toBeHidden();
   await expect(page.locator("li", { hasText: contactName })).toBeVisible();