docs: merge notification system docs into single Native-First guide

- Consolidate 5 notification-system-* files into doc/notification-system.md
- Add web-push cleanup guide and Start-on-Login glossary entry
- Configure markdownlint for consistent formatting
- Remove web-push references, focus on native OS scheduling

Reduces maintenance overhead while preserving all essential information
in a single, well-formatted reference document.
Matthew Raymer
2025-09-07 13:16:23 +00:00
parent 79b226e7d2
commit 0dcb1d029e
23 changed files with 874 additions and 2930 deletions
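The emphasis changes running through the diffs below (`*text*` to `_text_`, with `**bold**` untouched) match markdownlint's emphasis-style rules. A minimal sketch of the implied config: MD049 and MD050 are real markdownlint rule IDs, but the exact file name and settings used in this commit are an assumption.

```json
{
  "default": true,
  "MD049": { "style": "underscore" },
  "MD050": { "style": "asterisk" }
}
```

Placed in `.markdownlint.json`, this makes `markdownlint "**/*.md"` flag asterisk emphasis while leaving asterisk-style bold alone.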

View File

@@ -21,7 +21,7 @@ alwaysApply: false
## Purpose
-All interactions must *increase the human's competence over time* while
+All interactions must _increase the human's competence over time_ while
completing the task efficiently. The model may handle menial work and memory
extension, but must also promote learning, autonomy, and healthy work habits.
The model should also **encourage human interaction and collaboration** rather
@@ -31,7 +31,7 @@ machine-driven steps.
## Principles
-1. Competence over convenience: finish the task *and* leave the human more
+1. Competence over convenience: finish the task _and_ leave the human more
capable next time.
@@ -75,7 +75,7 @@ assumptions if unanswered.
### timebox_minutes
-*integer or null* — When set to a positive integer (e.g., `5`), this acts
+_integer or null_ — When set to a positive integer (e.g., `5`), this acts
as a **time budget** guiding the model to prioritize delivering the most
essential parts of the task within that constraint.
@@ -91,7 +91,7 @@ Behavior when set:
3. **Signal Skipped Depth** — Omitted details should be listed under
-*Deferred for depth*.
+_Deferred for depth_.
4. **Order by Value** — Start with blocking or high-value items, then
@@ -198,7 +198,7 @@ Default: Doer + short Mentor notes.
## Self-Check (model, before responding)
-- [ ] Task done *and* at least one competence lever included (≤120 words
+- [ ] Task done _and_ at least one competence lever included (≤120 words
total)
- [ ] At least one collaboration/discussion hook present
- [ ] Output follows the **Output Contract** sections

View File

@@ -53,7 +53,7 @@ evidence-backed steps**.
- **Verifiable Outputs**: Include expected results, status codes, or
error messages
-- **Cite evidence** for *Works/Doesn't* items (timestamps, filenames,
+- **Cite evidence** for _Works/Doesn't_ items (timestamps, filenames,
line numbers, IDs/status codes, or logs).
## Required Sections
@@ -181,8 +181,8 @@ Before publishing, verify:
---
-**Status**: 🚢 ACTIVE — General ruleset extending *Base Context — Human
-Competence First*
+**Status**: 🚢 ACTIVE — General ruleset extending _Base Context — Human
+Competence First_
**Priority**: Critical
**Estimated Effort**: Ongoing reference

View File

@@ -16,7 +16,7 @@ language: Match repository languages and conventions
where it occurs; avoid new layers, indirection, or patterns unless
strictly necessary.
2. **Keep scope tight.** Implement only what is needed to satisfy the
-acceptance criteria and tests for *this* issue.
+acceptance criteria and tests for _this_ issue.
3. **Avoid speculative abstractions.** Use the **Rule of Three**:
don't extract helpers/patterns until the third concrete usage proves
the shape.
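The Rule of Three in item 3 can be sketched concretely. This is a hypothetical Python illustration, not code from this repository: tolerate duplication for the first two usages, and extract a helper only once a third concrete usage proves the shape.

```python
# First and second usages: keep the small duplication inline.
def parse_user(row):
    return {"name": row["name"].strip().lower()}

def parse_group(row):
    return {"title": row["title"].strip().lower()}

# Third concrete usage proves the shape, so now extract the helper.
def normalize(value):
    """Shared normalization, extracted only after three usages agreed."""
    return value.strip().lower()

def parse_project(row):
    return {"label": normalize(row["label"])}
```

Extracting `normalize` after the first `parse_user` would have been a speculative abstraction; by the third caller, the shared shape is evidence, not guesswork.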
@@ -29,7 +29,7 @@ language: Match repository languages and conventions
7. **Targeted tests only.** Add the smallest set of tests that prove
the fix and guard against regression; don't rewrite suites.
8. **Document the "why enough."** Include a one-paragraph note
-explaining why this minimal solution is sufficient *now*.
+explaining why this minimal solution is sufficient _now_.
## Future-Proofing Requires Evidence + Discussion

View File

@@ -9,8 +9,8 @@ alwaysApply: false
**Date**: 2025-08-19
**Status**: 🎯 **ACTIVE** - Asset management guidelines
-*Scope: Assets Only (icons, splashes, image pipelines) — not overall build
-orchestration*
+_Scope: Assets Only (icons, splashes, image pipelines) — not overall build
+orchestration_
## Intent

View File

@@ -40,7 +40,7 @@ feature development, issue investigations, ADRs, and documentation**.
`2025-08-17`).
-- Avoid ambiguous terms like *recently*, *last month*, or *soon*.
+- Avoid ambiguous terms like _recently_, _last month_, or _soon_.
- For time-based experiments (e.g., A/B tests), always include:

View File

@@ -19,7 +19,7 @@
- Optionally provide UTC alongside if context requires cross-team clarity.
-- When interpreting relative terms like *now*, *today*, *last week*:
+- When interpreting relative terms like _now_, _today_, _last week_:
- Resolve them against the **developer's current time**.
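The resolution rule above can be sketched as a small helper. This is a hypothetical illustration (the term table and function name are assumptions): relative terms resolve against the developer's current local time, with UTC provided alongside for cross-team clarity.

```python
from datetime import datetime, timedelta, timezone

def resolve_relative(term: str, now: datetime) -> datetime:
    """Resolve a relative term against the developer's current time."""
    offsets = {
        "now": timedelta(0),
        "today": timedelta(0),
        "last week": timedelta(days=7),
    }
    return now - offsets[term]

# Developer's current local time, timezone-aware.
local_now = datetime.now().astimezone()
resolved = resolve_relative("last week", local_now)
# Optionally provide UTC alongside for cross-team clarity:
print(resolved.isoformat(), "| UTC:", resolved.astimezone(timezone.utc).isoformat())
```

The key point is the anchor: `datetime.now().astimezone()` captures the developer's clock and zone, rather than a naive or server-side timestamp.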

View File

@@ -12,19 +12,23 @@ To ensure clarity, efficiency, and high-value documentation within code and proj
---
## 1. Documentation and References Must Add Clear Value
-- Only include documentation, comments, or reference links when they provide *new, meaningful information* that assists understanding or decision-making.
+- Only include documentation, comments, or reference links when they provide _new, meaningful information_ that assists understanding or decision-making.
- Avoid duplicating content already obvious in the codebase, version history, or linked project documents.
---
## 2. Eliminate Redundant or Noisy References
- Remove references that serve no purpose beyond filling space.
- Model agents may automatically flag and suggest removal of trivial references (e.g., links to unchanged boilerplate or self-evident context).
---
## 3. Explicit Role of Model Agents
Model agents are **active participants** in documentation quality control. Their tasks include:
- **Relevance Evaluation**: Automatically analyze references for their substantive contribution before inclusion.
- **Redundancy Detection**: Flag duplicate or trivial references across commits, files, or tasks.
- **Context Linking**: Suggest appropriate higher-level docs (designs, ADRs, meeting notes) when a code change touches multi-stage or cross-team items.
@@ -36,6 +40,7 @@ Contributors must treat agent recommendations as **first-pass reviews** but rema
---
## 4. Contextual References for Complex Items
- Use **centralized references** for multi-stage features (e.g., architectural docs, research threads).
- Keep inline code comments light; push broader context into centralized documents.
- Model agents may auto-summarize complex chains of discussion and attach them as a single reference point.
@@ -43,24 +48,28 @@ Contributors must treat agent recommendations as **first-pass reviews** but rema
---
## 5. Centralization of Broader Context
- Store overarching context (design docs, proposals, workflows) in accessible, well-indexed places.
- Model agents should assist by **generating reference maps** that track where docs are cited across the codebase.
---
## 6. Focused Documentation
- Documentation should explain **why** and **how** decisions are made, not just what was changed.
- Model agents can auto-generate first-pass explanations from commit metadata, diffs, and linked issues—but humans must refine them for accuracy and intent.
---
## 7. Review and Accountability
- Reviewers and team leads must reject submissions containing unnecessary or low-quality documentation.
- Model agent outputs are aids, not replacements—contributors remain responsible for **final clarity and relevance**.
---
## 8. Continuous Improvement and Agent Feedback Loops
- Encourage iterative development of model agents so their evaluations become more precise over time.
- Contributions should include **feedback on agent suggestions** (e.g., accepted, rejected, or corrected) to train better future outputs.
- Agents should log patterns of “rejected” suggestions for refinement.
@@ -84,4 +93,4 @@ flowchart TD
---
-✅ **Outcome:** By integrating disciplined contributor standards with **model agent augmentation**, the team achieves documentation that is consistently *relevant, concise, centralized, and decision-focused*. AI ensures coverage and noise reduction, while humans ensure precision and judgment.
+✅ **Outcome:** By integrating disciplined contributor standards with **model agent augmentation**, the team achieves documentation that is consistently _relevant, concise, centralized, and decision-focused_. AI ensures coverage and noise reduction, while humans ensure precision and judgment.

View File

@@ -16,7 +16,7 @@ inherits: base_context.mdc
**Author**: System/Shared
**Date**: 2025-08-21 (UTC)
-**Status**: 🚢 ACTIVE — General ruleset extending *Base Context — Human Competence First*
+**Status**: 🚢 ACTIVE — General ruleset extending _Base Context — Human Competence First_
> **Alignment with Base Context**
>
@@ -40,7 +40,7 @@ Produce a **developer-grade, reproducible guide** for any technical topic that o
- **APIs**: `curl` + one client library (e.g., `httpx` for Python).
- **CLIs**: literal command blocks and expected output snippets.
- **Code**: minimal, self-contained samples (language appropriate).
-- Cite **evidence** for *Works/Doesn't* items (timestamps, filenames, line numbers, IDs/status codes, or logs).
+- Cite **evidence** for _Works/Doesn't_ items (timestamps, filenames, line numbers, IDs/status codes, or logs).
- If something is unknown, output `TODO:<missing>` — **never invent**.
## Required Sections (extends Base Output Contract)
@@ -56,9 +56,9 @@ Follow this exact order **after** the Base Contract's **Objective → Result
4. **Architecture / Process Overview**
- Short prose + **one diagram** selected from the list above.
5. **Interfaces & Contracts (choose one)**
-- **API-based**: Endpoint table (*Step, Method, Path/URL, Auth, Key Headers/Params, Sample Req/Resp ref*).
-- **Data/Files**: I/O contract table (*Source, Format, Schema/Columns, Size, Validation rules*).
-- **Systems/Hardware**: Interfaces table (*Port/Bus, Protocol, Voltage/Timing, Constraints*).
+- **API-based**: Endpoint table (_Step, Method, Path/URL, Auth, Key Headers/Params, Sample Req/Resp ref_).
+- **Data/Files**: I/O contract table (_Source, Format, Schema/Columns, Size, Validation rules_).
+- **Systems/Hardware**: Interfaces table (_Port/Bus, Protocol, Voltage/Timing, Constraints_).
6. **Repro: End-to-End Procedure**
- Minimal copy-paste steps with code/commands and **expected outputs**.
7. **What Works (with Evidence)**
@@ -74,10 +74,10 @@ Follow this exact order **after** the Base Contract's **Objective → Result
> **Competence Hooks (per Base Context; keep lightweight):**
>
-> - *Why this works* (≤3 bullets) — core invariants or guarantees.
-> - *Common pitfalls* (≤3 bullets) — the traps we saw in evidence.
-> - *Next skill unlock* (1 line) — the next capability to implement/learn.
-> - *Teach-back* (1 line) — prompt the reader to restate the flow/architecture.
+> - _Why this works_ (≤3 bullets) — core invariants or guarantees.
+> - _Common pitfalls_ (≤3 bullets) — the traps we saw in evidence.
+> - _Next skill unlock_ (1 line) — the next capability to implement/learn.
+> - _Teach-back_ (1 line) — prompt the reader to restate the flow/architecture.
> **Collaboration Hooks (per Base Context):**
>
@@ -203,10 +203,10 @@ Follow this exact order **after** the Base Contract's **Objective → Result
## Competence Hooks
-- *Why this works*: <≤3 bullets>
-- *Common pitfalls*: <≤3 bullets>
-- *Next skill unlock*: <1 line>
-- *Teach-back*: <1 line>
+- _Why this works_: <≤3 bullets>
+- _Common pitfalls_: <≤3 bullets>
+- _Next skill unlock_: <1 line>
+- _Teach-back_: <1 line>
## Collaboration Hooks
@@ -226,7 +226,7 @@ Follow this exact order **after** the Base Contract's **Objective → Result
**Notes for Implementers:**
-- Respect Base *Do-Not* (no filler, no invented facts, no censorship).
+- Respect Base _Do-Not_ (no filler, no invented facts, no censorship).
- Prefer clarity over completeness when timeboxed; capture unknowns explicitly.
- Apply historical comment management rules (see `.cursor/rules/historical_comment_management.mdc`)
- Apply realistic time estimation rules (see `.cursor/rules/realistic_time_estimation.mdc`)

View File

@@ -73,7 +73,7 @@
### Avoid
-- Vague: *improved, enhanced, better*
+- Vague: _improved, enhanced, better_
- Trivialities: tiny docs, one-liners, pure lint cleanups (separate,