
docs: Add comprehensive documentation to migration system modules

- Add detailed file-level documentation with architecture overview and usage examples
- Document all interfaces, classes, and methods with JSDoc comments
- Include migration philosophy, best practices, and schema evolution guidelines
- Add extensive inline documentation for database schema and table purposes
- Document privacy and security considerations in database design
- Provide troubleshooting guidance and logging explanations
- Add template and examples for future migration development
- Include platform-specific documentation for Capacitor SQLite integration
- Document validation and integrity checking processes with detailed steps

The migration system is now thoroughly documented for maintainability and
onboarding of new developers to the codebase.
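The registration pattern this commit documents (sequential descriptive names, single-purpose SQL, duplicate rejection) can be sketched as a standalone snippet. The `Migration` shape and registry behavior mirror the diff, but this in-memory registry is illustrative only, not the actual module:

```typescript
// Minimal sketch of the documented migration-registration pattern.
interface Migration {
  name: string; // e.g. "001_initial" - sequential, descriptive
  sql: string;  // SQL statements for this schema version
}

class MigrationRegistry {
  private migrations: Migration[] = [];

  registerMigration(migration: Migration): void {
    if (!migration.name || migration.name.trim() === "") {
      throw new Error("Migration name cannot be empty");
    }
    if (this.migrations.some((m) => m.name === migration.name)) {
      throw new Error(`Migration with name '${migration.name}' already exists`);
    }
    this.migrations.push(migration);
  }

  getMigrations(): Migration[] {
    return [...this.migrations]; // defensive copy
  }
}

const registry = new MigrationRegistry();
registry.registerMigration({
  name: "001_initial",
  sql: "CREATE TABLE accounts (id INTEGER PRIMARY KEY);",
});

// Duplicate names are rejected, keeping the migration list unambiguous.
let duplicateRejected = false;
try {
  registry.registerMigration({ name: "001_initial", sql: "SELECT 1;" });
} catch {
  duplicateRejected = true;
}
```

Rejecting duplicates at registration time surfaces copy-paste mistakes immediately, rather than at migration run time.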
Branch: streamline-attempt
Matthew Raymer 5 days ago
parent commit 623e1bf3df

  1. src/db-sql/migration.ts (463)
  2. src/services/migrationService.ts (286)
  3. src/services/platforms/CapacitorPlatformService.ts (162)

src/db-sql/migration.ts (463)

@@ -1,10 +1,58 @@
/**
* TimeSafari Database Migration Definitions
*
* This module defines all database schema migrations for the TimeSafari application.
* Each migration represents a specific version of the database schema and contains
* the SQL statements needed to upgrade from the previous version.
*
* ## Migration Philosophy
*
* TimeSafari follows a structured approach to database migrations:
*
* 1. **Sequential Numbering**: Migrations are numbered sequentially (001, 002, etc.)
* 2. **Descriptive Names**: Each migration has a clear, descriptive name
* 3. **Single Purpose**: Each migration focuses on one logical schema change
* 4. **Forward-Only**: Migrations are designed to move the schema forward
* 5. **Idempotent Design**: The migration system handles re-runs gracefully
*
* ## Migration Structure
*
* Each migration follows this pattern:
* ```typescript
* {
* name: "XXX_descriptive_name",
* sql: "SQL statements to execute"
* }
* ```
*
* ## Database Architecture
*
* TimeSafari uses SQLite for local data storage with the following core tables:
*
* - **accounts**: User identity and cryptographic keys
* - **secret**: Encrypted application secrets
* - **settings**: Application configuration and preferences
* - **contacts**: User's contact network and trust relationships
* - **logs**: Application event logging and debugging
* - **temp**: Temporary data storage for operations
*
* ## Privacy and Security
*
* The database schema is designed with privacy-first principles:
* - User identifiers (DIDs) are kept separate from personal data
* - Cryptographic keys are stored securely
* - Contact visibility is user-controlled
* - All sensitive data can be encrypted at rest
*
* ## Usage
*
* This file is automatically loaded during application startup. The migrations
* are registered with the migration service and applied as needed based on the
* current database state.
*
* @author Matthew Raymer
* @version 1.0.0
* @since 2025-06-30
*/
import {
@@ -14,158 +62,301 @@ import {
import { DEFAULT_ENDORSER_API_SERVER } from "@/constants/app";
import { arrayBufferToBase64 } from "@/libs/crypto";
/**
* Generate a cryptographically secure random secret for the secret table
*
* Note: This approach stores the secret alongside user data for convenience.
* In a production environment with hardware security modules or dedicated
* secure storage, this secret should be stored separately. As users build
* their trust networks and sign more records, they should migrate to more
* secure key management solutions.
*
* @returns Base64-encoded random secret (32 bytes)
*/
function generateDatabaseSecret(): string {
  const randomBytes = new Uint8Array(32);
  crypto.getRandomValues(randomBytes);
  return arrayBufferToBase64(randomBytes.buffer);
}
// Generate the secret that will be used for this database instance
const databaseSecret = generateDatabaseSecret();
/**
* Migration 001: Initial Database Schema
*
* This migration creates the foundational database schema for TimeSafari.
* It establishes the core tables needed for user identity management,
* contact networks, application settings, and operational logging.
*
* ## Tables Created:
*
* ### accounts
* Stores user identities and cryptographic key pairs. Each account represents
* a unique user identity with associated cryptographic capabilities.
*
* - `id`: Primary key for internal references
* - `did`: Decentralized Identifier (unique across the network)
* - `privateKeyHex`: Private key for signing and encryption (hex-encoded)
* - `publicKeyHex`: Public key for verification and encryption (hex-encoded)
* - `derivationPath`: BIP44 derivation path for hierarchical key generation
* - `mnemonic`: BIP39 mnemonic phrase for key recovery
*
* ### secret
* Stores encrypted application secrets and sensitive configuration data.
* This table contains cryptographic material needed for secure operations.
*
* - `id`: Primary key (always 1 for singleton pattern)
* - `hex`: Encrypted secret data in hexadecimal format
*
* ### settings
* Application-wide configuration and user preferences. This table stores
* both system settings and user-customizable preferences.
*
* - `name`: Setting name/key (unique identifier)
* - `value`: Setting value (JSON-serializable data)
*
* ### contacts
* User's contact network and trust relationships. This table manages the
* social graph and trust network that enables TimeSafari's collaborative features.
*
* - `did`: Contact's Decentralized Identifier (primary key)
* - `name`: Display name for the contact
* - `publicKeyHex`: Contact's public key for verification
* - `endorserApiServer`: API server URL for this contact's endorsements
* - `registered`: Timestamp when contact was first added
* - `lastViewedClaimId`: Last claim/activity viewed from this contact
* - `seenWelcomeScreen`: Whether contact has completed onboarding
*
* ### logs
* Application event logging for debugging and audit trails. This table
* captures important application events for troubleshooting and monitoring.
*
* - `id`: Auto-incrementing log entry ID
* - `message`: Log message content
* - `level`: Log level (error, warn, info, debug)
* - `timestamp`: When the log entry was created
* - `context`: Additional context data (JSON format)
*
* ### temp
* Temporary data storage for multi-step operations. This table provides
* transient storage for operations that span multiple user interactions.
*
* - `id`: Unique identifier for the temporary data
* - `data`: JSON-serialized temporary data
* - `created`: Timestamp when data was stored
* - `expires`: Optional expiration timestamp
*
* ## Initial Data
*
* The migration also populates initial configuration:
* - Default endorser API server URL
* - Application database secret
* - Welcome screen tracking
*/
registerMigration({
  name: "001_initial",
  sql: `
    -- User accounts and identity management
    -- Each account represents a unique user with cryptographic capabilities
    CREATE TABLE accounts (
      id INTEGER PRIMARY KEY,
      did TEXT UNIQUE NOT NULL,    -- Decentralized Identifier
      privateKeyHex TEXT NOT NULL, -- Private key (hex-encoded)
      publicKeyHex TEXT NOT NULL,  -- Public key (hex-encoded)
      derivationPath TEXT,         -- BIP44 derivation path
      mnemonic TEXT                -- BIP39 recovery phrase
    );
    -- Encrypted application secrets and sensitive configuration
    -- Singleton table (id always = 1) for application-wide secrets
    CREATE TABLE secret (
      id INTEGER PRIMARY KEY CHECK (id = 1), -- Enforce singleton
      hex TEXT NOT NULL                      -- Encrypted secret data
    );
    -- Application settings and user preferences
    -- Key-value store for configuration data
    CREATE TABLE settings (
      name TEXT PRIMARY KEY, -- Setting name/identifier
      value TEXT             -- Setting value (JSON-serializable)
    );
    -- User's contact network and trust relationships
    -- Manages the social graph for collaborative features
    CREATE TABLE contacts (
      did TEXT PRIMARY KEY,                   -- Contact's DID
      name TEXT,                              -- Display name
      publicKeyHex TEXT,                      -- Contact's public key
      endorserApiServer TEXT,                 -- API server for endorsements
      registered TEXT,                        -- Registration timestamp
      lastViewedClaimId TEXT,                 -- Last viewed activity
      seenWelcomeScreen BOOLEAN DEFAULT FALSE -- Onboarding completion
    );
    -- Application event logging for debugging and audit
    -- Captures important events for troubleshooting
    CREATE TABLE logs (
      id INTEGER PRIMARY KEY AUTOINCREMENT,
      message TEXT NOT NULL, -- Log message
      level TEXT NOT NULL,   -- Log level (error/warn/info/debug)
      timestamp TEXT DEFAULT CURRENT_TIMESTAMP,
      context TEXT           -- Additional context (JSON)
    );
    -- Temporary data storage for multi-step operations
    -- Provides transient storage for complex workflows
    CREATE TABLE temp (
      id TEXT PRIMARY KEY, -- Unique identifier
      data TEXT NOT NULL,  -- JSON-serialized data
      created TEXT DEFAULT CURRENT_TIMESTAMP,
      expires TEXT         -- Optional expiration
    );
    -- Initialize default application settings
    -- These settings provide the baseline configuration for new installations
    INSERT INTO settings (name, value) VALUES
      ('apiServer', '${DEFAULT_ENDORSER_API_SERVER}'),
      ('seenWelcomeScreen', 'false');

    -- Initialize application secret
    -- This secret is used for encrypting sensitive data within the application
    INSERT INTO secret (id, hex) VALUES (1, '${databaseSecret}');
  `,
});
/**
* Migration 002: Add Content Visibility Control to Contacts
*
* This migration enhances the contacts table with privacy controls, allowing
* users to manage what content they want to see from each contact. This supports
* TimeSafari's privacy-first approach by giving users granular control over
* their information exposure.
*
* ## Changes Made:
*
* ### contacts.iViewContent
* New boolean column that controls whether the user wants to see content
* (activities, projects, offers) from this contact in their feeds and views.
*
* - `TRUE` (default): User sees all content from this contact
* - `FALSE`: User's interface filters out content from this contact
*
* ## Use Cases:
*
* 1. **Privacy Management**: Users can maintain contacts for trust/verification
* purposes while limiting information exposure
*
* 2. **Feed Curation**: Users can curate their activity feeds by selectively
* hiding content from certain contacts
*
* 3. **Professional Separation**: Users can separate professional and personal
* networks while maintaining cryptographic trust relationships
*
* 4. **Graduated Privacy**: Users can add contacts with limited visibility
* initially, then expand access as trust develops
*
* ## Privacy Architecture:
*
* This column works in conjunction with TimeSafari's broader privacy model:
* - Contact relationships are still maintained for verification
* - Cryptographic trust is preserved regardless of content visibility
* - Users can change visibility settings at any time
* - The setting only affects the local user's view, not the contact's capabilities
*
* ## Default Behavior:
*
* All existing contacts default to `TRUE` (visible) to maintain current
* user experience. New contacts will also default to visible, with users
* able to adjust visibility as needed.
*/
registerMigration({
  name: "002_add_iViewContent_to_contacts",
  sql: `
    -- Add content visibility control to contacts table
    -- This allows users to manage what content they see from each contact
    -- while maintaining the cryptographic trust relationship
    ALTER TABLE contacts ADD COLUMN iViewContent BOOLEAN DEFAULT TRUE;
  `,
});
/**
* Template for Future Migrations
*
* When adding new migrations, follow this pattern:
*
* ```typescript
* registerMigration({
*   name: "003_descriptive_name",
*   sql: `
*     -- Clear comment explaining what this migration does
*     -- and why it's needed
*
*     ALTER TABLE existing_table ADD COLUMN new_column TYPE DEFAULT value;
*
*     -- Or create new tables:
*     CREATE TABLE new_table (
*       id INTEGER PRIMARY KEY,
*       -- ... other columns with comments
*     );
*
*     -- Initialize any required data
*     INSERT INTO new_table (column) VALUES ('initial_value');
*   `,
* });
* ```
*
* ## Migration Best Practices:
*
* 1. **Clear Naming**: Use descriptive names that explain the change
* 2. **Documentation**: Document the purpose and impact of each change
* 3. **Backward Compatibility**: Consider how changes affect existing data
* 4. **Default Values**: Provide sensible defaults for new columns
* 5. **Data Migration**: Include any necessary data transformation
* 6. **Testing**: Test migrations on representative data sets
* 7. **Performance**: Consider the impact on large datasets
*
* ## Schema Evolution Guidelines:
*
* - **Additive Changes**: Prefer adding new tables/columns over modifying existing ones
* - **Nullable Columns**: New columns should be nullable or have defaults
* - **Index Creation**: Add indexes for new query patterns
* - **Data Integrity**: Maintain referential integrity and constraints
* - **Privacy Preservation**: Ensure new schema respects privacy principles
*/
/**
* Run all registered migrations
*
* This function is called during application initialization to ensure the
* database schema is up to date. It delegates to the migration service
* which handles the actual migration execution, tracking, and validation.
*
* The migration service will:
* 1. Check which migrations have already been applied
* 2. Apply any pending migrations in order
* 3. Validate that schema changes were successful
* 4. Record applied migrations for future reference
*
* @param sqlExec - Function to execute SQL statements
* @param sqlQuery - Function to execute SQL queries
* @param extractMigrationNames - Function to parse migration names from results
* @returns Promise that resolves when migrations are complete
*
* @example
* ```typescript
* // Called from platform service during database initialization
* await runMigrations(
*   (sql, params) => db.run(sql, params),
*   (sql, params) => db.query(sql, params),
*   (result) => new Set(result.values.map(row => row[0]))
* );
* ```
*/
export async function runMigrations<T>(
  sqlExec: (sql: string, params?: unknown[]) => Promise<unknown>,
  sqlQuery: (sql: string, params?: unknown[]) => Promise<T>,
  extractMigrationNames: (result: T) => Set<string>,
): Promise<void> {
  return runMigrationsService(sqlExec, sqlQuery, extractMigrationNames);
}
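Because each platform returns query results in its own shape, `runMigrations` takes an `extractMigrationNames` callback. A minimal sketch of such an adapter, assuming rows come back as arrays under a `values` key (the shape used in the JSDoc example; other drivers may differ):

```typescript
// Sketch of the extractMigrationNames adapter that runMigrations expects.
// The { values: unknown[][] } result shape is an assumption for illustration,
// matching the documented `result.values.map(row => row[0])` example.
interface QueryResult {
  values: unknown[][]; // each row of `SELECT name FROM migrations`
}

function extractMigrationNames(result: QueryResult): Set<string> {
  // Column 0 of each row carries one applied migration name.
  return new Set(result.values.map((row) => String(row[0])));
}

// Simulated result of `SELECT name FROM migrations`:
const fakeResult: QueryResult = {
  values: [["001_initial"], ["002_add_iViewContent_to_contacts"]],
};
const applied = extractMigrationNames(fakeResult);
```

Returning a `Set` makes the "already applied?" check in the migration loop an O(1) membership test.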

src/services/migrationService.ts (286)

@@ -1,63 +1,174 @@
/**
* Database Migration Service for TimeSafari
*
* This module provides a comprehensive database migration system that manages
* schema changes as users upgrade their TimeSafari application over time.
* The system ensures that database changes are applied safely, tracked properly,
* and can handle edge cases gracefully.
*
* ## Architecture Overview
*
* The migration system follows these key principles:
*
* 1. **Single Application**: Each migration runs exactly once per database
* 2. **Tracked Execution**: All applied migrations are recorded in a migrations table
* 3. **Schema Validation**: Actual database schema is validated before and after migrations
* 4. **Graceful Recovery**: Handles cases where schema exists but tracking is missing
* 5. **Comprehensive Logging**: Detailed logging for debugging and monitoring
*
* ## Migration Flow
*
* ```
* 1. Create migrations table (if needed)
* 2. Query existing applied migrations
* 3. For each registered migration:
* a. Check if recorded as applied
* b. Check if schema already exists
* c. Skip if already applied
* d. Apply migration SQL
* e. Validate schema was created
* f. Record migration as applied
* 4. Final validation of all migrations
* ```
*
* ## Usage Example
*
* ```typescript
* // Register migrations (typically in migration.ts)
* registerMigration({
* name: "001_initial",
* sql: "CREATE TABLE accounts (id INTEGER PRIMARY KEY, ...)"
* });
*
* // Run migrations (typically in platform service)
* await runMigrations(sqlExec, sqlQuery, extractMigrationNames);
* ```
*
* ## Error Handling
*
* The system handles several error scenarios:
* - Duplicate table/column errors (schema already exists)
* - Migration tracking inconsistencies
* - Database connection issues
* - Schema validation failures
*
* @author Matthew Raymer
* @version 1.0.0
* @since 2025-06-30
*/
import { logger } from "../utils/logger";
/**
* Migration interface for database schema migrations
*
* Represents a single database migration that can be applied to upgrade
* the database schema. Each migration should be idempotent and focused
* on a single schema change.
*
* @interface Migration
*/
interface Migration {
/** Unique identifier for the migration (e.g., "001_initial", "002_add_column") */
name: string;
/** SQL statement(s) to execute for this migration */
sql: string;
}
/**
* Migration validation result
*
* Contains the results of validating that a migration was successfully
* applied by checking the actual database schema.
*
* @interface MigrationValidation
*/
interface MigrationValidation {
/** Whether the migration validation passed overall */
isValid: boolean;
/** Whether expected tables exist */
tableExists: boolean;
/** Whether expected columns exist */
hasExpectedColumns: boolean;
/** List of validation errors encountered */
errors: string[];
}
/**
* Migration registry to store and manage database migrations
*
* This class maintains a registry of all migrations that need to be applied
* to the database. It uses the singleton pattern to ensure migrations are
* registered once and can be accessed globally.
*
* @class MigrationRegistry
*/
class MigrationRegistry {
/** Array of registered migrations */
private migrations: Migration[] = [];
/**
* Register a migration with the registry
*
* Adds a migration to the list of migrations that will be applied when
* runMigrations() is called. Migrations should be registered in order
* of their intended execution.
*
* @param migration - The migration to register
* @throws {Error} If migration name is empty or already exists
*
* @example
* ```typescript
* registry.registerMigration({
* name: "001_create_users_table",
* sql: "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)"
* });
* ```
*/
registerMigration(migration: Migration): void {
if (!migration.name || migration.name.trim() === '') {
throw new Error('Migration name cannot be empty');
}
if (this.migrations.some(m => m.name === migration.name)) {
throw new Error(`Migration with name '${migration.name}' already exists`);
}
this.migrations.push(migration);
}
/**
* Get all registered migrations
*
* Returns a copy of all migrations that have been registered with this
* registry. The migrations are returned in the order they were registered.
*
* @returns Array of registered migrations (defensive copy)
*/
getMigrations(): Migration[] {
return [...this.migrations];
}
/**
* Clear all registered migrations
*
* Removes all migrations from the registry. This is primarily used for
* testing purposes to ensure a clean state between test runs.
*
* @internal Used primarily for testing
*/
clearMigrations(): void {
this.migrations = [];
}
/**
* Get the count of registered migrations
*
* @returns Number of migrations currently registered
*/
getCount(): number {
return this.migrations.length;
}
}
// Create a singleton instance of the migration registry
@@ -66,10 +177,29 @@ const migrationRegistry = new MigrationRegistry();
/**
* Register a migration with the migration service
*
* This is the primary public API for registering database migrations.
* Each migration should represent a single, focused schema change that
* can be applied atomically.
*
* @param migration - The migration to register
* @throws {Error} If migration is invalid
*
* @example
* ```typescript
* registerMigration({
*   name: "001_initial_schema",
*   sql: `
*     CREATE TABLE accounts (
*       id INTEGER PRIMARY KEY,
*       did TEXT UNIQUE NOT NULL,
*       privateKeyHex TEXT NOT NULL,
*       publicKeyHex TEXT NOT NULL,
*       derivationPath TEXT,
*       mnemonic TEXT
*     );
*   `
* });
* ```
*/
export function registerMigration(migration: Migration): void {
migrationRegistry.registerMigration(migration);
@@ -77,6 +207,23 @@ export function registerMigration(migration: Migration): void {
/**
* Validate that a migration was successfully applied by checking schema
*
* This function performs post-migration validation to ensure that the
* expected database schema changes were actually applied. It checks for
* the existence of tables, columns, and other schema elements that should
* have been created by the migration.
*
* @param migration - The migration to validate
* @param sqlQuery - Function to execute SQL queries
* @returns Promise resolving to validation results
*
* @example
* ```typescript
* const validation = await validateMigrationApplication(migration, sqlQuery);
* if (!validation.isValid) {
* console.error('Migration validation failed:', validation.errors);
* }
* ```
*/
async function validateMigrationApplication<T>(
migration: Migration,
@@ -91,7 +238,7 @@ async function validateMigrationApplication<T>(
try {
if (migration.name === "001_initial") {
// Validate core tables exist for initial migration
const tables = ['accounts', 'secret', 'settings', 'contacts', 'logs', 'temp'];
for (const tableName of tables) {
@@ -104,8 +251,10 @@ async function validateMigrationApplication<T>(
console.error(`❌ [Migration-Validation] Table ${tableName} missing:`, error);
}
}
validation.tableExists = validation.errors.length === 0;
} else if (migration.name === "002_add_iViewContent_to_contacts") {
// Validate iViewContent column exists in contacts table
try {
await sqlQuery(`SELECT iViewContent FROM contacts LIMIT 1`);
validation.hasExpectedColumns = true;
@@ -116,6 +265,12 @@ async function validateMigrationApplication<T>(
console.error(`❌ [Migration-Validation] Column iViewContent missing:`, error);
}
}
// Add validation for future migrations here
// } else if (migration.name === "003_future_migration") {
// // Validate future migration schema changes
// }
} catch (error) {
validation.isValid = false;
validation.errors.push(`Validation error: ${error}`);
@@ -127,6 +282,23 @@ async function validateMigrationApplication<T>(
/**
* Check if migration is already applied by examining actual schema
*
* This function performs schema introspection to determine if a migration
* has already been applied, even if it's not recorded in the migrations
* table. This is useful for handling cases where the database schema exists
* but the migration tracking got out of sync.
*
* @param migration - The migration to check
* @param sqlQuery - Function to execute SQL queries
* @returns Promise resolving to true if schema already exists
*
* @example
* ```typescript
* const schemaExists = await isSchemaAlreadyPresent(migration, sqlQuery);
* if (schemaExists) {
* console.log('Schema already exists, skipping migration');
* }
* ```
*/
async function isSchemaAlreadyPresent<T>(
migration: Migration,
@@ -134,13 +306,14 @@ async function isSchemaAlreadyPresent<T>(
): Promise<boolean> {
try {
if (migration.name === "001_initial") {
// Check if accounts table exists (primary indicator of initial migration)
const result = await sqlQuery(`SELECT name FROM sqlite_master WHERE type='table' AND name='accounts'`) as any;
const hasTable = result?.values?.length > 0 || (Array.isArray(result) && result.length > 0);
console.log(`🔍 [Migration-Schema] Initial migration schema check - accounts table exists: ${hasTable}`);
return hasTable;
} else if (migration.name === "002_add_iViewContent_to_contacts") {
// Check if iViewContent column exists in contacts table
try {
await sqlQuery(`SELECT iViewContent FROM contacts LIMIT 1`);
console.log(`🔍 [Migration-Schema] iViewContent column already exists`);
@@ -150,6 +323,12 @@ async function isSchemaAlreadyPresent<T>(
return false;
}
}
// Add schema checks for future migrations here
// } else if (migration.name === "003_future_migration") {
// // Check if future migration schema already exists
// }
} catch (error) {
console.log(`🔍 [Migration-Schema] Schema check failed for ${migration.name}, assuming not present:`, error);
return false;
@@ -161,15 +340,41 @@ async function isSchemaAlreadyPresent<T>(
/**
* Run all registered migrations against the database
*
* This is the main function that executes the migration process. It:
* 1. Creates the migrations tracking table if needed
* 2. Determines which migrations have already been applied
* 3. Applies any pending migrations in order
* 4. Validates that migrations were applied correctly
* 5. Records successful migrations in the tracking table
* 6. Performs final validation of the migration state
*
* The function is designed to be idempotent - it can be run multiple times
* safely without re-applying migrations that have already been completed.
*
* @template T - The type returned by SQL query operations
* @param sqlExec - Function to execute SQL statements (INSERT, UPDATE, CREATE, etc.)
* @param sqlQuery - Function to execute SQL queries (SELECT)
* @param extractMigrationNames - Function to extract migration names from query results
* @returns Promise that resolves when all migrations are complete
* @throws {Error} If any migration fails to apply
*
* @example
* ```typescript
* // Platform-specific implementation
* const sqlExec = async (sql: string, params?: unknown[]) => {
* return await db.run(sql, params);
* };
*
* const sqlQuery = async (sql: string, params?: unknown[]) => {
* return await db.query(sql, params);
* };
*
* const extractNames = (result: DBResult) => {
* return new Set(result.values.map(row => row[0]));
* };
*
* await runMigrations(sqlExec, sqlQuery, extractNames);
* ```
*/
export async function runMigrations<T>(
sqlExec: (sql: string, params?: unknown[]) => Promise<unknown>,
@@ -177,9 +382,10 @@ export async function runMigrations<T>(
extractMigrationNames: (result: T) => Set<string>,
): Promise<void> {
try {
console.log("📋 [Migration] Starting migration process...");
// Step 1: Create migrations table if it doesn't exist
// Note: We use IF NOT EXISTS here because this is infrastructure, not a business migration
console.log("🔧 [Migration] Creating migrations table if it doesn't exist...");
await sqlExec(`
CREATE TABLE IF NOT EXISTS migrations (
@@ -189,7 +395,7 @@ export async function runMigrations<T>(
`);
console.log("✅ [Migration] Migrations table ready");
// Step 2: Get list of already applied migrations
console.log("🔍 [Migration] Querying existing migrations...");
const appliedMigrationsResult = await sqlQuery(
"SELECT name FROM migrations",
@ -199,7 +405,7 @@ export async function runMigrations<T>(
const appliedMigrations = extractMigrationNames(appliedMigrationsResult);
console.log("📋 [Migration] Extracted applied migrations:", Array.from(appliedMigrations));
// Step 3: Get all registered migrations
const migrations = migrationRegistry.getMigrations();
if (migrations.length === 0) {
@ -214,22 +420,26 @@ export async function runMigrations<T>(
let appliedCount = 0;
let skippedCount = 0;
// Step 4: Process each migration
for (const migration of migrations) {
console.log(`\n🔍 [Migration] Processing migration: ${migration.name}`);
// Check 1: Is it recorded as applied in migrations table?
const isRecordedAsApplied = appliedMigrations.has(migration.name);
// Check 2: Does the schema already exist in the database?
const isSchemaPresent = await isSchemaAlreadyPresent(migration, sqlQuery);
console.log(`🔍 [Migration] ${migration.name} - Recorded: ${isRecordedAsApplied}, Schema: ${isSchemaPresent}`);
// Skip if already recorded as applied
if (isRecordedAsApplied) {
console.log(`⏭️ [Migration] Skipping already applied: ${migration.name}`);
skippedCount++;
continue;
}
// Handle case where schema exists but isn't recorded
if (isSchemaPresent) {
console.log(`🔄 [Migration] Schema exists but not recorded. Marking ${migration.name} as applied...`);
try {
@ -242,10 +452,11 @@ export async function runMigrations<T>(
continue;
} catch (insertError) {
console.warn(`⚠️ [Migration] Could not record existing schema ${migration.name}:`, insertError);
// Continue with normal migration process as fallback
}
}
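The `isSchemaAlreadyPresent` check used above is defined elsewhere in this module and is elided from this diff. A minimal sketch of such a probe, assuming the migration creates a known table, could consult `sqlite_master`; the function name and signature here are illustrative, not the actual implementation:

```typescript
// Hypothetical sketch: detect whether a table a migration would create
// already exists. `sqlQuery` matches the query function injected into
// runMigrations, returning rows under a `values` property.
async function tableExists(
  sqlQuery: (sql: string, params?: unknown[]) => Promise<{ values?: unknown[][] }>,
  tableName: string,
): Promise<boolean> {
  const result = await sqlQuery(
    "SELECT name FROM sqlite_master WHERE type='table' AND name=?",
    [tableName],
  );
  return (result.values?.length ?? 0) > 0;
}
```

This is the same pattern SQLite itself recommends for schema introspection, and it keeps the probe read-only so it is safe to run before any migration is applied.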
// Apply the migration
console.log(`🔄 [Migration] Applying migration: ${migration.name}`);
try {
@ -258,6 +469,8 @@ export async function runMigrations<T>(
const validation = await validateMigrationApplication(migration, sqlQuery);
if (!validation.isValid) {
console.warn(`⚠️ [Migration] Validation failed for ${migration.name}:`, validation.errors);
} else {
console.log(`✅ [Migration] Schema validation passed for ${migration.name}`);
}
// Record that the migration was applied
@ -267,11 +480,12 @@ export async function runMigrations<T>(
]);
console.log(`✅ [Migration] Migration record inserted:`, insertResult);
console.log(`🎉 [Migration] Successfully applied: ${migration.name}`);
logger.info(
`[MigrationService] Successfully applied migration: ${migration.name}`,
);
appliedCount++;
} catch (error) {
console.error(`❌ [Migration] Error applying ${migration.name}:`, error);
@ -327,15 +541,27 @@ export async function runMigrations<T>(
}
}
// Step 5: Final validation - verify all migrations are properly recorded
console.log("\n🔍 [Migration] Final validation - checking migrations table...");
const finalMigrationsResult = await sqlQuery("SELECT name FROM migrations");
const finalAppliedMigrations = extractMigrationNames(finalMigrationsResult);
console.log("📋 [Migration] Final applied migrations:", Array.from(finalAppliedMigrations));
// Check that all expected migrations are recorded
const expectedMigrations = new Set(migrations.map(m => m.name));
const missingMigrations = [...expectedMigrations].filter(name => !finalAppliedMigrations.has(name));
if (missingMigrations.length > 0) {
console.warn(`⚠️ [Migration] Missing migration records: ${missingMigrations.join(', ')}`);
logger.warn(`[MigrationService] Missing migration records: ${missingMigrations.join(', ')}`);
}
console.log(`\n🎉 [Migration] Migration process complete!`);
console.log(`📊 [Migration] Summary: Applied: ${appliedCount}, Skipped: ${skippedCount}, Total: ${migrations.length}`);
logger.info(`[MigrationService] Migration process complete. Applied: ${appliedCount}, Skipped: ${skippedCount}`);
} catch (error) {
console.error("\n💥 [Migration] Migration process failed:", error);
logger.error("[MigrationService] Migration process failed:", error);
throw error;
}

162
src/services/platforms/CapacitorPlatformService.ts

@ -237,28 +237,107 @@ export class CapacitorPlatformService implements PlatformService {
}
}
/**
* Execute database migrations for the Capacitor platform
*
* This method orchestrates the database migration process specifically for
* Capacitor-based platforms (mobile and Electron). It provides the platform-specific
* SQL execution functions to the migration service and handles Capacitor SQLite
* plugin integration.
*
* ## Migration Process:
*
* 1. **SQL Execution Setup**: Creates platform-specific SQL execution functions
* that properly handle the Capacitor SQLite plugin's API
*
* 2. **Parameter Handling**: Ensures proper parameter binding for prepared statements
* using the correct Capacitor SQLite methods (run vs execute)
*
* 3. **Result Parsing**: Provides extraction functions that understand the
* Capacitor SQLite result format
*
* 4. **Migration Execution**: Delegates to the migration service for the actual
* migration logic and tracking
*
* 5. **Integrity Verification**: Runs post-migration integrity checks to ensure
* the database is in the expected state
*
* ## Error Handling:
*
* The method includes comprehensive error handling for:
* - Database connection issues
* - SQL execution failures
* - Migration tracking problems
* - Schema validation errors
*
* Even if migrations fail, the integrity check still runs to assess the
* current database state and provide debugging information.
*
* ## Logging:
*
* Detailed logging is provided throughout the process using emoji-tagged
* console messages that appear in the Electron DevTools console. This
* includes:
* - SQL statement execution details
* - Parameter values for debugging
* - Migration success/failure status
* - Database integrity check results
*
* @throws {Error} If database is not initialized or migrations fail critically
* @private Internal method called during database initialization
*
* @example
* ```typescript
* // Called automatically during platform service initialization
* await this.runCapacitorMigrations();
* ```
*/
private async runCapacitorMigrations(): Promise<void> {
if (!this.db) {
throw new Error("Database not initialized");
}
/**
* SQL execution function for Capacitor SQLite plugin
*
* This function handles the execution of SQL statements (INSERT, UPDATE, CREATE, etc.)
* through the Capacitor SQLite plugin. It automatically chooses the appropriate
* method based on whether parameters are provided.
*
* @param sql - SQL statement to execute
* @param params - Optional parameters for prepared statements
* @returns Promise resolving to execution results
*/
const sqlExec = async (sql: string, params?: unknown[]): Promise<capSQLiteChanges> => {
console.log(`🔧 [CapacitorMigration] Executing SQL:`, sql);
console.log(`📋 [CapacitorMigration] With params:`, params);
if (params && params.length > 0) {
// Use run method for parameterized queries (prepared statements)
// This is essential for proper parameter binding and SQL injection prevention
const result = await this.db!.run(sql, params);
console.log(`✅ [CapacitorMigration] Run result:`, result);
return result;
} else {
// Use execute method for non-parameterized queries
// This is more efficient for simple DDL statements
const result = await this.db!.execute(sql);
console.log(`✅ [CapacitorMigration] Execute result:`, result);
return result;
}
};
/**
* SQL query function for Capacitor SQLite plugin
*
* This function handles the execution of SQL queries (SELECT statements)
* through the Capacitor SQLite plugin. It returns the raw result data
* that can be processed by the migration service.
*
* @param sql - SQL query to execute
* @param params - Optional parameters for prepared statements
* @returns Promise resolving to query results
*/
const sqlQuery = async (sql: string, params?: unknown[]): Promise<DBSQLiteValues> => {
console.log(`🔍 [CapacitorMigration] Querying SQL:`, sql);
console.log(`📋 [CapacitorMigration] With params:`, params);
@ -268,6 +347,24 @@ export class CapacitorPlatformService implements PlatformService {
return result;
};
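The body of `sqlQuery` is elided by the hunk above. A minimal sketch of the wrapper, assuming `db` is an open Capacitor SQLite connection whose `query()` method handles SELECT statements, could look like this (the factory form and parameter shapes are illustrative, not the exact implementation):

```typescript
// Illustrative sketch: wrap a connection's query() method so the migration
// service can run SELECTs without knowing about the Capacitor plugin.
const makeSqlQuery =
  (db: { query: (sql: string, values?: unknown[]) => Promise<{ values?: unknown[] }> }) =>
  async (sql: string, params?: unknown[]) => {
    // Mirror the emoji-tagged DevTools logging used elsewhere in this file
    console.log(`🔍 [CapacitorMigration] Querying SQL:`, sql);
    return await db.query(sql, params);
  };
```

Keeping the wrapper a thin passthrough means the migration service stays platform-agnostic: only this factory knows the plugin's result shape.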
/**
* Extract migration names from Capacitor SQLite query results
*
* This function parses the result format returned by the Capacitor SQLite
* plugin and extracts migration names. It handles the specific data structure
* used by the plugin, which can vary between different result formats.
*
* ## Result Format Handling:
*
* The Capacitor SQLite plugin can return results in different formats:
* - Object format: `{ name: "migration_name" }`
* - Array format: `["migration_name", "timestamp"]`
*
* This function handles both formats to ensure robust migration name extraction.
*
* @param result - Query result from Capacitor SQLite plugin
* @returns Set of migration names found in the result
*/
const extractMigrationNames = (result: DBSQLiteValues): Set<string> => {
console.log(`🔍 [CapacitorMigration] Extracting migration names from:`, result);
@ -287,13 +384,14 @@ export class CapacitorPlatformService implements PlatformService {
};
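The extraction logic itself is elided by the hunk above. A sketch of the dual-format handling the JSDoc describes could look like the following; the row type and function name are assumptions for illustration, not the actual implementation:

```typescript
// Illustrative sketch: extract migration names from either result shape
// the Capacitor SQLite plugin may return.
//   Object format: { name: "migration_name" }
//   Array format:  ["migration_name", "timestamp"]
type MigrationRow = { name?: string } | unknown[];

function extractNamesFromRows(result: { values?: MigrationRow[] }): Set<string> {
  const names = new Set<string>();
  for (const row of result.values ?? []) {
    if (Array.isArray(row)) {
      // Array format: the name is the first column
      if (typeof row[0] === "string") names.add(row[0]);
    } else if (typeof row.name === "string") {
      // Object format: the name is a keyed property
      names.add(row.name);
    }
  }
  return names;
}
```

Returning a `Set` makes the later `appliedMigrations.has(migration.name)` check O(1) regardless of how many migrations have been recorded.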
try {
// Execute the migration process
await runMigrations(sqlExec, sqlQuery, extractMigrationNames);
// After migrations, run integrity check to verify database state
await this.verifyDatabaseIntegrity();
} catch (error) {
console.error(`❌ [CapacitorMigration] Migration failed:`, error);
// Still try to verify what we have for debugging purposes
await this.verifyDatabaseIntegrity();
throw error;
}
@ -301,6 +399,55 @@ export class CapacitorPlatformService implements PlatformService {
/**
* Verify database integrity and migration status
*
* This method performs comprehensive validation of the database structure
* and migration state. It's designed to help identify issues with the
* migration process and provide detailed debugging information.
*
* ## Validation Steps:
*
* 1. **Migration Records**: Checks which migrations are recorded as applied
* 2. **Table Existence**: Verifies all expected core tables exist
* 3. **Schema Validation**: Checks table schemas including column presence
* 4. **Data Integrity**: Validates basic data counts and structure
*
* ## Core Tables Validated:
*
* - `accounts`: User identity and cryptographic keys
* - `secret`: Application secrets and encryption keys
* - `settings`: Configuration and user preferences
* - `contacts`: Contact network and trust relationships
* - `logs`: Application event logging
* - `temp`: Temporary data storage
*
* ## Schema Checks:
*
* For critical tables like `contacts`, the method validates:
* - Table structure using `PRAGMA table_info`
* - Presence of important columns (e.g., `iViewContent`)
* - Column data types and constraints
*
* ## Error Handling:
*
* This method is designed to never throw errors - it captures and logs
* all validation issues for debugging purposes. This ensures that even
* if integrity checks fail, they don't prevent the application from starting.
*
* ## Logging Output:
*
* The method produces detailed console output with emoji tags:
 * - `✅` for successful validations
 * - `❌` for validation failures
* - `📊` for data summaries
* - `🔍` for investigation steps
*
* @private Internal method called after migrations
*
* @example
* ```typescript
* // Called automatically after migration completion
* await this.verifyDatabaseIntegrity();
* ```
*/
private async verifyDatabaseIntegrity(): Promise<void> {
if (!this.db) {
@ -311,11 +458,11 @@ export class CapacitorPlatformService implements PlatformService {
console.log(`🔍 [DB-Integrity] Starting database integrity check...`);
try {
// Step 1: Check migrations table and applied migrations
const migrationsResult = await this.db.query("SELECT name, applied_at FROM migrations ORDER BY applied_at");
console.log(`📊 [DB-Integrity] Applied migrations:`, migrationsResult);
// Step 2: Verify core tables exist
const coreTableNames = ['accounts', 'secret', 'settings', 'contacts', 'logs', 'temp'];
const existingTables: string[] = [];
@ -333,12 +480,13 @@ export class CapacitorPlatformService implements PlatformService {
}
}
// Step 3: Check contacts table schema (including iViewContent column)
if (existingTables.includes('contacts')) {
try {
const contactsSchema = await this.db.query("PRAGMA table_info(contacts)");
console.log(`📊 [DB-Integrity] Contacts table schema:`, contactsSchema);
// Check for iViewContent column specifically
const hasIViewContent = contactsSchema.values?.some((col: any) =>
(col.name === 'iViewContent') || (Array.isArray(col) && col[1] === 'iViewContent')
);
@ -353,7 +501,7 @@ export class CapacitorPlatformService implements PlatformService {
}
}
// Step 4: Check for basic data integrity
try {
const accountCount = await this.db.query("SELECT COUNT(*) as count FROM accounts");
const settingsCount = await this.db.query("SELECT COUNT(*) as count FROM settings");
