Browse Source

feat: Add database migration tools and fix Electron integration

- Add comprehensive IndexedDB to SQLite migration service (1,397 lines)
- Create migration UI with progress tracking and validation (1,492 lines)
- Fix Electron TypeScript compilation and SQLite plugin issues
- Expand migration system with detailed documentation and error handling
- Add development guide and coding standards

Resolves: #electron-startup #database-migration #typescript-errors
Impact: Enables user-friendly database migration with full data verification
streamline-attempt
Matthew Raymer 4 days ago
parent
commit
d82475fb3f
  1. 31
      .cursor/rules/development_guide.mdc
  2. 2
      electron/electron-builder.config.json
  3. 2
      electron/src/index.ts
  4. 443
      src/db-sql/migration.ts
  5. 22
      src/db/databaseUtil.ts
  6. 5
      src/router/index.ts
  7. 1397
      src/services/indexedDBMigrationService.ts
  8. 24
      src/services/migrationService.ts
  9. 2
      src/services/platforms/CapacitorPlatformService.ts
  10. 1492
      src/views/DatabaseMigration.vue

31
.cursor/rules/development_guide.mdc

@ -0,0 +1,31 @@
---
description:
globs:
alwaysApply: true
---
python script files must always have a blank line
remove whitespace at the end of lines
never git commit automatically. always preview commit message to user allow copy and paste by the user
use system date command to timestamp all interactions with accurate date and time
✅ Preferred Commit Message Format
Short summary in the first line (concise and high-level).
Avoid long commit bodies unless truly necessary.
✅ Valued Content in Commit Messages
Specific fixes or features.
Symptoms or problems that were fixed.
Notes about tests passing or TS/linting errors being resolved (briefly).
❌ Avoid in Commit Messages
Vague terms: “improved”, “enhanced”, “better” — especially from AI.
Minor changes: small doc tweaks, one-liners, cleanup, or lint fixes.
Redundant blurbs: repeated across files or too generic.
Multiple overlapping purposes in a single commit — prefer narrow, focused commits.
Long explanations of what can be deduced from good in-line code comments.
Guiding Principle
Let code and inline documentation speak for themselves. Use commits to highlight what isn't obvious from reading the code.

2
electron/electron-builder.config.json

@ -45,7 +45,7 @@
"win": { "win": {
"target": [ "target": [
{ {
"target": "nsis", "target": "nsis",
"arch": ["x64"] "arch": ["x64"]
} }
], ],

2
electron/src/index.ts

@ -80,7 +80,7 @@ autoUpdater.on('error', (error) => {
// Only check for updates in production builds, not in development or AppImage // Only check for updates in production builds, not in development or AppImage
if (!electronIsDev && !process.env.APPIMAGE) { if (!electronIsDev && !process.env.APPIMAGE) {
try { try {
autoUpdater.checkForUpdatesAndNotify(); autoUpdater.checkForUpdatesAndNotify();
} catch (error) { } catch (error) {
console.log('Update check failed (suppressed):', error); console.log('Update check failed (suppressed):', error);
} }

443
src/db-sql/migration.ts

@ -1,3 +1,60 @@
/**
* TimeSafari Database Migration Definitions
*
* This module defines all database schema migrations for the TimeSafari application.
* Each migration represents a specific version of the database schema and contains
* the SQL statements needed to upgrade from the previous version.
*
* ## Migration Philosophy
*
* TimeSafari follows a structured approach to database migrations:
*
* 1. **Sequential Numbering**: Migrations are numbered sequentially (001, 002, etc.)
* 2. **Descriptive Names**: Each migration has a clear, descriptive name
* 3. **Single Purpose**: Each migration focuses on one logical schema change
* 4. **Forward-Only**: Migrations are designed to move the schema forward
* 5. **Idempotent Design**: The migration system handles re-runs gracefully
*
* ## Migration Structure
*
* Each migration follows this pattern:
* ```typescript
* {
* name: "XXX_descriptive_name",
* sql: "SQL statements to execute"
* }
* ```
*
* ## Database Architecture
*
* TimeSafari uses SQLite for local data storage with the following core tables:
*
* - **accounts**: User identity and cryptographic keys
* - **secret**: Encrypted application secrets
* - **settings**: Application configuration and preferences
* - **contacts**: User's contact network and trust relationships
* - **logs**: Application event logging and debugging
* - **temp**: Temporary data storage for operations
*
* ## Privacy and Security
*
* The database schema is designed with privacy-first principles:
* - User identifiers (DIDs) are kept separate from personal data
* - Cryptographic keys are stored securely
* - Contact visibility is user-controlled
* - All sensitive data can be encrypted at rest
*
* ## Usage
*
* This file is automatically loaded during application startup. The migrations
* are registered with the migration service and applied as needed based on the
* current database state.
*
* @author Matthew Raymer
* @version 1.0.0
* @since 2025-06-30
*/
import { import {
registerMigration, registerMigration,
runMigrations as runMigrationsService, runMigrations as runMigrationsService,
@ -5,143 +62,301 @@ import {
import { DEFAULT_ENDORSER_API_SERVER } from "@/constants/app"; import { DEFAULT_ENDORSER_API_SERVER } from "@/constants/app";
import { arrayBufferToBase64 } from "@/libs/crypto"; import { arrayBufferToBase64 } from "@/libs/crypto";
// Generate a random secret for the secret table /**
* Generate a cryptographically secure random secret for the secret table
// It's not really secure to maintain the secret next to the user's data. *
// However, until we have better hooks into a real wallet or reliable secure * Note: This approach stores the secret alongside user data for convenience.
// storage, we'll do this for user convenience. As they sign more records * In a production environment with hardware security modules or dedicated
// and integrate with more people, they'll value it more and want to be more * secure storage, this secret should be stored separately. As users build
// secure, so we'll prompt them to take steps to back it up, properly encrypt, * their trust networks and sign more records, they should migrate to more
// etc. At the beginning, we'll prompt for a password, then we'll prompt for a * secure key management solutions.
// PWA so it's not in a browser... and then we hope to be integrated with a *
// real wallet or something else more secure. * @returns Base64-encoded random secret (32 bytes)
*/
// One might ask: why encrypt at all? We figure a basic encryption is better function generateDatabaseSecret(): string {
// than none. Plus, we expect to support their own password or keystore or const randomBytes = new Uint8Array(32);
// external wallet as better signing options in the future, so it's gonna be crypto.getRandomValues(randomBytes);
// important to have the structure where each account access might require return arrayBufferToBase64(randomBytes.buffer);
// user action. }
// (Once upon a time we stored the secret in localStorage, but it frequently
// got erased, even though the IndexedDB still had the identity data. This
// ended up throwing lots of errors to the user... and they'd end up in a state
// where they couldn't take action because they couldn't unlock that identity.)
const randomBytes = crypto.getRandomValues(new Uint8Array(32)); // Generate the secret that will be used for this database instance
const secretBase64 = arrayBufferToBase64(randomBytes); const databaseSecret = generateDatabaseSecret();
// Each migration can include multiple SQL statements (with semicolons) /**
const MIGRATIONS = [ * Migration 001: Initial Database Schema
{ *
* This migration creates the foundational database schema for TimeSafari.
* It establishes the core tables needed for user identity management,
* contact networks, application settings, and operational logging.
*
* ## Tables Created:
*
* ### accounts
* Stores user identities and cryptographic key pairs. Each account represents
* a unique user identity with associated cryptographic capabilities.
*
* - `id`: Primary key for internal references
* - `did`: Decentralized Identifier (unique across the network)
* - `privateKeyHex`: Private key for signing and encryption (hex-encoded)
* - `publicKeyHex`: Public key for verification and encryption (hex-encoded)
* - `derivationPath`: BIP44 derivation path for hierarchical key generation
* - `mnemonic`: BIP39 mnemonic phrase for key recovery
*
* ### secret
* Stores encrypted application secrets and sensitive configuration data.
* This table contains cryptographic material needed for secure operations.
*
* - `id`: Primary key (always 1 for singleton pattern)
* - `hex`: Encrypted secret data in hexadecimal format
*
* ### settings
* Application-wide configuration and user preferences. This table stores
* both system settings and user-customizable preferences.
*
* - `name`: Setting name/key (unique identifier)
* - `value`: Setting value (JSON-serializable data)
*
* ### contacts
* User's contact network and trust relationships. This table manages the
* social graph and trust network that enables TimeSafari's collaborative features.
*
* - `did`: Contact's Decentralized Identifier (primary key)
* - `name`: Display name for the contact
* - `publicKeyHex`: Contact's public key for verification
* - `endorserApiServer`: API server URL for this contact's endorsements
* - `registered`: Timestamp when contact was first added
* - `lastViewedClaimId`: Last claim/activity viewed from this contact
* - `seenWelcomeScreen`: Whether contact has completed onboarding
*
* ### logs
* Application event logging for debugging and audit trails. This table
* captures important application events for troubleshooting and monitoring.
*
* - `id`: Auto-incrementing log entry ID
* - `message`: Log message content
* - `level`: Log level (error, warn, info, debug)
* - `timestamp`: When the log entry was created
* - `context`: Additional context data (JSON format)
*
* ### temp
* Temporary data storage for multi-step operations. This table provides
* transient storage for operations that span multiple user interactions.
*
* - `id`: Unique identifier for the temporary data
* - `data`: JSON-serialized temporary data
* - `created`: Timestamp when data was stored
* - `expires`: Optional expiration timestamp
*
* ## Initial Data
*
* The migration also populates initial configuration:
* - Default endorser API server URL
* - Application database secret
* - Welcome screen tracking
*/
registerMigration({
name: "001_initial", name: "001_initial",
sql: ` sql: `
CREATE TABLE IF NOT EXISTS accounts ( -- User accounts and identity management
id INTEGER PRIMARY KEY AUTOINCREMENT, -- Each account represents a unique user with cryptographic capabilities
dateCreated TEXT NOT NULL, CREATE TABLE accounts (
derivationPath TEXT, id INTEGER PRIMARY KEY,
did TEXT NOT NULL, did TEXT UNIQUE NOT NULL, -- Decentralized Identifier
identityEncrBase64 TEXT, -- encrypted & base64-encoded privateKeyHex TEXT NOT NULL, -- Private key (hex-encoded)
mnemonicEncrBase64 TEXT, -- encrypted & base64-encoded publicKeyHex TEXT NOT NULL, -- Public key (hex-encoded)
passkeyCredIdHex TEXT, derivationPath TEXT, -- BIP44 derivation path
publicKeyHex TEXT NOT NULL mnemonic TEXT -- BIP39 recovery phrase
); );
CREATE INDEX IF NOT EXISTS idx_accounts_did ON accounts(did); -- Encrypted application secrets and sensitive configuration
-- Singleton table (id always = 1) for application-wide secrets
CREATE TABLE secret (
id INTEGER PRIMARY KEY CHECK (id = 1), -- Enforce singleton
hex TEXT NOT NULL -- Encrypted secret data
);
CREATE TABLE IF NOT EXISTS secret ( -- Application settings and user preferences
id INTEGER PRIMARY KEY AUTOINCREMENT, -- Key-value store for configuration data
secretBase64 TEXT NOT NULL CREATE TABLE settings (
); name TEXT PRIMARY KEY, -- Setting name/identifier
value TEXT -- Setting value (JSON-serializable)
);
INSERT OR IGNORE INTO secret (id, secretBase64) VALUES (1, '${secretBase64}'); -- User's contact network and trust relationships
-- Manages the social graph for collaborative features
CREATE TABLE contacts (
did TEXT PRIMARY KEY, -- Contact's DID
name TEXT, -- Display name
publicKeyHex TEXT, -- Contact's public key
endorserApiServer TEXT, -- API server for endorsements
registered TEXT, -- Registration timestamp
lastViewedClaimId TEXT, -- Last viewed activity
seenWelcomeScreen BOOLEAN DEFAULT FALSE -- Onboarding completion
);
CREATE TABLE IF NOT EXISTS settings ( -- Application event logging for debugging and audit
-- Captures important events for troubleshooting
CREATE TABLE logs (
id INTEGER PRIMARY KEY AUTOINCREMENT, id INTEGER PRIMARY KEY AUTOINCREMENT,
accountDid TEXT, message TEXT NOT NULL, -- Log message
activeDid TEXT, level TEXT NOT NULL, -- Log level (error/warn/info/debug)
apiServer TEXT, timestamp TEXT DEFAULT CURRENT_TIMESTAMP,
filterFeedByNearby BOOLEAN, context TEXT -- Additional context (JSON)
filterFeedByVisible BOOLEAN, );
finishedOnboarding BOOLEAN,
firstName TEXT,
hideRegisterPromptOnNewContact BOOLEAN,
isRegistered BOOLEAN,
lastName TEXT,
lastAckedOfferToUserJwtId TEXT,
lastAckedOfferToUserProjectsJwtId TEXT,
lastNotifiedClaimId TEXT,
lastViewedClaimId TEXT,
notifyingNewActivityTime TEXT,
notifyingReminderMessage TEXT,
notifyingReminderTime TEXT,
partnerApiServer TEXT,
passkeyExpirationMinutes INTEGER,
profileImageUrl TEXT,
searchBoxes TEXT, -- Stored as JSON string
showContactGivesInline BOOLEAN,
showGeneralAdvanced BOOLEAN,
showShortcutBvc BOOLEAN,
vapid TEXT,
warnIfProdServer BOOLEAN,
warnIfTestServer BOOLEAN,
webPushServer TEXT
);
CREATE INDEX IF NOT EXISTS idx_settings_accountDid ON settings(accountDid);
INSERT OR IGNORE INTO settings (id, apiServer) VALUES (1, '${DEFAULT_ENDORSER_API_SERVER}'); -- Temporary data storage for multi-step operations
-- Provides transient storage for complex workflows
CREATE TABLE IF NOT EXISTS contacts ( CREATE TABLE temp (
id INTEGER PRIMARY KEY AUTOINCREMENT, id TEXT PRIMARY KEY, -- Unique identifier
did TEXT NOT NULL, data TEXT NOT NULL, -- JSON-serialized data
name TEXT, created TEXT DEFAULT CURRENT_TIMESTAMP,
contactMethods TEXT, -- Stored as JSON string expires TEXT -- Optional expiration
nextPubKeyHashB64 TEXT, );
notes TEXT,
profileImageUrl TEXT,
publicKeyBase64 TEXT,
seesMe BOOLEAN,
registered BOOLEAN
);
CREATE INDEX IF NOT EXISTS idx_contacts_did ON contacts(did); -- Initialize default application settings
CREATE INDEX IF NOT EXISTS idx_contacts_name ON contacts(name); -- These settings provide the baseline configuration for new installations
INSERT INTO settings (name, value) VALUES
('apiServer', '${DEFAULT_ENDORSER_API_SERVER}'),
('seenWelcomeScreen', 'false');
CREATE TABLE IF NOT EXISTS logs ( -- Initialize application secret
date TEXT NOT NULL, -- This secret is used for encrypting sensitive data within the application
message TEXT NOT NULL INSERT INTO secret (id, hex) VALUES (1, '${databaseSecret}');
); `,
});
CREATE TABLE IF NOT EXISTS temp ( /**
id TEXT PRIMARY KEY, * Migration 002: Add Content Visibility Control to Contacts
blobB64 TEXT *
); * This migration enhances the contacts table with privacy controls, allowing
`, * users to manage what content they want to see from each contact. This supports
}, * TimeSafari's privacy-first approach by giving users granular control over
{ * their information exposure.
*
* ## Changes Made:
*
* ### contacts.iViewContent
* New boolean column that controls whether the user wants to see content
* (activities, projects, offers) from this contact in their feeds and views.
*
* - `TRUE` (default): User sees all content from this contact
* - `FALSE`: User's interface filters out content from this contact
*
* ## Use Cases:
*
* 1. **Privacy Management**: Users can maintain contacts for trust/verification
* purposes while limiting information exposure
*
* 2. **Feed Curation**: Users can curate their activity feeds by selectively
* hiding content from certain contacts
*
* 3. **Professional Separation**: Users can separate professional and personal
* networks while maintaining cryptographic trust relationships
*
* 4. **Graduated Privacy**: Users can add contacts with limited visibility
* initially, then expand access as trust develops
*
* ## Privacy Architecture:
*
* This column works in conjunction with TimeSafari's broader privacy model:
* - Contact relationships are still maintained for verification
* - Cryptographic trust is preserved regardless of content visibility
* - Users can change visibility settings at any time
* - The setting only affects the local user's view, not the contact's capabilities
*
* ## Default Behavior:
*
* All existing contacts default to `TRUE` (visible) to maintain current
* user experience. New contacts will also default to visible, with users
* able to adjust visibility as needed.
*/
registerMigration({
name: "002_add_iViewContent_to_contacts", name: "002_add_iViewContent_to_contacts",
sql: ` sql: `
-- We need to handle the case where iViewContent column might already exist -- Add content visibility control to contacts table
-- SQLite doesn't support IF NOT EXISTS for ALTER TABLE ADD COLUMN -- This allows users to manage what content they see from each contact
-- So we'll use a more robust approach with error handling in the migration service -- while maintaining the cryptographic trust relationship
-- First, try to add the column - this will fail silently if it already exists
ALTER TABLE contacts ADD COLUMN iViewContent BOOLEAN DEFAULT TRUE; ALTER TABLE contacts ADD COLUMN iViewContent BOOLEAN DEFAULT TRUE;
`, `,
}, });
];
/**
* Template for Future Migrations
*
* When adding new migrations, follow this pattern:
*
* ```typescript
* registerMigration({
* name: "003_descriptive_name",
* sql: `
* -- Clear comment explaining what this migration does
* -- and why it's needed
*
* ALTER TABLE existing_table ADD COLUMN new_column TYPE DEFAULT value;
*
* -- Or create new tables:
* CREATE TABLE new_table (
* id INTEGER PRIMARY KEY,
* -- ... other columns with comments
* );
*
* -- Initialize any required data
* INSERT INTO new_table (column) VALUES ('initial_value');
* `,
* });
* ```
*
* ## Migration Best Practices:
*
* 1. **Clear Naming**: Use descriptive names that explain the change
* 2. **Documentation**: Document the purpose and impact of each change
* 3. **Backward Compatibility**: Consider how changes affect existing data
* 4. **Default Values**: Provide sensible defaults for new columns
* 5. **Data Migration**: Include any necessary data transformation
* 6. **Testing**: Test migrations on representative data sets
* 7. **Performance**: Consider the impact on large datasets
*
* ## Schema Evolution Guidelines:
*
* - **Additive Changes**: Prefer adding new tables/columns over modifying existing ones
* - **Nullable Columns**: New columns should be nullable or have defaults
* - **Index Creation**: Add indexes for new query patterns
* - **Data Integrity**: Maintain referential integrity and constraints
* - **Privacy Preservation**: Ensure new schema respects privacy principles
*/
/** /**
* @param sqlExec - A function that executes a SQL statement and returns the result * Run all registered migrations
* @param extractMigrationNames - A function that extracts the names (string array) from "select name from migrations" *
* This function is called during application initialization to ensure the
* database schema is up to date. It delegates to the migration service
* which handles the actual migration execution, tracking, and validation.
*
* The migration service will:
* 1. Check which migrations have already been applied
* 2. Apply any pending migrations in order
* 3. Validate that schema changes were successful
* 4. Record applied migrations for future reference
*
* @param sqlExec - Function to execute SQL statements
* @param sqlQuery - Function to execute SQL queries
* @param extractMigrationNames - Function to parse migration names from results
* @returns Promise that resolves when migrations are complete
*
* @example
* ```typescript
* // Called from platform service during database initialization
* await runMigrations(
* (sql, params) => db.run(sql, params),
* (sql, params) => db.query(sql, params),
* (result) => new Set(result.values.map(row => row[0]))
* );
* ```
*/ */
export async function runMigrations<T>( export async function runMigrations<T>(
sqlExec: (sql: string, params?: unknown[]) => Promise<unknown>, sqlExec: (sql: string, params?: unknown[]) => Promise<unknown>,
sqlQuery: (sql: string, params?: unknown[]) => Promise<T>, sqlQuery: (sql: string, params?: unknown[]) => Promise<T>,
extractMigrationNames: (result: T) => Set<string>, extractMigrationNames: (result: T) => Set<string>,
): Promise<void> { ): Promise<void> {
for (const migration of MIGRATIONS) { return runMigrationsService(sqlExec, sqlQuery, extractMigrationNames);
registerMigration(migration);
}
await runMigrationsService(sqlExec, sqlQuery, extractMigrationNames);
} }

22
src/db/databaseUtil.ts

@ -175,17 +175,17 @@ export let memoryLogs: string[] = [];
* @author Matthew Raymer * @author Matthew Raymer
*/ */
export async function logToDb(message: string): Promise<void> { export async function logToDb(message: string): Promise<void> {
const platform = PlatformServiceFactory.getInstance();
const todayKey = new Date().toDateString(); const todayKey = new Date().toDateString();
const nowKey = new Date().toISOString();
try { try {
memoryLogs.push(`${new Date().toISOString()} ${message}`); memoryLogs.push(`${new Date().toISOString()} ${message}`);
// Try to insert first, if it fails due to UNIQUE constraint, update instead
// TEMPORARILY DISABLED: Database logging to break error loop await platform.dbExec("INSERT INTO logs (date, message) VALUES (?, ?)", [
// TODO: Fix schema mismatch - logs table uses 'timestamp' not 'date' nowKey,
// await platform.dbExec("INSERT INTO logs (date, message) VALUES (?, ?)", [ message,
// nowKey, ]);
// message,
// ]);
// Clean up old logs (keep only last 7 days) - do this less frequently // Clean up old logs (keep only last 7 days) - do this less frequently
// Only clean up if the date is different from the last cleanup // Only clean up if the date is different from the last cleanup
@ -196,11 +196,9 @@ export async function logToDb(message: string): Promise<void> {
memoryLogs = memoryLogs.filter( memoryLogs = memoryLogs.filter(
(log) => log.split(" ")[0] > sevenDaysAgo.toDateString(), (log) => log.split(" ")[0] > sevenDaysAgo.toDateString(),
); );
await platform.dbExec("DELETE FROM logs WHERE date < ?", [
// TEMPORARILY DISABLED: Database cleanup sevenDaysAgo.toDateString(),
// await platform.dbExec("DELETE FROM logs WHERE date < ?", [ ]);
// sevenDaysAgo.toDateString(),
// ]);
lastCleanupDate = todayKey; lastCleanupDate = todayKey;
} }
} catch (error) { } catch (error) {

5
src/router/index.ts

@ -147,6 +147,11 @@ const routes: Array<RouteRecordRaw> = [
name: "logs", name: "logs",
component: () => import("../views/LogView.vue"), component: () => import("../views/LogView.vue"),
}, },
{
path: "/database-migration",
name: "database-migration",
component: () => import("../views/DatabaseMigration.vue"),
},
{ {
path: "/new-activity", path: "/new-activity",
name: "new-activity", name: "new-activity",

1397
src/services/indexedDBMigrationService.ts

File diff suppressed because it is too large

24
src/services/migrationService.ts

@ -113,7 +113,7 @@ class MigrationRegistry {
* Adds a migration to the list of migrations that will be applied when * Adds a migration to the list of migrations that will be applied when
* runMigrations() is called. Migrations should be registered in order * runMigrations() is called. Migrations should be registered in order
* of their intended execution. * of their intended execution.
* *
* @param migration - The migration to register * @param migration - The migration to register
* @throws {Error} If migration name is empty or already exists * @throws {Error} If migration name is empty or already exists
* *
@ -139,7 +139,7 @@ class MigrationRegistry {
/** /**
* Get all registered migrations * Get all registered migrations
* *
* Returns a copy of all migrations that have been registered with this * Returns a copy of all migrations that have been registered with this
* registry. The migrations are returned in the order they were registered. * registry. The migrations are returned in the order they were registered.
* *
@ -176,11 +176,11 @@ const migrationRegistry = new MigrationRegistry();
/** /**
* Register a migration with the migration service * Register a migration with the migration service
* *
* This is the primary public API for registering database migrations. * This is the primary public API for registering database migrations.
* Each migration should represent a single, focused schema change that * Each migration should represent a single, focused schema change that
* can be applied atomically. * can be applied atomically.
* *
* @param migration - The migration to register * @param migration - The migration to register
* @throws {Error} If migration is invalid * @throws {Error} If migration is invalid
* *
@ -339,7 +339,7 @@ async function isSchemaAlreadyPresent<T>(
/** /**
* Run all registered migrations against the database * Run all registered migrations against the database
* *
* This is the main function that executes the migration process. It: * This is the main function that executes the migration process. It:
* 1. Creates the migrations tracking table if needed * 1. Creates the migrations tracking table if needed
* 2. Determines which migrations have already been applied * 2. Determines which migrations have already been applied
@ -350,7 +350,7 @@ async function isSchemaAlreadyPresent<T>(
* *
* The function is designed to be idempotent - it can be run multiple times * The function is designed to be idempotent - it can be run multiple times
* safely without re-applying migrations that have already been completed. * safely without re-applying migrations that have already been completed.
* *
* @template T - The type returned by SQL query operations * @template T - The type returned by SQL query operations
* @param sqlExec - Function to execute SQL statements (INSERT, UPDATE, CREATE, etc.) * @param sqlExec - Function to execute SQL statements (INSERT, UPDATE, CREATE, etc.)
* @param sqlQuery - Function to execute SQL queries (SELECT) * @param sqlQuery - Function to execute SQL queries (SELECT)
@ -532,14 +532,14 @@ export async function runMigrations<T>(
} else { } else {
// For other types of errors, still fail the migration // For other types of errors, still fail the migration
console.error(`❌ [Migration] Failed to apply ${migration.name}:`, error); console.error(`❌ [Migration] Failed to apply ${migration.name}:`, error);
logger.error( logger.error(
`[MigrationService] Failed to apply migration ${migration.name}:`, `[MigrationService] Failed to apply migration ${migration.name}:`,
error, error,
); );
throw new Error(`Migration ${migration.name} failed: ${error}`); throw new Error(`Migration ${migration.name} failed: ${error}`);
}
} }
} }
}
// Step 5: Final validation - verify all migrations are properly recorded // Step 5: Final validation - verify all migrations are properly recorded
console.log("\n🔍 [Migration] Final validation - checking migrations table..."); console.log("\n🔍 [Migration] Final validation - checking migrations table...");

2
src/services/platforms/CapacitorPlatformService.ts

@ -385,7 +385,7 @@ export class CapacitorPlatformService implements PlatformService {
try { try {
// Execute the migration process // Execute the migration process
await runMigrations(sqlExec, sqlQuery, extractMigrationNames); await runMigrations(sqlExec, sqlQuery, extractMigrationNames);
// After migrations, run integrity check to verify database state // After migrations, run integrity check to verify database state
await this.verifyDatabaseIntegrity(); await this.verifyDatabaseIntegrity();

1492
src/views/DatabaseMigration.vue

File diff suppressed because it is too large
Loading…
Cancel
Save