147 changed files with 10861 additions and 3758 deletions
@@ -0,0 +1,153 @@
---
description:
globs:
alwaysApply: true
---

# Absurd SQL - Cursor Development Guide

## Project Overview

Absurd SQL is a backend implementation for sql.js that enables persistent SQLite databases in the browser by using IndexedDB as a block storage system. This guide provides rules and best practices for developing with this project in Cursor.

## Project Structure

```
absurd-sql/
├── src/              # Source code
├── dist/             # Built files
├── package.json      # Dependencies and scripts
├── rollup.config.js  # Build configuration
└── jest.config.js    # Test configuration
```

## Development Rules

### 1. Worker Thread Requirements

- All SQL operations MUST be performed in a worker thread
- Main thread should only handle worker initialization and communication
- Never block the main thread with database operations

### 2. Code Organization

- Keep worker code in separate files (e.g., `*.worker.js`)
- Use ES modules for imports/exports
- Follow the project's existing module structure

### 3. Required Headers

When developing locally or deploying, ensure these headers are set:

```
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp
```

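If the app is served with Vite (an assumption here; this project's own `yarn serve` script or any other server works just as well), the two headers can be added in the dev-server config. A minimal sketch:

```typescript
// vite.config.ts — illustrative only; the essential part is that every
// response carries the two COOP/COEP headers listed above.
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    headers: {
      "Cross-Origin-Opener-Policy": "same-origin",
      "Cross-Origin-Embedder-Policy": "require-corp",
    },
  },
});
```
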
### 4. Browser Compatibility

- Primary target: Modern browsers with SharedArrayBuffer support
- Fallback mode: Safari (with limitations)
- Always test in both modes (a runtime detection sketch follows)

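A small feature-detection check is enough to tell the two modes apart at runtime; how the app reacts to the result is up to the caller:

```typescript
// True when the fast path (SharedArrayBuffer + Atomics) is available.
// When false, expect absurd-sql to run in its slower fallback mode (e.g. Safari).
function hasSharedArrayBufferSupport(): boolean {
  return typeof SharedArrayBuffer !== "undefined" && typeof Atomics !== "undefined";
}

if (!hasSharedArrayBufferSupport()) {
  console.warn("SharedArrayBuffer unavailable; running in fallback mode.");
}
```
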
### 5. Database Configuration

Recommended database settings:

```sql
PRAGMA journal_mode=MEMORY;
PRAGMA page_size=8192; -- Optional, but recommended
```

### 6. Development Workflow

1. Install dependencies:

```bash
yarn add @jlongster/sql.js absurd-sql
```

2. Development commands:

- `yarn build` - Build the project
- `yarn jest` - Run tests
- `yarn serve` - Start development server

### 7. Testing Guidelines

- Write tests for both SharedArrayBuffer and fallback modes
- Use Jest for testing
- Include performance benchmarks for critical operations (see the helper sketch below)

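For the benchmark point above, a tiny timing helper is usually enough to compare the two modes; this sketch is illustrative and not part of absurd-sql:

```typescript
// Times an async operation and logs the elapsed milliseconds.
async function benchmark(label: string, op: () => Promise<void>): Promise<number> {
  const start = performance.now();
  await op();
  const elapsedMs = performance.now() - start;
  console.log(`${label}: ${elapsedMs.toFixed(1)} ms`);
  return elapsedMs;
}
```

Run the same labelled operations once with SharedArrayBuffer available and again in fallback mode, then compare the logged timings.
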
### 8. Performance Considerations

- Use bulk operations when possible
- Monitor read/write performance
- Consider using transactions for multiple operations (see the sketch below)
- Avoid unnecessary database connections

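For the transaction point above, a sketch of grouping several writes into one transaction on the worker-side database handle (`db` is the `SQL.Database` from the setup pattern later in this guide; the `contacts` columns are illustrative):

```typescript
// One transaction per batch avoids a write-back per statement, and ROLLBACK
// keeps the batch atomic if any insert fails.
function insertManyContacts(
  db: { exec(sql: string): unknown; run(sql: string, params?: unknown[]): unknown },
  rows: Array<{ id: string; name: string }>,
): void {
  db.exec("BEGIN TRANSACTION;");
  try {
    for (const row of rows) {
      db.run("INSERT INTO contacts (id, name) VALUES (?, ?)", [row.id, row.name]);
    }
    db.exec("COMMIT;");
  } catch (error) {
    db.exec("ROLLBACK;");
    throw error;
  }
}
```
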
### 9. Error Handling

- Implement proper error handling for:
  - Worker initialization failures
  - Database connection issues
  - Concurrent access conflicts (in fallback mode)
  - Storage quota exceeded scenarios

### 10. Security Best Practices

- Never expose database operations directly to the client
- Validate all SQL queries
- Implement proper access controls
- Handle sensitive data appropriately

### 11. Code Style

- Follow ESLint configuration
- Use async/await for asynchronous operations
- Document complex database operations
- Include comments for non-obvious optimizations

### 12. Debugging

- Use `jest-debug` for debugging tests
- Monitor IndexedDB usage in browser dev tools
- Check worker communication in console
- Use performance monitoring tools

## Common Patterns

### Worker Initialization

```javascript
// Main thread
import { initBackend } from 'absurd-sql/dist/indexeddb-main-thread';

function init() {
  let worker = new Worker(new URL('./index.worker.js', import.meta.url));
  initBackend(worker);
}
```

### Database Setup

```javascript
// Worker thread
import initSqlJs from '@jlongster/sql.js';
import { SQLiteFS } from 'absurd-sql';
import IndexedDBBackend from 'absurd-sql/dist/indexeddb-backend';

async function setupDatabase() {
  let SQL = await initSqlJs({ locateFile: file => file });
  let sqlFS = new SQLiteFS(SQL.FS, new IndexedDBBackend());
  SQL.register_for_idb(sqlFS);

  SQL.FS.mkdir('/sql');
  SQL.FS.mount(sqlFS, {}, '/sql');

  return new SQL.Database('/sql/db.sqlite', { filename: true });
}
```

## Troubleshooting

### Common Issues

1. SharedArrayBuffer not available
   - Check COOP/COEP headers
   - Verify browser support
   - Test fallback mode

2. Worker initialization failures
   - Check file paths
   - Verify module imports
   - Check browser console for errors

3. Performance issues
   - Monitor IndexedDB usage
   - Check for unnecessary operations
   - Verify transaction usage

## Resources

- [Project Demo](https://priceless-keller-d097e5.netlify.app/)
- [Example Project](https://github.com/jlongster/absurd-example-project)
- [Blog Post](https://jlongster.com/future-sql-web)
- [SQL.js Documentation](https://github.com/sql-js/sql.js/)
@@ -1,6 +0,0 @@
# Admin DID credentials
ADMIN_DID=did:ethr:0x0000694B58C2cC69658993A90D3840C560f2F51F
ADMIN_PRIVATE_KEY=2b6472c026ec2aa2c4235c994a63868fc9212d18b58f6cbfe861b52e71330f5b

# API Configuration
ENDORSER_API_URL=https://test-api.endorser.ch/api/v2/claim
@@ -1,7 +1,15 @@
package app.timesafari;

import android.os.Bundle;
import com.getcapacitor.BridgeActivity;
//import com.getcapacitor.community.sqlite.SQLite;

public class MainActivity extends BridgeActivity {
    // ... existing code ...
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Initialize SQLite
        //registerPlugin(SQLite.class);
    }
}
@@ -1,5 +0,0 @@
package timesafari.app;

import com.getcapacitor.BridgeActivity;

public class MainActivity extends BridgeActivity {}
@@ -0,0 +1,399 @@
# Dexie to absurd-sql Mapping Guide

## Schema Mapping

### Current Dexie Schema

```typescript
// Current Dexie schema
const db = new Dexie('TimeSafariDB');

db.version(1).stores({
  accounts: 'did, publicKeyHex, createdAt, updatedAt',
  settings: 'key, value, updatedAt',
  contacts: 'id, did, name, createdAt, updatedAt'
});
```

### New SQLite Schema

```sql
-- New SQLite schema
CREATE TABLE accounts (
  did TEXT PRIMARY KEY,
  public_key_hex TEXT NOT NULL,
  created_at INTEGER NOT NULL,
  updated_at INTEGER NOT NULL
);

CREATE TABLE settings (
  key TEXT PRIMARY KEY,
  value TEXT NOT NULL,
  updated_at INTEGER NOT NULL
);

CREATE TABLE contacts (
  id TEXT PRIMARY KEY,
  did TEXT NOT NULL,
  name TEXT,
  created_at INTEGER NOT NULL,
  updated_at INTEGER NOT NULL,
  FOREIGN KEY (did) REFERENCES accounts(did)
);

-- Indexes for performance
CREATE INDEX idx_accounts_created_at ON accounts(created_at);
CREATE INDEX idx_contacts_did ON contacts(did);
CREATE INDEX idx_settings_updated_at ON settings(updated_at);
```

## Query Mapping

### 1. Account Operations

#### Get Account by DID

```typescript
// Dexie
const account = await db.accounts.get(did);

// absurd-sql
const result = await db.exec(`
  SELECT * FROM accounts WHERE did = ?
`, [did]);
const account = result[0]?.values[0];
```

#### Get All Accounts

```typescript
// Dexie
const accounts = await db.accounts.toArray();

// absurd-sql
const result = await db.exec(`
  SELECT * FROM accounts ORDER BY created_at DESC
`);
const accounts = result[0]?.values || [];
```

#### Add Account

```typescript
// Dexie
await db.accounts.add({
  did,
  publicKeyHex,
  createdAt: Date.now(),
  updatedAt: Date.now()
});

// absurd-sql
await db.run(`
  INSERT INTO accounts (did, public_key_hex, created_at, updated_at)
  VALUES (?, ?, ?, ?)
`, [did, publicKeyHex, Date.now(), Date.now()]);
```

#### Update Account

```typescript
// Dexie
await db.accounts.update(did, {
  publicKeyHex,
  updatedAt: Date.now()
});

// absurd-sql
await db.run(`
  UPDATE accounts
  SET public_key_hex = ?, updated_at = ?
  WHERE did = ?
`, [publicKeyHex, Date.now(), did]);
```

### 2. Settings Operations

#### Get Setting

```typescript
// Dexie
const setting = await db.settings.get(key);

// absurd-sql
const result = await db.exec(`
  SELECT * FROM settings WHERE key = ?
`, [key]);
const setting = result[0]?.values[0];
```

#### Set Setting

```typescript
// Dexie
await db.settings.put({
  key,
  value,
  updatedAt: Date.now()
});

// absurd-sql
await db.run(`
  INSERT INTO settings (key, value, updated_at)
  VALUES (?, ?, ?)
  ON CONFLICT(key) DO UPDATE SET
    value = excluded.value,
    updated_at = excluded.updated_at
`, [key, value, Date.now()]);
```

### 3. Contact Operations

#### Get Contacts by Account

```typescript
// Dexie
const contacts = await db.contacts
  .where('did')
  .equals(accountDid)
  .toArray();

// absurd-sql
const result = await db.exec(`
  SELECT * FROM contacts
  WHERE did = ?
  ORDER BY created_at DESC
`, [accountDid]);
const contacts = result[0]?.values || [];
```

#### Add Contact

```typescript
// Dexie
await db.contacts.add({
  id: generateId(),
  did: accountDid,
  name,
  createdAt: Date.now(),
  updatedAt: Date.now()
});

// absurd-sql
await db.run(`
  INSERT INTO contacts (id, did, name, created_at, updated_at)
  VALUES (?, ?, ?, ?, ?)
`, [generateId(), accountDid, name, Date.now(), Date.now()]);
```

## Transaction Mapping

### Batch Operations

```typescript
// Dexie
await db.transaction('rw', [db.accounts, db.contacts], async () => {
  await db.accounts.add(account);
  await db.contacts.bulkAdd(contacts);
});

// absurd-sql
await db.exec('BEGIN TRANSACTION;');
try {
  await db.run(`
    INSERT INTO accounts (did, public_key_hex, created_at, updated_at)
    VALUES (?, ?, ?, ?)
  `, [account.did, account.publicKeyHex, account.createdAt, account.updatedAt]);

  for (const contact of contacts) {
    await db.run(`
      INSERT INTO contacts (id, did, name, created_at, updated_at)
      VALUES (?, ?, ?, ?, ?)
    `, [contact.id, contact.did, contact.name, contact.createdAt, contact.updatedAt]);
  }
  await db.exec('COMMIT;');
} catch (error) {
  await db.exec('ROLLBACK;');
  throw error;
}
```

## Migration Helper Functions

### 1. Data Export (Dexie to JSON)

```typescript
async function exportDexieData(): Promise<MigrationData> {
  const db = new Dexie('TimeSafariDB');

  return {
    accounts: await db.accounts.toArray(),
    settings: await db.settings.toArray(),
    contacts: await db.contacts.toArray(),
    metadata: {
      version: '1.0.0',
      timestamp: Date.now(),
      dexieVersion: Dexie.version
    }
  };
}
```

### 2. Data Import (JSON to absurd-sql)

```typescript
async function importToAbsurdSql(data: MigrationData): Promise<void> {
  await db.exec('BEGIN TRANSACTION;');
  try {
    // Import accounts
    for (const account of data.accounts) {
      await db.run(`
        INSERT INTO accounts (did, public_key_hex, created_at, updated_at)
        VALUES (?, ?, ?, ?)
      `, [account.did, account.publicKeyHex, account.createdAt, account.updatedAt]);
    }

    // Import settings
    for (const setting of data.settings) {
      await db.run(`
        INSERT INTO settings (key, value, updated_at)
        VALUES (?, ?, ?)
      `, [setting.key, setting.value, setting.updatedAt]);
    }

    // Import contacts
    for (const contact of data.contacts) {
      await db.run(`
        INSERT INTO contacts (id, did, name, created_at, updated_at)
        VALUES (?, ?, ?, ?, ?)
      `, [contact.id, contact.did, contact.name, contact.createdAt, contact.updatedAt]);
    }
    await db.exec('COMMIT;');
  } catch (error) {
    await db.exec('ROLLBACK;');
    throw error;
  }
}
```

### 3. Verification

```typescript
async function verifyMigration(dexieData: MigrationData): Promise<boolean> {
  // Verify account count
  const accountResult = await db.exec('SELECT COUNT(*) as count FROM accounts');
  const accountCount = accountResult[0].values[0][0];
  if (accountCount !== dexieData.accounts.length) {
    return false;
  }

  // Verify settings count
  const settingsResult = await db.exec('SELECT COUNT(*) as count FROM settings');
  const settingsCount = settingsResult[0].values[0][0];
  if (settingsCount !== dexieData.settings.length) {
    return false;
  }

  // Verify contacts count
  const contactsResult = await db.exec('SELECT COUNT(*) as count FROM contacts');
  const contactsCount = contactsResult[0].values[0][0];
  if (contactsCount !== dexieData.contacts.length) {
    return false;
  }

  // Verify data integrity
  for (const account of dexieData.accounts) {
    const result = await db.exec(
      'SELECT * FROM accounts WHERE did = ?',
      [account.did]
    );
    const migratedAccount = result[0]?.values[0];
    if (!migratedAccount ||
        migratedAccount[1] !== account.publicKeyHex) { // public_key_hex is second column
      return false;
    }
  }

  return true;
}
```

## Performance Considerations

### 1. Indexing

- Dexie automatically creates indexes based on the schema
- absurd-sql requires explicit index creation
- Added indexes for frequently queried fields
- Use `PRAGMA journal_mode=MEMORY;` for better performance

### 2. Batch Operations

- Dexie has built-in bulk operations
- absurd-sql uses transactions for batch operations
- Consider chunking large datasets
- Use prepared statements for repeated queries

### 3. Query Optimization

- Dexie uses IndexedDB's native indexing
- absurd-sql requires explicit query optimization
- Use prepared statements for repeated queries (see the sketch below)
- Consider using `PRAGMA synchronous=NORMAL;` for better performance

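Prepared statements are mentioned twice above but not shown. A sketch using the raw sql.js statement API on the worker side (synchronous there); `db` is the sql.js database handle, and the `contacts` query and `dids` array are illustrative:

```typescript
// Prepare once, then bind/step/reset per lookup; free the statement when done.
const dids: string[] = [/* DIDs to look up */];
const stmt = db.prepare("SELECT * FROM contacts WHERE did = ?");
try {
  for (const did of dids) {
    stmt.bind([did]);
    while (stmt.step()) {
      const row = stmt.getAsObject();
      // ...use row...
    }
    stmt.reset();
  }
} finally {
  stmt.free();
}
```
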
## Error Handling

### 1. Common Errors

```typescript
// Dexie errors
try {
  await db.accounts.add(account);
} catch (error) {
  if (error instanceof Dexie.ConstraintError) {
    // Handle duplicate key
  }
}

// absurd-sql errors
try {
  await db.run(`
    INSERT INTO accounts (did, public_key_hex, created_at, updated_at)
    VALUES (?, ?, ?, ?)
  `, [account.did, account.publicKeyHex, account.createdAt, account.updatedAt]);
} catch (error) {
  if (error.message.includes('UNIQUE constraint failed')) {
    // Handle duplicate key
  }
}
```

### 2. Transaction Recovery

```typescript
// Dexie transaction
try {
  await db.transaction('rw', db.accounts, async () => {
    // Operations
  });
} catch (error) {
  // Dexie automatically rolls back
}

// absurd-sql transaction
try {
  await db.exec('BEGIN TRANSACTION;');
  // Operations
  await db.exec('COMMIT;');
} catch (error) {
  await db.exec('ROLLBACK;');
  throw error;
}
```

## Migration Strategy

1. **Preparation**
   - Export all Dexie data
   - Verify data integrity
   - Create SQLite schema
   - Setup indexes

2. **Migration**
   - Import data in transactions
   - Verify each batch
   - Handle errors gracefully
   - Maintain backup

3. **Verification**
   - Compare record counts
   - Verify data integrity
   - Test common queries
   - Validate relationships

4. **Cleanup**
   - Remove Dexie database
   - Clear IndexedDB storage
   - Update application code
   - Remove old dependencies
@@ -0,0 +1,339 @@
# Secure Storage Implementation Guide for TimeSafari App

## Overview

This document outlines the implementation of secure storage for the TimeSafari app. The implementation focuses on:

1. **Platform-Specific Storage Solutions**:
   - Web: SQLite with IndexedDB backend (absurd-sql)
   - Electron: SQLite with Node.js backend
   - Native: (Planned) SQLCipher with platform-specific secure storage

2. **Key Features**:
   - SQLite-based storage using absurd-sql for web
   - Platform-specific service factory pattern
   - Consistent API across platforms
   - Migration support from Dexie.js

## Quick Start

### 1. Installation

```bash
# Core dependencies
npm install @jlongster/sql.js
npm install absurd-sql

# Platform-specific dependencies (for future native support)
npm install @capacitor/preferences
npm install @capacitor-community/biometric-auth
```

### 2. Basic Usage

```typescript
// Using the platform service
import { PlatformServiceFactory } from '../services/PlatformServiceFactory';

// Get platform-specific service instance
const platformService = PlatformServiceFactory.getInstance();

// Example database operations
async function example() {
  try {
    // Query example
    const result = await platformService.dbQuery(
      "SELECT * FROM accounts WHERE did = ?",
      [did]
    );

    // Execute example
    await platformService.dbExec(
      "INSERT INTO accounts (did, public_key_hex) VALUES (?, ?)",
      [did, publicKeyHex]
    );

  } catch (error) {
    console.error('Database operation failed:', error);
  }
}
```

### 3. Platform Detection

```typescript
// src/services/PlatformServiceFactory.ts
export class PlatformServiceFactory {
  static getInstance(): PlatformService {
    if (process.env.ELECTRON) {
      // Electron platform
      return new ElectronPlatformService();
    } else {
      // Web platform (default)
      return new AbsurdSqlDatabaseService();
    }
  }
}
```

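The factory above is typed against a `PlatformService` interface that this guide never spells out. A minimal sketch of its assumed shape, matching the `dbQuery`/`dbExec` signatures of `AbsurdSqlDatabaseService` in the next section (the import path is taken from elsewhere in this changeset; the real interface may declare more members):

```typescript
// Assumed shape of PlatformService, inferred from the calls used in this guide.
import { QueryExecResult } from "@/interfaces/database";

export interface PlatformService {
  // SELECT-style statements: resolves to sql.js-style result sets (columns + values).
  dbQuery(sql: string, params?: unknown[]): Promise<QueryExecResult[]>;
  // INSERT/UPDATE/DELETE and DDL: resolves once the statement has been applied.
  dbExec(sql: string, params?: unknown[]): Promise<void>;
}
```
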
### 4. Current Implementation Details

#### Web Platform (AbsurdSqlDatabaseService)

The web platform uses absurd-sql with an IndexedDB backend:

```typescript
// src/services/AbsurdSqlDatabaseService.ts
export class AbsurdSqlDatabaseService implements PlatformService {
  private static instance: AbsurdSqlDatabaseService | null = null;
  private db: AbsurdSqlDatabase | null = null;
  private initialized: boolean = false;

  // Singleton pattern
  static getInstance(): AbsurdSqlDatabaseService {
    if (!AbsurdSqlDatabaseService.instance) {
      AbsurdSqlDatabaseService.instance = new AbsurdSqlDatabaseService();
    }
    return AbsurdSqlDatabaseService.instance;
  }

  // Database operations
  async dbQuery(sql: string, params: unknown[] = []): Promise<QueryExecResult[]> {
    await this.waitForInitialization();
    return this.queueOperation<QueryExecResult[]>("query", sql, params);
  }

  async dbExec(sql: string, params: unknown[] = []): Promise<void> {
    await this.waitForInitialization();
    await this.queueOperation<void>("run", sql, params);
  }
}
```

Key features:
- Uses absurd-sql for SQLite in the browser
- Implements operation queuing for thread safety
- Handles initialization and connection management
- Provides a consistent API across platforms

### 5. Migration from Dexie.js

The current implementation supports gradual migration from Dexie.js:

```typescript
// Example of dual-storage pattern
async function getAccount(did: string): Promise<Account | undefined> {
  // Try SQLite first
  const platform = PlatformServiceFactory.getInstance();
  let account = await platform.dbQuery(
    "SELECT * FROM accounts WHERE did = ?",
    [did]
  );

  // Fallback to Dexie if needed
  if (USE_DEXIE_DB) {
    account = await db.accounts.get(did);
  }

  return account;
}
```

#### A. Modifying Code

When converting from Dexie.js to the SQL-based implementation, follow these patterns:

1. **Database Access Pattern**

```typescript
// Before (Dexie)
const result = await db.table.where("field").equals(value).first();

// After (SQL)
const platform = PlatformServiceFactory.getInstance();
let result = await platform.dbQuery(
  "SELECT * FROM table WHERE field = ?",
  [value]
);
result = databaseUtil.mapQueryResultToValues(result);

// Fallback to Dexie if needed
if (USE_DEXIE_DB) {
  result = await db.table.where("field").equals(value).first();
}
```

2. **Update Operations**

```typescript
// Before (Dexie)
await db.table.where("id").equals(id).modify(changes);

// After (SQL)
// For settings updates, use the utility methods:
await databaseUtil.updateDefaultSettings(changes);
// OR
await databaseUtil.updateAccountSettings(did, changes);

// For other tables, use direct SQL:
const platform = PlatformServiceFactory.getInstance();
await platform.dbExec(
  "UPDATE table SET field1 = ?, field2 = ? WHERE id = ?",
  [changes.field1, changes.field2, id]
);

// Fallback to Dexie if needed
if (USE_DEXIE_DB) {
  await db.table.where("id").equals(id).modify(changes);
}
```

3. **Insert Operations**

```typescript
// Before (Dexie)
await db.table.add(item);

// After (SQL)
const platform = PlatformServiceFactory.getInstance();
const columns = Object.keys(item);
const values = Object.values(item);
const placeholders = values.map(() => '?').join(', ');
const sql = `INSERT INTO table (${columns.join(', ')}) VALUES (${placeholders})`;
await platform.dbExec(sql, values);

// Fallback to Dexie if needed
if (USE_DEXIE_DB) {
  await db.table.add(item);
}
```

4. **Delete Operations**

```typescript
// Before (Dexie)
await db.table.where("id").equals(id).delete();

// After (SQL)
const platform = PlatformServiceFactory.getInstance();
await platform.dbExec("DELETE FROM table WHERE id = ?", [id]);

// Fallback to Dexie if needed
if (USE_DEXIE_DB) {
  await db.table.where("id").equals(id).delete();
}
```

5. **Result Processing**

```typescript
// Before (Dexie)
const items = await db.table.toArray();

// After (SQL)
const platform = PlatformServiceFactory.getInstance();
let items = await platform.dbQuery("SELECT * FROM table");
items = databaseUtil.mapQueryResultToValues(items);

// Fallback to Dexie if needed
if (USE_DEXIE_DB) {
  items = await db.table.toArray();
}
```

6. **Using Utility Methods**

When working with settings or other common operations, use the utility methods in `db/index.ts`:

```typescript
// Settings operations
await databaseUtil.updateDefaultSettings(settings);
await databaseUtil.updateAccountSettings(did, settings);
const settings = await databaseUtil.retrieveSettingsForDefaultAccount();
const settings = await databaseUtil.retrieveSettingsForActiveAccount();

// Logging operations
await databaseUtil.logToDb(message);
await databaseUtil.logConsoleAndDb(message, showInConsole);
```

Key Considerations:
- Always use `databaseUtil.mapQueryResultToValues()` to process SQL query results
- Use utility methods from `db/index.ts` when available instead of direct SQL
- Keep Dexie fallbacks wrapped in `if (USE_DEXIE_DB)` checks
- For queries that return results, use `let` variables to allow the Dexie fallback to override
- For updates/inserts/deletes, execute both SQL and Dexie operations when `USE_DEXIE_DB` is true

Example Migration:

```typescript
// Before (Dexie)
export async function updateSettings(settings: Settings): Promise<void> {
  await db.settings.put(settings);
}

// After (SQL)
export async function updateSettings(settings: Settings): Promise<void> {
  const platform = PlatformServiceFactory.getInstance();
  const { sql, params } = generateUpdateStatement(
    settings,
    "settings",
    "id = ?",
    [settings.id]
  );
  await platform.dbExec(sql, params);
}
```

Remember to:

- Create the database access code with the platform service and put it in front of the existing Dexie version.
- Keep the Dexie-specific code rather than removing it.
- For creates, updates, and deletes, the duplicated code (SQL plus Dexie) is fine.
- For queries whose results are used, assign the SQL result to a `let` variable, then wrap the Dexie code in a check of `USE_DEXIE_DB` from `app.ts`; if it is true, use the Dexie result instead of the SQL result.
- Consider data migration needs, and warn if there are any potential migration problems.

## Success Criteria

1. **Functionality**
   - [x] Basic CRUD operations work correctly
   - [x] Platform service factory pattern implemented
   - [x] Error handling in place
   - [ ] Native platform support (planned)

2. **Performance**
   - [x] Database operations complete within acceptable time
   - [x] Operation queuing for thread safety
   - [x] Proper initialization handling
   - [ ] Performance monitoring (planned)

3. **Security**
   - [x] Basic data integrity
   - [ ] Encryption (planned for native platforms)
   - [ ] Secure key storage (planned)
   - [ ] Platform-specific security features (planned)

4. **Testing**
   - [x] Basic unit tests
   - [ ] Comprehensive integration tests (planned)
   - [ ] Platform-specific tests (planned)
   - [ ] Migration tests (planned)

## Next Steps

1. **Native Platform Support**
   - Implement SQLCipher for iOS/Android
   - Add platform-specific secure storage
   - Implement biometric authentication

2. **Enhanced Security**
   - Add encryption for sensitive data
   - Implement secure key storage
   - Add platform-specific security features

3. **Testing and Monitoring**
   - Add comprehensive test coverage
   - Implement performance monitoring
   - Add error tracking and analytics

4. **Documentation**
   - Add API documentation
   - Create migration guides
   - Document security measures
@@ -0,0 +1,329 @@
# Storage Implementation Checklist

## Core Services

### 1. Storage Service Layer
- [x] Create base `PlatformService` interface
  - [x] Define common methods for all platforms
  - [x] Add platform-specific method signatures
  - [x] Include error handling types
  - [x] Add migration support methods

- [x] Implement platform-specific services
  - [x] `AbsurdSqlDatabaseService` (web)
    - [x] Database initialization
    - [x] VFS setup with IndexedDB backend
    - [x] Connection management
    - [x] Operation queuing
  - [ ] `NativeSQLiteService` (iOS/Android) (planned)
    - [ ] SQLCipher integration
    - [ ] Native bridge setup
    - [ ] File system access
  - [ ] `ElectronSQLiteService` (planned)
    - [ ] Node SQLite integration
    - [ ] IPC communication
    - [ ] File system access

### 2. Migration Services
- [x] Implement basic migration support
  - [x] Dual-storage pattern (SQLite + Dexie)
  - [x] Basic data verification
  - [ ] Rollback procedures (planned)
  - [ ] Progress tracking (planned)
- [ ] Create `MigrationUI` components (planned)
  - [ ] Progress indicators
  - [ ] Error handling
  - [ ] User notifications
  - [ ] Manual triggers

### 3. Security Layer
- [x] Basic data integrity
- [ ] Implement `EncryptionService` (planned)
  - [ ] Key management
  - [ ] Encryption/decryption
  - [ ] Secure storage
- [ ] Add `BiometricService` (planned)
  - [ ] Platform detection
  - [ ] Authentication flow
  - [ ] Fallback mechanisms

## Platform-Specific Implementation

### Web Platform
- [x] Setup absurd-sql
  - [x] Install dependencies

```json
{
  "@jlongster/sql.js": "^1.8.0",
  "absurd-sql": "^1.8.0"
}
```

  - [x] Configure VFS with IndexedDB backend
  - [x] Setup worker threads
  - [x] Implement operation queuing (see the sketch after this section)
  - [x] Configure database pragmas

```sql
PRAGMA journal_mode=MEMORY;
PRAGMA synchronous=NORMAL;
PRAGMA foreign_keys=ON;
PRAGMA busy_timeout=5000;
```

- [x] Update build configuration
  - [x] Modify `vite.config.ts`
  - [x] Add worker configuration
  - [x] Update chunk splitting
  - [x] Configure asset handling

- [x] Implement IndexedDB backend
  - [x] Create database service
  - [x] Add operation queuing
  - [x] Handle initialization
  - [x] Implement atomic operations

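The checklist above marks operation queuing as done without illustrating it. A sketch of the idea — serializing all database calls through a single promise chain so concurrent callers never interleave on the worker; the class and helper names here are illustrative, not the project's actual implementation:

```typescript
// Serialize async database calls: each call waits for the previous one to settle.
class OperationQueue {
  private tail: Promise<unknown> = Promise.resolve();

  enqueue<T>(operation: () => Promise<T>): Promise<T> {
    // Run after the previous operation settles, whether it succeeded or failed.
    const next = this.tail.then(operation, operation);
    // Keep the chain alive even if this operation rejects.
    this.tail = next.catch(() => undefined);
    return next;
  }
}

// Usage sketch (postToWorker stands in for the real worker bridge).
const queue = new OperationQueue();
async function postToWorker(sql: string, params: unknown[]): Promise<void> {
  /* send { sql, params } to the worker and await its reply */
}
function dbExec(sql: string, params: unknown[] = []): Promise<void> {
  return queue.enqueue(() => postToWorker(sql, params));
}
```
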
### iOS Platform (Planned)
- [ ] Setup SQLCipher
  - [ ] Install pod dependencies
  - [ ] Configure encryption
  - [ ] Setup keychain access
  - [ ] Implement secure storage

- [ ] Update Capacitor config
  - [ ] Modify `capacitor.config.ts`
  - [ ] Add iOS permissions
  - [ ] Configure backup
  - [ ] Setup app groups

### Android Platform (Planned)
- [ ] Setup SQLCipher
  - [ ] Add Gradle dependencies
  - [ ] Configure encryption
  - [ ] Setup keystore
  - [ ] Implement secure storage

- [ ] Update Capacitor config
  - [ ] Modify `capacitor.config.ts`
  - [ ] Add Android permissions
  - [ ] Configure backup
  - [ ] Setup file provider

### Electron Platform (Planned)
- [ ] Setup Node SQLite
  - [ ] Install dependencies
  - [ ] Configure IPC
  - [ ] Setup file system access
  - [ ] Implement secure storage

- [ ] Update Electron config
  - [ ] Modify `electron.config.ts`
  - [ ] Add security policies
  - [ ] Configure file access
  - [ ] Setup auto-updates

## Data Models and Types

### 1. Database Schema
- [x] Define tables

```sql
-- Accounts table
CREATE TABLE accounts (
  did TEXT PRIMARY KEY,
  public_key_hex TEXT NOT NULL,
  created_at INTEGER NOT NULL,
  updated_at INTEGER NOT NULL
);

-- Settings table
CREATE TABLE settings (
  key TEXT PRIMARY KEY,
  value TEXT NOT NULL,
  updated_at INTEGER NOT NULL
);

-- Contacts table
CREATE TABLE contacts (
  id TEXT PRIMARY KEY,
  did TEXT NOT NULL,
  name TEXT,
  created_at INTEGER NOT NULL,
  updated_at INTEGER NOT NULL,
  FOREIGN KEY (did) REFERENCES accounts(did)
);

-- Indexes for performance
CREATE INDEX idx_accounts_created_at ON accounts(created_at);
CREATE INDEX idx_contacts_did ON contacts(did);
CREATE INDEX idx_settings_updated_at ON settings(updated_at);
```

- [x] Create indexes
- [x] Define constraints
- [ ] Add triggers (planned)
- [ ] Setup migrations (planned)

### 2. Type Definitions

- [x] Create interfaces

```typescript
interface Account {
  did: string;
  publicKeyHex: string;
  createdAt: number;
  updatedAt: number;
}

interface Setting {
  key: string;
  value: string;
  updatedAt: number;
}

interface Contact {
  id: string;
  did: string;
  name?: string;
  createdAt: number;
  updatedAt: number;
}
```

- [x] Add validation
- [x] Create DTOs
- [x] Define enums
- [x] Add type guards

## UI Components

### 1. Migration UI (Planned)
- [ ] Create components
  - [ ] `MigrationProgress.vue`
  - [ ] `MigrationError.vue`
  - [ ] `MigrationSettings.vue`
  - [ ] `MigrationStatus.vue`

### 2. Settings UI (Planned)
- [ ] Update components
  - [ ] Add storage settings
  - [ ] Add migration controls
  - [ ] Add backup options
  - [ ] Add security settings

### 3. Error Handling UI (Planned)
- [ ] Create components
  - [ ] `StorageError.vue`
  - [ ] `QuotaExceeded.vue`
  - [ ] `MigrationFailed.vue`
  - [ ] `RecoveryOptions.vue`

## Testing

### 1. Unit Tests
- [x] Basic service tests
  - [x] Platform service tests
  - [x] Database operation tests
  - [ ] Security service tests (planned)
  - [ ] Platform detection tests (planned)

### 2. Integration Tests (Planned)
- [ ] Test migrations
  - [ ] Web platform tests
  - [ ] iOS platform tests
  - [ ] Android platform tests
  - [ ] Electron platform tests

### 3. E2E Tests (Planned)
- [ ] Test workflows
  - [ ] Account management
  - [ ] Settings management
  - [ ] Contact management
  - [ ] Migration process

## Documentation

### 1. Technical Documentation
- [x] Update architecture docs
- [x] Add API documentation
- [ ] Create migration guides (planned)
- [ ] Document security measures (planned)

### 2. User Documentation (Planned)
- [ ] Update user guides
- [ ] Add troubleshooting guides
- [ ] Create FAQ
- [ ] Document new features

## Deployment

### 1. Build Process
- [x] Update build scripts
- [x] Add platform-specific builds
- [ ] Configure CI/CD (planned)
- [ ] Setup automated testing (planned)

### 2. Release Process (Planned)
- [ ] Create release checklist
- [ ] Add version management
- [ ] Setup rollback procedures
- [ ] Configure monitoring

## Monitoring and Analytics (Planned)

### 1. Error Tracking
- [ ] Setup error logging
- [ ] Add performance monitoring
- [ ] Configure alerts
- [ ] Create dashboards

### 2. Usage Analytics
- [ ] Add storage metrics
- [ ] Track migration success
- [ ] Monitor performance
- [ ] Collect user feedback

## Security Audit (Planned)

### 1. Code Review
- [ ] Review encryption
- [ ] Check access controls
- [ ] Verify data handling
- [ ] Audit dependencies

### 2. Penetration Testing
- [ ] Test data access
- [ ] Verify encryption
- [ ] Check authentication
- [ ] Review permissions

## Success Criteria

### 1. Performance
- [x] Query response time < 100ms
- [x] Operation queuing for thread safety
- [x] Proper initialization handling
- [ ] Migration time < 5s per 1000 records (planned)
- [ ] Storage overhead < 10% (planned)
- [ ] Memory usage < 50MB (planned)

### 2. Reliability
- [x] Basic data integrity
- [x] Operation queuing
- [ ] Automatic recovery (planned)
- [ ] Backup verification (planned)
- [ ] Transaction atomicity (planned)
- [ ] Data consistency (planned)

### 3. Security
- [x] Basic data integrity
- [ ] AES-256 encryption (planned)
- [ ] Secure key storage (planned)
- [ ] Access control (planned)
- [ ] Audit logging (planned)

### 4. User Experience
- [x] Basic database operations
- [ ] Smooth migration (planned)
- [ ] Clear error messages (planned)
- [ ] Progress indicators (planned)
- [ ] Recovery options (planned)
File diff suppressed because it is too large
@@ -1,29 +0,0 @@
const { app, BrowserWindow } = require('electron');
const path = require('path');

function createWindow() {
  const win = new BrowserWindow({
    width: 1200,
    height: 800,
    webPreferences: {
      nodeIntegration: true,
      contextIsolation: false
    }
  });

  win.loadFile(path.join(__dirname, 'dist-electron/www/index.html'));
}

app.whenReady().then(createWindow);

app.on('window-all-closed', () => {
  if (process.platform !== 'darwin') {
    app.quit();
  }
});

app.on('activate', () => {
  if (BrowserWindow.getAllWindows().length === 0) {
    createWindow();
  }
});
File diff suppressed because it is too large
@@ -0,0 +1,15 @@
const fs = require('fs');
const path = require('path');

// Create public/wasm directory if it doesn't exist
const wasmDir = path.join(__dirname, '../public/wasm');
if (!fs.existsSync(wasmDir)) {
  fs.mkdirSync(wasmDir, { recursive: true });
}

// Copy the WASM file from node_modules to public/wasm
const sourceFile = path.join(__dirname, '../node_modules/@jlongster/sql.js/dist/sql-wasm.wasm');
const targetFile = path.join(wasmDir, 'sql-wasm.wasm');

fs.copyFileSync(sourceFile, targetFile);
console.log('WASM file copied successfully!');
@@ -0,0 +1,138 @@
import migrationService from "../services/migrationService";
import { DEFAULT_ENDORSER_API_SERVER } from "@/constants/app";
import { arrayBufferToBase64 } from "@/libs/crypto";

// Generate a random secret for the secret table

// It's not really secure to maintain the secret next to the user's data.
// However, until we have better hooks into a real wallet or reliable secure
// storage, we'll do this for user convenience. As they sign more records
// and integrate with more people, they'll value it more and want to be more
// secure, so we'll prompt them to take steps to back it up, properly encrypt,
// etc. At the beginning, we'll prompt for a password, then we'll prompt for a
// PWA so it's not in a browser... and then we hope to be integrated with a
// real wallet or something else more secure.

// One might ask: why encrypt at all? We figure a basic encryption is better
// than none. Plus, we expect to support their own password or keystore or
// external wallet as better signing options in the future, so it's gonna be
// important to have the structure where each account access might require
// user action.

// (Once upon a time we stored the secret in localStorage, but it frequently
// got erased, even though the IndexedDB still had the identity data. This
// ended up throwing lots of errors to the user... and they'd end up in a state
// where they couldn't take action because they couldn't unlock that identity.)

const randomBytes = crypto.getRandomValues(new Uint8Array(32));
const secretBase64 = arrayBufferToBase64(randomBytes);

// Each migration can include multiple SQL statements (with semicolons)
const MIGRATIONS = [
  {
    name: "001_initial",
    // see ../db/tables files for explanations of the fields
    sql: `
      CREATE TABLE IF NOT EXISTS accounts (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        dateCreated TEXT NOT NULL,
        derivationPath TEXT,
        did TEXT NOT NULL,
        identityEncrBase64 TEXT, -- encrypted & base64-encoded
        mnemonicEncrBase64 TEXT, -- encrypted & base64-encoded
        passkeyCredIdHex TEXT,
        publicKeyHex TEXT NOT NULL
      );

      CREATE INDEX IF NOT EXISTS idx_accounts_did ON accounts(did);

      CREATE TABLE IF NOT EXISTS secret (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        secretBase64 TEXT NOT NULL
      );

      INSERT INTO secret (id, secretBase64) VALUES (1, '${secretBase64}');

      CREATE TABLE IF NOT EXISTS settings (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        accountDid TEXT,
        activeDid TEXT,
        apiServer TEXT,
        filterFeedByNearby BOOLEAN,
        filterFeedByVisible BOOLEAN,
        finishedOnboarding BOOLEAN,
        firstName TEXT,
        hideRegisterPromptOnNewContact BOOLEAN,
        isRegistered BOOLEAN,
        lastName TEXT,
        lastAckedOfferToUserJwtId TEXT,
        lastAckedOfferToUserProjectsJwtId TEXT,
        lastNotifiedClaimId TEXT,
        lastViewedClaimId TEXT,
        notifyingNewActivityTime TEXT,
        notifyingReminderMessage TEXT,
        notifyingReminderTime TEXT,
        partnerApiServer TEXT,
        passkeyExpirationMinutes INTEGER,
        profileImageUrl TEXT,
        searchBoxes TEXT, -- Stored as JSON string
        showContactGivesInline BOOLEAN,
        showGeneralAdvanced BOOLEAN,
        showShortcutBvc BOOLEAN,
        vapid TEXT,
        warnIfProdServer BOOLEAN,
        warnIfTestServer BOOLEAN,
        webPushServer TEXT
      );

      CREATE INDEX IF NOT EXISTS idx_settings_accountDid ON settings(accountDid);

      INSERT INTO settings (id, apiServer) VALUES (1, '${DEFAULT_ENDORSER_API_SERVER}');

      CREATE TABLE IF NOT EXISTS contacts (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        did TEXT NOT NULL,
        name TEXT,
        contactMethods TEXT, -- Stored as JSON string
        nextPubKeyHashB64 TEXT,
        notes TEXT,
        profileImageUrl TEXT,
        publicKeyBase64 TEXT,
        seesMe BOOLEAN,
        registered BOOLEAN
      );

      CREATE INDEX IF NOT EXISTS idx_contacts_did ON contacts(did);
      CREATE INDEX IF NOT EXISTS idx_contacts_name ON contacts(name);

      CREATE TABLE IF NOT EXISTS logs (
        date TEXT NOT NULL,
        message TEXT NOT NULL
      );

      CREATE TABLE IF NOT EXISTS temp (
        id TEXT PRIMARY KEY,
        blobB64 TEXT
      );
    `,
  },
];

/**
 * @param sqlExec - A function that executes a SQL statement and returns the result
 * @param sqlQuery - A function that executes a SQL query and returns the result
 * @param extractMigrationNames - A function that extracts the names (string array) from "select name from migrations"
 */
export async function runMigrations<T>(
  sqlExec: (sql: string) => Promise<unknown>,
  sqlQuery: (sql: string) => Promise<T>,
  extractMigrationNames: (result: T) => Set<string>,
): Promise<void> {
  for (const migration of MIGRATIONS) {
    migrationService.registerMigration(migration);
  }
  await migrationService.runMigrations(
    sqlExec,
    sqlQuery,
    extractMigrationNames,
  );
}
@@ -0,0 +1,330 @@
/**
 * This file is the SQL replacement of the index.ts file in the db directory.
 * That file will eventually be deleted.
 */

import { PlatformServiceFactory } from "@/services/PlatformServiceFactory";
import { MASTER_SETTINGS_KEY, Settings } from "./tables/settings";
import { logger } from "@/utils/logger";
import { DEFAULT_ENDORSER_API_SERVER } from "@/constants/app";
import { QueryExecResult } from "@/interfaces/database";

export async function updateDefaultSettings(
  settingsChanges: Settings,
): Promise<boolean> {
  delete settingsChanges.accountDid; // just in case
  // ensure there is no "id" that would override the key
  delete settingsChanges.id;
  try {
    const platformService = PlatformServiceFactory.getInstance();
    const { sql, params } = generateUpdateStatement(
      settingsChanges,
      "settings",
      "id = ?",
      [MASTER_SETTINGS_KEY],
    );
    const result = await platformService.dbExec(sql, params);
    return result.changes === 1;
  } catch (error) {
    logger.error("Error updating default settings:", error);
    if (error instanceof Error) {
      throw error; // Re-throw if it's already an Error with a message
    } else {
      throw new Error(
        `Failed to update settings. We recommend you try again or restart the app.`,
      );
    }
  }
}

export async function updateAccountSettings(
  accountDid: string,
  settingsChanges: Settings,
): Promise<boolean> {
  settingsChanges.accountDid = accountDid;
  delete settingsChanges.id; // key off account, not ID

  const platform = PlatformServiceFactory.getInstance();

  // First try to update existing record
  const { sql: updateSql, params: updateParams } = generateUpdateStatement(
    settingsChanges,
    "settings",
    "accountDid = ?",
    [accountDid],
  );

  const updateResult = await platform.dbExec(updateSql, updateParams);

  // If no record was updated, insert a new one
  if (updateResult.changes === 1) {
    return true;
  } else {
    const columns = Object.keys(settingsChanges);
    const values = Object.values(settingsChanges);
    const placeholders = values.map(() => "?").join(", ");

    const insertSql = `INSERT INTO settings (${columns.join(", ")}) VALUES (${placeholders})`;
    const result = await platform.dbExec(insertSql, values);

    return result.changes === 1;
  }
}

const DEFAULT_SETTINGS: Settings = {
  id: MASTER_SETTINGS_KEY,
  activeDid: undefined,
  apiServer: DEFAULT_ENDORSER_API_SERVER,
};

// retrieves default settings
export async function retrieveSettingsForDefaultAccount(): Promise<Settings> {
  const platform = PlatformServiceFactory.getInstance();
  const sql = "SELECT * FROM settings WHERE id = ?";
  const result = await platform.dbQuery(sql, [MASTER_SETTINGS_KEY]);
  if (!result) {
    return DEFAULT_SETTINGS;
  } else {
    const settings = mapColumnsToValues(
      result.columns,
      result.values,
    )[0] as Settings;
    if (settings.searchBoxes) {
      // @ts-expect-error - the searchBoxes field is a string in the DB
      settings.searchBoxes = JSON.parse(settings.searchBoxes);
    }
    return settings;
  }
}

/**
 * Retrieves settings for the active account, merging with default settings
 *
 * @returns Promise<Settings> Combined settings with account-specific overrides
 * @throws Will log specific errors for debugging but returns default settings on failure
 */
export async function retrieveSettingsForActiveAccount(): Promise<Settings> {
  try {
    // Get default settings first
    const defaultSettings = await retrieveSettingsForDefaultAccount();

    // If no active DID, return defaults
    if (!defaultSettings.activeDid) {
      logConsoleAndDb(
        "[databaseUtil] No active DID found, returning default settings",
      );
      return defaultSettings;
    }

    // Get account-specific settings
    try {
      const platform = PlatformServiceFactory.getInstance();
      const result = await platform.dbQuery(
        "SELECT * FROM settings WHERE accountDid = ?",
        [defaultSettings.activeDid],
      );

      if (!result?.values?.length) {
        logConsoleAndDb(
          `[databaseUtil] No account-specific settings found for ${defaultSettings.activeDid}`,
        );
        return defaultSettings;
      }

      // Map and filter settings
      const overrideSettings = mapColumnsToValues(
        result.columns,
        result.values,
      )[0] as Settings;
      const overrideSettingsFiltered = Object.fromEntries(
        Object.entries(overrideSettings).filter(([_, v]) => v !== null),
      );

      // Merge settings
      const settings = { ...defaultSettings, ...overrideSettingsFiltered };

      // Handle searchBoxes parsing
      if (settings.searchBoxes) {
        try {
          // @ts-expect-error - the searchBoxes field is a string in the DB
          settings.searchBoxes = JSON.parse(settings.searchBoxes);
        } catch (error) {
          logConsoleAndDb(
            `[databaseUtil] Failed to parse searchBoxes for ${defaultSettings.activeDid}: ${error}`,
            true,
          );
          // Reset to empty array on parse failure
          settings.searchBoxes = [];
        }
      }

      return settings;
    } catch (error) {
      logConsoleAndDb(
        `[databaseUtil] Failed to retrieve account settings for ${defaultSettings.activeDid}: ${error}`,
        true,
      );
      // Return defaults on error
      return defaultSettings;
    }
  } catch (error) {
    logConsoleAndDb(
      `[databaseUtil] Failed to retrieve default settings: ${error}`,
      true,
    );
    // Return minimal default settings on complete failure
    return {
      id: MASTER_SETTINGS_KEY,
      activeDid: undefined,
      apiServer: DEFAULT_ENDORSER_API_SERVER,
    };
  }
}

let lastCleanupDate: string | null = null;
export let memoryLogs: string[] = [];

/**
 * Logs a message to the database with proper handling of concurrent writes
 * @param message - The message to log
 * @author Matthew Raymer
 */
export async function logToDb(message: string): Promise<void> {
  const platform = PlatformServiceFactory.getInstance();
  const todayKey = new Date().toDateString();
  const nowKey = new Date().toISOString();

  try {
    memoryLogs.push(`${new Date().toISOString()} ${message}`);
    // Try to insert first, if it fails due to UNIQUE constraint, update instead
    await platform.dbExec("INSERT INTO logs (date, message) VALUES (?, ?)", [
      nowKey,
      message,
    ]);

    // Clean up old logs (keep only last 7 days) - do this less frequently
    // Only clean up if the date is different from the last cleanup
    if (!lastCleanupDate || lastCleanupDate !== todayKey) {
      const sevenDaysAgo = new Date(
        new Date().getTime() - 7 * 24 * 60 * 60 * 1000,
      );
      memoryLogs = memoryLogs.filter(
        (log) => log.split(" ")[0] > sevenDaysAgo.toDateString(),
      );
      await platform.dbExec("DELETE FROM logs WHERE date < ?", [
        sevenDaysAgo.toDateString(),
      ]);
      lastCleanupDate = todayKey;
    }
  } catch (error) {
    // Log to console as fallback
    // eslint-disable-next-line no-console
    console.error(
      "Error logging to database:",
      error,
      " ... for original message:",
      message,
    );
  }
}

// similar method is in the sw_scripts/additional-scripts.js file
export async function logConsoleAndDb(
  message: string,
  isError = false,
): Promise<void> {
  if (isError) {
    logger.error(`${new Date().toISOString()} ${message}`);
  } else {
    logger.log(`${new Date().toISOString()} ${message}`);
  }
  await logToDb(message);
}

/**
 * Generates an SQL INSERT statement and parameters from a model object.
 * @param model The model object containing fields to update
 * @param tableName The name of the table to update
 * @returns Object containing the SQL statement and parameters array
 */
export function generateInsertStatement(
  model: Record<string, unknown>,
  tableName: string,
): { sql: string; params: unknown[] } {
  const columns = Object.keys(model).filter((key) => model[key] !== undefined);
  const values = Object.values(model).filter((value) => value !== undefined);
  const placeholders = values.map(() => "?").join(", ");
  const insertSql = `INSERT INTO ${tableName} (${columns.join(", ")}) VALUES (${placeholders})`;
  return {
    sql: insertSql,
    params: values,
  };
}

/**
 * Generates an SQL UPDATE statement and parameters from a model object.
 * @param model The model object containing fields to update
 * @param tableName The name of the table to update
 * @param whereClause The WHERE clause for the update (e.g. "id = ?")
 * @param whereParams Parameters for the WHERE clause
 * @returns Object containing the SQL statement and parameters array
 */
export function generateUpdateStatement(
  model: Record<string, unknown>,
  tableName: string,
  whereClause: string,
  whereParams: unknown[] = [],
): { sql: string; params: unknown[] } {
  // Filter out undefined/null values and create SET clause
  const setClauses: string[] = [];
  const params: unknown[] = [];

  Object.entries(model).forEach(([key, value]) => {
    if (value !== undefined) {
      setClauses.push(`${key} = ?`);
      params.push(value);
    }
  });

  if (setClauses.length === 0) {
    throw new Error("No valid fields to update");
  }

  const sql = `UPDATE ${tableName} SET ${setClauses.join(", ")} WHERE ${whereClause}`;

  return {
    sql,
    params: [...params, ...whereParams],
  };
}

export function mapQueryResultToValues(
  record: QueryExecResult | undefined,
): Array<Record<string, unknown>> {
  if (!record) {
    return [];
  }
  return mapColumnsToValues(record.columns, record.values) as Array<
    Record<string, unknown>
  >;
}

/**
 * Maps an array of column names to an array of value arrays, creating objects where each column name
 * is mapped to its corresponding value.
 * @param columns Array of column names to use as object keys
||||
|
* @param values Array of value arrays, where each inner array corresponds to one row of data |
||||
|
* @returns Array of objects where each object maps column names to their corresponding values |
||||
|
*/ |
||||
|
export function mapColumnsToValues( |
||||
|
columns: string[], |
||||
|
values: unknown[][], |
||||
|
): Array<Record<string, unknown>> { |
||||
|
return values.map((row) => { |
||||
|
const obj: Record<string, unknown> = {}; |
||||
|
columns.forEach((column, index) => { |
||||
|
obj[column] = row[index]; |
||||
|
}); |
||||
|
return obj; |
||||
|
}); |
||||
|
} |
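The insert/update generators above are pure string-and-parameter builders, so they can be exercised without touching the database. A minimal usage sketch follows; the `contacts` table, its fields, and the import path are illustrative assumptions rather than part of this changeset:

```typescript
// Sketch only: table name, fields, and import path are assumed for illustration.
import {
  generateInsertStatement,
  generateUpdateStatement,
} from "../db/databaseUtil";

const newContact = { did: "did:example:123", name: "Alice", notes: undefined };

// undefined fields are skipped, so this yields:
//   INSERT INTO contacts (did, name) VALUES (?, ?)   params: ["did:example:123", "Alice"]
const insert = generateInsertStatement(newContact, "contacts");

//   UPDATE contacts SET name = ? WHERE did = ?       params: ["Alice", "did:example:123"]
const update = generateUpdateStatement(
  { name: "Alice" },
  "contacts",
  "did = ?",
  ["did:example:123"],
);
```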
@ -0,0 +1,59 @@
import type { QueryExecResult, SqlValue } from "./database";

declare module "@jlongster/sql.js" {
  interface SQL {
    Database: new (path: string, options?: { filename: boolean }) => AbsurdSqlDatabase;
    FS: {
      mkdir: (path: string) => void;
      mount: (fs: any, options: any, path: string) => void;
      open: (path: string, flags: string) => any;
      close: (stream: any) => void;
    };
    register_for_idb: (fs: any) => void;
  }

  interface AbsurdSqlDatabase {
    exec: (sql: string, params?: unknown[]) => Promise<QueryExecResult[]>;
    run: (
      sql: string,
      params?: unknown[],
    ) => Promise<{ changes: number; lastId?: number }>;
  }

  const initSqlJs: (options?: {
    locateFile?: (file: string) => string;
  }) => Promise<SQL>;

  export default initSqlJs;
}

declare module "absurd-sql" {
  import type { SQL } from "@jlongster/sql.js";

  export class SQLiteFS {
    constructor(fs: any, backend: any);
  }
}

declare module "absurd-sql/dist/indexeddb-backend" {
  export default class IndexedDBBackend {
    constructor();
  }
}

declare module "absurd-sql/dist/indexeddb-main-thread" {
  export interface SQLiteOptions {
    filename?: string;
    autoLoad?: boolean;
    debug?: boolean;
  }

  export interface SQLiteDatabase {
    exec: (sql: string, params?: unknown[]) => Promise<QueryExecResult[]>;
    close: () => Promise<void>;
  }

  export function initSqlJs(options?: any): Promise<any>;
  export function createDatabase(options?: SQLiteOptions): Promise<SQLiteDatabase>;
  export function openDatabase(options?: SQLiteOptions): Promise<SQLiteDatabase>;
}
@ -0,0 +1,15 @@
export type SqlValue = string | number | null | Uint8Array;

export interface QueryExecResult {
  columns: Array<string>;
  values: Array<Array<SqlValue>>;
}

export interface DatabaseService {
  initialize(): Promise<void>;
  query(sql: string, params?: unknown[]): Promise<QueryExecResult[]>;
  run(
    sql: string,
    params?: unknown[],
  ): Promise<{ changes: number; lastId?: number }>;
}
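`QueryExecResult` keeps the column-major shape that sql.js returns, so consumers normally pivot it into row objects (exactly what `mapColumnsToValues` above does). A small self-contained sketch of that pivot, included here only as illustration:

```typescript
import type { QueryExecResult } from "./interfaces/database"; // path assumed

// Turn a columns/values result into an array of row objects.
function toRows(result: QueryExecResult | undefined): Record<string, unknown>[] {
  if (!result) return [];
  return result.values.map((row) =>
    Object.fromEntries(result.columns.map((col, i) => [col, row[i]])),
  );
}

// toRows({ columns: ["date", "message"], values: [["2025-01-01T00:00:00Z", "hi"]] })
// => [{ date: "2025-01-01T00:00:00Z", message: "hi" }]
```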
@ -1,7 +1,37 @@
-export * from "./claims";
-export * from "./claims-result";
-export * from "./common";
+export type {
+  // From common.ts
+  GenericCredWrapper,
+  GenericVerifiableCredential,
+  KeyMeta,
+  // Exclude types that are also exported from other files
+  // GiveVerifiableCredential,
+  // OfferVerifiableCredential,
+  // RegisterVerifiableCredential,
+  // PlanSummaryRecord,
+  // UserInfo,
+} from "./common";
+
+export type {
+  // From claims.ts
+  GiveVerifiableCredential,
+  OfferVerifiableCredential,
+  RegisterVerifiableCredential,
+} from "./claims";
+
+export type {
+  // From claims-result.ts
+  CreateAndSubmitClaimResult,
+} from "./claims-result";
+
+export type {
+  // From records.ts
+  PlanSummaryRecord,
+} from "./records";
+
+export type {
+  // From user.ts
+  UserInfo,
+} from "./user";
+
 export * from "./limits";
-export * from "./records";
-export * from "./user";
 export * from "./deepLinks";
@ -1,4 +1,16 @@
 import { initializeApp } from "./main.common";
+import { logger } from "./utils/logger";
+
+const platform = process.env.VITE_PLATFORM;
+const pwa_enabled = process.env.VITE_PWA_ENABLED === "true";
+
+logger.info("[Electron] Initializing app");
+logger.info("[Electron] Platform:", { platform });
+logger.info("[Electron] PWA enabled:", { pwa_enabled });
+
+if (pwa_enabled) {
+  logger.warn("[Electron] PWA is enabled, but not supported in electron");
+}
+
 const app = initializeApp();
 app.mount("#app");
@ -1,215 +0,0 @@
import { createPinia } from "pinia";
import { App as VueApp, ComponentPublicInstance, createApp } from "vue";
import App from "./App.vue";
import "./registerServiceWorker";
import router from "./router";
import axios from "axios";
import VueAxios from "vue-axios";
import Notifications from "notiwind";
import "./assets/styles/tailwind.css";

import { library } from "@fortawesome/fontawesome-svg-core";
import {
  faArrowDown, faArrowLeft, faArrowRight, faArrowRotateBackward, faArrowUpRightFromSquare, faArrowUp,
  faBan, faBitcoinSign, faBurst, faCalendar, faCamera, faCameraRotate,
  faCaretDown, faChair, faCheck, faChevronDown, faChevronLeft, faChevronRight,
  faChevronUp, faCircle, faCircleCheck, faCircleInfo, faCircleQuestion, faCircleUser,
  faClock, faCoins, faComment, faCopy, faDollar, faEllipsis,
  faEllipsisVertical, faEnvelopeOpenText, faEraser, faEye, faEyeSlash, faFileContract,
  faFileLines, faFilter, faFloppyDisk, faFolderOpen, faForward, faGift,
  faGlobe, faHammer, faHand, faHandHoldingDollar, faHandHoldingHeart, faHouseChimney,
  faImage, faImagePortrait, faLeftRight, faLightbulb, faLink, faLocationDot,
  faLongArrowAltLeft, faLongArrowAltRight, faMagnifyingGlass, faMessage, faMinus, faPen,
  faPersonCircleCheck, faPersonCircleQuestion, faPlus, faQuestion, faQrcode, faRightFromBracket,
  faRotate, faShareNodes, faSpinner, faSquare, faSquareCaretDown, faSquareCaretUp,
  faSquarePlus, faTrashCan, faTriangleExclamation, faUser, faUsers, faXmark,
} from "@fortawesome/free-solid-svg-icons";

library.add(
  faArrowDown, faArrowLeft, faArrowRight, faArrowRotateBackward, faArrowUpRightFromSquare, faArrowUp,
  faBan, faBitcoinSign, faBurst, faCalendar, faCamera, faCameraRotate,
  faCaretDown, faChair, faCheck, faChevronDown, faChevronLeft, faChevronRight,
  faChevronUp, faCircle, faCircleCheck, faCircleInfo, faCircleQuestion, faCircleUser,
  faClock, faCoins, faComment, faCopy, faDollar, faEllipsis,
  faEllipsisVertical, faEnvelopeOpenText, faEraser, faEye, faEyeSlash, faFileContract,
  faFileLines, faFilter, faFloppyDisk, faFolderOpen, faForward, faGift,
  faGlobe, faHammer, faHand, faHandHoldingDollar, faHandHoldingHeart, faHouseChimney,
  faImage, faImagePortrait, faLeftRight, faLightbulb, faLink, faLocationDot,
  faLongArrowAltLeft, faLongArrowAltRight, faMagnifyingGlass, faMessage, faMinus, faPen,
  faPersonCircleCheck, faPersonCircleQuestion, faPlus, faQrcode, faQuestion, faRotate,
  faRightFromBracket, faShareNodes, faSpinner, faSquare, faSquareCaretDown, faSquareCaretUp,
  faSquarePlus, faTrashCan, faTriangleExclamation, faUser, faUsers, faXmark,
);

import { FontAwesomeIcon } from "@fortawesome/vue-fontawesome";
import Camera from "simple-vue-camera";
import { logger } from "./utils/logger";

// Can trigger this with a 'throw' inside some top-level function, eg. on the HomeView
function setupGlobalErrorHandler(app: VueApp) {
  // @ts-expect-error 'cause we cannot see why config is not defined
  app.config.errorHandler = (
    err: Error,
    instance: ComponentPublicInstance | null,
    info: string,
  ) => {
    logger.error(
      "Ouch! Global Error Handler.",
      "Error:",
      err,
      "- Error toString:",
      err.toString(),
      "- Info:",
      info,
      "- Instance:",
      instance,
    );
    // Want to show a nice notiwind notification but can't figure out how.
    alert(
      (err.message || "Something bad happened") +
        " - Try reloading or restarting the app.",
    );
  };
}

const app = createApp(App)
  .component("fa", FontAwesomeIcon)
  .component("camera", Camera)
  .use(createPinia())
  .use(VueAxios, axios)
  .use(router)
  .use(Notifications);

setupGlobalErrorHandler(app);

app.mount("#app");
@ -1,5 +1,37 @@
+import { initBackend } from "absurd-sql/dist/indexeddb-main-thread";
 import { initializeApp } from "./main.common";
-import "./registerServiceWorker"; // Web PWA support
+import { logger } from "./utils/logger";
+
+const platform = process.env.VITE_PLATFORM;
+const pwa_enabled = process.env.VITE_PWA_ENABLED === "true";
+
+logger.info("[Web] PWA enabled", { pwa_enabled });
+logger.info("[Web] Platform", { platform });
+
+// Only import service worker for web builds
+if (platform !== "electron" && pwa_enabled) {
+  import("./registerServiceWorker"); // Web PWA support
+}
 
 const app = initializeApp();
+
+function sqlInit() {
+  // see https://github.com/jlongster/absurd-sql
+  const worker = new Worker(
+    new URL("./registerSQLWorker.js", import.meta.url),
+    {
+      type: "module",
+    },
+  );
+  // This is only required because Safari doesn't support nested
+  // workers. This installs a handler that will proxy creating web
+  // workers through the main thread
+  initBackend(worker);
+}
+if (platform === "web" || platform === "development") {
+  sqlInit();
+} else {
+  logger.info("[Web] SQL not initialized for platform", { platform });
+}
+
 app.mount("#app");
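Whether absurd-sql gets its fast path depends on `SharedArrayBuffer`, which is only exposed on cross-origin-isolated pages; without it the library drops into its slower fallback mode (the database service later in this diff checks the same condition during initialization). A hypothetical startup check, reusing the `logger` imported above and not part of this changeset:

```typescript
// Illustrative only: report which storage mode absurd-sql will end up using.
const hasSharedArrayBuffer = typeof SharedArrayBuffer !== "undefined";
const isolated = typeof crossOriginIsolated !== "undefined" && crossOriginIsolated;

if (hasSharedArrayBuffer && isolated) {
  logger.info("[Web] SharedArrayBuffer available; absurd-sql can use its fast path");
} else {
  logger.warn("[Web] Running in fallback mode", { hasSharedArrayBuffer, isolated });
}
```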
@ -0,0 +1,6 @@
import databaseService from "./services/AbsurdSqlDatabaseService";

async function run() {
  await databaseService.initialize();
}
run();
@ -0,0 +1,29 @@
import { DatabaseService } from "../interfaces/database";

declare module "@jlongster/sql.js" {
  interface SQL {
    Database: unknown;
    FS: unknown;
    register_for_idb: (fs: unknown) => void;
  }

  function initSqlJs(config: {
    locateFile: (file: string) => string;
  }): Promise<SQL>;
  export default initSqlJs;
}

declare module "absurd-sql" {
  export class SQLiteFS {
    constructor(fs: unknown, backend: unknown);
  }
}

declare module "absurd-sql/dist/indexeddb-backend" {
  export default class IndexedDBBackend {
    constructor();
  }
}

declare const databaseService: DatabaseService;
export default databaseService;
@ -0,0 +1,231 @@
import initSqlJs from "@jlongster/sql.js";
import { SQLiteFS } from "absurd-sql";
import IndexedDBBackend from "absurd-sql/dist/indexeddb-backend";

import { runMigrations } from "../db-sql/migration";
import type { DatabaseService, QueryExecResult } from "../interfaces/database";
import { logger } from "@/utils/logger";

interface QueuedOperation {
  type: "run" | "query";
  sql: string;
  params: unknown[];
  resolve: (value: unknown) => void;
  reject: (reason: unknown) => void;
}

interface AbsurdSqlDatabase {
  exec: (sql: string, params?: unknown[]) => Promise<QueryExecResult[]>;
  run: (
    sql: string,
    params?: unknown[],
  ) => Promise<{ changes: number; lastId?: number }>;
}

class AbsurdSqlDatabaseService implements DatabaseService {
  private static instance: AbsurdSqlDatabaseService | null = null;
  private db: AbsurdSqlDatabase | null;
  private initialized: boolean;
  private initializationPromise: Promise<void> | null = null;
  private operationQueue: Array<QueuedOperation> = [];
  private isProcessingQueue: boolean = false;

  private constructor() {
    this.db = null;
    this.initialized = false;
  }

  static getInstance(): AbsurdSqlDatabaseService {
    if (!AbsurdSqlDatabaseService.instance) {
      AbsurdSqlDatabaseService.instance = new AbsurdSqlDatabaseService();
    }
    return AbsurdSqlDatabaseService.instance;
  }

  async initialize(): Promise<void> {
    // If already initialized, return immediately
    if (this.initialized) {
      return;
    }

    // If initialization is in progress, wait for it
    if (this.initializationPromise) {
      return this.initializationPromise;
    }

    // Start initialization
    this.initializationPromise = this._initialize();
    try {
      await this.initializationPromise;
    } catch (error) {
      logger.error(`AbsurdSqlDatabaseService initialize method failed:`, error);
      this.initializationPromise = null; // Reset on failure
      throw error;
    }
  }

  private async _initialize(): Promise<void> {
    if (this.initialized) {
      return;
    }

    const SQL = await initSqlJs({
      locateFile: (file: string) => {
        return new URL(
          `/node_modules/@jlongster/sql.js/dist/${file}`,
          import.meta.url,
        ).href;
      },
    });

    const sqlFS = new SQLiteFS(SQL.FS, new IndexedDBBackend());
    SQL.register_for_idb(sqlFS);

    SQL.FS.mkdir("/sql");
    SQL.FS.mount(sqlFS, {}, "/sql");

    const path = "/sql/timesafari.absurd-sql";
    if (typeof SharedArrayBuffer === "undefined") {
      const stream = SQL.FS.open(path, "a+");
      await stream.node.contents.readIfFallback();
      SQL.FS.close(stream);
    }

    this.db = new SQL.Database(path, { filename: true });
    if (!this.db) {
      throw new Error(
        "The database initialization failed. We recommend you restart or reinstall.",
      );
    }

    // An error is thrown without this pragma: "File has invalid page size. (the first block of a new file must be written first)"
    await this.db.exec(`PRAGMA journal_mode=MEMORY;`);
    const sqlExec = this.db.run.bind(this.db);
    const sqlQuery = this.db.exec.bind(this.db);

    // Extract the migration names for the absurd-sql format
    const extractMigrationNames: (result: QueryExecResult[]) => Set<string> = (
      result,
    ) => {
      // Even with the "select name" query, the QueryExecResult may be [] (which doesn't make sense to me).
      const names = result?.[0]?.values.map((row) => row[0] as string) || [];
      return new Set(names);
    };

    // Run migrations
    await runMigrations(sqlExec, sqlQuery, extractMigrationNames);

    this.initialized = true;

    // Start processing the queue after initialization
    this.processQueue();
  }

  private async processQueue(): Promise<void> {
    if (this.isProcessingQueue || !this.initialized || !this.db) {
      return;
    }

    this.isProcessingQueue = true;

    while (this.operationQueue.length > 0) {
      const operation = this.operationQueue.shift();
      if (!operation) continue;

      try {
        let result: unknown;
        switch (operation.type) {
          case "run":
            result = await this.db.run(operation.sql, operation.params);
            break;
          case "query":
            result = await this.db.exec(operation.sql, operation.params);
            break;
        }
        operation.resolve(result);
      } catch (error) {
        logger.error(
          "Error while processing SQL queue:",
          error,
          " ... for sql:",
          operation.sql,
          " ... with params:",
          operation.params,
        );
        operation.reject(error);
      }
    }

    this.isProcessingQueue = false;
  }

  private async queueOperation<R>(
    type: QueuedOperation["type"],
    sql: string,
    params: unknown[] = [],
  ): Promise<R> {
    return new Promise<R>((resolve, reject) => {
      const operation: QueuedOperation = {
        type,
        sql,
        params,
        resolve: (value: unknown) => resolve(value as R),
        reject,
      };
      this.operationQueue.push(operation);

      // If we're already initialized, start processing the queue
      if (this.initialized && this.db) {
        this.processQueue();
      }
    });
  }

  private async waitForInitialization(): Promise<void> {
    // If we have an initialization promise, wait for it
    if (this.initializationPromise) {
      await this.initializationPromise;
      return;
    }

    // If not initialized and no promise, start initialization
    if (!this.initialized) {
      await this.initialize();
      return;
    }

    // If initialized but no db, something went wrong
    if (!this.db) {
      logger.error(
        `Database not properly initialized after await waitForInitialization() - initialized flag is true but db is null`,
      );
      throw new Error(
        `The database could not be initialized. We recommend you restart or reinstall.`,
      );
    }
  }

  // Used for inserts, updates, and deletes
  async run(
    sql: string,
    params: unknown[] = [],
  ): Promise<{ changes: number; lastId?: number }> {
    await this.waitForInitialization();
    return this.queueOperation<{ changes: number; lastId?: number }>(
      "run",
      sql,
      params,
    );
  }

  // Note that the resulting array may be empty if there are no results from the query
  async query(sql: string, params: unknown[] = []): Promise<QueryExecResult[]> {
    await this.waitForInitialization();
    return this.queueOperation<QueryExecResult[]>("query", sql, params);
  }
}

// Create a singleton instance
const databaseService = AbsurdSqlDatabaseService.getInstance();

export default databaseService;
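Because `run()` and `query()` both await initialization and then funnel through the operation queue, callers can use the exported singleton directly with parameterized SQL. A usage sketch; the `logs` table matches the one written by `logToDb()` earlier, but the values here are made up:

```typescript
import databaseService from "./services/AbsurdSqlDatabaseService";

async function demo() {
  // Parameterized write; changes/lastId come from the underlying sql.js run().
  const { changes } = await databaseService.run(
    "INSERT INTO logs (date, message) VALUES (?, ?)",
    [new Date().toISOString(), "hello from the worker"],
  );

  // Reads return QueryExecResult[]; the array may be empty when there are no rows.
  const results = await databaseService.query(
    "SELECT date, message FROM logs WHERE date >= ?",
    ["2025-01-01"],
  );
  return { changes, rowCount: results[0]?.values.length ?? 0 };
}
```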
@ -0,0 +1,60 @@
interface Migration {
  name: string;
  sql: string;
}

export class MigrationService {
  private static instance: MigrationService;
  private migrations: Migration[] = [];

  private constructor() {}

  static getInstance(): MigrationService {
    if (!MigrationService.instance) {
      MigrationService.instance = new MigrationService();
    }
    return MigrationService.instance;
  }

  registerMigration(migration: Migration) {
    this.migrations.push(migration);
  }

  /**
   * @param sqlExec - A function that executes a SQL statement and returns some update result
   * @param sqlQuery - A function that executes a SQL query and returns the result in some format
   * @param extractMigrationNames - A function that extracts the names (string array) from a "select name from migrations" query
   */
  async runMigrations<T>(
    // note that this does not take parameters because the Capacitor SQLite 'execute' is different
    sqlExec: (sql: string) => Promise<unknown>,
    sqlQuery: (sql: string) => Promise<T>,
    extractMigrationNames: (result: T) => Set<string>,
  ): Promise<void> {
    // Create migrations table if it doesn't exist
    await sqlExec(`
      CREATE TABLE IF NOT EXISTS migrations (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL UNIQUE,
        executed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
      );
    `);

    // Get list of executed migrations
    const result1: T = await sqlQuery("SELECT name FROM migrations;");
    const executedMigrations = extractMigrationNames(result1);

    // Run pending migrations in order
    for (const migration of this.migrations) {
      if (!executedMigrations.has(migration.name)) {
        await sqlExec(migration.sql);

        await sqlExec(
          `INSERT INTO migrations (name) VALUES ('${migration.name}')`,
        );
      }
    }
  }
}

export default MigrationService.getInstance();
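`runMigrations` is storage-agnostic: the caller supplies `sqlExec`/`sqlQuery` for its engine plus an extractor that turns the query result into the set of already-applied names (the absurd-sql variant of that extractor appears in `_initialize()` above). A hedged sketch of registering and running one migration; the import path and the migration contents are assumptions:

```typescript
import migrationService from "../services/migrationService"; // path assumed
import type { QueryExecResult } from "../interfaces/database";

// Register once at startup; names must be unique and stable across releases.
migrationService.registerMigration({
  name: "001_create_logs", // hypothetical migration name
  sql: "CREATE TABLE IF NOT EXISTS logs (date TEXT, message TEXT);",
});

// Wire it to whatever engine is in use (the absurd-sql wiring is shown in the service above).
async function migrate(
  sqlExec: (sql: string) => Promise<unknown>,
  sqlQuery: (sql: string) => Promise<QueryExecResult[]>,
): Promise<void> {
  await migrationService.runMigrations(sqlExec, sqlQuery, (result) =>
    new Set((result?.[0]?.values ?? []).map((row) => String(row[0]))),
  );
}
```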
@ -0,0 +1,45 @@
declare module 'absurd-sql/dist/indexeddb-backend' {
  export default class IndexedDBBackend {
    constructor(options?: {
      dbName?: string;
      storeName?: string;
      onReady?: () => void;
      onError?: (error: Error) => void;
    });
    init(): Promise<void>;
    exec(sql: string, params?: any[]): Promise<any>;
    close(): Promise<void>;
  }
}

declare module 'absurd-sql/dist/indexeddb-main-thread' {
  export function initBackend(worker: Worker): Promise<void>;

  export default class IndexedDBMainThread {
    constructor(options?: {
      dbName?: string;
      storeName?: string;
      onReady?: () => void;
      onError?: (error: Error) => void;
    });
    init(): Promise<void>;
    exec(sql: string, params?: any[]): Promise<any>;
    close(): Promise<void>;
  }
}

declare module 'absurd-sql' {
  export class SQLiteFS {
    constructor(fs: unknown, backend: IndexedDBBackend);
    init(): Promise<void>;
    close(): Promise<void>;
    exec(sql: string, params?: any[]): Promise<any>;
    prepare(sql: string): Promise<any>;
    run(sql: string, params?: any[]): Promise<any>;
    get(sql: string, params?: any[]): Promise<any>;
    all(sql: string, params?: any[]): Promise<any[]>;
  }

  export * from 'absurd-sql/dist/indexeddb-backend';
  export * from 'absurd-sql/dist/indexeddb-main-thread';
}
@ -0,0 +1,36 @@
import type { QueryExecResult, SqlValue } from "./database";

declare module '@jlongster/sql.js' {
  interface SQL {
    Database: new (path: string, options?: { filename: boolean }) => Database;
    FS: {
      mkdir: (path: string) => void;
      mount: (fs: any, options: any, path: string) => void;
      open: (path: string, flags: string) => any;
      close: (stream: any) => void;
    };
    register_for_idb: (fs: any) => void;
  }

  interface Database {
    exec: (sql: string, params?: unknown[]) => Promise<QueryExecResult[]>;
    run: (sql: string, params?: unknown[]) => Promise<{ changes: number; lastId?: number }>;
    get: (sql: string, params?: unknown[]) => Promise<SqlValue[]>;
    all: (sql: string, params?: unknown[]) => Promise<SqlValue[][]>;
    prepare: (sql: string) => Promise<Statement>;
    close: () => void;
  }

  interface Statement {
    run: (params?: unknown[]) => Promise<{ changes: number; lastId?: number }>;
    get: (params?: unknown[]) => Promise<SqlValue[]>;
    all: (params?: unknown[]) => Promise<SqlValue[][]>;
    finalize: () => void;
  }

  const initSqlJs: (options?: {
    locateFile?: (file: string) => string;
  }) => Promise<SQL>;

  export default initSqlJs;
}
@ -0,0 +1,67 @@
import type { QueryExecResult, SqlValue } from "./database";

declare module '@jlongster/sql.js' {
  interface SQL {
    Database: new (path: string, options?: { filename: boolean }) => Database;
    FS: {
      mkdir: (path: string) => void;
      mount: (fs: any, options: any, path: string) => void;
      open: (path: string, flags: string) => any;
      close: (stream: any) => void;
    };
    register_for_idb: (fs: any) => void;
  }

  interface Database {
    exec: (sql: string, params?: unknown[]) => Promise<QueryExecResult[]>;
    run: (sql: string, params?: unknown[]) => Promise<{ changes: number; lastId?: number }>;
    get: (sql: string, params?: unknown[]) => Promise<SqlValue[]>;
    all: (sql: string, params?: unknown[]) => Promise<SqlValue[][]>;
    prepare: (sql: string) => Promise<Statement>;
    close: () => void;
  }

  interface Statement {
    run: (params?: unknown[]) => Promise<{ changes: number; lastId?: number }>;
    get: (params?: unknown[]) => Promise<SqlValue[]>;
    all: (params?: unknown[]) => Promise<SqlValue[][]>;
    finalize: () => void;
  }

  const initSqlJs: (options?: {
    locateFile?: (file: string) => string;
  }) => Promise<SQL>;

  export default initSqlJs;
}

declare module 'absurd-sql' {
  import type { SQL } from '@jlongster/sql.js';
  export class SQLiteFS {
    constructor(fs: any, backend: any);
  }
}

declare module 'absurd-sql/dist/indexeddb-backend' {
  export default class IndexedDBBackend {
    constructor();
  }
}

declare module 'absurd-sql/dist/indexeddb-main-thread' {
  import type { QueryExecResult } from './database';
  export interface SQLiteOptions {
    filename?: string;
    autoLoad?: boolean;
    debug?: boolean;
  }

  export interface SQLiteDatabase {
    exec: (sql: string, params?: unknown[]) => Promise<QueryExecResult[]>;
    close: () => Promise<void>;
  }

  export function initSqlJs(options?: any): Promise<any>;
  export function createDatabase(options?: SQLiteOptions): Promise<SQLiteDatabase>;
  export function openDatabase(options?: SQLiteOptions): Promise<SQLiteDatabase>;
}
@ -0,0 +1,57 @@
/**
 * Type definitions for @jlongster/sql.js
 * @author Matthew Raymer
 * @description TypeScript declaration file for the SQL.js WASM module with filesystem support
 */

declare module '@jlongster/sql.js' {
  export interface FileSystem {
    mkdir(path: string): void;
    mount(fs: any, opts: any, mountpoint: string): void;
    open(path: string, flags: string): FileStream;
    close(stream: FileStream): void;
  }

  export interface FileStream {
    node: {
      contents: {
        readIfFallback(): Promise<void>;
      };
    };
  }

  export interface Database {
    exec(sql: string, params?: any[]): Promise<QueryExecResult[]>;
    prepare(sql: string): Statement;
    run(sql: string, params?: any[]): Promise<{ changes: number; lastId?: number }>;
    close(): void;
  }

  export interface QueryExecResult {
    columns: string[];
    values: any[][];
  }

  export interface Statement {
    bind(params: any[]): void;
    step(): boolean;
    get(): any[];
    getColumnNames(): string[];
    reset(): void;
    free(): void;
  }

  export interface InitSqlJsStatic {
    (config?: {
      locateFile?: (file: string) => string;
      wasmBinary?: ArrayBuffer;
    }): Promise<{
      Database: new (path?: string | Uint8Array, opts?: { filename?: boolean }) => Database;
      FS: FileSystem;
      register_for_idb: (fs: any) => void;
    }>;
  }

  const initSqlJs: InitSqlJsStatic;
  export default initSqlJs;
}
@ -0,0 +1,2 @@
// Empty module to satisfy Node.js built-in module imports
export default {};
@ -0,0 +1,17 @@
// Minimal crypto module implementation for browser using Web Crypto API
const crypto = {
  ...window.crypto,
  // Add any Node.js crypto methods that might be needed
  randomBytes: (size) => {
    const buffer = new Uint8Array(size);
    window.crypto.getRandomValues(buffer);
    return buffer;
  },
  createHash: () => ({
    update: () => ({
      digest: () => new Uint8Array(32), // Return empty hash
    }),
  }),
};

export default crypto;
@ -0,0 +1,18 @@
// Minimal fs module implementation for browser
const fs = {
  readFileSync: () => {
    throw new Error("fs.readFileSync is not supported in browser");
  },
  writeFileSync: () => {
    throw new Error("fs.writeFileSync is not supported in browser");
  },
  existsSync: () => false,
  mkdirSync: () => {},
  readdirSync: () => [],
  statSync: () => ({
    isDirectory: () => false,
    isFile: () => false,
  }),
};

export default fs;
@ -0,0 +1,13 @@
// Minimal path module implementation for browser
const path = {
  resolve: (...parts) => parts.join("/"),
  join: (...parts) => parts.join("/"),
  dirname: (p) => p.split("/").slice(0, -1).join("/"),
  basename: (p) => p.split("/").pop(),
  extname: (p) => {
    const parts = p.split(".");
    return parts.length > 1 ? "." + parts.pop() : "";
  },
};

export default path;
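These crypto/fs/path files are only stubs; they take effect when the bundler is told to resolve the corresponding Node built-ins to them. A hypothetical Vite-style alias block showing how shims like these are commonly wired; the shim locations and the config itself are assumptions, not this project's actual build configuration:

```typescript
// vite.config.ts (illustrative sketch, not the project's real config)
import { defineConfig } from "vite";
import { fileURLToPath } from "node:url";

export default defineConfig({
  resolve: {
    alias: {
      // Map Node built-ins to the browser stubs above (paths assumed).
      fs: fileURLToPath(new URL("./src/shims/fs.js", import.meta.url)),
      path: fileURLToPath(new URL("./src/shims/path.js", import.meta.url)),
      crypto: fileURLToPath(new URL("./src/shims/crypto.js", import.meta.url)),
    },
  },
});
```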
Some files were not shown because too many files changed in this diff