
chore: update migration documents and move to new home

sql-absurd-sql
Matt Raymer committed 2 weeks ago
commit 75f6e99200
Changed files (lines changed):
1. .cursor/rules/absurd-sql.mdc (153)
2. doc/dexie-to-sqlite-mapping.md (124)
3. doc/migration-to-wa-sqlite.md (95)
4. doc/secure-storage-implementation.md (0)
5. doc/storage-implementation-checklist.md (29)

.cursor/rules/absurd-sql.mdc (new file, 153 lines)

@@ -0,0 +1,153 @@
---
description:
globs:
alwaysApply: true
---
# Absurd SQL - Cursor Development Guide
## Project Overview
Absurd SQL is a backend implementation for sql.js that enables persistent SQLite databases in the browser by using IndexedDB as a block storage system. This guide provides rules and best practices for developing with this project in Cursor.
## Project Structure
```
absurd-sql/
├── src/ # Source code
├── dist/ # Built files
├── package.json # Dependencies and scripts
├── rollup.config.js # Build configuration
└── jest.config.js # Test configuration
```
## Development Rules
### 1. Worker Thread Requirements
- All SQL operations MUST be performed in a worker thread
- Main thread should only handle worker initialization and communication
- Never block the main thread with database operations
### 2. Code Organization
- Keep worker code in separate files (e.g., `*.worker.js`)
- Use ES modules for imports/exports
- Follow the project's existing module structure
### 3. Required Headers
When developing locally or deploying, ensure these headers are set:
```
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp
```
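If the app is served with Vite during development (the storage checklist's `vite.config.ts` item suggests it is), these headers can be set on the dev server; the config below is a minimal sketch, and the production web server needs the same headers configured in its own way:
```typescript
// vite.config.ts (sketch; adapt to your build setup)
import { defineConfig } from 'vite';

const coopCoepHeaders = {
  'Cross-Origin-Opener-Policy': 'same-origin',
  'Cross-Origin-Embedder-Policy': 'require-corp',
};

export default defineConfig({
  server: { headers: coopCoepHeaders },  // dev server
  preview: { headers: coopCoepHeaders }, // vite preview
});
```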
### 4. Browser Compatibility
- Primary target: Modern browsers with SharedArrayBuffer support
- Fallback mode: Safari (with limitations)
- Always test in both modes
### 5. Database Configuration
Recommended database settings:
```sql
PRAGMA journal_mode=MEMORY;
PRAGMA page_size=8192; -- Optional, but recommended
```
### 6. Development Workflow
1. Install dependencies:
```bash
yarn add @jlongster/sql.js absurd-sql
```
2. Development commands:
- `yarn build` - Build the project
- `yarn jest` - Run tests
- `yarn serve` - Start development server
### 7. Testing Guidelines
- Write tests for both SharedArrayBuffer and fallback modes
- Use Jest for testing
- Include performance benchmarks for critical operations
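A sketch of what a fallback-mode test could look like; the `requiresFallback` helper is illustrative, not part of this project:
```typescript
// __tests__/fallback-detection.spec.ts (illustrative sketch)
// Mirrors the check used to decide whether the slower non-SharedArrayBuffer
// path (e.g. Safari) must be taken before opening the database.
function requiresFallback(): boolean {
  return typeof SharedArrayBuffer === 'undefined';
}

describe('SharedArrayBuffer fallback detection', () => {
  const original = (globalThis as any).SharedArrayBuffer;

  afterEach(() => {
    (globalThis as any).SharedArrayBuffer = original;
  });

  it('uses the fast path when SharedArrayBuffer exists', () => {
    (globalThis as any).SharedArrayBuffer = class {};
    expect(requiresFallback()).toBe(false);
  });

  it('uses the fallback path when SharedArrayBuffer is missing', () => {
    delete (globalThis as any).SharedArrayBuffer;
    expect(requiresFallback()).toBe(true);
  });
});
```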
### 8. Performance Considerations
- Use bulk operations when possible
- Monitor read/write performance
- Consider using transactions for multiple operations
- Avoid unnecessary database connections
### 9. Error Handling
- Implement proper error handling for:
- Worker initialization failures
- Database connection issues
- Concurrent access conflicts (in fallback mode)
- Storage quota exceeded scenarios
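Worker initialization failures can be surfaced by listening for the worker's `error` event; for the quota case, a pre-flight check is a reasonable guard. A minimal sketch (the threshold and error message are assumptions, not project conventions):
```typescript
// Illustrative pre-flight storage check before opening the database.
const MIN_FREE_BYTES = 10 * 1024 * 1024; // assumed headroom, tune as needed

async function assertStorageHeadroom(): Promise<void> {
  if (!navigator.storage?.estimate) {
    return; // StorageManager API unavailable; skip the check
  }
  const { usage = 0, quota = 0 } = await navigator.storage.estimate();
  if (quota - usage < MIN_FREE_BYTES) {
    throw new Error(`Storage nearly full: ${usage} of ${quota} bytes used`);
  }
}
```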
### 10. Security Best Practices
- Never expose database operations directly to the client
- Validate all SQL queries
- Implement proper access controls
- Handle sensitive data appropriately
### 11. Code Style
- Follow ESLint configuration
- Use async/await for asynchronous operations
- Document complex database operations
- Include comments for non-obvious optimizations
### 12. Debugging
- Use `jest-debug` for debugging tests
- Monitor IndexedDB usage in browser dev tools
- Check worker communication in console
- Use performance monitoring tools
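For read/write monitoring, a small timing wrapper around database calls is often enough; this helper is illustrative only:
```typescript
// Illustrative helper: time a database operation and log slow calls.
async function timed<T>(label: string, op: () => Promise<T> | T): Promise<T> {
  const start = performance.now();
  try {
    return await op();
  } finally {
    const ms = performance.now() - start;
    if (ms > 16) console.warn(`[db] ${label} took ${ms.toFixed(1)}ms`);
  }
}

// Usage inside the worker:
// const rows = await timed('load accounts', () => db.exec('SELECT * FROM accounts'));
```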
## Common Patterns
### Worker Initialization
```javascript
// Main thread
import { initBackend } from 'absurd-sql/dist/indexeddb-main-thread';
function init() {
  let worker = new Worker(new URL('./index.worker.js', import.meta.url));
  initBackend(worker);
}
```
### Database Setup
```javascript
// Worker thread
import initSqlJs from '@jlongster/sql.js';
import { SQLiteFS } from 'absurd-sql';
import IndexedDBBackend from 'absurd-sql/dist/indexeddb-backend';
async function setupDatabase() {
  let SQL = await initSqlJs({ locateFile: file => file });
  let sqlFS = new SQLiteFS(SQL.FS, new IndexedDBBackend());
  SQL.register_for_idb(sqlFS);

  SQL.FS.mkdir('/sql');
  SQL.FS.mount(sqlFS, {}, '/sql');

  return new SQL.Database('/sql/db.sqlite', { filename: true });
}
```
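A sketch of how the worker might use `setupDatabase()`, applying the recommended settings from section 5 before running queries; the query and logging are illustrative:
```typescript
// index.worker.ts (illustrative; assumes setupDatabase() from the pattern above)
async function run() {
  const db = await setupDatabase();

  // Recommended settings (section 5)
  db.exec(`
    PRAGMA journal_mode=MEMORY;
    PRAGMA page_size=8192;
  `);

  // sql.js results come back as [{ columns, values }]
  const result = db.exec('SELECT COUNT(*) AS n FROM sqlite_master');
  console.log('objects in schema:', result[0]?.values[0][0]);
}

run().catch(console.error);
```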
## Troubleshooting
### Common Issues
1. SharedArrayBuffer not available
- Check COOP/COEP headers
- Verify browser support
- Test fallback mode
2. Worker initialization failures
- Check file paths
- Verify module imports
- Check browser console for errors
3. Performance issues
- Monitor IndexedDB usage
- Check for unnecessary operations
- Verify transaction usage
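For the first issue (SharedArrayBuffer not available), a quick runtime check helps distinguish missing headers from missing browser support; a minimal diagnostic sketch:
```typescript
// Illustrative diagnostic: why is SharedArrayBuffer unavailable?
function diagnoseSharedArrayBuffer(): string {
  if (typeof SharedArrayBuffer !== 'undefined') {
    return 'SharedArrayBuffer available: fast path in use';
  }
  // crossOriginIsolated is only true when COOP/COEP headers are set correctly
  if (typeof crossOriginIsolated !== 'undefined' && !crossOriginIsolated) {
    return 'Blocked: check COOP/COEP headers (see Required Headers above)';
  }
  return 'Browser lacks SharedArrayBuffer support: fallback mode will be used';
}

console.log(diagnoseSharedArrayBuffer());
```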
## Resources
- [Project Demo](https://priceless-keller-d097e5.netlify.app/)
- [Example Project](https://github.com/jlongster/absurd-example-project)
- [Blog Post](https://jlongster.com/future-sql-web)
- [SQL.js Documentation](https://github.com/sql-js/sql.js/)

docs/dexie-to-sqlite-mapping.md → doc/dexie-to-sqlite-mapping.md (124 lines changed)

@@ -1,4 +1,4 @@
# Dexie to SQLite Mapping Guide
# Dexie to absurd-sql Mapping Guide
## Schema Mapping
@@ -54,10 +54,11 @@ CREATE INDEX idx_settings_updated_at ON settings(updated_at);
// Dexie
const account = await db.accounts.get(did);
// SQLite
const account = await db.selectOne(`
// absurd-sql
const result = await db.exec(`
SELECT * FROM accounts WHERE did = ?
`, [did]);
const account = result[0]?.values[0];
```
#### Get All Accounts
@@ -65,10 +66,11 @@ const account = await db.selectOne(`
// Dexie
const accounts = await db.accounts.toArray();
// SQLite
const accounts = await db.selectAll(`
// absurd-sql
const result = await db.exec(`
SELECT * FROM accounts ORDER BY created_at DESC
`);
const accounts = result[0]?.values || [];
```
#### Add Account
@@ -81,8 +83,8 @@ await db.accounts.add({
updatedAt: Date.now()
});
// SQLite
await db.execute(`
// absurd-sql
await db.run(`
INSERT INTO accounts (did, public_key_hex, created_at, updated_at)
VALUES (?, ?, ?, ?)
`, [did, publicKeyHex, Date.now(), Date.now()]);
@@ -96,8 +98,8 @@ await db.accounts.update(did, {
updatedAt: Date.now()
});
// SQLite
await db.execute(`
// absurd-sql
await db.run(`
UPDATE accounts
SET public_key_hex = ?, updated_at = ?
WHERE did = ?
@@ -111,10 +113,11 @@ await db.execute(`
// Dexie
const setting = await db.settings.get(key);
// SQLite
const setting = await db.selectOne(`
// absurd-sql
const result = await db.exec(`
SELECT * FROM settings WHERE key = ?
`, [key]);
const setting = result[0]?.values[0];
```
#### Set Setting
@@ -126,8 +129,8 @@ await db.settings.put({
updatedAt: Date.now()
});
// SQLite
await db.execute(`
// absurd-sql
await db.run(`
INSERT INTO settings (key, value, updated_at)
VALUES (?, ?, ?)
ON CONFLICT(key) DO UPDATE SET
@@ -146,12 +149,13 @@ const contacts = await db.contacts
.equals(accountDid)
.toArray();
// SQLite
const contacts = await db.selectAll(`
// absurd-sql
const result = await db.exec(`
SELECT * FROM contacts
WHERE did = ?
ORDER BY created_at DESC
`, [accountDid]);
const contacts = result[0]?.values || [];
```
#### Add Contact
@@ -165,8 +169,8 @@ await db.contacts.add({
updatedAt: Date.now()
});
// SQLite
await db.execute(`
// absurd-sql
await db.run(`
INSERT INTO contacts (id, did, name, created_at, updated_at)
VALUES (?, ?, ?, ?, ?)
`, [generateId(), accountDid, name, Date.now(), Date.now()]);
@@ -182,20 +186,25 @@ await db.transaction('rw', [db.accounts, db.contacts], async () => {
await db.contacts.bulkAdd(contacts);
});
// SQLite
await db.transaction(async (tx) => {
await tx.execute(`
// absurd-sql
await db.exec('BEGIN TRANSACTION;');
try {
await db.run(`
INSERT INTO accounts (did, public_key_hex, created_at, updated_at)
VALUES (?, ?, ?, ?)
`, [account.did, account.publicKeyHex, account.createdAt, account.updatedAt]);
for (const contact of contacts) {
await tx.execute(`
await db.run(`
INSERT INTO contacts (id, did, name, created_at, updated_at)
VALUES (?, ?, ?, ?, ?)
`, [contact.id, contact.did, contact.name, contact.createdAt, contact.updatedAt]);
}
});
await db.exec('COMMIT;');
} catch (error) {
await db.exec('ROLLBACK;');
throw error;
}
```
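Because this begin/commit/rollback pattern repeats throughout the guide, it may be worth wrapping once; the helper below is a suggestion, not existing project code:
```typescript
// Suggested wrapper (illustrative): run `work` inside a transaction,
// committing on success and rolling back on any error.
async function withTransaction<T>(db: any, work: () => Promise<T>): Promise<T> {
  await db.exec('BEGIN TRANSACTION;');
  try {
    const result = await work();
    await db.exec('COMMIT;');
    return result;
  } catch (error) {
    await db.exec('ROLLBACK;');
    throw error;
  }
}

// Usage:
// await withTransaction(db, async () => {
//   await db.run('INSERT INTO accounts (did, public_key_hex, created_at, updated_at) VALUES (?, ?, ?, ?)', [...]);
// });
```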
## Migration Helper Functions
@@ -218,15 +227,14 @@ async function exportDexieData(): Promise<MigrationData> {
}
```
### 2. Data Import (JSON to SQLite)
### 2. Data Import (JSON to absurd-sql)
```typescript
async function importToSQLite(data: MigrationData): Promise<void> {
const db = await getSQLiteConnection();
await db.transaction(async (tx) => {
async function importToAbsurdSql(data: MigrationData): Promise<void> {
await db.exec('BEGIN TRANSACTION;');
try {
// Import accounts
for (const account of data.accounts) {
await tx.execute(`
await db.run(`
INSERT INTO accounts (did, public_key_hex, created_at, updated_at)
VALUES (?, ?, ?, ?)
`, [account.did, account.publicKeyHex, account.createdAt, account.updatedAt]);
@@ -234,7 +242,7 @@ async function importToSQLite(data: MigrationData): Promise<void> {
// Import settings
for (const setting of data.settings) {
await tx.execute(`
await db.run(`
INSERT INTO settings (key, value, updated_at)
VALUES (?, ?, ?)
`, [setting.key, setting.value, setting.updatedAt]);
@@ -242,52 +250,52 @@ async function importToSQLite(data: MigrationData): Promise<void> {
// Import contacts
for (const contact of data.contacts) {
await tx.execute(`
await db.run(`
INSERT INTO contacts (id, did, name, created_at, updated_at)
VALUES (?, ?, ?, ?, ?)
`, [contact.id, contact.did, contact.name, contact.createdAt, contact.updatedAt]);
}
});
await db.exec('COMMIT;');
} catch (error) {
await db.exec('ROLLBACK;');
throw error;
}
}
```
### 3. Verification
```typescript
async function verifyMigration(dexieData: MigrationData): Promise<boolean> {
const db = await getSQLiteConnection();
// Verify account count
const accountCount = await db.selectValue(
'SELECT COUNT(*) FROM accounts'
);
const accountResult = await db.exec('SELECT COUNT(*) as count FROM accounts');
const accountCount = accountResult[0].values[0][0];
if (accountCount !== dexieData.accounts.length) {
return false;
}
// Verify settings count
const settingsCount = await db.selectValue(
'SELECT COUNT(*) FROM settings'
);
const settingsResult = await db.exec('SELECT COUNT(*) as count FROM settings');
const settingsCount = settingsResult[0].values[0][0];
if (settingsCount !== dexieData.settings.length) {
return false;
}
// Verify contacts count
const contactsCount = await db.selectValue(
'SELECT COUNT(*) FROM contacts'
);
const contactsResult = await db.exec('SELECT COUNT(*) as count FROM contacts');
const contactsCount = contactsResult[0].values[0][0];
if (contactsCount !== dexieData.contacts.length) {
return false;
}
// Verify data integrity
for (const account of dexieData.accounts) {
const migratedAccount = await db.selectOne(
const result = await db.exec(
'SELECT * FROM accounts WHERE did = ?',
[account.did]
);
const migratedAccount = result[0]?.values[0];
if (!migratedAccount ||
migratedAccount.public_key_hex !== account.publicKeyHex) {
migratedAccount[1] !== account.publicKeyHex) { // public_key_hex is second column
return false;
}
}
@@ -300,18 +308,21 @@ async function verifyMigration(dexieData: MigrationData): Promise<boolean> {
### 1. Indexing
- Dexie automatically creates indexes based on the schema
- SQLite requires explicit index creation
- absurd-sql requires explicit index creation
- Added indexes for frequently queried fields
- Use `PRAGMA journal_mode=MEMORY;` for better performance
### 2. Batch Operations
- Dexie has built-in bulk operations
- SQLite uses transactions for batch operations
- absurd-sql uses transactions for batch operations
- Consider chunking large datasets
- Use prepared statements for repeated queries
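A sketch of chunked bulk inserts using a prepared statement via sql.js's synchronous `prepare`/`run`/`free` API in the worker; adapt if the project exposes the database through an async wrapper instead:
```typescript
// Illustrative bulk insert with a prepared statement inside one transaction.
function bulkInsertContacts(
  db: any,
  contacts: Array<{ id: string; did: string; name: string }>
): void {
  const now = Date.now();
  db.exec('BEGIN TRANSACTION;');
  const stmt = db.prepare(
    'INSERT INTO contacts (id, did, name, created_at, updated_at) VALUES (?, ?, ?, ?, ?)'
  );
  try {
    for (const c of contacts) {
      stmt.run([c.id, c.did, c.name, now, now]); // bind, step, reset
    }
    db.exec('COMMIT;');
  } catch (error) {
    db.exec('ROLLBACK;');
    throw error;
  } finally {
    stmt.free();
  }
}
```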
### 3. Query Optimization
- Dexie uses IndexedDB's native indexing
- SQLite requires explicit query optimization
- absurd-sql requires explicit query optimization
- Use prepared statements for repeated queries
- Consider using `PRAGMA synchronous=NORMAL;` for better performance
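To confirm that an index (e.g. `idx_contacts_did`) is actually being used, `EXPLAIN QUERY PLAN` can be run through the same interface; the DID value below is just an example:
```typescript
// Illustrative: inspect the query plan for an indexed lookup.
const plan = await db.exec(
  'EXPLAIN QUERY PLAN SELECT * FROM contacts WHERE did = ?',
  ['did:example:123']
);
// The last column of each row describes the step, e.g.
// "SEARCH contacts USING INDEX idx_contacts_did (did=?)" vs. "SCAN contacts".
console.log(plan[0]?.values.map((row: any[]) => row[row.length - 1]));
```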
## Error Handling
@@ -326,14 +337,14 @@ try {
}
}
// SQLite errors
// absurd-sql errors
try {
await db.execute(`
await db.run(`
INSERT INTO accounts (did, public_key_hex, created_at, updated_at)
VALUES (?, ?, ?, ?)
`, [account.did, account.publicKeyHex, account.createdAt, account.updatedAt]);
} catch (error) {
if (error.code === 'SQLITE_CONSTRAINT') {
if (error.message.includes('UNIQUE constraint failed')) {
// Handle duplicate key
}
}
@@ -350,15 +361,14 @@ try {
// Dexie automatically rolls back
}
// SQLite transaction
const db = await getSQLiteConnection();
// absurd-sql transaction
try {
await db.transaction(async (tx) => {
// Operations
});
await db.exec('BEGIN TRANSACTION;');
// Operations
await db.exec('COMMIT;');
} catch (error) {
// SQLite automatically rolls back
await db.execute('ROLLBACK');
await db.exec('ROLLBACK;');
throw error;
}
```

docs/migration-to-wa-sqlite.md → doc/migration-to-wa-sqlite.md (95 lines changed)

@@ -1,8 +1,8 @@
# Migration Guide: Dexie to wa-sqlite
# Migration Guide: Dexie to absurd-sql
## Overview
This document outlines the migration process from Dexie.js to wa-sqlite for the TimeSafari app's storage implementation. The migration aims to provide a consistent SQLite-based storage solution across all platforms while maintaining data integrity and ensuring a smooth transition for users.
This document outlines the migration process from Dexie.js to absurd-sql for the TimeSafari app's storage implementation. The migration aims to provide a consistent SQLite-based storage solution across all platforms while maintaining data integrity and ensuring a smooth transition for users.
## Migration Goals
@@ -43,12 +43,20 @@ This document outlines the migration process from Dexie.js to wa-sqlite for the
}
```
2. **Storage Requirements**
2. **Dependencies**
```json
{
"@jlongster/sql.js": "^1.8.0",
"absurd-sql": "^1.8.0"
}
```
3. **Storage Requirements**
- Sufficient IndexedDB quota
- Available disk space for SQLite
- Backup storage space
3. **Platform Support**
4. **Platform Support**
- Web: Modern browser with IndexedDB support
- iOS: iOS 13+ with SQLite support
- Android: Android 5+ with SQLite support
@@ -60,9 +68,15 @@ This document outlines the migration process from Dexie.js to wa-sqlite for the
```typescript
// src/services/storage/migration/MigrationService.ts
import initSqlJs from '@jlongster/sql.js';
import { SQLiteFS } from 'absurd-sql';
import IndexedDBBackend from 'absurd-sql/dist/indexeddb-backend';
export class MigrationService {
private static instance: MigrationService;
private backup: MigrationBackup | null = null;
private sql: any = null;
private db: any = null;
async prepare(): Promise<void> {
try {
@@ -75,8 +89,8 @@ export class MigrationService {
// 3. Verify backup integrity
await this.verifyBackup();
// 4. Initialize wa-sqlite
await this.initializeWaSqlite();
// 4. Initialize absurd-sql
await this.initializeAbsurdSql();
} catch (error) {
throw new StorageError(
'Migration preparation failed',
@@ -86,6 +100,42 @@ export class MigrationService {
}
}
private async initializeAbsurdSql(): Promise<void> {
// Initialize SQL.js
this.sql = await initSqlJs({
locateFile: (file: string) => {
return new URL(`/node_modules/@jlongster/sql.js/dist/${file}`, import.meta.url).href;
}
});
// Setup SQLiteFS with IndexedDB backend
const sqlFS = new SQLiteFS(this.sql.FS, new IndexedDBBackend());
this.sql.register_for_idb(sqlFS);
// Create and mount filesystem
this.sql.FS.mkdir('/sql');
this.sql.FS.mount(sqlFS, {}, '/sql');
// Open database
const path = '/sql/db.sqlite';
if (typeof SharedArrayBuffer === 'undefined') {
let stream = this.sql.FS.open(path, 'a+');
await stream.node.contents.readIfFallback();
this.sql.FS.close(stream);
}
this.db = new this.sql.Database(path, { filename: true });
if (!this.db) {
throw new StorageError(
'Database initialization failed',
StorageErrorCodes.INITIALIZATION_FAILED
);
}
// Configure database
await this.db.exec(`PRAGMA journal_mode=MEMORY;`);
}
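// Sketch (not part of the diff above): small accessors assumed by the unit test
// at the end of this document; names and return types are illustrative.
isInitialized(): boolean {
return this.db !== null;
}
getDatabase(): any {
if (!this.db) {
throw new StorageError(
'Database not initialized',
StorageErrorCodes.INITIALIZATION_FAILED
);
}
return this.db;
}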
private async checkPrerequisites(): Promise<void> {
// Check IndexedDB availability
if (!window.indexedDB) {
@@ -160,12 +210,11 @@ export class DataMigration {
}
private async migrateAccounts(accounts: Account[]): Promise<void> {
const db = await this.getWaSqliteConnection();
// Use transaction for atomicity
await db.transaction(async (tx) => {
await this.db.exec('BEGIN TRANSACTION;');
try {
for (const account of accounts) {
await tx.execute(`
await this.db.run(`
INSERT INTO accounts (did, public_key_hex, created_at, updated_at)
VALUES (?, ?, ?, ?)
`, [
@@ -175,16 +224,18 @@ export class DataMigration {
account.updatedAt
]);
}
});
await this.db.exec('COMMIT;');
} catch (error) {
await this.db.exec('ROLLBACK;');
throw error;
}
}
private async verifyMigration(backup: MigrationBackup): Promise<void> {
const db = await this.getWaSqliteConnection();
// Verify account count
const accountCount = await db.selectValue(
'SELECT COUNT(*) FROM accounts'
);
const result = await this.db.exec('SELECT COUNT(*) as count FROM accounts');
const accountCount = result[0].values[0][0];
if (accountCount !== backup.accounts.length) {
throw new StorageError(
'Account count mismatch',
@@ -214,8 +265,8 @@ export class RollbackService {
// 3. Verify restoration
await this.verifyRestoration(backup);
// 4. Clean up wa-sqlite
await this.cleanupWaSqlite();
// 4. Clean up absurd-sql
await this.cleanupAbsurdSql();
} catch (error) {
throw new StorageError(
'Rollback failed',
@@ -371,6 +422,14 @@ button:hover {
```typescript
// src/services/storage/migration/__tests__/MigrationService.spec.ts
describe('MigrationService', () => {
it('should initialize absurd-sql correctly', async () => {
const service = MigrationService.getInstance();
await service.initializeAbsurdSql();
expect(service.isInitialized()).toBe(true);
expect(service.getDatabase()).toBeDefined();
});
it('should create valid backup', async () => {
const service = MigrationService.getInstance();
const backup = await service.createBackup();

docs/secure-storage-implementation.md → doc/secure-storage-implementation.md (renamed only, 0 lines changed)

docs/storage-implementation-checklist.md → doc/storage-implementation-checklist.md (29 lines changed)

@@ -10,9 +10,9 @@
- [ ] Add migration support methods
- [ ] Implement platform-specific services
- [ ] `WebSQLiteService` (wa-sqlite)
- [ ] `WebSQLiteService` (absurd-sql)
- [ ] Database initialization
- [ ] VFS setup
- [ ] VFS setup with IndexedDB backend
- [ ] Connection management
- [ ] Query builder
- [ ] `NativeSQLiteService` (iOS/Android)
@@ -49,17 +49,24 @@
## Platform-Specific Implementation
### Web Platform
- [ ] Setup wa-sqlite
- [ ] Setup absurd-sql
- [ ] Install dependencies
```json
{
"@wa-sqlite/sql.js": "^0.8.12",
"@wa-sqlite/sql.js-httpvfs": "^0.8.12"
"@jlongster/sql.js": "^1.8.0",
"absurd-sql": "^1.8.0"
}
```
- [ ] Configure VFS
- [ ] Configure VFS with IndexedDB backend
- [ ] Setup worker threads
- [ ] Implement connection pooling
- [ ] Configure database pragmas
```sql
PRAGMA journal_mode=MEMORY;
PRAGMA synchronous=NORMAL;
PRAGMA foreign_keys=ON;
PRAGMA busy_timeout=5000;
```
- [ ] Update build configuration
- [ ] Modify `vite.config.ts`
@@ -71,6 +78,7 @@
- [ ] Create fallback service
- [ ] Add data synchronization
- [ ] Handle quota exceeded
- [ ] Implement atomic operations
### iOS Platform
- [ ] Setup SQLCipher
@@ -140,6 +148,11 @@
updated_at INTEGER NOT NULL,
FOREIGN KEY (did) REFERENCES accounts(did)
);
-- Indexes for performance
CREATE INDEX idx_accounts_created_at ON accounts(created_at);
CREATE INDEX idx_contacts_did ON contacts(did);
CREATE INDEX idx_settings_updated_at ON settings(updated_at);
```
- [ ] Create indexes
@@ -286,12 +299,16 @@
- [ ] Migration time < 5s per 1000 records
- [ ] Storage overhead < 10%
- [ ] Memory usage < 50MB
- [ ] Atomic operations complete successfully
- [ ] Transaction performance meets requirements
### 2. Reliability
- [ ] 99.9% uptime
- [ ] Zero data loss
- [ ] Automatic recovery
- [ ] Backup verification
- [ ] Transaction atomicity
- [ ] Data consistency
### 3. Security
- [ ] AES-256 encryption