ORM Adapters
Production-ready persistence using Drizzle, Prisma, or TypeORM with any supported database.
noddde provides three ORM adapter packages that implement all persistence interfaces and UnitOfWork using your ORM's native transaction mechanism. Pick the ORM you already use -- each adapter works with whatever database your ORM supports (PostgreSQL, MySQL, SQLite, etc.).
Available Adapters
| Package | ORM | Schema |
|---|---|---|
| @noddde/drizzle | Drizzle ORM | TypeScript table builders (per dialect) |
| @noddde/prisma | Prisma | .prisma schema file |
| @noddde/typeorm | TypeORM | TypeScript entity decorators |
Dialect Support Matrix
Persistence (event store, state store, saga store, snapshot store) and concurrency control work with every dialect supported by your ORM. The only dialect restriction applies to pessimistic locking, which requires database-level advisory locks:
| Dialect | Persistence | No concurrency / Optimistic | Pessimistic locking |
|---|---|---|---|
| PostgreSQL | ✅ All ORMs | ✅ All ORMs | ✅ All ORMs |
| MySQL | ✅ All ORMs | ✅ All ORMs | ✅ All ORMs |
| MariaDB | ✅ All ORMs | ✅ All ORMs | ✅ All ORMs |
| SQLite | ✅ All ORMs | ✅ All ORMs | ❌ No advisory locks |
| MSSQL | ✅ TypeORM | ✅ TypeORM | ✅ TypeORM only |
For SQLite or any dialect without advisory lock support, use InMemoryAggregateLocker from @noddde/engine for single-process deployments, or choose the optimistic strategy instead.
Each package exports a single factory function that returns all persistence implementations wired to share a transaction context:

```ts
import { createDrizzlePersistence } from "@noddde/drizzle";
import {
  events,
  aggregateStates,
  sagaStates,
  snapshots,
} from "@noddde/drizzle/pg";

const infra = createDrizzlePersistence(db, {
  events,
  aggregateStates,
  sagaStates,
  snapshots,
});
// or: createPrismaPersistence(prisma)
// or: createTypeORMPersistence(dataSource)

// Returns:
// infra.eventSourcedPersistence -- EventSourcedAggregatePersistence & PartialEventLoad
// infra.stateStoredPersistence  -- StateStoredAggregatePersistence
// infra.sagaPersistence         -- SagaPersistence
// infra.snapshotStore           -- SnapshotStore (Drizzle: only when the snapshots schema is provided)
// infra.unitOfWorkFactory       -- UnitOfWorkFactory (real DB transactions)
```

Drizzle
Installation
```sh
yarn add @noddde/drizzle drizzle-orm
# Plus your database driver, e.g.:
yarn add better-sqlite3   # or: pg, mysql2
```

Schema Setup
The package exports convenience table definitions for each Drizzle dialect. Import from the sub-path matching your database:
```ts
// SQLite
import {
  events,
  aggregateStates,
  sagaStates,
  snapshots,
} from "@noddde/drizzle/sqlite";

// PostgreSQL (uses serial PK, jsonb for payloads)
import {
  events,
  aggregateStates,
  sagaStates,
  snapshots,
} from "@noddde/drizzle/pg";

// MySQL (uses int auto-increment, varchar(255), json)
import {
  events,
  aggregateStates,
  sagaStates,
  snapshots,
} from "@noddde/drizzle/mysql";
```

You can also define your own tables matching the expected column structure -- the adapter does not require using the provided schemas.
Configuration
```ts
import Database from "better-sqlite3";
import { drizzle } from "drizzle-orm/better-sqlite3";
import { createDrizzlePersistence } from "@noddde/drizzle";
import {
  events,
  aggregateStates,
  sagaStates,
  snapshots,
} from "@noddde/drizzle/sqlite";
import { everyNEvents } from "@noddde/core";

const db = drizzle(new Database("app.db"));

const infra = createDrizzlePersistence(db, {
  events,
  aggregateStates,
  sagaStates,
  snapshots,
});

const domain = await configureDomain({
  writeModel: { aggregates: { BankAccount } },
  readModel: { projections: { BankAccount: BankAccountProjection } },
  infrastructure: {
    aggregatePersistence: () => infra.eventSourcedPersistence,
    sagaPersistence: () => infra.sagaPersistence,
    snapshotStore: () => infra.snapshotStore,
    snapshotStrategy: everyNEvents(100),
    unitOfWorkFactory: () => infra.unitOfWorkFactory,
  },
});
```

How Drizzle Transactions Work
The adapter detects the dialect automatically. For SQLite (sync drivers like better-sqlite3), it uses explicit BEGIN/COMMIT/ROLLBACK SQL statements. For PostgreSQL and MySQL, it uses the native db.transaction() callback, which ensures connection affinity in pooled environments.
The persistence classes and UnitOfWork share a transaction store. When a transaction is active, all queries automatically route through it.
Prisma
Installation
```sh
yarn add @noddde/prisma @prisma/client
yarn add -D prisma
```

Schema Setup
Copy the three model definitions from the package's Prisma schema into your own schema.prisma:
```prisma
model NodddeEvent {
  id             Int     @id @default(autoincrement())
  aggregateName  String  @map("aggregate_name")
  aggregateId    String  @map("aggregate_id")
  sequenceNumber Int     @map("sequence_number")
  eventName      String  @map("event_name")
  payload        String
  metadata       String?

  @@unique([aggregateName, aggregateId, sequenceNumber])
  @@map("noddde_events")
}

model NodddeAggregateState {
  aggregateName String @map("aggregate_name")
  aggregateId   String @map("aggregate_id")
  state         String
  version       Int    @default(0)

  @@id([aggregateName, aggregateId])
  @@map("noddde_aggregate_states")
}

model NodddeSagaState {
  sagaName String @map("saga_name")
  sagaId   String @map("saga_id")
  state    String

  @@id([sagaName, sagaId])
  @@map("noddde_saga_states")
}

model NodddeSnapshot {
  aggregateName String @map("aggregate_name")
  aggregateId   String @map("aggregate_id")
  state         String
  version       Int    @default(0)

  @@id([aggregateName, aggregateId])
  @@map("noddde_snapshots")
}
```

Then run prisma generate and your preferred migration command.
Configuration
```ts
import { PrismaClient } from "@prisma/client";
import { createPrismaPersistence } from "@noddde/prisma";
import { everyNEvents } from "@noddde/core";

const prisma = new PrismaClient();
const infra = createPrismaPersistence(prisma);

const domain = await configureDomain({
  writeModel: { aggregates: { BankAccount } },
  readModel: { projections: { BankAccount: BankAccountProjection } },
  infrastructure: {
    aggregatePersistence: () => infra.eventSourcedPersistence,
    sagaPersistence: () => infra.sagaPersistence,
    snapshotStore: () => infra.snapshotStore,
    snapshotStrategy: everyNEvents(100),
    unitOfWorkFactory: () => infra.unitOfWorkFactory,
  },
});
```

How Prisma Transactions Work
The Prisma adapter uses interactive transactions via prisma.$transaction(async (tx) => { ... }). When a unit of work commits, it sets txStore.current to the transactional client tx, and all persistence classes route their queries through it. Prisma automatically rolls back the transaction if any operation throws.
TypeORM
Installation
```sh
yarn add @noddde/typeorm typeorm reflect-metadata
# Plus your database driver, e.g.:
yarn add pg
```

Schema Setup
The package exports TypeORM entity classes decorated with @Entity, @Column, etc. Register them in your DataSource configuration:
```ts
import { DataSource } from "typeorm";
import {
  NodddeEventEntity,
  NodddeAggregateStateEntity,
  NodddeSagaStateEntity,
  NodddeSnapshotEntity,
} from "@noddde/typeorm";

const dataSource = new DataSource({
  type: "postgres",
  url: process.env.DATABASE_URL,
  entities: [
    NodddeEventEntity,
    NodddeAggregateStateEntity,
    NodddeSagaStateEntity,
    NodddeSnapshotEntity,
  ],
  synchronize: true, // use migrations in production
});

await dataSource.initialize();
```

For production, use TypeORM migrations instead of synchronize: true.
Configuration
```ts
import { createTypeORMPersistence } from "@noddde/typeorm";
import { everyNEvents } from "@noddde/core";

const infra = createTypeORMPersistence(dataSource);

const domain = await configureDomain({
  writeModel: { aggregates: { BankAccount } },
  readModel: { projections: { BankAccount: BankAccountProjection } },
  infrastructure: {
    aggregatePersistence: () => infra.eventSourcedPersistence,
    sagaPersistence: () => infra.sagaPersistence,
    snapshotStore: () => infra.snapshotStore,
    snapshotStrategy: everyNEvents(100),
    unitOfWorkFactory: () => infra.unitOfWorkFactory,
  },
});
```

How TypeORM Transactions Work
The TypeORM adapter uses dataSource.manager.transaction() to wrap operations. When a unit of work commits, it sets txStore.current to the transactional EntityManager, and all persistence classes use it for their repository operations.
How Transactions Work
All three adapters follow the same pattern for integrating with the Unit of Work:
- The adapter opens a database transaction
- It sets txStore.current to the transaction-scoped database client
- All enlisted persistence operations execute within that transaction, because they read txStore.current for their queries
- On success, the transaction commits and deferred events are returned for publishing
- On failure, the transaction rolls back and no events are published
This shared transaction store pattern means persistence classes do not need to know whether they are operating inside a unit of work or not -- they always read from txStore.current, which is null outside a transaction and points to the active transaction client inside one.
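The routing described above can be simulated end to end without a database. The sketch below is illustrative only -- TxStore, FakeDb, and EventStore are invented names for this example; the real adapters bind the store to their ORM's transactional client:

```typescript
// Illustrative simulation of the shared transaction store pattern.
// TxStore, FakeDb, and EventStore are invented names for this sketch.
class TxStore<T> {
  current: T | null = null;
}

// A fake "database client" that just records inserted rows.
class FakeDb {
  rows: string[] = [];
  insert(row: string) { this.rows.push(row); }
}

class EventStore {
  constructor(private db: FakeDb, private txStore: TxStore<FakeDb>) {}
  // Route through the active transaction client when one is open.
  private client(): FakeDb { return this.txStore.current ?? this.db; }
  append(event: string) { this.client().insert(event); }
}

// A minimal unit of work: open a "transaction", set the store, run, clear.
async function withUnitOfWork(
  db: FakeDb, txStore: TxStore<FakeDb>, work: () => Promise<void>,
): Promise<void> {
  const tx = new FakeDb(); // stands in for BEGIN / $transaction / manager.transaction
  txStore.current = tx;
  try {
    await work();
    db.rows.push(...tx.rows); // "commit": flush the buffered writes
  } finally {
    txStore.current = null; // on failure, tx.rows is simply discarded
  }
}

const txStore = new TxStore<FakeDb>();
const db = new FakeDb();
const events = new EventStore(db, txStore);

events.append("outside-tx"); // no active transaction: goes straight to db
await withUnitOfWork(db, txStore, async () => {
  events.append("inside-tx"); // routed through txStore.current
});
console.log(db.rows); // → ["outside-tx", "inside-tx"]
```

The persistence class never branches on "am I in a transaction" beyond reading the store, which is exactly the property the adapters rely on.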
Concurrency Control
All three adapters support both optimistic and pessimistic concurrency strategies. Here is what each adapter provides at the database level.
Optimistic Concurrency (built-in)
Handled automatically by the persistence implementations via database constraints:
- Events table: A unique constraint on (aggregate_name, aggregate_id, sequence_number) prevents concurrent appends. Violations throw ConcurrencyError.
- States table: A version column enables optimistic locking. Updates use WHERE version = expectedVersion; zero rows affected throws ConcurrencyError.
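The states-table rule can be modeled in memory. A minimal sketch of the zero-rows-affected check, assuming a version that starts at 0 and increments on every save (illustrative, not the adapters' actual code):

```typescript
// In-memory model of: UPDATE ... SET state = ?, version = version + 1
//                     WHERE key = ? AND version = ?
// Zero rows affected (version mismatch) maps to ConcurrencyError.
class ConcurrencyError extends Error {}

interface StateRow { state: string; version: number; }

const table = new Map<string, StateRow>();

function save(key: string, state: string, expectedVersion: number): void {
  const currentVersion = table.get(key)?.version ?? 0;
  if (currentVersion !== expectedVersion) {
    throw new ConcurrencyError(
      `expected version ${expectedVersion}, found ${currentVersion}`,
    );
  }
  table.set(key, { state, version: expectedVersion + 1 });
}

save("BankAccount:1", "{}", 0); // first write: version 0 -> 1
save("BankAccount:1", "{}", 1); // version 1 -> 2
try {
  save("BankAccount:1", "{}", 0); // stale writer loses
} catch (e) {
  console.log(e instanceof ConcurrencyError); // → true
}
```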
Advisory Lockers (for pessimistic concurrency)
Each adapter exports an advisory locker for use with the pessimistic strategy. See the Dialect Support Matrix above for which databases support locking.
| Adapter | Constructor | Dialect Detection |
|---|---|---|
| DrizzleAdvisoryLocker | (db, dialect) | Explicit: "pg" \| "mysql" \| "sqlite" (throws) |
| PrismaAdvisoryLocker | (prisma, dialect) | Explicit: "postgresql" \| "mysql" \| "mariadb" |
| TypeORMAdvisoryLocker | (dataSource) | Auto-detects from dataSource.options.type |
Under the hood, each dialect uses the database's native advisory lock mechanism:
| Dialect | Lock mechanism | Lock key format |
|---|---|---|
| PostgreSQL | pg_advisory_lock / pg_try_advisory_lock | 64-bit FNV-1a hash of name:id |
| MySQL/MariaDB | GET_LOCK / RELEASE_LOCK | First 64 chars of name:id (MySQL limit) |
| MSSQL | sp_getapplock / sp_releaseapplock | First 255 chars of name:id (TypeORM only) |
SQLite has no advisory lock mechanism. For single-process SQLite deployments, use InMemoryAggregateLocker from @noddde/engine.
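For intuition, a single-process keyed lock in the spirit of InMemoryAggregateLocker can be built as a per-key promise chain. This is an illustrative sketch, not noddde's actual implementation:

```typescript
// Single-process keyed lock: calls for the same key run one at a time,
// in arrival order. Illustrative only; not noddde's implementation.
class KeyedLock {
  private tails = new Map<string, Promise<void>>();

  async withLock<T>(key: string, fn: () => Promise<T>): Promise<T> {
    const prev = this.tails.get(key) ?? Promise.resolve();
    let release!: () => void;
    const next = new Promise<void>((r) => (release = r));
    this.tails.set(key, next);
    await prev; // wait for earlier holders of this key
    try {
      return await fn();
    } finally {
      release();
      // Clean up if no later caller has queued behind us.
      if (this.tails.get(key) === next) this.tails.delete(key);
    }
  }
}

const lock = new KeyedLock();
const order: string[] = [];
await Promise.all([
  lock.withLock("BankAccount:1", async () => {
    await new Promise((r) => setTimeout(r, 10)); // slow first holder
    order.push("first");
  }),
  lock.withLock("BankAccount:1", async () => {
    order.push("second"); // queued until the first holder releases
  }),
]);
console.log(order); // → ["first", "second"]
```

Like a real advisory locker, this serializes the whole load→execute→save lifecycle per aggregate, but only within one process.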
Advisory locks are session-level, spanning beyond the database transaction. This is intentional: the lock covers the entire load→execute→save lifecycle.
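The PostgreSQL key derivation (a 64-bit FNV-1a hash of name:id, per the table above) can be sketched as follows; any signed-integer conversion needed for pg_advisory_lock's bigint argument is left out of this sketch:

```typescript
// 64-bit FNV-1a over the UTF-8 bytes of "name:id" -- the kind of
// reduction used to turn a string key into an advisory-lock integer.
const FNV_OFFSET = 14695981039346656037n;
const FNV_PRIME = 1099511628211n;
const MASK64 = (1n << 64n) - 1n;

function fnv1a64(input: string): bigint {
  let hash = FNV_OFFSET;
  for (const byte of new TextEncoder().encode(input)) {
    hash ^= BigInt(byte);
    hash = (hash * FNV_PRIME) & MASK64; // wrap to 64 bits
  }
  return hash;
}

const key = fnv1a64("BankAccount:1");
console.log(key <= MASK64); // the key always fits in 64 bits
```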
Database Tables
All three adapters use the same logical schema:
| Table | Purpose | Key | Columns |
|---|---|---|---|
| noddde_events | Event streams | Auto-increment id | aggregate_name, aggregate_id, sequence_number, event_name, payload (JSON), metadata (JSON, nullable). Unique constraint on (aggregate_name, aggregate_id, sequence_number). |
| noddde_aggregate_states | State-stored aggregates | Composite (aggregate_name, aggregate_id) | state (JSON), version (integer, default 0) |
| noddde_saga_states | Saga state | Composite (saga_name, saga_id) | state (JSON) |
| noddde_snapshots | Event-sourced snapshots | Composite (aggregate_name, aggregate_id) | state (JSON), version (integer, default 0) |
States and event payloads are serialized as JSON strings, making the schema database-agnostic.
Event Metadata Column
The metadata column on noddde_events stores the event metadata envelope as a nullable JSON value. The engine auto-populates metadata (eventId, timestamp, correlationId, causationId, userId, aggregate info, sequence number) before persistence -- command handlers do not need to produce it.
The column type varies by dialect:
| Dialect | Column type | Notes |
|---|---|---|
| PostgreSQL | jsonb | Supports indexing via GIN for metadata queries |
| MySQL | json | Native JSON type with validation |
| SQLite | text | JSON stored as text; parsed on load |
The column is nullable for backward compatibility -- events persisted before the metadata feature was added will have null metadata. When loading events, adapters deserialize the JSON back to an EventMetadata object, or leave it as undefined if the column is null.
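That load-time rule amounts to a null check plus a JSON parse. A minimal sketch, with an assumed EventMetadata shape based on the field list above:

```typescript
// Illustrative load-time handling of the nullable metadata column.
// The EventMetadata shape here is assumed from the field list above;
// the real adapters' type may differ.
interface EventMetadata {
  eventId: string;
  timestamp: string;
  correlationId?: string;
  causationId?: string;
  userId?: string;
}

function readMetadata(column: string | null): EventMetadata | undefined {
  // Rows persisted before the metadata feature have NULL here;
  // surface that as undefined rather than an empty object.
  return column === null ? undefined : (JSON.parse(column) as EventMetadata);
}

console.log(readMetadata(null)); // → undefined
console.log(readMetadata('{"eventId":"e1","timestamp":"2024-01-01T00:00:00Z"}'));
```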
Choosing an Adapter
| Factor | Drizzle | Prisma | TypeORM |
|---|---|---|---|
| Schema definition | TypeScript table builders | .prisma schema file | Decorator-based entities |
| Code generation | None | Required (prisma generate) | None |
| Type safety | Full (inferred from schema) | Full (generated client) | Partial (decorator metadata) |
| Bundle size | Lightweight | Heavier (generated client) | Medium |
| Sync driver support | Yes (better-sqlite3) | No (async only) | Yes |
| Migration tooling | Drizzle Kit | Prisma Migrate | TypeORM migrations |
All three provide identical functionality for noddde's purposes. The choice comes down to which ORM your project already uses.
Next Steps
- Unit of Work -- Atomic persistence and deferred event publishing
- Persistence -- Choosing between event-sourced and state-stored strategies
- Infrastructure -- The full infrastructure provider system