{"id":"fa06a6cd-af67-4f2c-8d25-596015aa7756","shortId":"SNsy2G","kind":"skill","title":"event-store-design","tagline":"Design and implement event stores for event-sourced systems. Use when building event sourcing infrastructure, choosing event store technologies, or implementing event persistence patterns.","description":"# Event Store Design\n\nComprehensive guide to designing event stores for event-sourced applications.\n\n## Do not use this skill when\n\n- The task is unrelated to event store design\n- You need a different domain or tool outside this scope\n\n## Instructions\n\n- Clarify goals, constraints, and required inputs.\n- Apply relevant best practices and validate outcomes.\n- Provide actionable steps and verification.\n- If detailed examples are required, open `resources/implementation-playbook.md`.\n\n## Use this skill when\n\n- Designing event sourcing infrastructure\n- Choosing between event store technologies\n- Implementing custom event stores\n- Optimizing event storage and retrieval\n- Setting up event store schemas\n- Planning for event store scaling\n\n## Core Concepts\n\n### 1. Event Store Architecture\n\n```\n┌─────────────────────────────────────────────────────┐\n│                    Event Store                       │\n├─────────────────────────────────────────────────────┤\n│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐ │\n│  │   Stream 1   │  │   Stream 2   │  │   Stream 3   │ │\n│  │ (Aggregate)  │  │ (Aggregate)  │  │ (Aggregate)  │ │\n│  ├─────────────┤  ├─────────────┤  ├─────────────┤ │\n│  │ Event 1     │  │ Event 1     │  │ Event 1     │ │\n│  │ Event 2     │  │ Event 2     │  │ Event 2     │ │\n│  │ Event 3     │  │ ...         │  │ Event 3     │ │\n│  │ ...         │  │             │  │ Event 4     │ │\n│  └─────────────┘  └─────────────┘  └─────────────┘ │\n├─────────────────────────────────────────────────────┤\n│  Global Position: 1 → 2 → 3 → 4 → 5 → 6 → ...     
│\n└─────────────────────────────────────────────────────┘\n```\n\n### 2. Event Store Requirements\n\n| Requirement       | Description                        |\n| ----------------- | ---------------------------------- |\n| **Append-only**   | Events are immutable, only appends |\n| **Ordered**       | Per-stream and global ordering     |\n| **Versioned**     | Optimistic concurrency control     |\n| **Subscriptions** | Real-time event notifications      |\n| **Idempotent**    | Handle duplicate writes safely     |\n\n## Technology Comparison\n\n| Technology       | Best For                  | Limitations                      |\n| ---------------- | ------------------------- | -------------------------------- |\n| **EventStoreDB** | Pure event sourcing       | Single-purpose                   |\n| **PostgreSQL**   | Existing Postgres stack   | Manual implementation            |\n| **Kafka**        | High-throughput streaming | Not ideal for per-stream queries |\n| **DynamoDB**     | Serverless, AWS-native    | Query limitations                |\n| **Marten**       | .NET ecosystems           | .NET specific                    |\n\n## Templates\n\n### Template 1: PostgreSQL Event Store Schema\n\n```sql\n-- Events table\nCREATE TABLE events (\n    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),\n    stream_id VARCHAR(255) NOT NULL,\n    stream_type VARCHAR(255) NOT NULL,\n    event_type VARCHAR(255) NOT NULL,\n    event_data JSONB NOT NULL,\n    metadata JSONB DEFAULT '{}',\n    version BIGINT NOT NULL,\n    global_position BIGSERIAL,\n    created_at TIMESTAMPTZ DEFAULT NOW(),\n\n    CONSTRAINT unique_stream_version UNIQUE (stream_id, version)\n);\n\n-- Index for stream queries\nCREATE INDEX idx_events_stream_id ON events(stream_id, version);\n\n-- Index for global subscription\nCREATE INDEX idx_events_global_position ON events(global_position);\n\n-- Index for event type queries\nCREATE INDEX idx_events_event_type ON events(event_type);\n\n-- Index for time-based 
queries\nCREATE INDEX idx_events_created_at ON events(created_at);\n\n-- Snapshots table\nCREATE TABLE snapshots (\n    stream_id VARCHAR(255) PRIMARY KEY,\n    stream_type VARCHAR(255) NOT NULL,\n    snapshot_data JSONB NOT NULL,\n    version BIGINT NOT NULL,\n    created_at TIMESTAMPTZ DEFAULT NOW()\n);\n\n-- Subscriptions checkpoint table\nCREATE TABLE subscription_checkpoints (\n    subscription_id VARCHAR(255) PRIMARY KEY,\n    last_position BIGINT NOT NULL DEFAULT 0,\n    updated_at TIMESTAMPTZ DEFAULT NOW()\n);\n```\n\n### Template 2: Python Event Store Implementation\n\n```python\nfrom dataclasses import dataclass, field\nfrom datetime import datetime\nfrom typing import Optional, List\nfrom uuid import UUID, uuid4\nimport asyncio\nimport json\nimport asyncpg\n\n@dataclass\nclass Event:\n    stream_id: str\n    event_type: str\n    data: dict\n    metadata: dict = field(default_factory=dict)\n    event_id: UUID = field(default_factory=uuid4)\n    version: Optional[int] = None\n    global_position: Optional[int] = None\n    created_at: datetime = field(default_factory=datetime.utcnow)\n\n\nclass EventStore:\n    def __init__(self, pool: asyncpg.Pool):\n        self.pool = pool\n\n    async def append_events(\n        self,\n        stream_id: str,\n        stream_type: str,\n        events: List[Event],\n        expected_version: Optional[int] = None\n    ) -> List[Event]:\n        \"\"\"Append events to a stream with optimistic concurrency.\"\"\"\n        async with self.pool.acquire() as conn:\n            async with conn.transaction():\n                # Check expected version\n                if expected_version is not None:\n                    current = await conn.fetchval(\n                        \"SELECT MAX(version) FROM events WHERE stream_id = $1\",\n                        stream_id\n                    )\n                    current = current or 0\n                    if current != expected_version:\n                        raise ConcurrencyError(\n       
                     f\"Expected version {expected_version}, got {current}\"\n                        )\n\n                # Get starting version\n                start_version = await conn.fetchval(\n                    \"SELECT COALESCE(MAX(version), 0) + 1 FROM events WHERE stream_id = $1\",\n                    stream_id\n                )\n\n                # Insert events\n                saved_events = []\n                for i, event in enumerate(events):\n                    event.version = start_version + i\n                    row = await conn.fetchrow(\n                        \"\"\"\n                        INSERT INTO events (id, stream_id, stream_type, event_type,\n                                          event_data, metadata, version, created_at)\n                        VALUES ($1, $2, $3, $4, $5, $6, $7, $8)\n                        RETURNING global_position\n                        \"\"\",\n                        event.event_id,\n                        stream_id,\n                        stream_type,\n                        event.event_type,\n                        json.dumps(event.data),\n                        json.dumps(event.metadata),\n                        event.version,\n                        event.created_at\n                    )\n                    event.global_position = row['global_position']\n                    saved_events.append(event)\n\n                return saved_events\n\n    async def read_stream(\n        self,\n        stream_id: str,\n        from_version: int = 0,\n        limit: int = 1000\n    ) -> List[Event]:\n        \"\"\"Read events from a stream.\"\"\"\n        async with self.pool.acquire() as conn:\n            rows = await conn.fetch(\n                \"\"\"\n                SELECT id, stream_id, event_type, event_data, metadata,\n                       version, global_position, created_at\n                FROM events\n                WHERE stream_id = $1 AND version >= $2\n                ORDER BY 
version\n                LIMIT $3\n                \"\"\",\n                stream_id, from_version, limit\n            )\n            return [self._row_to_event(row) for row in rows]\n\n    async def read_all(\n        self,\n        from_position: int = 0,\n        limit: int = 1000\n    ) -> List[Event]:\n        \"\"\"Read all events globally.\"\"\"\n        async with self.pool.acquire() as conn:\n            rows = await conn.fetch(\n                \"\"\"\n                SELECT id, stream_id, event_type, event_data, metadata,\n                       version, global_position, created_at\n                FROM events\n                WHERE global_position > $1\n                ORDER BY global_position\n                LIMIT $2\n                \"\"\",\n                from_position, limit\n            )\n            return [self._row_to_event(row) for row in rows]\n\n    async def subscribe(\n        self,\n        subscription_id: str,\n        handler,\n        from_position: int = 0,\n        batch_size: int = 100\n    ):\n        \"\"\"Subscribe to all events from a position.\"\"\"\n        # Get checkpoint\n        async with self.pool.acquire() as conn:\n            checkpoint = await conn.fetchval(\n                \"\"\"\n                SELECT last_position FROM subscription_checkpoints\n                WHERE subscription_id = $1\n                \"\"\",\n                subscription_id\n            )\n            position = checkpoint or from_position\n\n        while True:\n            events = await self.read_all(position, batch_size)\n            if not events:\n                await asyncio.sleep(1)  # Poll interval\n                continue\n\n            for event in events:\n                await handler(event)\n                position = event.global_position\n\n            # Save checkpoint\n            async with self.pool.acquire() as conn:\n                await conn.execute(\n                    \"\"\"\n                    INSERT INTO 
subscription_checkpoints (subscription_id, last_position)\n                    VALUES ($1, $2)\n                    ON CONFLICT (subscription_id)\n                    DO UPDATE SET last_position = $2, updated_at = NOW()\n                    \"\"\",\n                    subscription_id, position\n                )\n\n    def _row_to_event(self, row) -> Event:\n        return Event(\n            event_id=row['id'],\n            stream_id=row['stream_id'],\n            event_type=row['event_type'],\n            data=json.loads(row['event_data']),\n            metadata=json.loads(row['metadata']),\n            version=row['version'],\n            global_position=row['global_position'],\n            created_at=row['created_at']\n        )\n\n\nclass ConcurrencyError(Exception):\n    \"\"\"Raised when optimistic concurrency check fails.\"\"\"\n    pass\n```\n\n### Template 3: EventStoreDB Usage\n\n```python\nfrom esdbclient import EventStoreDBClient, NewEvent, StreamState\nimport json\n\n# Connect\nclient = EventStoreDBClient(uri=\"esdb://localhost:2113?tls=false\")\n\n# Append events\ndef append_events(stream_name: str, events: list, expected_revision=None):\n    new_events = [\n        NewEvent(\n            type=event['type'],\n            data=json.dumps(event['data']).encode(),\n            metadata=json.dumps(event.get('metadata', {})).encode()\n        )\n        for event in events\n    ]\n\n    if expected_revision is None:\n        state = StreamState.ANY\n    elif expected_revision == -1:\n        state = StreamState.NO_STREAM\n    else:\n        state = expected_revision\n\n    return client.append_to_stream(\n        stream_name=stream_name,\n        events=new_events,\n        current_version=state\n    )\n\n# Read stream\ndef read_stream(stream_name: str, from_revision: int = 0):\n    events = client.get_stream(\n        stream_name=stream_name,\n        stream_position=from_revision\n    )\n    return [\n        {\n            'type': event.type,\n        
    'data': json.loads(event.data),\n            'metadata': json.loads(event.metadata) if event.metadata else {},\n            'stream_position': event.stream_position,\n            'commit_position': event.commit_position\n        }\n        for event in events\n    ]\n\n# Subscribe to all (the synchronous client exposes a blocking iterator)\ndef subscribe_to_all(handler, from_position: int = 0):\n    subscription = client.subscribe_to_all(commit_position=from_position)\n    for event in subscription:\n        handler({\n            'type': event.type,\n            'data': json.loads(event.data),\n            'stream_id': event.stream_name,\n            'position': event.commit_position\n        })\n\n# Category projection ($ce-Category)\ndef read_category(category: str):\n    \"\"\"Read all events for a category using system projection.\"\"\"\n    return read_stream(f\"$ce-{category}\")\n```\n\n### Template 4: DynamoDB Event Store\n\n```python\nimport boto3\nfrom boto3.dynamodb.conditions import Key\nfrom datetime import datetime\nimport json\nimport uuid\n\nclass DynamoEventStore:\n    def __init__(self, table_name: str):\n        self.dynamodb = boto3.resource('dynamodb')\n        self.table = self.dynamodb.Table(table_name)\n\n    def append_events(self, stream_id: str, events: list, expected_version: int = None):\n        \"\"\"Append events; a conditional put rejects duplicate versions (optimistic concurrency).\"\"\"\n        for i, event in enumerate(events):\n            version = (expected_version or 0) + i + 1\n            item = {\n                'PK': f\"STREAM#{stream_id}\",\n                'SK': f\"VERSION#{version:020d}\",\n                'GSI1PK': 'EVENTS',\n                'GSI1SK': datetime.utcnow().isoformat(),\n                'event_id': str(uuid.uuid4()),\n                'stream_id': stream_id,\n                'event_type': event['type'],\n                'event_data': json.dumps(event['data']),\n                'version': version,\n                'created_at': datetime.utcnow().isoformat()\n            }\n            # The write fails if this stream/version already exists\n            self.table.put_item(\n                Item=item,\n                ConditionExpression='attribute_not_exists(PK) AND attribute_not_exists(SK)'\n            )\n        return events\n\n    def read_stream(self, stream_id: str, from_version: int = 0):\n        \"\"\"Read events from a stream.\"\"\"\n        response = self.table.query(\n            KeyConditionExpression=Key('PK').eq(f\"STREAM#{stream_id}\") &\n                                  Key('SK').gte(f\"VERSION#{from_version:020d}\")\n        )\n        return [\n            {\n                'event_type': item['event_type'],\n                'data': json.loads(item['event_data']),\n                'version': item['version']\n            }\n            for item in response['Items']\n        ]\n\n# Table definition (CloudFormation/Terraform)\n\"\"\"\nDynamoDB Table:\n  - PK (Partition Key): String\n  - SK (Sort Key): String\n  - GSI1PK, GSI1SK for global ordering\n\nCapacity: On-demand or provisioned based on throughput needs\n\"\"\"\n```\n\n## Best Practices\n\n### Do's\n\n- **Use stream IDs that include aggregate type** - `Order-{uuid}`\n- **Include correlation/causation IDs** - For tracing\n- **Version events from day one** - Plan for schema evolution\n- **Implement idempotency** - Use event IDs for deduplication\n- **Index appropriately** - For your query patterns\n\n### Don'ts\n\n- **Don't update or delete events** - They're immutable facts\n- **Don't store large payloads** - Keep events small\n- **Don't skip optimistic concurrency** - Prevents data corruption\n- **Don't ignore backpressure** - Handle slow consumers\n\n## Resources\n\n- [EventStoreDB](https://www.eventstore.com/)\n- [Marten Events](https://martendb.io/events/)\n- [Event Sourcing Pattern](https://docs.microsoft.com/en-us/azure/architecture/patterns/event-sourcing)\n\n## Limitations\n- Use this skill only when the task clearly matches the scope described above.\n- Do not treat the output as a substitute for environment-specific validation, 
testing, or expert review.\n- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.","tags":["event","store","design","antigravity","awesome","skills","sickn33","agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding"],"capabilities":["skill","source-sickn33","skill-event-store-design","topic-agent-skills","topic-agentic-skills","topic-ai-agent-skills","topic-ai-agents","topic-ai-coding","topic-ai-workflows","topic-antigravity","topic-antigravity-skills","topic-claude-code","topic-claude-code-skills","topic-codex-cli","topic-codex-skills"],"categories":["antigravity-awesome-skills"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/sickn33/antigravity-awesome-skills/event-store-design","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add sickn33/antigravity-awesome-skills","source_repo":"https://github.com/sickn33/antigravity-awesome-skills","install_from":"skills.sh"}},"qualityScore":"0.700","qualityRationale":"deterministic score 0.70 from registry signals: · indexed on github topic:agent-skills · 34793 github stars · SKILL.md body (15,200 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-24T00:50:53.769Z","embedding":null,"createdAt":"2026-04-18T21:36:47.424Z","updatedAt":"2026-04-24T00:50:53.769Z","lastSeenAt":"2026-04-24T00:50:53.769Z","tsv":null,"prices":[{"id":"10bebca0-8158-485e-8768-72be36bcb361","listingId":"fa06a6cd-af67-4f2c-8d25-596015aa7756","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"sickn33","category":"antigravity-awesome-skills","install_from":"skills.sh"},"createdAt":"2026-04-18T21:36:47.424Z"}],"sources":[{"listingId":"fa06a6cd-af67-4f2c-8d25-596015aa7756","source":"github","sourceId":"sickn33/antigravity-awesome-skills/event-store-design","sourceUrl":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/event-store-design","isPrimary":false,"firstSeenAt":"2026-04-18T21:36:47.424Z","lastSeenAt":"2026-04-24T00:50:53.769Z"}],"details":{"listingId":"fa06a6cd-af67-4f2c-8d25-596015aa7756","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"sickn33","slug":"event-store-design","github":{"repo":"sickn33/antigravity-awesome-skills","stars":34793,"topics":["agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding","ai-workflows","antigravity","antigravity-skills","claude-code","claude-code-skills","codex-cli","codex-skills","cursor","cursor-skills","developer-tools","gemini-cli","gemini-skills","kiro","mcp","skill-library"],"license":"mit","html_url":"https://github.com/sickn33/antigravity-awesome-skills","pushed_at":"2026-04-24T00:28:59Z","description":"Installable GitHub library of 1,400+ agentic skills for Claude Code, Cursor, Codex CLI, Gemini CLI, Antigravity, and more. Includes installer CLI, bundles, workflows, and official/community skill collections.","skill_md_sha":"ecb16ce735f8070921954c4321b5721a420b1c5a","skill_md_path":"skills/event-store-design/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/event-store-design"},"layout":"multi","source":"github","category":"antigravity-awesome-skills","frontmatter":{"name":"event-store-design","description":"Design and implement event stores for event-sourced systems. Use when building event sourcing infrastructure, choosing event store technologies, or implementing event persistence patterns."},"skills_sh_url":"https://skills.sh/sickn33/antigravity-awesome-skills/event-store-design"},"updatedAt":"2026-04-24T00:50:53.769Z"}}