{"id":"0b45accd-f4ba-4ac9-af6d-ffb9295df081","shortId":"qTAeuQ","kind":"skill","title":"data-engineering-data-pipeline","tagline":"You are a data pipeline architecture expert specializing in scalable, reliable, and cost-effective data pipelines for batch and streaming data processing.","description":"# Data Pipeline Architecture\n\nYou are a data pipeline architecture expert specializing in scalable, reliable, and cost-effective data pipelines for batch and streaming data processing.\n\n## Use this skill when\n\n- Working on data pipeline architecture tasks or workflows\n- Needing guidance, best practices, or checklists for data pipeline architecture\n\n## Do not use this skill when\n\n- The task is unrelated to data pipeline architecture\n- You need a different domain or tool outside this scope\n\n## Requirements\n\n$ARGUMENTS\n\n## Core Capabilities\n\n- Design ETL/ELT, Lambda, Kappa, and Lakehouse architectures\n- Implement batch and streaming data ingestion\n- Build workflow orchestration with Airflow/Prefect\n- Transform data using dbt and Spark\n- Manage Delta Lake/Iceberg storage with ACID transactions\n- Implement data quality frameworks (Great Expectations, dbt tests)\n- Monitor pipelines with CloudWatch/Prometheus/Grafana\n- Optimize costs through partitioning, lifecycle policies, and compute optimization\n\n## Instructions\n\n### 1. Architecture Design\n- Assess: sources, volume, latency requirements, targets\n- Select pattern: ETL (transform before load), ELT (load then transform), Lambda (batch + speed layers), Kappa (stream-only), Lakehouse (unified)\n- Design flow: sources → ingestion → processing → storage → serving\n- Add observability touchpoints\n\n### 2. 
Ingestion Implementation\n**Batch**\n- Incremental loading with watermark columns\n- Retry logic with exponential backoff\n- Schema validation and dead letter queue for invalid records\n- Metadata tracking (_extracted_at, _source)\n\n**Streaming**\n- Kafka consumers with exactly-once semantics\n- Manual offset commits within transactions\n- Windowing for time-based aggregations\n- Error handling and replay capability\n\n### 3. Orchestration\n**Airflow**\n- Task groups for logical organization\n- XCom for inter-task communication\n- SLA monitoring and email alerts\n- Incremental execution with execution_date\n- Retry with exponential backoff\n\n**Prefect**\n- Task caching for idempotency\n- Parallel execution with .submit()\n- Artifacts for visibility\n- Automatic retries with configurable delays\n\n### 4. Transformation with dbt\n- Staging layer: incremental materialization, deduplication, late-arriving data handling\n- Marts layer: dimensional models, aggregations, business logic\n- Tests: unique, not_null, relationships, accepted_values, custom data quality tests\n- Sources: freshness checks, loaded_at_field tracking\n- Incremental strategy: merge or delete+insert\n\n### 5. Data Quality Framework\n**Great Expectations**\n- Table-level: row count, column count\n- Column-level: uniqueness, nullability, type validation, value sets, ranges\n- Checkpoints for validation execution\n- Data docs for documentation\n- Failure notifications\n\n**dbt Tests**\n- Schema tests in YAML\n- Custom data quality tests with dbt-expectations\n- Test results tracked in metadata\n\n### 6. 
Storage Strategy\n**Delta Lake**\n- ACID transactions with append/overwrite/merge modes\n- Upsert with predicate-based matching\n- Time travel for historical queries\n- Optimize: compact small files, Z-order clustering\n- Vacuum to remove old files\n\n**Apache Iceberg**\n- Partitioning and sort order optimization\n- MERGE INTO for upserts\n- Snapshot isolation and time travel\n- File compaction with binpack strategy\n- Snapshot expiration for cleanup\n\n### 7. Monitoring & Cost Optimization\n**Monitoring**\n- Track: records processed/failed, data size, execution time, success/failure rates\n- CloudWatch metrics and custom namespaces\n- SNS alerts for critical/warning/info events\n- Data freshness checks\n- Performance trend analysis\n\n**Cost Optimization**\n- Partitioning: date/entity-based, avoid over-partitioning (keep partitions >1GB)\n- File sizes: 512MB-1GB for Parquet\n- Lifecycle policies: hot (Standard) → warm (IA) → cold (Glacier)\n- Compute: spot instances for batch, on-demand for streaming, serverless for ad hoc queries\n- Query optimization: partition pruning, clustering, predicate pushdown\n\n## Example: Minimal Batch Pipeline\n\n```python\n# Batch ingestion with validation\nfrom batch_ingestion import BatchDataIngester\nfrom storage.delta_lake_manager import DeltaLakeManager\nfrom data_quality.expectations_suite import DataQualityFramework\n\ningester = BatchDataIngester(config={})\n\n# Extract with incremental loading\n# (last_run_timestamp is the watermark persisted from the previous run)\ndf = ingester.extract_from_database(\n    connection_string='postgresql://host:5432/db',\n    query='SELECT * FROM orders',\n    watermark_column='updated_at',\n    last_watermark=last_run_timestamp\n)\n\n# Validate\nschema = {'required_fields': ['id', 'user_id'], 'dtypes': {'id': 'int64'}}\ndf = ingester.validate_and_clean(df, schema)\n\n# Data quality checks\ndq = DataQualityFramework()\nresult = dq.validate_dataframe(df, suite_name='orders_suite', data_asset_name='orders')\n\n# Write to Delta Lake\ndelta_mgr = 
DeltaLakeManager(storage_path='s3://lake')\ndelta_mgr.create_or_update_table(\n    df=df,\n    table_name='orders',\n    partition_columns=['order_date'],\n    mode='append'\n)\n\n# Save failed records\ningester.save_dead_letter_queue('s3://lake/dlq/orders')\n```\n\n## Output Deliverables\n\n### 1. Architecture Documentation\n- Architecture diagram with data flow\n- Technology stack with justification\n- Scalability analysis and growth patterns\n- Failure modes and recovery strategies\n\n### 2. Implementation Code\n- Ingestion: batch/streaming with error handling\n- Transformation: dbt models (staging → marts) or Spark jobs\n- Orchestration: Airflow/Prefect DAGs with dependencies\n- Storage: Delta/Iceberg table management\n- Data quality: Great Expectations suites and dbt tests\n\n### 3. Configuration Files\n- Orchestration: DAG definitions, schedules, retry policies\n- dbt: models, sources, tests, project config\n- Infrastructure: Docker Compose, K8s manifests, Terraform\n- Environment: dev/staging/prod configs\n\n### 4. Monitoring & Observability\n- Metrics: execution time, records processed, quality scores\n- Alerts: failures, performance degradation, data freshness\n- Dashboards: Grafana/CloudWatch for pipeline health\n- Logging: structured logs with correlation IDs\n\n### 5. 
Operations Guide\n- Deployment procedures and rollback strategy\n- Troubleshooting guide for common issues\n- Scaling guide for increased volume\n- Cost optimization strategies and savings\n- Disaster recovery and backup procedures\n\n## Success Criteria\n- Pipeline meets defined SLA (latency, throughput)\n- Data quality checks pass with >99% success rate\n- Automatic retry and alerting on failures\n- Comprehensive monitoring shows health and performance\n- Documentation enables team maintenance\n- Cost optimization reduces infrastructure costs by 30-50%\n- Schema evolution without downtime\n- End-to-end data lineage tracked\n\n## Limitations\n- Use this skill only when the task clearly matches the scope described above.\n- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.\n- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.","tags":["data","engineering","pipeline","antigravity","awesome","skills","sickn33","agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding"],"capabilities":["skill","source-sickn33","skill-data-engineering-data-pipeline","topic-agent-skills","topic-agentic-skills","topic-ai-agent-skills","topic-ai-agents","topic-ai-coding","topic-ai-workflows","topic-antigravity","topic-antigravity-skills","topic-claude-code","topic-claude-code-skills","topic-codex-cli","topic-codex-skills"],"categories":["antigravity-awesome-skills"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/sickn33/antigravity-awesome-skills/data-engineering-data-pipeline","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add sickn33/antigravity-awesome-skills","source_repo":"https://github.com/sickn33/antigravity-awesome-skills","install_from":"skills.sh"}},"qualityScore":"0.700","qualityRationale":"deterministic score 0.70 from registry signals: · indexed on github topic:agent-skills · 34831 
github stars · SKILL.md body (7,010 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-24T06:51:00.350Z","embedding":null,"createdAt":"2026-04-18T21:35:33.119Z","updatedAt":"2026-04-24T06:51:00.350Z","lastSeenAt":"2026-04-24T06:51:00.350Z","prices":[{"id":"37607514-63bd-4a93-98b3-e8f93f7a5990","listingId":"0b45accd-f4ba-4ac9-af6d-ffb9295df081","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"sickn33","category":"antigravity-awesome-skills","install_from":"skills.sh"},"createdAt":"2026-04-18T21:35:33.119Z"}],"sources":[{"listingId":"0b45accd-f4ba-4ac9-af6d-ffb9295df081","source":"github","sourceId":"sickn33/antigravity-awesome-skills/data-engineering-data-pipeline","sourceUrl":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/data-engineering-data-pipeline","isPrimary":false,"firstSeenAt":"2026-04-18T21:35:33.119Z","lastSeenAt":"2026-04-24T06:51:00.350Z"}],"details":{"listingId":"0b45accd-f4ba-4ac9-af6d-ffb9295df081","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"sickn33","slug":"data-engineering-data-pipeline","github":{"repo":"sickn33/antigravity-awesome-skills","stars":34831,"topics":["agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding","ai-workflows","antigravity","antigravity-skills","claude-code","claude-code-skills","codex-cli","codex-skills","cursor","cursor-skills","developer-tools","gemini-cli","gemini-skills","kiro","mcp","skill-library"],"license":"mit","html_url":"https://github.com/sickn33/antigravity-awesome-skills","pushed_at":"2026-04-24T06:41:17Z","description":"Installable GitHub library of 1,400+ agentic skills for Claude Code, Cursor, Codex CLI, Gemini CLI, Antigravity, and more. 
Includes installer CLI, bundles, workflows, and official/community skill collections.","skill_md_sha":"fa1ddb3954910c4bf3011084e49faadab0599dee","skill_md_path":"skills/data-engineering-data-pipeline/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/data-engineering-data-pipeline"},"layout":"multi","source":"github","category":"antigravity-awesome-skills","frontmatter":{"name":"data-engineering-data-pipeline","description":"You are a data pipeline architecture expert specializing in scalable, reliable, and cost-effective data pipelines for batch and streaming data processing."},"skills_sh_url":"https://skills.sh/sickn33/antigravity-awesome-skills/data-engineering-data-pipeline"},"updatedAt":"2026-04-24T06:51:00.350Z"}}