{"id":"735565c4-5b11-4237-9b4c-7452d9990d57","shortId":"Nwe2sV","kind":"skill","title":"anycap-blog-production","tagline":"Turn user-provided data, notes, research, product facts, or structured inputs into AnyCap-style blog posts and then enrich those posts with first-party evidence blocks when needed. Use when a user wants a blog, article, tutorial, learn page, or SEO content draft written in the to","description":"# AnyCap Blog Production\n\n> **Read this entire file before starting.** This skill is for turning user-fed data into AnyCap-style articles, then adding evidence where the article needs proof.\n\nUse this skill when the user provides facts, notes, examples, benchmarks, product capabilities, or other raw inputs and wants a finished blog post that sounds like the AnyCap website. The core job is:\n\n1. normalize the input data into a working brief\n2. draft a page in AnyCap's website tone\n3. decide whether the article needs first-party evidence blocks\n4. add AnyCap-generated visuals only when they materially improve the page\n\nThis skill is about **blog production workflow**. For raw CLI syntax, authentication, and command behavior, read the `anycap-cli` skill. For broader search-intent planning, read the `anycap-ai-tool-seo` skill.\n\nRead these reference files before drafting:\n- [voice.md](references/voice.md)\n- [patterns.md](references/patterns.md)\n- [input-brief.md](references/input-brief.md)\n\n## Best Fit\n\n- data-driven blog drafting\n- learn pages and tutorials built from research notes\n- glossary or concept pages built from product facts and supporting examples\n- SEO articles where the user feeds benchmarks, examples, or feature data first\n- content workflows where the input is bullets, tables, JSON, spreadsheets, links, or rough notes\n- AnyCap website content that should sound operational, answer-first, and product-literate instead of generic content-marketing prose\n\n## Before You Start\n\n1. 
Verify AnyCap is available:\n\n```bash\nanycap status\n```\n\n2. Inspect the page structure before editing:\n   - find the base page\n   - find the localized wrapper\n   - check whether localized or duplicated routes simply re-export the base page\n\n3. Normalize the input into a brief:\n   - what is the article topic?\n   - who is it for?\n   - what does the data actually prove?\n   - what is the one-sentence answer or promise?\n   - what can be stated confidently, and what is still unknown?\n\n4. Check the git worktree and do not overwrite unrelated user edits.\n\n5. Read the tone and structure references before writing.\n\n## Workflow\n\n### 1. Build a working brief from the input data\n\nThe user may feed:\n- bullets\n- product facts\n- competitive notes\n- spreadsheet exports\n- JSON\n- URLs\n- benchmarks\n- rough observations\n\nYour first job is to convert that into a compact internal brief:\n- target reader\n- search or page intent\n- page thesis\n- 3-6 supporting facts\n- missing proof or uncertainty\n- recommended page shape\n\nIf the input is sparse, infer carefully but keep the article scoped to what the data can actually support.\n\n### 2. Draft in AnyCap website tone before expanding\n\nDefault AnyCap article pattern:\n\n1. short eyebrow\n2. direct H1 with one concrete promise\n3. concise intro for readers who already know the general category\n4. answer-first summary block near the top\n5. one or more structured sections:\n   - workflow\n   - comparison\n   - checklist\n   - use cases\n   - FAQ\n6. CTA or internal links that move the reader to the next relevant page\n\nDo not start with generic scene-setting. Start with the actual problem, constraint, or useful outcome.\n\n### 3. 
Match the evidence type to the page's claim\n\n- **Show the final outcome**: generate one polished hero or showcase image.\n- **Show a transformation**: generate a rough source image, then a refined after-image from the same subject.\n- **Show range or iteration**: generate a triptych or clean review board with multiple variations.\n- **Support a time-based workflow**: generate a companion still or keyframe that represents the brief. Do not present a static image as the full video result.\n- **Support an audio-based workflow**: generate a mood image or cover visual that supports the prompt. Do not imply the image is audio output.\n- **Support a process explanation**: generate a clean step illustration or diagram-like visual that makes the written explanation easier to scan.\n\nIf the article already lands without media, do not force a proof block. This skill is for useful proof, not decorative filler.\n\n### 4. Discover real model constraints before prompting\n\nIf the article needs media, inspect the live model catalog and schema first:\n\n```bash\nanycap image models\nanycap image models <model> schema --operation generate --mode <mode>\n\nanycap video models\nanycap video models <model> schema --operation generate --mode <mode>\n\nanycap music models\nanycap music models <model> schema --operation generate\n```\n\nNever assume mode names, parameter names, or supported aspect ratios.\n\n### 5. Generate assets into a reusable public path\n\nUse descriptive filenames and keep related artifacts together:\n\n```bash\nmkdir -p web/public/content-evidence\n\nanycap image generate \\\n  --model <model> \\\n  --prompt \"<subject-specific brief> ... 
no readable text, no watermark\" \\\n  --param aspect_ratio=16:9 \\\n  -o web/public/content-evidence/<page-slug>-hero.png\n```\n\nRules:\n- Always use `-o` with a descriptive filename.\n- Prefer `16:9` for hero or showcase blocks unless the layout needs another ratio.\n- Version or split files clearly for before/after workflows.\n- Keep assets topic-specific at generation time, but keep the skill itself topic-agnostic.\n- If the page is localized through wrapper files, keep assets shared unless a locale-specific image is genuinely needed.\n\n### 6. Verify the generated asset before wiring it into the page\n\nUse AnyCap vision to validate the artifact:\n\n```bash\nanycap actions image-read \\\n  --file web/public/content-evidence/<page-slug>-hero.png \\\n  --instruction \"Describe this image, confirm it matches the intended page claim, and mention any visible text or watermark.\"\n```\n\nFor edit workflows, compare source and revision:\n\n```bash\nanycap actions image-read \\\n  --file web/public/content-evidence/<page-slug>-before.png \\\n  --file web/public/content-evidence/<page-slug>-after.png \\\n  --instruction \"Confirm whether the second image preserves the same subject while improving composition and background cleanliness.\"\n```\n\nIf verification reveals visible text, stray signage, wrong subject identity, or prompt drift, re-prompt and regenerate before editing code.\n\n### 7. 
Optimize the asset and wire it into reusable code\n\n- Compress oversized PNGs to JPG or WebP when the visual difference is acceptable.\n- Prefer a reusable component plus a lookup table over duplicating large JSX blocks in every page.\n- Keep captions factual:\n  - for static images, the caption can say the image was generated through AnyCap for the page\n  - for time-based or audio-based pages, explicitly label the visual as a companion still or cover visual\n- If localized routes simply re-export the base page, edit the base page once instead of patching every localized copy.\n- Keep the reusable block generic enough that it can be dropped into articles, guides, or landing pages without rewriting the component itself.\n\n### 8. Verify the page integration\n\nRun targeted checks on the files you touched:\n\n```bash\npnpm exec eslint \\\n  'src/components/seo/ContentEvidenceBlock.tsx' \\\n  'src/lib/content-evidence.ts' \\\n  'src/app/(seo)/<section>/<page>/page.tsx'\n```\n\nIf a full repo-wide typecheck fails because of pre-existing issues, record the exact blocking file and keep the skill output focused on what was actually validated.\n\n## Output Expectations\n\nDefault deliverables:\n- a compact brief distilled from the input data\n- an article draft in AnyCap website tone\n- generated assets in a stable public directory\n- one reusable content or UI component if multiple pages need the same evidence pattern\n- page copy that explains what the asset proves\n- alt text, caption, and prompt block that match the real artifact\n- concise validation notes covering generation, verification, and code checks\n\n## Guardrails\n\n- Do not write generic \"AI blog\" copy that could belong to any SaaS site.\n- Do not let the article drift beyond what the provided data can support.\n- Do not bury the answer under a long warm-up.\n- Do not use stock art when the whole point is to show first-party AnyCap output.\n- Do not present a static image as if it were the actual output of a video or 
music model.\n- Do not leave visible text or watermarks in showcase assets unless the page specifically needs them.\n- Do not duplicate localized page implementations when a wrapper already reuses the base page.\n- Do not inflate pages with generic prose when the missing piece is proof.\n- Do not bake page topic, keyword cluster, or niche-specific nouns into the skill itself. Those belong to the actual task input, not to the reusable workflow.\n\n## Quick Reference\n\n```bash\n# Check auth and feature availability\nanycap status\n\n# Normalize the topic with a small brief before writing\n# Then inspect model parameters only if the article truly needs media\n\n# Discover image models and parameters\nanycap image models\nanycap image models <model> schema --operation generate --mode text-to-image\n\n# Generate a page-specific artifact\nanycap image generate --model <model> --prompt \"...\" -o web/public/content-evidence/<page-slug>-hero.png\n\n# Validate the artifact with vision\nanycap actions image-read --file web/public/content-evidence/<page-slug>-hero.png --instruction \"Describe this image and mention visible text.\"\n\n# Compress a heavy PNG to JPG\nsips -s format jpeg -s formatOptions 80 web/public/content-evidence/<page-slug>-hero.png --out web/public/content-evidence/<page-slug>-hero.jpg\n```","tags":["anycap","blog","production","anycap-ai","agent","agent-skills","claude-code","cli","coding-agent","skills"],"capabilities":["skill","source-anycap-ai","skill-anycap-blog-production","topic-agent","topic-agent-skills","topic-claude-code","topic-cli","topic-coding-agent","topic-skills"],"categories":["anycap"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/anycap-ai/anycap/anycap-blog-production","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add anycap-ai/anycap","source_repo":"https://github.com/anycap-ai/anycap","install_from":"skills.sh"}},"qualityScore":"0.466","qualityRationale":"deterministic 
score 0.47 from registry signals: · indexed on github topic:agent-skills · 32 github stars · SKILL.md body (9,406 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-05-01T12:57:23.636Z","embedding":null,"createdAt":"2026-04-18T22:22:31.830Z","updatedAt":"2026-05-01T12:57:23.636Z","lastSeenAt":"2026-05-01T12:57:23.636Z","tsv":"'-6':429 '/page.tsx':1095 '1':119,284,383,470 '16':785,799 '2':128,292,458,473 '3':137,320,428,480,543 '4':148,361,491,692 '5':373,500,752 '6':512,856 '7':956 '8':1074 '80':1436 '9':786,800 'accept':978 'action':876,910,1409 'actual':340,456,537,1124,1261,1332 'ad':79 'add':149 'after-imag':575 'after.png':919 'agnost':835 'ai':192,1199 'alreadi':486,673,1294 'alt':1174 'alway':791 'anoth':810 'answer':268,348,493,1226 'answer-first':267,492 'anycap':2,19,55,75,113,133,151,179,191,260,286,290,461,467,713,716,723,726,733,736,772,868,875,909,1007,1142,1248,1348,1375,1378,1395,1408 'anycap-ai-tool-seo':190 'anycap-blog-product':1 'anycap-c':178 'anycap-gener':150 'anycap-styl':18,74 'art':1237 'articl':43,77,83,141,235,330,449,468,672,701,1064,1139,1213,1366 'artifact':766,873,1184,1394,1405 'aspect':750,783 'asset':754,821,845,860,959,1146,1172,1278 'assum':743 'audio':626,646,1016 'audio-bas':625,1015 'auth':1344 'authent':172 'avail':288,1347 'background':933 'bake':1314 'base':301,318,600,627,1013,1017,1039,1043,1297 'bash':289,712,768,874,908,1087,1342 'before.png':916 'before/after':818 'behavior':175 'belong':1204,1329 'benchmark':96,240,405 'best':208 'beyond':1215 'block':33,147,496,682,805,991,1055,1113,1179 'blog':3,21,42,56,107,165,213,1200 'board':592 'brief':127,326,387,419,611,1132,1356 'broader':183 'build':384 'built':219,227 'bullet':252,396 'buri':1224 'capabl':98 
'caption':996,1176 'care':445 'case':510 'catalog':708 'categori':490 'check':307,362,1081,1193,1343 'checklist':508 'claim':552,893 'clean':590,654 'cleanli':934 'clear':816 'cli':170,180 'cluster':1318 'code':955,965,1192 'command':174 'compact':417,1131 'companion':604,1026 'compar':904 'comparison':507 'competit':399 'compon':982,1072,1157 'composit':931 'compress':966,1424 'concept':225 'concis':481,1185 'concret':478 'confid':355 'confirm':887,921 'constraint':539,696 'content':49,246,262,278,1154 'content-market':277 'convert':413 'copi':1051,1167,1201 'core':116 'could':1203 'cover':634,1029,1188 'cta':513 'data':9,72,123,211,244,339,391,454,1137,1219 'data-driven':210 'decid':138 'decor':690 'default':466,1128 'deliver':1129 'describ':884,1417 'descript':761,796 'diagram':659 'diagram-lik':658 'differ':976 'direct':474 'directori':1151 'discov':693,1370 'distil':1133 'draft':50,129,201,214,459,1140 'drift':947,1214 'driven':212 'drop':1062 'duplic':311,988,1287 'easier':667 'edit':298,372,902,954,1041 'enough':1057 'enrich':25 'entir':60 'eslint':1090 'everi':993,1049 'evid':32,80,146,546,1164 'exact':1112 'exampl':95,233,241 'exec':1089 'exist':1108 'expand':465 'expect':1127 'explain':1169 'explan':651,666 'explicit':1020 'export':316,402,1037 'eyebrow':472 'fact':13,93,230,398,431 'factual':997 'fail':1103 'faq':511 'featur':243,1346 'fed':71 'feed':239,395 'file':61,199,815,843,880,914,917,1084,1114,1413 'filenam':762,797 'filler':691 'final':555 'find':299,303 'finish':106 'first':30,144,245,269,409,494,711,1246 'first-parti':29,143,1245 'fit':209 'focus':1120 'forc':679 'format':1432 'formatopt':1435 'full':620,1098 'general':489 'generat':152,557,567,586,602,629,652,721,731,741,753,774,826,859,1005,1145,1189,1383,1389,1397 'generic':276,530,1056,1198,1304 'genuin':854 'git':364 'glossari':223 'guardrail':1194 'guid':1065 'h1':475 'heavi':1426 'hero':560,802 'hero.jpg':1441 'hero.png':789,882,1402,1415,1438 'ident':944 'illustr':656 
'imag':563,571,577,617,632,644,714,717,773,852,878,886,912,999,1003,1255,1371,1376,1379,1388,1396,1411,1419 'image-read':877,911,1410 'implement':1290 'impli':642 'improv':158,930 'infer':444 'inflat':1301 'input':16,102,122,250,323,390,441,1136,1334 'input-brief.md':206 'inspect':293,704,1360 'instead':274,1046 'instruct':883,920,1416 'integr':1078 'intend':891 'intent':186,425 'intern':418,515 'intro':482 'issu':1109 'iter':585 'job':117,410 'jpeg':1433 'jpg':970,1429 'json':254,403 'jsx':990 'keep':447,764,820,829,844,995,1052,1116 'keyfram':607 'keyword':1317 'know':487 'label':1021 'land':674,1067 'larg':989 'layout':808 'learn':45,215 'leav':1271 'let':1211 'like':111,660 'link':256,516 'liter':273 'live':706 'local':305,309,840,850,1032,1050,1288 'locale-specif':849 'long':1229 'lookup':985 'make':663 'market':279 'match':544,889,1181 'materi':157 'may':394 'media':676,703,1369 'mention':895,1421 'miss':432,1308 'mkdir':769 'mode':722,732,744,1384 'model':695,707,715,718,725,728,735,738,775,1268,1361,1372,1377,1380,1398 'mood':631 'move':518 'multipl':594,1159 'music':734,737,1267 'name':745,747 'near':497 'need':35,84,142,702,809,855,1161,1283,1368 'never':742 'next':523 'nich':1321 'niche-specif':1320 'normal':120,321,1350 'note':10,94,222,259,400,1187 'noun':1323 'o':787,793,1400 'observ':407 'one':346,477,501,558,1152 'one-sent':345 'oper':266,720,730,740,1382 'optim':957 'outcom':542,556 'output':647,1119,1126,1249,1262 'overs':967 'overwrit':369 'p':770 'page':46,131,160,216,226,295,302,319,424,426,437,525,550,838,866,892,994,1010,1018,1040,1044,1068,1077,1160,1166,1281,1289,1298,1302,1315,1392 'page-specif':1391 'param':782 'paramet':746,1362,1374 'parti':31,145,1247 'patch':1048 'path':759 'pattern':469,1165 'patterns.md':204 'piec':1309 'plan':187 'plus':983 'png':1427 'pngs':968 'pnpm':1088 'point':1241 'polish':559 'post':22,27,108 'pre':1107 'pre-exist':1106 'prefer':798,979 'present':614,1252 'preserv':925 'problem':538 'process':650 
'product':4,12,57,97,166,229,272,397 'product-liter':271 'promis':350,479 'prompt':639,698,776,946,950,1178,1399 'proof':85,433,681,688,1311 'prose':280,1305 'prove':341,1173 'provid':8,92,1218 'public':758,1150 'quick':1340 'rang':583 'ratio':751,784,811 'raw':101,169 're':315,949,1036 're-export':314,1035 're-prompt':948 'read':58,176,188,196,374,879,913,1412 'readabl':778 'reader':421,484,520 'real':694,1183 'recommend':436 'record':1110 'refer':198,379,1341 'references/input-brief.md':207 'references/patterns.md':205 'references/voice.md':203 'refin':574 'regener':952 'relat':765 'relev':524 'repo':1100 'repo-wid':1099 'repres':609 'research':11,221 'result':622 'reus':1295 'reusabl':757,964,981,1054,1153,1338 'reveal':937 'review':591 'revis':907 'rewrit':1070 'rough':258,406,569 'rout':312,1033 'rule':790 'run':1079 'saa':1207 'say':1001 'scan':669 'scene':532 'scene-set':531 'schema':710,719,729,739,1381 'scope':450 'search':185,422 'search-int':184 'second':924 'section':505 'sentenc':347 'seo':48,194,234,1094 'set':533 'shape':438 'share':846 'short':471 'show':553,564,582,1244 'showcas':562,804,1277 'signag':941 'simpli':313,1034 'sip':1430 'site':1208 'skill':65,88,162,181,195,684,831,1118,1326 'skill-anycap-blog-production' 'small':1355 'sound':110,265 'sourc':570,905 'source-anycap-ai' 'spars':443 'specif':824,851,1282,1322,1393 'split':814 'spreadsheet':255,401 'src/app':1093 'src/components/seo/contentevidenceblock.tsx':1091 'src/lib/content-evidence.ts':1092 'stabl':1149 'start':63,283,528,534 'state':354 'static':616,998,1254 'status':291,1349 'step':655 'still':359,605,1027 'stock':1236 'stray':940 'structur':15,296,378,504 'style':20,76 'subject':581,928,943 'summari':495 'support':232,430,457,596,623,637,648,749,1221 'syntax':171 'tabl':253,986 'target':420,1080 'task':1333 'text':779,898,939,1175,1273,1386,1423 'text-to-imag':1385 'thesi':427 'time':599,827,1012 'time-bas':598,1011 'togeth':767 'tone':136,376,463,1144 'tool':193 'top':499 
'topic':331,823,834,1316,1352 'topic-agent' 'topic-agent-skills' 'topic-agnost':833 'topic-claude-code' 'topic-cli' 'topic-coding-agent' 'topic-skills' 'topic-specif':822 'touch':1086 'transform':566 'triptych':588 'truli':1367 'turn':5,68 'tutori':44,218 'type':547 'typecheck':1102 'ui':1156 'uncertainti':435 'unknown':360 'unless':806,847,1279 'unrel':370 'url':404 'use':36,86,509,541,687,760,792,867,1235 'user':7,39,70,91,238,371,393 'user-f':69 'user-provid':6 'valid':871,1125,1186,1403 'variat':595 'verif':936,1190 'verifi':285,857,1075 'version':812 'video':621,724,727,1265 'visibl':897,938,1272,1422 'vision':869,1407 'visual':153,635,661,975,1023,1030 'voice.md':202 'want':40,104 'warm':1231 'warm-up':1230 'watermark':781,900,1275 'web/public/content-evidence':771,788,881,915,918,1401,1414,1437,1440 'webp':972 'websit':114,135,261,462,1143 'whether':139,308,922 'whole':1240 'wide':1101 'wire':862,961 'without':675,1069 'work':126,386 'workflow':167,247,382,506,601,628,819,903,1339 'worktre':365 'wrapper':306,842,1293 'write':381,1197,1358 'written':51,665 
'wrong':942","prices":[{"id":"cd95536f-d91c-41b2-9f4f-e0d8bc4ac028","listingId":"735565c4-5b11-4237-9b4c-7452d9990d57","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"anycap-ai","category":"anycap","install_from":"skills.sh"},"createdAt":"2026-04-18T22:22:31.830Z"}],"sources":[{"listingId":"735565c4-5b11-4237-9b4c-7452d9990d57","source":"github","sourceId":"anycap-ai/anycap/anycap-blog-production","sourceUrl":"https://github.com/anycap-ai/anycap/tree/main/skills/anycap-blog-production","isPrimary":false,"firstSeenAt":"2026-04-18T22:22:31.830Z","lastSeenAt":"2026-05-01T12:57:23.636Z"}],"details":{"listingId":"735565c4-5b11-4237-9b4c-7452d9990d57","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"anycap-ai","slug":"anycap-blog-production","github":{"repo":"anycap-ai/anycap","stars":32,"topics":["agent","agent-skills","claude-code","cli","coding-agent","skills"],"license":"mit","html_url":"https://github.com/anycap-ai/anycap","pushed_at":"2026-04-23T15:05:30Z","description":"The capability harness for AI agents. Skills over SDKs.","skill_md_sha":"86101124c2c0a9fdbc7c5c9cdcb970c4fc6cedbe","skill_md_path":"skills/anycap-blog-production/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/anycap-ai/anycap/tree/main/skills/anycap-blog-production"},"layout":"multi","source":"github","category":"anycap","frontmatter":{"name":"anycap-blog-production","license":"MIT","description":"Turn user-provided data, notes, research, product facts, or structured inputs into AnyCap-style blog posts and then enrich those posts with first-party evidence blocks when needed. Use when a user wants a blog, article, tutorial, learn page, or SEO content draft written in the tone of the AnyCap website. 
Best fit when the input is data-first rather than prose-first: spreadsheets, JSON, bullets, notes, source URLs, benchmarks, product facts, or rough research. Trigger on: write blog from data, turn notes into article, blog drafting, article production, learn page drafting, SEO article in AnyCap tone, or requests to convert structured inputs into publish-ready content."},"skills_sh_url":"https://skills.sh/anycap-ai/anycap/anycap-blog-production"},"updatedAt":"2026-05-01T12:57:23.636Z"}}