{"id":"bef1f49f-5896-4a5a-9a03-b8a4122f90a2","shortId":"RK2MhQ","kind":"skill","title":"anycap-media-production","tagline":"Produce media assets using AnyCap: generate images, videos, and music from text or reference inputs, refine images through interactive visual annotation, and deliver finished assets. Covers the full production workflow from concept to delivery across all media types (image, video, music, audio).","description":"# AnyCap Media Production\n\n> **Read this entire file before starting.** It covers the full production workflow across image, video, music, and audio -- including iterative refinement with human feedback.\n\nWorkflow guide for producing media assets with AnyCap. Covers image, video, music, and audio -- from initial generation through iterative refinement to delivery.\n\nThis skill is about **how to produce media**. For CLI command reference and parameters, read the `anycap-cli` skill.\n\n## Prerequisites\n\nAnyCap CLI must be installed and authenticated. Read the `anycap-cli` skill if setup is needed.\n\n## Quick Reference\n\n| Media | Generate | Refine | Typical duration |\n|-------|----------|--------|------------------|\n| Image | `anycap image generate` | Annotate + image-to-image | 5-30s |\n| Video | `anycap video generate` | Re-generate with adjusted params | 30-120s |\n| Music | `anycap music generate` | Re-generate with adjusted prompt | 30-90s |\n| Audio | Coming soon | -- | -- |\n\nAll generation commands follow the same pattern:\n\n```\n1. Discover models    anycap {cap} models\n2. Check schema       anycap {cap} models <model> schema [--mode <mode>]\n3. 
Generate           anycap {cap} generate --model <model> --prompt \"...\" -o output.ext\n```\n\nAlways use `-o` with a descriptive filename.\n\n## Image Production\n\n### Text-to-Image\n\nGenerate an image from a text prompt:\n\n```bash\nanycap image generate \\\n  --prompt \"a cozy home office with a wooden desk, laptop, coffee cup, and plants by the window\" \\\n  --model nano-banana-2 \\\n  -o workspace-v1.png\n```\n\n### Image-to-Image (Edit / Transform)\n\nUse `--mode image-to-image` with a reference image to edit or transform an existing image:\n\n```bash\nanycap image generate \\\n  --prompt \"make it a watercolor painting\" \\\n  --model nano-banana-2 \\\n  --mode image-to-image \\\n  --param images=./photo.png \\\n  -o photo-watercolor.png\n```\n\nReference images can be local paths or URLs. The CLI handles upload automatically.\n\n### Multiple Reference Images\n\nSome models accept multiple reference images for style transfer, composition blending, or subject-driven generation. Use JSON array syntax to pass multiple files:\n\n```bash\n# Combine style from one image with composition from another\nanycap image generate \\\n  --prompt \"merge the architectural style of the first image with the color palette of the second\" \\\n  --model nano-banana-2 \\\n  --mode image-to-image \\\n  --param images='[\"./style-ref.png\",\"./color-ref.png\"]' \\\n  -o blended.png\n\n# Mix local files and URLs\nanycap image generate \\\n  --prompt \"a portrait in the style of the reference images\" \\\n  --model nano-banana-2 \\\n  --mode image-to-image \\\n  --param images='[\"./local-ref.png\",\"https://example.com/style-ref.jpg\"]' \\\n  -o portrait-styled.png\n```\n\nTips:\n- Use JSON array syntax `'[\"path1\",\"path2\"]'` -- repeating `--param images=` overwrites rather than appends.\n- Local file paths inside the array are auto-uploaded, same as single-file mode.\n- Not all models support multiple references. Check the model schema first. 
When unsupported, the model typically uses only the first image.\n\n### Iterative Refinement with Annotation\n\nWhen text prompts alone cannot describe the desired edit precisely (\"move this\", \"remove that specific thing\", \"change the color of this area\"), use the annotation workflow. For the full annotation guide -- including URL/video review, headless access, recording analysis, and multi-user collaboration -- read the `anycap-human-interaction` skill.\n\n```mermaid\ngraph TD\n    A[Start: concept or existing image] --> B{Have an image?}\n    B -->|No| C[Generate initial image]\n    B -->|Yes| D[Human annotates the image]\n    C --> D\n    D --> E[Build prompt from annotations]\n    E --> F[Generate with image-to-image]\n    F --> G[Show result to human]\n    G --> H{Satisfied?}\n    H -->|Yes| I[Done -- deliver final asset]\n    H -->|No| D\n```\n\n#### Step 1: Generate or Use an Existing Image\n\n```bash\nanycap image generate \\\n  --prompt \"a landing page hero banner with mountains and sunrise\" \\\n  --model nano-banana-2 \\\n  -o banner-v1.png\n```\n\n#### Step 2: Annotate\n\nOpen the annotation tool so the human can visually mark regions, describe desired changes, and optionally record a narrated walkthrough. Multiple users can collaborate on the same session in real-time.\n\n**For agent workflows** (non-blocking, recommended):\n\n```bash\nanycap annotate banner-v1.png --no-wait -o banner-v1-annotated.png\n# Returns: {session, url, poll_command, stop_command}\n```\n\nShow the URL to the human and ask them to annotate. Multiple people can open the same URL to collaborate. 
Wait for the human to confirm they are done, then:\n\n```bash\n# Fetch the result (single call, no loop)\nanycap annotate poll --session <session_id>\n\n# If recording exists, analyze it for visual understanding\nanycap actions video-read --file .anycap/annotate/<session_id>/recording.webm \\\n  --instruction \"Describe what changes the user wants\"\n\n# Clean up\nanycap annotate stop --session <session_id>\n```\n\n**For interactive sessions** (human is at the terminal):\n\n```bash\nanycap annotate banner-v1.png -o banner-v1-annotated.png\n# Blocks until Done is clicked, then outputs annotation JSON\n```\n\nThe annotation tool offers four drawing tools: Rectangle (`R`), Arrow (`A`), Point (`P`), and Freehand (`F`). Each annotation gets a numbered marker and a text label.\n\n#### Step 3: Build a Prompt from Annotations\n\nThe annotation output contains structured data. Translate each label into a coherent prompt:\n\n```json\n{\n  \"annotations\": [\n    {\"id\": 1, \"type\": \"rect\", \"label\": \"Replace with a standing desk\"},\n    {\"id\": 2, \"type\": \"point\", \"label\": \"Add a cat sitting here\"},\n    {\"id\": 3, \"type\": \"freehand\", \"label\": \"This area should be a bookshelf\"}\n  ]\n}\n```\n\nPrompt: \"#1: Replace the desk with a standing desk. #2: Add a cat sitting at the marked position. #3: Transform the outlined area into a bookshelf. Keep all other elements unchanged.\"\n\nRules:\n- Reference each annotation by its number (#1, #2, etc.)\n- Include the human's exact label text\n- Add \"Keep all other elements unchanged\" to preserve unmodified areas\n\n#### Step 4: Apply the Edit\n\nUse the **annotated image** (with visual markers) as the reference:\n\n```bash\nanycap image generate \\\n  --prompt \"#1: Replace the desk with a standing desk. #2: Add a cat. 
Keep all other elements unchanged.\" \\\n  --model nano-banana-2 \\\n  --mode image-to-image \\\n  --param images=./banner-v1-annotated.png \\\n  -o banner-v2.png\n```\n\n#### Step 5: Iterate\n\nIf the human wants more changes, use the latest version as input and repeat from Step 2. Use versioned filenames (`v1`, `v2`, `v3`) so the human can compare and revert.\n\n### Image Tips\n\n- **Start broad, refine narrow.** First generation nails the composition. Annotation iterations handle targeted adjustments.\n- **One thing at a time.** If multi-region edits produce poor results, try one annotation per pass.\n- **Annotated image only.** Pass only the annotated image as the reference. Most models understand numbered markers and remove them from the output.\n\n## Video Production\n\n### Text-to-Video\n\n```bash\nanycap video generate \\\n  --prompt \"a cat walking on the beach at sunset, cinematic, slow motion\" \\\n  --model veo-3.1 \\\n  -o cat-beach.mp4\n```\n\n### Image-to-Video\n\nAnimate a still image:\n\n```bash\nanycap video generate \\\n  --prompt \"gentle camera pan across the landscape, wind blowing through trees\" \\\n  --model seedance-1.5-pro \\\n  --mode image-to-video \\\n  --param images=./landscape.png \\\n  -o landscape-animated.mp4\n```\n\nThis pairs well with image generation: generate a still image first, then animate it.\n\n### Video Production Workflow\n\n```mermaid\ngraph LR\n    A[Text prompt] --> B[Generate image]\n    B --> C{Animate?}\n    C -->|Yes| D[image-to-video]\n    C -->|No| E[Done]\n    A --> F[text-to-video]\n    F --> E\n    D --> E\n```\n\nFor best results with image-to-video:\n1. Generate a high-quality still image first (iterate with annotation if needed)\n2. Use the final image as the reference for video generation\n3. Keep the video prompt focused on motion and camera movement, not scene description\n\n### Video Tips\n\n- Video generation takes 30-120s. 
Use async execution when your runtime supports it.\n- Check model schema for supported parameters (`aspect_ratio`, `duration`, etc.).\n- Different models excel at different styles. Check available models with `anycap video models`.\n\n## Music Production\n\n### Text-to-Music\n\n```bash\nanycap music generate \\\n  --prompt \"upbeat electronic track with synth leads and driving bass, 120 BPM\" \\\n  --model suno-v5 \\\n  -o background-track.mp3\n```\n\nMusic generation may return multiple clips. Extract the first:\n\n```bash\nanycap music generate --prompt \"...\" --model suno-v5 -o track.mp3 \\\n  | jq -r '.outputs[0].local_path'\n```\n\n### Music Tips\n\n- Be specific about genre, tempo, instruments, and mood in prompts.\n- Music generation takes 30-90s. Use async execution when possible.\n- Check model parameters via schema -- some models support `duration`, `genre`, `tags`.\n\n## Audio Production\n\nAudio generation is on the roadmap and not yet available. Audio **understanding** (analysis) is available via `anycap actions audio-read`.\n\n## Delivery\n\nWhen the asset is ready, deliver using the appropriate method:\n\n```bash\n# Share via Drive (generates a shareable link)\nanycap drive upload banner-final.png\nanycap drive share banner-final.png\n\n# Publish as a web page\nanycap page deploy ./site-directory\n```\n\n## Multi-Media Production Example\n\nA complete workflow producing a promotional package:\n\n```bash\n# 1. Generate hero image\nanycap image generate \\\n  --prompt \"modern SaaS dashboard with data visualizations, dark mode, purple accents\" \\\n  --model nano-banana-2 -o hero-v1.png\n\n# 2. Refine via annotation (agent asks human to mark changes)\nanycap annotate hero-v1.png --no-wait -o hero-v1-annotated.png\n# ... human annotates, agent polls result ...\nanycap image generate \\\n  --prompt \"#1: Make the chart larger. 
#2: Change accent color to blue.\" \\\n  --model nano-banana-2 --mode image-to-image \\\n  --param images=./hero-v1-annotated.png -o hero-v2.png\n\n# 3. Create an animated version\nanycap video generate \\\n  --prompt \"slow zoom into the dashboard, data points animate in sequentially\" \\\n  --model seedance-1.5-pro --mode image-to-video \\\n  --param images=./hero-v2.png -o hero-animation.mp4\n\n# 4. Generate background music\nanycap music generate \\\n  --prompt \"ambient tech background music, minimal, clean, 90 BPM\" \\\n  --model suno-v5 -o background-music.mp3\n\n# 5. Deliver\nanycap drive upload hero-v2.png hero-animation.mp4 background-music.mp3\n```","tags":["anycap","media","production","anycap-ai","agent","agent-skills","claude-code","cli","coding-agent","skills"],"capabilities":["skill","source-anycap-ai","skill-anycap-media-production","topic-agent","topic-agent-skills","topic-claude-code","topic-cli","topic-coding-agent","topic-skills"],"categories":["anycap"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/anycap-ai/anycap/anycap-media-production","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add anycap-ai/anycap","source_repo":"https://github.com/anycap-ai/anycap","install_from":"skills.sh"}},"qualityScore":"0.466","qualityRationale":"deterministic score 0.47 from registry signals: · indexed on github topic:agent-skills · 32 github stars · SKILL.md body (10,576 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-05-01T12:57:24.058Z","embedding":null,"createdAt":"2026-04-18T22:22:34.848Z","updatedAt":"2026-05-01T12:57:24.058Z","lastSeenAt":"2026-05-01T12:57:24.058Z","tsv":"'-1.5':1102,1517 '-120':162,1222 '-3.1':1071 '-30':149 
'-90':175,1328 '/banner-v1-annotated.png':956 '/color-ref.png':388 '/hero-v1-annotated.png':1493 '/hero-v2.png':1526 '/landscape.png':1111 '/local-ref.png':421 '/photo.png':303 '/recording.webm':737 '/site-directory':1404 '/style-ref.jpg':424 '/style-ref.png':387 '0':1309 '1':187,594,819,850,887,927,1177,1418,1470 '120':1275 '2':193,255,295,379,413,619,623,829,858,888,935,948,978,1191,1440,1443,1475,1485 '3':201,797,839,867,1202,1496 '30':161,174,1221,1327 '4':908,1532 '5':148,960,1557 '90':1546 'accent':1435,1477 'accept':324 'access':517 'across':39,60,1093 'action':731,1365 'add':833,859,897,936 'adjust':159,172,1006 'agent':658,1447,1463 'alon':485 'alway':210 'ambient':1540 'analysi':519,1360 'analyz':725 'anim':1081,1115,1131,1147,1499,1512,1530,1565 'annot':25,143,481,506,511,555,565,624,627,666,690,719,748,761,770,773,787,802,804,817,883,914,1002,1022,1025,1031,1188,1446,1454,1462 'anoth':355 'anycap':2,9,45,79,111,115,125,140,152,165,190,196,203,231,282,356,396,528,602,665,718,730,747,760,923,1054,1086,1252,1262,1296,1364,1388,1392,1401,1422,1453,1466,1501,1536,1559 'anycap-c':110,124 'anycap-human-interact':527 'anycap-media-product':1 'anycap/annotate':736 'append':440 'appli':909 'appropri':1378 'architectur':362 'area':503,844,871,906 'array':340,430,446 'arrow':780 'ask':687,1448 'aspect':1238 'asset':7,29,77,589,1372 'async':1225,1331 'audio':65,85,177,1346,1348,1358,1367 'audio-read':1366 'authent':121 'auto':449 'auto-upload':448 'automat':318 'avail':1249,1357,1362 'b':541,545,551,1142,1145 'background':1283,1534,1542,1554,1568 'background-mus':1553,1567 'background-track':1282 'banana':254,294,378,412,618,947,1439,1484 'banner':610 'banner-final.png':1391,1395 'banner-v1-annotated.png':672,764 'banner-v1.png':621,667,762 'banner-v2.png':958 'bash':230,281,346,601,664,710,759,922,1053,1085,1261,1295,1380,1417 'bass':1274 'beach':1063,1075 'best':1170 'blend':332 'blended.png':390 'block':662,765 'blow':1097 'blue':1480 'bookshelf':848,874 
'bpm':1276,1547 'broad':994 'build':562,798 'c':547,558,1146,1148,1155 'call':715 'camera':1091,1211 'cannot':486 'cap':191,197,204 'cat':835,861,938,1059,1074 'cat-beach':1073 'chang':498,638,741,967,1452,1476 'chart':1473 'check':194,463,1232,1248,1335 'cinemat':1066 'clean':745,1545 'cli':103,112,116,126,315 'click':768 'clip':1291 'coffe':244 'coher':814 'collabor':524,648,699 'color':370,500,1478 'combin':347,1121 'come':178 'command':104,182,677,679 'compar':988 'complet':1411 'composit':331,353,1001 'concept':36,537 'confirm':705 'contain':806 'cover':30,55,80 'cozi':236 'creat':1497 'cup':245 'd':553,559,560,592,1150,1167 'dark':1432 'dashboard':1428,1509 'data':808,1430,1510 'deliv':27,587,1375,1558 'deliveri':38,93,1369 'deploy':1403 'describ':487,636,739 'descript':215,1215 'desir':489,637 'desk':242,827,853,857,930,934 'differ':1242,1246 'discov':188 'done':586,708,767,1158 'drive':1273,1383,1389,1393,1560 'driven':336 'durat':138,1240,1343 'e':561,566,1157,1166,1168 'edit':262,275,490,911,1016 'electron':1267 'element':878,901,942 'entir':50 'etc':889,1241 'exact':894 'exampl':1409 'example.com':423 'example.com/style-ref.jpg':422 'excel':1244 'execut':1226,1332 'exist':279,539,599,724 'extract':1292 'f':567,574,785,1160,1165 'feedback':71 'fetch':711 'file':51,345,393,442,455,735 'filenam':216,980 'final':588,1194 'finish':28 'first':366,467,476,997,1129,1185,1294 'focus':1207 'follow':183 'four':776 'freehand':784,841 'full':32,57,510 'g':575,580 'generat':10,88,135,142,154,157,167,170,181,202,205,223,233,284,337,358,398,548,568,595,604,925,998,1056,1088,1124,1125,1143,1178,1201,1219,1264,1287,1298,1325,1349,1384,1419,1424,1468,1503,1533,1538 'genr':1317,1344 'gentl':1090 'get':788 'graph':533,1137 'guid':73,512 'h':581,583,590 'handl':316,1004 'headless':516 'hero':609,1420,1529,1564 'hero-anim':1528,1563 'hero-v1-annotated.png':1460 'hero-v1.png':1442,1455 'hero-v2.png':1495,1562 'high':1181 'high-qual':1180 'home':237 
'human':70,529,554,579,631,685,703,754,892,964,986,1449,1461 'id':818,828,838 'imag':11,21,43,61,81,139,141,145,147,217,222,225,232,259,261,267,269,273,280,283,298,300,302,307,321,327,351,357,367,382,384,386,397,408,416,418,420,436,477,540,544,550,557,571,573,600,603,915,924,951,953,955,991,1026,1032,1078,1084,1106,1110,1123,1128,1144,1152,1174,1184,1195,1421,1423,1467,1488,1490,1492,1521,1525 'image-to-imag':144,258,266,297,381,415,570,950,1487 'image-to-video':1077,1105,1151,1173,1520 'includ':66,513,890 'initi':87,549 'input':19,973 'insid':444 'instal':119 'instruct':738 'instrument':1319 'interact':23,530,752 'iter':67,90,478,961,1003,1186 'jq':1306 'json':339,429,771,816 'keep':875,898,939,1203 'label':795,811,822,832,842,895 'land':607 'landscap':1095,1114 'landscape-anim':1113 'laptop':243 'larger':1474 'latest':970 'lead':1271 'link':1387 'local':310,392,441,1310 'loop':717 'lr':1138 'make':286,1471 'mark':634,865,1451 'marker':791,918,1040 'may':1288 'media':3,6,41,46,76,101,134,1407 'merg':360 'mermaid':532,1136 'method':1379 'minim':1544 'mix':391 'mode':200,265,296,380,414,456,949,1104,1433,1486,1519 'model':189,192,198,206,251,291,323,375,409,459,465,471,615,944,1037,1069,1100,1233,1243,1250,1254,1277,1300,1336,1341,1436,1481,1515,1548 'modern':1426 'mood':1321 'motion':1068,1209 'mountain':612 'move':492 'movement':1212 'mp3':1285,1556,1570 'mp4':1076,1116,1531,1566 'multi':522,1014,1406 'multi-media':1405 'multi-region':1013 'multi-us':521 'multipl':319,325,344,461,645,691,1290 'music':14,63,83,164,166,1255,1260,1263,1286,1297,1312,1324,1535,1537,1543,1555,1569 'must':117 'nail':999 'nano':253,293,377,411,617,946,1438,1483 'nano-banana':252,292,376,410,616,945,1437,1482 'narrat':643 'narrow':996 'need':131,1190 'no-wait':668,1456 'non':661 'non-block':660 'number':790,886,1039 'o':208,212,256,304,389,425,620,671,763,957,1072,1112,1281,1304,1441,1459,1494,1527,1552 'offic':238 'one':350,1007,1021 'open':625,694 'option':640 'outlin':870 
'output':769,805,1046,1308 'output.ext':209 'overwrit':437 'p':783 'packag':1416 'page':608,1400,1402 'paint':290 'palett':371 'pan':1092 'param':160,301,385,419,435,954,1109,1491,1524 'paramet':107,1237,1337 'pass':343,1024,1028 'path':311,443,1311 'path1':432 'path2':433 'pattern':186 'peopl':692 'per':1023 'photo-watercolor.png':305 'plant':247 'point':782,831,1511 'poll':676,720,1464 'poor':1018 'portrait':401 'portrait-styled.png':426 'posit':866 'possibl':1334 'power':1119 'precis':491 'prerequisit':114 'preserv':904 'pro':1103,1518 'produc':5,75,100,1017,1413 'product':4,33,47,58,218,1048,1134,1256,1347,1408 'promot':1415 'prompt':173,207,229,234,285,359,399,484,563,605,800,815,849,926,1057,1089,1141,1206,1265,1299,1323,1425,1469,1504,1539 'publish':1396 'purpl':1434 'qualiti':1182 'quick':132 'r':779,1307 'rather':438 'ratio':1239 're':156,169 're-gener':155,168 'read':48,108,122,525,734,1368 'readi':1374 'real':655 'real-tim':654 'recommend':663 'record':518,641,723 'rect':821 'rectangl':778 'refer':18,105,133,272,306,320,326,407,462,881,921,1035,1198 'refin':20,68,91,136,479,995,1444 'region':635,1015 'remov':494,1042 'repeat':434,975 'replac':823,851,928 'result':577,713,1019,1171,1465 'return':673,1289 'revert':990 'review':515 'roadmap':1353 'rule':880 'runtim':1229 'saa':1427 'satisfi':582 'scene':1214 'schema':195,199,466,1234,1339 'second':374 'seedanc':1101,1516 'sequenti':1514 'session':652,674,721,750,753 'setup':129 'share':1381,1394 'shareabl':1386 'show':576,680 'singl':454,714 'single-fil':453 'sit':836,862 'skill':95,113,127,531 'skill-anycap-media-production' 'slow':1067,1505 'soon':179 'source-anycap-ai' 'specif':496,1315 'stand':826,856,933 'start':53,536,993 'step':593,622,796,907,959,977 'still':1083,1127,1183 'stop':678,749 'structur':807 'style':329,348,363,404,1247 'subject':335 'subject-driven':334 'suno':1279,1302,1550 'suno-v5':1278,1301,1549 'sunris':614 'sunset':1065 'support':460,775,1230,1236,1342 'syntax':341,431 'synth':1270 
'tag':1345 'take':1220,1326 'target':1005 'td':534 'tech':1541 'tempo':1318 'termin':758 'text':16,220,228,483,794,896,1050,1140,1162,1258 'text-to-imag':219 'text-to-mus':1257 'text-to-video':1049,1161 'thing':497,1008 'time':656,1011 'tip':427,992,1217,1313 'tool':628,774,777 'topic-agent' 'topic-agent-skills' 'topic-claude-code' 'topic-cli' 'topic-coding-agent' 'topic-skills' 'track':1268,1284 'track.mp3':1305 'transfer':330 'transform':263,277,868 'translat':809 'tree':1099 'tri':1020 'type':42,820,830,840 'typic':137,472 'unchang':879,902,943 'understand':729,1038,1359 'unmodifi':905 'unsupport':469 'upbeat':1266 'upload':317,450,1390,1561 'url':313,395,675,682,697 'url/video':514 'use':8,211,264,338,428,473,504,597,912,968,1192,1224,1330,1376 'user':523,646,743 'v1':981 'v2':982 'v3':983 'v5':1280,1303,1551 'veo':1070 'version':971,979,1500 'via':1338,1363,1382,1445 'video':12,44,62,82,151,153,733,1047,1052,1055,1080,1087,1108,1133,1154,1164,1176,1200,1205,1216,1218,1253,1502,1523 'video-read':732 'visual':24,633,728,917,1431 'wait':670,700,1458 'walk':1060 'walkthrough':644 'want':744,965 'watercolor':289 'web':1399 'wind':1096 'window':250 'wooden':241 'workflow':34,59,72,507,659,1135,1412 'workspace-v1.png':257 'yes':552,584,1149 'yet':1356 
'zoom':1506","prices":[{"id":"4a1edc98-353b-469a-b75d-07b27af0f4ae","listingId":"bef1f49f-5896-4a5a-9a03-b8a4122f90a2","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"anycap-ai","category":"anycap","install_from":"skills.sh"},"createdAt":"2026-04-18T22:22:34.848Z"}],"sources":[{"listingId":"bef1f49f-5896-4a5a-9a03-b8a4122f90a2","source":"github","sourceId":"anycap-ai/anycap/anycap-media-production","sourceUrl":"https://github.com/anycap-ai/anycap/tree/main/skills/anycap-media-production","isPrimary":false,"firstSeenAt":"2026-04-18T22:22:34.848Z","lastSeenAt":"2026-05-01T12:57:24.058Z"}],"details":{"listingId":"bef1f49f-5896-4a5a-9a03-b8a4122f90a2","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"anycap-ai","slug":"anycap-media-production","github":{"repo":"anycap-ai/anycap","stars":32,"topics":["agent","agent-skills","claude-code","cli","coding-agent","skills"],"license":"mit","html_url":"https://github.com/anycap-ai/anycap","pushed_at":"2026-04-23T15:05:30Z","description":"The capability harness for AI agents. Skills over SDKs.","skill_md_sha":"79b485f479a6e6f2722231cfe194a709a5ff0cf1","skill_md_path":"skills/anycap-media-production/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/anycap-ai/anycap/tree/main/skills/anycap-media-production"},"layout":"multi","source":"github","category":"anycap","frontmatter":{"name":"anycap-media-production","license":"MIT","description":"Produce media assets using AnyCap: generate images, videos, and music from text or reference inputs, refine images through interactive visual annotation, and deliver finished assets. Covers the full production workflow from concept to delivery across all media types (image, video, music, audio). 
Use when creating images, videos, music, or any visual/audio content -- including iterative refinement with human feedback. Also use for image-to-image transformation, video generation from images, and annotation-driven precise edits. Trigger on: media production, asset generation, generate image/video/music, create visual content, produce assets, iterative image editing, annotate and refine, creative workflow, content creation, or any task requiring AI-generated media output.","compatibility":"Requires anycap CLI binary and internet access. Works with any agent that supports shell commands."},"skills_sh_url":"https://skills.sh/anycap-ai/anycap/anycap-media-production"},"updatedAt":"2026-05-01T12:57:24.058Z"}}