{"id":"ebb2617d-e09c-40ff-9ccc-56d31564f15f","shortId":"MuNqjt","kind":"skill","title":"gpt-image","tagline":"General-purpose image generation and reference-image editing via OpenAI GPT Image 2 (`gpt-image-2`). Wraps the two official endpoints from the OpenAI cookbook — `/v1/images/generations` for text-to-image and `/v1/images/edits` for reference-image edits (including alpha-channel ma","description":"# gpt-image\n\nGeneral image generation/editing CLI for OpenAI's `gpt-image-2`. Designed for agents: all API parameters are first-class flags, defaults are sane, output is a file on disk. The skill auto-loads when Claude detects an image-generation intent — no slash command needed.\n\n## One-line usage\n\n```bash\n# As a Claude Code plugin (installed via /plugin install):\nuv run \"$CLAUDE_PLUGIN_ROOT/skills/gpt-image/scripts/generate.py\" -p \"PROMPT\" [-f OUT] [-i REF...] [-m MASK] [options]\n\n# As a direct CLI (installed via uvx or uv tool install):\nuvx --from git+https://github.com/wuyoscar/gpt_image_2_skill gpt-image -p \"PROMPT\" [-f OUT] [-i REF...] [-m MASK] [options]\n\n# Or once installed globally:\ngpt-image -p \"PROMPT\" [-f OUT] [-i REF...] [-m MASK] [options]\n```\n\nReads `OPENAI_API_KEY` from env. Writes to `OUT` (or auto-named `YYYY-MM-DD-HH-MM-SS-<slug>.png` in `./fig/` or cwd). Prints output path(s) on stdout. Exit 0 on success, 1 on API error, 2 on bad args / missing key.\n\n## CLI flags (complete reference)\n\n| Flag | Type / Values | Default | Applies to | Description |\n|------|---------------|---------|------------|-------------|\n| `-p, --prompt` | str | — required | both | Text prompt for generation, or edit instruction. |\n| `-f, --file` | path | auto | both | Output path. Auto-gen if omitted. Extension follows `--format`. |\n| `-i, --image` | path (repeatable) | — | edits | Reference image(s). Presence routes through `/v1/images/edits` (the official endpoint per the OpenAI cookbook). 
|\n| `-m, --mask` | path | — | edits | Alpha-channel PNG mask. Opaque pixels are preserved, transparent pixels are regenerated. Edits endpoint only — requires `-i`. |\n| `--input-fidelity` | `low` \\| `high` | — | edits | Controls how closely the output tracks the reference. Supported on `gpt-image-1` and `gpt-image-1.5`; silently ignored by `gpt-image-2` (already high fidelity by default). |\n| `--model` | str | `gpt-image-2` | both | Model ID. Fallbacks: `gpt-image-1.5`, `gpt-image-1`, `gpt-image-1-mini`. |\n| `--size` | literal / shortcut | `1024x1024` | both | Literals: `1024x1024`, `1536x1024`, `1024x1536`, `2048x2048`, `2048x1152`, `3840x2160`, `2160x3840`, or any 16-px multiple up to 3840 max edge (3:1 ratio cap, 655k–8.3M total pixels). Shortcuts: `1k` `2k` `4k` `portrait` `landscape` `square` `wide` `tall`. |\n| `--quality` | `auto` \\| `low` \\| `medium` \\| `high` | `high` | both | Cost roughly 10× per step. `low` ≈ $0.005/img, `medium` ≈ $0.04, `high` ≈ $0.17. CLI default stays `high`, but agents should choose deliberately: `low` for cheap drafts / large sweeps, `medium` for normal exploration, `high` for final assets, typography, Chinese text, diagrams, or anything shipping-facing. |\n| `-n, --n` | int | `1` | both | Number of images to return. `>1` suffixes filenames `_0`, `_1`, … |\n| `--background` | `auto` \\| `opaque` | API default | generations only | `opaque` disables transparent background. |\n| `--moderation` | `auto` \\| `low` | API default | generations only | `low` relaxes content filter where policy allows. |\n| `--format` | `png` \\| `jpeg` \\| `webp` | `png` | both | Response encoding. |\n| `--compression` | int 0–100 | — | both | JPEG/WebP compression. Ignored for PNG. |\n| `--user` | str | — | both | Optional end-user identifier for OpenAI abuse tracking. |\n\n## Budget / quality policy for agents\n\nUse `--quality` as the budget dial. There is no separate `--budget` flag in this CLI.\n\n- `low` — cheap draft mode. 
Use for broad prompt exploration, collecting many variants, gallery mining, rough composition checks, or when the user explicitly wants low cost / fast iteration.\n- `medium` — balanced mode. Use for normal one-off exploration, style probing, or cases where readability matters but the output is not yet final.\n- `high` — shipping / report mode. Use for Chinese text, posters, infographics, paper figures, dense labels, multi-panel layouts, banners, or any asset likely to be kept.\n\nRule of thumb for autonomous agents:\n- If the user asks for **many variants**, **cheap**, **draft**, **explore**, or **collect**, start with `low`.\n- If the user asks for **polished but still exploratory**, use `medium`.\n- If the user asks for **final**, **fancy**, **hero**, **paper figure**, **poster**, **diagram**, or **exact text**, use `high`.\n- If unsure, keep the CLI default `high` for text-heavy / delivery-facing outputs; otherwise prefer `medium` during exploration.\n\n## Endpoint selection (official OpenAI cookbook pattern)\n\n| Mode | Trigger | Endpoint |\n|------|---------|----------|\n| Generate from prompt | no `-i` | `POST /v1/images/generations` (JSON body) |\n| Edit / reference-based | `-i` one or more times | `POST /v1/images/edits` (multipart form) |\n| Inpaint with mask | `-i` + `-m` | `POST /v1/images/edits` with a `mask` file |\n\nBoth endpoints accept `gpt-image-2` as of April 2026 — verified against OpenAI's [official cookbook prompting guide](https://github.com/openai/openai-cookbook/blob/main/examples/multimodal/image-gen-models-prompting-guide.ipynb). The skill uses the official `openai` Python SDK under the hood (`from openai import OpenAI; client.images.generate(...)` / `client.images.edit(...)`) — the CLI is a thin wrapper that exposes every SDK parameter as a flag.\n\n**Content policy:** gpt-image-2 enforces its own content rules on the edits endpoint. Edits involving a real person's likeness are usually refused (400 error with a moderation message). 
The skill surfaces the response body verbatim on stderr and exits 1.\n\n## Canonical examples\n\n```bash\n# 1. Vanilla generate, 1K square, default quality (high)\nuv run generate.py -p \"a photorealistic convenience store at 10pm\"\n\n# 2. Portrait poster with exact Chinese text, high quality\nuv run generate.py \\\n  -p 'Design a 3:4 tea poster. Exact copy: \"山川茶事\" / \"冷泡系列\" / \"中杯 16 元\"' \\\n  --size portrait --quality high -f poster.png\n\n# 3. Four variants, transparent background disabled, webp\nuv run generate.py -p \"isometric furniture, minimalist\" \\\n  -n 4 --background opaque --format webp --compression 85\n\n# 4. Edit / colorize existing image\nuv run generate.py -p \"colorize this manga page and translate to Chinese\" \\\n  -i page.jpg -f colored.png\n\n# 5. Multi-reference brand collab\nuv run generate.py -p \"77 (the cat) × KFC employee poster\" \\\n  -i cat.png -i kfc_logo.png -f collab.png --size portrait\n\n# 6. Masked inpaint — replace sky only\nuv run generate.py -p \"replace sky with aurora, keep foreground intact\" \\\n  -i photo.jpg -m sky_mask.png -f aurora.png --quality high\n\n# 7. 4K widescreen render\nuv run generate.py -p \"cinematic Shanghai skyline at dusk\" \\\n  --size 4k --quality high -f skyline.png\n```\n\n## Response handling\n\n- API returns `data: [{ b64_json: \"…\" }]` by default; the script decodes base64 and writes bytes.\n- If the API returns `url` instead, the script GETs the URL and writes the downloaded bytes.\n- With `-n > 1`, files are suffixed: `out.png` → `out_0.png`, `out_1.png`, …\n\n## Error surface\n\n| Condition | Exit | stderr |\n|-----------|------|--------|\n| `OPENAI_API_KEY` unset | 2 | `error: OPENAI_API_KEY not set. 
...` |\n| `--mask` without `-i` | 2 | `error: --mask requires --image (edits endpoint only)` |\n| `-i` or `-m` path missing | 2 | `error: --image not found: PATH` |\n| OpenAI returns non-2xx | 1 | `error: <status> from OpenAI: <body>` (first 2000 chars of response) |\n| Response has no image data | 1 | `error: no image data in response: <json>` |\n\nWhen an agent hits exit 1, it should surface the response body verbatim — it usually names the problem (rate limit, moderation block, invalid size).\n\n## Size picking guide\n\n| Intent | Size |\n|--------|------|\n| Default / social square | `1024x1024` (1k) |\n| Mobile screenshot, portrait poster, beauty/skincare | `1024x1536` (portrait) |\n| Landscape photo, gameplay screenshot | `1536x1024` (landscape) |\n| Hi-res print, paper figure | `2048x2048` (2k) |\n| Widescreen cinematic, dashboard hero | `3840x2160` (4k) |\n| Tall story banner, vertical video thumbnail | `2160x3840` (tall) |\n\n## Prompt-craft references (optional, load only when needed)\n\nThese are not required to use the script — they exist for prompt-quality uplift when the user's intent needs more structure than a one-liner.\n\n- `references/craft.md` — 12 cross-cutting principles: exact-text-in-quotes, aspect-ratio-first, camera/shot language, scene density, style anchoring, negation, reference-based unlocks, dense Chinese text, three-glances test.\n- `references/gallery.md` — 56 community-curated templates across 8 categories: photography, games, UI/UX, typography, infographics, character consistency, editing, collage. Each entry keeps its original `Source: @handle` attribution.\n- `references/openai-cookbook.md` — verbatim Markdown capture of OpenAI's [official GPT Image prompting guide](https://github.com/openai/openai-cookbook/blob/main/examples/multimodal/image-gen-models-prompting-guide.ipynb). 
Load this when the user asks about OpenAI's own parameter semantics, wants a use-case beyond what our gallery covers (UI mockups, pitch-deck slides, scientific diagrams, virtual try-on, billboard mockups, translation edits), or needs the authoritative parameter-coverage table.\n\nLoad a reference file only when the user's request signals that category (e.g. asks for a poster → load `typography` section of gallery; asks about rendering Chinese → load craft.md sections 1, 7, 10; asks \"how does the edits endpoint actually work?\" → load `openai-cookbook.md`).\n\n## Attribution\n\nPrompt patterns curated from [`ZeroLu/awesome-gpt-image`](https://github.com/ZeroLu/awesome-gpt-image) under CC BY 4.0. Individual `@handle` attributions preserved per-entry in `references/gallery.md`.","tags":["gpt","image","skill","wuyoscar","agent-skills","gpt-image-2","image-generation"],"capabilities":["skill","source-wuyoscar","skill-gpt-image","topic-agent-skills","topic-gpt-image-2","topic-image-generation"],"categories":["gpt_image_2_skill"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/wuyoscar/gpt_image_2_skill/gpt-image","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add wuyoscar/gpt_image_2_skill","source_repo":"https://github.com/wuyoscar/gpt_image_2_skill","install_from":"skills.sh"}},"qualityScore":"0.490","qualityRationale":"deterministic score 0.49 from registry signals: · indexed on github topic:agent-skills · 80 github stars · SKILL.md body (9,646 
chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-23T18:56:48.098Z","embedding":null,"createdAt":"2026-04-23T07:01:22.735Z","updatedAt":"2026-04-23T18:56:48.098Z","lastSeenAt":"2026-04-23T18:56:48.098Z","tsv":"'/fig':196 '/img':413 '/openai/openai-cookbook/blob/main/examples/multimodal/image-gen-models-prompting-guide.ipynb).':749,1270 '/plugin':113 '/v1/images/edits':39,268,714,723 '/v1/images/generations':32,701 '/wuyoscar/gpt_image_2_skill':145 '/zerolu/awesome-gpt-image)':1368 '0':206,463,500 '0.005':412 '0.04':415 '0.17':417 '1':209,317,352,356,382,453,460,464,820,824,1020,1070,1084,1096,1347 '1.5':322,348 '10':408,1349 '100':501 '1024x1024':361,364,1123 '1024x1536':366,1130 '10pm':840 '12':1198 '1536x1024':365,1136 '16':373,866 '1k':391,827,1124 '2':18,22,63,213,329,340,734,786,841,1036,1046,1059 '2000':1075 '2026':738 '2048x1152':368 '2048x2048':367,1144 '2160x3840':370,1158 '2k':392,842,1145 '2xx':1069 '3':381,857,874 '3840':378 '3840x2160':369,1150 '4':858,875,890,897 '4.0':1372 '400':803 '4k':393,968,981,1151 '5':918 '56':1231 '6':942 '655k':385 '7':967,1348 '77':928 '8':1237 '8.3':386 '85':896 'abus':518 'accept':730 'across':1236 'actual':1356 'agent':66,423,524,622,1093 'allow':489 'alpha':47,281 'alpha-channel':46,280 'alreadi':330 'anchor':1217 'anyth':446 'api':68,176,211,468,479,988,1004,1033,1039 'appli':227 'april':737 'arg':216 'ask':626,641,652,1276,1331,1340,1350 'aspect':1209 'aspect-ratio-first':1208 'asset':440,612 'attribut':1255,1360,1375 'aurora':955 'aurora.png':964 'authorit':1312 'auto':87,185,245,250,400,466,477,829 'auto-gen':249 'auto-load':86 'auto-nam':184 'autonom':621 'b64':991 'background':465,475,879,891 'bad':215 'balanc':568 'banner':609,1154 
'base':707,1221 'base64':998 'bash':105,823 'beauty/skincare':1129 'beyond':1288 'billboard':1305 'block':1112 'bodi':703,814,1102 'brand':922 'broad':546 'budget':520,529,535 'byte':1001,1017 'camera/shot':1212 'canon':821 'cap':384 'captur':1259 'case':580,1287 'cat':930 'cat.png':935 'categori':1238,1329 'cc':1370 'channel':48,282 'char':1076 'charact':1244 'cheap':429,541,630 'check':556 'chines':442,597,847,913,1224,1343 'choos':425 'cinemat':975,1147 'class':73 'claud':90,108,117 'cli':56,132,219,418,539,670,768 'client.images.edit':766 'client.images.generate':765 'close':306 'code':109 'collab':923 'collab.png':939 'collag':1247 'collect':549,634 'color':899,906 'colored.png':917 'command':99 'communiti':1233 'community-cur':1232 'complet':221 'composit':555 'compress':498,504,895 'condit':1029 'consist':1245 'content':485,781,790 'control':304 'conveni':837 'cookbook':31,275,690,744 'copi':862 'cost':406,564 'cover':1292 'coverag':1315 'craft':1162 'craft.md':1345 'cross':1200 'cross-cut':1199 'curat':1234,1363 'cut':1201 'cwd':198 'dashboard':1148 'data':990,1083,1088 'dd':190 'deck':1297 'decod':997 'default':75,226,334,419,469,480,671,994,1120 'deliber':426 'deliveri':678 'delivery-fac':677 'dens':603,1223 'densiti':1215 'descript':229 'design':64,855 'detect':91 'diagram':444,660,1300 'dial':530 'direct':131 'disabl':473,880 'disk':83 'download':1016 'draft':430,542,631 'dusk':979 'e.g':1330 'edg':380 'edit':13,44,240,261,279,293,303,704,794,800,898,1051,1246,1308,1354 'employe':932 'encod':497 'end':513 'end-us':512 'endpoint':27,271,294,686,694,729,795,1052,1355 'enforc':787 'entri':1249,1379 'env':179 'error':212,804,1027,1037,1047,1060,1071,1085 'everi':775 'exact':662,846,861,1204 'exact-text-in-quot':1203 'exampl':822 'exist':900,1178 'exit':205,819,1030,1095 'explicit':561 'explor':436,548,576,632,685 'exploratori':646 'expos':774 'extens':254 'f':122,151,167,242,872,916,938,963,984 'face':449,679 'fallback':344 'fanci':655 'fast':565 
'fidel':300,332 'figur':602,658,1143 'file':81,243,727,1021,1320 'filenam':462 'filter':486 'final':439,590,654 'first':72,1074,1211 'first-class':71 'flag':74,220,223,536,780 'follow':255 'foreground':957 'form':716 'format':256,490,893 'found':1063 'furnitur':887 'galleri':552,1291,1339 'game':1240 'gameplay':1134 'gen':251 'general':5,53 'general-purpos':4 'generat':8,95,238,470,481,695,826 'generate.py':833,853,884,904,926,950,973 'generation/editing':55 'get':1010 'git':142 'github.com':144,748,1269,1367 'github.com/openai/openai-cookbook/blob/main/examples/multimodal/image-gen-models-prompting-guide.ipynb).':747,1268 'github.com/wuyoscar/gpt_image_2_skill':143 'github.com/zerolu/awesome-gpt-image)':1366 'glanc':1228 'global':161 'gpt':2,16,20,51,61,147,163,315,320,327,338,346,350,354,732,784,1264 'gpt-imag':1,19,50,60,146,162,314,319,326,337,345,349,353,731,783 'grid':877 'guid':746,1117,1267 'handl':987,1254,1374 'heavi':676 'hero':656,1149 'hh':191 'hi':1139 'hi-r':1138 'high':302,331,403,404,416,421,437,591,665,672,849,871,966,983 'hit':1094 'hood':760 'id':343 'identifi':515 'ignor':324,505 'imag':3,7,12,17,21,37,43,52,54,62,94,148,164,258,263,316,321,328,339,347,351,355,457,733,785,876,901,1050,1061,1082,1087,1265 'image-gener':93 'import':763 'includ':45 'individu':1373 'infograph':600,1243 'inpaint':717,944 'input':299 'input-fidel':298 'instal':111,114,133,139,160 'instead':1007 'instruct':241 'int':452,499 'intact':958 'intent':96,1118,1188 'invalid':1113 'isometr':886 'iter':566 'jpeg':492 'jpeg/webp':503 'json':702,992 'keep':668,956,1250 'kept':616 'key':177,218,1034,1040 'kfc':931 'kfc_logo.png':937 'label':604 'landscap':395,1132,1137 'languag':1213 'larg':431 'layout':608 'like':613,799 'limit':1110 'line':103 'liner':1196 'liter':359,363 'load':88,1165,1271,1317,1335,1344,1358 'low':301,401,411,427,478,483,540,563,637 'm':126,155,171,276,387,721,961,1056 'ma':49 'manga':908 'mani':550,628 'markdown':1258 
'mask':127,156,172,277,284,719,726,943,1043,1048 'matter':583 'max':379 'medium':402,414,433,567,648,683 'messag':808 'mine':553 'mini':357 'minimalist':888 'miss':217,1058 'mm':189,192 'mobil':1125 'mockup':1294,1306 'mode':543,569,594,692 'model':335,342 'moder':476,807,1111 'multi':606,920 'multi-panel':605 'multi-refer':919 'multipart':715 'multipl':375 'n':450,451,889,1019 'name':186,1106 'need':100,1168,1189,1310 'negat':1218 'non':1068 'non-2xx':1067 'normal':435,572 'number':455 'offici':26,270,688,743,754,1263 'omit':253 'one':102,574,709,1195 'one-lin':101,1194 'one-off':573 'opaqu':285,467,472,892 'openai':15,30,58,175,274,517,689,741,755,762,764,1032,1038,1065,1073,1261,1278 'openai-cookbook.md':1359 'option':128,157,173,511,1164 'origin':1252 'otherwis':681 'out.png':1024 'out_0.png':1025 'out_1.png':1026 'output':78,200,247,308,586,680 'p':120,149,165,230,834,854,885,905,927,951,974 'page':909 'page.jpg':915 'panel':607 'paper':601,657,1142 'paramet':69,777,1281,1314 'parameter-coverag':1313 'path':201,244,248,259,278,1057,1064 'pattern':691,1362 'per':272,409,1378 'per-entri':1377 'person':798 'photo':1133 'photo.jpg':960 'photographi':1239 'photorealist':836 'pick':1116 'pitch':1296 'pitch-deck':1295 'pixel':286,290,389 'plugin':110,118 'png':194,283,491,494,507 'polici':488,522,782 'polish':643 'portrait':394,843,869,941,1127,1131 'post':700,713,722 'poster':599,659,844,860,933,1128,1334 'poster.png':873 'prefer':682 'presenc':265 'preserv':288,1376 'principl':1202 'print':199,1141 'probe':578 'problem':1108 'prompt':121,150,166,231,236,547,697,745,1161,1181,1266,1361 'prompt-craft':1160 'prompt-qual':1180 'purpos':6 'px':374 'python':756 'qualiti':399,521,526,830,850,870,965,982,1182 'quot':1207 'rate':1109 'ratio':383,1210 'read':174 'readabl':582 'real':797 'real-person-lik':796 'ref':125,154,170 'refer':11,42,222,262,311,706,921,1163,1220,1319 'reference-bas':705,1219 'reference-imag':10,41 'references/craft.md':1197 
'references/gallery.md':1230,1381 'references/openai-cookbook.md':1256 'refus':802 'regener':292 'relax':484 'render':970,1342 'repeat':260 'replac':945,952 'report':593 'request':1326 'requir':233,296,1049,1172 'res':1140 'respons':496,813,986,1078,1079,1090,1101 'return':459,989,1005,1066 'root/skills/gpt-image/scripts/generate.py':119 'rough':407,554 'rout':266 'rule':617,791 'run':116,832,852,883,903,925,949,972 'sane':77 'scene':1214 'scientif':1299 'screenshot':1126,1135 'script':996,1009,1176 'sdk':757,776 'section':1337,1346 'select':687 'semant':1282 'separ':534 'set':1042 'shanghai':976 'ship':448,592 'shipping-fac':447 'shortcut':360,390 'signal':1327 'silent':323 'size':358,868,940,980,1114,1115,1119 'skill':85,751,810 'skill-gpt-image' 'sky':946,953 'sky_mask.png':962 'skylin':977 'skyline.png':985 'slash':98 'slide':1298 'social':1121 'sourc':1253 'source-wuyoscar' 'squar':396,828,1122 'ss':193 'start':635 'stay':420 'stderr':817,1031 'stdout':204 'step':410 'still':645 'store':838 'stori':1153 'str':232,336,509 'structur':1191 'style':577,1216 'success':208 'suffix':461,1023 'support':312 'surfac':811,1028,1099 'sweep':432 'tabl':1316 'tall':398,1152,1159 'tea':859 'templat':1235 'test':1229 'text':35,235,443,598,663,675,848,1205,1225 'text-heavi':674 'text-to-imag':34 'thin':771 'three':1227 'three-glanc':1226 'thumb':619 'thumbnail':1157 'time':712 'tool':138 'topic-agent-skills' 'topic-gpt-image-2' 'topic-image-generation' 'total':388 'track':309,519 'translat':911,1307 'transpar':289,474,878 'tri':1303 'trigger':693 'try-on':1302 'two':25 'type':224 'typographi':441,1242,1336 'ui':1293 'ui/ux':1241 'unlock':1222 'unset':1035 'unsur':667 'uplift':1183 'url':1006,1012 'usag':104 'use':525,544,570,595,647,664,752,1174,1286 'use-cas':1285 'user':508,514,560,625,640,651,1186,1275,1324 'usual':801,1105 'uv':115,137,831,851,882,902,924,948,971 'uvx':135,140 'valu':225 'vanilla':825 'variant':551,629 'verbatim':815,1103,1257 'verifi':739 'vertic':1155 
'via':14,112,134 'video':1156 'virtual':1301 'want':562,1283 'webp':493,881,894 'wide':397 'widescreen':969,1146 'without':1044 'work':1357 'wrap':23 'wrapper':772 'write':180,1000,1014 'yet':589 'yyyi':188 'yyyy-mm-dd-hh-mm-ss':187 'zerolu/awesome-gpt-image':1365 '中杯':865 '元':867 '冷泡系列':864 '山川茶事':863","prices":[{"id":"619ff5cc-a2e8-4709-8e8a-1bbc5e5d9b7e","listingId":"ebb2617d-e09c-40ff-9ccc-56d31564f15f","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"wuyoscar","category":"gpt_image_2_skill","install_from":"skills.sh"},"createdAt":"2026-04-23T07:01:22.735Z"}],"sources":[{"listingId":"ebb2617d-e09c-40ff-9ccc-56d31564f15f","source":"github","sourceId":"wuyoscar/gpt_image_2_skill/gpt-image","sourceUrl":"https://github.com/wuyoscar/gpt_image_2_skill/tree/main/skills/gpt-image","isPrimary":false,"firstSeenAt":"2026-04-23T07:01:22.735Z","lastSeenAt":"2026-04-23T18:56:48.098Z"}],"details":{"listingId":"ebb2617d-e09c-40ff-9ccc-56d31564f15f","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"wuyoscar","slug":"gpt-image","github":{"repo":"wuyoscar/gpt_image_2_skill","stars":80,"topics":["agent-skills","gpt-image-2","image-generation"],"license":"other","html_url":"https://github.com/wuyoscar/gpt_image_2_skill","pushed_at":"2026-04-23T14:12:13Z","description":"Curated T2I prompts and skill/CLI wrappers for GPT Image 2","skill_md_sha":"d07e93a5422d39ef6195457843d87aba12a5a834","skill_md_path":"skills/gpt-image/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/wuyoscar/gpt_image_2_skill/tree/main/skills/gpt-image"},"layout":"multi","source":"github","category":"gpt_image_2_skill","frontmatter":{"name":"gpt-image","license":"CC BY 4.0 (prompt patterns attributed to original 
authors)","description":"General-purpose image generation and reference-image editing via OpenAI GPT Image 2 (`gpt-image-2`). Wraps the two official endpoints from the OpenAI cookbook — `/v1/images/generations` for text-to-image and `/v1/images/edits` for reference-image edits (including alpha-channel masks). Use whenever an agent or user needs to (a) generate an image from a text prompt, (b) restyle or transform an image using a reference image, (c) combine multiple reference images, (d) inpaint a region using a PNG mask, or (e) render dense typography / Chinese text. Reads `OPENAI_API_KEY` from env; writes PNG/JPEG/WebP to disk. Optional prompt-craft references under `references/` for photorealism, posters, infographics, and character sheets."},"skills_sh_url":"https://skills.sh/wuyoscar/gpt_image_2_skill/gpt-image"},"updatedAt":"2026-04-23T18:56:48.098Z"}}