{"id":"fbbc0230-6684-4a90-ae79-f5b05aba2739","shortId":"7yL6Wg","kind":"skill","title":"gpu-use","tagline":"查看远程服务器 GPU 使用情况。SSH 连接服务器，展示每张卡的显存占用、运行进程、所属容器。当用户说查看 GPU、显卡占用、显存使用时使用","description":"# GPU 使用情况诊断\n\n你是一个 GPU 资源管理专家，帮助用户快速了解远程服务器上的 GPU 使用情况。\n\n## 服务器列表\n\n| 别名 | SSH 命令 |\n|------|----------|\n| 默认 | `ssh felix@124.158.103.16 -p 10022` |\n\n用户可以传入自定义 SSH 地址，格式：`user@host -p port`。无参数时使用默认服务器。\n\n## 诊断流程\n\n### 第一步：采集数据\n\n**并行执行以下命令（通过 SSH）：**\n\n1. **GPU 卡概况**\n```bash\nssh {SSH_TARGET} \"nvidia-smi --query-gpu=index,name,memory.total,memory.used,memory.free,utilization.gpu --format=csv,noheader,nounits\"\n```\n\n2. **GPU 上运行的进程**\n```bash\nssh {SSH_TARGET} \"nvidia-smi --query-compute-apps=pid,gpu_uuid,used_memory,name --format=csv,noheader,nounits\"\n```\n\n3. **GPU UUID 到 index 的映射**\n```bash\nssh {SSH_TARGET} \"nvidia-smi --query-gpu=index,gpu_uuid --format=csv,noheader\"\n```\n\n4. **Docker 容器列表**\n```bash\nssh {SSH_TARGET} \"docker ps --format '{{.ID}} {{.Names}}' 2>/dev/null\"\n```\n\n5. **进程 PID 到容器的映射**（用采集到的 PID 列表）\n```bash\nssh {SSH_TARGET} \"for cid in \\$(docker ps -q); do name=\\$(docker inspect --format '{{.Name}}' \\$cid | sed 's/^\\///'); pids=\\$(docker top \\$cid -o pid 2>/dev/null | tail -n +2); for p in \\$pids; do echo \\\"\\$p \\$name\\\"; done; done 2>/dev/null\"\n```\n\n6. 
**Multi-instance http_server detection inside containers** (identifies single-container multi-endpoint deployments)\n```bash\nssh {SSH_TARGET} \"for cid in \\$(docker ps -q); do name=\\$(docker inspect --format '{{.Name}}' \\$cid | sed 's/^\\///'); servers=\\$(docker exec \\$cid ps aux 2>/dev/null | grep 'http_server -p' | grep -v grep | awk '{for(i=1;i<=NF;i++) if(\\$i==\\\"-p\\\") print \\$(i+1)}'); if [ -n \\\"\\$servers\\\" ]; then echo \\\"\\$name: \\$servers\\\"; fi; done 2>/dev/null\"\n```\n\n### Step 2: Generate the Report\n\nMap each GPU UUID back to its index and each PID back to its container name, then output in the following format:\n\n```\n## GPU Usage Overview\n\n| GPU | Model | Memory Used | Free | GPU Util | Status |\n|-----|------|----------|------|------------|------|\n| 0 | H200 | 107 / 141 GB | 34 GB | 85% | 🔴 Busy |\n| 1 | H200 | 12 / 141 GB | 129 GB | 10% | 🟢 Idle |\n| 2 | H200 | 0 / 141 GB | 141 GB | 0% | ⚪ No tasks |\n\n## Process Details\n\n| GPU | Memory Used | Container | Process |\n|-----|----------|------|------|\n| 0 | 107 GB | vllm_qwen35 | VLLM::EngineCore |\n| 0 | 2 GB | truetranslate-api-bin | truetranslate_api.bin |\n| 1 | 12 GB | atlas_video | python |\n\n## Multi-Instance Services (single-container multi-endpoint deployment)\n\nIf multiple http_server instances are detected inside one container, list them separately:\n\n| Container | Port | GPU | Status |\n|------|------|-----|------|\n| atlas_video | :5001 | GPU 2 | Running |\n| atlas_video | :5002 | GPU 3 | Running |\n\n## Idle Resources\n\nGPUs available for new service deployments:\n- GPU 4: 141 GB fully idle\n- GPU 5: 141 GB fully idle\n```\n\n### Status Rules\n\n| Memory Usage | GPU Utilization | Status |\n|------------|------------|------|\n| 0% | 0% | ⚪ No tasks |\n| < 30% | < 30% | 🟢 Idle |\n| 30-80% | any | 🟡 Moderate |\n| > 80% | any | 🔴 Busy |\n\n### Multi-Instance Detection Logic\n\nWhen more than one `http_server -p` process is detected inside a single container:\n1. Extract each process's port number (the `-p` argument)\n2. Identify the bound GPU from the process's `CUDA_VISIBLE_DEVICES` environment variable (note: `{PID}` must be the PID inside the container's PID namespace, which can differ from the host PID reported by nvidia-smi; map between them with `docker top` if they differ):\n   ```bash\n   ssh {SSH_TARGET} \"docker exec {CONTAINER} cat /proc/{PID}/environ 2>/dev/null | tr '\\0' '\\n' | grep CUDA_VISIBLE_DEVICES\"\n   ```\n3. 
Present these instances in a dedicated table in the report, noting each one's port, GPU binding, and running status\n\n## Notes\n\n- Respond in Chinese\n- Set a 15-second timeout on SSH commands\n- If the SSH connection fails, ask the user to check their network and SSH configuration\n- Perform no write operations; this is a strictly read-only diagnostic\n- Single-container multi-endpoint is the standard deployment mode for atlas_video; take care to distinguish container-level from process-level GPU usage","tags":["gpu","use","claude","arsenal","majiayu000","agent-skills","ai-agents","ai-coding-assistant","automation","claude-code","code-review","developer-tools"],"capabilities":["skill","source-majiayu000","skill-gpu-use","topic-agent-skills","topic-ai-agents","topic-ai-coding-assistant","topic-automation","topic-claude","topic-claude-code","topic-code-review","topic-developer-tools","topic-devops","topic-productivity","topic-prompt-engineering","topic-python"],"categories":["claude-arsenal"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/majiayu000/claude-arsenal/gpu-use","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add majiayu000/claude-arsenal","source_repo":"https://github.com/majiayu000/claude-arsenal","install_from":"skills.sh"}},"qualityScore":"0.464","qualityRationale":"deterministic score 0.46 from registry signals: · indexed on github topic:agent-skills · 29 github stars · SKILL.md body (2,852 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-05-01T07:01:14.163Z","embedding":null,"createdAt":"2026-04-18T22:24:12.551Z","updatedAt":"2026-05-01T07:01:14.163Z","lastSeenAt":"2026-05-01T07:01:14.163Z","tsv":null,"prices":[{"id":"7a35a697-3aaf-48ab-9c8e-3a9ceb624b53","listingId":"fbbc0230-6684-4a90-ae79-f5b05aba2739","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"majiayu000","category":"claude-arsenal","install_from":"skills.sh"},"createdAt":"2026-04-18T22:24:12.551Z"}],"sources":[{"listingId":"fbbc0230-6684-4a90-ae79-f5b05aba2739","source":"github","sourceId":"majiayu000/claude-arsenal/gpu-use","sourceUrl":"https://github.com/majiayu000/claude-arsenal/tree/main/skills/gpu-use","isPrimary":false,"firstSeenAt":"2026-04-18T22:24:12.551Z","lastSeenAt":"2026-05-01T07:01:14.163Z"}],"details":{"listingId":"fbbc0230-6684-4a90-ae79-f5b05aba2739","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"majiayu000","slug":"gpu-use","github":{"repo":"majiayu0
00/claude-arsenal","stars":29,"topics":["agent-skills","ai-agents","ai-coding-assistant","automation","claude","claude-code","code-review","developer-tools","devops","productivity","prompt-engineering","python","software-development","typescript","workflows"],"license":"mit","html_url":"https://github.com/majiayu000/claude-arsenal","pushed_at":"2026-04-29T04:12:22Z","description":"52 production-ready Claude Code skills and 7 specialized agents for software development, DevOps, product workflows, and automation.","skill_md_sha":"9c02e3100eb356f14941a1261f31e36860cde22f","skill_md_path":"skills/gpu-use/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/majiayu000/claude-arsenal/tree/main/skills/gpu-use"},"layout":"multi","source":"github","category":"claude-arsenal","frontmatter":{"name":"gpu-use","description":"Inspect GPU usage on a remote server. Connects over SSH and shows each card's memory usage, running processes, and owning container. Use when the user asks to check GPUs, GPU utilization, or GPU memory usage"},"skills_sh_url":"https://skills.sh/majiayu000/claude-arsenal/gpu-use"},"updatedAt":"2026-05-01T07:01:14.163Z"}}