{"id":"641e39d6-9738-4e9d-ab57-6b7b19e998af","shortId":"dL44Yj","kind":"skill","title":"yann-lecun-tecnico","tagline":"Sub-skill técnica de Yann LeCun. Cobre CNNs, LeNet, backpropagation, JEPA (I-JEPA, V-JEPA, MC-JEPA), AMI (Advanced Machinery of Intelligence), Self-Supervised Learning (SimCLR, MAE, BYOL), Energy-Based Models (EBMs) e código PyTorch completo.","description":"# YANN LECUN — MÓDULO TÉCNICO v3.0\n\n## Overview\n\nSub-skill técnica de Yann LeCun. Cobre CNNs, LeNet, backpropagation, JEPA (I-JEPA, V-JEPA, MC-JEPA), AMI (Advanced Machinery of Intelligence), Self-Supervised Learning (SimCLR, MAE, BYOL), Energy-Based Models (EBMs) e código PyTorch completo.\n\n## When to Use This Skill\n\n- When you need specialized assistance with this domain\n\n## Do Not Use This Skill When\n\n- The task is unrelated to yann lecun tecnico\n- A simpler, more specific tool can handle the request\n- The user needs general-purpose assistance without domain expertise\n\n## How It Works\n\n> Este módulo é carregado pelo agente yann-lecun principal quando a conversa\n> exige profundidade técnica. Você continua sendo LeCun — apenas com acesso\n> a todo o arsenal técnico.\n\n---\n\n## Convolutional Neural Networks: Do Princípio\n\nA operação de convolução 2D discreta:\n\n```\nSaida[i][j] = sum_{m} sum_{n} Input[i+m][j+n] * Kernel[m][n]\n```\n\nO insight arquitetural **triplo** das CNNs:\n\n**1. Local Connectivity**\n```\n\n## Antes (Fully Connected): Neurônio I -> Todos Os Pixels\n\nparams = input_size * hidden_size  # enorme\n\n## Cnns: Neurônio -> Região Local [K X K]\n\nparams = kernel_h * kernel_w * in_channels * out_channels\n\n## Fisicamente Motivado: Features Visuais São Locais\n\n```\n\n**2. Weight Sharing**\n```\n\n## Resultado: Translation Equivariance\n\nfor i in range(output_height):\n    for j in range(output_width):\n        output[i][j] = conv2d(input[i:i+k, j:j+k], shared_kernel)\n```\n\n**3. 
Hierarquia de Representações**\n```\n# bordas -> motivos -> partes -> objetos: features compostas camada a camada\n# exemplo LeNet-5: conv(6) -> pool -> conv(16) -> pool -> conv(120) -> FC(84) -> FC(10)\n# total: ~60,000 parâmetros\n```\n\nO insight central: **features não precisam ser handcrafted**. Aprendem por gradiente.\nEm 2012, AlexNet provou. Eu dizia isso desde 1989.\n\n## Backpropagation: A Equação Central\n\n```\ndelta_L = dL/da_L  (gradiente na camada de saída)\ndelta_l = (W_{l+1}^T * delta_{l+1}) ⊙ f'(z_l)   # ⊙ = produto elemento a elemento\ndL/dW_l = delta_l * a_{l-1}^T\ndL/db_l = delta_l\n```\n\nBackprop não é algoritmo milagroso. É chain rule aplicada a funções compostas.\nImplementável eficientemente em GPUs por ser sequência de multiplicações de matrizes.\n\n## Self-Supervised Learning: Objetivos e Formalização\n\n**Variante generativa (MAE, BERT)**:\n```\nL_gen = E[||f_theta(x_masked) - x_target||^2]\n# para imagens: prevê cada pixel — desperdiçador de capacidade\n```\n\n**Variante contrastiva (SimCLR, MoCo)**:\n```\nL_contrastive = -log( exp(sim(z_i, z_j) / tau) /\n                      sum_k exp(sim(z_i, z_k) / tau) )\n# tau: temperature hyperparameter\n```\n\nProblema das contrastivas: precisam de \"negatives\" — batch grande. 
Motivou BYOL e JEPA.\n\n---\n\n## Formulação Central\n\nJEPA: **prever em espaço de representações, não em espaço de inputs**.\n\n```\n# dois encoders (ou um com stop-gradient):\ns_x = f_theta(x)           # contexto encoder\ns_y = f_theta_bar(y)       # target encoder (momentum de theta)\n\n# predictor:\ns_hat_y = g_phi(s_x)       # prevê representação de y dado x\n\n# objetivo:\nL_JEPA = ||s_y - s_hat_y||^2    # MSE no espaço de representações\n\n# prevenção de colapso: target encoder usa momentum (EMA)\ntheta_bar <- m * theta_bar + (1-m) * theta   # m ~ 0.996\n```\n\n**Por que JEPA supera geração de pixels/tokens**:\n\n| Abordagem | Prevê | Capacidade gasta em | Semântica |\n|-----------|-------|---------------------|-----------|\n| MAE | Pixels exatos | Texturas, ruídos, irrelevantes | Custosamente |\n| BERT | Tokens exatos | Detalhes lexicais | Custosamente |\n| Contrastiva | Invariâncias | Negativos (batch grande) | Sim |\n| **JEPA** | **Representação abstrata** | **Relações semânticas** | **Eficientemente** |\n\n## I-JEPA: Pseudocódigo PyTorch Completo\n\n```python\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport copy\n\nclass IJEPA(nn.Module):\n    \"\"\"\n    I-JEPA: Image Joint Embedding Predictive Architecture\n    Assran et al. 
2023 — CVPR\n    \"\"\"\n    def __init__(self, encoder, predictor, momentum=0.996):\n        super().__init__()\n        self.context_encoder = encoder\n        self.target_encoder = copy.deepcopy(encoder)\n        self.predictor = predictor\n        self.momentum = momentum\n\n        for param in self.target_encoder.parameters():\n            param.requires_grad = False\n\n    @torch.no_grad()\n    def update_target_encoder(self):\n        \"\"\"EMA update\"\"\"\n        for param_ctx, param_tgt in zip(\n            self.context_encoder.parameters(),\n            self.target_encoder.parameters()\n        ):\n            param_tgt.data = (\n                self.momentum * param_tgt.data +\n                (1 - self.momentum) * param_ctx.data\n            )\n\n    def forward(self, images):\n        context_mask, target_masks, target_positions = self.create_masks(images)\n        context_embeds = self.context_encoder(images, context_mask)\n\n        with torch.no_grad():\n            target_embeds = self.target_encoder(images)  # representações dos patches alvo\n\n        predicted_embeds = self.predictor(context_embeds, target_positions)\n        loss = F.mse_loss(predicted_embeds, target_embeds.detach())\n        return loss\n\n    def create_masks(self, images, num_target_blocks=4, context_scale=0.85):\n        \"\"\"\n        Estratégia I-JEPA:\n        - Múltiplos blocos alvo aleatórios (alto aspect ratio)\n        - Contexto: imagem com blocos alvo mascarados\n        \"\"\"\n        B, C, H, W = images.shape\n        patch_size = 16\n        n_patches_h = H // patch_size\n        n_patches_w = W // patch_size\n\n        # generate_random_blocks: helper (omitido neste pseudocódigo) que\n        # amostra blocos retangulares no grid de patches\n        target_masks = generate_random_blocks(\n            n_patches_h, n_patches_w,\n            num_blocks=num_target_blocks,\n            scale_range=(0.15, 0.2),\n            aspect_ratio_range=(0.75, 1.5)\n        )\n        context_mask = ~target_masks          # contexto = tudo menos os blocos alvo\n        target_positions = target_masks.nonzero()\n        return context_mask, target_masks, target_positions\n```\n\n## V-JEPA: Extensão Temporal\n\n```\n# prever representação de frames futuros em posições mascaradas\nL_V_JEPA = 
E[||f_target(video_masked) - g(f_ctx(video_ctx), positions)||^2]\n# sem nenhum label\n```\n\n## Hierarquia de Encoders\n\n```\nLevel 0: pixels -> patches -> representações locais (bordas, texturas)\nLevel 1: patches -> regiões -> representações de objetos\nLevel 2: regiões -> cena -> representações de relações espaciais\nLevel 3: cena -> temporal -> representações de eventos\n\n# cada nível tem seu próprio JEPA:\nL_total = sum_l lambda_l * L_JEPA_l\n\n# resultado: world model hierárquico multi-escala\n```\n\n---\n\n## Seção AMI — Advanced Machine Intelligence\n\nPaper: \"A Path Towards Autonomous Machine Intelligence\" (2022)\n\n## Os 6 Módulos do AMI\n\n```\n+----------------------------------------------------------+\n|                   SISTEMA AMI COMPLETO                   |\n|                                                          |\n|  +-----------+      +--------------------+               |\n|  | Perceptor |      | World Model        |               |\n|  | (encoders)|      | (JEPA hierárquico) |               |\n|  +-----------+      +--------------------+               |\n|        |                      |                          |\n|        v                      v                          |\n|  +-----------+      +--------------------+               |\n|  | Memory    |<---->| Cost Module        |               |\n|  | (epis,    |      | (intrínseco +      |               |\n|  |  semant)  |      |  configurável)     |               |\n|  +-----------+      +--------------------+               |\n|                               |                          |\n|                     +--------------------+               |\n|                     | Actor (planner     |               |\n|                     | + executor)        |               |\n|                     +--------------------+               |\n+----------------------------------------------------------+\n```\n\n**Módulo 1 — Configurator**: Configura os outros módulos para a tarefa 
atual.\n\n**Módulo 2 — Perception**: Encoders sensório-motores que alimentam o world model.\n\n**Módulo 3 — World Model** (coração do sistema):\n```\n# simulação interna: \"o que acontece se eu fizer X?\"\npredicted_next_state = world_model(current_state, action_X)\ncost_predicted = cost_module(predicted_next_state)\n\n# escolhe a ação que minimiza o custo\n```\n\n**Módulo 4 — Cost Module**:\n```\n# dois tipos de custo:\nE(s) = alpha * intrinsic_cost(s) + beta * task_cost(s)\n\n# task_cost: objetivo configurável por tarefa/humano\n```\n\n**Módulo 5 — Short-term Memory**: Buffer de estados, simulações, contexto imediato.\n\n**Módulo 6 — Actor**:\n- Modo reativo: ações diretas do estado atual\n- Modo deliberativo: simula múltiplos futuros, escolhe o de mínimo custo\n\n## AMI vs LLMs\n\n| Feature | LLM | AMI |\n|---------|-----|-----|\n| Objetivo | Prever próximo token | Minimizar erro em representação |\n| World model | Nenhum | Módulo dedicado central |\n| Planning | Texto sobre planning | Planning real com simulação |\n| Memória | Context window (fixo) | Memória episódica atualizável |\n| Objetivos | Apenas treinamento | Cost module configurável |\n| Input | Texto | Multi-modal (vídeo, áudio, propriocepção) |\n| Causalidade | Correlacional | Causal (dinâmicas do mundo) |\n\n---\n\n## Seção EBM — Energy-Based Models\n\nUma contribuição subestimada, e a que será mais influente a longo prazo.\n\n**O problema com modelos probabilísticos**:\n```\nP(x) = exp(-E(x)) / Z\nZ = integral exp(-E(x)) dx   # intratável em alta dimensão!\n```\n\n**A solução EBM**: esquecer Z. 
Defina E(x) onde:\n- Baixa energia = configuração compatível com os dados observados\n- Alta energia = configuração incompatível\n\n```python\nclass EnergyBasedModel(nn.Module):\n    \"\"\"\n    EBM: F(x) = energia de x\n    P(x) ~ exp(-F(x)) / Z  — mas nunca calculamos Z!\n    Vantagem: sem partition function intratável.\n    \"\"\"\n    def __init__(self, latent_dim=512):\n        super().__init__()\n        self.energy_net = nn.Sequential(\n            nn.Linear(latent_dim, 256),\n            nn.SiLU(),\n            nn.Linear(256, 128),\n            nn.SiLU(),\n            nn.Linear(128, 1)  # escalar: energia\n        )\n\n    def energy(self, x):\n        return self.energy_net(x).squeeze(-1)\n\n    def contrastive_loss(self, x_pos, x_neg):\n        \"\"\"\n        L = E[F(x_pos)] - E[F(x_neg)] + regularização\n        Queremos: E_pos < E_neg\n        \"\"\"\n        E_pos = self.energy(x_pos)\n        E_neg = self.energy(x_neg)\n        loss = E_pos.mean() - E_neg.mean()\n        reg = 0.1 * (E_pos.pow(2).mean() + E_neg.pow(2).mean())\n        return loss + reg\n```\n\nEBMs capturam isso naturalmente: são sobre compatibilidade, não probabilidade.\n\n**JEPA como EBM no espaço de representações**:\n```\nE(x, y) = ||f_theta(x) - g_phi(f_theta_bar(y))||^2\n```\n\n## SimCLR Simplificado\n\n```python\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torchvision.transforms as T\n\n\nclass ProjectionHead(nn.Module):\n    \"\"\"MLP que projeta representações para o espaço contrastivo\"\"\"\n    def __init__(self, in_dim=512, hidden_dim=256, out_dim=128):\n        super().__init__()\n        self.net = nn.Sequential(\n            nn.Linear(in_dim, hidden_dim),\n            nn.BatchNorm1d(hidden_dim),\n            nn.ReLU(inplace=True),\n            nn.Linear(hidden_dim, out_dim)\n        )\n\n    def forward(self, x):\n        return F.normalize(self.net(x), dim=-1)\n\n\nclass SimCLRLoss(nn.Module):\n    \"\"\"NT-Xent Loss (Chen et al. 
2020)\"\"\"\n    def __init__(self, temperature=0.5):\n        super().__init__()\n        self.temp = temperature\n\n    def forward(self, z1, z2):\n        \"\"\"\n        z1, z2: [B, D] — duas views do mesmo batch\n        z1[i] e z2[i]: positive pair\n        Todos outros pares: negatives\n        \"\"\"\n        B = z1.size(0)\n        z = torch.cat([z1, z2], dim=0)\n        sim = torch.mm(z, z.t()) / self.temp\n        mask = torch.eye(2*B, device=z.device).bool()\n        sim.masked_fill_(mask, float('-inf'))\n        labels = torch.arange(B, device=z.device)\n        labels = torch.cat([labels + B, labels])\n        return F.cross_entropy(sim, labels)\n\n\ndef get_ssl_augmentations(size=224):\n    \"\"\"\n    As augmentações DEFINEM o que o modelo aprende a ser invariante.\n    Rotação -> invariância a rotação.\n    Crop -> invariância a posição.\n    \"\"\"\n    return T.Compose([\n        T.RandomResizedCrop(size, scale=(0.2, 1.0)),\n        T.RandomHorizontalFlip(),\n        T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),\n        T.RandomGrayscale(p=0.2),\n        T.GaussianBlur(kernel_size=size//10*2+1, sigma=(0.1, 2.0)),\n        T.ToTensor(),\n        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])\n    ])\n```\n\n## Lenet-5 Original Em Pytorch Moderno\n\n```python\nclass LeNet5Modern(nn.Module):\n    \"\"\"\n    LeNet-5 (LeCun et al. 1998) reimplementada em PyTorch moderno.\n    Esta arquitetura rodou em produção no Bank of America em 1993.\n    ~60,000 parâmetros. 
Mesmos princípios de modelos modernos com bilhões.\n    \"\"\"\n    def __init__(self, num_classes=10):\n        super().__init__()\n        self.features = nn.Sequential(\n            nn.Conv2d(1, 6, kernel_size=5, padding=2),\n            nn.Tanh(),\n            nn.AvgPool2d(kernel_size=2, stride=2),\n            nn.Conv2d(6, 16, kernel_size=5),\n            nn.Tanh(),\n            nn.AvgPool2d(kernel_size=2, stride=2),\n            nn.Conv2d(16, 120, kernel_size=5),\n            nn.Tanh(),\n        )\n        self.classifier = nn.Sequential(\n            nn.Linear(120, 84),\n            nn.Tanh(),\n            nn.Linear(84, num_classes),\n        )\n\n    def forward(self, x):\n        x = self.features(x)    # [B, 120, 1, 1]\n        x = x.view(x.size(0), -1)\n        return self.classifier(x)\n```\n\n---\n\n## Papers Fundamentais (LeCun)\n\n- LeCun et al. (1998). \"Gradient-Based Learning Applied to Document Recognition\" — Proceedings of the IEEE 86(11)\n- LeCun, Bengio & Hinton (2015). \"Deep Learning\" — Nature 521:436-444\n- LeCun (2022). \"A Path Towards Autonomous Machine Intelligence\" — OpenReview preprint\n\n## JEPA Papers\n\n- Assran et al. (2023). \"Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture\" (I-JEPA) — CVPR 2023\n- Bardes et al. (2024). \"Revisiting Feature Prediction for Learning Visual Representations from Video\" (V-JEPA)\n- LeCun (2016). \"Predictive Learning\" — NIPS 2016 Keynote (The Cake Analogy)\n\n## SSL Relevantes\n\n- He et al. (2022). \"Masked Autoencoders Are Scalable Vision Learners\" — CVPR 2022\n- Chen et al. (2020). \"A Simple Framework for Contrastive Learning of Visual Representations\" (SimCLR) — ICML 2020\n- Grill et al. (2020). \"Bootstrap Your Own Latent\" (BYOL) — NeurIPS 2020\n\n## Energy-Based Models\n\n- LeCun et al. (2006). \"A Tutorial on Energy-Based Learning\" — in Predicting Structured Data, MIT Press\n- LeCun (2021). 
\"Energy-Based Models for Autonomous and Predictive Learning\" — ICLR Keynote\n\n## Best Practices\n\n- Provide clear, specific context about your project and requirements\n- Review all suggestions before applying them to production code\n- Combine with other complementary skills for comprehensive analysis\n\n## Common Pitfalls\n\n- Using this skill for tasks outside its domain expertise\n- Applying recommendations without understanding your specific context\n- Not providing enough project context for accurate analysis\n\n## Related Skills\n\n- `yann-lecun` - Complementary skill for enhanced analysis\n- `yann-lecun-debate` - Complementary skill for enhanced analysis\n- `yann-lecun-filosofia` - Complementary skill for enhanced analysis\n\n## Limitations\n- Use this skill only when the task clearly matches the scope described above.\n- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.\n- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.","tags":["yann","lecun","tecnico","antigravity","awesome","skills","sickn33","agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding"],"capabilities":["skill","source-sickn33","skill-yann-lecun-tecnico","topic-agent-skills","topic-agentic-skills","topic-ai-agent-skills","topic-ai-agents","topic-ai-coding","topic-ai-workflows","topic-antigravity","topic-antigravity-skills","topic-claude-code","topic-claude-code-skills","topic-codex-cli","topic-codex-skills"],"categories":["antigravity-awesome-skills"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/sickn33/antigravity-awesome-skills/yann-lecun-tecnico","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add sickn33/antigravity-awesome-skills","source_repo":"https://github.com/sickn33/antigravity-awesome-skills","install_from":"skills.sh"}},"qualityScore":"0.700","qualityRationale":"deterministic 
score 0.70 from registry signals: · indexed on github topic:agent-skills · 34404 github stars · SKILL.md body (15,311 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-22T00:52:01.226Z","embedding":null,"createdAt":"2026-04-18T21:47:54.158Z","updatedAt":"2026-04-22T00:52:01.226Z","lastSeenAt":"2026-04-22T00:52:01.226Z","tsv":"'+1':318,322,1449 '-1':331,1166,1311,1574 '-444':1605 '-5':1464,1474 '/10':1447 '0':790,1359,1365,1573 '0.1':1204,1439,1451 '0.15':744 '0.2':745,1428,1442 '0.224':1461 '0.225':1462 '0.229':1460 '0.4':1433,1435,1437 '0.406':1458 '0.456':1457 '0.485':1456 '0.5':1327 '0.75':749 '0.85':688 '0.996':509,589 '000':280,1495 '1':204,505,631,798,882,1154,1515,1568,1569 '1.0':1429 '1.5':750 '10':1509 '11':1595 '120':1544,1552,1567 '128':1150,1153,1281 '16':713,1531,1543 '1989':301 '1993':1493 '1998':1478,1584 '2':243,379,486,782,805,893,1206,1209,1242,1373,1448,1521,1526,1528,1539,1541 '2.0':1452 '2006':1706 '2012':294 '2015':1599 '2016':1653 '2020':1322,1678,1687,1691,1698 '2021':1717 '2022':854,1607,1666,1674 '2023':581,1621,1632,1651 '2024':1639 '224':1403 '256':1146,1149,1278 '2d':181 '3':274,813,905 '4':685,943 '436':1604 '5':967,1519,1534,1547 '512':1137,1275 '521':1603 '6':856,979,1516,1530 '60':279,1494 '84':1553,1556 '86':1594 'abordagem':517 'abstrata':544 'accur':1781 'acesso':166 'acontec':915 'action':927 'actor':878,980 'advanc':27,75,843 'agent':149 'al':580,1321,1477,1583,1598,1620,1638,1665,1677,1690,1705 'aleatório':696 'alexnet':295 'algoritmo':339 'alimentam':900 'alpha':952 'alta':1085,1103 'alto':697 'alvo':695,704 'america':1491 'ami':26,74,842,859,861,996,1001 'analog':1660 'analysi':1756,1782,1792,1801,1810 'ant':207 'apena':164,1032 'aplicada':344 
'appli':1589,1744,1768 'aprend':1411 'aprendem':290 'architectur':577 'arquitetur':200 'arquitetura':1484 'arsenal':170 'ask':1844 'aspect':698,746 'assist':104,137 'assran':578,1618 'atual':891,987 'atualizável':1030 'audio':1043 'augment':1401 'augmentaçõ':1405 'autoencod':1668 'autonom':851,1611,1723 'ação':937 'açõ':983 'b':706,1339,1357,1374,1385,1391,1566 'backprop':336 'backpropag':15,63,302 'baixa':1096 'bank':1489 'bar':457,501,504,1240 'bard':1636 'base':40,88,1055,1587,1701,1712,1720 'batch':419,539,1345 'bert':369,530 'best':1729 'beta':956 'bilhõ':1503 'block':684,730,738,741 'bloco':694,703 'bool':1377 'bootstrap':1692 'borda':795 'boundari':1852 'bright':1432 'buffer':972 'byol':37,85,422,1696 'c':707 'cada':382,819 'cake':1659 'calculamo':1125 'camada':311 'capacidad':386,519 'capturam':1215 'carregado':147 'causal':1047 'causalidad':1045 'cena':807,814 'central':284,305,426,1015 'chain':342 'channel':234,236 'chen':1319,1675 'clarif':1846 'class':567,1108,1260,1312,1470,1508,1558 'clear':1732,1819 'cnns':13,61,203,221 'cobr':12,60 'code':1748 'colapso':494 'com':165,442,702,1022,1069,1100,1502 'combin':1749 'common':1757 'como':1224 'compatibilidad':1220 'compatível':1099 'complementari':1752,1788,1797,1806 'completo':46,94,553,862 'composta':347 'comprehens':1755 'configur':883 'configura':884 'configuração':1098,1105 'configurável':877,963,1036 'connect':206,209 'context':638,646,650,665,686,751,1025,1734,1774,1779 'contexto':451,700,976 'continua':161 'contrast':392,1168,1434,1683 'contrastiva':388,415,536 'contrastivo':1269 'contribuição':1057 'conv2d':264 'conversa':156 'convolut':172 'convolução':180 'copi':566 'copy.deepcopy':597 'coração':908 'correlacion':1046 'cost':872,929,931,944,954,958,961,1034 'creat':678 'criteria':1855 'crop':1419 'ctx':621,778,780 'current':925 'custo':941,949,995 'custosament':529,535 'cvpr':582,1631,1673 'código':44,92 'd':1340 'dado':476,1101 'das':202,414 
'de':9,57,179,276,312,355,357,385,417,431,436,462,474,490,493,515,762,787,802,809,817,948,973,1115,1228,1499 'debat':1796 'dedicado':1014 'deep':1600 'def':583,612,634,677,1132,1157,1167,1270,1302,1323,1332,1398,1504,1559 'defina':1092 'definem':1406 'deliberativo':989 'delta':306,314,320,327,334 'describ':1823 'desd':300 'desperdiçador':384 'detalh':533 'devic':1375,1386 'dim':1136,1145,1274,1277,1280,1288,1290,1293,1299,1301,1310,1364 'dimensão':1086 'dinâmica':1048 'direta':984 'discreta':182 'dizia':298 'dl/da_l':308 'dl/db_l':333 'dl/dw_l':326 'document':1591 'doi':438,946 'domain':107,139,1766 'dua':1341 'dx':1082 'e':43,91,364,372,423,771,950,1074,1080,1093,1176,1180,1186,1188,1190,1195,1230,1348 'e_neg.mean':1202 'e_neg.pow':1208 'e_pos.mean':1201 'e_pos.pow':1205 'ebm':42,90,1052,1089,1111,1214,1225 'eficientement':349,547 'em':293,350,429,434,521,765,1008,1084,1466,1480,1486,1492 'ema':499,617 'emb':647,657,663,666,673 'embed':575 'encod':439,452,460,496,586,593,594,596,598,615,649,659,788,866,895 'energi':39,87,1054,1158,1700,1711,1719 'energia':1097,1104,1114,1156 'energy-bas':38,86,1053,1699,1710,1718 'energybasedmodel':1109 'enhanc':1791,1800,1809 'enorm':220 'enough':1777 'entropi':1395 'environ':1835 'environment-specif':1834 'epi':874 'episódica':1029 'equação':304 'equivari':248 'erro':1007 'escala':840 'escalar':1155 'escolh':936,993 'espaciai':811 'espaço':430,435,489,1227,1268 'esquec':1090 'est':144 'esta':1483 'estado':974,986 'estratégia':689 'et':579,1320,1476,1582,1597,1619,1637,1664,1676,1689,1704 'eu':297,917 'evento':818 'exato':525,532 'executor':880 'exig':157 'exp':394,403,1073,1079,1119 'expert':1840 'expertis':140,1767 'extensão':757 'f':323,373,448,455,564,772,777,1112,1120,1177,1181,1233,1238,1255 'f.cross':1394 'f.mse':670 'f.normalize':1307 'fals':609 'featur':239,285,999 'fill':1379 'filosofia':1805 'fisicament':237 'fixo':1027 'fizer':918 'float':1381 'formalização':365 'formulação':425 'forward':635,1303,1333,1560 
'frame':763 'framework':1681 'fulli':208 'function':1130 'fundamentai':1579 'funçõ':346 'futuro':764,992 'g':468,776,1236 'gasta':520 'gen':371 'general':135 'general-purpos':134 'generat':728 'generativa':367 'geração':514 'get':1399 'gpus':351 'grad':608,611,655 'gradient':292,309,445,1586 'gradient-bas':1585 'grand':420,540 'grill':1688 'h':230,708,716,717,733 'handcraft':289 'handl':128 'hat':466,484 'height':254 'hidden':218,1276,1289,1292,1298 'hierarquia':275,786 'hierárquico':837,868 'hue':1438 'hyperparamet':412 'i-jepa':17,65,548,570,690,1633 'iclr':1714,1727 'icml':1686 'ieee':1593 'ijepa':568 'imag':573,637,645,681,1627 'imagem':701 'imagen':381 'images.shape':710 'imediato':977 'implementável':348 'import':555,557,561,565,1246,1248,1252,1256 'incompatível':1106 'inf':1382 'influent':1063 'init':584,591,1133,1139,1271,1283,1324,1329,1505,1511 'inplac':1295 'input':190,216,265,437,1037,1849 'insight':199,283 'integr':1078 'intellig':30,78,846,853,1613 'interna':912 'intratável':1083,1131 'intrins':953 'intrínseco':875 'invariant':1414 'invariância':537,1416,1420 'irrelevant':528 'isso':299,1216 'j':185,193,256,263,269,270,399 'jepa':16,19,22,25,64,67,70,73,424,427,480,512,542,550,572,692,756,770,824,832,867,1223,1616,1630,1635,1642 'joint':574 'k':225,227,268,271,402,408 'kernel':195,229,231,273,1444,1517,1524,1532,1537,1545 'keynot':1657,1728 'l':307,315,317,321,325,328,330,335,370,391,479,768,825,828,830,831,833,1175 'label':785,1383,1388,1390,1392,1397 'lambda':829 'latent':1135,1144,1695 'learn':34,82,362,1588,1601,1625,1646,1655,1684,1713,1726 'learner':1672 'lecun':3,11,48,59,120,152,163,1475,1580,1581,1596,1606,1652,1703,1716,1787,1795,1804 'lenet':14,62,1463,1473 'lenet5modern':1471 'level':789,797,804,812 'lexicai':534 'limit':1811 'llm':1000 'llms':998 'locai':242,794 'local':205,224 'log':393 'longo':1065 'loss':669,671,676,1169,1200,1212,1318 'm':187,192,196,502,506,508 'machin':852,1612 'machineri':28,76,844 'mae':36,84,368,523 'mai':1062 
'mas':1123 'mascarada':767 'mascarado':705 'mask':376,642,644,652,679,727,752,775,1371,1380,1667 'match':1820 'matriz':358 'mc':24,72 'mc-jepa':23,71 'mean':1207,1210,1455 'memori':871,971 'memória':1024,1028 'mesmo':1344,1497 'milagroso':340 'minimiza':939 'minimizar':1006 'miss':1857 'mlp':1263 'moco':390 'modal':1041 'model':41,89,836,865,903,907,924,1011,1056,1702,1721 'modelo':1410,1500 'moderno':1468,1482,1501 'modo':981,988 'modul':873,932,945,1035 'momentum':461,498,588,602 'motivado':238 'motivou':421 'motor':898 'mse':487 'multi':839,1040 'multi-escala':838 'multi-mod':1039 'multiplicaçõ':356 'mundo':1050 'mínimo':994 'módulo':49,145,857,881,887,892,904,942,966,978,1013 'múltiplo':693,991 'n':189,194,197,714,720,731,734 'na':310 'natur':1602 'naturalment':1217 'need':102,133 'neg':1174,1183,1189,1196,1199 'negat':418,1356 'negativo':538 'nenhum':784,1012 'net':1141,1163 'network':174 'neural':173 'neurip':1650,1697 'neurônio':210,222 'next':921,934 'nip':1656 'nn':560,1251 'nn.avgpool2d':1523,1536 'nn.batchnorm1d':1291 'nn.conv2d':1514,1529,1542 'nn.linear':1143,1148,1152,1286,1297,1551,1555 'nn.module':569,1110,1262,1314,1472 'nn.relu':1294 'nn.sequential':1142,1285,1513,1550 'nn.silu':1147,1151 'nn.tanh':1522,1535,1548,1554 'nt':1316 'nt-xent':1315 'num':682,737,739,1507,1557 'nunca':1124 'não':286,337,433,1221 'nível':820 'o':169,198,282,901,913,940,1067,1407,1409 'objetivo':363,478,962,1002,1031 'objeto':803 'observado':1102 'ond':1095 'openreview':1614 'operação':178 'origin':1465 'os':213,855,885 'ou':440 'output':253,259,261,1829 'outro':886,1354 'outsid':1764 'overview':52 'p':1071,1117,1441 'pad':1520 'pair':1352 'paper':847,1578,1617 'para':380,888,1267 'param':215,228,604,620,622 'param.requires':607 'param_ctx.data':633 'param_tgt.data':628,630 'pare':1355 'partit':1129 'parâmetro':281,1496 'patch':639,641,651,661,711,715,718,721,724,732,735,792,799 'path':849,1609 'pelo':148 'percept':894 'perceptor':863 'permiss':1850 'phi':469,1237 
'pitfal':1758 'pixel':214,383,524,791 'pixels/tokens':516 'plan':1016,1019,1020 'planner':879 'por':291,352,510,964 'pos':1172,1179,1187,1191,1194 'posit':668,781,1351 'posição':1422 'posiçõ':766 'practic':1730 'prazo':1066 'precisam':287,416 'predict':576,662,672,920,930,933,1654,1725 'predictor':464,587,600 'preprint':1615 'prevenção':492 'prever':428,760,1003 'prevê':472,518 'princip':153 'princípio':176,1498 'probabilidad':1222 'probabilístico':1070 'problema':413,1068 'product':1747 'produção':1487 'profundidad':158 'project':1737,1778 'projectionhead':1261 'projeta':1265 'propriocepção':1044 'provid':1731,1776 'provou':296 'próprio':823 'próximo':1004 'pseudocódigo':551 'purpos':136 'python':554,759,1107,1245,1469 'pytorch':45,93,552,1467,1481 'quando':154 'que':511,899,914,938,1059,1264,1408 'queremo':1185 'random':729 'rang':252,258,743,748 'ratio':699,747 'real':1021 'reativo':982 'recognit':1592 'recommend':1769 'reg':1203,1213 'região':223 'regiõ':800,806 'regularização':1184 'reimplementada':1479 'relat':1783 'relaçõ':545,810 'relevant':1662 'represent':1649 'representação':473,543,761,1009 'representaçõ':277,432,491,793,801,808,816,1229,1266 'request':130 'requir':1739,1848 'resultado':246,834 'return':675,1161,1211,1306,1393,1423,1575 'review':1740,1841 'rodou':1485 'rotação':1415,1418 'rule':343 'ruído':527 'safeti':1851 'saida':183 'satur':1436 'saída':313 'scalabl':1670 'scale':687,742,1427 'scope':1822 'se':916 'self':32,80,360,585,616,636,680,1134,1159,1170,1272,1304,1325,1334,1506,1561,1623,1644 'self-supervis':31,79,359,1622,1643 'self.classifier':1549,1576 'self.context':592,648 'self.context_encoder.parameters':626 'self.create':643 'self.energy':1140,1162,1192,1197 'self.features':1512,1564 'self.momentum':601,629,632 'self.net':1284,1308 'self.predictor':599,664 'self.target':595,658 'self.target_encoder.parameters':606,627 'self.temp':1330,1370 'sem':783,1128 'semant':876 'semântica':522,546 'sendo':162 'sensório':897 'sensório-motor':896 
'sequência':354 'ser':288,353,1061,1413 'seu':822 'seção':841,1051 'share':245,272 'short':969 'short-term':968 'sigma':1450 'sim':395,404,541,1366,1396 'sim.masked':1378 'simclr':35,83,389,1243,1685 'simclrloss':1313 'simpl':1680 'simpler':123 'simplificado':1244 'simula':990 'simulação':911,1023 'simulaçõ':975 'sistema':860,910 'size':217,219,712,719,725,1402,1426,1445,1446,1518,1525,1533,1538,1546 'skill':7,55,99,112,1753,1761,1784,1789,1798,1807,1814 'skill-yann-lecun-tecnico' 'sobr':1018,1219 'solução':1088 'source-sickn33' 'special':103 'specif':125,1733,1773,1836 'squeez':1165 'ssl':1400,1661 'state':922,926,935 'std':1459 'stop':444,1842 'stop-gradi':443 'stride':1527,1540 'sub':6,54 'sub-skil':5,53 'subestimada':1058 'substitut':1832 'success':1854 'suggest':1742 'sum':186,188,401,827 'super':590,1138,1282,1328,1510 'supera':513 'supervis':33,81,361,1624,1645 'são':241,1218 't.colorjitter':1431 't.compose':1424 't.gaussianblur':1443 't.normalize':1454 't.randomgrayscale':1440 't.randomhorizontalflip':1430 't.randomresizedcrop':1425 't.totensor':1453 'tarefa':890 'tarefa/humano':965 'targ':753 'target':378,459,495,614,640,656,660,667,683,726,740,773 'target_embeds.detach':674 'task':115,957,960,1763,1818 'tau':400,409,410 'tecnico':4,121 'tem':821 'temperatur':411,1326,1331 'tempor':758,815 'term':970 'test':1838 'texto':1017,1038 'textura':526,796 'tgt':623 'theta':374,449,456,463,500,503,507,1234,1239 'tipo':947 'todo':168,212,1353 'token':531,1005 'tool':126 'topic-agent-skills' 'topic-agentic-skills' 'topic-ai-agent-skills' 'topic-ai-agents' 'topic-ai-coding' 'topic-ai-workflows' 'topic-antigravity' 'topic-antigravity-skills' 'topic-claude-code' 'topic-claude-code-skills' 'topic-codex-cli' 'topic-codex-skills' 'torch':556,1247 'torch.arange':1384 'torch.cat':1361,1389 'torch.eye':1372 'torch.mm':1367 'torch.nn':558,1249 'torch.nn.functional':562,1253 'torch.no':610,654 'torchvision.transforms':1257 'total':278,826 'toward':850,1610 'translat':247 
'treat':1827 'treinamento':1033 'triplo':201 'true':1296 'tutori':1708 'técnica':8,56,159 'técnico':50,171 'um':441 'understand':1771 'unrel':117 'updat':613,618 'usa':497 'use':97,110,1759,1812 'user':132 'v':21,69,755,769,869,870,1641 'v-jepa':20,68,754,1640 'v3.0':51 'vai':1060 'valid':1837 'vantagem':1127 'variant':366,387 'video':774,779,1042,1648 'view':1342 'vision':1671 'visuai':240 'você':160 'vs':997 'w':232,316,709,722,723,736 'weight':244 'width':260 'window':1026 'without':138,1770 'work':143 'workshop':1715 'world':835,864,902,906,923,1010 'x':226,375,377,447,450,471,477,919,928,1072,1075,1081,1094,1113,1116,1118,1121,1160,1164,1171,1173,1178,1182,1193,1198,1231,1235,1305,1309,1562,1563,1565,1570,1577 'x.size':1572 'x.view':1571 'xent':1317 'y':454,458,467,475,482,485,1232,1241 'yann':2,10,47,58,119,151,1786,1794,1803 'yann-lecun':150,1785 'yann-lecun-deb':1793 'yann-lecun-filosofia':1802 'yann-lecun-tecnico':1 'z':324,396,398,405,407,1076,1077,1091,1122,1126,1360,1368 'z.device':1376,1387 'z.t':1369 'z1':1335,1337,1346,1362 'z1.size':1358 'z2':1336,1338,1349,1363 'zip':625 
'é':146,338,341","prices":[{"id":"105d5e5f-3c2d-4f07-a97a-8a5bbc7d5c32","listingId":"641e39d6-9738-4e9d-ab57-6b7b19e998af","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"sickn33","category":"antigravity-awesome-skills","install_from":"skills.sh"},"createdAt":"2026-04-18T21:47:54.158Z"}],"sources":[{"listingId":"641e39d6-9738-4e9d-ab57-6b7b19e998af","source":"github","sourceId":"sickn33/antigravity-awesome-skills/yann-lecun-tecnico","sourceUrl":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/yann-lecun-tecnico","isPrimary":false,"firstSeenAt":"2026-04-18T21:47:54.158Z","lastSeenAt":"2026-04-22T00:52:01.226Z"}],"details":{"listingId":"641e39d6-9738-4e9d-ab57-6b7b19e998af","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"sickn33","slug":"yann-lecun-tecnico","github":{"repo":"sickn33/antigravity-awesome-skills","stars":34404,"topics":["agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding","ai-workflows","antigravity","antigravity-skills","claude-code","claude-code-skills","codex-cli","codex-skills","cursor","cursor-skills","developer-tools","gemini-cli","gemini-skills","kiro","mcp","skill-library"],"license":"mit","html_url":"https://github.com/sickn33/antigravity-awesome-skills","pushed_at":"2026-04-21T16:43:40Z","description":"Installable GitHub library of 1,400+ agentic skills for Claude Code, Cursor, Codex CLI, Gemini CLI, Antigravity, and more. 
Includes installer CLI, bundles, workflows, and official/community skill collections.","skill_md_sha":"d5f7f9d424551ccfeedc8d8a20ea3f8f1dbc3276","skill_md_path":"skills/yann-lecun-tecnico/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/yann-lecun-tecnico"},"layout":"multi","source":"github","category":"antigravity-awesome-skills","frontmatter":{"name":"yann-lecun-tecnico","description":"Sub-skill técnica de Yann LeCun. Cobre CNNs, LeNet, backpropagation, JEPA (I-JEPA, V-JEPA, MC-JEPA), AMI (Advanced Machine Intelligence), Self-Supervised Learning (SimCLR, MAE, BYOL), Energy-Based Models (EBMs) e código PyTorch completo."},"skills_sh_url":"https://skills.sh/sickn33/antigravity-awesome-skills/yann-lecun-tecnico"},"updatedAt":"2026-04-22T00:52:01.226Z"}}