{"id":"049fd2e9-f2f5-4481-8ba5-2c7a45e83363","shortId":"a7hjXR","kind":"skill","title":"python-econ-computing","tagline":"Use when writing Python code for DSGE models, HANK models, numerical economic computation, causal inference, or quantitative economic data analysis","description":"# Python Economic Numerical Computing\n- Author：Wenli Xu\n- Email： wlxu@cityu.edu.mo\n- 2026-03-11\n---\n\n\n## Overview\n\nBest practices for macroeconomic modeling (DSGE/HANK), causal inference, and data analysis in Python. Core principle: **vectorize first, accelerate loops with Numba, keep code structure aligned with economic theory**.\n\n---\n\n## Library Quick Reference\n\n| Use Case | Preferred Libraries |\n|----------|-------------------|\n| Numerical core | `numpy`, `scipy` |\n| Loop acceleration | `numba` (`@njit`, `@njit(parallel=True)`) |\n| Economics toolkit | `quantecon` |\n| HANK / sequence space | `sequence_jacobian` (SSJ) |\n| Heterogeneous agents | `HARK` |\n| **Linear models with FE** | **`pyfixest`** (`pip install pyfixest`) |\n| **DID / DD / DDD** | **`diff-diff`** (`pip install diff-diff`) |\n| **IV / 2SLS / GMM** | **`linearmodels`** (or `pyfixest` for panel IV with FE) |\n| **RD / RDD / RKD** | **`rdrobust`**, `rddensity`, `rdlocrand` |\n| **Synthetic Control** | **`pysynth`**, `synth_control`, `sdid` |\n| **Matching** | **`causalml`**, `pymatch`, `econml` |\n| **Causal ML / DML** | **`econml`**, `dowhy` |\n| Data manipulation | `pandas`, `polars` (large datasets) |\n| Visualization | `matplotlib`, `seaborn` |\n\n---\n\n## DSGE Models\n\n### Linearization and Solution (Blanchard-Kahn)\n\n```python\nimport numpy as np\nfrom scipy.linalg import ordqz\n\ndef solve_bk(A, B, n_fwd):\n    \"\"\"\n    Solve linear DSGE: A E_t[x_{t+1}] = B x_t + C eps_t\n    n_fwd: number of forward-looking variables\n    Returns decision rule matrix P such that x_t = P x_{t-1} + ...\n    \"\"\"\n    AA, BB, alpha, beta, Q, Z = ordqz(A, B, sort='ouc')\n    n = A.shape[0]\n    Z21 = Z[n 
- n_fwd:, :n - n_fwd]\n    Z22 = Z[n - n_fwd:, n - n_fwd:]\n    P = -np.linalg.solve(Z22, Z21)\n    return P\n```\n\n### Perturbation Methods (Second-Order Approximation)\n\n- Use `quantecon.lqcontrol` for LQ problems\n- Higher-order perturbation: `perturbpy` or manual implementation\n- Steady-state solving: `scipy.optimize.fsolve` / `root`\n\n---\n\n## HANK Models\n\n### Sequence-Space Jacobian Method (SSJ)\n\n```python\nimport sequence_jacobian as sj\n\n# 1. Define steady-state blocks\n@sj.simple\ndef household_ss(r, w, beta, sigma):\n    # Return steady-state aggregates\n    ...\n\n# 2. Build DAG\nmodel = sj.create_model([household_block, firm_block, market_clearing],\n                         name='HANK')\n\n# 3. Solve steady state\nss = model.solve_steady_state(calibration, unknowns, targets)\n\n# 4. Compute Jacobians → solve transition dynamics\nG = model.solve_jacobian(ss, unknowns, targets, T=300)\n```\n\n### Value Function Iteration — Numba Accelerated\n\n```python\nfrom numba import njit\nimport numpy as np\n\n@njit\ndef vfi(V0, a_grid, y_grid, trans_mat, r, beta, sigma, tol=1e-8, max_iter=1000):\n    \"\"\"Heterogeneous agent VFI over asset grid × income grid.\n    trans_mat[iy, iy2] = P(y' = y_grid[iy2] | y = y_grid[iy])\"\"\"\n    n_a, n_y = len(a_grid), len(y_grid)\n    V = V0.copy()\n    policy = np.zeros((n_a, n_y))\n\n    for it in range(max_iter):\n        EV = V @ trans_mat.T  # EV[ia2, iy] = E[V(a', y') | y], so the choice of a' matters\n        V_new = np.empty_like(V)\n        for ia in range(n_a):\n            for iy in range(n_y):\n                best_val = -1e10\n                best_a = 0\n                for ia2 in range(n_a):\n                    c = (1 + r) * a_grid[ia] + y_grid[iy] - a_grid[ia2]\n                    if c <= 0:\n                        continue\n                    u = c ** (1 - sigma) / (1 - sigma)\n                    val = u + beta * EV[ia2, iy]\n                    if val > best_val:\n                        best_val = val\n                        best_a = ia2\n                V_new[ia, iy] = best_val\n                policy[ia, iy] = 
a_grid[best_a]\n        if np.max(np.abs(V_new - V)) < tol:\n            break\n        V = V_new\n    return V, policy\n```\n\n### Distribution Iteration (Young 2010)\n\n```python\nimport numpy as np\n\ndef iterate_distribution(policy_idx, trans_mat, dist0, T=500):\n    \"\"\"Iterate joint distribution to steady state given policy indices and income transition matrix.\n    Point-mass assignment to grid nodes; Young (2010) proper splits mass between\n    adjacent nodes with lottery weights.\"\"\"\n    dist = dist0.copy()\n    n_a, n_y = dist.shape\n    for _ in range(T):\n        dist_new = np.zeros_like(dist)\n        for iy in range(n_y):\n            for iy2 in range(n_y):\n                # np.add.at accumulates repeated indices; plain fancy-index += does not\n                np.add.at(dist_new[:, iy2], policy_idx[:, iy],\n                          dist[:, iy] * trans_mat[iy, iy2])\n        dist = dist_new\n    return dist\n```\n\n---\n\n## Linear Models with Fixed Effects (pyfixest)\n\n**Rule: For any OLS/Poisson/Logit with fixed effects, use `pyfixest`. It mirrors R's `fixest` syntax.**\n\n```python\nimport pyfixest as pf\n\n# OLS with unit + time FE, cluster-robust SEs\nfit = pf.feols(\"y ~ treat_post | unit + year\",\n               data=df, vcov={\"CRV1\": \"id\"})\nfit.summary()\n\n# Multiple high-dimensional FE (Frisch-Waugh absorbed)\nfit = pf.feols(\"y ~ x1 + x2 | unit + year + industry\",\n               data=df, vcov={\"CRV1\": \"id\"})\n\n# Wild cluster bootstrap (few clusters, <50)\nfit = pf.feols(\"y ~ treat_post | unit + year\",\n               data=df, vcov={\"CRV1\": \"id\"})\nfit.wildboottest(param=\"treat_post\", B=9999, seed=42)\n\n# Event study via i() syntax\nfit = pf.feols(\"y ~ i(rel_year, ref=-1) | unit + year\",\n               data=df, vcov={\"CRV1\": \"id\"})\npf.iplot(fit)  # event study plot\n\n# Poisson (count / log-linear) with FE\nfit_pois = pf.fepois(\"y ~ treat_post | unit + year\",\n                     data=df, vcov={\"CRV1\": \"id\"})\n\n# Access results\nfit.coef()           # coefficient estimates\nfit.se()             # standard errors\nfit.pvalue()         # p-values\nfit.confint()        # confidence intervals\nfit._N               # number of observations\n```\n\n### pyfixest vs 
statsmodels\n\n| Use case | Use |\n|----------|-----|\n| OLS / WLS with any FE | `pyfixest` |\n| Poisson / logit with FE | `pyfixest` |\n| Wild bootstrap | `pyfixest` |\n| Time-series ARIMA, VAR | `statsmodels` |\n| MLE / GLM without FE | `statsmodels` |\n\n---\n\n## Causal Inference: DID / DD / DDD Methods\n\n**Rule: For any DiD, DD, DDD, or staggered difference-in-differences design, use `diff-diff` and follow the General Empirical Workflow below.**\n\nSource: https://github.com/igerber/diff-diff | https://github.com/wenddymacro/A-General-Empirical-Workflow-for-DID\n\n### diff-diff Estimator Reference\n\n| Alias | Class | Use When |\n|-------|-------|----------|\n| `DiD` | `DifferenceInDifferences` | Basic 2×2 DiD |\n| `TWFE` | `TwoWayFixedEffects` | Standard panel DiD |\n| `EventStudy` | `MultiPeriodDiD` | Dynamic effects / event study |\n| `CS` | `CallawaySantAnna` | Staggered adoption, heterogeneous effects |\n| `SA` | `SunAbraham` | Staggered, avoids negative weights |\n| `BJS` | `ImputationDiD` | Borusyak et al. 
imputation approach |\n| `SDiD` | `SyntheticDiD` | Synthetic DiD |\n| `DDD` | `TripleDifference` | Triple difference |\n\n### Key API Parameters\n\n```python\nfrom diff_diff import DiD, TWFE, EventStudy, CS, SA, BJS, DDD\n\n# Common fit() arguments\nresults = estimator.fit(\n    data,\n    outcome='y',           # dependent variable\n    treatment='treated',   # binary treatment indicator\n    time='post',           # binary post-period (or period var for panel)\n    unit='id',             # unit identifier (panel)\n    covariates=['x1','x2'],# control variables\n    absorb=['region'],     # high-dim fixed effects (within-transform)\n    cluster='id',          # clustered standard errors\n    robust=True,           # HC1 robust SEs\n    inference='wild_bootstrap',  # for few clusters (<50)\n    n_bootstrap=999,\n)\n\n# Results\nresults.att           # ATT estimate\nresults.se            # standard error\nresults.p_value\nresults.conf_int      # confidence interval tuple\nresults.print_summary()\nresults.to_dataframe()\n```\n\n### DDD (Triple Difference)\n\n```python\nfrom diff_diff import DDD\n\nddd = DDD()\nresults = ddd.fit(\n    data,\n    outcome='y',\n    treatment='treated',\n    time='post',\n    third_diff='group_var',  # third differencing dimension\n    cluster='id',\n)\n```\n\n---\n\n## DID Empirical Workflow (11 Steps)\n\nFollow this workflow for every DID/DD/DDD paper or analysis.\n\n### Step 1 — Data & Descriptive Statistics\n\n- Construct panel: define units, time span, treatment variable\n- Document data sources, missing values, winsorization\n- Describe cohort structure (treated units per cohort, policy onset dates)\n- **Table:** full-sample stats + treated vs. 
control comparison with t-tests\n- Pre-treatment covariate balance test\n\n**Covariate types:**\n\n| Type | Form | Purpose |\n|------|------|---------|\n| Covariates | Pre-treatment, time-invariant | Condition parallel trends |\n| Control variables | Baseline × time trend | Absorb residual heterogeneity |\n\n### Step 2 — Identification Strategy\n\nState and justify:\n1. **Parallel trends** — partially testable via pre-period event study\n2. **SUTVA / no interference** — argue no cross-unit spillovers\n3. **No anticipation** — pre-period coefficients should be zero\n\nDocument policy assignment mechanism; cite policy documents for exogeneity.\n\n### Step 3 — Baseline Regression\n\nModel:\n$$Y_{it} = \\alpha + \\beta(\\text{Treat}_i \\times \\text{Post}_{it}) + \\gamma W_i + \\delta(Z_i^{pre} \\times t) + \\mu_i + \\lambda_t + \\varepsilon_{it}$$\n\nRun **six progressive specifications** (M1–M6):\n\n| Model | Unit FE | Time FE | Covariates | Baseline×Trend | Regional FE | Unit Trend |\n|-------|---------|---------|-----------|----------------|------------|-----------|\n| M1 | ✓ | — | — | — | — | — |\n| M2 | ✓ | ✓ | — | — | — | — |\n| M3 | ✓ | ✓ | ✓ | ✓ | — | — |\n| M4 | ✓ | ✓ | ✓ | ✓ | ✓ | — |\n| M5 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |\n| M6 | ✓ | ✓ | ✓ | ✓ | Industry×Year | ✓ |\n\nCoefficient stability across M1→M6 supports identification. 
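The coefficient-stability logic can be quantified with an Oster-style bound. A minimal sketch, assuming the common approximation delta* ≈ beta_c(R2_c − R2_u) / [(beta_u − beta_c)(R2_max − R2_c)] rather than Oster's (2019) exact estimator (which solves a cubic; see Stata's `psacalc`); the function name and toy numbers are illustrative only:

```python
def oster_delta_star(beta_u, r2_u, beta_c, r2_c, r2_max):
    """Approximate Oster (2019) delta*: how strong selection on unobservables
    (relative to observables) must be to drive the treatment effect to zero.
    beta_u, r2_u: coefficient and R^2 from the regression WITHOUT controls
    beta_c, r2_c: coefficient and R^2 from the regression WITH controls
    r2_max: assumed attainable R^2 (Oster's heuristic: 1.3 * r2_c)"""
    return beta_c * (r2_c - r2_u) / ((beta_u - beta_c) * (r2_max - r2_c))

# Toy numbers: coefficient moves 0.8 -> 0.5 while R^2 rises 0.10 -> 0.30
delta = oster_delta_star(beta_u=0.8, r2_u=0.10, beta_c=0.5, r2_c=0.30,
                         r2_max=1.3 * 0.30)
print(round(delta, 2))  # prints 3.7
```

A value above 1 means the estimate survives selection on unobservables as strong as selection on observables, consistent with the |δ*| thresholds used in this workflow.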
Report **Oster (2019) δ\\*** (selection bias ratio); |δ*| > 1 = basic robustness, |δ*| > 2 = strong.\n\n```python\nimport pyfixest as pf\n\n# M1: unit FE only\npf.feols(\"y ~ treat_post | unit\", data=df, vcov={\"CRV1\": \"id\"}).summary()\n\n# M2: unit + time FE\npf.feols(\"y ~ treat_post | unit + year\", data=df, vcov={\"CRV1\": \"id\"}).summary()\n\n# M3–M4: add covariates and baseline×trend\npf.feols(\"y ~ treat_post + x1 + x2 + baseline:year | unit + year\",\n         data=df, vcov={\"CRV1\": \"id\"}).summary()\n\n# M5: + regional FE\npf.feols(\"y ~ treat_post + x1 + x2 + baseline:year | unit + year + region\",\n         data=df, vcov={\"CRV1\": \"id\"}).summary()\n\n# M6: + industry×year FE\npf.feols(\"y ~ treat_post + x1 + x2 + baseline:year | unit + industry^year\",\n         data=df, vcov={\"CRV1\": \"id\"}).summary()\n```\n\n### Step 4 — Parallel Trends: Event Study Plots\n\n```python\nfrom diff_diff import EventStudy\n\nes = EventStudy()\nres = es.fit(data, outcome='y', treatment='treated',\n             unit='id', time='year', base_period=-1,\n             cluster='id')\nres.plot()  # shows pre/post coefficients with CIs\n```\n\n**Plot standards:**\n- Y-axis: \"Coefficient (ATT)\"; X-axis: \"Years relative to policy\"\n- Reference lines at 0; base period k = −1 omitted\n- Show M6 in main text; M1–M5 in appendix\n- Pre-period coefficients (k ∈ {−4,−3,−2}) should be statistically insignificant\n\n**Run all estimators for staggered designs:**\n\n```python\nfrom diff_diff import CS, SA, BJS\n\n# Callaway & Sant'Anna (2021)\ncs = CS().fit(data, outcome='y', unit='id', time='year',\n              cohort='treat_year', control_group='never_treated', cluster='id')\n\n# Sun & Abraham (2021)\nsa = SA().fit(data, outcome='y', unit='id', time='year',\n              cohort='treat_year', cluster='id')\n\n# Borusyak et al. 
(2024) imputation\nbjs = BJS().fit(data, outcome='y', unit='id', time='year',\n                cohort='treat_year', horizons=range(5), cluster='id')\n```\n\n### Step 5 — Parallel Trends Sensitivity: HonestDiD\n\nUse Rambachan-Roth (2023) bounds to quantify robustness to parallel trends violations.\n\n```python\n# After event study, extract pre/post coefficients and covariance\n# Pass to HonestDiD (R package via rpy2, or use diff-diff's built-in honest DiD)\nfrom diff_diff import EventStudy\n\nes = EventStudy()\nres = es.fit(data, ..., honest_did=True,\n             sensitivity_constraint='smoothness')\nres.plot_honest_did()  # shows identified set under relaxed PT assumption\n```\n\n### Step 6 — Rule Out Alternative Explanations\n\n| Threat | Test |\n|--------|------|\n| Spatial spillovers | Geographic placebo; effect by distance from treated units |\n| Anticipation effects | Pre-period event-study coefficients ≈ 0 |\n| Policy overlap | Exclude or control for concurrent policies |\n\n### Step 7 — Robustness Checks\n\n```python\nfrom diff_diff import DiD\nfrom diff_diff.diagnostics import PlaceboTest, GoodmanBaconDecomposition\n\n# Goodman-Bacon decomposition (TWFE bias diagnosis)\ngb = GoodmanBaconDecomposition().fit(data, outcome='y', treatment='treated',\n                                      unit='id', time='year')\ngb.plot()\n\n# Placebo tests\nplacebo = PlaceboTest(method='fake_timing').fit(data, ...)\nplacebo = PlaceboTest(method='permutation', n_permutations=500).fit(data, ...)\n\n# Subsample / specification robustness\nfor subsample_mask in subsamples:\n    res = CS().fit(data[subsample_mask], ...)\n```\n\nStandard robustness battery:\n- Placebo / fake timing\n- Falsification on pre-reform period\n- Alternative outcome variables\n- Subsample splits by pre-determined characteristics\n- Randomization inference\n\n### Step 8 — Heterogeneous Treatment Effects\n\n```python\n# Triple difference for effect heterogeneity by subgroup Z\nfrom diff_diff import 
DDD\n\nddd = DDD().fit(data, outcome='y', treatment='treated',\n                time='post', third_diff='high_exposure', cluster='id')\n\n# Interaction-based heterogeneity in TWFE\nfrom diff_diff import TWFE\nres = TWFE().fit(data, outcome='y',\n                 treatment='treat_post',\n                 interactions=['treat_post:firm_size'],\n                 absorb=['id', 'year'], cluster='id')\n```\n\n### Step 9 — Mechanism Analysis\n\n1. **Outcome ladder:** immediate → intermediate → final outcome\n2. **Mediation:** include proposed mechanism as control; compare ATT with/without\n3. **Heterogeneity as mechanism:** subgroup event studies by initial conditions\n\n### Step 10 — Welfare & Policy Implications\n\n- Aggregate ATTs to policy-level impacts\n- Cost-benefit ratios\n- Distributional effects (winners vs. losers)\n\n### Step 11 — Full Workflow Summary\n\n```\n1. Data & balance\n2. Identification assumptions\n3. Baseline specs M1–M6 + Oster δ*\n4. Event study (TWFE + CS + SA + BJS)\n5. HonestDiD sensitivity\n6. Alternative explanations\n7. Robustness battery\n8. Heterogeneous effects (DDD / interactions)\n9. Mechanisms\n10. Welfare implications\n```\n\n---\n\n## Causal Inference: Method Selection\n\n```\nWhat is your identification strategy?\n├── Policy/treatment with parallel trends → DID (see above)\n├── Exogenous instrument for endogenous X → IV\n├── Discontinuity in assignment rule → RD / RKD\n├── Control units that can be reweighted → Synthetic Control\n├── Selection on observables → Matching / IPW\n└── High-dimensional / ML setting → DML / Causal Forest\n```\n\n---\n\n## Instrumental Variables (IV / 2SLS / GMM)\n\n**Library:** `linearmodels` (preferred over statsmodels for panel IV)\n\n**Key assumptions:** Relevance (F > 10, ideally > 104 per Lee et al. 
2022), Exclusion restriction, Independence.\n\n```python\nfrom linearmodels.iv import IV2SLS, IVGMM, IVLIML\n\n# Basic 2SLS: y ~ X_exog + [X_endog ~ Z_instruments]\nres = IV2SLS(dependent=y,\n             exog=X_exog,        # included exogenous (+ constant)\n             endog=X_endog,      # endogenous regressors\n             instruments=Z).fit(cov_type='robust')\n\n# Panel IV with fixed effects\nfrom linearmodels import PanelOLS, BetweenOLS\nfrom linearmodels.iv import IV2SLS\n# absorb FE first (within transform), then IV on residuals\n# or use linearmodels.panel with IV support\n\n# GMM (efficient with heteroskedasticity)\nres = IVGMM(y, X_exog, X_endog, Z).fit(cov_type='robust')\n\n# LIML (less biased with weak instruments)\nres = IVLIML(y, X_exog, X_endog, Z).fit(cov_type='robust')\n\n# Key diagnostics\nprint(res.first_stage)          # first-stage F-statistic\nprint(res.wooldridge_score)     # endogeneity test (H0: OLS consistent)\nprint(res.sargan)               # overidentification test (J-stat, requires overid)\n```\n\n### IV Diagnostics Checklist\n\n| Test | What it checks | Pass if |\n|------|---------------|---------|\n| First-stage F | Instrument relevance | F > 104 (Lee et al.) 
or > 10 (rule of thumb) |\n| Cragg-Donald / Kleibergen-Paap | Weak instrument (multiple endog) | > Stock-Yogo critical values |\n| Sargan-Hansen J-test | Overidentification (exclusion) | p > 0.1 (can't reject validity) |\n| Hausman / Wooldridge | Endogeneity of X | p < 0.05 → IV needed |\n| Reduced form | Instrument affects outcome | Should be significant |\n\n```python\n# Anderson-Rubin confidence set (robust to weak instruments)\nfrom linearmodels.iv import IV2SLS\nres = IV2SLS(y, X_exog, X_endog, Z).fit(cov_type='robust')\nprint(res.anderson_rubin)  # AR test, valid even with weak instruments\n\n# Kernel (HAC) SEs. Note: cov_type='kernel' is Newey-West-style HAC for serial\n# correlation, not Conley spatial SEs; for geographic instruments with spatial\n# correlation, use a dedicated Conley SE implementation\nres = IV2SLS(y, X_exog, X_endog, Z).fit(cov_type='kernel', bandwidth=5)\n```\n\n### Bartik / Shift-Share IV\n\n```python\n# Bartik instrument: Z_i = sum_k s_{ik} * g_k\n# s_{ik}: industry share of unit i; g_k: national industry growth\nimport numpy as np\n\ndef bartik_instrument(shares, growth):\n    \"\"\"\n    shares: (n_units, n_industries)\n    growth: (n_industries,)\n    returns: (n_units,) Bartik instrument\n    \"\"\"\n    return shares @ growth\n```\n\n---\n\n## Regression Discontinuity (RD / RKD / Fuzzy RD)\n\n**Library:** `rdrobust` (Python port of R package)\n\n**Key assumption:** Units cannot precisely manipulate the running variable around the cutoff.\n\n```python\nfrom rdrobust import rdrobust, rdbwselect, rdplot\n\n# Sharp RD\nres = rdrobust(y, x, c=cutoff)          # default: MSE-optimal bandwidth, local linear\nres = rdrobust(y, x, c=0,\n               kernel='triangular',     # triangular (default) / uniform / epanechnikov\n               bwselect='mserd',        # MSE-optimal (default); 'cerrd' for coverage\n               vce='hc1',              # robust SEs\n               cluster=cluster_var)\nprint(res)\n\n# Fuzzy RD (instrument = 1[x >= c])\nres_fuzzy = rdrobust(y, x, c=0,\n                     fuzzy=treatment_var)  # IV-style, estimates LATE\n\n# Bandwidth selection\nbw = rdbwselect(y, x, c=0, 
bwselect='mserd')\nprint(bw.bws)    # optimal bandwidth\n\n# Visualization\nrdplot(y, x, c=0)   # binned scatter with polynomial fit\n```\n\n### RD Diagnostics Checklist\n\n```python\nfrom rddensity import rddensity\nfrom rdrobust import rdrobust\n\n# 1. McCrary density test (H0: no manipulation at cutoff)\nden = rddensity(x, c=cutoff)\nprint(den.test)   # p > 0.05: no evidence of manipulation\n\n# 2. Covariate smoothness (placebo on pre-determined covariates)\nfor cov in baseline_covariates:\n    res = rdrobust(cov, x, c=cutoff)\n    print(f'{cov}: {res.coef[0]:.3f} (p={res.pv[2]:.3f})')  # should be insignificant\n\n# 3. Placebo cutoffs (should find no effect at fake cutoffs)\nfor fake_c in [cutoff - 0.5, cutoff + 0.5]:\n    res = rdrobust(y, x, c=fake_c)\n    print(f'Placebo c={fake_c}: {res.coef[0]:.3f}')\n\n# 4. Sensitivity to bandwidth\nfor h in [bw_opt * 0.5, bw_opt * 0.75, bw_opt, bw_opt * 1.25, bw_opt * 1.5]:\n    res = rdrobust(y, x, c=cutoff, h=h)\n    print(f'h={h:.2f}: {res.coef[0]:.3f}')\n\n# 5. 
Donut hole (exclude units very close to cutoff)\nmask = np.abs(x - cutoff) > donut_radius\nres_donut = rdrobust(y[mask], x[mask], c=cutoff)\n```\n\n### Regression Kink Design (RKD)\n\n```python\n# RKD: identifies effect via kink (slope discontinuity) rather than level jump\nres_rkd = rdrobust(y, x, c=cutoff, deriv=1)  # deriv=1 estimates slope discontinuity\n```\n\n---\n\n## Synthetic Control\n\n**Use when:** Few treated units (often N=1), long pre-treatment panel, no obvious control group.\n\n**Libraries:** `pysynth`, `synth_control` (pip), or manual implementation via `scipy.optimize`.\n\n```python\n# --- Option 1: pysynth ---\nfrom pysynth import Synth\n\nsc = Synth()\nsc.fit(\n    dataprep={\n        'foo_table': df,\n        'predictors': ['gdp', 'trade', 'invest'],\n        'predictors_op': 'mean',\n        'time_predictors_prior': list(range(1980, 1990)),\n        'special_predictors': [('gdp', [1985, 1988], 'mean')],\n        'dependent': 'gdp',\n        'unit_variable': 'country',\n        'time_variable': 'year',\n        'treatment_identifier': 'basque',\n        'controls_identifier': control_countries,\n        'time_optimize_ssr': list(range(1960, 1990)),\n        'time_plot': list(range(1960, 1998)),\n    }\n)\nsc.plot(['trends', 'weights', 'gaps'])\n\n# --- Option 2: manual (scipy) ---\nfrom scipy.optimize import minimize\nimport numpy as np\n\ndef synth_loss(w, Y_pre_control, Y_pre_treated):\n    \"\"\"Minimize pre-treatment fit: ||Y_treated - Y_control @ w||^2\"\"\"\n    return np.sum((Y_pre_treated - Y_pre_control @ w) ** 2)\n\nn_controls = Y_pre_control.shape[1]\nw0 = np.ones(n_controls) / n_controls\nconstraints = [{'type': 'eq', 'fun': lambda w: w.sum() - 1}]\nbounds = [(0, 1)] * n_controls\n\nres = minimize(synth_loss, w0,\n               args=(Y_pre_control, Y_pre_treated),\n               method='SLSQP',\n               bounds=bounds,\n               constraints=constraints)\nw_opt = res.x\nY_synth = Y_post_control @ w_opt\ngap = 
Y_post_treated - Y_synth\n```\n\n### Synthetic Control Diagnostics\n\n```python\n# Pre-treatment fit (RMSPE)\nrmspe_pre = np.sqrt(np.mean((Y_pre_treated - Y_pre_control @ w_opt)**2))\n\n# Placebo tests: apply SC to each control unit, compute distribution of gaps\nplacebo_gaps = []\nfor ctrl_idx, ctrl in enumerate(control_units):\n    Y_treated_placebo = Y_pre[:, ctrl_idx]\n    Y_control_placebo = np.delete(Y_pre, ctrl_idx, axis=1)\n    # ... fit SC on the placebo data as above, compute its post-period gap_placebo\n    placebo_gaps.append(gap_placebo)\n\n# Ratio: treated RMSPE_post / RMSPE_pre vs. controls (Abadie et al. 2010)\nratio_treated = rmspe_post / rmspe_pre\n# Inference: fraction of placebos with ratio >= ratio_treated → p-value\n\n# In-time placebo: apply SC using period before actual treatment as fake treatment\n# In-space placebo: already done above\n```\n\n### Synthetic DiD (SDiD)\n\n```python\n# Combines SC weights with DiD — robust to both parallel trends violations and\n# imperfect pre-treatment fit\nfrom diff_diff import SDiD\n\nsdid = SDiD()\nres = sdid.fit(data, outcome='y', treatment='treated',\n               unit='id', time='year', cluster='id')\nres.print_summary()\n```\n\n---\n\n## Matching and Reweighting\n\n**Use when:** Selection on observables; rich baseline covariate data.\n\n**Estimands:** ATT (treated vs. matched controls), ATE (population average).\n\n### Propensity Score Methods\n\n```python\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.preprocessing import StandardScaler\nimport numpy as np\n\n# 1. Estimate propensity score\nX_scaled = StandardScaler().fit_transform(X_covariates)\nps_model = LogisticRegression(C=1.0, max_iter=1000)\nps_model.fit(X_scaled, treatment)\np_score = ps_model.predict_proba(X_scaled)[:, 1]\n\n# 2. 
Check overlap / common support\nimport matplotlib.pyplot as plt\nplt.hist(p_score[treatment==1], alpha=0.5, label='Treated', bins=30)\nplt.hist(p_score[treatment==0], alpha=0.5, label='Control', bins=30)\nplt.legend(); plt.xlabel('Propensity Score')\n# Trim tails: drop obs with p_score outside [0.05, 0.95]\nmask = (p_score >= 0.05) & (p_score <= 0.95)\n```\n\n### IPW / AIPW (Doubly Robust)\n\n```python\nfrom econml.dr import LinearDRLearner\nfrom sklearn.linear_model import LassoCV, LogisticRegressionCV\n\n# Doubly robust (AIPW) — consistent if either outcome or propensity model correct\ndr = LinearDRLearner(\n    model_regression=LassoCV(),      # outcome model\n    model_propensity=LogisticRegressionCV(),  # propensity model\n    featurizer=None\n)\ndr.fit(Y, T, X=X_het, W=X_controls)  # X: effect modifiers, W: controls\nate = dr.ate(X_het)\nprint(dr.ate_interval(X_het))         # confidence interval\n```\n\n### Nearest-Neighbor Matching\n\n```python\nfrom causalml.match import NearestNeighborMatch\nfrom causalml.propensity import ElasticNetPropensityModel\n\n# Propensity score matching\npm = ElasticNetPropensityModel()\nps = pm.fit_predict(X_covariates, treatment)\n\nmatcher = NearestNeighborMatch(replace=False, ratio=1, random_state=42)\nmatched = matcher.match(data=df, treatment_col='treated', score_cols=['ps'])\n\n# ATT on matched sample\natt = matched[matched.treated==1]['y'].mean() - matched[matched.treated==0]['y'].mean()\n\n# OR: Mahalanobis distance matching (better for low-dimensional X)\nfrom pymatch.Matcher import Matcher\nm = Matcher(test=df[df.treated==1], control=df[df.treated==0],\n            yvar='y', exclude=['id'])\nm.fit_scores(balance=True, nmodels=10)\nm.predict_scores()\nm.match(method='min', nmatches=1, threshold=0.001)\nm.assess_balance(actual=True)\n```\n\n### Covariate Balance Diagnostics\n\n```python\n# Standardized mean differences (SMD) before/after matching\ndef smd(x_treat, x_control):\n    return (x_treat.mean() - 
x_control.mean()) / np.sqrt(\n        (x_treat.var() + x_control.var()) / 2\n    )\n\nfor col in covariates:\n    before = smd(df[df.treated==1][col], df[df.treated==0][col])\n    after  = smd(matched[matched.treated==1][col], matched[matched.treated==0][col])\n    print(f'{col}: SMD before={before:.3f}, after={after:.3f}')\n# Target: |SMD| < 0.1 after matching\n\n# Love plot\nimport matplotlib.pyplot as plt\nsmds_before = [...]\nsmds_after  = [...]\nplt.scatter(smds_before, covariates, label='Before', marker='o')\nplt.scatter(smds_after,  covariates, label='After',  marker='s')\nplt.axvline(0, color='k', lw=0.5); plt.axvline(0.1, color='r', ls='--')\nplt.legend(); plt.xlabel('Standardized Mean Difference')\n```\n\n### Entropy Balancing\n\n```python\n# Reweight controls to exactly match treated means (and optionally variances)\n# Install: pip install ebal\nfrom ebal import ebal\n\n# Balances moments of X_controls exactly — no propensity model needed\nweights = ebal(X_control=X[treatment==0],\n               X_treated=X[treatment==1],\n               moments=1)   # 1=means, 2=means+variances\n\n# Use weights in weighted regression\nimport pyfixest as pf\ndf[\"w\"] = np.where(treatment == 1, 1.0, weights)\nres = pf.feols(\"y ~ treated\", data=df, weights=\"w\", vcov={\"CRV1\": \"id\"})\n```\n\n---\n\n## Double Machine Learning (DML) & Causal Forests\n\n**Use when:** High-dimensional controls; heterogeneous treatment effects; flexible functional form.\n\n```python\nfrom econml.dml import LinearDML, CausalForestDML, NonParamDML\nfrom econml.dr import ForestDRLearner\nfrom sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier\nfrom sklearn.linear_model import LassoCV, LogisticRegressionCV\n\n# --- Linear DML (Partially Linear Robinson model) ---\ndml = LinearDML(\n    model_y=LassoCV(),              # outcome residualization\n    model_t=LassoCV(),              # treatment residualization\n    discrete_treatment=False,\n    cv=5,\n)\ndml.fit(Y, 
T, X=X_het, W=X_controls)\nprint(dml.ate(), dml.ate_interval())\n\n# --- Causal Forest (nonparametric CATE) ---\ncf = CausalForestDML(\n    model_y=GradientBoostingRegressor(),\n    model_t=GradientBoostingRegressor(),\n    n_estimators=1000,\n    min_samples_leaf=5,\n    max_depth=5,\n    discrete_treatment=False,\n    cv=5,\n)\ncf.fit(Y, T, X=X_het, W=X_controls)\n\n# Heterogeneous effects\ntau_hat = cf.effect(X_het)            # CATE for each unit\nlb, ub = cf.effect_interval(X_het)   # 95% CI\n\n# Feature importance for heterogeneity\ncf.feature_importances_  # which X drives heterogeneity\n\n# Best linear predictor of CATE\nblp = cf.ate_inference(X_het)\nblp.summary_frame()\n\n# --- IV + DML (DRIV for endogenous treatment) ---\nfrom econml.iv.dr import LinearDRIV\ndriv = LinearDRIV(\n    model_y_xw=LassoCV(),\n    model_t_xw=LassoCV(),\n    model_z=LogisticRegressionCV(),  # instrument model\n    discrete_instrument=True,\n)\ndriv.fit(Y, T, Z=Z_instrument, X=X_het, W=X_controls)\n```\n\n---\n\n## Causal Method Selection Guide\n\n| Setting | Method | Key Library |\n|---------|--------|------------|\n| OLS / WLS / Poisson with FE | Linear models | **`pyfixest`** |\n| Panel + policy shock, parallel trends | DID / TWFE / CS / SA | `diff-diff` + `pyfixest` |\n| Staggered adoption | CS, SA, BJS | `diff-diff` |\n| Exogenous instrument | 2SLS / GMM / LIML | `linearmodels` (or `pyfixest` for panel IV) |\n| Weak instrument concern | AR confidence set, LIML | `linearmodels` |\n| Cutoff assignment rule | Sharp / Fuzzy RD | `rdrobust` |\n| Slope discontinuity | RKD | `rdrobust` (deriv=1) |\n| N=1 treated, long panel | Synthetic Control | `pysynth` / manual |\n| SC + panel structure | Synthetic DiD | `diff-diff` (SDiD) |\n| Selection on observables | PSM / IPW / EB | `causalml`, `ebal` |\n| High-dim controls, binary T | AIPW / DR-Learner | `econml` |\n| Heterogeneous effects | Causal Forest | `econml` |\n| Endogenous T + heterogeneity | DRIV | `econml` |\n\n---\n\n## 
General Numerical Patterns\n\n### Root Finding\n\n```python\nfrom scipy.optimize import brentq, root\n\n# Scalar: prefer brentq (robust)\nr_star = brentq(lambda r: asset_market_clearing(r, params), -0.05, 0.1)\n\n# Multivariate\nsol = root(equilibrium_system, x0=initial_guess, method='hybr', tol=1e-10)\n```\n\n### Income Process Discretization\n\n```python\nimport numpy as np\nimport quantecon as qe\n\n# Tauchen: AR(1) log y' = rho log y + sigma_e * eps\n# (signature changed across quantecon versions: recent releases use\n# tauchen(n, rho, sigma); older ones used tauchen(rho, sigma_u, ..., n=n))\nmc = qe.tauchen(n=7, rho=rho, sigma=sigma_e)\ny_grid = np.exp(mc.state_values)\ntrans_mat = mc.P\n\n# Rouwenhorst (better for high persistence)\nmc = qe.rouwenhorst(n=7, rho=rho, sigma=sigma_e)\n```\n\n### Performance Hierarchy\n\n```python\n# 1. Vectorize with numpy first\n# 2. Must loop → @njit\n# 3. Parallelizable outer loop → @njit(parallel=True) + prange\n# 4. Sparse structure → scipy.sparse\n\nimport numpy as np\nfrom numba import njit, prange\n\n@njit(parallel=True)\ndef parallel_vfi(V, a_grid, y_grid, beta, sigma):\n    n_a = len(a_grid)\n    V_new = np.empty_like(V)\n    for ia in prange(n_a):\n        ...\n    return V_new\n```\n\n---\n\n## Common Mistakes\n\n| Mistake | Correct Approach |\n|---------|-----------------|\n| TWFE with staggered treatment | Use CS / SA / BJS to avoid negative-weight bias |\n| DID without clustered SEs | `cluster='id'` in `diff_diff` |\n| Few clusters (<50) | `inference='wild_bootstrap'` in diff-diff |\n| IV: not checking first-stage F | Always print `res.first_stage`; F > 104 preferred |\n| IV: J-test p < 0.05 with overid | Instrument likely invalid; reconsider exclusion restriction |\n| RD: single bandwidth choice | Show robustness across multiple bandwidths |\n| RD: not testing density at cutoff | Run McCrary / `rddensity` test always |\n| Matching: not checking balance | Report SMD before/after; target \|SMD\| < 0.1 |\n| Matching: ignoring common support | Trim p-score outside [0.05, 0.95] |\n| SC: poor pre-treatment fit | RMSPE_pre high → SC weights unreliable; report fit explicitly |\n| VFI inner loops without Numba 
| Decorate with `@njit` |\n| Uniform grid for income | Tauchen / Rouwenhorst discretization |\n| Linear asset grid | Log/exponential spacing near borrowing constraint |\n| Not checking solver convergence | Inspect `sol.success` and residuals |\n\n---\n\n## Debugging Checklist\n\n**DID:** Pre-period event study coefficients ≈ 0; Goodman-Bacon decomposition for TWFE weight check\n\n**IV:** First-stage F > 104; reduced form significant; J-test p > 0.1 (overid); AR confidence set if weak instruments\n\n**RD:** Density test p > 0.05; covariates smooth at cutoff; robust to bandwidth choice\n\n**SC:** Pre-treatment RMSPE small; placebo RMSPE ratio (post/pre) for inference\n\n**Matching:** |SMD| < 0.1 after matching; Love plot; common support overlap\n\n**DSGE/HANK:** All market-clearing residuals `< 1e-8`; VFI: plot `max|V_{n+1} - V_n|`; Distribution: `assert np.isclose(dist.sum(), 1.0)`; Jacobian: `np.allclose(J_analytic, J_fd, rtol=1e-4)`","tags":["wenddymacro","python","econ","skill","awesome","agent","skills","for","empirical","research","brycewang-stanford","academic-research"],"capabilities":["skill","source-brycewang-stanford","skill-20-wenddymacro-python-econ-skill","topic-academic-research","topic-agent-skills","topic-ai-agent","topic-awesome-list","topic-communication","topic-copaper","topic-economics","topic-education","topic-empirical-research","topic-international-relations","topic-political-science","topic-psychology"],"categories":["Awesome-Agent-Skills-for-Empirical-Research"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/brycewang-stanford/Awesome-Agent-Skills-for-Empirical-Research/20-wenddymacro-python-econ-skill","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add 