Physics-aware Causal DL Attribution
Attribute extreme-precipitation risk to key drivers—moving beyond black-box prediction.
Project Snapshot
Causal attribution
Deep learning
Climate extremes
PyTorch
Reproducibility
Overview
The goal is not only to predict precipitation extremes, but to quantify how key drivers (e.g., temperature forcing) contribute to extreme-event risk. The work emphasizes interpretable attribution outputs and robust, reproducible evaluation across multiple regions/blocks.
What I Built
- An end-to-end pipeline: data → preprocessing → training → attribution → robustness checks → reproducible artifacts.
- Attribution metrics based on fixed factual extreme sets and counterfactual responses (risk curves / CCDF-style views).
- Batch visualization and case studies for individual extreme events (sequence alignment and model responses).
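The "risk curves / CCDF-style views" above can be sketched as an empirical exceedance probability over a sample. This is a minimal illustration, not code from the repository; the variable names (`precip`, `empirical_ccdf`) and the gamma toy data are placeholders for the real precipitation fields:

```python
import numpy as np

def empirical_ccdf(samples, thresholds):
    """Empirical P(X > t) for each threshold t, estimated from samples."""
    samples = np.asarray(samples)
    return np.array([(samples > t).mean() for t in thresholds])

# toy precipitation sample (mm/day); a stand-in for observed data
rng = np.random.default_rng(0)
precip = rng.gamma(shape=2.0, scale=5.0, size=10_000)

# quantile thresholds define the "factual extreme set" tail
thresholds = np.quantile(precip, [0.90, 0.95, 0.99])
risk = empirical_ccdf(precip, thresholds)  # ≈ [0.10, 0.05, 0.01] by construction
```

Plotting `risk` against `thresholds` on a log scale gives the CCDF-style view; the same function applied to counterfactual model outputs yields the comparison curve.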
Method (high level)
- Define a factual “extreme set” using quantile thresholds (e.g., top tail of observed precipitation).
- Train a deep model to learn the mapping from climate variables to precipitation outcomes.
- Evaluate how the predicted risk distribution shifts under counterfactual driver settings, and summarize the shift as an attribution statement (e.g., a probability ratio for exceeding the extreme threshold).
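The three steps above can be sketched end to end on synthetic data. Everything here is a hedged stand-in: `model` is a monotone toy response in place of the trained PyTorch network, and the +1 °C perturbation is an illustrative counterfactual, not a result from the project:

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic driver: temperature forcing (°C), standing in for reanalysis input
temp = rng.normal(15.0, 3.0, size=20_000)

def model(t):
    # toy monotone driver->precipitation response; the real pipeline
    # would call the trained deep model here
    return np.exp(0.08 * t)

# 1) factual extreme set via a quantile threshold on factual outputs
tau = np.quantile(model(temp), 0.95)

# 2) exceedance risk under factual vs. counterfactual (+1 °C) forcing
p_factual = (model(temp) > tau).mean()           # ≈ 0.05 by construction
p_counterfactual = (model(temp + 1.0) > tau).mean()

# 3) attribution summary: probability ratio of exceeding the threshold
risk_ratio = p_counterfactual / p_factual
```

A `risk_ratio` above 1 reads as "the perturbed driver makes exceeding the factual extreme threshold this many times more likely", which is the one-sentence takeaway format used in the Results section.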
Results (replace with your numbers)
- Key plots: CCDF/risk curves, attribution curves, sensitivity analyses (add screenshots).
- Comparisons: different blocks/thresholds/training settings and stability over time.
- One-sentence takeaway: how risk changes as a key driver is perturbed.
Artifacts
- 📄 Paper/report: TODO
- 💻 Code repository: TODO
- 🖼️ Figures: TODO (2–4 per page works well)
Next Steps
- Standardize the attribution interface: (region, threshold) → curves + uncertainty.
- Expand robustness: dataset splits, alternate reanalysis/model sources, and sensitivity testing.
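The "(region, threshold) → curves + uncertainty" interface in the first bullet could take a shape like the following. The class and function names are hypothetical, offered only as a sketch of the proposed contract:

```python
from dataclasses import dataclass
from typing import Sequence

@dataclass
class AttributionResult:
    thresholds: Sequence[float]            # precipitation thresholds (mm/day)
    risk_factual: Sequence[float]          # P(X > t) under factual forcing
    risk_counterfactual: Sequence[float]   # P(X > t) under counterfactual forcing
    ci_lower: Sequence[float]              # uncertainty band (e.g., bootstrap CI)
    ci_upper: Sequence[float]

def attribute(region: str, threshold_quantile: float) -> AttributionResult:
    """Standardized entry point: (region, threshold) -> curves + uncertainty."""
    raise NotImplementedError  # to be wired to the pipeline's data/model loaders
```

Fixing a return type like this would let the robustness checks (splits, alternate reanalysis sources, sensitivity tests) all consume and compare the same object.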