Section 5 of 5 · 8 min read
Your Role
The goal of this course isn't a more nuanced opinion about AI. It's a clearer sense of what you'll do — and what you won't. That starts with separating what's settled from what's genuinely still open.
What's settled
Some things are not genuinely uncertain, even if they remain contested in public discourse. Treating them as open questions is itself a choice — one that tends to benefit the status quo.
- ✓ AI's environmental costs are real but context-dependent. Per-query efficiency is improving, but Jevons Paradox means total consumption continues to rise.
- ✓ AI's capabilities for specific climate tasks — planetary-scale monitoring, scientific discovery, grid optimization — are real, specific, and not hype.
- ✓ AI development is concentrated in a small number of companies in two countries. This shapes what gets built, who it serves, and what governance is possible.
- ✓ The attention shift is the most underrated risk: capital and talent flowing toward AI infrastructure rather than climate solutions at a moment when both are critically scarce.
What's genuinely uncertain
Epistemic honesty requires distinguishing things we know from things we're betting on. These are genuine bets:
- ? Whether AI capabilities keep improving at the current pace, plateau, or accelerate unexpectedly — and which scenario is better for climate.
- ? Whether the climate community can meaningfully shape AI development governance, or whether it will continue to be shaped for them.
- ? How AI safety and alignment questions resolve — and whether the timelines for those resolutions interact with climate crisis timelines.
- ? Whether the current moment of AI enthusiasm produces durable climate applications or mostly creates a bubble that redirects resources and attention.
The asymmetry argument for engaging now
AI tools are currently worse than they will be. You are learning them at their least capable moment. That's not an argument for patience — it's an argument for building expertise now, while the cost of mistakes is lower and the learning curve is steepest.
If AI systems will be embedded in energy infrastructure, conservation finance, climate policy, and corporate sustainability reporting — and they will be, whether climate professionals engage or not — then the question isn't whether to engage. It's whether people with genuine climate expertise will shape how those systems are designed, deployed, and constrained.
The specific skill that makes this possible is what this course calls translation engineering: the ability to bridge what AI can do at scale with what climate work requires on the ground. It isn't a purely technical skill and it isn't a purely domain skill. It lives between them — and climate professionals are almost uniquely positioned to develop it.
Responsible AI means: do no harm, be transparent, maintain appropriate oversight. Good AI means: actually solve a real problem for real people with lasting impact. You need both. This course has been about developing the judgment to know the difference.
Exercise
Your commitments
Two questions. Keep your answers specific: vague commitments don't stick, and public commitments stick better than private ones. If you share yours on LinkedIn, tag Terra Studio.
Course complete
You've finished Responsible AI for Climate. You can now explain what AI actually is, identify where it specifically helps and harms, and apply a repeatable framework to any proposal you encounter.