Terra Studio/Responsible AI

Section 2 of 5 · 15 min read

Where AI Breaks

The harms are real, specific, and frequently obscured by optimistic framing from the same companies selling the tools. This section maps them clearly — not to argue against using AI, but to use it without being naive.

The energy math is more complicated than you've heard

AI uses roughly 1.5% of global electricity today, with demand expected to double by 2030. Per-query energy efficiency is genuinely improving — about 33× over recent years — but this runs directly into Jevons Paradox: when something gets cheaper, people use more of it. Efficiency gains haven't reduced total consumption; they've enabled expansion.
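The Jevons dynamic is easiest to see as arithmetic: total consumption is per-unit energy times usage volume, so a 33× efficiency gain is outrun by any usage growth above 33×. A minimal sketch, using the 33× figure from the text and a hypothetical 50× usage-growth factor chosen only to illustrate the mechanism:

```python
# Illustrative sketch of Jevons Paradox: per-query efficiency improves,
# yet total consumption rises because usage grows faster.
# The 50x usage-growth factor is hypothetical, not a real-world figure.

def total_energy(queries: float, energy_per_query: float) -> float:
    """Total energy = usage volume x per-unit energy."""
    return queries * energy_per_query

# Baseline: 1 million queries at 1 energy unit each.
baseline = total_energy(1_000_000, 1.0)

# Later: per-query energy falls 33x, but query volume grows 50x.
later = total_energy(1_000_000 * 50, 1.0 / 33)

print(later / baseline)  # ~1.52: total consumption still rose ~52%
```

The ratio is simply 50/33: efficiency only reduces totals when usage grows more slowly than efficiency improves.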

About 60% of the electricity powering data centers still comes from fossil fuels. New data center construction is outpacing the build-out of renewable capacity in many regions — a 2024 Goldman Sachs analysis found that near-term new data center electricity demand would be met approximately 75% by natural gas.

Water consumption is the less-discussed half of the problem. Data centers in water-stressed regions such as Phoenix and parts of Saudi Arabia use enormous amounts of water for cooling. There's an inherent tradeoff: air cooling uses energy, water cooling uses water. Neither is neutral.

"AI is getting more efficient" is not the same as "AI's total climate impact is decreasing." Separate these claims whenever you encounter them.

The same companies selling "AI for climate" sell AI for extraction

Microsoft, Google, and Amazon have all made significant net-zero commitments. They have also all signed substantial contracts with oil and gas companies to provide AI services for seismic analysis, reservoir modeling, and drilling optimization. These aren't historical contracts being wound down — they were signed after the climate commitments were announced.

This isn't a contradiction in the sense of hypocrisy — it's a structural fact about how these companies are organized. AI infrastructure revenues fund AI infrastructure, which funds the capability development that climate applications also depend on. The incentive structure isn't aligned with the marketing.

For climate professionals evaluating which AI vendors to work with or recommend, this context matters. A "climate-friendly" AI tool built on infrastructure funded partly by extraction optimization embodies a real tension — not a reason to avoid the tools, but a reason to understand what you're embedded in.

Two structural properties that cause harm

Hallucination by design

LLMs generate plausible text. They're not designed to verify claims against reality — they're designed to produce coherent completions. This means they'll confidently state incorrect figures, cite papers that don't exist, and fabricate quotes from real people. In climate communication, policy analysis, or any context where a false claim is worse than no claim, this is a real operational risk.

The problem isn't that AI sometimes gets things wrong — humans do too. The problem is that AI errors look identical to its correct outputs. There's no hesitation, no flagging of uncertainty unless you specifically prompt for it. The confidence is baked in.

The attention shift

In 2023, global AI investment was roughly $200 billion. Global climate tech investment was roughly $40 billion. Senior ML engineers, top researchers, and the institutional attention of the largest technology companies are flowing toward AI infrastructure, not climate solutions. This isn't a conspiracy — it's what happens when a general-purpose technology wave hits and everyone wants in.

The risk isn't that AI is bad for climate. It's that it pulls scarce resources — capital, talent, policy attention — away from the specific problem of decarbonization at the moment those resources are most needed.

Where this plays out in climate sectors

These structural issues manifest differently depending on where you work:

Energy transition

AI that optimizes fossil fuel systems makes them cheaper and more efficient — which makes them harder to displace. An AI-powered natural gas plant that runs at 94% efficiency instead of 87% is less urgently replaceable. Optimization of an existing system is not a neutral act when the goal is to replace that system.
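The entrenchment effect above can be made concrete with a back-of-envelope calculation. Fuel input per unit of output scales as the inverse of efficiency, so moving from 87% to 94% (the figures used in the text, taken here as illustrative) cuts fuel use — and fuel cost — per MWh by about 7%, strengthening the economic case for keeping the plant running:

```python
# Illustrative: how an efficiency gain lowers fuel use per MWh,
# making the optimized fossil plant cheaper to run and harder to retire.
# Efficiency figures come from the text and are used illustratively.

def fuel_per_mwh(efficiency: float) -> float:
    """Fuel energy input (in MWh) required per MWh of electrical output."""
    return 1.0 / efficiency

before = fuel_per_mwh(0.87)  # pre-optimization
after = fuel_per_mwh(0.94)   # post-optimization

saving = 1 - after / before
print(f"{saving:.1%}")  # ~7.4% less fuel (and fuel cost) per MWh
```

That per-MWh saving accrues to the plant's operating margin every hour it runs, which is exactly why optimization of an incumbent system is not neutral with respect to displacing it.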

Carbon markets

Satellites and ML can measure canopy cover, biomass, and methane concentrations with real precision. They cannot determine additionality (would this sequestration have happened without the project?), permanence (will it last?), or baseline credibility (is the counterfactual scenario honest?). AI-generated MRV reports can give false precision to fundamentally uncertain claims.

ESG and corporate sustainability

Generating polished sustainability language that passes surface-level credibility checks is precisely what LLMs are optimized for. AI makes greenwashing faster, cheaper, and more convincing. This is already happening in corporate reporting — not as rare exceptions but as a systematic shift in how sustainability claims are produced.

Knowledge check

Two questions from real debates in the field

These come up constantly when working with AI and climate data. Get them right now so they don't trip you up later.

A utility company claims their data center runs on "100% renewable energy." Which question most directly tests whether this claim is meaningful?

A carbon offset project wants to use satellite imagery and ML to automate their MRV (measurement, reporting, verification). Which aspects of the project can satellites and ML reliably assess?

Next: Where AI Helps