Terra Studio/Responsible AI

Section 4 of 5 · 15 min

The Five-Question Filter

A repeatable decision-making tool for evaluating any AI-for-climate proposal — whether you're choosing a vendor, reviewing a grant application, or deciding what to build. Five questions that separate genuine climate value from well-intentioned harm.

Why a filter, not a checklist

A checklist says: have you done these things? A filter says: does this proposal actually do what it claims? The difference matters because most bad AI-for-climate proposals are made in good faith by people who would pass any standard compliance checklist.

The five questions below are not independent — they interact. A proposal that answers "green" on emissions but "red" on governance is not a pass. A proposal with "yellow" across all five probably needs a serious rethink before it earns climate credibility. The filter is a whole, not a sum of parts.
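That non-compensatory logic — a single red fails the proposal outright, and yellows don't average away — can be sketched in a few lines. This is a minimal sketch, not part of the course material: the rating names, the dictionary keys, and the handling of mixed green/yellow results are all assumptions.

```python
from enum import Enum

class Rating(Enum):
    GREEN = "green"
    YELLOW = "yellow"
    RED = "red"

def run_filter(ratings: dict[str, Rating]) -> str:
    """Evaluate the five ratings as a whole, not a sum of parts.

    Any single red fails the proposal outright -- a green on emissions
    cannot compensate for a red on governance. All-yellow means the
    proposal needs a rethink before it earns climate credibility.
    Mixed green/yellow passing is an assumption, not course doctrine.
    """
    if any(r is Rating.RED for r in ratings.values()):
        return "fail"
    if all(r is Rating.YELLOW for r in ratings.values()):
        return "rethink"
    return "pass"

ratings = {
    "emissions": Rating.GREEN,
    "power": Rating.GREEN,
    "transparency": Rating.YELLOW,
    "failure": Rating.GREEN,
    "governance": Rating.RED,
}
print(run_filter(ratings))  # → fail
```

The point of the structure is that the questions are combined with `any`/`all`, never summed: there is no score at which strong emissions math buys back a governance failure.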

The five questions

1

Emissions & Energy

What is the net emissions impact of this system?

Not gross impact — net. The AI itself consumes energy; that cost has to be subtracted from any benefit claimed. If the system will reduce carbon by some amount, what is the actual model for that reduction? Is it measured or assumed? Does the energy cost outweigh the benefit under realistic projections? "AI will optimize X" is not an answer.
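The net-vs-gross arithmetic can be made concrete. The function below is an illustrative sketch only; the parameter names are mine, and the numbers in the example are hypothetical placeholders, not measured values from any real deployment.

```python
def net_emissions_impact(
    claimed_reduction_t: float,       # claimed CO2e reduction, tonnes/year
    energy_kwh: float,                # AI training + inference energy, kWh/year
    grid_intensity_kg_per_kwh: float, # carbon intensity of the electricity used
) -> float:
    """Net impact = claimed benefit minus the system's own energy cost.

    Positive means a genuine net reduction; negative means the AI's
    footprint outweighs the claimed benefit under these assumptions.
    """
    ai_emissions_t = energy_kwh * grid_intensity_kg_per_kwh / 1000.0
    return claimed_reduction_t - ai_emissions_t

# Hypothetical example: a 500 t/yr claimed reduction against a model
# drawing 1.2 GWh/yr on a 0.4 kg CO2e/kWh grid -- barely net positive.
print(net_emissions_impact(500.0, 1_200_000.0, 0.4))  # → 20.0
```

Note what the function demands: an actual number for the claimed reduction and an actual number for the energy cost. If either input is assumed rather than measured, the output is an assumption too — which is the point of the question.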

2

Power Distribution

Does this concentrate or distribute power in your domain?

Who gains capability from this system? Who loses agency? Who controls the data it generates and the decisions it informs? A satellite monitoring system that gives a corporation real-time visibility into land use across a national park while providing nothing to the communities who live there is not a climate win regardless of the emissions math.

3

Transparency & Verifiability

Can you verify the outputs and understand the failure modes?

Can an expert examine the outputs and assess their accuracy? Are failure modes documented — not just acknowledged in principle, but actually characterized? For AI used in policy, reporting, or accountability contexts, the answer to this question is often no, and that no should be treated as a serious red flag.

4

Failure & Override

What happens when the AI is wrong? Is there a meaningful human override?

All AI systems are wrong sometimes. The question is: when this one is wrong, how wrong can it be, and how hard is it to catch and correct? A system that flags potential methane leaks for human review is different from a system that automatically notifies regulators. Human-in-the-loop isn't decorative — it has to be genuinely consequential.

5

Governance & Accountability

Who decides how this system is used, and who is accountable?

When the system causes harm — not if, when — who is responsible? Is that accountability legally enforceable? Are the communities most affected by the system's decisions involved in governing it, or were they only consulted after the design was fixed? These are not soft questions. They determine whether a system is climate-aligned or merely climate-branded.

Four patterns to refuse

Beyond the filter, these design patterns reliably produce harm. Recognize them when you see them — they almost always come with compelling arguments for why this case is different.

Fake assistance

Looks helpful, creates work. AI summarizes research while the expert spends hours fact-checking the summary, ending up doing more work than if they'd read the source directly.

Blind optimization

Optimizes a metric without understanding context. Maximizing yield while degrading soil health. Minimizing reported emissions while enabling extraction elsewhere.

Context-blind scaling

Deploys everywhere before proving value locally. The same forest monitoring system built for Brazil gets sold to Kenyan smallholders with different land tenure, ecology, and institutional context.

Unverifiable outputs

Generates reports or claims nobody can check. AI-authored MRV (measurement, reporting, and verification) assessments. Sustainability reports produced by black-box models. Confidence without basis.

Exercise

Run the filter on something real

Describe an AI system you've encountered, are considering using, or want to build. Rate it across all five dimensions and get a structured assessment. Use something real — the filter is more useful with a specific proposal than a hypothetical.

1

Emissions & Energy

What is the net emissions impact of this system?

Does the AI energy cost outweigh the climate benefit? Is the impact measured or merely assumed?

2

Power Distribution

Does this concentrate or distribute power in your domain?

Who gains capability? Who loses agency? Who controls the system and the data it generates?

3

Transparency & Verifiability

Can you verify the outputs and understand the failure modes?

Can an expert audit the output? Are failure modes documented and communicated to users?

4

Failure & Override

What happens when the AI is wrong? Is there a meaningful human override?

How often might it fail? What are the consequences? Can errors be caught before causing harm?

5

Governance & Accountability

Who decides how this system is used, and who is accountable?

Are affected communities in the room? Is there a clear person or org that can be held accountable?


Next: Your Role