Our Take on Responsible AI
AI has captured the world's imagination, and, in the process, siphoned off much of the energy that once powered climate action. Media coverage, capital flows, political attention, and waves of talent have all shifted its way. The technology that was supposed to help us solve hard problems has instead become the main event, pulling focus from the problems themselves.
Within climate circles, skepticism runs deep, and for good reason.
AI's resource intensity is staggering. Data centers now consume more water than the entire bottled water industry. The concentration of power in a handful of corporations raises serious questions about who benefits and who decides. Misinformation risks are well-documented. And AI's most profitable use cases are often extractive: optimizing ad clicks, enabling surveillance, accelerating consumption.
AI can just as easily be weaponized to delay climate action as to advance it. Fossil fuel companies are already using it to optimize operations and extend the life of assets that should be winding down. AI-generated content floods the zone with doubt and distraction. No wonder many in the climate community have chosen to stay hands-off, or to write the technology off entirely.
We see it differently.
The Case for Engagement
Like it or not, AI is transformational: a genuinely world-historical shift in how knowledge work gets done. But at the end of the day, it is a tool. And tools don't have intentions. People do. Organizations do.
Used poorly, AI can erase decades of climate progress. Used well, it can, just maybe, push that progress faster than we thought possible.
The problem is that the climate community is barely in the game right now. We're not shaping how AI gets built, how it gets deployed, or what it gets used for. We've ceded that ground to the companies and industries least aligned with a livable future.
That's a mistake we can't afford.
Why Now
AI has arrived at precisely the moment climate action needs a breakthrough: a way to do dramatically more with dramatically less. Funding is tightening. Political headwinds are intensifying. The window for action is narrowing. And the people doing the work are stretched thin.
Against that backdrop, AI offers two things that matter:
It's powerful enough.
When used thoughtfully, AI can make every kind of knowledge work faster and better. Climate organizations are already using it to write grants in a fraction of the time, translate project materials into dozens of languages overnight, and deliver agricultural advice to farmers over WhatsApp in regions with no extension services. Research that took months can now take weeks. Analysis that required dedicated staff can now be done by a single program manager with the right tools.
It's democratized enough.
For the first time, you don't need to be a software engineer to build useful tools. Anyone who deeply understands a system (a watershed, a supply chain, a policy landscape) can now build an AI application for it. A field agent can create a claims app for farmers recovering from floods. An artist can design AI-assisted climate comics for local audiences. A teacher can turn complex policy documents into an "Explain Like I'm Five" bot for their classroom.
The barriers have dropped. The leverage is real.
Yet most professionals, including most climate professionals, still use AI for trivial tasks: drafting emails, summarizing meetings, writing memos. They're using a power tool as a paperweight.
The Gap We See
There is a large and growing set of people who want to build AI sustainably, use it responsibly, and apply it to solve societal challenges like climate change. They're not naive about the risks. They're not techno-utopians. They simply believe that disengagement is not a strategy: that the best way to shape a powerful technology is to be in the room where it's built and deployed.
At the same time, there are climate organizations desperate for capacity: drowning in important work, under-resourced, and struggling to keep up with the pace of change. They need builders. They need tools. They need help.
Terra Studio exists to connect these two groups.
What We've Learned So Far
When we launched Terra Studio, we treated it as an experiment: a thesis that AI, used responsibly, could accelerate climate action in ways we urgently need. We said if it didn't work, we'd be the first to walk away from it.
It's working.
Our first cohort of 46 professionals shipped real projects for real climate organizations. They built flood risk models, emissions audit pipelines, governance frameworks, and stakeholder communication tools. Many of them came in with no technical background. All of them left with working prototypes and the skills to keep building.
What surprised us most was who showed up. About half were climate professionals looking to add AI to their work. The other half were AI-literate people looking for a way into climate. The intersection turned out to be exactly where the most interesting work happens.
What Terra Studio Is Now
Terra Studio has grown beyond the original six-week cohort. Today it works at three levels:
Five free courses cover the foundations: responsible AI, prompt engineering, climate data, storytelling, and AI agents. They're self-paced, open to everyone, and designed so that every module ends with something you can put to work immediately.
The six-week Intensive is a live cohort where you build six real projects with climate organizations. You work alongside professionals from more than 85 countries, with live labs every week and mentorship throughout. You leave with a portfolio, the skills to keep building, and a network of people doing the same work.
Team workshops bring Studio directly to your organization. Two days, your data, your workflows, your stakeholders. Your team walks out with working prototypes and a shared playbook for using AI across your operations.
Every track teaches the same thing: how to build with AI responsibly, and how to build things that actually matter.
What Responsible Means to Us
Responsible AI is the first course every Studio participant takes, and there's a reason for that. We believe you can't build well if you don't understand what can go wrong.
Every course covers the ethical fault lines: data bias, environmental costs, concentration of power, the ways AI can be used to delay or undermine climate progress. We don't teach these as abstract principles. We teach them as practical constraints that shape how you build, what you build, and whether you should build it at all.
Our goal is not to produce AI enthusiasts. It's to produce AI-literate climate professionals who know when to use AI, when to push back on it, and how to hold the technology accountable to the work it's supposed to serve.
Join Us
Terra Studio is open to anyone who wants to apply AI to climate work responsibly, whether you're a climate professional picking up AI tools, an AI-literate professional moving into climate, or an organization looking to build capacity across your team.
If this resonates, we'd love to have you.

Anshuman Bapna
Founder & CEO, Terra.do