Section 3 of 5 · 10 min read
Claude Code for Climate Work
Claude Code is Anthropic's AI coding agent with access to your file system and terminal. It's a different kind of tool from the chat interface — more powerful in specific ways, and more dangerous if used carelessly.
What Claude Code actually is
Claude Code is a terminal-based interface for Claude that gives it access to your file system and the ability to execute code. Unlike the chat interface at claude.ai, it operates as an agent with real capabilities: it can read your files, write new ones, run Python scripts, call APIs, install packages, and chain these actions together without prompting you at every step.
The distinction matters. In a chat interface, Claude generates text about what code would do. In Claude Code, it writes the code, runs it, sees the output, and revises based on what it finds — the same Reason-Act-Observe loop that defines an agent. It can also spawn sub-agents to handle tasks in parallel and build and run its own multi-step plans across a session.
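The loop itself is simple to sketch. This is an illustrative skeleton of a Reason-Act-Observe cycle, not Claude Code's actual internals: `decide` stands in for the model and `tools` for file-system and terminal access.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    name: str           # which tool to invoke, or "done"
    args: tuple = ()
    result: object = None

def agent_loop(decide: Callable, tools: dict, max_steps: int = 20):
    """Reason-Act-Observe: pick an action, run it, feed the output back in."""
    history = []
    for _ in range(max_steps):
        action = decide(history)                   # Reason: choose the next step
        if action.name == "done":
            return action.result                   # Model judges the goal met
        output = tools[action.name](*action.args)  # Act: read a file, run code, call an API
        history.append((action.name, output))      # Observe: the result informs the next decision
    return None  # step budget exhausted
```

Claude Code layers sub-agents, planning, and permission checks on top of this cycle, but the core shape is the same.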
To make the scale concrete: in early 2026, an independent researcher ran a Claude Code agent continuously for 24 hours without human intervention. In that single run, the agent completed hundreds of coding projects, wrote approximately 450,000 lines of code, and maintained its own memory across sessions using a vector database — setting its own goals, tracking milestones, and iterating without a human in the loop. The researcher stopped it to save compute costs, not because it got stuck.
What makes this instructive: it was a solo practitioner, not an engineering team, and this kind of autonomous run is now available to anyone with Claude Code and a clear goal. The practical constraint is no longer whether agents can run; it's deciding which work is worth the trust investment.
What Claude Code is good at for climate work
The tasks where Claude Code adds real leverage are those involving code, data, and repetitive file operations — the things that are tedious for humans and where the exact wording of instructions matters less than consistent execution.
Scaffolding data pipelines
Give it an API endpoint and a data structure and it will write the ingestion, parsing, cleaning, and normalization code. This is particularly useful for climate data sources — IEA, Global Forest Watch, EPA, NOAA, Climate TRACE — which have inconsistent formats and inconsistent documentation. Describe what you want; let it handle the API plumbing.
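To make "the API plumbing" concrete, here is the kind of parse/clean/normalize pass such a pipeline ends up containing. The payload shape, field names, and unit conversion are assumptions for illustration; real sources (EPA CAMPD, Climate TRACE, IEA) each have their own schemas.

```python
import json

# Hypothetical raw API payload -- adapt field names to the real source's schema.
RAW = '''[
  {"facility": "54321", "period": "2024-01", "co2_short_tons": "1200.5"},
  {"facility": "54321", "period": "2024-02", "co2_short_tons": null}
]'''

SHORT_TON_TO_METRIC = 0.90718474  # US short tons -> metric tons

def normalize(records):
    """Split raw records into clean normalized rows and a log of missing values."""
    clean, missing = [], []
    for r in records:
        if r["co2_short_tons"] is None:
            # Log the gap instead of imputing -- you need to know what's absent.
            missing.append({"facility_id": r["facility"], "date": r["period"]})
            continue
        clean.append({
            "facility_id": r["facility"],
            "date": r["period"],
            "co2_metric_tons": float(r["co2_short_tons"]) * SHORT_TON_TO_METRIC,
        })
    return clean, missing

clean, missing = normalize(json.loads(RAW))
```

The structure is what matters: one function per stage, missing data logged rather than silently filled.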
Writing analysis scripts
Statistical analysis, anomaly detection, time-series comparisons, GHG calculations. Claude Code can write and iterate on these scripts while you review the logic — which is much faster than writing from scratch. The key practice: ask it to explain what each section of the code does before you run it.
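A GHG-calculation script, for example, usually reduces to arithmetic like the following. The code is trivial; choosing the factor set (AR5 vs. AR6 GWP100) is exactly the domain judgment Claude Code cannot make for you.

```python
# GWP100 factors from IPCC AR5 (without climate-carbon feedbacks): CH4=28, N2O=265.
# AR6 revises these (fossil CH4 = 29.8, N2O = 273); the choice is yours, not the model's.
GWP100_AR5 = {"CO2": 1, "CH4": 28, "N2O": 265}

def to_co2e(emissions_tons: dict, gwp: dict = GWP100_AR5) -> float:
    """Convert a dict of gas -> metric tons into total metric tons CO2e."""
    return sum(tons * gwp[gas] for gas, tons in emissions_tons.items())

total = to_co2e({"CO2": 1000.0, "CH4": 2.0, "N2O": 0.5})
# 1000*1 + 2*28 + 0.5*265 = 1188.5 metric tons CO2e
```

Asking Claude Code to state which factors it used, in a comment like the one above, makes the review step much faster.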
Building dashboards and visualizations
Matplotlib, Plotly, Observable Framework — given a data structure and a description of what you want to show, it can scaffold visualization code that you can then refine. It handles the boilerplate; you make the design decisions.
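A typical scaffold of this kind, here in Matplotlib with illustrative numbers (the `Agg` backend line just makes it runnable headless; drop it for interactive use):

```python
import matplotlib
matplotlib.use("Agg")  # off-screen rendering; remove for an interactive window
import matplotlib.pyplot as plt

# Illustrative data -- in practice this comes from your normalized CSV.
months = ["2024-01", "2024-02", "2024-03"]
series = {"Facility A": [1200, 1150, 1900], "Facility B": [800, 820, 790]}

fig, ax = plt.subplots(figsize=(8, 4))
for name, values in series.items():
    ax.plot(months, values, marker="o", label=name)
ax.set_ylabel("CO2 (metric tons)")
ax.set_title("Monthly facility CO2 emissions")
ax.legend()
fig.savefig("emissions.png", dpi=150)
```

The boilerplate (figure setup, labels, export) is what it generates; whether a line chart is the right framing at all remains your call.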
Automating repetitive file tasks
Processing 200 PDFs with similar structure. Extracting specific tables from regulatory filings. Renaming and reorganizing datasets. Converting between formats. Tasks where a human would spend hours doing the same action repeatedly are exactly where Claude Code earns its keep.
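A representative example: normalizing a folder of inconsistently named downloads. The regex and naming scheme below are assumptions for illustration; the point is that the rule gets written once and applied uniformly to every file.

```python
import re
from pathlib import Path

MONTHS = {"jan": "01", "feb": "02", "mar": "03", "apr": "04", "may": "05",
          "jun": "06", "jul": "07", "aug": "08", "sep": "09", "oct": "10",
          "nov": "11", "dec": "12"}

def normalized_name(filename: str):
    """Map e.g. 'EPA data Jan-2024 FINAL.csv' -> 'epa_2024-01.csv', or None if no date."""
    m = re.search(r"(jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec)\w*[ _-]*(\d{4})",
                  filename.lower())
    if not m:
        return None
    return f"epa_{m.group(2)}-{MONTHS[m.group(1)]}.csv"

def rename_all(folder: str):
    """Apply the rule to every CSV in a folder, skipping files it can't parse."""
    for path in Path(folder).glob("*.csv"):
        new = normalized_name(path.name)
        if new:
            path.rename(path.with_name(new))
```

A human doing this by hand drifts after the fiftieth file; the script applies the same rule to the two-hundredth.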
What Claude Code is NOT good at
The limitations matter as much as the capabilities — especially for climate work where wrong outputs can flow into consequential decisions.
Not this: Domain expertise
It doesn't understand climate science — it pattern-matches on text about it. Decisions about which GWP factors are appropriate, whether a measurement methodology is sound, or what a monitoring result actually means require your expertise.
Not this: Data quality judgment
It can follow instructions to flag outliers or check formats. It cannot tell you whether an anomaly is a real signal or an instrument error, or whether the data source is reliable. Those are scientific and contextual judgments.
Not this: Visualization decisions
It will build what you describe. It won't tell you whether you're plotting the right thing, whether the framing is misleading, or whether a different visualization would communicate your finding more clearly. Those are editorial choices.
Safety practices before you run anything
Claude Code has access to your file system and can run code. That's what makes it powerful. It's also exactly why you need deliberate practices before trusting it with real data or production systems.
1. Ask it to show the plan first. Before Claude Code executes anything, prompt it to describe what it's going to do and wait for your confirmation. The example prompt below includes this explicitly. It's the single most valuable safety step.
2. Sandbox first. Run new scripts on a copy of your data, in a temporary directory, before pointing them at anything you care about. File writes and deletions can't be undone.
3. Read the code before running it. Even if you don't fully understand every line, read it. Ask Claude to explain any section you don't follow. You're responsible for what runs on your machine.
4. Grant narrow permissions. Don't give Claude Code access to your entire filesystem when it only needs one folder. Don't give it API keys with write access when it only needs to read.
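Practice 2 can be made mechanical. A small helper like this (names are illustrative) hands Claude Code a disposable copy to work in, so a bad script never touches the original:

```python
import shutil
import tempfile
from pathlib import Path

def sandbox_copy(data_dir: str) -> Path:
    """Copy data_dir into a fresh temporary directory and return the copy's path."""
    sandbox = Path(tempfile.mkdtemp(prefix="claude_sandbox_"))
    dest = sandbox / Path(data_dir).name
    shutil.copytree(data_dir, dest)
    return dest  # point Claude Code at this path, never at the original
```

When the run looks good, you diff the sandbox against the original and promote the results deliberately.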
Try it: climate data pipeline prompt
Copy this prompt into Claude Code and adapt the facility IDs and time range to your actual use case. It demonstrates the key patterns: specific outputs, explicit handling of missing data, anomaly detection against a baseline, and asking for plan confirmation before execution.
You are helping me build a climate data pipeline. Here is the task:
Pull monthly facility-level emissions data from the EPA's Clean Air Markets Division API
(https://campd.epa.gov/), for the last 12 months, for the following facility IDs:
[FACILITY_ID_1, FACILITY_ID_2, FACILITY_ID_3]
Then:
1. Parse the API responses and extract: facility_id, date, CO2_tons, SO2_tons, NOx_tons
2. Handle any missing values by logging them to a separate 'missing_data.csv' file
rather than filling them in — I need to know what's actually absent
3. Convert all mass values to metric tons (if the API reports short tons, multiply
   by 0.907185). Keep SO2 and NOx in their own columns rather than folding them into
   CO2e; they are not greenhouse gases with standard GWP100 factors
4. Flag any month where a facility's emissions are more than 2 standard deviations
above their own 12-month baseline
5. Output two files:
- emissions_normalized.csv: all clean data, one row per facility per month
- anomalies.csv: flagged months with columns for facility_id, date, value,
baseline_mean, std_devs_above_mean
Before running any code, show me the plan and wait for my confirmation.
After generating the files, summarize: how many records were processed,
how many were missing, and how many anomalies were flagged.

To use this: open Claude Code in a working directory you've set up for data work, paste the prompt, replace the facility IDs and any other specifics, and let it run. Review the plan it proposes before confirming.
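Step 4 of the prompt, the 2-standard-deviation flag, is worth understanding before you trust its output. A minimal version in plain Python (a real script would likely use pandas or NumPy; this uses population standard deviation):

```python
from statistics import mean, pstdev

def flag_anomalies(monthly, threshold: float = 2.0):
    """monthly: list of (date, value) for one facility. Returns rows above threshold."""
    values = [v for _, v in monthly]
    baseline, spread = mean(values), pstdev(values)  # facility's own 12-month baseline
    flagged = []
    for date, value in monthly:
        if spread and (value - baseline) / spread > threshold:
            flagged.append({"date": date, "value": value,
                            "baseline_mean": baseline,
                            "std_devs_above_mean": (value - baseline) / spread})
    return flagged
```

Note the caveat from earlier in this section: the flag tells you a month is statistically unusual, not whether it's a real emissions event or an instrument error. That judgment stays with you.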
Adapt the pattern to other climate data sources: Global Forest Watch, Climate TRACE, NOAA, IEA — the same structure applies. Describe the source, specify the outputs, be explicit about missing data handling, and always ask for a plan first.