Section 2 of 5 · 12 min read
Prompting Fundamentals
A prompt is an instruction, but not all instructions are equal. What makes the difference between a prompt that produces something useful and one that produces something generic is precision — and precision has a learnable structure.

What prompt engineering actually is
Prompt engineering is not magic. It's not a secret syntax that unlocks hidden AI powers. At its core, it's about increasing the probability that the model predicts the tokens you actually want — rather than the tokens that are statistically likely given a vague question.
Remember what LLMs actually do: they predict the most plausible next chunk of text given everything that came before it. When you type “write a blog post about carbon markets,” the model has to guess everything — your audience, your angle, your length preference, your tone, your level of expertise. It will guess the safe, average version of all those things. That's why the output feels generic. It is generic, because you gave it generic input.
Prompt engineering is the practice of reducing that guesswork. Every element of specificity you add shifts the probability distribution toward something more useful. You don't need a framework to understand this — but a framework makes it faster to apply consistently.
The RACE framework
There are a dozen prompt frameworks — CRISPE, ROSES, CRAFT. We use RACE because it's simple, memorable, and directly maps to the four elements that matter most. Think of it as training wheels: use it until prompting well feels natural, then forget the acronym and just apply the habits.
Role
Shapes voice and perspective. A policy analyst writes differently than a grant writer or a Bloomberg journalist. Giving the AI a role activates the relevant patterns from its training data. It won't make the answer more factually accurate, but it will match the tone, framing, and level of expertise you need.
Action
Your verb. Be specific. "Critique" is better than "analyze." "Write a 600-word opinion piece" is better than "write a blog post." "Summarize in three bullet points for a board audience" is better than "summarize." The verb anchors what the AI produces.
Context
The most commonly skipped element, and the one that matters most. Context tells the AI who you are, what you're working on, who the audience is, what constraints apply, and what you've already tried. Without context, the AI guesses. With it, the AI can actually help with your specific situation.
Example
An optional but powerful addition. LLMs are extraordinarily good at pattern matching. If you show three examples of the format or style you want, the model will match that pattern far more reliably than any description of it. Useful for format-sensitive tasks like data extraction, structured outputs, or tone matching.
What the difference looks like in practice
The clearest way to understand RACE is to see the same request with and without it.
Without RACE
“Write a blog post about carbon markets.”
Result: Generic. Reads like Wikipedia. Safe, bland, appeals to no one in particular. The AI has to guess your audience, angle, length, tone, and expertise level — and guesses the statistical average of all of them.
With RACE
“(Role) You're a seasoned climate finance journalist who writes for Bloomberg Green. (Action) Write a 600-word opinion piece arguing that nature-based carbon credits are undervalued in voluntary carbon markets. (Context) I'm a project developer who works on mangrove restoration in Southeast Asia. My audience is corporate sustainability directors who are skeptical of nature-based credits after the Verra controversy. (Example) The tone should match the attached excerpt — direct, evidence-focused, no hedging.”
Result: A specific argument for a specific audience from a specific perspective. Immediately useful or close to it — instead of a draft that needs complete rewriting.
The extra 45 seconds it takes to write a RACE prompt saves you the 20 minutes it would take to rewrite a generic output. Do the math on how many prompts you write in a week.
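If you write RACE prompts often, it can help to see the four elements as slots to fill. The sketch below is illustrative only — the helper name and structure are assumptions, not part of any library:

```python
def build_race_prompt(role: str, action: str, context: str, example: str = "") -> str:
    """Assemble the four RACE elements into a single prompt string."""
    parts = [
        f"Role: {role}",
        f"Action: {action}",
        f"Context: {context}",
    ]
    if example:  # Example is optional, but powerful for format-sensitive tasks
        parts.append(f"Example of the desired tone/format:\n{example}")
    return "\n\n".join(parts)

prompt = build_race_prompt(
    role="You're a seasoned climate finance journalist who writes for Bloomberg Green.",
    action="Write a 600-word opinion piece arguing that nature-based carbon "
           "credits are undervalued in voluntary carbon markets.",
    context="I'm a project developer working on mangrove restoration in Southeast "
            "Asia. My audience is corporate sustainability directors skeptical of "
            "nature-based credits after the Verra controversy.",
)
```

Filling the slots forces you to answer the audience, angle, and tone questions the model would otherwise guess at.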
Beyond the basics: techniques worth knowing
Prompt chaining
For complex multi-step tasks, don't try to do everything in one prompt. Chain prompts sequentially — complete step one, review the output, then feed it into step two. A grant proposal, for example, has distinct stages: background research, argument framing, evidence gathering, writing, editing. Each stage benefits from its own focused prompt rather than one mega-prompt trying to do everything at once.
This also gives you natural checkpoints. If the research in step one goes in the wrong direction, you catch it before it propagates into the writing. Chaining is slower but produces better results on tasks where quality matters more than speed.
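The chaining loop is simple enough to sketch. This is a minimal illustration, not a real implementation: `call_llm` stands in for whatever model call you actually use, and the stage wording is invented for the example:

```python
def run_chain(task: str, stages: list[str], call_llm) -> str:
    """Run stage prompts sequentially, feeding each output into the next stage.

    `call_llm` is a placeholder for your model call (hypothetical here).
    """
    output = task
    for stage in stages:
        prompt = f"{stage}\n\nInput from the previous step:\n{output}"
        output = call_llm(prompt)
        # Natural checkpoint: review `output` here before the next stage runs,
        # so an early wrong turn doesn't propagate into the final draft.
    return output

grant_stages = [
    "Summarize the background research relevant to this grant.",
    "Frame the core argument based on this research summary.",
    "Draft the proposal narrative from this argument frame.",
]
```

In practice you would pause between stages rather than run the loop unattended — the checkpoints are the point.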
Few-shot prompting
Few-shot prompting formalizes the Example element of RACE. Provide two or three examples of the format, style, or structure you want — before asking the AI to produce a new instance — and the model will match that pattern far more reliably than any verbal description could.
It works best for format-sensitive tasks: extracting structured data from unstructured emails, classifying documents, matching a specific editorial voice. One caveat: if all your examples share a superficial characteristic, the model sometimes overfits to that characteristic rather than the underlying pattern you care about. Vary your examples enough to communicate the general rule.
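Here is what a few-shot prompt looks like for the email-extraction case. The helper, the JSON keys, and the sample emails are all invented for illustration — the pattern (examples first, new case last) is what matters:

```python
def few_shot_prompt(examples: list[tuple[str, str]], new_input: str) -> str:
    """Build a few-shot prompt: labeled input/output pairs, then the new case."""
    lines = ["Extract the request from each email as JSON with keys "
             "'item' and 'deadline' (null if absent).\n"]
    for email, extraction in examples:
        lines.append(f"Email: {email}\nExtraction: {extraction}\n")
    lines.append(f"Email: {new_input}\nExtraction:")  # model completes the pattern
    return "\n".join(lines)

# Vary the examples (different lengths, one missing deadline) so the model
# learns the underlying rule rather than a superficial shared trait.
examples = [
    ("Can you send the Q3 budget by Friday?",
     '{"item": "Q3 budget", "deadline": "Friday"}'),
    ("Hi team, whenever you get a chance I need the vendor contract.",
     '{"item": "vendor contract", "deadline": null}'),
]
prompt = few_shot_prompt(
    examples, "Please share the audit report before the board meets Tuesday."
)
```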
Meta-prompting
Sometimes you know what you want but not how to ask for it. Meta-prompting uses the AI to help you construct the prompt itself: describe your goal and ask the AI to generate the prompt you should be using, then refine that prompt and use it. This sounds circular, but it's genuinely useful — the AI often surfaces constraints and edge cases in your requirements that you hadn't considered.
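A meta-prompt can be as simple as a fixed template around your goal. A minimal sketch, with wording that is one possible phrasing rather than a canonical formula:

```python
def meta_prompt(goal: str) -> str:
    """Ask the model to write the prompt itself, not the final output."""
    return (
        "I need help writing a prompt for an AI assistant. My goal:\n"
        f"{goal}\n\n"
        "Draft the prompt I should use. Before finalizing it, ask me up to "
        "three clarifying questions about constraints or edge cases I may "
        "have missed."
    )

request = meta_prompt(
    "Summarize quarterly board minutes into a one-page brief for new trustees."
)
```

Asking for clarifying questions is where the real value shows up — it is how the model surfaces the requirements you hadn't articulated.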
What better prompting won't fix
Prompting improves the probability of good output — it doesn't guarantee it, and it doesn't substitute for other things you need. A well-crafted RACE prompt can still produce hallucinated facts, biased framings, or confident errors on topics outside the model's training. The next three sections address the other layers: context engineering (giving AI the right background), system instructions (making that background persistent), and verification (catching the errors that get through anyway).