
R-E-X: The Three-Part Prompt Framework That Most People Skip

Most prompts define what to do and skip everything else. R-E-X is a three-part framework (Role, Examples, Expectations) that gives the model the context, reference point, and output contract it needs to produce consistently useful results. Here is how each component works and why all three matter.

Published Apr 13, 2026 • 4 min

Writing a task is the easy part of prompting. It is also the least important part. The output quality of any AI model is determined less by how well the task is described and more by the context surrounding it: who the model is operating as, what good output looks like in practice, and exactly what the output contract requires. The R-E-X framework (Role, Examples, Expectations) is a three-part structure that makes those elements explicit before a single task instruction is written. This post breaks down what each component does, why skipping any one of them degrades output quality, and how to apply the checklist consistently across your team's prompting workflow.

Why Most Prompts Fail

The most common prompting failure is not a bad task description. It is an incomplete context structure. When a model receives a task without a defined role, it defaults to a generic assistant persona that may be entirely wrong for the output required. Without examples, it has no calibration point for what good looks like. Without explicit expectations, it fills in format, length, tone, and scope with its own defaults, which rarely match what the team actually needs.

The result is output that requires significant correction, iteration, or complete regeneration. The R-E-X framework eliminates that cycle by front-loading the three structural elements that most prompts skip entirely.

R : Role

The Role component defines who the model is before it receives any task instruction. This is not a surface-level label like “you are a marketing expert.” It is a precise specification of domain, intended audience, operating constraints, and risk tolerance.

A well-written Role line tells the model what perspective to write from, who it is writing for, what it should avoid, and how cautious or confident its outputs should be. The more specific the Role, the narrower the generation surface, and the more consistently the output lands in the right register for the task.

A vague role produces a generic output. A precise role produces a contextually appropriate one.
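As an illustration only (the field names, role text, and `render` helper below are hypothetical, not prescribed wording from the framework), a Role line can be treated as a small structured record so that none of its four dimensions is left implied:

```python
from dataclasses import dataclass

@dataclass
class Role:
    """The four dimensions of a precise Role line."""
    domain: str
    audience: str
    constraints: str
    risk_tolerance: str

    def render(self) -> str:
        # One explicit clause per dimension, so nothing is filled in by default.
        return (
            f"You are a {self.domain} writing for {self.audience}. "
            f"Constraints: {self.constraints}. "
            f"Risk tolerance: {self.risk_tolerance}."
        )

role = Role(
    domain="senior technical editor",
    audience="backend engineers new to LLM tooling",
    constraints="no marketing language; cite only provided sources",
    risk_tolerance="conservative; flag uncertainty rather than guess",
)
print(role.render())
```

Building the line from a structure like this makes a missing dimension visible at the call site, which is exactly what the Role check in the checklist is testing for.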

E : Examples

Examples are the fastest way to calibrate output quality without writing a longer specification. Pasting one or two gold outputs, actual examples of what good looks like, gives the model a concrete reference point that no amount of descriptive instruction can fully replicate.

The examples should include a brief note on why they work: what makes the tone right, what structural choices to replicate, what the output achieves that a generic response would not. Where a specific failure mode is common, an anti-example is equally valuable: showing the model what to avoid is often more efficient than describing the avoidance in abstract terms.

Examples are not optional context. They are the calibration mechanism.
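One way to sketch this (the `examples_block` helper and its section markers are illustrative assumptions, not a format the post prescribes) is a formatter that pairs each gold output with its why-it-works note and optionally appends an anti-example:

```python
from typing import List, Optional, Tuple

def examples_block(
    gold: List[Tuple[str, str]],
    anti: Optional[Tuple[str, str]] = None,
) -> str:
    """Format gold outputs as (text, why-it-works) pairs, plus an optional anti-example."""
    parts = []
    for i, (text, why) in enumerate(gold, 1):
        parts.append(f"GOOD EXAMPLE {i}:\n{text}\nWhy it works: {why}")
    if anti is not None:
        text, why = anti
        # The anti-example carries its own note on why it fails.
        parts.append(f"ANTI-EXAMPLE (do not imitate):\n{text}\nWhy it fails: {why}")
    return "\n\n".join(parts)

block = examples_block(
    gold=[("Q3 latency fix shipped; p99 down 40%.", "leads with the outcome, no filler")],
    anti=("We are thrilled to announce...", "opens with hype instead of information"),
)
```

Keeping the why-note attached to each example forces the author to articulate the calibration, not just paste the text.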

X : Expectations

Expectations define the output contract. Format, word range, tone, banned words or phrases, scoring rubric, and the iteration loop: every constraint that determines whether an output is usable without correction belongs here.

Teams that skip Expectations are effectively asking the model to guess at the delivery requirements. It will produce something. Whether that something matches what the team needed is a coin toss. The Expectations component eliminates that ambiguity by making every constraint explicit before generation begins.

The iteration loop specification is worth calling out separately: it defines how the model should handle follow-up, what triggers a revision, and what the escalation path looks like when an output does not meet the rubric. That loop structure is what transforms a single good prompt into a repeatable workflow.
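A minimal sketch of the contract side, covering just two of the constraints named above (word range and banned phrases), with a hypothetical `check_expectations` helper:

```python
import re
from typing import List

def check_expectations(
    output: str,
    min_words: int,
    max_words: int,
    banned: List[str],
) -> List[str]:
    """Return a list of contract violations; an empty list means the output is usable as-is."""
    violations = []
    word_count = len(output.split())
    if not (min_words <= word_count <= max_words):
        violations.append(f"word count {word_count} outside {min_words}-{max_words}")
    for phrase in banned:
        # Case-insensitive match so banned phrasing cannot slip through via capitalization.
        if re.search(re.escape(phrase), output, re.IGNORECASE):
            violations.append(f"banned phrase present: {phrase!r}")
    return violations
```

An iteration loop then amounts to regenerating until this returns an empty list, or escalating after a fixed number of attempts.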

The R-E-X Checklist

Three checks before every prompt is sent:

Role line written : domain, audience, constraints, and risk tolerance are explicit, not implied.

Examples pasted : at least one gold output with a note on why it works; an anti-example added where the failure mode is specific.

Expectations set : format, word range, tone, banned content, scoring rubric, and iteration loop are all defined.
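Assuming each component is tagged with an explicit header in the final prompt (the `ROLE:` / `EXAMPLES:` / `EXPECTATIONS:` markers here are an illustrative convention, not part of the framework), the pre-send check can even be mechanized:

```python
from typing import List

# Hypothetical section markers; any consistent convention works.
REQUIRED_SECTIONS = ("ROLE:", "EXAMPLES:", "EXPECTATIONS:")

def rex_ready(prompt: str) -> List[str]:
    """Return the R-E-X sections still missing from a draft prompt."""
    return [s for s in REQUIRED_SECTIONS if s not in prompt]

draft = "ROLE: technical editor...\nTASK: rewrite the intro."
missing = rex_ready(draft)  # -> ["EXAMPLES:", "EXPECTATIONS:"]
```

A gate like this makes the "task only" failure mode impossible to send silently.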

Most people write the task. Almost no one writes all three. The checklist takes two minutes to apply and eliminates the majority of prompt-to-output correction cycles that consume engineering and content team time at scale.