About · The method behind it
Stop letting your AI hand-wave.
27 meta-directives across 6 categories. Tap a preset, paste it above your next prompt, and the model calibrates to the rigor you actually want. Built on Claude Opus 4.7. Transfers to other modern LLMs with reduced effect.
LLMs will hand-wave if you let them. You ask a question that has real stakes — a hardware decision, a patent claim, a financial model, a safety-critical deployment — and the response comes back soft-edged. "Roughly." "Should work." "Empirical only — can't be computed." The confident voice of an intern who hasn't done the reading.
This isn't a model problem. It's a prompting problem. The model is trying to be helpful, and its prior distribution for "helpful" is "conversational, hedged, encouraging." You have to push it somewhere else.
The question is how.
Task templates don't help
The first generation of prompt libraries — AIPRM, PromptBase, countless free-tier PDFs — is a catalog of task templates. "Write an SEO article." "Generate a cold email." "Summarize this meeting." Thousands of them, rated by community vote, one-click injection into your chat.
These work for low-stakes content production. They fail the moment the work is serious, because they solve the wrong problem. When I'm deciding whether to file a provisional on a hardware design, I don't need a template for "write a patent." I need the model to stop hand-waving.
Meta-directives work
The prompt you actually need is not about the task. It's about how the model should behave while doing the task.
"Don't declare things 'empirical only' as an exit ramp."
"Show your math."
"Red-team your own output as the hardest skeptic in the room."
These are meta-directives. They calibrate rigor, not content.
I noticed I was typing the same pushback patterns into Claude over and over — "steel-man the opposite case," "no placeholders," "check against prior work first" — and realized they weren't ad hoc. They clustered. Some of them forced the model to think harder. Some told it to stop asking me questions and just execute. Some named specific failure modes to avoid. Some loaded context. Some controlled output format.
Six clusters. Every meta-directive I wrote landed in one of them.
The six categories
Context, Scope, Push, Own, Output, Safety. Every directive in the library belongs to exactly one of them.
The order matters
The six categories compose in a fixed order. This is the part that took me longest to see, and it's the part I'm most confident generalizes.
Context goes first because the model needs to load state before it can reason. Scope goes next because depth and breadth determine how the Push directives should be interpreted (a first-principles push is different from a holistic-review push). Push comes before Own because you have to calibrate rigor before you can trust execution mode. Own comes before Output because execution mode shapes what kind of output is appropriate. Output comes before Safety because the safety layer is applied as a filter over the final deliverable.
Stack them out of order and they fight each other. Stack them in order and they compose cleanly. In practice, three to six of them give you a preamble block that noticeably raises the signal-to-noise of the response.
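The ordering rule above is mechanical enough to sketch in code. This is an illustrative sketch, not pr0mpt's implementation: the directive IDs are ones mentioned on this page, but their category assignments here are my guesses, and the library may map them differently.

```python
# Canonical stack order: Context -> Scope -> Push -> Own -> Output -> Safety.
CATEGORY_ORDER = ["context", "scope", "push", "own", "output", "safety"]

# Guessed category assignments for a few directive IDs from this page.
DIRECTIVE_CATEGORY = {
    "ctx-ergo": "context",
    "holistic": "scope",
    "revalidate": "push",
    "steel-man": "push",
    "own-full": "own",
    "no-placeholders": "output",
    "safety-three-layer": "safety",
}

def compose(directives):
    """Sort directives into the canonical stack order, regardless of
    the order the user picked them in. Python's sort is stable, so
    directives within the same category keep their picked order."""
    rank = {cat: i for i, cat in enumerate(CATEGORY_ORDER)}
    return sorted(directives, key=lambda d: rank[DIRECTIVE_CATEGORY[d]])

print(compose(["no-placeholders", "revalidate", "ctx-ergo", "own-full"]))
# → ['ctx-ergo', 'revalidate', 'own-full', 'no-placeholders']
```

The point of putting the sort in one place is that a preset author never has to think about order: pick directives by need, and the composition rule guarantees they stack cleanly.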
Three principles govern every directive
Presets are the unit of use
You rarely use one directive at a time. You use recognizable combinations.
Deep Work is ctx-ergo + revalidate + own-full + no-placeholders + full-speed — five directives that say "respect my constraints, revalidate your assumptions, take ownership, no placeholders, no stopping."
Perfection Mode is holistic + no-exit-ramps + own-full + perfect-us + full-speed + safety-three-layer — six directives for when something is going to be seen by multiple stakeholders and has to survive them all.
Challenge Mode is revalidate + steel-man + compute + no-exit-ramps + hostile-audit — the one you reach for when the model's last response felt lazy.
Eight presets covered essentially every workflow mode I actually use. More than eight starts overfitting; fewer than eight undergeneralizes.
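A preset, then, is just a named list of directive IDs that expands into the preamble block you paste above your prompt. The sketch below shows that expansion for Challenge Mode; the directive wordings are paraphrases taken from this page, not pr0mpt's actual library text.

```python
# Illustrative wordings only — paraphrased from this page, not the real library.
DIRECTIVE_TEXT = {
    "revalidate": "Revalidate your assumptions against prior work before answering.",
    "steel-man": "Steel-man the opposite case before committing to a position.",
    "compute": "Show your math; do not declare things 'empirical only'.",
    "no-exit-ramps": "Do not use exit ramps to hand-wave past hard sub-questions.",
    "hostile-audit": "Red-team your own output as the hardest skeptic in the room.",
}

CHALLENGE_MODE = ["revalidate", "steel-man", "compute", "no-exit-ramps", "hostile-audit"]

def render_preamble(preset):
    """Expand a preset (list of directive IDs) into a pasteable preamble block."""
    return "\n".join(f"- {DIRECTIVE_TEXT[d]}" for d in preset)

print(render_preamble(CHALLENGE_MODE))
```

Paste the printed block above your next prompt and the model receives the calibration before it sees the task.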
What this isn't
pr0mpt isn't a system prompt replacement. System prompts establish persistent behavior. pr0mpt directives are per-turn calibration, applied as a preamble block to a specific request where the stakes demand it. Use them when the work is serious. Skip them when you're just chatting.
pr0mpt isn't an agent framework. It's text that goes above your text. You paste, you prompt, you read the response. No orchestration, no tool-use, no chaining. Pure prompt engineering at the meta level.
pr0mpt isn't model-agnostic. It's tuned for Claude Opus 4.7, which responds best to imperative framing and explicit falsifiable criteria. Smaller models (Sonnet, Haiku, GPT-4o-mini, Gemini Flash) still benefit, but keep it to 2–3 directives per turn with them to avoid diluting the signal.
The taxonomy is the IP
The specific 27 directives in pr0mpt are my wording, but the 6-category taxonomy and the stack order are the parts I believe generalize. If you fork this library, translate it, or build your own tools on top, what matters is that you preserve Context → Scope → Push → Own → Output → Safety as the ordering principle. That's what makes presets compose predictably.
Fork it. Remix it. Credit pr0mpt when you redistribute. The taxonomy is the methodology; the wording is an instance.
Why publish this
Two reasons.
One: shipping lessons is a kindness. I spent two months figuring this out. You shouldn't have to.
Two: making the methodology public is also defensive. The taxonomy belongs in the commons, not behind a paywall. Publishing it here is prior art — pr0mpt is not going to patent the 6-category stack order, and now, neither can anyone else.
Fork it. Ship your version. The best meta-directive libraries will beat pr0mpt by being more specific to their users' workflows. That's the point.
Stop reading. Try it.
The board is where the work happens. Tap a card that matches your situation, paste the directive block above your next prompt, feel the difference.
Open the Board →