Most of the AI complaints I hear from people who don't use AI every day land in the same place. "I tried it. The answers were mediocre."
The interesting thing about that complaint, to me, is that it's almost always true. The answer they got was mediocre. The thing they're wrong about is whose fault it was.
Last December I had a tedious data-cleanup job pending, and I was about to head out for a run. The fast move would have been to open the tool that does the work and type the job in one shot. Instead I opened a different chatbot first and asked it to interview me. What could go wrong. What assumptions it would make if I didn't tell it. What corners of the data needed defining. Ten minutes of back-and-forth produced a brief I copied into the tool that does the work. The job ran cleaner than the one-shot version would have.
I'd been doing the same move on smaller everyday tasks for months without naming it. Two messages instead of one. That was the whole rule.
That version happened to use two different chatbots because that's how the job was already set up. The rule doesn't care. One chatbot, sending two messages in order, works exactly the same. Use whatever you already have.
The rule
Two messages, in this order:
- Think with me. Don't write the thing yet. Tell me what you'd assume if I don't tell you. Ask me what I haven't told you. Sketch how you'd go about it.
- Now produce it. Given what we talked through, write the email, the plan, the memo, the message.
That's the whole rule. Three lines. You can use it in any chatbot, on any device, starting with your next prompt. No tool change required.
I'm calling it a rule because once you've felt the difference between a one-shot answer and a two-message answer, you can't really un-feel it. The single biggest source of mediocre AI output, in my experience and across years of watching other people use these tools, is the one-shot prompt asked at full ambition. The model is quietly making twenty assumptions about what you want, and most of them are wrong, and you don't find out which ones until you're staring at a generic draft wondering why the result is so meh.
Why it works
A one-shot prompt forces the model to commit to an answer before the two of you have agreed on what the answer should look like. The model doesn't know what you want. It guesses. Most of the guesses end up reasonable. A few end up wrong. The wrong ones become baked into the draft, and now you're editing a thing that was built on the wrong foundation, rather than redirecting before the foundation got poured.
The first message in the two-message version is doing one job. Showing you what the model is about to assume. Once you can see the assumptions, you can correct the wrong ones cheaply. A bad assumption caught in conversation costs you a sentence to fix. The same assumption baked into a draft costs you a rewrite.
The second message is doing the work the model is good at. Producing the thing, given a brief that's now accurate.
This isn't a clever trick. Anyone who's ever briefed a designer, a writer, or a junior teammate already knows the move. Don't ask for the thing until you've agreed on what it is supposed to do. The AI lets us skip that step in a way human collaborators wouldn't tolerate, and we mostly take the shortcut.
How to use it in any chatbot
Two examples of the shape. Use them as templates, not as autobiography.
The email you've been putting off
You need to write something that matters. A reply to a difficult client. A note to a teammate about something they're not doing well. A heads-up to your manager about a problem they don't yet know about. The tempting move is to dump the situation and ask for a draft. The two-message move is to dump the situation and ask the chatbot what it would want to know before drafting. Who's the recipient. What's the relationship. What's the worst-case misread of what you say. What tone matches the moment. Three or four exchanges. Then ask for the draft. The draft will be markedly better. You'll also notice you understand the situation better than you did before you started typing.
The decision you've been wrestling with
A choice you keep avoiding because every angle feels valid. The tempting move is to lay out the options and ask which is best. The two-message move is to lay out the options and ask the chatbot to play back three things. What it thinks you care about most. What trade-offs are sitting inside each option. What it'd need from you to give a real recommendation rather than a hedge. The conversation usually brings up the question you should have been asking. Then you can ask for the recommendation, and the recommendation will be about your real question, not the one you started with.
In both cases, the message that produces the answer is the second one. The message that decides whether the answer will be any good is the first.
The wider point
The two-message rule is one move that points at something bigger. The biggest difference between the people who get useful work out of AI and the people who don't isn't access to a particular model, or a clever prompt library, or any of the things the hype cycle wants you to think it is. It's how willing they are to slow down at the front of a task long enough for the AI to understand what's being asked.
An uncle of mine, by his own description "not a huge user, I dabble occasionally," read an earlier post of mine on a flight home about telling the AI who you are before asking for advice. He spent the rest of the flight working through a personal buying decision he'd been chewing on. A few exchanges in, ChatGPT pushed back on his earlier prompts. His words: "almost chiding me for wasting its earlier time with poor queries." He ended the flight with a different decision than the one he'd started with. His verdict, in his own words: "Game changer."
He didn't use the two-message rule by name. He used a closely related habit. Tell the model who you are before asking it for advice. Different move, same instinct: give the model what it would otherwise have to make up. The chiding moment isn't really chiding. It's the model finally having enough to work with and noticing that the question you started with isn't the question your situation is really asking.
For the broader version of this idea, the kind of context that lasts beyond a single conversation, see "The Conversation Is Disposable. The Context Is Not." Both rules come from the same instinct: slow down at the front of the work to let the model understand what's being asked. The two-message rule is what that looks like inside one chat. Context hygiene is what it looks like across many.
What to do this week
Pick one real ask you're going to make of an AI in the next few days. Something that matters. An email, a decision you're working through, a plan, a piece of writing.
Don't open the chat with the ambitious version of the question. Open it with this:
Don't write the thing yet. First tell me what you'd want to know to write it well. What you'd assume if I didn't tell you. What you'd want me to clarify before you start.
Have that conversation. Then ask for the thing.
A couple of things to expect. If the chatbot starts writing anyway instead of asking you questions back, tell it "wait, let's talk first." It will redirect. If the questions it asks feel obvious or even a little dumb, answer them anyway. The value isn't in how clever the questions are. It's in you saying the answers out loud, often for the first time. That's the move working. You don't need to know what to say going in. The chatbot is doing the asking.
If you've never done this before, what you're going to notice is that the conversation the first message opens does more thinking than the answer at the end. That's the rule working. The thinking is the part that was missing before.
Don't skip it
I've spent more time inside AI tools than is probably wise to admit, across more kinds of work than I can list in a paragraph, and the single most reliable habit I've picked up in all of it is sending two messages instead of one. It's almost embarrassing how simple it is.
It's also not a coincidence that the thing most likely to make AI useful for you tomorrow is the same thing that's always made any collaboration useful. Agreeing on what we're trying to do before we start doing it. The AI doesn't change that rule. It only lets us skip it more easily, which is why most of us do.
Don't.