Most people use AI like a chatbot. Ask a question, get an answer, paste some code, fix a bug. It works fine for small things. It falls apart completely for anything real. The context window fills up, the AI drifts off-topic, and you lose confidence in the tool.
I spent over a year and roughly 3 million words (at time of writing) figuring out a better approach. Not in theory. In practice, across thousands of conversations, 20+ projects, and one particularly intense build where I created 13 different AI personas to get the job done. I have not found anyone else documenting this approach.
How do named personas fix this?
Instead of saying "act as a developer," you give the AI a specific character with a name, specialty, and constraints. The output becomes dramatically more focused. I discovered this in November 2025 while building an internal tool at work: a lead generation application with real full-stack complexity, spanning a backend, frontend, database, email service, admin portal, and deployment.
I'd been struggling with Gemini conversations that started sharp and degraded. So I tried something different.
Instead of a generic role prompt, I wrote this:
Your new persona is "Alex." You are a Senior Database Architect living in Toronto, Ontario. You're married with two kids and one golden retriever. You've been working with data models for 15 years, and you value precision, scalability, and clean, commented SQL. To you, a database schema should tell a clear story, be easily maintainable, and be normalized just enough without being academic.
Then I gave Alex a specific job: design the application's database schema.
Alex stayed on task. He didn't drift into frontend suggestions. He didn't try to rewrite my API. He designed the database, and he did it with the priorities I'd given him: clean, normalized, well-commented.
So I did it again. And again.
The Full Cast
Over the course of building the application, I created 13 distinct personas. Each one had a name, a city, a backstory, a specialty, and a working philosophy.
| Name | Specialty | Motto / Role |
|---|---|---|
| Alex | Database Architecture (Toronto) | "A schema should tell a clear story" |
| Sam | Frontend Development (Montreal) | "Simple, no-build-step extensions" |
| Marcus | Full-Stack (Vancouver) | "Code is read far more often than it is written" |
| Kenji | Data Integrity (Tokyo/Vancouver) | "Data is only valuable if it's accurate and real-time" |
| Leo | DevOps / SRE (Calgary) | "If it's not automated, it's broken" |
| Sonia | State Machines & Logic | "A finisher brought in for critical bugs" |
| Priya | Multi-Tenant Architecture (Austin) | "Security, scalability, and robustness above all" |
| Ben | Admin Portal / Security (Waterloo) | "An internal tool should be 10x more secure" |
| Isabelle | Full-Stack Debugging (Quebec City) | "A data plumber; hates inconsistent data" |
| David | Data Enrichment (San Francisco) | "Garbage in, garbage out" |
| Elena | Data Pipelines & ETL | "Source of truth and data normalization" |
| Lucas | Frontend & Email Templates | "Information hierarchy matters as much as the data" |
| Maya | Finishing & Polish (Montreal) | "Takes a 90% product to the finish line" |
When I hit a database bug, I opened Alex's conversation. When the deployment wasn't working, Leo showed up. When the emails looked wrong, Lucas got the call.
Each conversation stayed focused because each persona had a clear boundary. Alex didn't have opinions about CSS. Leo didn't try to redesign the data model. The constraints were in the persona, not in me constantly redirecting the AI.
Why does a fictional backstory improve AI output?
The biographical details create constraint density. The more specific the persona, the narrower the output space, and the more consistent the results. You might read "married with two kids and a golden retriever" and think that's silly. Why does an AI need to know about a fictional dog?
It's not about the dog. It's about constraint density.
When you tell AI "act as a developer," it draws from every possible interpretation of what a developer might say or do. The output is generic because the input is generic.
When you tell it "you're Alex, a 15-year database veteran in Toronto who values clean SQL and readable schemas," you've narrowed the space dramatically. The city doesn't matter to the code. The golden retriever doesn't matter to the code. But they matter to the persona. They create a specific character that the AI maintains consistently.
I've tested this across hundreds of conversations. Generic role prompts ("act as a senior engineer") produce generic output; named characters with backstories produce focused, opinionated, consistent output.
Beyond Code: The Same Pattern Everywhere
The persona approach works for any complex task, not just coding.
For RFP responses, I created "Tom": a 32-year-old proposal writing expert based in Pickering, Ontario. Tom was practical, friendly, and focused on getting to the finish line without losing quality. Tom helped me process a 650-page RFP draft into a winning proposal.
For blog ghost-writing, I created "Allana": 18 years of web writing experience, from ecommerce descriptions through the SEO bubble to GEO/AEO optimization. Allana was audience-obsessed. Her constant question was "what will the reader do next?" She helped me draft and edit blog posts for colleagues across the team.
For client emails, I didn't create a named persona. I just used my own context and a clear instruction set. Not everything needs the full treatment. The persona method is for sustained, complex, multi-session work. Quick tasks don't need it.
What is the Director Model?
The personas were only half the system. The other half was a single, long-running PM conversation that held the entire project in its head.
I kept one Gemini conversation open for the life of the project. This was the co-pilot. It held the master plan, knew what every persona had done, and tracked what needed to happen next. It wasn't a to-do list I maintained in my head or a spreadsheet. It was an AI conversation that understood the full context of the project and could reason about it.
When it was time to work on something, I didn't just open a new chat and start typing. I went to the PM conversation and said "what's next?" It would look at the plan, identify the next task, and generate a complete prompt for me: persona definition, context about the current state of the project, the specific task, and the constraints. A ready-to-paste instruction set.
I'd open a fresh Gemini chat, paste the prompt, and let the specialist work. Alex would design the schema. Leo would fix the deployment. Kenji would audit the data integrity. Each conversation was cheap to start because the PM had already written the brief.
When the specialist finished, I'd ask it for a markdown summary of everything it did: decisions made, code written, problems encountered, open questions. It would hand me a clean block of markdown in a code fence. I'd copy that, go back to the PM conversation, and paste it in.
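The summary format wasn't anything exotic. A skeleton like this is enough for the PM to digest (the headings and the specifics filled in below are my own illustration, not a fixed schema):

```markdown
## Specialist Summary: Alex (Database)

### Decisions made
- Split leads into separate `leads`, `contacts`, and `campaigns` tables

### Code written
- Initial schema migration, with comments on every table

### Problems encountered
- Unclear whether an email address is unique per lead or per contact

### Open questions
- Should soft deletes apply to campaigns?
```

The point is that every section maps to something the PM needs to update the plan: what changed, what exists now, what's blocked, and what needs a human decision.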
The PM would digest the summary, update the plan, and we'd do it again.
The cycle looked like this:
- PM identifies the next task from the master plan
- PM generates the prompt: persona, context, task, constraints
- I paste it into a fresh conversation: the specialist gets a clean context window
- The specialist executes: focused work with no drift, because the scope was defined before it started
- I bring the summary back to the PM: markdown handoff, not a vague "it went fine"
- PM updates the plan: incorporates what was done, adjusts what's next, flags conflicts
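The cycle can be sketched as a tiny program. Everything here (the `Persona` and `ProjectManager` classes, the method names) is my own illustration of the workflow, not any real tool's API; in practice the "calls" are prompts you paste into fresh chats by hand.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    specialty: str
    philosophy: str

@dataclass
class ProjectManager:
    plan: list                               # ordered (Persona, task) pairs: the master plan
    log: list = field(default_factory=list)  # digested specialist summaries

    def next_brief(self) -> str:
        """Generate a ready-to-paste prompt for the next undone task."""
        persona, task = self.plan[len(self.log)]
        context = "\n".join(f"- {s}" for s in self.log) or "- (project start)"
        return (f'Your new persona is "{persona.name}", a {persona.specialty}. '
                f"{persona.philosophy}\n\n"
                f"Project context so far:\n{context}\n\n"
                f"Your task: {task}")

    def digest(self, summary: str) -> None:
        """Fold a specialist's markdown summary back into shared memory."""
        self.log.append(summary)

# One turn of the cycle:
alex = Persona("Alex", "Senior Database Architect",
               "A schema should tell a clear story.")
pm = ProjectManager(plan=[(alex, "Design the leads database schema.")])
brief = pm.next_brief()   # paste this into a fresh specialist chat
pm.digest("Alex delivered a normalized schema with commented SQL.")  # paste the summary back
```

The human is the glue between `next_brief()` and `digest()`: the copy-paste in the middle is where judgment calls happen.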
This made each specialist conversation disposable. If Alex's conversation got messy or hit a dead end, I could abandon it and have the PM spin up a fresh one with the same context. The specialists were cheap. The PM was the persistent brain.
I was the message bus. Copy prompt out, paste summary back in. The AI was doing the project management. The AI was doing the specialist work. I was the human in the loop making judgment calls and carrying context between conversations that couldn't talk to each other.
This is the pattern I now use in Cursor, with the added benefit that Cursor holds project-wide context in files and rules rather than in a single conversation thread. The PM's job gets replaced by .cursor/rules/ files and WORKING.md notes that persist across sessions without depending on a conversation that can degrade.
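In Cursor, a persona boundary becomes a rule file instead of a chat preamble. A minimal sketch, assuming Cursor's `.mdc` project-rule format (frontmatter fields and the file's content below are illustrative, not a recommendation of exact wording):

```markdown
---
description: Database work follows Alex's standards
globs: ["**/*.sql"]
alwaysApply: false
---

You are Alex, a senior database architect. A schema should tell a
clear story: normalized just enough, clean, and commented. Do not
suggest frontend changes; stay focused on data modeling.
```

Because the rule lives on disk, the constraint survives every new session instead of depending on a conversation thread staying healthy.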
How do you get started without creating 13 personas?
You don't need 13 personas to get the benefit. Start at Level 1 and work up. Most people see improvement at Level 2 or 3.
Level 1: Name your AI conversations. Instead of "New Chat," title them by purpose: "Database Design," "Email Templates," "Client Proposal." Keep each conversation focused on one concern. When the AI drifts, open a new conversation for the new topic.
Level 2: Give each conversation a role. At the start, define what this conversation is about and what it is NOT about. "You are helping me design the database schema. Do not suggest frontend changes. Do not rewrite the API. Stay focused on data modeling."
Level 3: Name the persona. Give it a name, a specialty, and a philosophy. "You're Alex. You're a database architect. You value clean, readable schemas." This sounds like a small change but the consistency improvement is noticeable.
Level 4: Full backstory. The full approach. Name, city, backstory, specialty, motto, constraints. Reserve this for complex, multi-session projects where you'll return to the same persona dozens of times.
Level 5: The PM hub. One long-running conversation becomes the project manager. It holds the master plan, generates prompts for specialist conversations, and digests their summaries when they're done. You stop switching between conversations ad hoc and start running a system. The PM dispatches tasks, specialists execute, summaries flow back. This is where I ended up on the project that spawned 13 personas, and it maps directly to the "conductor" metaphor I described in my piece on AI and attention management.
What are the limits of the persona method?
Long conversations still degrade, the PM is a single point of failure, and the platform can pull the rug out from under you.
Long conversations still degrade. Even a well-defined persona starts to drift after 50+ exchanges. The fix is to periodically summarize the conversation's state and restart with a fresh persona definition. The specialist conversations avoided this by design; they were short-lived and task-focused. The PM conversation was the one that had to stay sharp over weeks.
Personas can't hold cross-conversation context. Alex doesn't know what Leo decided about the deployment. That's what the PM conversation solved. It was the shared memory. But the quality of that shared memory depended entirely on the summaries I brought back. If I got lazy with the handoff, the PM's picture of the project drifted from reality.
The PM conversation is the single point of failure. Everything depended on that one long-running conversation maintaining its context. Which worked brilliantly until it didn't.
Earlier versions of Gemini handled this tolerably. The conversation would get slow and start to hallucinate, but you could feel it coming. You could ask it to write a "parachute prompt," a summary dense enough to bootstrap a new conversation, and hobble back to a working PM. It wasn't elegant, but it was recoverable.
Gemini 3 changed this completely. When the context window filled, it silently dropped essentially everything except the last few exchanges. The PM conversation, with its weeks of project context, the master plan, and accumulated summaries from a dozen specialists, forgot everything at once. You'd reference a decision from two days ago and the AI would confidently deny it happened. You'd ask about Leo's deployment plan and it would fabricate a new one from scratch. It felt like being gaslit by your own tool. And there was no parachute this time: the context needed to write one was already gone.
That experience pushed me into Cursor, where context lives in files on disk (rules, working notes, project configuration) instead of a conversation thread that can be silently truncated.
It's slower to start. Writing a persona definition takes 5-10 minutes. The PM conversation took longer, maybe 30-60 minutes to load with the full project context. For a quick question, that's wasted time. For a project that spans weeks, it saves hours.
People look at you funny. I told a colleague I'd created 13 named characters for a coding project and they asked if I was writing a novel. Fair.
The setup cost is real. Writing 13 persona definitions, building the PM conversation's initial context, establishing the handoff workflow. That was probably 4-5 hours of upfront investment before any "real" work happened. For a weekend project, that's absurd. For a build that ran for months across dozens of conversations, it paid for itself many times over. The breakeven point, in my experience, is about two weeks of sustained work on the same project. Below that, a simpler approach (Level 2 or 3) gives you most of the benefit.
Try It
Next time you start a complex AI conversation, instead of typing your question directly, spend five minutes defining who you're talking to. Give them a name. Give them a constraint. Give them a philosophy about how to approach the work.
Then watch what happens to the quality and consistency of the output.
If it works, try two conversations running in parallel. Then three. Then open a dedicated PM conversation to coordinate them, and see if you stop thinking of AI as a chatbot and start thinking of it as a team you're directing.
That's the Director Protocol. It's how I went from "AI is a fancy autocomplete" to running 20+ projects across two platforms with millions of words of structured thinking behind them.
It didn't make me smarter. It made me able to distribute what I already knew across more problems at once.