3 Million Words Into AI: What I Actually Learned

ai, productivity, lessons

In February 2025, I typed 378 words into Google Gemini. Five prompts. Tentative. Mostly seeing what it could do.

Fourteen months later, I'd typed 3 million.

Not 3 million words of AI output. 3 million words of my own thinking, externalized into a tool that I did not expect to become the most important part of how I work.

Here's what I know now that I didn't know then.

Millions of words of input, one conversation at a time. Photo by Luis Gomes on Pexels.

Does AI actually make you 10x faster?

No. A randomized controlled trial by METR (Model Evaluation & Threat Research) gave experienced open-source developers real tasks in repositories they had contributed to for years, randomly allowing or disallowing AI tools per task. The result: developers were 19% slower with AI, but believed they were 20% faster. Apollo.io studied its own engineering team of 250+ developers for a year and found a 1.15x improvement. Not 10x. Not 5x. 1.15x.

My experience tracks with this. For raw coding speed, the improvement is real but modest. Maybe 20-30% on tasks where the AI has good context. Close to zero on tasks where it doesn't. Negative on tasks where it hallucinated and I had to debug its mistakes.

The productivity gain is not in speed. It's in scope. I can keep more projects alive simultaneously because each one has an AI conversation holding context I'd otherwise have to keep in my head. That's a fundamentally different kind of improvement than "I write code faster." I wrote more about why that matters for the way my brain works.

How much AI did I actually use?

5,910 prompts (at time of writing), 3 million words of my own input, 207 active days across 14 months, plus 35 Cursor conversations across 20 projects. Google lets you export your Gemini history; Cursor stores your conversation transcripts. I pulled both and counted.

Gemini (Feb 2025 to Apr 2026, at time of writing):

  • 5,910 prompts across 207 active days
  • ~3,000,000 words of my own input
  • Average prompt: 498 words (I wasn't asking quick questions)
  • Busiest month: 1,033 prompts in May 2025
  • Busiest day: 153 prompts on April 25, 2025

Cursor (Feb to Apr 2026):

  • 35 conversations across 20 projects
  • ~2,700 messages
  • Sessions that got shorter and more focused as I learned

Those aren't vanity metrics. They're data about how a methodology evolved. And the methodology is what's worth sharing.
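The counting itself is simple once you have an export. A minimal sketch of the kind of tally I ran, assuming the export has already been flattened into (date, prompt text) records — the real Gemini export arrives in Google's own format and needs its own parsing pass first:

```python
from collections import Counter
from datetime import date

# Hypothetical flattened records: (date, prompt text). Stand-ins for
# illustration; a real export holds thousands of these.
prompts = [
    (date(2025, 3, 10), "What can you do?"),
    (date(2025, 4, 25), "Here is the full context for the proposal ..."),
    (date(2025, 4, 25), "Rewrite section two with no buzzwords ..."),
]

total_prompts = len(prompts)
total_words = sum(len(text.split()) for _, text in prompts)
active_days = len({d for d, _ in prompts})
by_month = Counter(d.strftime("%Y-%m") for d, _ in prompts)

print(total_prompts, total_words, active_days)
print(by_month.most_common(1))  # busiest month
```

Average prompt length, busiest day, and the monthly adoption curve all fall out of the same handful of aggregations.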

How fast does AI adoption actually happen?

I went from 17 prompts in March 2025 to 825 in April. A 48x increase in one month. That's not experimentation. That's finding something that fits.

The adoption curve for AI tools is not a gentle slope. It's a cliff. You either find a problem that the tool solves for YOUR brain and go all in, or you poke at it forever and wonder what the fuss is about.

The lesson for anyone trying to get a team to adopt AI: stop sending training links. Find each person's "one problem that fits" and solve it with them. That's the ignition event.

Do named AI personas improve output quality?

I created 13 named AI personas for one project and it changed how I use AI entirely. I wrote up the full methodology in The AI Director Protocol, but the short version: giving the AI a specific character (name, city, backstory, specialty, philosophy) produces dramatically more focused and consistent output than generic role prompts like "act as a developer."

The standard prompt engineering frameworks (GODLE, TCOF, Role-Task-Constraints) are useful starting points, but they treat the AI as a function to be called, not a character to be directed. The persona approach treats AI conversations as collaborative sessions with a consistent partner.

What matters most when using AI for writing?

Telling AI what NOT to do matters more than telling it what to do. 945 of my Gemini prompts were writing-related (blog posts, client emails, RFP sections, colleague coaching) and the single most consistent pattern across all of them was constraint-based editing.

I spent more time telling AI what NOT to do than what to do.

No em dashes. (AI has ruined these for all writers of our generation.) No buzzwords. No contractions in formal content. No hard sell. No "leverage synergies." No jargon for its own sake. Clear-speak only. Does this sound like a human wrote it?

The editorial checklist I developed:

  1. Apply the hard rules (formatting, voice constraints)
  2. Check voice consistency ("Does this sound like them?")
  3. Verify factual accuracy against source material
  4. Apply the audience lens ("If I'm a VP of Retail, do I understand this?")
  5. Resist the pitch ("Let the content stand for what it is")

I used this checklist for my own writing and for coaching 6+ colleagues on their content. The same standards applied regardless of who the author was. The AI doesn't have editorial judgment. You do. Your job is to apply it consistently, and AI makes that scalable.
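The hard-rules step of that checklist is mechanical enough to automate. A sketch of a constraint checker, with an invented rule list standing in for my actual constraints (which live in prompts, not code):

```python
import re

# Illustrative hard rules; each maps a rule name to a pattern that,
# if it matches, means the rule was violated.
HARD_RULES = {
    "em dash": re.compile("\u2014"),
    "buzzword": re.compile(r"\b(leverage|synergy|synergies)\b", re.IGNORECASE),
    "contraction": re.compile(r"\b\w+'(t|re|ll|ve)\b"),
}

def check_hard_rules(text: str) -> list[str]:
    """Return the names of every hard rule the text violates."""
    return [name for name, pattern in HARD_RULES.items() if pattern.search(text)]

print(check_hard_rules("We'll leverage synergies\u2014fast."))
```

Only step 1 automates this cleanly. Steps 2 through 5 (voice, accuracy, audience, restraint) are exactly the judgment calls the checklist exists to force.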

Should you use one AI tool or multiple?

Multiple, each for what it's best at. My Gemini usage declined from 1,033 prompts/month to 89 as Cursor rose from zero to 35 active conversations. The workload distributed across tools naturally. That's not me abandoning one tool. That's tools finding their lanes.

Gemini's lane: Writing, strategic thinking, proposal work, colleague coaching, anything involving long-form text where conversation flow matters.

Cursor's lane: Code, project builds, anything involving files and repositories where the AI needs to read and modify actual artifacts.

The worst mistake I made was trying to use one tool for everything. Gemini was terrible at managing code files. Cursor wasn't great for freeform strategic brainstorming. Once I let each tool do what it's built for, both got better.

If you're evaluating AI tools for a team, don't look for one that does everything. Look for two or three that each do one thing well, and teach people when to use which.

What's the single best way to get more from AI tools?

Invest time upfront loading context: personas, rules files, project descriptions. This is compound interest: a 30-60 minute deposit saves hours across every future interaction.

I built an AI brand persona for my company, loaded it with brand guidelines and a slide template, and every output from that conversation was on-brand from the first draft. I built a Cursor Baseline with 30+ rules files and every new project started from a higher floor. I wrote WORKING.md files so conversations could resume after days of inactivity.

Each of these investments took 30-60 minutes. Each one saved hours over the following weeks. The teams that aren't seeing ROI from AI tools are usually the ones who skipped this step.
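A WORKING.md file doesn't need to be elaborate. A hypothetical skeleton of the shape mine take (headings invented for illustration, not my actual template):

```markdown
# WORKING.md — <project name>

## Current state
One paragraph: what works, what's half-done, what's broken.

## Next steps
- [ ] Smallest next task, stated so a cold-start session can act on it

## Decisions already made
- Choice, plus the one-line reason, so the AI doesn't relitigate it

## Do not
- Constraints the AI keeps forgetting (style rules, files not to touch)
```

The point is that a conversation resumed after a week of inactivity starts from this file instead of from my memory.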

Why should you track your AI usage?

You can't optimize what you don't measure. I exported my Gemini data because I was curious. But having the numbers transformed how I understood my own relationship with these tools.

Before the data: "I think AI helps me." After the data: "I can see that I typed millions of words across thousands of conversations, my peak adoption was April 2025, my usage patterns shifted from long marathon sessions to short focused sprints, and the Gemini-to-Cursor crossover happened in December 2025."

The specificity matters. Not just for writing about it, but for improving your own workflow. Most people never look at how they're actually using these tools.

What's the honest take?

This is not a success story about AI making everything better. This is a story about finding a tool that fits how my brain works, investing an unreasonable amount of time into it, and ending up with a methodology I didn't expect.

Some of that time was wasted. Conversations that went nowhere. Prompts that produced garbage. Hallucinations that cost me hours to debug. A week where I was so deep in Gemini conversations that I fell behind on actual client work.

AI is not free productivity. It's a different way of working that has its own costs. The costs are worth it for me. They might not be for you. The only way to find out is to find your one problem that fits, solve it with AI, and see if the ignition happens.

If it does, 3 million words goes faster than you'd think.

Ian Bezanson

Builder, operator, serial reinventor. Writing about AI and commerce because I can't leave either alone.