The PM’s Copilot: Reclaiming Time with AI

Like many PMs, I didn’t really trust AI for serious product work. I mostly used it for general purposes: polishing emails and notes, drafting messages, and occasionally creating images.

I used AI on a project only as a second opinion and to validate my discovery work, but that was it. For more on that, see my earlier blog post.

While AI felt adequate for technical tasks or copywriting, it didn’t feel reliable for the messy, ambiguous world of product management, where curiosity, judgement, and breaking patterns matter most.

Then everything shifted. Here is my experience report.

The Breaking Point

As I transitioned into a leaner team and ventured into new products in unfamiliar domains, my responsibilities broadened beyond what I could realistically manage; I worried I might become the bottleneck.

Early attempts to fill all the gaps by myself were not smooth. On one occasion, a discovery session ended in debates over semantics, and the stress kept building.
________________________________________

The 30% Challenge

Around the same time, CEOs were boasting that AI was writing 30% or more of their code! (1, 2)
Impressive. But it made me ask: could I delegate 30% of my own work to AI?

Developers have tools like GitHub Copilot, Cursor, etc. What about PMs? Sure, there are specialised PM tools promising magic, but they either come with big price tags or offer limited, standalone features. For a small team, would they be worth it, or just become another forgotten subscription?

I decided to put this to the test: Could general-purpose AI lighten my PM workload?
________________________________________

The Experiment Begins

I started with some low-key tasks: polishing a PRD (Product Requirements Document) or creating Acceptance Criteria for a user story or two.

These early experiments were underwhelming. Without context or carefully crafted prompts, the outputs were generic, vague, and sometimes flat-out wrong. I often spent more time fixing AI-generated content than I saved by not drafting it myself. Newer models showed progress but still fell short.
________________________________________

The Breakthrough: Prompt Mastery 

The turning point came when I stopped treating AI like a search engine and started treating it like someone who can do a job but needs clear, detailed instructions.
Crafting a Prompt 
 
By defining the role, goal, audience, structure, and constraints in my prompts, I made the results far more usable in the project. Techniques such as:
  • Decomposition (breaking a big task into smaller steps),
  • Few-shot prompting (providing a few high-quality examples),
  • Critique-and-revise (asking the model to critique an output, then apply the changes), and
  • Role switching (having the model switch perspectives to provide different responses)
made the outputs stronger still. More on these techniques here.
Examples of Prompting Techniques for PMs

That was when I built my first prompting templates, reusable patterns that gave me more consistent outcomes.
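To give a flavour of what such a template can look like, here is a hypothetical sketch (not my actual template): a small reusable function that assembles the role, goal, audience, structure, and constraints described above into one structured prompt. All names and wording are illustrative.

```python
# A minimal, illustrative prompt template for PM tasks.
# The fields mirror the five elements discussed above; the helper
# name and example values are hypothetical, not a real product's data.

def build_prompt(role, goal, audience, structure, constraints, task):
    """Assemble a structured prompt from the five core elements plus the task."""
    sections = [
        f"Role: You are {role}.",
        f"Goal: {goal}",
        f"Audience: {audience}",
        "Structure:\n" + "\n".join(f"  - {s}" for s in structure),
        "Constraints:\n" + "\n".join(f"  - {c}" for c in constraints),
        f"Task: {task}",
        "If anything is unclear or missing, ask questions before answering.",
    ]
    return "\n\n".join(sections)

prompt = build_prompt(
    role="a senior product manager in a B2B SaaS team",
    goal="Draft acceptance criteria for the user story below",
    audience="developers and QA engineers",
    structure=["Given/When/Then scenarios", "Edge cases", "Out of scope"],
    constraints=["Follow INVEST principles", "Flag gaps instead of guessing"],
    task="As a billing admin, I want to export invoices as CSV.",
)
print(prompt)
```

Once the skeleton is fixed, only the task and a few fields change per request, which is what made the outputs consistent.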

Still, I was not fully satisfied. The outputs did not sound like me or reflect the full product context.
________________________________________

The Big Shift: Hire Your Copilot

Inspired by Tal Raviv’s 'Build Your Personal Copilot' in Lenny’s Newsletter, I decided to “hire” my first AI copilot. I onboarded it the way I would a new teammate: giving it goals, training it with product knowledge, and then delegating real tasks.
Hiring a Copilot

It wasn’t elegant. I didn’t build an internal LLM with full access to product knowledge; instead, I used ChatGPT’s generic features, and within a few hours, I had a working copilot.

With the right context, it became a solid thinking partner and coach: stress-testing ideas and flagging potential issues with compliance, integration, and solution gaps, sometimes even better than a real teammate.

As with any junior’s work, I still reviewed and polished the outputs, but they were close enough to a releasable version to save me real time.

Since that article, I have expanded my copilots and created Agents and GPTs/GEMs. I could not share them with others, though, due to licence and privacy restrictions.

The next leap came when I introduced my own playbooks.
________________________________________

The Game Changer: Playbooks

While my AI copilot was now a solid partner, I needed a way to make its outputs even more aligned with our team’s processes.

We had been making simple onboarding packs for new PMs, POs, and BAs for years. They contained guidelines and frameworks, set expectations, and showed newcomers “our style” of working.

These playbooks included guides such as how to create realistic data-driven personas, craft a strategic product canvas, and write impact-oriented epics and INVEST (Independent, Negotiable, Valuable, Estimable, Small, Testable) user stories.
Make the Results Yours with Playbooks

I converted these into AI playbooks: structured documents with precise instructions for recurring tasks.
They covered not just what to do but how to do it: roles to play, tone to use, when to ask questions, how to break a task into steps, how to review, and notes and techniques from authors I follow.
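As a rough illustration of the idea (my real playbooks are prose documents, and every field and value below is hypothetical), a playbook can be thought of as structured data that gets combined with project context into a final prompt:

```python
# A hypothetical "AI playbook" for one recurring task, sketched as data.
# The structure (role, tone, steps, review checklist) mirrors the elements
# described above; the names and content are illustrative only.

USER_STORY_PLAYBOOK = {
    "task": "Write a user story with acceptance criteria",
    "role": "an experienced product owner",
    "tone": "concise, plain business English",
    "steps": [
        "Restate the problem and the target persona",
        "Draft the story in 'As a..., I want..., so that...' form",
        "List acceptance criteria as Given/When/Then scenarios",
        "Check the story against the INVEST principles",
    ],
    "review": [
        "Ask clarifying questions if context is missing",
        "Flag assumptions instead of guessing",
    ],
}

def apply_playbook(playbook, project_context, request):
    """Combine a playbook, project context, and a one-off request into one prompt."""
    steps = "\n".join(f"{i}. {s}" for i, s in enumerate(playbook["steps"], 1))
    review = "\n".join(f"- {r}" for r in playbook["review"])
    return (
        f"You are {playbook['role']}. Task: {playbook['task']}.\n"
        f"Tone: {playbook['tone']}.\n\n"
        f"Project context:\n{project_context}\n\n"
        f"Follow these steps:\n{steps}\n\n"
        f"Before finishing:\n{review}\n\n"
        f"Request: {request}"
    )

print(apply_playbook(USER_STORY_PLAYBOOK,
                     "B2B invoicing product, enterprise tier only.",
                     "Story for exporting invoices as CSV."))
```

Keeping the playbook separate from the project context is what makes both reusable: the same playbook works across products, and the same context feeds many playbooks.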

This changed everything. The results not only matched the context but mirrored my approach.

For example, epics and user stories had exactly the same sections our POs used to create, with better coverage of Acceptance Criteria. My Product Canvas was not just a report; it was a strategic guide shaping how the product goes to market.
________________________________________

What Actually Worked

So, did I hit the magic 30%? Yes. In some areas I even exceeded 50% time savings on the same work.
Real examples:
  • I drafted three epics, each with fifteen-plus user stories, in under an hour for a big integration module. Doing the same task manually might take two or three days and multiple refinement sessions.
  • Adjusting the scope of a big product took only two hours. I had my copilot review the scope, detect dependencies, highlight compliance must-haves, and flag what could and could not be descoped. It helped me make the call faster and with more confidence; without my copilots, the same task normally takes 4 to 8 hours.
  • Expanding my ideas into stakeholder-ready PRDs (Product Requirements Documents) or roadmaps, aligned with my tone and style, takes far less effort.
  • Discovery sessions run smoother with ready-to-use documents and prototypes.
  • Refinement and planning take less discussion because stories are clearer.
  • Stakeholder and vendor engagement is sharper: coached by my copilots, I create clear agendas and prepare follow-ups in no time.
  • Backlog slicing, RAID log (Risks, Assumptions, Issues, Dependencies) management, and drafting Scrum ceremony agendas now take minutes instead of hours or days.
  • Drafting Jira tickets, Confluence docs, and meeting notes in my voice became routine.
I now delegate more manual and admin-heavy tasks to my copilots every day. Repetitive tasks that took 4-8 hours of my time each sprint are now done by copilots in minutes, sometimes on my phone during a commute.
My role shifted from creator back to reviewer, the same way I eventually only needed to review teammates’ work after years of collaboration.
My Copilots 
 
I expanded into multiple copilots, each specialised in a project or task. I constantly fine-tune playbooks, retrain copilots with changes, and stay in control.

My AI copilots adapt to my style; I keep the edge.
________________________________________

The Reality Check

Not everything clicked or met my initial target of 30% automation.

There were areas where I learnt limitations along the way and adjusted my approach to get results:
  • Limits of generic tools as copilots: ChatGPT Plus came with memory caps, performance constraints, and subscription limits. I also tried other generic LLMs (Grok, Claude, and Gemini) to see if there was any difference, but the limitations remained largely the same in my case.
  • Technical issues: Crashes and slowness occur, especially at subscription limits. Using multiple tools helps.
  • Team and budget reality: In a lean team or start-up working on a critical product, it is hard to fix processes or justify spending on specialised AI tools or automations. These things take time.
  • Privacy: Before sharing data with AI tools, use anonymization tools or manually redact sensitive information like customer names or financial details. I learned this the hard way after accidentally uploading a file with unredacted client data! Always double-check!
There were areas where AI could not take on my PM workload:
  • Too much overhead: Some tasks took more effort to train than to simply do myself. I now keep expectations at the “junior partner” or coach level.
  • One-off or quick (dirty) jobs: Training and fine-tuning were not worth it for tasks I only needed once, or where speed mattered more than setup.
  • Memory loss / lost context: Context still breaks. For example, after I updated Customer Tiers in the project knowledge, the copilot sometimes referred back to outdated scope. I now maintain clean project knowledge and refresh copilots regularly.
  • Hallucinations: Still happen, so double-checking is necessary. I learnt to re-train, break big tasks into smaller pieces, and train them to flag gaps rather than guess.
Lesson learned: Delegation to AI works, but you must stay in control.
________________________________________

The Bottom Line

Considering what worked and what did not, the results were great. I spent less time drowning in admin and gained more time for what matters.
The real benefit?
I reclaimed my focus for strategy, roadmapping, and workshops, the high-value work that truly needed my time.
________________________________________

What you should know

Before closing with what is next, there are a few important points to keep in mind:

  • Scope: This experiment focused only on reducing my personal generalist workload within a lean team. Extending AI into full team/org automation or adding AI features to our product is a separate challenge. It only makes sense when it solves a real problem, driven by genuine need, not just to follow the hype. That same principle was what helped me overcome my initial resistance in this experiment.
  • Role: Product Managers must stay in charge. PM work relies on curiosity, breaking patterns, experimenting, and applying human judgment in messy and complex scenarios. AI cannot and should not replace this. My goal was only to test whether AI could act as a helpful assistant for repetitive and manual tasks.
  • Stage: Because of where our products were in their lifecycle, the experiment mostly focused on product discovery and roadmapping. Product managers may use AI as their assistant differently in other stages of the product life cycle.
  • Prompt: Both context and playbooks are elements of structured prompts. Splitting them out makes each reusable across different initiatives, whether as project knowledge or as super prompts.  
  • Tools: There are likely better AI-enabled solutions that avoid some of the issues I described in this post (especially around memory and privacy). My team and I also use other tools such as Jira AI and Miro AI. In this experiment I primarily focused on whether a generic LLM could help me lighten my workload quickly and on a low budget, inspired by the Lenny’s Newsletter post where Tal Raviv explored the same approach.

________________________________________

What’s Next

This is just the start. I want to extend it:
  • Build internal LLMs to reduce privacy risks and increase memory.
  • Expand and refine playbooks to cover more tasks.
  • Experiment with AI agents and agentic AI for end-to-end workflows.
  • Automate more of the project lifecycle.
The last few months reshaped how I work: from being stretched thin by manual and admin tasks back to focusing on strategy while my copilots handle some of the legwork.

For you: if you have not started, do not wait.
Start small, master prompting, give AI the right context, and hire your copilots. And remember, you are in the “captain’s seat”! You will be surprised at how much you can reclaim.
👉 If you’d like to see my playbook templates in action, connect with me here or on LinkedIn; I would love to swap notes.
________________________________________

References

  • AI as Copilot – The framing of AI as a “copilot” (rather than a replacement) is now widely used across product management and engineering circles. I use this term deliberately to emphasise partnership: AI takes on repetitive and structural tasks, while humans stay in charge of strategy, judgement, and creativity.
  • Tal Raviv, Build Your Personal Copilot, Lenny’s Newsletter – My approach was directly inspired by Tal Raviv’s excellent piece on building your own AI copilot. He outlines practical steps for setting up AI systems that extend your capabilities rather than overwhelm them. Highly recommended reading if you’re exploring this space.
  • Lenny’s Newsletter – A valuable resource for product managers and operators, covering best practices, case studies, and emerging trends in product development.
  • Screenshots in this post are from my Miro board created for ARIA’s workshop “AI as a Copilot in Product Discovery, Roadmapping, and Project Planning” held on 3 September 2025.
