Your AI Assistant Keeps Forgetting You. Here's How to Fix That.
What You'll Find In This Article
- Understand why 'persistent context' makes AI tools dramatically more useful than one-off prompts
- Know which types of work AI copilots are good at (synthesis, brainstorming) versus where humans must stay in charge (judgment calls)
- Recognize the basic building blocks needed to set up a context-aware AI assistant
- Be able to evaluate whether this approach makes sense for your role and workflow
Product managers are discovering that the difference between "using ChatGPT" and "having a real AI assistant" comes down to one thing: memory. When you start every conversation from scratch, you're essentially introducing yourself to a stranger each time. But when you feed an AI your strategy documents, meeting notes, and customer research on an ongoing basis, it becomes something far more valuable—a thinking partner that actually understands your world.
The practitioners making this work are refreshingly honest about what these tools can and can't do. They're excellent for spotting patterns across messy information, brainstorming options, and synthesizing research. But they're not replacing the hard judgment calls—the trade-offs, the prioritization decisions, the stakeholder navigation—that define real product work. Think of it less like hiring a replacement and more like finally having a research assistant with perfect recall.
The Shift
Most professionals use AI the same way they use a search engine: type a question, get an answer, move on. The problem? Every conversation starts from zero. You're constantly re-explaining your company, your product, your customers, and your constraints. It's like having a brilliant colleague with amnesia.
This "stateless" approach wastes the real power of modern AI tools. You get generic answers when you could be getting tailored insights based on everything your organization has learned.
The Solution
Product managers in Lenny Rachitsky's community are building what they call "copilots"—AI setups that maintain ongoing context about their work. Think of it like the difference between calling a random consultant versus working with an advisor who's been embedded in your company for months.
The setup involves three key ingredients:
- A foundation of documents: Strategy decks, product specs, user research, meeting notes—anything that captures what you're building and why
- Regular feeding: New learnings get added continuously, not just at setup
- The right model: Using AI tools with large "context windows" (the amount of text the model can consider within a single conversation)
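If you want to see what "a foundation of documents" looks like in practice, here is a minimal sketch in Python of combining an orientation prompt with a folder of plain-text documents into one block you could paste or upload into your AI tool. The folder layout, file extension, and orientation wording are all illustrative assumptions, not part of any specific product.

```python
from pathlib import Path

def build_copilot_context(doc_dir: str, orientation: str) -> str:
    """Combine an orientation prompt with every .txt document in a
    folder into a single block of text for an AI tool's context."""
    sections = [f"ORIENTATION:\n{orientation}"]
    # Sort for a stable, predictable ordering of documents.
    for doc in sorted(Path(doc_dir).glob("*.txt")):
        sections.append(f"--- {doc.name} ---\n{doc.read_text(encoding='utf-8')}")
    return "\n\n".join(sections)
```

The labeled separators (`--- filename ---`) are one simple convention that helps the model attribute answers to specific documents when you later ask "what does our user research say about X?"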
The analogy that helps: Imagine explaining a complex decision to a new hire versus a veteran team member. The veteran already knows the backstory, the politics, the failed experiments. That's what persistent context gives you.
The Impact
Practitioners report that a well-fed copilot becomes genuinely useful for:
- Synthesizing scattered information: "What have customers said about feature X across all our research?"
- Brainstorming with guardrails: Getting ideas that actually fit your product's constraints
- Pattern recognition: Spotting connections you might miss across hundreds of pages of notes
- First-draft thinking: Quickly generating options to react to rather than starting from a blank page
The key insight from experienced users: these tools amplify your thinking, they don't replace it. The hard calls—what to prioritize, which trade-offs to make, how to navigate stakeholders—still require human judgment.
Real World Example
Imagine you're a product manager preparing for a quarterly planning session. Without a copilot, you'd spend hours re-reading old research documents, digging through Slack threads, and trying to remember what customers said three months ago.
With a properly-fed copilot, you could ask: "Based on our user research and support tickets from Q3, what are the top three pain points we haven't addressed yet?" And get an answer grounded in your actual data—not generic advice from the internet.
One community member described using their copilot to stress-test a roadmap decision: feeding it the relevant context and asking it to argue against their preferred approach. Not to make the decision, but to pressure-test their thinking before presenting to leadership.
Getting Started
1. Gather your core documents: product strategy, recent specs, and 2-3 key user research summaries
2. Choose a tool with a large context window (Claude, ChatGPT with file uploads, or Gemini)
3. Upload your documents and write a brief 'orientation' prompt explaining your role and product
4. Test with a real question you'd normally spend time researching manually
5. Set a weekly reminder to add new meeting notes or research to your copilot's context
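The weekly "feeding" step above can be as simple as appending date-stamped notes to one running file that you re-upload each week. A minimal sketch, assuming a plain-text notes file; the file name and heading convention are illustrative:

```python
from datetime import date
from pathlib import Path

def append_note(notes_file: str, note: str) -> None:
    """Append a date-stamped entry to a running notes file, so every
    weekly update lands in one document you can re-upload."""
    entry = f"\n## {date.today().isoformat()}\n{note}\n"
    with open(notes_file, "a", encoding="utf-8") as f:
        f.write(entry)
```

Keeping everything in one dated file means your copilot sees not just what you learned, but when, which helps it weigh recent findings over stale ones.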
A starter prompt to try:
"Based on the documents I've shared, what are the biggest unresolved questions about our product direction?"