I studied a blend of genetics, sociology, and architecture in college. To organize my notes, I used different colored pens for each subject. Blue for hard sciences, green for social sciences, black for architecture. It wasn't just about organization—each color created a distinct mental space for different types of thinking.
This wasn't just personal preference—research shows that multimodal encoding through color and handwriting significantly improves memory retention.
Today I use a suite of tools to help me with memory capture. For the digital world, I use Granola, an AI note-taking tool that automatically captures and summarizes my meetings. The notes show up later, neatly organized and contextualized. For the real world, I use Spacebar, a mobile app that records audio, photo, and location data to create discrete, interactive memory files. It forces me to decide that something is worth remembering, nudging me to exercise judgment in the moment.
Not all memory capture tools take the same approach. Granola works ambiently, passively recording all of my conversations and generating comprehensive notes for each meeting. Spacebar, on the other hand, actively requires my discernment: it doesn’t listen unless I open the app and press record.
The judgment required in choosing what to remember correlates with the strength of the memory, and that’s the focus of this post.
The Unbundling of Memory
Looking at my old notebooks triggers a memory of how the lecture halls smelled. I remember listening to the professor and staring at the words as I translated them into ink with my hand. All of these environmental cues anchor my experience of the moment, acting as mnemonic triggers for the lecture. My AI-generated notes have none of those anchors, leading to some uncomfortable implications.
We’re seeing the progressive unbundling of human agency in memory creation, and I think it’s happening in clear phases.
The first phase started a long time ago. Cameras, voice recorders, and digital notebooks were primitive tools for documentation. Sure, each of these tools was enhanced by technology, but they only made it easier to document the moments we chose. Ultimately, we still had to decide what was worth remembering.
The second phase is happening now. Technology can passively record everything we see, say, and hear. Even so, we still offload memory to tools like password managers, Notion databases, and browser bookmarks. But today, AI can parse data sources of varying fidelity (raw transcripts, written notes, etc.) and search for patterns to flag what might be important. So instead of deciding what to remember, we let tools consume everything and review only the summary. This is where tools like Granola (ambient), Spacebar (active), and (the new AI-integrated) Notion live.
The third phase is emerging. AI systems are starting to interpret our experiences: finding patterns in captured data, generating insights, and in some cases, offering personalized advice. The process of making meaning from memories is being progressively delegated to synthetic companions who can remember every detail of our lives. This is where AI companions like Friend (ambient capture) and Dot (active capture) live.
The fourth phase is on the horizon: autonomous agents who act on our behalf. The foundational memory loop leading to action (capture → recall → synthesize → strategize → act) might bypass human judgment entirely. In this world, expect that general-purpose tools embedded with context, like ChatGPT, will thrive. Who knows… perhaps the beef between Elon Musk & Sam Altman will be squashed and brain-computer interfaces like Neuralink will handle the full memory loop with “ChatGPT inside”.
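To make the loop concrete, here’s a minimal sketch in TypeScript of what that pipeline might look like. Everything below is hypothetical: the stage names simply mirror the loop above, and none of these types correspond to any real product’s API.

```typescript
// Hypothetical sketch of the memory loop: capture → recall → synthesize → strategize → act.
// Invented for illustration; it only makes explicit which phases are being delegated.

interface Memory {
  id: string;
  timestamp: Date;
  content: string; // raw transcript, written note, photo caption, etc.
}

interface Insight {
  summary: string;
  sourceIds: string[]; // the memories this insight was derived from
}

const store: Memory[] = [];

// Phases 1 and 2: capture and recall are still (mostly) human-directed today.
function capture(content: string): Memory {
  const memory: Memory = { id: crypto.randomUUID(), timestamp: new Date(), content };
  store.push(memory);
  return memory;
}

function recall(query: string): Memory[] {
  // Naive keyword match stands in for semantic search.
  return store.filter((m) => m.content.toLowerCase().includes(query.toLowerCase()));
}

// Phase 3: synthesis, increasingly delegated to AI.
function synthesize(memories: Memory[]): Insight {
  // In practice this would be an LLM call over the retrieved memories.
  return {
    summary: `Pattern across ${memories.length} memories`,
    sourceIds: memories.map((m) => m.id),
  };
}

// Phase 4: strategy and action, the steps where human judgment could be bypassed entirely.
function strategizeAndAct(insight: Insight): void {
  console.log(`Acting on: ${insight.summary}`);
}

capture("Discussed Q3 roadmap with the design team");
strategizeAndAct(synthesize(recall("roadmap")));
```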
Network Effects of Personalization
Personalization is the ultimate unlock for maximally useful AI tools. Recent studies show that AI systems with access to a user profile are significantly better at accurately predicting that user’s preferences. So what happens when everyone has maximally context-aware AI tools?
AI needs deeper context about our lives to maximize its usefulness. There is a virtuous cycle at play: we generate massive amounts of data, enabling more useful AI, driving more AI adoption, boosting the productivity floor of the average person. It's a feedback loop that's already in motion.
For now, “processed” data created by humans (e.g. diary entries) is richer and more valuable than “raw” data (e.g. an always-on video feed). Data created by humans is full of nuance in style, theme, etc. The signal derived from this kind of data gives an AI system higher-quality context to build a personalized profile from. I’d argue this is the bull case for a product like Notion, which owns an entire suite of productivity tools (email, calendar, notes, etc.) and high-signal data across both personal & enterprise markets.
But data derived from the days of our lives is messy. It’s cross-contaminated across the various apps & devices in our homes and at work: text messages, Notion databases, Apple Notes, password managers. Digitally encoded memories reveal some of our most intimate moments. How do we protect our privacy while preserving interoperability?
Memory Permissions
In one timeline, a single general-purpose tool becomes the standard AI across personal & commercial use. In another, a rich ecosystem of tools integrates across every layer of personal life & work. We’re still so laughably early.
In any future, handling context interoperability while maintaining data privacy will be critical. The difference between “AI for work” and “Your AI at work” will come down to access control.
It’s too risky for companies to allow external intelligence imprecise and/or non-revocable access to sensitive company data (passwords, keys, etc.). To deal with this, I expect the leading AI companies will coordinate on a “portable memory” standard that specifies access to sensitive data (memories), whether it belongs to companies or individuals.
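To make that concrete, here’s a rough sketch of what a single grant under such a standard might look like. The schema is entirely hypothetical (no such standard exists today); the point is that access would be scoped, time-boxed, and revocable by the memory’s owner.

```typescript
// Hypothetical shape of a "portable memory" access grant.
// Invented for illustration; not a real standard or product API.

type Sensitivity = "public" | "personal" | "confidential" | "secret"; // e.g. passwords, keys

interface MemoryGrant {
  grantId: string;
  owner: string;               // the individual or company that owns the memories
  grantee: string;             // the external AI being given access
  scopes: string[];            // e.g. ["meetings/2024/*", "notes/work/*"]
  maxSensitivity: Sensitivity; // never expose anything above this level
  expiresAt: Date;             // access is time-boxed by default
  revoked: boolean;            // the owner can revoke at any time
}

function canAccess(grant: MemoryGrant, scope: string, sensitivity: Sensitivity): boolean {
  const order: Sensitivity[] = ["public", "personal", "confidential", "secret"];
  const inScope = grant.scopes.some((s) =>
    s.endsWith("*") ? scope.startsWith(s.slice(0, -1)) : s === scope
  );
  return (
    !grant.revoked &&
    grant.expiresAt.getTime() > Date.now() &&
    inScope &&
    order.indexOf(sensitivity) <= order.indexOf(grant.maxSensitivity)
  );
}
```

Under a scheme like this, revocation is a single flag flip, which is exactly what makes the following scenario plausible.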
Don’t believe me? Try imagining this scenario: you get fired from a job and you can’t access your computer. You don’t remember the password. In fact, you can’t recall what documents you even wanted to export because sensitive “memories” have just been ripped out of your AI (or your brain).
Paradox of Agency
Memory tools can remove the burden of documentation, which is the problem. Some tools are designed to funnel our attention toward careful observation, enabling us to capture the moment with deeper awareness (e.g. film cameras). Other tools encourage carelessness, enabling us to document every moment simply because it's possible (e.g. always-on voice recorders). Constraints incentivize care.
With productivity tools, there’s always a trade-off. In this case, as we delegate more of our memory-making behaviors to technology, we risk weakening our sense of perception & judgment.
“Not only does the process of recalling a memory reinforce its imprint, it also gives us the agency of shaping our own narrative, even if by default as unreliable narrators.”
— Anastasiia Fedorova, Memory Machines
This isn't just speculation. The "Google Effect" shows that when people expect information to be readily available through technology, they're less likely to commit it to memory. Instead, they remember where to find the information (Google, their camera, etc.) rather than the information itself.
Delegating judgment on unimportant decisions is normal, and net-good. For example, I don’t need to spend energy deciding which water bottle to purchase; the internet can figure that out for me.
The Path Forward
My old color-coded notes remind me that choosing what to remember—and how to remember it—shapes how we make sense of our experiences.
As we adopt increasingly pervasive and discerning memory tools, we should be careful to design products that protect the intimate—and effective—aspects of memory-making. Not because AI tools are categorically wrong, but because the act of choosing what to remember is how we express care within a life that is slowly ticking away. Time, as it turns out, is the last finite resource we all have.
In the long run, I believe ambient memory tools will “win” in the sense that they’ll be the most popular way to capture personal data—but I also believe the early interaction patterns we’re seeing in active memory tools will become a first-class feature of ambient ones. Why? Because it turns out selective forgetting and choosing what’s important to remember can also improve AI memory systems (go figure). The simplest form of this interaction: “Hey Jarvis, I want you to remember this moment.”
Said differently: letting AI tools learn everything about you is too economically valuable to pass up (especially since you’re probably terrible at being honest about who you actually are). While bleeding-edge long-term memory architectures will be able to remember everything, you should still denote your important memories. Whether or not you care to protect your agency, do it because it will make your future AI that much more formidable.
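For a feel of how that “remember this moment” interaction might work mechanically, here’s a minimal sketch of an ambient store where an explicit pin exempts a moment from automatic forgetting. All names here (AmbientMemory, pin, the decay constants) are invented for illustration, not drawn from any real product.

```typescript
// Hypothetical ambient memory store where explicit "remember this" flags
// boost retention weight and exempt a moment from automatic forgetting.

interface Moment {
  id: string;
  capturedAt: Date;
  transcript: string;
  pinned: boolean; // true when the user says "remember this"
  weight: number;  // retrieval priority; decays unless pinned
}

class AmbientMemory {
  private moments: Moment[] = [];

  // Ambient capture: everything gets stored, but with a low default weight.
  capture(transcript: string): Moment {
    const moment: Moment = {
      id: crypto.randomUUID(),
      capturedAt: new Date(),
      transcript,
      pinned: false,
      weight: 0.1,
    };
    this.moments.push(moment);
    return moment;
  }

  // Active judgment: "Hey Jarvis, I want you to remember this moment."
  pin(id: string): void {
    const moment = this.moments.find((m) => m.id === id);
    if (moment) {
      moment.pinned = true;
      moment.weight = 1.0; // user-flagged moments dominate retrieval
    }
  }

  // Selective forgetting: unpinned moments decay and eventually drop out.
  decay(): void {
    for (const m of this.moments) {
      if (!m.pinned) m.weight *= 0.9;
    }
    this.moments = this.moments.filter((m) => m.pinned || m.weight > 0.01);
  }
}
```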
If you’re building tools to improve human memory, I encourage you to reach out. I’d love to help you strategize your approach.
Special thanks to Jonathan Wu, David Rosenberg, and Aryan Naik for their feedback on this post.