Apr 14, 2026
🌿 Sprout

There’s a failure mode in knowledge management that is as old as knowledge management itself. Think of abandoned “read later” tools, commonplace notebooks filled with highlighted passages, Zettelkastens that became graveyards of unfiled ideas.
The failure mode is this: people work backwards from the ideal system rather than forwards from thought.
Somewhere between the intention to think more clearly and the work of actually doing it, the apparatus of knowledge management substitutes for the thing it was supposed to serve. We get elaborate systems that perform the idea of thinking without requiring much of it. And you? You no longer use the system to think; you think about how to tend the system.
I’ve been mulling this over lately because AI tools are enjoying a renaissance in the world of personal knowledge management. I think every generation gets the knowledge management technology it deserves, and every generation finds a way to perform with it rather than think with it.
What makes this moment different is that AI is the first tool powerful enough to perform entirely on your behalf. This makes the temptation almost irresistible, and the stakes of resisting it much higher.
The garden as a philosophy
The digital garden is a distinct philosophy for publishing and interacting with personal knowledge, privately or publicly. Conceptually popularised by designers, anthropologists, and researchers such as Maggie Appleton, it is defined as a collection of evolving ideas that are inherently exploratory, imperfect, and non-linear.
At its core, digital gardening is a rejection of performative chronology.
Traditional blogging structures information by publication date, implicitly demanding that each entry be a final, polished articulation of thought. The digital garden actively rejects this paradigm. Notes and essays within a garden are published as half-finished thoughts that will grow and mature over time. They are not incomplete, much like a seed is not an incomplete plant. The organisational architecture relies on contextual associations and bidirectional hyperlinks rather than rigid taxonomies or hierarchical folders. It is, in essence, a web of information where nodes and silvery threads of connection form over time. It mirrors the way the human brain works, allowing ideas to cluster based on context and conceptual proximity.
The idea is not just to store information, but to cultivate wisdom over a lifetime.
This is why digital gardening is intentionally friction-forward. It is deeply tied to broader societal movements advocating for intentional deceleration. We have slow food and slow fashion, which prioritise quality, sustainability and intentionality over mass production. In the same vein, slow gardening represents a deliberate choice to engage deeply with one's material. We owe a lot to transcendentalist thinkers like Henry David Thoreau, who wrote much about the intimacy we can develop when we interact manually, deliberately, with our environments.
The friction inherent in manually linking thoughts and synthesising disparate pieces of information is not a flaw; it is the core mechanism by which learning and cognitive retention happen. But friction alone is not the point. The point is what the friction demands of you: that you engage with ideas, draw your own parallels and maps, and create meaning for yourself.
The performance of knowledge curation
Instead of starting from individual notes that connect organically into a web of personal knowledge, many people mine references from everywhere without stopping to ask why they want to save something: its context, personal significance, potential use and evergreenness. By refusing to engage with the material beyond skimming, then handing everything that follows to the tool itself, knowledge absorption becomes a performance rather than a practice. Cogito, ergo sum takes a backseat.
This has been especially relevant ever since you could clone entire knowledge management setups from a GitHub repository in under a minute, complete with their folder hierarchies, tagging taxonomies, note templates, and automation flows. You can watch a YouTube video of a beautifully architected Obsidian vault, fork it wholesale, and begin populating it with notes, all thanks to someone else’s generosity.
And sure, the system looks right. It has the right aesthetic, the right labels, the right folders. But it was built for someone else's brain, reading habits and personal context. It feels productive. It isn’t. To really make digital gardening work for you, you have to know yourself.
When you read, do you move linearly, skim, or jump between concepts? How often do you return to saved material? Do you think better by writing long-form, or by accumulating fragments that eventually collide? What does "useful" even mean for your specific context — are you a researcher, a writer, a builder? How are you hoping a personal knowledge base will change how you think and how you work?
This introspective work can be unglamorous and not very pleasurable. Unlike cloning a repository, it produces nothing immediately visible. But it is the only honest starting point for a knowledge system that will actually serve you and grow with your thinking. Your garden may fill up faster on borrowed structures. But if those structures were borrowed before you understood what you personally needed, then you are tending someone else’s land.
AI has become this phenomenon’s most powerful amplifier, because it lands on top of a growing social incentive to adopt AI for its own sake, to scramble to find a use for it in our work and lives. The social cost of not using AI feels higher than the cost of using it badly. Perfectly serviceable systems now seem archaic (or counter-cultural, depending on how you present them) and find themselves being stack-ranked against shiny new automated systems. In my circles, the larger trend seems to be people reaching for AI as a solution before they've even honestly identified the problem. I think that is precisely the backwards thinking that produces bad knowledge systems in the first place.

Cartoon by Adam Douglas Thompson for The New Yorker, November 14, 2023
AI as co-gardener
When used well, AI can be a co-gardener: an active participant in tending the garden that you planted. Large language models are better understood as advanced language calculators than as autonomous thinkers; they compute over language the way a calculator computes over numbers. The goal is to keep the human in the director's chair: setting the strategic direction, getting AI to handle the computational lifting, and retaining full agency over the narrative and the act of sense-making. This is often called centaur notetaking: the centaur is powerful because of what each half contributes distinctly.
In practice, this looks like using AI to chain together small tasks: ordering paragraphs, cleaning up messy data, reverse outlining to untangle a chaotic draft. It can engineer semantic serendipity: when your knowledge base has grown large enough that you've genuinely forgotten what lives in it, AI can surface a note you wrote two years ago that turns out to be relevant to something you're working on today. In these cases, AI is functioning as an archaeological tool, brushing away sediment to reveal something you already grew. The connection it surfaces is a suggestion, not a conclusion. You still have to decide if it means anything.
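To make that "semantic serendipity" concrete, here is a minimal sketch of similarity-based note retrieval. The bag-of-words embedding and the sample notes are hypothetical stand-ins; real tools use dense neural embeddings from a model, but the retrieval logic is the same: the machine ranks notes by how close their coordinates sit to the query's, and a human still has to decide whether the nearest note actually means anything.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: a sparse bag-of-words
    # vector. Dense neural embeddings would replace this in practice.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity: how close two vectors point, ignoring length.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical notes from an imagined garden, keyed by filename.
notes = {
    "2024-03-craft": "slow craft and the intimacy of making things by hand",
    "2023-11-zettel": "zettelkasten linking notes by context not hierarchy",
    "2022-07-thoreau": "thoreau on deliberate living and manual work",
}

query = "working by hand deliberately and slowly"
q = embed(query)
ranked = sorted(notes, key=lambda k: cosine(q, embed(notes[k])), reverse=True)
print(ranked[0])  # → 2024-03-craft (closest coordinates, not deepest meaning)
```

The surfaced note is a suggestion ranked by word overlap, nothing more; deciding whether the two-year-old note is genuinely relevant remains the gardener's job.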
Knowledge curation ≠ Knowledge internalisation
The deeper problem, as I've argued, long predates AI. But AI raises the stakes considerably.
I’ll take the example of drawing connections between ideas. When you personally connect two notes, that connection is imbued with the subjective, phenomenal intent that drove you to make it. You know why it matters to you emotionally, historically, logically. When an AI auto-links two notes through vector similarity, on the other hand, it is simply matching mathematical coordinates. You get the destination without the navigation, repeatedly. And if the navigation is always done for you, your personal capacity to navigate atrophies. While the origin of a connection doesn't determine the depth of the engagement that follows, an auto-generated link is only as valuable as the engagement you bring to it afterward.
I think what makes the AI trap specifically insidious, as opposed to older versions of this failure, is that a neglected notebook remains neglected. An AI-powered knowledge base that you've stopped engaging with can keep going without you. It ingests information, tags, inter-links, and summarises. It performs cognition, and it’s all too easy to mistake that performance for your own.
This creates a false sense of security, because knowledge has visibly been produced, but it has not been internalised. It remains of no use to you until you consciously put it to work: in writing, reading, speaking, working. Remember that the whole point of digital gardening is not to hoard information but to make use of it, to develop wisdom over a lifetime.
Gardening on sovereign soil
Ultimately, the garden is an intimate space. It is imperfect, contextually bound, and where we do our most vulnerable, messy thinking. It is an act of intellectual self-knowledge. So a digital gardener needs to constantly balance the desire for the automated efficiency of an AI-powered knowledge-building world against the intellectual intimacy gained from manual “slow gardening” of a personal canon.
The threats to digital gardening are still the same: carelessness, laziness, the hunger for the appearance of productivity. I think AI simply makes those existing tendencies more consequential, and therefore makes the rewards for resisting them greater.
AI, when used as a localised semantic trowel rather than a generative bulldozer, can help us cultivate wisdom over a lifetime. It can help us tend the soil. But the wild, beautiful, and deeply human act of growing the garden remains entirely up to us.
