Too Many Cooks
The Tyranny of Infinite Insight
Suddenly, we all have our own dedicated personal sages. They’re doing more work for us than we’ve ever done for ourselves. Yet we’re working harder than ever to keep up.
Our AI sages are cheap, deep, and arguably smarter than we are. They’re infinitely verbose and occasionally, insidiously, authoritatively unreliable. We are becoming more informed, but at a pace our biology can’t keep up with.
We are inundated with help. Like the Sorcerer’s Apprentice, we order our silicon brooms to fetch us insight, and they oblige with the prolixity of AI-choreographed megawatts, leaving us frantically bailing out the castle, drowning in labor-saving magic.
The Old Guard: Throttled by Friction
Historically, we navigated complex realms like Taxes, Law, and Medicine with a heuristic mix of acquired knowledge and chosen experts. We read a few articles, hired a specialist, and made a decision.
Our research was throttled by friction. We stopped digging because digging was hard. We were content to browse WebMD but rarely attempted to parse a research paper in JAMA. We accepted the boundaries of our understanding because crossing them was too expensive in time, money, and attention.
Ignorance was not just bliss; it was crucial to getting things done.
The New Reality: The Infinite Rabbit Hole
This dynamic has been upended by Large Language Models (LLMs). These engines don’t just answer questions; they model complexity. They strip away the friction that used to limit our worrying.
It’s year end, and I have an estate-planning tax-optimization quandary.
The Human Expert: I pay my advisor $700. He runs a projection with baked-in assumptions, gives me two options, and explains them simply. If I push for more “what-if” scenarios, his patience (and my budget) runs out. The constraint forces a decision.
The AI Sage: I ask Gemini. It costs pennies. It generates sophisticated options, argues the pros and cons, and creates a decision tree that expands fractally. It is willing to dumb it down and explain it all to me with infinite patience.
“Explain it to me as you would to a child. Or a golden retriever.” (Margin Call, 2011)
The rabbit hole goes as deep as I can chase it. I ask for a projection; it gives me one. I ask for a counterfactual; it provides three. I need to verify its logic to ensure it isn’t hallucinating, which means I have to audit the thinking of a machine that never sleeps.
In the middle of this multi-day tax project, ChatGPT 5.2 drops. So there’s a new oracle on the block, which I also have to consult. It too offers prolix perspectives, and builds me a massive custom spreadsheet rivaling a commercial product but without the guardrails and simplifications that make a mature, human-designed product usable.
The battling sages have dumped me into a high-fidelity pit. I’m drowning in facts but lack the mental scaffolding that cognitive scientist Herbert Simon argued was the definition of expertise. I don’t know what the whole puzzle looks like; I just have too many pieces.
The Centipede’s Dilemma
In Katherine Craster’s poem, a centipede’s introspection on how to walk disables it:
A centipede was happy quite,
Until a toad in fun
Said, “Pray, which leg comes after which?”
This raised her mind to such a pitch,
She lay distracted in the ditch
Considering how to run.
LLMs make us the centipede. By exposing the infinite nuance of every decision—from tax code to email phrasing—they force us to process complexity we used to gloss over.
The Map is the Territory
In Jorge Luis Borges’ 1946 fragment, On Exactitude in Science, an empire obsessed with rigorous cartography creates a Map whose size is that of the Empire, coinciding point for point with it.
The Map is perfect, and useless. It offers no abstraction, no reduction, no guidance. It duplicates the complexity of reality. Borges ends the story by noting that succeeding generations, less obsessed with the study of cartography, abandoned the vast Map, leaving its tattered ruins to be inhabited by animals and beggars.
LLMs are willing to make these maps for us every day. By giving us all the variables and edge cases, they overload our ability to navigate the territory, leaving us to wander through a digital wilderness of our own making.
Surviving the Superpower
We are at the start of this era, learning to cope with pocket sages. They grant superpowers, but demand a new discipline.
The challenge is no longer access to truth; it is the truncation of it. We must learn to artificially impose the limits that friction used to provide. We have to learn when to stop asking the oracle, not because it has run out of answers, but because we have run out of the ability to keep up with them.
How are you coping with your new superpowers? Tell us in the comments!
Marc Meyer is a Silicon Valley CEO Coach and Advisor. His background is as a technologist, founder (6 startups, 4 exits, 1 IPO), engineer, executive, investor, teacher and corporate advisor. He has invested in and advised over 200 companies. He advises and works with accelerators and funds including Alchemist, 500 Startups, HBS Alumni Angels, and Berkeley SkyDeck, where he co-chairs the Advisor Council. His Executive Coaching and Advising practice helps leaders achieve their greatest potential.




Interesting observations. Is it because we doubt the LLM's answers that we end up exploring the rabbit holes? After all, we don't ask our doctor what makes them sure that the knee pain is arthritis and not a rogue tumor. Perhaps it is the lack of credibility that is the issue here? If it were in fact truth we were accessing, we would accept it as such and move on.