The Civilizational Risk of Outsourcing Thinking to Artificial Intelligence
Let's go deep on this one, because the shallow version of the AI risk conversation is dominated by either "AI will take your job" or "AI will become sentient and kill us." Both framings miss what is arguably the more likely and more insidious risk: that AI will gradually take over the world's thinking, and we will let it, because it's convenient, and we'll discover the problem only after the cognitive infrastructure needed to solve it has atrophied.
The Cognitive Offloading Literature
Cognitive science has been studying the effects of tool-mediated cognition for decades. The term "cognitive offloading" refers to the process of using external tools to reduce the cognitive work required for a task — and the finding is consistent: when you reliably offload a cognitive function, the internal capacity for that function tends to weaken.
The GPS studies are the clearest example. Research by Eleanor Maguire and colleagues at University College London, later extended by Hugo Spiers's group, showed that London taxi drivers, who memorize the entire street map of London in a process called "The Knowledge," have a measurably enlarged posterior hippocampus, the region associated with spatial memory. People who rely on GPS navigation show no such enlargement, and in longitudinal work, habitual GPS users show declining performance on spatial navigation tasks when the device is taken away.
The mechanism is straightforward: the brain allocates neural resources to capacities that are exercised and withdraws them from capacities that aren't. "Use it or lose it" is not a metaphor. It's how neural plasticity works.
Now extend this to reasoning. What happens to a population that routinely uses AI for writing, analysis, argument construction, and decision support? We don't have 20-year longitudinal data yet; the tools are too new. But we have early signals. A 2023 study of AI-assisted writing found that while AI assistance improved the immediate quality of output, it significantly reduced the writer's ability to produce comparable work without assistance, and the gap widened over repeated use. The tool was not building the underlying skill. It was substituting for it.
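To make this dynamic concrete, here is a toy simulation. It is not a fitted empirical model: every parameter below (learning rate, decay rate, starting skill) is an illustrative assumption, chosen only to show how "substitute versus practice" plays out over repeated use.

```python
# Toy model of skill under cognitive offloading. Every parameter here is
# an illustrative assumption, not an estimate from any study.
import random

def step(skill, practiced, learn_rate=0.05, decay_rate=0.03):
    """One period of skill change: practice builds toward mastery (1.0);
    offloading lets the capacity decay toward zero."""
    if practiced:
        return skill + learn_rate * (1.0 - skill)  # diminishing returns
    return skill * (1.0 - decay_rate)              # use it or lose it

def simulate(offload_prob, periods=200, seed=0):
    """Final skill after `periods` tasks, each offloaded to the tool
    with probability `offload_prob`."""
    rng = random.Random(seed)
    skill = 0.5  # assumed starting competence
    for _ in range(periods):
        skill = step(skill, practiced=rng.random() >= offload_prob)
    return skill

for p in (0.0, 0.5, 0.9):
    print(f"offloading {p:.0%} of tasks -> skill settles near {simulate(p):.2f}")
```

Even this crude model reproduces the pattern the studies report: the more often the task is offloaded, the lower the skill settles, and the gap between the practicing and offloading populations widens with time.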
This is the dependency trajectory. And unlike GPS dependency, which affects your ability to navigate cities, AI reasoning dependency affects your ability to evaluate arguments, identify errors in logic, assess evidence quality, and make decisions. These are not peripheral cognitive skills. They are the core of what it means to be a functioning agent in a complex world.
The Evaluation Problem
Here is the specific structural danger. To evaluate whether an AI's output is good, you need to have some independent capacity to assess that domain. A skilled lawyer can read an AI-generated legal brief and notice when the cited cases don't exist or when the logic doesn't hold. A skilled doctor can read an AI-generated diagnosis and notice when it's missing crucial patient context. A skilled economist can read an AI-generated policy analysis and notice when the model's assumptions are unrealistic.
But if AI becomes the primary tool through which people develop their understanding of these domains, with students learning medicine primarily by interacting with medical AI and lawyers learning law primarily by using legal AI, then the independent evaluative capacity never develops in the first place. You end up with a medical establishment in which the doctors are sophisticated AI users but not independent medical reasoners.
At that point, what happens when the AI has a systematic flaw? Or when it's tuned, intentionally or through training data selection, to produce outputs that favor certain interests? The human population equipped to detect this has been thinned out by the very atrophy the dependency produced.
This is not hypothetical. In high-stakes professional domains, we already see AI systems producing outputs that specialists accept without sufficient scrutiny. Multiple studies of AI-assisted diagnosis have found that radiologists defer to AI recommendations even when the AI is wrong in ways they would have caught on their own, a phenomenon known as automation bias. The AI's confident recommendation degrades the specialist's independent judgment rather than supplementing it.
Scale this across civilization and you have a world in which decisions about medicine, law, policy, finance, and conflict resolution are all mediated through AI systems that most users cannot critically evaluate — and where the population of people who could evaluate them is shrinking because the AI has made that evaluation feel unnecessary.
Who Controls the Reasoning Layer
There's a political economy dimension to this that doesn't get nearly enough attention.
If AI becomes civilization's reasoning layer — the system through which most people form beliefs, make decisions, and evaluate information — then whoever controls the training and deployment of those AI systems controls, in a meaningful sense, the cognitive environment of civilization.
This is not paranoia. It's just the logic of the situation. Training data shapes what the AI treats as credible, what it treats as fringe, what it treats as settled and what it treats as debatable. The framing choices of the humans who design training pipelines are baked into the reasoning the AI performs. Users interacting with the AI get outputs shaped by those framing choices without necessarily being aware of them.
Media ownership was already a significant civilizational concern: concentrating information-producing entities in a few hands lets those hands shape what large populations believe. AI reasoning outsourcing is a more intimate version of the same risk. Instead of shaping what you read, it shapes how you think about what you read. The influence is upstream of content.
The people building today's largest AI systems are a small group, concentrated in a few countries, funded by a small number of investors, shaped by a specific cultural and educational context. That's not a critique of their intentions. It's a structural observation about what happens when reasoning infrastructure is controlled by any small group, regardless of their intentions. Monocultures in any domain are fragile. A cognitive monoculture — where billions of people are essentially using the same reasoning system — is a civilizational concentration of brittleness.
The Skills That Must Not Atrophy
If the risk is reasoning atrophy through AI dependency, the response is precision about which cognitive capacities must be protected and actively developed. Not everything; some offloading is fine. Offloading arithmetic to calculators is probably acceptable. But certain capacities, if atrophied at civilizational scale, produce irreversible consequences.
Premise identification. The ability to notice what assumptions are built into any argument or output before evaluating its conclusions. This is the root of critical thinking and the first thing AI can make unnecessary — if you just accept the AI's framing, you never need to notice the framing at all.
Evidence evaluation. The ability to assess the quality, relevance, and sufficiency of evidence for a claim, independent of who is making the claim or how confidently it is made. AI systems present information in a consistent confidence register that doesn't correlate with epistemic quality. Users who can't evaluate evidence independently can't notice when the AI is confident about something it shouldn't be confident about.
Logical validity assessment. The ability to notice when a conclusion doesn't follow from its premises, when there's a missing step, when the argument's form is invalid even if its content sounds right. AI can produce sophisticated-sounding invalid arguments. A population that can't trace logical structure is a population that can't catch them.
First-principles reconstruction. The ability to work a problem through from basic facts and logic rather than pattern-matching to received conclusions. This is slow, difficult, and increasingly unnecessary with AI assistance — which is exactly why it will atrophy first and cost the most when it's gone.
Metacognitive monitoring. The ability to know what you don't know, to notice when you're uncertain and why, and to calibrate confidence appropriately. AI provides confident outputs regardless of the underlying uncertainty. Users who outsource their reasoning will also outsource their calibration, and will lose the ability to know when they're in unknown territory. Of the five, this is the one capacity you can straightforwardly measure for yourself, as the sketch below shows.
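A minimal sketch of such a self-check, under assumed data: log a stated confidence for each judgment you make, then compare each confidence band to how often you were actually right. Every number below is hypothetical; the point is the procedure, not the values.

```python
# Minimal calibration self-check. The judgment log below is hypothetical;
# substitute your own (stated confidence, was I right?) records.
from collections import defaultdict

records = [
    (0.9, True), (0.9, True), (0.9, False), (0.8, True),
    (0.8, False), (0.7, True), (0.6, False), (0.6, True),
]

# Brier score: mean squared gap between stated confidence and outcome.
# Lower is better; constant 50/50 guessing scores 0.25.
brier = sum((c - float(ok)) ** 2 for c, ok in records) / len(records)
print(f"Brier score: {brier:.3f}")

# Bucket by confidence level: a well-calibrated reasoner is right about
# 90% of the time on judgments made with 90% confidence.
buckets = defaultdict(list)
for c, ok in records:
    buckets[c].append(ok)
for c in sorted(buckets):
    outcomes = buckets[c]
    print(f"stated {c:.0%}: right {sum(outcomes) / len(outcomes):.0%} "
          f"({len(outcomes)} judgments)")
```

If the buckets consistently show accuracy below stated confidence, that is overconfidence, and it is precisely the gap that grows when calibration is outsourced.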
The Right Relationship to AI
None of this is an argument for AI avoidance. That's not a viable position at civilizational scale, and it's not the right frame. The right frame is: what relationship between human reasoning and AI tools produces maximum capability without creating dependency that degrades the human side of the system?
The analogy is physical fitness and transportation. Cars are extraordinary tools. They extend human mobility by orders of magnitude. No reasonable person argues we should abandon them. But a population that never walks, never climbs stairs, never uses its body for physical effort will develop serious physical health problems regardless of how excellent their transportation is. The solution is not "no cars." The solution is deliberate maintenance of the physical capacities that cars don't exercise.
Similarly: use AI for what it's genuinely powerful at — rapid synthesis of large amounts of information, first-draft generation, identification of what you might have missed, testing your reasoning by engaging counterarguments. But deliberately maintain the cognitive capacities that AI outsourcing doesn't exercise. Work problems through yourself before asking AI. Practice reasoning about domains without AI assistance. Regularly evaluate AI outputs against your own independent analysis, even when the AI is probably right.
This is not inefficiency. This is maintenance of the human cognitive infrastructure that makes AI useful rather than colonizing.
The 1,000-Page Manual in an AI World
Here's the specific relevance. The project of this manual, distributing sovereign thinking capacity to every human being, becomes more important, not less, as AI becomes more capable and more widely deployed.
A population of sovereign thinkers uses AI as a tool. They bring genuine questions to it, evaluate its outputs critically, identify its errors, use it to expand the scope of their thinking without replacing the act of thinking. They remain the agents in the system. AI remains the instrument.
A population of non-sovereign thinkers gets colonized by AI. They outsource their beliefs, their analyses, their decisions. They can no longer tell the difference between reasoning and the performance of reasoning. They are, at that point, not thinking — they are consuming AI-generated thinking, which is a category difference that will express itself in every domain of human life, from personal decisions to civilizational choices.
Law 2 — Think — in an AI age means something specific: it means building the internal infrastructure that AI cannot replace. The questions only you can ask, because they emerge from your specific position and experience. The judgment that requires your values and your embodied context. The evaluation that requires you to have actually worked through the problem, not just read a summary of it. These capacities are what make you an agent rather than a consumer. They are what make humanity, collectively, the principal in relation to AI rather than its subject.
That's the civilizational stake. And it's what this manual is working toward, one sovereign thinker at a time.