Think and Save the World

The Difference Between Information And Knowledge


The DIKW Hierarchy

The distinction between data, information, knowledge, and wisdom is formalized in what information scientists call the DIKW hierarchy — Data, Information, Knowledge, Wisdom. Though the framework is debated in epistemological detail, the basic structure is useful:

Data is raw, uninterpreted signal. Numbers without context. Observations without interpretation. "37 degrees Celsius" is data.

Information is data that has been given meaning through relational connection. "37 degrees Celsius is a fever in a human adult" is information — the data point has been contextualized.

Knowledge is information that has been integrated into a framework that enables action or prediction. A doctor who knows what a fever indicates, what its differential diagnoses are, what the clinical trajectory typically looks like, and what interventions are appropriate has knowledge — they can act appropriately based on the information.

Wisdom is the meta-level — knowing when and how to apply knowledge, what principles should guide action across varied situations, what matters most when everything matters. The experienced doctor who knows when to act aggressively and when to watch and wait, who can read the patient as a whole person and not just a set of data points, is exercising wisdom.

Most online discourse operates at the information level while claiming to operate at the knowledge level. This creates a specific kind of intellectual inflation: confident-sounding claims based on surface familiarity rather than genuine understanding.

The Illusion of Explanatory Depth

One of the most studied phenomena in cognitive psychology relevant to this distinction is the "illusion of explanatory depth," a term coined by Leonid Rozenblit and Frank Keil in 2002.

In their studies, participants were asked to rate their understanding of how everyday things work — zippers, toilets, bicycles, cylinder locks. Ratings were moderately high; people felt they understood these common objects. They were then asked to provide a step-by-step explanation of how the object actually works. After attempting this, their ratings dropped dramatically — they had discovered, through the attempt to explain, that they didn't understand the mechanism at all.

The illusion persists because you don't need a deep explanation of how a zipper works to successfully use one. Functional familiarity simulates explanatory knowledge. When the need to actually explain arises, the gap becomes visible.

The political version of this is important. Philip Fernbach and colleagues extended this research to policy positions and found the same effect: people rated their understanding of complex policies (healthcare, carbon tax, flat tax) as reasonably high. When asked to explain the mechanisms, their understanding collapsed. After experiencing this, participants moderated their policy positions — they became less extreme and more uncertain. Explanatory depth is a moderating force on confidence.

This has a practical implication: if you want to know whether you actually understand something, try to explain it. Not summarize it — explain the mechanism. If you cannot, you have information, not knowledge.

The Neuroscience of Learning: What Integration Actually Requires

Encoding vs. Storage

Cognitive neuroscientists distinguish between encoding (the initial processing of new information), consolidation (the process by which memories are stabilized), and retrieval (accessing stored information). Much of what passes for learning in the information consumption model is shallow encoding — the material is processed briefly and stored in a fragile, short-term form.

For information to become accessible, durable knowledge, it needs to go through consolidation, which requires time (literally — there are biological processes that occur during sleep that are central to memory consolidation), repetition, and ideally, active retrieval. The practice of actively recalling information from memory — rather than re-reading it — dramatically improves both the durability of storage and the accessibility for future retrieval.

This is the basis of the "testing effect" or "retrieval practice effect" — one of the most robust findings in educational psychology. Students who study material by recalling it from memory outperform students who study by re-reading the material, even when the re-readers spent more time studying. The act of retrieval strengthens the memory trace; passive re-exposure does not. Consuming is passive. Knowledge requires active retrieval and engagement.

Schema Formation

Cognitive schemas are organized mental structures — frameworks that organize how you understand and interact with a domain. Expert knowledge is characterized not by having more isolated facts stored than novices, but by having richer, more interconnected schemas — patterns that allow rapid recognition, efficient problem-solving, and the ability to see what novices cannot.

Adriaan de Groot's classic studies of chess expertise, later extended by William Chase and Herbert Simon, are illustrative. Expert chess players and novice players were shown chess positions for a few seconds and then asked to reconstruct them from memory. Experts dramatically outperformed novices — but only when the positions were from real games. When the pieces were placed randomly, the difference between experts and novices nearly disappeared. The experts weren't remembering individual pieces. They were recognizing meaningful patterns that novices couldn't see.

This is what knowledge actually is: pattern recognition built from deep engagement with a domain. You cannot shortcut your way to it with information exposure. The patterns emerge from experience — from working in the domain, making predictions, being wrong, revising, over time. Information can inform this process. It cannot replace it.

Desirable Difficulties

Robert Bjork's research at UCLA introduced the concept of "desirable difficulties" — learning conditions that feel harder in the short term but produce more durable, transferable knowledge in the long term. These include:

- Spaced practice (distributing learning over time rather than massing it)
- Interleaving (mixing different types of problems rather than practicing one type to mastery before moving on)
- Retrieval practice (testing yourself rather than re-reading)
- Generation (producing answers rather than recognizing them from options)

All of these are uncomfortable. They feel like they're not working, because performance during practice is lower than with massed, repeated, recognition-based study. But long-term retention and transfer are dramatically better. The implication: the feeling of learning — the fluency and ease of re-reading familiar material — is inversely related to the actual learning occurring.
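Two of these difficulties — spaced practice and retrieval practice — are concrete enough to sketch in code. The following is a toy Leitner-style scheduler (the box intervals and names are illustrative assumptions, not a prescription from the research): a successful retrieval promotes a card to a less frequent review box, and a failure demotes it back to the most frequent one.

```python
from dataclasses import dataclass

# Box number -> days until next review. Intervals are illustrative;
# real systems tune these empirically.
INTERVALS = [1, 2, 4, 8, 16]

@dataclass
class Card:
    prompt: str
    answer: str
    box: int = 0  # box 0 is reviewed most frequently

def review(card: Card, recalled_correctly: bool) -> int:
    """Update the card's box after a retrieval attempt and
    return the number of days until its next review."""
    if recalled_correctly:
        # Promote, capping at the least-frequent box.
        card.box = min(card.box + 1, len(INTERVALS) - 1)
    else:
        # Failed retrieval: back to frequent review.
        card.box = 0
    return INTERVALS[card.box]

card = Card("What is retrieval practice?",
            "Recalling from memory rather than re-reading")
print(review(card, True))   # promoted to box 1 -> next review in 2 days
print(review(card, True))   # promoted to box 2 -> next review in 4 days
print(review(card, False))  # failed -> demoted to box 0, review tomorrow
```

The design encodes the point of the section: the schedule deliberately makes recall harder (longer gaps) precisely when recall is succeeding, trading short-term fluency for long-term retention.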

Easy information consumption feels like learning. It isn't. Hard engagement that produces friction, confusion, and error is where actual knowledge formation happens.

The Information Flood: How We Got Here

The internet did not create the distinction between information and knowledge. But it made the gap catastrophic in scale.

Before digital media, information was scarce and access to it required effort. The effort itself functioned as a filter: if you traveled to a library, found the book, and read it cover to cover, you were likely to engage with the material deeply enough to integrate some of it. The effort selected for engagement.

Digital media removed almost all friction from information access. The result was a massive increase in information consumption paired with a much smaller increase in actual knowledge formation. Clay Shirky's "cognitive surplus" thesis argued that the internet would unlock enormous collective intelligence from the time previously spent on passive TV consumption. This has partially come true — there are extraordinary communities of people building real knowledge together online. But the more common outcome has been substitution of low-quality information consumption for low-quality television consumption, without the hoped-for increase in knowledge formation.

The recommendation algorithms of social media have compounded the problem by optimizing for engagement, which selects for emotional content, novelty, and confirmation of prior beliefs — none of which are properties that promote knowledge formation. You can spend hours daily on social media and end up knowing less — not just relatively but absolutely — than when you started, because the misinformation absorbed and the confusion generated outweigh any genuine learning.

Neil Postman argued in "Amusing Ourselves to Death" (1985) that the medium of television had transformed public discourse from argument (sustained, written, evidence-based) to performance (emotional, visual, episodic). Social media has intensified this transformation. The epistemics of Twitter are not compatible with knowledge formation: claims are made without justification, positions are stated without premises, complexity is compressed into takes. This produces a sense of intellectual engagement while preventing the conditions under which knowledge actually forms.

The Path from Information to Knowledge

The conditions that convert information into knowledge are well understood. They require:

Sufficient time with the material. This is not romantic — it's biological. Memory consolidation takes time. Pattern recognition emerges from repeated exposure over time. The insight that comes to you in the shower three days after reading something is not coincidental; it's the result of background processing that couldn't have happened faster. Fast consumption prevents this.

Active engagement, not passive reception. Reading slowly and taking notes is not slower than reading quickly without notes in terms of knowledge formation — it's faster, because the engagement during reading dramatically improves encoding. Writing forces a level of active processing that passive reading does not. Questions, objections, connections — all of these are cognitive operations that deepen encoding.

Application to genuine problems. Knowledge stabilizes through use. If you learn a principle and then immediately have a real situation where you apply it, test it, find where it works and where it doesn't, your understanding of that principle becomes both more accurate and more durable. The knowledge is now embedded in experience, not just stored as an abstraction.

Revision in light of failure. Your initial understanding of anything is almost certainly incomplete or wrong in some ways. The process of discovery — encountering a case where the principle breaks down, a prediction that doesn't come true, an application that doesn't work — and then revising your understanding is not a sign of poor initial learning. It is the learning. Expert knowledge is full of scar tissue from where the initial model was wrong. Novice "knowledge" (really, information dressed up as knowledge) doesn't have this texture because it hasn't been tested.

Explicit connection-making. Isolated facts are brittle. Knowledge that is embedded in a network — connected to other things you know, placed in a framework, linked to examples from different domains — is far more durable and useful. The second brain concept is the practical infrastructure for this. The practice of explicitly linking new learning to what you already know is the cognitive work of integration.
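The linking practice can be sketched as a toy data structure (the class and method names are illustrative, not a reference to any particular note-taking tool): notes are stored by title, and every connection is recorded in both directions, so related material is discoverable from either side.

```python
from collections import defaultdict

class NoteGraph:
    """Minimal sketch of a linked note store."""

    def __init__(self):
        self.notes = {}                 # title -> body text
        self.links = defaultdict(set)   # title -> set of linked titles

    def add(self, title, body):
        self.notes[title] = body

    def link(self, a, b):
        """Record an explicit connection between two existing notes."""
        if a not in self.notes or b not in self.notes:
            raise KeyError("add both notes before linking them")
        self.links[a].add(b)
        self.links[b].add(a)  # bidirectional: integration runs both ways

    def related(self, title):
        return self.links[title]

brain = NoteGraph()
brain.add("testing effect", "Retrieval strengthens the memory trace.")
brain.add("desirable difficulties", "Harder practice, better retention.")
brain.link("testing effect", "desirable difficulties")
print(brain.related("testing effect"))  # {'desirable difficulties'}
```

The bidirectional link is the design choice that matters: a fact filed once but reachable from many directions behaves like the interconnected schema described above, rather than like an isolated entry.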

The Stakes

The distinction between information and knowledge matters personally. The gap between what people think they know and what they actually know underlies a great many bad decisions — personal, professional, and political.

It matters civilizationally. The problems that actually threaten human welfare — climate, pandemics, nuclear proliferation, AI development, food systems — are phenomenally complex. Addressing them requires genuine, deep knowledge from many people across many domains, working together with sufficient intellectual humility to know the limits of their own understanding. What we have instead is an information-flooded public that is extremely confident about extraordinarily complex problems they have only encountered in their social media feeds.

The information-knowledge gap is not merely an intellectual failure. It is a governance failure. You cannot make good collective decisions about complex systems with populations operating primarily on information consumption and familiarity-as-understanding. The appearance of being informed is not the substance of being informed, and the stakes of confusing them at scale are very large.

The first discipline is personal: being honest about which category you're in, on any given subject. You've read about it. That's not the same as knowing it. The courage to maintain that distinction — against the social pressure to seem knowledgeable, against your own desire for certainty — is the foundation of genuine thinking.

Everything else builds from there.
