Think and Save the World

How Algorithms Sort Us Into Tribes — And How To Resist


The Mechanism, Precisely

To resist algorithmic sorting, you must understand exactly what the algorithm is doing. It is not doing what most users intuit — it is not simply showing you more of what you like. It is doing something more sophisticated and more insidious.

Modern recommendation systems are trained to predict a specific behavioral outcome, typically expressed as some combination of time spent on the platform, click-through rate, and the likelihood of a comment, share, or reaction. These are the behavioral signals that advertisers pay for (more time means more ads seen) and that platforms can measure at scale.

The training process is as follows: the system predicts, for each user, which content from an available pool will produce the target behavioral outcomes. It serves content based on these predictions. It measures the actual outcome. It updates its predictions. Over billions of iterations across billions of users, the model becomes very good at predicting what content generates the target outcomes for each user profile.
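
To make the loop concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the item features, the user's hidden "reflexive response" function, and the simple logistic scorer are stand-ins, not any platform's actual system. What matters is the shape of the loop: predict, serve, measure, update.

```python
# Toy sketch of an engagement-optimization loop. All features, rates,
# and the "true" user response are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

N_ITEMS, N_FEATURES = 50, 8
items = rng.normal(size=(N_ITEMS, N_FEATURES))   # hypothetical content features
user_taste = rng.normal(size=N_FEATURES)          # hidden reflexive-response profile

weights = np.zeros(N_FEATURES)                    # the model's learned scoring weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def true_engagement_prob(item):
    # Ground truth the platform never observes directly: the probability
    # this user reflexively engages (clicks, shares, reacts).
    return sigmoid(item @ user_taste)

LEARNING_RATE, EXPLORE = 0.1, 0.1
for step in range(20_000):
    # 1. Predict which item will generate the target behavior.
    scores = sigmoid(items @ weights)

    # 2. Serve the top-predicted item (with a little exploration so the
    #    model also sees outcomes for other items).
    chosen = int(rng.integers(N_ITEMS)) if rng.random() < EXPLORE else int(np.argmax(scores))

    # 3. Measure the actual outcome: did the user engage or not?
    engaged = rng.random() < true_engagement_prob(items[chosen])

    # 4. Update the prediction toward the observed outcome (one SGD step
    #    on the log-loss for the served item).
    weights += LEARNING_RATE * (float(engaged) - scores[chosen]) * items[chosen]

# With enough iterations the model tends to rank highest whatever most
# reliably triggers the reflexive response, not necessarily what the user
# would, on reflection, want to see.
top = int(np.argmax(items @ weights))
print("top-ranked item:", top,
      "| its true engagement probability:", round(float(true_engagement_prob(items[top])), 3))
```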

What has the model learned to predict? Not preferences in any rich sense — not "what this person would, in a considered moment, want to see." The model has learned to predict what triggers the reflexive behavioral response that the training signal rewards. Because the training signal is engagement — time, clicks, reactions — the model has learned to identify content that triggers reflexive engagement.

Researchers have found that emotional content consistently outperforms neutral content on these metrics. Content that activates moral outrage is among the highest-performing on engagement metrics. Content that generates fear or anxiety keeps people on the platform longer than content that generates calm satisfaction. The model does not know these things in any conscious sense. It has simply learned the empirical regularity: moral outrage works.

The MIT study by Vosoughi, Roy, and Aral (2018), published in Science, found that false information spread roughly six times faster than true information on Twitter and was 70 percent more likely to be retweeted. This finding is explicable through the same mechanism: false information tends to be more novel and more emotionally activating than true information, and novelty plus emotional activation is what engagement optimization rewards.

The algorithm is not selecting for falsehood. It is selecting for emotional activation, and false information happens to be more emotionally activating than true information. The selection pressure for misinformation is a side effect of the selection pressure for engagement.

The Tribal Sorting Process

The sorting into tribes is an emergent property of this optimization, not a deliberate design goal. Here is the mechanism:

Users who are predisposed to partisan content engage more with partisan content. The model learns this association. It therefore shows them more partisan content, which generates more engagement, which updates the model further. The feedback loop strengthens over time. Users who initially had moderate or ambivalent political attitudes, but who showed any early engagement with partisan content, are progressively sorted into more extreme content buckets as the model follows the engagement signal.
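
A toy illustration of that feedback loop, under invented assumptions: two content buckets and a user who is only slightly more responsive to one of them. A greedy recommender that mostly serves whatever it currently predicts will engage best, with a little exploration, ends up filling the feed with the bucket that has the edge.

```python
# Toy feedback loop over two content "buckets". The bucket names and
# engagement rates are invented for illustration only.
import random

random.seed(1)

TRUE_RATE = {"neutral": 0.10, "partisan": 0.14}  # hidden per-bucket engagement rates
EXPLORE = 0.1                                    # fraction of impressions served at random

counts = {b: 0 for b in TRUE_RATE}
engagements = {b: 0 for b in TRUE_RATE}

def estimated_rate(bucket):
    return engagements[bucket] / counts[bucket] if counts[bucket] else 0.0

feed = []
for impression in range(50_000):
    # Mostly serve whatever the model currently believes engages this user
    # best; occasionally explore.
    if random.random() < EXPLORE:
        bucket = random.choice(list(TRUE_RATE))
    else:
        bucket = max(TRUE_RATE, key=estimated_rate)

    engaged = random.random() < TRUE_RATE[bucket]
    counts[bucket] += 1
    engagements[bucket] += engaged
    feed.append(bucket)

# Early in the run the feed is mixed; as the estimates sharpen, the
# higher-engagement bucket comes to dominate.
print("partisan share, first 1,000 impressions:", f"{feed[:1000].count('partisan') / 1000:.0%}")
print("partisan share, last 1,000 impressions: ", f"{feed[-1000:].count('partisan') / 1000:.0%}")
```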

This process has been documented internally at Facebook and leaked through whistleblower disclosures. In 2018, Facebook conducted internal research showing that its algorithm was amplifying divisive political content — and then faced internal pressure about what to do with these findings. The researchers proposed "integrity fixes" — modifications to reduce divisiveness signals — but found that these modifications reduced engagement, creating a conflict with business objectives.

The internal documents, later published through the Facebook Papers and Frances Haugen's disclosures, show that Facebook's researchers knew what was happening and found themselves unable to fix it without business consequences. The algorithm was functioning as designed. The design produced division.

The tribal sorting is compounded by another mechanism: the social graph. Social network platforms encourage users to connect with people they know. The connections people have in real life are already partly tribal — people tend to know people like themselves, live in neighborhoods with people like themselves, and belong to institutions with people like themselves. The platform's social graph formalizes and amplifies these real-world social clusters. Then the content algorithm, optimizing for engagement, surfaces the most emotionally activating content from within these already-clustered networks.

The result is that users receive high-activation content that is reinforced by their social network — it comes from people they trust and identify with, not from strangers. This combination of social trust and emotional activation is particularly effective at cementing tribal attitudes.

What the Research Actually Shows

The empirical literature on algorithmic sorting and its effects is more complex than the popular narrative suggests, and it is important to engage with the complexity rather than oversimplify.

The filter bubble hypothesis, as Pariser formulated it, predicted that personalization would expose people primarily to content that confirmed their beliefs. The empirical evidence for this strong version is mixed. Studies of Facebook's content, including one conducted by Facebook researchers and published in Science in 2015, found that the algorithm did reduce exposure to cross-cutting content — but so did individual users' own choices. When users were shown content that might cross their partisan lines, many scrolled past it. The algorithm was filtering, but so were the humans.

A more recent and more comprehensive study by researchers at New York University (2023) used actual browsing data from a large panel of users with their consent, giving them access to what users actually saw and clicked on across the web. They found that social media use was associated with more exposure to partisan news — not through algorithmic restriction but through the structure of sharing behavior. People share partisan content with their networks, so social media channels partisan content even without heavy algorithmic filtering.

However, a set of experiments conducted in collaboration with Meta in 2023 — published across four papers in Science and Nature — found that changing the Facebook feed from algorithmic to chronological did not significantly reduce polarization or misperception of the other side. This was a surprising result that complicated the "algorithm causes polarization" narrative.

The honest reading of the literature: algorithmic optimization contributes to the problem but is not the sole cause. The underlying social dynamics — selective sharing, homophilous networks, motivated reasoning — would produce some degree of tribal sorting even with neutral algorithms. The algorithms amplify tendencies that already exist. The amplification is real and measurable. The magnitude is disputed.

None of this changes the fundamental analysis: systems optimized for engagement select for emotionally activating content, and emotionally activating content is disproportionately divisive. The algorithm may not be the only cause of polarization. It is certainly not helping.

Individual Resistance

Knowing the mechanism, what can an individual do?

Chronological feed. Twitter/X, Instagram, and Facebook all offer chronological feed options. Switching to chronological breaks the algorithmic selection process. You see what was posted recently by people you follow, not what the model predicts will activate you most. This reduces the optimization pressure on your feed but does not eliminate it — you still follow the accounts you follow.

Source diversification outside platforms. The most robust resistance is building information habits that do not depend on algorithmic curation: subscribing to newsletters, reading long-form journalism, following RSS feeds from specific outlets, listening to podcasts from people whose thinking you respect across political lines. These sources are not optimized for your engagement in real time.

Following people who think differently. Algorithmic feeds can be partly gamed by training them with cross-cutting content — if you engage with thoughtful conservatives as well as thoughtful liberals, the model learns to show you both. This requires deliberate choice and ongoing effort against the model's drift toward your existing engagement patterns.

Friction. Any practice that adds friction to sharing reduces the viral spread of low-quality emotional content. Looking for the original source before retweeting something that makes you angry, checking whether the quote is complete, asking whether the context changes the meaning — these are individually low-cost practices that would, if generalized, substantially reduce the volume of misinformation in your network.

Scheduled access rather than ambient scrolling. The algorithmic optimization is most effective on ambient, habitual platform use — checking your phone whenever there's a slack moment. Restricting social media use to scheduled periods reduces the total time the algorithm has to shape your information environment.

None of these are complete solutions. They are harm reduction. The underlying system is still operating, and it is reaching the billions of people who are not deliberately managing their information environment.

Structural Interventions

At the platform and regulatory level, several interventions would reduce algorithmic sorting's effects:

Algorithmic transparency. Currently, users have no visibility into why they are seeing what they see. Requiring platforms to disclose the factors that influence content selection — and to allow users to inspect and modify these factors — would give users agency. The EU's Digital Services Act moves in this direction, requiring very large platforms to disclose the main parameters of their recommender systems and to offer at least one feed option that is not based on profiling.

Alternative optimization targets. Platforms could optimize for different outcomes. Twitter's early "show best tweets first" was an experiment in relevance rather than engagement optimization. Some platforms are experimenting with "time well spent" metrics — asking users whether they felt their time was well used — rather than raw engagement. Bridging-based ranking, developed by researchers at Twitter and now implemented in modified forms on some platforms, explicitly rewards content that generates engagement across partisan divides rather than within them.
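
The core of the bridging idea can be sketched in a few lines. This is an illustrative scoring rule with made-up numbers, not any platform's production algorithm: an item scores well only if every group engages positively with it, so the weakest group's rate caps the score.

```python
# Illustrative comparison of engagement-based vs. bridging-based ranking.
# Item names and per-group engagement rates are hypothetical.
items = {
    "outrage_post_A":   {"left": 0.40, "right": 0.02},
    "outrage_post_B":   {"left": 0.02, "right": 0.40},
    "explainer_post_C": {"left": 0.15, "right": 0.14},
}

def engagement_score(rates):
    # Engagement-optimized ranking: average engagement across groups.
    return sum(rates.values()) / len(rates)

def bridging_score(rates):
    # Bridging-based ranking: the weakest group's engagement caps the
    # score, so only cross-cutting content ranks highly.
    return min(rates.values())

print("ranked by engagement:", sorted(items, key=lambda k: engagement_score(items[k]), reverse=True))
print("ranked by bridging:  ", sorted(items, key=lambda k: bridging_score(items[k]), reverse=True))
```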

User-controlled algorithmic settings. Rather than hiding the algorithm, allowing users to explicitly set optimization targets — "show me content that challenges my views," "prioritize accuracy over engagement," "deprioritize political content" — would give users meaningful control. The technical capacity to do this exists; it is a business model choice not to offer it.
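
A sketch of what that control could look like, with hypothetical signal names and example values: the user sets explicit weights over per-item signals, and the feed is ordered by the weighted sum rather than by predicted engagement alone.

```python
# Sketch of user-controlled ranking. Signal names, weights, and example
# items are hypothetical, not any platform's actual settings.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_engagement: float   # how likely the user is to react
    predicted_accuracy: float     # e.g. a source-quality or fact-check signal
    is_political: bool
    challenges_user_views: float  # estimated cross-cutting exposure

# "Prioritize accuracy over engagement, deprioritize political content,
# show me content that challenges my views."
user_weights = {
    "predicted_engagement": 0.2,
    "predicted_accuracy": 1.0,
    "is_political": -0.5,
    "challenges_user_views": 0.6,
}

def user_score(item: Item) -> float:
    return (user_weights["predicted_engagement"] * item.predicted_engagement
            + user_weights["predicted_accuracy"] * item.predicted_accuracy
            + user_weights["is_political"] * float(item.is_political)
            + user_weights["challenges_user_views"] * item.challenges_user_views)

feed = [
    Item("Rage bait thread", 0.9, 0.3, True, 0.1),
    Item("Long-form policy explainer", 0.4, 0.9, True, 0.7),
    Item("Friend's hiking photos", 0.5, 1.0, False, 0.0),
]
for item in sorted(feed, key=user_score, reverse=True):
    print(f"{user_score(item):+.2f}  {item.title}")
```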

Interoperability. The tribal sorting problem is intensified by the social graph lock-in of dominant platforms. If users could take their social graph with them to alternative platforms, they could exit platforms with harmful algorithms without losing their connections. Interoperability requirements would enable this exit.

Cultural Practices That Resist Tribal Sorting

Beyond individual habits and structural interventions, there is a cultural level: the norms and practices of epistemic engagement that either resist or accelerate tribal sorting.

Societies with strong traditions of debate, adversarial collaboration, and intellectual charity are more resistant to tribal sorting because their members have internalized practices that counteract motivated reasoning. Knowing how to steelman an opposing argument, tolerating uncertainty without collapsing into certainty, distinguishing between evidence and interpretation — these are skills that require cultivation.

Educational systems that prioritize critical thinking over content delivery, that teach students how arguments are constructed and how evidence is evaluated, that give students practice in defending positions they don't hold and attacking positions they do — these are cultural investments in epistemic resilience.

Journalism practices that present the strongest version of competing arguments rather than the weakest, that separate factual claims from interpretive frames, that model genuine uncertainty when genuine uncertainty exists — these are professional norms that, when widely practiced, raise the quality of the information environment.

None of these cultural practices can fully counter the raw optimization power of systems running at the scale of billions of users. But they are not irrelevant. The people most resistant to algorithmic tribal sorting are those who have internalized counter-algorithmic epistemic practices. Building those practices into educational and professional cultures is the slow, generational work of civilizational resilience.

The algorithm sorts quickly. Culture builds slowly. The race is already underway, and the algorithm currently leads. But culture is not fixed, and the awareness of what the algorithm is doing is growing. That awareness is the beginning of resistance.
