Think and Save the World

How Global Movements For Algorithmic Transparency Protect Shared Digital Commons

The Scope of Algorithmic Governance

The scale of algorithmic decision-making is difficult to overstate:

Information. Google processes over 8.5 billion searches per day. Facebook's algorithm determines the content feed for nearly 3 billion monthly active users. YouTube's recommendation engine drives over 70% of total watch time. TikTok's For You page is the primary news source for a significant and growing percentage of people under 30. These systems don't just deliver information. They construct the information environment. They determine, for billions of people, what counts as reality.

Employment. An estimated 99% of Fortune 500 companies use applicant tracking systems (ATS) that filter resumes algorithmically before a human sees them. Amazon famously had to scrap an AI recruiting tool after discovering it systematically downgraded resumes from women. HireVue and similar platforms use AI to analyze video interviews, scoring candidates on facial expressions, word choice, and tone -- metrics that have been criticized for racial and cultural bias.

Criminal justice. Risk assessment algorithms like COMPAS are used in US courts to inform sentencing and parole decisions. ProPublica's 2016 investigation found that COMPAS was significantly more likely to falsely flag Black defendants as high-risk and to falsely flag white defendants as low-risk. The company that makes COMPAS, Equivant, has disputed these findings, but the debate itself illustrates the problem: when the algorithm is proprietary, independent verification is nearly impossible.
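ProPublica's finding was, at its core, a comparison of error rates across groups: among people who did not reoffend, how often was each group flagged high-risk? A minimal sketch of that comparison, using made-up records rather than the actual COMPAS data:

```python
def false_positive_rate(records):
    """Share of people who did NOT reoffend but were flagged high-risk."""
    flagged = sum(1 for r in records if not r["reoffended"] and r["high_risk"])
    negatives = sum(1 for r in records if not r["reoffended"])
    return flagged / negatives if negatives else 0.0

# Hypothetical records, grouped by a protected attribute.
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
]

by_group = {}
for r in records:
    by_group.setdefault(r["group"], []).append(r)

rates = {g: false_positive_rate(rs) for g, rs in by_group.items()}
print(rates)  # {'A': 0.5, 'B': 0.0}
```

The computation itself is trivial; the point is that running it requires exactly the data a proprietary vendor can withhold.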

Finance. Credit scoring algorithms determine who gets loans, at what rates, and on what terms. Insurance algorithms set premiums. Fraud detection algorithms freeze accounts. These systems process millions of decisions daily, with limited transparency about their criteria and limited recourse for people who are incorrectly classified.

Government. Welfare eligibility systems in the Netherlands, Australia (the Robodebt scandal), and elsewhere have used algorithms to detect fraud, with devastating consequences for falsely accused citizens. Predictive policing systems in the US and UK direct police resources to specific neighborhoods based on historical crime data -- data that reflects historical policing patterns (where police were deployed) more than actual crime patterns (where crime occurs).
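The feedback loop in predictive policing can be made concrete with a toy simulation (entirely synthetic numbers): if patrols are allocated in proportion to *recorded* crime, and recording depends on patrol presence, a historical disparity locks in even when the underlying crime rates are identical.

```python
# Two neighborhoods with the SAME underlying crime rate.
true_crime = {"north": 100, "south": 100}
# Historical bias: more patrols went north, so more crime was recorded there.
recorded = {"north": 60, "south": 30}

for year in range(5):
    total = sum(recorded.values())
    # Allocate patrols proportionally to recorded crime (the "predictive" step).
    patrol_share = {n: recorded[n] / total for n in recorded}
    # Recording rate scales with patrol presence; actual crime is unchanged.
    recorded = {n: true_crime[n] * patrol_share[n] for n in recorded}

print(recorded)  # north still records ~2x south, despite equal true rates
```

The system never discovers that the true rates are equal, because it only ever observes where it looks.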

---

Why Opacity Is the Default

Algorithmic opacity is not accidental. It's maintained for four reasons:

1. Competitive advantage. Companies argue that their algorithms are trade secrets. Google's search algorithm, Facebook's news feed ranking, and TikTok's recommendation engine are the core assets of their respective businesses. Disclosing them would, they argue, enable competitors and manipulation.

2. Gaming prevention. If people know exactly how an algorithm works, they can game it. Search engine optimization already distorts search results; full transparency might accelerate this. Credit scoring algorithms are partly opaque to prevent people from manufacturing good scores without actually being creditworthy.

3. Complexity. Many modern algorithms -- particularly deep learning systems -- are genuinely difficult to explain. A neural network with millions of parameters can produce accurate predictions without any human being able to articulate why a particular decision was made. This "black box" problem is real, not just an excuse.

4. Liability avoidance. Transparency creates accountability. If a company discloses how its algorithm works and that algorithm turns out to be discriminatory, the company faces liability. Opacity provides legal cover.

None of these reasons is sufficient to justify the current state of affairs. Competitive advantage doesn't override democratic governance. Gaming prevention can be addressed through design rather than secrecy. Complexity is a technical challenge, not an ethical argument. And liability avoidance is precisely the kind of motivation that regulation exists to override.

---

The Transparency Toolkit

Algorithmic transparency can operate at multiple levels:

1. Input transparency: What data does the algorithm use? Personal data? Behavioral data? Demographic data? Data about people you're connected to? Input transparency allows individuals to know what information about them feeds into decisions.

2. Output transparency: What decisions does the algorithm make? How often? What's the distribution of outcomes across demographic groups? Output transparency enables disparate impact analysis -- detecting whether the algorithm produces systematically different outcomes for different populations.

3. Process transparency: How does the algorithm work? What rules, weights, or model architecture does it use? This is the deepest level of transparency and the most contested.

4. Contestability: Can individuals who are affected by algorithmic decisions challenge them? Is there a human review process? Can errors be corrected? The EU's GDPR includes what is often read as a right to explanation for automated decisions (Article 22), though its precise scope is debated and enforcement has been inconsistent.

5. Audit requirements: Independent audits of high-impact algorithms, conducted by qualified third parties with access to the system, its training data, and its outputs. This is analogous to financial auditing -- companies don't have to publish their entire accounting system, but they must submit to independent verification.
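Output transparency and audits often reduce to a simple statistic: the selection rate for each demographic group, and the ratio between the lowest and highest. The sketch below computes that ratio and compares it against the "four-fifths" threshold used as a rough disparate-impact flag in US employment practice; the data and the 0.8 cutoff are illustrative, not a reference implementation of any legal standard.

```python
def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratio(rates):
    """Ratio of the lowest selection rate to the highest (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring outcomes for two groups.
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
rates = selection_rates(outcomes)   # A: 2/3, B: 1/3
ratio = impact_ratio(rates)
print(ratio, ratio < 0.8)           # 0.5 -- below the four-fifths flag
```

Note what the auditor needs here: not the model's weights, only its inputs and outputs broken down by group. That is why output transparency is often the most tractable level to mandate.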

---

The EU AI Act and Global Regulatory Landscape

The EU AI Act, which entered into force in 2024 with its obligations applying in stages, establishes a risk-based framework:

- Unacceptable risk (banned): Social scoring by governments, real-time biometric surveillance in public spaces (with limited exceptions), and manipulative AI targeting vulnerable populations.
- High risk (heavily regulated): AI in employment, education, critical infrastructure, law enforcement, migration, and justice. These systems require risk assessments, data quality standards, transparency, human oversight, and conformity certification.
- Limited risk: Transparency obligations (e.g., chatbots must disclose they're AI; deepfakes must be labeled).
- Minimal risk: Largely unregulated.

The AI Act is imperfect -- enforcement mechanisms are still developing, definitions are contested, and the intersection with existing data protection law (GDPR) creates complexity. But it establishes a global precedent: algorithmic systems that affect people's lives are subject to public governance.

Other jurisdictions are following:

- Brazil: AI regulation framework under legislative development.
- Canada: The Artificial Intelligence and Data Act (AIDA), proposed as part of the Digital Charter Implementation Act.
- US: A patchwork of city- and state-level regulations (e.g., New York City's Local Law 144, which requires bias audits for automated employment decision tools).
- China: AI regulations focused on content recommendation algorithms, deepfakes, and generative AI.

---

Exercises

1. Algorithm Audit: Pick one digital platform you use daily. Research what is publicly known about its recommendation or decision-making algorithm. What do you know? What can't you find out?

2. Personal Impact Assessment: Have you ever been affected by an algorithmic decision -- a job application filtered out, a credit application denied, a social media post suppressed? What was your recourse? Was the process transparent?

3. Design Challenge: You're tasked with designing a content recommendation algorithm for a social media platform that maximizes user wellbeing rather than engagement. What would it optimize for? How would you measure success? What would you make transparent?

4. The Accountability Question: If an algorithm denies you a loan and you can't find out why, is that acceptable? At what point does algorithmic opacity become a violation of due process?

---

Key Sources

- O'Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.
- Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
- Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press.
- European Parliament. (2024). Regulation (EU) 2024/1689: The Artificial Intelligence Act.
- Angwin, J., et al. (2016). "Machine Bias." ProPublica.
