How A Thinking Planet Approaches Geoengineering Decisions
Let's be precise about what geoengineering means, because the term covers a wide range of interventions with very different risk profiles.
At one end: enhanced weathering and certain carbon dioxide removal approaches that are slow, diffuse, and relatively reversible. At the other end: stratospheric aerosol injection (SAI), which would involve continuously injecting sulfur dioxide or calcium carbonate into the stratosphere to reflect sunlight and reduce global temperatures. SAI is fast-acting, relatively cheap at scale, and deeply unilateral in its implications — and it has a termination shock problem. If you start and then stop, you get rapid warming that could be more damaging than the original problem.
The feasibility of SAI specifically — and the fact that a coordinated effort by a mid-sized nation or a sufficiently funded private actor could plausibly deploy it — is what makes the governance question urgent rather than theoretical.
Here's the current governance landscape: it's almost nothing. The ENMOD Convention of 1977 prohibits hostile environmental modification, but it's narrowly construed and predates modern geoengineering concepts. The UN Framework Convention on Climate Change and the Paris Agreement don't address geoengineering in any binding way. The Convention on Biological Diversity adopted a non-binding moratorium on some geoengineering activities in 2010. That's essentially it.
The research governance is similarly underdeveloped. Most major nations have no formal framework for evaluating geoengineering research proposals. Outdoor experiments — which are necessary to understand actual atmospheric behavior — exist in a regulatory gray zone almost everywhere.
This is what happens when decision-makers are not thinking at civilizational scale: they defer the hard questions until the technology is already deployed, at which point the governance discussion is moot.
A thinking planet approaches this differently, and the differences are structural.
The distributed knowledge problem. Geoengineering decisions require integrating information that is genuinely distributed across billions of people. Monsoon dependence, subsistence agriculture patterns, indigenous land and water use, coastal communities' exposure to shifting sea levels and storm patterns: this knowledge doesn't live in universities or government ministries. It lives in the people who are affected. A thinking planet builds decision processes that pull this knowledge into the deliberation rather than representing it through proxies who may or may not understand what they're proxying.
This isn't a call for pure direct democracy on geoengineering. The technical complexity is real. But the distinction between technical questions (what would SAI actually do to precipitation patterns?) and values questions (whose precipitation patterns count more, and by how much?) has to be maintained. Experts can inform the first. They have no special standing on the second. A thinking planet keeps these questions separated in a way that a non-thinking planet never manages, because non-thinking populations can't tell when technocrats are smuggling values disguised as expertise.
The consent problem. Geoengineering at scale is one of the few decisions that is genuinely impossible to make with universal consent. You cannot get 8 billion people to agree on a global thermostat setting. The question is not how to achieve consent — it's how to make non-consensual decisions in ways that are minimally legitimate, maximally informed, and appropriately accountable.
Current international law has some machinery for this: weighted voting in international bodies, regional representation, treaty-based mechanisms for dispute resolution. All of it is inadequate for geoengineering because the affected parties include future generations who have no representation, non-human ecosystems that have no voice, and current populations in low-income nations who are dramatically underrepresented in existing international governance structures.
A thinking planet doesn't solve this with a magic governance formula. It solves it by cultivating the kind of epistemic culture where these inadequacies are visible and contested, not hidden behind procedural legitimacy that looks formal without being substantive. When billions of people understand what consent means and what non-consent costs, the pressure on institutions to improve their legitimacy is real and continuous.
The termination shock problem as a governance design constraint. Once SAI begins, stopping it abruptly would cause warming far faster than the unmitigated trajectory would have produced, as the accumulated greenhouse forcing the aerosols had been masking asserts itself at once. This creates a lock-in dynamic unlike almost any other policy decision. Once you start, you're committed: not to a specific temperature target, but to the ongoing management of a global system indefinitely, or until some softer form of termination is engineered.
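The dynamic behind this lock-in can be sketched with a toy zero-dimensional energy-balance model. Every number and name below is an illustrative assumption, not a calibrated climate model; the point is only the qualitative shape: while SAI masks growing greenhouse forcing, temperature stays flat, and when injection stops, decades of accumulated forcing drive warming much faster than the unmitigated path ever did.

```python
# Toy zero-dimensional energy-balance sketch of termination shock.
# All parameter values are illustrative assumptions, not calibrated estimates.

C = 8.0    # effective heat capacity, W*yr / (m^2 K)  (assumed)
LAM = 1.2  # climate feedback parameter, W / (m^2 K)  (assumed)
DT = 0.1   # integration time step, years

def simulate(years, sai_start, sai_end, ghg_ramp=0.04):
    """Integrate dT/dt = (F_ghg - F_sai - LAM*T) / C with forward Euler.

    Greenhouse forcing grows linearly at `ghg_ramp` W/m^2 per year; while
    SAI is active it offsets that forcing entirely, then switches off at
    `sai_end`. Returns a list of (time, temperature anomaly) pairs.
    """
    t, temp, out = 0.0, 0.0, []
    while t < years:
        f_ghg = ghg_ramp * t
        f_sai = f_ghg if sai_start <= t < sai_end else 0.0
        temp += DT * (f_ghg - f_sai - LAM * temp) / C
        out.append((t, temp))
        t += DT
    return out

def rate(traj, t0, t1):
    """Mean warming rate (K/yr) between years t0 and t1."""
    xs = [temp for (t, temp) in traj if t0 <= t <= t1]
    return (xs[-1] - xs[0]) / (t1 - t0)

# Deploy SAI from year 30 to year 70, then terminate abruptly.
traj = simulate(years=100, sai_start=30, sai_end=70)
print("pre-SAI warming rate   :", round(rate(traj, 20, 30), 3), "K/yr")
print("post-termination rate  :", round(rate(traj, 70, 80), 3), "K/yr")
```

In this sketch the post-termination warming rate is several times the pre-SAI rate, because the greenhouse forcing kept growing for the forty masked years. That ratio, not any specific number, is the governance-relevant feature.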
This lock-in has a specific governance implication: the decision to start SAI is effectively a decision for all future generations, not just the current one. Standard democratic legitimacy — a majority votes yes — is deeply inadequate for this. What majority vote by currently living people can commit the unborn?
A thinking planet grapples with this seriously. It develops intergenerational governance frameworks — not as abstract ethics, but as practical institutional design. Future-generation ombudspersons with actual standing in international bodies. Constitutional-style constraints on decisions with irreversible intergenerational effects. Mandatory review periods with genuine off-ramps. These mechanisms exist in fragmentary form in various national contexts. They've never been applied at civilizational scale to a decision of this magnitude.
The unilateral actor problem. The most acute near-term governance risk isn't coordinated global geoengineering gone wrong. It's unilateral geoengineering by a nation or private actor that decides the stakes are high enough to justify acting alone.
This is not paranoid speculation. As climate damages mount, the temptation for high-latitude, high-income nations to deploy SAI unilaterally (accepting the tradeoffs that fall on low-latitude, low-income nations) will grow. The cost asymmetry is stark: northern-hemisphere nations bear less of the precipitation-disruption risk from certain SAI approaches while gaining most of the temperature benefit.
A thinking planet doesn't prevent this through international law alone, because international law has no enforcement mechanism against a determined major power. It prevents it by making the political cost of unilateral action prohibitive. When billions of people understand what's being decided, when the communities most affected are vocal and organized and globally connected, when the media and scientific community are capable of explaining the distributional impacts clearly — unilateral geoengineering becomes a reputational and political catastrophe for the actor contemplating it.
That's soft power at civilizational scale. It requires thinking populations to work.
The research governance question. Before deployment, there's research. And research governance is where a thinking planet can build norms now, before deployment pressure makes it politically impossible.
The questions that need governance frameworks are: Who decides which geoengineering research is conducted? What outdoor experiments are permissible, and under what conditions of informed community consent in affected regions? How are research findings made public, and who has access? How are conflicts of interest — particularly from private actors with financial stakes in specific outcomes — managed?
These questions have answers that are technically and institutionally tractable. They haven't been answered well because the political will hasn't existed. A thinking planet — where voters understand why these questions matter — generates that political will.
The actual civilizational question. Here's what I think is the deepest issue with geoengineering, one that goes beyond the specific technical and governance challenges.
Geoengineering is a symptom of a failure mode that a non-thinking planet is extremely vulnerable to: technological solutionism applied to problems that are fundamentally about values, power, and distribution.
Climate change is not primarily a technical problem. It's a problem of who bears the costs of economic activity, across space and across generations. Geoengineering doesn't solve that. At best, it buys time while the underlying distributional problem remains unresolved. At worst, it becomes the excuse to never resolve it — why decarbonize aggressively if you can just turn down the thermostat?
A thinking planet holds this tension clearly: geoengineering may be necessary as a bridge technology if carbon removal and emissions reduction fail to keep pace with climate damage. That's a real possibility worth preparing for. And simultaneously, the availability of geoengineering cannot be allowed to reduce the pressure for genuine emissions reduction and economic transformation.
Holding both of those truths simultaneously, without collapsing into either "geoengineering will save us" or "geoengineering is playing God and must never be touched" — that's the kind of reasoning a mature civilization needs. It requires billions of people capable of tolerating genuine complexity without resolving it into comfortable certainties.
That's not a technical capacity. It's an epistemic one. And it's exactly what Law 2 is about.
The world Jamal's manual envisions is one where this kind of reasoning is the baseline for how civilizational decisions get made. Not perfect. Not unanimous. But honest about the tradeoffs, inclusive of the affected, and resistant to the kind of narrow interest capture that currently governs decisions of this magnitude in the dark.
Geoengineering may be the hardest test case for that vision. Which is exactly why it belongs in the manual.