Think and Save the World

What Happens When Philanthropic Foundations Publish Full Learning and Failure Data


The Accountability Gap

Foundations occupy an unusual position in the accountability landscape of modern institutions. Their legal obligations are thin: maintain charitable purpose, file annual information returns (Form 990 or 990-PF) in the United States, meet minimum distribution requirements, avoid self-dealing. Their accountability to the people they intend to help is effectively zero — there is no mechanism by which a community that received inadequate or counterproductive philanthropic intervention can compel the foundation to acknowledge or address the failure.

This accountability gap is not accidental. Philanthropy emerged, historically, as a voluntary act by individuals of means who wished to direct their wealth toward public benefit. The legal frameworks governing it were designed to enable this voluntary action, not to regulate its quality. The assumption was that good intentions, competent leadership, and freedom from market pressures would produce good outcomes. Several centuries of evidence suggests this assumption is insufficiently warranted.

The accountability gap has structural consequences. First, it enables what critics such as Edgar Villanueva and Anand Giridharadas have described as "philanthropic colonialism" — the importation of priorities, frameworks, and solutions from the culture and perspective of wealthy donors into communities whose members would define their own needs differently. Second, it insulates foundations from corrective feedback. When a program does not work, the foundation may learn this from internal evaluation, but the learning stays internal. Grantees, community members, and other funders continue operating in ignorance of evidence that the approach fails. Third, it creates a perverse incentive structure for grantees: organizations that report honestly about challenges risk losing funding, while organizations that perform optimism regardless of evidence get renewed. Over time, this selects for grantees who are skilled communicators rather than effective program operators.

The accountability gap is maintained, in part, by the voluntary nature of reporting. Foundations are not required to publish evaluations. They are not required to share failure data. They are not required to make their decision-making criteria public. What gets reported is chosen, framed, and timed by the foundation itself, with obvious implications for what the resulting picture looks like.

The Cost of Private Learning

The cost of private learning in philanthropy is measured in duplicated failure. When a foundation's $50 million investment in a specific education intervention produces no improvement in student outcomes, and that finding is not published, dozens of other foundations are free to invest in the same approach with the same results. The negative knowledge is locked up where it cannot perform its social function.

Negative knowledge — the knowledge of what does not work, under what conditions, for what reasons — is among the most valuable categories of information in any field. In clinical medicine, trial registration and results-reporting requirements push negative trials into the public record; the result is a literature that constrains future investment in ineffective treatments. In engineering, failure analysis is a professional discipline because understanding why structures and systems fail is essential to designing ones that do not. In philanthropic practice, negative knowledge is systematically suppressed, with consequences for the entire field's learning rate.

The Foundation Review, an academic journal dedicated to publishing foundation evaluations — positive and negative — was created specifically to address this gap. It has published accounts of programs that failed to achieve their intended impacts, of theories of change that proved wrong, of implementation challenges that turned out to be insurmountable. These publications are valuable precisely because they are rare. Most foundations commission evaluations that never become public, or become public only in curated excerpts that emphasize positive findings.

The Fund for Shared Insight, a collaborative of funders committed to listening to communities and sharing what they learn, documented widespread grantee perception that foundations want to hear good news. A survey of 150 nonprofits found that the majority had withheld concerns from their funders because they feared it would harm their funding relationship. This is the operational reality of private learning: its consequences do not stay internal. It shapes grantee behavior, producing systematic distortion in the information that foundations receive about whether their programs work.

Cases of Genuine Transparency

A small number of foundations have made serious, documented commitments to publishing failure alongside success.

The William and Flora Hewlett Foundation has maintained what it calls a "learning agenda" that includes genuine evaluation of strategies, including strategies that have not produced hoped-for results. Its work in education policy, which included a major investment in the Common Core State Standards initiative, generated internal debate about whether the foundation's approach — working through policy rather than practice — had been effective. The foundation published an honest assessment of its role, including what it would have done differently. This transparency generated criticism, but it also generated a body of knowledge about the limits of policy-driven education philanthropy that other funders could use.

The Omidyar Network has published "exit memos" when it ends investment in specific areas, explaining what it learned, what worked, and why it is reallocating capital. These memos are unusual in philanthropic practice because they make explicit the reasoning behind strategic discontinuation — information that is typically kept internal or communicated only in vague terms.

The Robert Wood Johnson Foundation's Culture of Health initiative invested in a framework for measuring population health that was designed from the outset to be evaluable and to track whether foundation investments were actually moving health indicators. The commitment to tracking real outcomes rather than activity outputs is itself a form of transparency — it creates accountability not just for what was done but for whether it worked.

The Development Impact Lab at UC Berkeley operates on the explicit principle that null results from randomized controlled trials of development interventions should be published and shared. Many interventions that were widely assumed to be effective — specific microfinance models, some school-feeding programs, certain conditional cash transfer designs — produced null or negative results in rigorous evaluation. Publishing these results changed the allocation of philanthropic resources in international development.

Perhaps most instructive is the case of Transparency International's own transparency failures. The organization dedicated to fighting global corruption has faced criticism for being insufficiently transparent about its own governance and decision-making. The irony is telling: even organizations explicitly committed to transparency find the practice difficult when it reflects unfavorably on themselves. Commitment to radical transparency requires institutional design that makes publication of failure the default, not an exceptional act of courage.

The Grantee Relationship Transformation

One of the most significant effects of foundation transparency on failure is its transformation of the funder-grantee relationship. The current model, in most of philanthropy, is structurally hierarchical: funders set priorities, grantees propose projects that match those priorities, funders evaluate whether grantees meet reporting requirements, grantees perform compliance. The relationship is more inspectorial than collaborative, and it is profoundly asymmetric in power.

When foundations commit to publishing honest failure data, including failure attributable to funder decisions rather than grantee performance, the power asymmetry shifts. Grantees learn that they can report problems honestly without automatic funding loss. Program officers learn that surfacing problems early produces better outcomes than discovering them at grant closure. Foundations develop a clearer understanding of where their theories of change were wrong, rather than where their grantees underperformed.

This is not idealistic. It is what the evidence shows about high-performing philanthropic relationships. The Fund for Shared Insight's "Listen for Good" project, which trained foundations to use standardized feedback surveys with grantees and beneficiaries, found that foundations that incorporated this feedback systematically altered their program strategies in ways that produced better outcomes. The mechanism was simple: they learned things they would not have learned otherwise, because grantees who knew the feedback would be taken seriously told them the truth.

The Listening Post Project at Johns Hopkins University documented how foundations that asked nonprofits for honest assessment of what was not working received actionable information that improved program design. The foundations that used structured listening protocols — anonymous surveys, facilitated conversations, independent interviews with beneficiaries — received more useful information than those that relied on grant reports and site visits.

Systemic Field Effects

The systemic effects of widespread philanthropic transparency extend beyond individual foundation learning to the structure of the entire field.

When foundations publish evaluation data openly — including which interventions produce null results — they create a shared evidence base that reduces the costs of learning for the sector as a whole. Organizations considering interventions can check whether similar interventions have been tried and with what results. Program designers can access negative findings that constrain the design space, eliminating approaches proven not to work before investing resources in them. Researchers can use foundation evaluation data to identify patterns across contexts — which conditions are associated with success, which with failure, what moderates impact in different populations.

This is the public goods logic of scientific publishing applied to philanthropic practice. A randomized controlled trial costs substantial resources to conduct. Publishing its results costs essentially nothing additional. But the value of the published result to the field is often multiples of the value to the organization that conducted the study. The same logic applies to foundation evaluations: the marginal cost of publication is low; the marginal benefit to the field is high; and the reason most evaluations go unpublished is that foundations bear the cost of potential reputational damage while the field bears the cost of not having the information.

Changing this requires changing the incentive structure. Several approaches have been proposed and partially implemented. Funders of funders — large national foundations that support capacity-building in the philanthropic sector — can make transparency requirements a condition of capacity-building grants. Those who contribute to or advise philanthropic endowments can make transparency about learning and failure a criterion for their support. The IRS Form 990, which is public, could be expanded to require reporting on evaluation findings. Rating organizations like Charity Navigator and Candid (formerly GuideStar) could add transparency scores based on publication of evaluation data.

Power, Narrative, and Who Gets to Revise

The deepest issue in philanthropic transparency is not about information. It is about who gets to define what counts as success and failure in the first place.

Most foundation evaluations are designed by funders and conducted by evaluators hired by funders. The criteria against which programs are evaluated reflect the theories of change and the outcome definitions that the foundation has developed, often without significant input from the communities the programs serve. A foundation that publishes its evaluation data transparently but has asked the wrong questions has not achieved genuine accountability.

Genuine transparency requires a prior step: the revision of who shapes the evaluation framework. Community-centered evaluation — in which the communities served define success criteria, participate in data collection, and interpret findings — is a fundamentally different activity from foundation-centered evaluation. It produces different data and different conclusions. It also frequently produces uncomfortable findings: that what the foundation measured was not what the community valued, or that the program addressed a problem the foundation defined but not the problems community members identified as most pressing.

The philanthropy reform movement — represented by organizations like NCRP (National Committee for Responsive Philanthropy), the Trust-Based Philanthropy Project, and GrantCraft — has articulated a vision of philanthropic transparency that includes not just publishing what was learned but opening up the definition of what counts as learning in the first place. Trust-based philanthropy, specifically, emphasizes multi-year unrestricted grants, simplified reporting requirements, and proactive relationship-building that creates conditions for honest feedback. The transparency it seeks is bilateral: not just foundations publishing what they think happened, but genuine exchange in which grantees and communities shape the definition of what matters.

This version of transparency is genuinely civilizational in its implications. Philanthropy represents one of the largest concentrations of private capital directed toward social purposes in human history. Foundations control assets that in aggregate exceed the GDP of most nations. The quality of their learning — and the degree to which that learning is private or public, self-defined or community-defined — shapes the allocation of enormous resources over decades. Getting the learning right, and getting it right in public where it can be challenged and improved, is not a nice-to-have. It is a precondition for philanthropic activity at this scale to be something other than a very expensive way for wealthy people to act on private theories of how other people should live.

Publishing failure is the beginning of taking the people you failed seriously enough to be honest with them.
