“A Guide to Evaluating Pay for Success Programs and Social Impact Bonds” might well have borne the subtitle, “Reasons why you might not want to mess with these schemes and questions to make you think more critically before you go leaping in with all four feet.” Released on December 4th by a coalition of groups, including the American Federation of State, County and Municipal Employees, the Center for Effective Government, In the Public Interest, the Minnesota Council of Nonprofits, and the Oregon Center for Public Policy, the 10-page guide quickly runs through the problems these financing methods cause and then poses questions to be considered.
Social impact bonds depend upon private investors to take on the risk of social programs, with the government paying off those investments if and when outcome goals are met. Somehow, this is meant to improve the program’s outcomes—because, after all, the free market has its own ineffable magic to bring to bear to whip us all into shape around social programs. To make this complex setup work as conceived, independent evaluators are contracted, as are lawyers all around. Naturally, there are also costs associated with the loans. Everyone gets a little piece of the pie. McKinsey & Company, in some of its early advocacy, wrote, “SIBs are a more expensive way to finance the scaling up for preventive programs than if the government simply went to service providers and paid them to expand an intervention to more constituents.”
And, indeed, if there were any indication that Pay for Success programs consistently worked to ensure success, we could argue about whether or not that success was worth the increased cost. But the outcomes of the experiments with Pay for Success/SIBs have not been encouraging. Early transactions have had sketchy results (Peterborough prison, Rikers Island, Utah early childhood), and that has left us with no true success stories.
Additionally, the guide suggests that the PFS interventions don’t recognize that “fixing complex social problems typically requires investments and policy changes on multiple levels.” It quotes Donald Cohen and Dr. Jennifer Zelnick from their NPQ article, saying, “A bias toward programs that produce quick, measurable results narrows the public dialogue and waters down findings.”
A good example of this dynamic can be found in the Rikers Island experiment, where the PFS deal funded Moral Reconation Therapy to reduce recidivism. Options with a potentially larger impact on recidivism, such as decreasing the number of questionable misdemeanor arrests or making access to bail easier, were not considered because they are not amenable to measurement. States have reduced recidivism through other means, including increasing public investment in community-based treatment, re-entry planning and intensive supervision, and providing continuity of care to people with mental health problems. Because such interventions are difficult, if not impossible, to measure, they would not be considered in PFS deals.
What drives the PFS solutions conversation is, “What can be funded by a PFS deal?” rather than “What structural changes and services can best address the problem we are seeing?” But the SIB notion—and we use that word with great purpose, as in “an impulse or desire, especially one of a whimsical kind”—got a lot of traction early on. The promotion stage assumed it would be successful and had a broad and influential group of supporters (Rockefeller Foundation, McKinsey, Kennedy School, Bloomberg, Goldman Sachs, and Urban Institute) plus a gaggle of professional consultants smelling a ripe financial opportunity. The promotion stage also had major funding from Rockefeller and the Obama administration and ready investors for the transactions as they occurred.
Still, the experimentation goes on, with advocates suggesting that detractors are merely sour “social progress spoilers” of a sort.
In the United States, there are eight established PFS contracts, including a program to reduce recidivism among youth detained at Rikers Island that was terminated when projected outcomes were not achieved. As a result of aggressive promotion by intermediaries, boutique firms and consulting agencies, plus pro-PFS policies and funding from the federal government, many more PFS contracts are in the pipeline.
Indeed, in 2014, the Corporation for National and Community Service’s Social Innovation Fund launched its Pay for Success program. Through its eight inaugural grantees, 43 programs across the country are receiving PFS technical assistance. Federal legislation has also been introduced to foster the creation of PFS deals.
Six states (Colorado, Idaho, Massachusetts, Oklahoma, Texas, and Utah) have passed PFS enabling legislation, and more states are expected to follow.
The story here is about playing catch-up. The promotion of Pay for Success and Social Impact Bonds ran way ahead of any impartial or detailed scrutiny of how this would actually work. The excitement over the transactions and the disproportionate attention paid to the novelty and financing aspects resulted in little attention being paid to the underlying services that were supposed to produce these results. This guide is an organized inquiry, response, and opposition to a popular but dubious fad. Now, it is time for the long-overdue re-evaluation stage. Was this such a good idea? What questions should have been asked in the first place?
Based on its huge stumbles, this whole field should be under a very intense microscope, and the questions identified by this guide are an attempt to catch up to a runaway field. Was it all too good to be true? Why did so many institutions jump in with both feet? Why do people keep falling for Ponzi schemes? The answers are partly related to public policy and partly related to human psychology.
When all is said and done, this report contains a set of excellent questions for anyone considering a Pay for Success arrangement.
ABOUT JON PRATT
Jon Pratt is the executive director of the Minnesota Council of Nonprofits and a contributing editor to the Nonprofit Quarterly.
ABOUT RUTH MCCAMBRIDGE
Ruth is Editor in Chief of the Nonprofit Quarterly. Her background includes forty-five years of experience in nonprofits, primarily in organizations that mix grassroots community work with policy change. Beginning in the mid-1980s, Ruth spent a decade at the Boston Foundation, developing and implementing capacity building programs and advocating for grantmaking attention to constituent involvement.