We have often been asked in our training and technical assistance (TTA) work to explain the value of including a pilot or ramp-up period in a pay for success (PFS) project. Pilots and ramp-ups serve very similar functions: both are trial runs of the project during which success or failure does not affect outcome payments, allowing the service provider and the intermediary the flexibility to address potential issues that may affect the overall success of the project. Ramp-ups are an early stage of the project timeline, generally lasting three, six, or twelve months, whereas pilot programs are typically distinct, standalone programs. This blog focuses on ramp-ups over pilot periods simply because most of the existing projects that have included a trial period built it into the overall project.
Ramp-ups allow projects to iron out potential coordination problems among stakeholders.
Several PFS projects have required the intermediary, service provider, evaluator, and government agency to deploy new recruitment, referral, and enrollment procedures, which can lead to coordination challenges. Ramp-ups provide an opportunity to test these procedures and address any issues.
Consider, for example, a chronic homelessness PFS project in which participants are first identified based on contact with the police and which also includes a randomized controlled trial (RCT) for the evaluation. The police department would be the initial point of contact and the coordinated intake point for the project. The master list of eligible individuals might be maintained by the police, or the police might need to refer names to an outside organization. Because the project includes an RCT, eligible participants would also need to be randomized into a treatment and a control group, a step most likely handled by the evaluator. Complicating this process, the list may need to be de-identified before randomization to protect participants' privacy and ensure impartiality, then compared against the master list to identify individuals in the treatment group so that the service provider can find and enroll them.
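The referral-and-randomization hand-off described above can be sketched in a few lines. This is only an illustration: the identifiers, salt, seed, and half-and-half split are hypothetical, and a real project would use a proper key-management and consent process.

```python
import hashlib
import random

def pseudonym(record_id: str, salt: str) -> str:
    """Hash an identifier so the evaluator never sees real names."""
    return hashlib.sha256((salt + record_id).encode()).hexdigest()

# Hypothetical master list held by the referring agency (the police here).
master_list = ["ID-001", "ID-002", "ID-003", "ID-004"]
SALT = "project-secret"  # kept by the agency, not shared with the evaluator

# Step 1: the agency sends only pseudonyms to the evaluator.
pseudonyms = [pseudonym(rid, SALT) for rid in master_list]

# Step 2: the evaluator randomizes pseudonyms into treatment and control.
rng = random.Random(42)  # fixed seed for a reproducible illustration
shuffled = pseudonyms[:]
rng.shuffle(shuffled)
half = len(shuffled) // 2
treatment, control = shuffled[:half], shuffled[half:]

# Step 3: the agency matches treatment pseudonyms back to the master list
# so the service provider knows whom to find and enroll.
lookup = {pseudonym(rid, SALT): rid for rid in master_list}
to_enroll = [lookup[p] for p in treatment]
```

Even in this toy version, three organizations must agree on who holds the salt, who performs the randomization, and who re-identifies the treatment group, which is exactly the kind of coordination a ramp-up lets a project rehearse.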
Transferring this information across multiple organizations requires a high level of efficiency and teamwork, and could take some time to properly set up, particularly if a wholly new program is being implemented or the organizations involved do not have a history of working together. Ramp-ups allow the project to smooth out those potential issues early in the project timeline.
Ramp-ups allow project stakeholders to estimate take-up and retention rates.
Do eligible individuals actually agree to participate? After enrollment, do they follow through with the program? A ramp-up period is a good chance to see how take-up and retention may shake out during the larger program. Low take-up or retention rates could mean that there are too few people in the treatment and control groups, in which case the evaluation will be underpowered and unable to determine causal impacts.
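To see why take-up matters for statistical power, here is a back-of-the-envelope sketch using the standard two-sample normal approximation for required sample size. All numbers (effect size, significance level, power, take-up rate) are illustrative assumptions, not drawn from any actual project.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate participants needed per arm to detect a standardized
    effect (Cohen's d) in a two-group comparison, via the normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for a two-sided test
    z_power = z.inv_cdf(power)          # critical value for the target power
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

needed = n_per_group(0.3)           # modest assumed effect: 175 people per arm
take_up = 0.60                      # assumed: only 60% of referrals enroll
referrals = ceil(needed / take_up)  # 292 referrals per arm to compensate
```

With a hypothetical 60 percent take-up rate, the project must generate roughly two-thirds more referrals than the power calculation alone suggests; a ramp-up period is one way to learn the real rate before it becomes a problem.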
Consider, for example, a program designed to help families that speak little or no English. Due to the language barrier, the service provider may find it difficult to effectively communicate the purpose and benefits of the study and thus may be hard-pressed to enroll participants. In addition, some of the families may be immigrants who feel uncomfortable being contacted by unknown organizations or governmental bodies. In that case, the project would need to restructure its outreach to reflect the needs of the target population. Perhaps the project needs to build stronger relationships with the community or hire ambassadors who are better able to connect with community members.
With a ramp-up period, the project stakeholders may be able to identify these structural barriers that make it difficult for individuals to participate in the program and adjust accordingly during project implementation.
Ramp-ups allow projects to determine the difficulty of monitoring outcomes.
During the ramp-up, the project can refine its evaluation tools and develop clear and consistent practices for data collection and data monitoring. If a practical obstacle prevents a procedure from being implemented with fidelity, the problem can be spotted before it compromises the data used to determine success payments.
Because repayments in PFS projects are based on outcomes achieved, it is essential that a project accurately and reliably measure what it sets out to measure. Many PFS projects use administrative data already being collected by a government body, which often makes collection easier, but it is not without its problems. Say a project is collecting information on substance use among a prison population. Relying on existing administrative data from the prison might seem like the easier course, but if substance use is measured within that prison by asking inmates whether they are using drugs, the data may not be as reliable as testing inmates for drug use.
Pilots and ramp-ups should not be used to determine a program’s effect.
It is important to note that pilots and ramp-ups cannot predict the future success of the intervention. They usually do not have a large enough sample size to estimate the likely effect of a program with any statistical precision. Furthermore, using a pilot's effect size to determine the sample size needed for the full project can lead to false negatives, particularly if a noisy, inflated pilot estimate causes the project to underestimate the sample size needed to detect the true effect.
Pilots and ramp-ups provide an excellent opportunity to work out the kinks of a project before success is measured, so that outcome payments reflect the effectiveness of the services rather than the logistics of delivering them.
For more information on the benefits of a pilot or ramp-up and considerations on how to structure them, please see our December 2016 paper “Practical Considerations for Pay for Success Evaluations.”
Have a Pay for Success question? Ask our experts at PFSsupport@urban.org!
As an organization, the Urban Institute does not take positions on issues. Scholars are independent and empowered to share their evidence-based views and recommendations shaped by research.