The Principled Agent

Thoughts on development economics and impact measurement

Honey, I shrunk the Social Impact



Lee Crawfurd, who you may know as the author of the best economics blog (previously) in South Sudan, recently highlighted a study that assessed the impact of a job placement program in France. The paper is an excellent case study in how perceived impact can disappear, and in the importance of soundly estimating what would have happened without the intervention, i.e., the counterfactual.

The program provided assistance to jobseekers, and depending on how you measured impact, you would get a very different story. Let’s assume the program under consideration works with 10,000 unemployed jobseekers. We have three groups of people, all concerned with increasing the number of job placements their investment produces, yet each with a different perspective on impact measurement:

a) Just track the outcomes! This group may even be paying organizations based on their job placement success. You want your money going into the program, so rather than spend any of it on an RCT, you’re going to check in after a year, see where the program’s participants ended up, and see how well you did.

b) OK, OK, we’ll do the RCT. You’re going to let the academics in, create a statistically similar control group through random assignment, and track both the participants in the job placement program and the control group to see how you did.

c) Hmmm, we’re intervening in a complex system; shouldn’t we measure the impact of our program on jobseekers not in our program? You have enraged your staff, but there you have it: you have decided to evaluate not only the impact of your program on its participants, but also its unintended consequences for others around them.

So what happened?

According to Team A (“Just track the outcomes!”): Success! The program cost only 3,305 euros per job placement added. Of the 10,000 unemployed jobseekers, 18% found jobs, good for 1,770 new, happily employed participants. Send it to the printers!

Yet Team B (“OK, OK, we’ll do the RCT”) interrupts: “Well, you’re right that 1,770 jobseekers ended up with jobs, but in our control group, which received no help at all, 1,600 ended up with jobs!” Team B argues that the program really added only 170 jobs, which works out to 34,412 euros per job added, an order of magnitude worse than Team A’s figure.
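
To make the arithmetic concrete, here’s a quick back-of-envelope sketch in Python. The budget figure is my own assumption, backed out from the numbers above (1,770 placements at 3,305 euros each, roughly 5.85 million euros); the paper may report a different total.

```python
# Back-of-envelope: naive vs. RCT-adjusted cost per job placement.
# ASSUMPTION: the ~5.85M euro budget is backed out from the post's own
# figures (1,770 placements x 3,305 euros each), not taken from the paper.

JOBSEEKERS = 10_000
BUDGET_EUR = 1_770 * 3_305                  # ~5,849,850 euros (implied)

placed_in_program = 1_770                   # participants employed after a year
placed_in_control = 1_600                   # control-group members employed

placement_rate = placed_in_program / JOBSEEKERS        # 17.7%, rounded to 18%

# Team A: credit the program with every placement.
naive_cost = BUDGET_EUR / placed_in_program            # 3,305 euros per job

# Team B: count only placements beyond the counterfactual.
jobs_added = placed_in_program - placed_in_control     # 170 jobs
rct_cost = BUDGET_EUR / jobs_added                     # ~34,411 euros per job

print(f"Placement rate:                  {placement_rate:.1%}")
print(f"Naive cost per placement:        {naive_cost:,.0f} euros")
print(f"RCT-adjusted cost per placement: {rct_cost:,.0f} euros")
```

(The computed 34,411 differs from the 34,412 quoted above only by rounding in the implied budget.)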

Then there’s Team C (“Hmmm, we’re intervening in a complex system…”). Now, no one likes Team C. At least Team B is focused on the program’s participants; this team appears to be navel-gazing while others try to actually help people. So Team C waits for the debate to quiet down and announces, “Actually, the program had no impact at all.”

Teams A and B throw up their hands, yet Team C has a point: the job placement program appears to be very good at helping jobseekers improve their search techniques, which just means that another jobseeker in their town ends up without that job. Rather than increasing the number of jobs, the program merely altered their allocation. A result that only Team C could detect.

As the authors of the paper note, “The externalities we estimate suggest that part of the program effects in the short run were due to an improvement in the search ability of some workers, which reduced the relative job search success of others.”

Technically, the net impact on placements was negative, but Team C and the authors hasten to point out that a nil effect is certainly a statistical possibility given the results (and this blog’s author would suggest it is more likely than the negative point estimate).

A quick table I put together based on the paper’s results follows. Comments, corrections welcome.

| | Tracking outcomes (Team A) | Controlling for outside factors with an RCT (Team B) | Controlling for crowding out (Team C) |
| --- | --- | --- | --- |
| Net job placements added | 1,770 | 170 | -119 |
| Jobs added | | | 360 |
| Jobs displaced | | | 479 |
| % of jobseekers placed due to program | 18% | 1.7% | -1.2% |
| Euros per job placement added | 3,305 | 34,412 | (49,091) |
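
Extending the same sketch to Team C’s column: the 360 jobs added and 479 displaced come straight from the table above, and the budget remains the same back-of-envelope assumption as before. The small gap between the computed cost and the table’s (49,091) is, again, rounding in that implied budget.

```python
# Team C: account for crowding out, i.e., placements gained by participants
# at the expense of other jobseekers in the same labor market.
# ASSUMPTION: same implied budget as before (1,770 x 3,305 euros).

BUDGET_EUR = 1_770 * 3_305        # ~5,849,850 euros (implied)

jobs_added = 360                  # extra placements among participants
jobs_displaced = 479              # placements lost by non-participants

net_placements = jobs_added - jobs_displaced      # -119: a net loss
cost_per_net = BUDGET_EUR / abs(net_placements)   # ~49,158 euros per net placement

print(f"Net placements added:    {net_placements}")
print(f"Euros per net placement: ({cost_per_net:,.0f})")   # accounting parentheses
```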

Written by Chris Prottas

January 30, 2013 at 11:28 pm


4 Responses


  1. […] Honey I shrunk your social impact – Chris Prottas nicely summarized three approaches to […]

  2. Chris,

    Thank you for the succinct and entertaining post! You’ve put solid voice to a dynamic we see time and again in the field. My question for you is whether you have any thoughts on how to approach groups A and B so that they don’t just “throw their hands up”? Is it sufficient to focus on educating the skeptics within the organization from the get-go on the benefits of doing rigorous evaluation? Is it an ongoing process whereby you’re “preparing them for the worst,” while also working as a team to strategize for the dreaded “no impact” finding? Would be happy to hear your thoughts.

    Greg Lestikow

    February 14, 2013 at 5:49 am

    • Greg,

      Thanks for the interesting question. Honestly, I think you have to segment NGOs:

      1) NGOs who define themselves by a service
      2) NGOs who define themselves by the outcome they wish to see

      For Type 1 NGOs, evaluation is scary, even life-threatening. For Type 2 NGOs, less so. While nearly all NGOs fall somewhere between these extremes, diversity of programs is one indicator of an organization that may be more accepting of a “no impact” finding. Unfortunately, this can be counteracted by the fact that many “diversified” organizations are large bureaucracies, which have institutionalized an aversion to reporting anything less than complete success.

      Which is all to say that I think that working with an organization to establish an identity that’s not tied to the success of a single program is one important step. A second step is creating an evaluation plan that’s aligned with their fundraising strategy and branding, where there’s a clear pilot phase designed not simply to test a single concept, but to test multiple assumptions of the organization’s working theory of change.

      There will be a lot of problems if development and communications staff are selling a program as the answer to poverty while the evaluation and program teams consider it a pilot with only a decent chance of marginal impact. The whole organization needs to be on the same page, so that when the evaluation results say “no impact,” there’s no panic; instead, the organization awaits the lessons learned from the evaluation, which now give it a special insight and core competency that can make it a leader in its field in the future. The more mature the organization, the more difficult this becomes. In those cases, it may make sense to cordon off a certain “experimental” program area, which, hopefully after (eventual) successes, will create proven programs that displace current ones.

      Sorry for the rambling but just some quick thoughts!

      Chris Prottas

      February 15, 2013 at 3:44 pm

      • Great thoughts, thanks Chris! The “experimental” programs division of an organization is a great thought. Any big NGOs that have done this well?

        Greg Lestikow

        February 20, 2013 at 2:59 am

