Honey, I shrunk the Social Impact
Lee Crawfurd, who you may know as the author of the best economics blog (previously) in South Sudan, recently highlighted a study that assessed the impact of a job placement program in France. The paper is an excellent case study in how perceived impact can evaporate, and in why it matters to soundly estimate the counterfactual, i.e., what would have happened without the intervention.
The program provided assistance to jobseekers, and depending on how you measured impact, you would have come away with a very different story. Let’s assume the program under consideration works with 10,000 unemployed jobseekers. We have three groups of people, all concerned with increasing the number of job placements their investment buys, yet each with a different perspective on impact measurement:
a) Just track the outcomes! This group may even be paying the organizations based on their job placement success. You want to put money into the program, so rather than spend any of it on an RCT, you’re going to check in after a year, see where the program’s participants have ended up, and tally how well you did.
b) OK, OK, we’ll do the RCT. You’re going to let the academics in, create a statistically similar control group through random selection, and track both the participants in the job placement program and the control group to see how you did.
c) Hmmm, we’re intervening in a complex system; shouldn’t we also measure the impact of our program on jobseekers who aren’t in it? You have enraged your staff, but there you have it: you have decided to evaluate not only the program’s impact on its participants but also its unintended consequences for others around them.
So what happened?
According to Team a) “Just track the outcomes!”, success! The program cost only 3,305 euros per job placement added. Of the 10,000 unemployed jobseekers, 17.7% found jobs, good for 1,770 newly and happily employed participants. Send it to the printers!
Yet Team b) “OK, OK, we’ll do the RCT” interrupts: “Well, you’re right that 1,770 jobseekers ended up with jobs, but in our control group, where they received no help at all, 1,600 ended up with jobs!” Team b argues that the program really added only 170 jobs (1,770 minus 1,600), which, spreading the same total spend over far fewer placements, works out to 34,412 euros per job added, an order of magnitude worse than Team a’s figure.
Then there’s Team c) “Hmmm, we’re intervening in a complex system…” Now, no one likes Team c. At least Team b is focused on the program’s participants; this team appears to be navel-gazing while others try to actually help people. So Team c waits for the debate to quiet down and announces, “Actually, the program had no impact at all.”
Teams a and b throw up their hands, yet Team c has a point: the job placement program appears to be very good at helping jobseekers improve their search techniques, which just means that another jobseeker in their town ends up without that job. Rather than increasing the number of jobs, the program merely altered their allocation. A result that only Team c’s design could detect.
As the authors of the paper note, “The externalities we estimate suggest that part of the program effects in the short run were due to an improvement in the search ability of some workers, which reduced the relative job search success of others.”
Technically, the estimated net impact on placements was negative, but Team c (and the authors) hasten to point out that a null effect is certainly a statistical possibility given the results (and this blog author would suggest it is more likely than the negative one).
A quick table I put together based on the paper’s results follows. Comments, corrections welcome.
| | Tracking outcomes (Team a) | Controlling for outside factors with an RCT (Team b) | Controlling for crowding out (Team c) |
|---|---|---|---|
| Net job placements added | 1,770 | 170 | -119 |
| % of jobseekers placed due to program | 17.7% | 1.7% | -1.2% |
| Euros per job placement added | 3,305 | 34,412 | -49,091 |
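For anyone who wants to see where these figures come from, here is a minimal back-of-the-envelope sketch in Python. The total program budget is my assumption, inferred from Team a’s cost figure; the small mismatches with the published per-placement costs presumably come from rounding in the source numbers.

```python
# Back-of-the-envelope reconstruction of the table above. The total budget
# is an assumption inferred from Team a's figure (3,305 euros x 1,770
# placements, roughly 5.85M euros); small mismatches with the published
# per-placement costs come from rounding in the source numbers.

N = 10_000                  # unemployed jobseekers in the program
placed_treated = 1_770      # participants employed after a year (17.7%)
placed_control = 1_600      # employed in the statistically similar control group
net_of_displacement = -119  # the paper's net estimate once crowding out is counted

total_cost = 3_305 * placed_treated  # assumed program budget, ~5,849,850 euros

# Team a: just track outcomes -- every placement is credited to the program.
# Team b: RCT -- only placements beyond the control group count.
# Team c: also subtract the jobs displaced from non-participants.
placements = {
    "a (track outcomes)": placed_treated,
    "b (RCT)": placed_treated - placed_control,
    "c (crowding out)": net_of_displacement,
}

for team, jobs in placements.items():
    print(f"Team {team}: {jobs:+d} placements, "
          f"{total_cost / jobs:,.0f} euros per placement")
```

Running it prints roughly 3,305, 34,411, and -49,158 euros per placement; the shrinking denominator, not the spending, is what drives the cost per job from cheap to catastrophic.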