What Have We Learned from the Millennium Villages Project?

“We’ve begun to see what’s possible when the best scientific research is combined with local expertise and the latest technologies.”

-MillenniumVillages.org

Yesterday the British journal The Lancet released an online preview of a new study by Jeffrey Sachs and other leading researchers at Columbia University’s Earth Institute. The article, The effect of an integrated multisector model for achieving the Millennium Development Goals and improving child survival in rural sub-Saharan Africa: a non-randomised controlled assessment, arrives amid an escalating debate in the international development community about the role of evaluation.

The Millennium Villages project (MVP) implemented development interventions covering agriculture, the environment, business development, education, infrastructure, and health in a rural, impoverished community in each of nine Sub-Saharan African countries. This study is the MVP’s own evaluation of progress toward the Millennium Development Goals’ sub-targets over the project’s first three years.

Critics of the MVP approach have largely focused on the absence of randomized studies in its monitoring and evaluation. Given the high-profile media coverage of claimed successes and celebrity involvement over the past five years, some argue the MVP has a responsibility to show more evidence to support its methods, homing in particularly on the need for a randomized study with comparison villages. In response to these critics, MVP included matched communities as controls in this study. In some cases, such as income, the control villages did just as well. However, in under-five mortality, MVP villages averaged a 24% reduction while the rate in the control villages increased by 6%.
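
To make that comparison logic concrete, here is a minimal back-of-envelope sketch in Python. The baseline and follow-up values below are hypothetical, chosen only so that the relative changes match the percentages quoted above; the study’s own figures come from its published analysis, not from this arithmetic.

```python
# Back-of-envelope illustration of the matched-comparison logic described above.
# The percentage figures are the ones quoted in this post; the study's own
# estimates come from its published analysis, not from this simple arithmetic.

def relative_change(baseline, followup):
    """Relative change between two rates, as a fraction (negative = decline)."""
    return (followup - baseline) / baseline

# Hypothetical baseline/follow-up rates, chosen only so the relative changes
# match the figures cited in the text (-24% in MVP villages, +6% in controls).
mvp_baseline, mvp_followup = 100.0, 76.0
control_baseline, control_followup = 100.0, 106.0

mvp_change = relative_change(mvp_baseline, mvp_followup)
control_change = relative_change(control_baseline, control_followup)

# The headline comparison is the gap between the two trajectories.
print(f"MVP villages:     {mvp_change:+.0%}")
print(f"Control villages: {control_change:+.0%}")
print(f"Difference:       {(mvp_change - control_change):+.0%} (percentage points)")
```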

Some researchers outside the MVP community have gone so far as to conduct studies of their own and found evidence that MVP is taking credit for improved outcomes that may well have occurred without the $120 per capita per year interventions. Others found that the claimed $120 per head is far below the actual amount once indirect costs are included. The Lancet, anticipating public response, simultaneously published a generously diplomatic commentary by two researchers, Grace Malenga and Malcolm Molyneux, who note that the comparison data are not as strong as they could be because the study design followed, rather than preceded, the launch of the intervention. In addition, no evidence yet exists as to whether the methodology can be replicated or scaled:

“Allocation to intervention or control was not randomised initially and prospectively […] What is certain is that the project will not have been a success if it merely improves the statistics in its target villages, any more than a clinical trial is useful if its benefits are confined to the trial participants.”

Aside from the development outcomes, which are already being hotly debated, this study reveals the positive impact of social media on policy makers in development. The public discourse that critics initiated and the media pursued resulted in a shift in MVP policy toward increased transparency and improved project monitoring design. When the project began, in “Year 0,” there were no comparison villages. Demographic and Health Surveys (DHS) are used in this paper to provide some retrospective data on the comparison villages, which were eventually chosen in Year 3 after a virtual outcry in response to conflicting studies and back-and-forth blog posts and their comments. This is a result that almost assuredly would not have come about had the conversation taken place only in academic journals. In 2007, Paul Collier wrote in The Bottom Billion that there are “the buzz and the biz”: the people who talk about development in high-profile circles and the people who do development. This study is evidence that the distinction no longer holds.

Three years in, perhaps the most troubling finding in the article is that the successes are slim. The Millennium Villages approach represents a utopian dream of development: to eradicate extreme poverty and the devastating health impacts that extreme poverty entails. The villages’ conditions over those first three years were shaped under the direct guidance and oversight of some of the world’s most respected experts in each field. The approach is an experiment of unprecedented size and potential: if we address everything at once, treating the community as a whole instead of in bits and pieces, intervening in individual health and wellbeing as well as in economic structures, can we make it better? Not by much. While a 25% reduction in child mortality is a statistical victory, in absolute terms child mortality in the villages fell from 116 deaths per 1,000 live births to 92. Compare this with the global average of 57, or rates of 7.5 in the US and 5.4 in the UK.
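
For concreteness, here is a minimal sketch in Python putting those absolute figures side by side. The rates are the ones quoted above, in deaths per 1,000 live births; the percentage reductions cited in the text come from the study itself and may be computed differently from the raw ratio of these two numbers.

```python
# Side-by-side look at the absolute mortality figures quoted above
# (deaths per 1,000 live births, as cited in the text).

village_before = 116.0   # MVP villages at baseline
village_after = 92.0     # MVP villages after three years
global_avg = 57.0
us_rate = 7.5
uk_rate = 5.4

absolute_drop = village_before - village_after
relative_drop = absolute_drop / village_before  # raw ratio, not the study's own estimate

print(f"Absolute drop: {absolute_drop:.0f} deaths per 1,000 live births")
print(f"Relative drop: {relative_drop:.0%}")
print(f"Still {village_after / global_avg:.1f}x the global average, "
      f"{village_after / us_rate:.0f}x the US rate, "
      f"{village_after / uk_rate:.0f}x the UK rate")
```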

Given the world’s very best effort to fix the crisis of child survival, poverty still killed a horrendous number of children in the intervention villages. The results suggest that international development researchers still don’t know how to stop child mortality, and that it is time to work harder at innovating and evaluating new solutions rather than celebrating a modest result.

UPDATE: Kyu Lee, Assistant Director of Communications and Marketing for the Earth Institute, writes:

The article states: “Critics of the MVP approach have largely focused on the absence of randomized studies in its monitoring and evaluation. Given the high-profile media coverage of claimed successes and celebrity involvement over the past five years, some argue the MVP has a responsibility to show more evidence to support its methods, homing in particularly on the need for a randomized study with comparison villages. In response to these critics, MVP included matched communities as controls in this study.”

It also states: “which were eventually chosen in Year 3 after a virtual outcry in response to conflicting studies and back-and-forth blog posts and their comments.”

Both the critiques and the randomized studies she cites date from Oct 2010. The Year 3 comparison villages were established in 2009-2010 (approximately three years after the project began in 2006; there is some variation based on sites) and were conceived well before these studies conducted by Clemens and Demombynes.

Annie replies:

Lee pointed out that the link under the words “back-and-forth blog posts and their comments” directs to discourse between MVP and the public in 2011. His point is well taken, as my purpose in linking to the public-arena discussions was to provide a timeline of sorts. The villages were chosen in 2009, so a more appropriate interactive discourse to link to is this 2007 paper by EI’s Pedro Sanchez et al., which explains their position that MVP does not use comparison villages because they consider them unethical; and this transcript of a public discussion at The Center for Global Development, in which audience members engaged Jeff Sachs about their practice of not using comparison villages to measure the effectiveness of MVP’s interventions.