
Where is Everyone? A note on the methodology of MSF's report

The focus of the Where is Everyone? study was to identify common problems in emergency response, and to explore what the causes of these problems might be.

Because the research was focused on questions of “how” and “why” (rather than “how many”), we chose a qualitative research methodology, based on the existing MSF guideline[1].

More specifically, we sought to construct a narrative synthesis from our sources, based on existing guidelines for narrative synthesis[2]. The use of case studies to draw out common themes is widespread in humanitarian literature[3].

The authors believe that this was the correct methodology to use, considering the subject matter, and believe it to be robust.


Stage One: Literature/data review

In 2012, we prepared a paper for the senior leadership in MSF: Emergency response capacity in the humanitarian system.

The starting point was the criticisms made by MSF and by many others (including the reviewers of the official UN-system Real Time Evaluations) of the responses to the 2010 Haiti earthquake, the 2010 Pakistan floods and the 2011 Somalia famine.

The question was raised: how could this come about?

  • We first reviewed existing data on the humanitarian system and on MSF, finding very high rates of growth over the previous decade.
  • We then conducted an extensive (100+ papers) literature review of what might explain these difficulties in emergency responses.
  • We also interviewed ten prominent researchers and representatives from other INGOs.
  • From these sources, we identified a wide range of themes (e.g. “growth in complexity and levels of bureaucracy”, “changes in approach to watsan”, etc.), and grouped them according to three categories (“external”, “structural” and “choice”).

The paper was then shared across the MSF movement globally at every level - executive and associative - and people's feedback and criticisms were incorporated. It was also presented in summary form externally to the SCHR [Steering Committee for Humanitarian Response].

One particular criticism made in internal meetings was that the level of analysis was too global, which made it difficult to judge which of the identified themes were more important than others.

Stage Two: Preparation of case studies

Based on suggestions made during the Stage One review process, it was decided that the next stage of the analysis would address the “too-global” criticism by looking at a set of three case studies.

The concept was that the use of case studies would allow us to take into account greater contextual specificity and detail, and therefore to judge better what the constraints were.

  • The emergencies were chosen on a “first-cab-off-the-rank” basis, i.e. whichever emergencies were occurring at the time. In order, these were Maban [South Sudan], North Kivu [DRC] and Jordan. In each case, the review period was approximately 9-12 months, depending on the date of the visit.
  • Given all three emergencies were related to displacement and conflict, it was at this point decided to confine our final work to only that type of emergency.
  • TORs [Terms of Reference] were prepared for each visit and were shared with the relevant missions and desks, and feedback incorporated.
  • Before each case study, an extensive review was conducted of MSF (sitreps [Situation Reports], etc.) and external (OCHA bulletins, INGO press releases, etc.) sources. In each case, 50-100 documents were reviewed.
  • In each case study, we visited MSF project sites in the country and spoke to MSF coordination and field teams about MSF's own work and their view of the overall response.
  • In each case study, we conducted a very wide range of key informant interviews with government authorities, UN agencies, INGOs, MSF and local community representatives (in Maban: 42 key informants were interviewed; in North Kivu: 57; and in Jordan: 37). Interviews were typed up, to make it easier to identify common themes.
  • In all cases, the interviews were conducted according to the MSF guideline book. Specifically, interviews were semi-structured, using open questioning, designed to bring out the interviewees’ own insights.
  • For each case study, the authors then reviewed the interviews and identified common themes – e.g. constraints which were repeatedly mentioned by key informants and the documentary record, such as watsan [Water and Sanitation] technical capacity in the Maban case, or assistance to the urban refugee caseload in Jordan, etc.
  • Each case study was then worked up into a long version (20+ pages), plus a 2-page executive summary. This was shared with staff at headquarters and in the field for comment, and their feedback and criticisms incorporated. In several cases, presentations were also made to relevant MSF platforms.
  • The Maban and Jordan case studies also went through an additional review stage, as they were shortened and published in Humanitarian Exchange magazine. Extensive comments were received from the editors and criticisms and questions addressed.

Stage Two-and-a-half: Systematic review of evaluations

In parallel with the case studies, we also undertook a systematic review of evaluations of emergency operations conducted over the five-year period 2008-12.

  • Various external evaluation databases (e.g. ALNAP, IFRC, IASC, etc.) and MSF databases (e.g. Vienna Evaluation Unit, MSF intranet [Tukul] etc.) were searched.
  • Each evaluation was read through, and its principal conclusions were summarised in a table.
  • Sixty-four evaluations were found which matched the initial search criteria. Once the case study review was limited to displacement emergencies only, evaluations relating to other types of emergencies (e.g. natural disasters) were excluded.
  • A summary paper was then written, identifying common themes.

Stage Three: Synthesis and conclusions

Once the case studies were completed, we conducted a synthesis of the findings:

  • Firstly, we used the long-established OECD-DAC criteria for evaluations (relevance/appropriateness, coverage, effectiveness and efficiency), as well as a criterion for impact[4], and constructed a detailed table scoring each emergency against these criteria.
  • Secondly, we used the themes we had identified in Stage One and constructed a table, in which we checked for the presence or absence of each of those themes in each case study.
  • We then wrote a Preliminary Conclusions paper, synthesising the findings of the two tables in written form. This formed the basis of our final paper to the MSF General Directors' platform, presented in December 2013 and discussed again in February 2014. It was also discussed in many other forums (e.g. in October 2013 it was presented to operational managers in Amsterdam and Brussels, at the internal 'Reflections Centres' meeting, at the Berlin Humanitarian Congress, etc.).
  • Comments and feedback were solicited from ODI's Humanitarian Policy Group at various stages.
  • In February 2014, the MSF General Directors took the decision to publish the report under the authors' names, with a foreword by the International President. The final report, Where is Everyone?, was prepared, based on the Preliminary Conclusions paper plus the executive summaries of each of the long-form case studies. Given that the public report was intended for a general (non-academic) audience, the focus was on accessibility, without extensive description of the methodology used.

[2] For example, this one from the University of York:

[3] For example, this recent report produced by UK-based INGOs, with a very similar methodology:

[4] Impact is very difficult to assess in humanitarian emergencies, as we explicitly footnote in the report. However, we did want to incorporate whatever evidence on this we could, especially based on mortality estimates. We are very circumspect in the report about the quality of data on this, and in our conclusions.