There’s no place like home? Assessing how the interview method affects PPI results

Sharada Ramanathan
• 10/05/15
• Posted in Data Collection
• 2 Comments

In this age of ‘lean data’, organizations are seeking ways to reduce the burden and cost of data collection while still getting meaningful, actionable data about their clients. Practitioners value the PPI for its statistical rigor, relatively low implementation cost, simplicity, and transparency: the PPI is ‘lean data’. The national surveys upon which the PPI is based are conducted by enumerators in respondents’ homes, so the PPI is most accurate when it replicates that setting, with the survey conducted in person at the respondent’s home. However, administering the PPI can become expensive and time-consuming if regular visits to a client’s home are not already built into an organization’s business model.
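For context on how a PPI estimate is produced: each client answers a short, country-specific scorecard; the answers are scored, and the total score maps to a poverty likelihood via a published lookup table. An organization’s estimated poverty rate is the average of its clients’ likelihoods. The sketch below illustrates these mechanics in Python with entirely hypothetical score bands and likelihood values (not taken from any real PPI scorecard).

```python
# Minimal sketch of how a PPI estimate is computed. A real PPI scorecard has
# ten country-specific questions whose answers carry points; the total score
# (0-100) maps to a poverty likelihood via a published lookup table. The bands
# and likelihoods below are hypothetical, for illustration only.

# Hypothetical score-band -> poverty-likelihood lookup (not a real scorecard)
LIKELIHOOD_TABLE = [
    (0, 24, 0.81),    # scores 0-24   -> 81% likelihood of poverty
    (25, 49, 0.52),   # scores 25-49  -> 52%
    (50, 74, 0.21),   # scores 50-74  -> 21%
    (75, 100, 0.04),  # scores 75-100 ->  4%
]

def poverty_likelihood(score: int) -> float:
    """Map a PPI score to a poverty likelihood via the lookup table."""
    for low, high, likelihood in LIKELIHOOD_TABLE:
        if low <= score <= high:
            return likelihood
    raise ValueError(f"score out of range: {score}")

def estimated_poverty_rate(scores: list[int]) -> float:
    """Portfolio poverty rate = mean of clients' individual likelihoods."""
    return sum(poverty_likelihood(s) for s in scores) / len(scores)

if __name__ == "__main__":
    client_scores = [12, 35, 35, 48, 60, 71, 88]  # hypothetical client scores
    print(f"Estimated poverty rate: {estimated_poverty_rate(client_scores):.1%}")
```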

Given the trend to explore leaner ways to collect impact data, including PPI data, we wanted to test alternative interview methods that could potentially be less expensive, faster, and make it easier for organizations to administer the PPI. With funding from the Ford Foundation, Grameen Foundation worked with Mark Schreiner, developer of the PPI, to test the accuracy of the following alternative methods:

  • In-person/away-from-home: an enumerator separately interviews multiple participants in one visit at a central point away from their homes, for example at a microfinance loan group meeting. Individual interviews are conducted privately.
  • In-person/by-phone: an enumerator interviews participants by telephone.
  • Automated/by-phone: PPI questions are administered to clients via interactive voice response (IVR) or SMS/text messaging [1]

The study results show that the interview method does affect estimated poverty rates, primarily due to its effect on survey completion rates. Completion of an interview is linked both with the method and with participants’ poverty. By-phone methods had the lowest completion rates. This leads to lower poverty-rate estimates because less-poor households are more likely to have access to phones and participate in the interview. The completion rates in our study are shown in the table below.

Survey completion rates across interview methods

  Interview Method                   Completion Rate
  Benchmark (in-person/at-home)           84%
  In-person/away-from-home                91%
  In-person/by-phone                      60%
  Automated/by-phone (IVR)                12%
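To make the selection effect concrete, here is a minimal simulation sketch of the mechanism described above: when less-poor households are more likely to complete a by-phone interview, the poverty rate estimated from completers alone is biased downward. The true poverty rate and the completion probabilities below are illustrative assumptions, not figures from the study.

```python
# Simulates selection bias in survey completion: if non-poor households are
# more likely to complete an interview, the estimate from completers alone
# understates the true poverty rate. All parameters are assumptions.
import random

random.seed(42)
N = 10_000
TRUE_POVERTY_RATE = 0.50
P_COMPLETE_POOR = 0.45       # assumed completion probability, poor households
P_COMPLETE_NON_POOR = 0.75   # assumed completion probability, non-poor households

# Each household is True if poor, False otherwise
population = [random.random() < TRUE_POVERTY_RATE for _ in range(N)]

# Completion depends on poverty status, so completers skew less-poor
completers = [
    poor for poor in population
    if random.random() < (P_COMPLETE_POOR if poor else P_COMPLETE_NON_POOR)
]

print(f"True poverty rate:               {sum(population) / N:.1%}")
print(f"Completion rate:                 {len(completers) / N:.1%}")
print(f"Estimated rate among completers: {sum(completers) / len(completers):.1%}")
```

With these assumed probabilities, the estimate among completers comes out several percentage points below the true rate, which is the pattern the study observed for the by-phone methods.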

The study findings indicate that, holding all else constant, the in-person/away-from-home method can substitute for the in-person/at-home method as long as its completion rate, as defined in the study [2], is similar to that of the at-home method. The in-person/by-phone method was found to underestimate poverty rates and may not be an acceptable substitute. Automated methods such as IVR or SMS are unlikely to give results close to those of the at-home method (regardless of completion rates) and are not recommended.

It’s important to note that ‘In-person/at-home’ interviews are the ideal, officially recognized method for collecting PPI data. However, we recognize that it may not always be practical for social enterprises to use the method that gives the most accurate results. Sometimes trade-offs must be made. The goal of this study is therefore to help users make deliberate, transparent decisions about which interview method to opt for, after taking into account the associated risks and costs.

The full report can be downloaded here. We’re interested in hearing about your experiences collecting PPI or other impact data via alternative interview methods. Send your story to ppi@poverty-action.org.

[1] SMS was not tested here because Indian regulation would have required respondents to send one text message for each question and to bear all related fees, which was not feasible to expect of this survey population. However, it can reasonably be assumed that the SMS interview method would share many of IVR’s weaknesses.

[2] This study defines "non-completers" as households who are not found, not contacted, do not have mobile numbers, or do not answer their mobile phones.

Comments

I agree that the best place to interview clients is at their home. It gives the interviewer an opportunity to get a true picture of the interviewee's household. Furthermore, some of the PPI questions can be answered by observation, e.g. the roof of a house. On the other hand, for organisations that implement projects in very rural areas, it is difficult to conduct the PPI in clients' homes because interviewers have to travel long distances between clients' homes. In this case, the PPI can be administered during group meetings.
 
To add to the conversation, I would like to share our own experience in trying to implement the PPI in a different context, which we had previously shared with the Grameen team:

In 2012, Fundación Capital developed a tablet-based app designed to provide financial education and training at scale in order to support the demand side of financial inclusion. This was the first attempt at fully digitizing traditional capacity-building efforts, and given our work with national governments and the opportunities for scale and sustainability, we wanted to design and explore as many innovative approaches as possible. In the pilot phase, we were interested in testing the limits of automation, and indeed received a lot of push-back from partners as to whether our approach would even work (as it had never been tried before). The LISTA app (more info on our website www.fundacioncapital.org) was designed in such a way that as long as the hardware reached a poor person (community leaders facilitated tablet distribution), users should be able to self-study the contents. This was an audacious assumption, given that most users had never previously even held a touch-screen device, had low or no literacy, and were being presented with incentive-free financial education (skipping ahead a few years, it turned out that these obstacles were minor and the innovation risk paid off).

In an effort to test this low-touch approach to the fullest, we looked to automate as much as possible, including tracking usage and user data. To that end, we designed a survey that would be integrated into the app, and decided to include the PPI in order to track whether we were actually reaching the poor. To explain how this works: once participants received the tablet and the community leader showed them how to turn it on and what buttons to press, users were able to view an introductory explanatory video (which encourages them to practice swipe, click, and type functions, breaking the tech barrier) and then register and complete the text-based short survey (which included the PPI). Over the course of 10 weeks, 1,270 users completed the registration process and filled out the survey; these were primarily low-income female recipients of cash transfers in Colombia.

What we found, however, was that the quantitative results from the PPI were not very clear, so we gathered qualitative feedback and can share the following findings:

(1) Lacking a trained enumerator to guide the PPI survey process, respondents were unable to ask clarification or follow-up questions (which is obvious). Beyond that, the literacy barrier was very strong, so given the extensive wording of the questions, it was a challenge for end-users to self-administer the PPI. We even attempted to change one or two words in the survey itself (to clarify or address the respondent directly), but the literacy challenge, inability to ask follow-ups, and complex language made it hard for users at the base of the pyramid to self-respond. Our survey is now image-based and we only ask whether they are affiliated with a government social program, and our partners are able to confirm this information as well. Given the focus on G2P clients, the PPI didn't provide us with nuanced data that would have served a further purpose.

(2) The survey actually generated some negative feedback, and in some cases led to participants discontinuing the training process because the questions generated distrust (given their similarity to government surveys assessing eligibility for social programs, there was the worry that the survey would be used to kick them out of the program). This situation was ameliorated by having field staff close by to clarify the intention of the survey, but by asking such 'personal' questions without the proper presentation or without building significant confidence with the respondent, we were faced with a challenge that is difficult for low-touch approaches to overcome.

(3) Mixing measurement and service delivery can be challenging, especially when implementing an evaluation of high academic rigor. We have found that surveys can often generate distrust or uncertainty within a community, yet they are extremely important to ensure that the investments we make are generating the positive social impact we aspire to. Finding ways to achieve this at a lower cost would really be useful, and we've found that a mix of data sources can be a potential avenue to complement monitoring and evaluation efforts.