Data accuracy and reliability: findings from the data verification
We compared recounts of data from the OTP cards, performed during our visits to facilities, with the data in the paper records held by the facilities for July 2017.
Figure 3 illustrates the findings from the data verification relating to admissions. It compares the numbers recorded on paper in the weekly tallies with the numbers of admissions we recounted from the OTP cards for the month of July 2017. Discrepancies were greater than 10% for half of the ten selected facilities, and greater than 30% in three facilities, with a maximum value of 49%.
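The discrepancy measure used here can be sketched as follows. The exact formula is not stated in the text, so expressing the difference relative to the OTP-card recount is an assumption, and the facility figures below are hypothetical:

```python
def discrepancy_pct(reported: int, recounted: int) -> float:
    """Discrepancy of the weekly-tally figure relative to the OTP-card
    recount, as a percentage of the recount (assumed formula)."""
    if recounted == 0:
        raise ValueError("recounted value must be non-zero")
    return 100 * (reported - recounted) / recounted

# Hypothetical facility: 58 admissions in the tallies vs 39 OTP cards
# recounted for July 2017 gives a discrepancy of about 49%.
print(round(discrepancy_pct(58, 39)))  # → 49
```

A signed measure of this kind also preserves the direction of the discrepancy, which matters later when assessing whether reported values tend to exceed recounted ones.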
CMAM exit data
Figure 4 shows findings from data verification relating to exits from treatment, where exits are defined, following the CMAM guidelines, as cured, defaulter, death, non-recovered (did not meet the discharge criteria for cure after 4 months in treatment) or transferred to inpatient care or another OTP. The figure compares numbers recorded in the weekly tallies with the number of exits recounted from the OTP cards for July 2017. Discrepancies were greater than 10% for eight of the nine visited facilities, and greater than 30% in three facilities, with a maximum value of 66%.
We also intended to recount exits by the same exit categories used on the tally forms (recovered, death, defaulter, non-recovered or transferred). However, this was not straightforward because of gaps in the data recorded in the “outcome” cell on the OTP cards. Hence, where outcome data were missing, we used relevant data from elsewhere on the card to identify the most likely exit category. We combined defaulters and deaths because, without a functioning system of follow-up after discharge, it is impossible to know whether a defaulted beneficiary is alive, and thus a ‘true’ defaulter, or has died. The verification suggested a considerably higher number of defaulters or deaths than were reported in the electronic databases. For July 2017, the reported number of “deaths or defaulters” was 29 children (6.5% of all exits reported through the CMAM system), compared to 72 children (22%) based on the recount of OTP cards.
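The fallback rule described above can be sketched as a short function. This is a minimal sketch: the function name and the merged-category label are our own, and the inference of a category from elsewhere on the card is assumed to happen upstream:

```python
from typing import Optional

MERGED = {"defaulter", "death"}

def exit_category(outcome: Optional[str], inferred: str) -> str:
    """Return the exit category for one recounted OTP card. Where the
    'outcome' cell is blank (None), fall back on the category inferred
    from elsewhere on the card; defaulters and deaths are merged because,
    without post-discharge follow-up, the two cannot be told apart."""
    cat = outcome if outcome is not None else inferred
    return "defaulter_or_death" if cat in MERGED else cat

# Card with a blank outcome cell, category inferred from attendance dates:
print(exit_category(None, "defaulter"))  # → defaulter_or_death
```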
Figure 5 shows findings from data verification relating to RUTF and compares numbers of RUTF cartons consumed recorded in the weekly tallies with the number of cartons recounted from the OTP cards for July 2017. Discrepancies were 10% or more for eight of the nine facilities visited, and greater than 30% in five facilities, with a maximum value of 52%.
Explanations for patterns identified
Where discrepancies existed for admissions, total exits and RUTF consumption, the reported values tended to be higher than the recounted values (see Figs. 3, 4 and 5). This suggests that admissions and exits were over-reported, that OTP cards were lost, or both. Both factors plausibly contribute, since our observations indicated no formal procedure for tallying or for storing OTP cards, and no dedicated storage space for the cards. The size of the discrepancies at some facilities suggests that loss of cards is a major explanatory factor, at least for those facilities. In addition, given the evidence for misappropriation of RUTF elsewhere in northern Nigeria, deliberate discarding of records at the facilities must be mentioned as a possible explanation.
With respect to the underestimates of defaulting and death rates, there are several contributory factors. During facility visits we observed inconsistent observance of the protocol described in the national CMAM guidelines: the child should be discharged as a defaulter on their third consecutive absence. Our data verification revealed that absences are often not noted on the OTP cards of absentee children, and even after three absences their cards are commonly kept “active” and stored together with those of children still attending. For example, Fig. 6 shows an OTP card for a child who, between 26/5 and 28/7, should have been recorded as absent and then discharged. Also, the outcome cell on OTP cards is often left blank, as on the card in Fig. 6, so that during recounts it is not clear in which week, and under which exit category, the staff had noted the child’s exit in the tally. Interviews revealed that CMAM staff may rely on information from clients or, occasionally, community volunteers to identify defaulters, and that other children who do not return after a long absence are classified as “non-recovered”.
Observations and interviews indicate two main underlying explanations for the inconsistencies between CMAM operations and national guidelines, which affect accuracy and reliability of the CMAM data:
Firstly, the CMAM programme (like the wider government health service) operates under severe resource constraints. These result in inadequate provision of forms, electricity and storage space, as well as insufficient human resources, high workloads and capacity gaps. An NFP commented: “It would be helpful to have a computer because then I wouldn’t need to drive to the state office to submit my data – other units at LGA level have them”. Expertise is frequently lost – a UNICEF officer commented: “CMAM-experienced staff are often transferred away to non-CMAM sites and this impacts on quality of service. So we request that if staff must be transferred, they be moved to another CMAM site”. All these resource constraints inevitably affect the quality of services and of monitoring data.
Secondly, although CMAM trainings were reported to include sessions on how to complete forms and tally data, there is a lack of clear printed guidelines, training materials and protocols on CMAM data capture for health-workers. The national CMAM guidelines were not available in any of the facilities visited and, in any case, do not include details on data tallying, protocols for storing paper records, or the new SMS system. We also observed that various versions of the data-recording forms are in use.
Data accuracy and reliability: findings from secondary data analysis of LGA and state level data
We compared the weekly tallies collected at health facilities (the source data for both the SMS- and paper-based datasets, see Fig. 1) with data records at the two next levels of the paper-based monitoring system as follows: (a) data from the paper records held at LGA level, where tallies are consolidated by the NFP for all facilities in the LGA; and (b) the electronic datafile generated at state-level from the paper records submitted by the NFPs. We did so in order to assess whether discrepancies are introduced into the paper-based dataset when data are aggregated in LGA offices, and/or when they are entered into the spreadsheets by personnel at state level.
Some discrepancies between the weekly paper tallies and the LGA reports were noted for admissions (−35% and 12%) and exits (12% and 2%) for two facilities. The values entered at LGA level carried over into the paper-based datafile, indicating that data transfer between facility and LGA level is a potential source of inaccuracy in the electronic data.
The analysis found very few errors in data entry at state level: the state-level electronic datafile matched the LGA paper-based data for exits and admissions in all but one instance.
Explanations for patterns identified
The discrepancies observed between the LGA reports and weekly tallies could be due to introduction of errors when the NFP copied data between paper forms. Alternatively, errors may have been introduced when the facility-in-charge copied data from the paper form stored at the facility (viewed by the study team) and the paper form they later submitted to the NFP. Both factors could plausibly contribute to the discrepancies, since our observations indicated a lack of formal quality assurance procedure for data transfer between levels.
Data accuracy and reliability: findings from secondary data analysis of federal level data
We compared the electronic datafiles from the paper-based and SMS systems for all records between January and July 2017 (n=299 facility-months). The datasets should be consistent since they are both derived from the same paper records (OTP cards and weekly tallies) held at the facilities. For those 46 facilities for which information was available, Table 2 shows the proportion of records where the data on CMAM outcomes were different in the two data systems. The differences between the data values do not have a consistent direction.
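The record-by-record comparison underlying Table 2 can be sketched as below. The data layout (records keyed by facility-month) and variable names are assumptions for illustration, not the study's actual file structure:

```python
def mismatches(paper, sms, variables=("admissions", "exits")):
    """List every (facility-month, variable) where the paper-based and
    SMS datafiles disagree. Both arguments map (facility, month) keys to
    {variable: value} records; this layout is assumed for illustration."""
    diffs = []
    for key in sorted(paper.keys() & sms.keys()):  # facility-months in both
        for var in variables:
            if paper[key].get(var) != sms[key].get(var):
                diffs.append((key, var, paper[key].get(var), sms[key].get(var)))
    return diffs

# Toy facility-month where only the exits value differs between systems:
paper = {("F01", "2017-07"): {"admissions": 40, "exits": 30}}
sms = {("F01", "2017-07"): {"admissions": 40, "exits": 27}}
print(mismatches(paper, sms))  # one mismatch, on "exits"
```

Because the comparison keeps both values, the sign of each difference is preserved, which is how one can see that the differences lack a consistent direction.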
Explanations for patterns identified
It is somewhat surprising that there is not a closer match between the paper-based and SMS datasets, given that the source data (paper tallies) are the same. A potential source of discrepancies is errors introduced during data entry to mobile phones. Although no such errors were observed during the study, the study team visited only nine facilities, and the presence of observers may have positively affected conduct of the data entry to mobile phones on those few occasions, while the data in Table 2 are derived from 46 facilities over 7 months. There is no formal quality assurance procedure for the process of sending weekly data by SMS, so errors in the SMS dataset are plausibly introduced at this stage. Also, as noted above, errors in the paper dataset are introduced when data from the facilities are aggregated at LGA level.
Findings pertaining to completeness and timeliness from secondary data analysis
For the paper-based system, all facilities reported on a monthly basis throughout the reference period, so reporting was complete in terms of months with valid observations. In contrast, completeness of the SMS dataset was deficient with respect to weeks with valid observations. We found that, on average, 7% of weekly reports were missing (with a maximum of 37% missing reports from one facility as shown in Fig. 7).
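The completeness figures quoted above follow from a simple proportion; a sketch, with hypothetical counts chosen to match the worst-case facility over the 30-week reference period:

```python
def pct_missing(weeks_expected: int, weeks_reported: int) -> float:
    """Share of expected weekly SMS reports never received, in percent."""
    return 100 * (weeks_expected - weeks_reported) / weeks_expected

# Hypothetical worst-case facility: 19 of 30 expected weekly reports
# received over January-July gives roughly 37% missing.
print(round(pct_missing(30, 19)))  # → 37
```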
Both the paper and SMS data had relatively good completeness with respect to missing values in specific variables. For example, for the SMS dataset, key reporting variables (admissions, exits) were missing for only 0.1% of weekly reports, and RUTF stock reported at the beginning of the week was missing for 5.9% of weekly reports. For the paper dataset, the numbers of children in treatment at beginning and end of month were missing for 11 and 12% of monthly reports respectively.
The timeliness of the SMS data was weak. We assessed this using the indicators produced within the UNICEF dashboard, which allows facilities to report until the Monday following the OTP day. Figure 8 shows that, over weeks 1 to 30 (January to July 2017), reports for slightly more than half of the weeks arrived late.
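The dashboard's cut-off rule (reporting allowed until the Monday following the OTP day) can be sketched as follows. The interpretation that an OTP day falling on a Monday gets the next Monday as its deadline is our assumption:

```python
from datetime import date, timedelta

def deadline(otp_day: date) -> date:
    """Monday following the OTP day - the dashboard's cut-off for an
    on-time weekly SMS report (rule as described; sketch only)."""
    days_ahead = (7 - otp_day.weekday()) % 7  # weekday() == 0 is Monday
    if days_ahead == 0:  # OTP day is itself a Monday: assume next Monday
        days_ahead = 7
    return otp_day + timedelta(days=days_ahead)

def is_late(otp_day: date, received: date) -> bool:
    """A report counts as late if received after the cut-off Monday."""
    return received > deadline(otp_day)

# OTP clinic on Monday 3 July 2017: reports count as on time until
# Monday 10 July, so one received on Friday 7 July is still on time.
print(is_late(date(2017, 7, 3), date(2017, 7, 7)))  # → False
```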
Explanations for patterns identified
The missing values and inconsistencies noted during secondary data analysis appeared to be affected by the absence of a formal process for verifying the numbers of children in the programme against child-level records. A key problem was that staff carry over the number of “children in treatment” from the end of the previous week to the start of the new week, rather than recounting the OTP cards. The same carry-over practice applies to the paper dataset (data are carried over from the end of the previous month). The variable “number of children in treatment” is used to calculate requirements for RUTF, so there is a potential incentive to maintain inflated values in the dataset.
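The two ways of establishing the week's opening caseload can be contrasted in a short sketch (function and field names are hypothetical):

```python
def carried_over_start(prev_week_end: int) -> int:
    """Current practice: the week's opening 'children in treatment' is
    simply the previous week's closing figure, so any error (e.g. an
    undischarged defaulter) is carried forward indefinitely."""
    return prev_week_end

def recounted_start(active_otp_cards: list) -> int:
    """Verification alternative: count the OTP cards actually stored as
    active at the facility (card representation hypothetical)."""
    return len(active_otp_cards)

# If three defaulters were never discharged on their cards, the carried-over
# figure stays three children higher than a recount of active cards:
print(carried_over_start(45))          # → 45
print(recounted_start(["card"] * 42))  # → 42
```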
With respect to the relatively poor timeliness of weekly report submission and the incompleteness of the resultant SMS dataset revealed by our secondary data analysis, the coverage and reliability of the mobile phone network was perceived to be the major challenge. For example, one recently appointed CMAM-in-charge (whose CMAM clinic takes place on Mondays, and whose friend X is also a CMAM-in-charge) said:
“I’m expected to send the SMS report the same day as the CMAM clinic, but can’t always do this because of the phone network. Last week I couldn’t send it until Friday, and so contacted X who said it was a problem for everyone, and that I should just keep trying.”
While most of Nigeria has good network coverage, there are pockets where coverage is lacking, and also network quality varies across time and space. Another factor may be low motivation to submit the texts. While study interviews revealed health-workers’ high motivation to submit their data on time, and frustration when this was not possible, these sentiments may not be ubiquitous. The paper-based CMAM monitoring is part of the formal governmental reporting system, while the parallel SMS-based monitoring may be considered less important as it is not owned by the health system.