Strengthening IATI data quality: towards a comprehensive user feedback system

Hi all – cross-posting here a blog just published on the IATI website about our research (with @Claudia_Schwegmann) into a user feedback mechanism for IATI data.


Over the last ten years, a wide range of organisations have invested considerable efforts in publishing IATI data. However, the consensus is that there is still a need to increase and diversify the usage of IATI data. The new IATI Strategic Plan (2020-2025) commits to “develop feedback mechanisms so users can alert publishers to issues with their data”.

Data Quality Feedback Mechanism project

UNDP, on behalf of the IATI Data Use Task Force, commissioned Catalpa to undertake research “to explore the opportunities for increasing the communication between data users and publishers, with the primary goal of improving the quality of IATI data”.

As agreed, the research will focus particularly on partner countries, but our aim is for the recommendations to be applicable to a broad set of IATI data users. Claudia Schwegmann is the lead author on this work, supported by myself and Anders Hofstee (read UNDP’s Request For Proposal in full).

While a range of tools and processes already exist to measure data quality, there are currently limited avenues for individual users to provide feedback to publishers.

Last year UNDP conducted an IATI Data Use Survey, which found that 58% of publishers do not have any mechanism in place for “seeking or receiving feedback from users” of their published data - though 97% said they would be interested in receiving feedback on their data.

The survey also showed that for the remaining 42% of publishers, feedback is received principally via email. Email clearly has advantages as an established communications channel, but it also has its limitations. Because feedback is not captured in a structured and systematic way across different users and different publishers, it is hard to curate and coordinate different users’ inputs, and there is no systematic way of tracking when publishers have resolved particular issues.
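As a purely illustrative sketch of what “structured and systematic” capture might mean, a feedback item could carry enough metadata to be curated, grouped and tracked to resolution. None of these field names come from IATI or from our research findings – they are assumptions for the sake of the example:

```python
# Hypothetical shape for a single structured feedback item; every field name
# here is an illustrative assumption, not part of the IATI standard.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FeedbackItem:
    publisher: str                      # e.g. the publisher's organisation identifier
    activity_id: str                    # the iati-identifier the feedback refers to
    description: str                    # what the user observed
    reported_by: str                    # who raised it, so follow-up questions can be asked
    reported_on: date = field(default_factory=date.today)
    status: str = "open"                # e.g. open / acknowledged / resolved

# Because each item is individually addressable, duplicates can be merged,
# similar issues grouped across publishers, and resolution tracked over time -
# the things that are hard to do with feedback scattered across email inboxes.
```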

Over the coming weeks, we’ll be reaching out to a wide range of IATI stakeholders to understand how feedback on data could best be channeled back to publishers and made accessible to other data users, in a way that is most likely to lead to improvements in data quality. We’ll be sharing more information throughout this process, and you’re welcome to get in touch with us if you’d like to hear more about our work. Contact Claudia and myself by emailing cl.schwegmann@gmail.com and mark@brough.io (or commenting below!)


Thanks. Sounds interesting

In the early years of IATI I recall we put together a simple public system for logging and tracking such feedback. It had an elaborate URL, something like “data dot tickets dot iatistandard dot org” (it was taken down by the @IATI-techteam at some point, I don’t know when; there’s a tiny bit of it on the Internet Archive).

Whilst this was not the most compelling of user experiences, it did help surface some of the issues around feedback on published open data. I recall @bill_anderson was rightly very keen to ensure that feedback was as precise as possible, and of a technical nature that could be acted upon (eg: “the transactionType code is badly formatted in XYZ activity”). That way, someone charged with keeping their IATI publication up to standard could act to fix the issue, or at least explain the underlying cause.
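To make that kind of precise, actionable feedback concrete, here is a minimal sketch of such a check. This is not the IATI Validator or the old tickets system – the file path, function name and the hard-coded slice of the TransactionType codelist are all illustrative assumptions:

```python
# Minimal sketch: flag activities whose transaction-type code is not on the
# TransactionType codelist. Codelist values are hard-coded for illustration.
from lxml import etree

VALID_TRANSACTION_TYPES = {str(i) for i in range(1, 14)}  # illustrative subset

def check_transaction_types(xml_path):
    """Yield human-readable feedback items for malformed transaction-type codes."""
    tree = etree.parse(xml_path)
    for activity in tree.findall("iati-activity"):
        iati_id = activity.findtext("iati-identifier", default="(no identifier)")
        for tx in activity.findall("transaction"):
            tx_type = tx.find("transaction-type")
            code = tx_type.get("code") if tx_type is not None else None
            if code not in VALID_TRANSACTION_TYPES:
                yield f"{iati_id}: transaction-type code {code!r} is not on the TransactionType codelist"

# Example usage against a hypothetical publisher file:
# for issue in check_transaction_types("publisher-activities.xml"):
#     print(issue)
```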

In modern-day terms, could or should the new IATI validator be equipped to pinpoint and parcel up specific technical feedback for publishers? What more is then needed, in terms of feedback on data versus the schema, codelists and rulesets? Surely all those issues are now known and publicly available? Do we then need publishers to respond as to when and how they may fix things, where possible?

In that early prototype, we were also mindful of how feedback could easily creep into, or be interpreted as, criticism of aid policy or activity, or move towards requests for more information and explanation (eg: “who is the ‘redacted organisation’ you disbursed $Y to?”). In broad terms, this may also be a data quality issue, and a user may well want to ask that question too. Where does that then position the publisher?

I don’t think this happened, but it was part of our planning. It’s inevitable that we move into power dynamics between supply and demand. It’d be good to know what appetite there is for discussing “failings” in published data, and potential fixes. What expectations do we then bring to this conversation? Who gets listened to? Who doesn’t? Let’s be clear that of the thousand conversations that could bloom from this, maybe only a few might flower… (or something like that).

We should also be mindful that people are time-pressed and have other priorities. People are often just trying to use IATI data to get on with their work. Quite rightly, they may not always care too much about the shortcomings of a specific publisher, or the minutiae of the IATI standard. They might find that a tool is broken and post that to Discuss or even GitHub, as we have seen before.

There’s a serious resource implication in trying to gather, respond to, direct and resolve these support requests and feedback. It cannot, in my opinion, be resolved by technical platforms alone. Perhaps we can learn from the support tickets that are posted privately to the @IATI-techteam? How long do they take to resolve? How similar are they? Can we mint FAQs to help others?

Whilst this research is vital, I would like to understand how the IATI Board will resource and mobilise the way forward. Otherwise we risk having another nice experiment that will be quietly retired, again…


On the topic of Data Tickets being closed: the IATI Data Tickets website was closed, and the follow-up site (data.tickets) for Data Quality Issues was then closed too… presumably there was some note on these decisions that might help define the way forward?