Indicator Coverage Methodology - consultation space

Coverage

When assessing the overall quality of a publisher’s data it is necessary to judge both breadth and depth. The Coverage dimension assessed in this table measures what proportion of an organisation’s total output is published through IATI. This is done by comparing a publisher’s total ‘IATI’ spend for a given year with an external source, producing a Spend Ratio. As IATI is a multi-stakeholder initiative representing a range of organisation types, there is no single source for this comparison: the OECD DAC provides comparison for DAC-reporting bilaterals; the UN System Chief Executives Board for Coordination reports UN agency expenditures; INGOs and foundations publish annual reports, usually with audited financial statements.

As it is impossible to create an exact methodology across such disparate comparisons, the percentage Coverage applied to the overall score uses a broader framework. The Spend Ratio will be adjusted using the following scale of coverage: Excellent: 80% or over (adjustment factor 100%); Good: 60-80% (adjustment factor 80%); Fair: 40-60% (adjustment factor 60%); and Poor: less than 40% (adjustment factor 40%).
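
For illustration only, a minimal sketch of how the Spend Ratio might be mapped to the adjustment factor under this scale (the function and variable names are ours, not part of the methodology):

    def adjustment_factor(spend_ratio):
        """Map a Spend Ratio (0.0-1.0) to the coverage adjustment factor."""
        if spend_ratio >= 0.8:    # Excellent: 80% or over
            return 1.0
        elif spend_ratio >= 0.6:  # Good: 60-80%
            return 0.8
        elif spend_ratio >= 0.4:  # Fair: 40-60%
            return 0.6
        else:                     # Poor: less than 40%
            return 0.4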

The IATI technical team is still working on pulling these disparate sources together into a single table for use here. For the moment the table acts as a placeholder, with all publishers assessed at 100%.


For Belgium, as indeed for many other bilateral donors, it would be problematic to calculate the coverage ratio by relating the IATI-published spend to the OECD-DAC’s total ODA amount: only two thirds of Belgian ODA is charged to budgets overseen by the federal DG Development (MinFA), with the remaining third consisting of

  • aid financed by regional/communal authorities, for which we generally don’t have details: no commitments, geographical precision, dates (start/end), forward-looking information, long descriptions, documents, etc.
  • donor-country expenses, such as the cost (first 12 months) of accommodating refugees (from DAC countries), administrative costs, imputed student costs, etc., for which the details mentioned above simply don’t exist at all.
    That’s why we don’t incorporate these activities/flows/disbursements in our IATI data, as we also mentioned explicitly in our “common standard” implementation schedule.

So, in order to calculate a coverage ratio, wouldn’t it be more correct/more revealing to relate our IATI “spend” to the OECD-DAC’s ODA disbursed on the budgets of the main Belgian agency, which is identifiable separately in the DAC CRS reporting (agency code = 10)?

Hi Toon

You are right. I have added this to our running list of issues to deal with here

Thanks

Bill

It seems a bit modest to consider >80% as ‘Excellent’. Shouldn’t we aspire to do better than that, e.g. by setting the threshold at 95%?

We are certainly aiming for a clean-cut 100%

And, Toon, we are pretty much in the same situation, as usual, but find it feasible to include the expenditures managed outside MOFA in our IATI data when we receive the data for CRS++ reporting. Of course, this only allows us to report ex-post once a year, and in less detail, but it should not affect the ambition of having 100% of our DAC-reported disbursement amounts presented in IATI format as well.

At the SC meeting there was further discussion on how to assess coverage. I seem to recall we had identified a potential solution, which was to be circulated along with some of the other changes discussed at the SC. Do we now have a final methodology for this?

Apologies Yohanna.

Yes, we have a proposal that needs finalising and circulating (here and to all SC members). I’ll try to get this out by the end of tomorrow at the latest.

Here is the proposal that is being circulated to SC members.

Coverage

The proposal is for the existing table to be replaced with the following (where the current year is 2016) - [apologies for having to branch out to a Gdoc but I can’t get a table into Discuss]

• The Spend Ratio is calculated by finding the best result from comparing IATI spend in the previous two years against a reputable third-party source or an official publication of the publisher.
• Coverage is then determined by raising the Spend Ratio to the top of its quintile (i.e. a ratio of over 80% will result in 100% coverage); a rough sketch of this step follows below.
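
For illustration, a minimal sketch of the quintile step, with names of our own choosing; how exact boundary values (e.g. a ratio of exactly 80%) are treated is our assumption, not part of the proposal:

    import math

    def coverage(spend_ratio):
        # Raise the ratio to the top of its quintile: 0.45 -> 0.6, 0.81 -> 1.0.
        # Ratios above 1.0 are capped at 100% coverage.
        return min(math.ceil(spend_ratio * 5) / 5, 1.0)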

Comprehensiveness

Modify the Value-added table by:

  • Removing:
      • Activity Website
      • Conditions Attached
  • Adding:
      • Aid Type
      • Recipient Language
        (This will compare the language of the title and description against the recipient country, irrespective of the publisher’s ‘home’ language. Only activities targeted at a single recipient country will be assessed; a rough sketch of such a check follows below.)
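
As a rough illustration of how such a check might work (the country-to-language mapping is hypothetical and the langdetect package is used purely as an example; this is not the agreed implementation):

    from langdetect import detect  # third-party language detection, for illustration

    # Hypothetical mapping from recipient country code to expected language(s).
    EXPECTED_LANGUAGES = {"SN": {"fr"}, "MZ": {"pt"}, "KE": {"en", "sw"}}

    def recipient_language_ok(activity):
        """Assess only activities targeted at a single recipient country."""
        countries = activity.get("recipient_countries", [])
        if len(countries) != 1:
            return None  # not assessed
        expected = EXPECTED_LANGUAGES.get(countries[0])
        text = " ".join(filter(None, [activity.get("title"), activity.get("description")]))
        if not expected or not text.strip():
            return None
        return detect(text) in expected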

Fairness

There is ongoing work to ensure that all the varying and legitimate business models employed by a wide range of publishers are fairly treated and that exceptions and exclusions are properly documented in the methodology.

Outstanding Issues

  • Forward-looking
    Not all activities can be expected to contain forward-looking budgets. We do not yet have robust machine logic that handles all legitimate exceptions fairly.
  • IATI Version
    There was consensus at the Steering Committee that an assessment should be made of the IATI version being used by publishers. A method of scoring this has not been agreed. This also requires a decision on how long after a new version is released it should be included in the assessment.

In the revised note on the GPEDC transparency indicator, which was circulated by the IATI Secretariat on January 15, it is proposed to remove ‘Conditions Attached’ and ‘Activity Website’ from the ‘Value Added’ list.

I disagree with the proposals because both links to conditions and links to activity websites represent linkages from the IATI standard to aspects of transparency that are difficult to capture in the XML format, and therefore actually add value…

Linking to documents that contain any conditions makes sense. In my recollection of the SC meeting in June, the full discussion of the issue of conditions entailed the following: Canada proposed to remove conditions from the score. The Secretariat (Bill) replied that he agreed if the proposal was supported by others. Period. The practical use of presenting conditions in XML is much more doubtful, because the conditions will often be highly contextual and users will therefore need to refer to the relevant document (often an agreement) in order to read the conditions in their proper context.

I don’t recall any discussion at the SC meeting in June in regard to dropping linking to activity website as part of the score.

On a technical note, I see that the ‘Spend ratio’ (column F) in the accompanying table is proposed to be the highest result of the following comparisons:

 A/C
 B/D
 B/C
 B/E

Is B (2015 IATI Spend) / C (2014 Reference Spend) correct, since this represents data for different years?

Yes, this is correct. It covers circumstances where comprehensive publishing to IATI took place in 2015 but no 2015 reference spend is yet available.
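
For anyone following along, a minimal sketch of this ‘highest result’ rule; the pairing of the columns and the names used here are ours, for illustration only:

    def spend_ratio(ratio_pairs):
        # ratio_pairs: (iati_spend, reference_spend) tuples for the comparisons
        # listed above (A/C, B/D, B/C, B/E); the highest available ratio wins.
        ratios = [iati / ref for iati, ref in ratio_pairs if ref]
        return max(ratios, default=0.0)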

Thanks for clarifying.

Regarding the coverage methodology, we’re still consulting internally but at first glance, the B/C ratio (comparing 2015 to 2014) is not a good idea, as aid levels can fluctuate quite a lot (e.g. spikes related to humanitarian crises, debt forgiveness etc).

For 2015 data (column D), we think we would be able to provide preliminary data by the end of February/early March. This would be the data provided to the DAC for the preliminary reports - while the DAC only publishes aggregate figures until the data is verified, we do collect the data by department and could provide it to the IATI Secretariat as preliminary data. It would still be subject to revisions before becoming the official 2015 data, but normally these revisions would be in a 5-10% range, so it would seem to be a reasonable basis (at least better than using 2014).

We expect all bilateral donors to be able to provide the data, but wanted to put it out for others to weigh in. Would this be feasible?

Bill, is there a space to discuss the forward-looking methodology? We have a couple of questions to (we hope) help us move forward.

Re: the B/C ratio. Say you begin publishing at the beginning of 2015. Without this ratio you won’t have any coverage until March 2016 if you are a DAC-reporting bilateral, and much later in the year depending on when your annual report (or other reference data) is published. Remember, in the interests of fairness we are looking for the best ratio that provides a broad guide to coverage, not an exact calculation (which is impossible). The aim of the coverage assessment is not to nitpick over small differences and spikes, but rather to penalise those who regularly publish only a small part of their portfolio.

As from March 2016 we definitely hope to be able to use preliminary 2015 DAC data for bilaterals, but this will only apply to a small (if important) group of publishers.

Re: forward-looking

Yohanna, yes please. (You might as well do it in this thread.) My note above was a conservative disclaimer saying that I don’t think we are going to cover all cases fairly. But the closer we can get to this the better.

Any solution (or improvement on the existing methodology) would be most welcome.

Re 2015 data for coverage: fully understand the need for the ratio, but comparing 2015 to 2014 doesn’t work. We’re trying to suggest a reasonable alternative. You won’t be able to use the DAC preliminary data for bilaterals, at least not what is published, since for many donors it will include data from more departments than those publishing to IATI. Hence our suggestion that bilaterals provide their disaggregated data.

We’re also hoping that since bilaterals are able to do this, other organisations will be able to as well. Unless they’re happy using 2014 as the reference, which is quite possible.

Regarding forward-looking:

As is often the case, we’re assessing the methodology by looking at our own data, which we know best. We see that only 25% of our (still operational) projects have a budget in 2018. It’s somewhat better in 2017, but still much lower than we’d expect.

One fairly obvious issue is that most projects involving multilateral organisations would be front-loaded: we’ll pay most if not all the money at the beginning of the project. So it’s normal that these projects would not have budgets in outer years. I think this is fairly standard when dealing with multis. In our case, filtering projects by the type of partner or type of collaboration would probably exclude most of these instances. Have you tried this kind of approach?
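
To sketch the kind of filter we have in mind (the collaboration-type code and field names below are placeholders, not actual codelist values):

    # Placeholder code standing in for 'core contributions to multilaterals'.
    MULTILATERAL_COLLABORATION_TYPES = {"2"}

    def without_multilateral_contributions(activities):
        """Keep only activities we would still expect to carry forward-looking budgets."""
        return [a for a in activities
                if a.get("collaboration_type") not in MULTILATERAL_COLLABORATION_TYPES]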

We also wondered if comparing the total commitment with the amounts already paid (disbursements and expenditures) might help exclude the front-loaded projects. Let’s say, if the total disbursements are within 10% of the total budget (allowing for some hold-back amount to be paid at closure - again, fairly standard procedure), then we’d consider that the full budget has been paid and would exclude the project from the calculations.

This would not deal with all the problematic cases, but some.
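
A sketch of that rule, with hypothetical field names and the 10% tolerance as described:

    def fully_disbursed(activity, tolerance=0.10):
        """Treat an activity as fully paid out, and so exclude it from the
        forward-looking calculation, if disbursements plus expenditures come
        within `tolerance` of the total commitment."""
        paid = activity.get("disbursements", 0) + activity.get("expenditures", 0)
        commitment = activity.get("total_commitment", 0)
        return commitment > 0 and paid >= commitment * (1 - tolerance)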

The other question is: have you tried using planned disbursements instead of budgets? After all, planned disbursements are more dynamic and better reflect the actual state of the project.

We’re still looking into our data, will let you know if other ideas emerge.

A further suggestion from my colleague Jérémie regarding the forward-looking data: calculate the budget year as follows, to adjust for projects in their last year of activity:

    # Assuming end_date and start_date are datetime.date objects (our naming):
    if (end_date - start_date).days <= 370:
        # Short projects: assign the budget to the final year if the project
        # ends in the second half of the year, otherwise to the year before.
        budget_year = end_date.year if end_date.month >= 7 else end_date.year - 1

With this adjustment and the one mentioned before (excluding grants to multilaterals), we’d have around 70%.

Of the remaining projects, part of the issue may be that the project end date was changed, while the “Budget” field still reflects the original budget, which would have ended earlier. Using planned disbursements rather than budgets, as the more accurate and up-to-date data, would address this.

Excellent suggestion. Will see what our techies can do with this.

This also makes sense. Onto our worklist.