Activity results (and objectives)

Continuing the discussion from Bright ideas for the TAG:

I agree it is extremely important that aid agencies report their activity results, and that everyone can access these reports through IATI. I am in awe of the analytical grasp that Herb_Caudill has brought to bear on this. But it seems over-ambitious in the foreseeable future to think in terms of a way in which indicators can be rigorously defined in machine-readable form for all activities.

A key point is that different activities - and within those, different objectives - are differently amenable to standardization. It may not be too difficult to count the number of children to whom your project has helped give a basic education, and it is certainly useful to be able to aggregate that number with others in local and national statistics. But not all objectives are like that.

Let’s say your objective is to build an all-weather road from Amule to Buba. A simple way of reporting the achievement of that objective in an aggregatable way is to say you built 100km of road. But the validity of any aggregation along those lines will be dubious because a road from Amule to Buba is a unique matter, entirely different in character from a road between Calakal and Dor. Measuring things like volume of traffic or value of merchandise does not absolve you from that because any assessment of the impact of the project has to take into account a host of other costs and benefits. Unless you follow the path of calculating everything into a single number using a full social cost-benefit analysis, you are left with a hopeless number of interpretable dimensions. Social cost-benefit analysis is less popular than it once was, and for good reason. In reality it hides many doubtful and politically controversial questions behind a facade of economic expertise.

Or take another example: one which you use in your exposition, Herb: training journalists. What matters in the result is not merely the number of journalists trained, and how they may be disaggregated by gender, ethnic group and so on. It is important to know how thorough the training was (a day or a year? full-time or part-time?), and about the features of the curriculum being used. It would be extremely complex and difficult to resolve these matters into machine-readable indicators, and the effort to do so would have unwanted side-effects. Not only would it increase the number of headaches in the publishing agency and deter it from using IATI at all; it might even end up forcing them to design their projects to fit in with the standardized indicators (including packages of standardized curricula) on offer. (I know you will say they can define their own indicators, Herb, but I suspect they might feel it easier - or be pressurized - to take indicators off the peg.)

In the end, why do we want to aggregate results? Mainly for purposes of centralized planning and policy discussions. In some cases (like the provision of basic education) it is quite legitimate. In others - like roads - the planners and policy-makers really should not be shielding themselves from the social and political complexities behind pseudo-objective numbers. And in others again - like training journalists - it may be better not to involve central planners at all.

So what would my recommendations be for the TAG?

Remember that the essence of IATI is not necessarily to provide aggregatable or machine-analyzable data. The more fundamental purpose is to provide accessible information. For citizens, this may mean being able to find, understand, and follow up on the individual activities which are going to impact them, or which have impacted them, or which their government has funded or allowed to happen in their country.

There may be a case for providing some facility for using standardized indicators, but this should not be seen as something generalizable to every reported activity.

Being new to this space, I do not understand how the IATI standard has come to support a family of tags called “result” rather than “objective”. The IATI standard was supposed to be forward-looking (presumably in contrast to the OECD CRS). “Result” seems very backward-looking. Citizens and recipient-country governments need an easier way of knowing what aid agencies are planning to do and what they are in the middle of doing. Recording objectives at the beginning also strengthens post-hoc accountability by allowing clear comparison of promises with, well, claimed results.

I do not think it should be stipulated that a “result” (or an “objective”) needs to be measurable. This job is done by indicators.

Apart from allowing and encouraging the reporting of pre-completion objectives, do not make the standard more complex. The priority should be to make donors and other publishers actually use those tag fields for reporting objectives, results and - above all - links to useful documents.

Michael - all of your points are well taken. This is a thorny problem. Every aid organization that tries to measure results ends up trying to come up with indicators that are comparable, without being so over-broad that they’re meaningless. Sometimes they err on one side, sometimes on the other.

Still, the answer isn’t to throw up our hands and say that there’s no way to measure the results of development work, or to remove this from IATI’s purview.

The problem that my company is trying to solve is a very practical, down-to-earth one. The US government (for example) needs to be able to answer some very basic numeric questions like:

  • How many people received HIV treatment in Southern Africa, paid for with US funds? Where? What regimens were administered? What’s the demographic breakdown by age and by gender?

  • How many new houses were built in Haiti using US funds after the earthquake? How many people were housed? Where?

Ultimately, what we really want to know is outcomes (have HIV survival rates increased in Southern Africa, have homelessness rates dropped in Haiti, etc.). But if we can’t even count our outputs, then something’s very wrong. If the US can’t provide those simple statistics, it becomes that much harder to make the case for aid in the first place.

And we can’t even count our outputs, at this late date. The reason is not that the data isn’t being collected; it’s that the data has to make its way from one organization to another to another - from an HIV clinic in Mozambique, which reports to an implementing partner like FHI, which reports to USAID/Mozambique, which reports to USAID/Washington. Getting the data in consistent, usable form across all of those gaps, and aggregating it at each level in a non-destructive way, remains a mostly unsolved problem.

If IATI chooses not to contribute to a solution to this problem, someone else will have to start from scratch, and that would be a real shame.

First, because IATI is so close. If you look at the specific tweaks I’ve proposed, they’re quite minor, and backwards-compatible. The main thing I’m asking for is just a couple of additional elements to allow for unambiguous indicator references. If you’re reporting on the US Foreign Assistance Framework indicator 3.1.1-6 then you should be able to indicate that with precision, instead of saying “Number of adults and children with advanced HIV infection newly enrolled on ART” and hoping that everyone else uses the exact same words.
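To sketch the difference (the `vocabulary` and `code` attributes below are hypothetical - they are what I’m proposing, not anything in the current schema):

```python
import xml.etree.ElementTree as ET

# Two ways of reporting the same indicator: today's free-text title only,
# versus a hypothetical structured reference (attribute names invented).
free_text = ('<indicator measure="1"><title><narrative>'
             'Number of adults and children with advanced HIV infection '
             'newly enrolled on ART</narrative></title></indicator>')
structured = ('<indicator measure="1" vocabulary="US-FAF" code="3.1.1-6">'
              '<title><narrative>Adults and children newly enrolled on ART'
              '</narrative></title></indicator>')

def indicator_key(xml):
    """Return the key a consumer would use to match indicators across publishers."""
    el = ET.fromstring(xml)
    if el.get("vocabulary") and el.get("code"):
        # Exact, unambiguous match on the code reference.
        return (el.get("vocabulary"), el.get("code"))
    # Fall back to hoping everyone else typed exactly the same words.
    return ("free-text", el.find("title/narrative").text.strip().lower())

print(indicator_key(structured))  # ('US-FAF', '3.1.1-6')
```

With the structured reference, two publishers reporting on 3.1.1-6 match automatically; with free text, the slightest variation in wording breaks the match.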

Second, because results reporting is absolutely part of IATI’s charter according to the Accra Agenda. Either IATI supports results reporting, or it doesn’t. And if it does, the schema needs to support it in a way that’s actually usable.

Hi Herb. Your last message - like the previous one, and discussions in the Google Group - is very constructive. But I do protest against the suggestion that I’m saying ‘there’s no way to measure the results of development work’ or that IATI should not attempt to record results.

One of the things I’m saying is that IATI can usefully record results (claimed and/or corroborated) even if it does not do so in a way which is aggregatable or machine-analyzable. Many will be interested in this information activity-by-activity. A lack of standardized codes should be no excuse for agencies not to use the results tag.

I do acknowledge that there is a strong case for supporting standardized indicators in many instances, although I have strong misgivings about their use in many instances too. I agree that the Accra Agenda points strongly in that direction, although I would point out that this is very much in the context of harmonizing performance assessment frameworks within recipient country systems. The US Foreign Assistance Framework with its 500+ standard indicators looks rather unilateral. What happens for a US-funded activity in a country whose standard indicators are different from the US ones?

As an information-facilitator, it is probably not up to IATI to solve problems like that. But IATI should provide room and facilities for solutions. So I would not oppose the creation of a way in which publishers can report objectives and performance in terms of standardized indicators. But I feel strongly that any use of any standardized codes here should be optional rather than mandatory within the IATI standard. Donors or recipient countries may nevertheless insist that their implementing agencies use them, but this should not bind other agencies in other relationships.

Thanks, Michael. I think we’re entirely in agreement so I won’t belabor this thread much further. Just to clarify:

  1. I absolutely agree that the proposed indicator code element should remain optional.
  2. I don’t think it’s IATI’s job to come up with standard indicators, or to opine on the quality of indicators, or to recommend one set of indicators over another.

The state of the world today is that (a) some organizations have created standard lists of indicators, and (b) some organizations use these indicators - some by choice, some as a condition for funding. Whether that’s a good thing or a bad thing is an interesting and important question, but is a topic for another day.

What the IATI schema lacks today is a way for an organization to say that they’re reporting on indicator XYZ from repository ABC.

This is a glaring shortcoming in the schema. Without this simple element, the only quantitative analysis anyone can make of IATI data involves the amount of money that went from A to B. And while that’s important information, it’s the crudest indicator there is.

Fortunately this is a shortcoming that’s easily remedied. I’m looking forward to continuing the conversation in Ottawa.

Hi Herb. I’m glad we found our core of agreement, and that a session on improving the results element has been scheduled in the Ottawa meeting.

But a further thought occurs to me, which you may not like but I think is important:

In view of the very definite emphasis in the Paris, Accra and Busan agreements on harmonization and alignment according to recipient-country systems, shouldn’t the presumption be that vocabularies of standard indicators valid in the IATI standard would not include unilateral donor vocabularies like that of the US Foreign Assistance Framework?

This is an interesting thread and I want to highlight something important that Michael Medley said above:

The IATI Standard has actually moved somewhat away from this in the 2.01 version by making the Indicator element mandatory within the Results element. This makes it impossible to report Results in a purely narrative way (that is, in a form that is not aggregatable or machine-analyzable).

We had not noticed this important change in the 2.01 upgrade process, as it was not listed among the elements that were becoming mandatory. We plan to bring it up in Ottawa as something perhaps worth revisiting, given its impact on the availability of information on results (and objectives, which we are publishing in this element as “expected results”).

Thanks, Yohanna. I had not gone carefully through the 2.01 rules to see how the Results element enables pre- or mid-project reporting of objectives. I’ve now done so (using http://iatistandard.org/201/activity-standard/).

I wonder if the hierarchy of tags involved (Result -> Indicator -> Period -> Target) might be obscuring the capability of reporting on pre- or mid-project objectives. I’d be interested in learning from publishers whether the complexity here does in fact significantly deter them from recording objectives at an early stage, and whether improved reference materials, software or training are needed to help.

On the earlier point about the fact that if you use the Results element you must also use the Indicator element: I don’t see how it makes narrative reporting of results impossible within the system. The only mandatory items within the Indicator element seem to be a narrative title, and an attribute called “Measure” which makes you specify whether you are measuring percentages or absolute quantities. The Period element, which would contain any hard quantitative information, does not seem to be mandatory (although then what is the point of “Measure” being mandatory?).

If you take the view that any specification of an indicator intrinsically takes you beyond narrative reporting, I would ask why you want to avoid that. I think indicators are useful even if sometimes trivial. What I am very cautious about is the standardization of indicators necessary for aggregation and machine analysis.

It is true that, if you want to go into the Period element and use the Target and Actual elements, then for any Target or Actual you must specify a quantitative Value. I would agree that this is not always going to be appropriate. Indicators can be qualitative, and although qualitative indicators are frequently converted to pseudo-quantitative ones through a process of grading by experts, this raises questions which I think it is too early to grapple with.
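To make the structure concrete, here is a sketch of such a minimal, essentially narrative result - assuming my reading of the rules above is right; I have not run this exact fragment through the official validator:

```python
import xml.etree.ElementTree as ET

# A minimal, essentially narrative 2.01-style result: the indicator carries
# only what the rules seem to make mandatory (a title narrative and the
# @measure attribute). The period element, which would hold any hard
# quantitative values, is simply left out.
minimal_result = """
<result type="1">
  <title><narrative>All-weather road from Amule to Buba completed</narrative></title>
  <indicator measure="1">
    <title><narrative>Road open and passable in all seasons</narrative></title>
  </indicator>
</result>
"""

result = ET.fromstring(minimal_result)
indicator = result.find("indicator")
print(indicator.get("measure"))          # '1'
print(indicator.find("period") is None)  # True: no quantitative reporting
```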

Hi Michael

Somehow there didn’t seem to be an opportunity to discuss this at the TAG (perhaps it happened in sessions I wasn’t able to attend) so I thought I’d provide further input here, as it’s an important conversation.

You said: “If you take the view that any specification of an indicator intrinsically takes you beyond narrative reporting, I would ask why you want to avoid that. I think indicators are useful even if sometime trivial.”

The simple answer is that our systems are not able to take us beyond narrative for now. At the moment we receive progress reports from partners and synthesize the information into a narrative summary of outcomes that is recorded in our internal IM system and published on our website and in our IATI file (see for example this human-readable profile: http://www.acdi-cida.gc.ca/cidaweb/cpo.nsf/vWebCSAZEn/67F75976EC11F417852579C10035AA4D).

I’m sure many people will find this approach insufficient, but that’s the best we can do for now. While we’ve been looking at ways to capture indicator-specific data in our IM systems, it’s a huge challenge that I don’t expect us to overcome soon. Moreover, one could argue that implementing organizations are much better placed anyway to provide detailed, indicator-by-indicator data - in which case we need to think how best donors at the beginning of the aid chain (ie bilateral) can communicate results.

Hi Yohanna

Thanks very much for continuing this important discussion. It is especially good to be able to refer to real-world examples like the one you mention.

I do understand that existing information management systems may be hard to change or adapt. Your approach of using narrative fields to provide information that doesn’t come to you in more precise categories is certainly preferable to revealing nothing about results (intended and actual) at all, which is what most agencies (don’t) do.

But it raises the question of how you actually manage your aid. There’s a lot of truth in the dictum “if you can’t measure it, you can’t manage it”. Instead of “measure” we could put “clearly represent”. And since we believe in the future of democracy we might put “clearly and transparently represent”.

Looking at your GROW/MEDA example, I am struck by the fragmentary look of the results that you cite. There are six different items in the result/description element, most of which mention a number whose significance is very hard for a reader to judge. Is “6,155 women farmers mobilized” a lot or not very much? Then I see that the human-readable profile on your website cites five separate items, which are different from (overlapping with?) those in the IATI record.

In a project of this size ($18.5 million over five years), there may well be a plethora of outputs (low-level objectives). It is almost certainly worth trying to keep track of them all, but maybe putting them all into an IATI element (or nest of elements) is not the answer. Shouldn’t these be tracked in project documents (e.g. the initial plan and then annual reports), and then the documents linked through IATI?

I would say the IATI results element would be better used for a smaller number of higher level results, such as the ones mentioned under “Expected results” in the project profile on your website or the (different/overlapping) set mentioned in the activity description. Then, at different points during the implementation process you could add a new instance of the result element, titled as an “Interim Result” or a “Final Result”.

If these are just done descriptively in the IATI elements, they really ought to be backed up by some indicators in the linked project documents. For example, against the expected result “Increased diverse agricultural productivity…”, you might narratively report “Agricultural productivity has greatly increased in diversity, and moderately increased in cash value”. But for real accountability you need to commit yourself to more specifics.

Re the argument that implementing organizations are much better placed to provide the detailed information, I would hate to see this become a widespread donor cop-out. Having worked in implementing agencies, my experience is that donors were always (I think rightly) demanding detailed reports. So donors should have the reports, and presumably have as much authority to publish them as the grant-receiving agency (even if the project had multiple donors). Any issues here about privacy and other sensitivities need tackling on their own account; devolving the publishing responsibility to the implementer does not look like a good answer. But this could be a whole new topic of discussion.

Best wishes - Michael

After writing the above, I read the notes on the “Improving the results element” session at the TAG meeting. My argument (above) that the results element is better used at a level higher than activities or outputs seems to conflict with Bill Anderson’s view that IATI should record activities and outputs but not outcomes and impacts. Of course, output, outcome and impact levels are hard to define precisely and universally; the terms are often used to mark relative differences within the exposition of a theory of change for a particular project with its unique circumstances. But I would say “outcome” often suggests indirect effects which nevertheless can be largely attributed to the individual project and whose measurement is too granular for national statistics. Knowing about this level is surely often key to understanding what a project is about and how it is performing.

Thanks Michael

I agree with you in principle. You are right that we receive more detailed information than what is in these fields, as this is a crucial part of project management. Unfortunately, our legal requirements regarding official languages and accessibility make it very, very difficult to publish project documents. We continue to work on alternative approaches to capture and share results information from our systems, but it’s difficult and taking longer than we’d hope.

Bill’s view leads us back, I think, to the notion that implementing partners are best placed to provide results details ie outputs and outcomes. We continue to engage with them to encourage them to publish.

The last point in my previous message remains relevant, though: it may be worth thinking more concretely about what kind of results initial funders (like bilateral donors) can and should report, especially where a full chain of data was published (ie down to final implementing partner). Perhaps something for a future TAG.

Just a quick look at the page you reference shows that someone, at least, is counting the beans: 160 agricultural workers trained, of which 50% women; gardening training and inputs delivered to 76 farmers; etc. etc.

The end goal is for your partners to be able to report these numbers to you in IATI format, so that you can then report them in turn and aggregate them when possible, also in IATI format. We can’t get there without adding a bit more structure to the results element.

(Having said that, if a narrative is all there is, it seems like the schema should support that as well; so I don’t disagree with you that it might make sense to roll back that requirement in the 2.01 standard.)

No, and here’s why:

My primary takeaway from Ottawa was that IATI has huge potential as an intermediate exchange format from all along the chain of accountability: Not just for the donor to report to the world, but for the INGO to report to the donor, and for the small local grantee to report to the INGO.

In fact, it’s not realistic to expect the donors to report reliable machine-readable data to the world if there hasn’t been reliable machine-readable data all the way up the chain.

The fact of the matter is that the donors who are committing to report results data via IATI are still struggling to figure out how to assemble that data from disparate sources. This is where IATI can be very useful, assuming that it allows more precision in the ways I’ve proposed. If all US-funded actors are required to report to their US government donor agencies using IATI, using the indicator codes defined in the Foreign Assistance Framework, then the US has a chance of making good on its promises. Otherwise it doesn’t have a prayer.

The problem of cross-referencing indicators still remains, so that results data from different donors can be compared and aggregated where appropriate. Here I think the solution is not a global standard, but a crowd-sourced crosswalk API that can tell you which US indicator corresponds to which Mozambique indicator, or which UN SDG indicator corresponds to which World Bank indicator. More thoughts on that idea are here - please let me know if you’re interested in pursuing this idea.
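To sketch the crosswalk idea (all the vocabulary names and code pairings below are invented for illustration - a real service would be crowd-sourced and queried over an API):

```python
# A crowd-sourced crosswalk, reduced to its essence: a lookup from an
# indicator reference in one vocabulary to its claimed equivalents in others.
CROSSWALK = {
    ("US-FAF", "3.1.1-6"): {("UN-SDG", "3.3.1"), ("MZ-MOH", "HIV-ART-01")},
    ("UN-SDG", "3.3.1"): {("US-FAF", "3.1.1-6")},
}

def equivalents(vocabulary: str, code: str) -> set:
    """Return known equivalents of an indicator, or an empty set if none."""
    return CROSSWALK.get((vocabulary, code), set())

# A consumer comparing US-funded and UN-reported results could then ask:
print(("UN-SDG", "3.3.1") in equivalents("US-FAF", "3.1.1-6"))  # True
```

The hard part is not the lookup, of course, but curating the pairings; the point is that this can be layered on top of precise indicator references, and cannot be layered on top of free text.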

I think this has a political dimension that we need to acknowledge. Our judgements about what is realistic for donors to implement and what is not realistic may be correct. Or they may not be. Bureaucrats sometimes need (and sometimes want) to be pushed to do things which do not seem realistic. I don’t think it is good in this case for technocrats to pre-judge and pre-empt.

Unless the TAG has guidelines/mandate which clearly covers this kind of issue, I think the question of whether to allow any standardized set of indicators to be used, or whether to wait for a common standard, or whether not to try to standardize indicators at all - ought to be referred politically upwards. The work done in showing that the first option is technically feasible is useful to inform such a decision.

The vision of donors and other aid agencies using IATI (not just IATI-consistent categories) as the medium for managing their own reporting chains - rather than relying mainly on internal processes - is an intriguing one. It could have far-reaching implications. It is attractive in many ways, but I’d have thought it would meet with strong resistance in many aid chains. My impression is that many donors - USG agencies not least - often like to intervene with advice and directives to implementers along the way. Sometimes this is because things have not gone as previously planned; sometimes it is a cause of things not going as planned. Goalposts get changed, compromises are negotiated. Surely it can suit both the donor and the implementer to work things out in private: to feel their way towards a common story rather than present their different versions as a running public commentary along the way. Not that it would be a bad thing if they could be made to do so. And I’m not saying we should block or pre-empt the choice of any agency to do so. Just that it is rather surprising to be told that if USG agencies can’t do things this way then they can’t meet their aid transparency promises at all.

@Herb_Caudill, you have a much closer knowledge of how USG agencies operate than I do. I genuinely think it would be of great interest if you could explain more about why it is so hard for USG agencies to adapt their internal reporting systems to IATI forms that they would actually find it more workable to use a public information channel.

Sorry, I wasn’t clear. When I say that NGOs should “report to their US government donor agencies using IATI”, what I mean is that they should report using the IATI schema, not via the public IATI registry. USG agencies “adapting their internal reporting system to IATI forms” is precisely what I’m talking about.

Here’s how things often work today:

  1. USAID/Elbonia tasks NGOs A, B, and C with building rural water systems
  2. USAID requires each NGO to report on their progress on a quarterly basis, including numbers on three simple indicators:
    • 1.2.3 # communities served with potable water
    • 1.2.4 # households served with potable water
    • 1.2.5 # individuals served with potable water [disaggregated by gender, age, and income category]
  3. Each NGO sends USAID a PDF quarterly report, in which these numbers are in a table on page 9 preceded by lots of blabbety-blah and pictures of happy Elbonian children drinking clean water
  4. If we’re lucky, some poor soul at USAID/Elbonia copies and pastes (or retypes) the numbers from the PDFs into a spreadsheet or some tracking system. In real life it’s more likely that the PDFs are filed away and that data goes nowhere.

All I want is for the PDF in step 3 to be replaced (or at least supplemented) with an XML document following the IATI schema, so that step 4 can be done by software. Right now that’s not feasible because the schema has no place for the codes 1.2.3 etc.
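As a sketch, here is what step 4 looks like once step 3 produces structured data instead of a PDF (the element and attribute names below are placeholders to show the shape, not a proposal for exact schema syntax):

```python
import xml.etree.ElementTree as ET

# Step 3, reimagined: NGO A submits structured XML alongside (or instead of)
# its PDF. The code attribute is the missing piece under discussion.
quarterly_report = """
<results>
  <indicator code="1.2.3" value="14"/>   <!-- # communities served -->
  <indicator code="1.2.4" value="620"/>  <!-- # households served -->
  <indicator code="1.2.5" value="3100"/> <!-- # individuals served -->
</results>
"""

# Step 4, done by software: no one copies and pastes from page 9 of a PDF.
totals = {}
for ind in ET.fromstring(quarterly_report).iter("indicator"):
    totals[ind.get("code")] = totals.get(ind.get("code"), 0) + int(ind.get("value"))

print(totals["1.2.5"])  # 3100
```

Run over the reports from NGOs A, B, and C, the same loop aggregates across all three automatically - which is the whole point.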

I don’t see this as a political matter, as technocrats vs bureaucrats, or as anything that should be remotely controversial. It’s just a matter of creating the possibility for structured, machine-readable results reporting, which is currently not an option because this element is inexplicably missing from the schema.

Sorry I misunderstood! But it is good to have the clarification: a helpful example.

However, what’s to stop USAID internally requiring the XML with the element as you recommend even if IATI does not adopt it? Wouldn’t it be a relatively simple matter for USAID to run their internal XMLs through a programme which converted them into the IATI standard? Such a programme would, for example, see the code “1.2.3”, look up the corresponding verbal definition - “# communities served with potable water” - and insert the latter into

iati-activities/iati-activity/result/indicator/description/narrative

You might then still think the IATI data is not as useful as it could be. But the unchanged IATI schema would not seem to be seriously getting in the way of USAID improving its own practices.
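A minimal sketch of such a converter, with an illustrative lookup table (I am not suggesting USAID would write it exactly this way):

```python
import xml.etree.ElementTree as ET

# The internal file carries only the code; the converter looks up the verbal
# definition and writes it into the IATI narrative element at the path above.
DEFINITIONS = {
    "1.2.3": "# communities served with potable water",
    "1.2.4": "# households served with potable water",
}

internal = ET.fromstring('<indicator code="1.2.3" measure="1"/>')

iati = ET.Element("indicator", measure=internal.get("measure"))
description = ET.SubElement(iati, "description")
narrative = ET.SubElement(description, "narrative")
narrative.text = DEFINITIONS[internal.get("code")]

print(ET.tostring(iati, encoding="unicode"))
```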

Why would that be preferable?

I’m a little mystified by the fact that there’s any resistance at all to this change. The status quo is equivalent to having a budget element with no currency code element. This should be a complete no-brainer.

If this turns out to be unfixable within the primary IATI schema, I’ll submit it as an extension. Or, as you suggest, we’ll just extend the schema internally and clean it later so it validates for external publication. But I’m really scratching my head here to understand why this is controversial at all.

Standardizing and aggregating data is a powerful practice and a dominant technology in modernity, but it is powerfully dangerous as well as useful. The complexities of reality are chopped, stretched and squeezed into forms that the machine can ingest. What comes out are strong statements. These can be used as a basis for discussing inaccuracies, variations, biases. But often they are used crudely to justify insensitive top-down policies and planning, and to browbeat stakeholders with facts and figures whose validity and relevance is dubious. There is also the question of whether some standards drive out others and - in that case - what political interests are at stake, and whose power is prevailing.

Describes all of human invention throughout history.

There is always the risk that people will use IATI in ill-advised ways. Not a strong argument for hobbling the standard.