
Chinese Aid Data

- April 30, 2013

China is often accused of using aid strategically to advance its own interests, with little regard for the welfare of the developing countries in which it is investing. Axel Dreher and Andreas Fuchs have pointed out that all countries use aid to advance their strategic interests, and that the pattern of Chinese aid giving does not look all that different from that of OECD donors, including the United States.

The problem with assessing this claim is the lack of transparency: there simply is no good single source of data that captures how much aid (and of what types) China gives. The AidData project, based at William & Mary, just launched a detailed new dataset documenting Chinese aid projects in Africa, using an innovative methodology:

The dataset uses a media-based data collection methodology developed by AidData, which helps synthesize and standardize vast amounts of project-specific information contained in thousands of English and Chinese language media reports. A new report, China’s Development Finance to Africa: A Media-Based Approach to Data Collection, describes the development and application of the methodology, which will be released in conjunction with the database.

It is fascinating how internet-based data collection allows us to get at information that states jealously guard. That only works, of course, if that information is accurate. Deborah Brautigam offers a critique. The most damning criticism is that several of the aid projects mentioned in the media reports were either proposals that were never realized or were nowhere near the size claimed by the new data. As a result, the data yield substantively implausible patterns, such as that Mauritania is one of the top recipients of Chinese aid.
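Brautigam's point can be illustrated with a toy sketch. The sketch below is not the actual AidData pipeline; the project names, amounts, and status labels are invented for illustration. It shows why two coding choices matter: duplicate media coverage of the same project must be collapsed to one record, and pledged-but-never-realized projects must be filtered out before amounts are summed, or a country's total is inflated.

```python
# Toy illustration of media-based aid coding (hypothetical data, not AidData's code).
# Multiple media reports may describe the same project, so records must be
# deduplicated and standardized before amounts are aggregated.
from collections import defaultdict

# (recipient, project_id, status, reported_amount_usd) -- all values invented
reports = [
    ("Ghana", "dam-01", "committed", 600_000_000),
    ("Ghana", "dam-01", "committed", 600_000_000),        # duplicate coverage of one project
    ("Mauritania", "rail-07", "pledged", 1_000_000_000),  # proposal never realized
]

# Step 1: deduplicate -- keep one record per (recipient, project) pair.
unique_projects = {}
for recipient, pid, status, amount in reports:
    unique_projects[(recipient, pid)] = (status, amount)

# Step 2: aggregate only committed projects; pledges alone don't count.
totals = defaultdict(int)
for (recipient, _pid), (status, amount) in unique_projects.items():
    if status == "committed":
        totals[recipient] += amount

print(dict(totals))  # Mauritania's pledged-only project is excluded
```

Skipping either step reproduces exactly the distortions Brautigam describes: double counting inflates totals, and counting unrealized pledges can make a country like Mauritania look like a top recipient.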

I do not have the expertise to evaluate this claim, but if true, it could signal a major problem not just with this dataset but with the media-based approach more generally. The concluding paragraph by Brautigam struck me as odd, though:

Bottom Line: The authors are striving for a database that can be replicated by anyone. But that’s the problem. This is not research that can be done by just anyone, and especially not by only looking at media reports.

Why is it a problem to produce a database that can be replicated by anyone? Isn’t the main advantage that she can pick over the projects that count and hopefully improve the data that way? In other words, isn’t the real promise here to combine the quantitative advantages of web scraping with real area expertise (crowdsourcing)? Here is a response from Austin Strange (AidData) along similar lines.