This article was originally published by SciDev.Net and is republished with permission.
The prospect of using big data for social good is far from the reality of how we use the vast tracts of online information each person leaves behind, says Linnet Taylor, data and society researcher at Tilburg University in The Netherlands.
International organisations, including the UN, support big data for its promise to boost the limited statistical capacity in the developing world, and to create social change.
But as things stand it’s the commercial sector that stands to gain, and no one is held accountable for safeguarding human rights, Taylor tells SciDev.Net in an interview for the Bellagio Residency 2018 series.
Big data is often talked about in terms of its promise to boost statistical capacity and create social change. Do you think it can do that?
I think big data is nothing without social change. The data is definitely related to a lot of economic potential ‒ whether that gets spread and distributed is a matter of choice for countries themselves. In terms of what it brings to statistical analysis, that is a really mixed bag. Big data offers more power to the North over statistics in the South, which is different from making statistics better, or creating action out of the statistics.
What’s an example of where that power dynamic has appeared?
One phenomenon we’re seeing right now is a huge shift in statistics around migration. The EU announced a month and a half ago that it was going to sponsor this initiative to do big data for migration tracking. The language around this is that it will help refugees, make people more immediately visible to authorities who want to help them, enable countries to prepare when they’re going to get an influx of migrants. This is true. But the fact that you’re going to get an influx of migrants has never been that much of a mystery – there’s a lot of news reporting on this in countries of origin that’s very easy to see. So what it actually does is it makes migration more identifiable, more controllable. This may or may not lead to benefits for migrants.
Being able to distinguish whether people are Ghanaian, Pakistani or Syrian, for instance, is likely to work against Ghanaians and Pakistanis who may have a perfectly valid claim to asylum but will be shut out in a world of big data ‒ where it’s believed that by analysing their social media output, the GPS details on their mobile phone, where their mobile phone plan comes from, you [could] tell whether somebody has a valid [asylum] claim or not. I think there are some things where tech needs to be regarded with caution, and human rights issues are a big one.
Is that kind of tracking happening at the moment?
Yes and no. This is all very commercial – this is not governments doing any of this work. There are a lot of start-ups pitching projects to governments and to statistics agencies. So far, those projects have been relatively successful, because we’re still at proof of concept stage. Nobody really knows what they do.
What I see in this big data for migration statistics initiative, on the part of the EU, is that statistics agencies are having to collaborate with start-ups on, for instance, analysing satellite data in combination with social media postings from people coming out of refugee camps on the Turkish border … and they can’t do any kind of meaningful analysis, because they don’t have the cultural and linguistic capacity to understand what they’re seeing.
Are developing country governments in a position to commission such analyses themselves?
Basically, no ‒ because it’s incredibly difficult and expensive to educate data scientists. iHubs [IT innovation spaces] are where we can look to see home-grown data analytic talent creating less solutionistic sort of things ‒ which just means things dreamt up to answer the needs of the technical expert rather than the needs of the beneficiary. I have a lot of hope about that.
This all sounds like a hopeless position – is it?
Not particularly – statistics, like all forms of technology, can be used for good or for ill. Where they are directed towards serving the needs of corporations based in rich countries, that is not necessarily going to do any good for development. We have lots of very good scholars and activists and NGOs who don’t think that way – and I have a lot of hope in them.
Having some sort of exceptionalism about big data causes more problems than it solves. We shouldn’t treat it any differently from any other [issue] in development – we should scrutinise it, we should try to make it participatory on the part of people in the global South, and we should give people a voice in what they need. And that’s not a problem with big data, it’s a problem with development.
Who do you see taking the lead on this?
There’s some really good critical work in academia. There’s a move to decolonise data. There’s a very strong legal and activist community in India doing good critical work on what kind of data structures are appropriate for India, and how to preserve people’s rights while also allowing digitisation to happen, because both are equally important. So there are various communities who are starting to come together around this. And I find that really exciting. The meeting I held at Bellagio [in 2014] was exactly that kind of meeting.
How have things evolved since then?
A movement called Global Data Justice is emerging. It is a group of scholars and activists thinking together about how to use standards of social justice for thinking about tech in development, and tech governance in particular. Because where you have tech you need some governance. Right now there’s the idea that tech is this wonderful freestanding miracle that doesn’t need governance. And that is problematic.