Artificial intelligence vs. COVID-19 in developing countries

Priorities and trade-offs

The rush to harness Artificial Intelligence (AI) in the fight against the pandemic may be an opportunity for developing countries to accelerate the digitalization of their economies.

In theory, AI can help track and predict the spread of the infection, make diagnoses and prognoses, and search for treatments and a vaccine. It can also be used for social control – for instance, to help isolate those who are infected and to monitor and enforce compliance with lockdown measures.

Limitations of AI

Unfortunately, AI is not yet ready to track and predict the infection, it cannot yet provide reliable assistance in diagnosis, and, while its most promising use is the search for a vaccine and treatments, these will take a long time. The main reason for this somewhat pessimistic conclusion is inadequate data. On the one hand, there is not enough suitable data – that is, unbiased and sufficiently large datasets – to train AI models to predict and diagnose COVID-19. Most of the studies that have trained AI models to diagnose COVID-19 from CT scans or X-rays have relied on small, biased, and unrepresentative samples, mainly from China.
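
To see why small, unrepresentative samples are such a problem, consider a minimal, purely hypothetical sketch using synthetic data (not any of the published COVID-19 diagnostic models): a classifier trained on patients from a single site can look highly accurate on its own data while performing no better than chance on the broader population it is meant to serve.

```python
# Hypothetical sketch with synthetic data: a model trained on a small,
# unrepresentative sample looks accurate in-sample but fails to generalize.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_cohort(n, shift):
    """Simulate one cohort: two measurements plus a site-specific offset."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    # The true outcome depends on the centred measurements, not on the site offset.
    y = ((X[:, 0] - shift) + (X[:, 1] - shift) + rng.normal(0, 0.5, n) > 0).astype(int)
    return X, y

# Small, biased training sample from a single site ...
X_train, y_train = make_cohort(n=200, shift=2.0)
# ... evaluated on a larger, differently distributed target population.
X_test, y_test = make_cohort(n=5000, shift=0.0)

model = LogisticRegression().fit(X_train, y_train)
print("in-sample accuracy:        ", accuracy_score(y_train, model.predict(X_train)))
print("target-population accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The point is not the particular numbers, but that strong in-sample performance on a narrow sample says little about performance in the field – which is exactly the concern raised about the early COVID-19 imaging studies.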

On the other hand, there is too much noisy social media data associated with COVID-19. This is a problem that the failure of Google Flu Trends illustrated more than five years ago. That failure was dissected by Lazer and colleagues in a 2014 paper in Science, in which they attributed it to “big data hubris” and to algorithm dynamics – changes in the search engine and in user behavior that made the underlying data unstable. The same problem bedevils efforts to automate the tracking of COVID-19.

Furthermore, and perhaps more importantly, the systemic shock caused by the outbreak has led to a deluge of outlier data. COVID-19 is, in essence, a massive and unique event. This sudden deluge of new data is invalidating almost all prediction models in economics, finance, and business. The consequence is that “many industries are going to be pulling the humans back into the forecasting chair that had been taken from them by the models.”

Surveillance

So, while we are unlikely to see AI used for prediction and diagnosis during the current pandemic, we are likely to see the growing use of AI for social control. A case in point is the well-documented use of mass surveillance to enforce lockdown and isolation measures in China, including infrared cameras to identify potentially infected persons in public.

OneZero has compiled a list of at least 25 countries that by mid-April 2020 had resorted to surveillance technologies – many of which violate data privacy norms – to track compliance and enforce social distancing measures. These include developing countries such as Argentina, Brazil, Ecuador, India, Indonesia, Iran, Kenya, Pakistan, Russia, South Africa, and Thailand. In the case of South Africa, the country is reported to have contracted a Singapore-based AI company to implement a “real-time contact tracing and communication system.” Singapore itself is using an app called TraceTogether, which logs Bluetooth proximity between phones and alerts users who have been in close contact with confirmed cases.

In addition to social control, compliance measuring, and contact tracing, AI systems via apps and mobile devices can help health authorities manage the provision of care. According to Petropoulos, these can “enable patients to receive real-time waiting-time information from their medical providers, to provide people with advice and updates about their medical condition without them having to visit a hospital in person, and to notify individuals of potential infection hotspots in real-time so those areas can be avoided."

AI surveillance tools can also be valuable in carrying out large-scale diagnostic testing, to identify those still infected and keep them in isolation. Testing is also necessary for learning: it is not known with any accuracy how many people are in fact infected and how many are asymptomatic.

Research reported in Science suggests that more than 80 percent of infections are not documented. If true, there are two implications, one bad and one good. One, the pandemic may easily rebound once lockdowns are lifted. Two, the virus may not be as lethal as is thought. In this regard, The Economist points out, "If millions of people were infected weeks ago without dying, the virus must be less deadly than official data suggest.”
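
The arithmetic behind these two implications is straightforward. As a purely illustrative back-of-the-envelope calculation (hypothetical numbers, not actual case data), if only about 20 percent of infections are documented, the true number of infections is roughly five times the confirmed count, and a fatality rate computed from confirmed cases alone overstates the risk to the average infected person by the same factor:

```python
# Illustrative back-of-the-envelope calculation with hypothetical numbers,
# assuming roughly 20% of infections are documented (cf. the Science estimate).
confirmed_cases = 100_000   # documented infections (hypothetical)
deaths = 4_000              # deaths among them (hypothetical)
ascertainment = 0.20        # share of infections that are documented

naive_cfr = deaths / confirmed_cases               # fatality rate among confirmed cases
true_infections = confirmed_cases / ascertainment  # implied total infections
implied_ifr = deaths / true_infections             # fatality rate among all infections

print(f"naive case fatality rate:   {naive_cfr:.1%}")        # 4.0%
print(f"implied total infections:   {true_infections:,.0f}") # 500,000
print(f"implied infection fatality: {implied_ifr:.1%}")      # 0.8%
```

The larger pool of undocumented infections is what makes a rebound after lockdowns plausible, while the lower implied fatality rate is what The Economist's observation rests on.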

The upsides of surveillance technology come with one substantial risk: that after the pandemic, mission creep will set in and governments will continue close digital surveillance of populations. Even worse, data gathered in the fight against COVID-19 may be used for more nefarious purposes.

Tradeoffs

This risk of using AI in the fight against COVID-19 reflects the more general risk of using AI: it has both positive and negative impacts. Take two examples of how AI can do good and harm at the same time.

First, if we consider the Sustainable Development Goals (SDGs) broadly, a recent survey published in Nature Communications emphasized that “AI can enable the accomplishment of 134 targets across all the goals, but it may also inhibit 59 targets”.

Second, while natural language processing (NLP) models may provide early warning of an epidemic outbreak by mining written reports on social media and online news, a recent study found that training a large NLP model on Graphics Processing Unit (GPU) hardware can emit 626,155 pounds of CO2 – roughly five times what an average car emits over its lifetime (120,000 lbs).
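
As a quick sanity check on that comparison, using the figures exactly as quoted above:

```python
# Quick check of the comparison cited above, using the figures as quoted.
nlp_training_co2_lbs = 626_155   # CO2 emitted training one large NLP model (lbs)
car_lifetime_co2_lbs = 120_000   # average car's lifetime emissions (lbs), as cited

ratio = nlp_training_co2_lbs / car_lifetime_co2_lbs
print(f"training emits roughly {ratio:.1f} times a car's lifetime CO2")  # ~5.2
```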

Hence, the authors of the Nature Communications survey recommend that “the fast development of AI needs to be supported by the necessary regulatory insight and oversight for AI-based technologies to enable sustainable development”. The key point is that we need to limit the potential adverse consequences of AI, and we need to do so through adequate governance of AI.

Priorities

Developing countries are already having to deal with the economic fallout of the pandemic. As Hausmann argues, with revenues, trade, and investment dropping, developing countries will need to increase their indebtedness massively if they are to implement basic healthcare support and social distancing measures against the disease. They are losing policy space precisely when they need it most. Prioritization of resources is therefore vital.

Developing countries should focus their scarce resources on propping up their health sectors and providing social security to their citizens. In essence, they should not be investing those resources in the current rush to deploy AI. Although AI can be helpful in finding a vaccine, developing countries, and particularly those in Africa, would be wasting money on AI research. This is because, as I document elsewhere, around 30 companies in three regions – North America, the EU, and China – account for the vast bulk of research, of patenting (93 percent), and of venture capital funding (more than 90 percent).

This is not to say that developing countries have no interest in harnessing AI to find a vaccine – they do. Rather, it illustrates that such a vaccine is a global public good. Scott Barrett has put forward the concept of a “single-best effort public good,” which can be applied to the search for a COVID-19 vaccine.

Thus, while developing countries should not be spending resources on finding pharmaceutical solutions to the crisis through AI, they should be part of a global coalition to harness the AI capabilities of high-income economies and China in this respect. Developing countries can and should partake in gathering and building the large public databases on which to train AI. The costs of doing so are small, and the potential benefits, given the need for unbiased and representative data on the pandemic, are high. What should be avoided is an uncoordinated response, an “AI arms race” between countries and regions, and uncertainty about the distribution of and access to such a vaccine.

How developing countries go about their AI-based surveillance and testing will also be crucial. Developing country governments and the global community need to ensure adherence to the highest ethical standards and transparency. If they do not, then they may face the prospect that people will lose what little trust they had in government, which will, as Ienca and Vayena pointed out, “make people less likely to follow public-health advice or recommendations and more likely to have poorer health outcomes."

For the developing countries of Africa, this makes it imperative that they ratify the African Union's “Convention on Cyber Security and Personal Data Protection” – the Malabo Convention – as soon as possible. And, consistent with the convention, they should also stop limiting internet access and restricting digital information flows.

The views expressed in this piece are those of the author(s), and do not necessarily reflect the views of the Institute or the United Nations University, nor the programme/project donors.