Nobel prize in economics – experiments are no substitute for diagnosis

Sitting in front of a class of 20 young students at a private technical-vocational college on the edge of Maputo, the capital city of Mozambique, I cannot help but reflect on the fervent social media debates regarding the Royal Swedish Academy of Sciences’ award of the 2019 Nobel Prize in economics.

The committee made a worthy choice. It recognised the growing importance of practical experimental work in the field of development. Even so, it is not self-evident that the bulk of gains in knowledge that will support more effective social and economic policies in the developing world are going to start and end with experiments.

My engagement with technical students in Mozambique is part of a baseline survey of their ‘school to work’ experiences. We are collecting information on the young students’ backgrounds, doing some short psychometric tests and asking about their expectations for the future. Over the course of the next year, we will follow up with them, plus the rest of the planned sample of 2,000 final-year students, to find out what kind of work they have been able to get, or what else they are doing.


Those familiar with the challenges of youth employment in sub-Saharan Africa will recognise the relevance of this type of study. As elsewhere, Mozambique’s demographic structure means that hundreds of thousands of young people are entering the labour market every year, but good jobs seem few and far between. Aside from anecdotes and ‘guesstimates’, little is known about what is really happening in the labour market here, or about the pathways leading to different job outcomes. The last comprehensive labour force survey was undertaken in 2005 and the last national household survey in 2014/15, just before a debt-fuelled economic crisis erupted.

Different approaches

What has this got to do with the recent Nobel in economics? At a basic level, our survey chimes with the commitment of the winners to collect high-quality empirical data of concrete relevance to the lives of the global poor. It may sound surprising, but before the 2000s this kind of bespoke microeconomic survey work was not so common in low-income contexts. So the (indirect) recognition given to these activities by the Nobel committee is welcome indeed.

But there is a difference between our work in Mozambique and the research agenda of the Nobel winners, sometimes called ‘randomistas’. Among their many contributions, they have shown how randomised controlled trials – often used in clinical research – can be used to test the effectiveness of development interventions. Their emphasis on so-called ‘clean identification’ (via experiments) has helped shed light on many topics, from free bed nets to de-worming pills and better hygiene practices.

Where we differ is that our baseline study is not the first phase of a randomised trial or any other form of field experiment that seeks to identify ‘what works’ to improve school-work transitions. Our study represents the more bread-and-butter activity of collecting broadly representative data to begin to understand the complex social and economic phenomenon of youth (un)employment.

It is just the start of a process of bringing some evidence to the table, from which we hope further debate will ensue.

Different types of knowledge

In this light, it’s helpful to distinguish between the different kinds of questions social scientists – including economists – set out to answer. At the start of a research agenda, we are often interested in understanding what can be termed the ‘causes of effects’. This is the attempt to get a grip on the (likely) multiple mechanisms that produce outcomes, such as: why is youth unemployment so prevalent in Mozambique? Is it (low) technical skills, barriers to internal migration, search frictions, macroeconomic management, or demand-side failures?

Different combinations of these factors tend to be present in each particular case. Thus, careful diagnostic analysis of the nature and form of the problem at hand is an essential first step.

Just as in medicine, the search for treatments tends to come after a diagnosis. And it is here – typically, at a later stage in a research programme – that experimental manipulation can play a critical role and help answer questions about the ‘effects of causes’. For example: can a wage subsidy help tackle youth unemployment?

Arriving at rigorous, concrete answers about the effects of specific interventions or events is vital. The pioneering work of the Nobel prize winners, and the movement they have led, has greatly added to the methodological toolbox of development academics and practitioners. A fair assessment of much previous work in development economics, against which the Nobel winners explicitly pushed back, was that it often lacked rigorous identification of causal effects.

Experiment with care

But experimentation typically provides us with one fairly narrow kind of knowledge – i.e., how does thing X affect outcome Y? It’s also a tool to be used with great care. Testing different ‘treatments’ is expensive, time-consuming and ethically tricky, particularly when undertaken on a large scale. So, unless there is a solid diagnosis pointing to the consistent operation of a specific mechanism – one that can be manipulated experimentally and that a proposed treatment can reasonably be expected to address – testing that treatment is wasteful, at best.

This is not trivial. In our school-to-work study, we considered including an experimental component. But on reflection, there was no clear diagnosis pointing us towards even a minimal set of practical interventions suitable for experimental testing. And there was even less clarity on whether any feasible interventions could garner sustained public policy support.

Of course, we might have drawn on experiences from other countries or topical academic debates to come up with a clever experiment. But then the focus of our work would have shifted away from diagnostic understanding towards making a purely academic contribution (or, more bluntly, the enhancement of our academic careers).

More diagnosis as well, please

This gets to the core point. The Nobel committee made an excellent and well-deserved choice. But it is not self-evident that the bulk of the gains in knowledge that can support more effective social and economic policies in the developing world are going to start and end with clean(er) identification of the effects of specific causes.

Other types of knowledge should not be disregarded. While I am sure the ‘randomistas’ would agree (who wouldn’t?), the allure of clean causal identification, especially to achieve top publications in academia, can distract us from grappling with the messier challenges of understanding and diagnosis. As philosophers have long pointed out, socio-economic outcomes are far more contingent than those studied in the medical and natural sciences. So, improving our diagnostic capacity is not a minor challenge. And, in keeping with other fields, to make progress on understanding and diagnosis it is not helpful to take randomised controlled trials as the only benchmark.

To be clear, experiments have their place. In the right contexts, they can help distinguish between competing diagnoses. But in Mozambique, creating and sustaining a broad church of evidence, of different types and from different perspectives, is also vital to address developmental challenges. Training future economists and policy-makers to put diagnosis before treatment, and to value different forms of evidence, should not be forgotten.

The views expressed in this piece are those of the author(s), and do not necessarily reflect the views of the Institute or the United Nations University, nor the programme/project donors.

This article is republished from The Conversation under a Creative Commons license. Read the original article.