Orthodox economics has long come in for criticism regarding its presumptions about the world, especially those regarding human decision-making. The question is, if economics is to become more empirically attuned to real-world behaviour, how should it seek to do so? Equally importantly, to what extent does it draw on existing power structures (including some of questionable legitimacy) to acquire more fine-grained behavioural insight, and, if it does, how can it then offer any critical feedback to those structures, if at all?
It was these sorts of questions that motivated the agenda for the third Spaces of Evidence Seminar, held at Goldsmiths, University of London on 26th September 2014. The seminar was entitled Trials and Tribulations of Economics: New Directions for Economic Policy Evidence, and brought together economists, policy thinkers and sociologists. Here I offer some reflections and thematic summary, drawing on what was discussed over the course of the day (1).
Which professional metaphor?
One of the recurring issues over the course of the seminar was that of the metaphors that are used to justify certain roles for economists in the policy-making process. In his paper, ‘Evidence for Control’, Tiago Mata discussed the recent work of Cass Sunstein, which proposes that government design ‘choice architectures’ to promote certain behavioural responses.
Mata highlighted the ways in which this agenda implicitly revives the work of a number of early social psychologists in the United States who, inspired by Veblen and critical of neo-classical economics, developed a policy-oriented science of ‘social control’ in the early 20th century. Government acts as the rationalist ‘architect’; the public can then act impulsively within the crowd. Mata explored how notions of ‘open data’ are now an integral part of this much longer-standing vision of the function of public policy. Data ‘architectures’ become the guarantors that policy can be made in a rational way, based on aggregations of millions of emotional and non-rational choices.
A more frequently proposed metaphor for economic policy today is that of the medical doctor. To some extent, this vision is threaded through the agenda for economic policy field trials and randomised control trials (RCTs), which was discussed a great deal over the course of the seminar. The importing of a medical epistemology into economics does appear to have a certain amount of traction right now, at least on a rhetorical level (as I mentioned in my introduction to the seminar, it is interesting that George Osborne has repeatedly employed medical metaphors when defending his economic policies).
In his discussion of the What Works Centre for Local Economic Growth, where he is Deputy Director, Max Nathan mentioned how the metaphor of a ‘NICE for policy’ had originally inspired the creation of the What Works centres, but that this metaphor inevitably obscured the complexities and complications involved in trialling and evaluating public policy.
Politicians and civil servants may wish that economic evidence could be as clear-cut as physiological evidence (to the extent that that is clear-cut either), but Nathan outlined various reasons why this is scarcely ever possible. The nature of viable policy evidence depends heavily on the nature of the issue or the policy at hand, and there is no reason why RCTs should be held up as superior in general.
In his critique of RCTs in economics, Angus Deaton discussed a case in which he believed RCTs had indeed helped to develop a workable policy, a case involving the Manpower Demonstration and Research Corporation in the early 1970s. And yet, for Deaton, economics was in this case “working as engineering, not as science”. He admitted there are cases when economics is an active force in constructing economic institutions (much as the ‘performativity’ tradition of Michel Callon has explored), but that these were uncommon. By and large, economists are better off restricting themselves to a scientific vocation, rather than some broader professional or public purpose.
Field trials, as far as Deaton is concerned, can perform important political functions, much like legal trials, if the goal is to win an argument or achieve a consensus. But epistemologically, they rest on forms of “magical thinking”, namely that randomisation can reveal forms of truth that deliberate investigation of causality cannot. Randomisation is based on paranoia surrounding the threat of bias, but in the process encounters far more problematic statistical and epistemological challenges.
Why are professional metaphors important? Why do we want to speak of economists as ‘architects’ or as ‘medics’ in the first place? Perhaps the answer is that, at our current political and cultural juncture, we want economists (and other experts) to leave their ‘ivory towers’ or government offices, to enter the messy world of economic activity, and make sense of it. Hence, there’s this pressure to redefine economics along the lines of Victorian professions, as a mode of expertise which is wedded to an engaged, public vocation to build or deliver something useful. The metaphor of the medical doctor suits policy-makers today, because it’s entirely clear that doctors need to do something; they can’t just sit there observing the situation and applying theories.
Political and technological expediency
From a pragmatist or STS perspective, the social and political function of something like a field trial has always been crucial to its legitimacy and performativity. Forms of knowledge which work in influencing decision-makers, which work in winning arguments, and which are politically and economically viable, will inevitably achieve a higher profile in the policy process than those which are merely theoretically credible. But this then introduces the question of political and technological expediency: are behaviourists and field trial randomisers pursuing the path of least resistance? To what extent are economists equipped to navigate the messy and sometimes hostile world of institution-building?
A relatively uncontroversial example of methodological expediency was outlined by Suzy Moat, in her presentation exploring how online data sources (Google search terms, Wikipedia views and edits, etc.) could be used to understand changes in market prices. The audience discussion of her presentation focused on some relatively obvious limitations of these data sources as indicators of offline behaviour, such as the fact that powerful financial traders would be unlikely to use Wikipedia as a source of market information. Moat accepted the limitations, though had various ways of trying to work around them.
And as she explained in her introduction, these data have certain unrivalled advantages: they are very cheap to use, they have global scope, they are extremely up-to-date and they are ‘naturally occurring’ (in the sense that the data are accumulated by default). It was a fascinating insight into what big data might mean for economics in the coming years. As ever with big data, the question is what precisely researchers can get access to. The data used by Moat may not be the very best in existence for the effects she was trying to show, but they are publicly available for scholars to use, which is key.
This issue of expediency has some far more controversial manifestations. Dawn Teele’s paper, asking whether field trials are ethical, suggested that too often trials are conducted in ways that exploit vulnerable populations, because that is cheaper, easier and out of the public eye. Teele questioned why so many field trials occur so far away from the researchers’ home institutions. At present, RCTs in public policy interventions have no code of ethics, and experimenters resist the idea of ‘informed consent’ for methodological reasons. Teele argued that this vacuum of guidelines and principles needed filling.
The one paper which reflected explicitly on economics as a ‘performative’ tool was that of Vera Ehrenstein, exploring the role of economists in developing a new policy solution to the problem of deforestation. The case she discussed is a classic instance of economics operating as an intervention, not only in the provision of evidence for policy, but in the very design and construction of policy. In this case, the problem of deforestation was represented as one of incentives: there is insufficient benefit in not destroying forest. The apparent solution to this is to offer payments to nations that can reduce deforestation. But this poses the question of what the ‘baseline’ should be, that is, at what point payments should kick in.
Part of Ehrenstein’s argument was that, despite the claims to objectivity on this question, economics failed to achieve a high degree of disciplining power over the policy. The ‘baseline’ had to be set at different levels in different contexts. As it became more entangled with the realpolitik of international climate change policy, so its forms of calculation had to adapt to different national circumstances. The implication is that when economics does become politically performative, so it sacrifices some of its calculative authority. Reinforcing Nathan’s argument (though from a very different theoretical angle), she showed how context invades the frame of analysis and alters how expertise works.
Another risk of experimenters following the path of least resistance, therefore, is not that they necessarily exploit vulnerable populations, but that they lose the capacity to discipline or resist rival political strategies. Speaking as economists, both Deaton and Nathan recognised that one beneficial aspect of field trials can be that they help discredit policies that are politically motivated. This can be an important achievement, but it doesn’t in itself help identify what policy-makers should do instead.
Deaton concluded with a recent case of an economist who had transformed health policy thinking about public defecation in India, using a great deal of observational data, but no trials whatsoever. There are clearly different ways of entering the ‘field’, which do not necessarily involve seeking to reconstruct it for methodological purposes. This poses the question of why RCTs have elicited so much policy excitement in the first place.
Speaking truth to power, or constituting truth by power?
The mood of the seminar, as I suggested in my own introductory remarks, was plainly a sceptical one, though not necessarily any more sceptical than the epistemology of the randomisers or experimenters themselves. To perform field experiments is already to be doubtful of a policy’s viability and legitimacy. Accompanying RCTs is an empiricist form of scepticism, which hopes that data might ‘speak for itself’ in some way, an aspiration that also accompanies some of the hype surrounding big data. Traditional forms of expertise (in policy and the social sciences) are treated with suspicion, hence the need to try things out in the field, to avoid bias.
While one might welcome the idea of the state opening itself up to evidence in this way (as the What Works centres intend), the question is whether empiricism can still win the day where matters become politically controversial in any way. And in those cases where this form of empiricism can win the day, is this really because the objects of research are too powerless (or distant, invisible or politically irrelevant) to generate any controversy? The methodological and the ethical concerns bleed into one another. Eventually, it is impossible to duck the more constructivist critique of behaviourism, which is that experiments cannot be conducted without some political inequality between the experimenter and the experimented.
One interesting aspect of these debates is that, for various reasons, they play out entirely differently in relation to the state, from how they play out in relation to corporations. Because we have certain normative expectations of the state (legitimacy, democracy etc) the notion of government experts reconstructing the world for methodological purposes seems deeply worrying.
And yet, as was mentioned more than once during the seminar, manipulation is a common feature of how corporations operate, in workplaces, via social media platforms, through advertising and market research. We seem to retain some hope that the social sciences will speak truth to power, where the state is concerned; but have become resigned to the notion of them constituting power where business is concerned.
This dichotomy may be dissolving at present, especially in areas of necessarily public-private partnership such as ‘smart cities’. Moat’s presentation demonstrated what data produced by giant corporations can offer university scientists, and potentially policy-makers as well. Meanwhile, economists entering the ‘field’ (be they modelled as ‘architects’, ‘medics’ or ‘engineers’) encounter messiness that places a strain on the discipline of their theories and methods. As these different professional identities mingle, so new forms of ethical and disciplinary codes will need to emerge.
Will Davies is a Senior Lecturer at Goldsmiths, University of London, and one of the conveners of the Spaces of Evidence seminar series.
(1) This is Will Davies’s own reflection on and interpretation of the day, aimed at spotting connections between the papers. This summary should not be seen as an exhaustive representation of what each speaker said!