
The Man with the Rawlsian Tattoo

by Becky Clark


How does artificial intelligence (AI) shape our understanding of democracy? There are few better people to ask than Dr Ted Lechterman, a political philosopher at Oxford’s new Institute for Ethics in AI, who is currently teaching a brand-new course in the ‘Ethics of AI and Digital Technology’. His work focuses on the fraught relationship between democratic principles and recent social trends, ranging from the rise of AI to billionaires’ self-congratulatory turn to philanthropic giving.

‘So, is 2021 a turning point for AI?’ I ask. He qualifies his answer with a self-deprecating caveat: ‘people who identify as philosophers are bad at making predictions about the future’. As a philosophy student, I find myself both nodding in agreement (frustrated that philosophers tend to bury themselves in the writings of niche historical figures and abstract paradoxes, with little real-world application) and bristling with indignation (good philosophy should engage with empirical literature).

Somewhat surprisingly, Lechterman holds a broadly hopeful view about our ability to overcome the ethical problems that arise from AI. For one thing, he ventures, criticisms of AI have gained prominence in public discourse, creating greater pressure on tech firms and governments to respond; what’s more, ‘some of the particular problems with AI are a bit tractable... I am relatively optimistic about making progress on things like bias and explainability and transparency, at least in democratic contexts where societies are heavily invested in ensuring that the technologies that they use and that run their lives have some sort of public justification. Where I am worried about AI is in non-democratic contexts, especially in authoritarian societies where there are very few constraints on the development of this technology.’


Lechterman proceeds to rattle off various ways in which AI offers a ‘tremendous source of power for law enforcement agencies’. Autonomous weapons, placed in the wrong hands, could become weapons of mass destruction. Surveillance technologies could easily be used for wrongful ends. (For instance, reports suggest that facial recognition is currently being used in China to identify members of the Uighur ethnic minority to facilitate their persecution; according to Human Rights Watch, ‘one technical requirement of the Chinese Ministry of Public Security’s video-surveillance networks is the detection of ethnicity — particularly of Uighurs’). Roxana Akhmetova, an Oxford DPhil researcher in Migration Studies, notes that replacing human border control enforcement with AI also has a ‘great potential to increase the efficiency of producing discriminatory decisions in the immigration sphere’.


I begin to wonder if Lechterman has too much faith in democracy. After all, Akhmetova is talking about the Canadian migration system — and Canada is standardly considered one of the most progressive nations in the world. Imagine what such a system might look like in the hands of Donald Trump. In fact, we don’t have to imagine: surveillance technologies are already employed in major American cities (and it’s not just law enforcement who want in — private landlords such as Nelson Management Group have also attempted to join), and facial recognition software is currently used by US Immigration and Customs Enforcement (ICE) to keep track of almost 100,000 immigrants.

I suspect that Lechterman, as a political theorist, would not regard any of these countries as ‘true’ democracies relative to an ideal model of democracy. (According to the political theorist Joshua Cohen, the three core features of genuinely democratic politics are equality amongst citizens, a public conception of the general good, and the public deliberation of citizens with the aim of promoting that good.)


Yet even if we believe that AI would not be misused in ‘truly’ democratic contexts, it is nevertheless important to recognise the existence of authoritarian spheres within contemporary democratic societies. In her book Private Government, the philosopher Elizabeth Anderson contends that private firms are analogous to small-scale authoritarian regimes, since they are typically not governed by democratic norms and workers lack many rights. Should we be equally concerned about the use of AI to track employees and make hiring-and-firing decisions?


Whilst Lechterman admits that AI is currently used for unjust purposes in the workplace, he again appears optimistic that this is only a transitory problem, owing to ‘a growing interest in workplace justice’. I am suspicious of any argument which simply concludes that ‘greater public scrutiny of [insert phenomenon] will solve [insert problem]’, but I let this slide for now.


In the late 1990s and early 2000s, there was a great deal of optimism regarding the ways in which technology could widen access to political deliberation and hold governments more accountable. This optimism has now given way to concerns about electoral manipulation by bots, information bubbles, and other apparent political threats to democracy. ‘Meanwhile,’ Lechterman notes, ‘the philosophical attention to AI has been focused either on extreme long-term risks (AI takeover, various catastrophes), or concerns about bias and explainability in the deployment of narrow AI. There has been relatively little reflection about how AI specifically can be used to enhance or undermine democratic decision making, or how AI should be governed in accordance with democratic ideals. Those are two areas which are worth thinking more about.’


How could AI enhance democratic decision making? Harking back to the optimism of the 90s, Lechterman stresses that AI can be used to facilitate greater participation in political decision-making and represent those who are otherwise excluded from political participation. In the style of a classic academic philosopher, Lechterman now reaches for a thought experiment. Imagine, he says, that each citizen has a democracy bot. This need not come in the form of a physical robot. The idea is rather that everyone downloads a piece of software where they then provide inputs (for instance, by answering a survey, giving the software permission to access their social media profiles and online shopping habits, and so on), from which the bot can elicit people’s political preferences. In essence, the bot creates a finely grained political profile of you. In the best version of this proposal, you have an opportunity to deliberate with the bot to see whether the profile it has constructed is something that you reflectively endorse. Once you have approved the political profile generated by the bot, that profile can then be combined with other individuals’ profiles and applied to legislative or administrative questions that are being asked in real time.


Of course, there are many, many objections to this kind of algo-cracy. (What if the bot misrepresents your political preferences? What if the existence of democracy bots means that citizens will engage less with each other in reality? Would these bots create an even greater incentive to manipulate voter preferences?) Lechterman stresses that he is not endorsing this proposal, but he nevertheless finds the thought experiment useful for contemplating the possibilities that AI brings and why we value democracy in the first place.


I ask Lechterman about his biggest intellectual influences. To my surprise, he responds by rolling up his sleeves to proudly reveal a sprawling Rawls-inspired tattoo. Two images sit beside each other: on the left is a graph displaying two lines, one at a 45-degree angle and the other plateauing below it; on the right is an eerie depiction of an owl. ‘This is the difference principle, and this, of course, is the owl of Minerva.’ (… Of course!)

John Rawls is commonly regarded as the most influential analytic political philosopher of the 20th century, whose seminal work A Theory of Justice is almost guaranteed to feature in any introductory course to political theory in the Anglosphere. The most famous of his principles of justice, the so-called ‘difference principle’, states that socio-economic inequalities ought to be arranged ‘to the greatest benefit of the least advantaged’. The emblazoning of this ideal on Lechterman’s arm is just one of many giveaway clues to his political inclinations; throughout the interview, he speaks in Rawlsian prose, endorsing a ‘property-owning democracy’ and defending the value of ideal theorising within political philosophy.


‘It represents — I think it represents — a conflict between justice and knowledge… We have these principles of justice, and to some extent that is the purpose of political philosophy: to help us think through these important moral questions. But the owl of Minerva, of course, Hegel says, only spreads its wings and alights at dusk, meaning that philosophers only come onto the scene too late, and can only understand the world retrospectively; that philosophers do not have enough knowledge to engage in any kind of idealisation or to offer advice for reform. Political philosophy is just about reconciling ourselves to the status quo that already exists.’ This seems a rather depressing conclusion for someone who pursues academic philosophy for a living. For Lechterman, however, recognising the severe limitations of one’s own theorising is imperative.


Lechterman’s new book, The Tyranny of Generosity, sketches the dangers that philanthropy poses to a functioning democracy. These are threefold. Firstly, philanthropy ‘can be a way of privatising what should be collective decisions about public matters’. Secondly, it ‘can be a way of amplifying or augmenting the voices of the rich over important public decisions, to the exclusion of people who are poor or less well off’. Thirdly, it ‘can give the dead excessive control over the living and unborn’.


I find this final objection to philanthropy particularly striking. He elaborates: ‘The structure of foundations and many instruments of philanthropic giving are ways of allowing people who lived in the past to constrain how resources are used in the future, with bequests and trusts and various other legal instruments.’ (Could this complaint not similarly be levelled at wills more generally? This problem speaks to the wider question of why we should respect the wishes of the dead.) ‘In all of these settings, on the one hand, we have an appearance (and I think that it is authentic) of generosity, and nonetheless, these acts are in tension with various aspects of the democratic ideal.’


Chapter Six of his monograph is entitled ‘The Effective Altruist’s Political Problem’. Roughly put, Effective Altruism (EA) is the doctrine that, if we are morally required to give, then the reasons that we have for this are also good reasons to give in the most effective way. Oxford is the beating heart of the EA movement. Indeed, the term ‘Effective Altruism’ was initially devised to serve as an umbrella name for two Oxford-centric organisations: Giving What We Can (co-founded in 2009 by Oxford philosopher Toby Ord, with the aim of encouraging people to give 10% of their income each year to alleviate world poverty) and 80,000 Hours (co-founded in 2011 by Oxford philosopher William MacAskill, with the aim of providing high-impact career advice for young people). Oxford-based EA-affiliated institutions such as the Future of Humanity Institute and the Global Priorities Institute are currently run by Oxford philosophy professors (Nick Bostrom and Hilary Greaves, respectively). I wonder how Lechterman feels about EA serving as a major dividing line within his own faculty.


Lechterman sighs. ‘I have tortured views about Effective Altruism,’ he confesses. He welcomes the appeal to both evidence and moral reflection when it comes to deciding which social interventions to fund. Moreover, he subscribes to many of EA’s conclusions, whether that is about the importance of accounting for animal welfare or combatting climate change. However, Lechterman continues, ‘Effective Altruism can be a way of privatising what are really important political questions, and also a way of dominating or subordinating vulnerable recipients or those working on behalf of the vulnerable.’


For instance, consider the effective altruist who donates a sum of money towards the purchase and distribution of mosquito nets in a country within Sub-Saharan Africa. ‘You can think of this as the effective altruist saying: “We would like to help this community, but we will only do this on our terms.”’ Of course, there might be a range of different interventions which are efficacious — yet other voices (locals, development experts) do not count in the EA calculus. Donations are subject to the whims of EA leaders and place recipients in a position of dependency. Lechterman’s hope is that ‘the political angle invites effective altruists to think more about how they exercise power over receiving communities and also amongst other people who might be in a position to help, and have different views about what is needed’.


Given his critiques of philanthropy, I expect Lechterman to be critical of the reliance of many governments on the charitable generosity of the public (for example, through the proliferation of foodbanks) throughout the COVID-19 pandemic. In my mind’s eye, I see Captain Tom hobbling across his garden to fundraise for the UK’s National Health Service and, in the process, turning an underfunded public service into a charity case; this depoliticises the issue and diverts responsibility away from the government.


To my surprise, Lechterman responds that a global emergency such as a pandemic in fact offers a good case for philanthropy: ‘there is a certain amount of risk-prevention and disaster-mitigation that, even in the best circumstances, no government can fully account for or be prepared for. It’s really helpful to have a sort of reserve force of people with means and strategies and existing infrastructure to be able to pitch in.’ He continues, ‘the problem, of course, is to treat emergencies as normal… to not make the proper investments in the public infrastructure because we are relying on philanthropists to fill the gap’.


With intellectual commitments strong enough to compel him to get a tattoo inspired by the philosophical giants Rawls and Hegel, Lechterman cuts an unconventional figure, and one that gives pause for thought. How many of us have the conviction to get inked for our political principles?


BECKY CLARK reads for a BPhil in Philosophy at Balliol College. Her strongest conviction is that no cinema trip is complete without pic 'n' mix.


Art by Jemima Storey.
