
The tech author says we haven't given enough thought to how these profound shifts affect us psychologically and as a society

With experience in both the tech world (he was Nokia’s head of user interface design) and academia (as a senior fellow in the LSE’s urban studies centre, LSE Cities), Adam Greenfield is well placed to analyse the political ramifications of today’s cutting-edge technologies. Below, Mathew Lawrence, senior research fellow at the Institute for Public Policy Research, questions Greenfield about the unrecognised effects that today’s technologies – from smartphones and 3D printing to AI, Bitcoin and blockchain – have on 21st-century consciousness, and the possibilities they unearth for a better world.

Lawrence: One of the central arguments of your superb new book, Radical Technologies: The Design of Everyday Life (Verso 2017), is the need to ask what new and emerging technologies do, what inequalities they create or reproduce, and what our blind spots are when we look at contemporary technologies. I want to start by going through a series of technologies you explore in the book and unpick what they do and their implications for politics and power. Starting with the smartphone, you describe the device as a ‘roiling envelope of contestation’ that mediates our lives through the network. Can you delve a little deeper into this ‘sovereign artefact’ of the contemporary economy?

Greenfield: I don’t know if you remember how Steve Jobs originally introduced the concept of the iPhone. He did it in a really clever way. He started by saying, “We’d like to announce a couple of different products today: a touchscreen iPod… a revolutionary mobile phone… and a breakthrough internet device.” And then he repeated it a few times, in this sort of incantatory rhythm: “a touchscreen iPod, a phone, and an internet device.” After a couple of repetitions, people in the audience started to get it, and that’s when the applause really kicked in. That was the moment they understood for the first time that he was referring to one device that did all those things.

This is both the power of the smartphone, and the key to its protean nature. It is simultaneously an interface artefact, a sensor platform, a tool for personal expression and a remote control for whatever services are accessible from and to the global network. It’s one object that functions well in an extraordinary number of different modes.

There’s a conscious level at which we purposively engage with a phone, and then there are all of the things that it’s doing in the background, that we tend not to be consciously aware of. So when you’re walking down the street having a conversation on the phone, it’s simultaneously recording your location, plotting that location on a virtual map, comparing it to the locations of venues of interest or other people in your social network, and so on.

Now, some of this is driven by the mechanical necessities of communication — for example, location recording results from your phone establishing “handshake” relationships with cellular base towers. And some of it’s simply an artefact of the business model, like most of the data analytics. But what’s of most concern to me is that this entire mode of being we enter into when we engage the world via smartphone exerts a remarkable influence over our behaviour, and the interface through which we mediate the world in all its diversity and complexity draws from a relatively circumscribed vocabulary, generated by a remarkably small number of people and institutions. And a consequence of this is that, while they may be doing ten thousand different things for us in the course of a day, our smartphones are all the while reproducing a very distinctive affect, and that affect corresponds to a particular ideology that was developed in a particular place, at a particular time in history, by identifiable individuals and institutions. I find it extraordinary that there’s been relatively little attention paid to what that implicit ideology is.

Lawrence: It seems as if the Silicon Valley nexus of financiers and developers, and its ideology, find themselves in a hegemonic position in our current conception of capitalism. Yet, as you say, there is little discussion of this. Why do you think that is?

Greenfield: I suspect it’s because so many of us know full well that we’re complicit in it. I truly cannot imagine life in 2017 without my smartphone or the pleasure, utility and convenience it furnishes me. So we aren’t particularly inclined to go digging into the matter. God forbid we ever thought deeply about it, and were confronted with the idea that it might be better for us if we don’t use this thing.

Neither do I think there is a way we can meaningfully opt out at this point. We produce data; we are subjects of data. That data will be mined for actionable inferences, and used to generate value for some third party, and condition the state and posture of all the networked systems we confront in everyday life. And it will do all of this whether or not we choose to actively participate.

Lawrence: Moving on to 3D printing and digital fabrication. As with many of the radical technologies you examine, its potential is described in emancipatory rhetoric: that it could democratise the means of production and end scarcity. What are the barriers, material or otherwise, that might stand in the way?

Greenfield: Of all the technologies I discuss, I think digital fabrication brings us closest to being able to realise genuinely utopian political-economic ambitions. We could, in principle, use these devices to democratise and distribute the ability to shape matter to virtually anybody on Earth. But in order to do that, you’d need very cheap energy. You’d need low- or no-cost feedstock, and that feedstock would itself need to be sustainably produced.

But many of the conversations around 3D printing utterly ignore things like the cost of assembly labour, or how very few of us actually have the ability to articulate and then produce a 3D design template. I mean, I’ve been around these technologies a long time, and I can’t even do this. I can use Illustrator and Photoshop, sure, but I can’t do jack in a 3D programme, even something really basic, like, say, a spoon. It would be really difficult for me to say, “I want a spoon just like that one there, I’m going to gin up a design for that, and have it output.” So, right there, we can see that there’s a cascading set of requirements and preconditions for the widespread adoption of these technologies that the rhetoric just isn’t taking into account. Until those are taken into account, entirely conventional fulfilment of requirement or desire via the market and the commoditised form of matter is entirely safe. The market has nothing to worry about.

We do a disservice to the radical possibilities that are bound up in this set of technologies by not considering all these preconditions. Those who are committed to realising the vision of distributed production need to be clear that the fabrication itself is actually the easy part; it’s everything stacked up behind it that needs designerly attention. If those issues could be addressed, I think yes, a fundamental shift in the political economy of matter is achievable. But we’re not anywhere near that yet.

Lawrence: This is a key theme of your book: that, to enable greater abundance and flourishing, the challenge is not about technical feasibility, but the politics that shape the use of technologies and the institutions we construct. With this in mind, can you explain what blockchain is, and its problematic political appeal?

Greenfield: It almost doesn’t matter what the blockchain actually is or how it works, because it doesn’t do what the popular media and therefore the popular imagination understand it to be doing. But at root, what it’s doing is articulating a protocol for the calculational establishment of reliability — that if someone asserts that a given transaction or document or artefact has a specified provenance, then that assertion can be tested and verified to the satisfaction of all parties, computationally and in a distributed manner, without the need to invoke any centralized source of authority.

Now, the blockchain does this because the people who devised it have a deep-seated hostility to the state, and in fact to central authority in all its forms. What they wanted to do was establish an alternative to the authority of the state as a guarantor of reliability, and in fact to found the reliability of assertions about provenance outside of history entirely. But there’s nothing transhistorical about calculating processes. That’s the first blunder that I think is made in the account that we’re generally offered of the blockchain.

If the blockchain certifies that I have a certain amount of Bitcoin in my account, and I haven’t spent it anywhere else, the transaction by way of which I pass that amount to you is validated, and that amount of currency winds up being transferred to your account. We don’t have to trust any institution of state to make that finding of validity. It’s all established computationally. But this procedure is still opaque, perhaps more opaque than any process of state, and certainly not accessible to any process of ordinary democratic review. So this is not in any way a trustless architecture, as it is so often claimed to be. It merely asks that we repose trust in a different set of mediating agencies, institutions and processes: ones that are distributed, global, and are in many ways less accountable. And above all this is because the blockchain and its operations are even more poorly understood by most people than the operations of state that they purport to replace.
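The mechanism Greenfield describes, in which validity is established computationally rather than by an institution, rests on chaining cryptographic hashes. A minimal sketch in Python (a toy only: it omits signatures, proof-of-work and consensus, and the transaction fields are invented for illustration):

```python
import hashlib
import json

def block_hash(block):
    # Serialise the block deterministically, then hash it.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(prev_hash, transaction):
    # Each block commits to the hash of the block before it.
    return {"prev_hash": prev_hash, "transaction": transaction}

def verify_chain(chain):
    # Any party can re-check provenance without a central authority:
    # every block must reference the hash of its predecessor.
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

# A tiny chain of transfers (names purely illustrative).
genesis = make_block("0" * 64, {"to": "alice", "amount": 50})
b1 = make_block(block_hash(genesis), {"from": "alice", "to": "bob", "amount": 20})
chain = [genesis, b1]
print(verify_chain(chain))   # True: an intact chain verifies

# Tampering with history breaks every later link.
chain[0]["transaction"]["amount"] = 5000
print(verify_chain(chain))   # False: the rewrite is detectable by anyone
```

The point of the sketch is exactly Greenfield’s: the “finding of validity” is pure computation, which is also why it is opaque to anyone who cannot read that computation.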

I don’t find that to be a particularly utopian prospect. In fact, there’s something shoddy and dishonest about the argument. What the truly convinced blockchain ideologues aim to do is drain taxation and revenue away from the state, because — in the words of Grover Norquist — they want to shrink the state to the point that it can be drowned in a bathtub. This is their explicit ambition. Why shouldn’t we take them at their word?

Lawrence: You also probe the algorithmic management of economic life via the application of machine learning techniques to large, unstructured data sets. You argue that this is a huge and unprecedented intervention in people’s lives, despite us lacking knowledge on how crucial decisions made about our everyday existence are reached. How has this come to be and why is it so un-interrogated? Is there any way of democratising the algorithms that are increasingly governing our lives?

Greenfield: What we need to get clear about is that the justification for an outcome is irretrievable from the neural networks involved in this kind of algorithmic decision making. There is no way for such a system — which Frank Pasquale rightly refers to as a ‘black box’ — to account for the specific triggers that decide an allocation of resources, or the adjudication of an outcome.

In the past, when approaching a bank for credit, there was presumably a loan officer who would have said, “Well, the guy’s got a great credit history, and he’s got references from all of his previous lenders. I chose to take a decision based on those credentials.” Or conversely, “He looked shifty. His credentials were flaky. His paperwork didn’t check out. That’s why I denied him the loan.” And maybe — not generally, in fact not particularly often at all, but at least in principle — that human loan officer was accountable for such decisions. And there existed — again, in principle — procedures to reverse decisions that didn’t really pass the institution’s or the larger society’s smell test. But today, no one can tell you why your loan application came back the way it did. It could be something like whether or not you’ve chosen to fill out your application in block capitals, which evidently statistically correlates to other behaviours that indicate a higher or a lower propensity to repay your loans. And you’ll never, ever know that. All you’ll have is the result, the decision, and it won’t be subject to review or appeal.

Even the institution that relies on that algorithm won’t be able to say with any degree of assurance whether or not your handwriting was the triggering factor, or whether it was something else in the cascade of decision gates that’s ultimately bound up in the way this kind of network does its work in the world.
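The irretrievability Greenfield points to is visible even in a toy network. In the sketch below (Python; the weights, feature encodings and threshold are all hypothetical stand-ins for what training would produce), the decision emerges from arithmetic over numbers that correspond to no nameable reason:

```python
import math

# Hypothetical "learned" weights for a tiny two-layer network. In a real
# system there would be millions of these, produced by training, and no
# single weight would correspond to any human-legible justification.
W1 = [[0.8, -1.2, 0.3], [-0.5, 0.9, 1.1]]
W2 = [1.4, -0.7]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decide(features):
    """Return an approve/deny decision for a loan application.

    `features` is a vector of encoded inputs (say: income band, credit
    history, even whether the form was filled out in block capitals).
    Nothing in the computation names which input 'caused' the result."""
    hidden = [sigmoid(sum(w * f for w, f in zip(row, features))) for row in W1]
    score = sigmoid(sum(w * h for w, h in zip(W2, hidden)))
    return "approved" if score > 0.5 else "denied"

applicant = [0.6, 0.2, 1.0]   # the last entry: block capitals, yes
print(decide(applicant))      # prints "approved"
```

Change one input and the answer may flip, but the system can report only the score, not a reason; that is the black box Pasquale names.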

So I think that the call for algorithmic accountability, however correct and well-meaning it is, is misguided. I think that the sophistication of these systems is rapidly approaching a point at which we cannot force them to offer up their secrets. And we’re about to compound the challenge further. For example, AlphaGo and its successors are designing their own core logics. So increasingly, human intelligence is no longer crafting the algorithmic tools by which decisions are reached.

Lawrence: When DeepMind’s AlphaGo beat a Go champion, the champion remarked that they’d never seen a move like that in a match. It’s the eclipse of the human faculty and imagination.

Greenfield: Yeah. That ought to scare us half to death.

Lawrence: Advocacy of algorithmic management seems to be bound up in a politics of escape — that ‘smart’ cities and systems are preferable to the mess and contestation of democracy when organising our society.

Greenfield: I think that’s exactly right. And, you know, I’m just not a technocrat at heart. I believe in politics. I believe that in anything as complicated and as riven by difference as a city, there is and can be no such thing as a Pareto-optimal outcome. Not everybody can get everything that they want. It’s just not in the rules of the game.

What I find most troubling about the discourse of coolly detached, technocratic management bound up in the smart-city value proposition is the contempt it has for everyday politics, and for the seemingly obvious idea that we don’t all share the same set of values. Cities offer us each a different deal. And as I understand it, anyway, municipal politics is the process of roughly accommodating as many of those demands and desires as possible without doing too great an injury to the ability of other people to do as they wish. By contrast, technocratic management is a gigantic insult to the notion that people can articulate their own desires in a way that’s politically efficacious, and act upon those desires.

Lawrence: Automation technologies - machine learning, advanced robotic sensors, AI, the growing internet of things - constitute a coherent set of techniques and technical powers for an increasingly post-human economy in which labour is gradually made more superfluous to production. At the same time, discussions about the potential impact of automation are often breathless and overstate, at least in the short term, the likely implications, particularly for employment. Could you take us through how you think automation is likely to reshape our economy and society?

Greenfield: I’d like to start with a fairly basic power analysis of precarious, entry-level labour in the early 21st century.

You and I have just gone to buy coffee from a relatively upmarket coffee place. And if you remember, in that transaction, there was a device with a customer-facing camera mounted at the point of sale. I would wager, based on my experience, that this has been installed to eventually do away with the need for a card or payment device of any kind. It’s attempting to use facial recognition as our payment credential. That’s why there’s a camera there. I would be very surprised if it’s for any reason but that.

What was fascinating to me about that interaction was that the person who rang up our transaction didn’t know or care what that device was, became tangibly frustrated by my polite and rather gentle attempt to find out — even though there wasn’t anybody waiting in line behind us — and has no power to act upon the device or determine the circumstances of its operation in any way. The nature of her employment compels her, like any employee in a similar position, to accept whatever proposition the vendor of that technology has made to the employer.

There are all kinds of technologies that are being deployed, in the run up to automating away jobs entirely, that attempt to place brackets around the autonomy of the individual worker. People have virtually no ability to contest these things.

This has been the case since the beginning of Taylorist and Fordist labour operations, of course, which break what might have been a skilled craft job up into little deskilled modules that can be repeated. Workers can be trained up easily, and their performance can more readily be policed, measured and monitored. But I find that this is even more so in the contemporary workplace, and not only on the shop floor but in the preserves of what used to be thought of as more creative, intellectual or managerial labour. For example, in the United States, if your health insurance, or more broadly your access to any other essential service, is contingent on your having employment, you have virtually no platform on which to stand and contest the introduction of any of these technologies. It leads to a kind of learned helplessness, in which people simply do accede to whatever technology appears on the scene – we saw this just now in the café. The cashier was like, “Oh. Well, it’s just whatever the latest thing is. This is just how we’re going to do things now.”

There are two things going on, both of which are equally scary to me. The first is that people just aren’t curious about what this machine in front of them is doing, and above all why it’s been asked to do that. The other thing is that even if they do know, they’re just like, “Well, I don’t have the power to change that.” And on that assessment, given the politics of solidarity in the contemporary working environment, they’re probably correct.

Lawrence: If labour is imperilled, both because of emerging technologies and because of political, economic, and legal changes over the last 30 to 40 years, is that why you engage with the ‘fully automated luxury communism’ movement in the book? You take issue with the argument that it is an ethical imperative to move to a world in which much of labour is basically superfluous to needs, given the power imbalances laced through our economy.

Greenfield: I think I open myself up to charges of being sentimental here. I’ve been told flat out that some people find the book to be under-theorised on the question of precisely what is the human. That’s a fair critique, and I hope to address it in full in my next book.

One of the things I will say now is that I don’t think we’re prepared psychically or culturally for a world of full leisure. I don’t think we understand what a planet of nine billion people with a whole lot of time on their hands looks like. It is always possible that I’m generalising from my own idiosyncratic experience, but I need a project. I need to feel like I’m producing something useful for other people, or else I simply don’t feel very good about myself.

I’m certainly not arguing in favour of bullshit jobs or busy work. But I think that there is a naiveté in the articulation of what’s been called “Fully Automated Luxury Communism.” As a body of rhetoric, I can’t escape the kind of tongue-in-cheek-ness of some of it. I wish it were a little bit more serious, because then I could come to terms with it a little more seriously. And in particular I don’t think the folks responsible for developing this line of argument have really quite reckoned with what it’s like when each one of us can have anything we want whenever we want it. I don’t necessarily know if that’s good for human psyches.

Lawrence: The series of radical technologies you talk about may one day profoundly change not just the relationship between employment and the production of value, but the nature of a commodity and how we conceive of scarcity. Are we prepared for the foundations of society being changed in this way? If not, how do we prepare politically?

Greenfield: I think scarcity is more deeply ingrained in us than we understand. I’m not a nutritionist, for example, but I take it as given that the obesity problem so many societies are contending with largely results from the massive availability of calories beyond anything in the organic history of our species. For a number of decades now, most people in the global North have had access to absurd surpluses of caloric energy, at what is by any historical reckoning a ridiculously low cost, more or less whenever we want it, but this timespan is nothing compared to the millions of years over which we’ve been evolving. This is a useful point of comparison in grasping how challenging it’s going to be to unwire the hold that scarcity has on our psyche.

It may simply be that somebody my age — born when I was born, and into certain material and social circumstances — will never be fully at home with unlimited abundance in that regard, even if it is achieved. I don’t like to psychoanalyse people, but it is manifestly the case that the extremely wealthy people I know aren’t happy. Not that I feel particularly sorry for them compared to a lot of other people, of course, but I just don’t think that material abundance itself is the same thing as fulfilment. And this is something that’s been fairly well understood, actually. There are a great many spiritual traditions on Earth that have had a line on that for a long time. Maybe in the end we’ll all wind up Taoists.

Lawrence: Why do you think digital technologies, especially the internet, haven’t fulfilled their utopian promises and the boundless hope invested in them? What was it about the architectures that we developed, the policies we developed? And what can we learn about how to engage with emerging technologies?

Greenfield: I think a bunch of things happened: capital captured those technologies, first of all, and we misunderstood the necessity of maintaining institutional governance for them that was consonant with the values they were supposedly realising.

But I also think those technologies did succeed in doing what they were supposed to do, and in many ways the techno-libertarian project of 1993 or so was actually realised. I think that, in retrospect, the corrosion of the structures of solidarity that we’d historically been able to rely on for a very long time wasn’t merely a result of social processes, but also the result of new technologies appearing. You can trace a lot of it to the John Perry Barlow rhetoric of the early internet.

I was looking at an article in the New York Times yesterday about Venmo, which is a transaction platform that lives on your phone and allows you to settle up bills and send money back and forth on a peer-to-peer basis. The article explored the social implications of this technology for a generation of pub-goers who, instead of saying “Oh, I’ll pick up this round. You’ll get the next round,” settle up on the spot, in real time. Now, whether that’s intended or not, it’s atomising. It’s corrosive of the practices and traditions and etiquettes that between them constitute the lived experience of community, of interpersonal solidarity. Whenever technologies like this work at scale, they likely transform the communities of the generation that adopts them — in San Francisco, in New York, in every place where people use them. And before long, there’s nobody around who remembers any other way of doing things. The instinct, the rather lovely habit of buying your mate a round and letting them pick up the next one is lost to memory.

The fundamental thing about the Californian ideology which I think a lot of us missed is how well it corresponds with the needs of capital. And so barriers to its spread have been low. We are being turned into singular economic actors, severed from the possibility of any kind of mutuality or human sympathy with other actors. That this has been stimulated by technological change represents the triumph of the techno-libertarian impulses of 1993 — which, by the way, I certainly endorsed and championed at the time, out of an abundance of optimism and a headlong desire to get on with tomorrow. Mea culpa.

Lawrence: I want to turn towards how the left should develop new institutions and political economies to respond to and make the best of technological change. How do you ensure technologies don’t reflect and reinforce existing logics of accumulation and exploitation, but actually generate durable, shared value? I think of someone like the late Robin Murray who would say, “Well, actually, sociality is embodied within the network. It just needs to be decoupled from accumulatory cycles of capital.” What do you think?

Greenfield: I’m agnostic on that question. As a municipalist myself, I’m honour-bound to say that local ownership of certain technical infrastructures would likely lead to more positive social structures and an interaction with those technologies that’s more fruitful than any that currently exists. But I also have a nagging suspicion that networks of links between people who are only conceived of as atomic, sovereign individuals are hard to reconcile with a shared common future. I worry about that.

I would love to think that fifty years from now, people just like the ones we can see sitting on this terrace would be part of multiple, consciously-constructed collectives — some based on residence, others on identity, and still others affinity groups based on how they spend their days, in whatever framework replaces “jobs.” People would be imbricated in multiple kinds of networked condition in the course of their lives, and that would constitute the fundamental framework out of which they emerged as social and political beings. I would love to see that happen. Do I think it’s particularly likely? It has a non-zero chance of happening, but not a very large one.

Lawrence: One of the reasons this future appears unlikely is because venture-capital models of investment are dominant in driving, and in some ways cannibalising, the source and direction of social innovation. Are there different ways of organising investment that you think could bring a more fruitful configuration of how we develop and deploy technology?

Greenfield: I definitely think that the cooperative model, with its eye on the long-term stewardship of resources, is an interesting pathway to follow. It’s definitely, without question, preferable to the governance and investment structure of something like Uber, which has been able to decimate the prospects for truly public transportation and is predicated solely on the demands of its investor class. The venture-capital mania for high-multiple exits on relatively constrained timelines deforms our understanding of what technology is about and what it’s for. I regard Uber itself as little more than a Ponzi scheme, and yet it’s structured in a way that has allowed it to have a remarkable influence on the way people in municipal government think about moving people around. It’s been corrosive in many places, and has presented problems even for a city where public transit is as highly developed as it is in London, which is shocking to me.

It’s kind of weird to me, for that matter, that with all of the things that a network mode of social organisation ought to be able to facilitate, we haven’t been able to break out of these very standard models of ownership, investment and stewardship. It may be that somebody needs to conceive of cooperatives at the network scale before we have a test case, a testable proposition. I certainly don’t have the imagination or the chops to be able to articulate what that would look like. But very often, we’re limited in our ability to think our way through problems by the examples we have ready to hand. And the trouble is that right now, most of the people who are generating those hints come out of the Silicon Valley culture.

Lawrence: Many of these technologies are ultimately rooted in the capture, colonisation and monetisation of data that we create via use of the platform or the technology. How can we think differently about the governance and ownership of both data and data infrastructure? Are there different ways of conceiving of the infrastructures that sit behind these vast engines of capital accumulation?

Greenfield: Yes, there are, absolutely there are. The trouble is that so many questions that appear to be purely technical in nature present us with vexingly complicated social or psychological implications. I’ll give you an example. For many years, I was passionately involved in a movement calling for open municipal data. This movement started from a political and economic belief: that the power bound up in data collected by municipalities can best be leveraged by citizens when it’s available to all, when it’s not held restrictively by a government entity, and when it’s licensed and provided through an application programming interface (API) that allows our devices and our systems to grab that data easily and use it for other purposes.

That to me seems so clearly preferable to the existing model, where the data was held close, was restrictively licensed, and was provided only to the large corporate vendors who happened to be the municipality’s partners. I thought that, given that each of us generated that information in the first place, we would be better off having access to it and making such use of it as we would. That’s a very nice notion, putting to one side for the moment that it dovetails very nicely, again, with this neoliberal idea about where agency lives in the world. And then Gamergate happened.

One of the things we saw in Gamergate, which (again, another mea culpa) should have been obvious in retrospect, was how easily open data can be weaponised by people who do not feel themselves bound by the same social contract as the rest of us. Under our current way of doing things, if the salary or home address of a municipal worker is available through a government database, I can promise you that that same information will be used to subject people to targeted harassment and worse. It’s just not a tenable situation, and it’s nothing that I would want to endorse. And for the moment, it’s kind of stymied my ability to think of more humane and productive arrangements. If neither the restrictive release of data to favoured partners nor its open availability produces desirable social outcomes, what’s left?

Here, ironically perhaps, is where another use case for the blockchain arises. Maybe access to municipal data is free of charge, but individual requests for data are logged in a distributed ledger so anybody can see who’s asked for what, and what they’ve done with it. But that feels a lot like a technical fix for a social problem.
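In miniature, the logged-requests idea Greenfield floats might look like the following Python toy. Everything here is invented for illustration: the class, the field names, the example requesters. A real system would distribute and replicate the log rather than keep it in one process:

```python
import hashlib

class AccessLog:
    """Append-only log of data requests with a tamper-evident digest.

    A toy model of the ledger idea: access to the data itself is free,
    but every request is recorded where anyone can inspect it."""

    def __init__(self):
        self.entries = []
        self.digest = hashlib.sha256(b"genesis").hexdigest()

    def record(self, requester, dataset, purpose):
        # Fold each entry into a running hash, so rewriting past
        # entries would be detectable by anyone holding a later digest.
        entry = f"{self.digest}|{requester}|{dataset}|{purpose}"
        self.digest = hashlib.sha256(entry.encode()).hexdigest()
        self.entries.append((requester, dataset, purpose, self.digest))

    def who_asked_for(self, dataset):
        # The audit question: who has requested this dataset, and why?
        return [(r, p) for r, d, p, _ in self.entries if d == dataset]

log = AccessLog()
log.record("journalist@example", "payroll-2017", "salary transparency story")
log.record("databroker@example", "payroll-2017", "unspecified")
print(log.who_asked_for("payroll-2017"))
```

Which, of course, only restates Greenfield’s caveat: the logging is trivially implementable, and the hard question of what society does with the audit trail remains a social one.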

Lawrence: I want to turn to the conclusion of your book. You say that there’s no unitary future waiting for us, that we shouldn’t just await our technological liberation but actually politically organise, demand it, and create institutions for that outcome. What are the key lessons we should take from your book?

Greenfield: I would break it up into two parts. The first is to always be self-critical about what we mean when we talk about using some technology “for good,” about what we define as broad human benefit. Benefit for whom, and at what cost? We must be clear about that, and make arguments that show we understand there is a cost to our choices, and that that cost generally has to be borne by some person or persons. We need to make robust arguments that the collective outcome will be improved by our actions to a degree that justifies the costs being borne. That’s the first thing.

Secondly, I find the calls that everyone should learn to code quite fatuous, but I do think that the left needs to be better at not surrendering the terrain of engineering and emergent technology. It’s almost a question of neurocognitive inclination. I will speak for those of us who got into the humanities. We were perhaps spurred toward the humanities by a disinclination to deal with numbers, or structured facts, in quite the same way that ‘the hard sciences’ do, or engineering does, or applied science. But this is something we’re going to have to overcome, because otherwise the only people who are going to develop that affinity into practice are the people who are immersed in the politics of the domains in which those practices are articulated. And broadly speaking, they split into authoritarian and techno-libertarian lobes. Those are just the rules of the game right now.

I think that we need to be braver about understanding code. Understanding what an API is and how it works — understanding network topologies, and what the implications of those topologies are for the things that flow across and between their nodes. Understanding corporate governance. Understanding all these things that so many of us who think of ourselves as being on the left prefer not to address. We badly need to develop expertise in these things so that we can contest them, because otherwise it’s black boxes all the way down: we don’t know what these technologies do or how they work, and we don’t see what politics they serve.

There are what we call in the business “weak signals,” all across the horizon. There are emergent groups and conversations that are happening, places and scenes in which people with progressive politics are developing the kind of facility with networked digital information technologies I think is so necessary. But they’re all green shoots, and they all need to be nurtured. For the rest of my life, that’s part of what I see my role as being: helping these conversations identify themselves to one another so that they can join together and develop greater rigour and greater purchase on the disposition of resources in the world.

Lawrence: One final question, which came from the network. I put out a request on Twitter for any questions for you, and the most popular response was, “Google, Amazon, Facebook: Snog, Marry, Avoid?” Do you know this term?

Greenfield: [laughs] We call it Fuck, Marry, Kill on my side of the pond. Well, I think Amazon is actually the most dangerous of all of these. Jeff Bezos is a dangerous, dangerous person. So far as I can tell, he has more direct operational control over the shape or the distribution of resources at Amazon than the respective governance structures do at either Facebook or Google.

Now Mark Zuckerberg terrifies me, but not in the same way that Jeff Bezos does. I’ve met Jeff Bezos. I’ve played a game called Werewolf with him, and it was one of the most disturbing things I’ve ever done in my entire life. That his ambitions include space colonisation…let’s not go there.

Lawrence: Yes, the perfect example of escape.

Greenfield: The escape. Exactly. It’s like, we’re going to leave the planet irreparably befouled, and then we’re just going to fuck off to Mars. I think that, weirdly enough, Google is probably the least offensive of those three organisations. And that’s saying a lot, actually, because I don’t regard them with any particular love either.

Lawrence: Well, we can grant you a quick annulment...Adam Greenfield, thanks very much.