We take almost all of the decisive steps in our lives as a result of slight inner adjustments of which we are barely conscious. ‘Austerlitz’ — W. G. Sebald
When you look in a mirror, who, or what, looks back at you?
The question of whether we have free will is a fruitful area for scientific research. See, for example, A Famous Argument Against Free Will Has Been Debunked. It is also important with regard to the sense each of us has of a ‘self’, and to our social relations and politics. See Our sense of agency, whether illusory or not, has a deeply important social purpose by Chris Frith, and Whatever agency we do have only makes sense in relation to others by Abeba Birhane.
A no less vital matter is the extent to which our hopes and fears can be manipulated by those whose interests do not align with ours. A striking contemporary example is OneCoin, a supposed crypto-currency which has duped gullible people out of around four billion dollars. The story is brilliantly explored in The Missing Cryptoqueen, presented by Jamie Bartlett.
OneCoin was a scam. But there are ways in which even legitimate entities in big tech, offering services of genuine value, can shape the self. In an introduction to Re-engineering Humanity by Brett Frischmann and Evan Selinger, Nicholas Carr writes:
As we transfer agency to computers and software, we also begin to cede control over our desires and decisions. We begin to ‘outsource’… responsibility for intimate, self-defining assessments and judgments to programmers and the companies that employ them.
Among the most disturbing potential impacts may be those on children. “The pernicious thing that’s happening,” Brett Frischmann told the Talking Politics podcast, is “[they’re] conditioning kids. It’s the social engineering of beliefs and preferences about the technology” (Talking Politics: Re-engineering humanity).
Examples of new tech with the potential to extend control abound. In the working environment, according to a clip published by The Economist, companies such as ‘Humanyze’ make the near total surveillance of employees possible (An electronic badge can monitor workers’ conversations, posture and time spent in the toilet). Reuters reports that Emotiv, a San Francisco-based enterprise, is developing a headset that tracks workers’ brain activity in order to maximise productivity (Heady stuff: Tracking your brain at work).
When it comes to consumer behaviour, the innovations are no less startling. Amazon, for instance, is reported to be toying with the idea of ‘predictive shopping’, in which goods are delivered to shipping hubs or trucks in anticipation of what consumers will order, based purely on their past shopping behaviour (Mark Zuckerberg and the End of Language). “The next stage would be to mail these goods to consumers, before they had even consciously decided to buy them.”
Even ostensibly non-transactional relations may increasingly be governed by algorithms, such as status apps that reveal information about our ‘social credit.’ “This may look like a dystopia,” writes Robin Hanson (Status apps are coming), “but it is probably coming whether you like it or not.”
In many cases, the intentions of those developing the new tech may be largely benign, and many will welcome them. Some research indicates, for example, that the idea of predictive shopping is, in general, well received.
But leaving aside for a moment the question of how much is added to human wellbeing by buying more stuff more quickly, there is an even bigger question: the inequality of power.
This is a key point in a presentation by Shoshana Zuboff briefly described in a previous post (The machine and the mirror 1): when we look into the screens of our smartphones or other electronic devices a flattering image may be reflected back at us, but behind the ‘mirror’ of the screens our behaviour is being monitored and monetised.
In her remarks, Zuboff compares the advent of the Internet in the 1990s to the advent of printing in the fifteenth century. The Internet was welcomed initially as “a fearsome force for empowerment, amplifying Gutenberg’s revolution as it liberated information from the old institutions and distributed it directly to the people.” But, Zuboff says, “the shooting star…faded leaving in its place new, and young middlemen.”
The impact of printing on society in the fifteenth and sixteenth centuries was not as unambiguously beneficial as is often said. (Zuboff would probably be among the first to acknowledge this in anything but an abridged talk.) The revolution of printing unleashed by Gutenberg and others did not only liberate information; as Andrew Marantz points out (The Dark Side of Techno-Utopianism), from the first, information wanted to be free, but so did misinformation. Martin Luther was quick to use printing to agitate for assaults on Jews.
How might we achieve better outcomes in our present circumstances? Mariana Mazzucato (Preventing Digital Feudalism) writes:
Algorithms and big data could be used to improve public services, working conditions, and the wellbeing of all people. But these technologies are currently being used to undermine public services, promote zero-hour contracts, violate individual privacy, and destabilise democracies, all in the interest of personal gain.
Innovation does not just have a rate of progression; it also has a direction. The threat posed by artificial intelligence and other technologies lies not in the pace of their development, but in how they are being designed and deployed…
To realise that potential, we will need to rethink the governance of data, develop new institutions, and, given the dynamics of the platform economy, experiment with alternative forms of ownership.
Mazzucato’s focus is the rich countries of the West, but the global context is no less important. According to John Lanchester (Document Number Nine), the Chinese Communist Party is in the process of building the most perfect surveillance system the world has ever seen:
China is about to become…an AI-powered techno-totalitarian state. The project aims to form not only a new kind of state but a new kind of human being, one who has internalised the demands of the state and the completeness of its surveillance and control…
The ultimate goal of this apparatus is to make people internalise the controls, to develop limits to their curiosity and appetite for non-party information….
This is as pure a dream of a totalitarian state as there has ever been – a future in which the state knows everything and anticipates everything, acting on its citizens’ needs before the citizen is aware of having them.
We should take China seriously, writes Lanchester, and consider what we might want to do differently. He suggests we begin with a complete ban on real-time facial recognition. And then, “we need to have a big collective think about what we want from the new world of big data and AI, towards which we are currently sleepwalking.”
Caspar Henderson is an Associate at Perspectiva
Image: Hugo Gernsback wearing his Isolator, which eliminates external noises for concentration, from Science and Invention, July 1925