Smartphones, robots and computers offer to make our lives ever more efficient, but what do we lose by accepting that seductive promise? Tom Chatfield spoke to author Nicholas Carr about the perils of over-automation.
It’s easy to assume that automating everything will lead to a better world. Computers that supercharge our productivity. Apps that make life faster and easier. Robots that spare us from the drudgery.
The steady stream of innovations coming out of Silicon Valley only serves to feed that narrative of a better life through technology.
Yet there are some who would question those assumptions. The author Nicholas Carr has a talent for picking apart digital modernity’s most cherished boasts. His 2008 essay for The Atlantic magazine, “Is Google making us stupid?” continues to be debated today, alongside the bestselling 2010 book that followed it: The Shallows. For those looking to technology to save the world, Carr is among our most influential naysayers. For those uneasy about tech’s effects on our minds and lives, his is among the most measured of dissenting voices.
Now he has turned to a new question: should we fear a world where we are not challenged anymore? Will technology make life too efficient?
Earlier this year, I discussed his new book The Glass Cage: Where Automation is Taking Us, and the thinking behind it with Carr himself. We spoke about the myths of technology, the lessons to be learned from video games and what happens when machines replace people:
1. An important myth about technology
“We have come to assume that efficiency and convenience are always good…this is a naive approach”
Tom Chatfield: It seems to me that what you’re trying to do in The Glass Cage is myth-busting: dismantling the simplicity of the assumption that technology is there to make things easier, and that making things easier is a good thing, full stop.
Nicholas Carr: At both a personal and an institutional level, we have come to assume that efficiency and convenience are always good, and maximising those things is always a worthy goal. And it does seem to me that this is a naive approach to take when thinking about technology in all its forms: in particular when thinking about computer automation, but also when thinking about our own desires and experience of life and of the world.
TC: Yet a lot of technologists are tied into a utilitarian perspective, which argues that our worst mistakes come from neglecting efficiency and logic – that we don’t know what is good for us. So the great task for technology, in this view, is to identify our irrationalities and the gaps in our thinking, and then to make systems that compensate for these. Are they wrong?
NC: On the one hand, a lot of what we see going on with computers and the design of automated systems does stem from this fundamental assumption that human beings are horribly flawed, particularly when compared to computers themselves. You can program computers to do certain things, and they will then do them perfectly well over and over again. You can’t rely on human beings to have that kind of tractability, that kind of precision.
So that’s one part of it. But then the other part is to go even farther and say, human beings are so fundamentally flawed that we need to give them as small a role as possible, and look to computers to do everything possible that we can make computers do. It’s not only about trying to remedy human beings’ flaws, it’s also about saying, if we can just get humans out of the picture and let the computers do things, we will be much better off.
TC: None of which sounds like a good idea. So is there a ‘right’ kind of automation?
NC: I think that gets to a fundamental point, which is that the question isn’t, “should we automate these sophisticated tasks?”, it’s “how should we use automation, how should we use the computer to complement human expertise, to offset the weaknesses and flaws in human thinking and behaviour, and also to ensure that we get the most out of our expertise by pushing ourselves to ever higher levels?”
We don’t want to become so dependent on software that we turn ourselves into watchers of computer monitors and fillers-out of checklists. Computers can play a very important role, here, because we are flawed; we do fall victim to biases or we overlook important information. But the danger is that you jump from that to saying, just let the computer do everything, which I think is the wrong course.
2. Should life be more like a video game?
“The reason we enjoy video games is because they don’t make it easy for us.”
TC: I was glad to see that you use video games in the book as an example of human-machine interactions where the difficulty is the point rather than the problem. Successful games are like a form of rewarding work, and can offer the kind of complex, constant, meaningful feedback that we have evolved to find deeply satisfying. Yet there is also a bitter irony, for me, in the fact that the work some people do on a daily basis is far less skilled, enjoyable and rewarding.
NC: Video games are very interesting because in their design they go against all of the prevailing assumptions about how you design software. They’re not about getting rid of friction, they’re not about making sure that the person using them doesn’t have to put in much effort or think that much. The reason we enjoy them is because they don’t make it easy for us. They constantly push us up against friction – not friction that simply frustrates us, but friction that leads to ever-higher levels of talent.
If you look at that and compare it to what we know about how people gain expertise, how we build talent, it’s very, very similar. We know that in order to gain talent you have to come up against hard challenges in which you exercise your skills to the utmost, over and over again, and slowly you gain a new level of skill, and then you are challenged again.
And also I think, going even further, that the reason people enjoy video games is the same reason that people enjoy building expertise and overcoming challenges. It’s really fundamentally enjoyable to be struggling with a hard challenge that we then ultimately overcome, and that gives us the talent necessary to tackle an even harder challenge.
One of the fundamental concerns of the book is the fear that we are creating a world based on the assumption that the less we have to engage in challenging tasks, the better. It seems to me that that is antithetical to everything we know about what makes us satisfied and fulfilled and happy.
3. Will computers remove the need for people?
“Human beings simply have been rendered obsolete by the speed at which computers can trade financial instruments.”
TC: Unlike video games, the real world is not a place in which hard work always wins out; it’s not fair or balanced. Perhaps the alarming thing is that there are more and more environments where what is good for people – psychologically, personally, even in terms of survival – does not coincide with what corporations and nations need to do to succeed. In that vein, are you concerned about computers replacing people altogether?
NC: One of the scariest things I came across in doing the research for the book was an article that I quote by a military strategist about how, as we bring computers more and more into warfare, there may simply be no role for human beings. Everything gets so fast that human beings simply can’t handle the decision-making. Only computers are fast enough, and therefore inevitably that means moving towards completely robotic warfare, where you have drones programmed to make their own decisions about when to fire a missile, robotic soldiers making decisions about when to shoot.
I think that that is something that we are seeing not only in warfare, but in many other aspects of life – in the financial world, for example. Human beings simply have been rendered obsolete by the speed at which computers can trade financial instruments.
What happens then is that you not only lose the distinctive strengths of human intelligence – the ability of human beings to actually question what they are doing in a way that computers can’t – but you push forward with these systems in a thoughtless way, assuming that speed of decision-making is the most important thing. And then you find you can’t go back from that, even if you discover that this is horribly flawed; once you completely rebuild a sphere of activity around computers, it often becomes impossible to back up and to re-insert a human being in that process.
TC: I did find that passage about automated warfare terrifying: this sense that the logic leading to fully autonomous systems in warfare is inescapable. And part of the horror, for me, is that when you look at the financial crisis of 2008, trillions of dollars were effectively annihilated, but at least people can now think twice about finance. Whereas with warfare, when the bad thing happens, it’s not dollars but human lives being annihilated.
I also think there’s a point here that connects to your previous book, The Shallows: we human beings are more or less the same biologically as we were hundreds of thousands of years ago, so what happens now that we are suddenly living in this new environment, where the pace at which consequences can proliferate and multiply is appallingly mismatched to our own intuitions and nature; where an invention in the form of information and algorithms, and its consequences, can spread so fast?
NC: It’s not only the fact that technology, particularly software, can be replicated and diffused very quickly; it’s also the fact that these things tend to play out in competitive environments. It might be the competition of warfare, or business competition, but as soon as any entity gains any short-term advantage, there becomes an almost overwhelming pressure to push the technology everywhere possible, because nobody wants to be at a disadvantage.
I do think that it becomes very easy in those circumstances to lose sight of the fact that we are animals: we are creatures who have developed through the ages and through evolution to live in the world. Our role as human beings and our satisfaction and fulfilment is very much tied up in our experience of that world – which has its own pace and its own speed.
So when we put ourselves, with our own physical constraints but also physical capabilities, against computers that can be so fast and so precise, it becomes very easy to say, well, ‘let the computer define our experience’. We lose sight of the fact that, if we defer to the computer, we may end up creating a world and an experience for ourselves that really isn’t very fulfilling.
4. So how should we automate the world?
“We can do this wisely, or we can do it rashly…”
TC: I believe we need a critically engaged attitude towards technology – but I worry when people start making a fetish out of difficulty and anti-tech “authenticity.” So there is a modern school of thought that wants everything to be artisanal and authentic, which argues that it’s inherently morally superior to be putting in lots of effort, to be labouring by hand. But this strikes me as very elitist, very sniffy and snobbish about much of the great good that comes with technological democratisation: far too dismissive, for example, of the ways in which the last few decades have seen an astonishing opening up of areas which through most of history were the province of a minority.
NC: Yes, and I can’t simplify the complexity there, because I think you’re absolutely right. I was doing one interview about the book, and the interviewer said, what about people working in horrible meat-cutting plants? And I said, you know, I’m not saying that there is no role for labour-saving technology; I’m saying that we can do this wisely, or we can do it rashly; we can do it in a way that understands the value of human experience and human fulfilment, or in a way that simply understands value as the capability of computers. Navigating these choices is not easy, and it’s not helpful to think of this in a black-and-white way, as either somebody is blindly in favour of hard and exhausting work in all circumstances, or thinks that utopia lies in complete leisure and ease.
We human beings are tool makers and tool users. From the very beginning we have had to negotiate the distribution of labour, the distribution of effort between our tools and ourselves; and I think because computers are so adept at doing so many things this negotiation is very, very pressing right now.
5. The future…
“I believe we should ask of our computers that they enrich our experience of life… instead of turning us into passive watchers of screens”
TC: So where are we headed?
NC: A historian of technology who died last year, Thomas Hughes, talked about the concept of technological momentum: that technology, once it is built into our social structures and processes, just begins to take on a momentum of its own and pulls us along with it. So it may well be that the trajectory is set, that we are going to continue to go down the path we are on, without challenging the direction we are taking. I don’t know. The best I can do is try to think as clearly as possible about these things, because they do seem complicated and confusing.
I hope that, as individuals and as a society, we maintain a certain awareness of what is going on, and a certain curiosity about it, so that we can make decisions that are in our best long-term interest rather than always defaulting to convenience and speed and precision and efficiency.
I believe we should ask of our computers that they enrich our experience of life; that they open up new opportunities to us instead of turning us into passive watchers of screens. And in the end I do think that our latest technologies, if we demand more of them, can do what technologies and tools have done through human history, which is to make the world a more interesting place for us, and to make us better people. Ultimately, that is something that is up to us.