Intelligent Machines: Artwork for the cover of a 1959 issue of the French science fiction magazine Galaxie.
Photo Credit: CCI/Art Archive/Art Resource
In a book review article (“How Robots & Algorithms Are Taking Over,” April 2, 2015) in The New York Review of Books, Sue Halpern revisits the idea that automatons (or robots) are not only displacing workers—they are—but also pose a threat to humanity—they might—if, left unchecked, they develop intelligence greater than ours. This idea resurfaces every generation or so, notably when the economy tanks, as it did in 2008.
Citing Nicholas Carr's The Glass Cage: Automation and Us, Halpern writes that job losses are almost certain to take place, notably in fields where intelligent machines can perform the tasks better, that is, with greater speed and fewer mistakes.
In September 2013, about a year before Nicholas Carr published The Glass Cage: Automation and Us, his chastening meditation on the human future, a pair of Oxford researchers issued a report predicting that nearly half of all jobs in the United States could be lost to machines within the next twenty years. The researchers, Carl Benedikt Frey and Michael Osborne, looked at seven hundred kinds of work and found that of those occupations, among the most susceptible to automation were loan officers, receptionists, paralegals, store clerks, taxi drivers, and security guards. Even computer programmers, the people writing the algorithms that are taking on these tasks, will not be immune. By Frey and Osborne’s calculations, there is about a 50 percent chance that programming, too, will be outsourced to machines within the next two decades.
In fact, this is already happening, in part because programmers increasingly rely on “self-correcting” code—that is, code that debugs and rewrites itself—and in part because they are creating machines that are able to learn on the job. While these machines cannot think, per se, they can process phenomenal amounts of data with ever-increasing speed and use what they have learned to perform such functions as medical diagnosis, navigation, and translation, among many others. Add to these self-repairing robots that are able to negotiate hostile environments like radioactive power plants and collapsed mines and then fix themselves without human intercession when the need arises. The most recent iteration of these robots has been designed by the robots themselves, suggesting that in the future even roboticists may find themselves out of work.

Another concern is that automation, including our everyday reliance on Google's search engine, dulls the brain, an effect that is likely true at least as it applies to how we think about finding information. Another way of looking at it is that if information retrieval is now easier and faster, our brains are freed for other, perhaps more important, matters. That is a positive change.
Yet, doubt about our automated future persists, and has for some time. The book to read is Norbert Wiener's The Human Use of Human Beings; although it was published in 1950, it is still relevant today [see my post here]. The fear of unbounded and amoral technology has a long history in literature; Frankenstein's monster is itself a modern rendering of the myth of Prometheus.
If even half of this takes place within the next twenty years, what lies before us is a bleak, dystopian future for humanity, one in which many individuals will not only be unemployed but in which machines will make humans all but superfluous in what many predict will be a fully formed and functioning machine age. This is the kind of thinking that has supplied and supported many a science-fiction novel. There are variations on this theme, including machines revolting against their human masters and makers, humans revolting against their machine overlords, humans uniting with a few courageous machines to bring freedom to the world, and machines dominating humans as we have dominated our planet.
It is true that machines will replace humans in jobs that they can do better; it is the nature of technology to do so, especially when allied with commercial interests seeking to profit from it. What will happen to so many displaced workers is hard to predict now; it is possible that new industries will arise, dedicated to the robotic age, the age of automatons. And it is disheartening to many to see machines become more intelligent than we are. These are valid concerns.
On the flip side, there are social robots, the article notes:
What is a social robot? In the words of John Markoff of The New York Times, “it’s a robot with a little humanity.” It will tell your child bedtime stories, order takeout when you don’t feel like cooking, know you prefer Coke over Pepsi, and snap photos of important life events so you don’t have to step out of the picture.

When I mentioned this scenario to my 13-year-old son, his reaction was positive, even excited: “Great, I can sit in front of my screen at work and the robot will order me pizza and Coke.” He believes he will have a job. I think he will, though it might be in a completely different field, one we have yet to envision. The stuff of sci-fi? Perhaps.
A more important question is why we humans see machines as a threat to our autonomy. If we do see intelligent machines as a threat, can it be that we are imposing our view of the world on them, in effect attributing human qualities to machines? I hope not, because I want to believe that intelligent, rational “beings” will progress and learn from the mistakes that humans have made. Is it possible that we can learn from such intelligent beings? I think so. I do not see machines as a threat, no matter how intelligent they become, but as a benefit to us.
For more, go to [NYRB]