by futurist Richard Worzel, C.F.A.
We grew up with robots. There was Rosie the Robot from The Jetsons, the “I, Robot” science fiction stories from Isaac Asimov (which later became a movie with Will Smith), the model B9 robot from the “Lost in Space” television series, R2D2 and C3PO from the Star Wars space operas, and the Terminator from the governor’s mansion in California. In the real world, there were robots in car factories: big, bulky pieces of machinery bolted to the floor that moved pieces of cars into place, welded seams, and painted car bodies. More recently, we’ve had cute little toys that roll around holding trays on which you can place drinks or snacks, replicas of R2D2 that were either remote-controlled or voice-activated, and Roomba and Scooba floor-cleaning robots. And because real-world robots seemed to fall far short of the fictional ones, and because we’ve been disappointed by real robots for decades, we’ve concluded that robots will always be fictional, and will always be disappointing. (For instance, I have a wind-up robot in my office that walks and shoots sparks, but, sadly, refuses to exterminate the people I don’t like.)
Accordingly, we’re about to be surprised, for real robots and their non-physical counterparts, computer intelligences, are about to enter our lives in a very real way. And initially, at least, we are likely to find them either creepy or infuriating. Let’s start with the ways in which we are likely to encounter robots and computer intelligences, and then move on to where the evolution of robots is headed.
Not the Kind of Robots We Expect
First of all, the kinds of robots we expect are not the ones we will first experience. All of the robots described above are essentially humanoid robots – robots that stand (or move) on two legs, have two arms, and are formed more or less in the shape of a human being. (And yes, I know that Rosie the Robot rolled on one small wheel. Please don’t interrupt when I’m generalizing.)
If you look at the robots that are already emerging into the real world, they generally are not humanoid in appearance, but have a form that suits their function. Bomb disposal robots aren’t pretty. They have a camera mounted on a frame with gripping appendages of some kind, and caterpillar tracks that allow them to move over broken terrain. Exoskeletons are wearable robots that are being developed both for military purposes, to increase a soldier’s strength and stamina, and to assist people who are weak or have some kind of disability, such as paraplegics. They typically strap onto your back, around your shoulders, and onto your legs, sense what movements you’re trying to make, and then echo and amplify them. Roomba and Scooba, which I mentioned earlier, look like oversized hockey pucks, and scoot around the floor to do their cleaning.
So much for today’s robots. Let’s look at what’s in development.
You’ve probably seen television clips about self-driving cars, like the ones Google has been testing. It’s sort of a robot: for the moment, it needs sensors and other equipment attached to the car so it can tell what’s going on around it and control the car. Eventually, such equipment will be integrated into the car, at which point I guess you could consider the car to be a robot.
There are robot fish that swim with the same kinds of motions that fish use, flicking the rear parts of their bodies back and forth. There are quadrupeds that look like headless mules, dogs, or cheetahs under development for the military. The headless mule (called “BigDog”) is being developed to carry up to 300 pounds of equipment for foot soldiers. The cheetah could be armed, perhaps with a bomb, and might run into enemy territory to attack a target that’s difficult to reach from the air. There are automated flying drones, already in use by the military, that can fly and attack objects in the air or on the ground, with or without human guidance. And there are robots being designed in the form of insects.
This last group is actually quite important, because many of the applications will be for swarms of small robots. These might be released in the aftermath of a tornado or an earthquake, for instance, to crawl around and look for people trapped in buildings. Flying insect robots could canvass an area, looking for a lost child, or, in military applications, be sent into enemy territory to send back information about enemy troop locations. Moreover, swarming robots could create an intelligent network that provides a comprehensive view of what’s going on in a given area – and the military won’t be the only application. Shopping malls might deploy them to look for shoplifters, lost children, or other kinds of trouble. Or they could be deployed to identify shoppers who had been there before; the shops where such shoppers had made earlier purchases might use that information to text them an offer to tempt them to return. In a home, robot insects could continually (and imperceptibly) roam around, looking for things that need to be cleaned or repaired. A team of chipmunk-sized robots might scurry out to clean up, pick up, or tidy up your home when you’re not in the room, or at night, when you’re asleep. In the homes of elderly people, networked robots could track their movements, and trigger a call for help if something goes wrong – if they fall or need assistance in some way.
Of course, it need not be something creepy that acts as a guardian for an elderly parent. It will also be possible to make robots that look like a cat, for instance, and act like a cuddly, intelligent friend. Such robots could become companions, potentially providing conversation, doing things around the house that might be difficult for the elderly parent to do themselves, sending reassuring messages to worried children, and helping to manage things like banking or buying things in cyberspace. Early, very rudimentary versions of this kind of robot, in the form of a soft, plush, moving seal that makes soothing noises, are already in use quite successfully in Japan as companions for elderly people, especially those who benefit from warm physical contact. In short, robots have enormous potential in military, civil, industrial, health care, and home applications. The key issues are going to be, first, the level of intelligence, and then, cost. Or it might be the other way around: cost first, and then intelligence.
We may eventually move towards the humanoid robots of fiction, but such robots will take time to make it into the home because they are going to be very expensive, starting off being as expensive as a high-end luxury car. It will only be over time, perhaps 20 years or so, that they will become affordable household appliances. So because of cost, robots are more likely to appear in military, industrial, and health care applications before they appear in the home in large numbers. But make no mistake: robots will come to the home, and become the next big, household durable purchase, which is why so many of the car companies and other industrial organizations are investing billions of dollars in them. And this motivation is pushing the pace of development in a kind of competitive frenzy.
But for robots to fill these niches, in and out of the home, they are going to have to get a lot smarter, and that’s where computer intelligence comes in.
We’ve already seen, and to some extent experienced, computer intelligences, and so far, we mostly don’t like them. Phone and cable companies, as well as airlines, have been using them for some time now as the front end of their “customer service” operations, under the mistaken impression that frustrating and infuriating customers is a good way to keep personnel costs low, thereby increasing efficiency. As I’ve written elsewhere, this truly does achieve efficiency – if your objective is to efficiently alienate your customer base. Yet they’ve become so widespread in these industries that customers are stymied; there’s nowhere else to go if you’re dealing with these kinds of oligopolistic industries. (Of course, this opens the door for aggressive new competitors to steal customers by offering real customer service, but that’s a different subject for another time.)
The so-so news about such computerized customer service agents (CCSAs) is that they are going to keep getting better and better. This means that they will be more flexible, better at understanding what you really want, and better at making sure you don’t get an iota more than the bean counters in their companies have decreed that you should have. Getting to a human being, who can exercise judgment and make allowances for unusual circumstances, is going to get progressively more difficult as CCSAs become smarter.
There is, though, better news as well: computer intelligences will start working for us as well as against us, and there are two high-visibility examples already on the scene: Watson, and Siri.
Watson is a computer intelligence created by IBM. It came to prominence when IBM arranged for it to play the television game show Jeopardy! against the two most successful human players, and it mopped the floor with them. Jeopardy! was chosen as a field test of Watson’s capability to understand human speech, perform requested research, and deliver answers quickly. IBM didn’t create Watson for the purpose of winning game shows, though. That was merely a way for them to find out how successful their work had been in understanding the nuances, innuendos, puns, and idiosyncrasies of natural human speech, and because of Jeopardy!’s use of convoluted language, it served as an excellent field test.
IBM developed a highly capable computer research system that could (mostly) understand human speech in order to work as an assistant to humans performing tasks that require assessing large amounts of data and weighing a large number of variables quantitatively, neither of which humans do very well. The first application IBM had in mind for Watson was to work with doctors in performing complicated and difficult diagnoses. There is so much research published in the field of medicine that doctors have a hard time keeping up, whereas a smart computer can scan through massive amounts of data and come up with relevant information that it can then deliver to a human to help them make a more informed assessment. And the human body is such a complicated machine, with so many systems running simultaneously, and so many things that can go wrong, that weighing the probabilities of a range of different possibilities is far easier for a machine than it is for a human. In this way, human judgment can be augmented and supported by a computer assistant that “gets” what the human is trying to do.
Needless to say, Watson is not yet available to individuals to solve their problems. This will change over time, much as computer operating systems used to be expensive pieces of software that were only operated by highly trained specialists, but are now used quite casually by consumers through their desktop and laptop computers, smartphones, televisions, and automobiles. So computer assistants that can research and help individuals weigh alternatives, and solve problems will eventually become commonplace.
A first, tentative step in this direction is the Siri software installed by Apple on their iPhone 4S in 2011. Siri is a quantum leap beyond the menu-driven CCSAs used by phone companies, and although it is currently listed as a beta version, it will get better and better fairly quickly, as all Siri requests are actually filtered through Apple’s computers. As such, the company can build up a knowledge base of what consumers are likely to ask, how they tend to ask for it, and where there are problems that make Siri respond inappropriately or unsatisfactorily. Such a knowledge base builds up incredibly quickly as more and more consumers use Siri, and as Apple software engineers devise ways of improving the quality of Siri’s responses. (For more on Siri, see an earlier blog here.)
It’s tough to gauge how quickly we will see robots and computer intelligences emerge (or intrude, depending on your point of view) into our lives, not because the technology is hard to forecast, but because it’s very difficult to anticipate how consumers will react and interact with these new technologies. Moreover, it’s not possible to ask consumers what they want with a piece of technology they’ve never experienced, because they have no idea whether they’ll like it or not. My favorite example comes from my own background.
I was part of a group that bid (unsuccessfully) for one of the very first cellphone licenses. (I wrote the science fiction section of the application, about the future developments and applications of the technology.) One of our financial backers wanted to know if there was a viable business in this new technology, so we commissioned a consumer survey to see how many people would be interested enough in having a “telephone in their pocket” to pay good money for it. The survey came back with good news: there was, indeed, a viable business, as somewhere between 7 and 8% of adults would be interested in having one of these cellphones, which was enough to construct a successful business plan, and convince the backers to invest.
In retrospect, this seems silly; our projections of 7–8% penetration were off by a factor of 10: today well in excess of 80% of adults have cellphones. But consumers had nothing to compare this new technology with, and therefore had no idea whether they wanted it or not.
The Remarkable Mr. Jobs and Tomorrow’s World
This is where someone like Steve Jobs was so remarkable. He was able to envision where technology could take us, and then divine what consumers would eventually want before they had ever even considered it! This is a rare talent, and one, unfortunately, that I have in only limited amounts (or else I’d be far richer than I currently am).
But we can come to a number of conclusions:
First, there are robots and computer intelligences in your future.
Second, they won’t initially fit with the images of robots with which we grew up.
Third, consumers will use them in unexpected ways, leading to new applications and stretching the technology in remarkable directions.
Fourth, they will force decisions on us that we may really not like. For instance, eventually cars driven by robots will be shown to be safer than cars driven by humans. Will insurers and governments eventually force us to use computer intelligences to monitor, override, or even manage our driving to make us safer – and literally take the controls from our hands?
Fifth, these changes will affect us directly and personally, both in how we run our lives, and even in how we manage our bodies and minds. Smart computers will monitor and augment our body’s natural physical defenses and abilities, while enhancing our intellectual abilities as well. And this means that the so-called “digital divide”, between those who can afford new technologies and those who can’t, will widen at an ever-accelerating rate.
And finally, these changes will cause ripples far beyond just plopping Rosie the Robot into our households, much as the development of a robust communications system capable of surviving a thermonuclear strike (which we now call the Internet) wound up changing our lives, businesses, occupations, and even friendships. The changes arising from robots and computer intelligences will come in the workplace, in our social structures, in the way our businesses run, in the jobs available for the young and the unemployed, in the way our health care system works, in how our governments function, and in how we relate to each other. And that is a much, much bigger and more important topic than any cute or technical discussion about the future of robots.
© Copyright, IF Research, March 2012.