Three Things You Need to Know About Artificial Intelligence

Articles

by Senior futurist Richard Worzel, C.F.A.

Pay attention. Your life is about to be significantly changed by Artificial Intelligence (AI), whether you want it to be or not.

Every once in while, something happens that tosses a huge rock into the pond of human affairs. Such rocks include things like the discovery of fire, the invention of the wheel, written language, movable type, the telegraph, computers, and the Internet. These kinds of massive disturbances produce pronounced, remarkable, unexpected changes, and radically alter human life.

Artificial Intelligence is just such a rock, and will produce exactly those kinds of disturbances. We’re not prepared for the tsunami that AI is going to throw at us.

AI has been the technology of the future since the 1960s, but one that always seemed just over the horizon, and never arrived. Certainly, AI was widely discussed when I got my degree in computer science, more than 30 years ago.

But now AI is becoming a reality, and it is going to hit us far faster than we now expect. This will lead to an avalanche of effects that will reach into all aspects of our lives, society, the economy, business, and the job market. It will lead to perhaps the most dramatic technological revolution we have yet experienced – even greater than the advent of computers, smartphones, or the Internet.

There are three keys to AI that will help to understand what’s happening:

  • AI is the Swiss Army knife of technology
  • AI is not a shrink-wrapped product, and
  • Once AI is properly established, the domino effects occur with astonishing speed.

But before I dive into these three keys, let me tackle what AI is because there is no real agreement what the term “artificial intelligence” means. I read one article, for instance, that claimed that there are 33 kinds of AI. And, indeed, the term covers a broad range of techniques and technologies.

But in my view, they all have a central, defining characteristic. I define AI as a computer system that is adaptive, and can solve problems that it has not encountered before. Some of those problems are ones humans have solved – but increasingly, many of such problems are ones humans haven’t solved, and might not be able to solve unassisted.

With that in mind, let’s turn to the three keys to AI.

AI Is the Swiss Army Knife of Technology

AI is not restricted to any narrow range of fields or areas of human endeavor, but will be applied anywhere and everywhere where some smarts would be helpful. The highest profile results will be things like robots and self-driving cars, but there are thousands of other places where AI will be used.

Security systems, whether at airports or a local clothing store, will use AI to identify faces, either those that might commit crimes, or those that are more likely to buy something.

It will be used to assess satellite images to locate submarines, pods of whales, which agricultural areas are producing vibrant crops, and which are suffering and hence what will happen to crop prices, or to judge how well traffic is flowing in a city and how it could be improved.

It will be used in cars and home heating systems to determine, on a second-by-second basis, how to most efficiently use fuel while keeping human users happy.

It will manage investment portfolios better than all but the most gifted humans – and be more consistent in their management results than their human betters.

It will work in all aspects of health care, energy management, manufacturing, industrial process control, accounting, law, weather and climate prediction, drug discovery, toy design, and entertainment, ranging from responsive, real-time virtual reality to traditional game playing.

It will help chefs design more interesting, nutritious, and economical food, and hotels provide more satisfying, more profitable stays. It will watch over babies and the elderly to make sure they’re safe, and potential criminals to make sure everyone else is safe.

It will be used to design a unique, virtual newspaper for each subscriber that appeal to that reader’s particular interests.

It will design bridges and desserts and cars and fashions and farms and teeth, and just about anything else that humans use, build, or think about.

Bad guys will use it to identify the highest value targets, and design cheap, effective explosive devices. They’ll use it to commit identity theft at a rapidly accelerating pace – even as white hat hackers use it to thwart evil AI.

Politicians will use AI to identify silent voters who would be inclined to vote for them if asked – and opponents will use it to develop custom-tailored messages to make sure such voters stay home.

It will be used to identify weaknesses in opponents’ armies, or their economies, or their political appeal.

And just about anything else you can think of.

AI will be used everywhere, all the time, and by everyone – whether they know it or not.

AI Is Not a Shrink-Wrapped Product

Using AI is not easy, simple, or straightforward. You can’t just take it out of the box, plug it in, and start getting fabulous results. It takes three major, difficult-to-achieve things: good data, smart analytics, and clear objectives.

AI’s are fundamentally data driven, because they use data to interpret patterns, and to create patterns for which they can search or use to select behaviors or actions. If the data is dirty, meaning it contains errors or too much irrelevant data points, or isn’t timely, which means it’s not indicating what’s happening now, then the results won’t be very useful. This is the classic computer observation, GIGO: “Garbage In, Garbage Out”.

Hence, if you’re trying to get an Artificial Intelligence to help you trade stocks, but are using data from three months or even three hours ago, you’re not going to get good results as markets don’t stand still.

Smart analytics means having someone identify what patterns the AI system is looking for. If you can’t show the AI how to use the data you’ve provided to clearly analyze what’s happening, then you’re not going to be able guide it to figure out what it needs to do to produce the results you want.

For instance, you can’t get an AI to assess a satellite photo of a farming region and determine whether the region is suffering from drought unless you can provide the analytical tools to tell it what that looks like.

If you want AI to look at tissue samples to determine whether a particular kind of cancer is present, you need to be provide a means to tell when that cancer is present. Sometimes this can be done in a rough way, by presenting thousands of cases where the outcome is already known, then telling the AI, “These samples have the cancer we’re looking for, but these ones don’t.” But at other times, you need to have very precise parameters to inform the AI what to search for, and how to do it, depending on the application and means of discovery.

And finally, you need to define what you consider a successful result to be:

“[OpenAI] researcher Dario Amodei showed off an autonomous system that taught itself to play Coast Runners, an old boat-racing video game. The winner is the boat with the most points that also crosses the finish line.

“The result was surprising: The boat was far too interested in the little green widgets that popped up on the screen. Catching these widgets meant scoring points. Rather than trying to finish the race, the boat went point-crazy. It drove in endless circles, colliding with other vessels, skidding into stone walls and repeatedly catching fire.”[1]

The AI obviously thought its objective was simply to get the highest number of points, rather than the highest number of points while finishing the race, and not crashing and burning.

So, unless you can provide the data necessary for an AI to learn what’s successful and what isn’t, have the means of analyzing that data, and have clearly identified what you want the AI to do, you’re not going to get very far in using AI.

Once AI Is Established, the Domino Effects Occur with Astonishing Speed

AlphaGo is a software AI developed by DeepMind, a machine-learning company owned by Alphabet/Google. AlphaGo was developed to play the traditional Asian game of Go, which is much more difficult to master than chess. Computer scientists speculated that it would take AlphaGo 15-20 years to become competitive with the best human players. Yet AlphaGo beat Lee Sedol, the world champion, 4 games out of 5 in March of 2016 – two years after it was created.

In November of 2015, a company called Kensho unveiled an AI to evaluate and summarize the monthly Bureau of Labor Statistics’ (BLS) monthly employment report[2]. The Kensho AI compared the BLS report with statistics from dozens of other databases, and produced a summary, and 13 key exhibits, along with a forecast of how it would affect dozens of investments, based on how they had responded to earlier reports. It used to take 2-5 days for an experienced and intelligent research analyst, working full-time, to do this. Kensho produced and distributed this report on its own, within minutes of the release of the BLS report – and does the same with many other kinds of economic and financial data.

Even the so-called “Masters of the Universe” ­– the highly paid, high profile institutional stock traders on Wall Street – aren’t immune:

“At its height back in 2000, the U.S. cash equities trading desk at Goldman Sachs’s New York headquarters employed 600 traders, buying and selling stock on the orders of the investment bank’s large clients. Today there are just two equity traders left.” [3]

Note that these 600 traders probably made $500,000 or more each back in 2000. Now they’ve been put out of work by AI.

The legal profession seems to be particularly susceptible to early occupation by AIs:

“At JPMorgan Chase & Co., a learning machine is parsing financial deals that once kept legal teams busy for thousands of hours. The program, called COIN, for Contract Intelligence, does the mind-numbing job of interpreting commercial-loan agreements that, until the project went online in June, consumed 360,000 hours of work each year by lawyers and loan officers.”[4]

So, before June of 2017, lawyers and loan officers spent 360,000 hours a year interpreting commercial loan agreements for JPMorgan Chase. Since June, that specific kind of work has vanished.

ROSS is a computer system based on IBM’s Watson AI platform. ROSS performs legal research and prepares legal briefs. In so doing, it has the potential to replace the work done by hundreds or thousands of paralegals and junior lawyers. Is the law profession concerned?

“[A recent] survey of large U.S. law firms … asked whether Watson would replace various timekeepers in these firms in the next five to 10 years. Half the respondents said it would replace paralegals, 35% said first-year associates. … The other interesting aspect of that survey was the response to the option ‘Computers will never replace human practitioners.’ That got a 46% affirmative response four years ago; this time around, just 20%. That’s a huge drop.”[5]

Finally let me offer a personal anecdote. Not long ago I had a conversation with a computer scientist who has clients in the financial industry. He confided in me that most people didn’t realize how quickly the domino effects cascade once an AI is properly established. “Once a front-line job can be done by AI,” he said, “then usually all of the back-office jobs that support it can also be replaced. Companies have no idea how fast this is happening.”

He didn’t want to go public with his thoughts because he was afraid it would scare his company’s clients.

So, AI is coming, and it’s coming far faster than people realize, and the consequences will be far-reaching.

What Happens Next?

There has been a lot of news stories and popular pieces about how the robots (meaning AI and automation generally) are coming to get us. A much-cited 2013 study done by Oxford University academics Carl Frey and Michael Osborne said that “According to our estimates, about 47 percent of total US employment is at risk.”[6]

Almost half of all jobs in the U.S. are susceptible to automation, according to Frey & Osborne. And whether they are precisely right, or even close, that’s an enormous disturbance in the force of our society and economy. But will it happen that way?

Based on my almost 30-years of study and work as a futurist, and based on the best analyses I can find, I can confidently say: yes and no.

Yes, the consequences will be dramatic. No, they don’t have to decimate the labor force, and therefore the economy and society. But they will produce some enormous challenges for which we are not ready.

There’s a long-running debate between the groups that I’ve called the neo-Luddites, who say that automation will destroy our jobs and our society, and the technologists, who insist that as old jobs disappear, new ones will be created to replace them. I’ve explored this at some length in an earlier blog, found here. This is a legitimate debate, and one that’s been going on for at least two centuries.

My feelings are that, yes, new jobs are being created even as the old ones are being destroyed but:

  • Jobs are appearing and disappearing with increasing rapidity, which makes it hard to keep up on the credentials you need in order to stay employed;
  • The best new jobs have very high standards, requiring specific kinds of hard-to-get credentials, and thus are not available to the vast major of people displaced from existing jobs; and
  • Many of the jobs created are relatively low-level service jobs that just don’t pay very well.

I’ve been writing about this for more than 20 years, as you can see from a book I published in 1994 titled Facing the Future:

“This is not a problem that will burst on the scene in the next five or ten years. Humans are still capable of offering a flexibility, initiative, and creativity that machines cannot duplicate. But at some point, whether it’s twenty years away or one hundred, I’m afraid that the time will come when there are very few jobs that computers can’t do better, faster, cheaper, and more reliably than humans. As that day approaches, we will be confronted with several problems.”

“In the first place, we will need a new economic system. Much as it grieves me to say so, free market capitalism may be dying, for it pays only those who are part of the production process. If virtually no one is part of this process, all the fruits of production will belong to those who own the machines—a recipe for the peon-and-aristocracy patterns of Third World economies. But where will the machine owners find their customers? People can’t be consumers unless they have money to spend.” [7]

I didn’t use the term “the 1%” in that book, but that’s clearly who I was referring to when I described them as an emerging aristocracy.

So, yes, many jobs will be affected, many jobs will be eliminated, and many people will have to find a new way to work, all of which sounds disastrous.

And yet, that’s not all of the story.

The Borg vs. The Hybrid

In an earlier blog, entitled “I, Cobot”, I explored the interaction between robots and humans, and concluded that they are more productive together than either can be on their own. This is broadly true of AI and automation and humans as well.

As I said at the outset of that blog, reality is messy, and people are good at messy situations whereas AI isn’t. On the other hand, AI is good at numbers, and in-depth scrutiny using multivariate analysis. Combining the two kinds of strengths – real world flexibility plus analytic rigor – would produce a better result for everyone.

What’s more, every analysis that I’ve read about this issue basically says that increased productivity means that we can produce the same amount of stuff (goods and services) with fewer people. However, increased productivity could also mean that the same number of people can produce much more stuff.

I call these two different models The Borg (yes, from Star Trek) vs. The Hybrid.

In the Borg model, automation moves in and shoves people aside, throwing them out of work and “resistance is futile.”

In the Hybrid, AI works in cooperation with humans to maximize the strengths of each, and to use increased productivity to increase their output. “Increase output and revenues” is the motto for this model.

When I first introduced this idea to a group of professionals in a keynote address at a conference in the Summer of 2017, they asked how this would work in the real world. For instance, many of the conferees were lawyers. They asked, why should we continue to employ paralegals and junior lawyers when ROSS or other legal AIs can do the job better, faster, and cheaper?

Rather than answering the question specifically and directly, I answered indirectly and generally. I suggested that when someone’s job was going to be eliminated by automation, that rather than just escorting the person involved off the premises, that the organization sit them down and say something like this: “Bob, you know that our new AI has taken over the work you were doing. Ordinarily, we’d give you 8-weeks notice and say good-bye. Instead, we want you to take that 8 weeks, and think about what else you could do that would help us to serve our clients better. And in particular, we’d like you to think about how you could work with us to make this new AI even more valuable – and you with it.

“In short, we want you to invent a new job for yourself. Your experience with us has been valuable, you know our business and our clients, and we’d like you to help us become even more successful. Come back to us in 5-7 weeks with a some thoughts or a proposal that we can explore together. Can you do that?”

It’s possible that Bob (or whomever) can’t come up with a good enough answer to stay employed – but I suspect that, faced with unemployment as an alternative, Bob would get really creative, and might well come up with a completely unexpected, and imaginative new way that he could become even more productive, especially if challenged to learn how to leverage the new-found strengths of the AI.

And, what’s more, if organizations became much more productive, then prices would come down, clients and consumers would be able to buy more, and the general standard of living would go up – just as it did in the Industrial Revolution.

A Possible Hybrid Example

Suppose, for instance, that Bob is a junior lawyer in a firm that has just started using an AI to do legal research and legal briefs. The obvious thing to do would be to get rid of Bob. However, if the firm does that, and maintains that approach, who will emerge to become the senior lawyers later on?

Meanwhile, what else could Bob do? Well, if what people do best is handle the messy stuff, suppose that Bob steps back and considers if there’s a more creative way to solve the legal issues for a particular client. And, sifting through the brief created by the firm’s AI, he looks at what kinds of cases have been cited as precedents.

Next, he makes use of the AI to look for off-beat or unusual settlements or outcomes, and assesses whether they would be preferable to the straightforward resolution being prepared. In other words, he uses the AI to leverage human creativity.

If Bob can come up with a superior result for the client by enlarging the possible outcomes, and offering a better, more unconventional approach, he will complement the work done by the AI to get a better, more valuable result.

Will This Solve All Problems?

Our world is about to be turned upside down. If we aren’t proactive about how we manage this, then lots of people could become unemployed, and possibly unemployable. In turn, this would mean that lots of people couldn’t afford to buy as much from companies, which would mean those companies wouldn’t make as much in profits. And that, in turn, would mean that the value of such companies would go down, making the owners poorer.

There’s a cliché in the stock market that a rising tide lifts all boats, meaning that everyone makes money in a bull market. The converse is true as well: in a falling economy, everyone gets hurt. Therefore, even those people who fall into the 1% should be thinking about how we can embrace the Hybrid model of AI + Humans, not because it’s the moral and kind thing to do (which it is), but because it’s the smart and selfish thing to do.

If, instead of just saying that this is happening and doing nothing about it, we instead all treat it as an opportunity to create a more prosperous society, then everyone benefits. The alternative is potential economic chaos, social turmoil, and, possibly, riots and revolution.

So, is resistance futile? That’s up to us to decide.


[1] Metz, Cade, “Teaching A.I. Systems to Behave Themselves”, New York Times website, 13 August 2017, https://www.nytimes.com/2017/08/13/technology/artificial-intelligence-safety-training.html

[2] Popper, Nathaniel, “The Robots Are Coming for Wall Street”, New York Times, 25 Feb 2016.

[3] “As Goldman Embraces Automation, Even the Masters of the Universe Are Threatened”, MIT Technology Review, 7 Feb. 2017

[4] “JPMorgan Software Does in Seconds What Took Lawyers 360,000 Hours”, Hugh Son, Bloomberg website, 27 Feb. 2017

[5] “How will artificial intelligence affect the legal profession in the next decade?”, Queen’s University Law website, http://law.queensu.ca/how-will-artificial-intelligence-affect-legal-profession-next-decade

[6] Frey & Osborne, “The Future of Employment: How Susceptible Are Jobs to Computerization?”, Oxford Martin School, Oxford University, 17 Sept. 2013, website: http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf

[7] Worzel, Richard, Facing the Future: The Seven Forces Revolutionizing Our Lives, Stoddart Publishing, 1994, p.83.