Future Tense: Tomorrow’s Risks and the Future of Internal Audit

by futurist Richard Worzel, C.F.A.

What follows is a lightly edited version of a presentation I gave in Chicago for a group of internal auditors earlier this year. If you have any questions about any aspect of it, please contact me at futurist@futuresearch.com.

The reason thinking about the future is so difficult is that it is so vast. It encompasses everything, all the time, everywhere, and you don’t know ahead of time what might become important, what might catch you by surprise. And it involves change, which makes everyone uncomfortable, and which we tend to resist.

My job here is to talk about the things happening around internal audit that will affect you. So, let’s start our tour of tomorrow by talking about how a futurist assesses risk.

A futurist’s view of risk

My definition of risk is different from the conventional one: Risk is the cost of being wrong. Let me illustrate this with an analogy.

If you try walking across a broad, strong plank laid on a grass field on a calm day, you will have little trouble doing so. But if you walk over the same plank set over a 1,000-foot chasm on a calm day, the risk is much greater. What’s changed? The task is the same, but the cost of making a mistake – of being wrong – is much greater.

Next, here is a futurist’s definition of risk management: the process of asking the right questions about what might happen in the future, and then preparing the best contingency plans you can to deal with events that may affect you.

Hence, if there’s a major recession, and you’ve considered that possibility and have a plan prepared to deal with it, and the plan works reasonably well, then you have adequately managed that risk.

Yet I very much doubt that any contingency plan, no matter how well you prepare it, will deal with everything that happens – you will still be caught by surprise in some regards. But the discipline of thinking through contingencies will improve your flexibility in responding to surprises.

The organizations that will win the future are the ones that recover fastest, and respond most constructively when surprises occur.

There are three broad kinds of risks to be concerned about: rapid onset risks, gradually emerging risks, and unexpected risks.

Typically, even if you have prepared for a rapid onset event, when it happens it’s still a shock because of the speed with which it occurs. A major flu epidemic that forces 20% of your workforce to stay home, a major fire or flood, or a radical change of government policy – plus the effects these things would have on your cash flow and P&L – would all be examples of rapid onset risks.

With gradually emerging risks, you can see them coming, but because they develop over a long period of time, no one day seems urgent enough to prompt you to act. Examples of gradual onset risks might be the financial stresses that are accumulating due to an aging population, or the effects of climate change. But eventually you reach a tipping point where a gradually emerging situation becomes a crisis. At that point, if you haven’t prepared for it, you can look awfully foolish, because, after the fact, everyone saw it coming – so why didn’t you? This is precisely what happened with the financial panic of 2008-09 and the Great Recession that followed.

For governments, gradual-onset risks are particularly difficult to manage, because the political planning horizon is so short – and seemingly gets shorter all the time.

When something happens that you haven’t foreseen or considered, and you are forced to respond to it, that is an unexpected risk. Examples would include 9/11, the blackout of 2003, or the earthquake & tsunami that hit Japan in 2011.

And although internal audit doesn’t prescribe a response, let me just note there are classic responses to these three types of risk. For rapid onset risks, the classic response is contingency plans, if they exist; otherwise people scramble to respond. For gradually emerging risks, organizations typically ignore them and hope for the best. But with unexpected risks, the typical response is panic.

As it turns out, you can identify unexpected risks, and prepare for them, but it takes a concerted effort, and a specific mind-set, which I’ll discuss later.

Now let’s turn to the second issue: technology.

Technology

You have other sessions that will go into detail on technology, so I’m just going to highlight some basic principles and potential risks & returns. There are lots of areas of technology that pose both positive and negative risks for an auditor, including developments in the biosciences and health management, but I’m going to mention three specific ones: AI, Blockchain, and the Internet of Things.

Artificial Intelligence: How It Will Affect Audit and Operations

According to the EY website, “AI is an evolving technology that promises to be a game-changer.”[1]

The Institute of Internal Auditors website says “The internal auditing profession cannot be left behind in what may be the next digital frontier — artificial intelligence.”[2]

The concept of artificial life and intelligence has been around for hundreds, even thousands of years. When I studied computer science in university more than 30 years ago, it was often discussed.

AI will bring changes at least as profound as the introduction of the Internet, and it will affect virtually all aspects of life and business. It is more than a once-in-a-lifetime disruption, it’s a once-in-history disruption, and will affect almost everyone in every aspect of business, government, or life.

Meanwhile, there are several problems with the name, and the concept, of Artificial Intelligence.

First, people keep moving the goalposts. When Deep Blue, IBM’s chess-playing program, beat Garry Kasparov, the human champion, in 1997, people said, “That’s not AI – that’s just a computer program”. When AI started being used to interpret X-rays and other medical imaging, people said, “That’s just pattern recognition, that’s not Artificial Intelligence”.

Every time the range of computer capabilities advances, people exclude that from AI. The underlying assumption is that AI is something magical, and always just beyond the horizon. In fact, I don’t actually like the term because it is so vague, and so misused. But it has become widely accepted, and people think they know what you mean when you say it.

I’ve seen analyses that claim that there are anywhere from two to 30 different techniques and technologies that fall into the AI basket. The definition of AI, therefore, is elusive – it means many different things to different people.

My definition is that AI is a computer system that is adaptive, and can solve problems – including problems that humans haven’t solved, or can’t solve.

There are three key things to remember about AI. First, it’s the Swiss Army knife of technology – it will be used everywhere. This is as true in higher education as it is in customer service technology in retailing or foodservice, weather forecasting, or process control in manufacturing. AI has the real prospect of falling into the category of an unexpected risk – not because we don’t know about it, but because it will show up in places we don’t expect it.

Next, it’s not a shrink-wrapped product, and it’s hard to implement properly because it requires mountains of clean, relevant, timely data; great analytical skills to be able to identify the factors you want to maximize or minimize; and a clear definition of your objectives – if you don’t know what you want, you won’t like what you get. AI can’t do things by wishing for them to happen – it needs to be programmed, or at least prepared to behave in clearly defined ways.

Finally, once it’s properly established, the ripple effects of AI spread very quickly. For example: once you’ve automated a front-line function, you can probably immediately automate all of the supporting jobs as well.

And AI is NOT magic – and it’s still in its infancy, which means it’s still being overhyped, despite all of its promise. Smart computers, for instance, have no common sense. Remember that!

Areas of Potential Risk & Return in AI

Some of the ways in which AI will assist in audit include:

  • Contract or lease review – more finely focused review, and review of a greater number of contracts.
  • Contract management to ensure all conditions are monitored & met.
  • Fraud detection, and patterns of fraud – and fraud detection that continues to improve and become more sophisticated. This is one area where sharing experiences from competitors and other industries can help everyone, as AI learns from a greater range of data.
  • AI can permit the auditing of a full population of data, not just random reviews, and identify outliers in data, and accounting entries.
  • Assessing of potential legal and operational risks presented in a contract being reviewed. The legal profession is being profoundly affected by AI.
  • By performing more, and deeper, reviews, and by highlighting questionable, exceptional, or outlying data, it can allow auditors to ask smarter questions, and probe a wider, and deeper, range of issues in an audit.
  • And through Deep Learning, auditors can also search unstructured data, such as social media, for information that may affect your organization, or early indications that problems – or opportunities – are emerging.

Fundamentally, if used properly, AI will allow auditors to do more, such as reviewing all contracts and leases rather than random samples, focus on exceptions rather than routine entries, cover more ground, and ask smarter, more penetrating questions. AI can help auditors optimize their time, and use their judgment on critical questions, rather than having to perform the drudgery of plowing through mountains of routine information to identify critical issues.
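Full-population screening is easy to sketch in code. What follows is a minimal, hypothetical illustration – the field names, the ledger data, and the three-standard-deviation threshold are my assumptions, and a real audit tool would use far richer models – but it shows the core idea: every entry is examined, not a sample, and the unusual ones are surfaced for an auditor’s judgment.

```python
import statistics

def flag_outliers(entries, threshold=3.0):
    """Flag entries whose amounts sit more than `threshold` standard
    deviations from the mean of the full population of entries."""
    amounts = [e["amount"] for e in entries]
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # all amounts identical: nothing stands out
    return [e for e in entries if abs(e["amount"] - mean) / stdev > threshold]

# Hypothetical ledger: 1,000 routine entries plus one suspicious one.
ledger = [{"id": i, "amount": 100 + (i % 7)} for i in range(1, 1001)]
ledger.append({"id": 1001, "amount": 250000})

suspects = flag_outliers(ledger)  # only the $250,000 entry is flagged
```

The point is not the statistics – it is that the machine reads all 1,001 entries so the auditor only has to read one.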

It’s also critically important to note that AI and humans are good at very different, and typically complementary, tasks. AI is good at tasks involving meticulous repetition, complex analysis of massive amounts of multivariate data, and the matching of known patterns, plus the identification of potential new patterns.

AI is not good at anything outside of its programming, which is why human experience and common sense are critical when making use of AI. You wouldn’t expect IBM’s Deep Blue chess program to be helpful in performing an audit of a manufacturing operation, for example.

Now let’s talk about how AI can give Internal Auditors seemingly super powers by talking about the potential risks & returns inherent in this emerging technology. I’m going to review AI in terms of traditional IA risks: financial, operational, strategic, technological, and reputational risks. And where there are available images that might illustrate some of these things, I’m just going to throw them on the screen, but not talk about them. I’ll be answering questions at the end, so if you have questions about any of these things, please keep track of them.

Financial risk & return:
  • Credit risk management for banks & credit unions. This can also become a reputational risk if, for example, AI displays the prejudices or biases of those who programmed it against women or minorities.
  • AI-based funding for new ventures.
  • Rating the likelihood of success – this can be a potential strength, due to deep analytics, or a weakness if common sense is not also applied.
  • Health care – diagnoses of conditions such as diabetic retinopathy, and radiology interpretations. Prognoses of the likelihood of recurrence of colorectal cancer, and the suitability of chemotherapy for any given individual.

Operational risk & return:
  • Precision farming – minimize input costs by applying fertilizers, pesticides, and water in the precise amounts at the precise locations needed.
  • Robot / cobot manufacturing & assembly. Cobots are “cooperative robots” that work with humans rather than replacing them.
  • Robot construction – for example, a self-building bridge. Note the almost organic construction style – based on the next issue.
  • Optimization of yields – AI design, especially when combined with 3D printing, reduces materials and manufacturing time.
  • Facial recognition allows for improved physical screening & security.
  • The use of robots or drones in areas of physical danger.
  • Use in education for routine tutoring and custom-tailored management of an individual’s learning. This needs to be supervised & supplemented by a human teacher, but will be able to take over much of the routine instruction. One example, among many, is Udacity – online, post-secondary education without instructors.
  • Self-driving cars and trucks – who’s liable for problems or collisions?
  • AI-controlled flying cars.

The Potential Risks in Automation

Before I leave AI, I’d like to mention some of the potential risks involved with it.

Automation, through AI and robots, has the potential to put a lot of people out of work – but in my opinion, that would be a major mistake. Computers are good at processing huge amounts of multivariate data and finding patterns in it. People are good at qualitative evaluations, poorly defined situations, novel situations, and issues that cross boundaries and are not cleanly split, plus all of the soft skills I just mentioned. Together humans and computers are better than either alone.

Eliminating jobs also eliminates corporate memory, disrupts corporate culture, and destroys employee loyalty. Rather than approve wholesale replacement of people with computers / robots, ask management to look at how they could augment their employees with automation to make them more productive first. This is long-term smart – and can lead to goodwill, both internally and externally. After all, you need to give loyalty to get loyalty. This is a gradual-onset risk.

Next, there is the potential that improperly supervised AI or robots could cause harm, leading to both legal and financial liability. There is also a reputational risk from such problems.

Ultimately, cybersecurity will involve AI in both defensive and offensive efforts, because of the speed, sophistication, and rapidly increasing incidence of cyberattacks. Remember that cyberattacks will, themselves, begin to involve AI to seek chinks in your armor.

Reputational risk & return: When does AI oversight become violation of privacy? And how do you police it? Using Big Data to recommend things to customers may lead to charges of Big Brother surveillance – and do you really want to wind up on the front page of the newspaper? Look at Facebook & Google, and the problems they are having, particularly in the EU.

So, to sum up, AI is applicable to almost every area of endeavour – private sector, public sector, health care, or non-profit. But, and this is important, the more we automate routine things, the more important the so-called “soft,” or human, skills become – in areas like teamwork, leadership, empathy, common sense, inspiration, and so on.

For all of these reasons, and because of the speed with which AI unfolds, internal audit needs to be involved in the oversight of AI implementation from the outset!

AI is an enormously powerful tool, and can produce enormous positive and negative risks for your organization. And that, of course, requires auditors to be knowledgeable about AI, and its applications. This does not mean you need to know all the details about how things are done, but you must know and understand what is being done, and what its potential risks and benefits are.

But perhaps the greatest potential risk is that failure to adopt or learn about AI in a timely fashion will cause your organization to be left behind in a rapidly accelerating arms race!

The Blockchain Revolution

IBM defines blockchain as “a shared, immutable ledger for recording the history of transactions.”

Blockchain will bring – is bringing – a revolution in transactions, exchange, and data management, including privacy. Cryptocurrencies, like Bitcoin, are the highest-profile uses of blockchain techniques, but are mostly frauds that operate on the Greater Fool Theory. They can be useful ways of transferring value securely, but that is not what has elicited all the headlines. Blockchain more generally, on the other hand, will revolutionize all aspects of commercial transactions. Some of the changes it will provoke include:

  • Blockchain has the potential to facilitate frictionless, fool-proof transactions, seamless settlements, and automatic enforcement of agreements – if a given agreement is coded properly.
  • Another application of blockchain is identity tokens – proof of who you are, or what you are selling/buying/transporting, and all of the most sensitive information that goes along with it. Information can be made available only when and how authorized. It will still be possible to swindle people and organizations, but (probably) not to hack them. Ultimately, identity systems may well be the “killer app” of blockchain.
  • If widely adopted, Blockchain would eliminate the need for trusted third parties (e.g., the SWIFT bank funds transfer system). That, in itself, is a potential risk if not properly supervised and audited, because of the potential loss of a paper trail.
  • Traceability / transparency of food supplies – for example, tracing the origins of a tuna catch for high-end restaurants. In another example, IBM, using blockchain and the Internet of Things, worked with Walmart to trace the origin of a shipment of mangoes. Using traditional methods, it took nearly 7 days; using blockchain, it took 2.2 seconds. In terms of dealing with potential liabilities, the increased transparency of your supply chain will make it substantially easier to manage risks, and contain crises when they occur, which gets back to risk management.

Some examples of real-world supply chain problems that need to be solved are:

  • Counterfeit medicines in the pharmaceutical industry.
  • Food supply chain in China (including the tragic case of adulterated infant formula). Counterfeit auto parts in North America.
  • Enterprise IT equipment — one manufacturer of enterprise networking equipment estimates 10% of products in its multi-billion-dollar supply chain are grey market.

Blockchain risks that auditors need to keep in perspective

In operation, blockchain is autonomous. That means if there are mistakes in the codification of the blockchain, it is vulnerable, and you may not be able to correct it. As one example, the Decentralized Autonomous Organization (DAO) almost lost $50 million to hackers in 2016 because of a flaw in their blockchain coding[3].

The people doing the coding for a blockchain are typically techies – which opens the possibility that they could introduce self-serving transactions for fraudulent purposes.

Administrative oversight of tech operations can be lax unless those performing the oversight are both competent in, and vigilant over, the technology involved.

Moreover, blockchain is a complex technology that has been over-hyped. You can waste an awful lot of resources if you don’t know how to use it properly. Use it organically: start small, learn what you’re doing, and then scale up as quickly as you can maintain control & quality.
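The “shared, immutable ledger” idea itself can be illustrated in a few lines of code. This is a deliberately simplified sketch – real blockchains add distribution, consensus, and cryptographic signatures, and the record fields here are invented for illustration – but it shows why tampering with one recorded transaction breaks every link after it, which is exactly the property an auditor cares about.

```python
import hashlib
import json

def block_hash(record, prev):
    # Canonically serialize the record plus the previous block's hash,
    # so any alteration anywhere upstream changes this hash too.
    payload = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain, record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "prev": prev,
                  "hash": block_hash(record, prev)})

def verify(chain):
    """Return True only if every link in the chain is intact."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        if block["hash"] != block_hash(block["record"], block["prev"]):
            return False
        prev = block["hash"]
    return True

ledger = []
append_block(ledger, {"invoice": "A-100", "amount": 4200})
append_block(ledger, {"invoice": "A-101", "amount": 1800})
```

If anyone changes the amount on the first invoice after the fact, `verify(ledger)` fails immediately – the ledger cannot be quietly rewritten.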

Internet of Things (IoT): How It Will Affect Audit and Operations

IoT has been defined as a system “in which objects, animals, or people are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.”[4]

In looking at IoT, it’s important to realize that it will transform everyday objects, such as refrigerators, thermostats, doors & door locks, vehicles, people, animals, and almost anything else, into important sources of data. This data will allow an extraordinary new breadth of analysis and monitoring. It will also require massive amounts of computing power to cope with this avalanche of data, most of which will be routine and of very little interest.

Auditors are used to this kind of problem – but this is going to be on a scale completely without precedent. It will trigger a new flood of data, at least an order of magnitude bigger than anything you’ve encountered before. How will you manage it? What will you do with it? How will you take advantage of it?
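One common answer is exception-based monitoring: discard routine readings close to the source and keep only what needs analytic or human attention. The sketch below is hypothetical – the sensor names, reading format, and thresholds are assumptions – but it shows the basic filtering pattern.

```python
def exceptions_only(readings, limits):
    """Yield only the readings that fall outside their sensor's normal band."""
    for r in readings:
        lo, hi = limits[r["sensor"]]
        if not (lo <= r["value"] <= hi):
            yield r

# Hypothetical normal operating bands per sensor.
limits = {"freezer_temp_c": (-25, -15), "door_open_s": (0, 120)}

stream = [
    {"sensor": "freezer_temp_c", "value": -21},  # routine: dropped
    {"sensor": "freezer_temp_c", "value": -3},   # out of band: kept
    {"sensor": "door_open_s", "value": 45},      # routine: dropped
    {"sensor": "door_open_s", "value": 900},     # out of band: kept
]

alerts = list(exceptions_only(stream, limits))  # only the two exceptions
```

The generator shape matters: the routine readings are never stored at all, which is how you keep an order-of-magnitude-larger data flood from swamping the analysis.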

Data are now multiplying far faster than computer capacity – faster even than the exponential advances in computing power described by Moore’s Law.

Some examples of applications:

  • Tracking of individual container shipments – and all of the contents of each container – to provide real-time estimates of just-in-time component delivery.
  • Smart City traffic management, parking enforcement and allocation, lighting, environmental monitoring, exceptions alerts, and individual and physical security. Example: a traffic jam that becomes a networked computer, and solves itself by having cars communicate traffic conditions to each other.
  • Monitoring, auditing, and control over physical assets, including such physical assets as trucks, instruments, tools, computers, machinery, and equipment.
  • Smart buildings that manage lighting, climate control, and physical and operational security. Marriott in China has reported HVAC savings of 10-15% through tracking individual presence, use, and preferences.
  • Maintenance indicators by vehicle or piece of equipment.
  • We will be surrounded by computer intelligence and watchers – and that’s good & bad news. It will improve security – but it also moves us further towards a Big Brother – and Little Brother & Little Sister – world.

How IoT Will Help and Hurt Audits

IoT will allow you to know where everything is all the time, which means that operations will be more efficient, with less waste, less idling.

But imagine how hard it’s going to be to do the software updates for all those devices! Consider how much trouble you have with updates to your smartphone / tablet / laptop now – then multiply that by a million!

Suppliers will use this as an excuse to sell you an ecosystem rather than isolated devices, locking you into a sole-supply arrangement. And consider how you will secure all of these devices against cyberattacks.

The history of computer technology is first it’s invented, then it gets wide distribution, and then its vulnerabilities become evident, which are then attacked. This happened with mainframe computers in the 1970s, it happened with desktop computers in the 1980s and 1990s, it has happened (and is still happening) with smartphones. Now it’s going to happen all over again with IoT devices.

If every piece of equipment your organization owns is Internet-enabled, have you changed all the user names from “ADMIN” and all the passwords from “12345”? And how do you keep updating them? This is an incredibly dangerous vulnerability. We’ll come back to cyberattacks a little later.
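Even a simple automated sweep of the device inventory can surface that vulnerability. The sketch below is hypothetical – the inventory format and the list of default credential pairs are illustrative assumptions, and a real sweep would check far more pairs – but it shows the kind of check an audit team could run against an IoT asset register.

```python
# Known factory-default credential pairs (illustrative, not exhaustive).
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("admin", "12345"),
    ("ADMIN", "12345"),
    ("root", "password"),
}

def audit_devices(inventory):
    """Return the devices still running factory-default credentials."""
    return [d for d in inventory
            if (d["user"], d["password"]) in DEFAULT_CREDENTIALS]

# Hypothetical asset register entries.
devices = [
    {"id": "hvac-01", "user": "ADMIN", "password": "12345"},
    {"id": "cam-17", "user": "facilities", "password": "x9!kQ-unique"},
]

exposed = audit_devices(devices)  # flags hvac-01
```

A report listing every device still on defaults turns a vague worry into a closable audit finding.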

The Environment Auditors Will Operate Within: Other Risks

Climate change – There is widespread agreement that climate is changing, even if there isn’t agreement on why. So, let’s ignore the debate, and look at the consequences, which is, after all, what auditors have to prepare for.

Weather forecasters and climatologists agree that extreme weather events are going to become more common, and even more extreme. This includes all weather extremes, from thunderstorms, to flooding, to blizzards, to cold snaps, to heat waves, to dangerous wind storms, to hurricanes and tornadoes. Examples include:

  • Cyclone Idai, which hit eastern Africa in March of this year, killing more than 750 people, destroying homes, and displacing hundreds of thousands of people.
  • Wildfires in California in 2017 & 2018 happened because rainfall was unusually low and temperatures unusually high. In fact, the dry ground was actually pulling moisture out of the air in California, making the situation even worse. And poorly maintained PG&E equipment is thought to have sparked the Camp Fire, causing the company to declare bankruptcy, anticipating around $30 billion in liabilities.
  • Tornado reports in the American Midwest last December – winter tornadoes are really rare.
  • Flooding is getting more extreme. The Spring floods in the Midwest were labeled “historic” and “unprecedented” for a reason – and the flooding lasted for months in some areas.

Cyberwar – This is a severely underestimated risk. Everyone in business today is aware of cyber-risks – but virtually everyone underestimates how significant they are.

You’re undoubtedly aware of data breaches, which can harm your competitiveness, or be a major public embarrassment. Target, Equifax, Marriott, eBay, JPMorgan Chase, Yahoo. You’re probably also aware of the risks involved in ransomware.

Between 2013 & 2014, over 500,000 machines were locked up by CryptoLocker, which worked by encrypting people’s files and demanding money for the decryption keys.

By 2015, TeslaCrypt emerged. It followed a similar pattern, this time by loading malware onto game files that were widely shared.

TeslaCrypt added a new wrinkle in that the creators kept upgrading the software as countermeasures were introduced.

In 2015-16, a similar kind of ransomware attacked Android smartphones. Smartphones & tablets typically have less security than mainframe, desktop, or laptop computers. This makes them a conspicuous vulnerability for organizations where the use of such devices is widespread.

In mid-2017, the WannaCry worm exploited a vulnerability in Microsoft software that didn’t rely on users opening an infected file. As a result, it spread quickly and widely, and was responsible for shutting down the computers of many organizations that had not kept their software up-to-date, including hospitals and radio stations.

NotPetya was similar to WannaCry. It used software developed by the NSA, and was updated to respond to defences. But what may be particularly important is that it may have been created by Russia to attack Ukraine – but then spread across the globe.

And that brings me to the first of two points I want to make about the future of cyber-risks. National governments are now actively engaged in cyberattacks, and not just against other governments. Some of them are also attacking businesses and anyone who is unprotected.

President George W. Bush, with the support of Israel, authorized one of the first secret cyberattacks, using the Stuxnet virus to sabotage the centrifuges that Iran was using to purify uranium for a potential atomic bomb. This secret attack was continued by President Obama. The operation was quite successful: it destroyed 1,000 of Iran’s 6,000 centrifuges[5].

Today, the list of state-sponsored hacking includes Russia, North Korea, China, Iran, Turkey, and Israel – and, of course, the United States. The point here is that you might be attacked by a national government intent on piracy, ransom, or just disruption of America’s business operations. And, of course, national governments have significantly more resources to throw at hacking.

Which leads to my second cyber-point: Artificial Intelligence is coming to hacking. To date, hacking has been done by individuals creating malware, worms, viruses and the like that attack targets of opportunity. Such targets occur when someone responds to a phishing attack, or they exploit a vulnerability that has not been addressed.

But now introduce AI into the equation. Whereas some past hack attacks have been particularly successful because the hackers kept upgrading the software to respond to defender’s attempts to stop them, now AI-backed hacks can respond in micro-seconds, and share information from around the world, making them impervious to all but the newest, smartest defensive efforts. Moreover, AI can be targeted to specific companies, governments, or organizations. Instead of attacking a vulnerability, an AI could be set to seek openings in an organization’s defences thousands of times per second – and to keep trying until it finds one – and do that for thousands of organizations simultaneously.

My point in all of this is that you almost certainly have spent time and effort protecting your organization from cyberattacks – but those attacks are going to become substantially more sophisticated and harder to protect against. This is an on-going issue, and one where you need to constantly revisit and upgrade your defences.

And remember that the single most popular time for organizations to prepare for a cyberattack is after they have been hit.

Summing Up

What’s an auditor to do? How do you prepare for tomorrow?

You need to surf on top of the changes to come, or you will be overwhelmed by them. You can use any or all of the futurist tools I’m about to describe. Any of them will increase your preparedness for the future, and they can be used in combination. You don’t need to use all of them – so think about what makes most sense for you, and then try them, because they all take time, resources, and attention to use.

Some futurist tools:

  • Environmental scanning – what’s happening now? Yogi Berra: “You can see an awful lot just by looking”. To that I would add: you can miss an awful lot by not looking. This presentation is a form of environmental scanning, where I’ve laid out things that you might not have thought about, or haven’t considered in quite the light I’ve presented them. Scanning involves looking out at the margins of what’s happening, and then asking yourself, “How could this affect us? How could we use it to our advantage?”.
  • Scenario Planning – what if you’re wrong about tomorrow? This relates directly to risk management by encouraging you to ask “What if?” questions. A word of caution: there is a natural tendency for any organization to project the future you want to have happen – and that’s risky, because it blinds you to other possibilities.
  • Wild Card Analysis – how to expect the unexpected. It is possible to expect the Spanish Inquisition (with apologies to Monty Python). Example: Royal Dutch Shell’s anticipation of the collapse of the Soviet Union in 1989 and the early ’90s. They told the CIA – and the CIA didn’t believe them.
  • The Desired Future & Backcasting – what future do you really want, and how do you get there? This relates directly to strategic planning. It involves clarifying the future you really want – which is surprisingly difficult – and then walking backwards from that Desired Future into the present. The results dovetail perfectly with tools like Precedence Planning, or Critical Path Analysis. It’s a lot of work, but can lead to extraordinary results.
  • Cooperation up and down the supply chain – moving your supply chain into the future.
  • Innovation – not just a motherhood issue: forcing the pace of change to your own advantage. Organizations actually don’t like to innovate, for three major reasons. Innovation requires you to do things you’ve not done before, which means you’ll look stupid & clumsy doing them – and no one likes to look stupid. It requires you to come up with fresh, new thoughts about things you’ve thought about thousands of times before. And, worst of all, it involves personal risk: if you’re the one who comes up with the Bold, New Idea that flops, it will hurt your career. There are ways around these issues, but you have to structure your efforts specifically to do that. It involves becoming an Innovation Organization, where innovation is an all-the-time thing, and not a when-we-have-time special effort.

If you’d like to know more about any of these techniques, I’ll be discussing them further in the breakout session. One thing you could do would be to set up a future-focused committee within your organization. Intel does this in its factories in an attempt to get a clearer focus on what’s going to affect their products, their company, and their supply chain, and to develop a more robust action plan for their operations.

The Bottom Line

The nature and magnitude of risks are changing rapidly, and in important ways. It’s important to get out in front of these changes for the safety and future of your organization!

Or, as Alan Kay put it: “The best way to predict the future is to invent it.” There are going to be more opportunities of all kinds than ever before – but the future is going to belong to those organizations that are prepared for our rapidly mutating world. And there’s going to be more competition than you’ve ever experienced before.

The future is changing faster than ever before, and probably faster than we expect. Fortune favors the prepared mind. If I can help you along the way, don’t hesitate to reach out to me. I wish you good luck, and Godspeed. Thank you.

© Copyright, IF Research, June 2019.


[1]https://www.ey.com/en_gl/assurance/how-artificial-intelligence-will-transform-the-audit

[2]“Artificial Intelligence – Considerations for the Profession of Internal Auditing”, p.8. https://na.theiia.org/periodicals/Public%20Documents/GPI-Artificial-Intelligence.pdf

[3]https://www.wired.com/2016/06/50-million-hack-just-showed-dao-human/

[4]TechTarget, as quoted in “Auditing the Internet of Things”, Internal Auditor website. https://iaonline.theiia.org/2015/auditing-the-internet-of-things

[5]https://www.washingtonpost.com/world/national-security/stuxnet-was-work-of-us-and-israeli-experts-officials-say/2012/06/01/gJQAlnEy6U_story.html?utm_term=.957e45ec2728.