Fog: What’s Next in Computing


by Senior Futurist Richard Worzel, C.F.A.

Cloud computing is now a widespread computing commodity: useful, ubiquitous, and powerful. But Cloud computing is fundamentally a 1970s concept, as it amounts to little more than an adaptation of timesharing, which IBM pushed hard in that era. Moreover, Cloud computing – indeed, any centralized computing network that requires backhauling data – is reaching its limits because of the recent, massive explosion in data.

Fog computing, on the other hand, is a new concept that promises not to replace or supplant Cloud computing, but to augment it and make it even more valuable – even as Fog computing breaks new ground.[1]

And, as with most truly new ideas, Fog is difficult to explain, because it’s not an adaptation of something we already know. As a result, I’m going to describe Fog computing and why it’s needed, and then illustrate it with two examples. I believe the examples will actually be of greater value.

The Data Explosion, or Why the Cloud Is Not Enough

For the past 50 years or so, computing resources have expanded faster than data. Processor speeds have increased, and data storage costs have fallen, in accordance with Moore’s Law, popularly summarized as computers doubling in speed, and halving in cost, roughly every 18 months. As a result, every time we’ve had a problem that involved crunching data, we just threw more computing resources at it, and got it solved.

Now that has changed. Suddenly, data sets are growing much faster than computer resources. Computers are still following Moore’s Law, which means their cost-effectiveness is expanding exponentially. But now, many kinds of data are expanding factorially, which is dramatically faster than exponential expansion[2].

There are two fundamental reasons why data are expanding so much more quickly than before. The first is that we are analyzing data sets that are both new to us and massively bigger than any we’ve ever considered trying to tackle before. For instance, each human being has a genome consisting of 3 billion base pairs (the precise composition of their DNA strand) – or, in data terms, every human genome has 3 billion data points. There are currently about 7.3 billion humans, so if we were going to analyze the genomes of the entire human population to search for the causes of disease, the effects of different kinds of nutrition, or many of the other questions we are now contemplating, we would be dealing with a data set that could be as large as 3 billion x 7.3 billion data points. And if we wanted to analyze the data for the millions of species of plants, animals, and other organisms on Earth, the data set would grow many times larger again. We would never have considered this possible in the past, but we are now edging towards this kind of analysis.
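
To give a sense of the scale involved, here is a rough back-of-the-envelope calculation written as a small Python sketch. The figures are the same round numbers used above; this is my own illustration, not a measurement from any actual genomics project.

```python
# Back-of-the-envelope arithmetic for the scale of whole-population genomics.
# These are illustrative round numbers, not a real data set.

base_pairs_per_genome = 3_000_000_000   # ~3 billion base pairs per human genome
human_population = 7_300_000_000        # ~7.3 billion people (2015 estimate)

all_human_genomes = base_pairs_per_genome * human_population
print(f"All human genomes: ~{all_human_genomes:.1e} data points")
# -> roughly 2.2e19, i.e. about 22 quintillion data points

# Even at a single byte per base pair, storing the raw sequences alone
# would take on the order of 20 exabytes, before any analysis begins.
bytes_per_base = 1
print(f"Raw storage: ~{all_human_genomes * bytes_per_base / 1e18:.0f} exabytes")
```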

But that’s actually the slower-growing of the two kinds of data now emerging. The faster-growing kind has to do with how different entities relate to each other on the Internet. This matters because the Internet of Things (“IoT”) – the universe of devices attached to the Internet, from smartphones to thermostats to workout monitors and much more – is exploding. In fact, 2015 appears to be the first year in which the majority of communications over the Internet pass between IoT devices talking to each other, rather than between people, or between people and things. Current estimates project that there will be 40 billion IoT devices within 5 years, and even that may be conservative.

Meanwhile, a lot of analysis now consists of metadata, which is data about data. As a result, how data are communicated from one entity to another becomes grist for this kind of analysis. So as the number of IoT devices explodes, the number of potential relationships between IoT devices explodes much, much faster.
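
To see why relationships outrun devices, consider a small Python sketch of the combinatorics. The device counts here are arbitrary and this is my own illustration rather than a measurement of any real network: with n devices there are roughly n(n-1)/2 possible pairwise links, 2^n possible groupings, and n! possible orderings of interaction, which is the factorial growth mentioned above.

```python
from math import comb, factorial

# For n devices, count the possible pairwise links, the possible
# subsets of devices that might interact, and the possible orderings.
# Pairs grow quadratically, subsets exponentially, orderings factorially.
for n in (10, 20, 40):
    pairs = comb(n, 2)          # n * (n - 1) / 2 possible device-to-device links
    subsets = 2 ** n            # possible groups of devices
    orderings = factorial(n)    # possible sequences of interaction
    print(f"{n:>3} devices: {pairs:>6} pairs, {subsets:>15,} subsets, ~{orderings:.1e} orderings")
```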

There is no way that central processing of data, say through the Cloud, can handle these quantities of data. And that’s why Fog computing is going to emerge and become so critical.

What Is Fog Computing?

Fog computing is computing done at the edge, where data is first gathered, rather than having that data transmitted to a central computer, crunched, and the results returned to the edge. Instead, a Fog network is structured so that each node computes on its data where it originates, then passes both the data and its results along to another node, which adds its own data, performs its own calculations, and passes them along in turn. The result is decentralized computing, with results emerging without a central processor controlling or directing the computation. This is a very different kind of computing than we are used to. Let me illustrate how it might work, and why it will be important.
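
Before turning to the examples, here is a rough Python sketch of what that node-to-node structure might look like. The class names, message format, and alert threshold are my own invention for illustration, not any vendor’s actual Fog framework.

```python
from dataclasses import dataclass, field

@dataclass
class Summary:
    """Compact result passed from node to node instead of raw data."""
    count: int = 0
    total: float = 0.0
    alerts: list = field(default_factory=list)

class FogNode:
    def __init__(self, name, threshold=30.0):
        self.name = name
        self.threshold = threshold

    def process(self, readings, incoming: Summary) -> Summary:
        """Compute locally, then merge with what the previous node passed along."""
        local_alerts = [r for r in readings if r > self.threshold]
        return Summary(
            count=incoming.count + len(readings),
            total=incoming.total + sum(readings),
            alerts=incoming.alerts + [(self.name, a) for a in local_alerts],
        )

# Each node adds its data and calculations, then hands the result onward;
# no central computer ever sees the raw readings.
summary = Summary()
for node, readings in [
    (FogNode("node-A"), [22.1, 24.3]),
    (FogNode("node-B"), [31.7, 29.9]),
    (FogNode("node-C"), [25.0, 26.4]),
]:
    summary = node.process(readings, summary)

print(f"average={summary.total / summary.count:.1f}, alerts={summary.alerts}")
```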

A Traffic Jam as a Computer Network

I’m sure you’ve experienced this kind of situation: You’re driving along a superhighway through a city, comfortably above the speed limit and making good time, when suddenly you see red brake lights blossoming in front of you. You slow, and then stop, then crawl, then stop, moving forward only in unsatisfying lurches. You have no idea what’s happened, how long traffic will be like this, or how far the traffic jam extends. Worse, you’re coming up on an exit, and can’t tell whether you’d be better off staying on the superhighway, or exiting and trying to push your way through city surface road traffic.

Meanwhile, you look at the cars whizzing by in the opposite direction, and wish that, somehow, they could tell you what’s ahead, how bad it is, and how far you would have to go to get by this stop-and-go traffic jam.

Virtually all contemporary cars have multiple onboard computers, and newer ones have several different kinds of communications capabilities. Now add a Fog computing system.

With cars equipped to participate in a Fog network, each car, in both directions on the superhighway, makes data about the traffic around it available to every other car, with all personal identifiers stripped off for the sake of privacy. As a result, your car, like every other car stuck in traffic, quickly learns that the stop-and-go traffic you are experiencing actually gets worse, and that the road is completely blocked about a mile ahead, where a tractor trailer has tipped over and blocked all the lanes on your side of the superhighway. Each car then uses this data to calculate its best options, and presents them to the driver (or executes them itself, if it’s a self-driving car).
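
A highly simplified Python sketch of the calculation each car might run is below. The speeds, distances, and decision rule are invented for illustration; a real vehicle-to-vehicle system would be far more sophisticated.

```python
# Each car receives anonymized reports from nearby vehicles and decides
# whether to stay on the highway or take the upcoming exit.
reports = [
    {"miles_ahead": 0.2, "speed_mph": 8},
    {"miles_ahead": 0.6, "speed_mph": 3},
    {"miles_ahead": 1.0, "speed_mph": 0},   # blockage: overturned trailer
    {"miles_ahead": 1.1, "speed_mph": 0},
]

def minutes_to_clear(reports, distance_miles=1.5, detour_minutes=12.0):
    """Estimate time to get past the jam on the highway vs. a surface-street detour."""
    crawl_speeds = [max(r["speed_mph"], 1) for r in reports]   # avoid divide-by-zero
    avg_speed = sum(crawl_speeds) / len(crawl_speeds)
    highway_minutes = distance_miles / avg_speed * 60
    return highway_minutes, detour_minutes

highway, detour = minutes_to_clear(reports)
print("Exit now" if detour < highway else "Stay on highway",
      f"(highway ~{highway:.0f} min vs detour ~{detour:.0f} min)")
```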

In your case, as you are coming up to an exit, the car indicates that you will be far better off exiting immediately, and finding a new route around the blockage. The same is true for all the cars around you, as they each reach the same calculation based on the traffic data they have collected.

But now hundreds of cars are exiting the superhighway and flooding the local surface streets of the city, causing a different kind of traffic jam. Again, each car is broadcasting details of the traffic around it and behind it. And as each car gathers data, it starts figuring out the best routes around this unexpected, and rapidly growing, new volume of surface-street traffic, so that cars disperse along many different routes.

Now the city’s central traffic computer, located in the Cloud, gets into the act. It notes the blockage on the superhighway and notifies emergency services, which dispatch police, fire, and ambulance crews to the scene and start diverting traffic from farther back on the superhighway. The traffic computer also considers the flow of traffic exiting the superhighway, and starts adjusting its traffic flow controls: when traffic lights go red and green, how long turn lanes are given priority, and the direction of reversible lanes, all to further diffuse the traffic and keep things moving.

As a result, while traffic slows and thickens, the effects are minimized, and things keep moving.

This is Fog computing interacting with Cloud computing.

A Farmer’s Field as a Computing Network

Tom Hauptman is a cutting-edge farmer, but even he is surprised by the latest development. In conjunction with the University of Saskatchewan in Saskatoon, and Rice University in Houston, Texas, Tom is one of a limited number of farmers running a new kind of experiment: turning one of his fields into a networked – and networking – computer. He was approached by the joint venture team because he had spoken and blogged about the future of precision agriculture, and so was receptive to the concepts involved in the SmartField project.

The first step was to plant sensors along with his normal wheat crop. This was done by adding a small robotic attachment to his planter. Every so often, as his field was being seeded, it would punch a foot-high probe, containing a sensor-computer node, into the ground along with the wheat, with the result that Tom’s field is now growing both wheat and data.

The nodes have a variety of sensors in their package, as well as an obsolete smartphone computer chip. All of this equipment has been selected first to use as little electricity as possible, and then to be as cheap as possible in order to keep overall costs down. In addition to the metal probe that supports the node in the ground, there are two electricity generators: a soil, or microbial, battery, which generates electricity from the microbes in the soil around the probe; and a small infrared solar panel that harvests electricity from heat rather than direct light. The designers realized that as the wheat grew, it would quickly overshadow the probe, eliminating any potential power from visible-light solar cells. The infrared cell, while more expensive, will continue to operate even when overshadowed.

The computers on the nodes were programmed before they were distributed around the field. Their purpose is to assess the data collected by the sensors, comparing the readings against an elaborate computer model of a hypothetical, optimally growing wheat stalk at the same stage of development. The aim is to identify anything that might hurt or help the wheat around the sensor to grow optimally: emerging pest threats, too much or too little water, nutrient deficiencies, or anything else that might cause development to be sub-optimal.
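
As a rough illustration of that node-level comparison, here is a short Python sketch. The sensor metrics, tolerance bands, and readings are hypothetical stand-ins for the much richer SmartField model described above.

```python
# Hypothetical per-node check: compare sensor readings against the expected
# profile for an optimally growing wheat stalk at the same growth stage.
OPTIMAL_PROFILE = {
    "soil_moisture_pct": (22.0, 30.0),   # acceptable band (low, high)
    "leaf_area_index":   (1.8, 2.6),
    "canopy_temp_c":     (18.0, 26.0),
}

def assess(readings: dict) -> list:
    """Return a list of deviations from the optimal-growth band."""
    flags = []
    for metric, (low, high) in OPTIMAL_PROFILE.items():
        value = readings.get(metric)
        if value is None:
            continue
        if value < low:
            flags.append(f"{metric} low ({value} < {low})")
        elif value > high:
            flags.append(f"{metric} high ({value} > {high})")
    return flags

node_reading = {"soil_moisture_pct": 17.5, "leaf_area_index": 1.4, "canopy_temp_c": 24.0}
print(assess(node_reading))
# -> ['soil_moisture_pct low (17.5 < 22.0)', 'leaf_area_index low (1.4 < 1.8)']
```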

Now, a month after the planting of the wheat field and the probes, something remarkable has happened. A group of drones, each about the size of a badminton shuttlecock, regularly spreads out, moving from probe to probe, both collecting and disseminating data. The data gathered from the probes are relayed both to a computer in Tom’s home office and to the other probes in the field. Now, some of the probes are reporting that soil moisture is falling below the optimal band, while others are reporting improper development of the surrounding wheat stalks. The Fog computer network created by the sensor nodes has thrown up two hypotheses: first, that there is a section of the field that drains more quickly than Tom suspected, and therefore needs more irrigation; and second, that some external factor is harming some areas of the wheat field. The areas related to these two hypotheses overlap, but are not identical.
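
A sketch of how the network might roll those per-probe flags up into field-level hypotheses follows; the grid positions and the simple grouping rule are my own invention for illustration.

```python
from collections import defaultdict

# Flags reported by individual probes, keyed by their grid position in the field.
flags_by_node = {
    (2, 3): ["soil_moisture low"],
    (2, 4): ["soil_moisture low", "growth sub-optimal"],
    (3, 4): ["growth sub-optimal"],
    (7, 1): ["soil_moisture low"],
}

# Group probes by the kind of problem they report; each group becomes a
# candidate hypothesis about the field (a real system would also check
# that the probes are adjacent before treating them as one region).
nodes_by_flag = defaultdict(set)
for pos, flags in flags_by_node.items():
    for flag in flags:
        nodes_by_flag[flag].add(pos)

for flag, nodes in nodes_by_flag.items():
    print(f"hypothesis: '{flag}' affects {len(nodes)} probes at {sorted(nodes)}")

# Overlap between the two affected regions, as in Tom's field, suggests the
# hypotheses are related but not identical.
overlap = nodes_by_flag["soil_moisture low"] & nodes_by_flag["growth sub-optimal"]
print("overlapping probes:", sorted(overlap))
```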

These data are relayed up to a Cloud computer run by the university researchers. Their model concurs that some areas of Tom’s field drain more quickly, which is not a terribly interesting result (at least for the researchers), but it also concludes that a new kind of wheat leaf rust may be emerging, one the researchers haven’t encountered before and which isn’t included in their optimal-development model. The researchers take the data from the Cloud model and use it to search recent scholarly papers, as well as agricultural extension agents’ reports of emerging threats. They find a handful of reports of such a rust, but no research – it’s too new. One of the reports suggests that a recently introduced fungicide seems to be helping resist the new rust infection. The researchers therefore send the data about the nature of this new rust, and the possible fungicide remedy, to the Cloud model.

The Cloud model then acts on the data, downloading it to all of the farms involved in the tests, including Tom’s farm, where it goes down to the individual nodes. The nodes compare the data they receive with what they observe as the rust emerges, and conclude that this is the threat causing the affected plants to grow sub-optimally. As their data are cross-tabulated by the Fog computer, a recommendation is made to Tom that he immediately obtain the new fungicide and apply it in controlled amounts to the specific areas affected, as well as to a barrier of uninfected plants around those areas. The amount involved is much less than would otherwise be used, first because only the affected or threatened areas are sprayed, and second because the infection has been caught at a very early stage, before much damage has been done.

The first Tom knows of any of this is when the recommendation to spray appears as a text message on his smartphone. He calls the university researchers to talk to them, because Tom has no experience with this. They confer, and Tom concludes that he will go ahead with the proposed spraying, orders the fungicide, and prepares to spray the next morning.

This action is the result of data being gathered by a Fog network of sensors and computers; that network identifying, at a very early stage, that something is amiss; that conclusion being passed to other nodes for comparison, reinforcement, or clarification; the results being uploaded first to Tom’s computer, then to the researchers’ Cloud system; research being done on the basis of these early findings, with the conclusions updated in the Cloud system and downloaded to the farm test nodes, which then compared the expected development profile with the reality in the field; and, finally, remedial action being proposed.

Where the Action Is: Everywhere Through Interlocking Networks

The computing involved in this example happened at the sensor level, at the farm level, and in the Cloud. Here again, Fog and Cloud computers interacted, each bringing their own strengths.

A fully-articulated global farm network would include Fog computing at the field level, databases at the farm, regional, national, and global levels, and Cloud computing happening at the regional, national, and global levels. Information would pass both upwards and downwards from every level, and conclusions would emerge through techniques such as evolutionary algorithms, both with and without human intervention. As new threats or events occurred in one location, the relevant data would quickly be spread to everyone involved in the various networks. Hence, a farmer’s Fog network would be warned to look out for certain characteristics of an emerging threat, and data from the field, along with Fog-generated conclusions, would be forwarded to Cloud networks for broader comparison and assessment.

In theory, sensors will eventually watch over every plant on every farm in the world, gathering data, with Fog networks computing it, and passing conclusions and observations to local, regional, national, and global Cloud networks. No Cloud system, regardless of how powerful, could possibly handle such a volume of data. And no stand-alone Fog system, encompassing a single farm, could have the depth of knowledge that being networked globally could provide.

What’s missing from this picture is the economics that would make the idea financially feasible. This involves not only the cost of the sensor-processor nodes, but the interchange of data itself. I strongly suspect that data will be bought and sold at all levels of these interlocking networks. Precisely how such data sales will be priced and tolled is unknown, but mechanisms will emerge. Moreover, the buying and selling of data may become of much greater significance to farm suppliers, like Deere, BASF, or Bayer, than their current, physical product lines. Indeed, their current product lines may become merely the means by which they are involved in selling data, much as Gillette sells razors virtually at cost in order to sell replacement blades at a profit.

Welcome to the World of Fog

Fog computing is virtually unknown in the world today. Very few people have heard of it, even among the high-tech companies that watch for such things. That’s about to change, and with a vengeance. And the combination of Fog computers with techniques like evolutionary algorithms means that such networks – or networks of networks – will, in many cases, reach conclusions without human intervention.

We are entering a strange new world, but one that holds enormous promise, as well as the potential for enormous misuse. Welcome to the Fog.

© Copyright, IF Research, May 2015.

[1] Disclosure: I’ve recently made an investment in an early stage Fog computing company, Fog Lifter™, which reflects my belief in the importance of this emerging sector of the computer industry.

[2] To illustrate factorial expansion, 2 raised to the 5th power (exponential) is 2 x 2 x 2 x 2 x 2 = 32. For comparison, five factorial is 1 x 2 x 3 x 4 x 5 = 120.


  • Adrian Browne Aug 11, 2015

    Another view of this phenomenon occurred back in the 1970s, when computing was moving in the opposite direction – from timesharing to microcomputers – albeit at a much simpler level.
    Timesharing simply became too clumsy and inflexible (and expensive), so PCs were introduced to gather data locally and process it on a minute-to-hour basis, and then transfer it to the timesharing computer at day’s end for analysis and distribution around the country.
    The phrase we used to describe this approach was ‘articulated’ computing.