The 7 Megatrends that Will Affect the Future of Infrastructure, Part II

by senior futurist Richard Worzel, C.F.A.

This is the second half of a complete report on the future of infrastructure in Ontario, commissioned by the Residential and Civil Construction Alliance of Ontario. The first half can be found here; it presented the report’s conclusions, and dealt with demographics and technology. This half deals with climate change & environmental degradation, the global economy, human longevity & health management, the widening tears in the fabric of society, and the rapidly eroding job market. While this report is about Ontario’s infrastructure needs, the comments apply more broadly to virtually all jurisdictions in the developed world.

Climate Change & Environmental Degradation

The most important thing to remember about the coming effects of climate change is that Mother Nature always gets paid. Damage from extreme weather cannot be avoided, ignored, postponed, or overridden by political opinions. Repairing the damage left by such events can be deferred, or left to someone else, if the political will to do so is strong enough, but the economic costs would remain, and would affect everyone.

Climatologists have been quite clear that no individual weather event can be traced specifically to climate change. However, the rising incidence of extreme weather events is directly traceable to climate change. This means that as the Earth’s climate changes, regardless of why it is changing, we will experience a growing number of weather disasters, from flooding (as happened in Calgary and, to a lesser extent, Toronto), to drought (as is happening in Western Canada right now), stronger hurricanes, thunderstorms, blizzards, ice storms, and so on.

In other words, we cannot predict a once-a-century storm, but we can predict that once-a-century storms will now happen more frequently. Hence, we may have to plan on enduring such events once-a-decade, or even more often, instead of once-a-century. This will require a much stronger – and more costly – response to weather and climate than in the past, and a more robust infrastructure to be prepared for such events.

In some ways, the worst part of this is that we don’t know how changing climate will play out in terms of weather, so we don’t know how to prepare. Will Ontario experience flooding or drought? Will our winters be warmer and snowier, or colder and drier? We don’t know, and that uncertainty carries its own costs in planning terms.

For instance, suppose that Ontario’s tornado alley, currently focused in the southwestern part of the province, were to shift somewhat eastward, and the GTHA were to start experiencing regular tornadoes. Would we be prepared? Current building codes do not contemplate frequent storms of such power. Imagine downtown Toronto, say at King & Bay, experiencing an F3 tornado.

What we do know is that extreme weather events are becoming more frequent. It is therefore clear that we must consider this in any future infrastructure plans.

Water Supply

A more predictable future issue relates to water supply, partly because Ontario, like most other jurisdictions, has avoided necessary investments in maintaining and upgrading water management systems, and because the availability of fresh, potable water is becoming a critical issue almost everywhere.[1]

Moreover, Canadians generally, and Ontarians in particular, tend to feel we have all the water we need, and hence don’t tend to think about water supplies. Walkerton proved that this isn’t necessarily the case, but there’s more to the issue of water than just bad management, as this quote from Statistics Canada indicates:

“In Ontario, the threat to water availability is high (more than 40%) in the urbanized south-west part of the province. This is caused by large industrial and municipal water use and a low inland surface water supply.  According to the OECD classification scheme then, this region was under water stress during these years [2005 & 2007]. In other parts of the province, the results of the indicator calculations show a low threat to water availability.”

And, as mentioned earlier, almost all of Ontario’s population growth is in the south-western parts of the province. Accordingly, Ontario cannot afford to be complacent about water.[2] Moreover, while this Statistics Canada study examined water usage during 2005 & 2007, it measured that usage against a 30-year average of the water supply. Hence, this wasn’t just a case of two years that happened to be unusually dry; it is a much broader problem related to the concentration of industry and population growth in southern Ontario.

One of the simplest ways for municipalities to deal with potential water shortages is relatively cost-effective, and uses well-established, off-the-shelf technologies: process sewage back into potable water, which would significantly reduce the need for additional fresh water. The problem is the so-called “yuck” factor.[3] Some communities in California have overcome this by pumping purified water back into aquifers, which also recharges them. Or, to make this approach more palatable, municipalities can return the sewage, processed to drinking water quality, to streams, rivers, or lakes for other, downstream centres to use. This happens in many places in North America, including Saskatoon and Edmonton, which make use of the South and North Saskatchewan Rivers, respectively.

A more exotic future solution may be the use of nanotechnology water filters, such as those created using graphene – a highly organized form of carbon that is finding many applications. The potential to create a graphene filtration system that is relatively cheap and effective on an industrial scale has not yet been proven, but is worth watching.[4] However, even if it proves successful, it leaves unanswered the other fundamental question (after cost) that bedevils desalination efforts: What do you do with the toxic impurities that have been separated from salt or polluted water?

But however it’s done, population growth, especially in southern Ontario, will require that water infrastructure be given a high priority.

Solid Waste

Next, garbage, or solid waste, will be a persistent problem until we face it squarely, and stop trying to sweep it under the carpet. Efforts to divert solid waste from landfill to recycling are commendable, but won’t be enough, as we are running out of landfill sites.

The major problem with recycling is that it depends heavily on the market prices for the materials recycled. This will be particularly problematic in future as China, which has been the engine of demand for commodities of all kinds, will experience lower rates of economic growth in the future, which will lower the demand for, and hence the prices of, most commodities. In turn, this will make recycling less appealing economically.

Some parts of Europe have taken a different approach to recycling by legislating that the cost of a product should include the cost of recycling (or disposing of) the materials involved. Whether Ontario adopts that approach or not, we should be studying what other jurisdictions have done with garbage, and adopt those techniques that are most cost-effective, and that take fullest account of the environmental consequences of use. The days of ignoring environmental consequences are ending, no matter how big the tantrums of those who want to continue to just dump.

Sweden has a somewhat controversial approach that has been economically very successful: they first recycle as much material as they can, typically about 60% of solid waste, and then incinerate the balance, generating power by doing so. They have been so successful in these efforts that they have run out of garbage, and are now letting their neighbours pay them to take garbage for incineration.[5]

Many environmentalists in North America deplore this practice (and in the process seem to feel that they are holier than the Swedes, but on what seems to me to be thin evidence). They typically object on two principal grounds: first, that incineration produces dangerous pollution, and second, that it’s a sin to destroy materials we may be able to reuse.

The first point can be refuted: “SEMASS, a waste-to-energy facility in Massachusetts, in the US, uses 1 million tonnes of municipal solid waste to generate 600 million kilowatt-hours of electricity every year and recycles 40,000 tonnes of metals. The annual toxic emission is less than half a gram.”[6]

As for the second, I’d say let the burden of proof be on those who believe there’s an economic way to deal with the roughly 40% of solid waste that isn’t currently being recycled. If they can demonstrate ways of doing so, then such techniques should absolutely be adopted. If not, then waste-to-energy incineration should be given serious consideration.

Fortunately, Ontario has a test case in its own backyard: the Durham York Energy Centre, a waste-to-energy facility that is just being completed. This $286 million facility is projected to process as much as 140,000 tonnes of waste each year and generate approximately 17.5 MW of energy. As operations start up, the rest of the province will be able to witness, first hand, the feasibility of waste-to-energy as a means of dealing with the residue of solid waste after all possible recycling avenues have been exhausted.

The Global Economy

I want to touch on two aspects of the global economy that will affect infrastructure.

The first is that the global economy is likely to grow much more slowly over the next 20 years than over the last 20 years. This is happening for a number of reasons.

First, China’s population is aging very rapidly, and its workforce is actually in decline. This means that virtually all of its future growth will come from productivity growth. Admittedly, this still leaves them with a lot of growth potential, but it also means that their future growth is more likely to be in the range of 5-7% than 8-12%, and will gradually slow even further. The current crash in the Chinese stock market, and the subsequent economic fallout, could cause an even more rapid deceleration in economic growth.

Next, the other major sources of growth are experiencing significant teething problems. India has yet to show the will to cut through its thickets of red tape, and until it does, its growth will remain modest rather than robust. Brazil is sliding back to its socialist ways, and reverting to its old bad habits. As a result, its growth is stalling. Rounding out the BRICs, Russia was never really a growth story, but rather a country that rode high while oil prices were high, and didn’t diversify its economy. Add to this that Russia’s population is in rapid decline, and demographics argue strongly against solid economic growth.

There are other, emerging countries that will boost global growth, many of them in Africa, but they are not yet of a size or importance to matter as much as China and India on their own.

Education Must Change

Next, I want to turn to the importance of education and its infrastructure to Ontario’s future.

The hollowing out of Ontario manufacturing due to globalization, which took place over the past 20-30 years, is largely done, but the fundamental lesson from globalization needs to be remembered: There is now one, world-wide marketplace, and we are competing not only with each other and our American neighbours, but with everyone else in the world as well. The stakes are high, the competition is unforgiving, and there is no going back.

The ultimate implication is that we need a globally superior education system, and that education can no longer end when people cease to be young adults, but must carry on through our working lives. As well, our education system has to take account of the faster pace and the unforgiving demands of a global economy.

Ubiquitous access to the Internet has rendered the memorization of facts of minor importance, while the abilities to perform wide-ranging research, absorb information quickly, ask critical questions, and be creative enough to produce innovative solutions to real-world problems are key. Yet our primary and secondary schools continue to be hobbled by a “back to basics” mentality more suitable to the 19th Century than the 21st. Meanwhile, roughly 75% of budgets for public education are spent on salaries.

In an era when globally competitive organizations are lean and forced to be innovative, this antiquated model needs to be phased out. In particular, education should be customized to each individual student to enable them to approach their greatest potential.

With computers becoming far more capable – I’m hesitant to say intelligent – Ontario could be investing in technologies that allow human teachers to be more effective, working one-on-one with students when they have a problem, while students work in a self-directed fashion under computer supervision most of the rest of the time.

But no matter whether this is done in traditional ways, with teachers, desks, and classrooms, or through technology, Ontario must move its schools to focus on creativity, critical thinking, and customized education rather than lecturing and memorization.

Meanwhile, post-secondary education is experiencing a revolution, with or without the permission of Ontario colleges and universities. Distance learning and online education are becoming commonplace, and the traditional role of the lecturer is under scrutiny. Why should a college employ local teaching assistants, for instance, to deliver lectures when some of the best lecturers in the world can be available online, and when the students can view such lecturers on their own schedule rather than the institution’s?

Tutoring would still be necessary, but even that can take place remotely. And the emergence of MOOCs (Massive Open Online Courses) and online degree and diploma programs indicates that the future of the traditional, ivy-covered campus is very much in question.

I would suggest that Ontario should be focusing on finding the best technological solutions being used anywhere in the world, asking each post-secondary institution to focus on what they are best at doing, and aiming to provide post-secondary education to a much broader audience than at present. Let me take these one at a time.

That technology is often, but not always, replacing traditional post-secondary models is clear and irrefutable. But we should learn from the eHealth fiasco: rather than re-inventing the wheel, we should find who’s doing the best work in this already-well-traveled field, and buy the technologies off the shelf.

Next, we should be prepared to offer not only traditional degree- and diploma-granting programs, but also just-in-time learning for a wide range of fields. In this way, Ontarians can upgrade their skills piecemeal, and often without having to take time off work. Such learning may or may not lead to major credentials, like Masters or Doctorate degrees, but would encourage incremental learning, and credentialing that is focused on specific tasks for workers in the public, private, and non-profit worlds.

And we shouldn’t restrict such learning only to Ontarians. I believe we could make a sound financial case for selling Ontario education – from primary school through graduate studies – around the world. Indeed, I believe we might be able to make Ontario’s education system self-financing. More important, doing so would allow our post-secondary institutions to increase the resources they have available to pursue excellence.

What we should not be doing is building mausoleums to pander to the egos of rich donors in support of 19th Century education.

Human Longevity & Health Management

According to Statistics Canada, life expectancy in Canada for men rose from 59 to 77 years in the 80 years from 1920 to 2000, while women did even better, going from 61 to 82 years. That means Canadians saw an increase in life expectancy of almost 3 months per calendar year, on average, through most of the 20th Century.[7]
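The “almost 3 months per calendar year” figure follows directly from those numbers; as a quick illustrative check of the arithmetic (using only the figures quoted above):

```python
# Back-of-envelope check of the life-expectancy gains cited above
# (Statistics Canada figures: men 59 -> 77 years, women 61 -> 82 years,
# over the 80 years from 1920 to 2000).

def gain_in_months_per_year(start, end, span_years=80):
    """Average life-expectancy gain, in months per calendar year."""
    return (end - start) / span_years * 12

men = gain_in_months_per_year(59, 77)    # 2.7 months per calendar year
women = gain_in_months_per_year(61, 82)  # 3.15 months per calendar year

print(f"Men:   {men:.2f} months per calendar year")
print(f"Women: {women:.2f} months per calendar year")
```

Men gained about 2.7 months of life expectancy per calendar year, and women about 3.15, which is where the “almost 3 months” average comes from.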

Much of this was due to advances in health care, particularly in childbirth. However, other, related advances were also helpful, such as the refrigeration of food, and the identification of antibiotics.

The future holds even greater promise. Researchers now have a rapidly expanding understanding of human genetics, how diseases affect the body, and how environment and heredity interact to help, and harm, health. As a result, we can seek cures and treatments deliberately rather than by accident, or by trial-and-error.

Meanwhile, technology is making it possible to do things earlier eras would not have believed possible. We are already growing replacement parts for the human body, from kidneys to heart valves, and the expectation is that we will eventually be able to replace virtually every human organ from an individual’s own stem cells (with the possible exception of the brain itself). Hence, if your heart is wearing out, or has incurred significant damage from a heart attack, we can grow you a new heart from your own tissue, and replace the old one with it.

We are learning how killers like cancer or diabetes work, and finding ways of stopping them. We are starting to be able to design vaccines, antivirals, or pharmaceuticals for a specific purpose, such as stopping or curing previously incurable diseases, such as SARS or Ebola. We may even be able to come up with a vaccine to prevent the common cold.

Meanwhile, wearable computers, with computer genies or avatars, will be able to monitor our health, heartbeat-by-heartbeat. This will let us intervene much earlier than we can today. We’ll be able to significantly improve outcomes when a crisis develops, such as a heart attack or stroke, or when a disease, such as influenza, is developing. Indeed, precursors are already emerging in the marketplace that can perform some of these functions, from the Nike+ app that monitors your heart rate and running pace, to InteraXon’s Muse system, which monitors brain activity and provides feedback to help the user reach a calmer state of mind. Systems like these, and many others, will continue to expand in scope until they become wide-ranging health and well-being monitors.

As well, the exchange of data will supercharge medical research. Individual health information (stripped of personal identifiers) will be shared between each person’s wearable computers, and regional, provincial, national and global health databases. This will provide a massive amount of searchable data that will enable computer intelligences and medical researchers to identify risk factors, genetic strengths, and help locate cures for existing and emerging diseases. (For more detail on this, see the FutureSearch blog post, “Health Care to the Year 2035”.[8])

While all of this is wonderful news, it does have two implications for our health care infrastructure. First, people will be living longer, perhaps decades longer, than they have in the past. And second, this could add to the overburdening of the health care system. Accordingly, in planning the future of health management infrastructure in Ontario, it will be critical to identify the most cost-effective means of health management.

Cost-Effective Health Management

Cost-effective health management will be very different from traditional health care. The practice of medicine should make steadily increasing use of technologies, such as IBM’s Watson computer intelligence, to assist health care providers in making faster, more accurate diagnoses, to map out an evidence-based health management regime for every Ontarian that needs it, and to do so using the least-expensive means possible.

This approach may lead to non-traditional approaches that raise the hackles of many groups involved in today’s health care system. Demographics imply that we will have fewer doctors, and their services may be too precious for them to continue to act as the health system’s gatekeepers. And it may be that hospitals should be avoided unless there is no alternative that will serve. This is because hospitals are enormously expensive, and because they serve as an inadvertent breeding ground for infection, especially antibiotic-resistant bacterial infections.

In place of these traditional entry points to the health care system, it may be that money should be invested in clinics that specialize in initial visits (i.e., gatekeepers), staffed by nurses or physician associates and supported by computer diagnostic systems; and others that specialize in, and create assembly lines for, in-demand procedures, like endoscopies, knee, hip, or retina replacements, or the treatment of hernias. Such clinics would cut waiting times, improve outcomes by having procedures done by doctors who specialize in them, and relieve the pressure on the rest of the health system by dealing with the most demanded procedures.

In turn, this might mean that Ontario should no longer build or expand hospitals for treatment (as opposed to research) except in locations that are significantly underserved. What is clear is that we will not be able to afford the traditional answers that have grown up, organically, over the decades at a time when cost-effectiveness will be critical to the survival of a taxpayer-funded health care system.

The Widening Tears in the Fabric of Society

The rise of homelessness, and the growth in the penal system have important implications both for the social good, and for infrastructure planning.

What I will not address are the moral implications of these issues. There are people who believe that being homeless or being in jail is a sure sign that someone is a bad, unworthy person. Others believe it means such people are victims who must be helped. I don’t wish to enter into that discussion.

Instead, my concern is whether we are properly allocating the infrastructure investments related to these issues, because both will become more expensive in the future.

In the case of homelessness, there is a very real risk that an increasingly difficult and unrewarding job market will throw a steadily rising number of people onto the streets.

In the case of the penal system, there are two issues. The first is that in a difficult employment environment, having a prison term on your résumé will almost certainly kill your job prospects. In effect, when someone is imprisoned, they become almost automatically unemployable for the rest of their lives. The second problem with the penal system is that aging prisoners require a steadily increasing amount of health care, making their upkeep more and more expensive.

Neither homelessness nor the penal system is of much interest to the general public, but the costs to society of sweeping the problems under the rug are probably high enough to justify a radical revamping of both. Yet part of the problem is that our reactions to these two issues are so close to being knee-jerk that we don’t even collect much data on the costs.


On the subject of homelessness, two American jurisdictions did collect data, and also tried an apparently radical solution: giving homes to the homeless with few, if any, strings attached. One was liberal New York City; the other, conservative Utah. The result?

“Between shelters, jail stays, ambulances, and hospital visits, caring for one homeless person typically costs the government $20,000 a year. Providing one homeless person with permanent housing, however — as well as a social worker to help them transition into mainstream society — costs the state $8,000.”[9]

Yet, there’s a real barrier to this kind of reform, which is public opinion. Most people are opposed to giving homeless people something for nothing, especially if it encourages others to take advantage of the system. We fail to realize that we are implicitly paying what might be called a “homeless tax” by not giving shelter to the homeless.

Therefore, a better solution might be to find a way to have the recipient of such housing contribute something in return. They could, for instance, be offered the opportunity to buy their home through an installment system. Or they might be asked to earn their housing by helping build additional housing.

Ironically, this may actually be harder and more expensive to police, but the politics of something-for-nothing may require it.

The Penal System

There is much more documentation relating to the costs of the penal system, more so in the U.S. than in Canada. In fact, even neo-conservative Republicans in the United States, such as the arch-conservative Koch brothers, have flipped positions, and are now advocating a revamping of the entire legal system, particularly jail sentencing, because the results are so costly, and the system is so ineffective.[10] No thinking person still advocates that “getting tough on crime” is an effective answer.

To pick a particularly stark example of the direct costs of the penal system, New York City’s Independent Budget Office found that “in 2012 it cost the city $167,731 to hold each of its daily average of 12,287 inmates, or about $460 per inmate per day. Undergraduate tuition at Harvard University is $38,891 annually, or $155,564 for a four-year degree.”

In other words, it would be cheaper to send a NYC inmate to Harvard for four years than to lock them up for one year.[11] This is, admittedly, an extreme example. In 2010, for example, the average annual cost of imprisoning an inmate in a U.S. federal prison was US$28,284. In California in 2009, the cost of keeping someone in a state prison was US$47,102.[12]

In Canada, the costs are comparable. A 2012 report from Corrections Canada indicates that it costs an average of C$113,974 to keep an inmate in a Canadian federal prison.[13]

Are there alternatives? Yes, there are, and technology will increase the range and subtlety of these alternatives, as smart computers and wearable computers will be able to monitor the locations and behavior of people convicted of non-violent crimes with increasing sophistication and precision. But we don’t have to wait for technology to bail us out.

The Don Drummond report, commissioned by the Province of Ontario, indicated that it costs $183 a day (which projects to $66,795 a year) to keep someone accused of a crime in jail, compared to $5 a day ($1,825 a year) to keep them on supervised release.
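The annual figures in parentheses are straight projections of the per-day costs; as a simple illustrative check (using only the numbers quoted from the report above):

```python
# Annualizing the Drummond report's per-day costs for jail vs. supervised release.
DAYS_PER_YEAR = 365

jail_per_day = 183       # holding an accused person in jail
release_per_day = 5      # keeping them on supervised release

jail_per_year = jail_per_day * DAYS_PER_YEAR        # $66,795
release_per_year = release_per_day * DAYS_PER_YEAR  # $1,825

print(f"Jail: ${jail_per_year:,}/year; supervised release: ${release_per_year:,}/year")
print(f"Jail costs {jail_per_year / release_per_year:.1f} times as much")
```

On these figures, custody costs roughly 37 times as much as supervised release.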

It’s clear that Ontario should learn from America’s mistakes, and stop looking at incarceration as the only solution for people accused, or convicted, of committing a crime. In fact, a recent Globe & Mail editorial noted that more than half – 55% – of people held in provincial and territorial jails have not been convicted, but are awaiting trial. The editorial concluded that “The system is broken.”[14]

Paying attention to the megatrends relating to these two aspects of society clearly requires fresh, open-minded thinking – and a clear fix on finding better uses of infrastructure spending than on traditional facilities to cope with homelessness and incarceration.

The Rapidly Eroding Job Market

It is much harder for someone to get a job today than it was 50, or even 20 years ago. This is largely due to two factors that have drastically reshaped the job market, one well known and documented, the other widely acknowledged, but largely overlooked: the first is foreign competition, and the second is domestic automation.

Foreign competition has hollowed out employment in Ontario’s economy, notably in the manufacturing sector, as Rapidly Developing Countries (RDCs) grew with the emergence of the global economy. In particular, China and India drew tens of millions of jobs away from more expensive, developed countries, including Canada. The result is that it is no longer possible for someone who has no desire to go to college or university to have a friend or family member speak to the foreman at the local factory, and get a job on the line. That just doesn’t happen any more, although it was commonplace in the 1960s and before.

Foreign competition is not going away any time soon. China may no longer be as big a draw for manufacturers as it was, but manufacturing jobs will chase low wages to new places around the world. They are unlikely to return to Ontario because it costs too much to live in an expensive, developed country like Canada.

Meanwhile, even this trend is being disrupted by the other factor at work: automation. As computers continue to get cheaper, faster, and more sophisticated at greater-than-exponential speeds, the work that they can do faster, more effectively, and more cheaply than humans expands at ever-accelerating rates as well. This has been discussed, but its importance has been largely overlooked. And it’s no longer just blue collar jobs that are being replaced by machines. For instance, law and accounting jobs are rapidly being replaced by sophisticated computer systems. Indeed, any job, at any level, that involves routine, doing the same kinds of things repeatedly, is very much at risk of being replaced by computers, robots, and automation.

Although this doesn’t directly relate to any specific infrastructure system, it does affect all of them. If these trends continue – and I see nothing that can stop either of them, short of massive global disasters of some kind – then our governments and our society will need to take a completely different approach to the employment markets.

Moving Our Education System Out of the 19th Century

If we do not change how we educate and equip people for employment, then our governments will see their tax base erode, the divide between the haves and have-nots will expand, economic growth will be stunted by lack of consumer demand, and, based on what has happened elsewhere, we will see a rise in social unrest. And, as an important side effect, this will undercut the investment funds required for infrastructure investments.

What can we do about this? First, we need to move our education system from the 19th Century to the 21st, as described above, including encouraging grown-ups to return for additional educational “top-ups” on a just-in-time, as-needed basis. As well, students in secondary school and higher should be tutored in practical job-seeking skills.

Then we need to be more proactive about helping people find – or create – jobs. At the moment, most job seekers are pretty much on their own, with occasional, inconsistent government help. This needs to become more systematic, and more robust to cope with the labor markets of tomorrow. And such systems should provide access to additional training to allow workers to upgrade the skills they need to find work.

And helping job seekers create their own jobs as entrepreneurs will also be necessary as people will increasingly be responsible for their own careers, whether they sign their own paycheques, or someone else does. This includes providing course materials in the Ontario education system on how to create and run a business, plus systems in the economy to help people start and sustain businesses. Hence, low-cost services that help with accounting, payroll, taxes, plus providing mentors for entrepreneurs, much as CIDA does abroad, would all be valuable. The government doesn’t necessarily need to run such programs, merely make sure that they are available.

Governments should seek to work with private sector employers to accomplish these things, rather than try to do it all on their own. And they should remind employers that if consumers aren’t earning any money, they are unlikely to buy many products. This was something that Henry Ford knew quite well, but which corporate chieftains seem to have forgotten.

© Copyright, IF Research, October 2015.

[1] Worzel, Richard, “Water Is Not the New Oil: The Future of Water, Part I”, FutureSearch blog post.

[2] “Water Availability”, Environment Canada website.

[3] Poon, Linda, “Bill Gates Raises A Glass To (And Of) Water Made From Poop”, NPR website.

[4] Harper, Tim, “Can graphene make the world’s water clean?”, Agenda website.

[5] Pierce, Alan, “Models of Sustainability: Sweden Runs Out of Garbage”, Pachamama Alliance website, 25 Nov. 2012.

[6] Kushal, Neeraj, “Growth vs garbage”, The Times of India website, 28 Apr. 2012.

[7] Statistics Canada website.

[8] Found on the FutureSearch website.

[9] Bertrand, Natasha, “Utah found a brilliantly effective solution for homelessness”, Business Insider website, 19 Feb. 2015. See also Surowiecki, James, “Home Free?”, The New Yorker magazine, 22 Sept. 2014, from their website.

[10] Goodwin, Liz, Yahoo! News website, 12 Nov. 2014.

[11] “Report: Annual NYC inmate cost exceeds four years at Harvard”, Aljazeera America website.

[12] Hirby, J., “What Is the Average Cost to House Inmates in Prison”, The Law Dictionary website.

[13] Thibault, Eric, “It costs $113,000 a year to lodge a federal prisoner: Report”, Toronto Sun, 28 February 2012.

[14] “Most of Canada’s prisoners have never been convicted of anything. Why are they in jail?”, Globe & Mail editorial, 17 July 2015.


The 7 Megatrends that Will Affect Tomorrow’s Infrastructure

by senior futurist Richard Worzel, C.F.A.

I was commissioned by the Residential and Civil Construction Alliance of Ontario to produce a report by September 2015 on the future infrastructure needs of the Province of Ontario, Canada. The conclusions of my report apply to the United States as well as Canada, and, indeed, to most of the world’s developed countries.

Because of the length of this report, I’ve divided it in half. You can find the second half here.


Infrastructure is the set of systems that supports our way of life, and includes things like roads, transit, water and sewer systems, communications, electric power, garbage disposal, health care, housing, the penal system, and education.

It is a subject totally lacking in sex appeal, yet absolutely necessary to our lives.


In preparing this report, I came to three primary conclusions about the future of infrastructure investments in Ontario:

  1. We will have to address the massive underinvestment of the past several decades, as well as prepare for the growing needs of Ontario’s future. If we do otherwise, the costs to society will be higher than the costs of investment. We will pay either way, but we will be better off if we make the investments needed. There are always people who will take a populist stand, arguing against raising taxes for any purpose. They are either ignorant of the costs of inadequate infrastructure, or deliberately advocating something they know to be harmful in order to gain a selfish, political advantage.
    Some argue that governments cannot be trusted to use tax funds effectively. This is a reasonable argument, and, with the scarcity of funds that I expect we will experience in future, it is vital that we make good use of any funds earmarked for infrastructure. Yet, while finding ways of making sure results are measurable and transparent, and that those responsible are accountable, is entirely appropriate, refusing to allocate money for infrastructure investment is simple-minded, selfish, and, ultimately, self-defeating.
  2. We will be stretched to find the money to make the investments that we must make. In particular, demographics, notably the aging of the Boomers and the cost of their health care needs, threatens the financial solvency of Canada’s government-sponsored health care system. Climate change will make us spend money in places we don’t wish to. The global economy will force us to be lean and effective, giving us no cushion for bad planning, or careless investing. And the rapidly mutating job market threatens the underpinnings of our economy, as well as the very fabric of our society.
    We will have to plan carefully, allocate funds on the basis of real, measurable needs as opposed to political expediency, and use means of ensuring that the taxpayer is not left on the hook for sloppy implementation, or unreasonable cost overruns.
  3. The infrastructure systems that we have used in the past may be too expensive to use in future. Accordingly, we must seek new solutions to infrastructure needs when such solutions can be shown to be more cost-effective. In particular, we need to look at new possibilities being brought forth by ever-accelerating technologies for ways to do things more effectively, and with less money.

The raw necessity of investing heavily in infrastructure should drive us to find better ways of doing things in a time when resources will be scarce. In a very real sense, we will invent our future. We should work hard at doing it well.


Why and How We Need to Make Major Infrastructure Investments

As the human race has changed, so have the systems we need for our societies and ways of life. We no longer need caves, or horse stables in every town, or extensive canal systems. Our infrastructure needs have changed, and continue to change. And now, with the light-speed acceleration of technology, the changes coming to the Earth’s climate, and the unprecedented aging of society, we and our governments need to respond more quickly, and to think differently about infrastructure, than we have in the past.

As well, we have seriously neglected investing in infrastructure in the past, and will be forced to make up for it, whether we like it or not. According to an article contributed to the Globe & Mail in December 2014, Canada has an infrastructure deficit of between $350 billion and $400 billion.[1]

Deciding what infrastructure to invest in, when to make such investments, and how much to invest are all difficult decisions, but they share one simplifying element: they can all be rendered in financial terms. Making an infrastructure investment has a cost associated with it, plus an expected rate of return to society. (Or, alternatively, not making such an investment imposes a cost on society, which can also be measured or estimated.)

Where the rate of return is greater than the cost, the investment should either be made, or the government involved should provide a clear explanation of why it is preferable to pay the higher cost of not making the investment. In the present low interest rate environment, the cost of investing is probably about as low as it is likely to get, which means we should be aggressively pursuing infrastructure investments right now.
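As a rough illustration of weighing a project’s return against its financing cost, the sketch below discounts a hypothetical stream of social benefits at two borrowing rates. All figures are invented for illustration; nothing here comes from the report itself.

```python
# Hypothetical sketch: compare an infrastructure project's social return
# against its financing cost. All numbers are illustrative assumptions.

def net_present_value(cash_flows, discount_rate):
    """Discount a list of annual cash flows (year 0 first) to today's dollars."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows))

# A project costing $100M today that returns $8M/year to society for 30 years.
cost = -100.0                 # millions of dollars, paid in year 0
benefits = [8.0] * 30         # millions per year, years 1 through 30
cash_flows = [cost] + benefits

cheap_money = net_present_value(cash_flows, 0.03)      # low borrowing cost
expensive_money = net_present_value(cash_flows, 0.08)  # high borrowing cost

print(f"NPV at 3% borrowing: {cheap_money:.1f}M")      # positive: worth doing
print(f"NPV at 8% borrowing: {expensive_money:.1f}M")  # negative: case weakens
```

The same project that is clearly worthwhile at low interest rates fails the test at higher ones, which is the arithmetic behind the argument that now, not later, is the time to invest.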

The likely direction of interest rates in the future, and the steadily rising costs of delaying infrastructure investments clearly indicate that now is a better time to make such investments than later.

But what else about the future will affect infrastructure decisions?


The 7 Megatrends that Will Affect Infrastructure Planning

With the changing needs for, and forms of, infrastructure in mind, I would identify the following megatrends that will affect infrastructure investing in Ontario’s future:

  • Demographics – An aging population has many implications, some of which are daunting.
  • Technology – We’ll be able to substitute technology for earlier, more expensive solutions, as well as do things that were never possible in the past.
  • Climate change & environmental degradation – Mother Nature’s bills always get paid. We must plan accordingly.
  • The global economy – The continuing emergence of a unified, world-wide economy has major implications for Ontario, especially in education.
  • Human longevity & health management – While related to demographics, this factor has major implications that go much further.
  • The widening tears in the fabric of society – The rising costs of the penal system, plus the rise of homelessness, unemployment, and underemployment, have significant implications for Ontario.
  • The rapidly mutating job market – Lifetime employment is long gone, and the future is ever more uncertain, with major implications for society, the economy, and infrastructure funding.

Let’s consider each of these seven megatrends, and their effects.



Demographics

In many ways, demographics is destiny. It is not the only force that drives change in our future, but it is the central one. After all, you can’t have an economy without people.

There are many implications of demographics that will affect the province of Ontario, and its needs for infrastructure. Let’s start with a demographic profile.

The first graph, below, shows the current population of Ontario, distributed by age. The big hump highlighted is (by my definition) the Baby Boom, who are currently between ages 48 and 68.

[Graph: Ontario’s population by age, 2015]

The second graph shows how Ontario population age groups will either increase or decrease in size over the next 10 years. (The groups shown are 5-year age groups: 0-4 years old, 5-9 years, 10-14, and so on.) Hence, the 70 to 74 year old group will increase by roughly 265,000 between 2015 and 2025, for instance.

[Graph: Change in Ontario’s population by 5-year age group, 2015–2025]

What this means is that while several groups will increase in size, such as young children, and 30-45 year olds, the biggest change is going to be the Baby Boomers moving towards retirement age.

These changes have some clear implications for infrastructure:

  • We will need more schools (but, as I’ll point out later, most of these needs will be near the major urban centres).
  • Millennials will be moving into the family formation stage of their lives, which means they will need all of the community infrastructure appropriate to young families, including playgrounds, pediatric care, and the ability to get to and from work, which can be some combination of roads, public transit, and bike trails. At the same time, most of them probably won’t be able to afford homes in the downtown areas of the major urban centres, particularly Toronto, and so will move farther and farther into the suburbs.
  • The number of retired and elderly is going to grow faster than at any time in history, which means the needs of the elderly are going to overwhelm virtually all other infrastructure needs. This is due to three overlapping trends: greater life expectancy; the growing number of “oldest elderly”, being people 80 and up; and the aging of the Boomers. Combined, this makes people 65 and up the fastest growing group in the population. And they are politically potent, more or less getting anything they vote for, and defeating anything they vote against.
  • Where the Boomers choose to retire is going to have a huge impact on communities, transportation, and social services. Some will stay in their family homes for a time, usually in urban centres. Some will sell their family homes for something smaller in other parts of those same urban centres. And some will move to smaller communities, partly in order to harvest the funds tied up in their houses. What we don’t know, at this time, is how many Boomers will make which choices.
  • The costs of health care for the Boomers are going to dominate government finances, eating into funds for all other government activities. In terms of infrastructure, it means that the Government of Ontario is going to have to choose wisely which infrastructure it underwrites, and find cost-effective ways of encouraging others to build infrastructure without government money, but without costing Ontario residents unreasonable amounts of money (think of Highway 407). Careful, outcome-driven planning is going to be critical.
  • What is not shown, but is implicit in the graphs above is immigration. Among large, developed countries, Canada has one of the highest immigration rates and one of the highest proportions of first generation immigrants. As immigrants overwhelmingly tend to settle in the major urban centres, this means that a disproportionate amount of Ontario’s population growth will be in and around the major urban centres, especially in the Golden Horseshoe. This implies continued sprawl, and problems with affordable housing, not only for immigrants, but also people born in Canada who are in the household formation stage of their lives.
  • Contrariwise, Aboriginal peoples have the fastest population growth among non-immigrant Canadians. As such, they should represent a steadily increasing percentage of employed citizens in Ontario society. However, that can only happen if the health, education, and living conditions of First Nations, Inuit, and Métis peoples are significantly improved. Moreover, the recent Truth and Reconciliation Commission of Canada report clearly shows a social and cultural imperative to make amends for generations of mistreatment and neglect by Ontario society and government, along with the other provinces, territories, and the government of Canada. Consequently, projects related to the infrastructure needs of such groups should be placed higher on the political agenda than they would otherwise be. It’s time, and past time, that Aboriginal needs were given higher, rather than lower, priorities.

Another important aspect of demographics is where people will want to live. For more than a century, people have been leaving the rural areas of the world and moving into the urban centres. This megatrend continues unabated, and, if anything, has accelerated here because of Canada’s immigration policies.

All of this implies that there will be more demand for infrastructure in Ontario’s cities – especially in the Greater Toronto and Hamilton Area (GTHA) – and less in the exurban and rural areas of the province. This is evident from the following census maps produced by Statistics Canada:

Change in Population between 2001 & 2006 Census Tallies



Change in Population between 2006 and 2011 Census Tallies


This movement away from exurban areas could leave a lot of towns and smaller municipalities unhappy with the way the Government of Ontario allocates infrastructure funding, especially as such funding becomes scarce. This may, if not provided for, lead to ineffective choices being made for projects that have a much higher political value than real value to the citizens of Ontario.

Consequently, an objective means of setting infrastructure investment priorities will be needed to identify the most important – as opposed to the most politically attractive – infrastructure investments. An independent assessment of the rate of return vs. the cost of each project would offer such an objective measure.

The continuing in-migration into the cities will also create steadily worsening bottlenecks. The 400-level highways, and especially those going into the major cities, are already heavily clogged with trucks bringing all the goods and products needed to support city dwellers. Train and pipeline transport, especially of hazardous substances, must grow to support urban populations, but is being widely opposed because it is perceived as being too lightly regulated, and therefore dangerous.

Urban populations are going to continue to grow, and in areas already heavily developed. As a result, the volume of truck, rail, and pipeline transport heading into, and supporting the cities will also swell. The increase in truck traffic, added to the volume of commuters heading into work, is going to make gridlock and bottlenecks worse, and, in some places, impossible. This will be particularly evident – and difficult – in the GTA, where population is projected to grow by almost 3 million people to 9.4 million by 2041.[2]

More roads are probably not the answer as there is often no room for additional, or expanded, roads. Different, often unpopular, choices will be necessary, such as congestion tolling, which is spreading among major urban centres around the world, and has been shown to work. And drivers must be given workable alternatives to encourage them to leave their cars. Here we should draw on the experiences of congested, cramped, densely populated parts of Europe, where rapid transit and bike lanes are created as parallel infrastructure systems in order to allow the largest number of people to move with the smallest possible footprint.

One partial solution will be to encourage the development and use of telecommuting by GTA businesses. As wireless and high-speed Internet technologies continue to develop, and as online conferencing tools become more sophisticated, this may allow GTA businesses to grow without requiring as much physical commuting.

Alternatively, planners could find ways of encouraging a greater decentralization of activity, spreading the commuting load around the major centres rather than continuing to find ways to funnel more people into relatively concentrated areas. This might mean promoting UK-style “new towns” supported by low-cost, high-speed transport, and offering more affordable housing outside of core areas. The intra-regional transit plans in place in the GTHA are an important step in that direction, but more would need to be done.

What will almost certainly break down in the next 10 years and beyond are the attempts to shoehorn ever more workers into the GTA’s downtown core, which has already created costly and exasperating gridlock. If that pattern is unworkable now, with approximately 6 million people in the GTA, it will be impossible as the region exceeds 9 million. New solutions have to be found through a variety of means.



Technology

Technology will be both a blessing and a curse for infrastructure planners. On the one hand, it will offer the possibility of new, more cost-effective solutions. Among these might be:

  • Remote, smart sensors may mean that visits by seniors (and others) to doctors or hospitals, or visits by nurses to those needing health care in the home, may be significantly reduced. Indeed, I would contend that future advances in health care technology should be seriously studied as an alternative to new hospitals.
  • Autonomous vehicles (self-driving cars & trucks) may – over time – significantly reduce the number of additional roads required to support population growth in the urban centres. Moreover, self-driving trucks may become more widely used late at night, arriving to make deliveries before commuters seek the roads.
    Driverless cars may also be much more efficient at moving through traffic. They could consult with other cars, and with a region’s traffic computer, about the best route from A to B, diffusing congestion and lowering the amount of time – and hence the number of cars – on the road at any given time.
    Autonomous vehicles may also significantly lower traffic collisions, injuries, and fatalities. This would lower the number of emergency vehicles and crews required, and reduce medical expenditures. And such vehicles may also significantly reduce traffic violations – and revenues.
    But I keep saying “may” and “would” rather than “will” because AVs require a major changeover, both in the way we do things, and in the infrastructure necessary to support these new ways. As well, the mix of AVs and human-driven cars on the highways will significantly affect how great the savings will be. A roadway completely devoted to self-driving cars will have a substantially greater capacity than one where 10% of the cars are human-driven, because human drivers force all vehicles to leave more space, and constrain speeds and acceleration. A complete changeover is unlikely to happen quickly.
  • As mentioned above, telecommunications continues to grow and expand, and its importance will grow apace. Indeed, it is no longer a frill for early adopters, but is now an absolute necessity for almost everyone engaged in the economy, as well as for most people in society. To date, Internet access has been provided almost exclusively by private sector suppliers. However, widespread, fast Internet service is too important to be left solely to a private sector oligopoly. Private suppliers will almost certainly remain the backbone of Internet service, but other alternatives are emerging that should be considered, and which may provide a spur to private sector offerings.
    Chattanooga, Tennessee created an Internet infrastructure through their municipal electric power utility almost as an afterthought in the construction of a smart power grid. “The Gig”, as it’s called, offers residents and businesses Internet speeds of 1 gigabit per second, or about 50 times faster than the U.S. national average. As a result, “Chattanooga has gone from close to zero venture capital in 2009 to more than five organized funds with investable capital over $50m in 2014 – not bad for a city of 171,000 people.”[3] And, predictably, cable and telecom Internet service providers are petitioning the U.S. Federal Communications Commission to block such developments.
    Elected officials in Ontario’s exurban areas are anxious to see broadband extended to their communities. “Building broadband is as important as paving roads and building bridges,” one leader was cited as saying.[4] And the $170 million Eastern Ontario Wardens’ 4G broadband initiative, which is a P3 project involving federal, provincial, and local governments, is one example of how such services might evolve. Fast Internet service can provide important tools in a wide variety of applications, and in many sectors of the economy. Telecommuting has been mentioned. Distance education will be discussed below. Telehealth is a rapidly expanding field that can help stretch scarce resources in the health care system. Moreover, new applications always emerge from more powerful communications tools, and these can add significant value to Ontario’s economy, and make it more attractive as a place to do business – just as has happened in Chattanooga.
  • Computer monitoring systems, notably Fog computing[5], may allow us to identify pipeline breaks almost immediately, and dispatch crews to fix them before they cause significant damage. This could not only allow leaks in oil pipelines to be identified early, but also in water, storm water, and sewage pipes. At the moment, an unknown, but significant, percentage of the water piped to Ontario residents is lost due to leaking underground pipes.
  • Three-dimensional printing[6] is a commonly used phrase to describe a range of different technologies that will be as revolutionary in the real world as the Internet is in the virtual world. These technologies will have some obvious effects, such as changing some parts of the manufacturing industries from mass production to mass customization, or eliminating some mass production in distant locations in favour of local production. But it will also have more subtle effects, such as changing distribution industries, like shipping, trucking, rail transport, and last-mile delivery services. Hence, the plans for an object might be bought for what amounts to a royalty payment to the originator, but produced locally, either at home, or at a local store like a Canadian Tire or Home Depot, rather than shipping it from, say, China to Ottawa.
    But 3D printing has even broader implications, notably in printing organic materials. It may, for instance, be possible to print food directly from constituent compounds, leading to the development of steaks without steers, or food without farms. This has longer-term implications for food transportation, safety, and nutrition that will – over time – affect public services.
    Of course, we don’t know yet whether producing food without farms is financially attractive; it’s too early to tell. And there is also the consumer acceptance issue: Will consumers buy food that is identical in almost all measurable ways to farm-grown food – or will such food be thought of as undesirable, like GMO foods? Ironically, printed foods may have unexpected allies: PETA, People for the Ethical Treatment of Animals, believes that meat produced in this way is ethically preferable to raising steers for slaughter.
  • In the biosciences, we are now developing the ability to grow replacement organs, like hearts, lungs, kidneys, livers, and so on, from a recipient’s own stem cells. We are also developing the knowledge that will gradually enable us to “turn off” cancers and some chronic diseases, and lock out infectious diseases. Such developments will further extend life expectancies, with significant consequences for both individuals and society. These developments will be discussed in more detail later.
  • Alternative energy sources combined with steadily improving energy storage (“battery”) technology will create significant challenges to traditional electric power generation and grids, and may destroy their economic feasibility.[7] Rooftop solar power, in particular, threatens to be a game changer, as the price per kilowatt of capacity is dropping at speeds approaching those of Moore’s Law. In places, rooftop solar prices per kilowatt-hour are already lower than those of conventional electricity generation, even without including transmission or other costs. These changes threaten to disrupt the business plan of Ontario Power Generation within the next five years, before smoothly functioning alternatives are widely in place. This threatens to create power disruptions.
    OPG, as well as other electric power utilities, should take this developing trend seriously, and find ways to turn it to advantage. If they try to ignore it or block it, it could well destroy them as time and economics are on the side of the disruptive technologies.
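The capacity point in the autonomous-vehicle item above can be illustrated with a toy headway model. All of the gap and speed values below are my own rough assumptions for illustration, not figures from the report, and the linear blending of gaps is a deliberate simplification.

```python
# Toy headway model of lane capacity under a mix of autonomous and
# human-driven vehicles. All parameter values are illustrative assumptions.

def lane_capacity(speed_kmh, av_share, av_gap_s=0.9, human_gap_s=1.8,
                  car_length_m=4.5):
    """Vehicles per hour on one lane, given the share of AVs (0 to 1).

    Humans are assumed to need a 1.8 s following gap; closely
    coordinated autonomous vehicles only 0.9 s.
    """
    avg_gap_s = av_share * av_gap_s + (1 - av_share) * human_gap_s
    speed_ms = speed_kmh / 3.6
    headway_s = avg_gap_s + car_length_m / speed_ms  # seconds per vehicle
    return 3600 / headway_s

for share in (0.0, 0.5, 0.9, 1.0):
    print(f"AV share {share:>4.0%}: {lane_capacity(100, share):,.0f} vehicles/hour")
```

Under these assumed numbers, even a 10% human-driven share cuts a lane’s throughput noticeably relative to an all-AV lane, which is why the savings depend so heavily on how complete the changeover is.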

So, it’s clear that the potential gains in using alternatives to today’s infrastructure systems will be remarkable.

However, cost is also a major issue, and for two distinct reasons. First, new technologies always start out being expensive before they come down in price, sometimes at startling speeds. This actually is solvable as long as planners are willing to wait for a technology to prove itself, and to become affordable. There aren’t usually a lot of prizes for being the first adopter of a new technology.

The second, and more difficult, cost problem is the cost of switching from the techniques we use now to the new techniques that are emerging. Hence, while autonomous vehicles may lead to massive cost savings over the long run, they would require hefty up-front investments for them to achieve those savings.

There will be other ripple effects relating to technology that I’ll deal with later – notably its effects on labour markets.

To be continued…

© Copyright, IF Research, September 2015.



[1] Swedlove, Frank, “Government alone can’t fix Canada’s infrastructure deficit”, Globe & Mail, 5 Dec. 2014.

[2] Ontario Ministry of Finance website, “Ontario Population Projections”.

[3] Rushe, Dominic, The Guardian website, 30 Aug. 2014.

[4] Van Brenk, London Free Press, 2 Feb. 2015.

[5] Worzel, Richard, FutureSearch blog post, “Fog: What’s Next in Computing”.

[6] Worzel, Kit, FutureSearch blog post, “Printing the Future: The Implications of 3D Printing”.

[7] Worzel, Richard, FutureSearch blog post, “Deadly Shock: The Coming Devastation of Power Utilities”.


What’s on Your Plate: Feeding the Future

by futurist Kit Worzel

The United Nations report on world population estimates a total of 9.6 billion people on the planet by 2050. That’s an increase of more than 2 billion in the next thirty-five years. There have been discussions on how to feed such a large population, but many of those discussions fall short on certain key points.

First off, demographics have changed in such a manner that people are not only consuming more calories, but also higher-cost calories, particularly meat and dairy products. The consumption of meat has increased seven-fold since 1950, while the world population has not quite tripled. And meat is the most expensive form of food we consume: animals must be fed for months or years before slaughter, consuming massive quantities of feed before becoming food themselves. Red meat tends to require 5 or more kilos of feed per kilo of meat produced, with somewhat more favorable ratios for poultry and fish.
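This feed-conversion arithmetic can be made concrete with a small sketch. The ratios and the sample diet below are round-number assumptions of mine for illustration, not figures from the article.

```python
# Illustrative arithmetic: how feed-conversion ratios multiply the
# resources behind a meat-heavy diet. All numbers are rough assumptions.

FEED_PER_KG = {        # kg of feed per kg of edible meat (assumed)
    "beef": 7.0,
    "pork": 4.0,
    "poultry": 2.5,
}

def feed_required(diet_kg):
    """Total kg of feed implied by a per-person yearly meat diet."""
    return sum(FEED_PER_KG[meat] * kg for meat, kg in diet_kg.items())

# A hypothetical consumer eating 25 kg beef, 15 kg pork, 20 kg poultry a year:
diet = {"beef": 25, "pork": 15, "poultry": 20}
print(f"{feed_required(diet):.0f} kg of feed per person per year")  # 285 kg
```

Sixty kilos of meat on the plate implies several times that weight in feed grown, shipped, and consumed upstream, which is why a global shift toward meat multiplies total resource demand far faster than population growth alone.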

Second, the climate is changing. This is quite evident, and California is an example of what could happen worldwide. A decade ago, California was the food basket of America. With the ongoing drought now in its fourth year, farmers in California are cutting back, closing their businesses, or converting to less water-intensive crops, moving away from avocados, almonds, and citrus fruits. With climate change being a world-wide issue, farmers all around the world will have to adapt to new climate conditions in order to keep producing.

Not only will we need to plan on feeding 2.3 billion more people, but also on providing much more resource-intensive food. This will multiply the resources needed by the equivalent of 2-3 times, all while coping with the disruptive effects of climate change. The question is: How can we manage it?

Green Eggs and Ham

Meat substitutes are nothing new. They’ve been on the market, in various forms, for over a thousand years, with mentions of tofu as far back as 965 CE, and a vegetable sausage first mentioned in the Western world in 1852. But substitutes that taste like, and, more importantly, have the texture of, meat remain no more than a work in progress.

Most meat substitutes only please vegetarians, and fail to impress people who actually eat meat. But the Beyond Meat company has managed to produce a product that they claim does just that. Now, I’m not a food expert, or qualified to judge this product. But Alton Brown, world-renowned chef and TV personality, is, and he has judged it: he was not overly thrilled with the taste, but praised the texture. He said that the product, as is, could easily replace chicken in about 30% of recipes out there, and has the advantage of having no saturated fat or cholesterol. If this meat substitute can impress one of the biggest names in food today, imagine what it will be able to do within thirty years.

Beyond Meat is already widespread, sold in a major retail chain in 39 American states, the District of Columbia, and Vancouver, BC. It’s completely vegetarian, and represents the start of a partial solution, through substitutes, to the high demand for meat.

Tea, Earl Grey, Hot

While the replicators of Star Trek are still just science fantasy, 3D printing has come to food in a big way. Right now, it’s slow and has a host of other issues, but as technology rapidly improves, this is becoming a viable option.

High-end restaurants are already using 3D printers to make fantastic designs from sugar and chocolate as unique desserts, printing off things that no pastry chef could make. At the other end of the scale, certain nursing homes in Germany are using 3D printers to deliver a product called Smoothfoods to residents who can’t chew solid food anymore. It’s an alternative to purees, which tend to be unappetizing. Smoothfoods look much more real, but have the same consistency as a puree, thereby increasing appetites, and helping to prevent under-eating and malnutrition.

Another bonus is the ability to incorporate disliked, though nutritious, ingredients invisibly into conventional foods. Things like duckweed and mealworm are packed with nutrients, but the taste and texture – and the thought of eating them – put people off. Adding them to something like shortbread cookies makes for a pleasant eating experience, coupled with the benefits of those horrible tasting and sounding additions. Considering that even in the developed world, malnutrition is still an issue, any chance we have of getting people to eat healthy, even if we have to trick them into doing so, is a good one.

Lastly, 3D printing of food requires no skill. So in future, when you come home to make supper, you will simply select the food you want from your computer, send it off to your printer, wait for it to print, and you’re done. You’ll be able to program a pizza at home with no more effort than ordering for delivery.

You can always tell a happy motorcyclist…

It won’t just be folks who ride Harleys that have bugs in their teeth if this next idea comes to fruition. Entomophagy, or eating insects, is already a source of protein for more than a third of the world, but certain green factions are pushing for fried grasshoppers to be sold on every street corner. The arguments are strong: to produce 1 kg of usable meat requires between 7 and 20 kg of feed for a cow, depending on breed and type of feed, but only 2.1 kg of feed for crickets. Cricket meat is also lower in saturated fats and cholesterol, and seems to be a greener alternative.
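The feed-efficiency argument is easy to check with a quick calculation. This is only a back-of-the-envelope sketch using the figures quoted above; the `feed_needed` helper and the label names are ours, not from any published model:

```python
# Hypothetical comparison of feed-conversion ratios, using the figures
# quoted in the text: 7-20 kg of feed per kg of usable beef (depending on
# breed and feed) versus 2.1 kg of feed per kg of cricket meat.
FEED_PER_KG = {
    "beef (best case)": 7.0,
    "beef (worst case)": 20.0,
    "crickets": 2.1,
}

def feed_needed(meat_kg: float, source: str) -> float:
    """Feed (kg) required to produce meat_kg of usable meat from a source."""
    return meat_kg * FEED_PER_KG[source]

for source in FEED_PER_KG:
    print(f"{source}: {feed_needed(100, source):.0f} kg of feed per 100 kg of meat")

# Even against the best-case cow, crickets need roughly 7 / 2.1, or about
# 3.3 times, less feed per kilogram of meat produced.
```

Scaled up to national meat consumption, that gap in feed (and in the land and water behind the feed) is the core of the environmental case for entomophagy.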

Of course, the natural reaction of the vast majority of the world when they see a bug on the plate is to scream and try to get rid of it. The disgust reflex is hard to overcome, so food producers are looking for a work-around. Producing cricket flour, or converting bug meat into innocuous cubes, is a start, but some look to historical precedent.

More than a century ago, lobster was considered low-class, and it was illegal to serve it in prisons, as it was considered cruel and unusual punishment. Now it’s one of the most expensive things on the menu, after receiving one hell of a PR makeover, and there are those who seek to do the same for bugs. It may be a hard sell today, but if the choice is between eating insects or starving, then pass the grasshoppers.

It’s alive, IT’S…not quite?

Lab-grown meat was in the news a few years back, when a team from Maastricht University in the Netherlands produced a burger cultured from beef muscle tissue. The flavor disappointed the designated tasters, but the texture was spot-on, which was the more important issue. The team that developed the burger says it can add fat to the culture to make it more flavorful. Of course, the other memorable part of the story was that a single 5 oz burger cost $330,000 in US funds, so don’t expect to order one from a fast-food joint anytime soon.

One of the best parts is that it’s real beef, and you don’t have to kill a steer. The folks in the lab can play with the level of fat to make it taste right, and also determine how healthy it is. And they’ve managed to lower the price: the proof of concept may have cost almost a third of a million dollars, but the cost of that same 5 oz burger is now down to $12. What they haven’t yet managed to do is produce it quickly or in large quantities, and they still estimate it will take decades to iron out the process.

Something said for doing it the old-fashioned way

All of these techniques and technologies are interesting, but none of them can work without resources. We still need farms to grow the produce used, particularly for 3D printers and meat replacements. None of the plans mentioned above are enough on their own. If anything, they are enhancements of a more traditional farming paradigm, instead of replacements. Farming will change, that is unquestionable, but it will still exist. We won’t feed 9.6 billion people with crickets and lab-grown meat alone. Farming will become less labor intensive, more frugal with water and electricity, and will likely have many other changes as well, but that’s a topic for another blog.

Making room at the Table

The bottom line is that we have the resources and know-how to produce the food needed to not only feed this growing planet, but keep everyone happy in doing so. There will still be challenges in growing the food, and distributing it, but the hurdles we have yet to face seem smaller in light of a world where the perfect-tasting cricket and tofu burger is just the push of a button away.

© Copyright, IF Research, September 2015.



What’s Wrong with Medical & Drug Research?

by senior futurist Richard Worzel, C.F.A.

Suppose you were a high school teacher who had just taken a job at a new school, and had agreed to take over as coach of the basketball team. You look at your new school’s team record, find that it’s pretty mediocre, and decide you want to try something new. New school, new ideas – why not?

After looking around and thinking things over, it occurs to you that perhaps there’s a relationship between the kinds of shoes the players wear and how well a player does – and hence how the team fares. After all, a casual observation of the kids on the floor seems to show that those who have better shoes score more.

Since you have such a small sample of kids, you reach out online and get data from dozens of schools around the country, outlining what kinds of shoes their players wear, and how well they perform.

Once you start to analyze the data, the results are confusing. Some schools provide shoes, which are all the same brand, and the players do particularly well, but other schools that do the same do not. Some brands seem to have a slight correlation with good performance, but nothing that’s really enough to hang your hat on. There are hints of some kind of relationship, but nothing really powerful. In confusion, you take your results to one of the science teachers, whom you have befriended since your arrival.

She looks over the data, asks a couple of questions, then says, “Why don’t you look at how tall the players are instead of what shoes they wear?”

In my view, medical research is like that.

Is Butter Bad for You?

You’ve undoubtedly seen conflicting reports about food, for instance. Butter’s bad for you, says one, because it has saturated fats that increase bad cholesterol; you should stay away from butter and eat margarine. No, says another, margarine contains trans-fats and chemicals; butter’s natural and better for your body, so you should eat butter. You’re both wrong, says a third study: cholesterol is irrelevant, it’s all been blown out of proportion, so it doesn’t matter whether you eat butter or not.

None of these kinds of studies talk about the relationship between individual genetics and the studies that are performed. Yet, in my opinion, that’s like ignoring the height of basketball players in trying to identify the factors that relate to success in the game.

Suppose, for instance, that one group involved in a study on the effect of butter happens to include a lot of people who respond badly to butter, for whatever reason. That study will conclude that butter is bad for people.

Another study, with a different group of people, may have accidentally included people whose reactions to butter are all over the spectrum. This study will conclude that butter is irrelevant to health.

And a third group may have an unusual number of people who respond well to butter, or who respond badly to margarine. This study, then, will conclude that margarine is bad, and butter is good.

All of these studies would miss a fundamental issue: the importance of genetic differences. And that will invalidate their findings. Worse, from a layperson’s point of view, it gives contradictory signals, and erodes confidence in medical research.

The Implications of Genetic Testing

Genetic testing is still in its infancy, so that even though it has already taken enormous strides, there is still a tremendous amount we can and eventually will learn from it. It will take time for research to come up with clear indications of how particular foods interact with an individual’s genetics, but eventually you’ll be able to get an analysis of your DNA (for less than, say, $100), that will tell you how different foods will affect you.

Some foods (like leafy, green vegetables) will be great for you, and are ones you should eat regularly. This will be your “A” list. Some foods (like fish, chicken, and whole grains) will be good for you, but should be eaten in moderation (the “B” list). Some foods, like hot fudge sundaes, red meats, and white breads, you can eat occasionally, but shouldn’t eat often – this is your “C” list. And some foods may be positively harmful to you, say if you have an allergy to nuts or dairy products, or a food sensitivity to things like wheat gluten or corn. This is the “X” list, and you should always avoid everything on it.

Each list will be different for each person, although there will be large areas of overlap. I doubt, for instance, whether hot fudge sundaes will be on anyone’s “A” or “B” lists. I suspect that genetic variations will probably show the greatest differences in each person’s “X” list, which spells headaches for foodservice organizations, which already have long lists of allergies of which they need to be aware.

The same will apply to other environmental factors, such as the adhesives used in laying carpet, or particular kinds of trees, flowers, or aromas. This may eventually change how you decorate your house, or even which materials you use to build it.

As a result, genetic analysis will affect a broad range of industries, from farming through foodservice, to builders and construction companies, to schools and public institutions, and many more. In fact, I suspect that the more we learn about our genomes and how they work, the more aware we will be of things that have affected us, perhaps without us ever knowing about them.

Imagine, for instance, that we didn’t know that some people are allergic to ragweed or pollen. We’d be mystified by their symptoms. It’s this kind of revelation that we will consistently stumble upon as we learn more about how our bodies (or, more accurately, our biomes) interact with our environment. It will be both revolutionary, as new discoveries lead to blazing insights, and evolutionary, as we learn how to make adjustments in diagnosis and treatment for ever-smaller groups of individuals.

But one industry is already being shaken to its core: pharmaceuticals.

What’s Ahead for Big Pharma

We already know, from drugs like Herceptin, that genetic testing can make an enormous difference in the value and effectiveness of a drug. Herceptin can be very effective for certain kinds of breast cancer – but only if the patient has a particular genetic pattern. Otherwise, the only thing Herceptin will do for them is to empty their wallets.

The ironic part about Herceptin is that it was originally slated to be dropped because it wasn’t effective enough across a broad spectrum of patients. It was developed by drug company Genentech in cooperation with the University of California at Los Angeles (UCLA). Yet, once prospective patients were screened for a particular receptor, the results were remarkably improved. Hence, a drug that was going to contribute only to the cost of failed drug research wound up being a big money maker for Genentech and UCLA.

But until very recently, Herceptin was largely an exception. There are a few, but only a few, other stories like it. And this is largely because genetic screening didn’t fit the business model of Big Pharma.

Until they accepted the inevitable, major drug companies needed to sell millions of doses for billions of dollars with big, blockbuster drugs. They more or less had to do this because they are so big that smaller drugs wouldn’t move the needle on their revenues and profits. Without big increments in income, Wall Street, with its minuscule attention span, gets restless about drug company results, and hence with drug company managements.

The problem, therefore, with genetic screening is that it can take a drug that either isn’t very effective, or has unacceptable side effects – the two biggest reasons why drugs are dropped after years of development – and turn it into a modest success. Such drugs, once given only to appropriately screened patients, will either prove to be very effective, or will not have anywhere near the side effects they would have on a broader group of patients.

But Wall Street isn’t interested in modest successes. The problem is that the screened drugs will sell to a much smaller potential market, with tens of thousands of doses bringing in tens or hundreds of millions of dollars in revenue – which isn’t enough. Yet, despite this, some drug companies are changing as they get into synch with the broader underlying knowledge of genetics.

Why New Drugs Are So Costly

I believe Big Pharma was ignoring a real opportunity, because revenue is only half of the profit equation.

Coming up with a figure of how much it costs to develop a successful new drug from scratch is difficult and controversial. I’ve seen numbers that range from $800 million to $1.2 billion per drug, and those estimates are several years old. These figures are controversial partly because the drug companies have a vested interest in pumping them up as a means of defending the high prices they charge for new drugs. Hence, a big number is politically more useful than a smaller one.

Yet, no matter who does the counting or what their motives are, it is still true that it’s very expensive to develop a new drug. And one of the biggest expenses in drug development is discarding drugs that seem promising, but don’t prove out for some reason.

Before approval, drugs go through several stages, from lab tests, to animal trials, to four stages of human trials. Each stage of the development and approval process is significantly more expensive than the ones before it. Yet, a drug that fails in human clinical trial Stages III or IV has incurred substantially all of the costs of drug discovery, and usually fails because it is either not as effective with a broad spectrum of people as earlier stages indicated, or because unacceptable side effects (up to, and sometimes including death) are more prevalent than expected.

So drugs that get to Stage III or IV clinical trials have shown that they have a high probability of being valuable. If they fail at that point, they represent the highest costs in the entire drug development process.

Why Lower Sales May Be More Profitable

As drug companies start to do genetic screening when drugs first enter human trials, they will inevitably rescue many of the drugs that would otherwise fail late in the process – and save themselves the huge development costs involved. This means that although their revenues per drug might fall by a significant amount, so, too, would their costs. In other words, their revenues might go down, but their profits could go up.
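To see why lower revenue can still mean higher profit, here is a toy calculation. Every figure below is hypothetical, chosen only to illustrate the logic of the argument; none of them come from an actual drug program:

```python
# Illustrative only: hypothetical figures (in $ millions) showing how a
# genetically screened drug with far lower revenue can still out-earn a
# blockbuster attempt, because late-stage trial failures are avoided.
def profit(revenue: float, cost: float) -> float:
    """Profit is revenue minus total development and marketing cost."""
    return revenue - cost

# Blockbuster model: big revenue target, but the cost side carries the
# full expense of Stage III/IV failures across the whole pipeline.
blockbuster = profit(revenue=1200, cost=1000)

# Screened model: a quarter of the revenue, but drugs are matched to
# responders early, so far less money is sunk into failed late trials.
screened = profit(revenue=300, cost=60)

print(f"blockbuster: {blockbuster}M profit on 1200M revenue")
print(f"screened:    {screened}M profit on 300M revenue")
```

In this sketch the screened drug earns more profit on a quarter of the sales – which is exactly the trade Wall Street has historically refused to value.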

That’s not the model that Big Pharma and Wall Street have liked in the past – but increasingly that’s the way that drug development is going. And research results could be improved further by using novel data searching techniques, such as evolutionary algorithms, to further lower development costs.

But first more drug companies have to accept that the old business model is dying, and that the tools that are now emerging should be used to create a better, more profitable model.

And as our understanding of how our individual genetics interact with our environment deepens, at the very least we’ll find out whether we should be eating butter or not.

© Copyright, IF Research, September 2015.


Paying the Water Piper: The Future of Water, Part III

by senior futurist Richard Worzel, C.F.A.

This is the third blog about water. The first two can be found here and here, plus a blog about California’s water situation can be found here.

Fresh water is vital, and we’re running out of it. Water shortages will disrupt economic plans, sap economic vitality, increase costs, and cause friction and disruptions that range from disagreements between communities to outright war. Yet, despite all this, very few people pay much attention to water. Indeed, as Steven Solomon, author of the frightening book Water, summarizes the issue:

Despite its growing scarcity and preciousness to life, ironically, water is also man’s most misgoverned, inefficiently allocated and profligately wasted natural resource. … Almost universally, governments still treat water as if it were a limitless gift of nature to be freely dispensed by any authority with the power to exploit it. In contrast to oil and nearly every other natural commodity, water is largely exempted from market discipline.[1]

Water is a perfect example of bad habits and bad planning. We have become so used to thinking of water as free that we have fashioned our lives and economic policies on that basis. Yet, even as we run low on water in places, our reaction is to throw temper tantrums because we think of free access to water as a natural right. Of all the commodities that are subsidized by governments, water is the most explosive.

And we are seeing the signs of strain almost everywhere, starting with the Middle East, including Israel, and then on to India, China, most of the African countries, and many other rapidly developing countries as well. But it’s not limited to developing countries; we have problems of our own in the rich world that we will have to deal with.

In America, the Southwest’s growth will depend on its ability to buy enough water rights to allow further growth in its Sunbelt, which attracts retirees from the colder states farther north. The farmers of California’s Central Valley claimed that the historically cheap rates they used to pay for water, plus their almost unlimited access to it, were theirs by right, and complained bitterly when they were forced to sell water rights to allow California’s cities to grow, even though the farmers made billions of dollars in profits by doing so. America’s cities are also facing water problems, not necessarily because of a shortage of fresh water, but because they have consistently underinvested in the infrastructure to gather and deliver it to their residents.

New York City: A Good News Story About What Intelligent Planning Can Accomplish

New York City is a good example of this, but fortunately a good news story as well. The city has surprised everyone, perhaps even itself, by managing to update an ancient and failing water system after decades of delay and dithering. It has been supplied by the gravity-fed Croton system, which opened in 1842 and draws from the Croton River watershed north of the city, and by the Catskill and Delaware systems upstate, whose water reaches the city through Tunnel 1, completed in 1917, and Tunnel 2, completed in 1936.

After being prodded into action by tougher drinking water standards in the 1980s, New York invested something in excess of $7 billion to upgrade their water filtration and delivery systems – and possibly just in time to save the City from a disaster. Some say that had their existing water infrastructure failed prior to their new system coming online, which was a very real threat, it would have produced a disaster far worse than 9/11: the complete inability to provide drinking water to a large fraction of the city’s population, rendering huge chunks of the City uninhabitable. How can a major city exist without drinking water?

The Situation Is Getting Critical

And it’s not just a money issue, either. By overdrawing water from aquifers, we are destroying or contaminating many of them. Some aquifers, when they are overdrawn, actually crack and collapse, and can never again hold anywhere near as much water as they previously did. In other cases, when the water level falls too far, contaminated groundwater seeps in, carrying pesticide runoff from farms and golf courses, plus bacterial contamination from animal farms and human settlements, rapidly reducing the usable freshwater available from that source.

The situation in many parts of the world is getting critical, and it’s hard to generalize where. Some areas, like India and the Middle East, are going to experience widespread water problems and shortages. In America, there may be large areas that are affected, like the Southwest and West, but the problems may also skip from one community to another, according to the state of their water infrastructure and investment. What is clear is that the way we use water now is unsustainable, and we will be forced to spend huge amounts of money to remedy our sublime ignorance of the facts.

The good news is that we have a lot of leeway to deal with water shortages in part because we have been so wasteful in our use of it. Merely improving our efficiency would probably be enough to solve many of our problems.

For instance, in farming, Israel has pioneered the use of drip irrigation, which delivers water directly to a plant’s roots, along with computer monitoring of soil moisture so as to deliver just the right amount of water at just the right time. This, combined with the recycling of wastewater, has allowed Israeli farmers to double or triple crop yields per gallon of water.
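As a rough illustration of the monitor-and-drip approach described above, here is a minimal sketch of such a control loop. The target moisture level, deadband, and liters-per-point conversion are all invented for the example; a real system would calibrate them to the crop, soil, and sensor hardware:

```python
# Minimal sketch of a drip-irrigation decision: read soil moisture, and
# schedule only enough water to bring the root zone back to its target.
# All constants below are hypothetical, for illustration only.
TARGET_MOISTURE = 0.35   # volumetric water content the crop wants
DEADBAND = 0.05          # ignore tiny deviations; don't irrigate for them
LITERS_PER_POINT = 40.0  # liters to raise moisture by 0.01 (assumed)

def liters_to_apply(moisture_reading: float) -> float:
    """Drip volume (liters) for one cycle; 0 if the soil is wet enough."""
    deficit = TARGET_MOISTURE - moisture_reading
    if deficit <= DEADBAND:
        return 0.0
    # Convert the moisture deficit into hundredths ("points"), then liters.
    return round(deficit * 100 * LITERS_PER_POINT, 1)

print(liters_to_apply(0.33))  # within deadband -> 0.0
print(liters_to_apply(0.20))  # deficit of 0.15 -> 600.0 liters
```

The point of the design is the deadband: water goes on only when the measured deficit is large enough to matter, which is how drip systems avoid the chronic over-application of spray and ditch irrigation.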

Likewise, there are other solutions to other forms of water waste, such as New York City’s preservation of parkland in upstate New York to naturally filter water that runs into their reservoirs, rather than to build massive – and more expensive – industrial filtration systems.

Overcoming the Yuck Factor

And I can give you one simple measure that cities can implement (relatively) quickly and cheaply to increase the amount of water available to their citizens: recycle sewage into potable water. Yet the “yuck” factor, as it’s called, is likely to cause politicians to avoid this, even though the technology is well known and has been proven through decades of space flight. As astronauts say, “Yesterday’s coffee is tomorrow’s coffee.”[2]

Yet, of all the problems we face with regard to water, the biggest is that we will have to adopt a very different way of thinking about water, and a major change in government policies around water to solve our looming problems. In particular, we will have to start charging what it costs to supply water, and that will cause major protests among most of the world’s population, including here in the rich world. As well, we will have to invest huge amounts of money in water infrastructure at a time when investment capital is going to be hard to come by. That spells trouble, because we don’t have a choice where water is concerned. We have to spend the money because we can’t live without it.

When water runs out, people behave badly. They never expect the taps to run dry; it’s not a possibility they even consider, so it quickly becomes a major crisis. But it will happen with increasing frequency everywhere, and may prove to be a significant limiting factor to food production, economic activity, and standards of living. Worse, water will become the cause of economic, political, and even military conflict.

What water will not be is free. It won’t even be cheap.

© Copyright, IF Research, July 2015.

[1] ibid. p.376.

[2] Howell, Elizabeth, “Yesterday’s Coffee”,


Overdrawn at the River Bank: The Future of Water, Part II

This is the second of three posts about the future of water. The first can be found here.

by senior futurist Richard Worzel, C.F.A.

Water scarcity has several causes, all of which compound each other. First, of the freshwater available above ground or from precipitation, most is either located far from where people want it, or falls at times and in ways that prevent it from being used adequately, such as in torrential rainfalls or blizzards.

Hence, for instance, Canada is a freshwater heavyweight. With ½ of 1% of the Earth’s population, it has about 7% of the world’s renewable freshwater supplies – about the same amount of freshwater as all of China, which has 20% of the global population. But while 84% of Canada’s population lives within 100 miles of its southern border with the United States, over 60% of its freshwater supplies lie far north of that, flowing toward the Arctic Ocean.

Beyond these naturally occurring barriers to the world’s fresh water, there are more recent developments that are tightening the water supply. Climate change is raising temperatures in many places, which puts more water into the atmosphere through evaporation and clouds instead of on the ground. It also changes rainfall patterns, which means many dry climates, such as California’s, are getting drier. It is changing the monsoon season in India, where agriculture is dependent on getting just the right amount of rain at the right times, so a changed monsoon season will leave it with either not enough rain, or with flooding, which quickly runs off. Either would severely restrain food production.

Water Abuses

Agriculture is a major factor in water mismanagement as well. Although there are a few farmers who use water wisely, in most of the world farmers abuse water supplies, as when they use open-ditch or large-scale spray irrigation schemes, which can lose half of all the water allocated to evaporation before it ever reaches the crops being “watered.”

Our rising standard of living hurts as well, because, as with energy use, there is a very strong correlation between how well we live and how much water we use. The global trend towards greater urbanization hurts, too, as major urban centers not only use more water per person than rural areas, but waste more of it through aging, leaky infrastructure. In some places in India, for example, it’s guesstimated (because nobody really knows) that perhaps as much as 40% of all the water that flows through municipal pipes leaks into the soil before it reaches users. But because water pipes are invisible, no one can see the problems developing or tell precisely how bad they are. And no one wants to pay more taxes to maintain invisible infrastructure. As a result, water mains are in poor repair almost everywhere, in rich and poor countries alike.

And because surface water is not being used well, farmers, cities, and industry are all pumping a steadily increasing amount of water from underground aquifers. The problem here is that in many cases these aquifers hold fossil water, deposited over periods of thousands or millions of years, and are replenished just as slowly, if at all.

The Ogallala aquifer, also called the High Plains aquifer, for example, is one of the largest in the world, and runs through eight American states: Colorado, Kansas, Nebraska, New Mexico, Oklahoma, South Dakota, Texas, and Wyoming. It provides approximately 30% of the water used for irrigation in the United States, as well as more than 80% of the drinking water for the people within its boundaries.[1] Yet, while the water level of the Ogallala is estimated to drop by five feet or more per year in places, it is replenished at an estimated rate of about half an inch per year.[2]

Nor is this unusual. In the year 2000, the residents of the Jordan River basin in the Middle East used 3.2 billion cubic meters of water, but received only 2.5 billion in rainfall, drawing the remaining 22% from the region’s aquifers. (Three of these aquifers are on Palestinian lands in the Israeli-occupied West Bank, and the fourth is in coastal Israel, all of which aggravates the political instability and acrimony in the region.)[3]
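As a quick sanity check on the Jordan basin figures, the 22% follows directly from the usage and rainfall numbers:

```python
# Checking the arithmetic in the Jordan basin example: 3.2 billion cubic
# meters used in 2000, against 2.5 billion received as rainfall.
used = 3.2       # billion m^3 consumed
rainfall = 2.5   # billion m^3 received
overdraft = used - rainfall                # drawn from aquifers instead
share_from_aquifers = overdraft / used     # fraction of total use
print(f"{share_from_aquifers:.0%} of use drawn from aquifers")
```

That overdraft is water the region’s aquifers lose every year, which is why the basin’s hydrology is also its politics.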

But perhaps the clearest example of the problems with the future of water comes from India, which has 17% of the world’s population, but only 4% of its freshwater supplies. There, in the breadbasket regions of Punjab and Haryana, the water table is falling by 3 feet a year because of human withdrawals. In the drier state of Gujarat, the water table is falling anywhere from 50 to 1,300 feet a year. No other country in the world pumps as much water from its underground water supplies. As Steven Solomon, author of the remarkable and distressing book Water, puts it, because of government policies encouraging the careless and profligate use of water, India is, as a nation, committing what amounts to “slow motion hydrological suicide”, with the result that:

Food produced from depleting groundwater is tantamount to an unsustainable food bubble—it will burst when the waters tap out. One warning occurred in 2006 when, for the first time in many years, India was forced to import large quantities of wheat … textile plants have been forced to shut down, and information technology companies have moved away from Bangalore, over water shortages and undependable supplies.[4]

Scapegoating Instead of Acting

India is a clear-cut case where water shortages changed a food exporter into a food importer in order to import “virtual water”, and where industries were forced to move because of water shortages. Partly as a result of growing water shortages, India decided they needed to take action – and did so by fingering foreign corporations as official scapegoats.

Coke and Pepsi are high-profile global companies whose products are water-based. India revoked their licenses to draw water from local supplies on the grounds that they were responsible for the region’s exhausted groundwater. The two companies were eventually forced to adopt a policy that will, over time, become widespread: that of becoming “water neutral.” They have managed to find ways of restoring to community water supplies as much water as they draw from them, which is a remarkable accomplishment, and one that all major corporations should keep in mind in their long-term planning. Water neutrality will join carbon neutrality as a desirable objective, even a mandated one.

But this is also a good example of another aspect of our future: scapegoating, deception, double-dealing, and theft. The history of water is replete with individuals, communities, states, and countries scheming to take possession of scarce water supplies at the expense of their neighbors, from Egypt to Turkey to Texas to China and beyond.

Indeed, at least one military conflict was acknowledged as being caused by water. Ariel Sharon, former Israeli prime minister and a former military commander in the Six Day War of 1967, commented that “In reality, the Six Day War started two and a half years earlier on the day Israel decided to act against the diversion of the Jordan [River]. While the border disputes between Syria and ourselves were of great significance, the matter of water diversion was a stark issue of life and death.”[5]

I have one more blog post about water for now, which I’ll put up next week, about how we are reacting to the coming water shortages.

© Copyright, IF Research, July 2015.

[1] Wikipedia website,

[2] Solomon, p.345.

[3] ibid., p. 401.

[4] ibid. pp.423-4.

[5] ibid. p.402.


Water Is Not the New Oil: The Future of Water, Part I

by senior futurist Richard Worzel, C.F.A.

Water is not the new oil; it’s much more important than that. There are substitutes for oil, but there are no substitutes for water, and we are running dry. As that happens (and it will happen at different rates in different places), the conflicts over water will rise, as will the economic consequences.

In my previous blog post, I talked about the drought that is haunting California, and suggested that, should it continue for a few more years, we are going to see water refugees leaving the state. But California is by no means the only part of the world experiencing problems with water, nor are unusual weather patterns at the core of our water problems. Our real problem with water is us. We have been over-using water supplies for thousands of years, during which time the human population has exploded at exponential rates. The combination of these two factors – population growth and overuse – is what will cause us extreme problems with water.

For instance, both China and India are already experiencing major water shortages. This reduces the standard of living of the people affected; increases the costs of everything they do, produce, and consume; and, if things get really bad, may even derail the economic miracles of these two countries. Water is that important.

Water Comes Second Only to Oxygen

Think about it: after oxygen, water is the thing you need most, most often, and whose absence you feel most deeply. If your town or city has its water supply disrupted, or if your water supply is contaminated, this quickly becomes the number one problem you face. In such a situation, stocks of bottled water vanish from supermarket shelves, and people hoard their supplies. People who are denied water become desperate very quickly; a few days without is usually enough to strip off the veneer of civilization to the point where most people will scheme or steal to get the water they need.

Without water, how would we grow our food? Indeed, this may be the biggest issue in the next two decades. Some commentators say that countries that import food are actually importing “virtual water.” Countries that import food often don’t have enough water to grow the food domestically, so they are forced to buy food that, had they sufficient water, they could grow for themselves.

And we use water for an amazing range of things. Without water, how would we fight fires? How would we cool nuclear reactors? How would we wash ourselves and our clothes? How would we provide basic sanitation? How would we manufacture most industrial products, from ships to steel to microchips to clothing? How would we extract oil and natural gas from the ground? How would we generate most kinds of power, from hydroelectric to natural gas generators to nuclear, without water? How would we run our cars? How would we prepare food to eat?

Going back over the (very abbreviated) list of things we use water for, it might be possible to work around the absence of water, or use other techniques if water’s not available, but it would be dramatically more expensive to do so. So one of the principal effects of water shortages is going to be economic, and that’s what I want to focus on first.

Where It Comes From, Where It Goes

We’re not running short of water; what we are running short of is usable, fresh water. Almost three-quarters of the Earth’s surface is covered in water, but most of that isn’t usable. Of the water on Earth, 97% is saline, mostly in the oceans; freshwater accounts for only 3%. Of that 3%, almost 69% is locked up in ice and snow, and 30% lies below ground (and only a tiny portion of that is available by pumping from aquifers, which I’ll come back to in a later post), leaving only about 0.03% of all the Earth’s water available as fresh, potable water where we need it, typically in rivers, lakes, ponds, and streams.
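These proportions are easy to sanity-check. Here’s a quick back-of-the-envelope calculation in Python, using the rounded percentages above (they are approximations, so the result is approximate too):

```python
# Sanity-check of the water-availability figures quoted in the text.
# The percentages are the article's rounded approximations, not survey data.

total = 1.0                       # all water on Earth
fresh = 0.03 * total              # ~3% of it is fresh
ice = 0.69 * fresh                # ~69% of fresh water is locked in ice and snow
ground = 0.30 * fresh             # ~30% of fresh water lies below ground
surface = fresh - ice - ground    # what's left in rivers, lakes, ponds, streams

print(f"Accessible surface fresh water: {surface:.4%} of all Earth's water")
```

With these rounded inputs, the remainder comes out to 0.03% of all the Earth’s water, matching the figure in the text.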

Now, water is a remarkable substance, one of the most remarkable – even bizarre – in nature. It is one of the very few substances that expands instead of contracting when it freezes. It boils and freezes at a substantially higher temperature, and has a much higher surface tension, than molecules of comparable structure. It is one of the few substances that can act as both an acid and a base. It has been called the “universal solvent” because the large majority of substances are dissolved to at least some extent by water. And it is the one substance that virtually all life on Earth must have to exist.

But one of the most important aspects of this remarkable substance is that it doesn’t get used up the way oil does, for example. When oil is burned, it changes into something else and is no longer available as oil. On the other hand, when water is used – to drink, to irrigate plants, in an industrial process, or whatever – most of the time it remains water, and returns to the Earth through the water cycle. So, in one sense, we aren’t using water up.

We’re Using It Faster Than It Can Be Renewed

What we are doing is using water – fresh, drinkable water – faster than it can be renewed. And this is why we are experiencing shortages, and why those shortages are getting worse.

The most important drivers of water shortages are population growth combined with the intensification of water use. Since the beginning of the 19th century, human population has grown from just under 1 billion to well over 7 billion people, while the amount of freshwater used per person has doubled. Worse, as more and more of the Earth’s human population moves out of poverty, and wants to change their diets to something akin to what we eat in the developed world, especially consuming more meat and dairy products, the rate of water consumption per person is accelerating. From 1900 to 1975, America’s population tripled, but the consumption of water increased 10-fold, or more than three times as fast as population.[1]

Much of that increase in water consumption has arisen because of our richer, more resource-intense diet. Here’s how much water it takes to produce some of what we consider our basic foodstuffs[2]:

1 lb. of wheat: 125 gallons

1 lb. of rice: 250-650 gallons

1 glass of cow’s milk: 200 gallons

1 hamburger: 700 gallons

Now, again, water doesn’t disappear when used to produce these things. It returns to the water cycle. But if you don’t have that much water available, then you simply cannot produce these foodstuffs.
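To make “virtual water” concrete, here’s a hypothetical illustration using the per-item figures above. The menu, and taking rice at the low end of its 250-650 gallon range, are my own assumptions for the example:

```python
# A hypothetical "virtual water" tally for a simple meal, using the
# per-item gallon figures quoted in the text above.

gallons_per_item = {
    "hamburger": 700,        # 1 hamburger
    "glass of milk": 200,    # 1 glass of cow's milk
    "lb of rice": 250,       # low end of the 250-650 gallon range
    "lb of wheat": 125,      # 1 lb. of wheat
}

meal = ["hamburger", "glass of milk"]  # an assumed, illustrative menu
virtual_water = sum(gallons_per_item[item] for item in meal)

print(f"Virtual water behind this meal: {virtual_water} gallons")
```

A single hamburger with a glass of milk carries 900 gallons of water behind it, which is why food trade is, in effect, water trade.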

Indeed, agriculture is the single biggest consumer of water, absorbing about 70% of all water used by humanity today. In the 60 years from 1950 to 2010, the amount of farmland being irrigated doubled, and the amount of water used for farming tripled. And as I’ve described here and elsewhere, the demand for food, and the demand for more resource-intensive food, is growing far faster than population. Feeding the doubling of China’s population from 1960 to 2000 – the period of China’s most rapid growth – required roughly a quadrupling of food-production resources. I don’t have figures that relate specifically to the amount of water required for that purpose, but I would be very much surprised if it were less than a four-fold increase, and would not be at all surprised if it were actually much higher.

After agriculture, the biggest users of water are industry, which accounts for about 22% of humanity’s water withdrawals, and domestic activities (cooking, washing, sanitation, and so on), which take 8%. Moreover, industrial and domestic demands for water quadrupled in the last half of the 20th century, growing even faster than the demands from farming.[3]

Demand Increases Much Faster Than Population

All of this adds up to dramatic increases in the demand for water in future, even if human population stood still – which it won’t. Global population is projected to continue to grow from about 7.3 billion people today, to about 9.3 billion by 2050. And recently revised UN projections indicate that global population could level out at around 9.5 billion by 2100 – or could continue to increase, to as high as 13.5 billion.

But let’s look at just the period to 2050, and do a back-of-the-envelope calculation based on the experience of the last 50 and 200 years. Let’s assume we will be adding two billion more people to global population. That would, on its own, increase water consumption by about 30%. If we also include a factor for the greater intensification of water use per person, then we get at least a 75% increase in the demand for water in 2050 over today.
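For the curious, that back-of-the-envelope arithmetic can be sketched out explicitly. The intensification factor below is an assumed value chosen to reproduce the “at least 75%” conclusion; it is not a figure from a cited source:

```python
# Sketch of the article's back-of-the-envelope 2050 water-demand estimate.
# The intensification factor is an assumption, not a cited figure.

pop_today = 7.3e9          # global population today (from the text)
pop_2050 = 9.3e9           # projected global population in 2050

population_growth = pop_2050 / pop_today - 1.0   # ~27%, "about 30%" in the text
intensification = 1.38                           # assumed rise in per-person water use

total_growth = (1 + population_growth) * intensification - 1.0

print(f"Population effect alone: +{population_growth:.0%}")
print(f"With intensification:    +{total_growth:.0%}")
```

Population growth alone gives roughly a 27% increase; multiplying in even a modest rise in per-person use pushes total demand growth past 75%.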

The problem is that this is flatly impossible because we’re already using more fresh water than we have available. And that’s where I’ll start the next post.

© Copyright, IF Research, June 2015.

[1] Solomon, Steven, Water: The Epic Struggle for Wealth, Power, and Civilization, Harper Perennial, New York, 2010, p. 344.

[2] ibid., p. 373.

[3] Grimond, John, “For want of a drink: A special report on water”, Economist, May 22, 2010, p.4.


What Happens When California Runs Out of Water?

by senior futurist Richard Worzel, C.F.A.

California is exceptional. We all know that. Unfortunately, California has become exceptional in a particularly unpleasant way: it’s running out of water. According to NASA, California’s current four-year drought is the worst in 1,200 years, and the state’s reservoirs have only about a one year supply of water left – which raises the question: What do they do if they run out of water?

Pray for rain, that’s what. There just doesn’t seem to be another simple answer. But what is of broader concern is what California’s drought means to everyone else. There are important lessons we need to draw from California’s situation, but let’s start by considering how bad things really are in California.

How Bad Is It in California?

On March 12th, 2015, NASA Earth scientist and hydrologist James Famiglietti wrote an op-ed piece for the Los Angeles Times stating that “Groundwater and snowpack levels are at all-time lows. We’re not just up a creek without a paddle in California, we’re losing the creek too.” But that one-year supply refers only to surface reservoirs. What about underground basins, from which California farmers have been drawing most of the water they’ve needed over the past few years? “Farmers have little choice but to pump more groundwater [i.e. from basins] during droughts, especially when their surface water allocations have been slashed 80% to 100%,” said Famiglietti. “But these pumping rates are excessive and unsustainable. Wells are running dry. In some areas of the Central Valley, the land is sinking by one foot or more per year.”[1]

Another NASA scientist, Ben Cook, a professor at the Lamont-Doherty Earth Observatory, which is part of Columbia University, says that this is the worst drought that the western United States has experienced in 1,000 years – or since more than 400 years before Columbus.[2] Clearly neither governments nor residents are prepared for this. They’ve never experienced anything like it, and so do not know how to respond; their responses will largely be improvised from past experience that no longer applies.

So, if the state’s prayers for rain are not answered, and the drought persists (and California is already through the rainy season for 2015), then California will run through the water in its surface reservoirs within the year. This will be offset by pumping water from underground basins – but those basins are not bottomless, and are, in many places, running dry. Which means that from 2016 onward, California will literally live and die based on how much water is stored in its underground basins.

Some commentators insist this isn’t a problem. “The water tables are dropping so the supply is not infinite, but there’s enough to get us through a few more years of drought,” said the Hearst Publications website SFGate. “Right now, some 70 to 80 percent of lost surface water in agriculture is being made up by pumping ground water.”[3] [The emphasis is mine.]

But if there’s only enough water below ground to get through a few more years of drought, what happens after that?

The key question here is: Will this drought persist beyond “a few more years”? The long-range outlook appears to be that yes, it will, and potentially for many years. There’s a 12 percent chance, says NASA’s Ben Cook, that this is a mega-drought, which means it could persist for 20-40 years or more. And the longer we keep emitting carbon at our present rate, the greater the odds of a mega-drought hitting the American West. If California is already in a mega-drought, then it is in deep trouble – and the consequences will affect us all.

What California Means to the Rest of Us

California’s economy is the largest of all American states, and would place it 8th among national economies, roughly around the size of Brazil, and larger than Italy, India, or Canada, based on purchasing power parity. A failing California would be an economic disaster for America, with significant repercussions for the global economy.

Getting into the nitty-gritty makes this more concrete.

California is America’s biggest producer of food, and has been for over 50 years, producing more than half of the nation’s fruit, nuts, and vegetables. It is the nation’s number 1 dairy producer, and its leading commodity is milk and cream. It produces 99% or more of the nation’s almonds, artichokes, dates, figs, kiwifruit, olives, persimmons, pistachios, prunes, raisins, clovers, and walnuts. It produces 83% of the country’s fresh and frozen strawberries, 43% of its green onions, and 25% of onions. And this is just a selection of what’s grown in California.[4]

The problem is that much of this is grown unsustainably, using fossil water faster than it is being renewed. That didn’t seem to matter when the state received rain and snow reliably, but that’s no longer the case. And that means that eventually California will no longer be able to grow these things – or, at any rate, not as much of them.

California’s problems will tear a huge hole in America’s food basket – and that will affect food prices and supplies globally at a time when many farming areas around the world are also running low on water.

But it’s not just fruits, nuts, and vegetables that are at stake, because industries and cities use water as well. I’ve already had reports from people I know in Washington state of Silicon Valley companies moving there from California because of their concerns over water. That’s not a statistically valid measure, but neither is it a surprise.

And this focuses on a simple question that even people concerned about the problem are avoiding: What do people drink, and how do they live, if there’s no water? We’re not there yet, but we appear to be heading in that direction.

Disaster Is Discontinuous

One of the biggest mistakes people make when thinking about the future is expecting change to be gradual and continuous. But sometimes, especially with disasters, change is abrupt. One day New Orleans was going about its business. Then Hurricane Katrina hit, and the city was changed forever. One morning, New York City awoke to a beautiful, seemingly ordinary autumn day – then the planes hit the World Trade Towers, and the city – and the world – changed forever.

People expect solutions to appear as if by magic to long-standing problems, like the over-use of water. But when long-term changes are not instituted, eventually time runs out, and the future jumps from one state to a very different one. It becomes discontinuous. And of all the challenges we face, water is one of those that we have worked hardest to ignore.

That may be about to change in California, and California may be our very big canary in the coal mine. No one is currently saying that the population of California will have to be evacuated to places that have more water. And there are many things that can still be done to avert such an unthinkable result. But the time that would be required for such changes is largely being wasted because we keep thinking that things will return to “normal”.

And there may be a temporary stay of execution. About the only thing on the horizon that might change California’s situation, in the short run, is if the state’s prayers for rain come true, notably through the kind of drenching that the emerging El Niño might deliver – or not. This could go some way to replenishing California’s reservoirs and restoring some of its underground basins. But this would be a reprieve, not a rescue.

Drought As the New Normal

If the climate scientists are right, drought is the new normal, not only for California, but for large parts of western North America. NASA’s Ben Cook says there’s a 12 percent chance California is at the beginning of a mega-drought, as I mentioned earlier. But he went on to say that if we continue to spew carbon into the atmosphere, then the chances of a mega-drought, lasting 20-40 years or more, for the American West would rise to 60% by 2050, and 80% by 2100. So what California is experiencing now holds important lessons for all of us to consider: What will our future be if we don’t have enough water?

Many will undoubtedly read this and think that I’m being foolishly alarmist, or far-fetched. I can’t seriously be suggesting that California will actually and truly run out of water, can I?

That is precisely what I’m saying. If present trends continue, and if the state’s prayers are not answered, we will see a mass migration of water refugees out of California. It doesn’t have to happen that way, but avoiding this abrupt disaster will be difficult, wildly expensive, and catch the vast majority of people by surprise, even though the problem has been coming for decades.

But if such actions, difficult and distasteful as they may be, are not taken, then when the last quart of water has been pumped – and fought over – from the last underground basin, and there’s no water left, it will be too late. A disaster that has taken more than a century to develop will have arrived – and its landing will be abrupt and catch many people by surprise.

Some will call this “bad luck”.

I’ll have more to say about our abuse of, and attitudes towards, water in my next blog.


© Copyright, IF Research, June 2015.

[1] Famiglietti, James, “California has about one year of water stored.” Los Angeles Times, 12 March 2015,

[2] Cook, Ben, et al., as quoted in “NASA Study Finds Carbon Emissions Could Dramatically Increase Risk of U.S. Megadroughts”, 12 February 2015,

[3] Graff, Amy, “10 California drought myths debunked”, SFGate blog,



We Can See Right Through Them: Transparency and How to Use It

by Kit Worzel, futurist

In 2014, Canada ranked #10 worldwide for lack of corruption, the UK ranked #14, and the US ranked #17. In comparison, Denmark scored as the least corrupt country in the world, India was in the middle of the pack at #85, and North Korea and Somalia, tied for last at #174, are considered the most corrupt nations in the world. There is a direct correlation between the level of openness and transparency in a nation’s government and its lack of corruption, leading to the thought that lack of transparency breeds corruption.

These measures come from Transparency International, a non-profit organization that collects data from numerous sources, including the World Bank, Reporters Without Borders, the World Economic Forum, the Tax Justice Network, and their own research. They are not the only international group that researches corruption across all sectors, but they are one of the largest and most trusted (one would hope. The irony would be painful otherwise).

When it comes to corrupt and closed nations, certain characteristics stand out. We hear about the Great Firewall of China, and how Google refused to continue filtering its search results there because it considers China to be a totalitarian state – a characterization that is largely accurate. North Korea has the most restrictive and corrupt government in the world, and its press is entirely controlled by government propaganda. In Somalia, where the World Bank has essentially said the rule of law has entirely failed and there is zero control over corruption, the press is somewhat less restricted than in China, but reporters still get arrested or killed with distressing regularity. These nations have very low levels of transparency, and their citizens have very little say in governing, which encourages closed and corrupt regimes. So a lack of transparency is not the same thing as corruption – but it is clearly a hallmark of corrupt governance.

By comparison, Denmark, which I mentioned earlier as the least corrupt nation in the world, holds that top spot for a number of reasons. They are part of the Open Government Partnership, an international platform dedicated to making government open, accountable, and responsive. They have an enforced code of conduct for public servants, a legal system that criminalizes various forms of corruption, and an independent judiciary to enforce those laws. Not coincidentally, they are rated as one of the happiest countries in the world (#1 in 2013 and 2014, slipping to #6 in 2015, according to the UN World Happiness Report).

Coming in 17th, the US has a comparatively low level of corruption. A free press, combined with nearly omnipresent civilian recording and online reporting of official actions, makes corruption more difficult to hide. And when people are caught stealing or in other wrongdoing, they are usually punished for it. There are laws in place to protect people against false seizure of assets, whether by governments or by large corporations. So while many Americans complain about government corruption, the US is significantly better off than most of the world.

The Shock of Transparency

Yet even those regimes that deem themselves honest and open are experiencing a form of future shock, as information they consider secret (or embarrassing) increasingly finds its way into public view. And it’s not just governments that are vulnerable to this trend towards transparency. Most major corporations dislike transparency, since it tends to tarnish their image when their dirty deals or deeds get out. Nestle taking water from drought-ridden California, or similar behavior by Coca-Cola in India, are just two examples, both of which sparked boycotts of their products. As this affects their bottom lines, such companies strive to keep their actions secret, and to prevent outsiders from finding and airing their dirty laundry. Like their government counterparts, they’re increasingly going to be disappointed as transparency continues to spread.

Conversely, companies that are open in their dealings are typically seen as more trustworthy. Elon Musk’s Tesla open-sourced its patents for its electric cars. At SEOmoz, former CEO Rand Fishkin not only posted his own performance reviews, but also made the company’s funding process public, complete with a reasonably complete financial breakdown of the company and its profit model. Such companies believe that transparency generates more business by increasing trust. Transparency pays, and those who drag their feet on it can suffer.

As new stories of corruption and backroom deals emerge from the media – or, increasingly, from bloggers – people are taking individual action against it. Nick Rubin, a US teen, created a browser plugin called Greenhouse, which recognizes the names of members of Congress and shows where their funding comes from. He uses data from the website of the Center for Responsive Politics, a group dedicated to tracing the money in US politics. In India, the website I Paid a Bribe asks people to report where officials solicited a bribe, and for how much, showing the corruption in various parts of the country, and creating a starting point for preventing it in the future. In Kenya and Uganda, the site Not in My Country tracks solicitation of bribes of different types at universities, and helps guide users through the legal process if they wish to take action. These are but a small sample of the websites and utilities designed to expose and end corruption, many of them grassroots, along with a few official government sites. Combine this with easy access to information, and the result is a large magnifying lens held over any public official, or anyone in the public eye. But transparency and public display are, in themselves, not enough.

The Past Is Gone…the Future Can’t Hide

JFK and FDR were able to get elected in part by hiding serious health issues (Addison’s disease for Kennedy, and polio, or possibly Guillain-Barré syndrome, for Roosevelt), something that would be impossible today. Now there’s too much media, and too many eyes watching, to conceal something like that, and attempting to do so unsuccessfully would be seen as an even greater sin by the voters. One of the reasons John McCain didn’t win in 2008 was his known health issues, which made voters fear he would die halfway through his term. Anyone who tried to cover up a serious health issue today wouldn’t even make it to the ballot.

It’s not just health. Financials – one of the issues that hurt Mitt Romney – can leave politicians exposed, as can the contents of donor lists, voting records, and all sorts of other data that’s becoming available. Yet the raw data isn’t that useful for most people. It’s the application of that data that we are interested in.

We are likely to see an outgrowth of current technologies like Greenhouse and the corruption websites, tied into social media. Rather than wonder whether the real estate agent serving you is giving you a fake contract, you’ll be able to take a picture with your phone and know all about their reliability and honesty right away. Politicians won’t be able to hide skeletons in the closet any more; they’ll have their failings and broken promises displayed alongside their pictures on your computer or TV screen. We are rapidly approaching a post-secrecy world, where dirty laundry is almost impossible to hide, for politicians, corporate CEOs – or private citizens like you and me.

The Downside of Transparency

Greater openness also means that governments may use this declining secrecy as a weapon to control people, tap their phones, scan their e-mail, even access confidential medical records. This is already happening, as you can see from my links, and greater openness, voluntary or otherwise, will also give people who have an axe to grind more material with which to work.

All governments want to monitor all the data they can on their citizens, even if it is for benign reasons, and the unprecedented level of access, particularly with the proliferation of social media, is a goldmine for anyone who is so interested. It will pay to remember the old political saying that just because you’re not interested in your government doesn’t mean that your government isn’t interested in you.

Seeing Is Not Enough

As recent events demonstrate, we’re finding that even a hint of government cover-up can create a major public uproar. Julian Assange (Wikileaks) and Edward Snowden (release of NSA spying documents) are examples of this in the US. In Canada, Bill C-51, the so-called anti-terrorism bill, has been shown to be a gag bill which would empower the government to silence groups it felt were saying uncomplimentary things. Government prosecution (or persecution, depending on your point of view) of Assange and Snowden is wildly unpopular, and the passage of C-51 through the Canadian House of Commons has been met with widespread outrage, but this hasn’t stopped either the desire to prosecute the two whistleblowers, or the passage of the bill. Knowing about something may not be enough to stop it.

Bills that go to Congress frequently have deliberately confusing language, making them difficult for laypeople to understand, and this language is used to hide raises for Congress, changes to voting laws, tax breaks for major donors, and all sorts of sins that governments would rather the voters remained ignorant of. Complication is one way governments have of fighting transparency. And that calls for different kinds of counter-measures.

Beyond Transparency

Which brings me to my second point. We now live in an age where technology is making privacy almost obsolete. We have an unprecedented level of access to public officials, and they are frequently in the headlines for one scandal or another. However, all this exposure is not significantly lowering the level of corruption, which may mean that transparency, while important, is not enough. Indeed, in a world where notoriety is often seen as a form of celebrity, many evil-doers will make hay out of coming clean. To keep those who lead and govern honest, we need something more: accountability – and that may be harder to achieve.

So when software can show you a politician’s voting record, that’s great. But it will be better still when that voting record is cross-referenced with their donor records, so you can see that they vote against solar panels when the oil industry gives them a hefty donation. And we are coming to a point when the news won’t just report that a bill sponsoring green energy will be voted on next week, but also that the coal and oil industries have donated a total of $5 million to 238 members of Congress, buying themselves a majority – and will give the contact information for all those members of Congress, as well as for the coal and oil senior executives. And, of course, berating your member of Congress is now just the push of a button away.

So, all this information pushing towards greater transparency is largely beneficial, and since it’s already being used against us, there should really be no hesitation in using it for our defense. But without a way to create accountability, this information isn’t going to be as useful as it might be – or as we need. Mark Sanford was almost impeached, and was formally censured, while Governor of South Carolina, but that didn’t stop him from winning the congressional seat for South Carolina’s first district in 2013. In 2010, Representative Charlie Rangel was found guilty of 11 violations by the House Ethics Committee, most involving money mismanagement, but he kept his seat. This is just the tip of the iceberg. While we may be able to find out the dirty dealings of those in power, doing something about it will take more than a smartphone. It is not enough to create technology that enforces transparency; we must also find a way to create accountability. Corruption will continue to exist until we can combine transparency and accountability.

Is there an app for that – yet?

Copyright, IF Research, June 2015.


Fog: What’s Next in Computing

by Senior Futurist Richard Worzel, C.F.A.

Cloud computing is now a widespread computing commodity, useful, ubiquitous, and powerful. But Cloud computing is fundamentally a 1970s concept, as it amounts to little more than an adaptation of timesharing, which IBM pushed hard in that era. Moreover, Cloud computing – indeed, any centralized computing network that requires backhauling data – is reaching its limits because of the recent, massive explosion in data.

Fog computing, on the other hand, is a new concept that promises not to replace or supplant Cloud computing, but to augment it and make it even more valuable – even as Fog computing breaks new ground.[1]

And, as with most truly new ideas, Fog is difficult to explain, because it’s not an adaptation of something we already know. As a result, I’m going to give a description of Fog computing and why it’s needed, but then illustrate it with two examples. I believe the examples will actually be of greater value.

The Data Explosion, or Why the Cloud Is Not Enough

For the past 50 years or so, computing resources have expanded faster than data. Processor speeds have increased, and data storage costs have fallen, in accordance with Moore’s Law – which, loosely stated, says that computers double in speed, and halve in cost, roughly every 18 months. As a result, every time we’ve had a problem that involved crunching data, we just threw more computing resources at it, and got it solved.

Now that has changed. Suddenly, data sets are growing much faster than computer resources. Computers are still following Moore’s Law, which means their cost-effectiveness is expanding exponentially. But now, many kinds of data are expanding factorially, which is dramatically faster than exponential expansion[2].
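A few rows of numbers show how stark that difference is. This short sketch compares simple doubling (exponential growth, like computing capacity) against factorial growth:

```python
import math

# Comparing exponential growth (repeated doubling, like the growth in
# computing capacity) with factorial growth, to illustrate how quickly
# factorially growing data outruns exponentially growing hardware.

for n in (5, 10, 15, 20):
    print(f"n={n:2d}  2^n={2**n:>12,d}  n!={math.factorial(n):>26,d}")

# By n = 20, n! already exceeds 2^n by a factor of more than two trillion.
```

Both curves start out modestly, but the factorial pulls away almost immediately, which is why no amount of Moore’s Law progress can keep up with factorially expanding data.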

There are two fundamental reasons why data are expanding so much more quickly than before. The first is that we are analyzing data sets that are both new to us, and massively bigger than any we’ve ever considered trying to tackle before. For instance, each human being has a genome, which consists of 3 billion base pairs (the precise sequence of their DNA strand) – or, in data terms, every human genome has 3 billion data points. There are currently about 7.3 billion humans, so if we were going to analyze the entire human genome to search for the causes of disease, the effects of different kinds of nutrition, or many of the other questions that we are now contemplating, we would be dealing with a data set that could be as large as 3 billion x 7.3 billion data points. And if we wanted to analyze the data for all of the billions of species of plants, animals, and other organisms on Earth, we would have to multiply that by many billions more. We would never have even considered this as possible in the past, but we are now edging towards this kind of analysis.

But that’s actually the slower-growing of the two kinds of data now emerging. The faster-growing has to do with how different entities relate to each other on the Internet. This is important because the Internet of Things (“IoT”) – the universe of devices attached to the Internet, from smartphones to thermostats to fitness monitors and much more – is exploding. In fact, 2015 appears to be the first year in which the majority of communications over the Internet are between IoT devices communicating with each other, rather than people communicating with each other, or even people communicating with things. Current estimates project that there will be 40 billion IoT devices within 5 years, and that may be conservative.

Meanwhile, a lot of analysis now consists of metadata, which is data about data. As a result, how data are communicated from one entity to another becomes grist for this kind of analysis. So as the number of IoT devices explodes, the number of potential relationships between IoT devices explodes much, much faster.
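Why relationships between devices explode faster than the devices themselves can be sketched numerically. As a toy illustration (not tied to any particular network), the count of distinct device-to-device links grows roughly with the square of the device count, while the number of possible orderings of devices – one crude proxy for distinct communication patterns – grows factorially, far outpacing even exponential growth:

```python
from math import comb, factorial

# For n devices: pairwise links grow ~n^2, while the possible
# orderings of the n devices grow factorially -- faster than the
# exponential 2**n shown for comparison.
for n in (5, 10, 20):
    pairs = comb(n, 2)        # distinct device-to-device links
    orderings = factorial(n)  # possible sequences of n devices
    print(n, pairs, 2**n, orderings)
```

At just 20 devices there are 190 possible links but roughly 2.4 x 10^18 orderings; the metadata about how devices interact dwarfs the device count almost immediately.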

There is no way that central processing of data, say through the Cloud, can handle these quantities of data. And that’s why Fog computing is going to emerge and be so critical.

What Is Fog Computing?

Fog computing is computing done at the edge, where data is first gathered, rather than at a central computer that crunches the data before returning results to the edge. A Fog computer is structured so that each node computes on data where it originates, then passes along the data and its computations to another node, which adds its own data, performs its own calculations, and passes them along in turn. The result is decentralized computing, with results emerging without a central processor controlling or directing the computation. This is a very different kind of computing than we are used to. Let me illustrate how it might work, and why it will be important.
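A minimal sketch of the idea, with purely illustrative node names and readings, might look like this: each node folds its own local reading into a running partial result and hands that result to the next node, so a network-wide answer emerges without any central processor ever seeing the raw data.

```python
# Minimal sketch of decentralized, node-to-node computation.
# Each node combines its local reading with the partial result it
# receives, then passes the updated result along; no central computer
# is involved. (All names and values are illustrative only.)

def fog_pass(nodes, reading_of, combine, initial):
    """Thread a partial computation through every node in turn."""
    result = initial
    for node in nodes:
        result = combine(result, reading_of(node))  # compute at the edge
    return result

# Example: a running average of local temperature readings.
readings = {"node-a": 21.0, "node-b": 23.5, "node-c": 22.0}
total, count = fog_pass(
    readings,
    reading_of=readings.get,
    combine=lambda acc, r: (acc[0] + r, acc[1] + 1),
    initial=(0.0, 0),
)
print(total / count)  # network-wide mean, accumulated node by node
```

The point of the sketch is the shape of the computation: the final answer is assembled hop by hop, which is what lets the workload scale with the number of nodes instead of overwhelming one central machine.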

A Traffic Jam as a Computer Network

I’m sure you’ve experienced this kind of situation: You’re driving along a superhighway through a city, comfortably above the speed limit and making good time, when suddenly you see red brake lights blossoming in front of you. You slow, and then stop, then crawl, then stop, moving forward only in unsatisfying lurches. You have no idea what’s happened, how long traffic will be like this, or how far the traffic jam extends. Worse, you’re coming up on an exit, and can’t tell whether you’d be better off staying on the superhighway, or exiting and trying to push your way through city surface road traffic.

Meanwhile, you look at the cars whizzing by in the opposite direction, and wish that, somehow, they could tell you what’s ahead, how bad it is, and how far you would have to go to get by this stop-and-go traffic jam.

Virtually all contemporary cars have many onboard computers, and newer ones have several different kinds of communications capacities. Now add a Fog computing system.

With cars equipped to participate in a Fog network, each car, in both directions on the superhighway, makes data about the traffic around it available to every other car, with all personal identifiers stripped off for the sake of privacy. Hence, while you sit stuck in traffic, your car quickly learns that the stop-and-go traffic you are experiencing actually gets worse, and that the road is completely blocked about a mile ahead because a tractor trailer has tipped over, blocking all the lanes on your side of the superhighway. Each car then uses this data to calculate its best options, and presents them to the driver (or executes them itself, if it’s a self-driving car).

In your case, as you are coming up to an exit, the car indicates that you will be far better off exiting immediately, and finding a new route around the blockage. The same is true for all the cars around you, as they each reach the same calculation based on the traffic data they have collected.
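The exit decision each car makes could be sketched like this; the reports, thresholds, and function name are all hypothetical, standing in for whatever a real vehicle network would use:

```python
# Sketch of a car's exit decision from anonymized oncoming-traffic
# reports. Each report carries only a position (miles ahead) and the
# speed observed there -- no identifiers. (Thresholds are illustrative.)

def should_exit(reports, exit_distance, blocked_below_mph=5):
    """Exit if traffic past the upcoming exit is effectively stopped."""
    for miles_ahead, speed_mph in reports:
        if miles_ahead > exit_distance and speed_mph < blocked_below_mph:
            return True  # the blockage lies beyond the exit: take it
    return False

# Oncoming cars report a full stop about a mile ahead of you,
# while your exit is half a mile away.
reports = [(0.3, 25.0), (0.7, 10.0), (1.0, 0.0)]
print(should_exit(reports, exit_distance=0.5))  # True: exit now
```

Each car runs the same kind of calculation on the shared data, which is why the cars around you reach the same conclusion at roughly the same moment.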

But now hundreds of cars are exiting the superhighway, and flooding the local surface streets of the city, causing a different kind of traffic jam. Again, each car is broadcasting details of the traffic around it and behind it. And as each car gathers data, each car starts figuring the best routes around this unexpected, and rapidly growing, new volume of surface street traffic, so that cars diverge along many different routes.

Now the city’s central traffic computer, located in the Cloud, gets into the act. It notes the blockage on the superhighway, and notifies emergency services. In turn these services dispatch police, fire, and ambulance crews to the scene, and start diverting traffic from farther back on the superhighway. The traffic computer also considers the flow of traffic exiting the superhighway, and starts changing its traffic flow controllers, such as changing when traffic lights go red and green, how long turn lanes are given priority, and switching the direction of some lanes to further diffuse the traffic, and keep things moving.

As a result, while traffic slows and thickens, the effects are minimized, and things keep moving.

This is Fog computing interacting with Cloud computing.

A Farmer’s Field as a Computing Network

Tom Hauptman is a cutting-edge farmer, but even he is surprised by the latest development. In conjunction with the University of Saskatchewan in Saskatoon and Rice University in Houston, Texas, Tom is one of a limited number of farmers running a new kind of experiment: turning one of his fields into a networked – and networking – computer. He was approached by the joint venture team because he has spoken and blogged about the future of precision agriculture, and so was receptive to the concepts involved in the SmartField project.

The first step was to plant sensors along with his normal wheat crop. This was done by adding a small robotic attachment to his planter. Every so often, as his field was being seeded, it would punch a foot-high probe, containing a sensor-computer node, into the ground along with the wheat, with the result that Tom’s field is now growing both wheat and data.

The nodes have a variety of sensors in their package, as well as an obsolete smartphone computer chip. All of this equipment has been selected first to use as little electricity as possible, and then to be as cheap as possible in order to keep overall costs down. In addition to the metal probe that supports the node in the ground, there are two electricity generators: a soil, or microbial, battery, which generates electricity from the microbes in the soil around the probe; and a small infrared solar panel that collects electricity from heat rather than direct light. The designers realized that as the wheat grew, it would quickly overshadow the probe, eliminating any potential power from visible-light solar cells. The infrared cell, while more expensive, will continue to operate even when overshadowed.

The computers on the nodes were programmed before they were distributed around the field. Their purpose is to assess the data collected by the sensors and compare the readings with the projected data for a hypothetical, optimally-growing wheat stalk at the same stage of development, as produced by an elaborate computer model. The goal is to identify anything that might hurt or help the wheat around the sensor to grow optimally, such as emerging pest threats, too much or too little water, nutrient deficiencies, or anything else that might cause development to be sub-optimal.
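A node's local check might be sketched as follows; the sensor names, units, and bands are invented for illustration and do not come from the actual project:

```python
# Sketch of a node's local check: compare each sensor reading against
# the band projected by the optimal-growth model for this stage of
# development, and flag anything sub-optimal. (Bands are illustrative.)

OPTIMAL_BANDS = {  # model's expected range at this growth stage
    "soil_moisture": (0.25, 0.40),    # volumetric fraction
    "stalk_height_cm": (30.0, 45.0),
}

def flag_deviations(readings):
    """Return the sensors whose readings fall outside the model's band."""
    flags = {}
    for sensor, value in readings.items():
        low, high = OPTIMAL_BANDS[sensor]
        if not (low <= value <= high):
            flags[sensor] = value
    return flags

# Soil moisture has drifted below the band; stalk height is fine.
print(flag_deviations({"soil_moisture": 0.18, "stalk_height_cm": 33.0}))
```

Only the deviations need to be passed along to neighbouring nodes and upward to the farm computer, which is one way the network keeps its data traffic manageable.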

Now, a month after the planting of the wheat field and the probes, something remarkable has happened. A group of drones, each about the size of a badminton shuttlecock, regularly spreads out, moving from probe to probe, both collecting and disseminating data. The data gathered from the probes are relayed both to a computer in Tom’s home office and to the other probes in the field. Some of the probes are now reporting that soil moisture is falling below the optimal band, while others are reporting improper development of the surrounding wheat stalks. The Fog computer network created by the sensor nodes has thrown up two hypotheses: first, that there is a section of the field that drains more quickly than Tom suspected, and therefore needs more irrigation; and second, that some external factor is harming some areas of the wheat field. The areas related to these two hypotheses overlap, but are not identical.

These data are relayed up to a Cloud computer run by the university researchers. Their model concurs that some areas of Tom’s field drain more quickly, which is not a terribly interesting result (at least for the researchers), but concludes that a new kind of wheat leaf rust may be emerging that they haven’t encountered before, and which isn’t included in their optimal-development model. The researchers take the data from the Cloud model and use it to search recent scholarly papers, as well as agricultural extension agents’ reports of emerging threats. They find a handful of reports of such a rust, but no research – it’s too new. One of the reports suggests that a new, recently introduced fungicide seems to help resist the new rust infection. The researchers therefore send the data about the nature of this new rust, and the possible fungicide remedy, to the Cloud model.

The Cloud model then acts on the data, downloading it to all of the farms involved in the tests, including Tom’s farm, where it goes down to the individual nodes. The nodes compare the data they receive as the rust emerges, and conclude that this is the threat causing the affected plants to grow sub-optimally. As their data are cross-tabulated by the Fog computer, a recommendation is made to Tom that he immediately obtain the new fungicide and apply it in controlled amounts to the specific areas affected, as well as to a barrier of uninfected plants around those areas. The amount involved is much less than would otherwise be used, first because only the affected or threatened plants are sprayed, and second because the infection has been caught at a very early stage, before much damage has been done.

The first Tom knows of any of this is when the recommendation to spray appears as a text message on his smartphone. He calls the university researchers to talk to them, because Tom has no experience with this. They confer, and Tom concludes that he will go ahead with the proposed spraying, orders the fungicide, and prepares to spray the next morning.

This action is the result of data being gathered by a Fog network of sensors and computers; of processing by that network identifying, at a very early stage, that something is amiss; of that conclusion being thrown to other nodes for comparison and reinforcement or clarification; of the results being uploaded first to Tom’s computer and then to the researchers’ Cloud system; of research being done on these early findings, with the conclusions updated in the Cloud system and downloaded to the farm test nodes, which then compared the expected development profile with the reality in the field; and, finally, of remedial action being proposed.

Where the Action Is: Everywhere Through Interlocking Networks

The computing involved in this example happened at the sensor level, at the farm level, and in the Cloud. Here again, Fog and Cloud computers interacted, each bringing their own strengths.

A fully-articulated global farm network would include Fog computing at the field level, databases at the farm, regional, national, and global levels, and Cloud computing happening at the regional, national, and global levels. Information would pass both upwards and downwards from every level, and conclusions would emerge through techniques such as evolutionary algorithms, both with and without human intervention. As new threats or events occurred in one location, the relevant data would quickly be spread to everyone involved in the various networks. Hence, a farmer’s Fog network would be warned to look out for certain characteristics of an emerging threat, and data from the field, along with Fog-generated conclusions, would be forwarded to Cloud networks for broader comparison and assessment.
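The up-and-down flow described above can be sketched in miniature; the farm summaries, threat names, and threshold are all invented for illustration, not drawn from any real system:

```python
# Sketch of one round of the up/down flow: each farm's Fog network
# uploads a summary to a regional Cloud, which aggregates across farms
# and broadcasts warnings back down. (Structure is illustrative only.)

def regional_round(farm_summaries, threat_threshold=2):
    """Aggregate farm summaries; warn everyone about recurring threats."""
    sightings = {}
    for summary in farm_summaries:           # upward flow from the farms
        for threat in summary["threats"]:
            sightings[threat] = sightings.get(threat, 0) + 1
    # Downward flow: a threat seen on enough farms becomes a warning
    # broadcast back to every Fog network in the region.
    return [t for t, n in sightings.items() if n >= threat_threshold]

farms = [
    {"farm": "A", "threats": ["leaf_rust"]},
    {"farm": "B", "threats": ["leaf_rust", "aphids"]},
    {"farm": "C", "threats": []},
]
print(regional_round(farms))  # ['leaf_rust']
```

A single aphid sighting stays local, but a rust seen on two farms comes back down to all three as something to watch for, which is the essence of the warning loop described above.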

In theory, sensors will eventually watch over every plant on every farm in the world, gathering data, with Fog networks processing it and passing conclusions and observations to local, regional, national, and global Cloud networks. No Cloud system, regardless of how powerful, could possibly handle such a volume of raw data. And no stand-alone Fog system, encompassing a single farm, could have the depth of knowledge that being networked globally could provide.

What’s missing from this picture is the economics that would make the idea financially feasible. This involves not only the cost of the sensor-processor nodes, but the interchange of data itself. I strongly suspect that data will be bought and sold at all levels of these interlocking networks. Precisely how such data sales will be priced and tolled is unknown, but mechanisms will emerge. Moreover, the buying and selling of data may become of much greater significance to farm suppliers, like Deere, BASF, or Bayer, than their current, physical product lines. Indeed, their current product lines may become merely the means by which they are involved in selling data, much as Gillette sells razors virtually at cost in order to sell replacement blades at a profit.

Welcome to the World of Fog

Fog computing is virtually unknown in the world today; very few people have heard of it, even among the high-tech companies that look for such things. That’s about to change, and with a vengeance. And the combination of Fog computers with techniques like evolutionary algorithms means that such networks – or networks of networks – will, in many cases, reach conclusions without human intervention.

We are entering a strange new world, but one that holds enormous promise, as well as the potential for enormous misuse. Welcome to the Fog.

© Copyright, IF Research, May 2015.

[1] Disclosure: I’ve recently made an investment in an early stage Fog computing company, Fog Lifter™, which reflects my belief in the importance of this emerging sector of the computer industry.

[2] To illustrate factorial expansion, 2 raised to the 5th power (exponential) is 2 x 2 x 2 x 2 x 2 = 32. For comparison, five factorial is 1 x 2 x 3 x 4 x 5 = 120.
