The View from 2030: The Future of the Life Sciences

June 2006

What follows is a summary of a presentation I recently made for the ‘2006 Life Sciences Industry Summit,’ co-hosted by the Long Island Life Sciences Initiative and the Center for Biotechnology.

The Life Sciences Century

When we look back, later this century, we’ll identify the completion of the Human Genome Project as the time when we hit the knee of the exponential growth curve, even though there’s much more to the life sciences than genetics. We are now experiencing the beginning of a period of growth rates in this industry beyond anything that humanity has experienced before, possibly eclipsed only by the explosion in computers and communications. Moreover, the effects of this research on our daily lives are going to be even more dramatic than those of IT, and will change our society and the world in fundamental ways that we find hard either to foresee or to believe.

There are three reasons why the growth in the life sciences is going to be so dramatic. First, life science research is starting to move at computer speeds rather than in vitro speeds. One of the reasons why the Human Genome Project will be seen as a pivotal point in history is that Craig Venter beat the U.S. government and the traditional science establishment by using computers instead of graduate students to sequence the human genome. Moreover, the HGP could not have been completed at all without the convergence of computing with the life sciences, plus advances in materials science and chemistry. As a result of that convergence, we are now doing things deliberately instead of by trial and error.

Second, even though they may not understand the scientific details, people generally want the results that the life sciences produce. We want a cure for cancer or Alzheimer’s. We want to postpone or reverse the effects of aging, such as the damage done to our skin by sunlight. We want leaner beef, a cleaner environment, positive means of establishing our identities in the face of rising identity theft, fuels that don’t produce greenhouse gases, and cheaper goods produced with less pollution. And we want to be able to see and hear better. We want, in other words, the kinds of results that the life sciences promise.

And third, the potential investment returns in the life sciences will produce a virtuous cycle of investment leading to success, leading to more investment, leading to more success. At times this is going to produce a feeding frenzy akin to the dot-com bubble of the late 1990s, as well as the financial disasters that are the inevitable companion of any new, rapidly developing field. If that doesn’t describe what you, as a life sciences group struggling for financing, are seeing, just remember that investors are camp followers, not leaders. If your sector is not the flavor of the month right now, wait a while – it will be again. Unfortunately, that’s just the fickle, whimsical nature of the investment markets.

So what’s the potential for the life sciences? DuPont estimated that genetically engineered organisms would develop into a $500 billion a year industry this decade. Mackenzie Consulting has extrapolated that biotechnology will grow to a $230 billion a year industry by 2010. Both of these are only a small part of the entire life sciences field. Because it is such an enormous field, I’m going to concentrate on the parts that relate most immediately to humans, especially health.

So the potential of this wild and woolly field is enormous. Over the next 20 years, I believe that the growth in investor profits from the life sciences may be greater than from all other industries combined. This is not something I can prove, but I made a back-of-the-envelope estimate that over the next 25 years, all of the various fields within the biosciences industries will create an increase in market capitalization in excess of US$20 trillion, which would make it larger than the world’s 10 biggest stock markets today. And before you run off and start quoting me on that figure, just remember that this is what is called a SWAG: a ‘Silly, Wild-Ass Guess.’ The truth is that no one knows how big the life sciences are going to be; the question is not whether it will be big, but whether it will be huge or merely enormous. Now, having put some definition on how big the industry could get, let’s look at some of the factors that are going to drive this growth.

The Demand for Health Care

Annual health care expenditures per person remain pretty constant from about age 4 through age 55, and then they start to go up almost exponentially. The Baby Boom generation – the largest in history – is now entering the danger zone, with the leading edge of Boomers turning 60 this year. Their demand for health services is going to explode, especially as the boomers want to pretend that they’re not aging. The demand for health management solutions – and money to fund the development of such solutions – is going to follow the boomer demand. As a result, expenditures on health threaten the budgets of U.S. federal and state governments alike. Medicare and Medicaid fund the two groups most likely to need medical care: the elderly and the poor. Those who don’t have any form of health insurance wind up at hospital emergency rooms, which constitutes an unofficial, but extraordinarily expensive, form of socialized medicine. Therefore, the growth in government spending on health care will exceed the growth in any other aspect of public spending.

And, of course, the private sector’s consumption of health care is going to grow about as fast as people can afford to fund it. One crucial debate that we have studiously avoided so far is whether we want more expensive, more effective health care, or cheaper, less effective health care. Because we have avoided that debate, new pharmaceuticals that are more expensive but dramatically more effective are often dismissed by governments and insurers as simply too expensive.

So, the demand for health care, which is rising rapidly now, is going to skyrocket, and that is going to drive demand for pharmaceuticals, diagnostics, imaging systems, medical devices such as retinal replacements or cochlear implants, prosthetic limbs and joints, and tests and treatments. And it’s going to be as much about quality of life as it is about saving lives. While cancer drugs, for example, are an obvious area where there’s a lot of research, and where there will be a lot of breakthroughs, lifestyle drugs are going to grow at least as fast. These include everything from treatments for Alzheimer’s to treatments for erectile dysfunction and male-pattern baldness, plus weight management, skin-aging and anti-wrinkle treatments, and other beauty enhancements.

The Trend Towards Personalized Medicine

One of the most dramatic shifts going on today is the trend towards personalized medicine. Currently, a successful new pharmaceutical is effective between 50 and 70% of the time, which means it is either ineffective, or has unacceptable side effects, 30 to 50% of the time. The reason appears to be SNPs – Single Nucleotide Polymorphisms – the subtle genetic differences between one person and another. As we learn more about these subtle differences, we will be able to screen someone to determine whether a particular drug will work for them. Perhaps the highest-profile example is Herceptin, used for certain kinds of breast cancer. Overexpression of the HER2 protein, in conjunction with a genetic variation, indicates that Herceptin will be effective against a particular kind of breast cancer. If these two factors aren’t present, then Herceptin won’t be effective, and should not be prescribed. As a result, a woman’s genetic pattern will determine whether Herceptin is used to treat her breast cancer. And the stakes are high when you consider the cost of using an ineffective drug. Some of these costs are obvious, such as the money wasted on a drug given to those for whom it either doesn’t work or produces negative side effects, plus the cost of possible lawsuits. But consider as well the cost to a cancer patient who has only a limited amount of time to live, and limited resources to pursue a cure. Administering an ineffective drug to this kind of patient can mean the difference between life and death.
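
To make that screening idea concrete, here is a minimal, hypothetical sketch of the kind of two-factor check described above. The function, field names, and threshold are illustrative assumptions, not an actual clinical protocol.

```python
# Hypothetical two-factor screen: both HER2 overexpression and the relevant
# genetic variant must be present before the drug is considered.
def herceptin_indicated(her2_expression_ratio: float,
                        has_variant: bool,
                        overexpression_threshold: float = 2.0) -> bool:
    """Return True only if both markers point to a likely responder."""
    overexpressed = her2_expression_ratio >= overexpression_threshold
    return overexpressed and has_variant

# Example: a patient profile drawn from invented diagnostic results.
patient = {"her2_expression_ratio": 3.1, "has_variant": True}
print(herceptin_indicated(patient["her2_expression_ratio"],
                          patient["has_variant"]))   # True -> candidate for the drug
```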

Ultimately, new drugs screened for applicability will approach 100% effectiveness – but only because they will be used for that fraction of the population for whom they’ve been shown to be effective. This is completely contrary to the financial blockbuster model the big pharma companies have used to date, and is going to mean we will see an explosion in niche drugs, and in the boutique players developing them. In fact, over the last six months or so, the FDA seems to have quietly moved towards demanding not only clinical demonstrations of the effectiveness of a drug, but also identification of which individuals can safely and effectively use it. This is a crucial shift in drug-approval protocols, and will create dramatic changes in the pharmaceutical industry.

Among other things, it will push the big pharmaceutical companies to move towards the magazine industry model: rather than research all their new drugs themselves, they’ll buy up smaller firms, such as Enzo Life Sciences or OSI Pharmaceuticals, and develop fewer new drugs in-house because it’s more cost-effective that way. I have no inside information about whether this is about to happen to these particular companies – I’m merely saying these are the kind of small shops with bright people and new drugs that are likely to attract attention.

The Data Threat

However, the dramatic increase in data accumulation that is producing such an enormous resource is also threatening to overwhelm us. The quantity of relevant data in the life sciences is growing far beyond the ability of existing techniques to cope with the flow. In fact, we are now starting to experience a ‘perfect storm’ of data far beyond anything we’re prepared to deal with. The researchers in this room grew up in a world where you struggled to get sips of data dribbling from a garden hose – and now you’re getting blasted with a fire hose at full bore.

One researcher I know of is working with SNP chips, each of which produces 200,000 data points, and he has 1,000 patients in his study universe. On its own, this amounts to 200 million data points, which is bad enough, but when you add potential interactions between sites, you’re looking at a combinatorial explosion in the amount of data that needs to be examined. As this researcher put it, ‘I have a 1,000-node cluster computer available, but I have no idea where to start looking.’
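
To get a feel for the scale, here is a quick back-of-the-envelope calculation using the figures quoted above; it shows how quickly the space of possible interactions outgrows the raw data.

```python
from math import comb

snps_per_chip = 200_000
patients = 1_000

raw_points = snps_per_chip * patients      # 200,000,000 individual measurements
pairwise_sites = comb(snps_per_chip, 2)    # ~2.0e10 possible two-way interactions

print(f"raw data points:       {raw_points:,}")
print(f"pairwise interactions: {pairwise_sites:,}")
# Three-way and higher-order interactions grow combinatorially from there,
# which is why brute-force search is hopeless without smarter methods.
```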

Therefore, whoever makes the most effective use of data will have huge scientific, economic, and business advantages. Let me give you a couple of examples of how different groups are devising new ways of dealing with information. Some of these approaches are pioneered by companies from this area, like OSI Pharmaceuticals here in Melville, whose high-throughput techniques, extensive compound library, unattended robotic screening, and data mining tools let them sift through compounds quickly and identify promising candidates far faster than traditional analysis. This approach allowed them to develop their new cancer drug Tarceva, which they brought to market in partnership with Genentech.
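
As a toy illustration of the data-mining step in a screen like this (not OSI’s actual pipeline), the sketch below simply filters and ranks assay results against an assumed activity cutoff; the compound names, field names, and threshold are invented.

```python
# Invented assay results from a hypothetical high-throughput screen.
assay_results = [
    {"compound": "CMP-0001", "percent_inhibition": 12.0},
    {"compound": "CMP-0002", "percent_inhibition": 87.5},
    {"compound": "CMP-0003", "percent_inhibition": 64.3},
]

HIT_THRESHOLD = 60.0  # assumed cutoff for a "promising" compound

# Keep only active compounds, ranked from most to least potent.
hits = sorted(
    (r for r in assay_results if r["percent_inhibition"] >= HIT_THRESHOLD),
    key=lambda r: r["percent_inhibition"],
    reverse=True,
)
for hit in hits:
    print(hit["compound"], hit["percent_inhibition"])
```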

A very different approach is represented by the Calgary Automatic Virtual Environment (or ‘CAVE’) in the Sun Centre for Visual Genomics at the University of Calgary. CAVE is a virtual reality room where someone with goggles can walk through a projection of a molecular model of a new drug, a chemical reaction, or the interior of the human heart. The value of this is that humans evolved to integrate enormous quantities of data visually, and this is still a far more efficient way of absorbing the massive quantities of data involved in complex biological systems.

Genetic programming software

John Koza is a pioneer in the field of genetic programming (GP), which evolves software solutions through a form of machine learning rather than hand-crafted coding. Genetic programming searches a given problem space by mimicking the techniques of evolution, finding solutions when other approaches don’t work, and making those solutions accessible for further analysis and development. On one of his GP websites, Koza describes GP as now producing results that are, as he calls them, ‘human-competitive’: ‘There are now 36 instances where genetic programming has produced a human-competitive result,’ Koza says. ‘… These human-competitive results include 15 instances where genetic programming has created an entity that either infringes or duplicates the functionality of a previously patented 20th-century invention, 6 instances where genetic programming has done the same with respect to a 21st-century invention, and 2 instances where genetic programming has created a patentable new invention. These human-competitive results come from the fields of computational molecular biology, cellular automata, sorting networks, and the synthesis of the design of both the topology and component sizing for complex structures, such as analog electrical circuits, controllers, and antenna.’
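
For readers unfamiliar with the technique, here is a deliberately tiny sketch of the GP idea: candidate programs (small arithmetic expression trees) are generated at random and evolved against a fitness function rather than written by hand. Koza’s systems use crossover, far larger populations, and much richer program representations; this toy version, written under those simplifying assumptions, relies on random generation and subtree mutation only.

```python
import random

# Primitive operations available to the evolved programs.
OPS = {"add": lambda a, b: a + b,
       "sub": lambda a, b: a - b,
       "mul": lambda a, b: a * b}

def target(x):
    """The 'real world' behaviour the programs must fit: x^2 + x + 1."""
    return x * x + x + 1

def random_tree(depth=3):
    """Build a random expression tree over x and small integer constants."""
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else random.randint(-2, 2)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, samples):
    """Lower is better: summed squared error against the target behaviour."""
    total = 0.0
    for x in samples:
        err = evaluate(tree, x) - target(x)
        total += err * err
    return total

def mutate(tree, depth=2):
    """Replace a randomly chosen subtree with a freshly generated one."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth), right)
    return (op, left, mutate(right, depth))

samples = [i / 2 for i in range(-6, 7)]
population = [random_tree() for _ in range(200)]

for generation in range(60):
    population.sort(key=lambda t: fitness(t, samples))
    survivors = population[:50]                        # keep the fittest quarter
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(150)]     # refill by mutation

best = min(population, key=lambda t: fitness(t, samples))
print("best error:", fitness(best, samples))
print("best tree :", best)
```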

So GP is now a functioning reality, not just an interesting pipe dream. I’m about to tell you about a company called Genetics*Squared that is using GP in the pharmaceutical industry, but in the interests of disclosure, I must also tell you that I have worked with this company and own shares in it. This is also why I know so much about it, since the company is privately held and not open for public investment.

Genetics*Squared is a ‘dry’ biotech company that analyzes clinical trial data to produce diagnostics predicting who will respond positively to a given therapy or pharmaceutical. Because they’re not analyzing from the top down, the way a human researcher does, they aren’t limited to conventional solutions or conventional techniques. They literally evolve solutions to problems, and are especially good at multivariate and non-linear problems. G*Squared has already shown that a drug discarded as ineffective by one of the major multinational drug companies might be rescued if potential patients were screened to identify those who would respond positively. G*Squared is now under contract to research two more drugs for this multinational. It has also worked with data from the University of Southern California to identify the specific stages in the progression of bladder cancer through better use of existing data rather than trying to invent better lab tests. Here, G*Squared is combining markers rather than looking for a single factor, producing rules such as: ‘IF Protein X is 5 times Protein Y THEN Stage 3 cancer exists.’ This kind of information would radically improve survival rates for cancer, especially for such sneaky killers as ovarian and pancreatic cancers.
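
The quoted rule form can be illustrated with a short sketch; the marker names, ratio, and patient values below are invented for the example, and are not actual G*Squared output.

```python
# Combined-marker rule of the quoted form:
# "IF Protein X is 5 times Protein Y THEN Stage 3 cancer exists."
def stage_3_rule(protein_x: float, protein_y: float, ratio: float = 5.0) -> bool:
    """True when Protein X expression is at least `ratio` times Protein Y."""
    return protein_y > 0 and protein_x >= ratio * protein_y

sample = {"protein_x": 11.2, "protein_y": 2.0}   # hypothetical assay readings
if stage_3_rule(sample["protein_x"], sample["protein_y"]):
    print("Rule fires: consistent with Stage 3 disease")
else:
    print("Rule does not fire")
```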

In developing this GP technique, Genetics*Squared learned an awful lot about evolution, and in working with cancer drugs they’ve also learned an awful lot about how DNA works. One of the people there observed that Watson and Crick, who won the Nobel Prize for identifying the structure of DNA, only identified the purpose of about 20% of the DNA sequence – the codon portions that program the proteins, which actually do the work within the body. The balance of the DNA strand appeared to be garbage, nonsense syllables with no purpose. But if they had no purpose, why have they been preserved through millions of years of evolution? You would have thought they’d disappear as unnecessary coding. But suppose they’re not garbage, but active programs? My contact believes that the genome is actually a Turing machine, and that the nonsense syllables are recursive, overlapping programs within the strand; in other words, that DNA is a really powerful, flexible, creative, self-replicating, self-assembling, self-repairing computer that uses massively parallel processing to solve incredibly complex real-world problems that we can only think about today.

So how is the genome coded? Putting aside for the moment the question of Who coded the genome, how do you program a mechanism that is so complex? Clearly, you can’t do it by the top-down methods we use to solve problems today, because they are far too inefficient. The answer, in essence, is genetic programming: you let the real world define the fitness function for the organism, then the genotype attempts to evolve a solution to the problems posed by the real world. Or, in this researcher’s words: ‘Natural evolution is the genotype’s attempt to seek a workable phenotype to fit real world conditions.’ Or, ‘Evolution is chaotic because the genotype is experimenting to find a way to fit the needs of the phenotype.’

But while data analysis is a crucial pivot in the future of the life sciences, it isn’t all there is.

Wearable computers and health monitors

If you think about the development of computers, we’ve gone from hand-crafted monstrosities like ENIAC (1946), which filled an entire gymnasium, to mass-produced machines like the IBM System/360 (1964), to desktop computers like the Macintosh II (c. 1987), to handheld computers like the RIM BlackBerry and the Apple iPod. We’ve reached the stage where the only way for computers to get smaller and more portable, yet continue to be usable, is to move in a different direction: wearable computers, which are already emerging today, such as those produced by Xybernaut of Virginia, or the MIThril system developed at the MIT Media Lab.

Simultaneously, we are also seeing diagnostic computers appearing for a range of applications, everything from runners, who use them to gauge their workouts, to diabetics, who use them to continuously monitor their blood sugar. Once we reach the stage where people wear computers the way they wear wristwatches, their computer butlers will be able to detect health problems as they emerge, from such clues as elevated heart rate or blood pressure, small muscle movements, galvanic skin response, or body temperature. Hence, if you are experiencing a heart attack, your computer butler will be able to detect that it is happening and call for help, either indirectly, through your spouse or doctor, or directly, by calling 911. This may emerge most visibly with the elderly at first. Financially, we are going to want the elderly to stay in their homes – which is what they want as well. If we have dynamic monitors that can gauge their health, this will be simpler, and will provide peace of mind to all concerned. A current example that indicates where we’re going is the ‘Health Buddy,’ a computer that coaches individuals managing any of around 45 health conditions. The Health Buddy is currently being used by health-care organizations to keep tabs on more than 5,000 chronically ill patients by relaying data by phone between patient and doctor every day.
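
A much-simplified sketch of how such a ‘computer butler’ might flag trouble from continuously monitored vitals is shown below; the thresholds and the alert step are assumptions made for illustration, not a real medical algorithm.

```python
def check_vitals(heart_rate_bpm: int, systolic_bp: int, skin_temp_c: float) -> bool:
    """Return True if the combined readings look like a possible emergency."""
    tachycardia = heart_rate_bpm > 140     # assumed resting-rate alarm level
    hypertensive = systolic_bp > 180       # assumed blood-pressure alarm level
    feverish = skin_temp_c > 39.5          # assumed temperature alarm level
    return tachycardia and (hypertensive or feverish)

def alert_emergency_contacts():
    # Placeholder: a real device might phone a spouse, a doctor, or 911.
    print("ALERT: abnormal vitals detected, calling for help...")

reading = {"heart_rate_bpm": 156, "systolic_bp": 190, "skin_temp_c": 37.1}
if check_vitals(**reading):
    alert_emergency_contacts()
```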

As they emerge, computer companions and wearable computers, monitoring our health heartbeat-by-heartbeat, will change the way we intercept and respond to health problems. This will lower costs and improve results, because we will treat emerging problems and conditions earlier.

The next computer-related area is the development of computer health records. This is a problematic area, because electronic health records are developing in incompatible ways all over the place, when what would be most valuable, to patients and researchers alike, is a universal format, including a way to strip out individual identifiers to protect personal privacy. This would allow researchers to assess health information across enormous populations, and identify patterns that we currently don’t even suspect. It will allow us to identify the origins of disease, define what is a healthy diet for different genotypes and environments, and recognize emerging epidemics before they become widespread. Indeed, one thing that went almost unnoticed following the terrorist attacks of 9/11 was that the federal government asked large drug store chains to monitor their sales data for unusually high sales of over-the-counter drugs. When someone isn’t feeling well, their first reaction is usually to buy some over-the-counter remedy. If enough people do that in a given community, it could be an early warning of a developing epidemic or of an attack with biological weapons. Data mining this kind of extensive health and genetic information represents an entirely new kind of gold mine, virtually untouched (outside of Iceland), and with enormous potential.
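
As a rough sketch of that surveillance idea, the example below compares one day’s over-the-counter sales against a recent baseline and flags an unusual spike; the figures and the cutoff are invented for illustration.

```python
from statistics import mean, stdev

daily_otc_sales = [310, 295, 322, 305, 298, 315, 301]   # past week, one store chain
today = 470

baseline = mean(daily_otc_sales)
spread = stdev(daily_otc_sales)
threshold = baseline + 3 * spread       # assumed cutoff for an "unusual" day

if today > threshold:
    print(f"Possible outbreak signal: {today} sales vs. baseline {baseline:.0f}")
else:
    print("Sales within the normal range")
```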

Longevity Research

And finally, consider longevity research. If you think about what we’re doing with research into the health sciences, and where we’re going with it, we are, one-by-one, picking off the causes of death through cure or treatment. Beyond this, though, there are researchers looking at aspects of aging itself as if it were a disease that could be treated or cured. As a result, there are some groups that say that life expectancy could increase by 50% over the next 20 years, which would put it well beyond the century mark. And then there are the extremists, like Raymond Kurzweil. In a recent book he co-authored, Fantastic Voyage, his central thesis is that if you can survive the next 50 years, then you can live forever – or at least as long as your money holds out.

I can tell you, from the reaction of my mostly-boomer audiences, that the idea of postponing death is very appealing, but the idea of curing it completely produces very mixed feelings. In fact, the whole field of the life sciences is going to bring with it enormous ethical, moral, and practical challenges outside the scope of normal research, for which we have no good answers and no historical precedents. For example, Social Security is already headed for the rocks financially. But what happens if people start living to 120, or even 200? Are we entitled to 135 years of retirement based on 45 years of work? We’ve never had to ask this kind of question before – but we will be forced to in future. This is going to be one of the most complicated aspects of being a researcher in this field: grappling with complex, shades-of-gray, non-research-related decisions.

What’s ahead?

Peter Drucker once said that every important new technology goes through two stages of development that catch people by surprise. The first is the initial, base-building period, when all that people hear about the new technology is the hype associated with it. When the hype fails to materialize quickly enough, people get bored with the technology and dismiss it as a bust. To a certain extent, that’s what happened with the dot-coms, and is currently happening with nanotechnology. But the second stage occurs once the technology has gotten past the knee of the curve, and developments start piling in thick and fast. At that point, everyone underestimates the potential of the technology, and is surprised again. Look at how quickly Google emerged, seemingly from nowhere, faster than Microsoft, Apple, or IBM before it.

Remember that I said earlier the Human Genome Project would eventually be remembered as the knee of the curve. I can’t promise you that every new development, and every promising biotech, imaging, bioinformatics, medical device, or pharmaceutical company, is going to produce only positive results. That never happens. But I can promise you that we are going to look back at this period in history and wonder how we could have so badly underestimated the scale of the changes to come. We have no real feel for the size, scale, or scope of the changes ahead of us. We are standing on the edge of the biggest shift that humanity has ever experienced, one that will shape and shake our world in ways beyond our imagining. And many of those tremors are going to originate with the people in this room.

Thank you.

by futurist Richard Worzel, C.F.A.
© Copyright, IF Research, June 2006