Is enterprise genomics good enough yet?

Genomics will make the dream of targeted therapies a reality, which will have a massive health and economic impact.

Still, most large life science enterprises, from pharma to providers, have yet to fully adopt genomics as part of their toolkit for clinical trials, development, commercialization and diagnostics.

When will these organizations adopt genomics? Is genomics good enough yet to be deployed at this kind of scale? The answer depends in part on when the Genomics Tech Stack becomes modular.

Illumina is often called the Intel of genomics: it sits at the core of any genomics application and, like the processor, has driven rapid innovation in both the hardware and the software that use it.

Illumina’s success has enabled a diverse set of genomics applications: prenatal testing, agriculture, drug development, clinical trial recruitment, oncology, nutrition and fitness, and many more yet to be invented.

Yet there is a key difference: While you can build genomics applications without being Illumina machine “compatible,” you cannot build software for Intel processors without supporting Intel’s architecture.

Illumina does not sell an engineering building block, but rather a mechanism to measure aspects of the observable world. Software built to use Intel processors has to be strictly compatible with Intel chip architecture, whereas the particulars of Illumina’s chemistry for capturing the state of one’s genome are largely irrelevant when building an application that uses DNA sequencing.


(Caveat: Sequencing errors often occur in non-random ways as a consequence of the chemistry used in the sequencing process, so sometimes it is useful to understand the assay.)

This means that Illumina may not have as deep a moat to defend its business: It is largely only as good as its latest product. Intel, of course, has benefited from increasing returns to scale in the form of network effects connecting Intel chips, the computers that use them and the software built to be compatible with them.

For Illumina, those effects are less obvious. To be clear, this has nothing to do with Illumina’s technology or strategy; it is simply the difference between science and engineering: There are many independent ways to measure a natural phenomenon, but usually far fewer ways to replicate something you invented that no one else knows how to build.

All this may not be good for the long-term enterprise value of any sequencing technology (though Illumina is doing fine; its market cap sits between $20 billion and $30 billion), but it is really good for pharma and diagnostic enterprises deploying genomics.

Full control over what they do with DNA data once it is sequenced, independent of the sequencing provider, lets companies architect their use of genomics technology more tightly and potentially capture more value than the sequencer makers do.

Using genomics to develop targeted therapies, companion diagnostics or other valuable applications involves more than just setting up a sequencer and pressing play.

DNA sequencing “reads,” the raw output of a sequencing machine, need to be processed to properly identify variants, the variable differences among genomes that are the focus of any further application or study. That’s the first step.
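
To make that step concrete, here is a minimal sketch of a reads-to-variants pipeline. It assumes a common open-source toolchain (bwa, samtools, bcftools) that this article does not prescribe, and the file paths are purely illustrative.

```python
# Minimal sketch of the reads-to-variants step, assuming a common open-source
# toolchain (bwa, samtools, bcftools). These stand in for whatever aligner and
# variant caller an enterprise pipeline actually uses; paths are illustrative.
import subprocess

REF = "reference.fa"                                   # reference genome
READS = ("sample_R1.fastq.gz", "sample_R2.fastq.gz")   # raw sequencer output

def run(cmd: str) -> None:
    """Run one pipeline stage in the shell and raise if it fails."""
    subprocess.run(cmd, shell=True, check=True)

# 1. Align the raw reads to the reference genome and sort the alignments.
run(f"bwa mem {REF} {READS[0]} {READS[1]} | samtools sort -o sample.sorted.bam -")
run("samtools index sample.sorted.bam")

# 2. Call variants: the positions where this sample differs from the reference.
run(f"bcftools mpileup -f {REF} sample.sorted.bam | bcftools call -mv -Oz -o sample.vcf.gz")
```

In an enterprise setting, each of these stages would typically run as a managed, versioned service rather than a local script, but the data flow is the same.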

Then the variants must be filtered and interpreted based on all the context-specific information from studies, reports and experiments that you can get your hands on. In an academic context, much of this has been done by clinical labs and researchers. In the enterprise, all of these tasks, including the human-driven interpretation and assessment, must be deployed in scalable, reproducible processes.
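
As a rough illustration of the filtering step, the sketch below keeps only rare variants that have not already been classified as benign. The Variant fields, annotation values and frequency threshold are hypothetical; real pipelines draw this context from curated population and clinical databases.

```python
# Illustrative filtering step: narrow called variants down to the ones worth
# human interpretation. Field names, flags and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Variant:
    chrom: str
    pos: int
    ref: str
    alt: str
    population_freq: float   # frequency of the alt allele in a reference population
    clinical_flag: str       # e.g. "pathogenic", "benign", "unknown"

def keep_for_review(v: Variant, max_freq: float = 0.01) -> bool:
    """Keep rare variants that are not already classified as benign."""
    return v.population_freq <= max_freq and v.clinical_flag != "benign"

def filter_variants(variants):
    """Return the subset of called variants that merits human interpretation."""
    return [v for v in variants if keep_for_review(v)]

# Toy example: only the rare, unclassified variant survives the filter.
calls = [
    Variant("chr1", 123456, "C", "T", population_freq=0.0004, clinical_flag="unknown"),
    Variant("chr2", 654321, "G", "A", population_freq=0.35, clinical_flag="benign"),
]
print(filter_variants(calls))
```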

There’s more: The processes have to be run in a compliant and secure computational environment.

The question facing many life sciences enterprises is whether the Genomics Tech Stack has reached the “good enough” stage, where modular components can be swapped in and out to build scalable workflows, or whether enterprises can only trust fully integrated, vertically assembled systems.

Going “full stack” versus employing a modular system is an age-old argument, especially in tech. Clay Christensen proposes a theory that integrated systems are always better until they aren’t: When products become good enough for the market, integrated systems actually overserve most customers and modular products become cheaper to produce and distribute because each part of the supply chain becomes standardized and optimized.

Apple obviously did well by controlling everything from hardware to software to distribution, but Microsoft did perhaps better in the enterprise context by focusing on a very valuable slice of the modular IT ecosystem and letting other modular systems develop in the value network adjacent to them.

The argument becomes somewhat pointless at the extremes: No company is actually “full stack” if you define that broadly enough. Harry’s owns a factory to make razors, but unless there’s a very big new funding round in the works they are unlikely to roll their own aluminum mines, ships and planes to really control the full supply chain that affects the customer experience. And even “full stack” companies don’t make their own operating systems, processors or computers.

Nonetheless, for a business, it is crucial to evaluate the ROI of vertically integrated solutions versus modular external substitutes. Clearly, there are advantages to both. Internally there is more control and customization, but external tools benefit from scale: AWS is such a great resource for its customers because Amazon achieves greater scale than its customers would individually.

Like any business that survives and grows, AWS enjoys increasing returns to scale: It is more efficient and profitable at scale, which translates into a better customer experience and product offering.

Genomics is showing signs of becoming more modular. Each component of the stack can be integrated with upstream and downstream components. The enablers are sequencing machines that are agnostic about the downstream processes; cloud computing, which allows for the deployment of interconnected software solutions; and APIs that let those solutions integrate and transmit data.
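
As a sketch of what such a modular hand-off could look like, the snippet below pushes a finished sequencing run to cloud object storage and then notifies a downstream analysis service through its API. The bucket name and the REST endpoint are hypothetical stand-ins, not products named in this article.

```python
# Sketch of a modular hand-off between stack components. The bucket and the
# downstream API endpoint are hypothetical; only the pattern matters.
import boto3
import requests

RUN_FILE = "run_1234.fastq.gz"                          # raw sequencer output (illustrative)
BUCKET = "acme-genomics-raw"                            # hypothetical S3 bucket
ANALYSIS_API = "https://analysis.example.com/v1/jobs"   # hypothetical downstream endpoint

# 1. Push the finished run to object storage. The sequencer side doesn't need
#    to know anything about which component consumes the data next.
boto3.client("s3").upload_file(RUN_FILE, BUCKET, RUN_FILE)

# 2. Tell the downstream variant-calling service where to find it. Any module
#    that accepts this payload could be swapped in without changing step 1.
resp = requests.post(ANALYSIS_API, json={"input": f"s3://{BUCKET}/{RUN_FILE}"})
resp.raise_for_status()
print("Analysis job accepted:", resp.json())
```

Because the only contract between the modules is a storage location and an API payload, either side can be replaced by a competing component without touching the other.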

An enterprise deploying a scalable, secure genomics solution in 2016 can now choose from an array of cutting-edge tools that are compatible with each other. It can integrate its sequencers and lab automation systems with cloud providers like AWS, Google Cloud and Microsoft Azure, run variant calling pipelines on platforms such as DNAnexus, Seven Bridges or Real Time Genomics, and filter and interpret variants against reference data using software like Omicia, Bina and Ingenuity.

As the history of tech shows, modularity in the industry value network makes development easier, faster and cheaper. If the Genomics Tech Stack becomes modular, where components can be plugged in and swapped out, 2016 may be the year that large pharma companies, biotech, CROs and health providers deploy genomics at internet scale.

Regeneron already has an initiative underway to sequence 100,000 people, and AstraZeneca recently announced a project to sequence, analyze and research 2 million patients.

Building pipelines that work at scale is no small feat, and the lack of them is a major barrier to rapid innovation. Regulation and compliance are must-haves, and software and data systems must be reproducible to be relevant. Reproducibility and automation are especially difficult for homespun systems whose components are open source, unsupported and/or non-existent.
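
One illustrative way to make a run reproducible, assuming nothing beyond standard tooling, is to emit a manifest that pins the exact inputs and tool versions used. The manifest format and the tools queried below are assumptions for the sketch, not an industry standard.

```python
# Sketch of a reproducibility manifest: pin the exact input data and tool
# versions behind a result so the run can be audited and repeated.
import hashlib
import json
import subprocess

def sha256(path: str) -> str:
    """Checksum an input file so the exact data that was analyzed is recorded."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def tool_version(cmd) -> str:
    """Capture the version string a pipeline component reports about itself."""
    return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

manifest = {
    "inputs": {p: sha256(p) for p in ["sample_R1.fastq.gz", "sample_R2.fastq.gz"]},
    "tools": {
        "samtools": tool_version(["samtools", "--version"]),
        "bcftools": tool_version(["bcftools", "--version"]),
    },
}

with open("run_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```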

Once large enterprises in life sciences are given the tools to deploy genomics at scale, the pace of R&D will dramatically increase and the results could change the industry and medicine as a whole. Genomics will re-invent and re-invigorate pharma and biotech business and has the promise of significantly improving healthcare. The first step is reproducible and validated software systems — and a modular Genomics Tech Stack is getting us there.