Inside DeepMind's epic mission to solve science's trickiest problem

DeepMind's AI has beaten chess grandmasters and Go champions. But founder and CEO Demis Hassabis now has his sights set on bigger, real-world problems that could change lives. First up: protein folding

Demis Hassabis – former child chess prodigy, recipient of a double first at the University of Cambridge, five-time World Mind Sports Olympiad champion, MIT and Harvard alumnus, games designer, teenage entrepreneur, and co-founder of the artificial intelligence startup DeepMind – is dressed in a yellow helmet, a hi-viz jacket and worker’s boots. Raising his hand to shield his eyes, he gazes across London from a rooftop in King’s Cross. The view is largely uninterrupted in every direction across the capital, which is bathed in spring sunshine. Hassabis crosses the paved roof and, having used his phone to determine the direction, looks northwards to see if he can spot Finchley, where he spent his childhood. The suburb is lost behind trees on Hampstead Heath, but he is able to make out the incline leading to Highgate, where he now lives with his family.

He is here to inspect what will be the new headquarters of DeepMind, the startup he founded in 2010 with Shane Legg, a fellow researcher at University College London, and childhood friend Mustafa Suleyman. Currently the building is a construction site, ringing with the relentless percussion of hammering, drilling and grinding – there are 180 contractors on-site today and this number will rise to 500 at the peak of the build. Due to open in mid-2020, the site represents, literally and figuratively, a new beginning for the company.

“Our first office was on Russell Square, a little ten-person office at the top of a townhouse next to the London Mathematical Society,” Hassabis recalls, “which is where Turing gave his famous lectures.” Alan Turing, the British pioneer of computing, is a totemic figure for Hassabis. “We’re building on the shoulders of giants,” Hassabis says, mentioning other pivotal scientific figures – Leonardo da Vinci, John von Neumann – who have made dramatic breakthroughs.

The location of the new headquarters – north of King’s Cross railway station in what has recently become known as the Knowledge Quarter – is telling. DeepMind was founded at a time when the majority of London startups submitted to the gravitational influence of Old Street. But Hassabis and his co-founders had a different vision: to "solve intelligence" and develop AGI (artificial general intelligence) – AI that can be applied in multiple domains. Thus far, this has been pursued largely through building algorithms that are able to win games – Breakout, chess and Go. The next steps are to apply this to scientific endeavour in order to crack complex problems in chemistry, physics and biology using computer science.

“We’re a research-heavy company,” Hassabis, 43, says. “We wanted to be near the university,” by which he means UCL – University College London – where he was awarded a PhD for his thesis, The Neural Processes Underpinning Episodic Memory. “That’s why we like being here, we’re still near UCL, the British Library, the Turing Institute, not far from Imperial…”

A few floors down, Hassabis inspects one of the areas that he’s most excited about, which will house a lecture theatre. With contentment he considers blueprints and renderings of what the space will look like.

Towards the north-east corner of the building he peers into a large void encompassing three floors, which will house the library. The space will eventually contain the feature that Hassabis seems most eager to see in its fully realised form: a grand staircase shaped like a double helix, which is in the process of being manufactured in sections. “I wanted to remind people of science and to make it part of the building,” he says.

Hassabis and his co-founders are aware that DeepMind is best known for its breakthroughs in machine learning and deep learning that have resulted in highly publicised events in which neural networks combined with algorithms have mastered computer games, beaten chess grandmasters and caused Lee Sedol, the world champion of Go – widely agreed to be the most complex game man has created – to declare: “From the beginning of the game, there was not a moment in time when I thought that I was winning.”

In the past, machines playing games against humans demonstrated characteristics that made the algorithm apparent: the style of play was relentless and rigid. But in the Go challenge, the DeepMind algorithm AlphaGo beat Sedol in a way that appeared to have human characteristics. One outlandish move – number 37 in game two – drew gasps from the live audience in Seoul and baffled millions watching online. The algorithm was playing with a freedom that, to human eyes, might be considered creative.

For Hassabis, Suleyman and Legg, if the first nine years of DeepMind have been defined by proving its research into reinforcement learning – the idea of agent-based systems that are not only trying to make models of their world and recognise patterns (as deep learning does) but also actively making decisions and trying to reach goals – then the proof points offered by gameplay will define the next ten years: namely, to use data and machine learning to solve some of the hardest problems in science. According to Hassabis, the next steps for the company will be based on how deep learning can enable reinforcement learning to scale to real-world problems.

“The problem with reinforcement was it was always working on toy problems, little grid worlds,” he says. “It was thought that maybe this can’t scale to messy, real-world problems – and that’s where the combination really comes in.”

For DeepMind, the emergence of the new headquarters is symbolic of a new chapter for the company as it turns its research heft and compute power to try to understand, among other things, the building blocks of organic life. In so doing, the company hopes to make breakthroughs in medicine and other disciplines that will significantly impact progress in a number of fields. “Our mission should be one of the most fascinating journeys in science,” Hassabis says. “We’re trying to build a cathedral to scientific endeavour.”

When studying at UCL and later at MIT, Hassabis found that interdisciplinary collaboration was a hot topic. He recalls that workshops would be organised involving different disciplines – neuroscience, psychology, mathematics and philosophy, for instance. There would be a couple of days of talks and debates before the academics returned to their departments, vowing that they must gather more regularly and find ways to collaborate. The next meeting would be a year later – grant applications, teaching assignments and the churn of research and scholarly life would get in the way of co-operation.

“Interdisciplinary research is hard,” Hassabis says. “Say you get two world-leading experts, in maths and genomics – there obviously could be some crossover. But who is going to do the work to understand the other person's field, their jargon, what their real problem is?”

Identifying the right question to ask, why that question hasn’t been answered – and what, if it’s not been answered, the tricky thing about it is – may seem, to outsiders, relatively straightforward. But scientists, even in the same discipline, don’t always see their work in the same way. And it’s notoriously hard for researchers to add value to other disciplines. It’s even harder for researchers to find a joint question that they might answer.

The current DeepMind headquarters – two floors of Google's King’s Cross building – has become increasingly populous in the past couple of years. There are six or seven disciplines represented in the company’s AI research alone, and it has been hiring specialists in mathematics, physics, neuroscience, psychology, biology and philosophy as it broadens its remit.

“Some of the most interesting areas of science are in the gaps between, the confluences between subjects,” Hassabis says. “What I've tried to do in building DeepMind is to find 'glue people', those who are world class in multiple domains, who possess the creativity to find analogies and points of contact between different subjects. Generally speaking, when that happens, the magic happens.”

One such glue person is Pushmeet Kohli. The former director at Microsoft Research leads the science team at DeepMind. There is much talk in artificial intelligence circles of the "AI winter" – a period where there was little tangible progress – having ended during the past decade. The same sense of movement is now also true of protein folding, the science of predicting the shape of what biologists consider to be the building blocks of life.

Kohli has brought together a team of structural biologists, machine-learning experts and physicists in order to address this challenge, widely recognised as one of the most important questions in science. Proteins are fundamental to all life – they account for much of the structure and function of tissues and organs at a molecular level. Each is composed of chains of amino acids, and the sequence of these determines the shape of the protein, which in turn determines its function.

“Proteins are the most spectacular machines ever created for moving atoms at the nanoscale and often do chemistry orders of magnitude more efficiently than anything that we've built,” says John Jumper, a research scientist at DeepMind who specialises in protein folding. “And they're also somewhat inscrutable, these self-assembling machines.”

Proteins arrange atoms at the angstrom scale, a unit of length equivalent to one ten-billionth of a metre; a deeper understanding would offer scientists a much more substantial grasp of structural biology. For instance, proteins are necessary for virtually every function within a cell, and incorrectly folded proteins are thought to be contributing factors to diseases such as Parkinson’s, Alzheimer’s and diabetes.

“If we can learn about the proteins that nature has made, we can learn to build our own,” Jumper says. “It’s about getting a really concrete view into this complex, microscopic world.”

What has made protein folding an attractive puzzle for the DeepMind team has been the widespread availability of genomic data sets. Since 2006 there has been an explosion in DNA data acquisition, storage, distribution and analysis. Researchers estimate that by 2025 two billion genomic data sets may have been analysed, requiring 40 exabytes of storage capacity.

“It's a nice problem from a deep learning perspective, because at enormous expense, enormous complexity and enormous time [commitment], people have generated this amazing resource of proteins that we already understand,” Jumper says.

While progress is being made, scientists urge against false exuberance. The esteemed American molecular biologist Cyrus Levinthal expressed the complexity of the challenge in a bracing manner, noting that it would take longer than the age of the universe to enumerate all the possible configurations of a typical protein before reaching the right 3D structure. “The search space is huge,” says Rich Evans, a research scientist at DeepMind. “It’s bigger than Go.”
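Levinthal’s argument can be sketched with a back-of-envelope calculation. The figures below – a 100-residue protein, roughly three stable conformations per peptide bond, and a generous sampling rate of ten trillion conformations per second – are illustrative assumptions chosen to make the point, not numbers from DeepMind or from Levinthal’s original note:

```python
# Illustrative back-of-envelope version of Levinthal's enumeration argument.
# All inputs are assumed, round numbers for the sake of the estimate.

residues = 100                     # length of a typical small protein
conformations_per_bond = 3         # assumed stable conformations per peptide bond
sampling_rate = 1e13               # conformations tried per second (very optimistic)
age_of_universe_s = 4.3e17         # ~13.8 billion years, in seconds

# A chain of N residues has N-1 peptide bonds, so the conformational
# space grows exponentially with chain length.
total_conformations = conformations_per_bond ** (residues - 1)
seconds_needed = total_conformations / sampling_rate

print(f"Possible conformations:   {total_conformations:.2e}")
print(f"Seconds to enumerate all: {seconds_needed:.2e}")
print(f"Ages of the universe:     {seconds_needed / age_of_universe_s:.2e}")
```

Even with these forgiving assumptions, exhaustive search would take vastly longer than the age of the universe – which is why real proteins, folding in milliseconds, cannot be searching that space blindly, and why prediction methods must be far cleverer than enumeration.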

Nevertheless, a milestone in the protein-folding journey was reached in December 2018 at the CASP (Critical Assessment of Techniques for Protein Structure Prediction) competition in Cancun, Mexico – a biennial challenge that provides an independent way of plotting researchers’ progress. The aim for competing teams of scientists is to predict the 3D structure of proteins from their amino-acid sequences – structures that have been solved but not yet made public. Independent assessors verify the predictions.

The protein-folding team at DeepMind entered as a way of benchmarking AlphaFold, the algorithm it had developed over the previous two years. In the months leading up to the conference, the organisers sent data sets to the team members in King’s Cross, who sent back their predictions with no sense of how they would fare. In total, there were 90 protein structures to predict – some were template-based targets, which use previously solved proteins as guidance; others had to be modelled from scratch. Shortly before the conference, the team received the results: AlphaFold was, on average, more accurate than the other entrants. On some metrics DeepMind was significantly ahead: for the protein sequences modelled from scratch – 43 of the 90 – AlphaFold made the most accurate prediction for 25 proteins. The winning margin was striking: its nearest rival managed three.

Mohammed AlQuraishi, a fellow at the Laboratory of Systems Pharmacology and the Department of Systems Biology at Harvard Medical School, attended the event, and learned about the DeepMind approach before the results were published. “Reading the abstract, I didn't think ‘Oh, this is completely new’,” he says. “I accepted they would do pretty well, but I wasn't expecting them to do as well as they did.”

According to AlQuraishi, the approach was similar to that of other labs, but what distinguished the DeepMind process was that they were able to “execute better”. He points to the strength of the DeepMind team on the engineering side.

“I think they can work better than academic groups, because academic groups tend to be very secretive in this field,” AlQuraishi says. “And so, even though the ideas DeepMind had in their algorithm were out there and people were trying them independently, no one had brought it all together.”

AlQuraishi draws a parallel with the academic community in machine learning, which has undergone something of an exodus in recent years to companies like Google Brain, DeepMind and Facebook, where organisational structures are more efficient, compensation packages are generous, and there are computational resources that don’t necessarily exist at universities.

“Machine learning computer science communities have sort of really experienced that over the last four or five years,” he says. “Computational biology is just now waking up to this new reality.”

This echoes the explanation given by the founders of DeepMind when they sold to Google in January 2014. The sheer scale of Google's computational network would enable the company to move research forward much more quickly than if it had to scale organically, and the £400 million cheque enabled the startup to hire world-class talent. Hassabis describes a strategy of targeting individuals who have been identified as a good fit for specific research areas. “We have our roadmap that informs what subject areas, sub-fields of AI or neuroscience will be important,” he says. “And then we go and find the world's best person who fits in culturally as well.”

“So far as a company like DeepMind can make a dent, I think protein folding is a very good place to start, because it's a problem that’s very well defined, there’s useable data, you can almost treat it like a pure computer science problem,” AlQuraishi says. “That's probably not true in other areas of biology. It's a lot messier. So, I don't necessarily think that the success that DeepMind has had with protein folding will translate automatically to other areas.”

For a research company, DeepMind is big on project management. Every six months, senior managers examine priorities, reorganise some projects, and encourage teams – especially engineers – to move between endeavours. Mixing of disciplines is routine and intentional. Many of the company’s projects take longer than six months – generally in the range of two to four years. But, as much as DeepMind’s messaging is consistently around its research, it is now a subsidiary of Alphabet, Google's parent company and the world’s fourth most valuable company. While the expectation from the academics in London is that they are involved in long-term, ground-breaking research, executives in Mountain View, California, will naturally have an eye on ROI – return on investment.

“We care about products in the sense that we want Google and Alphabet to be successful and to get benefit out of the research we're doing – and they do, there are dozens of products now with DeepMind code and technology in them all around Google and Alphabet – but the important thing is that it’s got to be a push, not a pull,” Hassabis says. DeepMind for Google, led by Suleyman, comprises about one hundred people, mostly engineers who translate the company’s pure research into applications that are productised. For example, WaveNet, a generative text-to-speech model that mimics the human voice, is now embedded in most Google devices, from Android to Google Home, and has its own product team within Google.

“A lot of research in industry is product led,” Hassabis says. “The problem with that is that you can only get incremental research. [That’s] not conducive to doing ambitious, risky research, which, of course, is what you need if you want to make big breakthroughs.”

In conversation, Hassabis talks rapidly, often punctuating the end of a sentence with the interrogative "right?", guiding the listener through a sequence of observations. He makes frequent, lengthy digressions into various tributaries – philosophy (Kant and Spinoza are favourites), history, gaming, psychology, literature, chess, engineering and multiple other scientific and computational domains – but doesn’t lose sight of his original thought, often returning to clarify a remark or reflect on an earlier comment.

Much like the 300-year vision of Masayoshi Son, the founder of SoftBank – the Japanese multinational with large stakes in many of the world's dominant technology companies – Hassabis and the other founders have a “multi-decade roadmap" for DeepMind. Legg, the company’s chief scientist, still has a hard copy of the initial business plan circulated to potential investors. (Hassabis has lost his.) Legg occasionally reveals it at all-hands meetings to demonstrate that many of the approaches the founders were thinking about in 2010 – learning, deep learning, reinforcement learning, using simulations, ideas of concepts and transfer learning, and using neuroscience, memory and imagination – are still core parts of its research programme.

During its infancy, DeepMind had a single web page featuring just the company logo. There was no address, no phone number, no jaunty "about us" information. To make hires, the founders had to rely on personal contacts for people who already knew they were “serious people and serious scientists and had a serious plan”, as Hassabis puts it.

“With any startup, you're really asking people to trust you as management,” he says. “But [with DeepMind] it’s even more because you're basically saying you're going to do this in a completely unique way that no one's ever done before, and a lot of traditional, top scientists would have said was impossible: ‘You just cannot organise science in this fashion.’”

How scientific breakthroughs occur is as unknown as some of the problems that researchers are trying to solve. In academia, great minds are gathered together in institutions to undertake research that’s iterative, often with uncertain outcomes. Progress is usually painstaking and slow. Yet, in the private sector, supposedly free of restraint and with access to highly compensated management consultants, productivity and innovation are also declining.

In February 2019, Stanford economist Nicholas Bloom published a paper demonstrating declining productivity in a wide-ranging number of sectors. “Research effort is rising substantially while research productivity is declining sharply,” Bloom wrote. “A good example is Moore’s Law. The number of researchers required today to achieve the famous doubling every two years of the density of computer chips is more than 18 times larger than the number required in the early 1970s. Across a broad range of case studies at various levels of (dis)aggregation, we find that ideas – and in particular the exponential growth they imply – are getting harder and harder to find.”

Hassabis mentions the billions invested into research by Big Pharma: driven by quarterly earnings reports, the industry has become more conservative as the costs of failure have risen. According to a report by innovation foundation Nesta in 2018, over the past 50 years biomedical R&D productivity has steadily fallen – despite significant increases in public and private investment, new drugs cost much more to develop. According to the report, “the exponentially increasing cost of developing new drugs is directly reflected in low rates of return on R&D spending. A recent estimate puts this rate of return at 3.2 per cent for the world’s biggest drug companies; substantially less than their cost of capital.” Similarly, research from Deloitte estimated that R&D returns in biopharma had declined to their lowest rate in nine years, from 10.1 per cent in 2010, to 1.9 per cent in 2018.

“If you look at the CEOs of most of the big pharma companies, they're not scientists, they come from the finance department, or the marketing department,” Hassabis says. “What does that say about the organisation? It means that what they're going to do is try and squeeze more out of what has already been invented, cut costs or market better, not really invent new things – which is much more risky. You can’t put that down so easily in a spreadsheet. That’s not the nature of blue sky thinking… that's not how you do it if you're trying to land the rocket on the moon.”

For many startup founders, there is a degree of serendipity to their mission – a problem they’ve encountered that they decided to solve, a chance encounter with a co-founder or investor, an academic advocate. This is not the case for Hassabis, who has purposefully made a series of decisions – some very early in life – that would lead to DeepMind. “It’s what I spent my whole life preparing for,” he says. “From games design to games playing to neuroscience to programming, to studying AI in my undergrad, to going to a lot of the world's top institutes, doing a PhD as well as running a start-up in my earlier career… I’ve tried to use every scrap of experience. I've consciously picked each of those decision points to gather that piece of experience.”

Add to that list being a CEO, which is now his day job. He has another role – that of researcher – and, in order to do both, he structures his time into distinct periods so that he can balance the running of the business with his academic interests. Having played the role of executive during the day, he returns home around 7.30pm to have dinner with his young family before embarking on a “second day” around 10.30pm, which will generally end around 4.00am to 4.30am.

“I love that time,” he says. “I’ve always been a nocturnal person, since I was a kid. Everything is quiet in the city and the house and I find it very conducive to thinking, reading, writing these kinds of things. So that's when I mostly keep up to speed with the scientific literature. Or maybe I'll be writing or editing a paper, or thinking up some new algorithmic idea, or thinking about something strategic, or be investigating some area of science that AI could be applied to.”

He listens to music when he works. The nature of the music – from classical to drum and bass – depends "on the emotion I’m trying to evoke in myself. It depends on whether I’m trying to be focused or inspired.” There are a couple of rules: there can be no vocals, otherwise he will try and listen to the lyrics; and there needs to be a level of acquaintance with the music. “It needs to be something I’m familiar with, but not too familiar with. And it can’t be a new piece of music because that is too disturbing for the brain. You’ve got to break a tune in and then you can use it.”

Hassabis says that he would like to spend 50 per cent of his time on direct research. As part of this, in April 2018, he hired Lila Ibrahim, a Silicon Valley veteran who spent 18 years at Intel before becoming chief of staff at Kleiner Perkins Caufield & Byers – one of the most established venture capital firms in the Valley – and then moving to the startup Coursera. Ibrahim is taking on many of Hassabis’s managerial tasks – he says his direct reports have dropped from 20 people to six. Ibrahim describes her decision to join DeepMind as “a moral calling,” prompted by conversations she had with Hassabis and Legg regarding the establishment of its Ethics & Society initiative, which is attempting to establish standards around the application of the technology.

“I think being based in London brings a slightly different perspective,” she says. “It would have been very different, I think, had DeepMind been headquartered in Silicon Valley. London feels like there's so much more humanity… the art, the cultural diversity. There’s also what the founders brought in from the start, and the type of people who choose to work at DeepMind brought in certain ways of doing things, a mindset.”

One incident perhaps offers insight into the approach Ibrahim describes. Hassabis was a chess prodigy. Starting at the age of four, he rose up the rankings until, aged 11, he found himself competing against a Danish master at a big international competition in the town hall of a village outside Liechtenstein.

After playing for close to twelve hours, the endgame approached. It was a scenario that Hassabis had never seen before – he had a queen, while his opponent had a rook, bishop and knight, but it was still possible for Hassabis to force a draw if he could keep his opponent’s king in check. Hours passed, the other games ended and the hall emptied. Suddenly, Hassabis realised that his king had been trapped, meaning that checkmate would be forced. Hassabis resigned.

“I was really tired,” he says. “We were 12 hours in or something and I thought somehow I must have made a mistake and he's trapped me.”

His opponent – a man Hassabis recalls being in his 30s or 40s – stood up. His friends were standing around him and he laughed and gestured at the board. Hassabis realised that he had resigned unnecessarily – the game should have been a draw.

“All I needed was to sacrifice my queen,” he says. “This was his last roll of the dice. He'd been trying for hours to outmanoeuvre me. And that was his final cheap trick. And it worked. Basically, I had nothing to show for 12 hours of slog.”

Hassabis recalls that, at that moment, he had an epiphany: he questioned the purpose of the brilliant minds in the room competing with each other to win a zero-sum game. He would go on to play the game at the highest level, captaining his university team, and still talks of his continued love of complex games, but the experience led to him channelling his energy into something beyond games. “The reason that I could not become a professional chess player,” he says, “is that it didn't feel productive enough somehow.”

Even as the company expands into its new headquarters, Hassabis maintains that DeepMind is still a startup, albeit one that is competing on a world stage – “China is mobilised and the US… there are serious companies trying to do these things,” he says. Indeed, the US and China are both positioning themselves to standardise the field to their own advantage, both commercially and geopolitically. He mentions several times that, despite having made progress (“small stepping stones on the way”), there is still a long way to go in DeepMind’s bigger mission of solving intelligence and building AGI. “I still want us to have that hunger and the pace and the energy that the best startups have,” he says.

Innovation is hard and often singular. Building the processes and culture of an organisation that will enable it to “make a dent in the universe,” as Steve Jobs told the team building the Macintosh computer – and doing so in multiple fields or with more than one product – is something that few companies or institutions achieve. As DeepMind grows, it will be the role of the founders to pursue the road ahead, while keeping an eye on the founding principles of a business focused on what is likely to be the most transformative technology of the coming years, one fraught with possible dangers, as well as opportunities.

“You’re going to hit a lot of rough days and I think, at the end of the day, trying to make money or whatever isn't going to be enough to get you through the real pain points,” Hassabis says. “If you have a real passion and you think what you're doing is really important, then I feel like that will carry you through.”

This article was originally published by WIRED UK