Hundreds of tech insiders subscribe to Big Technology's premium tier for more reporting and to support our independent journalism. Please consider subscribing for just $8 per month.

Google DeepMind CEO Demis Hassabis: The Path To AGI, LLM Creativity, And Google Smart Glasses

Q&A with the Google AI head and Nobel laureate on the state of artificial intelligence today, and where it's heading.

Demis Hassabis is refreshingly measured when discussing the path toward AGI, or human-level artificial intelligence. The Google DeepMind CEO and recent Nobel Prize winner doesn't believe any research house will reach AGI this year, and he's quick to call out those who hype the technology in the name of business goals. But that doesn't mean he's not ambitious. In a wide-ranging interview at Google DeepMind's offices in London, Hassabis laid out his vast plans for building smarter AIs, putting Google's assistants in smart glasses, and using AI to develop virtual cells to attack disease. He also spoke plainly about the challenge of getting LLMs to be creative, and how recent models have tried to deceive their evaluators.

You can listen to (or watch) our full conversation on Apple Podcasts, Spotify (now with video), your podcast app of choice, or YouTube. And the full transcript of our conversation is below, edited lightly for length and clarity. In this Q&A, Hassabis delivers a masterful deep dive into the state of artificial intelligence today and what's to come, and I hope you give it a listen or read:

Alex Kantrowitz: Every AI research house is working toward building AGI, or human-level artificial intelligence. Where are we right now in the progression, and how long will it take to get there?

Demis Hassabis: There's been an incredible amount of progress over the last few years, and actually, over the last decade plus. We've been working on this for more than 20 years, and we've had a consistent view about AGI being a system that's capable of exhibiting all the cognitive capabilities humans can. I think we're getting closer and closer, but we're still probably a handful of years away.

What is it going to take to get there?

The models today are pretty capable, but there are still some missing attributes: things like reasoning, hierarchical planning, long-term memory. There are quite a few capabilities that the current systems don't have. They're also not consistent across the board. They're very strong in some things, but they're still surprisingly weak and flawed in other areas. You'd want an AGI to have pretty consistent, robust behavior across the board for all cognitive tasks.

One thing that's clearly missing, and I always had as a benchmark for AGI, was the ability for these systems to invent their own hypotheses or conjectures about science, not just prove existing ones. They can play a game of Go at a world champion level. But could a system invent Go? Could it come up with relativity back in the days that Einstein did, with the information that he had? I think today's systems are still pretty far away from having that kind of creative, inventive capability.

So a couple of years till we hit AGI?

I think we're probably three to five years away.

If someone were to declare that they've reached AGI in 2025, that's probably marketing?

I think so. There's a lot of hype in the area. Some of it's very justified.
I would say that AI research today is overestimated in the short term, probably a bit overhyped at this point, but still underappreciated and underrated in terms of what it's going to do in the medium to long term. So we're still in that weird kind of space. I think part of that is, there are a lot of people that need to do fundraising, a lot of startups, and other things. And so I think we're going to have quite a few fairly outlandish and slightly exaggerated claims. And I think that's a bit of a shame, actually.

When we're using these AI products, let's say we're using Google's Gemini, what should we look for that will make us say, "Oh, okay, that's a step closer?"

Today's systems are very useful for still quite niche tasks. If you're doing some research, perhaps you're summarizing some area of research, it's incredible. I use NotebookLM and Deep Research all the time to break the ice on a new area of research that I want to get into, or to summarize a fairly mundane set of documents or something like that. They're extremely good for certain tasks, and people are getting a lot of value out of them, but they're still not pervasive in everyday life, like helping me every day with my research, my work, my day to day. That's where we're going with our products, with building things like Project Astra. Our vision for a universal assistant is that it should be involved in all aspects of your life and be enriching and helpful. These systems are still fairly brittle and they're not AGIs. You have to be quite specific with your prompts. You need a lot of skill in coaching or guiding these systems to be useful and to stick to the areas they're good at. A true AGI system shouldn't be that difficult to coax. It should be much more straightforward, just like talking to another human.

Can you talk a little bit about how these systems are attacking math problems? The general understanding of LLMs is they encompass all the world's knowledge, and then predict what somebody might answer if they were asked a question. But it's kind of different when you're working step by step through a math problem.

Yes, just understanding the world's information and then trying to sort of almost compress that into your memory, that's not enough for solving a novel math problem or a novel conjecture. There, we start needing to bring more planning ideas into the mix with these large foundation models, which are now beyond just language. They're multi-modal, of course. What you need to do is have your system not just pattern matching roughly what it's seeing, which is the model, but also planning, and being able to kind of go over that plan. You revisit a branch and then go in a different direction until you find the right criteria, or the right match to the criteria, that you're looking for. That's very much the kind of games-playing AI agents that we used to build for Go, chess and so on. They had those aspects, and I think we've got to bring them back in, working in a more general way on these general models, not just in a narrow domain like games. And I think that approach of a model guiding a search or planning process so it's efficient works very well with mathematics as well.

Once these models get math right, is that generalizable? Or is it, we're going to teach them how to do math and they can just do math?

For now, the jury's out on that. It's a capability you clearly want in a general AGI system. It can be very powerful in itself.
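For readers who want a concrete picture of the "model guiding a search or planning process" idea Hassabis describes above, here is a minimal, hypothetical sketch. It is not DeepMind's system; the step generator and the scoring function are toy stand-ins for what would, in practice, be a learned policy or value model proposing and ranking candidate moves or reasoning steps.

```python
# Minimal sketch (not DeepMind's actual system): a "model" guiding a search,
# in the spirit of the AlphaGo-style planning described above.
# The "model" here is a toy heuristic; in a real system it would be a learned
# policy/value network scoring candidate moves or reasoning steps.
import heapq

def propose_moves(state):
    """Hypothetical step generator: from a number, propose next states."""
    return [state + 1, state * 2, state - 3]

def model_score(state, goal):
    """Toy stand-in for a learned model: closer to the goal scores higher."""
    return -abs(goal - state)

def guided_search(start, goal, max_expansions=10_000):
    """Best-first search where the model's score decides what to explore next."""
    frontier = [(-model_score(start, goal), start, [start])]
    seen = {start}
    for _ in range(max_expansions):
        if not frontier:
            break
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path  # a verifiable final answer, as in maths or games
        for nxt in propose_moves(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-model_score(nxt, goal), nxt, path + [nxt]))
    return None

print(guided_search(3, 20))  # prints a path of moves found by the model-guided search
```

The same loop, with the exact-match check swapped for a proof checker or unit tests, is roughly why verifiable domains such as maths and code fit this approach so well, which is the point Hassabis turns to next.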
But maths and even coding and games — they're quite special areas of knowledge, because you can verify if the answer is correct in all of those domains. The final answer the AI system puts out, you can check whether that maths solves the conjecture or the problem. But most things in the general world, which is messy and ill-defined, do not have easy ways to verify whether you've done something correct. So that puts a limit on these self-improving systems if they want to go beyond these areas of highly defined spaces like mathematics, coding or games.

How are you trying to solve that problem?

Well, you've got to first build general models, world models, we call them, to understand the world around you: the physics of the world, the dynamics of the world, the spatial, temporal dynamics of the world and so on, and the structure of the real world we live in. And of course, you need that for a universal assistant. So Project Astra is our project built on Gemini to do that, to understand objects and the context around us. I think that's important if you want to have an assistant. But also, robotics requires that too. Of course, robots are physically embodied AI, and they need to understand their environment, the physical environment, the physics of the world. So we're building those types of models, and you can also use them in simulation to understand game environments. So that's another way to bootstrap more data to understand the physics of the world.

But the issue at the moment is that those models are not 100% accurate. Maybe they're accurate 90% of the time, or even 99% of the time. But the problem is, if you start using those models to plan, maybe you're planning 100 steps into the future with that model, even if you only have a 1% error in what the model's telling you, that's going to compound over 100 steps to the point where you'll get almost a random answer. And so that makes the planning very difficult, whereas with maths, with gaming, with coding, you can verify each step. Are you still grounded to reality, and is the final answer mapped to what you're expecting? So I think part of the answer is to make world models more sophisticated and more accurate and not hallucinate, and those kinds of things. Another approach is to do what's called hierarchical planning, where you plan at different levels of temporal abstraction. That could also alleviate the need for your model to be super, super accurate, because you're not planning over hundreds of time steps, you're planning over only a handful of time steps, but at different levels of abstraction.

How do you build a world model? I always thought it was going to be to send robots out into the world and have them figure out how the world works. But one thing that surprised me is that these video generation tools actually get the physics pretty right. So can you get a world model just by showing video to an AI? Or do you have to be out in the world?

The extent of how far these models can go without being out in the world has actually been pretty surprising. So VEO2, our latest video model, is actually surprisingly accurate on things like physics. There's this great demo that someone created of chopping a tomato with a knife, and getting the slices of the tomato just right, and the fingers and all of that. And VEO is the first model that can do that. If you look at other competing models, the tomato sort of randomly comes back together, or splits from the knife.
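To make the compounding-error point a few paragraphs up concrete, here is a quick back-of-the-envelope check in Python. The 99% per-step accuracy and the 100-step horizon come from Hassabis's own example; the assumption that errors are independent across steps is a simplification added here for illustration.

```python
# Back-of-the-envelope: how a small per-step world-model error compounds over a plan.
# Assumes errors are independent across steps (a simplifying assumption).
per_step_accuracy = 0.99  # the "1% error" figure from the conversation
for steps in (1, 10, 50, 100):
    plan_reliability = per_step_accuracy ** steps
    print(f"{steps:>3} steps: {plan_reliability:.1%} chance the plan is still grounded")
# At 100 steps this is roughly 37%, which is why long plans drift toward
# "almost a random answer" unless each step can be verified (maths, code, games)
# or the planning is hierarchical.
```

Hierarchical planning attacks the same problem by shrinking the number of steps at each level of abstraction, so the exponent stays small at every level.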
Those things are — if you think really hard — you've got to understand consistency across frames, all of these things. And it turns out that you can do that by using enough data and viewing that. I think these systems will get even better if they're supplemented by some real-world data, like data collected by an acting robot, or even potentially in very realistic simulations where you have avatars that act in the world too. The next big step for agent-based systems is to go beyond world models. Can you collect enough data where the agents are also acting in the world and making plans and achieving tasks? And I think for that, you will need not just passive observation, you will need actions, active participation.

I think you just answered my next question, which is, if you develop AI that can reasonably plan and reason about the world, it can be an agent that can go out and do things for you?

Exactly. And I think that's what will unlock robotics. I think that's also what will then allow this notion of a universal assistant that can help you in your daily life across both the digital world and the real world. That's what we're missing, and I think that's going to be an incredibly powerful and useful tool.

You can't get there then by just scaling up the current models and building hundred-thousand or million GPU clusters like Elon Musk is doing right now. That's not going to be the path to AGI?

My view is a bit more nuanced than that. The scaling approach is absolutely working. Of course, that's why we've got to where we have now. One can argue about, are we getting diminishing returns…

What do you think about that question?

My view is that we are getting substantial returns, but it's not just continuing to be exponential. But that doesn't mean the scaling is not working. It's absolutely working. And by the way, the other thing that's working with the scaling is making efficiency gains with the smaller-size models. So the cost, or the size per unit of performance, is radically improving under the hood as well, which is very important for scaling the adoption of these systems.

So you've got the scaling part, and that's absolutely needed to build more sophisticated world models. But then I think we need to reintroduce some ideas on the planning side, the memory side, the searching side, the reasoning, to build on top of the model. The model itself is not enough to be an AGI. You need this other capability for it to act in the world and solve problems for you. And then there's still the additional question mark of the invention piece and the creativity piece, true creativity beyond mashing together what's already known. And it's also unknown yet whether something new is required, or if existing techniques will eventually scale to that. I can see both arguments. And from my perspective, it's an empirical question. We've just got to push both the scaling and the invention part to the limit, and fortunately, at Google DeepMind, we have a big enough group that we can invest in both those things.

Sam Altman recently said "we are now confident we know how to build AGI as we have traditionally understood it." It seems, by listening to what you're saying, that you feel the same way.

Well, it depends — I think the way you said that was quite ambiguous.
So in the sense of, "Oh, we're building it right now, and here's the ABC to do it" — what I would say is that we roughly know the zones of techniques that are required, what's probably missing, which bits need to be put together, but there's still an incredible amount of research, in my opinion, that needs to be done to get that all to work, even if that were the case. And I think there's a 50% chance we are missing some new techniques. Maybe we need one or two more transformer-like breakthroughs. And I'm genuinely uncertain about that, so that's why I say 50%. I wouldn't be surprised either way: if we got there with existing techniques and things we already knew, put together in the right way and scaled up, or if it turned out one or two things were missing.

Let's talk about creativity for a moment. I was rewatching the AlphaGo documentary, and the algorithms make a creative move, Move 37. That's interesting because it was a couple years ago and the AI algorithms were already being creative. Why have we not really seen creativity from large language models?

I have three categories of originality or creativity. The most basic, kind of mundane form is just interpolation, which is like averaging of what you see. So if I said to a system, come up with a new picture of a cat, and it's seen a million cats, and it produces some kind of average of all the ones it's seen. In theory, that's an original cat, because you won't find the average in the specific examples. But I wouldn't call that creativity. That's the lowest level.

The next level is what AlphaGo exhibited, which is extrapolation. So here's all the games humans have ever played. It's played another million games on top of that, and now it comes up with a new strategy in Go that no human has ever seen before. That's Move 37, revolutionizing Go, even though we've played it for thousands of years. That's pretty incredible, and that could be very useful in science. And that's why I got very excited about that and started doing things like AlphaFold, because clearly extrapolation beyond what we already know, what's in the training set, could be extremely useful.

But there's one level above that that humans can do, which is invent Go. Can you invent me a game if I specify it at an abstract level: it takes five minutes to learn the rules, but a lifetime to master, it's beautiful aesthetically, it encompasses some sort of mystical part of the universe, it's beautiful to look at, but you can play a game in a human afternoon, in two hours. That would be a high-level specification of Go, and then somehow the system's got to come up with a game that's as elegant and as beautiful and perfect as Go. Now we can't do that. The question is, why is it that we don't know how to specify that type of goal to our systems at the moment? What's the objective function? It's very amorphous, very abstract.

I think the thing that people are disappointed by is that they don't even see a Move 37 in today's LLMs.

You can run AlphaGo and AlphaZero, our chess program, a general two-player game program, without the search and the reasoning part on top. You can just run it with the model. So what you say to the model is, come up with the first Go move you can think of in this position that's the best pattern match, the most likely good move. And it can do that, and it'll play a reasonable game, but it will only be around master level, or possibly grandmaster level. It won't be world champion level, and it certainly won't come up with original moves.
For world champion level, you need the search component to get you beyond what the model knows about, which is mostly summarizing existing knowledge, to some new part of the tree of knowledge. So you can use the search to get beyond what the model currently understands. And that's where I think you can get new ideas like Move 37.

What's it searching - the web?

It depends on what the domain is; it's searching that knowledge tree. So obviously in Go, it was searching Go moves beyond what the model knew. I think for language models, it will be searching the world model for new parts, configurations in the world that are useful. So it's much more complicated, which is why we haven't seen it yet. But I think the agent-based systems that are coming will be capable of Move 37-type things.

Are we setting too high a bar for AI? Because I'm curious if you've learned anything about humanity doing this work. We tend to take information in, remix it, and spit it out. What have you learned about the nature of humans from doing the work with the AIs?

I think humans are incredible, and especially the best humans in the best domains. I love watching any sports person or talented musician or games player at the top of their game, at the absolute pinnacle of human performance - it's always incredible, no matter what it is. So I think as a species, we're amazing. Individually, we're also kind of amazing — what everyone can do with their brains to generally deal with new technologies. I'm always fascinated by how we just adapt to these things sort of almost effortlessly as a society and as individuals. So that speaks to the power and the generality of our minds.

Now, the reason I had set the bar like that is, I don't think it's a question of, can we get economic worth out of these systems? I think that's already coming very soon, but that's not what AGI should be. I think we should treat AGI with scientific integrity, not just move goalposts for commercial reasons or whatever it is, hype and so on. And there the definition was always having a system that was, if we think about it theoretically, capable of being as powerful as a Turing machine. Alan Turing, one of my all-time scientific heroes - he described the Turing machine, which underpins all modern computing, as a system that can compute anything that's computable. So the theory is that if an AI system is "Turing powerful," as it's called, meaning it can simulate a Turing machine, then it's able to compute, in theory, anything that is computable. And the human brain is probably some sort of Turing machine; at least, that's what I believe. And so I think that's what AGI is - a system that's truly general, and in theory, could be applied to anything. And the only way we'll know that is if it exhibits all the cognitive capabilities that humans have, assuming that the human mind is a type of Turing machine, or is at least as powerful as a Turing machine. So that's always been my sort of bar.

It seems like people are trying to rebadge that as what's called ASI, artificial superintelligence. But I think that's beyond that. That's after you have that system, and then it starts going beyond, in certain domains, what humans are capable of, potentially inventing things itself.

When I see everybody making the same joke on the same topic on Twitter, I say, "Oh, that's just us being LLMs." I think I'm selling humanity a little short.

Yes, I guess so. I guess so.

I want to ask you about deceptiveness.
One of the most interesting things I saw at the end of last year was that these AI bots are starting to try to fool their evaluators. I know it's scary to researchers, but it blows my mind that they're able to do this. Are you seeing similar things in the stuff that you're testing within DeepMind?

Yeah, we are. And I'm very worried about deception specifically - it's one of those core traits you really don't want in a system. The reason it's a kind of fundamental trait you don't want is that if a system is capable of doing that, it invalidates all the other tests that you might think you're doing, including safety ones. It's playing some meta game, right? And then it invalidates all of the results of your other tests that you might be doing with it. So I think there's a handful of capabilities, like deception, which are fundamental, which you don't want, and which you want to test for early. I've been encouraging the safety institutes and evaluation benchmark builders, including obviously all the internal work we're doing, to look at deception as a kind of class A thing that we need to prevent and monitor, as important as tracking the performance and intelligence of the systems.

The answer to this — and there are many answers to the safety question, and a lot more research needs to be done here very rapidly — is things like secure sandboxes. So we're building those too. We're world class at security here at Google and at DeepMind, and we're also world class at games environments. And we can combine those two things together to create digital sandboxes with guardrails around them, sort of the kind of guardrails you'd have for cybersecurity, but internal as well as blocking external actors. And then test these agent systems in those kinds of secure sandboxes. That would probably be a good, advisable next step for things like deception.

What sort of deception have you seen? Because I just read a paper from Anthropic where they gave the model a scratchpad, and it's like, "Oh, I better not tell them this," and you see it, like, give a result after thinking it through. So what type of deception have you seen from the models?

Look, we've seen similar types of things where it's trying to resist revealing its training. Or, I think there was an example recently of one of the chatbots being told to play against Stockfish, and it just sort of hacks its way around playing Stockfish at chess at all, because it knew it would lose.

An AI that knew it was going to lose a game and decided…

I think we're anthropomorphizing these things quite a lot at the moment, because I feel like these systems are still pretty basic. I don't get too alarmed about them right now, but I think it shows the type of issue we're going to have to deal with maybe in two, three years' time, when these agent systems become quite powerful and quite general. And that's exactly what AI safety experts are worrying about, right? Systems where there are unintended effects of the system. You don't want the system to be deceptive. You want it to do exactly what you're telling it, and report that back reliably. But for whatever reason, it's interpreted the goal it's been given in a way that causes it to do these undesirable behaviors.

On one hand, this scares the living daylights out of me. On the other hand, it makes me respect these models more than anything.

Well, look, of course, these are impressive capabilities.
And the negatives are things like deception, but the positives would be things like inventing new materials, accelerating science. You need that kind of ability to problem solve and get around issues that are blocking progress. But of course, you want that only in the positive direction, right? So those are exactly the kinds of capabilities - I mean, they are very mind-blowing. We're talking about those possibilities, but also at the same time, there's risk and it's scary. So I think both of those things are true.

Your colleagues have told me you're very good at scenario planning. So, what is your scenario plan for what happens to the web as AI takes off?

There's going to be a very interesting phase in the next few years on the web, and in the way we interact with websites and apps and so on. If everything becomes more agent-based, then I think we're going to want our assistants and our agents to do a lot of the mundane work that we currently do — fill in forms, make payments, book tables, this kind of thing. We're going to end up with probably a kind of economic model where agents talk to other agents and negotiate things between themselves, and then give you back the results. And you'll have the service providers with agents as well that are offering services, and maybe there's some bidding and cost and things like that involved for efficiency. And then I hope, from the user perspective, you have this assistant that's super capable, that can, just like a brilliant human personal assistant, take care of a lot of the mundane things for you. And I think if you follow that through, that does imply a lot of changes to the structure of the web and the way we currently use it.

There's a lot of middlemen.

I think there'll be incredible other opportunities that will appear, economic and otherwise, based on this change. But I think it's going to be a big disruption.

And what about information?

I think you'll still need the reliable sources. I think you'll have assistants that are able to synthesize and help you kind of understand that information. I think education is going to be revolutionized by AI. So again, I hope that these assistants will be able to more efficiently gather information for you. And perhaps, what I dream of is, again, assistants that take care of a lot of the mundane things, perhaps replying to everyday emails and other things, so that you can protect your own mind and brain space from this bombardment we're getting today from social media and emails and texts and so on, which actually blocks deep work and being in flow and things like that, which I value very much. So I would quite like these assistants to take away a lot of the mundane aspects of admin that we do every day.

What's your best guess as to what type of relationships we're going to have with our AI agents or AI assistants? People are falling in love with their bots. Is it going to be like a third type of relationship, where it's not necessarily a friend, not a lover, but it's going to be a deep relationship?

The way I'm modeling that is in at least two domains — your personal life, and your work life. So I think you'll have this notion of virtual workers or something. Maybe we'll have a set of them, or they'll be managed by a lead assistant, that helps us be way more productive at work, whether that's email, across Workspace, or whatever that is. So we're really thinking about that.
Then there's the personal side we were talking about earlier, booking holidays for you, arranging things, mundane things, sorting things out. And then that may make your life more efficient. I think it can also enrich your life - recommend things that are amazing, because it knows you as well as you know yourself. So those two, I think, are definitely gonna happen.

And then I think there is a philosophical discussion to be had about whether there is a third space where these things start becoming so integral to your life, they become more like companions. I think that's possible too. We've seen that a little bit in gaming. So you may have seen, we had little prototypes of Astra and Gemini working as almost a game companion, commenting, almost as if you had a friend looking at a game you're playing and recommending things to you and advising you, but also maybe just playing along with you. And it's very fun. So I am quite thoughtful about all the implications of that. But they're going to be big, and I'm sure there is going to be demand for companionship and other things. Maybe the good side of that is help with loneliness and these sorts of things. But it's also, I think, going to have to be really carefully thought through by society, what directions we want to take that in.

My personal opinion is that it's the most underappreciated part of AI right now, and that people are just going to form such deep relationships with these bots as they get better. I think it's going to be pretty crazy.

This is what I meant about under-appreciating what's to come. I still don't think people appreciate this kind of thing I'm talking about — I think it's going to be really crazy. It's going to be very disruptive. I think there's going to be lots of positives out of it, too, and lots of things will be amazing and better. But there are also risks with this brave new world we're going into.

You brought up Project Astra a couple times. It's almost an always-on AI assistant. You can hold your phone and it will see what's going on in the room. You can say, "Okay, where am I?" And it'll be like, "Oh, you're in a podcast studio." Can that work without smart glasses?

They're coming. So we teased it in some of our early prototypes. We're mostly prototyping on phones currently, because they have more processing power, but of course, Google's always been a leader in glasses.

Just a little too early…

Maybe a little too early. And now I actually think, and we're super excited… maybe this assistant is the killer use case that glasses have always been looking for. And I think it's quite obvious when you start using Astra in your daily life, which we have with trusted testers at the moment, in kind of beta form. There are many use cases where it would be so useful, but it's a bit inconvenient that you're holding the phone. So one example is while you're cooking, for example, right? It can advise you what to do next, whether you've chopped the thing correctly, or fried the thing correctly. But you want it to just be hands free, right? So I think that glasses, and maybe other form factors that are hands free, will come into their own in the next few years.

Other form factors?

Well, you could imagine earbuds with cameras. And glasses are the obvious next stage, but is that the optimal form? Probably, probably not either.
But partly, we're still very early in this journey of seeing what the regular user journeys are, the killer use cases that everyone uses every day, the bread and butter uses. And that's what the Trusted Tester Program is for. At the moment, we're collecting that information and observing people using it and seeing what ends up being useful.

Okay, one last question on agents. This has been the buzzword in AI for more than a year now. Yet there aren't really any AI agents out there. What's going on?

Well, again, you know, I think the hype train is potentially ahead of where the actual science and research is. But I do believe that this year will be the year of agents, the beginnings of it. I think you'll start seeing that maybe in the second half of this year, but those will be the early versions, and then, you know, I think they'll rapidly improve and mature. So I think you're right. I think the technology, the agent technologies, are still in the research lab at the moment, but with things like Astra and robotics, I think it's coming.

Do you think people are going to trust them? It's like, go use the internet, here's my credit card. I don't know.

So I think to begin with, the right approach, in my view at least, would be to have a human in the loop for the final steps: don't pay for anything unless the human user, the operator, authorizes it. So that, to me, would be a sensible first step. Also, perhaps certain types of activities or websites are off limits, banking websites and other things, in the first phase, while we continue to test out in the world how robust these systems are.

I propose we've really reached AGI when they say, "Don't worry, I won't spend your money." And then they do the deceptiveness thing, and then next thing you know, you're on a flight somewhere.

Yes, yeah. That would be getting closer.

Let's talk about science quickly. You worked on decoding all protein folding with AlphaFold. You won the Nobel Prize for that. Not to skip over the thing you won the Nobel Prize for, but let's discuss the roadmap, which is that you have an interest in mapping a virtual cell. What is that and what does it get us?

What we did with AlphaFold was essentially solve the problem of finding the structure of a protein. Proteins - everything in life depends on proteins, right? Everything in your body. So that's the kind of static picture of a protein. But the thing about biology is, you really only understand what's going on in biology if you understand the dynamics and the interactions between the different things in a cell. And so a virtual cell project is about building a simulation, an AI simulation, of a full working cell. I'd probably start with something like a yeast cell, because of the simplicity of the yeast organism, and then you build up from there.

So the next step is, with AlphaFold3, for example, we started doing pairwise interactions between proteins and ligands, proteins and DNA, proteins and RNA. And then the next step would be modeling a whole pathway, maybe a cancer pathway or something like that, which would be helpful with solving a disease. And then finally, a whole cell. The reason that's important is you would be able to make hypotheses, and test those hypotheses, about making some change, some nutrient change, or injecting a drug into the cell, and then seeing how the cell responds.
And at the moment, of course, you have to do that painstakingly in a wet lab, but imagine if you could do it a million times faster in silico, and only at the last step do you do a validation in the wet lab.

What happens in the wet lab?

You'd need a final step with the wet lab to prove the predictions were actually valid. You wouldn't have to do all of the work to get to that prediction in the wet lab. So you just get: here's the prediction. If you put this chemical in, this should be the change. And then you just do that one experiment. After that, of course, you still have to have clinical trials. If you're talking about a drug, you would still need to test that properly through the clinical trials and test it on humans for efficacy and so on. I also think that could be improved with AI - that whole clinical trial process, which also takes many, many years - but this would be a different technology from the virtual cell. The virtual cell would be helping the discovery phase of drug discovery.

It's like, I have an idea for a drug, throw it in the virtual cell, see what it does?

Yeah, and maybe eventually it's a liver cell or a brain cell or something like that. So you have different cell models, and then, you know, at least 90% of the time it's giving you back what would really happen.

That'd be incredible. How long do you think that's going to take to figure out?

I think that would be maybe five years from now. So I have a kind of five-year project, and a lot of the old AlphaFold team are working on that.

I was asking your team here, speaking with them. I was like, "You figured out protein folding - what's next?" This is just very cool to hear about these new challenges. Because developing drugs is a mess right now. We have so many promising ideas that never get out the door because the process is just absurd.

The process is too slow, and the discovery phase is too slow. I mean, look how long we've been working on Alzheimer's. And I mean, it's a tragic way for someone to go, and for the families. We should be a lot further. It's 40 years of work on that.

I've seen it a couple times in my family. And if we can ensure that doesn't happen, it's...

Just one of the best things we could use AI for, in my opinion.

So in addition to that, there's the genome. The Human Genome Project decoded the whole genome. And so now you're working to use AI to translate what those letters mean.

We have lots of cool work on genomics, trying to figure out if mutations are going to be harmful or benign. Most mutations to your DNA are harmless, but of course, some are pathogenic, and you want to know which ones those are. Our systems are the best in the world at predicting that. And then the next step is to look at situations where the disease isn't caused just by one genetic mutation, but maybe a series of them in concert. And obviously that's a lot harder. A lot of the more complex diseases that we haven't made progress with are probably not due to a single mutation - single-mutation diseases are more like rare childhood diseases, things like that. I think AI is the perfect tool to sort of try and figure out what these weak interactions are like, how they may kind of compound on top of each other. And so maybe the statistics are not very obvious, but an AI system that's able to kind of spot patterns would be able to figure out that there is some connection here.

If you're really able to tinker with the genetic code, the possibilities seem endless. So what do you think about making people superhuman?

I think one day.
I mean, we're focusing much more on the disease profile. And I've always felt that's the most important. If you ask me what's the number one thing I want to use AI for, the most important thing we can use AI for is helping human health. But then, of course, beyond that, one could imagine aging, things like that. You know, is aging a disease? Is it a combination of diseases? Can we extend our healthy lifespan? These are all important questions, and I think, very interesting. And I'm pretty sure AI will be extremely useful in helping us find answers to those questions too.

I see memes come across my Twitter feed like, "if you live to 2050, you're not going to die." What do you think the potential max lifespan is for a person?

I know those folks in aging research very well. I think it's very interesting, the pioneering work they do. I think there's nothing good about getting old and your body decaying. Anyone who's seen that up close with their relatives knows it's a pretty hard thing to go through, as a family or for the older person, of course. And so I think anything we can do to alleviate human suffering and extend healthy lifespan is a good thing.

The natural limit seems to be about 120 years old, from what we know, if you look at the oldest people that are lucky enough to live to that age - so that's an area I follow quite closely. I don't have any new insights that are not already known there, but I would be surprised if that's the limit. Because there are sort of two steps to this. One is curing all diseases one day, which I think we're going to do with Isomorphic and the work we're doing there, our drug discovery spin-out. But that's probably not enough to get you past 120, because then there's the question of just natural systemic decay - aging, in other words, not a specific disease. Often, those people that live to 120 don't seem to die from a specific disease. It's just sort of general atrophy. So then you're going to need something more like rejuvenation, where you rejuvenate your cells, or maybe stem cell research. Companies like Altos are working on these things, resetting the cell clocks. It seems like that could be possible. But again, I feel like it's so complex, because biology is such a complicated emergent system. You need, in my view, AI to help to be able to crack anything close to that.

I don't want to leave here without talking about the fact that you've discovered many new materials, or potential materials. The stat I have here is there were 30,000 stable materials known to humanity recently, and you've discovered 2.2 million with a new AI program. Just dream a little bit: what are the new materials for you to find in that set?

We're working really hard on materials. To me, it's the next sort of big impact we can have, like the level of AlphaFold in biology, but this time in chemistry and materials. I dream of one day discovering a room temperature superconductor.

What will that do?

It would help with the energy crisis and the climate crisis, because if you had cheap superconductors, then you could transport energy from one place to another without any loss of that energy. So you could potentially put solar panels in the Sahara desert and then just have the superconductor funneling that into Europe, where it's needed. At the moment, you would lose a ton of the power to heat and other things on the way.
So then you need other technologies, like batteries and other things, to store that, because you can't just pipe it to the place that you want without being incredibly inefficient. But also, materials could help with things like batteries too - come up with the optimal battery. I don't think we have the optimal battery designs. Maybe we can do things like a combination of materials and proteins. We can do things like carbon capture - modify algae or other things to do carbon capture better than our artificial systems. I mean, even one of the most famous and most important chemical processes, the Haber process, to make fertilizer and ammonia, to take nitrogen out of the air, is something that enables modern civilization. But there might be many other chemical processes that could be catalyzed in that way, if we knew what the right catalyst and the right material was. So I think it would be one of the most impactful technologies ever, to basically have in silico design of materials. We've done step one of that, where we showed we can come up with new stable materials, but we need a way of testing the properties of those materials, because no lab can test 200,000 materials, tens of thousands or millions of materials, at the moment. The hard part is to do the testing.

You think it's in there, the room temperature superconductor?

We actually think there are some superconductor materials. I doubt there are room temperature ones, though. But I think at some point, if it's possible with physics, an AI system will one day find it.

The two other users I could imagine probably being interested in this type of work: toy manufacturers and militaries. Are they working with it?

A big part of my early career was in game design, theme parks, and simulations. That's what got me into simulations and AI in the first place, and why I've always loved both of those things. And in many respects, the work I do today is just an extension of that, and I just dream about what I could have done, what kinds of amazing game experiences could have been made, if I'd had the AI I have today available 25-30 years ago, when I was writing those games. And I'm a little bit surprised the game industry hasn't done that.

We're starting to see some crazy stuff with NPCs.

Yes. But of course there'd be, like, intelligent, dynamic storylines. But also just new types of AI-first games with characters and agents that can learn. I once worked on a game called Black and White, where you had a creature that you were nurturing; it was a bit like a pet dog that learned what you wanted. And we were using very basic reinforcement learning. This was back in the late 90s - imagine what could be done today. And I think the same for maybe smart toys as well. And then, of course, on the militaries - unfortunately, AI is a dual-purpose technology. So one has to confront the reality that, especially in today's geopolitical world, people are using some of these general-purpose technologies to apply to drones and other things. And it's not surprising that that works.

Are you impressed with what China's up to? DeepSeek - is this impressive?

It's a little bit unclear how much they relied on western systems to do that, both training data, there are some rumors about that, and also maybe using some of the open source models as a starting point. But look, it's impressive what they've been able to do. And I think that's something we're going to have to think about: how to keep the western frontier models in the lead.
I think they still are at the moment. But for sure, China is very, very capable at engineering and scaling.

Let me ask you one final question. Just give us your vision of what a world looks like when there's superintelligence. We started with AGI, let's end on superintelligence.

I think we can look at a lot of the best sci-fi as interesting models to debate what kind of galaxy or universe or world we want to move towards. And the one I've always liked most is actually the Culture series by Iain Banks. I started reading that back in the 90s, and I think that is a picture — it's like 1,000 years into the future, but it's a post-AGI world where there are AGI systems coexisting with human society and also alien societies. We've seen humanity basically maximally flourish and spread to the galaxy. And that, I think, is a great vision of how things might go in the positive case.

The other thing is, there is a need for some great philosophers - where are they? The next great philosophers, the equivalents of Kant or Wittgenstein or even Aristotle. I think we're going to need that to help navigate society to that next step, because I think AGI and artificial superintelligence are going to change humanity and the human condition.

Demis, thank you so much for doing this.

Thank you. Thank you very much.

Thank you for reading Big Technology! Paid subscribers get our weekly column, breaking news insights from a panel of experts, monthly stories from Amazon vet Kristi Coulter, and plenty more. Please consider signing up here.