Google DeepMind CEO Demis Hassabis sees LLMs talking to experts as “next era” of AI


Researchers are investigating AI systems with the ability to use specialized tools for specific tasks, which could be an essential path to achieving artificial general intelligence, according to Google DeepMind CEO Demis Hassabis.

One line of research focuses on how large language and multimodal AI models with expertise in specific domains could call on other specialized AI systems, or tools, to help solve complex problems. These tools could provide solutions and communicate the answers back to the user through natural language or images, making the whole process seem seamless to the user.

Demis Hassabis, CEO of Google DeepMind, suggests in the latest Decoder podcast that the next generation of AI systems could use these capabilities, with the central AI model acting as a switch that directs queries to the appropriate specialized tool. This process could then deliver solutions in a digestible and understandable way, primarily through natural language.

Users would still feel like they’re interacting with a large AI system, when in fact they’re interacting with many smaller systems connected by a large language model. Interestingly, OpenAI’s GPT-4 is said to work similarly, using a mixture of experts system, with eight expert networks managing specific tasks.
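The routing idea described above can be sketched in a few lines. This is purely illustrative: the article does not detail how Gemini or GPT-4 route queries, so the keyword-based dispatcher and tool names below are hypothetical stand-ins for what would, in a real system, be a learned gating or routing network.

```python
# Illustrative sketch only: a central "switch" dispatching queries to
# specialized tools, as described in the article. All tool names and the
# keyword-based routing rule are hypothetical stand-ins for a learned router.

from typing import Callable, Dict

# Hypothetical specialized tools the central model could call on.
def math_tool(query: str) -> str:
    return f"[math tool] solved: {query}"

def code_tool(query: str) -> str:
    return f"[code tool] generated code for: {query}"

def general_answer(query: str) -> str:
    return f"[general model] answered: {query}"

TOOLS: Dict[str, Callable[[str], str]] = {
    "math": math_tool,
    "code": code_tool,
}

def route(query: str) -> str:
    """Central model acting as a switch: pick a specialist, else fall back."""
    for keyword, tool in TOOLS.items():
        if keyword in query.lower():
            return tool(query)
    return general_answer(query)

print(route("Write code to sort a list"))
print(route("What is the capital of France?"))
```

To the user, only the final natural-language answer is visible, which is why the whole system would still feel like a single large model.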


Hassabis may be foreshadowing Google's upcoming Gemini AI model when he states that he believes this architecture is "probably going to be the next era" for AI systems. Gemini will be multimodal and, according to Hassabis, future-proof in its architecture and capabilities.

Tool-based LLMs as a stepping stone to AGI

Developing these capabilities is essential for projects such as a universal AI assistant, and can help drive advances in planning, memory, and reasoning – all important aspects of AGI. This development could potentially lead to an AGI or AGI-like system within the next decade, says Hassabis, who sees these types of AI systems as "on the critical path to AGI."

"Obviously, if there are a lot of breakthroughs still required, those are a lot harder to do and take a lot longer. But right now, I would not be surprised if we approached something like AGI or AGI-like in the next decade."


Despite the progress and innovation needed in terms of architecture and tools, Hassabis also stresses that scaling is still important, although he "would not be surprised" if "just scaling the existing systems resulted in diminishing returns in terms of the performance of the system."

“At the moment, I think nobody knows which regime we’re in. So the answer to that is you have to push on both as hard as possible. So both the scaling and the engineering of existing systems and existing ideas as well as investing heavily into exploratory research directions that you think might deliver innovations that might solve some of the weaknesses in the current system,” Hassabis says.

Internal memo about open-source threats was real, but it doesn’t bother Hassabis

With so much innovation and computing still needed to gain ground in AI, Hassabis sees Google DeepMind “with a lot of resources” in a good position to make progress in both directions, architecture and scaling.

Hassabis also weighed in on a leaked Google memo that described open-source AI as a threat to Google’s own AI efforts and questioned the strategies of DeepMind and its parent company.

