François Chollet

Software engineer and AI researcher, Google

“For five years, we have not made any progress” with Large Language Models (LLMs), argues François Chollet in his keynote at AGI24. According to Chollet, LLMs still suffer from the same fundamental weaknesses they’ve had since their inception: hallucinations, sensitivity to phrasing, and poor generalization to new tasks. Chollet, the creator of the neural network library Keras (later integrated into Google’s TensorFlow framework), is skeptical about the current trajectory of AI research. He contends that the focus needs to shift: AI, in his view, shouldn’t be defined by its ability to perform specific tasks but by its capability to solve entirely new problems without prior exposure.

The thing that makes ARC special is that it was designed with this intent to resist memorization.
— François Chollet

To encourage progress in this direction, Chollet developed the Abstraction and Reasoning Corpus (ARC), a benchmark for evaluating Artificial General Intelligence (AGI). Each ARC task presents a handful of example input-output grids; solving it requires an AI system to infer the underlying transformation and apply it to a new test grid, using limited computational resources. A successful program must abstract from the few examples it is given and generalize to similar, novel problems. In 2024, Chollet introduced a $1 million prize for anyone who could develop a program that meets this challenge.
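To make the setup concrete, here is a minimal sketch, in Python, of how an ARC task is structured and how a candidate solver might be scored. The task format (a JSON file with "train" and "test" lists of input-output grids of integers 0-9) follows the publicly released ARC dataset; the file name and the trivial solve function below are placeholders for illustration, not the prize's official evaluation harness.

```python
import json


def load_arc_task(path):
    """Load one ARC task: a JSON file with 'train' and 'test' lists of
    {'input': grid, 'output': grid} pairs, where each grid is a 2D list
    of integers 0-9 (colors)."""
    with open(path) as f:
        return json.load(f)


def solve(train_pairs, test_input):
    """Placeholder solver. A real entry must infer the transformation
    demonstrated by the few train pairs and apply it to the unseen test
    input; echoing the input back, as done here, fails almost every task."""
    return test_input


def evaluate(task):
    """Score a task by exact match on every cell of every test output grid."""
    results = [
        solve(task["train"], pair["input"]) == pair["output"]
        for pair in task["test"]
    ]
    return sum(results), len(results)


if __name__ == "__main__":
    # 'example_task.json' is a hypothetical path to a single ARC task file.
    solved, total = evaluate(load_arc_task("example_task.json"))
    print(f"Solved {solved}/{total} test grids")
```

The point of the benchmark is that nothing in the few train pairs can simply be memorized: a competitive solver has to construct the transformation on the fly, which is exactly the kind of abstraction Chollet argues current LLMs lack.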

Chollet is also the author of Deep Learning with Python, a widely used textbook, and a frequent public speaker. He has been with Google since 2015. To aspiring AI researchers, his advice is clear: the future of AI isn’t about scaling models but about building systems capable of effective abstraction.