Neuroscientist Grace Huckins, a lecturer at Stanford University, has raised questions about the role of data and artificial intelligence in scientific progress. In an essay that won the international Nine Dots Prize, Huckins argues that while AI tools and large datasets are leading to practical advances, they may not be deepening our understanding of fundamental scientific questions.
Huckins began exploring this issue during their PhD studies in neuroscience and philosophy at Stanford starting in 2018. “Practical problems are the dimension that you often have to emphasize in grant applications – how will your research help fight diseases, or advance technology, or otherwise improve people’s everyday lives?” said Huckins. “But scientists are also really curious people. Even if science had no obvious practical benefits, many of us would still be driven to understand how and why the world works the way that it does.”
In their winning essay for the Nine Dots Prize, Huckins wrote: “Never before has it made sense to ask whether science is about developing new technologies and interventions or about understanding the universe – for centuries, those two goals have been one and the same. Now that big data and AI have dissociated science’s two objectives, we have the responsibility to decide which matters most.”
The Nine Dots Prize recognizes innovative thinking on contemporary issues and includes a $100,000 award to support development of a book published by Cambridge University Press.
Reflecting on their work with machine learning in neuroimaging research at Stanford’s Russ Poldrack lab, Huckins noted both opportunities and limitations: “It’s really common these days to use machine learning to try to predict some attribute of an individual – like personality, intelligence, or psychiatric diagnosis – based on a brain scan. And some of that research does have real practical benefits: If you could perfectly predict the ideal psychiatric drug for a given patient, that would change lives. But I didn’t see the benefit of, say, predicting whether or not someone has depression, since it’s much cheaper and easier to identify depression with a diagnostic interview. And those depression prediction studies don’t actually tell us much about how depression works in the brain, partly because some of those machine learning approaches can be really tough to interpret. It seemed to me like there was a trend in the field of valorizing prediction for its own sake, and that didn’t seem like the best approach.”
Huckins pointed to cases where AI has delivered significant advances while leaving gaps in our understanding of underlying mechanisms, citing AlphaFold from Google DeepMind as an example: “AlphaFold can take in the sequence of amino acids that make up a protein and accurately predict the three-dimensional structure of that protein. That brings real, practical benefits… AlphaFold is a huge, ridiculously complicated system, and no one understands how it works. It makes great predictions, but the source of those predictions is a mystery.” Huckins added that while such tools accelerate scientific work by providing new data sources without traditional experiments, there is a concern that understanding may become secondary as practical results take precedence.
Discussing these themes with students in Stanford’s Civic, Liberal, and Global Education (COLLEGE) program courses has informed Huckins’ perspective further: “While this book is specifically about AI and science, it’s also more broadly about how AI is forcing us to reassess and redefine so many human endeavors… For the most part my students aren’t just AI boosters or AI doomers… they also worry about AI changing the way they engage with their education and with the world. That lesson – that AI brings potential and risk in equal measure – is something I’m working hard to reflect in my book.”
The full excerpt from Grace Huckins’ prize-winning essay is available online.