An AI Renaissance of Scientific Instruments
Essay 1: Learning from the Past
Executive Summary
This article examines AI as a transformative scientific instrument, comparing its potential adoption trajectory to those of historical tools such as the microscope (slow, 150-200 years) and PCR (rapid, roughly 5 years). It argues that AI represents not just a single tool but a "Cambrian explosion" of scientific instruments that could dramatically accelerate discovery across disciplines. Policymakers must prioritize interventions that foster PCR-like adoption patterns rather than microscope-like timelines if AI-powered scientific progress is to address urgent societal challenges.
AI as a New Scientific Instrument
The picture of artificial intelligence (AI) referenced most often today is the chatbot: a text interface where people interact with large language models and ask them to perform different tasks. Chatbots are helpful, and will be important for a multitude of functions, but they are only a small part of what artificial intelligence will do for the future of science. As several writers and AI pioneers have pointed out:
“It is more accurate and important to see AI as a new kind of telescope—or scientific instrument—that enables scientists to address problems that society has made very little progress on hitherto.”
Like other general-purpose technologies before it, AI will only realize its full impact once a number of different factors align, and the speed at which they do warrants a closer look.
The Microscope's Gradual Revolution
Consider the microscope as an example of an instrument that transformed its field of practice. The origins of the modern microscope stretch back to the mid-17th century, but its adoption as a transformative instrument in biology progressed gradually over the next two centuries.
While the microscope's principle was discovered in the 17th century, it took 150 to 200 years of incremental technical improvements before the instrument was widely adopted and reached its full transformative potential in biology.
Accelerated Adoption: The PCR Revolution
By contrast, the dramatic impact of polymerase chain reaction (PCR) technology on genetic analysis unfolded rapidly after its conception in 1983.
Conceived by biochemist Kary Mullis, PCR provided a technique to quickly amplify specific DNA sequences billions of times over. By 1987, the first automated thermal cyclers had been commercialized, making the process efficient and accessible in virtually any laboratory.
“Adoption exploded through the 1990s, with an estimated over 100,000 labs globally using PCR to revolutionize and democratize a myriad of genetics applications from research to medical diagnostics and forensics.”
This explosive growth began within just five years of PCR's invention, a stunningly accelerated timeline compared to the microscope.
What enabled this rapid adoption?
Unlike the microscope, whose progress depended on optics and precision engineering, PCR rests on a chemical and enzymatic process. No material or engineering barriers constrained its rapid ascent as a cornerstone technology; only the speed of information diffusion slowed adoption down. Its versatile utility was immediately apparent, as rapid amplification unlocked insights previously out of reach. Much like the microprocessor, price-performance improvements led to further adoption and innovation, culminating in a positive feedback cycle of exponential advancement.
AI: The New Scientific Instrument
Looking at AlphaFold, the protein structure prediction system developed by Google DeepMind, it is increasingly clear that AI is not only a new scientific instrument but an enabler of many different scientific instruments. One might come to describe this period as a revolution, or a “Cambrian explosion”, of scientific instruments, each tailored and developed to address specific problems and research domains in ways that were impossible to imagine a few decades ago.
This is a departure from how science has progressed to date: without AI as a core component, yet still growing significantly. Now imagine adding AI instruments, enabling further growth and change at an accelerated pace.
Of course, this isn’t a phenomenon that began solely with the advent of AI; rather, it is an extension of the role that computers already play in science. The introduction of computers enabled the development of specific applications across a wide range of sciences. These and other tools represent a shift into the age of the computation-based scientific instrument, moving science from the constraints of biological time to a pace defined by computation.
Just as there are universal Turing machines, there may one day be universal Turing instruments: a single instrument that can be used for scientific inquiry and exploration, limited only by what can be physically observed and measured (or modeled) in different ways. This is not a new idea: the tricorder in Star Trek, a device that could sense, record, and analyze its surroundings, shows that people have been contemplating this possibility for some time. These developments suggest that the role of instruments in science is worth exploring more closely.
Key Considerations for AI Instruments in Science
The wealth of possible instruments that can be coded and built with machine learning is daunting, even if they are not universally applicable, and it presents us with a number of questions.
Here are a few basic assumptions and observations to reflect on, drawn from how society has introduced and used scientific instruments in the past.
The Policy Opportunity
These assumptions suggest an opportunity for policymakers to incentivize the fast, efficient, and robust adoption of new scientific instruments.
“What scientists and society need today are PCR-like adoption patterns rather than microscope-like adoption patterns, given the urgency of the societal challenges we face.”
There is nothing to suggest that this is impossible. But it is incumbent on policymakers to prioritize interventions that accommodate and accelerate this wave of general-purpose technologies in ways that maximize equity and social benefit alongside speed of adoption, so that constituencies can judge whether their efforts were successful.
So how can they facilitate that?
The second and third papers in this series will explore that question in turn.
This is the first article in a three-part series exploring AI as a transformative scientific instrument and the policy considerations needed to maximize its societal benefits.
About the Authors
Dorothy Chou is the Director of the Public Engagement Lab at Google DeepMind, where she helps enable meaningful public discussions through translating complex AI concepts. Dorothy is passionate about using technology as a force for positive change, both through her policy work and as an angel investor supporting underrepresented founders. With interests spanning bioethics and technology governance, she enjoys building bridges between technical innovation and social responsibility, working toward a future we can look forward to.
Nicklas Berild Lundblad is the Director of Public Policy at Google DeepMind, where he explores powerful questions at the intersection of technology, policy, and society. He thrives on connecting diverse stakeholders around shared visions for AI's future, describing his work as "a mix of foresight, insight and listening." An enthusiastic ambassador for thoughtful AI development, Nicklas enjoys facilitating conversations that bridge technical innovation with social impact, finding deep satisfaction in building collaborative networks that shape positive technological futures.
Terra Terwilliger is the Director of Strategic Initiatives at Google DeepMind, where she brings her Georgia roots and down-to-earth perspective to complex AI topics. As a strategic thought partner to the COO, she finds purpose in building a shared imagination about AI-enabled futures. Terra is passionate about harnessing technology's potential to improve lives, working with diverse teams to ensure AI benefits humanity in meaningful ways.
The views expressed in this article represent the authors' personal perspectives and not necessarily those of their affiliated organizations.
© 2025 Google DeepMind
Sources
[1] Not least for summarising the growing amount of information facing researchers in almost all fields: publishing today vastly exceeds our ability to read, digest and analyse content. See e.g. Fire, M. and Guestrin, C., 2019. “Over-optimization of academic publishing metrics: observing Goodhart’s Law in action.” GigaScience, 8(6), p.giz053, which suggests that around 5-7 million academic articles are published each year. This may or may not be close to the maximum number of articles that available researchers could publish every year: if we assume there are 8 million researchers, that each has roughly 500 hours left after completing everything else they need to do as researchers, and that publishing an article takes 100 hours, then the maximum is on the order of 40 million articles a year, or 5 articles each (which is a lot, and far more than we publish today). We may not have hit peak publishing rates yet either, and if AI increases scientific productivity as we measure it today, in publications, we may face an even trickier situation. If we assume, at the other end, that researchers spend that time reading instead, and that reading an article properly and integrating it takes 10 hours, then researchers should be able to digest, at an optimum, 50 articles per person per year, or roughly one a week. That equates to 50 × 8 million = 400 million article-readings a year, but of course we must then ask which articles, and what the value is of researchers reading similar articles and so sharing a research horizon. It all gets quite messy, quite fast, suggesting that a key service AI could perform is not just speeding up reading and publishing, but organizing papers so that it is possible to read bodies of literature, or specific syllabi, and then work within them to make progress. There may be a need for AI-produced canons of different kinds.
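The footnote's back-of-envelope arithmetic can be made explicit. The short Python sketch below simply restates the assumed figures used above (8 million researchers, 500 spare hours a year, 100 hours to write an article, 10 hours to read one); all of the numbers are the footnote's illustrative assumptions, not measurements.

```python
# Back-of-envelope capacity estimate from footnote [1].
# All figures are the footnote's illustrative assumptions, not measurements.

RESEARCHERS = 8_000_000          # assumed number of active researchers
SPARE_HOURS_PER_YEAR = 500       # assumed hours left after all other duties
HOURS_TO_WRITE_ARTICLE = 100     # assumed time to produce one article
HOURS_TO_READ_ARTICLE = 10       # assumed time to read and integrate one article

# Maximum articles the community could write per year if all spare time went to writing.
max_written = RESEARCHERS * SPARE_HOURS_PER_YEAR // HOURS_TO_WRITE_ARTICLE

# Maximum articles each researcher could read per year, and the total across the community.
max_read_each = SPARE_HOURS_PER_YEAR // HOURS_TO_READ_ARTICLE
max_read_total = RESEARCHERS * max_read_each

print(f"Writing ceiling: {max_written:,} articles/year "
      f"({max_written // RESEARCHERS} per researcher)")
print(f"Reading ceiling: {max_read_each} articles per researcher/year, "
      f"{max_read_total:,} article-readings in total")
```

Running it reproduces the figures above: a writing ceiling of roughly 40 million articles a year (about 5 per researcher) and a reading ceiling of about 50 articles per researcher per year, or 400 million article-readings in total.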
[2] The quote often referred to here is one allegedly from Edsger Dijkstra: “Computer Science Is Not About Computers, Any More Than Astronomy Is About Telescopes”. There is significant doubt over whether this is indeed Dijkstra, however: https://quoteinvestigator.com/2021/04/02/computer-science/
[3] For more on AlphaFold see https://alphafold.ebi.ac.uk/
[4] There are different numbers and definitions, but one assessment is that the scientific instruments market is in the $40-50 billion range and grows at 5-6% annually. See https://www.thebusinessresearchcompany.com/report/scientific-instruments-global-market-report