Google’s new AI generates hypotheses for researchers
    Over the past few years, Google has embarked on a quest to jam generative AI into every product and initiative possible. Google has robots summarizing search results, interacting with your apps, and analyzing the data on your phone. And sometimes, the output of generative AI systems can be surprisingly good despite lacking any real knowledge. But can they do science?

    Google Research is now angling to turn AI into a scientist—well, a “co-scientist.” The company has a new multi-agent AI system based on Gemini 2.0, aimed at biomedical researchers, that can supposedly point the way toward new hypotheses and areas of research. At its core, though, Google’s AI co-scientist boils down to a fancy chatbot.

    A flesh-and-blood scientist using Google’s co-scientist would input their research goals, ideas, and references to past research, allowing the robot to generate possible avenues of research. The AI co-scientist contains multiple interconnected models that churn through the input data and access Internet resources to refine the output. Inside the tool, the different agents challenge each other to create a “self-improving loop,” an approach similar to the new raft of reasoning AI models like Gemini Flash Thinking and OpenAI o3.
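    Google hasn’t published the system’s internals, but the generate–critique–refine arrangement it describes maps onto a familiar multi-agent pattern. The sketch below is a minimal, hypothetical illustration of that pattern in Python; the agent roles, the round count, and the call_llm helper are assumptions for illustration, not Google’s actual implementation.

```python
# Hypothetical sketch of a generate/critique/refine loop between LLM "agents".
# call_llm() is a stand-in for any chat-completion API; nothing here reflects
# Google's actual co-scientist internals.

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (e.g., a Gemini or other chat API)."""
    raise NotImplementedError

def generate_hypotheses(goal: str, references: list[str]) -> str:
    prompt = (
        "Research goal:\n" + goal + "\n\n"
        "Relevant prior work:\n" + "\n".join(references) + "\n\n"
        "Propose three testable hypotheses."
    )
    return call_llm(prompt)

def critique(hypotheses: str) -> str:
    # A second "agent" challenges the first one's output.
    return call_llm(
        "Act as a skeptical reviewer. List flaws, missing controls, and "
        "conflicts with known results in these hypotheses:\n" + hypotheses
    )

def refine(hypotheses: str, feedback: str) -> str:
    return call_llm(
        "Revise these hypotheses to address the reviewer feedback.\n\n"
        "Hypotheses:\n" + hypotheses + "\n\nFeedback:\n" + feedback
    )

def co_scientist_loop(goal: str, references: list[str], rounds: int = 3) -> str:
    """One agent proposes, a critic challenges, a refiner revises—repeatedly."""
    hypotheses = generate_hypotheses(goal, references)
    for _ in range(rounds):
        feedback = critique(hypotheses)
        hypotheses = refine(hypotheses, feedback)
    return hypotheses
```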

    This is still a generative AI system like Gemini, so it doesn’t truly have any new ideas or knowledge. However, it can extrapolate from existing data to potentially make decent suggestions. At the end of the process, Google’s AI co-scientist spits out research proposals and hypotheses. The human scientist can even talk with the robot about the proposals in a chatbot interface. 

    The structure of Google’s AI co-scientist.

    You can think of the AI co-scientist as a highly technical form of brainstorming. In the same way you can bounce party-planning ideas off a consumer AI model, scientists will be able to conceptualize new scientific research with an AI tuned specifically for that purpose.

    Testing AI science

    Today’s popular AI systems have a well-known problem with accuracy. Generative AI always has something to say, even if the model doesn’t have the right training data or model weights to be helpful, and fact-checking with more AI models can’t work miracles. Leveraging its reasoning roots, the AI co-scientist conducts an internal evaluation to improve its outputs, and Google says these self-evaluation ratings correlate with greater scientific accuracy.
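    Google doesn’t detail how that internal evaluation works, but one common self-evaluation pattern is to have a reviewer prompt score each candidate proposal and keep only the highest-rated ones. The sketch below is an assumption-laden illustration of that generic idea, reusing the hypothetical call_llm helper from the earlier example; the 0–10 scale and prompt wording are made up.

```python
# Hypothetical self-evaluation step: score candidate proposals with a reviewer
# prompt and keep the best one. The scoring scale and prompt are assumptions.

def score_proposal(proposal: str) -> float:
    """Ask the model to rate a proposal from 0 to 10 for plausibility and novelty."""
    reply = call_llm(
        "Rate the following research proposal from 0 to 10 for scientific "
        "plausibility and novelty. Reply with only the number.\n\n" + proposal
    )
    try:
        return float(reply.strip())
    except ValueError:
        return 0.0  # an unparsable rating counts as the lowest score

def best_proposal(candidates: list[str]) -> str:
    """Return the candidate the reviewer rating ranks highest."""
    return max(candidates, key=score_proposal)
```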

    The internal metrics are one thing, but what do real scientists think? Google had human biomedical researchers evaluate the robot’s proposals, and they reportedly rated the AI co-scientist higher than other, less specialized agentic AI systems. The experts also agreed the AI co-scientist’s outputs showed greater potential for impact and novelty compared to standard AI models. 

    This doesn’t mean the AI’s suggestions are all good, but Google partnered with several universities to test some of the AI-generated research proposals in the laboratory. For example, the AI suggested repurposing certain existing drugs to treat acute myeloid leukemia, and laboratory testing suggested it was a viable idea. Research at Stanford University also showed that the AI co-scientist’s ideas about treating liver fibrosis were worthy of further study.

    This is compelling work, certainly, but calling this system a “co-scientist” is perhaps a bit grandiose. Despite the insistence from AI leaders that we’re on the verge of creating living, thinking machines, AI isn’t anywhere close to being able to do science on its own. That doesn’t mean the AI co-scientist won’t be useful, though. Google’s new AI could help humans interpret and contextualize expansive data sets and bodies of research, even if it can’t understand or offer true insights.

    Google says it wants more researchers working with this AI system in the hope it can assist with real research. Interested researchers and organizations can apply to be part of the Trusted Tester program, which provides access to the co-scientist UI as well as an API that can be integrated with existing tools.


