Google said a large language model it developed to find vulnerabilities recently discovered a bug that hackers were preparing to use.
Late last year, Google announced an AI agent called Big Sleep, a project that evolved out of work on vulnerability research assisted by large language models done by Google Project Zero and Google DeepMind. The tool actively searches for and finds unknown security vulnerabilities in software.
On Tuesday, Google said Big Sleep discovered CVE-2025-6965, a critical security flaw that Google said was "only known to threat actors and was at risk of being exploited."
The vulnerability affects SQLite, an open-source database engine popular among developers. Google claims it was "able to actually predict that a vulnerability was imminently going to be used" and was able to cut it off beforehand.
"We believe this is the first time an AI agent has been used to directly foil efforts to exploit a vulnerability in the wild," the company said.
A Google spokesperson told Recorded Future News that the company's threat intelligence group was "able to identify artifacts indicating the threat actors were staging a zero day but couldn't immediately identify the vulnerability."
"The limited indicators were passed along to other Google team members at the zero day initiative who leveraged Big Sleep to isolate the vulnerability the adversary was preparing to use in their operations," they said.
The company declined to elaborate on who the threat actors were or what indicators were discovered.
In a blog post touting a range of AI advancements, Google said that since Big Sleep debuted in November, it has discovered multiple real-world vulnerabilities, "exceeding" the company's expectations.
Google said it is now using Big Sleep to help secure open-source projects and called AI agents a "game changer" because they "can free up security teams to focus on high-complexity threats, dramatically scaling their impact and reach."
The tech giant published a white paper on how it built its own AI agents in a way that allegedly safeguards privacy, limits potential "rogue actions" and operates with transparency.
Dozens of companies and U.S. government bodies are hard at work developing AI tools built to quickly search for and discover vulnerabilities in code.
Next month, the U.S. Defense Department will announce the winners of a yearslong competition to use AI to create systems that can automatically secure the critical code that undergirds prominent systems used across the globe.