The Guardian | AI-powered hacking has exploded into industrial-scale threat, Google says by Aisha Down and Dan Milmo
AI-generated summary. Read the full article for complete information.
In a recent Google threat-intelligence report, AI-powered hacking is described as already an industrial-scale threat, with criminal groups and state-linked actors from China, North Korea and Russia leveraging commercial large-language models such as Gemini, Claude and OpenAI tools to accelerate, scale and sharpen attacks, including faster malware development, persistence and zero-day exploitation. Google's chief analyst John Hultquist says the "AI vulnerability race" has begun, noting that AI lets threat actors test operations, refine exploits and launch mass-exploitation campaigns, as illustrated by a group reportedly on the brink of using a non-Mythos LLM for a large-scale zero-day attack. Anthropic's decision to withhold its Mythos model after it uncovered pervasive zero-day flaws underscores the danger, while experts such as UCL's Steven Murdoch warn that AI is reshaping vulnerability discovery. Meanwhile, the Ada Lovelace Institute cautions that the public-sector productivity gains touted for AI may rest on untested assumptions, urging more rigorous, long-term evaluation of AI's real-world impact.
#Google #Anthropic #OpenAI #JohnHultquist #Gemini #Claude #Mythos #OpenClaw #AdaLovelace #UKgovernment #aiartificialintelligence #business #cybercrime #cyberwar #hacking #technology