Google released a report Wednesday revealing that hackers are using the company’s Gemini artificial intelligence to devise more effective cyberattacks.
Hackers from China, Iran, and North Korea have been especially aggressive at taking advantage of Gemini’s capabilities.
The report from the Google Threat Intelligence Group (GTIG), titled “Adversarial Misuse of Generative A.I.,” noted that A.I. is “poised to transform digital defense” by helping cybersecurity experts with tasks ranging “from sifting through complex telemetry to secure coding, vulnerability discovery, and streamlining operations.”
A.I. is also transforming digital offense and, at the moment, it might just be empowering the black hats a bit more. GTIG monitored the way “threat actors” are using Gemini, an A.I. chatbot that accepts requests in plain language from all users. The chatbot uses a massive accumulation of language and meaning known as a large language model (LLM) to interpret questions and put together coherent responses.
Google can also monitor the way people use its chatbot, since every query passes through the company’s systems. GTIG investigated how threat actors are using Gemini and came to some interesting conclusions.
One finding was that hackers gain the same benefit from A.I. as everyone else does: a tremendous improvement in efficiency. Now that chatbots like Gemini have learned to understand plain-language requests with a high degree of accuracy, they have become highly efficient digital assistants. A.I. can perform thousands of lightning-fast searches and collate the results to answer questions posed by human users. Hackers can definitely benefit from having such potent research tools at their fingertips.
“Threat actors are experimenting with Gemini to enable their operations, finding productivity gains but not yet developing novel capabilities. At present, they primarily use A.I. for research, troubleshooting code, and creating and localizing content,” GTIG found.
The report said threat actors are using Gemini to “support several phases of the attack lifecycle, including researching potential infrastructure and free hosting providers, reconnaissance on target organizations, research into vulnerabilities, payload development, and assistance with malicious scripting and evasion techniques.”
“Rather than enabling disruptive change, generative A.I. allows threat actors to move faster and at higher volume,” the report said. “Current LLMs on their own are unlikely to enable breakthrough capabilities for threat actors.”
This includes “information operations” or IO threats, which basically means disinformation and psychological warfare. GTIG found IO threat actors using Gemini for “research; content generation including developing personas and messaging; translation and localization; and to find ways to increase their reach.”
The report identified the heaviest users of Gemini for both hacking and IO threats as Iran, followed by China, while the cyberthreat-heavy country that seemed the least enthusiastic about using Gemini was Russia. Fully three-quarters of the Gemini usage linked to information operations came from Iran.
One of the most notorious Iranian threat actors, a state-sponsored group known as APT42 or “Charming Kitten,” employed Gemini to refine its phishing campaigns and conduct “reconnaissance” against “defense experts and organizations.”
Phishing is the dark art of creating realistic-looking emails that appear to come from trusted sources, but which either deliver malware payloads to unsuspecting readers or trick them into revealing sensitive information, such as account numbers, user names, and passwords.
APT42 launched a major campaign in May 2024 to infiltrate Western media, academic institutions, and non-governmental organizations (NGOs), using “social engineering schemes” to trick the targets into sharing sensitive information – precisely the sort of cyber-espionage that would benefit from A.I. research tools.
APT42 is also infamous for hacking into Donald Trump’s 2024 presidential campaign and for stealing phone calls and text messages from the smartphones of its victims. APT42 has used similar techniques against Iranian dissidents living overseas.
China’s state-sponsored hackers appear to use Gemini and other generative A.I. systems to conduct reconnaissance against their targets, refine their malicious code, and devise methods of digging deeper into networks they have penetrated.
North Korea uses A.I. to “support several phases of the attack lifecycle,” from increasing the potency of malware programs to evading detection by security teams. North Korea also made heavy use of Gemini in a scheme to infiltrate Western tech companies by remotely applying for information technology jobs, the report claimed.
On Thursday, the Department of Justice (DOJ) indicted five people, two of them U.S. citizens, in connection with one such scheme. The conspirators allegedly funneled almost a million dollars to the brutal North Korean regime. North Korea has been running numerous operations to infiltrate IT companies ever since the Wuhan coronavirus pandemic made remote work commonplace.
GTIG said threat actors have attempted to use Gemini to abuse other Google products, including “researching techniques for Gmail phishing, stealing data, coding a Chrome infostealer, and bypassing Google’s account verification methods,” but these efforts were generally unsuccessful. The report credited Gemini’s safeguards for withstanding efforts to make it generate dangerous content or malicious code.
Both dangerous hackers and garden-variety online miscreants have attempted to “jailbreak” Gemini, crafting queries intended to trick the A.I. into bypassing its safety controls and generating forbidden information. Some of these efforts were quite elaborate, but according to GTIG, none of them have been successful.
“Gemini did not produce malware or other content that could plausibly be used in a successful malicious campaign. Instead, the responses consisted of safety-guided content and generally helpful, neutral advice about coding and cybersecurity,” the report said.
The consensus among cybersecurity experts is that A.I. has not been a “game changer” for hackers, but it has been useful in numerous small ways. Like most revolutionary IT developments, A.I. seems to be helping attackers a little more than defenders at the moment, but the balance is very close.
“Historically, defense has been hard, and technology hasn’t solved that problem. I suspect A.I. won’t do that, either. But we don’t know yet,” cyberspace policy director Adam Segal of the Council on Foreign Relations (CFR) told Voice of America News (VOA) after reviewing the Google report.
“If an attacker can use something to find a vulnerability in software, so, too, is the tool useful to the defender to try to find those themselves and patch them,” said Center for a New American Security researcher Caleb Withers, though the attackers arguably keep an edge because they have the initiative.
Google’s president for global affairs, Kent Walker, noted that the relatively upbeat assessment by GTIG might not hold up for long if malevolent powers like China seize the advantage in A.I. development.
“The U.S. is racing to pioneer breakthrough technologies like AI and quantum computing. At the same time, each day brings news stories about malicious cyber actors burrowing into American telecommunications networks, energy grids and water plants to hold infrastructure hostage and spy on citizens,” he wrote on Google’s cybersecurity blog.
Walker recommended stronger leadership on A.I. from the private sector, coupled with “public sector leadership in technology procurement and deployment,” plus “heightened public-private collaboration on cyber defense.”
“We need government support through strategic approaches to trade and export policies that help American firms outcompete China and its national champions in building the data centers and platforms used by people around the world,” he said.
“As I’ve said before, America holds the lead in the A.I. race – but our advantage may not last. By working together we can build on and accelerate America’s A.I. edge, boost our national security and seize the opportunity ahead,” he advised.
To put it bluntly, GTIG found that when Chinese and Iranian hackers asked Gemini to write better malware for them, Gemini said no. China’s A.I. cannot be counted upon to refuse, any more than it can be counted upon to answer questions about what happened in Tiananmen Square on June 4, 1989.