
Cybercriminals are using Meta’s Llama 2 AI, according to CrowdStrike


Cybercrime outfits have taken fledgling steps toward using generative AI, including Meta's Llama 2 large language model, to stage attacks, according to cybersecurity firm CrowdStrike's annual Global Threat Report, published Wednesday.

The group Scattered Spider used Meta's large language model to generate scripts for PowerShell, Microsoft's task-automation program, CrowdStrike reports. The scripts were used to download the login credentials of employees at "a North American financial services victim," according to CrowdStrike.


The authors traced Llama 2's involvement by examining the PowerShell code itself. "The PowerShell used to download the users' immutable IDs resembled large language model (LLM) outputs such as those from ChatGPT," states CrowdStrike. "In particular, the pattern of one comment, the actual command and then a new line for each command matches the Llama 2 70B model output. Based on the similar code style, Scattered Spider likely relied on an LLM to generate the PowerShell script in this activity."
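The fingerprint CrowdStrike describes is purely stylistic: repeated blocks of one comment, then the command, then a blank line. A minimal sketch of that kind of heuristic in Python is below; the function, its thresholds, and the sample script are illustrative assumptions for demonstration, not CrowdStrike's actual detection logic or the attackers' real script.

```python
def matches_llm_comment_pattern(script: str, min_blocks: int = 3) -> bool:
    """Heuristic check for the style CrowdStrike describes: repeated
    blocks of one '#' comment, then a command, then a blank line.
    Illustrative sketch only -- not CrowdStrike's detection logic."""
    lines = script.splitlines()
    blocks = 0
    i = 0
    while i < len(lines) - 1:
        is_comment = lines[i].strip().startswith("#")
        nxt = lines[i + 1].strip()
        is_command = bool(nxt) and not nxt.startswith("#")
        # A block ends with a blank line or the end of the script.
        is_blank_after = i + 2 >= len(lines) or lines[i + 2].strip() == ""
        if is_comment and is_command and is_blank_after:
            blocks += 1
            i += 3  # skip past the comment/command/blank triplet
        else:
            i += 1
    return blocks >= min_blocks

# Hypothetical sample script in the comment-command-newline style:
sample = """# Connect to the tenant
Connect-MgGraph -Scopes "User.Read.All"

# Retrieve all users
$users = Get-MgUser -All

# Export immutable IDs
$users | Select-Object OnPremisesImmutableId | Export-Csv ids.csv
"""
print(matches_llm_comment_pattern(sample))  # prints True
```

A real classifier would weigh many more signals; the point here is only that consistent comment-per-command formatting is a detectable, if weak, stylistic marker.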

The authors caution that the ability to detect generative AI-based or generative AI-enhanced attacks is currently limited, because of the difficulty of finding traces of LLM use. The firm hypothesizes that LLM use is limited thus far: “Only rare concrete observations included likely adversary use of generative AI during some operational phases.” 

But malicious use of generative AI is sure to increase, the firm projects: “AI’s continuous development will undoubtedly increase the potency of its potential misuse.”


The attacks thus far have run up against a practical constraint: the high cost of developing large language models limits the quality of output attackers can generate from the models to use as attack code.

“Threat actors’ attempts to craft and use such models in 2023 frequently amounted to scams that created relatively poor outputs and, in many cases, quickly became defunct,” the report states. 

Beyond code generation, another avenue of malicious use is misinformation. On that front, the CrowdStrike report highlights the large number of government elections this year that could be subjected to misinformation campaigns.

In addition to the US presidential election this year, “Individuals from 55 countries representing more than 42% of the global population will participate in presidential, parliamentary and/or general elections,” the authors note. 


Tampering with elections divides into high-tech and low-tech approaches. The high-tech route, say the authors, is to disrupt or degrade voting systems by tampering both with voting mechanisms and with the dissemination of voting information to voters.

The low-tech approach is misinformation, such as “disruptive narratives” that “may undermine public confidence.”

Such "information operations," or "IO," as CrowdStrike calls them, are already occurring, "as Chinese actors have used AI-generated content in social media influence campaigns to disseminate content critical of Taiwan presidential election candidates."

The firm predicts, “Given the ease with which AI tools can generate deceptive but convincing narratives, adversaries will highly likely use such tools to conduct IO against elections in 2024. Politically active partisans within those countries holding elections will also likely use generative AI to create disinformation to disseminate within their own circles.”
