Fake military IDs, bogus résumés: How North Korean and Chinese hackers use AI tools to infiltrate companies and other targets


AI chatbots are helping North Korean and Chinese hackers infiltrate companies by faking résumés, forging IDs, and running cyber campaigns. Illustration by Budrul Chukrut/SOPA Images/LightRocket via Getty Images
  • North Korean hackers recently used ChatGPT to create fake military IDs for phishing emails.
  • Pyongyang hackers aren't new to using AI to supercharge espionage. Chinese groups are doing it too.
  • OpenAI, Anthropic, and Google have detailed cases of their chatbots being misused by these hackers.

From bogus IDs to made-up résumés, North Korean and Chinese hackers have been using AI tools to supercharge espionage and slip into companies and other targets.

In the latest case, a North Korean hacking group known as Kimsuky used ChatGPT to generate a fake draft of a South Korean military ID. The fake IDs were attached to phishing emails that impersonated a South Korean defense institution responsible for issuing credentials to military-affiliated officials, South Korean cybersecurity firm Genians said in a blog post published Monday.

Kimsuky has been linked to a string of espionage campaigns against individuals and organizations in South Korea, Japan, and the US. In 2020, the US Department of Homeland Security said the group is "most likely tasked by the North Korean regime with a global intelligence-gathering mission."

ChatGPT blocks attempts to generate official government IDs. But the model could be coaxed into producing convincing mock-ups if the prompt was framed as a "sample design for legitimate purposes rather than reproducing an actual military ID," Genians said.

This is not the first time North Korean hackers have used AI to infiltrate foreign entities. Anthropic said in a report last month that North Korean hackers used its Claude tool to secure and maintain fraudulent remote employment at American Fortune 500 tech companies. The hackers used Claude to spin up convincing résumés and portfolios, pass coding tests, and even complete real technical assignments once they were on the job.

US officials said last year that North Korea was placing people in remote positions in US firms using false or stolen identities as part of a mass extortion scheme.

China's hackers are doing it, too

Anthropic said in the same report that a Chinese actor spent over nine months using Claude as a full-stack cyberattack assistant to target major Vietnamese telecommunications providers, agricultural systems, and government databases.

The hacker used Claude as a "technical advisor, code developer, security analyst, and operational consultant throughout their campaign," Anthropic said.

Anthropic said it had implemented new ways to detect misuse of its tools.

Chinese hackers have also been turning to ChatGPT for help with their cyber campaigns, according to an OpenAI report published in June. The hackers asked the chatbot to generate code for "password bruteforcing," scripts that guess thousands of username and password combinations until one works. They used ChatGPT to dig up information on US defense networks, satellite systems, and government ID verification cards.
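To see why such scripts are trivial for a chatbot to produce, consider that brute forcing is little more than a loop over candidate credentials. The sketch below is a minimal, self-contained illustration, not the code described in OpenAI's report: it checks a small wordlist against a locally stored SHA-256 hash, and every name in it is hypothetical.

```python
import hashlib

# Hypothetical demo target: a locally stored SHA-256 hash standing in
# for a credential store. No real system or network is involved.
TARGET_HASH = hashlib.sha256(b"dragon42").hexdigest()

# A tiny stand-in for the multi-thousand-entry wordlists real scripts use.
WORDLIST = ["123456", "password", "letmein", "dragon42", "qwerty"]

def brute_force(target_hash, candidates):
    """Hash each candidate and compare it to the target; return the match."""
    for candidate in candidates:
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

if __name__ == "__main__":
    hit = brute_force(TARGET_HASH, WORDLIST)
    print(f"Match found: {hit}" if hit else "No match in wordlist")
```

The loop only succeeds when a system lets it run unchecked, which is why defenders lean on rate limiting, account lockouts, and multi-factor authentication to blunt this class of attack.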

The OpenAI report flagged a China-based influence operation that used ChatGPT to generate social media posts designed to stoke division in US politics, including fake profile images to make the accounts look like real people.

"Every operation we disrupt gives us a better understanding of how threat actors are trying to abuse our models, and enables us to refine our defenses," OpenAI said in the June report.

It's not just Claude and ChatGPT. North Korean and Chinese hackers have experimented with Google's Gemini to expand their operations. Chinese groups used the chatbot to troubleshoot code and obtain "deeper access to target networks," while North Korean actors used Gemini to draft fake cover letters and scout IT job postings, Google said in a January report.

Google said Gemini's safeguards prevented hackers from using it for more sophisticated attacks, such as accessing information to manipulate Google's own products.

OpenAI, Anthropic, and Google did not respond to a request for comment from Business Insider. The companies have said they published their findings on hackers to help others improve security.

AI makes hacking easier

Cybersecurity experts have long warned that AI has the capacity to make hacking and disinformation operations easier.

Hackers have been using AI models to infiltrate companies, Yuval Fernbach, the chief technology officer of machine learning operations at software supply chain company JFrog, told Business Insider in a report published in April.

"We are seeing many, many attacks," Fernbach said, adding that malicious code is easily hidden inside open-source large language models. Hackers typically shut things down, steal information, or change the output of a website or tool.

Online businesses have also been hit by deepfakes and scams. Rob Duncan, the VP of strategy at the cybersecurity firm Netcraft, told Business Insider in a June report that he isn't surprised at the surge in personalized phishing attacks against small businesses.

GenAI tools now allow even a novice lone wolf with little technical know-how to clone a brand's image and write flawless, convincing scam messages within minutes, Duncan said. With cheap tools, "attackers can more easily spoof employees, fool customers, or impersonate partners across multiple channels," he added.
