Cybercriminals still not fully on board the AI train (yet)

In November 2023, Sophos X-Ops published research exploring threat actors’ attitudes towards generative AI, focusing on discussions on selected cybercrime forums. While we did note a limited amount of innovation and aspiration in these discussions, there was also a lot of skepticism.

Given the pace at which generative AI is evolving, we thought we’d take a fresh look to see if anything has changed over the past year.

We noted that there does seem to have been a small shift, at least on the forums we investigated; a handful of threat actors are beginning to incorporate generative AI into their toolboxes. This mostly applied to spamming, open-source intelligence (OSINT), and, to a lesser extent, social engineering (although it’s worth noting that Chinese-language cybercrime groups conducting ‘sha zhu pan’ fraud campaigns make frequent use of AI, particularly to generate text and images).

However, as before, many threat actors on cybercrime forums remain skeptical about AI. Discussions about it are limited in volume compared to ‘traditional’ topics such as malware and Access-as-a-Service. Many posts focus on jailbreaks and prompts, both of which are commonly shared on social media and other sites.

We only saw a few primitive and low-quality attempts to develop malware, attack tools, and exploits – which in some cases led to criticism from other users, disputes, and accusations of scamming (see our four-part series on the strange ecosystem of cybercriminals scamming each other).

There was some evidence of innovative ideas, but these were purely aspirational; sharing links to legitimate research tools and GitHub repositories was more common. As we found last year, some users are also using AI to automate routine tasks, but the consensus seems to be that most don’t rely on it for anything more complex.

Interestingly, we also noted cybercriminals adopting generative AI for use on the forums themselves, to create posts and for non-security extracurricular activities. In one case, a threat actor confessed to talking to a GPT every day for almost two years, in an attempt to help them cope with their loneliness.

Statistics

As was the case a year ago, AI still doesn’t appear to be a hot topic among threat actors, at least not on the forums we examined. On one prominent Russian-language forum and marketplace, for example, we saw fewer than 150 posts about GPTs or LLMs in the last year, compared to more than 1,000 posts on cryptocurrency and over 600 threads in the ‘Access’ section (where access to networks is bought and sold) in the same period.

Another prominent Russian-language cybercrime site has had a dedicated AI area since 2019 – but there are fewer than 300 threads there at the time of this writing, compared to over 700 threads in the ‘Malware’ section and more than 1,700 threads in the ‘Access’ section in the last year. Nevertheless, while AI topics have some catching up to do, one could argue that this is relatively fast growth for a topic that has only become widely known in the last two years, and is still in its infancy.

A popular English-language cybercrime forum, which specializes in data breaches, had more AI-related posts. However, these were predominantly centered around jailbreaks, tutorials, or stolen/compromised ChatGPT accounts for sale.

It seems, at least for the moment, that many threat actors are still focused on ‘business as usual,’ and are only really exploring generative AI in the context of experimentation and proof-of-concepts.

Malicious development

GPT derivatives

In November 2023, we reported on ten ‘GPT derivatives’, including WormGPT, FraudGPT, and others. Their developers typically marketed them as GPTs designed specifically for cybercrime – although some users alleged that they were merely jailbroken versions of ChatGPT and similar tools, or custom prompts.

In the last year, we saw only three new examples on the forums we researched:

  1. Ev1L-AI: Advertised as a free alternative to WormGPT, Ev1L-AI was promoted on an English-language cybercrime forum, but forum staff noted that the provided link was not working
  2. NanoGPT: Described as a “non-limited AI based on the GPT-J-6 architecture,” NanoGPT is apparently a work in progress, trained on “some GitHub scripts of some malwares [sic], phishing pages, and more…” The current status of this project is unclear
  3. HackerGPT: We saw several posts about this tool, which is publicly available on GitHub and described as “an autonomous penetration testing tool.” We noted that the provided domain has now expired (although the GitHub repository appears to still be live as of this writing, as does an alternative domain), and saw a rather scathing response from another user: “No different with [sic] normal chatgpt.”

Figure 1: A threat actor advertises ‘Ev1l-AI’ on a cybercrime forum

Figure 2: On another cybercrime forum, a threat actor provides a link to ‘HackerGPT’

Spamming and scamming

Some threat actors on the forums seem increasingly interested in using generative AI for spamming and scamming. We saw a few examples of cybercriminals offering tips and asking for advice on this topic, including using GPTs to create phishing emails and spam SMS messages.

Figure 3: A threat actor shares advice on using GPTs for sending bulk emails

Figure 4: A threat actor provides some tips for SMS spamming, including advice to “ask chatgpt for synonyms”

Interestingly, we also saw what appears to be a commercial spamming service that uses ChatGPT, although the poster didn’t provide a price:

Figure 5: An advert for a spamming service leveraging ChatGPT

Another tool, Bluepony – which we saw a threat actor, ostensibly the developer, sharing for free – claims to be a web mailer, with the ability to generate spam and phishing emails:

Figure 6: A user on a cybercrime forum offers to share ‘Bluepony.’ The text, translated from Russian, reads: “Good day to all, we have decided not to hide in the shadows like ghouls anymore and to show ourselves to the world and come out of private, to look out into the public light, in order to provide a completely free version of Bluepony. Webmailer – works mainly on requests based on BAS, there are small moments when GMAIL needs authorization through a browser, but we are trying to do it as quickly as possible. In the free version, 1 thread will be available, but even with 1 thread on requests it shoots like a machine gun. Bluepony Free works with such domains as: Aol, Yahoo, Gmail, Mail.com, Gmx.com, Web.de, Mail.ru, Outlook, Zoho and even SMTP (we will work on it here). In the future, we will add more domains. Some domains may fall off, but we are trying to fix them urgently, because they also do not stand still and can add all sorts of things. The mailer has OPENai gpt [emphasis added], you can generate emails and images, html emails… a bunch of settings and moments, so you can use AI during the mailing, you describe the required topic and details in the prompt and receive a 100% generated email during the mailing itself.”

Some threat actors may also be using AI to better target victims who speak other languages. For instance, in a social engineering area of one forum, we saw a user discussing the quality of various tools, including ChatGPT, for translating between Russian and English:

Figure 7: A threat actor starts a discussion about the quality of various tools, including AI, for translation

OSINT

We came across one post where a threat actor stated that they used AI for conducting open-source intelligence (OSINT), albeit only to save time. While the poster didn’t provide any further context, cybercriminals perform OSINT for several reasons, including ‘doxing’ victims and conducting reconnaissance against companies they plan to attack:

I’ve been using neural networks for OSINT for a long time. However, if we talk about LLMs and the like, they can’t fully replace a person in the process of searching for and analyzing information. The most they can do is prompt and help analyze information based on the data you enter into them, but you need to know how and what to enter, and double-check everything behind them. At most, they’re just an assistant that helps save time.

Personally, I like neurosearch systems more, such as Yandex neurosearch and similar ones. At the same time, services like Bard/Gemini don’t always cope with the tasks set, since there are often a lot of hallucinations and the capabilities are very limited. (Translated from Russian.)

Malware, scripts, and exploits

As we noted in our previous report, most threat actors don’t yet appear to be using AI to create viable, commodified malware and exploits. Instead, they’re creating experimental proof-of-concepts, often for trivial tasks, and sharing them on forums:

Figure 8: A threat actor shares code for a ‘Netflix Checker Tool’, written in Python “with the help of ChatGpt”

We also saw threat actors sharing GPT-related tools from other sources, such as GitHub:

Figure 9: A threat actor shares a link to a GitHub repository

A further example of threat actors sharing legitimate research tools was a post about Red Reaper, a tool originally presented at RSA 2024, which uses LLMs to identify ‘exploitable’ sensitive communications from datasets:

Figure 10: A threat actor shares a link to the GitHub repository for Red Reaper v2

As with other security tooling, threat actors are likely to weaponize legitimate AI research and tools for illicit ends, in addition to, or instead of, developing their own solutions.

Aspirations

However, much of the discussion around AI-enabled malware and attack tools is still aspirational, at least on the forums we explored. For example, we saw a post titled “The world’s first AI-powered autonomous C2,” only for the author to then admit that “this is still just a product of my imagination for now.”

Figure 11: A threat actor promises “the world’s first AI-powered autonomous C2,” before conceding that the tool is “a product of my imagination” and that “the technology to create such an autonomous system is still in the early research stages…”

Another threat actor asked their peers about the feasibility of using “voice cloning for extortion of Politicians and large crypto influencers.” In response, a user accused them of being a federal agent.

Figure 12: On a cybercrime forum, a user asks for recommendations for voice cloning projects in order to extort people, only to be accused by another user of being an FBI agent

Tangential usage

Interestingly, some cybercrime forum discussions around AI weren’t related to security at all. We saw several examples of this, including a guide on using GPTs to write a book, and recommendations for various AI tools to create “high quality videos.”

Figure 13: A user on a cybercrime forum shares generative AI prompts for writing a book

Of all the non-security discussions we saw, a particularly interesting one was a thread by a threat actor who claimed to feel alone and isolated because of their occupation. Perhaps because of this, the threat actor claimed that they had for “almost the last 2 years…been talking everyday [sic] to GPT4” because they felt as if they couldn’t talk to people.

Figure 14: A threat actor gets deep on a cybercrime forum, confessing to talking to GPT4 in an attempt to reduce their sense of isolation

As one user noted, this is “bad for your opsec [operational security],” and the original poster agreed in a response, stating that “you’re right, it’s opsec suicide for me to tell a robot that has a partnership with Microsoft about my life and my problems.”

We are neither qualified nor inclined to comment on the psychology of threat actors, or on the societal implications of people discussing their mental health issues with chatbots – and, of course, there’s no way of verifying that the poster is being truthful. However, this case, and others in this section, may suggest that a) threat actors are not exclusively applying AI to security topics, and b) discussions on criminal forums sometimes go beyond transactional cybercrime, and can provide insights into threat actors’ backgrounds, extracurricular activities, and lives.

Forum usage

In our previous article, we identified something interesting: threat actors looking to augment their own forums with AI contributions. Our latest research revealed further instances of this, which often led to criticism from other forum users.

On one English-language forum, for example, a user suggested creating a forum LLM chatbot – something that at least one Russian-language marketplace has done already. Another user was not particularly receptive to the idea.

Figure 15: A threat actor suggests that their cybercrime forum should have its own LLM, an idea which is given short shrift by another user

Stale copypasta

We saw several threads in which users accused others of using AI to generate posts or code, typically with derision and/or amusement.

For example, one user posted an extremely long message entitled “How AI Malware Works”:

Figure 16: A threat actor gets verbose on a cybercrime forum

In a pithy response, a threat actor replied with a screenshot from an AI detector and the message “Looked exactly like ChatGPT [sic] output. Embarrassing…”

Figure 17: One threat actor calls out another for copying and pasting from a GPT tool
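As an aside, ‘AI detectors’ like the one in the screenshot generally rely on statistical signals rather than any definitive fingerprint. The sketch below is our own illustration of one common heuristic – scoring text by its perplexity under a small reference language model – and is not a tool we saw on the forums; it assumes the Python torch and transformers packages, and the choice of GPT-2 and the cut-off value are arbitrary assumptions for demonstration purposes.

```python
# A minimal sketch of perplexity-based AI-text detection. Machine-generated
# prose tends to be highly "predictable" to a language model, i.e. to score
# low perplexity. This is an illustration of the general approach only, not
# the specific detector shown in the forum screenshots.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return GPT-2's perplexity for `text` (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the mean
        # cross-entropy loss over the sequence; exp(loss) is the perplexity.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return float(torch.exp(loss))

sample = (
    "AI malware represents a paradigm shift in the cybersecurity landscape, "
    "leveraging machine learning to adapt, evolve, and evade detection."
)
ppl = perplexity(sample)
print(f"perplexity = {ppl:.1f}")
# Hypothetical cut-off for illustration only; real detectors combine many
# signals and still produce frequent false positives and negatives.
print("flagged as likely LLM output" if ppl < 40 else "no strong signal")
```

The weakness of this approach is evident in the forum disputes themselves: low perplexity is a probabilistic hint rather than proof, which is partly why such accusations are so often contested.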

In another example, a user shared code for malware they’d supposedly written, only to be accused by a prominent user of generating the code with ChatGPT.

Figure 18: A threat actor calls out specific technical errors in another user’s code, accusing them of using ChatGPT

In a later post in the same thread, this user wrote that “the thing you are doing wrong is misleading noobs with the code that doesn’t work and doesn’t really makes [sic] a lot of sense…this code was just generated with ChatGPT or something.”

In another thread, the same user advised another to “stop copy pasting ChatGPT to the forum, it is useless.”

As these incidents suggest, it’s reasonable to assume that AI-generated contributions – whether in text or in code – are not always welcomed on cybercrime forums. As in other fields, such contributions are often perceived – rightly or wrongly – as the preserve of lazy and/or low-skilled individuals looking for shortcuts.

Scams

In a few cases, we noted threat actors accusing others of using AI in the context of forum scams – either when making posts within arbitration threads, or when producing code and/or tools which were later the subject of arbitration threads.

Arbitration, as we explain in the series of articles linked above, is a process on criminal forums for when a user thinks they’ve been cheated or scammed by another. The claimant opens an arbitration thread in a dedicated area of the forum, and the accused is given an opportunity to defend themselves or provide a refund. Moderators and administrators serve as arbiters.

Figure 19: During an arbitration dispute on a cybercrime forum (concerning the sale of a tool to check for valid Brazilian identification numbers), the claimant accuses the defendant of using ChatGPT to generate their explanation

Figure 20: In another arbitration thread (this one concerning the validity of a sold dataset) on a different forum, a claimant also accuses the defendant of generating an explanation with AI, and posts a screenshot of an AI detector’s output

Figure 21: In another arbitration thread, a user claims that a vendor copied their code from ChatGPT and GitHub

Such usage bears out something we noted in our previous article – that some low-skilled threat actors are seeking to use GPTs to generate poor-quality tools and code, which are then called out by other users.

Skepticism

As with our previous research, we saw a considerable amount of skepticism about generative AI on the forums we investigated.

Figure 22: A threat actor claims that current GPTs are “Chinese rooms” (referring to John Searle’s ‘Chinese Room’ thought experiment) hidden “behind a thin veil of techbro speak”

However, as we also noted in 2023, some threat actors seemed more equivocal about AI, arguing that it’s useful for certain tasks, such as answering niche questions or automating certain work, like creating fake websites (something we researched and reported on in 2023).

Figure 23: A threat actor argues that ChatGPT is suitable for automating “shops” (fake websites) or scamming, but not for coding

Figure 24: In another thread on the same forum, a user suggests that ChatGPT is useful “for repetitive tasks.” We saw similar sentiments on other forums, with some users writing that they found tools such as ChatGPT and Copilot effective for troubleshooting or porting code

We also saw some interesting discussions about the wider implications of AI – again, something we commented on last year.

Figure 25: A user wonders whether AI will lead to more or fewer breaches

Figure 26: A user asks – possibly in response to the general tone of derision we saw elsewhere – whether people who use AI to generate text and code deserve to be denigrated

Conclusion

A year on, most threat actors on the cybercrime forums we investigated still don’t appear to be particularly enthused or excited about generative AI, and we found no evidence of cybercriminals using it to develop new exploits or malware. Of course, this conclusion is based solely on our observations of a handful of forums, and doesn’t necessarily apply to the wider threat landscape.

While a minority of threat actors may be dreaming big and have some (potentially) dangerous ideas, their discussions remain theoretical and aspirational for the moment. It’s more likely that, as with other aspects of security, the more immediate risk is threat actors abusing legitimate research and tools that are (or will be) publicly or commercially available.

There is still a significant amount of skepticism and suspicion towards AI on the forums we looked at, both from an OPSEC perspective and in the sense that many cybercriminals feel it’s ‘overhyped’ and unsuitable for their uses. Threat actors who use AI to create code or forum posts risk a backlash from their peers, either in the form of public criticism or through scam complaints. In that respect, not much has changed either.

In fact, over the last year, the only significant evolution has been the incorporation of generative AI into a handful of toolkits for spamming, mass mailing, sifting through datasets, and, possibly, social engineering. Threat actors, like anyone else, are likely eager to automate tedious, monotonous, large-scale work – whether that’s crafting bulk emails and fake sites, porting code, or locating interesting snippets of information in a large database. As many forum users noted, generative AI in its current state seems suited to these kinds of tasks, but not to more nuanced and complex work.

There may, therefore, be a growing market for some uses of generative AI in the cybercrime underground – but this may well come in the form of time-saving tools, rather than new and novel threats.

As it stands, and as we reported last year, many threat actors still seem to be adopting a wait-and-see approach – waiting for the technology to evolve further and seeing how they can best fit generative AI into their workflows.
