Featured Article

Is ChatGPT the newest gateway to fraud?

Cybercriminals and hackers are often admired, albeit begrudgingly, for their skills. Hacking and cybercrime require significant technical knowledge, which has thankfully restricted these activities to a small, select group. But the new AI chatbot ChatGPT could change all that. Here’s why, and what fraud examiners can do to fight back.



Just weeks after OpenAI, a U.S. artificial intelligence (AI) research laboratory, launched ChatGPT in November 2022, the U.S.-Israeli cybersecurity firm Check Point noticed a spike in dark web conversations about the tool. The AI chatbot has astounded, but also unnerved, the public with its ability to perform many human tasks, from writing code to passing an MBA-level exam to producing polished prose. In one popular hacking forum, however, a post titled “ChatGPT – Benefits of Malware” described how a fraudster used the AI tool to recreate malware strains, such as so-called stealers, which infiltrate computer systems, search for specific types of files and then exfiltrate those files without the owners’ knowledge. While the creation of malware is nothing new, the dark web discussion was disturbing because, as Check Point notes, its apparent purpose was to show less tech-savvy fraudsters that they too could use ChatGPT to write code for malicious purposes. (See “Cybercriminals starting to use ChatGPT,” Check Point, Jan. 6, 2023.)

As evidence, Check Point highlighted another dark web post in which a cybercriminal dubbed USDoD described how OpenAI helped him create his first-ever script, a series of instructions similar to a computer program but usually simpler to write. While scripting isn’t malicious in and of itself, Check Point notes that USDoD likely has limited technical skills and is known in the underground community for selling access to compromised companies and stolen data.

