
Trustwave SpiderLabs Dissects the Cyber Ramifications of OpenAI's ChatGPT

Updated: Dec 23, 2022

OpenAI's ChatGPT and its AI chatbot capabilities have made headlines since the tool's release in November, but few understand the tech's cyber ramifications. Trustwave SpiderLabs recently published a report illustrating how ChatGPT can be used as a virtual colleague to help discuss and perfect cyber exploits.


We spoke with Karl Sigler, Senior Security Research Manager, Trustwave SpiderLabs, to learn more about the cyber ramifications of ChatGPT.

What makes ChatGPT different from the other AI solutions available in the market?


There are a lot of AI-based chatbots, and due to the popularity of ChatGPT they will likely see a massive influx of both interest and financial backing. What differentiates them is the methodology used to train the AI as well as the dataset used for that training. That competition, and the results it produces, will drastically improve AI in general.

Karl Sigler, Trustwave

How can White Hat hackers use ChatGPT for their benefit?


There are likely many ways White Hats might use AI down the road: help in identifying vulnerabilities in code, malware analysis, and proper identification of phishing emails. In our recent SpiderLabs ChatGPT blog, we show a SQL injection example where ChatGPT identifies a weakness in a code snippet and then creates a cURL request to exploit it. A similar injection could alternatively be used to obtain sensitive information from a database.
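The blog's exact snippet isn't reproduced here, but a minimal sketch of the kind of flaw involved, and the kind of cURL request ChatGPT can produce against it, might look like the following. This is a hypothetical Flask endpoint; the route, database, and parameter names are illustrative.

# Hypothetical endpoint with the sort of SQL injection flaw ChatGPT
# can be asked to spot; names here are made up for illustration.
import sqlite3
from flask import Flask, request

app = Flask(__name__)

@app.route("/user")
def get_user():
    conn = sqlite3.connect("app.db")
    name = request.args.get("name", "")
    # Vulnerable: user input is concatenated directly into the SQL
    # string, so the attacker controls part of the query.
    query = "SELECT id, email FROM users WHERE name = '" + name + "'"
    rows = conn.execute(query).fetchall()
    conn.close()
    return {"users": rows}

# An exploit request of the kind ChatGPT generated might resemble:
#   curl "http://target/user?name=' OR '1'='1"
# which turns the WHERE clause into a tautology and returns every row.

The fix, which ChatGPT will also typically suggest when asked, is a parameterized query such as conn.execute("SELECT id, email FROM users WHERE name = ?", (name,)).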

How can ChatGPT be used for malicious purposes?


So far, ChatGPT is not being used as an attack platform, but its usefulness to cybercriminals is already obvious. Researchers have used the platform to generate malware and craft convincing phishing emails.

Is there anything that we can do to prevent the malicious use of solutions like ChatGPT? Or is this just a byproduct of the advancement in AI?


Right now, ChatGPT is not a direct threat to any organization's security posture. The main issue with ChatGPT is the speed with which it can generate functional code or other text. If used at all, it will likely supplement existing criminal operations, at least until better controls are put in place.

Where do you see AI solutions for exploits heading in 2023 and beyond?


These AI engines are getting better and faster. The use cases for them will be explored in depth in the coming year, likely in creative ways we haven't even thought of yet. From an exploit perspective, I imagine we'll see AI used as a tool on both sides. While it may provide threat actors with some quick exploit code, it will also give defenders and infosec professionals the same tools to audit software for bugs and vulnerabilities.
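As one illustration of that defensive direction, a script can feed a suspect code snippet to a large language model and ask it to flag vulnerabilities. Below is a minimal sketch, assuming the openai Python package and the text-davinci-003 completion model available at the time of writing; the prompt wording and the audited snippet are illustrative, not a vetted audit workflow.

# Minimal sketch of LLM-assisted code auditing, assuming the openai
# Python package (pip install openai) and an API key in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# The snippet to audit; here, the vulnerable query from the example above.
SNIPPET = '''
query = "SELECT id, email FROM users WHERE name = '" + name + "'"
rows = conn.execute(query).fetchall()
'''

prompt = (
    "Review the following code for security vulnerabilities and "
    "explain how to fix any you find:\n" + SNIPPET
)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=300,
    temperature=0,  # deterministic output suits auditing better than creative text
)
print(response.choices[0].text.strip())

Output from a sketch like this would still need review by a human analyst; the point is the speed of a first pass, not a replacement for one.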


###
