You may have seen the chatter about OpenAI’s ChatGPT. It’s still in beta but has already taken off. It’s a powerful demonstration of AI because it is so accessible and usable for people outside the field.
But this isn’t just a sandbox for everyday tasks and clever beta testing; it is also something bad actors can use to create very convincing phishing emails, written in natural language and free of the tell-tale signs of bad grammar. We heard from two cyber experts on the pros and cons of the technology.
Matt Psencik, Director, Endpoint Security Specialist at Tanium, weighed in on the promise of the technology:
“ChatGPT is one of the first chatbots that has impressed me with its ability to be asked incredibly complex questions and then provide back an understandable reply. Is it free of bugs and perfect? No, but it never claimed to be, given it’s still in beta. Even once it moves to production, it will likely still not get everything right, as all learning models have some flaws which poke through to individual answers.
The power I see here is the ability to rapidly get the gist of what’s going on and then search a related topic to check that answer when starting from nothing. A good example from the cybersecurity side of the house is the ability to take a snippet of code (be that raw hex, assembly, or a high-level language like Python or C++) and ask the bot, “What does this code do?” I could spend hours taking each section of that code, searching what each keyword or flag does, and eventually figure out what it’s doing, or I can ask ChatGPT for a high-level summary and then examine broken-down sections of the explanation to rapidly learn what it all does. It’s not a magical orb that gives us all the answers, but it’s akin to a group of tutors and experts in a room answering what they know about a subject in a digestible manner that allows for rapid knowledge transfer.
ChatGPT should be used as a supplemental tool on your belt, but it’s only as good as the questions it’s asked, the models it was trained on, and, most importantly, the comprehension abilities of the person who asked the question in the first place.”
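The "What does this code do?" workflow Psencik describes can be sketched programmatically. The snippet below builds a chat-style request asking a model to summarize a piece of code. The model name (`gpt-3.5-turbo`), the message format, and the `build_explain_request` helper are illustrative assumptions following OpenAI's chat-completions convention, not a definitive implementation.

```python
# Minimal sketch, assuming an OpenAI-style chat-completions API.
# The model name and message schema are assumptions and may change.

def build_explain_request(snippet: str, model: str = "gpt-3.5-turbo") -> dict:
    """Build a chat-completion style request asking a model to summarize code."""
    return {
        "model": model,
        "messages": [
            {
                "role": "system",
                "content": "You are a security analyst. Summarize code at a high level.",
            },
            {
                "role": "user",
                # The question from the quote, followed by the code itself.
                "content": f"What does this code do?\n\n{snippet}",
            },
        ],
    }

# Example: a snippet a defender might want triaged quickly.
suspicious = "import base64; exec(base64.b64decode(blob))"
request = build_explain_request(suspicious)
# This dict would then be sent to the chat API, e.g. with the official
# openai client: client.chat.completions.create(**request)
```

The analyst still verifies the answer against the code, as the quote stresses; the model's summary is a starting point for investigation, not a verdict.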
But ChatGPT can also be used for malicious purposes.
Lomy Ovadia, SVP of Research &amp; Development at IRONSCALES, weighed in on the potentially nefarious uses of ChatGPT technology:
“Hackers can use AI models such as OpenAI’s GPT-3 to improve their phishing attacks in various ways. AI models can generate content that is nearly impossible to distinguish from genuine content, allowing malicious intent to go undetected. They also make it easier to generate large volumes of varied, polymorphic content to overwhelm existing rules-based email security systems. More experienced attackers can feed content used in previous attacks into AI models to adapt their output and evade detection. Lastly, foreign bad actors can use AI models to create professional translations of their content.”