top of page

Report Reveals Hackers Believe AI Can't Replace Human Creativity in Security Research

Bugcrowd, the crowdsourced cybersecurity platform, has published its "Inside the Mind of a Hacker" report for 2023, providing insights into the perspectives of hackers regarding artificial intelligence (AI) and its potential impact on security research and vulnerability management. The report, based on analysis of 1,000 survey responses from hackers on the Bugcrowd Platform, as well as millions of proprietary data points on vulnerabilities, highlights that 72% of hackers do not believe AI will replace the creativity of humans in the field.

The 2023 report places significant emphasis on generative AI, with 55% of respondents stating that it either already outperforms hackers or will do so within the next five years. However, hackers appear unfazed by the rise of generative AI, with 72% confidently asserting that it cannot replicate their level of creativity.

Regarding the applications of generative AI, hackers identified various functions where it proves useful. These include task automation (mentioned by 50% of respondents), data analysis (48%), vulnerability identification (36%), validation of findings (35%), and reconnaissance (33%). Moreover, an overwhelming majority of 64% of hackers believed that generative AI technologies have enhanced the value of ethical hacking and security research.

The Bugcrowd report has sparked interest among cybersecurity experts, who have provided their perspectives on the findings. Timothy Morris, Chief Security Advisor at Tanium, a Kirkland, Washington-based provider of converged endpoint management (XEM):

It’s good to see that effort is being put forward to test and hack generative AI itself. The risks of prompt injections and data poisoning are real. They are becoming the new "watering hole" attack, in my opinion. Using AI to write malware faster is one thing, but using it to build and launch or establish C2 (command & control) with a generative AI user's machine, is an even greater risk.

Disinformation is mentioned briefly along with bias. I believe that disinformation is one of the larger risks with generative AI. Regulating training models and output is still a big challenge. As with, "one person's garbage is another person's treasure,” the new version is, one person's disinformation is another person's truth. Nation-state actors will attempt to exploit this as will good and bad security researchers.

John Bambenek, Principal Threat Hunter at Netenrich, a San Jose, Calif.-based security and operations analytics SaaS company:

"The results on the impact of AI/ML advancements on hacking are an interesting mixed bag where most thought generative AI will outperform hackers but AL/ML will not replace human creativity. To put in more general terms, AI/ML can help automate lower value work so the humans can focus on things that humans do best and thus be more efficient. If generative AI used in code generation can help reduce software vulnerabilities generally, that is a huge win for technology and society. That being said, one of the reasons I love this industry is that there is always something new to learn and some new risks or attacks to try to fix. For instance, we are just now starting to come to grips with the cybersecurity risks of AI/ML systems and in some ways, we have to recreative everything we know about red teaming when it comes to these applications."

Craig Jones, Vice President of Security Operations at Ontinue, a Redwood City, Calif.-based managed detection and response (MDR) provider:

"One cannot deny the impact of AI on hackers' workflow, as evidenced by the significant number of hackers who have already embraced generative AI technologies. Surprisingly, 85% of hackers have utilized these tools to enhance their hacking endeavors. However, it's worth noting that only 64% currently incorporate AI into their security research workflow. The remaining 30% have expressed their intention to integrate AI in the future, recognizing its potential to streamline their hacking practices.

Among the various generative AI technologies utilized by hackers, AI-powered chatbots dominate the scene. The overwhelming majority of hackers, 98% to be exact, have used ChatGPT as their go-to chatbot, followed by Google Bard and Bing Chat AI. These chatbots prove invaluable in assisting hackers during their security research, offering automated and efficient support. But AI chatbots are just the tip of the iceberg when it comes to AI's influence on hacking. Hackers are eager early adopters of technologies, continuously exploring new possibilities to expand their skill sets and improve their efficacy. The survey reveals that hackers utilize generative AI in diverse ways, encompassing text generation, code automation, search insights, chatbot automation, image generation, data processing, and even machine learning platforms.

As the symbiotic relationship between hackers and AI continues to evolve, it is evident that AI has become an indispensable tool for hackers. It empowers them to automate processes, analyze data, and augment their problem-solving capabilities. However, the human element, with its creativity and adaptability, remains a vital component that sets hackers apart. The future promises exciting developments as hackers and AI forge a path towards a safer and more secure digital landscape."

Mike Heredia, Vice President, EMEA at XM Cyber, a Tel Aviv-based provider of hybrid cloud security:

"It comes as no surprise that 84% of hackers believe that less than half of companies understand their true risk of being breached as the majority or organizations do not currently leverage technology that continuously understands exploitable attack paths covering the entire attack surface – this is a major failing as organizations still over focus on detection and response technologies.

With the much hyped skills shortage in the industry, automation and adoption of AI can help plug the gaps and help defenders stay several steps ahead of the threat actors."


bottom of page