
Nearly Half of Security Professionals Cite Generative AI as a Top Risk, New HackerOne Data Reveals

As the use of artificial intelligence rapidly expands, security professionals are growing increasingly concerned about its risks. New data from HackerOne, a leader in human-powered security, shows that 48% of security professionals view generative AI (GenAI) as one of the most significant security risks to their organizations. The finding comes from a survey of 500 security professionals and highlights their concerns about AI misuse and vulnerabilities.


The early findings, released ahead of HackerOne’s annual Hacker-Powered Security Report, shed light on the most pressing concerns. Top issues include the potential leaking of AI training data (35%), unauthorized use of AI within organizations (33%), and the risk of AI models being hacked by external actors (32%). These concerns underscore a growing realization: while AI offers operational benefits, it also creates new attack vectors that security teams must address.


In response to these challenges, most security professionals point to external audits. About 68% of respondents said that an external, unbiased review of AI implementations is the most effective way to identify security issues. This practice, known as AI red teaming, involves bringing in independent researchers to probe AI systems for vulnerabilities, biases, and potential exploits. Organizations such as Anthropic, Adobe, and Snap have already turned to global security researcher communities to help secure their AI deployments.


"While we're still reaching industry consensus around AI security best practices, there are some clear tactics where organizations have found success," said Michiel Prins, co-founder of HackerOne. "Leaders in tech are leveraging external security researchers to get an expert third-party perspective on their AI models."


As AI becomes more embedded in both offensive and defensive strategies, a HackerOne-sponsored SANS Institute report indicates that 58% of respondents predict an "arms race" between security teams and cybercriminals, driven by AI. While AI helps streamline tasks and improve productivity (71% of respondents report satisfaction with AI's ability to automate routine work), there is concern that cybercriminals are using the same technology to their advantage. AI-powered phishing campaigns (79%) and automated vulnerability exploitation (74%) top the list of threats that worry security teams most.


“Security teams must find the best applications for AI to keep up with adversaries while also considering its existing limitations - or risk creating more work for themselves,” noted Matt Bromiley, analyst at the SANS Institute. “AI should be seen as a tool that enables teams to focus on strategic activities, rather than as a threat to jobs.”
