
HackerOne Highlights Confidence and Challenges in Defending Against AI-Driven Cyber Threats

HackerOne disclosed striking findings from a recent research survey that underscore a significant gap between IT and security professionals' confidence in handling AI-driven cyber threats and the realities of the evolving threat landscape. Although 95% of respondents expressed confidence in their ability to defend against AI-driven threats, one-third admitted that their organization had faced an AI-related security incident in the past year.

The survey also examined how security teams are adapting their strategies and budgets to mitigate risks associated with AI technologies. Notably, nearly three-quarters of those surveyed have allocated at least 20% of their security budgets this year to tackling AI security risks. Respondents attributed this investment to a mix of factors, including new AI-focused regulations, employees' internal adoption of generative AI (GenAI) tools, and firsthand experience with security incidents triggered by AI, reflecting the growing integration of AI in corporate environments.

Michiel Prins, co-founder of HackerOne, emphasized the importance of a cautious approach to AI security: “We must all take GenAI threats seriously, but confidence should come with understanding, and none of us fully comprehend what the biggest GenAI security and safety threats are for most organizations quite yet,” he said. Prins pointed out that AI red teaming, or adversarial testing, is being recognized as a critical method to proactively identify and mitigate these risks. According to the survey, 37% of organizations have already implemented AI red teaming initiatives to strengthen their defenses against potential AI exploits.

In response to these challenges, HackerOne has been actively involved in AI red teaming engagements with major organizations such as Zoom, Snap, and PayPal, aiming to enhance the security of AI tools and features through rigorous pentesting, security assessments, and bug bounty programs. Additionally, in February, HackerOne introduced its AI copilot, Hai, a GenAI tool designed to boost program insights for both customers and hackers, now accessible via the HackerOne platform.

This concerted effort to bolster AI security capabilities reflects a growing recognition of the sophistication of AI-related threats and the need for comprehensive strategies to address them. It also underscores the industry's move toward more resilient, forward-thinking cybersecurity defenses.
