EU’s Draft AI Code Draws HackerOne’s Praise for Red-Teaming and Bug Bounties
- Cyber Jill

- Jul 10
- 2 min read
The European Union has taken another definitive step toward shaping global AI governance with its newly unveiled draft Code of Practice for general-purpose AI (GPAI) models—an ambitious voluntary framework tied to the sweeping AI Act set to take effect next month.
The code, crafted by 13 independent experts and released Thursday by the European Commission, invites developers behind major models—including OpenAI, Meta, Google, Anthropic, and France’s Mistral—to commit to a range of transparency, copyright, and safety measures in exchange for legal certainty. But beneath the transparency mandates and copyright controls lies something far more interesting to cybersecurity insiders: a firm embrace of red-teaming, bug bounty incentives, and whistleblower protections.
Among the first to applaud these built-in security safeguards is HackerOne, a platform known for crowdsourced vulnerability disclosure and ethical hacking. Ilona Cohen, the company’s Chief Legal and Policy Officer, welcomed the code’s security-first provisions.
“HackerOne believes that securing AI systems and ensuring that they perform as intended is essential for establishing trust in their use and enabling their responsible deployment,” Cohen said in a statement. “We are pleased that the Final Draft of the General-Purpose AI Code of Practice retains measures crucial to testing and protecting AI systems, including frequent active red-teaming, secure communication channels for third parties to report security issues, competitive bug bounty programs, and internal whistleblower protection policies.”
While the broader media narrative has focused on copyright transparency—like public training data disclosures and crawler limits on protected content—the security requirements buried deeper in the draft code reflect a growing concern about AI’s systemic risks and susceptibility to adversarial manipulation.
The EU code explicitly calls on developers of large language models like GPT, Gemini, LLaMA, and Claude to establish robust frameworks for assessing and mitigating those risks. That includes evaluation across multiple methodologies—not just for performance metrics, but also for safety, unintended consequences, and exploitable flaws.
For HackerOne, which has long advocated for security-by-design in emerging technologies, the inclusion of structured red-teaming and proactive third-party reporting channels signals a sea change in how the EU expects AI to be secured.
Importantly, adherence to the code is voluntary—but opting in provides a “clear, collaborative route” to eventual compliance with the binding obligations of the EU’s AI Act, said EU tech chief Henna Virkkunen.
The code is expected to be finalized by year’s end, pending approval from EU member states and the Commission. But the timeline is already moving: GPAI-related rules become legally binding on August 2, 2025, with enforcement staggered based on model release dates. New models released after August 2025 will face immediate compliance deadlines; existing models get until 2027.
Until then, the message from Brussels is clear: transparency and security aren’t just guidelines—they’re prerequisites for AI’s legitimacy. And for the hackers behind HackerOne, that’s a future worth investing in.


