
Biden's Executive Order Charts a Middle Path in Governing Artificial Intelligence

Pressure on governments to act has mounted, driven by concerns about AI's impact since the rise of generative AI applications like ChatGPT. AI company executives have testified before Congress, emphasizing the technology's potential and pitfalls, while activist groups have urged the government to counter AI's misuse, such as the creation of cyberweapons and deepfakes.

President Biden's new artificial intelligence executive order seeks to chart a middle path. It allows continued AI development while introducing modest regulations and signaling increased government oversight. Unlike social media, which went unregulated for over a decade, AI will face scrutiny early: the order demonstrates the administration's intent to monitor the technology closely.

The executive order addresses various stakeholders' concerns. AI safety advocates will appreciate new requirements for companies developing powerful AI systems. These firms must notify the government and share safety-testing results before releasing their models publicly.

Cloud providers like Microsoft, Google, and Amazon will have to inform the government about foreign customers. The order also calls for the development of standardized tests to measure AI model performance and safety.

For AI ethics advocates, the order includes provisions aimed at preventing AI algorithms from exacerbating discrimination, and it offers guidance on watermarking AI-generated content to combat misinformation.

While not every stakeholder will be entirely satisfied, the executive order strikes a pragmatic balance between innovation and caution. It also signals that the U.S. government is moving swiftly, recognizing that AI's pace of development remains rapid. Industry experts weighed in on what this executive order could mean for the future of AI.

Poppy Gustafsson, CEO, Darktrace:

“AI safety and AI innovation are not in conflict: they go hand in hand. In this journey to build these exciting new technologies that can substantially benefit society, the safer we make AI, the faster we’ll be able to realise the opportunities.

AI is already a broad toolkit, with a wide range of applications. Ensuring AI is safe is not a one-size-fits-all challenge: it needs to be tailored to the use case. It is up to humans to decide when, how and where to use AI. It is not something that is being done to us.

While I am mindful of the risks, I remain an AI optimist. In Darktrace’s world of cyber security, AI is already essential, as it’s the only way to spot novel attacks. I’m excited to see how the conversation evolves, and what action it will drive.”

Morgan Wright, Chief Security Advisor, SentinelOne:

"The danger is not in the regulation of AI but in the overregulation. A challenge for this executive order is how to keep the US in a leadership position with respect to the development of AI-enabled technologies and capabilities while at the same time not stifling innovation.


Anything that slows down reasonable and responsible development only benefits those with adversarial interests—bad actors and hostile nation-states. I don’t foresee China or Russia slowing down the development of AI in order to align with US, UK, or EU interests. Quite the opposite. They are accelerating the development of capabilities such as LAWS - Lethal Autonomous Weapons Systems.


The prosecution of war is now becoming automated. The role of AI has risen significantly in both offensive and defensive weapons. Critical infrastructures are more at risk than ever before from advanced AI-enabled cyber attacks. The use of AI as a force for good mustn't be at the mercy of onerous government regulations. The line between riding the wave of AI or being swallowed by it is a fine one. In war, the enemy always gets a vote. We must ensure their vote doesn't override our ability to defend ourselves from current and future attacks." Michael Leach, Compliance Manager, Forcepoint:

"The Executive Order on AI provides some of the necessary first steps to begin the creation of a national legislative foundation and structure to better manage the responsible development and use of AI by both commercial and government entities, with the understanding that it is just the beginning. The new Executive Order provides valuable insight for the areas that the U.S. government views as critical when it comes to the development and use of AI, and what the cybersecurity industry should be focused on moving forward when developing, releasing and using AI such as standardized safety and security testing, the detection and repair of network and software security vulnerabilities, identifying and labeling AI-generated content, and last, but not least, the protection of an individual’s privacy by ensuring the safeguarding of their personal data when using AI.

The Executive Order's emphasis on safeguarding personal data when using AI is another example of the importance the government has placed on protecting Americans' privacy with the advent of new technologies like AI. Since the introduction of global privacy laws like the EU GDPR, numerous U.S. state-level privacy laws have come into effect across the nation, and many of them have recently adopted additional requirements for using AI with personal data. The state privacy laws that combine AI and personal-data requirements (e.g., training, customizing, data collection, processing) generally mandate the following: the right for individual consumers to opt out of profiling and automated decision-making, data protection assessments for certain targeted-advertising and profiling use cases, and limits on the retention, sharing, and use of sensitive personal information with AI. The new Executive Order will hopefully lead to more cohesive privacy and AI laws that overcome the fractured framework of the current state privacy laws and their newly added AI requirements. Consistent national AI and privacy laws would allow U.S. companies and the government to rapidly develop, test, release, and adopt new AI technologies and become more competitive globally, while putting in place the necessary guardrails for the safe and reliable use of AI."
