White House Executive Order Aims To Protect Americans From AI Risks

November 13, 2023

On Oct. 30, 2023, President Biden signed an executive order designed to protect Americans from the risks of artificial intelligence (AI) systems. 

From requiring AI developers to share safety test results to setting new standards to protect against AI-enabled fraud, this initiative sets the path for future uses of AI. Here are the highlights.

AI safety and security

Developers of the most powerful AI systems must share their safety test results and other critical information with the United States government. Invoking the Defense Production Act, the administration requires companies building the most capable AI models to report the results of their safety tests, along with other critical findings, to the federal government.

The government will develop standards, tools and tests to help ensure AI systems are safe, secure and trustworthy. AI has far-reaching consequences for all industries, including national security, energy and health. Government agencies will probe AI systems for weaknesses before they reach the public, a practice known as red-team testing. The agencies leading this effort are:

  • The National Institute of Standards and Technology (NIST) — Will set rigorous standards for red-team testing to manage AI risks to individuals, organizations and society.
  • The Department of Homeland Security — Will apply NIST standards to infrastructure sectors and establish an AI safety board.
  • The Departments of Energy and Homeland Security — Will measure critical infrastructure threats and chemical, biological, radiological, nuclear and cybersecurity risks posed by AI technologies.

The government will develop standards for biological synthesis screening to prevent AI from being used to engineer dangerous materials. Agencies that fund life science projects will establish these standards as a condition of federal funding. Life sciences companies span industries such as pharmaceuticals, biotechnology, and environmental and health research. With tools as powerful as AI, the potential for misuse is real.

For example, the National Institutes of Health provides a hefty chunk of funding to hundreds of companies and research institutions working on new drug therapies and genetic disease research. The Biomedical Advanced Research and Development Authority addresses health security threats alongside private companies that develop medical countermeasures. If the AI tools used in this work were misused or compromised, the consequences could be severe.

AI has immense potential to advance these fields but requires strict safeguards: the same tools could be used to create harmful biological materials or other threats. The presidential directive aims to ensure AI tools are handled responsibly and don't fall into the wrong hands.

The government will protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The U.S. Department of Commerce will establish ways to verify and differentiate content created by AI. The U.S. Artificial Intelligence Safety Institute (USAISI) will develop guidance for content authentication and watermarking to label AI-generated content. 

These guidelines and tools will make it easy for Americans to verify that the communications they receive from the government are authentic. The USAISI will partner with experts from academia, business, government and civic groups to comprehensively improve AI safety.
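The order leaves the specific authentication mechanisms to the agencies. As a rough illustration only (not a method the order prescribes), one common building block is attaching a cryptographic tag to official content so a recipient can check that the message wasn't altered in transit; the key and helper names below are hypothetical placeholders:

```python
import hmac
import hashlib

# Hypothetical sketch: a publisher signs official content with a secret
# key; a recipient holding the verification capability can confirm the
# message is authentic and unmodified.
SECRET_KEY = b"agency-signing-key"  # placeholder; real systems use managed keys


def sign_content(message: str) -> str:
    """Return a hex digest authenticating the message."""
    return hmac.new(SECRET_KEY, message.encode(), hashlib.sha256).hexdigest()


def verify_content(message: str, signature: str) -> bool:
    """Compare digests in constant time to resist timing attacks."""
    return hmac.compare_digest(sign_content(message), signature)


notice = "Official notice: benefits enrollment opens Nov. 1."
tag = sign_content(notice)
print(verify_content(notice, tag))        # True: authentic message
print(verify_content(notice + "!", tag))  # False: tampered message
```

In practice, public government communications would more likely use public-key digital signatures (so anyone can verify without holding a secret) or provenance standards for media; the HMAC above is just the simplest way to show the verify-before-trust idea.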

The government will develop AI tools to find and fix vulnerabilities in critical software. AI-driven cybersecurity can monitor and respond to threats around the clock, patch issues and adapt its countermeasures on the fly.

The government will order the development of a National Security Memorandum that directs further actions on AI and security. This memo will be the road map for AI’s safe and ethical use in the military and intelligence communities.

AI and personal data privacy

Companies use vast amounts of data to train their AI systems, and AI makes it easier to identify, extract and exploit personal data during collection. Cybercriminals are counting on lax cybersecurity and are already using AI to steal data.

The presidential order asks Congress to pass data privacy legislation that:

  • Prioritizes federal support for privacy-preserving techniques
  • Preserves the privacy of data used for AI training
  • Strengthens research for technologies like cryptographic tools to safeguard individual privacy
  • Evaluates how agencies, especially data brokers, collect and use available personal and commercial information
  • Develops guidelines for federal agencies to assess the effectiveness of data confidentiality

AI, equity and civil rights

Improperly trained AI can create or deepen discrimination and bias. The White House issued a Blueprint for an AI Bill of Rights in October 2022 that addresses five main areas of concern:

  • Safe and effective systems
  • Algorithmic discrimination protections
  • Data privacy
  • Notice and explanation
  • Human alternatives, consideration and fallback

The executive order addresses the need to combat algorithmic discrimination through the following measures:

  • Providing clear guidance to landlords, federal benefit programs and federal contractors to ensure AI doesn't unfairly disadvantage certain groups of people.
  • Developing best practices to educate the public, investigate bias and take action when AI is involved in discrimination. This will involve teaming up with the Department of Justice and federal civil rights offices to train law enforcement and legal systems on equitable uses of AI. For example, conducting surveillance, predicting crime hot spots, assessing a person's risk of reoffending, or making sentencing recommendations can be discriminatory if these tools are misused.

AI, consumers, patients and students

AI can benefit consumers when used responsibly. Per the order, the Department of Health and Human Services must establish a safety program that accepts reports of harmful health care practices involving AI.

Educators will be given resources to help them deploy AI-enabled tools, like personalized tutoring.

AI and workers’ rights

The order also addresses the dangers of increased workplace bias, surveillance and job displacement. The government will develop best practices to prevent employers from using AI to discriminate or oppress workers. It will also produce a report on AI’s impact on labor markets and identify options to strengthen federal support for workers.

There’s also an initiative to accelerate hiring AI professionals as part of a governmentwide AI talent surge. Agencies will provide AI training for employees at all levels in relevant fields.

Innovation and competition using AI

The government plans to launch a pilot of the National AI Research Resource, which will make AI tools and data available to researchers and students. It will also provide more funding for AI research, particularly in mission-critical sectors like health care and climate science.

Another goal is to help small businesses and entrepreneurs gain traction in the AI sector. The government will give them tools and resources to bring their AI creations to market. It will expand its visa criteria and review process to retain highly skilled immigrants and nonimmigrants with AI expertise.

The presidential order also commits the United States to working with international partners to promote the safe, responsible, rights-affirming development and use of AI worldwide.

Government oversight and AI

The executive order kicks off many initiatives to address AI and its uses. Both it and the Blueprint for an AI Bill of Rights are intended to set guardrails in the transformative age of AI.