On October 30, 2023, President Biden issued an Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The EO establishes sweeping directives and priorities for federal agencies regarding the development and use of AI across a broad swath of areas touched by the US federal government. The EO reflects the Administration’s goal of advancing US leadership in this critical emerging technology, mitigating risks for individual consumers, patients, workers, and businesses, and addressing US economic and national security considerations.

The EO applies directly to federal government agencies and will significantly impact the way the government funds AI development and procures AI products and services; however, its impacts also will be felt by the private sector, including those companies providing services and supplying materials to the US government and throughout the federal procurement supply chain. The EO may ultimately create “de facto” standards and practices in the private sector given the size and influence of the US government as a customer to major technology companies, a funding source for and regulator of research and development, and payer in the healthcare space. The EO also sets out the Biden Administration’s vision on AI and establishes the groundwork for impending legislation and regulations across an array of subject matters and sectors.

BACKGROUND

The EO is the latest in a series of actions from the White House, executive branch agencies, and legislative leaders to tackle the challenges posed by AI, discussed in a recent Goodwin webinar. With the European Union driving the agenda for AI regulation through its impending AI Act, the White House has advanced various initiatives given the challenges of passing comprehensive AI legislation in the United States.1 Notable developments since the Trump Administration’s issuance of Executive Order 13960 include President Biden’s Blueprint for an AI Bill of Rights (2022) and the US Department of Commerce’s National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework (AI RMF 1.0). The White House also secured voluntary commitments in July and September 2023 from leading AI and technology companies to help advance the development of safe, secure, and trustworthy AI.

WHAT YOU NEED TO KNOW

Purpose: The EO’s underlying goal is to establish a framework that ensures the responsible development and use of AI while protecting individuals from potential misuse. 

Scope: The EO is wide-ranging and includes directives in the following domains, among others:

  • Safety and Security (including Cybersecurity): The EO recognizes that securing AI systems in their development and usage lifecycles is critical to US national and economic security, as well as public health and safety. The EO contains a broad array of directives, many dealing with cybersecurity, to protect AI systems from tampering, misuse, and foreign interference, manage critical risks, and improve US cyber defenses and capabilities, including those:
    • Requiring that the National Institute of Standards and Technology’s (NIST) Artificial Intelligence Risk Management Framework (NIST AI 100-1), which was released earlier this year and discussed in a recent Goodwin webinar, be incorporated into the safety and security guidelines used by critical infrastructure owners and operators.
    • Establishing an “Artificial Intelligence Safety and Security Board,” composed of AI experts from the private sector, academia, and government, that will provide advice and recommendations for improving security, resilience, and incident response related to AI usage in critical infrastructure.
    • Requiring companies that develop what the EO terms “dual-use foundation models” (i.e., powerful AI models meeting specific criteria defined in the EO that pose a serious risk to national security and public health and safety) to report to the government on:
      • their model development, training, and production activities (including physical and cybersecurity controls);
      • ownership and possession of sensitive model information (including physical and cybersecurity controls around such information); and
      • the results of “AI red-teaming” efforts, which the EO defines as “structured testing effort(s) to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI.”
    • Requiring several agencies to pursue coordinated initiatives to capitalize on AI’s potential to improve US cyber defenses and offensive capabilities as well as to assess risks to critical infrastructure sectors and consider approaches for mitigating such vulnerabilities.
    • Directing the Secretary of Commerce to propose regulations requiring US Infrastructure as a Service (IaaS) providers to report the identity of foreign persons who transact with such providers to train large AI models that have potential capabilities that could be leveraged in specified malicious cyber-enabled activity (such training referred to in the EO as a “training run”).
  • Health: The EO calls for the advancement of the responsible use of AI technologies in healthcare and the development of affordable and life-saving drugs. Specifically, the EO directs the Department of Health and Human Services (HHS), in consultation with relevant agencies, to:
    • Create an “HHS AI Task Force” to develop a strategic plan on the responsible deployment and use of AI.
    • Develop a strategy to determine whether AI-enabled technologies are sufficiently high-quality, including for research and discovery, drug and device safety, healthcare delivery and financing, and public health.
    • Consider appropriate actions to advance understanding and compliance with federal nondiscrimination and privacy laws as they relate to AI.
    • Establish an “AI safety program” for capturing data on issues related to AI deployed in healthcare settings, including those caused by bias or discrimination, and to develop recommendations, best practices, or other informal guidelines for appropriate stakeholders based on assessment of such data.
    • Develop a strategy for regulating the use of AI or AI-enabled tools in drug development.
  • Competition: The EO instructs agencies to police AI competition, including by “addressing risks arising from concentrated control of key inputs” and “taking steps to . . . prevent dominant firms from disadvantaging competitors.” The EO also encourages the FTC to use its rulemaking authority to “ensure that consumers and workers are protected from harms that may be enabled by the use of AI.” Both FTC Chair Lina Khan and the FTC as a whole have previously expressed the view that AI technologies may present substantial competition concerns.
  • Privacy: To address the privacy risks posed by AI technologies (including by AI’s facilitation of the collection or use of information about individuals, or the making of inferences about individuals), the EO directs the federal government to ensure that the collection, use, and retention of personal data is lawful and secure. Specifically, the EO directs the following actions:
    • Evaluate how agencies collect and use commercially available information—including any personal information they procure from data brokers—and strengthen privacy guidance for federal agencies to account for AI risks.
    • Prioritize federal support for accelerating the development and use of privacy-enhancing technologies (PETs)—including ones that use cutting-edge AI and that let AI systems be trained while preserving individuals’ privacy.
    • Fund the creation of a “Research Coordination Network” dedicated to advancing privacy research and, in particular, the development, deployment, and scaling of PETs.
    • Develop guidelines for federal agencies to evaluate the effectiveness of PETs, including those used in AI systems.
    • Address heightened risks to employees, including those arising from the adoption of AI tools for workplace surveillance.
    • Address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI.
  • Semiconductors: The EO seeks to promote competition within the semiconductor industry, which manufactures the chip technology used in many AI applications. Specifically, the EO provides support for smaller semiconductor/chip companies to enable them to compete more effectively against larger, more established players. The EO pushes the Commerce Department to include smaller companies in the National Semiconductor Technology Center, a newly established research consortium expected to receive significant government funding from last year’s CHIPS and Science Act. The EO also requires the establishment of mentorship programs for smaller semiconductor/chip companies to increase access to critical resources like funding, datasets, and physical assets.
  • Copyright and Digital Authentication: The EO tasks the Department of Commerce with developing guidance for content authentication and watermarking techniques to facilitate labeling of original content and, potentially, detection of AI-generated synthetic content. As with other authentication technologies, the goal of these digital “breadcrumbs” will be to distinguish AI-generated content from legitimate content and provide individuals with a high level of confidence that verifiable content is in fact authentic. Such techniques are already used to track and enforce copyrights associated with digital assets. The EO also directs the US Patent and Trademark Office and Copyright Office to provide guidance for both patent examiners and applicants on how to address the use of artificial intelligence with respect to patent eligibility and copyright protections afforded to AI-generated or AI-assisted content. The EO further directs the Department of Homeland Security to develop a training, analysis, and evaluation program to mitigate AI-related IP risks, including IP theft and violations.
  • Labor/Equity and Civil Rights in Employment: The EO notes that AI offers the promise of improved productivity but also highlights (i) the importance of supporting workers’ existing rights and (ii) advancing equity and civil rights when AI tools are incorporated into the employment sphere. With respect to the first issue, the Chairman of the Council of Economic Advisers is required to submit a report to the President on the labor-market effects of AI to enable the federal government to address AI-related workforce disruptions. In addition, the Secretary of Labor will both submit a report analyzing the ability of agencies to support workers who will be disrupted by AI advancements and, within 180 days, publish principles and best practices that employers could use to mitigate AI’s potential harms to employees and maximize its potential benefits. The Secretary of Labor will also provide guidance to ensure that employees are paid overtime and other wages appropriately when AI enters the workplace. Regarding the second area of employment-related focus, algorithmic discrimination in automated technology, which can negatively impact people with disabilities and other protected classes, is of particular concern. Federal heads of civil rights offices will meet to discuss comprehensive approaches to prevent discrimination in the use of automated systems, and employers will be encouraged to take steps aimed at ensuring that their use of artificial intelligence and automated systems also advances equity.
  • Government Procurement: The EO provides direction that will impact federal contracting and grant awarding, both by establishing agency-specific obligations and by creating avenues for increased funding opportunities for commercial entities focused on AI development and deployment in the federal sector. Specifically, the EO will improve agencies’ acquisition of specified AI products and services by requiring that agencies adopt more rapid and efficient contracting procedures. The EO also prioritizes the acceleration of the hiring of AI professionals as part of a government-wide AI talent surge. To ensure compliance, the EO requires the Director of OMB, within 180 days of the issuance of the EO, to develop a means to ensure that agency contracts for the acquisition of AI systems and services align with the goals and guidance set forth in the EO. The EO also aims to provide small business developers and entrepreneurs working on AI projects with access to technical assistance and resources, which will assist with the commercialization of AI breakthroughs. For example, to advance the development of AI systems that improve the quality of veterans’ healthcare and to support small businesses’ innovative capacity, the EO requires the Secretary of Veterans Affairs to host two 3-month nationwide AI Tech Sprint competitions and provide participants in these AI Tech Sprints access to technical assistance, mentorship opportunities, individualized expert feedback on products under development, potential contract opportunities, and other programming and resources.

Interagency Collaboration: The EO establishes a White House AI Council to coordinate the federal government’s AI activities, chaired by the White House Deputy Chief of Staff for Policy and staffed with representatives from every major agency.

OUTLOOK 

The coming months will be pivotal as federal agencies interpret and implement the EO’s directives and as the EO’s impact on relevant businesses becomes clear. This EO is particularly important and timely given the raft of AI initiatives being announced by other countries and international bodies to establish standards for safe and trustworthy AI. The G7 group of leading democratic economies announced a non-binding code of conduct for foundation models and generative AI the same day the EO was released. In early November, the UK will host a two-day summit for world leaders and frontier AI providers on AI safety. Last week, the United Nations announced the creation of a High-Level Advisory Body on Artificial Intelligence to address issues in the international governance of AI.

Although AI has drawn bipartisan interest and there is some support for regulatory supervision, there are numerous obstacles challenging Congressional initiatives for overarching federal AI legislation in the near term. Senate Majority Leader Chuck Schumer recently cautioned that any broad AI bill is not likely to be introduced until next year. As a result, in the absence of federal legislation, the EO is a crucial statement of US standards, foreshadowing for the world how the US is likely to approach AI regulation.

[1] Several AI-focused bills have been introduced in Congress in an attempt to develop legislation to promote and regulate AI. Most recently, Senators Blumenthal (D-Conn) and Hawley (R-Mo) released a Bipartisan Framework for AI legislation, which focuses on transparency and accountability to address harms of AI and protect consumer personal data.

This informational piece, which may be considered advertising under the ethical rules of certain jurisdictions, is provided on the understanding that it does not constitute the rendering of legal advice or other professional advice by Goodwin or its lawyers. Prior results do not guarantee a similar outcome.