10 minute read 21 Sep 2023

Eight AI-related US policy issues for boards and management to consider

Authors
Bridget Neill

EY Americas Vice Chair, Public Policy

Regulatory and policy strategist. Three decades shaping public policy impacting global financial markets and the accounting profession. Passionate about family. Outdoor sports enthusiast.

John D. Hallmark

EY US Political and Legislative Leader

Public policy professional with a deep understanding of the Washington legislative and political arenas. Works with key stakeholders to formulate and execute on the firm’s policy initiatives.

Related topics: Public policy

Resources

  • US Public Policy Spotlight: Artificial Intelligence (pdf)

As the use of AI evolves, boards and the C-suite should consider these key AI-related issues attracting US policymaker attention.

In brief

  • US policymakers, alongside the C-suite and board, are considering what artificial intelligence (AI) could mean for capital markets, the economy and society.
  • Dynamics in Washington and the increasing complexity surrounding AI could lead to a patchwork of regulations for companies to navigate.
  • Moving forward, policymakers are looking to create a stable regulatory scheme that addresses concerns and remains relevant as AI continues to evolve.

Artificial intelligence (AI) has seized the attention of US policymakers in recent months. The launch of new AI tools and the rapid adoption of AI have sparked a dialogue about how best to foster innovation and opportunity while addressing associated risks.

Perspectives on AI include predictions that the technology will lead to promising scientific breakthroughs and an explosion of innovation and efficiencies, as well as serious concerns that AI could threaten national security, replace workers, result in discriminatory decision-making, introduce a host of privacy and copyright infringement risks, and promote deepfake content.


Whatever the perspective, AI policymaking also faces challenges of its own.

As the US public policy debate around AI evolves, several themes have emerged. This publication explores eight key AI-related issues attracting US policymaker attention, as well as related developments at the federal and state levels and considerations for C-suite leaders and boards of directors engaging on the issue.

Issue 1: National security

Many lawmakers are concerned with the implications of AI for national security, including the pace of adoption by the US defense and intelligence communities and how AI is being used by geopolitical adversaries. For example, congressional hearings¹ have examined² barriers to the Department of Defense (DoD) adopting AI technologies and considered risks from adversarial AI. There have also been calls for guidelines to govern the responsible use of AI in military operations, including weapons systems, to avoid unintended actions when AI is used.³

Establishing and maintaining a competitive advantage on the global stage is a top priority of many lawmakers. Launching a bipartisan initiative to develop AI regulation, Senate Majority Leader Chuck Schumer (D-NY) expressed⁴ the need for the “U.S. to stay ahead of China and shape and leverage this powerful technology.”

Issue 2: Workforce

Many policymakers have raised concerns about AI’s potential impact on jobs, particularly in areas where workers could eventually be replaced, and who should bear the cost of displacement and retraining workers. In a new world powered by AI, there are also questions about how to train a workforce to adjust to the rapidly evolving technology and whether AI-reliant companies should be regulated and taxed differently than companies staffed by humans. While concerns about the impacts of technology on workers are not new, the rapid pace of companies adopting AI technology is unparalleled, creating additional challenges and pressure.

Issue 3: Bias and discrimination

Bias issues have been examined in several congressional hearings on AI and will continue to be a key concern as regulatory approaches are considered. Policymakers are focused on the risk AI technologies carry in making discriminatory decisions — just as human decision-makers do — and how AI technologies are only as effective as the data sets and algorithms they are built upon and the large language models that underpin them. In congressional hearings⁵, policymakers have expressed concerns about the potential for AI to discriminate and have heard testimony about the misidentification of individuals, particularly those in minority groups, by facial recognition software.

A report⁶ from the National Institute of Standards and Technology (NIST) provides an “initial socio-technical framing for AI bias” that focuses on mitigation through appropriate representation in AI data sets; testing, evaluation, validation, and verification of AI systems; and the impacts of human factors (including societal and historical biases).
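The NIST framing treats bias testing as something concrete and repeatable rather than abstract. As a minimal illustration of what such testing can look like in practice, the sketch below computes group-level selection rates for a hypothetical automated decision system and compares them using the “four-fifths” heuristic long used in US employment contexts; the data, groups and threshold are illustrative assumptions, not a prescribed compliance test.

```python
# Minimal sketch: comparing a model's selection rates across groups.
# The data and the 0.8 threshold are illustrative assumptions only.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> rate per group."""
    totals, chosen = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        chosen[group] += int(selected)
    return {g: chosen[g] / totals[g] for g in totals}

# Hypothetical outcomes from an automated screening tool.
decisions = ([("A", True)] * 40 + [("A", False)] * 60
             + [("B", True)] * 25 + [("B", False)] * 75)

rates = selection_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates)                   # {'A': 0.4, 'B': 0.25}
print(f"ratio = {ratio:.2f}")  # 0.62 -- below the 0.8 heuristic, flag for review
```

A failing ratio does not by itself establish unlawful discrimination; it is the kind of signal that triggers the deeper review of data sets, algorithms and human factors the NIST report describes.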

Issue 4: Transparency and explainability

Some policymakers are focused on the need for consumers to understand how and why AI technologies work, to help promote acceptance of the technologies and create trust in the results AI produces.

In its Four Principles of Explainable Artificial Intelligence report⁷, NIST identifies key qualities of an explainable AI system: “We propose that explainable AI systems deliver accompanying evidence or reasons for outcomes and processes; provide explanations that are understandable to individual users; provide explanations that correctly reflect the system’s process for generating the output; and that a system only operates under conditions for which it was designed and when it reaches sufficient confidence in its output.”

These factors are aimed at addressing the so-called “black box problem”: Consumers might understand what data is inputted into an AI system and see the result it produces, but they don’t understand how that result is reached.
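Techniques that attach “reasons” to model outputs remain an active research area rather than a settled standard. As one hedged example of the kind of accompanying evidence NIST describes, the sketch below uses permutation importance, a common model-agnostic technique, to surface which inputs a trained model actually relies on; the model and data here are synthetic stand-ins.

```python
# Minimal sketch of one explainability technique: permutation importance,
# which measures how much shuffling each input feature degrades a model's
# predictions. Synthetic data; illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                # five candidate features
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)  # only features 0 and 1 matter

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
# Features 0 and 1 dominate, giving a human-readable account of what
# the "black box" is actually relying on when it produces a result.
```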

Transparency is also part of the policymaking debate, as it is seen as critical to building trust. AI typically works behind the scenes, which means consumers often are unaware that they are engaging with an AI system that is making recommendations, calculations and decisions based on an algorithm. To address transparency concerns, some policymakers have called for new rules requiring disclosure to consumers when they are communicating with AI software so they can make an informed decision about the use of the technology.

Issue 5: Data privacy

AI systems often collect, analyze and use large sets of data, including individuals’ personally identifiable information. Policymakers are concerned that consumers may not be aware that such information is being collected or know how long it is being retained and for what purposes. At a May 2023 hearing⁸ of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, senators on both sides of the aisle voiced concerns about data privacy, including calls for greater awareness of how consumer data is being used in AI applications. There is also growing discussion in Washington about whether consumer data protection measures are needed to specifically address the use of AI; for example, the Federal Trade Commission reportedly has launched an investigation into OpenAI’s use of consumer data in its ChatGPT system.⁹

Issue 6: Deepfakes

Recent congressional hearings also have highlighted that while disinformation and inaccuracies are rampant on the internet, modern AI technologies have the potential to push those concerns to a new level. AI can fabricate videos of individuals, generate lifelike photographs of fictitious people and create social media profiles for nonexistent people. During a hearing earlier this year, Sen. Richard Blumenthal (D-CT) used AI to impersonate himself and demonstrate to committee members the risks of deepfakes.

As deepfakes proliferate, it will become increasingly difficult for consumers to trust the content they encounter, even from seemingly trusted sources.¹⁰,¹¹ Proposals to address the threat include requirements to “watermark” AI-generated content¹² and outright bans¹³ of certain deepfake content. Most recently, in August 2023 the Federal Election Commission advanced a petition¹⁴ that calls for banning political campaigns from disseminating deepfake content that may fraudulently deceive voters about candidates.
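Watermarking proposals vary widely, from signals embedded in the content itself to cryptographically signed provenance records (such as C2PA-style content credentials). As a toy sketch of the underlying labeling idea only, the example below attaches provenance metadata to a generated image using the Pillow library; the field names and generator name are hypothetical, and plain metadata of this kind is easily stripped, which is why policymakers are weighing more robust schemes.

```python
# Toy sketch: labeling an AI-generated image with provenance metadata.
# Field names and generator name are hypothetical; real proposals favor
# robust or cryptographically signed watermarks.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (64, 64), color="gray")  # stand-in for model output

metadata = PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-model-v1")

image.save("generated.png", pnginfo=metadata)

# The label survives a save/load round trip (but not format conversion).
print(Image.open("generated.png").text)  # {'ai_generated': 'true', ...}
```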

Issue 7: Accountability

Some policymakers have suggested governance requirements for the development and deployment of AI to address concerns about bias and potential unintended consequences. The Algorithmic Accountability Act¹⁵ is one response being considered. The bill seeks to “bring new transparency and oversight of software, algorithms and other automated systems that are used to make critical decisions about nearly every aspect of Americans’ lives” by requiring assessments of algorithms and public disclosures about their use.

The US Equal Employment Opportunity Commission (EEOC) is also exploring¹⁶ the potential benefits and harms of AI in employment decisions through hearings and the efforts of the EEOC’s Artificial Intelligence and Algorithmic Fairness Initiative¹⁷.


In addition, policymakers could look to some of the accountability mechanisms contemplated in the NIST AI Risk Management Framework to address their concerns. The U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA) delved specifically into the issue of AI assurance in an April 13, 2023, request for information, which observed¹⁸ that: “Real accountability can only be achieved when entities are held responsible for their decisions. A range of AI accountability processes and tools (e.g., assessments and audits, governance policies, documentation and reporting, and testing and evaluation) can support this process by proving that an AI system is legal, effective, ethical, safe, and otherwise trustworthy — a function also known as providing AI assurance.”
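Among the tools the NTIA lists, “documentation and reporting” is the most straightforward to picture. The sketch below shows one way an organization might structure such a record, loosely in the spirit of published “model card” practices; the schema, field names and example values are illustrative assumptions, not a regulatory requirement.

```python
# Minimal sketch of a documentation-and-reporting record for an AI system,
# loosely modeled on "model card" practice. The schema is an illustrative
# assumption, not a mandated format.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIAccountabilityRecord:
    system_name: str
    intended_use: str
    training_data_summary: str
    known_limitations: list = field(default_factory=list)
    evaluations: dict = field(default_factory=dict)  # metric name -> score
    responsible_owner: str = "unassigned"

record = AIAccountabilityRecord(
    system_name="loan-screening-model",  # hypothetical system
    intended_use="pre-screening loan applications for human review",
    training_data_summary="anonymized 2015-2022 application data",
    known_limitations=["not validated for applicants under 21"],
    evaluations={"accuracy": 0.91, "disparate_impact_ratio": 0.84},
    responsible_owner="model-risk-committee",
)

print(json.dumps(asdict(record), indent=2))  # disclosure-ready report
```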

On the subject of accountability, regulators and others are also looking at outcomes based on AI technologies. For example, Securities and Exchange Commission (SEC) Chair Gary Gensler recently remarked in an interview that investment advisors who use AI remain responsible for their recommendations: “Investment advisers under the law have a fiduciary duty, a duty of care, and a duty of loyalty to their clients. And whether you’re using an algorithm, you have that same duty of care.”¹⁹

Issue 8: Copyright

Policymakers are also raising questions about the rights and ownership of content created by AI. During recent congressional²⁰ hearings²¹, members have considered whether AI-generated content is protected via patents, trademarks and copyright like other intellectual property and raised questions about who owns the AI-generated content and the data sets that are used to train AI systems.²² These and other questions have already been the subject of litigation and will continue to be debated as the AI regulation discussion evolves.

Fall 2023 AI-related policy updates

  • Despite the recent spike in media coverage of AI issues, Congress has been considering the technology for some time

    • Both the House (in 2017) and the Senate (in 2019) formed AI Caucus groups to inform members about the technological, economic and societal implications of AI deployment. Likewise, the wide array of committees with jurisdiction over AI and its applications has created a multitude of forums for examination of the technology.
    • Congress has notably passed legislation to increase the resources available to the federal government as it confronts the rise of AI technologies. The AI in Government Act²³ (enacted as part of appropriations legislation in December 2020) required the General Services Administration to establish an AI Center of Excellence to promote government acquisition of novel uses of AI technologies, provide guidance for government use of AI and update federal employee systems for positions with AI expertise.
    • Also approved by Congress that year (as part of annual defense policy legislation), the National AI Initiative Act²⁴ sought to maintain continued US leadership in AI through the establishment of a coordinated program across government to boost AI research, and specifically created the National Artificial Intelligence Initiative Office to carry out these responsibilities.
    • In addition to the numerous bills introduced in Congress to regulate AI, Senate Majority Leader Chuck Schumer (D-NY) in June 2023 announced the SAFE Innovation Framework²⁵, which is intended to provide an outline for potential legislation.
  • Executive action on AI similarly has been ongoing since at least 2019

    • In 2019, then-President Donald Trump signed an executive order²⁶ (EO) that directed NIST to develop an AI framework. The NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0²⁷) was released in January 2023.
    • Since 2019, the previous and current administrations have issued other EOs creating voluntary guidelines and resources for stakeholders developing and deploying AI, such as the National Artificial Intelligence Initiative Office²⁸, which oversees the federal agencies’ strategy on AI in accordance with the National AI Initiative Act, and the Blueprint for an AI Bill of Rights²⁹, which aims to “help guide the design, development, and deployment of artificial intelligence.”
    • The White House under President Joe Biden has also partnered directly with industry. In May 2023, several tech companies agreed to participate in a public evaluation of their AI systems. More recently, in July 2023, the White House announced³⁰ nonbinding commitments³¹ from several tech companies to manage the risks of developing and deploying AI systems.
  • Regulatory activity continues to accelerate in 2023

  • Several state legislatures have introduced and in some cases passed legislation to govern AI

Questions for boards to consider

  • Boards seeking to balance the opportunities and risks of AI should ask these questions

    • How does management stay informed about regulatory and legislative developments related to AI, machine learning, data privacy, and emerging technologies in relevant jurisdictions? How is it monitoring whether the company is staying in compliance and assessing potential impacts to strategy?
    • How is the board structured to oversee and monitor a company’s use of generative AI? What information does the board or its committees receive, and whom from management is the board engaging about related strategic initiatives, risk management and policy developments?
    • How is the organization using sensitive data to support innovation — for example, via AI, machine learning and automated decision-making? How would these uses be perceived by consumers, employees, the media, regulators, investors, or other stakeholders?
    • How is the company assessing and mitigating the risks of generative AI? Is it using an external framework such as the NIST AI Risk Management Framework⁴⁰? How does management establish that these applications are performing as intended to mitigate ethical and compliance risks?
    • How is the company using generative AI to challenge the existing business model and key strategic assumptions?
    • How will the company’s AI strategy empower its people and business to be unique and best-in-class? Does the company have a professional development plan in place that includes new AI-related training programs, career paths and retention methods — and ways to reward new AI competence?


Summary

In a highly polarized political environment leading up to the 2024 US elections, it is unlikely that Congress will pass comprehensive legislation regulating AI. In the absence of congressional action, state legislatures may fill the policy void, which could lead to a patchwork of laws. We also expect the Biden administration to continue working with leading AI companies to enact change on a voluntary basis, and federal agencies to continue using enforcement actions to police AI use. Differing national approaches in the development of AI regulation may complicate the regulatory landscape for multinational companies using AI technology.
