In November 2022, the world was introduced to ChatGPT, a generative artificial intelligence ("AI") chatbot built on a large language model and designed to simulate conversation while giving users instant access to a wealth of information.
In the single year since ChatGPT became part of our lives, businesses and individuals alike have scrambled to understand these AI systems and determine how to use them, or whether to use them at all. The potential risks posed by AI (about which I previously wrote) explain why recent studies suggest that nearly three out of four businesses worldwide are implementing or considering plans to prohibit AI systems like ChatGPT. Try as they may, those bans may not work, given that 70% of employees confess to using ChatGPT without disclosing that use to their supervisors.
Given the widespread use of AI in the workplace, many businesses are looking to federal and/or state regulation for guidance on how best to address it. Before this week, that guidance had been scant. At the local level, a bill is presently pending on Beacon Hill in Massachusetts that proposes to regulate generative AI (like ChatGPT) and, ironically enough, was drafted with the help of ChatGPT. At the federal level, the guidance provided by various agencies has focused on the potential for discrimination and bias stemming from the use of AI, including a joint statement issued by four federal agencies as well as non-binding guidance issued separately by the U.S. Equal Employment Opportunity Commission.
Enter the Biden-Harris Administration. Earlier this week, on October 30, 2023, President Biden issued a sweeping Executive Order on artificial intelligence titled "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." The Executive Order sets forth eight "guiding principles and priorities" concerning AI:
- Working to make AI safe and secure;
- Promoting responsible innovation, competition, and collaboration in developing AI;
- Committing to supporting American workers;
- Ensuring AI policies are consistent with the Biden-Harris Administration’s “dedication to advancing equity and civil rights”;
- Protecting the interests of Americans who use, interact with, or purchase AI and AI-enabled products;
- Protecting citizens’ privacy and civil liberties;
- Managing risks from the federal government’s own use of AI; and
- Ensuring the federal government leads the way to “global societal, economic and technological progress.”
A reasonable and fair question in response to this Executive Order is: what does it mean for me? In truth, the vast majority of the Executive Order will have little to no effect on Americans' daily lives for the foreseeable future, nor will it change how or when we can (or cannot) use AI. The Executive Order focuses primarily on tasks for various federal agencies to complete so that the federal government can better understand the AI landscape, including potential national security threats. Although important to our safety and security, these aspects of the Executive Order are unlikely to change how we go about our daily lives.
That said, there are certain aspects of the Executive Order on which businesses should keep a watchful eye. For example, by June 26, 2024, the Secretary of Commerce must submit a report on methods to authenticate content, to detect whether content was made or altered by an AI system, and to label content made or altered by AI. The potential future labeling of AI-created content could have significant repercussions for how businesses use AI systems, as it would make clear, whether internally (e.g., to supervisors) or externally (e.g., to clients), that the content was not created by a human.
In a similar vein, the Executive Order initiates the process for the U.S. Patent and Trademark Office and the U.S. Copyright Office to provide guidance on issuing patents and copyrights for works created entirely or partially through AI systems. Given the various pending lawsuits alleging copyright infringement against OpenAI, ChatGPT's parent company, any movement by the federal government on patent or copyright protections for AI systems (or AI-generated content) will affect a significant number of people and businesses.
The Executive Order also directs the Secretary of Labor, by April 27, 2024, to publish "principles and best practices for employers that could be used to mitigate AI's potential harms to employees' well-being and maximize its potential benefits." Nothing in the Executive Order suggests that the U.S. Department of Labor will issue binding or mandatory regulations concerning the use of AI in the workplace. Nonetheless, these forthcoming "principles and best practices" may provide useful information to employers seeking to understand how best to use AI in the workplace in a way that supports both the business and its employees.
The Executive Order addresses AI in a broad, ambitious, and, at times, vague manner. Although it is a significant first step toward providing businesses with guidance on understanding and using AI systems, it will likely have no practical effect for the foreseeable future. In the meantime, individuals should determine their own strategies for how, when, and where they use AI systems.
In the absence of local or national regulation, Ruberto, Israel & Weiner, P.C. is here to help answer questions about best practices and set you and your business up for success. Contact Adam Gutbezahl, the author of this alert, for more information.