Executive Order Promotes Responsible AI Use

January 9, 2024

by Sarbari Gupta

On October 30, 2023, The White House released Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, a long and long-awaited document. At over 100 pages, EO 14110 details actions and deadlines required of federal agencies to ensure AI is used responsibly within the federal government.

The document’s opening sentence captures the challenge that agencies, industries, and individuals face: “Artificial intelligence holds extraordinary potential for both promise and peril.” This statement both expresses the AI conundrum succinctly and reduces the complex issues surrounding its use to simple terms. Yet, there is nothing simple about harnessing AI technology for good and minimizing its application for evil.


In March 2023, as research in AI was exploding, the Future of Life Institute published an open letter calling for a six-month pause in AI development, particularly the training of the most powerful AI systems. The rationale was straightforward: the competition to be first or best was producing powerful AI models whose potential effects, both good and bad, were not understood, could not be predicted, and might not be controllable. Elon Musk, one of more than 1,000 signatories, went so far as to suggest that AI could lead to the destruction of civilization.

Unsurprisingly, no halt resulted from the letter’s release. Yet, Max Tegmark, president of the institute, reflected, “I was never expecting there to actually be an immediate pause. So I was overwhelmed by the success of the letter in bringing about this sorely needed conversation. It was amazing how it exploded into the public sphere.”

Fast forward to the OpenAI debacle just before Thanksgiving 2023, in which CEO Sam Altman was fired and reinstated days later. The New York Times reported that colleagues were “… growing alarmed that the company’s technology could pose a significant risk, and that Mr. Altman was not paying close enough attention to the potential harms.” AI is now a central concern for many, including labor unions. Consider that strikes by both the Writers Guild of America and the Screen Actors Guild–American Federation of Television and Radio Artists sought contract protections involving AI. Issues such as the use of digital replicas and synthetic performers, consent, compensation, AI-generated scripts, copyrights, and more became the focus of union demands.


Ethical concerns are also being raised. In particular, military use of AI prompts an ongoing debate. Opponents argue that life-and-death decisions should not be left to machines, positing that algorithms and the data that drive them are as susceptible to errors in judgment and bias as humans are. Others feel the judgment, emotion, and context that humans bring to decision-making are far more valuable than choices based on analysis and machine learning. Still others see AI as protecting the lives of military personnel, while yet others suggest it confers an unfair military advantage.

These are just a few of the dilemmas surrounding AI. For every good AI might contribute to society (e.g., computer-aided medical diagnoses), there is a downside (e.g., “deepfakes” in the form of manufactured videos, images, and audio that could be used as propaganda). Some take their criticisms even further, warning that AI could replace humans entirely or, worse still, take over the world.

This brings us back to EO 14110 and the reasons behind its issuance. Its purpose is encapsulated in this sentence: “The rapid speed at which AI capabilities are advancing compels the United States to lead in this moment for the sake of our security, economy, and society.” This leadership role will position the federal government to “advance and govern the development and use of AI in accordance with eight guiding principles and priorities.” Simply stated, they are:

  1. AI must be both safe and secure.
  2. The nation must lead AI development by promoting responsible innovation, competition, and collaboration.
  3. AI development and use must support American workers.
  4. AI policies must advance equity and civil rights.
  5. AI and AI-enabled products must protect the interests of Americans.
  6. AI must safeguard Americans’ privacy and civil liberties.
  7. Federal AI use must incorporate risk management principles and concurrently increase the government’s capacity to regulate, govern, and support responsible AI use.
  8. The nation should continue to lead global societal, economic, and technological progress.


The EO offers detailed implementing actions and timelines. Broadly speaking, some key efforts include

  • developing guidelines, standards, and best practices as well as a national security memorandum;
  • requiring private sector reporting on AI development efforts;
  • addressing AI management relative to critical infrastructure and cybersecurity;
  • reducing risk of AI and chemical, biological, radiological, and nuclear threats intersecting;
  • addressing the issues surrounding synthetic content;
  • promoting innovation and competition through attracting AI talent, offering grants and funding, etc.;
  • evaluating guidance on patents and copyrights; and
  • much, much more.

EO 14110 is expansive in nature and recognizes AI impacts on federal programs ranging from national security to healthcare, education, immigration, consumer protection, criminal justice, and employment. It stands as a federal roadmap for AI, but acceptance of the implementing guidance published by the Office of Management and Budget and anticipated congressional action will largely tell the story.

AI, however, transcends national boundaries. It is a global issue, with non-binding international agreements already being signed. Striking the balance between enabling innovation and exerting control will be both difficult and interesting to watch. Still, EO 14110 is a major step forward in promoting the responsible use of AI in the nation’s public and private spheres.
