
Editor’s Note: Kara Alaimo, Ph.D., an associate professor of communication at Fairleigh Dickinson University, writes about issues affecting women and social media. Her book “Over the Influence: Why Social Media Is Toxic for Women and Girls — And How We Can Take It Back” will be published by Alcove Press in 2024. The opinions expressed in this commentary are her own. Read more opinion on CNN.

CNN  — 

At a Senate Judiciary subcommittee hearing on Tuesday, tech experts laid bare some of the harrowing risks posed by advances in artificial intelligence — and made clear that lawmakers can’t afford to wait to better understand this technology before they regulate it. Even Sam Altman, CEO of the company that created ChatGPT — the application that has captivated the country by using artificial intelligence to write text in response to prompts — testified that he recognizes the potential dangers of the technology, and that government oversight can help to mitigate them.

AI could be used to manipulate public opinion, impersonate politicians and share potentially deadly disinformation such as inaccurate medical advice.

And these aren’t just hypotheticals. Some of the harms are already being felt. Gary Marcus, professor emeritus at New York University, testified Tuesday that one such application seems to have prompted someone to commit suicide, while another gave support and encouragement to a user pretending to be a 13-year-old girl arranging a trip to have sex with a 31-year-old man.

When asked about his worst fears, Altman acknowledged that the industry could cause “significant harm to the world,” which could happen “in a lot of different ways.” He said, “If this technology goes wrong, it can go quite wrong.” Just imagine the possibilities of AI-driven weapons. Some even worry it could overtake humanity.

A common refrain in Tuesday’s hearing was that lawmakers failed to regulate social media companies and shouldn’t fall down on the job again when it comes to regulating AI. One crucial reason Congress didn’t do a better job of regulating social networks — protecting our privacy and mitigating harms like disinformation and the online hate that has spilled over into real-world violence — is that most members didn’t fully understand the technology and couldn’t figure out how to solve these problems. Now, it seems, no one fully understands AI — not even the people building it. But that doesn’t need to — and, indeed, should not — prevent members of Congress from regulating it.

The hearing raised two solutions that lawmakers can implement right now — even without knowing the extent of how AI will impact our lives.

First, Congress should mandate disclosure and choice. AI-generated material should be labeled as such, so that anyone viewing or reading it can apply appropriate skepticism — these systems are known to sometimes simply make up information, a phenomenon the industry calls hallucinating.

We also shouldn’t be forced to negotiate with an AI-generated robot when we call our health insurance company, for example.

“AI shouldn’t be hidden,” Christina Montgomery, IBM’s vice president and chief privacy and trust officer, testified. “Consumers should know when they’re interacting with an AI system and that they have recourse to engage with a real person should they so desire. No person anywhere should be tricked into interacting with an AI system.”

Second, we need a federal agency that licenses AI products — a proposal Altman himself endorsed during the hearing. An independent group of scientists working for the agency should test AI products, require companies to answer questions and address potential safety risks before granting licenses for commercial use.

The possible uses and dangers of AI are dizzying to us all — including, as Tuesday’s testimony made clear, members of the industry building these products. But lawmakers can’t wait for all the answers before they act. Tuesday’s hearing made clear that they can and must take action now to protect us by requiring disclosures and creating an agency to license AI products.

To get the ball rolling on disclosures, I’d like to state for the record that I wrote this whole piece myself, without any help from ChatGPT.

Suicide & Crisis Lifeline: Call or text 988. The Lifeline provides 24/7, free and confidential support for people in distress, prevention and crisis resources for you and your loved ones, and best practices for professionals in the United States. En Español: Línea de Prevención del Suicidio y Crisis: 1-888-628-9454.