U.S. federal AI regulation is on the way, Sen. Marsha Blackburn says

Senator Marsha Blackburn on why support is growing to ban kids from using phones before age 16

As U.S. states start to react to growing constituent concerns around the risks associated with artificial intelligence use, Tennessee Republican Sen. Marsha Blackburn said moving forward with a federal preemption standard is “imperative.”

Earlier this week, California Gov. Gavin Newsom signed a series of bills addressing those concerns, requiring safeguards around chatbots, warning labels about the mental health risks of social media apps, and age verification tools in device makers' app stores, while vetoing some of the stricter AI conditions legislators had hoped for.

Utah and Texas have also enacted laws implementing AI safeguards for minors, and other states have indicated similar regulations could be on the horizon.

“The reason the states have stepped in, whether it’s to protect consumers or protect children, is because the federal government has, to date, not been able to pass any federal preemptive legislation,” Blackburn said at the CNBC AI Summit on Wednesday in Nashville. “We have to have the states standing in the gap until such time that Congress will say no to the big tech platforms.”

Blackburn has long been a proponent of legislation around children's online safety and regulation of social media, introducing the Kids Online Safety Act in 2022, which aims to establish guidelines protecting minors from harmful material on the platforms. The bipartisan legislation passed the Senate with an overwhelming majority, and Blackburn said that while big tech companies have worked to block its passage in both chambers, "We are hopeful the House is going to take it up and pass it."

The concerns the act was designed to address around social media have now extended to AI as use of the technology rises, Blackburn said.

Sen. Marsha Blackburn (R-TN) speaks during a rally organized by Accountable Tech and Design It For Us to hold tech and social media companies accountable for taking steps to protect kids and teens online on January 31, 2024 in Washington, D.C.

Jemal Countess | Getty Images Entertainment | Getty Images

“One of the things we’ve heard from so many people involved in this is that you have to have an online consumer privacy protection bill so that people have the ability to set those firewalls and protect the virtual you, as I call it,” she said, adding that “once an LLM scoops [your data and information], then they are using that to train that model.”

Blackburn is also focused on several other ways of safeguarding the information that AI systems use, including a bill addressing the use of a person's name, image or likeness by AI without their consent.

“We have to have a way to protect our information in the virtual spaces just as we do in the physical space,” she said.

Given the fast advancement of AI, Blackburn acknowledged that regulation would require lawmakers to focus on "end-use utilizations and legislate that framework in that manner and not focus on a given delivery system or a given technology."

That also means reacting to the ways that AI companies change their products. Earlier this week, OpenAI CEO Sam Altman said the company will be able to “safely relax” most restrictions now that it has been able to mitigate “serious mental health issues,” adding that the company is “not the elected moral police of the world.”

Blackburn said that legislators are increasingly hearing from “parents who know what is happening to their children and that they can’t un-experience or unsee something that they have been through with these chatbots or in the virtual world or the metaverse.”

“I have talked to so many people who are now saying kids are not going to get cell phones until they’re 16, and many parents believe that is just like driving a car,” she said. “They’re not going to allow their kids to have that because we as a society have to put rules and laws in place that protect children and minors.”