
Sen. Josh Hawley pitches law making AI companies liable for harmful content users generate

Sen. Josh Hawley presented legislation aimed at stripping AI of its Section 230 protections, which would open tech companies to legal liability for content produced by artificial intelligence.

Hawley spoke on this legislation in the U.S. Senate on Wednesday, framing his arguments around a core point that led to the enactment of Section 230 in the first place — the protection of children from harmful content.

“Let’s give parents the right to protect their kids,” Hawley said in a Senate hearing. “Let’s make it clear that the biggest technology companies, with all of the inside access to the White House and this body and everywhere else, that they’re not a government unto themselves, that they don’t run this country.”

Section 230 of the Communications Decency Act ensures that a platform cannot be held liable for third-party content. For example, Facebook could not be sued for something that a user posts nor could it be held liable for failing to remove that harmful content from its platform.

The statute was enacted in 1996 in response to concerns about stifling the growth of the fledgling Internet and pressures from lawmakers who were concerned that children might be exposed to explicit content.

Online platforms were facing legal challenges when they attempted to moderate content. When some harmful content was removed, these platforms could be held liable for not removing all of it. Lawmakers enshrined a “Good Samaritan” clause into Section 230, to allow websites to moderate content without being held liable for all the content on their site.

Individual content creators can still be held liable for the things that they post on the Internet, but large companies, with deeper pockets to pay damages in a lawsuit, have immunity from being sued for what users say and do on their platforms.

Many scholars have argued that this statute is the foundation on which the Internet was built, and altering that law would chill free speech and destroy the business model for any website encouraging user-generated content.

Section 230 has faced increasing scrutiny as the once-burgeoning Internet has grown into a modern behemoth, central to the functions of everyday life for most people. Lawmakers have attempted to amend the statute to hold Big Tech firms accountable for the perceived harm inflicted by user-generated content.

Senator-elect Josh Hawley speaks to the crowd at the Greene County Republicans’ watch party at the University Plaza hotel on Tuesday, Nov. 6, 2018.

Hawley’s “No Section 230 Immunity for AI Act” would allow for companies using AI to be held liable for content produced by that artificial intelligence in response to prompts from the user.

“[The bill] just says that these huge companies can be liable like any other company, no special protections from government,” Hawley said. “It just removes government protection. It just breaks up the big government big tech cartel. That’s all it does.”

However, Hawley’s legislation is vague as to what specific applications of artificial intelligence would be affected. AI doesn’t just mean ChatGPT or some other generative model; the technology has been integrated into transcription services and customer service chatbots, among many other applications.

Jared Schroeder, a professor of media law at the University of Missouri School of Journalism, feels that, although lawmakers should be turning their attention to AI, the lack of specifics in Hawley’s legislation leaves many questions unanswered.

“If this bill were passed, what would not be protected here?” Schroeder said. “The entire functioning of the social media world is algorithms. Algorithms are not technically AI, but algorithms are used to create AI.”

More: U.S. Supreme Court will hear Missouri case challenging federal influence on social media

Sen. Ted Cruz, R-Texas, also raised concerns when Hawley presented his legislation to the Senate. He pointed out that Hawley’s request for unanimous consent to pass his bill at that moment would bypass the committee hearings that legislation normally goes through.

“AI is an incredibly important area of innovation,” Cruz said. “Simply unleashing trial lawyers to sue the living daylights out of every technology company for AI, I don’t think that’s prudent policy.”

U.S. Sen. Ted Cruz (R-Tex.) speaks to members of the press outside the “AI Insight Forum” at the Russell Senate Office Building on Capitol Hill on September 13, 2023 in Washington, D.C.

Cruz worries that placing restrictions on AI could chill innovation in the country and that foreign nations would pull ahead of the U.S. in advancing the technology.

“It would be bad for America if China became dominant in AI,” Cruz said. “Right now, the $38 billion that was invested this past year in American AI companies is more than 14 times the investment of Chinese AI companies. We need to keep that differential. We need to make sure that America is leading the AI revolution.”

However, both Hawley and Cruz agree that they would like to change aspects of Section 230. Hawley wants to level the playing field, arguing that the lack of liability for companies hosting user-generated content gives them an unfair advantage in the free market.

They aren’t the only lawmakers to harbor such sentiments, as the issue of revising Section 230 has a good deal of bipartisan support. The problem is that lawmakers can’t agree on how to change Section 230 in a way that won’t dismantle the Internet as we know it.

“Section 230 is kind of like a Bermuda Triangle,” Schroeder said. “Everybody agrees there’s a problem there. Even Republicans and Democrats agree on this, but they don’t agree on what to do about it.”

Even the courts have shied away from issuing judgments that alter Section 230. The U.S. Supreme Court was expected to rule on some aspect of the statute last year in the case Gonzalez v. Google, but ended up skirting the issue entirely in its ruling.

However, Schroeder feels the time may be coming for the nation’s highest court to rule on the topic.

“They’ve got five social media cases this term,” Schroeder said. “They’re not algorithm cases, but between the two cases last term and five cases this term about social media, it’s an incredible amount of attention to one very specific thing.”

This article originally appeared on Springfield News-Leader: Hawley proposes bill making AI companies liable for harmful content