TechTrendFeed

OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters

By Admin
April 10, 2026


OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as the death or serious injury of 100 or more people or at least $1 billion in property damage.

The effort appears to mark a shift in OpenAI's legislative strategy. Until now, OpenAI has largely played defense, opposing bills that would have made AI labs liable for their technology's harms. Several AI policy experts tell WIRED that SB 3444, which would set a new standard for the industry, is a more extreme measure than bills OpenAI has supported in the past.

The bill would shield frontier AI developers from liability for "critical harms" caused by their frontier models so long as they did not intentionally or recklessly cause such an incident and have published safety, security, and transparency reports on their website. It defines a frontier model as any AI model trained using more than $100 million in computational costs, which likely would apply to America's largest AI labs, such as OpenAI, Google, xAI, Anthropic, and Meta.

"We support approaches like this because they address what matters most: reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses, small and large, of Illinois," said OpenAI spokesperson Jamie Radice in an emailed statement. "They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards."

Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and that leads to these extreme outcomes, that would also be a critical harm. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model could not be held liable, as long as the incident wasn't intentional and the lab published its reports.

Federal and state legislatures in the US have yet to pass any laws specifically determining whether AI model developers, like OpenAI, can be held liable for these kinds of harms caused by their technology. But as AI labs continue to release more powerful AI models that raise novel safety and cybersecurity challenges, such as Anthropic's Claude Mythos, these questions feel increasingly prescient.

In her testimony supporting SB 3444, a member of OpenAI's Global Affairs team, Caitlin Niedermeyer, also argued in favor of a federal framework for AI regulation. Niedermeyer struck a message consistent with the Trump administration's crackdown on state AI safety laws, claiming it's important to avoid "a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety." This is also in line with the broader view of Silicon Valley in recent years, which has often argued that it's paramount for AI legislation not to hamper America's position in the global AI race. While SB 3444 is itself a state-level safety law, Niedermeyer argued that such laws can be effective if they "reinforce a path toward harmonization with federal systems."

"At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation," Niedermeyer said.

Scott Wisor, policy director for the Safe AI Project, tells WIRED he believes the bill has a slim chance of passing, given Illinois' reputation for aggressively regulating technology. "We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There's no reason existing AI companies should be facing reduced liability," Wisor says.


© 2025 https://techtrendfeed.com/ - All Rights Reserved
