{"id":13612,"date":"2026-04-10T03:44:12","date_gmt":"2026-04-10T03:44:12","guid":{"rendered":"https:\/\/techtrendfeed.com\/?p=13612"},"modified":"2026-04-10T03:44:12","modified_gmt":"2026-04-10T03:44:12","slug":"openai-backs-invoice-that-would-restrict-legal-responsibility-for-ai-enabled-mass-deaths-or-monetary-disasters","status":"publish","type":"post","link":"https:\/\/techtrendfeed.com\/?p=13612","title":{"rendered":"OpenAI Backs Invoice That Would Restrict Legal responsibility for AI-Enabled Mass Deaths or Monetary Disasters"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<p><span class=\"lead-in-text-callout\">OpenAI is throwing<\/span> its help behind an Illinois state invoice that might protect AI labs from legal responsibility in instances the place <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/www.wired.com\/tag\/artificial-intelligence\" class=\"text link\">AI fashions<\/a> are used to trigger severe societal harms, equivalent to demise or severe damage of 100 or extra folks or at the very least $1 billion in property injury.<\/p>\n<p class=\"paywall\">The hassle appears to mark a shift in <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/www.wired.com\/tag\/openai\" class=\"text link\">OpenAI\u2019s<\/a> legislative <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/ai-super-pacs-trying-to-influence-midterms\/\" class=\"text link\">technique<\/a>. Till now, OpenAI has largely performed protection, opposing payments that would have made <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/ai-agents-legal-liability-issues\/\" class=\"text link\">AI labs liable<\/a> for his or her expertise\u2019s harms. 
Several AI policy experts tell WIRED that SB 3444\u2014which would set a new standard for the industry\u2014is a more extreme measure than bills OpenAI has supported in the past.<\/p>\n<p class=\"paywall\">The bill would shield frontier AI developers from liability for \u201ccritical harms\u201d caused by their frontier models so long as they didn&#8217;t intentionally or recklessly cause such an incident and have published safety, security, and transparency reports on their website. It defines a frontier model as any AI model trained using more than $100 million in computational costs, which likely would apply to America\u2019s largest AI labs, like OpenAI, Google, xAI, Anthropic, and Meta.<\/p>\n<p class=\"paywall\">\u201cWe support approaches like this because they address what matters most: reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses\u2014small and large\u2014of Illinois,\u201d said OpenAI spokesperson Jamie Radice in an emailed statement. \u201cThey also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.\u201d<\/p>\n<p class=\"paywall\">Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/ai-dr-evil-drug-discovery\/\" class=\"text link\">create a chemical<\/a>, biological, <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/poems-can-trick-ai-into-helping-you-make-a-nuclear-weapon\/\" class=\"text link\">radiological, or nuclear weapon<\/a>. 
If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to these extreme outcomes, that would also be a critical harm. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model could not be held liable, so long as the harm wasn\u2019t intentional and the lab published its reports.<\/p>\n<p class=\"paywall\">Federal and state legislatures in the US have yet to pass any laws specifically determining whether AI model developers, like OpenAI, can be liable for these kinds of harm caused by their technology. But as AI labs continue to release more powerful AI models that raise novel safety and cybersecurity challenges, such as <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/anthropic-mythos-preview-project-glasswing\/\" class=\"text link\">Anthropic\u2019s Claude Mythos<\/a>, these questions feel increasingly pressing.<\/p>\n<p class=\"paywall\">In her testimony supporting SB 3444, Caitlin Niedermeyer, a member of OpenAI\u2019s Global Affairs team, also argued in favor of a federal framework for AI regulation. 
Niedermeyer struck a message consistent with the Trump administration\u2019s <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/trump-signs-executive-order-ai-state-laws\/\" class=\"text link\">crackdown on state AI safety laws<\/a>, claiming it\u2019s necessary to avoid \u201ca patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.\u201d This is also in keeping with the broader view of Silicon Valley in recent years, which has often argued that it\u2019s paramount for <a rel=\"nofollow\" target=\"_blank\" data-offer-url=\"https:\/\/a16z.com\/a-roadmap-for-federal-ai-legislation-protect-people-empower-builders-win-the-future\/\" class=\"external-link text link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/a16z.com\/a-roadmap-for-federal-ai-legislation-protect-people-empower-builders-win-the-future\/&quot;}\" href=\"https:\/\/a16z.com\/a-roadmap-for-federal-ai-legislation-protect-people-empower-builders-win-the-future\/\" rel=\"nofollow noopener\" target=\"_blank\">AI legislation not to hamper America\u2019s position in the global AI race.<\/a> While SB 3444 is itself a state-level safety law, Niedermeyer argued that such laws can be effective if they \u201creinforce a path toward harmonization with federal systems.\u201d<\/p>\n<p class=\"paywall\">\u201cAt OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation,\u201d Niedermeyer said.<\/p>\n<p class=\"paywall\">Scott Wisor, policy director for the Safe AI Project, tells WIRED he believes the bill has a slim chance of passing, given Illinois&#8217; reputation for aggressively regulating technology. 
\u201cWe polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of them oppose it. There\u2019s no reason existing AI companies should be facing reduced liability,\u201d Wisor says.<\/p>\n<\/div>\n\n","protected":false},"excerpt":{"rendered":"<p>OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as the death or serious injury of 100 or more people or at least $1 billion in property damage. The effort appears [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":13614,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[54],"tags":[8466,3263,165,3724,8601,94,8600,4980,5676,82],"class_list":["post-13612","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tech-news","tag-aienabled","tag-backs","tag-bill","tag-deaths","tag-disasters","tag-financial","tag-liability","tag-limit","tag-mass","tag-openai"],"_links":{"self":[{"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=\/wp\/v2\/posts\/13612","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=13612"}],"version-history":[{"count":1,"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=\/wp\/v2\/posts\/13612\/revisions"}],"predecessor-version":[{"id":13613,"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=\/wp\/v2\/posts\/13612\/revis
ions\/13613"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=\/wp\/v2\/media\/13614"}],"wp:attachment":[{"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=13612"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=13612"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=13612"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}<!-- This website is optimized by Airlift. Learn more: https://airlift.net. Template:. Learn more: https://airlift.net. Template: 69c6f7b5190636d50e9f6768. Config Timestamp: 2026-03-27 21:33:41 UTC, Cached Timestamp: 2026-04-10 06:41:36 UTC -->