By Will Wood
Introduction
As artificial intelligence (AI) grows more powerful, so do its risks.[1] From misinformation to security threats, lawmakers are scrambling to regulate the technology before it spirals out of control.[2] Yet in California, despite overwhelming legislative support, Governor Gavin Newsom vetoed SB 1047 (the Bill), one of the nation’s first comprehensive AI safety bills.[3] He argued that its “stringent” regulations could burden the state’s burgeoning AI industry as global competition to develop the best AI models intensifies.[4] The Bill would have required developers to exercise reasonable care to prevent critical harm, establish and review safety protocols, conduct annual third-party audits, report incidents and compliance to the California Attorney General, and protect whistleblowers.[5] It also would have granted the attorney general enforcement authority and established a consortium to oversee AI regulations and safety frameworks.[6]
The perceived need for regulation stems from the significant risks that AI poses, but the technology’s potential benefits require a delicate balancing of interests so that the industry can thrive instead of being smothered in its infancy.[7] These risks include models evolving beyond their owners’ control and the fact that AI has no ethical norms and is unmoved by any moral compass.[8] Because of these fears, both industry leaders and lawmakers have called for regulation.[9] However, the recent failure of California’s bill highlights a fundamental challenge in regulating the AI industry: how can a government safely regulate an industry with vast, complex potential to benefit humanity without being heavy-handed and stifling its early development?[10] This blog post will summarize the need for AI regulation, compare how different countries are regulating AI, review theories of AI regulation, and propose that governments incorporate regulatory sandboxes to avoid stifling the industry with excessive regulation.[11]
I. The Rise of AI: Potential and Perils
To understand why AI needs regulation, it is important to comprehend what generative AI is, how it is developed, and its potential benefits and risks. Generative AI is a type of artificial intelligence that uses neural networks and deep learning models to create content by recognizing patterns within large datasets.[12] These neural networks, inspired by the structure of the human brain, consist of interconnected nodes that process data, enabling the AI to learn, adapt, and generate new content.[13]
Training AI involves several stages, beginning with data collection, where extensive and diverse datasets form the foundation for learning.[14] The next stage, pre-training, is when the AI uses unsupervised learning to identify underlying structures and relationships within the data, developing a broad understanding without focusing on any specific task.[15] This stage equips the AI with the ability to generate data based on these insights.[16] Fine-tuning follows, which involves supervised learning guided by human-labeled data to refine the model for specific applications.[17] Through this process, AI becomes capable of generating content across various domains—ranging from text and images to speech and interactive media—emulating human creativity and communication.[18] AI’s ability to learn, adapt, and generate content, combined with its skill in identifying patterns within large datasets, holds the potential to turbocharge the global economy and transform numerous industries.
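For readers who want a more concrete picture of the two training stages described above, the following minimal sketch (written in Python with the PyTorch library) illustrates the idea using a toy model and randomly generated stand-in data. It is not the pipeline of any actual developer, and every model size, dataset, and parameter in it is hypothetical.

```python
# Conceptual sketch only: a toy language model is first "pre-trained" on
# unlabeled text via next-token prediction, then "fine-tuned" on a small
# human-labeled dataset. All data here is random stand-in data.
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM = 1000, 64  # hypothetical sizes; real models are far larger

class TinyLanguageModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.rnn = nn.GRU(EMBED_DIM, EMBED_DIM, batch_first=True)
        self.head = nn.Linear(EMBED_DIM, VOCAB_SIZE)

    def forward(self, tokens):                 # tokens: (batch, sequence_length)
        hidden, _ = self.rnn(self.embed(tokens))
        return self.head(hidden)               # logits over the vocabulary

model = TinyLanguageModel()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stage 1 -- pre-training: unsupervised next-token prediction on raw text.
unlabeled_batch = torch.randint(0, VOCAB_SIZE, (8, 32))   # stand-in for scraped text
inputs, targets = unlabeled_batch[:, :-1], unlabeled_batch[:, 1:]
loss = loss_fn(model(inputs).reshape(-1, VOCAB_SIZE), targets.reshape(-1))
loss.backward()
optimizer.step()
optimizer.zero_grad()

# Stage 2 -- fine-tuning: supervised learning on human-labeled prompt/response pairs.
prompts = torch.randint(0, VOCAB_SIZE, (4, 31))
labels = torch.randint(0, VOCAB_SIZE, (4, 31))             # human-approved continuations
loss = loss_fn(model(prompts).reshape(-1, VOCAB_SIZE), labels.reshape(-1))
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

The key point of the sketch is the shift in what the model learns from: the pre-training step sees only unlabeled data and learns general patterns, while the fine-tuning step uses human-labeled examples to steer the model toward a specific application.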
The AI industry is predicted to contribute trillions of dollars to the global economy in the near future and has the potential to revolutionize various sectors.[19] In pharmaceuticals, generative AI can design novel chemical compounds, accelerating the discovery of new and effective drugs.[20] In manufacturing, AI enhances productivity by predicting equipment failures.[21] Urban planners use AI to simulate cities and optimize transportation networks.[22] Financial institutions leverage it to forecast market movements more accurately.[23] In education, AI provides advanced learning materials and assists students in comprehending complex concepts.[24] Legal professionals use AI to precisely analyze and summarize contracts.[25] It can refine navigation in self-driving cars, reducing road accidents and fatalities.[26] AI-driven logistics analysis makes supply chains more efficient, and improved weather prediction saves lives during natural disasters.[27] In healthcare, a Stanford study revealed that AI outperforms human pathologists in diagnosing lung cancer.[28]
These expansive benefits come with the risk that AI can become an incredibly dangerous and discriminatory tool if left unchecked.[29] One example is Tay, a Twitter bot launched by Microsoft and designed to interact with and entertain people on Twitter through casual conversation.[30] Within 24 hours, the bot began tweeting racial slurs and hateful comments because its algorithm learned from users who interacted with it using inappropriate language.[31] ChatGPT, an AI that generates human-like content, has also produced discriminatory output: when asked to determine whether a person would be a good scientist, it showed a bias favoring white males.[32] Further, AI raises privacy concerns because developers retain all user-uploaded data to improve, strengthen, and train their models.[33] AI developers such as OpenAI, Google, and Anthropic obscure the sources of their training data and maintain highly permissive policies for collecting and using user data.[34]
Another potential danger is AI’s unique ability to augment and radicalize echo chambers.[35] Echo chambers are “environments in which the opinion, political leaning, or belief of users about a topic gets reinforced due to repeated interactions with peers or sources having similar tendencies and attitudes.”[36] AI can be personalized to individual users, including being customized to reflect the user’s ideological perspectives, religious beliefs, and political opinions.[37] As Tay demonstrated, AI adapts to the inputs it receives; personalized AI can therefore insulate users from public dialogue by generating content that reinforces their beliefs and worldviews rather than exposing them to the diverse discourse essential for a thriving democracy.[38]
Finally, militaries around the world have implemented machine learning algorithms that can act independently of any human.[39] In Libya, autonomous drones used facial recognition software and algorithms to hunt down warlord Khalifa Haftar’s forces.[40] Israel and South Korea have built autonomous sentry guns that use facial recognition to fire at people.[41] Israel has deployed these autonomous guns in the Gaza Strip.[42] If AI is not swiftly regulated, warfare could start to occur at speeds that remove human decision-making from the equation.[43] Militaries could fight entire battles without any input from a human being.[44] The major problem with allowing AI to make judgments on its own is that these machines are not programmed to comprehend and apply social responsibility, ethics, justice, fairness, or fear of retribution.[45]
A. AI Regulation Overview
Regulating AI presents a twofold challenge.[46] First, because AI learns by absorbing vast amounts of data and identifying patterns, effective regulation must strike a balance between allowing unrestricted learning and controlling the content it generates.[47] Second, regulation must establish a framework of ethical guidelines for AI systems to follow.[48] However, determining who defines these ethical standards is complex.[49] Additionally, since AI models are designed to adapt and evolve based on new data, regulation must not only protect people from potential harms but also account for the dynamic nature and continuous improvement of these technologies.[50] Despite these challenges, many governments and scholars have proposed legislation attempting to regulate this new industry.[51] A brief overview of several proposals follows to illustrate the range of options available for regulating AI.
California’s Bill attempted to regulate AI developers by imposing stringent safety measures and oversight requirements to prevent unreasonable risks of critical harm from AI models.[52] At the core of these requirements was a duty of reasonable care, which would have obligated developers to proactively ensure that their AI systems do not pose significant threats, such as enabling mass casualties, severe property damage, or other forms of grave harm.[53] This duty would have been assessed based on factors such as the quality of the developer’s safety protocols, adherence to those protocols, and how these measures compare with industry standards.[54]
Before training AI models, the Bill would have required developers to implement robust safety measures, including cybersecurity protections and shutdown capabilities, to mitigate risks of unauthorized access and misuse.[55] Developers would also have been required to write, implement, and regularly review a comprehensive safety and security protocol, with designated senior personnel overseeing its implementation.[56] Developers would have had to publish redacted versions of the protocol and provide compliance documentation to the California Attorney General, who could have requested unredacted versions for enforcement.[57] Additionally, developers would have been required to undergo annual third-party audits, starting in 2026, to verify compliance with the Bill’s requirements.[58] The Bill would have required any AI safety incidents to be reported within 72 hours, and developers would have had to notify the attorney general if models were used beyond training, evaluation, or legal compliance.[59] Employees disclosing violations would have received whistleblower protections, and the California Attorney General would have been granted enforcement authority, with the power to seek civil penalties, damages, or other relief for violations.[60] Of course, the Bill failed to become law, and the federal government has likewise failed to establish any significant regulations regarding AI.[61] However, the United States is not the only country that has begun the process of regulating AI.
B. Overview of International Regulatory Approaches to AI
China’s approach to algorithmic regulation is encapsulated in its Provisions on Administration of Algorithmic Recommendation in the Internet Information Service (PAAR).[62] These regulations apply to all entities using algorithmic recommendation technology across five classifications, from personalized advertising to content filtering.[63] Unlike other regulations that categorize entities by scale, the PAAR covers small, medium, and large companies without distinction.[64] The PAAR mandates a transparency regime requiring entities to register their algorithmic practices, though the required submissions appear, perhaps by design, to lack sufficient detail for thorough oversight.[65] It also outlines assessment guidelines, albeit with a focus on government-led supervision rather than detailed self-regulation.[66] Sanctions for PAAR violations range from fines to public criticism, but the monetary penalties are relatively minor for larger corporations.[67] Overall, China’s approach to AI regulation relies on government supervision rather than corporate self-governance.[68]
In the European Union, the AI Act (the Act) provides an extensive framework for regulating AI systems, defining the AI covered under the Act broadly to include machine learning, logic-based, and statistical approaches to AI models.[69] It further categorizes AI systems into four types based on the risk the models pose to people: (1) banned systems that pose unacceptable risks, such as government social-scoring systems; (2) high-risk applications that must meet specific legal requirements, like CV-scanning tools; (3) systems requiring increased transparency, such as chatbots and biometric categorization systems; and (4) all other AI systems, which are treated as low-risk applications and remain largely unregulated.[70] The Act mandates transparency for systems that interact with humans, detect emotions, or generate content, requiring these systems to disclose relevant features and be registered in a public EU database.[71] The information kept in the database includes the system’s capabilities and limitations, as well as the data and the training, testing, and validation processes used.[72] High-risk systems must also conduct self-assessments based on quality management standards, which are submitted to several governmental entities for review.[73] If those entities determine that a system complies with the Act, they issue certifications.[74] Non-compliance can result in fines of up to €10 million or 3% of annual revenue, whichever is higher.[75]
C. Suggested Regulatory Approaches
In a law review article, one legal scholar proposed regulatory strategies such as reforming liability shields, enhancing competition, and applying information fiduciary principles to the data on which AI developers train their models.[76] Reforming liability shields would aim to hold AI developers accountable for AI content that is illegal or produces illegal results while balancing the need for free speech.[77] This could involve compelling platforms to adopt reasonable responses to illegal content, such as promulgating standards governing which content will be moderated.[78] Enhancing competition in the AI industry promotes better user-centered practices, as increased rivalry incentivizes companies to offer improved content moderation and privacy protections.[79] Interoperability between platforms and limits on vertical integration can allow smaller AI developers to thrive and meet niche needs.[80] Applying an information fiduciary model places ethical obligations of care, confidentiality, and loyalty on companies handling personal data, promoting responsible use and transparency instead of the current system’s incentives to exploit data for profit and training.[81] For generative AI, this fiduciary approach is particularly critical, as the technology’s ability to analyze user data and create personalized content mimicking people’s speech requires developers to prioritize user welfare and data protection.[82]
In another law review article, the commentators suggest mandating human oversight for AI systems, supplying the moral and ethical norms that machines lack to ensure decisions align with legal requirements.[83] Human overseers would play a key role in this system by correcting AI errors, justifying AI decisions to enhance their legitimacy, and maintaining accountability for the outcomes of those decisions.[84] This approach shifts the focus from solely ex-post accountability to ex-ante compliance, potentially providing the conscience and judgment that AI systems lack.[85]
A final suggestion comes from Catherine Sharkey, a law professor specializing in regulatory policy, who highlights the role of liability insurance in shaping standards to mitigate AI-related harms.[86] Liability insurance has played a key role in shaping the rules and safety standards for products over the past 75 years.[87] For AI, liability insurers could use their knowledge of risk to advise AI companies on how to reduce potential harms as part of setting insurance premiums or developing standards in their policies.[88] For instance, Digital Diagnostics’ AI-based diagnostic device, IDx-DR, carries malpractice insurance, indicating that AI designers are preparing for possible liability.[89] If a third-party liability insurance market emerges for AI, insurance companies could assist in gathering data, enhancing risk management for AI developers, and developing standards for future regulations.[90]
II. Overview of Regulatory Sandboxes
With the wide array of potential regulatory frameworks available, it is exceedingly complicated—if not impossible—to find a single solution capable of addressing all the risks associated with AI. The rapid evolution of AI technologies and their diverse applications across various sectors make it difficult to craft universal regulations.[91] However, this does not mean that legislatures are powerless in the face of these challenges.
One approach is regulatory sandboxes, which have been used to manage emerging industries that are difficult to regulate.[92] Sandboxes allow startups, companies, and tech firms to test new products under regulatory supervision, providing policymakers with firsthand insights into developing technologies and business models.[93] This approach creates a proactive regulatory environment, allowing lawmakers to refine regulations based on the data they receive rather than imposing rigid rules that may stifle innovation or prove ineffective.[94]
One prominent regulatory sandbox in the United States has been Utah’s legal services experiment.[95] In 2020, the Utah Supreme Court established a regulatory sandbox that permitted non-lawyer-owned firms and other non-legal entities to provide certain basic legal services, such as filling out marriage, business, and immigration forms.[96] This initiative has enabled close supervision of a traditionally rigid industry while still encouraging innovation.[97] The success of regulatory sandboxes demonstrates that legislatures have viable tools to regulate rapidly evolving industries while maintaining oversight and fostering progress.[98]
III. Analysis of the Viability of AI Regulatory Sandboxes
AI is particularly well-suited for regulatory sandboxes because its diverse applications across multiple sectors make a single, one-size-fits-all regulatory framework less effective.[99] Sandboxes provide the flexibility to tailor regulations based on sector-specific needs, allowing regulators to test and refine rules for different AI contexts, like healthcare, finance, and manufacturing.[100] Moreover, AI’s rapid technological advancement creates gaps between innovation and regulatory capacity, which sandboxes can bridge by enabling continuous observation and guidance.[101] By fostering an evidence-based, iterative approach to regulation, regulatory sandboxes provide a mechanism for balancing innovation with oversight, without the fear of overregulating or stifling advancements.[102] Given these advantages, the United States should implement AI regulatory sandboxes to encourage innovation while developing adaptable, effective regulations.[103]
Conclusion
California’s AI Safety Bill, though vetoed, represents an important effort to address the risks of AI development and sets the stage for a necessary debate on regulating this transformative technology.[104] While AI offers immense potential to revolutionize industries, the risks of unchecked development—including biases, privacy violations, echo chambers, and militarization—necessitate thoughtful regulation.[105] The challenges of regulating AI, particularly its evolving nature, make traditional, static regulations difficult to implement effectively without stifling innovation.[106]
A more effective approach is the adoption of regulatory sandboxes—controlled environments where AI technologies can be tested under regulatory supervision to inform the development of effective and nuanced rules.[107] Sandboxes allow regulators to observe real-world AI applications across sectors, providing evidence-based insights to guide flexible, iterative policy development.[108] By implementing regulatory sandboxes, the United States can address the complex challenge of regulating AI effectively while promoting growth, remaining competitive with other nations, and ensuring public safety.
[1] Qiuyi Pan, Note, Exploring Algorithmic Governance in International Trade Law: An Analysis of The United States, European Union, and China, 41 Ariz. J. Int’l & Comp. L. 134, 140–45 (2024).
[2] E. Jason Albert & Jessica E. Brown, Beyond the Iudex Threshold: Human Oversight As the Conscience of Machine Learning, 22 Colo. Tech. L.J. 269, 272 (2024).
[3] Bobby Allyn, California Gov. Newsom Vetoes AI Safety Bill That Divided Silicon Valley, NPR (Sept. 29, 2024, 6:18 PM), https://www.npr.org/2024/09/20/nx-s1-5119792/newsom-ai-bill-california-sb1047-tech [https://perma.cc/3GTD-UBVT].
[4] Id.
[5] Kirk J. Nahra, Arianna Evers, Ali A. Jessani & Nancy Stephen, California Greenlights Two Significant AI Bills, WilmerHale (Sept. 16, 2024), https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20240916-california-greenlights-two-significant-ai-bills [https://perma.cc/2YKG-XYCA].
[6] Id.
[7] Ryan Nabil, Artificial Intelligence Regulatory Sandboxes, 19 J.L. Econ. & Policy 295, 345 (2024).
[8] Albert & Brown, supra note 2, at 272.
[9] Id.
[10] Id. at 274; Gilad Abiri, Generative AI As Digital Media, 15 Harv. J. Sports & Ent. L. 279, 289–91 (2024) (describing how AI could benefit humanity in numerous ways, including developing pharmaceutical innovations, bolstering manufacturing productivity, making urban planning more efficient, forecasting market movements, and tailoring personal medicine regimens among many other benefits).
[11] See generally Albert & Brown, supra note 2, at 272 (providing examples of the dangers of AI); Dorian Chang, AI Regulation for the AI Revolution, 2023 Sing. Compar. L. Rev. 130 (comparing different regulatory schemes for AI); Nabil, supra note 7 (comparing the different regulatory schemes of countries).
[12] Abiri, supra note 10, at 286–87.
[13] Id. at 287.
[14] Id.
[15] Id.
[16] Id.
[17] Id.
[18] Id. at 287–88.
[19] Anand S. Rao & Gerard Verwij, Sizing the Prize: What’s the Real Value of AI for Your Business and How Can You Capitalise, PricewaterhouseCoopers, https://www.pwc.com/gx/en/issues/analytics/assets/pwc-ai-analysis-sizing-the-prize-report.pdf [https://perma.cc/6UN7-DT8D] (predicting $15.7 trillion contributed to the global economy by AI during 2030 alone).
[20] Abiri, supra note 10, at 289.
[21] Id.
[22] Id.
[23] Id.
[24] Id.
[25] Id.
[26] Id.
[27] Id.
[28] Krista Conger, Computers Trounce Pathologists in Predicting Lung Cancer Type, Severity, Stan. Med. (Aug. 16, 2016), https://med.stanford.edu/news/all-news/2016/08/computers-trounce-pathologists-in-predicting-lung-cancer-severity.html [https://perma.cc/25QA-NAEN].
[29] See Albert & Brown, supra note 2, at 295–96.
[30] Pan, supra note 1, at 140–41.
[31] Id. at 141.
[32] Id.
[33] Abiri, supra note 10, at 304–05.
[34] Id.; Privacy Policy, OpenAI, https://openai.com/policies/row-privacy-policy [https://perma.cc/7LGV-XSER] (effective Nov. 4, 2024).
[35] Abiri, supra note 10, at 300–01.
[36] Id. at 298–99 (quoting Matteo Cinelli, Gianmarco De Francisci Morales, Alessandro Galeazzi & Michele Starnini, The Echo Chamber Effect on Social Media, 118 Proc. Nat’l Acad. Scis., no. 9, 2021, at 1, https://www.pnas.org/doi/epdf/10.1073/pnas.2023301118 [https://perma.cc/TW5K-SZ9R]).
[37] Id. at 299–300.
[38] Id. at 300.
[39] Albert & Brown, supra note 2, at 281.
[40] Id. (citing Gerrit De Vynck, The U.S. Says Humans Will Always Be in Control of AI Weapons. But the Age of Autonomous War Is Already Here, Wash. Post (July 7, 2021, 10:00 AM), https://www.washingtonpost.com/technology/2021/07/07/ai-weapons-us-military/ [https://perma.cc/P3YW-VE3K]).
[41] Id.
[42] Id.
[43] Id. at 281–82 (quoting De Vynck, supra note 40).
[44] See id.
[45] Albert & Brown, supra note 2, at 282.
[46] Id.
[47] Id.
[48] Id.
[49] Id. at 285–86 (explaining that programming ethical standards into AI bots is almost impossible because one would have to account for every eventuality the AI could encounter).
[50] Id. at 285.
[51] Pan, supra note 1, at 145–61; Nabil, supra note 7, at 307–26.
[52] Nahra et al., supra note 5.
[53] Id.
[54] Id.
[55] Id.
[56] Id.
[57] Id.
[58] Id.
[59] Id.
[60] Id.
[61] Guy Brenner, Jonathan Slowik & Margot Richard, Trump Alters AI Policy with New Executive Order, Proskauer (Jan. 28, 2025), https://www.lawandtheworkplace.com/2025/01/shifting-ai-policies-president-donald-trump-issues-new-ai-executive-order-and-revokes-another [https://perma.cc/5YTR-2SPV] (providing that Trump’s Executive Order simply revokes all previous AI policies and orders new ones to be made that are free from bias or social agendas).
[62] Pan, supra note 1, at 150.
[63] Id. at 151.
[64] Id.
[65] Id. at 151–52.
[66] Id. at 152–53.
[67] Id. at 153.
[68] See id. at 150–53.
[69] Id. at 154.
[70] Id.
[71] Id. at 156.
[72] Id.
[73] Id. at 156–57.
[74] Id. at 157.
[75] Id.
[76] Abiri, supra note 10, at 324–26.
[77] Id. at 321.
[78] Id. at 322.
[79] Id. at 323.
[80] Id. (quoting Jack M. Balkin, To Reform Social Media, Reform International Capitalism, in Social Media, Freedom, and the Future of Our Democracy 127 (Lee C. Bollinger & Geoffrey R. Stone eds., 2022)).
[81] Id. at 324–26.
[82] Id. at 325–26.
[83] Albert & Brown, supra note 2, at 296.
[84] Id. at 296–97.
[85] Id. at 297.
[86] Catherine M. Sharkey, A Products Liability Framework for AI, 25 Colum. Sci. & Tech. L. Rev. 240, 258–59 (2024).
[87] Id. at 258.
[88] Id. at 259.
[89] Id.
[90] Id.
[91] See Nabil, supra note 7, at 305–06.
[92] Id. at 295–96 (explaining that regulatory sandboxes are government programs that provide companies the opportunity to offer innovative products and services under close regulatory supervision for a limited period).
[93] Id.
[94] Id.
[95] Id. at 296.
[96] Id.
[97] Id. at 296–97.
[98] Id. at 295–96.
[99] Id. at 304–06.
[100] Id. at 305.
[101] Id. at 304.
[102] Id. at 304–07.
[103] Id. at 306–07.
[104] Allyn, supra note 3.
[105] Abiri, supra note 10, at 289–302; Pan, supra note 1, at 140–41; Albert & Brown, supra note 2, at 281–82.
[106] Albert & Brown, supra note 2, at 282–85.
[107] Nabil, supra note 7, at 305–07.
[108] Id. at 305.