
Regulating AI: 3 experts explain why it's hard to do and important to do well

Powerful new AI systems could amplify fraud and disinformation, leading to widespread calls for government regulation. But doing so is easier said than done and could have unintended consequences.

Estimated reading time: 11 minutes

From fake photos of Donald Trump being arrested by New York City police officers to a chatbot describing a very-much-alive computer scientist as having died tragically, the ability of the new generation of generative artificial intelligence systems to create convincing but fictitious text and images is setting off alarms about fraud and misinformation on steroids. Indeed, on March 29, 2023, a group of AI researchers and industry figures urged the industry to suspend further training of the latest AI technologies or, barring that, for governments to "impose a moratorium".

Image generators like DALL-E, Midjourney and Stable Diffusion, and text generators such as Bard, ChatGPT, Chinchilla and LLaMA, are now available to millions of people and require no technical knowledge to use.

Given the unfolding landscape of tech companies deploying AI systems and testing them on the public, policy makers should ask themselves whether and how to regulate the emerging technology. The Conversation asked three tech policy experts to explain why regulating AI is such a challenge and why it's so important to get it right.

Human foibles and a moving target

S. Shyam Sundar, professor of multimedia effects and director, Center for Socially Responsible AI, Penn State

The reason to regulate AI is not because the technology is out of control, but because human imagination is out of proportion. Overwhelming media coverage has fueled irrational beliefs about AI's capabilities and consciousness. These beliefs build on "automation bias", the tendency to let our guard down when machines perform a task. An example is reduced vigilance among pilots when their plane is flying on autopilot.

Numerous studies in my lab have shown that when a machine, rather than a human, is identified as the source of an interaction, it triggers a mental shortcut in users' minds that we call the "machine heuristic". This shortcut is the belief that machines are accurate, objective, impartial, infallible and so on. It clouds users' judgment and leads them to trust machines excessively. However, simply disabusing people of the idea that AI is infallible is not enough, because humans are known to subconsciously assume competence even when the technology doesn't warrant it.

Research has also shown that people treat computers as social beings when machines show even the slightest hint of humanity, such as the use of conversational language. In these cases, people apply social rules of human interaction, such as courtesy and reciprocity. So when computers seem sentient, people tend to trust them blindly. Regulation is needed to ensure AI products deserve this trust and don't exploit it.

AI presents a unique challenge because, unlike traditional engineering systems, designers cannot be sure how AI systems will perform. When a traditional automobile rolled out of the factory, engineers knew exactly how it was going to perform. But with self-driving cars, engineers can never be sure how they will behave in new situations.

Difficulty in controlling innovation

Lately, thousands of people around the world have marveled at what large generative AI models like GPT-4 and DALL-E 2 produce in response to their prompts. None of the engineers involved in developing these AI models could tell you exactly what the models will produce. To complicate matters, these models change and evolve with ever greater interaction.

All of this means that there is ample potential for misfires. Therefore, much depends on how AI systems are implemented and what provisions for recourse are in place when human sensibilities or well-being are harmed. AI is more of an infrastructure, like a freeway. You can design it to shape human behaviors in the collective, but you'll need mechanisms to deal with abuses, like speeding, and unpredictable events, like accidents.

AI developers will also need to be extraordinarily creative in predicting the ways the system might behave and in anticipating potential violations of social standards and responsibilities. This means there is a need for regulatory or governance frameworks that rely on periodic audits and scrutiny of AI outcomes and products, although I believe these frameworks should also recognize that system designers cannot always be held accountable for incidents.

Combining “soft” and “hard” approaches

Cason Schmit, assistant professor of public health, Texas A&M University

Regulating artificial intelligence is complicated. To regulate AI well, you first have to define AI and understand its expected risks and benefits. Legally defining AI is important for identifying what is subject to the law. But AI technologies are still evolving, so it is hard to pin down a stable legal definition.

Understanding the risks and benefits of AI is also important. Good regulation should maximize public benefits while minimizing risks. However, AI applications are still emerging, so it is difficult to know or predict what the future risks or benefits might be. These types of unknowns make emerging technologies like AI extremely difficult to regulate with traditional laws and regulations.

Lawmakers are often too slow to adapt to a rapidly changing technological environment. Some new laws are obsolete by the time they are enacted or implemented. Without new laws, regulators have to use old laws to address new problems. Sometimes this leads to legal barriers for social benefits or legal loopholes for harmful conduct.

Soft Law

"Soft laws" are the alternative to traditional "hard law" legislative approaches aimed at preventing specific violations. In the soft-law approach, a private organization establishes rules or standards for industry members. These can change more rapidly than traditional legislation, which makes soft laws promising for emerging technologies because they can adapt quickly to new applications and risks. However, soft laws can mean soft enforcement.

Megan Doerr, Jennifer Wagner and I (Cason Schmit) propose a third way: Copyleft AI with Trusted Enforcement (CAITE). This approach combines two very different concepts from intellectual property: copyleft licenses and patent trolls.

Copyleft Licenses

Copyleft licenses let you easily use, reuse or modify content under the terms of a license, as with open-source software. The CAITE model uses copyleft licenses to require AI users to follow specific ethical guidelines, such as transparent assessments of the impact of bias.

In our model, these licenses also transfer the legal right to enforce license violations to a trusted third party. This creates an enforcement entity that exists solely to enforce ethical AI standards and can be funded in part by fines for unethical conduct. This entity is like a patent troll in that it is private rather than governmental and supports itself by enforcing the legal intellectual property rights it collects from others. In this case, rather than operating for profit, the entity enforces the ethical guidelines defined in the licenses.
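To make the mechanics concrete, here is a minimal, purely illustrative Python sketch of the flow described above: a copyleft-style license carries ethical conditions, and the right to enforce them is assigned to a trusted third party partly funded by fines. All names here (CaiteLicense, TrustedEnforcer, the example conditions) are hypothetical illustrations, not part of any actual CAITE specification.

```python
from dataclasses import dataclass, field

@dataclass
class TrustedEnforcer:
    """Hypothetical third party holding the right to enforce license terms."""
    name: str
    fines_collected: float = 0.0
    violations: list = field(default_factory=list)

    def report_violation(self, lic, condition: str, fine: float) -> None:
        # The enforcer, not the original licensor, pursues the breach;
        # fines partly fund its continued oversight work.
        if condition in lic.conditions:
            self.violations.append((lic.licensee, condition))
            self.fines_collected += fine

@dataclass
class CaiteLicense:
    """Hypothetical copyleft-style license carrying ethical conditions."""
    licensee: str
    conditions: list          # e.g. bias-impact reporting obligations
    enforcer: TrustedEnforcer # enforcement rights assigned to a third party

# Toy usage: a licensee breaches one condition, and the trusted
# enforcer records the violation and collects the fine.
enforcer = TrustedEnforcer(name="AI Ethics Trust")
lic = CaiteLicense(
    licensee="ExampleAI Corp",
    conditions=["publish a bias-impact assessment"],
    enforcer=enforcer,
)
enforcer.report_violation(lic, "publish a bias-impact assessment", fine=10_000.0)
print(enforcer.violations, enforcer.fines_collected)
```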

This model is flexible and adaptable to meet the needs of an ever-changing AI environment. It also allows for substantial enforcement options like a traditional government regulator. In this way, it combines the best elements of hard and soft law approaches to address the unique challenges of AI.

Four key questions to ask

John Villasenor, professor of electrical engineering, law, public policy and management, University of California, Los Angeles

The extraordinary recent progress in generative AI based on large language models is spurring demands for new AI-specific regulation. Here are four key questions to ask:

1) Is there a need for new AI-specific regulation?

Many of the potentially problematic outcomes of AI systems are already addressed by existing frameworks. If an AI algorithm used by a bank to evaluate loan applications leads to racially discriminatory lending decisions, that would violate the Fair Housing Act. If the AI software in a driverless car causes an accident, product liability law provides a framework for pursuing remedies.

2) What are the risks of regulating a rapidly evolving technology based on a snapshot in time?

A classic example of this is the Stored Communications Act (SCA), which was enacted in 1986 to address then-novel digital communication technologies such as email. In enacting the SCA, Congress provided significantly less privacy protection for emails more than 180 days old.

The rationale was that limited storage meant people were constantly cleaning out their inboxes, deleting older messages to make room for new ones. As a result, messages stored for more than 180 days were deemed less important from a privacy standpoint. It's unclear whether this logic ever made sense, and it certainly doesn't make sense in the 2020s, when most of our emails and other stored digital communications are more than six months old.

A common response to concerns about regulating technology based on a single snapshot in time is this: if a law or regulation becomes obsolete, update it. But that's easier said than done. Most people agree that the SCA became obsolete decades ago. But because Congress couldn't agree on how, specifically, to revise the 180-day provision, it's still on the books more than a third of a century after it was enacted.

3) What are the potential unintended consequences? 

The Allow States and Victims to Fight Online Sex Trafficking Act of 2017 was a law passed in 2018 that revised Section 230 of the Communications Decency Act with the aim of combating sex trafficking. While there is little evidence that it has reduced sex trafficking, it has had an extremely problematic impact on a different group of people: sex workers who relied on the websites knocked offline by FOSTA-SESTA to exchange information about dangerous clients. This example shows the importance of taking a broad look at the potential effects of proposed regulations.

4) What are the economic and geopolitical implications? 

If regulators in the US take action to intentionally slow progress in AI, that will simply push investment and innovation, and the resulting job creation, elsewhere. While emerging AI raises many concerns, it also promises enormous benefits in areas such as education, medicine, manufacturing, transportation safety, agriculture, weather forecasting, access to legal services and more.

I believe that AI regulations drafted with the above four questions in mind will be more likely to successfully address the potential harms of AI while ensuring access to its benefits.

This article is freely excerpted from The Conversation, an independent non-profit news organization dedicated to sharing the knowledge of academic experts.
