On May 10th, 2023, the National Association of Attorneys General’s Consumer Protection Spring Conference held a panel on Artificial Intelligence and Deep Fakes. The panel provided an overview of the use of AI in the marketplace, how it is detected, enforcement efforts taken to address consumer harm, and the national security concerns raised by deep fakes—AI-generated images meant to appear as actual videos of a person speaking. Panelists and moderators included Patrice Malloy (Bureau Chief, Florida Attorney General’s Office), Diane Oates (Senior Assistant Attorney General, Florida Attorney General’s Office), Kashif T. Chand (Deputy Attorney General, Chief, Data Privacy & Cybersecurity Section, New Jersey Attorney General’s Office), Santiago Lyon (Head of Education and Advocacy for the Content Authenticity Initiative, Adobe), and Serge Jorgensen (Founding Partner & Chief Technology Officer, Sylint Group). Key takeaways from the panelists are outlined below.
Artificial Intelligence Insights:
- Responsibility: Publishers of AI should expect to be held responsible for the content their tools generate. This responsibility to consumers already exists, but AI is giving greater urgency and reach to that output. It is important that publishers explain to consumers what data is being used and what is being produced.
- Cyberattacks: The ever-increasing scope and scale that can be accomplished with generative AI allows companies to collect massive amounts of data and quickly disseminate it. After all, AI is only as powerful as the information it has to supply its inputs. This will inevitably lead to AI companies having a so-called “target on their backs” for cyber-related attacks. Vulnerabilities in AI data storage systems could lead to devastating results, such as kids’ mental health records leaking onto the dark web. The panelists suggested that retroactive corrections are needed, such as legislation allowing companies to delete previous training sets used to train AI. The panelists also said they anticipate seeing this type of legislation at the state, federal, and international levels.
- Intellectual Property: The increased use of AI has occurred without corresponding copyright and trademark laws being enacted to govern the use of others’ intellectual property to generate images. This has led to lawsuits against publishers by creators whose content was fed into AI algorithms. The panelists noted that regulators can address improper use of inputs by regulating the training data set at the outset.
- Consumer Education and Protection: The panelists noted that adding media literacy and provenance education to classwork would greatly assist consumers in making informed choices about AI and in protecting themselves. This type of transparency is critical and an important matter of consumer protection because consumers should have an opportunity to learn about the data harvesting and content creation that AI companies use to create their products and services. For example, Finland introduces media literacy coursework in primary school.
- Regulations: Lawmakers are in a challenging position as they balance the need to regulate AI against the potential for unintended negative consequences. For instance, laws requiring TikTok users to be over the age of 13 have incentivized children across the country to lie about their age. One alternative could be to install trackers on devices that prevent children under a certain age from accessing information.
Deep Fake Insights:
- Personal Protection: Providing a company with enough visual media to create an avatar of you, and enough recordings of your voice across a wide range of spoken sounds, will enable deep fake creators to make one of you. This is already happening to actors, like Tom Cruise, who have this media out in the public.
- Evidentiary Concerns: The panelists expressed concern about the potential for deep fakes to jeopardize the authenticity of video evidence, or for defenses to be asserted that real videos were instead deep fakes. Recently, Elon Musk’s lawyers asserted that a video of Musk offered in a trial over autonomous car functionality was a deep fake. The panelists suggested a couple of solutions, including requiring stronger authenticity labels and acknowledgments for deep fakes. The downside to these options is that they will require significant time to establish.
- Collective Change: The panelists all stated that countering deep fakes and the improper use of AI will take both communal and legal effort. Private entities must adopt industry standards to prevent AI from learning material it is not meant to learn. In addition, regulations and initiatives like a federal digital ID, prevention of nonconsensual explicit material, and retroactive clean-up of leaked data will be needed to combat consumer harm from deep fakes and AI. The panelists agreed that they are attempting to address these issues at the point of creation, before harm and confusion occur. The panelists also reinforced the idea that state Unfair or Deceptive Acts or Practices statutes will be an important tool as states learn to protect their consumers from negative outcomes related to AI advances.
To stay up to date on the latest actions taken by state attorneys general, sign up for Crowell & Moring’s State AG Blog.