Caroline Mallett, a ninth grader at Issaquah High School near Seattle, attended her first homecoming dance last fall. The dance was a James Bond-themed party with a blackjack table, and hundreds of girls attended in party dresses.
A few weeks later, she and other female students learned that a male classmate had used an artificial intelligence app designed to digitally "strip" clothed photos, fabricating fake nude images of girls who had attended the dance from real photos in which they were fully dressed. He had been circulating the fabricated sexually explicit images.
Mallett, 15, alerted her father, Mark, a Democratic Washington state senator. Although she was not among the girls in the photos, she asked whether he could do something to help her friends, who were "very uncomfortable" knowing that male classmates had seen fake nude images of them. Soon after, Sen. Mallett and colleagues in the state Legislature proposed a bill to ban the sharing of AI-generated sexually explicit depictions of real minors.
“I hate the idea of having to worry about something like this happening again to my female friends, my sisters, or even myself,” Mallett told state lawmakers during a hearing on the bill in January.
The state legislature passed the bill without opposition. Democratic Gov. Jay Inslee signed it last month.
States are on the front lines of a new and rapidly expanding form of peer sexual exploitation and harassment in schools. Boys across the country have been secretly fabricating sexually explicit images of their female classmates using widely available “nudification” apps, then spreading the fake nudes through group chats on apps like Snapchat and Instagram.
Now, spurred in part by the firsthand accounts of teenage girls like Mallett, federal and state lawmakers are rushing to pass protections that can keep pace with exploitative AI apps.
Since early last year, at least 24 states have introduced bills to combat AI-generated sexually explicit images of people under the age of 18, known as deepfakes, according to data compiled by the nonprofit National Center for Missing and Exploited Children. And several states have enacted such measures.
Among them, South Dakota this year passed a law making it illegal to possess, produce or distribute AI-generated sexual abuse material depicting real minors. Last year, Louisiana enacted a deepfake law that criminalizes AI-generated sexually explicit depictions of minors.
“Hearing about these incidents, and just how much harm is being done, gave me a sense of urgency,” said Democratic Rep. Tina Orwall, who authored Washington state's explicit-deepfakes bill after hearing about incidents like the one at Issaquah High School.
Some lawmakers and child protection experts say such rules are urgently needed because the easy availability of AI nudification apps enables the mass production and distribution of false, graphic images that can circulate online for a lifetime, threatening girls' mental health, reputations and physical safety.
“In the course of an afternoon, one boy with a cellphone can victimize 40 underage girls,” said Yiota Souras, chief legal officer at the National Center for Missing and Exploited Children, “and then their images are out there.”
Over the past two months, deepfake nude incidents have spread at schools in Richmond, Illinois, and in Beverly Hills and Laguna Beach, California.
Yet few laws in the United States specifically protect people under 18 from exploitative AI apps.
That is because many current laws prohibiting child sexual abuse material and nonconsensual adult pornography, which cover real photos and videos of real people, may not extend to explicit images generated by AI using a real person's face, said Rep. Joseph D. Morelle, a Democrat from New York.
Last year, he introduced a bill that would make it a crime to share AI-generated intimate images of identifiable adults or minors. It would also give deepfake victims, or their parents, the right to sue individual perpetrators for damages.
“We want to make this incredibly painful for anyone to even contemplate, because this is harm that simply can't be undone,” Morelle said. “While it may seem like a prank to a 15-year-old boy, this is deadly serious.”
U.S. Rep. Alexandria Ocasio-Cortez, a fellow New York Democrat, recently introduced a similar bill that would allow victims to file civil lawsuits against deepfake perpetrators.
But neither bill would explicitly give victims the right to sue the developers of AI nudification apps, a measure that trial lawyers say could help curb the mass production of sexually explicit deepfakes.
“Legislation is needed to stop commercialization, which is the root of the problem,” said Elizabeth Hanley, a Washington lawyer who represents victims of sexual assault and harassment.
U.S. law prohibits the distribution of computer-generated child sexual abuse material depicting identifiable minors engaged in sexually explicit conduct. Last month, the Federal Bureau of Investigation issued an alert warning that such illegal material includes realistic, AI-generated images of child sexual abuse.
But experts say a fake, AI-generated image of a real, unclothed teenage girl may not qualify as “child sexual abuse material” unless prosecutors can show that the fabricated image meets the legal standard for sexually explicit conduct or a lewd display of genitals.
Some defense attorneys are trying to take advantage of the apparent legal ambiguity. A lawyer defending a high school boy in a New Jersey deepfake lawsuit recently argued that the court should not temporarily bar his client, who had created nude AI images of a female classmate, from viewing or sharing the pictures, because they were not illegal. Federal laws, the lawyer argued in a court filing, were not designed to apply to “computer-generated synthetic images that do not even include any part of the actual human body.” (The defendant ultimately agreed not to contest the restraining order on the images.)
States are now working on their own legislation to stop exploitative AI images. This month, California lawmakers introduced a bill that would update the state's ban on child sexual abuse material to specifically cover AI-generated abusive material.
And Massachusetts lawmakers are finalizing a bill that would make it a crime to share explicit images, including deepfakes, without consent. It would also require state agencies to develop diversion programs to teach minors who have shared explicit images about issues such as the “responsible use of generative artificial intelligence.”
Punishments can be severe. Under a new Louisiana law, anyone who knowingly creates, distributes, promotes or sells sexually explicit deepfakes of minors can face a minimum prison sentence of five to 10 years.
In December, Miami-Dade County police arrested two middle school boys for allegedly creating and sharing fake nude AI images of two female classmates, ages 12 and 13, according to police documents obtained by The New York Times through a public records request. The boys were charged with third-degree felonies under a 2022 state law that prohibits altered sexual depictions shared without consent. (The Miami-Dade County State's Attorney's Office said it could not comment on an open case.)
Washington state's new deepfake law takes a different approach.
After learning about the Issaquah High School incident from his daughter, Sen. Mallett contacted Rep. Orwall, an advocate for sexual assault victims and a former social worker. Orwall, who had worked on one of the state's first revenge-porn bills, authored a House bill to ban the distribution of AI-generated intimate or sexually explicit images of both minors and adults. (Mallett, who sponsored the companion bill in the Senate, is currently running for governor.)
Under the new law, a first offense could be charged as a misdemeanor, while those with a prior conviction for disclosing sexually explicit images could face a felony charge. The deepfake law takes effect in June.
“It's not surprising that our protections are lagging,” Orwall said. “That’s why we wanted to move this forward so quickly.”