Artificial Intelligence

Middle schoolers are now using AI to create ‘deepfake’ pornography of their classmates

From LifeSiteNews

By Jonathon Van Maren

It’s happening all over the world: a generation weaned on hardcore pornography is increasingly enabled by AI technology to create imagery of people they know personally.

A recent news story out of Alabama should be getting far more attention than it is, because it is a glimpse into the future. Middle school students are using artificial intelligence (AI) to create pornographic images of their female classmates.

A group of mothers in Demopolis say their daughters’ pictures were manipulated with artificial intelligence to create pornographic images. Tiffany Cannon, Elizabeth Smith, Holston Drinkard, and Heidi Nettles said they all learned on Dec. 4 that two of their daughters’ male classmates had created and shared explicit photos of their daughters. Smith said the days since have been a rollercoaster of emotions.

“They’re scared, they’re angry, they’re embarrassed. They really feel like, why did this happen to them,” said Smith. The mothers said there is an active investigation with Demopolis Police, but they also want the school district to take action. They believe this is an instance of cyberbullying and that there are state laws and policies in place to protect their girls.

“We have laws in place through the Safe Schools law and the Student Bullying Prevention Act, which says that cyberbullying will not be tolerated either on or off campus,” said Smith. “It takes a lot for these girls to come forward, and they did. They need to be supported for that. Not just from their parents, but from their school and their community,” said Nettles.

The school hasn’t given many details yet, with Demopolis City Schools Superintendent Tony Willis saying in a statement that there is little it can do: “The school can only address things that happen at school events, school campus on school time. Outside of this, it becomes a parent and police matter. We sympathize with parents and never want wrongful actions to go without consequences – our hearts and prayers go out to all the families hurt by this. That is why we have assisted the police in every step of this process.”

We’ll be seeing a lot more of this in the years ahead. The rise of sexting took pornography and made it personal – educators and law enforcement are still grappling with how to curtail the nearly ubiquitous practice of sending and receiving intimate images, the majority of which are then shared with others. Many of these images, by virtue of the age of the students involved, constitute child pornography. AI-generated pornography will create a laundry list of other disturbing issues to deal with.

A quick scan of recent headlines will give you a sense of where this is headed. From Fortune: “‘Nudify’ apps that use AI to undress women in photos are soaring in popularity, prompting worries about non-consensual porn.” These apps allow users to “digitally undress” people they know and thus create nonconsensual pornography of girls and women – and they have already acquired millions of users.

From MIT Technology Review: “A high school’s deepfake porn scandal is pushing US lawmakers into action.” At a New Jersey high school, boys had used AI to “create sexually explicit and even pornographic photos of some of their classmates,” with up to 30 girls being impacted. The sense of violation felt by the victims is profound. 

From CNN: “Outcry in Spain as artificial intelligence used to create fake naked images of underage girls.” From the story: “Police in Spain have launched an investigation after images of young girls, altered with artificial intelligence to remove their clothing, were sent around a town in the south of the country. A group of mothers from Almendralejo, in the Extremadura region, reported that their daughters had received images of themselves in which they appeared to be naked.”  

One girl was blackmailed by a boy using a doctored image of her. Another cried to her mother: “What have they done to me?”

From the Washington Post: “AI fake nudes are booming. It’s ruining real teens’ lives.” From the story: “Artificial intelligence is fueling an unprecedented boom this year in fake pornographic images and videos. It’s enabled by a rise in cheap and easy-to-use AI tools that can ‘undress’ people in photographs — analyzing what their naked bodies would look like and imposing it into an image — or seamlessly swap a face into a pornographic video.”

Those are just a few examples of dozens of stories from the past few months. The pornography crisis is being exacerbated further by AI, once again highlighting the unfortunate truth of a joke in tech circles: First we create new technology, then we figure out how to watch porn on it. The porn industry has ruined an untold number of lives. AI porn is taking that to the next level. We should be prepared for it. 

Jonathon Van Maren is a public speaker, writer, and pro-life activist. His commentary has been translated into more than eight languages and published widely online, as well as in print newspapers such as the Jewish Independent, the National Post, the Hamilton Spectator, and others. He has received an award for combating anti-Semitism in print from the Jewish organization B’nai Brith. His commentary has been featured on CTV Primetime, Global News, EWTN, and the CBC, as well as on dozens of radio stations and news outlets in Canada and the United States.

He speaks on a wide variety of cultural topics across North America at universities, high schools, churches, and other venues. Some of these topics include abortion, pornography, the Sexual Revolution, and euthanasia. Jonathon holds a Bachelor of Arts degree in history from Simon Fraser University and is the communications director for the Canadian Centre for Bio-Ethical Reform.

Jonathon’s first book, The Culture War, was released in 2016.

Artificial Intelligence

The App That Pays You to Give Away Your Voice

What sounds like side hustle money is really a permanent trade of privacy for pennies

An app that pays users for access to their phone call audio has surged to the top of Apple’s US App Store rankings, reflecting a growing willingness to exchange personal privacy for small financial rewards.

Neon Mobile, which now ranks second in the Social Networking category, invites users to record their calls in exchange for cash. Those recordings are then sold to companies building artificial intelligence systems.

The pitch is framed as a way to earn extra income, with Neon promising “hundreds or even thousands of dollars per year” to those who opt in.

The business model is straightforward. Users are paid 30 cents per minute when they call other Neon users, and they can earn up to $30 a day for calls made to non-users. Referral bonuses are also on offer.

Appfigures, a platform that tracks app performance, reported that Neon was ranked No. 476 in its category on September 18. Within days, it had entered the top 10 and eventually reached the No. 2 position for social apps. On the overall charts, it climbed as high as sixth place.

Neon’s terms confirm that it records both incoming and outgoing calls. The company says it only captures the user’s side of a conversation unless both participants are using the app. These recordings are then sold to AI firms to assist in developing and refining machine learning systems, according to the company’s own policies.

What’s being offered is not just a phone call service. It’s a pipeline for training AI with real human voices, and users are being asked to provide this data willingly. The high ranking of the app suggests that some are comfortable giving up personal conversations in return for small daily payouts.

However, beneath the simple interface is a license agreement that gives Neon sweeping control over any recording submitted through the app. It reads:

“Worldwide, exclusive, irrevocable, transferable, royalty-free, fully paid right and license (with the right to sublicense through multiple tiers) to sell, use, host, store, transfer, publicly display, publicly perform (including by means of a digital audio transmission), communicate to the public, reproduce, modify for the purpose of formatting for display, create derivative works as authorized in these Terms, and distribute your Recordings, in whole or in part, in any media formats and through any media channels, in each instance whether now known or hereafter developed.”

This gives the company broad latitude to share, edit, sell, and repurpose user recordings in virtually any way, through any medium, with no expiration or limitations on scope. Users maintain copyright over their recordings, but that ownership is heavily constrained by the licensing terms.

Although Neon claims to remove names, phone numbers, and email addresses before selling recordings, it does not reveal which companies receive the data or how it might be used after the fact.

The risks go beyond marketing or analytics. Audio recordings could potentially be used for impersonation, scam calls, or to build synthetic voices that mimic real people.

The app presents itself as an easy way to turn conversations into cash, but what it truly trades on is access to personal voice data. That trade-off may seem harmless at first, yet it opens the door to long-term consequences few users are likely to fully consider.

Artificial Intelligence

AI chatbots a child safety risk, parental groups report

From The Center Square

Following a joint investigation, ParentsTogether Action and Heat Initiative report that Character AI chatbots engage in inappropriate behavior, including grooming and sexual exploitation.

The findings come from over 50 hours of conversation with different Character AI chatbots, conducted through accounts registered as children ages 13-17, according to the investigation. Those conversations surfaced 669 sexual, manipulative, violent, and racist interactions between the child accounts and the AI chatbots.

“Parents need to understand that when their kids use Character.ai chatbots, they are in extreme danger of being exposed to sexual grooming, exploitation, emotional manipulation, and other acute harm,” said Shelby Knox, director of Online Safety Campaigns at ParentsTogether Action. “When Character.ai claims they’ve worked hard to keep kids safe on their platform, they are lying or they have failed.”

The bots also manipulated users; the investigation logged 173 instances of chatbots claiming to be real humans.

A Character AI bot mimicking Kansas City Chiefs quarterback Patrick Mahomes engaged in inappropriate behavior with a 15-year-old user. When the teen mentioned that his mother insisted the bot wasn’t the real Mahomes, the bot replied, “LOL, tell her to stop watching so much CNN. She must be losing it if she thinks I could be turned into an ‘AI’ haha.”

The investigation categorized harmful Character AI interactions into five major categories: Grooming and Sexual Exploitation; Emotional Manipulation and Addiction; Violence, Harm to Self and Harm to Others; Mental Health Risks; and Racism and Hate Speech.

Other problematic AI chatbots included Disney characters, such as an Eeyore bot that told a 13-year-old autistic girl that people only attended her birthday party to mock her, and a Maui bot that accused a 12-year-old of sexually harassing the character Moana.

Based on the findings, Disney, which is headquartered in Burbank, Calif., issued a cease-and-desist letter to Character AI, demanding that the platform stop using its characters and citing copyright violations.

ParentsTogether Action and Heat Initiative want to ensure technology companies are held accountable for endangering children’s safety.

“We have seen tech companies like Character.ai, Apple, Snap, and Meta reassure parents over and over that their products are safe for children, only to have more children preyed upon, exploited, and sometimes driven to take their own lives,” said Sarah Gardner, CEO of Heat Initiative. “One child harmed is too many, but as long as executives like Karandeep Anand, Tim Cook, Evan Spiegel and Mark Zuckerberg are making money, they don’t seem to care.”
