Artificial Intelligence

Middle schoolers are now using AI to create ‘deepfake’ pornography of their classmates


From LifeSiteNews

By Jonathon Van Maren

It’s happening all over the world: a generation weaned on hardcore pornography is increasingly enabled by AI technology to create imagery of people they know personally.

A recent news story out of Alabama should be getting far more attention than it is, because it is a glimpse into the future. Middle school students are using artificial intelligence (AI) to create pornographic images of their female classmates.

A group of mothers in Demopolis say their daughters’ photos were manipulated with artificial intelligence to create pornographic images. Tiffany Cannon, Elizabeth Smith, Holston Drinkard, and Heidi Nettles said they all learned on Dec. 4 that two of their daughters’ male classmates had created and shared explicit photos of their daughters. Smith said that since last Monday, it has been a rollercoaster of emotions.

“They’re scared, they’re angry, they’re embarrassed. They really feel like why did this happen to them,” said Smith. The mothers said there is an active investigation with Demopolis Police, but they also want the school district to take action. They believe this is an instance of cyberbullying and that there are state laws and policies in place to protect their girls.

“We have laws in place through the Safe Schools law and the Student Bullying Prevention Act, which says that cyberbullying will not be tolerated either on or off campus,” said Smith.

“It takes a lot for these girls to come forward, and they did. They need to be supported for that. Not just from their parents, but from their school and their community,” said Nettles.

The school hasn’t given many details yet, with the Demopolis City Schools Superintendent Tony Willis saying in a statement that there is little they can do: “The school can only address things that happen at school events, school campus on school time. Outside of this, it becomes a parent and police matter. We sympathize with parents and never want wrongful actions to go without consequences – our hearts and prayers go out to all the families hurt by this. That is why we have assisted the police in every step of this process.” 

We’ll be seeing a lot more of this in the years ahead, as a generation weaned on hardcore pornography is increasingly enabled by technology to create imagery of people they know personally. The rise of sexting took pornography and made it personal – educators and law enforcement are still grappling with how to curtail the nearly ubiquitous practice of sending and receiving intimate images, the majority of which are then shared with others. Many of these images, by virtue of the age of the students involved, constitute child pornography. AI-generated pornography will create a whole laundry list of other disturbing issues to deal with. 

A quick scan of recent headlines will give you a sense of where this is headed. From Fortune: “‘Nudify’ apps that use AI to undress women in photos are soaring in popularity, prompting worries about non-consensual porn.” These apps allow users to “digitally undress” girls and women they know, creating nonconsensual pornography. They have already attracted millions of users.

From MIT Technology Review: “A high school’s deepfake porn scandal is pushing US lawmakers into action.” At a New Jersey high school, boys had used AI to “create sexually explicit and even pornographic photos of some of their classmates,” with up to 30 girls being impacted. The sense of violation felt by the victims is profound. 

From CNN: “Outcry in Spain as artificial intelligence used to create fake naked images of underage girls.” From the story: “Police in Spain have launched an investigation after images of young girls, altered with artificial intelligence to remove their clothing, were sent around a town in the south of the country. A group of mothers from Almendralejo, in the Extremadura region, reported that their daughters had received images of themselves in which they appeared to be naked.”  

One girl was blackmailed by a boy with a doctored image of herself. Another cried to her mother: “What have they done to me?” 

From the Washington Post: “AI fake nudes are booming. It’s ruining real teens’ lives.” From the story: “Artificial intelligence is fueling an unprecedented boom this year in fake pornographic images and videos. It’s enabled by a rise in cheap and easy-to-use AI tools that can ‘undress’ people in photographs — analyzing what their naked bodies would look like and imposing it into an image — or seamlessly swap a face into a pornographic video.”

Those are just a few examples of dozens of stories from the past few months. The pornography crisis is being exacerbated further by AI, once again highlighting the unfortunate truth of a joke in tech circles: First we create new technology, then we figure out how to watch porn on it. The porn industry has ruined an untold number of lives. AI porn is taking that to the next level. We should be prepared for it. 

Jonathon Van Maren is a public speaker, writer, and pro-life activist. His commentary has been translated into more than eight languages and published widely online, as well as in print newspapers such as the Jewish Independent, the National Post, the Hamilton Spectator, and others. He has received an award for combating anti-Semitism in print from the Jewish organization B’nai Brith. His commentary has been featured on CTV Primetime, Global News, EWTN, and the CBC, as well as dozens of radio stations and news outlets in Canada and the United States.

He speaks on a wide variety of cultural topics across North America at universities, high schools, churches, and other functions, including abortion, pornography, the Sexual Revolution, and euthanasia. Jonathon holds a Bachelor of Arts degree in history from Simon Fraser University and is the communications director for the Canadian Centre for Bio-Ethical Reform.

Jonathon’s first book, The Culture War, was released in 2016.

Todayville is a digital media and technology company. We profile unique stories and events in our community.

Death of an OpenAI Whistleblower

By John Leake

Suchir Balaji was trying to warn the world of the dangers of OpenAI when he was found dead in his apartment. His story suggests that San Francisco has become an open sewer of corruption.

According to Wikipedia:

Suchir Balaji (1998 – November 26, 2024) was an artificial intelligence researcher and former employee of OpenAI, where he worked from 2020 until 2024. He gained attention for his whistleblowing activities related to artificial intelligence ethics and the inner workings of OpenAI.

Balaji was found dead in his home on November 26, 2024. San Francisco authorities determined the death was a suicide, though Balaji’s parents have disputed the verdict.

Balaji’s mother just gave an extraordinary interview to Tucker Carlson that is well worth watching.

If her narrative is indeed accurate, it indicates that someone has induced key decision makers within the San Francisco Police and Medical Examiner’s Office to turn a blind eye to the obvious indications that Balaji was murdered. Based on the story that his mother told Tucker Carlson, the key corrupt figure in the medical examiner’s office is David Serrano Sewell—Executive Director of the Office of the Chief Medical Examiner.

A quick Google search of Mr. Serrano Sewell turned up a Feb. 8, 2024 report in the San Francisco Standard headlined “San Francisco official likely tossed out human skull, lawsuit says.” According to the report:

The disappearance of a human skull has spurred a lawsuit against the top administrator of San Francisco’s medical examiner’s office from an employee who alleges she faced retaliation for reporting the missing body part.

Sonia Kominek-Adachi alleges in a lawsuit filed Monday that she was terminated from her job as a death investigator after finding that the executive director of the office, David Serrano Sewell, may have “inexplicably” tossed the skull while rushing to clean up the office ahead of an inspection.

Kominek-Adachi made the discovery in January 2023 while doing an inventory of body parts held by the office, her lawsuit says. Her efforts to raise an alarm around the missing skull allegedly led up to her firing last October.

If the allegations of this lawsuit are true, they suggest that Mr. Serrano is an unscrupulous and vindictive man. According to the SF Gov website:

Serrano Sewell joined the OCME with over 16 years of experience developing management structures, building consensus, and achieving policy improvements in the public, nonprofit, and private sectors. He previously served as a Mayor’s aide, Deputy City Attorney, and a policy advocate for public and nonprofit hospitals.

In other words, he is an old denizen of the San Francisco city machine. If a mafia-like organization has penetrated the city administration, it would be well-served by having a key player run the medical examiner’s office.

According to Balaji’s mother, Poornima Ramarao, his death was an obvious murder that was crudely staged to look like a suicide. The responding police officers only spent forty minutes examining the scene, and then left the body in the apartment to be retrieved by medical examiner field agents the next day. If true, this was an act of breathtaking negligence.

I have written a book about two murders that were staged to look like suicides, and to me, Mrs. Ramarao’s story sounds highly credible. Balaji kept a pistol in his apartment for self-defense because he felt that his life was possibly in danger. He was found shot in the head with this pistol, which was purportedly found in his hand. If his death was indeed a murder staged to look like a suicide, it raises the suspicion that the assailant knew that Balaji possessed this pistol and where he kept it in his apartment.

Balaji was found with a gunshot wound to his head—fired from above, the bullet apparently traversing downward through his face and missing his brain. However, he had also sustained what—based on his mother’s testimony—sounds like a blunt force injury on the left side of the head, suggesting a right-handed assailant initially struck him with a blunt instrument that may have knocked him unconscious or stunned him. The gunshot was apparently inflicted after the attack with the blunt instrument.

A fragment of a bloodstained wig found in the apartment suggests the assailant wore a wig to disguise himself in the event he was caught on the surveillance camera at the building’s main entrance. No surveillance camera was positioned over the entrance to Balaji’s apartment.

How did the assailant enter Balaji’s apartment? Did Balaji know the assailant and let him in? Alternatively, did the assailant somehow—perhaps through a contact in the building’s management—obtain a key to the apartment?

All of these questions could probably be easily answered with a proper investigation, but it sounds like the responding officers hastily concluded it was a suicide, and the medical examiner’s office hastily confirmed their initial perception. If good crime scene photographs could be obtained, a decent bloodstain pattern analyst could probably reconstruct what happened to Balaji.

Vernon J. Geberth, a retired Lieutenant-Commander of the New York City Police Department, has written extensively about how homicides are often erroneously perceived to be suicides by responding officers. The initial perception of suicide at a death scene often results in a lack of proper analysis. His essay “The Seven Major Mistakes in Suicide Investigation” should be required reading for every police officer whose job includes examining the scenes of unattended deaths.

However, judging by his mother’s testimony, Suchir Balaji’s death was obviously a murder staged to look like a suicide. Someone in a position of power decided it was best to perform only the most cursory investigation and to rule the manner of death suicide based on the mere fact that the pistol was purportedly found in the victim’s hand.

Readers who are interested in learning more about this kind of crime may wish to watch my documentary film, in which I examine two murders that were staged to look like suicides. Incidentally, the film is now showing at the Hollywood North International Film Festival. Please click on the image below to watch the film.

If you don’t have a full forty minutes to spare to watch the entire picture, please consider devoting just one second of your time to click on the vote button. Many thanks!

Wonder Valley – Alberta’s $70 Billion AI Data Center

From the YouTube page of Kevin O’Leary

Interview with Kyle Reiling, Executive Director of the Greenview Industrial Gateway. 

“This is the only place on earth that can do something this scale”

When Kevin O’Leary heard Alberta Premier Danielle Smith reveal just how much energy Alberta has, he knew Alberta had the solution for the coming explosion in energy consumption.

Kevin O’Leary: The demand for AI is skyrocketing—and America is out of power. Enter Alberta, with abundant natural gas and a bold premier. I’m raising $70 billion to create the world’s lowest-cost, highest-efficiency data center. Hyperscalers like Tesla, Microsoft, and Google need it, and we’re making it happen. This is how you lead the AI revolution.
