
Artificial Intelligence

Jobs vs. Machines: The Rise of Artificial Intelligence



From StosselTV

The media tell us Artificial Intelligence will replace millions of jobs. They’re right, but that doesn’t mean we should fear it.

The Teamsters are protesting self-driving cars, asking government for more regulation, hoping to stop AI vehicles from taking delivery, taxi-driving, and truck-driving jobs. That’s a fight they can’t win.

Loom weavers, typists, telephone operators, bank tellers, and many other jobs were destroyed because of new technology. It won’t stop happening, and AI will make it happen faster. But as people lose jobs, remember that so far, this creative destruction has led to people finding new, better jobs.

Unemployment has been dropping, and wages keep going up! If history is any indication, AI will be a good thing.


After 40+ years of reporting, I now understand the importance of limited government and personal freedom.


Libertarian journalist John Stossel created Stossel TV to explain liberty and free markets to young people.

Prior to Stossel TV he hosted a show on Fox Business and co-anchored ABC’s primetime newsmagazine show, 20/20. Stossel’s economic programs have been adapted into teaching kits by a non-profit organization, “Stossel in the Classroom.” High school teachers in American public schools now use the videos to help educate their students on economics and economic freedom. They are seen by more than 12 million students every year.

Stossel has received 19 Emmy Awards and has been honored five times for excellence in consumer reporting by the National Press Club. Other honors include the George Polk Award for Outstanding Local Reporting and the George Foster Peabody Award.


To get our new weekly video from Stossel TV, sign up here: ————



Artificial Intelligence

Save Taylor Swift. Stop deep-fake porn: Peter Menzies


Photo by Michael Hicks, via Flickr

From the MacDonald Laurier Institute

By Peter Menzies

Tweak an existing law to ensure AI-generated porn that uses the images of real people is made illegal.

Hey there, Swifties.

Stop worrying about whether your girl can make it back from a tour performance in Tokyo in time to cheer on her boyfriend in Super Bowl LVIII.

Please shift your infatuation away from your treasured superstar’s romantic attachment to Kansas City Chiefs’ dreamy Travis Kelce and his pending battle with the San Francisco 49ers. We all know Taylor Swift’ll be in Vegas for kickoff on Feb. 11. She’ll get there. Billionaires always find a way. And, hey, what modern woman wouldn’t take a 27-hour round-trip flight to hang out with a guy ranked #1 on People’s sexiest men in sports list?

But right now, Swifties, Canada needs you to concentrate on something more important than celebrity canoodling. Your attention needs to be on what the nation’s self-styled feminist government should be doing to protect Swift (and all women) from being “deep-faked” into online porn stars.

Because that’s exactly what happened to the multiple Grammy Award-winner last week when someone used artificial intelligence to post deep-fakes (manipulated images of bodies and faces) of her that spread like a coronavirus across the internet. Swift’s face was digitally grafted onto the body of someone engaged in sexual acts or poses in a way that was convincing enough to fool some into believing that it was Swift herself. Before they were contained, the deep-fakes were viewed by millions. The BBC reported that a single “photo” had accumulated 47 million views.

For context, a 2019 study by Deeptrace Labs identified almost 15,000 deep-fakes on streaming and porn sites — twice as many as the previous year — and concluded that 96 per cent were recreations of celebrity women. Fair to assume the fakes have continued to multiply like bunnies in springtime.

In response to the Swift images, the platform formerly known as Twitter — X — temporarily blocked searches for “Taylor Swift” as it battled to eliminate the offending depictions which still found ways to show up elsewhere.

X said it was “actively removing” the deep-fakes while taking “appropriate actions” against those spreading them.

Meta said it has “strict policies that prohibit this kind of behavior” adding that it also takes “several steps to combat the spread of AI deepfakes.”

Google Deepmind launched an initiative last summer to improve detection of AI-generated images but critics say it, too, struggles to keep up.

While the creation of images to humiliate women goes back to the puerile pre-internet writing of “for a good time call” phone numbers on the walls of men’s washrooms, the use of technology to abuse women shows how difficult it is for governments to keep pace with change. The Americans are now pondering bipartisan legislation to stop this, the Brits are boasting that such outrageousness is already covered by their Online Safety Act and Canada so far … appears to be doing nothing.

Maybe that’s because it thinks that Section 162.1 of the Criminal Code, which bans the distribution or transmission of intimate images without permission of the person or people involved, has it covered.

To wit, “Everyone who knowingly publishes, distributes, transmits, sells, makes available or advertises an intimate image of a person knowing that the person depicted in the image did not give their consent to that conduct, or being reckless as to whether or not that person gave their consent to that conduct, is guilty of an indictable offence and liable to imprisonment for a term of not more than five years.”

Maybe Crown prosecutors are confident they can talk judges into interpreting that legislation in a fashion that brings deep-fakes into scope. It’s not like eminent justices haven’t previously pondered legislation — or the Charter, for that matter — and then “read in” words that they think should be there.

Police in Winnipeg launched an investigation in December after AI-generated fake photos were spread. And a Quebec man was recently convicted after using AI to create child porn — a first.

But anytime technology outpaces the law, there’s a risk that the former turns the latter into an ass.

Which means there’s a real easy win here for the Justin Trudeau government which, when it comes to issues involving the internet, has so far behaved like a band of bumbling hillbillies.

The Online Streaming Act, in two versions, was far more contentious than necessary because those crafting it clearly had difficulty grasping the simple fact that the internet is neither broadcasting nor a cable network. And the Online News Act, which betrayed a complete misunderstanding of how the internet, global web giants and digital advertising work, remains in the running for Worst Legislation Ever, having cost the industry it was supposed to assist at least $100 million and helped it double down on its reputation for grubbiness.

First promised in 2019 and now anticipated in the spring, the Online Harms Act has been rattling around Department of Heritage consultations ever since. Successive heritage ministers have failed to craft anything that’ll pass muster with the Charter of Rights and Freedoms, so the whole bundle is now with Justice Minister Arif Virani, who replaced David Lametti last summer.

The last thing Canada needs right now is for the PMO to jump on the rescue-Taylor-Swift bandwagon and use deep-fakes as one more excuse to create, as it originally envisioned, a Digital Safety czar with invasive ready-fire-aim powers to order takedowns of anything they find harmful or hurtful. Given its recent legal defeats linked to what appears to be a chronic inability to understand the Constitution, that could only end in yet another humiliation.

So, here’s the easy win. Amend Section 162.1 of the Criminal Code so that the use of deep-fakes to turn women into online porn stars against their will is clearly in scope. It’ll take just a few words. It’ll involve updating existing legislation that isn’t the slightest bit contentious. Every party will support it. It’ll make you look good. Swifties will love you.

And, best of all, it’ll actually be the right thing to do.

Peter Menzies is a senior fellow with the Macdonald-Laurier Institute, past vice-chair of the CRTC and a former newspaper publisher.


Artificial Intelligence

Middle schoolers are now using AI to create ‘deepfake’ pornography of their classmates


From LifeSiteNews

By Jonathon Van Maren

It’s happening all over the world: a generation weaned on hardcore pornography is increasingly enabled by AI technology to create imagery of people they know personally.

A recent news story out of Alabama should be getting far more attention than it is, because it is a glimpse into the future. Middle school students are using artificial intelligence (AI) to create pornographic images of their female classmates.

A group of mothers in Demopolis say their daughters’ pictures were used with artificial intelligence to create pornographic images. Tiffany Cannon, Elizabeth Smith, Holston Drinkard, and Heidi Nettles said they all learned on Dec. 4 that two of their daughters’ male classmates created and shared explicit photos of the girls. Smith said that since last Monday, it has been a rollercoaster of emotions.

“They’re scared, they’re angry, they’re embarrassed. They really feel like why did this happen to them,” said Smith. The group of mothers said there is an active investigation with Demopolis Police. However, they wish for the school district to take action. They believe this is an instance of cyberbullying and there are state laws and policies to protect their girls.

“We have laws in place through the Safe School’s law and the Student Bullying Prevention Act, which says that cyberbullying will not be tolerated either on or off campus,” said Smith. “It takes a lot for these girls to come forward, and they did. They need to be supported for that. Not just from their parents, but from their school and their community,” said Nettles.

The school hasn’t given many details yet, with Demopolis City Schools Superintendent Tony Willis saying in a statement that there is little they can do: “The school can only address things that happen at school events, school campus on school time. Outside of this, it becomes a parent and police matter. We sympathize with parents and never want wrongful actions to go without consequences – our hearts and prayers go out to all the families hurt by this. That is why we have assisted the police in every step of this process.”

We’ll be seeing a lot more of this in the years ahead, as a generation weaned on hardcore pornography is increasingly enabled by technology to create imagery of people they know personally. The rise of sexting took pornography and made it personal – educators and law enforcement are still grappling with how to curtail the nearly ubiquitous practice of sending and receiving intimate images, the majority of which are then shared with others. Many of these images, by virtue of the age of the students involved, constitute child pornography. AI-generated pornography will create a whole laundry list of other disturbing issues to deal with. 

A quick scan of recent headlines will give you a sense of where this is headed. From Fortune: “‘Nudify’ apps that use AI to undress women in photos are soaring in popularity, prompting worries about non-consensual porn.” These apps allow people to “digitally undress” people they know and thus create nonconsensual pornography of girls and women. These apps have already acquired millions of users. 

From MIT Technology Review: “A high school’s deepfake porn scandal is pushing US lawmakers into action.” At a New Jersey high school, boys had used AI to “create sexually explicit and even pornographic photos of some of their classmates,” with up to 30 girls being impacted. The sense of violation felt by the victims is profound. 

From CNN: “Outcry in Spain as artificial intelligence used to create fake naked images of underage girls.” From the story: “Police in Spain have launched an investigation after images of young girls, altered with artificial intelligence to remove their clothing, were sent around a town in the south of the country. A group of mothers from Almendralejo, in the Extremadura region, reported that their daughters had received images of themselves in which they appeared to be naked.”  

One girl was blackmailed by a boy with a doctored image of herself. Another cried to her mother: “What have they done to me?” 

From the Washington Post: “AI fake nudes are booming. It’s ruining real teens’ lives.” From the story: “Artificial intelligence is fueling an unprecedented boom this year in fake pornographic images and videos. It’s enabled by a rise in cheap and easy-to-use AI tools that can “undress” people in photographs — analyzing what their naked bodies would look like and imposing it into an image — or seamlessly swap a face into a pornographic video.” 

Those are just a few examples of dozens of stories from the past few months. The pornography crisis is being exacerbated further by AI, once again highlighting the unfortunate truth of a joke in tech circles: First we create new technology, then we figure out how to watch porn on it. The porn industry has ruined an untold number of lives. AI porn is taking that to the next level. We should be prepared for it. 


Jonathon Van Maren is a public speaker, writer, and pro-life activist. His commentary has been translated into more than eight languages and published widely online as well as in print newspapers such as the Jewish Independent, the National Post, the Hamilton Spectator and others. He has received an award for combating anti-Semitism in print from the Jewish organization B’nai Brith. His commentary has been featured on CTV Primetime, Global News, EWTN, and the CBC as well as dozens of radio stations and news outlets in Canada and the United States.

He speaks on a wide variety of cultural topics across North America at universities, high schools, churches, and other functions. Some of these topics include abortion, pornography, the Sexual Revolution, and euthanasia. Jonathon holds a Bachelor of Arts Degree in history from Simon Fraser University, and is the communications director for the Canadian Centre for Bio-Ethical Reform.

Jonathon’s first book, The Culture War, was released in 2016.
