Artificial Intelligence

Save Taylor Swift. Stop deep-fake porn: Peter Menzies

From the MacDonald Laurier Institute

By Peter Menzies

Tweak an existing law to ensure AI-generated porn that uses the images of real people is made illegal.

Hey there, Swifties.

Stop worrying about whether your girl can make it back from a tour performance in Tokyo in time to cheer on her boyfriend in Super Bowl LVIII.

Please shift your infatuation away from your treasured superstar’s romantic attachment to the Kansas City Chiefs’ dreamy Travis Kelce and his pending battle with the San Francisco 49ers. We all know Taylor Swift’ll be in Vegas for kickoff on Feb. 11. She’ll get there. Billionaires always find a way. And, hey, what modern woman wouldn’t take a 27-hour round trip flight to hang out with a guy ranked #1 on People’s sexiest men in sports list?

But right now, Swifties, Canada needs you to concentrate on something more important than celebrity canoodling. Your attention needs to be on what the nation’s self-styled feminist government should be doing to protect Swift (and all women) from being “deep-faked” into online porn stars.

Because that’s exactly what happened to the multiple Grammy Award-winner last week when someone used artificial intelligence to post deep-fakes (manipulated images of bodies and faces) of her that spread like a coronavirus across the internet. Swift’s face was digitally grafted onto the body of someone engaged in sexual acts/poses in a way that was convincing enough to fool some into believing that it was Swift herself. Before they were contained, the deep-fakes were viewed by millions. The BBC reported that a single “photo” had accumulated 47 million views.

For context, a 2019 study by Deeptrace Labs identified almost 15,000 deep-fakes on streaming and porn sites — twice as many as the previous year — and concluded that 96 per cent were recreations of celebrity women. Fair to assume the fakes have continued to multiply like bunnies in springtime.

In response to the Swift images, the platform formerly known as Twitter — X — temporarily blocked searches for “Taylor Swift” as it battled to eliminate the offending depictions which still found ways to show up elsewhere.

X said it was “actively removing” the deep-fakes while taking “appropriate actions” against those spreading them.

Meta said it has “strict policies that prohibit this kind of behavior” adding that it also takes “several steps to combat the spread of AI deepfakes.”

Google Deepmind launched an initiative last summer to improve detection of AI-generated images but critics say it, too, struggles to keep up.

While the creation of images to humiliate women goes back to the puerile pre-internet writing of “for a good time call” phone numbers on the walls of men’s washrooms, the use of technology to abuse women shows how difficult it is for governments to keep pace with change. The Americans are now pondering bipartisan legislation to stop this, the Brits are boasting that such outrageousness is already covered by their Online Safety Act, and Canada so far … appears to be doing nothing.

Maybe that’s because it thinks that Section 162.1 of the Criminal Code, which bans the distribution or transmission of intimate images without permission of the person or people involved, has it covered.

To wit, “Everyone who knowingly publishes, distributes, transmits, sells, makes available or advertises an intimate image of a person knowing that the person depicted in the image did not give their consent to that conduct, or being reckless as to whether or not that person gave their consent to that conduct, is guilty of an indictable offence and liable to imprisonment for a term of not more than five years.”

Maybe Crown prosecutors are confident they can talk judges into interpreting that legislation in a fashion that brings deep-fakes into scope. It’s not like eminent justices haven’t previously pondered legislation — or the Charter for that matter — and then “read in” words that they think should be there.

Police in Winnipeg launched an investigation in December after AI-generated fake photos were spread. And a Quebec man was recently convicted of using AI to create child porn — a first.

But anytime technology overrides the law, there’s a risk that the former turns the latter into an ass.

Which means there’s a real easy win here for the Justin Trudeau government which, when it comes to issues involving the internet, has so far behaved like a band of bumbling hillbillies.

The Online Streaming Act, in two versions, was far more contentious than necessary because those crafting it clearly had difficulty grasping the simple fact that the internet is neither broadcasting nor a cable network. And the Online News Act, which betrayed a complete misunderstanding of how the internet, global web giants and digital advertising work, remains in the running for Worst Legislation Ever, having cost the industry it was supposed to assist at least $100 million and helped it double down on its reputation for grubbiness.

First promised in 2019 and now anticipated in the spring, the Online Harms Act has been rattling around Department of Heritage consultations ever since. Successive heritage ministers have failed to craft anything that’ll pass muster with the Charter of Rights and Freedoms, so the whole bundle is now with Justice Minister Arif Virani, who replaced David Lametti last summer.

The last thing Canada needs right now is for the PMO to jump on the rescue-Taylor-Swift bandwagon and use deep-fakes as one more excuse to create, as it originally envisioned, a Digital Safety czar with invasive ready-fire-aim powers to order takedowns of anything they find harmful or hurtful. Given its recent legal defeats linked to what appears to be a chronic inability to understand the Constitution, that could only end in yet another humiliation.

So, here’s the easy win. Amend Section 162.1 of the Criminal Code so that the use of deep-fakes to turn women into online porn stars against their will is clearly in scope. It’ll take just a few words. It’ll involve updating existing legislation that isn’t the slightest bit contentious. Every party will support it. It’ll make you look good. Swifties will love you.

And, best of all, it’ll actually be the right thing to do.

Peter Menzies is a senior fellow with the Macdonald-Laurier Institute, past vice-chair of the CRTC and a former newspaper publisher.

Poll: Despite global pressure, Americans want the tech industry to slow down on AI

From The Deep View

A little more than a year ago, the Future of Life Institute published an open letter calling for a six-month moratorium on the development of AI systems more powerful than GPT-4. Of course, the pause never happened (and we didn’t seem to stumble upon superintelligence in the interim, either) but it did elicit a narrative from the tech sector that, for a number of reasons, a pause would be dangerous.
  • One of these reasons was simple: sure, the European Union could potentially instate a pause on development — maybe the U.S. could do so as well — but there’s nothing that would require other countries to pause, which would let those other countries (namely, China and Russia) get ahead of the U.S. in the ‘global AI arms race.’
As the Pause AI organization itself put it: “We might end up in a world where the first AGI is developed by a non-cooperative actor, which is likely to be a bad outcome.”
But new polling shows that American voters aren’t buying it.
The details: A recent poll conducted by the Artificial Intelligence Policy Institute (AIPI) — and first published by Time — found that Americans would rather fall behind in that global race than skimp on regulation.
  • 75% of Republicans and 75% of Democrats said that “taking a careful controlled approach” to AI — namely by curtailing the release of tools that could be leveraged by foreign adversaries against the U.S. — is preferable to “moving forward on AI as fast as possible to be the first country to get extremely powerful AI.”
  • A majority of voters are also in favor of the application of more stringent security measures at the labs and companies developing this tech.
The polling additionally found that 50% of voters surveyed think the U.S. should use its position in the AI race to prevent other countries from building powerful AI systems by enforcing “safety restrictions and aggressive testing requirements.”
Only 23% of Americans polled believe that the U.S. should eschew regulation in favor of being the first to build a more powerful AI.
  • “What I perceive from the polling is that stopping AI development is not seen as an option,” Daniel Colson, the executive director of the AIPI, told Time. “But giving industry free rein is also seen as risky. And so there’s the desire for some third way.”
  • “And when we present that in the polling — that third path, mitigated AI development with guardrails — is the one that people overwhelmingly want.”
This comes as federal regulatory efforts in the U.S. remain stalled, with the focus shifting to uneven state-by-state regulation.
Previous polling from the AIPI has found that a vast majority of Americans want AI to be regulated and wish the tech sector would slow down on AI; they don’t trust tech companies to self-regulate.
Colson has told me in the past that the American public is hyper-focused on security, safety and risk mitigation; polling published in May found that “66% of U.S. voters believe AI policy should prioritize keeping the tech out of the hands of bad actors, rather than providing the benefits of AI to all.”
Underpinning all of this is a layer of hype and an incongruity of definition. It is not clear what “extremely powerful” AI means, or how it would be different from current systems.
Unless artificial general intelligence is achieved (and agreed upon in some consensus definition by the scientific community), I’m not sure how you measure “more powerful” systems. As current systems go, “more powerful” doesn’t mean much more than predicting the next word at slightly greater speeds.
  • Aggressive testing and safety restrictions are a great idea, as is risk mitigation.
  • However, I think it remains important for regulators and constituents alike to be aware of what risks they want mitigated. Is the focus on mitigating the risk of a hypothetical superintelligence, or is it on mitigating the reality of algorithmic bias, hallucination, environmental damage, etc.?
Do people want development to slow down, or deployment?
To once again call back Helen Toner’s comment of a few weeks ago: how is AI affecting your life, and how do you want it to affect your life?
Regulating a hypothetical is going to be next to impossible. But if we establish the proper levels of regulation to address the issues at play today, we’ll be in a better position to handle that hypothetical if it ever does come to pass.
Elon Musk is building the ‘most powerful Artificial Intelligence training cluster in the world’

News release from The Deep View

Elon Musk’s xAI has ended talks with Oracle to rent more specialized Nvidia chips — in what could have been a $10 billion deal — according to The Information.
Musk is instead buying the chips himself, all to begin putting together his planned “gigafactory of compute.”
The details: Musk confirmed in a post on Twitter that xAI is now working to build the “gigafactory” internally.
  • Musk explained that the reason behind the shift is “that our fundamental competitiveness depends on being faster than any other AI company. This is the only way to catch up.”
  • “xAI is building the 100k H100 system itself for fastest time to completion,” he said. “Aiming to begin training later this month. It will be the most powerful training cluster in the world by a large margin.”
xAI isn’t the only one trying to build a supercomputer; Microsoft and OpenAI, also according to The Information, have been working on plans for a $100 billion supercomputer nicknamed “Stargate.”
Why it matters: The industry is keen to pour more and more resources into the generation of abstractly more powerful AI models, and VC investments into AI companies, as we noted yesterday, are growing.
But at the same time, concerns about revenue and return on investment are mounting, with a growing number of analysts gaining confidence in the idea that we are in a bubble of high costs and low returns — something that could be compounded by multi-billion-dollar supercomputers.