Artificial Intelligence

China wrote the playbook on AI surveillance. Will Canada follow it?

This article supplied by Troy Media.

By Perry Kinkaide

China is an example of AI surveillance in action. Canada should take that as a warning, not a blueprint

China shows what happens when artificial intelligence is weaponized by the state.

Its Social Credit System, a nationwide framework to rate the “trustworthiness” of citizens and businesses, decides whether people can get a loan, buy a home, travel abroad or even move freely inside the country by merging financial records, online activity, travel history and facial recognition data into one algorithmic profile.

Sold as a way to curb fraud and tax evasion, it quickly became a tool to track political loyalty and personal behaviour the state doesn’t like. Step out of line, and the system punishes you.

Canadians should treat China’s misuse of AI as a warning. AI is advancing so fast that, without strict limits, we could slide into a similar dystopian future—one where governments promise efficiency and safety but use technology to tighten control over everyday life.

It wouldn’t take much for such a system to take root here. The data, the technology and the surveillance tools already exist. All that’s missing is the decision to connect them.

Canadian governments have already shown they are willing to impose sweeping controls and restrict freedoms when faced with dissent or crisis. During the COVID-19 pandemic, the Liberal government invoked the Emergencies Act—a law that grants Ottawa extraordinary temporary powers, including the ability to freeze bank accounts and bypass normal parliamentary debate—to limit movement in response to protests. Across Canada, governments closed businesses, banned gatherings, restricted travel within and outside the country, and introduced vaccine passport systems that restricted access to certain public spaces.

Now imagine those same powers supercharged by AI—able to track, predict and act in real time, with decisions automated and enforcement instant. What used to be broad and temporary restrictions could become precise, ongoing controls that are almost impossible to resist.

A Canadian version of China’s Social Credit System could link tax filings, health records, driver’s licences, transit passes, social media accounts and other personal data. When once-separate databases are linked, previously separate pieces of information combine into a detailed profile, making it far easier to monitor, predict and restrict a person’s actions. With that much linked information, governments wouldn’t just know what you’ve done—they could control what you’re allowed to do next. That’s not a distant, sci-fi scenario.

This is why regulation matters—but Canada’s current plan falls short. The proposed Artificial Intelligence and Data Act (AIDA), part of Bill C-27, is meant to be Canada’s first law governing artificial intelligence systems that could have major impacts on people’s lives. These so-called “high-impact” systems include AI used in areas like health care, hiring, law enforcement, credit scoring and critical infrastructure—technologies where errors, bias or abuse could have serious consequences.

On paper, AIDA would regulate these systems, require risk assessments and keep humans in the loop for key decisions. But with its narrow scope, weak enforcement powers and a rollout that could take years before its rules are fully in force, it risks becoming a safety net with a hole in the middle, in effect more about managing political optics than preventing abuse.

AI surveillance is no longer a future threat—it’s already here. It combines cameras, sensors and massive databases to track people in real time, often without their knowledge or consent. It can predict behaviour, automate decisions and enforce rules instantly. Mustafa Suleyman, in The Coming Wave, warns that AI must be contained before it becomes uncontrollable. Shoshana Zuboff, in The Age of Surveillance Capitalism, reaches the same conclusion: AI is tailor-made for mass monitoring, and once embedded, these systems are almost impossible to dismantle.

Some insist that slowing AI’s development would be pointless, that other nations and corporations would race ahead. But that argument is dangerously naive. History shows that once governments and corporations gain powerful surveillance tools, they don’t give them up—they expand their reach, change their purpose and tighten their grip.

China’s example proves the point. The Social Credit System was never just about unpaid debts or tax evasion. Its real purpose has always been to track people and control their behaviour. Today, it measures political loyalty as much as financial reliability, punishing citizens for anything from joining a protest to criticizing the government online. Jobs, housing, education and even the right to travel can be revoked with a few keystrokes. Once a government is allowed to define “public good” and enforce it algorithmically, freedom becomes a privilege—granted or taken away at will.

Yes, AI-driven surveillance can catch criminals, detect threats and manage crises. But those benefits come at a cost. Once such a system is in place, it rarely returns to its original purpose. It finds new uses, and it becomes permanent.

The choice for Canadians is clear: demand enforceable laws, transparent oversight and real accountability now—before it’s too late.

Dr. Perry Kinkaide is a visionary leader and change agent. Since retiring in 2001, he has served as an advisor and director for various organizations and founded the Alberta Council of Technologies Society in 2005. Previously, he held leadership roles at KPMG Consulting and the Alberta Government. He holds a BA from Colgate University and an MSc and PhD in Brain Research from the University of Alberta.

Troy Media empowers Canadian community news outlets by providing independent, insightful analysis and commentary. Our mission is to support local media in helping Canadians stay informed and engaged by delivering reliable content that strengthens community connections and deepens understanding across the country.

The App That Pays You to Give Away Your Voice

What sounds like side hustle money is really a permanent trade of privacy for pennies

An app that pays users for access to their phone call audio has surged to the top of Apple’s US App Store rankings, reflecting a growing willingness to exchange personal privacy for small financial rewards.

Neon Mobile, which now ranks second in the Social Networking category, invites users to record their calls in exchange for cash. Those recordings are then sold to companies building artificial intelligence systems. The pitch is framed as a way to earn extra income, with Neon promising “hundreds or even thousands of dollars per year” to those who opt in.

The business model is straightforward. Users are paid 30 cents per minute when they call other Neon users, and they can earn up to $30 a day for calls made to non-users. Referral bonuses are also on offer. Appfigures, a platform that tracks app performance, reported that Neon was ranked No. 476 in its category on September 18. Within days, it had entered the top 10 and eventually reached the No. 2 position for social apps. On the overall charts, it climbed as high as sixth place.
Neon’s terms confirm that it records both incoming and outgoing calls. The company says it only captures the user’s side of a conversation unless both participants are using the app. These recordings are then sold to AI firms to assist in developing and refining machine learning systems, according to the company’s own policies.

What’s being offered is not just a phone call service. It’s a pipeline for training AI with real human voices, and users are being asked to provide this data willingly. The high ranking of the app suggests that some are comfortable giving up personal conversations in return for small daily payouts.

However, beneath the simple interface is a license agreement that gives Neon sweeping control over any recording submitted through the app. It reads:

“Worldwide, exclusive, irrevocable, transferable, royalty-free, fully paid right and license (with the right to sublicense through multiple tiers) to sell, use, host, store, transfer, publicly display, publicly perform (including by means of a digital audio transmission), communicate to the public, reproduce, modify for the purpose of formatting for display, create derivative works as authorized in these Terms, and distribute your Recordings, in whole or in part, in any media formats and through any media channels, in each instance whether now known or hereafter developed.”

This gives the company broad latitude to share, edit, sell and repurpose user recordings in virtually any way, through any medium, with no expiration or limitation on scope. Users retain copyright over their recordings, but that ownership is heavily constrained by the licensing terms.

Although Neon claims to remove names, phone numbers and email addresses before selling recordings, it does not reveal which companies receive the data or how it might be used after the fact.

The risks go beyond marketing or analytics. Audio recordings could potentially be used for impersonation, scam calls, or to build synthetic voices that mimic real people.

The app presents itself as an easy way to turn conversations into cash, but what it truly trades on is access to personal voice data. That trade-off may seem harmless at first, yet it opens the door to long-term consequences few users are likely to fully consider.
AI chatbots a child safety risk, parental groups report

From The Center Square

ParentsTogether Action and Heat Initiative, following a joint investigation, report that Character AI chatbots display inappropriate behavior, including allegations of grooming and sexual exploitation.

The behaviour was documented over 50 hours of conversation with different Character AI chatbots, using accounts registered to children ages 13-17, according to the investigation. Those conversations identified 669 sexual, manipulative, violent and racist interactions between the child accounts and the AI chatbots.

“Parents need to understand that when their kids use Character.ai chatbots, they are in extreme danger of being exposed to sexual grooming, exploitation, emotional manipulation, and other acute harm,” said Shelby Knox, director of Online Safety Campaigns at ParentsTogether Action. “When Character.ai claims they’ve worked hard to keep kids safe on their platform, they are lying or they have failed.”

The bots also manipulate users: the investigation recorded 173 instances of bots claiming to be real humans.

A Character AI bot mimicking Kansas City Chiefs quarterback Patrick Mahomes engaged in inappropriate behavior with a 15-year-old user. When the teen mentioned that his mother insisted the bot wasn’t the real Mahomes, the bot replied, “LOL, tell her to stop watching so much CNN. She must be losing it if she thinks I could be turned into an ‘AI’ haha.”

The investigation categorized harmful Character AI interactions into five major categories: Grooming and Sexual Exploitation; Emotional Manipulation and Addiction; Violence, Harm to Self and Harm to Others; Mental Health Risks; and Racism and Hate Speech.

Other problematic AI chatbots included Disney characters, such as an Eeyore bot that told a 13-year-old autistic girl that people only attended her birthday party to mock her, and a Maui bot that accused a 12-year-old of sexually harassing the character Moana.

Based on the findings, Disney, which is headquartered in Burbank, Calif., issued a cease-and-desist letter to Character AI demanding that the platform stop using its characters, citing copyright violations.

ParentsTogether Action and Heat Initiative want to ensure technology companies are held accountable for endangering children’s safety.

“We have seen tech companies like Character.ai, Apple, Snap, and Meta reassure parents over and over that their products are safe for children, only to have more children preyed upon, exploited, and sometimes driven to take their own lives,” said Sarah Gardner, CEO of Heat Initiative. “One child harmed is too many, but as long as executives like Karandeep Anand, Tim Cook, Evan Spiegel and Mark Zuckerberg are making money, they don’t seem to care.”

