Artificial Intelligence
Artificial intelligence is faking it

This article supplied by Troy Media.
AI chatbots can sound clever, but they don’t understand a word they’re saying
Every time I ask an AI tool a question, I’m struck by how fluent—and how hollow—the answer feels. Noam Chomsky, the MIT linguist and public intellectual, saw this problem long before the rise of ChatGPT: machines can imitate language, but they can’t create meaning.
Chomsky didn’t just dabble in linguistics; he detonated it. His 1957 book Syntactic Structures, a foundational text in modern linguistics, showed that language isn’t random behaviour but a rule-based system capable of infinite creativity. That insight kick-started the cognitive revolution and laid the intellectual tracks for the AI train that’s now barreling through our lives. But Chomsky never confused mimicry with meaning. Syntax can be generated. Semantics—what words actually mean—is a human thing.
Most Canadians know Chomsky less as a linguist and more as the political gadfly who’s spent decades skewering U.S. foreign policy and media spin. But before he became a household name for his activism, he was reshaping how we think about language itself. That double role, as scientist and provocateur, makes his critique of artificial intelligence both sharper and harder to dismiss.
That’s what I remind myself as I thumb through the six AI apps (Perplexity.ai, DeepSeek, Google’s Gemini (formerly Bard), Claude, Copilot and, of course, ChatGPT) on my phone. They talk back. They help. They screw up. They’re brilliant and idiotic, sometimes in the same breath.
In other words, they’re perfectly imperfect. But unlike people, they fake semantics. They sound meaningful without ever producing meaning.
“Semantics fakers.” Not a Chomsky term, but I’d like to think he’d smirk at it.
Here’s the irony: early AI borrowed heavily from Chomsky’s ideas. His notion that a finite set of rules could generate endless sentences inspired decades of symbolic computing and natural language processing. You’d think, then, he’d be a fan of today’s large language models—the statistical engines behind tools like ChatGPT, Gemini and Claude. Not even close.
Chomsky dismisses them as “statistical messes.” They don’t know language. They don’t know meaning. They can’t tell the difference between possible and impossible sentences. They generate the grammatical alongside the gibberish.
His famous example makes the point: “Colorless green ideas sleep furiously.” A sentence can be syntactically perfect and still utterly meaningless.
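To make that concrete, here is a minimal sketch of a generative grammar in Python. The rules and vocabulary are toy inventions of mine, not Chomsky’s own formalism, but they show the core point he made: a handful of finite rules can churn out endless sentences, including perfectly grammatical nonsense.

```python
import random

# A toy context-free grammar in the spirit of Chomsky's generative rules.
# The rules and vocabulary are illustrative inventions; the grammar just
# happens to be able to produce his famous example sentence.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Adj", "Adj", "N"]],
    "VP":  [["V", "Adv"]],
    "Adj": [["colorless"], ["green"], ["small"], ["furious"]],
    "N":   [["ideas"], ["dogs"], ["storms"]],
    "V":   [["sleep"], ["bark"], ["rage"]],
    "Adv": [["furiously"], ["quietly"]],
}

def generate(symbol="S"):
    """Expand a symbol by recursively applying one of its rules."""
    if symbol not in GRAMMAR:          # terminal: it's a word, emit it
        return [symbol]
    rule = random.choice(GRAMMAR[symbol])
    return [word for part in rule for word in generate(part)]

for _ in range(3):
    print(" ".join(generate()))
# One possible output: "colorless green ideas sleep furiously" --
# flawless by these rules, and utterly meaningless.
```

The grammar never checks whether its output makes sense, because nothing in the rules encodes sense. That is the gap between syntax and semantics in miniature.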
That critique lands because we’ve all seen it. These tools can be dazzling one moment and deeply wrong the next. They can pump out grammatical sentences that collapse under the weight of their own emptiness. They’re the digital equivalent of a smooth-talking party guest who never actually answers your question.
The hype isn’t new. AI has been overpromising and underdelivering since the 1960s. Remember the expert systems of the 1980s, which were supposed to replace doctors and lawyers? Or IBM’s Deep Blue in the 1990s, which beat chess champion Garry Kasparov but didn’t get us any closer to actual “thinking” machines? Today’s tools are faster, slicker and more accessible, but they’re still built on the same illusion: that imitation is intelligence.
And while Chomsky has been warning about the limits of language models, others closer to the cutting edge of AI have begun sounding the alarm too.
Canada isn’t a bystander in this story. Geoffrey Hinton, the Toronto-based researcher often called the “godfather of AI,” helped pioneer the deep learning breakthroughs that power today’s chatbots. Yet even he now warns of their dangers: the spread of misinformation through convincing fakes, the loss of jobs on a massive scale, and the risk that advanced systems could slip beyond human control. Pair Hinton’s alarm with Chomsky’s critique, and it’s a sobering reminder that some of the brightest minds behind these tools are telling us not to get carried away.
Chomsky’s point is simple, even if the tech world doesn’t like hearing it: powerful mimicry is not intelligence. These systems show what machines can do with mountains of data and silicon horsepower. But they tell us nothing about what it means to think, to reason, or to create meaning through language.
It all leaves me uneasy. Not terrified—let’s save that for the doomsayers who think the robots are coming for our souls—but uneasy enough to keep my hand on the brake as the hype train speeds up.
That’s why the real conversation we have to have is about what intelligence means—and why AI still isn’t the one having it.
Bill Whitelaw is a director and advisor to many industry boards, including the Canadian Society for Evolving Energy, which he chairs. He speaks and comments frequently on the subjects of social license, innovation and technology, and energy supply networks.
Troy Media empowers Canadian community news outlets by providing independent, insightful analysis and commentary. Our mission is to support local media in helping Canadians stay informed and engaged by delivering reliable content that strengthens community connections and deepens understanding across the country.
Artificial Intelligence
Meta joins forces with conservative activist Robby Starbuck to keep woke bias out of AI

From LifeSiteNews
Facebook’s parent company has agreed to collaborate with Robby Starbuck, signaling a potential shift away from its left-wing bias.
Facebook parent company Meta will be working with conservative activist Robby Starbuck to keep political bias out of its artificial intelligence (AI) project in perhaps the most significant sign yet that Facebook founder Mark Zuckerberg really does want to change the tech giant’s left-wing ways for good.
The Hill reported that Meta and Starbuck, best known for his work bringing public attention to corporations’ “woke” practices and marshalling public pressure on them to change, have reached a settlement in Starbuck’s defamation suit over Meta AI falsely identifying him as a participant in the January 6, 2021, riots at the U.S. Capitol.
The details of the settlement are not public beyond a joint statement announcing that “Meta and Robby Starbuck will work collaboratively in the coming months to continue to find ways to address issues of ideological and political bias and minimize the risk that the model returns hallucinations in response to user queries.”
I’m glad that we have resolved this matter with @robbystarbuck. You can find our joint statement between Meta and Robby below. pic.twitter.com/Lpft6kWUVM
— Joel Kaplan (@joel_kaplan) August 8, 2025
As many of you know, I sued Meta early this year due to chatbot responses about me that were 100% false. Today @Meta and I are announcing an amicable resolution to my lawsuit. Let me give you some details…
When I filed my defamation suit, Meta reached out to me immediately,… pic.twitter.com/b3W1rVT4d2
— Robby Starbuck (@robbystarbuck) August 8, 2025
Starbuck indicated he was pleased that his lawsuit achieved its loftiest goal: “fix this for everybody so this doesn’t become a massive, you know, really terrible story in the future where AI affects elections in ways that no one is comfortable with.”
The partnership appears to indicate a seismic shift at Meta, whose Facebook social network was for years one of the biggest offenders in left-wing bias and censorship in the tech world.
Last year, Zuckerberg began to acknowledge and disavow the social network’s compliance with Biden administration requests to censor content challenging establishment COVID-19 narratives, and announced in January 2025 that parent company Meta would be taking steps to “dramatically reduce the amount of censorship on our platforms.” The company also abandoned a number of diversity, equity & inclusion (DEI) policies, including the placement of female hygiene products in male restrooms.
In April, Meta laid out its goal to “remove bias” from its Llama 4 artificial intelligence language model, so that it “answers questions, can respond to a variety of different viewpoints without passing judgment, and doesn’t favor some views over others.” Later that month, the Meta Oversight Board ruled that two social media posts that “misgendered” gender-dysphoric individuals should remain standing as they did not violate the platform’s Hate Speech policy.
In March, Facebook announced community notes, inspired by the X (formerly Twitter) feature of the same name, which would “draw on a broader range of voices that exist on our platform to decide which content would benefit from additional information.” The feature replaced the platform’s previous “fact-checking” program, which was heavily criticized for relying on third-party groups that often had left-wing biases.
Last month, the feature demonstrated its effectiveness, and its difference from the old days, by correcting a false claim from former leading Democrat Hillary Clinton. Clinton had claimed that Georgia pro-life laws were responsible for a woman’s death by sepsis in 2022 after taking abortion pills, and that the attending hospital failed to proceed with a potentially life-saving procedure even though the state abortion ban would have allowed it.
Artificial Intelligence
China wrote the playbook on AI surveillance. Will Canada adopt it?

This article supplied by Troy Media.
China is an example of AI surveillance in action. Canada should take that as a warning, not a blueprint
China shows what happens when artificial intelligence is weaponized by the state.
Its Social Credit System, a nationwide framework to rate the “trustworthiness” of citizens and businesses, merges financial records, online activity, travel history and facial recognition data into one algorithmic profile. That profile decides whether people can get a loan, buy a home, travel abroad or even move freely inside the country.
Sold as a way to curb fraud and tax evasion, it quickly became a tool to track political loyalty and personal behaviour the state doesn’t like. Step out of line, and the system punishes you.
Canadians should treat China’s misuse of AI as a warning. AI is advancing so fast that, without strict limits, we could slide into a similar dystopian future—one where governments promise efficiency and safety but use technology to tighten control over everyday life.
It wouldn’t take much for such a system to take root here. The data, the technology and the surveillance tools already exist. All that’s missing is the decision to connect them.
Canadian governments have already shown they are willing to impose sweeping controls and restrict freedoms when faced with dissent or crisis. During the COVID-19 pandemic, the Liberal government invoked the Emergencies Act—a law that grants Ottawa extraordinary temporary powers, including the ability to freeze bank accounts and bypass normal parliamentary debate—to limit movement in response to protests. Across Canada, governments closed businesses, banned gatherings, restricted travel within and outside the country, and introduced vaccine passport systems that restricted access to certain public spaces.
Now imagine those same powers supercharged by AI—able to track, predict and act in real time, with decisions automated and enforcement instant. What used to be broad and temporary restrictions could become precise, ongoing controls that are almost impossible to resist.
A Canadian version of China’s Social Credit System could link tax filings, health records, driver’s licences, transit passes, social media accounts and other personal data. Once those databases are linked, previously separate pieces of information combine into a detailed profile, making it far easier to monitor, predict and restrict a person’s actions. With that much linked information, governments wouldn’t just know what you’ve done—they could control what you’re allowed to do next. That’s not a distant, sci-fi scenario.
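For illustration, here is a minimal sketch, with entirely invented records and field names, of why the linking is the dangerous step: once separate datasets share a common identifier, a trivial join assembles a profile that no single system held on its own.

```python
# Minimal sketch of database linkage. All records, field names and values
# are invented for illustration. The point is structural: a shared key
# turns isolated records into one unified profile.
tax_records     = {"ID-1001": {"income": 72_000, "filed_on_time": True}}
health_records  = {"ID-1001": {"vaccinated": True, "prescriptions": 2}}
transit_records = {"ID-1001": {"trips_last_month": 41, "last_station": "Central"}}

def build_profile(person_id, *databases):
    """Merge every record keyed by person_id into one combined profile."""
    profile = {}
    for db in databases:
        profile.update(db.get(person_id, {}))
    return profile

print(build_profile("ID-1001", tax_records, health_records, transit_records))
# {'income': 72000, 'filed_on_time': True, 'vaccinated': True,
#  'prescriptions': 2, 'trips_last_month': 41, 'last_station': 'Central'}
```

Each source database is harmless enough on its own; the merged profile is what enables monitoring and prediction, which is why the decision to connect databases deserves the scrutiny.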
This is why regulation matters—but Canada’s current plan falls short. The proposed Artificial Intelligence and Data Act (AIDA), part of Bill C-27, is meant to be Canada’s first law governing artificial intelligence systems that could have major impacts on people’s lives. These so-called “high-impact” systems include AI used in areas like health care, hiring, law enforcement, credit scoring and critical infrastructure—technologies where errors, bias or abuse could have serious consequences.
On paper, AIDA would regulate these systems, require risk assessments and keep humans in the loop for key decisions. But with its narrow scope, weak enforcement powers and a rollout that could take years before its rules are fully in force, it risks becoming a safety net with a hole in the middle, in effect more about managing political optics than preventing abuse.
AI surveillance is no longer a future threat—it’s already here. It combines cameras, sensors and massive databases to track people in real time, often without their knowledge or consent. It can predict behaviour, automate decisions and enforce rules instantly. Mustafa Suleyman, in The Coming Wave, warns that AI must be contained before it becomes uncontrollable. Shoshana Zuboff, in The Age of Surveillance Capitalism, reaches the same conclusion: AI is tailor-made for mass monitoring, and once embedded, these systems are almost impossible to dismantle.
Some insist that slowing AI’s development would be pointless, that other nations and corporations would race ahead. But that argument is dangerously naive. History shows that once governments and corporations gain powerful surveillance tools, they don’t give them up—they expand their reach, change their purpose and tighten their grip.
China’s example proves the point. The Social Credit System was never just about unpaid debts or tax evasion. Its real purpose has always been to track people and control their behaviour. Today, it measures political loyalty as much as financial reliability, punishing citizens for anything from joining a protest to criticizing the government online. Jobs, housing, education and even the right to travel can be revoked with a few keystrokes. Once a government is allowed to define “public good” and enforce it algorithmically, freedom becomes a privilege—granted or taken away at will.
Yes, AI-driven surveillance can catch criminals, detect threats and manage crises. But those benefits come at a cost. Once such a system is in place, it rarely returns to its original purpose. It finds new uses, and it becomes permanent.
The choice for Canadians is clear: demand enforceable laws, transparent oversight and real accountability now—before it’s too late.
Dr. Perry Kinkaide is a visionary leader and change agent. Since retiring in 2001, he has served as an advisor and director for various organizations and founded the Alberta Council of Technologies Society in 2005. Previously, he held leadership roles at KPMG Consulting and the Alberta Government. He holds a BA from Colgate University and an MSc and PhD in Brain Research from the University of Alberta.
Troy Media empowers Canadian community news outlets by providing independent, insightful analysis and commentary. Our mission is to support local media in helping Canadians stay informed and engaged by delivering reliable content that strengthens community connections and deepens understanding across the country.