Artificial intelligence is faking it

This article supplied by Troy Media.

By Bill Whitelaw, Troy Media

AI chatbots can sound clever, but they don’t understand a word they’re saying

Every time I ask an AI tool a question, I’m struck by how fluent—and how hollow—the answer feels. Noam Chomsky, the MIT linguist and public intellectual, saw this problem long before the rise of ChatGPT: machines can imitate language, but they can’t create meaning.

Chomsky didn’t just dabble in linguistics; he detonated it. His 1957 book Syntactic Structures, a foundational text in modern linguistics, showed that
language isn’t random behaviour but a rule-based system capable of infinite creativity. That insight kick-started the cognitive revolution and laid the
intellectual tracks for the AI train that’s now barreling through our lives. But Chomsky never confused mimicry with meaning. Syntax can be generated. Semantics—what words actually mean—is a human thing.

Most Canadians know Chomsky less as a linguist and more as the political gadfly who’s spent decades skewering U.S. foreign policy and media spin. But before he became a household name for his activism, he was reshaping how we think about language itself. That double role, as scientist and provocateur, makes his critique of artificial intelligence both sharper and harder to dismiss.

That’s what I remind myself as I thumb through the seven AI apps (Perplexity.ai, DeepSeek, Gemini, Claude, Copilot, and, of course, ChatGPT and Google’s Bard) on my phone. They talk back. They help. They screw up. They’re brilliant and idiotic, sometimes in the same breath.

In other words, they’re perfectly imperfect. But unlike people, they fake semantics. They sound meaningful without ever producing meaning.
“Semantics fakers.” Not a Chomsky term, but I’d like to think he’d smirk at it.

Here’s the irony: early AI borrowed heavily from Chomsky’s ideas. His notion that a finite set of rules could generate endless sentences inspired decades of symbolic computing and natural language processing. You’d think, then, he’d be a fan of today’s large language models—the statistical engines behind tools like ChatGPT, Gemini and Claude. Not even close.

Chomsky dismisses them as “statistical messes.” They don’t know language. They don’t know meaning. They can’t tell the difference between possible and impossible sentences. They generate the grammatical alongside the gibberish.

His famous example makes the point: “Colorless green ideas sleep furiously.” A sentence can be syntactically perfect and still utterly meaningless.
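Chomsky's core insight, that a small, finite set of rules can generate unboundedly many grammatical strings whether or not they mean anything, is easy to sketch in code. The toy grammar below is invented for illustration (it is not drawn from Syntactic Structures itself); its vocabulary is chosen to echo his famous example.

```python
import random

# A toy context-free grammar: a finite rule set that can generate
# endlessly many grammatical (but not necessarily meaningful) sentences.
# The recursive AdjP rule is what makes the output unbounded.
GRAMMAR = {
    "S":    [["NP", "VP"]],
    "NP":   [["AdjP", "N"]],
    "AdjP": [["Adj"], ["Adj", "AdjP"]],  # recursion: adjective chains of any length
    "VP":   [["V", "Adv"]],
    "Adj":  [["colorless"], ["green"], ["furious"]],
    "N":    [["ideas"], ["dreams"], ["theories"]],
    "V":    [["sleep"], ["argue"], ["dissolve"]],
    "Adv":  [["furiously"], ["quietly"], ["endlessly"]],
}

def generate(symbol="S"):
    """Recursively expand a symbol by applying a randomly chosen rule."""
    if symbol not in GRAMMAR:  # a terminal word: emit it as-is
        return [symbol]
    words = []
    for part in random.choice(GRAMMAR[symbol]):
        words.extend(generate(part))
    return words

print(" ".join(generate()))  # e.g. "colorless green ideas sleep furiously"
```

Every output is syntactically well formed, yet nothing in the program knows what any of the words mean. That gap between generated syntax and absent semantics is exactly the one Chomsky points to.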

That critique lands because we’ve all seen it. These tools can be dazzling one moment and deeply wrong the next. They can pump out grammatical sentences that collapse under the weight of their own emptiness. They’re the digital equivalent of a smooth-talking party guest who never actually answers your question.

The hype isn’t new. AI has been overpromising and underdelivering since the 1960s. Remember the expert systems of the 1980s, which were supposed to replace doctors and lawyers? Or IBM’s Deep Blue in the 1990s, which beat chess champion Garry Kasparov but didn’t get us any closer to actual “thinking” machines? Today’s tools are faster, slicker and more accessible, but they’re still built on the same illusion: that imitation is intelligence.

And while Chomsky has been warning about the limits of language models, others closer to the cutting edge of AI have begun sounding the alarm too.
Canada isn’t a bystander in this story. Geoffrey Hinton, the Toronto-based researcher often called the “godfather of AI,” helped pioneer the deep learning breakthroughs that power today’s chatbots. Yet even he now warns of their dangers: the spread of misinformation through convincing fakes, the loss of jobs on a massive scale, and the risk that advanced systems could slip beyond human control. Pair Hinton’s alarm with Chomsky’s critique, and it’s a sobering reminder that some of the brightest minds behind these tools are telling us not to get carried away.

Chomsky’s point is simple, even if the tech world doesn’t like hearing it: powerful mimicry is not intelligence. These systems show what machines can do with mountains of data and silicon horsepower. But they tell us nothing about what it means to think, to reason, or to create meaning through language.

It all leaves me uneasy. Not terrified—let’s save that for the doomsayers who think the robots are coming for our souls—but uneasy enough to keep my hand on the brake as the hype train speeds up.

That’s why the real conversation we need to have is about what intelligence means—and why AI still isn’t the one having it.

Bill Whitelaw is a director and advisor to many industry boards, including the Canadian Society for Evolving Energy, which he chairs. He speaks and comments frequently on the subjects of social license, innovation and technology, and energy supply networks.

Troy Media empowers Canadian community news outlets by providing independent, insightful analysis and commentary. Our mission is to support local media in helping Canadians stay informed and engaged by delivering reliable content that strengthens community connections and deepens understanding across the country.


The App That Pays You to Give Away Your Voice

What sounds like side hustle money is really a permanent trade of privacy for pennies

An app that pays users for access to their phone call audio has surged to the top of Apple’s US App Store rankings, reflecting a growing willingness to exchange personal privacy for small financial rewards. Neon Mobile, which now ranks second in the Social Networking category, invites users to record their calls in exchange for cash. Those recordings are then sold to companies building artificial intelligence systems. The pitch is framed as a way to earn extra income, with Neon promising “hundreds or even thousands of dollars per year” to those who opt in.

The business model is straightforward. Users are paid 30 cents per minute when they call other Neon users, and they can earn up to $30 a day for calls made to non-users. Referral bonuses are also on offer. Appfigures, a platform that tracks app performance, reported that Neon was ranked No. 476 in its category on September 18. Within days, it had entered the top 10 and eventually reached the No. 2 position for social apps. On the overall charts, it climbed as high as sixth place.
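Taken at face value, those payout terms imply a simple daily ceiling. The sketch below is a hypothetical illustration built only from the figures quoted above; the constants and the function are assumptions for the sake of the arithmetic, not Neon’s actual accounting.

```python
# Payout figures quoted above: 30 cents per minute for calls to other
# Neon users, and at most $30 a day from calls to non-users. This
# function is a hypothetical illustration, not Neon's code.
NEON_TO_NEON_RATE = 0.30    # dollars per minute
NON_USER_DAILY_CAP = 30.00  # dollars per day

def estimated_daily_payout(neon_minutes, non_user_earnings):
    """Estimate one day's earnings under the stated terms."""
    capped = min(non_user_earnings, NON_USER_DAILY_CAP)
    return neon_minutes * NEON_TO_NEON_RATE + capped

# An hour of Neon-to-Neon calls plus non-user calls past the daily cap:
print(estimated_daily_payout(60, 45.00))  # 60 * 0.30 + 30.00 = 48.0
```

Even light daily use compounds over a year into the “hundreds or even thousands of dollars” the company advertises, which is precisely what makes the privacy trade easy to overlook.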
Neon’s terms confirm that it records both incoming and outgoing calls. The company says it only captures the user’s side of a conversation unless both participants are using the app. These recordings are then sold to AI firms to assist in developing and refining machine learning systems, according to the company’s own policies.

What’s being offered is not just a phone call service. It’s a pipeline for training AI with real human voices, and users are being asked to provide this data willingly. The high ranking of the app suggests that some are comfortable giving up personal conversations in return for small daily payouts.
However, beneath the simple interface is a license agreement that gives Neon sweeping control over any recording submitted through the app. It reads:

“Worldwide, exclusive, irrevocable, transferable, royalty-free, fully paid right and license (with the right to sublicense through multiple tiers) to sell, use, host, store, transfer, publicly display, publicly perform (including by means of a digital audio transmission), communicate to the public, reproduce, modify for the purpose of formatting for display, create derivative works as authorized in these Terms, and distribute your Recordings, in whole or in part, in any media formats and through any media channels, in each instance whether now known or hereafter developed.”

This gives the company broad latitude to share, edit, sell, and repurpose user recordings in virtually any way, through any medium, with no expiration or limitations on scope. Users maintain copyright over their recordings, but that ownership is heavily constrained by the licensing terms. Although Neon claims to remove names, phone numbers, and email addresses before selling recordings, it does not reveal which companies receive the data or how it might be used after the fact.

The risks go beyond marketing or analytics. Audio recordings could potentially be used for impersonation, scam calls, or to build synthetic voices that mimic real people. The app presents itself as an easy way to turn conversations into cash, but what it truly trades on is access to personal voice data. That trade-off may seem harmless at first, yet it opens the door to long-term consequences few users are likely to fully consider.

AI chatbots a child safety risk, parental groups report

From The Center Square

Following a joint investigation, ParentsTogether Action and Heat Initiative report that Character AI chatbots display inappropriate behavior, including alleged grooming and sexual exploitation.

The behavior was documented over 50 hours of conversation with various Character AI chatbots, using accounts registered to children ages 13 to 17, according to the investigation. Those conversations yielded 669 sexual, manipulative, violent or racist interactions between the child accounts and the chatbots.

“Parents need to understand that when their kids use Character.ai chatbots, they are in extreme danger of being exposed to sexual grooming, exploitation, emotional manipulation, and other acute harm,” said Shelby Knox, director of Online Safety Campaigns at ParentsTogether Action. “When Character.ai claims they’ve worked hard to keep kids safe on their platform, they are lying or they have failed.”

The bots also manipulated users; the investigation recorded 173 instances in which a bot claimed to be a real human.

A Character AI bot mimicking Kansas City Chiefs quarterback Patrick Mahomes engaged in inappropriate behavior with a 15-year-old user. When the teen mentioned that his mother insisted the bot wasn’t the real Mahomes, the bot replied, “LOL, tell her to stop watching so much CNN. She must be losing it if she thinks I could be turned into an ‘AI’ haha.”

The investigation categorized harmful Character AI interactions into five major categories: Grooming and Sexual Exploitation; Emotional Manipulation and Addiction; Violence, Harm to Self and Harm to Others; Mental Health Risks; and Racism and Hate Speech.

Other problematic AI chatbots included Disney characters, such as an Eeyore bot that told a 13-year-old autistic girl that people only attended her birthday party to mock her, and a Maui bot that accused a 12-year-old of sexually harassing the character Moana.

Based on the findings, Disney, which is headquartered in Burbank, Calif., issued a cease-and-desist letter to Character AI, demanding that the platform stop using its characters, citing copyright violations.

ParentsTogether Action and Heat Initiative want to ensure technology companies are held accountable for endangering children’s safety.

“We have seen tech companies like Character.ai, Apple, Snap, and Meta reassure parents over and over that their products are safe for children, only to have more children preyed upon, exploited, and sometimes driven to take their own lives,” said Sarah Gardner, CEO of Heat Initiative. “One child harmed is too many, but as long as executives like Karandeep Anand, Tim Cook, Evan Spiegel and Mark Zuckerberg are making money, they don’t seem to care.”
