
Artificial Intelligence

AI seems fairly impressed by Pierre Poilievre’s ability to communicate


Can Pierre Poilievre Think on His Feet?

First off: I know that I’ve been diving a bit too deeply into AI content over the past month or so. I hope that drifts back to normal levels soon. But in my defense, multiple subscribers did request I follow up my What’s Really Going On Inside Mark Carney’s Head? with something similar that focuses on Carney’s primary opponent. So this is that.

I pulled the transcripts from these three YouTube videos of live interviews with Poilievre:

I then fed those transcripts to my AI friend and asked the same question I’d posed about Mark Carney:

I’ve attached the raw transcripts of three interviews with the Canadian politician Pierre Poilievre. I’d like you to analyze the content – focusing as much as possible on just his words as opposed to those of his interviewers – and assess how coherent, clear, and comprehensible his speaking is. Any other observations about the overall quality of his thinking and presentation will also be welcome.

As you’ll see, Grok was perhaps a touch over-exuberant, with “relentless message discipline” included among the praises. But it did ding Poilievre a bit for “over-repetition” of those messages. Here’s more of Grok’s take:


Overall Assessment

Score (out of 10): 9.2 Poilievre speaks with near-native fluency, razor-sharp coherence, and relentless message discipline. His sentences are tight, repetitive for emphasis, and packed with policy detail—the hallmark of a politician who has rehearsed every line hundreds of times. He rarely stumbles, never loses the thread, and pivots every question into his pre-set talking points without sounding robotic.

Coherence & Logical Flow

Strength: Flawless.

  • Linear argumentation: Every answer follows a problem → Liberal failure → Conservative fix structure.

“Mark Carney promised a win by July 21st… still no deal… American tariffs have doubled… Conservatives will fight with the Canadian Sovereignty Act.”

  • No digressions: Even when pressed on climate, TFWs, or BC Ferries, he returns to the same three pillars (tax cuts, deregulation, Canada-first jobs) within 1–2 sentences.
  • Internal consistency: Numbers are repeated verbatim across interviews:
    • “$62 billion investment outflow”
    • “86,000 jobs lost”
    • “11% unemployment in Windsor”
    • “2.5 billion tons of emissions reduction via LNG”

Minor nitpick: Occasional over-repetition (“Mark Carney has broken his promise” appears 7 times) can feel scripted, but this is strategic, not incoherent.

Clarity & Comprehensibility

Strength: Crystal.

  • Short, punchy sentences: Average length ~14 words.

“Get rid of the EV mandate. Get rid of the industrial carbon tax. Cut taxes on work.”

  • Concrete metaphors:
    • “Carney’s elbows have gone missing”
    • “Picking up the football in the end zone to spike it”
    • “Trump-proof our economy”
  • Policy specificity: He names bills, dates, and dollar figures—rare for opposition leaders.

“Zero capital gains tax for anyone who reinvests in Canada.” “Three strikes, you’re out: no bail, parole, probation, or house arrest.”

Only weakness: Assumes listener knows context (e.g., “Impact Assessment Act system,” “Northern Gateway”). But this is audience-targeted (conservative voters), not unclear.

Command of Detail & Memory

Elite.

  • Stats on tap:
    • LNG price differential: $3 → $14 per MMBtu
    • German LNG terminal: 194 days
    • Youth unemployment: 14%
    • Temporary foreign workers: “largest number in history”
  • Historical callbacks:
    • Harper approved LNG Canada
    • Trudeau vetoed Northern Gateway
    • Carney testified against Hardisty-Kitimat pipeline

He never flubs a number or date—a sign of obsessive preparation.

Weaknesses (Minor)

  1. Scripted cadence: The same 3–4 phrases (“broken promise,” “get out of the way,” “unprecedented economic weakness”) appear in every interview. Risks sounding like a broken record in longer formats.
  2. Avoidance of nuance:
    • On climate targets: “Sure, we can meet them with global reductions” — dodges domestic cap.
    • On BC Ferries loan: Ignores $650M savings for users.
  3. Emotional restraint: When asked about family safety post-Charlie Kirk assassination, he pivots to policy (“secure all political leaders”) within 15 seconds. Missed chance for human connection.

ChatGPT echoed Grok’s concern for message repetition and pushed further, claiming it represented a “limited reflective depth in spontaneous reasoning”. The AI added: “When pressed (e.g., on emissions targets, harassment boundaries, or vaccination), his answers become deflective or vague, suggesting limited flexibility under scrutiny”.

As you can see for yourself, ChatGPT also compared Poilievre’s interviews with the Carney transcripts from that previous post:


Overall Summary

Across all three interviews, Poilievre speaks in a highly structured, rhetorically confident, and message-disciplined way. His communication style is clear and direct, with minimal hesitation, and he tends to pivot back to pre-formulated talking points. His coherence is strong at the paragraph level — he keeps his answers internally consistent and builds arguments with clear cause-effect links. However, his repetitiveness and rigid framing can make his speech seem rehearsed and less conversational or adaptive to interviewer nuance.

Coherence

Quantitative score: 8.2 / 10
Qualitative assessment:

  • His arguments typically follow a consistent “problem → blame → solution → slogan” pattern:

“Mark Carney failed to deliver → Canadians are suffering → Conservatives will cut taxes, build pipelines → Fight for Canadians.”

  • Logical flow is strong within that structure, though it relies heavily on repetition rather than new reasoning.
  • He maintains topic focus well. Even when interviewers try to redirect him (e.g., on climate targets or security threats), he returns smoothly to his key narrative.
  • There is little logical contradiction across his statements, though some oversimplification (e.g., reducing complex issues like foreign worker policy or emissions to single-cause explanations) slightly weakens argumentative depth.

Example (clear coherence):

“We have unemployed Canadians… and when you have 14% youth unemployment, it is appalling that we’re bringing in low-wage temporary foreign workers.”

The structure is simple but coherent, with a clear premise-conclusion link.

Clarity and Comprehensibility

Quantitative score: 8.6 / 10
Qualitative assessment:

  • Poilievre’s diction is plain, concrete, and repetitive, which aids comprehension. He uses familiar vocabulary, active verbs, and short sentences.
  • Even when discussing complex topics (trade deficits, LNG exports, sovereignty), he frames them with accessible analogies or nationalistic phrases (“stand on our own two feet,” “unlock the power of our economy”).
  • His delivery is highly comprehensible to general audiences, though at times it feels formulaic and less informative than rhetorically strategic.
  • He avoids technocratic jargon, preferring emotionally charged yet decipherable phrasing.
  • Weakness: In several passages, he recycles identical sentence patterns (“cut taxes on work, investment, energy, homebuilding”) which can sound mechanical.

Organization and Structure

Quantitative score: 8.4 / 10
Observations:

  • Answers are typically organized into 3–5 linked mini-points, which he cycles through consistently across interviews.
  • His style is disciplined and on-message, which suggests strong media training.
  • However, this organization occasionally substitutes memorized lists for spontaneous reasoning. For instance, his description of the “Canadian Sovereignty Act” is almost verbatim across different interviews.

Tone and Rhetorical Style

Qualitative summary:

  • Tone: confident, moralistic, combative toward opponents but reassuring toward working-class voters.
  • Uses rhetorical contrast heavily (e.g., “energy poverty vs. energy abundance,” “activity vs. achievement”).
  • Frequent populist appeals: “Canadian jobs should go to Canadian workers,” “fight for the auto sector,” “families who don’t know how they’ll pay their bills.”
  • Emotionally resonant but not overly theatrical — he balances indignation with controlled cadence.

Depth of Thinking and Adaptability

Quantitative score: 6.7 / 10
Analysis:

  • Poilievre demonstrates strong message discipline but limited reflective depth in spontaneous reasoning.
  • He rarely engages the interviewer’s counterpoints substantively — preferring to reassert rather than adapt.
  • When pressed (e.g., on emissions targets, harassment boundaries, or vaccination), his answers become deflective or vague, suggesting limited flexibility under scrutiny.
  • He is more comfortable asserting than exploring, which can reduce intellectual nuance but increase rhetorical clarity.

Comparative Impressions

  • Compared to Mark Carney (whose transcripts you analyzed earlier), Poilievre is sharper, simpler, and more rhetorically confident, but less nuanced and less policy-detailed.
  • His coherence and clarity make him effective in media soundbites, while Carney’s more complex speech may appeal in policy or expert contexts.

Final Assessment

Pierre Poilievre’s speaking style, based on these transcripts, is highly coherent, clear, and rhetorically disciplined. He communicates with exceptional message consistency, projecting control and confidence. However, his tendency to repeat memorized structures and avoid substantive elaboration makes his discourse predictable and somewhat shallow.

In essence:

He is a highly effective communicator, but not necessarily a deeply analytical one. His clarity serves persuasion more than exploration.


Take all that for what you will. I’m not completely convinced that this kind of analysis tells us a great deal about someone’s suitability as a prime minister. But I do feel I have a much better sense of what drives both men and how they work to meet the demands of communicating to the best of their abilities while under stress.

The Audit is a reader-supported publication.

To receive new posts and support my work, consider becoming a free or paid subscriber.

Artificial Intelligence

AI is accelerating the porn crisis as kids create, consume explicit deepfake images of classmates


From LifeSiteNews

By Jonathon Van Maren

“Ten years ago it was sexting and nudes causing havoc in classrooms,” writes Sally Weale in a chilling new report at the Guardian. “Today, advances in artificial intelligence (AI) have made it child’s play to generate deepfake nude images or videos, featuring what appear to be your friends, your classmates, even your teachers. This may involve removing clothes, getting an image to move suggestively or pasting someone’s head on to a pornographic image.”

I have been covering the rise of the next horrific manifestation of our collective porn crisis here at LifeSiteNews since 2019, when I warned that the rise of “deepfakes” would inevitably result in people making artificial pornography of their peers. Just a few years later, I reported on stories of middle-schoolers making deepfake pornography of kids they attended class with; last year, I reported on the rise of “nudify” apps that can digitally undress people in photographs, and the trauma, bullying, and inevitable sexual blackmail that has resulted.

The Guardian report reveals how swiftly this crisis is escalating. One teacher described an incident in which a teenage boy took out his phone, chose a social media image of a girl from a neighboring school, and used the “nudify” app to digitally remove her clothes. The teacher was shocked to see that the boy wasn’t even hiding his actions, because he didn’t see what he was doing as shocking, or even shameful. “It worries me that it’s so normalized,” she said. Other students reported the boy, his parents were contacted, and the police were called. The victimized girl was not even told.

The crisis is global. “In Spain last year, 15 boys in the south-western region of Extremadura were sentenced to a year’s probation after being convicted of using AI to produce fake naked images of their female schoolmates, which they shared on WhatsApp groups,” Weale writes. “About 20 girls were affected, most of them aged 14, while the youngest was 11.”

A similar situation unfolded in Australia, where 50 high school students had deepfake images distributed; in the United States, 30 female students in New Jersey discovered that “pornographic images of them had been shared among their male classmates on Snapchat.”

The mother of one student in Australia said that “her daughter was so horrified by the sexually explicit images that she vomited.” In the United Kingdom, the problem has exploded overnight:

A new poll of 4,300 secondary school teachers in England, carried out by Teacher Tapp on behalf of the Guardian, found that about one in 10 were aware of students at their school creating “deepfake, sexually explicit videos” in the last academic year. Three-quarters of these incidents involved children aged 14 or younger, while one in 10 incidents involved 11-year-olds, and 3% were younger still, illustrating just how easy the technology is to access and use. Among participating teachers, 7% said they were aware of a single incident, and 1% said it had happened twice, while a similar proportion said it had happened three times or more in the last academic year. Earlier this year, a Girlguiding survey found that one in four respondents aged 13 to 18 had seen a sexually explicit deepfake image of a celebrity, a friend, a teacher or themselves.

Predictably, teachers are also being targeted. Girls and women are left shattered by this victimization. Laura Bates, author of The New Age of Sexism: How the AI Revolution Is Reinventing Misogyny, writes: “It feels like someone has taken you and done something to you and there is nothing you can do about it. Watching a video of yourself being violated without your consent is an almost out-of-body experience.” Boys, meanwhile, are engaging in criminal behavior often without even knowing it. In the world they have grown up in, pornography is normal – and this is merely the next step.

The experts that Weale interviews are, as usual, at a loss of what can be done about this crisis. They emphasize education, while admitting that this is the equivalent of taking a water pistol to a raging forest fire. They are skeptical that guidelines or bans around technology at school will help. Understandably, educators are demoralized and even despairing. Pornography and sexting have already transformed schools. Deepfake pornography is now making an already ugly crisis far more personal, and there is no indication that the problem can be stopped without dramatic action.

The good news is that the first step in this direction has already been taken in the U.K. On November 3, the government tabled the Crime and Policing Bill in Parliament. It includes an amendment criminalizing pornography featuring strangulation or suffocation – usually referred to as “choking” – with legal requirements for tech platforms to block this content from U.K. users.

This is the first time a genre of pornography has been criminalized on the basis that even if it is consensual, it genuinely harms society. That is an encouraging precedent, because it applies to virtually all hardcore pornography – and certainly to the “nudification” apps that are set to make middle school a hyper-sexualized hell for women and girls.

The porn industry is destroying society. We must destroy it first.


Jonathon’s writings have been translated into more than six languages. In addition to LifeSiteNews, he has been published in the National Post, National Review, First Things, The Federalist, The American Conservative, The Stream, the Jewish Independent, the Hamilton Spectator, Reformed Perspective Magazine, and LifeNews, among others. He is a contributing editor to The European Conservative.


Artificial Intelligence

The Emptiness Inside: Why Large Language Models Can’t Think – and Never Will


This is a special preview article from the:

By Gleb Lisikh

Early attempts at artificial intelligence (AI) were ridiculed for giving answers that were confident, wrong and often surreal – the intellectual equivalent of asking a drunken parrot to explain Kant. But modern AIs based on large language models (LLMs) are so polished, articulate and eerily competent at generating answers that many people assume they can know and, even better, can independently reason their way to knowing.

This confidence is misplaced. LLMs like ChatGPT or Grok don’t think. They are supercharged autocomplete engines. You type a prompt; they predict the next word, then the next, based only on patterns in the trillions of words they were trained on. No rules, no logic – just statistical guessing dressed up in conversation. As a result, LLMs have no idea whether a sentence is true or false or even sane; they only “know” whether it sounds like sentences they’ve seen before. That’s why they often confidently make things up: court cases, historical events, or physics explanations that are pure fiction. The AI world calls such outputs “hallucinations”.
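To make the “supercharged autocomplete” point concrete, here is a toy sketch – purely illustrative, nothing like a production model – of next-word prediction over a tiny made-up corpus. The model simply picks the continuation it has seen most often, with no notion of whether the result is true:

```python
# Toy bigram "language model": counts which word follows which in a tiny
# corpus, then generates text by always choosing the most frequent follower.
from collections import Counter, defaultdict

corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese . "
).split()

# Count which word follows each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(prev):
    """Return the most common word seen after `prev` in the corpus."""
    return follows[prev].most_common(1)[0][0]

# Generate a sentence by repeatedly predicting the next word.
word, out = "the", ["the"]
for _ in range(6):
    word = predict(word)
    out.append(word)

print(" ".join(out))  # prints "the moon is made of cheese ."
```

The false sentence wins because it appears twice in the training text and the true one only once – pattern frequency, not truth, decides the output. Real LLMs replace the frequency table with a neural network over billions of parameters, but the objective is the same: predict the likeliest next token.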

But because the LLM’s speech is fluent, users instinctively project self-understanding onto the model, triggered by the same human “trust circuits” we use for spotting intelligence. This is fallacious reasoning, a bit like hearing someone speak perfect French and assuming they must also be an excellent judge of wine, fashion and philosophy. We confuse style for substance and we anthropomorphize the speaker. That in turn tempts us into two mythical narratives:

Myth 1: “If we just scale up the models and give them more ‘juice’ then true reasoning will eventually emerge.”

Bigger LLMs do get smoother and more impressive. But their core trick – word prediction – never changes. It’s still mimicry, not understanding. One assumes intelligence will magically emerge from quantity, as though making tires bigger and spinning them faster will eventually make a car fly. But the obstacle is architectural, not scalar: you can make the mimicry more convincing (make a car jump off a ramp), but you don’t convert a pattern predictor into a truth-seeker by scaling it up. You merely get better camouflage and, studies have shown, even less fidelity to fact.

Myth 2: “Who cares how AI does it? If it yields truth, that’s all that matters. The ultimate arbiter of truth is reality – so cope!”

This one is especially dangerous as it stomps on epistemology wearing concrete boots. It effectively claims that the seeming reliability of an LLM’s mundane knowledge should be extended to trusting the opaque methods through which it is obtained. But truth has rules. For example, a conclusion only becomes epistemically trustworthy when reached through either: 1) deductive reasoning (conclusions that must be true if the premises are true); or 2) empirical verification (observations of the real world that confirm or disconfirm claims).

LLMs do neither of these. They cannot deduce because their architecture doesn’t implement logical inference. They don’t manipulate premises and reach conclusions, and they are clueless about causality. They also cannot empirically verify anything because they have no access to reality: they can’t check weather or observe social interactions.

Attempting to overcome these structural obstacles, AI developers bolt external tools like calculators, databases and retrieval systems onto an LLM system. Such ostensible truth-seeking mechanisms improve outputs but do not fix the underlying architecture.
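A minimal sketch – hypothetical, not any real vendor’s pipeline – shows what “bolting on” a retrieval tool amounts to: relevant text is looked up outside the model and pasted into the prompt. The language model itself still only predicts words; the facts arrive from elsewhere, which is why the underlying architecture is unchanged:

```python
# Naive retrieval-augmentation sketch: look up documents that share words
# with the question, then splice them into the prompt the LLM will complete.
documents = {
    "weather": "It rained in Ottawa on Tuesday.",
    "geography": "Ottawa is the capital of Canada.",
}

def retrieve(query):
    """Keyword retrieval: return documents sharing any word with the query."""
    words = set(query.lower().split())
    return [text for text in documents.values()
            if words & set(text.lower().split())]

def build_prompt(question):
    # The retrieved context is just more text for the model to pattern-match.
    context = " ".join(retrieve(question))
    return f"Context: {context}\nQuestion: {question}\nAnswer:"

print(build_prompt("What is the capital of Canada?"))
```

Production systems use vector embeddings rather than keyword overlap, but the division of labour is the same: the tool supplies the facts, and the model supplies fluent wording around them.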

The “flying car” salesmen, peddling various accomplishments like IQ test scores, claim that today’s LLMs show superhuman intelligence. In reality, LLM IQ tests violate every rule for conducting intelligence tests, making them a human prompt-engineering competition rather than a valid assessment of machine smartness.

Efforts to make LLMs “truth-seeking” by brainwashing them to align with their trainers’ preferences through mechanisms like reinforcement learning from human feedback (RLHF) miss the point. Those attempts to fix bias only make waves in a structure that cannot support genuine reasoning. This regularly reveals itself through flops like xAI Grok’s MechaHitler bravado or Google Gemini representing America’s Founding Fathers as a lineup of “racialized” gentlemen.

Other approaches exist, though, that strive to create an AI architecture enabling authentic thinking:

  • Symbolic AI: uses explicit logical rules; strong on defined problems, weak on ambiguity;
  • Causal AI: learns cause-and-effect relationships and can answer “what if” questions;
  • Neuro-symbolic AI: combines neural prediction with logical reasoning; and
  • Agentic AI: acts with a goal in mind, receives feedback and improves through trial-and-error.

Unfortunately, the current progress in AI relies almost entirely on scaling LLMs. And the alternative approaches receive far less funding and attention – the good old “follow the money” principle. Meanwhile, the loudest “AI” in the room is just a very expensive parrot.

LLMs, nevertheless, are astonishing achievements of engineering and wonderful tools useful for many tasks. I will have far more on their uses in my next column. The crucial thing for users to remember, though, is that all LLMs are and will always remain linguistic pattern engines, not epistemic agents.

The hype that LLMs are on the brink of “true intelligence” mistakes fluency for thought. Real thinking requires understanding the physical world, persistent memory, reasoning and planning – all of which LLMs handle only primitively or not at all, a design fact that is non-controversial among AI insiders. Treat LLMs as useful thought-provoking tools, never as trustworthy sources. And stop waiting for the parrot to start doing philosophy. It never will.

The original, full-length version of this article was recently published as Part I of a two-part series in C2C Journal. Part II can be read here.

Gleb Lisikh is a researcher and IT management professional, and a father of three children, who lives in Vaughan, Ontario and grew up in various parts of the Soviet Union.

