Early attempts at artificial intelligence (AI) were ridiculed for giving answers that were confident, wrong and often surreal – the intellectual equivalent of asking a drunken parrot to explain Kant. But modern AIs based on large language models (LLMs) are so polished, articulate and eerily competent at generating answers that many people assume they can know and, even better, can independently reason their way to knowing.
This confidence is misplaced. LLMs like ChatGPT or Grok don’t think. They are supercharged autocomplete engines. You type a prompt; they predict the next word, then the next, based only on patterns in the trillions of words they were trained on. No rules, no logic – just statistical guessing dressed up in conversation. As a result, LLMs have no idea whether a sentence is true or false or even sane; they only “know” whether it sounds like sentences they’ve seen before. That’s why they often confidently make things up: court cases, historical events, or physics explanations that are pure fiction. The AI world calls such outputs “hallucinations”.
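The “autocomplete” mechanism described above can be illustrated with a deliberately tiny sketch (no relation to any real model): count which word follows which in the training text, then always emit the statistically most common continuation. Nothing in the loop represents truth, logic or the world – only resemblance to the training data.

```python
from collections import Counter, defaultdict

# A toy "training corpus". Real LLMs train on trillions of words,
# but the core idea - next-word prediction from co-occurrence
# statistics - is the same.
corpus = "the court ruled the case the court cited the court".split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Emit the most frequent continuation seen in training.
    # Nothing here checks whether the output is true, only
    # whether it resembles the training data.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "court": the most frequent follower
```

If the corpus happened to contain a fabricated court case, the predictor would repeat it just as confidently – which is exactly the hallucination problem described above.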
Because the LLM’s speech is fluent, users instinctively project self-understanding onto the model, triggered by the same human “trust circuits” we use for spotting intelligence. But this is fallacious reasoning, a bit like hearing someone speak perfect French and assuming they must also be an excellent judge of wine, fashion and philosophy. We confuse style for substance and we anthropomorphize the speaker. That in turn tempts us into two mythical narratives:
Myth 1: “If we just scale up the models and give them more ‘juice’ then true reasoning will eventually emerge.”
Bigger LLMs do get smoother and more impressive. But their core trick – word prediction – never changes. It’s still mimicry, not understanding. One assumes intelligence will magically emerge from quantity, as though making tires bigger and spinning them faster will eventually make a car fly. But the obstacle is architectural, not scalar: you can make the mimicry more convincing (make a car jump off a ramp), but you don’t convert a pattern predictor into a truth-seeker by scaling it up. You merely get better camouflage and, studies have shown, even less fidelity to fact.
Myth 2: “Who cares how AI does it? If it yields truth, that’s all that matters. The ultimate arbiter of truth is reality – so cope!”
This one is especially dangerous as it stomps on epistemology wearing concrete boots. It effectively claims that the seeming reliability of an LLM’s mundane knowledge should be extended to trusting the opaque methods through which it is obtained. But truth has rules. For example, a conclusion only becomes epistemically trustworthy when reached through either: 1) deductive reasoning (conclusions that must be true if the premises are true); or 2) empirical verification (observations of the real world that confirm or disconfirm claims).
LLMs do neither of these. They cannot deduce because their architecture doesn’t implement logical inference. They don’t manipulate premises and reach conclusions, and they are clueless about causality. They also cannot empirically verify anything because they have no access to reality: they can’t check the weather or observe social interactions.
Attempting to overcome these structural obstacles, AI developers bolt external tools like calculators, databases and retrieval systems onto an LLM system. Such ostensible truth-seeking mechanisms improve outputs but do not fix the underlying architecture.
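The “bolting on” of tools can be sketched in a few lines (all names here are invented for illustration): an external lookup supplies verified facts, but the model itself still only phrases whatever text it is handed – with the lookup absent, it falls back to a confident pattern-matched guess.

```python
# Hypothetical sketch of tool augmentation (names invented).
# The external store has a genuine notion of "found / not found";
# the model stand-in does not.

FACTS = {"boiling point of water": "100 C at sea level"}  # trusted external store

def retrieve(query):
    # External tool bolted onto the system: a real lookup.
    return FACTS.get(query)

def llm_answer(query, context=None):
    # Stand-in for the LLM: fluently phrases whatever it has,
    # whether or not that context is grounded in anything.
    if context:
        return f"The answer is {context}."
    return "The answer is 212 C."  # confident, fluent, wrong

print(llm_answer("boiling point of water",
                 retrieve("boiling point of water")))
print(llm_answer("boiling point of water"))  # no tool: a hallucination
```

The improvement is real, but it lives entirely in the bolted-on lookup; the predictor’s architecture is untouched, which is the column’s point.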
The “flying car” salesmen, peddling various accomplishments like IQ test scores, claim that today’s LLMs show superhuman intelligence. In reality, LLM IQ tests violate every rule for conducting intelligence tests, making them a human prompt-engineering skills competition rather than a valid assessment of machine smartness.
Efforts to make LLMs “truth-seeking” by brainwashing them to align with their trainers’ preferences through mechanisms like RLHF miss the point. Those attempts to fix bias only make waves in a structure that cannot support genuine reasoning. This regularly reveals itself through flops like xAI Grok’s MechaHitler bravado or Google Gemini depicting America’s Founding Fathers as a lineup of “racialized” gentlemen.
Other approaches exist, though, that strive to create an AI architecture enabling authentic thinking:
Symbolic AI: uses explicit logical rules; strong on defined problems, weak on ambiguity;
Causal AI: learns cause-and-effect relationships and can answer “what if” questions;
Neuro-symbolic AI: combines neural prediction with logical reasoning; and
Agentic AI: acts with the goal in mind, receives feedback and improves through trial-and-error.
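The contrast with the symbolic approach in the list above can be made concrete with a toy deduction engine (a classroom sketch, not any production system): conclusions follow from explicit rules and premises, so an answer is derived, not guessed from word statistics.

```python
# Toy symbolic reasoner: modus ponens over explicit facts and rules.
# Maps each conclusion to the premises that would establish it.
rules = {("mortal", "socrates"): [("human", "socrates")]}
facts = {("human", "socrates")}

def entails(goal):
    # A goal is entailed if it is a known fact, or if some rule
    # derives it and every premise of that rule is itself entailed.
    if goal in facts:
        return True
    premises = rules.get(goal)
    return premises is not None and all(entails(p) for p in premises)

print(entails(("mortal", "socrates")))  # True - deduced from the premises
print(entails(("mortal", "zeus")))      # False - nothing supports it
```

Note the difference in failure mode: where a pattern predictor emits its best-sounding guess, this engine simply answers “not derivable” – strong on defined problems, weak on ambiguity, exactly as the list says.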
Unfortunately, current progress in AI relies almost entirely on scaling LLMs, and the alternative approaches receive far less funding and attention – the good old “follow the money” principle. Meanwhile, the loudest “AI” in the room is just a very expensive parrot.
LLMs, nevertheless, are astonishing achievements of engineering and wonderful tools useful for many tasks. I will have far more on their uses in my next column. The crucial thing for users to remember, though, is that all LLMs are and will always remain linguistic pattern engines, not epistemic agents.
The hype that LLMs are on the brink of “true intelligence” mistakes fluency for thought. Real thinking requires understanding the physical world, persistent memory, reasoning and planning – capabilities that LLMs handle only primitively or not at all, a design fact that is non-controversial among AI insiders. Treat LLMs as useful thought-provoking tools, never as trustworthy sources. And stop waiting for the parrot to start doing philosophy. It never will.
Gleb Lisikh is a researcher and IT management professional, and a father of three children, who lives in Vaughan, Ontario and grew up in various parts of the Soviet Union.
Parents should take precautions this holiday season when it comes to artificial intelligence toys after researchers for the new Trouble in Toyland report found safety concerns.
Illinois Public Interest Research Group Campaign Associate Ellen Hengesbach said some of the toys armed with AI raised red flags, ranging from talking in depth about sexually explicit topics to acting dismayed when the child disengages.
“What they look like are basically stuffed animals or toy robots that have a chatbot like ChatGPT embedded in them and can have conversations with children,” Hengesbach told The Center Square.
The U.S. PIRG Education Fund report also points out that at least three toys have limited to no parental controls and have the capacity to record your child’s voice and collect other sensitive data via facial recognition.
“All three were willing to tell us where to find potentially dangerous objects in the house, such as plastic bags, matches, or knives,” she said. “It seems like dystopian science fiction decades ago is now reality.”
In the face of the changing landscape and rising concerns, Hengesbach is calling for immediate action.
“The two main things that we’d like to see are more oversight in general and more research so we can see exactly how these toys interact with kids, really just identify what the harms might be and have a lot more transparency from companies around how are these toys designed,” she said. “What are they capable of and what the potential risks or harms might be. I just really want us to take this opportunity to really think through what we’re doing instead of rushing a toy to market.”
As for the here and now, Hengesbach stressed parents would be wise to be thoughtful about their purchases.
“We just have a big open question of what are the long-term impacts of these products on young kids, especially when it comes to their social development,” she said. “The fact is that we just really won’t know what the long-term impacts of AI friends and companion toys might be until the first generation playing with them grows up. For now, I think it’s just really important that parents understand that these AI toys are out there; they’re very new and they’re basically unregulated.”
Since the release of the report, Hengesbach said one AI toymaker temporarily suspended sales of all their products to conduct a safety audit.
This year’s 40th Trouble in Toyland report also focuses on toys that contain toxins, counterfeit toys that haven’t been tested for safety, recalled toys and toys that contain button cell batteries or high-powered magnets, both of which can be deadly if swallowed.
Will America’s electricity grid make it through the impending winter of 2025-26 without suffering major blackouts? It’s a legitimate question to ask given the dearth of adequate dispatchable baseload that now exists on a majority of the major regional grids, according to a new report from the North American Electric Reliability Corporation (NERC).
In its report, NERC expresses particular concern for the Texas grid operated by the Electric Reliability Council of Texas (ERCOT), where a rapid buildout of new, energy-hogging AI datacenters and major industrial users is creating a rapid increase in electricity demand. “Strong load growth from new data centers and other large industrial end users is driving higher winter electricity demand forecasts and contributing to continued risk of supply shortfalls,” NERC notes.
Texas, remember, lost 300 souls in February 2021 when Winter Storm Uri put the state in a deep freeze for a week. The freezing temperatures combined with snowy and icy conditions first caused the state’s wind and solar fleets to fail. When ERCOT implemented rolling blackouts, they denied electricity to some of the state’s natural gas transmission infrastructure, causing it to freeze up, which in turn caused a significant percentage of natural gas power plants to fall offline. Because the state had already shut down so much of its once formidable fleet of coal-fired plants and hasn’t opened a new nuclear plant since the mid-1980s, a disastrous major blackout that lingered for days resulted.
To their credit, Republican Texas Gov. Greg Abbott, the legislature, ERCOT, and other state agencies have, over the last four years, enacted major reforms to the system designed to prevent this scenario from happening again. But, as NERC notes, the state remains dangerously short of the dispatchable thermal capacity needed to keep the grid up and running when wind and solar inevitably drop off the system in such a storm. And ERCOT isn’t alone: Several other regional grids are in the same boat.
This country’s power generation sector can either get serious about building out the needed new thermal capacity or disaster will inevitably result again, because demand isn’t going to stop rising anytime soon. In fact, the already rapid expansion of the AI datacenter industry is certain to accelerate in the wake of President Trump’s approval on Monday of the Genesis Mission, a plan to create another Manhattan Project-style partnership between the government and private industry focused on AI.
It’s an incredibly complex vision, but what the Genesis Mission boils down to is an effort to build an “integrated AI platform” consisting of all federal scientific datasets to which selected AI development projects will be provided access. The concept is to build what amounts to a national brain to help accelerate U.S. AI development and enable America to remain ahead of China in the global AI arms race.
So, every dataset that is currently siloed within DOE, NASA, NSF, Census Bureau, NIH, USDA, FDA, etc. will be melded into a single dataset to try to produce a sort of quantum leap in AI development. Put simply, most AI tools currently exist in a phase of their development in which they function as little more than accelerated, advanced search tools – basically, they’re in the fourth grade of their education path on the way to obtaining their doctorate. This is an effort to let those selected tools figuratively skip eight grades and become college freshmen.
Here’s how the order signed Monday by President Trump puts it: “The Genesis Mission will dramatically accelerate scientific discovery, strengthen national security, secure energy dominance, enhance workforce productivity, and multiply the return on taxpayer investment into research and development, thereby furthering America’s technological dominance and global strategic leadership.”
It’s an ambitious goal that attempts to exploit some of the same central planning techniques China is able to use to its own advantage.
But here’s the thing: Every element envisioned in the Genesis Mission will require more electricity: much more, in fact. It’s a brave new world that will place a huge amount of added pressure on power generation companies and grid managers like ERCOT. Americans must hope and pray they’re up to the task. Their track records in this century do not inspire confidence.
David Blackmon is an energy writer and consultant based in Texas. He spent 40 years in the oil and gas business, where he specialized in public policy and communications.