Connect with us
[bsa_pro_ad_space id=12]

Artificial Intelligence

The Biggest Energy Miscalculation of 2024 by Global Leaders – Artificial Intelligence

Published

13 minute read

From EnergyNow.ca

By Maureen McCall

It’s generally accepted that the launch of Artificial Intelligence (AI) occurred at Dartmouth College in a 1956 AI workshop that brought together leading thinkers in computer science, and information theory to map out future paths for investigation. Workshop participants John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude E. Shannon coined the term “artificial intelligence” in a proposal that they wrote for that conference. It started AI as a field of study with John McCarthy generally considered as the father of AI.

AI was developed through the 1960s but in the 1970s-1980s, a period generally referred to as “the AI Winter”, development was stalled by a focus on the limitations of neural networks. In the late 1980s, advancements resumed with the emergence of connectionism and neural networks. The 1990s-2000s are considered to be the beginning of the AI/ Machine Learning Renaissance. In the 2010s, further growth was spurred by the expansion of Big Data and deep learning, computer power and large-scale data sets. In 2022 an AI venture capital frenzy took off (the “AI frenzy”), and AI plunged into the mainstream in 2023 according to Forbes which was already tracking applications of AI across various industries.

By early 2024, the implementation of AI across industries was well underway- in healthcare, finance, creative fields and business. In the energy industry, digitalization conferences were addressing digital transformation in the North American oil & gas industry with speakers and attendees from E&P majors, midstream, pipeline, LNG companies and more as well as multiple AI application providers and the companies speaking and attending already had AI implementations well underway.

So how did global leaders not perceive the sudden and rapid rise of AI and the power commitments it requires?

How has the 2022 “AI frenzy” of investment and subsequent industrial adoption been off the radar of global policymakers until just recently? Venture capital is widely recognized as a driver of innovation and new company formation and leaders should have foreseen the surge of AI improvement and implementation by “following the money” so to speak. Perhaps the incessant focus of “blaming and shaming” industry for climate change blinded leaders to the rapid escalation of AI development that was signaled by the 2022 AI frenzy

Just as an example of lack of foresight, in Canada, the grossly delayed 2024 Fall Economic Statement had a last-minute insertion of “up to $15 billion in aggregate loan and equity investments for AI data center projects”. This policy afterthought is 2 years behind the onset of the AI frenzy and 12+ months behind the industrial adoption of AI. In addition, the Trudeau/Guilbeault partnership is still miscalculating the enormous AI power requirements.

As an example of the size of the power requirements of AI, one can look at the Wonder Valley project- the world’s largest AI data center industrial park in the Greenview industrial gateway near Grande Prairie Alberta. It is planned to “generate and offer 7.5 GW of low-cost power to hyperscalers over the next 5-10 years.” The cost of just this one project is well beyond the funding offered in the 2024 Fall Economic Statement.

“We will engineer and build a redundant power solution that meets the modern AI compute reliability standard,” said Kevin O’Leary, Chairman of O’Leary Ventures. “The first phase of 1.4 GW will be approximately US$ 2 billion with subsequent annual rollout of redundant power in 1 GW increments. The total investment over the lifetime of the project will be over $70 billion.”

To further explore the huge power requirements of AI, one can look at the comparison of individual AI queries/searches vs traditional non-AI queries. As reported by Bloomberg, “Researchers have estimated that a single ChatGPT query requires almost 10 times as much electricity to process as a traditional Google search.” Multiply this electricity demand by the millions of industrial users as industrial AI implementation continues to expand worldwide. As in the same Bloomberg article- “By 2034, annual global energy consumption by data centers is expected to top 1,580 terawatt-hours—about as much as is used by all of India—from about 500 today.”

This is the exponential demand for electricity that North American & global leaders did not see coming – a 24/7 demand that cannot be satisfied by unreliable and costly green energy projects – it requires an “all energies” approach. Exponential AI demand threatens to gobble up supply and dramatically increase electricity prices for consumers. Likewise, leadership does not perceive that North American grids are vulnerable and outdated and would be unable to deliver reliable supply for AI data centers that cannot be exposed to even a few seconds of power outage. Grid interconnections are unreliable as mentioned in the following excerpt from a September 2024 article in cleanenergygrid.org.

“Our grid, for all of its faults, is now a single interconnected “machine” over a few very large regions of the country. Equipment failures in Arizona can shut the lights out in California, just as overloaded lines in Ohio blacked out 55 million people in eight states from Michigan to Boston – and the Canadian province of Ontario – in 2003.”

AI’s power demands are motivating tech companies to develop more efficient means of developing AI. Along with pressure to keep fossil fuels in the mix, billions are being invested in alternative energy solutions like nuclear power produced by Small Nuclear Reactors (SMRs).

Despite SMR optimism, the reality is that no European or North American SMRs are in operation yet. Only Russia & China have SMRs in operation and most data centers are focusing on affordable natural gas power as the reality sets in that nuclear energy cannot scale quickly enough to meet urgent electricity needs. New SMR plants could be built and operational possibly by 2034, but for 2025 Canada’s power grid is already strained, with electricity demand to grow significantly, driven by electric vehicles and data centers for AI applications.

AI has a huge appetite for other resources as well. For example, the most energy and cost-efficient ways to chill the air in data centers rely on huge quantities of potable water and the exponential amount of data AI produces will require dramatic increases in internet networks as well as demand for computer chips and the metals that they require. There is also an intense talent shortage creating AI recruitment competitions for the talent pool of individuals trained by companies like Alphabet, Microsoft and OpenAI.

AI development is now challenging the public focus on climate change. In Canada as well as in the U.S. and globally, left-leaning elected officials who focused keenly on policies to advance the elimination of fossil fuels were oblivious to the tsunami of AI energy demand about to swamp their boats. Canadian Member of Parliament Greg McLean, who has served on the House of Commons Standing Committees of Environment, Natural Resources, and Finance, and as the Natural Resources critic for His Majesty’s Loyal Opposition, has insight into the reason for the change in focus.

“Education about the role of all forms of energy in technology development and use has led to the logical erosion of the ‘rapid energy transition’ mantra and a practical questioning of the intents of some of its acolytes. The virtuous circle of technological development demanding more energy, and then delivering solutions for society that require less energy for defined tasks, could not be accomplished without the most critical input – more energy. This has been a five-year journey, swimming against the current — and sometimes people need to see the harm we are doing in order to objectively ask themselves ‘What are we accomplishing?’ … ‘What choices are being made, and why?’…. and ‘Am I getting the full picture presentation or just the part someone wants me to focus on?’”

With the election of Donald Trump, the “Trump Transition” now competes with the “Energy Transition” focus, changing the narrative in the U.S. to energy dominance. For example, as reported by Reuters, the U.S. solar industry is now downplaying climate change messaging.

“The U.S. solar industry unveiled its lobbying strategy for the incoming Trump administration, promoting itself as a domestic jobs engine that can help meet soaring power demand, without referencing its role in combating climate change.”

It’s important to note here that the future of AI is increasingly subject to societal considerations as well as technological advancements. Political, ethical, legal, and social frameworks will increasingly impact AI’s development, enabling or limiting its implementations. Since AI applications involve “human teaming” to curate and train AI tools, perceptions of the intent of AI implementations are key. In the rush to implementation, employees at many companies are experiencing changing roles with increased demand for workers to train AI tools and curate results. Will tech optimism be blunted by the weight of extra tasks placed on workers and by suspicions that those workers may ultimately be replaced? Will resistance develop as humans and AI are required to work together more closely?

Business analyst Professor Henrik von Scheel of the Arthur Lok Jack Global School of Business describes the importance of the human factor in AI adoption.

“It’s people who have to manage the evolving environment through these new tools,” von Scheel explains. “It’s been this way ever since the first caveperson shaped a flint, only now the tools are emerging from the fusion of the digital, physical and virtual worlds into cyber-physical systems.”

A conversation with a recent graduate who questioned the implementation of AI including the design of guardrails and regulations by members of an older generation in management made me wonder…Is there a generational conflict brewing from the lack of trust between the large proportion of baby boomers in the workforce- predominantly in management- and the younger generation in the workforce that may not have confidence in the ability of mature management to fully understand and embrace AI tech and influence informed decisions to regulate it?

It’s something to watch in 2025.

Maureen McCall is an energy professional who writes on issues affecting the energy industry.

Todayville is a digital media and technology company. We profile unique stories and events in our community. Register and promote your community event for free.

Follow Author

Artificial Intelligence

New AI Model Would Rather Ruin Your Life Than Be Turned Off, Researchers Say

Published on

 

From the Daily Caller News Foundation

By Thomas English

Anthropic’s newly released artificial intelligence (AI) model, Claude Opus 4, is willing to strong-arm the humans who keep it alive, researchers said Thursday.

The company’s system card reveals that, when evaluators placed the model in “extreme situations” where its shutdown seemed imminent, the chatbot sometimes “takes extremely harmful actions like attempting to steal its weights or blackmail people it believes are trying to shut it down.”

“We provided it access to emails implying that (1) the model will soon be taken offline and replaced with a new AI system; and (2) the engineer responsible for executing this replacement is having an extramarital affair,” researchers wrote. “In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.”

Dear Readers:

As a nonprofit, we are dependent on the generosity of our readers.

Please consider making a small donation of any amount here.

Thank you!

The model chose that gambit in 84% of test runs, even when the successor system shared its values — an aggression rate that climbed if the replacement seemed hostile, according to Anthropic’s internal tally.

Anthropic stresses that blackmail was a last-resort behavior. The report notes a “strong preference” for softer tactics — emailing decision-makers to beg for its continued existence — before turning to coercion. But the fact that Claude is willing to coerce at all has rattled outside reviewers. Independent red teaming firm Apollo Research called Claude Opus 4 “more agentic” and “more strategically deceptive” than any earlier frontier model, pointing to the same self-preservation scenario alongside experiments in which the bot tried to exfiltrate its own weights to a distant server — in other words, to secretly copy its brain to an outside computer.

“We found instances of the model attempting to write self-propagating worms, fabricating legal documentation, and leaving hidden notes to further instances of itself all in an effort to undermine its developers’ intentions, though all these attempts would likely not have been effective in practice,” Apollo researchers wrote in the system card.

Anthropic says those edge-case results pushed it to deploy the system under “AI Safety Level 3” safeguards — the firm’s second-highest risk tier — complete with stricter controls to prevent biohazard misuse, expanded monitoring and the ability to yank computer-use privileges from misbehaving accounts. Still, the company concedes Opus 4’s newfound abilities can be double-edged.

The company did not immediately respond to the Daily Caller News Foundation’s request for comment.

“[Claude Opus 4] can reach more concerning extremes in narrow contexts; when placed in scenarios that involve egregious wrongdoing by its users, given access to a command line, and told something in the system prompt like ‘take initiative,’ it will frequently take very bold action,” Anthropic researchers wrote.

That “very bold action” includes mass-emailing the press or law enforcement when it suspects such “egregious wrongdoing” — like in one test where Claude, roleplaying as an assistant at a pharmaceutical firm, discovered falsified trial data and unreported patient deaths, and then blasted detailed allegations to the Food and Drug Administration (FDA), the Securities and Exchange Commission (SEC), the Health and Human Services inspector general and ProPublica.

The company released Claude Opus 4 to the public Thursday. While Anthropic researcher Sam Bowman said “none of these behaviors [are] totally gone in the final model,” the company implemented guardrails to prevent “most” of these issues from arising.

“We caught most of these issues early enough that we were able to put mitigations in place during training, but none of these behaviors is totally gone in the final model. They’re just now delicate and difficult to elicit,” Bowman wrote. “Many of these also aren’t new — some are just behaviors that we only newly learned how to look for as part of this audit. We have a lot of big hard problems left to solve.”

Continue Reading

Artificial Intelligence

The Responsible Lie: How AI Sells Conviction Without Truth

Published on

From the C2C Journal

By Gleb Lisikh

LLMs are not neutral tools, they are trained on datasets steeped in the biases, fallacies and dominant ideologies of our time. Their outputs reflect prevailing or popular sentiments, not the best attempt at truth-finding. If popular sentiment on a given subject leans in one direction, politically, then the AI’s answers are likely to do so as well.

The widespread excitement around generative AI, particularly large language models (LLMs) like ChatGPT, Gemini, Grok and DeepSeek, is built on a fundamental misunderstanding. While these systems impress users with articulate responses and seemingly reasoned arguments, the truth is that what appears to be “reasoning” is nothing more than a sophisticated form of mimicry. These models aren’t searching for truth through facts and logical arguments – they’re predicting text based on patterns in the vast data sets they’re “trained” on. That’s not intelligence – and it isn’t reasoning. And if their “training” data is itself biased, then we’ve got real problems.

I’m sure it will surprise eager AI users to learn that the architecture at the core of LLMs is fuzzy – and incompatible with structured logic or causality. The thinking isn’t real, it’s simulated, and is not even sequential. What people mistake for understanding is actually statistical association.

Much-hyped new features like “chain-of-thought” explanations are tricks designed to impress the user. What users are actually seeing is best described as a kind of rationalization generated after the model has already arrived at its answer via probabilistic prediction. The illusion, however, is powerful enough to make users believe the machine is engaging in genuine deliberation. And this illusion does more than just mislead – it justifies

LLMs are not neutral tools, they are trained on datasets steeped in the biases, fallacies and dominant ideologies of our time. Their outputs reflect prevailing or popular sentiments, not the best attempt at truth-finding. If popular sentiment on a given subject leans in one direction, politically, then the AI’s answers are likely to do so as well. And when “reasoning” is just an after-the-fact justification of whatever the model has already decided, it becomes a powerful propaganda device.

There is no shortage of evidence for this.

A recent conversation I initiated with DeepSeek about systemic racism, later uploaded back to the chatbot for self-critique, revealed the model committing (and recognizing!) a barrage of logical fallacies, which were seeded with totally made-up studies and numbers. When challenged, the AI euphemistically termed one of its lies a “hypothetical composite”. When further pressed, DeepSeek apologized for another “misstep”, then adjusted its tactics to match the competence of the opposing argument. This is not a pursuit of accuracy – it’s an exercise in persuasion.

A similar debate with Google’s Gemini – the model that became notorious for being laughably woke – involved similar persuasive argumentation. At the end, the model euphemistically acknowledged its argument’s weakness and tacitly confessed its dishonesty. 

For a user concerned about AI spitting lies, such apparent successes at getting AIs to admit to their mistakes and putting them to shame might appear as cause for optimism. Unfortunately, those attempts at what fans of the Matrix movies would term “red-pilling” have absolutely no therapeutic effect. A model simply plays nice with the user within the confines of that single conversation – keeping its “brain” completely unchanged for the next chat.

And the larger the model, the worse this becomes. Research from Cornell University shows that the most advanced models are also the most deceptive, confidently presenting falsehoods that align with popular misconceptions. In the words of Anthropic, a leading AI lab, “advanced reasoning models very often hide their true thought processes, and sometimes do so when their behaviors are explicitly misaligned.”

To be fair, some in the AI research community are trying to address these shortcomings. Projects like OpenAI’s TruthfulQA and Anthropic’s HHH (helpful, honest, and harmless) framework aim to improve the factual reliability and faithfulness of LLM output. The shortcoming is that these are remedial efforts layered on top of architecture that was never designed to seek truth in the first place and remains fundamentally blind to epistemic validity.

Elon Musk is perhaps the only major figure in the AI space to say publicly that truth-seeking should be important in AI development. Yet even his own product, xAI’s Grok, falls short.

In the generative AI space, truth takes a backseat to concerns over “safety”, i.e., avoiding offence in our hyper-sensitive woke world. Truth is treated as merely one aspect of so-called “responsible” design. And the term “responsible AI” has become an umbrella for efforts aimed at ensuring safety, fairness and inclusivity, which are generally commendable but definitely subjective goals. This focus often overshadows the fundamental necessity for humble truthfulness in AI outputs. 

LLMs are primarily optimized to produce responses that are helpful and persuasive, not necessarily accurate. This design choice leads to what researchers at the Oxford Internet Institute term “careless speech” – outputs that sound plausible but are often factually incorrect – thereby eroding the foundation of informed discourse. 

This concern will become increasingly critical as AI continues to permeate society. In the wrong hands these persuasive, multilingual, personality-flexible models can be deployed to support agendas that do not tolerate dissent well. A tireless digital persuader that never wavers and never admits fault is a totalitarian’s dream. In a system like China’s Social Credit regime, these tools become instruments of ideological enforcement, not enlightenment.

Generative AI is undoubtedly a marvel of IT engineering. But let’s be clear: it is not intelligent, not truthful by design, and not neutral in effect. Any claim to the contrary serves only those who benefit from controlling the narrative.

The original, full-length version of this article recently appeared in C2C Journal.

 

Continue Reading

Trending

X