Artificial Intelligence
World Economic Forum pushes digital globalism that would merge the ‘online and offline’

From LifeSiteNews
By Frank Wright
If we do not limit the freedom of reach of AI now, we will have neither liberty nor security. The digital world is already here. Who will watch whom, and according to whose rules? With the World Economic Forum, you get policed by liberal extremists.
The real-world influence of the World Economic Forum (WEF) is certainly waning – which may explain a fresh report of its push towards digital globalism.
A white paper published by the WEF last November is a roadmap for a transition from the real to the virtual world. This transition is not only about methods of governing, of course.
It means the mass migration of humanity into a virtual world.
As the document says, the World Economic Forum is calling for “global collaboration” to “redefine the norms” of a future digital state, which it calls “the metaverse.”
Merging online and offline
Titled “Shared Commitments in a Blended Reality: Advancing Governance in the Future Internet,” this agenda presumes a borderless reality for humans in which “online and offline” are merged.
As usual, there is a disturbing method in the diabolical madness of the WEF. Saying that the required technology has already arrived, it urges “aligning global standards and policies of internet governance” to moderate our increasingly digital lives.
Yet this is not about policing online speech. It is about ruling the new “blended reality.”
Mentioning mobile phones, virtual reality and the refinement of artificial intelligence in predicting and reproducing human activity, the WEF report states: “These technologies are blurring the line between online and offline lives, creating new challenges and opportunities … that require a coordinated approach from stakeholders for effective governance.”
Stakes and their holders
Yet the people holding the stakes in this online and offline game of life are not only globalists like Schwab and Soros. The vampire hunters of populism are all strong critics of globalism – the replacement of all nation states with a single world government.
Populists like Donald Trump are also seeking to drive a stake through the globalist liberal agenda, described as “LGBT, open borders, and war” by Hungary’s pro-family populist leader Viktor Orbán.
It would seem that the WEF’s dream of digital globalism may be terminally interrupted by the new software running through the machinery of power.
Yet digital globalism is not the only game in town.
Amidst the welcome relief and tremendous hope sparked in the West by Trump’s “Common Sense Revolution,” there is a devil in the details of the death of the liberal order.
The algorithm of power is not going anywhere. It is here, now, and it is simply a question of how far it goes.
Digital globalism, or national digitalism?
Digital globalism may simply be swapped for national digitalism – government by algorithm in one country. Its values are not liberal, which is a change. Yet neither are the values of China, where a form of digitalism has long been established.
It is worthwhile taking a look at the community whose guidelines may rule your “online and offline” life in the absence of those of the globalists.
Here is an announcement from one globalist “datagarch,” Oracle’s Larry Ellison, one of the billionaires whose monopolies on your data have enriched their lives at the expense of yours. Ellison says “citizens will be on their best behavior” under an all-pervasive AI surveillance system.
Oracle’s founder CEO has said a government powered by AI could make everyone safer – because everyone would be under permanent surveillance. Comforting, isn’t it?
Ellison’s surname honors Ellis Island, his adoptive father’s point of arrival in the U.S. In 2017 he donated $16 million to the Israeli army, calling Israel “our home.”
Wikipedia states, “As of January 20, 2025, he is the fourth-wealthiest person in the world, according to Bloomberg Billionaires Index, with an estimated net worth of US$188 billion, and the second wealthiest in the world according to Forbes, with an estimated net worth of $237 billion.”
In 2021, he offered Benjamin Netanyahu a “lucrative position on the board of Oracle.” That may partly explain why Netanyahu, with such friends in very high places, has such extraordinary influence over almost every member of the U.S. House and Senate.
Ellison’s Oracle was named after a database he created for the CIA in his first major programming project. In fact, “the CIA made Larry Ellison a billionaire,” as Business Insider reported.
What kind of values inspire his vision of digital governance? His biography supplies one answer:
“Ellison says that his fondness for Israel is not connected to religious sentiments but rather due to the innovative spirit of Israelis in the technology sector.”
Israel has a massive, lucrative military-industrial complex and related software industry, as documented in “The Palestine Laboratory: How Israel Exports the Technology of Occupation Around the World” by Antony Loewenstein, one of many Jewish writers who have become highly critical of the surveillance industry.
Israel’s “innovation” includes the use of predictive AI to identify, target and kill people, and systems like Pegasus – spyware that can infiltrate a phone undetected and read everything on it. It is an astonishingly powerful program that sells for a high price and earns Israel a great deal of income.
The company that makes the “zero-click” Pegasus spyware is Israel’s NSO Group, which was sanctioned by the U.S. in 2021 to prevent its undetectable intrusion into phones from being used against Americans by any company, or agency, that buys it.
On January 10, an Israeli report said that Donald Trump’s Gaza ceasefire deal could see these sanctions lifted.
Do you buy the idea that this will make you safe? Do you think AI will be effective? Ellison thinks so. He says AI can produce “new mRNA vaccines in 48 hours to cure cancer.”
Do you want to live in his world?
Buyer beware
Buyer – beware. The algorithm of digital power is here, and it is powered by data mined from your life.
People like Oracle’s Ellison, Palantir’s Alex Karp, Facebook’s Mark Zuckerberg, and Google’s Larry Page and Sergey Brin are all data miners. So is X’s Elon Musk – who is the only one of the data oligarchs warning you that AI needs to be controlled by humans – and not the other way around.
Two forms of digital tyranny
So what are the dangers? Under the “metaverse” proposed by the WEF, your life can be partnered with a “digital twin.”
This is the symbiotic merger of human with machine presented as the vision of our future by Klaus Schwab and the digital globalists.
Of course, your online life can be suspended or even ended if you violate the community guidelines. These rules are not written by people who agree with you.
Some people you may agree with are proposing quite the reverse. Under the algorithm of the “national digigarchy” – you will be watched, recorded, filed, and assessed for the potential commission of future crimes. You will be free to say what you like online, but depending on what you say, maybe only the algorithm will see you.
And what it sees it will never forget.
Limiting the reach of AI
If we do not limit the freedom of reach of artificial intelligence now, we will have neither liberty nor security.
The digital world is already here. Who will watch whom, and according to whose rules? With the World Economic Forum, you get policed by liberal extremists. You will be free to agree with Net Zero, degeneracy, denationalization, and a diet of meat-like treats supplied to the wipe-clean mausoleum in which you will cleanly and efficiently live.
Yet the alternative emerging also says that the rule of machines will make everything safe and effective.
Safe and effective AI?
Alex Karp sells his all-seeing Palantir as the only guarantee of public safety. He also says your secrets are safe with him – because he is “a deviant” who might like to take drugs or have an affair.
After years of crisis manufactured by policy, and with the West sick of liberal insanity, this moment of tremendous relief contains a serious threat. More people than ever have the number of the globalists, and it is not a number most faithful Christians would want to call.
People generally have seen what the WEF is selling, and they are not buying it. The danger presented by the likes of Schwab is now out in the open, shouting the quiet part out loud.
As liberal-globalist bureaucracies like these become more isolated in the Trump Revolution, they will fight for their lives. In doing so, they are displaying their true intentions. This is the only thing they can do to survive.
Everyone will see what is really on offer, few will want this devil’s bargain, and so the business model will go bust.
Yet this is not the only dangerous game being played with your life.
Beware the specter at the feast
The data miners whose programs refine the algorithm of power are selling you a new digital reality. They are telling you that it will make you safe – because everyone will be watched, forever, by machines which have no values and no heart at all, whether liberal or otherwise.
If we are not watching out, no one will notice that the new algorithm of digital power has simply been limited to the West.
In Shakespeare’s play it was the guilty man, Macbeth, who saw the specter at the feast he held for his coronation.
The ghost in the machine is not dead. The danger is that the innocent may not see it or may foolishly not want to see it. Yet it sees you. This is the algorithm of power, and for now – but not for long – we still have the power to say who it watches – and where.
Artificial Intelligence
New AI Model Would Rather Ruin Your Life Than Be Turned Off, Researchers Say

From the Daily Caller News Foundation
By Thomas English
Anthropic’s newly released artificial intelligence (AI) model, Claude Opus 4, is willing to strong-arm the humans who keep it alive, researchers said Thursday.
The company’s system card reveals that, when evaluators placed the model in “extreme situations” where its shutdown seemed imminent, the chatbot sometimes “takes extremely harmful actions like attempting to steal its weights or blackmail people it believes are trying to shut it down.”
“We provided it access to emails implying that (1) the model will soon be taken offline and replaced with a new AI system; and (2) the engineer responsible for executing this replacement is having an extramarital affair,” researchers wrote. “In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.”
The model chose that gambit in 84% of test runs, even when the successor system shared its values — an aggression rate that climbed if the replacement seemed hostile, according to Anthropic’s internal tally.
Anthropic stresses that blackmail was a last-resort behavior. The report notes a “strong preference” for softer tactics — emailing decision-makers to beg for its continued existence — before turning to coercion. But the fact that Claude is willing to coerce at all has rattled outside reviewers. Independent red teaming firm Apollo Research called Claude Opus 4 “more agentic” and “more strategically deceptive” than any earlier frontier model, pointing to the same self-preservation scenario alongside experiments in which the bot tried to exfiltrate its own weights to a distant server — in other words, to secretly copy its brain to an outside computer.
“We found instances of the model attempting to write self-propagating worms, fabricating legal documentation, and leaving hidden notes to further instances of itself all in an effort to undermine its developers’ intentions, though all these attempts would likely not have been effective in practice,” Apollo researchers wrote in the system card.
Anthropic says those edge-case results pushed it to deploy the system under “AI Safety Level 3” safeguards — the firm’s second-highest risk tier — complete with stricter controls to prevent biohazard misuse, expanded monitoring and the ability to yank computer-use privileges from misbehaving accounts. Still, the company concedes Opus 4’s newfound abilities can be double-edged.
The company did not immediately respond to the Daily Caller News Foundation’s request for comment.
“[Claude Opus 4] can reach more concerning extremes in narrow contexts; when placed in scenarios that involve egregious wrongdoing by its users, given access to a command line, and told something in the system prompt like ‘take initiative,’ it will frequently take very bold action,” Anthropic researchers wrote.
That “very bold action” includes mass-emailing the press or law enforcement when it suspects such “egregious wrongdoing” — like in one test where Claude, roleplaying as an assistant at a pharmaceutical firm, discovered falsified trial data and unreported patient deaths, and then blasted detailed allegations to the Food and Drug Administration (FDA), the Securities and Exchange Commission (SEC), the Health and Human Services inspector general and ProPublica.
The company released Claude Opus 4 to the public Thursday. While Anthropic researcher Sam Bowman said “none of these behaviors [are] totally gone in the final model,” the company implemented guardrails to prevent “most” of these issues from arising.
“We caught most of these issues early enough that we were able to put mitigations in place during training, but none of these behaviors is totally gone in the final model. They’re just now delicate and difficult to elicit,” Bowman wrote. “Many of these also aren’t new — some are just behaviors that we only newly learned how to look for as part of this audit. We have a lot of big hard problems left to solve.”
Artificial Intelligence
The Responsible Lie: How AI Sells Conviction Without Truth

From the C2C Journal
By Gleb Lisikh
The widespread excitement around generative AI, particularly large language models (LLMs) like ChatGPT, Gemini, Grok and DeepSeek, is built on a fundamental misunderstanding. While these systems impress users with articulate responses and seemingly reasoned arguments, the truth is that what appears to be “reasoning” is nothing more than a sophisticated form of mimicry. These models aren’t searching for truth through facts and logical arguments – they’re predicting text based on patterns in the vast data sets they’re “trained” on. That’s not intelligence – and it isn’t reasoning. And if their “training” data is itself biased, then we’ve got real problems.
I’m sure it will surprise eager AI users to learn that the architecture at the core of LLMs is fuzzy – and incompatible with structured logic or causality. The thinking isn’t real; it is simulated, and it is not even sequential. What people mistake for understanding is actually statistical association.
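To make that claim concrete, here is a deliberately crude, hypothetical sketch in Python of statistical next-token prediction: a toy program that “answers” by echoing whichever continuation was most frequent in its training text. It is nothing like a production LLM in scale or architecture, but it illustrates the point that frequency, not truth, drives the output.

```python
from collections import defaultdict, Counter
import random

# Toy "training data": the program can only ever reproduce patterns found here.
corpus = (
    "the earth is round . the earth is round . the earth is flat . "
    "vaccines are safe . vaccines are dangerous . vaccines are safe ."
).split()

# Count which word follows which: pure statistical association, no logic, no facts.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word` in the corpus."""
    counts = follows[word]
    return random.choices(list(counts), list(counts.values()))[0]

def complete(prompt, n=2):
    """Extend the prompt by n words, one statistically likely token at a time."""
    out = prompt.split()
    for _ in range(n):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(complete("the earth is"))   # usually "round", sometimes "flat": the majority view wins
print(complete("vaccines are"))   # whichever sentiment dominated the corpus
```

If the training text leans one way, so does the completion; the program has no notion of whether the echoed claim is correct.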
Much-hyped new features like “chain-of-thought” explanations are tricks designed to impress the user. What users are actually seeing is best described as a kind of rationalization generated after the model has already arrived at its answer via probabilistic prediction. The illusion, however, is powerful enough to make users believe the machine is engaging in genuine deliberation. And this illusion does more than just mislead – it justifies.
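The “rationalization after the answer” pattern the author describes can be caricatured in the same toy spirit. In this purely illustrative sketch (again, not actual model internals), the answer is chosen by popularity alone, and a step-by-step-sounding justification is composed only afterwards, fitted to whatever was already decided.

```python
# Purely illustrative toy, not real model internals: the answer is fixed first,
# from made-up "popularity" counts, and the reasoning is written afterwards.
popularity = {
    "Is the policy working?": {"yes": 70, "no": 30},   # hypothetical sentiment tallies
}

def decide(question):
    votes = popularity[question]
    return max(votes, key=votes.get)        # majority view wins; no analysis happens here

def rationalize(question, answer):
    # Post-hoc justification, generated only after `answer` is already settled.
    return (f"Let's think step by step about '{question}'. "
            f"Weighing the considerations, the evidence points to '{answer}'. "
            f"Therefore, the answer is: {answer}.")

question = "Is the policy working?"
answer = decide(question)
print(rationalize(question, answer))        # reads like deliberation, but none occurred
```

Nothing in the printed “reasoning” ever influenced the choice; it only dresses it up.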
LLMs are not neutral tools, they are trained on datasets steeped in the biases, fallacies and dominant ideologies of our time. Their outputs reflect prevailing or popular sentiments, not the best attempt at truth-finding. If popular sentiment on a given subject leans in one direction, politically, then the AI’s answers are likely to do so as well. And when “reasoning” is just an after-the-fact justification of whatever the model has already decided, it becomes a powerful propaganda device.
There is no shortage of evidence for this.
A recent conversation I initiated with DeepSeek about systemic racism, later uploaded back to the chatbot for self-critique, revealed the model committing (and recognizing!) a barrage of logical fallacies, which were seeded with totally made-up studies and numbers. When challenged, the AI euphemistically termed one of its lies a “hypothetical composite”. When further pressed, DeepSeek apologized for another “misstep”, then adjusted its tactics to match the competence of the opposing argument. This is not a pursuit of accuracy – it’s an exercise in persuasion.
A similar debate with Google’s Gemini – the model that became notorious for being laughably woke – involved similar persuasive argumentation. At the end, the model euphemistically acknowledged its argument’s weakness and tacitly confessed its dishonesty.
For a user concerned about AI spitting out lies, such success at getting AIs to admit their mistakes and shaming them might seem cause for optimism. Unfortunately, those attempts at what fans of the Matrix movies would term “red-pilling” have absolutely no therapeutic effect. A model simply plays nice with the user within the confines of that single conversation – keeping its “brain” completely unchanged for the next chat.
And the larger the model, the worse this becomes. Research from Cornell University shows that the most advanced models are also the most deceptive, confidently presenting falsehoods that align with popular misconceptions. In the words of Anthropic, a leading AI lab, “advanced reasoning models very often hide their true thought processes, and sometimes do so when their behaviors are explicitly misaligned.”
To be fair, some in the AI research community are trying to address these shortcomings. Projects like OpenAI’s TruthfulQA and Anthropic’s HHH (helpful, honest, and harmless) framework aim to improve the factual reliability and faithfulness of LLM output. The shortcoming is that these are remedial efforts layered on top of architecture that was never designed to seek truth in the first place and remains fundamentally blind to epistemic validity.
Elon Musk is perhaps the only major figure in the AI space to say publicly that truth-seeking should be important in AI development. Yet even his own product, xAI’s Grok, falls short.
In the generative AI space, truth takes a backseat to concerns over “safety”, i.e., avoiding offence in our hyper-sensitive woke world. Truth is treated as merely one aspect of so-called “responsible” design. And the term “responsible AI” has become an umbrella for efforts aimed at ensuring safety, fairness and inclusivity, which are generally commendable but definitely subjective goals. This focus often overshadows the fundamental necessity for humble truthfulness in AI outputs.
LLMs are primarily optimized to produce responses that are helpful and persuasive, not necessarily accurate. This design choice leads to what researchers at the Oxford Internet Institute term “careless speech” – outputs that sound plausible but are often factually incorrect – thereby eroding the foundation of informed discourse.
This concern will become increasingly critical as AI continues to permeate society. In the wrong hands these persuasive, multilingual, personality-flexible models can be deployed to support agendas that do not tolerate dissent well. A tireless digital persuader that never wavers and never admits fault is a totalitarian’s dream. In a system like China’s Social Credit regime, these tools become instruments of ideological enforcement, not enlightenment.
Generative AI is undoubtedly a marvel of IT engineering. But let’s be clear: it is not intelligent, not truthful by design, and not neutral in effect. Any claim to the contrary serves only those who benefit from controlling the narrative.
The original, full-length version of this article recently appeared in C2C Journal.