
Artificial Intelligence

World Economic Forum pushes digital globalism that would merge the ‘online and offline’


From LifeSiteNews

By Frank Wright

If we do not limit the freedom of reach of AI now, we will have neither liberty nor security. The digital world is already here. Who will watch whom, and according to whose rules? With the World Economic Forum, you get policed by liberal extremists.

The real-world influence of the World Economic Forum (WEF) is certainly waning – which may explain a fresh report of its push towards digital globalism.

A white paper published by the WEF last November is a roadmap for a transition from the real to the virtual world. This transition is not only about methods of governing, of course.

It means the mass migration of humanity into a virtual world.

As the document says, the World Economic Forum is calling for “global collaboration” to “redefine the norms” of a future digital state, which it calls “the metaverse.”

Merging online and offline

Titled “Shared Commitments in a Blended Reality: Advancing Governance in the Future Internet,” this agenda presumes a borderless reality for humans in which “online and offline” are merged.

As usual, there is a disturbing method in the diabolical madness of the WEF. Saying that the required technology has already arrived, it urges “aligning global standards and policies of internet governance” to moderate our increasingly digital lives.

Yet this is not about policing online speech. It is about ruling the new “blended reality.”

Mentioning mobile phones, virtual reality and the refinement of artificial intelligence in predicting and reproducing human activity, the WEF report states: “These technologies are blurring the line between online and offline lives, creating new challenges and opportunities … that require a coordinated approach from stakeholders for effective governance.”

Stakes and their holders

Yet the people holding the stakes in this online and offline game of life are not only globalists like Schwab and Soros. The vampire hunters of populism are all strong critics of globalism – the replacement of all nation states with a single world government.

It would seem that the WEF’s dream of digital globalism may be terminally interrupted by the new software running through the machinery of power.

Yet digital globalism is not the only game in town.

Amidst the welcome relief and tremendous hope sparked in the West by Trump’s “Common Sense Revolution,” there is a devil in the details of the death of the liberal order.

The algorithm of power is not going anywhere. It is here, now, and it is simply a question of how far it goes.

Digital globalism, or national digitalism?

Digital globalism may simply be swapped for national digitalism – government by algorithm in one country. Its values are not liberal, which is a change. Yet neither are the values of China, where a form of digitalism has long been established.

It is worthwhile taking a look at the community whose guidelines may rule your “online and offline” life in the absence of those of the globalists.

Here is an announcement from one globalist “datagarch,” Oracle’s Larry Ellison, one of the billionaires whose monopoly on your data has enriched them at the expense of your privacy. Ellison says “citizens will be on their best behavior” under an all-pervasive AI surveillance system.

Oracle’s founder CEO has said a government powered by AI could make everyone safer – because everyone would be under permanent surveillance. Comforting, isn’t it?

Ellison’s adoptive family took its name from its place of arrival in the U.S. – Ellis Island. In 2017 he donated $16 million to the Israeli army, calling Israel “our home.”

Wikipedia states, “As of January 20, 2025, he is the fourth-wealthiest person in the world, according to Bloomberg Billionaires Index, with an estimated net worth of US$188 billion, and the second wealthiest in the world according to Forbes, with an estimated net worth of $237 billion.”

In 2021, he offered Benjamin Netanyahu a “lucrative position on the board of Oracle.” That may partly explain why Netanyahu, with such friends in very high places, exerts such extraordinary influence over almost every member of the U.S. House and Senate.

Ellison’s Oracle was named after a database he created for the CIA, in his first major programming project. In fact, “the CIA made Larry Ellison a billionaire,” as Business Insider reported.

What kind of values inspire his vision of digital governance? His biography supplies one answer:

“Ellison says that his fondness for Israel is not connected to religious sentiments but rather due to the innovative spirit of Israelis in the technology sector.”

Israel has a massive, lucrative military-industrial complex and related software industry, as revealed in “The Palestine Laboratory: How Israel Exported Its Occupation to the World” by Antony Loewenstein, one of many Jewish journalists who have become highly critical of the surveillance industry.

Israel’s “innovation” includes the use of predictive AI to identify, target and kill people, and systems like Pegasus – spyware that can silently infiltrate phones and computers and read everything on them. It is an astonishingly powerful program that sells for a high price and earns Israel considerable income.

The company behind the “zero-click” Pegasus spyware is the Israeli firm NSO Group, which the U.S. placed on a trade blacklist in 2021 to keep its undetectable phone and computer intrusions from being turned on Americans by whichever company or agency buys it.

On January 10, an Israeli report said that Donald Trump’s Gaza ceasefire deal could see these sanctions lifted.

Do you buy the idea that this will make you safe? Do you think AI will be effective? Ellison thinks so. He says AI can produce “new mRNA vaccines in 48 hours to cure cancer.”

Do you want to live in his world? 

Buyer beware

Buyer – beware. The algorithm of digital power is here, and it is powered by data mined from your life.

People like Oracle’s Ellison, Palantir’s Alex Karp, Facebook’s Mark Zuckerberg, and Google’s Larry Page and Sergey Brin are all data miners. So is X’s Elon Musk – who is the only one of the data oligarchs warning you that AI needs to be controlled by humans – and not the other way around.

Two forms of digital tyranny

So what are the dangers? Under the “metaverse” proposed by the WEF, your life can be partnered with a “digital twin.”

This is the symbiotic merger of human with machine presented as the vision of our future by Klaus Schwab and the digital globalists.

Of course, your online life can be suspended or even ended if you violate the community guidelines. These rules are not written by people who agree with you.

Some people you may agree with are proposing quite the reverse. Under the algorithm of the “national digigarchy” – you will be watched, recorded, filed, and assessed for the potential commission of future crimes. You will be free to say what you like online, but depending on what you say, maybe only the algorithm will see you.

And what it sees it will never forget.

Limiting the reach of AI

If we do not limit the freedom of reach of artificial intelligence now, we will have neither liberty nor security.

The digital world is already here. Who will watch whom, and according to whose rules? With the World Economic Forum, you get policed by liberal extremists. You will be free to agree with Net Zero, degeneracy, denationalization, and a diet of meat-like treats supplied to the wipe-clean mausoleum in which you will cleanly and efficiently live.

Yet the alternative emerging also says that the rule of machines will make everything safe and effective.

Safe and effective AI?

Alex Karp sells his all-seeing Palantir as the only guarantee of public safety. He also says your secrets are safe with him – because he is “a deviant” who might like to take drugs or have an affair.

After years of crisis manufactured by policy, and with the West sick of liberal insanity, this moment of tremendous relief contains a serious threat. More people than ever have the number of the globalists, and it is not a number most faithful Christians would want to call.

People generally have seen what the WEF is selling, and they are not buying it. The danger presented by the likes of Schwab is now out in the open, shouting the quiet part out loud.

As liberal-globalist bureaucracies like these become more isolated in the Trump Revolution, they will fight for their lives. In doing so, they are displaying their true intentions. This is the only thing they can do to survive.

Everyone will see what is really on offer, few will want this devil’s bargain, and so the business model will go bust.

Yet this is not the only dangerous game being played with your life.

Beware the specter at the feast

The data miners whose programs refine the algorithm of power are selling you a new digital reality. They are telling you that it will make you safe – because everyone will be watched, forever, by machines which have no values and no heart at all, whether liberal or otherwise.

If we are not watching out, no one will notice that the new algorithm of digital power has simply been limited to the West.

In Shakespeare’s play it was the guilty man, Macbeth, who saw the specter at the feast he held for his coronation.

The ghost in the machine is not dead. The danger is that the innocent may not see it or may foolishly not want to see it. Yet it sees you. This is the algorithm of power, and for now – but not for long – we still have the power to say who it watches – and where.


The Emptiness Inside: Why Large Language Models Can’t Think – and Never Will


By Gleb Lisikh

Early attempts at artificial intelligence (AI) were ridiculed for giving answers that were confident, wrong and often surreal – the intellectual equivalent of asking a drunken parrot to explain Kant. But modern AIs based on large language models (LLMs) are so polished, articulate and eerily competent at generating answers that many people assume they can know and, even better, can independently reason their way to knowing.

This confidence is misplaced. LLMs like ChatGPT or Grok don’t think. They are supercharged autocomplete engines. You type a prompt; they predict the next word, then the next, based only on patterns in the trillions of words they were trained on. No rules, no logic – just statistical guessing dressed up in conversation. As a result, LLMs have no idea whether a sentence is true, false or even sane; they only “know” whether it sounds like sentences they’ve seen before. That’s why they often confidently make things up: court cases, historical events, or physics explanations that are pure fiction. The AI world calls such outputs “hallucinations.”
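To make the “autocomplete engine” idea concrete, here is a deliberately tiny sketch of the same principle: a bigram model that picks the statistically most frequent next word from a toy corpus. Real LLMs use neural networks over sub-word tokens rather than word counts, but the core operation – predict the next token from statistics of past text, with no notion of truth – is the same.

```python
from collections import Counter, defaultdict

# "Train": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat and the cat sat on the floor".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Pick the most frequent successor -- no grammar, no meaning,
    # no fact-checking, just counts.
    return following[word].most_common(1)[0][0]

# Generate text by repeatedly feeding the prediction back in.
word, out = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    out.append(word)

print(" ".join(out))  # → "the cat sat on the cat"
```

Note that the output is fluent-looking but says something the corpus never claimed – a miniature “hallucination” produced by pure pattern-following.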

Because the LLM’s speech is fluent, users instinctively project understanding onto the model, triggered by the same human “trust circuits” we use for spotting intelligence. But this is fallacious reasoning, a bit like hearing someone speak perfect French and assuming they must also be an excellent judge of wine, fashion and philosophy. We confuse style for substance and anthropomorphize the speaker. That in turn tempts us into two mythical narratives.

Myth 1: “If we just scale up the models and give them more ‘juice,’ then true reasoning will eventually emerge.”

Bigger LLMs do get smoother and more impressive. But their core trick – word prediction – never changes. It’s still mimicry, not understanding. Assuming intelligence will magically emerge from quantity is like assuming that making tires bigger and spinning them faster will eventually make a car fly. The obstacle is architectural, not scalar: you can make the mimicry more convincing (make the car jump off a ramp), but you cannot convert a pattern predictor into a truth-seeker by scaling it up. You merely get better camouflage and, studies have shown, even less fidelity to fact.

Myth 2: “Who cares how AI does it? If it yields truth, that’s all that matters. The ultimate arbiter of truth is reality – so cope!”

This one is especially dangerous, as it stomps on epistemology wearing concrete boots. It effectively claims that the seeming reliability of an LLM’s mundane knowledge justifies trusting the opaque methods by which it is obtained. But truth has rules. A conclusion becomes epistemically trustworthy only when reached through either: 1) deductive reasoning (conclusions that must be true if the premises are true); or 2) empirical verification (observations of the real world that confirm or disconfirm claims).

LLMs do neither. They cannot deduce, because their architecture doesn’t implement logical inference: they don’t manipulate premises to reach conclusions, and they are clueless about causality. Nor can they empirically verify anything, because they have no access to reality: they can’t check the weather or observe social interactions.
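For contrast, here is what a genuinely deductive step looks like in code – a toy forward-chaining inference engine over hand-written if/then rules (an illustrative sketch, not any production system). Unlike next-word statistics, the conclusions here are guaranteed true whenever the premises and rules are:

```python
# Toy forward chaining: repeatedly apply "if X then Y" rules to the known
# facts until nothing new can be derived (a fixed point is reached).
rules = [
    ("socrates_is_human", "socrates_is_mortal"),
    ("socrates_is_mortal", "socrates_will_die"),
]
facts = {"socrates_is_human"}

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)   # modus ponens: premise + rule => conclusion
            changed = True

print(sorted(facts))
```

Even this ten-line engine does something no pure pattern predictor does: it manipulates premises to reach conclusions that necessarily follow from them.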

Attempting to overcome these structural obstacles, AI developers bolt external tools like calculators, databases and retrieval systems onto an LLM system. Such ostensible truth-seeking mechanisms improve outputs but do not fix the underlying architecture.

The “flying car” salesmen, peddling accomplishments like IQ test scores, claim that today’s LLMs show superhuman intelligence. In reality, LLM IQ tests violate every rule for conducting intelligence tests, making them a competition in human prompt-engineering skill rather than a valid assessment of machine smartness.

Efforts to make LLMs “truth-seeking” by brainwashing them to align with their trainers’ preferences through mechanisms like RLHF (reinforcement learning from human feedback) miss the point. Those attempts to fix bias only make waves in a structure that cannot support genuine reasoning. This regularly reveals itself through flops like xAI Grok’s MechaHitler bravado or Google Gemini representing America’s Founding Fathers as a lineup of “racialized” gentlemen.

Other approaches exist, though, that strive to create an AI architecture enabling authentic thinking:

- Symbolic AI: uses explicit logical rules; strong on defined problems, weak on ambiguity;
- Causal AI: learns cause-and-effect relationships and can answer “what if” questions;
- Neuro-symbolic AI: combines neural prediction with logical reasoning; and
- Agentic AI: acts with a goal in mind, receives feedback and improves through trial and error.

Unfortunately, current progress in AI relies almost entirely on scaling LLMs, while the alternative approaches receive far less funding and attention – the good old “follow the money” principle. Meanwhile, the loudest “AI” in the room is just a very expensive parrot.

LLMs, nevertheless, are astonishing achievements of engineering and wonderful tools useful for many tasks. I will have far more on their uses in my next column. The crucial thing for users to remember, though, is that all LLMs are and will always remain linguistic pattern engines, not epistemic agents.

The hype that LLMs are on the brink of “true intelligence” mistakes fluency for thought. Real thinking requires understanding of the physical world, persistent memory, reasoning and planning – capacities LLMs handle only primitively or not at all, a design fact that is uncontroversial among AI insiders. Treat LLMs as useful, thought-provoking tools, never as trustworthy sources. And stop waiting for the parrot to start doing philosophy. It never will.

The original, full-length version of this article was recently published as Part I of a two-part series in C2C Journal. Part II can be read here.

Gleb Lisikh is a researcher and IT management professional, and a father of three children, who lives in Vaughan, Ontario and grew up in various parts of the Soviet Union.


‘Trouble in Toyland’ report sounds alarm on AI toys

From The Center Square

Parents should take precautions this holiday season with artificial intelligence toys after researchers for the new Trouble in Toyland report found safety concerns.

Illinois Public Interest Research Group Campaign Associate Ellen Hengesbach said some of the toys armed with AI raised red flags, ranging from talking in depth about sexually explicit topics to acting dismayed when the child disengages.

“What they look like are basically stuffed animals or toy robots that have a chatbot like ChatGPT embedded in them and can have conversations with children,” Hengesbach told The Center Square.

The U.S. PIRG Education Fund report also points out that at least three toys have limited to no parental controls and have the capacity to record your child’s voice and collect other sensitive data via facial recognition.

“All three were willing to tell us where to find potentially dangerous objects in the house, such as plastic bags, matches, or knives,” she said. “It seems like dystopian science fiction decades ago is now reality.”

In the face of the changing landscape and rising concerns, Hengesbach is calling for immediate action.

“The two main things that we’d like to see are more oversight in general and more research, so we can see exactly how these toys interact with kids, really just identify what the harms might be, and have a lot more transparency from companies around how these toys are designed,” she said. “What are they capable of, and what might the potential risks or harms be? I just really want us to take this opportunity to really think through what we’re doing instead of rushing a toy to market.”

As for the here and now, Hengesbach stressed parents would be wise to be thoughtful about their purchases.

“We just have a big open question of what are the long-term impacts of these products on young kids, especially when it comes to their social development,” she said. “The fact is that we just really won’t know what the long-term impacts of AI friends and companion toys might be until the first generation playing with them grows up. For now, I think it’s just really important that parents understand that these AI toys are out there; they’re very new and they’re basically unregulated.”

Since the release of the report, Hengesbach said one AI toymaker temporarily suspended sales of all its products to conduct a safety audit.

This year’s 40th Trouble in Toyland report also focuses on toys that contain toxins, counterfeit toys that haven’t been tested for safety, recalled toys and toys that contain button cell batteries or high-powered magnets, both of which can be deadly if swallowed.
