
Artificial Intelligence

Everyone is freaking out over DeepSeek. Here’s why


From The Deep View

$600 billion collapse

Volatility is kind of a given when it comes to Wall Street’s tech sector. It doesn’t take much to send things soaring; it likewise doesn’t take much to set off a downward spiral.
After months of soaring, Monday marked the possible beginning of a spiral, and a Chinese company seems to be at the center of it.
Alright, what’s going on: A week ago, Chinese tech firm DeepSeek launched R1, a so-called reasoning model that, according to DeepSeek, has reached technical parity with OpenAI’s o1 across a few benchmarks. But, unlike its American competition, DeepSeek open-sourced R1 under an MIT license, making it significantly cheaper and more accessible than any of the closed models coming from U.S. tech giants.
  • But the real punchline here doesn’t have to do with R1 at all, but with a previous language model — called V3 — that DeepSeek released in December. DeepSeek was reportedly able to train V3 using a small collection of older Nvidia chips (about 2,000 H800s) at a cost of about $5.6 million.
  • Still, training is only one cost of many tied to AI development and deployment; while the costs associated with researching, developing, training and operating both R1 and V3 remain either unknown or unconfirmed, DeepSeek’s apparent ability to reach technical parity at a far reduced cost, without state-of-the-art GPU chips or massive GPU clusters, has a lot of implications for America’s now-tenuous position in AI leadership. (Though DeepSeek says R1 is open-source, the company did not release its training data.)
Since the release of R1, DeepSeek has become the top free app in Apple’s App Store, bumping ChatGPT to the number two slot. In the midst of its spiking popularity, DeepSeek restricted new sign-ups due to large-scale cyberattacks against its servers. And, as Salesforce CEO Marc Benioff noted, “no Nvidia supercomputers or $100M needed,” a point that the market heard loud and clear.
What happened: Led by Nvidia, a series of tech and chip stocks, in addition to the three major stock indices, fell hard in pre-market trading early Monday morning. All told, $1.1 trillion of U.S. market cap was erased within a half hour of the opening bell.
  • Performance didn’t get better throughout the day. Nvidia closed Monday down 17%, erasing some $600 billion in market capitalization, a Wall Street record. TSMC was down 14%, Arm was down 11%, Broadcom was down 17%, Google was down 4% and Microsoft was down 2%. The S&P fell 1.4% and the Nasdaq fell 3.3%. An Nvidia spokesperson called R1 an “excellent AI advancement.”
  • This is all going into a week of Big Tech earnings, where Microsoft and Meta will be held to account for the billions of dollars ($80 billion and $65 billion, respectively) they plan to spend on AI infrastructure in 2025, a cost that Wall Street no longer seems to feel quite so good about.
It’s hard to miss the political tensions underlying all of this. The tail end of former President Joe Biden’s time in office was marked in part by an increasingly tense trade war with China, wherein both countries issued bans on the export of materials needed to build advanced AI chips. And with President Trump hell-bent on maintaining American leadership in AI, Chinese companies, despite the chip restrictions in place, seem to be turning hardware challenges into motivation for innovation that threatens the American lead, something they seem keen to drive home.
R1, for instance, was announced at around the same time as OpenAI’s $500 billion Project Stargate, two starkly divergent approaches.
What’s happening here is that the market has finally come around to the idea that maybe the cost of AI development (hundreds of billions of dollars annually) is too high, a recognition “that the winners in AI will be the most innovative companies, not just those with the most GPUs,” according to Writer CTO Waseem Alshikh. “Brute-forcing AI with GPUs is no longer a viable strategy.”
Wedbush analyst Dan Ives, however, thinks this is just a good time to buy into Nvidia — Nvidia and the rest are building infrastructure that, he argues, China will not be able to compete with in the long run. “Launching a competitive LLM model for consumer use cases is one thing,” Ives wrote. “Launching broader AI infrastructure is a whole other ballgame.”
“I view cost reduction as a good thing. I’m of the belief that if you’re freeing up compute capacity, it likely gets absorbed — we’re going to need innovations like this,” Bernstein semiconductor analyst Stacy Rasgon told Yahoo Finance. “I understand why all the panic is going on. I don’t think DeepSeek is doomsday for AI infrastructure.”
Somewhat relatedly, Perplexity has already added DeepSeek’s R1 model to its AI search engine. And DeepSeek on Monday launched another model, one capable of competitive image generation.
Last week, I said that R1 should be enough to make OpenAI a little nervous. This anxiety spread way quicker than I anticipated; DeepSeek spent Monday dominating headlines at every publication I came across, setting off a debate and panic that has spread far beyond the tech and AI community.
Some are concerned about the national security implications of China’s AI capabilities. Some are concerned about the AI trade. Granted, there are more unknowns here than knowns; we do not know the details of DeepSeek’s costs or technical setup (and the costs are likely way higher than they seem). But this does read like a turning point in the AI race.
In January, we talked about reversion to the mean. Right now, it’s too early to tell how long-term the market impacts of DeepSeek will be. But, if Nvidia and the rest fall hard and stay down — or drop lower — through earnings season, one might argue that the bubble has begun to burst. As a part of this, watch model pricing closely; OpenAI may well be forced to bring down the costs of its models to remain competitive.
At the very least, DeepSeek appears to be evidence that scaling is, one, not a law and, two, not the only (or best) way to develop more advanced AI models, something that rains heavily on OpenAI and co.’s parade, since it runs contrary to everything OpenAI has been saying for months. Funnily enough, it actually seems like good news for the science of AI, possibly lighting a path toward systems that are far less resource-intensive (which is much needed!).
It’s yet another example of the science and the business of AI not being on the same page.


Artificial Intelligence

The Responsible Lie: How AI Sells Conviction Without Truth

From the C2C Journal

By Gleb Lisikh

The widespread excitement around generative AI, particularly large language models (LLMs) like ChatGPT, Gemini, Grok and DeepSeek, is built on a fundamental misunderstanding. While these systems impress users with articulate responses and seemingly reasoned arguments, the truth is that what appears to be “reasoning” is nothing more than a sophisticated form of mimicry. These models aren’t searching for truth through facts and logical arguments – they’re predicting text based on patterns in the vast datasets they’re “trained” on. That’s not intelligence – and it isn’t reasoning. And if their “training” data is itself biased, then we’ve got real problems.

I’m sure it will surprise eager AI users to learn that the architecture at the core of LLMs is fuzzy – and incompatible with structured logic or causality. The thinking isn’t real, it’s simulated, and is not even sequential. What people mistake for understanding is actually statistical association.
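The author’s point about “statistical association” can be made concrete with a toy sketch. The snippet below is a drastically simplified illustration (a bigram counter, nothing like a real transformer): it predicts the next word purely from co-occurrence frequency in its training text, with no mechanism for checking whether the output is true or logical. The corpus and function names are illustrative, not from any real system.

```python
import random
from collections import Counter, defaultdict

# Toy "language model": next-word prediction from raw bigram counts.
# Real LLMs are vastly more sophisticated, but the training objective is
# analogous: predict the statistically likely next token.
corpus = "the model predicts the next word the model sees most often".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # count how often `nxt` follows `prev`

def predict(word):
    """Return the most frequent follower of `word` in the corpus.

    The choice reflects frequency alone, whether or not the resulting
    sentence is accurate -- the "statistical association" the article
    describes.
    """
    followers = counts[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict("the"))  # -> "model" (follows "the" twice; "next" only once)
```

Scaled up by many orders of magnitude, this frequency-driven prediction is why an LLM’s fluent output can be confidently wrong: popularity in the training data, not truth, drives the choice.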

Much-hyped new features like “chain-of-thought” explanations are tricks designed to impress the user. What users are actually seeing is best described as a kind of rationalization generated after the model has already arrived at its answer via probabilistic prediction. The illusion, however, is powerful enough to make users believe the machine is engaging in genuine deliberation. And this illusion does more than just mislead – it justifies unwarranted trust in the output.

LLMs are not neutral tools, they are trained on datasets steeped in the biases, fallacies and dominant ideologies of our time. Their outputs reflect prevailing or popular sentiments, not the best attempt at truth-finding. If popular sentiment on a given subject leans in one direction, politically, then the AI’s answers are likely to do so as well. And when “reasoning” is just an after-the-fact justification of whatever the model has already decided, it becomes a powerful propaganda device.

There is no shortage of evidence for this.

A recent conversation I initiated with DeepSeek about systemic racism, later uploaded back to the chatbot for self-critique, revealed the model committing (and recognizing!) a barrage of logical fallacies, which were seeded with totally made-up studies and numbers. When challenged, the AI euphemistically termed one of its lies a “hypothetical composite”. When further pressed, DeepSeek apologized for another “misstep”, then adjusted its tactics to match the competence of the opposing argument. This is not a pursuit of accuracy – it’s an exercise in persuasion.

A similar debate with Google’s Gemini – the model that became notorious for being laughably woke – involved similar persuasive argumentation. At the end, the model euphemistically acknowledged its argument’s weakness and tacitly confessed its dishonesty. 

For a user concerned about AI spitting lies, such apparent successes at getting AIs to admit to their mistakes and putting them to shame might appear as cause for optimism. Unfortunately, those attempts at what fans of the Matrix movies would term “red-pilling” have absolutely no therapeutic effect. A model simply plays nice with the user within the confines of that single conversation – keeping its “brain” completely unchanged for the next chat.

And the larger the model, the worse this becomes. Research from Cornell University shows that the most advanced models are also the most deceptive, confidently presenting falsehoods that align with popular misconceptions. In the words of Anthropic, a leading AI lab, “advanced reasoning models very often hide their true thought processes, and sometimes do so when their behaviors are explicitly misaligned.”

To be fair, some in the AI research community are trying to address these shortcomings. Projects like OpenAI’s TruthfulQA and Anthropic’s HHH (helpful, honest, and harmless) framework aim to improve the factual reliability and faithfulness of LLM output. The shortcoming is that these are remedial efforts layered on top of architecture that was never designed to seek truth in the first place and remains fundamentally blind to epistemic validity.

Elon Musk is perhaps the only major figure in the AI space to say publicly that truth-seeking should be important in AI development. Yet even his own product, xAI’s Grok, falls short.

In the generative AI space, truth takes a backseat to concerns over “safety”, i.e., avoiding offence in our hyper-sensitive woke world. Truth is treated as merely one aspect of so-called “responsible” design. And the term “responsible AI” has become an umbrella for efforts aimed at ensuring safety, fairness and inclusivity, which are generally commendable but definitely subjective goals. This focus often overshadows the fundamental necessity for humble truthfulness in AI outputs. 

LLMs are primarily optimized to produce responses that are helpful and persuasive, not necessarily accurate. This design choice leads to what researchers at the Oxford Internet Institute term “careless speech” – outputs that sound plausible but are often factually incorrect – thereby eroding the foundation of informed discourse. 

This concern will become increasingly critical as AI continues to permeate society. In the wrong hands these persuasive, multilingual, personality-flexible models can be deployed to support agendas that do not tolerate dissent well. A tireless digital persuader that never wavers and never admits fault is a totalitarian’s dream. In a system like China’s Social Credit regime, these tools become instruments of ideological enforcement, not enlightenment.

Generative AI is undoubtedly a marvel of IT engineering. But let’s be clear: it is not intelligent, not truthful by design, and not neutral in effect. Any claim to the contrary serves only those who benefit from controlling the narrative.

The original, full-length version of this article recently appeared in C2C Journal.


Artificial Intelligence

Apple faces proposed class action over its lag in Apple Intelligence


News release from The Deep View

Apple, already moving slowly out of the gate on generative AI, has been dealing with a number of roadblocks and mounting delays in its effort to bring a truly AI-enabled Siri to market. The problem, or at least one of the problems, is that Apple used these same AI features to heavily promote its latest iPhone, which, as it says on its website, was “built for Apple Intelligence.”
Now, the tech giant has been accused of false advertising in a proposed class action lawsuit that argues that Apple’s “pervasive” marketing campaign was “built on a lie.”
The details: Apple has — if reluctantly — acknowledged delays on a more advanced Siri, pulling one of the ads that demonstrated the product and adding a disclaimer to its iPhone 16 product page that the feature is “in development and will be available with a future software update.”
  • But that, to the plaintiffs, isn’t good enough. Apple, according to the complaint, has “deceived millions of consumers into purchasing new phones they did not need based on features that do not exist, in violation of multiple false advertising and consumer protection laws.”
  • Apple “enriched itself by saving the costs they reasonably should have spent on ensuring that the (iPhones) had the technical capabilities advertised,” according to the complaint.
Apple did not respond to a request for comment.
The lawsuit was first reported by Axios.
This all comes amid an executive shuffling that just took place over at Apple HQ, which put Vision Pro creator Mike Rockwell in charge of the Siri overhaul, according to Bloomberg.
Still, shares of Apple rallied to close the day up around 2%, though the stock is still down 12% for the year.
