
Artificial Intelligence

Everyone is freaking out over DeepSeek. Here’s why


From The Deep View

$600 billion collapse

Volatility is kind of a given when it comes to Wall Street’s tech sector. It doesn’t take much to send things soaring; it likewise doesn’t take much to set off a downward spiral.
After months of soaring, Monday marked the possible beginning of a spiral, and a Chinese company seems to be at the center of it.
Alright, what’s going on: A week ago, Chinese tech firm DeepSeek launched R1, a so-called reasoning model that, according to DeepSeek, has reached technical parity with OpenAI’s o1 across several benchmarks. But, unlike its American competition, DeepSeek open-sourced R1 under an MIT license, making it significantly cheaper and more accessible than any of the closed models coming from U.S. tech giants.
  • But the real punchline here doesn’t have to do with R1 at all, but with a previous language model — called V3 — that DeepSeek released in December. DeepSeek was reportedly able to train V3 using a small collection of older Nvidia chips (about 2,000 H800s) at a cost of about $5.6 million.
  • Still, training is only one of many costs tied to AI development and deployment; while the costs of researching, developing, training and operating both R1 and V3 remain either unknown or unconfirmed, DeepSeek’s apparent ability to reach technical parity at a far lower cost, without state-of-the-art GPU chips or massive GPU clusters, has major implications for America’s now-tenuous position in AI leadership. (Though DeepSeek says R1 is open-source, the company did not release its training data.)
Since the release of R1, DeepSeek has become the top free app in Apple’s App Store, bumping ChatGPT to the number two slot. In the midst of its spiking popularity, DeepSeek restricted new sign-ups due to large-scale cyberattacks against its servers. And, as Salesforce CEO Marc Benioff noted, “no Nvidia supercomputers or $100M needed,” a point that the market heard loud and clear.
What happened: Led by Nvidia, a series of tech and chip stocks, in addition to the three major stock indices, fell hard in pre-market trading early Monday morning. All told, $1.1 trillion of U.S. market cap was erased within a half hour of the opening bell.
  • Performance didn’t get better throughout the day. Nvidia closed Monday down 17%, erasing some $600 billion in market capitalization, a Wall Street record. TSMC was down 14%, Arm was down 11%, Broadcom was down 17%, Google was down 4% and Microsoft was down 2%. The S&P fell 1.4% and the Nasdaq fell 3.3%. An Nvidia spokesperson called R1 an “excellent AI advancement.”
  • This is all going into a week of Big Tech earnings, where Microsoft and Meta will be held to account for the billions of dollars ($80 billion and $65 billion, respectively) they plan to spend on AI infrastructure in 2025, a cost that Wall Street no longer seems to feel quite so good about.
It’s hard to miss the political tensions underlying all of this. The tail end of former President Joe Biden’s time in office was marked in part by an increasingly tense trade war with China, in which both countries banned exports of materials needed to build advanced AI chips. And with President Trump hell-bent on maintaining American leadership in AI, Chinese companies seem to be turning those hardware restrictions into motivation for innovation that challenges the American lead, a point they seem keen to drive home.
R1, for instance, was announced at around the same time as OpenAI’s $500 billion Project Stargate, two starkly divergent approaches to AI development.
What’s happening here is that the market has finally come around to the idea that maybe the cost of AI development (hundreds of billions of dollars annually) is too high, a recognition “that the winners in AI will be the most innovative companies, not just those with the most GPUs,” according to Writer CTO Waseem Alshikh. “Brute-forcing AI with GPUs is no longer a viable strategy.”
Wedbush analyst Dan Ives, however, thinks this is just a good time to buy into Nvidia — Nvidia and the rest are building infrastructure that, he argues, China will not be able to compete with in the long run. “Launching a competitive LLM model for consumer use cases is one thing,” Ives wrote. “Launching broader AI infrastructure is a whole other ballgame.”
“I view cost reduction as a good thing. I’m of the belief that if you’re freeing up compute capacity, it likely gets absorbed — we’re going to need innovations like this,” Bernstein semiconductor analyst Stacy Rasgon told Yahoo Finance. “I understand why all the panic is going on. I don’t think DeepSeek is doomsday for AI infrastructure.”
Somewhat relatedly, Perplexity has already added DeepSeek’s R1 model to its AI search engine. And DeepSeek on Monday launched another model, one capable of competitive image generation.
Last week, I said that R1 should be enough to make OpenAI a little nervous. This anxiety spread way quicker than I anticipated; DeepSeek spent Monday dominating headlines at every publication I came across, setting off a debate and panic that has spread far beyond the tech and AI community.
Some are concerned about the national security implications of China’s AI capabilities. Some are concerned about the AI trade. Granted, there are more unknowns here than knowns; we do not know the details of DeepSeek’s costs or technical setup (and the costs are likely way higher than they seem). But this does read like a turning point in the AI race.
In January, we talked about reversion to the mean. Right now, it’s too early to tell how long-term the market impacts of DeepSeek will be. But, if Nvidia and the rest fall hard and stay down — or drop lower — through earnings season, one might argue that the bubble has begun to burst. As a part of this, watch model pricing closely; OpenAI may well be forced to bring down the costs of its models to remain competitive.
At the very least, DeepSeek appears to be evidence that scaling is, one, not a law and, two, not the only (or best) way to develop more advanced AI models, something that rains heavily on OpenAI and co.’s parade, since it runs contrary to everything OpenAI has been saying for months. Funnily enough, it actually seems like good news for the science of AI, possibly lighting a path toward systems that are less resource-intensive (which is much needed!).
It’s yet another example of the science and the business of AI not being on the same page.




Google denies scanning users’ email and attachments with its AI software


From LifeSiteNews

By Charles Richards

Google claims that multiple media reports are misleading and that nothing has changed with its service.

Tech giant Google is claiming that reports released earlier this week by multiple major media outlets are false and that it is not using emails or their attachments to train its new Gemini AI software.

Fox News, Breitbart, and other outlets published stories this week instructing readers on how to “stop Google AI from scanning your Gmail.”

“Google shared a new update on Nov. 5, confirming that Gemini Deep Research can now use context from your Gmail, Drive and Chat,” Fox reported. “This allows the AI to pull information from your messages, attachments and stored files to support your research.”

Breitbart likewise said that “Google has quietly started accessing Gmail users’ private emails and attachments to train its AI models, requiring manual opt-out to avoid participation.”

Breitbart pointed to a press release issued by Malwarebytes that said the company made the change without users knowing.

After the backlash, Google issued a response.

“These reports are misleading – we have not changed anyone’s settings. Gmail Smart Features have existed for many years, and we do not use your Gmail content for training our Gemini AI model. Lastly, we are always transparent and clear if we make changes to our terms of service and policies,” a company spokesman told ZDNET reporter Lance Whitney.

Malwarebytes has since updated its blog post to now say they “contributed to a perfect storm of misunderstanding” in their initial reporting, adding that their claim “doesn’t appear to be” true.

But the blog has also admitted that Google “does scan email content to power its own ‘smart features,’ such as spam filtering, categorization, and writing suggestions. But this is part of how Gmail normally works and isn’t the same as training Google’s generative AI models.”

“I think the most alarming thing that we saw was the regular organized stream of communication between the FBI, the Department of Homeland Security, and the largest tech companies in the country,” journalist Matt Taibbi told the U.S. Congress in December 2023 during a hearing focused on how Twitter was working hand in glove with federal agencies to censor users and feed the government information.

If you use Google and would like to turn off these “smart features,” the Malwarebytes blog walks through the process with images. Otherwise, you can follow these five steps, courtesy of Unilad Tech.

  • Open Gmail on desktop and click the cog icon in the top right to open the settings
  • Select the ‘Smart features’ setting in the ‘General’ section
  • Turn off ‘Turn on smart features in Gmail, Chat, and Meet’
  • Find the Google Workspace smart features section and opt to manage the smart feature settings
  • Switch off ‘Smart features in Google Workspace’ and ‘Smart features in other Google products’

On November 11, a class action lawsuit was filed against Google in the U.S. District Court for the Northern District of California. The case alleges that Google violated the state’s Invasion of Privacy Act by discreetly activating Gemini AI to scan Gmail, Google Chat, and Google Meet messages in October 2025 without notifying users or seeking their consent.



Lawsuit Claims Google Secretly Used Gemini AI to Scan Private Gmail and Chat Data

Whether the claims are true or not, privacy in Google’s universe has long been less a right than a nostalgic illusion.

When Google flipped a digital switch in October 2025, few users noticed anything unusual.
Gmail loaded as usual, Chat messages zipped across screens, and Meet calls continued without interruption.
Yet, according to a new class action lawsuit, something significant had changed beneath the surface.
We obtained a copy of the lawsuit.
Plaintiffs claim that Google silently activated its artificial intelligence system, Gemini, across its communication platforms, turning private conversations into raw material for machine analysis.
The lawsuit, filed by Thomas Thele and Melo Porter, describes a scenario that reads like a breach of trust.
It accuses Google of enabling Gemini to “access and exploit the entire recorded history of its users’ private communications, including literally every email and attachment sent and received.”
The filing argues that the company’s conduct “violates its users’ reasonable expectations of privacy.”
Until early October, Gemini’s data processing was supposedly available only to those who opted in.
Then, the plaintiffs claim, Google “turned it on for everyone by default,” allowing the system to mine the contents of emails, attachments, and conversations across Gmail, Chat, and Meet.
The complaint points to a particular line in Google’s settings, “When you turn this setting on, you agree,” as misleading, since the feature “had already been switched on.”
This, according to the filing, represents a deliberate misdirection designed to create the illusion of consent where none existed.
There is a certain irony woven through the outrage. For all the noise about privacy, most users long ago accepted the quiet trade that powers Google’s empire.
They search, share, and store their digital lives inside Google’s ecosystem, knowing the company thrives on data.
The lawsuit may sound shocking, but for many, it simply exposes what has been implicit all along: if you live in Google’s world, privacy has already been priced into the convenience.
Thele warns that Gemini’s access could expose “financial information and records, employment information and records, religious affiliations and activities, political affiliations and activities, medical care and records, the identities of his family, friends, and other contacts, social habits and activities, eating habits, shopping habits, exercise habits, [and] the extent to which he is involved in the activities of his children.”
In other words, the system’s reach, if the allegations prove true, could extend into nearly every aspect of a user’s personal life.
The plaintiffs argue that Gemini’s analytical capabilities allow Google to “cross-reference and conduct unlimited analysis toward unmerited, improper, and monetizable insights” about users’ private relationships and behaviors.
The complaint brands the company’s actions as “deceptive and unethical,” claiming Google “surreptitiously turned on this AI tracking ‘feature’ without informing or obtaining the consent of Plaintiffs and Class Members.” Such conduct, it says, is “highly offensive” and “defies social norms.”
The case invokes a formidable set of statutes, including the California Invasion of Privacy Act, the California Computer Data Access and Fraud Act, the Stored Communications Act, and California’s constitutional right to privacy.
Google has yet to comment on the filing.
