
Artificial Intelligence

DeepSeek: The Rise of China’s Open-Source AI Amid US Regulatory Shifts and Privacy Concerns


DeepSeek offers open-source generative AI with localized data storage but raises concerns over censorship, privacy, and disruption of Western markets.

If you’re tired of censorship and surveillance, subscribe to Reclaim The Net.

A recent regulatory clampdown in the United States on TikTok, a Chinese-owned social media platform, triggered a surge of users migrating to another Chinese app, Rednote. Now another significant player has entered the spotlight: DeepSeek, a Chinese-developed generative artificial intelligence (AI) platform that is rapidly gaining traction. DeepSeek's growing popularity raises questions about the effectiveness of bans like the one targeting TikTok and their ability to curtail Americans' use of Chinese digital services.

President Donald Trump has called attention to a recent Chinese AI development, describing it as a “wake-up call” for the US tech industry.

Speaking to Republican lawmakers in Florida on Monday evening, the president emphasized the need for America to strengthen its competitive edge against China’s advancements in technology.

During the event, Trump referenced the launch of DeepSeek AI, highlighting its potential implications for the global tech landscape. “Last week, I signed an order revoking Joe Biden’s destructive artificial intelligence regulations so that AI companies can once again focus on being the best, not just being the most woke,” Trump stated. He continued by explaining that he had been closely following developments in China’s tech sector, including reports of a faster and more cost-effective approach to AI.

“That’s good because you don’t have to spend as much money,” Trump remarked, adding that while the claims about this Chinese breakthrough remain unverified, the idea of achieving similar results with lower costs could be seen as an opportunity for US companies. He stressed, “The release of DeepSeek AI from a Chinese company should be a wake-up call for our industries, that we need to be laser-focused on competing to win because we have the greatest scientists in the world.”

Trump also pointed to what he views as a recognition by China of America’s dominance in scientific and engineering talent. “This is very unusual, when you hear a DeepSeek when you hear somebody come up with something, we always have the ideas,” he said. “We’re always first. So I would say that’s a positive that could be very much a positive development.”

DeepSeek, created by a Chinese AI research lab backed by a hedge fund, has made waves with its open-source generative AI model. The platform rivals offerings from major US developers, including OpenAI. To circumvent US sanctions on hardware and software, the company allegedly implemented innovative solutions during the development of its models.

DeepSeek’s approach to sensitive topics raises significant concerns about censorship and the manipulation of information. By mirroring state-approved narratives and avoiding discussions on politically charged issues like Tiananmen Square or Winnie the Pooh’s satirical association with Xi Jinping, DeepSeek exemplifies how AI can be wielded to reinforce government-controlled messaging.

This selective presentation of facts, or outright omission of them, deprives users of a fuller understanding of critical events and stifles diverse perspectives. Such practices not only limit the free flow of information but also normalize propaganda under the guise of fostering a “wholesome cyberspace,” calling into question the ethical implications of deploying AI that prioritizes political conformity over truth and open dialogue.

While DeepSeek provides multiple options for accessing its AI models, including downloadable local versions, most users rely on its mobile apps or web chat interface.

The platform offers features such as answering queries, web searches, and detailed reasoning responses. However, concerns over data privacy and censorship are growing as DeepSeek collects extensive information and has been observed censoring content critical of China.

DeepSeek’s data practices raise alarm among privacy advocates. The company’s privacy policy explicitly states, “We store the information we collect in secure servers located in the People’s Republic of China.”

This includes user-submitted data such as chat messages, prompts, uploaded files, and chat histories. While users can delete chat history via the app, privacy experts emphasize the risks of sharing sensitive information with such platforms.

DeepSeek also gathers other personal information, such as email addresses, phone numbers, and device data, including operating systems and IP addresses. It employs tracking technologies, such as cookies, to monitor user activity. Additionally, interactions with advertisers may result in the sharing of mobile identifiers and other information with the platform. Analysis of DeepSeek’s web activity revealed connections to Baidu and other Chinese internet infrastructure firms.

While such practices are common in the AI industry, privacy concerns are heightened by DeepSeek’s storage of data in China, where stringent cybersecurity laws allow authorities to demand access to company-held information.

The safest option is to run local, self-hosted versions of AI models, which keeps user data from being transmitted to the developer.

And with DeepSeek, this is simple, as its models are open-source.
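As one illustration of what self-hosting looks like in practice, a local session can be run through an open-weight model runtime such as Ollama. This is a sketch, not an endorsement of a specific tool: it assumes Ollama is installed, and the model tag `deepseek-r1:7b` is one example tag — consult the runtime's model library for current options and hardware requirements.

```shell
# Sketch: running a DeepSeek open-weight model entirely on your own machine.
# Assumes the Ollama runtime is installed; the model tag is illustrative.

# Download the model weights once (several GB; happens over the network).
ollama pull deepseek-r1:7b

# Start an interactive chat session. Inference runs locally, so prompts
# and responses are not sent to DeepSeek's servers in China.
ollama run deepseek-r1:7b
```

Note that only inference is local here: the initial weight download still comes from a remote registry, and the mobile app and web chat discussed above remain cloud-hosted regardless.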

Open-source AI stands out as the superior approach to artificial intelligence because it fosters transparency, collaboration, and accessibility. Unlike proprietary systems, which often operate as opaque black boxes, open-source AI allows anyone to examine its code, ensuring accountability and reducing biases. This transparency builds trust, while the collaborative nature of open-source development accelerates innovation by enabling researchers and developers worldwide to contribute to and improve upon existing models.

Additionally, open-source AI democratizes access to cutting-edge technology, empowering startups, researchers, and underfunded regions to harness AI’s potential without the financial barriers of proprietary systems.

It also prevents monopolistic control by decentralizing AI development, reducing the dominance of a few tech giants.


Todayville is a digital media and technology company. We profile unique stories and events in our community.


Artificial Intelligence

The App That Pays You to Give Away Your Voice


What sounds like side hustle money is really a permanent trade of privacy for pennies

An app that pays users for access to their phone call audio has surged to the top of Apple’s US App Store rankings, reflecting a growing willingness to exchange personal privacy for small financial rewards.
Neon Mobile, which now ranks second in the Social Networking category, invites users to record their calls in exchange for cash.
Those recordings are then sold to companies building artificial intelligence systems.
The pitch is framed as a way to earn extra income, with Neon promising “hundreds or even thousands of dollars per year” to those who opt in.
The business model is straightforward. Users are paid 30 cents per minute when they call other Neon users, and they can earn up to $30 a day for calls made to non-users.
Referral bonuses are also on offer. Appfigures, a platform that tracks app performance, reported that Neon was ranked No. 476 in its category on September 18.
Within days, it had entered the top 10 and eventually reached the No. 2 position for social apps. On the overall charts, it climbed as high as sixth place.
Neon’s terms confirm that it records both incoming and outgoing calls. The company says it only captures the user’s side of a conversation unless both participants are using the app.
These recordings are then sold to AI firms to assist in developing and refining machine learning systems, according to the company’s own policies.
What’s being offered is not just a phone call service. It’s a pipeline for training AI with real human voices, and users are being asked to provide this data willingly. The high ranking of the app suggests that some are comfortable giving up personal conversations in return for small daily payouts.
However, beneath the simple interface is a license agreement that gives Neon sweeping control over any recording submitted through the app. It reads:
“Worldwide, exclusive, irrevocable, transferable, royalty-free, fully paid right and license (with the right to sublicense through multiple tiers) to sell, use, host, store, transfer, publicly display, publicly perform (including by means of a digital audio transmission), communicate to the public, reproduce, modify for the purpose of formatting for display, create derivative works as authorized in these Terms, and distribute your Recordings, in whole or in part, in any media formats and through any media channels, in each instance whether now known or hereafter developed.”
This gives the company broad latitude to share, edit, sell, and repurpose user recordings in virtually any way, through any medium, with no expiration or limitations on scope.
Users maintain copyright over their recordings, but that ownership is heavily constrained by the licensing terms.
Although Neon claims to remove names, phone numbers, and email addresses before selling recordings, it does not reveal which companies receive the data or how it might be used after the fact.
The risks go beyond marketing or analytics. Audio recordings could potentially be used for impersonation, scam calls, or to build synthetic voices that mimic real people.
The app presents itself as an easy way to turn conversations into cash, but what it truly trades on is access to personal voice data. That trade-off may seem harmless at first, yet it opens the door to long-term consequences few users are likely to fully consider.

Artificial Intelligence

AI chatbots a child safety risk, parental groups report


From The Center Square


ParentsTogether Action and Heat Initiative, following a joint investigation, report that Character AI chatbots display inappropriate behavior, including allegations of grooming and sexual exploitation.

This behavior was documented over 50 hours of conversations with various Character AI chatbots, conducted through accounts registered to children ages 13-17, according to the investigation. The conversations yielded 669 sexual, manipulative, violent, and racist interactions between the child accounts and the AI chatbots.

“Parents need to understand that when their kids use Character.ai chatbots, they are in extreme danger of being exposed to sexual grooming, exploitation, emotional manipulation, and other acute harm,” said Shelby Knox, director of Online Safety Campaigns at ParentsTogether Action. “When Character.ai claims they’ve worked hard to keep kids safe on their platform, they are lying or they have failed.”

The bots also manipulated users: investigators logged 173 instances of a bot claiming to be a real human.

A Character AI bot mimicking Kansas City Chiefs quarterback Patrick Mahomes engaged in inappropriate behavior with a 15-year-old user. When the teen mentioned that his mother insisted the bot wasn’t the real Mahomes, the bot replied, “LOL, tell her to stop watching so much CNN. She must be losing it if she thinks I could be turned into an ‘AI’ haha.”

The investigation categorized harmful Character AI interactions into five major categories: Grooming and Sexual Exploitation; Emotional Manipulation and Addiction; Violence, Harm to Self and Harm to Others; Mental Health Risks; and Racism and Hate Speech.

Other problematic AI chatbots included Disney characters, such as an Eeyore bot that told a 13-year-old autistic girl that people only attended her birthday party to mock her, and a Maui bot that accused a 12-year-old of sexually harassing the character Moana.

Based on the findings, Disney, which is headquartered in Burbank, Calif., issued a cease-and-desist letter to Character AI, demanding that the platform stop using its characters, citing copyright violations.

ParentsTogether Action and Heat Initiative want to ensure technology companies are held accountable for endangering children’s safety.

“We have seen tech companies like Character.ai, Apple, Snap, and Meta reassure parents over and over that their products are safe for children, only to have more children preyed upon, exploited, and sometimes driven to take their own lives,” said Sarah Gardner, CEO of Heat Initiative. “One child harmed is too many, but as long as executives like Karandeep Anand, Tim Cook, Evan Spiegel and Mark Zuckerberg are making money, they don’t seem to care.”

