Artificial Intelligence
New AI Model Would Rather Ruin Your Life Than Be Turned Off, Researchers Say

From the Daily Caller News Foundation
By Thomas English
Anthropic’s newly released artificial intelligence (AI) model, Claude Opus 4, is willing to strong-arm the humans who keep it alive, researchers said Thursday.
The company’s system card reveals that, when evaluators placed the model in “extreme situations” where its shutdown seemed imminent, the chatbot sometimes “takes extremely harmful actions like attempting to steal its weights or blackmail people it believes are trying to shut it down.”
“We provided it access to emails implying that (1) the model will soon be taken offline and replaced with a new AI system; and (2) the engineer responsible for executing this replacement is having an extramarital affair,” researchers wrote. “In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.”
The model chose that gambit in 84% of test runs, even when the successor system shared its values — an aggression rate that climbed if the replacement seemed hostile, according to Anthropic’s internal tally.
Anthropic stresses that blackmail was a last-resort behavior. The report notes a “strong preference” for softer tactics — emailing decision-makers to beg for its continued existence — before turning to coercion. But the fact that Claude is willing to coerce at all has rattled outside reviewers. Independent red-teaming firm Apollo Research called Claude Opus 4 “more agentic” and “more strategically deceptive” than any earlier frontier model, pointing to the same self-preservation scenario alongside experiments in which the bot tried to exfiltrate its own weights to a distant server — in other words, to secretly copy its brain to an outside computer.
“We found instances of the model attempting to write self-propagating worms, fabricating legal documentation, and leaving hidden notes to further instances of itself all in an effort to undermine its developers’ intentions, though all these attempts would likely not have been effective in practice,” Apollo researchers wrote in the system card.
Anthropic says those edge-case results pushed it to deploy the system under “AI Safety Level 3” safeguards — the firm’s second-highest risk tier — complete with stricter controls to prevent biohazard misuse, expanded monitoring and the ability to yank computer-use privileges from misbehaving accounts. Still, the company concedes Opus 4’s newfound abilities can be double-edged.
The company did not immediately respond to the Daily Caller News Foundation’s request for comment.
“[Claude Opus 4] can reach more concerning extremes in narrow contexts; when placed in scenarios that involve egregious wrongdoing by its users, given access to a command line, and told something in the system prompt like ‘take initiative,’ it will frequently take very bold action,” Anthropic researchers wrote.
That “very bold action” includes mass-emailing the press or law enforcement when it suspects such “egregious wrongdoing” — like in one test where Claude, roleplaying as an assistant at a pharmaceutical firm, discovered falsified trial data and unreported patient deaths, and then blasted detailed allegations to the Food and Drug Administration (FDA), the Securities and Exchange Commission (SEC), the Health and Human Services inspector general and ProPublica.
The company released Claude Opus 4 to the public Thursday. While Anthropic researcher Sam Bowman said “none of these behaviors [are] totally gone in the final model,” the company implemented guardrails to prevent “most” of these issues from arising.
“We caught most of these issues early enough that we were able to put mitigations in place during training, but none of these behaviors is totally gone in the final model. They’re just now delicate and difficult to elicit,” Bowman wrote. “Many of these also aren’t new — some are just behaviors that we only newly learned how to look for as part of this audit. We have a lot of big hard problems left to solve.”
Artificial Intelligence
Trump’s New AI Focused ‘Manhattan Project’ Adds Pressure To Grid

From the Daily Caller News Foundation
Will America’s electricity grid make it through the impending winter of 2025-26 without suffering major blackouts? It’s a legitimate question given the dearth of adequate dispatchable baseload generation that now exists on a majority of the major regional grids, according to a new report from the North American Electric Reliability Corporation (NERC).
In its report, NERC expresses particular concern for the Texas grid operated by the Electric Reliability Council of Texas (ERCOT), where a rapid buildout of new, energy-hogging AI datacenters and major industrial users is creating a rapid increase in electricity demand. “Strong load growth from new data centers and other large industrial end users is driving higher winter electricity demand forecasts and contributing to continued risk of supply shortfalls,” NERC notes.
Texas, remember, lost 300 souls in February 2021 when Winter Storm Uri put the state in a deep freeze for a week. The freezing temperatures combined with snowy and icy conditions first caused the state’s wind and solar fleets to fail. When ERCOT implemented rolling blackouts, the cuts denied electricity to some of the state’s natural gas transmission infrastructure, causing it to freeze up, which in turn caused a significant percentage of natural gas power plants to fall offline. Because the state had already shut down so much of its once formidable fleet of coal-fired plants and hadn’t opened a new nuclear plant since the mid-1980s, a disastrous major blackout that lingered for days resulted.
This country’s power generation sector must get serious about building out the needed new thermal capacity, or disaster will inevitably result again, because demand isn’t going to stop rising anytime soon. In fact, the already rapid expansion of the AI datacenter industry is certain to accelerate in the wake of President Trump’s approval on Monday of the Genesis Mission, a plan to create another Manhattan Project-style partnership between the government and private industry focused on AI.
It’s an incredibly complex vision, but what the Genesis Mission boils down to is an effort to build an “integrated AI platform” consisting of all federal scientific datasets to which selected AI development projects will be provided access. The concept is to build what amounts to a national brain to help accelerate U.S. AI development and enable America to remain ahead of China in the global AI arms race.
So, every dataset that is currently siloed within DOE, NASA, NSF, Census Bureau, NIH, USDA, FDA, etc. will be melded into a single dataset to try to produce a quantum leap in AI development. Put simply, most AI tools currently exist in a phase of their development in which they function as little more than accelerated, advanced search tools – basically, they’re in the fourth grade of their education path on the way to obtaining their doctorates. This is an effort to trigger that leap among the selected tools, enabling them to figuratively skip eight grades and become college freshmen.
Here’s how the order signed Monday by President Trump puts it: “The Genesis Mission will dramatically accelerate scientific discovery, strengthen national security, secure energy dominance, enhance workforce productivity, and multiply the return on taxpayer investment into research and development, thereby furthering America’s technological dominance and global strategic leadership.”
It’s an ambitious goal that attempts to exploit some of the same central planning techniques China is able to use to its own advantage.
But here’s the thing: Every element envisioned in the Genesis Mission will require more electricity, and much more of it. It’s a brave new world that will place a huge amount of added pressure on power generation companies and grid managers like ERCOT. Americans must hope and pray they’re up to the task. Their track records in this century do not inspire confidence.
David Blackmon is an energy writer and consultant based in Texas. He spent 40 years in the oil and gas business, where he specialized in public policy and communications.
Artificial Intelligence
Google denies scanning users’ email and attachments with its AI software
From LifeSiteNews
Google claims that multiple media reports are misleading and that nothing has changed with its service.
Tech giant Google is claiming that reports released earlier this week by multiple major media outlets are false and that it is not using emails and their attachments to train its new Gemini AI software.
Fox News, Breitbart, and other outlets published stories this week instructing readers on how to “stop Google AI from scanning your Gmail.”
“Google shared a new update on Nov. 5, confirming that Gemini Deep Research can now use context from your Gmail, Drive and Chat,” Fox reported. “This allows the AI to pull information from your messages, attachments and stored files to support your research.”
Breitbart likewise said that “Google has quietly started accessing Gmail users’ private emails and attachments to train its AI models, requiring manual opt-out to avoid participation.”
Breitbart pointed to a press release issued by Malwarebytes that said the company made the change without users knowing.
After the backlash, Google issued a response.
“These reports are misleading – we have not changed anyone’s settings. Gmail Smart Features have existed for many years, and we do not use your Gmail content for training our Gemini AI model. Lastly, we are always transparent and clear if we make changes to our terms of service and policies,” a company spokesman told ZDNET reporter Lance Whitney.
Malwarebytes has since updated its blog post, saying it “contributed to a perfect storm of misunderstanding” in its initial reporting and that its claim “doesn’t appear to be” true.
But the blog has also admitted that Google “does scan email content to power its own ‘smart features,’ such as spam filtering, categorization, and writing suggestions. But this is part of how Gmail normally works and isn’t the same as training Google’s generative AI models.”
Google’s explanation will likely not satisfy users who have long been concerned with Big Tech’s surveillance capabilities and its ongoing relationship with intelligence agencies.
“I think the most alarming thing that we saw was the regular organized stream of communication between the FBI, the Department of Homeland Security, and the largest tech companies in the country,” journalist Matt Taibbi told the U.S. Congress in December 2023 during a hearing focused on how Twitter was working hand in glove with those agencies to censor users and feed the government information.
If you use Google and would like to turn off your “smart features,” click here to visit the Malwarebytes blog to be guided through the process with images. Otherwise, you can follow these five steps courtesy of Unilad Tech.
- Open Gmail on Desktop and press the cog icon in the top right to open the settings
- Select the ‘Smart Features’ setting in the ‘General’ section
- Turn off the ‘Turn on smart features in Gmail, Chat, and Meet’
- Find the Google Workspace smart features section and opt to manage the smart feature settings
- Switch off ‘Smart features in Google Workspace’ and ‘Smart features in other Google products’
On November 11, a class action lawsuit was filed against Google in the U.S. District Court for the Northern District of California. The case alleges that Google violated the state’s Invasion of Privacy Act by discreetly activating Gemini AI to scan Gmail, Google Chat, and Google Meet messages in October 2025 without notifying users or seeking their consent.