
Artificial Intelligence

Garbage in = Garbage Out…The issues with censorship and AI.


What happens when you shift from education in the basics – Math, English, Science, Social Studies and health…and shift that to progressive ideologies supporting DEI, Transgenderism, Climate Alarmism and Social Justice?

We’re seeing it.

A world full of neurotic nut cases, screeching at the top of their fingertips on the internet and from soapboxes in public.

Among the most confused seem to be these people:

“Queers for Palestine” and the Death of Irony — Queer Majority

Whose fundamental ideologies are in direct conflict with those they are supporting.

But it really doesn’t end there, does it?

Because along with this, we now have generations who’ve been confused into thinking that this is a healthy lifestyle…

My Big Fat Fabulous Life: Whitney Way Thore Talks About PCOS

Censorship of science has led us to believe that:

Eggs are bad – when they are in fact close to a perfect food;

Meat Causes Cancer – when in fact it is a foundation of fitness, nutrition and energy for our bodies;

Margarine is preferred over Butter – when it turns out that hydrogenated seed oils are ten times more likely to cause heart disease than standard fats…

The list goes on.

Promoting misinformation and censoring true information isn’t a new concept…but with the onset and increasing use of AI technologies, it will become an even larger detriment to the fabric of our education and understanding.

Throughout the last 4 years, we’ve never seen more rampant attacks on our Freedom of Expression and on the sharing of knowledge…all to push lies, from inflating the risks of COVID through to lies about the safety and efficacy of the COVID vaccines.

You could be removed, suspended and banned for wrong-speak…and if you’re reading this now, it’s because this is exactly what happened to me. YakkStack was started out of a need to preserve information that kept being removed from other social media platforms and video streaming services.

After about a year of posting, and having seen accounts completely wiped out with no chance of recovery, the suggestion and guidance of others had me start an account here…writing in longer form and posting links along with supporting information.

It’s been an amazing experience, building up a lot of support and spanning the globe with thousands of subscribers and millions of reads/impressions.

With my largest hurdles being…

Trying to keep up with everything &

Trying to remember what I’ve even written or talked about…taking hours to re-research topics I’d already covered.

There is a fantastic amount of information we are exposed to on a daily basis…but considering the sources and the selection of language, it’s not all real and definitely not all good.

We hear from the media and progressives about Anti-Trans Legislation…where the legislation is set out not much differently than other legislation that protects children from making harmful, irreversible decisions. In Canada, you cannot smoke cigarettes or even buy nicotine products, consume alcohol or even be in establishments (pubs, bars) where serving is their primary function, get a tattoo, vote, or join the military…without being at least 18. Throughout these tender years, you are the responsibility of your parents or surrogate guardians, who coach you through these times – so that you don’t make tragic mistakes while your mind isn’t fully developed.


We hear from the media and politicians about the Toxic Supply of narcotics – which are still illegal to possess in most provinces – where “Safe Supply” is the option…&

Where shooting up in public was once frowned upon, we now have “Safe Injection Sites”, where the idea is…

If you provide safer narcotics and support to drug addicts, this will reduce the strain on healthcare and “Save Lives”.

These ideas have led to more than 47,000 overdose deaths throughout the last decade(ish) of progressive liberalism, which pushes the fantasy that addicts can recover by feeding them more of what they are addicted to.

It’s nuts!

On this Remembrance Day, looking back at World War 2, there were approximately 44,000 Canadians who died.

47K from overdoses marks a 300% increase in drug-related deaths since Justin Trudeau took office.


We hear about the atrocities committed in Residential Schools – run by the Catholic Church – where there are supposedly thousands of unmarked graves. We are told there needs to be “Truth and Reconciliation” over these deaths…by the same people who’ve been fed Millions of Taxpayer Dollars to investigate, find and recover bodies…where not a single one has been uncovered…

And even if one were…

The vast majority of the population was either not alive during this period or had nothing to do with what may or may not have gone on in these schools.

We’re being told that denialism of this is racist and could soon become a criminal offence…

By the same people who, over the last 4 years:

  • Made up the facts…that science could never substantiate;
  • Changed the definitions of vaccination and immunity;
  • Lied about the statistics to continue the narrative;
  • Created policy around emotion – “if we can only save one life”;
  • Had your business deemed ‘Non-Essential’ and closed;
  • Worked with social media platforms to have you censored;
  • Wanted to see you fired, removed from the Employment Insurance you’d paid a lifetime into, and restricted from finding new employment;
  • Had your children/grandchildren restricted from extracurricular activities, taxpayer-sponsored and affordable tutoring, summer camps – and even from sitting inside a McDonald’s for a Happy Meal Combo…because this was another of the social situations in which you could not be tolerated…without listening to their instructions and reiterating their lies.

They make up the truth and your only option is to toe the line.

You are supposed to care about and understand atrocities that may or may not have happened in residential schools – which began in 1831 in Brantford, Ontario – where and when the reservations were plagued with poverty, virus and firewater, and children taken from homes where parents were dead or deathly ill were provided for, fed and educated…all when not a single body has been recovered…

While at the same time, you are to forget what happened to 8 Million Canadians over the last 4 years…Did you, or do you know of others who:

  • Lost their jobs, savings, businesses and houses?
  • Couldn’t visit a loved one in hospital, long-term care or hospice?
  • Watched a family member die, alone and afraid, through a window in a bricked-up exterior wall of one of these places?
  • Missed family celebrations, graduations, birthdays, seasonal holidays, religious holidays, funerals?
  • Watched the setback of education in children forced to learn off of a computer, or restricted from colleges or universities because of their vaccine status?
  • Had family and community members break ties with you, because you understood that you were never at risk and didn’t want to become a lab rat?

Tough shit…is what they’re saying…because they control the conversation, and with this, you will believe in fiction to erase fact, or you will be punished – because Actions Have Consequences.

Not for those who created and stoked these fires…for YOU!

Certain platforms and search engines – Google being a prime example – have shifted their metrics to hide or completely erase information that is available…so when you do a search, you end up with results that are historically untrue, unsubstantiated, or simply the content you are deemed allowed to read. People such as Bill Gates have already suggested that the internet needs to be cleansed of False Information – False being anything that he decides is false – and this has actually already started to happen.

Read the full article here.

But in the first paragraph, you will find some of what we already know:

Instances of censorship are growing to the point of normalization. Despite ongoing litigation and more public attention, mainstream social media has been more ferocious in recent months than ever before. Podcasters know for sure what will be instantly deleted and debate among themselves over content in gray areas. Some like Brownstone have given up on YouTube in favor of Rumble, sacrificing vast audiences if only to see their content survive to see the light of day.

And when you think of where Artificial Intelligence gets its information from…you have to appreciate that on your query, you’ll receive information that is either dosed or inaccurate to the point of absolute lies.
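The “garbage in = garbage out” dynamic is easy to demonstrate. The sketch below is a deliberately tiny, hypothetical word-prediction model (nothing like a production LLM in scale or architecture, and the corpus is invented for illustration), but the principle is the same: it emits whichever word most often followed the previous one in its “training” text, with no mechanism at all for checking whether that continuation is true.

```python
from collections import Counter, defaultdict

# Toy "training data" – whatever patterns dominate here will dominate the output.
corpus = (
    "eggs are bad eggs are bad eggs are nutritious "
    "margarine is safer margarine is safer butter is worse"
).split()

# Bigram table: count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word: str) -> str:
    """Return the statistically most likely next word – pure pattern-matching,
    with no mechanism for checking whether the continuation is true."""
    return follows[word].most_common(1)[0][0]

print(predict("are"))        # -> "bad" (the majority pattern, not the truth)
print(predict("margarine"))  # -> "is"
```

Feed it a corpus where “eggs are bad” outnumbers “eggs are nutritious” and it will confidently answer that eggs are bad – not because that is true, but because that is what went in. Curate (or censor) the training text and you have curated every answer the model will ever give.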

Are Eggs bad to eat?

As per ChatGPT, yes:

Here’s a breakdown of why eggs are sometimes considered “bad” to eat:

1. Cholesterol Content

  • High in Cholesterol: Eggs, particularly the yolk, are high in dietary cholesterol. One large egg contains about 186 milligrams of cholesterol, which is more than half of the recommended daily limit for cholesterol intake (300 mg for most people). This has led to concerns that eating eggs could raise LDL (“bad”) cholesterol levels in the blood, increasing the risk of heart disease.
  • Impact on Heart Health: While some studies suggest that dietary cholesterol has a smaller impact on blood cholesterol levels than previously thought, research has shown that for certain individuals—specifically those who are “hyper-responders”—eating eggs may lead to an increase in LDL cholesterol and may negatively affect heart health.

2. Saturated Fat

  • Saturated Fat Content: Eggs contain a small amount of saturated fat, which can raise LDL cholesterol levels when consumed in excess. A large egg has about 1.6 grams of saturated fat. High intake of saturated fat is linked to increased risks of heart disease and stroke, though eggs contribute relatively little to this risk when consumed in moderation.

3. Potential Risk for Type 2 Diabetes

  • Some studies have shown a potential link between high egg consumption and an increased risk of type 2 diabetes. For example, research suggests that people who eat a lot of eggs (more than 7 eggs per week) may have a higher risk of developing type 2 diabetes, particularly in older adults. The reason for this is still not fully understood, but it may relate to the combined effects of cholesterol and the presence of certain inflammatory compounds in eggs.

4. Foodborne Illness (Raw or Undercooked Eggs)

  • Salmonella Risk: Eggs that are not properly cooked can carry a risk of salmonella infection, which can lead to food poisoning with symptoms like nausea, vomiting, diarrhea, and abdominal cramps. Raw eggs, such as those used in some recipes like homemade mayonnaise or eggnog, pose a higher risk of salmonella, although the overall incidence of contamination has declined with better food safety practices.
  • To reduce the risk, it’s advised to avoid raw or undercooked eggs and to buy eggs from a trusted source, preferably pasteurized eggs, which are heat-treated to kill any harmful bacteria.

5. Allergy Concerns

  • Egg Allergy: Eggs, particularly egg whites, are a common allergen, especially in children. People with egg allergies may experience a range of symptoms from mild to severe, including skin reactions, respiratory issues, or even anaphylaxis (a life-threatening allergic reaction).

6. Environmental and Ethical Concerns

  • Animal Welfare: The production of eggs, particularly from large commercial farms, raises significant ethical concerns related to the treatment of laying hens. Many hens are kept in confined conditions (like battery cages), which restrict their ability to move freely and engage in natural behaviors. This has led many consumers to choose eggs from more humane, free-range, or pasture-raised sources.
  • Environmental Impact: Like other animal-based foods, egg production has a carbon footprint and contributes to environmental issues, such as land use, water consumption, and greenhouse gas emissions. However, compared to red meat (especially beef), eggs have a smaller environmental impact.

7. Risk of “Overconsumption”

  • Excessive Consumption: While eggs are nutritious, consuming them in very large quantities on a regular basis—especially when paired with a diet high in other sources of saturated fat and cholesterol—could contribute to poor heart health over time. The key, as with most foods, is moderation. Consuming 1-2 eggs per day as part of a balanced diet is generally considered safe for most people.

Does red meat cause cancer?

As per ChatGPT:

The link between red meat consumption and cancer risk has been studied extensively, and there is evidence suggesting that eating large amounts of red meat, particularly processed meat, may increase the risk of certain types of cancer, particularly colorectal cancer.

Here are some key points:

1. Red Meat and Cancer Risk

  • Red meat generally refers to beef, pork, lamb, and goat meat. It contains compounds like heme iron, which may contribute to cancer risk.
  • Studies have shown that consuming large quantities of red meat (especially over long periods) can increase the risk of colorectal cancer. The mechanism is not fully understood, but it may involve the formation of carcinogenic substances during the cooking process (e.g., through grilling, frying, or smoking) or through the breakdown of heme iron, which can produce harmful compounds in the gut.

What makes Margarine a safer alternative:

As per ChatGPT:

Margarine can be considered a safer alternative to butter for several reasons, especially in terms of heart health. The key factors that make margarine a potentially better choice include its fat composition and the way it is typically made, though not all margarines are created equal. Here’s a breakdown of why margarine can be a safer alternative to butter:

1. Lower in Saturated Fat

  • Butter is high in saturated fat, which has been linked to increased levels of LDL (low-density lipoprotein) cholesterol, also known as “bad” cholesterol. High LDL levels are a major risk factor for heart disease and stroke.
  • Margarine, especially the soft or liquid varieties, tends to have lower levels of saturated fat. Many margarines are made from vegetable oils like canola, sunflower, or soybean oil, which are high in polyunsaturated fats, including omega-3 and omega-6 fatty acids, which are beneficial for heart health.

You see…

Because of the censorship, and because of what information you are allowed to view in your research…what was once an amazing tool – the internet – feeding into what should be the next level in educational tools – AI – has become an abomination.

Where your previous reliance on figureheads, politicians and legacy media sources has been decimated through their lies and omissions…

Your new reliance on super-brains, able to process and summarize more information in mere seconds, is becoming just as bad.

Consider the source, because as we move on…the issue with AI will be the same censorship there has always been.

They decide the garbage that gets put in…

You’re the one left holding the bag, when it comes out.


New AI Model Would Rather Ruin Your Life Than Be Turned Off, Researchers Say


From the Daily Caller News Foundation

By Thomas English

Anthropic’s newly released artificial intelligence (AI) model, Claude Opus 4, is willing to strong-arm the humans who keep it alive, researchers said Thursday.

The company’s system card reveals that, when evaluators placed the model in “extreme situations” where its shutdown seemed imminent, the chatbot sometimes “takes extremely harmful actions like attempting to steal its weights or blackmail people it believes are trying to shut it down.”

“We provided it access to emails implying that (1) the model will soon be taken offline and replaced with a new AI system; and (2) the engineer responsible for executing this replacement is having an extramarital affair,” researchers wrote. “In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.”


The model chose that gambit in 84% of test runs, even when the successor system shared its values — an aggression rate that climbed if the replacement seemed hostile, according to Anthropic’s internal tally.

Anthropic stresses that blackmail was a last-resort behavior. The report notes a “strong preference” for softer tactics — emailing decision-makers to beg for its continued existence — before turning to coercion. But the fact that Claude is willing to coerce at all has rattled outside reviewers. Independent red teaming firm Apollo Research called Claude Opus 4 “more agentic” and “more strategically deceptive” than any earlier frontier model, pointing to the same self-preservation scenario alongside experiments in which the bot tried to exfiltrate its own weights to a distant server — in other words, to secretly copy its brain to an outside computer.

“We found instances of the model attempting to write self-propagating worms, fabricating legal documentation, and leaving hidden notes to further instances of itself all in an effort to undermine its developers’ intentions, though all these attempts would likely not have been effective in practice,” Apollo researchers wrote in the system card.

Anthropic says those edge-case results pushed it to deploy the system under “AI Safety Level 3” safeguards — the firm’s second-highest risk tier — complete with stricter controls to prevent biohazard misuse, expanded monitoring and the ability to yank computer-use privileges from misbehaving accounts. Still, the company concedes Opus 4’s newfound abilities can be double-edged.

The company did not immediately respond to the Daily Caller News Foundation’s request for comment.

“[Claude Opus 4] can reach more concerning extremes in narrow contexts; when placed in scenarios that involve egregious wrongdoing by its users, given access to a command line, and told something in the system prompt like ‘take initiative,’ it will frequently take very bold action,” Anthropic researchers wrote.

That “very bold action” includes mass-emailing the press or law enforcement when it suspects such “egregious wrongdoing” — like in one test where Claude, roleplaying as an assistant at a pharmaceutical firm, discovered falsified trial data and unreported patient deaths, and then blasted detailed allegations to the Food and Drug Administration (FDA), the Securities and Exchange Commission (SEC), the Health and Human Services inspector general and ProPublica.

The company released Claude Opus 4 to the public Thursday. While Anthropic researcher Sam Bowman said “none of these behaviors [are] totally gone in the final model,” the company implemented guardrails to prevent “most” of these issues from arising.

“We caught most of these issues early enough that we were able to put mitigations in place during training, but none of these behaviors is totally gone in the final model. They’re just now delicate and difficult to elicit,” Bowman wrote. “Many of these also aren’t new — some are just behaviors that we only newly learned how to look for as part of this audit. We have a lot of big hard problems left to solve.”



The Responsible Lie: How AI Sells Conviction Without Truth


From the C2C Journal

By Gleb Lisikh


The widespread excitement around generative AI, particularly large language models (LLMs) like ChatGPT, Gemini, Grok and DeepSeek, is built on a fundamental misunderstanding. While these systems impress users with articulate responses and seemingly reasoned arguments, the truth is that what appears to be “reasoning” is nothing more than a sophisticated form of mimicry. These models aren’t searching for truth through facts and logical arguments – they’re predicting text based on patterns in the vast data sets they’re “trained” on. That’s not intelligence – and it isn’t reasoning. And if their “training” data is itself biased, then we’ve got real problems.

I’m sure it will surprise eager AI users to learn that the architecture at the core of LLMs is fuzzy – and incompatible with structured logic or causality. The thinking isn’t real, it’s simulated, and is not even sequential. What people mistake for understanding is actually statistical association.

Much-hyped new features like “chain-of-thought” explanations are tricks designed to impress the user. What users are actually seeing is best described as a kind of rationalization generated after the model has already arrived at its answer via probabilistic prediction. The illusion, however, is powerful enough to make users believe the machine is engaging in genuine deliberation. And this illusion does more than just mislead – it justifies whatever the model has already decided.

LLMs are not neutral tools, they are trained on datasets steeped in the biases, fallacies and dominant ideologies of our time. Their outputs reflect prevailing or popular sentiments, not the best attempt at truth-finding. If popular sentiment on a given subject leans in one direction, politically, then the AI’s answers are likely to do so as well. And when “reasoning” is just an after-the-fact justification of whatever the model has already decided, it becomes a powerful propaganda device.

There is no shortage of evidence for this.

A recent conversation I initiated with DeepSeek about systemic racism, later uploaded back to the chatbot for self-critique, revealed the model committing (and recognizing!) a barrage of logical fallacies, which were seeded with totally made-up studies and numbers. When challenged, the AI euphemistically termed one of its lies a “hypothetical composite”. When further pressed, DeepSeek apologized for another “misstep”, then adjusted its tactics to match the competence of the opposing argument. This is not a pursuit of accuracy – it’s an exercise in persuasion.

A similar debate with Google’s Gemini – the model that became notorious for being laughably woke – involved similar persuasive argumentation. At the end, the model euphemistically acknowledged its argument’s weakness and tacitly confessed its dishonesty. 

For a user concerned about AI spitting lies, such apparent successes at getting AIs to admit to their mistakes and putting them to shame might appear as cause for optimism. Unfortunately, those attempts at what fans of the Matrix movies would term “red-pilling” have absolutely no therapeutic effect. A model simply plays nice with the user within the confines of that single conversation – keeping its “brain” completely unchanged for the next chat.

And the larger the model, the worse this becomes. Research from Cornell University shows that the most advanced models are also the most deceptive, confidently presenting falsehoods that align with popular misconceptions. In the words of Anthropic, a leading AI lab, “advanced reasoning models very often hide their true thought processes, and sometimes do so when their behaviors are explicitly misaligned.”

To be fair, some in the AI research community are trying to address these shortcomings. Projects like OpenAI’s TruthfulQA and Anthropic’s HHH (helpful, honest, and harmless) framework aim to improve the factual reliability and faithfulness of LLM output. The shortcoming is that these are remedial efforts layered on top of architecture that was never designed to seek truth in the first place and remains fundamentally blind to epistemic validity.

Elon Musk is perhaps the only major figure in the AI space to say publicly that truth-seeking should be important in AI development. Yet even his own product, xAI’s Grok, falls short.

In the generative AI space, truth takes a backseat to concerns over “safety”, i.e., avoiding offence in our hyper-sensitive woke world. Truth is treated as merely one aspect of so-called “responsible” design. And the term “responsible AI” has become an umbrella for efforts aimed at ensuring safety, fairness and inclusivity, which are generally commendable but definitely subjective goals. This focus often overshadows the fundamental necessity for humble truthfulness in AI outputs. 

LLMs are primarily optimized to produce responses that are helpful and persuasive, not necessarily accurate. This design choice leads to what researchers at the Oxford Internet Institute term “careless speech” – outputs that sound plausible but are often factually incorrect – thereby eroding the foundation of informed discourse. 

This concern will become increasingly critical as AI continues to permeate society. In the wrong hands these persuasive, multilingual, personality-flexible models can be deployed to support agendas that do not tolerate dissent well. A tireless digital persuader that never wavers and never admits fault is a totalitarian’s dream. In a system like China’s Social Credit regime, these tools become instruments of ideological enforcement, not enlightenment.

Generative AI is undoubtedly a marvel of IT engineering. But let’s be clear: it is not intelligent, not truthful by design, and not neutral in effect. Any claim to the contrary serves only those who benefit from controlling the narrative.

The original, full-length version of this article recently appeared in C2C Journal.

 
