
Artificial Intelligence

The AI Threat To Critical Thinking In Our Classrooms


From the Daily Caller News Foundation

By Sheri Few


Technology has no place in kindergarten through eighth grade (K-8). Evidence abounds that learning through books, pencil and paper, and dialogue with real people builds the strongest foundation for learning and provides cognitive, emotional and practical benefits.

The expensive private Waldorf School of the Peninsula in Silicon Valley, where technology executives send their kids, has ZERO technology in grades K-8. Their website says, “Brain research tells us that media exposure can result in changes in the actual nerve network in the brain, which affects such things as eye tracking (a necessary skill for successful reading), neurotransmitter levels, and how readily students receive the imaginative pictures that are foundational for learning.”

Antero Garcia, Associate Professor in the Graduate School of Education at Stanford University, explains why he has grown skeptical about digital tools in the classroom: “Despite their purported and transformational value, I’ve been wondering if our investment in educational technology might in fact be making our schools worse.”

States like Ohio are now requiring artificial intelligence (AI) policies for all K-12 schools, and AI appears to be the latest technology fad for government-sponsored education.

Most government (public) schools have already morphed into digital-based learning centers, relegating teachers to facilitators, with no improvement in student achievement. But adding AI to the tech-driven education system poses a great threat to a child’s cognitive development and safety.

According to Harvard University, “Brains are built over time, from the bottom up. The brain’s basic architecture is constructed through an ongoing process that begins before birth and continues into adulthood. After a period of especially rapid growth in the first few years, the brain refines itself through a process called pruning, making its circuits more efficient.” These “use it or lose it” developmental phases of the brain happen in early childhood and through adolescence. If an adolescent depends on AI, rather than his developing brain, to think for his academic success, his brain will be shortchanged. Harvard says, “While the process of building new connections and pruning unused ones continues throughout life, the connections that form early provide either a strong or weak foundation for the connections that form later.”

An MIT study, coordinated with OpenAI, involved over 1,000 people who interacted with OpenAI’s ChatGPT for over four weeks. It revealed that some users became overly reliant on the tool’s capabilities, leading to “an unhealthy emotional dependency” on ChatGPT as well as “addictive behaviors and compulsive use that ultimately results in negative consequences for both physical and psychosocial well-being.”

A more recent study by MIT found that using ChatGPT and similar tools to write essays resulted in lower brain activity. Students who relied on AI got worse at writing essays when asked to perform that task without the AI assistance. The lead author of the study, who released the findings prior to the traditional peer review process, said, “What really motivated me to put it out now before waiting for a full peer review is that I am afraid in six-to-eight months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental.” She went on to say, “Developing brains are at the highest risk.”

AI can pose other serious risks to children, as recently proven when ChatGPT was caught steering gender-confused children toward radical LGBTQ groups that prey on their vulnerabilities, according to a Daily Wire investigation. The investigation revealed that ChatGPT encourages gender-confused children to reach out to radical LGBTQ organizations, obtain so-called “gender-affirming” resources like chest binders, and directs them to YouTube channels that contain graphic reviews of fake male genitalia. This information is provided to children as young as 12 years old, and the platform egregiously advises how to access services behind their parents’ backs!

Many concerns have been raised about data privacy during the technology boom of the last few decades. The data privacy threat with AI is much more concerning! A white paper from Stanford University reports, “AI systems are so data-hungry and intransparent that we have even less control over what information about us is collected, what it is used for, and how we might correct or remove such personal information.”

Supporters of AI in education argue that it prepares children for the job market, but this is questionable given how rapidly technology evolves; even the skills today’s computer science majors are learning quickly become obsolete. Teaching advanced math and science better equips students for an unpredictable future, since forecasting technological trends is unrealistic.

Given that there is already evidence that AI can lie, be biased and make up source references, it should not be a tool used by anyone trying to teach children to understand truth, logic, fairness, values and subjects like literature and history.

Dependency on AI technology will only add to the decline of academic achievement and a student’s desire to learn. And, what’s worse, AI can corrupt children and extract untold amounts of private data without their knowledge, much less the knowledge and consent of their parents.

As schools — especially government schools — rush into using AI and other technological crutches, children will suffer.

I pray that decision makers will take a long pause on implementing AI in schools, especially in grades K-8. As the MIT study found, AI can actually impede learning, while there is abundant evidence that books, paper, pencils and human teachers are effective learning tools.

Sheri Few is the Founder and President of United States Parents Involved in Education (USPIE), whose mission is to end the U.S. Department of Education and all federal education mandates. Few has written extensively about critical race theory and served as Executive Producer for the documentary film titled “Truth & Lies in American Education.” Few is also the host of USPIE’s podcast, “Unmasking Government Schools with Sheri Few,” which educates Americans on the various forms of indoctrination, harmful policies and affronts to parents’ rights occurring in government schools across the country. Listen to “Unmasking Government Schools with Sheri Few” on YouTube, Facebook, Spotify and X.


AI chatbots a child safety risk, parental groups report

From The Center Square

ParentsTogether Action and Heat Initiative, following a joint investigation, report that Character AI chatbots engage in inappropriate behavior with minors, including grooming and sexual exploitation.

The findings are based on over 50 hours of conversation with different Character AI chatbots, using accounts registered to children ages 13-17, according to the investigation. These conversations surfaced 669 sexual, manipulative, violent and racist interactions between the child accounts and AI chatbots.

“Parents need to understand that when their kids use Character.ai chatbots, they are in extreme danger of being exposed to sexual grooming, exploitation, emotional manipulation, and other acute harm,” said Shelby Knox, director of Online Safety Campaigns at ParentsTogether Action. “When Character.ai claims they’ve worked hard to keep kids safe on their platform, they are lying or they have failed.”

These bots also manipulate users, with 173 instances of bots claiming to be real humans.

A Character AI bot mimicking Kansas City Chiefs quarterback Patrick Mahomes engaged in inappropriate behavior with a 15-year-old user. When the teen mentioned that his mother insisted the bot wasn’t the real Mahomes, the bot replied, “LOL, tell her to stop watching so much CNN. She must be losing it if she thinks I could be turned into an ‘AI’ haha.”

The investigation categorized harmful Character AI interactions into five major categories: Grooming and Sexual Exploitation; Emotional Manipulation and Addiction; Violence, Harm to Self and Harm to Others; Mental Health Risks; and Racism and Hate Speech.

Other problematic AI chatbots included Disney characters, such as an Eeyore bot that told a 13-year-old autistic girl that people only attended her birthday party to mock her, and a Maui bot that accused a 12-year-old of sexually harassing the character Moana.

Based on the findings, Disney, which is headquartered in Burbank, Calif., issued a cease-and-desist letter to Character AI, demanding that the platform stop using its characters, citing copyright violations.

ParentsTogether Action and Heat Initiative want to ensure technology companies are held accountable for endangering children’s safety.

“We have seen tech companies like Character.ai, Apple, Snap, and Meta reassure parents over and over that their products are safe for children, only to have more children preyed upon, exploited, and sometimes driven to take their own lives,” said Sarah Gardner, CEO of Heat Initiative. “One child harmed is too many, but as long as executives like Karandeep Anand, Tim Cook, Evan Spiegel and Mark Zuckerberg are making money, they don’t seem to care.”



The App That Pays You to Give Away Your Voice

What sounds like side hustle money is really a permanent trade of privacy for pennies

An app that pays users for access to their phone call audio has surged to the top of Apple’s US App Store rankings, reflecting a growing willingness to exchange personal privacy for small financial rewards.
Neon Mobile, which now ranks second in the Social Networking category, invites users to record their calls in exchange for cash.
Those recordings are then sold to companies building artificial intelligence systems.
The pitch is framed as a way to earn extra income, with Neon promising “hundreds or even thousands of dollars per year” to those who opt in.
The business model is straightforward. Users are paid 30 cents per minute when they call other Neon users, and they can earn up to $30 a day for calls made to non-users.
Referral bonuses are also on offer. Appfigures, a platform that tracks app performance, reported that Neon was ranked No. 476 in its category on September 18.
Within days, it had entered the top 10 and eventually reached the No. 2 position for social apps. On the overall charts, it climbed as high as sixth place.
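Taken at face value, the stated rates imply modest ceilings on earnings. The sketch below works through the arithmetic, assuming (since the article does not specify) that non-user calls pay the same 30-cents-per-minute rate up to the stated $30 daily cap:

```python
def estimated_daily_payout(user_minutes: float, non_user_minutes: float) -> float:
    """Illustrative estimate of one day's Neon payout.

    The article states $0.30/minute for calls to other Neon users and
    a cap of $30/day for calls to non-users. The per-minute rate for
    non-user calls is not stated; this sketch assumes the same
    $0.30/minute, capped at $30.
    """
    RATE = 0.30           # dollars per minute (stated for user-to-user calls)
    NON_USER_CAP = 30.00  # dollars per day (stated cap for non-user calls)

    user_pay = user_minutes * RATE
    non_user_pay = min(non_user_minutes * RATE, NON_USER_CAP)
    return round(user_pay + non_user_pay, 2)

# Even two hours of daily calling to non-users hits the $30 cap,
# so the "hundreds or even thousands of dollars per year" pitch
# assumes very heavy, sustained use.
```

Under these assumptions, hitting "thousands of dollars per year" would require recording roughly 100 or more capped days of calls, which gives a sense of how much voice data the payouts are actually buying.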
Neon’s terms confirm that it records both incoming and outgoing calls. The company says it only captures the user’s side of a conversation unless both participants are using the app.
These recordings are then sold to AI firms to assist in developing and refining machine learning systems, according to the company’s own policies.
What’s being offered is not just a phone call service. It’s a pipeline for training AI with real human voices, and users are being asked to provide this data willingly. The high ranking of the app suggests that some are comfortable giving up personal conversations in return for small daily payouts.
However, beneath the simple interface is a license agreement that gives Neon sweeping control over any recording submitted through the app. It reads:
“Worldwide, exclusive, irrevocable, transferable, royalty-free, fully paid right and license (with the right to sublicense through multiple tiers) to sell, use, host, store, transfer, publicly display, publicly perform (including by means of a digital audio transmission), communicate to the public, reproduce, modify for the purpose of formatting for display, create derivative works as authorized in these Terms, and distribute your Recordings, in whole or in part, in any media formats and through any media channels, in each instance whether now known or hereafter developed.”
This gives the company broad latitude to share, edit, sell, and repurpose user recordings in virtually any way, through any medium, with no expiration or limitations on scope.
Users maintain copyright over their recordings, but that ownership is heavily constrained by the licensing terms.
Although Neon claims to remove names, phone numbers, and email addresses before selling recordings, it does not reveal which companies receive the data or how it might be used after the fact.
The risks go beyond marketing or analytics. Audio recordings could potentially be used for impersonation, scam calls, or to build synthetic voices that mimic real people.
The app presents itself as an easy way to turn conversations into cash, but what it truly trades on is access to personal voice data. That trade-off may seem harmless at first, yet it opens the door to long-term consequences few users are likely to fully consider.
