Artificial Intelligence
Yuval Noah Harari warns against AI’s ‘ability to manipulate people,’ pretend to be human
From LifeSiteNews
The transhumanist has highlighted the fact that AI has a real ability to deceive human beings. The question is, who is using AI, and for what purposes?
Transhumanist philosopher and World Economic Forum (WEF) senior adviser Yuval Noah Harari recently warned on MSNBC that AI can be used to manipulate us, having already been shown to be capable of impersonating a human.
He shared the story of how the AI tool GPT-4 was programmed to seek out a real human being — a TaskRabbit worker — to convince them to solve a CAPTCHA puzzle that is designed to distinguish between human beings and AI.
“It asked a human worker, ‘Please solve the CAPTCHA puzzle for me,’” shared Harari. “This is the interesting part. The human got suspicious. It asked GPT-4, ‘Why do you need somebody to do this for you? Are you a robot?’ GPT-4 told the human, ‘No, I’m not a robot, I have a vision impairment, so I can’t see the CAPTCHA puzzles, this is why I need help.’”
The human fell for the AI tool’s lie and completed the CAPTCHA puzzle on its behalf, he recounted, pointing out that this is evidence that AI is “able to manipulate people.”
He further warned that AI has a newfound ability to “understand and manipulate” human emotions, which he said could be employed for good purposes, such as in AI “teachers” and “doctors,” but could also be used to “sell us everything from products to politicians.”
Harari suggested that regulations legally requiring AI to identify itself as artificial intelligence would be a desirable solution to this potential problem.
“AI should be welcome to human conversations as long as it identifies itself as AI,” said Harari, adding that this is something both Republicans and Democrats can get behind.
What the WEF adviser did not reveal during this particular interview, however, is that he believes speech on social media should be censored under the pretext of regulating AI.
He recently argued regarding social media, “The problem is not freedom of speech. The problem is that there are algorithms on Twitter, Facebook, and so forth that deliberately promote information that captures our attention even if it’s not true.”
Lamenting that algorithms are one way in which AI amplifies “falsehoods” on the internet, and claiming that AI “is capable of creating content by itself,” Harari ignored the fact that AI-generated content as well as algorithms are always ultimately a product of human programming.
Harari, an atheist, has previously claimed that AI can manipulate human beings to such a degree that it renders democratic functioning as well as free will obsolete. He explained to journalist Romi Noimark in 2020, “If you have enough data and you have enough computing power, you can understand people better than they understand themselves. And then you can manipulate them in ways which were previously impossible … And in such a situation, the old democratic situation stops functioning.”
Acclaimed author and investigative reporter Leo Hohmann points to the human beings behind AI as the real manipulators and real danger to the masses, rather than characterizing AI itself as a prime danger.
Hohmann believes that AI “may very well turn out to be the nerve center of the coming beast system” — referring to a potential AI system with centralized access to intimate information about ourselves, as well as the power to manipulate or control our behavior — and that in the hands of globalists like the WEF, “its core mission is to eliminate free will in the human being.”
Artificial Intelligence
UK Police Pilot AI System to Track “Suspicious” Driver Journeys
AI-driven surveillance is shifting from spotting suspects to mapping ordinary life, turning everyday travel into a stream of behavioral data
Alberta
Schools should go back to basics to mitigate effects of AI
From the Fraser Institute
Odds are, you can’t tell whether this sentence was written by AI. Schools across Canada face the same problem. And happily, some are finding simple solutions.
Manitoba’s Division Scolaire Franco-Manitobaine recently issued new guidelines directing teachers to assign only optional homework and reading in grades kindergarten to six, and to limit homework in grades seven to 12. The reason? The proliferation of generative artificial intelligence (AI) chatbots such as ChatGPT makes it very difficult for teachers, who are already juggling heavy workloads, to discern genuine student work from AI-generated text. In fact, according to Division superintendent Alain Laberge, “Most of the [after-school assignment] submissions, we find, are coming from AI, to be quite honest.”
This problem isn’t limited to Manitoba, of course.
Two provincial doors down, in Alberta, new data analysis revealed that high school report card grades are rising while scores on provincewide assessments are not—particularly since 2022, the year ChatGPT was released. Report cards account for take-home work, while standardized tests are written in person, in the presence of teaching staff.
Specifically, from 2016 to 2019, the average standardized test score in Alberta across a range of subjects was 64 while the average report card grade was 73.3 (9.3 percentage points higher). From 2022 to 2024, the gap increased to 12.5 percentage points. (Data for 2020 and 2021 are unavailable due to COVID school closures.)
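For readers who want to check the arithmetic, here is a minimal sketch that reproduces the gap figures cited above. It uses only the averages reported in this article; the underlying 2022 to 2024 averages are not given, so that period's gap is taken directly from the reported 12.5-point figure.

```python
# Illustrative arithmetic for the Alberta grade gap cited above.
# The 2016-2019 averages are the figures reported in the article;
# the 2022-2024 gap is quoted directly because its underlying
# averages are not reported there.

test_avg_2016_2019 = 64.0     # average provincewide assessment score
report_avg_2016_2019 = 73.3   # average report card grade

gap_2016_2019 = report_avg_2016_2019 - test_avg_2016_2019   # 9.3 points
gap_2022_2024 = 12.5                                         # reported gap

print(f"2016-2019 gap: {gap_2016_2019:.1f} percentage points")
print(f"2022-2024 gap: {gap_2022_2024:.1f} percentage points")
print(f"Increase in gap: {gap_2022_2024 - gap_2016_2019:.1f} percentage points")
```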
In lieu of take-home work, the Division Scolaire Franco-Manitobaine recommends nightly reading for students, which is a great idea. Having students read nightly doesn’t cost schools a dime, but it’s strongly associated with improved academic outcomes.
According to a Programme for International Student Assessment (PISA) analysis of 174,000 student scores across 32 countries, the connection between daily reading and literacy was “moderately strong and meaningful,” and reading engagement affects reading achievement more than the socioeconomic status, gender or family structure of students.
All of this points to an undeniable shift in education—that is, teachers are losing a once-valuable tool (homework) and shifting more work back into the classroom. And while new technologies will continue to change the education landscape in heretofore unknown ways, one time-tested winning strategy is to go back to basics.
And some of “the basics” have slipped rapidly away. Some students at elite universities arrive on campus never having read an entire book. Many university professors bemoan the newfound inability of students to write essays or deconstruct basic story components. Canada’s average PISA scores—a test of 15-year-olds in math, reading and science—have plummeted. In math, student test scores have dropped 35 points—the PISA equivalent of nearly two years of lost learning—over the last two decades. In reading, students have fallen about one year behind, while science scores have dropped moderately.
The decline in Canadian student achievement predates widespread access to generative AI, but AI complicates the problem. Again, the solution needn’t be costly or complicated. There’s a reason why many tech CEOs famously send their children to screen-free schools. If technology is too tempting, in or outside of class, students should write with pencil and paper. If ChatGPT is too hard to detect (and we know it is, because even AI often can’t accurately detect AI), in-class essays and assignments make sense.
And crucially, standardized tests provide the most reliable and equitable measure of student progress and, if properly monitored, they’re AI-proof. Yet standardized testing is on the wane in Canada, thanks to long-standing attacks from teacher unions and other opponents, and despite broad support from parents. Now more than ever, parents and educators require reliable data to assess student abilities. Standardized testing varies widely among the provinces, but parents in every province should demand a strong standardized testing regime.
AI may be here to stay and it may play a large role in the future of education. But if schools deprive students of the ability to read books, structure clear sentences, correspond organically with other humans and complete their own work, they will do students no favours. The best way to ensure kids are “future ready”—to borrow a phrase oft-used to justify seesawing educational tech trends—is to school them in the basics.