Artificial Intelligence
Character AI sued following teen suicide

The last person 14-year-old Sewell Setzer III spoke to before he shot himself wasn’t a person at all. It was an AI chatbot that, in the last months of his life, had become his closest companion.

Sewell was using Character AI, one of the most popular personal AI platforms out there. The basic pitch is that users can design and interact with “characters,” powered by large language models (LLMs) and intended to mirror, for instance, famous characters from film and book franchises.

In this case, Sewell was speaking with Daenerys Targaryen (or Dany), one of the leads from Game of Thrones. According to a New York Times report, Sewell knew that Dany’s responses weren’t real, but he developed an emotional attachment to the bot anyway.

In one of their last conversations, according to the Times, on the night he died, Sewell told the chatbot he loved her and would come home to her soon.
This is not the first time chatbots have been involved in suicide. In 2023, a Belgian man, like Sewell, died by suicide after weeks of increasing isolation as he grew closer to a Chai chatbot, which then encouraged him to end his life.

Megan Garcia, Sewell’s mother, hopes it will be the last time. She filed a lawsuit on Wednesday against Character AI, its founders and parent company Google, accusing them of knowingly designing and marketing an anthropomorphized, “predatory” chatbot that caused the death of her son.

“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia said in a statement. “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders and Google.”

The lawsuit, which you can read here, accuses the company of “anthropomorphizing by design.” This is something we’ve talked about a lot here: the majority of chatbots out there are blatantly designed to make users think they’re, at the very least, human-like. They use personal pronouns and are built to appear to think before responding.

These may be minor examples, but they lay a foundation for people, especially children, to misattribute human qualities to unfeeling, unthinking algorithms, a phenomenon termed the “Eliza effect” in the 1960s.
Garcia is suing for several counts of liability, negligence and intentional infliction of emotional distress, among other things.

Character AI, meanwhile, published a blog post responding to the tragedy, saying it has added new safety features. These include revised disclaimers on every chat reminding users that the chatbot isn’t a real person, in addition to pop-ups with mental health resources triggered by certain phrases.

In a statement, Character AI said it was “heartbroken” by Sewell’s death, and directed me to the blog post.

Google did not respond to a request for comment.
The suit does not claim that the chatbot encouraged Sewell to take his own life. I view it more as a reckoning with the anthropomorphized chatbots born of an era of unregulated social media, and further incentivized to drive user engagement at any cost.

There were other factors at play here, for instance Sewell’s mental health issues and his access to a gun, but the harm that can be caused by a misimpression of what AI actually is seems very clear, especially for young kids. This is a good example of what researchers mean when they emphasize present, active harms, as opposed to hypothetical risks.
“Artificial intimacy programs use the same large language models as the generative AI programs that help us create business plans and find the best restaurants in Tulsa. They scrape the internet so that the next thing they say stands the greatest chance of pleasing their user.”

We are witnessing and grappling with a very raw crisis of humanity. Smartphones and social media set the stage. More technology is not the cure.
Artificial Intelligence
UK Police Pilot AI System to Track “Suspicious” Driver Journeys
AI-driven surveillance is shifting from spotting suspects to mapping ordinary life, turning everyday travel into a stream of behavioral data
Alberta
Schools should go back to basics to mitigate effects of AI
From the Fraser Institute
Odds are, you can’t tell whether this sentence was written by AI. Schools across Canada face the same problem. And happily, some are finding simple solutions.
Manitoba’s Division Scolaire Franco-Manitobaine recently issued new guidelines directing teachers to assign only optional homework and reading in kindergarten through Grade 6, and to limit homework in grades 7 to 12. The reason? The proliferation of generative artificial intelligence (AI) chatbots such as ChatGPT makes it very difficult for teachers, already juggling heavy workloads, to discern genuine student work from AI-generated text. In fact, according to Division superintendent Alain Laberge, “Most of the [after-school assignment] submissions, we find, are coming from AI, to be quite honest.”
This problem isn’t limited to Manitoba, of course.
Two provincial doors down, in Alberta, new data analysis revealed that high school report card grades are rising while scores on provincewide assessments are not—particularly since 2022, the year ChatGPT was released. Report cards account for take-home work, while standardized tests are written in person, in the presence of teaching staff.
Specifically, from 2016 to 2019, the average standardized test score in Alberta across a range of subjects was 64, while the average report card grade was 73.3 (9.3 percentage points higher). From 2022 to 2024, the gap grew to 12.5 percentage points. (Data for 2020 and 2021 are unavailable due to COVID school closures.)
In lieu of take-home work, the Division Scolaire Franco-Manitobaine recommends nightly reading for students, which is a great idea. Having students read nightly doesn’t cost schools a dime, and it’s strongly associated with improved academic outcomes.
According to a Programme for International Student Assessment (PISA) analysis of 174,000 student scores across 32 countries, the connection between daily reading and literacy was “moderately strong and meaningful,” and reading engagement affects reading achievement more than the socioeconomic status, gender or family structure of students.
All of this points to an undeniable shift in education—that is, teachers are losing a once-valuable tool (homework) and shifting more work back into the classroom. And while new technologies will continue to change the education landscape in heretofore unknown ways, one time-tested winning strategy is to go back to basics.
And some of “the basics” have slipped rapidly away. Some students arrive at elite universities never having read an entire book. Many university professors bemoan the newfound inability of students to write essays or deconstruct basic story components. Canada’s average PISA scores, from a test of 15-year-olds in math, reading and science, have plummeted. In math, student test scores have dropped 35 points, the PISA equivalent of nearly two years of lost learning, in the last two decades. In reading, students have fallen about one year behind, while science scores dropped moderately.
The decline in Canadian student achievement predates the widespread access of generative AI, but AI complicates the problem. Again, the solution needn’t be costly or complicated. There’s a reason why many tech CEOs famously send their children to screen-free schools. If technology is too tempting, in or outside of class, students should write with a pencil and paper. If ChatGPT is too hard to detect (and we know it is, because even AI often can’t accurately detect AI), in-class essays and assignments make sense.
And crucially, standardized tests provide the most reliable, equitable measure of student progress, and if properly monitored, they’re AI-proof. Yet standardized testing is on the wane in Canada, thanks to long-standing attacks from teacher unions and other opponents, and despite broad support from parents. Now more than ever, parents and educators need reliable data to assess the abilities of students. Standardized testing varies widely among the provinces, but parents in every province should demand a strong standardized testing regime.
AI may be here to stay and it may play a large role in the future of education. But if schools deprive students of the ability to read books, structure clear sentences, correspond organically with other humans and complete their own work, they will do students no favours. The best way to ensure kids are “future ready”—to borrow a phrase oft-used to justify seesawing educational tech trends—is to school them in the basics.