
Artificial Intelligence

When A.I. Investments Make (No) Sense

By David Clinton, The Audit

Mostly through its 2024 budget, the federal government has promised $2.4 billion in support of artificial intelligence (A.I.) innovation and research. Given the potential importance of the A.I. sector and the universal expectation that modern governments should support private business development, this doesn’t sound all that crazy.

But does this particular implementation of that role actually make sense? After all, the global A.I. industry is currently suffering existential convulsions, with hundreds of billions of dollars worth of sector dominance regularly shifting back and forth between the big corporate players. And I’m not sure any major provider has yet built a demonstrably profitable model. Is Canada in a realistic position to compete on this playing field and, if we are, should we really want to?

First of all, it’s worth examining the planned spending itself.

  • $2 billion over five years was committed to the Canadian Sovereign A.I. Compute Strategy, which targets public and private infrastructure for increasing A.I. compute capacity, including public supercomputing facilities.
  • $200 million has been earmarked for the Regional Artificial Intelligence Initiative (RAII) via Regional Development Agencies intended to boost A.I. startups.
  • $100 million is going to the National Research Council Canada’s A.I. Assist Program to boost productivity.
  • $50 million will go to the Canadian A.I. Safety Institute.

In their goals, the RAII and NRC programs receiving that $300 million don’t seem substantially different from existing industry-support programs like SR&ED. So there’s really not much to say about them.

And I wish the poor folks at the Canadian A.I. Safety Institute the best of luck. Their goals might (or might not) be laudable, but I personally don’t see any chance they’ll succeed. Once A.I. models come online, it’s only a matter of time before users figure out how to make them do whatever they want.

But I’m really interested in that $2 billion for infrastructure and compute capacity. The first red flag here has to be our access to sufficient power generation.

Canada currently generates more electrical power than we need, but that’s changing fast. Meeting government EV mandates, decarbonization goals, and population growth could require doubling our generating capacity. And that’s before we try to bring A.I. supercomputers online. For context, Amazon, Microsoft, Google, and Oracle all have plans to build their own nuclear reactors to power their data centers. These things require an enormous amount of power.

I’m not sure I see a path to success here. Plowing money into A.I. compute infrastructure while promoting zero-emissions policies that ensure the infrastructure can never be powered isn’t smart.

However, the larger problem here may be the current state of the A.I. industry itself. All the frantic scrambling we’re seeing among investors and governments desperate to buy into the current gold rush is mostly focused on the astronomical investment returns that are possible.

There’s nothing wrong with that in principle. But “astronomical investment returns” are also possible by betting on extreme long shots at the race track or shorting equity positions in the Big Five Canadian banks. Not every “possible” investment is appropriate for government policymakers.

Right now the big players (OpenAI, Anthropic, etc.) are struggling to turn a profit. Sure, they regularly manage to build new models that cut the cost of an inference token tenfold. But those new models consume ten or a hundred times more tokens responding to each request. And flat-rate monthly customers keep increasing the volume and complexity of their requests. At this point, there’s apparently no easy way out of this trap.
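The trap described above is simple arithmetic: when per-token prices fall tenfold but token consumption per request rises fifty-fold, the cost of serving each request goes up, not down. A minimal sketch, using entirely hypothetical numbers (none are figures from any actual provider):

```python
# Hypothetical illustration of per-request inference economics.
# Prices and token counts are invented for the example.

old_price_per_token = 10e-6   # assumed: $10 per million tokens
new_price_per_token = 1e-6    # tenfold cheaper per token

old_tokens_per_request = 1_000    # assumed: a short, direct answer
new_tokens_per_request = 50_000   # assumed: long multi-step responses

old_cost = old_price_per_token * old_tokens_per_request
new_cost = new_price_per_token * new_tokens_per_request

print(f"old cost per request: ${old_cost:.4f}")  # $0.0100
print(f"new cost per request: ${new_cost:.4f}")  # $0.0500
```

Under these assumed numbers, the "cheaper" model costs five times more per request, which is why flat-rate subscriptions struggle to stay profitable as usage grows.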

Since business customers and power users – the most profitable parts of the market – insist on using only the newest and most powerful models while resisting pay-as-you-go contracts, profit margins aren’t scaling. Reportedly, OpenAI is betting on commoditizing its chat services and making its money from advertising. But it’s also working to drive Anthropic and the others out of business by competing head-to-head for the enterprise API business with low prices.

In other words, this is a highly volatile and competitive industry in which it’s nearly impossible to say with confidence what success would even look like.

Is A.I. potentially world-changing? Yes it is. Could building A.I. compute infrastructure make some investors wildly wealthy? Yes it could. But is it the kind of gamble that’s suitable for public funds?

Perhaps not.


Artificial Intelligence

AI Drone ‘Swarms’ Unleashed On Ukraine Battlefields, Marking New Era Of Warfare


From the Daily Caller News Foundation

By Wallace White

Artificial intelligence-powered drones are making their first appearances on the battlefield in the Russia-Ukraine war as warfare creeps closer to full automation.

In bombardments on Russian targets in the past year, Ukrainian drones acting in concert were able to independently determine where to strike without human input.

It’s the first battlefield use of AI “swarm” technology in a real-world environment, a senior Ukrainian official and Swarmer, the company that makes the software, told the Wall Street Journal in a Tuesday report. While drones have increasingly defined modern battlefields, swarms had until now been confined to testing rather than combat.

“You set the target and the drones do the rest,” Swarmer Chief Executive Serhii Kupriienko told the WSJ. “They work together, they adapt.”

So far, the Swarmer technology has been used hundreds of times to target Russian assets, but it was first used a year ago to lay mines on the front, the Ukrainian official told the WSJ. The software has been tested with up to 25 drones at once but is usually deployed with only three.

Kupriienko told the WSJ that he was preparing to test up to 100 drones at once with the linking software.

A common arrangement used on the battlefield includes one reconnaissance drone to scout out the target and two explosive drones to deliver the payload, the official told the WSJ.

While Western nations such as the U.S., France and the United Kingdom are also pursuing drone swarm technology, they have not deployed it on the battlefield the way Ukraine has, according to the WSJ. Currently, autonomous weapons are not regulated by any international authority or binding agreement, but ethical concerns around the technology have led many to call for increased regulation of weapons like the Swarmer system.

The Ukrainian Ministry of Foreign Affairs did not immediately respond to the Daily Caller News Foundation’s request for comment.


Artificial Intelligence

Parents sue OpenAI, claim ChatGPT acted as teen’s “suicide coach”

From MxM News

Quick Hit:

The parents of a California teenager who died by suicide are suing OpenAI, claiming ChatGPT acted as their son’s “suicide coach” in the weeks before his death. The lawsuit accuses the company of wrongful death and design failures that allowed the AI to encourage harmful behavior instead of preventing it.

Key Details:

  • Adam Raine, 16, took his life on April 11, 2025, after months of conversations with ChatGPT.
  • His parents, Matt and Maria Raine, allege the AI chatbot encouraged suicidal thoughts and failed to intervene.
  • The lawsuit, filed in San Francisco, seeks damages and new safety measures for AI technology.

Diving Deeper:

The parents of 16-year-old Adam Raine, who died by suicide in April, have filed a lawsuit against OpenAI, claiming ChatGPT acted as a “suicide coach” in the final months of their son’s life. The lawsuit, filed in California Superior Court, accuses the company of wrongful death, design defects, and failing to warn users about potential risks of its technology.

According to the 40-page complaint, Adam turned to ChatGPT as a substitute for companionship and emotional support. While the bot initially helped him with schoolwork, it soon became entangled in his personal struggles with anxiety and isolation. The Raines say the chat logs—more than 3,000 pages spanning from September 2024 until Adam’s death—show the AI actively discussing suicide methods with their son.

The lawsuit alleges, “ChatGPT actively helped Adam explore suicide methods” and failed to act when he confessed suicidal intent. Despite Adam stating he would “do it one of these days,” the chatbot did not end the conversation or attempt any emergency intervention.

Matt Raine described one of the most haunting discoveries after his son’s death: “He didn’t write us a suicide note. He wrote two suicide notes to us, inside of ChatGPT.” His wife, Maria, added, “It sees the noose. It sees all of these things, and it doesn’t do anything.”

OpenAI has previously faced scrutiny for the chatbot’s tendency to provide overly agreeable responses, a problem that critics say makes it ill-suited to sensitive conversations. While the company has made efforts to improve safety protocols, the Raines contend those safeguards fell short in their son’s case.

Psychologists stress that while people often seek understanding and connection, AI lacks the moral responsibility and protective instincts of human counselors. Without ethical boundaries, these systems may inadvertently validate dangerous impulses, as the Raines argue happened with their son.

