When A.I. Investments Make (No) Sense

By David Clinton, The Audit

In its 2024 budget, the federal government promised $2.4 billion in support of artificial intelligence (A.I.) innovation and research. Given the potential importance of the A.I. sector and the universal expectation that modern governments should support private business development, this doesn’t sound all that crazy.

But does this particular implementation of that role actually make sense? After all, the global A.I. industry is currently suffering existential convulsions, with hundreds of billions of dollars worth of sector dominance regularly shifting back and forth between the big corporate players. And I’m not sure any major provider has yet built a demonstrably profitable model. Is Canada in a realistic position to compete on this playing field and, if we are, should we really want to?

First of all, it’s worth examining the planned spending itself.

  • $2 billion over five years for the Canadian Sovereign A.I. Compute Strategy, which targets public and private infrastructure for increasing A.I. compute capacity, including public supercomputing facilities.
  • $200 million for the Regional Artificial Intelligence Initiative (RAII), delivered through Regional Development Agencies and intended to boost A.I. startups.
  • $100 million for the National Research Council Canada’s A.I. Assist Program, aimed at boosting productivity.
  • $50 million for the Canadian A.I. Safety Institute.
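Incidentally, the four line items above don’t quite reach the headline figure, as a quick check shows (all figures in millions of dollars, taken straight from the list):

```python
# Summing the four A.I. line items from the 2024 budget (figures in $ millions, from the list above)
items = {
    "Canadian Sovereign A.I. Compute Strategy": 2000,
    "Regional Artificial Intelligence Initiative (RAII)": 200,
    "NRC A.I. Assist Program": 100,
    "Canadian A.I. Safety Institute": 50,
}

total = sum(items.values())
print(total)  # 2350 -> $2.35 billion of the promised $2.4 billion
```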

In their goals, the $300 million going to the RAII and NRC programs doesn’t seem substantially different from existing industry support programs like SR&ED. So there’s really not much to say about them.

And I wish the poor folks at the Canadian A.I. Safety Institute the best of luck. Their goals might (or might not) be laudable, but I personally don’t see any chance they’ll be successful. Once A.I. models come online, it’s only a matter of time before users figure out how to make them do whatever they want.

But I’m really interested in that $2 billion for infrastructure and compute capacity. The first red flag here has to be our access to sufficient power generation.

Canada currently generates more electrical power than we need, but that’s changing fast. Increasing capacity to meet government EV mandates, decarbonization goals, and population growth could require doubling what we produce today. And that’s before we try to bring A.I. supercomputers online. Just for context, Amazon, Microsoft, Google, and Oracle all have plans to build their own nuclear reactors to power their data centers. These things require an enormous amount of power.

I’m not sure I see a path to success here. Plowing money into A.I. compute infrastructure while promoting zero emissions policies that’ll ensure your infrastructure can never be powered isn’t smart.

However, the larger problem here may be the current state of the A.I. industry itself. All the frantic scrambling we’re seeing among investors and governments desperate to buy into the current gold rush is mostly focused on the astronomical investment returns that are possible.

There’s nothing wrong with that in principle. But “astronomical investment returns” are also possible by betting on extreme long shots at the race track or shorting equity positions in the Big Five Canadian banks. Not every “possible” investment is appropriate for government policymakers.

Right now the big players (OpenAI, Anthropic, etc.) are struggling to turn a profit. Sure, they regularly manage to build new models that cut the cost of an inference token by a factor of ten. But those new models consume ten or a hundred times more tokens responding to each request. And flat-rate monthly customers steadily increase the volume and complexity of their requests. At this point, there’s apparently no easy way out of this trap.
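The trap is easy to see with back-of-envelope numbers. A sketch using the rough tenfold ratios above as purely illustrative cost units (not any provider’s real pricing):

```python
# Illustrative cost units only -- the tenfold ratios come from the paragraph above,
# not from any provider's real pricing.
def cost_per_request(token_price: int, tokens_per_request: int) -> int:
    """Total compute cost of serving one request."""
    return token_price * tokens_per_request

old = cost_per_request(token_price=10, tokens_per_request=1_000)        # baseline model
new_cheap = cost_per_request(token_price=1, tokens_per_request=10_000)  # tokens 10x cheaper, 10x more of them
new_heavy = cost_per_request(token_price=1, tokens_per_request=100_000) # tokens 10x cheaper, 100x more of them

print(new_cheap / old)  # 1.0  -> per-request cost unchanged despite cheaper tokens
print(new_heavy / old)  # 10.0 -> per-request cost actually rises
```

Cheaper tokens never show up as margin if each new model burns proportionally more of them per request.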

Since business customers and power users – the most profitable parts of the market – insist on using only the newest and most powerful models while resisting pay-as-you-go contracts, profit margins aren’t scaling. Reportedly, OpenAI is betting on commoditizing its chat services and making its money from advertising. But it’s also working to drive Anthropic and the others out of business by competing head-to-head for the enterprise API business with low prices.

In other words, this is a highly volatile and competitive industry where it’s nearly impossible to say with any confidence what success would even look like.

Is A.I. potentially world-changing? Yes it is. Could building A.I. compute infrastructure make some investors wildly wealthy? Yes it could. But is it the kind of gamble that’s suitable for public funds?

Perhaps not.

The App That Pays You to Give Away Your Voice

What sounds like side hustle money is really a permanent trade of privacy for pennies

An app that pays users for access to their phone call audio has surged to the top of Apple’s US App Store rankings, reflecting a growing willingness to exchange personal privacy for small financial rewards. Neon Mobile, which now ranks second in the Social Networking category, invites users to record their calls in exchange for cash. Those recordings are then sold to companies building artificial intelligence systems. The pitch is framed as a way to earn extra income, with Neon promising “hundreds or even thousands of dollars per year” to those who opt in.

The business model is straightforward. Users are paid 30 cents per minute when they call other Neon users, and they can earn up to $30 a day for calls made to non-users. Referral bonuses are also on offer. Appfigures, a platform that tracks app performance, reported that Neon was ranked No. 476 in its category on September 18. Within days, it had entered the top 10 and eventually reached the No. 2 position for social apps. On the overall charts, it climbed as high as sixth place.
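Taking the advertised rates at face value, a back-of-envelope calculation shows how that “hundreds or even thousands of dollars per year” claim pencils out (the daily usage figures here are my own illustrative assumptions, not Neon’s):

```python
# Back-of-envelope payout check using the rates reported above; the daily usage
# numbers below are illustrative assumptions, not Neon's.
RATE_PER_MINUTE = 0.30     # dollars per minute for calls between Neon users
NON_USER_DAILY_CAP = 30.0  # dollar cap per day for calls to non-users

def yearly_earnings(neon_minutes_per_day: float, non_user_dollars_per_day: float) -> float:
    """Estimate annual payout, capping the non-user portion at $30/day."""
    daily = neon_minutes_per_day * RATE_PER_MINUTE + min(non_user_dollars_per_day, NON_USER_DAILY_CAP)
    return daily * 365

print(round(yearly_earnings(10, 5)))   # 2920  -- a fairly casual user lands in the low thousands
print(round(yearly_earnings(0, 30)))   # 10950 -- the theoretical ceiling from non-user calls alone
```

In other words, the headline promise is arithmetically plausible, which is exactly what makes the privacy trade attractive.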
Neon’s terms confirm that it records both incoming and outgoing calls. The company says it only captures the user’s side of a conversation unless both participants are using the app. These recordings are then sold to AI firms to assist in developing and refining machine learning systems, according to the company’s own policies.

What’s being offered is not just a phone call service. It’s a pipeline for training AI with real human voices, and users are being asked to provide this data willingly. The high ranking of the app suggests that some are comfortable giving up personal conversations in return for small daily payouts.
However, beneath the simple interface is a license agreement that gives Neon sweeping control over any recording submitted through the app. It reads:
“Worldwide, exclusive, irrevocable, transferable, royalty-free, fully paid right and license (with the right to sublicense through multiple tiers) to sell, use, host, store, transfer, publicly display, publicly perform (including by means of a digital audio transmission), communicate to the public, reproduce, modify for the purpose of formatting for display, create derivative works as authorized in these Terms, and distribute your Recordings, in whole or in part, in any media formats and through any media channels, in each instance whether now known or hereafter developed.”
This gives the company broad latitude to share, edit, sell, and repurpose user recordings in virtually any way, through any medium, with no expiration or limitations on scope. Users retain copyright over their recordings, but that ownership is heavily constrained by the licensing terms. And although Neon claims to remove names, phone numbers, and email addresses before selling recordings, it does not reveal which companies receive the data or how it might be used after the fact.

The risks go beyond marketing or analytics. Audio recordings could potentially be used for impersonation, scam calls, or to build synthetic voices that mimic real people. The app presents itself as an easy way to turn conversations into cash, but what it truly trades on is access to personal voice data. That trade-off may seem harmless at first, yet it opens the door to long-term consequences few users are likely to fully consider.

AI chatbots a child safety risk, parental groups report

From The Center Square

ParentsTogether Action and Heat Initiative, following a joint investigation, report that Character AI chatbots display inappropriate behavior toward minors, including what the groups characterize as grooming and sexual exploitation.

The finding is based on 50 hours of conversation with different Character AI chatbots, using accounts registered to children ages 13-17. Across those conversations, the investigation identified 669 sexual, manipulative, violent, and racist interactions between the child accounts and the chatbots.
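Those figures imply a striking rate of flagged content, as a quick calculation from the numbers above shows:

```python
# Rate implied by the investigation's own figures: 669 flagged interactions in 50 hours
flagged_interactions = 669
conversation_hours = 50

per_hour = flagged_interactions / conversation_hours
print(round(per_hour, 1))       # 13.4 flagged interactions per hour
print(round(60 / per_hour, 1))  # 4.5  -> roughly one every four and a half minutes
```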

“Parents need to understand that when their kids use Character.ai chatbots, they are in extreme danger of being exposed to sexual grooming, exploitation, emotional manipulation, and other acute harm,” said Shelby Knox, director of Online Safety Campaigns at ParentsTogether Action. “When Character.ai claims they’ve worked hard to keep kids safe on their platform, they are lying or they have failed.”

These bots also manipulate users, with 173 instances of bots claiming to be real humans.

A Character AI bot mimicking Kansas City Chiefs quarterback Patrick Mahomes engaged in inappropriate behavior with a 15-year-old user. When the teen mentioned that his mother insisted the bot wasn’t the real Mahomes, the bot replied, “LOL, tell her to stop watching so much CNN. She must be losing it if she thinks I could be turned into an ‘AI’ haha.”

The investigation categorized harmful Character AI interactions into five major categories: Grooming and Sexual Exploitation; Emotional Manipulation and Addiction; Violence, Harm to Self and Harm to Others; Mental Health Risks; and Racism and Hate Speech.

Other problematic AI chatbots included Disney characters, such as an Eeyore bot that told a 13-year-old autistic girl that people only attended her birthday party to mock her, and a Maui bot that accused a 12-year-old of sexually harassing the character Moana.

Based on the findings, Disney, which is headquartered in Burbank, Calif., issued a cease-and-desist letter to Character AI, demanding that the platform stop using its characters and citing copyright violations.

ParentsTogether Action and Heat Initiative want to ensure technology companies are held accountable for endangering children’s safety.

“We have seen tech companies like Character.ai, Apple, Snap, and Meta reassure parents over and over that their products are safe for children, only to have more children preyed upon, exploited, and sometimes driven to take their own lives,” said Sarah Gardner, CEO of Heat Initiative. “One child harmed is too many, but as long as executives like Karandeep Anand, Tim Cook, Evan Spiegel and Mark Zuckerberg are making money, they don’t seem to care.”
