
Artificial Intelligence

AI is another reason why Canada needs to boost the energy supply


From Resource Works

Massive amounts of energy are required to keep up with AI innovation, and Canada risks being unable to supply them

Artificial Intelligence is already one of the most important technologies of our time, and its development has been pushing innovation at a breakneck pace across huge swathes of the economy. Smart assistants now operate, albeit in a limited fashion, as secretaries for those who need help in the office, while autonomous vehicle capabilities keep improving.

It is a remarkable and world-changing time.

Just as playing a video game, turning on a light, or starting a car requires energy, so does AI. To say that AI’s appetite for energy is ravenous is an understatement, and Canadian governments must understand the challenge that comes with it.

Energy shortages are a growing threat to Canada’s economic security and, yes, our standard of living. Failure to keep up with demand means importing more energy at a cost, or facing energy blackouts, in which case Canada will fall behind in far more than just AI.

New AI models seem to roll out every month, especially in machine learning and generative AI. OpenAI’s ChatGPT and Google’s Bard require huge amounts of computing power to work. Training GPT-4, the advanced language model behind ChatGPT, consumed thousands of megawatt-hours of electricity, comparable to the energy usage of urban centres.

A single ChatGPT query requires roughly ten times the energy of a Google search, revealing the massive needs of AI technology. AI is not just another internet search extension or downloadable app; it is an entirely new industry.

AI models are trained and run in data centers, which are central to this energy dilemma. The sheer power consumption in data centers is ballooning, and some estimates warn that the world’s data center energy demand will surge by 160 percent by 2030.

The International Energy Agency (IEA) has reported that AI and data centers already consume 1 to 2 percent of global electricity, a figure expected only to climb as more companies embrace AI-driven technology. As much as AI is driving digital innovation, it is also consuming electricity at a rate we will have to match.

Canada’s energy security is being seriously challenged by rising demand, with or without AI. Historically, Canadians have enjoyed the fruits of abundant, cheap energy generated by hydroelectricity in BC and Quebec, or nuclear power in Ontario. Times, and weather, have unfortunately changed.

A large and growing population, electrifying economies, and the weakening of Canada’s legacy energy sources are pushing the country to its limits regarding power supply.

The current federal government wants Canada to achieve net-zero emissions by 2050, which means electricity generation will have to double over the next 25 years. Canada is already dealing with electricity shortages: in British Columbia, demand for hydroelectricity is expected to rise 15 percent over the next six years; Manitoba is projecting a shortfall by 2029; and Ontario is racing to build new nuclear power plants to avert an energy crisis of its own by 2029.

AI can help Canadians craft solutions to the country’s looming energy problems, serving as a valuable research aid for modelling and processing data. However, that too will add to the rogue wave of energy consumption that AI innovation has created.

As the constant developments in AI make clear, the technology is here to stay, and so are Canada’s energy shortfalls.

If AI is going to contribute to the surge in energy demand, then it only makes sense that it becomes a vital tool in the search for solutions, and we need those solutions now.

Artificial Intelligence

The App That Pays You to Give Away Your Voice

What sounds like side hustle money is really a permanent trade of privacy for pennies

An app that pays users for access to their phone call audio has surged to the top of Apple’s US App Store rankings, reflecting a growing willingness to exchange personal privacy for small financial rewards.
Neon Mobile, which now ranks second in the Social Networking category, invites users to record their calls in exchange for cash.
Those recordings are then sold to companies building artificial intelligence systems.
The pitch is framed as a way to earn extra income, with Neon promising “hundreds or even thousands of dollars per year” to those who opt in.
The business model is straightforward. Users are paid 30 cents per minute when they call other Neon users, and they can earn up to $30 a day for calls made to non-users.
Referral bonuses are also on offer. Appfigures, a platform that tracks app performance, reported that Neon was ranked No. 476 in its category on September 18.
Within days, it had entered the top 10 and eventually reached the No. 2 position for social apps. On the overall charts, it climbed as high as sixth place.
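The advertised payout terms amount to simple arithmetic. A minimal sketch in Python, assuming the same per-minute rate applies to non-user calls (the article states only the $30 daily cap for those); all function and variable names here are illustrative, not Neon’s:

```python
DOLLARS_PER_MINUTE = 0.30    # advertised rate for calls to other Neon users
DAILY_CAP_NON_USERS = 30.00  # advertised daily cap on calls to non-users

def daily_earnings(neon_minutes: float, non_user_minutes: float) -> float:
    """Estimate one day's payout under the advertised terms."""
    neon_pay = neon_minutes * DOLLARS_PER_MINUTE
    # Assumption: non-user calls pay the same per-minute rate, capped at $30/day.
    non_user_pay = min(non_user_minutes * DOLLARS_PER_MINUTE, DAILY_CAP_NON_USERS)
    return neon_pay + non_user_pay

# Under these assumptions, 100 minutes of non-user calls hits the daily cap.
print(daily_earnings(0, 100))   # 30.0
print(daily_earnings(60, 200))  # 48.0
```

Even at the cap, that works out to well under the cost of the privacy being surrendered, which is the trade-off the article describes.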
Neon’s terms confirm that it records both incoming and outgoing calls. The company says it only captures the user’s side of a conversation unless both participants are using the app.
These recordings are then sold to AI firms to assist in developing and refining machine learning systems, according to the company’s own policies.
What’s being offered is not just a phone call service. It’s a pipeline for training AI with real human voices, and users are being asked to provide this data willingly. The high ranking of the app suggests that some are comfortable giving up personal conversations in return for small daily payouts.
However, beneath the simple interface is a license agreement that gives Neon sweeping control over any recording submitted through the app. It reads:
“Worldwide, exclusive, irrevocable, transferable, royalty-free, fully paid right and license (with the right to sublicense through multiple tiers) to sell, use, host, store, transfer, publicly display, publicly perform (including by means of a digital audio transmission), communicate to the public, reproduce, modify for the purpose of formatting for display, create derivative works as authorized in these Terms, and distribute your Recordings, in whole or in part, in any media formats and through any media channels, in each instance whether now known or hereafter developed.”
This gives the company broad latitude to share, edit, sell, and repurpose user recordings in virtually any way, through any medium, with no expiration or limitations on scope.
Users maintain copyright over their recordings, but that ownership is heavily constrained by the licensing terms.
Although Neon claims to remove names, phone numbers, and email addresses before selling recordings, it does not reveal which companies receive the data or how it might be used after the fact.
The risks go beyond marketing or analytics. Audio recordings could potentially be used for impersonation, scam calls, or to build synthetic voices that mimic real people.
The app presents itself as an easy way to turn conversations into cash, but what it truly trades on is access to personal voice data. That trade-off may seem harmless at first, yet it opens the door to long-term consequences few users are likely to fully consider.

Artificial Intelligence

UK Police Chief Hails Facial Recognition, Outlines Drone and AI Policing Plans


Any face in the crowd can be caught in the dragnet of a digital police state.

The steady spread of facial recognition technology onto Britain’s streets is drawing alarm from those who see it as a step toward mass surveillance, even as police leaders celebrate it as a powerful new weapon against crime.
Live Facial Recognition (LFR) is a system that scans people’s faces in public spaces and compares them against watchlists.
Civil liberties groups warn it normalizes biometric monitoring of ordinary citizens, while the Metropolitan Police insist it is already producing results.
Britain’s senior police leadership is promoting these biometric and artificial intelligence systems as central to the future of policing, with commissioner Sir Mark Rowley arguing that such tools are already transforming the way the Met operates.
Speaking to the TechUK trade association, Rowley described LFR as a “game-changing tool” and pointed to more than 700 arrests linked to its use so far this year.
Camera vans stationed on streets have been deployed to flag people wanted for serious crimes or those breaking license conditions.
Rowley highlighted a recent deployment at the Notting Hill Carnival, where he joined officers using LFR.
“Every officer I spoke to was energized by the potential,” he said to The Sun. According to the commissioner, the weekend brought 61 arrests, including individuals sought in cases of serious violence and offenses against women and girls.
Rowley claimed that the technology played “a critical role” in making the carnival safer.
Beyond facial recognition, Rowley spoke of expanding the Met’s reliance on drones. “From searching for missing people, to arriving quickly at serious traffic incidents, or replacing the expensive and noisy helicopter at large public events,” he said, “done well, drones will be another tool to help officers make faster, more informed decisions on the ground.”
The commissioner also promoted the V100 program, which draws on data analysis to focus resources on those considered the highest risk to women.
He said this initiative has already led to the conviction of more than 160 offenders he described as “the most prolific and predatory” in London.
Artificial Intelligence is being tested in other areas too, particularly to review CCTV footage.
Rowley noted the labour involved in manually tracing suspects through crowded areas. “Take Oxford Street, with 27 junctions—a trawl to identify a suspect’s route can take two days,” he explained.
“Now imagine telling AI to find clips of a male wearing a red baseball cap between X and Y hours, and getting results in hours. That’s game-changing.”
While the Met portrays these systems as advances in crime prevention, their deployment raises questions about surveillance creeping deeper into everyday life.
Expansions in facial recognition, drone monitoring, and algorithmic analysis are often introduced as matters of efficiency and safety, but they risk building an infrastructure of constant observation where privacy rights are gradually eroded.
Shaun Thompson’s case has already been cited by campaigners as evidence of the risks that come with rolling out facial recognition on public streets.
He was mistakenly identified by the technology, stopped, and treated as though he were a wanted suspect before the error was realized.
Incidents like this highlight the danger of false matches and the lack of safeguards around biometric surveillance.
For ordinary people, the impact is clear: even if you have done nothing wrong, you can still find yourself pulled into a system that treats you as guilty first and asks questions later.
