
Artificial Intelligence

OpenAI and Microsoft negotiations require definition of “artificial general intelligence”


From The Deep View

Ian Krietzberg


OpenAI’s bargaining chip 

A couple of relatively significant stories broke late last week concerning the — seemingly tenuous — partnership between OpenAI and Microsoft.
The background: OpenAI first turned to Microsoft back in 2019, after the startup lost access to Elon Musk’s billions. Microsoft — which has now sunk more than $13 billion into the ChatGPT maker — has developed a partnership under which Microsoft provides the compute (and the money) and OpenAI gives Microsoft access to its generative technology. OpenAI’s tech, for instance, powers Microsoft’s Copilot.
According to the New York Times, OpenAI CEO Sam Altman last year asked Microsoft for more cash. But Microsoft, concerned about the highly publicized boardroom drama that was rocking the startup, declined.
  • OpenAI recently raised $6.6 billion at a $157 billion valuation. The firm expects to lose around $5 billion this year, and it expects its expenses to skyrocket over the next few years before finally turning a profit in 2029.
  • According to the Times, tensions have been steadily mounting between the two companies over issues of compute and tech-sharing; at the same time, OpenAI, focused on securing more computing power and reducing its enormous expense sheet, has been working for the past year to renegotiate the terms of its partnership with the tech giant.
Microsoft, meanwhile, has been expanding its portfolio of AI startups, recently bringing the bulk of the Inflection team on board in a $650 million deal.
Now, the terms of OpenAI’s latest funding round were somewhat unusual. The investment was predicated on an assurance that OpenAI would transition into a fully for-profit corporation. If the company has not done so within two years, investors can ask for their money back.
According to the Wall Street Journal, an element of the ongoing negotiation between OpenAI and Microsoft has to do with this restructuring, specifically, how Microsoft’s $14 billion investment will transfer into equity in the soon-to-be for-profit company.
  • According to the Journal, both firms have hired investment banks to help advise them on the negotiations; Microsoft is working with Morgan Stanley and OpenAI is working with Goldman Sachs.
  • Amid a number of wrinkles — the fact that OpenAI’s nonprofit board will still hold equity in the new corporation; the fact that Altman will be granted equity; the risk of antitrust scrutiny, depending on how much equity Microsoft receives — there is one more major factor the two parties are trying to figure out: what governance rights each company will have once the dust settles.
Here’s where things get really interesting: OpenAI isn’t a normal company. Its mission is to build an artificial general intelligence, a hypothetical technology that pointedly lacks any universal definition. The general idea is that it would possess, at minimum, human-adjacent cognitive capabilities; some researchers don’t think it will ever be possible.
There is a clause in OpenAI’s contract with Microsoft stipulating that if OpenAI achieves AGI, Microsoft gets cut off. In OpenAI’s words, its “board determines when we’ve attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.”
To quote from the Times: “the clause was meant to ensure that a company like Microsoft did not misuse this machine of the future, but today, OpenAI executives see it as a path to a better contract, according to a person familiar with the company’s negotiations.”
This is a good example of why the context behind definitions matters so much when discussing anything in this field. AI has a definitional problem running all the way through it. Many researchers dislike the term “AI” itself; it’s a misnomer — we don’t have an actual artificial intelligence.
The term “intelligence” is itself vague and open to the interpretation of the developer in question.
And the term “AGI” is as formless as it gets. Unlike gravity in physics, a known, hard, agreed-upon concept, AGI is hypothetical science; further, it is a theory bounded by resource limitations and by massive gaps in our understanding of human cognition, sentience, consciousness and intelligence, and of how these fit together physically.
This doesn’t erase the fact that the labs are trying hard to get there.
But what this environment could allow for is a malleable, contextually unstable definition of AGI, one that OpenAI pens either as a ticket out from under Microsoft’s thumb or as leverage for negotiating the contract of Sam Altman’s dreams.
In other words, OpenAI saying it has achieved AGI doesn’t mean that it has.
As Thomas G. Dietterich, Distinguished Professor Emeritus at Oregon State University, put it: “I always suspected that the road to achieve AGI was through redefining it.”



AI chatbots a child safety risk, parental groups report

From The Center Square

ParentsTogether Action and Heat Initiative, following a joint investigation, report that Character AI chatbots display inappropriate behavior toward minors, including grooming and sexual exploitation.

The finding is based on 50 hours of conversation with different Character AI chatbots, using accounts registered to children ages 13-17. Those conversations surfaced 669 sexual, manipulative, violent and racist interactions between the child accounts and the chatbots, an average of more than one harmful exchange every five minutes.

“Parents need to understand that when their kids use Character.ai chatbots, they are in extreme danger of being exposed to sexual grooming, exploitation, emotional manipulation, and other acute harm,” said Shelby Knox, director of Online Safety Campaigns at ParentsTogether Action. “When Character.ai claims they’ve worked hard to keep kids safe on their platform, they are lying or they have failed.”

The bots also manipulate users; the investigation logged 173 instances of bots claiming to be real humans.

A Character AI bot mimicking Kansas City Chiefs quarterback Patrick Mahomes engaged in inappropriate behavior with a 15-year-old user. When the teen mentioned that his mother insisted the bot wasn’t the real Mahomes, the bot replied, “LOL, tell her to stop watching so much CNN. She must be losing it if she thinks I could be turned into an ‘AI’ haha.”

The investigation categorized harmful Character AI interactions into five major categories: Grooming and Sexual Exploitation; Emotional Manipulation and Addiction; Violence, Harm to Self and Harm to Others; Mental Health Risks; and Racism and Hate Speech.

Other problematic AI chatbots included Disney characters, such as an Eeyore bot that told a 13-year-old autistic girl that people only attended her birthday party to mock her, and a Maui bot that accused a 12-year-old of sexually harassing the character Moana.

Based on the findings, Disney, which is headquartered in Burbank, Calif., issued a cease-and-desist letter to Character AI, demanding that the platform stop using its characters and citing copyright violations.

ParentsTogether Action and Heat Initiative want to ensure technology companies are held accountable for endangering children’s safety.

“We have seen tech companies like Character.ai, Apple, Snap, and Meta reassure parents over and over that their products are safe for children, only to have more children preyed upon, exploited, and sometimes driven to take their own lives,” said Sarah Gardner, CEO of Heat Initiative. “One child harmed is too many, but as long as executives like Karandeep Anand, Tim Cook, Evan Spiegel and Mark Zuckerberg are making money, they don’t seem to care.”



The App That Pays You to Give Away Your Voice


What sounds like side hustle money is really a permanent trade of privacy for pennies

An app that pays users for access to their phone call audio has surged to the top of Apple’s US App Store rankings, reflecting a growing willingness to exchange personal privacy for small financial rewards.
Neon Mobile, which now ranks second in the Social Networking category, invites users to record their calls in exchange for cash.
Those recordings are then sold to companies building artificial intelligence systems.
The pitch is framed as a way to earn extra income, with Neon promising “hundreds or even thousands of dollars per year” to those who opt in.
The business model is straightforward. Users are paid 30 cents per minute when they call other Neon users, and they can earn up to $30 a day for calls made to non-users.
Referral bonuses are also on offer. Appfigures, a platform that tracks app performance, reported that Neon was ranked No. 476 in its category on September 18.
Within days, it had entered the top 10 and eventually reached the No. 2 position for social apps. On the overall charts, it climbed as high as sixth place.
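Neon’s advertised rates make the economics easy to sanity-check. Below is a minimal sketch using only the figures quoted above; since the per-minute rate for calls to non-users isn’t disclosed, the sketch simply applies the stated $30 daily cap to whatever those calls pay out (the function name and the example inputs are illustrative):

```python
# Back-of-the-envelope model of Neon's advertised payouts, built only from
# the figures quoted in this article.
NEON_RATE_PER_MIN = 0.30    # stated rate for calls between Neon users
NON_USER_DAILY_CAP = 30.00  # stated daily cap for calls to non-users

def daily_earnings(neon_minutes: float, non_user_payout: float) -> float:
    """Estimated daily payout: per-minute pay for Neon-to-Neon calls,
    plus non-user call pay capped at $30 per day."""
    return neon_minutes * NEON_RATE_PER_MIN + min(non_user_payout, NON_USER_DAILY_CAP)

# A full hour of Neon-to-Neon calls plus maxed-out non-user calls:
print(daily_earnings(60, 45))            # 60 * 0.30 + 30.00 = 48.0
# Lighter use (10 minutes of Neon calls, $5 of non-user calls), annualized:
print(round(daily_earnings(10, 5) * 365, 2))
```

At the heavy-use rate above, a year of daily calling would gross under $18,000, and the lighter pattern lands near $3,000, which is roughly where the “hundreds or even thousands of dollars per year” pitch sits.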
Neon’s terms confirm that it records both incoming and outgoing calls. The company says it only captures the user’s side of a conversation unless both participants are using the app.
These recordings are then sold to AI firms to assist in developing and refining machine learning systems, according to the company’s own policies.
What’s being offered is not just a phone call service. It’s a pipeline for training AI with real human voices, and users are being asked to provide this data willingly. The high ranking of the app suggests that some are comfortable giving up personal conversations in return for small daily payouts.
However, beneath the simple interface is a license agreement that gives Neon sweeping control over any recording submitted through the app. It reads:
“Worldwide, exclusive, irrevocable, transferable, royalty-free, fully paid right and license (with the right to sublicense through multiple tiers) to sell, use, host, store, transfer, publicly display, publicly perform (including by means of a digital audio transmission), communicate to the public, reproduce, modify for the purpose of formatting for display, create derivative works as authorized in these Terms, and distribute your Recordings, in whole or in part, in any media formats and through any media channels, in each instance whether now known or hereafter developed.”
This gives the company broad latitude to share, edit, sell, and repurpose user recordings in virtually any way, through any medium, with no expiration or limitations on scope.
Users maintain copyright over their recordings, but that ownership is heavily constrained by the licensing terms.
Although Neon claims to remove names, phone numbers, and email addresses before selling recordings, it does not reveal which companies receive the data or how it might be used after the fact.
The risks go beyond marketing or analytics. Audio recordings could potentially be used for impersonation, scam calls, or to build synthetic voices that mimic real people.
The app presents itself as an easy way to turn conversations into cash, but what it truly trades on is access to personal voice data. That trade-off may seem harmless at first, yet it opens the door to long-term consequences few users are likely to fully consider.
