
Elon Musk and other tech leaders call for pause in ‘out of control’ AI race

binsfeldcyhawk2

Skynet here we come....


Some of the biggest names in tech are calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

Elon Musk was among the dozens of tech leaders, professors and researchers who signed the letter, which was published by the Future of Life Institute, a nonprofit backed by Musk.

The letter comes just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that underpins the viral AI chatbot tool, ChatGPT. In early tests and a company demo, the technology was shown drafting lawsuits, passing standardized exams and building a working website from a hand-drawn sketch.

The letter said the pause should apply to AI systems “more powerful than GPT-4.” It also said independent experts should use the proposed pause to jointly develop and implement a set of shared protocols for AI tools that are safe “beyond a reasonable doubt.”

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter said. “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

If a pause is not put in place soon, the letter said, governments should step in and institute a moratorium.

The wave of attention around ChatGPT late last year helped renew an arms race among tech companies to develop and deploy similar AI tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies. A long list of startups are also developing AI writing assistants and image generators.

Artificial intelligence experts have become increasingly concerned about AI tools’ potential for biased responses, the ability to spread misinformation and the impact on consumer privacy. These tools have also sparked questions around how AI can upend professions, enable students to cheat, and shift our relationship with technology.
 
A little terrifying...


"even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
 
The guy who lit 22 billion dollars on fire in the last year is purportedly concerned about AI tools’ potential for biased responses, the ability to spread misinformation and the impact on consumer privacy?

Now THAT is some funny shit.
 
AI is scary for a democracy. We argue about the truth right now, but in the near future, we will not be able to distinguish real news from fake news.

I don't think we even know the ways in which we will be impacted yet.
 
The guy who lit 22 billion dollars on fire in the last year is purportedly concerned about AI tools’ potential for biased responses, the ability to spread misinformation and the impact on consumer privacy?

Now THAT is some funny shit.
This is what I was afraid of...Musk puts his name on something and it pretty much ensures the left won't take it seriously.

We're doomed. :)
 
(image: Horse-out-of-the-Barn.jpg)
 
This is what I was afraid of...Musk puts his name on something and it pretty much ensures the left won't take it seriously.

We're doomed. :)

No we aren’t. It’s an issue.

I’m just not going to listen to a guy who has shown by his actions that he has no real interest in the problems associated with biased responses, the spread of misinformation, and user privacy violations.
 
No we aren’t. It’s an issue.

I’m just not going to listen to a guy who has shown by his actions that he has no real interest in the problems associated with biased responses, the spread of misinformation, and user privacy violations.
That's kind of what I meant...it isn't just Musk voicing concerns over biased responses, the spread of misinformation, and user privacy violations.

If it was just Musk you'd have a point.
 
AI is scary for a democracy. We argue about the truth right now, but in the near future, we will not be able to distinguish real news from fake news.

I don't think we even know the ways in which we will be impacted yet.
This is true for everything as we know it. Why take a picture when you can AI yourself into any scene, with no one able to distinguish between fantasy and reality? I recently saw some AI photos of Trump and Biden under arrest - what will happen when we're bombarded with those images and "stories" of their arrests?

What's the point of reading a resume if everyone has ChatGPT write their bullet points for them? I think employers would be better off handing a prospective employee a pen and a sheet of paper and making them write a resume in front of their interviewers.

There is a lot of good to it, but I'm not sure the good outweighs the bad.
 
Take this with a huge grain of salt.

While AI will definitely disrupt, and has some scary implications, the push to throw the brakes on it is largely driven by the fact that it has a chance to democratize services that are currently bottled up behind very specific providers who are loath to lose their monopoly.

Notice very specifically the movement against "open AI", which is essentially open source AI tools that any person or organization can use.

AI will decimate Google's search business (and the corresponding ad revenue). ChatGPT is already frequently WAY more effective than Googling in scores of scenarios...I use it all the time now. Google search has gotten suckier and suckier over the years to the point of being almost unusable in many cases. I've taken to adding "reddit" to my search terms just to find a discussion of the subject there, which is infinitely more useful than the raw results. ChatGPT largely just delivers the answer.
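For anyone curious what "just asking the model" looks like under the hood, here's a minimal sketch - assuming OpenAI's Python package (v1+), an OPENAI_API_KEY set in your environment, and a made-up question; the model name is only a placeholder for whatever chat model you have access to:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask the question directly instead of sifting through ten blue links.
response = client.chat.completions.create(
    model="gpt-4",  # placeholder: any chat model you can use
    messages=[{"role": "user", "content": "How do I get a stripped screw out without a drill?"}],
)
print(response.choices[0].message.content)

One call, one answer - versus a page of SEO spam you have to dig through yourself.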

But beyond just better answers for users, Allrecipes.com could use good AI tools trained on its vast database to deliver way more accurate results, none of which have a 2,000-word blog post before the ingredients. If you're thinking "What is something spicy and a little sweet I can make with frozen turkey," that's not something Allrecipes can deliver in standard site search, so right now you'd probably google it and take your chances. But with fully realized AI, you would go straight to Allrecipes.com, skipping Google entirely.
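To make that concrete, here's roughly what the simplest version of "AI over their own database" could look like - embedding-based search over a few made-up recipes. This is just a sketch using the sentence-transformers package and an off-the-shelf model, not anything Allrecipes actually runs:

import numpy as np
from sentence_transformers import SentenceTransformer

# A made-up slice of a recipe database.
recipes = [
    "Sweet and spicy turkey stir-fry with chili and honey",
    "Classic roast turkey with sage stuffing",
    "Turkey curry with coconut milk and mango chutney",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model
recipe_vecs = model.encode(recipes)              # one vector per recipe
query_vec = model.encode(["something spicy and a little sweet I can make with frozen turkey"])[0]

# Rank recipes by cosine similarity (dot product of unit-length vectors).
def unit(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

scores = unit(recipe_vecs) @ unit(query_vec)
for idx in np.argsort(scores)[::-1]:
    print(f"{scores[idx]:.2f}  {recipes[idx]}")

Swap the three strings for a real recipe catalog and you have the bones of search that understands what you meant instead of matching keywords.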

There are thousands of gatekeepers of different kinds that desperately want to keep AI behind locked doors and charge for its use, or be the only ones to reap the benefits. Disney will 100% use AI to create an unlimited, non-repeating, animated 24/7 Spiderman channel that can be run by one or two people and subscribed to for $9.99 a month. But they DEFINITELY don't want YOU to be able to single-handedly write and animate hundreds of episodes of "HROT-Man and his Grand Adventures with OP's Mom." They need to keep AI behind closed doors in order to retain the marketplace advantage of having thousands of writers and animators (before they then lay them off).

AI for me, but not for Thee.

There's a lot of fearmongering about Skynet and teenagers learning how to build a nuclear reactor in the basement. But make no mistake, the massive anti-AI push is driven primarily by the organizations trying to protect their monopolies and fiefdoms (which, not coincidentally, have massively stalled tech advancement in the past decade compared with the early days of the silicon chip age or the internet age).

And they've got allies in the "we need to control information" crowd that don't want you to be able to get an accurate answer to "Do vaccinated people still get COVID?" or whatever the current dangerous thought crimes of the day might be.

AI will be incredibly powerful. Is it going to be powerful like a nuclear weapon, or powerful like the printing press? Only one of those is best left under the strict supervision of our government and industrial overlords. I certainly think we need to decide which it is before we all agree that we don't deserve rights to the technology, or at the very least should only be allowed the version we are deemed worthy of and willing to pay for.
 