Elon Musk, Steve Wozniak and other tech leaders urge government control of AI development

More than 1,000 influential figures in tech, including Elon Musk and Steve Wozniak, have signed a letter calling for a pause on new AI models. The rise of ChatGPT has shown both the good and the ill effects AI can have on society, and the signatories argue that an uncontrolled race to build ever more capable AI could harm humanity at large.

Steve Wozniak and Elon Musk seem to agree on one thing: AI development needs to slow down. In recent months, language models powered by artificial intelligence have racked up millions of users, quickly producing viral hits such as a passage of mock biblical scripture on how to remove a peanut butter sandwich from a VCR.

Elon Musk and other tech executives signed an open letter on Wednesday calling for a six-month halt in the development of advanced AI systems and a considerable expansion of government oversight, arguing that the risks of these systems far outweigh their advantages.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” the letter said.

“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter added.

Of course, this is not the first time guardrails for AI have been advocated. But as AI has advanced, so have the warnings that it should be used with care.

“It’s a very good idea to slow down the development of new models because if AI ends up being good for us, then there’s no harm waiting months or years, we’ll get to the endpoint anyway,” said James Grimmelmann, professor of digital and information law at Cornell University. “If it’s harmful, then we just bought ourselves extra time to strategize the best ways to respond and understand how to combat it.”

OpenAI’s popular ChatGPT generated a lot of buzz when it was made available for public testing in November of last year. Naturally, users immediately set out to test its limits, and it quickly became clear how disruptive the technology could be. ChatGPT passed medical licensing exams, provided guidelines for making explosives (after a little extra prompting), and even gave itself a different identity.

Several AI companies employ ethics experts who care about building the technology responsibly. Yet when the urgency to ship products is placed above their social value, even a dedicated team focused on developing AI responsibly cannot do its job.

Speed certainly appears to be the name of the game. OpenAI calculated that by moving swiftly enough it could beat out rivals and take the top spot in generative AI, and that has encouraged Microsoft, Google, and pretty much everyone else to do the same.

Releasing powerful AI models for the public to play with before they are ready isn’t making the technology better. People cannot find the best use cases for AI when developers are busy putting out fires their technology created and users are distracted by the noise.
