Should We Worry About the Coming of Artificial General Intelligence?

Last week, a group of technologists, including Elon Musk, called on artificial intelligence (AI) labs to pause training of the most powerful systems for at least six months, citing “risks to society and humanity.” The open letter, which has already been signed by more than 3,100 people, including Apple co-founder Steve Wozniak, focuses mainly on OpenAI’s new GPT-4 model, which, according to the letter, should not be trained further until it is subject to oversight standards.

Beyond those high-profile names, the initiative has drawn support from technologists, chief executives, financiers, students, psychologists, doctors, software engineers, and teachers worldwide.

On Friday, Italy became the first Western country to ban ChatGPT over privacy concerns. Additionally, the tool suffered a data breach last month that exposed users’ conversations and payment information for a small percentage of customers.

Italian authorities are investigating whether the chatbot violates the European General Data Protection Regulation (GDPR), although, according to a BBC report, OpenAI says it complies with the law.

Industry expectations are that GPT-4 will soon be followed by GPT-5, which some speculate could be an AI able to think for itself; at that point, the algorithm would keep getting smarter over time. Around 2016, researchers began tracking trends in the computation used to train AI models in order to forecast how quickly these systems would develop.

For now, no systems rival GPT-4 in training computation, according to Jaime Sevilla, director of Epoch. But that will change. Anthony Aguirre, a professor of physics at UC Santa Cruz, believes there is no reason to think GPT-4 will not continue to double in capacity annually. “Only the labs know what calculations they are running, but the trend is unmistakable.”

Meanwhile, on its corporate blog, Microsoft has described AGI, an AI capable of learning any task or subject, as the “big dream of the computer industry.” “AGI doesn’t exist yet; there is a strong debate in the industry about how to create it and whether it can be done at all,” said Bill Gates, co-founder of the Redmond-based technology company. “Now, with the advent of machine learning and computing power, sophisticated AI is a reality, and it will get better very fast.”

Muddu Sudhakar, CEO of Aisera, says that only a few companies, such as OpenAI and DeepMind, are focused on achieving AGI, and they still have a long way to go. “There are so many tasks that AI systems cannot do that humans achieve naturally, such as reasoning with common sense, knowing what a fact is, and understanding abstract concepts like justice, politics, and philosophy. It will take a lot of breakthroughs and innovations to reach AGI. But if this is achieved, it seems that such a system would be able to replace humans.”

“This would certainly be disruptive, and there would have to be a lot of protective measures to prevent the AGI from taking full control,” Sudhakar said. “But for now, this is likely in the distant future. It’s more in the realm of science fiction.” Not everyone agrees, however.

Artificial intelligence technology and chatbot assistants have made, and will continue to make, inroads in almost every industry. The technology can create efficiencies and take over mundane tasks, freeing knowledge workers and others to focus on more important work.

For example, large language models (LLMs), the algorithms that power chatbots, can analyze millions of alerts, online chats, and emails, and can flag phishing web pages and potentially malicious executables. LLM-powered chatbots can also write essays and marketing campaigns and suggest computer code from simple user prompts.
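To make that concrete, here is a minimal sketch of using an LLM to triage a suspicious email. It assumes access to the OpenAI Python SDK (openai >= 1.0) and an API key in the environment; the prompt wording and the one-word labels are illustrative choices, not a vetted phishing filter.

# Minimal sketch: asking a GPT-4-class model to triage a suspicious email.
# Assumes the OpenAI Python SDK (openai >= 1.0) and OPENAI_API_KEY set in the
# environment; the prompt and labels are illustrative only.
from openai import OpenAI

client = OpenAI()

def triage_email(subject: str, body: str) -> str:
    """Return the model's one-word verdict: PHISHING, SUSPICIOUS, or BENIGN."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "You are a security analyst. Classify the email as "
                        "PHISHING, SUSPICIOUS, or BENIGN. Reply with one word."},
            {"role": "user", "content": f"Subject: {subject}\n\n{body}"},
        ],
    )
    return response.choices[0].message.content.strip()

print(triage_email("Urgent: verify your account",
                   "Click http://example.com/login within 24 hours to avoid suspension."))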

LLM-powered chatbots are natural-language processors that essentially predict the next words after being prompted by a user. If a user asked one to create a poem about a person sitting on a beach on Nantucket, the AI would string together words, sentences, and paragraphs that are the best responses based on the prior training provided by its developers.
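The sketch below illustrates that next-word mechanism at toy scale: a tiny hand-written probability table stands in for the billions of learned parameters in a real LLM, and the beach/Nantucket continuations are made up for the example.

# Toy illustration of next-word prediction: a hand-built bigram table stands in
# for the learned parameters of a real LLM. The probabilities are hypothetical.
import random

# P(next word | current word), which a real system would learn from training text.
bigram_probs = {
    "person": {"sitting": 0.6, "walking": 0.4},
    "sitting": {"on": 0.9, "quietly": 0.1},
    "on": {"a": 0.8, "the": 0.2},
    "a": {"beach": 0.7, "chair": 0.3},
    "beach": {"in": 0.6, "at": 0.4},
    "in": {"Nantucket": 1.0},
}

def generate(seed: str, max_words: int = 6) -> str:
    words = [seed]
    for _ in range(max_words):
        options = bigram_probs.get(words[-1])
        if not options:
            break  # no known continuation: stop generating
        # Sample the next word in proportion to its probability.
        next_word = random.choices(list(options), weights=options.values())[0]
        words.append(next_word)
    return " ".join(words)

print(generate("person"))  # e.g. "person sitting on a beach in Nantucket"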

But LLMs have also made high-profile mistakes and can produce “hallucinations,” in which the prediction engine goes off the rails and produces strange responses.

If LLM-based AI with billions of tunable parameters can go off the rails, how much greater would the risk be once AI no longer needs to be taught by humans and can think for itself? Much greater, says Avivah Litan, vice president and distinguished analyst at Gartner Research.

Litan believes that AI development labs are advancing at breakneck speed without any oversight, which could cause AGI to become uncontrollable. AI labs, she argues, have “progressed without putting in place the right tools for users to monitor what’s going on. I think it’s going a lot faster than anyone expected.”

The current concern is that artificial intelligence technology for use by corporations is being released without the tools users need to determine whether the technology is generating accurate or inaccurate information.

“Right now, we’re talking about all the good guys who have all this innovative ability, but the bad guys have it too,” Litan said. “So we have to have these watermarking systems and know what is real and synthetic. And we can’t rely on detection; we must have content authentication. Otherwise, misinformation will spread like wildfire.”
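One way to read the content-authentication idea is sketched below: the generator attaches a cryptographic tag to each output so consumers can verify provenance, rather than trying to detect synthetic text after the fact. The HMAC here is a stand-in for whatever signing or watermarking scheme a real deployment would use, and the key handling is deliberately simplified.

# Sketch of "content authentication": tag model output at generation time so a
# reader can later verify it came, unmodified, from a known generator. HMAC is
# a stand-in for a production provenance scheme; the key below is illustrative.
import hashlib
import hmac

SIGNING_KEY = b"replace-with-a-securely-stored-secret"  # illustrative only

def sign_output(text: str) -> str:
    """Compute an authentication tag for a piece of generated content."""
    return hmac.new(SIGNING_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_output(text: str, tag: str) -> bool:
    """Check that the text is unmodified and was tagged by the known generator."""
    return hmac.compare_digest(sign_output(text), tag)

content = "Model-generated paragraph..."
tag = sign_output(content)
print(verify_output(content, tag))        # True: authentic
print(verify_output(content + "!", tag))  # False: tampered or unauthenticated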

For example, Microsoft this week released Security Copilot, which is based on OpenAI’s GPT-4 large language model. The tool is an AI chatbot for cybersecurity experts that helps them quickly detect and respond to threats and better understand the broader threat landscape.

The problem is that “you as a user have to go in and identify any mistakes it makes,” Litan said. “That is unacceptable. They should have a scoring system that says this output is likely 95% true, so it has a 5% chance of error, and this one has a 10% chance of error. They don’t give you any idea of performance to see if it’s something you can trust.”

A bigger concern going forward is that GPT-4’s creator, OpenAI, will release an AGI-capable version. At that point, it may be too late to control the technology.

One possible solution, Litan suggests, is to release two models with each generative AI tool: one to generate responses and another to check the accuracy of the first. “That could do an excellent job of ensuring that a model puts out something you can trust. You can’t expect a human being to go through all this content and decide what’s true, but if you give them other models that do the checking… that would allow users to monitor performance.”
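A minimal sketch of that two-model arrangement, combined with the percentage-style score Litan describes, might look like the following; generate_model and review_model are hypothetical callables standing in for whichever LLM endpoints a product actually uses, and the 0–100 scoring convention is illustrative.

# Sketch of a generator/checker pair: one model drafts an answer, a second
# model grades it, and the score is surfaced to the user alongside the answer.
# Both models are hypothetical callables; the scoring scheme is illustrative.
from typing import Callable, Tuple

def answer_with_score(
    question: str,
    generate_model: Callable[[str], str],
    review_model: Callable[[str], str],
) -> Tuple[str, int]:
    draft = generate_model(question)

    # Ask the second model to grade the first model's draft.
    review_prompt = (
        f"Question: {question}\nAnswer: {draft}\n"
        "Rate the factual accuracy of the answer from 0 to 100. Reply with a number."
    )
    try:
        score = int(review_model(review_prompt).strip())
    except ValueError:
        score = 0  # treat an unparseable review as "unknown / do not trust"

    return draft, score

# Usage: show the estimated accuracy next to the output, as Litan suggests.
# answer, score = answer_with_score("Who directs Epoch?", gen_llm, check_llm)
# print(f"{answer}\n(estimated accuracy: {score}%)")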

In 2022, Time reported that OpenAI had outsourced work to low-wage workers in Kenya to determine whether its GPT LLM was producing safe content. Workers hired by Sama, a San Francisco-based firm, were reportedly paid $2 an hour and asked to review GPT responses that were “prone to violent, sexist, and even racist comments.”

“And this is how you are protecting us? Paying people two dollars an hour while they are getting sick from the work? It is inefficient and immoral,” Litan said.

“AI developers need to work with policymakers, and that should include, at a minimum, new and capable regulatory authorities,” Litan continued. “I don’t know if we’ll ever get there, but the regulators can’t keep up with this, and that was predicted years ago. We need to devise a new type of authority.”
