- Artificial General Intelligence (AGI) is closer than we think, an AI expert told Insider.
- The term AGI, which has been getting a lot of attention lately, still has no precise definition.
- But experts agree that AGI poses a danger to humanity and should be studied and regulated.
If the rise of artificial intelligence proves anything, it’s that the technology can be smarter than even the most knowledgeable experts expect.
Microsoft researchers say GPT-4, the most advanced language model powering ChatGPT to date, comes up with clever solutions to puzzles, such as how to stack a book, nine eggs, a laptop, a bottle, and a nail in a stable way. One of the researchers told Wired he was stunned when GPT-4 drew a unicorn in an obscure coding language.
Another study suggests that AI avatars could run their own virtual cities with little human intervention.
These capabilities may offer a glimpse into what some experts call Artificial General Intelligence (AGI): technology’s ability to achieve complex human capabilities, such as common sense and consciousness.
AI experts interviewed by Insider disagree about what AGI will actually look like, but they agree that we are making progress toward new forms of intelligence.
Ian Hogarth, co-author of the annual “State of AI” report and an investor in dozens of AI startups, defines AGI as “god-like AI”: super-intelligent computers that “learn and develop autonomously” and understand context without the need for human intervention. In theory, AGI-powered technology could develop a sense of self and “become a force beyond our control or understanding,” he told Insider.
One AGI researcher at an OpenAI competitor told Insider that AGI could look like the killer robot in the 2023 sci-fi movie M3GAN, in which an AI-powered, lifelike doll refuses to turn off when asked, pretends to be asleep, and develops her own moral complexities.
But Tom Everitt, an AGI safety researcher at DeepMind, Google’s AI arm, says machines don’t have to be self-aware to be superintelligent.
“I think one of the most common misconceptions is that ‘consciousness’ is necessary for intelligence,” Everitt told Insider. “Models being ‘self-aware’ need not be a prerequisite for these models to match or augment human-level intelligence.”
He defines AGI as AI systems that can solve any cognitive or human task in ways that are not limited to how they are trained. In theory, AGI could help scientists develop cures for disease, discover new forms of renewable energy, and “solve some of humanity’s greatest mysteries,” he said.
“AGI, when used correctly, can be an incredibly powerful tool that enables breakthroughs that transform our everyday lives,” Everitt said.
AI experts wonder when AGI will become a reality
Exactly what AGI is may still be a mystery, but AI experts agree that hints are beginning to emerge.
Geoffrey Hinton, known as the “Godfather of AI,” told CBS that AGI could be a reality in as little as five years. Earlier this month, he told the BBC that AI chatbots could soon be smarter than humans.
“Who knows how far the industry is from developing god-like AI,” Hogarth said. “I really don’t know if it will happen.” Still, tools like AutoGPT, a virtual agent that runs on GPT-4, can already be designed to order pizza and run their own marketing campaigns, according to Wired.
He said he has already seen hints of AGI, such as deepfakes used for malicious purposes and machines that can play chess better than grandmasters.
But they’re just hints, and “AI systems still lack long-term planning ability, memory, reasoning, and understanding of the physical world around them,” Everitt said. “We have a lot of work to do to figure out how to build these features into the system.”
AGI could render humanity obsolete if its risks are not addressed
According to experts, an important part of building AGI involves understanding and addressing the risks so that the technology can be deployed safely.
One AI study found that, as researchers fed more data into language models, the models became more likely to ignore human instructions and even expressed a desire not to be shut down. The finding suggests that AI could at some point become so powerful that humans cannot control it.
If this happens, Hogarth predicts, AGI “could lead to the obsolescence or extinction of mankind.”
That’s why researchers like Everitt study AGI safety, to anticipate “existential questions” such as “how humans can maintain control over AGI.” He said Google’s DeepMind “has a strong focus on ethics and safety research” to “ensure a responsible approach to the development of increasingly sophisticated AI.”
Hogarth says regulation is key to developing AI technology responsibly.
“Regulators need to keep a close eye on projects like OpenAI’s GPT-4, Google DeepMind’s Gato, or the open source project AutoGPT,” he said.
Many AI and machine learning professionals are calling for AI models to be open sourced so that the public can understand how they are trained and how they behave.
“We need to discuss these big issues as soon as possible,” Everitt said. “It’s important to be open to diverse perspectives and schools of thought when it comes to this.”