(NEW YORK) — The CEO of the company that created ChatGPT believes artificial intelligence will reshape society as we know it. He says it comes with real dangers but could also be “the greatest technology humanity has yet developed,” drastically improving our lives.
“We’ve got to be careful here,” said Sam Altman, CEO of OpenAI. “I think people should be happy that we are a little bit scared of this.”
Altman sat down for an exclusive interview with ABC News’ chief business, technology and economics correspondent Rebecca Jarvis to talk about the rollout of GPT-4 — the latest iteration of the AI language model.
In his interview, Altman was emphatic that OpenAI needs both regulators and society to be as involved as possible with the rollout of ChatGPT — insisting that feedback will help deter the potential negative consequences the technology could have on humanity. He added that he is in “regular contact” with government officials.
ChatGPT is an AI language model; the “GPT” stands for Generative Pre-trained Transformer.
Released only a few months ago, it is already considered the fastest-growing consumer application in history, hitting 100 million monthly active users. By comparison, TikTok took nine months to reach that many users and Instagram took nearly three years, according to a UBS study.
Watch the exclusive interview with Sam Altman on “World News Tonight with David Muir” at 6:30 p.m. ET on ABC.
Though “not perfect,” per Altman, GPT-4 scored in the 90th percentile on the Uniform Bar Exam. It also scored a near-perfect score on the SAT Math test, and it can now proficiently write computer code in most programming languages.
GPT-4 is just one step toward OpenAI’s goal of eventually building Artificial General Intelligence: AI systems that are generally smarter than humans.
Though he celebrates the success of his product, Altman acknowledged the potentially dangerous applications of AI that keep him up at night.
“I’m particularly worried that these models could be used for large-scale disinformation,” Altman said. “Now that they’re getting better at writing computer code, [they] could be used for offensive cyberattacks.”
A common sci-fi fear that Altman doesn’t share: AI models that don’t need humans, that make their own decisions and plot world domination.
“It waits for someone to give it an input,” Altman said. “This is a tool that is very much in human control.”
However, he said he does fear which humans could be in control. “There will be other people who don’t put some of the safety limits that we put on,” he added. “Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it.”
Russian President Vladimir Putin is quoted as telling students on their first day of school in 2017 that whoever leads the AI race would likely “rule the world.”
“So that’s a chilling statement for sure,” Altman said. “What I hope, instead, is that we successively develop more and more powerful systems that we can all use in different ways that integrate it into our daily lives, into the economy, and become an amplifier of human will.”
Concerns about misinformation
According to OpenAI, GPT-4 brings major improvements over the previous iteration, including the ability to understand images as input. Demos show GPT-4 describing what’s in someone’s fridge, solving puzzles, and even articulating the meaning behind an internet meme.
This feature is currently only accessible to a small set of users, including a group of visually impaired users who are part of its beta testing.
But a consistent issue with AI language models like ChatGPT, according to Altman, is misinformation: The program can give users factually inaccurate information.
“The thing that I try to caution people the most is what we call the ‘hallucinations problem,’” Altman said. “The model will confidently state things as if they were facts that are entirely made up.”
The model has this issue, in part, because it uses deductive reasoning rather than memorization, according to OpenAI.
“One of the biggest differences that we saw from GPT-3.5 to GPT-4 was this emergent ability to reason better,” Mira Murati, OpenAI’s Chief Technology Officer, told ABC News.
“The goal is to predict the next word – and with that, we’re seeing that there is this understanding of language,” Murati said. “We want these models to see and understand the world more like we do.”
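Murati’s point that the training objective is simply next-word prediction can be illustrated with a toy sketch. To be clear, this bigram word counter is an illustrative stand-in only, not how GPT-4 works: real models use large neural networks over tokens, trained on web-scale text.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus for the web-scale text that language models train on
corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count which words follow it (a simple bigram model)
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, more than any other word
```

Scaled up enormously and replaced with learned neural representations, this predict-the-next-word objective is what, in Murati’s words, yields “this understanding of language.”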
“The right way to think of the models that we create is a reasoning engine, not a fact database,” Altman said. “They can also act as a fact database, but that’s not really what’s special about them – what we want them to do is something closer to the ability to reason, not to memorize.”
Altman and his team hope “the model will become this reasoning engine over time,” he said, eventually being able to use the internet and its own deductive reasoning to separate fact from fiction. GPT-4 is 40% more likely to produce accurate information than its previous version, according to OpenAI. Still, Altman said relying on the system as a primary source of accurate information “is something you should not use it for,” and encourages users to double-check the program’s results.
Precautions against bad actors
The type of information ChatGPT and other AI language models contain has also been a point of concern: for instance, whether ChatGPT could tell a user how to make a bomb. The answer is no, per Altman, because of the safety measures coded into ChatGPT.
“A thing that I do worry about is … we’re not going to be the only creator of this technology,” Altman said.
There are a few solutions and safeguards to all of these potential hazards with AI, per Altman. One of them: Let society toy with ChatGPT while the stakes are low, and learn from how people use it.
Right now, ChatGPT is available to the public primarily because “we’re gathering a lot of feedback,” according to Murati.
As the public continues to test OpenAI’s applications, Murati says it becomes easier to identify where safeguards are needed.
“What are people using them for, but also what are the issues with it, what are the downfalls, and being able to step in [and] make improvements to the technology,” Murati said. Altman said it’s important that the public gets to interact with each version of ChatGPT.
“If we just developed this in secret — in our little lab here — and made GPT-7 and then dropped it on the world all at once … That, I think, is a situation with a lot more downside,” Altman said. “People need time to update, to react, to get used to this technology [and] to understand where the downsides are and what the mitigations can be.”
Regarding illegal or morally objectionable content, Altman said they have a team of policymakers at OpenAI who decide what information goes into ChatGPT, and what ChatGPT is allowed to share with users.
“[We’re] talking to various policy and safety experts, getting audits of the system to try to address these issues and put something out that we think is safe and good,” Altman added. “And again, we won’t get it perfect the first time, but it’s so important to learn the lessons and find the edges while the stakes are relatively low.”
Will AI replace jobs?
Among the concerns about the destructive capabilities of this technology is the replacement of jobs. Altman said AI will likely replace some jobs in the near future, and he worries about how quickly that could happen.
“I think over a couple of generations, humanity has proven that it can adapt wonderfully to major technological shifts,” said Altman. “But if this happens, you know, in a single-digit number of years, some of these shifts, that is the part I worry about the most.”
But he encourages people to look at ChatGPT as more of a tool, not as a replacement. He added that “human creativity is limitless, and we find new jobs. We find new things to do.”
Altman believes the ways ChatGPT can be used as a tool for humanity outweigh the risks.
“We can all have an incredible educator in our pocket that’s customized for us, that helps us learn,” Altman said. “We can have medical advice for everybody that is beyond what we can get today.”
ChatGPT as ‘co-pilot’
In education, ChatGPT has become controversial, as some students have used it to cheat on assignments. Educators are torn on whether the technology could serve as an extension of their own teaching or whether it deters students from learning for themselves.
“Education is going to have to change, but it’s happened many other times with technology,” said Altman, adding that students will be able to have a sort of teacher that goes beyond the classroom. “One of the ones that I’m most excited about is the ability to provide individual learning — great individual learning for each student.”
In any field, Altman and his team want users to think of ChatGPT as a “co-pilot,” an assistant that could help write extensive computer code or solve problems.
“We can have that for every profession, and we can have a much higher quality of life, like standard of living,” Altman said. “But we can also have new things we can’t even imagine today — so that’s the promise.”
Copyright © 2023, ABC Audio. All rights reserved.