SAN FRANCISCO — Less than a year into its meteoric rise, the company behind ChatGPT unveiled the future it has in mind for its artificial intelligence technology on Monday, launching a new line of chatbot products that can be customized for a variety of tasks.
“Eventually, you’ll just ask the computer for what you need and it’ll do all of these tasks for you,” said OpenAI CEO Sam Altman to a cheering crowd of more than 900 software developers and other attendees. It was OpenAI’s inaugural developer conference, embracing a Silicon Valley tradition for technology showcases that Apple helped pioneer decades ago.
At the event, held in a cavernous former Honda dealership in OpenAI’s hometown of San Francisco, the company unveiled a new model called GPT-4 Turbo that it says is more capable and can retrieve information about world and cultural events as recent as April 2023 — unlike previous versions, which couldn’t answer questions about anything after 2021.
It also touted a new version of its AI model called GPT-4 with vision, or GPT-4V, that enables the chatbot to analyze images. In a September research paper, the company showed how the tool could describe what’s in images to people who are blind or have low vision.
ChatGPT has more than 100 million weekly active users and 2 million developers, spread “entirely by word of mouth,” Altman said.
He also unveiled a new line of products called GPTs — emphasis on the plural — that will enable users to make their own customized versions of ChatGPT for specific tasks.
Alyssa Hwang, a computer science researcher at the University of Pennsylvania who got an early glimpse at the GPT vision tool, said it was “so good at describing a whole lot of different kinds of images, no matter how complicated they were,” but also needed some improvements.
For instance, in trying to test its limits, Hwang paired an image of steak with a caption about chicken noodle soup; the mismatched text confused the chatbot into describing the steak as having something to do with chicken noodle soup.
“That could lead to some adversarial attacks,” Hwang said. “Imagine if you put some offensive text or something like that in an image, you’ll end up getting something you don’t want.”
That’s partly why OpenAI has given researchers such as Hwang early access to help discover flaws in its newest tools before their wide release. Altman on Monday described the company’s approach as “gradual iterative deployment” that leaves time to address safety risks.
The path to OpenAI’s debut DevDay has been an unusual one. Founded as a nonprofit research