For technology adopters looking for the next big thing, “agentic AI” is the future. At least, that’s what the marketing pitches and tech industry T-shirts say.
What makes an artificial intelligence product “agentic” depends on who’s selling it. But the promise is usually that it’s a step beyond today’s generative AI chatbots.
Chatbots, however useful, are all talk and no action. They can answer questions, retrieve and summarize information, write papers and generate images, music, video and lines of code. AI agents, by contrast, are supposed to be able to take actions on a person’s behalf.
But if you’re confused, you’re not alone. Google searches for “agentic” have skyrocketed from near obscurity a year ago to a peak earlier this fall.
A new report released Tuesday by researchers at the Massachusetts Institute of Technology and the Boston Consulting Group, who surveyed more than 2,000 business executives around the world, describes agentic AI as a “new class of systems” that “can plan, act, and learn on their own.”
“They are not just tools to be operated or assistants waiting for instructions,” says the MIT Sloan Management Review report. “Increasingly, they behave like autonomous teammates, capable of executing multistep processes and adapting as they go.”
AI chatbots — such as the original ChatGPT that debuted three years ago this month — rely on systems called large language models that predict the next word in a sentence based on the huge trove of human writings they’ve been trained on. They can sound remarkably human, especially when given a voice, but they are effectively performing a sophisticated form of word completion.
That’s different from what AI developers — including ChatGPT’s maker, OpenAI, and tech giants like Amazon, Google, IBM, Microsoft and Salesforce — have in mind for AI agents.
“A generative AI-based chatbot will say, ‘Here are the great ideas’ … and then be done,” said Swami Sivasubramanian, vice president of Agentic AI at Amazon Web Services, in an interview this week. “It’s useful, but
