Despite all the ways that artificial intelligence promises to improve our lives, many consumers feel anxious about and averse to AI-powered products and services. For marketers and product managers, it’s vital to understand what is driving that resistance to adoption. Julian De Freitas is an assistant professor in the marketing unit at Harvard Business School. He has identified five main ways people see artificial intelligence negatively: as opaque, emotionless, rigid, autonomous, and not human enough. Through real-life cases and the latest research, he explains how companies can soothe anxieties and encourage consumer adoption. De Freitas is the author of the HBR article “Why People Resist Embracing AI.”
CURT NICKISCH: Welcome to the HBR IdeaCast from Harvard Business Review. I’m Curt Nickisch.
Artificial intelligence is changing business as we know it, but the extent of those changes depends on two things. First, how good the technology gets, and second, how much companies adopt it and consumers actually buy it, and there’s a gap there. According to a Gartner survey, for instance, four out of five corporate strategists say AI will be critical to their success in the near future, but only one out of five said they actually use AI in their day-to-day work.
That was a 2023 survey, and the numbers are probably different today. But the point remains: adoption is lagging, and a key reason for that is perception. Many people view AI and automation negatively and resist using them.
Today’s guest has studied the psychological barriers to adoption and explains what managers can do to overcome them. Julian De Freitas is an assistant professor at Harvard Business School, and he wrote the HBR article “Why People Resist Embracing AI.” Julian, hi.
JULIAN DE FREITAS: Hi, Curt. Thanks for having me on the show.
CURT NICKISCH: Julian, the adoption of technology is an age-old experience for people. We’ve resisted new technologies many times in the past and eventually adopted them anyway. Is AI any different from other technologies when it comes to resistance to adoption?
JULIAN DE FREITAS: I think the answer is yes, and we’re finding in many cases AI is different from a consumer perception standpoint. What we’re seeing is that in many use cases, people perceive AI as though it is more human-like, as opposed to being this sort of non-living technology. And this has profound implications for a number of marketing problems, such as overcoming barriers to adoption, but also for new ways of unlocking value from the technology that aren’t possible with previous technologies.
And then of course, there are also interesting challenges around the risks, because it’s not actually the case that this is another human. It does fall short of humans in various ways. And so if we treat it as a full-fledged human being, that could also create challenges and risks.
CURT NICKISCH: What are the main reasons people see AI as something to drag their feet on, something they want to resist?
JULIAN DE FREITAS: We try to narrow it down to five main barriers. At a high level, I think you can summarize them as: AI is often seen as human-like, but not human enough. Or conversely, it is seen as a little bit too human, too capable. And then there’s one last barrier that’s just about how it’s really difficult to understand. So, the five barriers that my colleagues and I have identified through our research are that AI is opaque, emotionless, rigid, autonomous, and not human.
CURT NICKISCH: So, let’s talk about these roadblocks one by one. Starting with AI being too opaque, what does that mean?
JULIAN DE FREITAS: So this is the idea that AI is a black box. There are inputs that come in, let’s say an email, and then outputs that come out. It tells you whether the email is spam or not, but you don’t really understand how it got from the input to the output. Or there are these really sophisticated chatbots, and you just can’t really predict what they’re going to do in any new situation. And admittedly, there are many products that we don’t understand, but this is particularly acute for AI, given the complexity of the models. Some of the latest models operate using billions or even trillions of interacting parameters, making it impossible even for the makers of the technology to understand exactly how they work.
CURT NICKISCH: I remember seeing a video about autopilot on a plane, where the pilots said to each other, “What is it doing now?” Just that sense that it’s doing something for a reason, but you can’t quite figure out why it’s doing what it’s doing. So, what do you suggest companies or product designers do in this situation?
JULIAN DE FREITAS: So one obvious intervention is for companies to explain how their systems work, especially by answering the question: why is the system doing what it’s doing? For example, an automated vehicle might communicate that it is stopping because there is an obstacle ahead, as opposed to just saying that the vehicle is stopping now.
Another solution: sometimes companies will ease stakeholders into the more difficult-to-explain forms of AI. One example, which a colleague, Sunil Gupta, wrote a case about, is Miroglio Fashion, an Italian women’s apparel company. They were dealing with the problem of forecasting the inventory they would need to have on hand in their stores. Previously, this was something the local store manager was responsible for, but they realized that they could get more accurate at it, and that this would translate into higher revenues, if they could use some kind of AI model.
They had two options. One was to use the latest off-the-shelf model, which operated in a way that was hard to understand: it could extract all sorts of features about the clothing that you and I can’t even perfectly verbalize, and use those to forecast what the store should order for the next week. But there was also a simpler model, which would use easy-to-verbalize features, such as the colors or shapes of the clothing, to predict what to order for the next week.
And so, even though the first type of model, the more sophisticated one, performed much better, they realized that if this was going to be implemented, they needed buy-in from the store managers. The store managers needed to actually use the predictions from the model. For that reason, they initially rolled out the simpler model to a subset of their stores. And the store managers did use it; the stores that had this model performed better than the ones that didn’t.
And after doing this for some time, they eventually felt ready to upgrade to the more capable model, which they did. In some ways, they ended up with a model that is still not very easy for you and me, or for the store managers, to understand. But what they did was train their employees to get used to the idea of working alongside this kind of technology to make these kinds of predictions. So they kept the human factor in mind.
CURT NICKISCH: That’s really interesting. So, what about this critique of AI that it’s emotionless?
JULIAN DE FREITAS: At the heart of this barrier is the belief that AI is incapable of feeling emotions. There are many domains that are seen as depending on this ability, domains where some sort of subjective opinion is very important. If you’re introducing AI into an offering in a domain that is seen as relying on emotions, you’re going to have a hard time getting people comfortable using AI there.
CURT NICKISCH: This also makes me think of automated voices, right? On your smartphones or smart speakers, where a lot of companies use a woman’s voice. That doesn’t make it right, but they do it because it’s perceived as more trustworthy, more engaging. Is that what you’re talking about here?
JULIAN DE FREITAS: Yeah, you’re absolutely right. Imbuing the technology with a gender, a voice, even other cues that we typically associate with having a body and a mind, like when Amazon’s Alexa goes, “Hm,” as if it’s really pausing and thinking, or if you imagine introducing breathing cues and things like that. What these cues do is subconsciously tell us that we’re interacting with an entity that is like a human. And these kinds of anthropomorphizing interventions do indeed increase how much people feel the technology is capable of experiencing emotions.
Another strategy I’ve seen is, instead of trying to convince people that this AI system can indeed experience feelings, to play to what are already seen as AI’s strengths. Take dating advice. Many experiments show that people prefer receiving dating advice from a human rather than from some kind of AI system, and that preference gets flipped when you think about something like financial advice.
But suppose you tell people that getting the best dating advice, or the best match in the domain of dating, really does depend on having machinery beneath the hood that can take in as inputs your demographics and any information you might’ve provided the company, and then sort, rank, and filter various possible matches to find the perfect match for you. Now people can see how something they would typically view as highly subjective and dependent on emotions actually benefits from an ability they already think AI is good at.
So, a company like OkCupid, for instance, often talks about how its AI algorithms do this to find the perfect match for you. That kind of intervention also helps get around the emotionless barrier.
CURT NICKISCH: Do you have to know as a product designer or company