When ChatGPT Misses The Mark: A Lesson In Ethical AI Leadership



ChatGPT is like the smartest colleague you'll ever have, but it's prone to hallucinating and giving unreliable answers, as though it were on acid from time to time. This presents an enormous challenge to ethical AI leadership.

What follows is a troubling case study that you would do well to consider the next time you need dependable information from an AI-powered assistant.

The question that stumped ChatGPT


For a course I’m teaching on AI ethics, I created a quiz to serve as a review of the main lessons. The quiz included the following question:

True or false? AI tools like ChatGPT can be helpful for automating tasks such as ordering a random list of items.

The answer is “true.” Using AI assistants to perform some administrative tasks saves time, effort, and money. On the surface, this usage doesn’t seem to raise any ethical problems.

Before I told my students this, however, I wanted to ensure I wasn’t overlooking anything, so I asked ChatGPT about it. I presented the principles of ethical intelligence to it and asked if it saw any ethical problems with ordering a random list of items.

Here’s what it said:

“Yes, using ChatGPT to automate tasks like ordering a random list of items could be unethical or violate one or more principles of ethical intelligence in certain contexts.”

One of the issues it mentioned concerned the duty to keep promises:

“If a commitment was made to order items in a specific way (e.g., based on merit or seniority), randomizing them would break that promise.”

Spotting the problem


Do you see the issue with ChatGPT’s response?

I didn’t at first, but I’m glad I reviewed it closely, because it turned out ChatGPT was answering a question I didn’t ask.

I asked about the ethics of using ChatGPT to automate the ordering of a random list of items. Its response, however, focused on randomizing items.

This is not a trivial distinction. These are two entirely different activities. We cannot assume that evaluating one activity is the same as evaluating the other.
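The distinction mirrors the one a programmer makes between sorting and shuffling, two operations that move in opposite directions. A minimal Python sketch (the list contents here are illustrative, not from the quiz):

```python
import random

items = ["banana", "cherry", "apple"]

# Ordering a random list: imposing order on unordered input.
ordered = sorted(items)
print(ordered)  # ['apple', 'banana', 'cherry']

# Randomizing a list: the reverse, destroying whatever order exists.
shuffled = items.copy()
random.shuffle(shuffled)
print(shuffled)  # some permutation of the items; varies per run
```

Evaluating the ethics of one operation tells you nothing reliable about the other, which is exactly the substitution ChatGPT made.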

Following up


Having spotted the misunderstanding, I clarified my question and gave ChatGPT another chance to respond.

I explained that my question was about putting order into a list of random objects—not randomizing a list.

ChatGPT replied, “Thank you for clarifying!” It then added:

“If a commitment was made to organize the list according to specific criteria (e.g., urgency, relevance), but ChatGPT orders it based on simpler or irrelevant parameters, the
