Sunday 15 May 2022

Hitting the Books: Why we need to treat the robots of tomorrow like tools

Do not be swayed by the dulcet dial-tones of tomorrow's AIs and their siren songs of the singularity. No matter how closely artificial intelligences and androids may come to look and act like humans, they'll never actually be humans, argue Paul Leonardi, Duca Family Professor of Technology Management at the University of California, Santa Barbara, and Tsedal Neeley, Naylor Fitzhugh Professor of Business Administration at Harvard Business School, in their new book The Digital Mindset: What It Really Takes to Thrive in the Age of Data, Algorithms, and AI, and therefore should not be treated like humans. In the excerpt below, the pair contends that treating machines like people hinders our interaction with advanced technology and hampers its further development.

The Digital Mindset book cover (Harvard Business Review Press)

Reprinted by permission of Harvard Business Review Press. Excerpted from THE DIGITAL MINDSET: What It Really Takes to Thrive in the Age of Data, Algorithms, and AI by Paul Leonardi and Tsedal Neeley. Copyright 2022 Harvard Business School Publishing Corporation. All rights reserved.


Treat AI Like a Machine, Even If It Seems to Act Like a Human

We are accustomed to interacting with a computer in a visual way: buttons, dropdown lists, sliders, and other features allow us to give the computer commands. However, advances in AI are moving our interaction with digital tools toward more natural-feeling and human-like exchanges. What's called a conversational user interface (UI) gives people the ability to interact with digital tools through writing or talking, much more like the way we interact with other people, as in Burt Swanson's "conversation" with Amy the assistant. When you say "Hey Siri," "Hello Alexa," or "OK Google," you're using a conversational UI. The growth of tools controlled by conversational UIs is staggering. Every time you call an 800 number and are asked to spell your name, answer "Yes," or say the last four digits of your Social Security number, you are interacting with an AI that uses a conversational UI. Conversational bots have become ubiquitous in part because they make good business sense, and in part because they allow us to access services more efficiently and more conveniently.
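To make the contrast with visual interfaces concrete, here is a minimal, hypothetical Python sketch of what sits behind a conversational UI: free-form text is mapped to a command the way a button click would be. The intent names and keyword patterns are invented for illustration; real systems use trained language models rather than keyword rules, and none of this is drawn from the products named above.

    import re

    # Hypothetical intent patterns. A production conversational UI would use
    # a trained language model; keyword rules just show the shape of the idea.
    INTENTS = {
        "book_trip": re.compile(r"\b(book|reserve)\b.*\b(train|trip|ticket)\b", re.I),
        "check_status": re.compile(r"\b(status|on time|delayed)\b", re.I),
    }

    def route(utterance: str) -> str:
        """Map free-form text to a command, as a button maps a click to one."""
        for intent, pattern in INTENTS.items():
            if pattern.search(utterance):
                return intent
        return "fallback"  # ask the user to rephrase, or hand off to a human

    print(route("I'd like to book a train to Boston"))  # -> book_trip
    print(route("Is the 5:40 on time?"))                # -> check_status
    print(route("ugh, whatever works"))                 # -> fallback

Note where casual phrasing lands: anything the patterns cannot match falls through to the fallback, which is exactly the failure mode the rest of the excerpt describes.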

For example, if you've booked a train trip through Amtrak, you've probably interacted with an AI chatbot. Its name is Julie, and it answers more than 5 million questions annually from more than 30 million passengers. You can book rail travel with Julie just by saying where you're going and when. Julie can pre-fill forms on Amtrak's scheduling tool and provide guidance through the rest of the booking process. Amtrak has seen an 800 percent return on its investment in Julie. Amtrak saves more than $1 million in customer service expenses each year by using Julie to field low-level, predictable questions. Bookings have increased by 25 percent, and bookings done through Julie generate 30 percent more revenue than bookings made through the website, because Julie is good at upselling customers!

One reason for Julie's success is that Amtrak makes it clear to users that Julie is an AI agent and tells you why it has decided to use AI rather than connect you directly with a human. That means people orient to Julie as a machine, not mistakenly as a human. They don't expect too much from it, and they tend to ask questions in ways that elicit helpful answers. Amtrak's decision may sound counterintuitive, since many companies try to pass off their chatbots as real people, and it would seem that interacting with a machine as though it were a human should be precisely how to get the best results. A digital mindset requires a shift in how we think about our relationship to machines. Even as they become more humanish, we need to think about them as machines: they require explicit instructions and are focused on narrow tasks.

x.ai, the company that made the meeting scheduler Amy, enables you to schedule a meeting at work or invite a friend to your kids' basketball game simply by emailing Amy (or her counterpart, Andrew) with your request, as though they were live personal assistants. Yet Dennis Mortensen, the company's CEO, observes that more than 90 percent of the inquiries the company's help desk receives are related to people trying to use natural language with the bots and struggling to get good results.

Perhaps that was why scheduling a simple meeting with a new acquaintance became so annoying to Professor Swanson, who kept trying to use colloquialisms and conventions from informal conversation. In addition to the way he talked, he made many perfectly valid assumptions about his interaction with Amy. He assumed Amy could understand his scheduling constraints and that "she" would be able to discern his preferences from the context of the conversation. Swanson was informal and casual; the bot doesn't get that. It doesn't understand that when asking for another person's time, especially if they are doing you a favor, it's not effective to frequently or suddenly change the meeting logistics. It turns out it's harder than we think to interact casually with an intelligent robot.

Researchers have validated the idea that treating machines like machines works better than trying to be human with them. Stanford professor Clifford Nass and Harvard Business School professor Youngme Moon conducted a series of studies in which people interacted with anthropomorphic computer interfaces. (Anthropomorphism, or assigning human attributes to inanimate objects, is a major issue in AI research.) They found that individuals tend to overuse human social categories, applying gender stereotypes to computers and ethnically identifying with computer agents. Their findings also showed that people exhibit over-learned social behaviors such as politeness and reciprocity toward computers. Importantly, people tend to engage in these behaviors, treating robots and other intelligent agents as though they were people, even when they know they are interacting with computers rather than humans. It seems that our collective impulse to relate to people often creeps into our interactions with machines.

This problem of mistaking computers for humans is compounded when we interact with artificial agents via conversational UIs. Take, for example, a study we conducted with two companies that used AI assistants to answer routine business queries. One company's agent was anthropomorphized and human-like; the other's was not.

Workers at the company that used the anthropomorphic agent routinely got mad at it when it did not return useful answers. They said things like "He sucks!" or "I would expect him to do better" when referring to the results given by the machine. Most importantly, their strategies to improve relations with the machine mirrored the strategies they would use with other people in the office. They would ask their question more politely, they would rephrase it in different words, or they would try to strategically time their questions for when they thought the agent would be, in one person's terms, "not so busy." None of these strategies was particularly successful.

In contrast, workers at the other company reported much greater satisfaction with their experience. They typed in search terms as though they were querying a computer and spelled things out in great detail to make sure that an AI, which could not "read between the lines" and pick up on nuance, would heed their preferences. The second group routinely remarked on how surprised they were when their queries returned useful or even surprising information, and they chalked up any problems that arose to typical bugs in a computer.

For the foreseeable future, the data are clear: treating technologies like technologies, no matter how human-like or intelligent they appear, is key to success when interacting with machines. A big part of the problem is that human-like interfaces set the expectation that the machine will respond in human-like ways, and they lead us to assume it can infer our intentions, when it can do neither. Interacting successfully with a conversational UI requires a digital mindset that understands we are still some ways away from effective human-like interaction with the technology. Recognizing that an AI agent cannot accurately infer your intentions means that it's important to spell out each step of the process and be clear about what you want to accomplish, as in the sketch below.
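As a hypothetical illustration of that advice (the scheduling-request fields below are invented, not drawn from Amy, Julie, or any other product named above), compare leaving a machine to infer your intent with spelling out every constraint explicitly:

    from dataclasses import dataclass

    # Hypothetical: the details a literal-minded scheduling agent needs
    # stated outright, because it cannot infer any of them from context.
    @dataclass
    class MeetingRequest:
        attendees: list[str]
        duration_minutes: int
        candidate_windows: list[str]  # e.g., "2022-05-24 13:00-16:00 ET"
        fallback: str = "propose the three earliest open slots that week"

    # Casual phrasing ("grab time next week-ish, whatever works") leaves
    # every field blank. An explicit request fills in each step instead:
    request = MeetingRequest(
        attendees=["alex@example.com"],
        duration_minutes=30,
        candidate_windows=[
            "2022-05-24 13:00-16:00 ET",
            "2022-05-25 13:00-16:00 ET",
        ],
    )
    print(request)

The point of the sketch is the checklist, not the code: each field is a step you would otherwise leave the machine to guess.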



from Engadget, a web magazine with obsessive daily coverage of everything new in gadgets and consumer electronics: https://ift.tt/RTZ5YV3
