Works well with robots?

Blame it on HAL 9000, the constant cheerful interruptions of Clippy or any navigation system leading delivery drivers to dead-end destinations. In the workplace, humans and robots don’t always get along.

But as more artificial intelligence systems and robots help human workers, building trust between them is essential to get the job done. A professor at the University of Georgia seeks to bridge this gap with the help of the United States military.

Aaron Schecter, an assistant professor in the Terry College of Business Department of Management Information Systems, has received two grants, worth nearly $2 million, from the U.S. military to study interactions between humans and robots on teams. While AI at home can help order groceries, AI on the battlefield operates under a much riskier set of circumstances, where team cohesion and trust can be a matter of life and death.

“On the ground for the military, they want to have a robot or AI that isn’t controlled by a human performing a function that will take some of the burden off humans,” Schecter said. “There is obviously a desire for people not to react badly to this.”

While visions of military robots may veer into “Terminator” territory, Schecter explained that most robots and systems in development are meant to carry heavy loads or provide advance scouting: a walking platform carrying ammunition and water, for instance, so soldiers aren’t loaded down with 80 pounds of equipment.

“Or imagine a drone that is not remotely controlled,” he said. “It flies above you like a companion bird, watching ahead and offering vocal feedback like, ‘I recommend taking this route.’”

But these bots are only useful if they don’t get soldiers shot or otherwise put them in danger.

“We don’t want people to hate, blame or ignore the robot,” Schecter said. “You have to be willing to trust it in life-and-death situations for it to be effective. So how do you get people to trust robots? How do you get people to trust AI?”

Rick Watson, Regents Professor and J. Rex Fuqua Distinguished Chair in Internet Strategy, is a co-author on some of Schecter’s AI teaming research. He believes that studying how machines and humans work together will only become more important as AI matures.

Understanding the limits

“I think we’re going to see a lot of new applications for AI, and we’ll have to know when it will work well,” Watson said. “We can avoid situations where it poses a danger to humans, or where it becomes difficult to justify a decision because we don’t know how an AI system arrived at it, where it’s a black box. We need to understand its limitations.”

Understanding when AI systems and robots work well led Schecter to take what he knows about human teams and apply it to human-robot team dynamics.

“My research is less about the design and elements of how the robot works; it’s more the psychological side of it,” Schecter said. “When are we likely to trust something? What are the mechanisms that induce trust? How do we get them to cooperate? If the robot is wrong, can you forgive it?”

Schecter first gathered information about when people are more likely to take a robot’s advice. Then, in a set of projects funded by the Army Research Office, he analyzed how humans take advice from machines and compared that with how they take advice from other people.

Relying on algorithms

In one project, Schecter’s team gave test subjects a planning task, such as plotting the shortest route between two points on a map, and found that people were more likely to trust an algorithm’s advice than another person’s. In another, his team found evidence that humans may rely on algorithms for other kinds of tasks as well, like word association or brainstorming.

“We’re looking at how an algorithm or AI can influence a human’s decision-making,” he said. “We test a bunch of different types of tasks to find out when people trust algorithms the most. … We didn’t find anything too surprising. When people are doing something more analytical, they trust a computer more. Interestingly, that pattern can extend to other activities.”

In another study focused on how robots and humans interact, Schecter’s team introduced more than 300 subjects to VERO, a fake AI helper in the form of an anthropomorphic spring. “If you remember Clippy (Microsoft’s animated Office assistant), it’s like Clippy on steroids,” he said.

During the experiments, conducted over Zoom, teams of three performed team-building tasks such as finding as many uses as possible for a paperclip or listing the items needed to survive on a desert island. Then VERO introduced itself.

Looking for good collaboration

“It’s this avatar floating up and down; it had coils that looked like a spring and stretched and contracted when it wanted to speak,” Schecter said. “It said, ‘Hello, my name is VERO. I can help you with a variety of different things. I have natural voice processing abilities.’”

But it was actually a research assistant with a voice modulator operating VERO. Sometimes VERO would offer helpful suggestions, such as different uses for the paperclip; other times it would play moderator, chiming in with a “nice job, guys!” or encouraging quieter teammates to contribute ideas.

“People really hated that condition,” Schecter said, noting that fewer than 10% of participants saw through the ruse. “They were like, ‘Stupid VERO!’ They were so mean to it.”

Schecter’s aim wasn’t just to torment his subjects. The researchers recorded every conversation, facial expression, gesture and survey response from the experiments to look for “patterns that tell us how to have a good collaboration,” he said.

A first article on human-AI teaming was published in Scientific Reports, a Nature journal, in April, and Schecter has several more in preparation for the coming year.

