Dressing with the help of robots | MIT News

Basic safety needs in the Paleolithic era have changed significantly with the onset of the industrial and cognitive revolutions. We interact a little less with raw materials, and we interface a little more with machines.

Robots, however, don’t have the same hardwired awareness and behavioral control, so working safely with humans requires methodical planning and coordination. You can probably trust your friend to fill your morning cup of coffee without spilling it, but for a robot, this seemingly simple task requires careful observation and an understanding of human behavior.

Scientists at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) recently created a new algorithm that helps a robot find efficient motion plans that ensure the physical safety of its human counterpart. In this case, the robot helped put a jacket on a person, a capability that could prove to be a powerful tool in extending assistance to people with disabilities or limited mobility.

“Developing algorithms to prevent physical harm without unnecessarily affecting task efficiency is a critical challenge,” says Shen Li, a PhD student at MIT and lead author of the new research paper. “By allowing robots to make non-harmful impact with humans, our method can find efficient robot trajectories to dress the human with a guarantee of safety.”

Video: Robot-assisted dressing could help people with reduced mobility or disabilities.

Human modeling, safety and efficiency

Good human modeling – how humans move, react, and respond – is necessary for successful planning of robot motion in interactive human-robot tasks. A robot can achieve fluid interaction if the human model is perfect, but in many cases there is no such thing as a flawless model.

A robot shipped to someone’s home, for example, would have a very narrow “default” model of how a human might interact with it during an assisted dressing task. That would ignore the wide variability in human reactions, which depends on a myriad of factors such as personality and habits. A screaming toddler would react to putting on a coat or shirt differently than a frail elderly person, or a person with a disability who may experience rapid fatigue or reduced dexterity.

If the robot is tasked with dressing and plans a trajectory based solely on that default model, it could clumsily collide with the human, resulting in an uncomfortable experience or even possible injury. But if it is too conservative in ensuring safety, it might pessimistically assume that all nearby space is unsafe and then fail to move at all, a situation known as the “freezing robot” problem.

To provide a theoretical guarantee of human safety, the team’s algorithm reasons about the uncertainty in the human model. Instead of having a single, default model in which the robot only understands one potential reaction, the team gave the machine an understanding of many possible models, to more closely mimic how a human understands other humans. As the robot gathers more data, it will reduce uncertainty and refine those models.
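A minimal sketch of this idea, not the authors’ implementation: keep a belief over a handful of candidate human-motion models and shift probability toward whichever one best explains the arm motion the robot actually observes. The model names, Gaussian likelihood, and numbers below are illustrative assumptions.

```python
import numpy as np

class HumanModelBelief:
    def __init__(self, models):
        # models: dict mapping a label to a function that predicts
        # the expected next arm position from the current one.
        self.models = models
        self.belief = {name: 1.0 / len(models) for name in models}

    def update(self, prev_pos, observed_pos, noise_std=0.05):
        """Bayesian-style update: models that predicted the observed
        motion well gain probability mass; poor predictors lose it."""
        for name, predict in self.models.items():
            error = np.linalg.norm(observed_pos - predict(prev_pos))
            self.belief[name] *= np.exp(-0.5 * (error / noise_std) ** 2)
        total = sum(self.belief.values())
        if total > 0:
            self.belief = {k: v / total for k, v in self.belief.items()}

# Example: two candidate reactions during dressing.
models = {
    "arm_moves_up":   lambda p: p + np.array([0.0, 0.0, 0.02]),
    "arm_moves_down": lambda p: p + np.array([0.0, 0.0, -0.02]),
}
belief = HumanModelBelief(models)
belief.update(prev_pos=np.zeros(3), observed_pos=np.array([0.0, 0.0, 0.018]))
print(belief.belief)  # probability mass shifts toward "arm_moves_up"
```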

To solve the freezing robot problem, the team redefined safety for human-aware motion planners as either collision avoidance or safe impact in the event of a collision. Often, especially in robot-assisted tasks for activities of daily living, collisions cannot be fully avoided. This lets the robot make non-harmful contact with the human in order to make progress, as long as the robot’s impact on the human is low. With this two-pronged definition of safety, the robot could safely complete the dressing task in a shorter period of time.
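To illustrate the two-pronged definition, here is a hedged sketch under assumed quantities: a candidate motion counts as safe if it either keeps clear of the predicted human or if any predicted contact stays below a low impact threshold. The clearance and force thresholds are illustrative placeholders, not values from the paper.

```python
def motion_is_safe(min_distance_m, predicted_impact_force_n,
                   clearance_m=0.05, max_safe_force_n=5.0):
    """Safe if collision is avoided OR the contact is harmless."""
    collision_free = min_distance_m >= clearance_m
    harmless_contact = predicted_impact_force_n <= max_safe_force_n
    return collision_free or harmless_contact

# A motion that grazes the sleeve with a gentle 2 N contact still counts
# as safe, so the robot is not forced to freeze in place.
print(motion_is_safe(min_distance_m=0.0, predicted_impact_force_n=2.0))   # True
print(motion_is_safe(min_distance_m=0.0, predicted_impact_force_n=20.0))  # False
```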

For example, suppose there are two possible models of how a human might react to dressing. “Model one” is that the human will move up during dressing, and “model two” is that the human will move down during dressing. With the team’s algorithm, when the robot plans its motion, instead of selecting a single model it will try to ensure safety under both. No matter whether the person moves up or down, the trajectory the robot finds will be safe.
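The sketch below, again only an illustration rather than the paper’s actual planner, shows how a trajectory can be accepted only if it is safe under every candidate model instead of a single default one. The candidate trajectories and per-model clearance numbers are made-up stand-ins.

```python
def min_clearance(trajectory, model):
    # Placeholder: a real system would simulate the human model along
    # the trajectory and return the smallest human-robot distance.
    return trajectory["clearance"][model]

def is_safe_for_all(trajectory, models, threshold=0.05):
    """Accept a trajectory only if it stays safe (here, keeps clearance)
    under every possible human reaction, not just the most likely one."""
    return all(min_clearance(trajectory, m) >= threshold for m in models)

models = ["arm_moves_up", "arm_moves_down"]
candidates = [
    {"name": "hug_the_arm",   "clearance": {"arm_moves_up": 0.10, "arm_moves_down": 0.01}},
    {"name": "wide_approach", "clearance": {"arm_moves_up": 0.08, "arm_moves_down": 0.07}},
]
chosen = next((t for t in candidates if is_safe_for_all(t, models)), None)
print(chosen["name"] if chosen else "replan")  # "wide_approach"
```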

To paint a more holistic picture of these interactions, future work will focus on studying subjective feelings of safety, in addition to physical safety, during robot-assisted dressing.

“This multifaceted approach combines set theory, human-aware safety constraints, human motion prediction, and feedback control for safe human-robot interaction,” says Zackory Erickson, an assistant professor in the Robotics Institute at Carnegie Mellon University. “This research could potentially be applied to a wide variety of assistive robotics scenarios, with the ultimate goal of enabling robots to provide safer physical assistance to people with disabilities.”

Li wrote the paper alongside CSAIL postdoc Nadia Figueroa, MIT PhD student Ankit Shah, and MIT Professor Julie A. Shah. They will present the paper virtually at the 2021 Robotics: Science and Systems conference. The work was supported by the Office of Naval Research.
