Washington
Scientists, including those of Indian origin, have developed robots that can follow spoken instructions, an advance that may make it easier for people to interact with automated machines at home and in the workplace.
“The issue we’re addressing is language grounding, which means having a robot take natural language commands and generate behaviours that successfully complete a task,” said Dilip Arumugam, from Brown University in the US.
“The problem is that commands can have different levels of abstraction, and that can cause a robot to plan its actions inefficiently or fail to complete the task at all,” Arumugam said.
For example, someone working side by side with a robotic forklift in a warehouse might tell the machine, “Grab that pallet”.
That is a highly abstract command that implies a number of smaller sub-steps — lining up the lift, putting the forks underneath and hoisting it up.
However, other common commands might be more fine-grained, involving only a single action: “Tilt the forks back a little”, for example.
Those different levels of abstraction can cause problems for current robot language models, the researchers said.
Most models try to identify cues from the words in the command as well as the sentence structure and then infer a desired action from that language.
The inference results then trigger a planning algorithm that attempts to solve the task.
However, without taking into account the specificity of the instructions, the robot might overplan for simple commands, or underplan for more abstract ones that involve more sub-steps.
That can result in incorrect actions or an overly long planning lag before the robot takes action.
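As a rough illustration of that conventional two-stage pipeline, the sketch below shows language inference feeding a planner that always operates at one fixed level of granularity. This is not the researchers' actual code; all function and task names are hypothetical placeholders, and the table lookup stands in for what would be a costly search in a real planner.

```python
# Hypothetical sketch of the conventional pipeline: infer a task from
# language, then plan at a single fixed level of granularity.

def infer_task(command: str) -> str:
    """Stand-in for a learned language model: map words to a task symbol."""
    return "grab_pallet" if "pallet" in command.lower() else "tilt_forks_back"

def plan(task: str) -> list[str]:
    """Always plans in the space of primitive actions, whether the
    command implied many sub-steps or just one."""
    fixed_level_plans = {
        "grab_pallet": ["align_with_pallet", "insert_forks", "lift_pallet"],
        "tilt_forks_back": ["tilt_forks_back"],
    }
    return fixed_level_plans[task]

# Even a one-step command goes through the same planning machinery; in a
# real system that means searching a large primitive action space.
print(plan(infer_task("Tilt the forks back a little")))
```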
The new system adds a level of sophistication to existing models: beyond inferring a desired task from language, it also analyses the language to infer a distinct level of abstraction.
“That allows us to couple our task inference as well as our inferred specificity level with a hierarchical planner, so we can plan at any level of abstraction,” Arumugam said.
“In turn, we can get dramatic speed-ups in performance when executing tasks compared to existing systems,” he said.
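A minimal sketch of that idea, under the assumption that two pieces of information are inferred from each command, might look like the following. The keyword heuristics and all names here are illustrative stand-ins for the learned models in the actual system.

```python
# Hypothetical sketch: infer both the task and its level of abstraction
# from a command, then plan at the inferred level.

ABSTRACT_PLANS = {
    "grab_pallet": ["align_with_pallet", "insert_forks", "lift_pallet"],
}
PRIMITIVE_ACTIONS = {"tilt_forks_back", "raise_forks", "lower_forks"}

def infer_task_and_level(command: str) -> tuple[str, str]:
    """Stand-in for the learned inference step: return (task, level)."""
    if "pallet" in command.lower():
        return "grab_pallet", "high"   # abstract, multi-step task
    return "tilt_forks_back", "low"    # fine-grained, single action

def hierarchical_plan(task: str, level: str) -> list[str]:
    """Expand abstract tasks into sub-steps; execute fine-grained
    commands directly, with no multi-step expansion."""
    if level == "high":
        return ABSTRACT_PLANS[task]
    return [task] if task in PRIMITIVE_ACTIONS else []

for cmd in ("Grab that pallet", "Tilt the forks back a little"):
    task, level = infer_task_and_level(cmd)
    print(cmd, "->", hierarchical_plan(task, level))
```

The point of the design is that a single-action command bypasses the multi-step expansion entirely, which is where the speed-ups the researchers report would come from.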
Researchers, including Siddharth Karamcheti and Nakul Gopalan, showed that when a robot was able to infer both the task and the specificity of the instructions, it responded to commands within one second 90 per cent of the time.
In comparison, when no level of specificity was inferred, half of all tasks required 20 or more seconds of planning time.
“We ultimately want to see robots that are helpful partners in our homes and workplaces,” said Stefanie Tellex, a professor of computer science at Brown.
“This work is a step toward the goal of enabling people to communicate with robots in much the same way that we communicate with each other,” Tellex said.