Touching the Future: Mastering Physical Contact with a New Algorithm for Robots

Written by Ian Scheffler for Penn AI.

Penn Engineers have developed a new algorithm that allows robots to react to complex physical contact in real time, making it possible for autonomous robots to succeed at previously impossible tasks, like controlling the motion of a sliding object. 

The algorithm, known as consensus complementarity control (C3), may prove to be an essential building block of future robots, translating directions from the output of artificial intelligence tools like large language models, or LLMs, into appropriate action.

“Your large language model might say, ‘Go chop an onion,’” says Michael Posa, Assistant Professor in Mechanical Engineering and Applied Mechanics (MEAM) and a core faculty member of the General Robotics, Automation, Sensing and Perception (GRASP) Lab. “How do you move your arm to hold the onion in place, to hold the knife, to slice through it in the right way, to reorient it when necessary?” 

One of the greatest challenges in robotics is control, a catch-all term for the intelligent use of the robot’s actuators, the parts of a robot that move or control its limbs, like motors or hydraulic systems. Control of the physical contact that a robot makes with its surroundings is both difficult and essential. “That kind of lower- and mid-level reasoning is really fundamental in getting anything to work in the physical world,” says Posa. 
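The article does not detail how C3 works, but its name refers to "complementarity," a standard way of modeling contact: at any instant, either a robot's fingertip is separated from an object (gap positive, contact force zero) or it is touching (gap zero, force nonnegative), and never both at once. A minimal sketch of that either/or rule for a single contact point, with all names and numbers hypothetical:

```python
# Minimal sketch of a complementarity rule for one contact point.
# Assumption: this only illustrates the "gap XOR force" condition that
# contact-aware controllers like C3 reason about; it is not the C3
# algorithm itself.

def contact_force(gap: float, external_force: float) -> float:
    """Return the surface's normal force under the complementarity rule
    0 <= force, 0 <= gap, and force * gap == 0."""
    if gap > 0.0:
        return 0.0  # separated: the surface cannot act at a distance
    # touching: the surface resists any push into it, but never pulls
    return max(0.0, -external_force)

# A gripper pressing down with 9.81 N while touching a tabletop:
print(contact_force(0.0, -9.81))  # → 9.81 (table pushes back)
print(contact_force(0.5, -9.81))  # → 0.0  (hovering: no contact force)
```

The difficulty Posa describes comes from this on/off structure: the dynamics switch discontinuously between the two cases, which is what makes real-time control through contact so hard.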

Read the full story with Michael Posa and recent graduate William Yang, MEAM ’24, on the Penn AI site.