Cornell researchers have developed a robotic feeding system that uses computer vision, machine learning, and multimodal sensing to safely feed people with severe mobility limitations. The system has two key features: real-time mouth tracking, which adjusts to the user's movements, and a dynamic response mechanism that lets the robot detect the nature of physical interactions as they occur and react appropriately. In trials, it successfully fed 13 individuals.