TeleMoMa is a modular teleoperation system designed to control mobile manipulators. The project focuses on mapping human movements to robot joint states, enabling intuitive and seamless control of robotic systems.
TeleMoMa simplifies teleoperation by allowing users to combine multiple input devices, such as RGB-D cameras, virtual reality controllers, keyboards, and joysticks, to provide demonstrations for mobile manipulation tasks. This flexibility lowers the barrier to collecting robot demonstrations, particularly for tasks that require whole-body coordination, making it easier for researchers and developers to teach robots complex behaviors.
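The idea of combining devices can be sketched as each device driving a subset of the robot's body, with the interface fusing their readings into one whole-body command. The sketch below is a hypothetical illustration, not TeleMoMa's actual API: the `WholeBodyAction` fields, the `TeleopInterface` class, and the device callables are all assumed names for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical whole-body command; TeleMoMa's real action interface may differ.
@dataclass
class WholeBodyAction:
    base: list = field(default_factory=lambda: [0.0, 0.0, 0.0])    # vx, vy, wz
    right_arm: list = field(default_factory=lambda: [0.0] * 6)     # delta pose
    torso: float = 0.0

class TeleopInterface:
    """Fuses several input devices, each owning a subset of the body parts."""

    def __init__(self, devices):
        # devices: dict mapping a body-part name to a callable that
        # returns the latest command for that part.
        self.devices = devices

    def get_action(self):
        action = WholeBodyAction()
        for part, read_fn in self.devices.items():
            setattr(action, part, read_fn())
        return action

# Example: a keyboard drives the base while a VR controller drives the arm.
keyboard = lambda: [0.5, 0.0, 0.1]
vr_right = lambda: [0.0, 0.0, 0.01, 0.0, 0.0, 0.0]

teleop = TeleopInterface({"base": keyboard, "right_arm": vr_right})
act = teleop.get_action()
print(act.base, act.right_arm)
```

Because each device only fills the parts it is responsible for, intuitive devices (e.g. a keyboard for the base) can be mixed with precise ones (e.g. a VR controller for the arm) without conflicting.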
TeleMoMa has been tested on robots such as PAL Tiago++, Toyota HSR, and Fetch in both simulation and real-world scenarios. The system supports remote teleoperation by transmitting commands through a computer network, making it versatile for a variety of tasks, including manipulation, navigation, and data collection. I also worked on improving the system’s precision by integrating input from multiple devices, balancing intuitive control with precise manipulation.
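Remote teleoperation over a network can be illustrated with a minimal sketch: commands are serialized and sent over a socket from the operator's machine to the robot. This is an assumed transport for illustration (length-prefixed JSON over a local socket pair), not TeleMoMa's actual networking layer.

```python
import json
import socket
import struct

def send_command(sock, command: dict) -> None:
    """Serialize a teleop command as length-prefixed JSON and send it."""
    payload = json.dumps(command).encode("utf-8")
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_command(sock) -> dict:
    """Read one length-prefixed JSON command from the socket."""
    (length,) = struct.unpack("!I", sock.recv(4))
    data = b""
    while len(data) < length:
        data += sock.recv(length - len(data))
    return json.loads(data.decode("utf-8"))

# Demo over a local socket pair standing in for the operator/robot link.
operator, robot = socket.socketpair()
send_command(operator, {"base": [0.2, 0.0, 0.0], "torso": 0.1})
cmd = recv_command(robot)
print(cmd)
operator.close()
robot.close()
```

The length prefix lets the receiver know exactly how many bytes belong to each command, so commands stay intact even when the network delivers them in fragments.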
During the development of TeleMoMa, I worked on several aspects of robotics and AI:
- Mapping human movements to robot joint states for intuitive whole-body control.
- Fusing input from multiple devices (RGB-D cameras, VR controllers, keyboards, and joysticks) to balance intuitive control with precise manipulation.
- Supporting remote teleoperation by transmitting commands over a computer network.
- Testing the system on PAL Tiago++, Toyota HSR, and Fetch robots in both simulation and the real world.
For more information, please visit our official website: