AI teaches robot to throw like a pro. But can it pitch a perfect game?
Key takeaways
- Robots using full-body motion can move objects to hard-to-reach areas, expanding automation capabilities on the plant floor.
- Reinforcement learning enables precise robotic movements, helping improve accuracy in dynamic industrial tasks.
- Real-time motion control systems boost robot adaptability, key for handling unpredictable production environments.
- Whole-body robotic design offers greater power and precision than fixed-arm systems in object handling and transfer.
After 42 years, it finally happened—I’ve become a baseball fan. I know, I wasn’t expecting it either. Maybe it was the cheap ticket prices, the upgraded concessions, or simply living close to a minor league stadium. Whatever the reason, I’ve grown to love spending warm summer nights cheering on my hometown team. That said, baseball isn’t perfect. The pace can drag at times, and if it weren’t for the hard-working mascots keeping things lively, I might’ve been tempted to scroll through Instagram between innings. Maybe I’ve watched one too many Savannah Bananas clips on TikTok, but I truly believe there’s a way to preserve the spirit of the game while making it more exciting for younger fans. For starters, let’s replace the pitchers with robots.
Researchers at the Robotic Systems Lab have developed a new full-body throwing system for legged robots, enabling them to launch different types of objects quickly and accurately. Unlike previous methods that rely on a fixed arm, this approach takes advantage of the robot's entire body, including its legs and torso, for greater power and precision. Throwing might not seem like an essential skill for a robot to possess, but it allows a robot to move objects to places it can't physically reach.
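To get a feel for why throwing extends a robot's reach, here's a quick back-of-the-envelope sketch in Python. It ignores air drag and uses made-up release numbers, so treat it as an illustration of the physics rather than anything taken from the researchers' system:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def throw_range(speed: float, angle_deg: float, release_height: float) -> float:
    """Horizontal distance travelled by an object released at the given
    speed (m/s), launch angle (degrees), and height above the ground (m).
    Air drag is ignored, so this is an optimistic upper bound."""
    angle = math.radians(angle_deg)
    vx = speed * math.cos(angle)
    vy = speed * math.sin(angle)
    # Time until the object falls back to ground level: solve
    # release_height + vy*t - 0.5*G*t^2 = 0 for the positive root.
    t_flight = (vy + math.sqrt(vy**2 + 2 * G * release_height)) / G
    return vx * t_flight

# Example: a 5 m/s release at 45 degrees from 1 m off the ground
# already carries an object roughly 3 m beyond the robot's own reach.
print(f"{throw_range(5.0, 45.0, 1.0):.2f} m")
```

Even a fairly gentle toss, in other words, puts an object well beyond anything a fixed arm could stretch to.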
The researchers faced two main challenges: accurately tracking the position of the robot's hand, or end-effector, and managing the unpredictable timing of the object's release. To tackle the first, the team used reinforcement learning to train a control policy that keeps the robot's joints aligned with the intended movement path. A second, faster-acting residual policy fine-tunes that output to ensure smooth, accurate throws.
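Here's a rough sketch of how that two-level setup might be wired together. The policy functions below are placeholders (the trained networks and their actual update rates aren't described in this article), so read it as a structural illustration, not the lab's code:

```python
import numpy as np

# Hypothetical sketch of the two-level control structure described above.
# `nominal_policy` and `residual_policy` stand in for the trained RL
# networks; here they are stubs that map observations to joint position
# targets and to small corrections on those targets.

N_JOINTS = 18  # illustrative joint count for a legged robot with an arm

def nominal_policy(obs: np.ndarray) -> np.ndarray:
    """Low-rate policy: track the timed end-effector trajectory."""
    return np.zeros(N_JOINTS)  # placeholder joint targets

def residual_policy(obs: np.ndarray) -> np.ndarray:
    """High-rate policy: small refinements on top of the nominal output."""
    return 0.01 * np.tanh(obs[:N_JOINTS])  # placeholder residual

def control_step(obs: np.ndarray, nominal_targets: np.ndarray) -> np.ndarray:
    """One high-frequency control tick: nominal targets (updated at a lower
    rate) plus the residual correction, clipped to a small safe range."""
    correction = np.clip(residual_policy(obs), -0.05, 0.05)
    return nominal_targets + correction

obs = np.zeros(3 * N_JOINTS)                 # stand-in observation vector
targets = nominal_policy(obs)                # updated at, say, 50 Hz
joint_command = control_step(obs, targets)   # refined at, say, 200 Hz
```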
To handle the unpredictable release timing, the researchers improved on a technique called tube acceleration, originally designed for stationary robots. Their version, pullback tube acceleration, is a real-time, adaptive approach that adjusts the end-effector's motion so it stays within a safe release zone, even during complex, dynamic movements.
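The team's actual module solves an optimization in real time, but the underlying intuition, nudging the hand back toward a safe release corridor whenever it drifts, can be sketched with a simple error-driven correction. The tube radius and gains below are invented purely for illustration:

```python
import numpy as np

# Conceptual sketch of the "pull back toward a safe release zone" idea.
# The real pullback tube acceleration optimizer is more sophisticated;
# this only illustrates the kind of corrective command it produces.

TUBE_RADIUS = 0.03        # acceptable deviation from the nominal EE path (m)
K_POS, K_VEL = 40.0, 8.0  # illustrative feedback gains

def corrective_acceleration(ee_pos, ee_vel, ref_pos, ref_vel):
    """Command an extra end-effector acceleration that shrinks the position
    and velocity error whenever the hand drifts outside the release tube."""
    pos_err = ref_pos - ee_pos
    vel_err = ref_vel - ee_vel
    if np.linalg.norm(pos_err) <= TUBE_RADIUS:
        return np.zeros(3)                  # inside the tube: no correction
    return K_POS * pos_err + K_VEL * vel_err  # pull back toward the tube

# Example: the hand has drifted 5 cm behind the reference point mid-throw.
a_cmd = corrective_acceleration(
    ee_pos=np.array([0.45, 0.0, 1.2]),
    ee_vel=np.array([2.0, 0.0, 1.0]),
    ref_pos=np.array([0.50, 0.0, 1.2]),
    ref_vel=np.array([2.2, 0.0, 1.1]),
)
print(a_cmd)
```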
As the researchers explain: "In this work, we address these challenges by formulating prehensile throwing as a whole-body EE velocity tracking problem. Using reinforcement learning (RL), we train a policy to accurately track timed end-effector trajectories during the throwing interval while accounting for uncertainties in the throwing process. To improve accuracy further, we introduce a high-frequency residual policy that refines the nominal policy's output and a pullback tube acceleration optimizer module that generates corrective motion commands based on real-time throwing errors. This integrated approach enables robust and accurate prehensile throwing under highly dynamic conditions."