We design a nominal controller for animating an articulated, physics-based human arm model, including the hand and fingers, to catch and throw objects. The controller is based on a finite state machine that defines target poses for proportional-derivative control of the hand, as well as the position and orientation of the palm center, computed with an inverse kinematics solver. We then use reinforcement learning to train agents that improve the robustness of the nominal controller across many different goals. Imitation learning, based on trajectories produced by numerical optimization, accelerates the training process. We demonstrate our controllers on a variety of throwing and catching tasks, including flipping objects, hitting targets, and throwing objects to a desired height, with several different objects such as cans, spheres, and rods. We also discuss ways to extend our approach to more challenging tasks, such as juggling.
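To make the control structure concrete, the sketch below shows one plausible way a finite state machine can supply target poses to a proportional-derivative joint controller, as the abstract describes. This is our own minimal illustration, not the paper's code: the state names, gains, and pose values are hypothetical placeholders, and the palm-center tracking via inverse kinematics and the learned corrections are omitted.

import numpy as np

# Illustrative FSM states and target hand poses (placeholder values,
# not from the paper); each state maps to a finger joint-angle target.
TARGET_POSES = {
    "open":    np.zeros(3),                 # fingers extended, ready to catch
    "grasp":   np.array([1.2, 1.2, 1.2]),   # fingers curled around the object
    "release": np.array([0.3, 0.3, 0.3]),   # fingers opening during the throw
}

def pd_torques(q, qd, q_target, kp=30.0, kd=2.0):
    """Proportional-derivative control: drive joint angles q toward the
    FSM's target pose q_target while damping joint velocities qd."""
    return kp * (q_target - q) - kd * qd

# One control step: the FSM selects the target pose, PD supplies torques.
state = "grasp"
q = np.array([0.9, 1.0, 1.1])   # current finger joint angles (radians)
qd = np.zeros(3)                # current joint velocities
tau = pd_torques(q, qd, TARGET_POSES[state])

In the paper's pipeline, a reinforcement learning policy would further adjust such nominal targets to improve robustness across goals.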
Best Presentation Award (MIG’21). Congratulations to Yunhao Luo!
BibTeX
@inproceedings{catchThrow2021,
  author    = {Luo, Yunhao and Xie, Kaixiang and Andrews, Sheldon and Kry, Paul G.},
  title     = {Catching and Throwing Control of a Physically Simulated Hand},
  booktitle = {Proceedings of the ACM SIGGRAPH Conference on Motion, Interaction and Games},
  series    = {MIG '21},
  year      = {2021},
  doi       = {10.1145/3487983.3488300},
  publisher = {ACM}
}