How Robust is 3D Human Pose Estimation to Occlusion?


István Sárándi, Timm Linder, Kai O. Arras, and Bastian Leibe
How Robust is 3D Human Pose Estimation to Occlusion?
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018), Workshop on Robotic Co-workers 4.0: Human Safety and Comfort in Human-Robot Interactive Social Environments

Abstract

Occlusion is commonplace in realistic human-robot shared environments, yet its effects are not considered in standard 3D human pose estimation benchmarks. This leaves open the question: how robust are state-of-the-art 3D pose estimation methods to partial occlusions? We study several types of synthetic occlusions over the Human3.6M dataset and find a method with state-of-the-art benchmark performance to be sensitive even to low amounts of occlusion. Addressing this issue is key to progress in applications such as collaborative and service robotics. We take a first step in this direction by improving occlusion-robustness through training data augmentation with synthetic occlusions. This also turns out to be an effective regularizer that is beneficial even for non-occluded test cases.
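For illustration, below is a minimal Python sketch of the kind of synthetic-occlusion augmentation the abstract refers to: pasting randomly sized, randomly colored rectangles onto training images. The function name, parameters, and rectangle-only occluders are assumptions made for this sketch; the paper's actual occluder shapes, textures, and augmentation settings may differ.

import numpy as np

def occlude_randomly(image, rng, max_occluders=3, max_rel_size=0.3):
    """Paste random filled rectangles onto a copy of `image` (H x W x C, uint8).

    Hypothetical occlusion-style augmentation sketch, not the paper's exact method.
    """
    h, w = image.shape[:2]
    out = image.copy()
    for _ in range(rng.integers(1, max_occluders + 1)):
        # Random occluder size, capped at a fraction of the image size.
        occ_h = rng.integers(1, max(2, int(h * max_rel_size)))
        occ_w = rng.integers(1, max(2, int(w * max_rel_size)))
        # Random top-left corner such that the rectangle stays inside the image.
        y0 = rng.integers(0, h - occ_h + 1)
        x0 = rng.integers(0, w - occ_w + 1)
        # Fill the rectangle with a random uniform color.
        color = rng.integers(0, 256, size=image.shape[2], dtype=np.uint8)
        out[y0:y0 + occ_h, x0:x0 + occ_w] = color
    return out

# Example: augment a dummy 256x256 RGB training image.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
augmented = occlude_randomly(img, rng)

Applied on the fly during training, such augmentation exposes the network to partially hidden bodies, which is the mechanism behind the regularization effect mentioned in the abstract.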

@misc{sarandiIROSWS18,
  author        = {Istv\'{a}n S\'{a}r\'{a}ndi and Timm Linder and Kai Oliver Arras and Bastian Leibe},
  title         = {How Robust is 3D Human Pose Estimation to Occlusion?},
  howpublished  = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'18) Workshop on Robotic Co-workers 4.0: Human Safety and Comfort in Human-Robot Interactive Social Environments},
  year          = {2018},
  month         = {October},
  archivePrefix = {arXiv},
  eprint        = {1808.09316},
  primaryClass  = {cs.CV},
}