The Walt Disney Company
Deep-learning motion priors for full-body performance capture in real-time

Abstract:

Training data from multiple types of sensors, captured in previous capture sessions, can be fused within a physics-based tracking framework to train motion priors using different deep-learning techniques, such as convolutional neural networks (CNNs) and Recurrent Temporal Restricted Boltzmann Machines (RTRBMs). In embodiments employing one or more CNNs, two streams of filters can be used: one stream of filters learns temporal information, and the other learns spatial information. In embodiments employing one or more RTRBMs, all visible nodes of the RTRBMs can be clamped with values obtained from the training data or with data synthesized from the training data. In cases where sensor data is unavailable, the corresponding visible nodes can be unclamped and the one or more RTRBMs can generate the missing sensor data.
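
The abstract outlines a two-stream CNN motion prior in which one stream of filters learns temporal structure and the other learns spatial structure. The sketch below is only an illustration of that idea under assumed shapes (16-frame windows of 24 joint channels) and layer sizes; it is not the patented implementation.

import torch
import torch.nn as nn

class TwoStreamMotionPrior(nn.Module):
    def __init__(self, num_joints=24, window=16, hidden=128):
        super().__init__()
        # Temporal stream: 1-D convolutions along the time axis, one channel per joint.
        self.temporal = nn.Sequential(
            nn.Conv1d(num_joints, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Spatial stream: 1-D convolutions along the joint axis, one channel per frame.
        self.spatial = nn.Sequential(
            nn.Conv1d(window, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Fuse both streams and predict one value per joint as the prior.
        self.head = nn.Linear(2 * hidden, num_joints)

    def forward(self, motion):
        # motion: (batch, window, num_joints) windows of captured joint data.
        t = self.temporal(motion.transpose(1, 2)).squeeze(-1)  # convolve over time
        s = self.spatial(motion).squeeze(-1)                    # convolve over joints
        return self.head(torch.cat([t, s], dim=1))

# Usage: a batch of eight 16-frame, 24-joint windows -> an (8, 24) pose prior.
prior = TwoStreamMotionPrior()
pose = prior(torch.randn(8, 16, 24))

For the RTRBM variant, the abstract describes clamping visible nodes to observed sensor values and unclamping them when a sensor reading is missing so the model can generate it. The following simplified sketch shows that clamp/unclamp step with a plain (non-recurrent) RBM and block Gibbs sampling; the recurrent conditioning on the previous hidden state is omitted, and all weights and shapes are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def infer_missing(v, observed_mask, W, b_v, b_h, steps=50):
    """Fill unobserved entries of the visible vector v by block Gibbs sampling."""
    v = v.copy()
    for _ in range(steps):
        h = rng.binomial(1, sigmoid(v @ W + b_h))   # sample hidden units
        v_new = sigmoid(h @ W.T + b_v)              # reconstruct visible units
        v = np.where(observed_mask, v, v_new)       # keep clamped (observed) values
    return v

# Usage: 60 visible sensor channels, 32 hidden units; half the sensors missing.
num_v, num_h = 60, 32
W = rng.normal(0.0, 0.1, (num_v, num_h))
b_v, b_h = np.zeros(num_v), np.zeros(num_h)
sensors = rng.random(num_v)
mask = np.arange(num_v) < 30                        # True where a reading is available
completed = infer_missing(np.where(mask, sensors, 0.0), mask, W, b_v, b_h)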

Status:
Grant

Type:
Utility

Filing date:
30 Sep 2016

Issue date:
26 Jan 2021