Meta Platforms, Inc.
Media effects using predicted facial feature locations

Abstract:

An effects application receives a video of a face and detects a bounding box for each frame indicating the location and size of the face in that frame. In one or more reference frames, the application uses an algorithm to determine the locations of facial features in the frame. The application then normalizes the feature locations relative to the bounding box and saves the normalized feature locations. In other frames (e.g., target frames), the application obtains the bounding box and then predicts the locations of the facial features based on the size and location of the bounding box and the normalized feature locations calculated in the reference frame. The predicted locations can be made available to an augmented reality function that overlays graphics in a video stream based on face tracking in order to apply a desired effect to the video.
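The normalize-then-predict scheme the abstract describes can be sketched as follows. This is a minimal illustration, not the patented implementation; the function names, the (x, y, width, height) box layout, and the sample coordinates are all assumptions.

```python
# Hypothetical sketch of the scheme in the abstract: feature points found in a
# reference frame are normalized to that frame's bounding box, then mapped into
# a target frame's bounding box to predict feature locations without re-running
# the full facial-feature detector.

def normalize_features(features, box):
    """Express (x, y) feature points relative to the box origin, scaled by its size."""
    bx, by, bw, bh = box
    return [((x - bx) / bw, (y - by) / bh) for x, y in features]

def predict_features(normalized, box):
    """Map normalized feature points into a target frame's bounding box."""
    bx, by, bw, bh = box
    return [(bx + nx * bw, by + ny * bh) for nx, ny in normalized]

# Reference frame: full feature detection ran here (coordinates are made up)
ref_box = (100, 50, 200, 200)            # x, y, width, height
ref_features = [(150, 100), (250, 100)]  # e.g. two eye centers

norm = normalize_features(ref_features, ref_box)

# Target frame: only a bounding box is detected; feature locations are predicted
target_box = (120, 60, 220, 220)
predicted = predict_features(norm, target_box)
```

Because prediction is just a translate-and-scale of saved normalized points, it is far cheaper than running the feature-location algorithm on every frame, which is the efficiency the abstract implies.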

Status:
Grant
Type:

Utility

Filing date:

22 Sep 2017

Issue date:

15 Sep 2020