Amazon.com, Inc.
Caption timestamp predictor

Abstract:

An automated solution for determining suitable time ranges or timestamps for captions is described. In one example, a content file includes subtitle data with captions for display over respective timeframes of video. Audio data is extracted from the video and compared against a sound threshold to identify auditory timeframes in which sound is above the threshold. The subtitle data is also parsed to identify subtitle-free timeframes in the video. A series of candidate time ranges is then identified based on overlapping ranges of the auditory timeframes and the subtitle-free timeframes. In some cases, one or more of the candidate time ranges can be merged or omitted, yielding a final series of time ranges or timestamps for captions. The time ranges or timestamps can be used, for example, to add non-verbal and contextual captions and indicators, or for other purposes.
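
The steps in the abstract suggest a simple interval pipeline: threshold the audio to find loud timeframes, find the gaps between existing captions, intersect the two sets, then merge or drop candidates. The Python sketch below is illustrative only and not the patented implementation; it assumes per-window loudness levels are already computed, captions are given as (start, end) pairs in seconds, and the threshold, merge-gap, and minimum-duration values as well as all function names are hypothetical.

# Illustrative sketch only -- not the patented implementation. Assumes
# pre-computed per-window loudness levels, captions as (start, end) pairs
# in seconds, and arbitrary example threshold/merge/minimum-duration values.
from typing import List, Tuple

Interval = Tuple[float, float]  # (start_seconds, end_seconds)

def auditory_timeframes(levels: List[float], window_sec: float,
                        threshold: float) -> List[Interval]:
    """Group consecutive audio windows whose loudness exceeds the threshold."""
    frames: List[Interval] = []
    start = None
    for i, level in enumerate(levels):
        if level > threshold and start is None:
            start = i * window_sec
        elif level <= threshold and start is not None:
            frames.append((start, i * window_sec))
            start = None
    if start is not None:
        frames.append((start, len(levels) * window_sec))
    return frames

def subtitle_free_timeframes(captions: List[Interval],
                             duration: float) -> List[Interval]:
    """Return the gaps between the existing caption intervals."""
    gaps: List[Interval] = []
    cursor = 0.0
    for start, end in sorted(captions):
        if start > cursor:
            gaps.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < duration:
        gaps.append((cursor, duration))
    return gaps

def overlap(a: List[Interval], b: List[Interval]) -> List[Interval]:
    """Intersect two sorted interval lists to get the candidate time ranges."""
    out: List[Interval] = []
    i = j = 0
    while i < len(a) and j < len(b):
        lo, hi = max(a[i][0], b[j][0]), min(a[i][1], b[j][1])
        if lo < hi:
            out.append((lo, hi))
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out

def merge_and_filter(ranges: List[Interval], merge_gap: float = 0.5,
                     min_len: float = 1.0) -> List[Interval]:
    """Merge candidates separated by small gaps; drop very short candidates."""
    merged: List[Interval] = []
    for start, end in ranges:
        if merged and start - merged[-1][1] <= merge_gap:
            merged[-1] = (merged[-1][0], end)
        else:
            merged.append((start, end))
    return [(s, e) for s, e in merged if e - s >= min_len]

if __name__ == "__main__":
    levels = [0.1, 0.6, 0.7, 0.2, 0.8, 0.9, 0.9, 0.1]  # loudness per 1 s window
    captions = [(1.0, 2.0)]                            # existing subtitle span
    candidates = overlap(auditory_timeframes(levels, 1.0, 0.5),
                         subtitle_free_timeframes(captions, duration=8.0))
    print(merge_and_filter(candidates))                # [(2.0, 3.0), (4.0, 7.0)]

In this toy run, the loud windows are 1-3 s and 4-7 s, the only subtitle-free gaps are 0-1 s and 2-8 s, and their overlap yields the final candidate ranges 2-3 s and 4-7 s.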

Status:

Grant
Type:

Utility

Filing date:

5 Dec 2018

Issue date:

24 May 2022