NVIDIA Corporation
HYBRID NEURAL NETWORK ARCHITECTURE WITHIN CASCADING PIPELINES

Last updated:

Abstract:

A multi-stage multimedia inferencing pipeline may be set up and executed using configuration data that specifies how each stage is set up, with the specified or desired models and/or other pipeline components deployed into a repository (e.g., a shared folder in a repository). The configuration data may also include information that a central inference server library uses to manage and set parameters for these components across the variety of inference frameworks that may be incorporated into the pipeline. The configuration data can define a pipeline that encompasses stages for video decoding, video transformation, cascade inferencing across different frameworks, metadata filtering and exchange between models, and display. The entire pipeline can be efficiently hardware-accelerated using parallel processing circuits (e.g., one or more GPUs, CPUs, DPUs, or TPUs). Embodiments of the present disclosure can integrate an entire video/audio analytics pipeline into an embedded platform in real time.
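The abstract describes configuration data that defines pipeline stages (decode, transform, cascade inference on different frameworks, metadata filtering, display) and the repository into which models are deployed. The patent does not disclose a concrete schema, so the following is only an illustrative sketch; all stage names, keys, framework labels, and the repository path are hypothetical.

```python
# Hypothetical configuration data for a cascading multimedia inference
# pipeline, modeled loosely on the stages named in the abstract.
pipeline_config = {
    "repository": "/models/shared",  # hypothetical shared model folder
    "stages": [
        {"name": "decode",    "type": "video_decode",    "accelerator": "GPU"},
        {"name": "transform", "type": "video_transform", "accelerator": "GPU"},
        # Cascade inferencing: successive models may run on different
        # inference frameworks, coordinated by a central inference
        # server library (framework labels here are illustrative).
        {"name": "detect",   "type": "inference", "framework": "tensorrt",
         "model": "detector"},
        {"name": "classify", "type": "inference", "framework": "onnxruntime",
         "model": "classifier"},
        # Metadata produced upstream is filtered and exchanged downstream.
        {"name": "filter",   "type": "metadata_filter"},
        {"name": "display",  "type": "render"},
    ],
}

def stage_names(config):
    """Return the ordered stage names defined by the configuration."""
    return [stage["name"] for stage in config["stages"]]

print(stage_names(pipeline_config))
```

In this sketch the pipeline order is implied by list position, so a runtime could instantiate and connect the stages simply by iterating over `stages`.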

Status:

Application

Type:

Utility

Filing date:

9 Dec 2020

Issue date:

28 Oct 2021