Advanced Micro Devices, Inc.
METHOD AND SYSTEM FOR HARDWARE MAPPING INFERENCE PIPELINES

Abstract:

Methods and systems for hardware mapping inference pipelines in deep neural network (DNN) systems. Each layer of the inference pipeline is mapped to a queue, which in turn is associated with one or more processing elements. Each queue has multiple elements, where an element represents the task to be completed for a given input. Each input is associated with a queue packet which identifies, for example, a type of DNN layer, which DNN layer to use, a next DNN layer to use, and a data pointer. A queue packet is written into an element of a queue; a processing element reads the element and processes the input based on the information in the queue packet. The processing element then writes another queue packet to another queue based on the processed queue packet. Multiple inputs can be processed in parallel and on the fly using the queues, independent of layer starting points.
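A minimal sketch of the queue-packet mechanism the abstract describes; all names, fields, and structures here are assumptions for illustration, not the patented implementation. One queue is created per DNN layer, each queue element holds a packet identifying the layer type, the current layer, the next layer, and a data pointer, and a processing element reads a packet, performs the layer's work, and enqueues a new packet for the next layer.

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class QueuePacket:
    """Hypothetical queue packet; field names are assumptions."""
    layer_type: str   # e.g. a type of DNN layer ("conv", "relu")
    layer_index: int  # which DNN layer to use
    next_layer: int   # next DNN layer (i.e. next queue) to use
    data_ptr: object  # reference to the input data

# One queue per DNN layer; each element is the task for one input.
NUM_LAYERS = 3
queues = [Queue() for _ in range(NUM_LAYERS)]

def processing_element(layer: int) -> None:
    """Read an element from this layer's queue, process the input,
    then write a new queue packet to the next layer's queue."""
    packet = queues[layer].get()
    result = f"{packet.data_ptr}->L{layer}"  # stand-in for the layer's compute
    if packet.next_layer < NUM_LAYERS:
        queues[packet.next_layer].put(
            QueuePacket(packet.layer_type, packet.next_layer,
                        packet.next_layer + 1, result))

# Two inputs in flight at once, independent of layer starting points:
queues[0].put(QueuePacket("conv", 0, 1, "inputA"))
queues[1].put(QueuePacket("relu", 1, 2, "inputB"))  # starts mid-pipeline
processing_element(0)
processing_element(1)
```

Because each input carries its own routing information in its packet, inputs entering at different layers coexist in the same set of queues, which is what allows the parallel, on-the-fly processing the abstract claims.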

Status: Application
Type: Utility
Filing date: 12 Apr 2018
Issue date: 17 Oct 2019