Microsoft Corporation
ACCELERATING INFERENCE OF TRADITIONAL ML PIPELINES WITH NEURAL NETWORK FRAMEWORKS

Abstract:

Methods, systems, and computer program products are provided for generating a neural network model. A machine learning (ML) pipeline parser is configured to identify a set of ML operators for a previously trained ML pipeline and map the set of ML operators to a set of neural network operators. The ML pipeline parser generates a first neural network representation using the set of neural network operators. A neural network optimizer is configured to perform an optimization on the first neural network representation to generate a second neural network representation. A tensor set provider outputs a set of tensor operations based on the second neural network representation for execution on a neural network framework. In this manner, a traditional ML pipeline can be converted into a neural network pipeline that may be executed on an appropriate framework, such as one that utilizes specialized hardware accelerators.
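
The abstract describes mapping the operators of a previously trained traditional ML pipeline onto neural network (tensor) operators so that inference can run on a neural network framework. The snippet below is a minimal illustrative sketch of that general idea only, not the patented parser/optimizer pipeline; the use of scikit-learn for the trained model, PyTorch as the tensor framework, and all class and variable names are assumptions made for illustration.

```python
# Illustrative sketch (assumed libraries: scikit-learn, PyTorch): express a
# trained "traditional" ML model as tensor operations so inference can run
# on a neural network framework.
import numpy as np
import torch
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# 1. Train a traditional ML model.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
sk_model = LogisticRegression().fit(X, y)

# 2. "Parse" the trained model: extract its learned parameters and map the
#    ML operator (logistic regression) to neural network operators
#    (a linear layer followed by a sigmoid).
weight = torch.tensor(sk_model.coef_, dtype=torch.float32)     # shape (1, 4)
bias = torch.tensor(sk_model.intercept_, dtype=torch.float32)  # shape (1,)

class LogisticRegressionAsNN(torch.nn.Module):
    """Hypothetical neural network representation of the trained model."""

    def __init__(self, weight, bias):
        super().__init__()
        self.linear = torch.nn.Linear(weight.shape[1], weight.shape[0])
        with torch.no_grad():
            self.linear.weight.copy_(weight)
            self.linear.bias.copy_(bias)

    def forward(self, x):
        # Tensor operations equivalent to the original model's predict_proba.
        return torch.sigmoid(self.linear(x))

nn_model = LogisticRegressionAsNN(weight, bias)

# 3. Run inference as tensor operations and check it matches the original.
X_t = torch.tensor(X, dtype=torch.float32)
with torch.no_grad():
    nn_probs = nn_model(X_t).squeeze(-1).numpy()
sk_probs = sk_model.predict_proba(X)[:, 1]
assert np.allclose(nn_probs, sk_probs, atol=1e-5)
```

Once the model is expressed as tensor operations, the framework's own tooling can be used to place it on specialized hardware (for example, moving the module to a GPU device), which is the kind of acceleration the abstract refers to.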

Status: Application
Type: Utility
Filing date: 14 Aug 2020
Issue date: 17 Feb 2022