Intel Corporation
Universal Loss-Error-Aware Quantization for Deep Neural Networks with Flexible Ultra-Low-Bit Weights and Activations

Last updated:

Abstract:

Apparatuses, methods, and GPUs are disclosed for universal loss-error-aware quantization (ULQ) of a neural network (NN). In one example, an apparatus includes data storage to store data including activation sets and weight sets, and a network processor coupled to the data storage. The network processor is configured to implement the ULQ by constraining a low-precision NN model based on a full-precision NN model, performing a loss-error-aware activation quantization to quantize activation sets into ultra-low-bit versions with given bit-width values, optimizing the NN with respect to a loss function that is based on the full-precision NN model, and performing a loss-error-aware weight quantization to quantize weight sets into ultra-low-bit versions.
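The abstract does not disclose the claimed algorithm in detail, but the core idea of quantizing a tensor to a given ultra-low bit-width while remaining aware of the error it introduces can be illustrated with a minimal sketch. The function below is a hypothetical stand-in, not the patented method: it performs uniform symmetric quantization and picks the clipping scale that minimizes the L2 reconstruction error, a simple proxy for "loss-error awareness."

```python
import numpy as np

def ultra_low_bit_quantize(x, bits):
    """Quantize x to a signed grid of the given bit-width.

    Searches over candidate clipping ranges and keeps the scale
    that minimizes the L2 quantization error -- a crude proxy for
    the loss-error-aware criterion described in the abstract.
    Returns the dequantized tensor and the chosen scale.
    """
    qmax = 2 ** (bits - 1) - 1
    max_abs = float(np.max(np.abs(x)))
    if max_abs == 0.0:
        return np.zeros_like(x), 1.0
    best_err, best_q, best_scale = np.inf, None, None
    # Candidate clipping fractions of the dynamic range.
    for frac in np.linspace(0.5, 1.0, 21):
        scale = (frac * max_abs) / qmax
        q = np.clip(np.round(x / scale), -qmax - 1, qmax)
        err = float(np.sum((q * scale - x) ** 2))
        if err < best_err:
            best_err, best_q, best_scale = err, q, scale
    return best_q * best_scale, best_scale
```

At 2 bits the output collapses to at most four distinct levels; at 8 bits the reconstruction is close to the input, which is the trade-off the claimed bit-width parameter controls.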

Status:

Application

Type:

Utility

Filing date:

26 Jun 2019

Issue date:

28 Apr 2022