Intel Corporation
Weight prefetch for in-memory neural network execution

Abstract:

The present disclosure is directed to systems and methods of bit-serial, in-memory execution of at least an nth layer of a multi-layer neural network in a first on-chip processor memory circuitry portion, contemporaneous with prefetching and storing layer weights associated with the (n+1)st layer of the multi-layer neural network in a second on-chip processor memory circuitry portion. The storage of layer weights in on-chip processor memory circuitry beneficially decreases the time required to transfer the layer weights upon execution of the (n+1)st layer of the multi-layer neural network by the first on-chip processor memory circuitry portion. In addition, the on-chip processor memory circuitry may include a third on-chip processor memory circuitry portion used to store intermediate and/or final input/output values associated with one or more layers included in the multi-layer neural network.
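The scheme the abstract describes is a form of double buffering: layer n executes from weights already resident in one on-chip memory portion while the weights for layer n+1 are fetched into a second portion, so the transfer latency is hidden behind computation. A minimal sketch of that control flow follows; `load_weights` and `run_layer` are hypothetical stand-ins for the off-chip weight transfer and the bit-serial in-memory layer execution, and are not taken from the patent itself.

```python
import threading

def load_weights(layer_idx, weight_store):
    # Stand-in for the off-chip -> on-chip weight transfer.
    return weight_store[layer_idx]

def run_layer(x, w):
    # Stand-in for bit-serial in-memory execution of one layer
    # (here: a trivial elementwise scaling).
    return [xi * w for xi in x]

def run_network(x, weight_store):
    # Two on-chip weight buffers, used alternately (ping-pong).
    buffers = [load_weights(0, weight_store), None]
    for n in range(len(weight_store)):
        current = buffers[n % 2]
        prefetch = None
        if n + 1 < len(weight_store):
            # Prefetch layer n+1 weights into the other buffer
            # contemporaneously with layer n execution.
            def _fetch(idx=n + 1, slot=(n + 1) % 2):
                buffers[slot] = load_weights(idx, weight_store)
            prefetch = threading.Thread(target=_fetch)
            prefetch.start()
        # A third memory portion would hold x, the intermediate activations.
        x = run_layer(x, current)
        if prefetch is not None:
            prefetch.join()  # weights are resident before layer n+1 starts
    return x
```

The join before the next iteration models the requirement that the (n+1)st layer's weights be fully resident in the second portion before that layer begins executing.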

Status:
Grant
Type:

Utility

Filing date:

15 Oct 2018

Issue date:

31 May 2022