NVIDIA Corporation
Instructions for managing a parallel cache hierarchy
Abstract:
A technique for managing a parallel cache hierarchy that includes receiving an instruction from a scheduler unit, where the instruction comprises a load instruction or a store instruction; determining that the instruction includes a cache operations modifier that identifies a policy for caching data associated with the instruction at one or more levels of the parallel cache hierarchy; and executing the instruction and caching the data associated with the instruction based on the cache operations modifier.
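The per-instruction caching policy the abstract describes is publicly visible in NVIDIA's PTX ISA, where load and store instructions accept cache operators such as .ca (cache at all levels), .cg (cache at L2 only), .cs (streaming), and .wt (write-through). As a hedged sketch, the CUDA kernel below uses the documented intrinsics __ldcg and __stwt to attach such modifiers to an ordinary load and store; the kernel itself is a hypothetical example, not code from the patent.

```cuda
#include <cuda_runtime.h>

// Hypothetical kernel illustrating per-instruction cache-operation
// modifiers via CUDA's cache-operator intrinsics.
__global__ void scale(const float* in, float* out, float k, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Emits ld.global.cg: cache the line at L2 only, bypassing L1 --
    // a policy for data this SM will not re-read.
    float v = __ldcg(&in[i]);

    // Emits st.global.wt: write through the hierarchy rather than
    // allocating the line in cache.
    __stwt(&out[i], v * k);
}
```

Each intrinsic thus plays the role of a cache operations modifier attached to a single load or store, selecting which levels of the parallel cache hierarchy hold the associated data.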
Status:
Grant
Type:
Utility
Filing date:
1 May 2017
Issue date:
30 Jul 2019