NVIDIA Corporation
Asynchronous data movement pipeline

Abstract:

Apparatuses, systems, and techniques to parallelize operations in one or more programs with data copies from global memory to shared memory in each of the one or more programs. In at least one embodiment, a program asynchronously copies data from global memory to shared memory and continues performing additional operations in parallel while the data is being copied, until an indicator provided by an application programming interface to facilitate parallel computing, such as CUDA, informs said program that the data has been copied to shared memory.
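The technique the abstract describes resembles CUDA's asynchronous copy pipeline, where a kernel issues a global-to-shared copy, keeps computing, and later waits on a completion indicator. The following is a minimal sketch, assuming the `__pipeline_memcpy_async` / `__pipeline_commit` / `__pipeline_wait_prior` intrinsics from `<cuda_pipeline.h>` (CUDA 11+); the kernel name and tile size are illustrative, not taken from the patent.

```cuda
#include <cuda_pipeline.h>

// Illustrative kernel: overlap a global->shared copy with independent work.
__global__ void tiled_scale(const float* __restrict__ in, float* out, int n) {
    __shared__ float tile[256];
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n) {
        // Issue an asynchronous copy from global memory into shared memory.
        __pipeline_memcpy_async(&tile[threadIdx.x], &in[idx], sizeof(float));
        __pipeline_commit();        // commit the batch of outstanding copies

        // ... independent computation may proceed here while the copy
        //     is in flight, as the abstract describes ...

        __pipeline_wait_prior(0);   // the "indicator": block until copies land
        __syncthreads();            // make the shared tile visible block-wide

        out[idx] = tile[threadIdx.x] * 2.0f;
    }
}
```

On hardware without dedicated async-copy support, these intrinsics fall back to ordinary synchronous copies, so the sketch remains functionally correct either way.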

Status:
Grant
Type:

Utility

Filing date:

20 Mar 2020

Issue date:

5 Apr 2022