Using the NVIDIA CUDA Stream-Ordered Memory Allocator, Part 1
Most CUDA developers are familiar with the cudaMalloc and cudaFree API functions to allocate GPU-accessible memory. However, these API functions have long had a shortcoming: they aren't stream ordered. In this post, we introduce new API functions, cudaMallocAsync and cudaFreeAsync, that enable memory allocation and deallocation to be stream-ordered operations. In part 2 of this series, we highlight the benefits of this new capability by sharing some big data benchmark results, and we provide a code migration guide for modifying your existing applications. We also cover advanced topics to take advantage of stream-ordered memory allocation in the context of multi-GPU access and the use of IPC. This all helps you improve performance within your existing applications.

The first version in the following code example is inefficient because the first cudaFree call has to wait for kernelA to finish, so it synchronizes the device before freeing the memory. To make this run more efficiently, the memory can be allocated upfront and sized to the larger of the two sizes, as shown in the second version.
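Here is a minimal sketch of both patterns; the kernels, sizes, and launch configurations are illustrative placeholders, not taken from the original example:

```cpp
#include <algorithm>
#include <cuda_runtime.h>

// Placeholder kernels; the real work is application-specific.
// Both versions assume sizeA and sizeB are at least 256 * sizeof(float).
__global__ void kernelA(float* data) { data[threadIdx.x] = 1.0f; }
__global__ void kernelB(float* data) { data[threadIdx.x] = 2.0f; }

// Inefficient: each cudaFree must wait for the preceding kernel to finish,
// synchronizing the device before the memory is released.
void inefficientVersion(cudaStream_t stream, size_t sizeA, size_t sizeB) {
    float *ptrA, *ptrB;
    cudaMalloc(&ptrA, sizeA);
    kernelA<<<1, 256, 0, stream>>>(ptrA);
    cudaFree(ptrA);  // waits for kernelA before freeing
    cudaMalloc(&ptrB, sizeB);
    kernelB<<<1, 256, 0, stream>>>(ptrB);
    cudaFree(ptrB);  // waits for kernelB before freeing
}

// More efficient: allocate once upfront, sized to the larger of the two uses,
// so no synchronization happens between the kernels.
void upfrontVersion(cudaStream_t stream, size_t sizeA, size_t sizeB) {
    float* ptr;
    cudaMalloc(&ptr, std::max(sizeA, sizeB));
    kernelA<<<1, 256, 0, stream>>>(ptr);
    kernelB<<<1, 256, 0, stream>>>(ptr);
    cudaFree(ptr);  // a single synchronizing free at the end
}
```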
This increases code complexity in the application because the memory management code is separated out from the business logic. The problem is exacerbated when other libraries are involved. This is much harder for the application to make efficient, because it may not have complete visibility or control over what the library is doing. To circumvent this problem, the library would have to allocate memory when a function is invoked for the first time and never free it until the library is deinitialized. This not only increases code complexity, but it also causes the library to hold on to the memory longer than it needs to, potentially denying another portion of the application the use of that memory. Some applications take the idea of allocating memory upfront even further by implementing their own custom allocator. This adds a significant amount of complexity to application development. CUDA aims to provide a low-effort, high-performance alternative.
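As a rough sketch of that caching workaround (the names library, doWork, and deinitialize are hypothetical):

```cpp
#include <cuda_runtime.h>

namespace library {

static void*  scratch     = nullptr;  // held for the library's whole lifetime
static size_t scratchSize = 0;

// Allocates on first use (or growth) and deliberately never frees between
// calls, so later invocations avoid the cost of cudaMalloc/cudaFree.
void doWork(cudaStream_t stream, size_t size) {
    if (scratchSize < size) {
        cudaFree(scratch);  // no-op on the first call (scratch is nullptr)
        cudaMalloc(&scratch, size);
        scratchSize = size;
    }
    // ... launch kernels on stream that use scratch ...
}

// The memory is only returned to the system here.
void deinitialize() {
    cudaFree(scratch);
    scratch = nullptr;
    scratchSize = 0;
}

}  // namespace library
```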
CUDA 11.2 introduced a stream-ordered memory allocator to solve these types of problems, with the addition of cudaMallocAsync and cudaFreeAsync. These new API functions shift memory allocation from global-scope operations that synchronize the entire device to stream-ordered operations that enable you to compose memory management with GPU work submission. This eliminates the need to synchronize outstanding GPU work and helps restrict the lifetime of the allocation to the GPU work that accesses it.

All the usual stream-ordering rules apply to cudaMallocAsync and cudaFreeAsync. The memory returned from cudaMallocAsync can be accessed by any kernel or memcpy operation as long as the kernel or memcpy is ordered to execute after the allocation operation and before the deallocation operation, in stream order. Deallocation can be performed in any stream, as long as it is ordered to execute after the allocation operation and after all accesses of that memory on all streams of the GPU.

It is now possible to manage memory at function scope, as in the following example of a library function launching kernelA.
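A minimal sketch of such a library function (the kernel body and allocation size are illustrative):

```cpp
#include <cuda_runtime.h>

__global__ void kernelA(char* data) { data[threadIdx.x] = 0; }

// Both the allocation and the free are ordered on the caller's stream, so the
// memory's lifetime is tied to the work that uses it and no device-wide
// synchronization is required.
void libraryFunction(cudaStream_t stream) {
    char* ptr;
    cudaMallocAsync(reinterpret_cast<void**>(&ptr), 1024, stream);
    kernelA<<<1, 256, 0, stream>>>(ptr);
    cudaFreeAsync(ptr, stream);  // executes after kernelA in stream order
}
```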
In effect, stream-ordered allocation behaves as if allocation and free were kernels. If kernelA produces a valid buffer on a stream and kernelB invalidates it on the same stream, then an application is free to access the buffer after kernelA and before kernelB in the appropriate stream order.

Memory allocation and deallocation cannot fail asynchronously. Memory errors that occur because of a call to cudaMallocAsync or cudaFreeAsync (for example, out of memory) are reported immediately through an error code returned from the call. If cudaMallocAsync completes successfully, the returned pointer is guaranteed to be a valid pointer to memory that is safe to access in the appropriate stream order. The CUDA driver uses memory pools to achieve this behavior of returning a pointer immediately.

The following example shows various valid usages. Figure 1 shows the various dependencies specified in this code example: as you can see, all kernels are ordered to execute after the allocation operation and complete before the deallocation operation.
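A reconstruction of the kinds of valid usages described above (the kernels are placeholders, and the streams and event are assumed to be created by the caller):

```cpp
#include <cassert>
#include <cuda_runtime.h>

// Placeholder kernels; assumes size is at least 256 * sizeof(float).
__global__ void kernelA(float* p) { p[threadIdx.x] = 1.0f; }
__global__ void kernelB(float* p) { p[threadIdx.x] += 1.0f; }
__global__ void kernelC(float* p) { p[threadIdx.x] *= 2.0f; }

void validUsages(cudaStream_t streamA, cudaStream_t streamB,
                 cudaStream_t streamC, cudaEvent_t event, size_t size) {
    float* ptr;
    // Failures (for example, out of memory) are reported immediately through
    // the returned error code, never asynchronously.
    cudaError_t err =
        cudaMallocAsync(reinterpret_cast<void**>(&ptr), size, streamA);
    assert(err == cudaSuccess);

    // Work in the allocating stream may access the memory directly.
    kernelA<<<1, 256, 0, streamA>>>(ptr);

    // Work in another stream may access it once a dependency on the
    // allocating stream has been established.
    cudaEventRecord(event, streamA);
    cudaStreamWaitEvent(streamB, event, 0);
    kernelB<<<1, 256, 0, streamB>>>(ptr);

    // Synchronizing the device also orders all later work after the allocation.
    cudaDeviceSynchronize();
    kernelC<<<1, 256, 0, streamC>>>(ptr);

    // The free can be issued on any stream, as long as it is ordered after
    // every access on every stream; here streamB waits for kernelC first.
    cudaEventRecord(event, streamC);
    cudaStreamWaitEvent(streamB, event, 0);
    cudaFreeAsync(ptr, streamB);
}
```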