Unified Memory for CUDA Beginners



", launched the fundamentals of CUDA programming by showing how to write down a simple program that allocated two arrays of numbers in memory accessible to the GPU after which added them together on the GPU. To do this, I introduced you to Unified Memory, which makes it very easy to allocate and entry information that can be utilized by code running on any processor in the system, CPU or GPU. I finished that put up with a number of easy "exercises", certainly one of which inspired you to run on a latest Pascal-based mostly GPU to see what happens. I hoped that readers would strive it and touch upon the outcomes, and a few of you probably did! I urged this for two reasons. First, because Pascal GPUs such as the NVIDIA Titan X and the NVIDIA Tesla P100 are the primary GPUs to incorporate the Page Migration Engine, which is hardware assist for Unified Memory web page faulting and migration.



The second reason is that it provides a great opportunity to learn more about Unified Memory.

Fast GPU, Fast Memory… Right!

But let's see. First, I'll reprint the results of running on two NVIDIA Kepler GPUs (one in my laptop and one in a server). Now let's try running on a really fast Tesla P100 accelerator, based on the Pascal GP100 GPU. Hmmmm, that's under 6 GB/s: slower than running on my laptop's Kepler-based GeForce GPU. Don't be discouraged, though; we can fix this. To understand how, I need to tell you a bit more about Unified Memory.

What Is Unified Memory?

Unified Memory is a single memory address space accessible from any processor in a system (see Figure 1). This hardware/software technology allows applications to allocate data that can be read or written by code running on either CPUs or GPUs. Allocating Unified Memory is as simple as replacing calls to malloc() or new with calls to cudaMallocManaged(), an allocation function that returns a pointer accessible from any processor (ptr in the following).
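To make that concrete, here is a minimal allocation sketch based on the description above; the array size N and the use of two arrays are illustrative, and error checking is omitted:

    #include <cuda_runtime.h>

    int main(void) {
      int N = 1 << 20;            // 1M elements (illustrative size)
      float *x = nullptr, *y = nullptr;

      // Instead of malloc()/new, ask the CUDA runtime for managed memory.
      // The returned pointers are accessible from both CPU and GPU code.
      cudaMallocManaged(&x, N * sizeof(float));
      cudaMallocManaged(&y, N * sizeof(float));

      // ... use x and y from host or device code ...

      cudaFree(x);
      cudaFree(y);
      return 0;
    }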



When code running on a CPU or GPU accesses data allocated this way (often called CUDA managed data), the CUDA system software and/or the hardware takes care of migrating memory pages to the memory of the accessing processor. The important point here is that the Pascal GPU architecture is the first with hardware support for virtual memory page faulting and migration, via its Page Migration Engine. Older GPUs based on the Kepler and Maxwell architectures also support a more limited form of Unified Memory.

What Happens on Kepler When I Call cudaMallocManaged()?

On systems with pre-Pascal GPUs like the Tesla K80, calling cudaMallocManaged() allocates size bytes of managed memory on the GPU device that is active when the call is made1. Internally, the driver also sets up page table entries for all pages covered by the allocation, so that the system knows that the pages are resident on that GPU. So, in our example, running on a Tesla K80 GPU (Kepler architecture), x and y are both initially fully resident in GPU memory.
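(This check is not part of the original walkthrough, but if you want to know at runtime which of the two behaviors to expect, the CUDA runtime exposes a device attribute for it. The sketch below assumes the current device and omits error checking.)

    #include <cstdio>
    #include <cuda_runtime.h>

    int main(void) {
      int device = 0;
      cudaGetDevice(&device);

      // The attribute is 1 on GPUs that can take page faults on managed memory
      // (Pascal and later, on supported platforms) and 0 on Kepler/Maxwell-style
      // systems, where the pre-Pascal behavior described above applies.
      int concurrent = 0;
      cudaDeviceGetAttribute(&concurrent, cudaDevAttrConcurrentManagedAccess, device);
      printf("GPU %d %s page fault on managed memory\n",
             device, concurrent ? "can" : "cannot");
      return 0;
    }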



Then in the CPU initialization loop (see the sketch after this paragraph), the CPU steps through both arrays, initializing their elements to 1.0f and 2.0f, respectively. Since the pages are initially resident in device memory, a page fault occurs on the CPU for each array page it writes, and the GPU driver migrates the page from device memory to CPU memory. After the loop, all pages of the two arrays are resident in CPU memory. After initializing the data on the CPU, the program launches the add() kernel to add the elements of x to the elements of y. On pre-Pascal GPUs, upon launching a kernel, the CUDA runtime must migrate all pages previously migrated to host memory or to another GPU back to the device memory of the device running the kernel2. Since these older GPUs can't page fault, all data must be resident on the GPU just in case the kernel accesses it (even if it won't).
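For reference, here is a minimal sketch of the kind of program this walkthrough describes: managed allocation, the CPU initialization loop, and an add() kernel. The kernel body and the <<<1, 1>>> launch configuration are illustrative, not necessarily the exact code from the earlier post.

    #include <cuda_runtime.h>

    // Illustrative kernel: add the elements of x into y.
    __global__ void add(int n, float *x, float *y) {
      for (int i = 0; i < n; i++)
        y[i] = x[i] + y[i];
    }

    int main(void) {
      int N = 1 << 20;
      float *x, *y;
      cudaMallocManaged(&x, N * sizeof(float));   // initially resident on the GPU (pre-Pascal)
      cudaMallocManaged(&y, N * sizeof(float));

      // CPU initialization loop: on pre-Pascal GPUs each page written here faults
      // on the CPU and is migrated from device memory to CPU memory.
      for (int i = 0; i < N; i++) {
        x[i] = 1.0f;
        y[i] = 2.0f;
      }

      // Kernel launch: on pre-Pascal GPUs the runtime migrates every managed page
      // back to the device first, because the GPU cannot page fault.
      add<<<1, 1>>>(N, x, y);
      cudaDeviceSynchronize();

      cudaFree(x);
      cudaFree(y);
      return 0;
    }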

