Streams and Multi-GPU using CUVI
Revision as of 18:01, 4 May 2012
Using Streams with CUVI
The CUVI framework provides a way to use streams with minimal coding effort. Each CUVI function takes an optional parameter specifying the stream on which it should run. The code below shows how a single CUVI function call can be divided into streaming calls on the GPU. In most cases this improves performance, because copying image data to the GPU and processing data already on the GPU happen simultaneously.
CUVI example
In this example we use CUVI's RGB2Gray function from the Color Operations module on a full-HD (1920×1080) input image.
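As a rough illustration of what a per-pixel RGB-to-gray conversion computes, here is a minimal CPU reference sketch. The function name is hypothetical, and the ITU-R BT.601 luma weights are an assumption; the source does not state which weights CUVI's RGB2Gray uses on the GPU.

```c
#include <stdint.h>
#include <stddef.h>

/* CPU reference for RGB-to-gray conversion. CUVI's RGB2Gray performs
 * this kind of per-pixel conversion on the GPU; the BT.601 weights
 * below (0.299 R + 0.587 G + 0.114 B) are assumed, not taken from
 * CUVI documentation. Pixels are interleaved 8-bit RGB. */
void rgb2gray_cpu(const uint8_t *rgb, uint8_t *gray, size_t numPixels)
{
    for (size_t i = 0; i < numPixels; ++i) {
        const uint8_t r = rgb[3 * i + 0];
        const uint8_t g = rgb[3 * i + 1];
        const uint8_t b = rgb[3 * i + 2];
        /* Integer BT.601 luma, rounded to nearest */
        gray[i] = (uint8_t)((299 * r + 587 * g + 114 * b + 500) / 1000);
    }
}
```

For a full-HD frame, `numPixels` would be 1920 × 1080; the GPU version does the same arithmetic in parallel across pixels.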
Same example with Streams
Streams can greatly improve the performance of your application by hiding data-processing time inside data-copying time. Instead of waiting for the complete image to be copied to the GPU before processing, streaming lets processing begin as the data arrives on the GPU. Here's how you can use streaming in your application with CUVI:
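A minimal sketch of the chunk division behind the streamed version: the image is split into row-chunks, and each chunk is then copied and processed on its own stream. Only the host-side split arithmetic is shown; the function name and the rows-based chunking scheme are assumptions, not CUVI API.

```c
#include <stddef.h>

/* Split an image of 'height' rows into 'numStreams' row-chunks.
 * Chunk i starts at row offsets[i] and spans counts[i] rows; any
 * leftover rows go to the last chunk. In the streamed CUVI version,
 * chunk i would be copied with an async copy on stream i and then
 * processed by the CUVI call that takes stream i as its optional
 * stream parameter (exact call signature assumed, not shown here). */
void splitRows(int height, int numStreams, int *offsets, int *counts)
{
    int rowsPerChunk = height / numStreams;
    for (int i = 0; i < numStreams; ++i) {
        offsets[i] = i * rowsPerChunk;
        counts[i]  = rowsPerChunk;
    }
    counts[numStreams - 1] += height % numStreams; /* remainder rows */
}
```

While chunk i is being processed, chunk i+1 can already be in flight on the copy engine, which is where the overlap of copy and compute comes from.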
Multi-GPU in CUVI
Applications that use CUVI can also scale up in a multi-GPU environment without changing a single line of code. We now know how to work with streams in CUVI; multi-GPU is nothing more than dividing those chunks' execution across all the GPU devices installed in the machine. You can write a single piece of code, with a few checks and some error handling, that runs on a single-GPU machine as well as on a multi-GPU machine while using the full capability of that machine.
If you have more than one GPU installed in the machine, you can divide stream execution among them by selecting the device before any CUVI call using cuviSetCurrentDevice(X), where X is the device id in the range {0, N-1} on a machine containing N CUDA-capable GPUs. Any CUVI function call following cuviSetCurrentDevice(X) executes on device X until you set the current device to another GPU.
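A minimal sketch of dividing stream-chunks across devices round-robin, under the same chunking assumption as above; the helper name is hypothetical. Before launching the work for chunk i, you would call cuviSetCurrentDevice(devices[i]) so that the following CUVI calls run on that GPU.

```c
/* Assign stream-chunks to GPUs round-robin: chunk i runs on device
 * i % numDevices. The helper name is hypothetical; only the
 * cuviSetCurrentDevice call mentioned in the text is actual CUVI API.
 * With numDevices == 1 every chunk maps to device 0, so the same code
 * degenerates to the single-GPU streamed version without changes. */
void assignChunksToDevices(int numChunks, int numDevices, int *devices)
{
    for (int i = 0; i < numChunks; ++i)
        devices[i] = i % numDevices;
}
```

Querying N at startup (for example with CUDA's cudaGetDeviceCount) and feeding it in as numDevices is what lets one binary use the full capability of whatever machine it runs on.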