GPGPU with GDAL - Basics of GPGPU interfacing

Yann Chemin

Abstract


The new generation of graphic cards carries an on-board Graphical Processing Unit (GPU). These GPUs commonly have hundreds of processing cores, a very high-speed parallel architecture, and RAM already in the Gigabyte range. Driven essentially by the gaming industry's need to process virtual representations of reality, they now also offer accelerated general physics algorithms for environmental modeling, such as fluid dynamics.

General-Purpose computation on GPUs (GPGPU) is a relatively new type of computation made possible by the increasingly varied kinds of processing available on these graphic cards. GPUs are what computer engineering calls "coprocessors"; their specific high-speed, highly parallel architecture makes them very attractive for heavy RAM-based computations.

More information on GPGPU in general can be found at www.gpgpu.org.
For our example, we will use a language for NVIDIA GPUs called Compute Unified Device Architecture (CUDA). CUDA is a C/C++ language extension that lets your code send data to a GPGPU-enabled NVIDIA graphic card, process it there, and retrieve the results from the graphic card's RAM back to your computer's hard disk.
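To illustrate the workflow the abstract describes (send data to the card, compute there, copy the results back), here is a minimal CUDA sketch computing NDVI from two hypothetical float arrays of red and near-infrared reflectance. It is only an illustrative sketch of the host/device round trip, not the article's actual code; in the full article the bands would be read and written with GDAL rather than filled with constants.

/* ndvi.cu - minimal CUDA host/device round trip (illustrative sketch).
   Compile with: nvcc ndvi.cu -o ndvi */
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

/* Each GPU thread computes the NDVI of one pixel. */
__global__ void ndvi_kernel(const float *red, const float *nir,
                            float *ndvi, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        ndvi[i] = (nir[i] - red[i]) / (nir[i] + red[i]);
}

int main(void)
{
    const int n = 1 << 20;                 /* number of pixels (example) */
    size_t bytes = n * sizeof(float);

    /* Host buffers; in a real program these would be filled by GDAL. */
    float *h_red  = (float *)malloc(bytes);
    float *h_nir  = (float *)malloc(bytes);
    float *h_ndvi = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { h_red[i] = 0.2f; h_nir[i] = 0.6f; }

    /* Device buffers in the graphic card's RAM. */
    float *d_red, *d_nir, *d_ndvi;
    cudaMalloc(&d_red,  bytes);
    cudaMalloc(&d_nir,  bytes);
    cudaMalloc(&d_ndvi, bytes);

    /* Send the data to the GPU, process it there, retrieve the result. */
    cudaMemcpy(d_red, h_red, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_nir, h_nir, bytes, cudaMemcpyHostToDevice);
    ndvi_kernel<<<(n + 255) / 256, 256>>>(d_red, d_nir, d_ndvi, n);
    cudaMemcpy(h_ndvi, d_ndvi, bytes, cudaMemcpyDeviceToHost);

    printf("NDVI[0] = %f\n", h_ndvi[0]);

    cudaFree(d_red); cudaFree(d_nir); cudaFree(d_ndvi);
    free(h_red); free(h_nir); free(h_ndvi);
    return 0;
}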

Keywords


programming; GPGPU; GDAL; raster; NDVI
