Peter Messmer, NVIDIA, Switzerland
Since the introduction of CUDA a bit over a decade ago, heterogeneous computing with GPUs has become increasingly popular in HPC. While the initial applications were mostly exploratory in nature, the processing power, the relatively intuitive programming model, and a rapidly growing software ecosystem of tools, libraries, and training material helped a broad user community adopt heterogeneous computing. Today, most of the top HPC applications are GPU accelerated, covering all areas of computational science and engineering, including quantum chemistry, structural mechanics, and weather simulation. This trend received a further boost from the growing compute demands of machine learning, specifically the training of deep neural networks, where the processing power of GPUs was suddenly sought after by non-traditional HPC applications in the data center. Today, we therefore find GPUs not only in the fastest supercomputers in the world, but also in the largest data centers. In this presentation, I will discuss the current impact of GPUs in HPC and the data center, look at the challenges developers still face, and describe how we are working to mitigate them.