Abstracts

Sustainability, Energy Efficiency and HPC

Natalie Bates
Co-chair of the Energy Efficient High Performance Computing Working Group (EE HPC WG)

Without dispute, the amount of CO2 in Earth's atmosphere has been rising rapidly, attributed in large part to the combustion of the coal, oil and gas that has fueled industrialization.  While technologies consume energy, they also enable productivity enhancements and directly contribute to energy efficiency.  Some even argue that computing and information technologies are vital, significant and critical in moving towards a low-carbon future.  How would HPC net out with respect to carbon and energy: a net savings or a deficit?

With a targeted power envelope of around 20 MW in 2018, and notwithstanding the significant technology advances required to meet this goal, Exascale will force a redefinition of the supercomputer's relation to energy.  While enhanced datacenter efficiency and metrics like PUE have brought significant improvement, we must now envision game-changing innovations, fostering a focus on sustainability through renewable energy utilization, heat re-use and optimal datacenter location.
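To make the PUE metric mentioned above concrete: PUE is defined as total facility energy divided by the energy delivered to the IT equipment, with an ideal value of 1.0. A minimal illustrative calculation (the figures are hypothetical, not from any particular facility):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy.
    An ideal facility has PUE = 1.0; overhead (cooling, power conversion,
    lighting) pushes the ratio above 1."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical example: a datacenter drawing 15 GWh/year overall, of which
# 10 GWh reaches the IT equipment, has PUE = 1.5 -- half a unit of
# overhead energy per unit of useful IT load.
print(pue(15_000_000, 10_000_000))  # -> 1.5
```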

 

The Danish plans – for gardar and for computing as a service

Martin Bech
UNI-C

The whole field of research e-infrastructures in Denmark is currently being re-organized with the creation of a new organization with the preliminary name DeiC (Danish e-infrastructure Centre). The aim of this organization is to provide the whole spectrum of IT infrastructures to the research community in Denmark, ranging from the research network through cloud computing to high performance computing. The NHPC cluster is a component in this strategy.

This talk will try to summarize this strategy and, in particular, the part the gardar cluster plays in it, as far as it is known at present.

 

Simulations of materials using distributed computing

Hannes Jónsson, professor
University of Iceland

The simulation of the time evolution of materials on the atomic scale is one of the grand challenges of computational science. An algorithm called ‘adaptive kinetic Monte Carlo’ (AKMC) is designed to make efficient use of distributed computing on possibly remote computers connected simply by the internet. Various processes in materials can be simulated in this way, including diffusion in and on the surface of solids, growth of overlayers, and chemical reactions catalyzed by solid surfaces or even nanoclusters. Such simulations can, in particular, help the search for improved materials for renewable energy technology. Some examples of recent simulations will be given.
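For illustration, the event-selection step at the core of kinetic Monte Carlo can be sketched in a few lines: an escape mechanism is chosen with probability proportional to its rate, and the clock advances by an exponentially distributed increment. This is a minimal generic sketch, not the AKMC implementation itself; what distinguishes AKMC is that the rate table is built on the fly from saddle-point searches, which is the part that can be farmed out to distributed, possibly remote, computers.

```python
import math
import random

def kmc_step(rates, rng=random.random):
    """One rejection-free kinetic Monte Carlo step.

    Given the rates of the currently known escape mechanisms, pick
    mechanism i with probability rates[i] / sum(rates) and draw the
    time increment from an exponential distribution with the total rate.
    """
    total = sum(rates)
    # Select an event by walking the cumulative rate table.
    threshold = rng() * total
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if acc >= threshold:
            break
    # Exponentially distributed waiting time (1 - u lies in (0, 1]).
    dt = -math.log(1.0 - rng()) / total
    return i, dt

# Toy usage with three hypothetical escape rates (arbitrary units):
event, dt = kmc_step([1.0e3, 2.0e3, 0.5e3])
```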

 

Next generation energy efficient IT infrastructure

Paul Santeler, Vice President and GM HPC and Hyperscale Computing, Hewlett-Packard 
Richard Curran, Director Product Marketing EMEA, Intel Corporation 

HP & Intel will provide insights and guidance on next generation energy efficient IT infrastructure and cloud computing. They will discuss how their solutions will contribute to greater energy efficiency, less complexity, better asset utilization and lower TCO.

 

Challenges and Roadmaps to Exascale Simulation for Molecular Dynamics

Erik Lindahl
KTH Royal Institute of Technology & Stockholm University

Over the last decade, molecular dynamics simulation has evolved from a severely limited, esoteric method into a cornerstone of many fields, in particular structural biology, where it is now just as established as NMR or X-ray crystallography. Here, I will discuss some of our recent results using Nordic, PRACE and international resources, in particular how high performance computing has become a critical tool in our efforts to understand how anesthesia and other important processes in the nervous system work.

I will show roughly where the current cutting edge for molecular dynamics lies on clusters in terms of scaling and performance, but also discuss our current approaches to extend this by automated parallel adaptive molecular dynamics in a new open framework called “Copernicus”, drawing significant inspiration from the Folding@home approach to simulation. This type of computation appears to combine efficient strong scaling with ensemble parallelism and the possibility to employ resources at several different centers, and even makes it possible to achieve sampling of slow dynamics that is frequently more efficient than on special-purpose hardware.
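The ensemble-parallel idea can be sketched schematically: many independent replicas of a simulation are launched concurrently and their results gathered for analysis. This is only an illustrative sketch, not Copernicus itself; the replica below is a hypothetical toy stand-in, and a real ensemble would dispatch MD engine runs to workers at multiple computing centers rather than threads on one machine.

```python
from concurrent.futures import ThreadPoolExecutor
import random

def run_replica(seed: int, n_steps: int = 1000) -> float:
    """Stand-in for one short simulation trajectory (a toy random walk).
    In a real ensemble this would invoke an MD engine run."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(n_steps):
        x += rng.gauss(0.0, 1.0)
    return x

def run_ensemble(n_replicas: int) -> list:
    """Launch independent replicas concurrently and gather their results.
    Threads keep this sketch portable; production ensembles use separate
    processes or machines, possibly at several different centers."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_replica, range(n_replicas)))

results = run_ensemble(8)  # eight independent, embarrassingly parallel runs
```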