Summer Round-Up

It’s been a while since our last update, and we’re still in the early days of our work. That said, we’ve been travelling to introduce ourselves and to present some of the work we’ll be building on over our four project years. Maybe you’ve seen us? This summer we gave presentations at both ISC High Performance and the Platform for Advanced Scientific Computing (PASC) conference.

At ISC, we were available to answer questions at the EuroHPC Joint Undertaking booth throughout the exhibition. In particular, Niclas Jansson introduced our project in a booth presentation on the Wednesday of the conference. Before introducing our Lighthouse cases and project partners, he talked about why our work is important even in light of the work other projects are already doing on Computational Fluid Dynamics (CFD). First, about 10% of the world’s energy use is spent overcoming turbulent friction! Second, since CFD can be used to study things like the movement of air, water, and even aircraft in our atmosphere, as in our first, second, and fifth Lighthouse cases, there is no upper limit to the size, in both detail and physical scale, of the systems we can study via simulations of fluid dynamics.

What sets us apart from other European HPC Centers of Excellence is the size of the simulations we plan to run. Both because of their scale (e.g. an entire small-scale ship hull) and their level of detail (e.g. the movement of grains of sand around a wind turbine foundation), our simulations can require an entire exascale supercomputer to run. These massive simulations are important because they are more realistic than what we’ve been able to do in the past: instead of approximate answers that help us understand general ideas, exascale simulations have the potential to provide practical answers to grand challenges.

[Photo: Niclas Jansson presents the “Introducing CEEC” intro slide at ISC, in the CEEC brand colors of white and grey on a dark blue-green background.]

As you might imagine, simulations like these have the potential to require a great deal of energy and to encounter every possible error while performing up to 1,000,000,000,000,000,000 floating point operations per second (one exaFLOPS). Thus, our work will focus on things like improving code efficiency, monitoring performance, implementing new algorithms, mixing different levels of precision in a single simulation to save computational work where possible, and increasing fault tolerance.

In fact, mixed precision was the topic of our other presentation at ISC and of one of our presentations at PASC. Before I get into the details, you can get a crash course or refresher by reading the first of our “explainer” articles on mixed precision here.

As you now know from the explainer article, moving data takes both time and energy at exascale, both because of the distance electricity needs to travel between processors and memory and because memory itself runs slower than processors. Hence, decreasing the precision of numbers, that is, the number of bits used to represent them in calculations, can save time and energy in simulations by reducing the amount of data that needs to travel between processors and memory. However, previous attempts to decrease precision across an entire simulation run, or based on a best guess of when precision could safely be reduced, usually led to inaccurate results.
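To make the “fewer bits, less data” point concrete, here is a toy NumPy sketch, not project code; the field size and names are invented. It stores the same array at three precisions and prints how many bytes each copy occupies, which is roughly the amount of data that has to stream between memory and the processor every time the field is read or written.

```python
import numpy as np

# Toy illustration (not CEEC code): the same field stored at three precisions.
# Halving the bits per value halves the bytes that must move between memory
# and the processor on every pass over the array.
n = 1_000_000  # invented number of grid points
field64 = np.random.rand(n)            # double precision: 64 bits per value
field32 = field64.astype(np.float32)   # single precision: 32 bits per value
field16 = field64.astype(np.float16)   # half precision:   16 bits per value

for name, arr in [("float64", field64), ("float32", field32), ("float16", field16)]:
    print(f"{name}: {arr.nbytes / 1e6:.0f} MB per pass over the field")
```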

This is why Roman is using tools like Verificarlo on mini-apps like NEKbone (which mimics the Nek5000 project code) and AMG to see how much precision is actually necessary over the course of a simulation. Although each real use case would have its own data and its own precision needs, investigating a number of similar use cases and mini-apps should make it possible to generalize about when low precision can be used without jeopardizing results and when full precision is superfluous and just slows the system down.
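Verificarlo itself instruments compiled C/Fortran code, so the sketch below is only a loose Python analogy under assumptions of my own, not how Roman’s study actually works: it reruns a toy reduction kernel at lower precisions and compares the answer with a double-precision reference, which is the same basic question of how many bits a kernel really needs.

```python
import numpy as np

# Loose analogy (not Verificarlo): rerun a simple kernel at reduced precision
# and measure how far the result drifts from a double-precision reference.
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)

reference = np.sum(x ** 2)  # float64 "ground truth" for the squared norm

for dtype in (np.float32, np.float16):
    xr = x.astype(dtype)
    approx = np.sum(xr * xr, dtype=dtype)  # accumulate in reduced precision too
    rel_err = abs(float(approx) - reference) / reference
    print(f"{np.dtype(dtype).name}: relative error {rel_err:.1e}")
```

If the error at a given precision stays well below what the physics of the use case requires, that kernel is a candidate for running with fewer bits.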

With this knowledge, it’s then possible to rewrite algorithms to use only as much precision as necessary at any given point. A simulation might start at half precision (16 bits to represent each number) and finish using extended precision (up to 128 bits per number), and still be faster, more power efficient, and just as accurate as the original double-precision simulation. It is exactly this kind of precision optimization that we will need in order to run simulations as large as our Lighthouse cases efficiently and sustainably.
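As a minimal sketch of the general idea, here is mixed-precision iterative refinement, a textbook technique rather than CEEC’s actual algorithms, with an invented test matrix: the expensive solve is done in single precision, and cheap double-precision residuals pull the answer back to double-precision accuracy.

```python
import numpy as np

# Sketch of mixed-precision iterative refinement (a standard technique, not
# CEEC's algorithms): solve cheaply in single precision, then correct the
# answer using residuals computed in double precision.
rng = np.random.default_rng(1)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)  # invented, well-conditioned matrix
b = rng.standard_normal(n)

A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)  # low-precision first guess

for it in range(5):
    r = b - A @ x  # residual computed in double precision
    print(f"iteration {it}: residual norm {np.linalg.norm(r):.1e}")
    # The correction step reuses the cheap single-precision solver.
    x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
```

In practice one would reuse a single low-precision factorization instead of re-solving from scratch, but the pattern of a cheap low-precision solve followed by an accurate residual correction is the same.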

For the full technical details of Roman’s presentations, please check out his slides below!

Reliable and sustainable computations: An application-driven approach

Sustainable and Reliable Computing with Tools: Analyzing Precision Appetites of CFD Applications with VerifiCarlo

Roman’s talk at PASC was part of a broader minisymposium on the Cross-Cutting Aspects of Exploiting Exascale Platforms for High-Fidelity CFD in Turbulence Research. While researchers like him are rewriting algorithms for mixed precision, experts in software development, physics, and other domains will also need to optimise other parts of CFD software to fully realise the potential of exascale systems. For more on the minisymposium, watch an interview with CEEC’s very own Philipp Schlatter and Niclas Jansson on YouTube.

PASC23 on stage – Cross-Cutting Aspects of Exploiting Exascale Platforms for High-Fidelity CFD

We have been participating in other events besides ISC and PASC, of course, and will keep posting here as we can. For instance, you can look forward to more on our May presentation at the Cray User Group (CUG) later this fall. In the meantime, stay tuned for updates on our participation in future events and on any publications by subscribing to our newsletter or following our social media channels below. Until next time!