Reliable and sustainable computations: An application-driven approach

In this talk, Roman Iakymchuk presents his work on strategies for assuring the accuracy and reproducibility of parallel iterative solvers, properties that may not hold due to the non-associativity of floating-point operations. These strategies primarily rely on preserving every bit of the result until the final rounding, and hence can be costly. Energy consumption constraints on large-scale computing encourage scientists to revise not only the architecture of hardware but also applications, algorithms, and the underlying working and storage precision. The main aim is to make the cost of computing sustainable by applying the lagom principle ("not too much, not too little, the right amount"), especially to the working and storage precision. He will therefore introduce an approach that addresses the issue of sustainable, but still reliable, computations from the perspective of computer arithmetic tools. Before lowering precision, one must ensure that the simulation is numerically correct, e.g. by relying on alternative floating-point models and rounding modes to pinpoint numerical bugs and to estimate the accuracy. The work employs Verificarlo and its variable-precision backend to identify the parts of the code that benefit from smaller floating-point formats, and closes with preliminary results on proxy applications.
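
For illustration only (not taken from the talk), a minimal C snippet shows the non-associativity that makes parallel reductions sensitive to evaluation order:

    /* Floating-point addition is not associative: reordering a reduction,
       as a parallel run may do, can change the rounded result. */
    #include <stdio.h>

    int main(void) {
        double a = 1e20, b = -1e20, c = 1.0;
        printf("(a + b) + c = %.17g\n", (a + b) + c);  /* prints 1 */
        printf("a + (b + c) = %.17g\n", a + (b + c));  /* prints 0 */
        return 0;
    }

The second sum is 0 because 1.0 falls below the ulp of 1e20, so b + c rounds back to b; a bitwise-reproducible reduction must guard against exactly this loss.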

Sustainable and Reliable Computing with Tools: Analyzing Precision Appetites of CFD Applications with Verificarlo

Energy consumption constraints on large-scale computing encourage scientists to revise not only the architecture of hardware but also applications, algorithms, and the underlying working and storage precision. I will introduce an approach that addresses the issue of sustainable, but still reliable, computations from the perspective of computer arithmetic tools. We employ Verificarlo and its variable-precision backend to identify the parts of the code that benefit from smaller floating-point formats. Finally, we show preliminary results on proxies of CFD applications.
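
As a sketch of how such an analysis is typically driven (the file name solver.c, the toy computation, and the chosen precision/range values are illustrative assumptions, not from the abstract), one compiles with the Verificarlo wrapper and varies the emulated precision at run time through the VFC_BACKENDS environment variable:

    /* Assumed workflow sketch. Build with the Verificarlo compiler wrapper,
       then pick the emulated precision at run time:

           verificarlo-c solver.c -o solver
           # emulate binary32 (23 explicit significand bits, 8 exponent bits)
           # inside all binary64 operations:
           VFC_BACKENDS="libinterflop_vprec.so --precision-binary64=23 \
                         --range-binary64=8" ./solver
    */
    #include <stdio.h>

    int main(void) {
        /* Toy computation whose sensitivity to the emulated precision can be
           observed by rerunning under different --precision-binary64 values. */
        double r = 0.0;
        for (int i = 1; i <= 1000000; ++i)
            r += 1.0 / ((double)i * i);   /* converges to pi^2 / 6 */
        printf("sum: %.17g\n", r);
        return 0;
    }

Rerunning the unmodified binary under progressively smaller precision values reveals how few bits the computation actually needs, without touching the source code.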

Using VPREC to analyze the precision appetites and numerical abnormalities of several proxy applications

The third in a series of presentations from Roman Iakymchuk on using tools to investigate mixed-precision possibilities. He and his co-author Pablo de Oliveira Castro introduce an approach that addresses the issue of sustainable computations with computer arithmetic tools. They use the variable-precision backend (VPREC) to identify parts of the code that can benefit from smaller floating-point formats and show preliminary results on several proxy applications.
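
To convey the core idea behind a variable-precision backend without the tool itself, here is a hand-rolled C sketch (round_to_precision is a hypothetical helper, not VPREC's implementation) that rounds every intermediate double to p significand bits so one can watch where results degrade:

    /* Sketch of variable-precision emulation: round each double result to
       p significand bits and compare against full binary64. Compile with -lm. */
    #include <math.h>
    #include <stdio.h>

    static double round_to_precision(double x, int p) {
        if (x == 0.0 || !isfinite(x)) return x;
        int e;
        double m = frexp(x, &e);        /* x = m * 2^e, 0.5 <= |m| < 1 */
        double scale = ldexp(1.0, p);   /* 2^p */
        return ldexp(nearbyint(m * scale) / scale, e);
    }

    int main(void) {
        /* Partial harmonic sum, with every operation rounded to 24
           significand bits (binary32-like). */
        double exact = 0.0, reduced = 0.0;
        for (int i = 1; i <= 1000; ++i) {
            double term = 1.0 / i;
            exact += term;
            reduced = round_to_precision(reduced + round_to_precision(term, 24), 24);
        }
        printf("binary64 sum: %.17g\n", exact);
        printf("24-bit   sum: %.17g\n", reduced);
        return 0;
    }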

Precision in linear algebra solvers and its cascade impact on applications

Until recently, linear algebra solvers predominantly operated in double precision (binary64). With the advent of AI, in particular deep learning, lower-precision floating-point formats were introduced to accommodate the needs of such computations; the spectrum has since shifted toward fixed-point and integer arithmetic and is expanding to block floating-point arithmetic. This change […]
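
As a minimal illustration (not from the post) of how the choice of working precision cascades into solver results, a long binary32 accumulation stalls once each increment falls below the ulp of the running sum, while binary64 still has headroom:

    /* Illustrative only: repeated accumulation in binary32 vs binary64. */
    #include <stdio.h>

    int main(void) {
        const int n = 100000000;   /* 1e8 additions of 1.0 */
        float  s32 = 0.0f;
        double s64 = 0.0;
        for (int i = 0; i < n; ++i) {
            s32 += 1.0f;           /* stalls at 2^24 = 16777216 */
            s64 += 1.0;            /* stays exact at this scale */
        }
        printf("binary32 sum: %.1f (exact: %d)\n", s32, n);
        printf("binary64 sum: %.1f\n", s64);
        return 0;
    }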

CEEC at the EuroHPC Summit Poster Session

Flanders Meeting and Convention Centre "A Room with a ZOO", Koningin Astridplein 20-26, Antwerp, Belgium

If you’re interested in our progress over the last year or hoping to ask us questions about our plans for the future, don’t miss the chance to talk with our own Niclas Jansson at the first poster session of the EuroHPC Summit on Tuesday, March 19th!

‘Enabling mixed-precision with the help of tools: A Nekbone case study’ at PPAM24

Ostrava, Czech Republic

If you’re going to PPAM24, September 8 – 11 in Ostrava, Czech Republic, make sure to check out the talk by Yanxiang Chen from our partner UMU on their progress implementing and optimizing mixed precision in Nekbone toward its integration in Neko and NekRS. The accompanying paper is also available as a preprint online: https://arxiv.org/pdf/2405.11065