CEEC at ECCOMAS 2024
CEEC will have a robust presence at the 9th European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS24) this summer in Lisbon, Portugal! Over…
If you're attending the EuroHPC Summit Week this month in Antwerp, make sure to join CEEC's Niclas Jansson for a plenary session, "EuroHPC Users: How Are They Exploiting the Current EuroHPC Systems & Will Exploit Future Exascale Capabilities?" (17:15–18:45).
Knowledge Shared is Knowledge Gained: the 1st CEEC Community Workshop This past December 13th, CEEC held its first annual community workshop at our consortium partner Friedrich-Alexander-Universität in Erlangen, Germany and…
Recent trends and advancements in High-Performance Computing, including increasingly diverse and heterogeneous hardware, are challenging scientific software developers in their pursuit of good performance and efficient numerical methods. As a result, the well-known maxim “software outlives hardware” may no longer hold true, and researchers today are forced to refactor their codes to leverage these powerful new heterogeneous systems. We present Neko – a portable framework for high-fidelity spectral element flow simulations. Unlike prior work, Neko adopts a modern object-oriented Fortran 2008 approach, allowing multi-tier abstractions of the solver stack and supporting various hardware backends, ranging from general-purpose processors and accelerators down to exotic vector processors and Field Programmable Gate Arrays (FPGAs), via Neko’s device abstraction layer. Focusing on Neko’s performance and exascale readiness, we outline the optimisation and algorithmic work necessary to ensure scalability and performance portability across a wide range of platforms. Finally, we present performance measurements on a wide range of accelerated computing platforms, including the EuroHPC pre-exascale systems LUMI and Leonardo, where Neko achieves excellent parallel efficiency for an extreme-scale direct numerical simulation (DNS) of turbulent thermal convection using up to 80% of the entire LUMI supercomputer.
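The multi-tier abstraction described above can be pictured with a simple backend-dispatch pattern: the solver is written once against an abstract device interface, and concrete backends supply the hardware-specific implementations. The following is an illustrative sketch in Python, not Neko's actual object-oriented Fortran API; all class and method names here are hypothetical:

```python
from abc import ABC, abstractmethod

class Backend(ABC):
    """Abstract device layer: the solver only ever sees this interface."""
    @abstractmethod
    def axpy(self, alpha, x, y):
        """Compute y <- alpha * x + y and return y."""

class CPUBackend(Backend):
    """A general-purpose-processor backend; a GPU or FPGA backend
    would implement the same interface with device-specific kernels."""
    def axpy(self, alpha, x, y):
        for i in range(len(y)):
            y[i] += alpha * x[i]
        return y

class Solver:
    """Solver logic is written once against the abstract interface;
    selecting a different backend retargets it to other hardware."""
    def __init__(self, backend):
        self.backend = backend  # chosen once, e.g. at configure time

    def step(self, x, y):
        return self.backend.axpy(2.0, x, y)

solver = Solver(CPUBackend())
print(solver.step([1.0, 2.0], [0.0, 0.0]))  # [2.0, 4.0]
```

The design point is that only the backend classes contain hardware-specific code, so adding support for a new platform does not touch the solver stack.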
Join us for our first annual community workshop! The energy consumption constraints of large-scale computing encourage scientists to revise not only the architecture of the hardware but also the applications, the algorithms, and the underlying working/storage precision. The main aim is to make computations energy-efficient (i.e., sustainable) and robust, both numerically and in terms of fault tolerance. At the level of algorithmic solutions, we propose to utilize all available resources wisely through strategies that overlap algorithms with computation and, even more importantly, with communication. We also promote mixed-precision strategies with the aid of computer arithmetic tools such as VerifiCarlo and its variable-precision backend. Before lowering precision, one must ensure that the simulation is numerically correct, e.g. by relying on alternative floating-point models/rounding modes to pinpoint numerical bugs and to estimate the accuracy. We also work on fault-tolerant and resilient algorithms, as well as adaptivity and meshing/mesh refinement that adapt to the heterogeneous nature of current machines. Another topic discussed in the workshop is the adaptation of adjoint-based topology optimization methods to spectral-element CFD codes. In this workshop, we will share our approaches and lessons learnt together with preliminary results, and outline perspectives for the upcoming three years of the project.
It’s been a while since our last update, and we’re still in the early days of our work. That said, we’ve been travelling to introduce ourselves and to present some of the work we’ll be building on during our four project years. Maybe you’ve seen us? This summer we gave several presentations at both ISC High-Performance and the Platform for Advanced Scientific Computing (PASC) conference.
The third in a series of presentations from Roman Iakymchuk on work using tools to investigate mixed-precision possibilities. He and his co-author Pablo de Oliveira Castro introduce an approach to addressing the issue of sustainable computation with computer arithmetic tools. They use the variable-precision backend (VPREC) to identify parts of the code that can benefit from smaller floating-point formats and show preliminary results on several proxy applications.
This minisymposium was chaired by a CEEC consortium member and included a presentation by another CEEC consortium member. The arrival of exascale computing has opened up unprecedented simulation capabilities for Computational Fluid Dynamics (CFD) applications. While these systems offer high theoretical peak performance and high memory bandwidth, exploiting them efficiently necessitates complex programming models and significant programming investment.
Energy consumption constraints for large-scale computing encourage scientists to revise not only the architecture design of hardware but also applications, algorithms, and the underlying working/storage precision. I will introduce an approach that addresses the issue of sustainable, yet still reliable, computations from the perspective of computer arithmetic tools. We employ VerifiCarlo and its variable-precision backend to identify the parts of the code that benefit from smaller floating-point formats. Finally, we show preliminary results on proxies of CFD applications.
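The core mechanism behind this kind of variable-precision analysis can be illustrated by rounding intermediate results to a chosen number of significand bits and watching how the error grows. The sketch below is only an illustration of that idea in plain Python; it is not VerifiCarlo's actual VPREC interface, and `round_to_precision` and `dot` are hypothetical helpers:

```python
import math

def round_to_precision(x, bits):
    """Round x to `bits` significand bits, emulating a smaller
    floating-point format (an illustration of the VPREC idea only)."""
    if x == 0.0 or math.isinf(x) or math.isnan(x):
        return x
    m, e = math.frexp(x)          # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** bits
    return math.ldexp(round(m * scale) / scale, e)

# Evaluate an alternating-series dot product, rounding only the
# accumulator after every addition to emulate a working precision.
xs = [1.0 / (i + 1) for i in range(1000)]
ys = [(-1.0) ** i for i in range(1000)]

def dot(prec_bits):
    acc = 0.0
    for a, b in zip(xs, ys):
        acc = round_to_precision(acc + a * b, prec_bits)
    return acc

exact = dot(53)                   # 53 significand bits: plain float64
for p in (24, 11, 8):             # float32-, float16-, bfloat16-sized significands
    print(f"{p:2d}-bit significand: error = {abs(dot(p) - exact):.2e}")
```

Running a kernel at several emulated precisions like this shows which parts of a code tolerate smaller formats and which do not, which is the decision the VPREC-based analysis automates.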
What is Mixed Precision? Computers have been getting faster for as long as they’ve existed. However, not every computer part has been speeding up at the same rate. For example,…
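To make the precision trade-off concrete, here is a small stdlib-only sketch (using `struct` to round results through the IEEE 754 single-precision format) showing how a long accumulation drifts in 32-bit arithmetic while staying close to the exact answer in 64-bit:

```python
import struct

def to_float32(x):
    # Round-trip a Python float (an IEEE 754 double) through the
    # single-precision binary32 format, discarding extra significand bits.
    return struct.unpack('f', struct.pack('f', x))[0]

# Add 0.1 one million times. The exact answer is 100000, but neither
# format represents 0.1 exactly, and single precision drifts far more.
total32 = 0.0
total64 = 0.0
for _ in range(1_000_000):
    total32 = to_float32(total32 + to_float32(0.1))
    total64 += 0.1

print(f"float32 sum: {total32!r}")  # visibly off from 100000
print(f"float64 sum: {total64!r}")  # off only in the last decimals
```

The half-speed-but-half-the-error trade-off between such formats is exactly what mixed-precision methods try to navigate: use the small, fast format where the error stays benign, and the large one where it does not.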