What is CFD Part II: This Time with Computers

If you’ve already read Part I of this explainer series, you know that fluids are described by their density, pressure, temperature, velocity, and viscosity[1]. You also know that a fluid flow can be incompressible in one of two ways: either the density of the fluid stays the same no matter how quickly it runs into a wall like a merchant ship hull, or the fluid runs into the wall slowly enough not to experience a change in density. This “slow” case is why the Mach number of the flow can determine whether we use compressible or incompressible equations to calculate the fluid flow. Hence, compressible flows are only those that both involve a compressible fluid like a gas and flow fast enough to change the density of the fluid when running into a wall, such as an airplane wing at nearly the speed of sound[2]. All of these properties influence how fluid dynamics are calculated, as we discuss on our codes pages. But now, with no further ado, let’s bring computers into the equations!
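
If you like seeing the rule of thumb in code, here is a minimal Python sketch of the widely used “Mach 0.3” cutoff (the function name, the cutoff, and the sea-level speed of sound are illustrative choices, not something prescribed in Part I):

```python
def flow_regime(speed_m_s: float, speed_of_sound_m_s: float = 343.0) -> str:
    """Classify a flow using the common "Mach 0.3" rule of thumb.

    Illustrative sketch only: assumes air at sea level, where the
    speed of sound is roughly 343 m/s.
    """
    mach = speed_m_s / speed_of_sound_m_s
    if mach < 0.3:
        # Density changes are negligible: incompressible equations suffice.
        return f"Mach {mach:.2f}: incompressible equations are fine"
    # Density changes matter: reach for the compressible equations.
    return f"Mach {mach:.2f}: use compressible equations"

print(flow_regime(100.0))  # 100 m/s (~360 km/h): Mach 0.29, still "slow"
print(flow_regime(250.0))  # 250 m/s: Mach 0.73, density changes matter
```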

Computational fluid dynamics (CFD) “refers to a broad set of methods that are used to solve the coupled nonlinear equations that govern fluid motion,”[3] using computers. When we say the equations are coupled, we mean that they need to be solved simultaneously, as in the coupled system of Navier‐Stokes equations[4]: the same variables appear in multiple equations, so you need to know the answer to each equation in order to solve the others. This coupling is what makes solving fluid flow equations by hand so difficult that, even after Navier and Stokes developed their equations, scientists routinely used simpler equations to calculate fluid dynamics until the advent of modern computers[5].
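
For the curious, here are the incompressible Navier‐Stokes equations in a standard textbook form (shown purely for illustration; the CEEC codes solve various related formulations). Notice that the velocity u appears in both equations and the pressure p ties the first to the constraint in the second, so neither can be solved on its own:

```latex
% Incompressible Navier-Stokes: momentum balance plus mass conservation.
% u = velocity, p = pressure, rho = density, nu = kinematic viscosity.
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u},
\qquad
\nabla \cdot \mathbf{u} = 0 .
```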

Unfortunately, these simplified equations necessarily produced simplified answers: often only two-dimensional, and not detailed enough to replace experimentation. The transformational power of modern computers has been that the more powerful the computer, the more detailed the simulation. With the advent of exascale supercomputers, we’re reaching the point where simulations can practically replace large portions of the design and prototyping experimentation cycle.

However, even with the most cutting-edge exascale supercomputers, the level of detail and the structure of the software still have to be adjusted to ensure that simulations run quickly enough to be useful. CFD codes can be tuned to balance detail against efficiency in several ways, all of which we’re working on in CEEC.

Common Ways to Optimize CFD Codes

Adaptive Mesh Refinement

The first question we have to answer is: what is a mesh? Your mesh is the type and layout of the “blocks” you use to build your CFD simulation. You can imagine a CFD simulation like burying an airplane in a box of blocks. Every block represents a set of your chosen fluid flow equations, and hence data about the flow. Accordingly, every gap between blocks, or between a block and the airplane, represents space not covered by your chosen equations and thus lost data: lost knowledge about what’s happening with turbulence at the boundary layer.

When writing or choosing your CFD software, the blocks can be different shapes, which affects how seamlessly they fit around the shape of your airplane, merchant ship, or whatever else you’re simulating. Blocks can also be different sizes, which effectively changes the “resolution” of the simulation by changing how many times you solve your fluid flow equations in a given region of space. Since we know that turbulence happens where fluids come into contact with a wall, it makes sense to have the highest possible resolution near that wall, also known as the boundary layer. That super-high resolution becomes less useful, however, the further you get from the boundary layer and the turbulence “action”, as it were. Changing the size and “resolution” of the mesh blocks depending on where they appear in the simulation is one type of mesh refinement.
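
As a minimal sketch of the idea (the names and numbers are made up for illustration, not taken from any CEEC code), here is how you might grade cell sizes so they start tiny at the wall and grow geometrically away from it:

```python
def graded_wall_spacing(first_cell: float, ratio: float, n_cells: int) -> list[float]:
    """Cell heights that grow geometrically away from the wall.

    Hypothetical illustration: fine cells resolve the boundary layer,
    coarser cells cover the far field where less is happening.
    """
    return [first_cell * ratio**i for i in range(n_cells)]

# 1 mm at the wall, each cell 20% taller than the one below it:
heights = graded_wall_spacing(first_cell=1e-3, ratio=1.2, n_cells=10)
print([f"{h * 1000:.2f} mm" for h in heights])
```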

But CFD simulations aren’t a single snapshot of flow at a single point in time. They’re more like old-fashioned film or cartoons, where the flow “video” is built from many frames run in chronological order.

In the same way that you may want to change your mesh based on where it appears in the simulation, you may also want to change it based on when it appears in the simulated flow. The part of the simulation where turbulence is happening most may change over time. Making the software able to change the mesh or “block resolution” over the course of the simulation is called adaptive mesh refinement (AMR).
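
To make that concrete, here is a toy Python sketch of an AMR-style sensor (a deliberately simplified stand-in; real codes use far richer error estimators). It flags the cells where the flow changes fastest at a given time step, which is exactly where you would refine the mesh:

```python
import numpy as np

def flag_cells_for_refinement(velocity: np.ndarray, dx: float,
                              threshold: float) -> np.ndarray:
    """Flag 1D cells whose velocity gradient exceeds a threshold.

    Toy AMR sensor: refine where the flow changes fastest,
    coarsen where it is smooth.
    """
    grad = np.abs(np.gradient(velocity, dx))
    return grad > threshold

# Smooth flow with one sharp feature (think of a thin shear layer):
x = np.linspace(0.0, 1.0, 100)
u = np.tanh((x - 0.5) / 0.02)  # steep jump near x = 0.5
flags = flag_cells_for_refinement(u, dx=x[1] - x[0], threshold=5.0)
print(f"{flags.sum()} of {flags.size} cells flagged, all near the jump")
```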

In short, adaptive mesh refinement means changing the size, shape, and distribution of your blocks, or mesh, over both the space and the time of your simulation. Being able to do this can save a great deal of computational work while maximizing the relevance and accuracy of your data about the fluid flow.

Parallelization

What does parallelization have to do with simulations run on computers, you ask? If you think of your mesh as made up of blocks, each representing an instance of your chosen fluid flow equations, you have hundreds of thousands or millions of equations being calculated at every time step of your simulation. If one person had to do all of this, it would take far more time than if each block, or small cluster of blocks, were assigned to a member of a whole team of people who calculated each time step in parallel and shared answers with each other before moving on to the next time step. In fact, this idea of a team of people working in parallel to simulate something over time was the original conceptualization of modern CFD, imagined by Lewis Fry Richardson in 1922 as his fantastical forecast factory[6].

It’s the same concept with computers, except you replace “people” with processors, cores, or threads, depending on the size and scale of what you’re discussing. Parallelization, then, is about how the computational work is distributed across supercomputer hardware. It requires writing the software so that it can delegate work across hundreds or thousands of processors and finish calculations in proportion to the number of processors available. Ideally, more processors would translate directly into faster results, but as anyone who has managed a large team knows, more hands don’t always make lighter work. Sometimes they just make more meetings and confusion. Parallelization is particularly important as supercomputers get larger and use more graphics processing units (GPUs), because GPUs are especially good at churning quickly through repetitive tasks. In comparison, the central processing units (CPUs) from which older supercomputers were built are much better at making decisions, such as “if x, then y, until z is true”. This shift from exclusively CPU supercomputers to mixed CPU-and-GPU supercomputers is a large part of why supercomputers are so much faster, and also why we have to spend so much time rewriting and optimizing software. Supercomputers are undergoing a massive “reorg”, and it takes time to do that in a way that makes efficient use of the new hardware “personnel”.
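
Here is what that delegation looks like in a minimal Python sketch using MPI, the standard message-passing system on supercomputers (this assumes the mpi4py package; the block count and the dummy “solve” are made up for illustration):

```python
# Run with e.g.: mpiexec -n 4 python decompose.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # this process's id: one "team member"
size = comm.Get_size()  # how many team members are working in parallel

# Split one million mesh blocks as evenly as possible across the team.
n_blocks = 1_000_000
my_blocks = np.array_split(np.arange(n_blocks), size)[rank]

# Each process "solves" only its own blocks (a dummy calculation here)...
local_result = float(np.sum(np.sin(my_blocks)))

# ...then everyone shares answers before moving on to the next time step.
total = comm.allreduce(local_result, op=MPI.SUM)
if rank == 0:
    print(f"{size} processes combined a result of {total:.4f}")
```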

Dynamic Load Balancing

Now, you might ask: what happens when your parallelization meets adaptive mesh refinement, and some of your “people”, or processors, suddenly find far more “blocks” to calculate in their assigned region than in previous time steps? If the whole team had to wait on them, many other processors would sit idle while they crunched through their extra work. This is precisely what dynamic load balancing (DLB) fixes. It’s the necessary flip side of adaptive mesh refinement. If processors find themselves falling behind the group, DLB steps in like a middle manager to reallocate work so that the team can stay on schedule and make the most efficient possible use of the available hardware. If this sounds a little familiar, it’s because DLB is part of what Marta talked about in her PeopleOfCEEC profile.
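
A toy version of that middle manager might look like the following Python sketch (a simple greedy scheme for illustration only; it is not the DLB algorithm used in any CEEC code):

```python
def rebalance(block_costs: dict[int, list[float]]) -> dict[int, list[float]]:
    """Greedy load-balancing sketch: pool every block's estimated cost,
    then hand blocks back out so the heaviest remaining block always
    goes to the currently least-loaded worker."""
    workers = list(block_costs)
    pool = sorted((c for costs in block_costs.values() for c in costs),
                  reverse=True)
    new_assignment: dict[int, list[float]] = {w: [] for w in workers}
    loads = {w: 0.0 for w in workers}
    for cost in pool:
        w = min(loads, key=loads.get)  # least-loaded worker gets the block
        new_assignment[w].append(cost)
        loads[w] += cost
    return new_assignment

# After AMR, worker 0 is swamped while workers 1 and 2 are nearly idle:
before = {0: [5.0, 5.0, 4.0, 4.0], 1: [1.0], 2: [1.0]}
after = rebalance(before)
print({w: sum(c) for w, c in after.items()})  # loads: (18, 1, 1) -> (6, 6, 8)
```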

Mixed Precision

Last for today is mixed precision, which we’ve explained in an earlier article. In short, it means designing an algorithm to adjust the number of bits used to store numbers during the simulation, so that each number is stored with the minimum number of bits that doesn’t compromise accuracy, saving time wherever possible. Numbers are stored with the normal, or even a higher, number of bits only when needed: when they get extremely large or extremely small, which can often happen during CFD simulations. But whenever the numbers are unremarkably mid-sized enough to be stored with fewer bits than normal without compromising accuracy, the algorithm is smart enough to do that too.
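
Here is a generic illustration of why the number of bits matters (not CEEC’s actual algorithm, just the underlying effect): adding up a million numbers entirely in 32-bit floats visibly drifts, while promoting only the running total to 64 bits keeps the answer accurate.

```python
import numpy as np

# Mixed-precision sketch: the bulk data lives in cheap 32-bit floats,
# but the error-prone accumulation is promoted to 64 bits, where the
# extra bits are actually needed. True answer: 1_000_000 * 0.1 = 100000.
values = np.full(1_000_000, np.float32(0.1))  # 4 bytes per number

low = np.float32(0.0)
for v in values:  # slow explicit loop, mimicking step-by-step accumulation
    low += v      # every intermediate total is rounded to 32 bits
high = values.sum(dtype=np.float64)  # same data, 64-bit accumulator

print(f"float32 accumulator: {low:.2f}")   # noticeably off from 100000
print(f"float64 accumulator: {high:.2f}")  # ~100000.00
```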

The ongoing work of adjusting CFD codes to be as detailed and efficient as technologically possible is what CEEC will spend the coming years doing. Now that we have a common understanding of what CFD is, stay tuned for future articles on the anatomy of CFD codes and how they work, key terms like the equations and numbers mentioned above and in Part I, and computational concepts like uncertainty quantification, fault tolerance, turbulence closure models, and data compression.

Until next time, you can always find us on social media!

References