Here are a few updates:

As part of the DrivAer aerodynamics simulations, work has been done to reduce the main memory requirements and to achieve results that are consistent across grid resolutions and with the experiments. The higher-order solver in particular should show its advantages for this application.

At this point, we would like to thank the team of the “1st OpenFOAM HPC Challenge (OHC-1)” for providing the meshes for this case in different resolutions.

For most validation cases, the mixed-precision solver achieved results identical to the double-precision solver, but unfortunately not for all of them. For this reason, this feature remains experimental for the time being. Its suitability for DrivAer in terms of memory requirements, accuracy, and computing time, especially with the higher-order scheme, still needs to be investigated.

In addition, there is now a new implementation of the komega turbulence model, which requires about 10% less memory and works better with the higher-order solver. This model provides improved values for pressure loss and flow resistance for all validation cases.

The VOF solver was tested for its ability to simulate condensation with subsequent droplet and film formation, taking surface tension and contact angle into account. The results were more than promising, including in direct comparison with another CFD code.
The summer holidays are probably over everywhere now, and I hope everyone has settled back into normal life again.

There have been some promising developments at WK, such as a new energy model for multiphase flows, which works very well for phase transitions (melting, solidification, evaporation, condensation, and cavitation).

Last week, I finally replaced all the fans in my workstation with quieter models. However, the fan control on my Supermicro motherboard didn’t like this very much and reacted in the well-known way by turning the fans up to maximum speed for a brief moment every 10 seconds. The solution was a fan hub (Noctua NA-FH1) together with a fan controller (Noctua NA-FC1). Now, for the first time, the machine runs almost silently even under full load. The whole thing cost maybe 200 euros, but it was probably my best investment in years :-)
There are more important updates to the solver; I need to find the time to list and show them here.
These are the main updates from July and August.
On cfd-online, there was a question about how the fan controller is connected to the motherboard so that it can receive information about the current temperature. The connection runs via one of the two colored cables that arrive at the upper-right corner of the motherboard.
The fan hub is connected directly to the power supply.
"Blast from the past - Part 3" - Solution Mapping

This is about transferring the flow solution from one computational grid to another. Mapping is often used so you don’t have to start the simulation all over again when the geometry changes slightly.

However, solution mapping to another grid can become problematic if the source solution is itself based on a very fine mesh. An example of such a situation is the DrivAer test case, where the meshes range from 64 million to 128 million to 256 million cells.

The traditional method employed by CFD solvers is to load the initial-guess solution completely and then perform the interpolation. In the above example, interpolating the solution for 256 million cells from 128 million cells is extremely difficult because of memory usage: not everybody has enough RAM to load both solutions simultaneously.

For this reason, Wildkatze only partially loads the solution from the initial-guess file at any one time. This allows initial-guess files of any grid size.
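To illustrate the idea of partial loading, here is a minimal Python sketch. It is purely illustrative and not Wildkatze's actual implementation; the function and parameter names are made up. The target mesh stays in memory, while the source solution is streamed chunk by chunk and accumulated onto the nearest target cells.

```python
# Hypothetical sketch (not Wildkatze's code): map a source solution onto a target
# mesh without ever holding the full source arrays in memory.  A KD-tree is built
# once over the target cell centres (already resident), the source file is streamed
# chunk by chunk, and each source sample is accumulated onto its nearest target
# cell and averaged at the end.
import numpy as np
from scipy.spatial import cKDTree

def map_solution_streaming(target_centres, source_chunks):
    """target_centres: (N, 3) array of target cell centres.
    source_chunks: iterable yielding (points, values) arrays, e.g. read
    chunk-wise from the initial-guess file."""
    tree = cKDTree(target_centres)
    n = len(target_centres)
    acc = np.zeros(n)       # accumulated source values per target cell
    cnt = np.zeros(n)       # number of source samples per target cell

    for points, values in source_chunks:     # only one chunk resides in RAM at a time
        _, idx = tree.query(points)          # nearest target cell for each source point
        np.add.at(acc, idx, values)
        np.add.at(cnt, idx, 1)

    mapped = np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
    # Target cells that received no source sample (cnt == 0) would still need a
    # fallback, e.g. a neighbouring cell value or a freestream value.
    return mapped
```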

The process is very simple. In the old model, the solution is saved with the command:

> save-initial-guess initialGuessFile

In the new model, after the model has been read in, this file is loaded instead of a restart file with the command:

> read-initial-guess initialGuessFile

The mapping process is followed by a few initial-guess iterations that carefully satisfy the velocity-divergence criterion of the mapped solution. This later results in very rapid convergence.

Otherwise, one usually observes large pressure fluctuations during the first (post-mapping) iterations, which destroy the mapped solution and often increase the number of iterations needed to converge.
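As an illustration of why such a clean-up matters, here is a generic textbook-style projection step on a uniform 2-D grid. It is not Wildkatze's actual initial-guess procedure; it merely shows one common way to damp the divergence that interpolation introduces before the first real time step.

```python
# Illustrative only: project a mapped velocity field towards divergence-free on a
# uniform 2-D grid.  A potential phi is obtained from div(u) with a few Jacobi
# sweeps (periodic boundaries via np.roll) and then subtracted as a gradient.
import numpy as np

def project_divergence_free(u, v, h, sweeps=200):
    """u, v: cell-centred velocity components on an (nx, ny) grid with spacing h.
    Returns a corrected (u, v) pair with reduced divergence."""
    div = np.gradient(u, h, axis=0) + np.gradient(v, h, axis=1)
    phi = np.zeros_like(div)
    for _ in range(sweeps):                  # Jacobi iterations for lap(phi) = div
        phi = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                      np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - h * h * div)
    u = u - np.gradient(phi, h, axis=0)      # u_corrected = u - grad(phi)
    v = v - np.gradient(phi, h, axis=1)
    return u, v
```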

Mapping in WK also works in parallel, with the associated speedup and without additional overhead.
New Variable Time-Step Scheme in Wildkatze

Wildkatze's ability to perform multiphase simulations at extremely high Courant numbers could cause problems in certain situations.

Although it is desirable for a CFD solver to be stable at higher Courant numbers, that is, at larger time steps, running at such large steps can negatively impact the accuracy of the results.

Of course, the accuracy of the solution is questionable if the simulation is run at time-step sizes corresponding to Courant numbers of 25 or more. This is especially true when sharp interface tracking has to be performed. However, Wildkatze's MaxGBCA VOF scheme can still produce a very sharp interface at high Courant numbers.

To address this issue, Wildkatze now allows you to specify a range of time-step sizes (minimum to maximum) and a Courant number threshold. Above this Courant number, the solver uses the minimum user-specified time-step size; towards lower Courant numbers, the maximum time-step size is used.

The key feature of the new method is that the time-step size is adjusted using a blending function to ensure a smooth transition. This is particularly important when simulating flows where surface tension is the driving force, as sudden changes in the time-step size disturb the solution too much.
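As an illustration only (the exact blending function used in Wildkatze is not given here), a smoothstep blend between the maximum and minimum time step as a function of the current Courant number could look like this; the transition width is an assumed parameter.

```python
# Hypothetical sketch of the idea: blend smoothly between the user-specified
# maximum time step at low Courant numbers and the minimum time step once the
# target Courant number is exceeded, so the step size never jumps abruptly.
def blended_time_step(co, co_target, dt_min, dt_max, width=0.5):
    """co: current maximum Courant number in the domain.
    co_target: Courant number above which dt_min should be used.
    width: relative width of the transition band (assumed parameter)."""
    lo = co_target * (1.0 - width)       # start blending below the target
    if co <= lo:
        return dt_max
    if co >= co_target:
        return dt_min
    x = (co - lo) / (co_target - lo)     # 0..1 across the transition band
    s = x * x * (3.0 - 2.0 * x)          # smoothstep: C1-continuous blend
    return dt_max + s * (dt_min - dt_max)
```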

... more text in the comments
A delayed St. Nicholas Day gift for our motorsport enthusiasts.
Pressure and acoustic monopole source for a velocity of 50 m/s, run with KOmega-DDES.
Every time I use ParaView for post-processing large models, I wonder why we don't use a framework like Wildkatze as a visualization server. Then we would already have automatic support for multiple cores using domain decomposition (on the fly), native support for polyhedra, and optimized algorithms for calculating derived quantities such as gradients.
Maybe someone out there is interested in such a project. We even considered making our code available to the PV developers.
We wish all our readers a Merry Christmas, whether you celebrate it now or in January 😊

If you have any wishes, suggestions, or questions, feel free to share them here. Think of it as a wish list for our very special Santa Claus.