HPC & Quantum
Atos HPC Big Data (Twitter)

Jean-Pierre Panziera is pleased to be speaking about “Optimizing Computational Fluid Dynamics workloads with BullSequana XH3000 and @intel SapphireRapids+HBM” at Intel's booth, #2428, at #SC22. Join the conversation Nov. 16th at 12:30 PM CT.
#HPCAccelerates #BullSequana #CFD
HPC Guru (Twitter)

RT @ExaFoam: The #exaFOAM project mobilises a highly capable consortium of 12 partners.

The project is co-funded by @EuroHPC_JU and national funding bodies in France, Germany, Spain, Italy, Croatia, Greece and Portugal.

https://exafoam.eu

@openfoam #HPC #CFD https://twitter.com/ExaFoam/status/1640343724596121602/photo/1
HPC Guru (Twitter)

👏👏 Great accomplishment, Dr. Moritz Lehmann (@ProjectPhysX), and an inspiration for others!

#HPC #CFD #GPU #OpenCL #FluidX3D
-----------
@ProjectPhysX:
5 years ago I had this wild idea to write my own #CFD software from scratch in #OpenCL. I wanted to know how fluid simulations work, and make them ridiculously fast on any #GPU. Today #FluidX3D has 1.4k stars on #GitHub: https://t.co/c92kURQxiH

how it started / how it's going https://t.co/aoJCKSR3sY
HPC Guru (Twitter)

RT @ProjectPhysX: How fast are #HPC CPUs in #CFD? I benchmarked @FluidX3D on Sapphire Rapids and Ice/Cascade/Sky Lake on @intelhpc DevCloud. SPR-HBM (#IntelMaxSeries 9480) in cache mode is slower than non-HBM SPR (8480+) in #OpenCL. But both beat the EPYC 9654. GTX 1080 Ti for scale. 🖖😇🔥
🧵1/3 https://twitter.com/ProjectPhysX/status/1663942677283450880/photo/1
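
For readers wondering how a GPU code ends up benchmarked on CPUs at all: OpenCL treats CPUs as compute devices too, provided a CPU runtime (e.g. Intel's) is installed. Below is a minimal sketch of how one might list the available CPU devices before such a run, assuming the pyopencl package and a CPU OpenCL runtime are present (not part of the original thread):

```python
# Minimal sketch: enumerate OpenCL CPU devices, e.g. on Intel DevCloud,
# before running an OpenCL benchmark on Sapphire Rapids / Ice Lake / EPYC.
# Assumes pyopencl plus a CPU OpenCL runtime are installed.
import pyopencl as cl

for platform in cl.get_platforms():
    for dev in platform.get_devices():            # all device types
        if dev.type & cl.device_type.CPU:         # keep only CPU devices
            print(f"{platform.name}: {dev.name}")
            print(f"  compute units : {dev.max_compute_units}")
            print(f"  global memory : {dev.global_mem_size / 2**30:.0f} GiB")
```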
HPC Guru (Twitter)

RT @ProjectPhysX: #IntelMaxSeries SPR-HBM CPUs in HBM-only mode could be super interesting for #CFD in #Formula1, where #GPU compute is not allowed. 🖖😎🏎️
🧵3/3
HPC Guru (Twitter)

RT @ProjectPhysX: Over the weekend I got to test #FluidX3D on the world's largest #HPC #GPU server, #GigaIOSuperNODE. Here is one of the largest #CFD simulations ever: the Concorde for 1s at 300km/h landing speed, at a resolution of 40 *billion* cells. 33h runtime on 32 @AMDInstinct MI210 GPUs, 2TB VRAM.
🧵1/5
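
As a back-of-the-envelope check on those figures (my arithmetic, not from the thread): 32 MI210 cards at 64 GB HBM2e each do add up to 2 TB of VRAM, which leaves a budget of roughly 55 bytes per lattice cell for the 40-billion-cell domain.

```python
# Plausibility check derived from the quoted figures (not from the original post).
cells      = 40e9                     # 40 billion lattice cells
gpus       = 32                       # AMD Instinct MI210 cards
vram_total = gpus * 64 * 2**30        # 64 GiB HBM2e per MI210

print(f"total VRAM     : {vram_total / 2**40:.1f} TiB")        # ~2.0 TiB
print(f"bytes per cell : {vram_total / cells:.0f}")            # ~55 B/cell budget
print(f"cells per GPU  : {cells / gpus / 1e9:.2f} billion")    # 1.25 billion
```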
HPC Guru (Twitter)

Coherent file format eliminates bottlenecks in #OpenFOAM workflows

In a large-scale simulation using more than 500,000 cores on HLRS’s #supercomputer Hawk, scientists successfully tested this new approach

hlrs.de/news/detail/coherent…

#HPC #CFD @HLRS_HPC
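
The bottleneck here is the classic file-per-process pattern: at 500,000+ MPI ranks, every output step would otherwise scatter the solution across hundreds of thousands of small files. The sketch below is not OpenFOAM's coherent format itself, merely a generic mpi4py illustration of the underlying idea, collective writes into one shared file at per-rank offsets (assumes mpi4py, an MPI-IO-capable filesystem, and equal partition sizes for simplicity):

```python
# Generic illustration of single-shared-file collective output; NOT OpenFOAM's
# actual coherent-format implementation. Run with e.g.: mpirun -np 4 python io_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# This rank's partition of a field (equal sizes assumed for the offset math).
local = np.full(1_000_000, rank, dtype=np.float64)

# File-per-rank would be: np.save(f"field_{rank}.npy", local)  -> one file per rank.
# Instead, all ranks write collectively into one shared file at their own offset.
fh = MPI.File.Open(comm, "field.bin", MPI.MODE_CREATE | MPI.MODE_WRONLY)
fh.Write_at_all(rank * local.nbytes, local)   # collective write, single file
fh.Close()
```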
HPC Guru (Twitter)

RT @RandlesLab: Kicking off @Supercomputing with a talk assessing performance portability on different GPU-based architectures and one on in situ visualization of cellular models.

#p3hpc #isav23 #hpc #fsi #cfd #GPU
HPC Guru (Twitter)

RT @RandlesLab: Come see what our @DukeEngineering lab has been up to at @Supercomputing! We have a number of talks and posters being presented this week showcasing advances in #CFD and #FSI. I am excited to see the hard work come to fruition. Check it out! And keep in mind -- we're hiring!
HPC Guru (Twitter)

RT @RandlesLab: Excited to announce that our @DukeEngineering lab has been awarded one of @doescience's INCITE awards. Looking forward to using the #aurora and #frontier supercomputers for developing #digitaltwins of microfluidic devices and assessing whole blood characteristics.

#hpc #cfd
HPC Guru (Twitter)

RT @Dr_NeilA: Just wanted to share a cool video of a 1B-cell WMLES #CFD simulation of a complete NASA high-lift aircraft (NASA CRM) that I ran on @awscloud using a new code by startup Volcano Platforms (led by ex-NASA Ames Branch Chief Cetin Kiris). It's based on a Cartesian immersed-boundary method that is very #HPC efficient.

The whole simulation runs on AWS ParallelCluster using a single Amazon EC2 g5.48xlarge node (8x NVIDIA A10G) in only 4 days for ~$880. To put that in perspective, that's roughly on par with a mesh-converged RANS simulation, which is pretty impressive given that the correlation to experimental data is so far looking very good indeed.

You can learn more about this work at the 5th @aiaa high-lift prediction workshop website (as well as look through the excellent work of other participants). Full details will be discussed at the upcoming AIAA Aviation HLPW5 workshop, and hopefully an Aviation2024 paper too (abstract submitted). hiliftpw.larc.nasa.gov/
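
Taking the quoted numbers at face value (my arithmetic, not from the tweet): a 4-day run for ~$880 on one g5.48xlarge works out to roughly $9 per node-hour, or about $1.15 per A10G GPU-hour across the node's 8 GPUs.

```python
# Cost breakdown derived purely from the figures quoted in the post.
days, total_usd, gpus = 4, 880, 8               # one g5.48xlarge node, 8x A10G

hours = days * 24
print(f"node-hours        : {hours}")                          # 96
print(f"USD per node-hour : {total_usd / hours:.2f}")           # ~9.17
print(f"USD per GPU-hour  : {total_usd / hours / gpus:.2f}")    # ~1.15
```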
HPCwire (Twitter)

Oak Ridge and Georgia Tech Push Frontier to New Scale for CFD Research
ow.ly/1rtf50WuX0Z #CFD #HPC