Atos HPC Big Data (Twitter)
Jean-Pierre Panziera is pleased to be speaking about “Optimizing Computational Fluid Dynamics workloads with BullSequana XH3000 and @intel SapphireRapids+HBM” at Intel's booth, #2428, at #SC22. Join the conversation Nov. 16th at 12:30 PM CT.
#HPCAccelerates #BullSequana #CFD
insideHPC.com (Twitter)
Check out this article from Ansys: Enabling Innovation for CFD Simulations, Thanks to Ansys and AWS
https://wp.me/p3RLHQ-nH6
@ANSYS @awscloud @aws #HPC #modsim #CFD #CloudComputing #cloudHPC
High-Performance Computing News Analysis | insideHPC
Enabling Innovation for CFD Simulations, Thanks to Ansys and AWS
Using the integrated suite of Ansys solvers, including fluids, structural and composites simulation, Emirates Team New Zealand designed a craft that [...]
HPC Guru (Twitter)
RT @ExaFoam: The #exaFOAM project mobilises a highly-capable consortium of 12 partners.
The project is co-funded by @EuroHPC_JU and national funding bodies in France, Germany, Spain, Italy, Croatia, Greece and Portugal.
https://exafoam.eu
@openfoam #HPC #CFD https://twitter.com/ExaFoam/status/1640343724596121602/photo/1
HPC Guru (Twitter)
👏👏 Great accomplishment, Dr. Moritz Lehmann (@ProjectPhysX), and an inspiration for others!
#HPC #CFD #GPU #OpenCL #FluidX3D
-----------
@ProjectPhysX:
5 years ago I had this wild idea to write my own #CFD software from scratch in #OpenCL. I wanted to know how fluid simulations work, and make them ridiculously fast on any #GPU. Today #FluidX3D has ⭐1.4k on #GitHub: https://t.co/c92kURQxiH
how it started how it's going https://t.co/aoJCKSR3sY
insideHPC.com (Twitter)
Check out this article from Ansys: Unlock Industrial Cloud Transformation with Ansys and AWS
https://wp.me/p3RLHQ-nP1
@ANSYS @awscloud #HPC #CFD #simulation #engineeringsimulation
High-Performance Computing News Analysis | insideHPC
Unlock Industrial Cloud Transformation with Ansys and AWS
With increased global competition and rising customer expectations, industrial engineers are under incredible pressure to lower their expenses while [...]
HPC Guru (Twitter)
RT @ProjectPhysX: How fast are #HPC CPUs in #CFD? I benchmarked @FluidX3D on Sapphire Rapids and Ice/Cascade/Sky Lake on @intelhpc DevCloud. SPR-HBM (#IntelMaxSeries 9480) in cache mode is slower than non-HBM SPR (8480+) in #OpenCL. But both beat the EPYC 9654. GTX 1080 Ti for scale. 🖖😇🔥
🧵1/3 https://twitter.com/ProjectPhysX/status/1663942677283450880/photo/1
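For context: FluidX3D itself is C++/OpenCL, and running the same kernels on CPUs or GPUs mostly comes down to which OpenCL device you pick; because lattice-Boltzmann codes are memory-bandwidth bound, the HBM vs. DDR configurations dominate the comparison above. A minimal sketch (using pyopencl, my choice here, not part of FluidX3D) that lists the devices such a benchmark would see, with the properties that matter most:

    # Minimal sketch (pyopencl is an assumption, not part of FluidX3D):
    # enumerate OpenCL platforms/devices and print the memory size and
    # compute-unit count that largely determine LBM throughput.
    import pyopencl as cl

    for platform in cl.get_platforms():
        for dev in platform.get_devices():
            kind = "GPU" if dev.type & cl.device_type.GPU else "CPU/other"
            print(f"{platform.name} | {dev.name} ({kind}): "
                  f"{dev.global_mem_size / 2**30:.0f} GiB global memory, "
                  f"{dev.max_compute_units} compute units")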
HPC Guru (Twitter)
RT @ProjectPhysX: #IntelMaxSeries SPR-HBM CPUs in HBM-only mode could be super interesting for #CFD in #Formula1, where #GPU compute is not allowed. 🖖😎🏎️
🧵3/3
HPC Guru (Twitter)
RT @Arash_Hamzehloo: Very interesting talk by Petros Koumoutsakos @314159K at the @PASC_Conference in Davos🇨🇭!
#pasc23 #artificialintelligence #hpc #turbulence #cfd https://twitter.com/Arash_Hamzehloo/status/1673960641223172096/photo/1
HPC Guru (Twitter)
RT @ProjectPhysX: Over the weekend I got to test #FluidX3D on the world's largest #HPC #GPU server, #GigaIOSuperNODE. Here is one of the largest #CFD simulations ever, the Concorde for 1s at 300km/h landing speed. 40 *Billion* cells resolution. 33h runtime on 32 @AMDInstinct MI210, 2TB VRAM.
🧵1/5
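A rough back-of-envelope for why 40 billion cells fits in about 2 TB of pooled VRAM: FluidX3D's documented footprint for D3Q19 with FP16 storage is roughly 55 bytes per cell (treat that figure, and the round numbers below, as approximate):

    # Back-of-envelope memory estimate; approximate figures, not from the tweet.
    cells = 40e9                  # 40 billion lattice cells, as stated above
    bytes_per_cell = 55           # FluidX3D's documented D3Q19 + FP16 footprint (approx.)
    sim_tb = cells * bytes_per_cell / 1e12
    vram_tb = 32 * 64e9 / 1e12    # 32x MI210 with 64 GB VRAM each
    print(f"simulation: ~{sim_tb:.1f} TB, available VRAM: ~{vram_tb:.1f} TB")
    # ~2.2 TB vs ~2.0 TB: the same ballpark, and a tight fit, so the real run
    # likely used a slightly smaller grid or per-cell footprint than these
    # rounded numbers suggest.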
HPC Guru (Twitter)
RT @jarsivaud: @EvanKirstel @HPC_Guru @giga_io Check out the deets https://t.co/YOKzWYekHU
@AMD server and GPUs, no sw change needed because our engineering teams have done the hard work of integration.
Great for #CFD but also for the new buzzword: #genai 😆 Drop your PyTorch or TensorFlow code directly - "it just works!"
GigaIO
SuperNODE - GigaIO
GigaIO SuperNODE™: The World's First 32 GPU Single-node Supercomputer for Next-Gen AI and High Performance Computing
HPC Guru (Twitter)
Coherent file format eliminates bottlenecks in #OpenFOAM workflows
In a large-scale simulation using more than 500,000 cores on HLRS’s #supercomputer Hawk, scientists successfully tested this new approach
hlrs.de/news/detail/coherent…
#HPC #CFD @HLRS_HPC
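For context on the bottleneck: OpenFOAM's traditional output writes a directory of files per MPI rank, which does not scale to hundreds of thousands of cores, whereas a coherent format lets all ranks write collectively into shared files. The sketch below only illustrates that general single-shared-file idea with mpi4py collective I/O; it is not the HLRS implementation, and the file layout is made up for illustration.

    # Generic illustration of single-shared-file collective output (mpi4py);
    # NOT the actual OpenFOAM coherent-format implementation.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    n_local = 1_000_000                       # cells owned by this rank (made-up size)
    field = np.full(n_local, float(rank))     # stand-in for a field such as pressure

    fh = MPI.File.Open(comm, "field.bin", MPI.MODE_WRONLY | MPI.MODE_CREATE)
    offset = rank * n_local * field.itemsize  # contiguous slot per rank
    fh.Write_at_all(offset, field)            # collective write: one shared file,
                                              # no per-rank file explosion
    fh.Close()

In practice the per-rank offsets would come from a prefix sum of the (unequal) local cell counts rather than a fixed size per rank.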
HPC Guru (Twitter)
RT @RandlesLab: Kicking off @Supercomputing with a talk assessing performance portability on different GPU-based architectures and one on in situ visualization of cellular models.
#p3hpc #isav23 #hpc #fsi #cfd #GPU
HPC Guru (Twitter)
RT @RandlesLab: Come see what our @DukeEngineering lab has been up to at @Supercomputing! We have a number of talks and posters being presented this week showcasing advances in #CFD and #FSI. I am excited to see the hard work come to fruition. Check it out! And keep in mind -- we're hiring!
HPC Guru (Twitter)
RT @RandlesLab: Excited to announce that our @DukeEngineering lab has been awarded one of @doescience's INCITE awards. Looking forward to using the #aurora and #frontier supercomputers for developing #digitaltwins of microfluidic devices and assessing whole blood characteristics.
#hpc #cfd
HPC Guru (Twitter)
RT @Dr_NeilA: Just wanted to share a cool video of a 1B cell WMLES #CFD simulation of a complete NASA high-lift aircraft (NASA CRM) that I ran on @awscloud using a new code by startup Volcano Platforms (led by ex-NASA Ames Branch Chief Cetin Kiris) - It's based on a Cartesian immersed-boundary method that is very #HPC efficient. The whole simulation runs on AWS ParallelCluster using a single Amazon EC2 g5.48xlarge node (8x NVIDIA A10G) in only 4 days for ~$880. To put that in perspective, that's roughly the same as a mesh-converged RANS simulation, which is pretty impressive given the correlation to experimental data is so far looking very good indeed. You can learn more about this work at the 5th @aiaa high-lift prediction workshop website (as well as look through the excellent work of other participants). Full details will be discussed at the upcoming AIAA Aviation HLPW5 workshop & hopefully an Aviation2024 paper too (abstract submitted). hiliftpw.larc.nasa.gov/
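The quoted cost is easy to sanity-check from the figures in the tweet alone; the implied hourly rate below is an inference, not a number from the tweet:

    # Sanity check of the quoted run cost, using only figures from the tweet above.
    wall_hours = 4 * 24                      # ~4 days on a single g5.48xlarge node
    total_cost_usd = 880.0                   # as quoted
    implied_rate = total_cost_usd / wall_hours
    print(f"~{implied_rate:.2f} USD/hour")   # ~9.17 USD/hour; lower than typical
                                             # on-demand list pricing, consistent with
                                             # spot or committed-use discounts (assumption)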
HPC Guru (Twitter)
.@Cadence Unveils Millennium Platform: Industry’s First Accelerated #DigitalTwin Delivering Unprecedented Performance and Energy Efficiency
https://www.cadence.com/en_US/home/company/newsroom/press-releases/pr/2024/cadence-unveils-millennium-platformindustrys-first-accelerated.html
#HPC #AI #GPU #CFD #GenAI
insideHPC.com (Twitter)
Cadence Multiphysics HPC System to be Used by Honda R&D for 'Air Taxi' Design
wp.me/p3RLHQ-oiQ
@Cadence @Honda @nvidia @AMD #HPC #CFD #LES
High-Performance Computing News Analysis | insideHPC
Cadence Multiphysics HPC System to be Used by Honda R&D for ‘Air Taxi’ Design
Cadence Design Systems earlier this month released the news that Honda R&D Co. will use Cadence’s newly announced Millennium Enterprise Multiphysics [...]
HPC Guru (Twitter)
Performance gains while running #FluidX3D on @intel #Xeon6 - Moritz Lehmann
https://invidious.poast.org/watch?v=qH5cY2a6L-8
#HPC #CFD #SC24
insideHPC.com (Twitter)
insideHPC Vanguard: GE Aerospace's Stephan Priebe -- Pushing the Frontiers of Simulation
wp.me/p3RLHQ-oMW
@GEAerospacePA @ORNL @OLCFGOV #Frontier #HPC #CFD #Aerospace
High-Performance Computing News Analysis | insideHPC
insideHPC Vanguard: GE Aerospace’s Stephan Priebe — Pushing the Frontiers of Simulation
Dr. Stephan Priebe is a Senior Engineer in the Aerodynamics and Computational Fluid Dynamics Lab at GE Aerospace Research in Niskayuna, NY. He became [...]
HPCwire (Twitter)
Oak Ridge and Georgia Tech Push Frontier to New Scale for CFD Research
ow.ly/1rtf50WuX0Z #CFD #HPC