HPC & Quantum
HPC Guru (Twitter)

Agree that the interconnect is key for #Exascale #supercomputing

@rolfhorn, does @Atos have benchmarks that compare #BXI
performance with @HPE_Cray #Slingshot & @NVIDIANetworkng #Infiniband?

#HPC #AI
-----------
@rolfhorn:
#Exascale entails an explosion of performance, of number of nodes and cores, of data volume and data movement. The #interconnect is a key enabling technology for exascale #supercomputing systems.

Learn more about Atos' #BXI #interconnect ➡️ https://t.co/NhBLFT6aEf
@AtosBigData https://t.co/wNLH17xiqD
HPC Guru (Twitter)

Please refresh my memory, @rolfhorn:

Did @Atos publish a comparison of #BXI performance versus @HPE_Cray #Slingshot and @NVIDIANetworkng HDR/NDR #Infiniband?

Thanks!

#SC21 #HPC #AI #Exascale #interconnect @AtosBigData
-----------
@rolfhorn:
#Exascale entails an explosion of performance, of number of nodes and cores, of data volume and data movement. The #interconnect is a key enabling technology for exascale #supercomputing systems.
Atos #BXI👉 https://t.co/NhBLFT6aEf
#SC21 @AtosBigData #BullSequana #ExascaleJourney https://t.co/7Jtfzunrlp
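[Editor's note: no public head-to-head numbers are linked in either thread. For context only, below is a minimal sketch of the kind of two-node microbenchmark such a comparison would start from: an MPI ping-pong reporting latency and bandwidth per message size. Vendors more commonly publish OSU micro-benchmark and application-level results; the mpi4py script, its message sizes, and iteration counts here are illustrative assumptions, not Atos' or HPE's methodology.]

# Illustrative two-rank MPI ping-pong (latency / bandwidth per message size).
# Run across two nodes on the fabric under test, e.g.:
#   mpirun -np 2 --map-by node python pingpong.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
iters = 1000

for size in (8, 4096, 1 << 20):               # 8 B, 4 KiB, 1 MiB messages
    buf = np.zeros(size, dtype=np.uint8)
    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(iters):
        if rank == 0:
            comm.Send([buf, MPI.BYTE], dest=1, tag=0)
            comm.Recv([buf, MPI.BYTE], source=1, tag=0)
        elif rank == 1:
            comm.Recv([buf, MPI.BYTE], source=0, tag=0)
            comm.Send([buf, MPI.BYTE], dest=0, tag=0)
    t1 = MPI.Wtime()
    if rank == 0:
        rtt = (t1 - t0) / iters                # round trip per iteration
        print(f"{size:>8} B  latency {rtt / 2 * 1e6:8.2f} us  "
              f"bandwidth {2 * size / rtt / 1e9:6.2f} GB/s")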
HPC Guru (Twitter)

RT @hpcnotes: Available now (in preview): #AI #Supercomputers in #Microsoft #Azure with Hopper H100 #GPUs and #InfiniBand #HPC interconnect https://t.co/v5KVpoP29b
HPC Guru (Twitter)

TSUBAME4.0 will be an @HPE_Cray XD6500 #supercomputer

- 240 nodes with:
  - 2x @AMD Genoa CPU
  - 4x @nvidia H100 #GPU
  - 768 GiB memory
- Interconnected with @nvidia Quantum-2 #InfiniBand
- 66 PF FP64 peak
- Operational in Spring 2024

#HPC #AI @HPE_News @hpe @tokyotech_en
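[Editor's note: the 66 PF figure lines up with simple peak arithmetic. A rough check, assuming (these rates are not in the tweet) about 67 TFLOPS FP64 Tensor Core peak per H100 SXM and roughly 5.4 TFLOPS FP64 per 96-core Genoa CPU:]

# Back-of-the-envelope FP64 peak for 240 nodes; per-device rates are
# assumptions, so the exact total depends on SKUs and clocks.
nodes = 240
gpu_pf = nodes * 4 * 67e12 / 1e15      # 960 H100s -> ~64.3 PF
cpu_pf = nodes * 2 * 5.4e12 / 1e15     # 480 Genoa CPUs -> ~2.6 PF
print(f"GPU {gpu_pf:.1f} PF + CPU {cpu_pf:.1f} PF = {gpu_pf + cpu_pf:.1f} PF peak")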
HPC Guru (Twitter)

Cisco does not have many nice things to say about #InfiniBand, except that it has excellent single-job performance on a cluster

Cisco targets #AI workloads with “Fully Scheduled #Ethernet”

#HPC #AI via @TheNextPlatform
-----------
@TheNextPlatform:
Cisco’s Silicon One G200 switch chip is an object lesson in how AI networking is different from HPC networking, and how you can make Ethernet compete with InfiniBand here.
https://t.co/WGXmsZoTVw
HPC Guru (Twitter)

Why @Meta is determined to make #Ethernet work for #AI

@Meta is one of the founding companies behind the @ultraethernet Consortium, a group of companies united, according to @TDaytonPM, against a common enemy: #Infiniband

nextplatform.com/2023/09/26/…

#HPC #Network
HPC Guru (Twitter)

On the #networking side, both #Infiniband and #Ethernet are going to progress from 400Gbps to 800Gbps in 2024 and then to 1.6Tbps in 2025

One thing missing from the roadmap is NVSwitch/NVLink

#HPC #AI via @Patrick1Kennedy
HPC Guru (Twitter)

.@nvidia is showing off its new #supercomputer, Eos, a 10,752 #H100 #GPU system connected via 400Gbps Quantum-2 #InfiniBand

Such a system would cost $400+ million on the open market

servethehome.com/nvidia-show…

#AI #HPC via @ServeTheHome
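[Editor's note: the $400M+ figure is consistent with back-of-the-envelope pricing. A sketch, assuming (prices are not from the article) roughly $30k per H100 on the open market plus about 30% on top for host servers, the Quantum-2 fabric, and storage:]

# Rough open-market cost estimate; both the unit price and the overhead
# factor are assumptions, not figures from the article.
gpus = 10_752
gpu_cost = gpus * 30_000            # ~$323M in GPUs alone
system_cost = gpu_cost * 1.3        # ~$419M with hosts, network, storage
print(f"GPUs ~${gpu_cost / 1e6:.0f}M, full system ~${system_cost / 1e6:.0f}M")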
HPC Guru (Twitter)

Interconnect Network: Needed for #AI at #scale

AMD extending access to Infinity Fabric to strategic partners to allow innovation

AMD backing #Ethernet as the #HPC #interconnect over #Infiniband: it's scalable and open, a nod to @ultraethernet
HPC Guru (Twitter)

Five years after @intel spun off its #OmniPath interconnect tech into Cornelis Networks (@CornelisHPC), its 400Gbps CN5000 line of switches and NICs is finally ready to do battle with its long-time rival, @nvidia's #InfiniBand

https://www.theregister.com/2025/06/09/omnipath_is_back/

#HPC #AI via @TheRegister