HPC Guru (Twitter)
.@TheNextPlatform on @CornelisHPC
https://www.nextplatform.com/2021/07/09/a-third-dialect-of-infiniband-in-the-works-again/
#HPC #AI #Network #Infiniband
The Next Platform
A Third Dialect Of InfiniBand In The Works – Again
The InfiniBand interconnect emerged from the ashes of a fight about the future of server I/O at the end of the last millennium, and instead of becoming
HPC Guru (Twitter)
Agree that the interconnect is key for #Exascale #supercomputing
@rolfhorn, does @Atos have benchmarks that compare #BXI
performance with @HPE_Cray #Slingshot & @NVIDIANetworkng #Infiniband?
#HPC #AI
-----------
@rolfhorn:
#Exascale entails an explosion of performance, of number of nodes and cores, of data volume and data movement. The #interconnect is a key enabling technology for exascale #supercomputing systems.
Learn more about Atos' #BXI #interconnect ➡️ https://t.co/NhBLFT6aEf
@AtosBigData https://t.co/wNLH17xiqD
HPC Guru (Twitter)
NVIDIA Quantum-2 400G Switches and ConnectX-7 at GTC Fall 2021
https://www.servethehome.com/nvidia-quantum-2-400g-switches-and-connectx-7-at-gtc-fall-2021/
#HPC #AI #Infiniband #GTC21 via @ServeTheHome
HPC Guru (Twitter)
Please refresh my memory, @rolfhorn:
Did @Atos publish a comparison of #BXI performance versus @HPE_Cray #Slingshot and @NVIDIANetworkng HDR/NDR #Infiniband?
Thanks!
#SC21 #HPC #AI #Exascale #interconnect @AtosBigData
-----------
@rolfhorn:
#Exascale entails an explosion of performance, of number of nodes and cores, of data volume and data movement. The #interconnect is a key enabling technology for exascale #supercomputing systems.
Atos #BXI👉 https://t.co/NhBLFT6aEf
#SC21 @AtosBigData #BullSequana #ExascaleJourney https://t.co/7Jtfzunrlp
insideHPC.com (Twitter)
NVIDIA Long-Haul InfiniBand at Purdue University - Extending Accelerated Research Across Campus
https://wp.me/p3RLHQ-n4U
@NVIDIAHPCDev @PurdueRCAC @LifeAtPurdue @NVIDIAAI #HPC #AI #infiniband
insideHPC
NVIDIA Long-Haul InfiniBand at Purdue University – Extending Accelerated Research Across Campus
[Sponsored Post] For data-driven researchers, the time-related expense of moving data from machines between data centers slows computation and causes [...]
insideHPC.com (Twitter)
RT @insideHPC: NVIDIA Long-Haul InfiniBand at Purdue University - Extending Accelerated Research Across Campus
https://wp.me/p3RLHQ-n4U
@NVIDIAHPCDev @PurdueRCAC @LifeAtPurdue @NVIDIAAI #HPC #AI #infiniband
HPC Guru (Twitter)
RT @hpcnotes: Available now (in preview): #AI #Supercomputers in #Microsoft #Azure with Hopper H100 #GPUs and #InfiniBand #HPC interconnect https://t.co/v5KVpoP29b
HPC Guru (Twitter)
.@Broadcom says its Jericho3-AI #Ethernet is better than @NVIDIA’s #Infiniband by ~10% on NCCL performance
https://t.co/Z6zCsyXke1
#AI #Network via @ServeTheHome
ServeTheHome
Broadcom Jericho3-AI Ethernet Switch Launched Saying NVIDIA Infiniband is Bad for AI
In launching the new Broadcom Jericho3-AI Ethernet switch chip, Broadcom is taking an overt swipe at NVIDIA's Infiniband for AI
HPC Guru (Twitter)
TSUBAME4.0 will be an @HPE_Cray XD6500 #supercomputer
o 240 nodes with:
o 2x @AMD Genoa CPU
o 4x @nvidia H100 #GPU
o 768 GiB memory
o Interconnected with @Nvidia Quantum-2 #InfiniBand
o 66PF FP64 peak
o Operational in Spring 2024
#HPC #AI @HPE_News @hpe @tokyotech_en
HPC Guru (Twitter)
Cisco does not have much that is nice to say about #InfiniBand, except that it delivers excellent single-job performance on a cluster
Cisco targets #AI workloads with “Fully Scheduled #Ethernet”
#HPC #AI via @TheNextPlatform
-----------
@TheNextPlatform:
Cisco’s Silicon One G200 switch chip is an object lesson in how AI networking is different from HPC networking, and how you can make Ethernet compete with InfiniBand here.
https://t.co/WGXmsZoTVw
HPC Guru (Twitter)
.@ultraethernet
#InfiniBand is controlled by one vendor, and the hyperscalers and cloud builders hate that, and it is not Ethernet, and they hate that
They want one protocol to scale to 1 million endpoints in a single network
https://t.co/8gfRUp0qgm
#HPC #AI via @TDaytonPM
The Next Platform
Ethernet Consortium Shoots For 1 Million Node Clusters That Beat InfiniBand - The Next Platform
Here we go again. Some big hyperscalers and cloud builders and their ASIC and switch suppliers are unhappy about Ethernet, and rather than wait for the IEEE to address issues, they are taking matters in their own hands to create what will ultimately become…
HPC Guru (Twitter)
Why @Meta is determined to make #Ethernet work for #AI
@Meta is one of the founding companies behind the @ultraethernet Consortium, a group of companies - according to @TDaytonPM - united against a common enemy: #Infiniband
nextplatform.com/2023/09/26/…
#HPC #Network
HPC Guru (Twitter)
On the #networking side, both #Infiniband and #Ethernet are going to progress from 400Gbps to 800Gbps in 2024 and then to 1.6Tbps in 2025
Something missing from the roadmap is the NVSwitch/NVLink roadmap
#HPC #AI via @Patrick1Kennedy
HPC Guru (Twitter)
.@nvidia is showing off its new #supercomputer, Eos, a 10,752 #H100 #GPU system connected via 400Gbps Quantum-2 #InfiniBand
Such a system would cost $400+ million on the open market
servethehome.com/nvidia-show…
#AI #HPC via @ServeTheHome
HPC Guru (Twitter)
Interconnect Network: Needed for #AI at #scale
AMD is extending access to Infinity Fabric to strategic partners to allow innovation
AMD backing #Ethernet as the #HPC #interconnect over #Infiniband - it's scalable, open - nod to @ultraethernet
HPC Guru (Twitter)
RT @vsoch: Happy 2025 from @FluxFramework! 🌀 I'm excited to share our newest tutorial - a fully automated build (packer) and deployment (Terraform) on #Microsoft #Azure for an #HPC cluster ready to go with #Infiniband!
https://www.youtube.com/watch?v=1WhJTKAu05o?si=_zlrIMba5sI4KtRz
This is the most fun of the series yet! 🪩
X (formerly Twitter)
V (@vsoch) on X
Happy 2025 from @FluxFramework! 🌀 I'm excited to share our newest tutorial - a fully automated build (packer) and deployment (Terraform) on #Microsoft #Azure for an #HPC cluster ready to go with #Infiniband!
https://t.co/WgNqfWy9hz
This is the most fun…
insideHPC.com (Twitter)
Check out DriveNets' new article: Seeking Ethernet Alternative to InfiniBand? Start with Performance!
wp.me/p3RLHQ-p5f
@drivenets #ethernet #InfiniBand #networking #HPC #AI #HPCAI
High-Performance Computing News Analysis | insideHPC
Seeking Ethernet Alternative to InfiniBand? Start with Performance!
[SPONSORED GUEST ARTICLE] When it comes to AI and HPC workloads, networking is critical. While this is well known already, the impact your networking [...]
HPC Guru (Twitter)
Five years after @intel spun off its #OmniPath interconnect tech into Cornelis Networks (@CornelisHPC), its 400Gbps CN5000 line of switches and NICs is finally ready to do battle with its long-time rival, @nvidia's #InfiniBand
https://www.theregister.com/2025/06/09/omnipath_is_back/
#HPC #AI via @TheRegister
insideHPC.com (Twitter)
Check out the new article from DriveNets: Re-Engineering Ethernet for AI Fabric
wp.me/p3RLHQ-p7E
@drivenets #Ethernet #Infiniband #AInetwork #network #AI #HPC
Inside HPC & AI News | High-Performance Computing & Artificial Intelligence
Re-Engineering Ethernet for AI Fabric
Ethernet wasn’t built with AI in mind. While cost-effective and ubiquitous, its best-effort, packet-based nature creates challenges in AI clusters... But [...]
HPCwire (Twitter)
InfiniBand Multilayered Security Protects Data Centers and AI Workloads
ow.ly/I0ro50Wo3OA #NVIDIA #InfiniBand #HPC
HPCwire
InfiniBand Multilayered Security Protects Data Centers and AI Workloads
July 10, 2025 — In today’s data-driven world, security isn’t just a feature—it’s the foundation. With the exponential growth of AI, HPC, and hyperscale cloud computing, the integrity of the network […]