HPC Guru (Twitter)
RT @thoefler: Humanity is transitioning from the information age to the age of computing driven by #HPC. Soon, economic power will be with the ones who compute fastest and cheapest.
#AI and #LLMs are big applications, but simulations and scientific discovery will equally benefit. https://twitter.com/TheOfficialACM/status/1820461030587220193#m
HPC Guru (Twitter)
#RAG enhances the accuracy of #LLMs by retrieving information from external sources
This blog post explores the impact of increased context length on the quality of RAG applications
https://www.databricks.com/blog/long-context-rag-performance-llms
#GenAI via @NaveenGRao
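The post above describes the core RAG pattern: retrieve relevant external passages, then prepend them to the prompt so the model answers from retrieved context rather than parametric memory alone. A minimal sketch of that pattern, using a toy bag-of-words retriever; the corpus, scoring, and prompt template here are illustrative assumptions, not from the Databricks post:

```python
# Minimal RAG sketch: rank documents by cosine similarity of bag-of-words
# vectors, then build a context-grounded prompt from the top-k passages.
from collections import Counter
import math

CORPUS = [
    "The Eiffel Tower is 330 metres tall and located in Paris.",
    "Retrieval-augmented generation grounds model output in external documents.",
    "Long-context models can ingest many retrieved passages at once.",
]

def bow(text):
    """Bag-of-words term counts for a text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two Counter vectors."""
    num = sum(a[t] * b[t] for t in a)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, k=2):
    """Return the k corpus passages most similar to the query."""
    q = bow(query)
    return sorted(CORPUS, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def build_prompt(query, k=2):
    """Assemble a prompt that grounds the answer in retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How tall is the Eiffel Tower?"))
```

A production system would swap the bag-of-words scorer for dense embeddings and pass the assembled prompt to an LLM; the blog post's question is how quality changes as more retrieved context fits into that prompt.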
HPC Guru (Twitter)
SigLLM: In a new study, @MIT researchers found that #LLMs hold the potential to be more efficient anomaly detectors for time-series data
Importantly, these pretrained models can be deployed right out of the box
https://news.mit.edu/2024/researchers-use-large-language-models-to-flag-problems-0814
#AI #GenAI
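For a pretrained LLM to inspect a time series "out of the box," the numeric signal must first be rendered as text. A hedged sketch of one such serialization step, in the spirit of the SigLLM work; the scaling scheme and prompt framing are illustrative assumptions, not the paper's exact recipe:

```python
# Sketch: quantize a float time series to scaled integers and serialize it as
# a comma-separated string, so a text-only pretrained model can consume it.

def serialize(series, decimals=2):
    """Round each value to `decimals` places and encode as scaled integers."""
    scale = 10 ** decimals
    return ",".join(str(round(x * scale)) for x in series)

def deserialize(text, decimals=2):
    """Invert serialize(): parse integers and rescale back to floats."""
    scale = 10 ** decimals
    return [int(tok) / scale for tok in text.split(",")]

signal = [0.10, 0.12, 0.11, 9.75, 0.13]  # 9.75 is an obvious spike
encoded = serialize(signal)
prompt = f"Here is a sensor reading sequence: {encoded}. List any anomalous values."
print(prompt)
```

The model would then be prompted (or asked to forecast the sequence) and values it flags, or fails to predict, are treated as anomaly candidates.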
HPC Guru (Twitter)
Intro to #AI-driven #Science on #Supercomputers
@argonne_lcf hosts a series of courses on understanding the fundamentals of #LLMs and their scientific applications
Register by Sept 29
https://events.cels.anl.gov/event/538/
#HPC
insideHPC.com (Twitter)
HPC News Bytes 20241014: AMD Rollout, Foxconn’s Massive AI HPC, AI Drives Nobels, Are LLM’s Intelligent?
wp.me/p3RLHQ-oHk
@AMDServer @foxconniai @NVIDIAAI @NVIDIAHPCDev @HPCpodcast @NobelPrize @AIatMeta @AMDGPU_ #HPC #AI #HPCAI #LLMs
High-Performance Computing News Analysis | insideHPC
HPC News Bytes 20241014: AMD Rollout, Foxconn’s Massive AI HPC, AI Drives Nobels, Are LLM’s Intelligent?
A good mid-October morn to you! Here’s a brief (6:30) run-through of developments from the world of HPC-AI, including: AMD's products rollout, [...]
HPC Guru (Twitter)
Everything you always wanted to know about large language models for science (but were afraid to ask)
@argonne researchers explain what #LLMs are and how they’re shaping the future of science
https://www.anl.gov/article/everything-you-always-wanted-to-know-about-large-language-models-for-science-but-were-afraid-to-ask
https://invidious.poast.org/watch?v=f6xKtV86k3g
#AI #LLM
insideHPC.com (Twitter)
Check out Eviden's new insights on interconnects: "BullSequana eXascale Interconnect V3: Intelligent Network Management Accelerates GPU Performance in AI-HPC"
wp.me/p3RLHQ-oJF
@Eviden_Compute @Evidenlive @Atos #HPC #HPCAI #AI #LLMs #GenerativeAI
High-Performance Computing News Analysis | insideHPC
BullSequana eXascale Interconnect V3: Intelligent Network Management Accelerates GPU Performance in AI-HPC
[SPONSORED GUEST ARTICLE] While the compute power of GPUs has grown significantly..., networking infrastructure has not kept up at the same rate. Even [...]
insideHPC.com (Twitter)
Check out Lenovo's article: ‘It’s Vertical’: Lenovo’s New Rack Server Chassis Turns HPC-AI Liquid Cooling on its Edge
wp.me/p3RLHQ-oKq
@Lenovo @Lenovodc #HPC #AI #HPCAI #LLMs #GenerativeAI
High-Performance Computing News Analysis | insideHPC
‘It’s Vertical’: Lenovo’s New Rack Server Chassis Turns HPC-AI Liquid Cooling on its Edge
[SPONSORED GUEST ARTICLE] The first thing you notice about the N1380 is that the servers stand on their sides and are stacked vertically within a rack. [...]
HPC Guru (Twitter)
RT @Livermore_Comp: Tuesday is #LLNLatSC BoF day at #SC24 🐦
linear #algebra, #LLMs, #science comms (yours truly is giving a talk!), skill development, dynamic workloads, HPSF, Lustre, #HPC #software tools, & more
https://computing.llnl.gov/sc24-event-calendar
HPC Guru (Twitter)
RT @DDNStorage: ✨ Day 2 at #SC24 was incredible! We had insightful conversations with customers, partners, and the media about driving innovation in AI & HPC.
These exchanges fuel our passion to push boundaries and deliver solutions that matter.
Let’s keep the momentum going! 💪
Want to chat? Book some time with our SMEs: https://www.ddn.com/sc24-meeting-registration
#AI #ArtificialIntelligence #ML #MachineLearning #LLMs #tech #data #DataStorage #DataCenters #DataAnalytics #innovation #SC24 @AlexbAlex @jswaroop
HPCwire (Twitter)
The OSI’s new Open AI definition sparks debate by stopping short of requiring open data for #LLMs. Explore the implications for #ArtificialIntelligence, #MachineLearning, and #OpenSource innovation.
Read more: ow.ly/k2p750U8M5f
HPCwire (Twitter)
The OSI’s Open AI definition raises questions by omitting open data requirements for #LLMs. A pivotal moment for #ArtificialIntelligence, #MachineLearning, and #OpenSource development. #HPCwire
Read more: ow.ly/IKvh50U8M5k
insideHPC.com (Twitter)
MLCommons Launches LLM Safety Benchmark
wp.me/p3RLHQ-oMT
@MLCommons #LLM #LLMs #AI #AIbenchmark #HPCAI #HPC
High-Performance Computing News Analysis | insideHPC
MLCommons Launches LLM Safety Benchmark
Dec. 4, 2024 — MLCommons today released AILuminate, a safety test for large language models. The v1.0 benchmark – which provides a series of safety [...]
HPCwire (Twitter)
MLCommons Launches AILuminate Benchmark to Measure Safety of LLMs
ow.ly/1bW950Ulmen #MLCommons #LLMs #HPC
HPCwire
MLCommons Launches AILuminate Benchmark to Measure Safety of LLMs
SAN FRANCISCO, Dec. 4, 2024 — MLCommons today released AILuminate, a first-of-its-kind safety test for large language models (LLMs). The v1.0 benchmark – which provides a series of safety grades for […]
HPCwire (Twitter)
Discover how the NVIDIA GB200 NVL72 and QCT Platform on Demand (POD) address challenges like training #LLMs and efficient inter-GPU communication.
ow.ly/V30S50UsTjt
#GenAI
HPC Guru (Twitter)
RT @thoefler: From #LLMs 🤖 to Reasoning Language Models 🧠 Three Eras in the Age of Computation!
🔥 Progress in #AI and #Computing 🎥 https://www.youtube.com/watch?v=NFwZi94S8qc
💡 Combining the best knowledge databases (#LLM) with the best strategy play (#RL) will be only limited by computational cost 🚀 #HPC
HPCwire (Twitter)
While large language models are great for human language tasks, quantitative AI is tuned for complex tasks in science and healthcare, pushing the boundaries of what’s possible in these fields. ow.ly/83IL50UTFeV #LLMs #artificialintelligence #quantitativeAI
HPC Guru (Twitter)
.@ylecun: If you are interested in human-level #AI, don't work on #LLMs https://twitter.com/rohanpaul_ai/status/1888345605434716312#m
HPCwire (Twitter)
A recent collaborative effort by researchers from #MIT and other schools has introduced a new AI approach that can rapidly compress #LLMs without a significant loss of quality. ow.ly/WuSa50VF2zb #artificialintelligence
HPC Guru (Twitter)
Harnessing #LLMs for scientific computing: @argonne researchers use large language models to tackle challenges and support real-world science
https://www.anl.gov/mcs/article/harnessing-llms-for-scientific-computing
#AI #HPC