Hugging Face (Twitter)
RT @mikonvergence: Major TOM @GoogleDeepMind's AlphaEarth Embeddings are now on @huggingface! 🚀
A new 6 TB prototype dataset for the community. Get it here:
#EarthEngine #AlphaEarth
huggingface.co
Major-TOM/Core-AlphaEarth-Embeddings · Datasets at Hugging Face
Hugging Face (Twitter)
RT @multimodalart: Qwen Image LoRAs are growing fast on the @huggingface Hub, and now they can be inferenced via @FAL on the API or directly via the model page
Hugging Face (Twitter)
RT @cgeorgiaw: 🔥 The future of science just dropped.
Shanghai AI Lab just unveiled Intern-S1, a scientific multimodal foundation model that is transforming how we discover molecules and reason about the natural world.
It's beating o3 and Gemini-Pro, too 😲
A quick breakdown 🧵
Hugging Face (Twitter)
RT @thibaudfrere: 🎉 New chapter: just started at @huggingface!
Excited to help science teams bring their ML research to life through interactive demos and visualizations.
Ready to dive in!
Hugging Face (Twitter)
RT @abidlabs: Thanks to Saba from the XetHub team at @huggingface, Trackio now supports logging images. Again, completely for free!
What should we add next?
Hugging Face (Twitter)
RT @thibaudfrere: 🚀 First week at @huggingface: shipped improvements to The Ultra Scale Playbook!
✅ Dark mode
✅ Some mobile responsiveness
✅ Performance fixes
Your complete guide to scaling #LLMs in 2025 👇 https://huggingface.co/spaces/nanotron/ultrascale-playbook
Hugging Face (Twitter)
RT @romainhuet: You know you’ve been deep in open model collabs with @huggingface when 🤗 is still the top emoji on your keyboard!
Hugging Face (Twitter)
RT @Ali_TongyiLab: Long live open source! https://twitter.com/ArtificialAnlys/status/1958712568731902241#m
Hugging Face (Twitter)
RT @ClementDelangue: We don’t give nearly enough credit to the people and organizations who build and share open AI datasets.
In fact, I’d argue they matter even more than open models:
- they’re foundational, enabling hundreds of different models
- they remove not only technical but also legal bottlenecks for democratization of AI building
- their impact usually lasts longer, since datasets are less tied to the latest training fad.
Let’s celebrate the teams who are sharing open datasets today. Who are your favorite ones? https://twitter.com/gui_penedo/status/1958913881281069119#m
Hugging Face (Twitter)
RT @lukas_m_ziegler: Imagine having a ping pong robot! 🏓
Researchers and developers building physical AI: meet Reachy 2 from @pollenrobotics, an open-source, humanoid robot for real-world experimentation.
It’s a bimanual mobile manipulator: each 7-DOF arm mimics human proportions and can lift up to 3 kg, giving dexterity for object handling.
It can be controlled with Python and ROS2 Humble, or go straight into VR teleoperation: use a headset to move Reachy’s arms, hands, and head, and see through its cameras as if you’re in the robot’s own body.
Want it to move around? A mobile base with three omnidirectional wheels, rich sensors, and LiDAR lets Reachy 2 navigate and explore its surroundings smoothly. 🗺️
Under the hood, it’s powered by a CPU system that’s ready for machine learning, perfect for loading AI frameworks and testing new models from @huggingface directly on the robot.
Keep making robots more and more accessible, Pollen team!
... and...
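A base with three omnidirectional wheels, as described above, maps a desired body velocity (vx, vy, ω) to individual wheel speeds with simple kinematics. A minimal NumPy sketch of that mapping — the wheel angles and base radius here are illustrative placeholders, not Reachy 2's actual geometry:

```python
import numpy as np

def omniwheel_speeds(vx, vy, omega, wheel_angles, base_radius):
    """Map a desired body velocity (vx, vy, omega) to rim speeds for
    tangentially mounted omniwheels placed at the given angles."""
    speeds = []
    for theta in wheel_angles:
        # component of body motion along the wheel's drive direction,
        # plus the contribution of spinning about the base center
        speeds.append(-np.sin(theta) * vx + np.cos(theta) * vy
                      + base_radius * omega)
    return np.array(speeds)

# three wheels 120° apart, 0.15 m from the center (hypothetical numbers)
angles = np.deg2rad([0, 120, 240])
print(omniwheel_speeds(0.0, 0.0, 1.0, angles, 0.15))  # pure spin: all wheels equal
```

A sanity check on the geometry: pure rotation drives all three wheels identically, while pure translation makes the wheel speeds sum to zero because the drive directions are spaced 120° apart.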
Hugging Face (Twitter)
RT @pollenrobotics: 🏓 After mastering chess, xylophone and Jenga towers, Reachy 2 is now taking on ping-pong!
Low-latency teleoperation allows the operator to react quickly enough to return the ball.
Hugging Face (Twitter)
RT @RisingSayak: Wrote an FA3 attention processor for @Alibaba_Qwen Image using the 🤗 Kernels library. The process is so enjoyable!
Stuff cooking stuff coming 🥠
https://gist.github.com/sayakpaul/ff715f979793d4d44beb68e5e08ee067
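The linked gist carries the actual FA3 processor; as context, the core computation any attention processor performs is scaled dot-product attention, softmax(QKᵀ/√d)·V. A minimal NumPy sketch of that core — illustrative only, not the FlashAttention-3 kernel, which fuses these steps on-GPU:

```python
import numpy as np

def sdpa(q, k, v):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    # numerically stable softmax over the key axis
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# tiny smoke test: 4 query/key positions, head dim 8
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 8)) for _ in range(3))
out = sdpa(q, k, v)
print(out.shape)  # (4, 8)
```

Each row of the attention weights sums to 1, so attending over an all-ones value matrix returns exactly 1 — a handy invariant to check when swapping in a fused kernel.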
Hugging Face (Twitter)
RT @HuggingPapers: xAI just released Grok 2 on Hugging Face.
This massive 500GB model, a core part of xAI's 2024 work,
is now openly available to push the boundaries of AI research.
https://huggingface.co/xai-org/grok-2
Hugging Face (Twitter)
RT @elonmusk: The @xai Grok 2.5 model, which was our best model last year, is now open source.
Grok 3 will be made open source in about 6 months.
https://huggingface.co/xai-org/grok-2
Hugging Face (Twitter)
RT @eliebakouch: Grok2 is open source now and available on Hugging Face. I have 2 questions:
- wtf is `model_type: doge`
- wtf is this rope theta value
Hugging Face (Twitter)
RT @ClementDelangue: Grok 2 from @xai has just been released on @huggingface: https://huggingface.co/xai-org/grok-2
Hugging Face (Twitter)
RT @Teknium1: .@xai’s Grok 2 weights have been released on @huggingface
https://huggingface.co/xai-org/grok-2
Hugging Face (Twitter)
RT @rohanpaul_ai: Hunyuan 3D-2.1 turns any flat image into a studio-quality 3D model.
And you can do it on this @huggingface space for free.