How would you rate the quality of content in this channel?
Anonymous Poll
Too spammy 🐥: 12%
Excellent 🚀: 50%
Can be improved 🫡: 31%
I don't know 😬🍿: 7%
A very nice video; a must-watch if you are into machine learning algorithms.
======
The moment we stopped understanding AI [AlexNet]
https://www.youtube.com/watch?v=UZDiGooFs54
======
Final rankings (as of July 17, 2024):
1️⃣ OpenAI (1,287)
2️⃣ Anthropic (1,271)
3️⃣ Google (1,267)
4️⃣ DeepSeek (1,222)
5️⃣ Meta (1,207)
6️⃣ Mistral (1,157)
GitHub - black-forest-labs/flux: Official inference repo for FLUX.1 models
https://github.com/black-forest-labs/flux
https://auto-rt.github.io
In this paper, we propose AutoRT, a system that leverages existing foundation models to scale up the deployment of operational robots in completely unseen scenarios with minimal human supervision. AutoRT leverages vision-language models (VLMs) for scene understanding and grounding, and further uses large language models (LLMs) for proposing diverse and novel instructions to be performed by a fleet of robots.
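The abstract describes a pipeline: a VLM describes the scene, an LLM proposes diverse instructions grounded in that description, and the tasks are handed to a fleet of robots. A minimal sketch of that loop, with stub functions standing in for the real VLM/LLM calls (all function names here are hypothetical, not the paper's actual API):

```python
# Hedged sketch of the AutoRT-style loop: VLM -> LLM -> fleet dispatch.
# describe_scene / propose_tasks are stand-ins for real model calls.

def describe_scene(observation):
    # Stand-in for a VLM: turn a camera observation into a text description.
    return "a table with " + ", ".join(observation["objects"])

def propose_tasks(scene_description, num_tasks=3):
    # Stand-in for an LLM: propose diverse instructions grounded in the scene.
    return [
        f"task {i}: manipulate an object in {scene_description}"
        for i in range(num_tasks)
    ]

def dispatch_to_fleet(robots, tasks):
    # Simple round-robin assignment of proposed tasks to available robots.
    return {robot: tasks[i % len(tasks)] for i, robot in enumerate(robots)}

observation = {"objects": ["a cup", "a sponge"]}
scene = describe_scene(observation)
tasks = propose_tasks(scene)
assignments = dispatch_to_fleet(["robot_0", "robot_1"], tasks)
```

In the real system the proposals also pass through safety filtering before execution; this sketch only illustrates the data flow between the three stages.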