Gustavo Campos - Programming with AI:
GEMMA 4: the free AI that could eliminate ALL the paid ones ( ABSURD )
#LLM@TutorialBTC
#Local@TutorialBTC
Join my Nova Era da Programação training: 👇
https://novaeradaprogramacao.com/index.html?utm_source=youtube
If you want to try the AI I showed in the video (running directly on your PC, without paying anything), take a look here:
👉 https://ollama.com
👉…
Tech Jarves:
Run Local Uncensored AI from USB - Windows, Mac & Linux (No Internet) 🔥
#Local@TutorialBTC
Run uncensored AI from a USB drive offline, with no internet, no limits, and full privacy. Learn how to run AI from a USB drive on Windows, Mac, and Linux. It supports Gemma 4, Qwen, Kimi, and other models.
In this video, I’ll show you how to run uncensored AI from USB…
Peter Steinberger 🦞 / steipete:
RT by @steipete: Who is running local models on GPUs on OpenClaw?
I have started benchmarking different models this week. I am working on improving model selection and switching UX on OpenClaw, i.e. I run
/model vllm/gemma-e4b
to switch the model in a channel, and then a model controller automatically loads it into memory and gets it ready, or returns an insufficient-memory error if there isn't enough capacity for it, e.g. when you are using multiple models in parallel.
I am going to try llama-...
#Local@TutorialBTC
#OpenClaw@TutorialBTC
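The switching flow described in the tweet (load a model on demand, report ready, or fail fast with an insufficient-memory error when several models share the GPU) can be sketched as a toy controller. OpenClaw's real implementation is not shown here, so the class name, the per-model size bookkeeping, and the fixed memory budget are all assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ModelController:
    """Toy sketch: load models into a fixed GPU-memory budget (all names hypothetical)."""
    capacity_gb: float
    loaded: dict = field(default_factory=dict)  # model name -> resident size in GB

    def used_gb(self) -> float:
        # Total memory currently occupied by loaded models.
        return sum(self.loaded.values())

    def switch(self, name: str, size_gb: float) -> str:
        # Model already resident: it is immediately ready.
        if name in self.loaded:
            return f"{name} ready"
        # Refuse the load if the remaining budget is too small,
        # mirroring the "insufficient memory" error described above.
        if self.used_gb() + size_gb > self.capacity_gb:
            return f"insufficient memory for {name}"
        self.loaded[name] = size_gb  # simulate loading the weights into memory
        return f"{name} ready"

ctl = ModelController(capacity_gb=24.0)
print(ctl.switch("vllm/gemma-e4b", 8.0))   # fits: ready
print(ctl.switch("vllm/qwen-7b", 14.0))    # still fits alongside the first model
print(ctl.switch("vllm/kimi-72b", 40.0))   # rejected: insufficient memory
```

Keeping models resident side by side (rather than evicting on every switch) is what makes per-channel `/model` switching cheap once a model has been loaded; a real controller would also handle eviction and actual GPU allocation.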