Peter Steinberger 🦞 / @steipete:
RT by @steipete: 🦞#Ollama's cloud is one of the best places to run #OpenClaw.
The $20 plan is enough for most day-to-day OpenClaw usage with open models!
To make the switch, all you need is to open the terminal and type:
ollama launch openclaw
Choose a model:
kimi-k2.5:cloud
glm-5:cloud
minimax-m2.7:cloud
If you are affected, Ollama welcomes you!! ❤️
#Ollama@TutorialBTC
#Local@TutorialBTC
#HowToClaw@TutorialBTC
Peter Steinberger 🦞 / @steipete:
RT by @steipete: Liberate your #openclaw with an open model or #local model with these tools from our friend @ClementDelangue and team at @huggingface 🦞
https://huggingface.co/blog/liberate-your-openclaw
huggingface.co
Liberate your OpenClaw
stacker news ~tech:
Interview with ‘Just use a VPS’ bro (#OpenClaw version).
#HowToClaw@TutorialBTC
#OpsMolt@TutorialBTC
#Local@TutorialBTC
#VPS@TutorialBTC
#Linux@TutorialBTC
YouTube
Interview with ‘Just use a VPS’ bro (OpenClaw version).
Just use a VPS bro…
Interview with a VPS bro with Jack Borrough - aired on © The VPS.
Music: Paramore - Emergency
Openclaw installation guide
Linux server installation guide
OpenClaw VPS setup
Next-gen ai
Anthropic Claude
Linux server setup
Programming…
Gustavo Campos - Programação com IA:
GEMMA 4: the free AI that could eliminate ALL the paid ones (ABSURD)
#LLM@TutorialBTC
#Local@TutorialBTC
YouTube
GEMMA 4: the free AI that could eliminate ALL the paid ones (ABSURD)
If you want to test the AI I showed in the video (running directly on your PC, without paying anything), take a look here:
👉 https://ollama.com
👉 https://ai.google.dev/gemma
Basically, in the video I show how to use Gemma 4 to run an artificial intelligence…
Tech Jarves:
Run Local Uncensored AI from USB - Windows, Mac & Linux (No Internet) 🔥
#Local@TutorialBTC
YouTube
Run Local Uncensored AI from USB - Windows, Mac & Linux (No Internet) 🔥
Run uncensored AI from a USB drive offline, with no internet, no limits, and full privacy. Learn how to run AI from USB on Windows, Mac, and Linux. It supports Gemma 4, Qwen, Kimi, and others.
In this video, I’ll show you how to run uncensored AI from USB…
Peter Steinberger 🦞 / @steipete:
RT by @steipete: Who is running local models on GPUs on OpenClaw?
I started benchmarking different models this week. I am working on improving model-selection and switching UX on OpenClaw, e.g. I run
/model vllm/gemma-e4b
to switch the model in a channel; a model controller then automatically loads that model into memory and gets it ready, or returns an insufficient-memory error if there is not enough capacity, for example when you are running multiple models in parallel.
I am going to try llama-...
#Local@TutorialBTC
#OpenClaw@TutorialBTC
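The controller behavior described in that last post (switch a channel's model, load it into memory if it fits, otherwise return an insufficient-memory error) could be sketched roughly like this. Everything here is hypothetical — the model names, memory footprints, and class are illustrative assumptions, not OpenClaw's actual implementation:

```python
# Hypothetical sketch of a model controller with a GPU-memory capacity check.
# Model names and per-model footprints below are made-up examples.
MODEL_MEMORY_GB = {
    "vllm/gemma-e4b": 8,
    "vllm/qwen-32b": 20,
    "kimi-k2.5:cloud": 0,  # cloud models use no local GPU memory
}

class ModelController:
    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.loaded: dict[str, int] = {}  # model name -> GB currently in use

    def switch(self, channel_models: dict, channel: str, model: str) -> str:
        """Point `channel` at `model`, loading it first if memory allows."""
        needed = MODEL_MEMORY_GB.get(model)
        if needed is None:
            return f"unknown model: {model}"
        in_use = sum(self.loaded.values())
        # Only new loads consume additional memory; already-loaded models are free.
        if model not in self.loaded and in_use + needed > self.capacity_gb:
            free = self.capacity_gb - in_use
            return f"insufficient memory: need {needed} GB, {free} GB free"
        self.loaded[model] = needed       # load (or keep) the model resident
        channel_models[channel] = model   # route the channel to it
        return f"switched {channel} to {model}"
```

With a 10 GB budget, switching a channel to the 8 GB model succeeds, but a second, larger model is refused with the insufficient-memory message rather than evicting what is already loaded — mirroring the "multiple models in parallel" failure mode the post describes.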