So… with all this OpenClaw stuff, I was wondering: what's the FOSS status for something to run locally? Can I get my own locally run agent which I can ask to perform simple tasks (go and find this, download that, summarize an article) or things like that? I'm just kinda curious about all of this.
Thanks!
I’m curious about this too. I know that on the latest version of Ollama it’s possible to install OpenClaw. But I assumed you needed to point it to a paid API (Claude, ChatGPT, Grok, etc.) for it to really work. But yeah, maybe it works with Qwen 3 or similar models?
I guess a major factor in this is what your system resources look like, especially how much RAM you have, and therefore which model you can host locally.
https://wiki.archlinux.org/title/Ollama
Ollama is an application that lets you run large language models locally, offline.
Ollama is a VC-backed copy/paste of llama.cpp.
They have a history of using llama.cpp’s code (and bugs) without supporting the project or giving credit. llama.cpp is easy to use, more performant, and truly open source.
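One practical note for the original question: both Ollama and llama.cpp's `llama-server` expose an OpenAI-compatible HTTP API on localhost, so a tool that expects a paid API can often be pointed at a local endpoint instead. A minimal sketch of building such a request (the port, model name, and prompt are placeholders, not anything OpenClaw-specific):

```python
import json

# Hypothetical local setup, e.g.:
#   llama-server -m model.gguf --port 8080      (llama.cpp)
#   ollama serve                                 (Ollama, default port 11434)
# Both serve an OpenAI-style /v1/chat/completions endpoint.

def chat_request(prompt, model="local"):
    """Build the JSON body for an OpenAI-style chat completion request."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    })

body = chat_request("Summarize this article: ...")
# POST `body` to http://localhost:8080/v1/chat/completions
# with urllib.request or the `requests` library.
```

Whether a given agent frontend accepts a custom base URL depends on the tool, but most OpenAI-client-based ones do.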
Thanks! I have an understanding of being able to run these models as an LLM you can chat with, using tools like Ollama or GPT4All. My question would be: how do I go from that to it actually doing things for me, handling files, etc.? As it stands, if I run any of these locally, it's just able to answer questions offline, and that's about it… what about these "skills", where it can go fetch files, or find a specific URL, or give a summary of what a YouTube video is about based on what's being said in it?
I’ve had better luck with llama.cpp for opencode. I’m guessing it handles the output formatting for tool use better.
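For the "how does it actually do things" part: agent tools generally work by having the model emit a structured tool call instead of a plain answer, which a small harness executes before feeding the result back into the conversation. A toy sketch of that dispatch step (the tool name and JSON shape here are assumptions for illustration, not OpenClaw's actual protocol):

```python
import json
import pathlib

# Example file for the tool to read (stands in for a real workspace file).
pathlib.Path("notes.txt").write_text("hello from disk")

def read_file(path):
    """A 'skill': the harness, not the model, touches the filesystem."""
    return pathlib.Path(path).read_text()

TOOLS = {"read_file": read_file}

def dispatch(model_reply):
    """If the model's reply is a JSON tool call, run it; else return the text."""
    try:
        call = json.loads(model_reply)
    except json.JSONDecodeError:
        return model_reply                      # plain answer, no tool needed
    if not isinstance(call, dict) or "tool" not in call:
        return model_reply
    return TOOLS[call["tool"]](**call["args"])  # execute and return the result

# Simulated model output asking to read a file:
result = dispatch('{"tool": "read_file", "args": {"path": "notes.txt"}}')
```

The quality of a local agent then comes down to how reliably the model produces that structured format, which is why tool-use formatting (and frameworks' prompt templates for it) matters so much with smaller local models.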