I was looking for this comment. Seems to fit right in between “just” docker compose and a “fully-fledged” K8s. This book is running Erlang clusters on Swarm on EC2: https://www.goodreads.com/book/show/216601296.
Feels like there should be a way to compile skills, READMEs, and even code files into concise maps and descriptions optimized for LLMs. They'd only recompile when the source timestamps change.
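A make-style staleness check would cover the recompile part; a minimal sketch, where `summarize` is a stand-in for whatever LLM-optimized compression step you'd actually run:

```python
import os

def summarize(text: str) -> str:
    # Placeholder "compiler": first line plus a line count.
    # A real version would produce an LLM-optimized map of the file.
    lines = text.splitlines()
    head = lines[0] if lines else ""
    return f"{head} ({len(lines)} lines)"

def compile_if_stale(src: str, out: str) -> bool:
    """Recompile src -> out only if out is missing or older than src."""
    if os.path.exists(out) and os.path.getmtime(out) >= os.path.getmtime(src):
        return False  # cached map is fresh; skip the expensive step
    with open(src) as f:
        summary = summarize(f.read())
    with open(out, "w") as f:
        f.write(summary)
    return True
```

Running it twice in a row only compiles once; touching the source file triggers a rebuild, same as make.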
OpenAI / GPT should do the opposite: let people use their subscription on openclaw, break down which tasks are efficient vs. inefficient, and help openclaw learn to be efficient.
The ones who recognize standards as a good thing. ARM making their own CPUs shifts their focus from making a good ISA for people to use to making a good ISA to use in their own CPUs.
The ARM family of chips (Apple A series, M series, and Qualcomm Snapdragon) is better on energy usage (and thus battery life), performance, and design than many x86-style chips (Intel, AMD).
Time will tell if ARM's own CPU is on par with or better than Apple's ARM-based chips.