As always, I _wish_ I had a use case for an iPad. They seem like such powerful machines, hindered by where they live in the serious-computing space. iPadOS being much more restrictive doesn't help either.
I wish they could repurpose macOS to touch screens... Oh well.
It’s really just easier integrations with stuff like iMessage. I assume it’s easier for email and calendars too, since trying to come up with anything sane for a Linux VM + GSuite is a total wreck. At least it has been in my limited experience so far.
Other than that I can’t really come up with an explanation of why a Mac mini would be “better” than, say, an Intel NUC or a virtual machine.
Why though? The context window is 1 million tokens max so far. That is what, a few MB of text? Sounds like I should be able to run claw on a Raspberry Pi.
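Rough back-of-the-envelope (assuming ~4 bytes of English text per token, which is a common rule of thumb, not a measured figure):

    # ~4 bytes of text per token is a loose rule of thumb for English
    tokens = 1_000_000
    bytes_per_token = 4  # assumption, not a measured value
    print(f"{tokens * bytes_per_token / 1_000_000:.0f} MB")  # ~4 MB of text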
If you’re using it with a local model then you need a lot of GPU memory to load up the model. Unified memory is great here since you can basically use almost all the RAM to load the model.
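For a rough sense of scale (the model size and quantization below are illustrative assumptions, not any specific setup):

    # Illustrative memory estimate for just the weights of a local model
    params = 70e9          # assume a 70B-parameter model
    bytes_per_param = 0.5  # assume ~4-bit quantization
    print(f"~{params * bytes_per_param / 1e9:.0f} GB")  # ~35 GB, before KV cache

That's beyond the VRAM of any single consumer GPU, but fits comfortably in a Mac with 64+ GB of unified memory.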
I meant cheap in the context of other Apple offerings. I think Mac Studios are a bit more expensive in comparable configurations, and with laptops you also pay for the display.
Local LLMs are so utterly slow at the giant context windows openclaw generally works with, even with multiple $3,000+ modern GPUs, that I doubt anyone using it runs one locally.
From my basic messing around, local LLMs are a toy. I really wanted to make it work and was willing to invest five figures if my basic testing showed promise - but it’s utterly useless for the things I want to eventually bring to “prod” with such a setup, largely live devops/sysadmin-style tasking. I don’t want to mess around hyper-optimizing the LLM efficiency itself.
I’m still learning so perhaps I’m totally off base - happy to be corrected - but even if I were able to get a 50x performance increase at 50% of the LLM capabilities, it would be a non-starter due to the speed of iteration loops.
With openclaw burning 20-50M tokens a day with codex just during the “playing around in my lab” stage, I can’t see any local LLM short of multiple H200s or something being useful, even as I get more efficient with managing my context.
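To put the iteration-loop problem in numbers (the throughput figure is an assumption from my rough testing, not a benchmark):

    # How long a local setup would take just to chew through a day's tokens
    daily_tokens = 20_000_000  # low end of my daily burn
    local_tok_per_sec = 300    # assumed combined prefill/decode throughput
    print(f"~{daily_tokens / local_tok_per_sec / 3600:.0f} hours")  # ~19 hours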
Naive take, but I feel the only way to combat this problem is with AI. Any manual queuing or separate threads could be overrun all the same. The only difference being there is a dam of sorts before we let some of the projects out.
Any manual efforts to combat AI won't scale as models get better and better. Show HN is a place to showcase cool projects; I don't see why a 100% AI-generated project can't be shown. AI has allowed many to rob themselves of their retirement projects, and the uptick reflects that. My hunch is that once we settle into the new shift, we can perhaps tweak the parameters around the decay of Show HN posts.
Or we could allow upstanding members to signal-boost Show HN posts. Something along the lines of "Hey guys, I _really_ want you to see this post, so here it is (again)".
I'm not actively planning for it, as I'm building this to fix a specific problem I have. However, it will be a static API and an SPA frontend, and both will be open source. The frontend should be fairly easy to reuse.
TIL about shading, and I'm surprised how rarely I've seen this term in grading tutorials. While different, I feel like shading is something that should be learned before grading.
PS: You might have pasted two different answer drafts above. Paras 1,4 and 2,5 deliver similar information.
They don't do that anymore, at least not for me. I tried contacting two different support agents and both mentioned that the functionality has been removed for them by the higher-ups.