Hacker News | ActorNightly's comments

Prove to me that people deserve to be free and not micromanaged on a day to day basis.

I want the ability to run any linux distro on my macbook, like I can with any other computer that is not a macbook.

Macs have enough open firmware to allow you to run any OS you want. Asahi Linux only supports a certain subset of modern Mac hardware; if you want to speed up development, you should probably contribute to that project.

I really am starting to think that the level of technical understanding on HN is so low that when readers see an exploit like this, they imagine basically the cult classic movie "Hackers" in their heads where some guy hacks into any machine of their choosing.

I mean, Google already has MuZero, which I'm willing to bet has evolved quite a bit in private, because if anything is going to get us closer to actual AI, it's that.

Realistically, one can build an AI capable of reasoning (i.e., recurrent loops with branches) using very basic models that fit on a 3090, with a multi-agent configuration along the lines of https://github.com/gastownhall/gastown. Nobody has done it yet because we don't know how many agents are required or what the prompts for those look like.
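To make the idea concrete, here is a minimal sketch of that kind of recurrent loop with a branch point: a "planner" agent proposes the next step and a "critic" agent decides whether to continue or stop. The stub `call_model` stands in for a small local model, and the prompts/roles are placeholder assumptions, not known-good configurations.

```python
# Minimal sketch of a recurrent multi-agent reasoning loop.
# call_model is a stub standing in for a small local LLM (e.g. one that
# fits on a 3090); the role names and prompts are illustrative only.

def call_model(role: str, prompt: str) -> str:
    # Stub: a real system would call a local model here.
    if role == "critic":
        return "stop" if "step 3" in prompt else "continue"
    return f"step {prompt.count('step') + 1}"

def reasoning_loop(task: str, max_iters: int = 10) -> list[str]:
    transcript = [task]
    for _ in range(max_iters):
        step = call_model("planner", "\n".join(transcript))   # propose
        transcript.append(step)
        verdict = call_model("critic", "\n".join(transcript))  # branch
        if verdict == "stop":
            break
    return transcript
```

The open question from the comment is exactly what goes into `call_model`'s prompts so that this loop does useful work on arbitrary tasks.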

The fundamental philosophical problem is whether that configuration can be arrived at through training, or whether AI agents have to go through equivalent "evolution epochs" in a simulated environment to be able to do all that. Because those prompts and models have to be information-agnostic.


Because in order to exploit this, you need direct access to the computer: either through a malicious USB device, or by exploiting the supply chain or a known piece of software that will be willingly or automatically installed. And furthermore, you need to be able to run essentially arbitrary terminal commands, which is a huge breach of isolation in that software.

If an attacker manages to do all that, it's already bad news for you. Escalation to root via this bug is the least of your worries at that point.

Like someone else below posted, https://xkcd.com/1200/

People need to understand what the vulnerability actually is before freaking out about it.


You are assuming that LPE only applies to the user that holds all the sensitive stuff. But it also applies to users created specifically for isolation. Without LPE, those users would not have access to anything important even if they were compromised.

It doesn't matter which "user" this goes through. If an attacker can take control of a user's account to the point where they can execute arbitrary scripts, you have already lost.

So a threat actor buys access to a managed kubernetes service, or other linux-based shared hosting platform, and now they have access to the computer.

Hell, GitHub Actions would do.


Is there any service that relies on Linux user separation or containers to separate different user accounts? I’m pretty sure you’re not supposed to do that and the proper way is to run different instances in virtual machines.

Basically every shared webhost that uses cPanel works like this. The security mechanism they use is called CageFS (https://cloudlinux.com/getting-started-with-cloudlinux-os/41...), which makes it so users can't see other users, but it's not like a VM or something.

Right, you're not supposed to do that...

Yes, because hypervisors are just programs that run under Linux, with no actual CPU/memory isolation...

Lemme guess, you probably think this can be used to hack into the backend that runs AWS from any EC2 instance, lol?


Qwen is still better than Gemma, though. Also, you can tune it more for different tasks, which means you can prioritize thinking and accuracy versus inference speed.

Qwen is better at some things (code, in particular), but Gemma has better prose and better vision. At least, it feels that way to me.

Gemma is also just way faster. I don't want to wait 10 minutes to get a 5-10% better answer (and sometimes an actually worse one).

Best is to use your own model router atm, depending on the task.


I'm pretty sure Qwen is faster? The MoE version of Qwen is 3B active, while Gemma 4 is 4B active. Similarly, the dense Qwen is 27B while Gemma is 31B. All else being equal (though I know all else isn't equal), Qwen should be faster in both cases. I haven't actually measured with any precision, but on my AMD hardware (Strix Halo or dual Radeon Pro V620) they seem quite similar in both cases...both MoE models are fast enough for interactive use, both dense models are notably smarter but much slower, long time to first response and single-digit tokens per second once it starts talking.
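The intuition above can be put into back-of-envelope numbers: at batch size 1, decode is roughly memory-bandwidth bound, so tokens/sec scales about inversely with active parameter bytes. The bandwidth figure below is an assumption for Strix Halo, and 8-bit weights are assumed for simplicity.

```python
# Rough decode-speed estimate: one pass over the active weights per token,
# so tokens/sec ~ bandwidth / active-parameter bytes. All numbers are
# approximations; real throughput depends on kernels, KV cache, etc.

def rough_tps(bandwidth_gbs: float, active_params_b: float) -> float:
    # active_params_b in billions of parameters, assumed 1 byte each (8-bit)
    return bandwidth_gbs / active_params_b

strix_halo_bw = 256.0  # GB/s, approximate LPDDR5X bandwidth (assumption)
qwen_moe_tps = rough_tps(strix_halo_bw, 3.0)   # 3B active
gemma_moe_tps = rough_tps(strix_halo_bw, 4.0)  # 4B active
```

On this crude model the 3B-active MoE should be about 4/3 the speed of the 4B-active one, which is small enough that "they seem quite similar" is entirely plausible.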

qwen-3.6 is really interesting. The dense 27B model is pretty slow for me whereas the sparse 31B is blazingly fast but it also needs to be since it's so chatty. It produces pages and pages of stream of consciousness stuff. 27B does this to a lesser extent but slow enough that I can actually read it whereas 31B just blasts by.

I haven't yet compared either to Gemma 4. I tried that out the day after it came out with the patched llama.cpp that added support for it but I couldn't make tool calling work and so it was kind of useless. I should try again to see if things have changed but judging by what people say, qwen-3.6 seems stronger for coding anyway.


I had the same experience with 31B. Runs well on 4090 too!

I'm using both incessantly and having a great time.

Qwen without thinking is just as fast. I have 4 parameter settings based on the recommendations. For a hard coding problem, the thinking coding mode works well but takes a while to arrive at an answer. If you want faster turnaround, instruct mode works without thinking.

Genuine question: how do you tune it?

I thought "fine-tuning" meant training it on additional data to add additional facts / knowledge? I might be mistaking your use of the word "tune", though :)


You can fine-tune relatively easily in Unsloth Studio.

Parameter settings are here. https://huggingface.co/Qwen/Qwen3.6-35B-A3B

Most clients that support ollama support passing extra body options where you can set those.
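As an illustration of passing those options through Ollama's request body, here is a small sketch. The sampling values follow Qwen's published recommendations for thinking vs. non-thinking mode at the time of writing (verify against the current model card), and the model name is an assumption.

```python
# Build an Ollama /api/generate request with per-mode sampling options.
# The "options" field is forwarded to the model runner by Ollama.

PRESETS = {
    # Qwen's recommended sampling settings (check the model card):
    "thinking": {"temperature": 0.6, "top_p": 0.95, "top_k": 20},
    "instruct": {"temperature": 0.7, "top_p": 0.8, "top_k": 20},
}

def build_request(prompt: str, mode: str, model: str = "qwen3") -> dict:
    return {
        "model": model,       # assumed model tag
        "prompt": prompt,
        "stream": False,
        "options": PRESETS[mode],
    }

# POST this dict as JSON to http://localhost:11434/api/generate
```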


It’s a heck of a lot faster too.

Yes I would just go with qwen.

I found that Gemma 4:26b makes way more mistakes compared to Qwen and Gemma 3. Gemma 3 27B QAT was my go-to for some time, as it was quite fast. Qwen is still king for balancing accuracy and inference speed.

Gemma:31b was more accurate but speed was horrendous.


You don't need HDMI out, just the ability to take screenshots, which is easy to script.
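A sketch of what "easy to script" could look like: pick a screenshot command for the current platform and shell out to it. The tool names are common defaults (`screencapture` on macOS, `scrot` on X11, `nircmd` on Windows); their availability on the target machine is an assumption.

```python
# Choose a screenshot command per platform instead of capturing HDMI.
# Tool availability is assumed; swap in whatever the target box has.

import sys

def screenshot_argv(out_path: str, platform: str = sys.platform) -> list[str]:
    if platform == "darwin":
        return ["screencapture", "-x", out_path]   # macOS, -x = no sound
    if platform.startswith("linux"):
        return ["scrot", out_path]                  # X11; Wayland differs
    if platform == "win32":
        return ["nircmd", "savescreenshot", out_path]  # one of several options
    raise ValueError(f"unsupported platform: {platform}")

# Usage: subprocess.run(screenshot_argv("shot.png"), check=True)
```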

Arguably though, browser automation gets you 95% of the way there for most things.


Many systems won't allow the end user to install any software (e.g. work issued laptops), but you can plug in HDMI and USB.

I had a Casio that was multi color, because I thought it was cooler. Display was nice, functionality sucked.

I had a Casio as well because, IIRC, it was the only thing the shop had. Eventually I had to also get a TI because it allowed using imaginary numbers in a matrix operation. Not that that was used in more than one course after all. But I grew to like it and even had an emulator for a long time on my first smart phone.

But yeah, Casio was definitely more friendly and polished in UI, but dumber. You could only use "wizard" type things and pseudo gui clickies while the ti was crude and text-heavy but let you enter just about anything anywhere and seemed more symbol and language oriented. Which one was nicer in use? I guess it would depend on how much of that language you could memorize. Or browse a cheat sheet for.


>Around the same time, Andrej Karpathy (OpenAI cofounder, former Tesla AI lead) told the No Priors podcast he was in a “state of psychosis” over AI agents. He said he hadn’t written a line of code since December. He described tasks that used to take a weekend now finishing in 30 minutes with zero human intervention. Karpathy is a literal genius and one of the most technically accomplished people in the industry. He built a WhatsApp bot called “Dobby the House Elf” to control his home systems (though that naming leans more towards genius than psychosis).

Ah yes, the same guy who said implementing lidar with cameras is hard (like Kalman filters aren't a thing). The same guy who spoke positively about Musk's engineering talents AFTER he went crazy. That genius...
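For context, the Kalman filtering being alluded to (fusing noisy per-frame depth estimates into one state estimate) is textbook material. A 1D sketch, with made-up measurement numbers:

```python
# Minimal 1D Kalman filter: fuse a stream of noisy distance measurements
# (e.g. camera depth estimates) into one estimate whose uncertainty
# shrinks over time. Constant-state model, illustrative values only.

def kalman_1d(measurements, meas_var, init_est=0.0, init_var=1e6):
    est, var = init_est, init_var
    history = []
    for z in measurements:
        k = var / (var + meas_var)   # Kalman gain
        est = est + k * (z - est)    # pull estimate toward measurement
        var = (1 - k) * var          # uncertainty decreases each step
        history.append((est, var))
    return history
```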

Basically, I feel like you end up in that kind of psychosis if your talent is measured by how much stuff you have memorized and how much of it you can type in a given timeframe. And now that LLMs are doing it for you, you feel worthless.

I remember when I first started learning Python, having been in Java/C++ land. It felt like a hack. You could just pip install stuff, import it, dynamically hack things around if you needed to, and make stuff work in much less time. I wrote tools that let me write other tools quicker. For example, back before you could ask LLMs to write code, you basically had to Google stuff and search for examples. So one of the first things I wrote was essentially a web-page-to-API converter. Now I had a tool that programmatically let me pull content from the web, including things like code samples.

I then wrote a tool to search documentation and GitHub, pull out anything styled as code using my previous tool, and put it into OpenSearch, so when I had a question about something, I could search for a function in OpenSearch and see examples.
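A stripped-down version of that code-sample extractor, using only the stdlib HTML parser (the real tool presumably did much more):

```python
# Pull <pre>/<code> blocks out of an HTML page: the core of a
# "web page to API" helper for harvesting code samples. Stdlib only.

from html.parser import HTMLParser

class CodeExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.depth = 0      # nesting level inside <pre>/<code>
        self.blocks = []    # collected code snippets

    def handle_starttag(self, tag, attrs):
        if tag in ("pre", "code"):
            if self.depth == 0:
                self.blocks.append("")   # start a new snippet
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in ("pre", "code") and self.depth > 0:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth > 0:
            self.blocks[-1] += data      # accumulate text inside the block

def extract_code(html: str) -> list[str]:
    parser = CodeExtractor()
    parser.feed(html)
    return [b.strip() for b in parser.blocks if b.strip()]
```

Feeding the results into a search index is then just a matter of posting each snippet as a document.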

And so on.

Agents these days have replaced a lot of the manual work. But complex tasks, with decision making, repeated loops, and unknown unknowns, are still something agents can't reliably do. Anyone can put together a UI with agents very quickly. But if you leave a lot of the work to the agents and don't specify how you want the code written, you are going to get boxed into code that quickly degrades performance, introduces edge-case bugs, and so on. Sure, you can have LLMs fix all that, but doing it automatically is something nobody has done yet.

The real skill in the future is going to be writing agentic programs that work on features for you, instead of working on features yourself. You invest time up front to build this and spend minimal time maintaining it. Much in the same way you invested time writing OOP code with clean separation into packages and classes, and build systems with verification, so that anyone could come in, write code, and have a safe way of testing and committing changes.


