Hacker News | VladVladikoff's comments

I love the winter for this. My thermostat is set to 16C at night. I prefer if the heat never even kicks on; it’s noisy and disruptive to have air blowing through the vents. I wish there were an AC that could make my house that cold at night while making no noise!

Going on your Slavic username: as an American who moved to a country without forced-air HVAC, it’s been quite a revelation to discover how backwards forced air really is.

Forced air is pretty necessary for air conditioning (cooling).

For heat, radiant is nicer. But most people don't want to pay to have two completely separate climate control infrastructures in their house.


Heat pumps: they're one piece of infrastructure that works for both heating and cooling, since the process is symmetric.

But you still need to move air for cooling (as opposed to using water, as for heating), because chilling surfaces below the dew point causes condensation, which creates damage and health risks.
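To make the condensation point concrete: a chilled surface condenses moisture whenever it drops below the room's dew point. A rough sketch using the Magnus approximation (the constants and example temperatures are my own assumptions, not from the comment):

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point via the Magnus formula (roughly valid 0-60 C)."""
    a, b = 17.62, 243.12
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# A 25 C room at 60% relative humidity:
td = dew_point_c(25.0, 60.0)
print(f"dew point: {td:.1f} C")  # ~16.7 C
# Chilled water for radiant cooling typically runs well below that,
# so any exposed cold surface would sweat -- hence forced air for cooling.
```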

Hydronic (water transport) heat is great: extremely comfortable and quiet compared to forced hot air.


Air heat pump? I don’t notice the sound.

Hey Xint Code / tylerni7 <https://news.ycombinator.com/threads?id=tylerni7>, maybe you should improve your disclosure process as well? Maybe make it mandatory for users of your tool?

they disclosed 30 days after the patch was merged in the thing they reported to.

it's the same disclosure policy as Google's Project Zero and several other major players, so you should probably be trying to ping a lot more people

reporters should not be responsible for finding out and individually reporting to every downstream consumer. blame the kernel security team, who is in a much better position to coordinate notifications to individual distro security teams.


In the original thread they admitted multiple times that they rushed it out for marketing reasons.

As an explanation for the misnumbered Red Hat version.

the disclosure itself followed a normal timeline, which you can view at the bottom of their blog post.


The security research community would run you out on a rail if you tried to take a successful research product and attach mandatory disclosure norms to it.

Couldn't the product itself disclose to the vendors?

No firm in the world would use a vulnerability research product that automatically disclosed to vendors.

I doubt the vendors would be big fans either lol

Does the author think question marks are the same thing as periods?


He could be an uptalker?


Not sure you're familiar with the different ways punctuation has evolved online?


Man that’s pretty shitty that Mercor tricked 40k contractors, and then did a poor job of securing their data. There should be stronger consequences for stuff like this.

What happens now is that a lot of clueless CTOs who didn't know about this company now know its name. So the outcome of this mess is probably more business for Mercor.

I mean, just look at what happened to Crowdstrike....


Mercor has around 5 customers that make up 95% of its revenue. Anybody who needs to know about them already does.

At minimum, collecting voiceprints should come with much stricter consent, retention and security requirements than ordinary "training data"

I checked your link and fail to see how that is a pre-AI project that you are only completing now; it is obviously a recent concept. Also, your username matches the project, so this seems mostly like a lame attempt to shill your product by “joining the conversation”.

You replied to a bot.

There are lots of sprouting tutorials on YouTube. I used mason jars, soak the seeds for an hour or two, drain and leave the seeds in the jar damp. Rinse the seeds twice a day. Eventually they start to sprout.

Many years ago at a science-y summer camp as a child, this was a "project" we did. Not for the same purpose as suggested here but just to see how sprouting happens. Cool little experiment.

Why are these databases not scoped to origin of creation like cookies?

They are. The leak is that if a webpage you visit creates several databases with certain names, the order is random but stays the same within the same browser session.
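A minimal model of that leak (a Python toy, with hypothetical database names): each browser session fixes a random but stable enumeration order for a site's databases, so repeated reads within one session match, which makes the order usable as a session identifier.

```python
import random

DB_NAMES = ["cache", "sync", "prefs", "analytics", "store"]

class BrowserSession:
    """Toy model: enumeration order is randomized once per session, then frozen."""
    def __init__(self) -> None:
        self._order = random.sample(DB_NAMES, k=len(DB_NAMES))

    def list_databases(self) -> list:
        return list(self._order)

session = BrowserSession()
first, second = session.list_databases(), session.list_databases()
assert first == second  # stable within a session: a trackable fingerprint
```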

Why does Gemini 3.1 get a pass when Image 2 gets a fail on the flat earth one for the same reasons? Gemini has all sorts of random body parts, limbs, etc.

That's a mistake. None of the models successfully passed the Flat Earth composition test. I've updated the passing criteria to be more explicit as well. Thanks for catching that!

That's a pretty popular budget friendly GPU people use for local AI, it actually seems like an excellent choice IMHO.

Depends on your definition of budget friendly, I suppose. I was looking around the other day and the cheapest working 24GB RTX 3090 on eBay was $1800 CAD after exchange rate, shipping and all the rest.

Hugely inflated from the $700 they were once going for. Maybe there are still deals around.


Actually budget friendly is the RTX 3060 12GB.

With one you can run 9B/12B models, which are fine for text tasks like chatting or summarisation, but not for precision tasks like tool calling or code.

With two of them you can run models up to Qwen 27B and 35B with a few-turn context window (8k-16k). Dense at 14t/s and MoE at 68t/s.

With three of them you can run 128k context, though you'll need a large format case and the right motherboard or PCIe riser.

I'm running three and even with a new case this setup cost me less than one 3090.
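The sizing in this comment can be roughed out with back-of-the-envelope arithmetic: quantized weights plus KV cache must fit the pooled VRAM. A sketch, where the bytes-per-weight and KV-cache figures are rough assumptions of mine (Q4-ish quantization, GQA-style attention), not measurements:

```python
def fits(params_b: float, ctx_tokens: int, n_gpus: int,
         vram_gb: float = 12.0, bytes_per_weight: float = 0.55,
         kv_gb_per_8k: float = 0.5) -> bool:
    """Very rough check: quantized weights + KV cache vs pooled VRAM."""
    weights_gb = params_b * bytes_per_weight   # ~0.55 B/param at Q4-ish
    kv_gb = kv_gb_per_8k * ctx_tokens / 8192   # KV cache scales with context
    return weights_gb + kv_gb < n_gpus * vram_gb * 0.9  # ~10% headroom

print(fits(12, 8192, 1))    # 12B at 8k on one 3060: True
print(fits(27, 16384, 2))   # 27B at 16k on two: True
print(fits(27, 131072, 2))  # 27B at 128k on two: False
print(fits(27, 131072, 3))  # 27B at 128k on three: True
```

Under these assumptions the pattern matches the comment: one card for ~12B, two for ~27B at short context, three for 128k.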


This seems quite unlikely. What motherboard are you getting three x16 GPUs on? That alone, with the associated server processor, would cost more than a used 3090, before even buying the three 3060s. Give the full BOM and costs.

I already had the PC. I just mean the extra purchase of the graphics cards.

The motherboard is an MSI Pro Z690-A.

The slots are physically x16. Electrically they are x16, x4, and x1, which doesn't harm anything at all.
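Whether a narrow slot hurts depends on what crosses the bus: for single-stream inference the weights stay in VRAM, so link width mostly affects model load time. A rough comparison, assuming roughly 1.9 GB/s of usable PCIe 4.0 bandwidth per lane (my figure, not the commenter's):

```python
GBPS_PER_LANE = 1.9  # approx usable PCIe 4.0 throughput per lane (assumed)

def load_seconds(model_gb: float, lanes: int) -> float:
    """Time to copy model weights to the GPU over a link of given width."""
    return model_gb / (GBPS_PER_LANE * lanes)

# Loading a ~7 GB quantized model over each slot width:
for lanes in (16, 4, 1):
    print(f"x{lanes}: {load_seconds(7.0, lanes):.1f} s")
```

Even the x1 slot only adds a few seconds at startup; per-token traffic is tiny by comparison.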


That's insane. I bought two in December for ARS 1.2M (a little less than USD 1000). Maybe OpenClaw raised the demand.

Wild. I paid $1000 CAD for mine 2 years ago; I guess things have changed.

Because they are hugely more useful now than running some stupid game at 240 fps instead of 60 fps.

They're not a particularly fast card compared to something like a 5070, but they have lots of VRAM.

That's why they were cheap before.

Also, "some stupid game"? Who woke up and made you king of hobbies?


The only thing that compares to this is probably Mac mini with MLX models.

Radeon 9700 Pro or Intel Arc B70 (both $1000-1400, 32GB, 650GB/s bandwidth), or Ryzen AI Max 390 (more VRAM, less bandwidth).

The local inference space is pretty good nowadays.


The tweet is only a few words, you really need an LLM to write that for you???
