Hacker News | bilekas's comments

No, cancer stems from the carcinogens produced when tobacco is burned or heated. Nicotine is no more carcinogenic in itself than caffeine.

I'm not a doctor, though, so while I might sound sure, this is just based on what I've read on the topic over the years.

Edit: rightly corrected, it's not just heating and burning; it's tobacco and other carcinogens in general. But nicotine itself is not carcinogenic.


Cancer also stems from non-heated tobacco, because the plant itself contains carcinogens that are pressed against the lining of the mouth, for example, often causing lesions and the like.

Chewing tobacco causes cancer

But not because of the nicotine.

> This makes them AMD’s first desktop chips to qualify for Microsoft’s Copilot+ PC label, which enables a handful of unique Windows 11 features like Recall and Click to Do.

This is not the selling point they think it is.

The problem I see with the AM5 socket is simply that the DDR5 RAM needed to support it is just too expensive. So this will not really make the big impact they were hoping for.


I have a hard time believing ANYONE thinks this is a selling point. Literally anyone, including marketing and execs at Microsoft. I think they have just sunk too much money in it to quit, so they keep doubling down.

"The problem I see with the AM5 socket is simply the fact that DDR5 RAM to support it is just too expensive."

DDR4 is basically just as expensive. At least DDR5 gives you on-die error correction (not as good as full ECC).

For a general computer, there's not that much difference between AM4 and AM5 unless you really want the extra speed of DDR5, PCIe 5.0, and the newest processors. You can build a very capable AM4 machine for slightly less money, but the savings are on the CPU and motherboard, not on the RAM.


DDR4 looks to be around half the price of DDR5 on the used market to me. I wouldn't call that slightly less money unless you weren't planning to install much RAM.

On the new market, they are really close. Most consumers are buying new. If we want to get pedantic, we can compare buying used systems on Facebook marketplace to cannibalize the parts and resell the others to see net cost.

I don't think most people building AM4 systems currently are buying new, or at least not everything new, simply because depending on what you're looking for there might not even be any new parts.

The ECC in DDR5 is there just to make it work because of the errors caused by the density and the data rates. The ECC isn’t there for you, it is for the manufacturers.

I think for brand new computers/builds that's correct, but where it hit me was wanting to upgrade an existing desktop. I already have more DDR4 RAM than I need and would have been willing to purchase a new CPU/motherboard, but being forced to also purchase new RAM at the same time made it too big a price tag all at once. I just found the best Zen 3 CPU I could on eBay and called it a day.

I think your point still stands overall for AMD's business, though. I assume the vast majority of CPUs are purchased in new desktops?


I am still running DDR2 & DDR3 machines! I was going to finally make the big upgrade this year but am now holding off until the market finds a little bit more sanity.

I built a workstation / gaming pc in 2024, and I feel like I was on the last chopper out of 'nam

Agreed, I just built a desktop for the first time in ages; it's quite a change from using laptops with numerous components plugged into them. Everything was comparably reasonable except the RAM, or anything that has a memory chip on it (RAM, NVMe, etc.), so I did some research just to make sure. All in all I'm happy with the result. I went with an AMD 9900X and skipped the graphics card for now.

I would like to add that looking for bundles helps a lot. If you have a Micro Center near you, use it to your full advantage; they are the only ones offering promotional items with bundles at the moment from what I've seen. The main objective is to skip the price gouging on RAM chips, which cost more than the CPU at the moment. I got a CPU and motherboard plus 32 GB of RAM for $600, and that was a save; the RAM was $445 alone.

If I can use it with Linux in any meaningful way, that would be a better selling point.

You in fact can now! In the past week, a transformer framework called FastFlowLM [0] for XDNA 2 NPUs officially started supporting Linux.

I posted it here the same day I found and started using it, to almost no reaction.

[0] https://github.com/FastFlowLM https://fastflowlm.com/ https://huggingface.co/FastFlowLM


> to almost no reaction.

HN is overloaded with AI stuff; it's hard to break through all the noise. I say this as someone very interested in AI. Even I skip some links because it's just too much.


I see it making claims about 10x efficiency, but how is tokens / second / watt? The only machines I've seen with the memory bandwidth to do local inference effectively are M-series ARM chips on Macs.
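For reference, the metric being asked about is simple arithmetic over a measured run. A minimal sketch, where every number below is purely hypothetical (not a real benchmark of this NPU or any GPU):

```python
# Tokens per joule (equivalently tokens/second/watt) from a measured run.
# All figures here are made-up placeholders for illustration only.

def tokens_per_joule(tokens: int, seconds: float, avg_watts: float) -> float:
    """Efficiency of a generation run: tokens divided by energy used."""
    return tokens / (seconds * avg_watts)

# Hypothetical NPU run: fewer tokens, but at much lower power draw.
npu = tokens_per_joule(tokens=600, seconds=60.0, avg_watts=8.0)
# Hypothetical GPU run: faster in raw tokens/s, but far more power.
gpu = tokens_per_joule(tokens=1800, seconds=60.0, avg_watts=120.0)

print(f"NPU: {npu:.2f} tok/J, GPU: {gpu:.2f} tok/J")
```

The point of the metric is that a chip can lose badly on raw tokens/s yet still win on tokens/s/W, which is what an efficiency claim would need to show.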

Because it's not faster than the Ryzen 395's GPU. Power efficiency doesn't matter as much as TTFT (time to first token) for desktop users, especially when they're tasking their AMD box as a dedicated inference machine.

Some older pre-395 AMD articles suggested it'd be possible to use the NPU for prefill and the GPU for decoding, and that this would be faster than using either alone, but we have yet to see that (even on Windows) for any usefully sized models, just toys like LLaMA-8B.


Now all the features you don't use can perform 20% faster!

>Local NPU

The main thing I've noticed with local NPUs is that very few applications take advantage of them, and IIRC it's because documentation for them is lacking and there are quite a few different kinds out there, each with different APIs, libraries, etc.


Upgrading to AM5 wasn't compelling to me even last summer before things went bonkers; I'm still very content with my 5800x and 64GB of DDR4.

Trying to take the plunge on that now sounds like a nightmare.


I got lucky on the timing and got 9800X3D + 64GB DDR5 before prices increased.

Your machine is sweet and probably runs just as fast on most tasks. I wouldn't be in a hurry to upgrade.

I upgraded from an old Intel i7.


To really take advantage of those gpu cores you need memory bandwidth. Modern transformer based LLMs are really bandwidth hungry. I am really happy to see this first push. NVIDIA having discrete GPU/memory/etc is an option, but not great for a lot of different reasons. Unified memory architectures like what AMD and Apple have are the way to go for the future. Put 256GB of ram on the main board and be able to access it at speed for LLM use please.
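A common back-of-envelope way to see why decode is bandwidth-hungry (my own rule-of-thumb sketch, not from the comment): at batch size 1, generating each token streams roughly all of the model's weights through memory once, so the ceiling is about bandwidth divided by weight size. Bandwidth and model-size figures below are rough, illustrative assumptions:

```python
# Back-of-envelope upper bound on LLM decode speed when memory-bound.
# At batch 1, each generated token reads ~all weights once, so
# tokens/s ≈ memory bandwidth / model weight size. Illustrative numbers.

def decode_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Rough ceiling on decode throughput for a memory-bound model."""
    return bandwidth_gb_s / model_size_gb

# Dual-channel DDR5 desktop (~96 GB/s) vs a wide unified-memory part
# (~400 GB/s), running a ~35 GB (70B-class, 4-bit quantized) model:
desktop = decode_tokens_per_sec(96.0, 35.0)
unified = decode_tokens_per_sec(400.0, 35.0)

print(f"DDR5 desktop: ~{desktop:.1f} tok/s, unified memory: ~{unified:.1f} tok/s")
```

This is why 256 GB of fast unified RAM matters more than extra GPU cores for local LLM use: the same model gets several times the decode speed purely from bandwidth.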

Just like the previous generation of AI PCs, consumers just need a USB/PCIe NPU.

Mass adoption won't happen until we get those cheap, because there are no mass-market prosumers making massively popular software for them.


No, AI inference is mainly RAM/RAM speed constrained, we need more fast RAM to make local AI thrive.

Lol. Thanks to someone buying up all the RAM wafers before they even became modules, that won't happen.

I think the article is being a bit disingenuous. The real problem is AAA developers and publishers pushing for it to replace a not insignificant amount of creative work.

While still overcharging everyone and scalping every dollar from everyone with microtransactions and game mechanics that need XP boosters.


I don't think the article pointing out the existence of AI use outside of that “real problem” is disingenuity. It's possible to simultaneously condone indie developers using AI to make the most of their limited budgets and condemn AAA studios for using AI to lay off their workers; conflating one with the other just makes it harder to have an honest and productive conversation about whether AI helps or hurts software development as a profession and a craft.

What you are describing is a completely different set of problems.

They are both real and coexist, often at the same time.


> They are both real and coexist, often at the same time.

Are they, though? Because I don't see discourse about indie or solo game developers being ostracized in some public shaming trend.

I see it only in the AA/AAA scene.


> I don't see the discourse of indie or single game developers being ostracized in some public shaming trend

Not specifically “game” developers, but I do see attempts at that ostracization on the OSDev subreddit; at least one participant there has posted progress updates on a vibe-coded hobby OS, and each of those updates ends up deluged with people complaining specifically about the AI use.


> and each of those updates ends up deluged with people complaining specifically about the AI use

I would genuinely like to see this thread, because if the comments are legitimate and backed up by examples, e.g. "this is vulnerable to XSS", then even with the "AI slop" prefix I'm fine with it.

I think it's fair that people don't get too comfortable just trusting vibe-coded agents, when in my own experience the bugs they leave around are often harder to identify in a simple review than a simple architecture misalignment.
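To make the XSS example concrete (my own illustrative snippet, not from the thread): the kind of bug a legitimate review comment would point at is unescaped user input interpolated into HTML, and the fix is a one-line escape.

```python
# Hypothetical snippet a reviewer might flag as "this is XSS vulnerable":
# user-supplied text is interpolated straight into HTML with no escaping.
from html import escape

def greeting_unsafe(name: str) -> str:
    # BUG: if name is "<script>...</script>", it runs in the visitor's browser.
    return f"<p>Hello, {name}!</p>"

def greeting_safe(name: str) -> str:
    # escape() converts <, >, & and quotes into HTML entities.
    return f"<p>Hello, {escape(name)}!</p>"

print(greeting_safe("<script>alert(1)</script>"))
```

Review comments of this shape, pointing at a specific line and a specific fix, are useful regardless of whether the code was vibe-coded or hand-written.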


It would elevate the conversation significantly if people didn't use "vibe coding" and AI use interchangeably.

I don't use Reddit, but you don't have to look for a specific thread. It often feels like there's a mob of people just waiting for fresh meat to wander into their camp. Literally any thread referencing AI on this site is full of people who appear to have nothing but venom and contempt for people who use these tools.

It's not everyone, but loud minorities are still loud.


I'll try to only use the term AI, as vibe coding is more a derivative. There definitely are some people who are just doomers about AI entirely, and I think that's always the case with any new technology. That said, you can't deny there is just an unholy amount of useless applications of AI tooling that really aren't providing anything useful other than generating 'slop'.

Using AI tools for protein folding or medical breakthroughs, for example, will impact the world in a positive way. People will champion that. Using them to automate your creativity hasn't been in demand by anyone except shareholders or people looking to milk quick ad revenue for little to no effort. So of course there's a negative sentiment.


Why would it need to be a container?

My ollama and GPU are in k8s.

Are you asking why people run things in a container?

No, I'm asking why a website where someone fills in a few fields and gets an optimized LLM back would need to run in a container. It's a web form.

The use cases they give here are so bad. "Customer service automatically create a ticket. Shop automatically for you. Book a flight automatically for you"

So this is the company pushing to be an integral part of everyone's lives, forcing it down everyone's throat without consent.

And they're already moderating a light-hearted joke about their low-quality products.

Doesn't really bode well for the future product vision.


It's a monopoly. Unless you are a wealthy hipster with money to burn on Apple products. (They also spy on you.)

A monopoly on what? The most popular OS is from Google.

Where are you getting this info from? Windows' consumer market share is not even being challenged right now.

Android. Far more devices.

Android ~ 4bn

Windows ~ 1.5bn


Since Kagi is always mentioned in these types of posts: I can see there are already a lot of posts about this.

https://kagifeedback.org/d/5445-reconsider-yandex-integratio...


This will be downvoted heavily, so just to remind: Russia

I'm curious if there is a deep need for the entire codebase to be consumed in the first place?

It would be better to have the architecture support a more decoupled/modular design if you're going to rely heavily on LLMs.

That, or let it consume high-quality, maintained documentation?


> Maybe it’s just more visible now but it seems like these companies are really accelerating in their evil lately.

Well I mean they have no incentive to behave any other way, if anything they are rewarded via the shareholders. Number goes up when they perform mass layoffs. That tells you everything you need to know.


Not particularly 'educational' at all, but "My Dad Wrote a Porno". A friend recommended it and I have been wetting myself laughing on the work commute.

Also "Stuff you should know" is a super popular one that always gets a listen.

