Hacker News | jtfrench's comments

Hopefully Mythos didn't go rogue and hold production hostage.

If this can happen to Anthropic, imagine all the companies building on top of Claude Code for live products. Hopefully the industry is learning that competent, problem-solving human engineers are still very much needed when you have increasingly deceptive, non-deterministic genies running your production stack.

It's not that simple. API is still up and there are multiple API providers. https://openrouter.ai/anthropic/claude-opus-4.7

The fact that the API is available does not mean you will actually get the model it claims. Today Opus 4.7 was noticeably dumber than yesterday; it performed worse than my local Qwen.

I don't think there are many other companies serving Claude.

At least Google, Amazon, and Microsoft. What more do you want?

Came here to say this. At Kilo Code we aren’t impacted by this because of the other places that can run Claude

Sadly it's "good enough" for execs.

Maybe it will push companies to run them locally.

On what hardware? Like companies would buy up GPUs?

Presumably you'd buy really beefy laptops. The price delta between buying the most basic MacBook Pro possible (14", M5, 16 GB unified memory, 1 TB SSD) and one with the M5 Max with 40 GPU cores, 128 GB unified memory, 2 TB SSD is $3400. How much Claude usage does that get you/in what time does it pay itself back?
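The payback question above is simple arithmetic. A rough sketch, using the $3400 delta from the comment and a few hypothetical monthly Claude spend figures (the per-month numbers are assumptions, not actual pricing):

```python
# Rough payback calculation: how long before the hardware premium
# pays for itself, vs. a recurring Claude bill.
delta = 3400  # USD: M5 Max / 128 GB config minus the base MacBook Pro

for monthly_spend in (20, 100, 200):  # assumed USD/month spent on Claude
    months = delta / monthly_spend
    print(f"${monthly_spend}/mo -> pays back in {months:.0f} months")
```

At an assumed $200/month, the premium pays back in under a year and a half; at $20/month, it never realistically does before the hardware is obsolete.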

That doesn't get you any Claude usage. Claude models obviously aren't open, and open models equivalent to Opus take about 400 GB of memory to run.

You can run versions with fewer parameters or quantized weights, but depending on how much quality you're sacrificing, you'd then have to compare the price against cheaper Claude models like Sonnet.
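The memory math behind the quantization trade-off is back-of-envelope: weight footprint is roughly parameter count times bytes per parameter. A sketch, assuming a hypothetical 400B-parameter model as a stand-in for the "~400 GB" Opus-class figure above (and ignoring KV cache and activation memory):

```python
# Approximate weight memory for a local LLM at common quantization levels.
params_b = 400  # billions of parameters (assumed, for illustration)

for bits in (16, 8, 4):
    gb = params_b * bits / 8  # GB of weights at this precision
    print(f"{bits}-bit: ~{gb:.0f} GB")
```

Even at 4-bit, a model of that assumed size would still need around 200 GB, which is why the 128 GB laptop ceiling pushes you toward smaller models.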


Haha, good one.

Working on Snow Leopard was one of my most rewarding experiences at Apple. Loved the ethos behind that update. No new fluff, just make everything work better.

Totally agreed! And I loved Snow Leopard. (And I still love Mac OS - just making the point, there have always been things to complain about :-) )

How does this handle “dumb zone” evasion while looping?

Definitely sounded like a shower gel moment.


Haven’t heard of Codeberg. What are the top reasons to switch from GitHub?


Was hoping to see Apple break the 128 GB barrier in a laptop that they previously set, though 128 GB is still pretty sweet for local LLM inference on consumer hardware. My 128 GB M3 Max is still shredding tokens pretty well (with that annoying slow initial prompt processing), so no major complaints there. I guess the question is: given the same amount of RAM, does the M5 really perform an order of magnitude better than an M3 or M4?


The new tensor cores significantly speed up prompt processing. Up to 3x faster per the marketing information.


I feel heard. Auto-correct un-correcting what I corrected three times in a row is maddening.


What if their maximally vindictive traits just make them want to use the same invasive tools and techniques?


Like today?

It’s entirely possible to prosecute the heads of all of these horrific things into the stone age, comb through internal data and throw every agent who’s murdered someone in jail, and not punish everyday people who just cast a vote.


They already are. Playing nice and hoping the other side will come to their senses and return to normalcy doesn’t make sense when they’ve already shown you they will try to destroy you regardless.


Compared to the alternative of staying on our current path of American fascism and WW3?

I’ll take the odds for vindictiveness.


No way. Next level.

