Y'all want to see instant? Check out chatjimmy.ai; it'll blow your mind. I'm not affiliated.
But the things it unlocks in a product I'm building are mind-blowing. Millisecond inference, even on much older models, will change the whole game. Enough to run inference on every. Single. API call. Without notable disruption. This sh*t is wild.
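To make the per-request idea concrete, here's a minimal sketch of gating every API call on a fast inference pass. This is my own illustration, not chatjimmy.ai's API: `fast_classify` is a hypothetical stand-in for a millisecond-latency model endpoint, and the verdict labels are assumptions.

```python
import time

def fast_classify(payload: str) -> str:
    # Hypothetical stand-in for a millisecond-latency inference endpoint.
    # A real deployment would call the model service over the network.
    return "abuse" if "drop table" in payload.lower() else "ok"

def handle_request(payload: str) -> dict:
    """Run inference on every single API call and gate on the verdict."""
    start = time.perf_counter()
    verdict = fast_classify(payload)
    elapsed_ms = (time.perf_counter() - start) * 1000
    if verdict != "ok":
        return {"status": 403, "reason": verdict, "ms": elapsed_ms}
    return {"status": 200, "ms": elapsed_ms}
```

The point is that when a single inference pass costs on the order of a millisecond, it can sit inline in the request path rather than in an async batch job.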
> Engineers: All a meeting does is distract from work.
> Every leader ever: if we could do the right work, we could have fewer meetings.
Guess who defined "work" in the first place? I wonder if it's some kind of manager schizophrenia where they define shitty requirements and outcomes, act surprised when they get subpar results, and then promptly try to mitigate those results with more meetings.
The University of Toronto research team, led by Chris S. Lin and Prof. Gururaj Saileshwar, recently disclosed GPUBreach (https://gpubreach.ca), a new class of attack targeting NVIDIA GPU drivers. The work highlights how fault-injection techniques such as Rowhammer can be combined with GPU memory-management behavior to achieve privilege escalation, even in environments with protections like the IOMMU enabled.
This article describes how we at Stealthium reverse engineered what was going on, and explains this new kind of attack.
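For readers unfamiliar with the fault model, here's a toy simulation of the Rowhammer effect the disclosure builds on. This is not the GPUBreach exploit or real DRAM behavior: the threshold, row layout, and single-bit flip are all made-up simplifications, just to show how repeatedly activating aggressor rows disturbs physically adjacent victim rows until a bit flips.

```python
# Toy Rowhammer simulation (illustrative only; all parameters are invented).
FLIP_THRESHOLD = 100_000  # disturbances before a victim bit flips

class DramSim:
    def __init__(self, rows=8, row_bits=0xFF):
        self.rows = [row_bits] * rows        # each row holds one byte
        self.disturbance = [0] * rows        # accumulated disturbance per row

    def activate(self, row):
        # Activating a row electrically disturbs its physical neighbors.
        for n in (row - 1, row + 1):
            if 0 <= n < len(self.rows):
                self.disturbance[n] += 1
                if self.disturbance[n] >= FLIP_THRESHOLD:
                    self.rows[n] ^= 0x01     # one bit flips in the victim row
                    self.disturbance[n] = 0

sim = DramSim()
for _ in range(FLIP_THRESHOLD):
    sim.activate(3)  # hammer a single aggressor row
# Rows 2 and 4 (the neighbors) now differ from their original contents,
# while the aggressor row itself is untouched.
```

The attacker never writes to the victim rows; corruption comes purely from access patterns, which is why software-level permission checks don't stop it.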
> Imagine a bunch of people are tortured, removed from their homes, are killed… and you get paid?
This is what every military supplier does. What every soldier does. What every person holding stock in a military supplier's company does, every person shorting oil shares, etc.
Gambling on war and death is disgusting. Sadly, people profiting from those things is nothing new. That does not make it ok though.
You're right that I say both "above and not in place of" and "MCP needs to die"... I should clarify that (but can't edit anymore); it's unclear. Someday I see MCP being replaced by something else, but my intention today isn't to completely replace MCP; it's to solve the problem above it. I think that will be sufficient for now.
I don't know if I like Anthropic more, but I certainly like their competitors much less now.
The new thing I now know about the leading AI companies that aren't Anthropic (i.e., OpenAI, Google, xAI, etc.) is that they knowingly support using their tools for domestic mass surveillance and in fully autonomous weapon systems.
Exactly - the implication is that every other company is absolutely open to surveilling you and killing you. They’re complicit. They participate in whatever the regime calls for.
The other companies have signed the waiver; however, they aren't being used in classified systems currently, so that type of use is already extremely limited for them. Once they enter into contracts to be used in those systems without these protections, I will cancel my subs to them and switch to Anthropic. xAI entered into that contract last week. Altman is now publicly siding with Anthropic, so he'd better stand by that position with OpenAI, as they are currently negotiating for use in those systems.
I've been working with a GPU security company for the last few months... I can tell you that neo clouds (generally) do not see security as a high priority—or often, even their responsibility. Many do not have the ability to even know if your GPUs have been compromised, and they expect you'll take responsibility.
Meanwhile, companies assume the clouds are looking after it... Anyhow, it is a real problem.
> I can tell you that neo clouds (generally) do not see security as a high priority—or often, even their responsibility.
AWS explicitly spells this out on their Shared Responsibility Model page [0].
It is not your cloud provider's responsibility to protect you if you run outdated and vulnerable software. It's not their responsibility to prevent crypto-miners from running on your instances. It's not even their responsibility to run a firewall, though the major players at least offer one in some form (e.g., AWS Security Groups and ACLs).
All of that is on the customer. The provider should guarantee the security of the cloud. The customer is responsible for security in the cloud.
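As a concrete illustration of "security in the cloud" being the customer's job, here's a small sketch of the kind of check the provider will not run for you: auditing your own ingress rules for sensitive ports exposed to the whole internet. The rule shape and port list are simplified assumptions, not AWS's actual Security Group API.

```python
# Hypothetical, simplified ingress rules in the shape of a security group.
RULES = [
    {"port": 443,  "cidr": "0.0.0.0/0"},   # public HTTPS: expected
    {"port": 22,   "cidr": "0.0.0.0/0"},   # SSH open to the world: risky
    {"port": 5432, "cidr": "10.0.0.0/8"},  # Postgres, internal only: fine
]

SENSITIVE_PORTS = {22, 3389, 5432, 6379}   # SSH, RDP, Postgres, Redis

def risky_rules(rules):
    """Flag sensitive ports exposed to the entire internet.

    The provider secures the cloud itself; deciding whether this rule set
    is sane is entirely on the customer."""
    return [r for r in rules
            if r["cidr"] == "0.0.0.0/0" and r["port"] in SENSITIVE_PORTS]
```

Nothing in the cloud platform stops you from shipping the second rule; it satisfies the provider's side of the model perfectly while leaving your instance wide open.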
Give me automatic plaintext syncing (hell, sync to GitHub) and no other network interface, and it's perfect. Otherwise I lose three weeks of work the way my mom lost her master's thesis while writing it. I don't want to go back to that.