SpaceX is profitable. Merging with xAI hides the absolute dumpster fire P&L xAI is bringing to the table. He can continue to fund his ego projects to the detriment of both orgs.
Depends on how democratic the Society is. The less so, the more it's a powerful minority at the top. How much say did the average US citizen have in starting this war with Iran? It's not something the current administration ran on to get elected. It's not very popular in the polling.
There is an addendum at the bottom where they admit the page corruption is still problematic even with rootless podman.
Although using this to justify their migration to micro-VMs is very strange to me. Sure for this CVE it would have been better, but surely for a future attack it could hit a component shared across VMs but not containers? Are people really choosing technology based on CVE-of-the-week?
Containers were never a security boundary. VMs have better isolation, which is why people choose them for security. Containers are convenience and usually have better performance.
I see the ‘not a security boundary’ thing repeated constantly, and while it makes sense (eg. containers share the underlying kernel, or at least some access to it), VMs are not magically different if you think about it a little more: they are better isolated, but VMs on the same host still share that host. A CVE next week that allows corruption of host state affecting, say, every VM under a particular hypervisor would be no less damaging than this CVE is to containers.
I disagree. VMs are better isolated to precisely the extent that (a) the attack surface is lower and (b) the implementation is simpler and thus less buggy.
Hardware virtualization has a strong effect on (b), but it’s not at all a foregone conclusion that the effect is strictly in the direction of being more straightforward and thus more secure. And hardware features like fancy device passthrough encourage applications with a very, very large attack surface that has historically been full of holes.
You are obviously right that these are similar in principle: a VM isolation exploit would lead to the same exposure as container-related isolation exploits do.
VMs are considered vastly better because the surface area where exploits can happen is smaller and/or better isolated within the kernel.
If you are arguing the latter is not true (and we are all collectively hand-waving away a big chunk of the surface area, so that may be the case), it would help to be explicit about why you believe an exploit in that area is similarly likely.
I would say it's the fact that "not a security boundary" appears to be a pass/fail statement, whereas the reality is more like a security continuum, along which VMs are further than containers.
I believe that is tautologically true, and thus not a very useful framing.
Security is obviously a continuum (eg. you can even have a bug in your IPMI FW, and a network packet could break in without any interaction with the OS; or there could be a HW bug too), but there is a discrete "jump" between containers and VMs to the extent that it is useful to call one a security boundary and the other not. Just like a firewall is a security boundary even if it can have security bugs.
Whether this jump in exploitable surface area warrants the distinction is the point: many believe it does.
But you also cannot just hand-wave the difference away with "it's a continuum". I did not use absolutes; I said "VMs are _better_ for security", which already implies a continuum.
Containers are mostly used as a deployment/packaging model, while VMs are typically used where stronger security is needed. This has been the established industry standard for a while; look at the major cloud providers, for example.
AWS:
> Unless explicitly stated, AWS does not consider a container or primitives such as an ECS task or a Kubernetes pod to be a security boundary. A notable exception to this is ECS tasks running AWS Fargate, where the isolation boundary is a task. To account for this, we recommend that you use Fargate with ECS if your applications have strict isolation requirements.
> When you’re using the Fargate launch type, each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task.
They also further recommend using separate EC2 instances (which you can also run on dedicated hardware, etc.) for even higher security requirements. But the fact that you can increase isolation beyond VMs does not make containers the same as VMs.
> There’s one myth worth clearing up: containers do not provide an impermeable security boundary, nor do they aim to. They provide some restrictions on access to shared resources on a host, but they don’t necessarily prevent a malicious attacker from circumventing these restrictions. Although both containers and VMs encapsulate an application, the container is a boundary for the application, but the VM is a boundary for the application and its resources, including resource allocation.
> If you're running an untrusted workload on Kubernetes Engine and need a strong security boundary, you should fall back on the isolation provided by the Google Cloud Platform project. For workloads sharing the same level of trust, you may get by with multi-tenancy, where a container is run on the same node as other containers or another node in the same cluster.
> Applications that run in traditional Linux containers access system resources in the same way that regular (non-containerized) applications do: by making system calls directly to the host kernel.
> One approach to improve container isolation is to run each container in its own virtual machine (VM). This gives each container its own "machine," including kernel and virtualized devices, completely separate from the host. Even if there is a vulnerability in the guest, the hypervisor still isolates the host, as well as other applications/containers running on the host.
> gVisor is more lightweight than a VM while maintaining a similar level of isolation. The core of gVisor is a kernel that runs as a normal, unprivileged process that supports most Linux system calls. This kernel is written in Go, which was chosen for its memory- and type-safety. Just like within a VM, an application running in a gVisor sandbox gets its own kernel and set of virtualized devices, distinct from the host and other sandboxes.
These guys are experts when it comes to securing workloads on shared infra and while there are different levels of isolation using various techniques, the current industry practice is to not consider regular Linux containers a security boundary.
> A CVE next week that allows corruption of host state that affects eg every VM under a particular hypervisor will be no less damaging than this CVE is to containers
Yeah, this almost never happens, though, whereas Linux privesc happens ten times a day.
They may not provide the same isolation as VMs, but they clearly do limit some attacks. VMs do not provide the same isolation as physically separate hardware either.
I would have thought they provide better isolation than using multiple users which is the traditional security boundary.
It might depend on what you mean by a container. Are sandboxes such as Bubblewrap and Firejail containers?
Containers are a convenience boundary and they increase complexity of your risk assessments.
It is easy for security scanners to scan a Linux system, but will they inspect your containers, and snaps, and flatpaks, and VMs? It is easy for DevOps to ssh into your Linux server, but can they also get logged in to each container, and do useful things? Your patches and all dependencies are up-to-date on your server, but those containers are still dragging around legacy dependencies, by design. Is your backup system aware of containers and capable of creating backup images or files, that are suitable for restoring back to service?
> Security scanners already support most container and VM image formats in widespread use.
E.g.,
> Container Security stores and scans container images as the images are built, before production. It provides vulnerability and malware detection, along with continuous monitoring of container images. By integrating with the continuous integration and continuous deployment (CI/CD) systems that build container images, Container Security ensures every container reaching production is secure and compliant with enterprise policy.
You need a tool like Anchore or PrismaCloud to scan the container images, then monitor them at runtime with PrismaCloud. Trellix can “scan”; however, most people turn it off or exclude container directories on the host because it can interfere with the running container.
These sorts of vulns are extremely common on Linux. This one is making the rounds for various reasons but it's a good justification for a migration away from containers if your threat model is concerned about it.
MicroVMs have much lower attack surface and you can even toss a container into one if you'd like.
Or use gvisor, which mitigates this vulnerability.
I found one in Istanbul [0] (which now 404s) that somewhat fits the label and looks like it could have been a set on The Wire, but most of the "drug den" ones are just cramped, taken by someone who doesn't know how to take pictures and doesn't care to learn (blurry, bad lighting, noisy, poor staging), or both.
Most of the bad TV placement ones are also boring because they're just over a fireplace. Technically correct, but not noteworthy. However, I did find one that was truly spectacular [1] (still live for now) and left me with more questions than answers.
I feel like floor mattresses, trash, and peeling paint were also at play. They're all the sort of unsafe rooms people wouldn't want to go to unless they felt like they had to (i.e. to do drugs).
Feels like maybe something was lost in translation with their explanation - they say they were fed up with data structures etc., but they returned to Rust? I’m assuming there’s something a bit more nuanced about what they got tired of with Zig.
Rust is a world away from Zig as far as being low-level goes. Rust does not have manual memory management; it revolves around RAII, which hides a great deal of complexity from you. Moreover, it is not unusual for a Rust project to have 300+ dependencies that deal with data structures, synchronization, threading, etc. Zig has a rich standard library but is otherwise very bare and expects you to implement the things you actually want.
This depends on what you mean by low level. Commonly it means, how much you need to take care about minute, low-level issues. In that way C, Rust, and Zig are about the same.
Dependencies have nothing to do with low-level vs. high-level, just with package management, how well the language composes, and how rich the standard library is. Are assumptions in package A able to affect package B? In C that's almost impossible to avoid, because different people have different ideas about how long their objects live.
Having a rich standard library isn't just a pure positive. More code means more maintenance.
I agree with you that package management has nothing to do with how low-level a language is.
That being said Rust is definitely a much higher level language than either C or Zig. The availability of `Arc` and `Box`, the existence and reliance on `drop`, and all of `async` are things that just wouldn't exist in Zig and allow Rust programmers to think at higher levels of abstraction when it comes to memory management.
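To make that concrete, here's a minimal sketch (the `Buffer` type is made up for illustration) of the kind of cleanup Rust runs implicitly via `Drop`, where Zig would expect an explicit `deinit()` call:

```rust
// A type whose cleanup runs automatically via Drop (RAII).
// In Zig, freeing this would be an explicit call the caller must remember.
struct Buffer {
    data: Vec<u8>,
}

impl Drop for Buffer {
    fn drop(&mut self) {
        // Runs automatically when the value goes out of scope.
        println!("freeing {} bytes", self.data.len());
    }
}

fn main() {
    let b = Box::new(Buffer { data: vec![0u8; 1024] });
    println!("buffer holds {} bytes", b.data.len());
    // No explicit free: Box's Drop runs at the end of scope,
    // recursively dropping Buffer and its Vec.
}
```

The point isn't that this is magic, just that the control flow for deallocation never appears in the source, which is exactly the kind of hidden complexity Zig refuses to have.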
> Having a rich standard library isn't just a pure positive. More code means more maintenance.
I would argue it's much worse to rely on packages that are not in the standard library, since it's harder to gain trust in the maintenance and quality of the code you rely on. I do agree that more code is almost always just more of a burden, though.
> That being said Rust is definitely a much higher level language than either C or Zig. The availability of `Arc` and `Box`, the existence and reliance on `drop`
I mean, C++ has RAII and stuff like unique_ptr; does that make it higher level than Zig?
And what if you don't use Arc or Box? Is your program now lower level than baseline Rust?
As I said, depends a lot about what you mean by low level.
It depends on the facilities the language offers to you by default right?
C++ offers much higher level primitives out of the box compared to Zig, so I'd say it's a higher level language. Of course you can ignore all the features of C++ and just write C, but that's not why people pick the language.
IMO "level" roughly corresponds to the amount of runtime control flow hidden by abstractions. Zig is famous for having almost no hidden runtime control flow, this appears pretty "low level" to many. OTOH, Zig can have highly non-trivial hidden compile time control flow thanks to comptime reflection, but hardly anyone identifies Zig as a "high level" metaprogramming language.
I'd say so. Zig aims to be a bit smarter than C while staying at roughly the same level. C++ sought, and still seeks, to support C while offering higher level things on top.
And in practice the maintenance just doesn't get done. That's why Python's "rich standard library" with batteries included not only periodically has to throw out "dead batteries" because parts of its stdlib are now obsolete, but also has an ecosystem where good Python programmers don't use parts of the stdlib "everybody knows" just aren't good enough.
You see that in C++ too. The provided hash tables aren't good enough so "everybody knows" to use replacements, the provided regular expression features aren't good enough, there's the 1970s linear algebra library that somebody decided must be part of your stdlib, here's somebody's "my first optimized string buffer" type named string...
For now Zig is young enough that all the bitrot can be excused as "Don't worry, we'll tidy that up before 1.0" but don't rely on that becoming a reality.
I think Rust is "higher level" than C or Zig in the sense that there are more abstractions than in C or Zig. It's not JavaScript, but it is possible to program in Rust without worrying too much about low level concerns.
The languages trade complexity in different areas. Rust tries to prevent a class of problems that appear in almost all languages (i.e two threads mutating the same piece of data at the same time) via a strict type system and borrow checker. Zig won't do any of that but will force you to think about the allocator that you're using, when you need to free memory, the exact composition of your data structures, etc. Depending on the kind of programmer you are you may find one of these more difficult to work with than the other.
There are some cases in Rust where the borrow checker rejects valid programs, in those cases it may be because of a certain data structure in which case you probably have many crates available to solve the issue, or you can solve it yourself with boxing, cloning, or whatever. The vast majority of the time (imo) the borrow checker is just checking invariants you have to otherwise hold and maintain in your head, which is harder and more error prone.
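A contrived sketch of the kind of conservative rejection I mean (the struct and method here are made up): the checker can't see through a method boundary, so it treats a field accessor as borrowing the whole struct, and you work around it by borrowing the fields directly (or cloning):

```rust
struct Metrics {
    counts: Vec<u64>,
    labels: Vec<String>,
}

impl Metrics {
    fn counts_mut(&mut self) -> &mut Vec<u64> {
        &mut self.counts
    }
}

fn main() {
    let mut m = Metrics {
        counts: vec![1, 2],
        labels: vec!["a".into(), "b".into()],
    };

    // Rejected, even though the method only touches `counts`:
    //
    //   let c = m.counts_mut();  // mutable borrow of all of `m`
    //   let l = &m.labels;       // error: `m` already mutably borrowed
    //   c.push(l.len() as u64);
    //
    // The checker conservatively treats `counts_mut` as borrowing the
    // whole struct. Borrowing the fields directly is disjoint and fine:
    let (c, l) = (&mut m.counts, &m.labels);
    c.push(l.len() as u64);
    assert_eq!(m.counts, vec![1, 2, 2]);
}
```

The rejected version is perfectly sound at runtime; the workaround just restates the disjointness in a form the checker can see.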
The actual hard part of Rust is dealing with async, especially when building libraries. But that's the cost of a zero-cost async abstraction, I suppose.
Then it's actually the immature zig ecosystem that rubbed the author the wrong way, not zig the language itself. Not that the ecosystem isn't important, but IMO a language only truly fails you when it doesn't offer the composability and performance characteristics necessary for your solution.
Not really understanding what this would be, though; Zig has all the basic stuff you would expect in its stdlib (hash maps, queues, lists, etc.), just like Rust.
While you can obviously write low level code in Rust and manage allocations and memory, use pointers, etc., you can also write much higher level code leveraging abstractions both in Rust itself and in its rich ecosystem. If you're coming from higher level languages it's much friendlier than C/C++ or Zig. I think I would struggle to write C or Zig effectively, but I have no issues with Rust and I really enjoy the language.
> But don't hide complexity under the rug by using indexes instead of pointers, it's mostly the same thing.
I think the simple fact that pointers are not guaranteed aligned/valid, even if they are in range of a particular slice/collection etc., actually makes it very different.
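A toy sketch of the index-as-pointer pattern: a stale or wrong index is still a bounds-checked, properly aligned access into the backing storage (worst case: wrong node or a panic), whereas a stale pointer is undefined behavior:

```rust
// Nodes reference each other by index into one Vec, not by pointer.
struct Node {
    value: i32,
    next: Option<usize>, // index into `nodes`, not a raw pointer
}

fn main() {
    let nodes = vec![
        Node { value: 10, next: Some(1) },
        Node { value: 20, next: Some(2) },
        Node { value: 30, next: None },
    ];

    // Walk the "linked list" by chasing indices.
    let mut sum = 0;
    let mut cur = Some(0);
    while let Some(i) = cur {
        sum += nodes[i].value; // every access is bounds-checked
        cur = nodes[i].next;
    }
    assert_eq!(sum, 60);
}
```

So it's not just hiding the same complexity: the failure modes are categorically milder, which is a real difference, even if dangling indices can still produce logic bugs.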
It also blocks IPs from states with age verification laws. I mean, okay, I respect the principle, but since there's very little I can do about the laws, and since there are exactly zero politicians who are going to fret because they can't visit aphyr dot com, it's a bit pretentious. So I guess I'll just k-line all aphyr dot com links in my filter and move on with my life. Everybody blocking everybody is how we win.
From others: "Kyle has spent an insane amount of effort to get answers from OFCOM, got none, and as such blocks the UK for self-preservation. The UK wants to fine non-citizens for violating online purity rules, so this is the result."
If he can't get an answer, then probably better to not get charged with something.