Dynamic linking with a safe ABI, where if you change and recompile one library then the outcome has to obey some definition of safety, and ABI stability is about as good as C or Objective-C or Swift.
Until that happens, it'll be hard to adopt Rust in a lot of C/C++ strongholds where C's ABI and dynamic linking are the thing that enables the software to get huge.
> Until that happens, it'll be hard to adopt Rust in a lot of C/C++ strongholds where C's ABI and dynamic linking are the thing that enables the software to get huge.
Wait, Rust can already communicate using the C ABI. In fact, it offers exactly the same capabilities as C++ in this regard (dynamic linking).
As unsafe as C or C++. In fact, safer, because only the ABI surface is unsafe; the Rust code behind it can be as safe or unsafe as you want it to be.
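Concretely, that boundary already works today. Here's a minimal sketch of a Rust function exported over the C ABI (the function name is illustrative, not from any real library): the exported surface is C-shaped and unsafe, but everything behind it is ordinary safe Rust.

```rust
/// Sums `len` integers starting at `data`. Exported with an unmangled
/// name and the C calling convention, so a C caller (or another Rust
/// library) can link against it, statically or dynamically.
#[no_mangle]
pub extern "C" fn sum_i32(data: *const i32, len: usize) -> i64 {
    if data.is_null() {
        return 0;
    }
    // The unsafety is confined to trusting the caller's pointer/length
    // pair; once the slice exists, everything below is safe Rust.
    let slice = unsafe { std::slice::from_raw_parts(data, len) };
    slice.iter().map(|&x| x as i64).sum()
}
```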
I was addressing this portion of your comment: "C's ABI and dynamic linking are the thing that enables the software to get huge". If the C ABI is what enables software to get huge then Rust is already there.
There is a second claim in your comment about a "safe ABI", but that is something that neither C nor C++ offers right now.
Here's the problem. If you told me that you rebuilt the Linux userland with Rust but you used C ABI at all of the boundaries, then I would be pretty convinced that you did not create a meaningful improvement to security because of how many dynamic linking boundaries there are. So many of the libraries involved are small, and big or small they expose ABIs that involve pointers to buffers and manual memory management.
> There is a second claim in your comment about a "safe ABI", but that is something that neither C nor C++ offers right now.
Of course C and C++ are no safer in this regard. (Well, with Fil-C they are safer, but like whatever.)
But that misses the point, which is that:
- It would be a big deal if Rust did have a safe dynamic linking ABI. Someone should do it. That's the main point I'm making. I don't think deflecting by saying "but C is no safer" is super interesting.
- So long as this problem isn't fixed, the upside of using Rust to replace a lot of the load bearing stuff in an OS is much lower than it should be to justify the effort. This point is debatable for sure, but your arguments don't address it.
> - It would be a big deal if Rust did have a safe dynamic linking ABI. Someone should do it. That's the main point I'm making. I don't think deflecting by saying "but C is no safer" is super interesting.
I think we all agree that it would be a huge deal.
> - So long as this problem isn't fixed, the upside of using Rust to replace a lot of the load bearing stuff in an OS is much lower than it should be to justify the effort. This point is debatable for sure, but your arguments don't address it.
As you point out, this is the debatable part, and I'm not sure I get your justification here.
This might end up being the forcing function (quoting myself from another reply in this discussion):
> It can't be that replacing 20 C/C++ shared objects with 20 Rust shared objects results in 20 copies of the Rust standard library and other dependencies that those Rust libraries pull in. But, today, that is what happens. For some situations, this is too much of a memory usage regression to be tolerable.
If memory was cheap, then maybe you could say, "who cares".
Can you even make the standard library dynamically linked in the C way??
In C, a function definition usually corresponds 1-to-1 to a function in object code. In Rust, plenty of things in the stdlib are generic functions that effectively get a separate implementation for each type you use them with.
If there's a library that defines Foo but doesn't use Vec<Foo>, and there are 3 other libraries in your program that do use that type, where should the Vec functions specialized for Foo reside? How do languages like Swift (which is notoriously dynamically-linked) solve this?
You can have an intermediate dynamic object that just exports Vec<Foo> specialized functions, and the three consumers that need it just link to that object. If the common need for Vec<Foo> is foreseeable by the dynamic object that provides Foo, it can export the Vec<Foo> functions itself.
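As a sketch of what such a specialization shim could export (all symbol names here are invented for illustration), each wrapper pins one concrete instantiation of a generic stdlib type behind a stable, unmangled symbol that the three consumers can link against:

```rust
pub struct Foo {
    pub id: u32,
}

// One monomorphized Vec<Foo>, shared via C-shaped entry points.
#[no_mangle]
pub extern "C" fn vec_foo_new() -> *mut Vec<Foo> {
    Box::into_raw(Box::new(Vec::new()))
}

#[no_mangle]
pub extern "C" fn vec_foo_push(v: *mut Vec<Foo>, id: u32) {
    let v = unsafe { &mut *v };
    v.push(Foo { id });
}

#[no_mangle]
pub extern "C" fn vec_foo_len(v: *const Vec<Foo>) -> usize {
    unsafe { (*v).len() }
}

#[no_mangle]
pub extern "C" fn vec_foo_free(v: *mut Vec<Foo>) {
    if !v.is_null() {
        // Reconstitute the Box so the shim's allocator frees it.
        drop(unsafe { Box::from_raw(v) });
    }
}
```

The downside is visible in the sketch: every operation the consumers need has to be enumerated and wrapped by hand, which is exactly the work a resilient ABI would automate.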
Your apt update would still be huge though. When the dependency changes (eg. a security update) you’d be downloading rebuilds of 20 apps. For the update of a key library, you’d be downloading your entire distribution again. Every time.
NixOS "suffers" from this. It's really not that bad if you have solid bandwidth. For me it's more than worth the trade off. With a solid connection a major upgrade is still just a couple minutes.
I think you misunderstand my point. Nix basically forces dynamic linking to be more like static linking. So changing a low level library causes ~everything to redownload.
Oh, well yeah, statically linked binaries have that downside. I guess I don't think that's a big deal, but I could maybe imagine on some devices that are heavily constrained that it could be? IDK. Compression is insanely effective.
You are forgetting the elephant in the room: if every bug fix requires a rebuild of everything downstream, it's not only a question of constrained devices, it's also a question of SSD write cycles - you are effectively wearing out someone's drive faster. And btrfs actually makes this worse: instead of one copy-on-write reference to a shared library, you now have separate copies of the library baked into each app. Reverting an update then costs you even more writes. It's just waste for no apparent reason - wasted memory, wasted disk space.
"Compression is insanely effective" - and what about energy? Compression increases CPU use, and it makes everything slower - slower than plain deduplication. Also, your answer to tech that is worse for the user is that the user can mitigate it in other ways? This strikes me as the same logic as "we don't need to optimize our program/game, users will just buy better hardware" - plain cost-shifting onto the user. That's not a valid solution, just downplaying the argument.
If Rust and static linking were to become much more popular, Linux distros could adopt some rsync/zsync like binary diff protocol for updates instead of pulling entire packages from scratch.
Even then, they would still need to rebuild massive amounts of software on updates. That is nice in theory, but see the number of bugs reported in Debian because upstream projects fail to rebuild as expected. "I don't have the exact micro version of this dependency I'm expecting" is one common reason, but there are many others. It's a pretty regular occurrence, and it would therefore be burdensome to distro maintainers.
Static linking used to be popular, as it was the only way of linking in most computer systems, outside expensive hardware like Xerox workstations, Lisp machines, ETHZ, or what have you.
One of the very first pieces of consumer hardware to support dynamic linking was the Amiga, with its Libraries and DataTypes.
We moved away from having a full blown OS done with static linking, with exception of embedded deployments and firmware, for many reasons.
What you are asking for is a replacement for the library definitions in .h files, one that contains sufficient information to keep Rust safe across the boundary. That is a big, big step, and it would be fantastic not only for Rust but for any other language trying to break out of the C tar pit.
So you're calling for dynamic linking for Rust native code? Because Rust's safety doesn't come from a runtime, it comes from the compiler and the generated code. An object file generated from a bit of Rust source isn't some "safe" object file, it's just generated in a safe set of patterns. That safety can cross the C ABI perfectly fine if both things on either side came from Rust to begin with. Which means Rust dynamic linking.
I don’t think GP is moving the goalposts at all, rather I think a lot of people are willfully misrepresenting GP’s point.
Rust-to-Rust code should be able to be dynamically linked with an ABI that has better safety guarantees than the C ABI. That's the point. You can't even express an Option<T> via the C ABI, let alone the myriad of other things Rust has that are put together to make it a safe language.
It would be very hard to accomplish. Apple was extremely motivated to make Swift have a resilient/stable ABI, because they wanted to author system frameworks in swift and have third parties use them in swift code (including globally updating said frameworks without any apps needing to recompile.) They wanted these frameworks to feel like idiomatic swift code too, not just be a bunch of pointers and manual allocation. There’s a good argument that (1) Rust doesn’t consider this an important enough feature and (2) they don’t have enough resources to accomplish it even if they did. But if you could wave a magic wand and make it “done”, it would be huge for rust adoption.
Since Rust cares very much about zero-overhead abstractions and performance, I would guess if something like this were to be implemented, it would have to be via some optional (crate/module/function?) attributes, and the default would remain the existing monomorphization style of code generation.
Swift's approach still monomorphizes within a binary, and only incurs runtime costs when calling code across a dylib boundary. I think Rust could do something like this as well.
You could maybe say that a pointer can be transmuted to an Option<&T>, because there's an Option-specific optimization where Option<&T> uses null as the None value, but that's not guaranteed for arbitrary payloads. And it doesn't apply to non-references; for instance, Option<bool>'s None value would be indistinguishable from false. You could get lucky if you launder your Option<T> through repr(C) and the compiler versions match and don't mangle the internal representation, but there are no guarantees here, since the ABI isn't stable. (You even get a warning if you put a struct without a stable repr(C) in an extern function signature.)
You're right that there isn't a single standard convention for representing e.g. Option<bool>, but that's just as true of C. You'd just define a repr(C) compatible object that can be converted to or from Option<Foo>, and pass that through the ABI interface, while the conversion step would happen internally and transparently on both sides. That kind of marshaling is ubiquitous when using FFI.
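A minimal sketch of that marshaling pattern (the `CMaybeBool` type and conversion functions are invented for illustration): `Option<&T>` does happen to have a documented null niche, but `Option<bool>`'s layout is an implementation detail, so a `repr(C)` go-between is the portable route across the boundary.

```rust
/// An FFI-safe stand-in for Option<bool>, with explicit conversions.
#[repr(C)]
#[derive(Clone, Copy, PartialEq, Debug)]
pub enum CMaybeBool {
    None = 0,
    False = 1,
    True = 2,
}

pub fn to_abi(o: Option<bool>) -> CMaybeBool {
    match o {
        None => CMaybeBool::None,
        Some(false) => CMaybeBool::False,
        Some(true) => CMaybeBool::True,
    }
}

pub fn from_abi(c: CMaybeBool) -> Option<bool> {
    match c {
        CMaybeBool::None => None,
        CMaybeBool::False => Some(false),
        CMaybeBool::True => Some(true),
    }
}
```

The conversions happen privately on each side of the ABI, so neither side ever relies on the other's internal Option layout.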
Right, that's the whole point of this thread. The only stable ABI rust has is one where you can only use C's features at the boundaries. It would be really nice if that wasn't the case (ie. if you could express "real" rust types at a stable ABI boundary.)
As OP said, "I don't think deflecting by saying "but C is no safer" is super interesting". People seem intent on steering that conversation that way anyway, I guess.
No fundamental reason, that I know of, why Rust or any other safe language can't also have some kind of story here.
> I think you're moving the goalposts significantly here.
No. I'm describing a problem worth solving.
Also, I think a major chasm for Rust to cross is how defensive the community gets. It's important to talk about problems so that the problems can be solved. That's how stuff gets better.
Swift and Fil-C are only pseudo-safe. Once you deal with the actual world and need to pass around data in memory, things are always unsafe, since there is no safe way of sharing memory - at least not in our current operating systems. Swift and Fil-C can at least guard the API to some extent.
A safe ABI would be cool, for sure, but in the market (specifically addressing your prediction) I don't know if it's really that big a priority for adoption. The market is obviously fine with an unsafe ABI, seeing how C/C++ is already dominant. Rust with an unsafe ABI might then not be as big an improvement as we would like, but it's still an improvement, and I feel like you're underestimating the benefits of safe Rust code as an application-level frontline of security, even linked to unsafe C code.
> An ABI can't control whether the parties at either end of the interface are honest.

You are aware that Rust already fails that without dynamic linking? The wrapper around the C getenv functionality was originally considered safe, despite every bit of documentation on getenv calling out its thread-safety issues.
Yes? That's called a bug. The standard library incorrectly labelled something as safe, and then changed it. The root cause was an unsafe FFI call which was incorrectly marked as safe.
It's no different than a bug in an unsafe pure Rust function.
I'm choosing to ignore that libc is typically dynamically linked, but linking in foreign code and marking it safe is a choice to trust that code. Under dynamic linking, anything could get linked in, unlike static linking. At least a static link only includes the code you (theoretically) audited and decided is safe.
A "safe" ABI is just a C ABI plus a "safe" Rust crate (the moral equivalent to a C/C++ header file) that wraps it to provide safety guarantees. All bare-metal "safe" FFI's are ultimately implemented on top of completely "unsafe" assembly, and Rust is not really any different.
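That pattern - an unsafe C-shaped symbol plus a safe wrapper that establishes the invariants - looks something like this sketch, using libc's real `strlen` as the stand-in for any dynamically linked C symbol:

```rust
use std::ffi::CString;
use std::os::raw::c_char;

extern "C" {
    // Resolved at link time from libc.
    fn strlen(s: *const c_char) -> usize;
}

/// The "safe crate" layer: the invariant strlen needs (a valid,
/// NUL-terminated string) is established here, so downstream callers
/// never write `unsafe` themselves.
pub fn c_string_len(s: &str) -> usize {
    let c = CString::new(s).expect("string contains interior NUL");
    unsafe { strlen(c.as_ptr()) }
}
```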
C++ ABI stability is the main reason improvements to the language get rejected.
You cannot change anything that would affect the class layout of something in the STL. For templated functions where the implementation is in the header, ODR means you can't add optimizations later on.
Maybe this was OK in the 90s when companies deleted the source code and laid off the programmers once the software was done, but it's not a feature Rust should ever support or guarantee.
The "stable ABI" is C functions and nothing else for a very good reason.
I think if Rust wants to evolve even more aggressively than C++ evolves, then that is a chasm that needs to be crossed.
In lots of domains, having a language that doesn't change very much, or that only changes very carefully with backcompat being taken super seriously, is more important than the memory safety guarantees Rust offers.
As a C++ developer, I regularly deal with people that think creating a compiled object file and throwing away the source code is acceptable, or decide to hide source code for "security" while distributing object files. This makes my life hell.
Rust preventing this makes my life so much better.
Rust does not prevent you from creating a library that exports a C/C++ interface. It's indistinguishable from a C or C++ library, except that it's written in Rust. cbindgen will even generate proper C header files out of the box, that Rust can then consume via bindgen.
> As a C++ developer, I regularly deal with people that think creating a compiled object file and throwing away the source code is acceptable, or decide to hide source code for "security" while distributing object files. This makes my life hell.
I mean yeah that's bad.
> Rust preventing this makes my life so much better.
I'm talking about a different issue, which is: how do you create software that's in the billions of lines of code in scale. That's the scale of desktop OSes. Probably also the scale of some other things too.
At that scale, you can't just give everyone the source and tell them to do a world compile. Stable ABIs fix that. Also, you can't coordinate between all of the people involved other than via stable ABIs. So stable ABIs save both individual build time and reduce cognitive load.
This is true even and especially if everyone has access to everyone else's source code
> At that scale, you can't just give everyone the source and tell them to do a world compile. Stable ABIs fix that. Also, you can't coordinate between all of the people involved other than via stable ABIs. So stable ABIs save both individual build time and reduce cognitive load.
Rust supports ABI compatibility if everyone is on the same compiler version.
That means you can have a distributed caching architecture for your billion line monorepo where everyone can compile world at all times because they share artifacts. Google pioneered this for C++ and doesn't need to care about ABI as a result.
What Rust does not support is a team deciding they don't want to upgrade their toolchains and still interoperate with those that do. Or random copy and pasting of `.so` files you don't know the provenance of. Everyone must be in sync.
In my opinion, this is a reasonable constraint. It allows Rust to swap out HashMap implementations. In contrast, C++ map types are terrible for performance because they cannot be updated for stability reasons.
My understanding: even if everyone uses the same toolchain, if someone changes the code for a module and recompiles, you're in UB land unless everything that depends on it is recompiled too.
If your key is a hash of the code and its dependencies, for a given toolchain and target, then any change to the code, its dependencies, the toolchain or target will result in a new key unique to that configuration. Though I am not familiar with these distributed caching systems so I could be overlooking something.
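A toy version of that keying scheme (the hash function and field choice are illustrative): the key covers the source, the dependencies' keys, the toolchain and the target, so changing any one of them produces a different key and forces a rebuild rather than a stale, ABI-incompatible cache hit.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Content-addressed cache key for one compilation unit.
pub fn cache_key(source: &str, dep_keys: &[u64], toolchain: &str, target: &str) -> u64 {
    let mut h = DefaultHasher::new();
    source.hash(&mut h);
    dep_keys.hash(&mut h);   // transitively covers dependency changes
    toolchain.hash(&mut h);  // same compiler version required
    target.hash(&mut h);
    h.finish()
}
```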
> Except as you well know, C might not change as fast, but it does change, including the OS ABI.
I don't know that.
Here's what I know: the most successful OSes have stable OS ABIs. And their market share is positively correlated with the stability of their ABIs.
Most widely used: Windows, which has a famously stable OS ABI. (If you wanted to be contrarian you could say that it doesn't because the kernel ABI is not stable, but that misses the point - on Windows you program against userland ABIs provided by DLLs, which are remarkably stable.)
Second place: macOS, which maintains ABI stability with some sunsetting of old CPU targets. But release to release the ABI provides solid stability at the framework level, and used to also provide stability at the kernel ABI level (not sure if that's still true - but see above, the important thing is userland framework ABI stability at the end of the day).
Third place: Linux, which maintains excellent kernel ABI stability. Linux has the most stable kernel ABI right now AFAIK. And in userland, glibc has been investing heavily in ABI stability; it's stable enough now that in practice you could ship a binary that dynlinks to glibc and expect it to work on many different Linuxes today and in the future.
So it would seem that OS ABIs are stable in those OSes that are successful.
Speaking of Windows alone, there are the various calling conventions (pascal, stdcall, cdecl); 16-, 32- and 64-bit variants; x86, ARM, and ARM64EC; DLLs; COM in-proc and out-of-proc; and WinRT within Win32 and UWP.
Leaving aside the platforms it no longer supports.
So there are some changes to account for depending on the deployment scenario.
- the same entity has access to the source of both the library and the main app
- library and main app share the same build tooling
And even if that’s the case, you have the problem of end users accidentally using different versions of the main app and the library and getting unexpected UB.
What's the state of the single-compiler-version ABI? I mean, if the compiler guaranteed that the ABI is consistent across builds with the same compiler version, we could potentially use dynamic linking for a lot of things (speeding up iterative development) without committing to any long-term stable ABI or going through the C ABI for everything.
> In their zeal to convert, they are happily replacing pro-user software with pro-business software.
This is one of the two main reasons I'm not using Rust. Second reason is being addressed by gccrs team, so I have no big gripes there, since they are progressing well.
By this same metric, do you refuse to use C because the vast majority of OSS C codebases are permissively licensed? Surely you see that this makes no sense, yes? Neither Rust-the-language nor Rust-the-ecosystem are any more hostile to GPL than any other language and ecosystem.
> By this same metric, do you refuse to use C because the vast majority of OSS C codebases are permissively licensed?
It's not comparable - the Rewrite-it-in-Rust community is aiming to replace the existing pro-user products, with new pro-business products.
The last significant online C community was the one that gave us the pro-user products in the first place.
> Surely you see that this makes no sense, yes? Neither Rust-the-language nor Rust-the-ecosystem are any more hostile to GPL than any other language and ecosystem.
I don't care whether or not they are hostile, that is not relevant. What is relevant to the complaints you are reading is that their primary goal is the spread of Rust, not the interests of the users.
It is totally reasonable to be against a community who are working very hard to replace pro-user software with pro-business software.
> The last significant online C community was the one that gave us the pro-user products in the first place.
You mean the OSI, headed by famous C hacker Eric S. Raymond, the permissive-license rebellion against the GPL? Pretending that the MIT/BSD licenses aren't a legacy of the C ecosystem is revisionist history.
> It's not comparable - the Rewrite-it-in-Rust community is aiming to replace the existing pro-user products, with new pro-business products.
It's clear that you have no idea what you're talking about. There is no "rewrite-it-in-Rust community", there are just people using Rust and writing what they want. That copyleft licenses have lost mindshare to permissive licenses in the decades since the rise of the OSI is a broader movement in OSS that long predates Rust, and has nothing to do with Rust itself.
> You mean the OSI, headed by famous C hacker Eric S. Raymond, the permissive-license rebellion against the GPL? Pretending that the MIT/BSD licenses aren't a legacy of the C ecosystem is revisionist history.
Sure, C played a great part there too, but you are ignoring the present.
What we are seeing now is a concerted effort to replace pro-user products with pro-business products.
Even if you're right that my story of copyleft starting with gcc is revisionist history, that has no relevance to what is happening now, which is a large effort by a specific community to replace pro-user products with pro-business products.
Well, that's funny, considering all the comments I have written for this submission.

First of all, most of the arguments I'd make are already addressed by lelanthran. Do I need to write the same things over and over? It's bad etiquette to repeat what someone else has already said; this is why we have the voting mechanism here.
So, since you insist, let me reiterate the same thing.
No I don't refuse to use C, because most of the GPL software which is enabling everything we do today is written in C or a C-descendant language. However, as I write everywhere, I refuse to use Rust because of two reasons:
1- LLVM only for now (I don't use any language which doesn't have a compiler in GCC)
2- Rust's apparent "rewrite it in Rust, license it MIT, replace the thing and beat it with a club if it refuses to die" attitude.

For reference, uutils and sister projects use "drop-in replacement" and "completely replace" liberally, signaling their clear intention to forcefully replace GPL code with more permissive, business-friendly bits.
I tend to reluctantly accept Rust in the Kernel since gccrs is in the works and progressing steadily, and Rust guys are somewhat forced to write a proper reference for their language and back it with proper PLT, since it's a hard requirement if you want your programming language to be a long-living, dependable one.
Similarly, you use words like "courage" and "non sequitur" liberally. I'm not sure that's fitting in this instance.
There is absolutely nothing "pro-business" about permissive licenses. People choose permissive licenses for all kinds of reasons. For example, I personally use them because I believe they are more free and thus more in line with my values. You shouldn't project unsubstantiated statements onto people's motives like this.
With permissive licenses you often run into the following situation:
You buy something physical from a company - say a Unitree humanoid robot, a robot actuator, or an Arm SBC. These pieces of hardware come with their own proprietary SDK that they sell for a significant fee, or a proprietary GPU driver without any hope of updates. The SDK heavily uses MIT-licensed code, and there is no possibility of modifying or inspecting the code for debugging.
From the perspective of the user, the system might as well be 100% proprietary, and their freedoms are maximally restricted. You could say that this is fine since it doesn't detract from the original open source project, but remember that these companies would ordinarily have to pay significant development fees to build the same level of functionality, and they have no obligation to help or support your project financially. You as the open source developer then have to beg them to hire you, doing paid work unrelated to the original project so that you can finally work on your project in your spare time, purely because it is possible to charge for hardware but not for the software that the hardware depends on.
What I'm trying to get at here is that this means full vertical integration is the only way. The problem is that most hardware companies are hardware companies first and they don't care about software. They concentrate on making hardware, because each sale brings in money. They don't spend money on software, because it appears to be optional. You can just tell the customer or an open source community to bring their own software. The money that is needed to pay for open source projects flows through the very companies that refuse to spend money on software.
If you want to write open source software, you must be a hardware company so you are customer facing and have access to customer money that can be diverted to the development of the software.
> You shouldn't project unsubstantiated statements onto people's motives like this.
I am not criticising their motives, I am criticising the result!
Also, definitions are hard. It's why we have pro-choice/pro-life and not anti-choice/anti-life - using the positive spin is a good faith characterisation of a position.
In much the same way, I am using pro-user/pro-business; if my intention was to vilify one of those positions I would have used pro-user/anti-user or pro-business/anti-business to label those positions.
No reasonable interpretation of pro-user/pro-business can make the audience think that I am unfairly characterising either of two positions.
I say this to address the use of the word "unsubstantiated" in your assertion about my characterisations.
That would be great, but Rust relies on compile-time monomorphization for efficiency (very much like C++, if you consider templates polymorphic functions/classes).
This means that any Rust ABI would have to cater for link-time specialization. I think this should be doable, but it would require a solution that's better than just to move the code generation into the linker. Instead, one would need to carefully consider the usage of the "shape" of all parameters of a function.
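Rust can already express both code-generation strategies; the missing piece is an ABI that lets the dynamically dispatched form cross a dylib boundary with rich types. A sketch of the contrast:

```rust
use std::fmt::Display;

/// Monomorphized: the compiler emits a separate body per concrete T,
/// which is exactly what a prebuilt dylib cannot anticipate.
pub fn describe_mono<T: Display>(x: T) -> String {
    format!("{x}")
}

/// Dynamically dispatched: one body, called through a vtable, so the
/// symbol could in principle sit behind a stable boundary (roughly the
/// role Swift's witness tables play in its resilient ABI).
pub fn describe_dyn(x: &dyn Display) -> String {
    format!("{x}")
}
```

A Swift-like design would let the compiler pick the monomorphized form within a binary and fall back to the dispatched form across the boundary.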
I wonder if we look at it from a too narrow perspective. We use the C ABI because it's the only game in town. We should be aiming for a safe cross language ABI. I'd love to make Rust, C, PHP, Swift, Java and Python easily talk to each other inside 1 process.
It should extend the C ABI with things like strings, arrays, objects with a way to destruct them, and provide some safety guarantees.
As an example, the windows world has COM, which is at the core pretty reasonable for its design constraints, even if gnarly sometimes.
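As a sketch of the kind of record such an extended ABI might pass (every name here is invented for illustration): a string that carries its length, capacity, and a destructor function pointer, so whichever side allocated the buffer is the side that frees it.

```rust
use std::os::raw::c_char;

#[repr(C)]
pub struct AbiString {
    pub data: *mut c_char,
    pub len: usize,
    pub cap: usize,
    // Whoever allocated the buffer supplies the matching deallocator.
    pub destroy: unsafe extern "C" fn(*mut c_char, usize, usize),
}

unsafe extern "C" fn rust_destroy(data: *mut c_char, len: usize, cap: usize) {
    // Reconstitute the String so Rust's allocator frees it.
    drop(String::from_raw_parts(data as *mut u8, len, cap));
}

impl AbiString {
    pub fn from_string(s: String) -> Self {
        let mut s = std::mem::ManuallyDrop::new(s);
        AbiString {
            data: s.as_mut_ptr() as *mut c_char,
            len: s.len(),
            cap: s.capacity(),
            destroy: rust_destroy,
        }
    }

    /// Consumes the record and calls whichever destructor it carries.
    pub fn free(self) {
        unsafe { (self.destroy)(self.data, self.len, self.cap) }
    }
}
```

This is essentially what COM does with IUnknown's reference counting, generalized to plain data.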
> It should extend the C ABI with things like strings, arrays, objects with a way to destruct them, and provide some safety guarantees.
> As an example, the windows world has COM, which is at the core pretty reasonable for its design constraints, even if gnarly sometimes.
Yeah, and we had CORBA. Gnome was originally not a DE - the acronym stood for GNU Network Object Model Environment, or similar.
I programmed in CORBA in the 90s. Other than being slower than a snail on weed, I liked it just fine. Maybe it's time for a resurgence of something similar, but without requiring that calls work across networks.
You'll find that all of these languages ultimately build FFI on top of C ABI conventions, though Swift's own internally stable ABI uses a lot of alloca() to place dynamically sized objects on the stack, in a way that's somewhat unidiomatic (the Rust folks are trying to back out of their alloca() equivalent). You can even interface to COM from pure C.
Dynamic linking is also great for compile time of debug builds. If a large library or application is split up into smaller shared libraries, ones unaffected by changes don't need to be touched at all. Runtime dynamic linking has a small overhead, but it's several orders of magnitude faster than compile-time linking, so not a problem in debug builds.
For developer turnaround time, it is huge. We explicitly do not statically link Ardour because, as developers, we are in the edit-compile-debug cycle all day every day, and speeding up the link step (which dynamic linking does dramatically, especially with parallel linkers like lld) is a gigantic improvement to our quality of life and productivity.
1) It can't be that replacing 20 C/C++ shared objects with 20 Rust shared objects results in 20 copies of the Rust standard library and other dependencies that those Rust libraries pull in. But, today, that is what happens. For some situations, this is too much of a memory usage regression to be tolerable.
2) If you really have 20 libraries calling into one another using the C ABI, then you end up with manual memory management and manual buffer offset management everywhere even if you rewrite the innards in Rust. So long as Rust doesn't have a safe ABI, the upside of a Rust rewrite might be too low in terms of safety/security gained to be worth doing.
Many Rust core/standard library functions are trivial and inlining them is not really a concern. For those that do involve significant amount of code, C ABI-compatible code could be exported from some .so dynamic object, with only a small safe wrapper being statically linked.
I found the C ABI a bit too difficult in Rust compared to C or Zig, mainly because of destructors. I am guessing C++ would be difficult in a similar way.

Also, unsafe Rust has strict aliasing rules that are always on, which makes writing code difficult unless you do it in certain ways.

Having glue libraries like pyo3 makes this nicer in Rust, but they introduce bloat and other issues. This has been the biggest issue I've had with Rust: it is too hard to write something yourself, so you reach for a dependency, and before you know it you are bloating out of control.
Not really. The foreign ABI requires a foreign API, which adds friction that you don't have with C exporting a C API / ABI. I've never tried, but I would guess that it adds a lot of friction.
COM is interesting as it implements interfaces using the C++ vtable layout, which can be done in C. Dynamic COM (DCOM) is used to provide interoperability with Visual Basic.
You can also access .NET/C# objects/interfaces via COM. It has an interface to allow you to get the type metadata but that isn't necessary. This makes it possible to e.g. get the C#/.NET exception stack trace from a C/C++ application.
>Dynamic COM (DCOM) is used to provide interoperability with Visual Basic.
DCOM is Distributed COM not Dynamic COM[1].
COM does have an interface for dynamic dispatch called IDispatch[2] which is used for scripting languages like VBScript or JScript. It isn't required for Visual Basic though. VB is compiled and supports early binding to interfaces.
Eh, some people can work on moving to Rust, while others work on adding dynamic linking to Rust.
Or maybe we can somehow get used to living with static linking. (I don't think so, but many seem to think so in spite of my advice to the contrary!)
Another possibility is to use IPC as the dynamic linking boundary of sorts, but this will consume lots more memory, and as is stated elsewhere in this thread, memory ain't cheap no more.