Hacker News | jjmarr's comments

if this is a C++17 library why couldn't you use `constexpr` evaluation and not murder your compilation time?

Template metaprogramming isn't really suited for this task; the prime sieve here serves only as a proof of concept, meant to show the capabilities of this style. But there are cases where `constexpr` is not applicable, especially ones involving type manipulation.
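For the prime-sieve case specifically, plain C++17 `constexpr` does suffice. A minimal sketch (illustrative only, not code from the library under discussion):

```cpp
#include <array>
#include <cstddef>

// C++17 constexpr sieve of Eratosthenes up to N, fully evaluated at
// compile time -- no template recursion, so compile times stay sane.
template <std::size_t N>
constexpr std::array<bool, N + 1> sieve() {
    std::array<bool, N + 1> is_prime{};  // value-initialized to false
    for (std::size_t i = 2; i <= N; ++i) is_prime[i] = true;
    for (std::size_t i = 2; i * i <= N; ++i)
        if (is_prime[i])
            for (std::size_t j = i * i; j <= N; j += i) is_prime[j] = false;
    return is_prime;
}

static_assert(sieve<100>()[97], "97 is prime");
static_assert(!sieve<100>()[91], "91 = 7 * 13");
```

Because the whole table is a `constexpr` value, results can be consumed by `static_assert` or used as non-type template arguments downstream.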

On the other hand, C++ template metaprogramming, as an esolang, is fun to tame and experiment with.


Is there a clearer example where constexpr wouldn't work?
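One classic case: computations whose *result is a type*. A `constexpr` function can return values but never types, so filtering a type list still requires templates. A small C++17 sketch (illustrative names, not from the library in question):

```cpp
#include <tuple>
#include <type_traits>

// Type-level filter: keep only the integral member types of a std::tuple.
// The "output" is a new type, which no constexpr function can produce.
template <typename Tuple> struct filter_integrals;

template <typename... Ts>
struct filter_integrals<std::tuple<Ts...>> {
    // For each T: contribute std::tuple<T> if integral, else an empty tuple,
    // then concatenate everything back together (in an unevaluated context).
    using type = decltype(std::tuple_cat(
        std::conditional_t<std::is_integral_v<Ts>,
                           std::tuple<Ts>, std::tuple<>>{}...));
};

static_assert(std::is_same_v<
    filter_integrals<std::tuple<int, float, long, double>>::type,
    std::tuple<int, long>>);
```

Anything in this family (filtering, mapping, or sorting parameter packs into new types) is where template metaprogramming still earns its keep.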

> On the other hand, C++ template metaprogramming, as an esolang, is fun to tame and experiment with.

Is it an esolang at this point? I feel old.


Always has been (I'm over 40). It used to be a nightmare of a programming language; now, with all the improvements over time, it's merely quite bad. Its power-to-weight ratio is below any reasonable standard, but sometimes you need the power. Some lunatics like to play with it.

I've added examples of a toy EDSL and compile-time reflection in C++17 to the GitHub README, which wouldn't work with constexpr alone.

I imagine because some of the cool constexpr improvements are only available in C++20 and C++23.

You can take a hybrid approach and use the rack for base capacity, cloud for scaling.

Realistically, I would die.

A condo costs $2500/month so I will either be homeless and freeze to death or be euthanized.

Maybe I'm a contrarian but I don't think there's hope for anyone that doesn't control resources.


There is no way a condo would continue to cost $2500/mo in a world where there isn't a concentration of well-paid office jobs in that location.

Don’t worry, pitchforks and torches are still cheap.

Great! We'll be able to scoop up lots of hay and even toast it on the fire a bit. It's a good starvation-proof fallback.

You would die rather than move somewhere cheaper? What an odd take. I live in the midwest and pay $700/mo for a perfectly fine apartment in a clean and safe suburb.

I live in Canada so housing is uniformly expensive unless you live super rural.

Best choice would be moving up north and slaving in a mineral mine along with everyone else that lost their jobs. Like the 1920s.

I don't see myself being qualified for such a role since I am too short and don't have the physical leverage.


Housing is expensive in Canada, but it's absolutely not uniform. $2500/mo starting is crazy, which city are you sourcing these claims for? I live in a major city (but not Vancouver or Toronto, obviously) and if you're just trying to survive, you can live with roommates for $700-900, possibly less depending on your luck. Apartments, studios and other types of housing for one are about $1500 and up. Then you can go to Quebec and enjoy slighter cheaper housing still, even in the big cities. There's some middle ground between downtown Toronto and some mining town in northern Manitoba.

In this theoretical scenario where AI displaces everyone, the only thing with value will be housing and physical necessities, so I think housing prices will go up.

AI only uses big words to engage in elegant variation, not to compress information.

If someone calls an article like this a "jeremiad" I know they're a human.


Oh, well chosen. I keep forgetting that word, and lamenting that "diatribe" (or, er, "lament") doesn't quite fit in some situation.

Interesting. I'll have to keep an eye out for this!

Nix and Guix.

Good luck convincing people to switch!


Trying to convince people usually makes any resistance worse.

Using it, solving problems with it, and building a real community around it tend to make a much greater impact in the long run.


Yeah, but if the problem you are solving is rare for most practitioners, effectively theoretical until it actually happens, then people won't switch until they get bit by that particular problem.

But they’re roughly the same paradigm as docker, right? My understanding of the Nix approach is that it’s still reproducing most of a user land/filesystem in a captive/separate/sandbox environment. Like, docker is using namespaces for more stuff, Nix has a heavier emphasis on reproducibility/determinism, but … they’re both still throwing in the towel on deploying directly on the underlying OS’s userland (unless you go all the way to nixOS) and shipping what amounts to a filesystem in a box, no?

I daily drive NixOS. I don't have a global "userland". Packages are shipped from upstream and pull in the dependencies they need to function.

That means unlike Gentoo, I've never dealt with a "slot conflict" where two packages want conflicting dependencies. And unlike Ubuntu, I have new versions of everything.

Pick 2: share dependencies, be on the bleeding edge, or waste your time resolving conflicts.


Yeah, nix is great for this. Also I can update infrequently and still package anything I want bleeding edge without any big issues, other than maybe some building from source.

> But they’re roughly the same paradigm as docker, right?

Absolutely not. Nix and Guix are package managers that (very simplified) model the build process of software as pure functions mapping dependencies and source code as inputs to a resulting build as their output. Docker is something entirely different.

> they’re both still throwing in the towel on deploying directly on the underlying OS’s userland

The existence of an underlying OS userland _is_ the disaster. You can't build a robust package management system on a shaky foundation, if nix or guix were to use anything from the host OS their packaging model would fundamentally break.

> unless you go all the way to nixOS

NixOS does not have a "traditional/standard/global" OS userland on which anything could be deployed (excluding /bin/sh for simplicity). A package installed with nix on NixOS is identical to the same package being installed on a non-NixOS system (modulo system architecture).

> shipping what amounts to a filesystem in a box

No. Docker ships a "filesystem in a box", i.e. an opaque blob, an image. Nix and Guix ship the package definitions from which they derive what they need to have populated in their respective stores, and either build those required packages or download pre-built ones from somewhere else, depending on configuration and availability.

With docker two independent images share nothing, except maybe some base layer, if they happen to use the same one. With nix or Guix, packages automatically share their dependencies iff it is the same dependency. The thing is: if one package depends on lib foo compiled with -O2 and the other one depends on lib foo compiled with -O3, then those are two different dependencies. This nuance is something that only the nix model started to capture at all.
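The -O2/-O3 nuance can be modeled in a few lines. A toy sketch (not how nix actually hashes; real store paths are derived from the full derivation closure): the store path is a pure function of every build input, so two builds of "the same" library with different flags are simply different paths.

```cpp
#include <functional>
#include <string>

// Toy model of Nix-style content addressing: a "store path" is derived
// from the full set of build inputs (name, version, compiler flags, ...).
// Two packages share storage iff every input matches; flipping -O2 to -O3
// changes the hash and therefore yields a distinct store entry.
std::string store_path(const std::string& name, const std::string& version,
                       const std::string& flags) {
    std::size_t h = std::hash<std::string>{}(name + "|" + version + "|" + flags);
    return "/nix/store/" + std::to_string(h) + "-" + name + "-" + version;
}
```

With this model, `store_path("libfoo", "1.0", "-O2")` and `store_path("libfoo", "1.0", "-O3")` are different dependencies by construction, which is exactly the distinction the parent comment is drawing.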


> Docker ships a "filesystem in a box", i.e. an opaque blob, an image. Nix and Guix ship the package definitions from which they derive what they need to have populated in their respective stores, and either build those required packages or download pre-built ones from somewhere else, depending on configuration and availability.

The rest of your endorsement of NixOS is well taken, but this is a silly distinction to draw. Dockerfiles and nix package definitions are extremely similar. The fact that docker images are distributed with a heavier emphasis on opaque binary build step caching, and nix expressions have a heavier emphasis on code-level determinism/purity is accidental. The output of both is some form of a copy of a Linux user space “in a box” (via squashfs and namespaces for Docker, and via path hacks and symlinks for Nix). Zoom out even a little and they look extremely alike.


> This nuance is something that only the nix model started to capture at all.

Unpopular opinion, loosely held: the whole attempt to share any dependencies at all is the source of evil.

If you imagine the absolute worst-case scenario, where every program ships all of its dependencies and nothing is shared, then the end result would be… a few gigabytes of duplicated data? Which could plausibly be deduped at the filesystem level rather than the build or deployment layer?

Feels like a big waste of time. Maybe it mattered in the 70s. But that was a long, long time ago.


I think the storage optimization aspect is secondary; it is more about keeping control over your distribution. You need processes to replace all occurrences of xz with an uncompromised version when necessary. When all packages in the distribution link against one and the same copy, that's easy.

Nix and guix sort of move this into the source layer. Within their respective distributions you would update the package definition of xz, and all packages depending on it would be rebuilt to use the new version.

Using shared dependencies is a mostly irrelevant detail that falls out of this in the end. Nix can dedupe at the filesystem layer too, e.g. to reduce duplication between different versions of the same packages.

You can of course ship all dependencies for all packages separately, but you have to have a solution for security updates.


Node.js basically tried this — every package gets its own copy of every dependency in node_modules. Worked great until you had 400MB of duplicated lodash copies and the memes started.

pnpm fixed it exactly the way you describe though: content-addressable store with hardlinks. Every package version exists once on disk, projects just link to it. So the "dedup at filesystem level" approach does work, it just took the ecosystem a decade of pain to get there.
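The pnpm layout can be sketched in a few lines. A toy model (hypothetical helper names, not pnpm's actual code): every file body lands once in a content-addressed store, and each project gets a hard link to it, so N projects using the same lodash cost one copy on disk.

```cpp
#include <filesystem>
#include <fstream>
#include <functional>
#include <string>

namespace fs = std::filesystem;

// Place `contents` in a content-addressed store: the filename is a hash of
// the bytes, so identical contents map to the same single store entry.
fs::path intern(const fs::path& store, const std::string& contents) {
    fs::create_directories(store);
    fs::path entry = store / std::to_string(std::hash<std::string>{}(contents));
    if (!fs::exists(entry)) {
        std::ofstream(entry) << contents;  // first writer populates the store
    }
    return entry;
}

// Expose the store entry inside a project via a hard link: the project sees
// a normal file, but no additional disk space is consumed.
void link_into_project(const fs::path& store_entry, const fs::path& project_file) {
    fs::create_directories(project_file.parent_path());
    fs::create_hard_link(store_entry, project_file);
}
```

Linking two projects to the same entry leaves the store file with a hard-link count of three (store copy plus both projects), which is the whole dedup story in miniature.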


nix has a cache too but only if the packages are reproducible.

Much harder to get reproducibility with C++ than JavaScript to say the least.


> If you imagine the absolute worst case scenario that every program shipped all of its dependencies and nothing was shared then the end result would be… a few gigabytes of duplicated data?

Honestly, I've seen projects that do this. In fact, a lot of projects that do this, at the compilation level.

It feels like a lot of the projects that I would want to use from git pull in their own dependencies via submodules when I compile them, even when I already have the development libraries needed to compile it. It's honestly kind of frustrating.

I mean, I get it - it makes it easier to compile for people who don't actually do things like that regularly. And yeah, I can see why that's a good thing. But at the very least, please give me an option to opt out and to use my own installed libraries.


Maybe the RAM crunch will get people optimizing for dedup again.

> If you earned your CS degree (or any degree) before 2022 or so, the value of that degree is going to grow and grow and grow

In my experience, target schools are the only universities now that can make their assignments too hard for AI.

When my university tried that, the assignments were too hard for students. So they gave up.


This comment would make sense 6 months ago. Now it is much, much, much more likely any given textually answerable problem will be way easier for a bleeding edge frontier AI than a human, especially if you take time into account

What university is assigning undergrads assignments too hard for AI?

Funnily enough, my Science Fiction class graded like that.

If you didn't have high information density in essays you were torn into. AI was a disadvantage due to verboseness.

Most people dropped the class and prof went on sabbatical.


Besides being tough, it's shaping students' writing in a specific direction. That dense style I think of as 19th century English philosophy prose, though I hear it may still be the ideal in parts of Europe.

Juniors from non target schools are getting pushed out since the skill floor is too high.

I graduated 9 months ago. In that time I've merged more PRs than anyone else, reduced mean time to merge by 20% on a project with 300 developers with an automated code review tool, and in the past week vibe coded an entire Kubernetes cluster that can remotely execute our builds (working on making it more reliable before putting it into prod).

None of this matters.

The companies/teams like OpenAI or Google Deepmind that are allegedly hiring these super juniors at huge salaries only do so from target schools like Waterloo or MIT. If you don't work at a top company your compensation package is the same as ever. I am not getting promoted faster, my bonus went from 9% to 14% and I got a few thousand in spot bonuses.

From my perspective, this field is turning into finance or law, where the risk of a bad hire due to the heightened skill floor is so high that if you DIDN'T go to a target school you're not getting a top job no matter how good you are. Like how Yale goes to Big Law at $250k while non T14 gets $90k doing insurance defence and there's no movement between the categories. 20-30% of my classmates are still unemployed.

We cannot get around this by interviewing well because anyone can cheat on interviews with AI, so they don't even give interviews or coding assessments to my school. We cannot get around this with better projects because anyone can release a vibe coded library.

It appears the only thing that matters is pedigree of education because 4 years of in person exams from a top school aren't easy to fake.


I hate the credentialism. What a bummer of a place to end up.

Can I ask you, and others that post things like this here: what are you actually developing?

People are posting about pull requests, use of AIs, yada yada. But they never tell us what they are trying to produce. Surely this should be the first thing in the post:

- I am developing an X

- I use an LLM to write some of the code for it ... etc.

- I have these ... testing problems

- I have these problems with the VCS/build system ...

Otherwise it is all generalised, well "stuff". And maybe, dare I say it, slop.


I'm hosting a Kubernetes cluster on Azure and trying to autoscale it to tens of thousands of vCPUs. The goal is to transparently replace dedicated developer workstations (edit: transparently replace compiling) because our codebase is really big and we've hired enough people this is viable.

edit: to clarify, I'm using recc which wraps the compiler commands like distcc or ccache. It doesn't require developers to give up their workspace.

Right now I'm using buildbarn. Originally, I used sccache but there's a hard cap on parallel jobs.

In terms of how LLMs help, they got me through all the gruntwork of writing jsonnet and dockerfiles. I have barely touched that syntax before so having AI churn it out was helpful to driving towards the proof of concept. Otherwise I'd be looking up "how do I copy a file into my Docker container".

AI also meant I didn't have to spend a lot of time evaluating competing solutions. I got sccache working in a day and when it didn't scale I threw away all that work and started over.

In terms of where the LLM fell short, it constantly lies to me. For example, it mounted the host filesystem into the docker image so it could get access to the toolchains instead of making the docker images self-contained like it said it would.

It also kept trying not to do the work, e.g. it randomly decides in the thinking tokens "let's fall back to a local caching solution since the distributed option didn't work", then spams me with checkmark emojis and claims in the chat message that the distributed solution is complete.

A decent amount of it is slop, to be honest, but an 80% working solution means I am getting more money and resources to turn this into a real initiative. At which point I'll rewrite the code again but I'll pay closer attention now that I know docker better.


> The goal is to transparently replace dedicated developer workstations

Isn't there a less convoluted way of making the best engineers leave? I am half serious here. If you want your software to run slow, IT could equally well install corporate security software on developer laptops. Oops, I did it again. Oh well, in all seriousness, I have never seen a performance problem solved by running it on Azure's virtualization. I am afraid you are replacing the hardware layer with a software layer of ungodly complexity, which you can be sure will be functionally incomplete.

Are you sure they don't have to fix the build pipeline first? Tens of thousands of vCPUs for a single compilation run, or to accommodate 100 developers who try to compile their own changes?


> I have never seen any performance problem being solved by running it on Azure's virtualization

Sorry, I wasn't clear. I am not virtualizing the workspace. I'm using `recc` which is like `distcc` or `ccache` in that it wraps the compiler job. Every developer keeps their workstation. It just routes the actual `clang` or `gcc` calls to a Kubernetes cluster which provides distributed build and cache.
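The wrapper idea can be sketched abstractly (a toy model, not recc's actual protocol): the cache key hashes everything that can affect the object file, and a hit skips the compiler entirely. recc keys jobs the same way but ships misses to a remote execution cluster instead of compiling locally.

```cpp
#include <functional>
#include <map>
#include <string>

// Toy model of a ccache/recc-style build cache. The key hashes the
// preprocessed source plus the flags -- everything that can change the
// output object file. A hit returns the cached object with no compile.
struct BuildCache {
    std::map<std::size_t, std::string> objects;  // key -> object file bytes
    int compiles = 0;                            // real compiler invocations

    std::string compile(const std::string& preprocessed, const std::string& flags) {
        std::size_t key = std::hash<std::string>{}(preprocessed + "\x1f" + flags);
        auto it = objects.find(key);
        if (it != objects.end()) return it->second;  // cache hit: skip compile
        ++compiles;
        std::string obj = "obj(" + preprocessed + ")";  // stand-in for clang/gcc
        objects[key] = obj;
        return obj;
    }
};
```

Note that hashing the *preprocessed* source is what makes the cache survive header churn that doesn't actually reach a given translation unit.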

> Isn't there a less convoluted way of making the best engineers leave?

We have 7000+ compiler jobs in a clean build because it is a big codebase. People are waiting hours for CI.

I'm sure that drives attrition and bringing that down to minutes will help retain talent.

> Tens of thousands of vCPUs for a single compilation run, or to accommodate 100 developers who try to compile their own changes?

Because it uses remote execution, it will ideally do both. My belief is that an individual developer launching 6000 compiler jobs because they changed a header will smooth out over 300 developers that generally do incremental builds. Likewise, this'll eliminate redundant recompilation when git pulling since this also serves as a cache.


Thanks for expanding on it; now it's clearer what you want to achieve. If I see things like this, it seems Linus was onto something when he banned C++. That sounds like a nasty compilation scheme, but I guess the org has painted itself too deep into that corner to get out of it.

This makes absolutely no sense to me. Are you really recompiling 6000 things each time a dev in the company needs to add a line somewhere in the codebase? Have you thought about splitting that giant thing in smaller chunks?

> Are you really recompiling 6000 things each time a dev in the company needs to add a line somewhere in the codebase?

It happens when someone modifies a widely included header file. Which there are a lot of thanks to our use of templates. And this is just our small team of 300 people.

> Have you thought about splitting that giant thing in smaller chunks?

Yes. We've tried but it's not scaling. Unfortunately, we've banned tactics like pImpl and dynamic linking that would split a codebase unless they're profiled not to be on a hot path. Speed is important because I'm writing tests for a semiconductor fab and test time is more expensive than any other kind of factory on Earth.

I tried stuff like precompiled headers but the fact only one can be used per compilation job meant it didn't scale to our codebase.
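For readers weighing the same trade-off: pImpl decouples the public header from implementation details, so edits to those details stop triggering mass recompiles of every includer, at the cost of an extra indirection (the hot-path concern cited above). A minimal sketch, collapsed into one file for illustration:

```cpp
#include <memory>

// --- widget.h: stable interface, no heavy includes ---
// Changing Impl never forces files that include this header to recompile,
// because the header mentions only an opaque pointer.
class Widget {
public:
    Widget();
    ~Widget();  // must be declared here, defined where Impl is complete
    int value() const;
private:
    struct Impl;                 // defined only in the .cpp
    std::unique_ptr<Impl> impl_;
};

// --- widget.cpp: the template-heavy details live here, and only here ---
struct Widget::Impl { int v = 42; };
Widget::Widget() : impl_(std::make_unique<Impl>()) {}
Widget::~Widget() = default;
int Widget::value() const { return impl_->v; }  // the extra indirection
```

Every call goes through a pointer chase, which is exactly why a fab-test codebase would ban it on profiled hot paths while accepting the compile-time pain elsewhere.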


Thanks for the detailed breakdown. The template header cascade problem makes total sense, I underestimated how bad it gets at scale with heavy template usage. The semiconductor fab constraint is interesting too. When test time costs that much per minute, banning pImpl on hot paths is a pretty obvious call, even if it makes compile times painful. Appreciate the real-world context.

You seem exceptionally bright. Most people are not like this. This is why they are struggling.

It sounds like you have a job, right out of college, but you're griping about not getting promoted faster. People generally don't get promoted 9 months into a job.

I'm reading your post and I am genuinely impressed but what you claim to have done. At the same time I am confused about what you would like to achieve within the first year of your professional career. You seem to be doing quite well, even in this challenging environment.


> At the same time I am confused about what you would like to achieve within the first year of your professional career.

I am in great fear of ending up on the wrong side of the K shaped recovery.

Everyone is telling me I need to be exceptional or unemployed because the middle won't exist in 2 years.

I want to secure the résumé that gives me the highest possibility of retaining employment if there's a sudden AI layoff tomorrow. A fast career trajectory catches HR's eye even if they don't understand the technicals.


I mean, you don't need your first job to be at the top of the top companies. Your first job is to get you into the industry; then you can flourish.

How many juniors are OpenAI or GDM going to hire in a year? Probably double digits at most. The chances are super slim, and they are by nature allowed to be as picky as they should be.

That being said, I do agree this industry is turning into finance/law, but that won't last long either. I genuinely can't foresee what happens if and when AGI/ASI is really here; it should start generating its own ideas to better itself, and there will be no incentive to hire any human for a large sum anymore, except maybe a single-digit number of individuals on Earth.


The problem is the lack of experience compounds.

Because AI accelerates the rate of knowledge gain, this gets even faster.


I vibe coded a Kubernetes cluster in 2 days for a distributed compilation setup. I've never touched half this stuff before. Now I have a proof of concept that'll change my whole organization.

That would've taken me 3 months a year ago, just to learn the syntax and evaluate competing options. Now I can get sccache working in a day, find it doesn't scale well, and replace it with recc + buildbarn. And ask the AI questions like whether we should be sharding the CAS storage.

The downside is the AI is always pushing me towards half-assed solutions that don't solve the problem, like just setting up distributed caching instead of compilation. It also keeps lying, which requires me to redirect and audit its work. But I'm also learning much more than I ever could without AI.


> that would've taken me 3 months a year ago, just to learn the syntax

This is hyperbole, right? In what world does it take 3 months to learn the syntax to anything? 3 days is more than enough time.


You perhaps just introduced one more moving part, that you don't understand well. Instead of thinking of a simpler solution.

I hope we get a follow-up in six months or a year as to how this all went.

> I vibe coded a Kubernetes cluster in 2 days for a distributed compilation setup. I've never touched half this stuff before. Now I have a proof of concept that'll change my whole organization.

Dunning-Kruger as a service. Thank God software engineers are not in charge of building bridges.

Looking forward to your post-mortem.


I'm writing a HIP (amd gpu kernels) linker in my job and the calling convention is contained in a metadata section in the object file.

Whether the array is passed in registers or by pointer can be chosen by the compiler based on factors like profile-guided optimization. That the ABI isn't stable doesn't matter, because the linker handles it for the programmer.

This is all publicly documented in the llvm docs too so you can write your own loader.


> Instead children would own special devices that are locked down and tagged with a "underage" flag when interacting with online services, while adults could continue as normal.

California is mandating OSes provide ages to app stores, and HN lost their mind because it's a ban on Linux.


> California is mandating OSes provide ages to app stores,

They forgot to put in the provision which exempts apps which do not need an age rating? As in: everything os related.

Sounds like a good way to get rid of snap at least since that is where all the commercial bloat is located. Last time I did a fresh Debian install I do not remember installing any app from the os repository which would require age restrictions (afaik).


> They forgot to put in the provision which exempts apps which do not need an age rating? As in: everything os related.

That's correct. You need to provide your age to install grep.

