Hacker News | gjadi's comments

Well, it does tell us something if they limit screen time like they limit sugar but don't limit book time.

I'm sure almost no family has an upper limit on book time.

Thus aiming for screens to replace books is a bad aim.


It depends on their sleep habits, work-life requirements, and compensation when they need to be on-call.

When you get a fatter check because your code breaks, the incentives are not in favor of good code.


Vendoring means you don't have to fetch from the internet for every build, that you can work offline, that you're not at the mercy of oh-so-close-to-99.999% availability, that it will keep on working in 10 years, and probably other advantages.

If your tooling can pull a dependency from the internet, it could certainly check whether a more recent version than the vendored one is available.


This is only true if you aren’t internally mirroring those packages.

Most places I’ve worked have Artifactory or something like it sitting between you and actual PyPI/npm/etc. As long as someone has pulled that version at some point before the internet goes out, it’ll continue to work after.


And this is exactly why we see noise on HN/Reddit when a supply-chain cyberattack breaks out, but no breach is ever reported. Enterprises are protected by internal mirroring.

Is there any package manager incapable of working offline?

> Is there any package manager incapable of working offline?

I think you've identified the problem here: package management and package distribution are two different problems. Both tools have possibilities for exploits, but if they are separate tools then the surface area is smaller.

I'm thinking that the package distribution tool maintains a local system cache of packages, using keys/webrings/whatever to verify provenance, while the package management tool allows pinning, minver/maxver, etc.
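To make the split concrete, here is a minimal Java sketch of the distribution side verifying a cached artifact against a hash pinned by the management side. All names, the package filename, and the pin format are hypothetical, not from any real package manager:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;
import java.util.Map;

public class PinnedCache {
    // Management side (hypothetical): pins recorded per artifact at
    // "pin time", e.g. in a lockfile.
    static final Map<String, String> PINS = Map.of(
        "example-lib-1.3.0.tgz", "<sha256 hex digest recorded at pin time>"
    );

    // Distribution side: verify provenance of a locally cached artifact
    // before handing it to the build.
    static boolean verify(Path artifact, String expectedHex) throws Exception {
        byte[] data = Files.readAllBytes(artifact);
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(data);
        return HexFormat.of().formatHex(digest).equals(expectedHex);
    }
}
```

The point of the separation: `verify` never touches the network, so the attack surface of the distribution tool is just "does this local file match the pin", independent of how the pin was chosen.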


“The doers are the major thinkers. The people that really create the things that change this industry are both the thinker and doer in one person.”

Steve Jobs

Now, what doers are in the age of LLMs is another question.


Well, was Jobs a "doer"? Did he get his hands dirty with the code? Or did he use his employees the way we would like to use LLMs?


> Well was Jobs a "doer"?

Jobs' talent was that he was an incredibly talented salesman.


Salespeople sell things that already exist. If you can envision new things that would sell well, that's a bit more than sales talent

> Salespeople sell things that already exist. If you can envision new things that would sell well, that's a bit more than sales talent

A lot of gadgets that Steve Jobs claimed were envisioned by Apple (or rather, by him; as I wrote, Steve Jobs was an exceptional salesman) already existed before, just with a few more rough edges. Those earlier products did not sell so well because their companies lacked a marketing department that could make people believe what they were selling was the next big thing.


Have you ever heard of Steve Jobs?

That wasn't too hard for him given he was also an incredibly talented market opportunity spotter and product leader.


Why do people write such nonsense?

Jobs envisioned the iPad and iPhone. Did he do the physical work? No. But he created direction.

Everyone around him at that time has commented on this. Are you going to claim they’re all lying?


> Jobs envisioned the iPad and iPhone. [...] Everyone around him at that time has commented on this. Are you going to claim they’re all lying?

I don't claim that they are all lying, but I do claim that quite some people fell for Apple's marketing (as I wrote: "Jobs' talent was that he was an incredibly talented salesman.").


Because people only quote it partially.

> We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.


The maintainer explained the reasoning for closing the issue quite well in a comment.


Not everyone is motivated by the highest wage they can get.

Good enough can be good enough; then you can aim for fun/interesting/challenging/fulfilling work instead of a fatter check.


>Not everyone is motivated by the highest wage they can get.

This idealism tends to go away once you have to buy a home and realize you're working more hours for less money than your mates in industries that are easier to get into; then you switch really quick.

People aren't selfless when it comes to being exploited by private sector entities, they'll always go towards the ones with the best wage/hour ratios.

People aren't stupid. Why would they voluntarily choose to work harder and be less well off? It's not like this is work for the public good, like medicine, firefighting, EMT, or education.


There is always more money elsewhere.

But once you have a home, enough to raise your family and save for later, when is enough enough?

And is the work fun? Fulfilling?

Money is a means to an end.

Sure, you can aim to earn enough to reach FIRE ASAP. In my case, I aim for FIRE in the next 40 years while maxing my fun in the meantime :)


IMHO, the real problem is that they create an even greater dissonance between online life and IRL.

Think about dating apps: pictures could be fake, and now the words exchanged can be fake too.

You thought you were arguing with a gentle and smart colleague over chat and email; too bad, when you meet them at a conference or at a restaurant you find them very unpleasant.


This.

For me, navigating with shortcuts feels like I can keep my inner monologue; it is part of it, maybe because I can spell it out?

Dunno, but reaching for the mouse and navigating around breaks that, even if it can be more convenient for some actions.


Interesting argument.

But isn't it the correction of those errors that is valuable to society and gets us a job?

People can tell they found a bug or give a description of what they want from a piece of software, yet it requires skill to fix the bugs and to build the software. Though LLMs can speed up the process, expert human judgment is still required.


I think there's different levels to look at it.

If you know that you need O(n) "contains" checks and O(1) retrieval for items, for a given order of magnitude, it feels like you've got all the pieces of the puzzle needed to keep the LLM on the straight and narrow, even if you didn't know off the top of your head that you should choose ArrayList.

Or if you know that string manipulation might be memory intensive so you write automated tests around it for your order of magnitude, it probably doesn't really matter if you didn't know to choose StringBuilder.

That feels different to e.g. not knowing the difference between an array list and linked list (or the concept of time/space complexity) in the first place.
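For what it's worth, the two checks mentioned above can be sketched in a few lines of Java (the class and method names are mine, not from the thread):

```java
import java.util.ArrayList;
import java.util.List;

public class ComplexityChecks {
    // ArrayList: get(i) is O(1); contains(x) is an O(n) scan.
    // On a LinkedList, get(i) would itself be O(n).
    static int retrieveLast(List<Integer> xs) {
        return xs.get(xs.size() - 1);
    }

    // StringBuilder avoids the O(n^2) cost of repeated String
    // concatenation, since each append is amortized O(1).
    static String joinDigits(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(i % 10);
        }
        return sb.toString();
    }
}
```

Knowing *why* `retrieveLast` is cheap on an ArrayList and `joinDigits` should use a StringBuilder is exactly the kind of judgment that lets you review the LLM's choice instead of cargo-culting it.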


My gut feeling is that, without wrestling with data structures at least once (e.g. during a course), that knowledge about complexity will be cargo cult.

When it comes to fundamentals, I think it's still worth the investment.

To paraphrase, "months of prompting can save weeks of learning".


I think the kind of judgment required here is to design ways to test the code without inspecting it manually line by line; that would be walking a motorcycle, and you would only be vibe-testing. That is why we have seen the FastRender browser and the JustHTML parser: the testing part was solved upfront, so the AI could go nuts implementing.


I partially agree, but I don’t think “design ways to test the code without inspecting it manually line by line” is a good strategy.

Tests only cover cases you already know to look for. In my experience, many important edge cases are discovered by reading the implementation and noticing hidden assumptions or unintended interactions.

When something goes wrong, understanding why almost always requires looking at the code, and that understanding is what informs better tests.


Another possibility is to implement the same spec twice and do differential testing; you can catch diverging assumptions and clarify them.
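A toy Java example of that differential setup, using a made-up spec (clamp a value into a range) and two independently written implementations fuzzed against each other:

```java
import java.util.Random;

public class DiffTest {
    // Spec: clamp v into [lo, hi]. Implementation A.
    static int clampA(int v, int lo, int hi) {
        return Math.max(lo, Math.min(hi, v));
    }

    // Same spec, implemented separately (B). A real disagreement here
    // would point at a diverging assumption in the spec.
    static int clampB(int v, int lo, int hi) {
        if (v < lo) return lo;
        if (v > hi) return hi;
        return v;
    }

    // Differential driver: compare both implementations on random inputs.
    static boolean agreeOnRandomInputs(int trials, long seed) {
        Random rng = new Random(seed);
        for (int i = 0; i < trials; i++) {
            int lo = rng.nextInt(100) - 50;
            int hi = lo + rng.nextInt(100);
            int v = rng.nextInt(400) - 200;
            if (clampA(v, lo, hi) != clampB(v, lo, hi)) return false;
        }
        return true;
    }
}
```

In practice the two implementations would come from two independent LLM runs (or an LLM and a human), and any input where they disagree becomes a question to take back to the spec.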


Isn't that too much work?

Instead, just learning concepts with AI and then using HI (Human Intelligence) and AI to solve the problem at hand, by going through code line by line and writing tests, is a better approach productivity-, correctness-, efficiency-, and skill-wise.

I can only think of LLMs as fast typists with some domain knowledge.

Like typists of government/legal documents who know how to format documents but cannot practice law. Likewise, LLMs are code typists who can write good/decent/bad code but cannot practice software engineering - we need, and will need, a human for that.

