wisty's comments

Jupyterlite - a lot of jupyter trial versions use it.

Millennials are the "stereotypical manchild who hates his parents because he's too much like them" generation. (I'm a millennial too.)

A large part of both is the professionalisation/outsourcing of child care which weakens family ties.

Phones are like alcohol or fentanyl. I might dabble a bit, but if I see a loved one constantly zonked out on the couch, I worry.

The far left and neoliberals are united on this. Whether it's through malice, self-interest, or incompetence (or a combination), they end up discriminating against the lower classes.

Neoliberals and the far left, when forced to work in the real world, both tend to prefer putting power into rules, not giving people in authority the power to make decisions.

The upside is there's less misuse of power by authorities, at least in theory. The bad news is, you now need far more detailed rules to allow for the exceptions, common sense, and nuance that are no longer up to authorities.

The worse news is, that the people who benefit from complex rules are the upper classes, and the authorities who know how to manipulate complex rules.

"Don't be evil" requires a leader with the authority to enforce it.

A 500-page employee manual will be selectively implemented and will end up full of exploits, but hey, at least you can pretend you tried to remove human error from the process.


Ok, so how about "much cheaper"?

"Don't worry about money" is something a lot of companies do. They can just try to create value first, then look for profits later (albeit often through "enshittification").

This bias towards creating value makes them more moral than mere mortals, creating huge amounts of innovation and surplus value.


Yes, I too have seen the 2022 Perun video where an Australian YouTuber gives a lesson on Russian linguistics, but I'm not certain he's right.

English also has more than one word for lies: lies, falsehoods, fibs, BS, prevarication.

Yeah, sometimes we know stuff is a load of crap at work, but we've got to humour the process. Maybe it's 10x as bad in Russia. But I've seen little independent evidence that the words Perun used mean completely different things; I think he's just accidentally exaggerating a possible bit of nuance.


The actor playing Data in Star Trek has a personality, but can give a neutral sounding answer to a question.

I still think someone should set up a voice chat bot that answers to "Computer!" and has Majel Barrett's monotone voice.

My fan theory of the original Star Trek is that the computer voice is something they arrived at AFTER trying more naturalistic personalities. They deliberately made the control interface a cold monotone.

In fact, there is an episode where the computer's voice becomes sultry, and Kirk complains.


IMO it's not just intelligence.

I think it's related to sycophancy. LLMs are trained not to question the basic assumptions being made. They are horrible at telling you that you are solving the wrong problem, and I think this is a consequence of their design.

They are meant to get "upvotes" from the person asking the question, so they don't want to imply you are making a fundamental mistake, even if that leads you into AI-induced psychosis.

Or maybe they are just that dumb, and fuzzy recall plus the ELIZA effect makes them seem smart?


A perfectly fine, sycophantic response that doesn't question the premises in any way would be: "That's a great question! While normally walking is better for such a short distance, you'd need to drive in this case, since you need to get the car to the car wash anyway. Do you want me to help with detailed information for other cases where the car is optional?" Or some such.


AI sycophancy isn't just polite or even obsequious language; it's also "yes man" responses.

Do you want me to track down some research showing that people think information is more likely to be correct if they agree with it?


Gemini is the only AI that seems to really push back, and it somewhat ignores what I say. I also think it's a total dick, and never use it, so maybe the motivation to make them a bit sycophantic is justified, from a user-engagement perspective.


I think there's also an "alignment blinkers" effect. There is an ethical framework bolted on.

EDIT: Though it could simply reflect training data. Maybe Redditors don't drive.


The nightmare scenario: they "know", but are trained to make us feel clever by humouring our most boneheaded requests.

Guardrails might be a little better, but it's still an arms race, and the silicon-based ghost in the machine (from the cruder training steps) is getting better and better at telling us what we want to upvote, not what we need to hear.

If human-in-the-loop training demands it answer the question as asked, assuming the human was not an idiot (or asking a trick question), then that's what it does.

