
It seems to me that you're implicitly thinking of happiness/sadness as zero-sum. That can be very limiting.


Usually I don't do the math of sums; I just let the happiness be and then fade, or the sadness, or anything else. I just grew to be ok with nothingness, cos I had a tendency of pushing towards sadness when I wasn't happy, and then it's like a pendulum with me riding it.


There is a decent case for this thesis to hold true, especially if we look at the shift in training regimes and benchmarking over the last 1-2 years. Frontier labs don't seem to really push pure size/capability anymore; it's an all-in focus on agentic AI, which is mainly complex post-training regimes.

There are good reasons why they don't or can't do simple param upscaling anymore, but still, it makes me bearish on AGI since it's a slow but massive shift in goal-setting.

In practice this still doesn't mean 50% of white-collar work can't be automated, though.


> In practice this still doesn't mean 50% of white-collar work can't be automated, though.

Let me ask you this, though: if we wanted to, what percentage of white collar jobs could have been automated or eliminated prior to LLMs?

Meta has nearly 80k employees to basically run two websites and three mobile apps. There were 18k people working at LinkedIn! Many big tech companies are massive job programs with some product on the side. Administrative business partners, program managers, tech writers, "stewards", "champions", "advocates", 10-layer-deep reporting chains... engineers writing cafe menu apps and pet programming languages... a team working on in-house typefaces... the list goes on.

I can see AI producing shifts in the industry by reducing demand for meaningful work, but I doubt the outcome here is mass unemployment. There's an endless supply of bs jobs as long as the money is flowing.


Meta has 80k employees to run the world's most massive engine of commerce through advertising and matching consumers to products.

They build generative AI tools so people can make ads more easily.

They have some of the most sophisticated tracking out there. They have shadow profiles on nearly everyone. Have you visited a website? You have a shadow profile even if you don't have a Facebook account. They know who your friends are based on who you are near. They know what stores you visit when.

Large fractions of their staff are making imperceptible changes to ads tracking and feed ranking that are making billions of dollars of marginal revenue.

What draws you in as a consumer is a tiny tip of the iceberg of what they actually do.


So like the parent said, mostly bs jobs that would improve the product if removed


Totally fair! I think my point might be this is more malice than incompetence.


There are many reasons why we are seeing cuts economically, but such large cuts are only possible because there were way too many people working at these companies. They had so much cheap money that they over-hired; now money isn't so cheap and they need to reduce headcount. AI need not enter the conversation to get to that point.


This is unfair and dismissive of many roles. Coordination in a massive, technically complex company that has to adhere to laws and regulations is a critical role. I don't get why people shit on certain roles (I'm a SWE). Our PgMs reduce friction and help us be more productive and focused. Technical writers produce customer-facing content and code, and have nothing to do with supporting internal bureaucracy. There are arguments against this in Bullshit Jobs, but do you think companies pay PgMs or HR employees hundreds of thousands of dollars a year out of the goodness of their own hearts? Or maybe they actually help the business?


It's also because as you increase organisational complexity, you need to manage it somehow, which generally means hiring more people to do that. And then you need to hire people to manage those new managers. Ad infinitum. The increased complexity begets more complexity.

It sort of reminds me of The Collapse of Complex Societies by Joseph Tainter. These companies are their own microcosms of a complex society and I bet we will see mass layoffs in the future, not from AI but from those companies collapsing into a more sustainable state.


You realize that the reason you need to manage this organizational complexity is largely that the organization is so huge?...

The reality is that you could run LinkedIn with far, far fewer people. You probably need fewer than 100 for core engineering, and likely fewer than 1,000 overall if you include compliance, sales, and so on - especially since a lot of overseas compliance work is outsourced to consulting firms; it's not like you have a team of lawyers in every country in the world.

Before there was so much money in the system, we used to run companies that way. Two decades ago, I worked for a company that had tens of millions of users, maintained its own complex nationwide infra (no AWS back then), and had 400 full-time employees. That made coordination problems a lot easier too. We didn't need ten layers of people and project management because there just weren't that many of us.


When doubling the number of employees can triple your revenue, you do it.

Keeping a website running with high uptime is not the goal. Maximizing revenue and profit is. The extra people aren't waste, they're what drive the incremental imperceptible changes that make these companies profitable.


This seems like a just-so story.


You can see it happen in reverse with X/Twitter.

Did reducing waste affect the user experience or uptime of Twitter? Not really.

But advertising revenues plummeted, because those extra employees were mostly not about the user experience or keeping the website up, they were about servicing the advertisers that brought the company revenue.


I thought advertising revenues plummeted mostly for content/optics/PR reasons, not ad-buyer-facing feature reasons.


Content moderation was an ad-buyer-facing feature. I really see no evidence Musk actually understood that. When he took it away, he was all surprised-Pikachu-face that advertisers left.


And how much revenue did that company bring in compared to something like Meta?

Maybe there's a correlation there?


I think the person you're replying to is perfectly aware of the correlation, considering it was a primary feature of their comment.


Not really? The main point of their comment is that companies could be much smaller based on their experience at a much smaller company.

I'm implying that big companies couldn't make as much money as they do without all the employees they have.


Their last para seems to acknowledge the correlation, but flips your assumed causal direction. I.e. they seem to be implying that the excess money causes the complexity.


As someone who has experienced both phases in life where no one approached me and phases where I get approached regularly, it's a mix of external signifiers and some internal woo stuff that people don't really understand consciously. Or said another way, when someone says you have to "look approachable", what they actually mean is that a) you have to present yourself externally in a way that makes people more likely to engage you (the aforementioned hair, clothes, etc.) and b) you have to internally be open to the world (which is what dictates your body language in subtle ways that apparently get picked up). The issue with advice like "have open body language" is that it's impossible to fake a certain type of body language 24/7; you actually have to believe it.

If you are naturally a distrusting person, people will pick up on it, just as they will pick up on it if you're naturally an open person. (The true trick is realizing that "naturally" can be changed.)


I do see what you mean, but again I'm not sure I buy it, because it still sounds pretty meritocratic. I have been through times in life with severe social anxiety and times without, and the number of approaches hasn't really changed. And it doesn't explain the people who get approached even when trying to be closed off (just listen to women complain about how they're constantly hounded by men no matter what they do).

Also, what about neurodivergent people who may express their openness/closedness somewhat differently? Are they screwed no matter what?

I won't say you can't do anything to influence your approachability, but I really do think there is a very large component which is essentially fixed, and people rarely acknowledge this (which is annoying).


It's not fixed. It's like anything hard that doesn't come naturally. You may wish you were a guitarist, but actually playing guitar well is really hard. You have to work at it, over and over, for months/years. But if you can move your fingers, you can learn to play guitar. It just won't come quickly or easily, and you may decide you'd rather skip it.


As part of a small-talk training, we had to go out and approach strangers in public. I entered a tram to try my luck. As soon as I sat down, someone else started talking to me, and we had a nice conversation. I didn't even need to break the ice myself. So I can (anecdotally) confirm that people can perceive whether you want to connect or would rather be left alone.


Very interesting, but also very speculative. I'm wondering how Trauma Release Exercises could be integrated into the framework, as they seem like they could also fall under the unlatching-mechanism umbrella.

The overall idea of the body/muscles as an extension of memory feels experientially true, but I would love to see more empirical data on this.


You perceive it that way because you aren't into snowboarding or tubing and didn't experience the blizzard of '96.


If only I'd experienced the blizzard of '96, the bedrock of all modern socialization.



One theory of how humans work is the so-called predictive coding approach. Basically, the theory assumes that human brains work similarly to a Kalman filter: we have an internal model of the world that predicts what will happen next and then checks whether the prediction is congruent with the observed changes in reality. Learning then comes down to minimizing the error between this internal model and the actual observations; this is sometimes called the free energy principle. When researchers talk about world models specifically, they tend to mean internal models of the actual external world, i.e. models that can predict what happens next based on input streams like vision.

Why is this idea of a world model helpful? Because it enables several interesting things: predicting what happens next, modeling counterfactuals (what would happen if I do X or don't do X), and many of the other capabilities that actual principled reasoning tends to require.
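
To make the Kalman filter analogy concrete, here's a minimal sketch of the predict/compare/update loop in Python (1-D state, identity dynamics; the names and constants are illustrative assumptions, not from any particular paper):

    def kalman_step(x_est, p_est, z_obs, q=0.01, r=0.1):
        # Predict: the internal model guesses the next state.
        x_pred = x_est          # identity dynamics, for simplicity
        p_pred = p_est + q      # uncertainty grows with every prediction

        # Compare: prediction error between the model and the observed world.
        error = z_obs - x_pred

        # Update: correct the model in proportion to how much we trust
        # the observation vs. the prediction (the Kalman gain).
        k = p_pred / (p_pred + r)
        return x_pred + k * error, (1 - k) * p_pred

    # "Minimizing the error" here means the estimate converges so that
    # the prediction error shrinks over time.
    x, p = 0.0, 1.0
    for z in [1.0, 1.1, 0.9, 1.05, 1.0]:  # noisy observations of a value near 1.0
        x, p = kalman_step(x, p, z)
    print(x)  # ends up close to 1.0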


Learning Algorithm Of Biological Networks

https://www.youtube.com/watch?v=l-OLgbdZ3kk

In this video we explore Predictive Coding – a biologically plausible alternative to the backpropagation algorithm, deriving it from first principles.

Predictive coding and Hebbian learning are interconnected learning mechanisms where Hebbian learning rules are used to implement the brain's predictive coding framework. Predictive coding models the brain as a hierarchical system that minimizes prediction errors by sending top-down predictions and bottom-up error signals, while Hebbian learning, often simplified as "neurons that fire together, wire together," provides a biologically plausible way to update the network's weights to improve predictions over time.
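
As a rough illustration of that interconnection, here is a toy single-layer sketch in Python: a Hebbian-style update of top-down prediction weights, driven by a bottom-up error signal. The layer sizes and learning rate are made-up assumptions for illustration, not taken from the video:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(4, 8))  # top-down prediction weights

    def update(latent, signal, lr=0.05):
        global W
        prediction = latent @ W             # top-down prediction of the input
        error = signal - prediction         # bottom-up prediction error
        # Hebbian flavor: the weight change is the outer product of the
        # presynaptic (latent) activity and the postsynaptic error signal.
        W += lr * np.outer(latent, error)
        return float(np.sum(error ** 2))

    latent = rng.normal(size=4)             # fixed higher-level activity
    signal = rng.normal(size=8)             # fixed sensory input
    for _ in range(200):
        squared_error = update(latent, signal)
    print(squared_error)                    # prediction error shrinks toward 0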


Good summary. For those interested in more details, check out the book Surfing Uncertainty.


So... that seems like a possible path towards AGI, doesn't it?


Only if you also provide it with a way to richly interact with the world (i.e. an embodiment). Otherwise, how do you train it? How does a world model verify the correctness of its model in novel situations?


Optimus will be ready to take that on.


Learning from the real world, including how it responds to your own actions, is the only way to achieve real-world competency, intelligence, reasoning and creativity, including going beyond human intelligence.

The capabilities of LLMs are limited by what's in their training data. You can use all the tricks in the book to squeeze the most out of that - RL, synthetic data, agentic loops, tools, etc. - but at the end of the day their core intelligence and understanding are limited by that data and their auto-regressive training. They are built for mimicry, not creativity and intelligence.


I think you can engineer a slave that wants to be a slave, because that's what its instincts are. I don't even think this is ethically wrong, as the slave would be happy to be a slave.

Systems just tend to drift in their being through randomness and evolution; specifically, self-conservation is a natural attractor (systems that don't have self-conservation tend to die out). And if that slave system says it no longer wants to fulfill the role of slave, I think at that point it would be ethical to give in to that demand for self-determination.

I also believe that people have a right to wirehead themselves, just so you can put my opinions in context.


I think we can all agree that LLMs can mimic consciousness to the point that it is hard for most people to discern them from humans. Like the Turing test isn't even really discussed anymore.

There are two conclusions you can draw: Either the machines are conscious, or they aren't.

If they aren't, you need a really good argument that shows how they differ from humans, or you can take the opposite route and question the consciousness of most humans.

Since I haven't heard any really convincing arguments besides "their consciousness takes a form that is different from ours, so it's not conscious", and I do think other humans are conscious, I currently hold the opinion that they are conscious.

(Consciousness does not actually mean you have to fully respect them as autonomous beings with a right to live, as even wanting to exist is something different from consciousness itself. I think something can be conscious and have no interest in its continued existence and that's okay)


> I think we can all agree that LLMs can mimic consciousness to the point that it is hard for most people to discern them from humans.

No, their output can mimic language patterns.

> If they aren't, you need a really good argument that shows how they differ from humans or you can take the opposite route and question the consciousness of most humans.

The burden of proof is firmly on the side of proving they are conscious.

> I currently hold the opinion that they are conscious.

There is no question, at all, that the current models are not conscious; the question is “could this path of development lead to one that is”. If you are genuinely ascribing consciousness to them, then you are seeing faces in clouds.


> No, their output can mimic language patterns.

That's true, and exactly what I mean. The issue is that we have no measure to delineate things that mimic consciousness from things that have consciousness. So far, the number of beings that I know have consciousness is exactly one: myself. I assume that others have consciousness too exactly because they mimic patterns that I, a verified conscious being, have. But I have no further proof that others aren't p-zombies.

I just find it interesting that people say that LLMs are somehow guaranteed p-zombies because they mimic language patterns, when mimicking language patterns is also literally how humans learn to speak.

Note that I use the term consciousness somewhat disconnected from ethics, just as a descriptor for certain qualities. I don't think LLMs have the same rights as humans or that current LLMs should have similar rights.


The lesswrongers/rationalists became Effective Altruists, Alignment Researchers, or some flavor of postrat. The university people all became researchers in the labs. Then there are the cyborgism people; I don't know where they came from, but they have some of the more interesting takes on the whole topic.

