
I’m disappointed in all the anthropomorphizing in this thread. Time and time again, we make analogies for how black box ML algos must work like people, only for researchers to come along and show that they actually just use shortcuts that don’t remotely resemble human learning/thinking.

When will we learn to stop being overconfident about how these things work? Just say “we don’t know yet.” Anthropomorphism and overconfidence are dangerous in that we could set the wrong precedents (culturally and legally) for how these are used and how automation affects society.



That only answers half of the question. Maybe ML uses some weird shortcut, but how do we know the human brain doesn't use the same shortcut? If it's possible to use some simple hack to do something, why didn't we evolve to work that way?


Here's a concrete famous example. I'm sure I'll remember the details wrong, but the gist is the kind of thing that keeps happening.

A while back, people built a model for medical imaging that learned to distinguish between images of patients with vs without some disease (can't remember that detail). It did well, but failed in the real world. It turned out that instead of learning to recognize features of the disease at hand, it learned to recognize some tiny feature of whether the image came from a specific hospital that collected part of the dataset, or something stupid like that.
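
To make that failure mode concrete, here's a toy sketch (hypothetical data, scikit-learn, not the actual study): a classifier latches onto a spurious "which hospital" marker that happens to correlate with the label in training, then falls apart when that correlation breaks.

    # Illustrative sketch only: shortcut learning on a spurious feature.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_data(n, hospital_correlated):
        y = rng.integers(0, 2, n)               # disease label
        signal = y + rng.normal(0, 2.0, n)      # weak real disease signal
        if hospital_correlated:
            marker = y + rng.normal(0, 0.1, n)  # hospital artifact tracks the label
        else:
            marker = rng.normal(0, 1.0, n)      # artifact is uninformative in deployment
        return np.column_stack([signal, marker]), y

    X_train, y_train = make_data(5000, hospital_correlated=True)
    X_test,  y_test  = make_data(5000, hospital_correlated=False)

    clf = LogisticRegression().fit(X_train, y_train)
    print("train accuracy: ", clf.score(X_train, y_train))  # looks great
    print("deploy accuracy:", clf.score(X_test, y_test))    # much worse once the shortcut is gone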

Saying "maybe model X does the same thing as humans" is proven wrong for X after X after X. At this point, the default assumption should be that ML techniques are different from humans unless proven otherwise.


I can't say with 100% certainty, but I don't think our brains turn words into numbers and run math equations to make images. It's a bit of a cop-out to say "we don't know how the brain works". The same applies in reverse to ChatGPT: we don't use math calculations to write words. If anything, GPUs aren't taking a shortcut; they use brute computation to get to something our brains perceive as similar.


>Maybe ML uses some weird shortcut,

You do realize that there are people in this thread who can explain to you in fine-grained detail how an ML model actually comes to its conclusions, without speculating about abstract "weird shortcuts".


Brains are radically different from GPUs.


The same calculations can be performed by an abacus. What is doing the calculation is irrelevant. The question is what the calculation is


This argument would be somewhat more compelling in a world where little bits of silicon were not on the order of quadrillions of times faster than we are at arithmetic, while struggling to do at all the things we casually do every waking second.

The calculation is theoretically unimportant. Practically, it is of great importance.


That's kind of a philosophical question, no? Is being able to model an analog behavior accurately the same thing as the analog behavior itself? While we know from the math that there's an equivalence / a bounded error rate, emulation in the digital domain certainly seems far more power-intensive, which would indicate that it's not the thing itself. A clearer example is photon collisions: simulating that behavior on a computer is not the same thing as colliding the photons. Could be wrong though.


Does it matter if the simulation of photons is on an abacus or on a GPU? I think that's the question. Neither of those is "reality", just a simulation.


>>>> Maybe ML uses some weird shortcut, but how do we know the human brain doesn't use the same shortcut? If it's possible to use some simple hack to do something, why didn't we evolve to work that way?

>>> Brains are radically different from GPUs.

>> The same calculations can be performed by an abacus. What is doing the calculation is irrelevant. The question is what the calculation is

> Does it matter if the simulation of photons is on an abacus or on a GPU? I think that's the question. Neither of those is "reality", just a simulation.

I think so, yes, specifically with respect to the question of "how do we know the human brain doesn't use the same shortcut?". Simulations likely use very different shortcuts because they're optimizing for the structural design of a man-made machine that exists today and uses numerical and CS tricks to cheapen the computation cost while maintaining error rates on training data. The brain uses physical shortcuts to minimize energy expenditure, for survival of the host, and resiliency of the species (i.e. OK if flaws exist sometimes as long as the species survival is improved long-term). So not only is ML a fun-house mirror image of a brain (our model is extremely imperfect today), the optimization process is totally alien to how the brain figured out all its shortcuts.


Neurons do not fire like logic gates. Stop it.


I’d just like to point out that whether neurons do or don’t fire like logic gates basically doesn’t matter at all, not for this argument about Stable Diffusion or even in a deeper philosophical sense. It’s a silly question, much like asking whether a submarine can swim.

The irony here being that we're a few layers deep in a thread started as a critique of this kind of pointless anthropomorphism.


> I’m disappointed in all the anthropomorphizing

It’s not anthropomorphizing; it’s a description and an analogy.

Neural networks work similarly to a brain, and it’s easier to describe them that way because, again, they were modelled that way.
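
For what the analogy actually amounts to, here's a minimal sketch (numpy, assuming the standard textbook "artificial neuron" abstraction): a weighted sum of inputs pushed through a nonlinearity. This is roughly the full extent of the resemblance to a biological neuron.

    # Minimal sketch of the artificial-neuron abstraction the brain analogy comes from.
    import numpy as np

    def neuron(inputs, weights, bias):
        # weighted sum of inputs, squashed by a sigmoid "firing rate"
        return 1.0 / (1.0 + np.exp(-(np.dot(weights, inputs) + bias)))

    x = np.array([0.2, 0.9, 0.1])   # incoming activations
    w = np.array([1.5, -2.0, 0.5])  # learned connection strengths
    print(neuron(x, w, bias=0.1))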

It’s not a perfect analogy, but your offence would suggest a lack of understanding of communication or neural nets, or that you’re trying to blow things out of proportion for some reason.


> a lack of understanding of communication or neural nets, or that you’re trying to blow things out of proportion for some reason

Surely there's a more appropriate way to say that, and a more charitable reading of my comment. As a researcher in the field, I think it's safe to say I understand the models, and maybe I am overly sensitive at people jumping to wrong conclusions because I'm so tired of it.

The issue isn't the communication aspect of the analogy, it's the reasoning aspect. For example, people who understand these things say "these work like people" (a useful analogy) and then people who don't understand them say "well if they work like people then they should be legislated like people" (not useful reasoning because the assumption in the "if" was just an analogy). The game of telephone is the danger.

You can see lay people in this very thread taking the analogies literally and extrapolating based on literal interpretations of model-brain analogies.


It’s a fantastic critique because these discussions immediately descend into some kind of debate over “what it means to learn”, which has nothing to do with copyright infringement or authorship.

E.g., someone will naively state that something is or isn’t copyright infringement “because the tool learns like humans do”… which, again, is not a question a court would ask about the tool, and since copyright is a legal invention it is kind of pointless to drift off into philosophical oblivion…


I wouldn't necessarily say it learns how a human would, but rather how an arbitrary brain would.

There's all kinds of brains in this world from all kinds of life. A dog can learn. A cat can learn. And they don't learn like a human would.


>>A dog can learn. A cat can learn. And they don't learn like a human would.

Yes, but they learn a LOT MORE like a human would than these machine models do. Cats and dogs share the same underlying structure, from the neuron/synapse/neurotransmitter system up to the brainstem/cerebellum/midbrain/cerebrum architecture, as well as being inextricably integrated into a living body, a sensory system, and a growth pattern.

And, as you say, there are big differences in how we all learn. But those differences are utterly trivial compared to the differences between humans and ML.



