
Are this and a previous tweet an ML-guys discussion? My layman understanding of neural networks is that the core operation is basically kicking a figure down a hill and seeing where it ends up, except both the figure and the hill are N-dimensional objects, where N is too huge to comprehend. Of course some nonsensical figures end up at valid locations, but can you really expect a stable inner structure from the hill-figure interaction? I think it's unlikely that the learning method has any mechanism for producing one. NNs can give interesting results, but they don't magically rewrite their own design yet.
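The "kick a figure down the hill" picture is roughly gradient descent on a high-dimensional loss surface. A minimal sketch, with a made-up bumpy 100-dimensional "hill" and step size chosen purely for illustration:

```python
import numpy as np

def loss(x):
    # A bumpy N-dimensional "hill": a quadratic bowl plus sinusoidal ridges.
    return np.sum(x**2) + 0.5 * np.sum(np.sin(3 * x))

def grad(x):
    # Analytic gradient of the loss above.
    return 2 * x + 1.5 * np.cos(3 * x)

rng = np.random.default_rng(0)
x = rng.normal(size=100)       # random starting point in 100 dimensions
start = loss(x)
for _ in range(500):
    x = x - 0.05 * grad(x)     # roll downhill, one small step at a time

print(start, "->", loss(x))    # the figure settles far below where it started
```

Where it ends up depends on the starting point and the shape of the surface, which is why "nonsensical figures" can still land somewhere valid-looking.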

It would still be interesting to see how the output changes with small changes to these inputs. If my vague understanding is at all close, this would reveal which "faces" are "noisier" than the others. Not sure what that gives you, though.
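That probing idea can be sketched with finite differences: nudge each input slightly and measure how far the output moves. The tiny two-layer network below is a hypothetical stand-in, not the model under discussion:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(8, 4))   # made-up weights of a small 2-layer net
W2 = rng.normal(size=(1, 8))

def net(x):
    return float(W2 @ np.tanh(W1 @ x))

x0 = rng.normal(size=4)        # the input being probed
eps = 1e-4
sensitivity = []
for i in range(len(x0)):
    dx = np.zeros_like(x0)
    dx[i] = eps
    # central finite difference: approximate d(output)/d(input_i)
    sensitivity.append((net(x0 + dx) - net(x0 - dx)) / (2 * eps))

print(sensitivity)  # larger magnitude = output more sensitive to that input
```

Inputs with outsized sensitivities would be the "noisy" directions the comment is gesturing at.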




