Hacker News

> [...] bake for fifteen minutes and out pops a fully-featured ML stack

Where is logging? Where are model storage and versioning, input data processing and normalization, and results processing?



Lowbrow comment.

The hard part of ML stacks is AD and GPU support, not all of those other things (I'm sure there has been zero cutting-edge research on better ways to log).


Yes, and unlike AD and GPU support, things like logging have nothing (special) to do with ML. Julia has both very nice logging and plenty of good serialisation options, all of which work nicely with the ML stack. It's entirely unnecessary to duplicate these tools just so they can be baked into a huge framework.
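To make the serialisation point concrete, here is a minimal sketch of saving and reloading "model weights" with nothing but Julia's stdlib `Serialization` module. The `weights` dictionary is a hypothetical stand-in for real model parameters, not any particular framework's format (packages like BSON.jl or JLD2.jl are common choices for more durable storage, but aren't needed for the basic idea):

```julia
# Sketch: persisting model parameters with the Serialization stdlib.
# `weights` is a hypothetical stand-in for real trained parameters.
using Serialization

weights = Dict("W" => rand(3, 3), "b" => zeros(3))

path = joinpath(mktempdir(), "model.jls")
serialize(path, weights)        # write parameters to disk
restored = deserialize(path)    # read them back

@assert restored["W"] == weights["W"]
@assert restored["b"] == weights["b"]
```

Nothing here knows or cares that the values happen to be model parameters; the same generic tooling serves the ML stack.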


Well, I think there is something to be said there, though. The reason the Julia stack is nice is that Julia's standard logging tools can be used for logging in ML code. Even other things, like Julia's standard progress bars, just work with ML code. That's quite a surprising result. Tools that build a sub-language for graph construction, like TensorFlow, have to build and document such tooling themselves. So newcomers to Julia will search the package documentation and package code and find nothing. It's a confusing situation: the functionality exists, but no one thought to document its usage in this context, since it's just the standard Julia usage!
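The "standard logging just works" point can be sketched in a few lines. This is a hypothetical toy training loop (a hand-written least-squares fit, not Flux.jl code), but the `@info` macro from the `Logging` stdlib is exactly what any Julia code would use; nothing ML-specific is involved:

```julia
# Sketch: the stdlib @info macro used inside an ordinary training loop.
# The "model" is a toy least-squares fit, a stand-in for a real one.
using Logging

xs = [1.0, 2.0, 3.0, 4.0]
ys = 2.0 .* xs                       # true weight is 2.0

loss(w) = sum((w .* xs .- ys) .^ 2)
grad(w) = sum(2 .* (w .* xs .- ys) .* xs)   # analytic gradient

w = 0.0
for epoch in 1:100
    global w -= 0.01 * grad(w)
    # Standard structured logging -- same macro as anywhere else in Julia.
    epoch % 25 == 0 && @info "training" epoch loss = loss(w)
end
```

Any log sink that understands Julia's logging interface (console, file, a custom `AbstractLogger`) picks these records up unchanged, which is the interoperability being described above.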


I agree that it shouldn't be called a fully featured ML stack. Still, the most important feature (optimized compilation of models) is handled quite well. Whenever I have to look at TensorFlow source code to understand how it works, I see an over-complicated system that is too far removed from the research papers, which makes it hard to work with.



