
From what I can understand, he asked DeepSeek to convert ARM SIMD code to WASM code.

In the GitHub issue he links, he gives an example of a prompt: "Your task is to convert a given C++ ARM NEON SIMD to WASM SIMD. Here is an example of another function:" (followed by an example block and a block with the conversion instructions)

https://gist.github.com/ngxson/307140d24d80748bd683b396ba13b...

I might be wrong of course, but asking an LLM to optimize code is something that helped me quite a bit when I first started learning PyTorch. I feel like the "99% of this code..." line is useful in that it lets you know the code was AI-written, but it shouldn't be a brag. Then again, I know nothing about SIMD instructions, but I don't see why it should be harder for a capable LLM to write SIMD instructions than optimized high-level code (which is much harder than merely working high-level code; I'm glad I can do the latter lol)



Yes, “take this clever code written by a smart human and convert it for WASM” is certainly less impressive than “write clever code from scratch” (and reassuring if you’re worried about losing your job to this thing).

That said, translating good code to another language or environment is extremely useful. There's a lot of low-hanging fruit where, for example, an existing high-quality library is written for Python or C# or something, and an LLM can automatically convert it to optimized Rust / TypeScript / your language of choice.


Keep in mind, two of the functions were translated, and the third was created from scratch. Quoting from the FAQ on the Gist [1]:

Q: "It only does conversion ARM NEON --> WASM SIMD, or it can invent new WASM SIMD code from scratch?"

A: "It can do both. For qX_0 I asked it to convert, and for qX_K I asked it to invent new code."

* [1]: https://gist.github.com/ngxson/307140d24d80748bd683b396ba13b...


Porting well-written code is pretty fun and fast in my experience, if you know the target language well. However, when there are library, API, or language-feature differences, those are often better handled outside the model, since fully describing the entire context to it takes more work than it's worth, in my experience.


This. For folks who regularly write simd/vmx/etc, this is a fairly straightforward PR, and one that uses very common patterns to achieve better parallelism.

It's still cool nonetheless, but not a particularly great test of DeepSeek vs. alternatives.


That is what I am struggling to understand about the hype. I regularly use LLMs to generate new SIMD code. Other than a few edge cases (issues around handling of NaN values, argument order for corresponding ops, availability of newer AVX-512F intrinsics), they are pretty good at converting. The intrinsic names are very similar from one SIMD ISA to another. The self-explanatory nature of the intrinsic names, and the similar APIs across SIMD ISAs, make this a somewhat expected result given what LLMs can already accomplish.
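To illustrate how closely the intrinsic names map across ISAs, here is a minimal sketch (my own, not from the PR; `dot4` is a hypothetical helper, and `n` is assumed to be a multiple of 4) of the same four-wide float dot product written against both headers, with a scalar fallback so it compiles anywhere:

```c
#include <assert.h>
#include <stddef.h>

#if defined(__ARM_NEON)
#include <arm_neon.h>
#elif defined(__wasm_simd128__)
#include <wasm_simd128.h>
#endif

/* Four-wide float dot product. Note the near one-to-one name mapping:
 * vaddq_f32 <-> wasm_f32x4_add, vmulq_f32 <-> wasm_f32x4_mul,
 * vld1q_f32 <-> wasm_v128_load. */
float dot4(const float *a, const float *b, size_t n) {
#if defined(__ARM_NEON)
    float32x4_t acc = vdupq_n_f32(0.0f);
    for (size_t i = 0; i < n; i += 4)
        acc = vaddq_f32(acc, vmulq_f32(vld1q_f32(a + i), vld1q_f32(b + i)));
    return vgetq_lane_f32(acc, 0) + vgetq_lane_f32(acc, 1)
         + vgetq_lane_f32(acc, 2) + vgetq_lane_f32(acc, 3);
#elif defined(__wasm_simd128__)
    v128_t acc = wasm_f32x4_splat(0.0f);
    for (size_t i = 0; i < n; i += 4)
        acc = wasm_f32x4_add(acc, wasm_f32x4_mul(wasm_v128_load(a + i),
                                                 wasm_v128_load(b + i)));
    return wasm_f32x4_extract_lane(acc, 0) + wasm_f32x4_extract_lane(acc, 1)
         + wasm_f32x4_extract_lane(acc, 2) + wasm_f32x4_extract_lane(acc, 3);
#else
    /* Scalar fallback for other targets. */
    float acc = 0.0f;
    for (size_t i = 0; i < n; i++)
        acc += a[i] * b[i];
    return acc;
#endif
}
```

Much of a conversion like the one in the PR is this kind of name-for-name substitution, which is exactly the pattern-matching LLMs are good at.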


If I had to guess, it's both the title ("ggml : x2 speed for WASM by optimizing SIMD") and the PR being written by AI


+ DeepSeek recently being in the headlines + lack of knowledge around SIMD extensions. Modern social media is interesting...


I do have to say, before I knew what SIMD was, it was all black magic to me. I've since had to learn how it works for my thesis, at a very shallow level, and I have to say it's much less black magic than before, although I still wouldn't be able to write SIMD code myself


DeepSeek R1 is not exactly better than the alternatives. It is, however, open, as in open-weight, and requires far fewer resources. That is what's disruptive about it.


LLMs are great at converting code. I've taken whole functions and converted them before, and been really impressed



