> But we did decide to write a spec because (1) we want to be 100% open about this, and a spec makes it easier for others to learn about it, and (2) it helps us check we didn't miss anything because we have a formal type system.
If I'm going to target a toolchain at your runtime, and expect decent performance out of it, and expect to see consistent performance with other runtimes that also implement such behavior, then I (and you) need a spec.
Moreover, if I'm going to implement tooling that can make sense of your "byte code", then I absolutely need a spec that's more specific than "it's JavaScript that might be optimized".
This is making me more wary about being able to build reliable tooling around such a "bytecode", not less.
> If so, then that exposes a failing of all JITs, including the JVM.
Yes. But some languages and targets are much worse off than others.
> Actually even a CPU might not, because CPUs also optimize in unpredictable ways, these same issues are dealt with on that level too.
This is why we sometimes resort to careful statement and instruction ordering to avoid pipeline stalls, to architecture-specific intrinsics, or to worrying about false sharing.
> Again, there is room for native apps. But there is also room for portable, standards-based apps. The web is the latter.
The web could be both, if Mozilla and other web die-hards would critically evaluate the accident of history that is the modern web browser. What bothers me most of all is just how much Mozilla can hold back the industry. For an example of where the industry could go, look at what happened with virtual machine implementations.
At first, they were particularly inefficient, relying on tricks such as trap-and-emulate. That's not all that different from how NaCl works today, especially on ARM, with funny tricks like load+store pseudo-instructions. Gradually, hardware vendors took notice, and instruction sets and hardware shifted to add VM-specific functionality (and ultimately performance) -- first with VT-x, and now with VT-d (e.g., IOMMUs).
NaCl is in the perfect position to start down the road that leads to re-imagining what no-compromises sandboxed code looks like at the processor level. asm.js is going in the complete opposite direction, and in doing so, has the potential to steer the entire industry away from a path that could introduce significant and beneficial changes in the realm of security, portability, and open platforms.
> If I'm going to target a toolchain at your runtime, and expect decent performance out of it, and expect to see consistent performance with other runtimes that also implement such behavior, then I (and you) need a spec. Moreover, if I'm going to implement tooling that can make sense of your "byte code", then I absolutely need a spec that's more specific than "it's JavaScript that might be optimized". This is making me more wary of building tooling around such a "bytecode", not less.
Then if I understand you correctly, you are in favor of a spec for something like asm.js? But perhaps the problem you see is that the same asm.js code will be fast in one browser and slow in another? Not sure I follow you, please correct me if not.
If I have that right, then yes, that's a valid concern: there will be performance differences, just like there already are between JS engines on specific benchmarks. Note that asm.js is much simpler to optimize than arbitrary JS, so those differences could shrink over time. But there are no guarantees with multiple vendor implementations.
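To make the "simpler to optimize" point concrete, here is a minimal sketch (module and function names are made up) of the asm.js coercion idioms: `x|0` declares an int and `+x` declares a double. Because every value's type is visible in the source, a validating engine can type-check and compile the whole module ahead of time instead of speculating at runtime the way it must for arbitrary JS -- and an engine without asm.js support still runs it as plain JavaScript.

```javascript
// Hypothetical asm.js-style module; runs as ordinary JS in any engine.
// Real asm.js expects stdlib to be the global object and heap to be an
// ArrayBuffer; neither is used by these two functions.
function AsmKernel(stdlib, foreign, heap) {
  "use asm";
  function addInt(a, b) {
    a = a | 0;            // parameter a: int
    b = b | 0;            // parameter b: int
    return (a + b) | 0;   // result coerced back to int
  }
  function halve(x) {
    x = +x;               // parameter x: double
    return +(x / 2.0);    // result: double
  }
  return { addInt: addInt, halve: halve };
}

var kernel = AsmKernel({}, {}, new ArrayBuffer(0x10000));
console.log(kernel.addInt(2, 3)); // 5
console.log(kernel.halve(3.0));   // 1.5
```

The coercions double as the type system: there is nothing a JIT has to guess, which is exactly why different engines could converge on similar performance for this subset.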
And that's the real issue. NaCl, Flash, etc. have one implementation, so you get predictable performance (and the same vulnerabilities...). But you don't get that with JavaScript, Java, C#, etc.
If NaCl were to become an industry standard somehow, then it would need multiple implementations, and it would have the same performance unpredictability. Except that NaCl is fairly straightforward to optimize, so in theory the differences could become small over time -- but the exact same is true of asm.js, as I said earlier.
> NaCl is in the perfect position to start down the road that leads to re-imagining what no-compromises sandboxed code looks like at the processor level.
Again, PNaCl is a better comparison - even Google is shifting from NaCl to PNaCl (according to their announcements).
I see no reason that asm.js cannot be as fast as PNaCl: both are portable and can use the same CPU-specific sandboxing mechanisms. In fact it would be interesting to benchmark the two right now.
> Then if I understand you correctly, you are in favor of a spec for something like asm.js?
Yes. Imagine I'm writing an "asm.js" backend for a debugger, coupled with toolchain support for DWARF. To tell you the truth, I'm not even sure where I'd start, since the spec doesn't expose a VM or its machine state -- but if it did, I'd need the spec for that.
> But perhaps the problem you see is that you worry the same asm.js code will be slow or fast depending on the browser? Not sure I follow you, please correct me if not.
That's part of it. Without a spec, I can't really rely on remotely equivalent performance, but that's hardly the only toolchain issue.
asm.js is just JavaScript; it isn't a new VM with exposed low-level binary details. You wouldn't need to use DWARF or write your own low-level debugger integration. You can of course debug code compiled to it; the right approach is to use the JS debuggers that all web browsers now ship, with source map support.
The goal with asm.js is to get the same level of performance as native code, or very close to it. But it isn't a new VM like PNaCl, it runs in an existing JS VM like all JavaScript does. That means it can use existing JS debugging, profiling, etc.
That's a matter of opinion; I actually prefer to debug C-compiled-to-JS over C-compiled-to-native these days. Mainly because I can script debugging procedures directly in the JS source and just run them.
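As a sketch of what "scripting a debugging procedure in the JS source" can mean (the helper and the wrapped function name are hypothetical, not part of any compiler's output): because the compiled output is ordinary JavaScript, you can wrap an exported function so every call and its result get logged, with no native debugger involved.

```javascript
// Hypothetical debugging helper: wrap a function so each call is traced.
function traced(fn, name) {
  return function () {
    var args = Array.prototype.slice.call(arguments);
    var result = fn.apply(null, args);
    console.log(name + "(" + args.join(", ") + ") -> " + result);
    return result;
  };
}

// Usage sketch, assuming a compiled module exports an "add" function:
// module.add = traced(module.add, "add");
var add = traced(function (a, b) { return a + b; }, "add");
add(2, 3); // logs: add(2, 3) -> 5
```

Doing the equivalent against native code means breakpoint scripts or a custom gdb extension; here it's three lines of the same language the program is already in.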
But sure, if you prefer gdb or such, then the web platform is not going to be a perfect match for you.