Hacker News

> It is most definitely JavaScript. It runs perfectly in other browsers - in fact it often runs faster than non-asm.js compiled code, so it is worthwhile even without new optimizations for it.

It can only run at speed if you interpret it as something that is not JavaScript.

In other words, the fact that it's valid JavaScript is essentially pointless, because it's totally useless (relative to actual native applications) unless you interpret it as something that's not JavaScript.
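To make the dispute concrete, here is a minimal sketch of the dialect in question (module shape and heap size are illustrative, not taken from the spec). It is ordinary JavaScript that any engine can run, but the `|0` coercions double as static type annotations that an asm.js-aware engine can compile ahead of time:

```javascript
// Minimal asm.js-style module (illustrative; names are made up).
// Any JS engine runs it as plain JavaScript; an asm.js-aware engine
// treats the |0 coercions as int32 type annotations and compiles the
// whole module ahead of time.
function MyModule(stdlib, foreign, heap) {
  "use asm";
  function add(x, y) {
    x = x | 0;          // parameter declared as int
    y = y | 0;
    return (x + y) | 0; // return value declared as int
  }
  return { add: add };
}

var mod = MyModule({}, {}, new ArrayBuffer(0x10000));
console.log(mod.add(2, 3)); // 5
```

In a browser without asm.js support this is just a slow-but-correct function; that is the backwards-compatibility argument in a nutshell.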

> asm.js, by design, can have pretty much the same level of performance as NaCl. Both can use the same sandboxing mechanisms, for example. Unless there is something specific in the asm.js spec you feel is preventing some additional level of optimization that is possible in NaCl - if so, what?

Stupid bytecode format aside (which incurs a cost for every single VM implementor, tool implementor, and developer that has to use the language, forever) ...

Here's a simple example: You can use NEON directly from NaCL/ARM. This matters on mobile. A lot.

Is asm.js going to define NEON and SSE intrinsics, too? If so, why the hell isn't asm.js just another output target for (P)NaCL so that people with Internet Explorer can run really really really really slow applications?

Or, how about the fact that NaCL can output ARM and x86 binaries directly, such that one doesn't need to AOT compile on every user's system, or introduce the cost, complexity, and overhead of JIT, when targeting standard/popular architectures.

How about this one -- I was going to write something up asking about thread-local storage and %gs-relative loads, or architecture-specific atomic operations on shared state, but then I realized -- asm.js doesn't define a threading model. At all. Unless I'm mistaken, it can't, because JavaScript itself doesn't define a shared state threading model.



Slow is not always "totally useless". It just degrades gracefully. Furthermore, the fact that it's based on JS makes the integration with the DOM much easier.

The bytecode format is not as big of a deal as you claim. 1,000 lines of code for the verifier.

There's nothing intrinsic to JS that forbids having SIMD support. In fact, it's under active discussion...

Having two back ends for PNaCl seems suboptimal. Why force all Web developers to generate two bytecode formats because we don't like the way asm.js looks?

AOT compilation isn't a panacea. You still have to verify it. So it's not like the NDK could just be adapted to the Web as-is. Besides, caching can mitigate the startup costs a lot.

Finally, JS has threading, and it could grow shared state threading in the future. This doesn't strike me as that big of an obstacle.


> Slow is not always "totally useless". It just degrades gracefully.

Slow isn't graceful, especially when your competitors aren't slow. I look forward to Mozilla advertising Firefox as "only 10x as slow as Chrome!".

In this case, it's even worse, because if asm.js is successful, then the fallback mechanism will simply become an awkward vestigial limb that's no longer required by any modern browser. Treating a vestigial limb as your first-order target is silly.

> Having two back ends for PNaCl seems suboptimal. Why force all Web developers to generate two bytecode formats because we don't like the way asm.js looks?

Because it's about users, not about developers. Generating optimized binaries means that users have a better user experience.

> Finally, JS has threading, and it could grow shared state threading in the future. This doesn't strike me as that big of an obstacle.

"I'll pay you tomorrow for a hamburger today" is getting old.


Slower is far better than not running at all. Backwards compatibility is how Web technologies (and, for that matter, most technologies--x86 for example) survive.

And asm.js' encoding has no impact on the user experience. That's precisely my point. Using a binary bytecode instead of interoperable JS would maybe make the parsing easier, but that doesn't make up for the costs this would burden developers with (not to mention the difficulties with two VMs sharing data and so forth, something which is terribly important but is always glossed over in these conversations).

Finally, regarding threads, we're talking about a new proposed standard here. There will always be work to do, since just taking the NDK and putting it on the Web is not an option. PNaCl isn't shipping yet. So the question is whether asm.js presents a shorter, more straightforward path to success than the alternatives. I believe it does.


> Slower is far better than not running at all.

Why? For the purposes for which asm.js is intended, the code may as well not be running at all, and users will be driven to upgrade their browser.

If you really want to support backwards compatibility, then make it a secondary target, not the primary one that you saddle yourself with for all time.

> So the question is whether asm.js presents a shorter, more straightforward path to success than the alternatives. I believe it does.

I believe NaCL and PNaCL provide a much saner path to an ultimately superior end, and would have a far greater chance of succeeding if Mozilla wasn't playing NIH to assuage their need to support JavaScript.

asm.js is competing with native, and they're starting out with an intentional disadvantage.


> For the purposes for which asm.js is intended, the code may as well not be running at all, and users will be driven to upgrade their browser.

That isn't true. Consider photo filters: you may want to have a photo filter written in asm.js for maximum performance on browsers that support it, but degrades to a suboptimal, but usable, experience on browsers that don't support it. Likewise, consider an asm.js-compiled mobile game: you may want it to be playable on desktops too, whose browsers generally contain enough horsepower to run JS-compiled mobile games at full speed even without special support for asm.js.

There are plenty of use cases in which "slower but backwards compatible" is extremely desirable. AAA games are not everything.

> If you really want to support backwards compatibility, then make it a secondary target, not the primary one that you saddle yourself with for all time.

Again, it's not worth forcing Web developers to compile multiple binaries, when the main difference here is how you parse the bytecode.

> asm.js is competing with native, and they're starting out with an intentional disadvantage.

PNaCl is starting out with the same disadvantage as asm.js, modulo surface syntax. In fact, asm.js arguably has an advantage over PNaCl. Starting from the JS AST that must be constructed by every browser anyway, the asm.js verifier is a mere 1,216 lines. For comparison, LLVM's BitcodeReader is over 3,000 lines. LLVM's Verifier is over 2,000.


> That isn't true. Consider photo filters: you may want to have a photo filter written in asm.js for maximum performance on browsers that support it, but degrades to a suboptimal, but usable, experience on browsers that don't support it. Likewise, consider an asm.js-compiled mobile game: you may want it to be playable on desktops too, whose browsers generally contain enough horsepower to run JS-compiled mobile games at full speed even without special support for asm.js.

You're competing with native, not with today's crappy JS webapps. "Just like native but slow!" is not a selling point to users that have a plethora of better performing, better integrated native applications to choose from.

You guys at Mozilla could just target NaCL/PNaCL, contribute asm.js as the second-tier means of pulling IE and Safari along with you, the whole industry moves past the disaster that is JS/DOM/CSS, and everyone is happier.

> Again, it's not worth forcing Web developers to compile multiple binaries, when the main difference here is how you parse the bytecode.

Again, it's ALL about users, not developers. Of course, it can be easy for developers too ...

cc -arch armv6 -arch armv7 -arch i386 -arch pnacl -c users_first.c -o users_first.o

... but at the end of the day, you have to stop thinking about the developers as your primary users if you want to produce a successful platform.

Developers matter -- but users are why we're all here. It's not as if Apple's users have been hurting because developers have to target multiple variants of ARM and learn a new programming language.

> PNaCl is starting out with the same disadvantage as asm.js, modulo surface syntax

That's not just surface syntax. As a tool maker, I spend an awful lot of time working on and with things that touch assembly/bytecode as "surface syntax". It matters, a lot, and JavaScript is a TERRIBLE bytecode syntax to have to deal with.

That said, it's also not the same disadvantage, because PNaCL implementations compile to native code; there's no fallback interpretation mechanism that exhibits the unusable performance profile of asm.js's fallback path.

That said, I think asm.js would be a great second-tier target for PNaCL in the case that one wishes to target a backwards browser, regardless of how slow it is.


> You guys at Mozilla could just target NaCL/PNaCL, contribute asm.js as the second-tier means of pulling IE and Safari along with you, the whole industry moves past the disaster that is JS/DOM/CSS, and everyone is happier.

Moving past JS/DOM/CSS is just not possible. Backwards compatibility is how Web technologies survive. There have been many attempts to try to redo the Web from the ground up: XHTML 2.0 for example. They did not succeed.

> Developers matter -- but users are why we're all here. It's not as if Apple's users have been hurting because developers have to target multiple variants of ARM and learn a new programming language.

And users do not care about the surface syntax of the bytecode. If there were some user-facing advantage to not using JS as the transport format, then sure, it might be worth not using it. But so far I've simply heard "I don't like JavaScript syntax". It's fine that you have that opinion, but it's not worth sacrificing backwards compatibility for it, since it doesn't matter to users.

Regarding shipping native ARM and x86 code alongside the fallback mechanism, I've already explained why that won't work: developers won't test against the portable version, so it might as well not exist. The Web will be locked into those hardware architectures for all time. That might be a cheap way to compete with native apps in the short term, but in the long term it is bad for hardware innovation. (For example, consider that, as azakai noted, ARM might not have taken off at all if we had done that early on.)

> That's not just surface syntax. As a tool maker, I spend an awful lot of time working on and with things that touch assembly/bytecode as "surface syntax". It matters, a lot, and JavaScript is a TERRIBLE bytecode syntax to have to deal with.

It only matters to a small subset of developers. Besides, if you really don't like it, just write a converter that converts it to the mnemonics of your choice. It's really quite trivial.

By your logic, we shouldn't gzip executables, because tools have a hard time reading gzipped assembly. But that's a silly argument: you just un-gzip them first. It's the same with asm.js. If your tool has a hard time reading asm.js, convert it to a "real" bytecode first.

Additionally, LLVM bitcode isn't really much better from a simplicity standpoint. As I already pointed out, the verifier and bitcode reader for LLVM is much larger than the asm.js verifier.

> That said, it's also not the same disadvantage, because PNaCL implementations compile to native code; there's no fallback interpretation mechanism that exhibits the unusable performance profile of asm.js's fallback profile.

Yeah. The fallback mechanism is that it doesn't run at all. As I already explained, there are applications for which it is much better to actually run, despite reduced performance.

And "unusable" really is a stretch. There are many apps written in Emscripten that run just fine in current browsers. Like I already said, AAA games aren't everything.


Sigh. You web guys are completely blinded to the rest of the engineering universe. You actively oppose all attempts to do anything genuinely new, and then say "look! It never works!"

Meanwhile, the rest of us non-web people grow increasingly tired of even trying to contribute or explain anything, since every novel idea gets shut down. The end result is that you've created a perfect echo chamber of self-fulfilling prophecy -- the web technology stack remains stuck and broken, and you actually seem to like it that way.

I don't work on AAA games. I work on development tools and end-user applications, where having good tooling directly translates to better user experiences.

The fact that you think no-compromises performance is only the purview of AAA games is exactly why you have no business being an OS vendor. I hope -- for the sake of our industry -- that Apple, Google, and Ubuntu eat Firefox OS' lunch.


> Sigh. You web guys are completely blinded to the rest of the engineering universe. You actively oppose all attempts to do anything genuinely new, and then say "look! It never works!"

I oppose technologies (such as PNaCl) that I feel are worse than alternatives (asm.js). The reason they are worse is that they are not backwards compatible.

> The fact that you think no-compromises performance is only the purview of AAA games is exactly why you have no business being an OS vendor. I hope -- for the sake of our industry -- that Apple, Google, and Ubuntu eat Firefox OS' lunch.

I never said that no-compromises performance is only the purview of AAA games. I said that for most applications that are not AAA games, running more slowly is better than not running at all.


The problem with opposing improved technologies is, simply put, that JS/DOM/CSS/HTTP are broken crufty accidents of design, and enforcing compatibility with them forever is what has prevented the web from moving forward, has stifled innovation, and ultimately is why the web could lose the app war.

Imagine if we'd used a privileged market position to religiously defend against the introduction of everything that wasn't backwards compatible with gopher? That's what you're doing to our industry today, and in that, you're almost as bad for our industry as Microsoft was bad for the web in the 90s.


> It can only run at speed if you interpret it as something that is not JavaScript.

By that argument any JS JIT is "no longer JS".

All JS JITS find cases where JS can be optimized as something simpler. For example CrankShaft and TraceMonkey find areas where variables are simply-typed and heavily optimize those.

This isn't surprising - to make JS be fast, you do need to find where you can make it go faster, by avoiding the "normal" JS dynamism where anything is possible. So the JIT optimizes it as something that is "not JS". Again, nothing new with asm.js there, JS JITs have been doing this since 2008 (and JITs in other languages far earlier).
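As a rough illustration of that kind of specialization: in a loop like the one below, an engine such as CrankShaft can observe that every value involved is an int32 and emit straight integer machine code, guarded by a deoptimization check in case a value of another type ever shows up.

```javascript
// Type-stable code: total and arr[i] are always int32 here, so a
// speculating JIT can compile the loop as plain integer arithmetic
// rather than generic dynamically-typed addition.
function sum(arr) {
  var total = 0;
  for (var i = 0; i < arr.length; i++) {
    total += arr[i];
  }
  return total;
}

console.log(sum([1, 2, 3, 4])); // 10
```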

> You can use NEON directly from NaCL/ARM [..] NaCL can output ARM and x86 binaries directly

Those are not portable, which JS must be. A better comparison might be PNaCl, which is like NaCl but has an intermediate portable format. PNaCl will of course have the same issues asm.js does with not having direct binaries that can just be loaded and run, not allowing use of CPU-specific code, etc.

If you don't care for portability, then the web/JS/asm.js/WebGL/etc etc. are likely not the best thing for you. Instead, a native app could make more sense.


> By that argument any JS JIT is "no longer JS".

JS JIT is an internal implementation detail. If a JIT exposed its internal bytecode (assuming it has one), you certainly wouldn't call it JavaScript.

asm.js's strict requirements expose that implementation detail. To make asm.js useful, you can't treat it as JS, in which case the fact that it's JavaScript is a pointless burden on the entire target community of developers and users.

> Those are not portable, which JS must be.

Users don't care about whether a particular application uses non-portable optimization strategies, they care about battery life and application performance. If that means I have to use NEON on ARM, SSE on x86, and provide a portable fallback implementation, then so be it -- as a developer, that's my job.

As a platform/OS vendor, your job is to make it possible for me to provide the best available user experience on the market.

This isn't the web development space; we can't just tell users to go get faster hardware, the way web developers tell IT to go get faster/bigger servers.

> PNaCl will of course have the same issues asm.js does with not having direct binaries that can just be loaded and run, not allowing use of CPU-specific code, etc.

You can target x86 and ARM specifically (which are likely to be the only architectures that matter for the next 5 years), and fallback to PNaCL for everyone else.

> If you don't care for portability, then the web/JS/asm.js/WebGL/etc etc. are likely not the best thing for you.

I don't care about sacrificing user experience for some irrational slavish devotion to JavaScript.

Of course I want portability, but not at the cost of providing the best user experience on the market.


> JS JIT is an internal implementation detail

asm.js optimizations are also an internal implementation detail. You don't need to do them, and the code still runs fast. Or you can do them in a variety of ways, not just the one being tested in Firefox. In fact I argued for optimizing in a very different way originally.

The point of asm.js is that it ensures you don't do things like use multiple types in a single variable, avoid undefined values cropping up, etc. That helps JS JITs in general, both existing ones as well as new optimizations made more feasible by the approach.

So asm.js does not expose any implementation details, no more than say CrankShaft and TraceMonkey do in the documents written about "how to write fast JS for modern JS engines" (which often say explicit things about "don't mix types" and so forth).
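The "annotations" at issue are ordinary JS coercion operators, so their behavior is observable in any engine. A small sketch of the two that asm.js leans on, `|0` (int) and unary `+` (double):

```javascript
// |0 coerces to int32 (truncating the fraction and wrapping mod 2^32);
// unary + coerces to double. asm.js reuses these as type declarations.
function toInt32(x)  { return x | 0; }
function toDouble(x) { return +x; }

console.log(toInt32(3.7));        // 3  (fraction dropped)
console.log(toInt32(4294967296)); // 0  (2^32 wraps to 0)
console.log(toDouble("5"));       // 5
```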

> Users don't care about whether a particular application uses non-portable optimization strategies

Of course users do. A portable application would be runnable on all of a user's devices; that's a huge plus. Just like users want to play their music from their iPod, laptop, TV, etc., they want to run their apps on all their devices as well. Portability makes that possible.

> You can target x86 and ARM specifically (which are likely to be the only architectures that matter for the next 5 years), and fallback to PNaCL for everyone else.

That's a big compromise. If we had done that before the rise of ARM, for example, ARM might never have achieved its current success.

But anyhow, of course there are different compromises to be made. The web and JS focus on true portability, with its downsides. If you personally are willing to compromise more to get better performance, then sure, another option might be better for you.


> asm.js optimizations are also an internal implementation detail.

Come on, really? If you require a full spec to define a very specific format, type annotations, and special designators to actually take advantage of it in any meaningful way, it's not an "implementation detail", because as the user, I have to care about it.

> So asm.js does not expose any implementation details, no more than say CrankShaft and TraceMonkey do in the documents written about "how to write fast JS for modern JS engines" (which often say explicit things about "don't mix types" and so forth).

That's exposing implementation details, too, and demonstrates a failing of JS.

> Of course users do. A portable application would be runnable from all the users' devices, that's a huge plus. Just like users want to play their music from their iPod, laptop, TV, etc., they want to run their apps on all their devices as well. Portability makes that possible.

Users want apps to run, and run well. They don't care how. Figuring out how is our job. Making users' lives suck more because we have lofty ideas is not doing our job.

Apple gets it. Google gets it. Even Ubuntu gets it.

Mozilla doesn't get it.

> That's a big compromise. If we had done that before the rise of ARM, for example, ARM might never have achieved its current success.

Not really. Apple and NeXT navigated these waters successfully for multiple decades via Mach-O and CFM fat binaries, and toolchains built around easily and efficiently supporting multiple architectures.

> But anyhow, of course there are different compromises to be made. The web and JS focus on true portability, with its downsides. If you personally are willing to compromise more to get better performance, then sure, another option might be better for you.

The web is competing with native applications. Now you're trying to compete with native operating systems, yet you're not willing to take the steps necessary to actually compete.

Ultimately, you're creating a two-tier system where platform vendors like yourself get decent performance and runtime environments in which you can produce things like Firefox, and 3rd party developers get crappy performance and runtime environments where we can produce webapps.

I'd love to see you write and deploy Firefox in asm.js, and then try to compete with Chrome.


> Come on, really? If you require a full spec to define a very specific format, type annotations, and special designators to actually take advantage of it in any meaningful way, it's not an "implementation detail", because as the user, I have to care about it.

First of all, it doesn't require a spec. We could have just done some heuristic optimizations like all JS engines have been doing since 2008, finding more cases where we can optimize and so forth - in fact, this was my initial idea for how to do this, as I mentioned earlier.

But we did decide to write a spec because (1) we want to be 100% open about this, and a spec makes it easier for others to learn about it, and (2) it helps us check we didn't miss anything because we have a formal type system.

>> So asm.js does not expose any implementation details, no more than say CrankShaft and TraceMonkey do in the documents written about "how to write fast JS for modern JS engines" (which often say explicit things about "don't mix types" and so forth).

> That's exposing implementation details, too, and demonstrates a failing of JS.

If so, then that exposes a failing of all JITs, including the JVM. All optimizing implementations expose details. People have optimized for the JVM for years.

If you can't stand anything between you and the underlying CPU, then nothing portable (like JavaScript, C#, Java, etc.) will satisfy you. Actually even a CPU might not, because CPUs also optimize in unpredictable ways; these same issues are dealt with at that level too.

Again, there is room for native apps. But there is also room for portable, standards-based apps. The web is the latter.

> Apple and NeXT navigated these waters successfully for multiple decades via Mach-O and CFM fat binaries, and toolchains built around easily and efficiently supporting multiple architectures

I do see your point, but that isn't quite the same. Apple fat binaries were of a platform Apple controlled. We are talking about the web, which no one controls. But again, yes, to some degree it is possible as you say to overcome such issues.

> The web is competing with native applications. Now you're trying to compete with native operating systems, yet you're not willing to take the steps necessary to actually compete.

I disagree. If we are 2x slower than native now, and 1.5x slower than native later on, we're competitive with native on that front. And we have some advantages over native, like portability, which can have long-term performance advantages (for example, we can easily switch to a different underlying CPU if a faster arch shows up). There are also short-term performance advantages to things like Firefox OS that only run web apps, like their graphics stack being much simpler than Android's or Linux's (you don't need another layer underneath the browser compositor, and can go right into GL).


> But we did decide to write a spec because (1) we want to be 100% open about this, and a spec makes it easier for others to learn about it, and (2) it helps us check we didn't miss anything because we have a formal type system.

If I'm going to target a toolchain at your runtime, and expect decent performance out of it, and expect to see consistent performance with other runtimes that also implement such behavior, then I (and you) need a spec.

Moreover, if I'm going to implement tooling that can make sense of your "byte code", then I absolutely need a spec that's more specific than "it's JavaScript that might be optimized".

This is making me more wary about being able to build reliable tooling around such a "bytecode", not less.

> If so, then that exposes a failing of all JITs, including the JVM.

Yes. But some languages and targets are much worse off than others.

> Actually even a CPU might not, because CPUs also optimize in unpredictable ways, these same issues are dealt with on that level too.

This is why we sometimes get down to careful statement/instruction ordering to avoid pipeline stalls, or using architecture-specific intrinsics, or worrying about false sharing.

> Again, there is room for native apps. But there is also room for portable, standards-based apps. The web is the latter.

The web could be both, if Mozilla and other web die-hards would critically evaluate the accident of history that is the modern web browser. What bothers me most of all is just how much Mozilla can hold back the industry. For an example of where the industry could go, look at what happened with virtual machine implementations.

First, they were particularly inefficient, and relied on tricks such as trap-and-emulate. Not all that different from how NaCL is working, especially on ARM, with funny tricks like load+store pseudo instructions. Gradually, hardware vendors took notice, and we saw instruction sets and hardware shift to add enhanced VM-specific functionality (and ultimately performance) -- first with VT-x, and now with VT-d (eg, IOMMUs).

NaCL is in the perfect position to start down the road that leads to re-imagining what no-compromises sandboxed code looks like on the processor level. asm.js is going in the complete opposite direction, and in doing so, has the potential to steer the entire industry away from a path that could introduce significant and beneficial changes in the realm of security, portability, and open platforms.


> If I'm going to target a toolchain at your runtime, and expect decent performance out of it, and expect to see consistent performance with other runtimes that also implement such behavior, then I (and you) need a spec. Moreover, if I'm going to implement tooling that can make sense of your "byte code", then I absolutely need a spec that's more specific than "it's JavaScript that might be optimized". This is making me more wary of building tooling around such a "bytecode", not less.

Then if I understand you correctly, you are in favor of a spec for something like asm.js? But perhaps the problem you see is that you worry the same asm.js code will be slow or fast depending on the browser? Not sure I follow you; please correct me if not.

If I had that right, then yes, that's a valid concern, there will be performance differences, just like there are between JS engines on specific benchmarks already. Note that asm.js is much simpler to optimize than arbitrary JS, so that could decrease in time. But there are no guarantees with multiple vendor implementations.

And that's the real issue. NaCl, Flash, etc have one implementation, so you get predictable performance (and the same vulnerabilities...). But you don't get that with JavaScript, Java, C#, etc.

If NaCl were to become an industry standard somehow, then it would need to have multiple implementations, and have the same unpredictability in terms of performance. Except that it is fairly straightforward to optimize NaCl, so in theory the differences could become small over time - but the exact same is true of asm.js as I said earlier.

> NaCL is in the perfect position to start down the road that leads to re-imagining what no-compromises sandboxed code looks like on the processor level.

Again, PNaCl is a better comparison - even Google is shifting from NaCl to PNaCl (according to their announcements).

I see no reason that asm.js cannot be as fast as PNaCl, both are portable and can use the same CPU-specific sandboxing mechanisms. In fact it would be interesting to benchmark the two right now.


> Then if I understand you correctly, you are in favor of a spec for something like asm.js?

Yes. Imagine I'm writing an "asm.js" backend for a debugger, coupled with toolchain support for DWARF. To tell you the truth, I'm not even sure where I'd start, since it's not like the spec exposes a VM or any machine state -- but if it did, I'd need the spec for that.

> But perhaps the problem you see is that you worry the same asm.js code will be slow or fast depending on the browser? Not sure I follow you, please correct me if not.

That's part of it. Without a spec, I can't really rely on remotely equivalent performance, but that's hardly the only toolchain issue.


asm.js is just JavaScript; it isn't a new VM with exposed low-level binary details. You wouldn't need to use DWARF or write your own low-level debugger integration. You can debug code on it, of course; the right approach would be to use the JS debuggers that all web browsers now have, with SourceMaps support.

The goal with asm.js is to get the same level of performance as native code, or very close to it. But it isn't a new VM like PNaCl, it runs in an existing JS VM like all JavaScript does. That means it can use existing JS debugging, profiling, etc.


> You can debug code on it of course, the right approach would be to use the JS debuggers that all web browsers now have, with SourceMaps support.

That's not really a replacement for a real language and architecture-(VM or otherwise)-aware debugger.

> That means it can use existing JS debugging, profiling, etc.

Which is a problem, because all of that stuff is awful compared to the state of the art of modern desktop and mobile tooling.


Matter of opinion, I actually prefer to debug C-compiled-to-JS than C-compiled-to-native these days. Mainly because I can script debugging procedures directly in the JS source and just run them.

But sure, if you prefer gdb or such, then the web platform is not going to be a perfect match for you.


If x86 and ARM become the favored platforms for full speed on the Web, you've essentially locked the Web into those architectures for all time. Web developers will realistically not optimize the PNaCl solution.

I disagree that this is worth the cost. This is not about "a slavish devotion to JavaScript"; your solution is fundamentally opposed to portability.


> Web developers will realistically not optimize the PNaCl solution.

It'll be "fast enough", which is what you're claiming for asm.js's backwards compatibility mode, and PNaCL is a whole heck of a lot faster than that.

> I disagree that this is worth the cost. This is not about "a slavish devotion to JavaScript"; your solution is fundamentally opposed to portability.

Users want applications that don't waste their battery, and that perform well. What do they care about architecture portability beyond the devices they actually have?

Fortunately, PNaCL solves that problem, too, as a fallback, while still being able to target x86/ARM without compromise.


> It'll be "fast enough", which is what you're claiming for asm.js's backwards compatibility mode, and PNaCL is a whole heck of a lot faster than that.

I haven't seen PNaCl benchmarks. Have you?

It is true that the compilers for asm.js are more immature than LLVM at this time, but there's nothing stopping asm.js from reaching that level.

Again, it's just syntax. You're complaining about the fact that the code is delivered in a backwards compatible surface syntax and extrapolating that to unfounded assumptions that it must be slow. It's really an absurd claim.



