> (1) It requires a pause to compile a bunch of stuff before you even get going, while in our case the LLInt would have executed it a few times already.
I think for games you actually want the pause ahead of time, during the loading screen. Having the first few seconds stutter isn't a good experience.
> (2) Even for asm.js-style code, dynamic profiling info can help you optimize things more than you could with pure AOT. For example, you can make inlining decisions on the fly, based on what functions are actually called.
Well, for asm.js code the static inlining is already done by LLVM/Emscripten before the code is even shipped, so I'm not sure how much of a benefit this actually brings in practice—you can only choose to inline functions more aggressively than LLVM already did, not less aggressively. (Might help work around Emscripten's outlining pass, however.) You also eat CPU cycles that could be used for the app by doing non-AOT compilation (though, granted, Emscripten apps of today tend to use only one core, so background compilation will fix this on multicore).
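For what it's worth, indirect calls are the one place where dynamic profiling could plausibly pay off here: a call through an asm.js function table can't be devirtualized statically by LLVM, but a JIT that profiles which table entries are actually hit could speculatively inline the target. A minimal asm.js-style sketch (hypothetical module, not actual Emscripten output):

```javascript
function Module(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) { a = a | 0; b = b | 0; return (a + b) | 0; }
  function sub(a, b) { a = a | 0; b = b | 0; return (a - b) | 0; }
  // Statically this is just an indirect call through FTABLE; a
  // profiling JIT that observes `i & 1` is always 0 at this site
  // could speculatively inline `add` (with a guard on the index).
  function call(i, a, b) {
    i = i | 0; a = a | 0; b = b | 0;
    return FTABLE[i & 1](a | 0, b | 0) | 0;
  }
  var FTABLE = [add, sub];
  return { call: call };
}
```

(The snippet also runs as plain JavaScript, since asm.js is a subset of it.)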
> (3) Caching compiled code is a good idea for more than just asm.js, no need to make it a special case.
No argument there, although it's harder to handle the full generality of JavaScript due to the temptation to bake pointers into the jitcode. If you can make it work though, hey, that's awesome.
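To make the pointer-baking hazard concrete, here's a toy model in plain JavaScript (simulated addresses and invented names, nothing JSC-specific): code that bakes in an object's current address is only valid for the process it was compiled in, while code that indirects through a slot table can be refilled on reload and is therefore cacheable.

```javascript
// Simulated "process heap": the same object lives at a different
// address each run (think ASLR, or a moving GC).
function makeHeap(address) {
  return { address, memory: new Map([[address, 42]]) };
}

// Baked flavor: the generated code embeds the object's current
// address as an immediate, so it is wrong on any other heap.
function compileBaked(heap) {
  const baked = heap.address;            // pointer baked into the code
  return (h) => h.memory.get(baked);
}

// Cacheable flavor: the code only knows a stable slot name; the slot
// is refilled when the cached code is loaded into a new process.
function compileIndirect() {
  return (h, slots) => h.memory.get(slots.OBJ);
}
```

Run the baked code against a "second process" and it silently reads the wrong location, which is exactly why general JS jitcode is hard to persist; the indirect version just needs its slot table repopulated.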
The JSC folks' hypothesis is that JSC will be able to get very good results, without any special-case handling for code labeled as asm.js, just by having a great general-purpose compiler, particularly with more tuning. On the things we profiled, there wasn't a case where AOT, or even triggering the fourth tier earlier, seemed like the most valuable thing to do.
If we turn out to be wrong, though, we will not be religious about it or anything.