As I've looked at Chrome timelines and thought about it, I have some concerns about the benchmark. I think it's somewhat biased in favor of optimize-js because optimize-js moves some CPU time from execution to compilation, but the benchmark only measures execution.
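For context, this is roughly the kind of transform optimize-js applies (a toy before/after I made up, not code from the actual test corpus); the parenthesis hint is what shifts the full parse from execution to compilation, though the exact heuristics vary by engine and version:

```js
// Before: most engines only do a quick pre-parse of this function body during
// the compile phase, then fully parse it again the first time it runs.
var mod = function () {
  // ...large module body...
  return { answer: 42 };
}();

// After optimize-js: the added parentheses are a hint that the function is
// invoked immediately, so the engine does the full parse up front, during
// compilation, instead of deferring it to execution.
var mod = (function () {
  // ...large module body...
  return { answer: 42 };
})();
```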
Basically, unoptimized code gets a quick pre-parse during the compilation phase and then a slow/complete parse during the execution phase (along with the actual execution of the code, obviously). After optimize-js has been run, the compilation phase does the slow/complete parse, and the execution phase just runs the code. But since the benchmark measures the time between executing the first line and the last line of the script, it captures only the execution phase, so the time added to the compilation phase gets lost. I confirmed this by looking at Chrome timelines: after optimize-js runs, time spent in the compilation phase goes up considerably, but the benchmark still reports only the execute time.
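To make the measurement window concrete, here is the shape I believe the benchmark has; the marker variables and the use of performance.now() are my own illustration, not the benchmark's actual code:

```js
// A timestamp taken by the first statement of the payload script and another
// taken by its last statement. Everything the engine does before the first
// statement runs -- including the parse work done in the "compile" phase --
// falls outside this window.

// first line of the payload script
var __start = performance.now();

// ...the library code being benchmarked...

// last line of the payload script
var __end = performance.now();
console.log('execute-phase time (ms):', __end - __start);
```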
I think the fairest benchmark is compilation + first execution, as this is what most pages care about for first load. What I don't know is how to measure that. Here are some ideas, all problematic:
One idea is to call eval() on the code. The other possibility is to download the code and then eval() it. The benefit here is that you are definitely capturing compilation + execution without getting any network load time mixed in. The downside is that I could totally believe that browsers disable some perf optimizations for eval, so it's possible the numbers will be misleading.

Does this make sense? Do you have other ideas for how to measure this (or other thoughts)?
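For concreteness, here is a rough sketch of the download-then-eval() idea; the file name payload.js and the use of fetch() and performance.now() are just my illustration choices:

```js
// Fetch the script text first so network time stays out of the measured
// window, then time the eval() call, which should cover compile + first
// execution of the payload.
fetch('payload.js')
  .then(function (res) { return res.text(); })
  .then(function (source) {
    var start = performance.now();
    eval(source); // direct eval; engines may deoptimize this, which is the worry above
    var end = performance.now();
    console.log('compile + first execution (ms):', end - start);
  });
```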