Benchmark unfairly excludes compilation time. · Issue #37 · nolanlawson/optimize-js

As I've looked at Chrome timelines and thought about it, I have some concerns about the benchmark. I think it's somewhat biased in favor of optimize-js because optimize-js moves some CPU time from execution to compilation, but the benchmark only measures execution.

Basically, unoptimized code does a quick parse during the compilation phase, and then does a slow/complete parse during the execution phase (along with actually executing the code, obviously). After optimize-js has been run, the compilation phase does a slow/complete parse, and the execution phase just runs the code. But since the benchmark measures the time between executing the first line and the last line of the script, it captures only the execution phase, which means the time increase in the compilation phase gets lost. I confirmed this by looking at Chrome timelines: after optimize-js runs, the compilation phase goes up considerably, but the benchmark only reports the execute time.
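
For concreteness, the transform behind this shift looks roughly like the following (a minimal sketch, not taken from the benchmark itself): optimize-js parenthesizes immediately-invoked functions, which is the hint most engines use to skip the lazy pre-parse and do the full parse up front.

```js
// Before: most engines pre-parse this function lazily during compilation,
// then re-parse it in full when it is actually invoked during execution.
!function () {
  console.log('hello');
}();

// After optimize-js: the extra parentheses mark it as an IIFE, so the full
// parse happens eagerly, in the compilation phase instead of at execution.
!(function () {
  console.log('hello');
})();
```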

I think the fairest benchmark is compilation + first execution, as this is what most pages care about for first load. What I don't know is how to measure that. Here are some ideas, all problematic:

  1. Start measurement from the moment the script element is added to the DOM (sketched below). This is what the cost-of-small-modules benchmark does, and it clearly shows time moving from the execution phase to the compilation/loading phase when you use optimize-js. The downside, of course, is that loading time also gets counted in the compilation phase. If all the files are served locally, this probably isn't a huge issue, but it is a source of error in the measurements.
  2. Start measurement from the moment the script element is added to the DOM, but subtract the network time reported by the Resource Timing API (sketched below). This is the same as 1, except that you use the Resource Timing API to see how long the script took to load from the network and subtract that from the measurement. This would reduce the network-based error in 1, but it may not work perfectly, because browsers might start the compilation phase before receiving the last byte of the script; if so, subtracting the full load time would hide some of the compilation phase. More conservatively, you could subtract only TTFB from the loading/compilation phase. Also, Resource Timing isn't available in Safari.
  3. Download the script with XHR/fetch, and call eval() on it (sketched below). The benefit here is that you are definitely capturing compilation + execution without any network load time mixed in. The downside is that I could totally believe that browsers disable some perf optimizations for eval'd code, so it's possible the numbers will be misleading.
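
A minimal sketch of idea 1, assuming the test page controls the payload and appends a call to a hypothetical `window.__benchmarkDone()` hook as the last line of the script under test:

```js
// Idea 1 (sketch): start the clock when the <script> element is inserted into
// the DOM, stop it when the last line of the payload runs. `__benchmarkDone`
// is a hypothetical hook appended to the end of the script under test.
function measureScript(url) {
  return new Promise(function (resolve) {
    var script = document.createElement('script');
    script.src = url;
    window.__benchmarkDone = function () {
      resolve(performance.now() - start); // load + compile + first execution
    };
    var start = performance.now();
    document.body.appendChild(script);
  });
}
```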
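Idea 2 could build on the same helper and subtract the network portion reported by Resource Timing. A sketch, assuming `measureScript` from above and that the resource entry's name matches the resolved script URL:

```js
// Idea 2 (sketch): subtract the network time reported by the Resource Timing
// API from the total measured by measureScript(). Subtracting only TTFB
// (responseStart - startTime) is the more conservative variant mentioned above.
function measureScriptMinusNetwork(url) {
  return measureScript(url).then(function (total) {
    var entry = performance.getEntriesByName(new URL(url, location.href).href)[0];
    if (!entry) return total; // no Resource Timing entry (e.g. Safari)
    var fullLoad = entry.responseEnd - entry.startTime;
    var ttfb = entry.responseStart - entry.startTime;
    return { minusLoad: total - fullLoad, minusTTFB: total - ttfb };
  });
}
```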
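And a sketch of idea 3, fetching the source first so that only the eval() call is timed (with the caveat above that engines may treat eval'd code differently):

```js
// Idea 3 (sketch): download the script body up front, then time only the
// eval() call, so no network time can leak into the measurement.
function measureEval(url) {
  return fetch(url)
    .then(function (res) { return res.text(); })
    .then(function (src) {
      var start = performance.now();
      (0, eval)(src);                   // indirect eval, runs in global scope
      return performance.now() - start; // compile + first execution
    });
}
```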

Does this make sense? Do you have other ideas how to measure this (or other thoughts)?
