Hi David. Any reason you run such a tiny subset of benchmarks?

On Tue, Dec 1, 2015 at 5:26 PM, Stewart, David C <david.c.stewart at intel.com> wrote:
>
> From: Fabio Zadrozny <fabiofz at gmail.com>
> Date: Tuesday, December 1, 2015 at 1:36 AM
> To: David Stewart <david.c.stewart at intel.com>
> Cc: "R. David Murray" <rdmurray at bitdance.com>, "python-dev at python.org" <python-dev at python.org>
> Subject: Re: [Python-Dev] Avoiding CPython performance regressions
>
> On Mon, Nov 30, 2015 at 3:33 PM, Stewart, David C <david.c.stewart at intel.com> wrote:
>
> On 11/30/15, 5:52 AM, "Python-Dev on behalf of R. David Murray" <python-dev-bounces+david.c.stewart=intel.com at python.org on behalf of rdmurray at bitdance.com> wrote:
>
>> There's also an Intel project posted about here recently that checks
>> individual benchmarks for performance regressions and posts the results
>> to python-checkins.
>
> The description of the project is at https://01.org/lp - Python results are indeed sent daily to python-checkins. (No results for Nov 30 and Dec 1 due to the Romania National Day holiday!)
>
> There is also a graphic dashboard at http://languagesperformance.intel.com/
>
> Hi Dave,
>
> Interesting, but which benchmark set are you running? From the graphs it seems to have a really high standard deviation, so I'm curious whether that is due to changes in the CPython codebase, to issues in the benchmark set, or to how the benchmarks are run... (it doesn't seem to be the benchmarks from https://hg.python.org/benchmarks/, right?).
>
> Fabio - my advice to you is to check out the daily emails sent to python-checkins. An example is https://mail.python.org/pipermail/python-checkins/2015-November/140185.html. If you still have questions, Stefan can answer (he is copied).
>
> The graphs are really just a manager-level indicator of trends, which I find very useful (I have it running continuously on one of the monitors in my office), but core developers might want to see day-to-day the effect of their changes. (Particularly if they thought a change was going to improve performance - it's nice to see whether you get community confirmation.)
>
> We run a subset of https://hg.python.org/benchmarks/ nightly and run the full set when we are evaluating our performance patches.
>
> Some of the "benchmarks" really do have a high standard deviation, which makes them hardly useful for measuring incremental performance improvements, IMHO. I like to see it spelled out so I can tell whether or not I should be worried about a particular delta.
>
> Dave
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/fijall%40gmail.com
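
To make Dave's point about standard deviation concrete, here is a minimal sketch (purely illustrative; the timings and the 2x-noise threshold are invented, not taken from the Intel runs) of one way to flag a benchmark delta only when it exceeds run-to-run noise, using Python's statistics module:

    import statistics

    def is_significant(baseline, patched, threshold=2.0):
        # Treat the standard deviation of all runs combined as a crude
        # estimate of run-to-run noise, and flag the delta only if the
        # difference between the mean timings exceeds `threshold` times it.
        noise = statistics.stdev(baseline + patched)
        delta = statistics.mean(patched) - statistics.mean(baseline)
        return abs(delta) > threshold * noise, delta, noise

    # Invented example: a small slowdown that is buried in run-to-run noise.
    baseline = [1.02, 0.98, 1.05, 0.97, 1.01]   # seconds per run
    patched = [1.04, 0.99, 1.06, 0.98, 1.03]
    significant, delta, noise = is_significant(baseline, patched)
    print("delta=%+.3fs noise=%.3fs significant=%s" % (delta, noise, significant))

With benchmarks that noisy, only deltas well outside the noise band are worth worrying about, which is exactly why having the deviation spelled out next to each result matters.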