Got it -- fair enough. We deploy so often where I work (a couple of times a week at least) that 104 days seems like an eternity. But I can see where for a very stable file server or something you might well run it that long without deploying. Then again, why are you doing performance tuning on a "very stable server"?

-Ben

On Mon, Oct 16, 2017 at 11:58 AM, Guido van Rossum <guido at python.org> wrote:

> On Mon, Oct 16, 2017 at 8:37 AM, Ben Hoyt <benhoyt at gmail.com> wrote:
>
>> I've read the examples you wrote here, but I'm struggling to see what the
>> real-life use cases are for this. When would you care about *both* very
>> long-running servers (104 days+) and nanosecond precision? I'm not saying
>> it could never happen, but would want to see real "experience reports" of
>> when this is needed.
>
> A long-running server might still want to log precise *durations* of
> various events. (Durations of events are the bread and butter of server
> performance tuning.) And for this it might want to use the most precise
> clock available, which is perf_counter(). But if perf_counter()'s epoch is
> the start of the process, after 104 days it can no longer report ns
> precision due to float rounding (even though the internal counter does not
> lose ns).
>
> --
> --Guido van Rossum (python.org/~guido)
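
For reference, a minimal sketch of the arithmetic behind the 104-day figure, assuming perf_counter() returns seconds as a C double with a 53-bit significand (this is the line of reasoning behind the integer-nanosecond clocks such as time.perf_counter_ns() proposed in PEP 564):

    import math

    # A double's 53-bit significand can represent at most 2**53 consecutive
    # integer nanosecond ticks exactly; past that, ticks start to collide.
    max_exact_ns = 2 ** 53
    print(max_exact_ns / 1e9 / 86400)   # ~104.25 days

    # At that magnitude, the gap between adjacent floats (in seconds) is
    # already wider than one nanosecond.  math.ulp() requires Python 3.9+.
    t = 2 ** 53 * 1e-9                  # ~104.25 days expressed in seconds
    print(math.ulp(t))                  # ~1.86e-09 s, i.e. > 1 ns spacing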