On 2016-02-03 3:53 PM, francismb wrote:
> Hi,
>
> On 02/01/2016 10:43 PM, Yury Selivanov wrote:
>
>> We also need to deoptimize the code to avoid having too many cache
>> misses/pointless cache updates. I found that, for instance, LOAD_ATTR
>> is either super stable (hits 100% of the time) or really unstable, so
>> allowing 20 misses, again, seems to be alright.
>>
> Aren't those hits/misses a way to see how dynamic the code is? I mean,
> can't the current magic values (manually tweaked on a limited set) be
> self-tweaked/adapted based on those numbers?

Probably.

One way of tackling this is to give each optimized opcode a hit/miss
counter. When we have a "hit" we increment that counter; when it's a
miss, we decrement it.

I kind of have something like that right now:

https://github.com/1st1/cpython/blob/opcache5/Python/ceval.c#L3035

But there I only decrement the counter -- the idea is that LOAD_ATTR is
allowed to "miss" only 20 times before getting deoptimized.

I'll experiment with incrementing on hits and decrementing on misses
and see how that affects performance (a rough sketch of that idea is at
the end of this message).

An ideal way would be to calculate a hit/miss ratio over time for each
cached opcode, but that would be an expensive calculation.

Yury
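
A minimal, self-contained sketch of that increment-on-hit /
decrement-on-miss scheme with a deoptimization threshold. The struct,
constants, and function names below are hypothetical illustrations,
not the actual opcache code in ceval.c:

#include <stdio.h>

#define OPCACHE_INITIAL_CREDIT 20   /* misses allowed before deopt */
#define OPCACHE_MAX_CREDIT     60   /* cap so hits can't accumulate forever */

typedef struct {
    int credit;        /* incremented on hit, decremented on miss */
    int deoptimized;   /* once set, fall back to the generic opcode */
} opcache_entry;

static void opcache_hit(opcache_entry *e) {
    if (!e->deoptimized && e->credit < OPCACHE_MAX_CREDIT)
        e->credit++;
}

static void opcache_miss(opcache_entry *e) {
    if (e->deoptimized)
        return;
    if (--e->credit <= 0) {
        /* Too many misses relative to hits: stop consulting the cache
         * for this opcode so we pay no further bookkeeping cost. */
        e->deoptimized = 1;
    }
}

int main(void) {
    opcache_entry e = {OPCACHE_INITIAL_CREDIT, 0};

    /* A stable LOAD_ATTR site: hits keep the credit topped up. */
    for (int i = 0; i < 100; i++)
        opcache_hit(&e);

    /* An unstable site: a burst of misses drains the credit and
     * eventually deoptimizes the opcode. */
    for (int i = 0; i < 100 && !e.deoptimized; i++)
        opcache_miss(&e);

    printf("deoptimized=%d credit=%d\n", e.deoptimized, e.credit);
    return 0;
}

Capping the credit is what keeps a long streak of hits from letting a
call site absorb an unbounded number of later misses; the hit/miss
ratio approach mentioned above would avoid the cap but costs more per
cached opcode.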