On Wed, 19 Apr 2000, Christian Tismer wrote:
> Greg Stein wrote:
>...
> > Ah. Neat. "Automatic marking of shared-ness"
> >
> > Could work. That initial test for the thread id could be expensive,
> > though. What is the overhead of getting the current thread id?
>
> Zero if we cache it in the thread state.

You don't have the thread state at incref/decref time. And don't say
"_PyThreadState_Current" or I'll fly to Germany and personally kick your
ass :-)

>...
> > There is a race condition when an object "becomes shared".
> >
> > DECREF:
> >     if ( object is not shared )
> >         /* whoops! it just became shared! */
> >         --(op)->ob_refcnt;
> >     else
> >         atomic_decrement(op)
> >
> > To prevent the race, you'd need an interlock which is more expensive than
> > an atomic decrement.
>
> Really, sad but true.
>
> Are atomic decrements really so cheap, meaning "are they mapped
> to the atomic dec opcode"?

On some platforms and architectures, they *might* be.

On Win32, we call InterlockedIncrement(). No idea what that does, but I
don't think that it is a macro or compiler-detected thingy to insert
opcodes. I believe there is a function call involved.

pthreads do not define atomic inc/dec, so we must use a critical section
+ normal inc/dec operators.

Linux has a kernel macro for atomic inc/dec, but it is only valid if
__SMP__ is defined in your compilation context.

etc.

On platforms that do have an API (as Donn stated: BeOS has one; Win32 has
one), it will be cheaper than an interlock. Therefore, we want to take
advantage of an "atomic inc/dec" semantic when possible (and fall back to
slower stuff when not).

Cheers,
-g

--
Greg Stein, http://www.lyra.org/
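
A minimal C sketch of the check-then-act race described in the message
above: the "is it shared?" test and the plain decrement are two separate
steps, so another thread can mark the object shared, and start taking the
atomic path, in between. The obj_t type, the ob_shared flag, and the GCC
__sync_sub_and_fetch builtin (standing in for whatever atomic primitive
the platform offers) are assumptions for illustration only, not CPython
code:

    /* Toy stand-in for PyObject, with a hypothetical "shared" flag. */
    typedef struct {
        long ob_refcnt;
        int  ob_shared;   /* set once a second thread sees the object */
    } obj_t;

    static void obj_decref(obj_t *op)
    {
        if (!op->ob_shared) {
            /* Race window: another thread may set ob_shared and begin
             * doing atomic decrements right here.  The plain decrement
             * below can then interleave with that atomic operation and
             * one of the two updates is lost. */
            --op->ob_refcnt;
        }
        else {
            /* GCC builtin used as a stand-in for the platform's atomic
             * decrement (InterlockedDecrement on Win32, etc.). */
            __sync_sub_and_fetch(&op->ob_refcnt, 1);
        }
    }

    int main(void)
    {
        obj_t o = { 2, 0 };
        obj_decref(&o);   /* fine single-threaded; the hazard is concurrent use */
        return 0;
    }

Closing that window would take a lock around the whole test-plus-decrement,
i.e. exactly the interlock that is more expensive than the atomic decrement
it would be guarding.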
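
And a sketch of the conclusion: use an atomic primitive where the platform
provides one, and fall back to a critical section around the plain
operators where it does not (the pthreads case). The obj_t, obj_incref,
and obj_decref names and the single global lock are illustrative
assumptions, not a proposed API:

    #include <stdio.h>

    typedef struct { long ob_refcnt; } obj_t;   /* toy stand-in for PyObject */

    #ifdef _WIN32
    #include <windows.h>
    /* Win32 provides InterlockedIncrement/InterlockedDecrement. */
    static long obj_incref(obj_t *op) { return InterlockedIncrement(&op->ob_refcnt); }
    static long obj_decref(obj_t *op) { return InterlockedDecrement(&op->ob_refcnt); }
    #else
    #include <pthread.h>
    /* pthreads defines no atomic inc/dec: critical section + plain operators. */
    static pthread_mutex_t refcnt_lock = PTHREAD_MUTEX_INITIALIZER;
    static long obj_incref(obj_t *op)
    {
        long n;
        pthread_mutex_lock(&refcnt_lock);
        n = ++op->ob_refcnt;
        pthread_mutex_unlock(&refcnt_lock);
        return n;
    }
    static long obj_decref(obj_t *op)
    {
        long n;
        pthread_mutex_lock(&refcnt_lock);
        n = --op->ob_refcnt;
        pthread_mutex_unlock(&refcnt_lock);
        return n;
    }
    #endif

    int main(void)
    {
        obj_t o = { 1 };
        obj_incref(&o);                  /* refcount: 2 */
        if (obj_decref(&o) == 1)
            printf("refcount back to 1\n");
        return 0;
    }

A real implementation would presumably use per-object or striped locks
rather than one global mutex for the fallback, but the point here is only
the cost difference between the two branches.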