I just had yet another idea for optimizing Python that looks so plausible
that I guess someone else must have looked into it already (and, hence,
probably rejected it :-):

We add to the type structure a "type identifier" number: a small integer
for the common types (int=1, float=2, string=3, etc.) and 0 for everything
else. When eval_code2 sees, for instance, a MULTIPLY operation, it does
something like the following:

	case BINARY_MULTIPLY:
		w = POP();
		v = POP();
		code = (BINARY_MULTIPLY << 8) |
			(v->ob_type->tp_typeid << 4) |
			(w->ob_type->tp_typeid);
		x = (binopfuncs[code])(v, w);
		.... etc ...

The idea is that all 256 BINARY_MULTIPLY entries would be filled with
PyNumber_Multiply, except for a few common cases: the int*int entry could
point straight to int_mul(), etc.

Assuming the common cases really are more common than the uncommon ones,
jumping straight to the implementation function instead of mucking around
in PyNumber_Multiply and PyNumber_Coerce should easily offset the added
overhead of the shifts, ors and indexing.

Any thoughts?
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm
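A minimal standalone sketch of the table-dispatch idea follows. This is not
CPython code: the Value struct, the typeid constants and the generic_mul
fallback are invented here purely to illustrate the indexing scheme, with
one 16x16 table per opcode and the int*int slot overridden by a fast path.

	#include <stdio.h>

	#define TYPEID_OTHER 0
	#define TYPEID_INT   1
	#define TYPEID_FLOAT 2
	#define NTYPEIDS     16            /* type ids fit in 4 bits */

	typedef struct {
		int typeid;
		union { long i; double d; } u;
	} Value;

	typedef Value (*binopfunc)(Value, Value);

	/* Fast path: both operands are ints, multiply directly. */
	static Value int_mul(Value v, Value w)
	{
		Value r = { TYPEID_INT, { .i = v.u.i * w.u.i } };
		return r;
	}

	/* Generic fallback, standing in for PyNumber_Multiply + coercion. */
	static Value generic_mul(Value v, Value w)
	{
		double a = (v.typeid == TYPEID_INT) ? (double)v.u.i : v.u.d;
		double b = (w.typeid == TYPEID_INT) ? (double)w.u.i : w.u.d;
		Value r = { TYPEID_FLOAT, { .d = a * b } };
		return r;
	}

	/* One 16x16 table for multiply; every slot starts out pointing at
	   the generic routine and the common type pairs are overridden. */
	static binopfunc mul_table[NTYPEIDS][NTYPEIDS];

	static void init_tables(void)
	{
		int i, j;
		for (i = 0; i < NTYPEIDS; i++)
			for (j = 0; j < NTYPEIDS; j++)
				mul_table[i][j] = generic_mul;
		mul_table[TYPEID_INT][TYPEID_INT] = int_mul;
	}

	int main(void)
	{
		Value v = { TYPEID_INT, { .i = 6 } };
		Value w = { TYPEID_INT, { .i = 7 } };
		Value x;

		init_tables();
		/* What BINARY_MULTIPLY would do: index by the two type ids. */
		x = mul_table[v.typeid][w.typeid](v, w);
		printf("%ld\n", x.u.i);    /* 42, via the int*int fast path */
		return 0;
	}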