[Tim]
>> This is strange (according to me).  The real point of adding that
>> option would be to prevent bad code generation in the presence of
>> pretty common non-standard C code, but I don't know why a compiler
>> would complain about PyObject_IsTrue:
> [...]
>>     if (v == Py_True)   THIS IS LINE 1565 FOR ME

[martin@v.loewis.de]
> Notice that a compiler is allowed to infer that the test is always
> false.

I don't buy that.  I'll buy that the result of the comparison is
undefined by C, and that a hostile implementation of C could
arbitrarily decide to call all such expressions false -- or
arbitrarily decide to call all such expressions true.

> Py_True is
>
>     #define Py_True ((PyObject *) &_Py_TrueStruct)
>
> where _Py_TrueStruct is of type PyIntObject.  A PyObject* could never
> ever possibly point to a PyIntObject.

I don't buy that because C simply doesn't define the result of

    (PyObject *) &_Py_TrueStruct

Since it's undefined, a compiler would be within its rights to create
a pointer that happened to (or not to) compare equal to any other
particular PyObject*, or to segfault, or whatever.  But that's not the
point I was trying to make:  the text of the warning message Neil
quoted warned about dereferencing, but there is no dereferencing going
on in the line it's complaining about.

> If you still compare the two, you obviously assume the language is
> not standard C.

Indeed we do.  If we had to, we could cast both sides of such
comparisons to char* first.  The result of that is defined (they have
to point to the "lowest-addressed byte of the object", and that's
exactly what we intend gets compared in these cases).  So I'm not
worried about the specific things gcc is complaining about -- they
could be wormed around with mechanical pain, so are just nuisances.
I'm more worried about real problems <wink>:

>> The way in which Python fakes inheritance by hand means there are
>> potential problems all over the place (just about everywhere we cast
>> to or from PyObject*), but very likely very few real problems.  If
>> the only ones gcc complains about involve Py_{True,False,None}, it's
>> not being helpful.

> These are the ones that gcc recognizes.  It can and will generate bad
> code all over the place.

Has anyone tried running Python compiled without no-strict-aliasing?
If so, how bad was it?  I expect that Python's

    call
    test
    branch
    call
    test
    branch
    ...

coding style inhibits most reordering optimizations anyway.  The other
question is whether no-strict-aliasing prevents such optimizations.
If it does, then we should probably always use it.

programmers-who-want-aggressive-anti-aliasing-assumptions-should-
    stick-to-fortran-ly y'rs  - tim