Showing content from https://mail.python.org/pipermail/python-dev/2011-November.txt below:

From solipsis at pitrou.net Wed Nov 23 05:33:31 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 23 Nov 2011 05:33:31 +0100
Subject: [Python-Dev] Python 3.4 Release Manager
References: <4ECC6D9D.4020105@hastings.org> <0BC15048-D383-41F9-862D-667DB1C50078@gmail.com>
Message-ID: <20111123053331.10f04486@pitrou.net>

On Tue, 22 Nov 2011 20:27:24 -0800
Raymond Hettinger wrote:
> >
> > But look! I'm already practicing: NO YOU CAN'T CHECK THAT IN. How's that? Needs work?
>
> You could try a more positive leadership style: THAT LOOKS GREAT, I'M SURE THE RM FOR PYTHON 3.5 WILL LOVE IT ;-)

How about: PHP 5.5 IS NOW OPEN FOR COMMIT ?

From senthil at uthcode.com Wed Nov 23 05:50:19 2011
From: senthil at uthcode.com (Senthil Kumaran)
Date: Wed, 23 Nov 2011 12:50:19 +0800
Subject: [Python-Dev] Python 3.4 Release Manager
In-Reply-To: <4ECC6D9D.4020105@hastings.org>
References: <4ECC6D9D.4020105@hastings.org>
Message-ID:

On Wed, Nov 23, 2011 at 11:50 AM, Larry Hastings wrote:
> I've volunteered to be the Release Manager for Python 3.4. The FLUFL has

That's cool. But just my thought, wouldn't it be better for someone
who regularly commits, fixes bugs and feature requests be better for a
RM role? Once a developer gets bored with those and wants more, could
take up RM role. Is there anything wrong with this kind of thinking?

Thanks,
Senthil

From ncoghlan at gmail.com Wed Nov 23 07:32:31 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 23 Nov 2011 16:32:31 +1000
Subject: [Python-Dev] Python 3.4 Release Manager
In-Reply-To:
References: <4ECC6D9D.4020105@hastings.org>
Message-ID:

On Wed, Nov 23, 2011 at 2:50 PM, Senthil Kumaran wrote:
> On Wed, Nov 23, 2011 at 11:50 AM, Larry Hastings wrote:
>> I've volunteered to be the Release Manager for Python 3.4. The FLUFL has
>
> That's cool. But just my thought, wouldn't it be better for someone
> who regularly commits, fixes bugs and feature requests be better for a
> RM role?
> Once a developer gets bored with those and wants more, could
> take up RM role. Is there anything wrong with this kind of thinking?

The main (thoroughly informal) criteria are having commit privileges,
having shown some evidence of "getting it" when it comes to the release
process and then actually putting your hand up to volunteer. Most people
who pass the second criterion seem to demonstrate this odd reluctance to
meet the third criterion ;)

There's probably a fourth criterion of the other devs not going "Arrg,
no, not *them*!", but, to my knowledge, that's never actually come up...

I'm sure Larry will now be paying close attention as Georg shepherds 3.3
towards release next year, so it sounds like a perfectly reasonable idea
to me. +1

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From tjreedy at udel.edu Wed Nov 23 07:49:28 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 23 Nov 2011 01:49:28 -0500
Subject: [Python-Dev] cpython: fix compiler warning by implementing this more cleverly
In-Reply-To:
References: <20111122223211.7425c55b@pitrou.net> <20111122224301.1b72ccb7@pitrou.net>
Message-ID:

On 11/22/2011 7:42 PM, Benjamin Peterson wrote:
> 2011/11/22 Antoine Pitrou :
>> On Tue, 22 Nov 2011 16:42:35 -0500
>> Benjamin Peterson wrote:
>>> 2011/11/22 Antoine Pitrou :
>>>> On Tue, 22 Nov 2011 21:29:43 +0100
>>>> benjamin.peterson wrote:
>>>>> http://hg.python.org/cpython/rev/77ab830930ae
>>>>> changeset: 73697:77ab830930ae
>>>>> user: Benjamin Peterson
>>>>> date: Tue Nov 22 15:29:32 2011 -0500
>>>>> summary:
>>>>> fix compiler warning by implementing this more cleverly
>>>>
>>>> You mean "more obscurely"?
>>>> Obfuscating the original intent in order to disable a compiler warning
>>>> doesn't seem very wise to me.
>>>
>>> Well, I think it makes sense that the kind tells you how many bytes are in it.
>>
>> Yes, but "kind * 2 + 2" looks like a magical formula, while the
>> explicit switch let you check mentally that each estimate was indeed
>> correct.
>
> I don't see how it's more magic than hardcoding 4, 6, and 10. Don't
> you have to mentally check that those are correct?

I personally strongly prefer the one-line formula to the hardcoded magic
numbers calculated from the formula. I find it much more readable. To me,
the only justification for the switch would be if there is a serious worry
about the kind being changed to something other than 1, 2, or 4. But the
fact that this is checked with an assert that can be optimized away negates
that. The one-liner could be followed by

  assert(kind==1 || kind==2 || kind==4)

which would also serve to remind the reader of the possibilities. You could
even follow the formula with /* 4, 6, or 10 */. I think you reverted too
soon.

--
Terry Jan Reedy

From ncoghlan at gmail.com Wed Nov 23 08:07:15 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 23 Nov 2011 17:07:15 +1000
Subject: [Python-Dev] cpython: fix compiler warning by implementing this more cleverly
In-Reply-To:
References: <20111122223211.7425c55b@pitrou.net> <20111122224301.1b72ccb7@pitrou.net>
Message-ID:

On Wed, Nov 23, 2011 at 4:49 PM, Terry Reedy wrote:
> I personally strongly prefer the one-line formula to the hardcoded magic
> numbers calculated from the formula. I find it much more readable. To me,
> the only justification for the switch would be if there is a serious worry
> about the kind being changed to something other than 1, 2, or 4. But the
> fact that this is checked with an assert that can be optimized away negates
> that. The one-liner could be followed by
>   assert(kind==1 || kind==2 || kind==4)
> which would also serve to remind the reader of the possibilities. You could
> even follow the formula with /* 4, 6, or 10 */. I think you reverted too
> soon.
+1 to what Terry said here, although I would add a genuinely explanatory
comment that gives the calculation meaning:

  /* For each character, allow for "\U" prefix and 2 hex digits per byte */
  expandsize = 2 + 2 * kind;

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From nad at acm.org Wed Nov 23 08:12:21 2011
From: nad at acm.org (Ned Deily)
Date: Tue, 22 Nov 2011 23:12:21 -0800
Subject: [Python-Dev] Promoting Python 3 [was: PyPy 1.7 - widening the sweet spot]
References: <20111122095509.474420a6@limelight.wooz.org> <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp> <9A04EB61-4EA7-4D6C-8676-50AF6DD1373A@masklinn.net> <87fwhfqywr.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID:

In article <87fwhfqywr.fsf at uwakimon.sk.tsukuba.ac.jp>,
"Stephen J. Turnbull" wrote:
> I haven't had the nerve to do this on MacPorts because "port" is such
> a flaky thing (not so much port itself, but so many ports assume that
> the port maintainer's local configuration is what others' systems use,
> so I stay as vanilla as possible -- I rather doubt that many ports are
> ready for Python 3, and I'm not willing to be a guinea pig).

I think your fears are unfounded. MacPorts' individual port files are
supposed to be totally independent of the setting of 'port select'. In
other words, there are separate ports for each Python version, i.e.
py24-distribute, py25-distribute, py26-distribute, py27-distribute,
py31-distribute, and py32-distribute. Or, for ports that are not
principally Python packages, there may be port variants, i.e. +python27,
+python32, etc. If you do find a port that somewhere uses an unversioned
'python', you should report it as a bug; they will fix that. Also,
fairly recently, the MacPorts project introduced a python ports group
infrastructure behind the scenes that makes it possible for them to
maintain one meta portfile that will generate ports for each of the
supported Python versions also supported by the package.
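[Editor's note on the expandsize discussion above: the 2 + 2 * kind worst case can be sanity-checked from pure Python, since the unicode-escape codec emits \xNN, \uNNNN, or \UNNNNNNNN depending on how wide a character is. This is a sketch, not the C code under discussion, and the sample characters are my own choices, one per PEP 393 storage kind:]

```python
# Worst-case escape length per character, by PEP 393 "kind" (bytes per char):
#   kind 1 -> \xNN (4 chars), kind 2 -> \uNNNN (6), kind 4 -> \UNNNNNNNN (10)
samples = {1: "\xe9", 2: "\u20ac", 4: "\U0001f600"}
for kind, ch in samples.items():
    expandsize = 2 + 2 * kind  # "\U" prefix plus 2 hex digits per byte
    assert len(ch.encode("unicode_escape")) == expandsize
```

[For ASCII characters the escape is shorter, so the formula is an upper bound used for buffer sizing.]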
The project has been busily converting Python package port files over to this new system and, thus, increasing the number of ports available for Python 3.2. Currently, I count 30 'py32' ports and '38 'py31' ports compared to 468 'py26' and 293 'py27' ports so, yes, there is still a lot to be done. But my observation of the MacPorts project is that they respond well to requests. If people request existing packages be made available for py32, or - even better - provide patches to do so, it will happen. Also right now besides the Python port group transition, the project has been swamped with issues arising from the Xcode 4 introduction for Lion, mandating the transition from gcc to clang or llvm-gcc. -- Ned Deily, nad at acm.org From python-dev at masklinn.net Wed Nov 23 08:15:10 2011 From: python-dev at masklinn.net (Xavier Morel) Date: Wed, 23 Nov 2011 08:15:10 +0100 Subject: [Python-Dev] Promoting Python 3 [was: PyPy 1.7 - widening the sweet spot] In-Reply-To: <87fwhfqywr.fsf@uwakimon.sk.tsukuba.ac.jp> References: <20111122095509.474420a6@limelight.wooz.org> <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp> <9A04EB61-4EA7-4D6C-8676-50AF6DD1373A@masklinn.net> <87fwhfqywr.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: On 2011-11-23, at 04:51 , Stephen J. Turnbull wrote: > Xavier Morel writes: >> On 2011-11-22, at 17:41 , Stephen J. Turnbull wrote: >>> Barry Warsaw writes: > >>>> Hopefully, we're going to be making a dent in that in the next version of >>>> Ubuntu. > >>> This is still a big mess in Gentoo and MacPorts, though. MacPorts >>> hasn't done anything about ceating a transition infrastructure AFAICT. > >> What kind of "transition infrastructure" would it need? It's definitely >> not going to replace the Apple-provided Python out of the box, so >> setting `python` to a python3 is not going to happen. > > Sure, but many things do shadow Apple-provided software if you set > PATH=/opt/local/bin:$PATH. 
> Some I'm sure do, but "many" is more doubtful, and I have not seen any do that in the Python ecosystem: macports definitely won't install a bare (unversioned) `python` without the user asking. > I'm not sure what infrastructure is required, but I can't really see > MacPorts volunteers doing a 100% conversion the way that Ubuntu's paid > developers can. So there will be a long transition period, and I > wouldn't be surprised if multiple versions of Python 2 and multiple > versions of Python 3 will typically need to be simultaneously > available to different ports. That's already the case so it's not much of a change. > >> It doesn't define a `python3`, so maybe that? > A python3 symlink or script would help a little bit, but I don't think > that's necessary or sufficient, because ports already can and do > depend on Python x.y, not just Python x. Yes indeed, which is why I was wondering in the first place: other distributions are described as "fine" because they have separate Python2 and Python3 stacks, macports has a Python stack *per Python version* so why would it be more problematic when it should have even less conflicts? >> Macports provide `port select` which I believe has the same function >> (you need to install the `python_select` for it to be configured for >> the Python group), the syntax is port `select --set python $VERSION`: > > Sure. > > I haven't had the nerve to do this on MacPorts because "port" is such > a flaky thing (not so much port itself, but so many ports assume that > the port maintainer's local configuration is what others' systems use, > so I stay as vanilla as possible -- I rather doubt that many ports are > ready for Python 3, and I'm not willing to be a guinea pig). That is what I'd expect as well, I was just giving the corresponding tool to the gentoo version thereof. 
> The problem that I've run into with Gentoo is that *even when the > ebuild is prepared for Python 3* assumptions about the Python current > when the ebuild is installed/upgraded gets baked into the installation > (eg, print statement vs. print function), but some of the support > scripts just call "python" or something like that. OTOH, a few > ebuilds don't support Python 3 (or in a ebuild that nominally supports > Python 3, upstream does something perfectly reasonable for Python 2 > like assume that Latin-1 characters are acceptable in a ChangeLog, and > the ebuild maintainer doesn't test under Python 3 so it slips through) > so I have to do an eselect dance while emerging ... and in the > meantime things that expect Python 3 as the system Python break. > > So far, in Gentoo I've always been able to wiggle out of such problems > by doing the eselect dance two or three times with the ebuild that is > outdated, and then a couple of principal prerequisites or dependencies > at most. > > Given my experience with MacPorts I *very much* expect similar > issues with its ports. Yes I would as well, although: 1. A bare `python` call would always call into the Apple-provided Python, this has no reason to change so ports doing that should not be affected 2. Few ports should use Python (therefore assume things about Python) in their configuration/installation section (outside upstream's own assumptions): ports are tcl, not bash, so there shouldn't be too much reason to call Python from them From hodgestar+pythondev at gmail.com Wed Nov 23 10:06:18 2011 From: hodgestar+pythondev at gmail.com (Simon Cross) Date: Wed, 23 Nov 2011 11:06:18 +0200 Subject: [Python-Dev] Python 3.4 Release Manager In-Reply-To: References: <4ECC6D9D.4020105@hastings.org> Message-ID: On Wed, Nov 23, 2011 at 6:50 AM, Senthil Kumaran wrote: > That's cool. ?But just my thought, wouldn't it be better for someone > who regularly commits, fixes bugs and feature requests be better for a > RM role? 
> Once a developer gets bored with those and wants more, could
> take up RM role. Is there anything wrong with this kind of thinking?

There is something to be said for letting those people continue to
regularly commit and fix bugs rather than saddling them with the RM
role. :)

Schiavo
Simon

From stephen at xemacs.org Wed Nov 23 10:24:18 2011
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Wed, 23 Nov 2011 18:24:18 +0900
Subject: [Python-Dev] Python 3.4 Release Manager
In-Reply-To: <20111123053331.10f04486@pitrou.net>
References: <4ECC6D9D.4020105@hastings.org> <0BC15048-D383-41F9-862D-667DB1C50078@gmail.com> <20111123053331.10f04486@pitrou.net>
Message-ID: <87ehwzqji5.fsf@uwakimon.sk.tsukuba.ac.jp>

Antoine Pitrou writes:
> On Tue, 22 Nov 2011 20:27:24 -0800
> Raymond Hettinger wrote:
> > >
> > > But look! I'm already practicing: NO YOU CAN'T CHECK THAT IN. How's that? Needs work?
> >
> > You could try a more positive leadership style: THAT LOOKS GREAT, I'M SURE THE RM FOR PYTHON 3.5 WILL LOVE IT ;-)
>
> How about: PHP 5.5 IS NOW OPEN FOR COMMIT ?

I thought Larry's version was somewhat more encouraging.

From victor.stinner at haypocalc.com Wed Nov 23 10:40:36 2011
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Wed, 23 Nov 2011 10:40:36 +0100
Subject: [Python-Dev] cpython: fix compiler warning by implementing this more cleverly
In-Reply-To:
References:
Message-ID: <9311648.o2tbS2xZmn@dsk000552>

Le Mercredi 23 Novembre 2011 01:49:28 Terry Reedy a écrit :
> The one-liner could be followed by
>   assert(kind==1 || kind==2 || kind==4)
> which would also serve to remind the reader of the possibilities.

For a ready string, kind must be 1, 2 or 4. We might rename "kind" to
"charsize" because its value changed from 1, 2, 3 to 1, 2, 4 (to make it
easy to compute the size of a string: length * kind).

You are not supposed to see the secret kind==0 case. This value is only
used for strings created by _PyUnicode_New() and not ready yet:

  str = _PyUnicode_New()
  /* use str */
  assert(PyUnicode_KIND(str) == 0);
  if (PyUnicode_READY(str) < 0)
      /* error */
  assert(PyUnicode_KIND(str) != 0);  /* kind is 1, 2, 4 */

Thanks to the effort of t0rsten, Martin and me, almost all functions use
the new API (PyUnicode_New). For example,
PyUnicode_AsRawUnicodeEscapeString() starts by ensuring that the string
is ready. For your information, PyUnicode_KIND() fails with an assertion
error in debug mode if the string is not ready.

--

I don't have an opinion about the one-liner vs the switch :-) But if you
want to fix compiler warnings, you should use the "enum PyUnicode_Kind"
type, and PyUnicode_WCHAR_KIND should be removed from the enum.

Victor

From dirkjan at ochtman.nl Wed Nov 23 10:52:16 2011
From: dirkjan at ochtman.nl (Dirkjan Ochtman)
Date: Wed, 23 Nov 2011 10:52:16 +0100
Subject: [Python-Dev] Promoting Python 3 [was: PyPy 1.7 - widening the sweet spot]
In-Reply-To: <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <20111122095509.474420a6@limelight.wooz.org> <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID:

On Tue, Nov 22, 2011 at 17:41, Stephen J. Turnbull wrote:
> This is still a big mess in Gentoo and MacPorts, though. MacPorts
> hasn't done anything about creating a transition infrastructure AFAICT.
> Gentoo has its "eselect python set VERSION" stuff, but it's very
> dangerous to set to a Python 3 version, as many things go permanently
> wonky once you do. (So far I've been able to work around problems
> this creates, but it's not much fun.)

Problems like what?

> I don't have any connections to the distros, so can't really offer to
> help directly. I think it might be a good idea for users to lobby
> (politely!) their distros to work on the transition.

Please create a connection to your distro by filing bugs as you
encounter them?
The Gentoo Python team is woefully understaffed (and I've been busy with some Real Life things, although that should improve in a couple more weeks), but we definitely care about providing an environment where you can successfully run python2 and python3 in parallel. Cheers, Dirkjan From stephen at xemacs.org Wed Nov 23 13:21:28 2011 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Wed, 23 Nov 2011 21:21:28 +0900 Subject: [Python-Dev] Promoting Python 3 [was: PyPy 1.7 - widening the sweet spot] In-Reply-To: References: <20111122095509.474420a6@limelight.wooz.org> <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <87d3cjqbav.fsf@uwakimon.sk.tsukuba.ac.jp> Dirkjan Ochtman writes: > On Tue, Nov 22, 2011 at 17:41, Stephen J. Turnbull wrote: > > This is still a big mess in Gentoo and MacPorts, though. ?MacPorts > > hasn't done anything about ceating a transition infrastructure AFAICT. > > Gentoo has its "eselect python set VERSION" stuff, but it's very > > dangerous to set to a Python 3 version, as many things go permanently > > wonky once you do. ?(So far I've been able to work around problems > > this creates, but it's not much fun.) > > Problems like what? Like those I explained later in the post, which you cut. But I'll repeat. Some ebuilds are not prepared for Python 3, so must be emerged with a Python 2 eselected (and sometimes they need a specific Python 2). Some which are prepared don't get linted often enough, so new ebuilds are DOA because of an accented character in a changelog triggering a Unicode exception or similar dumb things like that. > > I don't have any connections to the distros, so can't really offer to > > help directly. ?I think it might be a good idea for users to lobby > > (politely!) ?their distros to work on the transition. > > Please create a connection to your distro by filing bugs as you > encounter them? No, thank you. 
File bugs, maybe, although most of the bugs I encounter in Gentoo are
already in the database (often with multiple regressions going back a
year or more), I could do a little more of that. (Response in the past
has not been encouraging.) But I don't have time for distro politics.

Is lack of Python 3-readiness considered a bug by Gentoo?

From dirkjan at ochtman.nl Wed Nov 23 14:19:24 2011
From: dirkjan at ochtman.nl (Dirkjan Ochtman)
Date: Wed, 23 Nov 2011 14:19:24 +0100
Subject: [Python-Dev] Promoting Python 3 [was: PyPy 1.7 - widening the sweet spot]
In-Reply-To: <87d3cjqbav.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <20111122095509.474420a6@limelight.wooz.org> <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp> <87d3cjqbav.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID:

On Wed, Nov 23, 2011 at 13:21, Stephen J. Turnbull wrote:
> > Problems like what?
>
> Like those I explained later in the post, which you cut. But I'll

They were in a later post, I didn't cut them. :)

> > Please create a connection to your distro by filing bugs as you
> > encounter them?
>
> No, thank you. File bugs, maybe, although most of the bugs I
> encounter in Gentoo are already in the database (often with multiple
> regressions going back a year or more), I could do a little more of
> that. (Response in the past has not been encouraging.) But I don't
> have time for distro politics.

I'm sorry for the lack of response in the past. I looked at Gentoo's
Bugzilla and didn't find any related bugs you reported or were CC'ed
on, can you name some of them?

> Is lack of Python 3-readiness considered a bug by Gentoo?

Definitely. Again, we are trying hard to make things better, but
there's a lot to do and going through version bumps sometimes wins out
over addressing the hard problems. Be assured, though, that we're also
trying to make progress on the latter.

If you're ever on IRC, come hang out in #gentoo-python, where distro
politics should be minimal and the crew is generally friendly and
responsive.
Cheers, Dirkjan From stephen at xemacs.org Wed Nov 23 15:45:26 2011 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Wed, 23 Nov 2011 23:45:26 +0900 Subject: [Python-Dev] Promoting Python 3 [was: PyPy 1.7 - widening the sweet spot] In-Reply-To: References: <20111122095509.474420a6@limelight.wooz.org> <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp> <9A04EB61-4EA7-4D6C-8676-50AF6DD1373A@masklinn.net> <87fwhfqywr.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <87bos2rj7d.fsf@uwakimon.sk.tsukuba.ac.jp> Ned Deily writes: > In article <87fwhfqywr.fsf at uwakimon.sk.tsukuba.ac.jp>, > "Stephen J. Turnbull" wrote: > > I haven't had the nerve to do this on MacPorts because "port" is such > > a flaky thing (not so much port itself, but so many ports assume that > > the port maintainer's local configuration is what others' systems use, > > so I stay as vanilla as possible -- I rather doubt that many ports are > > ready for Python 3, and I'm not willing to be a guinea pig). > > I think your fears are unfounded. MacPort's individual port files are > supposed to be totally independent of the setting of 'port select'. If you think I'm complaining or imagining things, you're missing the point. My fears are *not* unfounded. For personal use, I wanted Python 2.6 to be default using "port select", and things went wonky. Some things just didn't work, or disappeared. Reverting to 2.5 fixed, so I left it that way for a while. I tried it again with Python 2.7, same deal, different ports. Maybe those would have been considered bugs in "port select", I don't know. But reverting was easy, "fixed" things, and I won't try it with Python 3 (until I have a sacrificial system available). Also, the MacPorts solution is very resource intensive for users: I have *seven* Python stacks on the Mac where I'm typing this -- the only version of Python I've been able to eliminate once it has been installed so far is 3.0! although I could probably get rid of 3.1 now). 
It also leads to fragmentation (*all* of my 2.x stacks are incomplete, I can't do without any of them), and a couple of extra frustrating steps in finding the code that raised an exceptions or whatever. Not to mention that it's in my face daily: "port outdated" frequently lines up 3, occasionally 4 versions of the same port. This *only* happens with Python! And there's no way that many ports are ready for Python 3, because their upstreams aren't! I think that projects that would like to move to Python 3 are going to find they get pushback from Mac users who "don't need" *yet another* Python stack installed. Note that Gentoo has globally switched off the python USE flag[1] (I suspect that the issue is that one-time configuration utilities can pull in a whole Python stack that mostly duplicates Python content required for Gentoo to work at all). > Also right now besides the Python port group transition, the > project has been swamped with issues arising from the Xcode 4 > introduction for Lion, mandating the transition from gcc to clang > or llvm-gcc. Sure, I understand that kind of thing. That doesn't mean it improves the user experience with Python, especially Python 3. It helps if you can get widespread adoption at similar pace across the board rather than uneven diffusion with a few niches moving really fast. It's like Lao Tse didn't quite say: the most successful leaders are those who hustle and get a few steps ahead of the crowd wherever it's heading. But you need a crowd moving in the same direction to execute that strategy! So I'd like see people who *already* have the credibility with their distros to advocate Python 3. If Ubuntu's going to lead, now's a good time to join them. (Other things being equal, of course -- but then, other things are never equal, so it may as well be now anyway. ) If that doesn't happen, well, Python and Python 3 will survive. But I'd rather to see them dominate. Footnotes: [1] According to the notes for the ibus ebuild. 
From barry at python.org Wed Nov 23 16:24:04 2011 From: barry at python.org (Barry Warsaw) Date: Wed, 23 Nov 2011 10:24:04 -0500 Subject: [Python-Dev] Python 3.4 Release Manager In-Reply-To: <4ECC6D9D.4020105@hastings.org> References: <4ECC6D9D.4020105@hastings.org> Message-ID: <20111123102404.496fca93@limelight.wooz.org> On Nov 22, 2011, at 07:50 PM, Larry Hastings wrote: >I've volunteered to be the Release Manager for Python 3.4. The FLUFL has >already given it his Sloppy Wet Kiss Of Approval, I think you mistook that for my slackjaw droolings when you repeatedly ignored my warnings to run as far from it as possible. But you're persistent, I'll give you that. Looks like that persistence will be punis^H^H^H^H^Hrewarded with your first RMship! Congratulations (?). >and we talked to Georg and he was for it too. There's no formal process for >selecting the RM, so I may already be stuck with the job, but I thought it >best to pipe up on python-dev in case someone had a better idea. > >But look! I'm already practicing: NO YOU CAN'T CHECK THAT IN. How's that? >Needs work? > >I look forward to seeing how the sausage is made, Undoubtedly it will make you a vegan, but hopefully not a Go programmer! :) Seriously though, I think it's great for you to work with Georg through the 3.3 process, so you can take over for 3.4. And I also think it's great that someone new wants to do it. I kid, but the mechanics of releasing really isn't very difficult these days as we've continually automated the boring parts. The fun part is really making the decisions about what is a showstopper, what changes can go in at the last minute, etc. Like the president of the USA, I just hope your hair is already gray! -Barry From stephen at xemacs.org Wed Nov 23 19:57:12 2011 From: stephen at xemacs.org (Stephen J. 
Turnbull) Date: Thu, 24 Nov 2011 03:57:12 +0900 Subject: [Python-Dev] Promoting Python 3 [was: PyPy 1.7 - widening the sweet spot] In-Reply-To: References: <20111122095509.474420a6@limelight.wooz.org> <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp> <87d3cjqbav.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <87aa7mr7jr.fsf@uwakimon.sk.tsukuba.ac.jp> Dirkjan Ochtman writes: > I'm sorry for the lack of response in the past. I looked at Gentoo's > Bugzilla and didn't find any related bugs you reported or were CC'ed > on, can you name some of them? This isn't about my bugs; I've been able to work through them satisfactorily. It's about what I perceive as a need for simultaneous improvement in Python 3 support across several distros, covering enough users to establish momentum. I don't think Python 3 needs to (or even can) replace Python 2 as the system python in the near future. But the "python" that is visible to users (at least on single-user systems) should be choosable by the user. eselect (on Gentoo) and port select (on MacPorts) *appear* to provide this, but it doesn't work very well. > > Is lack of Python 3-readiness considered a bug by Gentoo? > > Definitely. Again, we are trying to hard to make things better, but > there's a lot to do and going through version bumps sometimes wins out > over addressing the hard problems. Well, as I see it the two hard problems are (1) the stack-per-python- minor-version solution is ugly and unattractive to users, and (2) the "select python" utility needs to be safe. This probably means that python-using ebuilds need to complain if the Python they find isn't a Python 2 version by default. 
From martin at v.loewis.de Wed Nov 23 20:58:23 2011 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 23 Nov 2011 20:58:23 +0100 Subject: [Python-Dev] PyUnicode_Resize In-Reply-To: <201111220208.17514.victor.stinner@haypocalc.com> References: <201111220208.17514.victor.stinner@haypocalc.com> Message-ID: <4ECD505F.8070209@v.loewis.de> > In Python 3.2, PyUnicode_Resize() expects a number of Py_UNICODE units, > whereas Python 3.3 expects a number of characters. Is that really the case? If the string is not ready (i.e. the kind is WCHAR_KIND), then it does count Py_UNICODE units, no? Callers are supposed to call PyUnicode_Resize only while the string is under construction, i.e. when it is not ready. If they resize it after it has been readied, changes to the Py_UNICODE representation wouldn't be reflected in the canonical representation, anyway. > Should we rename PyUnicode_Resize() in Python 3.3 to avoid surprising bugs? IIUC (and please correct me if I'm wrong) this issue won't cause memory corruption: if they specify a new size assuming it's Py_UNICODE units, but interpreted as code points, then the actual Py_UNICODE buffer can only be larger than expected - right? If so, callers could happily play with Py_UNICODE representation. It won't have the desired effect if the string was ready, but it won't crash Python, either. > The easiest solution is to do nothing in Python 3.3: the API changed, but it > doesn't really matter. Developers just have to be careful on this particular > issue (which is not well documented today). See above. I think there actually is no issue in the first place. Please do correct me if I'm wrong. 
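[Editor's note: the code-points-versus-Py_UNICODE-units distinction Victor and Martin are discussing is visible from the Python level too. With PEP 393 (Python 3.3+), string lengths are always counted in characters, while the UTF-16 encoding of an astral character still takes two code units. A quick sketch:]

```python
s = "\U0001f600"  # one astral (non-BMP) character
assert len(s) == 1  # PEP 393: length is counted in code points
# ...but the same character occupies two UTF-16 code units (a surrogate pair):
utf16_units = len(s.encode("utf-16-le")) // 2
assert utf16_units == 2
```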
Regards, Martin From pjenvey at underboss.org Wed Nov 23 22:13:38 2011 From: pjenvey at underboss.org (Philip Jenvey) Date: Wed, 23 Nov 2011 13:13:38 -0800 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: <6F9490B9-DBEF-4CF6-89B7-26EA0C8A88E2@underboss.org> Message-ID: On Nov 22, 2011, at 12:43 PM, Amaury Forgeot d'Arc wrote: > 2011/11/22 Philip Jenvey > One reason to target 3.2 for now is it's not a moving target. There's overhead involved in managing modifications to the pure python standard lib needed for PyPy, tracking 3.3 changes as they happen as well exacerbates this. > > The plans to split the standard lib into its own repo separate from core CPython will of course help alternative implementations here. > > I don't see how it would help here. > Copying the CPython Lib/ directory is not difficult, even though PyPy made slight modifications to the files, and even without any merge tool. Pulling in a separate stdlib as a subrepo under the PyPy repo would certainly make this whole process easier. But you're right, if we track CPython's default branch (3.3) we can make many if not all of the PyPy modifications upstream (until the 3.3rc1 code freeze) instead of in PyPy's modified-3.x directory. Maintaining that modified-3.x dir after every resync can be tedious. -- Philip Jenvey From guido at python.org Thu Nov 24 01:28:39 2011 From: guido at python.org (Guido van Rossum) Date: Wed, 23 Nov 2011 16:28:39 -0800 Subject: [Python-Dev] PEP 380 Message-ID: Mea culpa for not keeping track, but what's the status of PEP 380? I really want this in Python 3.3! -- --Guido van Rossum (python.org/~guido) From ncoghlan at gmail.com Thu Nov 24 05:06:43 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 24 Nov 2011 14:06:43 +1000 Subject: [Python-Dev] PEP 380 In-Reply-To: References: Message-ID: On Thu, Nov 24, 2011 at 10:28 AM, Guido van Rossum wrote: > Mea culpa for not keeping track, but what's the status of PEP 380? 
I > really want this in Python 3.3! There are two relevant tracker issues (both with me for the moment). The main tracker issue for PEP 380 is here: http://bugs.python.org/issue11682 That's really just missing the doc updates - I haven't had a chance to look at Zbyszek's latest offering on that front, but it shouldn't be far off being complete (the *text* in his previous docs patch actually seemed reasonable - I mainly objected to the way it was organised). However, the PEP 380 test suite updates have a dependency on a new dis module feature that provides an iterator over a structured description of bytecode instructions: http://bugs.python.org/issue11816 I find Meador's suggestion to change the name of the new API to something involving the word "instruction" appealing, so I plan to do that, which will have a knock-on effect on the tests in the PEP 380 branch. However, even once I get that done, Raymond specifically said he wanted to review the dis module patch before I check it in, so I don't plan to commit it until he gives the OK (either because he reviewed it, or because he decides he's OK with it going in without his review and he can review and potentially update it in Mercurial any time before 3.3 is released). I currently plan to update my working branches for both of those on the 3rd of December, so hopefully they'll be ready to go within the next couple of weeks. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From fijall at gmail.com Thu Nov 24 13:20:38 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 24 Nov 2011 14:20:38 +0200 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: <6F9490B9-DBEF-4CF6-89B7-26EA0C8A88E2@underboss.org> Message-ID: On Wed, Nov 23, 2011 at 11:13 PM, Philip Jenvey wrote: > > On Nov 22, 2011, at 12:43 PM, Amaury Forgeot d'Arc wrote: > >> 2011/11/22 Philip Jenvey >> One reason to target 3.2 for now is it's not a moving target.
There's overhead involved in managing modifications to the pure python standard lib needed for PyPy, tracking 3.3 changes as they happen as well exacerbates this. >> >> The plans to split the standard lib into its own repo separate from core CPython will of course help alternative implementations here. >> >> I don't see how it would help here. >> Copying the CPython Lib/ directory is not difficult, even though PyPy made slight modifications to the files, and even without any merge tool. > > Pulling in a separate stdlib as a subrepo under the PyPy repo would certainly make this whole process easier. > > But you're right, if we track CPython's default branch (3.3) we can make many if not all of the PyPy modifications upstream (until the 3.3rc1 code freeze) instead of in PyPy's modified-3.x directory. Maintaining that modified-3.x dir after every resync can be tedious. > > -- > Philip Jenvey The problem is not with maintaining the modified directory. The problem was always things like changing interface between the C version and the Python version or introduction of new stuff that does not run on pypy because it relies on refcounting. I don't see how having a subrepo helps here.
Indeed, the main thing that can help on this front is to get more modules to the same state as heapq, io, datetime (and perhaps a few others that have slipped my mind) where the CPython repo actually contains both C and Python implementations and the test suite exercises both to make sure their interfaces remain suitably consistent (even though, during normal operation, CPython users will only ever hit the C accelerated version). This not only helps other implementations (by keeping a Python version of the module continuously up to date with any semantic changes), but can help people that are porting CPython to new platforms: the C extension modules are far more likely to break in that situation than the pure Python equivalents, and a relatively slow fallback is often going to be better than no fallback at all. (Note that ctypes based pure Python modules *aren't* particularly useful for this purpose, though - due to the libffi dependency, ctypes is one of the extension modules most likely to break when porting). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From jcea at jcea.es Thu Nov 24 17:43:11 2011 From: jcea at jcea.es (Jesus Cea) Date: Thu, 24 Nov 2011 17:43:11 +0100 Subject: [Python-Dev] Long term development external named branches and periodic merges from python Message-ID: <4ECE741F.10303@jcea.es> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 I have a question and I would rather have an answer instead of actually trying and getting myself in a messy situation. Let say we have the following scenario: 1. A programer clones hg.python.org. 2. Programer creates a named branch and start to develop a new feature. 3. She adds her repository&named branch to the bugtracker. 4. From time to time, she posts updates in the tracker using the "Create Patch" button. So far so good. Now, the question: 5. Development of the new feature is taking a long time, and python canonical version keeps moving forward.
The clone+branch and the original python version are diverging. Eventually there are changes in python that the programmer would like in her version, so she does a "pull" and then a merge for the original python branch to her named branch. 6. What would be posted in the bug tracker when she does a new "Create Patch"?. Only her changes, her changes SINCE the merge, her changes plus merged changes or something else?. What if the programmer cherrypick changesets from the original python branch?. Thanks! :-). - -- Jesus Cea Avion _/_/ _/_/_/ _/_/_/ jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/_/_/_/ . _/_/ _/_/ _/_/ _/_/ _/_/ "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ "El amor es poner tu felicidad en la felicidad de otro" - Leibniz -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQCVAwUBTs50H5lgi5GaxT1NAQJsTAP6AsUsLo2REdxxyVvPBDQ51GjZermCXD08 jOqKkKY9cre4OHx/+uZHEvO8j7RJ5X3o2/0Yl4OeDSTBDY8/eWINc9cgtuNqrJdW W27fu1+UTIpgl1oLh06P23ufOEWPWh90gsV6eiVnFlj7r+b3HkP7PNdZCmqU2+UW 92Ac9B1JOvU= =goYv -----END PGP SIGNATURE----- From merwok at netwok.org Thu Nov 24 18:08:53 2011 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Thu, 24 Nov 2011 18:08:53 +0100 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: <4ECE741F.10303@jcea.es> References: <4ECE741F.10303@jcea.es> Message-ID: <4ECE7A25.5000701@netwok.org> Hi, > I have a question and I would rather have an answer instead of > actually trying and getting myself in a messy situation. Clones are cheap, trying is cheap! > Let say we have the following scenario: > > 1. A programer clones hg.python.org. > 2. Programer creates a named branch and start to develop a new feature. > 3. She adds her repository&named branch to the bugtracker. > 4. 
From time to time, she posts updates in the > "Create Patch" button. > > So far so good. Now, the question: > > 5. Development of the new feature is taking a long time, and python > canonical version keeps moving forward. The clone+branch and the > original python version are diverging. Eventually there are changes in > python that the programmer would like in her version, so she does a > "pull" and then a merge for the original python branch to her named > branch. I do this all the time. I work on a fix-nnnn branch, and once a week for example I pull and merge the base branch. Sometimes there are no conflicts except Misc/NEWS, sometimes I have to adapt my code because of other people’s changes before I can commit the merge. > 6. What would be posted in the bug tracker when she does a new "Create > Patch"?. Only her changes, her changes SINCE the merge, her changes > plus merged changes or something else?. The diff would be equivalent to “hg diff -r base” and would contain all the changes she did to add the bug fix or feature. Merging only makes sure that the computed diff does not appear to touch unrelated files, IOW that it applies cleanly. (Barring bugs in Mercurial-Roundup integration, we have a few of these in the metatracker.) > What if the programmer cherrypick changesets from the original python > branch?. Then her branch will revert some changes done in the original branch. Therefore, cherry-picking is not a good idea. Regards From eliben at gmail.com Thu Nov 24 19:15:25 2011 From: eliben at gmail.com (Eli Bendersky) Date: Thu, 24 Nov 2011 20:15:25 +0200 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? Message-ID: Hi there, I was doing some experiments with the buffer interface of bytearray today, for the purpose of quickly reading a file's contents into a bytearray which I can then modify. I decided to do some benchmarking and ran into surprising results.
Here are the functions I was timing: def justread(): # Just read a file's contents into a string/bytes object f = open(FILENAME, 'rb') s = f.read() def readandcopy(): # Read a file's contents and copy them into a bytearray. # An extra copy is done here. f = open(FILENAME, 'rb') b = bytearray(f.read()) def readinto(): # Read a file's contents directly into a bytearray, # hopefully employing its buffer interface f = open(FILENAME, 'rb') b = bytearray(os.path.getsize(FILENAME)) f.readinto(b) FILENAME is the name of a 3.6MB text file. It is read in binary mode, however, for fullest compatibility between 2.x and 3.x Now, running this under Python 2.7.2 I got these results ($1 just reflects the executable name passed to a bash script I wrote to automate these runs): $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.justread()' 1000 loops, best of 3: 461 usec per loop $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readandcopy()' 100 loops, best of 3: 2.81 msec per loop $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readinto()' 1000 loops, best of 3: 697 usec per loop Which make sense. The readinto() approach is much faster than copying the read buffer into the bytearray. But with Python 3.2.2 (built from the 3.2 branch today): $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.justread()' 1000 loops, best of 3: 336 usec per loop $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readandcopy()' 100 loops, best of 3: 2.62 msec per loop $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readinto()' 100 loops, best of 3: 2.69 msec per loop Oops, readinto takes the same time as copying. This is a real shame, because readinto in conjunction with the buffer interface was supposed to avoid the redundant copy. Is there a real performance regression here, is this a well-known issue, or am I just missing something obvious? Eli P.S. 
The machine is quad-core i7-2820QM, running 64-bit Ubuntu 10.04 -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Thu Nov 24 19:29:33 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 24 Nov 2011 19:29:33 +0100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? References: Message-ID: <20111124192933.393cf96f@pitrou.net> On Thu, 24 Nov 2011 20:15:25 +0200 Eli Bendersky wrote: > > Oops, readinto takes the same time as copying. This is a real shame, > because readinto in conjunction with the buffer interface was supposed to > avoid the redundant copy. > > Is there a real performance regression here, is this a well-known issue, or > am I just missing something obvious? Can you try with latest 3.3 (from the default branch)? Thanks Antoine. From eliben at gmail.com Thu Nov 24 19:53:30 2011 From: eliben at gmail.com (Eli Bendersky) Date: Thu, 24 Nov 2011 20:53:30 +0200 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: <20111124192933.393cf96f@pitrou.net> References: <20111124192933.393cf96f@pitrou.net> Message-ID: On Thu, Nov 24, 2011 at 20:29, Antoine Pitrou wrote: > On Thu, 24 Nov 2011 20:15:25 +0200 > Eli Bendersky wrote: > > > > Oops, readinto takes the same time as copying. This is a real shame, > > because readinto in conjunction with the buffer interface was supposed to > > avoid the redundant copy. > > > > Is there a real performance regression here, is this a well-known issue, > or > > am I just missing something obvious? > > Can you try with latest 3.3 (from the default branch)? > Sure. 
Updated the default branch just now and built: $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.justread()' 1000 loops, best of 3: 1.14 msec per loop $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readandcopy()' 100 loops, best of 3: 2.78 msec per loop $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readinto()' 1000 loops, best of 3: 1.6 msec per loop Strange. Although here, like in python 2, the performance of readinto is close to justread and much faster than readandcopy, but justread itself is much slower than in 2.7 and 3.2! Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Nov 24 21:55:40 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 25 Nov 2011 06:55:40 +1000 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: <4ECE7A25.5000701@netwok.org> References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> Message-ID: I've never been able to get the Create Patch button to work reliably with my BitBucket repo, so I still just run "hg diff -r default" locally and upload the patch directly. It would be nice if I could just specify both the feature branch *and* the branch to diff against rather than having to work out why Roundup is guessing wrong... -- Nick Coghlan (via Gmail on Android, so likely to be more terse than usual) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From martin at v.loewis.de Thu Nov 24 22:23:26 2011 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 24 Nov 2011 22:23:26 +0100 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> Message-ID: <4ECEB5CE.5050309@v.loewis.de> Am 24.11.2011 21:55, schrieb Nick Coghlan: > I've never been able to get the Create Patch button to work reliably > with my BitBucket repo, so I still just run "hg diff -r default" locally > and upload the patch directly. Please submit a bug report to the meta tracker. > It would be nice if I could just specify both the feature branch *and* > the branch to diff against rather than having to work out why Roundup is > guessing wrong... Why would you not diff against the default branch? Regards, Martin From python-dev at masklinn.net Thu Nov 24 22:46:51 2011 From: python-dev at masklinn.net (Xavier Morel) Date: Thu, 24 Nov 2011 22:46:51 +0100 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> Message-ID: On 2011-11-24, at 21:55 , Nick Coghlan wrote: > I've never been able to get the Create Patch button to work reliably with > my BitBucket repo, so I still just run "hg diff -r default" locally and > upload the patch directly. Wouldn't it be simpler to just use MQ and upload the patch(es) from the series? Would be easier to keep in sync with the development tip too. From anacrolix at gmail.com Thu Nov 24 23:02:15 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Fri, 25 Nov 2011 09:02:15 +1100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> Message-ID: What if you broke up the read and built the final string object up. 
I always assumed this is where the real gain was with read_into. On Nov 25, 2011 5:55 AM, "Eli Bendersky" wrote: > On Thu, Nov 24, 2011 at 20:29, Antoine Pitrou wrote: > >> On Thu, 24 Nov 2011 20:15:25 +0200 >> Eli Bendersky wrote: >> > >> > Oops, readinto takes the same time as copying. This is a real shame, >> > because readinto in conjunction with the buffer interface was supposed >> to >> > avoid the redundant copy. >> > >> > Is there a real performance regression here, is this a well-known >> issue, or >> > am I just missing something obvious? >> >> Can you try with latest 3.3 (from the default branch)? >> > > Sure. Updated the default branch just now and built: > > $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.justread()' > 1000 loops, best of 3: 1.14 msec per loop > $1 -m timeit -s'import fileread_bytearray' > 'fileread_bytearray.readandcopy()' > 100 loops, best of 3: 2.78 msec per loop > $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readinto()' > 1000 loops, best of 3: 1.6 msec per loop > > Strange. Although here, like in python 2, the performance of readinto is > close to justread and much faster than readandcopy, but justread itself is > much slower than in 2.7 and 3.2! > > Eli > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eliben at gmail.com Thu Nov 24 23:41:02 2011 From: eliben at gmail.com (Eli Bendersky) Date: Fri, 25 Nov 2011 00:41:02 +0200 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> Message-ID: On Fri, Nov 25, 2011 at 00:02, Matt Joiner wrote: > What if you broke up the read and built the final string object up. 
I > always assumed this is where the real gain was with read_into. > Matt, I'm not sure what you mean by this - can you suggest the code? Also, I'd be happy to know if anyone else reproduces this as well on other machines/OSes. Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Nov 24 23:45:58 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 25 Nov 2011 08:45:58 +1000 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: <4ECEB5CE.5050309@v.loewis.de> References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> <4ECEB5CE.5050309@v.loewis.de> Message-ID: On Fri, Nov 25, 2011 at 7:23 AM, "Martin v. Löwis" wrote: > Am 24.11.2011 21:55, schrieb Nick Coghlan: >> I've never been able to get the Create Patch button to work reliably >> with my BitBucket repo, so I still just run "hg diff -r default" locally >> and upload the patch directly. > > Please submit a bug report to the meta tracker. Done: http://psf.upfronthosting.co.za/roundup/meta/issue428 >> It would be nice if I could just specify both the feature branch *and* >> the branch to diff against rather than having to work out why Roundup is >> guessing wrong... > > Why would you not diff against the default branch? I usually do - the only case I have at the moment where diffing against a branch other than default sometimes makes sense is the dependency from the PEP 380 branch on the dis.get_opinfo() feature branch (http://bugs.python.org/issue11682). In fact, I believe that's also the case that confuses the diff generator.
My workflow in the repo is: - update default from hg.python.org/cpython - merge into get_opinfo branch from default - merge into pep380 branch from the get_opinfo branch So, after merging into the pep380 branch, "hg diff -r default" gives a full patch from default -> pep380 (including the dis module updates), while "hg diff -r get_opinfo" gives a patch that assumes the dis changes have already been applied separately. I'm now wondering if doing an explicit "hg merge default" before I do the merges from the get_opinfo branch in my sandbox might be enough to get the patch generator back on track... Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Thu Nov 24 23:46:23 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 25 Nov 2011 08:46:23 +1000 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> Message-ID: On Fri, Nov 25, 2011 at 7:46 AM, Xavier Morel wrote: > On 2011-11-24, at 21:55 , Nick Coghlan wrote: >> I've never been able to get the Create Patch button to work reliably with >> my BitBucket repo, so I still just run "hg diff -r default" locally and >> upload the patch directly. > Wouldn't it be simpler to just use MQ and upload the patch(es) from the series? Would be easier to keep in sync with the development tip too. From my (admittedly limited) experience, using MQ means I can only effectively collaborate with other people also using MQ (e.g. the Roundup integration doesn't work if the only thing that is published on BitBucket is a patch queue). I'll stick with named branches until MQ becomes a builtin Hg feature that better integrates with other tools. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia From jcea at jcea.es Fri Nov 25 01:01:13 2011 From: jcea at jcea.es (Jesus Cea) Date: Fri, 25 Nov 2011 01:01:13 +0100 Subject: [Python-Dev] 404 in (important) documentation in www.python.org and contributor agreement Message-ID: <4ECEDAC9.7040903@jcea.es> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Trying to clear the licensing issues surrounding my DTrace work (http://bugs.python.org/issue13405) I am contacting Sun/Oracle guys. Checking documentation abut the contributor license agreement, I had encounter a wrong HTML link in http://www.python.org/about/help/ : * "Python Patch Guidelines" points to http://www.python.org/dev/patches/, that doesn't exist. Other links in that page seems OK. PS: The devguide doesn't say anything (AFAIK) about the contributor agreement. - -- Jesus Cea Avion _/_/ _/_/_/ _/_/_/ jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/_/_/_/ . _/_/ _/_/ _/_/ _/_/ _/_/ "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ "El amor es poner tu felicidad en la felicidad de otro" - Leibniz -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQCVAwUBTs7ayZlgi5GaxT1NAQLOfwQAoa1GFuQZKhbXD3FnmH3XUiylzTMBmXMh vB++AdDP8fcEBC/NYZ9j0DH+AGspXrPg4YVta09zJJ/1kHa2UxRVmtXM8centl3V Jkad+6lJw6YYjtXXgM4QExlzClsYNn1ByhYaRSiSa8g9dtsFq4YTlKzfeAXLPC50 DUju8DavMyo= =xOEe -----END PGP SIGNATURE----- From jcea at jcea.es Fri Nov 25 01:20:17 2011 From: jcea at jcea.es (Jesus Cea) Date: Fri, 25 Nov 2011 01:20:17 +0100 Subject: [Python-Dev] webmaster@python.org address not working Message-ID: <4ECEDF41.2030008@jcea.es> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 When mailing there, I get this error. Not sure where to report. 
""" Final-Recipient: rfc822; sdrees at sdrees.de Original-Recipient: rfc822;webmaster at python.org Action: failed Status: 5.1.1 Remote-MTA: dns; stefan.zinzdrees.de Diagnostic-Code: smtp; 550 5.1.1 : Recipient address rejected: User unknown in local recipient table """ - -- Jesus Cea Avion _/_/ _/_/_/ _/_/_/ jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/_/_/_/ . _/_/ _/_/ _/_/ _/_/ _/_/ "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ "El amor es poner tu felicidad en la felicidad de otro" - Leibniz -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQCVAwUBTs7fQZlgi5GaxT1NAQLxrQQAmph2w/KrLbwK34IVFKNKAn3P78uY19U1 yoUslB7J4u4IhqQHd5r/FD0v6q6W12t9H8UFNdKELc/zRnRWtE7wKI+3RAeBMAfe pTV6OY7kbGtUfDk9na8o6+oEQ7iZUWT1LbBtMpSusHBuif239RD3HMeaaJ6u/BFT TMmsu39qf2E= =ecRu -----END PGP SIGNATURE----- From solipsis at pitrou.net Fri Nov 25 01:23:19 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 25 Nov 2011 01:23:19 +0100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? References: <20111124192933.393cf96f@pitrou.net> Message-ID: <20111125012319.52a69ebb@pitrou.net> On Thu, 24 Nov 2011 20:53:30 +0200 Eli Bendersky wrote: > > Sure. Updated the default branch just now and built: > > $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.justread()' > 1000 loops, best of 3: 1.14 msec per loop > $1 -m timeit -s'import fileread_bytearray' > 'fileread_bytearray.readandcopy()' > 100 loops, best of 3: 2.78 msec per loop > $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readinto()' > 1000 loops, best of 3: 1.6 msec per loop > > Strange. Although here, like in python 2, the performance of readinto is > close to justread and much faster than readandcopy, but justread itself is > much slower than in 2.7 and 3.2! 
This seems to be a side-effect of http://hg.python.org/cpython/rev/f8a697bc3ca8/ Now I'm not sure if these numbers matter a lot. 1.6ms for a 3.6MB file is still more than 2 GB/s. Regards Antoine. From tjreedy at udel.edu Fri Nov 25 01:31:12 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 24 Nov 2011 19:31:12 -0500 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> Message-ID: On 11/24/2011 5:02 PM, Matt Joiner wrote: > What if you broke up the read and built the final string object up. I > always assumed this is where the real gain was with read_into. If a pure read takes twice as long in 3.3 as in 3.2, that is a concern regardless of whether there is a better way. -- Terry Jan Reedy From anacrolix at gmail.com Fri Nov 25 01:49:19 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Fri, 25 Nov 2011 11:49:19 +1100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> Message-ID: Eli, Example coming shortly, the differences are quite significant. On Fri, Nov 25, 2011 at 9:41 AM, Eli Bendersky wrote: > On Fri, Nov 25, 2011 at 00:02, Matt Joiner wrote: >> >> What if you broke up the read and built the final string object up. I >> always assumed this is where the real gain was with read_into. > > Matt, I'm not sure what you mean by this - can you suggest the code? > > Also, I'd be happy to know if anyone else reproduces this as well on other > machines/OSes. > > Eli > > From anacrolix at gmail.com Fri Nov 25 02:02:17 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Fri, 25 Nov 2011 12:02:17 +1100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> Message-ID: It's my impression that the readinto method does not fully support the buffer interface I was expecting. 
I've never had cause to use it until now. I've created a question on SO that describes my confusion: http://stackoverflow.com/q/8263899/149482 Also I saw some comments on "top-posting" am I guilty of this? Gmail defaults to putting my response above the previous email. On Fri, Nov 25, 2011 at 11:49 AM, Matt Joiner wrote: > Eli, > > Example coming shortly, the differences are quite significant. > > On Fri, Nov 25, 2011 at 9:41 AM, Eli Bendersky wrote: >> On Fri, Nov 25, 2011 at 00:02, Matt Joiner wrote: >>> >>> What if you broke up the read and built the final string object up. I >>> always assumed this is where the real gain was with read_into. >> >> Matt, I'm not sure what you mean by this - can you suggest the code? >> >> Also, I'd be happy to know if anyone else reproduces this as well on other >> machines/OSes. >> >> Eli >> >> > From solipsis at pitrou.net Fri Nov 25 02:07:00 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 25 Nov 2011 02:07:00 +0100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? References: <20111124192933.393cf96f@pitrou.net> Message-ID: <20111125020700.4c38aab7@pitrou.net> On Fri, 25 Nov 2011 12:02:17 +1100 Matt Joiner wrote: > It's my impression that the readinto method does not fully support the > buffer interface I was expecting. I've never had cause to use it until > now. I've created a question on SO that describes my confusion: > > http://stackoverflow.com/q/8263899/149482 Just use a memoryview and slice it: b = bytearray(...) m = memoryview(b) n = f.readinto(m[some_offset:]) > Also I saw some comments on "top-posting" am I guilty of this? Kind of :) Regards Antoine. 
From fuzzyman at voidspace.org.uk Fri Nov 25 03:00:35 2011 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Fri, 25 Nov 2011 02:00:35 +0000 Subject: [Python-Dev] webmaster@python.org address not working In-Reply-To: <4ECEDF41.2030008@jcea.es> References: <4ECEDF41.2030008@jcea.es> Message-ID: <4ECEF6C3.3060901@voidspace.org.uk> On 25/11/2011 00:20, Jesus Cea wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > When mailing there, I get this error. Not sure where to report. The address works fine. It would be nice if someone fixed the annoying bounce however. :-) Michael > """ > Final-Recipient: rfc822; sdrees at sdrees.de > Original-Recipient: rfc822;webmaster at python.org > Action: failed > Status: 5.1.1 > Remote-MTA: dns; stefan.zinzdrees.de > Diagnostic-Code: smtp; 550 5.1.1 : Recipient address > rejected: User unknown in local recipient table > """ > > - -- > Jesus Cea Avion _/_/ _/_/_/ _/_/_/ > jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ > jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/_/_/_/ > . 
. _/_/ _/_/ _/_/ _/_/ _/_/ > "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ > "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ > "El amor es poner tu felicidad en la felicidad de otro" - Leibniz > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.10 (GNU/Linux) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ > > iQCVAwUBTs7fQZlgi5GaxT1NAQLxrQQAmph2w/KrLbwK34IVFKNKAn3P78uY19U1 > yoUslB7J4u4IhqQHd5r/FD0v6q6W12t9H8UFNdKELc/zRnRWtE7wKI+3RAeBMAfe > pTV6OY7kbGtUfDk9na8o6+oEQ7iZUWT1LbBtMpSusHBuif239RD3HMeaaJ6u/BFT > TMmsu39qf2E= > =ecRu > -----END PGP SIGNATURE----- > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From jcea at jcea.es Fri Nov 25 03:39:16 2011 From: jcea at jcea.es (Jesus Cea) Date: Fri, 25 Nov 2011 03:39:16 +0100 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: <4ECE7A25.5000701@netwok.org> References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> Message-ID: <4ECEFFD4.5030601@jcea.es> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 24/11/11 18:08, Éric Araujo wrote: >> I have a question and I would rather have an answer instead of >> actually trying and getting myself in a messy situation. > Clones are cheap, trying is cheap! I would need to publish another repository online, and instruct the bug tracker to use it and create a patch, and play for the best or risk polluting the tracker. Maybe I would be hitting a corner case and be lucky this time, but not next time. Better to ask people that know better, I guess. >> 5.
Development of the new feature is taking a long time, and >> python canonical version keeps moving forward. The clone+branch >> and the original python version are diverging. Eventually there >> are changes in python that the programmer would like in her >> version, so she does a "pull" and then a merge for the original >> python branch to her named branch. > I do this all the time. I work on a fix-nnnn branch, and once a > week for example I pull and merge the base branch. Sometimes there > are no conflicts except Misc/NEWS, sometimes I have to adapt my > code because of other people's changes before I can commit the > merge. That is good, because that means your patch is always able to be applied to the original branch tip, and that your changes work with current work in the mainline. That is what I want to do, but I need to know that it is safe to do so (from the "Create Patch" perspective). >> 6. What would be posted in the bug tracker when she does a new >> "Create Patch"?. Only her changes, her changes SINCE the merge, >> her changes plus merged changes or something else?. > The diff would be equivalent to "hg diff -r base" and would contain > all the changes she did to add the bug fix or feature. Merging > only makes sure that the computed diff does not appear to touch > unrelated files, IOW that it applies cleanly. (Barring bugs in > Mercurial-Roundup integration, we have a few of these in the > metatracker.) So you are saying that "Create patch" will ONLY get the differences in the development branch and not the changes brought in from the merge?. A "hg diff -r base" -as you indicate- should show all changes in the branch since creation, including the merges, if I understand it correctly. I don't want to include the merges, although I want their effect in my own work (like changing patch offset). That is, is that merge safe for "Create Patch"?. Your answer seems to indicate "yes", but I rather prefer an explicit "yes" than an "implicit" yes :). Python Zen!
:). PS: Sorry if I am being blunt. My (lack of) social skills are legendary. - -- Jesus Cea Avion _/_/ _/_/_/ _/_/_/ jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/_/_/_/ . _/_/ _/_/ _/_/ _/_/ _/_/ "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ "El amor es poner tu felicidad en la felicidad de otro" - Leibniz -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQCVAwUBTs7/1Jlgi5GaxT1NAQJUDAQAhQi5e3utsVdOveO/3r1EDr/9BUTpB8Tb DxIe12HEt+KT33CJR+HGTt9OBqNGmVb4Q3h8lj3YIi7WdIXjc3CQ3+dO1NF1jTZO 0rt5EbEU99RAkgqOT0r3ziKy6MSSWhTuZlQy6pvcivEJet0GANiNqbdw6xFBETeZ a5m85Q793iU= =1Kg3 -----END PGP SIGNATURE----- From anacrolix at gmail.com Fri Nov 25 07:13:45 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Fri, 25 Nov 2011 17:13:45 +1100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: <20111125020700.4c38aab7@pitrou.net> References: <20111124192933.393cf96f@pitrou.net> <20111125020700.4c38aab7@pitrou.net> Message-ID: On Fri, Nov 25, 2011 at 12:07 PM, Antoine Pitrou wrote: > On Fri, 25 Nov 2011 12:02:17 +1100 > Matt Joiner wrote: >> It's my impression that the readinto method does not fully support the >> buffer interface I was expecting. I've never had cause to use it until >> now. I've created a question on SO that describes my confusion: >> >> http://stackoverflow.com/q/8263899/149482 > > Just use a memoryview and slice it: > > b = bytearray(...) > m = memoryview(b) > n = f.readinto(m[some_offset:]) Cheers, this seems to be what I wanted. Unfortunately it doesn't perform noticeably better if I do this. Eli, the use pattern I was referring to is when you read in chunks, and append to a running buffer. Presumably if you know in advance the size of the data, you can readinto directly to a region of a bytearray.
Thereby avoiding having to allocate a temporary buffer for the read, and creating a new buffer containing the running buffer, plus the new. Strangely, I find that your readandcopy is faster at this, but not by much, than readinto. Here's the code, it's a bit explicit, but then so was the original: BUFSIZE = 0x10000 def justread(): # Just read a file's contents into a string/bytes object f = open(FILENAME, 'rb') s = b'' while True: b = f.read(BUFSIZE) if not b: break s += b def readandcopy(): # Read a file's contents and copy them into a bytearray. # An extra copy is done here. f = open(FILENAME, 'rb') s = bytearray() while True: b = f.read(BUFSIZE) if not b: break s += b def readinto(): # Read a file's contents directly into a bytearray, # hopefully employing its buffer interface f = open(FILENAME, 'rb') s = bytearray(os.path.getsize(FILENAME)) o = 0 while True: b = f.readinto(memoryview(s)[o:o+BUFSIZE]) if not b: break o += b And the timings: $ python3 -O -m timeit 'import fileread_bytearray' 'fileread_bytearray.justread()' 10 loops, best of 3: 298 msec per loop $ python3 -O -m timeit 'import fileread_bytearray' 'fileread_bytearray.readandcopy()' 100 loops, best of 3: 9.22 msec per loop $ python3 -O -m timeit 'import fileread_bytearray' 'fileread_bytearray.readinto()' 100 loops, best of 3: 9.31 msec per loop The file was 10MB. I expected readinto to perform much better than readandcopy. I expected readandcopy to perform slightly better than justread. This clearly isn't the case. > >> Also I saw some comments on "top-posting" am I guilty of this? If there's a magical option in gmail someone knows about, please tell. > > Kind of :) > > Regards > > Antoine.
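Pulling Antoine's memoryview suggestion and the loop above together, a minimal self-contained sketch might look like the following (the path argument stands in for FILENAME; this is an illustration, not the exact code benchmarked in the thread):

```python
import os

def read_into_prealloc(path, bufsize=0x10000):
    # Preallocate the whole buffer up front, then let readinto() fill
    # it chunk by chunk.  Slicing the memoryview (not the bytearray)
    # gives readinto() a writable window into the buffer without any
    # intermediate copies.
    size = os.path.getsize(path)
    buf = bytearray(size)
    view = memoryview(buf)
    offset = 0
    with open(path, 'rb') as f:
        while offset < size:
            n = f.readinto(view[offset:offset + bufsize])
            if not n:
                break
            offset += n
    return bytes(buf[:offset])
```

Whether this actually beats a plain read() loop depends on the interpreter version and the buffer size, as the timings in this thread show.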
> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com > From eliben at gmail.com Fri Nov 25 07:38:48 2011 From: eliben at gmail.com (Eli Bendersky) Date: Fri, 25 Nov 2011 08:38:48 +0200 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: <20111125012319.52a69ebb@pitrou.net> References: <20111124192933.393cf96f@pitrou.net> <20111125012319.52a69ebb@pitrou.net> Message-ID: > On Thu, 24 Nov 2011 20:53:30 +0200 > Eli Bendersky wrote: > > > > Sure. Updated the default branch just now and built: > > > > $1 -m timeit -s'import fileread_bytearray' > 'fileread_bytearray.justread()' > > 1000 loops, best of 3: 1.14 msec per loop > > $1 -m timeit -s'import fileread_bytearray' > > 'fileread_bytearray.readandcopy()' > > 100 loops, best of 3: 2.78 msec per loop > > $1 -m timeit -s'import fileread_bytearray' > 'fileread_bytearray.readinto()' > > 1000 loops, best of 3: 1.6 msec per loop > > > > Strange. Although here, like in python 2, the performance of readinto is > > close to justread and much faster than readandcopy, but justread itself > is > > much slower than in 2.7 and 3.2! > > This seems to be a side-effect of > http://hg.python.org/cpython/rev/f8a697bc3ca8/ > > Now I'm not sure if these numbers matter a lot. 1.6ms for a 3.6MB file > is still more than 2 GB/s. > Just to be clear, there were two separate issues raised here. 
One is the speed regression of readinto() from 2.7 to 3.2, and the other is the relative slowness of justread() in 3.3 Regarding the second, I'm not sure it's an issue because I tried a larger file (100MB and then also 300MB) and the speed of 3.3 is now on par with 3.2 and 2.7 However, the original question remains - on the 100MB file also, although in 2.7 readinto is 35% faster than readandcopy(), on 3.2 it's about the same speed (even a few % slower). That said, I now observe with Python 3.3 the same speed as with 2.7, including the readinto() speedup - so it appears that the readinto() regression has been solved in 3.3? Any clue about where it happened (i.e. which bug/changeset)? Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From eliben at gmail.com Fri Nov 25 07:41:47 2011 From: eliben at gmail.com (Eli Bendersky) Date: Fri, 25 Nov 2011 08:41:47 +0200 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> <20111125020700.4c38aab7@pitrou.net> Message-ID: > > Eli, the use pattern I was referring to is when you read in chunks, > and and append to a running buffer. Presumably if you know in advance > the size of the data, you can readinto directly to a region of a > bytearray. There by avoiding having to allocate a temporary buffer for > the read, and creating a new buffer containing the running buffer, > plus the new. > > Strangely, I find that your readandcopy is faster at this, but not by > much, than readinto. Here's the code, it's a bit explicit, but then so > was the original: > > BUFSIZE = 0x10000 > > def justread(): > # Just read a file's contents into a string/bytes object > f = open(FILENAME, 'rb') > s = b'' > while True: > b = f.read(BUFSIZE) > if not b: > break > s += b > > def readandcopy(): > # Read a file's contents and copy them into a bytearray. > # An extra copy is done here. 
> f = open(FILENAME, 'rb') > s = bytearray() > while True: > b = f.read(BUFSIZE) > if not b: > break > s += b > > def readinto(): > # Read a file's contents directly into a bytearray, > # hopefully employing its buffer interface > f = open(FILENAME, 'rb') > s = bytearray(os.path.getsize(FILENAME)) > o = 0 > while True: > b = f.readinto(memoryview(s)[o:o+BUFSIZE]) > if not b: > break > o += b > > And the timings: > > $ python3 -O -m timeit 'import fileread_bytearray' > 'fileread_bytearray.justread()' > 10 loops, best of 3: 298 msec per loop > $ python3 -O -m timeit 'import fileread_bytearray' > 'fileread_bytearray.readandcopy()' > 100 loops, best of 3: 9.22 msec per loop > $ python3 -O -m timeit 'import fileread_bytearray' > 'fileread_bytearray.readinto()' > 100 loops, best of 3: 9.31 msec per loop > > The file was 10MB. I expected readinto to perform much better than > readandcopy. I expected readandcopy to perform slightly better than > justread. This clearly isn't the case. > > What is 'python3' on your machine? If it's 3.2, then this is consistent with my results. Try it with 3.3 and for a larger file (say ~100MB and up), you may see the same speed as on 2.7 Also, why do you think chunked reads are better here than slurping the whole file into the bytearray in one go? If you need it wholly in memory anyway, why not just issue a single read? Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen at xemacs.org Fri Nov 25 08:38:28 2011 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Fri, 25 Nov 2011 16:38:28 +0900 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> Message-ID: <87vcq8ps7f.fsf@uwakimon.sk.tsukuba.ac.jp> Nick Coghlan writes: > I'll stick with named branches until MQ becomes a builtin Hg > feature that better integrates with other tools. 
AFAIK MQ *is* considered to be a *stable, standard* part of Hg functionality that *happens* (for several reasons *not* including "it's not ready for Prime Time") to be packaged as an extension. If you want more/different functionality from it, you probably should file a feature request with the Mercurial developers. From soltysh at wp.pl Fri Nov 25 09:18:38 2011 From: soltysh at wp.pl (Maciej Szulik) Date: Fri, 25 Nov 2011 09:18:38 +0100 Subject: [Python-Dev] 404 in (important) documentation in www.python.org and contributor agreement In-Reply-To: <4ECEDAC9.7040903@jcea.es> References: <4ECEDAC9.7040903@jcea.es> Message-ID: <4ecf4f5ef399d0.71104851@wp.pl> Dnia 25-11-2011 o godz. 1:01 Jesus Cea napisa?(a): > > PS: The devguide doesn't say anything (AFAIK) about the contributor > agreement. There is info in the Contributing part of the devguide, follow How to Become a Core Developer link which points to http://docs.python.org/devguide/coredev.html where Contributor Agreement is mentioned. Regards, Maciej From anacrolix at gmail.com Fri Nov 25 10:34:21 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Fri, 25 Nov 2011 20:34:21 +1100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> <20111125020700.4c38aab7@pitrou.net> Message-ID: On Fri, Nov 25, 2011 at 5:41 PM, Eli Bendersky wrote: >> Eli, the use pattern I was referring to is when you read in chunks, >> and and append to a running buffer. Presumably if you know in advance >> the size of the data, you can readinto directly to a region of a >> bytearray. There by avoiding having to allocate a temporary buffer for >> the read, and creating a new buffer containing the running buffer, >> plus the new. >> >> Strangely, I find that your readandcopy is faster at this, but not by >> much, than readinto. Here's the code, it's a bit explicit, but then so >> was the original: >> >> BUFSIZE = 0x10000 >> >> def justread(): >> ? 
?# Just read a file's contents into a string/bytes object >> ? ?f = open(FILENAME, 'rb') >> ? ?s = b'' >> ? ?while True: >> ? ? ? ?b = f.read(BUFSIZE) >> ? ? ? ?if not b: >> ? ? ? ? ? ?break >> ? ? ? ?s += b >> >> def readandcopy(): >> ? ?# Read a file's contents and copy them into a bytearray. >> ? ?# An extra copy is done here. >> ? ?f = open(FILENAME, 'rb') >> ? ?s = bytearray() >> ? ?while True: >> ? ? ? ?b = f.read(BUFSIZE) >> ? ? ? ?if not b: >> ? ? ? ? ? ?break >> ? ? ? ?s += b >> >> def readinto(): >> ? ?# Read a file's contents directly into a bytearray, >> ? ?# hopefully employing its buffer interface >> ? ?f = open(FILENAME, 'rb') >> ? ?s = bytearray(os.path.getsize(FILENAME)) >> ? ?o = 0 >> ? ?while True: >> ? ? ? ?b = f.readinto(memoryview(s)[o:o+BUFSIZE]) >> ? ? ? ?if not b: >> ? ? ? ? ? ?break >> ? ? ? ?o += b >> >> And the timings: >> >> $ python3 -O -m timeit 'import fileread_bytearray' >> 'fileread_bytearray.justread()' >> 10 loops, best of 3: 298 msec per loop >> $ python3 -O -m timeit 'import fileread_bytearray' >> 'fileread_bytearray.readandcopy()' >> 100 loops, best of 3: 9.22 msec per loop >> $ python3 -O -m timeit 'import fileread_bytearray' >> 'fileread_bytearray.readinto()' >> 100 loops, best of 3: 9.31 msec per loop >> >> The file was 10MB. I expected readinto to perform much better than >> readandcopy. I expected readandcopy to perform slightly better than >> justread. This clearly isn't the case. >> > > What is 'python3' on your machine? If it's 3.2, then this is consistent with > my results. Try it with 3.3 and for a larger file (say ~100MB and up), you > may see the same speed as on 2.7 It's Python 3.2. I tried it for larger files and got some interesting results. 
readinto() for 10MB files, reading 10MB all at once: readinto/2.7 100 loops, best of 3: 8.6 msec per loop readinto/3.2 10 loops, best of 3: 29.6 msec per loop readinto/3.3 100 loops, best of 3: 19.5 msec per loop With 100KB chunks for the 10MB file (annotated with #): matt at stanley:~/Desktop$ for f in read bytearray_read readinto; do for v in 2.7 3.2 3.3; do echo -n "$f/$v "; "python$v" -m timeit -s 'import readinto' "readinto.$f()"; done; done read/2.7 100 loops, best of 3: 7.86 msec per loop # this is actually faster than the 10MB read read/3.2 10 loops, best of 3: 253 msec per loop # wtf? read/3.3 10 loops, best of 3: 747 msec per loop # wtf?? bytearray_read/2.7 100 loops, best of 3: 7.9 msec per loop bytearray_read/3.2 100 loops, best of 3: 7.48 msec per loop bytearray_read/3.3 100 loops, best of 3: 15.8 msec per loop # wtf? readinto/2.7 100 loops, best of 3: 8.93 msec per loop readinto/3.2 100 loops, best of 3: 10.3 msec per loop # suddenly 3.2 is performing well? readinto/3.3 10 loops, best of 3: 20.4 msec per loop Here's the code: http://pastebin.com/nUy3kWHQ > > Also, why do you think chunked reads are better here than slurping the whole > file into the bytearray in one go? If you need it wholly in memory anyway, > why not just issue a single read? Sometimes it's not available all at once, I do a lot of socket programming, so this case is of interest to me. As shown above, it's also faster for python2.7. readinto() should also be significantly faster for this case, tho it isn't. > > Eli > > From solipsis at pitrou.net Fri Nov 25 11:55:04 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 25 Nov 2011 11:55:04 +0100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? 
References: <20111124192933.393cf96f@pitrou.net> <20111125012319.52a69ebb@pitrou.net> Message-ID: <20111125115504.65fcd400@pitrou.net> On Fri, 25 Nov 2011 08:38:48 +0200 Eli Bendersky wrote: > > Just to be clear, there were two separate issues raised here. One is the > speed regression of readinto() from 2.7 to 3.2, and the other is the > relative slowness of justread() in 3.3 > > Regarding the second, I'm not sure it's an issue because I tried a larger > file (100MB and then also 300MB) and the speed of 3.3 is now on par with > 3.2 and 2.7 > > However, the original question remains - on the 100MB file also, although > in 2.7 readinto is 35% faster than readandcopy(), on 3.2 it's about the > same speed (even a few % slower). That said, I now observe with Python 3.3 > the same speed as with 2.7, including the readinto() speedup - so it > appears that the readinto() regression has been solved in 3.3? Any clue > about where it happened (i.e. which bug/changeset)? It would probably be http://hg.python.org/cpython/rev/a1d77c6f4ec1/ Regards Antoine. From solipsis at pitrou.net Fri Nov 25 12:04:00 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 25 Nov 2011 12:04:00 +0100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? References: <20111124192933.393cf96f@pitrou.net> <20111125020700.4c38aab7@pitrou.net> Message-ID: <20111125120400.53ce8ca1@pitrou.net> On Fri, 25 Nov 2011 20:34:21 +1100 Matt Joiner wrote: > > It's Python 3.2. I tried it for larger files and got some interesting results. 
> > readinto() for 10MB files, reading 10MB all at once: > > readinto/2.7 100 loops, best of 3: 8.6 msec per loop > readinto/3.2 10 loops, best of 3: 29.6 msec per loop > readinto/3.3 100 loops, best of 3: 19.5 msec per loop > > With 100KB chunks for the 10MB file (annotated with #): > > matt at stanley:~/Desktop$ for f in read bytearray_read readinto; do for > v in 2.7 3.2 3.3; do echo -n "$f/$v "; "python$v" -m timeit -s 'import > readinto' "readinto.$f()"; done; done > read/2.7 100 loops, best of 3: 7.86 msec per loop # this is actually > faster than the 10MB read > read/3.2 10 loops, best of 3: 253 msec per loop # wtf? > read/3.3 10 loops, best of 3: 747 msec per loop # wtf?? No "wtf" here, the read() loop is quadratic since you're building a new, larger, bytes object every iteration. Python 2 has a fragile optimization for concatenation of strings, which can avoid the quadratic behaviour on some systems (depends on realloc() being fast). > readinto/2.7 100 loops, best of 3: 8.93 msec per loop > readinto/3.2 100 loops, best of 3: 10.3 msec per loop # suddenly 3.2 > is performing well? > readinto/3.3 10 loops, best of 3: 20.4 msec per loop What if you allocate the bytearray outside of the timed function? Regards Antoine. From anacrolix at gmail.com Fri Nov 25 12:23:03 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Fri, 25 Nov 2011 22:23:03 +1100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: <20111125115504.65fcd400@pitrou.net> References: <20111124192933.393cf96f@pitrou.net> <20111125012319.52a69ebb@pitrou.net> <20111125115504.65fcd400@pitrou.net> Message-ID: You can see in the tests on the largest buffer size tested, 8192, that the naive "read" actually outperforms readinto(). It's possibly by extrapolating into significantly larger buffer sizes that readinto() gets left behind. It's also reasonable to assume that this wasn't tested thoroughly. 
On Fri, Nov 25, 2011 at 9:55 PM, Antoine Pitrou wrote: > On Fri, 25 Nov 2011 08:38:48 +0200 > Eli Bendersky wrote: >> >> Just to be clear, there were two separate issues raised here. One is the >> speed regression of readinto() from 2.7 to 3.2, and the other is the >> relative slowness of justread() in 3.3 >> >> Regarding the second, I'm not sure it's an issue because I tried a larger >> file (100MB and then also 300MB) and the speed of 3.3 is now on par with >> 3.2 and 2.7 >> >> However, the original question remains - on the 100MB file also, although >> in 2.7 readinto is 35% faster than readandcopy(), on 3.2 it's about the >> same speed (even a few % slower). That said, I now observe with Python 3.3 >> the same speed as with 2.7, including the readinto() speedup - so it >> appears that the readinto() regression has been solved in 3.3? Any clue >> about where it happened (i.e. which bug/changeset)? > > It would probably be http://hg.python.org/cpython/rev/a1d77c6f4ec1/ > > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com > From anacrolix at gmail.com Fri Nov 25 12:37:49 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Fri, 25 Nov 2011 22:37:49 +1100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: <20111125120400.53ce8ca1@pitrou.net> References: <20111124192933.393cf96f@pitrou.net> <20111125020700.4c38aab7@pitrou.net> <20111125120400.53ce8ca1@pitrou.net> Message-ID: On Fri, Nov 25, 2011 at 10:04 PM, Antoine Pitrou wrote: > On Fri, 25 Nov 2011 20:34:21 +1100 > Matt Joiner wrote: >> >> It's Python 3.2. I tried it for larger files and got some interesting results. 
>> >> readinto() for 10MB files, reading 10MB all at once: >> >> readinto/2.7 100 loops, best of 3: 8.6 msec per loop >> readinto/3.2 10 loops, best of 3: 29.6 msec per loop >> readinto/3.3 100 loops, best of 3: 19.5 msec per loop >> >> With 100KB chunks for the 10MB file (annotated with #): >> >> matt at stanley:~/Desktop$ for f in read bytearray_read readinto; do for >> v in 2.7 3.2 3.3; do echo -n "$f/$v "; "python$v" -m timeit -s 'import >> readinto' "readinto.$f()"; done; done >> read/2.7 100 loops, best of 3: 7.86 msec per loop # this is actually >> faster than the 10MB read >> read/3.2 10 loops, best of 3: 253 msec per loop # wtf? >> read/3.3 10 loops, best of 3: 747 msec per loop # wtf?? > > No "wtf" here, the read() loop is quadratic since you're building a > new, larger, bytes object every iteration. ?Python 2 has a fragile > optimization for concatenation of strings, which can avoid the > quadratic behaviour on some systems (depends on realloc() being fast). Is there any way to bring back that optimization? a 30 to 100x slow down on probably one of the most common operations... string contatenation, is very noticeable. In python3.3, this is representing a 0.7s stall building a 10MB string. Python 2.7 did this in 0.007s. > >> readinto/2.7 100 loops, best of 3: 8.93 msec per loop >> readinto/3.2 100 loops, best of 3: 10.3 msec per loop # suddenly 3.2 >> is performing well? >> readinto/3.3 10 loops, best of 3: 20.4 msec per loop > > What if you allocate the bytearray outside of the timed function? This change makes readinto() faster for 100K chunks than the other 2 methods and clears the differences between the versions. readinto/2.7 100 loops, best of 3: 6.54 msec per loop readinto/3.2 100 loops, best of 3: 7.64 msec per loop readinto/3.3 100 loops, best of 3: 7.39 msec per loop Updated test code: http://pastebin.com/8cEYG3BD > > Regards > > Antoine. 
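The quadratic behaviour Antoine describes can be sidestepped without relying on any interpreter optimization: collect the chunks in a list and join once, or accumulate into a mutable bytearray. A quick sketch of both patterns (the path argument is a placeholder):

```python
def read_with_join(path, bufsize=0x10000):
    # Store each chunk once and join at the end: O(n) overall,
    # unlike `s += b` on an immutable bytes object, which recopies
    # everything accumulated so far on every iteration.
    chunks = []
    with open(path, 'rb') as f:
        while True:
            b = f.read(bufsize)
            if not b:
                break
            chunks.append(b)
    return b''.join(chunks)

def read_with_bytearray(path, bufsize=0x10000):
    # bytearray is mutable, so += extends it in place with amortized
    # constant cost per byte, much like list.append().
    s = bytearray()
    with open(path, 'rb') as f:
        while True:
            b = f.read(bufsize)
            if not b:
                break
            s += b
    return bytes(s)
```

Both stay linear in the file size, which is why the bytearray variant in the benchmarks above does not show the wtf-grade slowdowns of the bytes-concatenation loop.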
> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com > So as I think Eli suggested, the readinto() performance issue goes away with large enough reads, I'd put down the differences to some unrelated language changes. However the performance drop on read(): Python 3.2 is 30x slower than 2.7, and 3.3 is 100x slower than 2.7. From eliben at gmail.com Fri Nov 25 12:56:05 2011 From: eliben at gmail.com (Eli Bendersky) Date: Fri, 25 Nov 2011 13:56:05 +0200 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: <20111125115504.65fcd400@pitrou.net> References: <20111124192933.393cf96f@pitrou.net> <20111125012319.52a69ebb@pitrou.net> <20111125115504.65fcd400@pitrou.net> Message-ID: > > However, the original question remains - on the 100MB file also, although > > in 2.7 readinto is 35% faster than readandcopy(), on 3.2 it's about the > > same speed (even a few % slower). That said, I now observe with Python > 3.3 > > the same speed as with 2.7, including the readinto() speedup - so it > > appears that the readinto() regression has been solved in 3.3? Any clue > > about where it happened (i.e. which bug/changeset)? > > It would probably be http://hg.python.org/cpython/rev/a1d77c6f4ec1/ > Great, thanks. This is an important change, definitely something to wait for in 3.3 Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From anacrolix at gmail.com Fri Nov 25 13:02:53 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Fri, 25 Nov 2011 23:02:53 +1100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? 
In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> <20111125012319.52a69ebb@pitrou.net> <20111125115504.65fcd400@pitrou.net> Message-ID: I was under the impression this is already in 3.3? On Nov 25, 2011 10:58 PM, "Eli Bendersky" wrote: > > >> > However, the original question remains - on the 100MB file also, although >> > in 2.7 readinto is 35% faster than readandcopy(), on 3.2 it's about the >> > same speed (even a few % slower). That said, I now observe with Python 3.3 >> > the same speed as with 2.7, including the readinto() speedup - so it >> > appears that the readinto() regression has been solved in 3.3? Any clue >> > about where it happened (i.e. which bug/changeset)? >> >> It would probably be http://hg.python.org/cpython/rev/a1d77c6f4ec1/ > > > Great, thanks. This is an important change, definitely something to wait for in 3.3 > Eli > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eliben at gmail.com Fri Nov 25 13:14:02 2011 From: eliben at gmail.com (Eli Bendersky) Date: Fri, 25 Nov 2011 14:14:02 +0200 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> <20111125012319.52a69ebb@pitrou.net> <20111125115504.65fcd400@pitrou.net> Message-ID: On Fri, Nov 25, 2011 at 14:02, Matt Joiner wrote: > I was under the impression this is already in 3.3? > Sure, but 3.3 wasn't released yet. Eli P.S. Top-posting again ;-) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From solipsis at pitrou.net Fri Nov 25 13:11:57 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 25 Nov 2011 13:11:57 +0100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> <20111125020700.4c38aab7@pitrou.net> <20111125120400.53ce8ca1@pitrou.net> Message-ID: <20111125131157.343ff63b@pitrou.net> On Fri, 25 Nov 2011 22:37:49 +1100 Matt Joiner wrote: > On Fri, Nov 25, 2011 at 10:04 PM, Antoine Pitrou wrote: > > On Fri, 25 Nov 2011 20:34:21 +1100 > > Matt Joiner wrote: > >> > >> It's Python 3.2. I tried it for larger files and got some interesting results. > >> > >> readinto() for 10MB files, reading 10MB all at once: > >> > >> readinto/2.7 100 loops, best of 3: 8.6 msec per loop > >> readinto/3.2 10 loops, best of 3: 29.6 msec per loop > >> readinto/3.3 100 loops, best of 3: 19.5 msec per loop > >> > >> With 100KB chunks for the 10MB file (annotated with #): > >> > >> matt at stanley:~/Desktop$ for f in read bytearray_read readinto; do for > >> v in 2.7 3.2 3.3; do echo -n "$f/$v "; "python$v" -m timeit -s 'import > >> readinto' "readinto.$f()"; done; done > >> read/2.7 100 loops, best of 3: 7.86 msec per loop # this is actually > >> faster than the 10MB read > >> read/3.2 10 loops, best of 3: 253 msec per loop # wtf? > >> read/3.3 10 loops, best of 3: 747 msec per loop # wtf?? > > > > No "wtf" here, the read() loop is quadratic since you're building a > > new, larger, bytes object every iteration. ?Python 2 has a fragile > > optimization for concatenation of strings, which can avoid the > > quadratic behaviour on some systems (depends on realloc() being fast). > > Is there any way to bring back that optimization? a 30 to 100x slow > down on probably one of the most common operations... string > contatenation, is very noticeable. In python3.3, this is representing > a 0.7s stall building a 10MB string. Python 2.7 did this in 0.007s. 
Well, extending a bytearray() (as you saw yourself) is the proper solution in such cases. Note that you probably won't see a difference when concatenating very small strings. It would be interesting if you could run the same benchmarks on other OSes (Windows or OS X, for example). Regards Antoine. From p.f.moore at gmail.com Fri Nov 25 15:48:00 2011 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 25 Nov 2011 14:48:00 +0000 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> <20111125020700.4c38aab7@pitrou.net> <20111125120400.53ce8ca1@pitrou.net> Message-ID: On 25 November 2011 11:37, Matt Joiner wrote: >> No "wtf" here, the read() loop is quadratic since you're building a >> new, larger, bytes object every iteration. ?Python 2 has a fragile >> optimization for concatenation of strings, which can avoid the >> quadratic behaviour on some systems (depends on realloc() being fast). > > Is there any way to bring back that optimization? a 30 to 100x slow > down on probably one of the most common operations... string > contatenation, is very noticeable. In python3.3, this is representing > a 0.7s stall building a 10MB string. Python 2.7 did this in 0.007s. It's a fundamental, but sadly not well-understood, consequence of having immutable strings. Concatenating immutable strings in a loop is quadratic. There are many ways of working around it (languages like C# and Java have string builder classes, I believe, and in Python you can use StringIO or build a list and join at the end) but that's as far as it goes. The optimisation mentioned was an attempt (by mutating an existing string when the runtime determined that it was safe to do so) to hide the consequences of this fact from end-users who didn't fully understand the issues. 
It was relatively effective, but like any such case (floating point is another common example) it did some level of harm at the same time as it helped (by obscuring the issue further). It would be nice to have the optimisation back if it's easy enough to do so, for quick-and-dirty code, but it is not a good idea to rely on it (and it's especially unwise to base benchmarks on it working :-)) Paul. From amauryfa at gmail.com Fri Nov 25 16:07:59 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Fri, 25 Nov 2011 16:07:59 +0100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> <20111125020700.4c38aab7@pitrou.net> <20111125120400.53ce8ca1@pitrou.net> Message-ID: 2011/11/25 Paul Moore > The optimisation mentioned was an attempt (by mutating an existing > string when the runtime determined that it was safe to do so) to hide > the consequences of this fact from end-users who didn't fully > understand the issues. It was relatively effective, but like any such > case (floating point is another common example) it did some level of > harm at the same time as it helped (by obscuring the issue further). > > It would be nice to have the optimisation back if it's easy enough to > do so, for quick-and-dirty code, but it is not a good idea to rely on > it (and it's especially unwise to base benchmarks on it working :-)) > Note that this string optimization hack is still present in Python 3, but it now acts on *unicode* strings, not bytes. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aahz at pythoncraft.com Fri Nov 25 16:40:12 2011 From: aahz at pythoncraft.com (Aahz) Date: Fri, 25 Nov 2011 07:40:12 -0800 Subject: [Python-Dev] webmaster@python.org address not working In-Reply-To: <4ECEDF41.2030008@jcea.es> References: <4ECEDF41.2030008@jcea.es> Message-ID: <20111125154011.GC7042@panix.com> On Fri, Nov 25, 2011, Jesus Cea wrote: > > When mailing there, I get this error. Not sure where to report. > > """ > Final-Recipient: rfc822; sdrees at sdrees.de > Original-Recipient: rfc822;webmaster at python.org > Action: failed > Status: 5.1.1 > Remote-MTA: dns; stefan.zinzdrees.de > Diagnostic-Code: smtp; 550 5.1.1 : Recipient address > rejected: User unknown in local recipient table > """ You reported it to the correct place, I pinged Stefan at the contact address listed by whois. Note that webmaster at python.org is a plain alias, so anyone whose e-mail isn't working will generate a bounce. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ WiFi is the SCSI of the 21st Century -- there are fundamental technical reasons for sacrificing a goat. (with no apologies to John Woods) From p.f.moore at gmail.com Fri Nov 25 16:48:23 2011 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 25 Nov 2011 15:48:23 +0000 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> <20111125020700.4c38aab7@pitrou.net> <20111125120400.53ce8ca1@pitrou.net> Message-ID: On 25 November 2011 15:07, Amaury Forgeot d'Arc wrote: > 2011/11/25 Paul Moore >> It would be nice to have the optimisation back if it's easy enough to >> do so, for quick-and-dirty code, but it is not a good idea to rely on >> it (and it's especially unwise to base benchmarks on it working :-)) > > Note that this string optimization hack is still present in Python 3, > but it now acts on *unicode* strings, not bytes. Ah, yes. That makes sense. 
Paul From fuzzyman at voidspace.org.uk Fri Nov 25 16:50:25 2011 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Fri, 25 Nov 2011 15:50:25 +0000 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111125020700.4c38aab7@pitrou.net> <20111125120400.53ce8ca1@pitrou.net> Message-ID: <4ECFB941.7040206@voidspace.org.uk> On 25/11/2011 15:48, Paul Moore wrote: > On 25 November 2011 15:07, Amaury Forgeot d'Arc wrote: >> 2011/11/25 Paul Moore >>> It would be nice to have the optimisation back if it's easy enough to >>> do so, for quick-and-dirty code, but it is not a good idea to rely on >>> it (and it's especially unwise to base benchmarks on it working :-)) >> Note that this string optimization hack is still present in Python 3, >> but it now acts on *unicode* strings, not bytes. > Ah, yes. That makes sense. Although for concatenating immutable bytes presumably the same hack would be *possible*. Michael > Paul > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From status at bugs.python.org Fri Nov 25 18:07:30 2011 From: status at bugs.python.org (Python tracker) Date: Fri, 25 Nov 2011 18:07:30 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20111125170730.4BA8A1CC5C@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2011-11-18 - 2011-11-25) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 3134 (+19) closed 22128 (+31) total 25262 (+50) Open issues with patches: 1328 Issues opened (41) ================== #2286: Stack overflow exception caused by test_marshal on Windows x64 http://bugs.python.org/issue2286 reopened by brian.curtin #13387: suggest assertIs(type(obj), cls) for exact type checking http://bugs.python.org/issue13387 reopened by eric.araujo #13433: String format documentation contains error regarding %g http://bugs.python.org/issue13433 opened by Christian.Iversen #13434: time.xmlrpc.com dead http://bugs.python.org/issue13434 opened by pitrou #13435: Copybutton does not hide tracebacks http://bugs.python.org/issue13435 opened by lehmannro #13436: compile() doesn't work on ImportFrom with level=None http://bugs.python.org/issue13436 opened by Janosch.Gräf #13437: Provide links to the source code for every module in the docum http://bugs.python.org/issue13437 opened by Julian #13438: "Delete patch set" review action doesn't work http://bugs.python.org/issue13438 opened by Oleg.Plakhotnyuk #13439: turtle: Errors in docstrings of onkey and onkeypress http://bugs.python.org/issue13439 opened by smichr #13440: Explain the "status quo wins a stalemate" principle in the dev http://bugs.python.org/issue13440 opened by ncoghlan #13441: TestEnUSCollation.test_strxfrm() fails on Solaris http://bugs.python.org/issue13441 opened by haypo #13443: wrong links and examples in the functional HOWTO http://bugs.python.org/issue13443 opened by eli.bendersky #13444: closed stdout causes error on stderr when the interpreter unco http://bugs.python.org/issue13444 opened by Ronny.Pfannschmidt #13445: Enable linking the module pysqlite with Berkeley DB SQL instea http://bugs.python.org/issue13445 opened by Lauren.Foutz #13446: imaplib, fetch: improper behaviour on read-only selected mailb http://bugs.python.org/issue13446 opened by char.nikolaou #13447: Add tests for Tools/scripts/reindent.py http://bugs.python.org/issue13447 opened
by eric.araujo #13448: PEP 3155 implementation http://bugs.python.org/issue13448 opened by pitrou #13449: sched - provide an "async" argument for run() method http://bugs.python.org/issue13449 opened by giampaolo.rodola #13450: add assertions to implement the intent in ''.format_map test http://bugs.python.org/issue13450 opened by akira #13451: sched.py: speedup cancel() method http://bugs.python.org/issue13451 opened by giampaolo.rodola #13452: PyUnicode_EncodeDecimal: reject error handlers different than http://bugs.python.org/issue13452 opened by haypo #13453: Tests and network timeouts http://bugs.python.org/issue13453 opened by haypo #13454: crash when deleting one pair from tee() http://bugs.python.org/issue13454 opened by PyryP #13455: Reorganize tracker docs in the devguide http://bugs.python.org/issue13455 opened by ezio.melotti #13456: Providing a custom HTTPResponse class to HTTPConnection http://bugs.python.org/issue13456 opened by r.david.murray #13461: Error on test_issue_1395_5 with Python 2.7 and VS2010 http://bugs.python.org/issue13461 opened by sable #13462: Improve code and tests for Mixin2to3 http://bugs.python.org/issue13462 opened by eric.araujo #13463: Fix parsing of package_data http://bugs.python.org/issue13463 opened by eric.araujo #13464: HTTPResponse is missing an implementation of readinto http://bugs.python.org/issue13464 opened by r.david.murray #13465: A Jython section in the dev guide would be great http://bugs.python.org/issue13465 opened by fwierzbicki #13466: new timezones http://bugs.python.org/issue13466 opened by Rioky #13467: Typo in doc for library/sysconfig http://bugs.python.org/issue13467 opened by naoki #13471: setting access time beyond Jan. 
2038 on remote share failes on http://bugs.python.org/issue13471 opened by Thorsten.Simons #13472: devguide doesn't list all build dependencies http://bugs.python.org/issue13472 opened by eric.araujo #13473: Add tests for files byte-compiled by distutils[2] http://bugs.python.org/issue13473 opened by eric.araujo #13474: Mention of "-m" Flag Missing From Doc on Execution Model http://bugs.python.org/issue13474 opened by eric.snow #13475: Add '-p'/'--path0' command line option to override sys.path[0] http://bugs.python.org/issue13475 opened by ncoghlan #13476: Simple exclusion filter for unittest autodiscovery http://bugs.python.org/issue13476 opened by ncoghlan #13477: tarfile module should have a command line http://bugs.python.org/issue13477 opened by brandon-rhodes #13478: No documentation for timeit.default_timer http://bugs.python.org/issue13478 opened by sandro.tosi #13479: pickle too picky on re-defined classes http://bugs.python.org/issue13479 opened by kxroberto Most recent 15 issues with no replies (15) ========================================== #13478: No documentation for timeit.default_timer http://bugs.python.org/issue13478 #13476: Simple exclusion filter for unittest autodiscovery http://bugs.python.org/issue13476 #13474: Mention of "-m" Flag Missing From Doc on Execution Model http://bugs.python.org/issue13474 #13467: Typo in doc for library/sysconfig http://bugs.python.org/issue13467 #13464: HTTPResponse is missing an implementation of readinto http://bugs.python.org/issue13464 #13463: Fix parsing of package_data http://bugs.python.org/issue13463 #13456: Providing a custom HTTPResponse class to HTTPConnection http://bugs.python.org/issue13456 #13438: "Delete patch set" review action doesn't work http://bugs.python.org/issue13438 #13421: PyCFunction_* are not documented anywhere http://bugs.python.org/issue13421 #13413: time.daylight incorrect behavior in linux glibc http://bugs.python.org/issue13413 #13408: Rename packaging.resources back to
datafiles http://bugs.python.org/issue13408 #13403: Option for XMLPRC Server to support HTTPS http://bugs.python.org/issue13403 #13397: Option for XMLRPC clients to automatically transform Fault exc http://bugs.python.org/issue13397 #13372: handle_close called twice in poll2 http://bugs.python.org/issue13372 #13369: timeout with exit code 0 while re-running failed tests http://bugs.python.org/issue13369 Most recent 15 issues waiting for review (15) ============================================= #13473: Add tests for files byte-compiled by distutils[2] http://bugs.python.org/issue13473 #13455: Reorganize tracker docs in the devguide http://bugs.python.org/issue13455 #13452: PyUnicode_EncodeDecimal: reject error handlers different than http://bugs.python.org/issue13452 #13451: sched.py: speedup cancel() method http://bugs.python.org/issue13451 #13450: add assertions to implement the intent in ''.format_map test http://bugs.python.org/issue13450 #13449: sched - provide an "async" argument for run() method http://bugs.python.org/issue13449 #13448: PEP 3155 implementation http://bugs.python.org/issue13448 #13446: imaplib, fetch: improper behaviour on read-only selected mailb http://bugs.python.org/issue13446 #13436: compile() doesn't work on ImportFrom with level=None http://bugs.python.org/issue13436 #13429: provide __file__ to extension init function http://bugs.python.org/issue13429 #13420: newer() function in dep_util.py discard changes in the same se http://bugs.python.org/issue13420 #13410: String formatting bug in interactive mode http://bugs.python.org/issue13410 #13405: Add DTrace probes http://bugs.python.org/issue13405 #13402: Document absoluteness of sys.executable http://bugs.python.org/issue13402 #13396: new method random.getrandbytes() http://bugs.python.org/issue13396 Top 10 most discussed issues (10) ================================= #13441: TestEnUSCollation.test_strxfrm() fails on Solaris http://bugs.python.org/issue13441 13 msgs #10318: "make 
altinstall" installs many files with incorrect shebangs http://bugs.python.org/issue10318 9 msgs #13429: provide __file__ to extension init function http://bugs.python.org/issue13429 9 msgs #13448: PEP 3155 implementation http://bugs.python.org/issue13448 9 msgs #12328: multiprocessing's overlapped PipeConnection on Windows http://bugs.python.org/issue12328 8 msgs #13455: Reorganize tracker docs in the devguide http://bugs.python.org/issue13455 8 msgs #9530: integer undefined behaviors http://bugs.python.org/issue9530 7 msgs #12776: argparse: type conversion function should be called only once http://bugs.python.org/issue12776 7 msgs #12890: cgitb displays
<p>
tags when executed in text mode http://bugs.python.org/issue12890 7 msgs #13466: new timezones http://bugs.python.org/issue13466 7 msgs Issues closed (29) ================== #7049: decimal.py: Three argument power issues http://bugs.python.org/issue7049 closed by mark.dickinson #8270: Should socket.PF_PACKET be removed, in favor of socket.AF_PACK http://bugs.python.org/issue8270 closed by neologix #9614: _pickle is not entirely 64-bit safe http://bugs.python.org/issue9614 closed by pitrou #12156: test_multiprocessing.test_notify_all() timeout (1 hour) on Fre http://bugs.python.org/issue12156 closed by neologix #12245: Document the meaning of FLT_ROUNDS constants for sys.float_inf http://bugs.python.org/issue12245 closed by mark.dickinson #12934: pysetup doesn't work for the docutils project http://bugs.python.org/issue12934 closed by eric.araujo #13093: Redundant code in PyUnicode_EncodeDecimal() http://bugs.python.org/issue13093 closed by haypo #13156: _PyGILState_Reinit assumes auto thread state will always exist http://bugs.python.org/issue13156 closed by neologix #13215: multiprocessing Manager.connect() aggressively retries refused http://bugs.python.org/issue13215 closed by neologix #13245: sched.py kwargs addition and default time functions http://bugs.python.org/issue13245 closed by giampaolo.rodola #13313: test_time fails: tzset() do not change timezone http://bugs.python.org/issue13313 closed by haypo #13338: Not all enumerations used in _Py_ANNOTATE_MEMORY_ORDER http://bugs.python.org/issue13338 closed by python-dev #13393: Improve BufferedReader.read1() http://bugs.python.org/issue13393 closed by pitrou #13401: test_argparse fails with root permissions http://bugs.python.org/issue13401 closed by python-dev #13411: Hashable memoryviews http://bugs.python.org/issue13411 closed by pitrou #13415: del os.environ[key] ignores errors http://bugs.python.org/issue13415 closed by python-dev #13417: faster utf-8 decoding http://bugs.python.org/issue13417 closed
by pitrou #13425: http.client.HTTPMessage.getallmatchingheaders() always returns http://bugs.python.org/issue13425 closed by stachjankowski #13431: Pass context information into the extension module init functi http://bugs.python.org/issue13431 closed by scoder #13432: Encoding alias "unicode" http://bugs.python.org/issue13432 closed by georg.brandl #13442: Better support for pipe I/O encoding in subprocess http://bugs.python.org/issue13442 closed by ncoghlan #13457: Display module name as string in `ImportError` http://bugs.python.org/issue13457 closed by cool-RR #13458: _ssl memory leak in _get_peer_alt_names http://bugs.python.org/issue13458 closed by pitrou #13459: logger.propagate=True behavior clarification http://bugs.python.org/issue13459 closed by python-dev #13460: urllib methods should demand unicode, instead demand str http://bugs.python.org/issue13460 closed by r.david.murray #13468: Python 2.7.1 SegmentationFaults when given high recursion limi http://bugs.python.org/issue13468 closed by benjamin.peterson #13469: TimedRotatingFileHandler fails to handle intervals of several http://bugs.python.org/issue13469 closed by vinay.sajip #13470: A user may need ... when she has ... http://bugs.python.org/issue13470 closed by pitrou #13480: range exits loop without action when start is higher than end http://bugs.python.org/issue13480 closed by r.david.murray From brett at python.org Fri Nov 25 18:37:59 2011 From: brett at python.org (Brett Cannon) Date: Fri, 25 Nov 2011 12:37:59 -0500 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: <6F9490B9-DBEF-4CF6-89B7-26EA0C8A88E2@underboss.org> Message-ID: On Thu, Nov 24, 2011 at 07:46, Nick Coghlan wrote: > On Thu, Nov 24, 2011 at 10:20 PM, Maciej Fijalkowski > wrote: > > The problem is not with maintaining the modified directory. 
The > > problem was always things like changing interface between the C > > version and the Python version or introduction of new stuff that does > > not run on pypy because it relies on refcounting. I don't see how > > having a subrepo helps here. > > Indeed, the main thing that can help on this front is to get more > modules to the same state as heapq, io, datetime (and perhaps a few > others that have slipped my mind) where the CPython repo actually > contains both C and Python implementations and the test suite > exercises both to make sure their interfaces remain suitably > consistent (even though, during normal operation, CPython users will > only ever hit the C accelerated version). > > This not only helps other implementations (by keeping a Python version > of the module continuously up to date with any semantic changes), but > can help people that are porting CPython to new platforms: the C > extension modules are far more likely to break in that situation than > the pure Python equivalents, and a relatively slow fallback is often > going to be better than no fallback at all. (Note that ctypes based > pure Python modules *aren't* particularly useful for this purpose, > though - due to the libffi dependency, ctypes is one of the extension > modules most likely to break when porting). > And the other reason I plan to see this through before I die is to help distribute the maintenance burden. Why should multiple VMs fix bad assumptions made by CPython in their own siloed repos and then we hope the change gets pushed upstream to CPython when it could be fixed once in a single repo that everyone works off of? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From solipsis at pitrou.net Fri Nov 25 18:37:46 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 25 Nov 2011 18:37:46 +0100 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot References: <6F9490B9-DBEF-4CF6-89B7-26EA0C8A88E2@underboss.org> Message-ID: <20111125183746.45ab20b5@pitrou.net> On Fri, 25 Nov 2011 12:37:59 -0500 Brett Cannon wrote: > On Thu, Nov 24, 2011 at 07:46, Nick Coghlan wrote: > > > On Thu, Nov 24, 2011 at 10:20 PM, Maciej Fijalkowski > > wrote: > > > The problem is not with maintaining the modified directory. The > > > problem was always things like changing interface between the C > > > version and the Python version or introduction of new stuff that does > > > not run on pypy because it relies on refcounting. I don't see how > > > having a subrepo helps here. > > > > Indeed, the main thing that can help on this front is to get more > > modules to the same state as heapq, io, datetime (and perhaps a few > > others that have slipped my mind) where the CPython repo actually > > contains both C and Python implementations and the test suite > > exercises both to make sure their interfaces remain suitably > > consistent (even though, during normal operation, CPython users will > > only ever hit the C accelerated version). > > > > This not only helps other implementations (by keeping a Python version > > of the module continuously up to date with any semantic changes), but > > can help people that are porting CPython to new platforms: the C > > extension modules are far more likely to break in that situation than > > the pure Python equivalents, and a relatively slow fallback is often > > going to be better than no fallback at all. (Note that ctypes based > > pure Python modules *aren't* particularly useful for this purpose, > > though - due to the libffi dependency, ctypes is one of the extension > > modules most likely to break when porting). > > > > And the other reason I plan to see this through before I die Uh! 
Any bad news? :/ From amauryfa at gmail.com Fri Nov 25 19:21:48 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Fri, 25 Nov 2011 19:21:48 +0100 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: <6F9490B9-DBEF-4CF6-89B7-26EA0C8A88E2@underboss.org> Message-ID: 2011/11/25 Brett Cannon > > > On Thu, Nov 24, 2011 at 07:46, Nick Coghlan wrote: > >> On Thu, Nov 24, 2011 at 10:20 PM, Maciej Fijalkowski >> wrote: >> > The problem is not with maintaining the modified directory. The >> > problem was always things like changing interface between the C >> > version and the Python version or introduction of new stuff that does >> > not run on pypy because it relies on refcounting. I don't see how >> > having a subrepo helps here. >> >> Indeed, the main thing that can help on this front is to get more >> modules to the same state as heapq, io, datetime (and perhaps a few >> others that have slipped my mind) where the CPython repo actually >> contains both C and Python implementations and the test suite >> exercises both to make sure their interfaces remain suitably >> consistent (even though, during normal operation, CPython users will >> only ever hit the C accelerated version). >> >> This not only helps other implementations (by keeping a Python version >> of the module continuously up to date with any semantic changes), but >> can help people that are porting CPython to new platforms: the C >> extension modules are far more likely to break in that situation than >> the pure Python equivalents, and a relatively slow fallback is often >> going to be better than no fallback at all. (Note that ctypes based >> pure Python modules *aren't* particularly useful for this purpose, >> though - due to the libffi dependency, ctypes is one of the extension >> modules most likely to break when porting). >> > > And the other reason I plan to see this through before I die is to help > distribute the maintenance burden. 
Why should multiple VMs fix bad > assumptions made by CPython in their own siloed repos and then we hope the > change gets pushed upstream to CPython when it could be fixed once in a > single repo that everyone works off of? > PyPy copied the CPython stdlib in a directory named "2.7", which is never modified; instead, adaptations are made by copying the file into "modified-2.7", and fixed there. Both directories appear in sys.path. This was done for this very reason: so that it's easy to identify the differences and suggest changes to push upstream. But this process was not very successful for several reasons: - The definition of "bad assumptions" used to be very strict. It's much much better nowadays, thanks to the ResourceWarning in 3.x for example (most changes in modified-2.7 are related to the garbage collector), and wider acceptance by the core developers of the "@impl_detail" decorators in tests. - 2.7 was already in maintenance mode, and such changes were not considered as bug fixes, so modified-2.7 never shrinks. It was a bit hard to find the motivation to fix only the 3.2 version of the stdlib, which you can not even test with PyPy! - Some modules in the stdlib rely on specific behaviors of the VM or extension modules that are not always easy to implement correctly in PyPy. The ctypes module is the most obvious example to me, but also the pickle/copy modules which were modified because of subtle differences around built-in methods (or was it the __builtins__ module?) And oh, I almost forgot distutils, which needs to parse some Makefile which of course does not exist in PyPy. - Differences between C extensions and pure Python modules are sometimes considered "undefined behaviour" and are rejected. See issue13274, this one has a happy ending, but I remember that the _pyio.py module chose to not fix some obscure reentrancy issues (which I completely agree with) -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed...
URL: From fuzzyman at voidspace.org.uk Fri Nov 25 23:14:04 2011 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Fri, 25 Nov 2011 22:14:04 +0000 Subject: [Python-Dev] PEP 380 In-Reply-To: References: Message-ID: On 24 Nov 2011, at 04:06, Nick Coghlan wrote: > On Thu, Nov 24, 2011 at 10:28 AM, Guido van Rossum wrote: >> Mea culpa for not keeping track, but what's the status of PEP 380? I >> really want this in Python 3.3! > > There are two relevant tracker issues (both with me for the moment). > > The main tracker issue for PEP 380 is here: http://bugs.python.org/issue11682 > > That's really just missing the doc updates - I haven't had a chance to > look at Zbyszek's latest offering on that front, but it shouldn't be > far off being complete (the *text* in his previous docs patch actually > seemed reasonable - I mainly objected to way it was organised). > > However, the PEP 380 test suite updates have a dependency on a new dis > module feature that provides an iterator over a structured description > of bytecode instructions: http://bugs.python.org/issue11816 Is it necessary to test parts of PEP 380 through bytecode structures rather than semantics? Those tests aren't going to be usable by other implementations. Michael > > I find Meador's suggestion to change the name of the new API to > something involving the word "instruction" appealing, so I plan to do > that, which will have a knock-on effect on the tests in the PEP 380 > branch. However, even once I get that done, Raymond specifically said > he wanted to review the dis module patch before I check it in, so I > don't plan to commit it until he gives the OK (either because he > reviewed it, or because he decides he's OK with it going in without > his review and he can review and potentially update it in Mercurial > any time before 3.3 is released). 
> > I currently plan to update my working branches for both of those on > the 3rd of December, so hopefully they'll be ready to go within the > next couple of weeks. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From jcea at jcea.es Sat Nov 26 03:18:51 2011 From: jcea at jcea.es (Jesus Cea) Date: Sat, 26 Nov 2011 03:18:51 +0100 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: <9bf460d39f263e856f6ff5042f28dfc6@netwok.org> References: <4EB94F97.6020002@jcea.es> <9bf460d39f263e856f6ff5042f28dfc6@netwok.org> Message-ID: <4ED04C8B.8000502@jcea.es> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 12/11/11 16:56, Éric Araujo wrote: > Ezio and I chatted a bit about this on IRC and he may try to write > a Python parser for Misc/NEWS in order to write a fully automated > merge tool. Anything new on this front? :-) - -- Jesus Cea Avion _/_/ _/_/_/ _/_/_/ jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/_/_/_/ .
_/_/ _/_/ _/_/ _/_/ _/_/ "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ "El amor es poner tu felicidad en la felicidad de otro" - Leibniz -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQCVAwUBTtBMi5lgi5GaxT1NAQLKsgP6At6qnzHknuTjq35mHfxVSOxJnMuZ8/vx 5ZXHcxCuPJud9GJz0+NEmDPImQAtRUZyV41ud9nQYIfhYE5rV4qBiK7KwMspg39o kclfRhMIPsQV3PkB4dDWy+gEkck+Q16pSzdtxbzKx7DpYk7lnFp/vsHQbNC5iqC9 pfmMny4L0s8= =NlDr -----END PGP SIGNATURE----- From ncoghlan at gmail.com Sat Nov 26 05:39:45 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 26 Nov 2011 14:39:45 +1000 Subject: [Python-Dev] PEP 380 In-Reply-To: References: Message-ID: On Sat, Nov 26, 2011 at 8:14 AM, Michael Foord wrote: > > On 24 Nov 2011, at 04:06, Nick Coghlan wrote: > >> On Thu, Nov 24, 2011 at 10:28 AM, Guido van Rossum wrote: >>> Mea culpa for not keeping track, but what's the status of PEP 380? I >>> really want this in Python 3.3! >> >> There are two relevant tracker issues (both with me for the moment). >> >> The main tracker issue for PEP 380 is here: http://bugs.python.org/issue11682 >> >> That's really just missing the doc updates - I haven't had a chance to >> look at Zbyszek's latest offering on that front, but it shouldn't be >> far off being complete (the *text* in his previous docs patch actually >> seemed reasonable - I mainly objected to way it was organised). >> >> However, the PEP 380 test suite updates have a dependency on a new dis >> module feature that provides an iterator over a structured description >> of bytecode instructions: http://bugs.python.org/issue11816 > > > Is it necessary to test parts of PEP 380 through bytecode structures rather than semantics? Those tests aren't going to be usable by other implementations. 
The affected tests aren't testing the PEP 380 semantics, they're specifically testing CPython's bytecode generation for yield from expressions and disassembly of same. Just because they aren't of any interest to other implementations doesn't mean *we* don't need them :) There are plenty of behavioural tests to go along with the bytecode specific ones, and those *will* be useful to other implementations. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From raymond.hettinger at gmail.com Sat Nov 26 06:14:59 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Fri, 25 Nov 2011 21:14:59 -0800 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: <4ED04C8B.8000502@jcea.es> References: <4EB94F97.6020002@jcea.es> <9bf460d39f263e856f6ff5042f28dfc6@netwok.org> <4ED04C8B.8000502@jcea.es> Message-ID: <52625A45-0613-43DE-9892-2EB6DFA635C2@gmail.com> On Nov 25, 2011, at 6:18 PM, Jesus Cea wrote: > On 12/11/11 16:56, Éric Araujo wrote: >> Ezio and I chatted a bit about this on IRC and he may try to write >> a Python parser for Misc/NEWS in order to write a fully automated >> merge tool. > > Anything new on this front? :-) To me, it would make more sense to split the file into a Misc/NEWS3.2 and Misc/NEWS3.3 much as we've done with whatsnew. That would make merging a piece of cake and would avoid adding a parser (and its idiosyncrasies) to the toolchain. Raymond -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ncoghlan at gmail.com Sat Nov 26 06:29:34 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 26 Nov 2011 15:29:34 +1000 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: <52625A45-0613-43DE-9892-2EB6DFA635C2@gmail.com> References: <4EB94F97.6020002@jcea.es> <9bf460d39f263e856f6ff5042f28dfc6@netwok.org> <4ED04C8B.8000502@jcea.es> <52625A45-0613-43DE-9892-2EB6DFA635C2@gmail.com> Message-ID: On Sat, Nov 26, 2011 at 3:14 PM, Raymond Hettinger wrote: > > On Nov 25, 2011, at 6:18 PM, Jesus Cea wrote: > > On 12/11/11 16:56, Éric Araujo wrote: > > Ezio and I chatted a bit about this on IRC and he may try to write > > a Python parser for Misc/NEWS in order to write a fully automated > > merge tool. > > Anything new on this front? :-) > > To me, it would make more sense to split the file into a Misc/NEWS3.2 and > Misc/NEWS3.3 much as we've done with whatsnew. That would make merging a > piece of cake and would avoid adding a parser (and its idiosyncrasies) to the > toolchain. +1 A simple-but-it-works approach to this problem sounds good to me. We'd still need to work out a few conventions about how changes that affect both versions get recorded (I still favour putting independent entries in both files), but simply eliminating the file name collision will also eliminate most of the merge conflicts. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From fijall at gmail.com Sat Nov 26 08:46:29 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 26 Nov 2011 09:46:29 +0200 Subject: [Python-Dev] PEP 380 In-Reply-To: References: Message-ID: On Sat, Nov 26, 2011 at 6:39 AM, Nick Coghlan wrote: > On Sat, Nov 26, 2011 at 8:14 AM, Michael Foord > wrote: >> >> On 24 Nov 2011, at 04:06, Nick Coghlan wrote: >> >>> On Thu, Nov 24, 2011 at 10:28 AM, Guido van Rossum wrote: >>>> Mea culpa for not keeping track, but what's the status of PEP 380? I >>>> really want this in Python 3.3!
>>> >>> There are two relevant tracker issues (both with me for the moment). >>> >>> The main tracker issue for PEP 380 is here: http://bugs.python.org/issue11682 >>> >>> That's really just missing the doc updates - I haven't had a chance to >>> look at Zbyszek's latest offering on that front, but it shouldn't be >>> far off being complete (the *text* in his previous docs patch actually >>> seemed reasonable - I mainly objected to way it was organised). >>> >>> However, the PEP 380 test suite updates have a dependency on a new dis >>> module feature that provides an iterator over a structured description >>> of bytecode instructions: http://bugs.python.org/issue11816 >> >> >> Is it necessary to test parts of PEP 380 through bytecode structures rather than semantics? Those tests aren't going to be usable by other implementations. > > The affected tests aren't testing the PEP 380 semantics, they're > specifically testing CPython's bytecode generation for yield from > expressions and disassembly of same. Just because they aren't of any > interest to other implementations doesn't mean *we* don't need them :) > > There are plenty of behavioural tests to go along with the bytecode > specific ones, and those *will* be useful to other implementations. > > Cheers, > Nick. > I'm with nick on this one, seems like a very useful test, just remember to mark it as @impl_detail (or however the decorator is called). 
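A minimal sketch of the kind of implementation-detail test being discussed. This is an illustration, not CPython's actual test: the guard in today's CPython is `test.support.cpython_only` rather than `@impl_detail`, `dis.get_instructions()` is the structured-bytecode iterator proposed in issue 11816 (it landed in Python 3.4), and the exact opcodes for yield from have changed across versions (`YIELD_FROM` up to 3.10, a `SEND` loop from 3.11 on):

```python
import dis
import unittest

try:
    from test.support import cpython_only  # the guard fijal alludes to
except ImportError:  # the 'test' package is not shipped by every distribution
    def cpython_only(func):
        return func

def delegating():
    yield from range(3)

class YieldFromBytecodeTest(unittest.TestCase):
    @cpython_only
    def test_yield_from_opcode_present(self):
        # dis.get_instructions() yields structured Instruction tuples,
        # which is exactly the dis feature the PEP 380 tests depend on.
        opnames = {instr.opname for instr in dis.get_instructions(delegating)}
        # CPython 3.3-3.10 emit YIELD_FROM; 3.11+ lower "yield from"
        # to a SEND loop instead, so accept either spelling here.
        self.assertTrue(opnames & {"YIELD_FROM", "SEND"})

if __name__ == "__main__":
    unittest.main(exit=False)
```

The behavioural tests mentioned in the thread would sit alongside this, unguarded, since they exercise semantics rather than code generation.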
Cheers, fijal From fuzzyman at voidspace.org.uk Sat Nov 26 14:46:12 2011 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Sat, 26 Nov 2011 13:46:12 +0000 Subject: [Python-Dev] PEP 380 In-Reply-To: References: Message-ID: <4ED0EDA4.6050309@voidspace.org.uk> On 26/11/2011 07:46, Maciej Fijalkowski wrote: > On Sat, Nov 26, 2011 at 6:39 AM, Nick Coghlan wrote: >> On Sat, Nov 26, 2011 at 8:14 AM, Michael Foord >> wrote: >>> On 24 Nov 2011, at 04:06, Nick Coghlan wrote: >>> >>>> On Thu, Nov 24, 2011 at 10:28 AM, Guido van Rossum wrote: >>>>> Mea culpa for not keeping track, but what's the status of PEP 380? I >>>>> really want this in Python 3.3! >>>> There are two relevant tracker issues (both with me for the moment). >>>> >>>> The main tracker issue for PEP 380 is here: http://bugs.python.org/issue11682 >>>> >>>> That's really just missing the doc updates - I haven't had a chance to >>>> look at Zbyszek's latest offering on that front, but it shouldn't be >>>> far off being complete (the *text* in his previous docs patch actually >>>> seemed reasonable - I mainly objected to way it was organised). >>>> >>>> However, the PEP 380 test suite updates have a dependency on a new dis >>>> module feature that provides an iterator over a structured description >>>> of bytecode instructions: http://bugs.python.org/issue11816 >>> >>> Is it necessary to test parts of PEP 380 through bytecode structures rather than semantics? Those tests aren't going to be usable by other implementations. >> The affected tests aren't testing the PEP 380 semantics, they're >> specifically testing CPython's bytecode generation for yield from >> expressions and disassembly of same. Just because they aren't of any >> interest to other implementations doesn't mean *we* don't need them :) >> >> There are plenty of behavioural tests to go along with the bytecode >> specific ones, and those *will* be useful to other implementations. >> >> Cheers, >> Nick. 
>> > I'm with nick on this one, seems like a very useful test, just > remember to mark it as @impl_detail (or however the decorator is > called). Fair enough. :-) If other tests are failing (the semantics are wrong) then having a test that shows you the semantics are screwed because the bytecode has been incorrectly generated will be a useful diagnostic tool. On the other hand it is hard to see that bytecode generation could be "wrong" without it affecting some test of semantics that should also fail - so as tests in their own right the bytecode tests *must* be superfluous (or there is some aspect of the semantics that is *only* tested through the bytecode and that seems bad, particularly for other implementations). All the best, Michael > Cheers, > fijal > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From merwok at netwok.org Sat Nov 26 14:52:32 2011 From: merwok at netwok.org (Éric Araujo) Date: Sat, 26 Nov 2011 14:52:32 +0100 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: <6F9490B9-DBEF-4CF6-89B7-26EA0C8A88E2@underboss.org> Message-ID: <4ED0EF20.1060203@netwok.org> Le 25/11/2011 19:21, Amaury Forgeot d'Arc a écrit : > And oh, I almost forgot distutils, which needs to parse some Makefile which > of course does not exist in PyPy. This is a bug (#10764) that I intend to fix for the next releases of 2.7 and 3.2. I also want to fix all modules that use sys.version[:2] to get 'X.Y', which is a CPython implementation detail. I find PyPy an excellent project, so you can send any bugs in distutils, sysconfig, site and friends my way! I also hope to make distutils2 compatible with PyPy before 2012.
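A hedged illustration of why string-slicing sys.version is fragile and what the stable alternatives look like (all names below are real stdlib APIs):

```python
import sys
import sysconfig

# Fragile: slices a human-readable string whose exact layout is a
# CPython implementation detail (and the slice breaks anyway for
# two-digit minor versions such as 3.10).
fragile = sys.version[:3]

# Robust: sys.version_info is a structured tuple on every implementation.
robust = "%d.%d" % sys.version_info[:2]

# sysconfig exposes the same "X.Y" string for install-layout purposes.
print(robust)
print(sysconfig.get_python_version())
```

On implementations other than CPython (or on CPython 3.10+), only the last two spellings are guaranteed to give the intended "X.Y" value.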
Cheers From merwok at netwok.org Sat Nov 26 17:13:03 2011 From: merwok at netwok.org (Éric Araujo) Date: Sat, 26 Nov 2011 17:13:03 +0100 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: <4ED04C8B.8000502@jcea.es> References: <4EB94F97.6020002@jcea.es> <9bf460d39f263e856f6ff5042f28dfc6@netwok.org> <4ED04C8B.8000502@jcea.es> Message-ID: <4ED1100F.4090701@netwok.org> Le 26/11/2011 03:18, Jesus Cea a écrit : > On 12/11/11 16:56, Éric Araujo wrote: >> Ezio and I chatted a bit about this on IRC and he may try to write >> a Python parser for Misc/NEWS in order to write a fully automated >> merge tool. > Anything new on this front? :-) Not from me. I don't have the roundtuits, and I find my hgddmt script good enough. Cheers From aahz at pythoncraft.com Sat Nov 26 17:14:35 2011 From: aahz at pythoncraft.com (Aahz) Date: Sat, 26 Nov 2011 08:14:35 -0800 Subject: [Python-Dev] 404 in (important) documentation in www.python.org and contributor agreement In-Reply-To: <4ECEDAC9.7040903@jcea.es> References: <4ECEDAC9.7040903@jcea.es> Message-ID: <20111126161435.GA24540@panix.com> On Fri, Nov 25, 2011, Jesus Cea wrote: > > Checking documentation about the contributor license agreement, I > encountered a wrong HTML link in http://www.python.org/about/help/ : > > * "Python Patch Guidelines" points to > http://www.python.org/dev/patches/, that doesn't exist. Fixed > PS: The devguide doesn't say anything (AFAIK) about the contributor > agreement. The devguide seems to now be hosted on docs.python.org and AFAIK the web team doesn't deal with that. Someone from python-dev needs to lead. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ WiFi is the SCSI of the 21st Century -- there are fundamental technical reasons for sacrificing a goat.
(with no apologies to John Woods) From merwok at netwok.org Sat Nov 26 17:16:44 2011 From: merwok at netwok.org (Éric Araujo) Date: Sat, 26 Nov 2011 17:16:44 +0100 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: References: <4EB94F97.6020002@jcea.es> <9bf460d39f263e856f6ff5042f28dfc6@netwok.org> <4ED04C8B.8000502@jcea.es> <52625A45-0613-43DE-9892-2EB6DFA635C2@gmail.com> Message-ID: <4ED110EC.3080702@netwok.org> Le 26/11/2011 06:29, Nick Coghlan a écrit : > On Sat, Nov 26, 2011 at 3:14 PM, Raymond Hettinger wrote: >> To me, it would make more sense to split the file into a Misc/NEWS3.2 and >> Misc/NEWS3.3 much as we've done with whatsnew. That would make merging a >> piece of cake and would avoid adding a parser (and its idiosyncrasies) to the >> toolchain. > > +1 > > A simple-but-it-works approach to this problem sounds good to me. > > We'd still need to work out a few conventions about how changes that > affect both versions get recorded (I still favour putting independent > entries in both files), but simply eliminating the file name collision > will also eliminate most of the merge conflicts. Maybe I'm not seeing something, but adding an entry by hand does not sound much better than solving conflicts by hand. Another idea: If we had different sections for bug fixes and new features (with or without another level of core/lib/doc/tests separation), then there should be fewer (no?) conflicts. Regards From merwok at netwok.org Sat Nov 26 17:23:54 2011 From: merwok at netwok.org (Éric Araujo) Date: Sat, 26 Nov 2011 17:23:54 +0100 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> Message-ID: <4ED1129A.5090702@netwok.org> Le 24/11/2011 22:46, Xavier Morel a écrit : > Wouldn't it be simpler to just use MQ and upload the patch(es) > from the series?
MQ is a very powerful and useful tool, but its learning curve is steeper than regular Mercurial, and it is not designed for long-term development. Rebasing patches is more fragile and less user-friendly than merging branches, and it's also easier to corrupt your MQ patch queue than your Mercurial repo. I like Mercurial merges and I don't like diffs of diffs, so I avoid MQ. > Would be easier to keep in sync with the development tip too. How so? With a regular clone you have to pull and merge regularly, with MQ you have to pull and rebase. Regards From merwok at netwok.org Sat Nov 26 17:30:22 2011 From: merwok at netwok.org (Éric Araujo) Date: Sat, 26 Nov 2011 17:30:22 +0100 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: <4ECEFFD4.5030601@jcea.es> References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> <4ECEFFD4.5030601@jcea.es> Message-ID: <4ED1141E.9050604@netwok.org> Le 25/11/2011 03:39, Jesus Cea a écrit : > On 24/11/11 18:08, Éric Araujo wrote: >>> I have a question and I would rather have an answer instead of >>> actually trying and getting myself in a messy situation. >> Clones are cheap, trying is cheap! > [snip valid reasons for not trying] My reply was tongue-in-cheek :) FYI, it's not considered pollution to use a tracker issue to test hooks or Mercurial integration (there's even one issue entirely devoted to such tests, but I can't find its number). >>> 5. Development of the new feature is taking a long time, and >>> python canonical version keeps moving forward. The clone+branch >>> and the original python version are diverging. Eventually there >>> are changes in python that the programmer would like in her >>> version, so she does a "pull" and then a merge for the original >>> python branch to her named branch. >> I do this all the time. I work on a fix-nnnn branch, and once a >> week for example I pull and merge the base branch.
Sometimes there >> are no conflicts except Misc/NEWS, sometimes I have to adapt my >> code because of other people's changes before I can commit the >> merge. > That is good, because that means your patch is always able to be > applied to the original branch tip, and that your changes work with > current work in the mainline. > > That is what I want to do, but I need to know that it is safe to do so > (from the "Create Patch" perspective). I don't understand "safe". >>>> 6. What would be posted in the bug tracker when she does a >>>> new "Create Patch"?. Only her changes, her changes SINCE the >>>> merge, her changes plus merged changes or something else?. >>> The diff would be equivalent to "hg diff -r base" and would >>> contain all the changes she did to add the bug fix or feature. >>> Merging only makes sure that the computed diff does not appear >>> to touch unrelated files, IOW that it applies cleanly. >>> (Barring bugs in Mercurial-Roundup integration, we have a few >>> of these in the metatracker.) >> So you are saying that "Create patch" will ONLY get the >> differences in the development branch and not the changes brought >> in from the merge?. I don't really understand how you understood what I said :( The merge brings in changes from default; when you diff your branch against default later, it will not show the changes brought by the merge, but it will apply cleanly on top of default. Does this wording make sense? Regards From merwok at netwok.org Sat Nov 26 17:44:59 2011 From: merwok at netwok.org (Éric Araujo) Date: Sat, 26 Nov 2011 17:44:59 +0100 Subject: [Python-Dev] Deprecation policy In-Reply-To: <4EA560E3.8060307@gmail.com> References: <4EA560E3.8060307@gmail.com> Message-ID: <4ED1178B.6030203@netwok.org> Hi, +1 to all Ezio said.
One specific remark: PendingDeprecationWarning could just become an alias of DeprecationWarning, but maybe there is code out there that relies on the distinction, and there is no real value in making it an alias (there is value in removing it altogether, but we can't do that, can we?). I don't see the need to deprecate PDW, except in documentation, and am -1 to the metaclass idea (no need). Cheers From merwok at netwok.org Sat Nov 26 17:53:02 2011 From: merwok at netwok.org (Éric Araujo) Date: Sat, 26 Nov 2011 17:53:02 +0100 Subject: [Python-Dev] PEP 402: Simplified Package Layout and Partitioning In-Reply-To: <20110811183114.701DF3A406B@sparrow.telecommunity.com> References: <4E43E9A6.7020608@netwok.org> <20110811183114.701DF3A406B@sparrow.telecommunity.com> Message-ID: <4ED1196E.8090505@netwok.org> Hi, Going through my email backlog. > Le 11/08/2011 20:30, P.J. Eby a écrit : >> At 04:39 PM 8/11/2011 +0200, Éric Araujo wrote: >>>> (By the way, both of these additions to the import protocol (i.e. the >>>> dynamically-added ``__path__``, and dynamically-created modules) >>>> apply recursively to child packages, using the parent package's >>>> ``__path__`` in place of ``sys.path`` as a basis for generating a >>>> child ``__path__``. This means that self-contained and virtual >>>> packages can contain each other without limitation, with the caveat >>>> that if you put a virtual package inside a self-contained one, it's >>>> gonna have a really short ``__path__``!) >>> I don't understand the caveat or its implications. >> Since each package's __path__ is the same length or shorter than its >> parent's by default, then if you put a virtual package inside a >> self-contained one, it will be functionally speaking no different >> than a self-contained one, in that it will have only one path >> entry. So, it's not really useful to put a virtual package inside a >> self-contained one, even though you can do it.
(Apart from it >> letting you avoid a superfluous __init__ module, assuming it's indeed >> superfluous.) I still don't understand why this matters or what negative effects it could have on code, but I'm fine with not understanding. I'll trust that people writing or maintaining import-related tools will agree or complain about that item. >>> I'll just regret that it's not possible to provide a module docstring >>> to inform that this is a namespace package used for X and Y. >> It *is* possible - you'd just have to put it in a "zc.py" file. IOW, >> this PEP still allows "namespace-defining packages" to exist, as was >> requested by early commenters on PEP 382. It just doesn't *require* >> them to exist in order for the namespace contents to be importable. That's quite cool. I guess such a namespace-defining module (zc.py here) would be importable, right? Also, would it cause worse performance for other zc.* packages than if there were no zc.py? >>> This was probably said on import-sig, but here I go: yet another import >>> artifact in the sys module! I hope we get ImportEngine in 3.3 to clean >>> up all this. >> Well, I rather *like* having them there, personally, vs. having to >> learn yet another API, but oh well, whatever. Agreed with "whatever" :) I just like to grunt sometimes. >> AFAIK, ImportEngine isn't going to do away with the need for the >> global ones to live somewhere, Yep, but as Nick replied, at least we'll gain one structure to rule them all. >>> Let's imagine my application Spam has a namespace spam.ext for plugins. >>> To use a custom directory where plugins are stored, or a zip file with >>> plugins (I don't use eggs, so let me talk about zip files here), I'd >>> have to call sys.path.append *and* pkgutil.extend_virtual_paths? >> As written in the current proposal, yes.
There was some discussion >> on Python-Dev about having this happen automatically, and I proposed >> that it could be done by making virtual packages' __path__ attributes >> an iterable proxy object, rather than a list: That sounds a bit too complicated. What about just having pkgutil.extend_virtual_paths call sys.path.append? For maximum flexibility, extend_virtual_paths could have an argument to avoid calling sys.path.append. >>> Besides, putting data files in a Python package is held very poorly by >>> some (mostly people following the File Hierarchy Standard), >> ISTM that anybody who thinks that is being inconsistent in >> considering the Python code itself to not be a "data file" by that >> same criterion... especially since one of the more common uses for >> such "data" files are for e.g. HTML templates (which usually contain >> some sort of code) or GUI resources (which are pretty tightly bound >> to the code). A good example is documentation: Having a unique location (/usr/share/doc) for all installed software makes my life easier. Another example is JavaScript files used with HTML documents, such as jQuery: Debian recently split the jQuery file out of their Sphinx package, so that there is only one library installed that all packages can use and that can be updated and fixed once for all. (I'm simplifying; there can be multiple versions of libraries, but not multiple copies. I'll stop here; I'm not one of the authors of the Filesystem Hierarchy Standard, and I'll rant against package_data in distutils mailing lists :) >>> A pure virtual package having no source file, I think it should have no >>> __file__ at all. Antoine and someone else thought likewise (I can find the link if you want); do you consider it consensus enough to update the PEP?
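On the data-files point above, the stdlib's `pkgutil.get_data()` (present since Python 2.6) is the layout-agnostic way to read a resource shipped inside a package, which is what makes the HTML-template case workable wherever the package lives; the package and file names below are invented for the demo:

```python
import os
import pkgutil
import sys
import tempfile

# Build a throwaway package with a bundled HTML template.
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "spampkg")
os.makedirs(pkg_dir)
open(os.path.join(pkg_dir, "__init__.py"), "w").close()
with open(os.path.join(pkg_dir, "template.html"), "w") as f:
    f.write("<p>hello</p>")

sys.path.insert(0, root)

# get_data() asks the package's loader for the resource, so the same
# call works whether the package is a plain directory, a zip file, etc.
data = pkgutil.get_data("spampkg", "template.html")
print(data)  # b'<p>hello</p>'
```

This is orthogonal to the FHS argument about *where* such files should be installed; it only ensures code does not hardcode filesystem paths to reach them.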
Regards From benjamin at python.org Sat Nov 26 20:36:25 2011 From: benjamin at python.org (Benjamin Peterson) Date: Sat, 26 Nov 2011 13:36:25 -0600 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: <52625A45-0613-43DE-9892-2EB6DFA635C2@gmail.com> References: <4EB94F97.6020002@jcea.es> <9bf460d39f263e856f6ff5042f28dfc6@netwok.org> <4ED04C8B.8000502@jcea.es> <52625A45-0613-43DE-9892-2EB6DFA635C2@gmail.com> Message-ID: 2011/11/25 Raymond Hettinger : > To me, it would make more sense to split the file into a Misc/NEWS3.2 and > Misc/NEWS3.3 much as we've done with whatsnew. That would make merging a > piece of cake and would avoid adding a parser (and its idiosyncrasies) to the > toolchain. Would we not add fixes from 3.2, which were ported to 3.3, to the NEWS3.3 file then? -- Regards, Benjamin From zbyszek at in.waw.pl Sat Nov 26 18:54:13 2011 From: zbyszek at in.waw.pl (Zbigniew Jędrzejewski-Szmek) Date: Sat, 26 Nov 2011 18:54:13 +0100 Subject: [Python-Dev] ImportError: No module named multiarray (is back) In-Reply-To: <4ECD1D31.7080802@netwok.org> References: <4ECBFF19.8080100@in.waw.pl> <4ECD1D31.7080802@netwok.org> Message-ID: <4ED127C5.1060004@in.waw.pl> Hi, I apologize in advance for the length of this mail.

sys.path
========

When a script or a module is executed by invoking python with proper arguments, sys.path is extended. When a path to a script is given, the directory containing the script is prepended. When '-m' or '-c' is used, $CWD is prepended. This is documented in http://docs.python.org/dev/using/cmdline.html, so far ok. sys.path and $PYTHONPATH are like $PATH -- if you can convince someone to put a directory under your control in any of them, you can execute code as this someone. Therefore, sys.path is dangerous and important. Unfortunately, sys.path manipulations are only described very briefly, and without any commentary, in the on-line documentation. python(1) manpage doesn't even mention them.
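The prepending behaviour described above is easy to observe from a small probe (the temporary file names here are arbitrary):

```python
import os
import subprocess
import sys
import tempfile

probe = "import sys; print(sys.path[0])"

# Case 1: running a script file -- the script's directory is prepended.
with tempfile.TemporaryDirectory() as tmp:
    script = os.path.join(tmp, "probe.py")
    with open(script, "w") as f:
        f.write(probe)
    script_path0 = subprocess.check_output(
        [sys.executable, script], text=True).strip()
    # realpath() on both sides tolerates symlinked temp directories.
    same_dir = os.path.realpath(script_path0) == os.path.realpath(tmp)

# Case 2: python -c -- the current directory ('' by convention) is prepended.
cmd_path0 = subprocess.check_output(
    [sys.executable, "-c", probe], text=True).strip()

print(same_dir)  # True
print(repr(cmd_path0))
```

(On Python 3.11+, passing -P or setting PYTHONSAFEPATH suppresses both additions, which is essentially the remedy this thread goes on to ask for.)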
The problem: each of the commands below is insecure:

    python /tmp/script.py (when script.py is safe by itself)

('/tmp' is added to sys.path, so an attacker can override any module imported in /tmp/script.py by writing to /tmp/module.py)

    cd /tmp && python -mtimeit -s 'import numpy' 'numpy.test()'

(UNIX users are accustomed to being able to safely execute programs in any directory, e.g. ls, or gcc, or something. Here '' is added to sys.path, so it is not secure to run python in other-user-writable directories.)

    cd /tmp/ && python -c 'import numpy; print(numpy.version.version)'

(The same as above, '' is added to sys.path.)

    cd /tmp && python

(The same as above). IMHO, if this (long-lived) behaviour is necessary, it should at least be prominently documented. Also in the manpage.

Prepending realpath(dirname(scriptname))
========================================

Before adding a directory to sys.path as described above, Python actually runs os.path.realpath over it. This means that if the path to a script given on the commandline is actually a symlink, the directory containing the real file will be the one prepended. This behaviour is not really documented (the documentation only says "the directory containing that file is added to the start of sys.path"), but since the integrity of sys.path is so important, it should be, IMHO. Using realpath instead of the (expected) path specified by the user breaks imports of non-pure-python (mixed .py and .so) modules from modules executed as scripts on Debian. This is because Debian installs architecture-independent python files in /usr/share/pyshared, and symlinks those files into /usr/lib/pymodules/pythonX.Y/. The architecture-dependent .so and python-version-dependent .pyc files are installed in /usr/lib/pymodules/pythonX.Y/. When a script, e.g. /usr/lib/pymodules/pythonX.Y/script.py, is executed, the directory /usr/share/pyshared is prepended to sys.path. If the script tries to import a module which has architecture-dependent parts (e.g.
numpy) it first sees the incomplete module in /usr/share/pyshared and fails. This happens for example in parallel python (http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=620551) and recently when packaging CellProfiler for Debian. Again, if this is on purpose, it should be documented.

PEP 395 (Qualified Names for Modules)
=====================================

PEP 395 proposes another sys.path manipulation. When running a script, the directory tree will be walked upwards as long as there are __init__.py files, and then the first directory without one will be added. This is of course a fine idea, but it makes a scenario which was previously safe insecure. More precisely, when executing a script in a directory whose parent is writable by other users, that parent directory will be added to sys.path. So the (safe) operation of downloading an archive with a package, unzipping it in /tmp, changing into the created directory, checking that the script doesn't do anything bad, and running a script is now insecure if there is an __init__.py in the archive root. I guess that it would be useful to have an option to turn off those sys.path manipulations. Zbyszek From ncoghlan at gmail.com Sun Nov 27 01:28:33 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 27 Nov 2011 10:28:33 +1000 Subject: [Python-Dev] ImportError: No module named multiarray (is back) In-Reply-To: <4ED127C5.1060004@in.waw.pl> References: <4ECBFF19.8080100@in.waw.pl> <4ECD1D31.7080802@netwok.org> <4ED127C5.1060004@in.waw.pl> Message-ID: 2011/11/27 Zbigniew Jędrzejewski-Szmek : > I guess that it would be useful to have an option to turn off those sys.path > manipulations. Yeah, I recently proposed exactly that (a '--nopath0' option) in http://bugs.python.org/issue13475 (that issue also proposes a "-p/--path0" option to *override* the automatic initialisation of sys.path[0] with a different value).
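Whether a symlinked script ends up with the link's directory or the real file's directory in sys.path[0] has varied across interpreter versions, so rather than assert one policy, this probe (POSIX-only, names invented) simply reports what the running interpreter does:

```python
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    real_dir = os.path.join(tmp, "real")
    link_dir = os.path.join(tmp, "links")
    os.makedirs(real_dir)
    os.makedirs(link_dir)

    # The real script lives in real/, a symlink to it lives in links/.
    target = os.path.join(real_dir, "probe.py")
    with open(target, "w") as f:
        f.write("import sys; print(sys.path[0])")
    link = os.path.join(link_dir, "probe.py")
    os.symlink(target, link)

    path0 = subprocess.check_output([sys.executable, link], text=True).strip()
    rd = os.path.realpath(real_dir)
    ld = os.path.realpath(link_dir)
    resolved = os.path.realpath(path0)
    if resolved == rd:
        print("symlink resolved: sys.path[0] is the real script's directory")
    else:
        print("symlink kept: sys.path[0] is the symlink's directory")
```

Under the 2011 behaviour complained about here, the first branch fires; either way, the probe shows which directory an attacker would have to control.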
I may still make this general question part of the proposals in PEP 395, though, since it's fairly closely related to many of the issues already discussed by that PEP and is something that will need to be thought out fairly well to make sure it achieves the objective of avoiding cross-user interference. There are limits to what we can do by default due to backwards compatibility concerns, but it should at least be possible to provide better tools to help manage the problem. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From petri at digip.org Sun Nov 27 20:33:29 2011 From: petri at digip.org (Petri Lehtinen) Date: Sun, 27 Nov 2011 21:33:29 +0200 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: <4ED04C8B.8000502@jcea.es> References: <4EB94F97.6020002@jcea.es> <9bf460d39f263e856f6ff5042f28dfc6@netwok.org> <4ED04C8B.8000502@jcea.es> Message-ID: <20111127193329.GA2219@ihaa> Jesus Cea wrote: > On 12/11/11 16:56, Éric Araujo wrote: > > Ezio and I chatted a bit about this on IRC and he may try to write > > a Python parser for Misc/NEWS in order to write a fully automated > > merge tool. > > Anything new on this front? :-) I don't see what the problem really is. The most common case is to have one conflicting file with one conflict. I'm completely fine with removing the conflict markers and possibly moving my own entry above the other entries.
Petri From jcea at jcea.es Mon Nov 28 05:21:18 2011 From: jcea at jcea.es (Jesus Cea) Date: Mon, 28 Nov 2011 05:21:18 +0100 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: <4ED1141E.9050604@netwok.org> References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> <4ECEFFD4.5030601@jcea.es> <4ED1141E.9050604@netwok.org> Message-ID: <4ED30C3E.4090700@jcea.es> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 26/11/11 17:30, Éric Araujo wrote: >> That is what I want to do, but I need to know that it is safe to >> do so (from the "Create Patch" perspective). > I don't understand "safe". "Safe", in this context, means "when clicking 'create patch' the created patch ONLY includes my code in the development branch, EVEN if the branch merged-in the original mainline branch several times". >>>> 6. What would be posted in the bug tracker when she does a >>>> new "Create Patch"?. Only her changes, her changes SINCE the >>>> merge, her changes plus merged changes or something else?. >>> The diff would be equivalent to "hg diff -r base" and would >>> contain all the changes she did to add the bug fix or feature. >>> Merging only makes sure that the computed diff does not appear >>> to touch unrelated files, IOW that it applies cleanly. >>> (Barring bugs in Mercurial-Roundup integration, we have a few >>> of these in the metatracker.) >> So you are saying that "Create patch" will ONLY get the >> differences in the development branch and not the changes brought >> in from the merge?. > I don't really understand how you understood what I said :( The > merge brings in changes from default; when you diff your branch > against default later, it will not show the changes brought by the > merge, but it will apply cleanly on top of default. But I am not doing that diff, it is the tracker who is doing that diff. I agree that the following procedure would work.
In fact it is the way I used to work, before publishing my repository and using "create patch" in the tracker:

1. Branch.
2. Develop in the branch.
3. Merge changes from mainline INTO the branch.
4. Jump to 2 as many times as needed.
5. When done:
5.1. Do a final merge from mainline to branch.
5.2. Do a DIFF from branch to mainline.

After 5.2, the diff shows only the code I have patched in the branch. PERFECT. But I don't know if the tracker does that or not. Without the final merge, a diff between my branch and mainline tips will show my changes PLUS the "undoing" of any change in mainline that I didn't merge in my branch. Since "create patch" (in the tracker) doesn't compare against the tip of mainline (at least not in a trivial way), I guess it is comparing against the BASE of the branch. That is ok... as far as I don't merge changes from mainline to the branch. If so, when diffing the branch tip from the branch base it will show all changes in the branch, both my code and the code imported via merges. So, in this context, if the tracker "create patch" diffs from BASE, it is not "safe" to merge changes from mainline to the branch, because if so "create patch" would include code not related to my work. I could try actually merging and clicking "create patch" but if the result is unpleasant my repository would be in a state "not compatible" with the "create patch" tool in the tracker. I rather prefer to avoid that, if somebody knows the answer. If nobody can tell, experimentation would be the only option, although any experimental result would be suspect because the hooks can be changed later or you may be hitting some strange corner case. Another approach, which I have been taking so far, is to avoid merging from mainline while developing in a branch, just in case. But I am hitting now a situation where there are changes in mainline that overlap my effort, so I am quite forced to merge that code in, instead of dealing with hugely divergent code in a month.
So, I have avoid to merge in the past and was happy, but I would need to merge now (changes from mainline) and I am unsure on what is going to happen with the "create patch" option in the tracker. Anybody knows the mercurial command used to implement "create patch"?. - -- Jesus Cea Avion _/_/ _/_/_/ _/_/_/ jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/_/_/_/ . _/_/ _/_/ _/_/ _/_/ _/_/ "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ "El amor es poner tu felicidad en la felicidad de otro" - Leibniz -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQCVAwUBTtMMPplgi5GaxT1NAQKzhQP8DzAql1PAJkyEROsWl8CgPpW9ie8jNM1V +K5jLx/dCukzFXrZ2Ba1Tu5IFYFZxH7Wj4rg4sQ47zlKBi6gQELgtGV+bCYPAEt/ WQo7uGUCj+xLmBKXuQQlXrl1pNl9XhlufTNXIzW34o7SPKMEQy7N7uUxpxgwV8JX KoJoYAbiH88= =9lYm -----END PGP SIGNATURE----- From ncoghlan at gmail.com Mon Nov 28 06:06:56 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 28 Nov 2011 15:06:56 +1000 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: <4ED30C3E.4090700@jcea.es> References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> <4ECEFFD4.5030601@jcea.es> <4ED1141E.9050604@netwok.org> <4ED30C3E.4090700@jcea.es> Message-ID: On Mon, Nov 28, 2011 at 2:21 PM, Jesus Cea wrote: > Since "create patch" (in the tracker) doesn't compare against the tip > of mainline (at least not in a trivial way), I guess it is comparing > against the BASE of the branch. That is ok... as far as I don't merge > changes from mainline to the branch. If so, when diffing the branch > tip from the branch base it will show all changes in the branch, both > my code and the code imported via merges. 
> > So, in this context, if the tracker "create patch" diff from BASE, it > is not "safe" to merge changes from mainline to the branch, because if > so "create patch" would include code not related to my work. No, "Create Patch" is smarter than that. What it does (or tries to do) is walk back through your branch history, trying to find the last point where you merged in a changeset that it recognises as coming from the main CPython repo. It then uses that revision of the CPython repo as the basis for the diff. So if you're just working on a feature branch, periodically pulling from hg.python.org/cpython and merging from default, then it should all work fine. Branches-of-branches (i.e. where you've merged from CPython via another named branch in your local repo) seems to confuse it though - I plan to change my workflow for those cases to merge each branch from the same version of default before merging from the other branch. > Anybody knows the mercurial command used to implement "create patch"?. It's not a single command - it's a short script MvL wrote that uses the Mercurial API to traverse the branch history and find an appropriate revision to diff against. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From raymond.hettinger at gmail.com Mon Nov 28 10:30:53 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Mon, 28 Nov 2011 01:30:53 -0800 Subject: [Python-Dev] Deprecation policy In-Reply-To: <4EA560E3.8060307@gmail.com> References: <4EA560E3.8060307@gmail.com> Message-ID: <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> On Oct 24, 2011, at 5:58 AM, Ezio Melotti wrote: > Hi, > our current deprecation policy is not so well defined (see e.g.
[0]), and it seems to me that it's something like: > 1) deprecate something and add a DeprecationWarning; > 2) forget about it after a while; > 3) wait a few versions until someone notices it; > 4) actually remove it; > > I suggest we follow the following process: > 1) deprecate something and add a DeprecationWarning; > 2) decide how long the deprecation should last; > 3) use the deprecated-remove[1] directive to document it; > 4) add a test that fails after the update so that we remember to remove it[2]; How about we agree that actually removing things is usually bad for users. It will be best if the core devs had a strong aversion to removal. Instead, it is best to mark APIs as obsolete with a recommendation to use something else instead. There is rarely a need to actually remove support for something in the standard library. That may serve a notion of tidiness or somesuch but in reality it is a PITA for users, making it more difficult to upgrade python versions and making it more difficult to use published recipes. Raymond -------------- next part -------------- An HTML attachment was scrubbed... URL: From catch-all at masklinn.net Mon Nov 28 10:53:24 2011 From: catch-all at masklinn.net (Xavier Morel) Date: Mon, 28 Nov 2011 10:53:24 +0100 Subject: [Python-Dev] Deprecation policy In-Reply-To: <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> Message-ID: On 2011-11-28, at 10:30 , Raymond Hettinger wrote: > On Oct 24, 2011, at 5:58 AM, Ezio Melotti wrote: > How about we agree that actually removing things is usually bad for users. > It will be best if the core devs had a strong aversion to removal. > Instead, it is best to mark APIs as obsolete with a recommendation to use something else instead. > There is rarely a need to actually remove support for something in the standard library.
The problem with "deprecating and not removing" (and worse, only informally deprecating by leaving a note in the documentation) is that you end up with zombie APIs: there are tons of tutorials & such on the web talking about them, they're not maintained, nobody really cares about them (but users who found them via Google) and they're all around harmful. It's the current state of many JDK 1.0 and 1.1 APIs and it's dreadful, most of them are more than a decade out of date, sometimes retrofitted for new interfaces (but APIs using them usually are *not* fixed, keeping them in their state of partial death), sometimes still *taught*, all of that because they're only informally deprecated (at best, sometimes not even that as other APIs still depend on them). It's bad for (language) users because they use outdated and partially unmaintained (at least in that it's not improved) APIs and it's bad for (language) maintainers in that once in a while they still have to dive into those things and fix bugs cropping up without the better understanding they have from the old APIs or the cleaner codebase they got from it. Not being too eager to kill APIs is good, but giving rise to this kind of living-dead APIs is no better in my opinion, even more so since Python has lost one of the few tools it had to manage them (as DeprecationWarning was silenced by default). Both choices are harmful to users, but in the long run I do think zombie APIs are worse. 
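Xavier's aside about DeprecationWarning being silenced by default is easy to demonstrate with the stdlib warnings module; a minimal sketch (old_api/new_api are made-up names, not anything in the stdlib):

```python
import warnings

def old_api():
    # A deprecated function that still works, but emits a warning.
    warnings.warn("old_api() is deprecated; use new_api() instead",
                  DeprecationWarning, stacklevel=2)
    return 42

# DeprecationWarning is filtered out by default, so a plain call shows
# nothing. Tests and developers have to opt back in explicitly:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always", DeprecationWarning)
    result = old_api()

print(result)       # 42
print(len(caught))  # 1
print(issubclass(caught[0].category, DeprecationWarning))  # True
```

The same effect is available from the command line via `python -W default`, which is how users who want to see the warnings turn them back on.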
From ncoghlan at gmail.com Mon Nov 28 13:06:48 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 28 Nov 2011 22:06:48 +1000 Subject: [Python-Dev] Deprecation policy In-Reply-To: References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> Message-ID: On Mon, Nov 28, 2011 at 7:53 PM, Xavier Morel wrote: > Not being too eager to kill APIs is good, but giving rise to this kind of living-dead APIs is no better in my opinion, even more so since Python has lost one of the few tools it had to manage them (as DeprecationWarning was silenced by default). Both choices are harmful to users, but in the long run I do think zombie APIs are worse. But restricting ourselves to cleaning out such APIs every 10 years or so with a major version bump is also a potentially viable option. So long as the old APIs are fully tested and aren't actively *harmful* to creating reasonable code (e.g. optparse) then refraining from killing them before the (still hypothetical) 4.0 is reasonable. OTOH, genuinely problematic APIs that ideally wouldn't have survived even the 3.x transition (e.g. the APIs that the 3.x subprocess module inherited from the 2.x commands module that run completely counter to the design principles of the subprocess module) should probably still be considered for removal as soon as is reasonable after a superior alternative is made available. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia From steve at pearwood.info Mon Nov 28 13:14:46 2011 From: steve at pearwood.info (Steven D'Aprano) Date: Mon, 28 Nov 2011 23:14:46 +1100 Subject: [Python-Dev] Deprecation policy In-Reply-To: References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> Message-ID: <4ED37B36.7080208@pearwood.info> Xavier Morel wrote: > Not being too eager to kill APIs is good, but giving rise to this kind of > living-dead APIs is no better in my opinion, even more so since Python has > lost one of the few tools it had to manage them (as DeprecationWarning was > silenced by default). Both choices are harmful to users, but in the long > run I do think zombie APIs are worse. I would much rather have my code relying on "zombie" APIs and keep working, than to have that code suddenly stop working when the zombie is removed. Working code should stay working. Unless the zombie is actively harmful, what's the big deal if there is a newer, better way of doing something? If it works, and if it's fast enough, why force people to "fix" it? It is a good thing that code or tutorials from Python 1.5 still (mostly) work, even when there are newer, better ways of doing something. I see a lot of newbies, and the frustration they suffer when they accidentally (carelessly) try following 2.x instructions in Python3, or vice versa, is great. It's bad enough (probably unavoidable) that this happens during a major transition like 2 to 3, without it also happening during minor releases. Unless there is a good reason to actively remove an API, it should stay as long as possible. "I don't like this and it should go" is not a good reason, nor is "but there's a better way you should use". When in doubt, please don't break people's code. 
-- Steven From exarkun at twistedmatrix.com Mon Nov 28 13:45:24 2011 From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com) Date: Mon, 28 Nov 2011 12:45:24 -0000 Subject: [Python-Dev] Deprecation policy In-Reply-To: <4ED37B36.7080208@pearwood.info> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <4ED37B36.7080208@pearwood.info> Message-ID: <20111128124524.2308.1242187804.divmod.xquotient.325@localhost.localdomain> On 12:14 pm, steve at pearwood.info wrote: >Xavier Morel wrote: >>Not being too eager to kill APIs is good, but giving rise to this kind >>of >>living-dead APIs is no better in my opinion, even more so since Python >>has >>lost one of the few tools it had to manage them (as DeprecationWarning >>was >>silenced by default). Both choices are harmful to users, but in the >>long >>run I do think zombie APIs are worse. > >I would much rather have my code relying on "zombie" APIs and keep >working, than to have that code suddenly stop working when the zombie >is removed. Working code should stay working. Unless the zombie is >actively harmful, what's the big deal if there is a newer, better way >of doing something? If it works, and if it's fast enough, why force >people to "fix" it? > >It is a good thing that code or tutorials from Python 1.5 still >(mostly) work, even when there are newer, better ways of doing >something. I see a lot of newbies, and the frustration they suffer when >they accidentally (carelessly) try following 2.x instructions in >Python3, or vice versa, is great. It's bad enough (probably >unavoidable) that this happens during a major transition like 2 to 3, >without it also happening during minor releases. > >Unless there is a good reason to actively remove an API, it should stay >as long as possible. "I don't like this and it should go" is not a good >reason, nor is "but there's a better way you should use". When in >doubt, please don't break people's code. 
+1 Jean-Paul From python-dev at masklinn.net Mon Nov 28 13:56:31 2011 From: python-dev at masklinn.net (Xavier Morel) Date: Mon, 28 Nov 2011 13:56:31 +0100 Subject: [Python-Dev] Deprecation policy In-Reply-To: References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> Message-ID: On 2011-11-28, at 13:06 , Nick Coghlan wrote: > On Mon, Nov 28, 2011 at 7:53 PM, Xavier Morel wrote: >> Not being too eager to kill APIs is good, but giving rise to this kind of living-dead APIs is no better in my opinion, even more so since Python has lost one of the few tools it had to manage them (as DeprecationWarning was silenced by default). Both choices are harmful to users, but in the long run I do think zombie APIs are worse. > > But restricting ourselves to cleaning out such APIs every 10 years or > so with a major version bump is also a potentially viable option. > > So long as the old APIs are fully tested and aren't actively *harmful* > to creating reasonable code (e.g. optparse) then refraining from > killing them before the (still hypothetical) 4.0 is reasonable. Sure, the original proposal leaves the deprecation timelines as TBD and I hope I did not give the impression of setting up a timeline (that was not the intention). Ezio's original proposal could simply be implemented by having the second step ("decide how long the deprecation should last") default to "the next major release", I don't think that goes against his proposal, and in case APIs are actively harmful (e.g. very hard to use correctly) the deprecation timeline can be accelerated specifically for that case. 
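Steps 1 and 2 of Ezio's proposal (emit the warning, and decide how long the deprecation lasts) could be bundled into one small helper; a hypothetical sketch, not an actual stdlib API — the decorator, its message format, and old_join/new_join are all invented for illustration:

```python
import functools
import warnings

def deprecated(since, remove_in, use_instead=None):
    """Mark a callable as deprecated in `since`, scheduled for removal in `remove_in`."""
    def decorator(func):
        msg = "%s() is deprecated since %s and scheduled for removal in %s" % (
            func.__name__, since, remove_in)
        if use_instead:
            msg += "; use %s instead" % use_instead
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(msg, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        # Machine-readable schedule, so a tool (or a test) can find overdue APIs.
        wrapper.__deprecated__ = (since, remove_in)
        return wrapper
    return decorator

@deprecated(since="3.2", remove_in="4.0", use_instead="new_join()")
def old_join(parts):
    return ",".join(parts)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always", DeprecationWarning)
    joined = old_join(["a", "b"])

print(joined)                   # a,b
print(old_join.__deprecated__)  # ('3.2', '4.0')
```

Recording the schedule on the function itself is one way to make the "decide how long" step something the code carries around, rather than a note that gets lost.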
From petri at digip.org Mon Nov 28 14:36:03 2011 From: petri at digip.org (Petri Lehtinen) Date: Mon, 28 Nov 2011 15:36:03 +0200 Subject: [Python-Dev] Deprecation policy In-Reply-To: <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> Message-ID: <20111128133603.GD32511@p16> Raymond Hettinger wrote: > How about we agree that actually removing things is usually bad for users. > It will be best if the core devs had a strong aversion to removal. > Instead, it is best to mark APIs as obsolete with a recommendation to use > something else instead. > > There is rarely a need to actually remove support for something in > the standard library. > > That may serve a notion of tidiness or somesuch but in reality it is > a PITA for users, making it more difficult to upgrade python versions > and making it more difficult to use published recipes. I'm strongly against breaking backwards compatibility between minor versions (e.g. 3.2 and 3.3). If something is removed in this manner, the transition period should at least be very, very long. To me, deprecating an API means "this code will not get new features and possibly not even (big) fixes". It's important for the long term health of a project to be able to deprecate and eventually remove code that is no longer maintained. So, I think we should have a clear and working deprecation policy, and Ezio's suggestion sounds good to me. There should be a clean way to state, in both code and documentation, that something is deprecated, do not use in new code. Furthermore, deprecated code should actually be removed when the time comes, be it Python 4.0 or something else.
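Ezio's step 4 (a test that starts failing when the scheduled removal release arrives) needs little more than a version comparison; a sketch, with removal_due() as an invented helper name:

```python
import sys

def removal_due(remove_in, current=None):
    """Return True once the running interpreter has reached the
    scheduled removal version, e.g. removal_due((4, 0)) in a test."""
    if current is None:
        current = sys.version_info[:2]
    return tuple(current) >= tuple(remove_in)

# Inside a test suite this becomes a scheduled reminder instead of a
# forgotten note:
#     if removal_due((4, 0)):
#         self.fail("old_api() was scheduled for removal in 4.0; remove it")
print(removal_due((3, 3), current=(3, 2)))  # False: deprecation still running
print(removal_due((3, 3), current=(3, 4)))  # True: time to remove the API
```

The point is simply that "we remember to remove it" is enforced by the test runner rather than by anyone's memory.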
Petri From barry at python.org Mon Nov 28 14:53:44 2011 From: barry at python.org (Barry Warsaw) Date: Mon, 28 Nov 2011 08:53:44 -0500 Subject: [Python-Dev] Deprecation policy In-Reply-To: <20111128133603.GD32511@p16> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <20111128133603.GD32511@p16> Message-ID: <20111128085344.012bf983@resist.wooz.org> On Nov 28, 2011, at 03:36 PM, Petri Lehtinen wrote: >Raymond Hettinger wrote: >> That may serve a notion of tidyness or somesuch but in reality it is >> a PITA for users making it more difficult to upgrade python versions >> and making it more difficult to use published recipes. > >I'm strongly against breaking backwards compatiblity between minor >versions (e.g. 3.2 and 3.3). If something is removed in this manner, >the transition period should at least be very, very long. +1 It's even been a pain when porting between Python 2.x and 3. You'll see some things that were carried forward into Python 3.0 and 3.1 but are now gone in 3.2. So if you port from 2.7 -> 3.2 for example, you'll find a few things missing (the intobject.h aliases come to mind). For those reasons I think we need to be conservative about removing stuff. Once the world is all on Python 3 we can think about removing code. Cheers, -Barry From fuzzyman at voidspace.org.uk Mon Nov 28 15:33:50 2011 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Mon, 28 Nov 2011 14:33:50 +0000 Subject: [Python-Dev] Deprecation policy In-Reply-To: <20111128133603.GD32511@p16> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <20111128133603.GD32511@p16> Message-ID: <4ED39BCE.5040000@voidspace.org.uk> On 28/11/2011 13:36, Petri Lehtinen wrote: > Raymond Hettinger wrote: >> How about we agree that actually removing things is usually bad for users. >> It will be best if the core devs had a strong aversion to removal. 
>> Instead, it is best to mark APIs as obsolete with a recommendation to use >> something else instead. >> >> There is rarely a need to actually remove support for something in >> the standard library. >> >> That may serve a notion of tidyness or somesuch but in reality it is >> a PITA for users making it more difficult to upgrade python versions >> and making it more difficult to use published recipes. > I'm strongly against breaking backwards compatiblity between minor > versions (e.g. 3.2 and 3.3). If something is removed in this manner, > the transition period should at least be very, very long. We tend to see 3.2 -> 3.3 as a "major version" increment, but that's just Python's terminology. Nonetheless, our usual deprecation policy has been a *minimum* of deprecated for two releases and removed in a third (if at all) - which is about five years from deprecation to removal given our normal release rate. The water is muddied by Python 3, where we may deprecate something in Python 3.1 and remove in 3.3 (hypothetically) - but users may go straight from Python 2.7 to 3.3 and skip the deprecation period altogether... So we should be extra conservative about removals in Python 3 (for the moment at least). > To me, deprecating an API means "this code will not get new features > and possibly not even (big) fixes". It's important for the long term > health of a project to be able to deprecate and eventually remove code > that is no longer maintained. The issue is that deprecated code can still be a maintenance burden. Keeping deprecated APIs around can require effort just to keep them working and may actively *prevent* other changes / improvements. All the best, Michael Foord > So, I think we should have a clear and working deprecation policy, and > Ezio's suggestion sounds good to me. There should be a clean way to > state, in both code and documentation, that something is deprecated, > do not use in new code. 
> Furthermore, deprecated code should actually > be removed when the time comes, be it Python 4.0 or something else. > > Petri > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From anacrolix at gmail.com Mon Nov 28 15:38:18 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Tue, 29 Nov 2011 01:38:18 +1100 Subject: [Python-Dev] Deprecation policy In-Reply-To: <4ED37B36.7080208@pearwood.info> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <4ED37B36.7080208@pearwood.info> Message-ID: On Mon, Nov 28, 2011 at 11:14 PM, Steven D'Aprano wrote: > Xavier Morel wrote: > >> Not being too eager to kill APIs is good, but giving rise to this kind of >> living-dead APIs is no better in my opinion, even more so since Python has >> lost one of the few tools it had to manage them (as DeprecationWarning was >> silenced by default). Both choices are harmful to users, but in the long >> run I do think zombie APIs are worse. > > I would much rather have my code relying on "zombie" APIs and keep working, > than to have that code suddenly stop working when the zombie is removed. > Working code should stay working. Unless the zombie is actively harmful, > what's the big deal if there is a newer, better way of doing something? If > it works, and if it's fast enough, why force people to "fix" it? > > It is a good thing that code or tutorials from Python 1.5 still (mostly) > work, even when there are newer, better ways of doing something.
> I see a lot > of newbies, and the frustration they suffer when they accidentally > (carelessly) try following 2.x instructions in Python3, or vice versa, is > great. It's bad enough (probably unavoidable) that this happens during a > major transition like 2 to 3, without it also happening during minor > releases. > > Unless there is a good reason to actively remove an API, it should stay as > long as possible. "I don't like this and it should go" is not a good reason, > nor is "but there's a better way you should use". When in doubt, please > don't break people's code. This is a great argument. But people want to see new, bigger better things in the standard library, and the #1 reason cited against this is "we already have too much". I think that's where the issue lies: Either lots of cool nice stuff is added and supported (we all want our favourite things in the standard lib for this reason), and/or the old stuff lingers... I'm sure a while ago there was mention of a "staging" area for inclusion in the standard library. This attracts interest, stabilization, and quality from potential modules for inclusion. Better yet, the existing standard library ownership is somehow detached from the CPython core, so that changes enabling easier customization to fit other implementations (jython, pypy etc.) are possible. tl;dr old stuff blocks new hotness.
make room or separate standard library concerns from cpython > > > > -- > Steven > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com > From brett at python.org Mon Nov 28 16:11:57 2011 From: brett at python.org (Brett Cannon) Date: Mon, 28 Nov 2011 10:11:57 -0500 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: <20111125183746.45ab20b5@pitrou.net> References: <6F9490B9-DBEF-4CF6-89B7-26EA0C8A88E2@underboss.org> <20111125183746.45ab20b5@pitrou.net> Message-ID: On Fri, Nov 25, 2011 at 12:37, Antoine Pitrou wrote: > On Fri, 25 Nov 2011 12:37:59 -0500 > Brett Cannon wrote: > > On Thu, Nov 24, 2011 at 07:46, Nick Coghlan wrote: > > > > > On Thu, Nov 24, 2011 at 10:20 PM, Maciej Fijalkowski > > > > wrote: > > > > The problem is not with maintaining the modified directory. The > > > > problem was always things like changing interface between the C > > > > version and the Python version or introduction of new stuff that does > > > > not run on pypy because it relies on refcounting. I don't see how > > > > having a subrepo helps here. > > > > > > Indeed, the main thing that can help on this front is to get more > > > modules to the same state as heapq, io, datetime (and perhaps a few > > > others that have slipped my mind) where the CPython repo actually > > > contains both C and Python implementations and the test suite > > > exercises both to make sure their interfaces remain suitably > > > consistent (even though, during normal operation, CPython users will > > > only ever hit the C accelerated version). 
> > > > > > This not only helps other implementations (by keeping a Python version > > > of the module continuously up to date with any semantic changes), but > > > can help people that are porting CPython to new platforms: the C > > > extension modules are far more likely to break in that situation than > > > the pure Python equivalents, and a relatively slow fallback is often > > > going to be better than no fallback at all. (Note that ctypes based > > > pure Python modules *aren't* particularly useful for this purpose, > > > though - due to the libffi dependency, ctypes is one of the extension > > > modules most likely to break when porting). > > > > > > > And the other reason I plan to see this through before I die > > Uh! Any bad news? :/ Sorry, turn of phrase in English which didn't translate well. I just meant "when I get to it, which could quite possibly be a *long* time from now". This year has been absolutely insane for me personally (if people care, the details are shared on Google+ or you can just ask me), so I am just not promising anything for Python on a short timescale (although I'm still hoping the final details for bootstrapping importlib won't be difficult to work out so I can meet a personal deadline of PyCon). -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen at xemacs.org Mon Nov 28 16:19:50 2011 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Tue, 29 Nov 2011 00:19:50 +0900 Subject: [Python-Dev] Deprecation policy In-Reply-To: References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <4ED37B36.7080208@pearwood.info> Message-ID: <874nxop949.fsf@uwakimon.sk.tsukuba.ac.jp> Matt Joiner writes: > This is a great argument. But people want to see new, bigger better > things in the standard library, and the #1 reason cited against this > is "we already have too much". 
I think that's where the issue lies: > Either lots of cool nice stuff is added and supported (we all want our > favourite things in the standard lib for this reason), and or the old > stuff lingers... Deprecated features are pretty much irrelevant to the height of the bar for new features. The problem is that there are a limited number of folks doing long term maintenance of the standard library, and an essentially unlimited supply of one-off patches to add cool new features (not backed by a long term warranty of maintenance by the contributor). So deprecated features do add some burden of maintenance for the core developers, as Michael points out -- but removing *all* of them on short notice would not really make it possible to *add* features *in a maintainable way* any faster. > I'm sure a while ago there was mention of a "staging" area for > inclusion in the standard library. This attracts interest, > stabilization, and quality from potential modules for inclusion. But there's no particular reason to believe it will attract more contributors willing to do long-term maintenance, and *somebody* has to maintain the staging area. From solipsis at pitrou.net Mon Nov 28 16:46:23 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 28 Nov 2011 16:46:23 +0100 Subject: [Python-Dev] New features References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <4ED37B36.7080208@pearwood.info> <874nxop949.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <20111128164623.1fcc8835@pitrou.net> On Tue, 29 Nov 2011 00:19:50 +0900 "Stephen J. Turnbull" wrote: > > Deprecated features are pretty much irrelevant to the height of the > bar for new features. The problem is that there are a limited number > of folks doing long term maintenance of the standard library, and an > essentially unlimited supply of one-off patches to add cool new > features (not backed by a long term warranty of maintenance by the > contributor). 
Actually, we don't often get patches for new features. Many new features are implemented by core developers themselves. Regards Antoine. From stephen at xemacs.org Mon Nov 28 18:19:58 2011 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Tue, 29 Nov 2011 02:19:58 +0900 Subject: [Python-Dev] New features In-Reply-To: <20111128164623.1fcc8835@pitrou.net> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <4ED37B36.7080208@pearwood.info> <874nxop949.fsf@uwakimon.sk.tsukuba.ac.jp> <20111128164623.1fcc8835@pitrou.net> Message-ID: <8739d8p3k1.fsf@uwakimon.sk.tsukuba.ac.jp> Antoine Pitrou writes: > Actually, we don't often get patches for new features. Many new > features are implemented by core developers themselves. Right. That's not inconsistent with what I wrote, as long as would-be feature submitters realize what the standards for an acceptable feature patch are. From solipsis at pitrou.net Mon Nov 28 18:37:24 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 28 Nov 2011 18:37:24 +0100 Subject: [Python-Dev] Deprecation policy References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> Message-ID: <20111128183724.1a50a0ee@pitrou.net> Hi, On Mon, 28 Nov 2011 01:30:53 -0800 Raymond Hettinger wrote: > > On Oct 24, 2011, at 5:58 AM, Ezio Melotti wrote: > > > Hi, > > our current deprecation policy is not so well defined (see e.g. 
[0]), and it seems to me that it's something like: > > 1) deprecate something and add a DeprecationWarning; > > 2) forget about it after a while; > > 3) wait a few versions until someone notices it; > > 4) actually remove it; > > > > I suggest to follow the following process: > > 1) deprecate something and add a DeprecationWarning; > > 2) decide how long the deprecation should last; > > 3) use the deprecated-remove[1] directive to document it; > > 4) add a test that fails after the update so that we remember to remove it[2]; > > How about we agree that actually removing things is usually bad for users. > It will be best if the core devs had a strong aversion to removal. Well, it's not like we aren't already conservative in deprecating things. > Instead, it is best to mark APIs as obsolete with a recommendation to use something else instead. > There is rarely a need to actually remove support for something in the standard library. > That may serve a notion of tidyness or somesuch but in reality it is a PITA for users making it more difficult to upgrade python versions and making it more difficult to use published recipes. I agree with Xavier's answer that having recipes around which use outdated (and possibly inefficient/insecure/etc.) APIs is a nuisance. Also, deprecated-but-not-removed APIs come at a maintenance and support cost. Regards Antoine. 
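Earlier in the thread, Nick pointed to modules like heapq and datetime where a pure Python implementation lives alongside a C accelerator, with the test suite exercising both. The pattern itself is small enough to sketch; mylib/_mylib are placeholder names, but the try/except import at the bottom is the same shape Lib/heapq.py uses with _heapq:

```python
# mylib.py: pure Python reference implementation, always importable.
def crunch(values):
    """Sum of squares, as a stand-in for some hot-path function."""
    return sum(v * v for v in values)

# At the bottom of the module, shadow the slow definitions with the C
# accelerator when it is available. Other implementations (PyPy,
# Jython) and fresh ports simply keep the pure Python version above.
try:
    from _mylib import crunch  # hypothetical C extension module
except ImportError:
    pass

print(crunch([1, 2, 3]))  # 14
```

CPython users hit the accelerated version transparently, while the pure Python code stays continuously up to date with any semantic changes, which is exactly the benefit Nick describes.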
From jcea at jcea.es Mon Nov 28 21:47:55 2011 From: jcea at jcea.es (Jesus Cea) Date: Mon, 28 Nov 2011 21:47:55 +0100 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> <4ECEFFD4.5030601@jcea.es> <4ED1141E.9050604@netwok.org> <4ED30C3E.4090700@jcea.es> Message-ID: <4ED3F37B.4030103@jcea.es> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 28/11/11 06:06, Nick Coghlan wrote: >> So, in this context, if the tracker "create patch" diff from >> BASE, it is not "safe" to merge changes from mainline to the >> branch, because if so "create patch" would include code not >> related to my work. > > No, "Create Patch" is smarter than that. What it does (or tries to > do) is walk back through your branch history, trying to find the > last point where you merged in a changeset that it recognises as > coming from the main CPython repo. It then uses that revision of > the CPython repo as the basis for the diff. Oh, that sounds like quite the right way. Clever. > So if you're just working on a feature branch, periodically > pulling from hg.python.org/cpython and merging from default, then > it should all work fine. So, my original question is answered as "yes, you can merge in changes from mainline, and 'create patch' will work as it should". Good!! Thanks!!! >> Does anybody know the mercurial command used to implement "create >> patch"? > > It's not a single command - it's a short script MvL wrote that > uses the Mercurial API to traverse the branch history and find an > appropriate revision to diff against. Publishing it somewhere would be useful, I guess. This is a problem I have found in a few other projects. I can even see a modifier for "hg diff" in a future mercurial version :-). Could it be implemented as a command-line command using "revsets"? Propose a new revset to mercurial devels?
- -- Jesus Cea Avion _/_/ _/_/_/ _/_/_/ jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/_/_/_/ . _/_/ _/_/ _/_/ _/_/ _/_/ "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ "El amor es poner tu felicidad en la felicidad de otro" - Leibniz -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQCVAwUBTtPze5lgi5GaxT1NAQIT9gP+N4urbw7TgCWTa7EFZ4rjj7/o9f3aBq4I kYBnVZGmh98YqjHL0MzHhhu2a+G6pC/Zksf9CyIinPol4DJR8zGhBDIxo6SNIja+ QsSyQ7DhBWkSwKZAKqBNSdBBH0fu/DpdmNv6fP0s04Ju6sllvHAbEN/oj9zWqxWM KjAMzrgPcSA= =zViH -----END PGP SIGNATURE----- From ncoghlan at gmail.com Mon Nov 28 22:00:28 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 29 Nov 2011 07:00:28 +1000 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: <4ED3F37B.4030103@jcea.es> References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> <4ECEFFD4.5030601@jcea.es> <4ED1141E.9050604@netwok.org> <4ED30C3E.4090700@jcea.es> <4ED3F37B.4030103@jcea.es> Message-ID: It's published as part of the tracker repo, although I'm not sure exactly where it lives. -- Nick Coghlan (via Gmail on Android, so likely to be more terse than usual) On Nov 29, 2011 6:50 AM, "Jesus Cea" wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 28/11/11 06:06, Nick Coghlan wrote: > >> So, in this context, if the tracker "create patch" diff from > >> BASE, it is not "safe" to merge changes from mainline to the > >> branch, because if so "create patch" would include code not > >> related to my work. > > > > No, "Create Patch" is smarter than that. What it does (or tries to > > do) is walk back through your branch history, trying to find the > > last point where you merged in a changeset that it recognises as > > coming from the main CPython repo. 
It then uses that revision of > > the CPython repo as the basis for the diff. > > Oh, that sounds quite the right way. Clever. > > > So if you're just working on a feature branch, periodically > > pulling from hg.python.org/cpython and merging from default, then > > it should all work fine. > > So, my original question is answered as "yes, you can merge in changes > from mainline, and 'create patch' will work as it should". > > Good!!. Thanks!!!. > > >> Anybody knows the mercurial command used to implement "create > >> patch"?. > > > > It's not a single command - it's a short script MvL wrote that > > uses the Mercurial API to traverse the branch history and find an > > appropriate revision to diff against. > > Publish out somewhere would be useful, I guess. This is a problem I > have found in a few other projects. I can see even a modifier for "hg > diff" for a future mercurial version :-). > > Could be implemented as a command line command using "revsets"?. > Propose a new revset to mercurial devels? > > - -- > Jesus Cea Avion _/_/ _/_/_/ _/_/_/ > jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ > jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/_/_/_/ > . 
_/_/ _/_/ _/_/ _/_/ _/_/ > "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ > "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ > "El amor es poner tu felicidad en la felicidad de otro" - Leibniz > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.10 (GNU/Linux) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ > > iQCVAwUBTtPze5lgi5GaxT1NAQIT9gP+N4urbw7TgCWTa7EFZ4rjj7/o9f3aBq4I > kYBnVZGmh98YqjHL0MzHhhu2a+G6pC/Zksf9CyIinPol4DJR8zGhBDIxo6SNIja+ > QsSyQ7DhBWkSwKZAKqBNSdBBH0fu/DpdmNv6fP0s04Ju6sllvHAbEN/oj9zWqxWM > KjAMzrgPcSA= > =zViH > -----END PGP SIGNATURE----- > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/ncoghlan%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From petri at digip.org Tue Nov 29 13:46:06 2011 From: petri at digip.org (Petri Lehtinen) Date: Tue, 29 Nov 2011 14:46:06 +0200 Subject: [Python-Dev] Deprecation policy In-Reply-To: <4ED39BCE.5040000@voidspace.org.uk> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <20111128133603.GD32511@p16> <4ED39BCE.5040000@voidspace.org.uk> Message-ID: <20111129124606.GC21346@p16> Michael Foord wrote: > We tend to see 3.2 -> 3.3 as a "major version" increment, but that's > just Python's terminology. Even though (in the documentation) Python's version number components are called major, minor, micro, releaselevel and serial, in this order? So when the minor version component is increased it's a major version increment? 
:) From petri at digip.org Tue Nov 29 13:58:39 2011 From: petri at digip.org (Petri Lehtinen) Date: Tue, 29 Nov 2011 14:58:39 +0200 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> <4ECEFFD4.5030601@jcea.es> <4ED1141E.9050604@netwok.org> <4ED30C3E.4090700@jcea.es> Message-ID: <20111129125839.GD21346@p16> Nick Coghlan wrote: > > So, in this context, if the tracker "create patch" diff from BASE, it > > is not "safe" to merge changes from mainline to the branch, because if > > so "create patch" would include code not related to my work. > > No, "Create Patch" is smarter than that. What it does (or tries to do) > is walk back through your branch history, trying to find the last > point where you merged in a changeset that it recognises as coming > from the main CPython repo. It then uses that revision of the CPython > repo as the basis for the diff. > > So if you're just working on a feature branch, periodically pulling > from hg.python.org/cpython and merging from default, then it should > all work fine. > > Branches-of-branches (i.e. where you've merged from CPython via > another named branch in your local repo) seems to confuse it though - > I plan to change my workflow for those cases to merge each branch from > the same version of default before merging from the other branch. The ancestor() revset could help for the confusion: http://stackoverflow.com/a/6744163/639276 In this case, the user would have to be able to tell the branch against which he wants the diff. 
Petri From solipsis at pitrou.net Tue Nov 29 13:59:48 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 29 Nov 2011 13:59:48 +0100 Subject: [Python-Dev] Deprecation policy References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <20111128133603.GD32511@p16> <4ED39BCE.5040000@voidspace.org.uk> <20111129124606.GC21346@p16> Message-ID: <20111129135948.23a2e5cc@pitrou.net> On Tue, 29 Nov 2011 14:46:06 +0200 Petri Lehtinen wrote: > Michael Foord wrote: > > We tend to see 3.2 -> 3.3 as a "major version" increment, but that's > > just Python's terminology. > > Even though (in the documentation) Python's version number components > are called major, minor, micro, releaselevel and serial, in this > order? So when the minor version component is increased it's a major > version increment? :) Well, that's why I think the version number components are not correctly named. I don't think any of the 2.x or 3.x releases can be called "minor" by any stretch of the word. A quick glance at http://docs.python.org/dev/whatsnew/index.html should be enough. Regards Antoine. From phd at phdru.name Tue Nov 29 13:53:58 2011 From: phd at phdru.name (Oleg Broytman) Date: Tue, 29 Nov 2011 16:53:58 +0400 Subject: [Python-Dev] Deprecation policy In-Reply-To: <20111129124606.GC21346@p16> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <20111128133603.GD32511@p16> <4ED39BCE.5040000@voidspace.org.uk> <20111129124606.GC21346@p16> Message-ID: <20111129125358.GA7839@iskra.aviel.ru> On Tue, Nov 29, 2011 at 02:46:06PM +0200, Petri Lehtinen wrote: > Michael Foord wrote: > > We tend to see 3.2 -> 3.3 as a "major version" increment, but that's > > just Python's terminology. > > Even though (in the documentation) Python's version number components > are called major, minor, micro, releaselevel and serial, in this > order? So when the minor version component is increased it's a major > version increment? 
:) When the major version component is increased it's a World Shattering Change, isn't it?! ;-) Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From barry at python.org Tue Nov 29 16:13:06 2011 From: barry at python.org (Barry Warsaw) Date: Tue, 29 Nov 2011 10:13:06 -0500 Subject: [Python-Dev] Deprecation policy In-Reply-To: <20111129135948.23a2e5cc@pitrou.net> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <20111128133603.GD32511@p16> <4ED39BCE.5040000@voidspace.org.uk> <20111129124606.GC21346@p16> <20111129135948.23a2e5cc@pitrou.net> Message-ID: <20111129101306.08a931b9@resist.wooz.org> On Nov 29, 2011, at 01:59 PM, Antoine Pitrou wrote: >Well, that's why I think the version number components are not >correctly named. I don't think any of the 2.x or 3.x releases can be >called "minor" by any stretch of the word. A quick glance at >http://docs.python.org/dev/whatsnew/index.html should be enough. Agreed, but it's too late to change it. I look at it as the attributes of the namedtuple being evocative of the traditional names for the digit positions, not the assignment of those positions to Python's semantics. -Barry From g.brandl at gmx.net Tue Nov 29 19:28:40 2011 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 29 Nov 2011 19:28:40 +0100 Subject: [Python-Dev] Deprecation policy In-Reply-To: <20111129124606.GC21346@p16> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <20111128133603.GD32511@p16> <4ED39BCE.5040000@voidspace.org.uk> <20111129124606.GC21346@p16> Message-ID: Am 29.11.2011 13:46, schrieb Petri Lehtinen: > Michael Foord wrote: >> We tend to see 3.2 -> 3.3 as a "major version" increment, but that's >> just Python's terminology. > > Even though (in the documentation) Python's version number components > are called major, minor, micro, releaselevel and serial, in this > order? 
So when the minor version component is increased it's a major > version increment? :) Yes. Georg From nadeem.vawda at gmail.com Tue Nov 29 23:59:11 2011 From: nadeem.vawda at gmail.com (Nadeem Vawda) Date: Wed, 30 Nov 2011 00:59:11 +0200 Subject: [Python-Dev] LZMA support has landed Message-ID: Hey folks, I'm pleased to announce that as of changeset 74d182cf0187, the standard library now includes support for the LZMA compression algorithm (as well as the associated .xz and .lzma file formats). The new lzma module has a very similar API to the existing bz2 module; it should serve as a drop-in replacement for most use cases. If anyone has any feedback or any suggestions for improvement, please let me know. I'd like to ask the owners of (non-Windows) buildbots to install the XZ Utils development headers so that they can build the new module. For Debian-derived Linux distros, the relevant package is named "liblzma-dev"; on Fedora I believe the correct package is "xz-devel". Binaries for OS X are available from the project's homepage at . Finally, a big thanks to everyone who contributed feedback during this module's development! Cheers, Nadeem From solipsis at pitrou.net Wed Nov 30 00:07:00 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 30 Nov 2011 00:07:00 +0100 Subject: [Python-Dev] cpython: Issue #6715: Add module for compression using the LZMA algorithm. References: Message-ID: <20111130000700.56e9c9bc@pitrou.net> On Tue, 29 Nov 2011 23:36:58 +0100 nadeem.vawda wrote: > http://hg.python.org/cpython/rev/74d182cf0187 > changeset: 73794:74d182cf0187 > user: Nadeem Vawda > date: Wed Nov 30 00:25:06 2011 +0200 > summary: > Issue #6715: Add module for compression using the LZMA algorithm. Congratulations, Nadeem! Regards Antoine. 
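The announcement above notes that the new lzma module mirrors the bz2 one-shot API; a minimal round-trip sketch (assuming Python 3.3+, where the module first shipped):

```python
# Round trip with the new lzma module, using the bz2-style one-shot API
# mentioned in the announcement (requires Python 3.3+).
import lzma

data = b"LZMA support has landed\n" * 50
packed = lzma.compress(data)            # produces an .xz container by default
assert lzma.decompress(packed) == data  # lossless round trip
print("compressed %d bytes down to %d" % (len(data), len(packed)))
```

For streaming use, lzma.LZMAFile plays the same role as bz2.BZ2File, so most bz2-based code can switch by changing the module name.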
From amauryfa at gmail.com Wed Nov 30 00:34:49 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 30 Nov 2011 00:34:49 +0100 Subject: [Python-Dev] LZMA support has landed In-Reply-To: References: Message-ID: 2011/11/29 Nadeem Vawda > I'm pleased to announce that as of changeset 74d182cf0187, the > standard library now includes support for the LZMA compression > algorithm Congratulations! > I'd like to ask the owners of (non-Windows) buildbots to install the > XZ Utils development headers so that they can build the new module. > And don't worry about Windows buildbots, they will automatically download the XZ prebuilt binaries from the usual place. (svn export http://svn.python.org/projects/external/xz-5.0.3) Next step: add support for tar.xz files (issue5689)... -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Nov 30 00:45:19 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 30 Nov 2011 09:45:19 +1000 Subject: [Python-Dev] Deprecation policy In-Reply-To: <20111129101306.08a931b9@resist.wooz.org> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <20111128133603.GD32511@p16> <4ED39BCE.5040000@voidspace.org.uk> <20111129124606.GC21346@p16> <20111129135948.23a2e5cc@pitrou.net> <20111129101306.08a931b9@resist.wooz.org> Message-ID: On Wed, Nov 30, 2011 at 1:13 AM, Barry Warsaw wrote: > On Nov 29, 2011, at 01:59 PM, Antoine Pitrou wrote: > >>Well, that's why I think the version number components are not >>correctly named. I don't think any of the 2.x or 3.x releases can be >>called "minor" by any stretch of the word. A quick glance at >>http://docs.python.org/dev/whatsnew/index.html should be enough. > > Agreed, but it's too late to change it. I look at it as the attributes of the > namedtuple being evocative of the traditional names for the digit positions, > not the assignment of those positions to Python's semantics.
Hmm, I wonder about that. Perhaps we could add a second set of names in parallel with the "major.minor.micro" names: "series.feature.maint". That would, after all, reflect what is actually said in practice: - release series: 2.x, 3.x (usually used in a form like "In the 3.x series, X is true. In 2.x, Y is true") - feature release: 2.7, 3.2, etc - maintenance release: 2.7.2, 3.2.1, etc I know I tend to call feature releases major releases and I'm far from alone in that. The discrepancy in relation to sys.version_info is confusing, but we can't make 'major' refer to a different field without breaking existing programs. But we *can* change: >>> sys.version_info sys.version_info(major=2, minor=7, micro=2, releaselevel='final', serial=0) to instead read: sys.version_info(series=2, feature=7, maint=2, releaselevel='final', serial=0) while allowing 'major' as an alias of 'series', 'minor' as an alias of 'feature' and 'micro' as an alias of 'maint'. Nothing breaks, and we'd have started down the path towards coherent terminology for the three fields in the version numbers (by accepting that 'major' has now become irredeemably ambiguous in the context of CPython releases). This idea of renaming all three fields has come up before, but I believe we got stuck on the question of what to call the first number (i.e. the one I'm calling the "series" here). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia From benjamin at python.org Wed Nov 30 01:00:45 2011 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 29 Nov 2011 19:00:45 -0500 Subject: [Python-Dev] Deprecation policy In-Reply-To: References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <20111128133603.GD32511@p16> <4ED39BCE.5040000@voidspace.org.uk> <20111129124606.GC21346@p16> <20111129135948.23a2e5cc@pitrou.net> <20111129101306.08a931b9@resist.wooz.org> Message-ID: 2011/11/29 Nick Coghlan : > On Wed, Nov 30, 2011 at 1:13 AM, Barry Warsaw wrote: >> On Nov 29, 2011, at 01:59 PM, Antoine Pitrou wrote: >> >>>Well, that's why I think the version number components are not >>>correctly named. I don't think any of the 2.x or 3.x releases can be >>>called "minor" by any stretch of the word. A quick glance at >>>http://docs.python.org/dev/whatsnew/index.html should be enough. >> >> Agreed, but it's too late to change it. I look at it as the attributes of the >> namedtuple being evocative of the traditional names for the digit positions, >> not the assignment of those positions to Python's semantics. > > Hmm, I wonder about that. Perhaps we could add a second set of names > in parallel with the "major.minor.micro" names: > "series.feature.maint". > > That would, after all, reflect what is actually said in practice: > - release series: 2.x, 3.x (usually used in a form like "In the 3.x > series, X is true. In 2.x, Y is true") > - feature release: 2.7, 3.2, etc > - maintenance release: 2.7.2, 3.2.1, etc > > I know I tend to call feature releases major releases and I'm far from > alone in that. The discrepancy in relation to sys.version_info is > confusing, but we can't make 'major' refer to a different field > without breaking existing programs.
But we *can* change: > >>>> sys.version_info > sys.version_info(major=2, minor=7, micro=2, releaselevel='final', serial=0) > > to instead read: > > sys.version_info(series=2, feature=7, maint=2, releaselevel='final', serial=0) > > while allowing 'major' as an alias of 'series', 'minor' as an alias of > 'feature' and 'micro' as an alias of 'maint'. Nothing breaks, and we'd > have started down the path towards coherent terminology for the three > fields in the version numbers (by accepting that 'major' has now > become irredeemably ambiguous in the context of CPython releases). > > This idea of renaming all three fields has come up before, but I > believe we got stuck on the question of what to call the first number > (i.e. the one I'm calling the "series" here). Can we drop this now? Too much effort for very little benefit. We call releases what we call releases. -- Regards, Benjamin From anacrolix at gmail.com Wed Nov 30 06:26:27 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Wed, 30 Nov 2011 16:26:27 +1100 Subject: [Python-Dev] Deprecation policy In-Reply-To: References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <20111128133603.GD32511@p16> <4ED39BCE.5040000@voidspace.org.uk> <20111129124606.GC21346@p16> <20111129135948.23a2e5cc@pitrou.net> <20111129101306.08a931b9@resist.wooz.org> Message-ID: I like this article on it: http://semver.org/ The following snippets being relevant here: Minor version Y (x.Y.z | x > 0) MUST be incremented if new, backwards compatible functionality is introduced to the public API. It MUST be incremented if any public API functionality is marked as deprecated. Major version X (X.y.z | X > 0) MUST be incremented if any backwards incompatible changes are introduced to the public API. With the exception of actually dropping stuff (however this only occurs in terms of modules, which hardly count in special cases?), Python already conforms to this standard very well. 
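Nick's aliasing idea quoted in this thread can be prototyped in a few lines. This is only a sketch for illustration (CPython's actual sys.version_info has no such aliases): it keeps the existing field names so nothing breaks, and layers the proposed names on top as read-only properties.

```python
from collections import namedtuple

# The existing fields stay as-is, so unpacking and named access keep working.
_VersionInfo = namedtuple(
    "version_info", "major minor micro releaselevel serial")

class version_info(_VersionInfo):
    # Proposed names from the thread, aliased onto the old fields.
    series = property(lambda self: self.major)   # 2.x / 3.x
    feature = property(lambda self: self.minor)  # 2.7, 3.2, ...
    maint = property(lambda self: self.micro)    # 2.7.2, 3.2.1, ...

vi = version_info(2, 7, 2, "final", 0)
assert vi.series == vi.major == 2
assert (vi.feature, vi.maint) == (vi.minor, vi.micro) == (7, 2)
```

Since the class is still a tuple subclass, existing code that indexes or unpacks sys.version_info would be unaffected; only the new attribute names are added.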
On Wed, Nov 30, 2011 at 11:00 AM, Benjamin Peterson wrote: > 2011/11/29 Nick Coghlan : >> On Wed, Nov 30, 2011 at 1:13 AM, Barry Warsaw wrote: >>> On Nov 29, 2011, at 01:59 PM, Antoine Pitrou wrote: >>> >>>>Well, that's why I think the version number components are not >>>>correctly named. I don't think any of the 2.x or 3.x releases can be >>>>called "minor" by any stretch of the word. A quick glance at >>>>http://docs.python.org/dev/whatsnew/index.html should be enough. >>> >>> Agreed, but it's too late to change it. I look at it as the attributes of the >>> namedtuple being evocative of the traditional names for the digit positions, >>> not the assignment of those positions to Python's semantics. >> >> Hmm, I wonder about that. Perhaps we could add a second set of names >> in parallel with the "major.minor.micro" names: >> "series.feature.maint". >> >> That would, after all, reflect what is actually said in practice: >> - release series: 2.x, 3.x (usually used in a form like "In the 3.x >> series, X is true. In 2.x, Y is true") >> - feature release: 2.7, 3.2, etc >> - maintenance release: 2.7.2, 3.2.1, etc >> >> I know I tend to call feature releases major releases and I'm far from >> alone in that. The discrepancy in relation to sys.version_info is >> confusing, but we can't make 'major' refer to a different field >> without breaking existing programs. But we *can* change: >> >>>>> sys.version_info >> sys.version_info(major=2, minor=7, micro=2, releaselevel='final', serial=0) >> >> to instead read: >> >> sys.version_info(series=2, feature=7, maint=2, releaselevel='final', serial=0) >> >> while allowing 'major' as an alias of 'series', 'minor' as an alias of >> 'feature' and 'micro' as an alias of 'maint'. Nothing breaks, and we'd >> have started down the path towards coherent terminology for the three >> fields in the version numbers (by accepting that 'major' has now >> become irredeemably ambiguous in the context of CPython releases).
>> >> This idea of renaming all three fields has come up before, but I >> believe we got stuck on the question of what to call the first number >> (i.e. the one I'm calling the "series" here). > > Can we drop this now? Too much effort for very little benefit. We call > releases what we call releases. > > > > -- > Regards, > Benjamin > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com From anacrolix at gmail.com Wed Nov 30 06:28:54 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Wed, 30 Nov 2011 16:28:54 +1100 Subject: [Python-Dev] LZMA support has landed In-Reply-To: References: Message-ID: Congrats, this is an excellent feature. On Wed, Nov 30, 2011 at 10:34 AM, Amaury Forgeot d'Arc wrote: > 2011/11/29 Nadeem Vawda >> >> I'm pleased to announce that as of changeset 74d182cf0187, the >> standard library now includes support for the LZMA compression >> algorithm > > > Congratulations! > >> >> I'd like to ask the owners of (non-Windows) buildbots to install the >> XZ Utils development headers so that they can build the new module. > > > And don't worry about Windows buildbots, they will automatically download > the XZ prebuilt binaries from the usual place. > (svn export http://svn.python.org/projects/external/xz-5.0.3) > > Next step: add support for tar.xz files (issue5689)...
> > -- > Amaury Forgeot d'Arc > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com > From meadori at gmail.com Wed Nov 30 06:50:26 2011 From: meadori at gmail.com (Meador Inge) Date: Tue, 29 Nov 2011 23:50:26 -0600 Subject: [Python-Dev] LZMA support has landed In-Reply-To: References: Message-ID: On Tue, Nov 29, 2011 at 4:59 PM, Nadeem Vawda wrote: > "liblzma-dev"; on Fedora I believe the correct package is "xz-devel". "xz-devel" is right. I just verified a build of the new module on a fresh F16 system. -- Meador From martin at v.loewis.de Wed Nov 30 09:01:55 2011 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 30 Nov 2011 09:01:55 +0100 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: <4ED3F37B.4030103@jcea.es> References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> <4ECEFFD4.5030601@jcea.es> <4ED1141E.9050604@netwok.org> <4ED30C3E.4090700@jcea.es> <4ED3F37B.4030103@jcea.es> Message-ID: <4ED5E2F3.4050200@v.loewis.de> > Could be implemented as a command line command using "revsets"?. > Propose a new revset to mercurial devels? It *is* implemented as a command line command using "revsets". The revset is max(ancestors(branch("%s")) - outgoing("%s")) where the first parameter is the branch that contains your changes, and the second one is the "path" of the repository you want to diff against. In English: find the most recent revision in the ancestry of your branch that is not an outgoing change wrt. the base repository.
ancestors(branch(yours)) gives all revisions preceding your branch's tip, which will be your own changes, plus all changes from the "default" branch that have been merged into your branch (including the changes from where you originally forked the branch). Subtracting outgoing removes all changes that are not yet in cpython, leaving only the changes in your ancestry that come from cpython. max() then finds the most recent such change, which will be the "default" parent of your last merge, or the branch point if you haven't merged after branching. HTH, Martin From anacrolix at gmail.com Wed Nov 30 15:31:14 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Thu, 1 Dec 2011 01:31:14 +1100 Subject: [Python-Dev] STM and python Message-ID: Given GCC's announcement that Intel's STM will be an extension for C and C++ in GCC 4.7, what does this mean for Python, and the GIL? I've seen efforts made to make STM available as a context, and for use in user code. I've also read about the "old attempts way back" that attempted to use finer grain locking. They understandably failed due to the heavy costs involved in both the locking mechanisms used, and the overhead of a reference counting garbage collection system. However given advances in locking and garbage collection in the last decade, what attempts have been made recently to try these new ideas out? In particular, how unlikely is it that all the thread safe primitives, global contexts, and reference counting functions be made __transaction_atomic, and magical parallelism performance boosts ensue? I'm aware that C89, platforms without STM/GCC, and single threaded performance are concerns. Please ignore these for the sake of discussion about possibilities.
http://gcc.gnu.org/wiki/TransactionalMemory http://linux.die.net/man/4/futex From benjamin at python.org Wed Nov 30 16:25:20 2011 From: benjamin at python.org (Benjamin Peterson) Date: Wed, 30 Nov 2011 10:25:20 -0500 Subject: [Python-Dev] STM and python In-Reply-To: References: Message-ID: 2011/11/30 Matt Joiner : > Given GCC's announcement that Intel's STM will be an extension for C > and C++ in GCC 4.7, what does this mean for Python, and the GIL? > > I've seen efforts made to make STM available as a context, and for use > in user code. I've also read about the "old attempts way back" that > attempted to use finer grain locking. They understandably failed due to > the heavy costs involved in both the locking mechanisms used, and the > overhead of a reference counting garbage collection system. > > However given advances in locking and garbage collection in the last > decade, what attempts have been made recently to try these new ideas > out? In particular, how unlikely is it that all the thread safe > primitives, global contexts, and reference counting functions be made > __transaction_atomic, and magical parallelism performance boosts > ensue? Have you seen http://morepypy.blogspot.com/2011/08/we-need-software-transactional-memory.html ? -- Regards, Benjamin From pje at telecommunity.com Wed Nov 30 16:28:32 2011 From: pje at telecommunity.com (PJ Eby) Date: Wed, 30 Nov 2011 10:28:32 -0500 Subject: [Python-Dev] PEP 402: Simplified Package Layout and Partitioning In-Reply-To: <4ED1196E.8090505@netwok.org> References: <4E43E9A6.7020608@netwok.org> <20110811183114.701DF3A406B@sparrow.telecommunity.com> <4ED1196E.8090505@netwok.org> Message-ID: On Sat, Nov 26, 2011 at 11:53 AM, Éric Araujo wrote: > > Le 11/08/2011 20:30, P.J. Eby a écrit : > >> At 04:39 PM 8/11/2011 +0200, Éric Araujo wrote: > >>> I'll just regret that it's not possible to provide a module docstring > >>> to inform that this is a namespace package used for X and Y.
> >> It *is* possible - you'd just have to put it in a "zc.py" file. IOW, > >> this PEP still allows "namespace-defining packages" to exist, as was > >> requested by early commenters on PEP 382. It just doesn't *require* > >> them to exist in order for the namespace contents to be importable. > That's quite cool. I guess such a namespace-defining module (zc.py > here) would be importable, right? Yes. > Also, would it cause worse > performance for other zc.* packages than if there were no zc.py? > No. The first import of a subpackage sets up the __path__, and all subsequent imports use it. > >>> A pure virtual package having no source file, I think it should have no >>> __file__ at all. > Antoine and someone else thought likewise (I can find the link if you > want); do you consider it consensus enough to update the PEP? > Sure. At this point, though, before doing any more work on the PEP I'd like to have some idea of whether there's any chance of it being accepted.
URL: From neologix at free.fr Wed Nov 30 17:45:07 2011 From: neologix at free.fr (=?ISO-8859-1?Q?Charles=2DFran=E7ois_Natali?=) Date: Wed, 30 Nov 2011 17:45:07 +0100 Subject: [Python-Dev] STM and python In-Reply-To: References: Message-ID: > However given advances in locking and garbage collection in the last > decade, what attempts have been made recently to try these new ideas > out? In particular, how unlikely is it that all the thread safe > primitives, global contexts, and reference counting functions be made > __transaction_atomic, and magical parallelism performance boosts > ensue? > I'd say that given that the current libitm implementation uses a single global lock, you're more likely to see a performance loss. TM is useful to synchronize non-trivial operations: an increment or decrement of a reference count is highly trivial (and expensive when performed atomically, as noted), and TM's never going to help if you put each refcount operation in its own transaction: see Armin's http://morepypy.blogspot.com/2011/08/we-need-software-transactional-memory.html for more realistic use cases. From merwok at netwok.org Wed Nov 30 17:52:17 2011 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Wed, 30 Nov 2011 17:52:17 +0100 Subject: [Python-Dev] PEP 402: Simplified Package Layout and Partitioning In-Reply-To: References: <4E43E9A6.7020608@netwok.org> <20110811183114.701DF3A406B@sparrow.telecommunity.com> <4ED1196E.8090505@netwok.org> Message-ID: <4ED65F41.2000400@netwok.org> Hi, Thanks for the replies. > At this point, though, before doing any more work on the PEP I'd > like to have some idea of whether there's any chance of it being accepted. > At this point, there seems to be a lot of passive, "Usenet nod syndrome" > type support for it, but little active support. If this helps, I am +1, and I?m sure other devs will chime in. I think the feature is useful, and I prefer 402?s way to 382?s pyp directories. 
I do acknowledge that 402 poses problems to PEP 395 which 382 does not, and as I?m not in a position to help, my vote may count less. Cheers From martin at v.loewis.de Wed Nov 30 18:33:32 2011 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Wed, 30 Nov 2011 18:33:32 +0100 Subject: [Python-Dev] PEP 402: Simplified Package Layout and Partitioning In-Reply-To: <4ED65F41.2000400@netwok.org> References: <4E43E9A6.7020608@netwok.org> <20110811183114.701DF3A406B@sparrow.telecommunity.com> <4ED1196E.8090505@netwok.org> <4ED65F41.2000400@netwok.org> Message-ID: <4ED668EC.3090700@v.loewis.de> > If this helps, I am +1, and I?m sure other devs will chime in. I think > the feature is useful, and I prefer 402?s way to 382?s pyp directories. If that's the obstacle to adopting PEP 382, it would be easy to revert the PEP back to having file markers to indicate package-ness. I insist on having markers of some kind, though (IIUC, this is also what PEP 395 requires). The main problem with file markers is that a) they must not overlap across portions of a package, and b) the actual file name and content is irrelevant. a) means that package authors have to come up with some name, and b) means that the name actually doesn't matter (but the file name extension would). UUIDs would work, as would the name of the portion/distribution. I think the specific choice of name will confuse people into interpreting things in the file name that aren't really intended. Regards, Martin

