Hi, since this is the first time I've piped up here: I'm Scott, one of the older programmers who learned young (at 16, in 1966, I learned on a vacuum-tube, desk-sized computer, the LGP-30). My career has included work on operating systems, compilers, and databases. Python is the first new language I've learned in a long time that is a delight to use. My most recent uses of Python have been large-volume data analysis (Numeric, VPython) for my employer and an art project of my own.

The art project involves a _large_ (43M compressed, .25G uncompressed) .png file that I wanted to manipulate with PIL. Unfortunately, PIL failed; I rewrote my code to read the .png bytes directly in Python, and that failed as well. I eventually traced the problem to zlib, and changed my code to drain all pending data from the decompression object before calling .flush(). That solved my immediate problem, and now I'm trying to get a fix in to solve this for others.

The fix is small: the original author misunderstood the states a decompression object could be in at .flush() time, and so implemented what amounts to a no-op as far as the data is concerned. This may be because there was no way to limit the size of the decompressor's output when that method was written, so no data could ever be left pending at flush time. The test case that should have caught this contained a typo that hid the defect; I submitted bug #640230 for that. I also have a patch in, #678531, which fixes the problem, accompanied by a rewrite of test_zlib.py in PyUnit style, which was much easier to use while testing my fix.

If at all possible, it would be nice to see this fix exercised in the 2.3 test chain, so I am hoping for a reviewer.

-Scott David Daniels
Scott.Daniels@Acm.Org
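
P.S. For anyone who hits the same wall before the patch is reviewed, the workaround looks roughly like this. This is a minimal sketch, not the code from the patch: the function name, the chunk size, and the loop structure are mine; the zlib calls (decompress with a max_length argument, the unconsumed_tail attribute, and flush) are the real library API.

    import zlib

    def decompress_all(data, max_length=64 * 1024):
        # Workaround sketch: never leave pending input inside the
        # decompression object when .flush() is called.  Decompress in
        # bounded chunks, draining .unconsumed_tail until it is empty,
        # and only then flush.
        d = zlib.decompressobj()
        pieces = [d.decompress(data, max_length)]
        while d.unconsumed_tail:
            pieces.append(d.decompress(d.unconsumed_tail, max_length))
        pieces.append(d.flush())  # safe now: nothing left to drain
        return b''.join(pieces)

The point of passing max_length is just to bound memory per call on a quarter-gigabyte expansion; the loop over unconsumed_tail is what keeps .flush() from being asked to produce data it will silently drop.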