Yury Selivanov wrote:

> I think the Motivation section is pretty weak.

I have normally wished for this when I was (semi-interactively) exploring a weakly structured dataset. Often, I start with a string, split it into something hopefully like records, and then start applying filters and transforms.

I would prefer to write a comprehension instead of a for loop. Alas, without pre-editing, I can be fairly confident that the data is dirty. Sometimes I can solve it with a filter (assuming that I remember and don't mind the out-of-order evaluation):

    # The "if value" happens first,
    # so the 1/value turns out to be safe.
    [1/value for value in working_list if value]

Note that this means dropping the bad data, so that items in this list will have different indices than those in the parent working_list. I would rather have written:

    [1/value except (TypeError, ZeroDivisionError): None]

which would keep the matching indices, and clearly indicate where I now had missing/invalid data.

Sometimes I solve it with a clumsy workaround:

    sum((e.weight if hasattr(e, 'weight') else 1.0)
        for e in working_list)

But the "hasattr" implies that I am doing some sort of classification based on whether or not the element has a weight. The true intent was to recognize that while every element does have a weight, the representation that I'm starting from didn't always bother to store it -- so I am repairing that before processing.

    sum(e.weight except AttributeError: 1)

Often I give up, and create a junky helper function, or several. But to avoid polluting the namespace, I may leave it outside the class, or give it a truly bad name:

    def __only_n2(worklist):
        results = []
        for line in worklist:
            line=line.strip()
            if not line:    # or maybe just edit the input file...
                continue
            split1=line.split(", ")
            if 7 != len(split1):
                continue
            if "n2" == split1[3]:
                results.append(split1)
        return results

    worklist_n2 = __only_n2(worklist7)

In real life code, even after hand-editing the input data to fix a few cases, I recently ended up with:

    class VoteMark:
        ...
        @classmethod
        def from_property(cls, voteline):
            # print (voteline)
            count, _junk, prefs = voteline.partition(": ")
            return cls(count, prefs)
        ...

    # module level scope
    def make_votes(vs=votestring):
        return [VoteMark.from_property(e) for e in vs.splitlines()]

    vs=make_votes()

You can correctly point out that I was being sloppy, and that I *should* have gone back to clean it up. But I wouldn't have had to clean up either the code or the data (well, not as much), if I had been able to just keep the step-at-a-time transformations I was building up during development:

    vs=[(VoteMark(*e.strip().split(": "))
         except (TypeError, ValueError): None)
        for e in votestring.splitlines()]

Yes, the first line is still doing too much, and might be worth a helper function during cleanup. But it is already better than an alternate constructor that exists only to simplify a single (outside the class) function that is only called once. Which in turn is better than the first draft that was so ugly that I actually did fix it during that same work session.

> Inconvenience of dict[] raising KeyError was solved by
> introducing the dict.get() method. And I think that
>     dct.get('a', 'b')
> is 1000 times better than
>     dct['a'] except KeyError: 'b'

I don't.

    dct.get('a', default='b')

would be considerably better, but it would still imply that missing values are normal. So even after argclinic is fully integrated, there will still be times when I prefer to make it explicit that I consider this an abnormal case.

(And, as others have pointed out, .get isn't a good solution when the default is expensive to compute.)
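(A minimal sketch of that cost, with made-up names: the default passed to .get is evaluated before the call ever happens, so an expensive fallback is paid for even when the key is present, while the statement form only computes it on an actual miss.)

    import time

    def expensive_default():
        # stands in for any costly fallback computation
        time.sleep(2)
        return 'b'

    dct = {'a': 1}

    # pays for expensive_default() even though 'a' is present
    value = dct.get('a', expensive_default())

    # only computes the fallback when the lookup actually fails
    try:
        value = dct['a']
    except KeyError:
        value = expensive_default()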
>> Consider this example of a two-level cache::
>>     for key in sequence:
>>         x = (lvl1[key] except KeyError: (lvl2[key] except KeyError: f(key)))

> I'm sorry, it took me a minute to understand what your
> example is doing. I would rather see two try..except blocks
> than this.

Agreed -- like my semi-interactive code above, it does too much on one line. I don't object as much to:

    for key in sequence:
        x = (lvl1[key] except KeyError:
             (lvl2[key] except KeyError:
              f(key)))

>> Retrieve an argument, defaulting to None::
>>     cond = args[1] except IndexError: None
>>
>>     # Lib/pdb.py:803:
>>     try:
>>         cond = args[1]
>>     except IndexError:
>>         cond = None

>     cond = None if (len(args) < 2) else args[1]

This is an area where tastes will differ. I view the first as saying that not having a cond would be unusual, or at least a different kind of call. I view your version as a warning that argument parsing will be complex, and that there may be specific combinations of arguments that are only valid depending on the values of other arguments.

Obviously, not everyone will share that intuition, but looking at the actual code, the first serves me far better. (It is a do_condition method, and falsy values -- such as None -- trigger a clear rather than a set.)

>> Attempt a translation, falling back on the original::
>>     e.widget = self._nametowidget(W) except KeyError: W
>>
>>     # Lib/tkinter/__init__.py:1222:
>>     try:
>>         e.widget = self._nametowidget(W)
>>     except KeyError:
>>         e.widget = W

Note that if the value were being stored in a variable instead of an attribute, it would often be written more like:

    try:
        W=_nametowidget(W)
    except:
        pass

> I'm not sure this is a good example either. ...
> Your new syntax just helps to work with this error prone api.

Exactly. You think that is bad, because it encourages use of a sub-optimal API. I think it is good, because getting an external API fixed is ... unlikely to happen.

>     # sys.abiflags may not be defined on all platforms.
>     _CONFIG_VARS['abiflags'] = sys.abiflags except AttributeError: ''
> Ugly.
>     _CONFIG_VARS['abiflags'] = getattr(sys, 'abiflags', '')
> Much more readable.

Again, tastes differ. To me, getattr looks too much like internals, and I wonder if I will need to look up getattribute or __getattr__, because maybe this code is doing something strange. After an unpleasant pause, I realize that, no, it really is safe. Then later, I wonder if _CONFIG_VARS is some odd mapping with special limits. Then I see that it doesn't matter, because that isn't what you're passing to getattr. Then I remember that sys often is a special case, and start wondering if I need extra tests around this. Wait, why was I looking at this code again?

(And as others pointed out, getattr with a constant has its own code smell.)

The "except" form clearly indicates that sys.abiflags *ought* to be there, and the code is just providing some (probably reduced) services to oddball systems, instead of failing.
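(For comparison, the closest general-purpose expression-level spelling available today is the kind of helper function mentioned earlier, plus a lambda to delay evaluation -- a minimal sketch with a made-up name, not anything proposed in the PEP, which ends up burying the interesting expression inside the lambda:)

    import sys

    def try_or(thunk, default, exceptions=(Exception,)):
        # hypothetical helper: call thunk(), falling back to
        # default on the listed exceptions
        try:
            return thunk()
        except exceptions:
            return default

    abiflags = try_or(lambda: sys.abiflags, '', (AttributeError,))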
>> Retrieve an indexed item, defaulting to None (similar to dict.get)::
>>     def getNamedItem(self, name):
>>         return self._attrs[name] except KeyError: None

> _attrs there is a dict (or at least it's something that quacks
> like a dict, and has [] and keys()), so
>     return self._attrs.get(name)

Why do you assume it has .get? Or that .get does what a python mapping normally does with .get?

The last time I dove into code like that, it was written that way specifically because the DOM (and _attrs in particular) might well be created by some other program, in support of a very unpythonic API. Note that the method itself is called getNamedItem, rather than just get; that also suggests an external source of API expectations.

>> Translate numbers to names, falling back on the numbers::
>>     g = grp.getgrnam(tarinfo.gname)[2] except KeyError: tarinfo.gid
>>     u = pwd.getpwnam(tarinfo.uname)[2] except KeyError: tarinfo.uid
>>
>>     # Lib/tarfile.py:2198:
>>     try:
>>         g = grp.getgrnam(tarinfo.gname)[2]
>>     except KeyError:
>>         g = tarinfo.gid
>>     try:
>>         u = pwd.getpwnam(tarinfo.uname)[2]
>>     except KeyError:
>>         u = tarinfo.uid

> This one is a valid example, but totally unparseable by
> humans. Moreover, it promotes a bad pattern, as you
> mask KeyErrors in 'grp.getgrnam(tarinfo.gname)' call.

Do you find the existing try: except: code any better, or are you just worried that it doesn't solve enough of the problem?

FWIW, I think it doesn't matter where the KeyError came from; even if the problem were finding the getgrnam method, the right answer (once deployed, as opposed to during development) would still be to fall back to the best available information.

>> Perform some lengthy calculations in EAFP mode, handling division by
>> zero as a sort of sticky NaN::
>>     value = calculate(x) except ZeroDivisionError: float("nan")

vs

>>     try:
>>         value = calculate(x)
>>     except ZeroDivisionError:
>>         value = float("nan")

> I think all of the above are more readable with a try statement.

I don't, though I would wrap the except. The calculation is important; everything else is boilerplate, and the more you can get it out of the way, the better.

> Yes, some examples look neat. But your syntax is much easier
> to abuse than 'if..else' expression, and if people start
> abusing it, Python will simply lose its readability
> advantage.

This is also an argument for mandatory parentheses. ()-continuation makes it easier to wrap the except clause out of the way. (((( )) (() ()) ))-proliferation provides pushback when the expressions start to get too complicated.

-jJ

--
If there are still threading problems with my replies, please email me with details, so that I can try to resolve them. -jJ