Kevin Jacobs <jacobs at theopalgroup.com> writes:

> This isn't something I am willing to go to war on, but at the same
> time, I'm willing to expend some effort to lobby for inclusion.
> Either way, I will have the necessary infrastructure to accomplish
> my aims, though my goal is for everyone to have it without
> re-inventing the wheel. Silence on this topic benefits nobody.

I thought I'd try to comment, based on this. After all, I do have an
interest in the issue (I'm an Oracle user), so if there is a problem
it could well affect me, and I should at least be sure I understand
the point.

... and I discovered that I understand Decimal far less than I thought
I did. But after some experimentation, and reading of the spec, I
think I've hit on a key point: the internal representation of a
Decimal instance, and specifically the number of digits of precision
stored internally, has no impact on anything, *except when the
instance is converted to a string*.

The reason for this is that every possible operation on a Decimal
instance uses the context, with the sole exception of the "convert to
string" operations (sections 4 and 5 of the spec). As a result, I'm
not sure it's valid to care about the internal representation of a
Decimal instance.

> It seems that Jim and I want to be able to easily create Decimal
> instances that conform to a pre-specified (maximum) scale and
> (maximum) precision.

But here we have that same point - Decimal instances do not "conform
to" a scale/precision.

> The motivation for this is clearly explained in that section of the
> PostgreSQL manual that I sent the other day. i.e., numeric and
> decimal values in SQL are specified in terms of scale and precision
> parameters. Thus, I would like to create decimal instances that
> conform to those schema -- i.e., they would be rounded appropriately
> and overflow errors generated if they exceeded either the maximum
> precision or scale.

OK, so what you are talking about is rounding during construction. Or
is it? Hang on, and let's look at your examples.

> e.g.:
>   Decimal('20000.001', precision=4, scale=0) === Decimal('20000')

This works fine with the current Decimal:

>>> Decimal("20000.001").round(prec=4) == Decimal("20000")
True

Do you dislike the need to construct an exact Decimal, and then round
it? On what grounds? I got the impression that you thought it would
be "hard", but I don't think the round() method is too hard to use...

(Although I would say that the documentation in the PEP is currently
very lacking in its coverage of how to use the type - I found the
round() method after a lot of experimentation. Before the Decimal
module is ready for prime time, it needs some serious documentation
effort.)

>   Decimal('20000.001', precision=4, scale=0) raises an overflow
>   exception

Hang on - this example is the same as the previous one, but you want a
different result! In any case, the General Decimal Arithmetic spec
doesn't have a concept of overflow when a precision is exceeded (only
when the implementation-defined maximum exponent is exceeded), so I'm
not sure what you want to happen here in the context of the spec.

>   Decimal('20000.001', precision=5, scale=3) raises an overflow
>   exception

A similar comment about overflow applies here.
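Having said that, the spec does seem to have an operation that comes
close to what you're describing: quantize (unless I'm misreading it),
which rounds its operand to a given exponent and signals Invalid
operation if the coefficient of the result would need more digits
than the context's precision allows. Assuming the Python code exposes
quantize and contexts roughly the way the spec describes them - I'm
guessing at the exact spelling here, and the helper name is my own,
so treat this as a sketch rather than something I've tested -
something like this would cover both halves of your
NUMERIC(precision, scale) requirement:

    from decimal import Decimal, Context

    def as_numeric(value, precision, scale):
        # Emulate SQL NUMERIC(precision, scale): construct the value
        # exactly, round the fractional part to `scale` places, and
        # let quantize complain (Invalid operation) if the result
        # would need more than `precision` digits in total.
        ctx = Context(prec=precision)
        quantum = Decimal("1E-%d" % scale)    # e.g. scale=3 -> 0.001
        return Decimal(value).quantize(quantum, context=ctx)

That would give you "rounded appropriately, error if it doesn't fit"
without touching the constructor. (And if losing non-zero digits
should also be an error rather than a silent rounding, trapping the
spec's Inexact condition in that same context ought to do it.)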
Separately, I can imagine that you want to know if information has
been lost, but that's no problem - check like this:

>>> Decimal("20000.001").round(prec=5) == Decimal("20000.001")
False

>   Decimal('200.001', precision=6, scale=3) === Decimal('200.001')

Again, not an issue:

>>> Decimal("200.001").round(prec=6) == Decimal("200.001")
True

>   Decimal('200.000', precision=6, scale=3) === Decimal('200') or
>   Decimal('200.000')
>   (depending on if precision and scale are interpreted as absolutes
>   or maximums)

This doesn't make sense, given that Decimal("200") ==
Decimal("200.000"). Unless your use of === is meant to imply "has the
same internal representation as", in which case I don't believe that
you have a right to care what the internal representation is.

I've avoided considering scale too much here - Decimal has no concept
of scale, only precision. But that's effectively just a matter of
multiplying by the appropriate power of 10, so it shouldn't be a major
issue.

Apologies if I've completely misunderstood or misrepresented your
problem here. If it's any consolation, I've learned a lot in the
process of attempting to comment.

Paul.
--
This signature intentionally left blank