The PEP I posted yesterday, which currently doesn't have a number, addresses the syntactic issues of adding a decimal number type to Python, and it investigates how to safely introduce the new type into a language with a large base of legacy code. The PEP does not address how decimal numbers should be implemented in Python; that topic has been the subject of other PEPs. The PEP also proposes the definition of a new language dialect that makes some small improvements on the syntax of the classic Python language.

The changes to the numerical model are tailored to make the language attractive to two very important markets. Many of the users attracted from these markets may initially have little or no interest in classic Python. They may not even know that the Python language exists. They will happily use a language called dpython that works very well for their profession.

The interesting thing about the proposed language is how little effort will be required to create and maintain it. The additions to Python were straightforward and the total patch was only a few hundred lines.

The prototype implementation uses the following rules when interpreting the type to be created from a number literal:

    literal   '.py'    '.dp'    interactive  interactive
    value     file     file     python       dpython
    -------   -------  -------  -----------  -----------
    2.2b      float    float    float        float
    2b        int      int      int          int
    2.2       float    decimal  float        decimal
    2         int      decimal  int          decimal
    2.2d      decimal  decimal  decimal      decimal
    2d        decimal  decimal  decimal      decimal

Based on a comment from Guido I've decided to change the 'f' to 'b' in the next version of dpython. That will be more descriptive of the distinction between the types.

[Michael]
>> This was a proposal for a mechanism for mingling types safely. It
>> was not intended as a definition of how decimal numbers should be
>> implemented. My implementation tests the interaction of the current
>> number types with the decimal type and I only completed enough of
>> the decimal type implementation to support this testing. I was not
>> expecting to discuss how decimal types should work. That has been
>> discussed already. I was primarily interested in testing the effects
>> of adding a new number type as I described in the PEP.

> Can you summarize the rules you used for mixed arithmetic? I forget
> what your PEP said would happen when you add a decimal 1 to a binary
> 1. Is the result decimal 2 or binary 2? Why?

The rule is very simple: you can't mix the types. You must explicitly cast a binary to a decimal or a decimal to a binary. This introduces the least chance of error. This pedantic behavior is very important in fields like accounting. I want accountants to think of the proposed dpython language as the COBOL version of Python :-)

This approach is also the correct one to take for newbies. They will get a nice clean exception if they mix the types. This error will be something they can easily look up in the documentation. An unexpected answer, like 1/2 = 0, will just leave them scratching their heads.

This proposal tries to be consistent with what I like about Python and what I think makes Python a great language. The implementation maintains complete backwards compatibility, and it requires that programmers explicitly state that they want to do something rather than have bad things happen unexpectedly. Mixing different types of numbers can lead to bugs that are very difficult to identify. The nature of the errors that would occur when binary numbers are used instead of decimals would be particularly difficult to detect.
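Here is a minimal, runnable sketch of the behavior I have in mind, written in plain Python. The toy Decimal class, its hundredths-based storage, and the error message are only my illustration here, not the prototype's decimalobject.c:

---------------
#File nomix.py  (illustration only, not part of the patch)
class Decimal:
    # Toy stand-in for the proposed decimal type; values are stored as a
    # whole number of hundredths, which is enough for the illustration.
    def __init__(self, value):
        self.cents = int(round(float(value) * 100))

    def __add__(self, other):
        # The point of the sketch: refuse to mix decimal with binary
        # numbers unless the caller casts explicitly.
        if not isinstance(other, Decimal):
            raise TypeError("cannot mix decimal and binary numbers; "
                            "cast explicitly with Decimal() or float()")
        result = Decimal(0)
        result.cents = self.cents + other.cents
        return result

    def __repr__(self):
        return "Decimal(%d.%02d)" % divmod(self.cents, 100)

print(Decimal("1.10") + Decimal("2.20"))   # prints Decimal(3.30)
print(Decimal("1.10") + 1.1)               # raises TypeError
---------------

The only part that matters is the type check in __add__: the mixed addition fails loudly instead of quietly returning a binary result.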
The answers would always be very close, and sometimes they would even be correct; without the use of an explicit cast these errors would be silent. The price paid for being pedantic will be the occasional need to add an int() or float() around a decimal number or a decimal() around a float or int.

>> What did you think of the idea of adding a new command and file format?

> I don't think that would be necessary. I'd prefer the 'd' and 'f' (or
> maybe 'b'?) suffixes to be explicit, perhaps combined with an optional
> per-module directive to set the default. This would be more robust
> than keying on the filename extension.

Why do you think a directive statement would be more robust than using a file suffix or command name as the directive? I'll try to explain why I think the opposite is true.

Take the example of teaching a newbie to program. They must be told some basic things. For instance, they will have been told to use a specific suffix for the file name in order to create a new module. So how do you make sure that the newbie always uses decimal numbers? If a directive statement is required then the newbie must remember to always add this statement at the top of a file. If they forget they will not get the expected results. With the file suffix based approach they will have to use a '.dp' suffix for the file name of a new module. If they are told to use a '.dp' suffix from the outset then the chance of their accidentally typing '.py' instead of '.dp' is very small, whereas forgetting to add a directive is a silent error that they could easily make.

Your request to have an explicit 'd' and 'f' is already implemented. The prototype implementation allows an explicit 'd' or 'f' to be used at any time. The rules for interpreting values that have no suffix were defined earlier. The prototype implementation simply uses the suffix of the module file and the name of the command as the directive. This approach provides a very natural language experience to someone working in a profession that normally uses decimal numbers. They are not treated as second-class citizens who must endure the clutter of magic directive statements at the top of every module they create. They just use their special command and the file extension.

> If you have to change the
> default globally, I'd prefer a command line option. After all it's
> only the scanner that needs to know about the different defaults,
> right?

I think there would be a problem with only using a command line option. It would work for files that are named on the command line and for code being interpreted in an interactive session. However, for imported modules the meaning of a number literal must be based on the author's intentions when the module was created. This means that the interpreter must recognize the type of file so it can determine how to compile the literals defined in the module. If a command line option determined how the scanner converts number literals, then a module source file could be converted incorrectly whenever the wrong option was used.

> I wonder about the effectiveness of the default though. If you write
> a module for decimal arithmetic, how do you prevent a caller to pass
> in a binary number?

Since the module is written with decimal numbers, an exception would be raised if a binary number was used where a decimal number was required.
For instance:

---------------
#File spam.py
a = 1.0
---------------
#File eggs.dp
import spam
c = spam.a + 1.0
---------------

The name 'a' was compiled into a float type object when the spam.py file was scanned. So when the expression being assigned to 'c' is executed, a TypeError would be raised because a float was being added to a decimal.

>> decimalobject.c so I could test the impact of introducing an
>> additional command and file format to Python. I expect this code to
>> be replaced. As I said in the PEP I also think the decimal number
>> implementation will evolve into a type that supports inheritance.

> Please, please, please, unify all these efforts. A decimal PEP would
> be a good one, but there should be only one.

Absolutely. The PEP process is supposed to formalize the capture of ideas so they can be referenced. This PEP is mostly orthogonal to Aahz's proposal. They can be merged, or we can reference each other's PEPs. I'm probably not the best choice for doing the implementation of the decimal number semantics, so I'd be happy to work with Aahz.
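Returning to the spam/eggs example above: under the explicit-cast rule the author of eggs.dp would resolve the TypeError by casting in one direction or the other, roughly like this (assuming decimal() is available as a cast in dpython, as described earlier; the exact spelling is not settled):

---------------
#File eggs.dp
import spam
c = decimal(spam.a) + 1.0      # promote the binary float to decimal
# or: c = spam.a + float(1.0)  # stay in binary instead
---------------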