The floating-point precision determines the maximum number of digits written by insertion operations to express floating-point values. How this value is interpreted depends on whether the floatfield format flag is set to a specific notation (either fixed or scientific) or is unset (using the default notation, which is not necessarily equivalent to either fixed or scientific).
Following is the declaration for the ios_base::precision function.

get (1) streamsize precision() const;
set (2) streamsize precision (streamsize prec);
The first form (1) returns the value of the current floating-point precision field for the stream.
The second form (2) sets the precision to a new value and returns the value it had before the call.
Parameters

prec − New value for the floating-point precision.
Return Value

The precision selected in the stream before the call.
Exceptions

Basic guarantee − if an exception is thrown, the stream is in a valid state.
Data races

Accesses (1) or modifies (2) the stream object. Concurrent access to the same stream object may cause data races.
Example

The following example demonstrates the ios_base::precision function.
#include <iostream>

int main () {
   double f = 3.14159;
   std::cout.unsetf(std::ios::floatfield);
   std::cout.precision(5);
   std::cout << f << '\n';
   std::cout.precision(10);
   std::cout << f << '\n';
   std::cout.setf(std::ios::fixed, std::ios::floatfield);
   std::cout << f << '\n';
   return 0;
}
Let us compile and run the above program; this will produce the following result −
3.1416
3.14159
3.1415900000