stdint.h is a header file in the C standard library, introduced in section 7.18 of the C99 standard, that allows programmers to write more portable code by providing a set of typedefs specifying exact-width integer types, together with macros defining the minimum and maximum allowable values of each type[1]. This header is particularly useful for embedded programming, which often involves considerable manipulation of hardware-specific I/O registers requiring integer data of fixed widths, specific locations and exact alignments. stdint.h (for C or C++) and cstdint (for C++) can be downloaded or quickly created if they are not provided.
The naming convention for exact-width integer types is intN_t for signed and uintN_t for unsigned types[1]. For example, int8_t and uint64_t, amongst others, can be declared, and their corresponding ranges are defined by macros following a similar but upper-case naming convention: INT8_MIN to INT8_MAX, and 0 (zero) to UINT64_MAX. In addition, stdint.h defines limits of integer types capable of holding object pointers, such as UINTPTR_MAX, the value of which depends on the processor and its address range[1].
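A minimal sketch (the variable names are arbitrary) of how the typedefs and their limit macros fit together; the cast to unsigned long long for printing is a workaround discussed later:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int8_t   small = INT8_MAX;    /* largest value an int8_t can hold: 127 */
        uint64_t big   = UINT64_MAX;  /* 18,446,744,073,709,551,615            */

        printf("int8_t range : %d to %d\n", INT8_MIN, INT8_MAX);
        printf("uint64_t max : %llu\n", (unsigned long long)big);
        printf("small        = %d\n", small);
        return 0;
    }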
The exact-width types and their corresponding ranges are only included in that header if they exist for that specific compiler/processor. Note that even on the same processor, two different compiler implementations can differ. Using #if or #ifdef lets the compiler's preprocessor include or exclude declarations so that the correct exact-width set is selected for a compiler and its processor target.
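As a sketch, the presence of an exact-width type can be tested through its limit macro, which the standard only defines when the type itself exists; the fallback typedef below is simply an assumption about what a particular project might want:

    #include <stdint.h>

    #ifdef UINT32_MAX
    /* uint32_t exists on this compiler/target, so use it directly. */
    typedef uint32_t reg32;
    #else
    /* Fall back to a minimum-width type, which is always provided. */
    typedef uint_least32_t reg32;
    #endif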
The related include file <limits.h> provides macro values for the range limits of common integer variable types. In C, <limits.h> is already included in <stdint.h>, but in contrast to <stdint.h>, which is implementation independent, all maximum and minimum integer values defined in <limits.h> are compiler-implementation specific. For example, a compiler generating 32-bit executables will define LONG_MIN as −2,147,483,648 [−2^31], whereas for 64-bit processor targets LONG_MIN can be −9,223,372,036,854,775,808 [−2^63].
The C standard has a notion of "corresponding integer types". Informally, what this means is that for any integer type T:

    typedef signed T A; typedef unsigned T B;

the type A and the type B are said to be corresponding integer types (note: typedef doesn't create a new type, it creates a new identifier as a synonym for the given type). This is important for two reasons:

- Corresponding signed and unsigned types are required to use the same amount of storage (including sign and padding bits) and to have the same alignment requirements.
- The stored value of an object may be accessed through an lvalue of the type corresponding to the object's declared type without violating the aliasing rules.
Both of these combined require code like:
    A a = 1;
    B b = 1;
    *(B*)&a = 0;  /* access the A object through the corresponding unsigned type */
    *(A*)&b = 0;  /* access the B object through the corresponding signed type   */
to have behavior defined by the standard (as opposed to being undefined in the general case). There are many caveats to how far you can push this, so it's important to actually read the C standard to see what's legal or not (the bulk of this deals with padding bits and out-of-range representations).
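A self-contained sketch of the same idea using the built-in corresponding pair int and unsigned int (any concrete corresponding pair would do):

    #include <stdio.h>

    int main(void)
    {
        int a = 1;
        unsigned b = 1u;

        /* int and unsigned int are corresponding types, so accessing each
           object through an lvalue of the other type is permitted here. */
        *(unsigned *)&a = 0u;
        *(int *)&b = 0;

        printf("a = %d, b = %u\n", a, b);
        return 0;
    }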
The C99 standard elaborated the difference between value representations and object representations.
The object representation of an integer consists of 0 or more padding bits, 1 or more value bits[1], and either 0 or 1 sign bits (the sign bit doesn't count as a value bit), depending on the signedness of the integer type.
The value representation is a conceptual representation of an integer. It ignores any padding bits and (possibly) rearranges the bits so that the integer is ordered sequentially from most significant value bit to least significant value bit. Most programmers deal with this representation because it allows them to write portable code easily, dealing only with −0 and out-of-range values, as opposed to both of those in addition to tricky aliasing rules and trap representations if they choose to deal with the object representation directly.
The C standard allows for only three signed integer representations, chosen by the compiler writer:

- two's complement
- ones' complement
- sign and magnitude
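As a concrete illustration (assuming an 8-bit type with no padding bits), the value −5 would be stored as 1111 1011 in two's complement, 1111 1010 in ones' complement, and 1000 0101 in sign and magnitude.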
The types <something>_t and u<something>_t are required to be corresponding signed and unsigned integer types. For the types that are marked optional, an implementation must define either both <something>_t and u<something>_t or neither of the two. The limits of these types shall be defined by similarly named macros in the same fashion as described below.
If a type is of the form [u]<something>N_t (or similarly for a preprocessor define), N must be a positive decimal integer with no leading 0's.
The exact-width types are of the form intN_t and uintN_t. Both types must be represented by exactly N bits with no padding bits. intN_t must be encoded as a two's complement signed integer and uintN_t as an unsigned integer. These types are optional; however, if the implementation supports types with widths of 8, 16, 32 or 64 bits that meet these requirements, it shall typedef them to the corresponding names. Any other N is optional[1].
| Type | Signedness | Width (bits) | Width (bytes) | Minimum value | Maximum value |
|---|---|---|---|---|---|
| int8_t | Signed | 8 | 1 | −2^7 = −128 | 2^7 − 1 = 127 |
| uint8_t | Unsigned | 8 | 1 | 0 | 2^8 − 1 = 255 |
| int16_t | Signed | 16 | 2 | −2^15 = −32,768 | 2^15 − 1 = 32,767 |
| uint16_t | Unsigned | 16 | 2 | 0 | 2^16 − 1 = 65,535 |
| int32_t | Signed | 32 | 4 | −2^31 = −2,147,483,648 | 2^31 − 1 = 2,147,483,647 |
| uint32_t | Unsigned | 32 | 4 | 0 | 2^32 − 1 = 4,294,967,295 |
| int64_t | Signed | 64 | 8 | −2^63 = −9,223,372,036,854,775,808 | 2^63 − 1 = 9,223,372,036,854,775,807 |
| uint64_t | Unsigned | 64 | 8 | 0 | 2^64 − 1 = 18,446,744,073,709,551,615 |
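For illustration, a sketch of the embedded-style declarations the exact-width types were intended for; the peripheral layout, field names and base address are purely illustrative assumptions:

    #include <stdint.h>

    /* Hypothetical memory-mapped peripheral: two 8-bit registers followed by
       a 16-bit counter. Each member has an exact width, although any padding
       the compiler inserts between members is still implementation-specific. */
    typedef struct {
        uint8_t  status;   /* exactly 8 bits, values 0..255     */
        uint8_t  control;  /* exactly 8 bits                    */
        uint16_t counter;  /* exactly 16 bits, values 0..65,535 */
    } timer_regs_t;

    /* The base address is an assumption for an imaginary target. */
    #define TIMER_REGS ((volatile timer_regs_t *)0x40001000u)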
The limits of these types are defined with macros of the following forms:

- INTN_MAX is the maximum value (2^(N−1) − 1) of intN_t.
- INTN_MIN is the minimum value (−2^(N−1)) of intN_t.
- UINTN_MAX is the maximum value (2^N − 1) of uintN_t.

The minimum-width types are of the form int_leastN_t and uint_leastN_t. int_leastN_t is a signed integer and uint_leastN_t is an unsigned integer[1].
The standard mandates that these have widths greater than or equal to N, and that no smaller type with the same signedness has N or more bits. For example, if a system provides only uint32_t and uint64_t, uint_least16_t must be equivalent to uint32_t.
An implementation is required to define these for the following N: 8, 16, 32, 64. Any other N is optional.
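A brief sketch of the intended use: a counter that needs a guaranteed range but no particular storage width (the names and the 65,535 cap are arbitrary assumptions):

    #include <stdint.h>

    /* Must be able to count to at least 65,535; the implementation picks the
       smallest type with at least 16 bits (usually uint16_t, possibly wider). */
    static uint_least16_t packet_count;

    void on_packet(void)
    {
        if (packet_count < 65535u)   /* stay within the guaranteed range */
            packet_count++;
    }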
The limits of these types are defined with macros of the following forms:

- INT_LEASTN_MAX is the maximum value (2^(N−1) − 1 or greater) of int_leastN_t.
- INT_LEASTN_MIN is the minimum value (−2^(N−1) + 1 or less) of int_leastN_t.
- UINT_LEASTN_MAX is the maximum value (2^N − 1 or greater) of uint_leastN_t.

stdint.h also defines macros that convert constant decimal, octal or hexadecimal values into constants guaranteed to be suitable for the corresponding types and usable in #if preprocessor expressions:
- INTN_C(value) is substituted for a value suitable for int_leastN_t. For example, if int_least64_t is typedef'd to signed long long int, INT64_C(123) corresponds to 123LL.
- UINTN_C(value) is substituted for a value suitable for uint_leastN_t, as in the sketch below.
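A sketch of why these macros matter; the constant, its name and the resulting typedef are arbitrary assumptions:

    #include <stdint.h>

    /* UINT64_C appends whatever suffix the implementation needs (for example
       4294967296ULL) so the constant is suitable for a uint_least64_t. */
    #define MAX_TICKS  UINT64_C(4294967296)

    /* The result is also usable in preprocessor #if expressions. */
    #if MAX_TICKS > UINT32_MAX
    typedef uint_least64_t tick_counter;
    #else
    typedef uint_least32_t tick_counter;
    #endif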
The fastest minimum-width types are of the form int_fastN_t and uint_fastN_t.
The standard does not mandate anything about these types except that their widths must be greater than or equal to N. It also leaves it up to the implementer to decide what it means to be a "fast" integer type.
An implementation is required to define these for the following N: 8, 16, 32, 64[2].
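A sketch of the intended use: arithmetic where only the range matters, so the implementation is free to pick whichever width it considers fastest (the function and its parameters are arbitrary assumptions):

    #include <stdint.h>

    uint32_t checksum(const uint8_t *buf, uint_fast16_t len)
    {
        uint_fast32_t sum = 0;          /* at least 32 bits, whatever is fastest */
        for (uint_fast16_t i = 0; i < len; i++)
            sum += buf[i];
        return (uint32_t)sum;           /* reduce to an exact width for the caller */
    }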
The limits of these types are defined with macros of the following forms:

- INT_FASTN_MAX is the maximum value (2^(N−1) − 1 or greater) of int_fastN_t.
- INT_FASTN_MIN is the minimum value (−2^(N−1) + 1 or less) of int_fastN_t.
- UINT_FASTN_MAX is the maximum value (2^N − 1 or greater) of uint_fastN_t[1].

intptr_t and uintptr_t are, respectively, signed and unsigned integer types which are guaranteed to be able to hold the value of a pointer. These two types are optional.
The limits of these types are defined with the following macros:

- INTPTR_MIN is the minimum value (−32,767 [−2^15 + 1] or less) of intptr_t.
- INTPTR_MAX is the maximum value (32,767 [2^15 − 1] or greater) of intptr_t.
- UINTPTR_MAX is the maximum value (65,535 [2^16 − 1] or greater) of uintptr_t[3].
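A sketch of a common use of uintptr_t, converting a pointer to an integer to inspect its alignment; this assumes the implementation provides the optional type, and the 4-byte boundary is an arbitrary choice:

    #include <stdint.h>
    #include <stdbool.h>

    /* True if the pointer is aligned to a 4-byte boundary. The round trip
       pointer -> uintptr_t -> pointer is what the type guarantees; reading
       meaning into the integer value itself is implementation-specific. */
    static bool is_aligned4(const void *p)
    {
        return ((uintptr_t)p & 0x3u) == 0;
    }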
intmax_t and uintmax_t are, respectively, a signed and an unsigned integer type of the greatest supported width. They are, in other words, the integer types which have the greatest limits.
The limits of these types are defined with macros of the following forms:

- INTMAX_MAX is the maximum value (9,223,372,036,854,775,807 [2^63 − 1] or greater) of intmax_t.
- INTMAX_MIN is the minimum value (−9,223,372,036,854,775,807 [−2^63 + 1] or less) of intmax_t.
- UINTMAX_MAX is the maximum value (18,446,744,073,709,551,615 [2^64 − 1] or greater) of uintmax_t.

Macros which convert constant decimal, octal or hexadecimal values into constants suitable for the corresponding types are also defined:
- INTMAX_C(value) is substituted for a value suitable for intmax_t.
- UINTMAX_C(value) is substituted for a value suitable for uintmax_t[1].

stdint.h also defines limit macros for several integer types declared in other standard headers:

- PTRDIFF_MIN is the minimum value of ptrdiff_t.
- PTRDIFF_MAX is the maximum value of ptrdiff_t.
- SIZE_MAX is the maximum value (2^16 − 1 or greater) of size_t.
- WCHAR_MIN is the minimum value of wchar_t.
- WCHAR_MAX is the maximum value of wchar_t.
- WINT_MIN is the minimum value of wint_t.
- WINT_MAX is the maximum value of wint_t.
- SIG_ATOMIC_MIN is the minimum value of sig_atomic_t.
- SIG_ATOMIC_MAX is the maximum value of sig_atomic_t.
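As a brief sketch, SIZE_MAX is handy for overflow checks before allocating memory (the element type and function name are arbitrary assumptions):

    #include <stdint.h>
    #include <stdlib.h>

    /* Allocate an array of n doubles, refusing any request whose size in
       bytes would overflow size_t. */
    void *alloc_doubles(size_t n)
    {
        if (n > SIZE_MAX / sizeof(double))
            return NULL;              /* n * sizeof(double) would overflow */
        return malloc(n * sizeof(double));
    }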
One practical problem with these typedefs is input/output: because each may map to a different underlying type on each implementation, hard-coded printf and scanf specifiers aren't recognized or don't match the argument and will probably lead to something undefined. The typical ways of working around this are:
- Convert to the long or unsigned long types as an intermediate step and pass those into printf or scanf (see the sketch after this list). This works reasonably well for the exact, minimum, and fast integer types of fewer than 32 bits, but may cause trouble with ptrdiff_t and size_t and the types larger than 32 bits, typically on platforms that use 32-bit longs and 64-bit pointers.
- Avoid scanf directly, instead manually reading into a buffer, calling strto<i|u>max, and then converting the result to the desired type. This doesn't help with printing out integers, though.
- Use a printf and scanf library that is C99 compatible.
- Convert to the [u]intmax_t types or synthesize a corresponding integer type.
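A sketch of the first workaround, casting to a wide built-in type before handing the value to printf (the variable names and values are arbitrary; %llu still requires a C99-aware library):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t  temperature = -40;
        uint64_t serial      = UINT64_C(0x123456789ABC);

        /* Cast to a known built-in type so the format specifier matches. */
        printf("temperature = %ld\n", (long)temperature);
        printf("serial      = %llu\n", (unsigned long long)serial);
        return 0;
    }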
The [u]intN_t types are a compromise between the desire to have guaranteed two's complement integer types and the desire to have guaranteed types with no padding bits (as opposed to a more fine-grained approach which would define more types). Because of the "all or nothing" approach to the [u]intN_t types, an implementation might have to play the same sort of games described above, depending on whether it cares about speed, programmer convenience, or standards conformance.

stdint.h: integer types – Base Definitions Reference, The Single UNIX® Specification, Issue 7 from The Open Group

As stdint.h is not shipped with older C++ compilers and Visual Studio C++ products prior to Visual Studio 2010, third-party implementations are available.