Convert columns to the best possible dtypes using dtypes supporting pd.NA.

Parameters

infer_objects : bool, default True
    Whether object dtypes should be converted to the best possible types.
convert_string : bool, default True
    Whether object dtypes should be converted to StringDtype().
convert_integer : bool, default True
    Whether, if possible, conversion can be done to integer extension types.
convert_boolean : bool, default True
    Whether object dtypes should be converted to BooleanDtype().
convert_floating : bool, default True
    Whether, if possible, conversion can be done to floating extension types. If convert_integer is also True, preference will be given to integer dtypes if the floats can be faithfully cast to integers.
dtype_backend : {"numpy_nullable", "pyarrow"}, default "numpy_nullable"
    Back-end data type applied to the resultant DataFrame (still experimental). Behaviour is as follows:

    - "numpy_nullable": returns nullable-dtype-backed DataFrame (default).
    - "pyarrow": returns pyarrow-backed nullable ArrowDtype DataFrame.

    Added in version 2.0.

Returns

Series or DataFrame
    Copy of input object with new dtype.
Notes
By default, convert_dtypes will attempt to convert a Series (or each Series in a DataFrame) to dtypes that support pd.NA. By using the options convert_string, convert_integer, convert_boolean and convert_floating, it is possible to turn off individual conversions to StringDtype, the integer extension types, BooleanDtype or floating extension types, respectively.
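For instance, turning off a single conversion leaves the corresponding columns untouched; a minimal sketch using convert_string:

```python
import numpy as np
import pandas as pd

s = pd.Series(["x", "y", np.nan], dtype="object")

# Default: object strings become the dedicated nullable string dtype.
print(s.convert_dtypes().dtype)  # string

# With convert_string=False, the column is left as object.
print(s.convert_dtypes(convert_string=False).dtype)  # object
```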
For object-dtyped columns, if infer_objects is True, use the inference rules as during normal Series/DataFrame construction. Then, if possible, convert to StringDtype, BooleanDtype or an appropriate integer or floating extension type, otherwise leave as object.
If the dtype is integer, convert to an appropriate integer extension type.
If the dtype is numeric, and consists of all integers, convert to an appropriate integer extension type. Otherwise, convert to an appropriate floating extension type.
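The numeric rules above can be illustrated with a small sketch (the expected dtypes match the DataFrame example further below):

```python
import numpy as np
import pandas as pd

# A float column consisting of all whole numbers is narrowed to Int64 ...
whole = pd.Series([10.0, 20.0, np.nan])
print(whole.convert_dtypes().dtype)  # Int64

# ... while a fractional value keeps it on the nullable Float64 type.
frac = pd.Series([10.0, 20.5, np.nan])
print(frac.convert_dtypes().dtype)  # Float64

# A plain integer column maps to the matching integer extension type.
ints = pd.Series([1, 2, 3], dtype="int64")
print(ints.convert_dtypes().dtype)  # Int64
```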
In the future, as new dtypes are added that support pd.NA, the results of this method will change to support those new dtypes.
Examples
>>> df = pd.DataFrame(
...     {
...         "a": pd.Series([1, 2, 3], dtype=np.dtype("int32")),
...         "b": pd.Series(["x", "y", "z"], dtype=np.dtype("O")),
...         "c": pd.Series([True, False, np.nan], dtype=np.dtype("O")),
...         "d": pd.Series(["h", "i", np.nan], dtype=np.dtype("O")),
...         "e": pd.Series([10, np.nan, 20], dtype=np.dtype("float")),
...         "f": pd.Series([np.nan, 100.5, 200], dtype=np.dtype("float")),
...     }
... )
Start with a DataFrame with default dtypes.
>>> df
   a  b      c    d     e      f
0  1  x   True    h  10.0    NaN
1  2  y  False    i   NaN  100.5
2  3  z    NaN  NaN  20.0  200.0

>>> df.dtypes
a      int32
b     object
c     object
d     object
e    float64
f    float64
dtype: object
Convert the DataFrame to use best possible dtypes.
>>> dfn = df.convert_dtypes()
>>> dfn
   a  b      c     d     e      f
0  1  x   True     h    10   <NA>
1  2  y  False     i  <NA>  100.5
2  3  z   <NA>  <NA>    20  200.0

>>> dfn.dtypes
a             Int32
b    string[python]
c           boolean
d    string[python]
e             Int64
f           Float64
dtype: object
Start with a Series of strings and missing data represented by np.nan.
>>> s = pd.Series(["a", "b", np.nan])
>>> s
0      a
1      b
2    NaN
dtype: object
Obtain a Series with dtype StringDtype.
>>> s.convert_dtypes()
0       a
1       b
2    <NA>
dtype: string