As of pandas version 0.15.0, the memory usage of a dataframe (including the index) is shown when calling the info() method of a dataframe. A configuration option, display.memory_usage (see Options and Settings), specifies whether the dataframe's memory usage will be displayed when invoking the df.info() method.

For example, the memory usage of the dataframe below is shown when calling df.info():
In [1]: dtypes = ['int64', 'float64', 'datetime64[ns]', 'timedelta64[ns]',
   ...:           'complex128', 'object', 'bool']

In [2]: n = 5000

In [3]: data = dict([(t, np.random.randint(100, size=n).astype(t))
   ...:              for t in dtypes])

In [4]: df = pd.DataFrame(data)

In [5]: df['categorical'] = df['object'].astype('category')

In [6]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5000 entries, 0 to 4999
Data columns (total 8 columns):
bool               5000 non-null bool
complex128         5000 non-null complex128
datetime64[ns]     5000 non-null datetime64[ns]
float64            5000 non-null float64
int64              5000 non-null int64
object             5000 non-null object
timedelta64[ns]    5000 non-null timedelta64[ns]
categorical        5000 non-null category
dtypes: bool(1), category(1), complex128(1), datetime64[ns](1), float64(1), int64(1), object(1), timedelta64[ns](1)
memory usage: 289.1+ KB
The + symbol indicates that the true memory usage could be higher, because pandas does not count the memory used by values in columns with dtype=object.
New in version 0.17.1.
Passing memory_usage='deep' will enable a more accurate memory usage report that accounts for the full usage of the contained objects. This is optional, as this deeper introspection can be expensive.
In [7]: df.info(memory_usage='deep')
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5000 entries, 0 to 4999
Data columns (total 8 columns):
bool               5000 non-null bool
complex128         5000 non-null complex128
datetime64[ns]     5000 non-null datetime64[ns]
float64            5000 non-null float64
int64              5000 non-null int64
object             5000 non-null object
timedelta64[ns]    5000 non-null timedelta64[ns]
categorical        5000 non-null category
dtypes: bool(1), category(1), complex128(1), datetime64[ns](1), float64(1), int64(1), object(1), timedelta64[ns](1)
memory usage: 425.6 KB
By default the display option is set to True but can be explicitly overridden by passing the memory_usage argument when invoking df.info().
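For example, the option and the argument can be combined as follows (a minimal sketch; both names appear in the text above):

# disable the memory usage line for all subsequent df.info() calls
pd.set_option('display.memory_usage', False)
df.info()                       # no "memory usage" line is printed

# override the option for a single call
df.info(memory_usage=True)      # show the approximate usage again
df.info(memory_usage='deep')    # show the fully introspected usage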
The memory usage of each column can be found by calling the memory_usage method. This returns a Series whose index consists of the column names, with the memory usage of each column shown in bytes. For the dataframe above, the memory usage of each column and the total memory usage of the dataframe can be found with the memory_usage method:
In [8]: df.memory_usage()
Out[8]:
Index                 80
bool                5000
complex128         80000
datetime64[ns]     40000
float64            40000
int64              40000
object             40000
timedelta64[ns]    40000
categorical        10920
dtype: int64

# total memory usage of dataframe
In [9]: df.memory_usage().sum()
Out[9]: 296000
By default, the memory usage of the dataframe's index is shown in the returned Series; the memory usage of the index can be suppressed by passing the index=False argument:
In [10]: df.memory_usage(index=False)
Out[10]:
bool                5000
complex128         80000
datetime64[ns]     40000
float64            40000
int64              40000
object             40000
timedelta64[ns]    40000
categorical        10920
dtype: int64
The memory usage displayed by the info method utilizes the memory_usage method to determine the memory usage of a dataframe, while also formatting the output in human-readable units (base-2 representation; i.e., 1KB = 1024 bytes).
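Since info uses memory_usage internally, the human-readable figure can be reproduced by hand (a minimal sketch; the string formatting here is our own, not part of pandas):

# deep=True performs the same full introspection as info(memory_usage='deep')
total_bytes = df.memory_usage(deep=True).sum()
print('{:.1f} KB'.format(total_bytes / 1024))   # base-2 units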
See also Categorical Memory Usage.
Using If/Truth Statements with pandas

pandas follows the NumPy convention of raising an error when you try to convert something to a bool. This happens in an if statement or when using the boolean operations and, or, or not. It is not clear what the result of
>>> if pd.Series([False, True, False]):
...
should be. Should it be True because it's not zero-length? False because there are False values? It is unclear, so instead, pandas raises a ValueError:
>>> if pd.Series([False, True, False]):
...     print("I was true")
Traceback
    ...
ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all().
If you see that, you need to explicitly choose what you want to do with it (e.g., use any(), all() or empty). Or, you might want to check whether the pandas object is None:
>>> if pd.Series([False, True, False]) is not None:
...     print("I was not None")
I was not None
or check whether any value is True:
>>> if pd.Series([False, True, False]).any():
...     print("I am any")
I am any
To evaluate single-element pandas objects in a boolean context, use the method .bool():
In [11]: pd.Series([True]).bool()
Out[11]: True

In [12]: pd.Series([False]).bool()
Out[12]: False

In [13]: pd.DataFrame([[True]]).bool()
Out[13]: True

In [14]: pd.DataFrame([[False]]).bool()
Out[14]: False

Bitwise boolean
Bitwise boolean operators like == and != will return a boolean Series, which is almost always what you want anyway.
>>> s = pd.Series(range(5))
>>> s == 4
0    False
1    False
2    False
3    False
4     True
dtype: bool
See boolean comparisons for more examples.
Using the in operator
Using the Python in operator on a Series tests for membership in the index, not membership among the values. If this behavior is surprising, keep in mind that using in on a Python dictionary tests keys, not values, and Series are dict-like. To test for membership in the values, use the method isin():
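For example (a small illustrative Series, made up for this sketch):

>>> s = pd.Series(range(5), index=list('abcde'))
>>> 2 in s              # tests the index, like a dict's keys
False
>>> 'b' in s
True
>>> s.isin([2]).any()   # tests the values
True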
For DataFrames, likewise, in applies to the column axis, testing for membership in the list of column names.
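A brief illustrative sketch (the DataFrame is made up for this example):

>>> df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
>>> 'a' in df   # membership among the column labels
True
>>> 1 in df     # 1 appears among the values, but is not a column label
False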
For Series and DataFrame objects, var normalizes by N-1 to produce unbiased estimates of the sample variance, while NumPy's var normalizes by N, which measures the variance of the sample. Note that cov normalizes by N-1 in both pandas and NumPy.
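A minimal sketch of the difference (the sample values are illustrative):

>>> import numpy as np
>>> import pandas as pd
>>> s = pd.Series([1.0, 2.0, 3.0, 4.0])
>>> s.var()                   # pandas: sum of squared deviations / (N - 1)
1.6666666666666667
>>> np.var(s.values)          # NumPy: sum of squared deviations / N
1.25
>>> np.var(s.values, ddof=1)  # ddof=1 reproduces the pandas result
1.6666666666666667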
Thread-safety

As of pandas 0.11, pandas is not 100% thread safe. The known issues relate to the DataFrame.copy method. If you are doing a lot of copying of DataFrame objects shared among threads, we recommend holding locks inside the threads where the data copying occurs.
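A minimal sketch of that recommendation (the worker function and lock are illustrative, not part of pandas):

import threading

import pandas as pd

df = pd.DataFrame({'a': range(10000)})
copy_lock = threading.Lock()   # guards calls to DataFrame.copy

def worker():
    # hold the lock only around the copy itself, as recommended above
    with copy_lock:
        local = df.copy()
    # work on the thread-local copy without holding the lock
    local['a'] = local['a'] * 2

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()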
See this link for more information.
Byte-Ordering Issues

Occasionally you may have to deal with data that were created on a machine with a different byte order than the one on which you are running Python. A common symptom of this issue is an error like

Traceback
    ...
ValueError: Big-endian buffer not supported on little-endian compiler
To deal with this issue you should convert the underlying NumPy array to the native system byte order before passing it to the Series/DataFrame/Panel constructors, using something similar to the following:
In [21]: x = np.array(list(range(10)), '>i4')  # big endian

In [22]: newx = x.byteswap().newbyteorder()    # force native byteorder

In [23]: s = pd.Series(newx)
See the NumPy documentation on byte order for more details.