Index objects are not required to be unique; you can have duplicate row or column labels. This may be a bit confusing at first. If you're familiar with SQL, you know that row labels are similar to a primary key on a table, and you would never want duplicates in a SQL table. But one of pandas' roles is to clean messy, real-world data before it goes to some downstream system. And real-world data has duplicates, even in fields that are supposed to be unique.
This section describes how duplicate labels change the behavior of certain operations, how to prevent duplicates from arising during operations, and how to detect them if they do.
In [1]: import pandas as pd

In [2]: import numpy as np

Consequences of Duplicate Labels#
Some pandas methods (Series.reindex() for example) just don't work with duplicates present. The output can't be determined, and so pandas raises an error.
In [3]: s1 = pd.Series([0, 1, 2], index=["a", "b", "b"])

In [4]: s1.reindex(["a", "b", "c"])
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
...
ValueError: cannot reindex on an axis with duplicate labels
Other methods, like indexing, can give very surprising results. Typically indexing with a scalar will reduce dimensionality. Slicing a DataFrame with a scalar will return a Series. Slicing a Series with a scalar will return a scalar. But with duplicates, this isn't the case.
In [5]: df1 = pd.DataFrame([[0, 1, 2], [3, 4, 5]], columns=["A", "A", "B"])

In [6]: df1
Out[6]: 
   A  A  B
0  0  1  2
1  3  4  5
We have duplicates in the columns. If we slice 'B', we get back a Series:
In [7]: df1["B"]  # a series
Out[7]: 
0    2
1    5
Name: B, dtype: int64
But slicing 'A' returns a DataFrame:
In [8]: df1["A"]  # a DataFrame
Out[8]: 
   A  A
0  0  1
1  3  4
This applies to row labels as well:
In [9]: df2 = pd.DataFrame({"A": [0, 1, 2]}, index=["a", "a", "b"])

In [10]: df2
Out[10]: 
   A
a  0
a  1
b  2

In [11]: df2.loc["b", "A"]  # a scalar
Out[11]: np.int64(2)

In [12]: df2.loc["a", "A"]  # a Series
Out[12]: 
a    0
a    1
Name: A, dtype: int64

Duplicate Label Detection#
You can check whether an Index (storing the row or column labels) is unique with Index.is_unique:
In [13]: df2
Out[13]: 
   A
a  0
a  1
b  2

In [14]: df2.index.is_unique
Out[14]: False

In [15]: df2.columns.is_unique
Out[15]: True
Note
Checking whether an index is unique is somewhat expensive for large datasets. pandas does cache this result, so re-checking on the same index is very fast.
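As an illustrative sketch (the index below is made up purely for demonstration; the exact size doesn't matter), the first call to Index.is_unique does the scan, while repeated calls on the same Index return the cached answer:

>>> import numpy as np
>>> import pandas as pd
>>> idx = pd.Index(np.random.randint(0, 1_000_000, size=10_000_000))  # guaranteed to contain repeats
>>> idx.is_unique   # first check scans the labels
False
>>> idx.is_unique   # re-checking hits the cached result and returns immediately
False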
Index.duplicated() will return a boolean ndarray indicating whether a label is repeated.
In [16]: df2.index.duplicated()
Out[16]: array([False,  True, False])
This can be used as a boolean filter to drop duplicate rows:
In [17]: df2.loc[~df2.index.duplicated(), :]
Out[17]: 
   A
a  0
b  2
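The filter above keeps the first occurrence of each label. If you want a different policy, Index.duplicated() also accepts a keep argument ('first' by default, 'last', or False to mark every occurrence). A short sketch keeping the last occurrence instead:

>>> df2.loc[~df2.index.duplicated(keep="last"), :]
   A
a  1
b  2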
If you need additional logic to handle duplicate labels, rather than just dropping the repeats, using groupby() on the index is a common trick. For example, we'll resolve duplicates by taking the average of all rows with the same label.
In [18]: df2.groupby(level=0).mean()
Out[18]: 
     A
a  0.5
b  2.0
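Any reduction works here. For instance, keeping only the first row for each label (the same idea used in the pipeline sketch later in this section):

>>> df2.groupby(level=0).first()
   A
a  0
b  2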
Disallowing Duplicate Labels#

Added in version 1.2.0.
As noted above, handling duplicates is an important feature when reading in raw data. That said, you may want to avoid introducing duplicates as part of a data processing pipeline (from methods like pandas.concat(), rename(), etc.). Both Series and DataFrame can disallow duplicate labels by calling .set_flags(allows_duplicate_labels=False) (the default is to allow them). If there are duplicate labels, an exception will be raised.
In [19]: pd.Series([0, 1, 2], index=["a", "b", "b"]).set_flags(allows_duplicate_labels=False)
---------------------------------------------------------------------------
DuplicateLabelError                       Traceback (most recent call last)
...
DuplicateLabelError: Index has duplicates.
      positions
label
b        [1, 2]
This applies to both row and column labels for a DataFrame:
In [20]: pd.DataFrame([[0, 1, 2], [3, 4, 5]], columns=["A", "B", "C"]).set_flags(
   ....:     allows_duplicate_labels=False
   ....: )
   ....: 
Out[20]: 
   A  B  C
0  0  1  2
1  3  4  5
This attribute can be checked or set with allows_duplicate_labels, which indicates whether that object can have duplicate labels:
In [21]: df = pd.DataFrame({"A": [0, 1, 2, 3]}, index=["x", "y", "X", "Y"]).set_flags(
   ....:     allows_duplicate_labels=False
   ....: )
   ....: 

In [22]: df
Out[22]: 
   A
x  0
y  1
X  2
Y  3

In [23]: df.flags.allows_duplicate_labels
Out[23]: False
DataFrame.set_flags() can be used to return a new DataFrame with attributes like allows_duplicate_labels set to some value:
In [24]: df2 = df.set_flags(allows_duplicate_labels=True)

In [25]: df2.flags.allows_duplicate_labels
Out[25]: True
The new DataFrame returned is a view on the same data as the old DataFrame. Or the property can be set directly on the same object:
In [26]: df2.flags.allows_duplicate_labels = False

In [27]: df2.flags.allows_duplicate_labels
Out[27]: False
When processing raw, messy data you might initially read in the messy data (which potentially has duplicate labels), deduplicate, and then disallow duplicates going forward, to ensure that your data pipeline doesn't introduce duplicates.
>>> raw = pd.read_csv("...")
>>> deduplicated = raw.groupby(level=0).first()  # remove duplicates
>>> deduplicated.flags.allows_duplicate_labels = False  # disallow going forward
Setting allows_duplicate_labels=False on a Series or DataFrame with duplicate labels, or performing an operation that introduces duplicate labels on a Series or DataFrame that disallows duplicates, will raise an errors.DuplicateLabelError.
In [28]: df.rename(str.upper)
---------------------------------------------------------------------------
DuplicateLabelError                       Traceback (most recent call last)
...
DuplicateLabelError: Index has duplicates.
      positions
label
X        [0, 2]
Y        [1, 3]
This error message contains the labels that are duplicated, and the numeric positions of all the duplicates (including the "original") in the Series or DataFrame.
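If a pipeline should recover from this rather than crash, the exception can be caught as pandas.errors.DuplicateLabelError. A minimal sketch, where the fallback (deduplicating with groupby before disallowing duplicates) is just one possible strategy:

>>> s = pd.Series([0, 1, 2], index=["a", "b", "b"])
>>> try:
...     s = s.set_flags(allows_duplicate_labels=False)
... except pd.errors.DuplicateLabelError:
...     # deduplicate first, then disallow duplicates going forward
...     s = s.groupby(level=0).first().set_flags(allows_duplicate_labels=False)
...
>>> s
a    0
b    1
dtype: int64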
In general, disallowing duplicates is "sticky". It's preserved through operations.
In [29]: s1 = pd.Series(0, index=["a", "b"]).set_flags(allows_duplicate_labels=False)

In [30]: s1
Out[30]: 
a    0
b    0
dtype: int64

In [31]: s1.head().rename({"a": "b"})
---------------------------------------------------------------------------
DuplicateLabelError                       Traceback (most recent call last)
...
DuplicateLabelError: Index has duplicates.
      positions
label
b        [0, 1]
Warning
This is an experimental feature. Currently, many methods fail to propagate the allows_duplicate_labels value. In future versions it is expected that every method taking or returning one or more DataFrame or Series objects will propagate allows_duplicate_labels.