New in version 0.15.
Note
While pandas.Categorical existed in earlier versions, the ability to use categorical data in a Series or DataFrame is new.
This is an introduction to the pandas categorical data type, including a short comparison with R’s factor.
Categoricals are a pandas data type that corresponds to categorical variables in statistics: a variable that can take on only a limited, and usually fixed, number of possible values (categories; levels in R). Examples are gender, social class, blood type, country affiliation, observation time or ratings via Likert scales.
In contrast to statistical categorical variables, categorical data might have an order (e.g. ‘strongly agree’ vs ‘agree’ or ‘first observation’ vs. ‘second observation’), but numerical operations (additions, divisions, ...) are not possible.
All values of categorical data are either in categories or np.nan. Order is defined by the order of categories, not lexical order of the values. Internally, the data structure consists of a categories array and an integer array of codes which point to the real value in the categories array.
The categorical data type is useful in the following cases:
- A string variable consisting of only a few different values; converting such a variable to a categorical can save memory (see the memory usage discussion below).
- The lexical order of a variable is not the same as the logical order (e.g. "one", "two", "three"); by converting to a categorical and setting an order on the categories, sorting and min/max will use the logical order instead of the lexical order.
- As a signal to other Python libraries that this column should be treated as a categorical variable (e.g. to use suitable statistical methods or plot types).
See also the API docs on categoricals.
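As a quick, hedged illustration of the memory saving and the logical ordering (the column contents and sizes here are made up for the example):

import numpy as np
from pandas import Series

# a column with only a few distinct values, repeated many times
s = Series(np.random.choice(["low", "medium", "high"], size=10000))

# as object dtype every row holds a reference to a Python string; as a
# categorical, every row holds a small integer code plus one shared copy
# of each category, which is usually much smaller
cat = s.astype("category")
print(s.nbytes, cat.nbytes)

# the categories can be set to the logical (non-lexical) order so that
# sorting, min and max use "low" < "medium" < "high"
cat = cat.cat.set_categories(["low", "medium", "high"])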
Object Creation¶
Categorical Series or columns in a DataFrame can be created in several ways:
By specifying dtype="category" when constructing a Series:
In [1]: s = Series(["a","b","c","a"], dtype="category")

In [2]: s
Out[2]: 
0    a
1    b
2    c
3    a
dtype: category
Categories (3, object): [a < b < c]
By converting an existing Series or column to a category dtype:
In [3]: df = DataFrame({"A":["a","b","c","a"]})

In [4]: df["B"] = df["A"].astype('category')

In [5]: df
Out[5]: 
   A  B
0  a  a
1  b  b
2  c  c
3  a  a
By using some special functions:
In [6]: df = DataFrame({'value': np.random.randint(0, 100, 20)})

In [7]: labels = [ "{0} - {1}".format(i, i + 9) for i in range(0, 100, 10) ]

In [8]: df['group'] = pd.cut(df.value, range(0, 105, 10), right=False, labels=labels)

In [9]: df.head(10)
Out[9]: 
   value    group
0     65  60 - 69
1     49  40 - 49
2     56  50 - 59
3     43  40 - 49
4     43  40 - 49
5     91  90 - 99
6     32  30 - 39
7     87  80 - 89
8     36  30 - 39
9      8    0 - 9
See documentation for cut().
By passing a pandas.Categorical object to a Series or assigning it to a DataFrame. This is the only way to specify differently ordered categories (or no order at all) at creation time and the only reason to use pandas.Categorical directly:
In [10]: raw_cat = Categorical(["a","b","c","a"], categories=["b","c","d"],
   ....:                       ordered=False)
   ....: 

In [11]: s = Series(raw_cat)

In [12]: s
Out[12]: 
0    NaN
1      b
2      c
3    NaN
dtype: category
Categories (3, object): [b, c, d]

In [13]: df = DataFrame({"A":["a","b","c","a"]})

In [14]: df["B"] = raw_cat

In [15]: df
Out[15]: 
   A    B
0  a  NaN
1  b    b
2  c    c
3  a  NaN
Categorical data has a specific category dtype:
In [16]: df.dtypes
Out[16]: 
A      object
B    category
dtype: object
Note
In contrast to R’s factor function, categorical data does not convert input values to strings; the categories will end up the same data type as the original values.
Note
In contrast to R’s factor function, there is currently no way to assign/change labels at creation time. Use categories to change the categories after creation.
To get back to the original Series or numpy array, use Series.astype(original_dtype) or np.asarray(categorical):
In [17]: s = Series(["a","b","c","a"])

In [18]: s
Out[18]: 
0    a
1    b
2    c
3    a
dtype: object

In [19]: s2 = s.astype('category')

In [20]: s2
Out[20]: 
0    a
1    b
2    c
3    a
dtype: category
Categories (3, object): [a < b < c]

In [21]: s3 = s2.astype('string')

In [22]: s3
Out[22]: 
0    a
1    b
2    c
3    a
dtype: object

In [23]: np.asarray(s2)
Out[23]: array(['a', 'b', 'c', 'a'], dtype=object)
If you already have codes and categories, you can use the from_codes() constructor to save the factorize step of the normal constructor:
In [24]: splitter = np.random.choice([0,1], 5, p=[0.5,0.5])

In [25]: s = Series(Categorical.from_codes(splitter, categories=["train", "test"]))

Description¶
Using .describe() on categorical data will produce similar output to a Series or DataFrame of type string.
In [26]: cat = Categorical(["a","c","c",np.nan], categories=["b","a","c",np.nan] )

In [27]: df = DataFrame({"cat":cat, "s":["a","c","c",np.nan]})

In [28]: df.describe()
Out[28]: 
       cat  s
count    3  3
unique   3  2
top      c  c
freq     2  2

In [29]: df["cat"].describe()
Out[29]: 
count     3
unique    3
top       c
freq      2
Name: cat, dtype: object

Working with categories¶
Categorical data has a categories and an ordered property, which list their possible values and whether the ordering matters or not. These properties are exposed as s.cat.categories and s.cat.ordered. If you don’t manually specify categories and ordering, they are inferred from the passed-in values.
In [30]: s = Series(["a","b","c","a"], dtype="category")

In [31]: s.cat.categories
Out[31]: Index([u'a', u'b', u'c'], dtype='object')

In [32]: s.cat.ordered
Out[32]: True
It’s also possible to pass in the categories in a specific order:
In [33]: s = Series(Categorical(["a","b","c","a"], categories=["c","b","a"]))

In [34]: s.cat.categories
Out[34]: Index([u'c', u'b', u'a'], dtype='object')

In [35]: s.cat.ordered
Out[35]: True
Note
New categorical data is automatically ordered if the passed-in values are sortable or a categories argument is supplied. This differs from R’s factors, which are unordered unless explicitly told to be ordered (ordered=TRUE). You can of course overwrite that by passing in an explicit ordered=False.
Renaming categories¶
Renaming categories is done by assigning new values to the Series.cat.categories property or by using the Categorical.rename_categories() method:
In [36]: s = Series(["a","b","c","a"], dtype="category")

In [37]: s
Out[37]: 
0    a
1    b
2    c
3    a
dtype: category
Categories (3, object): [a < b < c]

In [38]: s.cat.categories = ["Group %s" % g for g in s.cat.categories]

In [39]: s
Out[39]: 
0    Group a
1    Group b
2    Group c
3    Group a
dtype: category
Categories (3, object): [Group a < Group b < Group c]

In [40]: s.cat.rename_categories([1,2,3])
Out[40]: 
0    1
1    2
2    3
3    1
dtype: category
Categories (3, int64): [1 < 2 < 3]
Note
In contrast to R’s factor, categorical data can have categories of types other than string.
Note
Be aware that assigning new categories is an in-place operation, while most other operations under Series.cat return a new Series of dtype category by default.
Categories must be unique or a ValueError is raised:
In [41]: try:
   ....:     s.cat.categories = [1,1,1]
   ....: except ValueError as e:
   ....:     print("ValueError: " + str(e))
   ....: 
ValueError: Categorical categories must be unique

Appending new categories¶
Appending categories can be done by using the Categorical.add_categories() method:
In [42]: s = s.cat.add_categories([4])

In [43]: s.cat.categories
Out[43]: Index([u'Group a', u'Group b', u'Group c', 4], dtype='object')

In [44]: s
Out[44]: 
0    Group a
1    Group b
2    Group c
3    Group a
dtype: category
Categories (4, object): [Group a < Group b < Group c < 4]

Removing categories¶
Removing categories can be done by using the Categorical.remove_categories() method. Values which are removed are replaced by np.nan:
In [45]: s = s.cat.remove_categories([4])

In [46]: s
Out[46]: 
0    Group a
1    Group b
2    Group c
3    Group a
dtype: category
Categories (3, object): [Group a < Group b < Group c]

Removing unused categories¶
Removing unused categories can also be done:
In [47]: s = Series(Categorical(["a","b","a"], categories=["a","b","c","d"]))

In [48]: s
Out[48]: 
0    a
1    b
2    a
dtype: category
Categories (4, object): [a < b < c < d]

In [49]: s.cat.remove_unused_categories()
Out[49]: 
0    a
1    b
2    a
dtype: category
Categories (2, object): [a < b]

Setting categories¶
If you want to remove and add new categories in one step (which has some speed advantage), or simply set the categories to a predefined scale, use Categorical.set_categories().
In [50]: s = Series(["one","two","four", "-"], dtype="category")

In [51]: s
Out[51]: 
0     one
1     two
2    four
3       -
dtype: category
Categories (4, object): [- < four < one < two]

In [52]: s = s.cat.set_categories(["one","two","three","four"])

In [53]: s
Out[53]: 
0     one
1     two
2    four
3     NaN
dtype: category
Categories (4, object): [one < two < three < four]
Note
Be aware that Categorical.set_categories() cannot know whether a category was omitted intentionally or because it is misspelled or (under Python 3) because of a type difference (e.g., numpy's S1 dtype vs. Python strings). This can result in surprising behaviour!
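For example (a hypothetical rating scale with a misspelled category name; a sketch of the failure mode the note describes, not pandas-documented behaviour beyond what is shown above):

from pandas import Series

s = Series(["low", "high", "high", "low"], dtype="category")

# "high" is accidentally spelled "hi" in the new scale; set_categories()
# cannot tell this apart from an intentional removal, so every "high"
# value silently becomes NaN
s2 = s.cat.set_categories(["low", "hi", "medium"])
print(s2.isnull())   # True wherever the category was dropped by accident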
Ordered or not...¶
If categorical data is ordered (s.cat.ordered == True), then the order of the categories has a meaning and certain operations are possible. If the categorical is unordered, a TypeError is raised.
In [54]: s = Series(Categorical(["a","b","c","a"], ordered=False))

In [55]: try:
   ....:     s.sort()
   ....: except TypeError as e:
   ....:     print("TypeError: " + str(e))
   ....: 
TypeError: Categorical not ordered

In [56]: s = Series(["a","b","c","a"], dtype="category")  # ordered per default!

In [57]: s.sort()

In [58]: s
Out[58]: 
0    a
3    a
1    b
2    c
dtype: category
Categories (3, object): [a < b < c]

In [59]: s.min(), s.max()
Out[59]: ('a', 'c')
Sorting will use the order defined by the categories, not any lexical order of the values. This is true even for strings and numeric data:
In [60]: s = Series([1,2,3,1], dtype="category")

In [61]: s.cat.categories = [2,3,1]

In [62]: s
Out[62]: 
0    2
1    3
2    1
3    2
dtype: category
Categories (3, int64): [2 < 3 < 1]

In [63]: s.sort()

In [64]: s
Out[64]: 
0    2
3    2
1    3
2    1
dtype: category
Categories (3, int64): [2 < 3 < 1]

In [65]: s.min(), s.max()
Out[65]: (2, 1)
Reordering the categories is possible via the Categorical.reorder_categories() and the Categorical.set_categories() methods. For Categorical.reorder_categories(), all old categories must be included in the new categories and no new categories are allowed.
In [66]: s = Series([1,2,3,1], dtype="category")

In [67]: s = s.cat.reorder_categories([2,3,1])

In [68]: s
Out[68]: 
0    1
1    2
2    3
3    1
dtype: category
Categories (3, int64): [2 < 3 < 1]

In [69]: s.sort()

In [70]: s
Out[70]: 
1    2
2    3
0    1
3    1
dtype: category
Categories (3, int64): [2 < 3 < 1]

In [71]: s.min(), s.max()
Out[71]: (2, 1)
Note
Note the difference between assigning new categories and reordering the categories: the first renames categories and therefore the individual values in the Series, but if the first position was sorted last, the renamed value will still be sorted last. Reordering means that the way values are sorted is different afterwards, but the individual values in the Series are not changed.
Note
If the Categorical is not ordered, Series.min() and Series.max() will raise a TypeError. Numeric operations like +, -, *, / and operations based on them (e.g. Series.median(), which would need to compute the mean between two values if the length of an array is even) do not work and raise a TypeError.
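A small sketch of the distinction made in the note above, using values and categories taken from the examples in this section:

from pandas import Series, Categorical

s = Series(Categorical([1, 2, 3], categories=[3, 2, 1]))

# renaming maps each category to a new label: the values themselves change,
# but their sort positions do not (the value that sorted first still does)
renamed = s.cat.rename_categories(["c", "b", "a"])   # order is c < b < a

# reordering keeps the values untouched and only changes which category
# sorts first
reordered = s.cat.reorder_categories([1, 2, 3])      # order is 1 < 2 < 3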
Comparisons¶
Comparing Categoricals with other objects is possible in two cases:
- comparing a categorical Series to another categorical Series, when categories and ordered are the same, or
- comparing a categorical Series to a scalar.
All other comparisons will raise a TypeError.
In [72]: cat = Series(Categorical([1,2,3], categories=[3,2,1]))

In [73]: cat_base = Series(Categorical([2,2,2], categories=[3,2,1]))

In [74]: cat_base2 = Series(Categorical([2,2,2]))

In [75]: cat
Out[75]: 
0    1
1    2
2    3
dtype: category
Categories (3, int64): [3 < 2 < 1]

In [76]: cat_base
Out[76]: 
0    2
1    2
2    2
dtype: category
Categories (3, int64): [3 < 2 < 1]

In [77]: cat_base2
Out[77]: 
0    2
1    2
2    2
dtype: category
Categories (1, int64): [2]
Comparing to a categorical with the same categories and ordering or to a scalar works:
In [78]: cat > cat_base
Out[78]: 
0     True
1    False
2    False
dtype: bool

In [79]: cat > 2
Out[79]: 
0    False
1    False
2     True
dtype: bool
This doesn’t work because the categories are not the same:
In [80]: try:
   ....:     cat > cat_base2
   ....: except TypeError as e:
   ....:     print("TypeError: " + str(e))
   ....: 
TypeError: Categoricals can only be compared if 'categories' are the same
Note
Comparisons with a Series, np.array or a Categorical with different categories or ordering will raise a TypeError, because a custom categories ordering could be interpreted in two ways: one taking the ordering into account and one without. If you want to compare a categorical Series with such an object, you need to be explicit and convert the categorical data back to the original values:
In [81]: base = np.array([1,2,3])

In [82]: try:
   ....:     cat > base
   ....: except TypeError as e:
   ....:     print("TypeError: " + str(e))
   ....: 
TypeError: Cannot compare a Categorical for op <built-in function gt> with type <type 'numpy.ndarray'>. If you want to compare values, use 'series <op> np.asarray(cat)'.

In [83]: np.asarray(cat) > base
Out[83]: array([False, False, False], dtype=bool)

Operations¶
Apart from Series.min(), Series.max() and Series.mode(), the following operations are possible with categorical data:
Series methods like Series.value_counts() will use all categories, even if some categories are not present in the data:
In [84]: s = Series(Categorical(["a","b","c","c"], categories=["c","a","b","d"]))

In [85]: s.value_counts()
Out[85]: 
c    2
b    1
a    1
d    0
dtype: int64
Groupby will also show “unused” categories:
In [86]: cats = Categorical(["a","b","b","b","c","c","c"], categories=["a","b","c","d"])

In [87]: df = DataFrame({"cats":cats,"values":[1,2,2,2,3,4,5]})

In [88]: df.groupby("cats").mean()
Out[88]: 
      values
cats        
a          1
b          2
c          4
d        NaN

In [89]: cats2 = Categorical(["a","a","b","b"], categories=["a","b","c"])

In [90]: df2 = DataFrame({"cats":cats2,"B":["c","d","c","d"], "values":[1,2,3,4]})

In [91]: df2.groupby(["cats","B"]).mean()
Out[91]: 
        values
cats B        
a    c       1
     d       2
b    c       3
     d       4
c    c     NaN
     d     NaN
Pivot tables:
In [92]: raw_cat = Categorical(["a","a","b","b"], categories=["a","b","c"])

In [93]: df = DataFrame({"A":raw_cat,"B":["c","d","c","d"], "values":[1,2,3,4]})

In [94]: pd.pivot_table(df, values='values', index=['A', 'B'])
Out[94]: 
A  B
a  c      1
   d      2
b  c      3
   d      4
c  c    NaN
   d    NaN
Name: values, dtype: float64

Data munging¶
The optimized pandas data access methods .loc, .iloc, .ix, .at and .iat work as normal; the only differences are the return type (for getting) and that only values already in categories can be assigned.
Getting¶
If the slicing operation returns either a DataFrame or a column of type Series, the category dtype is preserved.
In [95]: idx = Index(["h","i","j","k","l","m","n",])

In [96]: cats = Series(["a","b","b","b","c","c","c"], dtype="category", index=idx)

In [97]: values= [1,2,2,2,3,4,5]

In [98]: df = DataFrame({"cats":cats,"values":values}, index=idx)

In [99]: df.iloc[2:4,:]
Out[99]: 
  cats  values
j    b       2
k    b       2

In [100]: df.iloc[2:4,:].dtypes
Out[100]: 
cats      category
values       int64
dtype: object

In [101]: df.loc["h":"j","cats"]
Out[101]: 
h    a
i    b
j    b
Name: cats, dtype: category
Categories (3, object): [a < b < c]

In [102]: df.ix["h":"j",0:1]
Out[102]: 
  cats
h    a
i    b
j    b

In [103]: df[df["cats"] == "b"]
Out[103]: 
  cats  values
i    b       2
j    b       2
k    b       2
An example where the category type is not preserved is if you take one single row: the resulting Series is of dtype object:
# get the complete "h" row as a Series
In [104]: df.loc["h", :]
Out[104]: 
cats      a
values    1
Name: h, dtype: object
Returning a single item from categorical data will also return the value, not a categorical of length “1”.
In [105]: df.iat[0,0]
Out[105]: 'a'

In [106]: df["cats"].cat.categories = ["x","y","z"]

In [107]: df.at["h","cats"]  # returns a string
Out[107]: 'x'
Note
This is a difference to R’s factor function, where factor(c(1,2,3))[1] returns a single value factor.
To get a single-value Series of type category, pass in a list with a single value:
In [108]: df.loc[["h"],"cats"]
Out[108]: 
h    x
Name: cats, dtype: category
Categories (3, object): [x < y < z]

Setting¶
Setting values in a categorical column (or Series) works as long as the value is included in the categories:
In [109]: idx = Index(["h","i","j","k","l","m","n"])

In [110]: cats = Categorical(["a","a","a","a","a","a","a"], categories=["a","b"])

In [111]: values = [1,1,1,1,1,1,1]

In [112]: df = DataFrame({"cats":cats,"values":values}, index=idx)

In [113]: df.iloc[2:4,:] = [["b",2],["b",2]]

In [114]: df
Out[114]: 
  cats  values
h    a       1
i    a       1
j    b       2
k    b       2
l    a       1
m    a       1
n    a       1

In [115]: try:
   .....:     df.iloc[2:4,:] = [["c",3],["c",3]]
   .....: except ValueError as e:
   .....:     print("ValueError: " + str(e))
   .....: 
ValueError: cannot setitem on a Categorical with a new category, set the categories first
Setting values by assigning categorical data will also check that the categories match:
In [116]: df.loc["j":"k","cats"] = Categorical(["a","a"], categories=["a","b"])

In [117]: df
Out[117]: 
  cats  values
h    a       1
i    a       1
j    a       2
k    a       2
l    a       1
m    a       1
n    a       1

In [118]: try:
   .....:     df.loc["j":"k","cats"] = Categorical(["b","b"], categories=["a","b","c"])
   .....: except ValueError as e:
   .....:     print("ValueError: " + str(e))
   .....: 
ValueError: Cannot set a Categorical with another, without identical categories
Assigning a Categorical to parts of a column of other types will use the values:
In [119]: df = DataFrame({"a":[1,1,1,1,1], "b":["a","a","a","a","a"]})

In [120]: df.loc[1:2,"a"] = Categorical(["b","b"], categories=["a","b"])

In [121]: df.loc[2:3,"b"] = Categorical(["b","b"], categories=["a","b"])

In [122]: df
Out[122]: 
   a  b
0  1  a
1  b  a
2  b  b
3  1  b
4  1  a

In [123]: df.dtypes
Out[123]: 
a    object
b    object
dtype: object

Merging¶
You can concat two DataFrames containing categorical data together, but the categories of these categoricals need to be the same:
In [124]: cat = Series(["a","b"], dtype="category")

In [125]: vals = [1,2]

In [126]: df = DataFrame({"cats":cat, "vals":vals})

In [127]: res = pd.concat([df,df])

In [128]: res
Out[128]: 
  cats  vals
0    a     1
1    b     2
0    a     1
1    b     2

In [129]: res.dtypes
Out[129]: 
cats    category
vals       int64
dtype: object
In this case the categories are not the same and so an error is raised:
In [130]: df_different = df.copy()

In [131]: df_different["cats"].cat.categories = ["c","d"]

In [132]: try:
   .....:     pd.concat([df,df_different])
   .....: except ValueError as e:
   .....:     print("ValueError: " + str(e))
   .....: 
ValueError: incompatible levels in categorical block merge
The same applies to df.append(df_different).
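A minimal sketch of the same behaviour with append (df and df_different are rebuilt here so the snippet is self-contained; it mirrors the concat example above):

import pandas as pd
from pandas import DataFrame, Series

df = DataFrame({"cats": Series(["a", "b"], dtype="category"), "vals": [1, 2]})
df_different = df.copy()
df_different["cats"].cat.categories = ["c", "d"]

res = df.append(df)              # works: identical categories
try:
    df.append(df_different)     # raises: the categories differ
except ValueError as e:
    print("ValueError: " + str(e))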
Getting Data In/Out¶
Writing data (Series, DataFrames) to an HDF store that contains a category dtype will currently raise NotImplementedError.
Writing to a CSV file will convert the data, effectively removing any information about the categorical (categories and ordering). So if you read the CSV file back, you have to convert the relevant columns back to category and assign the right categories and category ordering.
In [133]: s = Series(Categorical(['a', 'b', 'b', 'a', 'a', 'd']))

# rename the categories
In [134]: s.cat.categories = ["very good", "good", "bad"]

# reorder the categories and add missing categories
In [135]: s = s.cat.set_categories(["very bad", "bad", "medium", "good", "very good"])

In [136]: df = DataFrame({"cats":s, "vals":[1,2,3,4,5,6]})

In [137]: csv = StringIO()

In [138]: df.to_csv(csv)

In [139]: df2 = pd.read_csv(StringIO(csv.getvalue()))

In [140]: df2.dtypes
Out[140]: 
Unnamed: 0     int64
cats          object
vals           int64
dtype: object

In [141]: df2["cats"]
Out[141]: 
0    very good
1         good
2         good
3    very good
4    very good
5          bad
Name: cats, dtype: object

# Redo the category
In [142]: df2["cats"] = df2["cats"].astype("category")

In [143]: df2["cats"].cat.set_categories(["very bad", "bad", "medium", "good", "very good"],
   .....:                                inplace=True)
   .....: 

In [144]: df2.dtypes
Out[144]: 
Unnamed: 0       int64
cats          category
vals             int64
dtype: object

In [145]: df2["cats"]
Out[145]: 
0    very good
1         good
2         good
3    very good
4    very good
5          bad
Name: cats, dtype: category
Categories (5, object): [very bad < bad < medium < good < very good]

Missing Data¶
pandas primarily uses the value np.nan to represent missing data. It is by default not included in computations. See the Missing Data section.
There are two ways a np.nan can be represented in categorical data: either the value is not available (“missing value”) or np.nan is a valid category.
In [146]: s = Series(["a","b",np.nan,"a"], dtype="category")

# only two categories
In [147]: s
Out[147]: 
0      a
1      b
2    NaN
3      a
dtype: category
Categories (2, object): [a < b]

In [148]: s2 = Series(["a","b","c","a"], dtype="category")

In [149]: s2.cat.categories = [1,2,np.nan]

# three categories, np.nan included
In [150]: s2
Out[150]: 
0      1
1      2
2    NaN
3      1
dtype: category
Categories (3, object): [1 < 2 < NaN]
Note
As integer Series can’t include NaN, the categories were converted to object.
Note
Missing value methods like isnull and fillna will take both missing values as well as np.nan categories into account:
In [151]: c = Series(["a","b",np.nan], dtype="category")

In [152]: c.cat.set_categories(["a","b",np.nan], inplace=True)

# will be inserted as a NA category:
In [153]: c[0] = np.nan

In [154]: s = Series(c)

In [155]: s
Out[155]: 
0    NaN
1      b
2    NaN
dtype: category
Categories (3, object): [a < b < NaN]

In [156]: pd.isnull(s)
Out[156]: 
0     True
1    False
2     True
dtype: bool

In [157]: s.fillna("a")
Out[157]: 
0    a
1    b
2    a
dtype: category
Categories (3, object): [a < b < NaN]

Differences to R’s factor¶
The following differences to R’s factor functions can be observed (summarizing the notes above):
- R’s levels are named categories in pandas.
- R’s levels are always of type string, while categories can be of any dtype.
- Input values are not converted to strings, and labels cannot be assigned at creation time; use the categories property or rename_categories() after creation.
- New categorical data is automatically ordered if the values are sortable or a categories argument is supplied, whereas R’s factors are unordered unless ordered=TRUE is given.
- Indexing a single element returns a plain value, not a length-one factor as factor(c(1,2,3))[1] does in R.
Memory Usage¶
The memory usage of a Categorical is proportional to the number of categories plus the length of the data. In contrast, an object dtype is a constant times the length of the data.
In [158]: s = Series(['foo','bar']*1000)

# object dtype
In [159]: s.nbytes
Out[159]: 8000

# category dtype
In [160]: s.astype('category').nbytes
Out[160]: 2008
Note
If the number of categories approaches the length of the data, the Categorical will use nearly the same amount of memory as (or more than) an equivalent object dtype representation.
In [161]: s = Series(['foo%04d' % i for i in range(2000)])

# object dtype
In [162]: s.nbytes
Out[162]: 8000

# category dtype
In [163]: s.astype('category').nbytes
Out[163]: 12000

Old style constructor usage¶
In versions of pandas earlier than 0.15, a Categorical could be constructed by passing in precomputed codes (then called labels) instead of values with categories. The codes were interpreted as pointers to the categories, with -1 as NaN. This style of constructor usage is replaced by the special constructor Categorical.from_codes().
Unfortunately, in some special cases, code which assumes the old-style constructor usage will still run with the current pandas version, resulting in subtle bugs:
>>> cat = Categorical([1,2], [1,2,3])
>>> # old version
>>> cat.get_values()
array([2, 3], dtype=int64)
>>> # new version
>>> cat.get_values()
array([1, 2], dtype=int64)
Warning
If you used Categoricals with older versions of pandas, please audit your code before upgrading and change your code to use the from_codes() constructor.
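A sketch of such a migration (the exact old call site will vary in your code; the new call states explicitly that codes are being passed):

from pandas import Categorical

# old-style usage: the first argument was interpreted as codes ("labels")
# pointing into the second argument, so Categorical([0, 1, 1], [1, 2, 3])
# used to mean the values [1, 2, 2]

# current equivalent with from_codes()
cat = Categorical.from_codes([0, 1, 1], categories=[1, 2, 3])
print(cat)   # [1, 2, 2] with categories [1, 2, 3]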
Categorical is not a numpy array¶
Currently, categorical data and the underlying Categorical are implemented as a Python object and not as a low-level numpy array dtype. This leads to some problems.
numpy itself doesn’t know about the new dtype:
In [164]: try:
   .....:     np.dtype("category")
   .....: except TypeError as e:
   .....:     print("TypeError: " + str(e))
   .....: 
TypeError: data type "category" not understood

In [165]: dtype = Categorical(["a"]).dtype

In [166]: try:
   .....:     np.dtype(dtype)
   .....: except TypeError as e:
   .....:     print("TypeError: " + str(e))
   .....: 
TypeError: data type not understood
Dtype comparisons work:
In [167]: dtype == np.str_
Out[167]: False

In [168]: np.str_ == dtype
Out[168]: False
Using numpy functions on a Series of type category should not work as Categoricals are not numeric data (even in the case that .categories is numeric).
In [169]: s = Series(Categorical([1,2,3,4]))

In [170]: try:
   .....:     np.sum(s)
   .....: except TypeError as e:
   .....:     print("TypeError: " + str(e))
   .....: 
TypeError: Categorical cannot perform the operation sum

dtype in apply¶
Pandas currently does not preserve the dtype in apply functions: if you apply along rows you get a Series of object dtype (same as getting a row -> getting one element will return a basic type), and applying along columns will also convert to object.
In [171]: df = DataFrame({"a":[1,2,3,4],
   .....:                 "b":["a","b","c","d"],
   .....:                 "cats":Categorical([1,2,3,2])})
   .....: 

In [172]: df.apply(lambda row: type(row["cats"]), axis=1)
Out[172]: 
0    <type 'long'>
1    <type 'long'>
2    <type 'long'>
3    <type 'long'>
dtype: object

In [173]: df.apply(lambda col: col.dtype, axis=0)
Out[173]: 
a       object
b       object
cats    object
dtype: object

No Categorical Index¶
There is currently no index of type category, so setting the index to a categorical column will convert the categorical data to a “normal” dtype first and therefore remove any custom ordering of the categories:
In [174]: cats = Categorical([1,2,3,4], categories=[4,2,3,1])

In [175]: strings = ["a","b","c","d"]

In [176]: values = [4,2,3,1]

In [177]: df = DataFrame({"strings":strings, "values":values}, index=cats)

In [178]: df.index
Out[178]: Int64Index([1, 2, 3, 4], dtype='int64')

# This should sort by categories but does not as there is no CategoricalIndex!
In [179]: df.sort_index()
Out[179]: 
  strings  values
1       a       4
2       b       2
3       c       3
4       d       1

Side Effects¶
Constructing a Series from a Categorical will not copy the input Categorical. This means that changes to the Series will in most cases change the original Categorical:
In [180]: cat = Categorical([1,2,3,10], categories=[1,2,3,4,10])

In [181]: s = Series(cat, name="cat")

In [182]: cat
Out[182]: 
[1, 2, 3, 10]
Categories (5, int64): [1 < 2 < 3 < 4 < 10]

In [183]: s.iloc[0:2] = 10

In [184]: cat
Out[184]: 
[10, 10, 3, 10]
Categories (5, int64): [1 < 2 < 3 < 4 < 10]

In [185]: df = DataFrame(s)

In [186]: df["cat"].cat.categories = [1,2,3,4,5]

In [187]: cat
Out[187]: 
[5, 5, 3, 5]
Categories (5, int64): [1 < 2 < 3 < 4 < 5]
Use copy=True to prevent such behaviour, or simply don’t reuse Categoricals:
In [188]: cat = Categorical([1,2,3,10], categories=[1,2,3,4,10])

In [189]: s = Series(cat, name="cat", copy=True)

In [190]: cat
Out[190]: 
[1, 2, 3, 10]
Categories (5, int64): [1 < 2 < 3 < 4 < 10]

In [191]: s.iloc[0:2] = 10

In [192]: cat
Out[192]: 
[1, 2, 3, 10]
Categories (5, int64): [1 < 2 < 3 < 4 < 10]
Note
This also happens in some cases when you supply a numpy array instead of a Categorical: using an int array (e.g. np.array([1,2,3,4])) will exhibit the same behaviour, while using a string array (e.g. np.array(["a","b","c","a"])) will not.
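A small sketch of what the note means (exact copy semantics can differ between pandas versions, so treat this as illustrative only):

import numpy as np
from pandas import Series

arr = np.array([1, 2, 3, 4])
s = Series(arr)          # the int array is typically not copied
s.iloc[0] = 10
print(arr)               # may now show [10, 2, 3, 4]

strings = np.array(["a", "b", "c", "a"])
s2 = Series(strings)     # string data is converted to object dtype, i.e. copied
s2.iloc[0] = "z"
print(strings)           # unchanged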