There are two ways to store text data in pandas:

1. object-dtype NumPy array.
2. StringDtype extension type.

We recommend using StringDtype to store text data.
Prior to pandas 1.0, object dtype was the only option. This was unfortunate for many reasons:

1. You can accidentally store a mixture of strings and non-strings in an object dtype array. It's better to have a dedicated dtype.
2. object dtype breaks dtype-specific operations like DataFrame.select_dtypes(). There isn't a clear way to select just text while excluding non-text but still object-dtype columns.
3. When reading code, the contents of an object dtype array are less clear than 'string'.
Currently, the performance of object dtype arrays of strings and arrays.StringArray is about the same. We expect future enhancements to significantly increase the performance and lower the memory overhead of StringArray.
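If you want to compare the two representations on your own data, a quick check with memory_usage is one option; a minimal sketch with made-up sample data:

import pandas as pd

# compare the memory footprint of the two representations (illustrative data)
words = ["apple", "banana", "cherry"] * 100_000
obj_series = pd.Series(words, dtype="object")
str_series = pd.Series(words, dtype="string")

print(obj_series.memory_usage(deep=True))  # deep=True counts the underlying Python str objects
print(str_series.memory_usage(deep=True))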
Warning

StringArray is currently considered experimental. The implementation and parts of the API may change without warning.
For backwards-compatibility, object dtype remains the default type we infer a list of strings to. (The examples on this page assume the standard imports, import numpy as np and import pandas as pd.)

In [1]: pd.Series(["a", "b", "c"])
Out[1]:
0    a
1    b
2    c
dtype: object
To explicitly request string dtype, specify the dtype:

In [2]: pd.Series(["a", "b", "c"], dtype="string")
Out[2]:
0    a
1    b
2    c
dtype: string

In [3]: pd.Series(["a", "b", "c"], dtype=pd.StringDtype())
Out[3]:
0    a
1    b
2    c
dtype: string
Or use astype after the Series or DataFrame is created:

In [4]: s = pd.Series(["a", "b", "c"])

In [5]: s
Out[5]:
0    a
1    b
2    c
dtype: object

In [6]: s.astype("string")
Out[6]:
0    a
1    b
2    c
dtype: string
You can also use StringDtype/"string" as the dtype on non-string data and it will be converted to string dtype:

In [7]: s = pd.Series(["a", 2, np.nan], dtype="string")

In [8]: s
Out[8]:
0       a
1       2
2    <NA>
dtype: string

In [9]: type(s[1])
Out[9]: str
or convert from existing pandas data:

In [10]: s1 = pd.Series([1, 2, np.nan], dtype="Int64")

In [11]: s1
Out[11]:
0       1
1       2
2    <NA>
dtype: Int64

In [12]: s2 = s1.astype("string")

In [13]: s2
Out[13]:
0       1
1       2
2    <NA>
dtype: string

In [14]: type(s2[0])
Out[14]: str

Behavior differences
These are places where the behavior of StringDtype objects differs from object dtype.

For StringDtype, string accessor methods that return numeric output will always return a nullable integer dtype, rather than either int or float dtype depending on the presence of NA values. Methods returning boolean output will return a nullable boolean dtype.
In [15]: s = pd.Series(["a", None, "b"], dtype="string")

In [16]: s
Out[16]:
0       a
1    <NA>
2       b
dtype: string

In [17]: s.str.count("a")
Out[17]:
0       1
1    <NA>
2       0
dtype: Int64

In [18]: s.dropna().str.count("a")
Out[18]:
0    1
2    0
dtype: Int64
Both outputs are Int64 dtype. Compare that with object-dtype:

In [19]: s2 = pd.Series(["a", None, "b"], dtype="object")

In [20]: s2.str.count("a")
Out[20]:
0    1.0
1    NaN
2    0.0
dtype: float64

In [21]: s2.dropna().str.count("a")
Out[21]:
0    1
2    0
dtype: int64
When NA values are present, the output dtype is float64. Similarly for methods returning boolean values.
In [22]: s.str.isdigit()
Out[22]:
0    False
1     <NA>
2    False
dtype: boolean

In [23]: s.str.match("a")
Out[23]:
0     True
1     <NA>
2    False
dtype: boolean
Some string methods, like Series.str.decode(), are not available on StringArray because StringArray only holds strings, not bytes.
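If you do need to decode raw bytes, keep them in an object-dtype Series first and convert afterwards; a minimal sketch (variable names are illustrative):

import pandas as pd

# bytes data must live in an object-dtype Series
raw = pd.Series([b"foo", b"bar"])

# decode to str (still object dtype), then opt in to StringDtype
decoded = raw.str.decode("utf-8").astype("string")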
In comparison operations, arrays.StringArray and Series backed by a StringArray will return an object with BooleanDtype, rather than a bool dtype object. Missing values in a StringArray will propagate in comparison operations, rather than always comparing unequal like numpy.nan.
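For example, a small sketch of the comparison behavior:

import pandas as pd

s = pd.Series(["a", None, "c"], dtype="string")

# the result has BooleanDtype, and the missing value propagates
print(s == "a")
# 0     True
# 1     <NA>
# 2    False
# dtype: boolean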
Everything else that follows in the rest of this document applies equally to string and object dtype.
String methods

Series and Index are equipped with a set of string processing methods that make it easy to operate on each element of the array. Perhaps most importantly, these methods exclude missing/NA values automatically. These are accessed via the str attribute and generally have names matching the equivalent (scalar) built-in string methods:

In [24]: s = pd.Series(
   ....:     ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
   ....: )
   ....: 

In [25]: s.str.lower()
Out[25]:
0       a
1       b
2       c
3    aaba
4    baca
5    <NA>
6    caba
7     dog
8     cat
dtype: string

In [26]: s.str.upper()
Out[26]:
0       A
1       B
2       C
3    AABA
4    BACA
5    <NA>
6    CABA
7     DOG
8     CAT
dtype: string

In [27]: s.str.len()
Out[27]:
0       1
1       1
2       1
3       4
4       4
5    <NA>
6       4
7       3
8       3
dtype: Int64
In [28]: idx = pd.Index([" jack", "jill ", " jesse ", "frank"])

In [29]: idx.str.strip()
Out[29]: Index(['jack', 'jill', 'jesse', 'frank'], dtype='object')

In [30]: idx.str.lstrip()
Out[30]: Index(['jack', 'jill ', 'jesse ', 'frank'], dtype='object')

In [31]: idx.str.rstrip()
Out[31]: Index([' jack', 'jill', ' jesse', 'frank'], dtype='object')
The string methods on Index are especially useful for cleaning up or transforming DataFrame columns. For instance, you may have columns with leading or trailing whitespace:
In [32]: df = pd.DataFrame(
   ....:     np.random.randn(3, 2), columns=[" Column A ", " Column B "], index=range(3)
   ....: )
   ....: 

In [33]: df
Out[33]:
    Column A    Column B 
0    0.469112   -0.282863
1   -1.509059   -1.135632
2    1.212112   -0.173215
Since df.columns is an Index object, we can use the .str accessor:

In [34]: df.columns.str.strip()
Out[34]: Index(['Column A', 'Column B'], dtype='object')

In [35]: df.columns.str.lower()
Out[35]: Index([' column a ', ' column b '], dtype='object')
These string methods can then be used to clean up the columns as needed. Here we are removing leading and trailing whitespaces, lower casing all names, and replacing any remaining whitespaces with underscores:
In [36]: df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")

In [37]: df
Out[37]:
   column_a  column_b
0  0.469112 -0.282863
1 -1.509059 -1.135632
2  1.212112 -0.173215
Note

If you have a Series where lots of elements are repeated (i.e. the number of unique elements in the Series is a lot smaller than the length of the Series), it can be faster to convert the original Series to one of type category and then use .str.<method> or .dt.<property> on that. The performance difference comes from the fact that, for Series of type category, the string operations are done on the .categories and not on each element of the Series. A short sketch of this pattern follows below.

Please note that a Series of type category with string .categories has some limitations in comparison to Series of type string (e.g. you can't add strings to each other: s + " " + s won't work if s is a Series of type category). Also, .str methods which operate on elements of type list are not available on such a Series.
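A minimal sketch of the pattern described in the note (the data and sizes are illustrative):

import pandas as pd

# many repeated values: few unique strings, many rows
s = pd.Series(["low", "medium", "high"] * 100_000)
s_cat = s.astype("category")

# the string operation is applied to the 3 categories, not the 300,000 rows
upper = s_cat.str.upper()

# but some string behavior is lost, e.g. element-wise concatenation:
# s_cat + "_x"  # raises TypeError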
Warning

The type of the Series is inferred, and the allowed types (i.e. strings) are enforced.

Generally speaking, the .str accessor is intended to work only on strings. With very few exceptions, other uses are not supported, and may be disabled at a later point.
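For instance, using .str on numeric data raises an error (the exact message may vary across pandas versions):

import pandas as pd

pd.Series([1, 2, 3]).str.upper()
# AttributeError: Can only use .str accessor with string values!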
Methods like split return a Series of lists:

In [38]: s2 = pd.Series(["a_b_c", "c_d_e", np.nan, "f_g_h"], dtype="string")

In [39]: s2.str.split("_")
Out[39]:
0    [a, b, c]
1    [c, d, e]
2         <NA>
3    [f, g, h]
dtype: object
Elements in the split lists can be accessed using get or [] notation:

In [40]: s2.str.split("_").str.get(1)
Out[40]:
0       b
1       d
2    <NA>
3       g
dtype: object

In [41]: s2.str.split("_").str[1]
Out[41]:
0       b
1       d
2    <NA>
3       g
dtype: object
It is easy to expand this to return a DataFrame using expand.

In [42]: s2.str.split("_", expand=True)
Out[42]:
      0     1     2
0     a     b     c
1     c     d     e
2  <NA>  <NA>  <NA>
3     f     g     h
When the original Series has StringDtype, the output columns will all be StringDtype as well.
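You can verify this by inspecting the dtypes of the expanded result; a small illustrative check:

import pandas as pd

s2 = pd.Series(["a_b_c", "c_d_e", None, "f_g_h"], dtype="string")
print(s2.str.split("_", expand=True).dtypes)  # each column has string dtype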
It is also possible to limit the number of splits:
In [43]: s2.str.split("_", expand=True, n=1)
Out[43]:
      0     1
0     a   b_c
1     c   d_e
2  <NA>  <NA>
3     f   g_h
rsplit is similar to split except it works in the reverse direction, i.e., from the end of the string to the beginning of the string:

In [44]: s2.str.rsplit("_", expand=True, n=1)
Out[44]:
      0     1
0   a_b     c
1   c_d     e
2  <NA>  <NA>
3   f_g     h
replace optionally uses regular expressions:

In [45]: s3 = pd.Series(
   ....:     ["A", "B", "C", "Aaba", "Baca", "", np.nan, "CABA", "dog", "cat"],
   ....:     dtype="string",
   ....: )
   ....: 

In [46]: s3
Out[46]:
0       A
1       B
2       C
3    Aaba
4    Baca
5        
6    <NA>
7    CABA
8     dog
9     cat
dtype: string

In [47]: s3.str.replace("^.a|dog", "XX-XX ", case=False, regex=True)
Out[47]:
0           A
1           B
2           C
3    XX-XX ba
4    XX-XX ca
5            
6        <NA>
7    XX-XX BA
8      XX-XX 
9     XX-XX t
dtype: string
Changed in version 2.0.

A single-character pattern with regex=True will also be treated as a regular expression:

In [48]: s4 = pd.Series(["a.b", ".", "b", np.nan, ""], dtype="string")

In [49]: s4
Out[49]:
0     a.b
1       .
2       b
3    <NA>
4        
dtype: string

In [50]: s4.str.replace(".", "a", regex=True)
Out[50]:
0     aaa
1       a
2       a
3    <NA>
4        
dtype: string
If you want literal replacement of a string (equivalent to str.replace()), you can set the optional regex parameter to False, rather than escaping each character. In this case both pat and repl must be strings:

In [51]: dollars = pd.Series(["12", "-$10", "$10,000"], dtype="string")

# These lines are equivalent
In [52]: dollars.str.replace(r"-\$", "-", regex=True)
Out[52]:
0         12
1        -10
2    $10,000
dtype: string

In [53]: dollars.str.replace("-$", "-", regex=False)
Out[53]:
0         12
1        -10
2    $10,000
dtype: string
The replace method can also take a callable as replacement. It is called on every match of pat using re.sub(). The callable should expect one positional argument (a regex match object) and return a string.

# Reverse every lowercase alphabetic word
In [54]: pat = r"[a-z]+"

In [55]: def repl(m):
   ....:     return m.group(0)[::-1]
   ....: 

In [56]: pd.Series(["foo 123", "bar baz", np.nan], dtype="string").str.replace(
   ....:     pat, repl, regex=True
   ....: )
   ....: 
Out[56]:
0    oof 123
1    rab zab
2       <NA>
dtype: string

# Using regex groups
In [57]: pat = r"(?P<one>\w+) (?P<two>\w+) (?P<three>\w+)"

In [58]: def repl(m):
   ....:     return m.group("two").swapcase()
   ....: 

In [59]: pd.Series(["Foo Bar Baz", np.nan], dtype="string").str.replace(
   ....:     pat, repl, regex=True
   ....: )
   ....: 
Out[59]:
0     bAR
1    <NA>
dtype: string
The replace method also accepts a compiled regular expression object from re.compile() as a pattern. All flags should be included in the compiled regular expression object.

In [60]: import re

In [61]: regex_pat = re.compile(r"^.a|dog", flags=re.IGNORECASE)

In [62]: s3.str.replace(regex_pat, "XX-XX ", regex=True)
Out[62]:
0           A
1           B
2           C
3    XX-XX ba
4    XX-XX ca
5            
6        <NA>
7    XX-XX BA
8      XX-XX 
9     XX-XX t
dtype: string
Including a flags argument when calling replace with a compiled regular expression object will raise a ValueError.

In [63]: s3.str.replace(regex_pat, "XX-XX ", flags=re.IGNORECASE)
---------------------------------------------------------------------------
ValueError: case and flags cannot be set when pat is a compiled regex
removeprefix and removesuffix have the same effect as str.removeprefix and str.removesuffix added in Python 3.9 (https://docs.python.org/3/library/stdtypes.html#str.removeprefix):

Added in version 1.4.0.

In [64]: s = pd.Series(["str_foo", "str_bar", "no_prefix"])

In [65]: s.str.removeprefix("str_")
Out[65]:
0          foo
1          bar
2    no_prefix
dtype: object

In [66]: s = pd.Series(["foo_str", "bar_str", "no_suffix"])

In [67]: s.str.removesuffix("_str")
Out[67]:
0          foo
1          bar
2    no_suffix
dtype: object

Concatenation
There are several ways to concatenate a Series or Index, either with itself or others, all based on cat() (or Index.str.cat for an Index).
The content of a Series (or Index) can be concatenated:

In [68]: s = pd.Series(["a", "b", "c", "d"], dtype="string")

In [69]: s.str.cat(sep=",")
Out[69]: 'a,b,c,d'
If not specified, the keyword sep for the separator defaults to the empty string, sep='':

In [70]: s.str.cat()
Out[70]: 'abcd'
By default, missing values are ignored. Using na_rep, they can be given a representation:

In [71]: t = pd.Series(["a", "b", np.nan, "d"], dtype="string")

In [72]: t.str.cat(sep=",")
Out[72]: 'a,b,d'

In [73]: t.str.cat(sep=",", na_rep="-")
Out[73]: 'a,b,-,d'

Concatenating a Series and something list-like into a Series
The first argument to cat() can be a list-like object, provided that it matches the length of the calling Series (or Index).

In [74]: s.str.cat(["A", "B", "C", "D"])
Out[74]:
0    aA
1    bB
2    cC
3    dD
dtype: string
Missing values on either side will result in missing values in the result as well, unless na_rep is specified:

In [75]: s.str.cat(t)
Out[75]:
0      aa
1      bb
2    <NA>
3      dd
dtype: string

In [76]: s.str.cat(t, na_rep="-")
Out[76]:
0    aa
1    bb
2    c-
3    dd
dtype: string

Concatenating a Series and something array-like into a Series
The parameter others can also be two-dimensional. In this case, the number of rows must match the length of the calling Series (or Index).

In [77]: d = pd.concat([t, s], axis=1)

In [78]: s
Out[78]:
0    a
1    b
2    c
3    d
dtype: string

In [79]: d
Out[79]:
      0  1
0     a  a
1     b  b
2  <NA>  c
3     d  d

In [80]: s.str.cat(d, na_rep="-")
Out[80]:
0    aaa
1    bbb
2    c-c
3    ddd
dtype: string

Concatenating a Series and an indexed object into a Series, with alignment
For concatenation with a Series or DataFrame, it is possible to align the indexes before concatenation by setting the join keyword.

In [81]: u = pd.Series(["b", "d", "a", "c"], index=[1, 3, 0, 2], dtype="string")

In [82]: s
Out[82]:
0    a
1    b
2    c
3    d
dtype: string

In [83]: u
Out[83]:
1    b
3    d
0    a
2    c
dtype: string

In [84]: s.str.cat(u)
Out[84]:
0    aa
1    bb
2    cc
3    dd
dtype: string

In [85]: s.str.cat(u, join="left")
Out[85]:
0    aa
1    bb
2    cc
3    dd
dtype: string
The usual options are available for join (one of 'left', 'outer', 'inner', 'right'). In particular, alignment also means that the lengths of the two objects no longer need to coincide.

In [86]: v = pd.Series(["z", "a", "b", "d", "e"], index=[-1, 0, 1, 3, 4], dtype="string")

In [87]: s
Out[87]:
0    a
1    b
2    c
3    d
dtype: string

In [88]: v
Out[88]:
-1    z
 0    a
 1    b
 3    d
 4    e
dtype: string

In [89]: s.str.cat(v, join="left", na_rep="-")
Out[89]:
0    aa
1    bb
2    c-
3    dd
dtype: string

In [90]: s.str.cat(v, join="outer", na_rep="-")
Out[90]:
-1    -z
 0    aa
 1    bb
 2    c-
 3    dd
 4    -e
dtype: string
The same alignment can be used when others is a DataFrame:

In [91]: f = d.loc[[3, 2, 1, 0], :]

In [92]: s
Out[92]:
0    a
1    b
2    c
3    d
dtype: string

In [93]: f
Out[93]:
      0  1
3     d  d
2  <NA>  c
1     b  b
0     a  a

In [94]: s.str.cat(f, join="left", na_rep="-")
Out[94]:
0    aaa
1    bbb
2    c-c
3    ddd
dtype: string

Concatenating a Series and many objects into a Series
Several array-like items (specifically: Series, Index, and 1-dimensional variants of np.ndarray) can be combined in a list-like container (including iterators, dict-views, etc.).

In [95]: s
Out[95]:
0    a
1    b
2    c
3    d
dtype: string

In [96]: u
Out[96]:
1    b
3    d
0    a
2    c
dtype: string

In [97]: s.str.cat([u, u.to_numpy()], join="left")
Out[97]:
0    aab
1    bbd
2    cca
3    ddc
dtype: string
All elements without an index (e.g. np.ndarray) within the passed list-like must match in length to the calling Series (or Index), but Series and Index may have arbitrary length (as long as alignment is not disabled with join=None):

In [98]: v
Out[98]:
-1    z
 0    a
 1    b
 3    d
 4    e
dtype: string

In [99]: s.str.cat([v, u, u.to_numpy()], join="outer", na_rep="-")
Out[99]:
-1    -z--
 0    aaab
 1    bbbd
 2    c-ca
 3    dddc
 4    -e--
dtype: string
If using join='right' on a list-like of others that contains different indexes, the union of these indexes will be used as the basis for the final concatenation:

In [100]: u.loc[[3]]
Out[100]:
3    d
dtype: string

In [101]: v.loc[[-1, 0]]
Out[101]:
-1    z
 0    a
dtype: string

In [102]: s.str.cat([u.loc[[3]], v.loc[[-1, 0]]], join="right", na_rep="-")
Out[102]:
 3    dd-
-1    --z
 0    a-a
dtype: string

Indexing with .str
You can use [] notation to directly index by position locations. If you index past the end of the string, the result will be a NaN.

In [103]: s = pd.Series(
   .....:     ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
   .....: )
   .....: 

In [104]: s.str[0]
Out[104]:
0       A
1       B
2       C
3       A
4       B
5    <NA>
6       C
7       d
8       c
dtype: string

In [105]: s.str[1]
Out[105]:
0    <NA>
1    <NA>
2    <NA>
3       a
4       a
5    <NA>
6       A
7       o
8       a
dtype: string

Testing for strings that match or contain a pattern
You can check whether elements contain a pattern:
In [131]: pattern = r"[0-9][a-z]"

In [132]: pd.Series(
   .....:     ["1", "2", "3a", "3b", "03c", "4dx"],
   .....:     dtype="string",
   .....: ).str.contains(pattern)
   .....: 
Out[132]:
0    False
1    False
2     True
3     True
4     True
5     True
dtype: boolean
Or whether elements match a pattern:
In [133]: pd.Series(
   .....:     ["1", "2", "3a", "3b", "03c", "4dx"],
   .....:     dtype="string",
   .....: ).str.match(pattern)
   .....: 
Out[133]:
0    False
1    False
2     True
3     True
4    False
5     True
dtype: boolean

And whether elements match a pattern in full:

In [134]: pd.Series(
   .....:     ["1", "2", "3a", "3b", "03c", "4dx"],
   .....:     dtype="string",
   .....: ).str.fullmatch(pattern)
   .....: 
Out[134]:
0    False
1    False
2     True
3     True
4    False
5    False
dtype: boolean
Note

The distinction between match, fullmatch, and contains is strictness: fullmatch tests whether the entire string matches the regular expression; match tests whether there is a match of the regular expression that begins at the first character of the string; and contains tests whether there is a match of the regular expression at any position within the string.

The corresponding functions in the re package for these three match modes are re.fullmatch, re.match, and re.search, respectively.
Methods like match, fullmatch, contains, startswith, and endswith take an extra na argument so missing values can be considered True or False:

In [135]: s4 = pd.Series(
   .....:     ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
   .....: )
   .....: 

In [136]: s4.str.contains("A", na=False)
Out[136]:
0     True
1    False
2    False
3     True
4    False
5    False
6     True
7    False
8    False
dtype: boolean

Creating indicator variables
You can extract dummy variables from string columns. For example if they are separated by a '|':

In [137]: s = pd.Series(["a", "a|b", np.nan, "a|c"], dtype="string")

In [138]: s.str.get_dummies(sep="|")
Out[138]:
   a  b  c
0  1  0  0
1  1  1  0
2  0  0  0
3  1  0  1
A string Index also supports get_dummies, which returns a MultiIndex.

In [139]: idx = pd.Index(["a", "a|b", np.nan, "a|c"])

In [140]: idx.str.get_dummies(sep="|")
Out[140]:
MultiIndex([(1, 0, 0),
            (1, 1, 0),
            (0, 0, 0),
            (1, 0, 1)],
           names=['a', 'b', 'c'])
See also get_dummies().
Method summary

Method            Description
cat()             Concatenate strings
split()           Split strings on delimiter
rsplit()          Split strings on delimiter working from the end of the string
get()             Index into each element (retrieve i-th element)
join()            Join strings in each element of the Series with passed separator
get_dummies()     Split strings on the delimiter returning DataFrame of dummy variables
contains()        Return boolean array if each string contains pattern/regex
replace()         Replace occurrences of pattern/regex/string with some other string or the return value of a callable given the occurrence
removeprefix()    Remove prefix from string, i.e. only remove if string starts with prefix
removesuffix()    Remove suffix from string, i.e. only remove if string ends with suffix
repeat()          Duplicate values (s.str.repeat(3) equivalent to x * 3)
pad()             Add whitespace to the sides of strings
center()          Equivalent to str.center
ljust()           Equivalent to str.ljust
rjust()           Equivalent to str.rjust
zfill()           Equivalent to str.zfill
wrap()            Split long strings into lines with length less than a given width
slice()           Slice each string in the Series
slice_replace()   Replace slice in each string with passed value
count()           Count occurrences of pattern
startswith()      Equivalent to str.startswith(pat) for each element
endswith()        Equivalent to str.endswith(pat) for each element
findall()         Compute list of all occurrences of pattern/regex for each string
match()           Call re.match on each element, returning matched groups as list
extract()         Call re.search on each element, returning DataFrame with one row for each element and one column for each regex capture group
extractall()      Call re.findall on each element, returning DataFrame with one row for each match and one column for each regex capture group
len()             Compute string lengths
strip()           Equivalent to str.strip
rstrip()          Equivalent to str.rstrip
lstrip()          Equivalent to str.lstrip
partition()       Equivalent to str.partition
rpartition()      Equivalent to str.rpartition
lower()           Equivalent to str.lower
casefold()        Equivalent to str.casefold
upper()           Equivalent to str.upper
find()            Equivalent to str.find
rfind()           Equivalent to str.rfind
index()           Equivalent to str.index
rindex()          Equivalent to str.rindex
capitalize()      Equivalent to str.capitalize
swapcase()        Equivalent to str.swapcase
normalize()       Return Unicode normal form. Equivalent to unicodedata.normalize
translate()       Equivalent to str.translate
isalnum()         Equivalent to str.isalnum
isalpha()         Equivalent to str.isalpha
isdigit()         Equivalent to str.isdigit
isspace()         Equivalent to str.isspace
islower()         Equivalent to str.islower
isupper()         Equivalent to str.isupper
istitle()         Equivalent to str.istitle
isnumeric()       Equivalent to str.isnumeric
isdecimal()       Equivalent to str.isdecimal