The N-dimensional array (ndarray)#
An ndarray is a (usually fixed-size) multidimensional container of items of the same type and size. The number of dimensions and items in an array is defined by its shape, which is a tuple of N non-negative integers that specify the sizes of each dimension. The type of items in the array is specified by a separate data-type object (dtype), one of which is associated with each ndarray.
As with other container objects in Python, the contents of an ndarray can be accessed and modified by indexing or slicing the array (using, for example, N integers), and via the methods and attributes of the ndarray.
Different ndarrays can share the same data, so that changes made in one ndarray may be visible in another. That is, an ndarray can be a “view” of another ndarray, and the data it refers to is managed by the “base” ndarray. ndarrays can also be views to memory owned by Python strings or objects implementing the memoryview or array interfaces.
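For example, a view created by slicing keeps a reference to the array that owns the memory in its base attribute (a minimal sketch; the values are arbitrary):
>>> import numpy as np
>>> x = np.array([[1, 2, 3], [4, 5, 6]])
>>> y = x[:, 1]          # y is a view into the data owned by x
>>> y.base is x
True
>>> x.base is None       # x owns its own memory
True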
Example
A 2-dimensional array of size 2 x 3, composed of 4-byte integer elements:
>>> x = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
>>> type(x)
<class 'numpy.ndarray'>
>>> x.shape
(2, 3)
>>> x.dtype
dtype('int32')
The array can be indexed using Python container-like syntax:
>>> # The element of x in the *second* row, *third* column, namely, 6.
>>> x[1, 2]
6
For example, slicing can produce views of the array:
>>> y = x[:, 1]
>>> y
array([2, 5], dtype=int32)
>>> y[0] = 9  # this also changes the corresponding element in x
>>> y
array([9, 5], dtype=int32)
>>> x
array([[1, 9, 3],
       [4, 5, 6]], dtype=int32)
Constructing arrays#
New arrays can be constructed using the routines detailed in Array creation routines, and also by using the low-level ndarray constructor:
Indexing arrays#
Arrays can be indexed using an extended Python slicing syntax, array[selection]. Similar syntax is also used for accessing fields in a structured data type.
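A short sketch of both forms of selection, using a hypothetical structured dtype with fields 'x' and 'y':
>>> import numpy as np
>>> a = np.zeros(4, dtype=[('x', np.float64), ('y', np.int32)])
>>> a['x'] = [0.0, 1.0, 2.0, 3.0]   # field access uses the same bracket syntax
>>> a[1:3]                          # an ordinary slice selection
array([(1., 0), (2., 0)], dtype=[('x', '<f8'), ('y', '<i4')])
>>> a['x'][::2]                     # field selection combined with a slice
array([0., 2.])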
Internal memory layout of an ndarray#
An instance of class ndarray consists of a contiguous one-dimensional segment of computer memory (owned by the array, or by some other object), combined with an indexing scheme that maps N integers into the location of an item in the block. The ranges in which the indices can vary are specified by the shape of the array. How many bytes each item takes and how the bytes are interpreted are defined by the data-type object associated with the array.
A segment of memory is inherently 1-dimensional, and there are many different schemes for arranging the items of an N-dimensional array in a 1-dimensional block. NumPy is flexible, and ndarray objects can accommodate any strided indexing scheme. In a strided scheme, the N-dimensional index \((n_0, n_1, ..., n_{N-1})\) corresponds to the offset (in bytes):
\[n_{\mathrm{offset}} = \sum_{k=0}^{N-1} s_k n_k\]
from the beginning of the memory block associated with the array. Here, \(s_k\) are integers which specify the strides of the array. The column-major order (used, for example, in the Fortran language and in Matlab) and row-major order (used in C) schemes are just specific kinds of strided scheme, and correspond to memory that can be addressed by the strides:
\[s_k^{\mathrm{column}} = \mathrm{itemsize} \prod_{j=0}^{k-1} d_j , \quad s_k^{\mathrm{row}} = \mathrm{itemsize} \prod_{j=k+1}^{N-1} d_j .\]
where \(d_j\) = self.shape[j].
Both the C and Fortran orders are contiguous, i.e., single-segment, memory layouts, in which every part of the memory block can be accessed by some combination of the indices.
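The strides of a concrete array can be checked against this formula directly; a small sketch (int32 elements, so the itemsize is 4 bytes):
>>> import numpy as np
>>> x = np.arange(24, dtype=np.int32).reshape(2, 3, 4)    # row-major (C) order by default
>>> x.strides                      # itemsize times the products of the trailing dimensions
(48, 16, 4)
>>> np.asfortranarray(x).strides   # itemsize times the products of the leading dimensions
(4, 8, 24)
>>> sum(s * n for s, n in zip(x.strides, (1, 2, 3)))   # byte offset of element (1, 2, 3)
92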
Note
Contiguous arrays and single-segment arrays are synonymous and are used interchangeably throughout the documentation.
Even though a C-style or Fortran-style contiguous array (one that has the corresponding flags set) can be addressed with the above strides, its actual strides may differ. This can happen in two cases:
1. If self.shape[k] == 1 then for any legal index index[k] == 0. This means that in the formula for the offset \(n_k = 0\) and thus \(s_k n_k = 0\) and the value of \(s_k\) = self.strides[k] is arbitrary.
2. If an array has no elements (self.size == 0) there is no legal index and the strides are never used. Any array with no elements may be considered C-style and Fortran-style contiguous.
Point 1. means that self and self.squeeze() always have the same contiguity and aligned flags value. This also means that even a high dimensional array could be C-style and Fortran-style contiguous at the same time.
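This can be checked through the flags attribute; a short sketch with length-1 dimensions (assuming a recent NumPy, where contiguity checks ignore the strides of length-1 dimensions):
>>> import numpy as np
>>> a = np.ones((1, 3, 1))
>>> a.flags['C_CONTIGUOUS'], a.flags['F_CONTIGUOUS']
(True, True)
>>> a.squeeze().flags['C_CONTIGUOUS'], a.squeeze().flags['F_CONTIGUOUS']
(True, True)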
An array is considered aligned if the memory offsets for all elements and the base offset itself are multiples of self.itemsize. Understanding memory alignment leads to better performance on most hardware.
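The aligned flag and itemsize can be inspected directly; in the sketch below, a view that starts at an odd byte offset is typically reported as unaligned (the exact outcome depends on the allocator and hardware):
>>> import numpy as np
>>> x = np.zeros(4, dtype=np.float64)
>>> x.flags['ALIGNED'], x.itemsize
(True, 8)
>>> buf = np.zeros(33, dtype=np.uint8)
>>> y = buf[1:].view(np.float64)   # data pointer starts 1 byte into the buffer
>>> y.flags['ALIGNED']
False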
Warning
It does not generally hold that self.strides[-1] == self.itemsize for C-style contiguous arrays, or that self.strides[0] == self.itemsize for Fortran-style contiguous arrays.
Data in new ndarrays is in the row-major (C) order, unless otherwise specified, but, for example, basic array slicing often produces views in a different scheme.
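For instance, slicing with a step produces a view whose strides no longer describe a single contiguous segment (a small sketch with int64 elements):
>>> import numpy as np
>>> x = np.arange(12, dtype=np.int64).reshape(3, 4)
>>> x.strides, x.flags['C_CONTIGUOUS']
((32, 8), True)
>>> y = x[:, ::2]                 # every other column, still a view
>>> y.strides, y.flags['C_CONTIGUOUS']
((32, 16), False)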
Note
Several algorithms in NumPy work on arbitrarily strided arrays. However, some algorithms require single-segment arrays. When an irregularly strided array is passed in to such algorithms, a copy is automatically made.
Array attributes#
Array attributes reflect information that is intrinsic to the array itself. Generally, accessing an array through its attributes allows you to get and sometimes set intrinsic properties of the array without creating a new array. The exposed attributes are the core parts of an array and only some of them can be reset meaningfully without creating a new array. Information on each attribute is given below.
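For example, shape can be read and, when the change does not require copying data, assigned in place (a brief sketch):
>>> import numpy as np
>>> x = np.arange(6, dtype=np.int64)
>>> x.shape, x.ndim, x.size, x.itemsize
((6,), 1, 6, 8)
>>> x.shape = (2, 3)      # reshapes in place, without creating a new array
>>> x
array([[0, 1, 2],
       [3, 4, 5]])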
Memory layout#
The following attributes contain information about the memory layout of the array:
Data type#
The data type object associated with the array can be found in the dtype attribute:
ctypes foreign function interface#
Array methods#
An ndarray object has many methods which operate on or with the array in some fashion, typically returning an array result. These methods are briefly explained below. (Each method’s docstring has a more complete description.)
For the following methods there are also corresponding functions in numpy: all, any, argmax, argmin, argpartition, argsort, choose, clip, compress, copy, cumprod, cumsum, diagonal, imag, max, mean, min, nonzero, partition, prod, put, ravel, real, repeat, reshape, round, searchsorted, sort, squeeze, std, sum, swapaxes, take, trace, transpose, var.
For reshape, resize, and transpose, the single tuple argument may be replaced with n integers which will be interpreted as an n-tuple.
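For example, the following two calls are equivalent (a trivial sketch):
>>> import numpy as np
>>> a = np.arange(6)
>>> np.array_equal(a.reshape((2, 3)), a.reshape(2, 3))
True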
For array methods that take an axis keyword, it defaults to None. If axis is None, then the array is treated as a 1-D array. Any other value for axis represents the dimension along which the operation should proceed.
Calculation#
Many of these methods take an argument named axis. In such cases,
If axis is None (the default), the array is treated as a 1-D array and the operation is performed over the entire array. This behavior is also the default if self is a 0-dimensional array or array scalar. (An array scalar is an instance of the types/classes float32, float64, etc., whereas a 0-dimensional array is an ndarray instance containing precisely one array scalar.)
If axis is an integer, then the operation is done over the given axis (for each 1-D subarray that can be created along the given axis).
Example of the axis argument
A 3-dimensional array of size 3 x 3 x 3, summed over each of its three axes:
>>> x = np.arange(27).reshape((3, 3, 3))
>>> x
array([[[ 0,  1,  2],
        [ 3,  4,  5],
        [ 6,  7,  8]],

       [[ 9, 10, 11],
        [12, 13, 14],
        [15, 16, 17]],

       [[18, 19, 20],
        [21, 22, 23],
        [24, 25, 26]]])
>>> x.sum(axis=0)
array([[27, 30, 33],
       [36, 39, 42],
       [45, 48, 51]])
>>> # for sum, axis is the first keyword, so we may omit it,
>>> # specifying only its value
>>> x.sum(0), x.sum(1), x.sum(2)
(array([[27, 30, 33],
       [36, 39, 42],
       [45, 48, 51]]), array([[ 9, 12, 15],
       [36, 39, 42],
       [63, 66, 69]]), array([[ 3, 12, 21],
       [30, 39, 48],
       [57, 66, 75]]))
The parameter dtype specifies the data type over which a reduction operation (like summing) should take place. The default reduce data type is the same as the data type of self. To avoid overflow, it can be useful to perform the reduction using a larger data type.
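A small sketch: summing many float16 ones overflows the float16 accumulator, while requesting a wider reduce data type does not:
>>> import numpy as np
>>> a = np.ones(100000, dtype=np.float16)
>>> float(a.sum())                    # accumulated in float16, which overflows above 65504
inf
>>> float(a.sum(dtype=np.float32))    # accumulate in a wider type instead
100000.0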
For several methods, an optional out argument can also be provided and the result will be placed into the output array given. The out argument must be an ndarray and have the same number of elements. It can have a different data type in which case casting will be performed.
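A brief sketch using a reduction with an out array of a different (but safely castable) data type:
>>> import numpy as np
>>> a = np.arange(6, dtype=np.float64).reshape(2, 3)
>>> out = np.empty(2, dtype=np.float32)   # result is cast from float64 to float32
>>> a.sum(axis=1, out=out)
array([ 3., 12.], dtype=float32)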
Arithmetic and comparison operations on ndarrays are defined as element-wise operations, and generally yield ndarray objects as results.
Each of the arithmetic operations (+, -, *, /, //, %, divmod(), ** or pow(), <<, >>, &, ^, |, ~) and the comparisons (==, <, >, <=, >=, !=) is equivalent to the corresponding universal function (or ufunc for short) in NumPy. For more information, see the section on Universal Functions.
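For example, the operator form and the corresponding ufunc produce identical results (a small sketch):
>>> import numpy as np
>>> x = np.array([1, 2, 3])
>>> y = np.array([4, 5, 6])
>>> np.array_equal(x + y, np.add(x, y))
True
>>> np.array_equal(x < y, np.less(x, y))
True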
Comparison operators:
Truth value of an array (bool()):
Note
Truth-value testing of an array invokes ndarray.__bool__, which raises an error if the number of elements in the array is not 1, because the truth value of such arrays is ambiguous. Use .any() and .all() instead to be clear about what is meant in such cases. (If you wish to check whether an array is empty, use for example .size > 0.)
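A short sketch of the ambiguity and the recommended alternatives:
>>> import numpy as np
>>> a = np.array([True, False, True])
>>> bool(a)
Traceback (most recent call last):
    ...
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
>>> bool(a.any()), bool(a.all())
(True, False)
>>> a.size > 0        # the unambiguous emptiness check
True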
Unary operations:
Arithmetic:
Note
Any third argument to pow is silently ignored, as the underlying ufunc takes only two arguments.
Because ndarray is a built-in type (written in C), the __r{op}__ special methods are not directly defined.
The functions called to implement many arithmetic special methods for arrays can be modified using __array_ufunc__.
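For instance, a class that sets __array_ufunc__ = None opts out of NumPy's ufunc handling, so the reflected Python special methods get a chance to run (a minimal sketch; the Wrapper class is hypothetical):
>>> import numpy as np
>>> class Wrapper:
...     __array_ufunc__ = None            # tell ndarray not to handle ufuncs involving us
...     def __init__(self, values):
...         self.values = np.asarray(values)
...     def __radd__(self, other):
...         # reached because ndarray.__add__ returns NotImplemented here
...         return Wrapper(self.values + other)
>>> res = np.array([1.0, 2.0]) + Wrapper([10.0, 20.0])
>>> res.values
array([11., 22.])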
Arithmetic, in-place:
Warning
In-place operations will perform the calculation using the precision decided by the data type of the two operands, but will silently downcast the result (if necessary) so it can fit back into the array. Therefore, for mixed precision calculations, A {op}= B can be different than A = A {op} B. For example, suppose a = ones((3,3)). Then, a += 3j is different than a = a + 3j: while they both perform the same computation, a += 3j casts the result to fit back in a, whereas a = a + 3j re-binds the name a to the result.
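An analogous sketch using float32 and float64 arrays (the dtypes here are chosen purely for illustration):
>>> import numpy as np
>>> a = np.ones(3, dtype=np.float32)
>>> b = np.full(3, np.pi)            # float64
>>> a += b                           # computed in float64, silently cast back to float32
>>> a.dtype
dtype('float32')
>>> a = np.ones(3, dtype=np.float32)
>>> a = a + b                        # re-binds a to a new float64 result
>>> a.dtype
dtype('float64')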
Matrix Multiplication:
Note
Matrix operators @ and @= were introduced in Python 3.5 following PEP 465, and the @ operator was introduced in NumPy 1.10.0. Further information can be found in the matmul documentation.
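A minimal sketch of the operator and its functional equivalent:
>>> import numpy as np
>>> A = np.array([[1, 0], [0, 1]])
>>> B = np.array([[4, 1], [2, 2]])
>>> np.array_equal(A @ B, np.matmul(A, B))
True
>>> A @ B
array([[4, 1],
       [2, 2]])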
For standard library functions:
Basic customization:
Container customization: (see Indexing)
Conversion; the operations int(), float() and complex(). They work only on arrays that have exactly one element and return the appropriate scalar.
String representations:
Utility method for typing: