NLLLoss — PyTorch 2.8 documentation

The negative log likelihood loss. It is useful for training a classification problem with $C$ classes.

If provided, the optional argument weight should be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set.
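As an illustration (a sketch, not from the original page), one common choice for an unbalanced training set is inverse-frequency weighting; the class counts below are made up:

>>> import torch
>>> import torch.nn as nn
>>> counts = torch.tensor([900., 90., 10.])          # hypothetical class frequencies
>>> weight = counts.sum() / (len(counts) * counts)   # inverse-frequency weights
>>> loss_fn = nn.NLLLoss(weight=weight)              # rare class 2 now counts most
>>> log_probs = torch.log_softmax(torch.randn(4, 3), dim=1)
>>> target = torch.tensor([0, 2, 1, 2])
>>> loss = loss_fn(log_probs, target)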

The input given through a forward call is expected to contain log-probabilities of each class. input has to be a Tensor of size either $(\text{minibatch}, C)$ or $(\text{minibatch}, C, d_1, d_2, ..., d_K)$ with $K \geq 1$ for the $K$-dimensional case. The latter is useful for higher-dimensional inputs, such as computing NLL loss per-pixel for 2D images.
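A minimal sketch of the $K$-dimensional shape contract (here $K = 2$; the sizes are arbitrary, and a fuller per-pixel example appears under Examples below):

>>> import torch
>>> import torch.nn as nn
>>> log_probs = torch.log_softmax(torch.randn(2, 5, 8, 8), dim=1)  # (minibatch, C, d1, d2)
>>> target = torch.randint(0, 5, (2, 8, 8))                        # (minibatch, d1, d2)
>>> nn.NLLLoss()(log_probs, target).shape                          # 'mean' reduces to a scalar
torch.Size([])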

Obtaining log-probabilities in a neural network is easily achieved by adding a LogSoftmax layer as the last layer of your network. You may use CrossEntropyLoss instead if you prefer not to add an extra layer.
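To make the equivalence concrete, the following sketch (not part of the original page) checks that CrossEntropyLoss on raw logits matches LogSoftmax followed by NLLLoss:

>>> import torch
>>> import torch.nn as nn
>>> logits = torch.randn(3, 5)
>>> target = torch.tensor([1, 0, 4])
>>> nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), target)
>>> ce = nn.CrossEntropyLoss()(logits, target)
>>> torch.allclose(nll, ce)
True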

The target that this loss expects should be a class index in the range $[0, C-1]$ where $C$ = number of classes; if ignore_index is specified, this loss also accepts this class index (this index may not necessarily be in the class range).
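A brief sketch of ignore_index (the values here are illustrative; -100 happens to be PyTorch's default):

>>> import torch
>>> import torch.nn as nn
>>> loss_fn = nn.NLLLoss(ignore_index=-100)   # -100 is also the default
>>> log_probs = torch.log_softmax(torch.randn(3, 5), dim=1)
>>> target = torch.tensor([1, -100, 4])       # the second sample is ignored
>>> loss = loss_fn(log_probs, target)         # averaged over non-ignored targets only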

The unreduced (i.e. with reduction set to 'none') loss can be described as:

$$
\ell(x, y) = L = \{l_1, \dots, l_N\}^\top, \qquad
l_n = -w_{y_n} x_{n, y_n}, \qquad
w_c = \text{weight}[c] \cdot \mathbb{1}\{c \neq \text{ignore\_index}\},
$$

where $x$ is the input, $y$ is the target, $w$ is the weight, and $N$ is the batch size. If reduction is not 'none' (default 'mean'), then

$$
\ell(x, y) =
\begin{cases}
\displaystyle\sum_{n=1}^{N} \frac{1}{\sum_{n=1}^{N} w_{y_n}} \, l_n, & \text{if reduction} = \text{'mean';} \\[2ex]
\displaystyle\sum_{n=1}^{N} l_n, & \text{if reduction} = \text{'sum'.}
\end{cases}
$$
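Note that the 'mean' denominator is the sum of the selected per-sample weights $\sum_n w_{y_n}$, not the batch size $N$. The following sketch (not from the original page) checks both formulas numerically with an arbitrary weight vector:

>>> import torch
>>> import torch.nn as nn
>>> log_probs = torch.log_softmax(torch.randn(3, 5), dim=1)
>>> target = torch.tensor([1, 0, 4])
>>> weight = torch.tensor([1.0, 2.0, 1.0, 1.0, 0.5])
>>> # per-sample loss: l_n = -w_{y_n} * x_{n, y_n}
>>> l = -weight[target] * log_probs[torch.arange(3), target]
>>> torch.allclose(l, nn.NLLLoss(weight=weight, reduction='none')(log_probs, target))
True
>>> # 'mean' divides by the sum of the selected weights, not by N
>>> torch.allclose(l.sum() / weight[target].sum(),
...                nn.NLLLoss(weight=weight, reduction='mean')(log_probs, target))
True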

Parameters

    weight (Tensor, optional) – a manual rescaling weight given to each class. If given, it has to be a Tensor of size $C$. Otherwise, it is treated as if having all ones.

    ignore_index (int, optional) – specifies a target value that is ignored and does not contribute to the input gradient. Default: -100

    reduction (str, optional) – specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied; 'mean': the weighted mean of the output is taken; 'sum': the output will be summed. Default: 'mean'

Shape:

    Input: $(N, C)$ or $(N, C, d_1, d_2, ..., d_K)$ with $K \geq 1$ in the case of $K$-dimensional loss.
    Target: $(N)$ or $(N, d_1, d_2, ..., d_K)$ with $K \geq 1$ in the case of $K$-dimensional loss, where each value is in $[0, C-1]$.
    Output: scalar. If reduction is 'none', same shape as the target.

Examples

>>> import torch
>>> import torch.nn as nn
>>> log_softmax = nn.LogSoftmax(dim=1)
>>> loss_fn = nn.NLLLoss()
>>> # input to NLLLoss is of size N x C = 3 x 5
>>> input = torch.randn(3, 5, requires_grad=True)
>>> # each element in target must have 0 <= value < C
>>> target = torch.tensor([1, 0, 4])
>>> loss = loss_fn(log_softmax(input), target)
>>> loss.backward()
>>>
>>> # 2D loss example (used, for example, with image inputs)
>>> N, C = 5, 4
>>> loss_fn = nn.NLLLoss()
>>> data = torch.randn(N, 16, 10, 10)
>>> conv = nn.Conv2d(16, C, (3, 3))
>>> log_softmax = nn.LogSoftmax(dim=1)
>>> # output of conv forward is of shape [N, C, 8, 8]
>>> output = log_softmax(conv(data))
>>> # each element in target must have 0 <= value < C
>>> target = torch.empty(N, 8, 8, dtype=torch.long).random_(0, C)
>>> # input to NLLLoss is of size N x C x height (8) x width (8)
>>> loss = loss_fn(output, target)
>>> loss.backward()
