PyTorch 2.0 Release Notes: Our next generation release that is faster, more Pythonic and Dynamic as ever

Highlights

We are excited to announce the release of PyTorch® 2.0 (release note), which we highlighted during the PyTorch Conference on 12/2/22! PyTorch 2.0 offers the same eager-mode development and user experience, while fundamentally changing and supercharging how PyTorch operates at the compiler level under the hood, with faster performance and support for Dynamic Shapes and Distributed.

This next-generation release includes a Stable version of Accelerated Transformers (formerly called Better Transformers). Beta features include torch.compile as the main API for PyTorch 2.0, the scaled_dot_product_attention function as part of torch.nn.functional, the MPS backend, and the functorch APIs in the torch.func module, along with other Beta/Prototype improvements across various inference, performance, and training optimization features on GPUs and CPUs. For a comprehensive introduction and technical overview of torch.compile, please visit the 2.0 Get Started page.
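As a quick illustration, here is a minimal sketch of the new torch.compile entry point; the model and shapes below are placeholders, not from the release notes.

import torch
import torch.nn as nn

# Any nn.Module (or plain Python function) can be passed to torch.compile.
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))

# The first call triggers compilation; later calls reuse the optimized code.
compiled_model = torch.compile(model)

x = torch.randn(16, 64)
out = compiled_model(x)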

Along with 2.0, we are also releasing a series of beta updates to the PyTorch domain libraries, including those that are in-tree, and separate libraries including TorchAudio, TorchVision, and TorchText. An update for TorchX is also being released as it moves to community supported mode. More details can be found in this library blog.

This release is composed of over 4,541 commits and 428 contributors since 1.13.1. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try these out and report any issues as we improve 2.0 and the overall 2-series this year.

Summary:

Stable: Accelerated PT 2 Transformers

Beta: torch.compile, PyTorch MPS Backend, Scaled dot product attention, functorch, Dispatchable Collectives, torch.set_default_device and torch.device as context manager, X86 quantization backend, GNN inference and training performance

Prototype: DTensor, TensorParallel, 2D Parallel, torch.compile (dynamic=True)

Platform Changes: CUDA support for 11.7 & 11.8 (deprecating CUDA 11.6), Python 3.8 (deprecating Python 3.7), AWS Graviton3

To see a full list of public 2.0, 1.13, and 1.12 feature submissions, click here.

Backwards Incompatible Changes

Drop support for Python versions <= 3.7 (#93155)

Previously the minimum supported version of Python for PyTorch was 3.7. This PR updates the minimum version to require 3.8 in order to install PyTorch. See Hardware / Software Support for more information.

Drop support for CUDA 10 (#89582)

This PR updates the minimum CUDA version to 11.0. See the Getting Started page for more information on installation or building from source.

Gradients are now set to None instead of zeros by default in torch.optim.*.zero_grad() and torch.nn.Module.zero_grad() (#92731)

This changes the default behavior of zero_grad() to set the grads to None instead of zero tensors. In other words, the set_to_none kwarg is now True by default instead of False. Setting grads to None reduces peak memory usage and improves performance. This will break code that directly accesses the grads' data or does computation on the grads after calling zero_grad(), as they will now be None. To revert to the old behavior, pass in zero_grad(set_to_none=False).

1.13:

>>> import torch
>>> from torch import nn
>>> module = nn.Linear(2, 2)
>>> i = torch.randn(2, 2, requires_grad=True)
>>> module(i).sum().backward()
>>> module.zero_grad()
>>> module.weight.grad == None
False
>>> module.weight.grad.data
tensor([[0., 0.],
        [0., 0.]])
>>> module.weight.grad + 1.0
tensor([[1., 1.],
        [1., 1.]])

2.0:

>>> import torch
>>> from torch import nn
>>> module = nn.Linear(5, 5)
>>> i = torch.randn(2, 5, requires_grad=True)
>>> module(i).sum().backward()
>>> module.zero_grad()
>>> module.weight.grad == None
True
>>> module.weight.grad.data
AttributeError: 'NoneType' object has no attribute 'data'
>>> module.weight.grad + 1.0
TypeError: unsupported operand type(s) for +: 'NoneType' and 'float'
Update torch.Tensor and nn.Parameter to serialize all their attributes (#88913)

Any attribute stored on torch.Tensor and torch.nn.Parameter will now be serialized. This aligns the serialization behavior of torch.nn.Parameter, torch.Tensor, and other tensor subclasses.

1.13:

>>> import io
>>> import torch
>>> from torch import nn

# torch.Tensor behavior
>>> a = torch.Tensor()
>>> a.foo = 'hey'

>>> buffer = io.BytesIO()
>>> torch.save(a, buffer)
>>> buffer.seek(0)
>>> b = torch.load(buffer)

>>> print(a.foo)
hey
>>> print(b.foo)
AttributeError: 'Tensor' object has no attribute 'foo'

# torch.nn.Parameter behavior
>>> a = nn.Parameter()
>>> a.foo = 'hey'

>>> buffer = io.BytesIO()
>>> torch.save(a, buffer)
>>> buffer.seek(0)
>>> b = torch.load(buffer)
>>> print(a.foo)
hey
>>> print(b.foo)
AttributeError: 'Parameter' object has no attribute 'foo'

# torch.Tensor subclass behavior
>>> class MyTensor(torch.Tensor):
...     pass

>>> a = MyTensor()
>>> a.foo = 'hey'
>>> print(a.foo)
hey

>>> buffer = io.BytesIO()
>>> torch.save(a, buffer)
>>> buffer.seek(0)
>>> b = torch.load(buffer)
>>> print(b.foo)
hey

2.0:

>>> import io
>>> import torch
>>> from torch import nn

# torch.Tensor behavior
>>> a = torch.Tensor()
>>> a.foo = 'hey'

>>> buffer = io.BytesIO()
>>> torch.save(a, buffer)
>>> buffer.seek(0)
>>> b = torch.load(buffer)
>>> print(a.foo)
hey
>>> print(b.foo)
hey

# torch.nn.Parameter behavior
>>> a = nn.Parameter()
>>> a.foo = 'hey'

>>> buffer = io.BytesIO()
>>> torch.save(a, buffer)
>>> buffer.seek(0)
>>> b = torch.load(buffer)
>>> print(a.foo)
hey
>>> print(b.foo)
hey

# torch.Tensor subclass behavior
>>> class MyTensor(torch.Tensor):
...     pass

>>> a = MyTensor()
>>> a.foo = 'hey'
>>> print(a.foo)
hey

>>> buffer = io.BytesIO()
>>> torch.save(a, buffer)
>>> buffer.seek(0)
>>> b = torch.load(buffer)
>>> print(b.foo)
hey

If you have an attribute that you don't want to be serialized, do not store it as an attribute on the Tensor or Parameter; instead, it is recommended to use torch.utils.weak.WeakTensorKeyDictionary:

>>> from torch.utils.weak import WeakTensorKeyDictionary
>>> foo_dict = WeakTensorKeyDictionary()
>>> foo_dict[a] = 'hey'
>>> print(foo_dict[a])
hey
Algorithms {Adadelta, Adagrad, Adam, Adamax, AdamW, ASGD, NAdam, RAdam, RMSProp, RProp, SGD} default to faster foreach implementation when on CUDA + differentiable=False

When applicable, this changes the default behavior of step() and anything that calls into adadelta(...), adagrad(...), adam(...), adamax(...), adamw(...), asgd(...), nadam(...), radam(...), rmsprop(...), rprop(...), sgd(...) directly to use the foreach implementation instead of the for-loop for better performance. However, this change can potentially be backward incompatible since there may be small numerical differences between the results computed with the foreach implementation and the previous default. The foreach implementation will be the default only if the following conditions are met.

  1. The user has not specified kwargs relating to implementation (foreach, fused, or differentiable),
  2. All tensors are native tensors (not subclasses) and on CUDA,
  3. torch.jit.is_scripting is False.

When these conditions are satisfied, the implementation used will match the implementation used when one passes foreach=True. The user defined flag for foreach will NOT be overwritten in order to preserve user selections. For more details, check the documentation. There should be no significant differences between the results returned by these optimizers. To revert to the old behavior, say, for adam, pass in adam(..., foreach=False, ...) or initialize Adam with Adam(..., foreach=False, ...).
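For example, here is a minimal sketch of opting out of the new default; the tiny model and learning rate below are illustrative only.

import torch
import torch.nn as nn

model = nn.Linear(10, 10)

# foreach=False restores the 1.13 single-tensor (for-loop) implementation;
# leaving foreach unset lets the optimizer pick the faster foreach path on CUDA.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, foreach=False)

model(torch.randn(4, 10)).sum().backward()
opt.step()
opt.zero_grad()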

Pull Requests: #92306, #92716, #92723, #92724, #92726, #92727, #92728, #92715, #91896, #92730, #90865, #93184, #92181, #92923, #95415, #95818, #95811

torch.nn.utils.stateless.functional_call now respects tied weights (#90477)

Assume a module has two tied weights, x and x_tied. Previously, invoking functional_call(module, parameters_and_buffers, args, kwargs=None, *, strict=False) with a parameter dictionary of only one of the tied weights would result in the other one(s) not being updated.

We’ve changed the behavior so that providing one of the tied weights in the parameter dictionary will update all other tied weights. If you would like the behavior in previous versions of PyTorch, please set tie_weights=False.

Please also see the related deprecation section "torch.nn.stateless.functional_call in favor of torch.func.functional_call".

1.13:

>>> import torch
>>> from torch import nn
>>> from torch.nn.utils.stateless import functional_call

>>> class Foo(nn.Module):
...    def __init__(self):
...        super().__init__()
...        self.x = nn.Parameter(torch.zeros([]))
...        self.x_tied = self.x
...
...    def forward(self, inp):
...        return self.x + self.x_tied

>>> foo = Foo()
>>> params = {'x': torch.ones([])}
>>> result = functional_call(foo, params, torch.randn([]))
>>> print(result)
1.0

2.0:

>>> import torch
>>> from torch import nn
>>> from torch.func import functional_call

>>> class Foo(nn.Module):
...    def __init__(self):
...        super().__init__()
...        self.x = nn.Parameter(torch.zeros([]))
...        self.x_tied = self.x
...
...    def forward(self, inp):
...        return self.x + self.x_tied

>>> foo = Foo()
>>> params = {'x': torch.ones([])}
>>> result = functional_call(foo,
...                          params,
...                          torch.randn([]),
...                          tie_weights=False)
>>> print(result)
1.0
Require return_complex to be passed explicitly to torch.stft for real input (#86724)

torch.stft takes an optional return_complex parameter that indicates whether the output should be a floating point tensor or a complex tensor. return_complex previously defaulted to False for real input tensors. This PR removes the default and makes return_complex a required argument for real inputs. However, complex inputs will continue to default to return_complex=True.

1.13:

>>> a = torch.rand(1024)
>>> _ = torch.stft(a, n_fft=128)

2.0:

>>> t = torch.rand(1024)
>>> _ = torch.stft(t, n_fft=128, return_complex=False)
Require inputs to torch.istft to be complex valued

torch.istft no longer supports input in the form of real tensors
with shape (..., 2) to mimic complex tensors. Instead, convert
inputs to a complex tensor first before calling torch.istft.

1.13:

>>> t = torch.rand(65, 33, 2)
>>> _ = torch.istft(t, n_fft=128, length=1024)

2.0:

>>> t = torch.rand(65, 33, 2)
>>> _ = torch.istft(t, n_fft=128, length=1024)
RuntimeError: istft requires a complex-valued input
tensor matching the output from stft with return_complex=True.
>>> t_complex = torch.view_as_complex(t)
>>> _ = torch.istft(t_complex, n_fft=128, length=1024)
Change the default behavior of sparse tensor construction to not perform component verification (#92094)

We now disable the costly component verification of torch.sparse_coo/csr/csc/bsr/bsc/compressed_tensor by default. The user can use the new check_invariants flag or torch.sparse.check_sparse_tensor_invariants to locally enable component verification. This allows users to constrain these costly checks to specific regions of their code and enables better overall performance. Previously users had no access to public constructors that disable these checks.

1.13:

>>> i = [[0, 1, 1],
         [2, 0, 5]]
>>> v = [3, 4, 5]
>>> s = torch.sparse_coo_tensor(i, v, (2, 3))
RuntimeError: size is inconsistent with indices: for
dim 1, size is 3 but found index 5

2.0:

>>> i = [[0, 1, 1],
         [2, 0, 5]]
>>> v = [3, 4, 5]
>>> s = torch.sparse_coo_tensor(i,
...                             v,
...                             (2, 3),
...                             check_invariants=True)
RuntimeError: size is inconsistent with indices: for
dim 1, size is 3 but found index 5
>>> with torch.sparse.check_sparse_tensor_invariants():
...     s = torch.sparse_coo_tensor(i, v, (2, 3))
...
RuntimeError: size is inconsistent with indices: for
dim 1, size is 3 but found index 5
Remove deprecated functionality from torch.testing

Historically, torch.testing exposed a lot of private and undocumented functionality publicly. The 2.0 release completes the deprecation cycle for those items and removes them.

Hooks registered on a tensor now always run, even if the tensor is an input to .grad() (#85849)

This is a bug fix. Per the docs, hooks registered to a Tensor should fire any time gradients are computed w.r.t. that tensor. This change corrects the behavior to be consistent with the documentation. See the documentation for more details about backward hooks execution.

2.0

import torch

def hook(grad):
    print(grad)  # illustrative hook body

a = torch.tensor(1., requires_grad=True)
b = a.clone()
b.register_hook(hook)  # the hook registered here didn't fire before!
torch.autograd.grad(b.clone(), inputs=(b,))
grad_fn post-hooks can always observe the modifications to the gradient made by any grad_fn pre-hooks or hooks registered to the Tensor, even if it is a leaf tensor (#85849)

This corrects the behavior of hooks to be consistent with the documentation in the case where the tensor is a leaf tensor, i.e. the node is a grad accumulator node. See documentation for more details about backward hooks execution.

2.0

import torch

def hook(grad):
    # updates grad
    return grad * 3

def hook2(grad_input, grad_output):
    # Before this change, grad_output would NOT see the x3
    print(grad_output)

a = torch.tensor(1., requires_grad=True)
b = a.clone()
acc_grad = b.grad_fn.next_functions[0][0]
acc_grad.register_hook(hook2)
b.register_hook(hook)
torch.autograd.backward(b.clone(), inputs=(a,))  # hooks fire
Remove FSDP params_with_grad (#87480)

In FSDP, we used to have an API params_with_grad for users to get parameters which have gradients from the FSDP module. We decided not to expose this helper because it is not a common paradigm.

1.13:

m = FullyShardedDataParallel(module)
m.params_with_grad()

2.0:

m = FullyShardedDataParallel(module)
m.params_with_grad()  # Runtime error thrown
# As a workaround, users can still do
[p for p in m.parameters() if p.grad is not None]
Users doing wildcard import of torch.distributed.fsdp.fully_sharded_data_parallel will no longer get non-public symbols (#87917)

Users could previously import both public and non-public symbols:

1.13:

from torch.distributed.fsdp.fully_sharded_data_parallel import *
ShardingStrategy.FULL_SHARD       # Non-public API
FullyShardedDataParallel(module)  # Public API

2.0:

from torch.distributed.fsdp.fully_sharded_data_parallel import *
ShardingStrategy.FULL_SHARD       # Non-public API, this will fail now
FullyShardedDataParallel(module)  # Public API
...
# Users can instead do
from torch.distributed.fsdp.fully_sharded_data_parallel import (
    FullyShardedDataParallel,
    ShardingStrategy,
)
FullyShardedDataParallel(module, sharding_strategy=ShardingStrategy.FULL_SHARD)
Signatures of the FSDP auto_wrap_policy related APIs were changed (#88450).

1.13:

lambda_auto_wrap_policy(m, unwrapped_params=...)
transformer_auto_wrap_policy(m, unwrapped_params=...)
size_based_auto_wrap_policy(m, unwrapped_params=...)

2.0:

lambda_auto_wrap_policy(m, nonwrapped_numel=...)
transformer_auto_wrap_policy(m, nonwrapped_numel=...)
size_based_auto_wrap_policy(m, nonwrapped_numel=...)
Updated alltoall signature to be consistent with other c10d APIs (#90569)

The keyword argument names have been changed.

1.13:

alltoall(output=..., input=...)

2.0:

alltoall(output_tensors=..., input_tensors=...)
Remove unused functions in torch.ao.quantization.fx.utils (#90025)

This commit removes a number of unused functions from both the torch.quantization and the torch.ao.quantization namespaces.

Make torch.ao.quantization.backend_config.BackendConfig accept inputs in the right order (#90698)

The existing BackendConfig fusion pattern uses a "reversed nested tuple" format that is unintuitive. This pattern format also complicated the signatures of the user-specified "fuser methods", which needed to accept arguments in reverse nested order to match the patterns:

1.13:

import torch.nn as nn
import torch.ao.nn.intrinsic as nni
from torch.ao.quantization.backend_config import (
    BackendPatternConfig
)

def fuse_conv_bn_relu(is_qat, relu, bn_conv):
    (bn, conv) = bn_conv
    return nni.ConvBnReLU2d(conv, bn, relu)

config = (
    BackendPatternConfig((nn.ReLU, (nn.BatchNorm2d, nn.Conv2d)))
    .set_dtype_configs(...)
    .set_fuser_method(fuse_conv_bn_relu)
    .set_fused_module(nni.ConvBnReLU2d)
)

backend_config.configs  # returns Dict[Pattern, BackendPatternConfig]

2.0:

import torch.nn as nn
import torch.ao.nn.intrinsic as nni
from torch.ao.quantization.backend_config import (
    BackendPatternConfig
)

def fuse_conv_bn_relu(is_qat, conv, bn, relu):
    return nni.ConvBnReLU2d(conv, bn, relu)

config = (
    BackendPatternConfig((nn.Conv2d, nn.BatchNorm2d, nn.ReLU))
    .set_dtype_configs(...)
    .set_fuser_method(fuse_conv_bn_relu)
    .set_fused_module(nni.ConvBnReLU2d)
)

# Or for backward-compatibility
def fuse_conv_bn_relu(is_qat, relu, bn_conv):
    (bn, conv) = bn_conv
    return nni.ConvBnReLU2d(conv, bn, relu)

config = (
    BackendPatternConfig()
    ._set_pattern_complex_format((nn.ReLU, (nn.BatchNorm2d, nn.Conv2d)))
    .set_dtype_configs(...)
    .set_fuser_method(fuse_conv_bn_relu)
    .set_fused_module(nni.ConvBnReLU2d)
)

backend_config.configs  # returns List[BackendPatternConfig]
Make the AO codebase compliant with the public vs. private API guidelines of PyTorch (see Public API definition and documentation)

If users were using any of the AO private APIs, these must now be accessed with a preceding underscore (_) to conform with the guidelines.

1.13:

get_observer_dict()

2.0:

_get_observer_dict()

Pull Requests: (#86029, #87515, #87516, #87517, #87518, #87519, #88392, #88394, #88396, #88397, #87521, #88395, #87883, #88399, #88398, #86022, #86023, #86024, #86025, #86026, #86027, #86028, #86030, #86031, #86032, #86033, #86034, #86037, #90315, #88391, #90554, #87520)

Remove overwrite_output_observer and represent the observer constraints for fixed qparams ops through the existing DTypeWithConstraints mechanism (#88620)

This commit removes the overwrite_output_observer and overwrite_output_fake_quantize observer settings from the BackendConfig. Instead, we represent the observer constraints for fixed qparams ops through the existing DTypeWithConstraints mechanism. Note, however, that to be consistent with other DTypeWithConstraints checks, we no longer throw an error if an incorrect observer is specified, but simply ignore the offending QConfig and log a warning instead. This is the BC-breaking part of the change.
1.13:

from torch.ao.quantization import QConfigMapping
from torch.ao.quantization.qconfig import default_qconfig
from torch.ao.quantization.quantize_fx import prepare_fx

model = ModelWithFixedQParamsOps()
qconfig_mapping = QConfigMapping().set_global(default_qconfig)
example_inputs = ...
prepare_fx(model, qconfig_mapping, example_inputs)

Before this commit, running the above leads to an exception because the wrong observers are used for fixed qparams ops. After this commit, the above will only produce a warning, and the fixed qparams ops will not be quantized. In both cases, switching to get_default_qconfig_mapping will cause the fixed qparams ops to be quantized.
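For illustration, here is a minimal sketch of the recommended path; the tiny model below is a hypothetical stand-in for ModelWithFixedQParamsOps.

import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx

class TinyFixedQParamsModel(nn.Module):  # hypothetical stand-in model
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)
        self.sigmoid = nn.Sigmoid()  # sigmoid is a fixed-qparams op

    def forward(self, x):
        return self.sigmoid(self.linear(x))

model = TinyFixedQParamsModel().eval()
example_inputs = (torch.randn(1, 4),)

# get_default_qconfig_mapping assigns the observers that fixed-qparams ops require,
# so they are quantized instead of being skipped with a warning.
qconfig_mapping = get_default_qconfig_mapping()
prepared = prepare_fx(model, qconfig_mapping, example_inputs)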

Remove torch.ao.quantization.quantization_patterns and torch.ao.quantization.fusion_patterns (#89872)

The classes under the torch.ao.quantization.fx.quantization_patterns namespace have been migrated to the torch.ao.quantization.fx.quantize_handler namespace.

The classes under the torch.ao.quantization.fx.fusion_patterns namespace have been migrated to the torch.ao.quantization.fx.fuse_handler namespace.

Remove public APIs under the torch.ao.quantization.fx.backend_config_utils namespace(#89810)

The following APIs that were mistakenly public under the torch.ao.quantization.fx.backend_config_utils namespace are removed in this commit.

1.13:

from torch.ao.quantization.fx.backend_config_utils import (
    get_quantize_handler_cls,
    get_fusion_pattern_to_fuse_handler_cls,
    get_native_quant_patterns,
    get_pattern_to_quantize_handlers,
)
all_quant_patterns = get_native_quant_patterns()

2.0:

from torch.ao.quantization.fx.quantization_patterns import (
    _get_quantize_handler_cls,
    _get_pattern_to_quantize_handlers,
)
from torch.ao.quantization.fx.fusion_patterns import (
    _get_fusion_pattern_to_fuse_handler_cls,
)
from torch.ao.quantization.backend_config import (
    get_native_backend_config,
)
all_quant_patterns = _get_pattern_to_quantize_handlers(
    get_native_backend_config()
)
Update torch.{slice|select|diagonal|as_strided}_scatter ops to preserve input stride/storage_offset (#91029)

These operators are primarily used by the functionalization pass, used in AOTAutograd. Previously, they would always return contiguous tensors. Now, they return a tensor with the same striding as their first argument.

1.13:

>>> x = torch.ones(2, 2, 2)
>>> base = x[:, :, 1]
>>> base.stride()
(4, 2)
>>> x = torch.zeros(2, 2, 2)
>>> base = x[:, :, 1]
>>> base.stride()
(4, 2)
>>> torch.diagonal_scatter(base, torch.ones(2)).stride()
# returns a contiguous tensor
(2, 1)

2.0:

>>> x = torch.ones(2, 2, 2)
>>> base = x[:, :, 1]
>>> base.stride()
(4, 2)
>>> x = torch.zeros(2, 2, 2)
>>> base = x[:, :, 1]
>>> base.stride()
(4, 2)
>>> torch.diagonal_scatter(base, torch.ones(2)).stride()
# returns a tensor with the same strides as base
(4, 2)
Remove ONNX deprecated monkey patches to torch.Graph (#94747)

The deprecated monkey patches to the classes torch.Graph, torch.Block and torch.Node from torch.onnx have been removed. This means the methods torch.Graph.op(), torch.Graph.at(), torch.Block.op(), torch.Graph.constant(), and torch.Node.__getitem__ are no longer available.

Users creating custom symbolic functions for the torch.onnx exporter can continue to assume the g.op() interface for creating an operator in the graph, which is now exposed via the GraphContext class. Users should not assume any other methods from the GraphContext class other than those defined natively by torch.Graph and .op().

Code change to existing symbolic functions is not expected with this change.
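As an illustration, here is a minimal sketch of a custom symbolic function that relies only on the g.op() interface; the choice of aten::relu, the opset version, and the function name are assumptions for the example.

import torch
import torch.onnx

# `g` is the graph context handed to symbolic functions; only g.op() is used here.
def relu_symbolic(g, input):
    return g.op("Relu", input)

# Register the symbolic for aten::relu at an illustrative opset version.
torch.onnx.register_custom_op_symbolic("aten::relu", relu_symbolic, opset_version=14)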

Add full checker mode in torch.onnx.export (#83186)

This removes the boolean full_check parameter from the TORCH_API function check_onnx_proto and always performs the full check, emitting warning messages if it fails.

Previously, the API also did not check types in the graph even with full_check=True. With this change, a warning message is shown if the graph contains a type error.

C++ API specific BC-Breaking Changes

Deleted torch::deploy from PyTorch Core (#85953)

torch::deploy has been migrated over to MultiPy. Ongoing development will continue in that repository.

Remove the use of lazy::View (#87822)

The view and aliasing infrastructure in lazy tensor core has been deprecated in favor of functionalization.

Renamed c10::fromIntArrayRef to c10::fromIntArrayRefSlow and changed call sites (#86235)

The function has been renamed to more accurately reflect its performance characteristics.

Deprecations

torch.func (a.k.a. functorch)

We've deprecated the functorch module in favor of the new torch.func module.

We're excited to announce that, as the final step of upstreaming and integrating functorch into PyTorch, the functorch APIs are now available in the torch.func module. Our function transform APIs are identical to before, but we have changed how the interaction with NN modules works.

We've deprecated the functorch function transforms (e.g. vmap, grad, jvp) in favor of their identical torch.func counterparts (#92279); see the sketch after this list.
PyTorch has consolidated on torch.func.functional_call as the NN module functional API. Please migrate from functorch.{make_functional, make_functional_with_buffers} to it. For more details see this guide.
Please migrate from functorch.combine_state_for_ensemble to torch.func.stack_module_state. For more details see this guide.
We are no longer supporting functorch.compile (also known as AOTAutograd) as a frontend for compilation in PyTorch; we have integrated AOTAutograd into PyTorch's compilation story. If you are a user, please use torch.compile() instead.
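As a rough sketch of the migration, with a purely illustrative function:

import torch
from torch.func import grad, vmap  # previously: from functorch import grad, vmap

def f(x):
    return torch.sin(x).sum()

x = torch.randn(3, 4)
per_row_grads = vmap(grad(f))(x)  # same transform APIs, now under torch.func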

Python API

Deprecate TypedStorage, its derived classes, and all of their public methods (#85303)

Typed storages have been removed from the C++ side and torch.UntypedStorage is used in their place. The use of torch.TypedStorage and all of its subclasses is now deprecated.

1.13:

tensor.storage()
torch.TypedStorage(...)

2.0:

tensor.untyped_storage()
torch.UntypedStorage(...)

If you need to access individual elements in a storage as a particular dtype, you can simply create a tensor to view it:

torch.tensor(storage, dtype=...)
Deprecate tensor.mT, tensor.T, tensor.mH, tensor.H on 0-D tensors (#92143)

1.13:

>>> a = torch.tensor(10)
>>> a.T
>>> a.H

2.0:

>>> a = torch.tensor(10)
>>> a.T
UserWarning: Tensor.T is deprecated on 0-D tensors.
This function is the identity in these cases.
>>> a.H
UserWarning: Tensor.H is deprecated on 0-D tensors.
Consider using x.conj().
Autograd API

Deprecate decorating classes with torch.no_grad (#89522)

Decorating classes with torch.no_grad is now deprecated. You should be decorating its functions or methods instead. To preserve the current behavior of class decoration, you can directly decorate the __init__ method and nothing else.

1.13:

@torch.no_grad()
class Blah():
    pass

2.0:

class Blah():
    @torch.no_grad()
    def __init__(self):
        pass
Linalg

Remove the use of the overload at::frobenius_norm(const Tensor&) (#81762)

Continuing the deprecation process from release 1.12, the Tensor overload of this function has been removed. This function was not used in the bindings of PyTorch and should not impact users of torch.norm.

torch.nn API

Canceling deprecation of functional.{tanh, sigmoid} functions (#86905)

Both these ops are heavily used and so will not be removed. Deprecation warnings have been removed.

Deprecated torch.nn.utils.stateless.functional_call in favor of torch.func.functional_call (#92280)

We've moved torch.nn.utils.stateless.functional_call under the torch.func module to reflect how it is useful for working with nn.Modules in a functional style. As of PyTorch 2.0, torch.func.functional_call is a drop-in replacement for torch.nn.utils.stateless.functional_call, and we will remove torch.nn.utils.stateless.functional_call in a future version of PyTorch. Note, however, that we did change the default behavior of torch.nn.utils.stateless.functional_call in PyTorch 2.0 (see "torch.nn.utils.stateless.functional_call now respects tied weights" under the BC-breaking notes).
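A minimal sketch of the drop-in migration, with an illustrative module and input:

import torch
import torch.nn as nn
from torch.func import functional_call  # previously: torch.nn.utils.stateless.functional_call

module = nn.Linear(3, 3)
params_and_buffers = dict(module.named_parameters())
x = torch.randn(2, 3)

# Run the module with an explicit parameter dictionary instead of its registered state.
out = functional_call(module, params_and_buffers, (x,))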

Releng

Deprecated private API torch._six (#94709)

Removed the Python 2 and 3 compatibility libraries six and future, along with torch._six.
2.0

# from torch._six import string_classes
str
# from torch._six import int_classes
int
# from torch._six import inf, nan
from torch import inf, nan
# torch._six.string_classes
str
ONNX

Deprecated Caffe2 ONNX exporter support (#95071)

Users must use PyTorch 1.x versions to use the Caffe2 ONNX exporter. This capability will be removed entirely from the PyTorch 2.x series.

The remaining sections of the full release notes (New Features, Improvements, Bug Fixes, Performance, and Documentation, each broken down by area such as Python API, Autograd API, torch.nn API, torch.func, CUDA, C++ API, Distributed, MPS, Sparse API, Quantization, ONNX, JIT, and Releng) are not reproduced here.
