nn("head")
was also changed to match this. This means that for binary classification tasks, t_loss("cross_entropy")
now generates nn_bce_with_logits_loss
instead of nn_cross_entropy_loss
. This also came with a reparametrization of the t_loss("cross_entropy")
loss (thanks to @tdhock, #374).po("nn_identity")
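  A minimal sketch of the new behavior from the user side; the learner key and hyperparameter values below are only illustrative, not part of the change itself:

```r
library(mlr3torch)

# For binary classification tasks, t_loss("cross_entropy") now generates
# nn_bce_with_logits_loss; for multiclass tasks it still uses
# nn_cross_entropy_loss.
loss = t_loss("cross_entropy")

learner = lrn("classif.mlp",
  loss = loss,
  epochs = 10L,
  batch_size = 32L
)
```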
po("nn_fn")
for calling custom functions in a network.nn("block")
(which allows to repeat the same network segment multiple times) now has an extra argument trafo
, which allows to modify the parameter values per layer.y_hat
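  A rough sketch of wiring `po("nn_fn")` into a network graph; the `fn` parameter name and the surrounding graph are assumptions rather than a verbatim quote of the API:

```r
library(mlr3pipelines)
library(mlr3torch)

# Apply a custom function as its own layer in the network
# (here simply scaling the activations; the parameter name `fn` is assumed).
graph = po("torch_ingress_num") %>>%
  po("nn_linear", out_features = 32L) %>>%
  po("nn_fn", fn = function(x) x * 0.5) %>>%
  po("nn_head")
```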
* … (`y_hat`).
* The `lr_one_cycle` callback now infers the total number of steps.
* Added `digits` for controlling the precision with which validation/training scores are logged.
* `TorchIngressToken` can now also take a `Selector` as the `features` argument.
* Added `lazy_shape()` to get the shape of a lazy tensor.
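  A small usage sketch of the new helper; creating the lazy tensor via `as_lazy_tensor()` on a torch tensor and the exact return value in the comment are illustrative assumptions:

```r
library(mlr3torch)

# Create a lazy tensor and inspect its (possibly partially unknown) shape.
lt = as_lazy_tensor(torch::torch_randn(10, 3))
lazy_shape(lt)  # e.g. c(NA, 3): the batch dimension is unknown
```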
* The `LearnerTorch` base class now supports the private method `$.ingress_tokens(task, param_vals)` for generating the `torch::dataset`.
* Shapes can now contain multiple `NA`s, i.e. not only the batch dimension can be missing. However, most `nn()` operators still expect only one missing value and will throw an error if multiple dimensions are unknown.
* … `NA` instead.
* … the `param_groups` parameter.
* `NA` is now a valid shape for lazy tensors.
* The `lr_reduce_on_plateau` callback now works (see the sketch below).
* `LearnerTorchModel` can now be parallelized and trained with encapsulation activated.
* `jit_trace` now works in combination with batch normalization.
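  A hedged sketch of attaching the callback to a learner; the `t_clbk()` key is inferred from the entry above and the remaining values are illustrative:

```r
library(mlr3torch)

learner = lrn("classif.mlp",
  epochs = 30L,
  batch_size = 32L,
  callbacks = t_clbk("lr_reduce_on_plateau")
)
```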
* … `R6` version 2.6.0.
* The private `LearnerTorch$.dataloader()` method no longer operates on the `task` but on the `dataset` generated by the private `LearnerTorch$.dataset()` method.
* The `shuffle` parameter during model training is now initialized to `TRUE` to sidestep issues where the data is sorted.
* The `jit_trace` parameter was added to `LearnerTorch`; when set to `TRUE`, it can lead to significant speedups. This should only be enabled for "static" models; see the torch tutorial for more information.
* Added `num_interop_threads` to `LearnerTorch`.
* The `tensor_dataset` parameter was added, which allows stacking all batches at the beginning of training to make the subsequent loading of batches faster.
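  The three options above are set like ordinary hyperparameters; a configuration sketch with illustrative values (whether every learner accepts all three together is an assumption):

```r
library(mlr3torch)

learner = lrn("classif.mlp",
  epochs = 10L,
  batch_size = 64L,
  jit_trace = TRUE,           # only advisable for "static" network structures
  num_interop_threads = 2L,   # size of torch's inter-op thread pool
  tensor_dataset = TRUE       # stack all batches once before training starts
)
```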
* Added a `PipeOp` for adaptive average pooling.
* The `n_layers` parameter was added to the MLP learner (see the sketch below).
* … `AutoTuner`.
* `epochs - patience` is now used for the internally tuned values instead of the trained number of `epochs`, which was the previous behavior.
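  A sketch of the MLP learner with the new parameter; the interplay between `n_layers` and `neurons` (a single width recycled across layers) is an assumption here:

```r
library(mlr3torch)

learner = lrn("classif.mlp",
  n_layers = 3L,     # number of hidden layers
  neurons = 64L,     # assumed to be recycled across the n_layers hidden layers
  epochs = 10L,
  batch_size = 32L
)
```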
* The `dataset` of a learner must no longer return the tensors on the specified `device`, which allows for parallel dataloading on GPUs.
* `PipeOpBlock` should no longer create ID clashes with other PipeOps in the graph (#260).
* `data_formats` are not supported anymore.
* Added `CallbackSetTB`, which allows logging that can be viewed by TensorBoard.
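  A hedged sketch of enabling TensorBoard logging; the callback key `"tb"` is an assumption based on the class name `CallbackSetTB`:

```r
library(mlr3torch)

learner = lrn("classif.mlp",
  epochs = 10L,
  batch_size = 32L,
  callbacks = t_clbk("tb")  # the log directory etc. are set via the callback's parameters
)
```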
* Fixed `PipeOps` such as `po("trafo_resize")` which failed in some cases.
* `LearnerTabResnet` now works correctly.
* Added the `nn()` helper function to simplify the creation of neural network layers.
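  A short sketch of the helper; treating `nn("linear")` as shorthand for `po("nn_linear")` is the assumed behavior:

```r
library(mlr3pipelines)
library(mlr3torch)

# Build a small network graph using the nn() shorthand for torch PipeOps.
graph = po("torch_ingress_num") %>>%
  nn("linear", out_features = 100L) %>>%
  nn("relu") %>>%
  nn("head")
```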