### New Features - Gluon
- New additions to the `gluon.nn` and `gluon.rnn` packages.
- Added new loss functions - `SigmoidBinaryCrossEntropyLoss`, `CTCLoss`, `HuberLoss`, `HingeLoss`, `SquaredHingeLoss`, `LogisticLoss`, `TripletLoss`.
- `gluon.Trainer` now allows reading and setting the learning rate with the `trainer.learning_rate` property (see the sketch after this list).
- Added `HybridBlock.export` for exporting Gluon models to MXNet format.
- Added the `gluon.contrib` package, including `VariationalDropoutCell`.
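A minimal sketch tying the pieces above together; the layer sizes, data, and the `model` file prefix are illustrative:

```python
import mxnet as mx
from mxnet import autograd, gluon

net = gluon.nn.Dense(1)
net.initialize()
net.hybridize()  # needed so the model can be exported below

loss_fn = gluon.loss.HuberLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

x = mx.nd.random.uniform(shape=(10, 4))
y = mx.nd.random.uniform(shape=(10, 1))
with autograd.record():
    loss = loss_fn(net(x), y)
loss.backward()
trainer.step(batch_size=10)

print(trainer.learning_rate)  # read the current learning rate: 0.1
trainer.learning_rate = 0.01  # set a new learning rate directly

net.export('model')  # writes model-symbol.json and model-0000.params
```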
### New Features - Autograd
- Enhancements to the `autograd` package, which enables automatic differentiation of `NDArray` operations.
- `autograd.Function` allows defining both forward and backward computation for custom operators (first sketch after this list).
- Added `mx.autograd.grad` and experimental second order gradient support (most operators don't support second order gradient yet; second sketch after this list).
- Autograd now supports cross-device graphs. Use `x.copyto(mx.gpu(i))` and `x.copyto(mx.cpu())` to do computation on multiple devices.
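A sketch of a custom operator with `autograd.Function`, along the lines of the sigmoid example in the MXNet docs:

```python
import mxnet as mx
from mxnet import autograd

class Sigmoid(autograd.Function):
    def forward(self, x):
        y = 1 / (1 + mx.nd.exp(-x))
        self.save_for_backward(y)  # stash what backward will need
        return y

    def backward(self, dy):
        y, = self.saved_tensors
        return dy * y * (1 - y)   # chain rule through the custom op

x = mx.nd.random.uniform(shape=(4,))
x.attach_grad()
with autograd.record():
    y = Sigmoid()(x)
y.backward()
print(x.grad)
```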
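And a sketch of `mx.autograd.grad` plus a cross-device graph; the second half assumes a CUDA-enabled build with a visible GPU:

```python
import mxnet as mx
from mxnet import autograd, nd

x = nd.array([1.0, 2.0, 3.0])
x.attach_grad()
with autograd.record():
    y = nd.exp(x)
# grad() returns gradients directly instead of writing them to x.grad
dy_dx = autograd.grad(y, [x])[0]
print(dy_dx)  # equals exp(x)

# Cross-device graph: gradients flow through copyto()
a = nd.ones((2, 2))
a.attach_grad()
with autograd.record():
    b = a.copyto(mx.gpu(0)) * 2   # compute on GPU 0
    c = b.copyto(mx.cpu()) + 1    # continue on CPU
c.backward()
print(a.grad)                     # all 2s
```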
### New Features - Sparse Tensor Support
- Added support for sparse matrices in `Symbol` and `NDArray` - `CSRNDArray` and `RowSparseNDArray` (first sketch after this list).
- Added `LibSVMIter` for reading sparse data in the LibSVM format.
- Added sparse support to the `Ftrl`, `SGD` and `Adam` optimizers.
- Added `push` and `row_sparse_pull` with `RowSparseNDArray` in distributed kvstore (second sketch after this list).
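A sketch of the two sparse formats; the values are illustrative:

```python
import mxnet as mx

# CSRNDArray: compressed sparse row, converted from/to dense
dense = mx.nd.array([[0, 1, 0], [2, 0, 0]])
csr = dense.tostype('csr')
print(csr.data.asnumpy(), csr.indices.asnumpy())

# RowSparseNDArray: stores only a subset of rows (here rows 0 and 2)
rsp = mx.nd.sparse.row_sparse_array(
    (mx.nd.array([[1, 2], [3, 4]]), mx.nd.array([0, 2])), shape=(4, 2))
print(rsp.asnumpy())  # rows 1 and 3 come back as zeros
```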
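And a sketch of `push`/`row_sparse_pull`; a local KVStore is used here for brevity, though the feature targets the distributed setting:

```python
import mxnet as mx

kv = mx.kv.create('local')
shape = (4, 2)
kv.init('w', mx.nd.ones(shape).tostype('row_sparse'))

# push a sparse update touching only row 2
grad = mx.nd.sparse.row_sparse_array(
    (mx.nd.ones((1, 2)), mx.nd.array([2])), shape=shape)
kv.push('w', grad)

# pull back only the rows we need
out = mx.nd.sparse.zeros('row_sparse', shape)
kv.row_sparse_pull('w', out=out, row_ids=mx.nd.array([0, 2], dtype='int64'))
print(out.asnumpy())
```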
### Other New Features
- Advanced indexing: `x[idx_arr0, idx_arr1, ..., idx_arrn]` is now supported. Features such as combining and slicing are planned for the next release; check out master to get a preview. (First sketch after this list.)
- Random number generators in `mx.nd.random.*` and `mx.sym.random.*` now support both CPU and GPU.
- `NDArray` and `Symbol` now support "fluent" methods. You can now use `x.exp()` etc. instead of `mx.nd.exp(x)` or `mx.sym.exp(x)` (also shown in the first sketch).
- Added `mx.rtc.CudaModule` for writing and running CUDA kernels from Python (second sketch after this list).
- Added a `multi_precision` option to optimizers for easier float16 training (third sketch after this list).
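A sketch of advanced indexing and the fluent methods; the index dtypes and shapes are illustrative:

```python
import mxnet as mx

x = mx.nd.arange(12).reshape((3, 4))

# advanced indexing with integer index arrays, NumPy-style
rows = mx.nd.array([0, 2], dtype='int32')
cols = mx.nd.array([1, 3], dtype='int32')
print(x[rows, cols].asnumpy())       # picks elements (0, 1) and (2, 3)

# fluent methods: chain operators off the array itself
print(x.exp().sum().asnumpy())       # same as mx.nd.sum(mx.nd.exp(x))
```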
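A sketch of `mx.rtc.CudaModule`, close to the documented axpy example; it assumes a CUDA-enabled build and a visible GPU:

```python
import mxnet as mx

source = r'''
extern "C" __global__ void axpy(const float *x, float *y, float alpha) {
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    y[i] += alpha * x[i];
}
'''
module = mx.rtc.CudaModule(source)
kernel = module.get_kernel("axpy", "const float *x, float *y, float alpha")

x = mx.nd.ones((10,), ctx=mx.gpu(0))
y = mx.nd.zeros((10,), ctx=mx.gpu(0))
kernel.launch([x, y, 3.0], mx.gpu(0), (1, 1, 1), (10, 1, 1))
print(y.asnumpy())  # all 3.0
```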
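And the `multi_precision` switch; the learning rate here is arbitrary:

```python
import mxnet as mx

# with float16 parameters, updates are applied to float32 master weights
opt = mx.optimizer.create('sgd', learning_rate=0.1, multi_precision=True)
```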
### API Changes
- Operators like `mx.sym.linalg_*` and `mx.sym.random_*` are now moved to `mx.sym.linalg.*` and `mx.sym.random.*`. The old names are still available but deprecated.
- `sample_*` and `random_*` are now merged as `random.*`, which supports both scalar and `NDArray` distribution parameters (see the sketch below).
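A sketch of the merged `random.*` namespace with scalar and `NDArray` parameters; the per-element means and scales are illustrative:

```python
import mxnet as mx

# scalar parameters, as before
a = mx.nd.random.uniform(low=0, high=1, shape=(2, 2))

# NDArray parameters: draw from a different distribution per element
loc = mx.nd.array([0.0, 10.0])
scale = mx.nd.array([1.0, 0.1])
b = mx.nd.random.normal(loc, scale, shape=2)
print(b.shape)  # (2, 2): two samples from each of the two distributions
```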
### Bug-fixes
- Fixed a bug that caused the `argsort` operator to fail on large tensors.
- Fixed a bug affecting `float64` inputs.

For more information and examples, see the full release notes.