library(torch)
torch_manual_seed(1) # setting seed for reproducibility
This vignette showcases the basic functionality of distributions in torch. Currently, the distribution modules are considered "work in progress" and are still experimental features in the torch package.
The distribution modules in torch are modelled after PyTorch's distributions module, which in turn is based on the TensorFlow Distributions package.
This vignette is based on TensorFlow's distributions tutorial.
Basic univariate distributions
Let's start by creating a new instance of a normal distribution:
n <- distr_normal(loc = 0, scale = 1)
n
We can draw samples from it with:
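The code chunk for this step was lost in extraction; a minimal sketch, continuing with the `n` object created above (torch distribution objects expose a `sample()` method):

```r
library(torch)
n <- distr_normal(loc = 0, scale = 1)
n$sample() # a single random draw, returned as a torch tensor
```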
or, draw multiple samples:
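A sketch, assuming `sample()` accepts a sample shape as its first argument (as in the PyTorch API it mirrors):

```r
library(torch)
n <- distr_normal(loc = 0, scale = 1)
n$sample(3) # three independent draws: a tensor of shape (3)
```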
We can evaluate the log probability of values:
n$log_prob(0)
log(dnorm(0)) # equivalent R code
or, evaluate multiple log probabilities:
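The original chunk is missing here; a sketch mirroring the single-value example above, assuming `log_prob()` accepts a vector of values:

```r
library(torch)
n <- distr_normal(loc = 0, scale = 1)
n$log_prob(c(0, 2, 4))
log(dnorm(c(0, 2, 4))) # equivalent R code
```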
Multiple distributions
A distribution can take a tensor as its parameters:
b <- distr_bernoulli(probs = torch_tensor(c(0.25, 0.5, 0.75)))
b
This object represents 3 independent Bernoulli distributions, one for each element of the tensor.
We can sample a single observation:
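A sketch of that step (a single call to `sample()` draws one observation from each of the three distributions at once):

```r
library(torch)
b <- distr_bernoulli(probs = torch_tensor(c(0.25, 0.5, 0.75)))
b$sample() # one draw per distribution: a tensor of shape (3)
```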
or, a batch of n observations:
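A sketch with n = 5, assuming the sample shape is passed as the first argument to `sample()`:

```r
library(torch)
b <- distr_bernoulli(probs = torch_tensor(c(0.25, 0.5, 0.75)))
b$sample(5) # a 5 x 3 tensor: 5 draws from each of the 3 distributions
```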
The log_prob method of distributions can be differentiated; thus, distributions can be used to train models in torch.
Let's implement a Gaussian linear model, but first let's simulate some data:
x <- torch_randn(100, 1)
y <- 2*x + 1 + torch_randn(100, 1)
and plot:
plot(as.numeric(x), as.numeric(y))
We can now define our model:
GaussianLinear <- nn_module(
  initialize = function() {
    # this linear predictor will estimate the mean of the normal distribution
    self$linear <- nn_linear(1, 1)
    # this parameter will hold the estimate of the variability
    self$scale <- nn_parameter(torch_ones(1))
  },
  forward = function(x) {
    # we estimate the mean
    loc <- self$linear(x)
    # return a normal distribution
    distr_normal(loc, self$scale)
  }
)
model <- GaussianLinear()
We can now train our model with:
opt <- optim_sgd(model$parameters, lr = 0.1)
for (i in 1:100) {
  opt$zero_grad()
  d <- model(x)
  loss <- torch_mean(-d$log_prob(y))
  loss$backward()
  opt$step()
  if (i %% 10 == 0)
    cat("iter: ", i, " loss: ", loss$item(), "\n")
}
We can see the parameter estimates with:
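The chunk itself is missing; one way, assuming the standard nn_module accessor, is to print the module's parameter list (the weight and bias of the linear layer, plus the scale parameter):

```r
library(torch)
torch_manual_seed(1)

# rebuild and train the model as above so the snippet is self-contained
x <- torch_randn(100, 1)
y <- 2 * x + 1 + torch_randn(100, 1)

GaussianLinear <- nn_module(
  initialize = function() {
    self$linear <- nn_linear(1, 1)
    self$scale <- nn_parameter(torch_ones(1))
  },
  forward = function(x) {
    distr_normal(self$linear(x), self$scale)
  }
)
model <- GaussianLinear()
opt <- optim_sgd(model$parameters, lr = 0.1)
for (i in 1:100) {
  opt$zero_grad()
  loss <- torch_mean(-model(x)$log_prob(y))
  loss$backward()
  opt$step()
}

model$parameters # named list of tensors: linear.weight, linear.bias, scale
```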
and quickly compare with the glm() function:
summary(glm(as.numeric(y) ~ as.numeric(x)))