Optuna provides various visualization features in optuna.visualization
to analyze optimization results visually.
Note that this tutorial requires Plotly to be installed:
$ pip install plotly

# Required if you are running this tutorial in Jupyter Notebook.
$ pip install nbformat
If you prefer to use Matplotlib instead of Plotly, please run the following command:

$ pip install matplotlib
This tutorial walks you through this module by visualizing the optimization results of a PyTorch model trained on the FashionMNIST dataset.
For visualizing multi-objective optimization (i.e., the usage of optuna.visualization.plot_pareto_front()
), please refer to the tutorial of Multi-objective Optimization with Optuna.
Note
By using Optuna Dashboard, you can also check the optimization history, hyperparameter importances, hyperparameter relationships, etc. in graphs and tables. Please make your study persistent using an RDB backend and execute the following commands to run Optuna Dashboard.
$ pip install optuna-dashboard
$ optuna-dashboard sqlite:///example-study.db
Please check out the GitHub repository for more details.
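To make a study persistent, pass an RDB URL as the storage argument when creating it. A minimal sketch (the study name and SQLite file name are illustrative, chosen to match the command above):

import optuna

study = optuna.create_study(
    study_name="example-study",
    storage="sqlite:///example-study.db",
    direction="maximize",
    load_if_exists=True,  # reuse the study if the database file already exists
)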
(Optuna Dashboard screenshots: "Manage Studies" and "Visualize with Interactive Graphs")
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

import optuna

# You can use Matplotlib instead of Plotly for visualization by simply replacing `optuna.visualization` with
# `optuna.visualization.matplotlib` in the following examples.
from optuna.visualization import plot_contour
from optuna.visualization import plot_edf
from optuna.visualization import plot_intermediate_values
from optuna.visualization import plot_optimization_history
from optuna.visualization import plot_parallel_coordinate
from optuna.visualization import plot_param_importances
from optuna.visualization import plot_rank
from optuna.visualization import plot_slice
from optuna.visualization import plot_timeline


SEED = 13
torch.manual_seed(SEED)
DEVICE = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
DIR = ".."
BATCHSIZE = 128
N_TRAIN_EXAMPLES = BATCHSIZE * 30
N_VALID_EXAMPLES = BATCHSIZE * 10


def define_model(trial):
    n_layers = trial.suggest_int("n_layers", 1, 2)
    layers = []
    in_features = 28 * 28
    for i in range(n_layers):
        out_features = trial.suggest_int("n_units_l{}".format(i), 64, 512)
        layers.append(nn.Linear(in_features, out_features))
        layers.append(nn.ReLU())
        in_features = out_features
    layers.append(nn.Linear(in_features, 10))
    layers.append(nn.LogSoftmax(dim=1))
    return nn.Sequential(*layers)


# Defines training and evaluation.
def train_model(model, optimizer, train_loader):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.view(-1, 28 * 28).to(DEVICE), target.to(DEVICE)
        optimizer.zero_grad()
        F.nll_loss(model(data), target).backward()
        optimizer.step()


def eval_model(model, valid_loader):
    model.eval()
    correct = 0
    with torch.no_grad():
        for batch_idx, (data, target) in enumerate(valid_loader):
            data, target = data.view(-1, 28 * 28).to(DEVICE), target.to(DEVICE)
            pred = model(data).argmax(dim=1, keepdim=True)
            correct += pred.eq(target.view_as(pred)).sum().item()

    accuracy = correct / N_VALID_EXAMPLES
    return accuracy
Define the objective function.
def objective(trial):
    train_dataset = torchvision.datasets.FashionMNIST(
        DIR, train=True, download=True, transform=torchvision.transforms.ToTensor()
    )
    train_loader = torch.utils.data.DataLoader(
        torch.utils.data.Subset(train_dataset, list(range(N_TRAIN_EXAMPLES))),
        batch_size=BATCHSIZE,
        shuffle=True,
    )

    val_dataset = torchvision.datasets.FashionMNIST(
        DIR, train=False, transform=torchvision.transforms.ToTensor()
    )
    val_loader = torch.utils.data.DataLoader(
        torch.utils.data.Subset(val_dataset, list(range(N_VALID_EXAMPLES))),
        batch_size=BATCHSIZE,
        shuffle=True,
    )

    model = define_model(trial).to(DEVICE)
    optimizer = torch.optim.Adam(
        model.parameters(), trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    )

    for epoch in range(10):
        train_model(model, optimizer, train_loader)

        val_accuracy = eval_model(model, val_loader)
        trial.report(val_accuracy, epoch)

        if trial.should_prune():
            raise optuna.exceptions.TrialPruned()

    return val_accuracy
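The plotting calls below reference a completed study object. A minimal sketch of creating and running one follows; the choice of sampler, pruner, and trial budget here is an assumption for illustration, not prescribed by the code above.

study = optuna.create_study(
    direction="maximize",
    # Assumed settings: a seeded TPE sampler for reproducibility and a median pruner,
    # which pairs with the trial.report()/should_prune() calls in the objective.
    sampler=optuna.samplers.TPESampler(seed=SEED),
    pruner=optuna.pruners.MedianPruner(),
)
study.optimize(objective, n_trials=30, timeout=600)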
(FashionMNIST download progress output omitted)

Plot functions
Visualize the optimization history. See plot_optimization_history()
for the details.
plot_optimization_history(study)
Visualize the learning curves of the trials. See plot_intermediate_values()
for the details.
plot_intermediate_values(study)
Visualize high-dimensional parameter relationships. See plot_parallel_coordinate()
for the details.
plot_parallel_coordinate(study)
Select parameters to visualize.
plot_parallel_coordinate(study, params=["lr", "n_layers"])
Visualize hyperparameter relationships. See plot_contour()
for the details.
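The plot over all parameters follows the same pattern as the calls above:

plot_contour(study)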
Select parameters to visualize.
plot_contour(study, params=["lr", "n_layers"])
Visualize individual hyperparameters as slice plots. See plot_slice()
for the details.
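Again, all parameters can be plotted at once:

plot_slice(study)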
Select parameters to visualize.
plot_slice(study, params=["lr", "n_layers"])
Visualize parameter importances. See plot_param_importances()
for the details.
plot_param_importances(study)
Learn which hyperparameters affect the trial duration by using hyperparameter importance with a custom target.
optuna.visualization.plot_param_importances(
    study, target=lambda t: t.duration.total_seconds(), target_name="duration"
)
Visualize the empirical distribution function. See plot_edf()
for the details.
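The corresponding call mirrors the earlier plots (plot_edf is imported at the top):

plot_edf(study)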
Visualize parameter relations with scatter plots colored by objective values. See plot_rank()
for the details.
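Likewise for the rank plot:

plot_rank(study)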
Visualize the optimization timeline of performed trials. See plot_timeline()
for the details.
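And for the timeline plot:

plot_timeline(study)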
In optuna.visualization and optuna.visualization.matplotlib, a function returns an editable figure object: a plotly.graph_objects.Figure or a matplotlib.axes.Axes, depending on the module. This allows users to modify the generated figure to their needs by using the API of the visualization library. The following example manually replaces the figure title and axis labels drawn by the Plotly-based plot_intermediate_values().
fig = plot_intermediate_values(study)
fig.update_layout(
    title="Hyperparameter optimization for FashionMNIST classification",
    xaxis_title="Epoch",
    yaxis_title="Validation Accuracy",
)
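Because the returned object is an ordinary Plotly figure, it can also be exported with Plotly's own API, for example (the file name here is illustrative):

fig.write_html("intermediate_values.html")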
Total running time of the script: (1 minute 18.696 seconds)
Gallery generated by Sphinx-Gallery