pystruct.learners.SubgradientLatentSSVM — pystruct 0.2.4 documentation

model : StructuredModel

Object containing the model structure. Must implement loss, inference and loss_augmented_inference.
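To make the required interface concrete, here is a minimal toy sketch of the three methods named above. It is not a drop-in pystruct model (real StructuredModel subclasses also define joint features and related attributes); the toy label space and scoring rule below are invented for illustration.

```python
class ToyModel:
    """Toy model: binary labels in {0, 1}, input x is a single float score."""

    def loss(self, y_true, y_pred):
        # Hamming-style loss: 0 if the labels match, 1 otherwise.
        return 0 if y_true == y_pred else 1

    def inference(self, x, w):
        # Predict the label whose score w[y] * x is largest.
        return max((0, 1), key=lambda y: w[y] * x)

    def loss_augmented_inference(self, x, y_true, w):
        # Like inference, but each label's score is inflated by its loss,
        # yielding the "most violating" label used in the subgradient step.
        return max((0, 1), key=lambda y: w[y] * x + self.loss(y_true, y))
```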

max_iter : int, default=100

Maximum number of passes over the dataset to find constraints and perform updates.

C : float, default=1.

Regularization parameter.

verbose : int, default=0

Verbosity.

learning_rate : float or ‘auto’, default=’auto’

Learning rate used in subgradient descent. If ‘auto’, the pegasos schedule is used, which starts with learning_rate = n_samples * C.
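A small sketch of how the ‘auto’ setting could resolve, based only on the statement above that the rate starts at n_samples * C (the helper name is hypothetical, not pystruct API):

```python
def initial_learning_rate(learning_rate, n_samples, C):
    # 'auto' follows the pegasos-style schedule described above:
    # the starting rate is n_samples * C.
    if learning_rate == 'auto':
        return n_samples * C
    return learning_rate
```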

momentum : float, default=0.0

Momentum used in subgradient descent.
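The role of the momentum term can be seen in a textbook subgradient step (a generic sketch, not pystruct's internal code): the velocity accumulates a decaying sum of past subgradients, so momentum=0.0 reduces to plain subgradient descent.

```python
def momentum_step(w, grad, velocity, learning_rate, momentum=0.0):
    # Update the velocity: keep a momentum-weighted fraction of the old
    # velocity and subtract the current (scaled) subgradient.
    velocity = [momentum * v - learning_rate * g
                for v, g in zip(velocity, grad)]
    # Move the weights along the velocity.
    w = [wi + vi for wi, vi in zip(w, velocity)]
    return w, velocity
```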

n_jobs : int, default=1

Number of parallel jobs for inference. -1 means one job per CPU.

show_loss_every : int, default=0

Controls how often the Hamming loss is computed (for monitoring purposes). Zero means never; otherwise it is computed every show_loss_every’th epoch.

decay_exponent : float, default=1

Exponent for the decaying learning rate. The effective learning rate is learning_rate / (decay_t0 + t) ** decay_exponent. Zero means no decay.

decay_t0 : float, default=10

Offset for the decaying learning rate. The effective learning rate is learning_rate / (decay_t0 + t) ** decay_exponent.
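The decay formula stated for decay_exponent and decay_t0 translates directly into code (the function name is hypothetical; the formula is copied from the description above):

```python
def effective_learning_rate(learning_rate, t, decay_t0=10, decay_exponent=1):
    # learning_rate / (decay_t0 + t) ** decay_exponent, as described above.
    # With decay_exponent == 0 the divisor is 1, i.e. no decay.
    return learning_rate / (decay_t0 + t) ** decay_exponent
```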

break_on_no_constraints : bool, default=True

Stop early when no new constraints are found.

averaging : string, default=None

Whether and how to average weights. Possible options are ‘linear’, ‘squared’ and None; the string reflects how the iterates are weighted in the average.

Uniform averaging is not implemented as it is worse than linear weighted averaging or no averaging.
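One way to read the weighting options is that iterate w_t receives weight t for ‘linear’ and t**2 for ‘squared’. That interpretation is an assumption consistent with the description above, not pystruct's actual code; the sketch below applies it to a list of weight vectors:

```python
def average_weights(iterates, averaging=None):
    # iterates: list of weight vectors w_1, ..., w_T (lists of floats).
    if averaging is None:
        return iterates[-1]  # no averaging: keep the final iterate
    # Assumed weighting: t for 'linear', t**2 for 'squared'.
    power = {'linear': 1, 'squared': 2}[averaging]
    coeffs = [t ** power for t in range(1, len(iterates) + 1)]
    total = sum(coeffs)
    dim = len(iterates[0])
    return [sum(c * w[i] for c, w in zip(coeffs, iterates)) / total
            for i in range(dim)]
```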

