
pystruct.learners.StructuredPerceptron — pystruct 0.2.4 documentation

Structured Perceptron training.

Implements a simple structured perceptron with optional averaging. The structured perceptron approximately minimizes the zero-one loss, so learning does not take model.loss into account; the loss is only reported to illustrate learning progress.

Because perceptron learning is not margin-based, the model does not need to provide loss_augmented_inference.
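The perceptron update itself is simple: run the model's inference with the current weights and, when the prediction differs from the true label, move the weights toward the joint feature of the true pair and away from that of the predicted one. The sketch below illustrates this loop with a hypothetical toy model (multiclass classification cast as a trivially structured problem); it is not pystruct code, only a plain-Python illustration of the update rule.

```python
# Hypothetical toy "structured" model: plain multiclass classification.
# joint_feature(x, y) places the input features in the block of w for class y.

N_FEATURES, N_CLASSES = 2, 2

def joint_feature(x, y):
    """Vector with x copied into the block for class y, zeros elsewhere."""
    phi = [0.0] * (N_FEATURES * N_CLASSES)
    for i, xi in enumerate(x):
        phi[y * N_FEATURES + i] = xi
    return phi

def inference(x, w):
    """argmax over y of w . joint_feature(x, y)."""
    scores = [sum(wi * pi for wi, pi in zip(w, joint_feature(x, y)))
              for y in range(N_CLASSES)]
    return scores.index(max(scores))

def fit_perceptron(X, Y, max_iter=100):
    """Structured perceptron training loop (sketch, not pystruct internals)."""
    w = [0.0] * (N_FEATURES * N_CLASSES)
    for _ in range(max_iter):
        mistakes = 0
        for x, y in zip(X, Y):
            y_hat = inference(x, w)
            if y_hat != y:  # zero-one loss drives the update
                phi_true = joint_feature(x, y)
                phi_pred = joint_feature(x, y_hat)
                w = [wi + pt - pp for wi, pt, pp in zip(w, phi_true, phi_pred)]
                mistakes += 1
        if mistakes == 0:  # no errors in a full pass: converged on training set
            break
    return w

X = [(1.0, 0.0), (0.9, 0.2), (0.0, 1.0), (0.1, 0.8)]
Y = [0, 0, 1, 1]
w = fit_perceptron(X, Y)
assert all(inference(x, w) == y for x, y in zip(X, Y))
```

On separable data like this, the loop stops as soon as a full pass produces no mistakes.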

Parameters:

model : StructuredModel

Object containing model structure. Has to implement loss, inference.

max_iter : int (default=100)

Maximum number of passes over dataset to find constraints and update parameters.

verbose : int (default=0)

Verbosity

batch : bool (default=False)

Whether to do batch learning or online learning.

decay_exponent : float, default=0

Exponent for decaying learning rate. Effective learning rate is (t0 + t) ** decay_exponent. Zero means no decay.

decay_t0 : float, default=10

Offset for decaying learning rate. Effective learning rate is (t0 + t) ** decay_exponent.
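The two decay parameters combine into a single multiplier applied to each update. A quick illustration of the effective learning rate (t0 + t) ** decay_exponent at a few update counts t, with the defaults and with a 1/t-style schedule:

```python
def effective_lr(t, decay_exponent=0, decay_t0=10):
    """Effective learning rate at update t, as documented above."""
    return (decay_t0 + t) ** decay_exponent

# decay_exponent=0 (the default): the rate stays constant at 1.0.
assert effective_lr(0) == 1.0
assert effective_lr(1000) == 1.0

# decay_exponent=-1 gives a 1/(t0 + t) schedule that shrinks over time.
assert effective_lr(0, decay_exponent=-1) == 1 / 10
assert effective_lr(90, decay_exponent=-1) == 1 / 100
```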

average : bool or int, default=False

Whether to average over all weight vectors obtained during training or simply keep the last one. average=False performs no averaging; average=True averages over all epochs; average=k with k >= 0 waits k epochs before averaging starts; average=k with k < 0 averages over the last k epochs. So far k = -1 is the only negative value supported.
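Averaging keeps a mean of the weight vectors seen during training, which typically stabilizes the otherwise noisy perceptron iterates. The sketch below shows what the averaging options compute on hypothetical per-epoch weight snapshots; it is an illustration of the semantics described above, not pystruct's internal implementation.

```python
def averaged_weights(snapshots, skip_epochs=0):
    """Mean of the weight vectors, optionally skipping the first few epochs
    (average=k with k >= 0 waits k epochs before averaging starts)."""
    kept = snapshots[skip_epochs:]
    n = len(kept)
    return [sum(ws) / n for ws in zip(*kept)]

# Hypothetical weight vectors recorded at the end of each epoch.
snapshots = [[0.0, 2.0], [2.0, 4.0], [4.0, 6.0]]

assert averaged_weights(snapshots) == [2.0, 4.0]                  # average=True
assert averaged_weights(snapshots, skip_epochs=2) == [4.0, 6.0]   # average=2
assert averaged_weights(snapshots[-1:]) == [4.0, 6.0]             # last epoch only
```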

logger : logger object, default=None

Attributes:

w : nd-array, shape=(model.size_joint_feature,)

The learned weight vector.

loss_curve_ : list of float

List of loss values after each pass through the dataset.

Methods

fit(X, Y[, initialize]) Learn parameters using structured perceptron.

get_params([deep]) Get parameters for this estimator.

predict(X) Predict output on examples in X.

score(X, Y) Compute score as 1 - loss over whole data set.

set_params(**params) Set the parameters of this estimator.
__init__(model, max_iter=100, verbose=0, batch=False, decay_exponent=0, decay_t0=10, average=False, n_jobs=1, logger=None)
fit(X, Y, initialize=True)

Learn parameters using structured perceptron.

Parameters:

X : iterable

Training instances. Contains the structured input objects. No requirement on the particular form of entries of X is made.

Y : iterable

Training labels. Contains the structured labels for inputs in X. Needs to have the same length as X.

initialize : boolean, default=True

Whether to initialize the model for the data. Leave this True unless you really know what you are doing.

get_params(deep=True)

Get parameters for this estimator.

Parameters:

deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : mapping of string to any

Parameter names mapped to their values.

predict(X)

Predict output on examples in X.

Parameters:

X : iterable

Training instances. Contains the structured input objects.

Returns:

Y_pred : list

List of inference results for X using the learned parameters.
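predict is essentially a loop over the inputs, calling the model's inference with the learned weights. A sketch of that behavior against a hypothetical model object (a stand-in for a StructuredModel, not pystruct internals):

```python
class ToyModel:
    """Hypothetical stand-in for a StructuredModel: inference picks the
    index where x * w is largest (a trivial 'structure')."""
    def inference(self, x, w):
        scores = [xi * wi for xi, wi in zip(x, w)]
        return scores.index(max(scores))

def predict(model, X, w):
    """List of inference results for X using the learned parameters."""
    return [model.inference(x, w) for x in X]

model = ToyModel()
w = [1.0, 2.0]
assert predict(model, [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)], w) == [0, 1, 1]
```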

score(X, Y)

Compute score as 1 - loss over whole data set.

Returns the average accuracy (in terms of model.loss) over X and Y.

Parameters:

X : iterable

Evaluation data.

Y : iterable

True labels.

Returns:

score : float

Average of 1 - loss over the given examples.
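The score is one minus the mean of model.loss over the data set; with zero-one loss this reduces to plain accuracy. A small sketch of that computation (toy loss function and predictions, not pystruct internals):

```python
def score(Y_true, Y_pred, loss=lambda y, y_hat: float(y != y_hat)):
    """1 - mean loss; with zero-one loss this equals accuracy."""
    losses = [loss(y, y_hat) for y, y_hat in zip(Y_true, Y_pred)]
    return 1.0 - sum(losses) / len(losses)

assert score([0, 1, 1, 0], [0, 1, 0, 0]) == 0.75  # 3 of 4 labels correct
assert score([0, 1], [0, 1]) == 1.0               # perfect predictions
```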

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
