pystruct.models.BinaryClf — pystruct 0.2.4 documentation

Formulates a standard linear binary SVM in the CRF framework.

Inputs x are simply feature arrays, labels y are -1 or 1.

Parameters:

n_features : int or None, default=None

Number of features of inputs x. If None, it is inferred from data.

Notes

No bias / intercept is learned. It is recommended to add a constant-one feature to the data.

It is also highly recommended to use n_jobs=1 in the learner when using this model; trying to parallelize the trivial inference will slow the inference down a lot!
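The constant-one feature recommended above can be added in one line of NumPy. A minimal sketch (the array values are illustrative, not from pystruct):

```python
import numpy as np

# Toy feature matrix; values are illustrative only.
X = np.array([[0.5, -1.2],
              [2.0,  0.3]])

# Append a constant-one column so that one learned weight can act as
# the intercept that BinaryClf itself does not learn.
X_bias = np.hstack([X, np.ones((X.shape[0], 1))])
print(X_bias.shape)  # (2, 3)
```

The augmented arrays are then passed as the inputs x in place of the originals.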

Methods

batch_inference(X, w)
batch_joint_feature(X, Y)
batch_loss(Y, Y_hat)
batch_loss_augmented_inference(X, Y, w[, ...])
continuous_loss(y, y_hat)
inference(x, w[, relaxed]) — Inference for x using parameters w.
initialize(X, Y)
joint_feature(x, y) — Compute joint feature vector of x and y.
loss(y, y_hat)
loss_augmented_inference(x, y, w[, relaxed]) — Loss-augmented inference for x and y using parameters w.
max_loss(y)
__init__(n_features=None)[source]
inference(x, w, relaxed=None)[source]

Inference for x using parameters w.

Finds argmax_y np.dot(w, joint_feature(x, y)), i.e. the best possible prediction.

For a binary SVM, this is just sign(np.dot(w, x) + b).

Parameters:

x : ndarray, shape (n_features,)

Input sample features.

w : ndarray, shape (size_joint_feature,)

Parameters of the SVM.

relaxed : ignored

Returns:

y_pred : int

Predicted class label.
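The prediction rule above can be sketched in plain NumPy. This is a minimal sketch, not pystruct's actual implementation; in particular, breaking the tie at a score of exactly zero toward +1 is an assumption:

```python
import numpy as np

def binary_inference(x, w):
    # The best label maximizes y * np.dot(w, x) over y in {-1, +1},
    # which is just the sign of the score.  Tie-breaking at 0 toward
    # +1 is an assumption of this sketch.
    return 1 if np.dot(w, x) >= 0 else -1

w = np.array([1.0, -2.0])
print(binary_inference(np.array([3.0, 1.0]), w))  # 1
print(binary_inference(np.array([0.0, 1.0]), w))  # -1
```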

joint_feature(x, y)[source]

Compute joint feature vector of x and y.

Feature representation joint_feature, such that the energy of a configuration (x, y) under a weight vector w is given by np.dot(w, joint_feature(x, y)).

Parameters:

x : ndarray, shape (n_features,)

Input sample features.

y : int

Class label, either +1 or -1.

Returns:

p : ndarray, shape (size_joint_feature,)

Feature vector associated with state (x, y).
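One joint feature consistent with the energy relation described above is y * x, so that np.dot(w, joint_feature(x, y)) equals y * np.dot(w, x) and the maximizing label is sign(np.dot(w, x)). This is a sketch; pystruct's exact scaling is an assumption here and may differ by a constant factor:

```python
import numpy as np

def joint_feature(x, y):
    # Sketch only: with joint_feature(x, y) = y * x, the energy
    # np.dot(w, joint_feature(x, y)) equals y * np.dot(w, x).
    # pystruct's internal scaling may differ by a constant factor.
    return y * np.asarray(x, dtype=float)

w = np.array([1.0, -2.0])
x = np.array([3.0, 1.0])
print(np.dot(w, joint_feature(x, 1)))   # 1.0
print(np.dot(w, joint_feature(x, -1)))  # -1.0
```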

loss_augmented_inference(x, y, w, relaxed=None)[source]

Loss-augmented inference for x and y using parameters w.

Maximizes over y_hat: np.dot(joint_feature(x, y_hat), w) + loss(y, y_hat), which is just sign(np.dot(x, w) + b - y).

Parameters:

x : ndarray, shape (n_features,)

Unary evidence / input to augment.

y : int

Ground truth labeling relative to which the loss will be measured.

w : ndarray, shape (size_joint_feature,)

Weights that will be used for inference.

Returns:

y_hat : int

Label with highest sum of loss and score.
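Since there are only two labels, this behavior can be sketched as a brute-force search. The zero-one loss and the joint feature y_hat * x used below are assumptions of this sketch, not pystruct's exact definitions:

```python
import numpy as np

def loss_augmented_inference(x, y, w):
    # Sketch: zero-one loss and joint_feature(x, y_hat) = y_hat * x
    # are assumptions of this example.  Return the label with the
    # highest sum of score and loss, i.e. the most violating label.
    best, best_val = None, -np.inf
    for y_hat in (-1, 1):
        val = np.dot(w, y_hat * x) + (0.0 if y_hat == y else 1.0)
        if val > best_val:
            best, best_val = y_hat, val
    return best

w = np.array([1.0, -2.0])
x = np.array([2.4, 1.0])  # score np.dot(w, x) is about 0.4
# With ground truth +1, the weak positive score is outweighed by the
# loss term, so the most violating label is -1.
print(loss_augmented_inference(x, 1, w))   # -1
print(loss_augmented_inference(x, -1, w))  # 1
```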

