Formulates a linear multiclass SVM in Crammer-Singer (C-S) style within the CRF framework.
Inputs x are simply feature arrays; labels y are integers from 0 to n_classes - 1.
Parameters:
n_features : int, default=None
    Number of features of inputs x. If None, it is inferred from the data.
n_classes : int, default=None
    Number of classes in the dataset. If None, it is inferred from the data.
class_weight : None or array-like, default=None
    Class weights. If an array-like is passed, it must have length n_classes. None means equal class weights.
rescale_C : bool, default=False
    Whether the class weights should be used to rescale C (liblinear-style) or just to rescale the loss.
Notes
No bias / intercept is learned. It is recommended to add a constant one feature to the data.
It is also highly recommended to use n_jobs=1 in the learner when using this model: trying to parallelize the trivial inference will slow it down considerably.
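A minimal usage sketch, assuming the pystruct package layout (pystruct.models.MultiClassClf, pystruct.learners.OneSlackSSVM) and synthetic data; the added constant one-feature and n_jobs=1 follow the notes above:

import numpy as np
from pystruct.models import MultiClassClf
from pystruct.learners import OneSlackSSVM

# synthetic 3-class data: 60 samples with 5 features each (labels are random,
# so this only demonstrates the API, not a meaningful fit)
rng = np.random.RandomState(0)
X = rng.randn(60, 5)
y = rng.randint(3, size=60)

# add a constant one-feature so a bias / intercept can be learned (see Notes)
X = np.hstack([X, np.ones((X.shape[0], 1))])

model = MultiClassClf(n_features=X.shape[1], n_classes=3)
# n_jobs=1: parallelizing the trivial per-sample inference only adds overhead
learner = OneSlackSSVM(model, C=1.0, n_jobs=1)
learner.fit(X, y)
y_pred = learner.predict(X)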
Methods

batch_inference(X, w[, relaxed])
batch_joint_feature(X, Y[, Y_true])
batch_loss(Y, Y_hat)
batch_loss_augmented_inference(X, Y, w[, ...])
continuous_loss(y, y_hat)
inference(x, w[, relaxed, return_energy])
    Inference for x using parameters w.
initialize(X, Y)
joint_feature(x, y[, y_true])
    Compute joint feature vector of x and y.
loss(y, y_hat)
loss_augmented_inference(x, y, w[, relaxed, ...])
    Loss-augmented inference for x and y using parameters w.
max_loss(y)
__init__(n_features=None, n_classes=None, class_weight=None, rescale_C=False)
inference(x, w, relaxed=None, return_energy=False)
Inference for x using parameters w.
Finds argmax_y np.dot(w, joint_feature(x, y)), i.e. the best possible prediction.
For an unstructured multiclass model (this model), this can easily be done by enumerating all possible y.
Parameters:
x : ndarray, shape (n_features,)
    Input sample features.
w : ndarray, shape (size_joint_feature,)
    Parameters of the SVM.
relaxed : ignored
Returns:
y_pred : int
    Predicted class label.
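The enumeration over y can be sketched with plain NumPy; inference_sketch below is a hypothetical stand-alone function (not the library routine) and assumes w is the concatenation of one weight block per class, as described under joint_feature below:

import numpy as np

def inference_sketch(x, w, n_classes):
    # reshape w into one weight block (row) per class
    W = w.reshape(n_classes, -1)
    scores = W.dot(x)              # np.dot(w, joint_feature(x, y)) for every y
    return int(np.argmax(scores))  # best possible prediction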
joint_feature(x, y, y_true=None)
Compute joint feature vector of x and y.
Feature representation joint_feature, such that the energy of a configuration (x, y) under a weight vector w is given by np.dot(w, joint_feature(x, y)).
Parameters:
x : ndarray, shape (n_features,)
    Input sample features.
y : int
    Class label, between 0 and n_classes - 1.
y_true : int
    True class label. Needed if rescale_C == True.
Returns:
p : ndarray, shape (size_joint_feature,)
    Feature vector associated with state (x, y).
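As a sketch (class_weight and rescale_C handling omitted), the Crammer-Singer joint feature places x in the block of a zero vector that corresponds to class y; joint_feature_sketch below is a hypothetical stand-alone version, not the library method:

import numpy as np

def joint_feature_sketch(x, y, n_classes):
    n_features = x.shape[0]
    p = np.zeros(n_classes * n_features)
    # copy x into the block belonging to class y; all other blocks stay zero
    p[y * n_features:(y + 1) * n_features] = x
    return p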
loss_augmented_inference(x, y, w, relaxed=None, return_energy=False)
Loss-augmented inference for x and y using parameters w.
Maximizes over y_hat: np.dot(joint_feature(x, y_hat), w) + loss(y, y_hat)
Parameters:
x : ndarray, shape (n_features,)
    Unary evidence / input to augment.
y : int
    Ground truth labeling relative to which the loss will be measured.
w : ndarray, shape (size_joint_feature,)
    Weights that will be used for inference.
Returns:
y_hat : int
    Label with the highest sum of loss and score.
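Under the same assumptions as the sketches above, and using a plain zero-one loss (class weights omitted), the loss-augmented search can be sketched as follows; loss_augmented_inference_sketch is hypothetical, not the library method:

import numpy as np

def loss_augmented_inference_sketch(x, y, w, n_classes):
    W = w.reshape(n_classes, -1)
    scores = W.dot(x)      # score of every candidate label y_hat
    scores += 1.0          # zero-one loss: every wrong label pays 1 ...
    scores[y] -= 1.0       # ... and the true label pays 0
    return int(np.argmax(scores))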