Uses downhill simplex for optimizing an unconstrained primal.
This serves mainly as a sanity check on the other implementations, since it is easier to verify for correctness.
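The downhill simplex method is also known as Nelder-Mead. As a hedged sketch (not the estimator's actual objective), here is how an unconstrained minimization with downhill simplex looks using SciPy; the quadratic `objective` is a hypothetical stand-in for the primal objective:

```python
import numpy as np
from scipy.optimize import minimize

def objective(w):
    # Hypothetical smooth unconstrained objective standing in for
    # the primal; its minimum is at w = (1, 1, 1).
    return np.sum((w - 1.0) ** 2)

# Downhill simplex is selected in SciPy via method="Nelder-Mead".
result = minimize(objective, x0=np.zeros(3), method="Nelder-Mead")
```

The same pattern applies to the real primal objective: no gradients are required, which is what makes the method convenient as a correctness check.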
Methods
fit(X, Y)
get_params([deep]): Get parameters for this estimator.
predict(X): Predict output on examples in X.
score(X, Y): Compute score as 1 - loss over the whole data set.
set_params(**params): Set the parameters of this estimator.
__init__(model, max_iter=100, C=1.0, verbose=0, n_jobs=1, show_loss_every=0, logger=None)
get_params(deep=True)
Get parameters for this estimator.
Parameters:
    deep : boolean, optional
        If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
    params : mapping of string to any
        Parameter names mapped to their values.
predict(X)
Predict output on examples in X.
Parameters:
    X : iterable
        Training instances. Contains the structured input objects.
Returns:
    Y_pred : list
        List of inference results for X using the learned parameters.
score(X, Y)
Compute score as 1 - loss over the whole data set.
Returns the average accuracy (in terms of model.loss) over X and Y.
Parameters:
    X : iterable
        Evaluation data.
    Y : iterable
        True labels.
Returns:
    score : float
        Average of 1 - loss over training examples.
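The score convention above can be sketched directly. The per-example `zero_one_loss` below is a hypothetical example of a loss in [0, 1]; the actual loss is whatever `model.loss` computes:

```python
def zero_one_loss(y_true, y_pred):
    # Hypothetical per-example loss in [0, 1]: 0 on a match, 1 otherwise.
    return 0.0 if y_true == y_pred else 1.0

def score(Y_true, Y_pred):
    # Average of 1 - loss over all examples, as documented above.
    losses = [zero_one_loss(yt, yp) for yt, yp in zip(Y_true, Y_pred)]
    return 1.0 - sum(losses) / len(losses)
```

For instance, three correct predictions out of four give a score of 0.75.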
set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The former have parameters of the form <component>__<parameter>
so that it’s possible to update each component of a nested object.
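The `<component>__<parameter>` convention can be illustrated with scikit-learn (assumed available here), which uses the same scheme; the pipeline and its components are illustrative, not part of this estimator:

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression())])

# "clf__C" addresses the C parameter of the nested "clf" component.
pipe.set_params(clf__C=10.0)
```

The double underscore separates the component name from the parameter name, so nested objects can be reconfigured without rebuilding them.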