scikit-learn

An overview of scikit-learn.

Classification and regression

Linear models: from sklearn import linear_model

  • Ordinary least squares (LinearRegression):
    reg = linear_model.LinearRegression()
    reg.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
    reg.coef_  # array([ 0.5,  0.5])

  • Ridge regression:
    from sklearn import linear_model
    reg = linear_model.Ridge(alpha=0.5)
    reg.fit([[0, 0], [0, 0], [1, 1]], [0, 0.1, 1])
    reg.coef_

  • Lasso:
    from sklearn import linear_model
    reg = linear_model.Lasso(alpha=0.1)
    reg.fit([[0, 0], [1, 1]], [0, 1])
    reg.predict([[1, 1]])

  • Elastic Net: essentially a compromise between the Ridge (L2) and Lasso (L1) penalties
    from sklearn.linear_model import ElasticNet
    # X_train, y_train, X_test are placeholders for your own data
    enet = ElasticNet(alpha=0.1, l1_ratio=0.7)
    y_pred_enet = enet.fit(X_train, y_train).predict(X_test)

  • and so on

LDA and QDA

Kernel ridge regression

SVM

SGD

Nearest Neighbors

Naive Bayes

Decision Trees

Ensemble methods

From Machine Learning by Zhou Zhihua:
Current ensemble learning methods fall roughly into two categories:
sequential methods, where the individual learners have strong dependencies and must be generated serially (representative: Boosting);
parallel methods, where the individual learners have no strong dependencies and can be generated in parallel (representatives: Bagging and random forests).
An ensemble method does not fuse several different algorithms; rather, it combines multiple differently-trained instances of one base method.

AdaBoost
The core principle of AdaBoost is to fit a sequence of weak learners (i.e., models that are only slightly better than random guessing, such as small decision trees) on repeatedly modified versions of the data. The predictions from all of them are then combined through a weighted majority vote (or sum) to produce the final prediction.
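
A minimal sketch of this, assuming decision stumps as the weak learners and a synthetic dataset from make_classification (both are illustrative choices, not from the text above):

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# fit a sequence of decision stumps on reweighted versions of the data and
# combine them by weighted voting (the dataset here is a toy assumption)
X, y = make_classification(n_samples=500, random_state=0)
clf = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=1),
                         n_estimators=100, random_state=0)
clf.fit(X, y)
clf.score(X, y)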

Feature Selection

Feature selection is usually used as a pre-processing step before doing the actual learning. The recommended way to do this in scikit-learn is to use a sklearn.pipeline.Pipeline:

from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import Pipeline

clf = Pipeline([
    ('feature_selection', SelectFromModel(LinearSVC(penalty="l1", dual=False))),
    ('classification', RandomForestClassifier())
])
clf.fit(X, y)

In this snippet we make use of a sklearn.svm.LinearSVC coupled with sklearn.feature_selection.SelectFromModel to evaluate feature importances and select the most relevant features. Then, a sklearn.ensemble.RandomForestClassifier is trained on the transformed output, i.e. using only the relevant features. You can of course perform similar operations with other feature selection methods, and with any classifier that provides a way to evaluate feature importances.

Clustering

KMeans

MiniBatchKMeans

Trains by repeatedly drawing random subsets from a large dataset.
This example uses a large dataset of faces to learn a set of 20 x 20 image patches that constitute faces.
From the programming standpoint, it is interesting because it shows how to use the online API of scikit-learn to process a very large dataset by chunks. The way we proceed is that we load an image at a time and extract randomly 50 patches from this image. Once we have accumulated 500 of these patches (using 10 images), we run the partial_fit method of the online KMeans object, MiniBatchKMeans.
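
A rough sketch of that chunked pattern, with random arrays standing in for the extracted face patches (the patch extraction itself is omitted):

import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.RandomState(0)
kmeans = MiniBatchKMeans(n_clusters=20, random_state=0)

# each chunk plays the role of 500 accumulated 20x20 patches (flattened to 400 values)
for _ in range(5):
    patch_buffer = rng.rand(500, 400)
    kmeans.partial_fit(patch_buffer)   # update the centroids incrementally

kmeans.cluster_centers_.shape   # (20, 400)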

MeanShift

Spectral Clustering

SpectralClustering does a low-dimension embedding of the affinity matrix between samples, followed by a KMeans in the low dimensional space. It is especially efficient if the affinity matrix is sparse and the pyamg module is installed. SpectralClustering requires the number of clusters to be specified. It works well for a small number of clusters but is not advised when using many clusters.
For two clusters, it solves a convex relaxation of the normalised cuts problem on the similarity graph: cutting the graph in two so that the weight of the edges cut is small compared to the weights of the edges inside each cluster. This criterion is especially interesting when working on images: graph vertices are pixels, and edges of the similarity graph are a function of the gradient of the image.

Different label-assignment strategies produce different results.
Different label assignment strategies can be used, corresponding to the assign_labels parameter of SpectralClustering. The "kmeans" strategy can match finer details of the data, but it can be more unstable. In particular, unless you control the random_state, it may not be reproducible from run-to-run, as it depends on a random initialization. On the other hand, the "discretize" strategy is 100% reproducible, but it tends to create parcels of fairly even and geometrical shape.
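
A small sketch comparing the two assign_labels strategies on synthetic blobs (the dataset and parameters are assumptions):

from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
for strategy in ("kmeans", "discretize"):
    sc = SpectralClustering(n_clusters=3, assign_labels=strategy,
                            affinity="nearest_neighbors", random_state=0)
    labels = sc.fit_predict(X)   # the "kmeans" run depends on random_state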

Hierarchical Clustering

From Machine Learning by Zhou Zhihua:
AGNES (agglomerative hierarchical clustering) uses a bottom-up merging strategy. It first treats every sample in the dataset as an initial cluster, then at each step finds the two closest clusters and merges them, repeating the process until the preset number of clusters is reached.

Connectivity constraints
An interesting aspect of AgglomerativeClustering is that connectivity constraints can be added to this algorithm (only adjacent clusters can be merged together), through a connectivity matrix that defines for each sample the neighboring samples following a given structure of the data.
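
A sketch of this bottom-up clustering with a k-nearest-neighbors connectivity matrix (data and parameters are assumptions):

from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs
from sklearn.neighbors import kneighbors_graph

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
connectivity = kneighbors_graph(X, n_neighbors=10, include_self=False)

# only samples linked in the connectivity graph may be merged
agg = AgglomerativeClustering(n_clusters=3, connectivity=connectivity, linkage="ward")
labels = agg.fit_predict(X)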

DBSCAN: density-based clustering

From Machine Learning: density-based clustering examines the connectability between samples from the viewpoint of sample density, and grows clusters by repeatedly expanding sets of connectable samples to obtain the final clustering result.
A cluster is the maximal set of density-connected samples derived from the density-reachability relation. (p. 213)
The DBSCAN algorithm views clusters as areas of high density separated by areas of low density. Due to this rather generic view, clusters found by DBSCAN can be any shape, as opposed to k-means which assumes that clusters are convex shaped.
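
A quick sketch on the two-moons dataset, whose non-convex clusters defeat k-means but not DBSCAN (the dataset and the eps/min_samples values are assumptions):

from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)
db = DBSCAN(eps=0.3, min_samples=5).fit(X)
db.labels_   # cluster index per sample; -1 marks noise points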

Gaussian mixture clustering

Others

Birch

Clustering performance evaluation

Given the knowledge of the ground truth class assignments labels_true and our clustering algorithm assignments of the same samples labels_pred, the adjusted Rand index is a function that measures the similarity of the two assignments, ignoring permutations and with chance normalization:

from sklearn import metrics
labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 2, 2]

metrics.adjusted_rand_score(labels_true, labels_pred)
# 0.24...

This is simply a measure of how good the clustering result is.

  • Mutual Information based scores
    The Mutual Information is a function that measures the agreement of the two assignments, ignoring permutations. Two different normalized versions of this measure are available, Normalized Mutual Information (NMI) and Adjusted Mutual Information (AMI). NMI is often used in the literature, while AMI was proposed more recently and is normalized against chance:
    from sklearn import metrics
    labels_true = [0, 0, 0, 1, 1, 1]
    labels_pred = [0, 0, 1, 1, 2, 2]

    metrics.adjusted_mutual_info_score(labels_true, labels_pred)

  • Homogeneity, completeness and V-measure
    In particular Rosenberg and Hirschberg (2007) define the following two desirable objectives for any cluster assignment:
    homogeneity: each cluster contains only members of a single class.
    completeness: all members of a given class are assigned to the same cluster.

    from sklearn import metrics
    labels_true = [0, 0, 0, 1, 1, 1]
    labels_pred = [0, 0, 1, 1, 2, 2]

    metrics.homogeneity_score(labels_true, labels_pred)
    # 0.66...

    metrics.completeness_score(labels_true, labels_pred)
    # 0.42...

Dimensionality reduction: PCA

PCA

Incremental PCA

The PCA object is very useful, but has certain limitations for large datasets. The biggest limitation is that PCA only supports batch processing, which means all of the data to be processed must fit in main memory. The IncrementalPCA object uses a different form of processing and allows for partial computations which almost exactly match the results of PCA while processing the data in a minibatch fashion.
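
A sketch of the two equivalent ways to feed the data in chunks (random data as a stand-in):

import numpy as np
from sklearn.decomposition import IncrementalPCA

X = np.random.RandomState(0).rand(1000, 50)

ipca = IncrementalPCA(n_components=10, batch_size=100)
ipca.fit(X)                      # iterates internally over chunks of 100 rows

ipca2 = IncrementalPCA(n_components=10)
for chunk in np.array_split(X, 10):
    ipca2.partial_fit(chunk)     # explicit streaming form
X_reduced = ipca2.transform(X)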

Kernel PCA

KernelPCA is an extension of PCA which achieves non-linear dimensionality reduction through the use of kernels (see Pairwise metrics, Affinities and Kernels). It has many applications including denoising, compression and structured prediction (kernel dependency estimation). KernelPCA supports both transform and inverse_transform.
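
A sketch with an RBF kernel; note that fit_inverse_transform=True is needed before inverse_transform can be used (the data and gamma value are assumptions):

import numpy as np
from sklearn.decomposition import KernelPCA

X = np.random.RandomState(0).rand(200, 5)
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10, fit_inverse_transform=True)
X_kpca = kpca.fit_transform(X)
X_back = kpca.inverse_transform(X_kpca)   # approximate reconstruction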

Sparse PCA

SparsePCA is a variant of PCA, with the goal of extracting the set of sparse components that best reconstruct the data.

Mini-batch sparse PCA (MiniBatchSparsePCA) is a variant of SparsePCA that is faster but less accurate. The increased speed is reached by iterating over small chunks of the set of features, for a given number of iterations.

PCA is at a disadvantage when the underlying components are sparse; Sparse PCA addresses this, and is useful in face recognition, for example.
Principal component analysis (PCA) has the disadvantage that the components extracted by this method have exclusively dense expressions, i.e. they have non-zero coefficients when expressed as linear combinations of the original variables. This can make interpretation difficult. In many cases, the real underlying components can be more naturally imagined as sparse vectors; for example in face recognition, components might naturally map to parts of faces.
Sparse principal components yield a more parsimonious, interpretable representation, clearly emphasizing which of the original features contribute to the differences between samples.
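
A sketch showing that many SparsePCA loadings come out exactly zero, which is what makes the components easier to interpret (the data and alpha value are assumptions):

import numpy as np
from sklearn.decomposition import SparsePCA

X = np.random.RandomState(0).rand(100, 30)
spca = SparsePCA(n_components=5, alpha=1, random_state=0)
spca.fit(X)
np.mean(spca.components_ == 0)   # fraction of zero loadings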

Others

Truncated SVD and LSA (latent semantic analysis)

DictionaryLearning

FA(Factor Analysis)

ICA(Independent component analysis)

NMF(Non-negative matrix factorization)

LDA (Latent Dirichlet Allocation)

Model Selection

Cross-validation of the dataset

Cross-validation

To guard against overfitting, you need separate training and test sets; train_test_split gives a quick way to split the data.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn import datasets
from sklearn import svm

iris = datasets.load_iris()
# iris.data.shape, iris.target.shape
# ((150, 4), (150,))

X_train, X_test, y_train, y_test = train_test_split(
     iris.data, iris.target, test_size=0.4, random_state=0)

# X_train.shape, y_train.shape
# ((90, 4), (90,))
# X_test.shape, y_test.shape
# ((60, 4), (60,))

clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train)
clf.score(X_test, y_test)   #0.96

score

from sklearn import preprocessing
X_train, X_test, y_train, y_test = train_test_split(
     iris.data, iris.target, test_size=0.4, random_state=0)
scaler = preprocessing.StandardScaler().fit(X_train)
X_train_transformed = scaler.transform(X_train)
clf = svm.SVC(C=1).fit(X_train_transformed, y_train)
X_test_transformed = scaler.transform(X_test)
clf.score(X_test_transformed, y_test)  
# 0.9333...

cross_val_score

A more concise form of the above:

from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score, ShuffleSplit

cv = ShuffleSplit(n_splits=3, test_size=0.3, random_state=0)
clf = make_pipeline(preprocessing.StandardScaler(), svm.SVC(C=1))
cross_val_score(clf, iris.data, iris.target, cv=cv)

# array([ 0.97...,  0.93...,  0.95...])

cross_validate

from sklearn.model_selection import cross_validate
from sklearn.metrics import recall_score
scoring = ['precision_macro', 'recall_macro']
clf = svm.SVC(kernel='linear', C=1, random_state=0)
scores = cross_validate(clf, iris.data, iris.target, scoring=scoring,
                         cv=5, return_train_score=False)
sorted(scores.keys())
# ['fit_time', 'score_time', 'test_precision_macro', 'test_recall_macro']
scores['test_recall_macro']                       
# array([ 0.96...,  1.  ...,  0.96...,  0.96...,  1.        ])

K-fold

Repeated K-Fold

Leave One Out (LOO)

Leave P Out (LPO)

Stratified k-fold

Group k-fold

Leave One Group Out

Leave P Groups Out

Group Shuffle Split

Time Series Split

A note on shuffling

If the data ordering is not arbitrary (e.g. samples with the same class label are contiguous), shuffling it first may be essential to get a meaningful cross-validation result. However, the opposite may be true if the samples are not independently and identically distributed. For example, if samples correspond to news articles, and are ordered by their time of publication, then shuffling the data will likely lead to a model that is overfit and an inflated validation score: it will be tested on samples that are artificially similar (close in time) to training samples.
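
When shuffling is appropriate, it can be requested directly from the splitter; a sketch on the iris data used earlier:

from sklearn import datasets, svm
from sklearn.model_selection import KFold, cross_val_score

iris = datasets.load_iris()
cv = KFold(n_splits=5, shuffle=True, random_state=0)   # fix random_state for reproducibility
cross_val_score(svm.SVC(kernel='linear', C=1), iris.data, iris.target, cv=cv)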

Tuning the hyper-parameters of an estimator

Hyper-parameters are parameters that are not directly learnt within estimators. In scikit-learn they are passed as arguments to the constructor of the estimator classes. Typical examples include C, kernel and gamma for the Support Vector Classifier, alpha for Lasso, etc.

GridSearchCV

# clf, X, y and report() come from the scikit-learn parameter-search example;
# here we assume a RandomForestClassifier on the digits dataset.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_digits(return_X_y=True)
clf = RandomForestClassifier(n_estimators=20)
param_grid = {"max_depth": [3, None],
              "max_features": [1, 3, 10],
              "min_samples_split": [2, 3, 10],
              "min_samples_leaf": [1, 3, 10],
              "bootstrap": [True, False],
              "criterion": ["gini", "entropy"]}
grid_search = GridSearchCV(clf, param_grid=param_grid)
grid_search.fit(X, y)
grid_search.best_params_  # the original example prints a summary via report(cv_results_)

RandomizedSearchCV

# same assumptions as above (RandomForestClassifier on digits); sp_randint is
# scipy.stats.randint, as in the scikit-learn example
from scipy.stats import randint as sp_randint
from sklearn.model_selection import RandomizedSearchCV

param_dist = {"max_depth": [3, None],
              "max_features": sp_randint(1, 11),
              "min_samples_split": sp_randint(2, 11),
              "min_samples_leaf": sp_randint(1, 11),
              "bootstrap": [True, False],
              "criterion": ["gini", "entropy"]}

# run randomized search
n_iter_search = 20
random_search = RandomizedSearchCV(clf, param_distributions=param_dist,
                                   n_iter=n_iter_search)
random_search.fit(X, y)
random_search.best_params_  # the original example prints a summary via report(cv_results_)

tips:

score

By default, parameter search uses the score function of the estimator to evaluate a parameter setting. These are the sklearn.metrics.accuracy_score for classification and sklearn.metrics.r2_score for regression. For some applications, other scoring functions are better suited (for example in unbalanced classification, the accuracy score is often uninformative). An alternative scoring function can be specified via the scoring parameter to GridSearchCV, RandomizedSearchCV and many of the specialized cross-validation tools described below.
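
A sketch of overriding the default score with a macro-averaged F1 (the metric choice and search grid are assumptions):

from sklearn import datasets, svm
from sklearn.model_selection import GridSearchCV

iris = datasets.load_iris()
search = GridSearchCV(svm.SVC(kernel='linear'), {'C': [0.1, 1, 10]},
                      scoring='f1_macro', cv=5)
search.fit(iris.data, iris.target)
search.best_params_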

Quantifying the quality of predictions

There are 3 different APIs for evaluating the quality of a model’s predictions:

Scoring parameter:

Scoring parameter: Model-evaluation tools using cross-validation (such as model_selection.cross_val_score and model_selection.GridSearchCV) rely on an internal scoring strategy. This is discussed in the section The scoring parameter: defining model evaluation rules.

For the most common use cases, you can designate a scorer object with the scoring parameter; the table in the scikit-learn documentation lists all possible values.

cross_val_score(model, X, y, scoring='wrong_choice')
# an invalid name raises a ValueError whose message lists all valid scoring values

Estimator score method

Estimator score method: Estimators have a score method providing a default evaluation criterion for the problem they are designed to solve. This is not discussed on this page, but in each estimator’s documentation.

Metric functions

Metric functions: The metrics module implements functions assessing prediction error for specific purposes. These metrics are detailed in sections on Classification metrics, Multilabel ranking metrics, Regression metrics and Clustering metrics.
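
For example, with toy labels (the labels here are assumptions), a metric function from sklearn.metrics can be called directly:

from sklearn import metrics

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
metrics.accuracy_score(y_true, y_pred)   # 0.8
metrics.f1_score(y_true, y_pred)         # 0.8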

Preprocessing Data

Standardization, or mean removal and variance scaling

scale

In practice we often ignore the shape of the distribution and just transform the data to center it by removing the mean value of each feature, then scale it by dividing non-constant features by their standard deviation.

The function scale provides a quick and easy way to perform this operation on a single array-like dataset:

X_scaled = preprocessing.scale(X_train)

StandardScaler

The preprocessing module further provides a utility class StandardScaler that implements the Transformer API to compute the mean and standard deviation on a training set so as to be able to later reapply the same transformation on the testing set:

scaler = preprocessing.StandardScaler().fit(X_train)
scaler.transform(X_train)

MinMaxScaler

An alternative standardization is scaling features to lie between a given minimum and maximum value, often between zero and one, or so that the maximum absolute value of each feature is scaled to unit size. This can be achieved using MinMaxScaler or MaxAbsScaler, respectively.

min_max_scaler = preprocessing.MinMaxScaler()
X_train_minmax = min_max_scaler.fit_transform(X_train)
X_test_minmax = min_max_scaler.transform(X_test)

Scaling data with outliers

If your data contains many outliers, scaling using the mean and variance of the data is likely to not work very well. In these cases, you can use robust_scale and RobustScaler as drop-in replacements instead.
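
A sketch: RobustScaler centers on the median and scales by the interquartile range, so a single outlier barely affects the scaling of the rest (the data is an assumption):

import numpy as np
from sklearn.preprocessing import RobustScaler

X = np.array([[1.0], [2.0], [3.0], [100.0]])   # 100.0 is an outlier
scaler = RobustScaler().fit(X)
scaler.transform(X)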

Non-Linear transformation

Like scalers, QuantileTransformer puts each feature into the same range or distribution. However, by performing a rank transformation, it smooths out unusual distributions and is less influenced by outliers than scaling methods. It does, however, distort correlations and distances within and across features.

QuantileTransformer || quantile_transform

quantile_transformer = preprocessing.QuantileTransformer(random_state=0)
X_train_trans = quantile_transformer.fit_transform(X_train)
X_test_trans = quantile_transformer.transform(X_test)

Normalization

Normalization is the process of scaling individual samples to have unit norm. This process can be useful if you plan to use a quadratic form such as the dot-product or any other kernel to quantify the similarity of any pair of samples.

X_normalized = preprocessing.normalize(X, norm='l2')
or, using the Normalizer utility class:
normalizer = preprocessing.Normalizer().fit(X) # fit does nothing
normalizer.transform(X)

Binarization二值化处理

Feature binarization is the process of thresholding numerical features to get boolean values. This can be useful for downstream probabilistic estimators that assume the input data is distributed according to a multivariate Bernoulli distribution.

binarizer = preprocessing.Binarizer().fit(X)
binarizer.transform(X)

Values below the threshold become 0; values above become 1.
It is possible to adjust the threshold of the binarizer:
binarizer = preprocessing.Binarizer(threshold=1.1)
binarizer.transform(X)

Encoding categorical features

Imputation of missing values

For various reasons, many real world datasets contain missing values, often encoded as blanks, NaNs or other placeholders. Such datasets however are incompatible with scikit-learn estimators which assume that all values in an array are numerical, and that all have and hold meaning. A basic strategy to use incomplete datasets is to discard entire rows and/or columns containing missing values. However, this comes at the price of losing data which may be valuable (even though incomplete). A better strategy is to impute the missing values

The Imputer class provides basic strategies for imputing missing values, either using the mean, the median or the most frequent value of the row or column in which the missing values are located. This class also allows for different missing values encodings.

import numpy as np
from sklearn.preprocessing import Imputer

imp = Imputer(missing_values='NaN', strategy='mean', axis=0)
imp.fit([[1, 2], [np.nan, 3], [7, 6]])

X = [[np.nan, 2], [6, np.nan], [7, 6]]
imp.transform(X)
# array([[ 4.   ,  2.   ],
#        [ 6.   ,  3.666...],
#        [ 7.   ,  6.   ]])

Generating polynomial features

Often it’s useful to add complexity to the model by considering nonlinear features of the input data. A simple and common method to use is polynomial features, which can get features’ high-order and interaction terms. It is implemented in PolynomialFeatures:

from sklearn.preprocessing import PolynomialFeatures
X = np.arange(6).reshape(3, 2)
poly = PolynomialFeatures(2)
poly.fit_transform(X)

The features of X have been transformed from (X1, X2) to (1, X1, X2, X1^2, X1*X2, X2^2).

Custom transformers

Often, you will want to convert an existing Python function into a transformer to assist in data cleaning or processing. You can implement a transformer from an arbitrary function with FunctionTransformer

import numpy as np
from sklearn.preprocessing import FunctionTransformer
transformer = FunctionTransformer(np.log1p)
X = np.array([[0, 1], [2, 3]])
transformer.transform(X)
