Hands-On Methods for Model Evaluation and Optimization

This post walks through an enzyme activity prediction exercise to experience a model's journey from underfitting through overfitting to a good fit, moving from linear regression to polynomial regression. It then turns to a quality inspection classification task: anomaly detection helps locate potentially anomalous data points; principal component analysis (PCA) tells us whether the data's dimensionality should be reduced; a train/test split lets us carve test data out of the training data even when no separate test set is provided; the confusion matrix then gives a more complete evaluation of the model; and finally, tuning KNN's core parameter yields a model that performs well on both the training and the test data.

Underfitting and Overfitting

Enzyme activity prediction tasks:

1. Using the T-R-train.csv data, build a linear regression model, compute its r2 score on the T-R-test.csv data, and visualize the model's predictions.

2. Add polynomial features (degree 2 and degree 5) and build regression models.

3. Compute each polynomial regression model's r2 score on the test data to judge which model predicts more accurately.

4. Visualize the polynomial regression models' predictions to judge which model predicts more accurately.

The Jupyter Notebook follows directly:

#load the data
import pandas as pd
import numpy as np
data_train = pd.read_csv('T-R-train.csv')
data_train.head()
T rate
0 46.53 2.49
1 48.14 2.56
2 50.15 2.63
3 51.36 2.69
4 52.57 2.74
#define X_train and y_train
X_train = data_train.loc[:,'T']
y_train = data_train.loc[:,'rate']
#visualize the data
from matplotlib import pyplot as plt
fig1 = plt.figure(figsize=(5,5))
plt.scatter(X_train,y_train)
plt.title('raw data')
plt.xlabel('temperature')
plt.ylabel('rate')
plt.show()

# reshape to an N-by-1 array, since sklearn expects 2-D feature input
X_train = np.array(X_train).reshape(-1,1)

First, train a linear regression model (from the scatter plot alone we can already guess that a linear model will not perform well):

#linear regression model prediction
from sklearn.linear_model import LinearRegression
lr1 = LinearRegression()
lr1.fit(X_train,y_train)
LinearRegression()
#load the test data
data_test = pd.read_csv('T-R-test.csv')
X_test = data_test.loc[:,'T']
y_test = data_test.loc[:,'rate']
X_test = np.array(X_test).reshape(-1,1)

$$R^2=1-\frac{\sum_i(y_i-\hat{y}_i)^2/n}{\sum_i(y_i-\bar{y})^2/n}=1-\frac{MSE}{Var}$$

Intuitively, $R^2$ uses the mean as a baseline predictor and asks whether the model's prediction error is smaller or larger than that baseline error. $R^2 = 1$ means every prediction equals the true value exactly, i.e. the features explain the target perfectly. $R^2 = 0$ means the numerator equals the denominator, which happens when the model does no better than predicting the mean for every sample; a negative score, as we will see for the linear model below, means the model does even worse than that baseline.
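To make the formula concrete, here is a minimal sketch that computes $R^2$ by hand and checks it against sklearn's r2_score (the arrays y_true and y_pred are made-up toy values):

import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([2.0, 2.5, 3.0, 3.5])      # hypothetical ground truth
y_pred = np.array([2.1, 2.4, 3.2, 3.3])      # hypothetical predictions

mse = np.mean((y_true - y_pred) ** 2)        # mean squared prediction error
var = np.mean((y_true - y_true.mean()) ** 2) # variance of the targets (baseline error)
print(1 - mse / var)                         # manual R^2
print(r2_score(y_true, y_pred))              # matches sklearn's r2_score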

#make prediction on the training and testing data
y_train_predict = lr1.predict(X_train)
y_test_predict = lr1.predict(X_test)
from sklearn.metrics import r2_score
r2_train = r2_score(y_train,y_train_predict)
r2_test = r2_score(y_test,y_test_predict)
print('training r2:',r2_train)
print('test r2:',r2_test)
training r2: 0.016665703886981964
test r2: -0.758336343735132

Since the temperatures in the plot fall roughly between 40 and 90, generate 300 evenly spaced X values over that range (X_range) to draw the fitted line:

#generate new data to produce the corresponding predictions
X_range = np.linspace(40,90,300).reshape(-1,1)
y_range_predict = lr1.predict(X_range)
fig2 = plt.figure(figsize=(5,5))
plt.plot(X_range,y_range_predict)
plt.scatter(X_train,y_train)

plt.title('prediction data')
plt.xlabel('temperature')
plt.ylabel('rate')
plt.show()

The model is clearly underfitting. Next, build polynomial regression models of degree 2 and degree 5:

#polynomial models
#generate new features
from sklearn.preprocessing import PolynomialFeatures

# degree-2 polynomial features
poly2 = PolynomialFeatures(degree=2)
X_2_train = poly2.fit_transform(X_train)
X_2_test = poly2.transform(X_test)

# degree-5 polynomial features
poly5 = PolynomialFeatures(degree=5)
X_5_train = poly5.fit_transform(X_train)
X_5_test = poly5.transform(X_test)
print(X_5_train.shape)
(18, 6)
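The 6 columns are the bias term plus the powers $T$ through $T^5$. To see them explicitly, PolynomialFeatures can name its output columns (a quick check, assuming scikit-learn >= 1.0 where get_feature_names_out is available):

print(poly5.get_feature_names_out(['T']))
# ['1' 'T' 'T^2' 'T^3' 'T^4' 'T^5']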
lr2 = LinearRegression()
lr2.fit(X_2_train,y_train)

y_2_train_predict = lr2.predict(X_2_train)
y_2_test_predict = lr2.predict(X_2_test)
r2_2_train = r2_score(y_train,y_2_train_predict)
r2_2_test = r2_score(y_test,y_2_test_predict)

lr5 = LinearRegression()
lr5.fit(X_5_train,y_train)

y_5_train_predict = lr5.predict(X_5_train)
y_5_test_predict = lr5.predict(X_5_test)
r2_5_train = r2_score(y_train,y_5_train_predict)
r2_5_test = r2_score(y_test,y_5_test_predict)

print('training r2_2:',r2_2_train)
print('test r2_2:',r2_2_test)
print('training r2_5:',r2_5_train)
print('test r2_5:',r2_5_test)
training r2_2: 0.9700515400689426
test r2_2: 0.9963954556468683
training r2_5: 0.9978527267327939
test r2_5: 0.5437885877449662

The degree-2 regression model is clearly the appropriate one. The degree-5 model scores 0.9978 on the training data, yet performs poorly on the test data — a textbook case of overfitting.
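To see the whole underfit → good fit → overfit arc in one pass, we can sweep the polynomial degree and compare train/test r2 scores. A minimal sketch reusing X_train, y_train, X_test, y_test from above (the exact numbers depend on the data, but the pattern is that training r2 keeps climbing with degree while test r2 peaks near degree 2 and then falls):

for degree in range(1, 9):
    poly = PolynomialFeatures(degree=degree)
    lr = LinearRegression()
    lr.fit(poly.fit_transform(X_train), y_train)
    r2_tr = r2_score(y_train, lr.predict(poly.transform(X_train)))
    r2_te = r2_score(y_test, lr.predict(poly.transform(X_test)))
    print(f'degree={degree}: train r2={r2_tr:.4f}, test r2={r2_te:.4f}')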

X_range = np.linspace(40,90,300).reshape(-1,1)
X_2_range = poly2.transform(X_range)
y_2_range_predict = lr2.predict(X_2_range)

X_5_range = poly5.transform(X_range)
y_5_range_predict = lr5.predict(X_5_range)

Again by generating data over the temperature range, visualize the degree-2 model (a good fit) and the degree-5 model (overfitting):

fig3 = plt.figure(figsize=(5,5))
plt.plot(X_range,y_2_range_predict)
plt.scatter(X_train,y_train)
plt.scatter(X_test,y_test)

plt.title('polynomial prediction result (2)')
plt.xlabel('temperature')
plt.ylabel('rate')
plt.show()

fig4 = plt.figure(figsize=(5,5))
plt.plot(X_range,y_5_range_predict)
plt.scatter(X_train,y_train)
plt.scatter(X_test,y_test)

plt.title('polynomial prediction result (5)')
plt.xlabel('temperature')
plt.ylabel('rate')
plt.show()

Enzyme activity prediction takeaways: 1. A degree-2 polynomial regression model predicts enzyme activity well, achieving a high r2 score on both the training and the test data.

2. The linear and degree-5 polynomial models exhibit underfitting and overfitting respectively. Under overfitting, the r2 score on the training data is high (accurate predictions) while the r2 score on the test data is low (inaccurate predictions).

3. Whether judged by r2 scores or by visualizing the fitted curves, the degree-2 polynomial regression model performs best.

4. Core algorithm reference: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression

Removing Outliers with a Gaussian Distribution

Using the data_class_raw.csv data, find and remove anomalous points based on the Gaussian probability density function.

#load the data
import pandas as pd
import numpy as np
data = pd.read_csv('data_class_raw.csv')
data.head()
x1 x2 y
0 0.77 3.97 0
1 1.71 2.81 0
2 2.18 1.31 0
3 3.80 0.69 0
4 5.21 1.14 0
#define X and y
X = data.drop(['y'],axis=1)
y = data.loc[:,'y']
#visualize the data
from matplotlib import pyplot as plt
fig1 = plt.figure(figsize=(5,5))
bad = plt.scatter(X.loc[:,'x1'][y==0],X.loc[:,'x2'][y==0])
good = plt.scatter(X.loc[:,'x1'][y==1],X.loc[:,'x2'][y==1])
plt.legend((good,bad),('good','bad'))
plt.title('raw data')
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()

from sklearn.covariance import EllipticEnvelope

# fit a Gaussian envelope to the "bad" class; contamination is the expected fraction of outliers
ad_model = EllipticEnvelope(contamination=0.02)

ad_model.fit(X[y==0])
y_predict_bad = ad_model.predict(X[y==0])
print(y_predict_bad)
[ 1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1 -1]
fig2 = plt.figure(figsize=(5,5))
bad = plt.scatter(X.loc[:,'x1'][y==0],X.loc[:,'x2'][y==0])
good = plt.scatter(X.loc[:,'x1'][y==1],X.loc[:,'x2'][y==1])
# mark the points flagged as outliers (-1) with an x
plt.scatter(X.loc[:,'x1'][y==0][y_predict_bad==-1],X.loc[:,'x2'][y==0][y_predict_bad==-1], marker='x',s=150)
plt.legend((good,bad),('good','bad'))
plt.title('anomaly detection result')
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()

At this point it does not matter whether you delete the anomalies in code or by hand in the csv file; what matters is that the anomalous data points are removed.
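For completeness, a minimal sketch of the in-code route (reusing the variables above; the output filename data_class_processed.csv is chosen to match the file used in the next step):

# indices of the "bad"-class rows flagged as outliers (-1)
bad_idx = X[y==0].index[y_predict_bad == -1]

# drop them and save the cleaned dataset
data_processed = data.drop(index=bad_idx)
data_processed.to_csv('data_class_processed.csv', index=False)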

PCA and Dimensionality Reduction

Using the data_class_processed.csv data (i.e. the data with anomalies removed), run PCA to determine the important dimensions and components.

# this is the data after anomaly removal
data = pd.read_csv('data_class_processed.csv')
data.head()

#define X and y
X = data.drop(['y'],axis=1)
y = data.loc[:,'y']
fig3 = plt.figure(figsize=(5,5))
bad = plt.scatter(X.loc[:,'x1'][y==0],X.loc[:,'x2'][y==0])
good = plt.scatter(X.loc[:,'x1'][y==1],X.loc[:,'x2'][y==1])
plt.legend((good,bad),('good','bad'))
plt.title('raw data')
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()

# pca
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
# standardize the features
X_norm = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X_norm)

# explained variance ratio of each principal component
var_ratio = pca.explained_variance_ratio_
print(var_ratio)
[0.5369408 0.4630592]
fig4 = plt.figure(figsize=(5,5))
plt.bar([1,2],var_ratio)
plt.show()

The principal component analysis shows the two components explain roughly 54% and 46% of the variance respectively; neither is negligible, so both dimensions should be kept.
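A common rule of thumb (my own suggestion, not part of the original analysis) is to keep the smallest number of components whose cumulative explained variance reaches some threshold, e.g. 95%; a quick sketch:

cum_ratio = np.cumsum(pca.explained_variance_ratio_)
n_keep = np.argmax(cum_ratio >= 0.95) + 1  # smallest number of components reaching 95%
print(cum_ratio, n_keep)  # the first component alone covers only ~54%, so both are needed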

Train/Test Data Split

# split the data
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,random_state=4,test_size=0.4)
print(X_train.shape, X_test.shape, X.shape)
(21, 2) (14, 2) (35, 2)
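With only 35 samples, a random 60/40 split can leave the two classes unevenly represented across the halves. Passing stratify=y (my suggestion; the original notebook does not use it) preserves the class proportions in both splits:

X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=4, test_size=0.4, stratify=y)  # keep class proportions in both splits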

Building a KNN Model

Build a KNN model to complete the classification task.

from sklearn.neighbors import KNeighborsClassifier

knn_10 = KNeighborsClassifier(n_neighbors=10)
knn_10.fit(X_train,y_train)
y_train_predict = knn_10.predict(X_train)
y_test_predict = knn_10.predict(X_test)

# check the accuracy
from sklearn.metrics import accuracy_score
train_acc = accuracy_score(y_train,y_train_predict)
test_acc = accuracy_score(y_test,y_test_predict)

print(train_acc, test_acc)
0.9047619047619048 0.6428571428571429

The performance on the test data is not great.

#visualize the decision boundary
# build a dense grid of points covering the feature space
xx,yy = np.meshgrid(np.arange(0,10,0.05), np.arange(0,10,0.05))
print(xx.shape, yy.shape)
(200, 200) (200, 200)
# flatten the grid into an (N, 2) array of points to classify
x_range = np.c_[xx.ravel(),yy.ravel()]
print(x_range.shape)
(40000, 2)
y_range_predict = knn_10.predict(x_range)
fig5 = plt.figure(figsize=(5,5))
knn_bad = plt.scatter(x_range[:,0][y_range_predict==0],x_range[:,1][y_range_predict==0])
knn_good = plt.scatter(x_range[:,0][y_range_predict==1],x_range[:,1][y_range_predict==1])

bad = plt.scatter(X.loc[:,'x1'][y==0],X.loc[:,'x2'][y==0])
good = plt.scatter(X.loc[:,'x1'][y==1],X.loc[:,'x2'][y==1])

plt.legend((good,bad,knn_good,knn_bad),('good','bad','knn_good','knn_bad'))
plt.title('predict_result')
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()

Evaluation with a Confusion Matrix

# compute the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_test_predict)
print(cm)
[[4 2]
 [3 5]]
# sklearn puts true labels on the rows and predicted labels on the columns
TP = cm[1,1]
TN = cm[0,0]
FP = cm[0,1]
FN = cm[1,0]
print(TP,TN,FP,FN)
5 4 2 3
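An equivalent, slightly less error-prone idiom: for a binary confusion matrix, ravel() returns the four entries in TN, FP, FN, TP order:

TN, FP, FN, TP = cm.ravel()  # binary confusion matrix unpacks in this order
print(TP, TN, FP, FN)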
#accuracy: the fraction of all samples that are predicted correctly
accuracy = (TP + TN)/(TP + TN + FP + FN)

print(accuracy)
0.6428571428571429
#sensitivity (recall): the fraction of positive samples that are predicted correctly
recall = TP/(TP + FN)
print(recall)
0.625
#specificity: the fraction of negative samples that are predicted correctly
specificity = TN/(TN + FP)
print(specificity)
0.6666666666666666
#precision: among samples predicted positive, the fraction that truly are positive
precision = TP/(TP + FP)
print(precision)
0.7142857142857143
#F1 score: a single metric combining precision and recall
#F1 = 2 * precision * recall / (precision + recall)

f1 = 2*precision*recall/(precision+recall)
print(f1)
0.6666666666666666
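These hand-rolled numbers can be cross-checked in a single call with sklearn's classification_report, which prints per-class precision, recall, and F1 (the recall of class 0 in its output is the specificity computed above):

from sklearn.metrics import classification_report
print(classification_report(y_test, y_test_predict))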

Tuning the Parameter to a Suitable Value

#try different k and calculate the accuracy for each
n = [i for i in range(1,21)]
accuracy_train = []
accuracy_test = []
for i in n:
    knn = KNeighborsClassifier(n_neighbors=i)
    knn.fit(X_train,y_train)
    y_train_predict = knn.predict(X_train)
    y_test_predict = knn.predict(X_test)
    accuracy_train_i = accuracy_score(y_train,y_train_predict)
    accuracy_test_i = accuracy_score(y_test,y_test_predict)
    accuracy_train.append(accuracy_train_i)
    accuracy_test.append(accuracy_test_i)
print(accuracy_train,accuracy_test)
[1.0, 1.0, 1.0, 1.0, 1.0, 0.9523809523809523, 0.9523809523809523, 0.9523809523809523, 0.9047619047619048, 0.9047619047619048, 0.9047619047619048, 0.9523809523809523, 0.9047619047619048, 0.9047619047619048, 0.9523809523809523, 0.9047619047619048, 0.9047619047619048, 0.5714285714285714, 0.5714285714285714, 0.5714285714285714] [0.5714285714285714, 0.5, 0.5, 0.5714285714285714, 0.7142857142857143, 0.5714285714285714, 0.5714285714285714, 0.5714285714285714, 0.6428571428571429, 0.6428571428571429, 0.6428571428571429, 0.5714285714285714, 0.6428571428571429, 0.6428571428571429, 0.5714285714285714, 0.5714285714285714, 0.5714285714285714, 0.42857142857142855, 0.42857142857142855, 0.42857142857142855]
fig6 = plt.figure(figsize=(12,5))
plt.subplot(121)
plt.plot(n,accuracy_train,marker='o')
plt.title('training accuracy vs n_neighbors')
plt.xlabel('n_neighbors')
plt.ylabel('accuracy')
plt.subplot(122)
plt.plot(n,accuracy_test,marker='o')
plt.title('testing accuracy vs n_neighbors')
plt.xlabel('n_neighbors')
plt.ylabel('accuracy')

plt.show()

The plots show that at around n_neighbors = 5 the model performs well on both the training and the test data.
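One caveat: choosing n_neighbors by inspecting test accuracy effectively tunes the model against the test set. A more principled alternative (my suggestion, not part of the original notebook) is cross-validation on the training data only, e.g. with sklearn's cross_val_score; k is kept small here because each 5-fold training fold holds only about 17 samples:

from sklearn.model_selection import cross_val_score

for k in range(1, 11):
    knn = KNeighborsClassifier(n_neighbors=k)
    # 5-fold cross-validated accuracy, computed on the training split only
    scores = cross_val_score(knn, X_train, y_train, cv=5)
    print(f'k={k}: mean cv accuracy={scores.mean():.3f}')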

from sklearn.neighbors import KNeighborsClassifier

knn_5 = KNeighborsClassifier(n_neighbors=5)
knn_5.fit(X_train,y_train)
y_train_predict = knn_5.predict(X_train)
y_test_predict = knn_5.predict(X_test)

# check the accuracy
from sklearn.metrics import accuracy_score
train_acc = accuracy_score(y_train,y_train_predict)
test_acc = accuracy_score(y_test,y_test_predict)

print(train_acc, test_acc)
1.0 0.7142857142857143

Redraw the decision boundary:

y_range_predict = knn_5.predict(x_range)
fig7 = plt.figure(figsize=(5,5))
knn_bad = plt.scatter(x_range[:,0][y_range_predict==0],x_range[:,1][y_range_predict==0])
knn_good = plt.scatter(x_range[:,0][y_range_predict==1],x_range[:,1][y_range_predict==1])

bad = plt.scatter(X.loc[:,'x1'][y==0],X.loc[:,'x2'][y==0])
good = plt.scatter(X.loc[:,'x1'][y==1],X.loc[:,'x2'][y==1])

plt.legend((good,bad,knn_good,knn_bad),('good','bad','knn_good','knn_bad'))
plt.title('when n=5 predict_result')
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()

Quality inspection classification summary: 1. Anomaly detection helped locate potentially anomalous data points.

2. PCA showed that both dimensions of the dataset need to be kept.

3. We separated the training and test data and computed the model's prediction accuracy on the test data.

4. Computing the confusion matrix enabled a more complete evaluation of the model.

5. A new technique let us visualize the classification decision boundary.

6. Sweeping the core parameter n_neighbors and computing the corresponding accuracies helps us decide which model to use.

7. Core algorithm reference: https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html#sklearn.neighbors.KNeighborsClassifier