Multi-Layer Perceptron (MLP)

So far we have mainly covered common machine learning algorithms; now we move on to the next stage: deep learning. The first model is the multi-layer perceptron. The Multi-Layer Perceptron (MLP) is the standard fully connected neural network model. It consists of layers of nodes, where each node is connected to all outputs of the previous layer, and each node's output is connected to all inputs of the nodes in the next layer. This section starts from a simple classification task and gradually explores how the multi-layer perceptron works.

Basic Concepts of MLP

Let's try to build a model whose structure mimics the human thought process. Note that what we discuss here is how early artificial neural networks were implemented: the sigmoid function is used to imitate whether a biological neuron fires. Modern neural networks rarely use the sigmoid activation anymore; ReLU (the rectified linear unit) has proven to be a better choice. Still, we will start the discussion from the sigmoid approach.
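As a quick numerical illustration (a minimal NumPy sketch, not part of the original text), here is how the two activations behave on the same inputs:

```python
import numpy as np

def sigmoid(z):
    # squashes any real input into (0, 1); equals 0.5 at z = 0
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    # keeps positive inputs unchanged, zeroes out negatives
    return np.maximum(0.0, z)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z))  # roughly [0.119, 0.5, 0.881]
print(relu(z))     # [0. 0. 2.]
```

The sigmoid saturates for large |z|, which is one reason ReLU trains faster in deep networks.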

Non-linear Classification with an MLP

Let's start with a simple classification task. Clearly, logistic regression could handle it, but it would need additional higher-order polynomial features to produce a sufficiently complex decision boundary; see the earlier chapter 《分类问题与逻辑回归》 (Classification and Logistic Regression) for a refresher:

Looking at the data as a whole: the lower-left and upper-right regions are circles (y = 0), while the upper-left and lower-right regions are crosses (y = 1). Let's first simplify the model, so that y is the XNOR of $x_1$ and $x_2$:

Besides XNOR, here are a few other examples:

y as the AND of $x_1$ and $x_2$: suppose we are given the boundary function $h_\theta(x)=g(-20+15x_1+15x_2)$; try computing its outputs on the four input combinations. The figure below also gives $(\text{NOT } x_1)\ \text{AND}\ (\text{NOT } x_2)$ and $x_1\ \text{OR}\ x_2$:

Combining these three models gives a more complex model that handles the simplified version of the non-linear classification problem above. Working through the computation confirms that it matches the simplified model:
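The combination can be checked numerically. In the sketch below, the AND weights come from the text, while the NOR and OR weights are plausible values chosen for illustration (any weights that saturate the sigmoid the same way would work):

```python
import numpy as np

def g(z):
    # sigmoid activation
    return 1.0 / (1.0 + np.exp(-z))

def xnor(x1, x2):
    a1 = g(-20 + 15 * x1 + 15 * x2)    # x1 AND x2 (weights from the text)
    a2 = g(10 - 20 * x1 - 20 * x2)     # (NOT x1) AND (NOT x2), assumed weights
    return g(-10 + 20 * a1 + 20 * a2)  # a1 OR a2, assumed weights

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, round(xnor(x1, x2)))  # outputs 1 when x1 == x2, else 0
```

This is exactly the two-layer structure of an MLP: the hidden layer computes AND and NOR, and the output layer ORs them together.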

Now, how does an MLP handle multi-class classification? It is quite simple: add more nodes to the output layer, one per class, each representing the probability of the corresponding class:
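A common choice for such an output layer is the softmax function, which turns the raw output scores into a probability distribution over the classes (a small NumPy sketch for illustration):

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability; outputs are positive and sum to 1
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # raw output-layer scores for 3 classes
probs = softmax(scores)
print(probs)        # the largest score gets the largest probability
print(probs.sum())  # 1.0
```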

Question: what should the workflow look like when building an MLP model for a multi-class image classification task?

1. Load the images and convert them into numeric matrices

2. Reshape and normalize the input data

3. Build the MLP model structure

4. Configure the MLP training parameters

5. Train the model and make predictions

6. Convert the output into the desired format
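Steps 1–2 of this workflow can be sketched as follows (the 28×28 image size and the batch of random pixels are assumptions purely for illustration):

```python
import numpy as np

# step 1 (stand-in): pretend we loaded 5 grayscale 28x28 images as pixel matrices
images = np.random.randint(0, 256, size=(5, 28, 28)).astype("float32")

# step 2: flatten each image into a 784-dimensional vector and scale to [0, 1]
X = images.reshape(len(images), -1) / 255.0
print(X.shape)  # (5, 784)
```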

Preparing for the MLP Hands-on

First, the preparation stage. We use the Keras framework here. Keras is an application programming interface for neural network development written in Python; through its API you can develop common deep learning algorithms such as plain neural networks, convolutional neural networks, and recurrent neural networks.

Keras typically uses TensorFlow or Theano as its backend and provides a higher-level API on top, which makes it much more convenient to use. If you want to validate a model quickly, Keras is a good choice. It integrates many mature deep learning algorithms, is easy to install and use, and comes with rich examples and very detailed tutorials and documentation. Documentation: https://keras.io , Chinese version: https://keras.io/zh

TensorFlow is an open-source software library for numerical computation using data flow graphs. It can automatically compute the derivatives a model needs, which makes it very well suited for solving neural network models.

Keras can be viewed as an interface wrapped around TensorFlow (Keras as the frontend, TensorFlow as the backend). Keras gives the user an easy-to-interact-with shell for rapid deep learning development.

Building an MLP model with Keras:

Keras Environment Setup and GPU Acceleration

If you install TensorFlow with conda, it is actually not complicated at all: simply install tensorflow-gpu with conda, and conda will install the remaining dependencies by default.

:::tip{title="Tip"} Note: on Windows, TensorFlow-GPU supports CUDA only up to 11.2; see https://tensorflow.google.cn/install/source_windows . So if you want to run TensorFlow-GPU on Windows without WSL, keep the CUDA driver version at 11.2 or below, and pick the Python version according to the TensorFlow version you want to install. I am currently using tensorflow-gpu 2.6.0. :::

After installation, confirm the CUDA version:

The exported conda environment, my_environment.yml, is shown below (it includes common packages such as tensorflow-gpu, keras, and sklearn):

name: tf
channels:
  - defaults
dependencies:
  - _tflow_select=2.1.0=gpu
  - abseil-cpp=20210324.2=hd77b12b_0
  - absl-py=1.4.0=py38haa95532_0
  - aiohttp=3.8.5=py38h2bbff1b_0
  - aiosignal=1.2.0=pyhd3eb1b0_0
  - anyio=3.5.0=py38haa95532_0
  - appdirs=1.4.4=pyhd3eb1b0_0
  - argon2-cffi=21.3.0=pyhd3eb1b0_0
  - argon2-cffi-bindings=21.2.0=py38h2bbff1b_0
  - astor=0.8.1=py38haa95532_0
  - asttokens=2.0.5=pyhd3eb1b0_0
  - astunparse=1.6.3=py_0
  - async-timeout=4.0.2=py38haa95532_0
  - attrs=22.1.0=py38haa95532_0
  - backcall=0.2.0=pyhd3eb1b0_0
  - beautifulsoup4=4.12.2=py38haa95532_0
  - blas=1.0=mkl
  - bleach=4.1.0=pyhd3eb1b0_0
  - blinker=1.4=py38haa95532_0
  - bottleneck=1.3.5=py38h080aedc_0
  - brotlipy=0.7.0=py38h2bbff1b_1003
  - c-ares=1.19.1=h2bbff1b_0
  - ca-certificates=2023.08.22=haa95532_0
  - cachetools=4.2.2=pyhd3eb1b0_0
  - certifi=2023.7.22=py38haa95532_0
  - cffi=1.15.1=py38h2bbff1b_3
  - charset-normalizer=2.0.4=pyhd3eb1b0_0
  - click=8.0.4=py38haa95532_0
  - colorama=0.4.6=py38haa95532_0
  - comm=0.1.2=py38haa95532_0
  - cryptography=41.0.3=py38h3438e0d_0
  - cudatoolkit=11.3.1=h59b6b97_2
  - cudnn=8.2.1=cuda11.3_0
  - debugpy=1.6.7=py38hd77b12b_0
  - decorator=5.1.1=pyhd3eb1b0_0
  - defusedxml=0.7.1=pyhd3eb1b0_0
  - entrypoints=0.4=py38haa95532_0
  - executing=0.8.3=pyhd3eb1b0_0
  - flatbuffers=2.0.0=h6c2663c_0
  - frozenlist=1.3.3=py38h2bbff1b_0
  - gast=0.4.0=pyhd3eb1b0_0
  - giflib=5.2.1=h8cc25b3_3
  - google-auth=2.22.0=py38haa95532_0
  - google-auth-oauthlib=0.4.1=py_2
  - google-pasta=0.2.0=pyhd3eb1b0_0
  - grpcio=1.42.0=py38hc60d5dd_0
  - h5py=3.7.0=py38h3de5c98_0
  - hdf5=1.10.6=h1756f20_1
  - icc_rt=2022.1.0=h6049295_2
  - icu=68.1=h6c2663c_0
  - idna=3.4=py38haa95532_0
  - importlib-metadata=6.0.0=py38haa95532_0
  - importlib_resources=5.2.0=pyhd3eb1b0_1
  - intel-openmp=2021.4.0=haa95532_3556
  - ipykernel=6.25.0=py38h9909e9c_0
  - ipython=8.12.2=py38haa95532_0
  - ipython_genutils=0.2.0=pyhd3eb1b0_1
  - jedi=0.18.1=py38haa95532_1
  - jinja2=3.1.2=py38haa95532_0
  - joblib=1.2.0=py38haa95532_0
  - jpeg=9e=h2bbff1b_1
  - jsonschema=4.17.3=py38haa95532_0
  - jupyter_client=7.4.9=py38haa95532_0
  - jupyter_core=5.3.0=py38haa95532_0
  - jupyter_server=1.23.4=py38haa95532_0
  - jupyterlab_pygments=0.1.2=py_0
  - keras=2.6.0=pyhd3eb1b0_0
  - keras-applications=1.0.8=py_1
  - keras-preprocessing=1.1.2=pyhd3eb1b0_0
  - libcurl=8.2.1=h86230a5_0
  - libiconv=1.16=h2bbff1b_2
  - libpng=1.6.39=h8cc25b3_0
  - libprotobuf=3.17.2=h23ce68f_1
  - libsodium=1.0.18=h62dcd97_0
  - libssh2=1.10.0=hcd4344a_2
  - libxml2=2.10.4=h0ad7f3c_1
  - libxslt=1.1.37=h2bbff1b_1
  - lxml=4.9.2=py38h2bbff1b_0
  - markdown=3.4.1=py38haa95532_0
  - markupsafe=2.1.1=py38h2bbff1b_0
  - matplotlib-inline=0.1.6=py38haa95532_0
  - mistune=0.8.4=py38he774522_1000
  - mkl=2021.4.0=haa95532_640
  - mkl-service=2.4.0=py38h2bbff1b_0
  - mkl_fft=1.3.1=py38h277e83a_0
  - mkl_random=1.2.2=py38hf11a4ad_0
  - multidict=6.0.2=py38h2bbff1b_0
  - nbclassic=0.5.5=py38haa95532_0
  - nbclient=0.5.13=py38haa95532_0
  - nbconvert=6.5.4=py38haa95532_0
  - nbformat=5.9.2=py38haa95532_0
  - nest-asyncio=1.5.6=py38haa95532_0
  - notebook=6.5.4=py38haa95532_1
  - notebook-shim=0.2.2=py38haa95532_0
  - numexpr=2.8.4=py38h5b0cc5e_0
  - numpy=1.20.3=py38ha4e8547_0
  - numpy-base=1.20.3=py38hc2deb75_0
  - oauthlib=3.2.2=py38haa95532_0
  - openssl=1.1.1w=h2bbff1b_0
  - opt_einsum=3.3.0=pyhd3eb1b0_1
  - packaging=23.1=py38haa95532_0
  - pandas=1.5.2=py38hf11a4ad_0
  - pandocfilters=1.5.0=pyhd3eb1b0_0
  - parso=0.8.3=pyhd3eb1b0_0
  - pickleshare=0.7.5=pyhd3eb1b0_1003
  - pip=23.2.1=py38haa95532_0
  - pkgutil-resolve-name=1.3.10=py38haa95532_0
  - platformdirs=3.10.0=py38haa95532_0
  - pooch=1.4.0=pyhd3eb1b0_0
  - prometheus_client=0.14.1=py38haa95532_0
  - prompt-toolkit=3.0.36=py38haa95532_0
  - protobuf=3.17.2=py38hd77b12b_0
  - psutil=5.9.0=py38h2bbff1b_0
  - pure_eval=0.2.2=pyhd3eb1b0_0
  - pyasn1=0.4.8=pyhd3eb1b0_0
  - pyasn1-modules=0.2.8=py_0
  - pycparser=2.21=pyhd3eb1b0_0
  - pygments=2.15.1=py38haa95532_1
  - pyjwt=2.4.0=py38haa95532_0
  - pyopenssl=23.2.0=py38haa95532_0
  - pyreadline=2.1=py38_1
  - pyrsistent=0.18.0=py38h196d8e1_0
  - pysocks=1.7.1=py38haa95532_0
  - python=3.8.12=h6244533_0
  - python-dateutil=2.8.2=pyhd3eb1b0_0
  - python-fastjsonschema=2.16.2=py38haa95532_0
  - python-flatbuffers=1.12=pyhd3eb1b0_0
  - pytz=2022.7=py38haa95532_0
  - pywin32=305=py38h2bbff1b_0
  - pywinpty=2.0.10=py38h5da7b33_0
  - pyzmq=23.2.0=py38hd77b12b_0
  - re2=2022.04.01=hd77b12b_0
  - requests=2.31.0=py38haa95532_0
  - requests-oauthlib=1.3.0=py_0
  - rsa=4.7.2=pyhd3eb1b0_1
  - scikit-learn=1.2.2=py38hd77b12b_0
  - scipy=1.10.1=py38h321e85e_0
  - send2trash=1.8.0=pyhd3eb1b0_1
  - setuptools=68.0.0=py38haa95532_0
  - six=1.16.0=pyhd3eb1b0_1
  - snappy=1.1.9=h6c2663c_0
  - sniffio=1.2.0=py38haa95532_1
  - soupsieve=2.4=py38haa95532_0
  - sqlite=3.41.2=h2bbff1b_0
  - stack_data=0.2.0=pyhd3eb1b0_0
  - tbb=2021.8.0=h59b6b97_0
  - tensorboard=2.6.0=py_1
  - tensorboard-data-server=0.6.1=py38haa95532_0
  - tensorboard-plugin-wit=1.8.1=py38haa95532_0
  - tensorflow=2.6.0=gpu_py38hc0e8100_0
  - tensorflow-base=2.6.0=gpu_py38hb3da07e_0
  - tensorflow-estimator=2.6.0=pyh7b7c402_0
  - tensorflow-gpu=2.6.0=h17022bd_0
  - termcolor=2.1.0=py38haa95532_0
  - terminado=0.17.1=py38haa95532_0
  - threadpoolctl=2.2.0=pyh0d69192_0
  - tinycss2=1.2.1=py38haa95532_0
  - tornado=6.3.2=py38h2bbff1b_0
  - traitlets=5.7.1=py38haa95532_0
  - typing-extensions=4.7.1=py38haa95532_0
  - typing_extensions=4.7.1=py38haa95532_0
  - urllib3=1.26.16=py38haa95532_0
  - vc=14.2=h21ff451_1
  - vs2015_runtime=14.27.29016=h5e58377_2
  - wcwidth=0.2.5=pyhd3eb1b0_0
  - webencodings=0.5.1=py38_1
  - websocket-client=0.58.0=py38haa95532_4
  - werkzeug=2.2.3=py38haa95532_0
  - wheel=0.35.1=pyhd3eb1b0_0
  - win_inet_pton=1.1.0=py38haa95532_0
  - winpty=0.4.3=4
  - wrapt=1.14.1=py38h2bbff1b_0
  - yarl=1.8.1=py38h2bbff1b_0
  - zeromq=4.3.4=hd77b12b_0
  - zipp=3.11.0=py38haa95532_0
  - zlib=1.2.13=h8cc25b3_0
  - pip:
    - contourpy==1.1.1
    - cycler==0.11.0
    - fonttools==4.42.1
    - importlib-resources==6.0.1
    - kiwisolver==1.4.5
    - matplotlib==3.7.3
    - pillow==10.0.1
    - pyparsing==3.1.1
prefix: C:\Users\zclhl\anaconda3\envs\tf

To import it, simply run:

conda env create -f my_environment.yml

Hands-on: Non-linear Binary Classification

First, check whether the GPU is available:

import tensorflow as tf
# tf.test.gpu_device_name()
print(tf.test.is_gpu_available())
WARNING:tensorflow:From C:\Users\zclhl\AppData\Local\Temp\ipykernel_13180\3149783646.py:3: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
True

Now let's start coding:

# load the data
import pandas as pd
import numpy as np
data = pd.read_csv('data.csv')
data.head()
x1 x2 y
0 0.0323 0.0244 1
1 0.0887 0.0244 1
2 0.1690 0.0163 1
3 0.2420 0.0000 1
4 0.2420 0.0488 1
# define X and y
X = data.drop(['y'], axis=1)
y = data.loc[:, 'y']
X.head()
x1 x2
0 0.0323 0.0244
1 0.0887 0.0244
2 0.1690 0.0163
3 0.2420 0.0000
4 0.2420 0.0488
# visualize the data
from matplotlib import pyplot as plt
fig1 = plt.figure(figsize=(5,5))
passed = plt.scatter(X.loc[:,'x1'][y==1], X.loc[:,'x2'][y==1])
failed = plt.scatter(X.loc[:,'x1'][y==0], X.loc[:,'x2'][y==0])
plt.legend((passed, failed), ('passed','failed'))
plt.xlabel('x1')
plt.ylabel('x2')

plt.title('raw data')
plt.show()

# split the data into train and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.13, random_state=10)
print(X_train.shape, X_test.shape, X.shape)
(357, 2) (54, 2) (411, 2)
# build the model
from keras.models import Sequential
from keras.layers import Dense, Activation
mlp = Sequential()
mlp.add(Dense(units=20, input_dim=2, activation="sigmoid"))  # hidden layer: 20 neurons, sigmoid activation
mlp.add(Dense(units=1, activation="sigmoid"))
mlp.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 20)                60        
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 21        
=================================================================
Total params: 81
Trainable params: 81
Non-trainable params: 0
_________________________________________________________________
# configure the model: optimizer and loss function
mlp.compile(optimizer="adam", loss="binary_crossentropy")  # for binary classification
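For reference, the binary cross-entropy loss configured here takes the standard form, for labels $y_i \in \{0,1\}$ and predicted probabilities $\hat{y}_i$ over $m$ samples:

```latex
L(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\Bigl[\, y_i \log \hat{y}_i + (1 - y_i)\log\bigl(1 - \hat{y}_i\bigr) \Bigr]
```

This is the same loss used by logistic regression; Adam then minimizes it by gradient descent with adaptive step sizes.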
# train the model for 3000 epochs
mlp.fit(X_train, y_train, epochs=3000)
Epoch 1/3000
12/12 [==============================] - 1s 3ms/step - loss: 0.9955
Epoch 2/3000
12/12 [==============================] - 0s 2ms/step - loss: 0.9439
Epoch 3/3000
12/12 [==============================] - 0s 3ms/step - loss: 0.8962
Epoch 4/3000
12/12 [==============================] - 0s 3ms/step - loss: 0.8561
Epoch 5/3000
12/12 [==============================] - 0s 3ms/step - loss: 0.8213
Epoch 6/3000
12/12 [==============================] - 0s 3ms/step - loss: 0.7920
Epoch 7/3000
12/12 [==============================] - 0s 3ms/step - loss: 0.7678
Epoch 8/3000
12/12 [==============================] - 0s 2ms/step - loss: 0.7494
Epoch 9/3000
12/12 [==============================] - 0s 3ms/step - loss: 0.7363
Epoch 10/3000
12/12 [==============================] - 0s 3ms/step - loss: 0.7255
Epoch 11/3000
12/12 [==============================] - 0s 3ms/step - loss: 0.7165
Epoch 12/3000
12/12 [==============================] - 0s 3ms/step - loss: 0.7105
……
Epoch 2996/3000
12/12 [==============================] - 0s 4ms/step - loss: 0.1437
Epoch 2997/3000
12/12 [==============================] - 0s 3ms/step - loss: 0.1436
Epoch 2998/3000
12/12 [==============================] - 0s 3ms/step - loss: 0.1435
Epoch 2999/3000
12/12 [==============================] - 0s 4ms/step - loss: 0.1438
Epoch 3000/3000
12/12 [==============================] - 0s 4ms/step - loss: 0.1433



<keras.callbacks.History at 0x1da35d40280>
# make predictions and calculate accuracy
# y_train_predict = mlp.predict_classes(X_train)  # predict_classes was removed in newer Keras
y_train_predict = (mlp.predict(X_train) > 0.5).astype("int32")
from sklearn.metrics import accuracy_score
acc_train = accuracy_score(y_train, y_train_predict)
print(acc_train)
0.9551820728291317
y_test_predict = (mlp.predict(X_test) > 0.5).astype("int32")
# from sklearn.metrics import accuracy_score
acc_test = accuracy_score(y_test, y_test_predict)
print(acc_test)
0.9629629629629629
print(type(y_train_predict), y_train_predict)
<class 'numpy.ndarray'> [[0]
 [1]
 [0]
 [1]
 [1]
 [1]
 [0]
 [0]
 [0]
 [1]
 [0]
 [1]
 [0]
 [0]
 [1]
 [0]
# generate a grid of new data for plotting the decision boundary
xx, yy = np.meshgrid(np.arange(0,1,0.01), np.arange(0,1,0.01))
x_range = np.c_[xx.ravel(), yy.ravel()]
# y_range_predict = mlp.predict_classes(x_range)
y_range_predict = (mlp.predict(x_range) > 0.5).astype("int32")
print(type(y_range_predict))
<class 'numpy.ndarray'>
# format the output
y_range_predict_form = pd.Series(i[0] for i in y_range_predict)

print(y_range_predict_form)
0       1
1       1
2       1
3       1
4       1
       ..
9995    1
9996    1
9997    1
9998    1
9999    1
Length: 10000, dtype: int32
fig2 = plt.figure(figsize=(5,5))
passed_predict = plt.scatter(x_range[:,0][y_range_predict_form==1], x_range[:,1][y_range_predict_form==1])
failed_predict = plt.scatter(x_range[:,0][y_range_predict_form==0], x_range[:,1][y_range_predict_form==0])

passed = plt.scatter(X.loc[:,'x1'][y==1], X.loc[:,'x2'][y==1])
failed = plt.scatter(X.loc[:,'x1'][y==0], X.loc[:,'x2'][y==0])
plt.legend((passed, failed, passed_predict, failed_predict), ('passed','failed','passed_predict','failed_predict'))
plt.xlabel('x1')
plt.ylabel('x2')
plt.title('prediction result')
plt.show()