unityloverz posted on 2021-12-4 10:46

Support Vector Machines (SVM): Theory

Contents

Introduction to SVM
The linear SVM algorithm
The nonlinear SVM algorithm

Introduction to SVM

The support vector machine (support vector machine, SVM) is a binary classification model. Its basic form is the linear classifier with the largest margin in feature space; margin maximization is what distinguishes it from the perceptron. The SVM also includes the kernel trick, which makes it, in effect, a nonlinear classifier. The SVM's learning strategy is margin maximization, which can be formalized as a convex quadratic programming problem and is also equivalent to minimizing a regularized hinge loss. The SVM's learning algorithm is therefore an optimization algorithm for solving convex quadratic programs.

The linear SVM algorithm

The basic idea of SVM learning is to find the separating hyperplane that correctly classifies the training set with the largest geometric margin. As shown in the figure below, $w \cdot x + b = 0$ is the separating hyperplane. For a linearly separable dataset there are infinitely many such hyperplanes (this is the perceptron), but the separating hyperplane with the largest geometric margin is unique.


Before the derivation, some definitions. Suppose we are given a training set in feature space

$$T=\{(x_1,y_1),(x_2,y_2),\dots,(x_N,y_N)\}$$

where $x_i \in \mathbb{R}^n$, $y_i \in \{+1,-1\}$, $i=1,2,\dots,N$; here $x_i$ is the $i$-th feature vector and $y_i$ its class label: $+1$ marks a positive example and $-1$ a negative one. We further assume the training set is linearly separable.

Geometric margin: for a given dataset $T$ and hyperplane $w \cdot x + b = 0$, the geometric margin of the hyperplane with respect to a sample point $(x_i, y_i)$ is defined as

$$\gamma_i = y_i\left(\frac{w}{\|w\|}\cdot x_i + \frac{b}{\|w\|}\right)$$

The minimum of these geometric margins over all sample points is

$$\gamma = \min_{i=1,2,\dots,N} \gamma_i$$

This minimum is exactly the distance from the support vectors to the hyperplane.
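As a quick numeric check of these definitions, the sketch below computes each sample's geometric margin under a candidate hyperplane; the three points and the hyperplane $w=(0.5,\,0.5)$, $b=-2$ are made-up example values, not taken from the text.

```python
import math

def geometric_margin(w, b, x, y):
    """Geometric margin of hyperplane (w, b) w.r.t. one labeled sample:
    gamma_i = y_i * (w . x_i + b) / ||w||."""
    norm_w = math.sqrt(sum(wk * wk for wk in w))
    return y * (sum(wk * xk for wk, xk in zip(w, x)) + b) / norm_w

# Toy linearly separable data (hypothetical example values).
X = [(3.0, 3.0), (4.0, 3.0), (1.0, 1.0)]
Y = [1, 1, -1]
w, b = (0.5, 0.5), -2.0

gammas = [geometric_margin(w, b, x, y) for x, y in zip(X, Y)]
gamma = min(gammas)  # margin of the hyperplane = smallest per-sample margin
print(gamma)         # sqrt(2) for this particular (w, b)
```

The minimum is attained at the two points closest to the hyperplane, which is exactly the "support vector" picture described above.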
With the definitions above, finding the maximum-margin separating hyperplane can be stated as the constrained optimization problem

$$\max_{w,b}\ \gamma$$

$$\text{s.t.}\quad y_i\left(\frac{w}{\|w\|}\cdot x_i + \frac{b}{\|w\|}\right) \ge \gamma,\quad i=1,2,\dots,N$$

Dividing both sides of the constraint by $\gamma$ gives

$$y_i\left(\frac{w}{\|w\|\gamma}\cdot x_i + \frac{b}{\|w\|\gamma}\right) \ge 1$$

Since $\|w\|$ and $\gamma$ are both scalars, for brevity we rename

$$w := \frac{w}{\|w\|\gamma}$$

$$b := \frac{b}{\|w\|\gamma}$$

which turns the constraint into

$$y_i\left(w \cdot x_i + b\right) \ge 1,\quad i=1,2,\dots,N$$

Moreover, maximizing $\gamma$ is equivalent to maximizing $\frac{1}{\|w\|}$ (in the renamed variables the margin equals $\frac{1}{\|w\|}$), which in turn is equivalent to minimizing $\frac{1}{2}\|w\|^2$ (the factor $\frac{1}{2}$ only keeps the later derivative clean and does not change the result). The maximum-margin problem can therefore be restated as the constrained optimization problem

$$\min_{w,b}\ \frac{1}{2}\|w\|^2$$

$$\text{s.t.}\quad y_i\left(w \cdot x_i + b\right) \ge 1,\quad i=1,2,\dots,N$$
This is a convex quadratic program with inequality constraints, and the method of Lagrange multipliers yields its dual problem.
First, convert the constrained objective into an unconstrained Lagrangian

$$L(w,b,\alpha) = \frac{1}{2}\|w\|^2 - \sum_{i=1}^{N}\alpha_i\left(y_i\left(w \cdot x_i + b\right) - 1\right)$$

where the $\alpha_i$ are Lagrange multipliers with $\alpha_i \ge 0$. Now define

$$\theta(w) = \max_{\alpha_i \ge 0}\ L(w,b,\alpha)$$
When a sample point violates the constraint, i.e. lies outside the feasible region:

$$y_i\left(w \cdot x_i + b\right) < 1$$

then letting $\alpha_i \to \infty$ drives $\theta(w)$ to infinity as well.
When a sample point satisfies the constraint, i.e. lies inside the feasible region:

$$y_i\left(w \cdot x_i + b\right) \ge 1$$

then $\theta(w)$ equals the original objective. Combining the two cases gives the new objective

$$\theta(w) = \begin{cases} \frac{1}{2}\|w\|^2, & x \in \text{feasible region} \\ +\infty, & x \in \text{infeasible region} \end{cases}$$

so the original constrained problem is equivalent to

$$\min_{w,b}\ \theta(w) = \min_{w,b}\max_{\alpha_i \ge 0}\ L(w,b,\alpha) = p^*$$
Look at this new objective: it maximizes first, then minimizes. That way we must first handle expressions containing the unknown parameters $w$ and $b$, under inequality constraints, which is hard to solve directly. So we invoke Lagrangian duality and swap the min and the max, which gives:

$$\max_{\alpha_i \ge 0}\min_{w,b}\ L(w,b,\alpha) = d^*$$

For $p^* = d^*$ to hold, two conditions must be satisfied:
① the optimization problem is convex;
② the KKT conditions hold.
The present problem is clearly convex, so the first condition is met. The second condition requires

$$\begin{cases} \alpha_i \ge 0 \\ y_i\left(w \cdot x_i + b\right) - 1 \ge 0 \\ \alpha_i\left(y_i\left(w \cdot x_i + b\right) - 1\right) = 0 \end{cases}$$

To obtain the concrete form of the dual problem, set the partial derivatives of $L(w,b,\alpha)$ with respect to $w$ and $b$ to zero, which gives

$$w = \sum_{i=1}^{N}\alpha_i y_i x_i$$

$$\sum_{i=1}^{N}\alpha_i y_i = 0$$
Substituting these two equalities into the Lagrangian to eliminate $w$ and $b$ gives

$$L(w,b,\alpha) = \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_j y_iy_j\left(x_i \cdot x_j\right) - \sum_{i=1}^{N}\alpha_i y_i\left(\left(\sum_{j=1}^{N}\alpha_j y_j x_j\right)\cdot x_i + b\right) + \sum_{i=1}^{N}\alpha_i$$

$$\qquad = -\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_j y_iy_j\left(x_i \cdot x_j\right) + \sum_{i=1}^{N}\alpha_i$$

that is,

$$\min_{w,b}\ L(w,b,\alpha) = -\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_j y_iy_j\left(x_i \cdot x_j\right) + \sum_{i=1}^{N}\alpha_i$$

Maximizing $\min_{w,b}\ L(w,b,\alpha)$ over $\alpha$ is then exactly the dual problem

$$\max_{\alpha}\ -\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_j y_iy_j\left(x_i \cdot x_j\right) + \sum_{i=1}^{N}\alpha_i$$

$$\text{s.t.}\quad \sum_{i=1}^{N}\alpha_i y_i = 0$$

$$\alpha_i \ge 0,\quad i=1,2,\dots,N$$

Negating the objective turns the maximization into a minimization

$$\min_{\alpha}\ \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_j y_iy_j\left(x_i \cdot x_j\right) - \sum_{i=1}^{N}\alpha_i$$

$$\text{s.t.}\quad \sum_{i=1}^{N}\alpha_i y_i = 0$$

$$\alpha_i \ge 0,\quad i=1,2,\dots,N$$
Our optimization problem now has the form above. For this problem there is a more efficient dedicated solver, the sequential minimal optimization (SMO) algorithm. The details of solving this problem with SMO are left for a detailed derivation in the next article.
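To make the dual concrete before SMO arrives, here is a minimal sketch that solves it on a three-point toy set with a general-purpose constrained optimizer (scipy's SLSQP) rather than SMO; the data points are made-up example values. For this particular set the maximum-margin solution works out to roughly $w^*=(0.5,\,0.5)$ and $b^*=-2$.

```python
import numpy as np
from scipy.optimize import minimize

# Toy linearly separable data (hypothetical example values).
X = np.array([[3.0, 3.0], [4.0, 3.0], [1.0, 1.0]])
y = np.array([1.0, 1.0, -1.0])
N = len(y)
Q = (y[:, None] * X) @ (y[:, None] * X).T  # Q_ij = y_i y_j (x_i . x_j)

def dual_objective(a):
    # (1/2) sum_ij a_i a_j y_i y_j (x_i . x_j) - sum_i a_i
    return 0.5 * a @ Q @ a - a.sum()

cons = {"type": "eq", "fun": lambda a: a @ y}  # sum_i a_i y_i = 0
bounds = [(0.0, None)] * N                     # a_i >= 0 (hard margin)
res = minimize(dual_objective, np.zeros(N), bounds=bounds, constraints=cons)

alpha = res.x
w = (alpha * y) @ X                  # w* = sum_i a_i* y_i x_i
j = int(np.argmax(alpha))            # pick a support vector (a_j* > 0)
b = y[j] - (alpha * y) @ (X @ X[j])  # b* = y_j - sum_i a_i* y_i (x_i . x_j)
print(w, b)                          # approx. [0.5 0.5] and -2.0
```

SLSQP is fine at this scale; SMO exists because a general QP solver becomes impractical when $N$ is large.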
From this optimization we obtain $\alpha^*$; plugging it into the expressions derived below from the KKT conditions, we can solve for $w^*$ and $b^*$ and thereby reach our original goal: the separating hyperplane, i.e. the "decision surface".
The derivation so far assumed the KKT conditions hold; they are

$$\begin{cases} \alpha_i^* \ge 0 \\ y_i\left(w^* \cdot x_i + b^*\right) - 1 \ge 0 \\ \alpha_i^*\left(y_i\left(w^* \cdot x_i + b^*\right) - 1\right) = 0 \end{cases}$$

In addition, from the earlier derivation the following two equalities hold

$$w^* = \sum_{i=1}^{N}\alpha_i^* y_i x_i$$

$$\sum_{i=1}^{N}\alpha_i^* y_i = 0$$
It follows that in $\alpha^*$ at least one component $\alpha_j^* > 0$ (by contradiction: if all were zero, then $w^* = 0$, a contradiction), and for this $j$

$$y_j\left(w^* \cdot x_j + b^*\right) - 1 = 0$$

Therefore

$$w^* = \sum_{i=1}^{N}\alpha_i^* y_i x_i$$

$$b^* = y_j - \sum_{i=1}^{N}\alpha_i^* y_i\left(x_i \cdot x_j\right)$$

For any training sample $(x_i, y_i)$, either $\alpha_i = 0$ or $y_i\left(w \cdot x_i + b\right) = 1$. If $\alpha_i = 0$, the sample never appears in the final expressions for the model parameters. If $\alpha_i > 0$, then necessarily $y_i\left(w \cdot x_i + b\right) = 1$, and the sample lies exactly on the maximum-margin boundary: it is a support vector. This reveals an important property of support vector machines: once training is done, most training samples can be discarded; the final model depends only on the support vectors.
Up to here everything rested on the assumption that the training data are linearly separable, but in practice data are almost never perfectly separable. To handle this, the "soft margin" is introduced: some points are allowed to violate the constraint

$$y_i\left(w \cdot x_i + b\right) \ge 1$$

Adopting the hinge loss, the original optimization problem is rewritten as

$$\min_{w,b,\xi_i}\ \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{N}\xi_i$$

$$\text{s.t.}\quad y_i\left(w \cdot x_i + b\right) \ge 1 - \xi_i$$

$$\xi_i \ge 0,\quad i=1,2,\dots,N$$

where $\xi_i = \max\left(0,\ 1 - y_i\left(w \cdot x_i + b\right)\right)$ is a "slack variable", i.e. a hinge loss. Each sample has its own slack variable, which measures how badly that sample violates the constraint. $C$ is the penalty parameter: the larger $C$ is, the more heavily misclassification is penalized. The solution follows the same route as the separable case: use Lagrange multipliers to form the Lagrangian, then solve its dual problem.
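The correspondence between slack variables and the hinge loss can be checked directly; the sketch below evaluates $\xi_i = \max\left(0,\ 1 - y_i(w \cdot x_i + b)\right)$ for a few made-up points under a made-up hyperplane.

```python
def hinge_slack(w, b, x, y):
    """Slack xi = max(0, 1 - y * (w . x + b)): zero when the point satisfies
    the margin constraint, positive and growing the further it violates it."""
    return max(0.0, 1.0 - y * (sum(wk * xk for wk, xk in zip(w, x)) + b))

w, b = (0.5, 0.5), -2.0      # hypothetical hyperplane
samples = [((3.0, 3.0), 1),  # exactly on the margin: slack 0
           ((4.0, 3.0), 1),  # beyond the margin: slack 0
           ((2.0, 2.0), -1)] # on the hyperplane but labeled -1: slack 1
xis = [hinge_slack(w, b, x, y) for x, y in samples]
print(xis)  # [0.0, 0.0, 1.0]
```

A slack in $(0, 1)$ means the point is inside the margin but still correctly classified; a slack above $1$ means it is misclassified outright.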
Putting the discussion together, the linear support vector machine learning algorithm is:
Input: a training set $T=\{(x_1,y_1),(x_2,y_2),\dots,(x_N,y_N)\}$, where $x_i \in \mathbb{R}^n$, $y_i \in \{+1,-1\}$, $i=1,2,\dots,N$;
Output: the separating hyperplane and the classification decision function.
(1) Choose a penalty parameter $C > 0$, then construct and solve the convex quadratic program

$$\min_{\alpha}\ \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_j y_iy_j\left(x_i \cdot x_j\right) - \sum_{i=1}^{N}\alpha_i$$

$$\text{s.t.}\quad \sum_{i=1}^{N}\alpha_i y_i = 0$$

$$0 \le \alpha_i \le C,\quad i=1,2,\dots,N$$

obtaining the optimal solution $\alpha^* = (\alpha_1^*, \alpha_2^*, \dots, \alpha_N^*)^T$.
(2) Compute

$$w^* = \sum_{i=1}^{N}\alpha_i^* y_i x_i$$

Choose a component $\alpha_j^*$ of $\alpha^*$ satisfying $0 < \alpha_j^* < C$ and compute

$$b^* = y_j - \sum_{i=1}^{N}\alpha_i^* y_i\left(x_i \cdot x_j\right)$$

(3) The separating hyperplane is

$$w^* \cdot x + b^* = 0$$

and the classification decision function is

$$f(x) = \operatorname{sign}\left(w^* \cdot x + b^*\right)$$
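Step (3) ends with a decision rule that is trivial to apply once $w^*$ and $b^*$ are known; a sketch (the parameter values below are hypothetical, not computed in the text):

```python
def decide(w, b, x):
    """Linear SVM decision function f(x) = sign(w* . x + b*).
    Points on the hyperplane itself (s == 0) are assigned +1 here."""
    s = sum(wk * xk for wk, xk in zip(w, x)) + b
    return 1 if s >= 0 else -1

w_star, b_star = (0.5, 0.5), -2.0  # hypothetical trained parameters
print(decide(w_star, b_star, (4.0, 4.0)))  # 1  (0.5*4 + 0.5*4 - 2 = 2 > 0)
print(decide(w_star, b_star, (1.0, 0.0)))  # -1 (0.5 - 2 < 0)
```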

The nonlinear SVM algorithm

For a nonlinear classification problem in the input space, a nonlinear transformation can turn it into a linear classification problem in some (high-dimensional) feature space, where a linear support vector machine is then learned. Because in the linear SVM's dual problem both the objective and the decision function involve the instances only through inner products, the nonlinear transformation never needs to be specified explicitly: a kernel function replaces those inner products. The kernel $K(x,z)$ represents the inner product of two instances after a nonlinear transformation. Concretely, $K(x,z)$ being a kernel function (a positive definite kernel) means there exists a map $\phi(x)$ from the input space to a feature space such that for any $x, z$ in the input space,

$$K(x,z) = \phi(x) \cdot \phi(z)$$

Replacing the inner products in the linear SVM dual with the kernel $K(x,z)$ and solving yields the nonlinear support vector machine

$$f(x) = \operatorname{sign}\left(\sum_{i=1}^{N}\alpha_i^* y_i K(x, x_i) + b^*\right)$$
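The identity $K(x,z)=\phi(x)\cdot\phi(z)$ can be checked concretely for a kernel whose feature map is known in closed form. The sketch below uses the homogeneous degree-2 polynomial kernel $K(x,z)=(x\cdot z)^2$ on $\mathbb{R}^2$, whose map is $\phi(x)=(x_1^2,\ \sqrt{2}\,x_1x_2,\ x_2^2)$; this kernel is an illustration chosen here, not one discussed in the text.

```python
import math

def K(x, z):
    """Homogeneous polynomial kernel of degree 2: K(x, z) = (x . z)^2."""
    return (x[0] * z[0] + x[1] * z[1]) ** 2

def phi(x):
    """Explicit feature map for K on R^2: phi(x) = (x1^2, sqrt(2) x1 x2, x2^2)."""
    return (x[0] ** 2, math.sqrt(2) * x[0] * x[1], x[1] ** 2)

x, z = (1.0, 2.0), (3.0, -1.0)
lhs = K(x, z)                                         # (1*3 + 2*(-1))^2 = 1
rhs = sum(pa * pb for pa, pb in zip(phi(x), phi(z)))  # phi(x) . phi(z)
print(lhs, rhs)  # both approximately 1.0
```

The point of the kernel trick is that $K(x,z)$ costs one inner product in the input space, while $\phi$ may live in a much higher-dimensional (even infinite-dimensional, as with the Gaussian kernel) space.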
Putting the discussion together, the nonlinear support vector machine learning algorithm is:
Input: a training set $T=\{(x_1,y_1),(x_2,y_2),\dots,(x_N,y_N)\}$, where $x_i \in \mathbb{R}^n$, $y_i \in \{+1,-1\}$, $i=1,2,\dots,N$;
Output: the classification decision function.
(1) Choose an appropriate kernel function $K(x,z)$ and a penalty parameter $C > 0$, then construct and solve the convex quadratic program

$$\min_{\alpha}\ \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_j y_iy_jK\left(x_i, x_j\right) - \sum_{i=1}^{N}\alpha_i$$

$$\text{s.t.}\quad \sum_{i=1}^{N}\alpha_i y_i = 0$$

$$0 \le \alpha_i \le C,\quad i=1,2,\dots,N$$

obtaining the optimal solution $\alpha^* = (\alpha_1^*, \alpha_2^*, \dots, \alpha_N^*)^T$.
(2) Choose a component $\alpha_j^*$ of $\alpha^*$ satisfying $0 < \alpha_j^* < C$ and compute

$$b^* = y_j - \sum_{i=1}^{N}\alpha_i^* y_iK\left(x_i, x_j\right)$$

(3) The classification decision function is:

$$f(x) = \operatorname{sign}\left(\sum_{i=1}^{N}\alpha_i^* y_i K(x, x_i) + b^*\right)$$

Let us introduce a commonly used kernel, the Gaussian kernel

$$K(x,z) = \exp\left(-\frac{\|x-z\|^2}{2\sigma^2}\right)$$

The corresponding SVM is a Gaussian radial basis function classifier, and in this case the classification decision function is

$$f(x) = \operatorname{sign}\left(\sum_{i=1}^{N}\alpha_i^* y_i \exp\left(-\frac{\|x-x_i\|^2}{2\sigma^2}\right) + b^*\right)$$
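A minimal sketch of the Gaussian kernel and the resulting decision function; the support vectors, the multipliers $\alpha^*$, the bias $b^*$, and $\sigma$ below are all made-up values for illustration, not a trained model.

```python
import math

def gaussian_kernel(x, z, sigma):
    """K(x, z) = exp(-||x - z||^2 / (2 sigma^2))."""
    sq = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-sq / (2.0 * sigma ** 2))

def rbf_decision(x, svs, ys, alphas, b, sigma):
    """f(x) = sign(sum_i alpha_i* y_i K(x, x_i) + b*), over support vectors only."""
    s = sum(a * y * gaussian_kernel(x, xi, sigma)
            for a, y, xi in zip(alphas, ys, svs)) + b
    return 1 if s >= 0 else -1

# Basic kernel properties: K(x, x) = 1, and K decays with distance.
assert gaussian_kernel((1.0, 2.0), (1.0, 2.0), 1.0) == 1.0
print(gaussian_kernel((0.0, 0.0), (0.0, 2.0), 1.0))  # exp(-2)

# Hypothetical "trained" model with two support vectors.
svs, ys, alphas, b, sigma = [(0.0, 0.0), (2.0, 2.0)], [1, -1], [0.8, 0.8], 0.0, 1.0
print(rbf_decision((0.2, 0.1), svs, ys, alphas, b, sigma))  # 1 (closer to the +1 SV)
```

With this kernel, the decision function is effectively a similarity vote among the support vectors, weighted by $\alpha_i^* y_i$, with $\sigma$ controlling how quickly influence falls off with distance.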

References

Hang Li, 《统计学习方法》 (Statistical Learning Methods)
Zhihua Zhou, 《机器学习》 (Machine Learning)
Jack-Cui, Python3《机器学习实战》学习笔记(八) (Machine Learning in Action study notes, part 8: a hands-on derivation of the linear SVM)
深入理解拉格朗日乘子法 (Lagrange Multiplier) 和KKT条件 (An in-depth look at Lagrange multipliers and the KKT conditions)
支持向量机通俗导论 (A popular introduction to SVM: the three levels of understanding)
Support Vector Machines for Classification

super1 posted on 2021-12-4 10:49

First off, liked and bookmarked! Building on Hang Li's book, it trims some parts and spells out others in more detail (the latter helped me a lot). Overall it is clear and well organized, a good article. Thanks to the author!

acecase posted on 2021-12-4 10:52

"Maximizing $\gamma$ is equivalent to maximizing $\frac{1}{\|w\|}$": why are the two equivalent?

unityloverz posted on 2021-12-4 10:56

Take a look at the definition of gamma.

RedZero9 posted on 2021-12-4 11:00

Written this well, yet hardly anyone reads it.

acecase posted on 2021-12-4 11:07

Nicely written. Too bad I haven't touched theory for ages, so a lot of the math is rusty for me.

mypro334 posted on 2021-12-4 11:09

Hang Li, 《统计学习方法》 (Statistical Learning Methods)

Zephus posted on 2021-12-4 11:17

Did you write this for yourself, or copy it from somewhere?

TheLudGamer posted on 2021-12-4 11:17

That w should be the new w that was just defined above.

Ilingis posted on 2021-12-4 11:20

Thanks