JamesB posted on 2021-12-29 18:16

Nonlinear Optimization (1): Optimization Methods

Having covered Lie groups and Lie algebras, it is time for least-squares optimization. As before, this will be a series of articles. This first installment covers the basic concepts and the common least-squares algorithms.

1. Problem Definition

A least-squares problem is defined as follows:

Find a local minimizer of the function

$$F(\mathbf{x}) = \frac{1}{2} \sum_{i=1}^{m} (f_i(\mathbf{x}))^2, \quad f_i: \mathbf{R}^n \to \mathbf{R}, \quad m \ge n$$

Here, $f_i(\mathbf{x})$ are the residual (error) functions.

The local minimizer is defined as follows:

Given $F: \mathbf{R}^n \to \mathbf{R}$, find $\mathbf{x}^*$ such that


$$F(\mathbf{x}^*) \le F(\mathbf{x}) \quad \text{for } \|\mathbf{x} - \mathbf{x}^*\| < \delta$$

Here, $\delta$ is a given small positive number.

Assuming the cost function $F$ is smooth and differentiable, we can expand it in a Taylor series:


$$F(\mathbf{x} + \mathbf{h}) = F(\mathbf{x}) + \mathbf{h}^T \mathbf{g} + \frac{1}{2} \mathbf{h}^T \mathbf{H} \mathbf{h} + O(\|\mathbf{h}\|^3)$$

where $\mathbf{g}$ is the gradient (first derivative):


$$\mathbf{g} = F'(\mathbf{x}) = \begin{bmatrix} \frac{\partial F}{\partial x_1}(\mathbf{x}) \\ \frac{\partial F}{\partial x_2}(\mathbf{x}) \\ \vdots \\ \frac{\partial F}{\partial x_n}(\mathbf{x}) \end{bmatrix}$$

and $\mathbf{H}$ is the Hessian matrix (second derivative):


$$\mathbf{H} = F''(\mathbf{x}) = \begin{bmatrix} \frac{\partial^2 F}{\partial x_i \partial x_j}(\mathbf{x}) \end{bmatrix}$$

Clearly, if $\mathbf{x}^*$ is a local minimizer, then for $\|\mathbf{h}\|$ small enough we cannot find a smaller value $F(\mathbf{x}^* + \mathbf{h})$.

This gives a necessary condition for a local minimizer:

If $\mathbf{x}^*$ is a local minimizer, then $\mathbf{g}^* = F'(\mathbf{x}^*) = 0$.

Points satisfying this condition include local minimizers, local maximizers, and saddle points, collectively called stationary points. To determine whether a stationary point $\mathbf{x}_s$ is a local minimizer, substitute it back into the Taylor expansion of $F$ (its gradient term vanishes):


$$F(\mathbf{x}_s + \mathbf{h}) = F(\mathbf{x}_s) + \frac{1}{2} \mathbf{h}^T \mathbf{H}_s \mathbf{h} + \text{h.o.t.}$$

By its definition, the Hessian is a symmetric matrix. If it is also positive definite, then its eigenvalues are all greater than some $\delta > 0$, and


$$\frac{1}{2} \mathbf{h}^T \mathbf{H}_s \mathbf{h} > \frac{1}{2} \delta \|\mathbf{h}\|^2$$

Clearly, in that case $F(\mathbf{x}_s + \mathbf{h}) > F(\mathbf{x}_s)$ for all sufficiently small $\mathbf{h}$.

This gives a sufficient condition for a local minimizer:

If $\mathbf{x}^*$ is a stationary point and $F''(\mathbf{x}^*)$ is positive definite, then $\mathbf{x}^*$ is a local minimizer.

Likewise, if $\mathbf{H}_s$ is negative definite, then $\mathbf{x}_s$ is a local maximizer; if $\mathbf{H}_s$ has both positive and negative eigenvalues, then $\mathbf{x}_s$ is a saddle point.
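
As a quick illustration of this classification, here is a minimal sketch in Python/NumPy that inspects the eigenvalues of the Hessian at a stationary point (the example function is made up for illustration):

import numpy as np

def classify_stationary_point(H, tol=1e-10):
    """Classify a stationary point from its symmetric Hessian H."""
    eigvals = np.linalg.eigvalsh(H)           # eigenvalues of a symmetric matrix
    if np.all(eigvals > tol):
        return "local minimum"                # H positive definite
    if np.all(eigvals < -tol):
        return "local maximum"                # H negative definite
    if np.any(eigvals > tol) and np.any(eigvals < -tol):
        return "saddle point"                 # mixed signs
    return "indeterminate"                    # some eigenvalues are ~0

# F(x, y) = x^2 - y^2 has a stationary point at the origin with H = diag(2, -2):
print(classify_stationary_point(np.array([[2.0, 0.0], [0.0, -2.0]])))  # saddle point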

2. Iterative Descent Methods


Nonlinear optimization problems are almost always solved iteratively: starting from an initial point $\mathbf{x}_0$, the method produces new iterates $\mathbf{x}_1, \mathbf{x}_2, \dots$ that eventually converge to $\mathbf{x}^*$. To this end, we introduce the descending condition:


$$F(\mathbf{x}_{k+1}) < F(\mathbf{x}_k)$$

Going back to the Taylor expansion of $F$, we have


$$F(\mathbf{x} + \alpha \mathbf{h}) \approx F(\mathbf{x}) + \alpha \mathbf{h}^T \mathbf{F}'(\mathbf{x})$$

If $\mathbf{h}^T \mathbf{F}'(\mathbf{x}) < 0$, then $\mathbf{h}$ is a descent direction of $F$.

Essentially all descent methods can be summarized by the following two steps (see the sketch after this list):

1. Find a descent direction $\mathbf{h}_d$; then
2. find a step length along it that reduces the value of $F$.

The step length $\alpha$ can be obtained by a line search.
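
A minimal sketch of this two-step template, using a simple backtracking (Armijo) line search; the F and grad callables are placeholders to be supplied by the user:

import numpy as np

def backtracking_line_search(F, x, h, g, alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink alpha until the sufficient-decrease (Armijo) condition holds."""
    alpha = alpha0
    while F(x + alpha * h) > F(x) + c * alpha * h.dot(g):
        alpha *= rho
    return alpha

def descent_method(F, grad, x0, max_iter=100, eps=1e-8):
    """Generic descent: pick a descent direction, then a step length."""
    x = x0.copy()
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g, np.inf) <= eps:    # F'(x) ~ 0: stationary point
            break
        h = -g                                   # steepest-descent direction (see 2.1)
        alpha = backtracking_line_search(F, x, h, g)
        x = x + alpha * h                        # descending condition is satisfied
    return x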

2.1 Steepest Descent

From the expansion above, we obtain


$$\lim_{\alpha \to 0} \frac{F(\mathbf{x}) - F(\mathbf{x} + \alpha \mathbf{h})}{\alpha \|\mathbf{h}\|} = -\frac{1}{\|\mathbf{h}\|} \mathbf{h}^T \mathbf{F}'(\mathbf{x}) = -\|\mathbf{F}'(\mathbf{x})\| \cos \theta$$

This expression is the relative gain of $F$, where $\theta$ is the angle between $\mathbf{h}$ and $\mathbf{F}'(\mathbf{x})$. The gain is largest when $\theta = \pi$.

This gives the descent direction of the steepest descent method:


$$\mathbf{h}_{sd} = -\mathbf{F}'(\mathbf{x})$$

Steepest descent tends to perform well in the initial stage, but in the final stage it converges only linearly.

This motivates hybrid methods: use one method in the initial stage and a different one in the final stage.

2.2 Newton's Method

We know that a local minimizer satisfies $\mathbf{F}'(\mathbf{x}^*) = 0$. This is a system of nonlinear equations, so we Taylor-expand it:


$$\mathbf{F}'(\mathbf{x} + \mathbf{h}) \approx \mathbf{F}'(\mathbf{x}) + \mathbf{F}''(\mathbf{x}) \mathbf{h}$$

Setting the right-hand side to zero yields Newton's method:


$$\mathbf{H} \mathbf{h}_n = -\mathbf{F}'(\mathbf{x})$$

and the next iterate is


$$\mathbf{x} := \mathbf{x} + \mathbf{h}_n$$

If $\mathbf{H}$ is positive definite (which means the system above has a unique solution), i.e. $\mathbf{u}^T \mathbf{H} \mathbf{u} > 0$ for all $\mathbf{u} \ne 0$, then


$$0 < \mathbf{h}_n^T \mathbf{H} \mathbf{h}_n = -\mathbf{h}_n^T \mathbf{F}'(\mathbf{x})$$

which shows that $\mathbf{h}_n$ is a descent direction of the cost function $F$.

Newton's method performs very well in the final stage of convergence; under certain conditions ($\mathbf{H}$ is positive definite near the local minimizer and $\mathbf{x}$ is already close to it), it achieves quadratic convergence.
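
A minimal sketch of the plain Newton iteration under these assumptions; the grad and hess callables are placeholders:

import numpy as np

def newton_method(grad, hess, x0, max_iter=50, eps=1e-10):
    """Plain Newton: solve H h_n = -F'(x), then x := x + h_n."""
    x = x0.copy()
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g, np.inf) <= eps:
            break
        h_n = np.linalg.solve(hess(x), -g)   # Newton step
        x = x + h_n
    return x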

The problem is that if $\mathbf{H}$ is negative definite and there is a local maximizer nearby, Newton's method may converge to that maximizer. We can either require the cost function to decrease at every step, or use a hybrid method that takes the Newton step only when $\mathbf{H}$ is positive definite and falls back to steepest descent otherwise, as in the sketch below:
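
A sketch of that hybrid step selection; checking positive definiteness via a Cholesky factorization is one common choice (an assumption here, not prescribed by the text):

import numpy as np

def hybrid_direction(g, H):
    """Newton step if H is positive definite, otherwise steepest descent."""
    try:
        L = np.linalg.cholesky(H)            # succeeds iff H is positive definite
        y = np.linalg.solve(L, -g)           # forward substitution
        return np.linalg.solve(L.T, y)       # Newton direction h_n
    except np.linalg.LinAlgError:
        return -g                            # steepest-descent direction h_sd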


2.3 Trust-Region and Damped Methods

We use a model $L$ to approximate the behavior of $F$ in a neighborhood of the current point:


$$F(\mathbf{x} + \mathbf{h}) \approx L(\mathbf{h}) = F(\mathbf{x}) + \mathbf{h}^T \mathbf{c} + \frac{1}{2} \mathbf{h}^T \mathbf{B} \mathbf{h}$$

$L$ is thus an approximation of the second-order Taylor expansion of $F$: $\mathbf{c}$ approximates the gradient, and $\mathbf{B}$ approximates the Hessian.

In a trust-region method, we assume the model $L$ is sufficiently accurate within a bounded region of radius $\Delta$, which gives the step


$$\mathbf{h} = \mathbf{h}_{tr} = \arg\min_{\|\mathbf{h}\| \le \Delta} \{ L(\mathbf{h}) \}$$

For a damped method, we have


$$\mathbf{h} = \mathbf{h}_{dm} = \arg\min_{\mathbf{h}} \left\{ L(\mathbf{h}) + \frac{1}{2} \mu \mathbf{h}^T \mathbf{h} \right\}$$

The damping factor $\mu \ge 0$ penalizes large steps. The generic descent method (Algorithm 2.4 in the reference) then simplifies as follows: if $\mathbf{h}$ satisfies the descending condition, take the full step ($\alpha = 1$); otherwise, reject the step ($\alpha = 0$). By adjusting $\Delta$ or $\mu$, the algorithm can avoid getting stuck at a point.

Since $L$ only models $F$ in a neighborhood, a failed update usually means the step $\mathbf{h}$ was too large, so the approximation was poor. To measure how well $L$ fits $F$, define the gain ratio


$$\rho = \frac{F(\mathbf{x}) - F(\mathbf{x} + \mathbf{h})}{L(\mathbf{0}) - L(\mathbf{h})}$$


It is the ratio between the actual decrease of the cost function and the decrease predicted by the model. The denominator is positive, so when the numerator is small or negative, the cost function barely decreased or even increased; we then know the current $\mathbf{h}$ is too large and $L$ is not a good approximation.

For trust-region and damped methods, the common update strategies shrink the region (or raise the damping) when $\rho$ is small and enlarge the region (or lower the damping) when $\rho$ is large; a sketch follows. The algorithms are not sensitive to the exact constants involved; what matters is choosing values so that $\Delta$ or $\mu$ does not oscillate.
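
A sketch of the classical update rules from the reference: grow or shrink the trust-region radius based on the 0.25/0.75 thresholds, and use Nielsen's strategy for the damping parameter (these constants are the conventional choices, not sacred values):

def update_trust_region(rho, Delta, h_norm):
    """Shrink Delta on poor model agreement, grow it on very good agreement."""
    if rho < 0.25:
        Delta = Delta / 2.0
    elif rho > 0.75:
        Delta = max(Delta, 3.0 * h_norm)
    return Delta

def update_damping(rho, mu, nu):
    """Nielsen's rule: decrease mu smoothly on success, grow it fast on failure."""
    if rho > 0:                            # step accepted
        mu = mu * max(1.0 / 3.0, 1.0 - (2.0 * rho - 1.0) ** 3)
        nu = 2.0
    else:                                  # step rejected
        mu = mu * nu
        nu = 2.0 * nu
    return mu, nu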

3. Nonlinear Least-Squares Algorithms

The cost function introduced earlier can also be expanded as follows:


$$F(\mathbf{x}) = \frac{1}{2} \sum_{i=1}^{m} (f_i(\mathbf{x}))^2 = \frac{1}{2} \|\mathbf{f}(\mathbf{x})\|^2 = \frac{1}{2} \mathbf{f}(\mathbf{x})^T \mathbf{f}(\mathbf{x})$$

The Taylor expansion of the residual function $\mathbf{f}(\mathbf{x})$ is:


$$\mathbf{f}(\mathbf{x} + \mathbf{h}) = \mathbf{f}(\mathbf{x}) + \mathbf{J}(\mathbf{x}) \mathbf{h} + O(\|\mathbf{h}\|^2)$$

where the Jacobian matrix $\mathbf{J}$ is given by


$$(\mathbf{J}(\mathbf{x}))_{ij} = \frac{\partial f_i}{\partial x_j}(\mathbf{x})$$

The partial derivatives of the cost function $F$ can then be written as:


$$\frac{\partial F}{\partial x_j}(\mathbf{x}) = \sum_{i=1}^{m} f_i(\mathbf{x}) \frac{\partial f_i}{\partial x_j}(\mathbf{x})$$

or in matrix form:


$$\mathbf{F}'(\mathbf{x}) = \mathbf{J}(\mathbf{x})^T \mathbf{f}(\mathbf{x})$$

Finally, the second derivative (Hessian) of $F$ is:


$$\begin{aligned} \mathbf{F}''(\mathbf{x}) &= \frac{\partial^2 F}{\partial x_j \partial x_k}(\mathbf{x}) = \sum_{i=1}^{m} \left( \frac{\partial f_i}{\partial x_j}(\mathbf{x}) \frac{\partial f_i}{\partial x_k}(\mathbf{x}) + f_i(\mathbf{x}) \frac{\partial^2 f_i}{\partial x_j \partial x_k}(\mathbf{x}) \right) \\ &= \mathbf{J}(\mathbf{x})^T \mathbf{J}(\mathbf{x}) + \sum_{i=1}^{m} f_i(\mathbf{x}) \mathbf{f}''_i(\mathbf{x}) \end{aligned}$$
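
A quick numerical sanity check of $\mathbf{F}' = \mathbf{J}^T \mathbf{f}$ on a toy residual (the residual and Jacobian here are made up for illustration):

import numpy as np

def f(x):
    """Toy residual f: R^2 -> R^3."""
    return np.array([x[0] - 1.0, x[1] - 2.0, x[0] * x[1]])

def J(x):
    """Hand-derived Jacobian of f."""
    return np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [x[1], x[0]]])

def F(x):
    return 0.5 * f(x).dot(f(x))

x = np.array([0.3, -0.7])
g_analytic = J(x).T.dot(f(x))              # F'(x) = J^T f
eps = 1e-6                                 # central finite differences
g_numeric = np.array([(F(x + eps * e) - F(x - eps * e)) / (2 * eps)
                      for e in np.eye(2)])
print(np.allclose(g_analytic, g_numeric, atol=1e-6))   # True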

3.1 The Gauss-Newton Method

The GN method is based on a linear approximation of the residual function $\mathbf{f}$:


$$\mathbf{f}(\mathbf{x} + \mathbf{h}) \approx \mathbf{l}(\mathbf{h}) = \mathbf{f}(\mathbf{x}) + \mathbf{J}(\mathbf{x}) \mathbf{h}$$

The cost function $F$ then becomes


$$\begin{aligned} F(\mathbf{x} + \mathbf{h}) \approx L(\mathbf{h}) &= \frac{1}{2} \mathbf{l}(\mathbf{h})^T \mathbf{l}(\mathbf{h}) \\ &= \frac{1}{2} \mathbf{f}^T \mathbf{f} + \mathbf{h}^T \mathbf{J}^T \mathbf{f} + \frac{1}{2} \mathbf{h}^T \mathbf{J}^T \mathbf{J} \mathbf{h} \\ &= F(\mathbf{x}) + \mathbf{h}^T \mathbf{J}^T \mathbf{f} + \frac{1}{2} \mathbf{h}^T \mathbf{J}^T \mathbf{J} \mathbf{h} \end{aligned}$$

The gradient and Hessian of $L$ are:


$$\mathbf{L}'(\mathbf{h}) = \mathbf{J}^T \mathbf{f} + \mathbf{J}^T \mathbf{J} \mathbf{h}, \quad \mathbf{L}''(\mathbf{h}) = \mathbf{J}^T \mathbf{J}$$

Moreover, if the Jacobian $\mathbf{J}$ has full column rank, then the Hessian of $L$ is positive definite, which means $L(\mathbf{h})$ has a unique minimizer.

Setting $\mathbf{L}'(\mathbf{h}) = 0$ gives the GN step:


$$(\mathbf{J}^T \mathbf{J}) \mathbf{h}_{gn} = -\mathbf{J}^T \mathbf{f}$$
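
A minimal GN iteration based on this equation; solving the normal equations with np.linalg.solve is the direct transcription (in practice a QR-based solve of the linearized system is numerically safer):

import numpy as np

def gauss_newton(f, J, x0, max_iter=100, eps=1e-10):
    """Gauss-Newton: repeatedly solve (J^T J) h_gn = -J^T f and step."""
    x = x0.copy()
    for _ in range(max_iter):
        Jx, fx = J(x), f(x)
        g = Jx.T.dot(fx)                     # F'(x) = J^T f
        if np.linalg.norm(g, np.inf) <= eps:
            break
        h_gn = np.linalg.solve(Jx.T.dot(Jx), -g)
        x = x + h_gn
    return x

For example, gauss_newton(f, J, np.zeros(2)) can be run with the toy f and J from the sketch in the previous section.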

One can verify that $\mathbf{h}_{gn}$ is indeed a descent direction of the cost function $F$:


$$\mathbf{h}_{gn}^T \mathbf{F}'(\mathbf{x}) = \mathbf{h}_{gn}^T (\mathbf{J}^T \mathbf{f}) = -\mathbf{h}_{gn}^T (\mathbf{J}^T \mathbf{J}) \mathbf{h}_{gn} < 0$$

3.1.1 Convergence Rate

As mentioned in the previous section, Newton's method can achieve quadratic convergence, but the GN method generally cannot. To see why, compare the two methods:


$$\mathbf{F}''(\mathbf{x}) \mathbf{h}_n = -\mathbf{F}'(\mathbf{x}) \quad \text{and} \quad \mathbf{L}''(\mathbf{h}) \mathbf{h}_{gn} = -\mathbf{L}'(\mathbf{0})$$

The right-hand sides of these two equations are the same (since $\mathbf{L}'(\mathbf{0}) = \mathbf{J}^T \mathbf{f} = \mathbf{F}'(\mathbf{x})$), but the left-hand sides differ:


$$\mathbf{F}''(\mathbf{x}) = \mathbf{L}''(\mathbf{h}) + \sum_{i=1}^{m} f_i(\mathbf{x}) \mathbf{f}''_i(\mathbf{x})$$

Clearly, if $\mathbf{f}(\mathbf{x}^*) = 0$ (a zero-residual problem), then $\mathbf{L}''(\mathbf{h}) \simeq \mathbf{F}''(\mathbf{x})$ as $\mathbf{x}$ approaches $\mathbf{x}^*$, and the GN method can converge quadratically. More often, though, GN achieves only linear convergence.

3.2 The Levenberg-Marquardt Method

The LM method is a damped method; its step is computed from


$$(\mathbf{J}^T \mathbf{J} + \mu \mathbf{I}) \mathbf{h}_{lm} = -\mathbf{g} \quad \text{with } \mathbf{g} = \mathbf{J}^T \mathbf{f} \ \text{and} \ \mu \ge 0$$

Here $\mu$ is the damping parameter (it can also be viewed as a Lagrange multiplier), and it serves several purposes:

1. For $\mu > 0$ the coefficient matrix is positive definite, which guarantees that $\mathbf{h}_{lm}$ is a descent direction;
2. for large $\mu$, $\mathbf{h}_{lm} \simeq -\frac{1}{\mu} \mathbf{g} = -\frac{1}{\mu} \mathbf{F}'(\mathbf{x})$, a short step in the steepest-descent direction. This is appropriate when the current iterate is far from the solution;
3. for small $\mu$, $\mathbf{h}_{lm} \simeq \mathbf{h}_{gn}$, which is appropriate when the current linear approximation is accurate.

The damping factor therefore influences both the direction and the length of the step. Its update strategy was introduced above; its initial value can be chosen from the magnitudes of the elements of $\mathbf{A}_0 = \mathbf{J}(\mathbf{x}_0)^T \mathbf{J}(\mathbf{x}_0)$:


$$\mu_0 = \tau \cdot \max_i \{ a^{(0)}_{ii} \}$$

For the choice of $\tau$: if the starting point $\mathbf{x}_0$ is a good approximation of the solution, choose a small value such as $\tau = 10^{-6}$; otherwise, use $\tau = 10^{-3}$ or even $\tau = 1$.

The update strategy again relies on the gain ratio:


$$\rho = \frac{F(\mathbf{x}) - F(\mathbf{x} + \mathbf{h}_{lm})}{L(\mathbf{0}) - L(\mathbf{h}_{lm})}$$

where


$$\begin{aligned} L(\mathbf{0}) - L(\mathbf{h}_{lm}) &= -\mathbf{h}_{lm}^T \mathbf{J}^T \mathbf{f} - \frac{1}{2} \mathbf{h}_{lm}^T \mathbf{J}^T \mathbf{J} \mathbf{h}_{lm} \\ &= -\frac{1}{2} \mathbf{h}_{lm}^T \left( 2\mathbf{g} + (\mathbf{J}^T \mathbf{J} + \mu \mathbf{I} - \mu \mathbf{I}) \mathbf{h}_{lm} \right) \\ &= \frac{1}{2} \mathbf{h}_{lm}^T ( \mu \mathbf{h}_{lm} - \mathbf{g} ) \end{aligned}$$

Note that both terms in the final expression are necessarily positive: $\mu \mathbf{h}_{lm}^T \mathbf{h}_{lm} > 0$ trivially, and $-\mathbf{h}_{lm}^T \mathbf{g} = \mathbf{h}_{lm}^T (\mathbf{J}^T \mathbf{J} + \mu \mathbf{I}) \mathbf{h}_{lm} > 0$. The denominator of $\rho$ is therefore positive and, as discussed above, the sign of $\rho$ is determined by the actual change in $F$.

Finally, one LM iteration computes $\mathbf{h}_{lm}$, evaluates $\rho$, accepts or rejects the step, and updates $\mu$ accordingly, as in the sketch below:
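
A compact sketch of the LM loop in the spirit of Algorithm 3.16 in the reference, inlining Nielsen's damping update from section 2.3 (all names here are ours):

import numpy as np

def levenberg_marquardt(f, J, x0, tau=1e-3, max_iter=200, eps1=1e-8, eps2=1e-8):
    x = x0.copy()
    Jx, fx = J(x), f(x)
    A, g = Jx.T.dot(Jx), Jx.T.dot(fx)
    mu, nu = tau * np.max(np.diag(A)), 2.0        # mu_0 = tau * max a_ii
    for _ in range(max_iter):
        if np.linalg.norm(g, np.inf) <= eps1:     # gradient close to zero
            break
        h = np.linalg.solve(A + mu * np.eye(len(x)), -g)
        if np.linalg.norm(h) <= eps2 * (np.linalg.norm(x) + eps2):
            break                                  # step too small
        x_new = x + h
        F_old, F_new = 0.5 * fx.dot(fx), 0.5 * f(x_new).dot(f(x_new))
        rho = (F_old - F_new) / (0.5 * h.dot(mu * h - g))   # gain ratio
        if rho > 0:                                # step accepted
            x = x_new
            Jx, fx = J(x), f(x)
            A, g = Jx.T.dot(Jx), Jx.T.dot(fx)
            mu *= max(1.0 / 3.0, 1.0 - (2.0 * rho - 1.0) ** 3)
            nu = 2.0
        else:                                      # step rejected: increase damping
            mu *= nu
            nu *= 2.0
    return x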




3.2.1 Stopping Criteria

First, when the algorithm reaches a minimizer, we have $\mathbf{F}'(\mathbf{x}^*) = \mathbf{g}(\mathbf{x}^*) = 0$. The first stopping criterion is therefore


$$\| \mathbf{g} \|_\infty \le \epsilon_1$$

where $\epsilon_1$ is a small positive number.

A second stopping criterion is to stop when $\mathbf{x}$ changes very little:


$$\| \mathbf{x}_{new} - \mathbf{x} \| \le \epsilon_2 ( \| \mathbf{x} \| + \epsilon_2 )$$

This behaves like a relative tolerance $\epsilon_2 \|\mathbf{x}\|$ when $\|\mathbf{x}\|$ is large, and like an absolute tolerance $\epsilon_2^2$ when $\|\mathbf{x}\|$ is small.

Finally, we can also cap the number of iterations:


$$k \ge k_{\max}$$
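
These three tests combine naturally into one helper; a small sketch (names are ours):

import numpy as np

def should_stop(g, x_new, x, k, eps1=1e-8, eps2=1e-8, k_max=200):
    """True if any of the three stopping criteria is met."""
    return (np.linalg.norm(g, np.inf) <= eps1                                  # gradient test
            or np.linalg.norm(x_new - x) <= eps2 * (np.linalg.norm(x) + eps2)  # step test
            or k >= k_max)                                                     # iteration cap
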
3.3 Powell's Dog-Leg Method

We already know that the GN method obtains its direction and step length by solving the normal equations:


$$\mathbf{J}^T \mathbf{J} \mathbf{h}_{gn} = -\mathbf{J}^T \mathbf{f}$$

while steepest descent uses the negative gradient as the iteration direction:


$$\mathbf{h}_{sd} = -\mathbf{g} = -\mathbf{J}^T \mathbf{f}$$

Its step length can be obtained as follows. Linearizing the residual gives


$$\mathbf{f}(\mathbf{x} + \alpha \mathbf{h}_{sd}) \simeq \mathbf{f}(\mathbf{x}) + \alpha \mathbf{J} \mathbf{h}_{sd}$$

and for the cost function


$$F(\mathbf{x} + \alpha \mathbf{h}_{sd}) \simeq \frac{1}{2} \|\mathbf{f}(\mathbf{x}) + \alpha \mathbf{J} \mathbf{h}_{sd}\|^2 = F(\mathbf{x}) + \alpha \mathbf{h}_{sd}^T \mathbf{J}^T \mathbf{f} + \frac{1}{2} \alpha^2 \|\mathbf{J} \mathbf{h}_{sd}\|^2$$

Differentiating with respect to $\alpha$ and setting the derivative to zero gives


$$\alpha = -\frac{\mathbf{h}_{sd}^T \mathbf{J}^T \mathbf{f}}{\|\mathbf{J} \mathbf{h}_{sd}\|^2} = \frac{\|\mathbf{g}\|^2}{\|\mathbf{J} \mathbf{g}\|^2}$$

This gives us two candidate steps when updating $\mathbf{x}$: $\mathbf{a} = \alpha \mathbf{h}_{sd}$ and $\mathbf{b} = \mathbf{h}_{gn}$.

The dog-leg method is a trust-region method. It selects the direction and step length as follows: if $\|\mathbf{h}_{gn}\| \le \Delta$, use $\mathbf{h}_{dl} = \mathbf{h}_{gn}$; else if $\|\alpha \mathbf{h}_{sd}\| \ge \Delta$, use $\mathbf{h}_{dl} = (\Delta / \|\mathbf{h}_{sd}\|) \mathbf{h}_{sd}$; otherwise, walk along the "dog leg" and use $\mathbf{h}_{dl} = \mathbf{a} + \beta (\mathbf{b} - \mathbf{a})$ with $\beta$ chosen so that $\|\mathbf{h}_{dl}\| = \Delta$.



One parameter, $\beta$, remains to be determined. Let $c = \mathbf{a}^T (\mathbf{b} - \mathbf{a})$ and


$$\psi(\beta) = \|\mathbf{a} + \beta (\mathbf{b} - \mathbf{a})\|^2 - \Delta^2 = \|\mathbf{b} - \mathbf{a}\|^2 \beta^2 + 2c\beta + \|\mathbf{a}\|^2 - \Delta^2$$

Then the positive root of $\psi$ is (the two branches below are algebraically equivalent; the case split avoids numerical cancellation):


$$\beta = \begin{cases} \left(-c + \sqrt{c^2 + \|\mathbf{b} - \mathbf{a}\|^2 (\Delta^2 - \|\mathbf{a}\|^2)}\right) / \|\mathbf{b} - \mathbf{a}\|^2 & \quad c \le 0 \\ \left(\Delta^2 - \|\mathbf{a}\|^2\right) / \left(c + \sqrt{c^2 + \|\mathbf{b} - \mathbf{a}\|^2 (\Delta^2 - \|\mathbf{a}\|^2)}\right) & \quad c > 0 \end{cases}$$

The complete algorithm combines this step selection with the trust-region update of $\Delta$ described in section 2.3; a sketch of the step selection follows.
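
A sketch of the dog-leg step selection described above (names are ours; Delta is maintained by the trust-region update of section 2.3):

import numpy as np

def dogleg_step(g, Jg_norm_sq, h_gn, Delta):
    """Pick the dog-leg step from gradient g, ||J g||^2, the GN step, and radius Delta."""
    alpha = g.dot(g) / Jg_norm_sq                # optimal steepest-descent scale
    a = -alpha * g                               # a = alpha * h_sd
    b = h_gn
    if np.linalg.norm(b) <= Delta:
        return b                                 # full GN step fits in the region
    if np.linalg.norm(a) >= Delta:
        return -(Delta / np.linalg.norm(g)) * g  # clipped steepest-descent step
    d = b - a                                    # walk from a toward b to the boundary
    c = a.dot(d)
    disc = np.sqrt(c ** 2 + d.dot(d) * (Delta ** 2 - a.dot(a)))
    beta = (Delta ** 2 - a.dot(a)) / (c + disc) if c > 0 else (-c + disc) / d.dot(d)
    return a + beta * d                          # ||a + beta d|| = Delta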


Reference: K. Madsen, H. B. Nielsen, O. Tingleff, Methods for Non-Linear Least Squares Problems, 2nd edition, IMM, Technical University of Denmark, 2004.

If you found this useful, please give it a like :)

KaaPexei posted on 2021-12-29 18:21

Differential-manifold methods, with Lie groups as a leading example, provide an elegant, complete, and concise description of many high-dimensional problems; nonlinear optimization (Gauss-Newton, LM, etc.) and state observers (Bayesian filters, Kalman filters, etc.) provide recursive, iterative ways to solve for the optimum. The difficulty is how to apply Euclidean-space optimization methods in a manifold space, and how to build a reasonable manifold model for a practical problem. Happy to discuss; I am currently working on related problems!

kyuskoj posted on 2021-12-29 18:27

Really clearly explained.

mastertravels77 posted on 2021-12-29 18:33

The middle part of the first equation in the steepest descent section is missing a minus sign.

acecase posted on 2021-12-29 18:36

Excellent [upvote]

ainatipen posted on 2021-12-29 18:36

Thanks, very clearly explained!

zt3ff3n posted on 2021-12-29 18:39

Why, in the LM section, are the last two terms of L(0) - L(h_lm) necessarily positive?

NoiseFloor posted on 2021-12-29 18:47

That is the predicted decrease of the error; it follows directly by substituting the formulas.