Review Notes on Optimization Principles and Algorithms
I. Unconstrained Optimization

1 Line Search
Line search methods can be stated as follows: given the current iterate $x_k$, first choose a vector $d_k$ by some rule, then determine a positive number $\alpha_k$; the next iterate is

$$x_{k+1} = x_k + \alpha_k d_k$$

We call $d_k$ the search direction at $x_k$ and $\alpha_k$ the corresponding step size. Here $d_k$ is required to be a descent direction, i.e.

$$d_k^T \nabla f(x_k) < 0$$

This descent property guarantees that the value of $f$ decreases along this direction. The key questions for line search methods are how to choose a good direction $d_k \in \mathbb{R}^n$ and a suitable step size $\alpha_k$.
We first discuss how to choose an appropriate step size.
1.1 Inexact Line Search
1.1.1 Line Search Criteria
The criteria introduced below are all monotone line searches; there are also nonmonotone line search criteria, which we do not cover here.
- Armijo criterion
The Armijo criterion places the essential requirement on the step size: every iteration must achieve sufficient decrease.

Definition: Let $d_k$ be a descent direction at $x_k$. If

$$f(x_k + \alpha d_k) \leq f(x_k) + c_1 \alpha \nabla f(x_k)^T d_k$$

then the step size $\alpha$ is said to satisfy the Armijo criterion, where $c_1 \in (0, 1)$ is a constant. The Armijo criterion alone, however, does not guarantee that the iteration makes progress: for example, $\alpha = 0$ satisfies it. It is therefore usually combined with a second criterion; two such combinations are listed below.
- Goldstein-Armijo criterion
To remedy this defect of the Armijo criterion, a second inequality is added to keep each step size $\alpha$ from being too small.

Definition: Let $d_k$ be a descent direction at $x_k$. If

$$\begin{aligned} f(x_k + \alpha d_k) &\leq f(x_k) + c\,\alpha \nabla f(x_k)^T d_k \\ f(x_k + \alpha d_k) &\geq f(x_k) + (1 - c)\,\alpha \nabla f(x_k)^T d_k \end{aligned}$$

then the step size $\alpha$ is said to satisfy the Goldstein criterion, where $c \in (0, \tfrac{1}{2})$.
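A one-dimensional Goldstein-Armijo search can be sketched as follows. This is a minimal illustrative implementation (the bisection-style bracketing scheme and the default $c = 0.3$ are assumptions, not from the notes); it is written for scalar $x$ and $d$, and in $\mathbb{R}^n$ the products below become inner products.

```python
def goldstein_search(f, df, x, d, c=0.3, alpha=1.0, max_iter=100):
    """Find a step size satisfying both Goldstein inequalities.

    f: objective, df: derivative of f, x: current point, d: descent
    direction. Requires c in (0, 1/2).
    """
    lo, hi = 0.0, float('inf')      # bracket on alpha
    fx = f(x)
    slope = df(x) * d               # directional derivative, < 0 for descent
    for _ in range(max_iter):
        fa = f(x + alpha * d)
        if fa > fx + c * alpha * slope:           # too little decrease: shrink
            hi = alpha
        elif fa < fx + (1 - c) * alpha * slope:   # step too small: grow
            lo = alpha
        else:
            return alpha                          # both inequalities hold
        alpha = 0.5 * (lo + hi) if hi < float('inf') else 2.0 * alpha
    return alpha
```

Each trial step either violates the upper (sufficient-decrease) bound, which shrinks the bracket from above, or the lower bound, which grows it from below, until both hold.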
- Wolfe-Armijo criterion
The Goldstein criterion guarantees sufficient decrease, but its lower bound may exclude the minimizer along the search direction. To avoid this we introduce the Wolfe criterion.

Definition: Let $d_k$ be a descent direction at $x_k$. If

$$\begin{aligned} f(x_k + \alpha d_k) &\leq f(x_k) + c_1 \alpha \nabla f(x_k)^T d_k \\ \nabla f(x_k + \alpha d_k)^T d_k &\geq c_2 \nabla f(x_k)^T d_k \end{aligned}$$

then the step size $\alpha$ is said to satisfy the Wolfe criterion, where $c_1, c_2 \in (0, 1)$ are given constants with $c_1 < c_2$.
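A one-dimensional Wolfe search can be sketched in the same bracketing style (the scheme and the defaults $c_1 = 10^{-4}$, $c_2 = 0.9$ are illustrative assumptions; for $\mathbb{R}^n$ replace the products with inner products):

```python
def wolfe_search(f, df, x, d, c1=1e-4, c2=0.9, alpha=1.0, max_iter=100):
    """Find a step size satisfying both Wolfe conditions.

    f: objective, df: derivative of f, x: current point, d: descent
    direction. Requires 0 < c1 < c2 < 1.
    """
    lo, hi = 0.0, float('inf')
    fx = f(x)
    slope = df(x) * d                        # < 0 for a descent direction
    for _ in range(max_iter):
        if f(x + alpha * d) > fx + c1 * alpha * slope:
            hi = alpha                       # sufficient decrease fails: shrink
        elif df(x + alpha * d) * d < c2 * slope:
            lo = alpha                       # curvature condition fails: grow
        else:
            return alpha
        alpha = 0.5 * (lo + hi) if hi < float('inf') else 2.0 * alpha
    return alpha
```

The curvature test rules out the tiny steps that the Armijo condition alone would accept, since near $\alpha = 0$ the directional derivative is still strongly negative.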
1.1.2 Line Search Algorithms
- Backtracking: start from an initial step size and repeatedly multiply it by a factor $\gamma \in (0, 1)$ until the Armijo condition holds; the accepted step is then not too small, since the previous, larger trial step failed.
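A minimal backtracking sketch (function names and the defaults $c_1 = 10^{-4}$, $\gamma = 0.5$ are illustrative; scalar version, with products becoming inner products in $\mathbb{R}^n$):

```python
def backtracking(f, df, x, d, c1=1e-4, gamma=0.5, alpha=1.0):
    """Shrink alpha geometrically until the Armijo condition holds."""
    fx = f(x)
    slope = df(x) * d               # directional derivative, < 0 for descent
    while f(x + alpha * d) > fx + c1 * alpha * slope:
        alpha *= gamma              # previous trial failed: back off
    return alpha
```

For example, minimizing $f(x) = x^2$ from $x = 1$ along $d = -4$, the trials $\alpha = 1$ and $\alpha = 0.5$ overshoot and fail the Armijo test, so the routine returns $\alpha = 0.25$.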
1.2 Exact Line Search
1.2.1 Trial-Point Methods
- Golden section method
The golden section method is an interval-shrinking method, used mainly for extremum problems of unimodal functions. It exploits unimodality to discard subintervals that cannot contain the extremum, steadily narrowing the interval of interest.

The procedure for minimization is:
1. Given $a, b$ and $\varepsilon > 0$.
2. Compute $x_1 = a + 0.382(b - a)$, $x_2 = a + 0.618(b - a)$.
3. If $b - a < \varepsilon$, stop and output $x_* = \frac{a + b}{2}$; otherwise go to (4).
4. Compute $f_1 = f(x_1)$, $f_2 = f(x_2)$.
5. If $f_1 > f_2$, set $a = x_1$, $x_1 = x_2$, $x_2 = a + 0.618(b - a)$; otherwise set $b = x_2$, $x_2 = x_1$, $x_1 = a + 0.382(b - a)$. Go to (3).
```python
# Golden section method
def goldenSearch(a, b, fun, epsilon=0.01):
    """Minimize a unimodal function fun on [a, b] by golden-section search."""
    x1 = a + 0.382 * (b - a)
    x2 = a + 0.618 * (b - a)
    f1, f2 = fun(x1), fun(x2)
    while b - a > epsilon:
        if f1 < f2:                 # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1  # the old x1 becomes the new x2
            x1 = a + 0.382 * (b - a)
            f1 = fun(x1)
        else:                       # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2  # the old x2 becomes the new x1
            x2 = a + 0.618 * (b - a)
            f2 = fun(x2)
    x_star = (a + b) / 2
    return x_star, fun(x_star)

if __name__ == '__main__':
    fun = lambda x: 2 * x ** 2 - x - 1
    print(goldenSearch(-10, 10, fun, 0.01))
```
- Fibonacci method
The Fibonacci method is the optimal sectioning strategy for one-dimensional minimization; the golden section method is only approximately optimal. An implementation for minimization follows.
```python
# Fibonacci method
def FibsSearch(a, b, fun, epsilon=0.01, delta=0.001):
    """Minimize a unimodal function fun on [a, b] with the Fibonacci method.

    epsilon controls the final interval length; delta is the perturbation
    used in the last step, where the two trial points coincide.
    """
    # Build Fibonacci numbers F[0]=0, F[1]=1, ... until F[n] >= (b-a)/epsilon
    F = [0, 1]
    while F[-1] < (b - a) / epsilon:
        F.append(F[-1] + F[-2])
    n = len(F) - 1
    x1 = a + F[n - 2] / F[n] * (b - a)
    x2 = a + F[n - 1] / F[n] * (b - a)
    f1, f2 = fun(x1), fun(x2)
    for k in range(1, n - 2):
        if f1 > f2:                 # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + F[n - k - 1] / F[n - k] * (b - a)
            f2 = fun(x2)
        else:                       # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + F[n - k - 2] / F[n - k] * (b - a)
            f1 = fun(x1)
    # Last step: x1 and x2 now coincide; perturb by delta to pick a side
    x2 = x1 + delta
    f1, f2 = fun(x1), fun(x2)
    if f1 > f2:
        a = x1
    else:
        b = x2
    return a, b, fun((a + b) / 2)

if __name__ == '__main__':
    fun = lambda x: 2 * x ** 2 - x - 1
    print(FibsSearch(-1, 1, fun, 0.16, 0.01))
```

1.2.2 Function Approximation Methods
- Newton's method
We first recall Newton's iteration for solving equations, commonly known as the tangent-line method: pick an initial point $x_1$ on a differentiable function, draw the tangent there, drop a vertical line from the intersection of that tangent with the $x$-axis back to the function to obtain a new point $x_2$, and iterate.

It is easy to see that the Newton iteration formula is

$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}$$

Newton's method for minimization can then be viewed as applying this tangent method to find a zero of $f'$, i.e. a stationary point of $f$; this requires $f$ to be twice differentiable.

Newton's iteration is clearly a local method: the choice of initial point strongly affects whether, and how fast, it converges.
Steps of Newton's iteration:

1. Choose an initial point $x_0$, a tolerance $\varepsilon$, and set the counter $k = 0$.
2. If $|f'(x_k)| < \varepsilon$, stop with $x_* = x_k$.
3. Otherwise compute $x_{k+1}$ from the iteration formula, set $k = k + 1$, and return to (2).
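The steps above can be sketched as follows, using the notes' running example $f(x) = 2x^2 - x - 1$ with derivatives supplied by hand:

```python
def newton_min_1d(df, d2f, x0, epsilon=1e-8, max_iter=100):
    """1-D Newton's method for minimization: root-find f'(x) = 0
    via x_{k+1} = x_k - f'(x_k) / f''(x_k)."""
    x = x0
    for _ in range(max_iter):
        if abs(df(x)) < epsilon:    # stationarity reached
            break
        x -= df(x) / d2f(x)
    return x

# f(x) = 2x^2 - x - 1: f'(x) = 4x - 1, f''(x) = 4, minimizer x* = 0.25
print(newton_min_1d(lambda x: 4 * x - 1, lambda x: 4.0, 0.0))
```

Because this $f$ is quadratic, $f'$ is linear and the iteration lands on the minimizer $x_* = 0.25$ in a single step.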
- Secant method: replaces the second derivative in Newton's minimization iteration with a difference quotient of $f'$ at the two most recent iterates, so no second derivative is required.
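A minimal sketch of the secant method in this root-finding-on-$f'$ form (all names are illustrative):

```python
def secant_min_1d(df, x0, x1, epsilon=1e-8, max_iter=100):
    """Secant method for minimization: Newton's iteration on f'(x) = 0,
    with f'' replaced by (f'(x1) - f'(x0)) / (x1 - x0)."""
    for _ in range(max_iter):
        d0, d1 = df(x0), df(x1)
        if abs(d1) < epsilon or d1 == d0:   # converged or degenerate secant
            break
        x0, x1 = x1, x1 - d1 * (x1 - x0) / (d1 - d0)
    return x1
```

On $f(x) = 2x^2 - x - 1$ (so $f'(x) = 4x - 1$) the secant through any two points is exact, and the method reaches $x_* = 0.25$ immediately.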
- Parabolic method: fits a quadratic through three trial points and takes its vertex as the next iterate (successive parabolic interpolation).
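A minimal sketch of successive parabolic interpolation (the point-replacement rule here is a simple illustrative choice; robust implementations safeguard it with a bracketing interval):

```python
def parabolic_min_1d(f, x1, x2, x3, epsilon=1e-8, max_iter=100):
    """Fit a parabola through (x1,f1), (x2,f2), (x3,f3) and jump to
    its vertex; repeat until the vertex stops moving."""
    for _ in range(max_iter):
        f1, f2, f3 = f(x1), f(x2), f(x3)
        num = (x2 - x1) ** 2 * (f2 - f3) - (x2 - x3) ** 2 * (f2 - f1)
        den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
        if den == 0:                        # degenerate (collinear) fit
            break
        x_new = x2 - 0.5 * num / den        # vertex of the fitted parabola
        if abs(x_new - x2) < epsilon:
            return x_new
        x1, x2, x3 = x2, x_new, x1          # rotate the new vertex in
    return x2
```

On the notes' quadratic $f(x) = 2x^2 - x - 1$ the very first fitted parabola is $f$ itself, so the vertex $x_* = 0.25$ is found exactly.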
Written and published with Zhihu On VSCode.