Posted by FeastSC on 2022-1-3 11:54

A Detailed Explanation of the ADMM Algorithm

As a statistics newbie working on high-dimensional problems, ever since I learned the ADMM algorithm my ego has been enormously inflated (just kidding).


When in doubt, ADMM; and if one ADMM doesn't cut it, nest an ADMM inside your ADMM!
(Seriously, though.) ADMM provides a framework for solving optimization problems with linear equality constraints, letting us split the original problem into several easier subproblems that are solved iteratively. This "splitting" capability is the core idea of ADMM.
When I first learned ADMM last year I wrote up some notes, organizing my own path to understanding how the algorithm works. Posting them here to share and discuss~
0. What is ADMM?

ADMM solves optimization problems of the following form:
$$
\begin{aligned}
\min_{\mathbf{x},\mathbf{z}}\ & f(\mathbf{x})+g(\mathbf{z}) \\
\text{s.t.}\ & \mathbf{A}\mathbf{x}+\mathbf{B}\mathbf{z}=\mathbf{c}
\end{aligned}
$$
where $\mathbf{x}\in\mathbb{R}^{p}$, $\mathbf{z}\in\mathbb{R}^{q}$, $\mathbf{A}\in\mathbb{R}^{k\times p}$, $\mathbf{B}\in\mathbb{R}^{k\times q}$, $\mathbf{c}\in\mathbb{R}^{k}$; $f:\mathbb{R}^{p}\to\mathbb{R}$ and $g:\mathbb{R}^{q}\to\mathbb{R}$.
Simply put, the objective of this problem splits into two separable blocks of variables ($\mathbf{x}$ and $\mathbf{z}$), coupled only through a linear equality constraint. For such a problem, ADMM first augments the objective, turning the original problem into:
$$
\begin{aligned}
\min_{\mathbf{x},\mathbf{z}}\ & Q_{\rho}(\mathbf{x},\mathbf{z})=f(\mathbf{x})+g(\mathbf{z})+\frac{\rho}{2}\|\mathbf{A}\mathbf{x}+\mathbf{B}\mathbf{z}-\mathbf{c}\|_2^2 \\
\text{s.t.}\ & \mathbf{A}\mathbf{x}+\mathbf{B}\mathbf{z}=\mathbf{c}
\end{aligned}
$$
where $\rho>0$ is the penalty parameter. Since the added quadratic penalty vanishes whenever the constraint holds, this augmented problem is equivalent to the original one. Next, write out its Lagrangian:

$$
\begin{aligned}
L_{\rho}(\mathbf{x},\mathbf{z},\boldsymbol{\lambda}) &= Q_{\rho}(\mathbf{x},\mathbf{z})+\boldsymbol{\lambda}^{\top}(\mathbf{A}\mathbf{x}+\mathbf{B}\mathbf{z}-\mathbf{c}) \\
&= f(\mathbf{x})+g(\mathbf{z})+\frac{\rho}{2}\|\mathbf{A}\mathbf{x}+\mathbf{B}\mathbf{z}-\mathbf{c}\|_2^2+\boldsymbol{\lambda}^{\top}(\mathbf{A}\mathbf{x}+\mathbf{B}\mathbf{z}-\mathbf{c})
\end{aligned}
$$
where $\boldsymbol{\lambda}\in\mathbb{R}^{k}$ is the vector of Lagrange multipliers.
Then iterate the following three updates (shown for the $l$-th iteration) until convergence:

- Update $\mathbf{x}$: $\mathbf{x}^{(l)}=\arg\min_{\mathbf{x}}\ L_{\rho}(\mathbf{x},\mathbf{z}^{(l-1)},\boldsymbol{\lambda}^{(l-1)})$
- Update $\mathbf{z}$: $\mathbf{z}^{(l)}=\arg\min_{\mathbf{z}}\ L_{\rho}(\mathbf{x}^{(l)},\mathbf{z},\boldsymbol{\lambda}^{(l-1)})$
- Update $\boldsymbol{\lambda}$: $\boldsymbol{\lambda}^{(l)}=\boldsymbol{\lambda}^{(l-1)}+\rho(\mathbf{A}\mathbf{x}^{(l)}+\mathbf{B}\mathbf{z}^{(l)}-\mathbf{c})$
The update steps themselves are easy enough to follow. But tell me, friend, are you confused yet? At least the first time I saw this, I had no idea what these three updates actually meant. It was only after working through that CMU convex optimization course that I gradually figured out where this trick comes from. Let me walk through it...
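Before unpacking the theory, it may help to see the three updates running end to end. Here is a minimal sketch (my own toy instance, not from the original notes): take the consensus constraint $\mathbf{x}-\mathbf{z}=\mathbf{0}$ (so $\mathbf{A}=\mathbf{I}$, $\mathbf{B}=-\mathbf{I}$, $\mathbf{c}=\mathbf{0}$) with $f(\mathbf{x})=\frac{1}{2}\|\mathbf{x}-\mathbf{a}\|_2^2$ and $g(\mathbf{z})=\frac{1}{2}\|\mathbf{z}-\mathbf{b}\|_2^2$, so both argmin subproblems have closed forms:

```python
import numpy as np

# Toy ADMM: min 0.5||x - a||^2 + 0.5||z - b||^2  s.t.  x - z = 0.
# With A = I, B = -I, c = 0, the augmented Lagrangian is
#   L_rho = f(x) + g(z) + (rho/2)||x - z||^2 + lam^T (x - z).
def admm_toy(a, b, rho=1.0, n_iter=100, tol=1e-10):
    x = np.zeros_like(a)
    z = np.zeros_like(b)
    lam = np.zeros_like(a)  # Lagrange multiplier vector
    for _ in range(n_iter):
        # x-update: solve (x - a) + rho*(x - z) + lam = 0
        x = (a + rho * z - lam) / (1.0 + rho)
        # z-update: solve (z - b) + rho*(z - x) - lam = 0
        z = (b + rho * x + lam) / (1.0 + rho)
        # lambda-update: a gradient ascent step on the dual, step size rho
        lam = lam + rho * (x - z)
        if np.linalg.norm(x - z) < tol:  # constraint (nearly) satisfied
            break
    return x, z, lam

a, b = np.array([1.0, 2.0]), np.array([3.0, 0.0])
x, z, lam = admm_toy(a, b)
print(x, z)  # both approach (a + b) / 2 = [2.0, 1.0]
```

Both iterates converge to the consensus optimum $(\mathbf{a}+\mathbf{b})/2$. Note that the $\boldsymbol{\lambda}$-update is exactly a gradient ascent step on the dual with step size $\rho$, which is precisely where the theory below comes in.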
1. The Line of Ideas Leading to ADMM

1.1 The Envelope Theorem

Roughly speaking, the envelope theorem asks: for an optimization problem that depends on a hyperparameter, how does a change in that hyperparameter affect the problem's optimal value?
The familiar "shadow price" interpretation in linear programming is a special case of exactly this question~
Concretely, consider an optimization problem with a hyperparameter $\boldsymbol{\alpha}$:
$$
\begin{aligned}
V(\boldsymbol{\alpha})=\max_{\mathbf{x}}\ & f(\mathbf{x};\boldsymbol{\alpha}) \\
\text{s.t.}\ & g(\mathbf{x};\boldsymbol{\alpha})\geq \mathbf{0}
\end{aligned}
$$
where $\mathbf{x}\in\mathbb{R}^{p}$; $f:\mathbb{R}^{p}\to\mathbb{R}$ and $g:\mathbb{R}^{p}\to\mathbb{R}^{k}$; and $\boldsymbol{\alpha}$ is the hyperparameter (a vector or a scalar).
We can write down the Lagrangian of this problem:
$$L(\mathbf{x},\boldsymbol{\lambda};\boldsymbol{\alpha})=f(\mathbf{x};\boldsymbol{\alpha})+\boldsymbol{\lambda}^{\top}g(\mathbf{x};\boldsymbol{\alpha})$$
where $\boldsymbol{\lambda}$ is the vector of Lagrange multipliers.
Suppose the optimum of this problem is attained at $(\mathbf{x}^{*}(\boldsymbol{\alpha}),\boldsymbol{\lambda}^{*}(\boldsymbol{\alpha}))$. The envelope theorem then tells us:
$$\frac{\partial V(\boldsymbol{\alpha})}{\partial \boldsymbol{\alpha}}=\frac{\partial L(\mathbf{x}^{*}(\boldsymbol{\alpha}),\boldsymbol{\lambda}^{*}(\boldsymbol{\alpha});\boldsymbol{\alpha})}{\partial \boldsymbol{\alpha}}$$
In one sentence: the derivative of the optimal value with respect to the hyperparameter equals the partial derivative of the Lagrangian with respect to that hyperparameter, evaluated at the optimal point.
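To see the theorem in action, here is a tiny unconstrained example (my own illustration, not from the notes). Let
$$V(\alpha)=\max_{x}\left(-x^{2}+\alpha x\right).$$
The maximizer is $x^{*}(\alpha)=\alpha/2$, so $V(\alpha)=\alpha^{2}/4$, and differentiating the optimal value directly gives $V'(\alpha)=\alpha/2$. The envelope theorem reaches the same answer without re-solving for $x^{*}$: holding $x$ fixed at $x^{*}(\alpha)$,
$$\left.\frac{\partial}{\partial \alpha}\left(-x^{2}+\alpha x\right)\right|_{x=x^{*}(\alpha)}=x^{*}(\alpha)=\frac{\alpha}{2}.$$
The indirect effect through $x^{*}(\alpha)$ drops out precisely because the first-order condition $\partial f/\partial x=0$ holds at the optimum.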
With this theorem in hand, we can introduce a key concept in convex optimization: the conjugate function~
1.2 The Conjugate Function

(To be continued)
1.3 The Dual Subgradient Method

1.4 The Augmented Lagrangian Method

2. Algorithmic Details

2.1 The Scaled Form: an equivalent, simpler way to write ADMM
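(To be updated; in the meantime, a quick reference sketch following the standard treatment in Boyd et al., 2011.) Define the scaled dual variable $\mathbf{u}=\boldsymbol{\lambda}/\rho$. Completing the square turns the augmented Lagrangian into
$$L_{\rho}(\mathbf{x},\mathbf{z},\boldsymbol{\lambda})=f(\mathbf{x})+g(\mathbf{z})+\frac{\rho}{2}\|\mathbf{A}\mathbf{x}+\mathbf{B}\mathbf{z}-\mathbf{c}+\mathbf{u}\|_{2}^{2}-\frac{\rho}{2}\|\mathbf{u}\|_{2}^{2},$$
so the three updates simplify to
$$
\begin{aligned}
\mathbf{x}^{(l)}&=\arg\min_{\mathbf{x}}\ f(\mathbf{x})+\frac{\rho}{2}\|\mathbf{A}\mathbf{x}+\mathbf{B}\mathbf{z}^{(l-1)}-\mathbf{c}+\mathbf{u}^{(l-1)}\|_{2}^{2}\\
\mathbf{z}^{(l)}&=\arg\min_{\mathbf{z}}\ g(\mathbf{z})+\frac{\rho}{2}\|\mathbf{A}\mathbf{x}^{(l)}+\mathbf{B}\mathbf{z}-\mathbf{c}+\mathbf{u}^{(l-1)}\|_{2}^{2}\\
\mathbf{u}^{(l)}&=\mathbf{u}^{(l-1)}+\mathbf{A}\mathbf{x}^{(l)}+\mathbf{B}\mathbf{z}^{(l)}-\mathbf{c}
\end{aligned}
$$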

2.2 Stopping Criteria
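(To be updated; for reference, the standard residual-based rule from Boyd et al., 2011 is sketched here.) Define the primal and dual residuals at step $l$:
$$\mathbf{r}^{(l)}=\mathbf{A}\mathbf{x}^{(l)}+\mathbf{B}\mathbf{z}^{(l)}-\mathbf{c},\qquad \mathbf{s}^{(l)}=\rho\,\mathbf{A}^{\top}\mathbf{B}\left(\mathbf{z}^{(l)}-\mathbf{z}^{(l-1)}\right).$$
A common criterion is to stop once $\|\mathbf{r}^{(l)}\|_{2}\leq\epsilon^{\mathrm{pri}}$ and $\|\mathbf{s}^{(l)}\|_{2}\leq\epsilon^{\mathrm{dual}}$, with the tolerances chosen from absolute and relative components.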

3. Why ADMM Is So Powerful: an Example (Lasso)
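(To be updated; as a teaser, here is my sketch of the standard Lasso recipe, following Boyd et al., 2011, rather than a worked example from the original notes.) For the Lasso, $\min_{\boldsymbol{\beta}}\ \frac{1}{2}\|\mathbf{y}-\mathbf{X}\boldsymbol{\beta}\|_{2}^{2}+\lambda\|\boldsymbol{\beta}\|_{1}$, ADMM splits the smooth loss from the non-smooth penalty via $f(\mathbf{x})=\frac{1}{2}\|\mathbf{y}-\mathbf{X}\mathbf{x}\|_{2}^{2}$, $g(\mathbf{z})=\lambda\|\mathbf{z}\|_{1}$, and the constraint $\mathbf{x}-\mathbf{z}=\mathbf{0}$: the $\mathbf{x}$-update becomes a ridge-type linear solve and the $\mathbf{z}$-update becomes elementwise soft-thresholding. In scaled form:

```python
import numpy as np

def soft_threshold(v, kappa):
    # Elementwise soft-thresholding: the prox of kappa * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def lasso_admm(X, y, lam, rho=1.0, n_iter=500, tol=1e-6):
    # ADMM for min 0.5||y - X b||^2 + lam * ||b||_1, split as
    # f(x) = 0.5||y - X x||^2, g(z) = lam * ||z||_1, s.t. x - z = 0.
    # u is the scaled dual variable (the multiplier divided by rho).
    n, p = X.shape
    XtX_rho = X.T @ X + rho * np.eye(p)  # form once, reuse every iteration
    Xty = X.T @ y
    x, z, u = np.zeros(p), np.zeros(p), np.zeros(p)
    for _ in range(n_iter):
        x = np.linalg.solve(XtX_rho, Xty + rho * (z - u))  # ridge-type solve
        z_old = z
        z = soft_threshold(x + u, lam / rho)               # prox of the l1 term
        u = u + x - z                                      # scaled dual update
        # stop when both primal and dual residuals are small
        if np.linalg.norm(x - z) < tol and rho * np.linalg.norm(z - z_old) < tol:
            break
    return z

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
beta = np.zeros(20); beta[:3] = [2.0, -1.0, 0.5]
y = X @ beta + 0.1 * rng.normal(size=100)
print(lasso_admm(X, y, lam=1.0)[:5])  # roughly recovers the sparse signal
```

The power of the split is that neither subproblem ever needs the other's structure: the linear-algebra step never sees the $\ell_{1}$ penalty, and the thresholding step never sees $\mathbf{X}$.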