Posted by APSchmidt on 2021-12-29 22:17

An Introduction to the GBDT Algorithm

This post is a set of reading notes on the paper "Greedy Function Approximation: A Gradient Boosting Machine".
Function Estimation

Function estimation performs numerical optimization in function space rather than in parameter space.
Consider a system described by random input variables $x = \{x_1, \dots, x_n\}$ and a random output variable $y$. Given a training sample $\{y_i, x_i\}_1^N$ of known $(y, x)$ values, the goal is to obtain an estimate $\hat{F}(x)$ of the true function mapping $x \to y$ that minimizes the expected value of some specified loss function $L(y, F(x))$:
(1) $F^{*} = \arg\min_{F} E_{y,x} L(y, F(x)) = \arg\min_{F} E_{x}\left[ E_{y}\big(L(y, F(x))\big) \mid x \right].$

Commonly used loss functions include squared error $(y - F)^2$, absolute error $|y - F|$, and, for classification, the negative binomial log-likelihood $\log(1 + e^{-2yF})$, $y \in \{-1, 1\}$.
A common approach is to restrict $F(x)$ to be a member of a parameterized class of functions $F(x; P)$, where $P = \{P_1, P_2, \dots\}$ is a finite set of parameters whose joint values identify an individual function. This paper, however, focuses on a different representation, the "additive" expansion:
(2) $F\big(x; \{\beta_m, a_m\}_1^M\big) = \sum_{m=1}^{M} \beta_m h(x; a_m)$
The function $h(x; a)$ in (2) is usually a simple parameterized function of the input $x$, characterized by parameters $a = \{a_1, a_2, \dots\}$; each term uses a different parameter vector $a_m$, producing a different basis function. Expansions of the form (2) are at the core of many function-approximation methods, such as neural networks, radial basis functions, MARS, and support vector machines. In this paper each $h(x; a)$ is a small regression tree, such as a CART tree, and the parameters $a_m$ of each tree are the splitting variables, the split locations, and the terminal-node means.
Numerical Optimization

When a parameterized model $F(x; P)$ is chosen, the function-optimization problem becomes a parameter-optimization problem:
(3) $P^{*} = \arg\min_{P} \Phi(P)$
where $\Phi(P) = E_{y,x} L(y, F(x; P))$ and $F^{*}(x) = F(x; P^{*})$.
For most $F(x; P)$ and loss functions $L$, numerical optimization methods are required to solve (3). This usually involves expressing the solution for the parameters in the form
(4) $P^{*} = \sum_{m=0}^{M} p_m$
where $p_0$ is an initial guess and $\{p_m\}_1^M$ are successive increments ("steps" or "boosts"), each computed sequentially based on the preceding steps; the prescription for computing each increment $p_m$ is defined by the chosen optimization method, e.g. steepest descent.
Steepest Descent

Steepest descent is one of the most frequently used numerical optimization methods. It generates the increments $\{p_m\}_1^M$ of (4) as follows. First the current gradient is computed:

$g_m = \{g_{jm}\} = \left\{ \left[ \frac{\partial \Phi(P)}{\partial P_j} \right]_{P = P_{m-1}} \right\}$, where $P_{m-1} = \sum_{i=0}^{m-1} p_i$.
The step at the $m$-th iteration is $p_m = -\rho_m g_m$, where
(5) $\rho_m = \arg\min_{\rho} \Phi(P_{m-1} - \rho g_m)$
The negative gradient $-g_m$ defines the direction of steepest descent, and (5) is the line search along that direction.
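As a concrete illustration (not part of the paper), a minimal steepest-descent loop with a line search on a toy quadratic objective could look like the following sketch; the objective phi and the use of scipy's minimize_scalar for the line search are assumptions made purely for demonstration.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # toy objective Phi(P) = ||P - target||^2 and its gradient
    target = np.array([3.0, -2.0])
    phi = lambda P: np.sum((P - target) ** 2)
    grad_phi = lambda P: 2.0 * (P - target)

    P = np.zeros(2)                                           # p_0: initial guess
    for m in range(50):                                       # M steepest-descent steps
        g_m = grad_phi(P)                                     # gradient at P_{m-1}
        rho_m = minimize_scalar(lambda r: phi(P - r * g_m)).x # line search, eq. (5)
        P = P - rho_m * g_m                                   # increment p_m = -rho_m * g_m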
Numerical Optimization in Function Space

Numerical optimization can be applied in function space with a nonparametric approach: the value of $F(x)$ at each point $x$ is regarded as a "parameter", and the goal is to minimize

$\Phi(F) = E_{y,x} L(y, F(x)) = E_x\big[ E_y\big(L(y, F(x))\big) \mid x \big]$
or, equivalently, at each point $x$:

$\phi(F(x)) = E_y\big[ L(y, F(x)) \mid x \big]$
In function space there are infinitely many such parameters, while the available data set is finite. The numerical-optimization solution in function space nevertheless takes the analogous form

$F^{*}(x) = \sum_{m=0}^{M} f_m(x)$
where $f_0(x)$ is an initial guess and $\{f_m(x)\}_1^M$ are incremental functions ("steps" or "boosts").
For steepest descent,
(6) $f_m(x) = -\rho_m g_m(x)$
where

$g_m(x) = \left[ \frac{\partial \phi(F(x))}{\partial F(x)} \right]_{F(x) = F_{m-1}(x)} = \left[ \frac{\partial E_y\big[ L(y, F(x)) \mid x \big]}{\partial F(x)} \right]_{F(x) = F_{m-1}(x)}$

$F_{m-1}(x) = \sum_{i=0}^{m-1} f_i(x)$
Assuming sufficient regularity to interchange differentiation and integration,
(7) $g_m(x) = E_y\left[ \frac{\partial L(y, F(x))}{\partial F(x)} \,\Big|\, x \right]_{F(x) = F_{m-1}(x)}$
The multiplier $\rho_m$ in (6) is given by the line search
(8) $\rho_m = \arg\min_{\rho} E_{y,x} L\big(y,\, F_{m-1}(x) - \rho\, g_m(x)\big)$
Finite Data

With a finite data sample, $E_y[\cdot \mid x]$ cannot be estimated accurately at each $x_i$; and even if it could, estimates are needed at $x$ values other than the training points. Smoothness must therefore be imposed by borrowing strength from nearby data points. One way to do this is the parametric approach of Section 1.1: adopt the additive expansion (2) and minimize the data-based estimate of expected loss,

$\{\beta_m, a_m\}_1^M = \arg\min_{\{\beta'_m, a'_m\}_1^M} \sum_{i=1}^{N} L\Big(y_i,\, \sum_{m=1}^{M} \beta'_m h(x_i; a'_m)\Big)$
Jointly optimizing all of these parameters at once is usually infeasible, so a greedy stagewise approach is taken instead: for $m = 1, 2, \dots, M$,
(9) $(\beta_m, a_m) = \arg\min_{\beta, a} \sum_{i=1}^{N} L\big(y_i,\, F_{m-1}(x_i) + \beta\, h(x_i; a)\big)$
and then
(10) $F_m(x) = F_{m-1}(x) + \beta_m h(x; a_m)$
For a given loss function $L(y, F)$ and base learner $h(x; a)$, (9) can still be difficult to solve. Given any approximator $F_{m-1}(x)$, the term $\beta_m h(x; a_m)$ in (9), (10) can be viewed as the best greedy step toward the data-based estimate of the target function, under the constraint that the step direction $h(x; a_m)$ be a member of the parameterized class of functions. The whole procedure can therefore be regarded as the steepest-descent step (6) under that parametric constraint.
The unconstrained negative gradient of (7), evaluated at the data points, is

$-g_m(x_i) = -\left[ \frac{\partial L(y_i, F(x_i))}{\partial F(x_i)} \right]_{F(x) = F_{m-1}(x)}$
One therefore fits the base learner $h(x; a)$ to this negative gradient $-g_m(x)$ on the training data in a least-squares sense:
(11) $a_m = \arg\min_{a, \beta} \sum_{i=1}^{N} \big[ -g_m(x_i) - \beta\, h(x_i; a) \big]^2$
The step size along this gradient direction is then obtained by the line search
(12) $\rho_m = \arg\min_{\rho} \sum_{i=1}^{N} L\big(y_i,\, F_{m-1}(x_i) + \rho\, h(x_i; a_m)\big)$
and the model is updated as

$F_m(x) = F_{m-1}(x) + \rho_m h(x; a_m)$
Fundamentally, this procedure applies a parametric fitting step within a nonparametric steepest-descent optimization: instead of optimizing (9) directly, the base learner is fit by least squares (11) to the "pseudo-responses" $\{\tilde{y}_i = -g_m(x_i)\}_1^N$ from (7), and $\rho_m$ is then obtained from (12), which is only a single-parameter optimization. The advantage is that (9) may be hard to solve for an arbitrary loss, whereas the squared-error fit (11) is easy. Thus, for any loss $L(y, F)$ and any base learner for which a least-squares algorithm exists to solve (11), this stage-wise additive modeling strategy can be applied.
The generic pseudocode is as follows (Algorithm 1, Gradient_Boost):
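The Algorithm 1 listing did not survive extraction. The following is a reconstruction assembled from the steps described above (the line numbers match the later references to "line 3", "lines 4, 5", and "line 6" of Algorithm 1):

    Algorithm 1: Gradient_Boost
    1: $F_0(x) = \arg\min_{\rho} \sum_{i=1}^{N} L(y_i, \rho)$
    2: for $m = 1$ to $M$ do:
    3:   $\tilde{y}_i = -\left[ \partial L(y_i, F(x_i)) / \partial F(x_i) \right]_{F(x) = F_{m-1}(x)}, \quad i = 1, \dots, N$
    4:   $a_m = \arg\min_{a, \beta} \sum_{i=1}^{N} \big[ \tilde{y}_i - \beta\, h(x_i; a) \big]^2$
    5:   $\rho_m = \arg\min_{\rho} \sum_{i=1}^{N} L\big(y_i,\, F_{m-1}(x_i) + \rho\, h(x_i; a_m)\big)$
    6:   $F_m(x) = F_{m-1}(x) + \rho_m h(x; a_m)$
    7: end for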


Applications of the GBDT Framework

This section applies the gradient-boosting framework to several commonly used loss functions: least-squares (LS), least absolute deviation (LAD), Huber (M), and the logistic binomial log-likelihood (L).
1. Least-squares regression

The least-squares loss is $L(y, F) = (y - F)^2 / 2$, so the negative gradient is

$-g_m(x_i) = -\left[ \frac{\partial L(y_i, F(x_i))}{\partial F(x_i)} \right]_{F(x) = F_{m-1}(x)} = -\left[ \frac{\partial\, (y_i - F(x_i))^2 / 2}{\partial F(x_i)} \right]_{F(x) = F_{m-1}(x)} = y_i - F_{m-1}(x_i)$
Thus line 3 of the generic pseudocode becomes $\tilde{y}_i = y_i - F_{m-1}(x_i)$, and lines 4 and 5 can be merged by taking $\rho_m = \beta_m$, giving Algorithm 2 (LS_Boost).
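The Algorithm 2 listing is likewise missing here. Below is a minimal runnable sketch of least-squares boosting with small regression trees as the base learner; the use of scikit-learn's DecisionTreeRegressor, the synthetic data, and the hyperparameters are illustrative assumptions, not part of the paper.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

    M, trees = 100, []
    F = np.full(len(y), y.mean())            # F_0(x): the mean minimizes squared error
    for m in range(M):
        residual = y - F                     # pseudo-response = current residuals
        tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
        trees.append(tree)
        F += tree.predict(X)                 # the step size is absorbed into the tree fit

    def predict(X_new):
        return y.mean() + sum(t.predict(X_new) for t in trees)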


Under squared-error loss, therefore, each GBDT iteration simply fits the residuals of the previous step.
2. Least absolute deviation (LAD) regression.

The absolute-error loss is $L(y, F) = |y - F|$, and the pseudo-response is
(13) $\tilde{y}_i = -g_m(x_i) = -\left[ \frac{\partial L(y_i, F(x_i))}{\partial F(x_i)} \right]_{F(x) = F_{m-1}(x)} = \mathrm{sign}\big(y_i - F_{m-1}(x_i)\big)$
So the base learner is fit to the signs of the current residuals (line 4 of Algorithm 1), and the line search of line 5 becomes
(14) $\rho_m = \arg\min_{\rho} \sum_{i=1}^{N} \big| y_i - F_{m-1}(x_i) - \rho\, h(x_i; a_m) \big|$

$= \arg\min_{\rho} \sum_{i=1}^{N} |h(x_i; a_m)| \cdot \left| \frac{y_i - F_{m-1}(x_i)}{h(x_i; a_m)} - \rho \right|$

$= \mathrm{median}_W \left\{ \frac{y_i - F_{m-1}(x_i)}{h(x_i; a_m)} \right\}_1^N, \qquad w_i = |h(x_i; a_m)|$
where $\mathrm{median}_W\{\cdot\}$ denotes the weighted median with weights $w_i$. Inserting (13) and (14) into lines 4 and 5 of Algorithm 1 gives a gradient-boosting procedure for absolute-error loss, usable with any base learner $h(x; a)$.
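Since the weighted median may be less familiar, here is a small helper computing it (an illustrative sketch, not taken from the paper; the name weighted_median is my own):

    import numpy as np

    def weighted_median(values, weights):
        # smallest v such that the cumulative weight of {values <= v}
        # reaches half of the total weight
        order = np.argsort(values)
        v, w = np.asarray(values)[order], np.asarray(weights)[order]
        cum = np.cumsum(w)
        return v[np.searchsorted(cum, 0.5 * cum[-1])]

    # e.g. rho_m = weighted_median((y - F_prev) / h_vals, np.abs(h_vals))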
3. Regression trees.

Now consider the case where the base learner is a regression tree with $J$ terminal nodes. Each tree has the form
(15) $h\big(x; \{b_j, R_j\}_1^J\big) = \sum_{j=1}^{J} b_j\, 1(x \in R_j)$
Here $\{R_j\}_1^J$ are disjoint regions that partition the input space, so the tree is a piecewise-constant function of $x$; each input is assigned to a terminal node by following the splits from the root. The indicator $1(\cdot)$ equals 1 when its argument is true and 0 otherwise. The parameters of the tree (15) are the coefficients $\{b_j\}_1^J$, the terminal-node regions $\{R_j\}_1^J$, and the splitting variables and split locations at the internal nodes.
With a regression tree as the base learner, the update in line 6 of Algorithm 1 becomes
(16) $F_m(x) = F_{m-1}(x) + \rho_m \sum_{j=1}^{J} b_{jm}\, 1(x \in R_{jm})$
Since the tree is fit to the pseudo-responses by least squares, the value of $b_{jm}$ is the mean of the pseudo-responses in the corresponding region:

$b_{jm} = \mathrm{ave}_{x_i \in R_{jm}}\, \tilde{y}_i$
Absorbing the scaling factor $\rho_m$ from line 5 of Algorithm 1 into the node values, (16) can be rewritten as
(17) $F_m(x) = F_{m-1}(x) + \sum_{j=1}^{J} \gamma_{jm}\, 1(x \in R_{jm})$
where $\gamma_{jm} = \rho_m b_{jm}$. Equation (17) can be viewed as adding $J$ separate basis functions at each step, one per terminal region, rather than a single function as in (16). The advantage is that the fit can be improved further by optimizing the coefficient of each of these basis functions individually:

$\{\gamma_{jm}\}_1^J = \arg\min_{\{\gamma_j\}_1^J} \sum_{i=1}^{N} L\Big(y_i,\, F_{m-1}(x_i) + \sum_{j=1}^{J} \gamma_j\, 1(x_i \in R_{jm})\Big)$
Because the terminal regions of the tree are disjoint, this reduces to a separate one-dimensional optimization within each region:
(18) $\gamma_{jm} = \arg\min_{\gamma} \sum_{x_i \in R_{jm}} L\big(y_i,\, F_{m-1}(x_i) + \gamma\big)$
Given the current approximation $F_{m-1}$, for the absolute-error loss $L = |y - F|$ the optimal value in each terminal node is

$\gamma_{jm} = \mathrm{median}_{x_i \in R_{jm}} \big\{ y_i - F_{m-1}(x_i) \big\}$
i.e. simply the median of the current residuals in that node. Thus, under absolute-error loss, each iteration fits a regression tree to the signs of the current residuals and then replaces each terminal-node value with the median of the residuals of the samples reaching that node, giving Algorithm 3 (LAD_TreeBoost):
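The Algorithm 3 listing did not survive here either. A minimal sketch of LAD tree boosting along the lines just described (scikit-learn's DecisionTreeRegressor and its apply() method for locating terminal nodes are assumptions used only for illustration):

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def lad_tree_boost(X, y, M=100, max_depth=2):
        F = np.full(len(y), np.median(y))      # F_0: the median minimizes |y - F|
        trees = []
        for m in range(M):
            resid = y - F
            tree = DecisionTreeRegressor(max_depth=max_depth)
            tree.fit(X, np.sign(resid))        # fit to the signs of the residuals, eq. (13)
            leaf = tree.apply(X)               # terminal node reached by each sample
            gamma = {l: np.median(resid[leaf == l]) for l in np.unique(leaf)}  # eq. (18)
            F = F + np.array([gamma[l] for l in leaf])
            trees.append((tree, gamma))
        return np.median(y), trees

    # prediction: start from the stored median and, for each tree, add gamma[tree.apply(x)]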


This algorithm is highly robust: the regression tree uses only the order information of each input variable $x_j$, the targets of the tree fit take only the two values $\tilde{y}_i \in \{-1, +1\}$, and the terminal-node values are medians of residuals.
An alternative would be to build each tree to minimize the absolute-error criterion directly:

$\mathrm{tree}_m(x) = \arg\min_{J\text{-node tree}} \sum_{i=1}^{N} \big| y_i - F_{m-1}(x_i) - \mathrm{tree}(x_i) \big|$

$F_m(x) = F_{m-1}(x) + \mathrm{tree}_m(x)$
However, Algorithm 3 is much faster, because it builds its trees under a squared-error criterion, and searching for split points under squared error is far cheaper than under absolute error.
4. M-Regression.

M-regression is designed to be resistant to long-tailed error distributions and outliers while remaining highly efficient for normally distributed errors. It uses the Huber loss
(19) $L(y, F) = \begin{cases} \frac{1}{2}(y - F)^2, & |y - F| \le \delta, \\ \delta\,\big(|y - F| - \delta/2\big), & |y - F| > \delta. \end{cases}$
The pseudo-responses $\tilde{y}_i$ are therefore

$\tilde{y}_i = \begin{cases} y_i - F_{m-1}(x_i), & |y_i - F_{m-1}(x_i)| \le \delta, \\ \delta \cdot \mathrm{sign}\big(y_i - F_{m-1}(x_i)\big), & |y_i - F_{m-1}(x_i)| > \delta, \end{cases}$
and the line search becomes
(20) $\rho_m = \arg\min_{\rho} \sum_{i=1}^{N} L\big(y_i,\, F_{m-1}(x_i) + \rho\, h(x_i; a_m)\big)$
with $L$ the Huber loss of (19).
In this loss, the threshold $\delta$ determines which residuals are treated as outliers, whose contribution is measured by absolute rather than squared error. A good value of $\delta$ depends on the distribution of $y - F^{*}(x)$, where $F^{*}$ is the true target function. A common choice is the $\alpha$-quantile of $|y - F^{*}(x)|$, so that $1 - \alpha$ controls the fraction of observations treated as outliers. Since $F^{*}$ is unknown, at each iteration the previous approximation $F_{m-1}$ is used in its place, giving the iteration-dependent value

$\delta_m = \mathrm{quantile}_{\alpha}\big\{ |y_i - F_{m-1}(x_i)| \big\}_1^N$
When the base learner is a regression tree and the terminal-node values $\gamma_{jm}$ of each region $R_{jm}$ are computed via (18) with the Huber loss (19), the node values are obtained as

$\tilde{\gamma}_{jm} = \mathrm{median}_{x_i \in R_{jm}} \big\{ y_i - F_{m-1}(x_i) \big\},$

$\gamma_{jm} = \tilde{\gamma}_{jm} + \frac{1}{N_{jm}} \sum_{x_i \in R_{jm}} \mathrm{sign}\big(y_i - F_{m-1}(x_i) - \tilde{\gamma}_{jm}\big) \cdot \min\big(\delta_m,\, |y_i - F_{m-1}(x_i) - \tilde{\gamma}_{jm}|\big)$
where $N_{jm}$ is the number of samples in the $j$-th terminal node. The resulting GBDT algorithm based on the Huber loss (Algorithm 4, M_TreeBoost) is as follows:
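As with the other listings, the Algorithm 4 figure is missing. Below is a sketch of one boosting iteration under the Huber loss, following the formulas above; the function name huber_step, the use of scikit-learn's DecisionTreeRegressor, and the default alpha are assumptions made for illustration.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def huber_step(X, y, F, alpha=0.9, max_depth=2):
        r = y - F
        delta = np.quantile(np.abs(r), alpha)                          # delta_m
        ytilde = np.where(np.abs(r) <= delta, r, delta * np.sign(r))   # pseudo-responses
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, ytilde)
        leaf = tree.apply(X)
        F_new = F.copy()
        for l in np.unique(leaf):
            idx = leaf == l
            med = np.median(r[idx])                                    # gamma_tilde_jm
            adj = np.mean(np.sign(r[idx] - med) * np.minimum(delta, np.abs(r[idx] - med)))
            F_new[idx] += med + adj                                    # gamma_jm
        return F_new, tree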



Algorithm 4, motivated by robustness, performs nearly as well as Algorithm 2 (squared-error loss) for normally distributed errors and nearly as well as Algorithm 3 (absolute-error loss) for long-tailed error distributions; for moderately long-tailed errors it can outperform both.
5. Two-class logistic regression and classification.

Here the loss is the negative binomial log-likelihood (FHT00):

$L(y, F) = \log\big(1 + e^{-2yF}\big), \qquad y \in \{-1, 1\},$
where $F(x)$ is half the log-odds:
(21) $F(x) = \frac{1}{2} \log\left[ \frac{\Pr(y = 1 \mid x)}{\Pr(y = -1 \mid x)} \right]$
The pseudo-response is
(22) $\tilde{y}_i = -\left[ \frac{\partial L(y_i, F(x_i))}{\partial F(x_i)} \right]_{F(x) = F_{m-1}(x)} = \frac{2 y_i}{1 + \exp\big(2 y_i F_{m-1}(x_i)\big)}$

The line search for $\rho_m$ is

$\rho_m = \arg\min_{\rho} \sum_{i=1}^{N} \log\Big(1 + \exp\big(-2 y_i (F_{m-1}(x_i) + \rho\, h(x_i; a_m))\big)\Big)$
When regression trees are used as base learners, the value of each terminal node $R_{jm}$ is
(23) $\gamma_{jm} = \arg\min_{\gamma} \sum_{x_i \in R_{jm}} \log\Big(1 + \exp\big(-2 y_i (F_{m-1}(x_i) + \gamma)\big)\Big)$
Equation (23) has no closed-form solution; following FHT00, it is approximated (by a single Newton-Raphson step) as

$\gamma_{jm} = \sum_{x_i \in R_{jm}} \tilde{y}_i \Big/ \sum_{x_i \in R_{jm}} |\tilde{y}_i|\,\big(2 - |\tilde{y}_i|\big)$
The resulting GBDT algorithm for two-class classification is as follows:
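The listing for this algorithm is also missing. A minimal runnable sketch of two-class logistic tree boosting following the steps above (scikit-learn's DecisionTreeRegressor, the function name l2_tree_boost, and the initialization from the prior log-odds are assumptions for illustration):

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def l2_tree_boost(X, y, M=100, max_depth=2):
        # y takes values in {-1, +1}
        p = np.mean(y == 1)
        F0 = 0.5 * np.log(p / (1 - p))                 # prior half log-odds as F_0
        F = np.full(len(y), F0)
        trees = []
        for m in range(M):
            ytilde = 2 * y / (1 + np.exp(2 * y * F))   # pseudo-responses, eq. (22)
            tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, ytilde)
            leaf = tree.apply(X)
            gamma = {}
            for l in np.unique(leaf):
                idx = leaf == l
                num = ytilde[idx].sum()
                den = np.sum(np.abs(ytilde[idx]) * (2 - np.abs(ytilde[idx])))
                gamma[l] = num / den                   # Newton-step node value
            F = F + np.array([gamma[l] for l in leaf])
            trees.append((tree, gamma))
        return F0, trees

    # p_plus(x) = 1 / (1 + exp(-2 * F_M(x))); predict class +1 when p_plus(x) > 0.5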


The final approximation $F_M(x)$ is related to the log-odds through (21), so it can be inverted to obtain probability estimates:

$p_+(x) = \hat{\Pr}(y = 1 \mid x) = \frac{1}{1 + e^{-2 F_M(x)}},$

$p_-(x) = \hat{\Pr}(y = -1 \mid x) = \frac{1}{1 + e^{2 F_M(x)}}.$
The predicted class is then

$\hat{y}(x) = 2 \cdot 1\big[\, c(-1, 1)\, p_+(x) > c(1, -1)\, p_-(x) \,\big] - 1$
where $c(\hat{y}, y)$ is the cost of predicting $\hat{y}$ when the true label is $y$.
5.1 Influence trimming
For the two-class problem, the empirical loss at the $m$-th iteration is
(24) $\phi_m(\rho, a) = \sum_{i=1}^{N} \log\Big(1 + e^{-2 y_i (F_{m-1}(x_i) + \rho\, h(x_i; a))}\Big) = \sum_{i=1}^{N} \log\Big[ 1 + \exp\big(-2 y_i F_{m-1}(x_i)\big)\, \exp\big(-2 y_i \rho\, h(x_i; a)\big) \Big]$
When $y_i F_{m-1}(x_i)$ is very large, the corresponding term in (24) is nearly independent of $\rho\, h(x_i; a)$ and close to zero, meaning that this observation contributes almost nothing to the loss. Consequently, when solving for $(\rho_m, a_m)$ via

$(\rho_m, a_m) = \arg\min_{\rho, a} \phi_m(\rho, a)$
observations with very large $y_i F_{m-1}(x_i)$ can be removed from the computation at the $m$-th iteration without significantly affecting the result. Thus,
(25) $w_i = \exp\big(-2 y_i F_{m-1}(x_i)\big)$
can be viewed as a measure of the influence, or weight, of the $i$-th observation on the estimate.
Alternatively, from the function-space view of Section 2, the values $\{F(x_i)\}_1^N$ are the parameters, and the influence of $F(x_i)$ on the estimate (holding all other values fixed) can be measured by the second derivative of the loss with respect to it. At the $m$-th iteration this second derivative is $|\tilde{y}_i|\,(2 - |\tilde{y}_i|)$, so another measure of the contribution of the $i$-th observation to the estimation of $\rho_m h(x; a_m)$ is
(26) $w_i = |\tilde{y}_i|\,(2 - |\tilde{y}_i|)$
Influence trimming at the $m$-th iteration deletes all observations with $w_i < w_{l(\alpha)}$, where $l(\alpha)$ is determined by
(27) $\sum_{i=1}^{l(\alpha)} w_{(i)} = \alpha \sum_{i=1}^{N} w_i$
Here $\{w_{(i)}\}_1^N$ are the weights $\{w_i\}_1^N$ arranged in ascending order, and typically $\alpha \in [0.05, 0.2]$. Trimming based on (25), (27) corresponds to the scheme used in the Real AdaBoost algorithm, while (26), (27) corresponds to the LogitBoost algorithm of FHT00. Roughly 90% to 95% of the observations can be deleted at each iteration without noticeable loss of accuracy, reducing the computation by a corresponding factor of 10 to 20.
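A minimal sketch of the trimming rule (26)-(27), returning the indices of the observations kept at an iteration (illustrative only; the helper name trim_indices is my own):

    import numpy as np

    def trim_indices(ytilde, alpha=0.1):
        # drop the observations whose influences w_i are the smallest and
        # together account for a fraction alpha of the total influence
        w = np.abs(ytilde) * (2 - np.abs(ytilde))        # eq. (26)
        order = np.argsort(w)
        cum = np.cumsum(w[order])
        cutoff = np.searchsorted(cum, alpha * w.sum())   # position l(alpha), eq. (27)
        return np.sort(order[cutoff:])                   # indices used for fitting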