
Offline Reinforcement Learning (Offline RL) Series 3 (Algorithms): CQL Explained and Implemented


[Update Log]

Paper: Conservative Q-Learning for Offline Reinforcement Learning

This work comes from Sergey Levine's group at UC Berkeley (first author Aviral Kumar), was released in 2020, and was published at NeurIPS 2020. The main idea is to add a regularizer on top of the standard value objective so as to learn a conservative Q-function. The authors prove theoretically that CQL produces a lower bound on the true value of the current policy, and that this bound is preserved through policy evaluation and policy improvement. On the implementation side, the regularizer takes only about 20 lines of code on top of an existing Q-learning or actor-critic implementation, yet substantially improves the experimental results. The authors have also open-sourced all of the code, which is well worth studying.

Abstract: Before CQL, the usual way of handling distribution shift in offline RL was to constrain the action choices of the policy being optimized to the action distribution of the offline dataset, thereby avoiding overestimated Q-values for out-of-distribution actions and reducing the influence of unknown actions during training. This family of methods is known as policy constraints, e.g., the BCQ and BEAR algorithms. CQL instead modifies the way value functions are backed up: it adds a regularizer to the value objective and obtains a lower-bound estimate of the true action-value function. Experiments show that CQL performs very well, especially when learning from complex and multi-modal data distributions.

1. Preliminaries

1.1 Sample error

The offline dataset $\mathcal{D}$ is collected by a behavior policy $\pi_{\beta}(\mathbf{a} \mid \mathbf{s})$; $d^{\pi_{\beta}}(\mathbf{s})$ denotes its discounted marginal state distribution, so $\mathcal{D} \sim d^{\pi_{\beta}}(\mathbf{s})\, \pi_{\beta}(\mathbf{a} \mid \mathbf{s})$. Because the dataset covers state-action pairs only partially, this sampling introduces a sample error.
1.2 Operator

For background on the Bellman operator and the policy iteration process, see the article 通过Bellman算子理解动态规划 (Understanding Dynamic Programming via the Bellman Operator).
1.2.1 Bellman operator


$$
\mathcal{B}^{\pi} Q = r + \gamma P^{\pi} Q, \qquad
P^{\pi} Q(\mathbf{s}, \mathbf{a}) = \mathbb{E}_{\mathbf{s}^{\prime} \sim T\left(\mathbf{s}^{\prime} \mid \mathbf{s}, \mathbf{a}\right),\, \mathbf{a}^{\prime} \sim \pi\left(\mathbf{a}^{\prime} \mid \mathbf{s}^{\prime}\right)}\left[Q\left(\mathbf{s}^{\prime}, \mathbf{a}^{\prime}\right)\right]
$$
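As a concrete toy illustration of these operators (my own addition, not from the paper; the tabular MDP and the names `T`, `r`, `pi` below are made up), the backup $\mathcal{B}^{\pi}Q = r + \gamma P^{\pi}Q$ can be applied repeatedly until it converges to $Q^{\pi}$:

```python
import numpy as np

# Toy tabular MDP (all quantities invented for illustration): S states, A actions.
S, A, gamma = 3, 2, 0.99
rng = np.random.default_rng(0)
T = rng.dirichlet(np.ones(S), size=(S, A))   # T[s, a, s'] = P(s' | s, a)
r = rng.uniform(size=(S, A))                 # r[s, a]
pi = np.full((S, A), 1.0 / A)                # pi[s, a] = pi(a | s), uniform policy

def bellman_backup(Q):
    # (P^pi Q)(s, a) = E_{s' ~ T(.|s,a), a' ~ pi(.|s')}[Q(s', a')]
    v_next = (pi * Q).sum(axis=1)            # sum_a' pi(a'|s') Q(s', a'), shape (S,)
    p_pi_q = T @ v_next                      # expectation over s', shape (S, A)
    return r + gamma * p_pi_q                # B^pi Q = r + gamma * P^pi Q

Q = np.zeros((S, A))
for _ in range(1000):                        # B^pi is a contraction, so this converges to Q^pi
    Q = bellman_backup(Q)
```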
1.2.2 Empirical Bellman operator

The offline dataset does not contain transitions for every action, so the backup can only be performed with the data contained in $\mathcal{D}$; the resulting empirical Bellman operator is denoted $\hat{\mathcal{B}}^{\pi}$.
1.2.3 Optimal Bellman operator


$$
\mathcal{B}^{*} Q(\mathbf{s}, \mathbf{a}) = r(\mathbf{s}, \mathbf{a}) + \gamma\, \mathbb{E}_{\mathbf{s}^{\prime} \sim P\left(\mathbf{s}^{\prime} \mid \mathbf{s}, \mathbf{a}\right)}\left[\max_{\mathbf{a}^{\prime}} Q\left(\mathbf{s}^{\prime}, \mathbf{a}^{\prime}\right)\right]
$$
1.3 Policy iteration

1.3.1 Policy evaluation

While optimizing the current policy we estimate its value function, which is then used to evaluate how good the policy is:

$$
\hat{Q}^{k+1} \leftarrow \arg\min_{Q}\; \mathbb{E}_{\mathbf{s}, \mathbf{a}, \mathbf{s}^{\prime} \sim \mathcal{D}}\left[\left(\left(r(\mathbf{s}, \mathbf{a}) + \gamma\, \mathbb{E}_{\mathbf{a}^{\prime} \sim \hat{\pi}^{k}\left(\mathbf{a}^{\prime} \mid \mathbf{s}^{\prime}\right)}\left[\hat{Q}^{k}\left(\mathbf{s}^{\prime}, \mathbf{a}^{\prime}\right)\right]\right) - Q(\mathbf{s}, \mathbf{a})\right)^{2}\right] \quad \text{(policy evaluation)}
$$
1.3.2 Policy improvement

The policy is then improved by searching greedily for actions that maximize the newly estimated Q-function:

$$
\hat{\pi}^{k+1} \leftarrow \arg\max_{\pi}\; \mathbb{E}_{\mathbf{s} \sim \mathcal{D},\, \mathbf{a} \sim \pi^{k}(\mathbf{a} \mid \mathbf{s})}\left[\hat{Q}^{k+1}(\mathbf{s}, \mathbf{a})\right] \quad \text{(policy improvement)}
$$
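For reference, the two steps above correspond to the usual critic and actor losses in an actor-critic implementation. The following is a minimal PyTorch sketch of my own (the interfaces `q_net`, `target_q_net`, `policy.sample`, and the batch keys are assumed names, not from the paper's code):

```python
import torch
import torch.nn.functional as F

def policy_evaluation_loss(q_net, target_q_net, policy, batch, gamma=0.99):
    # Squared empirical Bellman error on transitions (s, a, r, s') sampled from D.
    s, a, r, s_next = batch["obs"], batch["act"], batch["rew"], batch["next_obs"]
    with torch.no_grad():
        a_next, _ = policy.sample(s_next)                  # a' ~ pi^k(. | s')
        target = r + gamma * target_q_net(s_next, a_next)  # r + gamma * Q^k(s', a')
    return F.mse_loss(q_net(s, a), target)

def policy_improvement_loss(q_net, policy, batch):
    # Maximize Q under the policy  <=>  minimize -Q (reparameterized actions).
    s = batch["obs"]
    a_pi, _ = policy.sample(s)
    return -q_net(s, a_pi).mean()
```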
2. Algorithm framework

The key to offline RL is avoiding the value overestimation caused by distribution shift. CQL works directly on the value function: it seeks a lower bound on the true $Q$-function and then optimizes the policy against this more conservative estimate of the policy value. States $\mathbf{s}$ are sampled from the offline dataset, so the state marginal of $\mu$ is required to match $d^{\pi_{\beta}}(\mathbf{s})$: $\mu(\mathbf{s}, \mathbf{a}) = d^{\pi_{\beta}}(\mathbf{s})\, \mu(\mathbf{a} \mid \mathbf{s})$. Based on this idea, a regularizer is added to the Q objective so that the Q estimate becomes conservative.
2.1 Q_1

The Q-function update is:

$$
\hat{Q}^{k+1} \leftarrow \arg\min_{Q}\; \alpha\, \mathbb{E}_{\mathbf{s} \sim \mathcal{D},\, \mathbf{a} \sim \mu(\mathbf{a} \mid \mathbf{s})}\left[Q(\mathbf{s}, \mathbf{a})\right] + \frac{1}{2}\, \mathbb{E}_{\mathbf{s}, \mathbf{a} \sim \mathcal{D}}\left[\left(Q(\mathbf{s}, \mathbf{a}) - \hat{\mathcal{B}}^{\pi} \hat{Q}^{k}(\mathbf{s}, \mathbf{a})\right)^{2}\right]
$$
For any policy $\mu(\mathbf{a} \mid \mathbf{s})$ whose support lies within that of $\hat{\pi}_{\beta}$, the resulting estimate satisfies:

$$
\forall\, \mathbf{s} \in \mathcal{D}, \mathbf{a}:\quad \hat{Q}^{\pi}(\mathbf{s}, \mathbf{a}) \leq Q^{\pi}(\mathbf{s}, \mathbf{a}) - \alpha\left[\left(I - \gamma P^{\pi}\right)^{-1} \frac{\mu}{\hat{\pi}_{\beta}}\right](\mathbf{s}, \mathbf{a}) + \left[\left(I - \gamma P^{\pi}\right)^{-1} \frac{C_{r, T, \delta}\, R_{\max}}{(1 - \gamma)\sqrt{|\mathcal{D}|}}\right](\mathbf{s}, \mathbf{a})
$$
When $\alpha$ is large enough, $\hat{Q}^{\pi}(\mathbf{s}, \mathbf{a}) \leq Q^{\pi}(\mathbf{s}, \mathbf{a})$. If $\hat{\mathcal{B}}^{\pi} = \mathcal{B}^{\pi}$, i.e., there is no sample error, then any $\alpha \geq 0$ already makes $\hat{Q}^{\pi}$ a pointwise lower bound of $Q^{\pi}$.
2.2 Q_2

The Q-function update becomes:

$$
\begin{aligned}
\hat{Q}^{k+1} \leftarrow \arg\min_{Q}\; & \alpha \cdot\left(\mathbb{E}_{\mathbf{s} \sim \mathcal{D},\, \mathbf{a} \sim \mu(\mathbf{a} \mid \mathbf{s})}[Q(\mathbf{s}, \mathbf{a})] - \mathbb{E}_{\mathbf{s} \sim \mathcal{D},\, \mathbf{a} \sim \hat{\pi}_{\beta}(\mathbf{a} \mid \mathbf{s})}[Q(\mathbf{s}, \mathbf{a})]\right) \\
& + \frac{1}{2}\, \mathbb{E}_{\mathbf{s}, \mathbf{a}, \mathbf{s}^{\prime} \sim \mathcal{D}}\left[\left(Q(\mathbf{s}, \mathbf{a}) - \hat{\mathcal{B}}^{\pi} \hat{Q}^{k}(\mathbf{s}, \mathbf{a})\right)^{2}\right]
\end{aligned}
$$
Setting $\mu(\mathbf{a} \mid \mathbf{s}) = \pi(\mathbf{a} \mid \mathbf{s})$ gives:

$$
\forall\, \mathbf{s} \in \mathcal{D}:\quad \hat{V}^{\pi}(\mathbf{s}) \leq V^{\pi}(\mathbf{s}) - \alpha\left[\left(I - \gamma P^{\pi}\right)^{-1} \mathbb{E}_{\pi}\left[\frac{\pi}{\hat{\pi}_{\beta}} - 1\right]\right](\mathbf{s}) + \left[\left(I - \gamma P^{\pi}\right)^{-1} \frac{C_{r, T, \delta}\, R_{\max}}{(1 - \gamma)\sqrt{|\mathcal{D}|}}\right](\mathbf{s})
$$
The resulting estimate is no longer necessarily a pointwise lower bound on the true Q-values, but the estimated value of the policy is still a lower bound on the true value function.
2.3 CQL

In $Q_2$ we need $\mu = \pi$, i.e., the sampling distribution must be the very policy we are trying to improve; only then is the policy's value a lower bound on the true value function. Mirroring the policy iteration procedure of the online setting, $\mu$ is therefore defined directly as the policy that maximizes the current Q-value, which gives the following min-max problem:

$$
\begin{aligned}
\min_{Q} \max_{\mu}\; & \alpha\left(\mathbb{E}_{\mathbf{s} \sim \mathcal{D},\, \mathbf{a} \sim \mu(\mathbf{a} \mid \mathbf{s})}[Q(\mathbf{s}, \mathbf{a})] - \mathbb{E}_{\mathbf{s} \sim \mathcal{D},\, \mathbf{a} \sim \hat{\pi}_{\beta}(\mathbf{a} \mid \mathbf{s})}[Q(\mathbf{s}, \mathbf{a})]\right) \\
& + \frac{1}{2}\, \mathbb{E}_{\mathbf{s}, \mathbf{a}, \mathbf{s}^{\prime} \sim \mathcal{D}}\left[\left(Q(\mathbf{s}, \mathbf{a}) - \hat{\mathcal{B}}^{\pi_{k}} \hat{Q}^{k}(\mathbf{s}, \mathbf{a})\right)^{2}\right] + \mathcal{R}(\mu) \quad (\mathrm{CQL}(\mathcal{R}))
\end{aligned}
$$
Here a regularization term $\mathcal{R}(\mu)$ has been added. If we choose $\mathcal{R}(\mu) = -D_{\mathrm{KL}}(\mu \| \rho)$, i.e., the KL divergence between $\mu$ and a prior distribution $\rho$, the inner maximization takes the generic form:

$$
\max_{\mu}\; \mathbb{E}_{\mathbf{x} \sim \mu(\mathbf{x})}[f(\mathbf{x})] - D_{\mathrm{KL}}(\mu \| \rho) \quad \text{s.t.} \quad \sum_{\mathbf{x}} \mu(\mathbf{x}) = 1,\; \mu(\mathbf{x}) \geq 0\; \forall\, \mathbf{x}
$$
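To see where the closed-form solution quoted next comes from, write down the Lagrangian of this KL-regularized problem and set its derivative with respect to $\mu(\mathbf{x})$ to zero (a standard variational argument, spelled out here for completeness):

$$
\mathcal{L}(\mu, \lambda) = \sum_{\mathbf{x}} \mu(\mathbf{x}) f(\mathbf{x}) - \sum_{\mathbf{x}} \mu(\mathbf{x}) \log \frac{\mu(\mathbf{x})}{\rho(\mathbf{x})} + \lambda\left(\sum_{\mathbf{x}} \mu(\mathbf{x}) - 1\right)
$$

$$
\frac{\partial \mathcal{L}}{\partial \mu(\mathbf{x})} = f(\mathbf{x}) - \log \frac{\mu(\mathbf{x})}{\rho(\mathbf{x})} - 1 + \lambda = 0 \;\Longrightarrow\; \mu(\mathbf{x}) \propto \rho(\mathbf{x}) \exp(f(\mathbf{x}))
$$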
Its optimal solution is $\mu^{*}(\mathbf{x}) = \frac{1}{Z}\, \rho(\mathbf{x}) \exp(f(\mathbf{x}))$, where $Z$ is the normalization constant. Substituting $\mu^{*}$ back into the objective gives:

$$
\min_{Q}\; \alpha\, \mathbb{E}_{\mathbf{s} \sim d^{\pi_{\beta}}(\mathbf{s})}\left[\mathbb{E}_{\mathbf{a} \sim \rho(\mathbf{a} \mid \mathbf{s})}\left[Q(\mathbf{s}, \mathbf{a}) \frac{\exp(Q(\mathbf{s}, \mathbf{a}))}{Z}\right] - \mathbb{E}_{\mathbf{a} \sim \pi_{\beta}(\mathbf{a} \mid \mathbf{s})}[Q(\mathbf{s}, \mathbf{a})]\right] + \frac{1}{2}\, \mathbb{E}_{\mathbf{s}, \mathbf{a}, \mathbf{s}^{\prime} \sim \mathcal{D}}\left[\left(Q - \mathcal{B}^{\pi_{k}} \hat{Q}^{k}\right)^{2}\right]
$$
Choosing $\rho = \mathrm{Unif}(\mathbf{a})$, a uniform prior over actions, yields $\mathrm{CQL}(\mathcal{H})$:

$$
\min_{Q}\; \alpha\, \mathbb{E}_{\mathbf{s} \sim \mathcal{D}}\left[\log \sum_{\mathbf{a}} \exp(Q(\mathbf{s}, \mathbf{a})) - \mathbb{E}_{\mathbf{a} \sim \hat{\pi}_{\beta}(\mathbf{a} \mid \mathbf{s})}[Q(\mathbf{s}, \mathbf{a})]\right] + \frac{1}{2}\, \mathbb{E}_{\mathbf{s}, \mathbf{a}, \mathbf{s}^{\prime} \sim \mathcal{D}}\left[\left(Q - \hat{\mathcal{B}}^{\pi_{k}} \hat{Q}^{k}\right)^{2}\right]
$$
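The introduction mentioned that the regularizer adds only about 20 lines of code on top of a standard Q-learning loss. The sketch below shows what the $\mathrm{CQL}(\mathcal{H})$ objective looks like for a discrete-action critic; it is my own minimal illustration under assumed interfaces (`q_net`, `target_q_net`, the batch keys, and `alpha`), not the authors' reference implementation:

```python
import torch
import torch.nn.functional as F

def cql_h_loss(q_net, target_q_net, batch, alpha=1.0, gamma=0.99):
    """CQL(H) for a discrete-action critic: conservative penalty + empirical Bellman error."""
    s, a, r, s_next, done = (batch["obs"], batch["act"], batch["rew"],
                             batch["next_obs"], batch["done"])

    q_all = q_net(s)                                            # [B, |A|]
    q_data = q_all.gather(1, a.long().unsqueeze(1)).squeeze(1)  # Q(s, a) at dataset actions

    # log-sum-exp over all actions pushes down Q on out-of-distribution actions,
    # while subtracting the dataset term pushes Q back up on in-distribution actions.
    conservative_penalty = (torch.logsumexp(q_all, dim=1) - q_data).mean()

    # Standard Q-learning style backup on dataset transitions.
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * target_q_net(s_next).max(dim=1).values
    bellman_error = 0.5 * F.mse_loss(q_data, target)

    return alpha * conservative_penalty + bellman_error
```

For continuous actions the log-sum-exp has no closed form; the open-source implementations approximate it by importance sampling over actions drawn from the current policy and a uniform distribution.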
2.4 Policy improvement

Performing policy evaluation with the conservative $\hat{Q}$ derived above keeps the value of the improved policy conservative as well. Moreover, if $\hat{Q}$ is updated such that the change from $\pi_{k}$ to $\pi_{k+1}$ is as small as possible, the distribution shift caused by policy improvement is also smaller. The learned Q-value remains a lower bound provided that:

$$
\mathbb{E}_{\pi_{\hat{Q}^{k}}(\mathbf{a} \mid \mathbf{s})}\left[\frac{\pi_{\hat{Q}^{k}}(\mathbf{a} \mid \mathbf{s})}{\hat{\pi}_{\beta}(\mathbf{a} \mid \mathbf{s})} - 1\right] \geq \max_{\mathbf{a}\; \text{s.t.}\; \hat{\pi}_{\beta}(\mathbf{a} \mid \mathbf{s}) > 0}\left(\frac{\pi_{\hat{Q}^{k}}(\mathbf{a} \mid \mathbf{s})}{\hat{\pi}_{\beta}(\mathbf{a} \mid \mathbf{s})}\right) \cdot \varepsilon
$$
This increases the gap between the estimated and the true Q-values, making the estimate more conservative.
2.5 Pseudocode




- Q-learning variant: the learned conservative Q-function can be used directly as the final policy (acting greedily with respect to it).
- Actor-critic variant: an actor must additionally be trained with SAC-style updates (see the sketch below).
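In the actor-critic case, the actor step itself is unchanged from SAC and simply maximizes the conservative critic plus an entropy bonus; here is a minimal sketch under the same assumed interfaces as above (`policy`, `q_net`, and `log_alpha` are hypothetical names):

```python
import torch

def actor_loss(policy, q_net, batch, log_alpha):
    # SAC-style actor objective evaluated with the conservative critic.
    s = batch["obs"]
    a_pi, log_prob = policy.sample(s)          # reparameterized a ~ pi(.|s) and log pi(a|s)
    entropy_term = log_alpha.exp() * log_prob  # temperature-weighted entropy term
    return (entropy_term - q_net(s, a_pi)).mean()
```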
3. Results

Gym results



D4RL results



Atari results



4. Code implementation

GitHub
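As a starting point for reproducing the D4RL results above, the offline dataset can be loaded with the d4rl package roughly as follows (a minimal sketch; the environment name is just one example, and the downstream training code is assumed to be the losses sketched earlier):

```python
import gym
import d4rl  # noqa: F401  (importing d4rl registers the offline environments)

env = gym.make("hopper-medium-v2")
dataset = d4rl.qlearning_dataset(env)
# dataset is a dict of numpy arrays with keys:
# observations, actions, rewards, next_observations, terminals
print(dataset["observations"].shape, dataset["actions"].shape)
```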


References

[1] Aviral Kumar, Aurick Zhou, George Tucker, Sergey Levine. "Conservative Q-Learning for Offline Reinforcement Learning," 2020. arXiv:2006.04779.
[2] Conservative Q-Learning.
Recommended reading on offline RL

Offline RL Series 3 (Algorithms): TD3+BC Explained and Implemented (Practical Notes)
Offline RL Series 3 (Algorithms): REM (Random Ensemble Mixture) Explained and Implemented
Offline RL Series 3 (Algorithms): Policy Constraints - BRAC Explained and Implemented (Practical Notes)
Offline RL Series 3 (Algorithms): Policy Constraints - BEAR Explained and Implemented
Offline RL Series 3 (Algorithms): Policy Constraints - BCQ Explained and Implemented
Offline RL Series 2 (Environments): The D4RL Benchmark - Introduction, Installation, and Troubleshooting
Offline RL Series 1: An Introduction to Offline RL