The Latest Contrastive Learning Research at the Top Conferences in 2022

This article first appeared on my WeChat official account 「对白的算法屋」— come learn AI with me! Hi everyone, I'm 对白 (Duibai).
Recommender systems live and die by mining user preferences, and whether the target is data sparsity, noise, or strengthening other components, contrastive learning is undoubtedly icing on the cake. Surveying this year's recommender-system papers at the major conferences, contrastive learning clearly shows up more often, with improvements emerging one after another and a trend toward diversification. I summarize the directions in four points:

[*]Improved graph data augmentation (NodeDrop, EdgeDrop, random walks, drops guided by auxiliary information);
[*]Multi-view contrastive learning (e.g., contrasting a graph's structural view, semantic view, or disentangled subgraphs; applicable to social networks, knowledge graphs, bundle recommendation, cross-domain recommendation, and more);
[*]Contrastive tasks built on node relations (using a node's relations to its neighbors as the sample-selection criterion; considering relations between a GNN node's representations at different layers; hypergraphs);
[*]Contrastive tasks from other angles (e.g., adding noise to the embeddings).
These improvements are not limited to recommender systems; CV and NLP can borrow them as well.
Below I share five contrastive-learning papers selected from this year's top conferences~
1. NCL

Paper title: Improving Graph Collaborative Filtering with Neighborhood-enriched Contrastive Learning
Venue: WWW'22
Paper link: https://doi.org/10.1145/3485447.3512104
Code link: https://github.com/RUCAIBox/NCL

1.1 The Core Idea of NCL

In the bipartite graph built from users' interaction histories, node relations fall into four kinds: 1. similar users; 2. similar items; 3. user-item interactions; 4. similar semantic relations (e.g., user intents). Most recommendation work revolves around user-item interactions, pays less attention to structural relations between homogeneous nodes, and considers semantic relations even less. NCL's novelty is to design contrastive learning tasks around these latent node relations in the recommender system, covering both structural and semantic relations.


Here we focus on how these two contrastive tasks are constructed; for the complete algorithmic details, please see the original paper~
1.2 Algorithm Details

A node-level contrastive task that pairs every node with each of its neighbors would be prohibitively slow for nodes with many neighbors. For efficiency, the paper instead learns a single representative embedding for each kind of neighbor, so that one node's contrastive learning can be carried out against just two representative embeddings (structural and semantic).


Contrastive task over structural relations: contrast each user (item) with its structural neighbors.
The output of the $k$-th GNN layer aggregates information from $k$-hop neighbors, and on the interaction bipartite graph information reaches homogeneous nodes after an even number of propagation steps, so an even-layer output aggregates information from a node's homogeneous neighbors.
Therefore, instead of constructing any additional graph, we directly treat the $k$-th layer GNN output as the representation of a node's $k$-hop neighbors, and take a node's own embedding together with the corresponding even-layer GNN output as a positive pair:
$$\mathcal{L}_{S}^{U}=\sum_{u \in \mathcal{U}}-\log \frac{\exp\left(\mathbf{z}_{u}^{(k)} \cdot \mathbf{z}_{u}^{(0)} / \tau\right)}{\sum_{v \in \mathcal{U}} \exp\left(\mathbf{z}_{u}^{(k)} \cdot \mathbf{z}_{v}^{(0)} / \tau\right)}, \qquad \mathcal{L}_{S}^{I}=\sum_{i \in \mathcal{I}}-\log \frac{\exp\left(\mathbf{z}_{i}^{(k)} \cdot \mathbf{z}_{i}^{(0)} / \tau\right)}{\sum_{j \in \mathcal{I}} \exp\left(\mathbf{z}_{i}^{(k)} \cdot \mathbf{z}_{j}^{(0)} / \tau\right)}$$

$$\mathcal{L}_{S}=\mathcal{L}_{S}^{U}+\alpha \mathcal{L}_{S}^{I}$$
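To make the structure-contrastive objective concrete, here is a minimal PyTorch sketch of $\mathcal{L}_S^U$. This is not the authors' implementation (see the NCL repo for that); the in-batch negatives and the embedding normalization are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def structure_contrastive_loss(z_k, z_0, tau=0.1):
    """InfoNCE between each node's layer-k GNN output (z_k) and its own
    initial embedding (z_0); other nodes in the batch act as negatives.
    z_k, z_0: [B, d] user (or item) embeddings."""
    z_k = F.normalize(z_k, dim=-1)  # cosine-style similarity (an assumption)
    z_0 = F.normalize(z_0, dim=-1)
    logits = z_k @ z_0.t() / tau    # [B, B] pairwise similarities
    labels = torch.arange(z_k.size(0), device=z_k.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)
```

Applying the same function to item embeddings gives $\mathcal{L}_S^I$, and the two are combined as $\mathcal{L}_S=\mathcal{L}_S^U+\alpha\,\mathcal{L}_S^I$.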
Contrastive task over semantic relations: contrast each user (item) with nodes that share similar semantics, meaning nodes that are unreachable on the graph but share similar item characteristics, user preferences, and so on.
How do we identify semantically similar nodes? The assumption is that similar nodes tend to fall close together in embedding space, and the goal is to find a center (prototype) that represents each group of semantic neighbors. A clustering algorithm is therefore applied to the node embeddings to obtain user or item prototypes.
Since this procedure cannot be optimized end-to-end, the proposed prototype contrastive task is learned with the EM algorithm. Formally, the GNN model maximizes the log-likelihood:
$$\sum_{u \in \mathcal{U}} \log p\left(\mathbf{e}_{u} \mid \Theta, \mathbf{R}\right)=\sum_{u \in \mathcal{U}} \log \sum_{\mathbf{c}_{i} \in C} p\left(\mathbf{e}_{u}, \mathbf{c}_{i} \mid \Theta, \mathbf{R}\right)$$
where $\mathbf{c}_i$ is the latent prototype of user $u$. An InfoNCE-style objective is adopted to minimize:
$$\mathcal{L}_{P}^{U}=\sum_{u \in \mathcal{U}}-\log \frac{\exp\left(\mathbf{e}_{u} \cdot \mathbf{c}_{i} / \tau\right)}{\sum_{\mathbf{c}_{j} \in C} \exp\left(\mathbf{e}_{u} \cdot \mathbf{c}_{j} / \tau\right)}, \qquad \mathcal{L}_{P}^{I}=\sum_{i \in \mathcal{I}}-\log \frac{\exp\left(\mathbf{e}_{i} \cdot \mathbf{c}_{j} / \tau\right)}{\sum_{\mathbf{c}_{t} \in C} \exp\left(\mathbf{e}_{i} \cdot \mathbf{c}_{t} / \tau\right)}$$

$$\mathcal{L}_{P}=\mathcal{L}_{P}^{U}+\alpha \mathcal{L}_{P}^{I}$$
Optimization
The overall loss function is:
$$\mathcal{L}=\mathcal{L}_{BPR}+\lambda_{1} \mathcal{L}_{S}+\lambda_{2} \mathcal{L}_{P}+\lambda_{3}\|\Theta\|_{2}$$
Here, since $\mathcal{L}_P$ cannot be optimized end-to-end, the EM algorithm is applied. Jensen's inequality gives a lower bound (LB) on the log-likelihood above:
$$LB=\sum_{u \in \mathcal{U}} \sum_{\mathbf{c}_{i} \in C} Q\left(\mathbf{c}_{i} \mid \mathbf{e}_{u}\right) \log \frac{p\left(\mathbf{e}_{u}, \mathbf{c}_{i} \mid \Theta, \mathbf{R}\right)}{Q\left(\mathbf{c}_{i} \mid \mathbf{e}_{u}\right)}$$

$Q(\mathbf{c}_i \mid \mathbf{e}_u)$ denotes the distribution of the latent variable $\mathbf{c}_i$ once $\mathbf{e}_u$ is observed.
E-step: $K$-means clustering is run on the node embeddings to obtain the cluster centers. If $\mathbf{e}_u$ belongs to cluster $\mathbf{c}_i$, then $\hat{Q}(\mathbf{c}_i \mid \mathbf{e}_u)=1$, otherwise 0.
M-step: with the cluster centers obtained, the objective is rewritten as:
$$\mathcal{L}_{P}^{U}=-\sum_{u \in \mathcal{U}} \sum_{\mathbf{c}_{i} \in C} \hat{Q}\left(\mathbf{c}_{i} \mid \mathbf{e}_{u}\right) \log p\left(\mathbf{e}_{u}, \mathbf{c}_{i} \mid \Theta, \mathbf{R}\right)$$
Assuming the users' distribution over all clusters is an isotropic Gaussian, the function can be written as:
$$\mathcal{L}_{P}^{U}=-\sum_{u \in \mathcal{U}} \log \frac{\exp\left(-\left(\mathbf{e}_{u}-\mathbf{c}_{i}\right)^{2} / 2 \sigma_{i}^{2}\right)}{\sum_{\mathbf{c}_{j} \in C} \exp\left(-\left(\mathbf{e}_{u}-\mathbf{c}_{j}\right)^{2} / 2 \sigma_{j}^{2}\right)}$$
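As a rough sketch of one EM round for the prototype task, assuming scikit-learn's KMeans as the clustering backend, hard assignments, and a shared variance (the official repo may organize this differently):

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def e_step(emb, num_prototypes):
    """E-step: cluster the current embeddings; Q-hat is a hard assignment."""
    km = KMeans(n_clusters=num_prototypes).fit(emb.detach().cpu().numpy())
    centers = torch.as_tensor(km.cluster_centers_, dtype=emb.dtype, device=emb.device)
    assign = torch.as_tensor(km.labels_, dtype=torch.long, device=emb.device)
    return centers, assign

def m_step_prototype_loss(emb, centers, assign, tau=0.1):
    """M-step: InfoNCE pulling each embedding toward its own prototype, with
    all other prototypes as negatives. With normalized embeddings and a shared
    sigma, the Gaussian form above reduces to this dot-product softmax."""
    e = F.normalize(emb, dim=-1)
    c = F.normalize(centers, dim=-1)
    logits = e @ c.t() / tau   # [num_nodes, num_prototypes]
    return F.cross_entropy(logits, assign)
```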
The complete algorithm is given as pseudo-code in the original paper.


1.3 Experimental Results

The experimental results arrive as one genuinely imposing table... but the numbers are solid: experiments on five datasets demonstrate NCL's effectiveness, and on the Yelp and Amazon Books datasets in particular it improves performance over the other models by 26% and 17%, respectively.


2. ICL

Paper title: Intent Contrastive Learning for Sequential Recommendation
Venue: WWW'22
Paper link: https://doi.org/10.1145/3485447.3512090
Code link: https://github.com/salesforce/ICLRec

2.1 The Core Idea of ICL

One amusing aside: this paper's model is abbreviated ICL, which looks an awful lot like NCL above...
The figure below gives the intuition. Figure 1 shows the shopping sequences of two users: although the sequences share no common item, both users end up buying the same product. The reason is simple: both are fishing enthusiasts, so their purchase intents are latently related.


This is exactly why the latent intent relations between different users' purchase sequences deserve attention. The paper's solution is to learn the users' intent distribution function from unlabeled behavior sequences and to optimize the sequential recommendation (SR) model with contrastive learning. Concretely, a latent variable is introduced to represent user intent, and its distribution function is learned via clustering.
2.2 Algorithm Details



One of this paper's virtues: the model figure is beautifully drawn, very much to my taste. As the figure shows, the model is optimized with the EM algorithm: clustering happens in the E-step, while the loss computation and parameter updates happen in the M-step.
Focus on absorbing the idea; for detailed step-by-step explanations, please refer to the original paper.
Suppose there are $K$ latent user intents $\{c_i\}_{i=1}^{K}$; the objective can then be rewritten as:
$$\theta^{*}=\underset{\theta}{\arg \max} \sum_{u=1}^{N} \sum_{t=1}^{T} \ln \mathbb{E}_{(c)}\left[P_{\theta}\left(s_{t}^{u}, c_{i}\right)\right]$$
Since this is hard to optimize directly, following the EM idea we construct a lower bound and maximize it:
$$\begin{aligned} \sum_{u=1}^{N} \sum_{t=1}^{T} \ln \mathbb{E}_{(c)}\left[P_{\theta}\left(s_{t}^{u}, c_{i}\right)\right] &=\sum_{u=1}^{N} \sum_{t=1}^{T} \ln \sum_{i=1}^{K} P_{\theta}\left(s_{t}^{u}, c_{i}\right) \\ &=\sum_{u=1}^{N} \sum_{t=1}^{T} \ln \sum_{i=1}^{K} Q\left(c_{i}\right) \frac{P_{\theta}\left(s_{t}^{u}, c_{i}\right)}{Q\left(c_{i}\right)} \end{aligned}$$
By Jensen's inequality, we obtain
$$\begin{aligned} &\geq \sum_{u=1}^{N} \sum_{t=1}^{T} \sum_{i=1}^{K} Q\left(c_{i}\right) \ln \frac{P_{\theta}\left(s_{t}^{u}, c_{i}\right)}{Q\left(c_{i}\right)} \\ &\propto \sum_{u=1}^{N} \sum_{t=1}^{T} \sum_{i=1}^{K} Q\left(c_{i}\right) \cdot \ln P_{\theta}\left(s_{t}^{u}, c_{i}\right) \end{aligned}$$
For simplicity, the lower bound is optimized only at the last position of each sequence, giving:
$$\sum_{u=1}^{N} \sum_{i=1}^{K} Q\left(c_{i}\right) \cdot \ln P_{\theta}\left(S^{u}, c_{i}\right)$$
where $Q(c_i)=P_\theta(c_i \mid S^u)$.
To **learn the intent distribution**, an encoder maps each sequence to a representation $\{h^u\}_{u=1}^{|U|}$, and $K$-means clustering over the learned representations yields $P_\theta(c_i \mid S^u)$:
$$Q\left(c_{i}\right)=P_{\theta}\left(c_{i} \mid S^{u}\right)=\begin{cases} 1 & \text{if } S^{u} \text{ in cluster } i \\ 0 & \text{else} \end{cases}$$
With the intent distribution in hand, the next step is to **obtain $P_\theta(S^u, c_i)$**. Assuming intents are uniformly distributed, and that given an intent $c$ the conditional distribution of $S^u$ follows an $L_2$-normalized Gaussian, $P_\theta(S^u, c_i)$ can be rewritten as:
$$\begin{aligned} P_{\theta}\left(S^{u}, c_{i}\right) &=P_{\theta}\left(c_{i}\right) P_{\theta}\left(S^{u} \mid c_{i}\right)=\frac{1}{K} \cdot P_{\theta}\left(S^{u} \mid c_{i}\right) \\ &\propto \frac{1}{K} \cdot \frac{\exp\left(-\left(\mathbf{h}^{u}-\mathbf{c}_{i}\right)^{2}\right)}{\sum_{j=1}^{K} \exp\left(-\left(\mathbf{h}^{u}-\mathbf{c}_{j}\right)^{2}\right)} \\ &\propto \frac{1}{K} \cdot \frac{\exp\left(\mathbf{h}^{u} \cdot \mathbf{c}_{i}\right)}{\sum_{j=1}^{K} \exp\left(\mathbf{h}^{u} \cdot \mathbf{c}_{j}\right)} \end{aligned}$$
Maximizing the lower bound is then equivalent to minimizing the following loss:
$$-\sum_{u=1}^{N} \log \frac{\exp\left(\operatorname{sim}\left(\mathbf{h}^{u}, \mathbf{c}_{i}\right)\right)}{\sum_{j=1}^{K} \exp\left(\operatorname{sim}\left(\mathbf{h}^{u}, \mathbf{c}_{j}\right)\right)}$$
Observe that this maximizes the mutual information between an individual sequence and its corresponding intent. For each sequence, positive samples for contrastive learning are constructed via augmentation, and the following loss is optimized:
$$\mathcal{L}_{\mathrm{ICL}}=\mathcal{L}_{\mathrm{ICL}}\left(\tilde{\mathbf{h}}_{1}^{u}, \mathbf{c}_{u}\right)+\mathcal{L}_{\mathrm{ICL}}\left(\tilde{\mathbf{h}}_{2}^{u}, \mathbf{c}_{u}\right)$$

$$\mathcal{L}_{\mathrm{ICL}}\left(\tilde{\mathbf{h}}_{1}^{u}, \mathbf{c}_{u}\right)=-\log \frac{\exp\left(\operatorname{sim}\left(\tilde{\mathbf{h}}_{1}^{u}, \mathbf{c}_{u}\right)\right)}{\sum_{neg} \exp\left(\operatorname{sim}\left(\tilde{\mathbf{h}}_{1}^{u}, \mathbf{c}_{neg}\right)\right)}$$
where $c_{neg}$ ranges over all intents in a batch. Since users in the same batch may share an intent, to mitigate the effect of false negatives the loss is revised to:
$$\mathcal{L}_{\mathrm{ICL}}\left(\tilde{\mathbf{h}}_{1}^{u}, \mathbf{c}_{u}\right)=-\log \frac{\exp\left(\operatorname{sim}\left(\tilde{\mathbf{h}}_{1}^{u}, \mathbf{c}_{u}\right)\right)}{\sum_{v=1}^{N} \mathbb{1}_{v \notin \mathcal{F}} \exp\left(\operatorname{sim}\left(\tilde{\mathbf{h}}_{1}^{u}, \mathbf{c}_{v}\right)\right)}$$
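A minimal PyTorch sketch of this false-negative-aware intent contrast (not the authors' implementation; the tensor shapes and hard cluster assignments are assumptions for illustration):

```python
import torch
import torch.nn.functional as F

def intent_contrastive_loss(h_aug, intents, assign, tau=1.0):
    """h_aug:   [B, d] encoded augmented sequences
    intents: [B, d] prototype c_u assigned to each sequence
    assign:  [B]    cluster id per sequence (equal ids => same intent)"""
    h = F.normalize(h_aug, dim=-1)
    c = F.normalize(intents, dim=-1)
    logits = h @ c.t() / tau   # [B, B]
    # Columns sharing the row's intent are false negatives: mask them out,
    # but keep the row's own positive on the diagonal.
    same = assign.unsqueeze(0) == assign.unsqueeze(1)
    mask = same & ~torch.eye(len(assign), dtype=torch.bool, device=h.device)
    logits = logits.masked_fill(mask, float('-inf'))
    labels = torch.arange(len(assign), device=h.device)
    return F.cross_entropy(logits, labels)
```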
Finally, the model's overall loss function is:
$$\mathcal{L}=\mathcal{L}_{\text{NextItem}}+\lambda \cdot \mathcal{L}_{\mathrm{ICL}}+\beta \cdot \mathcal{L}_{\mathrm{SeqCL}}$$
The complete algorithm is given as pseudo-code in the original paper.


2.3 Experimental Results

ICLRec consistently outperforms existing methods on all datasets; relative to the best baseline, the average improvements in HR and NDCG range from 7.47% to 33.33%.


3. RGCL

Paper title: A Review-aware Graph Contrastive Learning Framework for Recommendation
Venue: SIGIR'22
Paper link: https://doi.org/10.1145/3477495.3531927

3.1 The Core Idea of RGCL

The paper's novelty is introducing user ratings and reviews as auxiliary information. To integrate both into the graph structure, RGCL treats each interaction's review as edge information of the graph, and on this basis designs two contrastive learning tasks, one based on node augmentation and one on edge augmentation.


3.2 Algorithm Details

The model figure is pleasantly easy to follow; let's walk through the modules one by one, figure in hand~
Graph construction with featured edges: as the graph-construction part of Figure 2 shows, the user-item rating matrix $R$ and the reviews $E$ together form the user-item interactions $\mathcal{E}=\{R, E\}$, and the review-based recommendation data is represented as a bipartite graph with featured edges $\mathcal{G}=\langle\mathcal{U} \cup \mathcal{V}, \mathcal{E}\rangle$.
Review-aware graph learning: ratings distinguish the edge types, and review representations are produced by BERT-Whitening. Here is how node representations are learned on a graph with featured edges.

[*]The review-aware message passing is:
$$\boldsymbol{x}_{r; j \rightarrow i}^{(l)}=\frac{\sigma\left(\boldsymbol{w}_{r, 1}^{(l)\top} \boldsymbol{e}_{ij}\right) \boldsymbol{W}_{r, 1}^{(l)} \boldsymbol{e}_{ij}+\sigma\left(\boldsymbol{w}_{r, 2}^{(l)\top} \boldsymbol{e}_{ij}\right) \boldsymbol{W}_{r, 2}^{(l)} \boldsymbol{v}_{j}^{(l-1)}}{\sqrt{\left|\mathcal{N}_{j}\right|\left|\mathcal{N}_{i}\right|}}$$

$$\boldsymbol{x}_{r; i \rightarrow j}^{(l)}=\frac{\sigma\left(\boldsymbol{w}_{r, 1}^{(l)\top} \boldsymbol{e}_{ij}\right) \boldsymbol{W}_{r, 1}^{(l)} \boldsymbol{e}_{ij}+\sigma\left(\boldsymbol{w}_{r, 2}^{(l)\top} \boldsymbol{e}_{ij}\right) \boldsymbol{W}_{r, 2}^{(l)} \boldsymbol{u}_{i}^{(l-1)}}{\sqrt{\left|\mathcal{N}_{i}\right|\left|\mathcal{N}_{j}\right|}}$$

[*]Message aggregation:
$$\boldsymbol{u}_{i}^{(l)}=\boldsymbol{W}^{(l)} \sum_{r \in \mathcal{R}} \sum_{k \in \mathcal{N}_{i, r}} \boldsymbol{x}_{r; k \rightarrow i}^{(l)}, \quad \boldsymbol{v}_{j}^{(l)}=\boldsymbol{W}^{(l)} \sum_{r \in \mathcal{R}} \sum_{k \in \mathcal{N}_{j, r}} \boldsymbol{x}_{r; k \rightarrow j}^{(l)}$$

[*]Final user and item representations:
$$\hat{\boldsymbol{u}}_{i}=\boldsymbol{u}_{i}^{(L)}, \quad \hat{\boldsymbol{v}}_{j}=\boldsymbol{v}_{j}^{(L)}$$
Interaction modeling: instead of the inner-product predictor common in recommendation, the paper uses an MLP to learn user-item interaction features and predicts the rating from them (this predicted rating is reused in the contrastive learning part):
$$\boldsymbol{h}_{ij}=\operatorname{MLP}\left(\left[\hat{\boldsymbol{u}}_{i}, \hat{\boldsymbol{v}}_{j}\right]\right), \qquad \hat{r}_{ij}=\boldsymbol{w}^{\top} \boldsymbol{h}_{ij}$$
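A small sketch of this interaction module (the hidden sizes and depth are assumptions; the paper specifies only an MLP followed by a scoring vector $\boldsymbol{w}$):

```python
import torch
import torch.nn as nn

class InteractionModel(nn.Module):
    """h_ij = MLP([u_hat_i, v_hat_j]); r_hat_ij = w^T h_ij."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.w = nn.Linear(hidden, 1, bias=False)  # the scoring vector w

    def forward(self, u_hat, v_hat):
        h_ij = self.mlp(torch.cat([u_hat, v_hat], dim=-1))
        return h_ij, self.w(h_ij).squeeze(-1)  # interaction feature, rating
```

The interaction feature `h_ij` is exactly what the edge-discrimination contrastive task below anchors on.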
Two contrastive learning tasks: augmented node-embedding learning and augmented interaction modeling.
Node-level augmentation uses node drop: item nodes and their associated review features are randomly dropped with a specified probability:
$$\mathcal{L}_{3}^{\text{user}}=-\mathbb{E}_{\mathcal{U}}\left[\log\left(F\left(\hat{\boldsymbol{u}}_{i}^{1}, \hat{\boldsymbol{u}}_{i}^{2}\right)\right)\right]+\mathbb{E}_{\mathcal{U} \times \mathcal{U}^{\prime}}\left[\log\left(F\left(\hat{\boldsymbol{u}}_{i}^{1}, \hat{\boldsymbol{u}}_{i^{\prime}}^{2}\right)\right)\right]$$
with the loss $\mathcal{L}_{3}=\mathcal{L}_{3}^{\text{user}}+\mathcal{L}_{3}^{\text{item}}$.
For augmented interaction modeling, the interaction feature $\boldsymbol{h}_{ij}$ serves as the anchor example. The corresponding interaction review $\boldsymbol{e}_{ij}$ is taken as the positive sample, while a review drawn at random from the whole training set is the negative; the goal of edge discrimination (ED) is to pull $\boldsymbol{h}_{ij}$ close to $\boldsymbol{e}_{ij}$ and push it away from $\boldsymbol{e}_{i'j'}$. The objective is:
$$\mathcal{L}_{2}=-\mathbb{E}_{\mathcal{E}}\left[\log\left(F\left(\boldsymbol{h}_{ij}, \boldsymbol{e}_{ij}\right)\right)\right]+\mathbb{E}_{\mathcal{E} \times \mathcal{E}^{\prime}}\left[\log\left(F\left(\boldsymbol{h}_{ij}, \boldsymbol{e}_{i^{\prime} j^{\prime}}\right)\right)\right]$$
Optimization: since RGCL focuses on predicting users' ratings of items, mean squared error (MSE) is used as the optimization target:
$$\mathcal{L}_{1}=\frac{1}{|\mathcal{S}|} \sum_{(i, j) \in \mathcal{S}}\left(\hat{r}_{ij}-r_{ij}\right)^{2}, \qquad \mathcal{L}=\mathcal{L}_{1}+\alpha \mathcal{L}_{2}+\beta \mathcal{L}_{3}$$
3.3 Experimental Results



4. MCCLK

Paper title: Multi-level Cross-view Contrastive Learning for Knowledge-aware Recommender System
Venue: SIGIR'22
Paper link: https://arxiv.org/abs/2204.08807
Code link: https://github.com/CCIIPLab/MCCLK

4.1 The Core Idea of MCCLK

Conventional contrastive methods mostly generate two views through a uniform data-augmentation scheme. This paper takes a more inventive route: it applies contrastive learning across different views of the knowledge graph, proposing a multi-level cross-view contrastive mechanism.
Tailored to knowledge-aware recommendation (KGR), the paper considers three graph views: a global structural view, a local collaborative view, and a semantic view; see the figure below for intuition~


Notably, for the semantic view the paper proposes an item-item semantic graph construction module to capture the important item-item semantic relations that earlier work often overlooked.
4.2 Algorithm Details



As the model figure shows, MCCLK consists of three main parts: multi-view generation, local-level contrastive learning, and global-level contrastive learning.
Multi-view generation: as mentioned, three views are constructed; here is what each actually is. The global structural view is the original user-item-entity graph, while the collaborative view and the semantic view are the user-item graph and the item-entity graph derived from it. The first two are standard, so the emphasis falls on building the semantic view.
To capture item-item semantic relations, a $K'$-order-neighbor item-item semantic graph $S$ is built with a relation-aware aggregation mechanism that preserves both neighboring entities and relations. Here $S_{ij}$ denotes the semantic similarity between items $i$ and $j$, and $S_{ij}=0$ means the two items are unrelated.
Item representations are learned recursively over $K'$ steps from the knowledge graph $\mathcal{G}$; the proposed relation-aware aggregation is:
$$\mathbf{e}_{i}^{(k+1)}=\frac{1}{\left|\mathcal{N}_{i}\right|} \sum_{(r, v) \in \mathcal{N}_{i}} \mathbf{e}_{r} \odot \mathbf{e}_{v}^{(k)}$$

$$\mathbf{e}_{v}^{(k+1)}=\frac{1}{\left|\mathcal{N}_{v}\right|}\left(\sum_{(r, v) \in \mathcal{N}_{v}} \mathbf{e}_{r} \odot \mathbf{e}_{v}^{(k)}+\sum_{(r, i) \in \mathcal{N}_{v}} \mathbf{e}_{r} \odot \mathbf{e}_{i}^{(k)}\right)$$
In this way neighboring entities and relations in the KG are encoded into the item representations, and item-item similarity is then computed via cosine similarity:
$$S_{ij}=\frac{\left(\mathbf{e}_{i}^{\left(K^{\prime}\right)}\right)^{\top} \mathbf{e}_{j}^{\left(K^{\prime}\right)}}{\left\|\mathbf{e}_{i}^{\left(K^{\prime}\right)}\right\|\left\|\mathbf{e}_{j}^{\left(K^{\prime}\right)}\right\|}$$
KNN sparsification is applied to the fully connected item-item graph to cut computation cost, possible noise, and unimportant edges:
$$\widehat{S}_{ij}=\begin{cases} S_{ij}, & S_{ij} \in \text{top-}k\left(S_{i}\right) \\ 0, & \text{otherwise} \end{cases}$$
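In code this sparsification is a single top-k per row; a minimal sketch, assuming the dense similarity matrix $S$ fits in memory:

```python
import torch

def knn_sparsify(S, k=10):
    """Keep each row's top-k similarity scores and zero out everything else,
    turning the dense item-item similarity matrix into a sparse graph."""
    vals, idx = torch.topk(S, k, dim=-1)  # top-k entries per row
    S_hat = torch.zeros_like(S)
    S_hat.scatter_(-1, idx, vals)         # re-insert only the kept entries
    return S_hat
```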
Local-level contrastive learning: as the model figure shows, the item embeddings from the collaborative view and the semantic view enable local-level cross-view contrast. Before that, let's look at the encoders of the two views.

[*]The collaborative view (i.e., item-user-item) is encoded with LightGCN, which performs the aggregation recursively:
$$\begin{aligned} \mathbf{e}_{u}^{(k+1)} &=\sum_{i \in \mathcal{N}_{u}} \frac{1}{\sqrt{\left|\mathcal{N}_{u}\right|\left|\mathcal{N}_{i}\right|}} \mathbf{e}_{i}^{(k)} \\ \mathbf{e}_{i}^{(k+1)} &=\sum_{u \in \mathcal{N}_{i}} \frac{1}{\sqrt{\left|\mathcal{N}_{u}\right|\left|\mathcal{N}_{i}\right|}} \mathbf{e}_{u}^{(k)} \end{aligned}$$
Summing the representations from different layers gives the local collaborative representations (a propagation sketch in code follows after the local-level loss below):
$$\mathbf{z}_{u}^{c}=\mathbf{e}_{u}^{(0)}+\cdots+\mathbf{e}_{u}^{(K)}, \quad \mathbf{z}_{i}^{c}=\mathbf{e}_{i}^{(0)}+\cdots+\mathbf{e}_{i}^{(K)}$$

[*]The semantic view focuses on semantic similarity between items and likewise uses LightGCN-style aggregation:
$$\mathbf{e}_{i}^{(l+1)}=\sum_{j \in \mathcal{N}(i)} \widetilde{S}_{ij}\, \mathbf{e}_{j}^{(l)}$$
Summing across layers gives the local semantic representation:
$$\mathbf{z}_{i}^{s}=\mathbf{e}_{i}^{(0)}+\cdots+\mathbf{e}_{i}^{(L)}$$

[*]Local-level cross-view contrastive optimization
First, the embeddings from the two views are fed into an MLP with one hidden layer:
$$\begin{aligned} \mathbf{z}_{i,p}^{c} &=W^{(2)} \sigma\left(W^{(1)} \mathbf{z}_{i}^{c}+b^{(1)}\right)+b^{(2)} \\ \mathbf{z}_{i,p}^{s} &=W^{(2)} \sigma\left(W^{(1)} \mathbf{z}_{i}^{s}+b^{(1)}\right)+b^{(2)} \end{aligned}$$
The contrastive loss follows the InfoNCE form (the formula appears as an image in the original post and is not reproduced here; see the paper). Note that the negatives come from two sources, intra-view nodes and inter-view nodes, corresponding to the second and third terms in the loss denominator.
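As referenced above, here is a minimal sketch of the LightGCN-style propagation and layer-sum shared by the two view encoders (the precomputed normalized adjacency, its sparse format, and the layer count are assumptions):

```python
import torch

def lightgcn_propagate(adj_norm, user_emb, item_emb, num_layers=3):
    """adj_norm: [N, N] symmetrically normalized sparse user-item adjacency,
    with N = num_users + num_items. No feature transform, no nonlinearity;
    the output is the layer sum z = e^(0) + ... + e^(K) as in the paper
    (the original LightGCN averages layers instead)."""
    emb = torch.cat([user_emb, item_emb], dim=0)
    out = emb.clone()
    for _ in range(num_layers):
        emb = torch.sparse.mm(adj_norm, emb)  # one hop of neighbor averaging
        out = out + emb
    return out.split([user_emb.size(0), item_emb.size(0)])
```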
Global-level contrastive learning: a path-aware GNN is designed here (it performs $L'$ aggregation steps while preserving path information, i.e., long-range connections such as user-interact-item-relation-entity), automatically encoding path information into the node embeddings; global-level contrast is then carried out between the global-view and local-view embeddings.
The structural view's aggregation is:
$$\begin{aligned} \mathbf{e}_{u}^{(l+1)} &=\frac{1}{\left|\mathcal{N}_{u}\right|} \sum_{i \in \mathcal{N}_{u}} \mathbf{e}_{i}^{(l)} \\ \mathbf{e}_{i}^{(l+1)} &=\frac{1}{\left|\mathcal{N}_{i}\right|} \sum_{(r, v) \in \mathcal{N}_{i}} \beta(i, r, v)\, \mathbf{e}_{r} \odot \mathbf{e}_{v}^{(l)} \end{aligned}$$
where the attention weight $\beta(i, r, v)$ is computed as:
$$\begin{aligned} \beta(i, r, v) &=\operatorname{softmax}\left(\left(\mathbf{e}_{i} \,\|\, \mathbf{e}_{r}\right)^{T} \cdot\left(\mathbf{e}_{v} \,\|\, \mathbf{e}_{r}\right)\right) \\ &=\frac{\exp\left(\left(\mathbf{e}_{i} \,\|\, \mathbf{e}_{r}\right)^{T} \cdot\left(\mathbf{e}_{v} \,\|\, \mathbf{e}_{r}\right)\right)}{\sum_{\left(v^{\prime}, r\right) \in \hat{\mathcal{N}}(i)} \exp\left(\left(\mathbf{e}_{i} \,\|\, \mathbf{e}_{r}\right)^{T} \cdot\left(\mathbf{e}_{v^{\prime}} \,\|\, \mathbf{e}_{r}\right)\right)} \end{aligned}$$
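A sketch of this relation-aware attention for a single item and its KG triplets (the single-node form is for clarity only; a real implementation would batch over the graph):

```python
import torch
import torch.nn.functional as F

def relation_attention(e_i, e_r, e_v):
    """e_i: [d] item embedding; e_r, e_v: [n, d] relation and tail-entity
    embeddings of the item's n neighboring triplets.
    Returns beta over the n neighbors."""
    query = torch.cat([e_i.expand_as(e_r), e_r], dim=-1)  # (e_i || e_r)
    key = torch.cat([e_v, e_r], dim=-1)                   # (e_v || e_r)
    scores = (query * key).sum(-1)                        # dot product per triplet
    return F.softmax(scores, dim=0)                       # [n]
```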
Summing the representations of all layers gives the global representations:
$$\mathbf{z}_{u}^{g}=\mathbf{e}_{u}^{(0)}+\cdots+\mathbf{e}_{u}^{\left(L^{\prime}\right)}, \quad \mathbf{z}_{i}^{g}=\mathbf{e}_{i}^{(0)}+\cdots+\mathbf{e}_{i}^{\left(L^{\prime}\right)}$$
Global-level cross-view contrastive optimization: having obtained the node representations under the global-level and local-level views, we first project them:
$$\begin{aligned} \mathbf{z}_{i,p}^{g} &=W^{(2)} \sigma\left(W^{(1)} \mathbf{z}_{i}^{g}+b^{(1)}\right)+b^{(2)} \\ \mathbf{z}_{i,p}^{l} &=W^{(2)} \sigma\left(W^{(1)}\left(\mathbf{z}_{i}^{c}+\mathbf{z}_{i}^{s}\right)+b^{(1)}\right)+b^{(2)} \end{aligned}$$
With the same positive/negative sampling strategy as the local-level contrast, the corresponding contrastive loss is formed (again InfoNCE-style; the formula appears as an image in the original post and is not reproduced here).
The overall objective is as follows:
$$\mathcal{L}^{\text{global}}=\frac{1}{2N} \sum_{i=1}^{N}\left(\mathcal{L}_{i}^{g}+\mathcal{L}_{i}^{l}\right)+\frac{1}{2M} \sum_{u=1}^{M}\left(\mathcal{L}_{u}^{g}+\mathcal{L}_{u}^{l}\right)$$
Multi-task training
$$\mathcal{L}_{MCCLK}=\mathcal{L}_{\mathrm{BPR}}+\beta\left(\alpha \mathcal{L}^{\text{local}}+(1-\alpha) \mathcal{L}^{\text{global}}\right)+\lambda\|\Theta\|_{2}^{2}$$
4.3 Experimental Results

The experiments show MCCLK outperforming the baselines on all three datasets under every metric, with AUC gains of 3.11%, 1.61%, and 2.77% on the book, movie, and music datasets, respectively.


5. MIDGN

Paper title: Multi-view Intent Disentangle Graph Networks for Bundle Recommendation
Venue: AAAI'22
Paper link: https://arxiv.org/pdf/2202.11425.pdf
Code link: https://github.com/CCIIPLab/MIDGN

5.1 The Core Idea of MIDGN

The model applies DGCF's disentangling idea to bundle recommendation, disentangling user intents from two views, global (intents across bundles) and local (intents within a bundle), and uses InfoNCE to strengthen the learning. The figure below illustrates the two views~


5.2 Algorithm Details

As the figure shows, MIDGN consists of four modules: the graph disentangle module, the view cross-propagation module, the intent contrast module, and the prediction module.
Graph disentangle module
Initialization. The model assumes $K$ intents, each paired with an intent-aware graph in $\mathcal{G}=\{\mathcal{G}_{1}, \mathcal{G}_{2}, \cdots, \mathcal{G}_{K}\}$. Since users and bundles should have different embeddings under different intents, their embeddings are split into $K$ chunks, $\mathbf{u}=(\mathbf{u}_{1}, \mathbf{u}_{2}, \cdots, \mathbf{u}_{K})$ and $\mathbf{b}=(\mathbf{b}_{1}, \mathbf{b}_{2}, \cdots, \mathbf{b}_{K})$, with the $k$-th chunk coupled to the $k$-th intent. Because each item is purchased to serve a single intent, item embeddings are not chunked; they are obtained by random initialization.
A weighted adjacency matrix is built for each intent-aware graph, where $A_{k}(c, i)$ denotes the confidence that the interaction between bundle $c$ and item $i$ is based on the $k$-th intent:
$$\mathbf{A}(c, i)=\left(\mathbf{A}_{1}(c, i), \mathbf{A}_{2}(c, i), \cdots, \mathbf{A}_{K}(c, i)\right)$$
initialized as $\mathbf{A}(c, i)=(1,1, \cdots, 1)$.
Intent-aware interaction-graph disentangling. User and bundle embeddings are computed on each intent-aware graph:
$$\mathbf{e}_{ck}^{(1)}=g\left(\mathbf{c}_{k},\left\{\mathbf{i}, \mathbf{i} \in \mathcal{N}_{c}\right\}\right)$$
The graph disentangle module uses a neighbor-routing mechanism, iteratively updating the user/bundle embedding chunks and the adjacency matrix on each graph $\mathcal{G}_k$. Each iteration $t$ uses $\mathbf{c}_k^t$ and $\mathbf{A}_k^t$ to record the updates of $\mathbf{c}_k$ and $\mathbf{A}_k$. For each interaction $(c, i)$, its confidence under the $K$ intents is recorded; to obtain a distribution, a softmax is applied over the confidences:
$$\tilde{\mathbf{A}}_{k}^{t}(c, i)=\frac{\exp \mathbf{A}_{k}^{t}(c, i)}{\sum_{k^{\prime}=1}^{K} \exp \mathbf{A}_{k^{\prime}}^{t}(c, i)}$$
Embedding propagation is then performed on each intent-aware graph:
$$\mathbf{c}_{k}^{t}=\sum_{i \in \mathcal{N}_{c}} \frac{\tilde{\mathbf{A}}_{k}^{t}(c, i)}{\sqrt{D_{t}^{k}(c) \cdot D_{t}^{k}(i)}} \cdot \mathbf{i}$$
and the confidence of each interaction under each intent is updated by:
$$\mathbf{A}_{k}^{t+1}(c, i)=\mathbf{A}_{k}^{t}(c, i)+\mathbf{c}_{k}^{t\,\mathrm{T}} \cdot \mathbf{i}$$
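A toy sketch of this neighbor-routing loop for one bundle node (the shapes are assumptions, and the degree normalization $\sqrt{D_t^k(c) \cdot D_t^k(i)}$ is dropped for brevity; the real model batches this over the whole graph):

```python
import torch

def neighbor_routing(item_emb, A, iters=2):
    """item_emb: [n, d] embeddings of the node's n interacted items
    A:        [K, n] confidence of each interaction under each of K intents"""
    for _ in range(iters):
        A_tilde = torch.softmax(A, dim=0)  # distribution over the K intents
        chunks = A_tilde @ item_emb        # [K, d] intent-aware chunks c_k
        A = A + chunks @ item_emb.t()      # items agreeing with a chunk gain confidence
    return chunks, A
```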
Multi-layer combination of embeddings. The model aggregates higher-order information:
$$\mathbf{e}_{ck}^{l}=g\left(\mathbf{e}_{ck}^{l-1},\left\{\mathbf{i}, \mathbf{i} \in \mathcal{N}_{c}\right\}\right)$$
Summing the intent-aware representations from different layers gives the final representation:
$$\mathbf{e}_{ck}=\sum_{l} \mathbf{e}_{ck}^{l}$$
The graph disentangle module learns the user intents spread across different bundles (the global view) from the user-item interaction graph, and the user's multiple intents within each bundle (the local view) from the bundle-item graph. Combining the intents from the global and local views gives the user and bundle representations:
$$\mathbf{e}_{u}=\left(\mathbf{e}_{u1}, \mathbf{e}_{u2}, \cdots, \mathbf{e}_{uK}\right), \quad \mathbf{e}_{b}=\left(\mathbf{e}_{b1}, \mathbf{e}_{b2}, \cdots, \mathbf{e}_{bK}\right)$$
View cross-propagation module: to exchange intents between user and bundle chunks across the different views, the model applies LightGCN on the user-bundle interaction graph:
$$\begin{aligned} \mathbf{v}_{u} &=\sum_{b \in \mathcal{N}_{u}} \frac{1}{\sqrt{\left|\mathcal{N}_{u}\right|} \sqrt{\left|\mathcal{N}_{b}\right|}} \mathbf{e}_{b} \\ \mathbf{v}_{b} &=\sum_{u \in \mathcal{N}_{b}} \frac{1}{\sqrt{\left|\mathcal{N}_{b}\right|} \sqrt{\left|\mathcal{N}_{u}\right|}} \mathbf{e}_{u} \end{aligned}$$
Intent contrast module: user and bundle embeddings are contrasted across the views to capture intents:
$$L_{\text{contrast}}=-\log \frac{\exp\left(\mathbf{e}_{ck} \cdot \mathbf{v}_{ck_{+}}\right)}{\sum_{k^{\prime}} \exp\left(\mathbf{e}_{ck} \cdot \mathbf{v}_{ck^{\prime}}\right)}$$
where the positive sample is the chunk carrying the same intent in the other view, and the negatives are all other chunks.
Prediction and optimization
$$\hat{\mathbf{y}}_{ub}=\left(\mathbf{e}_{u} \oplus \mathbf{v}_{u}\right) \odot\left(\mathbf{e}_{b} \oplus \mathbf{v}_{b}\right), \qquad L_{\text{pred}}=\sum_{(u, b, d) \in Q}-\ln \sigma\left(\hat{\mathbf{y}}_{ub}-\hat{\mathbf{y}}_{ud}\right)+\lambda \cdot\|\theta\|^{2}$$
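A compact sketch of the prediction and BPR optimization (reading $\oplus$ as concatenation is my assumption, and the $L_2$ term is simplified to the batch embeddings rather than all model parameters $\theta$):

```python
import torch

def bpr_loss(e_u, v_u, e_b, v_b, e_d, v_d, reg=1e-4):
    """Score a (user, positive bundle b, negative bundle d) triple and apply
    the pairwise BPR objective. All inputs: [B, d]."""
    user = torch.cat([e_u, v_u], dim=-1)
    pos = (user * torch.cat([e_b, v_b], dim=-1)).sum(-1)
    neg = (user * torch.cat([e_d, v_d], dim=-1)).sum(-1)
    loss = -torch.log(torch.sigmoid(pos - neg)).mean()
    l2 = sum(p.pow(2).sum() for p in (e_u, v_u, e_b, v_b, e_d, v_d))
    return loss + reg * l2
```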
5.3 Experimental Results

Experiments on two datasets show strong results: MIDGN improves performance by 26.8%-38.2% on the NetEase dataset and by 10.7%-14.2% on Youshu.


Summary

The five papers span collaborative filtering, sequential recommendation, knowledge-graph-based recommendation, and bundle recommendation, and the techniques involved include the EM algorithm and disentangled representations, among others. NCL (the first paper) and ICL (the second) are close in spirit: both use clustering to find semantically (or intent-wise) similar "neighbors" for nodes or sequences. The third paper introduces auxiliary information (ratings and reviews) as edge features to improve the GNN's graph modeling, and ties its contrastive tasks to that auxiliary information. The fourth builds contrastive tasks across different views in knowledge-aware recommendation. The fifth uses disentanglement to generate multiple subgraphs with a GNN and takes the disentangled subgraphs as contrastive views; the idea is simple, but it works well.
This is the forty-second post of 「对白的算法屋」. The article covers a lot of ground and my own understanding has limits, so please don't hesitate to point out mistakes; many thanks in advance!
Finally, you're welcome to follow my WeChat official account 对白的算法屋 (duibainotes), which tracks frontier machine-learning topics such as NLP, recommender systems, and contrastive learning, along with occasional notes on careers and life. If you want to chat further, you can also add me on WeChat through the account and we can discuss technical questions together. Thanks!
Recommended Reading

Everything Is Contrastive Learning: the latest contrastive-learning research progress at ICLR and NIPS
Contrastive learning methods you have to know for recommender systems
A review of ICLR 2021 contrastive learning (Contrastive Learning) papers in NLP
Research progress of contrastive learning (Contrastive Learning) in CV and NLP