Improving Graph Neural Networks with Learnable Propagation Operators

Summary

Title: Improving Graph Neural Networks with Learnable Propagation Operators
Summary:
– Graph Neural Networks (GNNs) are limited by their propagation operators
– These operators often contain only non-negative elements and are shared across channels, which limits the expressiveness of GNNs
– Some GNNs also suffer from over-smoothing, which limits their depth
– In contrast, Convolutional Neural Networks (CNNs) can learn diverse propagation filters, and over-smoothing is typically not observed in CNNs
– This paper bridges these gaps by incorporating trainable channel-wise weighting factors ω that learn and mix multiple smoothing and sharpening propagation operators at each layer; the resulting generic method, ωGNN, is easy to implement
– Two variants are studied: ωGCN and ωGAT
– For ωGCN, the paper theoretically analyses its behaviour and the impact of ω on the obtained node features; the experiments confirm these findings, demonstrating and explaining why neither variant over-smooths
– In addition, experiments on 15 real-world node- and graph-classification tasks show that ωGCN and ωGAT perform on par with state-of-the-art methods.

Summary (original)

Graph Neural Networks (GNNs) are limited in their propagation operators. In many cases, these operators often contain non-negative elements only and are shared across channels, limiting the expressiveness of GNNs. Moreover, some GNNs suffer from over-smoothing, limiting their depth. On the other hand, Convolutional Neural Networks (CNNs) can learn diverse propagation filters, and phenomena like over-smoothing are typically not apparent in CNNs. In this paper, we bridge these gaps by incorporating trainable channel-wise weighting factors $\omega$ to learn and mix multiple smoothing and sharpening propagation operators at each layer. Our generic method is called $\omega$GNN, and is easy to implement. We study two variants: $\omega$GCN and $\omega$GAT. For $\omega$GCN, we theoretically analyse its behaviour and the impact of $\omega$ on the obtained node features. Our experiments confirm these findings, demonstrating and explaining how both variants do not over-smooth. Additionally, we experiment with 15 real-world datasets on node- and graph-classification tasks, where our $\omega$GCN and $\omega$GAT perform on par with state-of-the-art methods.
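As a concrete illustration of the channel-wise mixing idea described above, below is a minimal sketch (not the authors' reference implementation) of a single ωGCN-style layer in PyTorch. It assumes the smoothing operator is the symmetrically normalised adjacency Â = D̂^{-1/2}(A + I)D̂^{-1/2} and models the per-channel mixing as h + ω·(Âh − h), so that ω ≈ 1 smooths toward the neighbourhood average, ω ≈ 0 leaves features unchanged, and ω < 0 sharpens; the names OmegaGCNLayer and sym_norm_adj are hypothetical and chosen only for this sketch.

```python
# Minimal sketch of an omega-GCN-style layer (assumptions noted above;
# not the authors' reference code).
import torch
import torch.nn as nn


def sym_norm_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalised adjacency with self-loops (dense, for clarity)."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)


class OmegaGCNLayer(nn.Module):
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.lin = nn.Linear(in_channels, out_channels, bias=False)
        # One learnable mixing weight per output channel:
        # omega ~ 1 -> smoothing, omega ~ 0 -> identity, omega < 0 -> sharpening.
        self.omega = nn.Parameter(torch.ones(out_channels))

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        h = self.lin(x)            # channel mixing
        smooth = a_hat @ h         # smoothing propagation over the graph
        # Per-channel interpolation/extrapolation between h and A_hat h.
        return torch.relu(h + self.omega * (smooth - h))


if __name__ == "__main__":
    adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
    a_hat = sym_norm_adj(adj)
    x = torch.randn(3, 8)
    layer = OmegaGCNLayer(8, 16)
    print(layer(x, a_hat).shape)   # torch.Size([3, 16])
```

Because ω is learned separately for each channel, different channels within the same layer can be smoothed or sharpened to different degrees, which is the mechanism the abstract credits for mixing propagation operators and avoiding over-smoothing.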

arXiv information

Authors: Moshe Eliasof, Lars Ruthotto, Eran Treister
Publication date: 2023-05-05 07:00:41+00:00
arXiv page: arxiv_id(pdf)

Source, services used

arxiv.jp, OpenAI

Category: cs.LG