Document Type

Article

Publication Date

2-15-2025

Publisher

Elsevier

Source Publication

Knowledge-Based Systems

Source ISSN

1872-7409

Original Item ID

DOI: 10.1016/j.knosys.2024.112895

Abstract

Despite the success of graph neural networks (GNNs) in various domains, they exhibit susceptibility to adversarial attacks. Understanding these vulnerabilities is crucial for developing robust and secure applications. In this paper, we investigate the impact of evasion adversarial attacks through edge perturbations, involving both edge insertions and deletions. A novel explainability-based method is proposed to identify important nodes in the graph and perform edge perturbation between these nodes. Node classification with GNNs underpins network-analysis tasks in numerous domains. Given this broad applicability, understanding potential adversarial attack strategies can provide insight into defending against them. Explainability offers comprehensive reasoning behind the predictions made by GNNs and facilitates transparency about the inner operation of the model. We show that the additional information and insights gained through GNN-based explainability methods can be utilized to strengthen the adversarial attack. The proposed method is tested for node classification with three different architectures and datasets. The results suggest that introducing edges between nodes of different classes has a higher impact than removing edges among nodes within the same class.
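The attack strategy described in the abstract can be illustrated with a minimal sketch. The function below is a hypothetical simplification, not the paper's implementation: it assumes per-node importance scores (e.g., from an explainability method) are already available, then inserts edges between important nodes of different classes and deletes edges between important nodes of the same class, within a perturbation budget.

```python
import numpy as np

def perturb_edges(adj, labels, importance, budget):
    """Hypothetical sketch of explainability-guided edge perturbation.

    adj        : symmetric 0/1 adjacency matrix (numpy array)
    labels     : class label per node
    importance : explainability-derived importance score per node (assumed given)
    budget     : maximum number of edge edits
    """
    adj = adj.copy()
    # Rank nodes by importance, descending; focus on the most important half.
    ranked = np.argsort(-importance)
    top = ranked[: max(2, len(ranked) // 2)]
    edits = 0
    # First, insert edges between important nodes of *different* classes
    # (the abstract reports this perturbation has the higher impact).
    for i in top:
        for j in top:
            if edits >= budget:
                return adj
            if i < j and labels[i] != labels[j] and adj[i, j] == 0:
                adj[i, j] = adj[j, i] = 1
                edits += 1
    # Then, delete edges between important nodes of the *same* class.
    for i in top:
        for j in top:
            if edits >= budget:
                return adj
            if i < j and labels[i] == labels[j] and adj[i, j] == 1:
                adj[i, j] = adj[j, i] = 0
                edits += 1
    return adj
```

On a toy 4-node graph with labels `[0, 0, 1, 1]` and a budget of 1, the sketch connects the two highest-importance nodes when they belong to different classes, leaving the rest of the graph unchanged.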

Comments

Accepted version. Knowledge-Based Systems, Vol. 310 (February 15, 2025). DOI. © 2025 Elsevier. Used with permission.

Available for download on Monday, March 01, 2027
