Document Type

Article

Publication Date

8-2024

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Source Publication

IEEE Transactions on Artificial Intelligence

Source ISSN

2691-4581

Original Item ID

DOI: 10.1109/TAI.2024.3365779

Abstract

Representing whole slide images (WSIs) as graphs enables a more efficient and accurate learning framework for cancer diagnosis. Because a single WSI consists of billions of pixels and the large annotated datasets required for computational pathology are scarce, learning from WSIs with typical deep learning approaches such as convolutional neural networks (CNNs) is challenging. In addition, downsampling WSIs may discard information that is essential for cancer detection. A novel two-stage learning technique is presented in this work. Since context, such as topological features in the tumor surroundings, may hold important information for cancer grading and diagnosis, a graph representation capturing all dependencies among regions in the WSI is very intuitive. A graph convolutional network (GCN) is deployed to incorporate context from the tumor and adjacent tissues, and self-supervised learning is used to enhance training through unlabeled data. More specifically, the entire slide is represented as a graph, where the nodes correspond to patches from the WSI. The proposed framework is then tested using WSIs from prostate and kidney cancers.
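To make the patch-graph idea concrete, the following is a minimal sketch (not the authors' implementation) of GCN-style message passing over a slide graph whose nodes are WSI patches. The patch features, the toy adjacency matrix, and the random weights are all hypothetical placeholders; in the paper's framework the node features would come from a learned patch encoder and the weights from training.

```python
import numpy as np

def normalize_adjacency(adj):
    """Symmetrically normalize an adjacency matrix with self-loops:
    A_hat = D^{-1/2} (A + I) D^{-1/2}, the standard GCN propagation matrix."""
    adj = adj + np.eye(adj.shape[0])
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    return d_inv_sqrt @ adj @ d_inv_sqrt

def gcn_layer(a_hat, features, weights):
    """One GCN layer: aggregate neighboring patch features, project, apply ReLU."""
    return np.maximum(a_hat @ features @ weights, 0.0)

# Toy example: 5 patches, each with a 4-dimensional feature vector
# (in practice these would come from a patch-level encoder, e.g. one
# pretrained with self-supervised learning on unlabeled WSIs).
rng = np.random.default_rng(0)
features = rng.normal(size=(5, 4))

# Hypothetical adjacency: connect patches that are spatial neighbors on the slide.
adj = np.array([
    [0, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 0, 0, 1, 0],
], dtype=float)

a_hat = normalize_adjacency(adj)
w1 = rng.normal(size=(4, 8))            # layer weights (learned during training)
hidden = gcn_layer(a_hat, features, w1)  # contextualized embedding per patch
print(hidden.shape)                      # (5, 8)
```

Each patch embedding produced this way mixes in information from adjacent tissue regions, which is how the graph representation captures tumor-surrounding context that would be lost by treating patches independently.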

Comments

Accepted version. IEEE Transactions on Artificial Intelligence, Vol. 5, No. 8 (August 2024): 4111-4120. DOI. © 2024 Institute of Electrical and Electronics Engineers (IEEE). Used with permission.

Yahyasoltani_16699acc.docx (1738 kB)
ADA Accessible Version
