Document Type
Conference Proceeding
Publication Date
5-2022
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Source Publication
IEEE International Conference on Acoustics, Speech, and Signal Processing
Source ISBN
9781665405409
Abstract
The gigapixel resolution of a single whole slide image (WSI) and the lack of large annotated datasets needed for computational pathology make cancer diagnosis and grading with WSIs a challenging task. Moreover, downsampling of WSIs may result in loss of information critical for cancer diagnosis. Motivated by the fact that context, such as topological structures in the tumor environment, may contain critical information for cancer grading and diagnosis, a novel two-stage learning approach is proposed. Self-supervised learning is applied to improve training through unlabeled data, and a graph convolutional network (GCN) is deployed to incorporate context from the tumor and surrounding tissues. More specifically, the whole slide is represented as a graph whose nodes are patches from the WSI. Each patch is represented by a feature vector obtained from self-supervised pre-training. The graph is trained using a GCN, which accounts for the context of each tissue region in cancer grading and classification. In this work, the model is validated on WSIs of prostate cancer; its performance in prostate cancer diagnosis and grading is evaluated and compared with ResNet50, a traditional convolutional neural network (CNN), and multi-instance learning (MIL), a leading approach in WSI diagnosis.
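The two-stage pipeline described in the abstract lends itself to a compact illustration: patch-level feature vectors from a self-supervised encoder serve as node features of a slide graph, and a GCN aggregates them into a slide-level prediction. The sketch below is a minimal approximation, not the authors' implementation; the names (SlideGCN, normalized_adjacency), the dimensions, the random adjacency, and the class count are illustrative assumptions.

# Minimal sketch (not the authors' code): a slide-level GCN classifier that
# consumes patch feature vectors produced by a self-supervised encoder.
# Graph construction, dimensions, and class count are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalized_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^-1/2 (A + I) D^-1/2 used by standard GCNs."""
    adj = adj + torch.eye(adj.size(0))                    # add self-loops
    deg = adj.sum(dim=1)
    d_inv_sqrt = deg.pow(-0.5)
    d_inv_sqrt[torch.isinf(d_inv_sqrt)] = 0.0
    d_mat = torch.diag(d_inv_sqrt)
    return d_mat @ adj @ d_mat


class SlideGCN(nn.Module):
    """Two-layer GCN over the patch graph followed by mean pooling,
    yielding one prediction (e.g., a cancer grade) per whole slide."""

    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        a_hat = normalized_adjacency(adj)
        h = F.relu(a_hat @ self.fc1(x))                   # first graph convolution
        h = F.relu(a_hat @ self.fc2(h))                   # second graph convolution
        slide_embedding = h.mean(dim=0)                   # pool patches -> slide
        return self.head(slide_embedding)


# Usage with dummy data: 200 patches, 512-d self-supervised features, and a
# random symmetric adjacency standing in for patch proximity on the slide.
if __name__ == "__main__":
    num_patches, feat_dim = 200, 512
    patch_features = torch.randn(num_patches, feat_dim)   # stand-in for SSL features
    adjacency = (torch.rand(num_patches, num_patches) > 0.95).float()
    adjacency = ((adjacency + adjacency.t()) > 0).float()  # make symmetric
    model = SlideGCN(in_dim=feat_dim, hidden_dim=128, num_classes=4)
    logits = model(patch_features, adjacency)
    print(logits.shape)                                    # torch.Size([4])

In practice the self-supervised encoder would be trained separately on unlabeled patches, and the adjacency would encode spatial neighborhoods of patches within the slide rather than random connections.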
Recommended Citation
Aryal, Milan and Yahyasoltani, Nasim, "Context-Aware Graph-Based Self-Supervised Learning of Whole Slide Images" (2022). Computer Science Faculty Research and Publications. 78.
https://epublications.marquette.edu/comp_fac/78
Comments
Accepted version. IEEE International Conference on Acoustics, Speech, and Signal Processing (May 2022). DOI. © 2022 Institute of Electrical and Electronics Engineers (IEEE)