Date of Award

Fall 9-19-2025

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Mathematics, Statistics and Computer Science

First Advisor

Nasim Yahyasoltani

Second Advisor

Gregory Ongie

Third Advisor

Praveen Madiraju

Abstract

Advancements in computational pathology hold immense potential to transform cancer diagnosis by assisting pathologists in achieving more efficient, accurate, and interpretable analysis of histopathological images. Whole slide images (WSIs), which are high-resolution digital scans of tissue slides, are central to this transformation, offering unprecedented detail for the development of automated algorithms. However, the gigapixel scale, multi-resolution format, and limited availability of WSIs pose significant challenges for the development of reliable automated diagnostic systems. To overcome these obstacles, this work proposes a comprehensive framework that integrates graph-based learning, self-supervised feature extraction, and vision–language models to enhance diagnostic performance and clinical applicability. For classification tasks, WSIs are represented as graphs to capture neighborhood-level relationships while preserving global slide information. An initial single-resolution approach is extended to a multi-resolution strategy that mirrors the diagnostic workflow of pathologists. The framework also prioritizes explainability by identifying key cancerous regions driving model predictions, fostering transparency and clinical trust. Data scarcity, a key challenge in computational pathology, is addressed through few-shot learning strategies that combine pathologist-validated prompts from large language models with WSIs to enable robust classification even with limited annotated datasets. The proposed methods are extensively evaluated on multiple cancer datasets, including kidney, lung, breast, and prostate cancers, demonstrating superior performance compared to state-of-the-art approaches. In addition to WSI classification, this work explores automated stain transfer to further improve diagnostic workflows. 
Different stains are routinely used in cancer diagnosis to highlight distinct cellular and tissue characteristics, yet manual conversion between staining modalities remains labor-intensive and time-consuming. Using latent diffusion models, this framework enables the transformation of hematoxylin and eosin (H&E)-stained images into alternative modalities, including trichrome stains for liver tissue and multiple immunohistochemistry (IHC) stains for breast tissue. Comparative evaluations with existing generative techniques demonstrate the effectiveness and versatility of the proposed approach in achieving efficient stain transfer.

Available for download on Wednesday, December 23, 2026
