Document Type

Conference Proceeding

Publication Date

2022

Publisher

Society of Photo-Optical Instrumentation Engineers (SPIE)

Source Publication

Proceedings of SPIE 12035: Medical Imaging

Source ISBN

9781510649460

Original Item ID

DOI: 10.1117/12.2613050

Abstract

Deep neural networks used for reconstructing sparse-view CT data are typically trained by minimizing a pixel-wise mean-squared error or similar loss function over a set of training images. However, networks trained with such losses are prone to wiping out small, low-contrast features that are critical for screening and diagnosis. To remedy this issue, we introduce a novel training loss inspired by the model observer framework to enhance the detectability of weak signals in the reconstructions. We evaluate our approach on the reconstruction of synthetic sparse-view breast CT data, and demonstrate an improvement in signal detectability with the proposed loss.
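The record does not reproduce the loss itself, so the following is only a minimal illustrative sketch of how a model-observer-inspired penalty could be combined with a pixel-wise MSE term, not the authors' actual formulation. The function name `observer_inspired_loss`, the linear matched-filter template, and the weighting factor `alpha` are all hypothetical choices for the sake of the example.

```python
import torch
import torch.nn.functional as F

def observer_inspired_loss(recon, target, signal_template, alpha=0.1):
    """Hypothetical training loss (illustration only, not the paper's method):
    pixel-wise MSE plus a penalty that encourages the reconstruction to
    preserve the response of a linear, matched-filter-style observer
    template associated with a known weak signal."""
    # Standard pixel-wise fidelity term.
    mse = F.mse_loss(recon, target)

    # Linear observer test statistic: correlate each image with the
    # signal template (a matched filter for the known weak signal).
    t_recon = F.conv2d(recon, signal_template)
    t_target = F.conv2d(target, signal_template)

    # Penalize discrepancies in the observer response so that weak,
    # low-contrast features present in the target are not washed out
    # of the reconstruction.
    detect_term = F.mse_loss(t_recon, t_target)

    return mse + alpha * detect_term

# Toy usage on random tensors standing in for reconstructions and targets.
recon = torch.rand(4, 1, 128, 128, requires_grad=True)
target = torch.rand(4, 1, 128, 128)
template = torch.ones(1, 1, 7, 7) / 49.0  # stand-in for a low-contrast disc signal
loss = observer_inspired_loss(recon, target, template)
loss.backward()
```

In this sketch the detectability term only supplements the MSE objective; the relative weighting `alpha` would trade off overall image fidelity against preservation of the weak-signal response.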

Comments

Published in Proceedings of SPIE 12035: Medical Imaging, 2022. DOI: 10.1117/12.2613050. © Society of Photo-Optical Instrumentation Engineers (SPIE). Used with permission.

ongie_15665acc.docx (863 kB)
ADA Accessible Version
