FUSION VIEW-NET: DUAL-VIEW DEEP LEARNING FOR ROBUST MAMMOGRAPHIC BREAST CANCER CLASSIFICATION

Authors

DOI:

https://doi.org/10.37943/23OUMR1748

Keywords:

Deep Learning, Mammography, Breast Cancer, Computer-Aided Diagnosis (CADx), Medical Image Analysis, Classification

Abstract

Breast cancer remains one of the leading causes of cancer-related death among women worldwide, and improved patient outcomes depend on early detection. Although mammography is the primary imaging modality for screening, diagnostic accuracy is often compromised by the subtlety of early radiographic findings and by inter-reader variability. In this work, we examine the application of deep convolutional neural networks (CNNs) to the automated classification of mammogram images and present FusionView-Net (FV-Net), a novel dual-view integration framework that combines information from the mediolateral oblique (MLO) and craniocaudal (CC) views to improve diagnostic precision. By merging the contextual and spatial information of both standard projections, FV-Net produces a more comprehensive representation of breast tissue than conventional single-view methods. The approach is evaluated on two publicly available mammography datasets, partitioned to support both seen-unseen data configurations and cross-dataset generalization testing. Several CNN architectures, including ResNet18 and a custom-designed CNN, are evaluated on the individual and combined datasets. Results show that FV-Net substantially improves model robustness and classification accuracy, as evidenced by consistently higher F1 scores and ROC AUC values, particularly when paired with ResNet18 and the custom CNN. Generalization experiments reveal a marked drop in performance under domain shift, underscoring the importance of dataset diversity and the need for adaptable models in real clinical settings. Our results demonstrate the effectiveness of multi-view fusion for CNN-based mammography classification and offer practical guidance for selecting architectures and training strategies. FV-Net thus lays groundwork for reliable, broadly applicable AI tools that assist radiologists in the early diagnosis of breast cancer.

Author Biographies

Beibit Abdikenov, Astana IT University, Kazakhstan

PhD, Director of Science and Innovation Center “Artificial Intelligence”

Tomiris Zhaksylyk, Astana IT University, Kazakhstan

Master of Science, Researcher at Science and Innovation Center “Artificial Intelligence”

Aruzhan Imasheva, Astana IT University, Kazakhstan

Master’s student, Researcher at Science and Innovation Center “Artificial Intelligence”

Yerzhan Orazayev, Astana IT University, Kazakhstan

Master of Science, Researcher at Science and Innovation Center “Artificial Intelligence”

Danara Suleimenova, Astana IT University, Kazakhstan

Master of Medicine, Researcher at Science and Innovation Center “Artificial Intelligence”

References

World Health Organization. (2021). Breast cancer. Geneva, Switzerland: World Health Organization. Retrieved from https://www.who.int/news-room/fact-sheets/detail/breast-cancer

Shen, L., Margolies, L. R., Rothstein, J. H., Fluder, E., McBride, R., & Sieh, W. (2019). Deep learning to improve breast cancer detection on screening mammography. Radiology, 292(3), 535–540.

Lehman, C. D., Wellman, R. D., Buist, D. S. M., Kerlikowske, K., Tosteson, A. N. A., & Miglioretti, D. L. (2019). Mammographic breast density assessment using deep learning: Clinical implementation. Radiology, 290(1), 52–58.

McKinney, S. M., Sieniek, M., Godbole, V., Godwin, J., Antropova, N., Ashrafian, H., et al. (2020). International evaluation of an AI system for breast cancer screening. Nature, 577(7788), 89–94.

Karaca Aydemir, B. K., Telatar, Z., Güney, S., et al. (2025). Detecting and classifying breast masses via YOLO-based deep learning. Neural Computing and Applications, 37, 11555–11582. https://doi.org/10.1007/s00521-025-11153-1

Lotter, W., Sorensen, G., Ding, J., Kim, H., Ghassemi, M., Haider, Z., et al. (2021). Robust breast cancer detection in mammography and digital breast tomosynthesis using an annotation-efficient deep learning approach. Nature Medicine, 27(2), 244–249.

Carriero, A., Groenhoff, L., Vologina, E., Basile, P., & Albera, M. (2024). Deep Learning in Breast Cancer Imaging: State of the Art and Recent Advancements in Early 2024. Diagnostics, 14(8), 848. https://doi.org/10.3390/diagnostics14080848.

Songsaeng, C., Pradaranon, V., & Chaichulee, S. (2021). Multi-scale convolutional neural networks for classification of digital mammograms with breast calcifications. IEEE Access. https://doi.org/10.1109/ACCESS.2021.3104627

Arevalo, J., González, F. A., Ramos-Pollán, R., Oliveira, J. L., & Guevara López, M. A. (2020). Representation learning for mammography classification using multi-view information. Computer Methods and Programs in Biomedicine, 190, 105361.

Manigrasso, F., Milazzo, R., Russo, A. S., Lamberti, F., Strand, F., Pagnani, A., & Morra, L. (2025). Mammography classification with multi-view deep learning techniques: Investigating graph and transformer-based architectures. Medical Image Analysis, 99, 103320. https://doi.org/10.1016/j.media.2024.103320.

Nasir, I. M., Alrasheedi, M. A., & Alreshidi, N. A. (2024). MFAN: Multi-Feature Attention Network for Breast Cancer Classification. Mathematics, 12(23), 3639. https://doi.org/10.3390/math12233639.

Yang, B., Peng, H., Luo, X., & Wang, J. (2024). Multi-stages attention breast cancer classification based on nonlinear spiking neural P neurons with autapses. arXiv. https://arxiv.org/abs/2312.12804

Ribli, D., Horváth, A., Unger, Z., & Pollner, P. (2018). Detecting and classifying lesions in mammograms with deep learning. Scientific Reports, 8(1), 4165. https://doi.org/10.1038/s41598-018-22437-z

Lotter, W., Sorensen, K., Golan, T., & Barzilay, R. (2021). Breast cancer detection with a transformer-based model for high-resolution mammograms. Nature Communications, 12, 518. https://doi.org/10.1038/s41467-020-20407-z

Yala, A., Mikhael, P., Strand, F., Lin, G., Smith, K., & Barzilay, R. (2022). Toward robust mammography-based models for breast cancer risk. Science Translational Medicine, 14(629), eabj5325. https://doi.org/10.1126/scitranslmed.abj5325

Trivizakis, E., Tsiknakis, S., Vamvakas, G., & Marias, K. (2019). A deep learning approach for automatic classification of breast lesions on mammography. Journal of Healthcare Engineering, 2019, 4180212. https://doi.org/10.1155/2019/4180212

Wu, N., Phang, J., Park, J., Shen, Y., Huang, Z., & Zorin, M. (2023). Self-supervised learning for mammography. Medical Image Analysis, 87, 102787. https://doi.org/10.1016/j.media.2023.102787

Raghu, M., Zhang, C., Kleinberg, J., & Bengio, S. (2021). Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems, 34, 12116–12128.

D'Orsi, C. J., Sickles, E. A., Mendelson, E. B., & Morris, E. A. (2014). 2013 ACR BI-RADS Atlas: Breast Imaging Reporting and Data System. American College of Radiology.

Published

2025-09-30

How to Cite

Abdikenov, B., Zhaksylyk, T., Imasheva, A., Orazayev, Y., & Suleimenova, D. (2025). FUSION VIEW-NET: DUAL-VIEW DEEP LEARNING FOR ROBUST MAMMOGRAPHIC BREAST CANCER CLASSIFICATION. Scientific Journal of Astana IT University, 23, 78–90. https://doi.org/10.37943/23OUMR1748

Issue

Section

Information Technologies