IMPACT OF LOSS FUNCTION ON SYNTHETIC BREAST ULTRASOUND IMAGE GENERATION
DOI: https://doi.org/10.37943/24MMIK3887
Keywords: BUSI dataset, DGAN-WP-TL, WGAN-GP, BCE loss, synthetic medical images, loss function analysis
Abstract
The BUSI (Breast Ultrasound Images) dataset is small and imbalanced, which limits the effective training of deep learning diagnostic models. Generative Adversarial Networks (GANs) offer a promising and increasingly popular solution for synthesizing realistic medical images to augment scarce training data and improve overall model generalization. This study investigates the impact of loss function selection in our previously published Deep Generative Adversarial Network with Wasserstein Gradient Penalty and Transfer Learning (DGAN-WP-TL). Two configurations were evaluated: one trained using Wasserstein GAN with Gradient Penalty (WGAN-GP) and another trained using Binary Cross-Entropy (BCE) loss. The experiments were conducted on the BUSI dataset with perceptual loss weights λ = 0.5, 3.0, 5.0, 7.0, and 10.0. Model performance was comprehensively assessed using Fréchet Inception Distance (FID), Kernel Inception Distance (KID), Learned Perceptual Image Patch Similarity (LPIPS), and Multi-Scale Structural Similarity Index (MS-SSIM). Results demonstrate that WGAN-GP consistently outperformed BCE across all λ values, generating images with higher fidelity, improved realism, and greater visual diversity. The superiority was most pronounced for λ = 3.0 and λ = 5.0, where WGAN-GP achieved the lowest KID and FID and the most balanced diversity–fidelity trade-off. The best-performing DGAN-WP-TL configuration (WGAN-GP, λ = 5.0) achieved KID = 0.14, FID = 179.42, LPIPS (fake–fake) = 0.49, and MS-SSIM (fake–fake) = 0.18. These results highlight the crucial role of loss function design in medical image synthesis. Overall, the study confirms that WGAN-GP provides superior image realism and variability, making it the preferred choice for high-quality, clinically relevant synthetic data generation, while BCE remains a lightweight and practical alternative for constrained computational environments.
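The contrast between the two objectives compared above can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it assumes a toy linear critic f(x) = x·w, for which the input gradient is the constant vector w, so the WGAN-GP gradient penalty has a closed form and needs no autograd framework.

```python
import numpy as np

def critic(x, w):
    # Toy linear critic f(x) = x . w (an assumption for illustration only)
    return x @ w

def wgan_gp_critic_loss(real, fake, w, lam=5.0):
    # WGAN-GP critic objective: E[f(fake)] - E[f(real)] + lam * (||grad f|| - 1)^2
    wasserstein_term = critic(fake, w).mean() - critic(real, w).mean()
    grad_norm = np.linalg.norm(w)            # gradient of a linear critic is w
    gradient_penalty = lam * (grad_norm - 1.0) ** 2
    return wasserstein_term + gradient_penalty

def bce_discriminator_loss(d_real, d_fake):
    # Standard GAN objective: -E[log D(real)] - E[log(1 - D(fake))]
    eps = 1e-12                              # guard against log(0)
    return -(np.log(d_real + eps).mean() + np.log(1.0 - d_fake + eps).mean())
```

With a unit-norm w the penalty term vanishes, illustrating the 1-Lipschitz constraint that the gradient penalty softly enforces; the BCE discriminator has no such constraint, which is one reason its training dynamics differ.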
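For intuition on the FID metric reported above, the Fréchet distance between two Gaussians can be written out directly. The sketch below restricts to diagonal covariances (an assumption made only to keep the matrix square root elementwise; the real metric fits full covariances to Inception features):

```python
import numpy as np

def frechet_distance_diag(mu1, var1, mu2, var2):
    # Frechet distance for diagonal-covariance Gaussians:
    #   d^2 = ||mu1 - mu2||^2 + sum(var1 + var2 - 2*sqrt(var1*var2))
    mean_term = np.sum((mu1 - mu2) ** 2)
    cov_term = np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2))
    return mean_term + cov_term
```

Identical feature statistics give a distance of zero, and lower is better, which is why the λ = 5.0 configuration's lowest FID marks it as the best-performing model.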
Copyright (c) 2025. Articles are open access. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.