A NOVEL ACOUSTIC-ASSISTED CHIP-OFF FRAMEWORK FOR DATA EXTRACTION FROM DAMAGED HARD DISK DRIVES
DOI: https://doi.org/10.37943/25IIRG2444

Keywords: signal classification, machine learning, forensic analysis, SMART attributes, logical malfunction, environmental sound recognition, chip-off, data recovery

Abstract
This article discusses best practices for extracting data from damaged mobile phones and hard drives while preserving the integrity of the storage hardware. Data recovery is essential to digital forensics and cybersecurity, yet no single approach fits every case. Step-by-step low-level acquisition, rather than a quick logical copy, often reveals hidden artifacts or recently deleted files; in some cases it is the only reliable option.
Hard disk faults are usually divided into two categories: logical errors and physical damage. The proposed recovery platform combines proven diagnostics, predictive analysis, and purpose-built tools, ranging from magnetic-head replacement and disk imaging to file-system repairs that make the data readable again. One novel element is acoustic sensing of the drive: much as a mechanic listens to a running engine, the system listens to the drive's acoustic response and uses it to automate the detection of mechanical defects, since clicking or stuttering sounds are highly informative. The study includes two models: Model A detects noise-related faults, and Model B attempts data recovery. The study therefore takes a combined approach to extracting data from a damaged hard drive.
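The idea behind Model A, detecting mechanical faults from drive sounds, can be illustrated with a minimal sketch. The feature (per-frame peak-to-RMS "crest factor", which spikes on click transients) and the threshold below are illustrative assumptions, not the authors' actual pipeline, which would presumably use a trained classifier over richer spectral features:

```python
import numpy as np

def crest_factor_frames(signal, frame_len=2048):
    """Peak-to-RMS ratio per frame; click transients raise it sharply."""
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1)) + 1e-12  # avoid divide-by-zero
    peak = np.abs(frames).max(axis=1)
    return peak / rms

def sounds_faulty(signal, threshold=8.0):
    """Flag a recording as faulty if any frame looks click-like.

    A pure tone has a crest factor near sqrt(2); impulsive clicks
    push individual frames well above the (assumed) threshold.
    """
    return bool((crest_factor_frames(signal) > threshold).any())

# Synthetic demo: a smooth spindle hum vs. the same hum with
# periodic click impulses, standing in for a failing head.
sr = 22050
t = np.arange(sr) / sr
hum = 0.1 * np.sin(2 * np.pi * 120 * t)  # healthy drive
clicks = hum.copy()
clicks[::4000] += 1.0                    # inject click transients
```

On this synthetic data, `sounds_faulty(hum)` stays quiet while `sounds_faulty(clicks)` fires; a real system would replace the fixed threshold with a model trained on labeled drive recordings.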
With the proliferation of Internet of Things devices, acoustic-assisted chip-off methods offer a forensically sound way to inspect and recover data from damaged equipment, such as sensors and broken industrial components. These results should interest research groups, corporate counsel, and forensic investigators seeking broader coverage and greater reliability in data-recovery operations.
Copyright (c) 2026. Articles are open access; this work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.