Research Article | Peer-Reviewed

AI-Based Image Quality and Lens Defect Analysis in Autonomous Driving: A Framework with U-Net-Based Soiling Detection

Received: 10 November 2025     Accepted: 20 November 2025     Published: 17 December 2025
Abstract

Ensuring reliable camera vision in autonomous driving systems requires continuous monitoring of image quality and lens integrity. External contaminants such as dust, raindrops, and mud, as well as permanent defects like cracks or scratches, can severely degrade visual perception and compromise safety-critical tasks such as lane detection, obstacle recognition, and path planning. This paper presents an AI-based framework that integrates image quality assessment (IQA) and lens defect analysis to enhance the robustness of camera-based perception systems in autonomous vehicles. Building on previous conceptual work in safety-aware lens defect detection, the proposed framework introduces a dual-layer architecture that combines real-time IQA monitoring with deep learning-based soiling segmentation. As an initial experimental validation, a U-Net model was trained on the WoodScape Soiling dataset to perform pixel-level detection of lens contamination. The model achieved an average Intersection-over-Union (IoU) of 0.6163, a Dice coefficient of 0.7626, and a recall of 0.9780, confirming its effectiveness in identifying soiled regions under diverse lighting and environmental conditions. Beyond the experiment, the framework outlines pathways for the future integration of semantic segmentation, anomaly detection, and safety-driven decision policies aligned with the ISO 26262 and ISO 21448 standards. By bridging conceptual modeling with experimental evidence, this study establishes a foundation for intelligent camera health monitoring and fault-tolerant perception in autonomous driving. The presented results demonstrate that AI-based image quality and defect assessment can significantly improve system reliability, supporting safer and more adaptive driving under real-world conditions.
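For readers unfamiliar with the reported evaluation metrics, the IoU, Dice coefficient, and recall figures quoted above can all be derived from the pixel-level confusion counts between a predicted and a ground-truth soiling mask. The sketch below is illustrative only; the function and variable names are hypothetical and do not come from the paper's implementation.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Compute IoU, Dice, and recall for binary soiling masks.

    pred, target: arrays of the same shape; nonzero = soiled pixel.
    eps guards against division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()    # soiled pixels correctly detected
    fp = np.logical_and(pred, ~target).sum()   # clean pixels flagged as soiled
    fn = np.logical_and(~pred, target).sum()   # soiled pixels missed
    iou = tp / (tp + fp + fn + eps)            # Intersection-over-Union
    dice = 2 * tp / (2 * tp + fp + fn + eps)   # Dice coefficient
    recall = tp / (tp + fn + eps)              # soiled-pixel recall
    return iou, dice, recall
```

Note that the high recall (0.9780) alongside the lower IoU (0.6163) is consistent with a model that rarely misses soiled pixels (few false negatives) but over-segments somewhat (more false positives), a conservative trade-off that is generally preferable in a safety-monitoring role.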

Published in American Journal of Mechanics and Applications (Volume 12, Issue 4)
DOI 10.11648/j.ajma.20251204.14
Page(s) 93-101
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2025. Published by Science Publishing Group

Keywords

Autonomous Driving, Image Quality Assessment, Lens Defect Detection, U-Net Segmentation, Soiling Detection, Camera Reliability, Safety Framework

Cite This Article
  • APA Style

    O’g’li, A. A. G. (2025). AI-Based Image Quality and Lens Defect Analysis in Autonomous Driving: A Framework with U-Net-Based Soiling Detection. American Journal of Mechanics and Applications, 12(4), 93-101. https://doi.org/10.11648/j.ajma.20251204.14


Author Information