Open Access

ARTICLE


Visual Perception and Adaptive Scene Analysis with Autonomous Panoptic Segmentation

Darthy Rabecka V1,*, Britto Pari J1, Man-Fai Leung2,*

1 School of Electrical and Communication, Department of ECE, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, 600062, India
2 School of Computing and Information Science, Department of Science and Engineering, Anglia Ruskin University, Cambridge, CB1 1PT, UK

* Corresponding Authors: Darthy Rabecka V; Man-Fai Leung

Computers, Materials & Continua 2025, 85(1), 827-853. https://doi.org/10.32604/cmc.2025.064924

Abstract

Deep learning techniques have significantly improved the accuracy and efficiency of computer vision segmentation tasks. This article presents an architecture for semantic, instance, and panoptic segmentation built on EfficientNet-B7 and a Bidirectional Feature Pyramid Network (Bi-FPN). Replacing the EfficientNet-B5 backbone with EfficientNet-B7 strengthens the model's feature extraction capability and makes it better suited to real-world applications. Bi-FPN integration ensures superior multi-scale feature fusion and thereby improves the segmentation of complex objects across varied urban environments. The proposed design is evaluated on challenging datasets, including Cityscapes, Common Objects in Context (COCO), KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute), and the Indian Driving Dataset (IDD), which together reflect a wide range of real-world driving conditions. Through extensive training, validation, and testing, the model shows substantial gains in segmentation accuracy and surpasses state-of-the-art performance in semantic, instance, and panoptic segmentation tasks. The proposed approach outperforms existing methods, yielding notable gains in Panoptic Quality: +0.4% on Cityscapes, +0.2% on COCO, +1.7% on KITTI, and +0.4% on IDD. These gains demonstrate its effectiveness across diverse driving conditions and datasets. The study highlights the potential of EfficientNet-B7 and Bi-FPN to deliver reliable, high-precision segmentation for computer vision applications, particularly autonomous driving. The results suggest that the framework effectively addresses the constraints of practical deployments while providing a robust solution for high-performance segmentation tasks.
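
To illustrate the bidirectional multi-scale fusion idea that a Bi-FPN neck performs on top of backbone features such as those produced by EfficientNet-B7, the following PyTorch sketch shows a single BiFPN-style layer with learned normalized fusion weights. It is a minimal, hypothetical example, not the authors' implementation: the module name (SimpleBiFPNLayer), the channel count, and the number of pyramid levels are placeholder assumptions, and random tensors stand in for real backbone outputs.

    # Illustrative sketch only: one BiFPN-style bidirectional fusion layer in PyTorch.
    # Names and sizes below are placeholders, not taken from the paper.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SimpleBiFPNLayer(nn.Module):
        """Fuses multi-scale features top-down and bottom-up with normalized learned weights."""
        def __init__(self, channels, num_levels=5, eps=1e-4):
            super().__init__()
            self.eps = eps
            self.num_levels = num_levels
            # One weight vector per merge node (2 inputs top-down, 3 inputs bottom-up).
            self.td_w = nn.Parameter(torch.ones(num_levels - 1, 2))
            self.bu_w = nn.Parameter(torch.ones(num_levels - 1, 3))
            self.td_conv = nn.ModuleList(
                [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_levels - 1)])
            self.bu_conv = nn.ModuleList(
                [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_levels - 1)])

        def _fuse(self, w, inputs):
            # "Fast normalized fusion": non-negative weights normalized to sum to 1.
            w = F.relu(w)
            w = w / (w.sum() + self.eps)
            return sum(wi * x for wi, x in zip(w, inputs))

        def forward(self, feats):  # feats: list of maps, finest to coarsest (e.g., P3..P7)
            # Top-down pathway: propagate coarse semantic context to finer levels.
            td = [None] * self.num_levels
            td[-1] = feats[-1]
            for i in range(self.num_levels - 2, -1, -1):
                up = F.interpolate(td[i + 1], size=feats[i].shape[-2:], mode="nearest")
                td[i] = self.td_conv[i](self._fuse(self.td_w[i], [feats[i], up]))
            # Bottom-up pathway: push fine spatial detail back to coarser levels.
            out = [td[0]]
            for i in range(1, self.num_levels):
                down = F.max_pool2d(out[-1], kernel_size=2)
                down = F.interpolate(down, size=feats[i].shape[-2:], mode="nearest")
                out.append(self.bu_conv[i - 1](
                    self._fuse(self.bu_w[i - 1], [feats[i], td[i], down])))
            return out

    # Dummy multi-scale features standing in for EfficientNet-B7 outputs (5 levels, 64 channels).
    feats = [torch.randn(1, 64, s, s) for s in (64, 32, 16, 8, 4)]
    fused = SimpleBiFPNLayer(channels=64)(feats)
    print([f.shape for f in fused])

In a full panoptic pipeline, several such layers would typically be stacked and the fused pyramid fed to separate semantic and instance heads before panoptic merging; the sketch covers only the fusion step.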

Keywords

Panoptic segmentation; multi-scale features; EfficientNet-B7; feature pyramid network

Cite This Article

APA Style
V, D.R., J, B.P., & Leung, M. (2025). Visual Perception and Adaptive Scene Analysis with Autonomous Panoptic Segmentation. Computers, Materials & Continua, 85(1), 827–853. https://doi.org/10.32604/cmc.2025.064924
Vancouver Style
V DR, J BP, Leung M. Visual Perception and Adaptive Scene Analysis with Autonomous Panoptic Segmentation. Comput Mater Contin. 2025;85(1):827–853. https://doi.org/10.32604/cmc.2025.064924
IEEE Style
D. R. V, B. P. J, and M. Leung, “Visual Perception and Adaptive Scene Analysis with Autonomous Panoptic Segmentation,” Comput. Mater. Contin., vol. 85, no. 1, pp. 827–853, 2025. https://doi.org/10.32604/cmc.2025.064924



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.