Search Results (2)
  • Open Access

    ARTICLE

    An Improved High Precision 3D Semantic Mapping of Indoor Scenes from RGB-D Images

Jing Xin, Kenan Du, Jiale Feng, Mao Shan

    CMES-Computer Modeling in Engineering & Sciences, Vol.137, No.3, pp. 2621-2640, 2023, DOI:10.32604/cmes.2023.027467

Abstract This paper proposes an improved high-precision 3D semantic mapping method for indoor scenes using RGB-D images. Current semantic mapping algorithms suffer from low semantic annotation accuracy and insufficient real-time performance. To address these issues, we first adopt the Elastic Fusion algorithm to select key frames from indoor environment image sequences captured by the Kinect sensor and construct the indoor environment space model. Then, an indoor RGB-D image semantic segmentation network is proposed, which uses multi-scale feature fusion to quickly and accurately obtain object labeling information at the pixel level of the spatial point cloud model. Finally, Bayesian updating is…
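The abstract above is truncated at its mention of Bayesian updating, which is commonly used to fuse per-frame semantic labels into a map. As an illustration only (not the paper's implementation), a minimal sketch of recursive Bayesian label fusion for one map element, with hypothetical class likelihoods:

```python
import numpy as np

def bayesian_label_update(prior, likelihood):
    """Fuse one frame's per-class observation likelihood into the running
    per-class probability for a single map element (recursive Bayes rule):
    posterior ∝ prior * likelihood, renormalized to sum to 1."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Three illustrative classes (e.g. wall, floor, chair), uniform prior.
prior = np.array([1 / 3, 1 / 3, 1 / 3])

# Two hypothetical per-frame likelihoods from a segmentation network.
observations = [np.array([0.2, 0.2, 0.6]), np.array([0.1, 0.3, 0.6])]
for obs in observations:
    prior = bayesian_label_update(prior, obs)

print(prior.argmax())  # index of the most probable fused class
```

Repeated updates sharpen the distribution toward classes that are consistently observed, which is why such fusion improves annotation accuracy over single-frame labeling.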

  • Open Access

    ARTICLE

    Visual Saliency Prediction Using Attention-based Cross-modal Integration Network in RGB-D Images

Xinyue Zhang, Ting Jin, Mingjie Han, Jingsheng Lei, Zhichao Cao

    Intelligent Automation & Soft Computing, Vol.30, No.2, pp. 439-452, 2021, DOI:10.32604/iasc.2021.018643

Abstract Saliency prediction has recently gained considerable attention owing to the rapid development of deep neural networks for computer vision tasks. However, several dilemmas remain to be addressed. In this paper, we design a visual saliency prediction model using attention-based cross-modal integration strategies in RGB-D images. Unlike other symmetric feature extraction networks, we exploit asymmetric networks to effectively extract depth features as complementary information to the RGB information. Then we propose attention modules that integrate cross-modal feature information and emphasize the feature representation of salient regions while neglecting the surrounding unimportant pixels, so…
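The abstract describes attention modules that blend RGB and depth features. As a toy illustration under assumed shapes (the paper's actual modules are not specified here), a per-channel gated fusion where a sigmoid gate derived from global-average-pooled responses blends the two modalities:

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_modal_attention_fuse(rgb_feat, depth_feat):
    """Toy cross-modal fusion: compute a per-channel gate in (0, 1) from
    the global-average-pooled responses of both modalities, then take a
    channel-wise convex combination of RGB and depth features."""
    g_rgb = rgb_feat.mean(axis=(1, 2))      # (C,) pooled RGB response
    g_depth = depth_feat.mean(axis=(1, 2))  # (C,) pooled depth response
    gate = 1.0 / (1.0 + np.exp(-(g_rgb - g_depth)))  # sigmoid gate per channel
    gate = gate[:, None, None]              # broadcast over spatial dims
    return gate * rgb_feat + (1.0 - gate) * depth_feat

# Hypothetical feature maps: 8 channels, 16x16 spatial resolution.
rgb = rng.standard_normal((8, 16, 16))
depth = rng.standard_normal((8, 16, 16))
fused = cross_modal_attention_fuse(rgb, depth)
print(fused.shape)  # (8, 16, 16)
```

Because the gate is a convex weight, every fused value lies between the corresponding RGB and depth values; learned attention modules generalize this idea by making the gate a trainable function of both feature maps.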
