Special Issue "Recent Advances in Virtual Reality"

Submission Deadline: 31 May 2022
Guest Editors
Prof. Zhigeng Pan, Nanjing University of Information Science & Technology, China
Dr. Gustavo Marfia, University of Bologna, Italy
Prof. Zhihan Lv, Qingdao University, China

Summary

Interactive technologies such as eye tracking, speech recognition, and gesture input have made great progress as research has deepened. Through these technologies, users can obtain information from virtual reality systems over multiple channels, blurring the boundary between the human environment and computer systems. Because several human sensory organs participate in the interaction between humans and computer systems, this mode of interaction is called multimodal interaction from the perspective of systems and technology. Compared with single-channel interaction, multimodal interaction has broader application potential in natural human-computer interaction. In the study of multimodal interaction, user behavior is one of the main input methods: effective natural interaction can be achieved by classifying the user behaviors that convey user intention, and there is a corresponding mathematical mapping between user cognition and user behavior.

A sensory interactive system requires a diverse range of interactive devices to enhance the user's interactive experience. At present, interactive devices are mainly divided, according to their sensory attributes, into voice input and output devices, image input and image display devices, touch input devices such as the touchpad, and visual tracking devices. Applied across computer systems, these devices improve efficiency and the accuracy of human-computer interaction, and their reasonable application in augmented reality systems can greatly improve the interactivity and user experience of augmented reality. Related research has shown that multimodal interaction systems achieve good results in fields such as digital media, cultural communication, commodity sales, and information communication.
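The classification of user behavior across several input channels is often realized by fusing per-modality evidence about user intention. The following is a minimal, illustrative sketch of one common approach (weighted late fusion of per-modality intent scores); the channel names, weights, and scores below are hypothetical, not taken from any specific system discussed in this issue.

```python
# Illustrative late-fusion sketch: combine per-modality intent scores
# into a single estimate of user intention. All names and numbers are
# hypothetical examples, not a reference implementation.
def fuse_intent_scores(modality_scores, weights):
    """Combine per-modality intent probabilities with a weighted sum.

    modality_scores: dict mapping modality name -> {intent: score}
    weights: dict mapping modality name -> fusion weight (sums to 1)
    Returns the highest-scoring intent and the fused score table.
    """
    fused = {}
    for modality, scores in modality_scores.items():
        w = weights.get(modality, 0.0)
        for intent, score in scores.items():
            fused[intent] = fused.get(intent, 0.0) + w * score
    return max(fused, key=fused.get), fused

# Hypothetical per-channel classifier outputs for two candidate intents.
scores = {
    "gaze":    {"select": 0.7, "navigate": 0.3},
    "speech":  {"select": 0.2, "navigate": 0.8},
    "gesture": {"select": 0.6, "navigate": 0.4},
}
weights = {"gaze": 0.3, "speech": 0.5, "gesture": 0.2}
best_intent, fused = fuse_intent_scores(scores, weights)
```

Here the per-channel weights stand in for the weighting of multisensory channels listed among the topics below; in practice such weights would be learned or tuned per application.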


Although virtual reality technology has developed and spread widely, the study of multimodal interaction still faces limitations. Constrained by current technology, systems cannot directly identify user cognition; meanwhile, imperfect virtual reality system design reduces the accuracy and efficiency of information transmission, thereby increasing cognitive load and degrading the user experience. Existing multimodal interaction research falls mainly into the two fields of software design and hardware equipment, with a focus on the application of multimodal systems. For current augmented reality systems and applications, most studies still concentrate on vision- and gesture-based augmented reality, which lacks direct physical and sensory stimulation for users. It is therefore desirable to explore more channels through which users can input information, with the system processing that input into different forms of feedback delivered to the sensory organs that correspond to each form of information. Future research should also address how to combine the architecture of multimodal interaction with augmented reality.


Some research topics in multimodal interaction have attracted broad interest, including user cognition, user recognition, and behavior coding. This special issue aims to provide readers with a comprehensive overview of multimodal interaction research and practice in virtual reality. It particularly welcomes original commentary articles, opinion pieces, and methods and modeling research. Outstanding papers from ICVR 2022 (the 2022 IEEE 8th International Conference on Virtual Reality) will be considered for inclusion in the Special Issue. All submitted papers will undergo the Journal's standard peer-review process.


The areas covered by this special issue may include but are not limited to the following:


• Construction and Optimization of Multimodal Interaction

• Multimodal Interaction Realization for Virtual Reality

• Analysis of Application Scenario of Multimodal Interaction

• Comparative Analysis of Multimodal Interaction and Single-channel Interaction

• Analysis of Application Values of Multimodal Interaction

• Cognitive Expression under Multichannel Integration

• Quantitative Description of User Cognition in Interactive Systems

• Mathematical and Logical Relationship between Multimodal Natural Interaction and User Cognition

• Data Fusion of User Behavior in Multimodal Natural Interaction

• Research on Cognition of Multimodal Interaction

• Relationship between Cognitive Load and User Behavior in Interactive Systems

• Evaluation and Analysis of User Multisensory Cognitive Channels

• Analysis of Weighting of Multisensory Channels

• Research on Multimodal Interaction Based on User Cognition