Augmented reality superimposes digital information onto objects in the physical world and enables multi-user collaboration. Previous proxemic interaction research has explored many applications of user-object distance and user-user distance in augmented reality contexts, and combining the two types of distance can improve the efficiency with which users perceive and interact with task objects and collaborators by giving users insight into user-object and user-user spatial relations. However, little attention has been paid to how the two types of distance can be adopted simultaneously to assist collaboration tasks across multiple users. To fill this gap, we present UOUU, a method that combines user-object distance and user-user distance to dynamically assign tasks across users. We conducted empirical studies to investigate how the method affected user collaboration tasks in terms of collaboration occurrence and overall task efficiency. The results show that the method significantly improves the speed and accuracy of the collaboration tasks as well as the frequency of collaboration occurrences. The study also confirms the method’s effect on stimulating collaboration activities, as the UOUU method effectively reduced the participants’ perceived workload and overall moving distance during the tasks. Implications for generalising the use of the method are discussed.
Augmented reality (AR) superimposes virtual information on real-world objects [
Given the benefits of leveraging multi-user collaboration in augmented reality, the UOD and UUD have become an increasingly important topic. For example, UOD reduces users’ cognitive load in user-screen information browsing tasks [
To understand how the UOD and UUD can be simultaneously adopted to assist collaboration tasks, the paper presents UOUU, a UOD and UUD combined collaboration method that is specialised for distributing collaboration tasks across users. The method measures the UOD and UUD during collaboration tasks, and displays corresponding user interfaces driven by task assignment mechanisms that determine when and where to assign collaborative tasks to which user. For example, when the UUD decreases, the users are to be assigned more collaboration tasks to support each other, which maintains task operation efficiency. We conducted empirical studies to investigate how the UOUU method affected multi-user collaboration in an augmented reality-based parcel sorting task.
The main contributions of this paper are two-fold. Firstly, it proposes the UOUU method, which integrates the UOD and UUD to enable dynamic collaboration task assignment in an augmented reality context. Unlike conventional studies that use the UOD or UUD separately, the proposed method enables simultaneous use of the two types of distance. Secondly, it adds a new understanding of how the proposed method influences multi-user collaboration in augmented reality, assessing the role of the combined distances in terms of collaboration occurrences and efficiency.
The remainder of the paper is organised as follows.
Proxemics interaction involves multiple factors such as distance, position, direction, and motion [
The UOD is regarded as an effective modality for revealing users’ attention and enhancing engagement. Research on UOD has attempted to perceive users’ attention to the screen. A shorter UOD tends to suggest an increase in user attention [
UUD variations such as proximity and remoteness between users affect collaborative task performance [
Previous research has adopted the UOD and UUD separately in various interaction applications, but these methods usually helped users understand their relation with either task objects or collaborators rather than both, so users could not directly adopt the resulting guidance in collaborations [
Augmented reality (AR) integrates virtual objects with real environments and supports real-time interaction [
Previous research in AR has explored the collaborative observation, creation, and modification of virtual information, as well as collaborators’ information sharing and receiving. In an application of information editing, a study transformed sound data into 3D stereo waveforms and superimposed them on the real world, enabling multiple users to view and modify the sound by altering the 3D waveform models through hand gestures [
Although there are many applications of AR-based multi-user collaboration, the current interaction modalities and user interface paradigms are mainly derived from the conventional desktop metaphor, which is largely optimised for the individual user. This inspires us to consider the spatial characteristics of AR. As distance-based interaction has been validated as an efficient spatial interaction modality, we combine the two to further improve AR’s capability to support collaborative tasks.
The review confirms that UOD acts as a spatial interaction medium that improves perception efficiency by driving decentralised task execution within users’ existing experiences. UUD represents the development of user-user relationships in group interactions, and its variations in shape and size symbolise users’ collaborative intentions, such as synergistic and dyadic relationships. However, either distance alone lacks a focus on both the task object and the collaborator; the UOUU, which combines these two distances, is therefore a promising method for effectively leveraging both relations to address the challenges of when and how to collaborate in a collaborative task. AR provides users with a shared hybrid environment that maps virtual information to the real world, allowing users to focus more on group communication and task operation. Therefore, integrating UOD and UUD in AR enables natural and efficient interaction for both task operation and collaboration. The key research challenge is how to combine UOD and UUD to achieve more effective collaboration in AR.
Given the above understanding, we develop hypotheses as follows:
H0: Collaboration with combined UOD and UUD is not improved over the use of UOD or UUD in isolation.
H1: The UOUU improves participants’ collaboration occurrences.
H2: The UOUU improves overall task efficiency.
In face-to-face collaboration, there are tasks that require users to frequently traverse between the locations of objects and users. Production pipelines in industry have been proven to improve task efficiency. Inspired by this, a production pipeline formed by assigning different task steps to different users based on spatial relations can reduce users’ movement and task context switching and improve efficiency. However, unlike a normal production pipeline, when a user needs to move frequently during the interaction, it is difficult to prescribe a fixed workflow for user collaboration. Dynamically allocating task steps to a specific user requires analysing the spatial relations between task objects and users.
UOUU instructs users to collaborate or work independently based on a combined analysis of UOD and UUD, assigns optimal task flow guidance, and forms a dynamic production line. The details of the method design are described as follows.
In a task with multiple objects, the UOD between the user and each object changes with different parts of the task. This UOD defines the current part of the task and the user’s interaction intention. Interaction based on UOD has been shown to improve users’ cognitive efficiency [
This model divides the UOD into 4 different distance ranges:
0 to 0.6 m is the operator distance (Dop), in which users can directly touch task objects.
0.6 to 1.2 m is the observer distance (Dob), in which the task object is the main object in the user’s field of vision.
1.2 to 2.4 m is the passerby distance (Dp), in which the user may not identify the task objects.
2.4 m and above is the world distance (Dw), in which the task object is part of the world environment and is hard to identify.
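The four ranges above can be sketched as a simple classifier. The thresholds (0.6 m, 1.2 m, 2.4 m) are taken from the text; the function and label names are illustrative, not part of the original method.

```python
def uod_zone(distance_m: float) -> str:
    """Classify a user-object distance (in metres) into one of the
    four UOD ranges described in the text."""
    if distance_m < 0.6:
        return "operator"   # Dop: the user can directly touch the task object
    if distance_m < 1.2:
        return "observer"   # Dob: the object dominates the user's field of vision
    if distance_m < 2.4:
        return "passerby"   # Dp: the user may not identify the task object
    return "world"          # Dw: the object is part of the world environment
```

The prototype would re-evaluate this zone every frame from the tracked positions and swap the displayed cues whenever the zone changes.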
At Dop, UOUU guides the user to operate on the object with visual cues of operation gestures; at Dob, it guides the user to observe the object by presenting corresponding text and patterns; at Dp, it reminds the user with a visual cue that points out the task object; and at Dw, it enhances the visual effect of the cues to highlight the task object. In addition, UOUU directs users to the object with the smaller UOD for the next task step, shortening the path the user needs to travel.
The procedures of UOD in UOUU are as follows:
Instruct users to perform tasks at the closer object to optimise the task route.
In which the user needs to interact with, gradually instruct the users from observation to operation by following the user egocentric interaction model.
In which
In multi-user tasks, UUD indicates users’ willingness and probability of collaboration [
Users are unlikely to collaborate at the intimate distance. Therefore, UUD in the UOUU is designed for the other three distance ranges. As users can directly interact with collaborators within Dps, UOUU helps users cooperate with visual cues of collaboration actions; as users can express and interpret their intentions within Ds, UOUU promotes communication by presenting collaborative task-related text and patterns; and as users find collaborators hard to identify within Dpb, UOUU enhances the visual effect of the cues to remind users of the position of the task object (a set of examples is shown at the bottom of
The interactions of the UUD in UOUU are as follows:
Instruct users to perform tasks at the closer collaborator to optimise the task route.
In which, gradually instruct the users from communication to collaboration by following Hall’s proxemic zones.
In which
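The UUD-driven cue levels can be sketched in the same way. The paper does not state numeric boundaries for Dps, Ds, and Dpb, so the thresholds below use Hall’s commonly cited proxemic values (intimate below 0.45 m, personal below 1.2 m, social below 3.6 m) and should be treated as an assumption; the function and label names are illustrative.

```python
def uud_cue(distance_m: float) -> str:
    """Map a user-user distance (in metres) onto the collaboration cue
    levels described in the text, following Hall's proxemic zones.
    Zone boundaries (0.45 m, 1.2 m, 3.6 m) are assumed, not from the paper."""
    if distance_m < 0.45:
        return "none"         # intimate distance: collaboration is unlikely
    if distance_m < 1.2:
        return "action_cues"  # Dps: visual cues of collaboration actions
    if distance_m < 3.6:
        return "task_info"    # Ds: collaborative task-related text and patterns
    return "highlight"        # Dpb: enhanced cues marking positions
```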
UOD and UUD separately address how an individual user performs a task and when users collaborate. To address when and how to collaborate efficiently, we considered the relations between UOD and UUD to drive collaboration.
To compare whether collaboration is more efficient, we took the route distance for collaboration as the UUD (between the user and the closest collaborator) and the route distance for an independent task as the UOD (between the user and the closest object to interact with). The UUD and UOD mentioned in the remainder of
The interactions of UOUU in the UUD-driven part are as follows:
Instruct the user to work independently or to collaborate by comparing the UOD between the user and the task object with the UUD between the user and the collaborator.
In which
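A minimal sketch of this comparison, assuming 2D positions and straight-line routes; the tie-breaking rule (working independently when the two distances are equal) and all names are illustrative assumptions rather than the paper’s exact rule.

```python
import math

def distance(a, b):
    """Euclidean distance between two (x, y) positions in metres."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def next_instruction(user, objects, collaborators):
    """Compare the route to the nearest task object (UOD) with the route
    to the nearest collaborator (UUD) and suggest the shorter option."""
    uod = min(distance(user, o) for o in objects)
    uud = min(distance(user, c) for c in collaborators)
    # Equal distances default to independent work (an assumption).
    return "collaborate" if uud < uod else "work independently"
```

For example, a user standing 1 m from a collaborator but 4 m from the nearest unscanned parcel would be shown the collaboration cue rather than routed to the parcel.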
The study scenario requires participants to move during task operation, and task completion is related to the users’ positions. Therefore, a parcel-sorting task, which requires frequent movement in a warehouse to pick and distribute parcels, is an appropriate scenario for verifying the effectiveness of UOUU.
We first conducted a field study to understand collaboration scenarios in real logistics industries and validate the experimental scenario. We adopted the observation and interview method [
The parcel-sorting operation
During the field study, the participants faced two challenges in boosting their work speed. Firstly, they needed to switch frequently between the real world and the screen. After scanning the goods, workers needed to check the corresponding position of the goods and put them in the correct position. This process was mechanical and repetitive, and workers were easily distracted by the frequent switches between the screen and the environment. Secondly, the collaborator’s state was uncertain. Workers needed to determine what each other was doing, whether an operation needed help, and whether collaborating here and now was more efficient than working alone. Without a clear judgement on these questions, workers were more likely to work alone. In contrast, the UOUU in this paper reduces the workload associated with switching between virtual information and reality by adopting AR, and enhances the user’s perception of the task and collaborator state through combined distance-based interaction, addressing both challenges.
Sixteen participants (7 female, 9 male) were recruited for the experiments with a payment of 25 CNY. All participants were between 20 and 31 years old (Mage = 23.4, SDage = 4.082) and had no physical disabilities. There were 4 undergraduate, 6 master’s, and 6 doctoral students; 14 had engineering backgrounds and 2 science backgrounds. All participants had no AR experience and were unfamiliar with each other before the experiment. The participants were positioned between shelves during the study, and the researcher stood away from the shelves (
We set up controlled experiments to validate the effectiveness of UOUU. The independent variable was whether users were assigned collaboration tasks based on the combined distances. The dependent variables were the number of collaboration occurrences and task efficiency. We used collaboration and task efficiency evaluation methods to assess the significance of the differences.
We set up a parcel sorting task that required participants to move frequently. The scenario simulated a parcel sorting task in a real warehouse. Participants picked and scanned goods on unsorted shelves, sorted the goods according to the parcel information on the headset screen, and distributed them to the correct locations on the 2 sorted shelves for further packing and shipping.
The collaboration in the task required the participants to perform the corresponding task steps. The task steps operated by each participant were connected to form a dynamic production pipeline. In the parcel sorting task, UOUU coordinated participants with objects and with each other to form a lean workflow. Participants in the control group needed to consider these relations and build the process themselves.
There were three shelves in the experiment site (
The prototype followed the UOUU interactions in the parcel sorting scene. The experiment was based on a repetitive two-person task in which the corresponding information of objects and collaborators had been introduced prior to the experiment, and the whole task was performed within
The interactions of the UOUU experiment prototype are as follows:
In which the user needs to interact with
The graphic interface of UOUU displays a full-screen camera feed (
The other is a green rectangle superimposed on the AR markers of goods and shelves. The green rectangle on a good is the response to scanning it correctly, and the method adds the good to a task list to be performed. The green rectangle on the shelves indicates where the goods should be distributed (
The control group removed the combined UOD and UUD analysis, so there was no analysis of when and how to collaborate efficiently. Participants needed to decide when and how to collaborate by themselves (
We used two types of HMDs in the study. The ARBOX HMD contained reflective optical lenses and two external cameras (
The UOUU prototype is an Android application. We implemented the rendering of the AR experience and 3D information on AR markers through the Vuforia AR API. We used image scanning to read parcel information and used depth of field (DOF) in Unity, supplemented with a location API, to identify parcel locations in real time. The locations of the shelves were recorded in the prototype as fixed locations and switched after one round of the experiment. UOD and UUD were calculated as Euclidean distances based on the locations of users and objects [
The procedures included preparation, task operation, and questionnaires. All the procedures were recorded by two cameras for further analysis. The participants were equally divided into two groups: group 1 used the UOUU and group 2 used the method and interface of the control group. For the second round of each group, the distance between the two sorted shelves was increased by three meters to imitate the random distances between sorted shelves in a real warehouse.
Experiment preparation: The participants were asked to fill out a printed consent form and questionnaires on demographic data and previous AR experience. The participants learned how to use the devices and perform the order-sorting task through a trial task. All participants were told to feel free to collaborate and to complete tasks efficiently. In group 1, when participants performed the task, UOUU determined when and how to perform collaboration more efficiently in real time according to the criteria in
When the method judged that collaboration was efficient at a given moment of an independent task procedure, it instructed the participant to collaborate via an arrow pointing to the other user. The user interface in this application scene used an arrow as a gradual instruction to reduce variables, but it can be replaced with specific collaboration cues in other applications to achieve better performance. In group 2, the method only instructed participants to work independently; the participants needed to decide when and how to collaborate by themselves.
In the second round of the experiment, one of the sorted shelves was moved two meters away. The other procedures were the same as in round 1. After the task, the experimenters recorded the participants’ completion time and accuracy of parcel sorting. The participants were asked to fill out the questionnaire (
To reveal the influence of UOUU on collaboration and task efficiency, we analysed collaboration occurrences, task completion time and accuracy. The participants were asked to fill out questionnaires: NASA-TLX [
We analysed participants’ collaborative behaviours and categorised them into the following types according to the video material coding and notes framework mentioned above:
We analysed participants’ collaborative behaviour according to the above types and followed a grounded coding framework to measure the influence of the method on collaborative behaviours during the task.
The Shapiro-Wilk test showed that none of VC (W = 0.743,
Between rounds 1 and 2, the Mann-Whitney test showed no significant differences in the number of occurrences of collaboration styles, including CP (z = −0.870,
We used the NMM and psychological distance to analyse perceived awareness of collaboration in terms of the sense of the collaborator’s presence, communication, and shared experience. Social presence (
The Shapiro-Wilk test showed co-presence (W = 0.943,
The Shapiro-Wilk test showed social (W = 0.950,
We recorded task completion time, and calculated accuracy (
In the second round, task completion time (F (1, 14) = 0.000,
We observed that participants focused more on scanning and remembering target locations even though this did not require long-term memory; this was more frequent in group 2, which collaborated less. Due to the repetitive nature of the parcel-sorting task, participants were prone to making errors and being distracted by unrelated items. We found that some participants repeatedly scanned a good and its shelf several times to confirm whether they were operating correctly. In fact, the time spent delivering goods did not account for a high proportion of the entire task.
We used NASA-TLX to measure perceived workload, especially the performance and effort categories, which measure how users perceived their own performance under such workload (
The Shapiro-Wilk test showed that none of the NASA-TLX data followed a normal distribution (
We conducted an informal post-study interview to identify users’ perceptions and behaviours. We analysed the responses of the 16 participants to the open-ended question “What do you like and dislike about this system?”
Four participants in the experimental group reported the effect of UOUU on collaboration improvement. For example, participants mentioned: “it helps me understand my collaborators’ intentions and collaborate with each other faster”, “no need to make decisions in collaboration by myself”, “matching of collaborative tasks were conducted very fast”, and “the prompts for sorting and dividing tasks are concise and clear”. This suggests that the UOUU can effectively convey the intent of a collaborator, and that others can perceive the intention quickly.
In addition, two participants in the experimental group noted that they did not need to make decisions about collaboration, allowing them to collaborate more easily and achieve better task performance. In contrast, in the control group, three participants gave feedback such as: “after becoming distant from the collaborator, it felt like he was always out of my field of vision and it was a bit difficult to collaborate”, “I feel that I don’t know how to collaborate with others when there are no certain instructions”, and “I don’t know what the other person is doing when I want to collaborate”. This suggests that judging collaboration willingness and perceiving collaborators’ locations and status were major limitations for participants in the control group, and that the UOUU helped solve this problem.
Four participants supposed that augmented reality could improve the perception of collaboration, including perception of both the collaborators and the task objects. This benefit was mentioned by participants in both the experimental and control groups. For example, participants mentioned: “The task process does not need to be memorized, the instructions are intuitive and clear, and you can directly determine the location of the task”, “after using it for a period of time, I feel that I can become more familiar with the collaborator, and the collaboration efficiency will become higher”, and “the interface is simple, convenient and effective”. This indicates that collaborating through augmented reality can increase participants’ perception of task objects and collaborators.
Among the “dislikes” given by the participants, ten participants mentioned complaints about the hardware. For example, “the augmented reality device is too heavy and uncomfortable to wear”, “I get dizzy after wearing it for a long time”, “sometimes the camera is out of focus”, which indicates that the current augmented reality device still needs to be optimized in terms of user experience. In addition, three participants thought that the augmented reality method was somewhat interesting, and it could gamify the uninteresting and repetitive tasks. “Augmented reality is quite interesting, it’s kind of like playing a game where you can see virtual things in the real world”.
The UOUU assisted participants in perceiving and communicating collaborative intent, facilitating collaborative behaviour and enhancing task performance. The UOUU enhances participants’ cognitive efficiency when interacting with objects and collaborators, and adds some fun. However, augmented reality suffers from technical limitations, and its devices are still not highly usable: they are too bulky, and their user experience is not rated well enough.
Given the results (
| Aspect | Measure | Findings |
|---|---|---|
| Collaboration occurrences | Occurrence number | A significant increase in collaboration occurrence number in group 1 (; no significant difference between rounds 1 and 2. |
| Perceived collaboration | NMM and psychological distance | A significant increase in the collaborator’s social presence in terms of co-presence and perceived message understanding (; no significant difference in terms of attention allocation. A significant decrease in psychological distance in terms of temporal, social, and uncertainty (; no significant difference in terms of geographical. |
| Task performance | Task completion time and accuracy | A significant decrease in task completion time and an increase in accuracy in group 1 (; no significant difference between round 1 and round 2. |
| Perceived performance | NASA-TLX | A significant decrease in workload in terms of physical (; no significant difference in perceived workload in terms of mental, temporal, performance, effort, and frustration. |
The results demonstrated the effectiveness of the UOUU in improving the participants’ overall task performance, collaboration occurrences, and efficiency. The UOUU method significantly reduced task completion time and improved accuracy. The operation guidance based on combined UOD and UUD reduced the time participants spent interpreting the spatial relations between themselves, the objects, and other users, and more occurrences of collaboration reduced the possibility of making mistakes.
Compared with the control group, UOUU significantly increased the number of occurrences of collaboration styles, as the method helped users perceive their spatial relations with the other user and the task objects. Participants spent less time interpreting their situation in order to collaborate, which increased collaboration occurrences, and took less time to consider whether collaboration was efficient, which simplified collaboration and made them more willing to act collaboratively.
We adopted NMM social presence and psychological distance to measure collaboration relations [
Different from previous research that described partial spatial relations between participants and the environment, we analysed distances composed of multiple UODs and UUDs in collaborative work to form a more detailed spatial relation. Interpreting these spatial relations simultaneously, by measuring and comparing distances along possible interaction routes, gives a better understanding of the possibility and efficiency of user-object and user-user interaction. We therefore considered multiple UODs and UUDs to help users enhance AR collaborative task performance. The UOUU assigned efficient task instructions gradually by measuring multiple UODs and UUDs. We then conducted empirical research to explore whether adopting UOD and UUD-combined interaction can enhance collaboration and task performance.
The paper provides a new interaction method for the design of AR collaboration. Since the approach is based on face-to-face collaborative work scenarios, it can be applied to a wide range of collaborative work with varying distances. In addition to supporting collaboration, participants in the post-study interview mentioned that distance-driven interactions were more appealing and engaging than traditional on-screen interactions, which benefits entertainment design for multi-player collaborative games and multi-player interactive devices. For other disciplines, the results can be used both in engineering collaboration to improve efficiency and in social relations disciplines to explore the interplay of group relations and interaction styles. In engineering, the method can be applied to production scenes that require flexible production lines, arranged according to the positions of workers and task objects, instead of fixed ones. In the social sciences, the results of the UOUU can be used to explore interaction methods that change the collaborative relationships and psychological distance of group activities, and to further investigate their biological principles.
There are several limitations of the study. For example, the experiment was based on the logistics industry, and the effectiveness of UOUU in other scenarios that require frequent movement remains to be proven. The change in the relationships between participants is based on self-reported social presence and psychological distance, whose sociological mechanisms still need to be further explored. However, the two-person repetitive task in the experimental scenario was representative of a range of collaborative tasks, so to some extent the results of this paper are representative of numerous collaborative work scenarios with frequent distance variation.
The paper presents the UOUU method, a UOD and UUD combined method for collaboration task assignment, and conducts empirical studies to evaluate the influence of the UOD and UUD combination on participants’ collaboration and task performance in an AR-based parcel sorting context. The results show that the UOUU is an effective method that leverages user-object and user-user distances to stimulate collaboration in multi-user tasks. It successfully sensed these distances and accordingly assigned collaboration operations to the participants, thus promoting multi-user collaboration. The study evidence also indicates that the UOUU method enhanced the participants’ perception of the collaborator’s social presence.
The authors thank Yifei Shan, Zhongnan Huang, and Zihan Yan from Cross Device Computing Laboratory at Zhejiang University for prototype development and experiment assistance. We also thank all the participants.
The work is supported by
The authors declare that they have no conflicts of interest to report regarding the present study.
| Scale | Item | Question |
|---|---|---|
| Psychological distance | Q1 | When I think of the collaborator, he or she seems distant |
| | Q2 | The collaborator is too far away to move forward quickly |
| | Q3 | I find it difficult to spontaneously interact with collaborators in one place (e.g., discussions and decision making) |
| | Q4 | I feel I am interacting with my collaborators |
| | Q5 | My collaborators are quick to respond to my actions |
| | Q6 | I feel that my team interacts very closely |
| | Q7 | I feel isolated from my collaborators |
| | Q8 | I am sure my collaborators will respond to me |
| | Q9 | I’m not sure we work well together |
| NMM social presence | Q10 | I always notice my collaborators |
| | Q11 | I often see my collaborators paying attention to me |
| | Q12 | I always understand my collaborators’ intentions |
| | Q13 | My collaborators always fail to understand me |
| NASA-TLX | Q14 | How much brain power you are required to use in the task execution process |
| | Q15 | How much physical effort you are required to use in the task execution process |
| | Q16 | Do you feel overwhelmed by worrying about the progress of the task? |
| | Q17 | How well do you feel you performed in completing the task? |
| | Q18 | What percentage of effort do you need to put in yourself to complete this task? |
| | Q19 | How much worry, disappointment, nervousness or sadness do you feel during the task? |
Round 1: 1 Round 2: 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Experimental: 1 Control: 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
frustration | 1 | 1 | 1 | 1 | 2 | 2 | 2 | 3 | 1 | 2 | 1 | 1 | 1 | 1 | 1 | 3 |
effort | 2 | 1 | 1 | 1 | 1 | 1 | 2 | 2 | 3 | 3 | 1 | 1 | 1 | 1 | 3 | 2 |
performance | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 2 | 1 | 3 | 1 | 0 | 0 | 0 | 1 | 0 |
temporal | 1 | 1 | 1 | 1 | 1 | 1 | 3 | 2 | 3 | 4 | 1 | 2 | 3 | 1 | 1 | 2 |
physical | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 2 | 3 | 3 | 2 | 2 | 3 | 2 | 3 | 3 |
mental | 2 | 1 | 1 | 1 | 2 | 2 | 1 | 3 | 3 | 4 | 3 | 2 | 1 | 2 | 1 | 2 |
uncertainty2 | 2 | 1 | 1 | 1 | 2 | 1 | 2 | 4 | 4 | 4 | 2 | 3 | 2 | 2 | 5 | 5 |
uncertainty1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 | 3 | 1 | 2 | 0 | 0 | 4 | 4 |
social2 | 1 | 5 | 3 | 1 | 3 | 2 | 2 | 1 | 2 | 3 | 4 | 3 | 3 | 1 | 4 | 4 |
social1 | 0 | 0 | 2 | 1 | 2 | 1 | 1 | 2 | 2 | 3 | 2 | 2 | 2 | 2 | 3 | 3 |
temporal2 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 2 | 2 | 3 | 2 | 2 | 2 | 2 | 3 |
temporal1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 2 | 4 | 3 | 2 | 2 | 2 | 2 | 3 |
geographical3 | 1 | 1 | 1 | 1 | 1 | 2 | 2 | 2 | 1 | 3 | 3 | 3 | 1 | 2 | 2 | 3 |
geographical2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 2 | 4 | 3 | 1 | 1 | 3 | 3 |
geographical1 | 1 | 1 | 2 | 1 | 1 | 3 | 1 | 1 | 3 | 3 | 4 | 3 | 2 | 1 | 3 | 3 |
Perceived message understanding 2 | 4 | 4 | 4 | 4 | 4 | 4 | 3 | 4 | 3 | 2 | 2 | 2 | 4 | 1 | 2 | 2 |
Perceived message understanding 1 | 5 | 5 | 5 | 5 | 3 | 5 | 2 | 5 | 2 | 2 | 3 | 3 | 4 | 4 | 4 | 3 |
Attention allocation2 | 2 | 2 | 2 | 2 | 1 | 1 | 2 | 5 | 2 | 1 | 3 | 3 | 4 | 4 | 1 | 1 |
Attention allocation1 | 2 | 2 | 2 | 2 | 2 | 2 | 4 | 5 | 1 | 2 | 3 | 3 | 5 | 4 | 3 | 3 |
Co-presence2 | 4 | 0 | 2 | 4 | 2 | 3 | 3 | 4 | 3 | 2 | 1 | 2 | 2 | 4 | 1 | 1 |
Co-presence1 | 5 | 5 | 3 | 4 | 3 | 4 | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 3 | 2 | 2 |
Round 1: 1 Round 2: 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Experimental: 1 Control: 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Completion time (s) | 145 | 193 | 180 | 123 | 210 | 184 | 119 | 190 | 220 | 235 | 198 | 210 | 237 | 228 | 187 | 235 |
Accuracy | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 89% | 100% | 100% | 89% | 100% | 100% | 89% | 89% |
Collaborative picking (CP) | 1 | 7 | 4 | 0 | 11 | 6 | 6 | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Collaborative distributing (CD) | 4 | 1 | 3 | 9 | 1 | 1 | 3 | 3 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Vocal communication (VC) | 6 | 0 | 5 | 4 | 5 | 4 | 3 | 3 | 1 | 3 | 1 | 1 | 0 | 0 | 0 | 0 |