research-article · Open access · DOI: 10.1145/3613904.3642359 · CHI Conference Proceedings

Navigating the Virtual Gaze: Social Anxiety's Role in VR Proxemics

Published: 11 May 2024

Abstract

For individuals with Social Anxiety (SA), interacting with others can be a challenging experience, a concern that extends into the virtual world. While technology has made significant strides in creating more realistic virtual human agents (VHA), the interplay of gaze and interpersonal distance when interacting with VHAs is often neglected. This paper investigates the effect of dynamic and static Gaze animations in VHAs on interpersonal distance and their relation to SA. A Bayesian analysis shows that static centered and dynamic centering gaze led participants to stand closer to VHAs than static averted and dynamic averting gaze, respectively. In the static gaze conditions, this pattern was found to be reversed in SA: participants with higher SA kept larger distances for static-centered gaze than for averted gaze VHAs. These findings update theory, elucidate how nuanced interactions with VHAs must be designed, and offer renewed guidelines for pleasant VHA interaction design.
Figure 1: Top left: centered Gaze animation; bottom left: averted Gaze animation. Right: virtual human agents used, including (from left to right) two female (F1, F2) and two male (M1, M2) agents.

1 Introduction

Social interactions are essential for well-being [45]. During these interactions, information is sent and received (e.g., spoken words, gestures), including subtle social cues [84, 90] that offer insights into feelings, intentions, and emotions and play a vital role in setting conversational tones, reflecting societal norms, and establishing personal boundaries [37].
Empathetic computing seeks to equip computing systems with capabilities to sense, process, and react empathically to verbal and nonverbal social cues, making interfaces intuitive, effective, and meaningful [20]. By using virtual human-like agents (VHAs) that transmit social cues as embodied interfaces (i.e., that speak and use gestures), one can fundamentally build social human-computer interactions [6, 20, 71, 91]. Therefore, the design of empathetic computing systems like VHAs must consider personality traits that change the processing of social information. However, research is hampered by focusing on either gaze or proxemics alone [11, 12], by neglecting individual differences, or by studying only the direct effect of personal space or gaze [31, 32, 74].
For some, social cues have a profoundly different meaning, as is the case for people with Social Anxiety (SA) [60]. SA, characterized by the intense fear of others’ evaluation, leads to the development of cognitive and behavioral biases causing individuals to avoid social interactions or endure them with severe unease [89]. Most importantly, affected individuals show an interpretation bias towards social cues, misinterpreting neutral stimuli as hostile [104]. These patterns extend into digital domains, bringing both challenges and opportunities for VHA design [46, 55]: for instance, in Virtual Reality (VR), individuals with SA tend to walk further around a VHA and stand further away from it [46, 55]. SA can also amplify the perception of social cues, making interactions feel more intense and leading to compensatory behaviors such as establishing a larger Interpersonal Distance (IPD) or avoiding direct gaze [57, 89].
In VR, one can interact with objects ranging from rudimentary 3D models to complex VHAs [7, 91]. By emulating human attributes like gestures, speech, and gaze, VHAs act as interfaces, foster natural interactions with users [6, 20, 71], and create a distinct social context where people communicate and collaborate in shared spaces enhanced by nonverbal cues [50, 75, 106]. Nonverbal cues, particularly gaze direction [31, 33] and IPD [87, 101], are cornerstones of human communication that are ingrained in Metaverse applications such as Social VR [107]. They are vital if VHAs are to inspire human-like interactions [33, 56]. Yet, these social cues are subjective, their interpretation depending on individual traits such as SA [31, 100]. Therefore, the design of empathetic computing systems like VHAs must consider how individual traits change the way social information is processed.
A few exploratory but highly influential studies on Social VR have focused on gaze, proxemics, and their intricate relationship, revealing diverse findings [11, 12]. One study found nuanced, gender-driven differences in the clearance distance kept behind a VHA for tracked gaze as compared to static gaze [12]. Other authors found clearance distance to be enlarged when comparing VHAs with closed eyes to ones with open eyes [11]. However, more recent research argues that these methods might not capture the true essence of conversational IPD in virtual interactions [39, 43, 86]. This is further complicated by the inherent subjectivity of gaze and proxemics due to the influence of individual traits (e.g., [31, 100]) and mental health conditions like SA [46].
Existing research on the relationship between gaze and proxemics remains too inconsistent to inform design choices effectively: studies either focus on gaze and proxemics alone [11, 12] or neglect individual differences, such as the relation of IPD or gaze to SA [31, 32, 74]. The present study therefore compares preferred IPDs for different dynamic and static animations of centered and averted gaze in interactions with VHAs. Contrary to Bailenson et al. [11, 12], participants preferred shorter IPDs under centered gaze, irrespective of dynamic or static displays, underlining the role of direct social gaze as an affiliative signal. Considering SA, the pattern found by Bailenson et al. [11, 12] was reproduced: with increased SA, averted gaze led to smaller IPDs compared to centered gaze, while with decreased SA, the opposite was found. Thus, by considering participants’ subjective experiences, we could resolve a controversy in the literature on gaze and proxemics. From here, we revised design recommendations for VHAs’ nonverbal behavior to consider individual variation, intending to inform the design of more engaging Social VR applications for those with SA and the design of embodied conversational interfaces.

2 Related Work

2.1 Understanding Social Anxiety

SA is characterized by an intense fear of being evaluated by others [89] and ranges from elevated anxiety symptoms in certain situations (e.g., giving a speech) to clinical levels [57, 65]. SA manifests through behavioral, physical, and cognitive symptoms [23, 89]. Behaviorally, individuals might exhibit withdrawal and avoidance tendencies [66]. Physical symptoms include blushing and trembling or even excessive sweating in social situations [5, 24, 89]. On the cognitive side, SA can lead to biased recollections of social events [80] and to the feeling of being observed when one is not [32].
Those affected by SA often encounter difficulties in forming and sustaining relationships [15] and experience an elevated risk of bullying [15]. Individuals with SA thus often dread scenarios where they are either directly interacting with others or under the perception of being observed [89]. Such fears are not limited to the moment and can amplify disproportionately, beginning days or weeks before the actual interaction [89].
In direct encounters, those with SA show distinct avoidance-related patterns: avoiding direct eye contact [99] or consciously keeping larger IPDs in conversations [34, 55, 57, 74, 105]. Prior work provides evidence that socially anxious individuals tend towards an enhanced self-directed perception of subtle gaze cues [82], particularly for negative and neutral facial expressions [83], which may then amplify experienced social stress [95]. These avoidance behaviors highlight the necessity of understanding SA’s nonverbal patterns.

2.2 The Dynamics of Proxemics and Gaze

Proxemics, the study of personal space in human interactions, delineates the space around a person into different zones [37, 102]: intimate space, for close relationships (0-45 cm); personal space, for friends (45-120 cm); social space, for interactions with unfamiliar people (120-365 cm); and beyond that, up to 762 cm, public space for addressing the public [37]. Note, however, that recent studies have found personal space to have a radius of about 1 m [39].
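As a rough illustration, Hall’s zones translate into a simple distance classifier (a sketch for intuition only, not part of any study implementation described here):

```python
# Illustrative sketch: mapping an interpersonal distance in centimeters
# to Hall's proxemic zones as summarized above [37].

def proxemic_zone(distance_cm: float) -> str:
    if distance_cm < 45:
        return "intimate"       # 0-45 cm: close relationships
    elif distance_cm < 120:
        return "personal"       # 45-120 cm: friends
    elif distance_cm < 365:
        return "social"         # 120-365 cm: unfamiliar people
    elif distance_cm <= 762:
        return "public"         # up to 762 cm: addressing the public
    return "beyond public"

print(proxemic_zone(103))  # 'personal' -- close to the mean IPD reported later
```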
Personal space violations can create discomfort, often leading to defensive reactions [8, 38, 43, 58, 101]. However, personal space size is not static, being influenced by personal characteristics, such as gender [12, 37, 38, 39, 43, 44, 78, 103] and SA level [34, 55, 57, 74, 105]. There is a tendency for SA to increase IPD, which has neurological underpinnings: with increased SA, distance is perceived as closer, more salient, and as requiring more attentional resources [34, 74, 105].
One important situational determinant of IPD is gaze [11, 37, 67], which plays a role in personal space perception and regulation in virtual environments [11]. Some studies have explored gaze influence on proxemics, but results are mixed. Some reveal an enlarged IPD in mutual gaze conditions [11, 12, 67], while others show no significant effect [86]. It is important to note that minimal distance paradigms to measure proxemic behavior in these studies have been shown to be inferior to approach paradigms, as pointed out both by empirical HCI work [43] and meta-analysis [38].
Theoretical factors can also explain the mixed results. Equilibrium Theory (ET) offers a framework to understand the relationship between gaze and proxemics [3, 9, 11]. This theory posits that nonverbal cues help balance intimacy levels, by adopting avoidance and approach behaviors. Avoidance behaviors include gaze aversion, negative facial expressions, and increased IPD, whereas approach behaviors are marked by happy expressions, direct gaze, and reduced IPD [2, 102].
For VHA interactions, we can posit different hypotheses based on different ET interpretations: one can argue that direct gaze intensifies intimacy and arousal, leading to an increased IPD to restore the intimacy equilibrium [11, 12]. In contrast, traditional psychological theories suggest direct gaze is an affiliative behavior, leading to a smaller IPD [9]. Previous SA research has shown that direct gaze cues elicit arousal and avoidance patterns among individuals with SA [89, 99]; considering SA thus makes it possible to pit these interpretations of ET against each other and resolve the discrepancies within the literature.

2.3 Social Virtual Interactions

Social VR spaces, such as VRChat [28], have risen in popularity [22, 59]. They are characterized by full-body movement and gestures in real time, support for vivid spatial and temporal experiences, and the mediation of both verbal and nonverbal communication [62, 63, 70]. Research on the effects of Social VR is inconclusive: some argue that Social VR may increase the risk of harassment, while others argue that Social VR spaces produce more satisfying social experiences due to the increased sense of co-presence [16]. Personality traits and mental health conditions manifest virtually, increasing the risk of a low-quality experience. In SA, the design of the user’s self-representation [27] and agents’ social cues [21] can cause users to experience social stress in the game. With varying hardware capabilities, such misinterpretations intensify, as users do not know whether the absence of a social cue (e.g., gaze toward them) is intentional or a hardware limitation. Given the important role of gaze and its interpretation on IPD in the physical world [82, 95], there is a need to better understand the relationship between SA, gaze, subtle social cues like facial expressions, and IPD in virtual environments.

3 Method

In this study, we contrast competing ET interpretations of the interplay of gaze and IPD in a VR context, considering SA. On one side, prior research provides evidence that socially anxious individuals may tend to avoid looking at facial regions, referred to as the hypervigilance-avoidance hypothesis [26, 69]. On the other side, researchers have challenged this account of biased gaze behavior and found that the biases may fade in real-life interactions with others [81, 94]. Therefore, we explore the effects of gaze on IPD in relation to SA. Bailenson et al. [11, 12] contend that direct gaze amplifies intimacy, leading individuals to increase their IPD from VHAs. Conversely, Argyle and Dean [9] posit that direct gaze acts as an affiliative cue, resulting in a decreased IPD.
We aimed to replicate Bailenson et al.’s [11, 12] experiments in a realistic setting using a stop-distance task [86] (H1). Interactions are initiated with a VHA exhibiting either a centered (0°) or averted gaze (-15° or +15° from the centered direction).
We hypothesized that socially anxious participants show a larger IPD when the VHA’s gaze is centered (H1.1). Further, we hypothesized that socially anxious individuals keep a larger IPD even if gaze is averted (H1.2). Next, we assessed the influence of dynamic gaze shifts on IPD [53, 54]. Based on ET’s social signaling perspective [9], we hypothesized that a VHA averting its gaze dynamically (from 0° to ±15° at 1 m) leads to an increased IPD compared to centering its gaze (from ±15° to 0° at 1 m) (H2.1). Again, this effect is hypothesized to be magnified in participants with pronounced SA (H2.2). In line with prior research, we predicted a positive correlation between SA and preferred IPD (H3; [34, 55, 57, 74, 105]). All hypotheses and analyses were pre-registered.1 In alignment with Sicorello et al. [86], we adopted a Bayesian approach to quantify the likelihood of null differences.

3.1 Study Design

A stop-distance task was used in a within-participants design, where we measured IPD, our dependent variable, as the frontal distance (in cm) between participant and VHA, logged by participants when approaching the VHA. We manipulated Gaze Dynamics as our independent variable in four conditions (dynamic averting gaze, dynamic centering gaze, static averted gaze, static centered gaze), creating seven gaze levels, given that in the first three conditions gaze could be directed to the left or right on the horizontal plane. In addition, we used four different VHAs (two male and two female), with the seven gaze levels repeated three times for each VHA (3 repetitions × 7 gaze manipulations × 4 VHAs), resulting in 84 trials per participant. We decided on these conditions building on prior work, which emphasized that the gaze angle with a neutral facial expression is key and found that the interpretation of these angles is affected by SA, amplifying social stress when gaze is misinterpreted as staring [82].
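The resulting trial structure can be enumerated as follows (a sketch of the factorial design; the level labels are our own shorthand, not taken from the study materials):

```python
# Sketch: enumerating the within-participants design described above.
from itertools import product

vhas = ["F1", "F2", "M1", "M2"]                       # two female, two male VHAs
gaze_levels = [
    "static centered",                                 # the only single-variant level
    "static averted left", "static averted right",
    "dynamic averting left", "dynamic averting right",
    "dynamic centering from left", "dynamic centering from right",
]                                                      # 7 gaze levels in total
repetitions = range(3)

trials = list(product(repetitions, gaze_levels, vhas))
print(len(trials))  # 84 trials per participant (3 x 7 x 4)
```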
We also measured SA using the Liebowitz Social Anxiety Scale (LSAS; [30, 57]). Since SA is a dimensional trait [52], and to avoid a loss of power in the analyses due to binarizing SA, we followed prior research [46, 100, 101, 103] and assessed SA’s effect on proxemics continuously using Bayesian linear mixed models.
Furthermore, we explored participants’ gaze behavior when approaching the VHA by dividing our environment into two areas of interest (environment vs. VHA). This study was approved by the research ethics committee of Aalto University (D/718/03.04/2023). All data and analyses can be found online.2

3.2 Participants

Seventy-nine participants took part in the study. Three were excluded from further analyses due to experiencing motion sickness during the VR immersion, indicated by a score ≥ 14 on the Fast Motion Sickness (FMS) Scale [49] (remaining sample: M = 2.62, SD = 3.30). Two participants were excluded as they did not have normal or corrected-to-normal vision; minimal visual acuity (LogMAR ≤ 1 for all remaining participants)3 was confirmed with the Landolt C Visual Acuity Test [10]. Another six participants were excluded due to data issues: three due to poor questionnaire data (e.g., choosing the left-most option in nearly all questions), two due to one missing value each on the LSAS, and one because of lost IPD data.
| Variable | Categories | n | % | M | SD |
| --- | --- | --- | --- | --- | --- |
| FMS | | 68 | | 2.62 | 3.30 |
| Age | | 68 | | 26.15 | 6.31 |
| Gender | Female | 34 | 50.00% | | |
| | Male | 33 | 48.53% | | |
| | Prefer not to say | 1 | 1.47% | | |
| Race | White | 33 | 48.53% | | |
| | Asian | 23 | 33.82% | | |
| | Hispanic | 2 | 2.94% | | |
| | Middle Eastern | 4 | 5.88% | | |
| | Self-Described | 6 | 8.82% | | |
| Occupation | Student | 55 | 80.88% | | |
| | Academic | 4 | 5.88% | | |
| | Technical Expert | 1 | 1.47% | | |
| | Designer | 2 | 2.94% | | |
| | Administrator/Manager | 3 | 4.41% | | |
| | Other | 3 | 4.41% | | |
| VR Experience | None | 45 | 66.18% | | |
| | Once a month | 21 | 30.88% | | |
| | Three times a month | 1 | 1.47% | | |
| | Three times a week | 1 | 1.47% | | |
| Co-presence | | 68 | | 2.53 | 1.30 |
| VHA Gender Perception (F1) | Female | 68 | 100% | | |
| VHA Gender Perception (F2) | Female | 66 | 97.06% | | |
| | Male | 1 | 1.47% | | |
| | Non-Binary | 1 | 1.47% | | |
| VHA Gender Perception (M1) | Male | 68 | 100% | | |
| VHA Gender Perception (M2) | Male | 67 | 98.53% | | |
| | Female | 1 | 1.47% | | |
| Emotionality (F1) | | 68 | | 2.87 | 0.62 |
| Emotionality (F2) | | 68 | | 3.10 | 0.72 |
| Emotionality (M1) | | 68 | | 3.38 | 1.75 |
| Emotionality (M2) | | 68 | | 3.25 | 0.68 |
| IPQ General Score | | 68 | | 3.03 | 1.86 |
| IPQ Real Score | | 68 | | 1.80 | 1.58 |
| IPQ INV Score | | 68 | | 2.91 | 1.81 |
| IPQ SP Score | | 68 | | 3.93 | 1.57 |
| LSAS Total Score | | 68 | | 45.94 | 21.04 |
| LSAS Fearfulness Score | | 68 | | 23.91 | 11.93 |
| LSAS Avoidance Score | | 68 | | 22.03 | 11.40 |

Table 1: Demographic information (age, gender, race, occupation), FMS, VR experience, co-presence, VHA gender perception, emotionality, IPQ, and LSAS data.
The remaining sample comprised 68 participants (33 male, 34 female, one did not disclose gender; \(M_\text{age}\) = 26.15, \(SD_\text{age}\) = 6.31). Detailed demographics are reported in Table 1. Participants were recruited through flyers distributed at Aalto University and in the Helsinki region. Participants received a 20 EUR gift voucher as compensation for their participation.

3.3 Liebowitz Social Anxiety Scale

The Liebowitz Social Anxiety Scale (LSAS) is a questionnaire designed to assess cognitive, behavioral, and somatic manifestations of social phobia and anxiety [30, 57]. It consists of 24 items, rated on a 4-point Likert scale, with each being answered twice: once rating how anxious or fearful the respondent feels in the situation, from 0 (none) to 3 (severe), and once rating how often the situation is avoided, from 0 (never) to 3 (usually). The LSAS score is obtained by summing all item values and can thus range from 0 to 144. The LSAS has a high test-retest reliability of r = 0.83 and an internal consistency of Cronbach’s α = .79-.92 [13], which aligns with our empirical data, α = .93 [0.91, 0.95].
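A minimal scoring sketch, assuming two lists of the 24 item ratings from one respondent, illustrates the computation:

```python
# Sketch of LSAS scoring as described above (assumes `fear` and `avoidance`
# each hold one respondent's 24 item ratings, each in 0-3).

def lsas_total(fear: list[int], avoidance: list[int]) -> int:
    assert len(fear) == len(avoidance) == 24
    assert all(0 <= x <= 3 for x in fear + avoidance)
    return sum(fear) + sum(avoidance)   # total score ranges from 0 to 144

# Example: a rating of 1 on every item yields a total of 48,
# close to this sample's mean of 45.94.
print(lsas_total([1] * 24, [1] * 24))  # 48
```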

3.4 Apparatus

We used the Meta Quest Pro head-mounted display4 to render the VR environment, an iPad to present the post-experiment survey, and a computer for data recording. The VR environment was implemented in the Unity game engine 2021.3.16f1 [93], integrated with the Oculus XR Plugin [72] and the Ultimate XR Plugin [98]. Data logging was integrated into the rendering pipeline for efficient performance data collection. While the rendering pipeline generated frames at 90 fps, data logging was limited to 35.9 Hz. The headset was calibrated for each participant individually by adjusting the interpupillary distance and calibrating the eye tracking.

3.5 Stimuli

The virtual environment was designed to match the dimensions of the physical environment, to allow participants to walk up to the VHA in a natural manner without colliding with objects or walls in the real environment. To measure the sense of presence in the virtual environment, participants filled in the IGroup Presence Questionnaire (IPQ; [77]) after completing the experiment. The IPQ measures spatial presence, user involvement, and experienced realness of the virtual environment, using 7-point Likert scale items, ranging from 0 to 6. The IPQ showed participants felt relatively present in the virtual environment (M = 3.03, SD = 1.86).
The VHAs were selected from the Microsoft Rocketbox Avatar library, given its high-definition, fully rigged human-like avatars that are widely used in AR/VR and HCI research [35, 61, 103]. We selected four white adult VHAs (two female and two male) previously used in proxemics research in HCI [43]. Voice responses for each were prerecorded and implemented using Amazon’s text-to-speech software Amazon Polly [4].
All VHAs were set to have a neutral facial expression (see Figure 1), as validated by participants’ emotionality ratings of each VHA (depicted in Table 1). To control for potential effects of gaze direction [9, 11, 79, 103], VHA’s height was dynamically adjusted to participants’ height. At the beginning of every trial, participants were positioned in front of the VHA, facing it directly.

3.6 Gaze Visualization

For the static gaze conditions, the VHA’s gaze was held constant during the participants’ approach, fixed either centered (0°; the VHA established mutual eye contact during the participants’ approach) or averted (+15° left or -15° right; the VHA’s gaze deviated from the participant). In the dynamic gaze conditions, the VHA’s gaze shifted in response to the participants’ approach. In the dynamic averting condition, the VHA’s gaze was centered at the beginning of the trial and gradually averted to the right (+15°) or left (-15°). In the dynamic centering condition, the VHA’s gaze was averted at the beginning of the trial (+15° or -15°) and gradually centered (0°). The dynamic shift in gaze direction started when participants stood 2.5 m from the VHA and reached its endpoint when participants stood at 1 m (horizontal body movement did not change gaze). Horizontal eye movement was modeled with an inverse lerp function using the following formula:
\begin{equation} t = \frac{Distance_{VHA} - 2.5}{1 - 2.5} \tag{1} \end{equation}
Thus, the VHA’s eyes changed linearly as a function of the IPD, with t interpolating between the start and end gaze directions. Vertical gaze was not manipulated (e.g., see [31]) and was kept constant at the participants’ eye level [100].
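A minimal sketch of this animation logic follows (our reconstruction in Python rather than the study’s Unity implementation; clamping t outside the 2.5 m to 1 m window is our assumption, based on the description that gaze was static outside it):

```python
# Sketch of the gaze animation: the interpolation parameter t follows
# Equation (1), giving t = 0 at 2.5 m and t = 1 at 1 m.

def gaze_angle(distance_m: float, start_deg: float, end_deg: float) -> float:
    t = (distance_m - 2.5) / (1.0 - 2.5)   # inverse lerp between 2.5 m and 1 m
    t = max(0.0, min(1.0, t))              # clamp: gaze held fixed outside the window
    return start_deg + t * (end_deg - start_deg)

# Dynamic averting condition: centered (0 deg) at 2.5 m, fully averted (+15 deg) at 1 m.
for d in (3.0, 2.5, 1.75, 1.0, 0.8):
    print(d, round(gaze_angle(d, 0.0, 15.0), 1))  # 0.0, 0.0, 7.5, 15.0, 15.0
```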
To validate the gaze manipulation, participants were asked to report their subjective sense of feeling looked at on a visual analogue scale ranging from 0 ("I don’t feel looked at") to 1 ("I feel looked at"), in increments of 0.01. In the static gaze conditions, participants felt more looked at when the VHA had a mutual gaze (0°; M = 0.64, SD = 0.31) compared to an averted gaze (M = 0.18, SD = 0.24). In the dynamic gaze conditions, participants felt more looked at when the gaze was centering (15° → 0°; M = 0.66, SD = 0.31) compared to when the gaze was averting (0° → 15°; M = 0.24, SD = 0.28). There was no effect of SA on the subjective feeling of being looked at (all pb > 19.67%).
| Parameter | Median | 95% HDI | pb |
| --- | --- | --- | --- |
| (Intercept; static centered)* | 0.65 | [0.52, 0.77] | 0.00% |
| dynamic averting* | -0.45 | [-0.60, -0.29] | 0.00% |
| dynamic centering | 0.03 | [-0.04, 0.09] | 22.61% |
| static averted* | -0.52 | [-0.68, -0.36] | 0.00% |
| LSAS Total | 0.00 | [0.00, 0.00] | 40.10% |
| dynamic averting × LSAS Total | 0.00 | [0.00, 0.00] | 21.87% |
| dynamic centering × LSAS Total | 0.00 | [0.00, 0.00] | 44.87% |
| static averted × LSAS Total | 0.00 | [0.00, 0.00] | 19.67% |

Table 2: Model parameters for a Bayesian linear mixed model predicting the feeling of being looked at from Gaze animations (static centered, static averted, dynamic centering, dynamic averting), LSAS Total (the sum score of the LSAS questionnaire), and all interaction effects. We present the median of each parameter with its 95% HDI, representing the most likely parameter values, and pb, denoting the relative amount of posterior samples depicting an opposite pattern of effect. * indicates that the parameter is distinguishable from zero.

3.7 Stop-Distance Task

The social situation was standardized to minimize situational effects on IPD (e.g., [37, 100, 101]). Participants had to imagine a scenario in which they were in an unfamiliar location, asking a stranger for directions. In the VR stop-distance task, participants approached the VHA until a comfortable IPD was reached and then pressed the controller’s trigger button. A slider then appeared on the participants’ left hand, on which they rated their subjective feeling of being looked at. Afterwards, the VHA instructed them to press either a white or a black button and disappeared. Participants then took a step forward, causing the buttons to appear on the wall. After pressing, participants turned around to initiate the next trial. No time constraints were imposed.

3.8 Procedure

In accordance with the Declaration of Helsinki, participants gave written informed consent before starting the study. Participants were informed about the possibility of experiencing motion sickness during the VR immersion, which was monitored with the FMS Scale [49]. This was followed by 10 practice trials (using a centered-gaze female VHA), during which participants could clarify doubts and adjust the headset volume to control for effects of sound on IPD [37]. Then, participants completed the stop-distance task. The post-experiment survey was provided digitally on Qualtrics5 [76] and completed on an iPad. The survey included the FMS Scale [49]; general demographic information (i.e., age, gender, race, and occupation); previous VR experience; VHAs’ gender and emotionality ratings [102]; the IGroup Presence Questionnaire (IPQ; [77]); a measurement of experienced co-presence [14]; the LSAS [30, 57]; and the Triarchic Psychopathy Measure Screening (TriPM; [73]; collected for separate replication purposes and not analyzed in this study). A summary of all descriptive statistics can be found in Table 1. Participants were debriefed after the experiment. Participation in the whole study took approximately 60 minutes.

3.9 Bayesian Data Analysis

We used Bayesian parameter estimation (for benefits of the Bayesian approach in HCI, see [1, 36, 48, 96]) to quantify effects, using brms [17], an R [92] wrapper for the Stan sampler [18]. We computed 4 Hamiltonian Monte Carlo chains, each with 40000 iterations and a 20% warm-up. We use the metric pb for inference decisions. Similar to the traditional p-value [42, 64, 85], it denotes the proportion of posterior probability that the effect is negligible or reversed. A pb value less than or equal to 2.5% was deemed significant. We also calculated the 95% High-Density Interval (HDI) for all parameters and conducted mean comparisons on standardized outcome variables. We utilized \(\tilde{\delta }\) as an effect size estimate, comparable to Cohen’s d [40, 47].
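To make the pb metric concrete, the following sketch computes it from posterior draws (illustrative only; the normal approximation of the posterior, with a standard deviation reconstructed from the reported HDI width, is our assumption):

```python
# Sketch of the pb metric defined above: the proportion of posterior samples
# whose sign is opposite to that of the posterior median.
import numpy as np

def pb(samples: np.ndarray) -> float:
    median = np.median(samples)
    return float(np.mean(np.sign(samples) != np.sign(median)))

rng = np.random.default_rng(1)
# Assumed stand-in posterior for the static averted effect (median 1.32 cm,
# SD of about 0.47 reconstructed from its 95% HDI [0.41, 2.25]).
draws = rng.normal(1.32, 0.47, size=160_000)
print(f"{pb(draws):.2%}")  # roughly 0.25%, close to Table 3's 0.26%; <= 2.5% counts as significant
```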

4 Results

4.1 IPD Characteristics

4.1.1 Priors and Model Formulation.

For multilevel, trial-based modeling of the IPD data, we applied normally distributed priors (M = 0, SD = 30 cm) to all population-level effects, with Cholesky priors on the unstructured (residual) correlation (η = 2). Two-way interactions in our model were followed up with posterior predictive plots. We used effect coding (e.g., 1, -1) for categorical variables with two levels and set static centered gaze as the intercept; regression weights for the gaze conditions therefore estimate differences from static centered gaze. We estimated a varying intercept for every participant with varying slopes for VHA gender to account for the repeated-measures structure of the data. All population-level effects on the outcomes (Gaze condition, VHA gender, and LSAS scores) were fully crossed in the model.
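For illustration, effect coding of a two-level factor can be sketched as follows (a minimal example of the coding scheme only, not the study’s analysis code; the labels and sign assignment are our assumptions):

```python
# Sketch of effect coding (+1 / -1) for a two-level factor, so that the
# model intercept reflects the grand mean rather than a reference cell.
import numpy as np

vha_gender = np.array(["female", "male", "female", "male"])
coded = np.where(vha_gender == "female", -1, 1)   # sign assignment is illustrative
print(coded)  # [-1  1 -1  1]
```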
We compared a simple model predicting IPD from Gaze condition, VHA gender, and their interaction to models that additionally used LSAS scores and their interactions as predictors. Comparing the simple model (LOO = 41407.3) to models with the total LSAS score (LOO = 41396.5), the fear subscale (LOO = 41393.3), or the avoidance subscale (LOO = 41401.7)6 did not show any large discrepancies between the models. We thus chose to analyze the posterior of the total-score model (the subscale models’ parameters can be found in Appendix Table 4 and Table 5). This model explained 85.23% [84.92, 85.51] of the IPD data (for model parameters, see Table 3).
| Parameter | Median | 95% HDI | pb | \(\tilde{\delta }\), 95% HDI |
| --- | --- | --- | --- | --- |
| (Intercept; static centered) | 102.97 | [96.50, 109.22] | 0.00% | 3.58 [3.02, 4.15] |
| dynamic averting* | 1.06 | [0.14, 1.97] | 1.23% | 0.04 [0.00, 0.07] |
| dynamic centering* | -1.25 | [-2.18, -0.35] | 0.41% | -0.04 [-0.08, -0.01] |
| static averted* | 1.32 | [0.41, 2.25] | 0.26% | 0.05 [0.01, 0.08] |
| LSAS Total | 1.27 | [-5.26, 7.65] | 34.85% | 0.04 [-0.17, 0.27] |
| VHA gender* | -1.55 | [-2.30, -0.77] | 0.00% | -0.05 [-0.08, -0.03] |
| dynamic averting × LSAS Total | -0.69 | [-1.61, 0.22] | 7.05% | -0.02 [-0.06, 0.01] |
| dynamic centering × LSAS Total | -0.43 | [-1.35, 0.48] | 17.64% | -0.01 [-0.05, 0.02] |
| static averted × LSAS Total* | -1.25 | [-2.18, -0.34] | 0.41% | -0.04 [-0.08, -0.01] |
| dynamic averting × VHA gender | 0.57 | [-0.35, 1.49] | 11.36% | 0.02 [-0.01, 0.05] |
| dynamic centering × VHA gender | 0.10 | [-0.82, 1.02] | 41.72% | 0.00 [-0.03, 0.04] |
| static averted × VHA gender | 0.26 | [-0.66, 1.18] | 29.00% | 0.01 [-0.02, 0.04] |
| LSAS Total × VHA gender | -0.21 | [-0.98, 0.56] | 29.56% | -0.01 [-0.03, 0.02] |
| dynamic averting × LSAS Total × VHA gender | 0.24 | [-0.68, 1.16] | 30.75% | 0.02 [-0.02, 0.04] |
| dynamic centering × LSAS Total × VHA gender | -0.13 | [-1.07, 0.77] | 38.85% | 0.00 [-0.04, 0.03] |
| static averted × LSAS Total × VHA gender | 0.24 | [-0.68, 1.17] | 30.68% | 0.01 [-0.02, 0.04] |

Table 3: Model parameters for a Bayesian linear mixed model predicting interpersonal distance (IPD) from Gaze animations (static centered, static averted, dynamic centering, dynamic averting), LSAS Total (the sum score of the LSAS questionnaire), VHA gender, and all interaction effects. We present the median of each parameter with its 95% HDI, representing the most likely parameter values, pb, denoting the relative amount of samples depicting an opposite pattern of effect, and the standardized parameter estimates \(\tilde{\delta }\) with 95% HDI. * indicates that the parameter is distinguishable from zero.
Figure 2: Interpersonal distance in cm as a function of Gaze animation, averaged across trials and participants, with error bars depicting ±1 standard error of the mean.

4.1.2 Posterior Distribution Analysis.

Our model showed that participants took the aversion of gaze in the static conditions into account (static centered: M = 102.94, SD = 26.94; static averted: M = 104.29, SD = 26.10) when making judgments on preferred IPD, \(\tilde{b}_{\text{static averted}}\) = 1.32 cm [0.41, 2.25], pb = 0.20%, \(\tilde{\delta }_\text{b}\) = 0.04 [0.01, 0.08]. Our results are thus opposite to Bailenson et al. [11] and H1.1 (see also Figure 2 and Figure 5). In line with H1.2, we found that the effect, although opposite in direction, was enlarged with increased SA, \(\tilde{b}_\text{static averted $\times $ LSAS Total}\) = -1.25 cm [-2.18, -0.34], pb = 0.41%, \(\tilde{\delta }_\text{b}\) = -0.04 [-0.08, -0.01]. While for decreased SA we found the pattern static centered IPD < static averted IPD, the pattern was reversed for increased SA (static centered IPD > static averted IPD; see Figure 3 for the raw data and Figure 4 for model predictions).
Figure 3: Left: mean individual interpersonal distance in cm as a function of total LSAS score, averaged across trials and participants; there was no correlation (H3), \(\tilde{r}\) = .02 [-.23, .27], pb = 43.16%, with relatively high uncertainty. Right: mean individual interpersonal distance in cm as a function of total LSAS score, averaged across trials and participants, for each Gaze animation.
Figure 4: Predicted average IPD by our model as a function of Gaze condition and LSAS total score.
Next, we considered our predictions with regard to H2. In line with H2.1, participants preferred a larger IPD for dynamic averting gaze (M = 104.08, SD = 26.45) compared to dynamic centering gaze (M = 101.69, SD = 26.67), \(\tilde{b}_\text{diff}\) = 2.31 cm [1.56, 3.08], pb = 0.00%. No evidence was found for H2.2 (see also Table 3), since there was no indication of an increased effect of SA on IPD in any dynamic condition, pb > 7.05%.
Figure 5: Posterior density plot comparing levels of Gaze condition, with median and 95% HDI. The proportion of blue/green area indicates the proportion of posterior samples opposite to the median and is thus a visual representation of the posterior p-value. It quantifies the proportion of probability that the effect is zero or opposite given the observed data. The smaller the blue areas in comparison to the green areas, the more reliable the estimation of the effect.
Still, underlining dynamic centering gaze as an affiliative signal, we found that dynamic centering gaze produced slightly smaller IPDs than static centered gaze (M = 102.94, SD = 26.94), \(\tilde{b}_\text{dynamic centering}\) = -1.25 cm [-2.18, -0.35], pb = 0.41%, \(\tilde{\delta }_\text{b}\) = -0.04 [-0.08, -0.01]. Furthermore, we found an effect of VHA gender, \(\tilde{b}_\text{VHA Gender}\) = -1.55 cm [-2.30, -0.77], pb = 0.00%, \(\tilde{\delta }_\text{b}\) = -0.05 [-0.08, -0.03]: resembling common effects in proxemics, IPD to female VHAs was smaller (M = 102.07, SD = 26.31) than to male VHAs (M = 104.61, SD = 26.25). Lastly, female participants kept a larger IPD to both female and male avatars (F-F pairs: M = 103.47, SD = 25.03; F-M pairs: M = 106.05, SD = 24.70) than male participants (M-F pairs: M = 101.74, SD = 27.54; M-M pairs: M = 104.29, SD = 27.64).
Contrary to Bailenson et al. [12], there was no indication that VHA gender moderates the effect of gaze on IPD, pb > 11.36%. All other effects were centered at zero (see Table 3 and Figure 5 for a visual representation).
Figure 6: Posterior density plot for each parameter in the IPD regression model (predicting IPD from Gaze condition, total LSAS score, VHA gender, and all interactions), with median and 95% HDI. The proportion of blue/green area indicates the proportion of posterior samples opposite to the median and is thus a visual representation of the posterior p-value. It quantifies the proportion of probability that the effect is zero or opposite given the observed data. The smaller the blue areas in comparison to the green areas, the more reliable the estimation of the effect.

4.2 Approach Behavior Exploration

To analyze the eye-tracking data, we defined one area of interest: a cylindrical volume around the VHA, centered at the middle of the body (radius: body = 37.5 cm, head = 10 cm). For each trial, we compared the relative amount of eye-tracking samples, up to the point where participants logged their preferred IPD, in which participants looked at the VHA as opposed to the environment.
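A sketch of such a point-in-cylinder test follows (our reconstruction; the coordinate convention and the use of a single body radius are simplifying assumptions):

```python
# Sketch: classifying eye-tracking samples as "VHA" vs. "environment" with a
# vertical cylindrical area of interest around the VHA.
import numpy as np

def hits_vha(gaze_point: np.ndarray, vha_center: np.ndarray,
             radius_m: float = 0.375) -> bool:
    """True if the gaze point lies inside the cylinder around the VHA.

    Points are (x, y, z) with y pointing up; the study used a body radius of
    37.5 cm (10 cm for the head region, omitted here for brevity).
    """
    dx = gaze_point[0] - vha_center[0]
    dz = gaze_point[2] - vha_center[2]
    return float(np.hypot(dx, dz)) <= radius_m

samples = [np.array([0.1, 1.5, 0.2]), np.array([1.0, 1.2, 0.9])]
vha = np.array([0.0, 0.0, 0.0])
ratio = np.mean([hits_vha(p, vha) for p in samples])
print(f"relative amount of VHA samples: {ratio:.0%}")  # 50% in this toy example
```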
Running a linear mixed model with Gaze, SA, and their interaction as predictors (with a normal prior of M = 0, SD = 10%), we find a contrasting pattern of VHA gaze and participant gaze. With static centered gaze at \(\tilde{b}_\text{static centered}\) = 18.84% [17.22, 20.41], we see that static averted gaze increased the percentage of looking at the VHA by about 2%, \(\tilde{b}_\text{static averted}\) = 1.88% [1.40, 2.37], pb = 0%, \(\tilde{\delta }_\text{b}\) = 0.22 [0.16, 0.28]; this also holds for the differences to dynamic centering, \(\tilde{b}_\text{dynamic centering}\) = -0.46% [-0.95, 0.02], pb = 3.08%, \(\tilde{\delta }_\text{b}\) = -0.05 [-0.11, 0.00], and dynamic averting, \(\tilde{b}_\text{dynamic averting}\) = 1.54% [1.05, 2.03], pb = 0%, \(\tilde{\delta }_\text{b}\) = 0.18 [0.12, 0.24] (see also Figure 7). Comparing the remaining conditions directly by inspecting the posterior predictive distribution, dynamic averting differed from dynamic centering, \(\tilde{b}_\text{diff}\) = 2.00% [1.61, 2.40], pb = 0%, and dynamic centering from static averted, \(\tilde{b}_\text{diff}\) = -2.34% [-2.74, -1.95], pb = 0%, but not dynamic averting from static averted, \(\tilde{b}_\text{diff}\) = -0.34% [-0.74, 0.06], pb = 4.61%, nor dynamic centering from static centered, \(\tilde{b}_\text{diff}\) = -0.46% [-0.95, 0.02], pb = 3.08%. Therefore, participants looked the least at the VHA when it had a dynamic centering gaze and the most when its gaze was averted (see again Figure 7). We also found an effect of SA for static centered gaze: each standard deviation in SA scores increased the relative time looking at the VHA by about 2.7%, \(\tilde{b}_\text{LSAS total}\) = 2.73% [1.13, 4.31], pb = 0.05%, \(\tilde{\delta }_\text{b}\) = 0.32 [0.13, 0.50]. There was no interaction effect of SA and Gaze condition, all pb > 3.49%.
Figure 7: Relative amount of eye-tracking samples (VHA/all) as a function of Gaze condition, averaged across trials and participants, with error bars depicting ±1 standard error of the mean.

5 Discussion

5.1 Summary of the Results

Our study illuminates the interplay between gaze, proxemics, and SA. We used Bayesian parameter estimation to account for the uncertainty in estimating the size of the effects. Contrary to Bailenson et al. [11, 12] and H1.1, we found that participants prefer a shorter IPD to VHAs with a centered gaze compared to an averted or dynamically averting gaze. The difference between static averted and centered gaze diminished with SA, up to the point where participants with higher levels of SA preferred a closer IPD when the gaze was averted than when it was centered.
Aligning with H2.1, participants preferred a larger IPD when the gaze was dynamically averting compared to dynamically centering. Notably, dynamically centering the gaze produced the smallest IPD in our study, highlighting that dynamic gaze is less ambiguous and can enrich our interactions with VHAs. However, no support was found for H2.2. We did not find any indication that SA increased IPD (H3); our Bayesian approach, however, showed that we did not have enough data to be conclusive about the effect of SA on IPD.
Conversely, by exploring participants’ gaze using eye tracking, we found that a centered or centering VHA gaze diminished the amount of time participants looked at the VHA, and that SA increased the overall time spent looking at the VHAs.

5.2 Explanation of Findings

To reiterate, ET posits that individuals engage in a dynamic balance between intimacy and personal space [12, 51]. When confronted with cues that increase perceived intimacy (e.g., direct gaze), IPD may be increased to maintain a comfortable equilibrium, thus adopting compensation behaviors [51]. Argyle and Dean [9] have previously highlighted gaze’s significance as an affiliative signal, suggesting that it can serve as an invitation for closer interaction, opposite to what Bailenson et al. [12] proposed.
In our study, for average SA levels, the pattern was evident: centered gaze was preferred over averted gaze, possibly indicating a feeling of comfort and affiliation, thus supporting Argyle and Dean’s [9] proposal. Regarding SA’s effect on IPD, no main effect was found, contrasting with previous findings [34, 46]; however, more data are needed to estimate this effect.
Despite this, a tendency was found among participants with increased SA: a centered gaze seemed to evoke heightened intimacy and arousal, leading to a larger IPD. When looking at the VHA, increased SA possibly led to higher arousal, caused by biased interpretation of social cues. According to ET, individuals try to lower the arousal promoted by the increase in perceived intimacy by increasing their IPD. Arguably, SA moderates gaze-promoted intimacy: direct mutual gaze is not intrinsically positive or negative, since it can either induce feelings of intimacy and signal attention in those with average levels of SA, or promote uncomfortable levels of arousal that lead to compensation behaviors in individuals with increased levels of SA.
This latter interpretation aligns with Bailenson et al. [11, 12]. Essentially, while Argyle and Dean’s [9] proposal holds in the broader context, the specific direction of the balance (approach or avoidance) can vary based on, for example, personality traits. Therefore, designers of inclusive Social VR experiences have to know their audience and design VHA interactions with ET in mind.

5.3 Limitations and Future Research

Our study, while shedding light on several nuances of VHA interactions, is not without limitations.
First, we must acknowledge the influence of cultural background on the experience of SA characteristics [41]. For instance, what might be deemed an intimate distance in one culture might be perceived as too distant in another [37, 86, 88]. Future research should delve deeper into how cultural background affects users’ behavior and perception in virtuo.
Second, physiological arousal, which could moderate the relationship between gaze, proxemics, and SA, was not measured (see, e.g., [19, 58]). Future studies may simulate personal space violations for SA and enhance measurements with physiological sensing (e.g., skin-conductance response; [43]).
Third, one could argue that the initial rather than the final gaze is critical for IPD and its interplay with SA. Recall that in the static conditions, gaze was either averted or centered throughout, while in the dynamic centering condition, gaze was initially averted and then gradually centered, and the opposite held for the dynamic averting condition. IPDs for static and dynamic conditions resemble each other with regard to the end of the animation, not the beginning (averting/averted > centering/centered), mirrored in the ratings of gaze. Thus, the effect of gaze on IPD cannot be explained by initial gaze alone. Could initial gaze explain the differences between gaze conditions regarding SA, e.g., entertaining the idea that participants with SA did not look at the avatars when approaching? This is also unlikely: we find an SA effect only for static conditions and not for dynamic conditions, and, supporting this, no interaction between SA and Gaze condition in our eye-tracking data. Nevertheless, future researchers interested in dynamic gaze patterns should add conditions with fully averting gaze (i.e., from looking away to the left to looking away to the right as one approaches).

5.4 Implications

In line with van Berkel and Hornbæk [97], we highlight theoretical and HCI-oriented implications for the design of Social VR, as well as implications for social anxiety research:

5.4.1 Implications for Human-Computer Interaction.

Our research highlights the crucial relationship between gaze and proxemics and their interaction with SA, pointing to three key approaches for improving user experience:
First, dynamic responsiveness is essential, where VHAs adjust their gaze and other behaviors in real-time, based on user actions or physiological data like eye-tracking to foster an engaging environment.
Second, designs should be context-aware, adapting to the unique dynamics of virtual settings. For example, in intimate conversations, VHAs might adjust their gaze and distance differently than they would in traditional face-to-face interactions, taking cues from ET.
Third, introducing training modes could help users unfamiliar or uncomfortable with VHA behaviors to acclimate by adjusting settings to their comfort level. Additionally, designers should consider implementing "gaze awareness" features, since a lack of gaze tracking could result in unintentional staring by VHAs affecting proxemics.

5.4.2 Implications for Social Anxiety.

As shown in earlier work, socially anxious individuals tend to prefer online communication, which allows them to hide from potential evaluation by others. With the broader application of better tracking techniques for gaze and other subtle social cues, Social VR may become a challenging environment for socially anxious individuals. Biases learned in the physical world may be transferred and even intensified through online replication, causing more social stress than relief. On the other hand, if they try to hide their social cues in VR, others may feel discomfort engaging with the socially anxious individual, in turn increasing their SA. Therefore, designers of Social VR and empathetic computing systems at large need to carefully consider their design choices and how to present social cues. Our results may help designers of assessment tools and digital interventions find new ways to harness behavioral data in virtual environments for the early detection of SA, a critical aspect of successful treatment [89].

5.5 Ethical concerns

The prospect of detecting personality traits in users, especially within virtual environments, raises several ethical concerns. One primary concern is consent [25]. Users might not be aware that their interactions, behaviors, and responses can be indicative of their personality traits; extracting such information without explicit consent infringes on individual privacy rights [25]. Miller et al. [68] could identify people from 5 minutes of motion data with high accuracy and propose that such data should be regarded as personal data. While we were interested in the correlation pattern of IPD and personality to improve the design of virtual environments, given that personality traits can be distinctly linked to stimuli and behavior in VR [100, 102, 103], we encourage research into privacy-preserving techniques that can be adapted to users’ personalities. Moreover, if virtual environments are tailored to cater to identified personality traits, users might end up in echo chambers reinforcing existing beliefs and behaviors instead of deconstructing harmful behavior of users with SA [29, 57].

6 Conclusion

The present experimental study in the domain of empathetic computing resolves inconsistencies in the literature concerning the interplay of gaze and proxemics for VHAs by considering SA. We found that participants generally prefer shorter distances to VHAs displaying a static centered or dynamically centering gaze compared to an averted gaze. With an increase in SA, however, this pattern reverses, with participants with SA traits keeping larger distances when being looked at directly, indicating a nuanced interplay of gaze, proxemics, and SA. In the Metaverse, understanding the nuances of VHA interaction becomes pivotal for designing rich, inclusive, and comfortable virtual experiences. Our study of the interplay of gaze, proxemics, and SA provides new insights into their complexity. While foundational theories provide overarching frameworks, the intricacies of individual factors can significantly modify social interaction patterns. As our digital and real worlds continue to merge, researchers and designers must account for these subtleties, guaranteeing inclusive digital interactions.

Acknowledgments

Aalto Science Institute funded Beatriz Mello. This work was supported by MAGICS, the national infrastructure for human virtualization and remote presence. The study was funded by HORIZON-CL4-2023-HUMAN-01-CNECT award number 101136006 (XTREME). Special thanks go to Jasper Quinn and Agnes Kloft for their support in testing and to Esko Evtyukov for creating the environment.

A LSAS Subscales

A.1 LSAS Avoidance

| Parameter | Median | 95% HDI | pb | \(\tilde{\delta }\), 95% HDI |
| --- | --- | --- | --- | --- |
| (Intercept; static-mutual) | 102.97 | [96.60, 109.44] | 0.00% | 3.58 [3.02, 4.15] |
| dynamic averting* | 1.06 | [0.14, 1.98] | 1.26% | 0.04 [0.00, 0.07] |
| dynamic centering* | -1.25 | [-2.17, -0.32] | 0.43% | -0.04 [-0.08, -0.01] |
| static averted* | 1.32 | [0.40, 2.24] | 0.26% | 0.05 [0.01, 0.08] |
| LSAS Avoidance | -0.29 | [-6.55, 6.21] | 46.42% | -0.01 [-0.23, 0.21] |
| VHA gender* | -1.55 | [-2.32, -0.78] | 0.00% | -0.05 [-0.08, -0.03] |
| dynamic averting × LSAS Avoidance | -0.71 | [-1.63, 0.20] | 6.45% | -0.02 [-0.06, 0.01] |
| dynamic centering × LSAS Avoidance | -0.46 | [-1.37, 0.45] | 16.34% | -0.02 [-0.05, 0.02] |
| static averted × LSAS Avoidance* | -0.98 | [-1.88, -0.05] | 1.84% | -0.03 [-0.07, 0.00] |
| dynamic averting × VHA gender | 0.57 | [-0.36, 1.48] | 11.17% | 0.02 [-0.01, 0.05] |
| dynamic centering × VHA gender | 0.10 | [-0.81, 1.03] | 41.67% | 0.00 [-0.03, 0.03] |
| static averted × VHA gender | 0.26 | [-0.66, 1.18] | 29.06% | 0.01 [-0.02, 0.04] |
| LSAS Avoidance × VHA gender | -0.28 | [-1.03, 0.49] | 24.01% | -0.01 [-0.04, 0.02] |
| dynamic averting × LSAS Avoidance × VHA gender | 0.27 | [-0.66, 1.17] | 28.37% | 0.01 [-0.02, 0.04] |
| dynamic centering × LSAS Avoidance × VHA gender | -0.01 | [-0.94, 0.90] | 48.86% | 0.00 [-0.03, 0.03] |
| static averted × LSAS Avoidance × VHA gender | 0.23 | [-0.70, 1.13] | 31.38% | 0.01 [-0.02, 0.04] |

Table 4: Model parameters for a Bayesian linear mixed model predicting interpersonal distance (IPD) from Gaze animations (static centered, static averted, dynamic centering, dynamic averting), LSAS Avoidance (the sum score of the LSAS Avoidance subscale), VHA gender, and all interaction effects. We present the median of each parameter with its 95% HDI, representing the most likely parameter values, pb, denoting the relative amount of samples depicting an opposite pattern of effect, and the standardized parameter estimates \(\tilde{\delta }\) with 95% HDI. * indicates that the parameter is distinguishable from zero.

A.2 LSAS Fearfulness

| Parameter | Median | 95% HDI | pb | \(\tilde{\delta }\), 95% HDI |
| --- | --- | --- | --- | --- |
| (Intercept; static-mutual) | 102.92 | [96.53, 109.26] | 0.00% | 3.58 [3.01, 4.15] |
| dynamic averting* | 1.05 | [0.12, 1.96] | 1.31% | 0.04 [0.00, 0.07] |
| dynamic centering* | -1.26 | [-2.18, -0.33] | 0.38% | -0.04 [-0.08, -0.01] |
| static averted* | 1.31 | [0.39, 2.23] | 0.28% | 0.05 [0.01, 0.08] |
| LSAS Fearfulness | 2.42 | [-4.01, 8.87] | 35.40% | 0.08 [-0.13, 0.31] |
| VHA gender* | -1.55 | [-2.32, -0.78] | 0.01% | -0.05 [-0.08, -0.03] |
| dynamic averting × LSAS Fearfulness | -0.53 | [-1.45, 0.39] | 12.85% | -0.02 [-0.05, 0.01] |
| dynamic centering × LSAS Fearfulness | -0.32 | [-1.27, 0.58] | 24.61% | -0.01 [-0.04, 0.02] |
| static averted × LSAS Fearfulness* | -1.26 | [-2.18, -0.33] | 0.35% | -0.04 [-0.08, -0.01] |
| dynamic averting × VHA gender | 0.57 | [-0.35, 1.49] | 11.36% | 0.02 [-0.01, 0.05] |
| dynamic centering × VHA gender | 0.10 | [-0.81, 1.03] | 41.63% | 0.00 [-0.03, 0.04] |
| static averted × VHA gender | 0.26 | [-0.67, 1.18] | 28.96% | 0.01 [-0.02, 0.04] |
| LSAS Fearfulness × VHA gender | -0.11 | [-0.89, 0.66] | 39.21% | 0.00 [-0.03, 0.02] |
| dynamic averting × LSAS Fearfulness × VHA gender | 0.16 | [-0.79, 1.05] | 36.66% | 0.01 [-0.03, 0.04] |
| dynamic centering × LSAS Fearfulness × VHA gender | -0.23 | [-1.15, 0.70] | 31.48% | -0.01 [-0.04, 0.02] |
| static averted × LSAS Fearfulness × VHA gender | 0.20 | [-0.73, 1.12] | 33.66% | 0.01 [-0.03, 0.04] |

Table 5: Model parameters for a Bayesian linear mixed model predicting interpersonal distance (IPD) from Gaze animations (static centered, static averted, dynamic centering, dynamic averting), LSAS Fearfulness (the sum score of the LSAS Fearfulness subscale), VHA gender, and all interaction effects. We present the median of each parameter with its 95% HDI, representing the most likely parameter values, pb, denoting the relative amount of samples depicting an opposite pattern of effect, and the standardized parameter estimates \(\tilde{\delta }\) with 95% HDI. * indicates that the parameter is distinguishable from zero.

Footnotes

1 See https://aspredicted.org/52M_RRZ; hypotheses are numbered differently.
3 Deviates from the pre-registration.
6 We refit the model with 4 chains and 4000 iterations for the LOO computation.

Supplemental Material

MP4 File: Video Presentation (transcript available)

References

[1]
Kevin Ackermans, Ellen Rusman, Rob Nadolski, Marcus Specht, and l. Saskia Brand-Gruwel. 2019. Video-or text-based rubrics: What is most effective for mental model growth of complex skills within formative assessment in secondary schools?Computers in Human Behavior 101 (2019), 248–258. https://doi.org/10.1016/j.chb.2019.07.011
[2]
Reginald B. Adams Jr and Robert E. Kleck. 2005. Effects of direct and averted gaze on the perception of facially communicated emotion.Emotion 5, 1 (2005), 3–11. https://doi.org/10.1037/1528-3542.5.1.3
[3]
John R. Aiello and Donna E. Thompson. 1980. When compensation fails: Mediating effects of sex and locus of control at extended interaction distances. Basic and Applied Social Psychology 1, 1 (1980), 65–82. https://doi.org/10.1207/s15324834basp0101_5
[4]
Inc. Amazon Web Services. 2023. Text to speech software – Amazon Polly. https://aws.amazon.com/polly/
[5]
DSMTF American Psychiatric Association, American Psychiatric Association, 2013. Diagnostic and statistical manual of mental disorders: DSM-5. Vol. 5. American psychiatric association Washington, DC.
[6]
Sean Andrist, Michael Gleicher, and Bilge Mutlu. 2017. Looking coordinated: Bidirectional gaze mechanisms for collaborative interaction with virtual characters. In Proceedings of the 2017 CHI conference on human factors in computing systems. ACM Press, 2571–2582. https://doi.org/10.1145/3025453.3026033
[7]
Sean Andrist, Tomislav Pejsa, Bilge Mutlu, and Michael Gleicher. 2012. Designing Effective Gaze Mechanisms for Virtual Agents. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, 705––714. https://doi.org/10.1145/2207676.2207777
[8]
Martin Aranguren. 2015. Nonverbal interaction patterns in the Delhi Metro: Interrogative looks and play-faces in the management of interpersonal distance. Interaction Studies 16, 3 (2015), 526–552. https://doi.org/10.1075/is.16.3.08ara
[9]
Michael Argyle and Janet Dean. 1965. Eye-contact, distance and affiliation. Sociometry 28, 3 (1965), 289–304. https://doi.org/10.2307/2786027
[10] Michael Bach. 1996. The Freiburg Visual Acuity Test-automatic measurement of visual acuity. Optometry and Vision Science 73, 1 (1996), 49–53. https://doi.org/10.1097/00006324-199601000-00008
[11] Jeremy N. Bailenson, Jim Blascovich, Andrew C. Beall, and Jack M. Loomis. 2001. Equilibrium theory revisited: Mutual gaze and personal space in virtual environments. Presence: Teleoperators & Virtual Environments 10, 6 (2001), 583–598. https://doi.org/10.1162/105474601753272844
[12] Jeremy N. Bailenson, Jim Blascovich, Andrew C. Beall, and Jack M. Loomis. 2003. Interpersonal distance in immersive virtual environments. Personality and Social Psychology Bulletin 29, 7 (2003), 819–833. https://doi.org/10.1177/0146167203029007002
[13] Sandra L. Baker, Nina Heinrichs, Hyo-Jin Kim, and Stefan G. Hofmann. 2002. The Liebowitz social anxiety scale as a self-report instrument: a preliminary psychometric analysis. Behaviour Research and Therapy 40, 6 (2002), 701–715. https://doi.org/10.1016/s0005-7967(01)00060-2
[14] Elisabetta Bevacqua, Romain Richard, and Pierre De Loor. 2017. Believability and co-presence in human-virtual character interaction. IEEE Computer Graphics and Applications 37, 4 (2017), 17–29. https://doi.org/10.1109/MCG.2017.3271470
[15] Bridget K. Biggs, Eric M. Vernberg, and Yelena P. Wu. 2012. Social anxiety and adolescents’ friendships: The role of social withdrawal. The Journal of Early Adolescence 32, 6 (2012), 802–823. https://doi.org/10.1177/0272431611426145
[16] Saniye Tugba Bulu. 2012. Place presence, social presence, co-presence, and satisfaction in virtual worlds. Computers & Education 58, 1 (2012), 154–161. https://doi.org/10.1016/j.compedu.2011.08.024
[17] Paul-Christian Bürkner. 2017. brms: An R package for Bayesian multilevel models using Stan. Journal of Statistical Software 80, 1 (2017), 1–28. https://doi.org/10.18637/jss.v080.i01
[18] Bob Carpenter, Andrew Gelman, Matthew D. Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Marcus Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell. 2017. Stan: A probabilistic programming language. Journal of Statistical Software 76, 1 (2017), 1–32. https://doi.org/10.18637/jss.v076.i01
[19] Alice Cartaud, Gennaro Ruggiero, Laurent Ott, Tina Iachini, and Yann Coello. 2018. Physiological response to facial expressions in peripersonal space determines interpersonal distance in a social interaction context. Frontiers in Psychology 9, 657 (2018), 1–11. https://doi.org/10.3389/fpsyg.2018.00657
[20] Justine Cassell, Timothy Bickmore, Mark Billinghurst, Lee Campbell, Kenny Chang, Hannes Vilhjálmsson, and Hao Yan. 1999. Embodiment in conversational interfaces: Rea. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, 520–527. https://doi.org/10.1145/302979.303150
[21] Junwen Chen, Michelle Short, and Eva Kemps. 2020. Interpretation bias in social anxiety: A systematic review and meta-analysis. Journal of Affective Disorders 276 (2020), 1119–1130. https://doi.org/10.1016/j.jad.2020.07.121
[22] Ruizhi Cheng, Nan Wu, Matteo Varvello, Songqing Chen, and Bo Han. 2022. Are We Ready for Metaverse? A Measurement Study of Social Virtual Reality Platforms. In Proceedings of the 22nd ACM Internet Measurement Conference. Association for Computing Machinery, 504–518. https://doi.org/10.1145/3517745.3561417
[23] David M. Clark and Adrian Wells. 1995. A cognitive model of social phobia. In Social phobia: Diagnosis, assessment, and treatment. The Guilford Press, 69–93.
[24] Kathryn M. Connor, Jonathan R. T. Davidson, Henry Chung, Ruoyong Yang, and Cathryn M. Clary. 2006. Multidimensional effects of sertraline in social anxiety disorder. Depression and Anxiety 23, 1 (2006), 6–10. https://doi.org/10.1002/da.20086
[25] Jaybie A. De Guzman, Kanchana Thilakarathna, and Aruna Seneviratne. 2019. Security and Privacy Approaches in Mixed Reality: A Literature Survey. Comput. Surveys 52, 6 (2019), 1–37. https://doi.org/10.1145/3359626
[26] Martin Dechant, Sabine Trimpl, Christian Wolff, Andreas Mühlberger, and Youssef Shiban. 2017. Potential of virtual reality as a diagnostic tool for social anxiety: A pilot study. Computers in Human Behavior 76 (2017), 128–134. https://doi.org/10.1016/j.chb.2017.07.005
[27] Martin Johannes Dechant, Max V. Birk, Youssef Shiban, Knut Schnell, and Regan L. Mandryk. 2021. How Avatar Customization Affects Fear in a Game-Based Digital Exposure Task for Social Anxiety. Proceedings of the ACM on Human-Computer Interaction 5, CHI PLAY (2021), 1–27. https://doi.org/10.1145/3474675
[28] VRChat Inc. 2014. VRChat. https://hello.vrchat.com/
[29] Paul M. G. Emmelkamp, Katharina Meyerbröker, and Nexhmedin Morina. 2020. Virtual reality therapy in social anxiety disorder. Current Psychiatry Reports 22, 7 (2020), 32. https://doi.org/10.1007/s11920-020-01156-1
[30] D. M. Fresco, M. E. Coles, Richard G. Heimberg, M. R. Liebowitz, S. Hami, Murray B. Stein, and D. Goetz. 2001. The Liebowitz Social Anxiety Scale: a comparison of the psychometric properties of self-report and clinician-administered formats. Psychological Medicine 31, 6 (2001), 1025–1035. https://doi.org/10.1017/s0033291701004056
[31] Matthias Gamer and Heiko Hecht. 2007. Are you looking at me? Measuring the cone of gaze. Journal of Experimental Psychology: Human Perception and Performance 33, 3 (2007), 705–715. https://doi.org/10.1037/0096-1523.33.3.705
[32] Matthias Gamer, Heiko Hecht, Nina Seipp, and Wolfgang Hiller. 2011. Who is looking at me? The cone of gaze widens in social phobia. Cognition and Emotion 25, 4 (2011), 756–764. https://doi.org/10.1080/02699931.2010.503117
[33] Maia Garau, Mel Slater, Vinoba Vinayagamoorthy, Andrea Brogni, Anthony Steed, and M. Angela Sasse. 2003. The impact of avatar realism and eye gaze control on perceived quality of communication in a shared immersive virtual environment. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, 529–536. https://doi.org/10.1145/642611.642703
[34] Nur Givon-Benjio and Hadas Okon-Singer. 2020. Biased estimations of interpersonal distance in non-clinical social anxiety. Journal of Anxiety Disorders 69 (2020), 102171. https://doi.org/10.1016/j.janxdis.2019.102171
[35] Mar Gonzalez-Franco, Eyal Ofek, Ye Pan, Angus Antley, Anthony Steed, Bernhard Spanlang, Antonella Maselli, Domna Banakou, Nuria Pelechano, and Sergio Orts-Escolano. 2020. The rocketbox library and the utility of freely available rigged avatars. Frontiers in Virtual Reality 1 (2020), 20. https://doi.org/10.3389/frvir.2020.561558
[36] Noa Gueron-Sela, Ido Shalev, Avigail Gordon-Hacker, Alisa Egotubov, and Rachel Barr. 2023. Screen media exposure and behavioral adjustment in early childhood during and after COVID-19 home lockdown periods. Computers in Human Behavior 140 (2023), 107572. https://doi.org/10.1016/j.chb.2022.107572
[37] Edward T. Hall, Ray L. Birdwhistell, Bernhard Bock, Paul Bohannan, A. Richard Diebold Jr, Marshall Durbin, Munro S. Edmonson, J. L. Fischer, Dell Hymes, and Solon T. Kimball. 1968. Proxemics [and comments and replies]. Current Anthropology 9, 2/3 (1968), 83–108. https://doi.org/10.1086/200975
[38] Leslie A. Hayduk. 1983. Personal space: where we now stand. Psychological Bulletin 94, 2 (1983), 293–335. https://doi.org/10.1037/0033-2909.94.2.293
[39] Heiko Hecht, Robin Welsch, Jana Viehoff, and Matthew R. Longo. 2019. The shape of personal space. Acta Psychologica 193 (2019), 113–122. https://doi.org/10.1016/j.actpsy.2018.12.009
[40] Larry V. Hedges. 2007. Effect sizes in cluster-randomized designs. Journal of Educational and Behavioral Statistics 32, 4 (2007), 341–370. https://doi.org/10.3102/1076998606298043
[41] Stefan G. Hofmann, M. A. Anu Asnaani, and Devon E. Hinton. 2010. Cultural aspects in social anxiety and social anxiety disorder. Depression and Anxiety 27, 12 (2010), 1117–1127. https://doi.org/10.1002/da.20759
[42] Herbert Hoijtink and Rens van de Schoot. 2018. Testing small variance priors using prior-posterior predictive p values. Psychological Methods 23, 3 (2018), 561–569. https://doi.org/10.1037/met0000131
[43] Ann Huang, Pascal Knierim, Francesco Chiossi, Lewis L. Chuang, and Robin Welsch. 2022. Proxemics for human-agent interaction in augmented reality. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, 1–13. https://doi.org/10.1145/3491102.3517593
[44] Tina Iachini, Yann Coello, Francesca Frassinetti, Vincenzo Paolo Senese, Francesco Galante, and Gennaro Ruggiero. 2016. Peripersonal and interpersonal space in virtual and real environments: Effects of gender and age. Journal of Environmental Psychology 45 (2016), 154–164. https://doi.org/10.1016/j.jenvp.2016.01.004
[45] Masako Ishii-Kuntz. 1990. Social interaction and psychological well-being: Comparison across stages of adulthood. The International Journal of Aging and Human Development 30, 1 (1990), 15–36. https://doi.org/10.2190/0WTY-XBXJ-GVV9-XWM9
[46] Martin Johannes Dechant, Julian Frommel, and Regan Mandryk. 2021. Assessing social anxiety through digital biomarkers embedded in a gaming task. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, 1–15. https://doi.org/10.1145/3411764.3445238
[47] Charles M. Judd, Jacob Westfall, and David A. Kenny. 2017. Experiments with more than one random factor: Designs, analytic models, and statistical power. Annual Review of Psychology 68 (2017), 601–625. https://doi.org/10.1146/annurev-psych-122414-033702
[48] Matthew Kay, Gregory L. Nelson, and Eric B. Hekler. 2016. Researcher-centered design of statistics: Why Bayesian statistics better fit the culture and incentives of HCI. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, 4521–4532. https://doi.org/10.1145/2858036.2858465
[49] Behrang Keshavarz and Heiko Hecht. 2011. Validating an efficient method to quantify motion sickness. Human Factors 53, 4 (2011), 415–426. https://doi.org/10.1177/0018720811403736
[50] Kangsoo Kim, Gerd Bruder, and Gregory F. Welch. 2018. Blowing in the wind: Increasing copresence with a virtual human via airflow influence in augmented reality. In International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments. The Eurographics Association, 105–114. https://doi.org/10.2312/egve.20181332
[51] Jan Kolkmeier, Jered Vroon, and Dirk Heylen. 2016. Interacting with virtual agents in shared space: Single and joint effects of gaze and proxemics. In Intelligent Virtual Agents: 16th International Conference. Springer, 1–14. https://doi.org/10.1007/978-3-319-47665-0_1
[52] Dylan M. Kollman, Timothy A. Brown, Gabrielle I. Liverant, and Stefan G. Hofmann. 2006. A taxometric investigation of the latent structure of social anxiety disorder in outpatients with anxiety and mood disorders. Depression and Anxiety 23, 4 (2006), 190–199. https://doi.org/10.1002/da.20158
[53] Leon O. H. Kroczek and Andreas Mühlberger. 2022. Returning a smile: Initiating a social interaction with a facial emotional expression influences the evaluation of the expression received in return. Biological Psychology 175 (2022), 108453. https://doi.org/10.1016/j.biopsycho.2022.108453
[54] Leon O. H. Kroczek and Andreas Mühlberger. 2023. Time to Smile: How Onset Asynchronies Between Reciprocal Facial Expressions Influence the Experience of Responsiveness of a Virtual Agent. Journal of Nonverbal Behavior 47 (2023), 345–360. https://doi.org/10.1007/s10919-023-00430-z
[55] Bastian Lange and Paul Pauli. 2019. Social anxiety changes the way we move—A social approach-avoidance task in a virtual reality CAVE system. PLoS ONE 14, 12 (2019), e0226805. https://doi.org/10.1371/journal.pone.0226805
[56] Marc Erich Latoschik, Florian Kern, Jan-Philipp Stauffert, Andrea Bartl, Mario Botsch, and Jean-Luc Lugrin. 2019. Not alone here?! Scalability and user experience of embodied ambient crowds in distributed social virtual reality. IEEE Transactions on Visualization and Computer Graphics 25, 5 (2019), 2134–2144. https://doi.org/10.1109/TVCG.2019.2899250
[57] Michael R. Liebowitz. 1987. Social phobia. Modern Problems of Pharmacopsychiatry 22 (1987), 141–173. https://doi.org/10.1159/000414022
[58] Joan Llobera, Bernhard Spanlang, Giulio Ruffini, and Mel Slater. 2010. Proxemics with multiple dynamic characters in an immersive virtual environment. ACM Transactions on Applied Perception (TAP) 8, 1 (2010), 1–12. https://doi.org/10.1145/1857893.1857896
[59] Divine Maloney and Guo Freeman. 2020. Falling asleep together: What makes activities in social virtual reality meaningful to users. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play. Association for Computing Machinery, 510–521. https://doi.org/10.1145/3410404.3414266
[60] Warren Mansell, David M. Clark, Anke Ehlers, and Yi-Ping Chen. 1999. Social anxiety and attention away from emotional faces. Cognition & Emotion 13, 6 (1999), 673–690. https://doi.org/10.1080/026999399379032
[61] Antonella Maselli and Mel Slater. 2013. The building blocks of the full body ownership illusion. Frontiers in Human Neuroscience 7 (2013), 83. https://doi.org/10.3389/fnhum.2013.00083
[62] Joshua McVeigh-Schultz, Anya Kolesnichenko, and Katherine Isbister. 2019. Shaping pro-social interaction in VR: an emerging design framework. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, 1–12. https://doi.org/10.1145/3290605.3300794
[63] Joshua McVeigh-Schultz, Elena Márquez Segura, Nick Merrill, and Katherine Isbister. 2018. What’s It Mean to “Be Social” in VR? Mapping the Social VR Design Ecology. In Proceedings of the 2018 ACM Conference Companion Publication on Designing Interactive Systems. Association for Computing Machinery, 289–294. https://doi.org/10.1145/3197391.3205451
[64] Xiao-Li Meng. 1994. Posterior predictive p-values. The Annals of Statistics 22, 3 (1994), 1142–1160. https://doi.org/10.1214/aos/1176325622
[65] Douglas S. Mennin, David M. Fresco, Richard G. Heimberg, Franklin R. Schneier, Sharon O. Davies, and Michael R. Liebowitz. 2002. Screening for social anxiety disorder in the clinical setting: using the Liebowitz Social Anxiety Scale. Journal of Anxiety Disorders 16, 6 (2002), 661–673. https://doi.org/10.1016/s0887-6185(02)00134-2
[66] Anne C. Miers, Anke W. Blöte, David A. Heyne, and P. Michiel Westenberg. 2014. Developmental pathways of social avoidance across adolescence: The role of social anxiety and negative cognition. Journal of Anxiety Disorders 28, 8 (2014), 787–794. https://doi.org/10.1016/j.janxdis.2014.09.008
[67] Mark Roman Miller, Cyan DeVeaux, Eugy Han, Nilam Ram, and Jeremy N. Bailenson. 2023. A Large-Scale Study of Proxemics and Gaze in Groups. In 2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR). IEEE, 409–417. https://doi.org/10.1109/VR55154.2023.00056
[68] Mark Roman Miller, Fernanda Herrera, Hanseul Jun, James A. Landay, and Jeremy N. Bailenson. 2020. Personal identifiability of user tracking data during observation of 360-degree VR video. Scientific Reports 10, 1 (2020), 17404. https://doi.org/10.1038/s41598-020-74486-y
[69] Karin Mogg, Brendan P. Bradley, Jo De Bono, and Michelle Painter. 1997. Time course of attentional bias for threat information in non-clinical anxiety. Behaviour Research and Therapy 35, 4 (1997), 297–303. https://doi.org/10.1016/S0005-7967(96)00109-X
[70] Fares Moustafa and Anthony Steed. 2018. A longitudinal study of small group interaction in social virtual reality. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology. Association for Computing Machinery, 1–10. https://doi.org/10.1145/3281505.3281527
[71] Clifford Nass, Jonathan Steuer, and Ellen R. Tauber. 1994. Computers are social actors. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, 72–78. https://doi.org/10.1145/191666.191703
[72] Oculus. 2019. Oculus Integration | Integration | Unity Asset Store. https://assetstore.unity.com/packages/tools/integration/oculus-integration-82022
[73] Christopher J. Patrick and Laura E. Drislane. 2015. Triarchic model of psychopathy: Origins, operationalizations, and observed linkages with personality and general psychopathology. Journal of Personality 83, 6 (2015), 627–643. https://doi.org/10.1111/jopy.12119
[74] Anat Perry, Einat Levy-Gigi, Gal Richter-Levin, and Simone G. Shamay-Tsoory. 2015. Interpersonal distance and social anxiety in autistic spectrum disorders: A behavioral and ERP study. Social Neuroscience 10, 4 (2015), 354–365. https://doi.org/10.1080/17470919.2015.1010740
[75] Thammathip Piumsomboon, Gun A. Lee, Jonathon D. Hart, Barrett Ens, Robert W. Lindeman, Bruce H. Thomas, and Mark Billinghurst. 2018. Mini-me: An adaptive avatar for mixed reality remote collaboration. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, 1–13. https://doi.org/10.1145/3173574.3173620
[76] Qualtrics. 2005. Qualtrics XM. https://www.qualtrics.com/
[77] Holger Regenbrecht and Thomas Schubert. 2002. Real and illusory interactions enhance presence in virtual environments. Presence: Teleoperators & Virtual Environments 11, 4 (2002), 425–434. https://doi.org/10.1162/105474602760204318
[78] Radiah Rivu, Yumeng Zhou, Robin Welsch, Ville Mäkelä, and Florian Alt. 2021. When friends become strangers: Understanding the influence of avatar gender on interpersonal distance in virtual reality. In IFIP Conference on Human-Computer Interaction. Springer, 234–250. https://doi.org/10.1007/978-3-030-85607-6_16
[79] Karin Roelofs, Peter Putman, Sonja Schouten, Wolf-Gero Lange, Inge Volman, and Mike Rinck. 2010. Gaze direction differentially affects avoidance tendencies to happy and angry faces in socially anxious individuals. Behaviour Research and Therapy 48, 4 (2010), 290–294. https://doi.org/10.1016/j.brat.2009.11.008
[80] Mia Romano, Emma Tran, and David A. Moscovitch. 2020. Social anxiety is associated with impaired memory for imagined social events with positive outcomes. Cognition and Emotion 34, 4 (2020), 700–712. https://doi.org/10.1080/02699931.2019.1675596
[81] Mikael Rubin, Sean Minns, Karl Muller, Matthew H. Tong, Mary M. Hayhoe, and Michael J. Telch. 2020. Avoidance of social threat: Evidence from eye movements during a public speaking challenge using 360-video. Behaviour Research and Therapy 134 (2020), 103706. https://doi.org/10.1016/j.brat.2020.103706
[82] Lars Schulze, Janek S. Lobmaier, Manuel Arnold, and Babette Renneberg. 2013. All eyes on me?! Social anxiety and self-directed perception of eye gaze. Cognition and Emotion 27, 7 (2013), 1305–1313. https://doi.org/10.1080/02699931.2013.773881
[83] Lars Schulze, Babette Renneberg, and Janek Lobmaier. 2013. Gaze perception in social anxiety and social anxiety disorder. Frontiers in Human Neuroscience 7 (2013), 872. https://doi.org/10.3389/fnhum.2013.00872
[84] Navya N. Sharan, Alexander Toet, Tina Mioch, Omar Niamut, and Jan B. F. van Erp. 2022. The relative importance of social cues in immersive mediated communication. In Human Interaction, Emerging Technologies and Future Systems V: Proceedings of the 5th International Virtual Conference on Human Interaction and Emerging Technologies, IHIET 2021, August 27-29, 2021 and the 6th IHIET: Future Systems (IHIET-FS 2021), October 28-30, 2021, France. Springer, 491–498. https://doi.org/10.1007/978-3-030-85540-6_62
[85] Haolun Shi and Guosheng Yin. 2020. Reconnecting p-value and Posterior Probability under One- and Two-sided Tests. The American Statistician 75, 3 (2020), 265–275. https://doi.org/10.1080/00031305.2020.1717621
[86] Maurizio Sicorello, Jasmina Stevanov, Hiroshi Ashida, and Heiko Hecht. 2019. Effect of gaze on personal space: A Japanese–German cross-cultural study. Journal of Cross-Cultural Psychology 50, 1 (2019), 8–21. https://doi.org/10.1177/00220221187985
[87] Robert Sommer. 1962. The distance for comfortable conversation: A further study. Sociometry 25, 1 (1962), 111–116. https://doi.org/10.2307/2786041
[88] Agnieszka Sorokowska, Piotr Sorokowski, Peter Hilpert, Katarzyna Cantarero, Tomasz Frackowiak, Khodabakhsh Ahmadi, Ahmad M. Alghraibeh, Richmond Aryeetey, Anna Bertoni, and Karim Bettache. 2017. Preferred interpersonal distances: A global comparison. Journal of Cross-Cultural Psychology 48, 4 (2017), 577–592. https://doi.org/10.1177/00220221176980
[89] Susan H. Spence and Ronald M. Rapee. 2016. The etiology of social anxiety disorder: An evidence-based model. Behaviour Research and Therapy 86 (2016), 50–67. https://doi.org/10.1016/j.brat.2016.06.007
[90] Toni-Lee Sterley and Jaideep S. Bains. 2022. Social communication of affective states. Current Opinion in Neurobiology 68 (2022), 44–51. https://doi.org/10.1016/j.neuroscience.2022.08.022
[91] Akikazu Takeuchi and Taketo Naito. 1995. Situated facial displays: towards social interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM Press/Addison-Wesley Publishing Co., 450–455. https://doi.org/10.1145/223904.223965
[92] R Core Team. 2021. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/
[93] Unity Technologies. 2021. Unity. https://unity.com/
[94] Daniel Tönsing, Bastian Schiller, Antonia Vehlen, Ines Spenthof, Gregor Domes, and Markus Heinrichs. 2022. No evidence that gaze anxiety predicts gaze avoidance behavior during face-to-face social interaction. Scientific Reports 12 (2022), 21332.
[95] Yuki Tsuji and Sotaro Shimada. 2018. Socially anxious tendencies affect neural processing of gaze perception. Brain and Cognition 9 (2018), 63–70. https://doi.org/10.3389/fpsyg.2018.02111
[96] Rafal Urbaniak, Patrycja Tempska, Maria Dowgiałło, Michał Ptaszyński, Marcin Fortuna, Michał Marcińczuk, Jan Piesiewicz, Gniewosz Leliwa, Kamil Soliwoda, Ida Dziublewska, Nataliya Sulzhytskaya, Aleksandra Karnicka, Paweł Skrzek, Paula Karbowska, Maciej Brochocki, and Michał Wroczyński. 2022. Namespotting: Username toxicity and actual toxic behavior on Reddit. Computers in Human Behavior 136 (2022), 107371. https://doi.org/10.1016/j.chb.2022.107371
[97] Niels van Berkel and Kasper Hornbæk. 2023. Implications of Human-Computer Interaction Research. Interactions 30, 4 (2023), 50–55. https://doi.org/10.1145/3600103
[98] Vrmada. 2023. UltimateXR. https://www.ultimatexr.io/
[99] Justin W. Weeks, Ashley N. Howell, and Philippe R. Goldin. 2013. Gaze avoidance in social anxiety disorder. Depression and Anxiety 30, 8 (2013), 749–756. https://doi.org/10.1002/da.22146
[100] Robin Welsch, Heiko Hecht, and Christoph von Castell. 2018. Psychopathy and the regulation of interpersonal distance. Clinical Psychological Science 6, 6 (2018), 835–847. https://doi.org/10.1177/2167702618788874
[101] Robin Welsch, Christoph von Castell, and Heiko Hecht. 2019. The anisotropy of personal space. PLoS ONE 14, 6 (2019), e0217587. https://doi.org/10.1371/journal.pone.0217587
[102] Robin Welsch, Christoph von Castell, and Heiko Hecht. 2020. Interpersonal distance regulation and approach-avoidance reactions are altered in psychopathy. Clinical Psychological Science 8, 2 (2020), 211–225. https://doi.org/10.1177/2167702619869336
[103] Robin Welsch, Christoph von Castell, Martin Rettenberger, Daniel Turner, Heiko Hecht, and Peter Fromberger. 2020. Sexual attraction modulates interpersonal distance and approach-avoidance movements towards virtual agents in males. PLoS ONE 15, 4 (2020), e0231539. https://doi.org/10.1371/journal.pone.0231539
[104] Matthias J. Wieser and David A. Moscovitch. 2015. The effect of affective context on visuocortical processing of neutral faces in social anxiety. Frontiers in Psychology 6 (2015), 1824. https://doi.org/10.3389/fpsyg.2015.01824
[105] Matthias J. Wieser, Paul Pauli, Miriam Grosseibl, Ina Molzow, and Andreas Mühlberger. 2010. Virtual social interactions in social anxiety—the impact of sex, gaze, and interpersonal distance. Cyberpsychology, Behavior and Social Networking 13, 5 (2010), 547–554. https://doi.org/10.1089/cyber.2009.0432
[106] Julie Williamson, Jie Li, Vinoba Vinayagamoorthy, David A. Shamma, and Pablo Cesar. 2021. Proxemics and social interactions in an instrumented virtual reality workshop. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, 1–13. https://doi.org/10.1145/3411764.3445729
[107] Julie R. Williamson, Joseph O’Hagan, John Alexis Guerra-Gomez, John H. Williamson, Pablo Cesar, and David A. Shamma. 2022. Digital proxemics: Designing social and collaborative interaction in virtual environments. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, 1–12. https://doi.org/10.1145/3491102.3517594
