Merging Mixed Realities (CSCW 2024 Workshop)

by Botao Amber Hu, Yue Li

Workshop Topic

Merging Mixed Realities: Envisioning a Future with Prevalent Use of HMD

Workshop Description

As see-through Mixed Reality (MR) Head-Mounted Displays (HMDs) become more prevalent, it is foreseeable that wearers will start to encounter each other spontaneously in real-world settings. This situation introduces new complexities in how they interact. The central challenge is the inherent asymmetry of MR experiences: while see-through technology allows wearers to share the same physical environment, the augmented layers they perceive are unique to each individual by default. How, then, do they share their mixed realities? Although collaborative MR is a well-researched area, it often presumes that all wearers are running the same MR layer — a situation that may not be common in the future. Our workshop will explore critical questions about permissions for digital objects and protocols for merging MR environments: Can you see the objects I see? What protocols will be necessary to allow wearers into each other's realities? How do mixed realities merge? To investigate this under-explored area, our workshop will feature in-person multiplayer MR demonstrations, collaborative speculative design sprints, and in-depth discussions. We aim to bring together a diverse group of HCI and CSCW researchers to examine the technical, social, and ethical implications, identify key research questions, and envision strategic futures for MR.

Keywords

Social Computing, Asymmetric Mixed Reality, Social Protocol Design, Digital Objects, Permission of Mixed Realities

Introduction

In recent years, see-through Mixed Reality (MR) head-mounted display (HMD) devices, such as the Apple Vision Pro [@applevisionpro] and Meta Quest 3 [@metaquest3], have become increasingly prevalent in the consumer market. This growing prominence suggests a future where MR HMDs are as ubiquitous as smartphones. Thanks to video or optical see-through technology [@Itoh2021Indistinguishable], an HMD wearer can see their surroundings and move around freely, expanding usage scenarios from private, controlled environments to public, spontaneous settings [@Lee2021Augmented; @Auda2023Scoping]. Introducing MR headsets into public and social spaces creates complex social challenges. The first challenge is the inherent power imbalance between HMD wearers and non-wearers [@OHagan2021Safetya; @Denning2014situa; @Chung2023Negotiating; @Xu2019Sharinga], which affects social acceptance, privacy, and the safety of non-wearers [@DeGuzman2019Security; @OHagan2021Safetya; @Due2015social]. The second challenge is the inherent asymmetry of MR experiences: while see-through technology allows wearers to share the same physical environment, the augmented layers they perceive are unique to each individual by default. Yet the study of collaborative collocated MR often presumes that all wearers perceive the same MR layer---a situation that may not be common in the future.

This workshop will explore the near-term future scenario where MR HMD wearers encounter each other spontaneously in person. How do they share and merge their mixed realities? To investigate this under-explored area, we will examine three interconnected research themes.

Why Merging Realities -- "A Dream You Dream Alone is Only a Dream"

The inherent nature of mixed reality as a medium is "permissionless": HMD wearers can use their MR apps privately to perceive their own version of reality, without needing anyone else's permission. Even when two HMD wearers are collocated, they do not perceive the same realities, so their realities are asymmetric by default. Building on this, we envision a future where reality becomes asymmetric, layered with numerous entangled mixed realities, all without needing anyone's permission.

The research questions in this theme are:

  1. Issues of Asymmetry in Collocated MR Wearers: What misunderstandings, ethical dilemmas, security concerns, and social acceptance issues arise when there is asymmetry among collocated MR wearers?

  2. Scenarios for Merging Mixed Realities: In which scenarios do mixed realities need to merge in collocated settings? Possible contexts include indoor environments, outdoor or public spaces, as well as the domains of gameplay, healthcare, e-commerce, education and training, etc. Conversely, in which scenarios should mixed realities remain private?

  3. Intentions for Merging Mixed Realities: What motivations do users have for merging mixed realities? In the same collocated scenario, some users may want to merge while others do not. How can we match those who want to merge? How can users indicate their intention? Are swipeable pop-up windows, like those used in matchmaking dating apps, appropriate for mixed realities?

  4. Permission and Protocol with Digital Objects in Mixed Realities: Can you see the objects that I see? What is the permission structure for digital objects in mixed reality layers? For instance, what if digital objects can be seen by others, but only you can edit and move them? Alternatively, what if digital objects can be moved by everyone?
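To make the permission questions above concrete, consider one hypothetical capability model in which each digital object carries per-wearer view/edit/move rights. The following is a minimal sketch under that assumption; all names and structures are illustrative, not a proposed standard:

```python
from dataclasses import dataclass, field
from enum import Flag, auto

class Capability(Flag):
    NONE = 0
    VIEW = auto()   # the object is rendered for this wearer
    EDIT = auto()   # the wearer may change the object's attributes
    MOVE = auto()   # the wearer may reposition the object

@dataclass
class DigitalObject:
    owner: str
    # Per-wearer capabilities; wearers absent from the map get the default.
    grants: dict[str, Capability] = field(default_factory=dict)
    default: Capability = Capability.NONE   # private by default

    def allows(self, wearer: str, cap: Capability) -> bool:
        if wearer == self.owner:
            return True   # owners retain full control of their objects
        return cap in self.grants.get(wearer, self.default)

# "You can see it, but only I can move it":
statue = DigitalObject(owner="alice", grants={"bob": Capability.VIEW})
assert statue.allows("bob", Capability.VIEW)
assert not statue.allows("bob", Capability.MOVE)
```

Even this toy model surfaces the questions above: whether `default` should ever be public, and whether a `MOVE` grant to everyone implies a shared physics for the object.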

How to Merge Realities -- "Allow Me Into Your Dream"

We illustrate the concept of merging mixed realities in an environment that includes both HMD wearers and non-wearers. Encounters between two MR wearers can be spontaneous, unplanned, quick, and temporary, and should not depend heavily on preset infrastructure such as pre-scanned areas [@Sarlin2022LaMAR], AR Cloud [@Zhang2017CloudAR; @Duong2022AR], WiFi routers, cellular networks, or even internet connections, which may not always be available. Such encounters are likely to occur in future realities, raising the question of how to merge them.

The research questions in this theme are:

  1. Interaction. What interaction protocol will be necessary to grant wearers permission to access each other's mixed realities?

  2. Granting Permission. What kinds of permissions can a wearer grant to others? Can they grant access per digital object, or share and merge entire mixed reality layers?

  3. Security Issues. Is the merging process secure against attacks? What cryptographic protocols are needed to secure the merging process?

  4. Synchronization. How do two or more HMDs precisely synchronize timestamps, coordinate systems, and digital objects in a spontaneous manner?

  5. Information and Algorithms Required to Establish a Session. What information needs to be transmitted for all wearers to establish a collocated mixed reality so they can see the same digital environment?

  6. Information and Algorithm Required to Synchronize in Real-Time. What real-time information needs to be exchanged to synchronize the attributes of digital objects, such as position, rotation, and other physical properties?

  7. Merging More than Two Realities. How do we merge more than two realities? For instance, what happens if a wearer who has already merged with another wearer then merges with a third wearer?

  8. Unmerging. How can merged realities be unmerged or disengaged?

  9. Group Mixed Realities. Are merged mixed realities similar to group chats in instant messaging apps, where people can join and leave at will?
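The session-establishment and synchronization questions above (items 4--6) can be sketched as a pair of hypothetical peer-to-peer messages: one to open a merge session by sharing a commonly observed physical anchor (so both HMDs can align their coordinate frames), and one to stream only the changed attributes of a digital object. The field names and message flow are illustrative assumptions, not a proposed protocol:

```python
import json
import time
import uuid

def make_merge_offer(device_id: str, anchor_pose: list[float]) -> str:
    """Offer a merge: share a fresh session id and the offerer's pose of a
    commonly observed physical anchor (e.g. a fingertip-touch point), so the
    other HMD can align its coordinate frame to the same point."""
    return json.dumps({
        "type": "merge_offer",
        "session": str(uuid.uuid4()),
        "from": device_id,
        "anchor_pose": anchor_pose,   # position + quaternion in sender's frame
        "timestamp": time.time(),     # for clock-offset estimation
    })

def make_object_update(session: str, object_id: str, **attrs) -> str:
    """Real-time sync: transmit only the changed attributes of one object."""
    return json.dumps({
        "type": "object_update",
        "session": session,
        "object": object_id,
        "attrs": attrs,               # e.g. position, rotation, velocity
        "timestamp": time.time(),
    })

offer = json.loads(make_merge_offer("hmd-a", [0, 0, 0, 0, 0, 0, 1]))
update = json.loads(make_object_update(offer["session"], "cube-1",
                                       position=[1.0, 0.5, -2.0]))
assert update["session"] == offer["session"]
```

A real protocol would additionally need the cryptographic handshake raised in item 3 (authenticating who is offering the merge and encrypting the channel); the sketch deliberately leaves that open as a workshop question.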

What Happens after Merging Realities -- "A Dream You Dream Together is Reality"

After merging mixed realities, wearers share their perceptions to some degree. The more wearers who can see the same mixed reality layer, the stronger the need to preserve this layer for later use. Digital objects in the merged mixed reality can remain in the shared layer, preserving collective memories. At the same time, merging raises further issues. The critical research questions in this theme are:

  1. Digital Physics: Can a mixed reality layer have its own "digital physics"---a protocol that everyone in the merged mixed reality layer agrees to operate under?

  2. Managing Merged Mixed Reality: Who has the permission to manage the digital objects in the shared mixed reality layer? How are objects added to or removed from this merged mixed reality layer?

  3. Asymmetric Digital Objects in Merged Mixed Realities: In the same merged mixed reality, do digital objects necessarily have the same appearance for different MR wearers? For example, can the same object have different textures, such as appearing blue for male users and red for female users?

  4. Persisting Merged Mixed Realities: After merging mixed realities, how long do they exist? Do they disappear once the MR wearer leaves? Conversely, how can layered mixed realities persist, even when there are no HMD wearers to perceive them?

  5. Storage of Merged Mixed Realities: Where are such merged mixed realities stored? Are they maintained by government or public institutions, similar to urban infrastructure? Are they hosted on big internet corporations' cloud servers, like today's social networks? Or are they stored on blockchain, allowing users to own their mixed realities?

  6. Autonomous Realities: Can mixed reality layers evolve and operate autonomously? Much like nature, where trees and animals grow without human permission, can these layers function independently? This would create parallel realities, similar to Pokémon Go, but without the need for an operating company.

  7. Social Concerns of Merged Mixed Realities: After merging realities, what misunderstandings, ethical dilemmas, security concerns, and social acceptance issues arise due to asymmetry among groups of collocated MR wearers? For instance, one group of MR wearers might share the same merged mixed realities, while another group shares a different merged mixed reality.

  8. Evaluation of Merged Mixed Reality. How can we evaluate group user behaviors within the same merged mixed reality layer?
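The asymmetric-appearance question (item 3) admits a simple mechanism worth sketching: a merged object keeps one canonical shared state, while each wearer may carry a per-viewer appearance override resolved at render time. The data layout below is an illustrative assumption, not a proposed representation:

```python
# Hypothetical per-viewer appearance resolution: a shared object keeps one
# canonical base appearance, but individual wearers may see overridden
# surface attributes (texture, color, etc.) in the same merged layer.

def resolve_appearance(obj: dict, wearer: str) -> dict:
    """Merge the shared base appearance with this wearer's override, if any."""
    appearance = dict(obj["base_appearance"])        # shared across the session
    appearance.update(obj.get("overrides", {}).get(wearer, {}))
    return appearance

lantern = {
    "id": "lantern-1",
    "base_appearance": {"mesh": "lantern.glb", "color": "white"},
    "overrides": {"alice": {"color": "blue"}, "bob": {"color": "red"}},
}

assert resolve_appearance(lantern, "alice")["color"] == "blue"
assert resolve_appearance(lantern, "bob")["color"] == "red"
assert resolve_appearance(lantern, "carol")["color"] == "white"  # no override
```

Note that even this small mechanism feeds directly into items 1 and 2 above: who may write to `overrides`, and whether a layer's "digital physics" should constrain how far per-viewer appearances may diverge.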

Call for Participants

To be accepted into the workshop, interested participants will be asked to submit a 2--4-page position paper. Key dates and submission details are listed below.

  • Submissions deadline: August 30, 2024

  • Notification of acceptance: October 1, 2024

  • Word limit: 2000 words (excluding references)

  • Template: ACM Master Article Submission Templates, single column (https://www.acm.org/publications/proceedings-template)

  • Selection criteria: Contribution to the workshop's themes and potential to stimulate discussion.

  • Submission: Google Form linked on the website.

Topics of Interest

Addressing the three research themes and questions, this workshop encourages submissions around the following topics:

  1. Conceptual design for merging mixed realities

  2. Speculative design fiction for merging mixed realities

  3. Interaction techniques for merging mixed realities

  4. Systems and applications for merging mixed realities

  5. User behavior and experience when merging mixed realities

  6. Social concerns about merging mixed realities

  7. Ethics implications about merging mixed realities

  8. Security issues during merging mixed realities

  9. Any other related topics

Specifically, we welcome theoretical, practical, and artistic works that address a specific domain. Example domains include, but are not limited to:

  1. Games and plays

  2. Collaborative works

  3. Data visualization and analytics

  4. Education and training

  5. Healthcare and well-being

  6. Museums and cultural heritage

  7. E-commerce

  8. Smart cities

  9. Any other related areas

Key Information about the Workshop

  • Format: In-person

  • Acronym: MergeReality '24

  • Duration: One day

  • Expected number of submissions: 10

  • Maximum number of participants: 40

  • Website: https://merge.reality.design

  • Equipment and supplies: HMDs (e.g., 2x Apple Vision Pro) will be prepared by the workshop organizers.

Participant Recruitment and Selection

We expect a maximum of 40 participants for this workshop. We will use social media and mailing lists to invite academic researchers and practitioners to submit extended abstracts. We will also invite experienced members of the relevant communities (e.g., user experience designers, science fiction writers, digital artists, creative technologists, security experts, protocol design experts, programmable cryptography experts, and mixed reality researchers) to participate in our panel and dialogue with researchers.

Individuals interested in participating will be asked to submit an extended abstract for a 5-minute lightning talk they will give as part of the workshop activities. Submissions should be 2-4 pages long and outline the participants' speculative design and scenarios under the key themes of our workshop. This paper should reflect on their involvement in the topic, past experiments and learning, the successes they have achieved, the challenges they have faced, and their objectives or plans. The organizing team will review submissions and seek external reviewers. We plan to have two to three reviewers for each submission. Reviewer comments will be shared with the authors. We welcome early-stage analytical work, case studies, essays, and design fictions.

Workshop Activities and Goals

We plan to host a one-day in-person workshop including lightning talks, multiplayer MR demonstrations, collaborative speculative design sprints, and a debriefing to foster future collaboration.

Workshop Goals

This workshop aims to bring together researchers in the community working on a diverse range of topics. We hope to involve an interdisciplinary group of researchers who may not otherwise have the opportunity to engage in an extended dialogue about merging mixed realities: digital artists, creative technologists, UX designers, speculative designers, social science researchers, etc.

The workshop activities are designed to facilitate the sharing of research ideas across specialties; highlight common threads between adjacent areas, such as cross reality, social computing, and cloud computing; and explore how our research can inform the design of future realities. In summary, our workshop has five concrete outcomes:

  1. Formulate a research agenda that extends beyond technical aspects to the social, ethical, and security concerns in the design of merging mixed realities.

  2. Collect a series of speculative design ideas and scenarios that could inform the creation of novel merged mixed reality techniques, systems, and applications.

  3. Identify potential authors to implement the design ideas and publish the artefacts from the workshop.

  4. Establish clear communication lines for the development of this community of practice.

  5. Identify community members with a long-term interest in the topic.

Workshop Activities

Lightning Talks

Individuals interested in participating will be asked to give a 5-minute lightning talk as part of the workshop activities. The lightning talks are brief, dynamic presentations that showcase diverse perspectives, ideas, and projects related to merging mixed realities. This will allow participants to get to know each other's research interests and how they relate to the key themes of our workshop.

Multiplayer MR Experience

As several organizers of this workshop are well engaged in the MR practitioner community, we will bring existing works related to the workshop topic and encourage participants to bring their own demos. For example, we created a demo with two Apple Vision Pros that allows users to synchronize and share their mixed realities simply by touching fingertips, akin to the famous Sistine Touch, thus creating layered mixed realities. Participants are invited to engage in onsite MR experiences and observe merging mixed realities firsthand. This will not only foster social interaction and teamwork but also provide a unique opportunity for individuals to collectively experience and envision how to shape the future of mixed realities.

Speculative Design Sprint

By bringing together researchers and practitioners from various fields, we will envision and prototype future scenarios where different realities blend seamlessly. This sprint involves intensive ideation sessions, prototyping, and user testing to explore the possibilities and implications of merging virtual, augmented, and physical environments. Participants will engage in creative thinking to push the boundaries of what is currently possible in mixed realities. The outcome of this sprint is not only tangible prototypes but also valuable insights into how these merged realities can shape our future experiences and interactions.

Below is the proposed agenda for the one-day workshop activities:

  • 9:00 - 9:30 Introduction and Ice-breakers

  • 9:30 - 10:30 Lightning Talks

  • 10:30 - 11:00 Morning Coffee Break

  • 11:00 - 12:00 Multiplayer MR Experience

  • 12:00 - 1:00 Group Lunch

  • 1:00 - 3:00 Speculative Design Sprint

  • 3:00 - 3:30 Afternoon Coffee Break

  • 3:30 - 4:00 Reflection, group discussion on follow-on activities

Post-workshop Plans

We will disseminate our workshop outcomes in the form of a blog post or a submission to the Communications of the ACM. We hope that, by distributing the outcomes of our workshop more broadly, we can engage more members of the CSCW community in the discourse on merging mixed realities. We will also create a mailing list for participants to facilitate post-workshop communications.

Organizers

Our organizing team consists of researchers across three institutions who collectively bring broad research expertise in Mixed Reality.

  • Botao Amber Hu is a researcher, designer, and creative technologist. He founded and leads Reality Design Lab, an interdisciplinary research and design lab that focuses on the intersection of philosophy of technology, speculative design, spatial computing, and programmable cryptography. He also serves as a visiting lecturer teaching "Mixed Reality Design" at the China Academy of Art. His primary focus is on developing bodily interplay within collocated mixed reality, democratizing education of mixed reality design, and exploring blockchain-based protocols for social coordination. His works have been featured at top conferences such as SIGGRAPH, CHI, WWW, SXSW, and TEDx, and have received accolades including the CHI Best Interactivity Award, the Red Dot Design Award, the iF Design Award, the Webby Awards, and the Core77 Design Award. He holds a bachelor's degree in computer science from Tsinghua University and a master's degree in AI from Stanford University.

  • Yilan Elan Tao is a user experience designer and architect at Reality Design Lab, focusing on XR design and theory. Her skills in design and presentation have played a vital role in Reality Design Lab's award-winning projects showcased at top conferences such as CHI, SXSW, and ISMAR, as well as recognized by awards including the Red Dot Design Award, Webby Awards, and Core77 Design Award. She holds a Bachelor's degree in Architecture from Tsinghua University and a Master's degree in Architecture from Cornell University.

  • Rem RunGu Lin is a digital artist, creative technologist, and the co-founder of Bach Innovative and Funtheory/Befun Lab. He investigates the intersections of mixed reality, human-computer interaction, bio-data, and generative art in his research and artwork. He holds a master's degree from MIT and is pursuing his PhD in Computational Media and Art (CMA) at The Hong Kong University of Science and Technology (Guangzhou). His works and papers have been recognized at SIGGRAPH, Leonardo, SIGGRAPH Asia, ISEA, Ars Electronica, IEEE VIS AP, VINCI, R.A.W.!, the Bi-City Biennale of Urbanism/Architecture, and the "Vision Shenzhen" exhibition at Shenzhen Light Art Museum.

  • Dr. Yue Li is an Assistant Professor at the Department of Computing, Xi'an Jiaotong-Liverpool University. Her research interest is in the field of Human-Computer Interaction (HCI), with particular emphasis on Extended Reality (XR) technologies. She has been actively involved in research on the design, evaluation, and application of VR and AR in cultural heritage and education. Dr. Li leads several prestigious research grants, including the Young Scientist Programme of the National Natural Science Foundation of China (NSFC) and the General Programme of the Natural Science Foundation of the Jiangsu Higher Education Institutions of China. She has published in esteemed international journals and conferences such as ACM ToCHI, IJHCI, ACM JoCCH, ACM CHI, IEEE VR, and IEEE ISMAR.

  • Dr. Hai-Ning Liang is a Professor in the Department of Computing at Xi'an Jiaotong-Liverpool University (XJTLU). He was the Founding Head of Department (2019-2023). He is also the Deputy Director of the Suzhou Municipal Key Lab for Intelligent Virtual Engineering, the Suzhou Key Lab for Virtual Reality Technologies, and the XJTLU Virtual Engineering Center. Dr. Liang completed his PhD in computer science at the University of Western Ontario in Canada. Dr. Liang's main research interests fall in the broad area of human-computer interaction, focusing on virtual/augmented reality and gaming technologies. He has published more than 250 papers in top-rated conferences and journals. He has organized numerous special issues at leading journals and workshops at top conferences. He also serves regularly on the organization committee of several top-ranked conferences.