Holograms don’t need to stay private to just one user. Holographic apps may share spatial anchors from one HoloLens, iOS, or Android device to another, enabling users to render a hologram at the same place in the real world across multiple devices.
Before you begin designing for shared experiences, it’s important to define the target scenarios. These scenarios help clarify what you’re designing and establish a common vocabulary to help compare and contrast features required in your experience. Understanding the core problem, and the different avenues for solutions, is key to uncovering opportunities inherent in this new medium.
Through internal prototypes and explorations from our HoloLens partner agencies, we created six questions to help you define shared scenarios. These questions form a framework, not intended to be exhaustive, to help distill the important attributes of your scenarios.
A presentation might be led by a single virtual user, multiple users might collaborate, or a teacher might guide virtual students working with virtual materials. The complexity of an experience increases with the level of agency a user has, or can have, in the scenario.
There are many ways to share, but we’ve found that most of them fall into three categories:
One-to-one sharing experiences provide a strong baseline, and ideally your proofs of concept can be created at this level. But be aware that sharing with large groups (beyond six people) can lead to difficulties both technical (data and networking) and social (the impact of being in a room with several avatars). Complexity increases exponentially as you go from small to large groups.
We have found that the needs of groups can fall into three size categories:
Group size makes for an important question because it influences:
The strength of mixed reality comes into play when a shared experience can take place in the same location. We call that colocated. Conversely, when the group is distributed and at least one participant isn’t in the same physical space (as is often the case with VR) we call that a remote experience. Often, it’s the case that your group has both colocated and remote participants (for example, two groups in conference rooms).
The following categories help convey where users are located:
This question is crucial because it influences:
We typically think of synchronous experiences when shared experiences come to mind: We’re all doing it together. But if we include a single, virtual element that was added by someone else, we have an asynchronous scenario. Imagine a note, or voice memo, left in a virtual environment. How do you handle 100 virtual memos left on your design? What if they’re from dozens of people with different levels of privacy?
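To make the asynchronous problem concrete, here is a minimal sketch of privacy filtering for virtual memos. The privacy levels, the `Memo` class, and the `visible_memos` helper are all hypothetical names invented for illustration, not part of any Microsoft API:

```python
from dataclasses import dataclass

# Hypothetical privacy levels for asynchronous annotations; the names
# are illustrative only.
PUBLIC, FRIENDS, PRIVATE = "public", "friends", "private"

@dataclass
class Memo:
    author: str
    text: str
    privacy: str

def visible_memos(memos, viewer, friends_of):
    """Return only the memos this viewer is allowed to see."""
    out = []
    for m in memos:
        if m.privacy == PUBLIC or m.author == viewer:
            out.append(m)  # public memos and your own memos are always visible
        elif m.privacy == FRIENDS and viewer in friends_of.get(m.author, set()):
            out.append(m)  # friends-only memos require a relationship
    return out

memos = [
    Memo("ana", "Check this wall", PUBLIC),
    Memo("ben", "Draft note", PRIVATE),
    Memo("cho", "Team only", FRIENDS),
]
print([m.text for m in visible_memos(memos, "ana", {"cho": {"ana"}})])
# ['Check this wall', 'Team only']
```

Even this toy model shows why scale matters: a hundred memos from dozens of people forces you to decide on filtering, ranking, and expiry policies up front.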
Consider your experiences as one of these categories of time:
This question is important because it influences:
The likelihood of two identical real-life environments, outside of colocated experiences, is slim unless those environments have been designed to be identical. You’re more likely to have similar environments. For example, conference rooms are similar: they typically have a centrally located table surrounded by chairs. Living rooms, on the other hand, are dissimilar and can include any number of pieces of furniture in an infinite array of layouts.
Consider your sharing experiences fitting into one of these two categories:
It’s important to think about the environment, as it will influence:
Today you’re most likely to see shared experiences between two immersive devices (which might differ slightly in buttons and relative capability, but not greatly) or two holographic devices, given the solutions being targeted at these devices. But consider whether 2D devices (a mobile or desktop participant or observer) will be a necessary consideration, especially in situations that mix 2D and 3D devices. Understanding the types of devices your participants will be using is important, not only because they come with different fidelity and data constraints and opportunities, but because users have unique expectations for each platform.
Answers to the questions above can be combined to better understand your shared scenario, crystallizing the challenges as you expand the experience. For the team at Microsoft, this helped establish a road map for improving the experiences we use today, understanding the nuance of these complex problems and how to take advantage of shared experiences in mixed reality.
For example, consider one of Skype’s scenarios from the HoloLens launch: a user worked through how to fix a broken light switch with help from a remotely located expert.
An expert provides 1:1 guidance from his 2D, desktop computer to a user of a 3D, mixed-reality device. The guidance is synchronous and the physical environments are dissimilar.
An experience like this is a step-change from our current experience—applying the paradigm of video and voice to a new medium. But as we look to the future, we must better define the opportunity of our scenarios and build experiences that reflect the strength of mixed reality.
Consider the OnSight collaboration tool, developed by NASA’s Jet Propulsion Laboratory. Scientists working on data from the Mars rover missions can collaborate with colleagues in real time within the data from the Martian landscape.
A scientist explores an environment using a 3D, mixed-reality device with a small group of remote colleagues using 3D and 2D devices. The collaboration is synchronous (but can be revisited asynchronously) and the physical environments are (virtually) similar.
Experiences like OnSight present new opportunities to collaborate, from physically pointing out elements in the virtual environment to standing next to a colleague and sharing their perspective as they explain their findings. OnSight uses the lens of immersion and presence to rethink sharing experiences in mixed reality.
Intuitive collaboration is the bedrock of conversation and working together, and understanding how we can apply this intuition to the complexity of mixed reality is crucial. If we can not only recreate sharing experiences in mixed reality but supercharge them, it will be a paradigm shift for the future of work. Designing for shared experiences in mixed reality is a new and exciting space, and we’re only at the beginning.
Depending on your application and scenario, there will be various requirements to achieve your desired experience. Some of these include:
The key to shared experiences is having multiple users see the same holograms in the world on their own devices, which is frequently done by sharing spatial anchors to align coordinate systems across devices.
To share anchors, use Azure Spatial Anchors:
With a shared spatial anchor, the app on each device has a common coordinate system in which to place content, so it can position and orient the hologram at the same physical location on every device.
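As a rough illustration of what a common coordinate system buys you, the sketch below maps a shared, anchor-relative hologram offset into each device’s own world coordinates. It is plain Python with a yaw-only rotation to keep the math short; it is not any SDK’s API, and real devices track full 3D poses:

```python
import math

def compose(anchor_pose, local_offset):
    """Map a position expressed relative to a shared anchor into a
    device's world coordinates. Poses are (x, z, yaw_radians)."""
    ax, az, ayaw = anchor_pose
    ox, oz = local_offset
    c, s = math.cos(ayaw), math.sin(ayaw)
    # Rotate the offset by the anchor's yaw, then translate by its position.
    return (ax + c * ox - s * oz, az + s * ox + c * oz)

# Each device tracks the same anchor at a different pose in its own
# world space, but the hologram's offset from the anchor is shared.
hologram_offset = (1.0, 0.0)           # one meter out from the anchor
device_a_anchor = (0.0, 0.0, 0.0)      # device A sees the anchor at its origin
device_b_anchor = (2.0, 3.0, math.pi)  # device B sees it elsewhere, rotated

pos_a = compose(device_a_anchor, hologram_offset)
pos_b = compose(device_b_anchor, hologram_offset)
print(pos_a)  # (1.0, 0.0)
print(pos_b)  # roughly (1.0, 3.0): same physical spot, different world coords
```

The point of the sketch is that only the anchor-relative offset needs to be shared; each device resolves it against its own tracking of the anchor.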
On HoloLens devices, you can also share anchors offline from one device to another. Use the links below to decide what’s best for your application.
There are various services and technologies available to help build multi-user mixed reality experiences. Choosing a path can be tricky, so the options below are organized by scenario.
Leverage Azure Spatial Anchors in your app. Enabling and sharing spatial anchors across devices allows you to create an application where users see holograms in the same place at the same time. Additional syncing across devices is needed to enable users to interact with holograms and see movements or state updates of holograms.
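That additional syncing can be as simple as exchanging small state messages between devices. The sketch below shows a hypothetical wire format for hologram transform updates; the field names and helper functions are invented for illustration and are not the Azure Spatial Anchors or Photon API:

```python
import json, time

def transform_update(hologram_id, position, rotation, sender):
    """Serialize one hologram transform update as a JSON message."""
    return json.dumps({
        "type": "transform",
        "id": hologram_id,
        "pos": position,    # anchor-relative, so it means the same on every device
        "rot": rotation,    # quaternion (x, y, z, w)
        "sender": sender,
        "ts": time.time(),  # timestamp, usable for ordering updates
    })

def apply_update(scene, message):
    """Apply a received message to a local scene dictionary."""
    msg = json.loads(message)
    if msg["type"] == "transform":
        scene[msg["id"]] = {"pos": msg["pos"], "rot": msg["rot"]}
    return scene

scene = {}
apply_update(scene, transform_update("cube-1", [0.5, 0.0, 1.2], [0, 0, 0, 1], "deviceA"))
print(scene["cube-1"]["pos"])  # [0.5, 0.0, 1.2]
```

Because positions are expressed relative to the shared anchor, the same message produces the same physical placement on every device.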
Leverage built-in Miracast support for local users when you have a supported Miracast receiver such as a PC or TV. No additional app code is needed.
Start with our multi-user learning tutorial, which leverages Azure Spatial Anchors for local users and the Photon SDK for syncing content and state in the scene. Create locally collaborative applications where each user has their own perspective on the holograms in the scene and can fully interact with them. Updates are propagated across all devices, and interaction conflict management is handled by Photon.
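To see what interaction conflict management involves at its simplest, here is a sketch of a last-write-wins rule over timestamped updates. This is an illustration of the general idea, not how Photon implements it; a production system would typically use server-ordered sequence numbers rather than client clocks:

```python
def resolve(current, incoming):
    """Keep whichever update has the later timestamp (last write wins)."""
    if current is None or incoming["ts"] >= current["ts"]:
        return incoming
    return current

state = None
state = resolve(state, {"owner": "deviceA", "pos": [0, 0, 0], "ts": 100})
state = resolve(state, {"owner": "deviceB", "pos": [1, 0, 0], "ts": 105})
state = resolve(state, {"owner": "deviceA", "pos": [2, 0, 0], "ts": 103})  # stale, dropped
print(state["owner"], state["pos"])  # deviceB [1, 0, 0]
```

Last write wins is the bluntest strategy; richer ones (ownership transfer, per-object locks) exist precisely because two users grabbing the same hologram at once is common in collaborative scenes.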
[!NOTE] Photon is a non-Microsoft product, so a billing relationship with Photon may be required to productize and scale for higher usage.
Component capabilities and interfaces will help in providing common consistency and robust support across the various scenarios and underlying technologies. Until then, choose the best path that aligns to the scenario you’re trying to achieve in your application.
Have a different scenario, or want to use a different technology or service? Provide feedback as GitHub issues in the corresponding repo, at the bottom of this page, or reach out on the HoloDevelopers Slack.