When you wear a mixed reality headset, it becomes the center of your holographic world. The Unity Camera component automatically handles stereoscopic rendering and follows your head's movement and rotation. However, to fully optimize visual quality and hologram stability, you should apply the camera settings described below.
The default settings on the Unity Camera component are for traditional 3D applications, which need a skybox-like background because they have no real world to show through. On HoloLens, by contrast, the real world should appear behind everything the camera renders, so set the camera's Clear Flags to Solid Color and the Background to Color.clear (RGBA 0, 0, 0, 0); black renders as transparent on the device.
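If you're configuring the camera manually rather than letting MRTK do it, a minimal sketch of that setting in code might look like this (the component name is hypothetical):

```csharp
using UnityEngine;

// Minimal sketch: configure the Main Camera background for HoloLens,
// where black renders as transparent so the real world shows through.
public class HoloLensCameraBackground : MonoBehaviour
{
    void Start()
    {
        Camera cam = Camera.main;                     // camera tagged "MainCamera"
        cam.clearFlags = CameraClearFlags.SolidColor;
        cam.backgroundColor = Color.clear;            // RGBA (0, 0, 0, 0)
    }
}
```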
Whatever kind of experience you're developing, the Main Camera is always the primary stereo rendering component attached to your device's head-mounted display. It'll be easier to lay out your app if you imagine the starting position of the user as (X: 0, Y: 0, Z: 0). Since the Main Camera tracks the movement of the user's head, you can set the user's starting position by setting the starting position of the Main Camera.
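Because head tracking drives the camera's transform each frame, a common pattern is to offset the starting pose by moving a parent rig object rather than the camera itself. A sketch, assuming a hypothetical empty CameraRig GameObject with the Main Camera as its child:

```csharp
using UnityEngine;

// Sketch: shift where the user's (0, 0, 0) starting pose lands in the scene
// by moving the camera's parent rig, so we don't fight head tracking for
// control of the camera transform itself.
public class StartingPosition : MonoBehaviour
{
    [SerializeField] private Transform cameraRig;   // parent of the Main Camera (hypothetical)
    [SerializeField] private Vector3 startPosition = Vector3.zero;

    void Start()
    {
        cameraRig.position = startPosition;
    }
}
```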
The central choice you need to make is whether you’re developing for HoloLens or VR immersive headsets. Once you’ve got that, skip to whichever setup section applies.
For HoloLens apps, you need to use anchors for any objects you want to lock to the scene environment. We recommend using unbounded space to maximize stability, and creating anchors in multiple rooms.
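As a sketch of what anchoring looks like in code: the legacy WSA API exposes a WorldAnchor component, while newer XR SDK projects would use ARFoundation's ARAnchor instead. The wrapper component below is illustrative:

```csharp
using UnityEngine;

// Sketch (legacy WSA API): lock this hologram to the physical environment
// by adding a WorldAnchor. In Unity 2020+ / XR SDK projects, use
// ARFoundation's ARAnchor component instead.
public class AnchorOnPlace : MonoBehaviour
{
    public void LockToEnvironment()
    {
#if UNITY_WSA && !UNITY_2020_1_OR_NEWER
        gameObject.AddComponent<UnityEngine.XR.WSA.WorldAnchor>();
#endif
    }
}
```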
Windows Mixed Reality supports apps across a wide range of experience scales, from orientation-only and seated-scale apps up through room-scale apps. On HoloLens, you can go further and build world-scale apps that let users walk beyond 5 meters, exploring an entire floor of a building and beyond.
Your first step in building a mixed reality experience in Unity is to determine which experience scale your app will target:

- **Orientation-only** apps respond to the rotation of the user's head, but not to positional movement.
- **Seated-scale** apps build on orientation-only by also responding to small positional movements around a fixed origin.
- **Standing-scale** apps place content relative to the user's floor.
- **Room-scale** apps build on standing-scale by letting the user walk around within a pre-defined boundary.
- **World-scale** apps (HoloLens only) let users wander beyond 5 meters, with content anchored throughout the environment.
[!NOTE] If you're building for HoloLens 2, we recommend creating an eye-level experience, or consider using Scene Understanding to reason about the floor of your scene.
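In XR SDK, the experience scale you choose maps onto a tracking origin mode: Device origin suits orientation-only and seated-scale apps, while Floor origin suits standing-scale and room-scale apps. A hedged sketch of selecting one at startup:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

// Sketch: request a tracking origin mode matching the target experience
// scale. Floor origin places Y = 0 at the user's floor; Device origin
// places the origin at the headset's starting pose.
public class TrackingOriginSetup : MonoBehaviour
{
    [SerializeField] private bool useFloorOrigin = true;  // standing/room-scale

    void Start()
    {
        var subsystems = new List<XRInputSubsystem>();
        SubsystemManager.GetInstances(subsystems);
        foreach (XRInputSubsystem subsystem in subsystems)
        {
            subsystem.TrySetTrackingOriginMode(
                useFloorOrigin ? TrackingOriginModeFlags.Floor
                               : TrackingOriginModeFlags.Device);
        }
    }
}
```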
If you’re using MRTK, the camera’s background is automatically configured and managed. For XR SDK or Legacy WSA projects, we recommend setting the camera’s background to solid black on HoloLens and keeping the skybox for VR.
When there are multiple Camera components in the scene, Unity knows which camera to use for stereoscopic rendering based on which GameObject has the MainCamera tag. In legacy XR, it also uses this tag to sync head tracking. In XR SDK, head tracking is driven by a TrackedPoseDriver script attached to the camera.
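If you're setting this up yourself rather than through MRTK, here's a sketch of ensuring the Main Camera carries a TrackedPoseDriver (assuming the Legacy Input Helpers package, which provides UnityEngine.SpatialTracking.TrackedPoseDriver):

```csharp
using UnityEngine;
using UnityEngine.SpatialTracking;  // from the Legacy Input Helpers package

// Sketch: make sure the camera tagged "MainCamera" has a TrackedPoseDriver
// so XR SDK head tracking drives its transform each frame.
public class EnsureTrackedPose : MonoBehaviour
{
    void Awake()
    {
        Camera cam = Camera.main;  // resolved via the MainCamera tag
        if (cam.GetComponent<TrackedPoseDriver>() == null)
        {
            TrackedPoseDriver driver = cam.gameObject.AddComponent<TrackedPoseDriver>();
            // Track the head (center eye) pose in both rotation and position.
            driver.SetPoseSource(TrackedPoseDriver.DeviceType.GenericXRDevice,
                                 TrackedPoseDriver.TrackedPose.Center);
        }
    }
}
```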
Sharing your app's depth buffer with Windows each frame will give your app one of two boosts in hologram stability, based on the type of headset you're rendering for:

- VR immersive headsets can use the depth buffer to perform positional reprojection, correcting for the user's positional movement as well as head rotation.
- HoloLens headsets use the depth buffer to select a focus point automatically (HoloLens 1st gen) or to perform depth-based late-stage reprojection (HoloLens 2), stabilizing holograms without per-app tuning.
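Depth buffer sharing itself is a project setting rather than a script. As a code-level complement under legacy XR, an app that doesn't share depth can stabilize one hologram by setting a focus point each frame; this sketch uses the legacy UnityEngine.XR.WSA.HolographicSettings API, which isn't available in Unity 2020+:

```csharp
using UnityEngine;

// Sketch (legacy XR only): stabilize a chosen hologram by submitting a
// focus point every frame. With depth buffer sharing enabled, Windows
// derives stabilization from the depth buffer and this call is unnecessary.
public class FocusPoint : MonoBehaviour
{
    [SerializeField] private Transform target;  // hologram to stabilize

    void Update()
    {
#if UNITY_WSA && !UNITY_2020_1_OR_NEWER
        UnityEngine.XR.WSA.HolographicSettings.SetFocusPointForFrame(target.position);
#endif
    }
}
```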
Rendering content too close to the user can be uncomfortable in mixed reality. You can adjust the near and far clip planes on the Camera component.
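For example, you might clamp the near clip plane at startup; the 0.85 m value below is the comfort minimum often cited for HoloLens (1st gen) and is an assumption to tune for your own app:

```csharp
using UnityEngine;

// Sketch: keep the near clip plane far enough out that content can't
// render uncomfortably close to the user's eyes.
public class ClipPlaneSetup : MonoBehaviour
{
    [SerializeField] private float minimumNearClip = 0.85f;  // meters (assumed comfort value)

    void Start()
    {
        Camera cam = Camera.main;
        cam.nearClipPlane = Mathf.Max(cam.nearClipPlane, minimumNearClip);
    }
}
```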
If you’re building a seated-scale experience, you can recenter Unity’s world origin at the user’s current head position by calling the XR.InputTracking.Recenter method in legacy XR or the XRInputSubsystem.TryRecenter method in XR SDK.
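A sketch of the XR SDK version, which enumerates the active input subsystems and recenters each one:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

// Sketch: recenter the world origin at the user's current head position.
// In legacy XR you'd call UnityEngine.XR.InputTracking.Recenter() instead.
public static class Recentering
{
    public static void RecenterAll()
    {
        var subsystems = new List<XRInputSubsystem>();
        SubsystemManager.GetInstances(subsystems);
        foreach (XRInputSubsystem subsystem in subsystems)
        {
            subsystem.TryRecenter();  // returns false if recentering isn't supported
        }
    }
}
```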
Teleportation is a locomotion feature that's typically reserved for VR experiences. If you're using MRTK, its built-in teleportation system works across articulated hands and controllers without extra setup.
Both HoloLens and immersive headsets will reproject each frame your app renders to adjust for any misprediction of the user’s actual head position when photons are emitted.
By default:

- VR immersive headsets perform positional reprojection when the app provides a depth buffer for a frame, and orientation-only reprojection when it doesn't.
- HoloLens headsets perform reprojection automatically, using the shared depth buffer or the focus point to choose which content to stabilize.
If you’re following the Unity development journey we’ve laid out, you’re in the midst of exploring the MRTK core building blocks. From here, you can continue to the next building block:
[!div class="nextstepaction"] Gaze
Or jump to Mixed Reality platform capabilities and APIs:
[!div class="nextstepaction"] Shared experiences
You can always go back to the Unity development checkpoints at any time.