To achieve stable holograms, HoloLens has a built-in image stabilization pipeline. The stabilization pipeline works automatically in the background, so you don’t need to take any extra steps to enable it. However, you should exercise techniques that improve hologram stability and avoid scenarios that reduce stability.
The quality of holograms is a result of a good environment and good app development. Apps that run at a constant 60 frames per second in an environment where HoloLens can track the surroundings ensure that the holograms and the matching coordinate system stay in sync. From a user’s perspective, holograms that are meant to be stationary won’t move relative to the environment.
The following terminology can help you when you’re identifying problems with the environment, inconsistent or low rendering rates, or anything else.
Frame rate is the first pillar of hologram stability. For holograms to appear stable in the world, each image presented to the user must have the holograms drawn in the correct spot. The displays on HoloLens refresh 240 times a second, showing four separate color fields for each newly rendered image, resulting in a user experience of 60 FPS (frames per second). To provide the best experience possible, application developers must maintain 60 FPS, which translates to consistently providing a new image to the operating system every 16 milliseconds.
60 FPS
To draw holograms to look like they’re sitting in the real world, HoloLens needs to render images from the user’s position. Since image rendering takes time, HoloLens predicts where a user’s head will be when the images are shown in the displays. However, this prediction algorithm is an approximation. HoloLens has hardware that adjusts the rendered image to account for the discrepancy between the predicted head position and the actual head position. The adjustment makes the image the user sees appear as if it were rendered from the correct location, and holograms feel stable. The image updates work best with small changes, and they can’t completely fix certain things in the rendered image like motion-parallax.
By rendering at 60 FPS, you’re doing three things to help make stable holograms:
Frame-rate consistency
Frame-rate consistency is as important as a high frame rate. Occasionally dropped frames are inevitable for any content-rich application, and HoloLens implements some sophisticated algorithms to recover from occasional glitches. However, a constantly fluctuating frame rate is a lot more noticeable to a user than running consistently at a lower frame rate. For example, an application that renders smoothly for five frames (60 FPS for the duration of these five frames) and then drops every other frame for the next 10 frames (30 FPS for the duration of these 10 frames) will appear more unstable than an application that consistently renders at 30 FPS.
On a related note, the operating system throttles down applications to 30 FPS when mixed reality capture is running.
Performance analysis
There are various tools you can use to benchmark your application’s frame rate, such as:
[!VIDEO https://www.youtube.com/embed/-606oZKLa_s]
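Alongside those tools, a lightweight in-app check can help catch missed frames during development. The following is a minimal Unity C# sketch (the script name and logging approach are illustrative, and it assumes a Unity-based app) that flags any frame exceeding the roughly 16.7 ms budget needed to hold 60 FPS:

```csharp
using UnityEngine;

// Illustrative helper: logs any frame whose CPU frame time exceeds the
// ~16.7 ms budget required to sustain 60 FPS. Attach to any GameObject
// in a development build.
public class FrameBudgetMonitor : MonoBehaviour
{
    // 1000 ms / 60 frames ≈ 16.7 ms per frame.
    const float BudgetMilliseconds = 1000f / 60f;

    void Update()
    {
        float frameMilliseconds = Time.unscaledDeltaTime * 1000f;
        if (frameMilliseconds > BudgetMilliseconds)
        {
            Debug.LogWarning($"Frame took {frameMilliseconds:F1} ms (budget {BudgetMilliseconds:F1} ms)");
        }
    }
}
```

A steady stream of warnings from a monitor like this indicates the app isn’t holding 60 FPS and needs profiling with the tools above.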
The human visual system integrates multiple distance-dependent signals when it fixates and focuses on an object.
Convergence and accommodation are unique because they are extra-retinal cues related to how the eyes change to perceive objects at different distances. In natural viewing, convergence and accommodation are linked. When the eyes view something near (for example, your nose), the eyes cross and accommodate to a near point. When the eyes view something at infinity, the eyes become parallel and accommodate to infinity.
Users wearing HoloLens will always accommodate to 2.0 m to maintain a clear image because the HoloLens displays are fixed at an optical distance approximately 2.0 m away from the user. App developers control where users’ eyes converge by placing content and holograms at various depths. When users accommodate and converge to different distances, the natural link between the two cues is broken, which can lead to visual discomfort or fatigue, especially when the magnitude of the conflict is large.
Discomfort from the vergence-accommodation conflict can be avoided or minimized by keeping converged content as close to 2.0 m as possible (that is, in a scene with lots of depth, place the areas of interest near 2.0 m when possible). When content can’t be placed near 2.0 m, discomfort from the vergence-accommodation conflict is greatest when the user’s gaze switches back and forth between different distances. In other words, it’s much more comfortable to look at a stationary hologram that stays 50 cm away than to look at a hologram 50 cm away that moves toward and away from you over time.
Placing content at 2.0 m is also advantageous because the two displays are designed to fully overlap at this distance. For images placed off this plane, as they move off the side of the holographic frame, they’ll disappear from one display while still being visible on the other. This binocular rivalry can be disruptive to the depth perception of the hologram.
Optimal distance for placing holograms from the user
Clip Planes
For maximum comfort, we recommend clipping render distance at 85 cm with fade out of content starting at 1 m. In applications where holograms and users are both stationary, holograms can be viewed comfortably as near as 50 cm. In those cases, applications should place a clip plane no closer than 30 cm and fade out should start at least 10 cm away from the clip plane. Whenever content is closer than 85 cm, it’s important to ensure that users don’t frequently move closer to or farther from holograms, and that holograms don’t frequently move closer to or farther from the user, as these situations are most likely to cause discomfort from the vergence-accommodation conflict. Content should be designed to minimize the need for interaction closer than 85 cm from the user, but when content must be rendered closer than 85 cm, a good rule of thumb for developers is to design scenarios where users and/or holograms don’t move in depth more than 25% of the time.
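As a rough illustration of these numbers in a Unity app, the sketch below sets the camera’s near clip plane to 0.85 m and fades a hologram between 1 m and 0.85 m. The script name is illustrative, and it assumes the content uses a material whose color alpha controls transparency:

```csharp
using UnityEngine;

// Illustrative sketch: clip rendering at 85 cm and fade content out
// between 1.0 m and 0.85 m from the camera, per the guidance above.
public class NearContentFader : MonoBehaviour
{
    public Renderer contentRenderer;   // hologram to fade; assumed to use a material with a color alpha
    const float ClipDistance = 0.85f;  // recommended near clip distance
    const float FadeStartDistance = 1.0f;

    void Start()
    {
        Camera.main.nearClipPlane = ClipDistance;
    }

    void Update()
    {
        float distance = Vector3.Distance(Camera.main.transform.position, contentRenderer.transform.position);

        // 1.0 at FadeStartDistance and beyond, 0.0 at ClipDistance.
        float alpha = Mathf.InverseLerp(ClipDistance, FadeStartDistance, distance);

        Color color = contentRenderer.material.color;
        color.a = alpha;
        contentRenderer.material.color = color;
    }
}
```

The exact fade mechanism depends on your shaders; the point is that clipping happens at 85 cm and the fade begins at 1 m.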
Best practices
When holograms can’t be placed at 2 m and conflicts between convergence and accommodation can’t be avoided, the optimal zone for hologram placement is between 1.25 m and 5 m. In every case, designers should structure content to encourage users to interact from 1 m or farther away (for example, by adjusting content size and default placement parameters).
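A small sketch along the same lines (Unity C#; the helper name and the use of the head transform’s forward vector are illustrative) that defaults placement to 2 m in front of the user and clamps it to the 1.25 m to 5 m zone:

```csharp
using UnityEngine;

// Illustrative helper for choosing a default hologram position: aim for
// 2 m in front of the user and clamp to the 1.25 m - 5 m comfort zone.
public static class HologramPlacement
{
    const float PreferredDistance = 2.0f;
    const float MinComfortDistance = 1.25f;
    const float MaxComfortDistance = 5.0f;

    public static Vector3 DefaultPosition(Transform head, float requestedDistance = PreferredDistance)
    {
        float distance = Mathf.Clamp(requestedDistance, MinComfortDistance, MaxComfortDistance);
        return head.position + head.forward * distance;
    }
}
```

For example, `HologramPlacement.DefaultPosition(Camera.main.transform)` returns a point 2 m in front of the user’s current view.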
HoloLens has a sophisticated hardware-assisted holographic stabilization technique known as reprojection. Reprojection takes into account motion and change of the point of view (CameraPose) as the scene animates and the user moves their head. Applications need to take specific actions to best use reprojection.
There are four main types of reprojection. Applications need to take specific actions to enable the different types of reprojection. For example, to use Automatic Planar Reprojection, the application sets the DepthReprojectionMode in the HolographicCameraRenderingParameters to AutoPlanar each frame. For HoloLens generation 1, the application should not call SetFocusPoint.

| Stabilization Type | Immersive Headsets | HoloLens generation 1 | HoloLens 2 |
|---|---|---|---|
| Depth Reprojection | Recommended | N/A | Recommended. Unity applications must use Unity 2018.4.12+, Unity 2019.3+, or Unity 2020.3+. Otherwise, use Automatic Planar Reprojection. |
| Automatic Planar Reprojection | N/A | Recommended default | Recommended if Depth Reprojection isn’t giving the best results. Unity applications should use Unity 2018.4.12+, Unity 2019.3+, or Unity 2020.3+; earlier Unity versions will work with slightly degraded reprojection results. |
| Planar Reprojection | Not recommended | Recommended if Automatic Planar isn’t giving the best results | Use if neither of the depth options gives the desired results |
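For DirectX apps, the depth buffer is provided to the system through HolographicCameraRenderingParameters.CommitDirect3D11DepthBuffer. The fragment below is a sketch of where that call sits in a per-frame rendering loop; depthStencilSurface is a hypothetical variable standing in for however the app exposes its depth buffer as an IDirect3DSurface:

```csharp
// Illustrative fragment from a per-frame rendering loop.
// Committing the depth buffer lets the system perform depth-based reprojection.
// depthStencilSurface (hypothetical) wraps the application's depth buffer
// for this holographic camera as an IDirect3DSurface.
renderingParameters.CommitDirect3D11DepthBuffer(depthStencilSurface);
```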
When a reprojection method uses the depth buffer, it’s important to verify the contents of the depth buffer represent the application’s rendered scene. A number of factors can cause problems. If there’s a second camera used to render user interface overlays, for example, it’s likely to overwrite all the depth information from the actual view. Transparent objects often don’t set depth. Some text rendering won’t set depth by default. There will be visible glitches in the rendering when depth doesn’t match the rendered holograms.
HoloLens 2 has a visualizer to show where depth is and isn’t being set, which can be enabled from Device Portal. On the Views > Hologram Stability tab, select the Display depth visualization in headset checkbox. Areas that have depth set properly will be blue. Rendered items that don’t have depth set are marked in red and need to be fixed.
[!NOTE] The visualization of the depth will not show up in Mixed Reality Capture. It is only visible through the device.
Some GPU viewing tools allow visualization of the depth buffer. Application developers can use these tools to make sure depth is being set properly. Consult the documentation for your tools.
[!NOTE] For desktop immersive headsets, setting a stabilization plane is usually counter-productive, as it offers less visual quality than providing your app’s depth buffer to the system to enable per-pixel depth-based reprojection. Unless running on a HoloLens, you should generally avoid setting the stabilization plane.
The device will automatically attempt to choose this plane, but the application should assist by selecting the focus point in the scene. Unity apps running on a HoloLens should choose the best focus point based on your scene and pass it into SetFocusPoint(). An example of setting the focus point in DirectX is included in the default spinning cube template.
Unity will submit your depth buffer to Windows to enable per-pixel reprojection when you run your app on an immersive headset connected to a desktop PC, which provides even better image quality without explicit work by the app. You should only provide a Focus Point when your app is running on a HoloLens, or the per-pixel reprojection will be overridden.
```csharp
// SetFocusPoint informs the system about a specific point in your scene to
// prioritize for image stabilization. The focus point is set independently
// for each holographic camera.
// You should set the focus point near the content that the user is looking at.
// In this example, we put the focus point at the center of the sample hologram,
// since that is the only hologram available for the user to focus on.
// You can also set the relative velocity and facing of that content; the sample
// hologram is at a fixed point so we only need to indicate its position.
renderingParameters.SetFocusPoint(
    currentCoordinateSystem,
    spinningCubeRenderer.Position
);
```
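In Unity, the same idea applies through the legacy built-in XR API. The sketch below is illustrative: it assumes the UnityEngine.XR.WSA namespace (which varies across Unity versions), and the focusTarget field stands in for whatever hologram your scene logic decides the user is most likely looking at. The IsDisplayOpaque check follows the note above about skipping the focus point on desktop immersive headsets:

```csharp
using UnityEngine;
using UnityEngine.XR.WSA;   // legacy built-in XR; the namespace differs in older Unity versions

// Illustrative sketch: forward the focus point to the system each frame,
// but only on see-through displays such as HoloLens.
public class FocusPointSetter : MonoBehaviour
{
    public Transform focusTarget;   // the hologram the user is most likely looking at

    void Update()
    {
        if (!HolographicSettings.IsDisplayOpaque && focusTarget != null)
        {
            HolographicSettings.SetFocusPointForFrame(focusTarget.position);
        }
    }
}
```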
Placement of the focus point largely depends on what the user is looking at. The app has the gaze vector for reference, and the app designer knows what content they want the user to observe.
The single most important thing a developer can do to stabilize holograms is to render at 60 FPS. Dropping below 60 FPS dramatically reduces hologram stability, no matter how well the stabilization plane is optimized.
Best practices
There’s no universal way to set up the stabilization plane; it’s app-specific. Our main recommendation is to experiment and see what works best for your scenario. However, try to align the stabilization plane with as much content as possible because all the content on this plane is perfectly stabilized.
For example:
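As one illustration (a sketch, not part of the original guidance): if the user’s attention is on a flat slate of content, place the focus point on the slate and pass the direction toward the user as the plane normal, using the SetFocusPoint overload that also accepts a normal and a linear velocity. The slateRenderer and headPosition variables are hypothetical stand-ins for the app’s own scene data:

```csharp
// Illustrative: align the stabilization plane with a flat slate of content.
// slateRenderer and headPosition (both hypothetical) come from the app's scene.
Vector3 slatePosition = slateRenderer.Position;
Vector3 planeNormal = Vector3.Normalize(headPosition - slatePosition); // plane faces the user
Vector3 slateVelocity = Vector3.Zero;                                  // non-zero if the slate moves

renderingParameters.SetFocusPoint(
    currentCoordinateSystem,
    slatePosition,
    planeNormal,
    slateVelocity
);
```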
Things to Avoid
The stabilization plane is a great tool to achieve stable holograms, but if misused it can result in severe image instability.
Because of the nature of HoloLens displays, an artifact called “color-separation” can sometimes be perceived. It manifests as the image separating into its individual base colors: red, green, and blue. The artifact can be especially visible when displaying white objects, since they have large amounts of red, green, and blue. It’s most pronounced when a user visually tracks a hologram that is moving across the holographic frame at high speed. Another way the artifact can manifest is warping/deformation of objects. If an object has high contrast and/or pure colors such as red, green, or blue, color-separation will be perceived as warping of different parts of the object.
Example of what the color separation of a head-locked white round cursor could look like as a user rotates their head to the side:
Though it’s difficult to completely avoid color separation, there are several techniques available to mitigate it.
Color-separation can be seen on:
To attenuate the effects of color-separation:
As before, rendering at 60 FPS and setting the stabilization plane are the most important techniques for hologram stability. If facing noticeable color separation, first make sure the frame rate meets expectations.