You can port your input logic to Windows Mixed Reality by using one of two approaches:
Unity currently uses its general `Input.GetButton` and `Input.GetAxis` APIs to expose input for the Oculus SDK and the OpenVR SDK. If your apps already use these APIs for input, they're the easiest path to supporting motion controllers in Windows Mixed Reality: you just need to remap buttons and axes in the Input Manager.
For more information, see the Unity button/axis mapping table and the overview of the Common Unity APIs.
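Once the mappings are in place, the same general-purpose calls work across platforms. Here is a minimal sketch; the names `Submit`, `ThumbstickX`, and `ThumbstickY` are hypothetical Input Manager entries that you would map to the button and axis IDs in the Unity button/axis mapping table.

```csharp
using UnityEngine;

public class ControllerInputExample : MonoBehaviour
{
    void Update()
    {
        // "Submit" is a hypothetical Input Manager button entry mapped to a
        // motion-controller button ID from the button/axis mapping table.
        if (Input.GetButtonDown("Submit"))
        {
            Debug.Log("Controller button pressed");
        }

        // "ThumbstickX"/"ThumbstickY" are hypothetical axis entries mapped
        // to the controller's thumbstick axes.
        float x = Input.GetAxis("ThumbstickX");
        float y = Input.GetAxis("ThumbstickY");
        Debug.Log($"Thumbstick: ({x}, {y})");
    }
}
```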
[!IMPORTANT] If you use HP Reverb G2 controllers, see HP Reverb G2 Controllers in Unity for further input mapping instructions.
Unity releases have phased out the XR.WSA APIs in favor of the XR SDK. For new projects, it’s best to use the XR input APIs from the beginning. For more information, see Unity XR Input.
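For new projects, the XR input path reads controller state through `UnityEngine.XR.InputDevices` and the `CommonUsages` feature descriptors. A minimal sketch, assuming a right-hand motion controller is connected:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

public class XRInputExample : MonoBehaviour
{
    void Update()
    {
        // Find all connected right-hand controllers.
        var devices = new List<InputDevice>();
        InputDevices.GetDevicesWithCharacteristics(
            InputDeviceCharacteristics.Controller | InputDeviceCharacteristics.Right,
            devices);

        foreach (var device in devices)
        {
            // Query well-known features instead of platform-specific button IDs.
            if (device.TryGetFeatureValue(CommonUsages.triggerButton, out bool pressed) && pressed)
            {
                Debug.Log($"{device.name}: trigger pressed");
            }
            if (device.TryGetFeatureValue(CommonUsages.primary2DAxis, out Vector2 stick))
            {
                Debug.Log($"{device.name}: thumbstick {stick}");
            }
        }
    }
}
```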
If your app already builds custom input logic for each platform, you can use the Windows-specific spatial input APIs in the UnityEngine.InputSystem.XR namespace. These APIs let you access more information, such as position accuracy or source kind, to tell hands and controllers apart on HoloLens.
[!NOTE] If you use HP Reverb G2 controllers, all input APIs continue to work except for `InteractionSource.supportsTouchpad`, which returns false with no touchpad data.
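As one example of the extra information available, the source kind lets you tell hands and controllers apart. The sketch below uses the legacy Windows-specific `UnityEngine.XR.WSA.Input` interaction events, which expose `InteractionSource.kind`:

```csharp
using UnityEngine;
using UnityEngine.XR.WSA.Input;

public class SourceKindExample : MonoBehaviour
{
    void OnEnable()
    {
        InteractionManager.InteractionSourceDetected += OnSourceDetected;
    }

    void OnDisable()
    {
        InteractionManager.InteractionSourceDetected -= OnSourceDetected;
    }

    void OnSourceDetected(InteractionSourceDetectedEventArgs args)
    {
        // source.kind distinguishes a tracked hand from a motion controller.
        switch (args.state.source.kind)
        {
            case InteractionSourceKind.Hand:
                Debug.Log("Hand detected (HoloLens)");
                break;
            case InteractionSourceKind.Controller:
                Debug.Log("Motion controller detected");
                break;
        }
    }
}
```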
Windows Mixed Reality supports motion controllers in different form factors. Each controller’s design differs in its relationship between the user’s hand position and the natural forward direction that apps use for pointing when rendering the controller.
To better represent these controllers, you can query two kinds of poses for each interaction source: the grip pose and the pointer pose. Both poses are expressed in Unity world coordinates.
The grip pose represents the location of either the palm of a hand detected by a HoloLens, or the palm holding a motion controller. On immersive headsets, use this pose to render the user’s hand or an object held in the user’s hand, such as a sword or gun.
Access the grip pose through Unity’s XR.InputTracking.GetNodeStates APIs, like XRNodeState.TryGetPosition or XRNodeState.TryGetRotation.
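A minimal sketch of reading the grip pose this way, assuming the script is attached to the object you want to hold in the user's right hand:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

public class GripPoseExample : MonoBehaviour
{
    void Update()
    {
        // Get the current state of all tracked nodes.
        var nodeStates = new List<XRNodeState>();
        InputTracking.GetNodeStates(nodeStates);

        foreach (XRNodeState nodeState in nodeStates)
        {
            // XRNode.RightHand corresponds to the right controller's grip pose.
            if (nodeState.nodeType != XRNode.RightHand)
            {
                continue;
            }

            if (nodeState.TryGetPosition(out Vector3 position) &&
                nodeState.TryGetRotation(out Quaternion rotation))
            {
                // Place the held object (for example, a sword model) at the grip pose.
                transform.SetPositionAndRotation(position, rotation);
            }
        }
    }
}
```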
The pointer pose represents the tip of the controller pointing forward. Use this pose for raycasting at UI when you're rendering the controller model itself.
The pointer pose is available in Unity only through the Windows MR-specific API `sourceState.sourcePose.TryGetPosition/Rotation`, passing `InteractionSourceNode.Pointer` as the argument.
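A minimal sketch of reading the pointer pose from an interaction source update and raycasting along it:

```csharp
using UnityEngine;
using UnityEngine.XR.WSA.Input;

public class PointerPoseExample : MonoBehaviour
{
    void OnEnable()
    {
        InteractionManager.InteractionSourceUpdated += OnSourceUpdated;
    }

    void OnDisable()
    {
        InteractionManager.InteractionSourceUpdated -= OnSourceUpdated;
    }

    void OnSourceUpdated(InteractionSourceUpdatedEventArgs args)
    {
        // Pass InteractionSourceNode.Pointer to get the pointing ray's pose
        // instead of the default grip pose.
        if (args.state.sourcePose.TryGetPosition(out Vector3 position, InteractionSourceNode.Pointer) &&
            args.state.sourcePose.TryGetRotation(out Quaternion rotation, InteractionSourceNode.Pointer))
        {
            // Raycast forward from the pointer pose, for example at UI.
            var pointerRay = new Ray(position, rotation * Vector3.forward);
            if (Physics.Raycast(pointerRay, out RaycastHit hit))
            {
                Debug.Log($"Pointing at {hit.collider.name}");
            }
        }
    }
}
```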