There are two key ways to take action on your gaze in Unity: hand gestures and motion controllers (on HoloLens and immersive headsets). You access the data for both sources of spatial input through the same APIs in Unity.
Unity provides two primary ways to access spatial input data for Windows Mixed Reality. The common Input.GetButton/Input.GetAxis APIs work across multiple Unity XR SDKs, while the InteractionManager/GestureRecognizer API specific to Windows Mixed Reality exposes the full set of spatial input data.
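For example, a minimal sketch of the polling approach might look like the following. The "Submit", "Horizontal", and "Vertical" names are assumptions; they must be mapped to your controller buttons and thumbstick axes in the Unity Input Manager (Edit > Project Settings > Input) before this will do anything useful.

```csharp
using UnityEngine;

// Minimal sketch of the cross-SDK polling approach. The "Submit", "Horizontal",
// and "Vertical" names are assumptions -- map them to your controller buttons
// and thumbstick axes in the Unity Input Manager first.
public class PolledControllerInput : MonoBehaviour
{
    void Update()
    {
        // Digital button, polled once per frame.
        if (Input.GetButtonDown("Submit"))
        {
            Debug.Log("Select pressed this frame");
        }

        // Analog axes, for example a thumbstick.
        float x = Input.GetAxis("Horizontal");
        float y = Input.GetAxis("Vertical");
        if (Mathf.Abs(x) > 0.1f || Mathf.Abs(y) > 0.1f)
        {
            Debug.Log("Thumbstick: " + x + ", " + y);
        }
    }
}
```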
Namespace: UnityEngine.XR.WSA.Input
Types: GestureRecognizer, GestureSettings, InteractionSourceKind
Your app can also recognize higher-level composite gestures for spatial input sources: Tap, Hold, Manipulation, and Navigation gestures. You can recognize these composite gestures across both hands and motion controllers using the GestureRecognizer.
Each Gesture event on the GestureRecognizer provides the SourceKind for the input as well as the targeting head ray at the time of the event. Some events provide additional context-specific information.
There are only a few steps required to capture gestures using a Gesture Recognizer:
To use the GestureRecognizer, first create a GestureRecognizer instance:

```csharp
GestureRecognizer recognizer = new GestureRecognizer();
```
Specify which gestures you’re interested in via SetRecognizableGestures():
```csharp
recognizer.SetRecognizableGestures(GestureSettings.Tap | GestureSettings.Hold);
```
Subscribe to events for the gestures you’re interested in.
```csharp
void Start()
{
    recognizer.Tapped += GestureRecognizer_Tapped;
    recognizer.HoldStarted += GestureRecognizer_HoldStarted;
    recognizer.HoldCompleted += GestureRecognizer_HoldCompleted;
    recognizer.HoldCanceled += GestureRecognizer_HoldCanceled;
}
```
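The handlers referenced above aren't defined in this snippet; sketches along the following lines would work, assuming the event-args types from UnityEngine.XR.WSA.Input (TappedEventArgs, HoldStartedEventArgs, and so on). Each handler logs the source kind mentioned earlier; the args also carry the targeting head ray captured at the time of the event.

```csharp
// Illustrative handler implementations (assuming UnityEngine.XR.WSA.Input types).
// Each event-args struct reports which source kind raised the gesture; it also
// carries the targeting head ray captured at the time of the event.
void GestureRecognizer_Tapped(TappedEventArgs args)
{
    Debug.Log("Tap from source: " + args.source.kind);
}

void GestureRecognizer_HoldStarted(HoldStartedEventArgs args)
{
    Debug.Log("Hold started from source: " + args.source.kind);
}

void GestureRecognizer_HoldCompleted(HoldCompletedEventArgs args)
{
    Debug.Log("Hold completed from source: " + args.source.kind);
}

void GestureRecognizer_HoldCanceled(HoldCanceledEventArgs args)
{
    Debug.Log("Hold canceled from source: " + args.source.kind);
}
```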
[!NOTE] Navigation and Manipulation gestures are mutually exclusive on an instance of a GestureRecognizer.
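If your app needs both gesture families, one way around this restriction, sketched below, is to create a separate recognizer per family and capture with whichever one the focused object requires. The GestureSettings values shown are the standard manipulation and per-axis navigation flags.

```csharp
// Sketch: separate recognizers for the two mutually exclusive gesture families.
GestureRecognizer manipulationRecognizer = new GestureRecognizer();
manipulationRecognizer.SetRecognizableGestures(GestureSettings.ManipulationTranslate);

GestureRecognizer navigationRecognizer = new GestureRecognizer();
navigationRecognizer.SetRecognizableGestures(
    GestureSettings.NavigationX | GestureSettings.NavigationY | GestureSettings.NavigationZ);

// Call StartCapturingGestures() on whichever recognizer the focused object needs,
// and StopCapturingGestures() on the other.
```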
By default, a GestureRecognizer doesn’t monitor input until StartCapturingGestures() is called. It’s possible that a gesture event may be generated after StopCapturingGestures() is called if input was performed before the frame where StopCapturingGestures() was processed. The GestureRecognizer will remember whether it was on or off during the previous frame in which the gesture actually occurred, and so it’s reliable to start and stop gesture monitoring based on this frame’s gaze targeting.
To start capturing gestures:

```csharp
recognizer.StartCapturingGestures();
```
To stop gesture recognition:

```csharp
recognizer.StopCapturingGestures();
```
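For example, a minimal sketch that toggles capture based on this frame's gaze target; the "Interactable" tag is an assumption used only for illustration:

```csharp
// Sketch: start or stop gesture capture depending on what the user's gaze hits
// this frame. The "Interactable" tag is an illustrative assumption.
void Update()
{
    RaycastHit hit;
    bool gazingAtInteractable =
        Physics.Raycast(Camera.main.transform.position, Camera.main.transform.forward, out hit) &&
        hit.collider.CompareTag("Interactable");

    if (gazingAtInteractable && !recognizer.IsCapturingGestures())
    {
        recognizer.StartCapturingGestures();
    }
    else if (!gazingAtInteractable && recognizer.IsCapturingGestures())
    {
        recognizer.StopCapturingGestures();
    }
}
```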
Remember to unsubscribe from subscribed events before destroying a GestureRecognizer object.
```csharp
void OnDestroy()
{
    recognizer.Tapped -= GestureRecognizer_Tapped;
    recognizer.HoldStarted -= GestureRecognizer_HoldStarted;
    recognizer.HoldCompleted -= GestureRecognizer_HoldCompleted;
    recognizer.HoldCanceled -= GestureRecognizer_HoldCanceled;
}
```
Motion controller model and teleportation
To render motion controllers in your app that match the physical controllers your users are holding and articulate as various buttons are pressed, you can use the MotionController prefab in the Mixed Reality Toolkit. This prefab dynamically loads the correct glTF model at runtime from the system’s installed motion controller driver. It’s important to load these models dynamically rather than importing them manually in the editor, so that your app will show physically accurate 3D models for any current and future controllers your users may have.
Throwing objects in virtual reality is a harder problem than it may at first seem. As with most physically based interactions, when throwing in a game behaves unexpectedly, it's immediately obvious and breaks immersion. We've spent some time thinking deeply about how to represent a physically correct throwing behavior, and have come up with a few guidelines, enabled through updates to our platform, that we would like to share with you.
You can find an example of how we recommend implementing throwing here. This sample follows a few guidelines for physically correct throwing; the core logic is contained in the throwing.cs file, in the GetThrownObjectVelAngVel static method, within the package linked above. Because angular velocity is conserved, the thrown object keeps the same angular velocity it had at the moment of the throw:

```csharp
objectAngularVelocity = throwingControllerAngularVelocity;
```
As the center of mass of the thrown object is likely not at the origin of the grip pose, it likely has a different velocity than that of the controller in the frame of reference of the user. The portion of the object’s velocity contributed in this way is the instantaneous tangential velocity of the center of mass of the thrown object around the controller origin. This tangential velocity is the cross product of the angular velocity of the controller with the vector representing the distance between the controller origin and the center of mass of the thrown object.
```csharp
Vector3 radialVec = thrownObjectCenterOfMass - throwingControllerPos;
Vector3 tangentialVelocity = Vector3.Cross(throwingControllerAngularVelocity, radialVec);
```

The total velocity of the thrown object is then the sum of the controller's velocity and this tangential velocity:

```csharp
objectVelocity = throwingControllerVelocity + tangentialVelocity;
```
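Putting these pieces together, a small helper along these lines could compute both launch velocities at release time. The parameter names mirror the snippets above, and applying the results to a Rigidbody is shown only for illustration; this is a sketch, not the sample's exact code.

```csharp
// Sketch combining the snippets above. Computes the thrown object's linear and
// angular velocity from the controller state at the moment of release.
static void GetThrowVelocities(
    Vector3 throwingControllerVelocity,
    Vector3 throwingControllerAngularVelocity,
    Vector3 throwingControllerPos,
    Vector3 thrownObjectCenterOfMass,
    out Vector3 objectVelocity,
    out Vector3 objectAngularVelocity)
{
    // Angular velocity is conserved across the release.
    objectAngularVelocity = throwingControllerAngularVelocity;

    // Tangential velocity of the object's center of mass around the grip pose.
    Vector3 radialVec = thrownObjectCenterOfMass - throwingControllerPos;
    Vector3 tangentialVelocity = Vector3.Cross(throwingControllerAngularVelocity, radialVec);

    // Total launch velocity is the controller velocity plus the tangential term.
    objectVelocity = throwingControllerVelocity + tangentialVelocity;
}

// At release time (illustrative):
// thrownRigidbody.velocity = objectVelocity;
// thrownRigidbody.angularVelocity = objectAngularVelocity;
```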
Throwing will continue to improve with future Windows updates, and you can expect to find more information on it here.
You can access gesture and motion controller input from the Unity Input Manager.
Step-by-step tutorials, with more detailed customization examples, are available in the Mixed Reality Academy:
MR Input 213 - Motion controller
If you’re following the Unity development journey we’ve laid out, you’re in the midst of exploring the MRTK core building blocks. From here, you can continue to the next building block:
[!div class="nextstepaction"] Hand and eye tracking
Or jump to Mixed Reality platform capabilities and APIs:
[!div class="nextstepaction"] Shared experiences
You can always go back to the Unity development checkpoints at any time.