Mobile devices with passive depth sensing capabilities are ubiquitous, and recently active depth sensors have become available on some tablets and AR/VR devices. Although real-time depth data is accessible, its rich value to mainstream AR applications has been sorely under-explored. Adoption of depth-based UX has been impeded by the complexity of performing even simple operations with raw depth data, such as detecting intersections or constructing meshes. In this paper, we introduce DepthLab, a software library that encapsulates a variety of depth-based UI/UX paradigms, including geometry-aware rendering (occlusion, shadows), surface interaction behaviors (physics-based collisions, avatar path planning), and visual effects (relighting, 3D-anchored focus and aperture effects). We break down the usage of depth into localized depth, surface depth, and dense depth, and describe our real-time algorithms for interaction and rendering tasks. We present the design process, system, and components of DepthLab to streamline and centralize the development of interactive depth features. We have open-sourced our software at https://github.com/aibolem/depthlab/ for external developers, conducted a performance evaluation, and discussed how DepthLab can accelerate the workflow of mobile AR designers and developers. With DepthLab, we aim to help mobile developers effortlessly integrate depth into their AR experiences and amplify the expression of their creative vision.
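As a concrete illustration of the localized depth category, many of these interactions reduce to sampling a single depth value at a screen pixel and lifting it into a 3D point. The minimal sketch below shows the standard pinhole back-projection involved; the class and parameter names are hypothetical and are not part of DepthLab's API.

// Minimal sketch of "localized depth": turning one depth-map sample into a 3D point.
// All names here are illustrative; this is not DepthLab's API.
public final class LocalizedDepthSketch {
  // depthMeters: row-major depth map of size width * height, holes encoded as 0.
  // (u, v): pixel coordinates in the depth map.
  // (fx, fy, cx, cy): pinhole intrinsics of the depth map.
  // Returns {x, y, z} in camera space, or null if no depth is available at (u, v).
  public static float[] pixelToCameraPoint(
      float[] depthMeters, int width, int height,
      int u, int v, float fx, float fy, float cx, float cy) {
    if (u < 0 || v < 0 || u >= width || v >= height) {
      return null;
    }
    float z = depthMeters[v * width + u];
    if (z <= 0f) {
      return null;  // no depth estimate at this pixel
    }
    // Standard pinhole back-projection.
    float x = (u - cx) * z / fx;
    float y = (v - cy) * z / fy;
    return new float[] {x, y, z};
  }
}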
GitHub
Depth Lab is available as open-source code on GitHub. It is a set of ARCore Depth API samples that provide assets that use depth for advanced geometry-aware features in AR interaction and rendering.
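The samples build on the ARCore Depth API. As a rough sketch of the basic pattern, shown here with the ARCore Android SDK rather than the Unity code used in Depth Lab itself, an app enables depth in the session configuration and then acquires a depth image each frame (the surrounding class is assumed; see the official ARCore documentation for full usage):

import android.media.Image;
import com.google.ar.core.Config;
import com.google.ar.core.Frame;
import com.google.ar.core.Session;
import com.google.ar.core.exceptions.NotYetAvailableException;

public final class DepthSetupSketch {
  // Enable depth if the device supports it; depth-from-motion needs no dedicated sensor.
  public static void enableDepth(Session session) {
    Config config = session.getConfig();
    if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
      config.setDepthMode(Config.DepthMode.AUTOMATIC);
    }
    session.configure(config);
  }

  // Acquire the latest depth image; each pixel stores distance in millimeters.
  public static Image tryAcquireDepth(Frame frame) {
    try {
      return frame.acquireDepthImage16Bits();  // the caller must close() the Image
    } catch (NotYetAvailableException e) {
      return null;  // depth is not available yet, e.g. during the first few frames
    }
  }
}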
Google Play Store
Download the pre-built ARCore Depth Lab app from the Google Play Store today.
Media Coverage
A 3-minute deep dive into the ARCore Depth API
Partners leveraging DepthLab
Key Quotes
“The result is a more believable scene, because the depth detection going on under the hood means your smartphone better understands every object in a scene and how far apart each object is from one another. Google says it’s able to do this through optimizing existing software, so you won’t need a phone with a specific sensor or type of processor. It’s also all happening on the device itself, and not relying on any help from the cloud.” - The Verge
“Occlusion is arguably as important to AR as positional tracking is to VR. Without it, the AR view will often “break the illusion” through depth conflicts.” - UploadVR
“Alone, that feature (creating a depth map with one camera) would be impressive, but Google’s intended use of the API is even better: occlusion, a trick by which digital objects can appear to be overlapped by real-world objects, blending the augmented and real worlds more seamlessly than with mere AR overlays.” - VentureBeat
“Along with the Environmental HDR feature that blends natural light into AR scenes, ARCore now rivals ARKit with its own exclusive feature. While ARKit 3 offers People Occlusion and Body Tracking on compatible iPhones, the Depth API gives ARCore apps a level of environmental understanding that ARKit can't touch as of yet.” - Next Reality
"More sophisticated implementations make use of multiple cameras...That’s what makes this new Depth API almost magical. With just one camera, ARCore is able to create 3D depth maps ... in real-time as you move your phone around." - Slash Gear
5-minute UIST talk. Click here to watch the 15-minute version.
Supplementary Material
We list all ideas from our brainstorming sessions and discuss their depth representation requirements, use cases, and whether each is implemented in DepthLab in this 4-page PDF. We present additional results for the UIST demo in this 3-page PDF.