A geek is a geek even on his birthday, and to celebrate that I blogged once again a handy #realitycollective service: one that sends telemetry to an Azure Application Insights resource, using the official Microsoft SDK. You can log events and exceptions explicitly, but you can also let the service automatically log everything that passes through the internal Unity application log.
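The automatic-logging part can be sketched as a behaviour that subscribes to Unity's `Application.logMessageReceived` event and forwards each entry to a `TelemetryClient`. This is a minimal illustration, not the actual service code; the connection string is a placeholder, and it assumes the Microsoft.ApplicationInsights SDK is available in the project:

```csharp
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;
using UnityEngine;

public class TelemetryLogger : MonoBehaviour
{
    private TelemetryClient client;

    private void Awake()
    {
        // Configure the client with your resource's connection string (placeholder here).
        var config = TelemetryConfiguration.CreateDefault();
        config.ConnectionString = "<your-connection-string>";
        client = new TelemetryClient(config);

        // Hook into Unity's internal application log.
        Application.logMessageReceived += OnLogMessage;
    }

    private void OnLogMessage(string condition, string stackTrace, LogType type)
    {
        if (type == LogType.Exception || type == LogType.Error)
        {
            // Report errors and exceptions as exception telemetry.
            client.TrackException(new Exception($"{condition}\n{stackTrace}"));
        }
        else
        {
            // Everything else goes out as a simple trace.
            client.TrackTrace(condition);
        }
    }

    private void OnDestroy()
    {
        Application.logMessageReceived -= OnLogMessage;
        client?.Flush();
    }
}
```

In the real service this sits behind a RealityCollective ServiceFramework interface rather than a bare MonoBehaviour, but the event hook is the core idea.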
If you are supporting any kind of professional business app, you don’t want to rely on users reporting issues. They typically concentrate on doing their job, which in their opinion usually does not include writing detailed descriptions, reproduction paths, or even just the error message itself (especially if it’s longer than five words). So you want the app to ‘phone home’ itself. Microsoft has been doing that for years, if not decades, and so have I since I started putting Mixed Reality apps out into the wild. To this end, I have been successfully using the UnityApplicationInsights project from GitHub to log telemetry in Azure Application Insights. It simply uses the instrumentation key: you plonk a behaviour in your scene, and you are good to go. However, this code has not seen an update in six years, and recently, when I looked at some Application Insights documentation, I noticed this banner:
Sometimes you need to know whether or not the user is now actually wearing the device while your app is running. I wrote a little #RealityCollective #ServiceFramework Service to make that easily detectable #MixedReality #HoloLens2
https://localjoost.github.io/Detecting-user-presence-using-MRTK3-gaze-tracking-state/
Sometimes it’s necessary to know whether or not the user is actually wearing the headset while your app is running. For instance, this might be because the user is doing an important task that should not be interrupted until completed, or because you want to pause critical, performance-heavy, or battery-draining processes on the device when the user takes it off for a few minutes. I have found MRTK3’s gaze tracking state to be a reliable way to detect user presence, and wrote a little ServiceFramework Service that utilizes it.
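The underlying idea can be sketched with Unity's XR `InputDevices` API: if the eye tracking device reports a valid tracking state, the user is almost certainly wearing the headset. This is an illustrative approximation of the approach, not the actual service code, and the class name is hypothetical:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

public class UserPresenceDetector : MonoBehaviour
{
    public bool IsUserPresent { get; private set; }

    private void Update()
    {
        // Find all devices that report eye tracking capability.
        var devices = new List<InputDevice>();
        InputDevices.GetDevicesWithCharacteristics(
            InputDeviceCharacteristics.EyeTracking, devices);

        // If any eye tracking device is currently tracked,
        // assume the user is wearing the headset.
        IsUserPresent = false;
        foreach (var device in devices)
        {
            if (device.TryGetFeatureValue(CommonUsages.isTracked, out var tracked)
                && tracked)
            {
                IsUserPresent = true;
                break;
            }
        }
    }
}
```

Wrapping this in a ServiceFramework Service (as the linked post does) lets other components simply query or subscribe to presence changes instead of polling the XR subsystem themselves.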
After I successfully got YoloV8 models to run on HoloLens 2 to recognize the model aircraft I built as a teenager and locate them in space - using the Unity Barracuda inference engine to process the model - I thought it would be fun to try this on the Magic Leap 2 as well. The short version of this experiment is: it worked, as this 4x sped up video shows: