CrossMotion: Fusing Device and Image Motion for User Identification, Tracking and Device Association

Andrew D. Wilson 

Topics: CrossMotion, Computing, Tracking, Algorithm, Kinect 

Transcript Excerpt

The capacity to automatically identify and track people and mobile devices indoors is critical in many ubiquitous computing applications. CrossMotion is a sensor fusion technique for identifying and tracking mobile devices and their users. It works by comparing the acceleration of the mobile device to accelerations measured in a Microsoft Kinect camera's infrared and depth images.

The algorithm's major steps are as follows. The phone's acceleration, as measured by its built-in inertial measurement unit, is wirelessly sent to the host computer. At the same time, dense optical flow is computed on the Kinect infrared image. Using the depth image, each 2D flow vector is converted to a 3D motion. A Kalman filter is used to estimate the 3D acceleration at each pixel of the image. To run at video rate, we employ a GPU-accelerated implementation.

Here we compare the device's 3D acceleration to the image acceleration at each pixel. At each pixel we can see the quality of the match; darker values indicate agreement. These values are smoothed over time. A red marker indicates the minimum over the image shown here, and we can see how the location of the minimum roughly follows the phone. The comparison is performed in the Earth's frame of reference: the phone's inertial measurement unit reports acceleration in the Earth's frame, and the Kinect camera is calibrated to report values in the same frame of reference.

The CrossMotion match can be used to track the person carrying the device, and in many circumstances it can also be used to track the device itself, even when it is hidden from the camera's view. The approach works when the phone is in the person's shirt pocket, even with a distractor in the scene, because of the close link between how the body accelerates and how the phone accelerates. It also works when the phone is in the person's jeans pocket. This ability to match the motion of entirely occluded objects makes CrossMotion an interesting complement to existing video-based tracking approaches. Because CrossMotion does not depend on the device being visible, it has a wide range of applications; we demonstrate a scenario in which skeletal tracking frequently fails. Because it relies on matching accelerations, CrossMotion cannot identify a device while it is stationary, such as when it is left on a table. However, as soon as the device moves, it instantly latches on.

To assess performance over a variety of motions, we conducted a user study in which participants used their phones to draw letters in the air. In this study, CrossMotion landed on the participant's body 99 percent of the time, with an average distance of roughly 7 cm. In one condition, a distractor in the scene attempted to duplicate the participants' movements; even though some of the motions were visually very similar, the distractor never confused CrossMotion.

CrossMotion provides a real-time identification and tracking solution for smartphones and other IMU-equipped devices that can be used in many ubiquitous computing applications.
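To make the matching step concrete, here is a minimal sketch of the per-pixel comparison described above. It back-projects the depth image into one 3D point per pixel, derives per-pixel velocity and acceleration by finite differences over consecutive frames (a simplification of the filtering mentioned in the talk; a per-pixel Kalman filter would be the more robust choice), and scores each pixel by the distance between its image-derived acceleration and the device-reported acceleration, smoothed over time. All names, the finite-difference scheme, and the exponential smoothing are illustrative assumptions, not the published implementation; the device acceleration is assumed to be gravity-compensated and already expressed in the camera's frame of reference.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Convert a depth image (meters) into one 3D point per pixel (camera frame)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # shape (h, w, 3)

class CrossMotionMatcher:
    """Per-pixel comparison of image-derived and device-reported acceleration.

    Velocity and acceleration are estimated with plain finite differences,
    and the match score is smoothed with an exponential moving average;
    both are simplifications chosen to keep the sketch short.
    """

    def __init__(self, alpha=0.9):
        self.alpha = alpha         # temporal smoothing factor for the score map
        self.prev_points = None    # per-pixel 3D points from the previous frame
        self.prev_velocity = None  # per-pixel 3D velocity from the previous frame
        self.score = None          # smoothed per-pixel match score (lower = better)

    def update(self, flow, depth, device_accel, dt, fx, fy, cx, cy):
        """flow: (h, w, 2) optical flow in pixels, depth: (h, w) in meters,
        device_accel: (3,) gravity-compensated acceleration in the camera frame."""
        h, w = depth.shape
        points = backproject(depth, fx, fy, cx, cy)
        if self.prev_points is not None:
            # Where each pixel's content came from, approximated by walking
            # the flow backwards from the current pixel grid.
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            src_u = np.clip(np.round(u - flow[..., 0]).astype(int), 0, w - 1)
            src_v = np.clip(np.round(v - flow[..., 1]).astype(int), 0, h - 1)
            velocity = (points - self.prev_points[src_v, src_u]) / dt
            if self.prev_velocity is not None:
                accel = (velocity - self.prev_velocity) / dt
                # Match score: distance between image and device acceleration.
                # (In practice, pixels with invalid depth should be masked out.)
                dist = np.linalg.norm(accel - device_accel, axis=-1)
                self.score = dist if self.score is None else \
                    self.alpha * self.score + (1.0 - self.alpha) * dist
            self.prev_velocity = velocity
        self.prev_points = points
        return self.score

    def device_location(self):
        """Row/column of the best (minimum) smoothed score, once available."""
        if self.score is None:
            return None
        return np.unravel_index(np.argmin(self.score), self.score.shape)
```

Fed the flow from any dense optical flow routine together with the registered depth image each frame, the smoothed score map's minimum marks the most likely location of the device or its carrier, mirroring the red marker described above.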
