Google Open-Sources Gesture Tracking AI For Mobile Devices

Google previewed the new technique in June at the 2019 Conference on Computer Vision and Pattern Recognition (CVPR) and recently implemented it in MediaPipe, a cross-platform framework for building applied machine learning pipelines that process perceptual data across modalities such as video and audio. Both the source code and an end-to-end usage scenario are available on GitHub. (Source: VentureBeat)
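For a feel of how such a hand-tracking pipeline is consumed in practice, here is a minimal sketch using MediaPipe's Python Hands solution. Note this Python API is a later addition to the framework and is not necessarily the end-to-end example referenced above; the sketch assumes the `mediapipe` and `opencv-python` packages are installed and a webcam is available.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_drawing = mp.solutions.drawing_utils

# Track up to two hands in a live video stream.
with mp_hands.Hands(static_image_mode=False,
                    max_num_hands=2,
                    min_detection_confidence=0.5) as hands:
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR frames.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            # Each detected hand yields 21 3D landmarks (fingertips, joints, wrist).
            for hand_landmarks in results.multi_hand_landmarks:
                mp_drawing.draw_landmarks(frame, hand_landmarks,
                                          mp_hands.HAND_CONNECTIONS)
        cv2.imshow("MediaPipe Hands", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```

The returned landmarks are normalized image coordinates, so downstream gesture logic (for example, checking which fingers are extended) can be written without worrying about the camera resolution.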
