Google is rolling out support for its advanced Augmented Reality (AR) effect tools for YouTube Stories, allowing users to add animated masks, glasses, 3D hats and other objects to their selfies. Google believes that proper anchoring of the virtual content to the real world is the most difficult challenge, such as precisely tracking dynamic surface geometries across smiles, frowns or smirks.
“To make all this possible, we employ Machine Learning (ML) to infer approximate 3D surface geometry to enable visual effects,” Artsiom Ablavatski and Ivan Grishchenko, Research Engineers at Google Artificial Intelligence (AI), wrote in a blog post on Saturday.
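To make the idea concrete, here is a minimal, illustrative sketch (not Google's actual pipeline) of what "anchoring" virtual content to inferred face geometry involves: given a few tracked 3D landmarks, build a local coordinate frame on the face and place a virtual object relative to it, so the object follows the face as it moves. All landmark names and coordinates below are hypothetical stand-ins for what a face-geometry model would predict.

```python
# Illustrative sketch only: anchor a virtual object to 3D face landmarks.
# The landmark values are hypothetical; a real pipeline would read them
# from a face-geometry model per video frame.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(v, s):
    return tuple(c * s for c in v)

def normalize(v):
    n = sum(c * c for c in v) ** 0.5
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def anchor_frame(left_eye, right_eye, nose_tip):
    """Build an orthonormal frame (right, up, forward) from three landmarks."""
    right = normalize(sub(right_eye, left_eye))
    eye_mid = scale(add(left_eye, right_eye), 0.5)
    down = normalize(sub(nose_tip, eye_mid))
    forward = normalize(cross(right, down))  # points out of the face
    up = cross(right, forward)               # completes the orthonormal frame
    return right, up, forward

def place_hat(left_eye, right_eye, nose_tip, lift=0.12):
    """Position a virtual hat a fixed offset 'above' the eye line."""
    _, up, _ = anchor_frame(left_eye, right_eye, nose_tip)
    eye_mid = scale(add(left_eye, right_eye), 0.5)
    return add(eye_mid, scale(up, lift))

# Hypothetical camera-space landmark coordinates (metres):
hat_position = place_hat((-0.03, 0.0, 0.5), (0.03, 0.0, 0.5), (0.0, -0.04, 0.47))
```

Because the frame is recomputed from the landmarks every frame, the virtual hat tilts and moves with the head; a production system would additionally smooth the frame over time to avoid jitter.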
The company said it has improved accuracy and robustness by refining its processes. “That way we can grow our dataset to increasingly challenging cases, such as grimaces, oblique angle and occlusions. Dataset augmentation techniques also expanded the available ground truth data, developing model resilience to artifacts like camera imperfections or extreme lighting conditions,” the post said.
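The dataset augmentation the engineers mention typically means generating extra training variants of each image by simulating the very artifacts the model must tolerate. The following is a toy sketch of that idea, not Google's code: it perturbs a grayscale pixel list with brightness scaling (extreme lighting) and Gaussian noise (sensor imperfections); all function names and parameter ranges are illustrative assumptions.

```python
# Toy data-augmentation sketch (illustrative, not Google's pipeline):
# simulate lighting extremes and camera noise on a grayscale pixel list.
import random

def augment_brightness(pixels, factor):
    """Scale intensities to mimic dim or harsh lighting, clamped to 0-255."""
    return [min(255, max(0, int(p * factor))) for p in pixels]

def augment_noise(pixels, sigma, rng):
    """Add Gaussian noise to mimic camera sensor imperfections."""
    return [min(255, max(0, int(p + rng.gauss(0, sigma)))) for p in pixels]

def augment(pixels, rng):
    """Compose random augmentations into one extra training variant."""
    out = augment_brightness(pixels, rng.uniform(0.4, 1.6))  # dim to bright
    return augment_noise(out, sigma=8.0, rng=rng)

rng = random.Random(0)
patch = [120, 130, 140, 150]   # toy grayscale "image"
variant = augment(patch, rng)  # a perturbed copy for training
```

Each pass over the dataset can yield a fresh variant of every image, so the model sees far more lighting and noise conditions than were ever photographed.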
Facebook and Instagram already support AR filters for Stories on their platforms. “We are excited to share this new technology with creators, users and developers alike. In the future we plan to broaden this technology to more Google products,” Ablavatski and Grishchenko noted. The search giant is also bringing AR features to Maps to allow users to find their way with directions overlaid on top of the real world.