Facebook has made progress in efficient neural network design. A team from Facebook AI Research (FAIR) recently developed a new low-dimensional design space. Named ‘RegNet’, the new design outperforms traditionally available models, including ones from Google, and runs up to five times faster on GPUs.
RegNet produces simple, fast and versatile networks. Moreover, in certain experiments it even outperformed Google’s SOTA EfficientNet models, said researchers in a paper titled ‘Designing Network Design Spaces’. The paper is also available on the pre-print repository arXiv. The researchers aimed for “interpretability and to discover general design principles that describe networks that are simple, work well, and generalize across settings”.
The Facebook AI team also conducted controlled comparisons with Google’s EfficientNet, with no training-time enhancements and under the same training setup. Introduced in 2019, Google’s EfficientNet design uses a combination of NAS and model scaling rules, representing the current SOTA. With similar training settings and Flops, RegNet models outperformed EfficientNet models while also being up to five times faster on GPUs.
Rather than designing and developing individual networks, the FAIR team focused on designing the network design spaces themselves. These comprise huge, possibly infinite populations of model architectures. Design space quality is analyzed using the error empirical distribution function (EDF): the fraction of sampled models whose error falls below a given threshold.
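The idea can be sketched in a few lines. This is a minimal illustration, not the paper’s code; the error values below are made up, whereas in practice they would come from training many models sampled from each design space.

```python
# Minimal sketch of an error empirical distribution function (EDF),
# used to compare whole design spaces rather than individual networks.

def error_edf(errors, threshold):
    """Fraction of sampled models with error below `threshold`."""
    return sum(e < threshold for e in errors) / len(errors)

# Hypothetical top-1 errors (%) for models sampled from two design spaces.
space_a = [28.0, 26.5, 30.1, 25.9, 27.3]
space_b = [31.2, 29.8, 33.0, 30.5, 32.1]

# A design space is better if its EDF is higher at a given threshold,
# i.e. a larger share of its sampled models reach that error or better.
print(error_edf(space_a, 28.5))  # 0.8
print(error_edf(space_b, 28.5))  # 0.0
```

Comparing distributions of errors, rather than the single best model, is what lets the authors draw conclusions about populations of architectures.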
Further analysis of RegNet’s design space also gave researchers unexpected insights into network design. For instance, they noticed that the depth of the best models is stable across regimes, with an optimal depth of about 20 blocks (60 layers).
While it is common to see modern mobile networks employ inverted bottlenecks, the researchers found that using inverted bottlenecks degrades performance. “The best models do not use either a bottleneck or an inverted bottleneck,” said the paper.
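The distinction comes down to the bottleneck ratio of a residual block. The sketch below is illustrative only (it is not the paper’s code): it assumes the common convention that a block with input width w uses an inner width of w/b, so b greater than 1 shrinks the inner convolution (classic bottleneck), b less than 1 expands it (inverted bottleneck, as in mobile networks), and b equal to 1 leaves it unchanged, which is the setting the paper found best.

```python
# Illustrative sketch of how the bottleneck ratio b sets the inner
# width of a residual block with input width w (convention assumed:
# inner width = w / b).

def inner_width(w, b):
    """Width of the block's inner convolution for bottleneck ratio b."""
    return int(w / b)

w = 64
print(inner_width(w, 4))     # 16  -> classic bottleneck (shrinks)
print(inner_width(w, 0.25))  # 256 -> inverted bottleneck (expands)
print(inner_width(w, 1))     # 64  -> no bottleneck (reported best)
```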
Separately, Facebook’s AI research team recently developed a tool that tricks facial recognition systems into wrongly identifying people in video footage. The “de-identification” system, which also works on live video, uses machine learning to change key facial features of a subject in real time. FAIR is advancing the state of the art in artificial intelligence through fundamental and applied research in open collaboration with the community.
The history behind FAIR
The social networking giant created the Facebook AI Research (FAIR) group in 2014 to advance the state of the art of AI through open research for the benefit of all. Since then, FAIR has grown into an international research organization with labs in Menlo Park, New York, Paris, Montreal, Tel Aviv, Seattle, Pittsburgh and London.
(With inputs from IANS)