Facebook's AI Research group is releasing AI Habitat as open source.

AI Habitat is a simulator for training AI agents that could eventually control physical robots you might find in your home someday. The simulator consists of several training environments that let researchers test their AI algorithms on tasks like navigation, data collection, and a combination of the two: high-level questions that can only be answered by the agent moving to a destination and gathering information it wasn't explicitly asked for.
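For a sense of how agents interact with these environments, here is a minimal sketch in the style of the habitat-api examples. The pointnav.yaml config path and the random-action loop are based on the project's public documentation, not a verbatim excerpt:

    import habitat

    # Load an embodied task (point-goal navigation) with a pre-specified agent
    env = habitat.Env(config=habitat.get_config("configs/tasks/pointnav.yaml"))

    observations = env.reset()

    # Step through the environment with random actions until the episode ends
    while not env.episode_over:
        observations = env.step(env.action_space.sample())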

You’ll get a taste of what AI Habitat is focused on by watching the demo video, in which the agent is given a question such as “What color is the TV stand?” – requiring the agent to navigate the virtual world, identify the object, and report back the color it detected.

The interesting thing about AI algorithms like this is that they cannot know information without obtaining it through the same sensory input a human would use. The AI is limited in its data collection to what would be possible in the real world.

As with the TV stand question, a physical robot with built-in AI would need to navigate to a physical location and search for data before analyzing it and reporting back an answer (unless there were a way for the TV stand to report its own color to the robot so it didn’t have to go anywhere – but TV stands aren’t that sophisticated… yet).

The researchers at Facebook are confident that “Once a promising approach has been developed and tested in simulation, it can be transferred to physical platforms that operate in the real world.”

Some big fps numbers are touted as helping this AI technology achieve these feats. For machine learning purposes, the more data that can be processed, the faster the AI can learn. However, it’s unclear how higher frame rates will help a robot do a better job if its core programming wasn’t good to begin with. Past a certain point, more data is just more data and doesn’t make things better, so the difference between 1,000 and 10,000 fps of simulation speed only matters if the underlying technology is sound.

Facebook says that Habitat-Sim can achieve several thousand frames per second running single-threaded and can reach over 10,000 fps multi-process on a single GPU – orders of magnitude faster than the closest competing simulator.
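Those throughput figures are essentially steps-per-second measurements. A rough, hypothetical way to measure something comparable yourself, again assuming the habitat-api environment interface from the sketch above:

    import time
    import habitat

    env = habitat.Env(config=habitat.get_config("configs/tasks/pointnav.yaml"))
    env.reset()

    # Count simulator steps completed in a fixed wall-clock window
    steps, start = 0, time.time()
    while time.time() - start < 10.0:
        if env.episode_over:
            env.reset()
        env.step(env.action_space.sample())
        steps += 1

    print(f"{steps / (time.time() - start):.0f} steps per second")

A single-process loop like this won’t approach the 10,000 fps figure, which Facebook attributes to multi-process rendering on a single GPU.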

Facebook researchers believe the AI Habitat platform has that kind of potential, and that the open source community can help make this platform for computer vision, natural language understanding, and reinforcement learning even better.

Along with the open source release of AI Habitat, Facebook will also provide a rich dataset that researchers can leverage as they begin to train AI for different purposes.

The implications for VR developers are very positive, making AI in games and other virtual experiences more realistic and believable. The inability of most game AI to navigate a scene given an abstract objective is one of the primary advantages human intelligence has over computers. This research effort should be a step toward training better AI and eventually eliminating the unrealistic advantages that usually show up as loopholes in games.

The set of 3D environments that Facebook will provide as open source is called Replica. It is a photorealistic collection of indoor environments that resemble the real world, including buildings like retail stores, apartments, and homes. AI Habitat can also work with other datasets that developers bring themselves or acquire from sources like Matterport3D.
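Swapping in a different scene dataset is, in principle, a configuration change. A hedged sketch, where the DATA_PATH and SCENES_DIR keys and the Matterport3D paths are assumptions based on the habitat-api config layout:

    import habitat

    config = habitat.get_config("configs/tasks/pointnav.yaml")
    config.defrost()
    # Assumed paths: point the task at Matterport3D episodes and scene files
    config.DATASET.DATA_PATH = "data/datasets/pointnav/mp3d/v1/{split}/{split}.json.gz"
    config.DATASET.SCENES_DIR = "data/scene_datasets/"
    config.freeze()

    env = habitat.Env(config=config)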

Facebook also released PyTorch Hub, a model-sharing hub that further supports distribution of the AI Habitat technology stack and enables a unifying platform where a larger set of researchers can work to solve the bigger problems of indoor AI navigation.
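PyTorch Hub loads pretrained models by repository name. A generic example using a known torchvision entry point (not a confirmed Habitat-specific entry):

    import torch

    # Load a pretrained ResNet-18 from the torchvision hub repository
    model = torch.hub.load("pytorch/vision", "resnet18", pretrained=True)
    model.eval()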

“We aim to learn from the successes of previous frameworks and develop a unifying platform that combines their desirable characteristics while addressing their limitations. A common, unifying platform can significantly accelerate research by enabling code re-use and consistent experimental methodology. Moreover, a common platform enables us to easily carry out experiments testing agents based on different paradigms (learned vs. classical) and generalization of agents between datasets,” said Facebook.

The API from Facebook Reality Labs will include high-level embodied AI algorithms for basic functions like navigation, instruction following, and question answering. The hope is that these algorithms will be used across contexts with embodiments (simulated robots) that carry different sensors than the typical robot with depth sensors that uses SLAM (Simultaneous Localization and Mapping) for spatial awareness and navigation. Enabling simpler embodiments to achieve the same quality of navigation and question answering will result in more problems being solved at all ends of the cost spectrum. Commercial applications of these technologies will need to be low-cost and affordable for consumers if mass adoption is ever going to be a reality.
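In the Habitat configuration scheme, an embodiment's sensor suite is part of the agent config. A sketch of restricting an agent to an RGB camera only, assuming the SIMULATOR.AGENT_0.SENSORS key from the habitat-api configs:

    import habitat

    config = habitat.get_config("configs/tasks/pointnav.yaml")
    config.defrost()
    # Assumed key: give the agent an RGB camera only, no depth sensor
    config.SIMULATOR.AGENT_0.SENSORS = ["RGB_SENSOR"]
    config.freeze()

    env = habitat.Env(config=config)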

“AI Habitat consists of a stack of three modular layers, each of which can be configured or even replaced to work with different kinds of agents, training techniques, evaluation protocols, and environments. Separating these layers differentiates the platform from other simulators, whose design can make it difficult to decouple parameters in order to reuse assets or compare results,” the AI Habitat white paper reads.

Facebook VP and Chief AI Scientist Yann LeCun is hopeful that the opportunity to solve difficult and complex AI problems will attract top talent to Facebook, according to VentureBeat. Similar efforts are underway at Microsoft and Amazon, which both have AI robotics platforms under development and will also be competing for the best talent to solve these problems.

The paper describing how AI Habitat works was submitted to arXiv under the Computer Vision and Pattern Recognition category. The listed authors are Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh, and Dhruv Batra.
