MIT scientists have developed a new artificial intelligence (AI) system that can predict ingredients and suggest recipes just by looking at photos of food. The system, called Pic2Recipe, could help us learn recipes and better understand people’s eating habits. “In computer vision, food is mostly neglected because we don’t have the large-scale data sets needed to make predictions,” said Yusuf Aytar of the Massachusetts Institute of Technology in the US.
“But seemingly useless photos on social media can actually provide valuable insight into health habits and dietary preferences,” said Aytar. Researchers combed websites like All Recipes and Food.com to develop “Recipe1M,” a database of over 1 million recipes annotated with information about the ingredients in a wide range of dishes.
They then used that data to train a neural network to find patterns and make connections between the food images and the corresponding ingredients and recipes. Given a photo of a food item, Pic2Recipe could identify ingredients like flour, eggs, and butter, and then suggest several recipes that it determined to be similar to images from the database.
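The retrieval step described above can be pictured as a nearest-neighbour search in a shared embedding space: images and recipes are mapped into the same vector space, and the recipes closest to a query image are returned. The sketch below illustrates only that idea; the encoders are random stand-in projections and the recipe names are invented, not the trained network or the actual Recipe1M data.

```python
# Minimal sketch of cross-modal recipe retrieval by nearest-neighbour
# search in a shared embedding space. The "encoders" here are random
# projections standing in for a trained neural network.
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 64

def embed(features: np.ndarray, projection: np.ndarray) -> np.ndarray:
    """Project raw features into the shared space and L2-normalise."""
    v = projection @ features
    return v / np.linalg.norm(v)

# Hypothetical database: 5 recipes, each with a 128-dim feature vector.
recipe_names = ["cookies", "muffins", "lasagna", "sushi roll", "smoothie"]
recipe_feats = rng.normal(size=(5, 128))
W_recipe = rng.normal(size=(EMBED_DIM, 128))
recipe_embs = np.stack([embed(f, W_recipe) for f in recipe_feats])

def suggest_recipes(image_emb: np.ndarray, k: int = 3) -> list[str]:
    """Return the k recipes whose embeddings are closest (cosine) to the image."""
    sims = recipe_embs @ image_emb  # dot product = cosine sim for unit vectors
    top = np.argsort(-sims)[:k]
    return [recipe_names[i] for i in top]

# A query image embedded near the "cookies" recipe should retrieve it first.
query = recipe_embs[0] + 0.01 * rng.normal(size=EMBED_DIM)
query /= np.linalg.norm(query)
print(suggest_recipes(query))
```

In the real system the two encoders are trained jointly so that a photo and its matching recipe land near each other; here the shared space exists only by construction of the query.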
The system did particularly well with desserts like cookies or muffins, since those were a main theme in the database. However, it had difficulty determining ingredients for more ambiguous foods, like sushi rolls and smoothies. It was also often stumped when there were similar recipes for the same dishes. For example, there are dozens of ways to make lasagna, so the team needed to make sure the system wouldn’t “penalize” recipes that are similar when trying to separate those that are different. In the future, the team hopes to improve the system so that it can understand food in even more detail. This could mean being able to infer how a food is prepared (i.e., stewed versus diced) or to distinguish different variations of foods, like mushrooms or onions.
The researchers are also interested in potentially developing the system into a “dinner aide” that could figure out what to cook given a dietary preference and a list of items in the fridge. “This could potentially help people figure out what’s in their food when they don’t have explicit nutritional information,” said Nick Hynes, a graduate student at MIT. “For example, if you know what ingredients went into a dish but not the amount, you can take a photo, enter the ingredients, and run the model to find a similar recipe with known quantities, and then use that information to approximate your own meal,” said Hynes.
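The workflow Hynes describes, matching a known ingredient list against a recipe database to borrow quantities from the closest recipe, can be sketched as a simple set-overlap search. The recipes and amounts below are invented examples for illustration, not Recipe1M entries, and the real system would rank matches with its learned model rather than Jaccard similarity.

```python
# Sketch of the "known ingredients, unknown amounts" idea: match the
# user's ingredient list against a recipe database by Jaccard overlap
# and borrow the best match's known quantities.

RECIPES = {
    "basic pancakes": {"flour": "2 cups", "eggs": "2", "milk": "1.5 cups", "butter": "3 tbsp"},
    "sugar cookies":  {"flour": "2.75 cups", "eggs": "1", "butter": "1 cup", "sugar": "1.5 cups"},
    "omelette":       {"eggs": "3", "butter": "1 tbsp", "milk": "2 tbsp"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap between two ingredient sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

def approximate_quantities(ingredients: set[str]) -> tuple[str, dict]:
    """Return the closest recipe's name and its known quantities."""
    best = max(RECIPES, key=lambda name: jaccard(ingredients, set(RECIPES[name])))
    return best, RECIPES[best]

name, amounts = approximate_quantities({"flour", "eggs", "butter", "sugar"})
print(name, amounts)  # best overlap is "sugar cookies" (all 4 ingredients shared)
```

The borrowed quantities then serve as the approximation of the user's own meal that Hynes describes.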