
MyDietCam: A Food Recognition-Based Mobile Diet Application


MyDietCam is a system that uses image recognition to monitor your diet. You photograph your food, and the system analyses the image with machine learning models to identify the type of food and its ingredients. It then provides personalised feedback and nutritional information to help you make healthier choices.



MyDietCam is a system for monitoring diet using food recognition. The system comprises an image recognition module (10) configured to obtain an image input depicting the food, and a cloud server (20), connected to the image recognition module (10), configured to perform background segmentation on the image input, perform data augmentation on a training dataset, and carry out the training and inference processes of food recognition using machine learning models. The system further comprises a feature extractor (21) configured to extract a plurality of feature vectors by applying a feature detection algorithm and a similarity algorithm to the image input; a feature selector (22) configured to select the features of the food image using a list of estimated class probabilities from the feature extractor (21); and a classifier (23) configured to detect and classify the feature vectors from the feature selector (22) into a plurality of food categories and ingredients (Figure 1).
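As a rough illustration only, the three-stage pipeline (feature extractor (21) → feature selector (22) → classifier (23)) could be sketched as below. The toy patch descriptors, the nearest-centroid classifier, and the food category names are all illustrative assumptions, not the actual MyDietCam implementation:

```python
import numpy as np

class FeatureExtractor:
    """Stage (21): extract feature vectors from the image. A real system would
    use a feature-detection + similarity algorithm; here we use toy patches."""
    def extract(self, image: np.ndarray) -> np.ndarray:
        # Hypothetical: flatten the image into fixed-length 16-dim descriptors.
        return image.reshape(-1, 16).astype(np.float64)

class FeatureSelector:
    """Stage (22): keep the k vectors with the highest estimated class probability."""
    def __init__(self, k: int):
        self.k = k
    def select(self, vectors: np.ndarray, class_probs: np.ndarray) -> np.ndarray:
        top = np.argsort(class_probs)[::-1][: self.k]
        return vectors[top]

class Classifier:
    """Stage (23): map selected vectors to a food category (toy nearest-centroid;
    MyDietCam itself uses CNN and PKELM models)."""
    def __init__(self, centroids: dict):
        self.centroids = centroids
    def classify(self, vectors: np.ndarray) -> str:
        mean_vec = vectors.mean(axis=0)
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(mean_vec - self.centroids[c]))

# Toy run: a fake 8x8 "image" and two fake category centroids.
rng = np.random.default_rng(0)
image = rng.random((8, 8))
vectors = FeatureExtractor().extract(image)            # shape (4, 16)
probs = rng.random(len(vectors))                       # stand-in class probabilities
selected = FeatureSelector(k=2).select(vectors, probs)
centroids = {"rice": np.full(16, 0.2), "noodles": np.full(16, 0.8)}
label = Classifier(centroids).classify(selected)
print(label)
```

Each stage only needs the previous stage's output, which is what lets the extractor, selector, and classifier live as separate components on the cloud server.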


Figure 1

The system further comprises a display module (30) configured to present personalised real-time feedback and a summary consisting of behavioural change components to the user. The machine learning models include a convolutional neural network (CNN) algorithm and a progressive kernel extreme learning machine (PKELM) algorithm. The features or attributes of the image input are selected based on SHapley Additive exPlanations (SHAP) score values.
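SHAP-based selection typically ranks attributes by their mean absolute SHAP value and keeps the top k. A minimal sketch, assuming the per-sample SHAP scores have already been computed by an explainer (the attribute names and score values below are made up):

```python
import numpy as np

def select_by_shap(feature_names, shap_values, k):
    """Keep the k attributes with the highest mean |SHAP| value.

    shap_values: (n_samples, n_features) array of per-sample SHAP scores,
    e.g. as produced by an explainer from the `shap` library (hypothetical
    input here, not MyDietCam's actual scores).
    """
    importance = np.abs(shap_values).mean(axis=0)   # average impact per attribute
    top = np.argsort(importance)[::-1][:k]          # indices of the k largest
    return [feature_names[i] for i in top]

# Toy example: three attributes scored over four samples.
names = ["colour_hist", "texture", "shape"]
scores = np.array([[ 0.9, 0.1, -0.2],
                   [-0.8, 0.0,  0.3],
                   [ 0.7, 0.2, -0.1],
                   [-0.9, 0.1,  0.2]])
print(select_by_shap(names, scores, k=2))  # ['colour_hist', 'shape']
```

Taking the absolute value before averaging matters: an attribute that pushes predictions strongly in both directions is still important, even though its signed scores cancel out.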

Figure 2

Another aspect of the present invention is a method for monitoring diet using food recognition (Figure 2). The method is characterised by the steps of: obtaining an image input using a camera, with the food or meal visible in the frame, by an image recognition module (10); uploading the captured image input to a cloud server (20) by the image recognition module (10); performing background segmentation on the image input by the cloud server (20); performing data augmentation on the image input by the cloud server (20); performing a training process on machine learning models using the training dataset and the augmented dataset by the cloud server (20); performing an inference process on a validation dataset using the trained machine learning models by the cloud server (20), wherein the cloud server (20) provides a plurality of main predictions and a plurality of multilabel predictions of food categories and ingredients; generating a report of the food nutrient summary and diet quality score by the cloud server (20); and presenting personalised feedback and a summary of behavioural change components to the user by a display module (30).
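The augmentation step in the method above expands the training dataset with transformed copies of each image. A minimal sketch using simple geometric transforms (flips and 90° rotations); the actual transforms used by MyDietCam are not specified here:

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Return the original image plus four geometric variants, a simple
    stand-in for the data augmentation performed on the cloud server."""
    return [image,
            np.fliplr(image),     # mirror left-right
            np.flipud(image),     # mirror top-bottom
            np.rot90(image, 1),   # rotate 90 degrees
            np.rot90(image, 3)]   # rotate 270 degrees

img = np.arange(9).reshape(3, 3)  # toy 3x3 "image"
augmented = augment(img)
print(len(augmented))  # 5 variants per training image
```

Because a plate of food looks equally plausible mirrored or rotated, these label-preserving transforms give the training process more varied examples without any extra photography.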

 






Professor Dr Moy Foong Ming
Department of Social and Preventive Medicine, Faculty of Medicine
moyfm@um.edu.my
