Found 236 repositories (showing 30)
indirect
the cube rule of food identification, explained
vivien-yang
The Calorie Estimation Project is divided into two main parts: identifying food from an image, and estimating calories for the identified food. For food image identification, we applied multi-class SVM algorithms, exploring and comparing different features, including HOG (Histogram of Oriented Gradients), LBP (Local Binary Pattern) and CNN features. The results show that the local LBP feature performs best overall. Food calorie data collected from the Internet is compiled into a table for easy conversion from food category to calorie count.
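The LBP feature mentioned above can be illustrated with a minimal sketch: the basic 8-neighbour Local Binary Pattern, whose normalised histogram is the texture vector typically fed to an SVM. This is a generic textbook formulation, not the project's own code; function names and the 256-bin layout are illustrative.

```python
import numpy as np

def lbp_codes(gray):
    """Basic 8-neighbour Local Binary Pattern codes for a grayscale image.

    gray: 2-D numpy array of intensities. Returns LBP codes (0-255) for
    the interior pixels: each neighbour >= centre contributes one bit.
    """
    c = gray[1:-1, 1:-1]  # centre pixels
    # 8 neighbours, ordered clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:gray.shape[0] - 1 + dy,
                         1 + dx:gray.shape[1] - 1 + dx]
        codes |= ((neighbour >= c).astype(np.uint8) << bit)
    return codes

def lbp_histogram(gray):
    """256-bin normalised LBP histogram, usable as an SVM feature vector."""
    h = np.bincount(lbp_codes(gray).ravel(), minlength=256).astype(float)
    return h / h.sum()
```

In practice one would extract such histograms per image (often per cell, concatenated) and train `sklearn.svm.SVC` on the resulting vectors.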
This website automatically estimates food attributes such as nutrients and ingredients by classifying an input food image. The approach uses several deep learning models for accurate food item identification. Beyond image recognition and analysis, the food's ingredients, nutrients and attributes are estimated by extracting semantically related words from a large collection of text accumulated from the internet. Experiments were performed on the Food-101 dataset. The proposed system gives the user the nutritional value of a food item in real time and is effective and simple to use. It also provides supporting features such as food logging, calorie tracking and healthy recipe recommendations for self-monitoring.
shamspias
AgriAid is an AI-powered tool for farmers & agricultural agents in Bangladesh, offering plant disease forecasting & identification. Using machine learning, deep learning & Python, it helps increase crop yield & food security.
C-Logesh-Perumal-29
Vegetable Classification & Detection is a web-based tool built on Streamlit, TensorFlow, and OpenCV. It employs CNN and YOLO models to classify and detect vegetables from images and live feeds, benefiting agriculture and food processing with accurate identification and detection.
sgandhi04
Parks, downtowns, malls, and stores are places we visit frequently in our day-to-day lives. These public venues serve many purposes, such as socializing, dining, shopping, and playing, so it is important that they are easily accessible to everyone. Of the roughly 7.5 billion people on the planet, around 285 million suffer from visual impairment, so it is crucial that we make public venues accessible to this nearly 4% of the world's population. This project focuses on creating a navigational aid, leveraging computer vision, artificial intelligence, robotics, and a variety of sensors, as an assistive technology the visually impaired can use in public environments. The K9 includes the following features:
- A robotic guide vehicle that helps the visually impaired and elderly navigate public indoor/outdoor surroundings
- Easily controllable vehicle speed
- Movement along a predetermined path, with independent detection and avoidance of obstacles
- Identification of obstacles and various objects
- Sound feedback

I decided to build the device on the Arduino platform, using a cheap computer vision camera and vibration motors for obstacle detection. After some research, I discovered the low-cost CMUcam5 Pixy computer vision camera, which can record signatures of objects; the device detects pre-programmed obstacles by their hue. Combining the Pixy Cam, an ultrasonic sensor, and a line follower, I created a device that can navigate a user around a store. The product can follow a predetermined path, avoid obstacles and return to the path, and beep when it finds a specific object. Not only does it navigate the user around a public environment, it also identifies specific objects (e.g. a tomato).

To give the user more control of the robot, I built a hand dynamometer that lets the robot change its speed based on the strength of the user's grip. To test the product, I replicated an indoor public environment using toy food, wood for aisles, and electrical tape for the predefined path. I ran three trials for each object-detection scenario (object on the left, on the right, and on both sides), nine trials in total. K-9 succeeded 80% of the time for objects on the left, 84.6% on the right, and 79% on both sides. From my data and qualitative observations, I conclude that this product has the potential to help guide visually impaired individuals in public surroundings. Although it meets all of my criteria with 82% overall accuracy, it will need to reach 100% accuracy to hit the mainstream. In the future, I hope to try other computer vision cameras, such as Google AIY, to further aid object identification.
Big-Dakka
Food Identification System project
Buy-Canadian
A cross-platform application to easily identify Canadian products. Features real-time barcode scanning, product data visualization, and easy identification of product origin. Uses data from Open Food Facts.
magnuspalmblad
compareMS2 provides a pairwise global comparison between LC-MS/MS datasets based on similarity between their tandem mass spectra. Applications include molecular phylogenetics, quality control, pathogen identification and food authentication.
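Dataset-level comparison like this builds on a per-spectrum similarity measure. As a minimal sketch of one common choice (cosine similarity on peak lists binned onto a fixed m/z grid), not compareMS2's actual metric or parameters:

```python
import numpy as np

def bin_spectrum(mz, intensity, bin_width=1.0, max_mz=2000.0):
    """Bin a peak list (m/z values, intensities) onto a fixed m/z grid."""
    bins = np.zeros(int(max_mz / bin_width))
    idx = (np.asarray(mz) / bin_width).astype(int)
    keep = idx < bins.size                      # drop peaks beyond the grid
    np.add.at(bins, idx[keep], np.asarray(intensity)[keep])
    return bins

def cosine_similarity(spec_a, spec_b):
    """Cosine similarity between two binned spectra (1.0 = identical)."""
    a, b = bin_spectrum(*spec_a), bin_spectrum(*spec_b)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```

A dataset-vs-dataset score would then aggregate many such pairwise spectrum comparisons, e.g. counting the fraction of spectra in one run that find a match above a threshold in the other.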
Naming common Nigerian foods using ResNet
makeavish
Identification of multiple food items in an image using YOLOv2.
3rd year B.Tech - Minor Project
kumarrishav4
Machine learning-based system that automates food identification from images and estimates their caloric values. Leveraging deep learning (CNNs) and traditional ML models, the system recognizes food types and predicts calorie content using nutritional data.
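The second stage described above, mapping a recognized food class to a calorie estimate, is essentially a table lookup on top of the classifier's output. A minimal sketch, with a hypothetical class list and per-100 g calorie table standing in for the project's trained model labels and nutritional data:

```python
import numpy as np

# Hypothetical class list and per-100 g calorie table; the real system
# would use its trained CNN's labels and collected nutritional data.
CLASSES = ["apple", "pizza", "salad"]
KCAL_PER_100G = {"apple": 52, "pizza": 266, "salad": 33}

def estimate_calories(logits, portion_g=100.0):
    """Turn raw classifier logits into (label, estimated kcal) for a portion."""
    probs = np.exp(logits - np.max(logits))
    probs /= probs.sum()                       # softmax over class scores
    label = CLASSES[int(np.argmax(probs))]     # predicted food class
    return label, KCAL_PER_100G[label] * portion_g / 100.0
```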
Activity recognition, the problem of identifying a body's behaviour from sensor data such as a smartphone accelerometer, is among the most widely studied topics in machine learning-based classification. The Cooking Activity Recognition Challenge (CARC) asked participants to recognize food-preparation activities from motion-capture and acceleration sensors. Two smartphones, two wristbands, and motion-capture equipment were used to collect three-axis (x, y, z) acceleration and motion data for the CARC dataset. One of the most challenging problems in this investigation was decomposing complicated tasks into smaller (micro) activities that are part of larger (macro) activities. Using a Convolutional Neural Network (CNN) and a Bidirectional LSTM, we built a deep learning approach that extracts dynamical features for macro- and micro-activity identification. On this dataset, our proposed model achieves classification accuracies of 83% for macro activities and 85.3% for micro activities.
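Before a CNN/BiLSTM can classify such streams, the raw tri-axial acceleration data is normally segmented into fixed-length overlapping windows. A minimal sketch of that preprocessing step; the window and stride values are illustrative, not the challenge entry's actual settings:

```python
import numpy as np

def sliding_windows(stream, window=128, stride=64):
    """Segment a (T, 3) tri-axial acceleration stream into overlapping
    fixed-length windows of shape (n_windows, window, 3), the usual
    input format for a CNN or BiLSTM activity classifier."""
    stream = np.asarray(stream)
    starts = range(0, len(stream) - window + 1, stride)
    return np.stack([stream[s:s + window] for s in starts])
```

Each window then gets a single activity label (macro or micro), and the stacked windows form the training batch for the network.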
dhruvmsheth
This project was designed for the Arduino touch-free challenge. In malls or supermarkets, many people touch objects in the aisles, and it is hard to monitor each object and sanitise each aisle individually. Most of the time, manual sanitisation is done once at the end of each day, and sanitising each aisle individually is not feasible. Sometimes the food aisle is more contaminated and the clothes aisle less so, requiring extra care in monitoring and sanitising the food aisle; such monitoring is not possible manually. I therefore decided to build an autonomous solution based on TinyML deployed on an Arduino Nano 33 BLE Sense. The person-identification model is highly accurate and uses an ArduCam, available in 5 MP or 2 MP variants. Solutions based on ultrasonic or lidar sensors can produce inaccurate or false readings, so the person-detection model instead uses person-recognition technology based on the TensorFlow Lite framework. When 1-50 people are detected, a green light is switched on, meaning the area is still safe. When 51-75 people are detected, a yellow light is flashed, meaning awareness is required. When 76-100 people are detected, a red light is flashed, meaning people should touch carefully and only touch the objects they need. When 101-110 people are detected, the UV sanitisation system is activated, sanitising the whole area so objects are safe again; the count is then reset to 0 and the cycle repeats. The people-vs-time graph is portrayed on the ThingSpeak web app, where mall owners can track when the crowd is larger or smaller and maintain more awareness during peak timings.
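The tiered light-and-sanitise logic described above is a simple threshold state machine. A minimal sketch of it in Python (the on-device version would run as C++ on the Arduino; function names here are illustrative):

```python
def crowd_state(count):
    """Map a running people count to the alert/action tiers described above."""
    if count <= 50:
        return "green"          # area still considered safe
    if count <= 75:
        return "yellow"         # raise awareness
    if count <= 100:
        return "red"            # touch only what is necessary
    return "uv_sanitise"        # trigger the UV cycle, then reset the count

def step(count, detected):
    """Add newly detected people; reset to 0 after a UV sanitisation cycle."""
    count += detected
    state = crowd_state(count)
    return (0 if state == "uv_sanitise" else count), state
```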
deepalimudale
This project intends to build a food identification system based on food images. We use a new dataset of the most-consumed local food items, collected from publicly available restaurant and Internet sources. We selected 10 Indian states and applied a CNN to identify each state's famous food items. This is an evolving dataset, and we will add more data as it varies over time.
t8lionchion
Training an image-recognition model to identify 11 different kinds of food.
Food item identification in Python, using nateraw's pretrained model from Hugging Face.
ai-cfia
Nachet - AI-powered seed identification system for the Canadian Food Inspection Agency (CFIA)
GuoquanPei
Official implementation of "Phenotypic Feature-Based Identification of Tea Geographical Origin Using Lightweight Deep Learning" (npj Science of Food, 2026).
Jade-Cartagena
A GitHub Repository for the dataset and QSAR/QSPR predictive models used for the undergraduate thesis "Repurposing QSPR/QSAR Drug Discovery Pipelines for the Identification of Toxicity, Antioxidant and Anti-Inflammatory Properties in Food Compounds"
Codelikeamachine
This project aims to develop a crop disease detection system leveraging machine learning techniques. By integrating image processing and deep learning algorithms, the system targets precise identification of crop diseases to enable timely interventions, mitigating their impact on agricultural productivity and food security.
The Productive Safety Net Programme (PSNP) is Ethiopia's rural safety net, designed to support poor, food-insecure rural households through timely and predictable benefits. Launched in 2005, the PSNP currently supports approximately 2 million eligible households (8 million beneficiaries) in Afar, Amhara, Dire Dawa, Harari, Oromia, Southern Nations, Nationalities and Peoples (SNNP), Somali and Tigray. Households with able-bodied adult labour engage in Public Works (PW) and receive transfers for six months of the year; households without labour capacity, the Permanent Direct Support (PDS) clients, receive 12 months of unconditional transfers. The timing of PW and the associated transfers varies from region to region: while many woredas are scheduled to pay PW clients during February to July (for January-June entitlements), all woredas in Somali region and some woredas in Oromia follow different transfer schedules. One of the major agreements reached during the PSNP 4 Mid-Term Review (MTR) was not to finance the federal contingency budget, or the scaling-up of safety net support in response to drought shocks, from existing resources, primarily because of a funding gap in the core programme. This will in turn help (i) strengthen the linkage between the PSNP and Humanitarian Food Assistance (HFA) and (ii) support the application of a common set of operational procedures to PSNP transfers and to transfers to non-PSNP households in response to drought. Linked to this, ad-hoc federal contingency resources were mobilized for 2017 through financial contributions from the World Bank Group, USAID (through its PSNP-implementing NGO partners), DFID, WFP and UNICEF, to finance payments to PSNP and non-PSNP beneficiaries affected by the ongoing drought. The PSNP 4 logical framework includes indicators that track the timeliness of federal contingency budget utilization.
Specifically, these performance targets measure the percentage of clients receiving contingency resources within 60 days of identification of need.
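That timeliness indicator can be computed directly from payment records. A minimal sketch, assuming each record pairs a needs-identification date with the corresponding transfer date; the record layout is illustrative, not taken from the PSNP data systems:

```python
from datetime import date

def pct_within_60_days(records):
    """Share (%) of contingency-resource clients paid within 60 days of
    needs identification. `records` is a list of
    (identification_date, transfer_date) pairs."""
    on_time = sum((t - i).days <= 60 for i, t in records)
    return 100.0 * on_time / len(records)
```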
Vivek-Gera
This repository contains code and resources for a project focused on the automatic identification of food items using Convolutional Neural Networks (CNNs), specifically a pretrained DenseNet-161 model. The project aims to improve food-classification accuracy by enhancing class separability through successive data augmentation techniques.
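Augmentation pipelines like the one described typically chain simple random transforms. A minimal numpy-only sketch of two common ones (horizontal flip and random crop with nearest-neighbour resize); the transforms and parameters are illustrative stand-ins, not the repository's actual pipeline:

```python
import numpy as np

def augment(image, rng):
    """One random augmentation pass: optional horizontal flip plus a
    random 90% crop, resized back via nearest-neighbour sampling."""
    h, w = image.shape[:2]
    if rng.random() < 0.5:
        image = image[:, ::-1]                  # horizontal flip
    ch, cw = int(h * 0.9), int(w * 0.9)         # crop to 90% of each side
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    crop = image[top:top + ch, left:left + cw]
    # nearest-neighbour resize back to the original resolution
    ys = (np.arange(h) * ch / h).astype(int)
    xs = (np.arange(w) * cw / w).astype(int)
    return crop[ys][:, xs]
```

Applying several such passes per training image ("successive" augmentation) exposes the CNN to more intra-class variation, which is what improves class separability.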
cfagafaga
A food identification repository
ChengChao02
An innovation and entrepreneurship project for college students, with me as the principal lead.
nacoline
No description available
sudhanshuk21
No description available
anindyaca
Identification of food items, volume from food images
poojabansal87
Food Allergy Identification <in progress>