Found 128 repositories (showing 30)
ohumkar
A scaled-down version of a self-driving system using OpenCV. The system comprises:
• A Raspberry Pi with a webcam and an ultrasonic sensor as inputs
  ◦ Steering via move in sdcar.py
  ◦ Stop-sign detection using Hough circles and colour intensities
  ◦ Front collision avoidance using the ultrasonic sensor
• An L298N motor controller
Project structure:
* sdcar.py: combines all of the following
* lane_lines.py:
  Step 1. Take the webcam feed and apply the Canny edge algorithm to detect edges.
  Step 2. Detect lines in the edge image using Hough lines.
  Step 3. Average the lines according to their slope.
  Step 4. Make points from the averaged slopes.
  Step 5. Return the right, left, camera, and central lines.
* sensor.py: distance measurement using input and output pins
* sign.py:
  * Detects circles in the image using Hough circles.
  * If the dominant colour in a square region around the circle is red, it is a stop sign.
  * If the dominant colour in the square region is blue, there are five cases: left, right, forward, forward-and-right, or forward-and-left. To distinguish them:
    • Split the square region into three zones: right, left, and upper (for forward).
    • If the right zone is white and the other two are blue, the sum of RGB intensities in the right zone will be greater than in the other two zones, so the sign is "right"; the other cases follow similarly.
Note: sign.py only works on the following type of sign:
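The zone-comparison idea in sign.py can be sketched as below. The exact zone geometry is not given in the description, so the thirds/halves split and the zone names here are assumptions; only the three single-direction cases are handled.

```python
import numpy as np

def classify_blue_sign(region):
    """Classify a blue direction sign by comparing summed pixel intensities
    in three zones of the cropped square region around the detected circle.
    The zone containing the white arrow area has the highest sum."""
    h, w, _ = region.shape
    zones = {
        "forward": region[: h // 3, :],          # upper zone
        "left":    region[h // 3 :, : w // 2],   # lower-left zone
        "right":   region[h // 3 :, w // 2 :],   # lower-right zone
    }
    return max(zones, key=lambda z: int(zones[z].sum()))

# Synthetic sign: blue background with a white patch in the lower-right zone.
sign = np.zeros((90, 90, 3), dtype=np.uint8)
sign[..., 2] = 200                 # blue everywhere
sign[40:80, 55:85] = 255           # white arrow region on the right
print(classify_blue_sign(sign))    # "right"
```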
soniyamaurya
My project “UNDERGROUND CABLE FAULT DETECTION” comes under the domain of the Internet of Things. It detects faults in underground cable wire; when a fault is detected, it is shown on the LCD display. Three wires are used, i.e., Red, Yellow, and Blue: if a fault is detected in any of them, the distance to the fault is shown along with the wire colour. Initially the display shows NF (No Fault), and the program runs continuously to check for faults, so it requires little human intervention.
Deepanshu1008
The process of detecting the name of any colour in an image is known as colour detection. Colours are made up of three primary colours: red, green, and blue. In computers, each colour value is defined within a range of 0 to 255, so there are approximately 16.7 million ways to represent a colour. This software extracts the RGB values and the colour name of a pixel that you double-click. The approach is basic and simple. The project uses Python along with the pandas and OpenCV libraries to create a user interface that lets the user select a point: whenever a double-click event occurs, it updates the colour name and RGB values on the window, along with the x, y position of the mouse.
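The name lookup behind projects like this can be sketched as a nearest-neighbour search over a table of named colours. The six-entry table below is a hypothetical stand-in for the colors.csv dataset these projects usually load with pandas; the Manhattan-distance metric is a common choice, not necessarily this repo's exact one.

```python
# Hypothetical mini colour table; real projects load a larger colors.csv.
COLOURS = {
    "red": (255, 0, 0),
    "green": (0, 128, 0),
    "blue": (0, 0, 255),
    "white": (255, 255, 255),
    "black": (0, 0, 0),
    "yellow": (255, 255, 0),
}

def closest_colour_name(r, g, b):
    """Return the named colour with the smallest Manhattan distance in RGB space."""
    return min(
        COLOURS,
        key=lambda name: abs(r - COLOURS[name][0])
                       + abs(g - COLOURS[name][1])
                       + abs(b - COLOURS[name][2]),
    )

print(closest_colour_name(250, 10, 5))   # "red"
```

In the full application, the (r, g, b) arguments would come from indexing the image matrix at the double-clicked pixel.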
Detect Colour Names using Mouse Cursor on an Image using OpenCV.
Anurag-Adoni
Olfactory scent projection based on Color detection in video scenes and objects for spatial visualization
AKeerthana
A project developed in Python to build an object detection system using OpenCV. The main functionalities include object detection based on colour (classifying objects in images by colour), pedestrian detection, human face detection, and vehicle motion detection from a video file, which can be used to detect traffic in a particular area.
explaura
This project aims to segment hands in each frame based on colour detection
RhuthuHegde
This project intends to make drawing or annotating hands-free by using hand tracking modules and virtually drawing on a scanned document, a major application of Computer Vision. The document is captured in real time and stored in the system, then cropped to remove the unwanted edges and background and saved again. It uses Canny edge detection and performs automatic corner detection, image sharpening, and colour thresholding. The hand tracking module detects fingertips from the web camera for drawing on the canvas with the index finger. Two fingers are used for colour selection and the index finger is used to annotate the document, making it contact-less and easy.
ROHAN337
The getColorName function looks up colour names from the dataset. cv2.imread returns the image as a matrix of pixel values, which lets us map the mouse pointer's click position to image coordinates. The draw_function is registered with cv2.setMouseCallback() and is called on a mouse click with the event type and the pointer's coordinates. The R, G, B values are read from the matrix returned by imread and displayed in a box of that colour; putText is used to draw text onto the image.
Prothoma2001
Colour detection is necessary to recognise objects. Using colours, computers can also detect scenery or situations. It is used as a tool in various image-editing and drawing apps, and colour-blind people can use it to identify colours. Using this project, we can detect colour names and their RGB values from any picture.
Color Recognition Project: Advancing Computer Vision for Accurate Color Detection
No description available
gautampatil1202
A Python project utilizing pandas and OpenCV for colour detection in real-time video and still images.
MUNEEBURREHMANAI
This repository contains three beginner-level artificial intelligence projects: 1. FAQ (PyQt colour), 2. Object detection, 3. Music generator.
Anjal10911
A Python project that creates a real-time invisibility effect using OpenCV. It detects a specific colour (e.g., red) and replaces it with the background using colour detection & background subtraction. Inspired by Harry Potter!
This project demonstrates image segmentation using the HSV (Hue, Saturation, Value) colour space for object detection. The primary goal was to explore the effectiveness of HSV in segmenting images based on colour and to evaluate its performance in various real-world test cases.
Zahir-Khan98
This folder contains DIP (EE5005) course assignment solution in python and project material titled as 'Fast Neonatal Jaundice Detection Using Colour Models and Machine Learning Classifiers ' .
WillStephenn
Arduino code for a TinkerKit Braccio arm that autonomously detects, retrieves, and sorts objects by colour. It uses an HC-SR04 ultrasonic sensor for object detection, a TCS3200 sensor for colour identification, and an inverse kinematics library for path planning. A project for the UCL Engineering Foundation Year.
Shubham722-227
A MATLAB project to compare edge detection results across the grayscale image and the Intensity (I) and Hue (H) components of a picture in the HSI colour space, to determine the most effective way of capturing distinct edges.
SuruchiParashar
With the advancement of modern technology, computer vision and real-time image processing have become major areas of interest. Our aim in this project is to combine object detection, colour detection, and object tracking to implement a virtual drawing board. The software detects objects from the webcam in real time and tracks their movement, replicating the object's path of motion as a drawing on the screen. An embedded colour detection module allows only objects of a particular colour (blue here) to be used as the painting stick. All of this is achieved with the open source computer vision library of Python (OpenCV for short): an image is captured and filtered on the basis of its hue, saturation, and value (HSV) range, using the basic library functions for image handling and processing. The new features we will attempt to embed in this project are:
1. Touch-less interface
2. Any object of any colour and size can be used
3. Automatic colour detection and selection based on the object's colour
4. Infant-friendly application that can be used to teach colours in a fun way
5. Interactive and engaging interface for autistic patients
Viresh103
A scaled-down version of a self-driving system using OpenCV. The system comprises:
• A Raspberry Pi with a webcam and an ultrasonic sensor as inputs
  ◦ Steering via move in sdcar.py
  ◦ Stop-sign detection using Hough circles and colour intensities
  ◦ Front collision avoidance using the ultrasonic sensor
• An L298N motor controller
Project structure:
* sdcar.py: combines all of the following
* lane_lines.py:
  Step 1. Take the webcam feed and apply the Canny edge algorithm to detect edges.
  Step 2. Detect lines in the edge image using Hough lines.
  Step 3. Average the lines according to their slope.
  Step 4. Make points from the averaged slopes.
  Step 5. Return the right, left, camera, and central lines.
* sensor.py: distance measurement using input and output pins
* sign.py:
  * Detects circles in the image using Hough circles.
  * If the dominant colour in a square region around the circle is red, it is a stop sign.
  * If the dominant colour in the square region is blue, there are five cases: left, right, forward, forward-and-right, or forward-and-left. To distinguish them:
    • Split the square region into three zones: right, left, and upper (for forward).
    • If the right zone is white and the other two are blue, the sum of RGB intensities in the right zone will be greater than in the other two zones, so the sign is "right"; the other cases follow similarly.
Note: sign.py only works on the following type of sign:
aliyassine1
In this project we want to detect and classify traffic lights in images. Street-level images can contain multiple traffic lights, so we detect the locations of both illuminated and unilluminated traffic lights and then classify their colour (red, green, or yellow for illuminated lights; black for unilluminated ones). We employed two methods: one-stage and two-stage object detection.
arun-tandon
A computer vision project implemented with OpenCV: draw your imagination by just waving your finger in the air. We use the computer vision techniques of OpenCV to build this project. The preferred language is Python, due to its exhaustive libraries and easy-to-use syntax, but once the basics are understood it can be implemented in any OpenCV-supported language. Colour detection and tracking are used to achieve the objective: the colour marker is detected and a mask is produced. Further steps apply morphological operations to the mask, namely erosion and dilation. Erosion reduces the impurities present in the mask, and dilation then restores the eroded main mask.
Snehakri022
# Invisible-cloak
Being a Harry Potter fan, I always had a childhood fantasy of using an invisibility cloak. It turns out that with simple image processing tricks I can now actually fulfil that childhood fantasy. This code turns a red cloth into an invisibility cloak. It's a fun application which you will enjoy using, and you can learn some key OpenCV functions from this project.
# How it Works?
1. Capture and store the background frame.
2. Detect the red cloth using a colour detection algorithm.
3. Segment out the red cloth by generating a mask.
4. Generate the final augmented output to create the magical effect.
amansahu112
Emotion recognition through facial expression detection is one of the important fields of study for human-computer interaction. To detect a facial expression, a system must cope with the many sources of variability in human faces, such as colour, posture, expression, and orientation. It is first necessary to detect the different facial features, such as the movements of the eyes, nose, and lips, and then classify them against trained data using a suitable classifier. In this research, a human facial expression recognition system is modelled using the eigenface approach. The proposed method uses the HSV (Hue-Saturation-Value) colour model to detect the face in an image. PCA is used to reduce the high dimensionality of the eigenspace; the test image is then projected onto the eigenspace, and expressions are classified by the Euclidean distance between the test image and the mean of the eigenfaces of the training dataset. A generic dataset is used for training. The system uses grayscale images of the face to classify five basic emotions: surprise, sorrow, fear, anger, and happiness.
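The eigenface projection and Euclidean-distance classification can be sketched in numpy. The random 8x8 "images" below are a hypothetical stand-in for a labelled face dataset, and the choice of five components is arbitrary; this illustrates the PCA machinery, not the paper's exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 10 flattened 8x8 grayscale "faces".
train = rng.normal(size=(10, 64))
mean_face = train.mean(axis=0)
centred = train - mean_face

# PCA via SVD: rows of vt are the principal components (the eigenfaces).
_, _, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt[:5]               # keep the top 5 components

def project(img):
    """Project a flattened image into the reduced eigenspace."""
    return eigenfaces @ (img - mean_face)

# Classify a test image by nearest training projection (Euclidean distance).
train_proj = centred @ eigenfaces.T
test_img = train[3] + rng.normal(scale=0.01, size=64)   # near sample 3
dists = np.linalg.norm(train_proj - project(test_img), axis=1)
print(int(dists.argmin()))   # 3
```

In the real system, each training sample carries an emotion label, and the label of the nearest projection is returned.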
VanshitaBhansali
No description available
rokit1512
No description available
karthik2726
No description available
nevin33
Using Python and OpenCV, the webcam detects the colours red, green, and blue. The program draws bounding boxes around detected objects and displays their coordinates.
ayanvs
No description available