Found 9 repositories (showing 9)
Komal01
Phishing website detection system: provides a strong security mechanism to detect and prevent phishing domains from reaching the user. This project presents a simple and portable approach to detecting spoofed webpages using Machine Learning. It can be operated easily by anyone, since all the major tasks happen in the backend. The user provides a URL as input to the GUI and clicks the submit button; the output is "YES" for a phishing URL and "NO" for a legitimate URL.

PYTHON DEPENDENCIES:
• NumPy, Pandas, Scikit-learn: for data cleaning, data analysis, and data modelling.
• Pickle: for exporting the model to the local machine.
• Tkinter, PyQt, Qt Designer: for building the Graphical User Interface (GUI) of the software.

To avoid the pain of installing independent Python packages and libraries, install Anaconda from www.anaconda.com. It is a Python data science platform with the ML libraries, data analysis libraries, Jupyter Notebooks, Spyder, etc. built in, which makes it easy and efficient to use.

Steps to be followed for running the software:
• Install Anaconda on the system.
• gui.py: contains the code for the GUI and is linked to the other modules of the software.
• Feature_extractor.py: contains the code for data analysis and data modelling.
• Rf_model.py: contains the trained machine learning model.
• Only gui.py needs to be run to execute the whole software.
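The pickle export/load step described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the file name `rf_model.pkl` and the stand-in classifier are assumptions (the real project pickles a trained scikit-learn model in Rf_model.py and loads it from gui.py).

```python
import pickle

# Stand-in for the trained classifier; the real project pickles a
# scikit-learn random forest. The decision rule here is illustrative.
class StubPhishingModel:
    def predict(self, feature_rows):
        # Flag a URL as phishing when its (pre-extracted) feature sum is high.
        return ["YES" if sum(row) > 2 else "NO" for row in feature_rows]

# Export the model to the local machine, as the Pickle dependency suggests.
with open("rf_model.pkl", "wb") as f:
    pickle.dump(StubPhishingModel(), f)

# The GUI side would load the model back and classify features
# extracted from the user-supplied URL.
with open("rf_model.pkl", "rb") as f:
    model = pickle.load(f)

print(model.predict([[1, 1, 1], [0, 0, 1]]))  # ['YES', 'NO']
```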
daksh26022002
Fake News Detection in Python. In this project we have used various natural language processing techniques and machine learning algorithms to classify fake news articles, using the scikit-learn library for Python.

Getting Started
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system.

Prerequisites
This setup requires Python 3.6 installed on your machine; you can download it from https://www.python.org/downloads/. Once you have Python downloaded and installed, you will need to set up the PATH variable (if you want to run the Python program directly; detailed instructions are below in the how-to-run section). To do that, check: https://www.pythoncentral.io/add-python-to-path-python-is-not-recognized-as-an-internal-or-external-command/. Setting up the PATH variable is optional, as you can also run the program without it; more instructions on this are given below.

The second and easier option is to download Anaconda and use its Anaconda Prompt to run the commands. To install Anaconda, check https://www.anaconda.com/download/.

You will also need to download and install the 3 packages below after you install either Python or Anaconda from the steps above:
scikit-learn (sklearn)
numpy
scipy

If you have chosen to install Python 3.6, run the commands below in a command prompt/terminal to install these packages:
pip install -U scikit-learn
pip install numpy
pip install scipy

If you have chosen to install Anaconda, run the commands below in the Anaconda Prompt to install these packages:
conda install -c anaconda scikit-learn
conda install -c anaconda numpy
conda install -c anaconda scipy

Dataset used
The data source used for this project is the LIAR dataset, which contains 3 files in .tsv format for test, train, and validation.
Below is some description of the data files used for this project.

LIAR: A BENCHMARK DATASET FOR FAKE NEWS DETECTION
William Yang Wang, "Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection, in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), short paper, Vancouver, BC, Canada, July 30-August 4, ACL.

The original dataset contained 14 variables/columns for the train, test, and validation sets, as follows:
Column 1: the ID of the statement ([ID].json).
Column 2: the label (the label class contains: True, Mostly-true, Half-true, Barely-true, False, Pants-fire).
Column 3: the statement.
Column 4: the subject(s).
Column 5: the speaker.
Column 6: the speaker's job title.
Column 7: the state info.
Column 8: the party affiliation.
Columns 9-13: the total credit history count, including the current statement. 9: barely-true counts. 10: false counts. 11: half-true counts. 12: mostly-true counts. 13: pants-on-fire counts.
Column 14: the context (venue/location of the speech or statement).

To keep things simple, we have chosen only 2 variables from this original dataset for this classification. The other variables can be added later to add more complexity and enhance the features. Below are the columns used to create the 3 datasets used in this project:
Column 1: Statement (news headline or text).
Column 2: Label (the label class contains: True, False).

You will see that the newly created dataset has only 2 classes, compared to the 6 original classes. Below is the method used for reducing the number of classes:
Original -- New
True -- True
Mostly-true -- True
Half-true -- True
Barely-true -- False
False -- False
Pants-fire -- False

The datasets used for this project are in CSV format, named train.csv, test.csv, and valid.csv, and can be found in the repo. The original datasets are in the "liar" folder in TSV format.
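The six-to-two label reduction described above is a simple mapping; a minimal sketch (the function name and the lowercase normalization are illustrative, the mapping itself is exactly the one listed):

```python
# Collapse LIAR's six truthfulness labels into the binary scheme above.
LABEL_MAP = {
    "true": True, "mostly-true": True, "half-true": True,
    "barely-true": False, "false": False, "pants-fire": False,
}

def binarize(label):
    # Normalize case so "FALSE" and "False" map identically.
    return LABEL_MAP[label.strip().lower()]

print(binarize("Mostly-true"))  # True
print(binarize("Pants-fire"))   # False
```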
File descriptions

DataPrep.py: This file contains all the preprocessing functions needed to process the input documents and texts. First we read the train, test, and validation data files, then performed some preprocessing like tokenizing, stemming, etc. Some exploratory data analysis is also performed, such as response variable distribution, and data quality checks such as null or missing values.

FeatureSelection.py: In this file we have performed feature extraction and selection using the scikit-learn Python library. For feature selection, we have used methods like simple bag-of-words and n-grams, and then term-frequency weighting like tf-idf. We have also used word2vec and POS tagging to extract the features, though POS
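The tf-idf weighting mentioned for FeatureSelection.py can be illustrated with a small hand-rolled version. This is a sketch of the weighting scheme only; the project itself uses scikit-learn's vectorizers, and the tokenization (whitespace split) and smoothing-free idf here are simplifying assumptions.

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute raw tf-idf weights per document: tf(t,d) * log(N / df(t))."""
    tokenized = [doc.lower().split() for doc in docs]
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in tokenized for term in set(doc))
    n = len(docs)
    weights = []
    for doc in tokenized:
        tf = Counter(doc)
        weights.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

docs = ["the claim is true", "the claim is false"]
w = tfidf(docs)
# "the" appears in every document, so its idf (and thus weight) is 0,
# while the discriminating words "true"/"false" get positive weight.
print(w[0]["the"], w[0]["true"] > 0)  # 0.0 True
```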
njadNissi
Step-by-step string and LaTeX generation from function calls on NumPy-based operations (linear algebra and matrix theory).
Ola-Kaznowska
My first steps in the NumPy library
Md-Farhan-Jeelani
Gesture Game Control. Hey guys, here I'm using the OpenCV and NumPy libraries in Python to control a racing game with a steering wheel, giving you a virtual driving experience. The screen is logically divided into 4 parts; when a particular color (in my case blue) is detected in one of those parts, a key press is triggered. For example, if blue is detected in the top-left part of the screen, an "A" key press is initiated and the car turns left. The color boundaries are set using color.py, in which we set the range of HSV values for the color blue. The key-press and key-release functions are used from the directkeys.py file. I'm using Windows and Spyder to run my project, and this code will be compatible with any game on Windows; if you are using macOS, you might have to modify the directkeys.py file. Follow the steps in the requirements file to get the setup ready, then run the color.py and directkeys.py files to set them up and make sure they're working correctly. Finally, run gameControll.py to get the output window.
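The quadrant logic described above can be sketched with NumPy alone. In the real project the boolean mask would come from OpenCV's HSV thresholding (cv2.inRange) on a webcam frame; here a hand-built mask stands in so the four-way split and key selection are visible, and the quadrant-to-key mapping is an assumption based on the "A = top-left = turn left" example.

```python
import numpy as np

# Illustrative mapping of screen quadrant to simulated key press.
KEYS = {"top_left": "A", "top_right": "D",
        "bottom_left": "S", "bottom_right": "W"}

def quadrant_key(mask):
    """Return the key for the quadrant with the most detected pixels."""
    h, w = mask.shape
    counts = {
        "top_left": mask[:h // 2, :w // 2].sum(),
        "top_right": mask[:h // 2, w // 2:].sum(),
        "bottom_left": mask[h // 2:, :w // 2].sum(),
        "bottom_right": mask[h // 2:, w // 2:].sum(),
    }
    region = max(counts, key=counts.get)
    return KEYS[region] if counts[region] > 0 else None

# Simulate blue detected in the top-left of a 640x480 frame.
mask = np.zeros((480, 640), dtype=bool)
mask[:100, :100] = True
print(quadrant_key(mask))  # A
```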
akritibhan
<--------ATTENDANCE TRACKING SYSTEM-------->

<--NECESSARY LIBRARIES AND MODULES--->
pip install Flask
pip install face-recognition
pip install opencv-python
pip install numpy
pip install mysql-connector-python
Also used: the standard-library modules os, datetime, and email, plus Flask imports such as Flask, Response, redirect, render_template, request, and session.

<---STEPS INVOLVED---->
1. Run the main2.py file; a server link of the form http://127.0.0.1:5000 will be printed in the terminal.
2. Click on this server link to start the project.
3. Click on the attendance-tracker entry in the navigation menu and you will land on the attendance tracking system section.
4. Click on the register/login button to register or log in to the admin portal.
5. When you click on the 'Take Attendance' button, the webcam will pop up and recognise the face.
6. Click on the "Open Excel Sheet" button to view your attendance in the Excel sheet.
7. Finally, click on the Logout button to log out of the system.
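The "take attendance, then view the sheet" flow above boils down to appending a timestamped row per recognized person. A minimal sketch of that logging step using only the standard library; the file name, column layout, and function name are assumptions (the actual project recognizes faces with face_recognition/OpenCV before this step).

```python
import csv
from datetime import datetime

def mark_attendance(name, path="attendance.csv", now=None):
    """Append one 'name, timestamp' row to the attendance sheet."""
    now = now or datetime.now()
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([name, now.strftime("%Y-%m-%d %H:%M:%S")])

# Called once a registered face has been matched by the recognizer.
mark_attendance("Akriti", now=datetime(2024, 1, 15, 9, 0, 0))
with open("attendance.csv") as f:
    print(f.read().strip())  # Akriti,2024-01-15 09:00:00
```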
LordEnnard14
No description available
First projects/programs with Pandas
yatharthjain
Mr.DocBot is an interactive chatbot providing information about Covid-19 / Coronavirus. To run the program, follow these steps:
1. Ensure that you are using Python version 3.6.
2. Install the required libraries using pip:
1) nltk
2) numpy
3) tflearn
4) tensorflow==1.14.0
5) SimpleWebSocketServer
3. Download all the programs and files into the same folder.
4. After that, run the program main.py. (If the error "ImportError: DLL load failed: The specified procedure could not be found." occurs while the program is running, use the command pip install protobuf==3.6.0 to downgrade protobuf from 3.6.1 to 3.6.0.)
5. After the model is trained, run server.py.
6. Then open index.html in the web browser and chat with the bot!
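The bot's message-to-answer step can be sketched as a toy bag-of-words intent matcher. This is purely illustrative: the intent names and keyword sets below are invented, and the real project trains a tflearn neural network on such bag-of-words vectors rather than counting keyword overlap.

```python
# Hypothetical intents and keyword vocabularies (not from the project).
INTENTS = {
    "symptoms": {"symptom", "symptoms", "fever", "cough"},
    "prevention": {"prevent", "prevention", "mask", "wash"},
}

def classify(message):
    """Pick the intent whose vocabulary overlaps the message most."""
    words = set(message.lower().split())
    scores = {intent: len(words & vocab) for intent, vocab in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"

print(classify("what are the symptoms of covid"))  # symptoms
print(classify("how do i prevent infection"))      # prevention
```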