Found 6 repositories (showing 6)
HlaingPhyoAung
Usage: python sqlmap.py [options]

Options:
  -h, --help            Show basic help message and exit
  -hh                   Show advanced help message and exit
  --version             Show program's version number and exit
  -v VERBOSE            Verbosity level: 0-6 (default 1)

  Target:
    At least one of these options has to be provided to define the target(s)

    -d DIRECT           Connection string for direct database connection
    -u URL, --url=URL   Target URL (e.g. "http://www.site.com/vuln.php?id=1")
    -l LOGFILE          Parse target(s) from Burp or WebScarab proxy log file
    -x SITEMAPURL       Parse target(s) from remote sitemap(.xml) file
    -m BULKFILE         Scan multiple targets given in a textual file
    -r REQUESTFILE      Load HTTP request from a file
    -g GOOGLEDORK       Process Google dork results as target URLs
    -c CONFIGFILE       Load options from a configuration INI file

  Request:
    These options can be used to specify how to connect to the target URL

    --method=METHOD     Force usage of given HTTP method (e.g. PUT)
    --data=DATA         Data string to be sent through POST
    --param-del=PARA..  Character used for splitting parameter values
    --cookie=COOKIE     HTTP Cookie header value
    --cookie-del=COO..  Character used for splitting cookie values
    --load-cookies=L..  File containing cookies in Netscape/wget format
    --drop-set-cookie   Ignore Set-Cookie header from response
    --user-agent=AGENT  HTTP User-Agent header value
    --random-agent      Use randomly selected HTTP User-Agent header value
    --host=HOST         HTTP Host header value
    --referer=REFERER   HTTP Referer header value
    -H HEADER, --hea..  Extra header (e.g. "X-Forwarded-For: 127.0.0.1")
    --headers=HEADERS   Extra headers (e.g. "Accept-Language: fr\nETag: 123")
    --auth-type=AUTH..  HTTP authentication type (Basic, Digest, NTLM or PKI)
    --auth-cred=AUTH..  HTTP authentication credentials (name:password)
    --auth-file=AUTH..  HTTP authentication PEM cert/private key file
    --ignore-401        Ignore HTTP Error 401 (Unauthorized)
    --proxy=PROXY       Use a proxy to connect to the target URL
    --proxy-cred=PRO..  Proxy authentication credentials (name:password)
    --proxy-file=PRO..  Load proxy list from a file
    --ignore-proxy      Ignore system default proxy settings
    --tor               Use Tor anonymity network
    --tor-port=TORPORT  Set Tor proxy port other than default
    --tor-type=TORTYPE  Set Tor proxy type (HTTP (default), SOCKS4 or SOCKS5)
    --check-tor         Check to see if Tor is used properly
    --delay=DELAY       Delay in seconds between each HTTP request
    --timeout=TIMEOUT   Seconds to wait before timeout connection (default 30)
    --retries=RETRIES   Retries when the connection timeouts (default 3)
    --randomize=RPARAM  Randomly change value for given parameter(s)
    --safe-url=SAFEURL  URL address to visit frequently during testing
    --safe-post=SAFE..  POST data to send to a safe URL
    --safe-req=SAFER..  Load safe HTTP request from a file
    --safe-freq=SAFE..  Test requests between two visits to a given safe URL
    --skip-urlencode    Skip URL encoding of payload data
    --csrf-token=CSR..  Parameter used to hold anti-CSRF token
    --csrf-url=CSRFURL  URL address to visit to extract anti-CSRF token
    --force-ssl         Force usage of SSL/HTTPS
    --hpp               Use HTTP parameter pollution method
    --eval=EVALCODE     Evaluate provided Python code before the request (e.g.
                        "import hashlib;id2=hashlib.md5(id).hexdigest()")

  Optimization:
    These options can be used to optimize the performance of sqlmap

    -o                  Turn on all optimization switches
    --predict-output    Predict common queries output
    --keep-alive        Use persistent HTTP(s) connections
    --null-connection   Retrieve page length without actual HTTP response body
    --threads=THREADS   Max number of concurrent HTTP(s) requests (default 1)

  Injection:
    These options can be used to specify which parameters to test for,
    provide custom injection payloads and optional tampering scripts

    -p TESTPARAMETER    Testable parameter(s)
    --skip=SKIP         Skip testing for given parameter(s)
    --skip-static       Skip testing parameters that not appear dynamic
    --dbms=DBMS         Force back-end DBMS to this value
    --dbms-cred=DBMS..  DBMS authentication credentials (user:password)
    --os=OS             Force back-end DBMS operating system to this value
    --invalid-bignum    Use big numbers for invalidating values
    --invalid-logical   Use logical operations for invalidating values
    --invalid-string    Use random strings for invalidating values
    --no-cast           Turn off payload casting mechanism
    --no-escape         Turn off string escaping mechanism
    --prefix=PREFIX     Injection payload prefix string
    --suffix=SUFFIX     Injection payload suffix string
    --tamper=TAMPER     Use given script(s) for tampering injection data

  Detection:
    These options can be used to customize the detection phase

    --level=LEVEL       Level of tests to perform (1-5, default 1)
    --risk=RISK         Risk of tests to perform (1-3, default 1)
    --string=STRING     String to match when query is evaluated to True
    --not-string=NOT..  String to match when query is evaluated to False
    --regexp=REGEXP     Regexp to match when query is evaluated to True
    --code=CODE         HTTP code to match when query is evaluated to True
    --text-only         Compare pages based only on the textual content
    --titles            Compare pages based only on their titles

  Techniques:
    These options can be used to tweak testing of specific SQL injection
    techniques

    --technique=TECH    SQL injection techniques to use (default "BEUSTQ")
    --time-sec=TIMESEC  Seconds to delay the DBMS response (default 5)
    --union-cols=UCOLS  Range of columns to test for UNION query SQL injection
    --union-char=UCHAR  Character to use for bruteforcing number of columns
    --union-from=UFROM  Table to use in FROM part of UNION query SQL injection
    --dns-domain=DNS..  Domain name used for DNS exfiltration attack
    --second-order=S..  Resulting page URL searched for second-order response

  Fingerprint:
    -f, --fingerprint   Perform an extensive DBMS version fingerprint

  Enumeration:
    These options can be used to enumerate the back-end database management
    system information, structure and data contained in the tables.
    Moreover you can run your own SQL statements

    -a, --all           Retrieve everything
    -b, --banner        Retrieve DBMS banner
    --current-user      Retrieve DBMS current user
    --current-db        Retrieve DBMS current database
    --hostname          Retrieve DBMS server hostname
    --is-dba            Detect if the DBMS current user is DBA
    --users             Enumerate DBMS users
    --passwords         Enumerate DBMS users password hashes
    --privileges        Enumerate DBMS users privileges
    --roles             Enumerate DBMS users roles
    --dbs               Enumerate DBMS databases
    --tables            Enumerate DBMS database tables
    --columns           Enumerate DBMS database table columns
    --schema            Enumerate DBMS schema
    --count             Retrieve number of entries for table(s)
    --dump              Dump DBMS database table entries
    --dump-all          Dump all DBMS databases tables entries
    --search            Search column(s), table(s) and/or database name(s)
    --comments          Retrieve DBMS comments
    -D DB               DBMS database to enumerate
    -T TBL              DBMS database table(s) to enumerate
    -C COL              DBMS database table column(s) to enumerate
    -X EXCLUDECOL       DBMS database table column(s) to not enumerate
    -U USER             DBMS user to enumerate
    --exclude-sysdbs    Exclude DBMS system databases when enumerating tables
    --pivot-column=P..  Pivot column name
    --where=DUMPWHERE   Use WHERE condition while table dumping
    --start=LIMITSTART  First query output entry to retrieve
    --stop=LIMITSTOP    Last query output entry to retrieve
    --first=FIRSTCHAR   First query output word character to retrieve
    --last=LASTCHAR     Last query output word character to retrieve
    --sql-query=QUERY   SQL statement to be executed
    --sql-shell         Prompt for an interactive SQL shell
    --sql-file=SQLFILE  Execute SQL statements from given file(s)

  Brute force:
    These options can be used to run brute force checks

    --common-tables     Check existence of common tables
    --common-columns    Check existence of common columns

  User-defined function injection:
    These options can be used to create custom user-defined functions

    --udf-inject        Inject custom user-defined functions
    --shared-lib=SHLIB  Local path of the shared library

  File system access:
    These options can be used to access the back-end database management
    system underlying file system

    --file-read=RFILE   Read a file from the back-end DBMS file system
    --file-write=WFILE  Write a local file on the back-end DBMS file system
    --file-dest=DFILE   Back-end DBMS absolute filepath to write to

  Operating system access:
    These options can be used to access the back-end database management
    system underlying operating system

    --os-cmd=OSCMD      Execute an operating system command
    --os-shell          Prompt for an interactive operating system shell
    --os-pwn            Prompt for an OOB shell, Meterpreter or VNC
    --os-smbrelay       One click prompt for an OOB shell, Meterpreter or VNC
    --os-bof            Stored procedure buffer overflow exploitation
    --priv-esc          Database process user privilege escalation
    --msf-path=MSFPATH  Local path where Metasploit Framework is installed
    --tmp-path=TMPPATH  Remote absolute path of temporary files directory

  Windows registry access:
    These options can be used to access the back-end database management
    system Windows registry

    --reg-read          Read a Windows registry key value
    --reg-add           Write a Windows registry key value data
    --reg-del           Delete a Windows registry key value
    --reg-key=REGKEY    Windows registry key
    --reg-value=REGVAL  Windows registry key value
    --reg-data=REGDATA  Windows registry key value data
    --reg-type=REGTYPE  Windows registry key value type

  General:
    These options can be used to set some general working parameters

    -s SESSIONFILE      Load session from a stored (.sqlite) file
    -t TRAFFICFILE      Log all HTTP traffic into a textual file
    --batch             Never ask for user input, use the default behaviour
    --binary-fields=..  Result fields having binary values (e.g. "digest")
    --charset=CHARSET   Force character encoding used for data retrieval
    --crawl=CRAWLDEPTH  Crawl the website starting from the target URL
    --crawl-exclude=..  Regexp to exclude pages from crawling (e.g. "logout")
    --csv-del=CSVDEL    Delimiting character used in CSV output (default ",")
    --dump-format=DU..  Format of dumped data (CSV (default), HTML or SQLITE)
    --eta               Display for each output the estimated time of arrival
    --flush-session     Flush session files for current target
    --forms             Parse and test forms on target URL
    --fresh-queries     Ignore query results stored in session file
    --hex               Use DBMS hex function(s) for data retrieval
    --output-dir=OUT..  Custom output directory path
    --parse-errors      Parse and display DBMS error messages from responses
    --save=SAVECONFIG   Save options to a configuration INI file
    --scope=SCOPE       Regexp to filter targets from provided proxy log
    --test-filter=TE..  Select tests by payloads and/or titles (e.g. ROW)
    --test-skip=TEST..  Skip tests by payloads and/or titles (e.g. BENCHMARK)
    --update            Update sqlmap

  Miscellaneous:
    -z MNEMONICS        Use short mnemonics (e.g. "flu,bat,ban,tec=EU")
    --alert=ALERT       Run host OS command(s) when SQL injection is found
    --answers=ANSWERS   Set question answers (e.g. "quit=N,follow=N")
    --beep              Beep on question and/or when SQL injection is found
    --cleanup           Clean up the DBMS from sqlmap specific UDF and tables
    --dependencies      Check for missing (non-core) sqlmap dependencies
    --disable-coloring  Disable console output coloring
    --gpage=GOOGLEPAGE  Use Google dork results from specified page number
    --identify-waf      Make a thorough testing for a WAF/IPS/IDS protection
    --skip-waf          Skip heuristic detection of WAF/IPS/IDS protection
    --mobile            Imitate smartphone through HTTP User-Agent header
    --offline           Work in offline mode (only use session data)
    --page-rank         Display page rank (PR) for Google dork results
    --purge-output      Safely remove all content from output directory
    --smart             Conduct thorough tests only if positive heuristic(s)
    --sqlmap-shell      Prompt for an interactive sqlmap shell
    --wizard            Simple wizard interface for beginner users
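A typical run combining the options above might look like the following. The URL matches the placeholder used in the help text, and the database and table names (targetdb, users) are purely illustrative; only flags documented above are used:

```shell
# Fingerprint the DBMS and list its databases without interactive prompts
python sqlmap.py -u "http://www.site.com/vuln.php?id=1" --batch --random-agent --dbs

# Then dump one table from a database found in the previous step
python sqlmap.py -u "http://www.site.com/vuln.php?id=1" --batch -D targetdb -T users --dump
```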
Jai-Agarwal-04
Sentiment Analysis with Insights using NLP and Dash

This project shows sentiment analysis of text data using NLP and Dash. I used the Amazon reviews dataset to train the model and then scraped reviews from Etsy.com to test it.

Prerequisites: Python3, Amazon Dataset (3.6GB), Anaconda

How this project was made?
This project was built using Python3 to predict sentiments with the help of Machine Learning, together with an interactive dashboard for testing reviews. To start, I downloaded the dataset and extracted the JSON file. Next, I took out a portion of 792,000 reviews, equally distributed into chunks of 24,000 reviews, using pandas. The chunks were then combined into a single CSV file called balanced_reviews.csv. This balanced_reviews.csv served as the base for training my model, filtered to keep only reviews rated above 3 or below 3 (dropping the neutral 3-star reviews). This filtered data was then vectorized using the TF-IDF vectorizer. After training the model to 90% accuracy, reviews were scraped from Etsy.com to test the model. Finally, I built a dashboard that checks the sentiment of either user-typed input or the reviews scraped from the website.

What is CountVectorizer?
CountVectorizer is a tool provided by the scikit-learn library in Python. It transforms a given text into a vector based on the frequency (count) of each word that occurs in the entire text. This is helpful when we have multiple such texts and wish to convert each word in each text into vectors (for use in further text analysis). CountVectorizer creates a matrix in which each unique word is represented by a column and each text sample from the document is a row. The value of each cell is simply the count of that word in that particular text sample.

What is TF-IDF Vectorizer?
TF-IDF stands for Term Frequency - Inverse Document Frequency. It is a statistic that aims to capture how important a word is to a document while also taking into account its relation to other documents in the same corpus. It is computed by looking at how many times a word appears in a document while also paying attention to how many times the same word appears in other documents in the corpus. The rationale is the following:

- a word that frequently appears in a document is more relevant to that document, meaning there is a higher probability that the document is about that specific word;
- a word that frequently appears in many documents may prevent us from finding the right document in a collection: the word is relevant either to all documents or to none, so it will not help us filter out a single document or a small subset of documents from the whole set.

TF-IDF, then, is a score applied to every word in every document in our dataset. For every word, the TF-IDF value increases with every appearance of the word in a document, but is gradually decreased by every appearance in other documents.

What is Plotly Dash?
Dash is a productive Python framework for building web analytic applications. Written on top of Flask, Plotly.js, and React.js, Dash is ideal for building data-visualization apps with highly custom user interfaces in pure Python. It is particularly suited for anyone who works with data in Python. Dash apps are rendered in the web browser; you can deploy them to servers and then share them through URLs. Since Dash apps are viewed in the web browser, Dash is inherently cross-platform and mobile ready. Dash is an open-source library released under the permissive MIT license. Plotly develops Dash and offers a platform for managing Dash apps in an enterprise environment.

What is Web Scraping?
Web scraping is a term used to describe the use of a program or algorithm to extract and process large amounts of data from the web.

Running the project
Step 1: Download the dataset and extract the JSON data into your project folder. Create a folder named filtered_chunks and run the data_extraction.py file. This extracts data from the JSON file into equally sized chunks and then combines them into a single CSV file called balanced_reviews.csv.
Step 2: Run the data_cleaning_preprocessing_and_vectorizing.py file. This cleans and filters the data. The filtered data is then fed to the TF-IDF Vectorizer, the model is pickled into a trained_model.pkl file, and the vocabulary of the trained model is stored as vocab.pkl. Keep these two files in a folder named model_files.
Step 3: Now run the etsy_review_scrapper.py file. Adjust the range of pages and products to be scraped, as it can take a long time to process; a small dataset is sufficient to check the accuracy of our model. The scraped data is stored as both a CSV and a DB file.
Step 4: Finally, run the app.py file, which starts the Dash server; you can then test the model either by typing a review or by selecting one of the preloaded scraped reviews.
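The two vectorizers described above can be sketched in plain Python. This is a toy illustration of the idea only; the project itself uses scikit-learn's CountVectorizer and TfidfVectorizer, which additionally apply IDF smoothing and L2 normalisation:

```python
import math
from collections import Counter


def count_vectorize(docs):
    """CountVectorizer idea: a vocabulary plus per-document term-count vectors."""
    vocab = sorted({word for doc in docs for word in doc.split()})
    counts = []
    for doc in docs:
        bag = Counter(doc.split())
        counts.append([bag[word] for word in vocab])
    return vocab, counts


def tf_idf(docs):
    """TF-IDF idea: weight each count by log(N / document_frequency).

    A word's score grows with its count in a document (TF) and shrinks
    the more documents of the corpus it appears in (IDF).
    """
    vocab, counts = count_vectorize(docs)
    n_docs = len(docs)
    doc_freq = [sum(1 for row in counts if row[j] > 0) for j in range(len(vocab))]
    weights = [[row[j] * math.log(n_docs / doc_freq[j]) for j in range(len(vocab))]
               for row in counts]
    return vocab, weights


reviews = ["great quality great price", "poor quality", "great seller"]
vocab, weights = tf_idf(reviews)
# "quality" occurs in two of the three reviews while "poor" occurs in only
# one, so in the second review "poor" ends up with a higher weight than
# "quality" even though both occur once there.
```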
AlexeiMas
Predict DB Server part
QuentinCody
AlphaFold DB MCP Server — pre-computed predicted protein structures for 200M+ proteins (Cloudflare Worker)
julianlpz69
Full-stack technical test project to create and manage customer orders and predict the next purchase date based on order history. Built with .NET Core (API), Angular 17 (SPA), SQL Server (DB), and D3.js (bar chart visualization with vanilla JS).
AKs2001
# STUDENT'S INFORMATION FROM DATABASE import sqlite3 connection = sqlite3.connect('Credentials.db') cursor = connection.cursor() create_table = "CREATE TABLE IF NOT EXISTS users_credential (id INTEGER PRIMARY KEY," \ "reg_number text," \ "surname text," \ "middle_name text," \ "first_name text," \ "email text,"\ "date_registered text," \ "profile_picture text," \ cursor.execute(create_table) connection.commit() connection.close() #GETTING STUDENTS MARKS AND PREDICTING THEIR CHANCES import numpy as np from models.Prediction.Training import DecisionTreeClassifier returned_grades = list() student_mark = [] model_year = DecisionTreeClassifier() new_input_value = [scores] new_input = np.array(new_input_value) new_input = new_input.reshape(-1, 1) model_year.load_dataset() model_year.encode_variables_for_y(model_year.Y) model_year.spliting_to_training_and_test_set_no_return(model_year.X, model_year.Y) scaled_input = model_year.feature_scaling(new_input) new_prediction = saved_model.predict(scaled_input) if new_prediction == 0: new_prediction = '(0, 5]' elif new_prediction == 1: new_prediction = '(10, 15]' elif new_prediction == 2: new_prediction = '(15, 20]' elif new_prediction == 3: new_prediction = '(20, 25]' elif new_prediction == 4: new_prediction = '(25, 30]' elif new_prediction == 5: new_prediction = '(30, 35]' elif new_prediction == 6: new_prediction = '(35, 40]' elif new_prediction == 7: new_prediction = '(40, 45]' elif new_prediction == 8: new_prediction = '(45, 50]' elif new_prediction == 9: new_prediction = '(5, 10]' elif new_prediction == 10: new_prediction = '(50, 55]' elif new_prediction == 11: new_prediction = '(55, 60]' elif new_prediction == 12: new_prediction = '(60, 65]' elif new_prediction == 13: new_prediction = '(65, 70]' elif new_prediction == 14: new_prediction = '(70, 75]' elif new_prediction == 15: new_prediction = '(75, 80]' elif new_prediction == 16: new_prediction = '(80, 85]' elif new_prediction == 17: new_prediction = '(85, 90]' elif 
new_prediction == 18: new_prediction = '(90, 95]' elif new_prediction == 19: new_prediction = '(95, 100]' student_mark.append(new_input_value[0]) student_mark.append(new_prediction) returned_grades.append(student_mark) return returned_grades #CLUSTERING import pandas as pd import matplotlib matplotlib.use('Qt4Agg') import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans from sklearn.preprocessing import StandardScaler dataset = pd.read_csv('dataset/clustering/system_eng_cluster.csv') print(dataset.describe()) print(dataset.get_values()) #K MEAN CLUSTERING import matplotlib import pandas as pd matplotlib.use('Qt4Agg') import matplotlib.pyplot as plt from sklearn.cluster import KMeans class Clustering(object): def __init__(self, csv, ymeans_1=None, ymeans_2=None): self.csv = csv self.ymeans_1 = ymeans_1 self.ymeans_2 = ymeans_2 # importing the dataset with pandas # 'dataset/clustering/system_eng_cluster.csv' self.dataset_loader = pd.read_csv(self.csv) self.X1 = self.dataset_loader.iloc[:, [2, 4]].values self.X2 = self.dataset_loader.iloc[:, [3, 4]].values @staticmethod def process_wcss(x_column_for_wcss): wcss_to_process = [] for i in range(1, 11): kmeans_1 = KMeans(n_clusters=i, init='k-means++', max_iter=300, n_init=10, random_state=0) kmeans_1.fit(x_column_for_wcss) wcss_to_process.append(kmeans_1.inertia_) return wcss_to_process @staticmethod def plot_wcss(wcss_list, course_title): plt.plot(range(1, 11), wcss_list) plt.title("The Elbow Method For Test") plt.xlabel("Number of clusters") plt.ylabel("wcss for {}".format(course_title)) plt.show() plt.imsave() def predict_data(self): # applying k-means to the mall dataset kmeans_predict = KMeans(n_clusters=6, init='k-means++', max_iter=300, n_init=10, random_state=0) self.ymeans_1 = kmeans_predict.fit_predict(self.X1) self.ymeans_2 = kmeans_predict.fit_predict(self.X2) return self.ymeans_1, self.ymeans_2 @staticmethod def visualise_clusters(x_column_to_visualize, y_column_to_visualise, 
test_title): kmeans_clusters = KMeans(n_clusters=6, init='k-means++', max_iter=300, n_init=10, random_state=0) kmeans_clusters.fit(x_column_to_visualize) # Visualizing the clusters plt.scatter(x_column_to_visualize[y_column_to_visualise == 0, 0], x_column_to_visualize[y_column_to_visualise == 0, 1], s=10, c='red', label='Cluster 1') plt.scatter(x_column_to_visualize[y_column_to_visualise == 1, 0], x_column_to_visualize[y_column_to_visualise == 1, 1], s=10, c='blue', label='Cluster 2') plt.scatter(x_column_to_visualize[y_column_to_visualise == 2, 0], x_column_to_visualize[y_column_to_visualise == 2, 1], s=10, c='green', label='Cluster 3') plt.scatter(x_column_to_visualize[y_column_to_visualise == 3, 0], x_column_to_visualize[y_column_to_visualise == 3, 1], s=10, c='cyan', label='Cluster 4') plt.scatter(x_column_to_visualize[y_column_to_visualise == 4, 0], x_column_to_visualize[y_column_to_visualise == 4, 1], s=10, c='magenta', label='Cluster 5') plt.scatter(x_column_to_visualize[y_column_to_visualise == 5, 0], x_column_to_visualize[y_column_to_visualise == 5, 1], s=10, c='black', label='Cluster 6') plt.scatter(kmeans_clusters.cluster_centers_[:, 0], kmeans_clusters.cluster_centers_[:, 1], s=50, c='yellow', label='Centroids') plt.title("Clusters OF Students Performance Based On Test Score") plt.xlabel("{} SCORE".format(test_title)) plt.ylabel("Test score") plt.legend() plt.show() #QUESTION AND ANSWER SESSIONS TO SEE THE INTEREST OF THE STUDENT import random import pandas as pd from models.aos_questions_and_answer.processedlistofdictionaries import Util # Initializing variables ai_correct = 0 ai_failed = 0 se_correct = 0 se_failed = 0 cn_correct = 0 cn_failed = 0 sye_correct = 0 sye_failed = 0 tc_correct = 0 tc_failed = 0 AI = [] SE = [] CN = [] SYE = [] TC = [] final_scores = [] current_question_number = 0 total_questions = 0 # Reading the CSV file that contains all compiled questions with respective answers dataset = 
pd.read_csv('models/aos_questions_and_answer/dataset/core_courses.csv') # AI Data processing ai_questions = dataset.iloc[:, :1].values ai_answers = dataset.iloc[:, 1].values ai_list_of_dictionaries_of_questions_and_answers = Util.processed_list_dict(ai_questions, ai_answers) ai_selected_six_random = Util.select_six_random(ai_list_of_dictionaries_of_questions_and_answers) # Software Engineering Data processing software_engineering_questions = dataset.iloc[:, 2:3].values software_engineering_answers = dataset.iloc[:, 3].values software_engineering_list_of_dictionaries_of_questions_and_answers = \ Util.processed_list_dict(software_engineering_questions, software_engineering_answers) se_selected_six_random = Util.select_six_random(software_engineering_list_of_dictionaries_of_questions_and_answers) # Computer Networks Data processing computer_networks_questions = dataset.iloc[:, 4:5].values computer_networks_answers = dataset.iloc[:, 5].values computer_networks_list_of_dictionaries_of_questions_and_answers =\ Util.processed_list_dict(computer_networks_questions, computer_networks_answers) cn_selected_six_random = Util.select_six_random(computer_networks_list_of_dictionaries_of_questions_and_answers) # Systems Engineering Data processing systems_engineering_questions = dataset.iloc[:, 6:7].values systems_engineering_answers = dataset.iloc[:, 7].values systems_engineering_list_of_dictionaries_of_questions_and_answers = \ Util.processed_list_dict(systems_engineering_questions, systems_engineering_answers) sye_selected_six_random = Util.select_six_random(systems_engineering_list_of_dictionaries_of_questions_and_answers) # Theoretical Computing Data processing theoretical_computing_questions = dataset.iloc[:, 8:9].values theoretical_computing_answers = dataset.iloc[:, 9].values theoretical_computing_list_of_dictionaries_of_questions_and_answers = \ Util.processed_list_dict(theoretical_computing_questions, theoretical_computing_answers) tc_selected_six_random = 
Util.select_six_random(theoretical_computing_list_of_dictionaries_of_questions_and_answers) # Getting total questions and answers to be asked for ever user total_questions_and_answer = Util.all_selected_questions_with_answers(ai_selected_six_random, se_selected_six_random, cn_selected_six_random, sye_selected_six_random, tc_selected_six_random) # print(total_questions_and_answer) for i in total_questions_and_answer.values(): for j in i: total_questions += 1 #APPLICATION FORMS from flask_wtf import FlaskForm from flask_wtf.file import FileField, FileAllowed from flask_login import current_user from models.users.users import User from wtforms import StringField, PasswordField, SubmitField, BooleanField, RadioField, SelectField from wtforms.validators import DataRequired, Length, Email, EqualTo, ValidationError class AdminAddUserForm(FlaskForm): registration_number = StringField('Registration Number', validators=[DataRequired(), Length(min=1, max=20)]) surname = StringField('Surname', validators=[DataRequired(), Length(min=1, max=20)]) middle_name = StringField('Middle Name', validators=[DataRequired(), Length(min=1, max=20)]) first_name = StringField('First Name', validators=[DataRequired(), Length(min=1, max=20)]) email = StringField('Email', validators=[DataRequired(), Email()]) password = PasswordField('Password', validators=[DataRequired(), Length(min=2)]) submit = SubmitField('Register') def validate_email(self, email): _, all_emails_from_database = User.find_all_emails_and_registration_number() if email.data: if email.data in all_emails_from_database: raise ValidationError("That email is taken. 
Please choose another one!") else: raise ValidationError("This field cannot be blank!") def validate_registration_number(self, registration_number): all_registration_number_from_database, _ = User.find_all_emails_and_registration_number() if registration_number.data: if registration_number.data in all_registration_number_from_database: raise ValidationError("That Registration Number is taken. Please choose another one!") class UserLoginForm(FlaskForm): registration_number = StringField('Registration Number/Username', validators=[DataRequired(), Length(min=1)]) password = PasswordField('Password', validators=[DataRequired(), Length(min=2)]) remember_me = BooleanField('Remember Me') submit = SubmitField('Log In') class UpdateAccountForm(FlaskForm): registration_number = StringField('Registration Number', validators=[DataRequired(), Length(min=1)]) surname = StringField('Surname', validators=[DataRequired(), Length(min=1, max=20)]) middle_name = StringField('Middle Name', validators=[DataRequired(), Length(min=1, max=20)]) first_name = StringField('First Name', validators=[DataRequired(), Length(min=1, max=20)]) password = PasswordField('Password', validators=[DataRequired(), Length(min=2)]) email = StringField('Email', validators=[DataRequired(), Email()]) picture = FileField('Update Profile Picture', validators=[FileAllowed(['jpg', 'png', 'jpeg'])]) submit = SubmitField('Update') def validate_email(self, email): _, all_emails_from_database = User.find_all_emails_and_registration_number() if email.data != current_user.email: if email.data in all_emails_from_database: raise ValidationError("That email is taken. 
Please choose another one!") class UpdateAdminAccountForm(FlaskForm): registration_number = StringField('Username', validators=[DataRequired(), Length(min=1)]) surname = StringField('Surname', validators=[DataRequired(), Length(min=1, max=20)]) middle_name = StringField('Middle Name', validators=[DataRequired(), Length(min=1, max=20)]) first_name = StringField('First Name', validators=[DataRequired(), Length(min=1, max=20)]) password = PasswordField('Password', validators=[DataRequired(), Length(min=2)]) email = StringField('Email', validators=[DataRequired(), Email()]) picture = FileField('Update Profile Picture', validators=[FileAllowed(['jpg', 'png', 'jpeg'])]) submit = SubmitField('Update') def validate_email(self, email): _, all_emails_from_database = User.find_all_emails_and_registration_number() if email.data != current_user.email: if email.data in all_emails_from_database: raise ValidationError("That email is taken. Please choose another one!") class AdminUpdateStudentAccountForm(FlaskForm): registration_number = StringField('Registration Number', validators=[DataRequired(), Length(min=1)]) surname = StringField('Surname', validators=[DataRequired(), Length(min=1, max=20)]) middle_name = StringField('Middle Name', validators=[DataRequired(), Length(min=1, max=20)]) first_name = StringField('First Name', validators=[DataRequired(), Length(min=1, max=20)]) email = StringField('Email', validators=[DataRequired(), Email()]) submit = SubmitField('Update') def validate_email(self, email): _, all_emails_from_database = User.find_all_emails_and_registration_number() if email.data: if email.data in all_emails_from_database: raise ValidationError("That email is taken. 
Please choose another one!") class SelectElectiveCourses(FlaskForm): user_type = SelectField('Select Suited Area Of Specialization', validators=[DataRequired()], choices=(("ai", "Artificial Intelligence"), ("cn", "Computer Networks"), ("se", "Software Engineering"), ("sye", "Systems Engineering"))) submit = SubmitField('START TEST') class StartQuiz(FlaskForm): submit = SubmitField('START TEST') class QuestionForm(FlaskForm): question_option = RadioField("Answers", coerce=str) submit_next = SubmitField('NEXT') # submit_previous = SubmitField('PREVIOUS') #USER LOGIN SYSTEM import datetime import sqlite3 import uuid from flask_login import UserMixin from extensions import login_manager from utils import Utils @login_manager.user_loader def load_user(user_id): return User.find_by_id(user_id) class User(UserMixin): date_time = str(datetime.datetime.utcnow()).split() date, time = date_time date = str(date) time = time.split(".") time = time[0].__str__() def __init__(self, inc_id=None, reg_number=None, surname=None, middle_name=None, first_name=None, email=None, password=None, _id=None, timestamp=time, date=date, default_image=None, account_type=None): self.inc_id = inc_id self.reg_number = reg_number self.surname = surname self.middle_name = middle_name self.first_name = first_name self.email = email self.password = password self.id = uuid.uuid4().__str__() if _id is None else _id self.timestamp = timestamp self.date_registered = date self.default_image = "default.png" if default_image is None else default_image self.account_type = account_type def save_to_db(self): """ This saves the question to the database Returns: A notification string """ connection = sqlite3.connect("./database/Credentials.db") cursor = connection.cursor() query = "INSERT INTO users_credential VALUES (NULL, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)" cursor.execute(query, (self.reg_number, self.surname, self.middle_name, self.first_name, self.email, self.password, self.id, self.timestamp, 
self.date_registered, self.default_image, self.account_type,)) connection.commit() connection.close() def create_admin(self, surname, middle_name, first_name, email, password, username): self.account_type = "admin" connection = sqlite3.connect("./database/credentials.db") cursor = connection.cursor() query = "INSERT INTO users_credential VALUES (NULL, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)" cursor.execute(query, (username, surname, middle_name, first_name, email, password, self.id, self.timestamp, self.date_registered, self.default_image, self.account_type,)) connection.commit() connection.close() def insert_student_into_db(self, surname, middle_name, first_name, reg_number, email, password): encrypted_password = Utils.encrypt_password(password=password) self.account_type = "student" connection = sqlite3.connect("./database/credentials.db") cursor = connection.cursor() query = "INSERT INTO users_credential VALUES (NULL, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)" cursor.execute(query, (reg_number, surname, middle_name, first_name, email, encrypted_password, self.id, self.timestamp, self.date_registered, self.default_image, self.account_type,)) connection.commit() connection.close() @staticmethod def update_profile(username, surname, middle_name, first_name, password, email, picture_to_update, user_corresponding_id): encrypted_password = Utils.encrypt_password(password=password) connection = sqlite3.connect('./database/Credentials.db') cursor = connection.cursor() query = "UPDATE users_credential SET reg_number=?, surname=?, middle_name=?, first_name=?, password=?," \ "email=?, profile_picture=? WHERE _id=?" 
        cursor.execute(query, (username, surname, middle_name, first_name, encrypted_password,
                               email, picture_to_update, user_corresponding_id))
        connection.commit()
        connection.close()

    @staticmethod
    def update_student_profile_by_admin(reg_number, surname, middle_name, first_name, email,
                                        user_corresponding_id):
        connection = sqlite3.connect('./database/Credentials.db')
        cursor = connection.cursor()
        query = "UPDATE users_credential SET reg_number=?, surname=?, middle_name=?, first_name=?," \
                "email=? WHERE _id=?"
        cursor.execute(query, (reg_number, surname, middle_name, first_name, email,
                               user_corresponding_id))
        connection.commit()
        connection.close()

    @staticmethod
    def update_password(new_password, user_corresponding_id):
        connection = sqlite3.connect('./database/Credentials.db')
        cursor = connection.cursor()
        query = "UPDATE users_credential SET password=? WHERE _id=?"
        cursor.execute(query, (new_password, user_corresponding_id))
        connection.commit()
        connection.close()

    @staticmethod
    def update_email(email_update, user_corresponding_id):
        connection = sqlite3.connect('./database/Credentials.db')
        cursor = connection.cursor()
        query = "UPDATE users_credential SET email=? WHERE _id=?"
        cursor.execute(query, (email_update, user_corresponding_id))
        connection.commit()
        connection.close()

    @staticmethod
    def update_profile_picture(picture_file, user_corresponding_id):
        connection = sqlite3.connect('./database/Credentials.db')
        cursor = connection.cursor()
        query = "UPDATE users_credential SET profile_picture=? WHERE _id=?"
        cursor.execute(query, (picture_file, user_corresponding_id))
        connection.commit()
        connection.close()

    @classmethod
    def find_by_registration_number(cls, reg_number):
        connection = sqlite3.connect('./database/Credentials.db')
        cursor = connection.cursor()
        query = "SELECT * FROM users_credential WHERE reg_number=?"
        result = cursor.execute(query, (reg_number,))
        row = result.fetchone()
        if row:
            user = cls(*row)  # unpack the row positionally into the constructor
        else:
            user = None
        connection.close()
        return user

    @classmethod
    def find_by_email(cls, email):
        connection = sqlite3.connect('./database/Credentials.db')
        cursor = connection.cursor()
        query = "SELECT * FROM users_credential WHERE email=?"
        result = cursor.execute(query, (email,))
        row = result.fetchone()
        if row:
            user = cls(*row)  # unpack the row positionally into the constructor
        else:
            user = None
        connection.close()
        return user

    @staticmethod
    def find_all_emails_and_registration_number():
        connection = sqlite3.connect('./database/Credentials.db')
        cursor = connection.cursor()
        query = "SELECT * FROM users_credential ORDER BY email ASC"
        result = cursor.execute(query)
        rows = result.fetchall()
        new_registration_number = []
        new_email = []
        for row in rows:
            new_registration_number.append(row[1])
            new_email.append(row[5])
        connection.close()  # was missing in the original: the connection was never closed
        return new_registration_number, new_email

    @classmethod
    def find_by_id(cls, _id):
        connection = sqlite3.connect('./database/Credentials.db')
        cursor = connection.cursor()
        query = "SELECT * FROM users_credential WHERE _id=?"
        result = cursor.execute(query, (_id,))
        row = result.fetchone()
        if row:
            user = cls(*row)  # unpack the row positionally into the constructor
        else:
            user = None
        connection.close()
        return user

    @classmethod
    def fetch_all_students_by_account_type(cls):
        student = []
        connection = sqlite3.connect('./database/Credentials.db')
        cursor = connection.cursor()
        query = "SELECT * FROM users_credential WHERE account_type='student'"
        result = cursor.execute(query)
        rows = result.fetchall()
        if rows:
            for row in rows:
                student.append(row)
        else:
            student = []
        connection.close()
        return student


# APP CREATION AND RATING BY STUDENTS
from flask import Flask

# Blueprint imports
from blueprints.page import page
from blueprints.users import user
from blueprints.questions import aos_test, elective_course

# Extension imports
from extensions import mail, csrf, login_manager

CELERY_TASK_LIST = ['blueprints.contact.tasks', ]

# app = Flask(__name__, instance_relative_config=True)
# app.config.from_object('config.settings')
# app.config.from_pyfile('settings.py', silent=True)
#
# app.register_blueprint(page)


def create_app(settings_override=None):
    """
    Create a Flask application using the app factory pattern.

    :param settings_override: Override settings
    :return: Flask app
    """
    application = Flask(__name__, instance_relative_config=True)
    application.config.from_object('config.settings')
    application.config.from_pyfile('settings.py', silent=True)

    if settings_override:
        application.config.update(settings_override)

    application.register_blueprint(page)
    application.register_blueprint(user)
    application.register_blueprint(aos_test)
    application.register_blueprint(elective_course)

    extensions(application)

    return application


def extensions(our_app):
    mail.init_app(our_app)
    csrf.init_app(our_app)
    login_manager.init_app(our_app)
    login_manager.login_view = 'user.login'
    login_manager.login_message_category = 'info'
    return None


# CONTACT US
from flask_mail import Mail
from flask_wtf import CSRFProtect
from flask_login import LoginManager

mail = Mail()
csrf = CSRFProtect()
login_manager = LoginManager()


# HOW TO USE
from app import create_app
from models.users.users import User
from utils import Utils

if __name__ == '__main__':
    app = create_app()
    with open("first_time_server_run.txt", "r") as new_file:
        content = new_file.read()
    if content == "":
        var = True
        while var:
            print("Welcome Admin! Please put in the following credentials")
            surname = input("Surname: ")
            middle_name = input("Middle Name: ")
            first_name = input("First Name: ")
            user_name = input("Username: ")
            email = input("E-mail: ")
            password = input("Password: ")
            if surname != "" and middle_name != "" and first_name != "" and email != "" \
                    and user_name != "" and password != "":
                encrypted_password = Utils.encrypt_password(password)
                grand_admin = User()
                grand_admin.create_admin(surname=surname, middle_name=middle_name, email=email,
                                         first_name=first_name, password=encrypted_password,
                                         username=user_name)
                with open("first_time_server_run.txt", "a") as new_file_write:
                    new_file_write.write("true")
                var = False
                break
            else:
                continue
    app.run()


# UTILS
import re

from passlib.hash import pbkdf2_sha512

import constants


class Utils(object):

    @staticmethod
    def encrypt_password(password):
        return pbkdf2_sha512.hash(password)  # passlib renamed encrypt() to hash() in 1.7

    @staticmethod
    def check_encrypted_password(password, hashed_password):
        return pbkdf2_sha512.verify(password, hashed_password)

    @staticmethod
    def allowed_file(filename):
        return '.' in filename and \
               filename.rsplit('.', 1)[1].lower() in constants.ALLOWED_EXTENSIONS

    @staticmethod
    def strong_password(password_to_check):
        a = b = c = d = e = f = ''
        try:
            matcher_digits = re.compile(r'[0-9]+')
            matcher_lowercase = re.compile(r'[a-z]+')
            matcher_uppercase = re.compile(r'[A-Z]+')
            matcher_special = re.compile(r'[\W.\\?\[\]|+*$()_^{\}]+')

            mo_digits = matcher_digits.search(password_to_check)
            mo_lowercase = matcher_lowercase.search(password_to_check)
            mo_uppercase = matcher_uppercase.search(password_to_check)
            mo_special = matcher_special.search(password_to_check)

            if mo_digits and mo_lowercase and mo_uppercase and mo_special:
                return None
            if not mo_digits or not mo_lowercase or not mo_uppercase or not mo_special:
                if not mo_special:
                    a += "one special character is required"
                if not mo_digits:
                    b += "a number is required"
                if not mo_lowercase:
                    c += "a lowercase letter is required"
                if not mo_uppercase:
                    d += "an uppercase letter is required"
                if not mo_digits and not mo_lowercase and not mo_uppercase and not mo_special:
                    e += "Password should include a lowercase letter, an uppercase letter, " \
                         "numbers and special characters"
                return a, b, c, d, e
        except Exception as _:
            f += "Password should include a lowercase letter, an uppercase letter, " \
                 "numbers and special characters"
            return f

    @staticmethod
    def check_reg_number(reg_num):
        try:
            matcher = re.compile(r'\d{4}/\d{6}')
            matching_reg_number = matcher.search(reg_num)
            reg_num_format_length = reg_num.split("/")
            reg_num_format_length_first = reg_num_format_length[0]
            reg_num_format_length_last = reg_num_format_length[1]
            if matching_reg_number and \
                    len(reg_num_format_length_first) == 4 and \
                    len(reg_num_format_length_last) == 6 and \
                    len(reg_num_format_length) == 2:
                return None
            else:
                return "Incorrectly formatted Registration Number"
        except Exception as _:
            return "Incorrectly formatted Registration Number"
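The `User` model reads and writes a `users_credential` table that is never created in this excerpt. A minimal sketch of a compatible schema, inferred from the 11 bound parameters in the `INSERT` statement and the column names used by the `UPDATE` queries (`profile_picture`, `_id`, etc.) — the column types and the `date_registered`/`timestamp` names are assumptions, since the original DDL is not shown:

```python
import sqlite3

# Hypothetical DDL reconstructed from the queries above; the real project's
# schema may differ in names and types.
SCHEMA = """
CREATE TABLE IF NOT EXISTS users_credential (
    inc_id          INTEGER PRIMARY KEY AUTOINCREMENT,
    reg_number      TEXT,
    surname         TEXT,
    middle_name     TEXT,
    first_name      TEXT,
    email           TEXT,
    password        TEXT,
    _id             TEXT UNIQUE,
    timestamp       TEXT,
    date_registered TEXT,
    profile_picture TEXT,
    account_type    TEXT
)
"""


def init_db(path="./database/Credentials.db"):
    """Create the users_credential table if it does not exist yet."""
    connection = sqlite3.connect(path)
    connection.execute(SCHEMA)
    connection.commit()
    connection.close()
```

The leading `NULL` in `INSERT INTO users_credential VALUES (NULL, ...)` then lets SQLite assign `inc_id` automatically.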
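`Utils.check_reg_number` pairs `re.search` with manual `split()`/`len()` checks because `\d{4}/\d{6}` found by `search` could sit inside a longer string. A tighter equivalent (a sketch, not the project's code) anchors the pattern to the whole input with `re.fullmatch`, making the length checks unnecessary:

```python
import re

# Hypothetical helper: validates the same YYYY/NNNNNN format as
# Utils.check_reg_number, using fullmatch() so the pattern must cover
# the entire string.
REG_NUMBER = re.compile(r"\d{4}/\d{6}")


def check_reg_number(reg_num):
    # Same convention as Utils.check_reg_number: None on success,
    # an error message otherwise.
    if REG_NUMBER.fullmatch(reg_num):
        return None
    return "Incorrectly formatted Registration Number"
```

`fullmatch` also removes the need for the `try`/`except` guard, since no indexing into `split()` results is involved.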