Found 439 repositories (showing 30)
shivam-199
This repository contains the code for emotion recognition using wavelet transforms and an SVM classifier with RBF kernel.
ksopyla
Kernel Machine Library - fast GPU SVM in .NET. Implements kernels on CPU and GPU (Linear, RBF, Chi-Square, Exp Chi-Square). The library includes GPU SVM solvers for sparse problems.
Compares the accuracy of SVM yoga-movement classification with linear, polynomial, and RBF (Radial Basis Function) kernels against an LSTM, reaching accuracies of up to 98%. It also supports adjusting the practitioner's movements to match the standard movements.
Traditional methods for volatility forecasting of multiscale, high-dimensional data such as foreign-exchange and stock-market volatility have well-identified advantages and disadvantages. In my project, I apply the Support Vector Machine (SVM) as a complementary volatility method capable of dealing with this type of data. SVM-based models may extract extra information from time-series data and handle the long-memory effect very well. Our Support Vector Machine for Regression (SVR) model achieves better results than the common GARCH(1,1) model: the predictions are closer to the historical data and the error is lower. In addition, I test different kernels to compare performance; for my data, the RBF kernel performs better overall than the linear and polynomial kernels. I conclude that SVM-based models may be applied more frequently in the emerging field of high-frequency finance and in multivariate models for portfolio risk management.
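The SVR approach this entry describes can be sketched with scikit-learn; the synthetic series, lag features, and parameter values below are illustrative assumptions, not the project's actual data or settings.

```python
# Hypothetical sketch: fitting an RBF-kernel SVR to a synthetic volatility-like series.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
t = np.arange(200)
vol = 0.2 + 0.1 * np.sin(t / 15.0) + 0.02 * rng.standard_normal(200)  # made-up series

# Use the previous 5 observations as features (a simple lag embedding).
lags = 5
X = np.column_stack([vol[i:len(vol) - lags + i] for i in range(lags)])
y = vol[lags:]

split = 150  # first 150 windows for training, rest held out
model = SVR(kernel="rbf", C=10.0, gamma="scale", epsilon=0.01)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
mse = mean_squared_error(y[split:], pred)
print(f"out-of-sample MSE: {mse:.5f}")
```

A real comparison against GARCH(1,1) would require a GARCH implementation (e.g. from the `arch` package) and actual market data.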
mmoksas68
Training CNN, LSTM, and RBF-kernel SVM models on the GTZAN genre collection dataset
Jayesh368
This project is based on human activity detection. We collected data for 12 activities from smartphone sensors using the AndroSensor mobile application, then performed EDA and the required statistical calculations, adding the engineered features to our training dataset. We visualised the high-dimensional data using t-SNE and implemented machine learning classifiers (SVM with polynomial and RBF kernels, KNN, Logistic Regression, Decision Tree, XGBoost), then applied GridSearch to find the best parameters. Logistic Regression gave the best accuracy, and we successfully tested the model on new test data. This model can be used for monitoring health-related activity patterns.
claesenm
Approximating nonlinear SVM models with RBF kernel.
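One common way to approximate an RBF-kernel SVM (not necessarily this repository's method) is a Nystroem feature map followed by a linear SVM; the toy dataset, gamma, and component count below are illustrative assumptions.

```python
# Illustrative sketch: approximating an exact RBF-kernel SVM with a linear SVM
# trained on Nystroem-transformed features.
from sklearn.datasets import make_moons
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC, SVC

X, y = make_moons(n_samples=500, noise=0.1, random_state=0)

# Exact nonlinear SVM with an RBF kernel.
exact = SVC(kernel="rbf", gamma=1.0).fit(X, y)

# Approximation: map inputs to an explicit 100-dimensional RBF feature space,
# then fit a linear SVM there.
approx = make_pipeline(
    Nystroem(kernel="rbf", gamma=1.0, n_components=100, random_state=0),
    LinearSVC(),
).fit(X, y)

print(f"exact RBF SVM accuracy:  {exact.score(X, y):.3f}")
print(f"Nystroem + linear SVM:   {approx.score(X, y):.3f}")
```

The linear model trains and predicts much faster at scale, at the cost of a small approximation error that shrinks as `n_components` grows.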
Ras-al-Ghul
Code which implements the RBF kernel PCA and RBF kernel LDA techniques followed by classification using SVM
daugaard
Example of implementing a Support Vector Machine (SVM) with an RBF (Gaussian) kernel in Ruby using rb-libsvm.
amitojbrar
Multi-class SVM with RBF kernel for activity classification from UCI HAR dataset
giovannitjahyamulia
This is the thesis I wrote for my Bachelor's degree in Informatics at MDP University. This repository can be used for classification with the SVM, SVM-GLCM, SVM-Color Moments, and SVM-GLCM-Color Moments methods, using multiple kernels (linear, RBF, polynomial, and sigmoid), GLCM angles of 0, 45, 90, and 135 degrees, C values of 0.1, 1, and 10, and gamma set to auto or scale. There is also a script that automatically resizes images per folder and per file, moving the results to a new directory, and a file for performing a random split in the same automatic, per-folder, per-file manner.
susobhang70
Kernel PCA and Kernel LDA Implementation in Python using RBF Kernel, and using SVM to classify reduced dimensional data
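The pipeline these entries describe, RBF kernel PCA for dimensionality reduction followed by an SVM on the reduced data, might look roughly like this in scikit-learn; the toy dataset and gamma value are assumptions.

```python
# Minimal sketch: RBF kernel PCA, then an SVM on the reduced representation.
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Concentric circles: not linearly separable in the original space.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Project onto the top 2 components of the RBF kernel feature space.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10.0).fit(X_tr)

# A linear SVM now suffices on the transformed data.
clf = SVC(kernel="linear").fit(kpca.transform(X_tr), y_tr)
acc = clf.score(kpca.transform(X_te), y_te)
print(f"test accuracy after RBF kernel PCA: {acc:.3f}")
```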
RohithM191
Amazon Food Reviews Analysis and Modelling using various machine learning models. Performed exploratory data analysis, data cleaning, data visualization, and text featurization (BOW, TF-IDF, Word2Vec). Built several ML models such as KNN, Naive Bayes, Logistic Regression, SVM, Random Forest, GBDT, and LSTM (RNNs).
Objective: given a text review, determine whether its sentiment is positive or negative.
Data source: https://www.kaggle.com/snap/amazon-fine-food-reviews
About the dataset: the Amazon Fine Food Reviews dataset consists of reviews of fine foods from Amazon. Number of reviews: 568,454. Number of users: 256,059. Number of products: 74,258. Timespan: Oct 1999 - Oct 2012. Number of attributes/columns: 10.
Attribute information: Id; ProductId - unique identifier for the product; UserId - unique identifier for the user; ProfileName; HelpfulnessNumerator - number of users who found the review helpful; HelpfulnessDenominator - number of users who indicated whether they found the review helpful or not; Score - rating between 1 and 5; Time - timestamp for the review; Summary - brief summary of the review; Text - text of the review.
1. EDA, NLP, text preprocessing, and visualization using t-SNE. Defined the problem statement and performed exploratory data analysis (EDA) on the dataset, plotting word clouds, distplots, histograms, etc. Performed data cleaning and preprocessing by removing unnecessary and duplicate rows; for text reviews, removed HTML tags, punctuation, and stopwords, and stemmed words with the Porter stemmer. Documented the concepts clearly. Plotted t-SNE plots for the different featurizations of the data: BOW (uni-gram), TF-IDF, Avg-Word2Vec, and TF-IDF-Word2Vec.
2. KNN. Applied K-Nearest Neighbours on the different featurizations, using both brute-force and kd-tree implementations. Evaluated the test data on performance metrics such as accuracy and plotted the confusion matrix using seaborn. Conclusions: KNN is a very slow algorithm and takes a long time to train; the best accuracy, 89.38%, was achieved with Avg-Word2Vec featurization; kd-tree and brute-force KNN give comparable results; overall, KNN was not a good fit for this dataset.
3. Naive Bayes. Applied Bernoulli NB and Multinomial NB on the BOW (uni-gram) and TF-IDF featurizations. Evaluated the test data on accuracy, F1-score, precision, recall, etc., plotted the confusion matrix with seaborn, and printed the top 25 important features for both negative and positive reviews. Conclusions: Naive Bayes is much faster than KNN; Bernoulli NB performed considerably better than Multinomial NB; the best F1-score, 0.9342, was achieved with BOW featurization.
4. Logistic Regression. Applied Logistic Regression on all featurizations, using both grid-search and randomized-search cross-validation. Showed how sparsity increases as lambda increases (C decreases) with the L1 regularizer, and ran a perturbation test to check whether the features are multicollinear. Conclusions: sparsity increases as C decreases (lambda increases) under L1 regularization; TF-IDF featurization performed best, with an F1-score of 0.967 and accuracy of 91.39%; the features are multicollinear across featurizations; Logistic Regression is a fast algorithm.
5. SVM. Applied SVM with an RBF (radial basis function) kernel on all featurizations, using both grid-search and randomized-search cross-validation, and evaluated an SGDClassifier on the best-performing featurization. Conclusions: BOW featurization with a linear kernel and grid search gave the best results, with an F1-score of 0.9201; the SGDClassifier takes very little time to train.
6. Decision Trees. Applied Decision Trees on all featurizations, using grid search over 30 random points to find the best max_depth, and plotted the feature importances from the classifier. Conclusions: BOW featurization (max_depth=8) gave the best results, with 85.8% accuracy and an F1-score of 0.858; trees on the full BOW and TF-IDF dimensionality would have taken far too long, hence max_depth was capped at 8.
7. Ensembles (RF & GBDT). Applied Random Forest and GBDT on all featurizations, using grid search over 30 random points for the best max_depth, learning rate, and n_estimators, and plotted word clouds of the feature importances from the RF and GBDT classifiers. Conclusions: TF-IDF featurization in Random Forest (base learners = 10) with random search gave the best results, with an F1-score of 0.857; TF-IDF in GBDT (base learners = 275, depth = 10) gave an F1-score of 0.8708.
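The SVM step above (TF-IDF featurization plus a grid-searched SVM) can be sketched as follows; the tiny corpus and parameter grid are invented for illustration and are not the repository's actual data or settings.

```python
# Hedged sketch: TF-IDF featurization followed by a grid-searched SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

docs = [
    "great taste would buy again", "loved this product fresh and tasty",
    "excellent quality fast shipping", "delicious snack highly recommend",
    "terrible flavor stale on arrival", "awful waste of money",
    "disgusting smell threw it away", "poor quality never again",
] * 5  # repeat the made-up reviews so cross-validation folds have enough samples
labels = ([1] * 4 + [0] * 4) * 5  # 1 = positive, 0 = negative

pipe = Pipeline([("tfidf", TfidfVectorizer()), ("svm", SVC())])
grid = GridSearchCV(
    pipe,
    {"svm__kernel": ["linear", "rbf"], "svm__C": [0.1, 1, 10]},
    cv=5,
)
grid.fit(docs, labels)
print("best params:", grid.best_params_)
print(f"best CV accuracy: {grid.best_score_:.3f}")
```

On the real 568k-review dataset, the same pipeline works unchanged; only the training time and the sensible grid differ.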
darshanmehta17
Custom implementation of SVM for classification with support for Gaussian RBF kernel, Polynomial kernel and Linear kernel.
junmoan
MNIST Dataset Classifier using Grid Search and Radial Basis Function (RBF) kernel SVM
UnixJunkie
Access the Linear or RBF kernel SVM from OCaml using the R e1071 or svmpath packages
junmoan
Letter Prediction using Grid Search and Radial Basis Function (RBF) kernel SVM
Chaitanyakota9
No description available
RenzhiHuang
Implementation of SVM with multiple kernels. The kernels include linear kernel, polynomial kernel and RBF kernel.
Golnaz-spa
Use of an SVM model for image recognition: an SVM with kernel 'linear' only, and an SVM searched over the kernels ['linear', 'rbf', 'poly'].
sashakttripathi
Classification of a radially separated dataset using an SVM with RBF kernel, solved with CVXOPT
Human Activity Recognition using Grid Search and Radial Basis Function (RBF) kernel SVM
AndMastro
Repository containing the code for the exact computation of Shapley Values using SVMs with RBF Kernel.
sherlvick
Sentiment analysis of Amazon product reviews using an SVM classifier with RBF kernel, with word vectorization done using TF-IDF and CountVectorizer.
PravinTiwari023
SVM and k-means are very different: SVM is supervised (classification) and k-means is unsupervised (clustering), so it depends on the goal of your application. For supervised classification, SVM is the best algorithm, and you need to specify the most efficient kernel (linear, RBF, etc.).
RohithM191
Amazon Fine Food Reviews as raw textual data. Performing the SVM with RBF kernel algorithm on different text featurizations, evaluating the test data on various performance metrics, and also implementing an SGD Classifier on the best featurization.
tetaniarizki
This repository is an analysis of the classification of sentiment reviews from users of a marketplace application, where the word-weighting methods used are TF-IDF and Word2Vec. The classification method used is the Support Vector Machine (SVM). Two kernels are used in this analysis, namely the linear kernel and the Radial Basis Function (RBF) kernel.
Harishri2002
This repository explores the use of convolutional neural networks (CNNs) such as AlexNet, LeNet, and VGG-16, along with Support Vector Machines (SVM) with RBF kernels, for detecting melanoma skin cancer. The project includes data preprocessing, model training, and evaluation to improve diagnostic accuracy. (Backend API management.)
vatsnishu
Trained a nonlinear SVM classifier using polynomial and RBF kernels. Varied the values of C and d (degree of the polynomial) over a range. For each combination of C and d, ran 10-fold cross-validation 30 times and reported the average cross-validation accuracy and standard deviation. Found the best combination of C and d and plotted the final classifier.
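The repeated cross-validation procedure this entry describes can be sketched with scikit-learn; the dataset, parameter grids, and reduced repetition count below are illustrative assumptions.

```python
# Sketch: repeated 10-fold cross-validation over a grid of C and polynomial
# degree d, reporting mean accuracy and standard deviation per combination.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

results = {}
for C in [0.1, 1, 10]:
    for d in [2, 3]:
        scores = []
        for rep in range(5):  # the original used 30 repetitions; 5 keeps this quick
            cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=rep)
            scores.extend(cross_val_score(SVC(kernel="poly", C=C, degree=d), X, y, cv=cv))
        results[(C, d)] = (np.mean(scores), np.std(scores))

best = max(results, key=lambda k: results[k][0])
mean, std = results[best]
print(f"best (C, d) = {best}, accuracy {mean:.3f} +/- {std:.3f}")
```

Re-seeding the fold shuffling on every repetition is what makes the repetitions informative; with a fixed seed, all 30 runs would return identical scores.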
DhivyaRenuka
Train an SVM classifier on the sklearn digits dataset (i.e. from sklearn.datasets import load_digits), then measure the accuracy of your model using different kernels such as RBF and linear. Tune your model further using the regularization (C) and gamma parameters to reach the highest accuracy score. Used 80% of the samples as the training data.
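A minimal sketch of this exercise with scikit-learn (the parameter grid is an assumption):

```python
# Train SVMs on the digits dataset with RBF and linear kernels,
# then tune C and gamma for the RBF model.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

digits = load_digits()
X_tr, X_te, y_tr, y_te = train_test_split(
    digits.data, digits.target, train_size=0.8, random_state=0
)

# Compare kernels at default settings.
for kernel in ["rbf", "linear"]:
    acc = SVC(kernel=kernel).fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{kernel} kernel accuracy: {acc:.3f}")

# Tune regularization (C) and gamma for the RBF kernel.
grid = GridSearchCV(SVC(kernel="rbf"), {"C": [1, 10], "gamma": ["scale", 0.001]}, cv=5)
grid.fit(X_tr, y_tr)
print("tuned RBF:", grid.best_params_, f"test accuracy {grid.score(X_te, y_te):.3f}")
```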