Found 16,520 repositories (showing 30)
Aastha2104
Introduction
Parkinson’s Disease is the second most prevalent neurodegenerative disorder after Alzheimer’s, affecting more than 10 million people worldwide. Parkinson’s is characterized primarily by the deterioration of motor and cognitive ability. There is no single test that can be administered for diagnosis; instead, doctors must perform a careful clinical analysis of the patient’s medical history. Unfortunately, this method of diagnosis is highly inaccurate: a study from the National Institute of Neurological Disorders finds that early diagnosis (having symptoms for 5 years or less) is only 53% accurate. This is not much better than random guessing, yet an early diagnosis is critical to effective treatment. Because of these difficulties, I investigate a machine learning approach to accurately diagnosing Parkinson’s, using a dataset of various speech features (a non-invasive yet characteristic diagnostic tool) from the University of Oxford.

Why speech features? Speech is highly predictive and characteristic of Parkinson’s disease; almost every Parkinson’s patient experiences severe vocal degradation (inability to produce sustained phonations, tremor, hoarseness), so it makes sense to use voice to diagnose the disease. Voice analysis has the added benefit of being non-invasive, inexpensive, and very easy to collect clinically.

Background
Parkinson's Disease: Parkinson’s is a progressive neurodegenerative condition resulting from the death of the dopamine-containing cells of the substantia nigra (which plays an important role in movement). Symptoms include "frozen" facial features, bradykinesia (slowness of movement), akinesia (impairment of voluntary movement), tremor, and voice impairment. Typically, by the time the disease is diagnosed, 60% of nigrostriatal neurons have degenerated and 80% of striatal dopamine has been depleted.

Performance Metrics
TP = true positive, FP = false positive, TN = true negative, FN = false negative
Accuracy: (TP + TN) / (P + N)
Matthews Correlation Coefficient (MCC): 1 = perfect, 0 = random, -1 = completely inaccurate. MCC = (TP·TN − FP·FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)).

Algorithms Employed
Logistic Regression (LR): Uses the sigmoid logistic equation with weights (coefficient values) and biases (constants) to model the probability of a certain class for binary classification. An output of 1 represents one class, and an output of 0 represents the other. Training the model learns the optimal weights and biases.
Linear Discriminant Analysis (LDA): Assumes that the data is Gaussian and that each feature has the same variance. LDA estimates the mean and variance for each class from the training data, and then uses properties of statistics (Bayes theorem, the Gaussian distribution, etc.) to compute the probability of a particular instance belonging to a given class. The class with the largest probability is the prediction.
k Nearest Neighbors (KNN): Makes predictions about the validation set using the entire training set. KNN makes a prediction about a new instance by searching the entire set for the k "closest" instances, where "closeness" is determined by a proximity measure (Euclidean distance) across all features. The class to which the majority of the k closest instances belong is the class the model predicts for the new instance.
Decision Tree (DT): Represented by a binary tree, where each internal node represents an input variable and a split point, and each leaf node contains an output used to make a prediction.
Neural Network (NN): Models the way the human brain makes decisions. Each neuron takes in one or more inputs, then uses an activation function with weights and biases to process the input and produce an output. Neurons can be arranged into layers, and multiple layers form a network that can model complex decisions. Training the network uses the training instances to optimize the weights and biases.
Naive Bayes (NB): Simplifies the calculation of probabilities by assuming that all features are independent of one another (a strong but effective assumption). Employs Bayes theorem to calculate the probability that the instance to be predicted is in each class, then picks the class with the highest probability.
Gradient Boost (GB): Generally used when seeking a model with very high predictive performance. Reduces bias and variance ("error") by combining multiple "weak learners" (not very good models) into a "strong learner" (a high-performance model). Involves 3 elements: a loss function (error function) to be optimized, a weak learner (decision tree) to make predictions, and an additive model that adds trees to minimize the loss function. Gradient descent is used to minimize the error after each tree is added, one by one.

Engineering Goal
Produce a machine learning model that diagnoses Parkinson’s disease from various features of a patient’s speech with at least 90% accuracy and/or a Matthews Correlation Coefficient of at least 0.9. Compare various algorithms and parameters to determine the best model for predicting Parkinson’s.

Dataset Description
Source: the University of Oxford
195 instances (147 recordings from subjects with Parkinson’s, 48 from subjects without)
22 features (elements that are possibly characteristic of Parkinson’s, such as frequency, pitch, and amplitude/period of the sound wave)
1 label (1 for Parkinson’s, 0 for no Parkinson’s)

Project Pipeline
(Pipeline diagram.)

Summary of Procedure
1. Split the Oxford Parkinson’s Dataset into two parts: one for training, one for validation (to evaluate how well the model performs).
2. Train each of the following algorithms with the training set: Logistic Regression, Linear Discriminant Analysis, k Nearest Neighbors, Decision Tree, Neural Network, Naive Bayes, Gradient Boost.
3. Evaluate results using the validation set.
4. Repeat for the following training/validation splits: 80%/20%, 75%/25%, and 70%/30%.
5. Repeat for a rescaled version of the dataset (scale all values in the dataset to the range 0 to 1; this helps reduce the effect of outliers).
6. Conduct 5 trials and average the results.

Data
(Results figures: accuracy and MCC on the original and rescaled datasets.)

Data Analysis
In general, the models performed best (in both accuracy and Matthews Correlation Coefficient) on the rescaled dataset with a 75/25 train-test split. The two highest performing algorithms, k Nearest Neighbors and the Neural Network, both achieved an accuracy of 98%. The NN achieved an MCC of 0.96, while KNN achieved an MCC of 0.94. These figures outperform most existing literature and significantly outperform current methods of diagnosis.
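A minimal sketch of the procedure above, using scikit-learn; "parkinsons.data" is the UCI file for this dataset, and the model settings are illustrative defaults rather than the project's tuned parameters:

# Sketch of the evaluation pipeline: rescale, split, train 7 classifiers,
# report accuracy and MCC for each split.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, matthews_corrcoef

df = pd.read_csv("parkinsons.data")
X = df.drop(columns=["name", "status"])   # the 22 speech features
y = df["status"]                          # 1 = Parkinson's, 0 = healthy

X = MinMaxScaler().fit_transform(X)       # rescaled variant: features in [0, 1]

models = {
    "LR": LogisticRegression(max_iter=1000),
    "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(),
    "NN": MLPClassifier(max_iter=2000),
    "NB": GaussianNB(),
    "GB": GradientBoostingClassifier(),
}

for test_size in (0.20, 0.25, 0.30):      # the three train/validation splits
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=test_size, stratify=y)
    for name, model in models.items():
        pred = model.fit(X_tr, y_tr).predict(X_va)
        print(f"{name} @ {1 - test_size:.0%}/{test_size:.0%}: "
              f"acc={accuracy_score(y_va, pred):.3f}, mcc={matthews_corrcoef(y_va, pred):.3f}")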
Conclusion and Significance
These robust results suggest that a machine learning approach can indeed be implemented to significantly improve the diagnosis of Parkinson’s disease. Given the necessity of early diagnosis for effective treatment, my machine learning models provide a very promising alternative to the current, rather ineffective method of diagnosis. Current methods of early diagnosis are only 53% accurate, while my machine learning model produces 98% accuracy. This 45-percentage-point increase is critical because an accurate, early diagnosis is needed to treat the disease effectively. Typically, by the time the disease is diagnosed, 60% of nigrostriatal neurons have degenerated and 80% of striatal dopamine has been depleted; with an earlier diagnosis, much of this degradation could be slowed or treated. My results are significant because Parkinson’s affects over 10 million people worldwide who could benefit greatly from an early, accurate diagnosis. Not only is my machine learning approach more accurate in terms of diagnostic accuracy, it is also more scalable, less expensive, and therefore more accessible to people who might not have access to established medical facilities and professionals. The diagnosis is also much simpler, requiring only a 10-15 second voice recording and producing an immediate result.

Future Research
Given more time and resources, I would investigate the following:
- Create a mobile application that records the user's voice, extracts the necessary vocal features, and feeds them into my machine learning model to diagnose Parkinson's.
- Use larger datasets in conjunction with the University of Oxford dataset.
- Tune and improve my models further to achieve even better results.
- Investigate different structures and types of neural networks.
- Construct a novel algorithm specifically suited to the prediction of Parkinson's.
- Generalize my findings and algorithms to all types of dementia disorders, such as Alzheimer's.

References
Bind, Shubham. "A Survey of Machine Learning Based Approaches for Parkinson Disease Prediction." International Journal of Computer Science and Information Technologies 6 (2015). Web. 8 Mar. 2017.
Brooks, Megan. "Diagnosing Parkinson's Disease Still Challenging." Medscape Medical News. National Institute of Neurological Disorders, 31 July 2014. Web. 20 Mar. 2017.
Hashmi, Sumaiya F. "A Machine Learning Approach to Diagnosis of Parkinson's Disease." Claremont Colleges Scholarship. Claremont College, 2013. Web. 10 Mar. 2017.
Karplus, Abraham. "Machine Learning Algorithms for Cancer Diagnosis." Mar. 2012. Web. 20 Mar. 2017.
Little, M. A., P. E. McSharry, S. J. Roberts, D. A. E. Costello, and I. M. Moroz. "Exploiting Nonlinear Recurrence and Fractal Scaling Properties for Voice Disorder Detection." BioMedical Engineering OnLine 6:23 (26 June 2007).
Little, Max. "Parkinsons Data Set." UCI Machine Learning Repository. University of Oxford, 26 June 2008. Web. 20 Feb. 2017.
Ozcift, Akin, and Arif Gulten. "Classifier Ensemble Construction with Rotation Forest to Improve Medical Diagnosis Performance of Machine Learning Algorithms." Computer Methods and Programs in Biomedicine 104.3 (2011): 443-51. Web. 15 Mar. 2017.
"Parkinson's Disease Dementia." UCI MIND, 19 Oct. 2015. Web. 17 Feb. 2017.
Salvatore, C., A. Cerasa, I. Castiglioni, F. Gallivanone, A. Augimeri, M. Lopez, G. Arabia, M. Morelli, M. C. Gilardi, and A. Quattrone. "Machine Learning on Brain MRI Data for Differential Diagnosis of Parkinson's Disease and Progressive Supranuclear Palsy." Journal of Neuroscience Methods 222 (2014): 230-37. Web. 18 Mar. 2017.
Shahbakhi, Mohammad, Danial Taheri Far, and Ehsan Tahami. "Speech Analysis for Diagnosis of Parkinson's Disease Using Genetic Algorithm and Support Vector Machine." Journal of Biomedical Science and Engineering 7.4 (2014): 147-56. Web. 2 Mar. 2017.
"Speech and Communication." Parkinson's Disease Foundation, n.d. Web. 22 Mar. 2017.
Sriram, Tarigoppula V. S., M. Venkateswara Rao, G. V. Satya Narayana, and D. S. V. G. K. Kaladhar. "Diagnosis of Parkinson Disease Using Machine Learning and Data Mining Systems from Voice Dataset." SpringerLink. Springer, Cham. Web. 17 Mar. 2017.
luanshiyinyang
Source code for some common data mining competitions and projects.
jaichaudhry323
Latest compilation of 1500+ project ideas across multiple domains such as machine learning, web development, app development, data mining, networking, and cloud computing.
This was my Master's project, in which I used a dataset from the Wireless Sensor Data Mining (WISDM) lab to build a machine learning model that predicts basic human activities from a smartphone accelerometer. I used the TensorFlow framework with recurrent neural nets, stacking multiple layers of long short-term memory (LSTM) units to build a deep network. After the model was trained, it was saved and exported to an Android application; predictions were made on-device and the results were spoken aloud using the text-to-speech API.
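A minimal sketch of such a stacked-LSTM classifier in TensorFlow/Keras; the window length (200 samples), three accelerometer channels, and six activity classes match the public WISDM setup but are assumptions here:

# Two stacked LSTMs over windows of x/y/z accelerometer readings,
# ending in a softmax over activity classes.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(200, 3)),                   # one window of x/y/z readings
    tf.keras.layers.LSTM(64, return_sequences=True),  # first LSTM layer feeds the next
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(6, activation="softmax"),   # one output unit per activity
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(windows, labels, epochs=20)  # then export, e.g. to TFLite, for Android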
sharispe
`Slib` is a Java library dedicated to semantic data mining based on text and/or ontology processing. The library is composed of various modules dedicated to specific tasks; they can be used in the context of information retrieval, data analysis, recommendation system design, and more. The Semantic Measures Library (SML) is a sub-project of Slib.
According to a 2015 study on job-seeking behavior by the Pew Research Center, 79% of job seekers used online resources for their most recent employment search (Aaron, 2015). This result suggests that online job boards have become the major channel for job seekers in the digital era. However, another finding of the study indicates that most job seekers fail to match their experience with job requirements and spend hours on job boards applying for jobs that turn out to be unsuitable (Aaron, 2015). Additionally, Dr. John Sullivan conducted similar research in 2013 which highlighted some interesting aspects: on average, major organizations receive 250 resumes for each job opening, and more than 50% of the resumes do not meet the minimum requirements (John, 2013). This means the time recruiters spend on that half of the resumes for each job is wasted. From both the candidate's and the recruiter's points of view, this phenomenon suggests that the traditional online job board does not simplify the job application process or reduce the effort required from both parties. As this challenge grows, the demand to automate the resume-job matching process grows as well. For instance, content-based recommendation systems (CBR) have been introduced to analyze job descriptions and identify potential areas of interest to job seekers (Shiqiang et al., 2016). To apply the concept in the Singapore local context, our team conducted a text mining project based on data acquired from a major online job board in Singapore. The primary objective of this project is to create a machine learning model to accelerate the job-resume matching process. The details of the text mining methodology and results are presented in the following sections.
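An illustrative sketch of the content-based matching idea (not the team's actual pipeline; the texts below are placeholders): rank job postings for a resume by TF-IDF cosine similarity.

# Vectorize job descriptions, then score a resume against each job.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

jobs = ["data analyst with SQL and Python",
        "frontend engineer, React and CSS"]
resume = ["five years of Python, SQL and data analysis"]

vec = TfidfVectorizer()
job_matrix = vec.fit_transform(jobs)                       # one row per job posting
scores = cosine_similarity(vec.transform(resume), job_matrix)[0]
print(sorted(zip(scores, jobs), reverse=True))             # best match first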
Aryia-Behroziuan
An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times.

The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis. Deep learning consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition.[68]

Decision trees
Main article: Decision tree learning
Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making.

Support vector machines
Main article: Support vector machines
Support vector machines (SVMs), also known as support vector networks, are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other.[69] An SVM training algorithm is a non-probabilistic, binary, linear classifier, although methods such as Platt scaling exist to use SVM in a probabilistic classification setting. In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.
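A minimal illustration of the kernel trick in scikit-learn (the dataset and parameters are arbitrary): the same SVM API is used with a linear kernel and with a non-linear RBF kernel.

# Compare a linear SVM and an RBF-kernel SVM on a non-linearly-separable dataset.
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(noise=0.2, random_state=0)  # two interleaving half-moons
for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel).fit(X, y)
    print(kernel, "training accuracy:", clf.score(X, y))  # RBF separates the moons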
(Figure: illustration of linear regression on a data set.)

Regression analysis
Main article: Regression analysis
Regression analysis encompasses a large variety of statistical methods to estimate the relationship between input variables and their associated features. Its most common form is linear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such as ordinary least squares. The latter is often extended by regularization methods to mitigate overfitting and bias, as in ridge regression. When dealing with non-linear problems, go-to models include polynomial regression (for example, used for trendline fitting in Microsoft Excel[70]), logistic regression (often used in statistical classification), or even kernel regression, which introduces non-linearity by taking advantage of the kernel trick to implicitly map input variables to a higher-dimensional space.

Bayesian networks
Main article: Bayesian network
(Figure: a simple Bayesian network. Rain influences whether the sprinkler is activated, and both rain and the sprinkler influence whether the grass is wet.)
A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning. Bayesian networks that model sequences of variables, like speech signals or protein sequences, are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.

Genetic algorithms
Main article: Genetic algorithm
A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s.[71][72] Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms.[73]

Training models
Machine learning models usually require a lot of data in order to perform well. When training a machine learning model, one needs to collect a large, representative sample of data from a training set. Data from the training set can be as varied as a corpus of text, a collection of images, or data collected from individual users of a service. Overfitting is something to watch out for when training a machine learning model.

Federated learning
Main article: Federated learning
Federated learning is an adapted form of distributed artificial intelligence for training machine learning models that decentralizes the training process, allowing users' privacy to be maintained by not needing to send their data to a centralized server. This also increases efficiency by decentralizing the training process to many devices.
For example, Gboard uses federated machine learning to train search query prediction models on users' mobile phones without having to send individual searches back to Google.[74]

Applications
There are many applications for machine learning, including: agriculture, anatomy, adaptive websites, affective computing, banking, bioinformatics, brain–machine interfaces, cheminformatics, citizen science, computer networks, computer vision, credit-card fraud detection, data quality, DNA sequence classification, economics, financial market analysis,[75] general game playing, handwriting recognition, information retrieval, insurance, Internet fraud detection, linguistics, machine learning control, machine perception, machine translation, marketing, medical diagnosis, natural language processing, natural language understanding, online advertising, optimization, recommender systems, robot locomotion, search engines, sentiment analysis, sequence mining, software engineering, speech recognition, structural health monitoring, syntactic pattern recognition, telecommunication, theorem proving, time series forecasting, and user behavior analytics.

In 2006, the media-services provider Netflix held the first "Netflix Prize" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%. A joint team made up of researchers from AT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built an ensemble model to win the Grand Prize in 2009 for $1 million.[76] Shortly after the prize was awarded, Netflix realized that viewers' ratings were not the best indicators of their viewing patterns ("everything is a recommendation") and they changed their recommendation engine accordingly.[77] In 2010 The Wall Street Journal wrote about the firm Rebellion Research and their use of machine learning to predict the financial crisis.[78] In 2012, co-founder of Sun Microsystems, Vinod Khosla, predicted that 80% of medical doctors' jobs would be lost in the next two decades to automated machine learning medical diagnostic software.[79] In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings and that it may have revealed previously unrecognized influences among artists.[80] In 2019 Springer Nature published the first research book created using machine learning.[81]

Limitations
Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results.[82][83][84] Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems.[85] In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed after a collision.[86] Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver even after years of time and billions of dollars invested.[87][88]

Bias
Main article: Algorithmic bias
Machine learning approaches in particular can suffer from different data biases. A machine learning system trained on current customers only may not be able to predict the needs of new customer groups that are not represented in the training data.
When trained on man-made data, machine learning is likely to pick up the same constitutional and unconscious biases already present in society.[89] Language models learned from data have been shown to contain human-like biases.[90][91] Machine learning systems used for criminal risk assessment have been found to be biased against black people.[92][93] In 2015, Google Photos would often tag black people as gorillas,[94] and in 2018 this still was not well resolved: Google reportedly was still using the workaround of removing all gorillas from the training data, and thus was not able to recognize real gorillas at all.[95] Similar issues with recognizing non-white people have been found in many other systems.[96] In 2016, Microsoft tested a chatbot that learned from Twitter, and it quickly picked up racist and sexist language.[97] Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains.[98] Concern for fairness in machine learning, that is, reducing bias in machine learning and propelling its use for human good, is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who reminds engineers that "There's nothing artificial about AI... It's inspired by people, it's created by people, and, most importantly, it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility."[99]

Model assessments
Classification of machine learning models can be validated by accuracy estimation techniques like the holdout method, which splits the data into a training set and a test set (conventionally a 2/3 training set and 1/3 test set designation) and evaluates the performance of the trained model on the test set. In comparison, the K-fold cross-validation method randomly partitions the data into K subsets, and then K experiments are performed, each respectively considering one subset for evaluation and the remaining K-1 subsets for training the model. In addition to the holdout and cross-validation methods, bootstrap, which samples n instances with replacement from the dataset, can be used to assess model accuracy.[100] In addition to overall accuracy, investigators frequently report sensitivity and specificity, meaning the true positive rate (TPR) and true negative rate (TNR) respectively. Similarly, investigators sometimes report the false positive rate (FPR) as well as the false negative rate (FNR). However, these rates are ratios that fail to reveal their numerators and denominators. The total operating characteristic (TOC) is an effective method to express a model's diagnostic ability. TOC shows the numerators and denominators of the previously mentioned rates; thus TOC provides more information than the commonly used receiver operating characteristic (ROC) and ROC's associated area under the curve (AUC).[101]
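A minimal sketch of the holdout and K-fold procedures just described, using scikit-learn on a bundled toy dataset (the classifier choice is arbitrary):

# Holdout: conventional 2/3 train, 1/3 test. K-fold: K experiments, each
# holding out one of K subsets for evaluation.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
clf = LogisticRegression(max_iter=5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)
print("holdout accuracy:", clf.fit(X_tr, y_tr).score(X_te, y_te))

print("5-fold accuracies:", cross_val_score(clf, X, y, cv=5))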
Ethics
Machine learning poses a host of ethical questions. Systems trained on datasets collected with biases may exhibit these biases upon use (algorithmic bias), thus digitizing cultural prejudices.[102] For example, using job hiring data from a firm with racist hiring policies may lead to a machine learning system duplicating the bias by scoring job applicants by similarity to previous successful applicants.[103][104] Responsible collection of data and documentation of the algorithmic rules used by a system are thus a critical part of machine learning. Because human languages contain biases, machines trained on language corpora will necessarily also learn these biases.[105][106]

Other forms of ethical challenges, not related to personal biases, are seen more in health care. There are concerns among health care professionals that these systems might not be designed in the public's interest but as income-generating machines. This is especially true in the United States, where there is a long-standing ethical dilemma of improving health care but also increasing profits. For example, the algorithms could be designed to provide patients with unnecessary tests or medication in which the algorithm's proprietary owners hold stakes. There is huge potential for machine learning in health care to provide professionals a great tool to diagnose, medicate, and even plan recovery paths for patients, but this will not happen until the personal biases mentioned previously, and these "greed" biases, are addressed.[107]

Hardware
Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks (a particular narrow subdomain of machine learning) that contain many layers of non-linear hidden units.[108] By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI.[109] OpenAI estimated the hardware compute used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017), and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months.[110][111]

Software
Software suites containing a variety of machine learning algorithms include the following:
Free and open-source software
megansquire
Code and examples for the Mastering Data Mining project
amankaushik
The information system chosen for the project was a stock investment management website providing live prices, historical data, news articles, etc., as well as basic analysis and recommendations using data mining techniques.
1. Crawling and parsing Yahoo Finance, Reuters, and Twitter data (Java, twitter4j).
2. Web interface using J2EE and the Struts 2 framework; jQuery (Highstock library) for showing technical charts.
3. Database integration, data cleaning, and feature selection on the collected data, and applying linear regression and classification algorithms (SVM, Naive Bayes) to produce detailed analysis and recommendations.
vidushi4
The objective of this project is to analyse and classify personalities of a given set of people or an individual using advanced data mining concepts
mrsan22
Recognizing human activity using multiple wearable accelerometer sensors placed at different body positions.
mihinsumaria
Agro Analytics - Data Mining/Machine Learning Project based on Agricultural datasets. For more info, go to www.agroanalytics.info
ruy1su
Twitter project on stock prediction with neural networks, SVM, etc.
ying-wen
Time series prediction project for Information Retrieval and Data Mining (COMPGI15)
wesslen
Text Mining Patents for Big Data Course Project
ma-xu
Java implementations of classic data mining (big data) algorithms. Create a new Java project and copy this project into the src directory to use it.
ThunderingII
Code for automatic spam SMS detection, using machine learning algorithms such as AdaBoost, decision trees, perceptron, SVM, and LR.
During my undergrad, I implemented a music recommendation system based on digital track analysis. Now it's time to use text mining on lyrics to upgrade that project. Goals: (1) build a music mood (happy or sad) classifier based on lyrics analysis; (2) determine what words, and with what distributions, appear in the different mood categories; (3) examine how the key words in songs have changed in recent years. Project plan: (1) data collection: the training and validation data will be collected from the largest lyric database, Lyricwiki.org; (2) feature selection: the most common feature types to consider are BOW (bag of words) and POS (part of speech), combined with stemming using WordNet; (3) model training: SVM and Naive Bayes, tuned with grid search (see the sketch below); (4) data visualization for goals two and three. This project will be done in Python on Jupyter notebooks. Reference: Hu, X. (2010). Improving music mood classification using lyrics, audio and social tags (Doctoral dissertation, University of Arizona).
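A minimal sketch of step (3), assuming scikit-learn; the lyrics and labels below are placeholders:

# Bag-of-words features feeding an SVM, with hyperparameters tuned by grid search.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

lyrics = ["sunshine dancing smiles all night",
          "party lights and happy friends",
          "tears falling in the cold rain",
          "lonely heart breaking in the dark"]
moods = ["happy", "happy", "sad", "sad"]

pipe = Pipeline([("bow", CountVectorizer()), ("svm", SVC())])
params = {"svm__C": [0.1, 1, 10], "svm__kernel": ["linear", "rbf"]}
search = GridSearchCV(pipe, params, cv=2).fit(lyrics, moods)
print(search.best_params_, search.predict(["tears in the dark tonight"]))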
Bee-Mar
The final project for my graduate level Data Mining course
rohitdate
The project involves data mining algorithms used to mine patterns and trends in the occurrence of crimes in a city, with respect to day of the week, time of day, and place in the city.
Minhchuyentoancbn
Implementation-based Projects in Data Mining and Machine Learning
abdallahkhairy
Human locomotion affects our daily living activities; losing limbs or having neurological disorders with motor deficits can affect quality of life. Gait analysis is the systematic study of human locomotion, which is defined as body movement through aerial, aquatic, or terrestrial space. This analysis has been used to study ambulation and to register and reconstruct the physical location and orientation of individual limbs, quantifying and characterizing human locomotion through different gait parameters: gait activities (walking, stair ascending/descending, etc.), gait phases, and spatiotemporal parameters. Additionally, gait analysis parameters can be used to evaluate the functionality of patients and wearable-system users. The evaluation is based on the patient's stability, energy consumption, gait symmetry, ability to recover from perturbations, and ability to perform activities of daily living.

Many companies develop assistive, wearable, and rehabilitation devices for patients with lower-limb neurological disorders. These devices are tested and evaluated inside controlled lab environments; however, the companies do not have enough data on patients' performance in real-world and harsh environments. Collecting large datasets of device users and their gait performance in real environments is notoriously difficult, and collecting data on less prevalent gait activities, beyond level walking, stair ascending/descending, sitting, and standing on hard surfaces, is rarely attempted. However, gait data could be collected from sources other than traditional gait labs with the help of IoT data collection embedded in wearable and assistive devices, together with well-established cloud platforms equipped with big-data analytics and data visualization capabilities.

This project aims to develop a cloud platform capable of collecting data from wearable and assistive devices, such as prostheses, exoskeletons, and wearable gait analysis sensors, using IoT technologies. The platform automatically applies data mining and visualization tools, and it uses statistical and machine learning techniques to estimate gait events, gait symmetry, gait speed, gait activities, stability, energy consumption, and more; it is also capable of predicting the patient's progress over time. The project is composed of two major components: hardware and software. In the hardware component, the students will design and implement the IoT devices that collect the different readings for gait analysis and send them to the cloud. In the software component, the students will design and implement algorithms to visualize the collected data, then design and implement data analytics to automatically analyze it, so that we can estimate gait events, gait symmetry, and gait speed, classify gait activities, assess stability and energy consumption, and predict the patient's progress over time. Additionally, these data can be used by manufacturers of prosthetic legs to improve their products, as well as by health-care centers to assess patients' performance. The following figures describe the main modules of our graduation project.
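As one illustrative example (not from the project) of such an estimate: detecting heel-strike-like peaks in a vertical-acceleration trace to count gait events and derive cadence, assuming a 100 Hz signal.

# Peak detection over a mock 10-second accelerometer trace.
import numpy as np
from scipy.signal import find_peaks

fs = 100                                   # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
accel_z = np.sin(2 * np.pi * 1.8 * t) + 0.1 * np.random.randn(t.size)  # mock data

# Require a minimum height and spacing so each step yields one peak.
peaks, _ = find_peaks(accel_z, height=0.5, distance=fs // 3)
print("gait events:", peaks.size, "cadence (steps/min):", peaks.size / 10 * 60)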
mrsac7
Hate Speech Detection | Data Mining (CSE-362) Project | IIT (BHU) Varanasi | Odd Semester 2020-21
SJD1882
Télécom Paris | MS Big Data | SD 701 | Big Data Mining Course Project using Spark and Google Colab for building Scalable Recommender Systems
ananya2001gupta
Identify the software project, create a business case, and arrive at a problem statement. REQUIREMENTS: Windows XP, Internet, MS Office, etc.
Problem Description:
1. Introduction to AI and Machine Learning: Artificial intelligence applies machine learning, deep learning, and other techniques to solve actual problems, and it brings genuine human-to-machine interaction. Simply put, machine learning comprises the algorithms that give computers the ability to learn from data and then make decisions and predictions, while AI refers to the broader idea of machines executing tasks smartly. It is a faster process for learning risk factors and profitable opportunities, and it can learn from mistakes and experience. When machine learning is combined with artificial intelligence, it can gather an immense amount of information, rectify errors, and learn from further experience, developing into a smarter, faster, and more accurate technique. The joking difference between machine learning and artificial intelligence: if it is written in Python, it is probably machine learning; if it is written in PowerPoint, it is artificial intelligence. Many existing projects are implemented using AI and machine learning, and one of them is Bitcoin price prediction. Bitcoin (₿) (founder: Satoshi Nakamoto; ledger start: 3 January 2009) is a digital currency, a type of electronic money. It is decentralized digital cash, without a central bank or single administrator, that can be sent from user to user on the peer-to-peer Bitcoin network without the need for intermediaries. Machine learning models can likely give us the insight we need about the future of cryptocurrency. They will not tell us the future, but they might tell us the general trend and the direction in which to expect prices to move; these machine learning models predict the future of Bitcoin by coding them out in Python. Machine learning and AI-assisted trading have attracted growing interest over the past few years. This approach tests the hypothesis that the inefficiency of the cryptocurrency market can be exploited to generate abnormal profits. The application of machine learning algorithms to the cryptocurrency market has so far been limited mostly to the analysis of Bitcoin prices, using random forests, Bayesian neural networks, long short-term memory neural networks, and other algorithms.
2. Applications/Scope of AI and Machine Learning:
a) Sentiment analysis: the classification of subjective opinions or emotions (positive, negative, and neutral) within text data using natural language processing.
b) Statistical data processing: an application of computing where available data is processed through algorithms, or the handling of statistical data is assisted.
BITCOIN PRICE PREDICTION USING AI AND MACHINE LEARNING: The main aim is to predict the actual Bitcoin price in US dollars: the chance to build a model capable of forecasting cryptocurrencies, primarily Bitcoin.
# The prediction works from CoinMarketCap data.
# CoinMarketCap provides historical data for Bitcoin price changes and keeps a record of all transactions, recording the number of coins in circulation and the volume of coins traded in the last 24 hours.
# Quandl is used to filter the dataset via its MATLAB properties.
3. Problem statement: Some AI and machine learning problem statements are:
a) Data privacy and security: once a company has dug up the data, privacy and security are eye-catching aspects that need to be taken care of.
b) Data scarcity: data is a very important aspect of AI, and labeled data is used to train machines to learn and make predictions.
c) Data acquisition: machine learning uses a large amount of data in the process of training and learning.
d) High error susceptibility: artificial intelligence and machine learning use high volumes of data, which makes them susceptible to errors.
Some problem statements for Bitcoin price prediction using AI and machine learning:
a) Experimental-phase risk: it is less experimental than other counterparts. In addition, relative to traditional assets, its risk level can be assessed as high, because this asset is not intended for conservative investors.
b) Technology risks: there is a technological risk to other cryptocurrencies in the form of the potential appearance of a more advanced cryptocurrency. Investors may simply not notice the moment when their virtual assets lose their real value.
c) Price variability: the value of cryptocurrency varies with large volumes of exchange trading, the integration of Bitcoin with various companies, legislative initiatives of regulatory bodies, and many other, sometimes disregarded, phenomena.
d) Consumer protection: the irreversibility of transactions in itself has little effect on the risks of investing in Bitcoin as an asset.
e) Price fluctuation prediction: many investors care more about whether a sudden rise or fall is worth following; the Bitcoin price often fluctuates by more than 10% (or even more than 30%) at some times.
f) Lack of government regulation: regulators from traditional financial markets are basically missing in the field of cryptocurrencies. For instance, fake news frequently affects the decisions of individual investors.
g) It is difficult to use large-interval data (e.g., day-level and month-level data).
h) The change time of mining difficulty is much longer. Moreover, news information is not considered, since it is hard to determine the authenticity of news or predict the occurrence of emergencies.
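An illustrative sketch of the prediction idea (not the project's code; btc.csv and its Close column are assumed, e.g. a CoinMarketCap export): predict the next day's close from lagged closes with a random forest.

# Build lag features from the price series, train on the past, test on the future.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

prices = pd.read_csv("btc.csv")["Close"]
X = pd.concat([prices.shift(i) for i in (1, 2, 3)], axis=1).dropna()  # 3 lags
X.columns = ["lag1", "lag2", "lag3"]
y = prices.loc[X.index]

split = int(len(X) * 0.8)                 # chronological split, no shuffling
model = RandomForestRegressor().fit(X[:split], y[:split])
print("R^2 on held-out days:", model.score(X[split:], y[split:]))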
divyakkm
Analyzing Airline data to predict delays
Community Detection on Higher-Order Networks: Identifying Patterns in US Air Traffic
AyushGupta51379
- Image classification using deep learning. - Utilizing both frequency- and pixel-domain information of images. - Implemented the MVNN model from a research paper published at 2019 IEEE ICDM; achieved accuracy comparable to the original model. - Compared with several deep learning models: pre-trained networks (VGG-16, VGG-19) and our own implementations of CNN-based models (with different numbers of layers). - Research project completed as part of a course: COMP 5331 - Knowledge Discovery in Databases, at HKUST (The Hong Kong University of Science and Technology). - Referenced paper (for the MVNN model): P. Qi, J. Cao, T. Yang, J. Guo, and J. Li. Exploiting multi-domain visual information for fake news detection. In 2019 IEEE International Conference on Data Mining (ICDM), pages 518-527, 2019.
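An illustrative sketch (not the paper's MVNN code) of the two input views the description mentions: the raw pixel domain and a frequency-domain view of the same image, here via a 2-D FFT.

# Compute a pixel-domain view and a log-magnitude spectrum of a placeholder image.
import numpy as np

image = np.random.rand(64, 64).astype(np.float32)   # placeholder grayscale image
pixel_view = image                                   # pixel-domain input
freq_view = np.log1p(np.abs(np.fft.fft2(image)))     # frequency-domain input

# Each view would feed its own CNN branch; the branch outputs are then fused.
print(pixel_view.shape, freq_view.shape)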
BlockchainLabs
About: AEON was launched on 6.6.2014 at 6:00 PM UTC, with no premine or instamine. AEON is for people who want to pay and live freely, who want to be part of the cryptocurrency revolution and want to try something new. It is based on the CryptoNote protocol and uses the CryptoNight-Lite[1] algorithm, and features:
- True anonymity & data protection
- Untraceable payments using ring signatures
- Unlinkable transactions with random data by the sender
- Blockchain analysis resistance
- CPU/GPU mining, ASIC-resistant

Roadmap (April 26, 2015 - new roadmap announced):
- Mobile-friendly PoW and block time (released)
- GUI wallet (in progress)
- 32-bit and ARM support (released, but requires low memory footprint below)
- Low memory footprint (in progress)
- Signature trimming
- Blockchain pruning (test release available)
- Multisig and payment channels (instant payments)

Development Team:
Lead developer: smooth
Release engineering, Q/A, support: Arux
Other roles: open (PM smooth)
Original developer (as Monero fork): anonymous

Bounties: None currently open. You can send donations for the AEON bounty fund and development.
Code:
AEON address: WmsSWgtT1JPg5e3cK41hKXSHVpKW7e47bjgiKmWZkYrhSS5LhRemNyqayaSBtAQ6517eo5PtH9wxHVmM78JDZSUu2W8PqRiNs
View Key: 71bf19a7348ede17fa487167710dac401ef1556851bfd36b76040facf051630b

Specifications:
PoW algorithm: CryptoNight-Lite[1]
Max supply: ~18.4 million[2]
Block reward: Smoothly varying, using the formula (M − A) / 2^18 / 10^12, where M = 2^64 − 1 and A = supply mined to date.[3]
Block time: 240 seconds[3]
Difficulty: retargets at every block
RPC-bind-port: 11180
P2P-bind-port: 11181

Downloads:
- Current release 0.9.6.0 (source code, 64-bit Windows binaries)
- bootstrap for linux-x64 (by community member Phantas, 2016-03-10)
- bootstrap for Windows-x64 (by community member Phantas, 2016-03-11)
- bootstrap for OS X (by community member sammy007, 2015-08-08)
- GUI for Windows 0.2.3 (by community member h0g0f0g0, src.zip, sha1)
- Instructions to compile on Windows (provided by community member cryptrol): see bottom of this post
Recommended: Use caution with community-provided downloads; check reputation and scan for malware.
Recommended: Use the --donate option when starting the daemon to donate a portion of your computer power to support the project and the network.

Links & Resources:
Trading:
- Bittrex - AEON/BTC
- Cryptopia - AEON/BTC (also has DOGE and LTC pairs)
- OTC thread - AEON/XMR
- Speculation thread (moderated by americanpegasus)
Pools:
- http://52.8.47.33:8080 - Arux's personal pool (2% fee)
- http://98.238.231.31:9000 - The Cryptophilanthropist (2% fee)
Block Explorers:
- Chainradar
- Minergate
Community:
- Reddit
- Steem
- Twitter
- IRC channel #aeon @ Freenode (Webchat Link)
Dead Links / Outdated:
- cryptocointalk
- white paper

Mining:
1. Compile from source code.
2. Launch aeond and wait until it is synchronized.
3. Launch simplewallet --generate-new-wallet=wallet_name.bin --pass=12345
4. Start mining from the wallet using the start_mining command.

Windows Compilation: (provided by community member cryptrol)
Compile steps for Windows x64 using MSVC. First of all, let's get all the tools we need:
- Download and install Microsoft Visual Studio Community 2013 (a free version of Visual Studio with some license limitations). You can uncheck the web development tools and SQL tools, since you won't use them for building AEON. This will take time to download and install, and you will have to reboot upon completion.
- Download and install CMake for Windows from http://www.cmake.org/download/ (Win32 install).
- Download Boost 1.57 from http://www.boost.org/users/download/, use the zip or 7zip archive, and extract it. You can use c:\boost_1_57_0, since this is what I am using for these steps.
- Download and install GitHub for Windows from https://windows.github.com/ (this also includes a Git shell that we will use later).
Now the nasty part: compile & build time!
- Build Boost: open a command line and type:
Code:
> cd c:\boost_1_57_0
> bootstrap.bat
> b2 --toolset=msvc variant=release link=static threading=multi runtime-link=static address-model=64
- Open the Git Shell (or Git Bash), depending on what you downloaded previously, and do:
Code:
> git clone https://github.com/aeonix/aeon.git
> cd aeon
> mkdir build
> cd build
> cmake -G "Visual Studio 12 Win64" -DBOOST_ROOT=c:\boost_1_57_0 -DBOOST_LIBRARYDIR=c:\boost_1_57_0\stage\lib ..
> MSBuild Project.sln /p:Configuration=release /m
You should now find the exe files under build/src/release.

Aeon isn't a cryptocurrency. It's a lifestyle. It's about polished perfection, attained by breaking the rules with calculated mastery of the art. It's about respecting history and pushing innovation forward at the same time. It's about more than just math: it's a vision of a world where luxury is the same as entry-level, and the limits are the heavens themselves. If you're just buying Aeon to get rich, don't even bother. Aeon needs more than just the next wave of crypto speculators: we're looking for the truly elite. But if you think you have what it takes to redefine global finance and discover new magnitudes of wealth in the process... Well, Aeon is ready for you. Are you ready for Aeon?
alabid
CS 324 (Data Mining) final project: efficient regularized SVD/UV-decomposition on large, partial matrices
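A minimal sketch of regularized UV-decomposition on a partial matrix: stochastic gradient descent over the observed entries only, with an L2 penalty (all sizes and hyperparameters below are illustrative).

# Factor a sparsely observed matrix R into U @ V.T using only known cells.
import numpy as np

ratings = {(0, 1): 4.0, (1, 2): 3.0, (2, 0): 5.0}   # (row, col) -> observed value
n_rows, n_cols, k, lam, lr = 3, 3, 2, 0.1, 0.01     # rank k, L2 weight, step size

rng = np.random.default_rng(0)
U = rng.normal(size=(n_rows, k))
V = rng.normal(size=(n_cols, k))

for _ in range(500):
    for (i, j), r in ratings.items():
        err = r - U[i] @ V[j]
        U[i] += lr * (err * V[j] - lam * U[i])   # gradient step with regularization
        V[j] += lr * (err * U[i] - lam * V[j])

print(U @ V.T)   # reconstruction; observed cells approximate the ratings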