Found 667 repositories (showing 30)
shafiab
My Insight Data Engineering Fellowship project. I implemented a big data processing pipeline based on the lambda architecture that aggregates Twitter and US stock market data for user sentiment analysis using open-source tools: Apache Kafka for data ingestion, Apache Spark and Spark Streaming for batch and real-time processing, Apache Cassandra for storage, and Flask, Bootstrap, and Highcharts for the frontend.
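As a rough illustration of the ingestion leg of such a pipeline, here is a minimal sketch (not taken from the repository; the broker address and topic name are assumptions) that publishes tweet JSON onto a Kafka topic which both the batch and streaming layers could consume, using the kafka-python client:

```python
import json

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_tweet(tweet: dict) -> None:
    """Send one tweet record to the 'tweets' topic (assumed name)."""
    producer.send("tweets", tweet)

publish_tweet({"user": "alice", "text": "Markets look strong today!"})
producer.flush()
```

In a lambda architecture, the same topic feeds both the slow batch path (Spark jobs over archived data) and the fast path (Spark Streaming), and their views are merged at query time.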
A real-time interactive web app based on data pipelines using streaming Twitter data, automated sentiment analysis, and MySQL & PostgreSQL databases (deployed on Heroku)
uclatommy
Real-time sentiment analysis in Python using Twitter's streaming API
Aryia-Behroziuan
An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times.

The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis. Deep learning consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition.[68]

Decision trees

Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making.

Support vector machines

Support vector machines (SVMs), also known as support vector networks, are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other.[69] An SVM training algorithm is a non-probabilistic, binary, linear classifier, although methods such as Platt scaling exist to use SVM in a probabilistic classification setting.
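As a hedged illustration of the two classifier families just described, here is a minimal scikit-learn sketch (the iris dataset is a stand-in for any labeled sample) that trains a decision tree and an SVM on the same data and reports held-out accuracy:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A classification tree and a linear SVM fit to the same training set;
# SVC(kernel="rbf") would instead invoke the kernel trick discussed next.
for model in (DecisionTreeClassifier(random_state=0), SVC(kernel="linear")):
    model.fit(X_train, y_train)                    # learn from the training set
    print(type(model).__name__, model.score(X_test, y_test))  # held-out accuracy
```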
Beyond linear classification, SVMs can efficiently perform non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.

Regression analysis

Regression analysis encompasses a large variety of statistical methods to estimate the relationship between input variables and their associated features. Its most common form is linear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such as ordinary least squares. The latter is often extended by regularization methods to mitigate overfitting and bias, as in ridge regression. When dealing with non-linear problems, go-to models include polynomial regression (used, for example, for trendline fitting in Microsoft Excel[70]), logistic regression (often used in statistical classification), and kernel regression, which introduces non-linearity by taking advantage of the kernel trick to implicitly map input variables to a higher-dimensional space.

Bayesian networks

A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). In a simple example, rain influences whether the sprinkler is activated, and both rain and the sprinkler influence whether the grass is wet. A Bayesian network could likewise represent the probabilistic relationships between diseases and symptoms: given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning. Bayesian networks that model sequences of variables, like speech signals or protein sequences, are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.

Genetic algorithms

A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. Genetic algorithms were used in machine learning in the 1980s and 1990s.[71][72] Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms.[73]

Training models

Machine learning models usually require a lot of data in order to perform well. When training a model, one typically needs to collect a large, representative sample of data: a training set. Data from the training set can be as varied as a corpus of text, a collection of images, or data collected from individual users of a service. Overfitting is something to watch out for when training a machine learning model.

Federated learning

Federated learning is an adapted form of distributed artificial intelligence for training machine learning models that decentralizes the training process, allowing users' privacy to be maintained by not needing to send their data to a centralized server. Decentralizing training across many devices can also increase efficiency.
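A toy NumPy sketch of the federated-averaging idea (not any production system): each client computes a model update on its own private data, and only parameters, never raw data, travel to the server for averaging.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])

# Five clients, each holding private data that never leaves the device.
clients = []
for _ in range(5):
    X = rng.normal(size=(20, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=20)
    clients.append((X, y))

def local_update(w, X, y, lr=0.1):
    """One gradient step of linear regression on a client's own data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

w = np.zeros(3)  # global model held by the server
for _ in range(200):
    updates = [local_update(w, X, y) for X, y in clients]  # computed locally
    w = np.mean(updates, axis=0)  # server averages parameters only

print(w)  # converges toward true_w without centralizing any raw data
```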
For example, Gboard uses federated machine learning to train search query prediction models on users' mobile phones without having to send individual searches back to Google.[74]

Applications

There are many applications for machine learning, including: agriculture, anatomy, adaptive websites, affective computing, banking, bioinformatics, brain–machine interfaces, cheminformatics, citizen science, computer networks, computer vision, credit-card fraud detection, data quality, DNA sequence classification, economics, financial market analysis,[75] general game playing, handwriting recognition, information retrieval, insurance, internet fraud detection, linguistics, machine learning control, machine perception, machine translation, marketing, medical diagnosis, natural language processing, natural language understanding, online advertising, optimization, recommender systems, robot locomotion, search engines, sentiment analysis, sequence mining, software engineering, speech recognition, structural health monitoring, syntactic pattern recognition, telecommunication, theorem proving, time series forecasting, and user behavior analytics.

In 2006, the media-services provider Netflix held the first "Netflix Prize" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%. A joint team made up of researchers from AT&T Labs-Research, in collaboration with the teams Big Chaos and Pragmatic Theory, built an ensemble model to win the Grand Prize in 2009 for $1 million.[76] Shortly after the prize was awarded, Netflix realized that viewers' ratings were not the best indicators of their viewing patterns ("everything is a recommendation") and changed its recommendation engine accordingly.[77] In 2010, The Wall Street Journal wrote about the firm Rebellion Research and its use of machine learning to predict the financial crisis.[78] In 2012, Vinod Khosla, co-founder of Sun Microsystems, predicted that 80% of medical doctors' jobs would be lost in the next two decades to automated machine learning medical diagnostic software.[79] In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings and that it may have revealed previously unrecognized influences among artists.[80] In 2019, Springer Nature published the first research book created using machine learning.[81]

Limitations

Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results.[82][83][84] Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems.[85] In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed after a collision.[86] Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver even after years of work and billions of dollars invested.[87][88]

Bias

Machine learning approaches in particular can suffer from different data biases. A machine learning system trained only on current customers may not be able to predict the needs of new customer groups that are not represented in the training data.
When trained on man-made data, machine learning is likely to pick up the constitutional and unconscious biases already present in society.[89] Language models learned from data have been shown to contain human-like biases.[90][91] Machine learning systems used for criminal risk assessment have been found to be biased against black people.[92][93] In 2015, Google Photos would often tag black people as gorillas,[94] and in 2018 this was still not well resolved: Google reportedly was still using the workaround of removing all gorillas from the training data and thus could not recognize real gorillas at all.[95] Similar issues with recognizing non-white people have been found in many other systems.[96] In 2016, Microsoft tested a chatbot that learned from Twitter, and it quickly picked up racist and sexist language.[97] Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains.[98] Concern for fairness in machine learning, that is, reducing bias in machine learning and propelling its use for human good, is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who reminds engineers that "There's nothing artificial about AI... It's inspired by people, it's created by people, and, most importantly, it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility."[99]

Model assessments

Classification machine learning models can be validated by accuracy estimation techniques like the holdout method, which splits the data into a training and a test set (conventionally a 2/3 training set and a 1/3 test set) and evaluates the performance of the trained model on the test set. In comparison, the K-fold cross-validation method randomly partitions the data into K subsets, and K experiments are then performed, each using 1 subset for evaluation and the remaining K-1 subsets for training the model. In addition to the holdout and cross-validation methods, bootstrap, which samples n instances with replacement from the dataset, can be used to assess model accuracy.[100]

In addition to overall accuracy, investigators frequently report sensitivity and specificity, meaning the true positive rate (TPR) and true negative rate (TNR) respectively. Similarly, investigators sometimes report the false positive rate (FPR) as well as the false negative rate (FNR). However, these rates are ratios that fail to reveal their numerators and denominators. The total operating characteristic (TOC) is an effective method to express a model's diagnostic ability. TOC shows the numerators and denominators of the previously mentioned rates, thus providing more information than the commonly used receiver operating characteristic (ROC) and ROC's associated area under the curve (AUC).[101]

Ethics

Machine learning poses a host of ethical questions. Systems that are trained on datasets collected with biases may exhibit these biases upon use (algorithmic bias), thus digitizing cultural prejudices.[102] For example, using job hiring data from a firm with racist hiring policies may lead to a machine learning system duplicating the bias by scoring job applicants by similarity to previous successful applicants.[103][104] Responsible collection of data and documentation of the algorithmic rules used by a system is thus a critical part of machine learning.
Because human languages contain biases, machines trained on language corpora will necessarily also learn these biases.[105][106]

Other forms of ethical challenges, not related to personal biases, are more often seen in health care. There are concerns among health care professionals that these systems might not be designed in the public's interest but as income-generating machines. This is especially true in the United States, where there is a long-standing ethical dilemma between improving health care and increasing profits. For example, algorithms could be designed to provide patients with unnecessary tests or medication in which the algorithm's proprietary owners hold stakes. There is huge potential for machine learning in health care to provide professionals with a great tool to diagnose, medicate, and even plan recovery paths for patients, but this will not happen until the personal biases mentioned previously, and these "greed" biases, are addressed.[107]

Hardware

Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks (a particular narrow subdomain of machine learning) that contain many layers of non-linear hidden units.[108] By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI.[109] OpenAI estimated the hardware compute used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017) and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months.[110][111]

Software

Software suites containing a variety of machine learning algorithms include the following: Free and open-source software
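Circling back to the model-assessment techniques described above, here is a minimal scikit-learn sketch (iris again standing in for any labeled dataset) of both the 2/3–1/3 holdout split and 5-fold cross-validation:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)

# Holdout: train on 2/3 of the data, evaluate on the remaining 1/3.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)
print("holdout accuracy:", clf.fit(X_tr, y_tr).score(X_te, y_te))

# K-fold: each of K=5 subsets serves once as the evaluation set.
print("5-fold accuracies:", cross_val_score(clf, X, y, cv=5))
```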
jmcilhargey
A company tracking app that requests real-time stock data via WebSocket connections and draws interactive graphs with d3.js and HTML5 canvas. Uses Twitter streaming data and algorithmic analysis to explore the relationship between company performance and Twitter sentiment. HTML/CSS, JavaScript, React, Node, Express, MongoDB, d3.js
rishikonapure
Real-time cryptocurrency price and Twitter sentiment analysis
drisskhattabi6
This repo contains a big data project: real-time Twitter sentiment analysis via Kafka, Spark Streaming, MongoDB, and a Django dashboard.
pedrovgs
Spark project written in Scala used to perform real-time sentiment analysis on top of Twitter's streaming API
ibm-watson-data-lab
Real-time dashboard for Twitter sentiment analysis using Spark Streaming and Watson Tone Analyzer
pran4ajith
A real-time ETL pipeline that streams Twitter data and performs sentiment analysis on it using Apache Kafka, Apache Spark, and Delta Lake.
Aghoreshwar
Customer analytics has been one of the hottest buzzwords for years. A few years back it was only the marketing department's monopoly, carried out with limited volumes of customer data stored in relational databases like Oracle or appliances like Teradata and Netezza. SAS and SPSS were the leaders in providing customer analytics, but it was restricted to segmenting customers who were likely to buy your products or services. In the 90s came web analytics, which was more popular for page hits, time on sessions, and the use of cookies for visitors, which were then used for customer analytics. By the late 2000s, Facebook, Twitter, and all the other social channels changed the way people interacted with brands and each other. Businesses needed a presence on the major social sites to stay relevant.

With the digital age, things have changed drastically. The customer is superman now. Their mobile interactions have increased substantially, and they leave a digital footprint everywhere they go. They are more informed, more connected, always on, and looking for an exceptionally simple and easy experience. This tsunami of data has changed customer analytics forever. Today customer analytics is not restricted to marketing for churn and retention; the focus is increasingly on improving the customer experience, and it is done by every department of the organization.

A lot of companies had problems integrating large bulks of customer data between various databases and warehouse systems, and they are not completely sure which key metrics to use for profiling customers. Hence, creating a customer 360-degree view became the foundation for customer analytics: it captures all customer interactions, which can then be used for further analytics. From the technology perspective, the biggest change is the introduction of big data platforms, which can run analytics very fast on all the data an organization has, instead of relying on sampling and segmentation. Then came cloud-based platforms, which can scale up and down as the analysis requires, so companies did not have to invest upfront in infrastructure. Predictive models of customer churn, retention, and cross-sell still exist today, but they run against more data than ever before. Analytics itself has evolved from descriptive to predictive to prescriptive: only showing what will happen next is no longer enough; what actions you need to take is becoming more critical.

There are various ways customer analytics is carried out:
- Acquiring all the customer data
- Understanding the customer journey
- Applying big data concepts to customer relationships
- Finding high-propensity prospects
- Upselling by identifying related products and interests
- Generating customer loyalty by discovering response patterns
- Predicting customer lifetime value (CLV)
- Identifying dissatisfied customers and churn patterns
- Applying predictive analytics
- Implementing continuous improvement

Hyper-personalization takes center stage now, giving your customer the right message, on the right platform, using the right channel, at the right time. With cognitive computing and artificial intelligence, using IBM Watson, Microsoft, and Google cognitive services, customer analytics will become sharper as their deep learning neural network algorithms provide a game-changing aspect. Tomorrow there may not be just plain customer sentiment analytics based on feedback, surveys, or social media; with the help of cognitive services, it may be what a customer's facial expressions show in real time.
There’s no doubt that customer analytics is absolutely essential for brand survival.
sherlockjjj
Real-Time Twitter Sentiment Analysis Product
desultoryhalibut
Mainstreet Analytics is a real-time data visualization dashboard. It collects data from the Twitter API, Google Trends API, and NY Times API and uses AlchemyAPI sentiment analysis to provide company-specific and overall market insights as a resource to help investors make investment decisions by "predicting the present."
astranovasky
Twitter real-time sentiment analysis using Spark Structured Streaming and Python
Real-time sentiment analysis on tweets using Tweepy and Kafka, with the output of a neural network graphed using Dash/Plotly.
data-han
Ingests the real-time Twitter API via Tweepy into Kafka, processes the stream with Apache Spark Structured Streaming and TextBlob sentiment analysis, then loads the results into a time-series database (InfluxDB) and a monitoring dashboard (Grafana)
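A minimal sketch of the scoring-and-storage steps such a pipeline might use (the URL, token, org, and bucket names are placeholder assumptions, and TextBlob stands in for the exact scoring logic in the repo):

```python
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS
from textblob import TextBlob

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

def score_and_store(text: str) -> None:
    # TextBlob polarity ranges from -1.0 (negative) to 1.0 (positive).
    polarity = TextBlob(text).sentiment.polarity
    point = Point("tweet_sentiment").field("polarity", polarity)
    write_api.write(bucket="tweets", record=point)  # Grafana reads this bucket

score_and_store("I love this new feature!")
```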
venkat-0706
Twitter sentiment analysis project using machine learning to classify tweets and understand audience mood, opinions, and behavior trends in real time.
degenspot
OnChain Sage is an AI-driven, decentralized trading assistant that fuses real-time social sentiment analysis with on-chain market data. It helps crypto traders identify high-potential tokens by scanning Twitter for trending narratives and monitoring on-chain metrics from platforms like Raydium and Dex Screener.
iojw
Real-time sentiment analysis and visualization of Twitter data
s1s1fo
TwitterMon is a module developed for the AIL framework that monitors content published on Twitter, either within a certain period of time or in real time, and additionally performs sentiment analysis and statistical analysis of the collected publications.
LorenzoAgnolucci
Lambda architecture implementation using Apache Storm, Hadoop, and HBase to perform real-time Twitter sentiment analysis
gauthamkrishna-g
Real-time graphing of Twitter trends using sentiment analysis
Implements a framework using Apache Spark Streaming, Kafka, Elasticsearch, and Kibana that performs sentiment analysis of hashtags in Twitter data in real time; for example, sentiment analysis of all the tweets for #trump or #coronavirus.
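A hedged sketch of the kind of job described, here using Spark Structured Streaming (the repo may use the older DStream API); the broker address, topic name, and the TextBlob scorer are assumptions:

```python
# Requires the spark-sql-kafka package on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, udf
from pyspark.sql.types import DoubleType
from textblob import TextBlob

spark = SparkSession.builder.appName("HashtagSentiment").getOrCreate()

# Score each tweet's text; TextBlob stands in for the framework's scorer.
polarity = udf(lambda text: TextBlob(text).sentiment.polarity, DoubleType())

tweets = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
          .option("subscribe", "tweets")                        # assumed topic
          .load()
          .selectExpr("CAST(value AS STRING) AS text"))

scored = tweets.withColumn("sentiment", polarity(col("text")))

# Console sink for the sketch; the real framework would write to Elasticsearch.
scored.writeStream.format("console").start().awaitTermination()
```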
lowellbmarinas
Applying NLP, Latent Dirichlet Allocation clustering, and sentiment analysis to real-time Twitter data via an API
amine-akrout
Sentiment analysis and visualisation of tweets, using Python, Elasticsearch, Logstash, Kibana, and Kafka in a Docker container
ziedYazidi
Big data sentiment analysis pipeline using Apache Kafka, Spark Streaming, and HBase. Demonstrates ingestion and real-time processing of Twitter streams, basic sentiment scoring, and persistent storage for querying and visualization.
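A minimal sketch of the HBase persistence step such a pipeline might end with (assumes an HBase Thrift gateway on localhost and a pre-created 'tweet_sentiment' table with a 'data' column family; the happybase client is used here for brevity):

```python
import happybase  # pip install happybase

connection = happybase.Connection("localhost")  # assumed Thrift gateway
table = connection.table("tweet_sentiment")     # assumed table name

# Row key by tweet id; store text and score for later querying.
table.put(b"tweet-1234", {
    b"data:text": b"Markets look strong today!",
    b"data:polarity": b"0.62",
})

print(dict(table.row(b"tweet-1234")))
```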
pfcurtis
A real-time Apache Spark-based Twitter sentiment analysis
jaeger-2601
Stock Sentiment Tracker is an innovative platform for monitoring current market sentiment through real-time sentiment analysis of textual information from social media sites like Twitter and Reddit.
savanidhene
One of the major projects I have worked on outside of my curriculum is a Twitter government sentiment analysis. It is not just a regular sentiment analysis of a tweet input but has many more functionalities and greater complexity. To give a brief idea of what it does: the project searches a hashtag and displays real-time tweets, the user who tweeted each one, the tweet's total retweet count, all the hashtags used in each tweet, and, most importantly, the sentiment of each tweet (whether it is positive or negative). The result shows the 200 most recent tweets from the day you want searched, taking a hashtag and date as input from the user. At the top of the result table, you get the percentages of positive and negative tweets for that hashtag.

It is a full-fledged website with an attractive frontend and a smooth backend, developed by me. I built the sentiment analysis model using the logistic regression algorithm, with sqlite3 for database management. The major libraries I needed for the machine learning part are sklearn for logistic regression, nltk for preprocessing, and tweepy for Twitter authentication and tweet handling. I used the matplotlib and seaborn libraries to visualize results while improving the accuracy of my model; the final accuracy I achieved is 98%. For the website, I used Flask as my backend framework and HTML, CSS, and JavaScript for the frontend. Using JavaScript, I added a scroll-animation effect that gives the project a more subtle and pleasing user experience.

This project can be very useful for companies wanting a quick review of what is being said about their product on social media, especially over a specific period in which they have made a significant change to their service or another aspect of their product. They can see the percentage of people who find their product or service positive or negative within seconds.
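A minimal sketch of the kind of classifier described (TF-IDF features feeding scikit-learn's LogisticRegression; the tiny inline dataset is purely illustrative and is not the project's training data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled tweets: 1 = positive, 0 = negative.
texts = ["great policy, well done", "terrible decision",
         "love this initiative", "worst announcement ever"]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["what a great move"]))  # -> [1] (positive)
```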
BerniceYeow
Abstract

Depression brings significant challenges to overall global public health. Each day, millions of people suffer from depression, and only a small fraction of them undergo proper treatment. In the past, doctors would diagnose a patient in a face-to-face session using diagnostic criteria for depression such as the DSM-5 diagnostic criteria. However, past research revealed that most patients would not seek help from doctors at the early stage of depression, which results in a decline in their mental health condition. On the other hand, many people use social media platforms to share their feelings on a daily basis. There have therefore been many studies on using social media to predict mental and physical diseases, such as studies about cardiac arrest (Bosley et al., 2013), the Zika virus (Miller, Banerjee, Muppalla, Romine, & Sheth, 2017), prescription drug abuse (Coppersmith, Dredze, Harman, Hollingshead, & Mitchell, 2015), mental health (De Choudhury, Kiciman, Dredze, Coppersmith, & Kumar, 2016), and studies particularly about depressive behavior within an individual (Kiang, Anthony, Adrian, Sophie, & Siyue, 2015). This research focuses on leveraging social media data to detect depressive thoughts among social media users. In essence, it incorporates text analysis that draws insights from written communication in order to conclude whether a tweet is related to depressive thoughts. The research produced a web application that performs real-time enhanced classification of tweets based on a domain-specific lexicon-based method, which uses an improved dictionary of depressive and non-depressive words with their associated orientations to classify depressive tweets.

Problem Understanding or Business Understanding

Depression is the main cause of disability worldwide (De Choudhury et al., 2013). Statistically, an estimated nearly 300 million people around the world suffer from depression. Shen et al. (2017) mentioned that approximately 70% of people in the early stages of depression would not consult a clinical psychologist. Many people use social media sites like Facebook and Instagram to disclose their feelings. This research pursues the hypothesis that there are similarities between the mental state of an individual and the sentiment of their tweets, and it investigates the potential of social media (like Twitter) as a data source for classifying depression among individuals.
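As a hedged illustration of lexicon-based classification of this kind (the tiny lexicon and threshold below are made-up placeholders, not the paper's improved domain-specific dictionary):

```python
# Each word carries an orientation; the summed score decides the label.
LEXICON = {"hopeless": -2, "alone": -1, "tired": -1,
           "happy": +2, "grateful": +1}

def classify_tweet(text: str, threshold: int = 0) -> str:
    score = sum(LEXICON.get(word, 0) for word in text.lower().split())
    return "depressive" if score < threshold else "non-depressive"

print(classify_tweet("I feel so hopeless and alone"))  # -> depressive
```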