Found 77 repositories (showing 30)
sayantann11
Classification - Machine Learning

This is the 'Classification' tutorial, part of the Machine Learning course offered by Simplilearn. In this tutorial we will learn about classification algorithms, the types of classification algorithms, support vector machines (SVM), Naive Bayes, Decision Trees, and the Random Forest classifier.

Objectives

Let us look at some of the objectives covered under this section of the Machine Learning tutorial:
- Define classification and list its algorithms
- Describe Logistic Regression and sigmoid probability
- Explain K-Nearest Neighbors and KNN classification
- Understand Support Vector Machines, the polynomial kernel, and the kernel trick
- Analyze kernel Support Vector Machines with an example
- Implement the Naïve Bayes classifier
- Demonstrate the Decision Tree classifier
- Describe the Random Forest classifier

Classification: Meaning

Classification is a type of supervised learning. It specifies the class to which data elements belong and is best used when the output has finite and discrete values. It predicts a class label for a given input. There are two types of classification:
- Binomial
- Multi-class

Classification: Use Cases

Some of the key areas where classification is used:
- To find whether an email received is spam or ham
- To identify customer segments
- To decide whether a bank loan should be granted
- To predict whether a student will pass or fail an examination

Classification: Example

Social media sentiment analysis has two potential outcomes, positive or negative, as displayed by the chart given below.

https://www.simplilearn.com/ice9/free_resources_article_thumb/classification-example-machine-learning.JPG

This chart shows the classification of the Iris flower dataset into its three sub-species, indicated by codes 0, 1, and 2.

https://www.simplilearn.com/ice9/free_resources_article_thumb/iris-flower-dataset-graph.JPG

The test-set dots represent the assignment of new test data points to one class or the other, based on the trained classifier model.

Types of Classification Algorithms

Let's have a quick look at the types of classification algorithms.

Linear models
- Logistic Regression
- Support Vector Machines

Nonlinear models
- K-nearest Neighbors (KNN)
- Kernel Support Vector Machines (SVM)
- Naïve Bayes
- Decision Tree Classification
- Random Forest Classification

Logistic Regression: Meaning

Logistic Regression is a regression model that is used for classification. It is widely used for binary classification problems and can also be extended to multi-class classification problems. Here, the dependent variable is categorical: y ∈ {0, 1}. A binary dependent variable can have only two values, like 0 or 1, win or lose, pass or fail, healthy or sick.

In this case, you model the probability that the output y is 1 or 0. This is called the sigmoid probability (σ): if σ(θᵀx) > 0.5, set y = 1; else set y = 0.

Unlike Linear Regression (and its Normal Equation solution), there is no closed-form solution for finding the optimal weights of Logistic Regression. Instead, you must solve it with maximum likelihood estimation, which finds the parameter values under which the observed data are most probable. It can be used to calculate the probability of a given outcome in a binary model, like the probability of being classified as sick or of passing an exam.
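To make the decision rule concrete, here is a minimal sketch (added for illustration; the hours-studied data, the 0.5 threshold, and the use of scikit-learn's LogisticRegression are assumptions, not the tutorial's own code) that fits a logistic regression and evaluates σ(θᵀx) by hand:

```python
# Minimal sketch (illustrative data): logistic regression on a toy
# pass/fail problem, with the sigmoid probability computed manually.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: hours studied -> pass (1) or fail (0).
X = np.array([[0.5], [1.0], [1.5], [2.0], [2.5], [3.0], [3.5], [4.0]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

model = LogisticRegression().fit(X, y)

# sigma(theta^T x + b) for a student who studied 2.2 hours.
theta, b = model.coef_[0], model.intercept_[0]
z = theta @ np.array([2.2]) + b
sigma = 1.0 / (1.0 + np.exp(-z))
print(f"P(pass | 2.2 hours) = {sigma:.3f}")  # matches model.predict_proba
print("predicted class:", int(sigma > 0.5))  # decision rule: sigma > 0.5
```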
https://www.simplilearn.com/ice9/free_resources_article_thumb/logistic-regression-example-graph.JPG

Sigmoid Probability

The probability in logistic regression is often represented by the sigmoid function (also called the logistic function or the S-curve):

https://www.simplilearn.com/ice9/free_resources_article_thumb/sigmoid-function-machine-learning.JPG

In this equation, t represents the data value (for example, the number of hours studied) and S(t) represents the probability of passing the exam. Assume the sigmoid function:

https://www.simplilearn.com/ice9/free_resources_article_thumb/sigmoid-probability-machine-learning.JPG

g(z) tends toward 1 as z → ∞, and g(z) tends toward 0 as z → -∞.

K-nearest Neighbors (KNN)

The K-nearest Neighbors algorithm assigns a data point to a class based on a similarity measurement. It is a supervised method for classification. The steps of the KNN algorithm are as given below:

https://www.simplilearn.com/ice9/free_resources_article_thumb/knn-distribution-graph-machine-learning.JPG

- Choose the number k and a distance metric (k = 5 is common).
- Find the k nearest neighbors of the sample you want to classify.
- Assign the class label by majority vote.

KNN Classification

A new input point is classified into the category in which it has the greatest number of neighbors. For example:

https://www.simplilearn.com/ice9/free_resources_article_thumb/knn-classification-machine-learning.JPG

- Classify a patient as high risk or low risk.
- Mark an email as spam or ham.
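The three KNN steps can be reproduced in a few lines. Below is a minimal sketch (my addition, not the tutorial's code) using scikit-learn's KNeighborsClassifier with the common k = 5 and its default Euclidean metric; the two "risk" clusters are invented for illustration:

```python
# KNN sketch (illustrative data): k = 5 neighbors, Euclidean distance,
# class label assigned by majority vote among the neighbors.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Hypothetical clusters: low-risk patients near (0, 0), high-risk near (3, 3).
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)  # 0 = low risk, 1 = high risk

knn = KNeighborsClassifier(n_neighbors=5)  # step 1: choose k and a metric
knn.fit(X, y)                              # stores the training samples

# Steps 2-3: find the 5 nearest neighbors of a new point, then majority-vote.
print(knn.predict([[2.5, 2.0]]))        # e.g. [1] -> high risk
print(knn.predict_proba([[2.5, 2.0]]))  # fraction of neighbors in each class
```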
Support Vector Machine (SVM)

Let us understand Support Vector Machines (SVMs) in detail below. SVMs are classification algorithms used to assign data to various classes. They work by detecting hyperplanes that segregate the data into classes. SVMs are very versatile and are capable of performing linear or nonlinear classification, regression, and outlier detection. Once the ideal hyperplane is discovered, new data points can be easily classified.

https://www.simplilearn.com/ice9/free_resources_article_thumb/support-vector-machines-graph-machine-learning.JPG

The optimization objective is to find the "maximum margin hyperplane": the one farthest from the closest points in the two classes (these points are called support vectors). In the given figure, the middle line represents the hyperplane.

SVM Example

Let's look at the image below to get a general idea of SVM. Hyperplanes with larger margins have lower generalization error. The positive and negative hyperplanes are represented by:

https://www.simplilearn.com/ice9/free_resources_article_thumb/positive-negative-hyperplanes-machine-learning.JPG

Classification of any new input sample x_test:
- If w₀ + wᵀx_test > 1, the sample x_test lies in the class to the right of the positive hyperplane.
- If w₀ + wᵀx_test < -1, the sample x_test lies in the class to the left of the negative hyperplane.

When you subtract the two hyperplane equations, you get:

https://www.simplilearn.com/ice9/free_resources_article_thumb/equation-subtraction-machine-learning.JPG

The length of the vector w (its L2 norm) is:

https://www.simplilearn.com/ice9/free_resources_article_thumb/length-of-vector-machine-learning.JPG

You normalize by the length of w to arrive at:

https://www.simplilearn.com/ice9/free_resources_article_thumb/normalize-equation-machine-learning.JPG

SVM: Hard Margin Classification

Given below are some points to understand hard margin classification. The left side of equation SVM-1 above can be interpreted as the distance between the positive and negative hyperplanes; in other words, it is the margin, which is what we want to maximize. Hence the objective is to maximize the margin subject to the constraint that the samples are classified correctly, which is represented as:

https://www.simplilearn.com/ice9/free_resources_article_thumb/hard-margin-classification-machine-learning.JPG

This means that you are minimizing ‖w‖, and that all positive samples lie on one side of the positive hyperplane while all negative samples lie on the other side of the negative hyperplane. This can be written concisely as:

https://www.simplilearn.com/ice9/free_resources_article_thumb/hard-margin-classification-formula.JPG

Minimizing ‖w‖ is the same as minimizing ½‖w‖², and the latter form is preferred because it is differentiable even at w = 0. The approach listed above is called the "hard margin linear SVM classifier."

SVM: Soft Margin Classification

Given below are some points to understand soft margin classification. To relax the linear constraints for nonlinearly separable data, a slack variable ξ(i) is introduced; it measures how much the ith instance is allowed to violate the margin. The slack variable is simply added to the linear constraints:

https://www.simplilearn.com/ice9/free_resources_article_thumb/soft-margin-calculation-machine-learning.JPG

Subject to the above constraints, the new objective to be minimized becomes:

https://www.simplilearn.com/ice9/free_resources_article_thumb/soft-margin-calculation-formula.JPG

You now have two conflicting objectives: minimizing the slack variables to reduce margin violations, and minimizing ‖w‖ to increase the margin. The hyperparameter C lets you define this trade-off. Large values of C correspond to larger error penalties (and thus smaller margins), whereas smaller values of C allow for more misclassification errors and larger margins.

SVM: Regularization

The parameter C acts as the reverse of regularization strength: higher C means less regularization, which lowers the bias and raises the variance (risking overfitting).

https://www.simplilearn.com/ice9/free_resources_article_thumb/concept-of-c-graph-machine-learning.JPG

IRIS Data Set

The Iris dataset contains measurements of 150 iris flowers from three different species:
- Setosa
- Versicolor
- Virginica

Each row represents one sample, and the flower measurements in centimeters are stored as columns. These are called features.

IRIS Data Set: SVM

Let's train an SVM model using scikit-learn for the Iris dataset:

https://www.simplilearn.com/ice9/free_resources_article_thumb/svm-model-graph-machine-learning.JPG

Nonlinear SVM Classification

There are two ways to solve nonlinear SVMs:
- by adding polynomial features
- by adding similarity features

Polynomial features can be added to a dataset; in some cases, this creates a linearly separable dataset.

https://www.simplilearn.com/ice9/free_resources_article_thumb/nonlinear-classification-svm-machine-learning.JPG

In the figure on the left there is only one feature, x₁, and the dataset is not linearly separable. If you add x₂ = x₁² (figure on the right), the data becomes linearly separable.

Polynomial Kernel

In scikit-learn, you can use a Pipeline class to create polynomial features. Classification results for the Moons dataset are shown in the figure.

https://www.simplilearn.com/ice9/free_resources_article_thumb/polynomial-kernel-machine-learning.JPG
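Putting the last few sections together, here is a minimal sketch (added for illustration; the C values, degree=3, and other hyperparameters are assumptions rather than the tutorial's screenshots) of a soft-margin SVM on Iris and a polynomial-feature Pipeline for the Moons dataset:

```python
# Illustrative sketch: soft-margin SVM on Iris (C controls the trade-off
# between margin width and violations) and polynomial features for Moons.
from sklearn.datasets import load_iris, make_moons
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.svm import LinearSVC

# Soft margin on Iris: small C -> wide margin, large C -> fewer violations.
X, y = load_iris(return_X_y=True)
for C in (0.01, 1.0, 100.0):
    svm = Pipeline([("scale", StandardScaler()),
                    ("svc", LinearSVC(C=C, max_iter=10_000))])
    print(f"C={C}: training accuracy {svm.fit(X, y).score(X, y):.3f}")

# Polynomial features + linear SVM for the nonlinear Moons dataset.
Xm, ym = make_moons(n_samples=200, noise=0.15, random_state=42)
poly_svm = Pipeline([("poly", PolynomialFeatures(degree=3)),
                     ("scale", StandardScaler()),
                     ("svc", LinearSVC(C=10.0, max_iter=10_000))])
print("Moons training accuracy:", poly_svm.fit(Xm, ym).score(Xm, ym))
```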
Polynomial Kernel with Kernel Trick

Let us look at the image below and understand the kernel trick in detail.

https://www.simplilearn.com/ice9/free_resources_article_thumb/polynomial-kernel-with-kernel-trick.JPG

For high-dimensional datasets, adding too many polynomial features can slow the model down. You can apply the kernel trick to get the effect of polynomial features without actually adding them. The code shown below (using the SVC class) trains an SVM classifier with a 3rd-degree polynomial kernel via the kernel trick.

https://www.simplilearn.com/ice9/free_resources_article_thumb/polynomial-kernel-equation-machine-learning.JPG

The hyperparameter coef0 controls the influence of high-degree polynomials.

Kernel SVM

Kernel SVMs are used for the classification of nonlinear data. In the chart, nonlinear data is projected into a higher-dimensional space via a mapping function, where it becomes linearly separable. In the higher dimension, a linear separating hyperplane can be derived and used for classification; projecting it back to the original feature space yields a nonlinear decision boundary.

https://www.simplilearn.com/ice9/free_resources_article_thumb/kernel-svm-machine-learning.JPG

As mentioned previously, SVMs can be kernelized to solve nonlinear classification problems. You can create a sample dataset for the XOR gate (a nonlinear problem) with NumPy: 100 samples are assigned the class label 1, and 100 samples the class label -1.

https://www.simplilearn.com/ice9/free_resources_article_thumb/kernel-svm-graph-machine-learning.JPG

As you can see, this data is not linearly separable.

https://www.simplilearn.com/ice9/free_resources_article_thumb/kernel-svm-non-separable.JPG

You now use the kernel trick to classify the XOR dataset created earlier.

https://www.simplilearn.com/ice9/free_resources_article_thumb/kernel-svm-xor-machine-learning.JPG
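Since the tutorial's XOR code appears only as screenshots, here is a minimal reconstruction sketch (the 100-samples-per-class setup comes from the text; the RBF kernel parameters and the comparison polynomial kernel are my assumptions):

```python
# XOR kernel-SVM sketch (reconstruction; gamma and C values are assumed).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
# XOR labels: +1 when the two features have opposite signs, else -1.
y = np.where(np.logical_xor(X[:, 0] > 0, X[:, 1] > 0), 1, -1)

# The kernel trick separates XOR without explicitly constructing
# higher-dimensional features.
rbf_svm = SVC(kernel="rbf", gamma=0.5, C=10.0).fit(X, y)
print("RBF kernel training accuracy:", rbf_svm.score(X, y))

# The 3rd-degree polynomial kernel (with coef0) mentioned earlier:
poly_svm = SVC(kernel="poly", degree=3, coef0=1.0, C=5.0).fit(X, y)
print("polynomial kernel training accuracy:", poly_svm.score(X, y))
```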
Naïve Bayes Classifier

What is a Naive Bayes classifier? Have you ever wondered how your mail provider implements spam filtering, how online news channels perform news text classification, or how companies perform sentiment analysis of their audience on social media? All of this and more is done through a machine learning algorithm called the Naive Bayes classifier.

Naive Bayes

The method is named after Thomas Bayes, who first described the underlying theorem in Western literature in the 1700s. The Naive Bayes classifier works on the principle of conditional probability, as given by Bayes' theorem.

Advantages of Naive Bayes Classifier

Listed below are six benefits of the Naive Bayes classifier:
- Very simple and easy to implement
- Needs less training data
- Handles both continuous and discrete data
- Highly scalable with the number of predictors and data points
- Fast, so it can be used for real-time predictions
- Not sensitive to irrelevant features

Bayes Theorem

According to the Bayes model, the conditional probability P(Y|X) can be calculated as:

P(Y|X) = P(X|Y) P(Y) / P(X)

Used directly, this means estimating a very large number of P(X|Y) probabilities. For example, for a Boolean Y and 30 Boolean attributes in the X vector, you would have to estimate on the order of 2 billion probabilities P(X|Y). To make it practical, the Naïve Bayes classifier assumes that the attributes of X are conditionally independent of each other given the value of Y. This reduces the number of probability estimates to 2*30 = 60 in the above example.

Naïve Bayes Classifier for SMS Spam Detection

Consider a labeled SMS database containing 5,574 messages, such as those shown below:

https://www.simplilearn.com/ice9/free_resources_article_thumb/naive-bayes-spam-machine-learning.JPG

Each message in the dataset is marked as spam or ham. Let's train a model with the Naïve Bayes algorithm to detect spam. The message lengths and their frequencies in the training dataset are shown below:

https://www.simplilearn.com/ice9/free_resources_article_thumb/naive-bayes-spam-spam-detection.JPG

The logic used to train the spam detector:
- Split each message into individual words/tokens (bag of words).
- Lemmatize the data (reduce each word to its base form, e.g., "walking" and "walked" become "walk").
- Convert the data to vectors using the scikit-learn CountVectorizer.
- Apply TF-IDF to down-weight common words like "is," "are," and "and."
- Apply the scikit-learn MultinomialNB Naïve Bayes model to obtain the spam detector.

This spam detector can then be used to classify a new message as spam or ham. Next, the accuracy of the spam detector is checked using the confusion matrix. For the SMS spam example above, the confusion matrix is shown on the right:

Accuracy rate = correct / total = (4827 + 592) / 5574 = 97.22%
Error rate = wrong / total = (155 + 0) / 5574 = 2.78%

https://www.simplilearn.com/ice9/free_resources_article_thumb/confusion-matrix-machine-learning.JPG

Although the confusion matrix is useful, precision and recall provide more focused metrics:

https://www.simplilearn.com/ice9/free_resources_article_thumb/precision-recall-matrix-machine-learning.JPG

Precision refers to the accuracy of positive predictions:

https://www.simplilearn.com/ice9/free_resources_article_thumb/precision-formula-machine-learning.JPG

Recall refers to the ratio of positive instances that are correctly detected by the classifier (also known as the true positive rate, or TPR):

https://www.simplilearn.com/ice9/free_resources_article_thumb/recall-formula-machine-learning.JPG

Precision/Recall Trade-off

To detect age-appropriate videos for kids, you need high precision (low recall is acceptable) to ensure that only safe videos make the cut, even though a few safe videos may be left out. High recall (with low precision acceptable) is needed for store surveillance to catch shoplifters: a few false alarms are acceptable, but all shoplifters must be caught.
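The five training steps above map almost one-to-one onto a scikit-learn Pipeline. Below is a minimal sketch (my illustration; the tiny inline corpus stands in for the 5,574-message SMS dataset, and the lemmatization step is skipped for brevity):

```python
# Spam-detector pipeline sketch (illustrative corpus; the tutorial uses a
# 5,574-message SMS dataset and adds a lemmatization step before vectorizing).
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

messages = ["win a free prize now", "free entry in a weekly draw",
            "are we meeting for lunch", "see you at home tonight"]
labels = ["spam", "spam", "ham", "ham"]

spam_detector = Pipeline([
    ("bow", CountVectorizer()),     # bag of words
    ("tfidf", TfidfTransformer()),  # down-weight common words
    ("nb", MultinomialNB()),        # Naive Bayes classifier
])
spam_detector.fit(messages, labels)

print(spam_detector.predict(["free prize draw"]))  # expected: ['spam']
print(spam_detector.predict(["lunch at home"]))    # expected: ['ham']
```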
Decision Tree Classifier

Some aspects of the Decision Tree classifier are mentioned below:
- Decision Trees (DT) can be used both for classification and regression.
- An advantage of decision trees is that they require very little data preparation; they do not require feature scaling or centering at all.
- They are also the fundamental components of Random Forests, one of the most powerful ML algorithms.
- Unlike Random Forests and Neural Networks (which do black-box modeling), Decision Trees are white-box models, meaning their inner workings can be clearly understood.

In the case of classification, the data is segregated based on a series of questions, and any new data point is assigned to the leaf node it reaches.

https://www.simplilearn.com/ice9/free_resources_article_thumb/decision-tree-classifier-machine-learning.JPG

Start at the tree root and split the data on the feature that results in the largest information gain (IG). This splitting procedure is repeated iteratively at each child node until the leaves are pure, meaning that the samples at each leaf all belong to the same class. In practice, you can set a limit on the depth of the tree to prevent overfitting; purity is compromised in that case, as the final leaves may retain some impurity. The figure shows the classification of the Iris dataset.

https://www.simplilearn.com/ice9/free_resources_article_thumb/decision-tree-classifier-graph.JPG

IRIS Decision Tree

Let's build a Decision Tree using scikit-learn for the Iris flower dataset and visualize it using the export_graphviz API.

https://www.simplilearn.com/ice9/free_resources_article_thumb/iris-decision-tree-machine-learning.JPG

The output of export_graphviz can be converted into PNG format:

https://www.simplilearn.com/ice9/free_resources_article_thumb/iris-decision-tree-output.JPG

- The samples attribute gives the number of training instances the node applies to.
- The value attribute gives the number of training instances of each class the node applies to.
- Gini impurity measures the node's impurity: a node is "pure" (gini = 0) if all the training instances it applies to belong to the same class.

https://www.simplilearn.com/ice9/free_resources_article_thumb/impurity-formula-machine-learning.JPG

For example, for the Versicolor (green) node, the Gini impurity is 1 - (0/54)² - (49/54)² - (5/54)² ≈ 0.168.

https://www.simplilearn.com/ice9/free_resources_article_thumb/iris-decision-tree-sample.JPG

Decision Boundaries

Let us see how the decision boundaries arise:
- For the first node (depth 0), the solid line splits the data (Iris-Setosa on the left).
- Gini is 0 for the Setosa node, so no further split is possible there.
- The second node (depth 1) splits the data into Versicolor and Virginica.
- If max_depth were set to 3, a third split would happen (vertical dotted line).

https://www.simplilearn.com/ice9/free_resources_article_thumb/decision-tree-boundaries.JPG

For a sample with petal length 5 cm and petal width 1.5 cm, the tree traverses to the depth-2 left node, so the probability predictions for this sample are 0% for Iris-Setosa (0/54), 90.7% for Iris-Versicolor (49/54), and 9.3% for Iris-Virginica (5/54).

CART Training Algorithm

Scikit-learn uses the Classification and Regression Trees (CART) algorithm to train Decision Trees. The CART algorithm splits the data into two subsets using a single feature k and a threshold t_k (for example, petal length ≤ 2.45 cm); this is done recursively for each node. k and t_k are chosen such that they produce the purest subsets (weighted by their size). The objective is to minimize the cost function given below:

https://www.simplilearn.com/ice9/free_resources_article_thumb/cart-training-algorithm-machine-learning.JPG

The algorithm stops when one of the following occurs:
- max_depth is reached
- no further split can be found for a node

Other hyperparameters may be used to stop the tree's growth:
- min_samples_split
- min_samples_leaf
- min_weight_fraction_leaf
- max_leaf_nodes
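Here is a minimal sketch of what the Iris decision-tree code in the screenshots above plausibly looks like (a reconstruction: max_depth=2 matches the depth-2 tree analyzed in the text, while the feature choice and output file name are assumptions):

```python
# Reconstruction sketch: Iris decision tree visualized with export_graphviz.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_graphviz

iris = load_iris()
X = iris.data[:, 2:]  # petal length and width, as in the boundary discussion
y = iris.target

tree = DecisionTreeClassifier(max_depth=2, random_state=42).fit(X, y)

export_graphviz(tree, out_file="iris_tree.dot",  # convert to PNG with Graphviz
                feature_names=["petal length (cm)", "petal width (cm)"],
                class_names=iris.target_names, rounded=True, filled=True)

# Class probabilities for the petal length 5 cm / width 1.5 cm example:
print(tree.predict_proba([[5.0, 1.5]]))  # approx. [[0.0, 0.907, 0.093]]
```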
Gini Impurity or Entropy

Entropy is another measure of impurity and can be used in place of Gini.

https://www.simplilearn.com/ice9/free_resources_article_thumb/gini-impurity-entrophy.JPG

Entropy is a degree of uncertainty, and information gain is the reduction in entropy that occurs as one traverses down the tree. Entropy is zero for a DT node when the node contains instances of only one class. The entropy for the depth-2 left node in the example above is:

https://www.simplilearn.com/ice9/free_resources_article_thumb/entrophy-for-depth-2.JPG

Gini and entropy both lead to similar trees.

DT: Regularization

The following figure shows two decision trees trained on the Moons dataset.

https://www.simplilearn.com/ice9/free_resources_article_thumb/dt-regularization-machine-learning.JPG

The decision tree on the right is restricted by min_samples_leaf = 4. The model on the left is overfitting, while the model on the right generalizes better.

Random Forest Classifier

A random forest can be considered an ensemble of decision trees (ensemble learning). The Random Forest algorithm (a brief code sketch follows the conclusion below):
1. Draw a random bootstrap sample of size n (randomly choose n samples from the training set, with replacement).
2. Grow a decision tree from the bootstrap sample. At each node, randomly select d features and split the node using the feature that provides the best split according to the objective function, for instance by maximizing information gain.
3. Repeat steps 1 to 2 k times (k is the number of trees you want to create, each grown from a subset of samples).
4. Aggregate the prediction of each tree for a new data point and assign the class label by majority vote (pick the class selected by the largest number of trees and assign the new data point to it).

Random Forests are opaque, which means it is difficult to visualize their inner workings.

https://www.simplilearn.com/ice9/free_resources_article_thumb/random-forest-classifier-graph.JPG

However, the advantages outweigh this limitation, since you do not have to worry about hyperparameters except k, the number of decision trees to be created. Random Forests are quite robust to noise from the individual decision trees, so there is no need to prune the individual trees. The larger the number of decision trees, the more accurate the Random Forest prediction is (this, however, comes at a higher computational cost).

Key Takeaways

Let us quickly run through what we have learned in this Classification tutorial:
- Classification algorithms are supervised learning methods that split data into classes. They can work on linear as well as nonlinear data.
- Logistic Regression classifies data using weighted parameters and a sigmoid conversion to calculate class probabilities.
- The K-nearest Neighbors (KNN) algorithm uses feature similarity to classify data.
- Support Vector Machines (SVMs) classify data by detecting the maximum-margin hyperplane between the data classes.
- Naïve Bayes, a simplified Bayes model, can help classify data using conditional probability models.
- Decision Trees are powerful classifiers that apply tree-splitting logic until pure (or nearly pure) leaf-node classes are attained.
- Random Forests apply ensemble learning to Decision Trees for more accurate classification predictions.

Conclusion

This completes the 'Classification' tutorial. In the next tutorial, we will learn 'Unsupervised Learning with Clustering.'
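Appendix to the tutorial above: the Random Forest code sketch referenced in the Random Forest section (my addition, not the tutorial's own code). Here n_estimators plays the role of k and max_features the role of d; all hyperparameter values are illustrative assumptions.

```python
# Random Forest sketch: each tree is grown on a bootstrap sample, and the
# final class label is assigned by majority vote across the k trees.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

rf = RandomForestClassifier(n_estimators=100,     # k: number of trees
                            max_features="sqrt",  # d: features per split
                            random_state=42)
rf.fit(X_train, y_train)
print("test accuracy:", rf.score(X_test, y_test))
```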
chankwpj
Automatic Analysis of Music Performance Style

One fundamental problem in computational music is the analysis and modeling of performance style. Last year's successful CUROP project revealed, through perceptual experiments, that players' control over rhythm is the strongest factor in the perceived quality of a performance (already a publishable result). This year's project will therefore investigate the computer analysis of the rhythmic component of performances in more detail, with the following aims:
- Implement and improve upon state-of-the-art beat detection methods.
- Carry out statistical analysis of rhythmic variation on a corpus of performances:
  - Train a classifier to distinguish professional from amateur performances.
  - Investigate to what extent rhythmic variations are deliberately controlled as opposed to random.
- Devise rhythmic style signatures of various performers for style recognition and retrieval.
- Investigate operations on rhythmic styles, e.g. applying Rachmaninoff's style to one's own amateur recording.

Solving the above problems is paramount to our understanding of what makes a good performance and of what, quantitatively, the differences between professional musicians' styles are. Applications include musicology, teaching, automatic performance of music, and high-level editing of music. This project requires the integration of data mining, machine learning, and digital signal processing techniques, which are closely aligned with the expertise of the two supervisors, Dr Kirill Sidorov and Dr Andrew Jones, who are also experienced musicians.

Via this project, the student will learn a variety of digital signal processing and machine learning techniques and will develop enhanced MATLAB programming skills that are increasingly in demand for graduates. The student will work in our lab, with state-of-the-art facilities (powerful audio workstation, digital piano, audio gear). We will work collaboratively to ensure successful completion, including daily 30-minute meetings and longer weekly review meetings. The student will participate in the recently established Computational Music research sub-group. This project will contribute to the longer-term development of this sub-group and foster new research avenues.

Project Start/End Dates: Any 8-week period from 13th June 2016 to 19th September 2016.
Contact/Supervisors: Kirill Sidorov, Andrew Jones
Dinovative is looking for a Back-end NodeJS Developer who is excited to work alongside a talented group of innovators to craft digital products. Being part of the conversation from the start, you will be expected to lead full-lifecycle web-based projects, including guiding technical scoping, design, and implementation. Collaborating with UX & UI designers, researchers, and other engineers (web & mobile), you will have the ability to flex your full-stack chops to help shape, design, and build digital products that solve key business needs. We want people who love being involved in challenging and innovative work.

Location
63A Nam Ky Khoi Nghia, Ben Thanh Ward, District 1

Salary Expectation
Negotiable

Requirements
- Minimum 2-3 years of working experience with NodeJS
- Solid understanding of front-end technologies such as JavaScript, HTML5, and CSS3
- Strong experience with RESTful APIs and WebSocket APIs
- Experience working with database systems such as MySQL, MongoDB, and other NoSQL stores
- Solid understanding of object-oriented programming
- Understanding of the fundamental design principles behind scalable applications and microservice architecture
- Experience in front-end development (ReactJS) is a big advantage
- Strong analytical and problem-solving abilities
- Strong communication and client-facing skills

Skills
- JavaScript
- NodeJS
- PostgreSQL
- Setting up servers (nginx etc.) and automating the deployment process (Docker, Ansible, Chef, etc.)

Ideally
- Experience setting up servers (nginx etc.) and automating the deployment process (Docker, Ansible, Chef, etc.)
- Experience with ES6 features, the MEAN stack, and other frameworks such as ExpressJS and LoopBack

Responsibilities
- Design, build, and maintain efficient, reusable, and reliable NodeJS code
- Integrate data storage solutions
- Help maintain code quality, organization, and automation
- Constantly learn and keep abreast of emerging technologies
- Contribute to software design processes, including whiteboarding sessions, workshops, and prototyping
- Critique software designs and architectures
- Peer-review colleagues' code and identify areas for improvement
- Provide development task effort estimates
- Conduct client requirements gathering and analysis
- Review test plans
- Follow defined development best practices, including commenting and documenting code, contributing to development wikis, and using source control

Why it would be awesome to work with us
Come and work with us. You will have the opportunity to learn new technologies to international standards, build products that are different and innovative, and receive various benefits:
- Working in a dynamic, young, friendly, flexible environment
- Laptop provided
- 12+ days of annual leave; working Monday to Friday, with flexible working time
- Attractive salary based on skills and experience
- Salary review: once a year, based on individual performance review
- Performance bonus: twice a year (every 6 months), based on individual performance, profit, and company policy
- Gift occasions: 8/3; 1/6 (for children); Mid-Autumn Festival; 20/10; Lunar New Year; weddings
- Free fruits/snacks during happy hours
- Birthday celebrations
- Party time: team building (frequent) and Xmas
- Healthcare, unemployment insurance, and sick leave based on the current relevant laws
- Great opportunity for career development
- Frequent training on new technologies, with a path to becoming a full-stack developer
- Interesting engineering projects: we use international standards in building systems, such as agile, test-driven development, continuous integration, and continuous deployment
- Various projects with many new technologies applied

Contact
Email: oanh.tang@dinovative.com
Phone: 0909 617 173
Skype: tangoanh
FB: https://www.facebook.com/oanhtang195
Welcome to the Data Analysis Fundamentals & Problem Solving repository! This collection of assignments and projects is designed to enhance your skills in data analysis, statistics, and Python programming. The repository covers various aspects of data analysis, from basic statistical measures to complex data-driven modules.
17ucs126krish
Practical Python exercises covering data structures, loops, conditionals, functions, list comprehensions, and problem-solving fundamentals for data analysis.
hassan-fayyaz
Introduction to data analytics and data science using R programming. This repository covers the fundamentals of coding to solve data problems and automate analysis and reporting.
Swatikhedekar
This repository contains my solutions to all 8 case studies from Danny Ma’s 8 Week SQL Challenge. Each case study includes well-structured SQL queries, clear problem breakdowns, and optimized solutions demonstrating strong SQL fundamentals, data analysis, and business problem-solving skills.
shrutirabara
This is a collection of Python scripts, which I created using Jupyter Notebook, where I practiced solving typical bioinformatics problems as a part of my Bioinformatics (2111) course! I learned the basics of using the UNIX environment and fundamentals of python in the context of real-world bioinformatics data analysis problems!
Sree-ai-sketch
This winter-based project focuses on the practical implementation and analysis of fundamental Data Structures and Algorithms (DSA). The project aims to strengthen algorithmic thinking and problem-solving skills by applying theoretical concepts to structured program modules.
PlanetDestroyyer
Developed a movie recommendation system using Python and machine learning. Built and trained a model to suggest relevant movies based on user preferences or movie attributes. Utilized data analysis, feature engineering, and UI design skills. Project showcases proficiency in machine learning fundamentals, Python programming, and problem-solving.
Coder-Stark
Welcome to my GitHub repository dedicated to solving the 450 Data Structures and Algorithms (DSA) questions curated by Love Babbar. This repository showcases my journey of tackling these fundamental DSA problems and providing detailed solutions along with their time and space complexity analysis.
Kadiri-Mohamed
A collection of JavaScript programming exercises focused on data manipulation and analysis using an employee dataset. The project includes 60+ challenges across three difficulty levels (Easy, Medium, Advanced) covering fundamental concepts like loops, conditionals, arrays, and objects. Designed to improve problem-solving skills without using advanc
prettyquail
About this Course Kickstart your learning of Python for data science, as well as programming in general, with this beginner-friendly introduction to Python. Python is one of the world’s most popular programming languages, and there has never been greater demand for professionals with the ability to apply Python fundamentals to drive business solutions across industries. This course will take you from zero to programming in Python in a matter of hours—no prior programming experience necessary! You will learn Python fundamentals, including data structures and data analysis, complete hands-on exercises throughout the course modules, and create a final project to demonstrate your new skills. By the end of this course, you’ll feel comfortable creating basic programs, working with data, and solving real-world problems in Python. You’ll gain a strong foundation for more advanced learning in the field, and develop skills to help advance your career. This course can be applied to multiple Specialization or Professional Certificate programs. Completing this course will count towards your learning in any of the following programs: IBM Applied AI Professional Certificate Applied Data Science Specialization IBM Data Science Professional Certificate Upon completion of any of the above programs, in addition to earning a Specialization completion certificate from Coursera, you’ll also receive a digital badge from IBM recognizing your expertise in the field.
Janani-Sankarasubramanian
What if we were to fundamentally transform the way hedge funds operate? If we could find a way for hedge funds to invest in equities that are less risky but still provide a higher return, we would be helping investors eliminate part or all of their risk. Our project aims at solving this problem by performing a risk analysis on several equities using various factors used to measure the financial health of an equity. We do so by analyzing the P/E ratio, P/E growth ratio, EPS for several past quarters, market capitalization, Sharpe ratio, and market sentiment through Twitter. All the ratios are calculated for the adjusted closing price of an equity. We use a Morningstar fundamental data field called normalized_basic_eps, since it is a more accurate representation of a company's recent quarterly earnings. The normalized EPS excludes one-time and unusual expenses and acts as a measure of a company's true earnings. By relying on the accuracy of the factors used to measure a company's fundamentals, we perform due diligence for the investor, reducing a part of the risk.
Loknath816
The content will be Python fundamentals, data analysis and database connectivity, DSA, and problem solving.
TechTinkererShivam
This repository contains fundamentals of data structures and algorithms, along with examples, problem-solving approaches, and performance analysis.
Likhitha-shivappa
This repo was created for Python training by Samsung; it contains Python fundamentals, DSA, data analysis, and problem solving.
omairchaudhary
Python to Machine Learning learning journey. Covers Python fundamentals, problem-solving, data structures, data analysis, and machine learning projects with consistent GitHub updates.
Nagmun-Onu
A solid foundation module for understanding the fundamentals of data structure and algorithms, including their design, analysis, and implementation for problem-solving.
asmiszha
Python-based data analysis projects developed during the Global Career Accelerator Program, focusing on real-world datasets, coding fundamentals, and analytical problem-solving.
AdityaMane231105
A structured implementation of core Data Structures in C, C++, and Java, demonstrating strong fundamentals in algorithm design, problem-solving and complexity analysis.
anuragKumarCB
Python practice repo for data analysis and problem-solving. Focused on refining fundamentals, improving efficiency, and building stronger analytical workflows through consistent practice and iteration.
saritalavania93
This assignment demonstrates my understanding of Python fundamentals such as data types, control flow, functions, and data structures. By solving practical problems using lists, dictionaries, loops, and conditions, I strengthened my problem-solving skills and built a strong foundation for data analysis
nehallashkar
A collection of hands-on data science lab projects covering Python fundamentals, data analysis, machine learning, and real-world problem solving from basics to advanced AI-driven applications.
wongwingyinrenee-afk
This repository collects my Java programming exercises focusing on data structures, basic algorithms, and OOP concepts—fundamental skills for efficient data analysis and problem solving.
lucanudo
This repository contains the final project for the "Fundamentals of Data Science" course, applying data analysis, modeling, and visualization techniques to solve a real-world problem.
Harshit2493
A collection of core projects showcasing fundamental concepts, hands-on problem-solving, and practical applications across civil engineering, data analysis, and computational methods.
shakhawathossain07
CSE 225 Data Structures course: Learn fundamental data techniques - ADTs, stacks, queues, trees, graphs; emphasize domain analysis, Big-O for efficient problem-solving. Prerequisite: CSE 215.
Ratan573
Completed Data Science course from Saylor Academy, gaining practical knowledge in data analysis, visualization, and machine learning fundamentals. Developed strong analytical and problem-solving skills through hands-on projects and real-world data applications.
vishipayyallore
A structured, implementation-focused journey through Data Structures and Algorithms using Python. Covers fundamentals, complexity analysis, core data structures, recursion, searching, sorting, and problem-solving patterns for interviews and real-world engineering.