Found 228 repositories (showing 30)
coqui-ai
🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
alirezadir
A guideline for building practical, production-level deep learning systems to be deployed in real-world applications.
ahkarami
In this repository, I will share some useful notes and references about deploying deep learning-based models in production.
The-AI-Summer
Build, train, deploy, scale and maintain deep learning models. Understand ML infrastructure and MLOps using hands-on examples.
NVIDIA-Merlin
NVIDIA Merlin is an open source library providing end-to-end GPU-accelerated recommender systems, from feature engineering and preprocessing to training deep learning models and running inference in production.
ritchieng
Open-source guides and code for everything from mastering deep learning to deploying it in production, in PyTorch, Python, Apptainer, and more.
Neuraxio
The world's cleanest AutoML library ✨ - Do hyperparameter tuning with the right pipeline abstractions to write clean deep learning production pipelines. Let your pipeline steps have hyperparameter spaces. Design steps in your pipeline like components. Compatible with Scikit-Learn, TensorFlow, and most other libraries, frameworks and MLOps environments.
SynaLinks
From idea to production in just a few lines: Graph-Based Programmable Neuro-Symbolic LM Framework - a production-first LM framework built on decade-old deep learning best practices
Genius-apple
RusTorch is a production-grade deep learning framework re-imagined in Rust. It combines the usability you love from PyTorch with the performance, safety, and concurrency guarantees of Rust. Say goodbye to GIL locks, GC pauses, and runtime errors. Say hello to RusTorch.
Ryan-Ray-Martin
This MLOps project productionizes a Deep Reinforcement Learning agent with a scalable, distributed data streaming infrastructure using Kafka and Ray. A thorough walkthrough of the code is described in this article on medium: https://ryanraymartin.medium.com/deep-reinforcement-learning-for-stock-trading-with-kafka-and-rllib-d738b9634675
Many companies are utilizing the cloud for their day-to-day activities, and big cloud service providers like AWS and Microsoft Azure have been successfully serving their growing customer bases. Understanding the characteristics of the production virtual machine (VM) workloads of large cloud providers can inform the providers' resource-management systems, e.g. the VM scheduler, power manager, and server health manager. In this project we analyse Microsoft Azure's VM CPU-utilization dataset released in October 2017. We predict VM workload from CPU-usage statistics in the Azure dataset, such as the minimum, maximum, and average, using several deep learning techniques that take the history of the workload into account. Using real VM traces, we show that prediction-informed schedules increase utilization and prevent physical-resource exhaustion. We conclude that cloud service providers can use their workloads' characteristics together with machine learning techniques to greatly enhance resource management.
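The project's actual model is not specified in the blurb; as an illustrative sketch only, predicting the next average-CPU reading of a VM from a sliding window of its history can be done with a linear autoregressive model fit by stochastic gradient descent (all values and hyperparameters below are toy assumptions):

```python
# Illustrative sketch, NOT the project's model: linear autoregression
# over a sliding window of past CPU readings, trained with SGD.
def fit_ar(series, window=3, lr=0.1, epochs=3000):
    # Build (window -> next value) training pairs from the trace.
    pairs = [(series[i:i + window], series[i + window])
             for i in range(len(series) - window)]
    w, b = [0.0] * window, 0.0
    for _ in range(epochs):
        for x, y in pairs:
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            # One SGD step on the squared error.
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, history):
    return sum(wi * xi for wi, xi in zip(w, history)) + b

# Toy utilization trace as CPU fractions (the real dataset reports
# min/max/avg per VM); this pattern repeats every three steps.
trace = [0.2, 0.4, 0.6] * 5
w, b = fit_ar(trace)
next_load = predict(w, b, trace[-3:])  # history [0.2, 0.4, 0.6]
```

A real deployment would replace the linear model with the deep networks the abstract mentions, but the windowed history-to-next-value framing is the same.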
AKAGIwyf
In recent years, UAVs have begun to appear in all aspects of production and daily life, and are widely used in aerial photography, monitoring, security, disaster relief, and other fields. For example, UAV tracking can support urban security: automatically cruising to find suspects and assisting intelligent security management. However, early practical applications of UAVs mostly relied on human remote control or intervention, with little automation. The degree to which UAVs can be automated is one of the decisive factors in whether they can play a bigger role in the future, and with the growing demand for automation, target tracking based on computer vision has become a research hotspot. Some companies in China and abroad, such as DJI, have successfully equipped UAVs with target tracking, but these technologies exist only in papers and product descriptions; the specific implementations have not been written up or open-sourced. We therefore plan to complete this project ourselves and open-source it on GitHub. Traditional visual tracking has many advantages, such as strong autonomy, a wide measurement range, and access to a large amount of environmental information, but it also has many disadvantages. It requires a powerful hardware system: accurate navigation information demands a high-resolution camera and a powerful processor, and the huge data volumes involved from image acquisition to processing increase the cost of UAV tracking. Moreover, the reliability of traditional visual navigation and tracking is poor, and it is difficult for a UAV to work in complex lighting and obstacle scenes. We therefore plan to use deep learning for target tracking in this project.
We will train our own model with a deep learning algorithm (we have not yet decided which network architecture to use), then move the trained model to an embedded development board fixed on the UAV, read images from the camera, and process the data so that the UAV can recognize and track the target objects. We will use an NVIDIA Jetson TX2 development board, install ROS on Linux, establish communication with the Pixhawk, and control UAV flight with a PID algorithm.
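The PID flight-control idea mentioned above can be sketched in a few lines; the gains, setpoint, and the toy "plant" below are illustrative assumptions, not values from this project:

```python
# Hedged sketch of a PID controller of the kind described above.
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_err = None

    def update(self, measurement, dt):
        err = self.setpoint - measurement
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        # Control output = P term + I term + D term.
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy altitude-hold loop: the "plant" integrates the climb-rate command.
pid = PID(kp=1.2, ki=0.1, kd=0.05, setpoint=10.0)  # hold 10 m (illustrative)
alt, dt = 0.0, 0.1
for _ in range(500):
    climb_rate = pid.update(alt, dt)
    alt += climb_rate * dt  # simplistic first-order response
```

On real hardware the control output would be sent to the flight controller (here, the Pixhawk) rather than fed into a simulated plant.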
Hassi34
This project contains a production-ready machine learning (deep learning) solution for detecting and classifying brain tumors in medical images
yyun543
A lightweight, high-performance deep learning inference framework built in Rust. Zen-Infer provides a clean, modular architecture for deploying neural networks in production environments with minimal dependencies and maximum performance.
meabhishekkumar
The Hitchhiker's Guide to Deep Learning Based Recommenders in Production
freds0
🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
Using-Deep-Learning-Techniques-perform-Fracture-Detection-Image-Processing
Implementing fracture detection on X-ray images (a dataset of 8,000+ images) using different image-processing techniques.
About the project: Bones are the stiff organs that protect vital organs such as the brain, heart, and lungs. The human body has 206 bones, all with different shapes, sizes, and structures; the femur is the largest and the auditory ossicles are the smallest. Bone fractures happen regularly, whether from an accident or any other situation that puts bones under great pressure, and they take many forms: oblique, complex, comminuted, spiral, greenstick, and transverse. X-ray, computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, and other medical-imaging techniques are available to detect various disorders. We design architectures using several neural-network models, compare their accuracy, and determine which model works best on our 10-class dataset, so that future work has a reference for which model gives the best accuracy on a comparable dataset.
Proposed method: We chose this project because computer-generated reports sometimes contain errors, so we wanted to find out which model gives good accuracy with fewer errors. We researched image processing and its libraries (Keras, Matplotlib, ImageDataGenerator, TensorFlow, and others) and applied several architectures: a plain CNN, VGG-16, ResNet50, and InceptionV3. To pick the best model we generate a classification report (precision, recall, R2 score, mean squared error, etc.) using scikit-learn.
Methodology:
Phase 1, requirement analysis: study basic Python programming; study the TensorFlow and Keras Python APIs; study basic image-processing, neural-network, and deep learning concepts; collect the dataset from different sources and organize it into classes (5 fractured + 5 non-fractured).
Phase 2, design and development: this stage starts from the output of the requirement-analysis phase, leads into model construction, where a model is created and an algorithm devised, and then shifts to algorithm analysis and implementation.
Phase 3, coding: once the system-design documents are ready, the task is divided into modules/units and assigned to team members. Because the code is written in this phase, it is the developers' primary focus and the most time-consuming part of the project; it ends with an error-free executable program in the chosen programming language.
Phase 4, testing: we test each model using the classification report it generates, which contains factors such as accuracy, F1 score, precision, and recall, as well as its training and testing accuracy.
Phase 5, deployment: we bring all the previous steps together, compare the classification reports, determine which model is best for our dataset, and deploy that model behind a Python-based interface application.
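The testing-phase metrics named above are easy to compute directly. A minimal sketch for a binary fractured / non-fractured split (toy labels, not this project's data; the project itself uses scikit-learn's report for this):

```python
# Hand-rolled precision/recall for a binary label, as used in the
# classification report described above. Labels here are illustrative.
def precision_recall(y_true, y_pred, positive="fractured"):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of flagged, how many right
    recall = tp / (tp + fn) if tp + fn else 0.0     # of actual, how many found
    return precision, recall

y_true = ["fractured", "fractured", "normal", "normal", "fractured"]
y_pred = ["fractured", "normal", "normal", "fractured", "fractured"]
p, r = precision_recall(y_true, y_pred)  # p = 2/3, r = 2/3
```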
This project aims to develop an innovative anomaly detection system using advanced data mining and deep learning techniques to accurately identify and localize defects in manufacturing components, thereby enhancing quality control processes and reducing production losses.
traderpedroso
XphoneBR is a Brazilian Portuguese transformer-based grapheme-to-phoneme and text-normalization library that leverages recent deep learning technology and is optimized for use in production systems such as TTS. In particular, the library aims to be accurate, fast, and easy to use.
In the first course of the Machine Learning Engineering for Production Specialization, you will identify the various components of, and design, an ML production system end-to-end: project scoping, data needs, modeling strategies, and deployment constraints and requirements. You will also learn how to establish a model baseline, address concept drift, and prototype the process for developing, deploying, and continuously improving a productionized ML application.
Aryia-Behroziuan
In developmental robotics, robot learning algorithms generate their own sequences of learning experiences, also known as a curriculum, to cumulatively acquire new skills through self-guided exploration and social interaction with humans. These robots use guidance mechanisms such as active learning, maturation, motor synergies, and imitation.
Association rules
Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of "interestingness".[60] Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves "rules" to store, manipulate, or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system. This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction.[61] Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems. Based on the concept of strong rules, Rakesh Agrawal, Tomasz Imieliński and Arun Swami introduced association rules for discovering regularities between products in large-scale transaction data recorded by point-of-sale (POS) systems in supermarkets.[62] For example, the rule {onions, potatoes} ⇒ {burger} found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat.
Such information can be used as the basis for decisions about marketing activities such as promotional pricing or product placements. In addition to market basket analysis, association rules are employed today in application areas including Web usage mining, intrusion detection, continuous production, and bioinformatics. In contrast with sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions. Learning classifier systems (LCS) are a family of rule-based machine learning algorithms that combine a discovery component, typically a genetic algorithm, with a learning component, performing either supervised learning, reinforcement learning, or unsupervised learning. They seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions.[63] Inductive logic programming (ILP) is an approach to rule-learning using logic programming as a uniform representation for input examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program that entails all positive and no negative examples. Inductive programming is a related field that considers any kind of programming language for representing hypotheses (and not only logic programming), such as functional programs. Inductive logic programming is particularly useful in bioinformatics and natural language processing. 
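The "strong rule" idea above rests on two standard measures: the support of an itemset (fraction of transactions containing it) and the confidence of a rule (support of both sides over support of the left side). A minimal sketch with toy transactions:

```python
# Support and confidence for rules like {onions, potatoes} => {burger};
# the baskets below are toy data for illustration.
def support(transactions, itemset):
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(transactions, lhs, rhs):
    # Of the baskets containing lhs, what fraction also contain rhs?
    return support(transactions, lhs | rhs) / support(transactions, lhs)

baskets = [
    {"onions", "potatoes", "burger"},
    {"onions", "potatoes", "burger", "beer"},
    {"onions", "potatoes"},
    {"milk", "bread"},
]
sup = support(baskets, {"onions", "potatoes"})                 # 3/4
conf = confidence(baskets, {"onions", "potatoes"}, {"burger"})  # 2/3
```

Algorithms such as Apriori mine all rules whose support and confidence exceed chosen thresholds; the computation per rule is exactly this.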
Gordon Plotkin and Ehud Shapiro laid the initial theoretical foundation for inductive machine learning in a logical setting.[64][65][66] Shapiro built the first implementation (the Model Inference System) in 1981: a Prolog program that inductively inferred logic programs from positive and negative examples.[67] The term inductive here refers to philosophical induction, suggesting a theory to explain observed facts, rather than mathematical induction, proving a property for all members of a well-ordered set.
Models
Performing machine learning involves creating a model, which is trained on some training data and can then process additional data to make predictions. Various types of models have been used and researched for machine learning systems.
Artificial neural networks
[Figure: an interconnected group of nodes, akin to the vast network of neurons in a brain; each circular node represents an artificial neuron, and an arrow represents a connection from the output of one artificial neuron to the input of another.]
Artificial neural networks (ANNs), or connectionist systems, are computing systems vaguely inspired by the biological neural networks that constitute animal brains. Such systems "learn" to perform tasks by considering examples, generally without being programmed with any task-specific rules. An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it.
In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times. The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis. Deep learning consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition.[68]
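The forward pass described above — each neuron applying a non-linear function to the weighted sum of its inputs, layer by layer — can be shown in a few lines (the weights and the sigmoid activation are illustrative choices):

```python
import math

# Minimal ANN forward pass matching the description above: each neuron
# computes sigmoid(weighted sum of inputs + bias), layer by layer.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [1.0, 0.5]  # input layer: two features (toy values)
hidden = layer(x, weights=[[0.4, -0.6], [0.3, 0.8]], biases=[0.0, -0.1])
output = layer(hidden, weights=[[1.2, -0.7]], biases=[0.05])
```

Training ("deep learning" when there are many hidden layers) adjusts the weights and biases so the output matches the examples; only the inference step is sketched here.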
Scoping (optional): Coursera week 3 quiz answers
Mohammed-razin-cr
CYBER-SENTINEL is a production-ready platform for advanced deepfake detection and secure image steganography. It uses an ensemble of deep learning models (CNN, ResNeXt, LSTM, Vision Transformer) for real-time analysis of images and videos, achieving up to 92% accuracy. The system also enables hiding and extracting secret data in images
avs-abhishek123
Detectron is Facebook AI Research's (FAIR) software system that implements state-of-the-art object detection algorithms, including Mask R-CNN. It is written in Python and powered by the Caffe2 deep learning framework. Detectron is meant to advance object detection by offering speedy training and addressing the issues companies face when moving from research to production.
MattiaLitrico
Official implementation for the paper "A deep learning approach to optimize recombinant protein production in fermentations"
ajayrawatsap
Explore how to practice real-world data science by collecting data, curating it, and applying advanced deep learning techniques to create high-quality models that can be deployed in production. Uses the Keras and PyTorch libraries in Python to apply advanced techniques such as data augmentation, dropout, batch normalization, and transfer learning
Aaryan2304
A production-grade, deep-learning-based anomaly detection system for CCTV surveillance footage. This project uses a PyTorch-based convolutional autoencoder to achieve high precision in identifying unusual events. The system is deployed as a scalable REST API using FastAPI and Docker, enabling real-time video analysis. It also includes an MLOps pipeline.
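The blurb does not show this repo's code; as an assumed sketch, autoencoder-based anomaly detection typically flags frames whose reconstruction error exceeds a threshold, since the model only learns to reconstruct normal footage:

```python
# Illustrative scoring logic (assumed, not taken from this repo): frames
# the autoencoder reconstructs poorly are flagged as anomalous.
def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def flag_anomalies(frames, reconstruct, threshold):
    return [i for i, f in enumerate(frames)
            if mse(f, reconstruct(f)) > threshold]

# Stand-in "autoencoder": reproduces the learned normal pattern only.
normal = [0.5, 0.5, 0.5, 0.5]
reconstruct = lambda f: normal
frames = [
    [0.5, 0.5, 0.5, 0.5],     # normal
    [0.52, 0.49, 0.5, 0.51],  # normal with sensor noise
    [0.9, 0.1, 0.95, 0.05],   # unusual event
]
flagged = flag_anomalies(frames, reconstruct, threshold=0.01)  # -> [2]
```

In a real system `reconstruct` would be the trained convolutional autoencoder and `frames` would be preprocessed video frames served through the REST API.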
In this project, we compare and predict the yield of five crops (wheat, barley, jowar, rapeseed & mustard, and bajra) in Rajasthan (district-wise) using three machine learning techniques: random forest, lasso regression, and SVM, and two deep learning techniques: gradient descent and an RNN (LSTM). To apply the models to our data, we divided it into training and testing datasets. Each model is tested twice: once with only "area" and "production" in mind, and again with additional factors (rainfall and soil type) to predict crop yield. To find the model that most accurately predicts the yield, the R2 score, Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE) are calculated for each model.
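The three regression metrics named above have direct formulas; a minimal sketch with toy yield values (not this project's data):

```python
import math

# R2, RMSE, and MAE computed directly from predictions (toy values).
def metrics(y_true, y_pred):
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errs) / n                 # mean absolute error
    rmse = math.sqrt(sum(e * e for e in errs) / n)      # root mean squared error
    mean = sum(y_true) / n
    ss_res = sum(e * e for e in errs)                   # residual sum of squares
    ss_tot = sum((t - mean) ** 2 for t in y_true)       # total sum of squares
    r2 = 1 - ss_res / ss_tot                            # coefficient of determination
    return r2, rmse, mae

y_true = [3.0, 2.5, 4.0, 3.5]  # e.g. tonnes/hectare, illustrative
y_pred = [2.8, 2.7, 3.9, 3.6]
r2, rmse, mae = metrics(y_true, y_pred)
```

Lower RMSE/MAE and an R2 closer to 1 indicate a better-fitting model, which is exactly the comparison the project runs across its five techniques.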
alimbekovKZ
Repository with some blog posts and code for deploying machine learning and deep learning-based models in production.