Found 1,100 repositories (showing 30)
molyswu
using Neural Networks (SSD) on Tensorflow.

This repo documents steps and scripts used to train a hand detector using Tensorflow (Object Detection API). As with any DNN based task, the most expensive (and riskiest) part of the process has to do with finding or creating the right (annotated) dataset. I was interested mainly in detecting hands on a table (egocentric view point). I experimented first with the [Oxford Hands Dataset](http://www.robots.ox.ac.uk/~vgg/data/hands/) (the results were not good). I then tried the [Egohands Dataset](http://vision.soic.indiana.edu/projects/egohands/), which was a much better fit for my requirements.

The goal of this repo/post is to demonstrate how neural networks can be applied to the (hard) problem of tracking hands (egocentric and other views) and, better still, to provide code that can be adapted to other use cases. If you use this tutorial or models in your research or project, please cite [this](#citing-this-tutorial). Here is the detector in action.

<img src="images/hand1.gif" width="33.3%"><img src="images/hand2.gif" width="33.3%"><img src="images/hand3.gif" width="33.3%">

Realtime detection on a video stream from a webcam.

<img src="images/chess1.gif" width="33.3%"><img src="images/chess2.gif" width="33.3%"><img src="images/chess3.gif" width="33.3%">

Detection on a Youtube video.

Both examples above were run on a Macbook Pro **CPU** (i7, 2.5GHz, 16GB). Some FPS numbers are:

| FPS | Image Size | Device | Comments |
| ------------- | ------------- | ------------- | ------------- |
| 21 | 320 * 240 | Macbook Pro (i7, 2.5GHz, 16GB) | Run without visualizing results |
| 16 | 320 * 240 | Macbook Pro (i7, 2.5GHz, 16GB) | Run while visualizing results (image above) |
| 11 | 640 * 480 | Macbook Pro (i7, 2.5GHz, 16GB) | Run while visualizing results (image above) |

> Note: The code in this repo is written and tested with Tensorflow `1.4.0-rc0`. Using a different version may result in [some errors](https://github.com/tensorflow/models/issues/1581). You may need to [generate your own frozen model](https://pythonprogramming.net/testing-custom-object-detector-tensorflow-object-detection-api-tutorial/?completed=/training-custom-objects-tensorflow-object-detection-api-tutorial/) graph using the [model checkpoints](model-checkpoint) in the repo to fit your TF version.

**Content of this document**

- Motivation - Why Track/Detect hands with Neural Networks
- Data preparation and network training in Tensorflow (Dataset, Import, Training)
- Training the hand detection Model
- Using the Detector to Detect/Track hands
- Thoughts on Optimizations

> P.S. If you are using or have used the models provided here, feel free to reach out on twitter ([@vykthur](https://twitter.com/vykthur)) and share your work!

## Motivation - Why Track/Detect hands with Neural Networks?

There are several existing approaches to tracking hands in the computer vision domain. Many of these approaches are rule based (e.g. extracting background based on texture and boundary features, distinguishing between hands and background using color histograms and HOG classifiers), making them not very robust.
For example, these algorithms might get confused if the background is unusual, if sharp changes in lighting conditions cause sharp changes in apparent skin color, or if the tracked object becomes occluded (see [this review paper](https://www.cse.unr.edu/~bebis/handposerev.pdf) on hand pose estimation from the HCI perspective).

With sufficiently large datasets, neural networks provide the opportunity to train models that perform well and address the challenges of existing object tracking/detection algorithms - varied/poor lighting, noisy environments, diverse viewpoints and even occlusion. The main drawbacks to using them for real-time tracking/detection are that they can be complex, are relatively slow compared to tracking-only algorithms, and it can be quite expensive to assemble a good dataset. But things are changing with advances in fast neural networks.

Furthermore, this entire area of work has been made more approachable by deep learning frameworks (such as the tensorflow object detection api) that simplify the process of training a model for custom object detection. More importantly, the advent of fast neural network models like ssd, faster r-cnn and rfcn (see [here](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models-coco-models)) makes neural networks an attractive candidate for real-time detection (and tracking) applications. Hopefully, this repo demonstrates this.

> If you are not interested in the process of training the detector, you can skip straight to applying the [pretrained model I provide in detecting hands](#detecting-hands). Training a model is a multi-stage process (assembling a dataset, cleaning, splitting into training/test partitions and generating an inference graph). While I lightly touch on the details of these parts, there are a few other tutorials that cover training a custom object detector using the tensorflow object detection api in more detail [see [here](https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/) and [here](https://towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9)]. I recommend you walk through those if you are interested in training a custom object detector from scratch.

## Data preparation and network training in Tensorflow (Dataset, Import, Training)

**The Egohands Dataset**

The hand detector model is built using data from the [Egohands Dataset](http://vision.soic.indiana.edu/projects/egohands/). This dataset works well for several reasons. It contains high quality, pixel level annotations (>15000 ground truth labels) where hands are located across 4800 images. All images are captured from an egocentric view (Google glass) across 48 different environments (indoor, outdoor) and activities (playing cards, chess, jenga, solving puzzles etc).

<img src="images/egohandstrain.jpg" width="100%">

If you will be using the Egohands dataset, you can cite them as follows:

> Bambach, Sven, et al. "Lending a hand: Detecting hands and recognizing activities in complex egocentric interactions." Proceedings of the IEEE International Conference on Computer Vision. 2015.

The Egohands dataset (zip file with labelled data) contains 48 folders of locations where video data was collected (100 images per folder).

```
-- LOCATION_X
  -- frame_1.jpg
  -- frame_2.jpg
  ...
  -- frame_100.jpg
  -- polygons.mat  // contains annotations for all 100 images in current folder
-- LOCATION_Y
  -- frame_1.jpg
  -- frame_2.jpg
  ...
  -- frame_100.jpg
  -- polygons.mat  // contains annotations for all 100 images in current folder
```

**Converting data to Tensorflow Format**

Some initial work needs to be done on the Egohands dataset to transform it into the format (`tfrecord`) which Tensorflow needs to train a model. This repo contains `egohands_dataset_clean.py`, a script that will help you generate these csv files. It:

- Downloads the egohands dataset
- Renames all files to include their directory names to ensure each filename is unique
- Splits the dataset into train (80%), test (10%) and eval (10%) folders
- Reads in `polygons.mat` for each folder, generates bounding boxes and visualizes them to ensure correctness (see image above)
- Once the script is done running, you should have an images folder containing three folders - train, test and eval. Each of these folders should also contain a csv label document - `train_labels.csv`, `test_labels.csv` - that can be used to generate `tfrecords`.

Note: While the egohands dataset provides four separate labels for hands (own left, own right, other left, and other right), for my purpose I am only interested in the general `hand` class, so I label all training data as `hand`. You can modify the data prep script to generate `tfrecords` that support 4 labels.

Next: convert your dataset + csv files to tfrecords. A helpful guide on this can be found [here](https://pythonprogramming.net/creating-tfrecord-files-tensorflow-object-detection-api-tutorial/). For each folder, you should be able to generate the `train.record` and `test.record` files required in the training process (a minimal conversion sketch is shown further below).

## Training the hand detection Model

Now that the dataset has been assembled (and your tfrecords generated), the next task is to train a model based on it. With neural networks, it is possible to use a process called [transfer learning](https://www.tensorflow.org/tutorials/image_retraining) to shorten the amount of time needed to train the entire model. This means we can take an existing model (that has been trained well on a related domain, here image classification) and retrain its final layer(s) to detect hands for us. Sweet! Given that neural networks sometimes have thousands or millions of parameters that can take weeks or months to train, transfer learning helps shorten training time to possibly hours. Tensorflow does offer a few models (in the tensorflow [model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models-coco-models)) and I chose to use the `ssd_mobilenet_v1_coco` model as my starting point given it is currently (one of) the fastest models (read the SSD research [paper here](https://arxiv.org/pdf/1512.02325.pdf)). The training process can be done locally on your CPU machine, which may take a while, or better on a (cloud) GPU machine (which is what I did). For reference, training on my macbook pro (tensorflow compiled from source to take advantage of the mac's cpu architecture), the maximum speed I got was 5 seconds per step, as opposed to the ~0.5 seconds per step I got with a GPU. For reference, it would take about 12 days to run 200k steps on my mac (i7, 2.5GHz, 16GB), compared to ~5hrs on a GPU.
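Before kicking off a training run, the label CSVs described above need to be turned into `train.record`/`test.record` files. Below is a minimal, hedged sketch of that conversion using the TF 1.x API. The column layout (`filename,width,height,class,xmin,ymin,xmax,ymax`) and the single `hand` class with label id 1 are assumptions based on the referenced tutorials, not necessarily the exact script used for this repo.

```python
import os

import pandas as pd
import tensorflow as tf


def create_tf_example(filename, group, image_dir):
    """Build one tf.train.Example from all boxes belonging to a single image."""
    # Assumption: the width/height columns describe the image dimensions.
    width = int(group['width'].iloc[0])
    height = int(group['height'].iloc[0])

    # Store the raw encoded jpg bytes inside the record.
    with tf.gfile.GFile(os.path.join(image_dir, filename), 'rb') as f:
        encoded_jpg = f.read()

    xmins, xmaxs, ymins, ymaxs, classes_text, classes = [], [], [], [], [], []
    for _, row in group.iterrows():
        # The Object Detection API expects box coordinates normalized to [0, 1].
        xmins.append(float(row['xmin']) / width)
        xmaxs.append(float(row['xmax']) / width)
        ymins.append(float(row['ymin']) / height)
        ymaxs.append(float(row['ymax']) / height)
        classes_text.append(row['class'].encode('utf8'))
        classes.append(1)  # assumption: single "hand" class with id 1 in the label map

    feature = {
        'image/height': tf.train.Feature(int64_list=tf.train.Int64List(value=[height])),
        'image/width': tf.train.Feature(int64_list=tf.train.Int64List(value=[width])),
        'image/filename': tf.train.Feature(bytes_list=tf.train.BytesList(value=[filename.encode('utf8')])),
        'image/source_id': tf.train.Feature(bytes_list=tf.train.BytesList(value=[filename.encode('utf8')])),
        'image/encoded': tf.train.Feature(bytes_list=tf.train.BytesList(value=[encoded_jpg])),
        'image/format': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'jpg'])),
        'image/object/bbox/xmin': tf.train.Feature(float_list=tf.train.FloatList(value=xmins)),
        'image/object/bbox/xmax': tf.train.Feature(float_list=tf.train.FloatList(value=xmaxs)),
        'image/object/bbox/ymin': tf.train.Feature(float_list=tf.train.FloatList(value=ymins)),
        'image/object/bbox/ymax': tf.train.Feature(float_list=tf.train.FloatList(value=ymaxs)),
        'image/object/class/text': tf.train.Feature(bytes_list=tf.train.BytesList(value=classes_text)),
        'image/object/class/label': tf.train.Feature(int64_list=tf.train.Int64List(value=classes)),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))


def csv_to_tfrecord(csv_path, image_dir, output_path):
    writer = tf.python_io.TFRecordWriter(output_path)
    # Group rows by image so multiple hands in one frame end up in a single example.
    for filename, group in pd.read_csv(csv_path).groupby('filename'):
        writer.write(create_tf_example(filename, group, image_dir).SerializeToString())
    writer.close()


# Hypothetical usage:
# csv_to_tfrecord('images/train/train_labels.csv', 'images/train', 'train.record')
```

The normalization by image dimensions and the label id starting at 1 follow the conventions of the Object Detection API's example format.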
> **Training on your own images**: Please use the [guide provided by Harrison from pythonprogramming](https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/) on how to generate tfrecords given your label csv files and your images. The guide also covers how to start the training process if training locally. If training in the cloud using a service like GCP, see the [guide here](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_cloud.md).

As the training process progresses, the expectation is that total loss (error) gets reduced to its possible minimum (about a value of 1 or thereabouts). By observing the tensorboard graphs for total loss (see image below), it should be possible to get an idea of when the training process is complete (total loss stops decreasing with further iterations/steps). I ran my training job for 200k steps (which took about 5 hours) and stopped at a total loss value of 2.575. (In retrospect, I could have stopped the training at about 50k steps and gotten a similar total loss value.) With tensorflow, you can also run an evaluation concurrently that assesses how well your model performs on the test data. A commonly used metric for performance is mean average precision (mAP), a single number used to summarize the area under the precision-recall curve. mAP is a measure of how well the model generates a bounding box that has at least a 50% overlap with the ground truth bounding box in our test dataset. For the hand detector trained here, the mAP value was **0.9686@0.5IOU**. mAP values range from 0 to 1; the higher the better.

<img src="images/accuracy.jpg" width="100%">

Once training is completed, the trained inference graph (`frozen_inference_graph.pb`) is exported (see the earlier referenced guides for how to do this) and saved in the `hand_inference_graph` folder. Now it's time to do some interesting detection.

## Using the Detector to Detect/Track hands

If you have not done so yet, please follow the guide on installing [Tensorflow and the Tensorflow object detection api](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md). This will walk you through setting up the tensorflow framework and cloning the tensorflow github repo. The detection workflow is then:

- Load the `frozen_inference_graph.pb` trained on the hands dataset as well as the corresponding label map. In this repo, this is done in the `utils/detector_utils.py` script by the `load_inference_graph` method.

```python
import tensorflow as tf

# PATH_TO_CKPT points to the exported frozen_inference_graph.pb
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
    sess = tf.Session(graph=detection_graph)
print("> ====== Hand Inference graph loaded.")
```

- Detect hands. In this repo, this is done in the `utils/detector_utils.py` script by the `detect_objects` method.

```python
# image_np_expanded is the input frame with an added batch dimension; the
# detection_* tensors are looked up from the graph by name
# (e.g. detection_graph.get_tensor_by_name('detection_boxes:0')).
(boxes, scores, classes, num) = sess.run(
    [detection_boxes, detection_scores, detection_classes, num_detections],
    feed_dict={image_tensor: image_np_expanded})
```

- Visualize the detected bounding boxes. In this repo, this is done in the `utils/detector_utils.py` script by the `draw_box_on_image` method.

This repo contains two scripts that tie all these steps together.
- `detect_multi_threaded.py`: A threaded implementation for reading camera video input and running detection. Takes a set of command line flags to set parameters such as `--display` (visualize detections), the image parameters `--width` and `--height`, and the video `--source` (0 for camera) etc.
- `detect_single_threaded.py`: Same as above, but single threaded. This script also works for video files by setting the video `--source` parameter (path to a video file).

```cmd
# load and run detection on video at path "videos/chess.mov"
python detect_single_threaded.py --source videos/chess.mov
```

> Update: If you do have errors loading the frozen inference graph in this repo, feel free to generate a new graph that fits your TF version from the model-checkpoint in this repo. Use the [export_inference_graph.py](https://github.com/tensorflow/models/blob/master/research/object_detection/export_inference_graph.py) script provided in the tensorflow object detection api repo. More guidance on this [here](https://pythonprogramming.net/testing-custom-object-detector-tensorflow-object-detection-api-tutorial/?completed=/training-custom-objects-tensorflow-object-detection-api-tutorial/).

## Thoughts on Optimization

A few things led to noticeable performance increases.

- Threading: It turns out that reading images from a webcam is a heavy I/O operation and, if run on the main application thread, can slow down the program. I implemented some good ideas from [Adrian Rosebrock](https://www.pyimagesearch.com/2017/02/06/faster-video-file-fps-with-cv2-videocapture-and-opencv/) on parallelizing image capture across multiple worker threads. This mostly led to an FPS increase of about 5 points.
- For those new to Opencv, images from the `cv2.read()` method are returned in [BGR format](https://www.learnopencv.com/why-does-opencv-use-bgr-color-format/). Ensure you convert to RGB before detection (accuracy will be much reduced if you don't).

```python
cv2.cvtColor(image_np, cv2.COLOR_BGR2RGB)
```

- Keeping your input image small will increase fps without any significant accuracy drop (I used about 320 x 240 compared to the 1280 x 720 which my webcam provides).
- Model quantization. Moving from the current 32 bit to 8 bit can achieve up to a 4x reduction in the memory required to load and store models. One way to further speed up this model is to explore the use of [8-bit fixed point quantization](https://heartbeat.fritz.ai/8-bit-quantization-and-tensorflow-lite-speeding-up-mobile-inference-with-low-precision-a882dfcafbbd).

Performance can also be increased by a clever combination of tracking algorithms with the already decent detection, and this is something I am still experimenting with. Have ideas for optimizing better? Please share!

<img src="images/general.jpg" width="100%">

Note: The detector does reflect some limitations associated with the training set. These include non-egocentric viewpoints, very noisy backgrounds (e.g. in a sea of hands) and sometimes skin tone. There is an opportunity to improve these with additional data.

## Integrating Multiple DNNs

One way to make things more interesting is to integrate our new knowledge of where "hands" are with other detectors trained to recognize other objects. Unfortunately, while our hand detector can in fact detect hands, it cannot detect other objects (a factor of how it is trained). Creating a detector that classifies multiple different objects would mean a long, involved process of assembling datasets for each class and a lengthy training process.
> Given the above, a potential strategy is to explore structures that allow us to **efficiently** interleave the output from multiple pretrained models for various object classes and have them detect multiple objects in a single image.

An example of this is my primary use case, where I am interested in understanding the position of objects on a table with respect to hands on the same table. I am currently doing some work on a threaded application that loads multiple detectors and outputs bounding boxes on a single image. More on this soon.
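For illustration only, here is a rough, hedged sketch of what loading more than one frozen detector and running both on the same frame could look like. It is sequential rather than threaded, the second graph path is a hypothetical placeholder, and it is not the repo's implementation.

```python
import cv2
import numpy as np
import tensorflow as tf

GRAPH_PATHS = [
    'hand_inference_graph/frozen_inference_graph.pb',   # hand detector from this repo
    'other_inference_graph/frozen_inference_graph.pb',  # hypothetical second detector
]


def load_detector(path):
    # Same frozen-graph loading pattern shown earlier, one graph/session per model.
    graph = tf.Graph()
    with graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile(path, 'rb') as fid:
            graph_def.ParseFromString(fid.read())
        tf.import_graph_def(graph_def, name='')
    return graph, tf.Session(graph=graph)


def detect(graph, sess, image_rgb):
    # The Object Detection API exports these tensor names by convention.
    image_tensor = graph.get_tensor_by_name('image_tensor:0')
    boxes = graph.get_tensor_by_name('detection_boxes:0')
    scores = graph.get_tensor_by_name('detection_scores:0')
    return sess.run([boxes, scores],
                    feed_dict={image_tensor: np.expand_dims(image_rgb, axis=0)})


detectors = [load_detector(p) for p in GRAPH_PATHS]
frame_bgr = cv2.imread('images/example.jpg')            # hypothetical test image
frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)  # detectors expect RGB input

all_boxes = []
for graph, sess in detectors:
    boxes, scores = detect(graph, sess, frame_rgb)
    # Keep detections above a confidence threshold (0.2 is an arbitrary example).
    all_boxes.extend(b for b, s in zip(boxes[0], scores[0]) if s > 0.2)
print(len(all_boxes), "boxes from", len(detectors), "detectors")
```

A threaded variant would move each `sess.run` call onto its own worker thread and merge the resulting boxes before drawing them on the frame.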
Hrishikesh332
Motivation.AI is your personal guide to success! Based on the vision of Dr. APJ Abdul Kalam, our AI-powered chatbot provides personalized motivation, practical advice, and inspirational stories to help achieve goals 💫
yakupzengin
An AI-powered fitness tracker that uses real-time pose estimation to count reps, monitor form, and provide instant feedback for exercises like squats, push-ups, and bicep curls. Designed for accuracy, motivation, and adaptability
Arijit-05
Momentum is a clean, modern habit tracker that helps you stay consistent. Track daily habits, visualize streaks with a calendar and fire animations, get AI-powered motivation, and set reminders, all while working offline with a distraction-free design.
JLW-7
Solace is a platform aimed at addressing mental health issues such as depression and the effects of bullying. It incorporates an AI chatbot, Pi, capable of providing support and solutions to user concerns, daily motivational quotes, and a public forum for users to express themselves and find comfort.
samyakrajbayar
Generate stunning, professional posters using AI-powered image generation. This Python tool leverages Stability AI's SDXL model to create custom posters with text overlays for events, movies, motivation, and more.
ChiaPatricia
Advancing Motivational Interviewing through AI-powered feedback based on MITI framework
occupyashanti
An AI-powered virtual life coach designed to provide personalized guidance, motivation, and actionable advice. This project leverages natural language processing (NLP) and machine learning to simulate meaningful conversations, set goals, track progress, and suggest improvements in various areas like productivity, health, relationships.
M1chae1Patr1ck
A Twitter bot that uses Azure's AI to perform sentiment analysis of tweets live. If a tweet is determined to be positive it likes and retweets the tweet. The goal is to reinforce positive online behavior. This is an example of how to make AI work for us instead of just for marketers and other entities that have motivations beyond what is best for humanity.
Alfredd43
Fit Track is a playful, production-ready wellness tracker and AI companion. Log your meals, hydration, and workouts. See your progress. Get motivation and tips from AI—all in a slick, responsive app.
medtorch
Our vision towards a healthier and safer world.
MultiX0
Questra redefines self-improvement through adaptive AI quests, live progress tracking, and an honest motivation system that engages users across English and Arabic audiences.
ChanMeng666
【Every star you give feeds a hungry developer's motivation! ⭐️】A modern educational platform built with Docusaurus that teaches AI-assisted programming through comprehensive tutorials, practical exercises, and real-world projects. Features bilingual support, interactive examples, and guides for tools like Cursor, v0, and Vercel.
Aryia-Behroziuan
Knowledge representation is a field of artificial intelligence that focuses on designing computer representations that capture information about the world that can be used to solve complex problems. The justification for knowledge representation is that conventional procedural code is not the best formalism to use to solve complex problems. Knowledge representation makes complex software easier to define and maintain than procedural code and can be used in expert systems. For example, talking to experts in terms of business rules rather than code lessens the semantic gap between users and developers and makes development of complex systems more practical. Knowledge representation goes hand in hand with automated reasoning because one of the main purposes of explicitly representing knowledge is to be able to reason about that knowledge, to make inferences, assert new knowledge, etc. Virtually all knowledge representation languages have a reasoning or inference engine as part of the system.[10]

A key trade-off in the design of a knowledge representation formalism is that between expressivity and practicality. The ultimate knowledge representation formalism in terms of expressive power and compactness is First Order Logic (FOL). There is no more powerful formalism than that used by mathematicians to define general propositions about the world. However, FOL has two drawbacks as a knowledge representation formalism: ease of use and practicality of implementation. First order logic can be intimidating even for many software developers. Languages that do not have the complete formal power of FOL can still provide close to the same expressive power with a user interface that is more practical for the average developer to understand. The issue of practicality of implementation is that FOL in some ways is too expressive. With FOL it is possible to create statements (e.g. quantification over infinite sets) that would cause a system to never terminate if it attempted to verify them. Thus, a subset of FOL can be both easier to use and more practical to implement. This was a driving motivation behind rule-based expert systems. IF-THEN rules provide a subset of FOL but a very useful one that is also very intuitive. The history of most of the early AI knowledge representation formalisms, from databases to semantic nets to theorem provers and production systems, can be viewed as various design decisions on whether to emphasize expressive power or computability and efficiency.[11]

In a key 1993 paper on the topic, Randall Davis of MIT outlined five distinct roles to analyze a knowledge representation framework:[12]

1. A knowledge representation (KR) is most fundamentally a surrogate, a substitute for the thing itself, used to enable an entity to determine consequences by thinking rather than acting, i.e., by reasoning about the world rather than taking action in it.
2. It is a set of ontological commitments, i.e., an answer to the question: In what terms should I think about the world?
3. It is a fragmentary theory of intelligent reasoning, expressed in terms of three components: (i) the representation's fundamental conception of intelligent reasoning; (ii) the set of inferences the representation sanctions; and (iii) the set of inferences it recommends.
4. It is a medium for pragmatically efficient computation, i.e., the computational environment in which thinking is accomplished. One contribution to this pragmatic efficiency is supplied by the guidance a representation provides for organizing information so as to facilitate making the recommended inferences.
5. It is a medium of human expression, i.e., a language in which we say things about the world.

Knowledge representation and reasoning are a key enabling technology for the Semantic Web. Languages based on the Frame model with automatic classification provide a layer of semantics on top of the existing Internet. Rather than searching via text strings as is typical today, it will be possible to define logical queries and find pages that map to those queries.[13] The automated reasoning component in these systems is an engine known as the classifier. Classifiers focus on the subsumption relations in a knowledge base rather than rules. A classifier can infer new classes and dynamically change the ontology as new information becomes available. This capability is ideal for the ever-changing and evolving information space of the Internet.[14]

The Semantic Web integrates concepts from knowledge representation and reasoning with markup languages based on XML. The Resource Description Framework (RDF) provides the basic capabilities to define knowledge-based objects on the Internet with basic features such as Is-A relations and object properties. The Web Ontology Language (OWL) adds additional semantics and integrates with automatic classification reasoners.[15]
Capstone-42-quote-gen
Gloomsmith, the AI-generated de-motivational quote image generator that brings laughter, sarcasm, and absurdity to life.
An AI-powered web assistant for stress relief, offering personalized mood analysis, calming activities, motivational quotes, and mental wellness support
AlexisTrouve
An intelligent CV platform that adapts to 5 professional themes with smart scoring. Features a unique dual-letter generator creating both motivation and satirical anti-motivation letters using Claude AI with multi-source company research (Wikipedia, GitHub, news). Built with Go, Fiber, Next.js 14, PostgreSQL, Redis. 882 tests passing.
swami-hai-ham
Karma AI redefines task management with AI coaches inspired by famous characters. Elevate productivity as movie, anime, or series personas guide your to-do list. Enjoy a unique, motivational experience tailored to your preferences in this personalized task companion.
rteeter
VS Code extension for mindful coding breaks with AI-powered encouragement messages. Set custom work/break intervals and receive personalized break-time messages in different styles (Zen Master, Motivational Coach, etc).
codingstark-dev
A customizable badge generator to track and display profile views with a twist, add AI-powered messages like motivational quotes or fun facts! Built with Hono.js, Cloudflare Workers, and more.
Akhilesh-yadav680
nspireMailer is an AI-powered email automation bot that delivers personalized motivational emails at scheduled times. It helps users maintain daily inspiration and positivity effortlessly by sending timely quotes, messages, and encouragement directly to their inbox.
eskarasu
AIShe is an AI-powered conversational companion designed to uplift, inspire, and support women. Whether you need a compliment, a motivational boost, or just someone to talk to, AIShe is here to brighten your day with positivity and encouragement.
tajpouria
GPT-YouTube-Short-Make is an AI-powered tool leveraging OpenAI's GPT-4 to recreate engaging and motivational videos based on YouTube short links. This project encompasses video extraction, script generation, voiceover synthesis, music recognition, and title creation, culminating in a final composite video.
mahalakshmi565
The Virtual Fitness Assistant uses human pose estimation to guide users through workouts by analyzing posture and providing real-time feedback. It helps improve exercise accuracy, prevent injuries, and make fitness accessible from home. The system promotes health and motivation through AI-powered interaction and personalized virtual training.
tanaygupta0610
This is a discord bot that I created in 2025. This bot can send AI generated responses and recipes, give music suggestions, serve you a motivational quote, do a dictionary lookup, generate a full form of your name using positive words, play truth and dare with you, and give you compliments and apologies.
Girijesh-devops
# Python Developer Roadmap

Folks, here are 10 important things to deep-dive into for the Python Developer role! The items are listed in no particular order. You don't need to learn everything listed here; however, knowing what you don't know is as important as knowing things.

## **1. Learn the basics**
* Basic syntax
* Variables and data types
* Conditionals
* Lists, Tuples, Sets, Dictionaries
* Type casting, Exception handling
* Functions, Built-in functions

## **2. Advanced Core Python**
* Object Oriented Programming (OOP)
* Data Structures and Algorithms
* Regular Expressions
* Decorators
* Lambdas
* Modules
* Iterators

## **3. Version Control Systems**
* Basic Git usage
* Repo hosting services (GitHub, GitLab, BitBucket)

## **4. Package Managers**
* PyPI
* PIP

## **5. Learn a Framework (Web Development)**
- Synchronous frameworks - Django, Flask, Pyramid
- Asynchronous frameworks - Tornado, Sanic, aiohttp, gevent

## **6. Desktop Applications**
* Tkinter
* PyQT
* Kivy

## **7. Scraping**
- Web scraping refers to the process of collecting and processing large amounts of data from the web using software or algorithms. Scraping data from the web is an important skill to have if you are a data scientist, developer, or someone who analyzes large quantities of data.
- Python is an effective web scraping language. You don't need to learn complicated code if you are already proficient in Python. The three most notable and commonly used Python tools are Requests, Scrapy, and BeautifulSoup.

## **8. Scripting**
- Python is a scripting language because it uses an interpreter to translate and run its code. A Python script can be a command that runs in Rhino, or it can be a collection of functions that you import as a library in other scripts.
- In web applications, developers use Python as a scripting language because it can automate a specific set of tasks and improve performance. Accordingly, developers favor Python for building software applications, browser-based sites, operating system shells, and some games.

**Python Scripting Tools You Can Implement Easily:**
- DevOps: Docker, Kubernetes, Gradle, and so on
- System administration

## 9. Artificial Intelligence / Data Science
- Smart engineers consistently prefer Python for AI because of its many advantages. Python's rich libraries are one of the primary reasons to pick Python for ML or deep learning. In addition, Python's data handling capabilities are excellent, as is its speed.
- Being exceptionally strong in ML and AI, Python is now gaining traction in industries such as travel, fintech, transportation, and healthcare.

Tools you can use for Python machine learning: Tensorflow, PyTorch, Keras, Scikit-learn, Numpy, Pandas

## 10. Ethical Hacking With Python
- Ethical hacking is the process of using sophisticated tools and techniques to identify potential threats and vulnerabilities in a computer network. Python, one of the most popular programming languages thanks to its large number of tools and libraries, is also used for ethical hacking.
- It is so widely used by hackers that there are plenty of different attack vectors to consider. Additionally, it requires only a little coding knowledge, making it simple to write scripts.
- Tools and techniques for Python hacking: SQL injection, session hijacking, man-in-the-middle attacks, networking, IP address spoofing

###### Python is a programming language that has gained prominence and is in demand. Demand for Python developers has soared, often alongside data science training in Python. So if you have the opportunity to work in this field and enjoy the experience, you are fortunate to be in this area of programming.

###### To close, this Python developer roadmap can help a developer succeed in Python programming, provided you acquire the knowledge and a basic understanding of the field.
MathieuDvv
Generate motivation letter using AI
SiddardhaShayini
No description available
ImpKind
Reward and motivation system for AI agents. Part of the AI Brain series.
davidemesso
A bot that posts an AI generated motivational quote on instagram @inspiration_davido