Found 131 repositories (showing 30)
khoj-ai
Your AI second brain. Self-hostable. Get answers from the web or your docs. Build custom agents, schedule automations, do deep research. Turn any online or local LLM into your personal, autonomous AI (gpt, claude, gemini, llama, qwen, mistral). Get started - free.
SharadKumar97
Performs OSINT scans on an email/domain/IP address/organization using OSINT-SPY. It can be used by data miners, infosec researchers, penetration testers, and cybercrime investigators to find deep information about their target. If you want to ask something, please feel free to reach out to me at robotcoder@protonmail.com
CelaDaniel
🌟 A curated collection of free, high quality AI tools 🤖, APIs 🔗, datasets 📊, and learning resources 📚 covering machine learning 🧠, deep learning 🧩, generative AI 🎨, NLP 💬, and data science 📈. Designed to help developers 👩💻, researchers 🔬, and creators ✨ explore and build with AI faster ⚡.
molyswu
Hand detection using Neural Networks (SSD) on Tensorflow. This repo documents steps and scripts used to train a hand detector using Tensorflow (Object Detection API). As with any DNN based task, the most expensive (and riskiest) part of the process has to do with finding or creating the right (annotated) dataset. I was interested mainly in detecting hands on a table (egocentric view point). I experimented first with the [Oxford Hands Dataset](http://www.robots.ox.ac.uk/~vgg/data/hands/) (the results were not good). I then tried the [Egohands Dataset](http://vision.soic.indiana.edu/projects/egohands/), which was a much better fit for my requirements. The goal of this repo/post is to demonstrate how neural networks can be applied to the (hard) problem of tracking hands (egocentric and other views) and, better still, to provide code that can be adapted to other use cases. If you use this tutorial or models in your research or project, please cite [this](#citing-this-tutorial). Here is the detector in action.

<img src="images/hand1.gif" width="33.3%"><img src="images/hand2.gif" width="33.3%"><img src="images/hand3.gif" width="33.3%">

Realtime detection on a video stream from a webcam.

<img src="images/chess1.gif" width="33.3%"><img src="images/chess2.gif" width="33.3%"><img src="images/chess3.gif" width="33.3%">

Detection on a YouTube video.

Both examples above were run on a Macbook Pro **CPU** (i7, 2.5GHz, 16GB). Some fps numbers are:

| FPS | Image Size | Device | Comments |
| ------------- | ------------- | ------------- | ------------- |
| 21 | 320 * 240 | Macbook Pro (i7, 2.5GHz, 16GB) | Run without visualizing results |
| 16 | 320 * 240 | Macbook Pro (i7, 2.5GHz, 16GB) | Run while visualizing results (image above) |
| 11 | 640 * 480 | Macbook Pro (i7, 2.5GHz, 16GB) | Run while visualizing results (image above) |

> Note: The code in this repo is written and tested with Tensorflow `1.4.0-rc0`.
Using a different version may result in [some errors](https://github.com/tensorflow/models/issues/1581). You may need to [generate your own frozen model](https://pythonprogramming.net/testing-custom-object-detector-tensorflow-object-detection-api-tutorial/?completed=/training-custom-objects-tensorflow-object-detection-api-tutorial/) graph using the [model checkpoints](model-checkpoint) in the repo to fit your TF version.

**Content of this document**

- Motivation - Why Track/Detect hands with Neural Networks
- Data preparation and network training in Tensorflow (Dataset, Import, Training)
- Training the hand detection Model
- Using the Detector to Detect/Track hands
- Thoughts on Optimizations

> P.S. If you are using or have used the models provided here, feel free to reach out on twitter ([@vykthur](https://twitter.com/vykthur)) and share your work!

## Motivation - Why Track/Detect hands with Neural Networks?

There are several existing approaches to tracking hands in the computer vision domain. Incidentally, many of these approaches are rule based (e.g. extracting the background based on texture and boundary features, or distinguishing between hands and background using color histograms and HOG classifiers), making them not very robust. For example, these algorithms might get confused if the background is unusual, or in situations where sharp changes in lighting conditions cause sharp changes in skin color, or where the tracked object becomes occluded (see [this review paper](https://www.cse.unr.edu/~bebis/handposerev.pdf) on hand pose estimation from the HCI perspective). With sufficiently large datasets, neural networks provide an opportunity to train models that perform well and address the challenges of existing object tracking/detection algorithms - varied/poor lighting, noisy environments, diverse viewpoints and even occlusion.
The main drawbacks to using them for real-time tracking/detection are that they can be complex, are relatively slow compared to tracking-only algorithms, and it can be quite expensive to assemble a good dataset. But things are changing with advances in fast neural networks. Furthermore, this entire area of work has been made more approachable by deep learning frameworks (such as the tensorflow object detection api) that simplify the process of training a model for custom object detection. More importantly, the advent of fast neural network models like ssd, faster r-cnn and rfcn (see [here](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models-coco-models)) makes neural networks an attractive candidate for real-time detection (and tracking) applications. Hopefully, this repo demonstrates this.

> If you are not interested in the process of training the detector, you can skip straight to applying the [pretrained model I provide in detecting hands](#detecting-hands).

Training a model is a multi-stage process (assembling a dataset, cleaning, splitting into training/test partitions and generating an inference graph). While I lightly touch on the details of these parts, a few other tutorials cover training a custom object detector using the tensorflow object detection api in more detail [see [here](https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/) and [here](https://towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9)]. I recommend you walk through those if you are interested in training a custom object detector from scratch.

## Data preparation and network training in Tensorflow (Dataset, Import, Training)

**The Egohands Dataset**

The hand detector model is built using data from the [Egohands Dataset](http://vision.soic.indiana.edu/projects/egohands/).
This dataset works well for several reasons. It contains high quality, pixel level annotations (>15000 ground truth labels) where hands are located across 4800 images. All images are captured from an egocentric view (Google Glass) across 48 different environments (indoor, outdoor) and activities (playing cards, chess, jenga, solving puzzles etc). <img src="images/egohandstrain.jpg" width="100%"> If you will be using the Egohands dataset, you can cite them as follows:

> Bambach, Sven, et al. "Lending a hand: Detecting hands and recognizing activities in complex egocentric interactions." Proceedings of the IEEE International Conference on Computer Vision. 2015.

The Egohands dataset (zip file with labelled data) contains 48 folders of locations where video data was collected (100 images per folder).

```
-- LOCATION_X
  -- frame_1.jpg
  -- frame_2.jpg
  ...
  -- frame_100.jpg
  -- polygons.mat  // contains annotations for all 100 images in current folder
-- LOCATION_Y
  -- frame_1.jpg
  -- frame_2.jpg
  ...
  -- frame_100.jpg
  -- polygons.mat  // contains annotations for all 100 images in current folder
```

**Converting data to Tensorflow Format**

Some initial work needs to be done on the Egohands dataset to transform it into the format (`tfrecord`) which Tensorflow needs to train a model. This repo contains `egohands_dataset_clean.py`, a script that will help you generate these csv files. The script:

- Downloads the egohands dataset
- Renames all files to include their directory names to ensure each filename is unique
- Splits the dataset into train (80%), test (10%) and eval (10%) folders
- Reads in `polygons.mat` for each folder, generates bounding boxes and visualizes them to ensure correctness (see image above)

Once the script is done running, you should have an images folder containing three folders - train, test and eval.
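The bounding-box generation step above can be sketched in a few lines. This is a minimal illustration, not the repo's actual script: the function names, csv column order and the sample polygon are hypothetical, and the real `egohands_dataset_clean.py` reads its polygons from each folder's `polygons.mat`.

```python
# Sketch: converting one polygon annotation into a bounding box and a csv
# label row. All names and the sample polygon here are illustrative.

def polygon_to_bbox(points):
    """Return (xmin, ymin, xmax, ymax) for a list of (x, y) polygon points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def to_csv_row(filename, width, height, points, label="hand"):
    """One label row: filename, image size, class label, box corners."""
    xmin, ymin, xmax, ymax = polygon_to_bbox(points)
    return f"{filename},{width},{height},{label},{xmin},{ymin},{xmax},{ymax}"

# Hypothetical hand outline in a 1280x720 frame
row = to_csv_row("CARDS_LIVINGROOM_frame_1.jpg", 1280, 720,
                 [(100, 200), (180, 190), (200, 260), (120, 300)])
print(row)
```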
Each of these folders should also contain a csv label file - e.g. `train_labels.csv`, `test_labels.csv` - that can be used to generate `tfrecords`.

Note: While the egohands dataset provides four separate labels for hands (own left, own right, other left, and other right), for my purpose I am only interested in the general `hand` class, so I label all training data as `hand`. You can modify the data prep script to generate `tfrecords` that support 4 labels.

Next: convert your dataset + csv files to tfrecords. A helpful guide on this can be found [here](https://pythonprogramming.net/creating-tfrecord-files-tensorflow-object-detection-api-tutorial/). For each folder, you should be able to generate the `train.record` and `test.record` files required in the training process.

## Training the hand detection Model

Now that the dataset has been assembled (and your tfrecords), the next task is to train a model based on it. With neural networks, it is possible to use a process called [transfer learning](https://www.tensorflow.org/tutorials/image_retraining) to shorten the amount of time needed to train the entire model. This means we can take an existing model that has been trained well on a related domain (here, image classification) and retrain its final layer(s) to detect hands for us. Sweet! Given that neural networks sometimes have thousands or millions of parameters that can take weeks or months to train, transfer learning helps shorten training time to possibly hours. Tensorflow does offer a few models (in the tensorflow [model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models-coco-models)) and I chose to use the `ssd_mobilenet_v1_coco` model as my starting point given it is currently (one of) the fastest models (read the SSD research [paper here](https://arxiv.org/pdf/1512.02325.pdf)).
The training process can be done locally on your CPU machine, which may take a while, or better on a (cloud) GPU machine (which is what I did). For reference, when training on my macbook pro (tensorflow compiled from source to take advantage of the mac's cpu architecture) the maximum speed I got was 5 seconds per step, as opposed to the ~0.5 seconds per step I got with a GPU. It would take about 12 days to run 200k steps on my mac (i7, 2.5GHz, 16GB) compared to ~5hrs on a GPU.

> **Training on your own images**: Please use the [guide provided by Harrison from pythonprogramming](https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/) on how to generate tfrecords given your label csv files and your images. The guide also covers how to start the training process if training locally. If training in the cloud using a service like GCP, see the [guide here](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_cloud.md).

As the training process progresses, the expectation is that total loss (errors) gets reduced to its possible minimum (about a value of 1 or thereabouts). By observing the tensorboard graphs for total loss (see image below), it should be possible to get an idea of when the training process is complete (total loss does not decrease with further iterations/steps). I ran my training job for 200k steps (took about 5 hours) and stopped at a total loss (errors) value of 2.575. (In retrospect, I could have stopped the training at about 50k steps and gotten a similar total loss value.) With tensorflow, you can also run an evaluation concurrently that assesses your model to see how well it performs on the test data. A commonly used metric for performance is mean average precision (mAP), which is a single number used to summarize the area under the precision-recall curve.
mAP is a measure of how well the model generates a bounding box that has at least a 50% overlap with the ground truth bounding box in our test dataset. For the hand detector trained here, the mAP value was **0.9686@0.5IOU**. mAP values range from 0-1; the higher the better. <img src="images/accuracy.jpg" width="100%">

Once training is completed, the trained inference graph (`frozen_inference_graph.pb`) is exported (see the earlier referenced guides for how to do this) and saved in the `hand_inference_graph` folder. Now it's time to do some interesting detection.

## Using the Detector to Detect/Track hands

If you have not done this yet, please follow the guide on installing [Tensorflow and the Tensorflow object detection api](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md). This will walk you through setting up the tensorflow framework and cloning the tensorflow github repo. The main steps are:

- Load the `frozen_inference_graph.pb` trained on the hands dataset as well as the corresponding label map. In this repo, this is done in the `utils/detector_utils.py` script by the `load_inference_graph` method.

```python
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
    sess = tf.Session(graph=detection_graph)
print("> ====== Hand Inference graph loaded.")
```

- Detect hands. In this repo, this is done in the `utils/detector_utils.py` script by the `detect_objects` method.

```python
(boxes, scores, classes, num) = sess.run(
    [detection_boxes, detection_scores, detection_classes, num_detections],
    feed_dict={image_tensor: image_np_expanded})
```

- Visualize the detected bounding boxes. In this repo, this is done in the `utils/detector_utils.py` script by the `draw_box_on_image` method.
This repo contains two scripts that tie all these steps together.

- detect_multi_threaded.py : A threaded implementation for reading camera video input and running detection. Takes a set of command line flags to set parameters such as `--display` (visualize detections), the image parameters `--width` and `--height`, and the video `--source` (0 for camera) etc.
- detect_single_threaded.py : Same as above, but single threaded. This script also works for video files by setting the video source parameter `--source` (path to a video file).

```cmd
# load and run detection on video at path "videos/chess.mov"
python detect_single_threaded.py --source videos/chess.mov
```

> Update: If you do have errors loading the frozen inference graph in this repo, feel free to generate a new graph that fits your TF version from the model-checkpoint in this repo. Use the [export_inference_graph.py](https://github.com/tensorflow/models/blob/master/research/object_detection/export_inference_graph.py) script provided in the tensorflow object detection api repo. More guidance on this [here](https://pythonprogramming.net/testing-custom-object-detector-tensorflow-object-detection-api-tutorial/?completed=/training-custom-objects-tensorflow-object-detection-api-tutorial/).

## Thoughts on Optimization

A few things led to noticeable performance increases.

- Threading: It turns out that reading images from a webcam is a heavy I/O event, and if run on the main application thread it can slow down the program. I implemented some good ideas from [Adrian Rosebrock](https://www.pyimagesearch.com/2017/02/06/faster-video-file-fps-with-cv2-videocapture-and-opencv/) on parallelizing image capture across multiple worker threads. This mostly led to an FPS increase of about 5 points.
- For those new to Opencv, images from the `cv2.read()` method are returned in [BGR format](https://www.learnopencv.com/why-does-opencv-use-bgr-color-format/).
Ensure you convert to RGB before detection (accuracy will be much reduced if you don't).

```python
cv2.cvtColor(image_np, cv2.COLOR_BGR2RGB)
```

- Keeping your input image small will increase fps without any significant accuracy drop. (I used about 320 x 240 compared to the 1280 x 720 which my webcam provides.)
- Model quantization. Moving from the current 32 bit to 8 bit can achieve up to a 4x reduction in the memory required to load and store models. One way to further speed up this model is to explore the use of [8-bit fixed point quantization](https://heartbeat.fritz.ai/8-bit-quantization-and-tensorflow-lite-speeding-up-mobile-inference-with-low-precision-a882dfcafbbd).

Performance can also be increased by a clever combination of tracking algorithms with the already decent detection, and this is something I am still experimenting with. If you have ideas for optimizing further, please share! <img src="images/general.jpg" width="100%">

Note: The detector does reflect some limitations associated with the training set. These include non-egocentric viewpoints, very noisy backgrounds (e.g. in a sea of hands) and sometimes skin tone. There is an opportunity to improve on these with additional data.

## Integrating Multiple DNNs

One way to make things more interesting is to integrate our new knowledge of where "hands" are with other detectors trained to recognize other objects. Unfortunately, while our hand detector can in fact detect hands, it cannot detect other objects (a consequence of how it is trained). Creating a detector that classifies multiple different objects would mean a long, involved process of assembling datasets for each class and a lengthy training process.

> Given the above, a potential strategy is to explore structures that allow us to **efficiently** interleave output from multiple pretrained models for various object classes and have them detect multiple objects on a single image.
An example of this is my primary use case, where I am interested in understanding the position of objects on a table with respect to hands on the same table. I am currently doing some work on a threaded application that loads multiple detectors and outputs bounding boxes on a single image. More on this soon.
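As a footnote on the threaded capture idea mentioned in the optimization notes above, the producer/consumer pattern can be sketched as follows. This is a minimal sketch, not code from this repo: a counter stands in for `cv2.VideoCapture.read()`, and a one-slot queue keeps only the freshest frame so the slower detection loop never processes stale input.

```python
# Producer thread reads frames; main thread consumes the most recent one.
import itertools
import queue
import threading

def capture_worker(read_frame, frame_q, stop_event):
    """Continuously read frames and publish them to a one-slot queue."""
    while not stop_event.is_set():
        frame = read_frame()
        try:
            frame_q.put_nowait(frame)
        except queue.Full:
            pass  # drop stale frame; detection is slower than capture

frame_q = queue.Queue(maxsize=1)
stop = threading.Event()
frames = itertools.count()  # stand-in frame source (would be the webcam)

t = threading.Thread(target=capture_worker,
                     args=(lambda: next(frames), frame_q, stop), daemon=True)
t.start()
latest = frame_q.get(timeout=1)  # main thread: run detection on `latest`
stop.set()
t.join(timeout=1)
print(latest)
```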
aaronjmars
Deep Research for crypto - free & fully local
OctagonAI
A free MCP server to analyze and extract insights from public filings, earnings transcripts, financial metrics, stock market data, private market transactions, and deep web-based research within Claude Desktop and other popular MCP clients.
Aryia-Behroziuan
An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times. The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis. Deep learning consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. 
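The weighted-sum-plus-nonlinearity behavior described above can be sketched in a few lines of plain Python. This is a toy illustration: the weights, bias, threshold and sigmoid choice are arbitrary, and a real network learns its weights rather than hard-coding them.

```python
import math

def sigmoid(z):
    """A common non-linear activation function."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias, threshold=None):
    """One artificial neuron: a non-linear function of its weighted input sum."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    out = sigmoid(z)
    if threshold is not None and out < threshold:
        return 0.0  # aggregate signal below threshold: no signal sent
    return out

# A tiny two-layer network: two hidden neurons feeding one output neuron.
x = [0.5, -1.0]
hidden = [neuron(x, [1.0, 0.5], 0.0), neuron(x, [-0.5, 1.0], 0.1)]
y = neuron(hidden, [2.0, -1.0], 0.0)
print(y)
```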
Some successful applications of deep learning are computer vision and speech recognition.[68] Decision trees Main article: Decision tree learning Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making. Support vector machines Main article: Support vector machines Support vector machines (SVMs), also known as support vector networks, are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other.[69] An SVM training algorithm is a non-probabilistic, binary, linear classifier, although methods such as Platt scaling exist to use SVM in a probabilistic classification setting. In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces. Illustration of linear regression on a data set. 
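The classification-tree idea above can be illustrated at its smallest scale: a single-split "decision stump" that chooses the threshold minimizing training errors. This is a toy sketch with made-up data, not a full tree learner.

```python
# A one-split classification tree: predict class 1 when x >= threshold.
def best_stump(xs, labels):
    """Return (threshold, errors) minimizing misclassifications on (xs, labels)."""
    best = (None, len(labels) + 1)
    for t in sorted(set(xs)):
        errors = sum((x >= t) != y for x, y in zip(xs, labels))
        if errors < best[1]:
            best = (t, errors)
    return best

xs =     [1.0, 2.0, 3.0, 6.0, 7.0, 8.0]
labels = [0,   0,   0,   1,   1,   1]
t, errors = best_stump(xs, labels)
print(t, errors)
```

A real decision-tree learner applies this kind of split search recursively, growing branches until the leaves are (nearly) pure.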
Regression analysis Main article: Regression analysis Regression analysis encompasses a large variety of statistical methods to estimate the relationship between input variables and their associated features. Its most common form is linear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such as ordinary least squares. The latter is often extended by regularization (mathematics) methods to mitigate overfitting and bias, as in ridge regression. When dealing with non-linear problems, go-to models include polynomial regression (for example, used for trendline fitting in Microsoft Excel[70]), logistic regression (often used in statistical classification) or even kernel regression, which introduces non-linearity by taking advantage of the kernel trick to implicitly map input variables to higher-dimensional space. Bayesian networks Main article: Bayesian network A simple Bayesian network. Rain influences whether the sprinkler is activated, and both rain and the sprinkler influence whether the grass is wet. A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning. Bayesian networks that model sequences of variables, like speech signals or protein sequences, are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams. 
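The ordinary-least-squares criterion mentioned above has a closed form for a single input variable. A minimal sketch, with toy data chosen to lie exactly on y = 2x + 1:

```python
# Ordinary least squares for simple linear regression: fit y = a*x + b
# by minimizing the sum of squared errors.
def ols_fit(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))  # covariance term
    sxx = sum((x - mx) ** 2 for x in xs)                    # variance term
    a = sxy / sxx
    b = my - a * mx
    return a, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # exactly y = 2x + 1
a, b = ols_fit(xs, ys)
print(a, b)
```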
Genetic algorithms Main article: Genetic algorithm A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s.[71][72] Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms.[73] Training models Machine learning models usually require a lot of data in order to perform well. When training a machine learning model, one needs to collect a large, representative sample of data from a training set. Data from the training set can be as varied as a corpus of text, a collection of images, or data collected from individual users of a service. Overfitting is something to watch out for when training a machine learning model. Federated learning Main article: Federated learning Federated learning is an adapted form of distributed artificial intelligence for training machine learning models that decentralizes the training process, allowing users' privacy to be maintained by not needing to send their data to a centralized server. This also increases efficiency by distributing the training process across many devices.
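The selection/crossover/mutation loop described above can be sketched for a one-dimensional toy problem. All parameters here (population size, averaging crossover, Gaussian mutation scale) are illustrative choices, not from any particular GA library.

```python
import random

random.seed(0)  # deterministic toy run

def fitness(x):
    return -(x - 3.0) ** 2  # single peak at x = 3

def evolve(pop, generations=60, mutation=0.3):
    """Evolve a population of floats toward higher fitness."""
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]             # selection: keep the fittest half
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2.0                  # crossover (averaging)
            child += random.gauss(0, mutation)     # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

start = [random.uniform(-10, 10) for _ in range(20)]
best = evolve(start)
print(best)
```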
For example, Gboard uses federated machine learning to train search query prediction models on users' mobile phones without having to send individual searches back to Google.[74] Applications There are many applications for machine learning, including: agriculture, anatomy, adaptive websites, affective computing, banking, bioinformatics, brain–machine interfaces, cheminformatics, citizen science, computer networks, computer vision, credit-card fraud detection, data quality, DNA sequence classification, economics, financial market analysis,[75] general game playing, handwriting recognition, information retrieval, insurance, internet fraud detection, linguistics, machine learning control, machine perception, machine translation, marketing, medical diagnosis, natural language processing, natural language understanding, online advertising, optimization, recommender systems, robot locomotion, search engines, sentiment analysis, sequence mining, software engineering, speech recognition, structural health monitoring, syntactic pattern recognition, telecommunication, theorem proving, time series forecasting, and user behavior analytics. In 2006, the media-services provider Netflix held the first "Netflix Prize" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%.
A joint team made up of researchers from AT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built an ensemble model to win the Grand Prize in 2009 for $1 million.[76] Shortly after the prize was awarded, Netflix realized that viewers' ratings were not the best indicators of their viewing patterns ("everything is a recommendation") and they changed their recommendation engine accordingly.[77] In 2010 The Wall Street Journal wrote about the firm Rebellion Research and their use of machine learning to predict the financial crisis.[78] In 2012, co-founder of Sun Microsystems, Vinod Khosla, predicted that 80% of medical doctors' jobs would be lost in the next two decades to automated machine learning medical diagnostic software.[79] In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings and that it may have revealed previously unrecognized influences among artists.[80] In 2019 Springer Nature published the first research book created using machine learning.[81] Limitations Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results.[82][83][84] Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems.[85] In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed after a collision.[86] Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver even after years of time and billions of dollars invested.[87][88] Bias Main article: Algorithmic bias Machine learning approaches in particular can suffer from different data biases. 
A machine learning system trained on current customers only may not be able to predict the needs of new customer groups that are not represented in the training data. When trained on man-made data, machine learning is likely to pick up the same constitutional and unconscious biases already present in society.[89] Language models learned from data have been shown to contain human-like biases.[90][91] Machine learning systems used for criminal risk assessment have been found to be biased against black people.[92][93] In 2015, Google photos would often tag black people as gorillas,[94] and in 2018 this still was not well resolved, but Google reportedly was still using the workaround to remove all gorillas from the training data, and thus was not able to recognize real gorillas at all.[95] Similar issues with recognizing non-white people have been found in many other systems.[96] In 2016, Microsoft tested a chatbot that learned from Twitter, and it quickly picked up racist and sexist language.[97] Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains.[98] Concern for fairness in machine learning, that is, reducing bias in machine learning and propelling its use for human good is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who reminds engineers that "There’s nothing artificial about AI...It’s inspired by people, it’s created by people, and—most importantly—it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility.”[99] Model assessments Classification of machine learning models can be validated by accuracy estimation techniques like the holdout method, which splits the data in a training and test set (conventionally 2/3 training set and 1/3 test set designation) and evaluates the performance of the training model on the test set. 
In comparison, the K-fold-cross-validation method randomly partitions the data into K subsets and then K experiments are performed each respectively considering 1 subset for evaluation and the remaining K-1 subsets for training the model. In addition to the holdout and cross-validation methods, bootstrap, which samples n instances with replacement from the dataset, can be used to assess model accuracy.[100] In addition to overall accuracy, investigators frequently report sensitivity and specificity meaning True Positive Rate (TPR) and True Negative Rate (TNR) respectively. Similarly, investigators sometimes report the false positive rate (FPR) as well as the false negative rate (FNR). However, these rates are ratios that fail to reveal their numerators and denominators. The total operating characteristic (TOC) is an effective method to express a model's diagnostic ability. TOC shows the numerators and denominators of the previously mentioned rates, thus TOC provides more information than the commonly used receiver operating characteristic (ROC) and ROC's associated area under the curve (AUC).[101] Ethics Machine learning poses a host of ethical questions. Systems which are trained on datasets collected with biases may exhibit these biases upon use (algorithmic bias), thus digitizing cultural prejudices.[102] For example, using job hiring data from a firm with racist hiring policies may lead to a machine learning system duplicating the bias by scoring job applicants against similarity to previous successful applicants.[103][104] Responsible collection of data and documentation of algorithmic rules used by a system thus is a critical part of machine learning. Because human languages contain biases, machines trained on language corpora will necessarily also learn these biases.[105][106] Other forms of ethical challenges, not related to personal biases, are more seen in health care. 
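The K-fold procedure described above amounts to partitioning the index set so that each index is held out exactly once. A minimal sketch, where deterministic striding stands in for the random partitioning described above, for brevity:

```python
# K-fold cross-validation index partitioning: each of the K experiments
# holds out one fold for evaluation and trains on the remaining K-1 folds.
def k_fold_splits(n, k):
    indices = list(range(n))
    folds = [indices[i::k] for i in range(k)]  # k disjoint folds
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(k_fold_splits(10, 5))
for train, test in splits:
    print(len(train), len(test))
```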
There are concerns among health care professionals that these systems might not be designed in the public's interest but as income-generating machines. This is especially true in the United States, where there is a long-standing ethical dilemma between improving health care and increasing profits. For example, the algorithms could be designed to provide patients with unnecessary tests or medication in which the algorithm's proprietary owners hold stakes. There is huge potential for machine learning in health care to provide professionals with great tools to diagnose, medicate, and even plan recovery paths for patients, but this will not happen until the personal biases mentioned previously and these "greed" biases are addressed.[107]

Hardware

Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks (a particular narrow subdomain of machine learning) that contain many layers of non-linear hidden units.[108] By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI.[109] OpenAI estimated the hardware compute used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017), and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months.[110][111]

Software

Software suites containing a variety of machine learning algorithms include the following: Free and open-source software
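The model assessment techniques described above (the holdout split, K-fold cross-validation, and sensitivity/specificity) can be sketched briefly with scikit-learn; the synthetic dataset and the choice of logistic regression are illustrative assumptions, not from the text.

```python
# A minimal sketch of the validation techniques described above, using
# scikit-learn on synthetic data (dataset and model choice are illustrative).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=300, random_state=0)

# Holdout: the conventional 2/3 training, 1/3 test designation.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# K-fold cross-validation: K experiments, each evaluating on one held-out
# subset and training on the remaining K-1.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)

# Sensitivity (TPR) and specificity (TNR), with the numerators and
# denominators made explicit -- the information a bare rate hides.
tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)  # TPR
specificity = tn / (tn + fp)  # TNR
```

Reporting the confusion-matrix counts alongside the rates is exactly the extra information the TOC argument above is about.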
apengsigkarup
OceanWave3D - a very efficient coastal engineering research tool used worldwide for simulation of nonlinear and dispersive free surface waves in varying bathymetries from very deep to shallow water. Learn more about the model:
aifa-agi
Free, open-source Next.js starter kit to build, deploy, and scale intelligent AI applications. Includes an Artifacts feature, secure multi-provider auth, Stripe payments, vector knowledge bases, deep-research agents, and a unique fractal architecture designed for the future of AI.
AI-powered deep research tool leveraging web scraping for cost-effective, comprehensive analysis. Open-source and API-cost free!
Mario-Kart-Felix
2020 was a roller coaster of major, world-shaking events. We all couldn't wait for the year to end. But just as 2020 was about to close, it pulled another fast one on us: the SolarWinds hack, one of the biggest cybersecurity breaches of the 21st century. The SolarWinds hack was a major event not because a single company was breached, but because it triggered a much larger supply chain incident that affected thousands of organizations, including the U.S. government.

What is SolarWinds?

SolarWinds is a major software company based in Austin, Texas, which provides system management tools for network and infrastructure monitoring, and other technical services, to hundreds of thousands of organizations around the world. Among the company's products is an IT performance monitoring system called Orion. As an IT monitoring system, SolarWinds Orion has privileged access to IT systems to obtain log and system performance data. It is that privileged position, and its wide deployment, that made SolarWinds a lucrative and attractive target.

What is the SolarWinds hack?

The SolarWinds hack is the commonly used term for the supply chain breach that involved the SolarWinds Orion system. In this hack, suspected nation-state hackers -- identified by Microsoft as a group known as Nobelium, and often simply referred to as the SolarWinds hackers by other researchers -- gained access to the networks, systems and data of thousands of SolarWinds customers. The breadth of the hack is unprecedented; it is one of the largest, if not the largest, of its kind ever recorded. More than 30,000 public and private organizations -- including local, state and federal agencies -- use the Orion network management system to manage their IT resources. As a result, the hack compromised the data, networks and systems of thousands when SolarWinds inadvertently delivered the backdoor malware as an update to the Orion software. SolarWinds customers weren't the only ones affected.
Because the hack exposed the inner workings of Orion users, the hackers could potentially gain access to the data and networks of their customers and partners as well -- enabling the pool of affected victims to grow exponentially from there. Hackers compromised a digitally signed SolarWinds Orion network monitoring component, opening a backdoor into the networks of thousands of SolarWinds government and enterprise customers.

How did the SolarWinds hack happen?

The hackers used a method known as a supply chain attack to insert malicious code into the Orion system. A supply chain attack works by targeting a third party with access to an organization's systems rather than trying to hack the networks directly. The third-party software, in this case the SolarWinds Orion Platform, creates a backdoor through which hackers can access and impersonate users and accounts of victim organizations. The malware could also access system files and blend in with legitimate SolarWinds activity without detection, even by antivirus software. SolarWinds was a perfect target for this kind of supply chain attack. Because its Orion software is used by many multinational companies and government agencies, all the hackers had to do was install the malicious code into a new batch of software distributed by SolarWinds as an update or patch.

The SolarWinds hack timeline

Here is a timeline of the SolarWinds hack:

September 2019. Threat actors gain unauthorized access to the SolarWinds network.
October 2019. Threat actors test an initial code injection into Orion.
Feb. 20, 2020. Malicious code known as Sunburst is injected into Orion.
March 26, 2020. SolarWinds unknowingly starts sending out Orion software updates with the hacked code.

According to a U.S. Department of Homeland Security advisory, the affected versions of SolarWinds Orion are 2019.4 through 2020.2.1 HF1.
More than 18,000 SolarWinds customers installed the malicious updates, with the malware spreading undetected. Through this code, hackers accessed SolarWinds customers' information technology systems, which they could then use to install even more malware to spy on other companies and organizations.

Who was affected?

According to reports, the malware affected many companies and organizations. Even government departments such as Homeland Security, State, Commerce and Treasury were affected, as there was evidence that emails were missing from their systems. Private companies such as FireEye, Microsoft, Intel, Cisco and Deloitte also suffered from this attack. The breach was first detected by cybersecurity company FireEye, which confirmed it had been infected with the malware when it saw the infection in customer systems. FireEye labeled the SolarWinds hack "UNC2452" and identified the backdoor used to gain access to its systems through SolarWinds as "Sunburst." Microsoft also confirmed that it found signs of the malware in its systems, as the breach was affecting its customers as well. Reports indicated Microsoft's own systems were being used to further the hacking attack, but Microsoft denied this claim to news agencies. Later, the company worked with FireEye and GoDaddy to block and isolate versions of Orion known to contain the malware, cutting hackers off from customers' systems. They did so by turning the domain used by the backdoor malware in Orion into a kill switch, which served as a mechanism to prevent Sunburst from operating further. Nonetheless, even with the kill switch in place, the hack is still ongoing. Investigators have a lot of data to look through, as many companies using the Orion software aren't yet sure whether they are free of the backdoor malware. It will take a long time before the full impact of the hack is known.

Why did it take so long to detect the SolarWinds attack?
With attackers having first gained access to SolarWinds systems in September 2019, and the attack not being publicly discovered or reported until December 2020, the attackers may well have had 14 or more months of unfettered access. The time between when an attacker gains access and when the attack is actually discovered is often referred to as dwell time. According to a report released in January 2020 by security firm CrowdStrike, the average dwell time in 2019 was 95 days. Given that it took well over a year from the time the attackers first entered the SolarWinds network until the breach was discovered, the dwell time in this attack far exceeded the average. The question of why it took so long to detect the SolarWinds attack has a lot to do with the sophistication of the Sunburst code and of the hackers that executed the attack. "Analysis suggests that by managing the intrusion through multiple servers based in the United States and mimicking legitimate network traffic, the attackers were able to circumvent threat detection techniques employed by both SolarWinds, other private companies, and the federal government," SolarWinds said in its analysis of the attack. FireEye, the first firm to publicly report the attack, conducted its own analysis. In its report, FireEye described in detail the complex series of actions the attackers took to mask their tracks. Even before Sunburst attempts to connect out to its command-and-control server, the malware executes a number of checks to make sure no antimalware or forensic analysis tools are running.

What was the purpose of the hack?

The purpose of the hack remains largely unknown. Still, there are many reasons hackers would want to get into an organization's system, including gaining access to future product plans or holding employee and customer information for ransom. It is also not yet clear what information, if any, hackers stole from government agencies.
But the level of access appears to be deep and broad. There is speculation that many enterprises might be collateral damage, as the main focus of the attack was government agencies that use the SolarWinds IT management systems.

Who was responsible for the hack?

Federal investigators and cybersecurity agents believe a Russian espionage operation -- most likely Russia's Foreign Intelligence Service -- is behind the SolarWinds attack. The Russian government has denied any involvement in the attack, releasing a statement that said, "Malicious activities in the information space contradicts the principles of the Russian foreign policy, national interests and understanding of interstate relations." It also added that "Russia does not conduct offensive operations in the cyber domain." Contrary to experts in his administration, then-President Donald Trump hinted around the time of the discovery of the SolarWinds hack that Chinese hackers might be behind the attack. However, he did not present any evidence to back up his claim. Shortly after his inauguration, President Joe Biden vowed that his administration intended to hold Russia accountable through the launch of a full-scale intelligence assessment and review of the SolarWinds attack and those behind it. The president also created the position of deputy national security adviser for cybersecurity as part of the National Security Council. The role, held by veteran intelligence operative Anne Neuberger, is part of an overall bid by the Biden administration to refresh the federal government's approach to cybersecurity and better respond to nation-state actors.

Naming the attack: What are Solorigate, Sunburst and Nobelium?

The SolarWinds attack has a number of different names associated with it. While the attack is often referred to simply as the SolarWinds attack, that isn't the only name to know.

Sunburst.
This is the name of the malicious code injection that hackers planted into the SolarWinds Orion IT monitoring system code. Both SolarWinds and CrowdStrike generally refer to the attack as Sunburst.

Solorigate. Microsoft initially dubbed the threat actor group behind the SolarWinds attack Solorigate. It's a name that stuck and was adopted by other researchers as well as the media.

Nobelium. In March 2021, Microsoft decided that the primary designation for the threat actor behind the SolarWinds attack should actually be Nobelium -- the idea being that the group is active against multiple victims, not just SolarWinds, and uses more malware than just Sunburst.

The China connection to the SolarWinds attack

While it is suspected that the initial Sunburst code and the attack against SolarWinds and its users came from a threat actor based in Russia, other nation-state threat actors have also used SolarWinds in attacks. According to a Reuters report, suspected nation-state hackers based in China exploited SolarWinds during the same period the Sunburst attack occurred. The suspected China-based threat actors targeted the National Finance Center, a payroll agency within the U.S. Department of Agriculture. It is suspected that the China-based attackers did not use Sunburst, but rather a different malware that SolarWinds identifies as Supernova.

Why is the SolarWinds hack important?

The SolarWinds supply chain attack is a global hack: threat actors turned the Orion software into a weapon, gaining access to several government systems and thousands of private systems around the world. Because the software -- and, by extension, the Sunburst malware -- has access to entire networks, many government and enterprise networks and systems face the risk of significant breaches. The hack could also be the catalyst for rapid, broad change in the cybersecurity industry.
Many companies and government agencies are now devising new methods to react to these types of attacks before they happen. Governments and organizations are learning that it is not enough to build a firewall and hope it protects them. They have to actively seek out vulnerabilities in their systems and either shore them up or turn them into traps against these types of attacks. Since the hack was discovered, SolarWinds has recommended customers update their existing Orion platform. The company has released patches for the malware and for other potential vulnerabilities discovered since the initial Orion attack. SolarWinds also recommended that customers unable to update Orion isolate SolarWinds servers and/or change passwords for accounts that have access to those servers. The greater White House focus on cybersecurity will be crucial, some industry experts have said, but organizations should also consider adopting modern software-as-a-service tools for monitoring and collaboration. While the cybersecurity industry has advanced significantly in the last decade, these kinds of attacks show that there is still a long way to go to get truly secure systems.

The Nobelium group continues to attack targets

The suspected threat actor group behind the SolarWinds attack remained active in 2021 and hasn't stopped at just targeting SolarWinds. On May 27, 2021, Microsoft reported that Nobelium, the group allegedly behind the SolarWinds attack, infiltrated software from email marketing service Constant Contact. According to Microsoft, Nobelium targeted approximately 3,000 email accounts at more than 150 organizations. The initial attack vector appears to be an account used by USAID. From that initial foothold, Nobelium was able to send out phishing emails in an attempt to get victims to click on a link that would deploy a backdoor Trojan designed to steal user information.
usemanusai
A revolutionary, multi-component research automation platform that combines advanced AI agent orchestration, cross-platform desktop applications, containerized deployments, and enterprise-grade intelligence capabilities. Features complete BMAD AI Agent integration, distributed computing, real-time collaboration, and autonomous research capabilities
ajaybhatiya1234
Read the technical deep dive: https://www.dessa.com/post/deepfake-detection-that-actually-works

# Visual DeepFake Detection

In our recent [article](https://www.dessa.com/post/deepfake-detection-that-actually-works), we make the following contributions:

* We show that the model proposed in the current state of the art in video manipulation detection (FaceForensics++) does not generalize to real-life videos randomly collected from Youtube.
* We show the need for the detector to be constantly updated with real-world data, and propose an initial solution in hopes of solving deepfake video detection.

Our PyTorch implementation conducts extensive experiments to demonstrate that the datasets produced by Google and detailed in the FaceForensics++ paper are not sufficient for making neural networks generalize to detect real-life face manipulation techniques. It also provides a current solution for such behavior, which relies on adding more data. Our PyTorch model is based on a ResNet18 pre-trained on ImageNet, which we finetune to solve the deepfake detection problem. We also conduct large-scale experiments using Dessa's open source scheduler + experiment manager [Atlas](https://github.com/dessa-research/atlas).

## Setup

## Prerequisites

To run the code, your system should meet the following requirements: RAM >= 32GB, GPUs >= 1.

## Steps

0. Install [nvidia-docker](https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0))
1. Install [ffmpeg](https://www.ffmpeg.org/download.html) or `sudo apt install ffmpeg`
2. Git clone this repository.
3. If you haven't already, install [Atlas](https://github.com/dessa-research/atlas).
4. Once you've installed Atlas, activate your environment if you haven't already, and navigate to your project folder.

That's it, you're ready to go!

## Datasets

Half of the dataset used in this project is from the [FaceForensics](https://github.com/ondyari/FaceForensics/tree/master/dataset) deepfake detection dataset.
To download this data, please make sure to fill out the [google form](https://github.com/ondyari/FaceForensics/#access) to request access to the data. The dataset that we collected from Youtube is accessible on [S3](https://deepfake-detection.s3.amazonaws.com/augment_deepfake.tar.gz) for download. To automatically download and restructure both datasets, please execute:

```
bash restructure_data.sh faceforensics_download.py
```

Note: You need to have received the download script from the FaceForensics++ team before executing the restructure script.

Note 2: We created `restructure_data.sh` to do a split that replicates our exact experiments available in the UI above; please feel free to change the splits as you wish.

## Walkthrough

Before starting to train/evaluate models, we should first create the docker image that we will run our experiments with. We have already prepared a dockerfile for this inside `custom_docker_image`. To create the docker image, execute the following commands in a terminal:

```
cd custom_docker_image
nvidia-docker build . -t atlas_ff
```

Note: if you change the image name, please make sure you also modify line 16 of `job.config.yaml` to match the docker image name.

Inside `job.config.yaml`, please modify the data path on the host from `/media/biggie2/FaceForensics/datasets/` to the absolute path of your `datasets` folder.
The folder containing your datasets should have the following structure:

```
datasets
├── augment_deepfake (2)
│   ├── fake
│   │   └── frames
│   ├── real
│   │   └── frames
│   └── val
│       ├── fake
│       └── real
├── base_deepfake (1)
│   ├── fake
│   │   └── frames
│   ├── real
│   │   └── frames
│   └── val
│       ├── fake
│       └── real
├── both_deepfake (3)
│   ├── fake
│   │   └── frames
│   ├── real
│   │   └── frames
│   └── val
│       ├── fake
│       └── real
├── precomputed (4)
└── T_deepfake (0)
    ├── manipulated_sequences
    │   ├── DeepFakeDetection
    │   ├── Deepfakes
    │   ├── Face2Face
    │   ├── FaceSwap
    │   └── NeuralTextures
    └── original_sequences
        ├── actors
        └── youtube
```

Notes:

* (0) is the dataset downloaded using the FaceForensics repo scripts.
* (1) is a reshaped version of the FaceForensics data to match the structure expected by the codebase. Subfolders called `frames` contain frames collected using `ffmpeg`.
* (2) is the augmented dataset, collected from Youtube, available on S3.
* (3) is the combination of both the base and augmented datasets.
* (4) `precomputed` will be automatically created during training. It holds cached cropped frames.

Then, to run all the experiments we will show in the article to come, you can launch the script `hparams_search.py` using:

```bash
python hparams_search.py
```

## Results

In the following pictures, the title for each subplot is in the form `real_prob, fake_prob | prediction | label`.

#### Model trained on FaceForensics++ dataset

For models trained on the paper dataset alone, we notice that the model only learns to detect the manipulation techniques mentioned in the paper and misses all the manipulations in real-world data.

#### Model trained on Youtube dataset

Models trained on the Youtube data alone learn to detect real-world deepfakes, and also learn to detect easy deepfakes in the paper dataset. These models however fail to detect any other type of manipulation (such as NeuralTextures).
#### Model trained on Paper + Youtube datasets

Finally, models trained on the combination of both datasets learn to detect both real-world manipulation techniques and the other methods mentioned in the FaceForensics++ paper.

For a more in-depth explanation of these results, please refer to the [article](https://www.dessa.com/post/deepfake-detection-that-actually-works) we published. More results can be seen in the [interactive UI](http://deepfake-detection.dessa.com/projects).

## Help improve this technology

Please feel free to fork this work and keep pushing on it. If you also want to help improve the deepfake detection datasets, please share your real/forged samples at foundations@dessa.com.

## LICENSE

© 2020 Square, Inc. ATLAS, DESSA, the Dessa Logo, and others are trademarks of Square, Inc. All third party names and trademarks are properties of their respective owners and are used for identification purposes only.
Using-Deep-Learning-Techniques-perform-Fracture-Detection-Image-Processing

Using different image processing techniques, implementing fracture detection on X-ray images with a dataset of 8,000+ images.

About the project: Bones are the stiff organs that protect vital organs such as the brain, heart, and lungs in the human body. There are 206 bones in the human body, all of which have different shapes, sizes, and structures. The femur is the largest bone, and the auditory ossicles are the smallest. Humans suffer bone fractures on a regular basis. Bone fractures can happen as a result of an accident or any other situation in which the bones are put under a lot of pressure. Oblique, complex, comminuted, spiral, greenstick, and transverse bone fractures are among the many forms that can occur. X-ray, computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, and other types of medical imaging techniques are available to detect various types of disorders. We therefore design architectures using different neural network models, compare their accuracy, and determine which model works better for our dataset of 10 classes and delivers correct results. Our main motive is to check which model works better on our dataset, so that in future we have an idea of which model gives better accuracy for a comparable dataset.
Proposed method: We decided to make this project because we have often seen that computer-generated reports sometimes contain errors, so we wanted to find out which model gives good accuracy and produces fewer errors. We researched image processing and the libraries used for it, such as Keras, Matplotlib, ImageDataGenerator, and TensorFlow, used some of them, and implemented different image processing models: CNN, VGG-16, ResNet50, and InceptionV3. To find the model with the best accuracy, we generate a classification report using predefined libraries in Python (precision, recall, R2 score, mean squared error, etc.) by importing scikit-learn.

Methodology:

Phase 1: Requirement analysis:
• Study concepts of basic Python programming.
• Study the TensorFlow, Keras and Python API interfaces.
• Study basic image processing algorithms and neural network and deep learning concepts.
• Collect the dataset from different resources and organize it into different classes (5 fractured + 5 non-fractured).

Phase 2: Designing and development: The design and development stages are further segmented. This step starts with data from the requirement and analysis phase, which leads to the model construction phase, where a model is created and an algorithm is devised. After the algorithm design phase is completed, the focus shifts to algorithm analysis and implementation.

Phase 3: Coding phase: Before real coding begins, the task is divided into modules/units and assigned to team members once the system design documents are received. Because code is developed during this phase, it is the developers' primary focus. It will be the most time-consuming part of the project.
This project's implementation begins with the development of a program in the relevant programming language and the production of an error-free executable program.

Phase 4: Testing phase: In the testing phase, we test each model based on the classification report it generates, which contains a variety of metrics such as accuracy, F1 score, precision, and recall; we can also test a model based on its training and testing accuracy.

Phase 5: Deployment phase: One of our goals is to bring all of the previous steps together and put them into practice. Another goal is to deploy our model into a Python-based interface application after comparing the classification reports and determining which model is best for our dataset.
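The testing phase described above can be sketched with scikit-learn's classification report; the labels below are dummy placeholders standing in for one model's test-set output, not the project's actual 10 classes.

```python
from sklearn.metrics import classification_report, accuracy_score

# Dummy ground truth and predictions; in the project, each model
# (CNN, VGG-16, ResNet50, InceptionV3) would be scored this way and
# the resulting reports compared.
y_true = ["fractured", "fractured", "normal", "normal", "fractured", "normal"]
y_pred = ["fractured", "normal", "normal", "normal", "fractured", "normal"]

print(classification_report(y_true, y_pred))  # precision, recall, F1 per class
print("accuracy:", accuracy_score(y_true, y_pred))
```

Running the same report for each trained model gives directly comparable per-class precision/recall/F1 tables, which is the comparison the project's Phase 4 calls for.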
srajan-kiyotaka
Artificial Intelligence, Machine learning and Deep Learning Resources. 🚀 FREE AI/ML/DL Resources - 🎓 Courses, 📝 Blogs, 🔬 Research, and many more - for everyone!
Kuberwastaken
My free implementation of @dzhng's implementation of OpenAI's new Deep Research agent. Get (almost) the same capability for free. You can even tweak the behavior of the agent with adjustable breadth and depth. Run it for 5 min or 5 hours, it'll auto adjust :)
XiaodongYangQF
The CQF exams cover a wide range of topics, including derivatives pricing and modeling, portfolio & risk management, machine learning, deep learning and numerical methods. Feel free to look around and get in touch if you’d like to chat research.
usemanusai
Production-ready Python 3.13+ CLI/API system with Adaptive RAG, multi-engine TTS, OpenRouter key rotation, FastAPI backend, and Next.js dashboard
QinHsiu
This repository contains datasets for deep model training, AI-related competitions, websites for learning AI for free, online practice sites, outsourcing websites, tools you can use for research and building, and some open-source tools.
This repository summarizes research studies related to deep learning with posit arithmetic. The studies are sorted by publication date, either at a conference/journal or on arXiv and OpenReview. Feel free to reach me (Seyed Hamed Fatemi Langroudi) at sf3052@rit.edu if your publication is missing or a publication date is wrong.
MortadhaMannai
Social media has affected society during the last decade by providing free milieus for everyone to share their thoughts, ideas, and news. As a negative effect, these environments have been used for the propagation of low-quality, content-bare, and even outright "fake" news. The spread of fake news has extreme effects on people's minds and on societies, such as decreasing one's trust in all sources of news and making readers defensive against most news channels. The Fight Against Fake News with Deep Learning: fake news detection with LSTMs and BERT. This is why the detection of fake news has recently become one of the top trends in the field of research. Referring to one of the latest works in this area, detecting fake news in social media has exclusive features which cannot be found in traditional methods and approaches for veracity detection, so this work represents
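The repository above names LSTMs (alongside BERT) as its detection approach. A minimal PyTorch sketch of an LSTM-based fake news classifier of that kind follows; the vocabulary size, dimensions, and random data are illustrative assumptions, not the repository's actual model.

```python
import torch
import torch.nn as nn

class FakeNewsLSTM(nn.Module):
    """Toy LSTM classifier: token ids -> probability the article is fake."""
    def __init__(self, vocab_size=10_000, embed_dim=64, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # token ids -> vectors
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)              # logit for P(fake)

    def forward(self, token_ids):
        _, (h_n, _) = self.lstm(self.embed(token_ids))    # last hidden state
        return torch.sigmoid(self.head(h_n[-1])).squeeze(-1)

model = FakeNewsLSTM()
articles = torch.randint(0, 10_000, (8, 100))  # toy batch of tokenized articles
probs = model(articles)                        # one P(fake) per article
```

In practice the random token ids would be replaced by a tokenized news corpus, and the BERT variant mentioned in the description would swap the embedding+LSTM encoder for a pretrained transformer.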
unnatisilks12
Just a few years ago, a company formed by three individuals decided that it would make skateboards and sunglasses from recycled nylon. They were basing their efforts upon "trash" floating in the ocean, which they were determined should get cleaned up if they set the ball rolling and others joined them in the effort. "When we researched ocean waste, we learned that there's a constant stream of nylon fishing nets being dumped into the ocean every year, nets that are just going to sit there for generations. This stuff doesn't break down." Today, the company pays fishermen in Chile to collect old nylon fishing nets, which are then recycled into skateboards and sunglasses.

What is the material called nylon?

Nylon is a type of synthetic fiber fabric, like polyester, made from petroleum products. Nylon was the first fabric made entirely in a laboratory, and its invention represents the dawn of the age of synthetics. Nylon started appearing in stores in 1939 in the form of women's tights, but it was really the Second World War that opened up the market for it, and nylon became widely available to the general public around that time. In fact, during the war it found extensive use in the making of parachutes and other military equipment. Prior to 1945, cotton and wool dominated the market; by the end of the war, synthetic fibers, particularly nylon, had eaten up a significant 25% of the market share. Today it is commonly used to make clothing, backpacks and bags, stockings or tights, outdoor gear such as tents and rope, carpet, and underwear and hosiery; nylon can also be found in the bristles of our toothbrushes, umbrellas, knits, swimwear and activewear, and many other items we use every day.

The advantages of nylon as a material

First developed in the 1930s as an alternative to silk, the fabric has many great qualities. It is lightweight yet strong, and it is often touted for its quick-drying capabilities.
Clothing manufacturers like it because it holds dye well. It is also less expensive to produce than silk and does not get damaged as easily.

The making of nylon for fabric use

Nylons are a family of materials called polyamides, made by reacting carbon-based chemicals found in coal and petroleum in a high-pressure, heated environment. This chemical reaction, known as condensation polymerization, forms a large polymer in the form of a sheet of nylon. To make nylon fabric for apparel, this nylon sheet is broken into chips, melted, and drawn through a mechanical spinneret to produce individual fibres that are woven into fabric. This intensive chemical process results in the strong, stretchy fibres that make nylon so useful as a fabric.

So what is the idea behind recycling nylon?

Since nylon is made of petroleum products, it will not biodegrade. Nylon doesn't break down easily and accounts for about 10% of the debris in the ocean. According to the World Society for the Protection of Animals, more than 600,000 tons of fishing gear is dumped into oceans every year, including nylon nets. Fishermen often discard the nets because the alternative, paying someone to dispose of them properly, is much costlier. For reasons locked deep in polymer chemistry, nylon is more difficult to recycle than polyester. What the company found, after years of research, development, and testing, were recycled nylon fibers that are suitable for apparel and can pass the rigorous tests of manufacturability and product quality. "Although we've been substituting non-recycled polyester for recycled versions for 20 years, only in the last five have we begun swapping out non-recycled nylon for its recycled replacement. Some of the recycled nylon we use comes from post-industrial waste fiber, yarn collected from a spinning factory, and waste from the weaving mills that can be processed into reusable nylon fiber.
Another recycled nylon fiber we are experimenting with is re-created from discarded industrial fishing nets." Though many experiments were conducted and extensive research on converting nylon into a recycled form was carried out, it was only from 2013 onwards that the effort produced the desired results. In any case, incorporating as much recycled nylon as possible lessens the company's dependence on petroleum as a raw material source. It curbs discards, thereby prolonging landfill life and reducing toxic emissions from incinerators. It helps promote new recycling streams for nylon products that are no longer usable. And it causes less air, water, and soil contamination than using non-recycled nylon. Recycling of nylon – a challenge in itself. The economics of recycling nylon are not very appealing, however. Johnston, an associate professor in plastics engineering at the University of Massachusetts Lowell, ran a research program on recycled fishing nets for the company. Nylon, he says, is not an easy or cheap material to recycle. Moreover, virgin polymers, or plastics, are cheap to buy, which may be why many companies choose polyethylene terephthalate (PET), the most common type of plastic found in soda and water bottles, instead. Contamination is another concern. Unlike metals and glass, which are melted at high temperatures, nylon is melted at a lower temperature, meaning some contaminants (non-recyclable materials and microbes or bacteria) can survive. This is why all nylon has to be cleaned thoroughly before recycling. "When you've dragged a fishing net through a boat, on the ocean floor, and wherever else, it's a lot harder to clean before you can recycle it," Johnston says. That's why Johnston is supportive of circular economy business models, in which businesses keep resources in use for as long as possible, extract their maximum value, and then recycle and reuse products and materials.
"What would change the recycling scene is if we were charged per pound for all waste, or if companies had to take back part of what they produced." The company already has an idea along these lines: its sunglasses come with a lifetime warranty. It will fix any pair of glasses free of charge, or provide customers with new frames if their product is beyond repair, and old frames are recycled. Another manufacturer, the Italian firm Aquafil, uses nylon fibers in its carpets. After nearly 40 years of producing carpet yarn, a growing awareness of the environmental harm caused by synthetic materials pushed it toward a more environmentally friendly business model. In 2007, Aquafil began developing a machine that can churn through most kinds of nylon, producing new threads ready to be repurposed. Aquafil now sells these threads, called Econyl, to American brands such as Outerknown, an LA-based outerwear company started by pro surfer Kelly Slater, and swimwear giant Speedo. LA-based Masami Shigematsu works on product development for Speedo. She says she had been actively searching for recycled nylon for years before she found Econyl. "It has to perform well. It can't just be a sustainable material. Our products are being used by athletes who need it to function as good as new material." In 2014, Shigematsu met with Aquafil and started experimenting with the fabric. Last year, Speedo rolled out two products with Econyl and has since expanded to more than 50 products made with the material. Has corporate social responsibility become the modern gold rush? California-based Patagonia has also been adding more recycled nylon to its lineup. Currently, the company has more than 50 products that contain recycled nylon in various percentages. The Torrentshell jackets, for instance, have an outer textile made with 100% chemically recycled nylon.
It took Patagonia nearly 15 years to develop the technology to recycle polyester to the point where it was as good as virgin polyester, and the company wants to go further than just using recycled nylon in its products. How to recycle nylon. Just about everyone has nylon around the home. It is in the backpacks kids take to school, the pantyhose women wear to work, and the cheap, reusable shopping bags everyone hands out these days. Very few places accept nylon for recycling. It is unlikely that you can recycle it through your curbside program, and equally unlikely that your local recycling center will have a handy bin that says, "Put your unwanted nylon here!" Your ability to recycle nylon depends largely on the form it takes; for example, nylon pantyhose are easier to recycle than nylon backpacks. But remember: if you cannot recycle an item made of nylon, you may be able to reuse it rather than putting it in the trash. The problem with nylon is that, like many fabrics, it is difficult to recycle, especially once it has been used. Second-hand fabrics typically need to be cleaned before they can be recycled, and it is often not cost-effective for companies to do so. However, there are a few nylon recycling options out there. How to recycle or reuse nylon bags. Nylon bags are challenging to recycle unless you purchase one from a company that offers a take-back program. San Francisco-based Timbuk2 is one such company. Once your nylon messenger or camera bag is worn out, simply stick it in a box and mail it to the company at the address provided on its website. Timbuk2 will reuse or recycle as many of the materials as possible. There is no charge for the company's recycling services (other than the cost of postage), and customers who send in products to be recycled receive a 20% discount on a future purchase. There may also be creative ways to reuse unwanted nylon bags.
If you have a backpack in good shape that you no longer want, consider donating it to a thrift shop or a program that helps children get school supplies. If you have a large shopping bag with a hole in it, cut it apart and use the good nylon to make a smaller storage bag. How to recycle or reuse nylon fabric. Leftover nylon fabric from a sewing project is a great material to reuse. See if your community has an organization that provides fabric and supplies to artists and schools; Materials for the Arts in New York City and The Scrap Exchange in Durham, NC, are two examples. If you have nylon clothing you want to recycle, and you purchased it from the popular outdoor gear manufacturer Patagonia, you can return it to the company for recycling; more information about Patagonia's recycling program is available on its website. How to recycle or reuse nylons and tights. No Nonsense, which makes nylons, tights, and other types of leggings, offers a recycling program for consumers. The first step is to visit its pantyhose recycling page and print a prepaid mailing label. Next, place all your unwanted nylon leggings in a box and attach the shipping label. Drop it at your nearest post office or other mailing location, and your old nylons are on their way to a recycling facility. No Nonsense sends the material to a plant that recycles it into things like playground equipment, toys, and vehicle insulation. There are lots of ways to reuse old nylons as well. Put a bar of soap in the toe of a clean nylon (make sure there is no run in that section), tie off the open end, and hang the sock by the sink. When you go to wash your hands, get them plenty wet, then roll the sock between your hands. This works well in potting sheds, barns, and other places where a soap dish might not be practical. Use nylons to tie up tomatoes or other plants that need support as they grow. Fill a clean nylon with potpourri or lavender.
Use it as a sachet in your drawers, car, or any other area you want to smell fresh. So what is nylon's impact on the planet? Different kinds of nylon have different properties, but the common threads are strength, durability, and the ability to be moulded into shape. The flip side is that no form of nylon is biodegradable; once you no longer need your torn stockings or old toothbrush, they sit in a landfill for at least 30 years. Nylon is derived in part from coal and petroleum, and producing it has several environmental costs. Greenhouse gases: producing nylon creates nitrous oxide, a greenhouse gas 300 times more potent than carbon dioxide. Water: manufacturing nylon is a very thirsty process; large amounts of water are used to cool the fibres, which can become a source of environmental contamination and pollution. Energy: manufacturing nylon is also very energy-hungry, which contributes to environmental degradation and global warming. But there is definitely a good side: nylon is a plastic that can be recycled, and several brands and accreditations can help consumers find more sustainable nylon products. Econyl has developed an eco-friendly nylon made from recycled plastics in a closed-loop system, drastically reducing waste and emissions. Nylon may not be great for the environment, but plenty of brands are working hard to turn that around!
bodyAce15
A curated collection of free, high quality AI tools, APIs, datasets, and learning resources covering machine learning, deep learning, generative AI, NLP, and data science. Designed to help developers, researchers, and creators explore and build with AI faster.
CivilKen
Based on previous research, ground motion can be amplified in certain directions and show significant anisotropy. The causes remain unclear, and different researchers have attributed the phenomenon to several factors, including topographic effects, local geological heterogeneities, wave polarization, and waves trapped in fault zones. The phenomenon can severely damage buildings, especially in near-fault areas. However, the current seismic design code considers only the direction perpendicular to the fault strike, which does not adequately reflect real conditions. This study therefore focuses on seismic wave directivity in the near-fault zone. A total of 104 earthquake events with basic geological data were collected, and causative factors were selected based on previous research. Three main causes are considered for free-field stations: wave polarization, anisotropic stiffness, and forward directivity. Data on these influence factors were collected accordingly, and Arias Intensity is used to describe the directivity of the seismic wave. A deep learning model was trained to predict the Arias Intensity distribution from the given parameters, using TensorFlow as the main deep learning tool.
isaccanedo
🍳 Your AI second brain. Self-hostable. Get answers from the web or your docs. Build custom agents, schedule automations, do deep research. Turn any online or local LLM into your personal, autonomous AI (gpt, claude, gemini, llama, qwen, mistral). Get started - free
GiannaW
Project 1. In this project, you will develop bots that play a simple game. Each bot has a different strategy, and you will pit these bots against each other to determine which strategy is more successful in this context. The bots are incredibly simple, consisting of a few lines of code and methods that represent different strategies of play. Teams. Teams have been assigned for this project and will be posted on Blackboard. They are fixed - no switching or cooperating across team lines. It is up to teammates to ensure that their partner adheres to the American University Honor Code. You may use pair programming; however, you must each take turns in the driver role on your own laptop. I should see commits on Github from each of you for you to get full credit for this assignment. Step 0 - Background Research. Both members should review the description of the Prisoner's Dilemma on Wikipedia. You do not need to become familiar with the intimate mathematical details of the Dilemma, just the general mechanism and the difference between the iterated dilemma and the non-iterated version (Introduction through the end of Section 3.1). This topic has been debated endlessly in a variety of fields, so there is a lot of additional material available if you want to dig deeper. For this assignment, you will only be required to be familiar with the basics (e.g., you will not need to understand the Nash equilibrium or the proof that goes with it). Both members should work together to devise five strategies for "winning" the prisoner's dilemma over a long number of iterations. I recommend first writing these strategies down in plain English rather than trying to jump directly into code. You may use the 'tit-for-tat' strategy as one of them, or come up with ones of your own. Optional: there are several good videos that can help make these concepts a little clearer. I recommend this one, but there are many others. Step 1 - Create the Repo for your Team.
Both members of your team will visit this link, which will create a repo for your team in Github. For this assignment, you will share a Github repo with your teammate. If you are the first member of your team to visit the link, you can create the team and the repo - make sure you create the right team. If you are the second member to click the link, make sure you join the right team. Both members will clone the repository to your local machines (i.e., using git clone <URL>). You will then each have a local repository that is linked to the shared repository, and you can work on the code together. As a reference for how to use git, I suggest this site, as it avoids some of the more complicated theory behind git and focuses on the bare minimum practicalities. Step 2 - Review the Provided Code. In the repository is a starter class, Prison, that has the bare minimum for the prisoner's dilemma. There is a variable for the last choice made by each of two prisoners (i.e., Prisoner A and Prisoner B):

    //The last choice of each prisoner.
    boolean lastChoicePrisonerA = BETRAYED; //Set initially to BETRAYED for testing
    boolean lastChoicePrisonerB;

Two example strategies are given. Prisoner B is using a randomChoice() strategy, in which B randomly chooses to stay silent or betray Prisoner A. This strategy does not use prior information to make the decision - it is equivalent to flipping a coin. The provided code gives an example of a second strategy: betrayIfBetrayed(). If A betrayed last time, then B will betray also; however, if A stayed silent, B will randomly choose to stay silent or betray based on the result of a coin flip. The coin flip is generated using the Random class, a more thorough description of which can be found here.
    public static boolean randomChoice(){
        Random rand = new Random();
        return rand.nextBoolean();
    }

    public static boolean betrayIfBetrayed(boolean lastChoice){
        if(lastChoice == BETRAYED)
            return BETRAYED;
        else
            return randomChoice();
    }

Step 3 - Write one method for each strategy. Following the design pattern of the example strategies, define one method for each of your team's five strategies (the example methods do not count). Assume that each prisoner can know the outcome of one or more previous encounters with the other prisoner through parameters passed to the method. Step 3.5 - Commit and Push to Github. Remember, this is not like submitting assignments through Blackboard. As you work with your teammate, you will need to push code to the Github repo frequently so that your teammate can access it. If you wait until the last minute, you could have conflicts that are difficult to resolve; it is much better to get into a rhythm with your partner early. To get full credit for your part, I need to see multiple commits from each team member. Step 4 - Write a method for scoring the outcome. If both prisoners stay silent, each serves 1 year. If one prisoner stays silent and the other betrays, the prisoner who stayed silent gets 3 years in prison while the betrayer goes free. If both betray, each serves 2 years. Write a method that assigns a score to a strategy based upon the outcome. A high score is a bad thing, as each point represents additional years added to the prisoner's sentence.
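One way the Step 4 scoring method could be sketched, in the same style as the starter code. The class name PrisonScoring, the SILENT/BETRAYED constants, and the method names here are illustrative assumptions, not part of the provided Prison class; the payoff values come straight from the Step 4 description:

```java
public class PrisonScoring {
    // Assumed boolean encoding, matching the spirit of the starter code.
    static final boolean BETRAYED = true;
    static final boolean SILENT = false;

    // Years added to prisoner A's sentence for one round, per Step 4:
    // both silent -> 1 each; silent vs. betrayed -> 3 for the silent
    // prisoner, 0 for the betrayer; both betray -> 2 each.
    public static int score(boolean choiceA, boolean choiceB) {
        if (choiceA == SILENT && choiceB == SILENT) return 1;
        if (choiceA == SILENT && choiceB == BETRAYED) return 3;
        if (choiceA == BETRAYED && choiceB == SILENT) return 0;
        return 2; // both betrayed
    }

    // One possible strategy method in the Step 3 style: classic
    // tit-for-tat simply repeats the opponent's last choice.
    public static boolean titForTat(boolean opponentsLastChoice) {
        return opponentsLastChoice;
    }

    public static void main(String[] args) {
        // Accumulate A's total sentence over three sample rounds.
        int total = score(SILENT, SILENT)
                  + score(SILENT, BETRAYED)
                  + score(BETRAYED, BETRAYED);
        System.out.println("Total years for A: " + total); // 1 + 3 + 2 = 6
    }
}
```

Because a high score is bad, a tournament loop would simply sum score() over many iterations for each strategy and declare the lowest total the winner.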
Organizing a space might seem a tedious task, especially when you're feeling lazy. But when you are serious about organizing a kitchen, the task must be completed once and for all. It's always better to go for modular kitchen fittings, even if you encounter issues now and then: they extend the trend, harmonize everything, and help you use the maximum space. Before proceeding with the tips, remember to keep stuff within your reach and keep the countertops neat and tidy. Before starting off, visualize the kitchen and try dividing the entire space into sections. Set aside sufficient space for electronic gadgets, crockery, cooking utensils, the gas burner, spices, and things you use daily. So, let's have a look at how sections can be organized using modular fitting options. Around the gas burner. The area around the stove is where the organizing journey should begin. While you keep the utensils and cookware close to the hob, set aside a shelf for the pressure cooker, pans, stew pots, and more. You can also go beyond the conventional way and keep utensils on revolving racks or sliding shelves, or in drawers that are broad and deep. Store kitchenware according to usage. With cabinets set just below the countertop, always store frequently used items on the lower shelves. As you move up the cabinets, stack the items you need only occasionally. Make sure the space is big enough that pans can be kept on stainless steel trays. Select suitable drawers. If you want to show off your design style, consult an experienced interior designer; a drawer with wire baskets is useful for plastic containers, microwave-safe utensils, chocolate moulds, or baking dishes. For effective utilization of the space, you can stack lids by setting up organizer racks.
Opt for cutlery drawers. With demarcations for spoons, knives, and forks, cutlery of differing shapes and sizes can be arranged effortlessly. So, whenever you're serving desserts or a favorite dish, you no longer have to delve into shelves or reach for spoon holders. In fact, you can keep spoons and ladles next to the stove so they can be accessed whenever you want. Cabinet shelves and open racks. Cabinet shelves are the ideal space for keeping electronic gadgets together: a sandwich maker, a food processor, a juicer, a hand blender. But when you purchase appliances, check their size and see if they fit in the space. It is better to buy appliances of the same color shade; this adds character to the space and augments the vibe. Buy shelves for the pantry. If you have a small modular kitchen, don't forget to allocate some space for the pantry. You can go for sliding drawers with steel fittings to store grains, pulses, and stock bought month after month. Investing in shelves is a great idea because ventilated options are available to suit your preferences. Be wise in deciding the spot, because such items have to be placed some distance away from the sink. Try categorizing items and storing them in groups; this will make you happy about organizing things and will keep them from descending into chaos later on. Go for open door cabinets. If you want to showcase hand-painted crockery and antique china, open door cabinets can surely help. Stacking baskets in a certain order is another fantastic idea for open cabinets; these will fascinate guests and lend a clutter-free look to the space. For a stunning effect, you can also set up fixtures and let the light play around. A retro look for the cabinets is worth considering if you plan to hang cotton floral curtains.
Utilize the space well. Even if the modular kitchen design is laid out across a small space, you can maximize space utilization by setting up storage units and pull-out trolleys. While such units can be customized to your needs, the cabinets can be as narrow as 5 to 6 inches; these are perfect because they offer ample space for storing sleek jars and oil bottles. Fix rotating trays. Rotating trays are always recommended when you choose kitchen fittings. Once you have fixed them inside the cabinets, you can rotate them with a gentle spin. You can place the trays in a corner, with ketchup and jam containers stacked one above the other; once again, categorize the containers depending on what you are going to store. Such trays are also nicknamed 'lazy Susan' trays simply because they can be used with minimal effort. When you go shopping, select trays made from plastic, because these are washable and can be wiped with a damp cloth. If you live near the coast, protect the walls with a tile backsplash and prefer stainless steel fittings. For a rustic yet contemporary touch, you can also select waterproof wood or a laminate. A little research can help you pick the best backsplash idea and find brands you can trust. Regardless of what kind of layout you have in mind, you can get in touch with an interior designer and come up with a plan. A basic modular kitchen design is worth thinking about, but you have to be ready to bear the cost. Over time, you will be happy, because you can make full use of the space and save time shuffling things from one corner to the other. We hope you have learned some effective ways of organizing the kitchen. If you have any other ideas, we would be keen to hear from you. Do share them with us; till then, follow the ways mentioned above and get organized.
positive666
A lightweight, pure web search solution for large language models, supporting multi-engine aggregated search, deep reflection, and result evaluation. A balanced approach between web search and deep research, providing a framework-free implementation and an MCP server for easy developer integration.
bysiber
AI deep research agent. 4 LLM providers (OpenAI, Google Gemini, Anthropic Claude, Ollama). Works 100% free with Ollama + DuckDuckGo. No Langchain, no paid APIs required. CLI + Python API. pip install deepworm
This project presents a research AI system for breast cancer tumor progression analysis, combining deep learning–based temporal modeling with survival analysis techniques. The system is designed to model disease evolution over time and estimate clinically relevant outcomes such as overall survival, relapse-free survival, and recurrence risk.