Found 14 repositories (showing 14)
L1aoXingyu
This is the code for the book "Learn Deep Learning with PyTorch"
himanshub1007
# AD-Prediction: Convolutional Neural Networks for Alzheimer's Disease Prediction Using Brain MRI Images

## Abstract

Alzheimer's disease (AD) is characterized by severe memory loss and cognitive impairment. It is associated with significant changes in brain structure, which can be measured by magnetic resonance imaging (MRI) scans. These observable preclinical structural changes provide an opportunity for early AD detection using image-classification tools such as convolutional neural networks (CNNs). However, most AD-related studies to date have been limited by sample size, so finding an efficient way to train an image classifier on limited data is critical. In our project, we explored different CNN-based transfer-learning methods for predicting AD from structural brain MRI images. We found that both a pretrained 2D AlexNet with a 2D-representation method and a simple neural network with a pretrained 3D autoencoder improved prediction performance compared to a deep CNN trained from scratch. The pretrained 2D AlexNet performed even better (**86%**) than the 3D CNN with autoencoder (**77%**).

## Method

#### 1. Data

In this project, we used public brain MRI data from the **Alzheimer's Disease Neuroimaging Initiative (ADNI)** study. ADNI is an ongoing, multicenter cohort study that started in 2004 and focuses on understanding the diagnostic and predictive value of AD-specific biomarkers. The ADNI study has three phases: ADNI1, ADNI-GO, and ADNI2. Both ADNI1 and ADNI2 recruited new AD patients and normal controls as research participants. Our data included a total of 686 structural MRI scans from the ADNI1 and ADNI2 phases, with 310 AD cases and 376 normal controls. We randomly divided the total sample into a training dataset (n = 519), a validation dataset (n = 100), and a testing dataset (n = 67).

#### 2. Image preprocessing

Image preprocessing was conducted using Statistical Parametric Mapping (SPM) software, version 12.
The original MRI scans were first skull-stripped and segmented using a segmentation algorithm based on 6-tissue probability mapping, and then normalized to the International Consortium for Brain Mapping template of European brains using affine registration. Other configuration included bias, noise, and global intensity normalization. The standard preprocessing pipeline output 3D image files with a uniform size of 121x145x121. Skull-stripping and normalization ensure comparability between images by transforming the original brain image into a standard image space, so that the same brain substructures are aligned at the same image coordinates across participants. Diluted or enhanced intensity was used to compensate for the structural changes. In our project, we used both whole-brain images (including both grey matter and white matter) and grey-matter-only images.

#### 3. AlexNet and Transfer Learning

Convolutional neural networks (CNNs) are very similar to ordinary neural networks. A CNN consists of an input layer and an output layer, as well as multiple hidden layers. The hidden layers are either convolutional, pooling, or fully connected. ConvNet architectures make the explicit assumption that the inputs are images, which allows us to encode certain properties into the architecture. These make the forward function more efficient to implement and vastly reduce the number of parameters in the network.

#### 3.1. AlexNet

The network contains eight layers with weights; the first five are convolutional and the remaining three are fully connected. The overall architecture is shown in Figure 1. The output of the last fully-connected layer is fed to a 1000-way softmax, which produces a distribution over the 1000 class labels. AlexNet maximizes the multinomial logistic regression objective, which is equivalent to maximizing the average across training cases of the log-probability of the correct label under the prediction distribution.
The kernels of the second, fourth, and fifth convolutional layers are connected only to those kernel maps in the previous layer that reside on the same GPU (as shown in Figure 1). The kernels of the third convolutional layer are connected to all kernel maps in the second layer. The neurons in the fully-connected layers are connected to all neurons in the previous layer. Response-normalization layers follow the first and second convolutional layers. Max-pooling layers follow both response-normalization layers as well as the fifth convolutional layer. The ReLU non-linearity is applied to the output of every convolutional and fully-connected layer.

The first convolutional layer filters the 224x224x3 input image with 96 kernels of size 11x11x3 with a stride of 4 pixels (this is the distance between the receptive-field centers of neighboring neurons in a kernel map). The second convolutional layer takes as input the (response-normalized and pooled) output of the first convolutional layer and filters it with 256 kernels of size 5x5x48. The third, fourth, and fifth convolutional layers are connected to one another without any intervening pooling or normalization layers. The third convolutional layer has 384 kernels of size 3x3x256 connected to the (normalized, pooled) outputs of the second convolutional layer. The fourth convolutional layer has 384 kernels of size 3x3x192, and the fifth convolutional layer has 256 kernels of size 3x3x192. The fully-connected layers have 4096 neurons each.

#### 3.2. Transfer Learning

Training an entire convolutional network from scratch (with random initialization) is often impractical [14] because it is relatively rare to have a dataset of sufficient size. An alternative is to pretrain a ConvNet on a very large dataset (e.g. ImageNet), and then use the ConvNet either as an initialization or as a fixed feature extractor for the task of interest.
Typically, there are three major transfer-learning scenarios:

**ConvNet as fixed feature extractor:** We can take a ConvNet pretrained on ImageNet, remove the last fully-connected layer, and treat the remaining structure as a fixed feature extractor for the target dataset. In AlexNet, this yields a 4096-D vector for each image. These features are usually called CNN codes. Once we have these features, we can train a linear classifier (e.g. a linear SVM or a softmax classifier) on our target dataset.

**Fine-tuning the ConvNet:** Another idea is not only to replace the last fully-connected layer of the classifier, but also to fine-tune the parameters of the pretrained network. Due to overfitting concerns, we may fine-tune only some higher-level part of the network. This suggestion is motivated by the observation that the earlier layers of a ConvNet contain more generic features (e.g. edge detectors or color-blob detectors) that are useful for many kinds of tasks, while the later layers become progressively more specific to the details of the classes contained in the original dataset.

**Pretrained models:** The released pretrained model is usually the final ConvNet checkpoint, so it is common to see people use such a network for fine-tuning.

#### 4. 3D Autoencoder and Convolutional Neural Network

We take a two-stage approach: we first train a 3D sparse autoencoder to learn filters for convolution operations, and then build a convolutional neural network whose first layer uses the filters learned by the autoencoder.

#### 4.1. Sparse Autoencoder

An autoencoder is a 3-layer neural network that is used to extract features from an input such as an image. Sparse representations can provide a simple interpretation of the input data in terms of a small number of "parts" by extracting the structure hidden in the data.
The autoencoder has an input layer, a hidden layer, and an output layer; the input and output layers have the same number of units, while the hidden layer contains more units, giving a sparse and overcomplete representation. The encoder function maps the input x to a representation h, and the decoder function maps the representation h back to the output x. In our problem, we extract 3D patches from the scans as the input to the network. The decoder function aims to reconstruct the input from the hidden representation h.

#### 4.2. 3D Convolutional Neural Network

Training the 3D convolutional neural network (CNN) is the second stage. The CNN we use in this project has one convolutional layer, one pooling layer, two linear layers, and finally a log-softmax layer. After training the sparse autoencoder, we take the weights and biases of the encoder from the trained model and use them as the 3D filters of the 3D convolutional layer of the 1-layer convolutional neural network. Figure 2 shows the architecture of the network.

#### 5. Tools

In this project, we used Nibabel for MRI image processing and PyTorch for the neural network implementation.
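The filter-transfer step described in Section 4.2 can be sketched in PyTorch as follows. The patch size and number of filters are illustrative guesses (the project's actual values are not stated above), and the autoencoder training loop is elided.

```python
import torch
import torch.nn as nn

PATCH = 7       # assumed side length of the extracted 3D patches
N_FILTERS = 8   # assumed number of hidden units / learned filters

# Sparse autoencoder: input and output layers have PATCH**3 units each;
# the hidden layer holds the learned representation.
encoder = nn.Linear(PATCH ** 3, N_FILTERS)
decoder = nn.Linear(N_FILTERS, PATCH ** 3)
# ... train encoder/decoder to reconstruct the patches here ...

# Reuse the trained encoder as the CNN's first layer: each hidden
# unit's weight vector becomes one PATCH^3 convolution kernel.
conv = nn.Conv3d(1, N_FILTERS, kernel_size=PATCH)
with torch.no_grad():
    conv.weight.copy_(encoder.weight.view(N_FILTERS, 1, PATCH, PATCH, PATCH))
    conv.bias.copy_(encoder.bias)

scan = torch.randn(1, 1, 121, 145, 121)  # one preprocessed MRI volume
features = conv(scan)
print(features.shape)  # torch.Size([1, 8, 115, 139, 115])
```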
Introduction to Deep Learning with PyTorch
Girijesh-devops
# Python Developer Roadmap

Folks, here are 10 important things to deep-dive into for the Python developer role! The items are listed in no particular order. You don't need to learn everything listed here; however, knowing what you don't know is as important as knowing things.

## **1. Learn the basics**

* Basic syntax
* Variables and data types
* Conditionals
* Lists, Tuples, Sets, Dictionaries
* Type casting, exception handling
* Functions, built-in functions

## **2. Advanced Core Python**

* Object-Oriented Programming (OOP)
* Data structures and algorithms
* Regular expressions
* Decorators
* Lambdas
* Modules
* Iterators

## **3. Version Control Systems**

* Basic Git usage
* Repo hosting services (GitHub, GitLab, Bitbucket)

## **4. Package Managers**

* PyPI
* pip

## **5. Learn a Framework (Web Development)**

- Synchronous frameworks - Django, Flask, Pyramid
- Asynchronous frameworks - Tornado, Sanic, aiohttp, gevent

## **6. Desktop Applications**

* Tkinter
* PyQt
* Kivy

## **7. Scraping**

- Web scraping refers to the process of collecting and processing large amounts of data from the web using software or algorithms. Scraping data from the web is an important skill to have if you're a data scientist, developer, or anyone who analyzes large quantities of data.
- Python is an effective web-scraping language: you don't need to write complicated code to handle most data-crawling or web-scraping tasks. The three most well-known and commonly used Python tools for this are Requests, Scrapy, and BeautifulSoup.

## **8. Scripting**

- Python is a scripting language, since it uses an interpreter to translate and run its code. A Python script can be a command that runs in Rhino, or it can be a collection of functions that you can import as a library in other scripts.
- In web applications, developers use Python as a "scripting language" because it can automate a particular set of tasks and improve productivity. Accordingly, developers favor Python for building software applications, websites, operating-system shells, and some games. **Python scripting tools you can pick up easily:**
- DevOps: Docker, Kubernetes, Gradle, and so on
- System administration

## 9. Artificial Intelligence / Data Science

- Smart engineers consistently prefer Python for AI because of its many advantages. Python's rich libraries are one of the primary reasons to pick it for ML or deep learning, and its data-handling capabilities are excellent, in addition to its speed.
- Being very strong in ML and AI, Python is now gaining traction in industries such as travel, fintech, transportation, and healthcare. Tools you can use for Python machine learning: TensorFlow, PyTorch, Keras, Scikit-learn, NumPy, Pandas.

## 10. Ethical Hacking With Python

- Ethical hacking is the process of using sophisticated tools and techniques to identify potential threats and vulnerabilities in a computer network. Python, one of the most popular programming languages thanks to its huge number of tools and libraries, is also used for ethical hacking.
- It is so widely used by hackers that there are plenty of different attack vectors to consider. It also requires only a little coding knowledge, making it simple to write scripts.
- Topics for Python hacking: SQL injection, session hijacking, man-in-the-middle attacks, network administration, IP address exploitation.

###### Python is a programming language that has gained prominence and is in demand. Demand for Python developers has soared, making data-science-with-Python training worthwhile.
So, if you get the opportunity to work in this field and enjoy the experience, you are fortunate to be in this area of programming.

###### To close, this Python developer roadmap can empower a developer to succeed in Python programming, provided you acquire the knowledge and a basic understanding of the field.
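As a small taste of the scraping topic in section 7 above, here is a minimal sketch using only the standard library's `html.parser`; in practice the Requests + BeautifulSoup combination mentioned there is far more convenient. The HTML string is made up for illustration.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag encountered in a document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

html = ('<ul><li><a href="https://docs.python.org">Docs</a></li>'
        '<li><a href="https://pypi.org">PyPI</a></li></ul>')
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # ['https://docs.python.org', 'https://pypi.org']
```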
1ZimingYuan
Many classical methods have been used for automatic sleep stage classification, but few explore deep learning; meanwhile, most deep learning methods require extensive expertise and involve many time-consuming handcrafted steps. In this paper, we propose an efficient convolutional neural network, Sle-CNN, for five-sleep-stage classification. We attach a learnable coefficient to each kernel in the first layers to enhance the learning ability and flexibility of the kernels. Then, we make full use of the genetic algorithm's heuristic search, and its freedom from gradients, to search for a sleep stage classification architecture. We verify the convergence of Sle-CNN and compare the performance of traditional convolutional neural networks before and after adding the learnable coefficient. We also compare the performance of the Sle-CNN generated by the genetic algorithm with that of traditional convolutional neural networks. The experiments demonstrate that Sle-CNN converges faster than normal convolutional neural networks and that the genetic-algorithm-generated Sle-CNN outperforms the traditional handcrafted counterparts as well. Our research suggests that deep learning has great potential in electroencephalogram (EEG) signal processing, especially with the intensification of neural architecture search, and that neural architecture search can exert greater power in practical engineering applications. We implemented Sle-CNN with the Python library PyTorch, and the code and models will be publicly available.
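The "learnable coefficient attached to each kernel" mentioned in the abstract could plausibly be implemented as in the minimal PyTorch sketch below. This is my guess at the idea; the paper's exact formulation, layer sizes, and EEG input shape are not given here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledConv2d(nn.Module):
    """Convolution whose kernels each carry a learnable scalar
    coefficient -- a guess at the abstract's idea, not the paper's code."""
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size)
        # One trainable coefficient per output kernel.
        self.coeff = nn.Parameter(torch.ones(out_ch, 1, 1, 1))

    def forward(self, x):
        # Scale each kernel's weights by its coefficient before convolving.
        return F.conv2d(x, self.coeff * self.conv.weight, self.conv.bias)

layer = ScaledConv2d(1, 4, 3)
x = torch.randn(2, 1, 16, 64)  # e.g. a batch of 2-D EEG feature maps
out = layer(x)
print(out.shape)  # torch.Size([2, 4, 14, 62])
```

Because the coefficients are ordinary `nn.Parameter`s, gradient descent can rescale whole kernels independently of their shapes, which matches the stated goal of adding flexibility to each kernel.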
No description available
mpofukelvintafadzwa
This is a repository of deep learning PyTorch code applied to medical imaging. I wrote it while following the Udemy course Deep Learning with PyTorch for Medical Image Analysis. To learn more about this topic, please check out the course; it's really great. Thanks.
goodpupil
# Packages installed

1. Anaconda (conda environment with Python 3.6)
2. Keras (`conda install -c conda-forge keras`)
3. Scikit-learn
4. Pandas
5. Matplotlib
6. NumPy
7. NLTK
8. Wordcloud

# Approach

I implemented 2 approaches, namely a linear neural network model (Sequential) (model 1) and a convolutional neural network (CNN) (model 2), both using Keras. I have commented the code wherever needed and explained the different strategies I tried during the course of this exercise. I prepared 3 datasets:

1. Just the sentence and label columns
2. Subject + Predicate + Object and label columns
3. Both of the above combined

I ran all three datasets, and #3 performed better than the other 2. There were key differences in data preparation between the two models. For the neural network, I tokenized and vectorized tokens using word2vec trained on the Google News dataset (a 1x300 array per word), which captures contextual relevance between words. The vectors were then weighted by term frequency-inverse document frequency (tf-idf), summed, and divided by the count of words; hence the "mean" (so to speak) of all words of a sentence was calculated, forming the word vector of that sentence. Extending this to all rows, the input was of the order: (number of input rows x vector dimensionality of word2vec). For the CNN, the input was also tokenized, this time using the Keras `Tokenizer`, fitting the sentences (rows) iteratively. Instead of passing the "mean" of the word vectors, I passed the vectors of a given sentence, then zero-padded them until the length was equal to that of the longest sentence in the input set.
Hence the input was of the order: (number of input rows x vector size of the longest sentence). The embedding matrix was the word2vec mapping of all tokens in the input corpus, so the order of this matrix was: (number of unique tokens x vector dimensionality of word2vec). In both cases, the train-test split was 80%-20%.

# Results

I plotted the history of accuracy and losses of the model predictions. Both models yielded around 60% (+/- 3%) accuracy in training and testing, and there seems to be no overfitting/underfitting. Metrics for the testing sets were as follows:

### Model 1 (Sequential)

1. Accuracy: 0.5833
2. Precision: 0.5548
3. Recall: 0.8571
4. F-score: 0.6736

### Model 2 (CNN)

1. Accuracy: 0.6233
2. Precision: 0.6056
3. Recall: 0.7143
4. F-score: 0.6555

These results are not terrible, but there is room for improvement through hyperparameter tuning and design tweaks. Increasing the data volume may also yield better metrics, and transfer learning may be a good option for data this small.

# Limitations

The metrics could be improved by using more data and tuning hyperparameters. One strategy I skipped is k-fold cross-validation. According to one study, k-fold cross-validation can show high variability, and the authors suggest a technique called J-K-fold cross-validation to reduce variance during training (https://www.aclweb.org/anthology/C18-1252.pdf). Another strategy I skipped was performing a grid search to arrive at optimized hyperparameter values, as done by Yoon Kim et al. Training deep learning classifiers with a small dataset may not be reliable; transfer learning may be a better option. Other libraries like fastai (https://docs.fast.ai/), a wrapper over PyTorch, could be an alternative.
They implement sophisticated techniques like the LR Finder, which helps users make informed decisions when choosing learning rates for optimizers (SGD, Adam, or RAdam). They also implement transfer learning, in which an already-trained classifier model (trained on a variety of corpora) is reused, which proves to be effective, as well as advanced recurrent neural network (RNN) strategies. This could be explored in future work.
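The tf-idf-weighted sentence-embedding step described in the Approach section can be sketched as follows. The toy vocabulary, random vectors, and weights stand in for the Google News word2vec model and real tf-idf scores, which are too large to load here.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 300  # dimensionality of the Google News word2vec model

# Toy stand-ins for the real word2vec vectors and tf-idf weights.
vocab = ["the", "claim", "is", "false", "true"]
word2vec = {w: rng.normal(size=DIM) for w in vocab}
tfidf = {w: 1.0 + 0.1 * i for i, w in enumerate(vocab)}

def sentence_vector(tokens):
    """tf-idf-weighted mean of a sentence's word vectors."""
    vecs = [tfidf[t] * word2vec[t] for t in tokens if t in word2vec]
    return np.sum(vecs, axis=0) / len(vecs)

v = sentence_vector("the claim is false".split())
print(v.shape)  # (300,)
```

Stacking one such vector per row yields exactly the (number of input rows x vector dimensionality of word2vec) matrix described above.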
No description available
Hongli-Chang
No description available
orgTestCodacy11KRepos110MB
No description available
manali-star
Machine Learning from Scratch repository: a collection of ML code and projects implemented from the ground up. Includes popular algorithms, deep learning models, and real-world projects, implemented in Python with NumPy, Pandas, Matplotlib, Scikit-learn, TensorFlow, and PyTorch.
kritisharmaa0
An AI-powered project demonstrating practical applications of machine learning and deep learning, including NLP, computer vision, and predictive analytics. Built with Python, TensorFlow, PyTorch, and Scikit-learn, it features modular, well-documented code for easy understanding, reproduction, and extension.
All 14 repositories loaded