Found 98 repositories (showing 30)
jcreinhold
Normalize MR image intensities in Python
USTCPCS
- Context Encoding for Semantic Segmentation
- MegaDepth: Learning Single-View Depth Prediction from Internet Photos
- LiteFlowNet: A Lightweight Convolutional Neural Network for Optical Flow Estimation
- PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume
- On the Robustness of Semantic Segmentation Models to Adversarial Attacks
- SPLATNet: Sparse Lattice Networks for Point Cloud Processing
- Left-Right Comparative Recurrent Model for Stereo Matching
- Enhancing the Spatial Resolution of Stereo Images using a Parallax Prior
- Unsupervised CCA
- Discovering Point Lights with Intensity Distance Fields
- CBMV: A Coalesced Bidirectional Matching Volume for Disparity Estimation
- Learning a Discriminative Feature Network for Semantic Segmentation
- Revisiting Dilated Convolution: A Simple Approach for Weakly- and Semi-Supervised Semantic Segmentation
- Unsupervised Deep Generative Adversarial Hashing Network
- Monocular Relative Depth Perception with Web Stereo Data Supervision
- Single Image Reflection Separation with Perceptual Losses
- Zoom and Learn: Generalizing Deep Stereo Matching to Novel Domains
- EPINET: A Fully-Convolutional Neural Network for Light Field Depth Estimation by Using Epipolar Geometry
- FoldingNet: Interpretable Unsupervised Learning on 3D Point Clouds
- Decorrelated Batch Normalization
- Unsupervised Learning of Depth and Egomotion from Monocular Video Using 3D Geometric Constraints
- PU-Net: Point Cloud Upsampling Network
- Real-Time Monocular Depth Estimation using Synthetic Data with Domain Adaptation via Image Style Transfer
- Tell Me Where To Look: Guided Attention Inference Network
- Residual Dense Network for Image Super-Resolution
- Reflection Removal for Large-Scale 3D Point Clouds
- PlaneNet: Piece-wise Planar Reconstruction from a Single RGB Image
- Fully Convolutional Adaptation Networks for Semantic Segmentation
- CRRN: Multi-Scale Guided Concurrent Reflection Removal Network
- DenseASPP: Densely Connected Networks for Semantic Segmentation
- SGAN: An Alternative Training of Generative Adversarial Networks
- Multi-Agent Diverse Generative Adversarial Networks
- Robust Depth Estimation from Auto Bracketed Images
- AdaDepth: Unsupervised Content Congruent Adaptation for Depth Estimation
- DeepMVS: Learning Multi-View Stereopsis
- GeoNet: Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose
- GeoNet: Geometric Neural Network for Joint Depth and Surface Normal Estimation
- Single-Image Depth Estimation Based on Fourier Domain Analysis
- Single View Stereo Matching
- Pyramid Stereo Matching Network
- A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth, and Optical Flow Estimation
- Image Correction via Deep Reciprocating HDR Transformation
- Occlusion Aware Unsupervised Learning of Optical Flow
- PAD-Net: Multi-Tasks Guided Prediction-and-Distillation Network for Simultaneous Depth Estimation and Scene Parsing
- Surface Networks
- Structured Attention Guided Convolutional Neural Fields for Monocular Depth Estimation
- TextureGAN: Controlling Deep Image Synthesis with Texture Patches
- Aperture Supervision for Monocular Depth Estimation
- Two-Stream Convolutional Networks for Dynamic Texture Synthesis
- Unsupervised Learning of Single View Depth Estimation and Visual Odometry with Deep Feature Reconstruction
- Left/Right Asymmetric Layer Skippable Networks
- Learning to See in the Dark
himanshub1007
# AD-Prediction

Convolutional Neural Networks for Alzheimer's Disease Prediction Using Brain MRI Images

## Abstract

Alzheimer's disease (AD) is characterized by severe memory loss and cognitive impairment. It is associated with significant structural brain changes, which can be measured by magnetic resonance imaging (MRI). These observable preclinical structural changes provide an opportunity for early AD detection using image classification tools such as convolutional neural networks (CNNs). However, most AD-related studies to date have been limited by sample size, so finding an efficient way to train an image classifier on limited data is critical. In our project, we explored different CNN-based transfer-learning methods for predicting AD from structural brain MRI images. We found that both a pretrained 2D AlexNet with a 2D-representation method and a simple neural network with a pretrained 3D autoencoder improved prediction performance compared to a deep CNN trained from scratch. The pretrained 2D AlexNet performed even better (**86%**) than the 3D CNN with autoencoder (**77%**).

## Method

#### 1. Data

In this project, we used public brain MRI data from the **Alzheimer's Disease Neuroimaging Initiative (ADNI)** study. ADNI is an ongoing, multicenter cohort study started in 2004. It focuses on understanding the diagnostic and predictive value of AD-specific biomarkers. The ADNI study has three phases: ADNI1, ADNI-GO, and ADNI2. Both ADNI1 and ADNI2 recruited new AD patients and normal controls as research participants. Our data included a total of 686 structural MRI scans from the ADNI1 and ADNI2 phases, with 310 AD cases and 376 normal controls. We randomly divided the total sample into a training dataset (n = 519), a validation dataset (n = 100), and a testing dataset (n = 67).

#### 2. Image preprocessing

Image preprocessing was conducted using Statistical Parametric Mapping (SPM) software, version 12.
The original MRI scans were first skull-stripped and segmented using a segmentation algorithm based on 6-tissue probability mapping, and then normalized to the International Consortium for Brain Mapping template of European brains using affine registration. Other configuration included bias, noise, and global intensity normalization. The standard preprocessing process output 3D image files with a uniform size of 121x145x121. Skull-stripping and normalization ensured comparability between images by transforming each original brain image into a standard image space, so that the same brain substructures are aligned at the same image coordinates across participants. Diluted or enhanced intensity was used to compensate for structural changes. In our project, we used both whole-brain images (including both grey matter and white matter) and grey-matter-only images.

#### 3. AlexNet and Transfer Learning

Convolutional Neural Networks (CNNs) are very similar to ordinary neural networks. A CNN consists of an input and an output layer, as well as multiple hidden layers. The hidden layers are either convolutional, pooling, or fully connected. ConvNet architectures make the explicit assumption that the inputs are images, which allows certain properties to be encoded into the architecture. These make the forward function more efficient to implement and vastly reduce the number of parameters in the network.

#### 3.1. AlexNet

The network contains eight layers with weights; the first five are convolutional and the remaining three are fully connected. The overall architecture is shown in Figure 1. The output of the last fully-connected layer is fed to a 1000-way softmax, which produces a distribution over the 1000 class labels. AlexNet maximizes the multinomial logistic regression objective, which is equivalent to maximizing the average across training cases of the log-probability of the correct label under the prediction distribution.
The kernels of the second, fourth, and fifth convolutional layers are connected only to those kernel maps in the previous layer that reside on the same GPU (as shown in Figure 1). The kernels of the third convolutional layer are connected to all kernel maps in the second layer. The neurons in the fully-connected layers are connected to all neurons in the previous layer. Response-normalization layers follow the first and second convolutional layers. Max-pooling layers follow both response-normalization layers as well as the fifth convolutional layer. The ReLU non-linearity is applied to the output of every convolutional and fully-connected layer.

The first convolutional layer filters the 224x224x3 input image with 96 kernels of size 11x11x3 with a stride of 4 pixels (the distance between the receptive field centers of neighboring neurons in a kernel map). The second convolutional layer takes as input the (response-normalized and pooled) output of the first convolutional layer and filters it with 256 kernels of size 5x5x48. The third, fourth, and fifth convolutional layers are connected to one another without any intervening pooling or normalization layers. The third convolutional layer has 384 kernels of size 3x3x256 connected to the (normalized, pooled) outputs of the second convolutional layer. The fourth convolutional layer has 384 kernels of size 3x3x192, and the fifth convolutional layer has 256 kernels of size 3x3x192. The fully-connected layers have 4096 neurons each.

#### 3.2. Transfer Learning

Training an entire convolutional network from scratch (with random initialization) is often impractical [14] because it is relatively rare to have a dataset of sufficient size. An alternative is to pretrain a ConvNet on a very large dataset (e.g., ImageNet), and then use the ConvNet either as an initialization or as a fixed feature extractor for the task of interest.
Typically, there are three major transfer learning scenarios:

**ConvNet as fixed feature extractor:** We can take a ConvNet pretrained on ImageNet, remove the last fully-connected layer, and treat the remaining structure as a fixed feature extractor for the target dataset. In AlexNet, this yields a 4096-D vector for each image. These features are usually called CNN codes. Once we have these features, we can train a linear classifier (e.g., a linear SVM or softmax classifier) for our target dataset.

**Fine-tuning the ConvNet:** Another idea is to not only replace the last fully-connected layer of the classifier, but also fine-tune the parameters of the pretrained network. Due to overfitting concerns, we may fine-tune only some higher-level part of the network. This is motivated by the observation that the earlier layers of a ConvNet contain more generic features (e.g., edge detectors or color blob detectors) that are useful for many kinds of tasks, while later layers become progressively more specific to the details of the classes contained in the original dataset.

**Pretrained models:** The released pretrained model is usually the final ConvNet checkpoint, so it is common to use such a network for fine-tuning.

#### 4. 3D Autoencoder and Convolutional Neural Network

We take a two-stage approach: we first train a 3D sparse autoencoder to learn filters for convolution operations, and then build a convolutional neural network whose first layer uses the filters learned with the autoencoder.

#### 4.1. Sparse Autoencoder

An autoencoder is a 3-layer neural network that is used to extract features from an input such as an image. Sparse representations can provide a simple interpretation of the input data in terms of a small number of "parts" by extracting the structure hidden in the data.
The autoencoder has an input layer, a hidden layer, and an output layer; the input and output layers have the same number of units, while the hidden layer contains more units for a sparse and overcomplete representation. The encoder function maps the input x to a representation h, and the decoder function maps the representation h back to the output x. In our problem, we extract 3D patches from scans as the input to the network. The decoder function aims to reconstruct the input from the hidden representation h.

#### 4.2. 3D Convolutional Neural Network

Training the 3D convolutional neural network (CNN) is the second stage. The CNN we use in this project has one convolutional layer, one pooling layer, two linear layers, and finally a log-softmax layer. After training the sparse autoencoder, we take the weights and biases of the encoder from the trained model and use them as the 3D filter of the 3D convolutional layer of the 1-layer convolutional neural network. Figure 2 shows the architecture of the network.

#### 5. Tools

In this project, we used Nibabel for MRI image processing and PyTorch for the neural network implementation.
sergivalverde
Intensity normalization of multi-channel MRI images using the method proposed by Nyul et al. 2000
ruffk
This project allows Zwift users to monitor their moving average power and FTP in real time. It also provides valuable metrics to the rider, such as average and normalized power (NP), intensity factor (IF), and training stress score (TSS). For racers and time-trialists, there's also the ability to configure distance-based splits with optional goals.
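The metrics named here (NP, IF, TSS) follow standard definitions from the cycling power-training literature; a sketch of those formulas, assuming 1 Hz power samples (this is not this project's actual implementation, which isn't shown):

```python
import numpy as np

def normalized_power(power, window=30):
    """NP: 4th root of the mean of the 30 s rolling-average power
    raised to the 4th power (1 Hz samples assumed)."""
    kernel = np.ones(window) / window
    rolling = np.convolve(power, kernel, mode="valid")
    return float(np.mean(rolling ** 4) ** 0.25)

def intensity_factor(np_watts, ftp):
    """IF: normalized power relative to functional threshold power."""
    return np_watts / ftp

def training_stress_score(seconds, np_watts, ftp):
    """TSS: duration-weighted intensity, 100 = one hour at FTP."""
    return seconds * np_watts * intensity_factor(np_watts, ftp) / (ftp * 3600) * 100

# Steady 200 W for an hour with FTP 250 -> NP 200, IF 0.8, TSS 64
power = np.full(3600, 200.0)
npw = normalized_power(power)
print(npw, intensity_factor(npw, 250), training_stress_score(3600, npw, 250))
```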
Jfortin1
Intensity normalization of structural MRIs using RAVEL
Novestars
Neural Pre-Processing is an end-to-end weakly supervised learning approach for converting raw head MRI images into intensity-normalized, skull-stripped brain images in a standard coordinate space.
LCS2-IIITD
[KDD 2022] Proactively Reducing the Hate Intensity of Online Posts via Hate Speech Normalization
HMS-CardiacMR
This repository contains source code for measuring image sharpness, defined as the absolute gradient of an intensity profile extracted from the normalized image.
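One plausible NumPy reading of that definition, a minimal sketch only (the repository's exact formula may differ, e.g., in how the profile is normalized or whether the maximum or mean gradient is reported):

```python
import numpy as np

def sharpness(profile):
    """Sharpness as the maximum absolute gradient of a min-max
    normalized intensity profile."""
    p = np.asarray(profile, dtype=float)
    p = (p - p.min()) / (p.max() - p.min())  # normalize to [0, 1]
    return float(np.abs(np.gradient(p)).max())

# A steeper edge in the profile yields a larger maximum gradient
soft_edge = [0, 0.25, 0.5, 0.75, 1.0]
hard_edge = [0, 0, 1.0, 1.0, 1.0]
print(sharpness(soft_edge), sharpness(hard_edge))  # 0.25 0.5
```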
dakota-hawkins
Normalize intensity values in 3D image stacks
asharifisadr
This repository contains a Python script for preprocessing NIfTI-format brain MRI images from ADNI dataset. The preprocessing pipeline includes bias field correction, skull stripping, registration to MNI space, and intensity normalization. This script is designed to streamline the preprocessing of neuroimaging data for downstream analysis.
No description available
greenvalleyintl
LiDAR360-Lite is the free version of LiDAR360, a LiDAR point cloud processing software. LiDAR360 uses a performance-driven data format (LiData and LiModel) for 3D visualization of massive point cloud and terrain data. It provides various display modes, including elevation, intensity, classification, return number, and more. This data format can be loaded almost immediately in LiDAR360 (less than a second in most cases), allowing users to check data quality very efficiently. LiDAR360 also supports visualization of common spatial formats, including raster, table (.csv), and vector (.shp) data.

LiDAR360-Lite features:
- Point cloud profile analysis: view point cloud data from different angles, classify point clouds with various selection tools, and make 2D measurements
- 3D visualization and editing of digital terrain models, with a variety of selection tools for terrain smoothing, flattening, and repair
- Point cloud clip tools: clip point cloud data by polygon or rectangle
- Measurement tools, including area, angle, volume, height, and point density measurement
- Essential data management tools, including format conversion, outlier removal, normalization, projection transformation, and clipping, plus raster tools such as band operations, raster mosaic, and raster subdivision
- Grid statistics: statistical analysis of point clouds based on point number, density, and height value
- User-friendly window manipulation: 2D and 3D display in the same window, multi-window linkage, screen scrolling, layer dragging, cross selection, etc.
- Language settings
HuLab-Code
RGC Ca2+ Clustering (RCC): The clustering and grouping procedures that we developed and detail below were based on previously described principles and protocols (3). First, we determined the range of non-specific fluctuation of fluorescence intensity in in vivo RGC imaging, which is between +15% and -20% (ΔF/F0). Thus, we defined RGCs with fluorescence changes within this range in response to UV stimulation as no-response (NR) RGCs. We then analyzed the remaining RGCs, those with significant changes of fluorescence intensity in response to light stimulation, using sparse principal component analysis (sPCA) (4, 5), which extracted 16 unbiased sparse response features in response to UV ON-OFF stimulation and therefore yielded a 16-dimensional feature vector for each RGC. We fit each data set with a Gaussian mixture distribution model using an iterative expectation-maximization (EM) algorithm (MATLAB's fitgmdist function) (6) with a maximum of 2000 optimization iterations. In the setup, we constrained the covariance matrix of each component to be diagonal, resulting in 32 parameters per component (16 for the means, 16 for the variances). To avoid local minima, we ran the EM algorithm 20 times per candidate number of clusters and used the solution with the greatest likelihood. To find the optimal number of clusters, we evaluated the Bayesian information criterion (BIC) (7), which resulted in 47 automatically defined RGC clusters. These RGC clusters were further re-grouped with a custom one-dimensional hierarchical clustering MATLAB script (github.com/HuLab-Code/RCC) to combine similar clusters and achieve a good compromise between clustering complexity and quality. In this step, we used the unweighted average distance (UPGMA) algorithm to compute the linkage of the RGC cluster means from the correlation distance, and presented the result as a dendrogram plot (using linkage and dendrogram).
The leaf order was optimized using MATLAB's optimalleaforder function and modified for clarity of presentation. By setting the grouping distance threshold to 0.28, 9 RGC functional groups were identified based on their similarity, as presented in the main manuscript, and the grouped individual data points were projected into a two-dimensional space using t-distributed stochastic neighbor embedding (t-SNE) (8) for visualization. For the clustering analysis under different conditions, each detected RGC fluorescence signal trace was normalized by its Euclidean norm (norm function) and compared with the normalized mean intensity of each group by cross-correlation (xcorr function in MATLAB, with the lag range set to 0). Each RGC trace was then assigned to the group with the highest cross-correlation value, provided that value was larger than 0.75.
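The fitgmdist model-selection loop described above translates roughly as follows. This is a Python/scikit-learn sketch under stated assumptions (random data stands in for the 16-D sPCA feature vectors, and only a handful of candidate cluster counts are tried); the original analysis was done in MATLAB:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))  # stand-in for 16-D sPCA feature vectors

best_bic, best_gmm = np.inf, None
for k in range(1, 6):  # candidate numbers of clusters
    gmm = GaussianMixture(
        n_components=k,
        covariance_type="diag",  # diagonal covariance: 16 + 16 params/component
        n_init=20,               # repeat EM to avoid local minima
        max_iter=2000,
        random_state=0,
    ).fit(X)
    bic = gmm.bic(X)             # lower BIC = better model
    if bic < best_bic:
        best_bic, best_gmm = bic, gmm

labels = best_gmm.predict(X)
print(best_gmm.n_components, labels.shape)
```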
ani-4nirudh
This repo calculates the Mean Intensity Gradient (MIG) of laser-speckle images and computes the position of a tracked template using Normalized Cross-Correlation (NCC).
HuLab-Code
RGC Segmentation Extraction (RSE): In each selected raw image (I), the original image was filtered with a 2-D Gaussian smoothing kernel (imgaussfilt) with sigma = 20 pixels to get a blurred background image (IF), and the mean intensity (IM) of the raw image was calculated. A binary image (BW) was then generated through image thresholding with the following rule: BW = (I > IF*1.25) & (I > IM). Then, bwareaopen(BW, 40) was performed to remove connected components (objects) in the binary image with fewer than 40 pixels. Next, the ROIs in each selected image were detected and registered using imfindcircles (2) with a radius range from 5 to 15 pixels. Finally, the registered ROIs from the selected images were merged by combining ROIs within 10 pixels of each other to determine individual RGC somata; the averaged center position and radius of the merged ROIs were assigned to the new ROIs of detected RGCs. Because different RGCs may show peak fluorescence intensity at different time frames due to their specific response dynamics to the visual stimulus, every 5th image frame (about a 1-second interval) was used to detect the ROIs. The mean RGC fluorescence intensities within the ROIs were then extracted at all time points as the raw fluorescence intensities of individual RGCs. To minimize the effect of background intensity changes during recording, the blood vessel area was segmented from the maximum-projection image through thresholding. The average intensity over time within the segmented region was extracted as a reference background intensity. This background intensity signal was subtracted from the fluorescence intensity signal of each RGC to acquire the normalized fluorescence intensity (F) at a given time. A time series of fluorescence intensity (F0 during the first 10 seconds before UV light stimulation, and F during the next 50 seconds after initial UV onset) from all segmented RGCs was then generated with an embedded stimulus time marker.
Each neuronal response was normalized by subtracting the average signal intensity before the UV stimulus, F0, from all time points and dividing the resulting signal by F0. This process can be summarized as ΔF/F0 = (F - F0)/F0. Finally, the pooled RGCs' time series of neuronal responses (ΔF/F0) were denoised with a low-pass filter (lowpass) below a normalized passband frequency of 0.3π rad/sample to remove high-frequency noise before further clustering and grouping.
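The ΔF/F0 normalization above is straightforward to express in code; a NumPy sketch for illustration only (the original analysis was done in MATLAB, and the frame counts here are the 10 s baseline / 50 s response windows described above):

```python
import numpy as np

def delta_f_over_f0(trace, baseline_frames):
    """ΔF/F0 = (F - F0)/F0, where F0 is the mean pre-stimulus intensity."""
    f0 = trace[:baseline_frames].mean()
    return (trace - f0) / f0

# Toy trace: 10 baseline frames at 1.0, then a sustained response at 1.5
trace = np.concatenate([np.full(10, 1.0), np.full(50, 1.5)])
dff = delta_f_over_f0(trace, baseline_frames=10)
print(dff[0], dff[-1])  # 0.0 0.5
```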
yangyuke001
geometric normalization & intensity normalization
josephinecb
FLAIR image intensity normalization using SimpleITK, NumPy, and PyTorch
oes6098
Batch normalize and interpolate plot profile data from Fiji, then compile y-values (pixel intensity)
alecrimi
It normalizes the intensity of a given brain MRI scan according to another, reference brain scan
mertCukadar
Lightweight motion detection via pixel intensity analysis and temporal averaging. Features an adaptive decay-based normalization to handle dynamic lighting conditions without external CV libraries.
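A minimal sketch of that idea, pixel-intensity differencing against a decaying running-average background, in plain NumPy; the repo's actual update rule, decay rate, and threshold are assumptions here:

```python
import numpy as np

def detect_motion(frames, alpha=0.05, threshold=30.0):
    """Flag pixels whose intensity deviates from a running-average
    background; the background decays toward each new frame (alpha),
    which adapts it to gradual lighting changes."""
    bg = frames[0].astype(float)
    masks = []
    for f in frames[1:]:
        diff = np.abs(f.astype(float) - bg)
        masks.append(diff > threshold)
        bg = (1 - alpha) * bg + alpha * f  # adaptive decay update
    return masks

# A bright 10x10 patch appearing in the second frame is flagged as motion
f0 = np.zeros((64, 64), dtype=np.uint8)
f1 = f0.copy()
f1[10:20, 10:20] = 255
masks = detect_motion([f0, f1])
print(int(masks[0].sum()))  # 100
```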
cran
:exclamation: This is a read-only mirror of the CRAN R package repository. colocalization — Normalized Spatial Intensity Correlation
ssturner-broad
These scripts enable high-throughput image processing for quantifying integrated fluorescence intensity signal normalized by cell density (estimated from paired DAPI images)
SinUbyCosU
Replicated Figures 1 and 2 from Sharma et al. (2017), involving: high-resolution AIA 1600 Å flash panel creation, precise alignment and derotation of maps, HMI data integration, arcsecond-accurate cropping, and intensity normalization with visual fidelity.
MolecularImagingPlatformIBMB
This macro divides the cell into rings of identical area that converge towards the nucleus center. The intensity density of the signal of interest is then measured per ring and normalized to the total intensity density of the cell. In this manner, the intensity distribution of a target protein can be compared between cells of different shapes and sizes.
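A hypothetical NumPy re-implementation of the idea (the macro itself runs in Fiji/ImageJ; the function name, the construction of equal-area rings by radial pixel rank, and the normalization below are my assumptions, not the macro's code):

```python
import numpy as np

def ring_profile(intensity, mask, center, n_rings=5):
    """Split a cell mask into n_rings of equal pixel area by radial
    rank from the nucleus center; return each ring's mean intensity
    (intensity density) normalized to the whole-cell mean."""
    ys, xs = np.nonzero(mask)
    r = np.hypot(ys - center[0], xs - center[1])
    order = np.argsort(r)                  # sort cell pixels by distance
    vals = intensity[ys[order], xs[order]]
    rings = np.array_split(vals, n_rings)  # equal-area (equal-count) rings
    ring_density = np.array([ring.mean() for ring in rings])
    return ring_density / intensity[mask].mean()

# A uniformly labeled cell gives a flat profile of 1s, regardless of shape
cell = np.ones((21, 21))
mask = np.ones((21, 21), dtype=bool)
profile = ring_profile(cell, mask, center=(10, 10))
print(profile)  # [1. 1. 1. 1. 1.]
```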
MSDLLCpapers
The Cytokine Array Assay is an automated image analysis tool designed to process cytokine array images and extract quantitative data from them. This tool analyzes raw images of cytokine arrays, detecting spots, measuring their intensities, and generating normalized quantitative output in multiple formats.
What is Feature Scaling?
• Feature scaling is a method to bring numeric features onto the same scale or range (e.g., -1 to 1 or 0 to 1).
• It is the last step of data preprocessing, performed before ML model training.
• It is also called data normalization.
• We apply feature scaling to the independent variables.
• We fit the scaler on the training data and transform both the training and test data.

Why Feature Scaling?
• The scales of raw features differ according to their units.
• Machine learning algorithms cannot understand feature units; they understand only numbers.
• Example: a height of 140 cm vs. 8.2 feet. The algorithm only sees 140 > 8.2, even though 8.2 feet is the greater height.

Which ML Algorithms Require Feature Scaling?
Algorithms that calculate distances:
• K-Nearest Neighbors (KNN)
• K-Means
• Support Vector Machine (SVM)
• Principal Component Analysis (PCA)
• Linear Discriminant Analysis
Gradient-descent-based algorithms:
• Linear Regression
• Logistic Regression
• Neural Networks
Tree-based algorithms do not require feature scaling:
• Decision Tree, Random Forest, XGBoost

Types of Feature Scaling
• 1) Min-Max Scaler
• 2) Standard Scaler
• 3) Max Abs Scaler
• 4) Robust Scaler
• 5) Quantile Transformer Scaler
• 6) Power Transformer Scaler
• 7) Unit Vector Scaler

What is Standardization?
• Standardization rescales a feature to mean (μ) = 0 and standard deviation (σ) = 1.
• The result of standardization is Z, which is why it is also called Z-score normalization.
• It suits data that follow a normal (Gaussian) distribution.
• If the original distribution is normal, the standardized distribution will be normal.
• If the original distribution is skewed, the standardized distribution of the variable will also be skewed.

What is Normalization?
• Normalization rescales a feature into a fixed range, typically 0 to 1.
• Normalization is also called min-max scaling.
• It suits data that do not follow a normal (Gaussian) distribution.

Standardization vs. Normalization
• There is no hard rule for choosing standardization or normalization for a particular ML algorithm.
• Standardization is mostly used for clustering analyses and Principal Component Analysis (PCA).
• Normalization is preferred for image processing (pixel intensities lie between 0 and 255), for neural networks (which expect data in the 0-1 range), and for K-Nearest Neighbors.
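The fit-on-train / transform-on-both rule and the two rescalings above can be shown with scikit-learn; a small sketch using made-up height data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler

X_train = np.array([[140.0], [150.0], [160.0], [170.0]])  # heights in cm
X_test = np.array([[155.0]])

# Standardization: rescale to mean 0, standard deviation 1
std = StandardScaler().fit(X_train)  # fit on training data only
Z = std.transform(X_train)
print(Z.mean(), Z.std())             # ~0.0 1.0

# Normalization (min-max scaling): rescale into [0, 1]
mm = MinMaxScaler().fit(X_train)
print(mm.transform(X_train).ravel())
# Test data is transformed with the TRAIN min/max, never its own
print(mm.transform(X_test).ravel())  # (155-140)/(170-140) = 0.5
```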
A machine learning project that estimates speaker age from audio recordings using a custom multi-regression model. Acoustic features (pitch, formants, intensity) are extracted, normalized, and used to train the model. Performance is measured with MSE and R². Applications include voice personalization, targeted ads, and forensic analysis.
bendinglight
EDA on the National Institutes of Health chest X-ray dataset, which consists of 112,120 images of 30,805 patients. Cleaned and processed the dataset by normalizing intensity, augmenting images, and resizing them with Keras. Trained and fine-tuned neural networks; the model architectures used were VGG16 and ResNet50. Also attempted to implement the CheXNet research paper.
HuLab-Code
RGC Referencing Clustering (RRC): For RGC clustering analysis in disease models, groups were assigned by reference correlation fitting: each detected RGC fluorescence signal trace was normalized by its Euclidean norm (norm function) and compared with the normalized mean intensity of each group by cross-correlation (xcorr function in MATLAB, with the lag range set to 0). Each RGC trace was then assigned to the group with the highest cross-correlation value, provided that value was larger than 0.75. The assignment to the 9 RGC functional groups was performed with a custom RGC referencing clustering MATLAB script (github.com/HuLab-Code/RRC).
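The assignment rule described here (unit-norm traces, zero-lag cross-correlation, 0.75 cutoff) reduces to a dot product between unit vectors. A Python sketch for illustration (the original is MATLAB; the function name and toy data below are mine):

```python
import numpy as np

def assign_group(trace, group_means, threshold=0.75):
    """Assign a trace to the reference group with the highest zero-lag
    cross-correlation between unit-normalized signals, if that value
    exceeds the threshold; otherwise return None (unassigned)."""
    t = trace / np.linalg.norm(trace)  # Euclidean-norm normalization
    scores = [float(np.dot(t, g / np.linalg.norm(g))) for g in group_means]
    best = int(np.argmax(scores))
    return best if scores[best] > threshold else None

# A slightly perturbed copy of group 0's mean trace is assigned to group 0
x = np.linspace(0, 3, 50)
groups = [np.sin(x) + 1, np.cos(x) + 1]
print(assign_group(np.sin(x) + 1.01, groups))  # 0
```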