Found 262 repositories (showing 30)
ijkguo
Parallel Faster R-CNN implementation with MXNet.
IliaZenkov
Speech Emotion Classification with novel Parallel CNN-Transformer model built with PyTorch, plus thorough explanations of CNNs, Transformers, and everything in between
neilcz
A cross-platform mobile CNN framework with GPU+CPU parallel computing.
colby-j-wise
Explore CNN/LSTM/GRU parallel architectures for movie recommendations using Keras & TensorFlow in Python
Shenyonglong
Parallel Spatial-Temporal Self-Attention CNN-Based Motor Imagery Classification for BCI
Learning discriminative and robust time-frequency representations for environmental sound classification: Convolutional neural networks (CNNs) are among the best-performing neural network architectures for environmental sound classification (ESC). Recently, attention mechanisms have been used in CNNs to capture the useful information in the audio signal for sound classification, especially for weakly labelled data, where the training data provides sound class labels but no timing information about the acoustic events. In these methods, however, the inherent time-frequency characteristics and variations are not explicitly exploited when obtaining the deep features. In this paper, we propose a new method, the time-frequency enhancement block (TFBlock), in which temporal attention and frequency attention are employed to enhance the features from relevant frames and frequency bands. Compared with other attention mechanisms, our method constructs parallel branches that attend to temporal and frequency features separately, mitigating interference from sections of the acoustic environment where no sound events occur. Experiments on three benchmark ESC datasets show that our method improves classification performance and is robust to noise.
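The parallel-branch idea behind TFBlock can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the pooling and gating choices (mean-pooling each axis, sigmoid gates, element-wise fusion) are assumptions made here for clarity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tf_attention_block(feat):
    """Sketch of a time-frequency enhancement block.

    feat: (T, F) time-frequency feature map.
    Two parallel branches compute attention over frames (temporal)
    and over frequency bands, then re-weight the features, so quiet
    frames and uninformative bands are suppressed independently.
    """
    # Temporal branch: pool over frequency, gate each frame.
    t_gate = sigmoid(feat.mean(axis=1, keepdims=True))   # (T, 1)
    # Frequency branch: pool over time, gate each band.
    f_gate = sigmoid(feat.mean(axis=0, keepdims=True))   # (1, F)
    # Fuse the parallel branches (gates broadcast to (T, F)).
    return feat * t_gate * f_gate

feat = np.random.randn(64, 128)     # 64 frames, 128 frequency bands
out = tf_attention_block(feat)
assert out.shape == feat.shape
```

Because both gates lie in (0, 1), the block can only attenuate features; a learned version would use trainable projections before the gates.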
An open-source, Verilog-based parallel LeNet-1 CNN accelerator for FPGAs, developed in Vivado 2017
CAPRDZV
A curated list of capsule network resources (translated from Chinese):

Papers by Hinton et al.:
- Matrix Capsules with EM Routing - Hinton, G. E., Sabour, S. and Frosst, N. (2018)
- Dynamic Routing Between Capsules - Sabour, S., Frosst, N. and Hinton, G. E. (2017)
- Transforming Auto-encoders - Hinton, G. E., Krizhevsky, A. and Wang, S. D. (2011)
- A Parallel Computation that Assigns Canonical Object-Based Frames of Reference - Hinton, G. E. (1981)
- Shape Representation in Parallel Systems - Hinton, G. E. (1981)
- Optimizing Neural Networks that Generate Images - Tijmen Tieleman's dissertation

Other papers:
- Capsule Network Performance on Complex Data - Xi, E., Bing, S. and Jin, Y. (2017)
- Accurate Reconstruction of Image Stimuli from Human fMRI Based on the Decoding Model with Capsule Network Architecture - Qiao, K., Zhang, C., Wang, L., Yan, B., Chen, J., Zeng, L. and Tong, L. (2018)
- An Optimization View on Dynamic Routing Between Capsules - Wang, D. and Liu, E. (2018)
- CapsuleGAN: Generative Adversarial Capsule Network - Jaiswal, A., AbdAlmageed, W. and Natarajan, P. (2018)
- Spectral Capsule Networks - Bahadori, M. T. (2018)

Blog posts:
- Max Pechyonkin's introductory series on capsule networks: the intuition behind capsule networks; how capsules work; the dynamic routing algorithm between capsules; capsule network architecture; implementing a capsule network in TensorFlow
- Debarko De's capsule network tutorial, with a thoroughly annotated implementation
- Custom TensorFlow ops for capsule networks in CUDA - Jos van de Wolfshaar's article on CUDA support for custom capsule network operations
- New ISI research: CapsuleGAN - replacing the CNN discriminator of a GAN with a capsule network, outperforming a convolutional GAN on MNIST
- Uncovering the Intuition behind Capsule Networks and Inverse Graphics - Tanay Kothari's long-form tutorial
- A Visual Representation of Capsule Connections in Dynamic Routing Between Capsules - Mike Ross's capsule network diagram
- Capsule Networks Are Shaking up AI - Here's How to Use Them - Nick Bourdakos's introduction
- Capsule Networks Explained - Kendrick Tan's explanation
- Understanding Dynamic Routing between Capsules (Capsule Networks) - Jonathan Hui's tutorial, with thoroughly annotated Keras implementation code
- Matrix Capsules with EM Routing - Adrian Colyer's article on EM routing
- Capsule Networks: A Glossary - Sebastian Kwiatkowski's glossary of capsule network terms
- Overview of awesome articles - reviews of three capsule network tutorials

Videos:
- What is wrong with convolutional neural nets? - Geoffrey Hinton's talk at MIT Brain & Cognitive Sciences
- Capsule Networks (CapsNets) - Tutorial - "This video is amazing. I wish I could explain capsules this clearly." - Geoffrey Hinton
- Capsule networks: overview - an overview of capsule networks, covering both vector and matrix capsules
- Overview of awesome videos - reviews of the three videos above
- Capsule Networks: An Improvement to Convolutional Networks - Siraj Raval's video introduction

Dynamic routing implementations:
- Official: Sarasra/models - the code used in the paper "Dynamic Routing Between Capsules"
- TensorFlow: alisure-ml/CapsNet, bourdakos1/capsule-networks, etendue/CapsNet_TF, InnerPeace-Wu/CapsNet-tensorflow, jaesik817/adv_attack_capsnet, jostosh/capsnet, JunYeopLee/capsule-networks, laodar/tf_CapsNet, leoniloris/CapsNet, naturomics/CapsNet-Tensorflow, rrqq/CapsNet-tensorflow-jupyter, thibo73800/capsnet-traffic-sign-classifier, tjiang31/CapsNet, winwinJJiang/capsNet-Tensorflow
- PyTorch: acburigo/CapsNet, adambielski/CapsNet-pytorch, AlexHex7/CapsNet_pytorch, aliasvishnu/Capsule-Networks-Notebook-MNIST, andreaazzini/capsnet.pytorch, cedrickchee/capsule-net-pytorch, dragen1860/CapsNet-Pytorch, gram-ai/capsule-networks, higgsfield/Capsule-Network-Tutorial, laubonghaudoi/CapsNet_guide_PyTorch, leftthomas/CapsNet, nishnik/CapsNet-PyTorch, tonysy/CapsuleNet-PyTorch, Ujjwal-9/CapsNet
- Keras: fengwang/minimal-capsule, gusgad/capsule-GAN, mitiku1/Emopy-CapsNet, ruslangrimov/capsnet-with-capsulewise-convolution, streamride/CapsNet-keras-imdb, sunxirui310/CapsNet-Keras, theblackcat102/dynamic-routing-capsule-cifar, XifengGuo/CapsNet-Keras, XifengGuo/CapsNet-Fashion-MNIST
- Chainer: soskek/dynamic_routing_between_capsules
- Torch: mrkulk/Unsupervised-Capsule-Network
- MXNet: AaronLeong/CapsNet_Mxnet, GarrickLin/Capsnet.Gluon, Soonhwan-Kwon/capsnet.mxnet
- CNTK: Southworkscom/CapsNet-CNTK
- Lasagne: DeniskaMazur/CapsNet-Lasagne
- Matlab: yechengxi/LightCapsNet
- R: dfalbel/capsnet
- JavaScript: alseambusher/capsnet.js
- Vulcan: moothyknight/CapsNet-for-Graphics-Rendering-Optimization

EM routing implementations:
- TensorFlow: gyang274/capsulesEM, www0wwwjs1/Matrix-Capsules-EM-Tensorflow
- PyTorch: shzygmyx/Matrix-Capsules-pytorch

Other resources:
- Capsule Networks discussion - Facebook discussion group
- CapsNet-Tensorflow - gitter.im chat room for CapsNet-Tensorflow
- Will capsule networks replace neural networks? - Quora Q&A
- Could GANs work with Hinton's capsule theory? - Quora Q&A
- Dynamic Routing Between Capsules - Kyuhwan Jung's review of the paper (SlideShare)
Who doesn't dream of a new FPGA family that provides embedded hard neurons in its silicon fabric instead of the conventional DSP and multiplier blocks? An optimized hard-neuron design would let software and hardware designers create and test different deep learning network architectures, especially convolutional neural networks (CNNs), more easily and faster than with any FPGA family on the market today. The revolutionary idea behind this project is to open the gate of creativity for a precisely tailored new generation of FPGA families that avoid the wasted logic resources and unneeded bus widths of today's conventional DSP blocks. The project focuses on the anchor point of any deep learning architecture: designing an optimized high-speed neuron block to replace the conventional DSP blocks and avoid the drawbacks designers face when trying to fit a CNN architecture design to them. The proposed neuron takes parallel operation as its primary keystone, alongside minimizing the logic elements used to construct the neuron cell. The target is a resource usage not exceeding 500 ALMs and an expected maximum operating frequency of 834.03 MHz per neuron. In this project, ultra-fast, adaptive, parallel modules such as parallel multiplier-accumulators (MACs) and a ReLU activation function are designed as soft blocks in VHDL, opening a new horizon for FPGA designers to build their own CNNs. We can't stop imagining Intel/Altera leading the market by adopting the proposed CNN block into their new FPGA architecture fabrics as a separate new logic family soon.
Users of the proposed CNN blocks will be amazed by the operations per second available to them while designing their own CNN architectures. According to the first coding trial, a single MAC unit reaches an initial speed of 3.5 giga-operations per second (GOPS) and can multiply up to 4 different inputs by a common weight value. Since the blocks can also operate in parallel, the data throughput of the proposed design can rise to about 16 tera-operations per second (TOPS), a leap in FPGA capability for the era of deep learning algorithms. Finally, we believe this proposed CNN block for FPGAs is just a first step that leaves little room for competition from conventional CPUs and GPUs, thanks to the massive speed it provides and the flexible scalability achievable through the parallel operation of such FPGA-based CNN blocks.
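The behavior of one MAC unit as described (up to 4 inputs sharing a common weight) can be modeled in plain Python. This is a hypothetical behavioral sketch, not the VHDL design; the function name and interface are illustrative only.

```python
def mac4(inputs, weight, acc=0):
    """Behavioral model of the proposed 4-input MAC unit:
    multiplies up to 4 inputs by one common weight and
    accumulates the products into acc."""
    assert len(inputs) <= 4, "the unit accepts at most 4 inputs"
    for x in inputs:
        acc += x * weight
    return acc

# One cycle: 4 multiplies + 4 adds against a shared weight.
print(mac4([1, 2, 3, 4], weight=2))  # 2 + 4 + 6 + 8 = 20

# Sanity check on the throughput claim: at 3.5 GOPS per MAC,
# reaching ~16 TOPS requires on the order of
# 16e12 / 3.5e9 ≈ 4571 MAC units running in parallel.
print(round(16e12 / 3.5e9))  # 4571
```

The shared-weight design matches a CNN's weight reuse: one kernel coefficient is broadcast to several input pixels per cycle.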
zhangqiao970914
[IEEE TCSVT] TCNet: Co-salient Object Detection via Parallel Interaction of Transformers and CNNs
Neovairis
For better estimation of aero-engine RUL (remaining useful life), we concatenate a 1-D CNN and an LSTM in a parallel structure.
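The parallel CNN/LSTM fusion described here can be sketched in NumPy. A minimal illustration, not the repository's model: weights are random, there is no training loop, and the regression head on the fused vector is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_branch(x, kernels):
    """1-D CNN branch: valid convolution per kernel + global average pooling."""
    return np.array([np.convolve(x, k, mode="valid").mean() for k in kernels])

def lstm_branch(x, h_dim=8):
    """Minimal single-layer LSTM over the sequence; returns the final
    hidden state. Weights are random here purely for illustration."""
    W = rng.normal(size=(4 * h_dim, 1 + h_dim)) * 0.1
    b = np.zeros(4 * h_dim)
    h, c = np.zeros(h_dim), np.zeros(h_dim)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x_t in x:
        z = W @ np.concatenate(([x_t], h)) + b
        i, f, o, g = np.split(z, 4)          # input/forget/output gates + cell
        c = sig(f) * c + sig(i) * np.tanh(g)
        h = sig(o) * np.tanh(c)
    return h

x = rng.normal(size=100)                     # one sensor channel over 100 cycles
fused = np.concatenate([conv_branch(x, [rng.normal(size=5) for _ in range(4)]),
                        lstm_branch(x)])     # 4 conv features + 8 LSTM features
assert fused.shape == (12,)
```

In the parallel structure, both branches see the same raw sequence; a final dense layer (not shown) would map the fused vector to an RUL estimate.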
Code for our paper in ACL 2017
This is an official implementation for "Bidirectional Semi-supervised Dual-branch CNN for Robust 3D Reconstruction of Stereo Endoscopic Images via Adaptive Cross and Parallel Supervisions".
akalakheti
Motor Imagery Classification using Attention based Parallel CNN-LSTM architecture. Dataset used: PhysioNet Motor Imagery Dataset
CognitiveAISystems
KAGE-Bench: pure JAX 2D platformer RL benchmark for visual OOD generalization. Massively-parallel (vmap/JIT) RGB env with YAML-configurable visuals/physics, plus PPO-CNN (Flax) training scripts.
WilliammmZ
This project includes the official implementation of "Video Compression Artifact Reduction by Fusing Motion Compensation and Global Context in a Swin-CNN based Parallel Architecture " (AAAI'23)
harpribot
Parallel implementation of CNNs on all major frameworks - Keras, TensorFlow, etc.
frknrnn
Precise and quick monitoring of key cytometric features such as cell count, cell size, cell morphology, and DNA content is crucial for life-science research and development. Cytometry is important for numerous applications in biotechnology, medical sciences, and cell culture research laboratories. Flow cytometry, which relies on aligning cells in a flow and characterizing them by optical or electrical detection, has been the dominant cytometry approach for high-throughput applications. Recent advances in digital microscopy have revealed image cytometry as a viable alternative that can lead to simpler, more compact, and less expensive solutions. Traditionally, image cytometry relies on a hemocytometer accompanied by visual inspection by an operator under the microscope. This approach is prone to error due to the operator's subjective decisions. Machine learning approaches have recently emerged as powerful tools enabling quick and highly accurate image cytometric analysis that is easily generalizable to different cell types. Here, we demonstrate a modular deep learning system (DeepCAN) that provides a complete solution for automated cell counting and viability analysis. DeepCAN employs three neural network blocks, called Parallel Segmenter, Cluster CNN, and Viability CNN, trained for initial segmentation, cluster separation, and cell viability analysis, respectively. The Parallel Segmenter and Cluster CNN blocks achieve highly accurate segmentation of individual cells, while the Viability CNN block performs viability classification. A modified U-Net, a well-known deep neural network model for bio-image analysis, is used in the Parallel Segmenter, while the LeNet-5 architecture and its modified versions are used for the Cluster CNN and Viability CNN, respectively. We trained the Parallel Segmenter using 15 images of A2780 cells and 5 images of yeast cells containing 14742 individual cell images.
Similarly, 6101 and 5900 A2480 cell images were employed to train the Cluster CNN and Viability CNN models. 2514 individual A2780 cell images were used to test the overall segmentation performance of the Parallel Segmenter combined with the Cluster CNN, revealing a high precision of 96.52%. The overall cell counting/viability analysis performance of DeepCAN was tested with A2780 (2514 cells), A549 (601 cells), Colo (356 cells), and MDA-MB-231 (857 cells) cell images, revealing high counting/viability accuracies of 93.82%/95.93%, 92.18%/97.90%, and 85.32%/97.40%, respectively.
jingang-cv
Exploiting Multi-scale Parallel Self-attention and Local Variation via Dual-branch Transformer-CNN Structure for Face Super-resolution
sudheerachary
One weird trick for parallelizing convolutional neural networks
Successfully using CNNs and GRUs to classify time-series signals
nelsonalbertohj
Implementation of Parallel CNN and RNN architecture for classification of motor imagery data.
Tamerkobba
This project explores the parallelization of Convolutional Neural Networks (CNNs) using MPI, OpenMP, and CUDA to enhance performance and reduce computational time on the MNIST dataset
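The data-parallel side of this approach (splitting a batch across workers, as an MPI or OpenMP version would) can be sketched in Python with threads. This is a hypothetical sketch of the idea only; the repository's actual MPI/OpenMP/CUDA kernels are not reproduced here.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def conv2d_valid(img, k):
    """Naive valid 2-D convolution (cross-correlation) for one image."""
    H, W = img.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return out

def parallel_conv(batch, k, workers=4):
    """Split the batch across workers, convolve chunks in parallel, regather.
    This mirrors the scatter/compute/gather pattern of an MPI data split."""
    chunks = np.array_split(batch, workers)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda c: [conv2d_valid(img, k) for img in c], chunks)
    return np.concatenate([np.stack(r) for r in results if r])

rng = np.random.default_rng(1)
batch = rng.normal(size=(8, 28, 28))   # 8 MNIST-sized images
k = rng.normal(size=(3, 3))
par = parallel_conv(batch, k)
ser = np.stack([conv2d_valid(img, k) for img in batch])
assert np.allclose(par, ser)           # parallel result matches the serial pass
```

The CUDA variant parallelizes at a finer grain (one thread per output pixel), but the correctness check is the same: the parallel output must equal the serial one.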
HemantaIngle
In this project, our main aim is to accelerate CNN (convolutional neural network) image recognition with a platform deployable on an FPGA. CNNs are used for image classification, speech recognition, and video analysis. CNNs are commonly accelerated on GPUs (graphics processing units), which are relatively slow and consume a high amount of power, as a CNN requires around 20 GFLOPs per image. CPU acceleration, while cheaper since CPUs are readily available on most x86 machines, scales poorly in power. Modern application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs) offer better power efficiency and faster computation than GPUs. With the FPGA's reconfigurable, parallel architecture as a base, we target CNN acceleration using PipeCNN, an OpenCL-based design synthesized via high-level synthesis (HLS) tools such as Intel's Quartus and the OpenCL toolkit. Modern large-scale FPGAs such as the Stratix 10 and Arria 10 have shown about 10 percent lower power consumption than GPUs, with the added advantages of a pipelined parallel architecture and dedicated DSPs for faster, more efficient computation. The main goal of the project is to design an OpenCL accelerator that is a generic yet powerful means of improving throughput in inference computations.
eekamal
Parallel CNN & LSTM-powered AI model for demodulation.
We propose a method called parallel multiple CNNs with temporal predictions (PMCTP) for early fault detection of wind turbine blade cracking.
memgonzales
Presented at the 2023 International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG 2023). Lightweight mirror segmentation CNN that uses an EfficientNet backbone, employs parallel convolutional layers to capture edge features, and applies filter pruning for model compression
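Filter pruning of the kind mentioned here can be sketched with the common L1-norm criterion. This is an assumption for illustration (the paper's exact pruning criterion may differ): rank a conv layer's filters by the L1 norm of their weights and keep only the largest.

```python
import numpy as np

def prune_filters(weights, keep_ratio=0.5):
    """L1-norm filter pruning sketch for model compression.
    weights: (n_filters, in_ch, kh, kw) conv weight tensor.
    Keeps the keep_ratio fraction of filters with the largest L1 norm,
    preserving their original order."""
    n_keep = max(1, int(weights.shape[0] * keep_ratio))
    norms = np.abs(weights).sum(axis=(1, 2, 3))     # one L1 score per filter
    keep = np.sort(np.argsort(norms)[-n_keep:])     # indices of the survivors
    return weights[keep], keep

rng = np.random.default_rng(2)
w = rng.normal(size=(16, 3, 3, 3))   # 16 filters, 3 input channels, 3x3 kernels
pruned, kept = prune_filters(w, keep_ratio=0.25)
assert pruned.shape == (4, 3, 3, 3)
```

After pruning, the next layer's input channels must be sliced with the same `kept` indices, and a brief fine-tuning pass usually recovers most of the lost accuracy.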
git-tarik
Explainable Hand Gesture Recognition (ParallelCNN + SHAP) — MANIT Bhopal internship
Reem-Alatrash
A speech emotion recognition (SER) system that won 1st place in a competition for a deep learning master's course. It uses a parallel CNN built with Keras and TensorFlow.
DreamyWanderer
The final project of the Parallel Computing course at HCMUS. We use CUDA to parallelize the implementation of the convolutional layer in a simple CNN architecture and measure the performance of various parallelization strategies.