Code & Data for "Tabular Transformers for Modeling Multivariate Time Series" (ICASSP, 2021)
Stars: 361 · Forks: 91 · Watchers: 361 · Open Issues: 16
bug fix: self.transformer_encoder (which passes through nn.MultiheadAttention) expects the batch size in the second dimension of the input tensor, but the current implementation places it in the first dimension, which produces an incorrect multi-head attention computation. This commit transposes the input so the batch dimension is second before the call to self.transformer_encoder, then transposes it back afterwards, because the subsequent layers in the architecture expect the batch dimension first. Newer versions of PyTorch add a boolean batch_first argument to nn.TransformerEncoderLayer that allows the batch dimension to stay first, but this commit assumes the older version used in this project (PyTorch 1.6.0), which does not have that argument. (#14)
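The transpose-in/transpose-out pattern described in the commit can be sketched as follows. This is a minimal, hypothetical module (not the project's actual class) assuming a pre-1.9 PyTorch where nn.TransformerEncoderLayer has no batch_first flag and the encoder expects input of shape (seq_len, batch, embed_dim):

```python
import torch
import torch.nn as nn

class BatchFirstEncoder(nn.Module):
    """Hypothetical sketch of the fix: swap the batch dimension into the
    second position before nn.TransformerEncoder, and back out afterwards."""

    def __init__(self, embed_dim=32, nhead=4, num_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=nhead)
        self.transformer_encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, x):
        # x arrives batch-first: (batch, seq_len, embed_dim)
        x = x.transpose(0, 1)            # -> (seq_len, batch, embed_dim), as the encoder expects
        x = self.transformer_encoder(x)  # attention now mixes across the sequence, not the batch
        return x.transpose(0, 1)         # back to (batch, seq_len, embed_dim) for later layers
```

Without the transposes, attention would be computed across examples in the batch instead of across time steps, which is the silent miscomputation the commit fixes.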
Commit: 09b4273