Implementation of various self-attention mechanisms for computer vision in PyTorch, built with einsum and einops. This is an ongoing repository focused on vision self-attention modules.
```bash
$ pip install self-attention-cv
```
It is recommended to install PyTorch in your environment beforehand, especially if you don't have a GPU, so that you can pick the appropriate build. To run the tests from the terminal, you may first need to add the project root to your path:

```bash
$ export PYTHONPATH=$PYTHONPATH:`pwd`
$ pytest
```
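As a rough illustration of the einsum style used throughout, here is a minimal sketch of scaled dot-product self-attention; the names and shapes are illustrative, not the library's exact internals:

```python
import torch

def scaled_dot_product_attention(q, k, v):
    # q, k, v: [batch, heads, tokens, dim_head]
    scale = q.shape[-1] ** -0.5
    # similarity of every query with every key -> [batch, heads, tokens, tokens]
    dots = torch.einsum('bhid,bhjd->bhij', q, k) * scale
    attn = dots.softmax(dim=-1)
    # attention-weighted sum of the values -> [batch, heads, tokens, dim_head]
    return torch.einsum('bhij,bhjd->bhid', attn, v)
```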
Multi-head self-attention over a batch of token sequences, with an optional tokens × tokens mask:

```python
import torch
from self_attention_cv import MultiHeadSelfAttention

model = MultiHeadSelfAttention(dim=64)
x = torch.rand(16, 10, 64)  # [batch, tokens, dim]
mask = torch.zeros(10, 10)  # [tokens, tokens]
mask[5:8, 5:8] = 1
y = model(x, mask)
```
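How the mask is consumed is an implementation detail; one common convention (an assumption here, not a statement about this library's internals) is to fill the masked positions of the attention scores with -inf before the softmax:

```python
# hypothetical illustration of one masking convention
scores = torch.rand(16, 10, 10)  # [batch, tokens, tokens] raw attention scores
scores = scores.masked_fill(mask.bool(), float('-inf'))  # mask from the example above
attn = scores.softmax(dim=-1)
```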
Axial attention over a square 2D feature map:

```python
import torch
from self_attention_cv import AxialAttentionBlock

model = AxialAttentionBlock(in_channels=256, dim=64, heads=8)
x = torch.rand(1, 256, 64, 64)  # [batch, in_channels, dim, dim]
y = model(x)
```
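Axial attention factorizes 2D attention into two cheaper 1D passes, one along the height axis and one along the width axis. A hedged sketch of the height pass with einops (illustrative only, not the block's internals):

```python
import torch
from einops import rearrange

x = torch.rand(1, 256, 64, 64)  # [batch, channels, height, width]
# treat every column as an independent 1D sequence of height tokens
rows = rearrange(x, 'b c h w -> (b w) h c')  # [batch * width, height, channels]
# ... any 1D self-attention can now run along the height dimension ...
x = rearrange(rows, '(b w) h c -> b c h w', w=64)  # back to the 2D feature map
```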
A full Transformer encoder, i.e. a stack of multi-head self-attention and MLP blocks:

```python
import torch
from self_attention_cv import TransformerEncoder

model = TransformerEncoder(dim=64, blocks=6, heads=8)
x = torch.rand(16, 10, 64)  # [batch, tokens, dim]
mask = torch.zeros(10, 10)  # [tokens, tokens]
mask[5:8, 5:8] = 1
y = model(x, mask)
```
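Each block in such an encoder conventionally pairs self-attention with a position-wise MLP, both wrapped in residual connections. A generic pre-norm sketch of the pattern (an assumption about the standard design, not this implementation's code):

```python
import torch.nn as nn
from self_attention_cv import MultiHeadSelfAttention

class EncoderBlockSketch(nn.Module):
    # hypothetical block: norm -> MHSA -> residual, then norm -> MLP -> residual
    def __init__(self, dim=64, dim_linear_block=256):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = MultiHeadSelfAttention(dim=dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim_linear_block),
            nn.GELU(),
            nn.Linear(dim_linear_block, dim),
        )

    def forward(self, x):
        x = x + self.attn(self.norm1(x))
        x = x + self.mlp(self.norm2(x))
        return x
```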
A Vision Transformer (ViT), either standalone or with a ResNet50 backbone:

```python
import torch
from self_attention_cv import ViT, ResNet50ViT

model1 = ResNet50ViT(img_dim=128, pretrained_resnet=False,
                     blocks=6, num_classes=10,
                     dim_linear_block=256, dim=256)
# or
model2 = ViT(img_dim=256, in_channels=3, patch_dim=16, num_classes=10, dim=512)
x = torch.rand(2, 3, 256, 256)
y = model2(x)  # [2, 10]
```
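The defining ViT step is cutting the image into flattened patches that become the token sequence; with einops this is a one-liner (the standard recipe, shown here as a sketch rather than this module's exact code):

```python
import torch
from einops import rearrange

img = torch.rand(2, 3, 256, 256)
p = 16  # patch_dim
# 256/16 = 16 patches per side -> 256 tokens of 16*16*3 = 768 features each
patches = rearrange(img, 'b c (h p1) (w p2) -> b (h w) (p1 p2 c)', p1=p, p2=p)
print(patches.shape)  # torch.Size([2, 256, 768])
```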
TransUnet for image segmentation:

```python
import torch
from self_attention_cv.transunet import TransUnet

a = torch.rand(2, 3, 128, 128)
model = TransUnet(in_channels=3, img_dim=128, vit_blocks=8,
                  vit_dim_linear_mhsa_block=512, classes=5)
y = model(a)  # [2, 5, 128, 128]
```
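The output holds one logit per class per pixel; a hard segmentation map then follows from an argmax over the class dimension:

```python
seg_map = y.argmax(dim=1)  # [2, 128, 128], one class index per pixel
```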
A bottleneck attention block, as used in BoTNet:

```python
import torch
from self_attention_cv.bottleneck_transformer import BottleneckBlock

inp = torch.rand(1, 512, 32, 32)
bottleneck_block = BottleneckBlock(in_channels=512, fmap_size=(32, 32),
                                   heads=4, out_channels=1024, pooling=True)
y = bottleneck_block(inp)
```
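With `pooling=True` the block is expected to also downsample the feature map, following the BoTNet design of attention followed by average pooling; a quick shape check:

```python
print(y.shape)  # channels expanded to out_channels; spatial size reduced if pooling is applied
```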
1D absolute and relative positional embeddings:

```python
import torch
from self_attention_cv.pos_embeddings import AbsPosEmb1D, RelPosEmb1D

model = AbsPosEmb1D(tokens=20, dim_head=64)
q = torch.rand(2, 3, 20, 64)  # [batch, heads, tokens, dim_head]
y1 = model(q)

model = RelPosEmb1D(tokens=20, dim_head=64, heads=3)
q = torch.rand(2, 3, 20, 64)
y2 = model(q)
```
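Both modules score the queries against learned position embeddings, yielding position-aware logits that are added to the content logits inside attention. A hedged sketch of the absolute variant, assuming the BoTNet-style recipe:

```python
import torch

tokens, dim_head = 20, 64
pos_emb = torch.randn(tokens, dim_head)  # one learned vector per absolute position
q = torch.rand(2, 3, tokens, dim_head)   # [batch, heads, tokens, dim_head]
# query-position logits -> [2, 3, 20, 20]
logits = torch.einsum('bhid,jd->bhij', q, pos_emb)
```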
2D relative positional embeddings:

```python
import torch
from self_attention_cv.pos_embeddings import RelPosEmb2D

dim = 32  # spatial dim of the feature map
model = RelPosEmb2D(feat_map_size=(dim, dim), dim_head=128)
q = torch.rand(2, 4, dim * dim, 128)  # [batch, heads, tokens, dim_head]
y = model(q)
```
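For the 2D case, relative embeddings are typically factorized into separate height and width components, so the table of learned offsets grows linearly with the spatial size rather than quadratically with the number of tokens.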
Thanks to Alex Rogozhnikov (@arogozhnikov) for the awesome einops package. For my re-implementations I have studied and borrowed code from many repositories of Phil Wang (@lucidrains). By studying his code I have managed to grasp self-attention, discover NLP details that are never mentioned in the papers, and learn from his clean coding style.
```bibtex
@article{adaloglou2021transformer,
    title        = "Transformers in Computer Vision",
    author       = "Adaloglou, Nikolas",
    journal      = "https://theaisummer.com/",
    year         = "2021",
    howpublished = {https://github.com/The-AI-Summer/self-attention-cv},
}
```
If you really like this repository and find it useful, please consider (★) starring it, so that it can reach a broader audience of like-minded people. It would be highly appreciated 😃!