PyTorch implementations of many **classical backbone CNNs, data augmentation methods, losses, attention modules, visualization tools, and other common algorithms**.
- torch, torchvision
- torchsummary
- other necessary packages
A training script is provided in `train_baseline.py`; the training arguments are defined in `args.py`.
- StochDepth
- Label Smoothing
- Cutout
- DropBlock
- Mixup (a minimal sketch follows this list)
- Manifold Mixup
- ShakeDrop
- CutMix
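To show how one of these tricks plugs into a training step, here is a minimal Mixup sketch (the helper name and its use in the loop are illustrative, not the exact code in `train_baseline.py`):

```python
import numpy as np
import torch

def mixup_data(x, y, alpha=1.0):
    # Sample the mixing coefficient from Beta(alpha, alpha) and mix the batch
    # with a randomly permuted copy of itself (Zhang et al., mixup).
    lam = float(np.random.beta(alpha, alpha)) if alpha > 0 else 1.0
    index = torch.randperm(x.size(0), device=x.device)
    mixed_x = lam * x + (1 - lam) * x[index]
    return mixed_x, y, y[index], lam

# Inside the training loop:
#   inputs, targets_a, targets_b, lam = mixup_data(inputs, targets, alpha=1.0)
#   outputs = model(inputs)
#   loss = lam * criterion(outputs, targets_a) + (1 - lam) * criterion(outputs, targets_b)
```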
```python
import torchvision

# Project-local dataset classes (the `dataloder` spelling follows the repo's module names).
from dataloder.scoliosis_dataloder import ScoliosisDataset
from dataloder.facial_attraction_dataloder import FacialAttractionDataset
from dataloder.fa_and_sco_dataloder import ScoandFaDataset
from dataloder.scofaNshot_dataloder import ScoandFaNshotDataset
from dataloder.age_dataloder import MegaAsiaAgeDataset

# `training_transforms`, `validation_transforms`, `Augmentation`, and
# `fa_reduced_cifar10` are defined elsewhere in the repo.
def load_dataset(data_config):
    if data_config.dataset == 'cifar10':
        training_transform = training_transforms()
        if data_config.autoaug:
            # Prepend an AutoAugment policy searched on reduced CIFAR-10.
            print('AutoAugment the data!')
            training_transform.transforms.insert(0, Augmentation(fa_reduced_cifar10()))
        train_dataset = torchvision.datasets.CIFAR10(root=data_config.data_path,
                                                     train=True,
                                                     transform=training_transform,
                                                     download=True)
        val_dataset = torchvision.datasets.CIFAR10(root=data_config.data_path,
                                                   train=False,
                                                   transform=validation_transforms(),
                                                   download=True)
        return train_dataset, val_dataset
    elif data_config.dataset == 'cifar100':
        train_dataset = torchvision.datasets.CIFAR100(root=data_config.data_path,
                                                      train=True,
                                                      transform=training_transforms(),
                                                      download=True)
        val_dataset = torchvision.datasets.CIFAR100(root=data_config.data_path,
                                                    train=False,
                                                    transform=validation_transforms(),
                                                    download=True)
        return train_dataset, val_dataset
```
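The returned datasets can then be wrapped in the usual PyTorch loaders; a small sketch (batch size and worker count are illustrative assumptions):

```python
from torch.utils.data import DataLoader

train_dataset, val_dataset = load_dataset(data_config)
train_loader = DataLoader(train_dataset, batch_size=128, shuffle=True, num_workers=4)
val_loader = DataLoader(val_dataset, batch_size=128, shuffle=False, num_workers=4)
```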
Two deployment modes for PyTorch models are provided: web deployment and C++ deployment. For web deployment:
- Store the trained weight file in the `flash_Deployment` folder.
- Modify the weight path in `server.py`.
- Call the service via `client.py` (a minimal sketch follows).
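A minimal sketch of what such a server/client pair might look like, assuming a Flask-based HTTP service (the endpoint name, port, payload format, and preprocessing are illustrative assumptions, not necessarily what `server.py` and `client.py` actually use):

```python
# server.py (sketch, assuming Flask and a fully pickled model)
import io
import torch
from flask import Flask, request, jsonify
from PIL import Image
from torchvision import transforms

app = Flask(__name__)
model = torch.load('weights.pth', map_location='cpu')  # hypothetical path; adapt if only a state_dict is saved
model.eval()
# Assumed preprocessing; should match the transforms used at training time.
preprocess = transforms.Compose([transforms.Resize((32, 32)), transforms.ToTensor()])

@app.route('/predict', methods=['POST'])
def predict():
    image = Image.open(io.BytesIO(request.files['image'].read())).convert('RGB')
    with torch.no_grad():
        pred = model(preprocess(image).unsqueeze(0)).argmax(dim=1).item()
    return jsonify({'class_id': pred})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

# client.py (sketch):
#   import requests
#   resp = requests.post('http://127.0.0.1:5000/predict',
#                        files={'image': open('test.jpg', 'rb')})
#   print(resp.json())
```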
- **AlexNet**
- **VGG**
- **ResNet**
- **ResNeXt**
- **InceptionV1**
- **InceptionV2 and InceptionV3**
- **InceptionV4 and Inception-ResNet**
- **GoogLeNet**
- **EfficientNet**
- **MnasNet**
- **DPN**
- **SE Attention** (a minimal sketch follows this list)
- **External Attention**
- **Self Attention**
- **SK Attention**
- **CBAM Attention**
- **BAM Attention**
- **ECA Attention**
- **DANet Attention**
- **Pyramid Split Attention (PSA)**
- **EMSA Attention**
- **A2Attention**
- **Non-Local Attention**
- **CoAtNet**
- **CoordAttention**
- **HaloAttention**
- **MobileViTAttention**
- **MUSEAttention**
- **OutlookAttention**
- **ParNetAttention**
- **ParallelPolarizedSelfAttention**
- **Residual Attention**
- **S2Attention**
- **SpatialGroupEnhance Attention**
- **ShuffleAttention**
- **GFNet Attention**
- **TripletAttention**
- **UFOAttention**
- **VIPAttention**
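To give a sense of the shape of these plug-and-play modules, here is a minimal SE (Squeeze-and-Excitation) block; the actual module files in this repo may differ in naming and details:

```python
import torch
import torch.nn as nn

class SEAttention(nn.Module):
    """Squeeze-and-Excitation: reweight channels using globally pooled statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: B x C x H x W -> B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # excitation: per-channel gates in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # rescale the input feature map channel-wise

# usage: y = SEAttention(channels=64)(torch.randn(2, 64, 32, 32))
```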
- **MobileNets** (the depthwise separable convolution they build on is sketched after this list)
- **MobileNetV2**
- **MobileNetV3**
- **ShuffleNet**
- **ShuffleNet V2**
- **SqueezeNet**
- **Xception**
- **MixNet**
- **GhostNet**
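The recurring building block behind several of these lightweight networks (MobileNets, MnasNet, Xception) is the depthwise separable convolution; a minimal sketch, not tied to any specific file in this repo:

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Factor a standard conv into a per-channel (depthwise) 3x3 conv plus a 1x1 (pointwise) conv."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)  # one filter per channel
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)  # mix channels
        self.bn1, self.bn2 = nn.BatchNorm2d(in_ch), nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        return self.relu(self.bn2(self.pointwise(x)))
```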
- **Auxiliary Classifier GAN**
- **Adversarial Autoencoder**
- **BEGAN**
- **BicycleGAN**
- **Boundary-Seeking GAN**
- **Cluster GAN**
- **Conditional GAN**
- **Context-Conditional GAN**
- **Context Encoder**
- **Coupled GAN**
- **CycleGAN**
- **Deep Convolutional GAN**
- **DiscoGAN**
- **DRAGAN**
- **DualGAN**
- **Energy-Based GAN**
- **Enhanced Super-Resolution GAN**
- **Least Squares GAN**
- **GAN**
- **InfoGAN**
- **Pix2Pix**
- **Relativistic GAN**
- **Semi-Supervised GAN**
- **StarGAN**
- **Wasserstein GAN**
- **Wasserstein GAN GP** (its gradient penalty is sketched after this list)
- **Wasserstein GAN DIV**
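As one concrete example from this list, the gradient penalty that distinguishes Wasserstein GAN GP can be sketched as follows (a minimal sketch; `critic` and `lambda_gp` are assumed names, not the repo's exact training code):

```python
import torch

def gradient_penalty(critic, real, fake):
    """WGAN-GP: penalize the critic's gradient norm at points interpolated between real and fake."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    grads = grads.view(grads.size(0), -1)
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()

# critic loss: fake_scores.mean() - real_scores.mean() + lambda_gp * gradient_penalty(critic, real, fake)
```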
- **SSD**
- **YOLO**
- **YOLOv2**
- **YOLOv3**
- **FCOS**
- **FPN**
- **RetinaNet**
- **Objects as Points**
- **FSAF**
- **CenterNet**
- **FoveaBox**
- **FCN**
- **Fast-SCNN**
- **LEDNet**
- **LRNNet**
- **FisheyeMODNet**
- **PolarMask**
- **FaceBoxes**
- **LFFD**
- **VarGFaceNet**
- **Stacked Hourglass Networks**
- **Simple Baselines**
- **LPN**
- label-smooth
- amsoftmax
- focal-loss (see the sketch after this list)
- dual-focal-loss
- triplet-loss
- giou-loss
- affinity-loss
- pc_softmax_cross_entropy
- ohem-loss (softmax-based online hard example mining loss)
- large-margin-softmax (BMVC 2019)
- lovasz-softmax-loss
- dice-loss (both generalized soft Dice loss and batch soft Dice loss)
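As an illustration, the multi-class focal loss from this list can be sketched as follows (a minimal sketch with a scalar `alpha`; the repo's implementation may weight classes differently):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Focal loss: FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t), down-weighting easy examples."""
    log_probs = F.log_softmax(logits, dim=1)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p_t of the true class
    pt = log_pt.exp()
    return (-alpha * (1 - pt) ** gamma * log_pt).mean()

# usage: loss = focal_loss(model(images), labels)
```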
This directory is used to convert TensorFlow weights to PyTorch.
It was hacked together fairly quickly, so the code is not the most
beautiful (just a warning!), but it does the job. I will be refactoring it soon.
A simple way to leverage the class-specific activations of convolutional layers in PyTorch.
- CAM
- ScoreCAM
- SSCAM
- ISCAM
- GradCAM (a hook-based sketch appears below)
- Grad-CAM++
- Smooth Grad-CAM++
- XGradCAM
- LayerCAM
More modules may be added later.
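For a sense of how these methods work, here is a minimal hook-based Grad-CAM sketch (illustrative only; assumes PyTorch >= 1.8 for `register_full_backward_hook`, and the modules in this repo may expose a different API):

```python
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_layer, class_idx=None):
    """Weight the target layer's activation maps by the pooled gradient of the class score."""
    acts, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1)  # explain the predicted class by default
    model.zero_grad()
    logits[range(x.size(0)), class_idx].sum().backward()
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)  # global-average-pool the gradients
    cam = F.relu((weights * acts[0]).sum(dim=1))       # weighted sum over channels, keep positives
    return cam / cam.amax(dim=(1, 2), keepdim=True).clamp(min=1e-8)  # normalize to [0, 1]
```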
During implementation I read many codebases and articles and drew on a lot of their content.
Copyright notices have been added to some files, but for others I no longer remember the main references. If anything here infringes, please contact me and I will remove it.
I wrote some blog posts (in Chinese) introducing the models implemented in this project:
Some of My Reference Repositories:
At present, the work organized in this project is not yet comprehensive. As my reading grows, I will keep improving it. Stars are welcome as support. If you find incorrect statements or incorrect code implementations, please feel free to point them out~