Guide To ResNeSt: A Better ResNet With The Same Costs

The ResNeSt architecture combines channel-wise attention with multi-path representation into a single, unified Split-Attention block.
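
A minimal PyTorch sketch of the Split-Attention idea: the input is expanded into several radix splits, a softmax across the splits produces channel-wise attention weights, and the splits are recombined. The `radix` and `reduction` values below are illustrative, not the paper's exact configuration, and cardinal groups are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitAttention(nn.Module):
    """Simplified Split-Attention block (radix splits only, cardinality = 1)."""
    def __init__(self, channels, radix=2, reduction=4):
        super().__init__()
        self.radix = radix
        inter = max(channels * radix // reduction, 32)
        # one convolution produces `radix` feature-map splits
        self.conv = nn.Conv2d(channels, channels * radix, 3, padding=1,
                              groups=radix, bias=False)
        self.bn = nn.BatchNorm2d(channels * radix)
        # small bottleneck MLP computes per-split channel attention
        self.fc1 = nn.Conv2d(channels, inter, 1)
        self.fc2 = nn.Conv2d(inter, channels * radix, 1)

    def forward(self, x):
        b, c = x.shape[:2]
        splits = F.relu(self.bn(self.conv(x)))                    # (B, C*radix, H, W)
        splits = splits.view(b, self.radix, c, *splits.shape[2:])
        gap = splits.sum(dim=1).mean(dim=(2, 3), keepdim=True)    # fuse splits, global pool
        attn = self.fc2(F.relu(self.fc1(gap)))                    # (B, C*radix, 1, 1)
        attn = attn.view(b, self.radix, c, 1, 1).softmax(dim=1)   # softmax across splits
        return (attn * splits).sum(dim=1)                         # attention-weighted sum

x = torch.randn(2, 64, 32, 32)
print(SplitAttention(64)(x).shape)   # torch.Size([2, 64, 32, 32])
```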

Complete Guide to T2T-ViT: Training Vision Transformers Efficiently with Minimal Data

T2T-ViT employs progressive tokenization, which aggregates image patches into overlapping tokens over a few iterations.
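
A rough sketch of the progressive tokenization step using `nn.Unfold` as the overlapping "soft split": each iteration turns the current feature map into overlapping patch tokens, so the sequence gets shorter while each token covers a wider region. The `SoftSplit` helper and the kernel/stride choices are illustrative; the real T2T-ViT also runs a lightweight transformer and a linear projection between steps, which this sketch omits.

```python
import torch
import torch.nn as nn

class SoftSplit(nn.Module):
    """One Tokens-to-Token step: unfold overlapping patches into tokens."""
    def __init__(self, kernel=3, stride=2, padding=1):
        super().__init__()
        self.unfold = nn.Unfold(kernel_size=kernel, stride=stride, padding=padding)

    def forward(self, x):                  # x: (B, C, H, W)
        tokens = self.unfold(x)            # (B, C*k*k, L) overlapping patches
        return tokens.transpose(1, 2)      # (B, L, C*k*k) token sequence

img = torch.randn(1, 3, 224, 224)
tokens = SoftSplit(kernel=7, stride=4, padding=2)(img)
print(tokens.shape)                        # torch.Size([1, 3136, 147])

# second iteration: tokens back to a map, then soft-split again,
# giving fewer tokens that each cover a larger, overlapping region
feat = tokens.transpose(1, 2).reshape(1, -1, 56, 56)
tokens = SoftSplit(kernel=3, stride=2, padding=1)(feat)
print(tokens.shape)                        # torch.Size([1, 784, 1323])
```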

PyTorch Code for Self-Attention Computer Vision

Self-Attention Computer Vision is a PyTorch-based library providing a one-stop solution for all self-attention-based requirements.
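
Without assuming the library's actual API, this is roughly what a multi-head self-attention layer of the kind such a library wraps looks like in plain PyTorch:

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Generic scaled dot-product self-attention over a token sequence."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=False)   # joint Q, K, V projection
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                                # x: (B, N, dim)
        b, n, d = x.shape
        qkv = self.qkv(x).view(b, n, 3, self.heads, d // self.heads)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)             # each: (B, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.proj(out)

tokens = torch.randn(2, 197, 768)                 # e.g. ViT-style patch tokens + CLS
print(SelfAttention(768)(tokens).shape)           # torch.Size([2, 197, 768])
```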

Guide To Building A ResNet Model With & Without Dropout

In this article, we explore the use of dropout with a pre-trained ResNet model.
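
A minimal sketch of that idea: take a torchvision pre-trained ResNet and insert dropout in front of the final classifier. The dropout rate and the 10-class head below are illustrative choices, not taken from the article.

```python
import torch
import torch.nn as nn
from torchvision import models

# load an ImageNet-pretrained ResNet-50 (weights argument per torchvision >= 0.13)
model = models.resnet50(weights="IMAGENET1K_V1")

# replace the final fully connected layer with dropout + a new classifier head
model.fc = nn.Sequential(
    nn.Dropout(p=0.5),                               # regularise the classifier head
    nn.Linear(model.fc.in_features, 10),             # e.g. a 10-class target task
)

x = torch.randn(4, 3, 224, 224)
print(model(x).shape)                                # torch.Size([4, 10])
```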

Amazon’s ResNeSt Surpassed Popular CVPR Award Winner ResNet

Recently, researchers from Amazon and the University of California, Davis introduced ResNeSt, which is a…

7 Deep Learning Methods Every AI Enthusiast Must Know

Deep Learning has seeped into almost every organisation and its day-to-day activities, right from…

Why ResNets Are A Major Breakthrough In Image Processing

Deep convolutional networks have led to remarkable breakthroughs in image classification. Driven by the significance…