AI Research Graph: Residual Neural Network (ResNet)
This AI research graph edition covers the key knowledge areas and important research papers related to Residual Neural Networks (ResNets), including Kaiming He's 10-minute presentation of the "Deep Residual Learning for Image Recognition" paper.
Google Brain and UC Berkeley's new paper "Revisiting ResNets: Improved Training and Scaling Strategies" has brought renewed attention to Residual Networks. First introduced by Kaiming He et al. (2015), ResNets reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. They are easier to optimize and can gain accuracy from considerably increased depth. This edition of the AI research graph focuses on the key knowledge areas and research papers related to this technique, including some of the most cited work and the latest developments.
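The core idea can be sketched in a few lines of NumPy. This is an illustrative toy, not He et al.'s actual architecture (which uses convolutions and batch normalization): the block computes y = x + F(x), where F is a small learned residual function, so if F outputs zero the block reduces to the identity mapping.

```python
import numpy as np

def residual_block(x, W1, b1, W2, b2):
    """Computes y = x + F(x), where F is a two-layer MLP residual branch.

    If F collapses to zero, the block becomes the identity mapping,
    which is what makes very deep stacks of such blocks easy to optimize.
    """
    h = np.maximum(0.0, x @ W1 + b1)  # first layer + ReLU
    f = h @ W2 + b2                   # second layer: the residual F(x)
    return x + f                      # shortcut connection adds the input back

rng = np.random.default_rng(0)
d = 4
x = rng.standard_normal((2, d))

# With all-zero weights the residual branch outputs 0,
# so the block passes its input through unchanged.
zeros = (np.zeros((d, d)), np.zeros(d), np.zeros((d, d)), np.zeros(d))
y = residual_block(x, *zeros)
print(np.allclose(y, x))  # True: the block defaults to the identity
```

This identity-by-default behavior is why adding more residual blocks cannot easily hurt optimization: each extra block only needs to learn a small correction on top of the identity, rather than a full unreferenced mapping.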
Top ResNet papers with video:
The graph below shows a paper-knowledge mapping for the top 10 research papers with video, derived from the core ResNet knowledge node.
Click to view video presentations of the top papers:
Additional highly cited papers with video:
We further look into high-impact related research papers based on citation counts. Click to view video presentations of the top papers:
Additional papers with video:
Here are some of the most recent publications in this research area. Click to view video presentations:
Additional papers worth reading:
- Dai et al., 2016. R-FCN: Object Detection via Region-based Fully Convolutional Networks. NIPS 2016
- Gulrajani et al., 2017. Improved Training of Wasserstein GANs. NIPS 2017
- Redmon and Farhadi, 2017. YOLO9000: Better, Faster, Stronger. CVPR 2017
- Ledig et al., 2017. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. CVPR 2017
- Szegedy et al., 2017. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. AAAI 2017
- Singh et al., 2021. Rapid Classification of Glaucomatous Fundus Images.
- Srinivas et al., 2021. Bottleneck Transformers for Visual Recognition.
- Mao et al., 2021. PatchNet: Short-range Template Matching for Efficient Video Processing.
- Kang et al., 2021. Distribution Adaptive INT8 Quantization for Training CNNs. AAAI 2021
- Yu et al., 2021. GNN-RL Compression: Topology-Aware Network Pruning using Multi-stage Graph Embedding and Reinforcement Learning.