Most Influential ECCV Papers (2022-05)
The European Conference on Computer Vision (ECCV) is one of the top computer vision conferences in the world. The Paper Digest Team analyzes all papers published at ECCV over the past years and presents the 15 most influential papers from each year. This ranking is constructed automatically from citations in both research papers and granted patents, and is updated frequently to reflect recent changes. To browse the most productive ECCV authors by year, ranked by number of accepted papers, see our list of most productive ECCV authors. To find the most influential papers from other conferences and journals, visit the Best Paper Digest page. Note: the most influential papers may or may not include the papers that won best paper awards. (Version: 2022-05)
Based in New York, Paper Digest is dedicated to producing high-quality text analysis results that people can actually use on a daily basis. Since 2018, we have been serving users around the world with a number of exclusive services for ranking, search, tracking, and automatic literature review.
If you do not want to miss interesting academic papers, you are welcome to sign up for our free daily paper digest service to get updates on new papers published in your area every day. You are also welcome to follow us on Twitter and LinkedIn to receive new conference digests.
Paper Digest Team
New York City, New York, 10017
team@paperdigest.org
TABLE 1: Most Influential ECCV Papers (2022-05)
Year | Rank | Paper | Author(s) |
---|---|---|---|
2020 | 1 | End-to-End Object Detection With Transformers (IF:8). Highlight: We present a new method that views object detection as a direct set prediction. | NICOLAS CARION et al. |
2020 | 2 | NeRF: Representing Scenes As Neural Radiance Fields For View Synthesis (IF:7). Highlight: We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. | BEN MILDENHALL et al. |
2020 | 3 | Contrastive Multiview Coding (IF:7). Highlight: We study this hypothesis under the framework of multiview contrastive learning, where we learn a representation that aims to maximize mutual information between different views of the same scene but is otherwise compact. | Yonglong Tian; Dilip Krishnan; Phillip Isola; |
2020 | 4 | UNITER: UNiversal Image-TExt Representation Learning (IF:6). Highlight: In this paper, we introduce UNITER, a UNiversal Image-TExt Representation, learned through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions), which can power heterogeneous downstream V+L tasks with joint multimodal embeddings. | YEN-CHUN CHEN et al. |
2020 | 5 | Oscar: Object-Semantics Aligned Pre-training For Vision-Language Tasks (IF:6). Highlight: While existing methods simply concatenate image region features and text features as input to the model to be pre-trained and use self-attention to learn image-text semantic alignments in a brute force manner, in this paper, we propose a new learning method Oscar, which uses object tags detected in images as anchor points to significantly ease the learning of alignments. | XIUJUN LI et al. |
2020 | 6 | Single Path One-Shot Neural Architecture Search With Uniform Sampling (IF:6). Highlight: This work proposes a Single Path One-Shot model to address the challenge in training. | ZICHAO GUO et al. |
2020 | 7 | Big Transfer (BiT): General Visual Representation Learning (IF:6). Highlight: We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). | ALEXANDER KOLESNIKOV et al. |
2020 | 8 | Object-Contextual Representations For Semantic Segmentation (IF:6). Highlight: In this paper, we address the semantic segmentation problem with a focus on the context aggregation strategy. | Yuhui Yuan; Xilin Chen; Jingdong Wang; |
2020 | 9 | RAFT: Recurrent All-Pairs Field Transforms For Optical Flow (IF:6). Highlight: We introduce Recurrent All-Pairs Field Transforms (RAFT), a new deep network architecture for estimating optical flow. | Zachary Teed; Jia Deng; |
2020 | 10 | Rethinking Few-shot Image Classification: A Good Embedding Is All You Need? (IF:5). Highlight: In this work, we show that a simple baseline: learning a supervised or self-supervised representation on the meta-training set, followed by training a linear classifier on top of this representation, outperforms state-of-the-art few-shot learning methods. | Yonglong Tian; Yue Wang; Dilip Krishnan; Joshua B. Tenenbaum; Phillip Isola; |
2020 | 11 | Tracking Objects As Points (IF:5). Highlight: In this paper, we present a simultaneous detection and tracking algorithm that is simpler, faster, and more accurate than the state of the art. | Xingyi Zhou; Vladlen Koltun; Philipp Krähenbühl; |
2020 | 12 | Square Attack: A Query-efficient Black-box Adversarial Attack Via Random Search (IF:5). Highlight: We propose the Square Attack, a score-based black-box $l_2$- and $l_\infty$- adversarial attack that does not rely on local gradient information and thus is not affected by gradient masking. | Maksym Andriushchenko; Francesco Croce; Nicolas Flammarion; Matthias Hein; |
2020 | 13 | Contrastive Learning For Unpaired Image-to-Image Translation (IF:5). Highlight: We propose a straightforward method for doing so — maximizing mutual information between the two, using a framework based on contrastive learning. | Taesung Park; Alexei A. Efros; Richard Zhang; Jun-Yan Zhu; |
2020 | 14 | Convolutional Occupancy Networks (IF:5). Highlight: In this paper, we propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes. | Songyou Peng; Michael Niemeyer; Lars Mescheder; Marc Pollefeys; Andreas Geiger; |
2020 | 15 | Axial-DeepLab: Stand-Alone Axial-Attention For Panoptic Segmentation (IF:5). Highlight: In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. | HUIYU WANG et al. |
2018 | 1 | Encoder-Decoder With Atrous Separable Convolution For Semantic Image Segmentation (IF:9). Highlight: In this work, we propose to combine the advantages from both methods. | Liang-Chieh Chen; Yukun Zhu; George Papandreou; Florian Schroff; Hartwig Adam; |
2018 | 2 | CBAM: Convolutional Block Attention Module (IF:8). Highlight: We propose Convolutional Block Attention Module (CBAM), a simple and effective attention module that can be integrated with any feed-forward convolutional neural networks. | Sanghyun Woo; Jongchan Park; Joon-Young Lee; In So Kweon; |
2018 | 3 | ShuffleNet V2: Practical Guidelines For Efficient CNN Architecture Design (IF:8). Highlight: Taking these factors into account, this work proposes practical guidelines for efficient network design. | Ningning Ma; Xiangyu Zhang; Hai-Tao Zheng; Jian Sun; |
2018 | 4 | Image Super-Resolution Using Very Deep Residual Channel Attention Networks (IF:8). Highlight: To solve these problems, we propose the very deep residual channel attention networks (RCAN). | YULUN ZHANG et al. |
2018 | 5 | Group Normalization (IF:9). Highlight: In this paper, we present Group Normalization (GN) as a simple alternative to BN. | Yuxin Wu; Kaiming He; |
2018 | 6 | Multimodal Unsupervised Image-to-image Translation (IF:9). Highlight: To address this limitation, we propose a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework. | Xun Huang; Ming-Yu Liu; Serge Belongie; Jan Kautz; |
2018 | 7 | CornerNet: Detecting Objects As Paired Keypoints (IF:8). Highlight: We propose CornerNet, a new approach to object detection where we detect an object bounding box as a pair of keypoints, the top-left corner and the bottom-right corner, using a single convolutional neural network. | Hei Law; Jia Deng; |
2018 | 8 | Progressive Neural Architecture Search (IF:9). Highlight: We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. | CHENXI LIU et al. |
2018 | 9 | Deep Clustering For Unsupervised Learning Of Visual Features (IF:9). Highlight: In this work, we present DeepCluster, a clustering method that jointly learns the parameters of a neural network and the cluster assignments of the resulting features. | Mathilde Caron; Piotr Bojanowski; Armand Joulin; Matthijs Douze; |
2018 | 10 | Image Inpainting For Irregular Holes Using Partial Convolutions (IF:8). Highlight: We propose to use partial convolutions, where the convolution is masked and renormalized to be conditioned on only valid pixels. | GUILIN LIU et al. |
2018 | 11 | Simple Baselines For Human Pose Estimation And Tracking (IF:8). Highlight: This work provides simple and effective baseline methods. | Bin Xiao; Haiping Wu; Yichen Wei; |
2018 | 12 | BiSeNet: Bilateral Segmentation Network For Real-time Semantic Segmentation (IF:8). Highlight: In this paper, we address this dilemma with a novel Bilateral Segmentation Network (BiSeNet). | CHANGQIAN YU et al. |
2018 | 13 | AMC: AutoML For Model Compression And Acceleration On Mobile Devices (IF:8). Highlight: In this paper, we propose AutoML for Model Compression (AMC) which leverages reinforcement learning to efficiently sample the design space and can improve the model compression quality. | YIHUI HE et al. |
2018 | 14 | Exploring The Limits Of Weakly Supervised Pretraining (IF:8). Highlight: In this paper, we present a unique study of transfer learning with large convolutional networks trained to predict hashtags on billions of social media images. | DHRUV MAHAJAN et al. |
2018 | 15 | Diverse Image-to-Image Translation Via Disentangled Representations (IF:8). Highlight: In this work, we present an approach based on disentangled representation for producing diverse outputs without paired training images. | Hsin-Ying Lee; Hung-Yu Tseng; Jia-Bin Huang; Maneesh Singh; Ming-Hsuan Yang; |