Most Influential SIGGRAPH Papers (2024-05)
To search or review SIGGRAPH papers on a specific topic, please use our search by venue (SIGGRAPH) and review by venue (SIGGRAPH) services. To browse the most productive SIGGRAPH authors by year, ranked by number of accepted papers, see our list of most productive SIGGRAPH authors.
Based in New York, Paper Digest is dedicated to producing high-quality text analysis results that people can actually use on a daily basis. Since 2018, we have served users across the world with a number of exclusive services to track, search, review, and rewrite scientific literature.
You are welcome to follow us on Twitter and LinkedIn to stay updated with new conference digests.
Paper Digest Team
New York City, New York, 10017
team@paperdigest.org
TABLE 1: Most Influential SIGGRAPH Papers (2024-05)
Year | Rank | Paper | Author(s) |
---|---|---|---|
2023 | 1 | 3D Gaussian Splatting for Real-Time Radiance Field Rendering (IF:7). Highlight: We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times and, importantly, allow high-quality real-time (≥ 30 fps) novel-view synthesis at 1080p resolution. | Bernhard Kerbl; Georgios Kopanas; Thomas Leimkuehler; George Drettakis |
2023 | 2 | Nerfstudio: A Modular Framework for Neural Radiance Field Development (IF:5). Highlight: In order to streamline the development and deployment of NeRF research, we propose a modular PyTorch framework, Nerfstudio. | Matthew Tancik et al. |
2023 | 3 | Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models (IF:5). Highlight: Moreover, we find that in some cases the model also fails to correctly bind attributes (e.g., colors) to their corresponding subjects. To help mitigate these failure cases, we introduce the concept of Generative Semantic Nursing (GSN), where we seek to intervene in the generative process on the fly during inference time to improve the faithfulness of the generated images. | Hila Chefer; Yuval Alaluf; Yael Vinker; Lior Wolf; Daniel Cohen-Or |
2023 | 4 | Zero-shot Image-to-Image Translation (IF:5). Highlight: In this work, we introduce pix2pix-zero, an image-to-image translation method that can preserve the original image's content without manual prompting. | Gaurav Parmar et al. |
2023 | 5 | Blended Latent Diffusion (IF:4). Highlight: In this paper, we present an accelerated solution to the task of local text-driven editing of generic images, where the desired edits are confined to a user-provided mask. | Omri Avrahami; Ohad Fried; Dani Lischinski |
2023 | 6 | TEXTure: Text-Guided Texturing of 3D Shapes (IF:4). Highlight: In this paper, we present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes. | Elad Richardson; Gal Metzer; Yuval Alaluf; Raja Giryes; Daniel Cohen-Or |
2023 | 7 | Sketch-Guided Text-to-Image Diffusion Models (IF:4). Highlight: Our key idea is to train a Latent Guidance Predictor (LGP), a small, per-pixel Multi-Layer Perceptron (MLP) that maps latent features of noisy images to spatial maps, where the deep features are extracted from the core Denoising Diffusion Probabilistic Model (DDPM) network. | Andrey Voynov; Kfir Aberman; Daniel Cohen-Or |
2023 | 8 | Drag Your GAN: Interactive Point-based Manipulation on The Generative Image Manifold (IF:3). Highlight: In this work, we study a powerful yet much less explored way of controlling GANs, that is, to "drag" any points of the image to precisely reach target points in a user-interactive manner, as shown in Fig. 1. | Xingang Pan et al. |
2023 | 9 | MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes (IF:3). Highlight: We present a Memory-Efficient Radiance Field (MERF) representation that achieves real-time rendering of large-scale scenes in a browser. | Christian Reiser et al. |
2023 | 10 | BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis (IF:3). Highlight: We present a method for reconstructing high-quality meshes of large unbounded real-world scenes suitable for photorealistic novel view synthesis. | Lior Yariv et al. |
2023 | 11 | Key-Locked Rank One Editing for Text-to-Image Personalization (IF:3). Highlight: The task of T2I personalization poses multiple hard challenges, such as maintaining high visual fidelity while allowing creative control, combining multiple personalized concepts in a single image, and keeping a small model size. We present Perfusion, a T2I personalization method that addresses these challenges using dynamic rank-1 updates to the underlying T2I model. | Yoad Tewel; Rinon Gal; Gal Chechik; Yuval Atzmon |
2023 | 12 | Encoder-based Domain Tuning for Fast Personalization of Text-to-Image Models (IF:3). Highlight: However, current personalization approaches struggle with lengthy training times, high storage requirements, or loss of identity. To overcome these limitations, we propose an encoder-based domain-tuning approach. | Rinon Gal et al. |
2023 | 13 | 3DShape2VecSet: A 3D Shape Representation for Neural Fields and Generative Diffusion Models (IF:3). Highlight: We introduce 3DShape2VecSet, a novel shape representation for neural fields designed for generative diffusion models. | Biao Zhang; Jiapeng Tang; Matthias Nießner; Peter Wonka |
2023 | 14 | Listen, Denoise, Action! Audio-Driven Motion Synthesis with Diffusion Models (IF:3). Highlight: Diffusion models have experienced a surge of interest as highly expressive yet efficiently trainable probabilistic models. We show that these models are an excellent fit for synthesising human motion that co-occurs with audio, e.g., dancing and co-speech gesticulation, since motion is complex and highly ambiguous given audio, calling for a probabilistic description. | Simon Alexanderson; Rajmund Nagy; Jonas Beskow; Gustav Eje Henter |
2023 | 15 | GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents (IF:3). Highlight: In this work, we present GestureDiffuCLIP, a neural network framework for synthesizing realistic, stylized co-speech gestures with flexible style control. | Tenglong Ao; Zeyi Zhang; Libin Liu |
2022 | 1 | Palette: Image-to-Image Diffusion Models (IF:7). Highlight: This paper develops a unified framework for image-to-image translation based on conditional diffusion models and evaluates this framework on four challenging image-to-image translation tasks, namely colorization, inpainting, uncropping, and JPEG restoration. | Chitwan Saharia et al. |
2022 | 2 | StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets (IF:5). Highlight: Our final model, StyleGAN-XL, sets a new state-of-the-art on large-scale image synthesis and is the first to generate images at a resolution of 1024² at such a dataset scale. | Axel Sauer; Katja Schwarz; Andreas Geiger |
2022 | 3 | StyleGAN-NADA: CLIP-guided Domain Adaptation of Image Generators (IF:5). Highlight: Leveraging the semantic power of large-scale Contrastive-Language-Image-Pre-training (CLIP) models, we present a text-driven method that allows shifting a generative model to new domains, without having to collect even a single image. | Rinon Gal et al. |
2022 | 4 | EAMM: One-Shot Emotional Talking Face Via Audio-Based Emotion-Aware Motion Model (IF:3). Highlight: In this paper, we propose the Emotion-Aware Motion Model (EAMM) to generate one-shot emotional talking faces by involving an emotion source video. | Xinya Ji et al. |
2022 | 5 | CLIP2StyleGAN: Unsupervised Extraction of StyleGAN Edit Directions (IF:3). Highlight: In this work, we investigate how to effectively link the pretrained latent spaces of StyleGAN and CLIP, which in turn allows us to automatically extract semantically-labeled edit directions from StyleGAN, finding and naming meaningful edit operations, in a fully unsupervised setup, without additional human guidance. | Rameen Abdal; Peihao Zhu; John Femiani; Niloy Mitra; Peter Wonka |
2022 | 6 | Authentic Volumetric Avatars from A Phone Scan (IF:3). Highlight: Creating photorealistic avatars of existing people currently requires extensive person-specific data capture, which is usually only accessible to the VFX industry and not the general public. Our work aims to address this drawback by relying only on a short mobile phone capture to obtain a drivable 3D head avatar that matches a person's likeness faithfully. | Chen Cao et al. |
2022 | 7 | Domain Enhanced Arbitrary Image Style Transfer Via Contrastive Learning (IF:3). Highlight: In this work, we tackle the challenging problem of arbitrary image style transfer using a novel style feature representation learning method. | Yuxin Zhang et al. |
2022 | 8 | Variable Bitrate Neural Fields (IF:3). Highlight: Unfortunately, these feature grids usually come at the cost of significantly increased memory consumption compared to stand-alone neural network models. We present a dictionary method for compressing such feature grids, reducing their memory consumption by up to 100× and permitting a multiresolution representation which can be useful for out-of-core streaming. | Towaki Takikawa et al. |
2022 | 9 | AvatarCLIP: Zero-shot Text-driven Generation and Animation of 3D Avatars (IF:3). Highlight: However, the whole production process is prohibitively time-consuming and labor-intensive. To democratize this technology to a larger audience, we propose AvatarCLIP, a zero-shot text-driven framework for 3D avatar generation and animation. | Fangzhou Hong et al. |
2022 | 10 | ReLU Fields: The Little Non-linearity That Could (IF:3). Highlight: Hence, in this work, we investigate the smallest change to grid-based representations that allows for retaining the high-fidelity results of MLPs while enabling fast reconstruction and rendering times. | Animesh Karnewar; Tobias Ritschel; Oliver Wang; Niloy Mitra |
2022 | 11 | Learning High-DOF Reaching-and-grasping Via Dynamic Representation of Gripper-object Interaction (IF:3). Highlight: To resolve the sample efficiency issue in learning the high-dimensional and complex control of dexterous grasping, we propose an effective representation of grasping state characterizing the spatial interaction between the gripper and the target object. | Qijin She et al. |
2022 | 12 | Learning Smooth Neural Functions Via Lipschitz Regularization (IF:3). Highlight: In this work, we introduce a novel regularization designed to encourage smooth latent spaces in neural fields by penalizing the upper bound on the field's Lipschitz constant. | Hsueh-Ti Derek Liu; Francis Williams; Alec Jacobson; Sanja Fidler; Or Litany |
2022 | 13 | Differentiable Signed Distance Function Rendering (IF:3). Highlight: In this article, we show how to extend the commonly used sphere tracing algorithm so that it additionally outputs a reparameterization that provides the means to compute accurate shape parameter derivatives. | Delio Vicini; Sébastien Speierer; Wenzel Jakob |
2022 | 14 | CLIPasso: Semantically-aware Object Sketching (IF:3). Highlight: We present CLIPasso, an object sketching method that can achieve different levels of abstraction, guided by geometric and semantic simplifications. | Yael Vinker et al. |
2022 | 15 | ASE: Large-scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters (IF:3). Highlight: In this work, we present a large-scale data-driven framework for learning versatile and reusable skill embeddings for physically simulated characters. | Xue Bin Peng; Yunrong Guo; Lina Halper; Sergey Levine; Sanja Fidler |
2021 | 1 | Designing An Encoder for StyleGAN Image Manipulation (IF:7). Highlight: In this paper, we carefully study the latent space of StyleGAN, the state-of-the-art unconditional generator. | Omer Tov; Yuval Alaluf; Yotam Nitzan; Or Patashnik; Daniel Cohen-Or |
2021 | 2 | Learning An Animatable Detailed 3D Face Model from In-the-wild Images (IF:6). Highlight: We present the first approach that regresses 3D face shape and animatable details that are specific to an individual but change with expression. | Yao Feng; Haiwen Feng; Michael J. Black; Timo Bolkart |
2021 | 3 | Mixture of Volumetric Primitives for Efficient Neural Rendering (IF:5). Highlight: We present Mixture of Volumetric Primitives (MVP), a representation for rendering dynamic 3D content that combines the completeness of volumetric representations with the efficiency of primitive-based rendering, e.g., point-based or mesh-based methods. | Stephen Lombardi et al. |
2021 | 4 | AMP: Adversarial Motion Priors for Stylized Physics-based Character Control (IF:4). Highlight: In this work, we propose to obviate the need to manually design imitation objectives and mechanisms for motion selection by utilizing a fully automated approach based on adversarial imitation learning. | Xue Bin Peng; Ze Ma; Pieter Abbeel; Sergey Levine; Angjoo Kanazawa |
2021 | 5 | Editable Free-viewpoint Video Using A Layered Neural Representation (IF:4). Highlight: To fill this gap, in this paper, we propose the first approach for editable free-viewpoint video generation for large-scale view-dependent dynamic scenes using only 16 cameras. | Jiakai Zhang et al. |
2021 | 6 | Acorn: Adaptive Coordinate Networks for Neural Scene Representation (IF:4). Highlight: Here, we introduce a new hybrid implicit-explicit network architecture and training strategy that adaptively allocates resources during training and inference based on the local complexity of a signal of interest. | Julien N. P. Martel et al. |
2021 | 7 | Only A Matter of Style: Age Transformation Using A Style-based Regression Model (IF:4). Highlight: In this work, we present an image-to-image translation method that learns to directly encode real facial images into the latent space of a pre-trained unconditional GAN (e.g., StyleGAN) subject to a given aging shift. | Yuval Alaluf; Or Patashnik; Daniel Cohen-Or |
2021 | 8 | Real-time Deep Dynamic Characters (IF:3). Highlight: We propose a deep videorealistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance learned in a new weakly supervised way from multi-view imagery. | Marc Habermann et al. |
2021 | 9 | Codimensional Incremental Potential Contact (IF:3). Highlight: Extending the IPC model to thin structures poses new challenges in computing strain, modeling thickness, and determining collisions. To address these challenges we propose three corresponding contributions. | Minchen Li; Danny M. Kaufman; Chenfanfu Jiang |
2021 | 10 | Total Relighting: Learning to Relight Portraits for Background Replacement (IF:3). Highlight: We propose a novel system for portrait relighting and background replacement, which maintains high-frequency boundary details and accurately synthesizes the subject's appearance as lit by novel illumination, thereby producing realistic composite images for any desired scene. | Rohit Pandey et al. |
2021 | 11 | FovVideoVDP: A Visible Difference Predictor for Wide Field-of-view Video (IF:3). Highlight: FovVideoVDP is a video difference metric that models the spatial, temporal, and peripheral aspects of perception. While many other metrics are available, our work provides the first practical treatment of these three central aspects of vision simultaneously. | Rafał K. Mantiuk et al. |
2021 | 12 | Neural Monocular 3D Human Motion Capture with Physical Awareness (IF:3). Highlight: We present a new trainable system for physically plausible markerless 3D human motion capture, which achieves state-of-the-art results in a broad range of challenging scenarios. | Soshi Shimada; Vladislav Golyanik; Weipeng Xu; Patrick Pérez; Christian Theobalt |
2021 | 13 | TryOnGAN: Body-aware Try-on Via Layered Interpolation (IF:3). Highlight: Given a pair of images, target person and garment on another person, we automatically generate the target person in the given garment. | Kathleen M Lewis; Srivatsan Varadharajan; Ira Kemelmacher-Shlizerman |
2021 | 14 | ManipNet: Neural Manipulation Synthesis with A Hand-object Spatial Representation (IF:3). Highlight: In this paper, we propose a hand-object spatial representation that can achieve generalization from limited data. | He Zhang; Yuting Ye; Takaaki Shiratori; Taku Komura |
2021 | 15 | Control Strategies for Physically Simulated Characters Performing Two-player Competitive Sports (IF:3). Highlight: In this paper, we develop a learning framework that generates control policies for physically simulated athletes who have many degrees-of-freedom. | Jungdam Won; Deepak Gopinath; Jessica Hodgins |
2020 | 1 | Consistent Video Depth Estimation (IF:5). Highlight: We present an algorithm for reconstructing dense, geometrically consistent depth for all pixels in a monocular video. | Xuan Luo; Jia-Bin Huang; Richard Szeliski; Kevin Matzen; Johannes Kopf |
2020 | 2 | Immersive Light Field Video With A Layered Mesh Representation (IF:5). Highlight: We present a system for capturing, reconstructing, compressing, and rendering high-quality immersive light field video. | Michael Broxton et al. |
2020 | 3 | Character Controllers Using Motion VAEs (IF:4). Highlight: We learn data-driven generative models of human movement using autoregressive conditional variational autoencoders, or Motion VAEs. | Hung Yu Ling; Fabio Zinno; George Cheng; Michiel Van De Panne |
2020 | 4 | Robust Motion In-betweening (IF:4). Highlight: In this work we present a novel, robust transition generation technique that can serve as a new tool for 3D animators, based on adversarial recurrent neural networks. | Félix G. Harvey; Mike Yurick; Derek Nowrouzezahrai; Christopher Pal |
2020 | 5 | XNect: Real-time Multi-person 3D Motion Capture With A Single RGB Camera (IF:4). Highlight: We present a real-time approach for multi-person 3D motion capture at over 30 fps using a single RGB camera. | Dushyant Mehta et al. |
2020 | 6 | Learning Temporal Coherence Via Self-supervision For GAN-based Video Generation (IF:4). Highlight: In contrast, we focus on improving learning objectives and propose a temporally self-supervised algorithm. | Mengyu Chu; You Xie; Jonas Mayer; Laura Leal-Taixé; Nils Thuerey |
2020 | 7 | Local Motion Phases For Learning Multi-contact Character Movements (IF:4). Highlight: In this paper, we propose a novel framework to learn fast and dynamic character interactions that involve multiple contacts between the body and an object, another character, and the environment, from a rich, unstructured motion capture database. | Sebastian Starke; Yiwei Zhao; Taku Komura; Kazi Zaman |
2020 | 8 | Skeleton-aware Networks For Deep Motion Retargeting (IF:4). Highlight: We introduce a novel deep learning framework for data-driven motion retargeting between skeletons, which may have different structure, yet correspond to homeomorphic graphs. | Kfir Aberman et al. |
2020 | 9 | MEgATrack: Monochrome Egocentric Articulated Hand-tracking For Virtual Reality (IF:4). Highlight: We present a system for real-time hand-tracking to drive virtual and augmented reality (VR/AR) experiences. | Shangchen Han et al. |
2020 | 10 | Point2Mesh: A Self-prior For Deformable Meshes (IF:4). Highlight: In this paper, we introduce Point2Mesh, a technique for reconstructing a surface mesh from an input point cloud. | Rana Hanocka; Gal Metzer; Raja Giryes; Daniel Cohen-Or |
2020 | 11 | Single Image HDR Reconstruction Using A CNN With Masked Features And Perceptual Loss (IF:4). Highlight: In this paper, we present a novel learning-based approach to reconstruct an HDR image by recovering the saturated pixels of an input LDR image in a visually pleasing way. | Marcel Santana Santos; Tsang Ing Ren; Nima Khademi Kalantari |
2020 | 12 | Fast Tetrahedral Meshing In The Wild (IF:4). Highlight: We propose a new tetrahedral meshing method, fTetWild, to convert triangle soups into high-quality tetrahedral meshes. | Yixin Hu; Teseo Schneider; Bolun Wang; Denis Zorin; Daniele Panozzo |
2020 | 13 | Path-space Differentiable Rendering (IF:4). Highlight: In this paper, we show how path integrals can be differentiated with respect to arbitrary differentiable changes of a scene. | Cheng Zhang; Bailey Miller; Kai Yan; Ioannis Gkioulekas; Shuang Zhao |
2020 | 14 | A Scalable Approach To Control Diverse Behaviors For Physically Simulated Characters (IF:4). Highlight: In this paper, we develop a technique for learning controllers for a large set of heterogeneous behaviors. | Jungdam Won; Deepak Gopinath; Jessica Hodgins |
2020 | 15 | DeepFaceDrawing: Deep Generation Of Face Images From Sketches (IF:4). Highlight: To address this issue, our key idea is to implicitly model the shape space of plausible face images and synthesize a face image in this space to approximate an input sketch. | Shu-Yu Chen; Wanchao Su; Lin Gao; Shihong Xia; Hongbo Fu |
2019 | 1 | Deferred Neural Rendering: Image Synthesis Using Neural Textures (IF:7). Highlight: In this work, we explore the use of imperfect 3D content, for instance, obtained from photometric reconstructions with noisy and incomplete surface geometry, while still aiming to produce photo-realistic (re-)renderings. | Justus Thies; Michael Zollhöfer; Matthias Nießner |
2019 | 2 | Local Light Field Fusion: Practical View Synthesis With Prescriptive Sampling Guidelines (IF:7). Highlight: We present a practical and robust deep learning solution for capturing and rendering novel views of complex real-world scenes for virtual exploration. | Ben Mildenhall et al. |
2019 | 3 | Semantic Photo Manipulation With A Generative Image Prior (IF:6). Highlight: In this paper, we address these issues by adapting the image prior learned by GANs to the image statistics of an individual image. | David Bau et al. |
2019 | 4 | Text-based Editing Of Talking-head Video (IF:5). Highlight: We propose a novel method to edit talking-head video based on its transcript to produce a realistic output video in which the dialogue of the speaker has been modified, while maintaining a seamless audio-visual flow (i.e., no jump cuts). | Ohad Fried et al. |
2019 | 5 | Single Image Portrait Relighting (IF:5). Highlight: To this end, we present a system for portrait relighting: a neural network that takes as input a single RGB image of a portrait taken with a standard cellphone camera in an unconstrained environment, and from that image produces a relit image of that subject as though it were illuminated according to any provided environment map. | Tiancheng Sun et al. |
2019 | 6 | MeshCNN: A Network With An Edge (IF:5). Highlight: In this paper, we utilize the unique properties of the mesh for a direct analysis of 3D shapes using MeshCNN, a convolutional neural network designed specifically for triangular meshes. | Rana Hanocka et al. |
2019 | 7 | Learning To Optimize Halide With Tree Search And Random Programs (IF:4). Highlight: We present a new algorithm to automatically schedule Halide programs for high-performance image processing and deep learning. | Andrew Adams et al. |
2019 | 8 | Scalable Muscle-actuated Human Simulation And Control (IF:4). Highlight: This work aims to build a comprehensive musculoskeletal model and its control system that reproduces realistic human movements driven by muscle contraction dynamics. | Seunghwan Lee; Moonseok Park; Kyoungmin Lee; Jehee Lee |
2019 | 9 | Handheld Multi-frame Super-resolution (IF:4). Highlight: In this paper, we supplant the use of traditional demosaicing in single-frame and burst photography pipelines with a multiframe super-resolution algorithm that creates a complete RGB image directly from a burst of CFA raw images. | Bartlomiej Wronski et al. |
2019 | 10 | PlanIT: Planning And Instantiating Indoor Scenes With Relation Graph And Spatial Prior Networks (IF:4). Highlight: We present a new framework for interior scene synthesis that combines a high-level relation graph representation with spatial prior neural networks. | Kai Wang et al. |
2019 | 11 | Content-aware Generative Modeling Of Graphic Design Layouts (IF:4). Highlight: In this paper, we study the problem of content-aware graphic design layout generation. To train our model, we build a large-scale magazine layout dataset with fine-grained layout annotations and keyword labeling. | Xinru Zheng; Xiaotian Qiao; Ying Cao; Rynson W. H. Lau |
2019 | 12 | Deep Inverse Rendering For High-resolution SVBRDF Estimation From An Arbitrary Number Of Images (IF:4). Highlight: In this paper we present a unified deep inverse rendering framework for estimating the spatially-varying appearance properties of a planar exemplar from an arbitrary number of input photographs, ranging from just a single photograph to many photographs. | Duan Gao et al. |
2019 | 13 | Real-time Pose And Shape Reconstruction Of Two Interacting Hands With A Single Depth Camera (IF:4). Highlight: We present a novel method for real-time pose and shape reconstruction of two strongly interacting hands. | Franziska Mueller et al. |
2019 | 14 | Interactive Hand Pose Estimation Using A Stretch-sensing Soft Glove (IF:4). Highlight: We propose a stretch-sensing soft glove to interactively capture hand poses with high accuracy and without requiring an external optical setup. | Oliver Glauser; Shihao Wu; Daniele Panozzo; Otmar Hilliges; Olga Sorkine-Hornung |
2019 | 15 | Foveated AR: Dynamically-foveated Augmented Reality Display (IF:4). Highlight: We present a near-eye augmented reality display with resolution and focal depth dynamically driven by gaze tracking. | Jonghyun Kim et al. |