Chen-Hsuan Lin

Chen-Hsuan is my first name (not just Chen or Hsuan).
Hsuan is pronounced like "shoo-en" with a quick transition.
I am a senior research scientist at NVIDIA Research, working on computer vision, computer graphics, and generative AI applications. I am interested in solving problems in 3D content creation, spanning 3D reconstruction, neural rendering, generative models, and beyond. My research aims to empower AI systems with 3D visual intelligence: human-level 3D perception and imagination abilities. My work was recognized as one of TIME Magazine's Best Inventions of 2023.
I received my Ph.D. in Robotics from Carnegie Mellon University, where I was advised by Simon Lucey and supported by the NVIDIA Graduate Fellowship. I also held research internships at Facebook AI Research and Adobe Research. I received my B.S. in Electrical Engineering from National Taiwan University.
Email:  chenhsuanl (at) nvidia (dot) com


Highlights

Edify 3D

High-quality 3D asset generation

Neuralangelo

Neural surface reconstruction

Magic3D

Text-to-3D content creation

Research

ATT3D: Amortized Text-to-3D Object Synthesis

Jonathan Lorraine, Kevin Xie, Xiaohui Zeng, Chen-Hsuan Lin, Towaki Takikawa, Nicholas Sharp, Tsung-Yi Lin, Ming-Yu Liu, Sanja Fidler, James Lucas
ICCV 2023
Generating high-quality 3D assets from input text typically requires lengthy per-prompt optimization. Instead, we can train a generalizable model to amortize the optimization process for fast text-to-3D generation.
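
A minimal sketch of the amortization idea, under toy assumptions (the class and FiLM-style conditioning below are illustrative, not the paper's architecture): a single network is conditioned on a text embedding, so one training run covers many prompts instead of one optimization per prompt.

import torch
import torch.nn as nn

class AmortizedField(nn.Module):
    # Hypothetical toy model: a shared 3D field modulated per prompt by a
    # text embedding (FiLM-style), standing in for one-model-per-prompt.
    def __init__(self, text_dim=512, hidden=128):
        super().__init__()
        self.enc = nn.Linear(3, hidden)
        self.film = nn.Linear(text_dim, 2 * hidden)  # per-prompt scale/shift
        self.out = nn.Linear(hidden, 4)              # RGB + density

    def forward(self, xyz, text_emb):
        scale, shift = self.film(text_emb).chunk(2, dim=-1)
        h = torch.relu(self.enc(xyz)) * (1 + scale) + shift
        return self.out(h)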

Neuralangelo: High-Fidelity Neural Surface Reconstruction

Zhaoshuo Li, Thomas Müller, Alex Evans, Russell H. Taylor, Mathias Unberath, Ming-Yu Liu, Chen-Hsuan Lin
CVPR 2023
TIME's Best Inventions of 2023
We reconstruct 3D surfaces with extremely high fidelity from RGB video captures! Numerical gradients with coarse-to-fine optimization are the key to unlocking the full potential of multi-resolution hash encoding.
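
A minimal sketch of the numerical-gradient idea (not the released implementation): SDF gradients are estimated with central differences, and the step size is annealed from coarse to fine so early gradients average over larger neighborhoods of the hash grid.

import torch

def numerical_gradient(sdf, x, eps):
    # Central differences instead of analytical gradients; `sdf` maps
    # (N, 3) points to (N, 1) signed distances. Annealing `eps` from large
    # to small spreads gradients across hash-grid levels during training.
    offsets = eps * torch.eye(3, device=x.device, dtype=x.dtype)
    grads = [(sdf(x + o) - sdf(x - o)) / (2 * eps) for o in offsets]
    return torch.cat(grads, dim=-1)  # (N, 3) unnormalized normal direction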

Magic3D: High-Resolution Text-to-3D Content Creation

Chen-Hsuan Lin*, Jun Gao*, Luming Tang*, Towaki Takikawa*, Xiaohui Zeng*, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, Tsung-Yi Lin   (*: equal contributions)
CVPR 2023 (highlight)
We create high-quality textured 3D mesh models from text prompts, with editing capabilities! We use a two-stage optimization pipeline with different diffusion models for fast and high-resolution text-to-3D generation.
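
Both stages are driven by gradients distilled from a diffusion model. A minimal sketch of such a score-distillation gradient, assuming a pretrained noise predictor denoiser(noisy, t) and its schedule value alpha_bar (both assumptions of this sketch, not the released code):

import torch

def sds_grad(rendered, denoiser, t, alpha_bar):
    # Noise the rendered image, ask the diffusion model to denoise it, and
    # use the residual as a gradient on the render (and hence on the 3D
    # representation); the usual timestep weighting w(t) is omitted here.
    noise = torch.randn_like(rendered)
    noisy = alpha_bar**0.5 * rendered + (1 - alpha_bar)**0.5 * noise
    with torch.no_grad():
        pred = denoiser(noisy, t)
    return pred - noise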

BARF: Bundle-Adjusting Neural Radiance Fields

Chen-Hsuan Lin, Wei-Chiu Ma, Antonio Torralba, Simon Lucey
ICCV 2021 (oral presentation)
We can optimize a NeRF from a video sequence with unknown camera poses! Coarse-to-fine optimization is a simple yet effective strategy to jointly solve for registration and reconstruction on neural scene representations.
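
A minimal sketch of the coarse-to-fine strategy, following the weighting described in the paper: each positional-encoding frequency band k is smoothly switched on as a schedule parameter alpha ramps from 0 to the number of bands.

import torch

def annealed_posenc(x, num_freqs, alpha):
    # Band k has weight 0 before alpha reaches k, then ramps smoothly to 1;
    # registration starts on smooth signals before high frequencies join.
    k = torch.arange(num_freqs, dtype=x.dtype, device=x.device)
    w = (1 - torch.cos((alpha - k).clamp(0, 1) * torch.pi)) / 2
    feats = [w[i] * f(2.0**i * torch.pi * x) for i in range(num_freqs)
             for f in (torch.sin, torch.cos)]
    return torch.cat([x] + feats, dim=-1)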

SDF-SRN: Learning Signed Distance 3D Object Reconstruction from Static Images

Chen-Hsuan Lin, Chaoyang Wang, Simon Lucey
NeurIPS 2020
We design a geometric loss to supervise neural SDFs with 2D object masks. This allows scalable single-view training of neural 3D shape reconstruction from real-world images, without relying on multi-view supervision.
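
A hedged sketch of mask-based SDF supervision (the general idea only; the paper derives tighter per-ray bounds): rays through pixels inside the mask must hit the surface, and rays outside must remain in free space.

import torch
import torch.nn.functional as F

def mask_sdf_loss(sdf_samples, inside_mask):
    # `sdf_samples`: (R, S) SDF values at S points along each of R rays;
    # `inside_mask`: (R,) bool from the 2D object silhouette.
    min_sdf = sdf_samples.min(dim=1).values
    loss_inside = F.relu(min_sdf[inside_mask]).mean()     # should cross zero
    loss_outside = F.relu(-min_sdf[~inside_mask]).mean()  # should stay positive
    return loss_inside + loss_outside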

Deep NRSfM++: Towards Unsupervised 2D-3D Lifting in the Wild

Chaoyang Wang, Chen-Hsuan Lin, Simon Lucey
3DV 2020 (oral presentation)
We design a self-supervised method for learning to recover 3D structure and poses from 2D keypoints. It uses hierarchical block-sparse coding in NRSfM frameworks and handles perspective cameras and missing data.
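
For context, a minimal (orthographic) sketch of the classical low-rank NRSfM model such methods build on; the paper itself replaces the plain low-rank code with hierarchical block-sparse coding and handles perspective projection and missing keypoints.

import torch

def nrsfm_reproject(basis, codes, rotation):
    # `basis`: (K, N, 3) basis shapes; `codes`: (K,) per-frame coefficients;
    # `rotation`: (2, 3) orthographic camera rows. The reconstructed shape
    # is a linear combination of basis shapes, projected to 2D keypoints.
    shape = torch.einsum("k,knd->nd", codes, basis)  # (N, 3)
    return shape @ rotation.T                        # (N, 2)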

Photometric Mesh Optimization for Video-Aligned 3D Object Reconstruction

Chen-Hsuan Lin, Oliver Wang, Bryan C. Russell, Eli Shechtman, Vladimir G. Kim, Matthew Fisher, Simon Lucey
CVPR 2019
Given an RGB video capture, we optimize an initial 3D mesh prediction for photometric consistency to make it pixel-aligned with the video. By using a pretrained shape prior, we can bypass depth and mask constraints.
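
A minimal sketch of the photometric objective (the projection functions and sampling scheme are assumptions of this sketch): points sampled on the mesh surface should have consistent colors across video frames.

import torch
import torch.nn.functional as F

def photometric_loss(points, frame_a, frame_b, project_a, project_b):
    # `frame_*`: (1, 3, H, W) images; `project_*` map (N, 3) mesh surface
    # points to normalized [-1, 1] pixel coordinates of each camera.
    def sample(frame, uv):
        return F.grid_sample(frame, uv.view(1, -1, 1, 2),
                             align_corners=False).squeeze()  # (3, N)
    return (sample(frame_a, project_a(points)) -
            sample(frame_b, project_b(points))).abs().mean()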

ST-GAN: Spatial Transformer Generative Adversarial Networks for Image Compositing

Chen-Hsuan Lin, Ersin Yumer, Oliver Wang, Eli Shechtman, Simon Lucey
CVPR 2018
We can make GANs learn to correct the perspective geometry of objects and create realistic image composites. This can be trained solely from appearance realism where ground-truth geometry supervision is unavailable.
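
A minimal sketch of the compositing step (a single affine warp here for simplicity; the paper predicts iterative warp updates): the generator outputs warp parameters for the foreground object, and the resulting composite is what the discriminator judges for realism.

import torch
import torch.nn.functional as F

def composite(foreground, alpha, background, theta):
    # `theta`: (N, 2, 3) affine warp from the generator; the warped object
    # and its alpha matte are blended onto the background image.
    grid = F.affine_grid(theta, foreground.shape, align_corners=False)
    fg = F.grid_sample(foreground, grid, align_corners=False)
    a = F.grid_sample(alpha, grid, align_corners=False)
    return a * fg + (1 - a) * background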

Deep-LK for Efficient Adaptive Object Tracking

Chaoyang Wang, Hamed Kiani Galoogahi, Chen-Hsuan Lin, Simon Lucey
ICRA 2018
We train Siamese networks for object tracking by unrolling the Lucas-Kanade algorithm as a graph and training parameters end-to-end. The learned feature representation can adapt to the regression parameters online.
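
A minimal sketch of one unrolled update (the shapes and plain least-squares form are simplifications of this sketch): regress the warp increment from the deep-feature residual, with the Jacobian precomputed on the template so per-frame updates stay cheap.

import torch

def lk_step(feat_template, feat_current, jacobian):
    # `feat_*`: (D,) flattened deep features; `jacobian`: (D, P) Jacobian of
    # template features w.r.t. the P warp parameters. Solving the linear
    # least-squares system yields the warp update, as in (unrolled) LK.
    residual = (feat_current - feat_template).unsqueeze(-1)            # (D, 1)
    return torch.linalg.lstsq(jacobian, residual).solution.squeeze(-1) # (P,)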

Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction

Chen-Hsuan Lin, Chen Kong, Simon Lucey
AAAI 2018 (oral presentation)
We design a differentiable point cloud renderer to approximate the rasterization of 3D point clouds. For single-image 3D shape reconstruction, this can be used to supervise the predicted point clouds with depth images.
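
A minimal sketch of the pseudo-rendering idea (simplified; the paper upsamples the target grid to resolve collisions): project the predicted point cloud into a view and keep the nearest depth per pixel, so predictions can be compared against depth images.

import torch

def render_depth(points_cam, K, H, W):
    # `points_cam`: (N, 3) points in camera coordinates; `K`: (3, 3)
    # intrinsics. A hard z-buffer via scatter-min approximates rasterization.
    uvw = points_cam @ K.T
    u = (uvw[:, 0] / uvw[:, 2]).round().long().clamp(0, W - 1)
    v = (uvw[:, 1] / uvw[:, 2]).round().long().clamp(0, H - 1)
    depth = torch.full((H * W,), float("inf"), device=points_cam.device)
    depth.scatter_reduce_(0, v * W + u, points_cam[:, 2], reduce="amin")
    return depth.view(H, W)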

Object-Centric Photometric Bundle Adjustment with Deep Shape Prior

Rui Zhu, Chaoyang Wang, Chen-Hsuan Lin, Ziyan Wang, Simon Lucey
WACV 2018
Given a video capture and an initial 3D point cloud predicted by a neural network, we can use the same neural network as a learned prior to refine the 3D point cloud and camera poses in a joint optimization framework.
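
A minimal sketch of using the network as a prior (the function names are placeholders): rather than updating points freely, optimize the network's latent code jointly with the camera poses so the shape stays on the learned manifold.

import torch

def refine(decoder, z0, poses0, loss_fn, steps=200):
    # `decoder(z)` returns a point cloud; `loss_fn(points, poses)` is the
    # photometric objective over the video; both are placeholders here.
    z = z0.clone().requires_grad_(True)
    poses = poses0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z, poses], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(decoder(z), poses).backward()
        opt.step()
    return decoder(z).detach(), poses.detach()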

Inverse Compositional Spatial Transformer Networks

Chen-Hsuan Lin, Simon Lucey
CVPR 2017 (oral presentation)
We redesign Spatial Transformer Networks inspired by the Lucas-Kanade algorithm. It can be iteratively applied as an intermediate network module to predict recurrent spatial transformations for efficient visual recognition.
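
A minimal sketch of the recurrent warp update (the predictor network and affine parameterization are assumptions of this sketch): the network predicts a warp update from the currently warped image, and updates are composed on the warp parameters so the original image is resampled only once per step.

import torch
import torch.nn.functional as F

def compose_affine(a, b):
    # Compose (N, 2, 3) affine warps by padding to homogeneous 3x3 form.
    row = torch.tensor([0.0, 0.0, 1.0], device=a.device).expand(a.shape[0], 1, 3)
    return (torch.cat([a, row], 1) @ torch.cat([b, row], 1))[:, :2, :]

def ic_stn(image, net, theta, num_iters=4):
    # `net` maps a warped image to a (N, 2, 3) warp update (including
    # identity). Composing on parameters avoids compounded resampling blur.
    for _ in range(num_iters):
        grid = F.affine_grid(theta, image.shape, align_corners=False)
        warped = F.grid_sample(image, grid, align_corners=False)
        theta = compose_affine(theta, net(warped).view(-1, 2, 3))
    return theta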

Using Locally Corresponding CAD Models for Dense 3D Reconstructions from a Single Image

Chen Kong, Chen-Hsuan Lin, Simon Lucey
CVPR 2017
Given an image of an object and its partial keypoint annotations, we recover the 3D shape by solving for a sparse linear combination of a prebuilt CAD model dictionary while matching keypoint projections at the same time.
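
A minimal sketch of the fitting objective (orthographic camera and a plain L1 sparsity penalty here for brevity): solve for sparse weights over the CAD basis so the combined shape's keypoints reproject onto the annotations, skipping missing ones.

import torch

def fit_weights(basis, kp2d, camera, lam=0.1, steps=500):
    # `basis`: (K, N, 3) CAD keypoint dictionary; `kp2d`: (N, 2) annotations
    # with NaN rows for missing keypoints; `camera`: (2, 3) orthographic.
    w = torch.zeros(basis.shape[0], requires_grad=True)
    opt = torch.optim.Adam([w], lr=1e-2)
    visible = ~kp2d.isnan().any(dim=1)
    for _ in range(steps):
        opt.zero_grad()
        proj = torch.einsum("k,knd->nd", w, basis) @ camera.T
        loss = (proj[visible] - kp2d[visible]).pow(2).mean() + lam * w.abs().sum()
        loss.backward()
        opt.step()
    return w.detach()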

The Conditional Lucas & Kanade Algorithm

Chen-Hsuan Lin, Rui Zhu, Simon Lucey
ECCV 2016
We treat the Lucas-Kanade algorithm as an iterative computation graph, and we optimize the parameters with a conditional loss for registration. This converges much faster than classical synthesis-based optimization.
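
A minimal sketch of the conditional objective in its simplest (unconstrained linear) form: learn a regressor that maps appearance residuals directly to the warp perturbations that caused them, rather than synthesizing appearance from warps.

import torch

def fit_conditional_regressor(residuals, deltas):
    # `residuals`: (M, D) appearance differences from M synthetically
    # perturbed training examples; `deltas`: (M, P) the perturbations that
    # generated them. Least squares gives the conditional regressor (D, P).
    return torch.linalg.lstsq(residuals, deltas).solution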

Ph.D. Dissertation

Learning 3D Registration and Reconstruction from the Visual World

Chen-Hsuan Lin
Carnegie Mellon University, 2021

Experiences

NVIDIA Research, 2021 – present
Senior Research Scientist
Research in 3D reconstruction, 3D generation, view synthesis, and neural rendering problems.

Carnegie Mellon University, 2014 – 2021
Graduate Research Assistant (with Simon Lucey)
Research in geometric image registration, dense 3D reconstruction, and self-supervised learning.

Facebook AI Research (Meta AI), 2019
Research Intern (with Kaiming He, Georgia Gkioxari, and Justin Johnson)
Learning 3D-aware feature representations for improving standard 2D object detection systems.

Adobe Research, 2018
Research Intern (with Oliver Wang, Bryan Russell, Eli Shechtman, Vladimir Kim, and Matthew Fisher)
Photometric optimization of 3D object meshes for shape reconstruction aligned to RGB videos.

Adobe Research, 2017
Research Intern (with Eli Shechtman, Oliver Wang, and Ersin Yumer)
Learning geometric corrections of composited objects in images driven by appearance realism.

National Taiwan University, 2011 – 2013
Undergraduate Research Assistant (with Homer H. Chen)
Designing rate-distortion optimization for video compression based on perceptual quality metrics.

Teaching

Visual Learning and Recognition (CMU 16-824), Spring 2019
Teaching Assistant / Graduate Student Instructor (with Abhinav Gupta)
(Lectures: 3D Vision & 3D Reasoning, Semantic Segmentation & Pixel Labeling)

Computer Vision (CMU 16-720 A/B), Fall 2017
Head Teaching Assistant (with Srinivasa Narasimhan, Simon Lucey, and Yaser Sheikh)

Designing Computer Vision Apps (CMU 16-423), Fall 2015
Teaching Assistant (with Simon Lucey)
