ATT3D: Amortized Text-to-3D Object Synthesis
ICCV 2023
paper (arXiv) • project page • BibTeX
@inproceedings{lorraine2023att3d,
title={ATT3D: Amortized Text-to-3D Object Synthesis},
author={Lorraine, Jonathan and Xie, Kevin and Zeng, Xiaohui and Lin, Chen-Hsuan and Takikawa, Towaki and Sharp, Nicholas and Lin, Tsung-Yi and Liu, Ming-Yu and Fidler, Sanja and Lucas, James},
booktitle={IEEE International Conference on Computer Vision ({ICCV})},
year={2023}
}
Generating high-quality 3D assets from input text typically requires lengthy per-prompt optimization.
Instead, we can train a generalizable model to amortize the optimization process for fast text-to-3D generation.