In this work, we propose HeadNeRF, a novel NeRF-based parametric head model that integrates the neural radiance field into the parametric representation of the human head. It renders high-fidelity head images in real time and supports direct control of the rendered pose and of various semantic attributes. Unlike existing parametric models, we use the neural radiance field as a novel 3D proxy instead of the traditional 3D textured mesh, which enables HeadNeRF to generate high-fidelity images. However, the computationally expensive rendering process of the original NeRF hinders the construction of a parametric NeRF model. To address this issue, we integrate 2D neural rendering into the rendering process of NeRF and design novel loss terms. As a result, rendering is significantly accelerated: the time to render one frame drops from 5 s to 25 ms. The well-designed loss terms also improve rendering accuracy, so fine-level details of the human head, such as the gaps between teeth, wrinkles, and beards, can be represented and synthesized by HeadNeRF.
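The core building block is an implicit function conditioned on semantic latent codes. Below is a minimal PyTorch sketch of that idea; the latent-code layout, layer sizes, and names (`ConditionedNeRFMLP`, `code_dim`, etc.) are illustrative assumptions, not the authors' exact network.

```python
# Sketch of a latent-code-conditioned NeRF MLP: a 3D sample point plus a
# semantic latent code map to a density and a per-point feature vector.
# All dimensions here are assumed for illustration.
import torch
import torch.nn as nn

class ConditionedNeRFMLP(nn.Module):
    """Maps a 3D point and semantic latent codes to (density, feature)."""

    def __init__(self, pos_dim=3, code_dim=128, hidden=256, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(pos_dim + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + feat_dim),  # density + feature vector
        )

    def forward(self, x, code):
        # x: (N, 3) sampled points; code: (N, code_dim) latent codes
        out = self.mlp(torch.cat([x, code], dim=-1))
        sigma = torch.relu(out[..., :1])  # non-negative density
        feat = out[..., 1:]               # per-point feature vector
        return sigma, feat
```

Because the generated image is a differentiable function of the latent codes, pose and semantic attributes can be edited by directly adjusting the corresponding code.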
Overview of HeadNeRF. Given semantic latent codes and camera parameters, an MLP-based implicit function predicts the density and a feature vector for each 3D point sampled along a ray. Volume rendering is then performed to generate a low-resolution feature map, from which our well-designed 2D neural rendering module produces the final result. The whole process is differentiable, so HeadNeRF can be constructed using only 2D images.
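The following runnable sketch illustrates this two-stage pipeline: volume-rendering a low-resolution feature map, then upsampling it to the final image with a small convolutional network. The resolutions, channel counts, and the upsampler design (`NeuralRenderer2D`) are assumptions for illustration; only the overall structure (volume rendering followed by 2D neural rendering) follows the description above.

```python
# Two-stage rendering sketch: accumulate per-point features with standard
# volume rendering, then decode the resulting feature map into an RGB image.
import torch
import torch.nn as nn

def volume_render_features(sigma, feats, deltas):
    """Volume rendering that accumulates feature vectors instead of RGB.
    sigma: (R, S, 1) densities; feats: (R, S, C); deltas: (R, S, 1)."""
    alpha = 1.0 - torch.exp(-sigma * deltas)                    # (R, S, 1)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]),
                   1.0 - alpha + 1e-10], dim=1), dim=1)[:, :-1]  # transmittance
    weights = alpha * trans                                      # (R, S, 1)
    return (weights * feats).sum(dim=1)                          # (R, C)

class NeuralRenderer2D(nn.Module):
    """Upsamples the low-res feature map to the final image (illustrative)."""

    def __init__(self, feat_dim=256, num_up_blocks=2):
        super().__init__()
        layers, ch = [], feat_dim
        for _ in range(num_up_blocks):  # each block doubles the resolution
            layers += [nn.Upsample(scale_factor=2, mode="bilinear",
                                   align_corners=False),
                       nn.Conv2d(ch, ch // 2, 3, padding=1),
                       nn.LeakyReLU(0.2)]
            ch //= 2
        layers += [nn.Conv2d(ch, 3, 1), nn.Sigmoid()]  # RGB in [0, 1]
        self.net = nn.Sequential(*layers)

    def forward(self, feat_map):  # feat_map: (B, C, H, W)
        return self.net(feat_map)

# Example: a 32x32 feature map with 64 samples per ray -> 128x128 image.
R, S, C = 32 * 32, 64, 256
sigma = torch.rand(R, S, 1)
feats = torch.rand(R, S, C)
deltas = torch.full((R, S, 1), 0.01)
feat_map = volume_render_features(sigma, feats, deltas)  # (R, C)
feat_map = feat_map.t().reshape(1, C, 32, 32)            # to image grid
image = NeuralRenderer2D(C)(feat_map)                    # (1, 3, 128, 128)
```

Rendering features at low resolution keeps the number of sampled rays small, which is what makes the 25 ms per-frame rendering time feasible; the 2D module recovers full resolution at a fraction of the cost of casting a ray per output pixel. Since both stages are differentiable, gradients from image-space losses flow back to the latent codes and the MLP.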
@inproceedings{hong2021headnerf,
author = {Yang Hong and Bo Peng and Haiyao Xiao and Ligang Liu and Juyong Zhang},
title = {HeadNeRF: A Real-time NeRF-based Parametric Head Model},
booktitle = {{IEEE/CVF} Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022}
}