Generalizable and Relightable Gaussian Splatting
for Human Novel View Synthesis


Yipengjing Sun1, Chenyang Wang1, Shunyuan Zheng1, Zonglin Li1, Shengping Zhang1✉, Xiangyang Ji2

1Harbin Institute of Technology    2Tsinghua University
✉ Corresponding author

Abstract


We propose GRGS, a generalizable and relightable 3D Gaussian framework for high-fidelity human novel view synthesis under diverse lighting conditions. Unlike existing methods that rely on per-character optimization or ignore physical constraints, GRGS adopts a feed-forward, fully supervised strategy that projects geometry, material, and illumination cues from multi-view 2D observations into 3D Gaussian representations. Specifically, to reconstruct lighting-invariant geometry, we introduce a Lighting-aware Geometry Refinement (LGR) module trained on synthetically relit data to predict accurate depth and surface normals. Building on this high-quality geometry, a Physically Grounded Neural Rendering (PGNR) module integrates neural prediction with physics-based shading, supporting editable relighting with shadows and indirect illumination. In addition, we design a 2D-to-3D projection training scheme that leverages differentiable supervision from ambient occlusion, direct, and indirect lighting maps, which alleviates the computational cost of explicit ray tracing. Extensive experiments demonstrate that GRGS achieves superior visual quality, geometric consistency, and generalization across characters and lighting conditions.
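To make the physics-based shading step concrete, the minimal NumPy sketch below shows one plausible way to combine predicted per-Gaussian materials with sampled environment lighting. All names, the Lambertian BRDF, and the use of a single ambient-occlusion scalar in place of per-light visibility are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def shade_gaussians(albedo, normals, ao, indirect, light_dirs, light_rgb):
    """Hypothetical physics-based shading over per-Gaussian attributes.

    albedo:     (N, 3) diffuse albedo from the material branch
    normals:    (N, 3) unit surface normals from the geometry module
    ao:         (N, 1) ambient-occlusion term in [0, 1]
    indirect:   (N, 3) neurally predicted indirect illumination
    light_dirs: (L, 3) unit directions of environment-light samples
    light_rgb:  (L, 3) radiance of each light sample
    """
    # Lambertian foreshortening: max(n . l, 0) for every Gaussian/light pair
    cos = np.clip(normals @ light_dirs.T, 0.0, None)     # (N, L)
    # Monte Carlo estimate of the direct lighting integral
    direct = cos @ light_rgb / light_dirs.shape[0]       # (N, 3)
    # AO attenuates incoming light; indirect illumination is added on top
    return albedo * (ao * direct + indirect)

In the full model, cast shadows would presumably come from a per-light visibility term rather than a single AO scalar; the sketch collapses that distinction for brevity.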


Gallery


Various performers under diverse lighting conditions and novel viewpoints

Dynamic Extension


Although GRGS does not include an explicit temporal-consistency module for dynamic scenarios, it can be extended to dynamic settings provided the capture is performed under uniform lighting conditions.

Method



Overview of GRGS. Given sparse-view images of a performer under arbitrary illumination, GRGS first leverages the LGR module to reconstruct accurate depth and surface normals, and then employs the PGNR module for material decomposition and physically plausible relighting from novel viewpoints.
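Read as pseudocode, the two-stage flow described above might look like the following sketch. Every callable name here (lgr, pgnr, lift, shade, splat) is a placeholder chosen for illustration, not the released API.

def grgs_inference(images, cams, envmap, novel_cam,
                   lgr, pgnr, lift, shade, splat):
    """Hypothetical end-to-end GRGS inference flow (all callables are placeholders).

    lgr:   multi-view images -> refined depth and surface normals
    pgnr:  images + geometry -> albedo, ambient occlusion, indirect light
    lift:  depth/normal maps -> 3D Gaussian primitives
    shade: physics-based shading under the target environment map
    splat: differentiable Gaussian rasterization from a novel camera
    """
    depth, normals = lgr(images, cams)               # Stage 1: LGR geometry
    gaussians = lift(depth, normals, cams)           # 2D-to-3D projection
    albedo, ao, indirect = pgnr(images, gaussians)   # Stage 2: PGNR materials
    colors = shade(albedo, normals, ao, indirect, envmap)
    return splat(gaussians, colors, novel_cam)       # novel-view rendering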



Demo Video



Citation



@article{sun2025generalizable,
  title={Generalizable and Relightable Gaussian Splatting for Human Novel View Synthesis},
  author={Sun, Yipengjing and Wang, Chenyang and Zheng, Shunyuan and Li, Zonglin and Zhang, Shengping and Ji, Xiangyang},
  journal={arXiv preprint arXiv:2505.21502},
  year={2025}
}