3D Semantic Splatting

University of Bristol

Drag the slider to compare the standard RGB rendering with a PCA projection of the learned semantic feature field.
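The PCA visualisation shown in the slider can be sketched as below: the three largest principal components of the per-pixel feature vectors become the colour channels. This is a minimal NumPy illustration, not the project's actual rendering code; `features_to_rgb` is a hypothetical helper name.

```python
import numpy as np

def features_to_rgb(features: np.ndarray) -> np.ndarray:
    """Project an (H, W, D) feature map to an (H, W, 3) image via PCA.

    The three principal components of the per-pixel features become the
    colour channels, min-max normalised to [0, 1] for display.
    """
    h, w, d = features.shape
    flat = features.reshape(-1, d).astype(np.float64)
    flat -= flat.mean(axis=0)                 # centre the features
    # Eigendecomposition of the covariance gives the principal axes.
    cov = flat.T @ flat / flat.shape[0]
    _, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    top3 = eigvecs[:, -3:][:, ::-1]           # three largest components
    proj = flat @ top3                        # (H*W, 3)
    # Normalise each channel independently for display.
    proj -= proj.min(axis=0)
    proj /= proj.max(axis=0) + 1e-8
    return proj.reshape(h, w, 3)

# Example: a random 384-dimensional feature map (a typical ViT width).
rgb = features_to_rgb(np.random.default_rng(0).normal(size=(32, 32, 384)))
print(rgb.shape)  # (32, 32, 3)
```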

Abstract

Coral reefs are highly vulnerable to climate change, and while photogrammetry enables millimetre-scale mapping, supervised AI methods fail without large labelled datasets. This ongoing research presents an approach to coral scene understanding that integrates DINOv3 semantic embeddings directly into the 3D Gaussian Splatting pipeline. By supervising high-dimensional per-Gaussian feature vectors against embeddings extracted from multi-view imagery, we have thus far achieved qualitatively view-consistent 3D semantic segmentation. This methodology leverages the strengths of Vision Transformer foundation models to provide rich, zero-shot semantic representations within a reconstructed 3D volume.
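The supervision described above can be sketched as follows: each Gaussian carries a learnable feature vector that is alpha-composited into a feature map, which is then compared against the DINOv3 features of the same view. This is a per-pixel NumPy illustration under assumed names (`composite_features`, `feature_loss`); a real implementation would run this inside a differentiable rasteriser.

```python
import numpy as np

def composite_features(feats: np.ndarray, alphas: np.ndarray) -> np.ndarray:
    """Front-to-back alpha compositing of per-Gaussian feature vectors
    at one pixel, mirroring how 3DGS composites colour.

    feats:  (N, D) learnable feature vector per depth-sorted Gaussian
    alphas: (N,)   opacity contribution of each Gaussian at this pixel
    """
    out = np.zeros(feats.shape[1])
    transmittance = 1.0
    for f, a in zip(feats, alphas):
        out += transmittance * a * f   # weight by remaining transmittance
        transmittance *= 1.0 - a      # light occluded by this Gaussian
    return out

def feature_loss(rendered: np.ndarray, target: np.ndarray) -> float:
    """Mean per-pixel squared error between a rendered (H, W, D) feature
    map and the teacher feature map extracted from the same view."""
    return float(np.mean(np.sum((rendered - target) ** 2, axis=-1)))

# Two Gaussians along a ray, each half-opaque: the nearer one dominates.
pixel = composite_features(np.array([[1.0, 0.0], [0.0, 1.0]]),
                           np.array([0.5, 0.5]))
print(pixel)  # [0.5, 0.25]
```

Minimising this loss over all training views is what drives the per-Gaussian features toward a view-consistent 3D semantic field.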

Standard RGB View / Semantic Features View

A static view of the 3D models showing consistent semantic embeddings across complex geometry.

Acknowledgments

Special thanks to the creators of the wildflow/sweet-corals dataset for open-sourcing the high-quality underwater photogrammetry data used in this project. The visualisations above specifically use the Tabuhan P1 dataset.

BibTeX

If you find this work useful for your research, please consider citing:

@article{hyde2026semanticsplatting,
  title={3D Semantic Splatting: Fusing DINOv3 features with Gaussian Splatting for Coral Reef Monitoring},
  author={Hyde, Joshua and Clark, Jeff and Jones, Rob},
  journal={University of Bristol Student Research Project},
  year={2026}
}