Material Anything: Generating Materials for Any 3D Object via Diffusion
About
We present Material Anything, a fully automated, unified diffusion framework designed to generate physically-based materials for 3D objects. Unlike existing methods that rely on complex pipelines or case-specific optimizations, Material Anything offers a robust, end-to-end solution adaptable to objects under diverse lighting conditions. Our approach leverages a pre-trained image diffusion model, enhanced with a triple-head architecture and a rendering loss to improve stability and material quality. Additionally, we introduce confidence masks as a dynamic switcher within the diffusion model, enabling it to effectively handle both textured and texture-less objects across varying lighting conditions. By employing a progressive material generation strategy guided by these confidence masks, along with a UV-space material refiner, our method ensures consistent, UV-ready material outputs. Extensive experiments demonstrate that our approach outperforms existing methods across a wide range of object categories and lighting conditions.
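The triple-head design and confidence-mask conditioning can be pictured in a few lines of PyTorch. The sketch below is illustrative only: the split into albedo, roughness/metallic, and bump heads, all channel counts, and the injection of the confidence mask as an extra conditioning channel are assumptions for exposition, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class TripleHeadOutput(nn.Module):
    """Minimal sketch of a triple-head output block for a material
    diffusion model. The head split (albedo, roughness/metallic, bump)
    and channel counts are illustrative assumptions."""

    def __init__(self, feat_channels: int = 64):
        super().__init__()
        # Each head maps shared denoiser features to one material map.
        self.albedo_head = nn.Conv2d(feat_channels, 3, kernel_size=3, padding=1)
        self.rough_metal_head = nn.Conv2d(feat_channels, 2, kernel_size=3, padding=1)
        self.bump_head = nn.Conv2d(feat_channels, 3, kernel_size=3, padding=1)

    def forward(self, feats: torch.Tensor) -> dict[str, torch.Tensor]:
        return {
            "albedo": self.albedo_head(feats),
            "rough_metal": self.rough_metal_head(feats),
            "bump": self.bump_head(feats),
        }


def condition_on_confidence(rendered: torch.Tensor, confidence: torch.Tensor) -> torch.Tensor:
    """Append a per-pixel confidence mask as an extra conditioning channel,
    so the denoiser can switch between trusting the input appearance
    (high confidence) and generating material from scratch (low confidence).
    The channel-concatenation layout is an assumption for illustration."""
    return torch.cat([rendered, confidence], dim=1)


if __name__ == "__main__":
    heads = TripleHeadOutput(feat_channels=64)
    feats = torch.randn(1, 64, 128, 128)       # shared denoiser features
    rendered = torch.randn(1, 3, 128, 128)     # rendered appearance condition
    confidence = torch.rand(1, 1, 128, 128)    # per-pixel confidence mask
    cond = condition_on_confidence(rendered, confidence)
    outputs = heads(feats)
    print(cond.shape, {k: tuple(v.shape) for k, v in outputs.items()})
```

In a full pipeline, the per-view material maps predicted this way would then be unwrapped and polished by the UV-space refiner the abstract describes; that stage is omitted here.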
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Conditional 3D Texture Generation | Curated Shaded 1.0 (test) | FID | 126 | 9 |
| Conditional 3D Texture Generation | Curated Unshaded 1.0 (test) | FID | 165.5 | 9 |
| PBR Texture Generation | PBR Texture Generation Evaluation Set | Shaded FID/CLIP Score | 6.582 | 8 |