
Watch It Move: Unsupervised Discovery of 3D Joints for Re-Posing of Articulated Objects

About

Rendering articulated objects while controlling their poses is critical to applications such as virtual reality or animation for movies. Manipulating the pose of an object, however, requires an understanding of its underlying structure, that is, its joints and how they interact with each other. Unfortunately, assuming the structure to be known, as existing methods do, precludes the ability to work on new object categories. We propose to learn both the appearance and the structure of previously unseen articulated objects by observing them move from multiple views, with no joint annotation supervision or information about the structure. We observe that 3D points that are static relative to one another should belong to the same part, and that adjacent parts that move relative to each other must be connected by a joint. To leverage this insight, we model the object parts in 3D as ellipsoids, which allows us to identify joints. We combine this explicit representation with an implicit one that compensates for the approximation introduced. We show that our method works for different structures, from quadrupeds to single-arm robots to humans.
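The abstract's core observation (points that keep constant mutual distances form one rigid part; parts that move relative to each other must share a joint) can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the greedy grouping rule, and the `tol` threshold are illustrative assumptions for tracked 3D points given as a `(T, N, 3)` array.

```python
# A minimal sketch (not the paper's code) of the stated insight:
# 3D points whose pairwise distances stay constant over time belong to the
# same rigid part; two parts whose mutual distances change must be joined.
import numpy as np


def group_rigid_parts(tracks, tol=1e-2):
    """Greedily group tracked 3D points into rigid parts.

    tracks: (T, N, 3) positions of N points over T frames.
    Two points are placed in the same part when the standard deviation of
    their pairwise distance over time is below `tol` (hypothetical rule).
    """
    T, N, _ = tracks.shape
    diff = tracks[:, :, None, :] - tracks[:, None, :, :]   # (T, N, N, 3)
    dist = np.linalg.norm(diff, axis=-1)                    # (T, N, N)
    rigid = dist.std(axis=0) < tol                          # (N, N) boolean

    parts, assigned = [], np.zeros(N, dtype=bool)
    for i in range(N):
        if assigned[i]:
            continue
        members = np.where(rigid[i] & ~assigned)[0]
        assigned[members] = True
        parts.append(members.tolist())
    return parts


def needs_joint(tracks, part_a, part_b, tol=1e-2):
    """True if the two parts move relative to each other, i.e. some
    cross-part distance varies over time -- per the paper's observation,
    such adjacent parts must be connected by a joint."""
    cross = np.linalg.norm(
        tracks[:, part_a, None, :] - tracks[:, None, part_b, :], axis=-1
    )  # (T, |A|, |B|)
    return bool(cross.std(axis=0).max() > tol)


if __name__ == "__main__":
    # Toy example: a static segment and a segment rotating about the origin.
    T = 20
    angles = np.linspace(0.0, np.pi / 3, T)
    unit = lambda a: np.array([np.cos(a), np.sin(a), 0.0])
    static = np.tile(np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0]]), (T, 1, 1))
    moving = np.stack([np.stack([unit(a), 2 * unit(a)]) for a in angles])
    tracks = np.concatenate([static, moving], axis=1)       # (T, 4, 3)

    parts = group_rigid_parts(tracks)
    print("parts:", parts)                                   # [[0, 1], [2, 3]]
    print("joint needed:", needs_joint(tracks, parts[0], parts[1]))  # True
```

The paper goes further by representing each discovered part as a 3D ellipsoid and combining that explicit model with an implicit representation; the sketch above only covers the grouping and joint-detection intuition.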

Atsuhiro Noguchi, Umar Iqbal, Jonathan Tremblay, Tatsuya Harada, Orazio Gallo • 2021

Related benchmarks

Task                   Dataset                  Metric         Result   Rank
Novel View Synthesis   ZJU-MoCap (test)         SSIM           0.966    43
Novel View Synthesis   D-NeRF synthetic (test)  Average PSNR   25.21    42
Novel View Synthesis   Blender (test)           PSNR           23.81    37
Novel View Synthesis   ZJU-MoCap                PSNR           31.08    23
Human Pose Estimation  ZJU-MoCap (test)         MPJPE          7.59     4
Re-posing              ZJU-MoCap (test)         LPIPS          0.064    4
Novel View Synthesis   Robots                   PSNR           29.11    3

Other info

Code
