Deformation-Driven Shape Correspondence via Shape Recognition
Chenyang Zhu^{1,2}, Renjiao Yi^{1,2},
Wallace Lira^{1}, Ibraheem Alhashim^{1}, Kai Xu^{2}, Hao Zhang^{1}
^{1}Simon Fraser University, ^{2}National University of Defense Technology
ACM Transactions on Graphics (SIGGRAPH 2017), 36(4)
An overview of our deformation-driven correspondence algorithm. The input consists of two pre-segmented or over-segmented 3D shapes. We recursively split and match substructures according to a data-driven plausibility criterion that relies exclusively on shape recognition. The first iteration splits the input shapes into two sub-shapes. Given a pair of matched components, the algorithm recursively splits and matches them. Finally, after the termination conditions are met for each substructure, we obtain a final part correspondence.
Abstract

Many approaches to shape comparison and recognition start by establishing
a shape correspondence. We “turn the table” and show that quality
shape correspondences can be obtained by performing many shape recognition
tasks. What is more, the method we develop computes a fine-grained,
topology-varying part correspondence between two 3D shapes where the
core evaluation mechanism only recognizes shapes globally. This is made
possible by casting the part correspondence problem in a deformation-driven
framework and relying on a data-driven “deformation energy” which rates
visual similarity between deformed shapes and models from a shape repository.
Our basic premise is that if a correspondence between two chairs (or
airplanes, bicycles, etc.) is correct, then a reasonable deformation between
the two chairs anchored on the correspondence ought to produce plausible,
“chair-like” in-between shapes.
Given two 3D shapes belonging to the same category, we perform a
top-down, hierarchical search for part correspondences. For a candidate
correspondence at each level of the search hierarchy, we deform one input
shape into the other, while respecting the correspondence, and rate the
correspondence based on how well the resulting deformed shapes resemble
other shapes from ShapeNet belonging to the same category as the inputs.
The resemblance, i.e., plausibility, is measured by comparing multi-view
depth images over category-specific features learned for the various shape
categories. We demonstrate clear improvements over state-of-the-art approaches
through tests covering extensive sets of man-made models with
rich geometric and topological variations.
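The top-down search described above can be sketched as a toy recursion. Everything here is an illustrative assumption, not the authors' code: parts are tuples of labels, `split` is a stand-in binary splitter, and `score` is a stand-in for the data-driven plausibility measure, which in the actual method is obtained by deforming one shape into the other and recognizing the in-between shapes.

```python
def split_and_match(a, b, score, split):
    """Toy top-down split-and-match recursion (hypothetical sketch).

    `split(p)` returns the two sub-parts of p, or None when p is a leaf.
    `score(x, y)` is a stand-in for the plausibility of matching x to y;
    the paper rates candidate correspondences via shape recognition of
    deformed in-between shapes instead.
    """
    sa, sb = split(a), split(b)
    if sa is None or sb is None:          # termination: leaf part reached
        return [(a, b)]
    (a1, a2), (b1, b2) = sa, sb
    # Keep whichever pairing of sub-parts is rated more plausible.
    direct = score(a1, b1) + score(a2, b2)
    crossed = score(a1, b2) + score(a2, b1)
    if direct >= crossed:
        return (split_and_match(a1, b1, score, split)
                + split_and_match(a2, b2, score, split))
    return (split_and_match(a1, b2, score, split)
            + split_and_match(a2, b1, score, split))
```

With a halving splitter and a label-overlap score, the recursion recovers the part-to-part pairing even when the two shapes list their parts in different orders.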



Paper 



Images 
Left: Samples of our plausibility measure training data. "Missing negative" (left-middle)
samples show examples with patches that are not present in the shape,
while "swap negative" (left-right) shows structures with an undesirable combination
of patches. Either negative case directly degrades the perceived plausibility
of the shape. Right: Extracting a feature vector from the HOG representation of the
depth image and the mid-level patches. Feature vectors are constructed by
generating a response map using a convolution operation between patches
and depth maps. We obtain a 10-dimensional feature vector for each depth
patch by averaging five segments of the response map in the vertical and
horizontal directions. The feature vectors of all the patches of a given depth
image are then combined into one single feature vector that represents the
depth image.
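The pooling step in this caption can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the HOG computation and patch convolution that produce each response map are omitted, and the function names are hypothetical, not the authors' API. The 10 dimensions come from averaging five horizontal bands and five vertical bands of the response map.

```python
import numpy as np

def patch_feature(response_map):
    """Pool one patch's response map into a 10-D feature: the mean of
    five horizontal bands plus the mean of five vertical bands."""
    h_bands = np.array_split(response_map, 5, axis=0)  # five row bands
    v_bands = np.array_split(response_map, 5, axis=1)  # five column bands
    return np.array([b.mean() for b in h_bands] +
                    [b.mean() for b in v_bands])

def depth_image_feature(response_maps):
    """Concatenate the per-patch features into one descriptor
    representing the whole depth image."""
    return np.concatenate([patch_feature(r) for r in response_maps])
```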
Hierarchical correspondences of input shape segments obtained by our method.
A gallery of part correspondences computed by our algorithm (bottom pair of each comparison), compared to GeoTopo [Alhashim
et al. 2015]. Matched parts share the same color; unmatched parts are in gray.



Thanks 
We would like to thank the reviewers for their valuable comments
and feedback. We also thank Noa Fish and Oliver van Kaick for
fruitful discussions. This work is supported in part by grants from
NSERC Canada (611770), China Scholarship Council, and NSF China
(61572507, 61532003, 61622212).



Code & Data 



Bibtex 
@article{zhu_sig17,
  title   = {Deformation-Driven Shape Correspondence via Shape Recognition},
  author  = {Chenyang Zhu and Renjiao Yi and Wallace Lira and Ibraheem Alhashim and Kai Xu and Hao Zhang},
  journal = {ACM Transactions on Graphics (Proc. of SIGGRAPH 2017)},
  volume  = {36},
  number  = {4},
  pages   = {to appear},
  year    = {2017}
}

