A greedy approach to learning a compact and discriminative dictionary for sparse representation is presented. The objective function combines two terms: the entropy rate of a random walk on a graph and a discriminative term. Dictionary learning is cast as finding the graph topology that maximizes this objective. By exploiting the monotonicity and submodularity of the objective function under a matroid constraint, a highly efficient greedy optimization algorithm is obtained. Experimental results demonstrate superior performance on face, action, and object category recognition tasks.
(1) Entropy Rate of a Random Walk (favors compact and homogeneous clusters)
(2) Discriminative Function (encourages class purity and fewer clusters); see the greedy selection sketch below.
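As a rough illustration of the optimization step, the sketch below runs plain greedy selection on a generic monotone submodular set function. The toy vertex-coverage objective and the cardinality budget stand in for the actual objective and matroid constraint, so the names (greedy_maximize, coverage, budget) are illustrative assumptions rather than the released code.

def greedy_maximize(candidates, objective, budget):
    """Greedily add the element with the largest marginal gain in the objective."""
    selected = set()
    remaining = set(candidates)
    while remaining and len(selected) < budget:
        # Marginal gain of adding each remaining candidate.
        gains = {e: objective(selected | {e}) - objective(selected)
                 for e in remaining}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:  # monotone objective: stop once no gain remains
            break
        selected.add(best)
        remaining.remove(best)
    return selected

# Toy usage: vertex coverage of an edge set is monotone and submodular.
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
coverage = lambda S: len({v for e in S for v in e})
print(greedy_maximize(edges, coverage, budget=3))

Submodularity also allows the marginal gains to be evaluated lazily, which is the usual way such greedy loops are made fast; the version above recomputes every gain for clarity.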
Each waveform shows the sum of the absolute values of the sparse codes for test samples from the same class.
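For concreteness, the quantity each waveform plots could be computed as in the sketch below, assuming a hypothetical sparse-code matrix X (dictionary atoms by test samples) and a per-sample label vector; the names are placeholders, not the released interface.

import numpy as np

# Sketch of the plotted quantity: for each class, sum the absolute
# sparse codes of its test samples over every dictionary atom.
# X has shape (dictionary_size, num_test_samples); labels has length
# num_test_samples. Both are assumed inputs.
def classwise_abs_code_sum(X, labels):
    labels = np.asarray(labels)
    return {c: np.abs(X[:, labels == c]).sum(axis=1)
            for c in np.unique(labels)}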
Random face descriptors are used for Extended YaleB. For Caltech101, dense SIFT features are extracted, pooled into spatial pyramid representations, and reduced in dimensionality by PCA.
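The dimensionality-reduction step for the Caltech101 features might look like the sketch below, where the descriptors are assumed to be stacked into a sample-by-feature matrix and the output dimension is an illustrative placeholder, not the value used in the experiments.

import numpy as np
from sklearn.decomposition import PCA

# Sketch of the PCA reduction step, assuming the spatial-pyramid SIFT
# descriptors are already stacked row-wise; out_dim is a placeholder,
# not the dimensionality used in the paper's experiments.
def reduce_features(features, out_dim=128):
    return PCA(n_components=out_dim).fit_transform(features)

features = np.random.rand(200, 1024)   # stand-in for real descriptors
reduced = reduce_features(features)    # shape (200, 128)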
(1) Classification accuracy using different dictionary sizes
(2) Training time on Extended YaleB
(3) Training time on Caltech101
All materials provided here are for non-commercial research use only.
If you have any questions about this source code, please contact: zhuolinumd@gmail.com