Journal of Computer Science and Cybernetics, (2019), 69–83

PRODUCT SUB-VECTOR QUANTIZATION FOR FEATURE INDEXING

THE-ANH PHAM1,∗, DINH-NGHIEP LE1, THI-LAN-PHUONG NGUYEN2

1 Hong Duc University
2 Thai Nguyen University Lao Cai Campus
∗ phamtheanh@

Abstract. This work addresses the problem of feature indexing in order to significantly accelerate the matching process, which is commonly known as a cumbersome task in many computer vision applications. To this end, we propose to perform product sub-vector quantization (PSVQ) to create a finer representation of the underlying data while still maintaining a reasonable memory allocation. In addition, the quantized data can be used jointly with a clustering tree to perform approximate nearest neighbor search very efficiently. Experimental results demonstrate the superiority of the proposed method on different datasets in comparison with other methods.

Keywords. Product quantization; Hierarchical clustering tree; Approximate nearest neighbor search.

1. INTRODUCTION

Feature indexing is an important technique that allows fast retrieval and matching of visual objects in the computer vision field. The most active application of feature indexing is probably fast approximate nearest neighbor (ANN) search. In the literature, popular approaches to this problem include space partitioning methods (typically, the KD-tree [5], randomized KD-trees [26], and the LM-tree [24]), hashing methods (such as LSH [8] and kernelized LSH [12]), and hierarchical clustering methods (such as the vocabulary K-means tree [19] and POC-trees [22]). Recently, product quantization (PQ) [9] has been actively studied for its applications in fast ANN search and feature indexing. Different variants of the PQ technique have been presented to optimize the quantization stage, such as optimized PQ [6, 20], locally optimized PQ [10], and distribution-sensitive PQ (DSPQ) [13]. PQ can be ...
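To make the role of product quantization concrete, the following is a minimal sketch of standard PQ encoding in the spirit of [9], not the authors' implementation: each vector is split into M sub-vectors, every sub-vector block gets its own small k-means codebook, and a vector is then stored as M short codes instead of its full floating-point components. The function names and parameters (train_pq, encode_pq, M, k) are illustrative choices for this sketch, and k-means training is delegated to scikit-learn.

```python
import numpy as np
from sklearn.cluster import KMeans


def train_pq(X, M=8, k=256, seed=0):
    """Train one k-means codebook per sub-vector block.

    X: (n, D) training vectors, where D must be divisible by M.
    Returns a list of M codebooks, each of shape (k, D // M).
    """
    n, D = X.shape
    d = D // M
    codebooks = []
    for m in range(M):
        block = X[:, m * d:(m + 1) * d]
        km = KMeans(n_clusters=k, n_init=4, random_state=seed).fit(block)
        codebooks.append(km.cluster_centers_)
    return codebooks


def encode_pq(X, codebooks):
    """Encode each vector as M codebook indices (one index per sub-vector)."""
    n, D = X.shape
    M = len(codebooks)
    d = D // M
    codes = np.empty((n, M), dtype=np.int32)
    for m, C in enumerate(codebooks):
        block = X[:, m * d:(m + 1) * d]
        # Squared distance from every sub-vector to every centroid of this block.
        dists = ((block[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
        codes[:, m] = dists.argmin(axis=1)
    return codes


if __name__ == "__main__":
    # Example: 64-dimensional descriptors compressed to 8 one-byte codes each.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((2000, 64)).astype(np.float32)
    codebooks = train_pq(X, M=8, k=256)
    codes = encode_pq(X, codebooks)
    print(codes.shape)  # (2000, 8)
```

With M = 8 and k = 256, each 64-dimensional float descriptor is reduced to 8 bytes of codes, which illustrates the memory/accuracy trade-off that the PQ variants cited above, and the PSVQ method proposed in this work, seek to improve.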