Matching 3D objects by similarity is a fundamental problem in computer vision, computer graphics and many other fields. The main challenge in object matching is to find a shape representation that can discriminate accurately and quickly between similar and dissimilar shapes. In this paper we present a new volumetric descriptor for representing 3D objects. The proposed descriptor is used to match objects under similarity transformations, i.e. rigid transformations combined with uniform scaling. The descriptor is computed by dividing the object into concentric shells and recording the distribution of its surface area across those shells. The computed areas are normalised so that the descriptor is scale-invariant as well as rotation- and translation-invariant. The effectiveness of the proposed descriptor, its robustness to noise and varying sampling density, and the suitability of the similarity measures are analysed and demonstrated through experimental results.
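To make the construction concrete, the following is a minimal sketch (not the authors' reference implementation) of a shell-based area-distribution descriptor for a triangle mesh. It assumes `vertices` is an (N, 3) float array and `faces` an (M, 3) integer index array; the shell count and the choice of normalising radial distances by the largest centroid distance are illustrative assumptions.

```python
import numpy as np

def shell_area_descriptor(vertices, faces, n_shells=32):
    tri = vertices[faces]                      # (M, 3, 3) triangle corners
    # Per-triangle area from the cross product of two edge vectors.
    areas = 0.5 * np.linalg.norm(
        np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    centroids = tri.mean(axis=1)               # (M, 3) triangle centroids

    # Translation invariance: measure distances from the area-weighted centre.
    centre = np.average(centroids, axis=0, weights=areas)
    dist = np.linalg.norm(centroids - centre, axis=1)

    # Scale invariance: normalise distances by the largest one, so every
    # object is mapped into the same unit radius before binning.
    dist /= dist.max()

    # Rotation invariance: only radial distances are used. Accumulate each
    # triangle's area into its shell, then normalise the histogram to sum to 1.
    shell_idx = np.minimum((dist * n_shells).astype(int), n_shells - 1)
    hist = np.bincount(shell_idx, weights=areas, minlength=n_shells)
    return hist / hist.sum()
```

Two such descriptors can then be compared with any histogram distance (e.g. an L1 or L2 norm) to score the similarity of two objects; the specific measures evaluated in the paper are discussed in the experimental results.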