Traditional content-based image retrieval methods based on learning from examples analyze and attempt to understand the high-level semantics of an image as a whole. They typically apply a case-based reasoning technique to interpret and retrieve images, measuring semantic similarity or relatedness between example images and candidate images. The drawback of this traditional paradigm is that the aggregation of visual content within an image leads to tremendous variation from image to image; as a result, semantically related images may share only a small pocket of common elements, if any. Such variability in visual composition poses great challenges to content-based image retrieval methods that operate at the granularity of entire images. In this study, we explore a new content-based image retrieval algorithm that mines finer-granularity visual patterns within an image to identify visual instances that can more reliably and generically represent a given search concept. We performed preliminary experiments to validate this idea and obtained very encouraging results.
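To make the contrast concrete, the sketch below (not the paper's algorithm; all names and the feature representation are illustrative assumptions) compares whole-image similarity against a finer-granularity score that matches local regions. When two images share only one common visual instance, the whole-image score is diluted by the differing regions, while the region-level score still detects the shared element.

```python
import numpy as np

# Illustrative sketch only: each "image" is a small grid of local-region
# feature vectors (rows), standing in for real extracted image features.
rng = np.random.default_rng(0)

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def whole_image_similarity(img_a, img_b):
    """Whole-image granularity: pool all region features, then compare."""
    return cosine(img_a.mean(axis=0), img_b.mean(axis=0))

def best_region_similarity(img_a, img_b):
    """Finer granularity: best match over all pairs of local regions."""
    return max(cosine(ra, rb) for ra in img_a for rb in img_b)

# Two images, each with 4 local regions of 8-dimensional features.
query = rng.normal(size=(4, 8))
candidate = rng.normal(size=(4, 8))
# Plant one shared visual instance: a single candidate region matches a
# query region while the rest of the two images differ.
candidate[2] = query[1]

print(whole_image_similarity(query, candidate))
print(best_region_similarity(query, candidate))  # close to 1.0
```

Under this toy setup the region-level score approaches 1.0 because of the planted common element, whereas the pooled whole-image score remains much lower, mirroring the argument that retrieval over finer-granularity visual instances is more robust to variation in overall image composition.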