Efficient Classification of Very Large Images with Tiny Objects

Fanjie Kong, Ricardo Henao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

11 Scopus citations

Abstract

A growing number of applications in computer vision, especially in medical imaging and remote sensing, become challenging when the goal is to classify very large images with tiny informative objects. Specifically, these classification tasks face two key challenges: i) the size of the input image is usually on the order of mega- or gigapixels, yet existing deep architectures do not easily operate on such large images due to memory constraints, so a memory-efficient method is needed to process them; and ii) only a very small fraction of each input image is informative of the label of interest, resulting in a low region-of-interest (ROI) to image ratio. However, most current convolutional neural networks (CNNs) are designed for image classification datasets with relatively large ROIs and small image sizes (sub-megapixel). Existing approaches have addressed these two challenges only in isolation. We present an end-to-end CNN model, termed Zoom-In network, that leverages hierarchical attention sampling to classify large images with tiny objects on a single GPU. We evaluate our method on four large-image histopathology, road-scene, and satellite imaging datasets, and on one gigapixel pathology dataset. Experimental results show that our model achieves higher accuracy than existing methods while requiring fewer memory resources.
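The core idea behind attention sampling — scoring a cheap low-resolution view of a huge image and then processing only the few most informative full-resolution patches — can be illustrated with a minimal NumPy sketch. This is not the paper's Zoom-In network: the attention map and patch feature extractor below are simple stand-ins (patch mean intensity), and `attention_sample_patches` is a hypothetical helper invented for illustration.

```python
import numpy as np

def attention_sample_patches(image, patch=32, k=4):
    """Illustrative attention sampling: score patches on a coarse grid,
    then aggregate features from only the k highest-attention patches."""
    H, W = image.shape
    gh, gw = H // patch, W // patch
    # Stand-in attention map: mean intensity of each patch on the grid
    # (a real model would use a small CNN over a downsampled image)
    scores = image[:gh * patch, :gw * patch] \
        .reshape(gh, patch, gw, patch).mean(axis=(1, 3))
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()
    # Keep only the k most informative patch locations
    top = np.argsort(attn.ravel())[::-1][:k]
    feats, weights = [], []
    for idx in top:
        r, c = divmod(int(idx), gw)
        p = image[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch]
        feats.append(p.mean())      # stand-in patch feature extractor
        weights.append(attn[r, c])
    weights = np.asarray(weights) / np.sum(weights)
    # Attention-weighted aggregation of the sampled patch features
    return float(np.dot(weights, feats))
```

Because only `k` patches are ever loaded at full resolution, memory scales with the number of sampled patches rather than with the size of the whole image — the property that makes this family of methods attractive for gigapixel inputs.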
Original language: English (US)
Title of host publication: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Publisher: IEEE Computer Society
Pages: 2374-2384
Number of pages: 11
ISBN (Print): 9781665469463
DOIs
State: Published - Jan 1 2022
Externally published: Yes
