Detecting Ground Shadows in Outdoor Consumer Photographs

Teaser
From an input image (left), ground shadow boundaries are detected (middle) and then removed (right).

People

Abstract


Detecting shadows from images can significantly improve the performance of several vision tasks such as object detection and tracking. Recent approaches have mainly used illumination invariants, which can fail severely when image quality is poor, as is the case for most consumer-grade photographs, such as those found on Google or Flickr. We present a practical algorithm to automatically detect shadows cast by objects onto the ground, from a single consumer photograph. Our key hypothesis is that the types of materials constituting the ground in outdoor scenes are relatively limited, most commonly asphalt, brick, stone, mud, grass, and concrete. As a result, the appearance of shadows on the ground does not vary as widely as that of general shadows, and thus can be learned from a labelled set of images. Our detector consists of a three-tier process: (a) training a decision tree classifier on a set of shadow-sensitive features computed around each image edge, (b) a CRF-based optimization to group detected shadow edges into coherent shadow contours, and (c) incorporating any existing classifier that is specifically trained to detect ground regions in images. Our results demonstrate good detection accuracy (85%) on several challenging images. Since most objects of interest in vision applications (such as pedestrians, vehicles, and signs) are attached to the ground, we believe our detector can find wide applicability.
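
As a rough illustration of how the three stages might fit together, here is a minimal Python sketch assuming NumPy and scikit-learn. The feature definition, the per-chain averaging that stands in for the CRF optimization, and the precomputed ground-probability map are all simplifying assumptions for illustration, not the authors' released implementation (see the Code section below for that).

# Hypothetical sketch of the three-tier detector described above.
# Features, grouping, and the ground map are simplified stand-ins.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def edge_features(image, edges):
    """Shadow-sensitive features around each edge pixel.

    `edges` is an (N, 4) array of (y, x, ny, nx): pixel location plus unit
    normal. As a stand-in for the paper's features, we sample RGB on both
    sides of the edge and take their ratio.
    """
    feats = []
    for y, x, ny, nx in edges:
        dark = image[int(round(y - 2 * ny)), int(round(x - 2 * nx))].astype(float)
        lit = image[int(round(y + 2 * ny)), int(round(x + 2 * nx))].astype(float)
        feats.append(np.concatenate([lit / (dark + 1e-6), dark, lit]))
    return np.asarray(feats)

# (a) Train a decision tree on labelled shadow / non-shadow edge features.
def train_edge_classifier(features, labels, max_depth=10):
    clf = DecisionTreeClassifier(max_depth=max_depth)
    clf.fit(features, labels)
    return clf

# (b) Group per-edge predictions into coherent contours. The paper optimizes
# a CRF over edge chains; this sketch substitutes simple per-chain averaging.
def group_shadow_contours(chains, edge_probs, threshold=0.5):
    return [chain for chain in chains if edge_probs[chain].mean() > threshold]

# (c) Keep only contours lying on the ground, given a ground-probability map
# produced by any existing ground classifier (assumed precomputed).
def keep_ground_contours(contours, edges, ground_prob, min_prob=0.5):
    kept = []
    for chain in contours:
        ys = edges[chain, 0].round().astype(int)
        xs = edges[chain, 1].round().astype(int)
        if ground_prob[ys, xs].mean() > min_prob:
            kept.append(chain)
    return kept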

Citation

Jean-François Lalonde, Alexei A. Efros, and Srinivasa G. Narasimhan. Detecting Ground Shadows in Outdoor Consumer Photographs, European Conference on Computer Vision, 2010. [PDF] [BibTeX]

Poster

Download the poster presented at ECCV 2010 here: [PDF, 9MB]

Dataset

Download the dataset (with shadow boundary annotations) used to train and evaluate the shadow classifier presented in this paper. Please cite the paper if you use the data in a publication.

Code

Download the code for this paper as a ZIP file, or get it from GitHub.

Funding

This research is supported by:

Copyright notice
