Computational Photography @ ICIP 2015

[Header image credit: Martin St-Amant]

In the last decade, computational photography has emerged as a vibrant field of research. A computational camera uses a combination of unconventional optics and novel algorithms to produce images that cannot otherwise be captured with traditional cameras. The design of such cameras involves the following two main aspects:

Optical coding
modifying the design of a traditional camera by introducing programmable optical elements and light sources to capture the maximum amount of scene information in images;
Algorithm design
developing algorithms that take information captured by conventional or modified cameras, and create a visual experience that goes beyond the capabilities of traditional systems.

Examples of computational cameras that are already making an impact in the consumer market include wide field-of-view cameras (Omnicam), light-field cameras (Lytro), high dynamic range cameras (mobile cameras), multispectral cameras, motion-sensing cameras (Leap Motion), and depth cameras (Kinect).

This course serves as an introduction to the basic concepts in programmable optics and computational image processing needed for designing a wide variety of computational cameras, as well as an overview of the recent work in the field.

Opening remarks and a brief history of photography [J-F Lalonde]
Coded photography [M. Gupta]
Augmented photography [J-F Lalonde]
Future and impact of photography [M. Gupta]
Q&A and concluding remarks

1. A brief history of photography

  • From camera obscura to the computational camera
Slides: PDF / PPT / Keynote

2. Coded photography: novel camera designs and functionalities

  • Optical coding approaches: aperture, image-plane, and illumination coding; camera arrays;
  • Novel functionalities: light-field cameras, extended depth-of-field cameras, hyperspectral cameras, ultra-high-resolution (gigapixel) cameras, HDR cameras, post-capture refocusing, and post-capture resolution trade-offs;
  • Depth cameras: structured light, time-of-flight;
  • Compressive sensing: single-pixel and high-speed cameras.
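As a small illustration of one topic above (HDR cameras), differently exposed frames can be merged into a single radiance estimate by weighting each pixel by how well exposed it is. This is a minimal NumPy sketch, not from the course materials; the hat-shaped weighting and the linear-sensor assumption are illustrative choices:

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge linear-domain exposures into one radiance estimate.

    images: float arrays in [0, 1] (same shape), linear sensor values.
    exposure_times: exposure time (seconds) for each image.
    A hat weight trusts mid-range pixels most, discounting clipped
    highlights and noisy shadows.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # 1 at mid-gray, 0 at extremes
        num += w * img / t                 # per-pixel radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)

# Synthetic example: a 3-pixel "scene" and two exposures that clip differently.
radiance = np.array([0.05, 0.4, 2.0])
short = np.clip(radiance * 0.25, 0.0, 1.0)  # t = 0.25 s: keeps the highlight
long_ = np.clip(radiance * 1.0, 0.0, 1.0)   # t = 1.0 s: bright pixel clips
hdr = merge_hdr([short, long_], [0.25, 1.0])
```

On this synthetic input, the merge recovers the underlying radiance: the long exposure covers the dark pixels and the short one the clipped highlight.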

3. Augmented photography: algorithmic tools for novel visual experiences

  • Mobile photography: trends and goals;
  • Inverting the imaging pipeline: deconvolution, PSF estimation, demosaicking;
  • Capturing bursts of photos: denoising, deblurring, sources of camera noise;
  • Advanced image editing: automatic cropping, contrast/tone/color adjustment or transfer, distractor removal, and shallow depth of field;
  • 2D image, 3D scene: advanced image editing beyond the image plane; scene geometry, material, and lighting estimation for "behind-the-image" editing.
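The simplest instance of the burst-photography idea listed above is that averaging N aligned noisy frames reduces the noise standard deviation by roughly sqrt(N). This NumPy sketch is illustrative only (perfect alignment and Gaussian noise are assumed; real bursts need registration and robust merging):

```python
import numpy as np

def burst_denoise(frames):
    """Average a perfectly aligned burst of frames.

    For independent zero-mean noise of std sigma, the residual noise
    in the mean of N frames has std sigma / sqrt(N).
    """
    return np.mean(np.stack(frames, axis=0), axis=0)

# Synthetic burst: a clean ramp signal plus Gaussian noise (std = 0.1).
rng = np.random.default_rng(0)
clean = np.linspace(0.0, 1.0, 1000)
burst = [clean + rng.normal(0.0, 0.1, clean.shape) for _ in range(16)]
out = burst_denoise(burst)
# Residual noise std should land near 0.1 / sqrt(16) = 0.025.
```

With 16 frames, the residual noise is about a quarter of a single frame's; this square-root law is what makes multi-frame capture attractive on small mobile sensors.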
Slides: PDF / PPT / Keynote

4. Future and impact of photography

  • "Social/collaborative photography" or the Internet of Cameras;
  • Wearable and flexible cameras;
  • Seeing the invisible: seeing around corners, through walls, laser speckle photography;
  • Image forensics;
  • Next generation applications (personalized health monitoring, robotic surgery, self-driving cars, astronomy).

5. Extras: what to do in Québec




Have a look at the Flickr photostream!


Mohit Gupta
Assistant Professor, University of Wisconsin, Madison

Mohit Gupta will start as an assistant professor in the CS department at the University of Wisconsin-Madison in January ’16. He is currently a research scientist in the CAVE lab at Columbia University. He received a B.Tech. in computer science from the Indian Institute of Technology Delhi in 2003, an M.S. from Stony Brook University in 2005, and a Ph.D. from the Robotics Institute, Carnegie Mellon University in 2011. His research interests are in computer vision and computational imaging. His focus is on designing computational cameras that enable computer vision systems to perform robustly in demanding real-world scenarios, as well as capture novel kinds of information about the physical world. Details can be found here.


Jean-François Lalonde
Assistant Professor, Laval University

Jean-François Lalonde is an assistant professor in ECE at Laval University, Quebec City. Previously, he was a Post-Doctoral Associate at Disney Research, Pittsburgh. He received a B.Eng. degree in Computer Engineering with honors from Laval University, Canada, in 2004. He earned his M.S. at the Robotics Institute at Carnegie Mellon University in 2006 and received his Ph.D., also from Carnegie Mellon, in 2011. After graduation, he became a Computer Vision Scientist at Tandent, where he helped develop LightBrush™, the first commercial intrinsic imaging application. His work focuses on lighting-aware image understanding and synthesis by leveraging large amounts of data. Details can be found here.