Digital Camera Images:

Captured Scene Information vs. Engineered Errors

(Scalable from a 2-hour to a half-day course)


Alessandro Rizzi,

Dipartimento di Informatica e Comunicazione, Università degli Studi di Milano, Italy

alessandro.rizzi@unimi.it


John McCann,

McCann Imaging, Belmont, MA, USA

mccanns@tiac.net


Abstract:

Accurate scene capture is the first step in computer vision. In addition to geometric camera calibration, spatial radiance information is key to many recognition problems: face, object, motion, and feature recognition all begin with spatial scene information. It would be convenient to assume that, when we open a digital camera file and read the RGB digital values of a pixel, we get calibrated radiance measurements of the light from the scene. Two main types of problems make this step complex. First, cameras are designed to make attractive pictures using a host of firmware algorithms. Second, camera optics severely limit the accuracy of the acquired dynamic range. Thus, optical limits and firmware image processing modify scene radiance information, altering the spatial relationships in the captured image. The modifications are:

  1. Optical veiling glare introduces limits to the dynamic range.

  2. Nonlinear tone scales modify spatial comparisons (see the sketch after this list).

  3. Color space manipulations modify an object's chroma values.

  4. Camera firmware can morph the geometry of the sensor image.
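
A minimal numerical sketch of point 2 above (not the tutorial's code; the 1/2.2 power law and the radiance values are illustrative, not a particular camera's tone curve): a nonlinear tone scale changes the ratio between two pixels, so spatial comparisons made on the encoded digits no longer match comparisons made on scene radiances.

    # Illustrative sketch: a nonlinear tone scale alters spatial comparisons.
    def encode(radiance, gamma=1 / 2.2):
        """Power-law tone scale applied to normalized radiance in [0, 1]."""
        return radiance ** gamma

    bright, dark = 0.80, 0.10                 # linear scene radiances
    print(bright / dark)                      # 8.0  : ratio in the scene
    print(encode(bright) / encode(dark))      # ~2.6 : ratio after encoding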


Special calibration procedures are needed to recover accurate scene information, when recovery is possible. The tutorial describes techniques for:

  1. capturing scene radiances within the physical limits of the camera (range and color gamut),

  2. obtaining accurate color information from RAW images,

  3. efficiently representing information for subsequent image processing,

  4. rendering High-Dynamic-Range (HDR) captured data in Low-Dynamic-Range (LDR) media,

  5. mimicking human vision's scene rendition.


Human vision has excellent HDR performance, despite severe glare in the retinal image. It uses spatial comparisons to synthesize HDR vision in LDR media. Human neural contrast mechanisms counteract the degradation of scene information by optical glare. In applications that require scene rendition, spatial HDR processing that handles scene information the way human vision does is desirable.
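
A small numerical sketch, using illustrative numbers rather than measured data, of how a uniform veiling-glare term compresses the range of the optical image on a sensor or on the retina:

    # Illustrative numbers only: uniform glare equal to 0.1% of the maximum
    # scene luminance compresses a 10,000:1 scene to roughly 900:1 in the
    # optical image.
    scene_max, scene_min = 10_000.0, 1.0      # scene range 10,000:1
    glare = 0.001 * scene_max                 # uniform veiling glare
    print(scene_max / scene_min)                         # 10000.0 : scene range
    print((scene_max + glare) / (scene_min + glare))     # ~910    : range on the sensor/retina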


The ease of using a digital camera to capture scene information must be balanced against the unwanted modifications of actual scene radiance data. This tutorial teaches the difference between a good-looking photograph and an accurate scene record. It addresses the characteristics of the many problems in camera acquisition and the best approaches to limit and/or compensate for their effects. It teaches accurate color-recording techniques. It also discusses how to render HDR information for human vision.


Intended audience:

This tutorial is for intermediate and advanced researchers who need accurate scene information from camera images. In particular, real natural scenes have nonuniform illumination that increases scene range beyond the dynamic range of cameras. A partial list of computer vision disciplines includes color, face, object, and motion recognition, and in general any feature recognition that benefits from accurate spatial scene information.


Syllabus

Digital Camera Images:

Captured Scene Information vs. Optical & Engineered Errors


1. Introduction to Digital Camera Scene Capture


2. History of silver halide scene capture

    Daguerre to Robinson and Abney

    Tone Scale design: 1929 - present

    Ansel Adams' Zone System

    Color negative/positive film


3. Digital cameras

    Morita's Mavica (1980s) to today

    Thousands to millions of pixels (the pixel wars)

    CCD, CMOS and CID sensors

    Firmware features


4. Research on Sensors and Displays

    Smart Sensors - 10^10 dynamic range

    HDR displays - LED plus LCD


5. Physical limits to captured information

    Resolution limits

    Veiling glare limits the optical image on the sensor

    Multiple exposures increase range, but not accuracy (see the sketch at the end of this section)

    Camera firmware performs image modification

    Nonlinear color processing (at a pixel) alters spatial relationships
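
A minimal sketch, assuming NumPy and a set of linear exposure-bracketed images scaled to [0, 1], of a conventional multiple-exposure merge (not the tutorial's method). It extends the recorded range, but the veiling glare recorded in every exposure is carried into the estimate, which is why range grows while accuracy does not.

    import numpy as np

    def merge_exposures(images, exposure_times):
        """Naive multi-exposure merge of linear images scaled to [0, 1]:
        average the exposure-normalized values, ignoring near-clipped pixels.
        Veiling glare recorded in each exposure remains in the result."""
        acc = np.zeros(images[0].shape, dtype=np.float64)
        weight = np.zeros_like(acc)
        for img, t in zip(images, exposure_times):
            usable = ((img > 0.02) & (img < 0.98)).astype(np.float64)
            acc += usable * (img / t)        # normalize by exposure time
            weight += usable
        return acc / np.maximum(weight, 1e-9)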


6. Understanding camera digital values

    Camera firmware and file formats

        Bit depth

        JPEG, RAW, HDR formats

        ICC color profiles

        XYZ, L*a*b*, CIECAM, sRGB, NTSC, Munsell color spaces (see the sketch at the end of this section)

    The modification of scene information by camera engineering
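
As a concrete instance of these color-space relationships, the sketch below decodes stored sRGB digits to linear values and then to CIE XYZ using the standard sRGB matrix; this is generic colorimetry, not the tutorial's calibration code, and the digits are only scene-referred if the camera pipeline made them so.

    import numpy as np

    def srgb_to_linear(digits_0_255):
        """Undo the standard sRGB encoding (IEC 61966-2-1)."""
        c = np.asarray(digits_0_255, dtype=np.float64) / 255.0
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

    # Standard linear-sRGB (D65) to CIE XYZ matrix.
    M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                              [0.2126, 0.7152, 0.0722],
                              [0.0193, 0.1192, 0.9505]])

    rgb_digits = np.array([200, 120, 40])            # one pixel's stored digits
    xyz = M_SRGB_TO_XYZ @ srgb_to_linear(rgb_digits)
    print(xyz)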


7. Calibration and programs that can convert camera digits into scene information

    Digits vs. scene radiance: Measurements

    Color correction of RAW files (sketch below)

    Lens-specific geometric corrections
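
A sketch of a typical starting point for such calibration, assuming the third-party rawpy package (a LibRaw wrapper): demosaic to linear 16-bit RGB without the camera's tone curve, then apply a color-correction matrix. The file name and the 3x3 matrix below are placeholders; a real matrix would be derived from measurements of a known target under the scene illuminant.

    import numpy as np
    import rawpy   # third-party LibRaw wrapper (assumed installed)

    with rawpy.imread("scene.CR2") as raw:          # placeholder file name
        # Demosaic to linear RGB: unit gamma, no auto-brightening,
        # camera white balance, 16 bits per sample.
        linear = raw.postprocess(gamma=(1, 1), no_auto_bright=True,
                                 use_camera_wb=True, output_bps=16)

    rgb = linear.astype(np.float64) / 65535.0       # normalize to [0, 1]

    # Placeholder color-correction matrix (rows sum to 1 to preserve white);
    # a real one comes from measurements of a color target.
    M = np.array([[ 1.6, -0.4, -0.2],
                  [-0.3,  1.5, -0.2],
                  [-0.1, -0.5,  1.6]])
    corrected = np.clip(rgb @ M.T, 0.0, 1.0)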


8. Physiological limits to visual appearance

    Resolution limits

    Veiling glare limits range on the retina

    Appearance is the result of two opposing spatial mechanisms

        a. Veiling glare reduces range on retina (glare spread function)

        b. Neural contrast: low-contrast retinal images appear to have more contrast


9. Spatial models of human Neural Contrast applied to Image Processing

    Pixel processing: LUTs and Histograms

    Local processing: Gaussian and Bimodal

    All pixels: Spatial-frequency algorithms

    All pixels mimicking vision: Retinex, Milano Retinex, analytic, and computational algorithms (sketch below)
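
A minimal single-scale, center/surround sketch in the spirit of the "Local processing: Gaussian" entry above, assuming NumPy and SciPy; it is illustrative only and is not the authors' Retinex or Milano Retinex implementation.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def single_scale_center_surround(luminance, sigma=30.0, eps=1e-6):
        """Spatial comparison in the log domain: each pixel is compared
        with a Gaussian-weighted surround (an illustrative stand-in for
        the spatial algorithms discussed in the tutorial)."""
        log_center = np.log(luminance + eps)
        log_surround = np.log(gaussian_filter(luminance, sigma) + eps)
        return log_center - log_surround

    # Usage sketch: rescale the spatial comparisons for an LDR display.
    # hdr_luminance = ...  # linear HDR luminance array
    # out = single_scale_center_surround(hdr_luminance)
    # ldr = (out - out.min()) / (out.max() - out.min())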


10. Review and Discussion

_______________


The course objectives are to:

  1. understand the digits in a stored image file

  2. understand the spatial processing implications of camera firmware

  3. understand the physical limits of captured image information

  4. learn techniques for accurate calibration of captured scene data

  5. learn how the image formed on a sensor depends on scene content

  6. adopt efficient records of scene content

  7. optimize scene rendition for human viewing

The course materials will be:

  1. a website with slides and PDFs

  2. handouts with web links

_______________


A central theme in the authors' research has been scene capture and how cameras modify scene information. Alessandro Rizzi and John McCann have collaborated on a book for the Society for Imaging Science and Technology / Wiley, The Art and Science of HDR Imaging. This interdisciplinary text describes the limits of capturing scene information in cameras and in humans. It emphasizes the physical limits of images on sensors and the spatial image processing that can optimize scene rendition for humans. The authors' experience in camera research and development, as well as in spatial image processing, spans from the 1960s to the present.


Instructors:

Alessandro Rizzi obtained his PhD in Information Engineering from the Università di Brescia in 1999 and is an Associate Professor in the Department of Information Science and Communication at the University of Milano. He teaches fundamentals of digital imaging, multimedia video, and human-computer interaction. Since 1990, he has studied digital imaging and human vision. His research focuses on issues regarding vision when combined with digital imaging. He has worked on computational models of color appearance for standard and high-dynamic-range images applied to image enhancement, movie and picture restoration, medical imaging, and computer vision. He is one of the founders of the Italian Color Group and a member of several program committees of conferences related to color and digital imaging. He serves as Co-Chair of the Color Conference at IS&T/SPIE Electronic Imaging, in which he introduced “The Dark Side of Color”. He recently joined the Board of the Society for Imaging Science and Technology (IS&T) as Vice President.


John McCann received a degree in Biology from Harvard College in 1964. He worked in, and managed, the Vision Research Laboratory at Polaroid from 1961 to 1996. He is a past president of the Society for Imaging Science and Technology (IS&T). He has studied human color vision, digital image processing, large-format instant photography, and the reproduction of fine art. His publications and patents cover Retinex theory, color constancy, color from rod/cone interactions at low light levels, appearance with scattered light, and HDR imaging. He is a Fellow of the Society for Imaging Science and Technology (IS&T) and of the Optical Society of America (OSA). He is the IS&T/OSA 2002 Edwin H. Land Medalist and an IS&T 2005 Honorary Member. He is currently consulting and continuing his research on color vision and HDR imaging.

 


in conjunction with the

17th International Conference on Image Analysis and Processing (ICIAP),

Naples, Italy,


www.iciap2013-naples.org