Spatial Mechanisms for High-Dynamic-Range Scenes

Listed below are a number of our papers on HDR.


They cover measurements of what cameras can capture,

and measurements of what humans can see.


There is also a mix of the history of imaging and the technologies of modern devices.


HDR in painting - Section 2.1, 07HDR1Hist.pdf
HDR in photography - Section 2.2, 07HDR1Hist.pdf
HDR in electronics - Sections 2.3 & 2.4, 07HDR1Hist.pdf
HDR in computer graphics - Section 3, 07HDR1Hist.pdf
Measure Limits in HDR - Section 2, 07HDR2Exp.pdf
Camera Limits in HDR - Section 3, 07HDR2Exp.pdf
Human Limits in HDR - Section 4, 07HDR2Exp.pdf
Human Limits in HDR: Usable Range of Light - 06CIC34.pdf, 07 CIC Rizzi.pdf, 07HDR2Exp.pdf
Retinal image - 08 AIC Rizzi019F.pdf, 09HDRSize.pdf, In Progress

HDR and Uniform Color Space - 1983 Stiehl et al.pdf, 08CGIV.pdf
HDR and Color - 09Parraman.pdf, McCann6p.pdf
HDR and Spatial Processing - 07EI 6493-01.pdf
Imaging’s Customers - 08202article.pdf

18,619 to 1 HDR Scene [2049 to 0.11 cd/m²]



Definition: High Dynamic Range (HDR) Imaging

A rendition of a scene with a greater dynamic range of light than is possible in its reproduction media.


This broad definition includes cameras, prints, displays, and human vision.



Challenge


How do you reproduce the above HDR scene?


When you look at this test target you see 40 distinct pie-shaped sectors, organized in four circles.


If you photograph it with your cell phone, you will lose information: some very bright sectors will merge, so that they reproduce as the same value, and some very dark sectors will merge together as well.  Why do we see HDR information that cameras cannot capture?


To reproduce the appearance of this test target you have to either paint it, manipulate one or more photographs, or image-process it.  In all three approaches you mimic the spatial image processing found in human vision.  Cameras count the photons falling on each pixel in their sensor arrays; the amount of light at each pixel determines the camera’s response.  Human vision works by comparing pixels and is largely indifferent to what happens at individual pixels.  To reproduce the above test target, one must adopt human vision’s strategy of using spatial mechanisms.  Spatial comparisons correlate with appearances, while camera pixel responses do not.
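
The contrast between these two strategies can be sketched in a few lines of code.  This is a minimal, invented illustration in the spirit of Retinex-style spatial comparisons, not the specific model in these papers: a camera records absolute luminances, while a spatial mechanism keeps the large ratios at edges and discards slow illumination gradients.

    import math

    # Invented example: two gray papers (reflectances 0.9 and 0.3) under a
    # smooth illumination gradient.
    reflect = [0.9, 0.9, 0.3, 0.3]
    illum = [100.0, 95.0, 90.0, 85.0]
    lum = [r * e for r, e in zip(reflect, illum)]
    print(lum)  # the camera's record: the same paper gives different values

    # Vision-like sketch: take log ratios between neighboring pixels, reset
    # the small (gradient-like) ratios to zero, and accumulate the rest.
    edges = [math.log10(lum[i + 1] / lum[i]) for i in range(len(lum) - 1)]
    edges = [e if abs(e) > 0.1 else 0.0 for e in edges]
    rel = [0.0]
    for e in edges:
        rel.append(rel[-1] + e)
    print([round(10 ** v, 2) for v in rel])  # about [1.0, 1.0, 0.32, 0.32]

The spatial comparison recovers the two papers as two stable values, close to their 0.9 to 0.3 reflectance ratio, while the camera’s absolute record gives four different numbers.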


History

Painters since the Renaissance have used techniques to portray the appearances of real-life scenes.  Paints have a dynamic range of less than 100 to 1.  Nevertheless, painters have learned to reproduce HDR scenes in LDR (Low Dynamic Range) media.  Painters created the HDR appearances by learning to paint spatial relationships.  The amount of light was not as important as the change in light from one area to another.


Photographers in the 1850’s had very limited materials, yet they invented HDR image capture and printing.  Here too, they learned to put HDR spatial information onto LDR prints.  The best modern example is Ansel Adams’s Zone System.



Early electronic image processing (1968 / analog) used spatial comparisons to compress scene range.  Digital image processing began in the mid-1970’s.



HDR was rediscovered and redirected toward accurate scene capture in the late-1990’s.  The computer graphics community became fascinated by the use of multiple exposures with different camera settings to extend the range of light captured by cameras.





Limits of HDR Scene Capture

Optical lenses limit how much scene information can be captured by cameras and by humans.  Ale and I made a test target using a lightbox and transparent filters.

See [18,619 to 1 HDR Test Target]



Using this target, we measured the range of light that digital and film cameras can accurately capture.

The results confirm much earlier work that showed lens glare limits dynamic range.  Lens glare depends on the physical properties of the lens and the spatial distribution of light in the scene.
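
A minimal sketch of that dependence, assuming a simplified uniform-veil model in which glare adds a fixed fraction g of the mean scene luminance everywhere (the papers use measured lens data, not this model):

    # Assumed model: the glare veil is proportional to mean scene luminance,
    # so bright content fills in the darkest areas and caps the range.
    def captured_range(white_fraction, g=0.02,
                       scene_max=18619.0, scene_min=1.0):
        mean_lum = white_fraction * scene_max + (1 - white_fraction) * scene_min
        veil = g * mean_lum
        return (scene_max + veil) / (scene_min + veil)

    for wf in (0.01, 0.1, 0.5):
        print(f"white fraction {wf:4.2f}: "
              f"captured range {captured_range(wf):7.0f} to 1")

As the white fraction grows, the veil grows, and the measurable range collapses from thousands to roughly 100 to 1, even though the scene itself never changes.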


Humans use a tissue lens that also exhibits glare.  The range of light we can see also depends on the content of the scene.





Two Counteracting Spatial Mechanisms

The range of light falling on the retina gets reduced by glare.  The more white areas in the image, the greater the glare, and the more scattered light falls on the darker areas of the retina.


The familiar example of simultaneous contrast demonstrates the opposite.  Grays surrounded by white appear darker even though they have more scattered light on that part of the retina.


The physiological spatial processing (interaction of nerves) acts in opposition to the physical scattering of light by the lens and eye.  The highest apparent contrast occurs in scenes with lower dynamic range on the retina.


Glare reduces the range of light on the retina and spatial image processing makes it appear to have higher contrast.



Appearance of an HDR Stimulus

We made a new set of test targets and asked observers to estimate the appearance of each sector on a scale between white [100] and black [1].


We changed the amount of scattered light in the image without changing the dynamic range. 


In other experiments, we changed the dynamic range without changing the scattered light.


These experiments measured the usable range of light that humans can see in complex images.  The results showed that the appearance (gray value) and the usable range of light depended on the scene content.

 


Retinal Image of an HDR Stimulus

Given the light coming from the scene, we can calculate the light falling on the retina.  We compared the range of appearances with the range of light on the retina and found good correlation.



We are currently working on a program, which we plan to distribute, that calculates the retinal image for any calibrated scene.
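
Until then, the idea can be sketched as follows, assuming a Stiles-Holladay-style scatter term that falls off as 1/theta^2 and a simple age multiplier; the constants here are placeholders, not the calibrated model:

    # Rough 1-D sketch: each point's retinal luminance is its direct light
    # plus light scattered from every other point (1/theta^2 falloff).
    # The scatter constant k and the age factor are assumed placeholders.
    def retinal_image(lum, pixel_deg=0.1, age=25, k=1e-4):
        age_factor = 1.0 + (age / 70.0) ** 4
        out = []
        for i in range(len(lum)):
            glare = sum(lum[j] * k * age_factor / ((abs(i - j) * pixel_deg) ** 2)
                        for j in range(len(lum)) if j != i)
            out.append(lum[i] + glare)
        return out

    scene = [2049.0] * 5 + [0.11] * 5        # bright half, dark half
    for age in (25, 75):
        r = retinal_image(scene, age=age)
        print(age, round(max(r) / min(r)))   # retinal range shrinks with age

Both ages show a retinal range far below the scene’s range, and the older eye’s range is smaller still.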



The image below studies 40 pie-shaped luminance targets in low-glare and in high-glare surrounds.


The top row shows the target as measured with a spot photometer, one pie-shaped sector at a time.  During these measurements all other sectors were covered by opaque paper.  The scale on the left shows the optical density assigned to each digital gray level (0 to 255).


The second row shows the calculated retinal luminances for age 25 for each pixel using the same scaling as in the top row.


The third row shows the results for age 75.  This representation reveals that there are changes, but it makes the changes in retinal luminance difficult to visualize.




The next image shows the same results displayed with pseudo-color lookup tables.


The plot of 0 to 255 digital values covering 0.0 to 4.27 Optical Density (OD) units is divided into 9 ranges with 9 colors.  The optical densities between 0.0 and 0.5 are white; those between 0.5 and 1.0 are pink; and so on until those above 4.0 are black.


It is much easier to read the OD range in pseudo-color.  The targets show that the stimulus luminances are the same in both the low- and the high-glare targets.
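
A minimal sketch of such a lookup table, assuming the linear 0-255 to 0.0-4.27 OD mapping described above; the text names only white, pink, and black, so the intermediate colors below are placeholders:

    # Digital value -> optical density -> one of nine colors in half-OD bins.
    def od_from_digit(d):
        return 4.27 * d / 255.0             # linear mapping assumed

    # Colors after pink are placeholders for the unnamed middle bins.
    BINS = ["white", "pink", "red", "orange", "yellow",
            "green", "cyan", "blue"]

    def pseudo_color(d):
        od = od_from_digit(d)
        if od >= 4.0:
            return "black"                  # everything above 4.0 OD
        return BINS[min(int(od / 0.5), 7)]

    print(pseudo_color(0), pseudo_color(128), pseudo_color(255))
    # white yellow black  (128 maps to OD 2.14, the 2.0-2.5 bin)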


The target introduces much bigger changes in the retinal image than the effect of age does.


The original target covers a range of 4.27 OD units in the pie-shaped sectors.


The low-glare retinal image has a range of 4 OD at age 25, and 3.5 OD at age 75.  Glare has little effect on the brighter sectors.  Only the darkest sectors show large changes.


The high-glare retinal image has a range of 2 OD at both ages 25 and 75.  Glare has a substantial effect on the two darkest circular targets.




HDR and Uniform Color Spaces

When we evaluate image quality, we should do it in a uniform color space.  We often look for a single number to describe the quality of an entire image.  That number usually represents some statistical average of distances in color space.  Any average of distances in hue, chroma, and lightness must be computed in an isotropic space, in which a given distance in hue corresponds to the same change in magnitude of appearance as the same distance in chroma or in lightness.  Unless the color space is uniform, even averaging two values is invalid.
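
As a concrete sketch, CIELAB can stand in as an approximately uniform space in which Euclidean distances (Delta E) are comparable magnitudes and so can be averaged; the sample colors below are invented:

    import math

    # CIE76 color difference: Euclidean distance in (L*, a*, b*).
    # Averaging such distances is meaningful only because the space is
    # (approximately) uniform in lightness, chroma, and hue.
    def delta_e(c1, c2):
        return math.dist(c1, c2)

    pairs = [((52.0, 10.0, -6.0), (50.0, 12.0, -6.0)),
             ((81.0, -3.0, 40.0), (79.0, -3.0, 37.0))]
    errors = [delta_e(a, b) for a, b in pairs]
    print(sum(errors) / len(errors))        # one number for overall error

In a non-uniform space the same arithmetic would mix unlike magnitudes, and the average would not correspond to any consistent change in appearance.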


Does HDR scene capture and rendition affect uniform color spaces?  Since the Munsell Uniform Space was measured using papers, do we need new uniform color spaces for HDR displays?


We have studied this problem and found that the Munsell system is quite robust.



HDR and Color

Together with Carinna Parraman, we have been studying two identical scenes under different illuminations (LDR/HDR).



Digital Spatial Processing

It is the study of the underlying mechanism that explains why humans see all the information in HDR scenes with LDR physiology.




Cameras, Media and Commerce

Two different evolutions affect imaging.


One is technical; the other is commercial.


Imaging’s recent changes have been dramatic, but in fact there have been many such changes in the past, and there will be more in the future.
