Human response to light: 
intraocular glare and post-receptor neural processing 

Introduction
One of the first topics in the foundation of psychophysics in the 1860s was the measurement of the Human Response Function (HRF) to light. Fechner and Weber initiated the field by measuring the amounts of light that caused different visual sensations. Around the same time, Maxwell measured color matches and established Colorimetry. The idea that receptor response causes appearance is a cornerstone of human vision, at least in a broad, general sense.
Further, a second idea is broadly held, namely that photography reproduces the light from the original scene on the viewer's retina. Research in vision and photography over the past 150 years has refined our understanding of these generalizations. Studies of both the adaptation of photoreceptor sensitivity [1] and the important role of spatial neural interactions [2] have shown that the quanta catch of a single photoreceptor does not generate a unique sensation. Likewise, a fixed radiance does not uniquely reproduce a sensation.
An important goal of this Frontiers Topic is to characterize the Human Response Function (HRF) to HDR stimuli. Traditionally, we measure the HRF with spots of light in an otherwise light-free room. Those are the stimuli used in Colorimetry and in increment-threshold experiments. Scene content in natural images is usually treated as an uncontrolled stimulus. In psychophysics we attempt to measure the pure response to light. However, HRF spots of light have no light in the rest of the field of view, so there is no veiling glare in HRF measurements. When we apply such functions to real-life scenes, we make the hidden assumption that there is no glare in our retinal and photographic images, and that optical glare does not influence vision in real-world scenes.
------
[1] Dowling, J. E., The Retina, Harvard Univ. Press, Cambridge (1987).
[2] McCann J. and A. Rizzi, The Art and Science of HDR Imaging, Wiley, Chichester (2012). ISBN: 978-0-470-66622-7

Glare Influences the HRF
In order to study the Human Response Function for HDR scenes, we need to isolate the different roles of intraocular glare and post-receptor neural processing. CIELAB and many other lightness models [3] use the cube root of luminance to calculate the sensation lightness.
Calculated Retinal Image
Stiehl et al. [4] made an HDR display with equal increments in apparent lightness. The display's luminances fit CIELAB's cube-root function. Stiehl calculated the retinal image using the Vos et al. [5] Glare Spread Function (GSF). This showed that lightness sensations were proportional to log retinal luminance: the cube-root function associated with the human response to light is the direct result of intraocular glare.
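Stiehl's calculation can be sketched as a convolution of the scene luminances with a GSF. The Python sketch below uses a simplified power-law kernel as a stand-in for the actual Vos et al. parameters, which are given in [5]; it only illustrates how a bright surround scatters light into a dark test square.

```python
import numpy as np

# Illustrative 1-D sketch: the retinal image is the scene luminance
# convolved with a glare spread function (GSF).  This power-law kernel
# is a stand-in, NOT the actual Vos et al. [5] function.
def glare_kernel(half_width, falloff=2.0, eps=1.0):
    x = np.arange(-half_width, half_width + 1, dtype=float)
    k = 1.0 / (np.abs(x) + eps) ** falloff  # roughly 1/angle^2 falloff
    return k / k.sum()                      # glare only redistributes light

def retinal_image(scene, half_width=50):
    k = glare_kernel(half_width)
    padded = np.pad(scene, half_width, mode="edge")
    return np.convolve(padded, k, mode="valid")

# A dark square (1 unit) on a bright white surround (100 units):
scene = np.full(201, 100.0)
scene[90:111] = 1.0
retina = retinal_image(scene)
# The surround scatters light into the square: its retinal luminance is
# raised well above the scene value of 1, lowering retinal contrast.
print(scene[100], retina[100])
```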
Response Functions using Calculated Retinal Images
Rizzi et al. [6,7,8] made a series of HDR displays with more than 5 log units of dynamic range and different spatial content. These targets contained 20 pairs of test squares with different luminances. One target had a uniform white background for the 20 pairs of squares (maximum glare); another used a black background (minimum glare; 5.6 log units darker than white). The third background was half-white and half-black, made up of different-size squares in a pseudo-random arrangement (Fig. 1).
Observers measured the apparent lightness of each of the 40 squares in each of the backgrounds using magnitude estimation. The apparent lightness of a fixed scene luminance changed with each background. For example, a square with a luminance equal to 10% of the White's luminance had an apparent lightness (scaled 100 to 1) of 35 in the White background and 60 in the Black background.

Fig. 1 shows three HDR test targets with different backgrounds that make up 88% of the area of each target. Each target contains 20 pairs of different test squares. Observers measured the apparent lightness of each area. The White background has a dynamic range of 200,000:1 (5.3 log units); the half-White background a range of 250,000:1 (5.4 log units); and the Black background a range of 650,000:1 (5.8 log units).
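The log-unit values in the caption follow directly from the luminance ratios; as a quick check:

```python
import math

# Dynamic range of each background, converted to log10 units:
for name, ratio in [("White", 200_000), ("half-White", 250_000), ("Black", 650_000)]:
    print(f"{name}: {math.log10(ratio):.1f} log units")
# White: 5.3, half-White: 5.4, Black: 5.8
```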

Using Vos and van den Berg's [9] newer GSF, Rizzi and McCann [8] calculated the retinal image for these three targets. As the illustration in Fig. 1 shows, glare is a spatial transformation of the scene. A similar comparison of observed lightnesses from identical retinal luminances shows a much greater influence of the background. The square with 10% of the White's retinal luminance has a lightness estimate of 10 in the White background and 60 in the Black background. In order to make an accurate model of the Human Response Function, we must include both receptor response and neural processing.
-----
[3] Wyszecki G. & Stiles W. S. (1982) Colour Science: Concepts and Methods, Quantitative Data and Formulae, 2nd ed., John Wiley & Sons, New York, 486-513.
[4] Stiehl W., McCann J. & Savoy R. (1983) Influence of Intraocular Scattered Light on Lightness-scaling Experiments, J Opt Soc Am, 73, 1143-48.
[5] J. J. Vos, J. Walraven, and A. van Meeteren, "Light profiles of the foveal image of a point source," Vision Res. 16, 215-219, (1976).
[6] A. Rizzi, M. Pezzetti, and J. McCann, “Glare-limited Appearances in HDR Images”, IS&T/SID Color Imaging Conference, 15, 293-298, (2007).
<http://www.mccannimaging.com/Lightness/HDR_Papers_files/07%20CIC%20Rizzi.pdf>
[7] A. Rizzi, M. Pezzetti, and J. McCann, “Separating the Effects of Glare from Simultaneous Contrast”, SPIE Proc. 6492, 68060-69, (2008).
[8]  A. Rizzi, and  J. McCann.  “Glare-limited Appearances in HDR Images”, J Soc Img Display, 17, 3 (2009).
<http://www.mccannimaging.com/Lightness/HDR_Papers_files/07HDR2Exp_3.pdf>
[9]  J. Vos, and T. van den Berg,  “CIE Research note 135/1: Disability Glare”, CIE, ISBN 3 900 734 97 6  (1999).  
The Retinal Response to Light
We analyzed the apparent lightnesses from Rizzi et al. [8] to fit an HRF for HDR scenes. Fig. 2 plots apparent lightness against log retinal luminance for the three targets shown in Fig. 1.
Fig. 2 plots Lightness sensations as a function of log retinal luminance.

We do not find a single HRF for the observers' response to retinal luminance. Instead, we find three distinct responses, one for each background with different spatial content. All three are linear functions of log luminance. It is well established that retinal receptors' neural response is proportional to log luminance [10]. The analysis of these HDR targets shows that lightness is always linearly proportional to receptor response. However, Fig. 2 shows that the content of the scene on the retina initiates very different amplification slopes of the receptor response to quanta catch.
Fig. 2 plots the fits by three linear functions of log luminance. The values of slope (m), intercept (b), and correlation are listed in Table 1.
Table 1 lists the slopes, intercepts, and correlation coefficients of the different linear HRFs for each background.

Apparent lightness is a linear function of retinal log luminance, with a variable slope determined by the scene content. A White background causes the highest glare, and therefore has the lowest-contrast retinal image. Nevertheless, it has the highest apparent contrast: the slope of that human response function is 56. The Black background has the least glare, yet its human response function has the lowest slope, 26. Here it takes a 3.5 log unit decrease in retinal luminance to go from the sensation white to the sensation black. In comparison, that same change in sensation happens in 1.6 log units for scenes with maximal intraocular glare. The half-White and half-Black background has an intermediate slope of 47; the change from white to black sensations occurs over 2.0 log units.
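The fitting procedure behind Table 1 is an ordinary least-squares fit of lightness against log retinal luminance. The sketch below uses synthetic data generated from an assumed slope and intercept (not the measurements in [8]), purely to illustrate the fit:

```python
import numpy as np

# Synthetic illustration of the per-background fit L = m*log10(R) + b.
# true_m and true_b are assumed values for this demonstration only.
rng = np.random.default_rng(0)
log_R = np.linspace(-3.0, 0.0, 20)       # log retinal luminance
true_m, true_b = 26.0, 95.0
L = true_m * log_R + true_b + rng.normal(0.0, 1.0, log_R.size)

m, b = np.polyfit(log_R, L, 1)           # least-squares linear fit
r = np.corrcoef(log_R, L)[0, 1]          # correlation coefficient
print(f"slope m = {m:.1f}, intercept b = {b:.1f}, r = {r:.3f}")
```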
-----
[10] F. Werblin and J. Dowling “Organization of the Retina of the Mudpuppy, Necturus maculosus. II. Intracellular Recording”, J. Neurophysiol. 32, 339-55, (1969).

Human Response Model
We can model the different Human Response Functions with a very simple equation:
                                            L = (26.7 + s) log R + 93

where L is apparent lightness, R is retinal luminance, and s is an additive factor determined by scene content. In the three HDR scenes studied here, s = 0 for the Black surround; s = 20 for the half-White/half-Black background; and s = 30 for the 100% White background. A small scene-dependent signal that adds to the slope amplifying log retinal luminance can model lightness in HDR, and in scenes in the real visual environment.
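A minimal sketch of this model in Python. The base slope is written here as 26.7, the value consistent with the fitted Black-background slope of about 26 at s = 0:

```python
import math

def lightness(R, s):
    """Apparent lightness L = (26.7 + s) * log10(R) + 93, for retinal
    luminance R relative to the maximum (0 < R <= 1) and scene factor s."""
    return (26.7 + s) * math.log10(R) + 93.0

# Scene-content factors reported for the three HDR backgrounds:
S = {"Black": 0, "half-White/half-Black": 20, "White": 30}
for name, s in S.items():
    # One log unit below the maximum retinal luminance:
    print(f"{name}: L(0.1) = {lightness(0.1, s):.1f}")
# The same retinal luminance appears darkest against the high-glare
# White surround, where the slope of the response is steepest.
```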
The implication of this equation is that post-receptor visual processing is scene dependent. There is no single Human Response Function for all receptor quanta catches. The data require a dramatic change in the slope of the HRF with changes in scene content at constant dynamic range. The remaining problem is to define a model for calculating the parameter s from the spatial array of retinal radiances.
Spatial vs. Pixel-based Algorithms
Many models of human vision are transforms of single-pixel scene radiances. The simplest model of the human response is a function that converts luminance to lightness for each individual pixel; CIE L* is the most popular example. Such a model does not accept data from more than one pixel. Nevertheless, there are innumerable experiments, dating back to the early days of psychophysics, that require spatial processing in models of human vision.
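As a concrete instance, CIE L* can be computed pixel-by-pixel from relative luminance alone; the sketch below implements the standard piecewise cube-root formula, and its inability to see neighbouring pixels is exactly the limitation at issue:

```python
import numpy as np

# Pixel-wise CIE L*: each pixel's lightness depends only on that pixel's
# relative luminance t = Y/Yn, never on its neighbours.
def cie_lstar(Y, Yn=100.0):
    t = np.asarray(Y, dtype=float) / Yn
    # Standard piecewise definition: cube root above (6/29)^3,
    # linear segment (slope 24389/27, about 903.3) below it.
    return np.where(t > (6.0 / 29.0) ** 3,
                    116.0 * np.cbrt(t) - 16.0,
                    (24389.0 / 27.0) * t)

print(cie_lstar(np.array([100.0, 18.0, 1.0])))
# approximately [100.0, 49.5, 9.0]
```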
Apparent lightness has a complex relationship to the spatial content of the image. Image statistics such as averages, local averages, histograms, and local histograms are not important in human vision. Vision responds to adjacent areas, local maxima, enclosure, and separation from maxima [2, chapters 20-25; 11]. There is a rich collection of experimental data that needs to be reevaluated using retinal, rather than scene, luminances.
Summary
Models of the human visual system's (HVS) response to HDR scenes have to go beyond simple, single-pixel responses to light. Vision has two powerful spatial processes that transform scene radiances (Fig. 3). The first transform is the degradation of the optical image by glare; the second is the enhancement by post-receptor neural mechanisms [11]. A comprehensive model of vision requires both elements. The problem in calculating appearance is that these two strong mechanisms almost cancel each other. This has the advantage that we do not notice glare in everyday life, but the disadvantage that it makes the powerful influence of neural spatial processing less apparent.
Fig. 3 Two powerful spatial processes that tend to cancel each other.

---------
[11]  J.J. McCann, (2016) “Retinex Algorithms: Many spatial processes used to solve many different problems”, In  Retinex at 50, Proc Electronic Imaging, pp. 1-10(10).
<http://mccannimaging.com/Retinex/Talks_files/McCann%20Proc%20RET50.pdf>