Jancke D (2021).
Darks and lights, the ‘yin–yang’ of vision depends on luminance
Trends in Neurosciences 44: 339–341. (Open Access) doi: https://doi.org/10.1016/j.tins.2021.02.007
How can neuronal processing sample visual contrast in natural scenery better than a high-tech camera? This Spotlight reviews a recent study by Rahimi-Nasrabadi et al., showing that our ability to perceive image contrast depends on the overall luminance range.
Image caption: Efficient neuronal encoding of luminance contrast under widely differing conditions of reflectance. The upper photograph shows the original image as captured by a camera. The bottom photograph shows the same image after applying the 'ONOFF' algorithm developed by Rahimi-Nasrabadi et al.
Note that the algorithm, which simulates human perception, slightly blurs contrast within bright regions (compare the left patches in the upper and lower photographs). Conversely, items within darker regions appear in much greater detail than in the original picture (cf. the patches at right).
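The asymmetry described in the caption, compressed contrast on the bright side and amplified contrast on the dark side of the mean luminance, can be illustrated with a toy ON/OFF split. The sketch below is NOT the algorithm of Rahimi-Nasrabadi et al.; it is a minimal illustrative model in which the image is divided into an ON channel (pixels brighter than the mean) and an OFF channel (pixels darker than the mean), and the two channels are given different gains. The function name and gain values are invented for illustration.

```python
import numpy as np

def onoff_transform(image, on_gain=0.5, off_gain=2.0):
    """Toy luminance-dependent contrast transform (illustrative only,
    not the published 'ONOFF' algorithm).

    Splits the image (values in [0, 1]) around its mean luminance:
    the ON channel (brighter than mean) is compressed (on_gain < 1),
    the OFF channel (darker than mean) is amplified (off_gain > 1),
    mimicking the qualitative asymmetry described in the caption.
    """
    mean = image.mean()
    on = np.clip(image - mean, 0.0, None)   # brighter-than-mean part
    off = np.clip(mean - image, 0.0, None)  # darker-than-mean part
    out = mean + on_gain * on - off_gain * off
    return np.clip(out, 0.0, 1.0)

# A simple linear gradient: dark steps are spread apart (more detail),
# bright steps are pulled together (less detail).
img = np.linspace(0.0, 1.0, 5)
print(onoff_transform(img))
```

With these hypothetical gains, differences between dark pixels grow while differences between bright pixels shrink, qualitatively matching the perceptual effect the caption describes.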