L3 Patch Luminance

From VISTA LAB WIKI


UPDATE TO THE L^3 ALGORITHM: THE PATCH LUMINANCE APPROACH

INTRODUCTION

In the previous method (henceforth referred to as the global luminance method), the image processing pipeline worked as follows:

Training starts with a set of images taken over a wide range of luminance values. Each image is grouped according to its mean luminance level, which is calculated as a weighted average of all the pixels in the image. Then a random sample of 10000 training patches is extracted and used to train the optimal Wiener filter for each luminance level. For a new test CFA image, its mean luminance level is calculated as before and the trained filter corresponding to that level is used to transform the image to XYZ space.
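The two global steps at test time - computing the image's mean luminance and selecting the nearest trained filter - can be sketched as below. This is a minimal Python/NumPy sketch: the function names, the uniform pixel weighting, and the nearest-level selection rule are illustrative assumptions, not the actual implementation.

```python
import numpy as np

def global_luminance_level(cfa, weights=None):
    # Mean luminance of the full CFA image as a weighted pixel average.
    # Uniform weights are assumed here; the actual weights depend on
    # the relative channel responsivities.
    if weights is None:
        weights = np.ones_like(cfa)
    return float(np.sum(weights * cfa) / np.sum(weights))

def select_filter(level, trained_levels, filters):
    # Pick the filter trained at the luminance level closest to `level`.
    idx = int(np.argmin(np.abs(np.asarray(trained_levels) - level)))
    return filters[idx]
```

With this structure, one filter is chosen per image, which is exactly the limitation the patch luminance method addresses below.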

Pipeline.jpg

This approach had a few shortcomings. Firstly, the method performed inadequately on images where any of the channels was saturated because the filters weren't designed to take this possibility into account. Secondly, the method didn't account for the fact that within an image, it is quite common to find regions with varying luminance levels. This fact is demonstrated in the figure below and the description that follows:

LuminanceVarianceAcrossImage.jpg

The figure above shows 4 raw CFA images, where the value at each pixel is the voltage level recorded at the CFA; this voltage ranges between 0 and 1.8V. The 4 images were taken at luminance levels of 10, 100, 1000 and 10000 cd/m^2. In the first two images, i.e. at 10 and 100 cd/m^2, there is low variance in the luminance across the image, so in these cases the global luminance approach described above would yield decent results. However, in the next two images, at 1000 and 10000 cd/m^2, there is a lot of visible variation in the luminance levels. In these cases, using a single trained filter for the full image would yield a lower quality result.

Thus, the new approach (henceforth referred to as the patch luminance method), was designed to account for these two shortcomings in the global luminance method. This approach is described below:

THE PATCH LUMINANCE APPROACH

This approach differs from the global luminance approach in one significant aspect: luminance is considered at the patch level rather than the global image level, where a patch consists of a 9x9 block of pixels. As before, the training set consists of a set of images taken over a wide range of luminance levels. However, instead of the full images, the training patches are now grouped according to their mean luminance level. It was found experimentally that 40 training groups that linearly spanned the range 0-1.8V produced optimal results. For each luminance level group, the patches belonging to that group are used to train the optimal Wiener filter. For a new test image, the luminance level of each of its patches is found and the filter corresponding to the closest luminance level is used to transform the image to XYZ space.
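The training and per-patch application can be sketched as follows. This is a simplified illustration: the plain least-squares solve stands in for the full Wiener filter training (which also accounts for a noise term), and a single set of filters is used regardless of the center pixel's color type, both of which are assumptions made here for brevity.

```python
import numpy as np

def solve_wiener(patches, targets):
    # Least-squares filter mapping flattened 9x9 patches to the desired
    # value at the patch center. Regularization and the noise term of
    # the full L^3 formulation are omitted in this sketch.
    A = patches.reshape(len(patches), -1)            # (n, 81)
    w, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return w.reshape(9, 9)

def transform(cfa, levels, filters):
    # Per-patch transform: at each interior pixel, take the 9x9 patch,
    # find the nearest trained luminance level, and apply that filter.
    out = np.zeros_like(cfa)
    for r in range(4, cfa.shape[0] - 4):
        for c in range(4, cfa.shape[1] - 4):
            p = cfa[r - 4:r + 5, c - 4:c + 5]
            lvl = int(np.argmin(np.abs(levels - p.mean())))
            out[r, c] = np.sum(filters[lvl] * p)
    return out
```

Note that, unlike the global method, different pixels of the same image can be processed by different filters, which is what handles the within-image luminance variation shown above.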

The patch luminance level is calculated as a weighted average of the pixels in the patch. The weights vary slightly depending on the relative numbers of red, green, blue and white channel pixels in the patch but it is more or less a simple average of the luminance at each pixel as shown below.

CalculationOfPatchLuminance.jpg
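The weighted average above can be sketched as below. The string-coded CFA pattern and the per-channel weight values are illustrative assumptions; as noted in the text, with all channel weights equal this reduces to a simple mean over the patch.

```python
import numpy as np

def patch_luminance(patch, cfa_pattern, channel_weights):
    # Weighted average of the 9x9 patch voltages, where each pixel's
    # weight is looked up from its color channel ('R','G','B','W').
    w = np.vectorize(channel_weights.get)(cfa_pattern)
    return float(np.sum(w * patch) / np.sum(w))
```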


DISCUSSION OF THE FILTERS YIELDED BY THE PATCH LUMINANCE APPROACH

To better understand the filters, it would be useful to first understand how the percentage of training patches that saturate changes as the luminance level changes. This is depicted in the figure below, with a curve for each of the four patch types (a patch type is characterized by the color channel at the center of the patch, so there are red, blue, green and white patch types).

ChangeInPercentSatVSLumLevel.jpg

Based on these curves, it becomes easier to interpret what is happening with the filters as shown below. All the filters are for estimating the color X at the center of the patch.

Filter1.jpg

In this image, the filter shown is for really low light levels (patch luminance voltage = 0.01V). Thus the estimated value at the center of the patch based on this filter is fairly close to an average taken over the patch. The weights are fairly evenly distributed over the entire patch, with some additional weight placed on the red pixels and the white pixels near the center. This is intuitively what we would expect when estimating the X color value in dark conditions.

Filter2.jpg

In this image, the filter shown is for moderate light levels where none of the channels have begun to saturate (patch luminance voltage = 0.147V). In this case, the weights are still spread somewhat beyond the center, but there is a large weight placed at the center pixel, an order of magnitude larger than all the other weights in the filter. There is still some usage of the white pixels near the center.

Filter3.jpg

In this image, the filter shown is for the case where some of the white patches have begun to saturate (patch luminance voltage = 0.42V). Now it can be seen that the weight on the center pixel is even larger than before and the fall off to the adjacent weights is much steeper. The usage of the white channel has dropped close to 0.

Filter4.jpg

In this image, the filter shown is for the case where all the white patches have saturated and the green channel has started to saturate (patch luminance voltage = 1.02V). In this case, all the weight of the patch is on the center pixel. The magnitude of the weight is higher than in the previous cases by almost a factor of 2. The rest of the pixels have weights near 0. This too makes intuitive sense: since the luminance is so high and the white channel is completely saturated, there is no information in that channel, so most of the usable information for estimating the value at the center comes from the center pixel itself.


THE RESULTING IMAGES

To understand the results better, it would be useful to first understand how the percentage of saturated pixels in an image changes as luminance changes in terms of cd/m^2. The figure below illustrates the point: it shows the percentage of pixels from each channel that saturate as luminance changes.

PercentSatPixelsVSLumLevel.jpg

Based on these curves, the resulting images are as shown below. They show the desired image on the left, the one produced by the patch luminance method in the middle and the corresponding one produced by the global luminance method on the right for different light levels.

Result1.jpg

At this luminance level (10cd/m^2) conditions are fairly dark. Recall that at such light levels there wasn't much luminance variation across the image, so it is expected that the global and patch luminance results are very similar - and this is precisely what is seen. Both results are slightly grainy, but that is expected because of the low light conditions.

Result2.jpg

At this luminance level (31.6cd/m^2) the resulting images are of fairly good quality. The patch luminance result is slightly sharper than the global.

Result3.jpg

At this luminance level (316.2cd/m^2) the patch luminance result is good. The global luminance result is still slightly blurred and some of the color tiles in the background have odd colors. This is the point where the white channel begins to saturate, so it illustrates how the global luminance method was not suited to handle saturation.

Result4.jpg

At this luminance level (1000cd/m^2) the patch luminance result is still good. The global luminance result is quite poor. This is where about half the white pixels are saturated and the green channel is starting to saturate.

Result5.jpg

At this luminance level (3162cd/m^2) the patch luminance result starts to deteriorate. The overall image matches the desired one in terms of most spatial characteristics. However those parts of the image that are bright or have some significant white component to them have their colors poorly estimated. This is expected to some extent because the white channel is completely saturated and some significant percentage of the green pixels are saturated. The global luminance result is very poor. It is completely unable to handle the saturation of the white and green channels.

Result6.jpg

At this luminance level (10000 cd/m^2) half of all the color channels have saturated. In this extreme case the patch luminance result is still fair. The color estimation in the image is poor but the spatial characteristics are still discernible. The global luminance result is very poor.

These results are reflected in the comparison of the scielab values for the different methods as shown below. Along with a comparison of these two methods there is also a comparison of the same methods where the white channel is not used at all. This experiment was run to gauge the effect of using the white channel in each of these methods.

GraphOfScielabValues.jpg

The figure shows an interesting result. The global luminance method without white does better than with white after the point where the white channel starts to saturate. The patch luminance method without white does worse than with white even after the channel saturates, which indicates that some information in this channel is still being used even after saturation. The results for each method start getting worse roughly from the point where the green channel starts to saturate. Another interesting result is that at low luminance levels, all the methods perform more or less the same.


DIFFERENT FACTORS TESTED FOR THE PATCH LUMINANCE APPROACH

As part of the effort to make the patch luminance approach robust, various parameters were experimented with to determine their optimal combination for the method. These are described below:

1) The Function Used To Span The Luminance Range

As part of the training process for the patch luminance approach, different options were considered for how to quantize the voltage space between 0 and 1.8V for the training patches. Two levels of analysis were performed. The first was an experiment on the type of function used to span the space: a linear division versus a sigmoid function. The linear division had been used in the past, but it was hypothesized that the sigmoid function would provide more coverage of the low and high luminance regions and hence might improve the performance of the filters at those levels.

The second was an experiment on the number of quantization levels between 0 and 1.8V for which to train the Wiener filters. 10, 20, 30, 40 and 50 quantization levels were experimented with and the resulting scielab curves are displayed below.

ScielabsLinearLevels.jpg

ScielabsLinearSigmoidComparison.jpg

From the figures, it can be seen that linear quantization performs better than sigmoid quantization. Also, performance improves as the number of quantization levels increases, up to 40 levels. Since 40 and 50 levels give nearly the same results, 40 quantization levels were adopted as optimal from a computational standpoint.
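The two spacing schemes can be sketched as follows. The exact sigmoid parameterization used in the experiment is not specified above, so the logistic form and its steepness `k` here are assumptions; the key property is that the sigmoid spacing places levels more densely at the low and high ends of the voltage range.

```python
import numpy as np

def linear_levels(n=40, v_max=1.8):
    # Linear division of the 0-1.8V range into n quantization levels.
    return np.linspace(0.0, v_max, n)

def sigmoid_levels(n=40, v_max=1.8, k=6.0):
    # A logistic curve sampled on a uniform grid: because the curve
    # flattens at its ends, the resulting levels cluster near 0V and
    # near v_max, giving denser coverage of the extremes.
    x = np.linspace(-k, k, n)
    s = 1.0 / (1.0 + np.exp(-x))
    s = (s - s[0]) / (s[-1] - s[0])  # rescale exactly to [0, 1]
    return v_max * s
```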

2) The Filter Type

As part of the training process, the training patches were divided into flat and texture patches based, roughly speaking, on the variance of the luminance across a patch. Flat patches correspond to features in an image like walls, where there is little or no variance in the luminance of the patch. Texture patches correspond to edges in an image. To calculate the luminance of the training patches for this purpose, different filter types were experimented with, namely Wiener filters, Gaussian filters and Average filters.

The Wiener filters for this purpose were a 9x9 matrix derived from the patches and the known luminance values of their center pixels, for each color channel. The Gaussian filters were a 9x9 matrix consisting of a Gaussian function centered at the center pixel of the patch, with a standard deviation of 1, for each color channel. The Average filters were a 9x9 matrix consisting of equal weights over the pixels in the patch that share the center pixel's color channel, for each color channel.
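The Gaussian and Average filter constructions, and the variance-based flat/texture split, can be sketched as below. The variance threshold is a free parameter not specified above, and the boolean channel-mask encoding is an assumption made for illustration.

```python
import numpy as np

def gaussian_filter_9x9(sigma=1.0):
    # 9x9 Gaussian weights centered on the patch center, normalized
    # to sum to 1.
    ax = np.arange(9) - 4
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

def average_filter_9x9(same_channel_mask):
    # Equal weights on the pixels that share the center pixel's color
    # channel; zero elsewhere.
    w = same_channel_mask.astype(float)
    return w / w.sum()

def is_flat(patch, threshold):
    # Flat vs texture split on the luminance variance across the patch.
    return float(np.var(patch)) < threshold
```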

The above set of experiments was repeated with 10000, 50000 and 100000 training patches to get a complete sense of the results. The resulting scielab curves from the tests with 10000 training patches are displayed in the figure below.

ScielabsComparisonDifferentFilters.jpg

From these curves it can be seen that there is a negligible difference in the scielab values across the range of luminance levels, especially when compared to the cost of computation for each of the filter types. Though the Wiener filters result in a slight improvement at the higher end of the luminance levels, it was noted that this luminance region is, in all likelihood, beyond even extreme operating conditions and so this region wasn't considered as much in making this decision. The Average filter is the simplest, followed by the Gaussian filter and then the Wiener filter. Thus it was decided to use Average filters as an optimal choice given these results.

3) The Number of Training Samples

As mentioned previously, 10000 patches were found to be optimal in the training stage of each method, both in terms of the resulting scielab curves and in terms of the computation effort required to derive the Wiener filters. This number was derived from experiments with 10000, 50000 and 100000 training patches. A sample of the resulting scielab curves is shown below:

ScielabsComparisonDiffNumTrainPathes.jpg

From these curves, it can be seen that there is a negligible difference in the scielab values across the range of luminance levels, especially when compared to the computational cost added by the additional training patches. The curves only show some difference at a very high level of zoom. Based on this result, it was concluded that 10000 training patches was an adequate number.

4) The Oversample Factor

As part of the training process, the Wiener filters are trained on a noisy version of the patches, i.e. in the initial formulation of the L^3 pipeline, the Wiener filters were designed to account for the probability distribution of the noise that is encountered in the CFA. It was hypothesized that the filters might be better trained on instances of the noise from the probability distribution rather than the distribution itself. Thus experiments were run to compare the resulting scielab curves from four experiments:

a) Training on the patches and the noise distribution.

b) Training on 10000 patches corrupted by 1 instance of the noise distribution.

c) Training on 10000 patches corrupted by 5 instances of the noise distribution.

d) Training on 10000 patches corrupted by 10 instances of the noise distribution.
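Generating the noisy training instances for cases b) through d) can be sketched as follows. Additive Gaussian noise is used here as a stand-in, since the actual CFA noise model is not specified above.

```python
import numpy as np

def noisy_instances(patches, n_instances, noise_sd, seed=None):
    # Replicate each training patch n_instances times and corrupt each
    # copy with an independent draw from the noise distribution, so
    # n_instances=5 turns 10000 patches into 50000 training samples.
    rng = np.random.default_rng(seed)
    reps = np.repeat(patches, n_instances, axis=0)
    return reps + rng.normal(0.0, noise_sd, size=reps.shape)
```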

The above set of experiments was repeated with 50000 and 100000 training patches to get a complete sense of the results. The resulting scielab curves from the tests with 10000 training patches are displayed in the figure below.

ScielabsComparisonDiffSamplingTypes.jpg

From these curves, it can be seen that there is a negligible difference in the scielab values across the range of luminance levels, especially when compared to the cost of computation that is added because of the process involved. That is, essentially for tests a) and b) mentioned above, 10000 training patches are involved, for test c) 50000 training patches are involved because of 5 instances of the noise distribution, and for test d) 100000 training patches are involved because of the 10 instances of the noise distribution. Thus it was concluded that training on the noise distribution itself produced sufficiently good results.


5) Using The White Channel Or Not

This experiment has been described previously in the section dealing with the resulting images from the global and patch luminance approaches.
