Demosaicking and Denoising




Figure Captions

Figure 1. Demosaicking and denoising in the imaging pipeline

Figure 2.

Introduction

Color imaging sensors used in digital cameras acquire three spatially subsampled color channels with a color filter array (CFA) mosaic. The acquired image is then processed through an imaging pipeline of demosaicking, denoising and color correction. The algorithms used in each stage of the imaging pipeline are complex, several of them are nonlinear, and their effect on the noise is complicated.

There are two goals in this project. First, we investigate the effects of the order of demosaicking and denoising on images and image noise. The study focuses on three commonly adopted denoising algorithms: BM3D, bilateral filtering and BLS-GSM; and four demosaicking algorithms: bilinear interpolation, adaptive homogeneity-directed, POCS and an adaptive frequency-domain method. Images and noise are monitored and analyzed at each stage of the pipeline to understand how each stage and algorithm affects the noise characteristics. Noise characteristics are evaluated by various metrics, from MSE and sCIELAB to visual representations of the spatial and color channel correlation of the noise. Ultimately, this project will suggest a preferred order for the image pipeline.

Since most demosaicking algorithms do not take the effects of denoising into account and vice versa, optimizing both stages is difficult. Hence a joint demosaicking and denoising algorithm was proposed by Keigo Hirakawa et al., which combines these two procedures systematically into a single operation. In this project, we will also evaluate the joint demosaicking and denoising algorithm by comparing its final images with the images generated by separate demosaicking and denoising processes.

The effect of demosaicking on sensor noise

Noise model and components

In this part, we discuss how noise affects demosaicking algorithms, how demosaicking changes the noise characteristics, and present a way to quantify that change. Most demosaicking algorithms assume input images with little to no noise. In practice, however, the input image is subject to various types of noise such as photon shot noise, readout noise, fixed pattern noise and thermal noise. Noise is more pronounced in images captured in low light or with small sensors such as cell phone cameras, and such noisy images degrade the performance of most demosaicking algorithms. Let us analyze what happens to demosaicking methods for images with considerably high input noise. First, we define the total demosaic error as the difference between a noiseless ground truth RGB image and the output image of the simulation shown in Figure x. The output image is obtained by demosaicking the noisy CFA image generated by adding noise to the simulated CFA image. We assume the input noise to be additive white Gaussian noise (AWGN), which implies that it is free of both color channel correlation and spatial correlation. We then divide the total demosaicking error into two components: the demosaic error component and the noise error component.

The demosaic error component is the error generated by the demosaicking algorithm itself. It can be obtained by comparing the ground truth noiseless RGB image with the image obtained by demosaicking the simulated noiseless CFA image.

The noise error component is calculated by subtracting the demosaic error component from the total demosaic error. It can be understood as a noise propagation term showing how much the output is degraded by input noise. Figure x shows each component for several demosaicking methods. When the noise level is zero, the total demosaic error equals the demosaic error component, and the performance of the demosaicking method determines the overall quality of the pipeline. As input noise increases, however, the noise error component becomes dominant. We therefore focus on the noise error component when analyzing the performance of a demosaicking algorithm: how demosaicking algorithms handle input noise, and how large a noise error component they produce, is an important issue in image pipeline analysis.
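The decomposition above can be sketched in a few lines of NumPy. The demosaicking step itself is left abstract here, since any of the studied algorithms can be plugged in; the function only performs the error bookkeeping:

```python
import numpy as np

def error_components(ground_truth, demosaicked_clean, demosaicked_noisy):
    """Split the total demosaic error into its two components.

    ground_truth      -- noiseless ground truth RGB image (H x W x 3)
    demosaicked_clean -- result of demosaicking the noiseless CFA image
    demosaicked_noisy -- result of demosaicking the noisy CFA image
    """
    total_error = demosaicked_noisy - ground_truth
    demosaic_error = demosaicked_clean - ground_truth  # algorithm's own error
    noise_error = total_error - demosaic_error         # noise propagation term
    return total_error, demosaic_error, noise_error
```

Note that the noise error component reduces to the difference between the noisy and noiseless demosaicked outputs, which is why it isolates the propagation of input noise through the algorithm.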

Noise after demosaicking

Analyzing the characteristics of the noise error component after demosaicking is an issue that has not been addressed before. We will show that noise error components are highly color channel correlated as well as spatially correlated.

Color channel correlation

Correlation of noise between color channels can be visualized by plotting the noise in a 3D RGB scatter plot. Figure x is obtained by plotting AWGN input noise in 3D RGB space. Since the input noise is white and has zero mean, the distribution of the data points is isotropic, forming a uniform sphere around the origin. After demosaicking, however, the resulting noise error component is no longer isotropic. In Figure x we see the distribution is stretched in the (1, 1, 1) direction. This implies that the noise error component tends to have the same value in the R, G, and B channels, i.e., channel-independent noise becomes luminance noise after demosaicking. The reason is that most state-of-the-art demosaicking algorithms use the high frequency information in the better sampled green channel to substitute for the high frequency information in the red and blue channels. This preserves edges and reduces demosaicking artifacts, but also introduces correlation between the color channels.
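One way to quantify this stretching, assuming the noise error component is available as an H x W x 3 array, is to compute its 3x3 channel covariance and extract the principal axis; for post-demosaic noise the leading eigenvector tends toward (1, 1, 1)/sqrt(3):

```python
import numpy as np

def channel_covariance(noise):
    """3x3 covariance of the noise across the R, G, B channels."""
    return np.cov(noise.reshape(-1, 3), rowvar=False)

def principal_axis(noise):
    """Unit vector along which the RGB noise distribution is most stretched."""
    w, v = np.linalg.eigh(channel_covariance(noise))  # eigenvalues ascending
    return v[:, -1]                                   # largest-eigenvalue vector
```

For isotropic AWGN the covariance is a scaled identity and the principal axis is arbitrary; a strong luminance component pulls it toward (1, 1, 1)/sqrt(3).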

Spatial correlation

Spatial correlation is another type of correlation that appears in the noise error component after demosaicking. It arises because demosaicking algorithms refer to neighboring pixels to estimate missing values; at high input noise levels, they may confuse noise with signal and introduce false edges and patterns. Figure x shows spatially independent noise, which looks grainy. In figures x and x, the demosaicked images instead show abnormal streaks and color blobs, which are extremely hard to analyze and denoise afterwards.
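Spatial correlation can be measured with the 2-D autocorrelation of a single noise channel, sketched here via the Wiener-Khinchin relation (circular boundary handling is assumed):

```python
import numpy as np

def spatial_autocorrelation(noise_channel):
    """Normalized 2-D autocorrelation of one noise channel, computed via
    the Wiener-Khinchin relation (circular boundaries assumed).

    For spatially white noise this is close to a delta at lag (0, 0);
    demosaicked noise shows significant mass at neighboring lags as well.
    """
    x = noise_channel - noise_channel.mean()
    psd = np.abs(np.fft.fft2(x)) ** 2       # power spectral density
    acf = np.fft.fftshift(np.fft.ifft2(psd).real)
    return acf / acf.max()                  # lag (0, 0) maps to the center
```

Comparing the values at lag one against the central peak gives a compact summary of how much neighboring-pixel correlation the demosaicker introduced.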

Effect on denoising

Most denoising algorithms assume additive white Gaussian noise because it is a good approximation of readout noise and is easy to analyze. Multiplicative noise can also be handled by adaptively applying denoising algorithms designed under additive noise assumptions. However, denoising algorithms perform poorly on noise whose characteristics deviate from these assumptions. Unfortunately, as discussed previously, demosaicking introduces color channel and spatial correlation into the noise, degrading the performance of the denoising algorithm.

Here we show how denoising algorithms perform on noise after the demosaicking operation. We compare two cases of simulated noise and a case of actual noise obtained after demosaicking, which is heavily correlated. The first simulated case is simple additive white Gaussian noise. The second is noise that is color channel correlated but spatially independent; we generate it by measuring the color channel correlation of the actual noise and imposing the same correlation on additive white Gaussian noise. We also control the power of the noise so that it is equal in all three cases.
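A sketch of how such channel-correlated but spatially white noise can be generated, assuming the measured channel covariance is available: apply its Cholesky factor to white noise per pixel, then rescale so the total power matches the original:

```python
import numpy as np

def impose_channel_correlation(white_noise, cov):
    """Give spatially white noise a target 3x3 channel covariance.

    white_noise -- H x W x 3 i.i.d. unit-variance Gaussian noise
    cov         -- desired channel covariance, e.g. measured from actual
                   post-demosaic noise (must be positive definite)
    The result is rescaled so its total power matches the input's,
    keeping the noise level equal across the compared cases.
    """
    L = np.linalg.cholesky(cov)
    shaped = white_noise.reshape(-1, 3) @ L.T
    shaped *= np.sqrt((white_noise ** 2).sum() / (shaped ** 2).sum())
    return shaped.reshape(white_noise.shape)
```

Because the mixing acts only across channels at each pixel, the result stays spatially independent, isolating the effect of channel correlation on the denoiser.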

Figure x compares the performance of two state-of-the-art denoising algorithms, BM3D and BLS-GSM, on the three kinds of noise described above at several noise levels. A PSNR comparison of the denoised output images for the two simulated cases clearly shows that the color-correlated noise is more difficult to denoise. As the noise becomes color correlated, its power is pushed into the luminance channel, where most of the signal power is, making it harder to separate noise from signal. We visualize this observation in figure x.

The PSNR for denoising actual noise after demosaicking is the worst among the three. Strong spatial correlation in the noise, not present in the two simulated cases, introduces additional PSNR attenuation. Most denoising algorithms do not consider spatial correlation. In summary, even though the noise levels are the same, correlation, color channel and/or spatial, makes denoising more difficult than the ideal case with additive white Gaussian noise.

Summary

CFA denoising

In order to consider performing denoising before demosaicing, we must now address the task of denoising CFA images. Although the task of denoising gray-scale or RGB images is frequently addressed, there exist very few publications that propose methods for denoising CFA images. Since adjacent pixels of the CFA image represent different color measurements, the image cannot be adequately denoised using most gray-scale denoisers. Similarly, color denoising algorithms cannot be directly applied to CFA measurements because each pixel only contains one color measurement. Any successful CFA denoising algorithm must take advantage of the spatial and color correlations that exist between the CFA measurements. For these reasons, CFA denoising is challenging.

One possible approach to CFA denoising is to decompose the CFA image into four smaller images, one for each of the R, G1, G2, and B measurement grids in the CFA. The four single channel images can then be denoised with a gray-scale denoising algorithm and the results rearranged into the denoised CFA image. This method performs poorly because correlations between the different colors are ignored.
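This decomposition is straightforward to implement; the sketch below assumes an RGGB tile layout with R in the top-left position of each 2x2 block:

```python
import numpy as np

def cfa_to_subimages(cfa):
    """Split an RGGB Bayer mosaic into four half-resolution subimages,
    assuming the 2x2 tile layout  R G1 / G2 B."""
    return {"R":  cfa[0::2, 0::2], "G1": cfa[0::2, 1::2],
            "G2": cfa[1::2, 0::2], "B":  cfa[1::2, 1::2]}

def subimages_to_cfa(sub):
    """Reassemble the mosaic from the four (denoised) subimages."""
    h, w = sub["R"].shape
    cfa = np.empty((2 * h, 2 * w), dtype=sub["R"].dtype)
    cfa[0::2, 0::2] = sub["R"]
    cfa[0::2, 1::2] = sub["G1"]
    cfa[1::2, 0::2] = sub["G2"]
    cfa[1::2, 1::2] = sub["B"]
    return cfa
```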

Another possible approach is to form a new lower resolution RGB image by pulling the red and blue values from each 2x2 block and averaging the two green values. This full color image can then be filtered using an RGB denoising algorithm. Finally, the denoised RGB values of each pixel must be placed back into the RGGB CFA positions. Although this approach takes advantage of the color correlation in the CFA image, it fails to preserve the high frequency spatial information that exists in the green channel of the original CFA.
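A minimal sketch of this block-averaging approach, under the same RGGB layout assumption as above:

```python
import numpy as np

def cfa_to_halfres_rgb(cfa):
    """Form a half-resolution RGB image from each 2x2 RGGB block,
    averaging the two green samples (R G1 / G2 B layout assumed)."""
    r = cfa[0::2, 0::2]
    g = 0.5 * (cfa[0::2, 1::2] + cfa[1::2, 0::2])
    b = cfa[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)
```

The averaging of G1 and G2 is exactly where the high-frequency green information is lost, which is the weakness noted above.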

Instead, our approach to CFA denoising is to rearrange the CFA image into the four lower resolution RGGB images and then perform a four channel color transformation from RGGB to C0C1C2ΔG, described below. These four images are then denoised using existing gray-scale denoising algorithms or modified versions of existing RGB denoising algorithms. The four denoised images are converted back to the RGGB color space with the inverse color transform and rearranged to form the final denoised CFA.

Color space transformation

We desire a color transform that maps RG1G2B into a color space better suited for denoising. The first requirement we impose on the color transformation is that it be orthonormal. This ensures the sensor noise in the transformed color space has the same distribution as in the original color space, given our assumptions about the noise model. The orthonormal property also guarantees that the errors remaining in the denoised images in the transformed color space are not amplified by converting back to the RGGB color space.

The color space we chose is derived from the principal components of the RGB values at all of the pixels in the Kodak dataset. This PCA basis has the property that the largest variance in the data lies along the direction of the first principal component; the second principal component is the direction of maximum variance among all vectors orthogonal to the first. The transformation from RGB to the PCA color space is

\begin{equation} \left[ \begin{array}{c} C_0 \\ C_1 \\ C_2 \end{array} \right] = \left[ \begin{array}{ccc} .541 & .617 & .572 \\ -.794 & .152 & .588 \\ -.276 & .772 & -.572 \end{array} \right] \left[ \begin{array}{c} R \\ G \\ B \end{array} \right] \end{equation}
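The orthonormality requirement can be checked numerically; a small NumPy sketch, taking the second-row last entry as .588 (the value required for the rows to have unit norm). The residual is nonzero only because the entries are rounded to three decimals:

```python
import numpy as np

# RGB -> PCA color transform from the equation above
M3 = np.array([[ 0.541,  0.617,  0.572],
               [-0.794,  0.152,  0.588],
               [-0.276,  0.772, -0.572]])

# M3 @ M3.T should be (close to) the 3x3 identity; the small residual
# comes only from rounding the entries to three decimals.
err = np.abs(M3 @ M3.T - np.eye(3)).max()
```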

Since we desire a transformation that can be applied to the RG1G2B values from the Bayer pattern, we modified the above transform. Specifically, the energy in the green coefficients above was split equally between G1 and G2. A fourth color plane was also added to extract the additional information present in the second green value. In order to preserve the orthonormality of the transform (and its energy compaction), this plane was chosen as the scaled difference between the two greens. The transformation from RG1G2B to our proposed color space is

\begin{equation} \left[ \begin{array}{c} C_0 \\ C_1 \\ C_2 \\ \Delta G \end{array} \right] = \left[ \begin{array}{cccc} .541 & .436 & .436 & .572 \\ -.794 & .107 & .107 & .588 \\ -.276 & .546 & .546 & -.572 \\ 0 & .707 & -.707 & 0 \end{array} \right] \left[ \begin{array}{c} R \\ G_1 \\ G_2 \\ B \end{array} \right] \end{equation}
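The same numerical check applies to the four-channel transform. Note that the last row must carry opposite signs on G1 and G2: that is what makes ΔG the (scaled) green difference and keeps the matrix orthonormal, since a same-sign row would not be orthogonal to the first three rows, whose G1 and G2 coefficients are equal:

```python
import numpy as np

# RG1G2B -> C0 C1 C2 dG transform; opposite signs on G1 and G2 in the
# last row make dG the scaled green difference and keep orthonormality.
M4 = np.array([[ 0.541,  0.436,  0.436,  0.572],
               [-0.794,  0.107,  0.107,  0.588],
               [-0.276,  0.546,  0.546, -0.572],
               [ 0.000,  0.707, -0.707,  0.000]])

# Residual from the identity, nonzero only due to 3-decimal rounding.
err4 = np.abs(M4 @ M4.T - np.eye(4)).max()

# Any pixel whose two green samples agree has a zero dG coefficient.
coeffs = M4 @ np.array([0.3, 0.5, 0.5, 0.7])
```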

This transform is helpful for denoising because it compacts the signal energy into the first color planes while the noise power is spread equally over all of them. The signal can then be extracted more easily from the first few color planes, where it dominates the noise, and the last few color planes can be filtered more aggressively to remove noise without a significant loss of signal.

Experiments

To compare the performance of the two image pipelines, demosaicking-first and denoising-first, we performed simulations with several combinations of demosaicking and denoising methods. The performance is quantified in PSNR by comparing the noiseless ground truth RGB image with the image obtained by applying the demosaicking and denoising operations to the CFA image in the corresponding order. The PSNR is evaluated by averaging the error over all 24 images in the Kodak image set.
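The PSNR figure of merit used throughout can be computed as follows (8-bit images with peak value 255 are assumed by default):

```python
import numpy as np

def psnr(reference, result, peak=255.0):
    """Peak signal-to-noise ratio in dB between the noiseless ground
    truth and a pipeline output (8-bit range assumed by default)."""
    mse = np.mean((reference.astype(float) - result.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Averaging the per-image MSE over the 24 Kodak images before taking the logarithm, or averaging the per-image PSNRs, are both common; the text does not specify which convention is used.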

CBM3D

When applying the CBM3D algorithm directly to CFA images, we performed block matching only on the first channel of the color transformed image, which corresponds to the luminance channel, because block matching on the chrominance channels usually does not give good results. We then denoise the blocks of all four channels using the block matches from the first channel, so that every channel works with a good set of matched blocks.

For the BM3D case, the results show that both pipelines denoise well, with the demosaic-first pipeline outperforming the other by about half a dB in PSNR. However, an important advantage of denoising first is that it saves a significant amount of computation.

// computation analysis //

BLS-GSM

The results for the BLS-GSM case are comparable to those for BM3D. The denoise-first pipeline works as well as the other pipeline while saving much computation.

// computation analysis //

Actual simulations on the Kodak image set show that denoising CFA data is 2.72 times faster than denoising RGB data, including overhead costs in the denoising algorithm.

Comparison of visual quality

It is interesting to compare the visual quality of the output images from each pipeline. In smooth regions, the demosaic-first pipeline produces many abnormal (jiggling) patterns that were not in the original image. These are remnants of demosaicking artifacts on noisy input, and they depend on which demosaicking algorithm is used. Denoising first avoids these patterns and yields a clean, smooth output. In addition, the final output then depends much less on the choice of demosaicking algorithm, because the demosaicking stage receives less input noise and its performance is determined by its noiseless-case behavior, which is what it was designed for. On the other hand, the demosaic-first pipeline performs better along strong edges, since our proposed denoising method does not account for possible subpixel misalignment between the subimages. Thus, in terms of performance alone, the denoise-first pipeline is preferable if one cares more about smooth regions, while the demosaic-first pipeline is a better choice if the input image is highly textured and contains many very strong edges.

Conclusions

References

Denoising software by Steve Lansel

1. Hirakawa, K. and Parks, T.W., "Adaptive homogeneity-directed demosaicing algorithm," IEEE Transactions on Image Processing, vol. 14, no. 3, pp. 360-369, March 2005.


2. Dubois, E., "Frequency-domain methods for demosaicking of Bayer-sampled color images," IEEE Signal Processing Letters, vol. 12, no. 12, pp. 847-850, December 2005.



3. Dabov, K., Foi, A., Katkovnik, V., and Egiazarian, K., "Image denoising with block-matching and 3D filtering," in Electronic Imaging '06, Proc. SPIE 6064, no. 6064A-30, San Jose, California, 2006.




4. Portilla, J., Strela, V., Wainwright, M.J., and Simoncelli, E.P., "Image denoising using scale mixtures of Gaussians in the wavelet domain," IEEE Transactions on Image Processing, vol. 12, no. 11, pp. 1338-1351, November 2003.


