Paper Draft JEI


In the process of moving from tex to wiki - mp







\title{On improving imaging device sensitivity with wideband color filters}

\author{Manu Parmar} \email[]{mparmar@stanford.edu} \thanks{MP was supported by funds from the Samsung Advanced Institute of Technology, Seoul, Korea.} \affiliation{Electrical Engineering Department, Stanford University, Stanford, CA-94305, USA}

\author{Brian Wandell} \email[]{wandell@stanford.edu} \affiliation{Psychology Department, Stanford University, Stanford, CA-94305, USA}

\author{Joyce Farrell} \email[]{joyce_farrell@stanford.edu} \affiliation{SCIEN, Stanford University, Stanford, CA-94305, USA}

\date{\today}

\begin{abstract} Under low illumination conditions, such as moonlight, there simply are not enough photons present to create a high-quality color image with integration times that avoid camera shake. Consequently, conventional imagers are designed for daylight conditions and modeled on human cone vision. Here, we propose a novel sensor design that parallels the human retina and extends sensor performance to span daylight and moonlight conditions. Specifically, we describe an interleaved imaging architecture comprising two collections of pixels. One set of pixels is monochromatic and high sensitivity; a second, interleaved set of pixels is trichromatic and lower sensitivity. The sensor implementation requires new image processing techniques that allow for graceful transitions between different operating conditions. We describe these techniques and simulate the performance of this sensor under a range of conditions. We show that the proposed system is capable of producing high-quality images spanning photopic, mesopic, and near-scotopic conditions. \end{abstract}

\keywords{wideband spectral transmittance, interleaved imaging, imaging device sensitivity}

\section{Introduction}

We describe a method to improve imaging device sensitivity by incorporating a pixel with wideband transmittance in a color filter array (CFA), and discuss the associated tradeoffs. Such a system offers many advantages, but at some cost. We describe the following features of such a system: \begin{enumerate} \item The increase in sensitivity \item The increase in dynamic range \item The loss of resolution \end{enumerate}

Designers and consumers would like cameras to operate under the same range of conditions as the human visual system. From day to night, scene intensities span a range of roughly $10^8$ units, and the human visual system adapts to encode images effectively across this enormous range. Multiple mechanisms play a role in adapting to light levels \cite{wandell95a}; these include regulation of the pupil and scaling of the response gain of individual receptors. In addition, the system uses two distinct types of photoreceptors, rods and cones, to encode the broad range of intensity levels. The three types of cones are the principal encoding cells under relatively high levels of illumination (photopic). Under these conditions spatial and temporal resolution are highest, and we experience color. Vision is mediated by rods under the lowest levels of illumination (scotopic). Rod vision has significantly lower spatial and temporal resolution, and because there is only one type of rod, scotopic vision is achromatic. Over the intermediate (mesopic) range, signals encoded by rods and cones both contribute to vision. Experiments show that rod and cone signals interact, influencing both color appearance and sensitivity at mesopic levels \cite{stockman06a,knight01a}.


\begin{figure}[] \centering

    \subfigure[]{
         \label{fig:noise_low_light}

\includegraphics[height=2.85cm]{../figs/noise_example_lighting.eps}} \\

    \subfigure[]{
         \label{fig:noise_small_pixel}

\includegraphics[height=3.05cm]{../figs/noise_example_size.eps}} \caption{{\bf Image sensor noise from low pixel illumination.} (a) An image acquired with a digital camera under low photopic illumination. Portions of the image from a low-light area (left, red border) and a well-lit area (right, green border) are compared. (b) Simulations of the same scene acquired using a sensor with 6 $\mu$m (left) and 2 $\mu$m (right) pixels.} \label{fig:sensor-noise} \end{figure}

The need to span such a large intensity range imposes a substantial challenge because there are very few photons available at low light levels. For example, under moonlight conditions an $f$/5.6 lens gathers about 10 photons per square micron per second. Hence, in an exposure of 25 ms a 2 $\mu$m pixel (100\% fill factor, 100\% quantum efficiency) will receive on average only one incident photon. Xiao et al. \cite{xiao05a} showed that photon noise on a uniform background becomes visible at an SNR $\lesssim$ 30 dB (1000 photons). Even under moderate imaging intensities noise frequently becomes visible. For example, Figure \ref{fig:noise_low_light} was acquired under low photopic levels, but shadows reduce the illumination in a significant portion of the image to scotopic levels. The noise in the shadowed region is apparent and much greater than the noise in the well-lit portion.
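This photon budget is simple enough to check directly. The short Python sketch below uses the flux, exposure, and pixel values quoted above, together with the Poisson rule that SNR equals the square root of the mean photon count.

\begin{verbatim}
# Photon budget under moonlight, using the values quoted in the text.
import math

photon_flux = 10.0    # photons per square micron per second (f/5.6, moonlight)
exposure    = 0.025   # 25 ms
pixel_pitch = 2.0     # microns, 100% fill factor assumed

mean_photons = photon_flux * exposure * pixel_pitch**2
print(mean_photons)   # -> 1.0 photon on average

# Poisson-limited SNR in dB for a mean of n photons: SNR = sqrt(n)
def photon_snr_db(n):
    return 20.0 * math.log10(math.sqrt(n))

print(photon_snr_db(1000))   # -> 30 dB, the visibility threshold of Xiao et al.
\end{verbatim}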


Figure \ref{fig:noise_small_pixel} illustrates a second cause of low pixel illumination: the drive to increase spatial resolution by reducing pixel size. The image on the left simulates a scene with mean luminance 100 cd/m$^2$ and an $f$/4 lens, acquired with an image sensor with 6 $\mu$m pixels (the simulation was carried out with the ISET imaging pipeline simulator \cite{farrell04a}). The image on the right simulates an identical acquisition (same exposure and noise characteristics) with a sensor with 2 $\mu$m pixels. The sensor with smaller pixels produces an image with significantly higher noise. The deterioration of signal-to-noise ratio (SNR) with reduced pixel size is related to the smaller number of incident photons and the Poisson nature of photon arrival; the SNR declines with the square root of the signal level. In the case illustrated in Fig.\ \ref{fig:noise_small_pixel}, as the pixel width decreases from 6 $\mu$m to 2 $\mu$m, the photon-dependent SNR decreases by a factor of 3 (about 10 dB).
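The same Poisson reasoning gives the SNR penalty for shrinking pixels. The sketch below assumes photon-limited noise and photon counts proportional to pixel area, and reproduces the factor-of-3 (roughly 10 dB) loss quoted above.

\begin{verbatim}
# SNR loss from reducing pixel pitch, assuming photon-limited (Poisson) noise.
import math

def snr_drop(pitch_large_um, pitch_small_um):
    area_ratio = (pitch_large_um / pitch_small_um) ** 2  # photons scale with area
    snr_ratio  = math.sqrt(area_ratio)                   # SNR scales with sqrt(photons)
    return snr_ratio, 20.0 * math.log10(snr_ratio)

print(snr_drop(6.0, 2.0))   # -> (3.0, ~9.5 dB), roughly the 10 dB quoted above
\end{verbatim}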

The most practical opportunities for increasing SNR have been (a) reducing pixel noise and (b) increasing pixel spectral quantum efficiency. There are very few other opportunities to increase the amount of light incident on the sensor and improve SNR. Lens apertures are often limited by the device form factor. Increasing exposure duration is impractical for video applications because exposure durations are limited by the frame rate. In still cameras, exposure duration is limited by motion blur and camera shake \cite{xiao06b}, which are exacerbated by small form factors.

The modern sensor roadmap, driven largely by pixel-size reduction, does not address the physical limitations at low light levels; there is a need for additional approaches to sensor design. In this paper we describe and analyze an architecture that contains two interleaved sensor mosaics. One mosaic is optimized to capture color at relatively high light levels. A second mosaic forgoes color information and is optimized to capture images at low light levels. Because many scenes contain some regions that are adequately illuminated and others that are poorly illuminated, the sensor design requires the development of an image processing framework that can gracefully combine information from these interleaved mosaics.

\section{Image sensor signal and SNR}

Practically, improved image SNR is best achieved by optimizing the conversion of photons incident at the image plane into usable signal. This signal is dependent on the charge generated by a pixel due to the incident photons. Not all photons that are focused on the image sensor generate charge; some are absorbed or reflected in other physical structures of the pixel. The charge generated at the pixel depends on the efficiency of photon transmission and the efficiency of charge conversion. In simple terms: \begin{align}

N_S(\lambda) &= \Theta(\lambda)\,N_P(\lambda),

\end{align} where $N_S$ is the charge generated for $N_P$ incident photons and $\Theta(\lambda) = \kappa(\lambda)\,T(\lambda)$; $T(\lambda)$ is the transmittance of the pixel and depends on the pixel technology, and $\kappa(\lambda)$ is the charge collection efficiency, which depends on a number of factors such as fill factor, microlens design, and photodiode structure.

\subsection{SNR in Color Image Sensors}

The image sensor is not inherently color aware; it simply produces a signal proportional to all incident energy in its sensitivity range. Most common color imagers use color filter arrays (CFAs) overlaid on the image sensor to acquire three or more separate wavelength bands (color channels). The CFA arrangement allows each pixel to acquire a sample of only one color channel. At a particular pixel location that samples the $i^{\mathrm{th}}$ color channel with filter transmittance $t_i(\lambda)$, the signal generated is proportional to the accumulated charge: \begin{align} \label{eq:color_charge}

C_i &= \sum_{k \in \Lambda_i} t_i(k) \Theta(k)\,N_P(k),

\end{align} where the color filter only transmits energy at wavelengths in $\Lambda_i$. The accumulated charge reflects the quantum efficiency of the pixel attenuated by the transmittance of the color filter. Typically, relatively narrow-band color filters are used; this wavelength-selective behavior restricts the number of photons gathered at each color pixel and reduces the effective sensitivity of color-sensing pixels. A color filter is commonly denoted by the dominant wavelength range in its pass-band. Most digital cameras have used CFAs with red, green, and blue (RGB) or cyan, magenta, and yellow (CMY) color filters.
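A small numerical sketch illustrates these expressions. The spectra below are made-up placeholders (a flat photon spectrum, constant pixel efficiency, an idealized green pass-band), not measured data; the point is only that a narrow filter accumulates far less charge than a wideband one.

\begin{verbatim}
# Charge accumulated in one color channel: C_i = sum_k t_i(k) Theta(k) N_P(k).
# All spectra here are illustrative placeholders, not measured data.
import numpy as np

wavelengths = np.arange(400, 701, 10)          # nm
N_P   = np.full(wavelengths.shape, 100.0)      # incident photons per band
kappa = np.full(wavelengths.shape, 0.5)        # charge collection efficiency
T     = np.full(wavelengths.shape, 0.9)        # pixel transmittance
Theta = kappa * T                              # overall pixel efficiency

# Hypothetical narrow green filter (pass-band 500-600 nm) vs. a wideband filter
t_g = np.where((wavelengths >= 500) & (wavelengths <= 600), 0.8, 0.0)
t_w = np.full(wavelengths.shape, 0.95)

C_g = np.sum(t_g * Theta * N_P)   # charge in the green channel
C_w = np.sum(t_w * Theta * N_P)   # charge in the wideband channel
print(C_g, C_w)                   # the wideband pixel gathers far more charge
\end{verbatim}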

An obvious approach to increase the number of photons gathered by a color-sensing pixel is to maximize the transmission efficiency $t(\lambda)$. This requires not only improvements in color filter peak efficiencies, which depend on the pigment materials, but also wider pass-band responses. Unfortunately, broad-band filter transmittances have a deleterious effect on another important color image processing operation. The \emph{raw} color image acquired by the CFA-equipped sensor must be \emph{color corrected}. The color correction process adjusts the raw color values to color values intended for RGB-type display systems that will then be seen by the human visual system. This color correction is implemented as a linear transformation from the raw color space to the rendering color space. In the case of broadband filters, the color correction matrix has noise-amplifying properties, and there is a trade-off between increasing color sensor sensitivity and maintaining color accuracy. This effect has been shown for CMY filters, where color correction involves subtraction operations that reduce signal SNR \cite{baer99a,barnhoefer03a}. Therefore, even though complementary CMY filters have higher peak efficiency and gather more photons than primary RGB filters because of their broader wavelength support, RGB CFAs have become more popular in recent years.

Recently, Kijima et al. \cite{kijima07a} and Luo \cite{luo07a} have proposed approaches that attempt to preserve the color reproduction ability of the image sensor while increasing sensor sensitivity. Both approaches are fundamentally similar and use image sensors with CFAs that have four distinct types of photosites. Three of these photosites have color filters similar to conventional RGB-type filters, while the fourth type of photosite is covered with a filter that is essentially all-pass in the visible range of the spectrum. Kijima et al. refer to this type of filter as \emph{panchromatic}, while Luo refers to the corresponding type of photosite as \emph{transparent}. Fundamentally, both approaches attempt to exploit the separability of spatial and chromatic components of color images \cite{poirson93a}. Pattern-color separability is a well-understood property and is commonly used in the processing of color images. In essence, both approaches derive spatial detail from the high-SNR broadband channel while recovering color information from the RGB channels.

We believe both approaches miss a significant opportunity to maximize the efficiency of the broadband channel. The signal processing architecture in both cases requires processing signals in broadband-color difference spaces. This implies that the broadband and color signals must be combined and thus must have similar signal levels; otherwise, in many images the broadband pixel may saturate while the RGB signals are still operating in noisy regimes. Operating in broadband-RGB difference spaces means that both approaches can exploit the all-pass nature of the filter to increase efficiency only to a limited extent. The all-pass nature of the filter is only one of two factors that influence efficiency (the second being peak efficiency). Although these designs offer some improvement over conventional designs, in very low light conditions, where increased sensitivity is most needed, they cannot fully exploit the broadband channel.


\section{Background} \label{sec:background} The problems caused by the limited number of photons in low-light conditions have been addressed in a number of patents and publications. Several of these rely on methods that increase the number of photons gathered by a color-sensing pixel by using filters with higher transmission efficiencies, for example by increasing the color filter peak efficiency and by increasing the filter spectral bandwidth. Here we describe salient features of a few such methods most similar in spirit to our technique.

In a U.S. patent granted in 1983, Sato et al.\ \cite{sato83a} described an imaging system with a CFA with transparent photosites. The transparent pixel serves as a luminance channel in their application; they do not describe a method for accounting for the sensitivity mismatch between the transparent and color channels.

In 1994 Yamagami et al. \cite{yamagami94a} were granted a patent for a system design that uses a CFA comprising RGB and luminance-sensitive (denoted by the letter Y) photosites. This RGBY system generates a luminance channel (Y) and two color-difference channels that are derived from the RGB photosites. Yamagami et al.\ acknowledge the large sensitivity mismatch between the RGB and Y pixels. Specifically, the Y channel has significantly higher sensitivity than the other channels and will saturate when the color channels are well exposed. They discuss approaches to minimize this problem by (a) using CMY filters rather than RGB, or (b) placing a neutral density filter on the Y channel. In 2002, Gindele and Gallagher \cite{gindele02} were granted a patent that addresses the sensitivity mismatch. They propose a scheme for recovering RGB data from Yamagami's RGBY CFA data in bright conditions when the Y photosites are saturated. Gindele's method extends Adams' demosaicking method \cite{adams97a}.

Kijima et al. \cite{kijima07a} applied for a patent on image sensors with three color (e.g., RGB) photosites and a fourth wideband photosite that they call \emph{panchromatic} (see also Luo \cite{luo07a}). The signal processing architecture combines the panchromatic and color channels at an early stage, and thus faces the sensitivity mismatch problem. Kijima et al.\ propose that because ``the color filter pixels will be significantly less sensitive than the panchromatic pixels'' it is ``advantageous to adjust the sensitivity of the color filter pixels so that they have roughly the same sensitivity as the panchromatic pixels'' (Kijima et al.\ \cite{kijima07a}, page 5, column 2, par.\ 57).

There is a significant lost opportunity in all of these methods. Combining the wideband and RGB color signals at an early stage limits the ability to create a high-dynamic-range sensor. The proposed solutions (using CMY filters, or adding a neutral density filter to the Y channel) reduce the effective sensor dynamic range. Thus, they fail to fully exploit the wideband channel.

In contrast to these approaches, the interleaved imaging system proposed here has a wideband channel with peak quantum efficiency that can be very high relative to RGB channels. The wideband channel provides high SNR and spatial detail information in low light conditions. When light levels increase, the RGB signal has sufficient SNR, and the system smoothly reduces its use of the wideband channel. The interleaved imaging design parallels the biological rod-cone design and frees us to maintain the full dynamic range of the two types of channels.


\section{Interleaved imaging}

We simulate an image sensor CFA with two kinds of photosites: RGB photosites and wideband photosites (W) specialized for low-light sensing (Figure \ref{fig:interleaved}). The wideband channel maximizes the number of photons gathered and has approximately 6 times the SNR of the G channel and 10 times the SNR of the R and B channels. There are many possible spatial arrangements of these photosites; we illustrate one. A single acquisition provides two interleaved images: the RGB image encodes color information while the wideband image encodes an achromatic, high-SNR representation. The key problem is to find a method of combining the two images without compromising the advantages inherent to each.


\begin{figure}[ht] \centering \includegraphics[width=8.5cm]{../figs/cfa_rgbsi_interleaved.eps} \caption{{\bf Color filter array.} The curves show the transmittances of the RGBW filters used in the simulation. The legend shows the spatial arrangement of these filters.} \label{fig:interleaved} \end{figure}


Figure \ref{fig:interleaved_pipeline} illustrates the general operation of the imaging pipeline. The RGBW channels are decomposed into two images. The images at the center of the pipeline in Fig.\ \ref{fig:interleaved_pipeline} illustrate the conditions where the interleaved imaging system is most effective (low light). The RGB photosites gather only a limited number of photons and provide a noisy but colored image. This is the image that would be acquired by a conventional imaging system under such conditions. The W channel output is shown at center-bottom. This achromatic image has high SNR and carries reliable spatial information. The interleaved image processing combines the two images to yield a single final output, shown at the right.

\begin{figure*}[ht] \centering \includegraphics[width=17cm]{../figs/interleaved_pipeline.eps} \caption{{\bf Interleaved imaging.} The data from the RGB and W photosites are decomposed into a color (top) and wideband (bottom) image. The color image is noisy and comparable to the quality that one would obtain with a conventional camera. The interleaved image processing system combines the color and wideband images to improve the final output, which is a single high SNR color image shown at the right.} \label{fig:interleaved_pipeline} \end{figure*}

\subsection{Interleaved imaging signal processing} \label{sec:sigproc}

Interleaved image processing confronts many of the same issues as conventional digital image processing, such as demosaicking and white balancing. In addition, interleaved imaging has one further challenge: managing the large sensitivity mismatch between the wideband and color images. In this section we describe our approach to this problem.

Interleaved imaging operates in several different regimes that parallel the scotopic, mesopic, and photopic ranges. Under very low light, the SNR in the RGB channels is so low that little useful information can be derived; in such conditions spatial information is derived entirely from the W channel. In mesopic conditions, there is useful information in both the W image and the RGB image, and it is useful to combine the information from the two. Finally, in photopic conditions the W channel is saturated and information is derived from the RGB image. Below we describe how we manage the transitions between these regimes.

The interleaved imaging system acquires a 2D ($m \times n$) CFA image that measures one intensity level at each spatial location. This image can be expanded to an $m \times n \times 4$ array with zeros in locations that are not sampled by the CFA. The first three bands of this array contain the RGB image and the fourth contains the W image. Filling in the missing values in each band is analogous to conventional demosaicking, but it has the additional problem of the sensitivity mismatch between the channels. Hence, demosaicking the CFA data requires special considerations. We use adaptive smoothing based on bilateral filtering and non-local means to recover the RGB image from the CFA data.
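A minimal sketch of the expansion step is shown below. The 2 $\times$ 2 RGBW tile is hypothetical and used only for illustration; the arrangement actually simulated is the one shown in Figure \ref{fig:interleaved}.

\begin{verbatim}
# Expand the m x n CFA measurement into an m x n x 4 array with zeros at
# unsampled locations. The 2x2 RGBW tile below is only illustrative.
import numpy as np

def cfa_masks(m, n, tile):
    """Binary sampling mask per channel, obtained by repeating a CFA tile."""
    reps = np.tile(tile, (m // tile.shape[0] + 1, n // tile.shape[1] + 1))[:m, :n]
    return {ch: (reps == ch).astype(float) for ch in ("R", "G", "B", "W")}

def expand_cfa(raw, masks):
    """raw: (m, n) CFA image -> (m, n, 4) array, zeros where not sampled."""
    out = np.zeros(raw.shape + (4,))
    for band, ch in enumerate(("R", "G", "B", "W")):
        out[:, :, band] = raw * masks[ch]
    return out

masks = cfa_masks(480, 640, np.array([["R", "G"], ["B", "W"]]))  # hypothetical tile
\end{verbatim}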

We denote the bands of the full interleaved image and of the measurements by $f_i$ and $g_i$, $i=\mathrm{R,G,B,W}$, respectively. The estimated value $f_i(x)$ at a pixel $x$ is found as a weighted sum of the measurements in its neighborhood. The weight associated with each pixel in the neighborhood of $x$ depends on two factors: (a) its distance from $x$, and (b) its similarity to the pixel at $x$. In the examples below, we used a neighborhood size of $21 \times 21$ pixels.

\subsubsection{Bilateral filter structure}

Tomasi and Manduchi introduced the term \emph{bilateral filter} \cite{tomasi98a} to describe the idea of selecting filter weights based on geometric and photometric similarity. In the original implementation, photometric similarity was based entirely on pixel intensity. The bilateral filter adapts to local image content and is able to perform smoothing while preserving edges. The \emph{non-local means} filter of Buades et al.\ \cite{buades05a} uses an inter-pixel similarity measure based on the similarity of the image patches surrounding $x$ and the neighborhood pixel. This inter-pixel similarity measure captures the closeness of image features and is successful at preserving textures.

The bilateral filter is applied for demosaicking by updating each pixel as: \begin{align}\label{eq:ii_filter} f_i(x) &= \frac{1}{W_y}\sum_{y \in \Omega} G(\beta_d,\sigma_d) G(\beta_s,\sigma_s) S^{\Omega}_i g_i(y), \end{align} where $S^{\Omega}_i$ is a mask that has ones at locations where the CFA samples the $i^{\mathrm{th}}$ channel and zeros everywhere else; $G(\beta,\sigma)$ is the 2D Gaussian kernel \begin{align}

G(\beta,\sigma) &= \frac{1}{2\pi\sigma^2} \exp\left(-\frac{\beta^2}{2\sigma^2}\right)

\end{align} and $W_y$ is a normalization factor \begin{align}

W_y &= \sum_{y \in \Omega} G(\beta_d,\sigma_d) G(\beta_s,\sigma_s) S^{\Omega}_i,

\end{align} and $\Omega$ is the neighborhood.

In the next section we define the Gaussians for the distance-weight, $G(\beta_d,\sigma_d)$, and the pixel-similarity weight, $G(\beta_s,\sigma_s)$.
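The sketch below shows one way the masked filter above might be implemented for a single output pixel. The window matches the $21 \times 21$ neighborhood mentioned earlier; the values of $\sigma_d$ and $\sigma_s$ are assumptions, and the pixel-similarity measure $\beta_s$ is supplied by a separate function of the kind defined in the next section.

\begin{verbatim}
# One-pixel sketch of the masked, patch-based filter defined above.
# sigma_d and sigma_s are assumed values, not the ones used in the simulations.
import numpy as np

def gauss(beta, sigma):
    return np.exp(-beta**2 / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)

def filter_pixel(g, mask, beta_s_fn, x, half=10, sigma_d=3.0, sigma_s=0.1):
    """g: one channel of the raw CFA data; mask: 1 where that channel is
    sampled; beta_s_fn(x, y): pixel-similarity between locations x and y."""
    num, den = 0.0, 0.0
    for dr in range(-half, half + 1):          # 21 x 21 neighborhood
        for dc in range(-half, half + 1):
            y = (x[0] + dr, x[1] + dc)
            if not (0 <= y[0] < g.shape[0] and 0 <= y[1] < g.shape[1]):
                continue
            w = (gauss(np.hypot(dr, dc), sigma_d)   # distance weight
                 * gauss(beta_s_fn(x, y), sigma_s)  # similarity weight
                 * mask[y])                         # sampled locations only
            num += w * g[y]
            den += w
    return num / den if den > 0 else 0.0
\end{verbatim}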


\subsubsection{Similarity functions}

The distance measure is the spatial distance between two pixel locations: \begin{align} \beta_d &= \|x-y\|_2. \end{align} We illustrate the pixel-similarity measure with an example in Fig.\ \ref{fig:patch_similarity}. Consider the pixel at location $x$. It has a neighborhood, $\Omega$, indicated by the white circle, and an associated image patch, $h(x)$, shown on the right. Two other pixels in the neighborhood, $y$ and $z$, and their image patches are also marked. The pixel-similarity between $x$ and $y$ is measured by comparing their image patches, $h(x)$ and $h(y)$. In this example $h(x)$ is more similar to $h(y)$ than to $h(z)$: the edge orientations around $x$ and $y$ are similar, whereas the edge around $z$ is not.

\begin{figure}[h] \centering \includegraphics[width=8cm]{../figs/non_local_means.eps} \caption{{\bf Pixel-similarity.} Each pixel is associated with a small surrounding patch. The pixel-similarity between two pixels is determined by the similarity of their associated patches. The image patches for three pixels are shown. The pixel-similarity between $x$ and $y$ is higher than the pixel-similarity between $x$ and $z$.} \label{fig:patch_similarity} \end{figure}

Under low and moderate luminance levels, the W channel has more reliable spatial information than the RGB channels. As the illumination increases, the W channel saturates. Hence, under low illumination we prefer to judge image patch similarity based on the W channel, and under high illumination we prefer to use the RGB channels. To transition gracefully between these regimes, we define the pixel-similarity as a weighted sum of the similarity in the W channel and the RGB channels: \begin{align} \beta_s &= \alpha(y) \|h_i(x) - h_i(y) \|_2 + (1 -\alpha(y)) \|h_W(x) - h_W(y) \|_2. \end{align} The weight, $\alpha(y)$, is the fraction of saturated pixels in the W-channel image patch near $y$: \begin{align}

\alpha(y) = \frac{N_{\mathrm{saturated}}(h_W(y))}{N_{\mathrm{total}}(h_W(y))}.

\end{align}
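A minimal sketch of this adaptive similarity is given below. It assumes the patches are passed in as arrays and that saturation is detected against an assumed threshold (\texttt{sat\_level}); neither detail is specified in the text.

\begin{verbatim}
# Saturation-adaptive pixel-similarity: the more the W patch around y is
# saturated, the more the similarity relies on the color channel instead.
import numpy as np

def alpha_weight(patch_w_y, sat_level=1.0):
    """Fraction of saturated pixels in the W-channel patch around y."""
    return float(np.mean(patch_w_y >= sat_level))

def beta_s(patch_i_x, patch_i_y, patch_w_x, patch_w_y, sat_level=1.0):
    a = alpha_weight(patch_w_y, sat_level)
    color_term = np.linalg.norm(patch_i_x - patch_i_y)   # ||h_i(x) - h_i(y)||
    wide_term  = np.linalg.norm(patch_w_x - patch_w_y)   # ||h_W(x) - h_W(y)||
    return a * color_term + (1.0 - a) * wide_term
\end{verbatim}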

The adaptive pixel-similarity computation is illustrated in Fig.\ \ref{fig:sat_image}. Panel (a) is the R channel image for a simulated acquisition with an interleaved sensor. Panel (b) is the corresponding W channel image. Note that several areas of the W channel image are saturated and lose all spatial detail. In such regions, the pixel-similarity measure relies mainly on R channel data. In dark regions of the image, where R channel spatial detail is unreliable and the W channel is unsaturated, the pixel-similarity measure relies mainly on W channel data. Panel (c) shows the resulting RGB image.

\begin{figure*}[ht] \centering \includegraphics[width=17.25cm]{../figs/sat_image_1.eps} \caption{{\bf Channel sensitivity mismatch.} Simulated acquisition of a scene with an interleaved image sensor (mean luminance 200 cd/m$^2$, 66 ms exposure time, 3 $\mu$m pixels). Under these conditions the RGB channels are adequately exposed in some regions but not others. The W channel is saturated in some regions. The images show (a) the R channel, (b) the W channel, and (c) the output image after applying interleaved image processing to the data.} \label{fig:sat_image} \end{figure*}

\subsubsection{Luminance substitution mode} \label{sec:lum_substitute}

The adaptive methods based on bilateral filtering and non-local means produce a complete RGB image. Under typical or even fairly dark conditions this image is the final output of the system. When the illumination is extremely low, however, this image may still be quite noisy. There is one additional mode that we can call upon to attempt to rescue images under these very low light conditions. Specifically, we can use the W image as a substitute for the luminance component of the RGB image. This substitution alters the chromatic appearance of the image slightly, but the additional SNR in the W image compared to the luminance component of the RGB image makes the substitution worthwhile. The decision to make this substitution is based on the SNR of the RGB image.
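A sketch of one possible implementation of the substitution is shown below. It assumes standard Rec.\ 601 luminance weights and a simple mean-level match between the W image and the RGB luminance; the paper does not specify these details.

\begin{verbatim}
# Luminance substitution: scale each RGB pixel so its luminance matches the
# (level-matched) W value, preserving the chromaticity ratios.
import numpy as np

def substitute_luminance(rgb, w):
    """rgb: (m, n, 3) demosaicked image; w: (m, n) wideband image."""
    weights = np.array([0.299, 0.587, 0.114])           # assumed luminance weights
    luma = rgb @ weights                                 # current luminance
    w_scaled = w * (luma.mean() / max(w.mean(), 1e-9))   # match overall levels
    gain = w_scaled / np.maximum(luma, 1e-9)             # per-pixel luminance gain
    return rgb * gain[..., None]                         # preserves chromaticity
\end{verbatim}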

\section{Experiments} \label{sec:experiments}

We used the ISET Digital Camera Simulator \cite{farrell04a} to simulate the proposed interleaved imaging system and compared its performance with a conventional imaging system with a similarly sized sensor. ISET is a software package that offers a systems approach to modeling and simulating the image processing pipeline of a digital camera. The ISET simulation begins with scene data (a physical description of the radiance); these are transformed by the imaging optics into the optical image, an irradiance distribution at the image sensor array. The irradiance is transformed into an image sensor array response and, finally, the image sensor array data are processed to generate a display image. ISET can simulate a variety of visual scenes, imaging optics, sensor electronics, and image processing pipelines.

We show results for a simulated scene in Fig.\ \ref{fig:results}. The mean scene luminance was set to 25 cd/m$^2$. Sensor parameter values, such as read noise, dark voltage, and photoresponse non-uniformity, were approximated from measurements and specification sheets for similar sensors and are listed in Table \ref{tab:sensor_parameters}. The image processing pipeline for the conventional imager used for comparison is based on bilinear demosaicking and Gray World color balancing. The image processing pipeline for the interleaved imager relies on the signal processing steps described in Section \ref{sec:sigproc} and subsequent color balancing.

\begin{figure*}[t] \centering \subtable[5 ms]{ \begin{tabular}{c} \includegraphics[width=5.25cm]{../figs/caitlin/bayer_0.01_1.eps}\\ \includegraphics[width=5.25cm]{../figs/caitlin/ii_0.01.eps} \end{tabular}}\hspace{-0.25cm} \subtable[10 ms]{ \begin{tabular}{c} \includegraphics[width=5.25cm]{../figs/caitlin/bayer_0.02_1.eps}\\ \includegraphics[width=5.25cm]{../figs/caitlin/ii_0.02.eps} \end{tabular}}\hspace{-0.25cm} \subtable[20 ms]{ \begin{tabular}{c} \includegraphics[width=5.25cm]{../figs/caitlin/bayer_0.03_1.eps}\\ \includegraphics[width=5.25cm]{../figs/caitlin/ii_0.03.eps} \end{tabular}} \caption{{\bf Interleaved imaging compared with conventional imaging.} Top row: simulated images of a scene with mean luminance 25 cd/m$^2$, acquired using a conventional Bayer CFA. Each image was acquired at a different exposure duration (5, 10, 20 ms). Bottom row: simulations of images reconstructed with the proposed interleaved imaging system at the same exposure durations. The W channel replaced the RGB luminance information (luminance substitution mode) in the 5 ms image, but not in the other images. Simulation parameters are listed in Table \ref{tab:sensor_parameters}.} \label{fig:results} \end{figure*}

The top row in Fig.\ \ref{fig:results} shows the results obtained for the Bayer sensor for various acquisition times. The bottom row shows corresponding results for the interleaved sensor. In the 5 ms interleaved acquisition, the W channel was used to replace luminance information as described in Section \ref{sec:lum_substitute}.

\begin{table}[h] \caption{Sensor parameter values used for simulations.} \label{tab:sensor_parameters} \begin{center} \begin{tabular}{|r||l|} \hline Sensor parameter & Value \\ \hline \hline Pixel width ($\mu$m) & 2.2 \\ Pixel height ($\mu$m) & 2.2 \\ Fill factor & 0.9 \\ Dark voltage (V) & 0.0 \\ Read noise (mV) & 4.58 \\ Dark signal nonuniformity (DSNU) (mV) & 6 \\ Photoresponse nonuniformity (PRNU) (\%) & 1.7 \\ Conversion gain ($\mu$V/e$^-$) & 30 \\ Voltage swing (V) & 1.08 \\ Analog gain & 7.98 \\ Mean scene luminance (cd/m$^2$) & 25 \\ Lens f-number & \emph{f}/2.8 \\ Well capacity (electrons) & 15000 \\ \hline \end{tabular} \end{center} \end{table}

\section{Conclusions}

We propose and simulate an interleaved imaging system designed to expand the effective operating range of the image sensor. The system is based on capturing two images that parallel the rod (scotopic) and cone (photopic) photoreceptors in the retina. The two sets of pixels can operate over very different intensity ranges, and the interleaved image processing smoothly combines the spatial and chromatic information captured by each image.

\bibliography{refs_200811}



\end{document}
