Misprint development notes


To Do

Complete the pipeline and perform more statistical experiments. // somehow done

Be clear about the precision-recall and conventional ROC relationships // done

Organize and write the Wiki report summarizing (A) Purpose, (B) Background, (C) Implementation, (D) Results and (E) Conclusions (future directions).

Analyze the FLOPs if possible; use the MATLAB Profiler in any case.

Develop test charts

Find regions separately for C, M, Y from the original input

Think about how to find optimal spectral QE for the sensor to identify missing planes // in future directions

Verify the lookup table by holding C and M constant while varying Y and checking the resulting luminance; repeat with each color plane varying in turn. This is just a utility to verify the lookup-table measurements. // done, seems alright


Main Functions

hplPrintLineSensor

To generate an output from an image, the main function is hplPrintLineSensor (it processes the image through the simulated line sensor). The image has to be a TIFF file loaded in MATLAB and normalized, so if you have an image My_Image.tiff encoded with 8-bit precision, you need to do:

image = imread('My_Image.tiff');            % load the CMYK TIFF
precision = 8;                              % bit depth of the file
image = (100*double(image))/(2^precision);  % normalize to the 0-100 range
output = hplPrintLineSensor(image);         % run the simulated line sensor
imagesc(output);                            % display the sensed image

You can also pass additional arguments to hplPrintLineSensor for finer control; here, they are all left at their default values.

The default option is without the lens; to use the lens, you have to do:

output = hplPrintLineSensor(image,[],[],[],1); % the fifth argument set to 1 enables the lens

hpl_image_cmyk_2_reflectance

The function is implemented with two modes, 'fast' and 'slow'. The fast mode uses a log-linear regression to compute the reflectance from the CMYK values, while the slow mode computes a weighted sum over the closest neighbors in the lookup table. Of course, the slow mode is expected to be more accurate than the fast mode.
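
To make the two modes concrete, here is a minimal sketch of the two interpolation ideas (our illustration, not the actual hpl_image_cmyk_2_reflectance code; the table variables cmyk_table and refl_table and the neighbor count k are assumptions):

function refl = cmyk2reflectance_sketch(cmyk, cmyk_table, refl_table, mode)
% cmyk        1x4 query value (same scale as the table entries)
% cmyk_table  Nx4 measured CMYK patch values
% refl_table  NxW measured reflectance spectra (W wavelength samples)
% mode        'fast' (log-linear regression) or 'slow' (weighted neighbors)
switch mode
    case 'fast'
        % Fit log(reflectance) as an affine function of CMYK, then evaluate.
        X = [cmyk_table, ones(size(cmyk_table,1),1)];
        B = X \ log(max(refl_table, eps));         % least-squares fit, one column per wavelength
        refl = exp([cmyk, 1] * B);
    case 'slow'
        % Weighted sum over the k closest table entries (inverse-distance weights).
        k = 4;
        d = sqrt(sum((cmyk_table - repmat(cmyk, size(cmyk_table,1), 1)).^2, 2));
        [d_sorted, idx] = sort(d);
        w = 1 ./ max(d_sorted(1:k), eps);
        w = w / sum(w);
        refl = w' * refl_table(idx(1:k), :);
end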

We can see that the difference between the 2 modes is very small:

[Figure: image from the slow mode, image from the fast mode, and the difference between the two]

Number of lines needed

Adapting the scene to the sensor field of view has some consequences that we did not consider immediately:

  • We should maintain the ratio r = width_of_the_image/width_of_the_sensor, and the corresponding height ratio, when cutting the image into lines before computing the scene through the sensor.
  • Not maintaining this ratio was the reason for the very low signal we had for wider images: the pixels were stretched in the vertical direction, diluting the photons.

This is now fixed. If r is the ratio above, then the output of the sensor will be (height_of_the_image/r) x (length_of_the_sensor); an example of the new result is:
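
As a quick numeric illustration of this rule (the sensor length and the variable names below are hypothetical; only the ratio logic comes from the text above):

image_width   = 792;                      % example input image width in pixels
image_height  = 650;
sensor_length = 1024;                     % assumed number of pixels on the line sensor

r = image_width / sensor_length;          % width ratio to preserve when cutting into lines
output_rows = round(image_height / r);    % number of sensed lines
output_cols = sensor_length;
fprintf('sensed image: %d x %d pixels\n', output_rows, output_cols);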

Original and sensed image

The light is now set to a fixed value of 32, and we no longer readjust the luminance. Readjusting has the effect of giving the same value to all the uniform scenes (patches) that we use for calibration; it also makes images very difficult to compare, since the output no longer corresponds to the absolute color of a pixel; and it changes the noise properties of each color.

For a CMYK value of (i-1)*ones(1,1,4), we find that the output varies in a reasonable fashion:

[Figure: sensor output vs. i; horizontal axis = K (from CMYK), vertical axis = sensor output]
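
This check can be reproduced with a small script along these lines (a sketch, assuming hpl_simulate_pipeline accepts a small uniform CMYK patch and returns the fields used below):

vals = zeros(1, 101);
for i = 1:101
    patch = (i-1) * ones(8, 8, 4);              % uniform patch, all CMYK planes equal to i-1
    res = hpl_simulate_pipeline(patch);
    vals(i) = mean(res.output_average(:));      % average sensor response for this patch
end
plot(0:100, vals);
xlabel('K (from CMYK)'); ylabel('Sensor output');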

Sensor without ISET

The properties of the output are stored in two files, models/out_mean.mat and models/out_std.mat: these are the precomputed sensor outputs for the values of the CMYK table. The function simulates the whole ISET pipeline but is much faster than the ISET computation, as we don't have to go through all the reflectance and luminance calculations. The downside, of course, is that all the parameters are fixed.
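
The lookup idea can be pictured as follows (a sketch only; the variable and field names inside the .mat files are assumptions):

S_mean = load('models/out_mean.mat');        % precomputed mean output per CMYK table entry
S_std  = load('models/out_std.mat');         % precomputed output standard deviations

k = 17;                                      % hypothetical index into the CMYK table
mu    = S_mean.out_mean(k);                  % assumed variable name inside the file
sigma = S_std.out_std(k);                    % assumed variable name inside the file
noisy_output = mu + sigma * randn();         % one sensor sample, no full ISET simulation needed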

To give an idea of how much faster this method is, we can compare the simulation on the test4.tiff image (792x650): it takes roughly 45 minutes with the line-sensor ISET simulation, while it takes only around 35 seconds with this method.

Example of code:

image = imread('My_Image.tiff');
precision = 8;
image = (100*double(image))/(2^precision);   % same normalization as above
output = hpl_simulate_pipeline(image);       % table-based simulation instead of the full ISET run
imagesc(output);

simulated image without ISET

Probability of correct print

We now have a function that calculates how "likely" it is that your image has been generated from a given CMYK image. This function is hpl_probability_of_misprint; an example of use is:

image = (100*double(imread('input/test4.tiff')))/(2^8);   % load and normalize the CMYK TIFF
res = hpl_simulate_pipeline(image);                        % expected output, noisy output, per-pixel stds
proba_map = hpl_probability_of_misprint(res.output_average, res.output_noisy, res.map_of_stds);

In proba_map you then have two fields:

  • proba_map.print gives you, for each block being tested, a likelihood of the deviation from the average image (a rough sketch of the idea follows below).
  • proba_map.misprint = 1 - proba_map.print (up to rounding).
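
As a very rough sketch of what such a block likelihood could look like (our guess at the idea, not the actual hpl_probability_of_misprint code): normalize the deviation of the noisy output from the expected average by the expected standard deviation, and turn it into a two-sided Gaussian tail probability:

z = abs(res.output_noisy - res.output_average) ./ max(res.map_of_stds, eps);
p_print_pixel = erfc(z / sqrt(2));   % close to 1 where the output matches the reference
% (the real function aggregates such values over blocks before reporting proba_map.print)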

You can also run the script hpl_proba_script.m;

From this probability map we extract three statistics:

mean(mean(proba_map.print)), min(min(proba_map.print)), and sum(sum(proba_map.print < 0.5))/numel(proba_map.print).

We can compute the conditional distributions of these three statistics given the random variable misprint (0 for a correct print, 1 for a misprint). Assuming that the statistics are independent given misprint, we get this simple probability model:

Simple statistical model
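
A minimal sketch of how the independence assumption combines the learned distributions (all names here are ours; cdp_print and cdp_misprint are assumed to be 3-by-B tables, row i being the learned distribution of statistic i over its B quantization bins):

function p_misprint = combine_statistics_sketch(bins, cdp_print, cdp_misprint, prior_misprint)
% bins: 1x3 vector of quantization-bin indices for the three statistics
lik_print    = prod(cdp_print(sub2ind(size(cdp_print), 1:3, bins)));
lik_misprint = prod(cdp_misprint(sub2ind(size(cdp_misprint), 1:3, bins)));
p_misprint = prior_misprint * lik_misprint / ...
    (prior_misprint * lik_misprint + (1 - prior_misprint) * lik_print);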

A natural development of this idea is to add new variables describing the kind of misprint that occurred:

Developed statistical model

We can learn these CDPs from a set of images on which we create random misprints; this is done by the function hpl_compute_cdp.

One can see that these statistics are quite discriminative, as many of the values on the following curves are close to 0 or 1:

CDP Tables

These CDPs were computed with our largest training set so far, 999 images. The quantization step is also adapted to the range of the actual values, so the x-axis just means the i-th quantization value.
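
One simple way to adapt the quantization to the observed range of a statistic is shown below (a sketch; the toolbox may use a different rule, and train_vals is placeholder data standing in for the values of one statistic over the training images):

train_vals = rand(1, 999);                  % placeholder: one statistic over the training images
B = 20;                                     % number of quantization bins
edges = linspace(min(train_vals), max(train_vals), B + 1);
counts = histc(train_vals, edges);
counts(B) = counts(B) + counts(B + 1);      % fold the exact-maximum bin into the last bin
cdp_row = counts(1:B) / sum(counts(1:B));   % one row of the learned CDP table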

Once these CDPs are learned, we can test our method on a test set; this is done by the function hpl_test_statistics.

First experiments seem to show that this could give very good results.

(For disk space considerations, only very small training and testing sets are on svn).

The whole method can be run with:

cdp_table = hpl_compute_cdp();
results = hpl_test_statistics(cdp_table)

First training set: 109 images; first testing set: 33 images. This gives us the ROC curve on the left.

The curves on the right show results that are much better and probably closer to the capacities of the system, as the training there was done on 999 images and the testing on 339 images. Some of the improvement probably also comes from a change in the quantization of the statistics: while previously we were using a uniform quantization on [0,1], it is now adapted to the range of the actual values taken by the statistics (for instance, a minimum higher than 0.2 almost never happens).

[Figures: ROC curves for the two experiments]

or, for those who prefer, the precision-recall curves:

[Figures: precision-recall curves for the two experiments]

recall = TP/(TP+FN); precision = TP/(TP+FP). The small red cross represents the value obtained with a threshold of 0.5.
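
For reference, such curves can be traced from a set of test scores along these lines (a sketch with placeholder data; hpl_test_statistics presumably does something similar with the real scores and labels):

labels = [true(1, 50), false(1, 50)];        % placeholder ground truth (true = misprint)
scores = rand(1, 100);                       % placeholder estimated misprint probabilities
thresholds = 0:0.01:1;
[tpr, fpr, prec] = deal(zeros(size(thresholds)));
for t = 1:length(thresholds)
    pred = scores >= thresholds(t);
    TP = sum(pred & labels);   FP = sum(pred & ~labels);
    FN = sum(~pred & labels);  TN = sum(~pred & ~labels);
    tpr(t)  = TP / max(TP + FN, 1);          % recall
    fpr(t)  = FP / max(FP + TN, 1);          % false positive rate
    prec(t) = TP / max(TP + FP, 1);          % precision
end
plot(fpr, tpr); xlabel('False positive rate'); ylabel('Recall');       % ROC curve
figure; plot(tpr, prec); xlabel('Recall'); ylabel('Precision');        % precision-recall curve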

Compressive sensing and misprint detection

General ideas about compressive sensing

Compressive sensing usually involves two steps: sensing/measurement (encoding) and reconstruction (decoding).

In our case, we don't need to worry about the most computational part, which is reconstruction. We only need to focus on the sensing part: since we only want to detect misprints, that can be done directly in the transform representation.

Good tutorial slides on compressive sensing can be found here. They are also interesting because they contain lots of information about image representations and examples without going into deep mathematical detail.

http://www.dsp.ece.rice.edu/%7Erichb/talks/cs-tutorial-ITA-feb08-complete.pdf

Sensing with hardware

Our goal is to find a way to measure only a limited number of characteristics, instead of the entire (line) array of information. The hardware should then be able to measure those "projections" itself, and we would only have to process those feature vectors.

There is an interesting article on hardware compressive sensing for imaging. Basically, it explains how to capture a transformation of the image (DCT, wavelets, or some random basis) instead of the entire raw data. It can be found here:

http://users.ece.gatech.edu/%7Ejustin/Publications_files/robucci08co.pdf

Some questions about it: How fast can such a system work?

Is it the A/D converter that limits the speed of the system?

How complicated can the "projection" part of the analog circuit be made?

The selection problem

In the previous paper, they explain that they can perform several types of projections directly in hardware, but the problem they encounter with a wavelet transform is that they don't know which information to keep for reconstruction, because they don't know where the information lies in the images they are capturing.

But we might have a solution in our case, because our problem is simpler. In fact, we know exactly where the information can be found, since we control perfectly what is being printed. We can therefore precompute the transformation of our file and find where the information lies in the transform domain.

The problem is then to find a way to control the hardware, using the precomputed transformation, so that it knows which coefficients to keep and which ones to throw away while the paper is being printed.

We can imagine fixing a number of features and controlling the hardware to give us those features as the paper is being printed. The last step would be to compare those features with the precomputed ones and decide whether there has been a misprint or not.
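
A possible sketch of this selection scheme, assuming a DCT-type hardware projection (our illustration, not an existing function; dct2 requires the Image Processing Toolbox, and the sensed coefficients are faked here by adding noise to the reference):

reference = (100*double(imread('input/test4.tiff')))/(2^8);   % reference CMYK file
reference = mean(reference, 3);              % single plane, just to keep the sketch short
k = 200;                                     % number of coefficients the hardware would measure

C_ref = dct2(reference);                     % precomputed transform of the reference
[ignore, order] = sort(abs(C_ref(:)), 'descend');
keep = order(1:k);                           % coefficient locations to request from the hardware

C_sensed = C_ref(keep) + 0.01 * max(abs(C_ref(keep))) * randn(k, 1);   % faked measurements
misprint_score = norm(C_sensed - C_ref(keep)) / norm(C_ref(keep));
is_misprint = misprint_score > 0.05;         % threshold to be tuned on real data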

References and related information

These are sites with information we think will be relevant for our analyses.

  1. Compressed sensing site
  2. Introductory paper from Rice University
  3. Prof. Donoho's paper: mathematical foundations