Sparse reflectance recovery


We are working on a plan for a paper on this topic now. The main points will be:

Representation only

1. Using sparse representations merely to represent the data is only mildly interesting; it is not all that helpful.

1.1 One approach is to take, say, nine PCA terms and impose sparsity on those coefficients. This does not work well.

1.2 A second approach is to learn a dictionary with, say, 20 terms.

1.2.1 In that case, the method we use to learn the dictionary matters.

1.2.2 It doesn't cost much, because at any given accuracy level you can choose a degree of over-completeness and still be relatively efficient. Suppose you have M reflectances and the target accuracy is A, which PCA reaches with N terms. With PCA you always store N*M coefficients plus the basis vectors. With the sparse representation you store k coefficients per reflectance plus the S dictionary elements, so you need k*M coefficients plus roughly k*log2(S)*M bits to record which dictionary elements each reflectance uses (see the worked example below). So you might gain some storage efficiency.
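
The storage comparison above is easy to make concrete. Here is a small worked example in Python; the specific numbers (M, N, S, k, bits per coefficient) are hypothetical placeholders chosen only to illustrate the arithmetic, not measured values.

```python
# Illustrative storage comparison: PCA coefficients vs. a sparse dictionary code.
import math

M = 1000          # number of reflectance spectra to encode (hypothetical)
N = 9             # PCA terms needed to reach the target accuracy A
S = 20            # dictionary size (over-complete)
k = 3             # nonzero coefficients per reflectance in the sparse code
coef_bits = 32    # bits to store one coefficient (e.g., float32)

# PCA: every reflectance uses all N coefficients (the basis vectors are shared overhead).
pca_bits = N * M * coef_bits

# Sparse code: k coefficients per reflectance, plus about log2(S) bits per nonzero
# to record which dictionary element it multiplies.
sparse_bits = k * M * coef_bits + k * M * math.ceil(math.log2(S))

print(f"PCA:    {pca_bits / 8 / 1024:.1f} KiB")
print(f"Sparse: {sparse_bits / 8 / 1024:.1f} KiB")
```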

On the whole, simply representing data with this method is not a big win.

Estimation from RGB data using sparse methods

For estimation, we compared against linear methods that use least squares to find an estimate within the PCA space. If we use the knowledge that there is a sparse representation on a learned dictionary, and we solve a least-squares problem with a sparsity constraint (the least-squares term predicts the sensor data, and the L1 penalty enforces sparsity), then we do reasonably well, but at a very high computational cost.
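
A minimal sketch of this L1-constrained estimate is below, written as iterative soft thresholding (ISTA) in Python. The dictionary D, sensor responsivities T, and all sizes and data are hypothetical placeholders; in practice the learned dictionary and measured sensors would be substituted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_wave, n_atoms = 31, 20                      # e.g., 400-700 nm in 10 nm steps, 20 dictionary terms
D = rng.standard_normal((n_wave, n_atoms))    # learned reflectance dictionary (placeholder)
T = rng.random((n_wave, 3))                   # camera RGB responsivities (placeholder)
A = T.T @ D                                   # maps sparse weights w to predicted RGB (3 x n_atoms)
rgb = rng.random(3)                           # observed sensor values (placeholder)

def ista(A, b, lam=0.05, n_iter=500):
    """Minimize 0.5*||A w - b||^2 + lam*||w||_1 by iterative soft thresholding."""
    w = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ w - b)              # gradient of the least-squares term
        z = w - step * grad
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold (L1)
    return w

w = ista(A, rgb)
reflectance_estimate = D @ w                  # back to a spectral reflectance
print("nonzero weights:", np.count_nonzero(w))
```

The loop over iterations is where the high computational cost shows up: each RGB measurement requires its own iterative L1 solve.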

Estimation from RGB and a simplified sparse idea (look-up like)

A third approach is to use sparsity in a constrained way. Suppose we create a large dictionary whose elements form a dense sampling grid in RGB space. Each measured RGB value is then near a few points on the grid, and each grid point is associated with a particular reflectance. We form the estimate from the reflectances of the nearby grid points (a sketch follows below). The error here is roughly the same as for the sparse estimation above. There is no L1 search involved, so in principle it should be much faster; it is almost a look-up table method. SL has curves on this. We haven't looked at noise or computational efficiency, and we haven't built tools for examining the selection of surfaces. This is a locally linear method, but not necessarily globally linear.
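
Here is a minimal sketch of the look-up-like estimator, assuming we already have a large set of dictionary reflectances whose RGB values tile RGB space. The data and names (T, grid_reflectances, the inverse-distance weighting) are hypothetical placeholders, not the method's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n_wave = 31
T = rng.random((n_wave, 3))                       # sensor responsivities (placeholder)
grid_reflectances = rng.random((5000, n_wave))    # large dictionary sampling RGB space (placeholder)
grid_rgb = grid_reflectances @ T                  # RGB value of each dictionary element

def lookup_estimate(rgb, k=4):
    """Estimate a reflectance from the k dictionary entries whose RGB is nearest.

    Locally linear: the estimate is a distance-weighted average of the reflectances
    attached to nearby grid points, so no iterative L1 search is needed.
    """
    d = np.linalg.norm(grid_rgb - rgb, axis=1)    # distance from the query to every grid point
    idx = np.argsort(d)[:k]                       # k nearest neighbours in RGB space
    w = 1.0 / (d[idx] + 1e-9)                     # inverse-distance weights
    w /= w.sum()
    return w @ grid_reflectances[idx]             # weighted average of their reflectances

query_rgb = rng.random(3)
estimate = lookup_estimate(query_rgb)
print(estimate.shape)                             # (31,) spectral reflectance estimate
```

Because the only work per query is a nearest-neighbour search over the grid, the cost is dominated by the dictionary size rather than by an iterative solve.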
