*Large-scale stochastic linear inversion using hierarchical matrices*. S. Ambikasaran, J.Y. Li, P.K. Kitanidis and E. Darve. *Computational Geosciences*, 17(6):913-927, 2013. ([DOI](http://dx.doi.org/10.1007/s10596-013-9364-0))

Stochastic inverse modeling deals with the estimation of functions from sparse data, a problem with a nonunique solution, with the objective of evaluating best estimates, measures of uncertainty, and sets of solutions that are consistent with the data. As finer resolutions become desirable, the computational requirements increase dramatically when conventional solvers are used. This paper develops a method for solving large-scale stochastic linear inverse problems based on the hierarchical matrix (or $\mathcal{H}^2$-matrix) approach. The proposed approach can also exploit the sparsity of the underlying measurement operator, which relates observations to unknowns. Conventional direct algorithms for solving large-scale linear inverse problems using stochastic linear inversion techniques typically scale as $\mathcal{O}(n^2 m + n m^2)$, where $n$ is the number of measurements and $m$ is the number of unknowns; typically $n \ll m$. In contrast, the algorithm presented here scales as $\mathcal{O}(n^2 m)$, i.e., linearly with the larger problem dimension $m$. The algorithm also allows quantification of uncertainty in the solution at a computational cost that likewise grows only linearly in the number of unknowns. The speedup gained is significant since the number of unknowns $m$ is often large. The effectiveness of the algorithm is demonstrated by formulating a realistic crosswell tomography problem as a stochastic linear inverse problem and solving it. In the case of the crosswell tomography problem, the sparsity of the measurement operator allows the cost of the proposed algorithm to be reduced further, from $\mathcal{O}(n^2 m)$ to $\mathcal{O}(n^2 \sqrt{m} + n m)$. The computational speedup gained by the new algorithm makes it easier, among other things, to optimize the locations of sources and receivers by minimizing the mean square error of the estimation; without this fast algorithm, such optimization would be computationally impractical using conventional methods.
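As a rough illustration of the conventional dense baseline that the abstract's $\mathcal{O}(n^2 m + n m^2)$ figure refers to (not the paper's $\mathcal{H}^2$-matrix algorithm), a linear-Gaussian stochastic inversion can be sketched with NumPy. All sizes, the random measurement operator, and the exponential prior covariance below are illustrative assumptions, chosen only to make the cost structure visible:

```python
import numpy as np

# Illustrative sizes; the paper targets n << m with m large.
rng = np.random.default_rng(0)
n, m = 20, 500

# Hypothetical dense measurement operator H (n x m) relating unknowns to data.
H = rng.standard_normal((n, m)) / np.sqrt(m)

# Prior covariance Q (m x m) from an exponential kernel on a 1-D grid
# (a stand-in for the geostatistical prior; not the paper's test case).
x = np.linspace(0.0, 1.0, m)
Q = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)
R = 1e-4 * np.eye(n)  # measurement-noise covariance

# Synthetic truth and noisy data y = H s + noise.
s_true = rng.multivariate_normal(np.zeros(m), Q)
y = H @ s_true + rng.multivariate_normal(np.zeros(n), R)

# Conventional dense solve: forming H Q costs O(n m^2), and (H Q) H^T costs
# O(n^2 m) -- together the O(n^2 m + n m^2) scaling quoted in the abstract.
HQ = H @ Q
S = HQ @ H.T + R                        # n x n data-space covariance
s_hat = HQ.T @ np.linalg.solve(S, y)    # posterior-mean (best) estimate

# Uncertainty quantification: posterior covariance; its diagonal gives
# pointwise variances. Dense evaluation again requires the O(n m^2) product.
P = Q - HQ.T @ np.linalg.solve(S, HQ)
```

The point of the sketch is that every dense product involving $Q$ carries the $m^2$ factor; the paper's contribution is to apply such products hierarchically so that the cost grows only linearly in $m$.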

```bibtex
@ARTICLE{darve13,
  AUTHOR   = {S. Ambikasaran and J.Y. Li and P.K. Kitanidis and E. Darve},
  TITLE    = {Large-scale stochastic linear inversion using hierarchical matrices},
  YEAR     = {2013},
  JOURNAL  = {Computational Geosciences},
  VOLUME   = {17},
  NUMBER   = {6},
  PAGES    = {913--927},
  ABSTRACT = {Stochastic inverse modeling deals with the estimation of functions from sparse data, a problem with a nonunique solution, with the objective of evaluating best estimates, measures of uncertainty, and sets of solutions that are consistent with the data. As finer resolutions become desirable, the computational requirements increase dramatically when conventional solvers are used. This paper develops a method for solving large-scale stochastic linear inverse problems based on the hierarchical matrix (or $\mathcal{H}^2$-matrix) approach. The proposed approach can also exploit the sparsity of the underlying measurement operator, which relates observations to unknowns. Conventional direct algorithms for solving large-scale linear inverse problems using stochastic linear inversion techniques typically scale as $\mathcal{O}(n^2 m + n m^2)$, where $n$ is the number of measurements and $m$ is the number of unknowns; typically $n \ll m$. In contrast, the algorithm presented here scales as $\mathcal{O}(n^2 m)$, i.e., linearly with the larger problem dimension $m$. The algorithm also allows quantification of uncertainty in the solution at a computational cost that likewise grows only linearly in the number of unknowns. The speedup gained is significant since the number of unknowns $m$ is often large. The effectiveness of the algorithm is demonstrated by formulating a realistic crosswell tomography problem as a stochastic linear inverse problem and solving it. In the case of the crosswell tomography problem, the sparsity of the measurement operator allows the cost of the proposed algorithm to be reduced further, from $\mathcal{O}(n^2 m)$ to $\mathcal{O}(n^2 \sqrt{m} + n m)$. The computational speedup gained by the new algorithm makes it easier, among other things, to optimize the locations of sources and receivers by minimizing the mean square error of the estimation; without this fast algorithm, such optimization would be computationally impractical using conventional methods.},
  URL      = {http://dx.doi.org/10.1007/s10596-013-9364-0},
}
```