This page provides references and links to introductory material on UQ and to computational tools that are freely available on the Web.

In spite of the pervasive nature of computational analysis in today's engineering practice, it remains very difficult to objectively establish confidence levels in numerical predictions. This is due to the differences between the real device of interest and the corresponding computer models and, in general, to the lack of knowledge associated with the physical processes. Uncertainty quantification (UQ) plays a fundamental role in the validation of simulation methodologies and aims at developing rigorous methods to characterize the impact of variability and lack of knowledge on the final quantity of interest. At the boundary between discrete mathematics, probability, optimization, and physics, UQ is a growing field with broad applications to a variety of engineering disciplines.

The American Institute of Aeronautics and Astronautics (AIAA) "Guide for the Verification and Validation of CFD Simulations" defines errors as recognizable deficiencies of the models or the algorithms employed, and uncertainties as potential deficiencies due to lack of knowledge. This definition does not precisely distinguish between the mathematics and the physics. It might be more useful to define errors as associated with the translation of a mathematical formulation into a numerical algorithm (and a computational code). Examples are round-off errors, the limited convergence of certain iterative algorithms, and implementation mistakes (bugs). With this definition of errors, uncertainties are naturally associated with the choice of the physical models and with the specification of the input parameters required for performing the analysis. As an example, numerical simulations require the precise specification of boundary conditions, yet typically only limited information is available from the corresponding experiments and observations. Variability, vagueness, ambiguity, and confusion are therefore all factors that introduce uncertainties into the simulations. A more precise characterization is based on the distinction between aleatory and epistemic uncertainties.

Aleatory uncertainty (also referred to as variability, stochastic uncertainty, or irreducible uncertainty) is the physical variability present in the system being analyzed or its environment. It is not strictly due to a lack of knowledge and cannot be reduced. The determination of material properties or operating conditions of a physical system typically leads to aleatory uncertainties; additional experimental characterization might provide a more conclusive description of the variability but cannot eliminate it completely. Aleatory uncertainty is normally characterized using probabilistic approaches.
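As an illustration, the simplest probabilistic treatment of an aleatory input is Monte Carlo sampling: draw samples of the input from its distribution, evaluate the model for each, and compute statistics of the output. The toy model, the Gaussian input distribution, and the sample size below are illustrative assumptions only, not part of the original text.

```python
import random
import statistics

# Hypothetical model: output of interest as a function of one uncertain input,
# e.g. a response inversely proportional to a material property k.
def model(k):
    return 1.0 / k

random.seed(0)

# Aleatory input: assume the property has measured mean 2.0 and standard
# deviation 0.1 (illustrative values). Sample it, propagate each sample.
samples = [model(random.gauss(2.0, 0.1)) for _ in range(10000)]

# Characterize the output variability probabilistically.
mean = statistics.mean(samples)
std = statistics.stdev(samples)
print(f"output mean ~ {mean:.4f}, output std ~ {std:.4f}")
```

Note that more samples sharpen the estimates but never remove the spread itself: the variability is a property of the system, which is exactly what makes it irreducible.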

Epistemic uncertainty (also called reducible uncertainty or incertitude) is a potential deficiency that is solely due to a lack of knowledge. It can arise from assumptions introduced in the derivation of the mathematical model or from simplifications related to the correlation or dependence between physical processes. Epistemic uncertainty can be reduced by using, for example, a combination of calibration, inference from experimental observations, and improvement of the physical models. It is not well characterized by probabilistic approaches, because the nominal lack of knowledge makes it difficult to infer any statistical information. Typical examples of sources of epistemic uncertainty are turbulence modeling assumptions and surrogate chemical kinetics models.
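A common non-probabilistic alternative for an epistemic parameter is interval analysis: only bounds on the parameter are asserted, no distribution, and only bounds on the output are reported. The model, the parameter bounds, and the sweep resolution below are hypothetical, chosen purely for illustration.

```python
# Hypothetical model depending on a poorly known parameter c
# (e.g. a closure coefficient in a physical model).
def model(c):
    return 3.0 * c + 1.0

# Epistemic knowledge: c is only known to lie in [0.2, 0.5];
# no probability distribution is claimed over this interval.
lo, hi = 0.2, 0.5

# Sweep the interval and report output bounds rather than statistics.
outputs = [model(lo + (hi - lo) * i / 50) for i in range(51)]
print(f"output bounds: [{min(outputs):.2f}, {max(outputs):.2f}]")
```

A uniform sweep suffices here only because the toy model responds monotonically to c; in general the extremes must be found by optimization over the interval. Narrowing the interval (e.g. via calibration against experiments) directly narrows the output bounds, which is the sense in which this uncertainty is reducible.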

Sensitivity analysis (SA) investigates the connection between the inputs and outputs of a (computational) model; more specifically, it identifies how the variability in an output quantity of interest is connected to an input of the model and which input sources dominate the response of the system. Uncertainty analysis, on the other hand, aims at identifying the overall output uncertainty in a given system. The main difference is that sensitivity analysis does not require a characterization of the input uncertainty from a real device; it can be conducted purely on the basis of the mathematical form of the model. Consequently, large output sensitivities (identified using SA) do not necessarily translate into important uncertainties, because the input uncertainty might be very small in the device of interest. SA is often based on the concept of sensitivity derivatives, the gradient of the output of interest with respect to the input variables. The overall sensitivity is then evaluated using a Taylor series expansion which, to first order, is equivalent to a linear relationship between inputs and outputs.
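The first-order approach described above can be sketched in a few lines: estimate the sensitivity derivatives by finite differences, then combine them with assumed input standard deviations via the linearized (first-order Taylor) propagation formula. The model f, the nominal point, and the input standard deviations below are hypothetical.

```python
import math

# Hypothetical model with two inputs.
def f(x, y):
    return x**2 + 3.0 * y

x0, y0 = 1.0, 2.0    # nominal input values
sx, sy = 0.1, 0.05   # assumed input standard deviations (illustrative)
h = 1e-6             # finite-difference step

# Sensitivity derivatives via central differences (pure SA:
# no input uncertainty is needed for this step).
dfdx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
dfdy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)

# First-order Taylor propagation: linearize f around the nominal point and,
# assuming independent inputs, combine the contributions in quadrature.
sigma_out = math.sqrt((dfdx * sx)**2 + (dfdy * sy)**2)
print(f"df/dx ~ {dfdx:.3f}, df/dy ~ {dfdy:.3f}, sigma_out ~ {sigma_out:.3f}")
```

The example also illustrates the distinction drawn above: y has the larger sensitivity derivative here, yet x contributes more to the output uncertainty because its assumed input spread is larger.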

The paragraphs above are extracted from the Lecture Notes prepared by G. Iaccarino for a von Karman Institute Short Course.

- UQ Tutorial given at KAUST, March 2012 (size 21Mb)
- UQ Tutorial at the 2009 SIAM CSE Conference
- Stanford UQ Workshop, 2008
- Opportunities and Challenges in Uncertainty Quantification for Complex Interacting systems, USC 2009
- Stochastic Multiscale Methods, USC 2009
- Opportunities and Challenges in Applying PCE to Engineering, USC 2008
- Sandia CSRI Workshop on Mathematical Methods for Verification and Validation, 2007
- Uncertainty Quantification Workshop at UIUC, 2007
- Uncertainty Quantification Workshop at University of Arizona, 2007

- Stanford University
- Purdue University
- University of Texas Austin
- California Institute of Technology
- University of Michigan Ann Arbor

- "Ideas Underlying Quantification of Margins and Uncertainties (QMU): A White Paper" by T. Trucano, 2006.
- "Quantification of Margins and Uncertainties (QMU) - Jason Report" 2005.
- "Prediction and Uncertainty in Computational Modeling of Complex Phenomena: A Whitepaper" by T. Trucano, 1998.
- "Error Analysis and Simulations of Complex Phenomena" by M. A. Christie et. al., 2005.
- "Solution Error Models for Uncertainty Quantification" by J. Glimm et al., 2006.
- "Uncertainty Analysis for Fluid Mechanics with Applications" by R. W. Walters and L. Huyse, 2002.
- "Quantification of Uncertainty in Computational Fluid Dynamics", P. J. Roache, 1997.
- "Validation Methodology in Computational Fluid Dynamics" by W. L. Oberkampf and T. Trucano, 2000.
- "Calibration, validation, and sensitivity analysis: What’s what" by T. Trucano, et al., 2006.
- "Special Number of RESS on Epistemic Uncertainty, 2004"

- The Dakota Toolkit from Sandia National Lab
- The PSUADE Software Library from Lawrence Livermore National Lab
- Sparse Grid Interpolation Toolbox (for Matlab) from Andreas Klimke (U. Stuttgart)