## A Large Deviations Perspective on Ordinal Optimization
We consider the problem of optimally allocating a computing budget to maximize the probability of correct selection in the ordinal optimization setting. This problem has been studied in the literature within an approximate mathematical framework under the assumption that the underlying random variables are Gaussian. We use large deviations theory to develop a mathematically rigorous framework for determining the optimal allocation of computing resources even when the underlying variables have general, non-Gaussian distributions. Further, in a simple setting, we show that when an indifference zone exists, quick stopping rules can be developed that exploit the exponential decay rates of the probability of false selection. In practice, the distributions of the underlying variables are estimated from generated samples, so estimation errors degrade performance. On a positive note, we show that the corresponding estimates of the optimal allocations converge to their true values as the number of samples used for estimation grows to infinity.
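To make the setting concrete, the sketch below simulates selecting the best of several systems under a fixed computing budget and estimates the probability of false selection (PFS) by Monte Carlo. The Gaussian samples, the specific means, and the allocation fractions are illustrative assumptions, not the paper's method; the point is only that the PFS shrinks as the budget grows, which is the decay behavior the large deviations framework characterizes.

```python
import random

def estimate_pfs(means, allocation, budget, trials=2000, seed=0):
    """Monte Carlo estimate of the probability of false selection when a
    total simulation budget is split across systems per `allocation`.
    Unit-variance Gaussian samples are used purely for illustration."""
    rng = random.Random(seed)
    best = max(range(len(means)), key=lambda i: means[i])
    errors = 0
    for _ in range(trials):
        sample_means = []
        for mu, frac in zip(means, allocation):
            n = max(1, int(frac * budget))  # samples given to this system
            s = sum(rng.gauss(mu, 1.0) for _ in range(n)) / n
            sample_means.append(s)
        # false selection: the system with the largest sample mean
        # is not the truly best system
        if max(range(len(means)), key=lambda i: sample_means[i]) != best:
            errors += 1
    return errors / trials

means = [1.0, 0.5, 0.0]    # hypothetical true means; system 1 is best
alloc = [0.5, 0.35, 0.15]  # hypothetical allocation fractions
for budget in (30, 60, 120):
    print(budget, estimate_pfs(means, alloc, budget))
```

Doubling the budget should roughly square the PFS when the decay is exponential, which is the regime the paper's stopping rules exploit.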