Slide 13 of 19
Notes:
However, before describing my research in information-based optimization methods, let us further motivate it with another observation: Most global optimization methods waste most of the information they gain. And why not? For many optimization applications, the function to be optimized is cheap to evaluate. Why care about bookkeeping and efficient use of information when you can rapidly perform a dizzying number of function evaluations?
For our purposes, we care very much. When each function evaluation involves running a simulation and then assessing its output, our optimization task merits careful use of the information we gain at such great cost.
For this reason, we can benefit from the ideas of Bayesian reasoning. If we have some probability measure on the class of functions to be optimized, we can build an appropriate bias into our search. When choosing which point in our search space to evaluate next, we ask which point is most likely to be a zero (indicating an unsafe trajectory) given the function evaluations already made. For instance, if our unknown function f is more likely to be continuous than not, and more likely to have a gradual slope than a steep one, then we should next evaluate the point in the space which, were it to evaluate to zero, would require the least slope of f given our previous function evaluations. The result of that evaluation in turn gives us new information about the best point to evaluate after it. [worked example above]
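
As a concrete illustration of this "least required slope" rule, here is a minimal sketch in Python. It assumes a one-dimensional search interval and a handful of prior evaluations; the function names (required_slope, next_point) and the sample data are illustrative inventions, not taken from the slides. The idea: for each candidate x, the smallest slope consistent with f(x) = 0 is the largest of |f_i| / |x_i - x| over the observations made so far, and we evaluate the candidate that minimizes this quantity.

import numpy as np

def required_slope(x, xs, fs):
    """Smallest slope (Lipschitz constant) f would need so that f(x) = 0
    while still passing through every observed point (x_i, f_i)."""
    xs, fs = np.asarray(xs), np.asarray(fs)
    return np.max(np.abs(fs) / np.abs(xs - x))

def next_point(xs, fs, candidates):
    """Pick the candidate whose being a zero demands the gentlest f."""
    slopes = [required_slope(x, xs, fs) for x in candidates]
    return candidates[int(np.argmin(slopes))]

# Hypothetical prior evaluations of the unknown f on [0, 1].
xs = [0.1, 0.5, 0.9]
fs = [2.0, 0.4, 1.5]

# Candidate grid, keeping clear of already-evaluated points.
grid = np.linspace(0.0, 1.0, 201)
candidates = [x for x in grid if min(abs(x - xi) for xi in xs) > 1e-6]

x_next = next_point(xs, fs, candidates)
print(f"evaluate f at x = {x_next:.3f} next")

Each new evaluation of f would then be appended to xs and fs, and next_point rerun, mirroring the iterative loop described above.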