Multi-Level Local Optimization
Each layer of local optimization simplifies the search space for the layer above.
MLLO-RIQ: Perform random (Monte-Carlo) optimization of:
- f'' : Information-based local optimization of:
- f' : Quasi-Newton local optimization of:
- f
Notes:
Most global optimization methods we've seen have global and local phases of search. We can generalize this approach as multi-level local optimization, where each level of local optimization simplifies the search space for the level above; that is, it widens the "basin of attraction" of the global minimum as seen by the layer above, so that layer's search reaches it from a larger set of starting points. [worked example]
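To make the plateau idea concrete, here is a minimal sketch, in Python with SciPy, of a single level of this transform: f'(x) is defined as the value of f at the local minimum that quasi-Newton (BFGS) optimization reaches from x, so f' is approximately piecewise constant over the basins of attraction of f. The objective f and the helper names here are illustrative, not taken from the source.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Illustrative multimodal objective: a sinusoid laid over a quadratic bowl.
    return np.sum(x**2) + 2.0 * np.sum(np.sin(3.0 * x))

def local_opt_value(g, x0):
    # Quasi-Newton (BFGS) local optimization started at x0; returns the value
    # of g at whichever local minimum it converges to.
    return minimize(g, x0, method="BFGS").fun

def f_prime(x):
    # The transformed landscape: approximately piecewise constant over each
    # basin of attraction of f, so search at the level above only has to move
    # between basins rather than descend within them.
    return local_opt_value(f, np.asarray(x, dtype=float))

# Nearby starts in the same basin typically map to the same plateau value.
print(f_prime([0.9]), f_prime([1.1]))
```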
This approach is illustrated by my multi-level local optimization method MLLO-RIQ, which performs random (Monte-Carlo) optimization of an information-based local optimization of a quasi-Newton local optimization of f. Viewed bottom-up, quasi-Newton local optimization transforms f into a "flattened" or "plateaued" landscape f'; information-based local optimization then efficiently transforms f' into a landscape f'' with fewer, lower plateaus; and f'' is searched randomly at the top level.
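A hedged sketch of this three-level structure follows, again in Python with SciPy. The bottom level uses BFGS as the quasi-Newton optimizer; the information-based middle level is not specified in this section, so a simple pattern search over plateau values stands in for it; the top level is plain Monte-Carlo sampling. All function names and parameters are illustrative, and this is not the author's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def quasi_newton_level(f, x):
    # Bottom level: f'(x) = value of f at the BFGS local minimum reached from x.
    return minimize(f, np.asarray(x, dtype=float), method="BFGS").fun

def info_based_level(f, x, step=0.5, iters=10):
    # Middle level stand-in: a compass (coordinate pattern) search over the
    # plateaued values f'(.), accepting coordinate moves that lower the plateau
    # and shrinking the step when none do.  The source does not specify the
    # information-based algorithm; this is only a placeholder for it.
    best_x = np.asarray(x, dtype=float)
    best_v = quasi_newton_level(f, best_x)
    for _ in range(iters):
        improved = False
        for i in range(best_x.size):
            for sign in (1.0, -1.0):
                cand = best_x.copy()
                cand[i] += sign * step
                v = quasi_newton_level(f, cand)
                if v < best_v:
                    best_x, best_v, improved = cand, v, True
        if not improved:
            step *= 0.5
    return best_v

def random_top_level(f, dim, n_samples=20, box=5.0, seed=0):
    # Top level: Monte-Carlo random search over f''.
    rng = np.random.default_rng(seed)
    starts = rng.uniform(-box, box, size=(n_samples, dim))
    return min(info_based_level(f, x) for x in starts)

if __name__ == "__main__":
    def f(x):
        # Illustrative multimodal test function.
        return np.sum(x**2) + 2.0 * np.sum(np.sin(3.0 * x))
    print(random_top_level(f, dim=2))
```

The point of the sketch is only the composition: each level optimizes the landscape produced by the level below, and swapping the placeholder middle level for the actual information-based method would recover the intended nesting.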