Adversarial Problem: Airport Defense Strategy
Multiple, non-aligned decision makers (adversary)
Multiple Objective Function Optimization
You have been tasked with deciding which U.S. airports, if any, to defend against attack. Your alternatives are to defend none, the top 10, or the top 50. Your objective for the decision is to minimize fatalities and costs. While you'd like to defend all airports and incur no fatalities, that simply isn't a feasible alternative in the real world. This situation could be modeled with one up-front decision, two attributes (fatalities and cost), and a single objective function to be minimized: the weighted sum of those attributes.
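The single-objective version above can be sketched in a few lines. All attribute values and weights here are illustrative assumptions, not real data:

```python
# One up-front decision, two attributes, one weighted-sum objective.
# Alternative -> (expected fatalities, cost in $M); hypothetical numbers.
ALTERNATIVES = {
    "defend none":   (500, 0),
    "defend top 10": (200, 120),
    "defend top 50": (120, 450),
}

W_FATALITIES = 0.7  # assumed weight on fatalities
W_COST = 0.3        # assumed weight on cost


def objective(fatalities, cost):
    """Weighted sum of the two attributes (lower is better)."""
    return W_FATALITIES * fatalities + W_COST * cost


best = min(ALTERNATIVES, key=lambda a: objective(*ALTERNATIVES[a]))
# With these assumed numbers, best == "defend top 10"
```

With any other weights or attribute estimates the ranking could change; the point is only the structure of the model, not the numbers.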
But what if we could also model the utility behind the attacker's decision? In this case the adversary has their own unique set of attribute weights, of which we have some awareness. To gain a fuller picture of the problem, the attacker's decision is incorporated into the model, the additional attributes the attacker seeks to optimize are added, and these are combined into a second objective function for use in the model.
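One common way to operationalize this (a simplifying assumption, not necessarily the only formulation) is a sequential model: the defender chooses first, the attacker best-responds using their own weighted objective, and the defender minimizes against that best response. Every payoff, weight, and attack plan below is hypothetical:

```python
# Two non-aligned decision makers, two objective functions.
# (defense level, attack plan) -> (expected casualties, attacker's cost/risk)
# All values are made-up illustrations.
OUTCOMES = {
    ("defend none",   "attack top airport"):   (500, 10),
    ("defend none",   "attack small airport"): (100, 5),
    ("defend top 10", "attack top airport"):   (150, 60),
    ("defend top 10", "attack small airport"): (100, 5),
    ("defend top 50", "attack top airport"):   (80, 60),
    ("defend top 50", "attack small airport"): (60, 40),
}

DEFENSE_COST = {"defend none": 0, "defend top 10": 120, "defend top 50": 450}


def attacker_objective(casualties, attack_cost):
    # The adversary maximizes casualties, penalized by their own
    # cost/risk, using weights we only partially know (assumed here).
    return 1.0 * casualties - 0.5 * attack_cost


def defender_objective(defense, casualties):
    # The defender minimizes weighted fatalities plus defense cost.
    return 0.7 * casualties + 0.3 * DEFENSE_COST[defense]


def best_response(defense):
    """Attack plan the adversary would choose given this defense."""
    plans = [p for (d, p) in OUTCOMES if d == defense]
    return max(plans, key=lambda p: attacker_objective(*OUTCOMES[(defense, p)]))


def solve():
    """Defender minimizes their objective against the attacker's best response."""
    def value(defense):
        casualties, _ = OUTCOMES[(defense, best_response(defense))]
        return defender_objective(defense, casualties)
    return min(DEFENSE_COST, key=value)
```

The structural point is that the defender can no longer evaluate alternatives against fixed attribute values: the fatalities attribute now depends on the adversary's optimization, so both objective functions must be solved together.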