Search game


A search game is a zero-sum game between at least one searcher and one or more targets, immobile or mobile, that takes place in a set called the search space. The searcher(s) must detect or capture the target(s) under resource constraints, and can choose any continuous trajectory subject to a maximal velocity constraint. As mathematical models, search games have historically been applied to areas such as the hide-and-seek games that children play, or to tactical military situations such as anti-submarine warfare or air defense, in which a searching vehicle sweeps a region to intercept an adversary [1]. Today, these models extend to cybersecurity, where a defender traverses a "state space" of systems and networks in search of adversarial intrusions. They are also used in biology to model predator–prey interactions [2],[3],[4], where a predator may have finite resources (number of daylight hours, motivation [5], ...).

Definition

A search game introduces:

  • A search space X, which can be a Euclidean domain, a graph, or a more abstract state space.
  • One or more searchers whose trajectory is a measurable function of time, subject to a maximal speed constraint or a movement budget.
  • One or more hiders or targets, static or mobile, that choose an initial location or a trajectory under their own constraints.
  • The sets of strategies available to the hider(s) and the searcher(s).
  • A detection rule, often formulated as "capture" when the distance between searcher and target becomes smaller than a detection radius, or when an observation region is visited.
  • A performance criterion, such as time to detection, detection probability before a horizon, or a more general cost–reward combination.

The game is typically played under uncertainty[6]: players do not necessarily observe each other's exact positions until within detection range; instead, they may have partial information. Strategies may be pure, mixed, feedback, or information-based, depending on the observation structure and the game dynamics.
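To make these components concrete, a minimal discrete instance can be sketched in code. The instance below is purely illustrative (the cell names, unit-time dynamics, and same-cell detection rule are our assumptions, not a standard formulation): the searcher's pure strategy is a walk over the search space, the hider's is a cell, and the payoff is the time to detection.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch of a discrete-time search game: a search space X,
# a detection rule, and a performance criterion (a time horizon).
@dataclass
class SearchGame:
    space: List[str]                    # search space X (here: named cells)
    detect: Callable[[str, str], bool]  # detection rule
    horizon: int                        # performance criterion: time bound

    def time_to_detection(self, walk: List[str], hiding_spot: str) -> int:
        """Payoff of a pure strategy pair: first time the rule fires."""
        for t, cell in enumerate(walk[: self.horizon]):
            if self.detect(cell, hiding_spot):
                return t
        return self.horizon             # not found before the horizon

game = SearchGame(space=["a", "b", "c"],
                  detect=lambda searcher, hider: searcher == hider,
                  horizon=10)
print(game.time_to_detection(["a", "b", "c"], "c"))  # → 2
```

Mixed strategies would then be distributions over such walks and cells, with the expected payoff as the criterion.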

Origins

The area of search games was introduced in the last chapter of Rufus Isaacs' classic book "Differential Games"[7], where Isaacs studies pursuit–evasion and search scenarios with partial information.

Princess and Monster game

The princess and monster game deals with a moving target: a searcher must find a "princess" moving on an interval or a domain, with limited visibility and continuous dynamics. This game illustrates the difficulty of designing optimal strategies when player trajectories are continuous and information is very restricted. It is assumed that neither the searcher nor the hider has any knowledge of the other player's movement until their distance apart is less than or equal to the discovery radius, at which moment capture occurs. The game is zero-sum, with the payoff being the time spent searching.

A natural strategy for searching for a stationary target in a graph (in which arcs have lengths) is to find a minimal closed curve L that covers all the arcs of the graph (L is called a Chinese postman tour), and then to traverse L in each direction with probability 1/2. This strategy works well when the graph is Eulerian. In general, this random Chinese postman tour is an optimal search strategy if and only if the graph consists of a set of Eulerian graphs connected in a tree-like structure.[8] A misleadingly simple example of a graph not in this family consists of two nodes connected by three arcs: the random Chinese postman tour (equivalent to traversing the three arcs in a random order) is not optimal, and the optimal way to search these three arcs is complicated.[9]
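The random Chinese postman tour can be sketched in code. The following is a minimal illustration, assuming the graph is Eulerian so that the Chinese postman tour is simply an Eulerian circuit (computed here with Hierholzer's algorithm); the example graph is ours:

```python
import random
from collections import defaultdict

def eulerian_circuit(edges, start):
    """Hierholzer's algorithm: a closed walk using every edge exactly once.
    `edges` is a list of undirected (u, v) pairs; every vertex must have
    even degree (i.e. the graph is Eulerian)."""
    adj = defaultdict(list)            # vertex -> incident edge ids
    for i, (u, v) in enumerate(edges):
        adj[u].append(i)
        adj[v].append(i)
    used = [False] * len(edges)
    stack, circuit = [start], []
    while stack:
        v = stack[-1]
        while adj[v] and used[adj[v][-1]]:
            adj[v].pop()               # discard edges already traversed
        if adj[v]:
            e = adj[v].pop()
            used[e] = True
            a, b = edges[e]
            stack.append(b if a == v else a)
        else:
            circuit.append(stack.pop())
    return circuit[::-1]               # vertex sequence of the closed tour

def random_chinese_postman_tour(edges, start):
    """Traverse the tour in either direction with probability 1/2."""
    tour = eulerian_circuit(edges, start)
    return tour if random.random() < 0.5 else tour[::-1]

# A 4-cycle: every vertex has even degree, so the minimal closed curve
# covering all arcs is just an Eulerian circuit.
square = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(random_chinese_postman_tour(square, 0))  # e.g. [0, 1, 2, 3, 0]
```

For non-Eulerian graphs the Chinese postman tour must repeat some arcs, and, as noted above, the randomized tour is then not always optimal.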

Unbounded domains

In general, the reasonable framework for searching an unbounded domain, as in the case of an online algorithm, is to use a normalized cost function (called the competitive ratio in the computer science literature). The minimax trajectory for problems of this type is always a geometric sequence (or an exponential function for continuous problems). This result yields an easy method to find the minimax trajectory by minimizing over a single parameter (the generator of the sequence) instead of searching over the whole trajectory space. This tool has been used for the linear search problem, i.e., finding a target on the infinite line, which has attracted much attention over several decades and has been analyzed as a search game.[10] It has also been used to find a minimax trajectory for searching a set of concurrent rays. Optimal searching in the plane is performed by using exponential spirals.[9][11][12] Searching a set of concurrent rays was later rediscovered in the computer science literature as the 'cow-path problem'.[13]
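As a concrete instance of minimizing over the generator, the classical analysis of the linear search problem shows that a strategy turning around at distances r, r², r³, ... (alternating sides) pays at most 1 + 2r²/(r − 1) times the distance to the target in the worst case; a one-dimensional minimization then recovers the well-known doubling strategy with competitive ratio 9:

```python
def worst_case_ratio(r):
    """Worst-case (cost / distance-to-target) ratio of the geometric
    strategy with turning points r, r**2, r**3, ... for r > 1.
    Classical analysis gives 1 + 2*r**2/(r - 1)."""
    return 1.0 + 2.0 * r * r / (r - 1.0)

# Minimize over the single generator r instead of the whole trajectory space.
rs = [1.01 + 0.001 * k for k in range(3000)]
best_r = min(rs, key=worst_case_ratio)
print(best_r, worst_case_ratio(best_r))  # ≈ 2.0 and ≈ 9.0: the doubling strategy
```

The same one-parameter reduction applies to searching m concurrent rays, with the optimum at a different generator.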

Further developments

The field has been developed further by Shmuel Gal[9][11] and Steve Alpern[11], who established the mathematical foundations of search games, especially for continuous search spaces, diverse information structures, and minimax optimality criteria. From the 1990s and 2000s onwards, the literature expanded towards variants on networks, discrete environments, and problems inspired by economics, security, and autonomous robotics.

Modern taxonomy of search games

Recent reviews propose a detailed classification of search games according to the strategies available to searcher and target [14].

  • In Stationary-target search games, only the searcher moves and the target is described by an initial distribution over the search space.
  • In Mobile-target search games, the target actively chooses a trajectory, often under speed or topology constraints, which brings these games close to pursuit–evasion games.
  • In Path-constrained games, both players must move along prescribed paths, for example along a road network, corridors, or shipping lanes. For the searcher, optimal solutions often involve cyclic or randomized sweep policies, sometimes with randomization in time to avoid predictability.
  • In Search–search games, multiple searchers or multiple targets interact, such as competing patrols or coordinated defense and attack coalitions.
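A stationary-target search game, and the role of mixed strategies in it, can be illustrated with a toy two-box instance (this specific instance is ours, for illustration only): the hider picks a box, the searcher picks an inspection order, each inspection takes one time unit, and the payoff is the time until the hider's box is opened. Fictitious play approximates the value and an optimal searcher mix:

```python
# Rows = searcher's inspection order, columns = hiding place; entries are
# the detection times the searcher pays (and the hider collects).
payoff = [[1.0, 2.0],   # inspect box 1 first
          [2.0, 1.0]]   # inspect box 2 first

def fictitious_play(A, iters=20000):
    """Approximate the value of the zero-sum matrix game A by letting each
    player best-respond to the other's empirical strategy frequencies."""
    m, n = len(A), len(A[0])
    row_counts, col_counts = [0] * m, [0] * n
    row_counts[0] += 1
    col_counts[0] += 1
    for _ in range(iters):
        # Searcher (minimizer) best-responds to the hider's empirical mix.
        row = min(range(m),
                  key=lambda i: sum(A[i][j] * col_counts[j] for j in range(n)))
        # Hider (maximizer) best-responds to the searcher's empirical mix.
        col = max(range(n),
                  key=lambda j: sum(A[i][j] * row_counts[i] for i in range(m)))
        row_counts[row] += 1
        col_counts[col] += 1
    total = iters + 1
    mix = [c / total for c in row_counts]
    # Value guaranteed by the searcher's empirical mix (worst hiding place).
    value = max(sum(A[i][j] * mix[i] for i in range(m)) for j in range(n))
    return value, mix

value, mix = fictitious_play(payoff)
print(round(value, 2), [round(p, 2) for p in mix])  # ≈ 1.5, mix ≈ [0.5, 0.5]
```

As expected by symmetry, the searcher should randomize evenly over the two orders, guaranteeing an expected detection time of 1.5; a deterministic order would be exploited by the hider.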

A unifying view of search games and pursuit–evasion games sees many adversarial systems as a succession of phases: a hiding and searching phase (hide–seek), a pursuit–evasion phase after detection, and a possible capture or engagement phase. Each phase has its own information, dynamics, and payoff structure [15],[16].

Decisions made during the search phase determine the initial conditions of the pursuit, such as distance, interception angle, or remaining energy.

A current trend is the study of dynamic search games in which players learn [17] or adapt during the game rather than having complete prior knowledge of the environment. This framework is relevant when the searcher gradually accumulates experience about typical target behaviours or terrain properties. Hybrid models combine game theory with learning techniques, such as reinforcement learning or Bayesian learning, to progressively adjust search and hiding strategies.

Methods

The theoretical analysis of search games relies on tools from game theory, optimization, and optimal control. For simple models, analytical solutions can be obtained by exploiting symmetries, coupling arguments, or geometric transformations.

However, most realistic problems require numerical or approximate methods, such as discretized dynamic programming, value iteration algorithms, or semi-Lagrangian schemes. Challenges arise from the high dimensionality of the state space, nonlinear dynamics, and complex information structures. Acceleration techniques such as phase separation, pruning, or heuristic value estimates mitigate combinatorial explosion while preserving optimality in some cases.
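One such value-iteration scheme is Shapley's iteration for discounted zero-sum stochastic games. The sketch below applies it to a hypothetical two-state search model of our own devising (state 0: target still hidden; state 1: captured, absorbing): at every step each stage game is solved exactly and its value is fed back into the state values.

```python
def value_2x2(M):
    """Exact value of a 2x2 zero-sum matrix game (row player maximizes)."""
    (a, b), (c, d) = M
    for i in range(2):
        for j in range(2):
            e = M[i][j]
            if e == min(M[i]) and e == max(M[0][j], M[1][j]):
                return e                       # pure saddle point
    return (a * d - b * c) / (a + d - b - c)   # fully mixed solution

def shapley_iteration(gamma=0.9, p_capture=0.8, iters=200):
    """Value iteration (Shapley) for a toy two-cell search game: each step
    costs the searcher one time unit, and inspecting the hider's cell
    captures it with probability p_capture."""
    V = [0.0, 0.0]                             # state 1 is absorbing, value 0
    for _ in range(iters):
        # Stage game at state 0: rows = hider's cell, columns = searcher's
        # cell; the hider maximizes the discounted time to capture.
        stay = gamma * V[0]
        M = [[1 + (1 - p_capture) * stay, 1 + stay],
             [1 + stay, 1 + (1 - p_capture) * stay]]
        V = [value_2x2(M), 0.0]
    return V

V = shapley_iteration()
print(round(V[0], 4))  # fixed point of v = 1 + 0.54*v, i.e. 1/0.46 ≈ 2.1739
```

The iteration is a contraction with factor at most gamma, so a few hundred sweeps suffice here; the curse of dimensionality mentioned above appears when the state space, unlike this two-state toy, is a fine discretization of a continuous domain.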

