Ecological decision problems frequently require the optimization of a sequence of actions over time, where actions may have both immediate and downstream effects. Dynamic programming can solve such problems only if the dimensionality is sufficiently low. Approximate dynamic programming (ADP) provides a suite of methods applicable to problems of arbitrary complexity, at the expense of guaranteed optimality. The most easily generalized method is the look-ahead policy: a brute-force algorithm that identifies reasonable actions by constructing and solving a series of temporally truncated approximations of the full problem over a defined planning horizon. We develop and apply this approach to a pest management problem inspired by the Mediterranean fruit fly (medfly), Ceratitis capitata. The model aims to minimize the cumulative costs of management actions and medfly-induced losses over a single 16-week season. The medfly population is stage-structured and grows continuously, while management decisions are made at discrete, weekly intervals. Each week, the model chooses among inaction, insecticide application, and one of six sterile insect release ratios. Look-ahead policy performance is evaluated over a range of planning horizons, two levels of crop susceptibility to medfly, and three levels of pesticide persistence. In all cases, the actions proposed by the look-ahead policy are contrasted with those of a myopic policy that minimizes costs over only the current week. We find that look-ahead policies always outperform the myopic policy and that decision quality is sensitive to the temporal distribution of costs relative to the planning horizon: it is beneficial to extend the planning horizon when it excludes pertinent costs, but longer planning horizons may reduce decision quality when major costs are resolved imminently. ADP methods such as the look-ahead-policy approach developed here render problems that are intractable to dynamic programming amenable to inference, but they should be applied carefully, as their flexibility comes at the expense of guaranteed optimality. Given the complexity of many ecological management problems, however, the capacity to propose a strategy that is "good enough" using a more representative problem formulation may be preferable to an optimal strategy derived from a simplified model.
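To illustrate the brute-force look-ahead idea described in the abstract, the following minimal Python sketch compares a myopic policy (a planning horizon of one week) against longer horizons on a toy pest-control problem. The dynamics, costs, action set, and all names (ACTIONS, GROWTH, SPRAY_SURVIVAL, DAMAGE_PER_FLY, step, lookahead_action, run_season) are hypothetical stand-ins for illustration only; they are not the stage-structured medfly model, the sterile insect release options, or the cost structure used in the paper.

```python
import itertools

# Hypothetical toy dynamics and costs -- illustrative only, not the
# published medfly model.
ACTIONS = {"none": 0.0, "spray": 50.0}   # action -> immediate action cost
GROWTH = 1.4                              # weekly population growth factor
SPRAY_SURVIVAL = 0.3                      # fraction surviving an insecticide week
DAMAGE_PER_FLY = 0.5                      # crop loss per individual per week

def step(pop, action):
    """Advance the toy population one week under the chosen action."""
    survival = SPRAY_SURVIVAL if action == "spray" else 1.0
    new_pop = pop * survival * GROWTH
    cost = ACTIONS[action] + DAMAGE_PER_FLY * new_pop
    return new_pop, cost

def lookahead_action(pop, horizon):
    """Brute-force look-ahead: enumerate every action sequence of length
    `horizon`, simulate its cumulative cost from the current state, and
    return the first action of the cheapest sequence."""
    best_cost, best_first = float("inf"), None
    for seq in itertools.product(ACTIONS, repeat=horizon):
        p, total = pop, 0.0
        for a in seq:
            p, c = step(p, a)
            total += c
        if total < best_cost:
            best_cost, best_first = total, seq[0]
    return best_first

def run_season(weeks=16, horizon=4, pop0=10.0):
    """Apply the look-ahead policy at each weekly decision point."""
    pop, total_cost, schedule = pop0, 0.0, []
    for _ in range(weeks):
        a = lookahead_action(pop, horizon)
        pop, c = step(pop, a)
        total_cost += c
        schedule.append(a)
    return schedule, total_cost

if __name__ == "__main__":
    for h in (1, 2, 4):   # h=1 is the myopic benchmark
        schedule, cost = run_season(horizon=h)
        print(f"horizon={h}: total cost {cost:.1f}")
```

Setting the horizon to one reproduces a myopic benchmark of the kind described in the abstract, since only the current week's cost is minimized; larger horizons let the policy weigh an expensive intervention against the downstream damage it prevents, at the cost of enumerating exponentially more action sequences.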

Original publication

DOI

10.1002/eap.1700

Type

Journal article

Journal

Ecol Appl

Publication Date

06/2018

Volume

28

Pages

938 - 952

Keywords

Ceratitis capitata, approximate dynamic programming, continuous state space, dynamic programming, look-ahead policy, sequential decision problem, stage structure, sterile insect release