A faster solution approach
The previous approach
involves two steps done by the model checker. First, the
model checker builds a Markov decision process (MDP) from a
formal specification of the system. Then, the model checker
solves the MDP, computing an optimal policy to achieve the
adaptation goal. This whole process is done at run time
every time an adaptation decision has to be made. We have
developed another approach that decouples the construction
of the MDP from its solution. That way, the construction of
the MDP from formal specifications can be done offline, and
the solution is the only part that remains to be done at run
time. In addition, the solution algorithm leverages the
structure of the problem to reduce the number of states of
the environment that have to be considered at each
iteration. In our experiments, this reduced decision time
by a factor of 10 relative to decisions made with
probabilistic model checking. Additionally, the
approach can achieve adaptation goals that involve the
maximization of some objective function subject to
probabilistic constraints. For example, it can maximize the
expected number of objects detected, subject to surviving the
mission with 95% probability.
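The core of the online step is solving a prebuilt MDP with an iterative dynamic-programming algorithm. The following sketch illustrates the idea with plain value iteration over a toy MDP; the data structure, state names, and reward numbers are illustrative assumptions, not the actual representation or algorithm used in the publication below.

```python
# Sketch of the decoupled approach: the MDP transition structure is
# assumed to have been constructed offline from the formal specification;
# only the solution step runs at decision time. Toy example, not the
# paper's implementation.

# MDP encoded as {state: {action: [(probability, next_state, reward), ...]}}
mdp = {
    "s0": {"adapt": [(1.0, "s1", 0.0)],
           "stay":  [(0.9, "s0", 1.0), (0.1, "s2", 0.0)]},
    "s1": {"stay":  [(1.0, "s1", 2.0)]},
    "s2": {"stay":  [(1.0, "s2", 0.0)]},  # absorbing failure state
}

def solve(mdp, gamma=0.95, eps=1e-6):
    """Online step: value iteration over the prebuilt MDP, returning
    the value function and a greedy (optimal) policy."""
    v = {s: 0.0 for s in mdp}
    while True:
        delta = 0.0
        for s, actions in mdp.items():
            best = max(
                sum(p * (r + gamma * v[ns]) for p, ns, r in outcomes)
                for outcomes in actions.values())
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < eps:
            break
    # Extract the policy that is greedy with respect to the final values.
    policy = {s: max(actions, key=lambda a: sum(
                  p * (r + gamma * v[ns]) for p, ns, r in actions[a]))
              for s, actions in mdp.items()}
    return v, policy

values, policy = solve(mdp)
```

In this toy instance, adapting from "s0" reaches the high-reward state "s1", so the greedy policy selects "adapt" there. The exploitation of problem structure described above (pruning environment states per iteration) is not shown here.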
More details about this approach can be found in the
following publication:

Gabriel Moreno, Javier Camara, David Garlan, and Bradley
Schmerl. Efficient Decision-Making under Uncertainty for
Proactive Self-Adaptation. In Proceedings of the
International Conference on Autonomic Computing (ICAC),
2016.