\section{Introduction} A Discrete-Time Markov Chain (DTMC) is a Kripke structure enhanced with a \emph{transition probability function} $P : S\times S\to [0,1]$, indicating the probability of moving from one state to another, and an \emph{initial distribution} $\iota_{\text{init}} : S\to [0,1]$, indicating for each state the probability of starting in it~\cite[747--748]{BK}. Both the transition probability function and the initial distribution must be total, i.e., the probabilities (per state) sum up to one. A Markov Decision Process (MDP) further extends a DTMC with nondeterministic choice: it adds a set of actions $\textsl{Act}$ and uses $P : S\times \textsl{Act}\times S\to [0,1]$ as the transition probability function~\cite[833--834]{BK}. The transition probability function must still be total for every pair of a state and an action.

Properties of DTMCs and MDPs are typically described using Probabilistic Computation Tree Logic (PCTL), an extension of CTL~\cite[780--781]{BK}. PCTL adds state formulae $\mathbb P_J(\phi)$, where $J$ is an interval with rational bounds and $\phi$ is a path formula. A state $s$ satisfies $\mathbb P_J(\phi)$ iff the probability that a path starting in $s$ satisfies $\phi$ lies in $J$. PCTL also adds the bounded-until path formula $\Phi_1 \Uop^{\le n} \Phi_2$, where $n\in \mathbb N$ and $\Phi_1, \Phi_2$ are state formulae; a path satisfies it iff $\Phi_2$ holds within at most $n$ steps and $\Phi_1$ holds in every state before that.

We distinguish two kinds of properties. A probabilistic reachability property $\phi$ is a property of the form \[ \phi = \mathbb{P}_{J}(\phi'), \] where $\phi'$ is a PCTL path formula, $J \subseteq [0,1]$ is an interval with rational bounds, and $\mathbb{P}_J$ determines whether the probability of satisfying $\phi'$ is in $J$.
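The probability of a bounded reachability path formula such as $\text{true} \Uop^{\le n} \textit{goal}$ can be computed by iterating the transition matrix backwards from the goal state. The following sketch illustrates this on a hypothetical 3-state DTMC; the matrix, the state numbering, and the \texttt{bounded\_reach\_prob} helper are our own illustrative assumptions, not part of the cited definitions:

```python
import numpy as np

# Hypothetical 3-state DTMC: states 0, 1, 2; state 2 is the goal
# and is made absorbing.  Each row of P sums to one (P is total).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.0, 0.4, 0.6],
    [0.0, 0.0, 1.0],  # goal state, absorbing
])

def bounded_reach_prob(P, goal, n):
    """Probability of reaching `goal` within n steps, per start state.

    Backward value iteration: x_k[s] is the probability of reaching
    the goal from s within k steps, with x pinned to 1 on the goal.
    """
    x = np.zeros(len(P))
    x[goal] = 1.0
    for _ in range(n):
        x = P @ x
        x[goal] = 1.0
    return x

probs = bounded_reach_prob(P, goal=2, n=3)
# probs[s] is then the probability that a path from s satisfies
# the bounded-until formula with horizon n = 3.
```

A state $s$ would then satisfy $\mathbb P_J(\text{true} \Uop^{\le 3} \textit{goal})$ exactly when \texttt{probs[s]} lies in $J$.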
An expected cost property $\phi$ is a property of the form \[ \phi = \mathbb{E}_R(\phi'), \] where $\phi'$ is a PRCTL formula, $R$ is an interval with rational bounds, and $\mathbb{E}_R$ determines whether the expected cost of satisfying $\phi'$ is in $R$.

The first approach to automatically perform Model Repair was introduced by Bartocci et al.~\cite[327]{Bartocci2011}. They define the Model Repair problem as: \begin{quote} Given a probabilistic system $M$ and a probabilistic temporal logic formula $\phi$ such that $M$ fails to satisfy $\phi$, the \emph{probabilistic Model Repair problem} is to find an $M'$ that satisfies $\phi$ and differs from $M$ only in the transition flows of those states in $M$ that are deemed \emph{controllable}. \end{quote} Furthermore, we may score a repair using a \emph{cost} function, and we want a Model Repair tool to minimise this cost.
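The flavour of the Model Repair problem can be sketched on a toy instance: a DTMC with a single controllable state whose outgoing distribution is perturbed until a reachability bound holds, at minimal cost. The chain, the bound $\mathbb{P}_{\ge 0.8}$, the grid search, and the quadratic cost are all our own illustrative assumptions; Bartocci et al.\ solve the problem with parametric model checking and nonlinear optimisation rather than enumeration:

```python
import numpy as np

def reach_prob(p, n=50):
    """Probability of reaching the goal (state 2) within n steps when the
    controllable state 0 moves to the goal with probability p and to the
    absorbing sink (state 1) otherwise."""
    P = np.array([
        [0.0, 1.0 - p, p],
        [0.0, 1.0, 0.0],   # sink, absorbing
        [0.0, 0.0, 1.0],   # goal, absorbing
    ])
    x = np.array([0.0, 0.0, 1.0])  # 1 on the goal state
    for _ in range(n):
        x = P @ x
    return x[0]

p_orig = 0.6  # the original model fails the bound P_{>=0.8}(eventually goal)

# Enumerate candidate repairs of the controllable transition flow and keep
# the feasible one with minimal quadratic change (illustrative cost choice).
candidates = np.linspace(0.0, 1.0, 101)
feasible = [p for p in candidates if reach_prob(p) >= 0.8]
repaired = min(feasible, key=lambda p: (p - p_orig) ** 2)
```

Here the repaired model moves just enough probability mass onto the goal transition to meet the bound, mirroring the idea that $M'$ may differ from $M$ only in the controllable transition flows and should do so at minimal cost.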