This paper presents an approach to achieving conciseness in generating explanations, which is done by exploiting formal reconstructions of aspects of the Gricean principle of relevance to simulate conversational implicature. By applying contextually motivated inference rules in an anticipation feedback loop, a set of propositions explicitly representing an explanation's content is reduced to a subset that, in the actual context, can still be considered to convey the message adequately.