For a more reliable assessment of future performance, k-fold cross validation [Sto74] is often applied. An advantage of this technique is that all samples in the data set are fully utilized. In a k-fold cross validation, the data is randomly separated into k partitions of equal size. In each of the k runs, (k – 1) partitions are combined to form the training set and the remaining partition is held out as the testing set. This process repeats k times, each time holding out a different partition for testing. The average performance over the k runs is then reported as the estimate of the model's future performance.
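The procedure above can be sketched in Python as follows; the function names (`k_fold_indices`, `cross_validate`) and the `train_and_score` callback are illustrative choices, not part of any particular library:

```python
import random

def k_fold_indices(n_samples, k, seed=0):
    # Shuffle the sample indices, then deal them into k
    # nearly equal partitions (round-robin after shuffling).
    indices = list(range(n_samples))
    random.Random(seed).shuffle(indices)
    return [indices[i::k] for i in range(k)]

def cross_validate(samples, k, train_and_score):
    # In each of the k runs, hold one partition out as the testing set
    # and combine the remaining k - 1 partitions into the training set.
    # Return the average performance over the k runs.
    folds = k_fold_indices(len(samples), k)
    scores = []
    for i, test_idx in enumerate(folds):
        train_set = [samples[j] for f, fold in enumerate(folds)
                     if f != i for j in fold]
        test_set = [samples[j] for j in test_idx]
        scores.append(train_and_score(train_set, test_set))
    return sum(scores) / k
```

Because each sample appears in exactly one testing partition, every sample contributes to the final average, which is the sense in which the data set is fully utilized.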