Comments on "Researcher Bias: The Use of Machine Learning in Software Defect Prediction"

Authors - Chakkrit Tantithamthavorn, Shane McIntosh, Ahmed E. Hassan, Kenichi Matsumoto
Venue - IEEE Transactions on Software Engineering, Vol. 42, No. 11, pp. 1092-1094, 2016

Related Tags - TSE 2016 software quality defect prediction

Abstract - Shepperd et al. find that the reported performance of a defect prediction model shares a strong relationship with the group of researchers who construct the models. In this paper, we perform an alternative investigation of Shepperd et al.'s data. We observe that (a) research group shares a strong association with other explanatory variables (i.e., the dataset and metric families that are used to build a model); (b) the strong association among these explanatory variables makes it difficult to discern the impact of the research group on model performance; and (c) after mitigating the impact of this strong association, we find that the research group has a smaller impact than the metric family. These observations lead us to conclude that the relationship between the research group and the performance of a defect prediction model is more likely due to the tendency of researchers to reuse experimental components (e.g., datasets and metrics). We recommend that researchers experiment with a broader selection of datasets and metrics to combat potential bias in their results.

Preprint - PDF


@article{tantithamthavorn2016comments,
  Author = {Chakkrit Tantithamthavorn and Shane McIntosh and Ahmed E. Hassan and Kenichi Matsumoto},
  Title = {{Comments on "Researcher Bias: The Use of Machine Learning in Software Defect Prediction"}},
  Year = {2016},
  Journal = {IEEE Transactions on Software Engineering},
  Volume = {42},
  Number = {11},
  Pages = {1092-1094}
}