Decision Support for Investment of Developer Effort in Code Review

Author - Ruiyin Wen
Venue - McGill University, pp. 1-100, 2018

Related Tags - Theses, 2018, code review, build systems, software evolution

Abstract - Modern software development is a highly collaborative endeavour. Developers work in teams with tens, if not hundreds, of people who are globally distributed. At the heart of developer collaboration lies the process of code review, where fellow developers critique code changes to provide feedback to the author. Unlike the rigid formal code inspection process, which includes in-person meetings, the modern variant of code review provides developers with a lightweight, tool-supported, online collaboration environment where code changes are discussed. However, the existence of Modern Code Review (MCR) tools does not guarantee a smooth collaborative process that generates more value than cost. Indeed, developer effort invested in code reviewing is a key software development cost that must be spent efficiently and effectively.

Intelligent MCR investment decisions need to be made at the level of organizations and individuals. Thus, in this thesis, we set out to support team and individual code reviewing investment decisions. First, to support decisions about the content of code reviewing feedback, we train and analyze topic models of 248,695 reviewer comments from one open source community and one proprietary organization. We observe that more context-specific, technical feedback is raised as the studied organizations age and as the reviewers within those organizations accrue project-specific experience. These topic models can be used to track organizational and individual feedback trends, and to check whether those trends align with organizational and individual reviewing goals.
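
The abstract does not specify which topic-modeling technique is used; a minimal sketch of one plausible realization, using LDA via gensim in Python over a toy set of reviewer comments (the comments, vocabulary handling, and topic count are illustrative, not the thesis's actual configuration):

# Minimal sketch: topic modeling of code review comments with LDA.
# Assumes gensim is installed; all data and parameters below are
# illustrative, not the thesis's actual setup.
from gensim import corpora, models

# Toy reviewer comments (the thesis studies 248,695 of them).
comments = [
    "please add a unit test for the null case",
    "this loop allocates on every iteration, consider caching",
    "typo in the commit message",
]

# Tokenize naively; a real pipeline would also strip stop words
# and code literals before building the vocabulary.
tokenized = [c.lower().split() for c in comments]

dictionary = corpora.Dictionary(tokenized)
bow_corpus = [dictionary.doc2bow(doc) for doc in tokenized]

# Train an LDA model with a small illustrative topic count.
lda = models.LdaModel(bow_corpus, num_topics=2, id2word=dictionary, passes=10)

# Inspect the learned topics so they can be labeled by hand.
for topic_id, terms in lda.print_topics(num_words=5):
    print(topic_id, terms)

In practice, the learned topics would be manually inspected and labeled (e.g., "testing" or "performance" feedback) before tracking how their prevalence shifts over an organization's lifetime.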

Next, we set out to support individual decisions about which review requests require additional effort. Since patches that impact mission-critical project deliverables, or deliverables that cover a broad set of products, should receive more reviewing investment than others, we propose BLIMP Tracer, an impact analysis tool that pinpoints which deliverables are affected by a given code change. To evaluate BLIMP Tracer, we deploy a prototype implementation at a large multinational software organization and conduct a qualitative empirical study with developers from that organization. We observe that BLIMP Tracer not only improves the speed and accuracy of identifying the set of deliverables that are impacted by a patch, but also helps new members of the organization better understand the project architecture.
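
The abstract does not describe BLIMP Tracer's internals; one common way to realize such impact analysis is a reverse traversal of the build dependency graph, from the files touched by a patch up to the deliverables that consume them. A minimal sketch under that assumption, with a hypothetical dependency graph (none of these names come from the thesis):

# Minimal sketch of build-dependency impact analysis: given the files
# touched by a patch, walk the dependency graph upward to find the
# deliverables they feed into. Graph and file names are hypothetical.
from collections import deque

# Maps each build target to the targets/files it depends on.
depends_on = {
    "app.exe": ["core.lib", "ui.lib"],
    "core.lib": ["parser.c", "net.c"],
    "ui.lib": ["widgets.c"],
}

# Invert the edges so we can walk from a changed file up to deliverables.
used_by = {}
for target, deps in depends_on.items():
    for dep in deps:
        used_by.setdefault(dep, []).append(target)

def impacted_deliverables(changed_files, deliverables):
    """Breadth-first walk from changed files to the deliverables that use them."""
    seen, queue = set(changed_files), deque(changed_files)
    while queue:
        node = queue.popleft()
        for parent in used_by.get(node, []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen & set(deliverables)

print(impacted_deliverables({"net.c"}, {"app.exe"}))  # -> {'app.exe'}

The breadth-first traversal keeps the analysis linear in the size of the dependency graph, which matters when a patch touches files deep in a large build.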

Preprint - PDF

Bibtex

@mastersthesis{wen2018masters,
  Author = {Ruiyin Wen},
  Title = {{Decision Support for Investment of Developer Effort in Code Review}},
  Year = {2018},
  School = {McGill University},
  Address = {3480 Rue University, Montréal, QC, Canada},
  Month = {August}
}