SeBERTis: A Framework for Producing Classifiers of Security-Related Issue Reports

Authors - Sogol Masoumzadeh, Yufei (Mary) Li, Shane McIntosh, Dániel Várro, Lili Wei
Venue - International Conference on Software Analysis, Evolution, and Reengineering (SANER), to appear, 2026

Related Tags - SANER 2026, software quality, defect prediction

Abstract - Monitoring issue tracker submissions is a crucial software maintenance activity. A key goal is the prioritization of high-risk security-related bugs. If such bugs are recognized early, the risk of propagation to dependent products and of harm to stakeholder interests can be mitigated. To assist triage engineers with this task, several automatic detection techniques, from machine learning (ML) models to prompting large language models (LLMs), have been proposed. Although promising to some extent, prior techniques often memorize lexical cues as decision shortcuts, yielding low detection rates, especially for more complex submissions. As such, these classifiers do not yet meet the practical expectations of a real-time detector of security-related issues. To address these limitations, we propose SEBERTIS, a framework for training deep neural networks (DNNs) as classifiers that are independent of lexical cues, so that they can confidently detect fully unseen security-related issues. SEBERTIS fine-tunes bidirectional transformer architectures as masked language models (MLMs) to recover vocabulary that is semantically equivalent to the prediction labels (which we call Semantic Surrogates) after it has been replaced with a mask. Our SEBERTIS-trained classifier achieves a 0.9880 F1-score in detecting security-related issues in a curated corpus of 10,000 GitHub issue reports, substantially outperforming state-of-the-art issue classifiers, with 14.44%-96.98%, 15.40%-93.07%, and 14.90%-94.72% higher detection precision, recall, and F1-score over ML-based baselines. Our classifier also substantially surpasses LLM-based baselines, with improvements of 23.20%-63.71%, 36.68%-85.63%, and 39.49%-74.53% in precision, recall, and F1-score, respectively.
Finally, our classifier demonstrates high confidence in detecting recently submitted security-related issues, achieving 0.7123, 0.6860, and 0.6760 precision, recall, and F1-score, comparable to those of prompting LLMs, making it a practical tool for real-time issue report triage.
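The masked-language-model idea at the core of the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the surrogate word list and tokenization below are hypothetical placeholders, shown only to convey how label-equivalent vocabulary might be replaced with a mask token to produce MLM training targets that discourage lexical shortcuts.

```python
# Illustrative sketch (not the SeBERTis code): build masked-LM training
# examples by replacing words semantically equivalent to the prediction
# label ("Semantic Surrogates") with a mask token, so a model must infer
# them from context instead of memorizing the cue words themselves.
# The surrogate set here is a made-up example, not the paper's list.

MASK = "[MASK]"
SURROGATES = {"vulnerability", "exploit", "attack", "injection"}

def mask_surrogates(tokens):
    """Return (masked_tokens, targets): surrogate tokens become MASK,
    and their originals are kept as MLM prediction targets."""
    masked, targets = [], []
    for tok in tokens:
        if tok.lower() in SURROGATES:
            masked.append(MASK)
            targets.append(tok)   # model should recover this word
        else:
            masked.append(tok)
            targets.append(None)  # not a prediction target
    return masked, targets

tokens = "SQL injection lets an attacker run arbitrary queries".split()
masked, targets = mask_surrogates(tokens)
# "injection" is masked; "attacker" is untouched since only the exact
# surrogate forms in the set are matched in this simplified sketch.
```

In a full pipeline, pairs like `(masked, targets)` would feed a BERT-style MLM fine-tuning loop; this sketch only covers the data-preparation step implied by the abstract.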

Preprint - PDF

Bibtex

@inproceedings{masoumzadeh2026saner,
  Author = {Sogol Masoumzadeh and Yufei (Mary) Li and Shane McIntosh and Dániel Várro and Lili Wei},
  Title = {{SeBERTis: A Framework for Producing Classifiers of Security-Related Issue Reports}},
  Year = {2026},
  Booktitle = {Proc. of the International Conference on Software Analysis, Evolution, and Reengineering (SANER)},
  Pages = {To appear}
}