Abstract

Deep learning-based approaches to software vulnerability prediction currently rely mainly on the original text of software code as the node features of a code graph. As a result, they tend to learn representations that are specific to the code text rather than representations that capture the ‘intrinsic’ functionality of a program hidden behind that text. One cause of this problem is that there are infinitely many ways to name a variable. To lift this curse, we introduce a new type of edge called the name dependence, a type of abstract syntax graph based on the name dependence, and an efficient node representation method called the 3-property encoding scheme. These techniques allow us to remove concrete variable names from code and help deep learning models learn the functionality of software hidden behind diverse code expressions. Experimental results show that deep learning models built on these techniques outperform those based on existing approaches not only in vulnerability prediction but also in memory requirements: the reduction in memory usage can reach a factor on the order of 30,000 compared with existing approaches.
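
The core idea of abstracting away concrete variable names can be illustrated with a small sketch. The snippet below is not the paper's method (which works with name-dependence edges in an abstract syntax graph and a 3-property node encoding); it only shows, assuming a Python-based illustration, how concrete identifiers might be replaced by canonical placeholders so that differently named but functionally identical programs map to the same text. The class NameAbstractor and the VAR_i placeholders are hypothetical names introduced here for illustration.

import ast

class NameAbstractor(ast.NodeTransformer):
    """Hypothetical sketch: replace concrete variable names with canonical
    placeholders (VAR_0, VAR_1, ...) so alpha-renamed programs coincide."""

    def __init__(self):
        self.mapping = {}

    def visit_Name(self, node):
        # Assign the next placeholder the first time an identifier is seen.
        if node.id not in self.mapping:
            self.mapping[node.id] = f"VAR_{len(self.mapping)}"
        return ast.copy_location(ast.Name(id=self.mapping[node.id], ctx=node.ctx), node)

source = "total = price * quantity\nresult = total + tax"
abstracted = NameAbstractor().visit(ast.parse(source))
print(ast.unparse(abstracted))
# VAR_0 = VAR_1 * VAR_2
# VAR_3 = VAR_0 + VAR_4

Presumably, the name-dependence edges in the paper play an analogous role at the graph level, recording which nodes referred to the same identifier once the concrete names have been removed.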
Original language: English
Title: DEXA (1)
Number of pages: 6
Publication date: 2023
Pages: 516-521
ISBN (Print): 9783031398469
DOIs
Publication status: Published - 2023

Strategic research areas and centres

  • Centres: Zentrum für Künstliche Intelligenz Lübeck (ZKIL)
  • Cross-sectional area: Intelligent Systems

DFG Classification of Subject Areas

  • 409-06 Information Systems, Process and Knowledge Management
