OASIS: Order-Augmented Strategy for Improved Code Search
About
Code embeddings capture the semantic representations of code and are crucial for various code-related large language model (LLM) applications, such as code search. Previous training approaches primarily rely on optimizing the InfoNCE loss by comparing positive natural language (NL)-code pairs with in-batch negatives. However, due to the sparse nature of code contexts, training solely by comparing the major differences between positive and negative pairs may fail to capture deeper semantic nuances. To address this issue, we propose a novel order-augmented strategy for improved code search (OASIS). It leverages order-based similarity labels to train models to capture subtle differences in similarity among negative pairs. Extensive benchmark evaluations demonstrate that our OASIS model significantly outperforms previous state-of-the-art models that focus solely on major positive-negative differences. These results underscore the value of exploiting subtle differences among negative pairs with order labels for effective code embedding training.
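The exact OASIS loss is not given here, but the following is a minimal PyTorch sketch of how an order-augmented objective could combine standard InfoNCE over in-batch negatives with a ranking term over negatives sorted by similarity label. The function names, the pairwise margin formulation, and the weighting factor `lam` are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def infonce_loss(nl_emb, code_emb, temperature=0.05):
    # Standard InfoNCE with in-batch negatives: each NL query is scored
    # against every code in the batch; the diagonal holds the positives.
    sim = nl_emb @ code_emb.T / temperature               # (B, B)
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)

def order_ranking_loss(nl_emb, neg_code_emb, margin=0.05):
    # Hypothetical order-based term: neg_code_emb is (B, K, D), holding K
    # negatives per query sorted by descending similarity label. A pairwise
    # margin loss rewards preserving that ordering in embedding space.
    sims = torch.einsum("bd,bkd->bk", nl_emb, neg_code_emb)  # (B, K)
    # Each sims[:, k] should exceed sims[:, k+1] by at least `margin`.
    violations = sims[:, 1:] - sims[:, :-1] + margin
    return F.relu(violations).mean()

# Illustrative usage with random embeddings (B=8 pairs, K=4 ordered
# negatives per query, D=256 dimensions), all L2-normalized.
B, K, D = 8, 4, 256
nl = F.normalize(torch.randn(B, D), dim=-1)
code = F.normalize(torch.randn(B, D), dim=-1)
negs = F.normalize(torch.randn(B, K, D), dim=-1)

lam = 0.1  # assumed weighting between the two terms
loss = infonce_loss(nl, code) + lam * order_ranking_loss(nl, negs)
print(float(loss))
```

The intuition is that binary positive/negative supervision gives a single coarse signal per pair, while an ordering over negatives supplies finer-grained gradients that discriminate among partially similar code snippets.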
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| NL2Code Search | CSN (CodeSearchNet) (test) | Recall (Python) | 73.69 | 18 |
| Code2Code Search | Code2Code Search (test) | Python | 66.27 | 7 |
| NL2Code Search | CoSQA (test) | MRR | 55.77 | 7 |
| NL2Code Search | Adv (test) | MRR | 57.27 | 7 |
| Code Search | CodeSearchNet Python hard subset (test) | MRR | 51.13 | 3 |