
Commit

Add paper
mallamanis committed Feb 29, 2024
1 parent dfc4de4 commit 180b42f
Showing 1 changed file with 12 additions and 0 deletions.
12 changes: 12 additions & 0 deletions _publications/ahmed2024studying.markdown
@@ -0,0 +1,12 @@

---
layout: publication
title: "Studying LLM Performance on Closed- and Open-source Data"
authors: Toufique Ahmed, Christian Bird, Premkumar Devanbu, Saikat Chakraborty
conference:
year: 2024
additional_links:
- {name: "ArXiV", url: "https://arxiv.org/abs/2402.15100"}
tags: ["Transformers"]
---
Large Language Models (LLMs) are finding wide use in software engineering practice. These models are extremely data-hungry, and are largely trained on open-source (OSS) code distributed with permissive licenses. In terms of actual use, however, a great deal of software development still occurs in the for-profit/proprietary sphere, where the code under development is not, and never has been, in the public domain; thus, many developers do their work, and use LLMs, in settings where the models may not be as familiar with the code under development. In such settings, do LLMs work as well as they do for OSS code? If not, what are the differences? When performance differs, what are the possible causes, and are there work-arounds? In this paper, we examine this issue using proprietary, closed-source software data from Microsoft, where most proprietary code is in C# and C++. We find that performance for C# changes little from OSS to proprietary code, but degrades significantly for C++; we find that this difference is attributable to differences in identifiers. We also find that the performance degradation can, in some cases, be efficiently ameliorated by in-context learning.
