Commit: Remove empty first line

h4iku authored and mallamanis committed Mar 16, 2024
1 parent 59e61a2 commit 10a5ed9
Showing 18 changed files with 20 additions and 37 deletions.
6 changes: 3 additions & 3 deletions _publications/add_from_arxiv.py
@@ -8,7 +8,7 @@


def _first_non_stopword(title: str) -> str:
-    for word in re.split("\W", title.lower()):
+    for word in re.split(r"\W", title.lower()):
if word in ("a", "an", "the", "is", "are", "what", "who", "your"):
continue
return word
@@ -30,15 +30,15 @@ def get_info(paper_id: str, out_dir: str) -> None:
)

tmpl = textwrap.dedent(
f"""
f"""\
---
layout: publication
title: "{paper.title}"
authors: {", ".join(a.name for a in paper.authors)}
conference:
year: {paper.published.year}
additional_links:
-        - {{name: "ArXiV", url: "https://arxiv.org/abs/{paper_id}"}}
+        - {{name: "ArXiV", url: "https://arxiv.org/abs/{paper_id}"}}
tags: ["TODO"]
---
{summary}
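For reference, here is a minimal, standalone sketch (not part of this commit) of why the two script tweaks above matter: the raw-string pattern r"\W" keeps the backslash literal and avoids Python's invalid-escape-sequence warning, and the backslash immediately after the opening triple quote escapes the first newline, which is what removes the empty first line from the generated publication files. The example title and variable names below are illustrative only.

    import re
    import textwrap

    # A plain "\W" literal triggers an invalid-escape-sequence warning on
    # recent Python versions; the raw string passes the backslash through
    # to the regex engine unchanged.
    words = re.split(r"\W", "StarCoder: may the source be with you!".lower())

    # Without the trailing backslash the template starts with "\n", so the
    # generated front matter begins with an empty line.
    with_blank = textwrap.dedent(
        """
        ---
        layout: publication
        ---
        """
    )

    # Escaping the first newline makes the template start directly at "---".
    without_blank = textwrap.dedent(
        """\
        ---
        layout: publication
        ---
        """
    )

    assert with_blank.startswith("\n")
    assert without_blank.startswith("---")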
3 changes: 1 addition & 2 deletions _publications/ahmed2024studying.markdown
@@ -1,12 +1,11 @@
-
---
layout: publication
title: "Studying LLM Performance on Closed- and Open-source Data"
authors: Toufique Ahmed, Christian Bird, Premkumar Devanbu, Saikat Chakraborty
conference:
year: 2024
additional_links:
- {name: "ArXiV", url: "https://arxiv.org/abs/2402.15100"}
- {name: "ArXiV", url: "https://arxiv.org/abs/2402.15100"}
tags: ["Transformers"]
---
Large Language models (LLMs) are finding wide use in software engineering practice. These models are extremely data-hungry, and are largely trained on open-source (OSS) code distributed with permissive licenses. In terms of actual use however, a great deal of software development still occurs in the for-profit/proprietary sphere, where the code under development is not, and never has been, in the public domain; thus, many developers, do their work, and use LLMs, in settings where the models may not be as familiar with the code under development. In such settings, do LLMs work as well as they do for OSS code? If not, what are the differences? When performance differs, what are the possible causes, and are there work-arounds? In this paper, we examine this issue using proprietary, closed-source software data from Microsoft, where most proprietary code is in C# and C++. We find that performance for C# changes little from OSS --> proprietary code, but does significantly reduce for C++; we find that this difference is attributable to differences in identifiers. We also find that some performance degradation, in some cases, can be ameliorated efficiently by in-context learning.
3 changes: 1 addition & 2 deletions _publications/chen2023supersonic.markdown
@@ -1,12 +1,11 @@
-
---
layout: publication
title: "Supersonic: Learning to Generate Source Code Optimizations in C/C++"
authors: Zimin Chen, Sen Fang, Martin Monperrus
conference:
year: 2023
additional_links:
- {name: "ArXiV", url: "https://arxiv.org/abs/2309.14846"}
- {name: "ArXiV", url: "https://arxiv.org/abs/2309.14846"}
tags: ["optimization"]
---
Software optimization refines programs for resource efficiency while preserving functionality. Traditionally, it is a process done by developers and compilers. This paper introduces a third option, automated optimization at the source code level. We present Supersonic, a neural approach targeting minor source code modifications for optimization. Using a seq2seq model, Supersonic is trained on C/C++ program pairs ($x_{t}$, $x_{t+1}$), where $x_{t+1}$ is an optimized version of $x_{t}$, and outputs a diff. Supersonic's performance is benchmarked against OpenAI's GPT-3.5-Turbo and GPT-4 on competitive programming tasks. The experiments show that Supersonic not only outperforms both models on the code optimization task but also minimizes the extent of the change with a model more than 600x smaller than GPT-3.5-Turbo and 3700x smaller than GPT-4.
3 changes: 1 addition & 2 deletions _publications/ding2023static.markdown
@@ -1,12 +1,11 @@
-
---
layout: publication
title: "A Static Evaluation of Code Completion by Large Language Models"
authors: Hantian Ding, Varun Kumar, Yuchen Tian, Zijian Wang, Rob Kwiatkowski, Xiaopeng Li, Murali Krishna Ramanathan, Baishakhi Ray, Parminder Bhatia, Sudipta Sengupta, Dan Roth, Bing Xiang
conference:
year: 2023
additional_links:
- {name: "ArXiV", url: "https://arxiv.org/abs/2306.03203"}
- {name: "ArXiV", url: "https://arxiv.org/abs/2306.03203"}
tags: ["LLM", "static analysis"]
---
Large language models trained on code have shown great potential to increase productivity of software developers. Several execution-based benchmarks have been proposed to evaluate functional correctness of model-generated code on simple programming problems. Nevertheless, it is expensive to perform the same evaluation on complex real-world projects considering the execution cost. On the contrary, static analysis tools such as linters, which can detect errors without running the program, haven't been well explored for evaluating code generation models. In this work, we propose a static evaluation framework to quantify static errors in Python code completions, by leveraging Abstract Syntax Trees. Compared with execution-based evaluation, our method is not only more efficient, but also applicable to code in the wild. For experiments, we collect code context from open source repos to generate one million function bodies using public models. Our static analysis reveals that Undefined Name and Unused Variable are the most common errors among others made by language models. Through extensive studies, we also show the impact of sampling temperature, model size, and context on static errors in code completions.
3 changes: 1 addition & 2 deletions _publications/eniser2023automatically.markdown
@@ -1,12 +1,11 @@
-
---
layout: publication
title: "Automatically Testing Functional Properties of Code Translation Models"
authors: Hasan Ferit Eniser, Valentin Wüstholz, Maria Christakis
conference: AAAI
year: 2023
additional_links:
- {name: "ArXiV", url: "https://arxiv.org/abs/2309.12813"}
- {name: "ArXiV", url: "https://arxiv.org/abs/2309.12813"}
tags: ["translation"]
---
Large language models are becoming increasingly practical for translating code across programming languages, a process known as $transpiling$. Even though automated transpilation significantly boosts developer productivity, a key concern is whether the generated code is correct. Existing work initially used manually crafted test suites to test the translations of a small corpus of programs; these test suites were later automated. In contrast, we devise the first approach for automated, functional, property-based testing of code translation models. Our general, user-provided specifications about the transpiled code capture a range of properties, from purely syntactic to purely semantic ones. As shown by our experiments, this approach is very effective in detecting property violations in popular code translation models, and therefore, in evaluating model quality with respect to given properties. We also go a step further and explore the usage scenario where a user simply aims to obtain a correct translation of some code with respect to certain properties without necessarily being concerned about the overall quality of the model. To this purpose, we develop the first property-guided search procedure for code translation models, where a model is repeatedly queried with slightly different parameters to produce alternative and potentially more correct translations. Our results show that this search procedure helps to obtain significantly better code translations.
3 changes: 1 addition & 2 deletions _publications/li2023hitchhiker.markdown
@@ -1,12 +1,11 @@
-
---
layout: publication
title: "The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models"
authors: Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
conference:
year: 2023
additional_links:
- {name: "ArXiV", url: "https://arxiv.org/abs/2308.00245"}
- {name: "ArXiV", url: "https://arxiv.org/abs/2308.00245"}
tags: ["static analysis"]
---
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated agent that interfaces with both a static analysis tool and an LLM. By carefully designing the agent and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates an extremely potent capability, showcasing a high precision (50%) and recall rate (100%). It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in the use of LLMs for bug discovery in extensive, real-world datasets.
3 changes: 1 addition & 2 deletions _publications/li2023starcoder.markdown
@@ -1,12 +1,11 @@
-
---
layout: publication
title: "StarCoder: may the source be with you!"
authors: Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, Harm de Vries
conference:
year: 2023
additional_links:
- {name: "ArXiV", url: "https://arxiv.org/abs/2305.06161"}
- {name: "ArXiV", url: "https://arxiv.org/abs/2305.06161"}
tags: ["Transformer"]
---
The BigCode community, an open-scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder and StarCoderBase: 15.5B parameter models with 8K context length, infilling capabilities and fast large-batch inference enabled by multi-query attention. StarCoderBase is trained on 1 trillion tokens sourced from The Stack, a large collection of permissively licensed GitHub repositories with inspection tools and an opt-out process. We fine-tuned StarCoderBase on 35B Python tokens, resulting in the creation of StarCoder. We perform the most comprehensive evaluation of Code LLMs to date and show that StarCoderBase outperforms every open Code LLM that supports multiple programming languages and matches or outperforms the OpenAI `code-cushman-001` model. Furthermore, StarCoder outperforms every model that is fine-tuned on Python, can be prompted to achieve 40% pass@1 on HumanEval, and still retains its performance on other programming languages. We take several important steps towards a safe open-access model release, including an improved PII redaction pipeline and a novel attribution tracing tool, and make the StarCoder models publicly available under a more commercially viable version of the Open Responsible AI Model license.
3 changes: 1 addition & 2 deletions _publications/li2023think.markdown
@@ -1,12 +1,11 @@
-
---
layout: publication
title: "Think Outside the Code: Brainstorming Boosts Large Language Models in Code Generation"
authors: Xin-Ye Li, Jiang-Tian Xue, Zheng Xie, Ming Li
conference:
year: 2023
additional_links:
- {name: "ArXiV", url: "https://arxiv.org/abs/2305.10679"}
- {name: "ArXiV", url: "https://arxiv.org/abs/2305.10679"}
tags: ["generation", "Transformer"]
---
Code generation aims to automatically generate source code from high-level task specifications, which can significantly increase productivity of software engineering. Recently, approaches based on large language models (LLMs) have shown remarkable code generation abilities on simple tasks. However, generate code for more complex tasks, such as competition-level problems, remains challenging. In this paper, we introduce Brainstorm framework for code generation. It leverages a brainstorming step that generates and selects diverse thoughts on the problem to facilitate algorithmic reasoning, where the thoughts are possible blueprint of solving the problem. We demonstrate that Brainstorm significantly enhances the ability of LLMs to solve competition-level programming problems, resulting in a more than 50% increase in the pass@$k$ metrics for ChatGPT on the CodeContests benchmark, achieving state-of-the-art performance. Furthermore, our experiments conducted on LeetCode contests show that our framework boosts the ability of ChatGPT to a level comparable to that of human programmers.
3 changes: 1 addition & 2 deletions _publications/liu2023code.markdown
@@ -1,12 +1,11 @@
-
---
layout: publication
title: "Code Execution with Pre-trained Language Models"
authors: Chenxiao Liu, Shuai Lu, Weizhu Chen, Daxin Jiang, Alexey Svyatkovskiy, Shengyu Fu, Neel Sundaresan, Nan Duan
conference:
year: 2023
additional_links:
- {name: "ArXiV", url: "https://arxiv.org/abs/2305.05383"}
- {name: "ArXiV", url: "https://arxiv.org/abs/2305.05383"}
tags: ["Transformer", "execution"]
---
Code execution is a fundamental aspect of programming language semantics that reflects the exact behavior of the code. However, most pre-trained models for code intelligence ignore the execution trace and only rely on source code and syntactic structures. In this paper, we investigate how well pre-trained models can understand and perform code execution. We develop a mutation-based data augmentation technique to create a large-scale and realistic Python dataset and task for code execution, which challenges existing models such as Codex. We then present CodeExecutor, a Transformer model that leverages code execution pre-training and curriculum learning to enhance its semantic comprehension. We evaluate CodeExecutor on code execution and show its promising performance and limitations. We also demonstrate its potential benefits for code intelligence tasks such as zero-shot code-to-code search and text-to-code generation. Our analysis provides insights into the learning and generalization abilities of pre-trained models for code execution.
3 changes: 1 addition & 2 deletions _publications/mohajer2023skipanalyzer.markdown
@@ -1,12 +1,11 @@
-
---
layout: publication
title: "SkipAnalyzer: A Tool for Static Code Analysis with Large Language Models"
authors: Mohammad Mahdi Mohajer, Reem Aleithan, Nima Shiri Harzevili, Moshi Wei, Alvine Boaye Belle, Hung Viet Pham, Song Wang
conference:
year: 2023
additional_links:
- {name: "ArXiV", url: "https://arxiv.org/abs/2310.18532"}
- {name: "ArXiV", url: "https://arxiv.org/abs/2310.18532"}
tags: ["repair"]
---
We introduce SkipAnalyzer, a large language model (LLM)-powered tool for static code analysis. SkipAnalyzer has three components: 1) an LLM-based static bug detector that scans source code and reports specific types of bugs, 2) an LLM-based false-positive filter that can identify false-positive bugs in the results of static bug detectors (e.g., the result of step 1) to improve detection accuracy, and 3) an LLM-based patch generator that can generate patches for the detected bugs above. As a proof-of-concept, SkipAnalyzer is built on ChatGPT, which has exhibited outstanding performance in various software engineering tasks. To evaluate SkipAnalyzer, we focus on two types of typical and critical bugs that are targeted by static bug detection, i.e., Null Dereference and Resource Leak as subjects. We employ Infer to aid the gathering of these two bug types from 10 open-source projects. Consequently, our experiment dataset contains 222 instances of Null Dereference bugs and 46 instances of Resource Leak bugs. Our study demonstrates that SkipAnalyzer achieves remarkable performance in the mentioned static analysis tasks, including bug detection, false-positive warning removal, and bug repair. In static bug detection, SkipAnalyzer achieves accuracy values of up to 68.37% for detecting Null Dereference bugs and 76.95% for detecting Resource Leak bugs, improving the precision of the current leading bug detector, Infer, by 12.86% and 43.13%, respectively. For removing false-positive warnings, SkipAnalyzer can reach a precision of up to 93.88% for Null Dereference bugs and 63.33% for Resource Leak bugs. Additionally, SkipAnalyzer surpasses state-of-the-art false-positive warning removal tools. Furthermore, in bug repair, SkipAnalyzer can generate syntactically correct patches to fix its detected bugs with a success rate of up to 97.30%.
3 changes: 1 addition & 2 deletions _publications/muennighoff2023octopack.markdown
@@ -1,12 +1,11 @@
-
---
layout: publication
title: "OctoPack: Instruction Tuning Code Large Language Models"
authors: Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
conference:
year: 2023
additional_links:
- {name: "ArXiV", url: "https://arxiv.org/abs/2308.07124"}
- {name: "ArXiV", url: "https://arxiv.org/abs/2308.07124"}
tags: ["dataset", "instruction tuning"]
---
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
3 changes: 1 addition & 2 deletions _publications/olausson2023demystifying.markdown
@@ -1,12 +1,11 @@
-
---
layout: publication
title: "Demystifying GPT Self-Repair for Code Generation"
authors: Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
conference:
year: 2023
additional_links:
- {name: "ArXiV", url: "https://arxiv.org/abs/2306.09896"}
- {name: "ArXiV", url: "https://arxiv.org/abs/2306.09896"}
tags: ["repair"]
---
Large Language Models (LLMs) have shown remarkable aptitude in code generation but still struggle on challenging programming tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of providing accurate feedback on why the code is wrong when that code was generated by the same model. In this paper, we analyze GPT-3.5 and GPT-4's ability to perform self-repair on APPS, a challenging dataset consisting of diverse coding challenges. To do so, we first establish a new evaluation strategy dubbed pass@t that measures the pass rate of the tasks against the total number of tokens sampled from the model, enabling a fair comparison to purely sampling-based approaches. With this evaluation strategy, we find that the effectiveness of self-repair is only seen in GPT-4. We also observe that self-repair is bottlenecked by the feedback stage; using GPT-4 to give feedback on the programs generated by GPT-3.5 and using expert human programmers to give feedback on the programs generated by GPT-4, we unlock significant performance gains.