| From | John R Levine <johnl@taugh.com> |
|---|---|
| Newsgroups | comp.compilers |
| Subject | CompilerGPT: Leveraging Large Language Models for Analyzing and Acting on Compiler Optimization Reports |
| Date | 2025-06-09 09:40 +0200 |
| Organization | Compilers Central |
| Message-ID | <25-06-001@comp.compilers> |
The authors told LLMs to read C++ compiler optimization reports and make the code better. https://arxiv.org/abs/2506.06227

Abstract:

Current compiler optimization reports often present complex, technical information that is difficult for programmers to interpret and act upon effectively. This paper assesses the capability of large language models (LLMs) to understand compiler optimization reports and automatically rewrite the code accordingly. To this end, the paper introduces CompilerGPT, a novel framework that automates the interaction between compilers, LLMs, and a user-defined test and evaluation harness. CompilerGPT's workflow runs several iterations and reports on the obtained results. Experiments with two leading LLM models (GPT-4o and Claude Sonnet), optimization reports from two compilers (Clang and GCC), and five benchmark codes demonstrate the potential of this approach. Speedups of up to 6.5x were obtained, though not consistently in every test. This method holds promise for improving compiler usability and streamlining the software optimization process.

Regards,
John Levine, johnl@taugh.com, Taughannock Networks, Trumansburg NY
Please consider the environment before reading this e-mail. https://jl.ly
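For readers curious what such a workflow looks like in outline, here is a minimal sketch of the compiler-report → LLM-rewrite → test-harness iteration the abstract describes. All names and signatures below are illustrative assumptions, not the paper's actual API; the three callables stand in for a real compiler invocation (e.g. Clang with optimization remarks enabled), a real LLM call, and a real correctness/timing harness.

```python
# Hypothetical sketch of a CompilerGPT-style loop (assumed structure,
# not the authors' implementation): compile and collect the optimization
# report, ask an LLM to rewrite the code accordingly, and keep a rewrite
# only if it still passes the test harness and runs faster.

def optimize_loop(source, compile_and_report, llm_rewrite, harness, iterations=3):
    """Iteratively act on compiler optimization reports via an LLM.

    compile_and_report(source) -> (binary, report_text)
    llm_rewrite(source, report_text) -> candidate_source
    harness(binary) -> (passed: bool, runtime_seconds: float)
    """
    best_source = source
    binary, report = compile_and_report(best_source)
    passed, best_time = harness(binary)
    if not passed:
        raise ValueError("baseline code must pass the test harness")

    for _ in range(iterations):
        candidate = llm_rewrite(best_source, report)
        binary, new_report = compile_and_report(candidate)
        passed, runtime = harness(binary)
        # Accept the rewrite only if it is both correct and faster.
        if passed and runtime < best_time:
            best_source, best_time, report = candidate, runtime, new_report
    return best_source, best_time
```

The interesting design point, per the abstract, is that correctness is enforced by the user-defined harness rather than trusted to the LLM, so a slower or broken rewrite is simply discarded and the loop continues from the best version found so far.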