Adopted Project [Details]
| jh260017 | LLM-Driven Auto-Tuning for High-Performance Code Generation on Large-Scale HPC |
|---|---|
| Principal Investigator | Takahiro Katagiri (Information Technology Center, Nagoya University) |
| Overview | This project explores LLM-driven auto-tuning for high-performance code generation on large-scale HPC systems. We aim to integrate LLMs with AI agents and search-based optimization to automatically generate, transform, and tune code for such systems. By using LLMs to guide architecture-aware code optimizations, the proposed approach reduces human effort while achieving near-optimal performance. The framework targets scalability across CPUs, GPUs, and distributed systems, enabling portable performance and rapid adaptation to evolving HPC platforms. The outcome will advance automated performance engineering for scientific applications such as FMO, BLAS, FFT, and nonlinear problems. |
| Related Web Pages | |
| Reports, etc. | Research introduction poster / Final report |
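The overview does not include an implementation. As a minimal sketch of the search-based half of such an auto-tuning loop (in a full system, an LLM or AI agent would act as the generator of candidate code variants), the fragment below times several hypothetical tiled matrix-multiply variants, parameterized by tile size, and keeps the fastest. The names `matmul_tiled` and `autotune` are illustrative assumptions, not part of the project's actual framework.

```python
import time

def matmul_tiled(A, B, n, tile):
    # Stand-in for one generated kernel variant: a cache-blocked
    # matrix multiply over flat row-major lists, with block size `tile`.
    C = [0.0] * (n * n)
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a = A[i * n + k]
                        for j in range(jj, min(jj + tile, n)):
                            C[i * n + j] += a * B[k * n + j]
    return C

def autotune(n, candidates):
    # Search-based tuning: empirically time each candidate variant
    # on a fixed workload and return the best-performing parameter.
    A = [1.0] * (n * n)
    B = [1.0] * (n * n)
    best_tile, best_time = None, float("inf")
    for tile in candidates:
        t0 = time.perf_counter()
        matmul_tiled(A, B, n, tile)
        elapsed = time.perf_counter() - t0
        if elapsed < best_time:
            best_tile, best_time = tile, elapsed
    return best_tile

best_tile = autotune(64, [8, 16, 32, 64])
print(best_tile)  # fastest tile size on this machine (result is machine-dependent)
```

In the LLM-driven setting described above, the fixed candidate list would be replaced by code variants proposed by the model, with the same measure-and-select loop (or a smarter search strategy) steering which transformations are kept.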