
MappingEvolve: LLM-Driven Code Evolution for Technology Mapping

Rongliang Fu, Yi Liu, Qiang Xu, Tsung-Yi Ho

Original source

arXiv cs.AI

https://arxiv.org/abs/2604.26591v1

Problem
This paper addresses a gap in the use of Large Language Models (LLMs) for technology mapping, a critical phase in logic synthesis: while LLMs have been used to generate optimization scripts, they have not previously been applied to evolving the core mapping algorithms themselves. The authors present MappingEvolve, an open-source framework that applies LLMs directly to the evolution of technology mapping code. This work is a preprint and has not yet undergone peer review.

Method
MappingEvolve introduces a hierarchical agent-based architecture with three components: a Planner, an Evolver, and an Evaluator. The architecture abstracts the technology mapping process into distinct optimization operators, enabling a structured evolutionary search. The Planner generates potential code modifications, the Evolver applies evolutionary strategies to refine them, and the Evaluator measures the performance of the evolved code against predefined metrics. The framework combines multiple LLMs to generate and evaluate code, optimizing the area-delay trade-off. Specific details about the training compute and datasets are not disclosed in the paper.
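The Planner/Evolver/Evaluator loop described above can be sketched as follows. This is a minimal illustrative loop under stated assumptions, not the paper's implementation: every name, the toy fitness function, and the deterministic stand-in for LLM code generation are my own inventions.

```python
import random
from dataclasses import dataclass

@dataclass
class Candidate:
    """A candidate mapping operator (hypothetical representation)."""
    code: str
    area: float = float("inf")
    delay: float = float("inf")

def evaluate(cand):
    # Evaluator stand-in: a real system would synthesize the design and
    # measure area/delay; here we derive deterministic toy scores from the code.
    rng = random.Random(cand.code)
    cand.area = rng.uniform(0.8, 1.2)
    cand.delay = rng.uniform(0.8, 1.2)
    return cand

def planner(population):
    # Planner stand-in: select the best candidate (lowest area-delay product)
    # as the parent for the next round of modifications.
    return min(population, key=lambda c: c.area * c.delay)

def evolver(parent, generation):
    # Evolver stand-in: an LLM would rewrite the operator code here;
    # we simply tag the parent's code with a variant marker.
    return Candidate(code=f"{parent.code}\n# variant {generation}")

def run(seed_code, generations=5):
    population = [evaluate(Candidate(seed_code))]
    for g in range(generations):
        parent = planner(population)
        population.append(evaluate(evolver(parent, g)))
    return planner(population)

best = run("def map_cut(cut): ...")
```

Because the seed candidate stays in the population, the returned candidate's area-delay product can never be worse than the seed's, mirroring the elitist selection common in evolutionary search.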

Results
The experiments show that MappingEvolve outperforms both direct evolution methods and established baselines: it achieves a 10.04% area reduction over ABC and a 7.93% reduction over mockturtle. It also improves overall synthesis quality, with $S_{overall}$ gains ranging from 46.6% to 96.0% on the EPFL benchmarks. These results indicate that the method improves area efficiency while effectively navigating the area-delay trade-off, demonstrating its practical applicability to technology mapping.
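The reported percentages are relative area reductions against a baseline mapper. As a quick sanity check of what a 10.04% reduction means (the exact normalization used in the paper is not stated; this formula is an assumption):

```python
def area_reduction(baseline_area, mapped_area):
    """Relative area reduction vs. a baseline mapper (assumed normalization)."""
    return (baseline_area - mapped_area) / baseline_area

# A 10.04% reduction means the mapped area is ~89.96% of the baseline area.
r = area_reduction(1000.0, 899.6)
```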

Limitations
The authors acknowledge several limitations, including the potential dependency on the quality of the LLMs used and the need for extensive computational resources for training and evaluation. They also note that the framework’s performance may vary based on the specific characteristics of the technology mapping problem being addressed. An additional limitation not explicitly mentioned is the generalizability of the approach across different logic synthesis contexts, as the experiments are primarily conducted on EPFL benchmarks.

Why it matters
The introduction of MappingEvolve has significant implications for the field of logic synthesis and technology mapping. By demonstrating the efficacy of LLMs in evolving core algorithms, this work opens avenues for further research into the application of AI-driven methods in hardware design automation. The framework’s ability to optimize for both area and delay can lead to more efficient designs, which is crucial in the context of increasingly complex integrated circuits. Furthermore, the open-source nature of the framework encourages community engagement and further development, potentially accelerating advancements in the field.


Published
Apr 29, 2026 — 12:17 UTC