Polynomial Expansion Rank Adaptation: Enhancing Low-Rank Fine-Tuning with High-Order Interactions

Score: 3.0 — Moderate AI relevance +novelty(1) +practical(1)
Source: cs.LG updates on arXiv.org

Published: 2026-04-15


arXiv:2604.11841v1 (Announce Type: new)

Abstract: Low-rank adaptation (LoRA) is a widely used strategy for efficient fine-tuning of large language models (LLMs), but its strictly linear structure fundamentally limits expressive capacity. The bilinear formulation of weight updates captures only first-order dependencies between low-rank factors, restricting the modeling of nonlinear and higher-order parameter interactions. In this paper, we propose Polynomial Expansion Rank Adaptation (PERA), a…
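To make the limitation concrete: in plain LoRA the weight update is the single bilinear product of two low-rank factors. The sketch below contrasts that with a hypothetical second-order polynomial term; the exact PERA formulation is not shown in the truncated abstract, so the `delta_poly` expression here is purely illustrative of what "higher-order interactions between low-rank factors" could look like.

```python
import numpy as np

# Plain LoRA: delta_W = B @ A, with A in R^{r x d}, B in R^{d x r}, r << d.
# This is strictly bilinear: each entry of delta_W is a degree-1 function
# of A's entries and a degree-1 function of B's entries.
#
# An illustrative (hypothetical, NOT the paper's actual method) polynomial
# expansion adds a term built from products of the factors themselves,
# e.g. B @ (A @ A.T) @ A, introducing higher-order factor interactions
# while reusing the same 2*d*r trainable parameters.

rng = np.random.default_rng(0)
d, r = 8, 2
A = rng.standard_normal((r, d)) * 0.1
B = rng.standard_normal((d, r)) * 0.1

delta_lora = B @ A                       # first-order (bilinear) update
delta_poly = B @ A + B @ (A @ A.T) @ A   # adds an illustrative 2nd-order term

# Both updates have the full d x d shape, so either can be added to a
# frozen base weight matrix without changing the model architecture.
print(delta_lora.shape, delta_poly.shape)
```

Note that the higher-order term adds no new parameters, only extra matrix products at adapter-merge time; this is the kind of expressivity-vs-cost trade-off the abstract alludes to.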