Abstract
The in-context learning capabilities of modern language models have motivated a deeper mathematical understanding of sequence models. A line of recent work has shown that linear attention models can emulate projected gradient descent iterations to implicitly learn the task vector from the data provided in the context window. In this work, we consider a novel setting where the global task distribution can be partitioned into a union of conditional task distributions. We then examine the use of task-specific prompts and prediction heads for learning, with a one-layer attention model, the prior information associated with the conditional task distribution. Our results on the loss landscape show that task-specific prompts facilitate a covariance-mean decoupling, where prompt-tuning explains the conditional mean of the distribution whereas the variance is learned through in-context learning. Incorporating a task-specific head further aids this process by fully decoupling the estimation of the mean and variance components. This covariance-mean perspective similarly explains how jointly training the prompt and attention weights can provably outperform fine-tuning after pretraining. The code for reproducing the numerical results is available on GitHub.
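To make the covariance-mean decoupling concrete, the following minimal NumPy sketch illustrates the mechanism the abstract describes under simplifying assumptions: a one-layer linear attention predictor whose output on the query reduces to a single preconditioned gradient step over the in-context examples, plus a task-specific prompt modeled as a shift by the conditional task mean. The variables `task_mean`, `W`, and the residualized prediction are illustrative choices, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 40                                  # feature dimension, context length

# Hypothetical conditional task distribution: task vector = prior mean + deviation.
task_mean = rng.normal(size=d)                # conditional mean of the task distribution
beta = task_mean + 0.3 * rng.normal(size=d)   # sampled task vector

# In-context demonstrations (x_i, y_i) and a query x_q.
X = rng.normal(size=(n, d))
y = X @ beta
x_q = rng.normal(size=d)

# One-layer linear attention on tokens z_i = (x_i, y_i): its query prediction
# reduces to one preconditioned gradient step from zero,
#   y_hat = x_q^T W (1/n) sum_i y_i x_i,
# with a trainable d x d matrix W (idealized here as the identity).
W = np.eye(d)
y_hat_icl = x_q @ W @ (X.T @ y / n)

# Task-specific prompt, modeled as a learned shift by the prior mean: the prompt
# explains the conditional mean, so in-context learning only fits the deviation.
y_hat_prompt = x_q @ task_mean + x_q @ W @ (X.T @ (y - X @ task_mean) / n)

print("true response      :", x_q @ beta)
print("ICL only           :", y_hat_icl)
print("prompt + ICL       :", y_hat_prompt)
```

In this toy setup, the prompt-assisted prediction is typically closer to the true response because the in-context step only needs to explain the zero-mean deviation around the conditional task mean, mirroring the decoupling result stated above.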
| Original language | English (US) |
|---|---|
| Pages (from-to) | 1558-1566 |
| Number of pages | 9 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 258 |
| State | Published - 2025 |
| Externally published | Yes |
| Event | 28th International Conference on Artificial Intelligence and Statistics, AISTATS 2025, Mai Khao, Thailand, May 3-5, 2025 |
All Science Journal Classification (ASJC) codes
- Software
- Control and Systems Engineering
- Statistics and Probability
- Artificial Intelligence