this post was submitted on 28 Jun 2023
7 points (100.0% liked)

Machine Learning


Interesting technique to increase the context window of language models by fine-tuning on a small number of samples after pretraining.

(I did a double-take after seeing the heading on the first page of the pdf, but it's not actually an old paper.)

We present Position Interpolation (PI), which extends the context window sizes of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 tokens with minimal fine-tuning (within 1000 steps), while demonstrating strong empirical results on various tasks that require long context, including passkey retrieval, language modeling, and long document summarization, for models from LLaMA 7B to 65B. Meanwhile, models extended by Position Interpolation preserve quality relatively well on tasks within their original context window. To achieve this, Position Interpolation linearly down-scales the input position indices to match the original context window size, rather than extrapolating beyond the trained context length, which may lead to catastrophically high attention scores that completely ruin the self-attention mechanism. Our theoretical study shows that the upper bound of interpolation is at least $\sim 600 \times$ smaller than that of extrapolation, further demonstrating its stability. Models extended via Position Interpolation retain their original architecture and can reuse most pre-existing optimization and infrastructure.
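
To make the idea concrete, here is a rough NumPy sketch of how I read it: apply standard RoPE, but linearly rescale the position indices of the extended window by (original length / extended length) so they stay inside the range seen during pretraining. The function names and the 2048 → 8192 setting below are illustrative, not from the paper's code.

```python
# Minimal sketch of RoPE with Position Interpolation (illustrative, not the
# authors' implementation). Extended positions are down-scaled into the
# trained range instead of extrapolated beyond it.
import numpy as np

def rope_angles(positions, head_dim, base=10000.0):
    """Rotary angles for (possibly fractional) positions; shape (len, head_dim/2)."""
    inv_freq = 1.0 / (base ** (np.arange(0, head_dim, 2) / head_dim))
    return np.outer(positions, inv_freq)

def apply_rope(x, angles):
    """Standard RoPE: rotate consecutive channel pairs of x by the given angles."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = np.cos(angles), np.sin(angles)
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

trained_ctx, extended_ctx, head_dim = 2048, 8192, 128

# Position Interpolation: instead of extrapolating to positions the model
# never saw (2048..8191), squeeze the extended indices back into [0, 2048).
positions = np.arange(extended_ctx)
scaled_positions = positions * (trained_ctx / extended_ctx)  # 0, 0.25, ..., 2047.75

q = np.random.randn(extended_ctx, head_dim)                  # dummy query vectors
q_rotated = apply_rope(q, rope_angles(scaled_positions, head_dim))
```

After this rescaling, the brief fine-tuning (within 1000 steps) mentioned in the abstract presumably lets the model adapt to the now-fractional position indices.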

top 5 comments
[–] SSamDav 2 points 1 year ago

One cool thing about this work is that there was a concurrent discussion on Twitter about the proposed method, from different authors.

[–] nsa@kbin.social 2 points 1 year ago

do you have a link?

[–] miro@kbin.social 1 point 1 year ago

Is this similar to what MPT did to extend its context length?

[–] Blaed@lemmy.world 4 points 1 year ago

I believe it's a different technique (at least as far as I understand the topics).

According to Mosaic, MPT (e.g. MPT-7B-StoryWriter-65k+) uses a different underlying architecture (ALiBi positional biases rather than RoPE), which is what enables its long context lengths.

The original author of this new method (kaiokendev, creator of SuperHOT) shares what he has learned about it here:

[–] nsa@kbin.social 1 point 1 year ago

hmmm... not sure which model you're referring to. do you have a paper link?