The paper was published in 2021 as a preliminary report, and in late 2023 it was updated to v5. The core idea is the same; the main change is that the evaluation now uses English benchmarks instead of Chinese data only.

This is probably one of the most poorly written papers (English-wise) I've read. It's kind of ironic: this is an LLM paper, yet an LLM clearly wasn't used to audit the writing.

Why relative

This is my understanding, not what's in the paper. We don't actually need the absolute position information that absolute position embeddings provide. What we care about when doing attention is the relative distance between tokens. That means relative encodings can generalize better to longer sequences (positions 5 to 8 look the same as positions 500 to 503).

Formulation

Say we generate the position-aware queries, keys, and values via the functions

$$q_m = f_q(x_m, m), \qquad k_n = f_k(x_n, n), \qquad v_n = f_v(x_n, n).$$

The key $k_n$ and value $v_n$ are both taken at position $n$ because that's a way of understanding the weighted sum: the output at position $m$ is a weighted sum of the values $v_n$, with weights given by how well $q_m$ matches each $k_n$. The scores then go through a softmax like this:

$$a_{m,n} = \frac{\exp\!\left(\frac{q_m^{\top} k_n}{\sqrt{d}}\right)}{\sum_{j=1}^{N}\exp\!\left(\frac{q_m^{\top} k_j}{\sqrt{d}}\right)}, \qquad o_m = \sum_{n=1}^{N} a_{m,n} v_n.$$
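To make the weighted-sum view concrete, here is a minimal NumPy sketch (my own illustration, not code from the paper); `f_q`, `f_k`, `f_v` stand for whatever position-aware functions we pick below:

```python
import numpy as np

def attention(x, f_q, f_k, f_v):
    """Softmax attention with position-aware q/k/v functions."""
    N, _ = x.shape
    q = np.stack([f_q(x[m], m) for m in range(N)])   # q_m = f_q(x_m, m)
    k = np.stack([f_k(x[n], n) for n in range(N)])   # k_n = f_k(x_n, n)
    v = np.stack([f_v(x[n], n) for n in range(N)])   # v_n = f_v(x_n, n)
    scores = q @ k.T / np.sqrt(q.shape[-1])          # q_m . k_n / sqrt(d)
    a = np.exp(scores)
    a = a / a.sum(axis=-1, keepdims=True)            # softmax over n
    return a @ v                                     # o_m = sum_n a_{m,n} v_n
```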

For the traditional absolute position embedding, that's

$$f_t(x_i, i) = W_t\,(x_i + p_i), \qquad t \in \{q, k, v\},$$

where $p_i$ is the (sinusoidal or learned) embedding of the absolute position $i$.

Since we want the attention to capture relative position information, it would be good if the inner product of the query and key depended only on the relative position between $m$ and $n$. So the problem becomes: can we find functions $f_q$, $f_k$ and a function $g$ such that

$$\langle f_q(x_m, m),\, f_k(x_n, n)\rangle = g(x_m, x_n, m-n).$$

One can find a solution to our formulation in the 2D case ($d = 2$), where $f$ is:

$$f_q(x_m, m) = (W_q x_m)\, e^{i m\theta}, \qquad f_k(x_n, n) = (W_k x_n)\, e^{i n\theta},$$

$$g(x_m, x_n, m-n) = \operatorname{Re}\!\left[(W_q x_m)(W_k x_n)^{*}\, e^{i(m-n)\theta}\right],$$

where $\operatorname{Re}[\cdot]$ is the real part of a complex number and $(W_k x_n)^{*}$ represents the complex conjugate of $W_k x_n$. $\theta \in \mathbb{R}$ is a preset non-zero constant. We can further write $f_{\{q,k\}}$ as a matrix multiplication:

$$f_{\{q,k\}}(x_m, m) = \begin{pmatrix}\cos m\theta & -\sin m\theta\\ \sin m\theta & \cos m\theta\end{pmatrix}\begin{pmatrix}W^{(11)}_{\{q,k\}} & W^{(12)}_{\{q,k\}}\\ W^{(21)}_{\{q,k\}} & W^{(22)}_{\{q,k\}}\end{pmatrix}\begin{pmatrix}x_m^{(1)}\\ x_m^{(2)}\end{pmatrix}$$
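To see concretely why the score only depends on $m-n$, identify each 2D vector with a complex number, so that $\langle a, b\rangle = \operatorname{Re}[a\,\overline{b}]$. This one-line check is my own addition; the paper states the result without it:

$$\langle f_q(x_m, m),\, f_k(x_n, n)\rangle
= \operatorname{Re}\!\left[(W_q x_m)\, e^{i m\theta}\,\overline{(W_k x_n)\, e^{i n\theta}}\right]
= \operatorname{Re}\!\left[(W_q x_m)(W_k x_n)^{*}\, e^{i(m-n)\theta}\right]
= g(x_m, x_n, m-n).$$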

We can also see intuitively why this works: it is rotating the embedding, i.e., assigning it an angle in the 2D plane. A vector rotated to angle $m\theta$ and a vector rotated to angle $n\theta$, when we take their dot product, give us something that depends only on the angle difference $(m-n)\theta$.
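A tiny numerical check of this intuition (my own example; the constants are arbitrary):

```python
import numpy as np

def rotate(v, angle):
    """Rotate a 2D vector by the given angle."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])

theta = 0.3                                   # the preset non-zero constant
q, k = np.array([1.0, 2.0]), np.array([-0.5, 1.5])

# Both pairs have the same relative offset m - n = -3,
# so the dot products come out identical.
for m, n in [(5, 8), (500, 503)]:
    print(rotate(q, m * theta) @ rotate(k, n * theta))
```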

In order to generalize our results in 2D to any $x_i \in \mathbb{R}^d$ where $d$ is even, we divide the $d$-dimensional space into $d/2$ sub-spaces and combine them thanks to the linearity of the inner product, turning $f_{\{q,k\}}$ into:

$$f_{\{q,k\}}(x_m, m) = R^{d}_{\Theta, m}\, W_{\{q,k\}}\, x_m$$

where

$$R^{d}_{\Theta, m} = \begin{pmatrix}
\cos m\theta_1 & -\sin m\theta_1 & 0 & 0 & \cdots & 0 & 0\\
\sin m\theta_1 & \cos m\theta_1 & 0 & 0 & \cdots & 0 & 0\\
0 & 0 & \cos m\theta_2 & -\sin m\theta_2 & \cdots & 0 & 0\\
0 & 0 & \sin m\theta_2 & \cos m\theta_2 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & 0 & \cdots & \cos m\theta_{d/2} & -\sin m\theta_{d/2}\\
0 & 0 & 0 & 0 & \cdots & \sin m\theta_{d/2} & \cos m\theta_{d/2}
\end{pmatrix}$$

is the rotary matrix with pre-defined parameters $\Theta = \{\theta_i = 10000^{-2(i-1)/d},\ i \in [1, 2, \ldots, d/2]\}$. One can think about it as "encoding relative position information for each pair of two dimensions in the embedding".
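Here is a minimal NumPy sketch of what applying $R^{d}_{\Theta,m}$ amounts to (my own code, not from the paper). In practice nobody materializes the big sparse matrix; each 2D pair is rotated element-wise with precomputed $\cos m\theta_i$ and $\sin m\theta_i$. The $(x_1, x_2), (x_3, x_4), \ldots$ pairing below is one common convention; implementations differ in how they pair dimensions.

```python
import numpy as np

def apply_rope(x: np.ndarray, m: int) -> np.ndarray:
    """Rotate each consecutive pair of dims of x by m * theta_i."""
    d = x.shape[-1]
    assert d % 2 == 0
    theta = 10000.0 ** (-2 * np.arange(d // 2) / d)   # the pre-defined Theta
    cos, sin = np.cos(m * theta), np.sin(m * theta)
    x1, x2 = x[..., 0::2], x[..., 1::2]               # the d/2 2D sub-spaces
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin              # plain 2D rotations
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Sanity check: the score depends only on the relative position,
# so (5, 8) behaves exactly like (500, 503), as in the example earlier.
rng = np.random.default_rng(0)
q, k = rng.normal(size=64), rng.normal(size=64)
print(np.allclose(apply_rope(q, 5) @ apply_rope(k, 8),
                  apply_rope(q, 500) @ apply_rope(k, 503)))   # True
```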

Properties of RoPE

  • Similar to the OG sinusoidal position embedding, it has a long-term decay property: the score between a query and a key shrinks as their relative distance grows, roughly because the cosine terms at the different frequencies $\theta_i$ drift out of phase and mostly cancel (see the quick check after this list)
  • It can be used easily with linear attention
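A quick numerical illustration of the decay (my own sketch; setting $q = k = \mathbf{1}$ isolates the position-dependent factor, so the raw score at relative distance $r$ is just $2\sum_i \cos(r\theta_i)$):

```python
import numpy as np

d = 128                                             # head dimension (even)
theta = 10000.0 ** (-2 * np.arange(d // 2) / d)     # theta_i from the paper

# With q = k = all-ones, each 2D pair contributes 2*cos(r*theta_i) to the
# RoPE'd inner product, so the score at relative distance r is
# 2 * sum_i cos(r * theta_i).
for r in [0, 1, 4, 16, 64, 256, 1024]:
    print(f"relative distance {r:5d}: score = {2 * np.cos(r * theta).sum():8.2f}")
# The score starts at d (= 128) and decays, with oscillation,
# as the relative distance grows.
```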

Appendix A: linearity of the inner product

Let’s consider two vectors, $q$ and $k$, in a $d$-dimensional space, where $d$ is an even number.

The standard inner product (dot product), denoted by $\langle q, k \rangle$, is defined as the sum of the element-wise products of their components:

$$\langle q, k \rangle = q_1 k_1 + q_2 k_2 + \cdots + q_d k_d = \sum_{i=1}^{d} q_i k_i$$

Now, RoPE treats the $d$-dimensional vector as a concatenation of $d/2$ smaller, 2-dimensional vectors. Let’s denote these sub-vectors with a prime symbol (′):

  • $q'_1 = (q_1, q_2)$, $q'_2 = (q_3, q_4)$, …, $q'_{d/2} = (q_{d-1}, q_d)$
  • $k'_1 = (k_1, k_2)$, $k'_2 = (k_3, k_4)$, …, $k'_{d/2} = (k_{d-1}, k_d)$

Because of the basic rules of addition, we can simply regroup the terms in the original inner product sum:

$$\langle q, k \rangle = (q_1 k_1 + q_2 k_2) + (q_3 k_3 + q_4 k_4) + \cdots + (q_{d-1} k_{d-1} + q_d k_d)$$

Notice that each term in parentheses is just the inner product of the corresponding 2D sub-vectors:

$$q_{2j-1} k_{2j-1} + q_{2j} k_{2j} = \langle q'_j, k'_j \rangle$$

This leads us to the core identity that RoPE exploits. The inner product in $d$ dimensions is precisely the sum of the inner products in the constituent 2D subspaces:

$$\langle q, k \rangle = \sum_{j=1}^{d/2} \langle q'_j, k'_j \rangle$$
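A short numerical check of this identity (my own snippet). This is exactly what lets RoPE rotate each 2D pair independently while still controlling the full $d$-dimensional dot product:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
q, k = rng.normal(size=d), rng.normal(size=d)

full_inner = q @ k                                      # <q, k> in d dimensions
pairwise = sum(q[j:j + 2] @ k[j:j + 2]                  # sum_j <q'_j, k'_j>
               for j in range(0, d, 2))
print(np.allclose(full_inner, pairwise))                # True
```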