10.2. Attention Scoring Functions
In the previous section on Nadaraya-Watson kernel regression, we used a Gaussian kernel to model interactions between queries and keys. Treating the exponent of the Gaussian kernel as an attention scoring function (or scoring function for short), the results of this function were essentially fed into a softmax operation. As a result, we obtained a probability distribution (attention weights) over the values that are paired with the keys. In the end, the output of the attention pooling is simply a weighted sum of the values based on these attention weights.
At a high level, we can use the above algorithm to instantiate the framework of attention mechanisms (queries, keys, and values) introduced earlier. Denoting an attention scoring function by \(a\), the figure below illustrates how the output of attention pooling can be computed as a weighted sum of values. Since the attention weights form a probability distribution, the weighted sum is essentially a weighted average.
Figure: Computing the output of attention pooling as a weighted average of values.
Mathematically, suppose that we have a query \(\mathbf{q} \in \mathbb{R}^q\) and \(m\) key-value pairs \((\mathbf{k}_1, \mathbf{v}_1), \ldots, (\mathbf{k}_m, \mathbf{v}_m)\), where any \(\mathbf{k}_i \in \mathbb{R}^k\) and any \(\mathbf{v}_i \in \mathbb{R}^v\). The attention pooling \(f\) is instantiated as a weighted sum of the values:

\[f(\mathbf{q}, (\mathbf{k}_1, \mathbf{v}_1), \ldots, (\mathbf{k}_m, \mathbf{v}_m)) = \sum_{i=1}^m \alpha(\mathbf{q}, \mathbf{k}_i) \mathbf{v}_i \in \mathbb{R}^v,\]
where the attention weight (scalar) for the query \(\mathbf{q}\) and key \(\mathbf{k}_i\) is computed by the softmax operation of an attention scoring function \(a\) that maps two vectors to a scalar:

\[\alpha(\mathbf{q}, \mathbf{k}_i) = \mathrm{softmax}(a(\mathbf{q}, \mathbf{k}_i)) = \frac{\exp(a(\mathbf{q}, \mathbf{k}_i))}{\sum_{j=1}^m \exp(a(\mathbf{q}, \mathbf{k}_j))} \in \mathbb{R}.\]
As we can see, different choices of the attention scoring function \(a\) lead to different behaviors of attention pooling. In this section, we introduce two popular scoring functions that we will use to develop more sophisticated attention mechanisms later.
%load ../utils/djl-imports
%load ../utils/plot-utils
%load ../utils/Functions.java
%load ../utils/PlotUtils.java
NDManager manager = NDManager.newBaseManager();
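Before looking at specific scoring functions, the following minimal sketch (with made-up scores, not part of the toy examples later in this section) shows how a row of scores becomes attention weights via softmax and how those weights then produce a weighted average of the values, mirroring the two equations above:
// Hypothetical scores a(q, k_i) for one query and four keys (made-up numbers)
NDArray toyScores = manager.create(new float[] {1.0f, 2.0f, 0.5f, -1.0f}).reshape(1, 1, 4);
// The softmax turns the scores into attention weights that sum to one
NDArray toyWeights = toyScores.softmax(-1);
// Four values of dimension 2, for a single example in the minibatch
NDArray toyValues = manager.arange(8f).reshape(1, 4, 2);
// Attention pooling: a weighted average of the values, shape (1, 1, 2)
toyWeights.batchDot(toyValues);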
10.2.1. Masked Softmax Operation
As we just mentioned, a softmax operation is used to output a
probability distribution as attention weights. In some cases, not all
the values should be fed into attention pooling. For instance, for
efficient minibatch processing in Section 9.5,
some text sequences are padded with special tokens that do not carry
meaning. To get an attention pooling over only meaningful tokens as
values, we can specify a valid sequence length (in number of tokens) to filter out those beyond this specified range when computing the softmax. In this way, we can implement such a masked softmax operation in the following maskedSoftmax function, where any value beyond the valid length is masked as zero.
public static NDArray maskedSoftmax(NDArray X, NDArray validLens) {
/* Perform softmax operation by masking elements on the last axis. */
// `X`: 3D NDArray, `validLens`: 1D or 2D NDArray
if (validLens == null) {
return X.softmax(-1);
}
Shape shape = X.getShape();
if (validLens.getShape().dimension() == 1) {
validLens = validLens.repeat(shape.get(1));
} else {
validLens = validLens.reshape(-1);
}
// On the last axis, replace masked elements with a very large negative
// value, whose exponentiation outputs 0
X = X.reshape(new Shape(-1, shape.get(shape.dimension() - 1)))
.sequenceMask(validLens, (float) -1E6);
return X.softmax(-1).reshape(shape);
}
To demonstrate how this function works, consider a minibatch of two \(2 \times 4\) matrix examples, where the valid lengths for these two examples are two and three, respectively. As a result of the masked softmax operation, values beyond the valid lengths are all masked as zero.
maskedSoftmax(
manager.randomUniform(0, 1, new Shape(2, 2, 4)),
manager.create(new float[] {2, 3}));
ND: (2, 2, 4) gpu(0) float32
[[[0.4549, 0.5451, 0. , 0. ],
[0.6175, 0.3825, 0. , 0. ],
],
[[0.294 , 0.3069, 0.3992, 0. ],
[0.3747, 0.2626, 0.3627, 0. ],
],
]
Similarly, we can also use a two-dimensional NDArray to specify valid lengths for every row in each matrix example.
maskedSoftmax(
manager.randomUniform(0, 1, new Shape(2, 2, 4)),
manager.create(new float[][] {{1, 3}, {2, 4}}));
ND: (2, 2, 4) gpu(0) float32
[[[1. , 0. , 0. , 0. ],
[0.2777, 0.4156, 0.3067, 0. ],
],
[[0.3441, 0.6559, 0. , 0. ],
[0.2544, 0.2482, 0.2036, 0.2939],
],
]
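As a quick check (not part of the original demonstration), passing null for the valid lengths makes maskedSoftmax fall back to an ordinary softmax over the last axis, with no entries masked:
// With `validLens == null`, every entry participates in the softmax
maskedSoftmax(manager.randomUniform(0, 1, new Shape(2, 2, 4)), null);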
10.2.2. Additive Attention
In general, when queries and keys are vectors of different lengths, we can use additive attention as the scoring function. Given a query \(\mathbf{q} \in \mathbb{R}^q\) and a key \(\mathbf{k} \in \mathbb{R}^k\), the additive attention scoring function is

\[a(\mathbf q, \mathbf k) = \mathbf w_v^\top \tanh(\mathbf W_q\mathbf q + \mathbf W_k \mathbf k) \in \mathbb{R},\]

where the learnable parameters are \(\mathbf W_q\in\mathbb R^{h\times q}\), \(\mathbf W_k\in\mathbb R^{h\times k}\), and \(\mathbf w_v\in\mathbb R^{h}\). Equivalently, the query and the key are concatenated and fed into an MLP with a single hidden layer whose number of hidden units is \(h\), a hyperparameter. By using \(\tanh\) as the activation function and disabling bias terms, we implement additive attention in the following.
/* Additive attention. */
public static class AdditiveAttention extends AbstractBlock {
private Linear W_k;
private Linear W_q;
private Linear W_v;
private Dropout dropout;
public NDArray attentionWeights;
public AdditiveAttention(int numHiddens, float dropout) {
W_k = Linear.builder().setUnits(numHiddens).optBias(false).build();
addChildBlock("W_k", W_k);
W_q = Linear.builder().setUnits(numHiddens).optBias(false).build();
addChildBlock("W_q", W_q);
W_v = Linear.builder().setUnits(1).optBias(false).build();
addChildBlock("W_v", W_v);
this.dropout = Dropout.builder().optRate(dropout).build();
addChildBlock("dropout", this.dropout);
}
@Override
protected NDList forwardInternal(
ParameterStore ps,
NDList inputs,
boolean training,
PairList<String, Object> params) {
// Shape of `attentionWeights` computed below:
// (`batchSize`, no. of queries, no. of key-value pairs)
NDArray queries = inputs.get(0);
NDArray keys = inputs.get(1);
NDArray values = inputs.get(2);
NDArray validLens = inputs.get(3);
queries = W_q.forward(ps, new NDList(queries), training, params).head();
keys = W_k.forward(ps, new NDList(keys), training, params).head();
// After dimension expansion, shape of `queries`: (`batchSize`, no. of
// queries, 1, `numHiddens`) and shape of `keys`: (`batchSize`, 1,
// no. of key-value pairs, `numHiddens`). Sum them up with
// broadcasting
NDArray features = queries.expandDims(2).add(keys.expandDims(1));
features = features.tanh();
// There is only one output of `this.W_v`, so we remove the last
// one-dimensional entry from the shape. Shape of `scores`:
// (`batchSize`, no. of queries, no. of key-value pairs)
NDArray result = W_v.forward(ps, new NDList(features), training, params).head();
NDArray scores = result.squeeze(-1);
attentionWeights = maskedSoftmax(scores, validLens);
// Shape of `values`: (`batchSize`, no. of key-value pairs, value dimension)
NDList list = dropout.forward(ps, new NDList(attentionWeights), training, params);
return new NDList(list.head().batchDot(values));
}
@Override
public Shape[] getOutputShapes(Shape[] inputShapes) {
throw new UnsupportedOperationException("Not implemented");
}
@Override
public void initializeChildBlocks(
NDManager manager, DataType dataType, Shape... inputShapes) {
W_q.initialize(manager, dataType, inputShapes[0]);
W_k.initialize(manager, dataType, inputShapes[1]);
long[] q = W_q.getOutputShapes(new Shape[] {inputShapes[0]})[0].getShape();
long[] k = W_k.getOutputShapes(new Shape[] {inputShapes[1]})[0].getShape();
long w = Math.max(q[q.length - 2], k[k.length - 2]);
long h = Math.max(q[q.length - 1], k[k.length - 1]);
long[] shape = new long[] {2, 1, w, h};
W_v.initialize(manager, dataType, new Shape(shape));
long[] dropoutShape = new long[shape.length - 1];
System.arraycopy(shape, 0, dropoutShape, 0, dropoutShape.length);
dropout.initialize(manager, dataType, new Shape(dropoutShape));
}
}
Let us demonstrate the above AdditiveAttention
class with a toy
example, where shapes (batch size, number of steps or sequence length in
tokens, feature size) of queries, keys, and values are (\(2\),
\(1\), \(20\)), (\(2\), \(10\), \(2\)), and
(\(2\), \(10\), \(4\)), respectively. The attention pooling
output has a shape of (batch size, number of steps for queries, feature
size for values).
NDArray queries = manager.randomNormal(0, 1, new Shape(2, 1, 20), DataType.FLOAT32);
NDArray keys = manager.ones(new Shape(2, 10, 2));
// The two value matrices in the `values` minibatch are identical
NDArray values = manager.arange(40f).reshape(1, 10, 4).repeat(0, 2);
NDArray validLens = manager.create(new float[] {2, 6});
AdditiveAttention attention = new AdditiveAttention(8, 0.1f);
NDList input = new NDList(queries, keys, values, validLens);
ParameterStore ps = new ParameterStore(manager, false);
attention.initialize(manager, DataType.FLOAT32, input.getShapes());
attention.forward(ps, input, false).head();
ND: (2, 1, 4) gpu(0) float32
[[[ 2., 3., 4., 5.],
],
[[10., 11., 12., 13.],
],
]
Although additive attention contains learnable parameters, since every key is the same in this example, the attention weights are uniform, determined by the specified valid lengths.
PlotUtils.showHeatmaps(
attention.attentionWeights.reshape(1, 1, 2, 10),
"Keys",
"Queries",
new String[] {""},
500,
700);
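If you prefer numbers to a heatmap, you can also inspect the stored weights directly (a quick check, not part of the original demo). With valid lengths 2 and 6, the nonzero weights are 1/2 in the first example and 1/6 in the second:
// Shape (`batchSize`, no. of queries, no. of key-value pairs) = (2, 1, 10)
attention.attentionWeights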
10.2.3. Scaled Dot-Product Attention
A more computationally efficient design for the scoring function is simply the dot product. However, the dot product operation requires that both the query and the key have the same vector length, say \(d\). Assume that all the elements of the query and the key are independent random variables with zero mean and unit variance. The dot product of the two vectors then has zero mean and a variance of \(d\). To ensure that the variance of the dot product remains one regardless of the vector length, the scaled dot-product attention scoring function

\[a(\mathbf q, \mathbf k) = \mathbf{q}^\top \mathbf{k} / \sqrt{d}\]

divides the dot product by \(\sqrt{d}\). In practice, we often think in minibatches for efficiency, such as computing attention for \(n\) queries and \(m\) key-value pairs, where queries and keys are of length \(d\) and values are of length \(v\). The scaled dot-product attention of queries \(\mathbf Q\in\mathbb R^{n\times d}\), keys \(\mathbf K\in\mathbb R^{m\times d}\), and values \(\mathbf V\in\mathbb R^{m\times v}\) is

\[\mathrm{softmax}\left(\frac{\mathbf Q \mathbf K^\top}{\sqrt{d}}\right) \mathbf V \in \mathbb{R}^{n\times v}.\]
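To see why the \(\sqrt{d}\) factor matters, here is a small empirical sketch (not from the original text) that draws many query-key pairs with zero-mean, unit-variance entries and compares the variance of the raw and the scaled dot products; the former is close to \(d\) while the latter is close to one:
int dDim = 64;
NDArray qSample = manager.randomNormal(0, 1, new Shape(10000, dDim), DataType.FLOAT32);
NDArray kSample = manager.randomNormal(0, 1, new Shape(10000, dDim), DataType.FLOAT32);
// Row-wise dot products: elementwise product summed over the feature dimension
NDArray dots = qSample.mul(kSample).sum(new int[] {1});
NDArray scaled = dots.div(Math.sqrt(dDim));
// Variance = E[x^2] - (E[x])^2 for each set of dot products
System.out.println("variance of q.k         : " + dots.pow(2).mean().sub(dots.mean().pow(2)));
System.out.println("variance of q.k/sqrt(d) : " + scaled.pow(2).mean().sub(scaled.mean().pow(2)));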
In the following implementation of the scaled dot-product attention, we use dropout for model regularization.
/* Scaled dot product attention. */
public static class DotProductAttention extends AbstractBlock {
private Dropout dropout;
public NDArray attentionWeights;
public DotProductAttention(float dropout) {
this.dropout = Dropout.builder().optRate(dropout).build();
this.addChildBlock("dropout", this.dropout);
this.dropout.setInitializer(new UniformInitializer(0.07f), Parameter.Type.WEIGHT);
}
@Override
protected NDList forwardInternal(
ParameterStore ps,
NDList inputs,
boolean training,
PairList<String, Object> params) {
// Shape of `queries`: (`batchSize`, no. of queries, `d`)
// Shape of `keys`: (`batchSize`, no. of key-value pairs, `d`)
// Shape of `values`: (`batchSize`, no. of key-value pairs, value
// dimension)
// Shape of `validLens`: (`batchSize`,) or (`batchSize`, no. of queries)
NDArray queries = inputs.get(0);
NDArray keys = inputs.get(1);
NDArray values = inputs.get(2);
NDArray validLens = inputs.get(3);
Long d = queries.getShape().get(queries.getShape().dimension() - 1);
// Swap the last two dimensions of `keys` and perform batchDot
NDArray scores = queries.batchDot(keys.swapAxes(1, 2)).div(Math.sqrt(d));
attentionWeights = maskedSoftmax(scores, validLens);
NDList list = dropout.forward(ps, new NDList(attentionWeights), training, params);
return new NDList(list.head().batchDot(values));
}
@Override
public Shape[] getOutputShapes(Shape[] inputShapes) {
throw new UnsupportedOperationException("Not implemented");
}
@Override
public void initializeChildBlocks(
NDManager manager, DataType dataType, Shape... inputShapes) {
try (NDManager sub = manager.newSubManager()) {
NDArray queries = sub.zeros(inputShapes[0], dataType);
NDArray keys = sub.zeros(inputShapes[1], dataType);
NDArray scores = queries.batchDot(keys.swapAxes(1, 2));
dropout.initialize(manager, dataType, scores.getShape());
}
}
}
To demonstrate the above DotProductAttention
class, we use the same
keys, values, and valid lengths from the earlier toy example for
additive attention. For the dot product operation, we make the feature
size of queries the same as that of keys.
queries = manager.randomNormal(0, 1, new Shape(2, 1, 2), DataType.FLOAT32);
DotProductAttention productAttention = new DotProductAttention(0.5f);
input = new NDList(queries, keys, values, validLens);
productAttention.initialize(manager, DataType.FLOAT32, input.getShapes());
productAttention.forward(ps, input, false).head();
ND: (2, 1, 4) gpu(0) float32
[[[ 2., 3., 4., 5.],
],
[[10., 11., 12., 13.],
],
]
As in the additive attention demonstration, since keys contains the same elements that cannot be differentiated by any query, uniform attention weights are obtained.
PlotUtils.showHeatmaps(
productAttention.attentionWeights.reshape(1, 1, 2, 10),
"Keys",
"Queries",
new String[] {""},
500,
700);
10.2.4. Summary
We can compute the output of attention pooling as a weighted average of values, where different choices of the attention scoring function lead to different behaviors of attention pooling.
When queries and keys are vectors of different lengths, we can use the additive attention scoring function. When they are the same, the scaled dot-product attention scoring function is more computationally efficient.
10.2.5. Exercises
Modify keys in the toy example and visualize attention weights. Do additive attention and scaled dot-product attention still output the same attention weights? Why or why not?
Using matrix multiplications only, can you design a new scoring function for queries and keys with different vector lengths?
When queries and keys have the same vector length, is vector summation a better design than dot product for the scoring function? Why or why not?