Is "Attention is all you need" weird?
"We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. "
"In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality"
"the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution."
"An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key."
"We call our particular attention "Scaled Dot-Product Attention" (Figure 2). The input consists of queries and keys of dimension
d
k
, and values of dimension
d
v
. We compute the dot products of the query with all keys, divide each by
d
k
, and apply a softmax function to obtain the weights on the values."
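And here is how I read the scaled dot-product version specifically, i.e. $\mathrm{softmax}(QK^\top / \sqrt{d_k})\,V$ (again my own sketch, with made-up toy sizes for $d_k$ and $d_v$):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, following the quoted description."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # dot products, scaled by sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted sum of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))   # 2 queries, d_k = 4
K = rng.normal(size=(5, 4))   # 5 keys,    d_k = 4
V = rng.normal(size=(5, 3))   # 5 values,  d_v = 3
print(scaled_dot_product_attention(Q, K, V).shape)  # (2, 3)
```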
Can you please explain how attention is a neural network, rather than a simple dot-product calculation that requires no neurons or hidden layers?