transformer paper collection - Download
Posted by an anonymous user on 2025-05-26 09:54:38

[Figure 1: transformer paper collection download (screenshot)]

Excerpt from the materials:

1 Introduction
Transformer has been the most widely used architecture for machine translation (Vaswani et al., 2017). Despite its strong performance, Transformer decoding is inefficient because it adopts a sequential auto-regressive factorization for its probability model (Figure 1a). Recent work such as the non-autoregressive transformer (NAT) aims to decode target tokens in parallel to speed up generation (Gu et al., 2018). However, the vanilla NAT still lags behind Transformer in translation quality, with a gap of about 7.0 BLEU. NAT assumes that the target tokens are conditionally independent given the source sentence. We suspect that this conditional independence assumption prevents NAT from learning word interdependency in the target sentence. Such word interdependency is crucial, and the Transformer explicitly captures it by decoding from left to right (Figure 1a).
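To make the contrast concrete, below is a minimal Python sketch of the two decoding schemes under the stated factorizations: autoregressive decoding uses p(y|x) = prod_t p(y_t | y_<t, x), so tokens must be produced one at a time, while vanilla NAT assumes p(y|x) = prod_t p(y_t | x) and fills every position in one parallel step. The toy "model" here (a seeded random token picker) is a hypothetical stand-in, not code from either paper; only the structure of the two loops is the point.

import random

# Toy stand-in for a translation model (assumption: illustrative only, not the
# paper's model). A seeded RNG picks tokens; what matters is what each
# prediction is allowed to condition on.

VOCAB = ["<eos>", "ich", "bin", "ein", "student"]

def toy_next_token(src, prefix):
    # Autoregressive step: the prediction may depend on the prefix y_<t.
    rng = random.Random(hash((tuple(src), tuple(prefix))))
    return rng.choice(VOCAB)

def toy_position_token(src, t):
    # NAT step: the prediction depends only on the source x and position t,
    # never on the other output tokens.
    rng = random.Random(hash((tuple(src), t)))
    return rng.choice(VOCAB[1:])

def autoregressive_decode(src, max_len=8):
    # Transformer-style decoding: p(y|x) = prod_t p(y_t | y_<t, x).
    # Sequential by construction: step t needs the output of step t-1.
    ys = []
    for _ in range(max_len):
        tok = toy_next_token(src, ys)
        if tok == "<eos>":
            break
        ys.append(tok)
    return ys

def nat_decode(src, tgt_len=4):
    # Vanilla NAT decoding: p(y|x) = prod_t p(y_t | x).
    # Every position can be computed in parallel; none sees its neighbours.
    return [toy_position_token(src, t) for t in range(tgt_len)]

if __name__ == "__main__":
    src = ["I", "am", "a", "student"]
    print("autoregressive:", autoregressive_decode(src))
    print("NAT (parallel):", nat_decode(src))

Because each NAT position is predicted independently, nothing in the parallel loop ties adjacent output tokens to each other; that missing word interdependency is exactly what the excerpt blames for the roughly 7.0 BLEU gap.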