Attention for RNN Seq2Seq Models (1.25x speed recommended)

Next video: Self-Attention for RNN (1.25x speed re...

Attention was originally proposed by Bahdanau et al. in 2015 for neural machine translation; it has since found much broader application in NLP and computer vision. This lecture covers only attention for RNN sequence-to-sequence models. Viewers are assumed to already be familiar with RNN sequence-to-sequence models before watching this video.

Slides: https://github.com/wangshusen/DeepLea...

Reference: Bahdanau, Cho, & Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
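For a rough sense of the mechanism the lecture covers, below is a minimal NumPy sketch of additive (Bahdanau-style) attention at one decoder step. It is an illustration only, not code from the lecture or the paper; all names, dimensions, and the random parameters are placeholders. The score for each encoder state h_i is v^T tanh(W_s s_{t-1} + W_h h_i); a softmax over the scores gives alignment weights, and the context vector is the weighted average of the encoder states.

import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def additive_attention(s_prev, enc_states, W_s, W_h, v):
    # s_prev:     previous decoder state, shape (d_dec,)
    # enc_states: encoder hidden states,  shape (m, d_enc)
    # W_s, W_h, v: learned parameters (random placeholders here)
    # Score e_i = v^T tanh(W_s s_prev + W_h h_i) for each encoder state h_i.
    scores = np.tanh(s_prev @ W_s.T + enc_states @ W_h.T) @ v  # shape (m,)
    alpha = softmax(scores)            # alignment weights, sum to 1
    context = alpha @ enc_states       # weighted average of encoder states
    return context, alpha

# Toy usage with random states and hypothetical dimensions.
rng = np.random.default_rng(0)
d_enc, d_dec, d_att, m = 8, 8, 16, 5
W_s = rng.normal(size=(d_att, d_dec))
W_h = rng.normal(size=(d_att, d_enc))
v = rng.normal(size=(d_att,))
s_prev = rng.normal(size=(d_dec,))
enc_states = rng.normal(size=(m, d_enc))
context, alpha = additive_attention(s_prev, enc_states, W_s, W_h, v)
print(alpha)          # m alignment weights summing to 1
print(context.shape)  # (8,)

In a full seq2seq decoder, the context vector would be recomputed at every decoding step and fed into the RNN together with the previous output, which is what lets the decoder look back at all encoder states instead of relying on a single fixed-length summary.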