Historical Context: The Seq2Seq Paper and NMT by Jointly Learning to Align & Translate :
* The Seq2Seq model, introduced in the "Sequence to Sequence Learning with Neural Networks" paper by Sutskever et al. (2014), revolutionized NLP with end-to-end learning. For example, it can translate "Bonjour" to "Hello" without handcrafted features.
* "Neural Machine Translation by Jointly Learning to Align and Translate" (Bahdanau et al., 2015) improved Seq2Seq with an attention mechanism. For instance, it aligns "Bonjour" with "Hello" more accurately based on context.
Introduction to Transformers (Paper: Attention is all you need) :
* Transformers were introduced in the "Attention is All You Need" paper (Vaswani et al., 2017).
* They replace the recurrence of RNN-based models with attention mechanisms, making them highly parallelizable and efficient.
Why transformers :
* Transformers capture long-range dependencies effectively.
* They use self-attention to process tokens in parallel and capture global context efficiently.
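To make "process tokens in parallel" concrete, here is a minimal NumPy sketch of scaled dot-product self-attention: every token's query is compared against every token's key in one matrix multiplication, so all positions are handled at once. The matrix sizes and random inputs are illustrative assumptions, not values from any real model.

```python
# Minimal sketch of scaled dot-product self-attention (NumPy).
# All tokens are processed in parallel via matrix multiplications.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])     # similarity of every token pair
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                          # weighted sum of values per token

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8                 # toy sizes, chosen for illustration
X = rng.normal(size=(seq_len, d_model))         # 4 token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)             # (4, 8): one context-aware vector per token
print(out.shape)
```

Because the attention weights connect every pair of positions directly, a dependency between the first and last token is one step away, which is how long-range context is captured without recurrence.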
Explain the working of each transformer component :
* Input Embeddings: Tokens are embedded into high-dimensional vectors.
* Positional Encoding: Adds position information to the embeddings (via fixed sinusoidal functions in the original paper), since self-attention by itself is order-invariant; a minimal sketch follows this list.
* Encoder: A stack of layers, each combining multi-head self-attention and a position-wise feedforward network (with residual connections and layer normalization) to build contextual representations of the input.
* Decoder: Generates the output sequence token by token, using masked self-attention over previously generated tokens and cross-attention over the encoder's representations.
* Attention Mechanism: Computes weighted sums of value vectors, with weights given by the scaled dot-product similarity between queries and keys.
* Feedforward Neural Networks: Apply the same two-layer non-linear transformation to each position's attention output independently.
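As a rough sketch of how these pieces fit together, the NumPy code below adds sinusoidal positional encodings to token embeddings and runs the position-wise feedforward sublayer over them (the attention sublayer is the one sketched earlier). The dimensions and random weights are illustrative assumptions, and residual connections plus layer normalization are omitted for brevity.

```python
# Minimal sketch (NumPy) of sinusoidal positional encoding and the
# position-wise feedforward sublayer of a transformer layer.
import numpy as np

def positional_encoding(seq_len, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d_model)), PE[pos, 2i+1] = cos(...)."""
    pos = np.arange(seq_len)[:, None]                  # (seq_len, 1)
    i = np.arange(0, d_model, 2)[None, :]              # even embedding dimensions
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

def feed_forward(x, W1, b1, W2, b2):
    """Position-wise FFN: the same two-layer MLP applied to every token independently."""
    return np.maximum(0, x @ W1 + b1) @ W2 + b2        # ReLU non-linearity

rng = np.random.default_rng(0)
seq_len, d_model, d_ff = 6, 16, 64                     # toy sizes for illustration
embeddings = rng.normal(size=(seq_len, d_model))       # token embeddings
x = embeddings + positional_encoding(seq_len, d_model) # inject order information
W1, b1 = rng.normal(size=(d_model, d_ff)), np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d_model)), np.zeros(d_model)
out = feed_forward(x, W1, b1, W2, b2)                  # (6, 16), same shape as the input
print(out.shape)
```

The sinusoidal form lets the model attend to relative positions, since the encoding at position pos + k is a fixed linear function of the encoding at position pos.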
How is GPT-1 trained from scratch? (Reference: the GPT-1 and BERT papers) :
* GPT-1 is pre-trained with unsupervised (self-supervised) learning on a large text corpus (BooksCorpus).
* Its objective is standard language modeling: a Transformer decoder predicts each next token from the tokens to its left, so the context it learns is unidirectional (left-to-right), as sketched below.
* Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) are BERT's pre-training tasks, not GPT-1's; they are what let BERT learn bidirectional context.
* Pre-training gives GPT-1 general semantic representations, which are then adapted to downstream tasks by supervised fine-tuning.
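To contrast the two objectives, here is a minimal PyTorch sketch of GPT-1-style causal language modeling (next-token prediction under a causal mask). The tiny vocabulary, layer sizes, random batch, and the use of a single `nn.TransformerEncoderLayer` as a stand-in for GPT-1's 12-layer decoder are simplifying assumptions for illustration.

```python
# Minimal sketch of the causal language-modeling objective used by GPT-1:
# predict each next token from the tokens to its left.
import torch
import torch.nn as nn

vocab_size, d_model, seq_len, batch = 100, 32, 8, 4    # toy sizes
embed = nn.Embedding(vocab_size, d_model)
# An encoder layer with a causal mask behaves like a decoder-only block here.
block = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
lm_head = nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (batch, seq_len))  # toy token ids
inputs, targets = tokens[:, :-1], tokens[:, 1:]          # targets are inputs shifted by one
# True above the diagonal = future positions are masked out.
causal_mask = torch.triu(torch.ones(seq_len - 1, seq_len - 1), diagonal=1).bool()

hidden = block(embed(inputs), src_mask=causal_mask)      # each position sees only its left context
logits = lm_head(hidden)                                 # (batch, seq_len-1, vocab_size)
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
print(loss.item())                                       # minimized = next tokens become likely
```

BERT's MLM objective would instead replace a fraction of the input tokens with a [MASK] token and predict only those positions, which is what allows it to condition on both left and right context.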
@InnomaticsResearchLabs #InnomaticsResearchLabs