
LLM from scratch using PyTorch

This repository contains exercise notebooks for the Introduction to Large Language Models (LLMs) course offered at IIT Madras by Mitesh Khapra and me. The notebooks contain templates for implementing the vanilla transformer architecture and GPT- and BERT-like models from scratch in PyTorch (without using the built-in transformer layers). You will implement the following core components:

  • Multi-Head Attention (MHA)
  • Multi-Head Masked Attention (MHMA) and Multi-Head Cross-Attention (MHCA) (see the sketch after this list)
  • Position-wise Feed-Forward Networks (FFN), also known as the MLP block
  • Teacher forcing and auto-regressive training
  • Causal Language Modelling (CLM) and Masked Language Modelling (MLM)
  • Text translation and text generation
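
To make the attention items above concrete, here is a minimal sketch of what a from-scratch implementation might look like. It is not the course's official template; the class name, argument names, and the `causal` flag are illustrative choices. Setting `causal=True` gives masked (MHMA-style) self-attention, while passing encoder states as `key` and `value` gives cross-attention (MHCA).

```python
# Illustrative sketch only -- not the official course template.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadAttention(nn.Module):
    """Multi-head (optionally masked or cross-) attention from basic ops."""

    def __init__(self, d_model: int, num_heads: int):
        super().__init__()
        assert d_model % num_heads == 0, "d_model must be divisible by num_heads"
        self.num_heads = num_heads
        self.d_head = d_model // num_heads
        # Separate projections for queries, keys, and values, plus an output projection
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        self.w_o = nn.Linear(d_model, d_model)

    def forward(self, query, key, value, causal: bool = False):
        B, T_q, _ = query.shape
        T_k = key.shape[1]

        # Project, then split into heads: (B, T, d_model) -> (B, H, T, d_head)
        q = self.w_q(query).view(B, T_q, self.num_heads, self.d_head).transpose(1, 2)
        k = self.w_k(key).view(B, T_k, self.num_heads, self.d_head).transpose(1, 2)
        v = self.w_v(value).view(B, T_k, self.num_heads, self.d_head).transpose(1, 2)

        # Scaled dot-product scores: (B, H, T_q, T_k)
        scores = (q @ k.transpose(-2, -1)) / (self.d_head ** 0.5)
        if causal:
            # Hide future positions for masked (decoder) self-attention
            mask = torch.triu(
                torch.ones(T_q, T_k, dtype=torch.bool, device=scores.device),
                diagonal=1,
            )
            scores = scores.masked_fill(mask, float("-inf"))
        attn = F.softmax(scores, dim=-1)

        # Weighted sum of values, then merge heads back: (B, T_q, d_model)
        out = (attn @ v).transpose(1, 2).contiguous().view(B, T_q, -1)
        return self.w_o(out)
```

Plain self-attention corresponds to `query = key = value`; cross-attention passes the decoder states as `query` and the encoder outputs as `key` and `value`.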

The objective is to give you an in-depth understanding of how things work under the hood of the built-in functions in both PyTorch and high-level APIs such as Hugging Face.
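
For example, once your implementation works, one way to connect it back to the built-ins is to compare a from-scratch attention computation against `torch.nn.functional.scaled_dot_product_attention` (available in PyTorch 2.0+). The shapes and tolerance below are illustrative:

```python
# Sanity check sketch: from-scratch causal attention vs. the PyTorch built-in.
import torch
import torch.nn.functional as F

q = torch.randn(2, 4, 8, 16)   # (batch, heads, seq_len, d_head)
k = torch.randn(2, 4, 8, 16)
v = torch.randn(2, 4, 8, 16)

# From-scratch causal attention
scores = (q @ k.transpose(-2, -1)) / (q.shape[-1] ** 0.5)
mask = torch.triu(torch.ones(8, 8, dtype=torch.bool), diagonal=1)
manual = F.softmax(scores.masked_fill(mask, float("-inf")), dim=-1) @ v

# Built-in equivalent
builtin = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(torch.allclose(manual, builtin, atol=1e-6))  # expected: True
```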
