Transformers


  • Transformers work by processing huge volumes of data and encoding language tokens (representing individual words or phrases) as vector-based embeddings (arrays of numeric values).
  • Tokens that are semantically similar are encoded at nearby positions in the vector space, creating a semantic language model that makes it possible to build sophisticated NLP solutions for text analysis, translation, language generation, and other tasks (see the sketch after this list).
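To make the "nearby positions" idea concrete, here is a minimal sketch using tiny, made-up 4-dimensional embedding vectors (real models use hundreds or thousands of dimensions). Cosine similarity measures how close two embeddings point in the same direction, so semantically related words score higher:

```python
import numpy as np

# Hypothetical toy embeddings for illustration only; a real model
# would produce these vectors from its training data.
embeddings = {
    "dog":    np.array([0.90, 0.10, 0.80, 0.20]),
    "puppy":  np.array([0.85, 0.15, 0.75, 0.25]),
    "banana": np.array([0.10, 0.90, 0.20, 0.70]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: closer to 1.0 means more similar."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["dog"], embeddings["puppy"]))   # high (~0.998)
print(cosine_similarity(embeddings["dog"], embeddings["banana"]))  # much lower (~0.34)
```

The same comparison underpins tasks like semantic search and text classification: instead of matching exact words, you compare embedding vectors.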

Transformers in Multi-Modal Models: Foundation Models


  • The Microsoft Florence model is just such a multi-modal model. Trained on huge volumes of captioned images from the Internet, it includes both a language encoder and an image encoder.
  • Florence is an example of a foundation model: a pre-trained general model on which you can build multiple adaptive models for specialist tasks (see the sketch after this list).
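As a rough illustration of how a dual-encoder (language encoder plus image encoder) model is used, here is a minimal sketch with the openly available CLIP model from Hugging Face. CLIP stands in here only as an analogous example, since Florence itself is accessed through Azure AI services rather than downloaded; the file name `photo.jpg` and the candidate captions are hypothetical:

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a pre-trained dual-encoder model: one encoder for text, one for images.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # hypothetical local image file
captions = ["a dog playing in a park", "a plate of pasta"]

# Both encoders map their inputs into a shared embedding space,
# so image-text similarity becomes a vector comparison.
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)  # similarity of image to each caption
print(dict(zip(captions, probs[0].tolist())))
```

Because the general model already understands both images and language, specialist solutions (image captioning, visual search, tagging) can be built on top of it with relatively little task-specific training.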