
Commit

Merge branch 'GH-2400-release' of https://github.com/flairNLP/flair into GH-2400-release
alanakbik committed Aug 29, 2021
2 parents 2f26a7c + b2bf60f commit 7fda587
Showing 3 changed files with 6 additions and 9 deletions.
3 changes: 1 addition & 2 deletions resources/docs/TUTORIAL_6_CORPUS.md
@@ -178,8 +178,7 @@ The `MultiCorpus` inherits from `Corpus`, so you can use it like any other corpus
Flair supports many datasets out of the box. It automatically downloads and sets up the
data the first time you call the corresponding constructor ID.

-The following datasets are supported (click category to
-expand):
+The following datasets are supported (**click category to expand**):

<details>
<summary>Named Entity Recognition (NER) datasets</summary>
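For context on the tutorial text changed above, calling a dataset's constructor ID really is all it takes; a minimal sketch, using `UD_ENGLISH` (one of the auto-downloading corpora) as the example:

```python
from flair.datasets import UD_ENGLISH

# the first call downloads the dataset and caches it locally;
# subsequent calls reuse the cached copy
corpus = UD_ENGLISH()
print(corpus)  # shows the train/dev/test split sizes
```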
4 changes: 1 addition & 3 deletions resources/docs/TUTORIAL_7_TRAINING_A_MODEL.md
@@ -211,9 +211,7 @@ tutorials on both for difference). The rest is exactly the same as before!
The best results in text classification use fine-tuned transformers with `TransformerDocumentEmbeddings` as shown in the
code below:

-(If you don't have a big GPU to fine-tune transformers, try `DocumentPoolEmbeddings` or `DocumentRNNEmbeddings` instead
-
-- sometimes they work just as well!)
+(If you don't have a big GPU to fine-tune transformers, try `DocumentPoolEmbeddings` or `DocumentRNNEmbeddings` instead; sometimes they work just as well!)

```python
import torch
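To make the lighter alternative mentioned in the changed line concrete, here is a minimal sketch of embedding a document with `DocumentPoolEmbeddings`; GloVe vectors are chosen here purely as an illustrative base embedding:

```python
from flair.data import Sentence
from flair.embeddings import WordEmbeddings, DocumentPoolEmbeddings

# pool pre-computed GloVe word vectors into a single document vector;
# no transformer fine-tuning (and no big GPU) required
document_embeddings = DocumentPoolEmbeddings([WordEmbeddings('glove')])

sentence = Sentence('The grass is green.')
document_embeddings.embed(sentence)
print(sentence.embedding.size())  # one fixed-size vector for the whole text
```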
8 changes: 4 additions & 4 deletions resources/docs/embeddings/TRANSFORMER_EMBEDDINGS.md
@@ -64,21 +64,21 @@ from flair.embeddings import TransformerWordEmbeddings
sentence = Sentence('The grass is green.')

# use only last layers
-embeddings = TransformerWordEmbeddings('bert-base-uncased', layers='-1')
+embeddings = TransformerWordEmbeddings('bert-base-uncased', layers='-1', layer_mean=False)
embeddings.embed(sentence)
print(sentence[0].embedding.size())

sentence.clear_embeddings()

# use last two layers
-embeddings = TransformerWordEmbeddings('bert-base-uncased', layers='-1,-2')
+embeddings = TransformerWordEmbeddings('bert-base-uncased', layers='-1,-2', layer_mean=False)
embeddings.embed(sentence)
print(sentence[0].embedding.size())

sentence.clear_embeddings()

# use ALL layers
-embeddings = TransformerWordEmbeddings('bert-base-uncased', layers='all')
+embeddings = TransformerWordEmbeddings('bert-base-uncased', layers='all', layer_mean=False)
embeddings.embed(sentence)
print(sentence[0].embedding.size())
```
@@ -90,7 +90,7 @@ torch.Size([1536])
torch.Size([9984])
```

-I.e. the size of the embedding increases the more layers we use.
+I.e. the size of the embedding increases the more layers we use (but ONLY if `layer_mean` is set to `False`, otherwise the length is always the same).
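For contrast with the snippets above, a minimal sketch of the `layer_mean=True` case, where the selected layers are averaged and the embedding length stays at the model's hidden size (768 for BERT base):

```python
from flair.data import Sentence
from flair.embeddings import TransformerWordEmbeddings

sentence = Sentence('The grass is green.')

# averaging across all layers keeps the embedding at hidden size
embeddings = TransformerWordEmbeddings('bert-base-uncased', layers='all', layer_mean=True)
embeddings.embed(sentence)
print(sentence[0].embedding.size())  # torch.Size([768])
```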


### Pooling operation
