When using the sent2vec command, a model is produced via the CBOW architecture.
According to the paper, sent2vec averages the word (and n-gram) vectors, using the weights learned during training on the corpus.
But how does CBOW initialise and update those weights, and which n-grams are used?
For instance, when training on a Wikipedia corpus, what happens under the hood to compute the weights and dimensions for the sentence 'I ate my breakfast in the morning'?
Which unigrams and bigrams get averaged here? How are the weights initialised? And what is the target/source word in this sentence?
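To make my question concrete, here is a minimal sketch (in Python, not the actual C++ code) of what I currently imagine happens for that sentence. The tokenisation, the uniform initialisation range, and the idea that the sentence vector is a plain average of unigram and bigram vectors are all my assumptions, so please correct anything that is wrong:

```python
import numpy as np

DIM = 100  # embedding dimension, e.g. -dim 100 on the command line

def ngrams(tokens, n_max=2):
    """Return the unigrams and bigrams of a tokenised sentence."""
    grams = list(tokens)  # unigrams
    if n_max >= 2:
        grams += [f"{a} {b}" for a, b in zip(tokens, tokens[1:])]  # bigrams
    return grams

sentence = "I ate my breakfast in the morning".lower().split()
grams = ngrams(sentence)
# unigrams: ['i', 'ate', 'my', 'breakfast', 'in', 'the', 'morning']
# bigrams : ['i ate', 'ate my', 'my breakfast', 'breakfast in',
#            'in the', 'the morning']

rng = np.random.default_rng(0)
# My assumption: each n-gram vector starts from a small uniform
# initialisation (as fastText does) and is then updated during training.
embeddings = {g: rng.uniform(-1.0 / DIM, 1.0 / DIM, DIM) for g in grams}

# My assumption: the sentence vector is just the average of the
# (learned) n-gram vectors.
sent_vec = np.mean([embeddings[g] for g in grams], axis=0)
print(sent_vec.shape)  # (100,)
```

Is this roughly how it works, and if so, which of these n-grams acts as the target word during the CBOW-style updates? Thanks!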