Commit

use useBytes when writing text to disk
jwijffels committed Oct 4, 2023
1 parent 687c8f3 commit 71da28f
Showing 3 changed files with 8 additions and 2 deletions.
1 change: 1 addition & 0 deletions NEWS.md
@@ -6,6 +6,7 @@
 - The embeddings from the file-based (word2vec.character) and list-based (word2vec.list) approaches are proven to be the same if the tokenisation and the hyperparameters of the model are the same
 - In order to make sure the embeddings are the same, the vocabulary had to be sorted by the number of times each token appears in the corpus and, in case of ties, by the token itself. As a consequence, the embeddings generated with version 0.4.0 will be slightly different from the ones obtained with package versions < 0.4.0 due to a possible ordering difference in the vocabulary
 - examples provided in the help of ?word2vec and in the README
+- writing text data to files before training for the file-based approach (word2vec.character) now uses useBytes = TRUE (see issue #7)
 
 ## CHANGES IN word2vec VERSION 0.3.4

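The useBytes change recorded in this NEWS entry can be illustrated outside the package. A minimal sketch (not package code): with `useBytes = TRUE`, `writeLines()` emits each string's bytes untranslated instead of re-encoding them through the connection, so a UTF-8 string lands on disk byte-for-byte.

```r
# Sketch of the behaviour the fix relies on: useBytes = TRUE makes
# writeLines() write the string's bytes as-is, so the UTF-8
# representation of "café" (63 61 66 c3 a9) reaches the file unchanged.
x <- "caf\u00e9"                       # "café", stored as UTF-8 in R
f <- tempfile()
con <- file(f, open = "wt", encoding = "UTF-8")
writeLines(x, con = con, useBytes = TRUE)
close(con)
identical(readBin(f, what = "raw", n = 5), charToRaw(x))
file.remove(f)
```

The `identical()` check compares the first five bytes on disk against the in-memory UTF-8 bytes of the string; with `useBytes = TRUE` they match regardless of the platform's native locale.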
6 changes: 4 additions & 2 deletions R/word2vec.R
@@ -134,6 +134,7 @@ word2vec <- function(x,
 #' @param encoding the encoding of \code{x} and \code{stopwords}. Defaults to 'UTF-8'.
 #' Calculating the model always starts from files allowing to build a model on large corpora. The encoding argument
 #' is passed on to \code{file} when writing \code{x} to hard disk in case you provided it as a character vector.
+#' @param useBytes logical passed on to \code{\link{writeLines}} when writing the text and stopwords on disk before building the model. Defaults to \code{TRUE}.
 #' @export
 word2vec.character <- function(x,
                                type = c("cbow", "skip-gram"),
@@ -144,6 +145,7 @@ word2vec.character <- function(x,
                                split = c(" \n,.-!?:;/\"#$%&'()*+<=>@[]\\^_`{|}~\t\v\f\r",
                                          ".\n?!"),
                                encoding = "UTF-8",
+                               useBytes = TRUE,
                                ...){
     type <- match.arg(type)
     stopw <- stopwords
@@ -153,7 +155,7 @@ word2vec.character <- function(x,
     }
     file_stopwords <- tempfile()
     filehandle_stopwords <- file(file_stopwords, open = "wt", encoding = encoding)
-    writeLines(stopw, con = filehandle_stopwords)
+    writeLines(stopw, con = filehandle_stopwords, useBytes = useBytes)
     close(filehandle_stopwords)
     on.exit({
         if (file.exists(file_stopwords)) file.remove(file_stopwords)
@@ -167,7 +169,7 @@ word2vec.character <- function(x,
         if (file.exists(file_train)) file.remove(file_train)
     })
     filehandle_train <- file(file_train, open = "wt", encoding = encoding)
-    writeLines(text = x, con = filehandle_train)
+    writeLines(text = x, con = filehandle_train, useBytes = useBytes)
     close(filehandle_train)
     }
     #expTableSize <- 1000L
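The pattern touched by this hunk — write the training text to a temporary file with the requested encoding, pass `useBytes` through to `writeLines`, and remove the file afterwards — can be sketched in isolation. `write_train_file` below is a hypothetical helper for illustration, not package API; the real code trains the model from the file and removes it via `on.exit()`.

```r
# Hypothetical helper mirroring word2vec.character()'s preprocessing:
# the text is written to a temporary file (which the training code would
# then read); useBytes = TRUE keeps the bytes untranslated on the way out.
write_train_file <- function(x, encoding = "UTF-8", useBytes = TRUE) {
    file_train <- tempfile()
    filehandle_train <- file(file_train, open = "wt", encoding = encoding)
    writeLines(text = x, con = filehandle_train, useBytes = useBytes)
    close(filehandle_train)
    file_train
}

f <- write_train_file(c("the quick brown fox", "jumps over the lazy dog"))
readLines(f)
file.remove(f)
```

In the package itself the caller is responsible for cleanup (registered with `on.exit()` before the file is written), so a failure during training still removes the temporary file.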
3 changes: 3 additions & 0 deletions man/word2vec.character.Rd

Some generated files are not rendered by default.
