Related to #162 but deserves to be its own issue.

Given the existence of `readCsv` and `toCsv`, it is natural to expect that arraymancer can parse its own CSV files. Unfortunately, this is not possible: the CSV files generated by arraymancer contain the dimension indices of each element as additional columns. Parsing an arraymancer CSV file therefore yields an (N+1)xM tensor, where N is the number of dimensions of the original tensor and M its total number of elements.

We should have an option in `readCsv`, `isTensor` or something like this, that knows about the arraymancer layout and parses it back to the original layout.
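To make the round trip concrete, here is a minimal sketch of the problem (the `skipHeader` argument and the exact result shape are assumptions based on the description above, not checked against the current `io_csv` signatures):

```nim
import arraymancer

# Minimal sketch of the round trip described above.
let t = [[1.0, 2.0, 3.0],
         [4.0, 5.0, 6.0]].toTensor        # shape [2, 3]: N = 2 dims, M = 6 elements
t.toCsv("t.csv")                          # writes one row per element: index columns + value

# readCsv knows nothing about the layout toCsv produced, so the result is a
# plain table of (indices, value) rows rather than a tensor of shape [2, 3].
let u = readCsv[float]("t.csv", skipHeader = true)  # `skipHeader` is an assumed parameter name
echo t.shape   # [2, 3]
echo u.shape   # not [2, 3]: the N index columns plus the value column leak into the shape
```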
Vindaar added a commit to Vindaar/Arraymancer that referenced this issue on Oct 26, 2021:
* change CSV parser to directly parse into a Tensor
  Otherwise we would have to copy the `seq` we parse into for mem copyable types after parsing.
* [io] replace CSVParser based line counter by memfiles counter
* [io] add simple readCsv test & add note to docstring about #530
* [io] extend tests with semicolon example, fixup empty line test
* [io] remove TODO note
* fix line counting for quoted fields in CSV files
* [tests] add test case for quoted field in CSV file
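For reference, a hypothetical sketch of the memfiles-based line counting mentioned above (not the code that landed; note that a plain newline count over-counts CSV rows when quoted fields contain embedded newlines, which is what the quoted-field fix addresses):

```nim
import std/memfiles

# Hypothetical line counter using memfiles instead of a CsvParser pass.
proc countLines(path: string): int =
  var mf = memfiles.open(path)
  defer: mf.close()
  for _ in memSlices(mf):   # zero-copy iteration over newline-delimited slices
    inc result
```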