Here is a simple example:
import arraymancer

let t = arange(0, 2*3)
let w = t.reshape(2, 3).permute(1, 0).flatten()
echo t  # [0, 1, 2, 3, 4, 5]
echo w  # expected [0, 3, 1, 4, 2, 5]
assert t != w  # fails: w comes back equal to t
The assertion actually fails: the two tensors come out identical, while you'd expect `w` to be `[0, 3, 1, 4, 2, 5]`.
The problem lies in the `flatten` (or, equivalently, `reshape(2*3)`) operation: if the `.is_F_contiguous` check passes (and it does in this case), the operation just changes the shape without a copy. In reality we have a non-contiguous C-tensor, so a copy is needed to flatten it. Just because the strides happen to make it look like some contiguous F-tensor doesn't mean it is one.
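Until this is fixed, forcing an explicit copy before flattening gives the expected result. A minimal workaround sketch, assuming `clone` copies the data into a fresh buffer in row-major order (its default):

import arraymancer

let t = arange(0, 2*3)
# clone() materializes the permuted view in row-major order,
# so flatten() then sees a genuinely C-contiguous tensor.
let w = t.reshape(2, 3).permute(1, 0).clone().flatten()
echo w  # [0, 3, 1, 4, 2, 5]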
There are two solutions to this problem:

1. Make the layout an attribute of the Tensor object and support both layouts. This way there is no confusion, but many operations would have to be adjusted.
2. Make all tensors C-tensors (i.e. with rowMajor layout). This is actually already the case, but checks for F-contiguity and layout options need to be eliminated from all operations (`reshape` included), as they are applied erroneously (see the sketch below).
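A minimal sketch of the corrected fast path for the second option; `flattenFixed` is a hypothetical name, and it assumes the `is_C_contiguous` predicate (the C-order counterpart of the `is_F_contiguous` check mentioned above) and `clone`:

import arraymancer

# Hypothetical corrected flatten: reuse the buffer only when the
# in-memory order already matches the logical row-major order.
proc flattenFixed[T](t: Tensor[T]): Tensor[T] =
  if t.is_C_contiguous:
    result = t.reshape(t.size)         # safe metadata-only change
  else:
    result = t.clone().reshape(t.size) # copy first, then reshape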
P.S. The layout defines the ordering of tensor elements and cannot be inferred from the strides. Suppose we have a 3-dim tensor `t`. If it has rowMajor layout, the ordering of the elements is `(t[0, 0, 0], t[0, 0, 1], ..., t[0, 1, 0], t[0, 1, 1], ..., t[d1-1, d2-1, d3-2], t[d1-1, d2-1, d3-1])`. If the layout is colMajor, the ordering is `(t[0, 0, 0], t[1, 0, 0], ..., t[0, 1, 0], t[1, 1, 0], ..., t[d1-2, d2-1, d3-1], t[d1-1, d2-1, d3-1])`. This holds irrespective of how the tensor happens to lie in memory. And given the same strides, the tensor can be contiguous with one layout and discontiguous with the other.
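To make the P.S. concrete with the example above: the permuted tensor has shape [3, 2] and strides [1, 3], which happen to be exactly the colMajor-contiguous strides for that shape, so the F-contiguity check passes:

import arraymancer

let v = arange(0, 2*3).reshape(2, 3).permute(1, 0)
echo v.shape    # [3, 2]
echo v.strides  # [1, 3] -- colMajor-contiguous strides for this shape
# rowMajor strides for shape [3, 2] would be [2, 1], so v is not C-contiguous:
# memory holds 0,1,2,3,4,5 but the logical row-major order is 0,3,1,4,2,5.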
AFAIK, the way to change a tensor's layout is through `initTensorMetadata`, not by reshaping, which mostly changes the strides vector and not the actual data.
What I get from this issue:

- Comparison needs to take strides / shape / row-major vs col-major into account.
- We need a dedicated function to swap from row major to col major, to avoid those reshape / permute shenanigans (see the sketch after this list).
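For the second point, the dedicated conversion could be a thin wrapper over the existing `asContiguous` proc, which already copies the data into a requested layout; `toColMajor` is a hypothetical name, not current API:

import arraymancer

# Hypothetical helper: force a colMajor copy of the tensor's data.
proc toColMajor[T](t: Tensor[T]): Tensor[T] =
  t.asContiguous(colMajor, force = true)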