-
Hi @kiview,
-
It looks like there may have been a bug at some point, but it also looks like it was addressed. However, I am seeing something very similar, or perhaps the same thing. Here is an example of feeding the exact same data in twice (a sketch of both cases follows below):

Positive Case (bug?): reusing the same learner for both calls.

Negative Case: re-loading the learner each time.
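A rough sketch of what this comparison might look like, assuming a learner exported with `learn.export()`; the file name `clf.pkl` and the array `X_new` are hypothetical stand-ins, not names from this thread:

```python
from tsai.all import *
import numpy as np

# Hypothetical inputs: an exported learner and a batch of new samples
learn = load_learner('clf.pkl')   # illustrative file name
X_new = np.load('X_new.npy')      # illustrative data, shaped like the training X

# Positive case (bug?): reuse the same in-memory learner for both calls
out_a = learn.get_X_preds(X_new, with_decoded=True)[0]
out_b = learn.get_X_preds(X_new, with_decoded=True)[0]
# With the bug present, out_a and out_b can differ despite identical input

# Negative case: re-load the learner before each call
out_c = load_learner('clf.pkl').get_X_preds(X_new, with_decoded=True)[0]
out_d = load_learner('clf.pkl').get_X_preds(X_new, with_decoded=True)[0]
# Fresh learners return consistent outputs
```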
-
Hi @bob-mcrae,

```python
from tsai.all import *

# Load the LSST dataset from the UCR archive without pre-splitting
X, y, splits = get_UCR_data('LSST', split_data=False)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms)
learn = ts_learner(dls, InceptionTimePlus, metrics=accuracy, cbs=[ShowGraph()])
learn.fit_one_cycle(10, 1e-2)

# Run inference twice on the validation split; test_eq raises if the
# two outputs differ
output = learn.get_X_preds(X[splits[1]], with_decoded=True)[0]
output2 = learn.get_X_preds(X[splits[1]], with_decoded=True)[0]
test_eq(output, output2)
```

I've closed the issue, but please let me know if you find any other issues.
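For anyone pinned to an older release where the repeated-call outputs still diverge, re-loading the learner between calls (as in the negative case above) should sidestep the statefulness. A sketch, continuing from the snippet above; `export.pkl` is fastai's default export file name, and the equality check is an assumption about the workaround rather than something verified in this thread:

```python
learn.export('export.pkl')           # persist the trained learner
fresh = load_learner('export.pkl')   # fresh, stateless copy
output3 = fresh.get_X_preds(X[splits[1]], with_decoded=True)[0]
test_eq(output, output3)             # should match the first in-memory call
```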
-
When calling `learn.get_preds(dl=test_dl, with_decoded=True)` for new data, as shown in the tutorial notebooks, I get the correct predictions and probabilities on the first call. However, calling it again with the same `dl` leads to completely wrong decoded predictions, and the probability tensors consist only of 0 and 1 values. Obviously, there is something stateful happening in the `learn` object which I don't understand. I also had the feeling this is related to the `reorder` parameter; however, I could not really reproduce that.
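For concreteness, a minimal sketch of the symptom, using the names from the tutorial notebooks (`learn`, `test_dl`); the printed values are illustrative, not actual output:

```python
# First call: probabilities and decoded predictions look correct
probas1, _, decoded1 = learn.get_preds(dl=test_dl, with_decoded=True)

# Second call with the very same dl: decoded predictions change and the
# probability tensors degenerate to hard 0/1 values
probas2, _, decoded2 = learn.get_preds(dl=test_dl, with_decoded=True)

print(probas1[0])                     # e.g. tensor([0.03, 0.89, 0.08])
print(probas2[0])                     # e.g. tensor([0., 1., 0.])
print((decoded1 == decoded2).all())   # tensor(False) when the bug triggers
```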