TypeError: unhashable type: 'list' #1331
Fahimtonmoy asked this question in Q&A · Unanswered · 3 comments
-
Hi, can you prepare a minimal example that we can run ourselves to reproduce the error?
-
Hi, I am facing the same issue. Have you resolved it?
-
Any progress on this?
-
Hello everyone,
I am new to the Avalanche library and am trying to use it to mitigate catastrophic forgetting. I am running the code blocks below in a Jupyter notebook to implement continual learning. I have five custom tabular datasets, and after conventional preprocessing I built the list named datasets shown below. I am using the EWC plugin with the Naive strategy to observe how it helps reduce catastrophic forgetting. During execution I get stuck on a couple of errors, described at the end. Could anyone kindly suggest how to resolve them?
Code:
import tensorflow as tf
import pandas as pd
import numpy as np
import avalanche
import torch
import torch.nn as nn
datasets = [(dataset_1_x, dataset_1_y), (dataset_2_x, dataset_2_y), (dataset_3_x, dataset_3_y), (dataset_4_x, dataset_4_y), (dataset_5_x, dataset_5_y)]
# Converting numpy arrays to tensors
train_datasets = []
task_labels = []
for i, (X, y) in enumerate(datasets):
    # Convert numpy arrays to PyTorch tensors
    X_tensor = torch.tensor(X, dtype=torch.float32)
    y_tensor = torch.tensor(y, dtype=torch.long)
from avalanche.benchmarks.generators import tensors_benchmark

created_benchmark = tensors_benchmark(
    train_tensors=train_datasets,
    test_tensors=train_datasets,  # I have kept the train and test data the same to observe the effect of catastrophic forgetting.
    task_labels=task_labels,
    complete_test_set_only=False
)
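Note that the loop above, as posted, never appends anything to train_datasets or task_labels (the append lines may have been lost when pasting), so the benchmark would be built from empty lists. For reference, tensors_benchmark expects train_tensors/test_tensors to contain one (patterns, labels) tensor pair per experience and task_labels to contain one label per experience. A minimal sketch of the intended population step, assuming the experience index i is used as the task label:
for i, (X, y) in enumerate(datasets):
    X_tensor = torch.tensor(X, dtype=torch.float32)
    y_tensor = torch.tensor(y, dtype=torch.long)
    train_datasets.append((X_tensor, y_tensor))  # one (patterns, labels) pair per experience
    task_labels.append(i)                        # a single int per experience, not a list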
Model creation:
model = nn.Sequential(
    nn.Linear(82, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
    nn.Sigmoid()
)
Strategy creation:
from torch.nn import BCELoss
import torch.optim as optim
from avalanche.training.plugins import EWCPlugin
from avalanche.training.supervised import Naive
optimizer = optim.Adam(model.parameters(), lr=0.01)
criterion = BCELoss()
ewc = EWCPlugin(ewc_lambda=0.001)
strategy = Naive(model=model, optimizer=optimizer, criterion=criterion, train_mb_size=128, plugins=[ewc])
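As a side note, unrelated to the errors below: a numerically more stable but equivalent setup is to drop the final nn.Sigmoid() from the model and use BCEWithLogitsLoss instead of BCELoss. This is only a suggestion, not something the original code requires:
from torch.nn import BCEWithLogitsLoss

criterion = BCEWithLogitsLoss()  # expects raw logits, so the final nn.Sigmoid() must be removed from the model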
Training:
for experience in created_benchmark.train_stream:
    print(f"Training on experience: {experience.current_experience}")
    strategy.train(experience)
Errors:
After executing the above code, I get the error below.
ValueError: Using a target size (torch.Size([128])) that is different to the input size (torch.Size([128, 1])) is deprecated. Please ensure they have the same size.
To solve this error, I added the line below to the numpy-array-to-tensor conversion code block.
y_tensor = y_tensor.reshape(-1, 1)
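One related point: nn.BCELoss also expects floating-point targets, so the long-typed y_tensor will likely raise a dtype error once the shape issue is fixed. A hedged adjustment, assuming the labels are binary 0/1 values:
y_tensor = y_tensor.float()  # or create it directly with dtype=torch.float32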
After adding this line and running the training again, I now get a new error:
TypeError: unhashable type: 'list'
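Without the full traceback this is only a guess, but two plausible sources of the unhashable-list error in this setup are: (1) after the reshape, each per-sample target becomes a length-1 list when Avalanche collects the dataset targets, so building the set of classes per experience fails; (2) an element of task_labels is itself a list instead of a single int. A sketch that sidesteps both, keeping the targets one-dimensional and flattening the model output instead of reshaping y (the nn.Flatten(0) layer and the assert are assumptions, not part of the original code):
# Keep dataset targets 1-D floats (BCELoss needs float targets)...
y_tensor = torch.tensor(y, dtype=torch.float32)  # shape [N]

# ...make the model emit a 1-D output so it matches the [batch]-shaped targets...
model = nn.Sequential(
    nn.Linear(82, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
    nn.Sigmoid(),
    nn.Flatten(0),  # [batch, 1] -> [batch]
)

# ...and make sure every task label is a plain int.
assert all(isinstance(t, int) for t in task_labels), "each task label should be a single int"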