Predicting score problem #8

Open
Ildar5 opened this issue Nov 15, 2020 · 0 comments

Ildar5 commented Nov 15, 2020

Hello, I am trying to use your code with just Python (without Kitura and Swift), but no matter what image I use, every prediction comes out with a memorability score of at least ~0.79; even a completely white image scores ~0.83. Please help me, here is my code:

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, BatchNormalization,
                                     ZeroPadding2D, GlobalAveragePooling2D, Dense, Dropout)

# ... (euclidean_distance_loss, load_split and lamem_generator are defined elsewhere)
model = Sequential()
model.add(Conv2D(96, (11, 11), (4, 4), activation="relu", input_shape=(227, 227, 3)))
model.add(MaxPooling2D((3, 3), (2, 2)))
model.add(BatchNormalization())
model.add(Conv2D(256, (5, 5), activation="relu"))
model.add(ZeroPadding2D((2, 2)))
model.add(MaxPooling2D((3, 3), (2, 2)))
model.add(BatchNormalization())
model.add(Conv2D(384, (3, 3), activation="relu"))
model.add(ZeroPadding2D((1, 1)))
model.add(Conv2D(384, (3, 3), activation="relu"))
model.add(ZeroPadding2D((1, 1)))
model.add(Conv2D(256, (3, 3), activation="relu"))
model.add(ZeroPadding2D((1, 1)))
model.add(MaxPooling2D((3, 3), (2, 2)))
model.add(GlobalAveragePooling2D())
model.add(Dense(4096, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(4096, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(1))

train_split = load_split("../lamem/splits/train_1.txt")
test_split = load_split("../lamem/splits/test_1.txt")
batch_size = 64 * 4

train_gen = lamem_generator(train_split, batch_size=batch_size)
test_gen = lamem_generator(test_split, batch_size=batch_size)

model.compile("adam", euclidean_distance_loss)
model.fit(train_gen, steps_per_epoch=int(len(train_split) / batch_size), epochs=5, verbose=1,
          validation_data=test_gen, validation_steps=int(len(test_split) / batch_size))

model.save("memnet_model2.h5")
# I am also trying separate weights saving.
model.save_weights('memnet_model2_w')
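
For context, the snippet above elides the definition of euclidean_distance_loss; a common Keras definition of this kind of loss is sketched below (an assumption, not necessarily the exact definition used in this project):

import tensorflow.keras.backend as K

def euclidean_distance_loss(y_true, y_pred):
    # Euclidean (L2) distance between predicted and target memorability scores
    return K.sqrt(K.sum(K.square(y_pred - y_true), axis=-1))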

Then, in another script:

import multiprocessing as mp
import numpy as np
import tensorflow as tf
from PIL import Image

# euclidean_distance_loss must be available here as well (same definition as in the training script)
model = tf.keras.models.load_model('memnet_model2.h5',
                                   custom_objects={'euclidean_distance_loss': euclidean_distance_loss})
# I also tried load_weights instead, hoping this might help:
# model.load_weights('memnet_model2_w')

def load_image(image_file):
    return np.array(Image.open(image_file).resize((227, 227)).convert("RGB"), dtype="float32") / 255.

test_img = mp.Pool().map(load_image, ['predict/7.png'])
test_img = np.array(test_img)
print(test_img.shape)
# test_img.reshape(-1, 227, 227, 3)
print(np.array(test_img).shape)

prediction = model.predict(np.array(test_img))
print(prediction)
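
A minimal sketch of a sanity check for the behavior described above (assuming the model and load_image defined earlier; the second image path is only a placeholder) is to compare predictions on a blank white image against a few real images:

white = np.ones((1, 227, 227, 3), dtype="float32")  # a completely white image after the same /255 scaling
real = np.array([load_image(p) for p in ['predict/7.png', 'predict/8.png']])  # 'predict/8.png' is a placeholder path

print("white image:", model.predict(white).ravel())
print("real images:", model.predict(real).ravel())
# If every output sits in the same narrow band (e.g. ~0.79-0.83), the model is
# effectively predicting a near-constant score regardless of the input.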