Evaluation server #23

Open

cylee81 opened this issue Mar 15, 2021 · 12 comments

Comments

@cylee81

cylee81 commented Mar 15, 2021

Hi,

Is the evaluation server working? I haven't received any results after waiting for four days.

Thanks!

@akshitac8
Collaborator

@dingjiansw101 Can you please have a look at it?

@kgiannakelos

Yeah, it seems the evaluation server is down.

@akshitac8 I would like to ask whether the evaluation server can accept predictions on a different splitting of the images. Currently the server accepts predictions based on the 800x800 splitting of the images produced by the devkit.

Is it possible to upload predictions for a different splitting of the images, like 1024x1024?

@akshitac8
Collaborator

@kgiannakelos the evaluation server only accepts images with an 800x800 patch size.
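
(As an aside for readers: below is a minimal sketch of what a fixed-size sliding-window split looks like in general. The 200-pixel overlap and the function names are assumptions for illustration only; the devkit's split.py is the authoritative tool.)

```python
# Sketch of a sliding-window split into fixed-size patches.
# `img` is assumed to be a NumPy HxWxC array; patch=800 matches the
# server's requirement, the 200-pixel overlap is only an assumption.
def split_positions(length, patch=800, overlap=200):
    """Start offsets along one axis; the last patch is clamped to the edge."""
    step = patch - overlap
    starts = list(range(0, max(length - patch, 0) + 1, step))
    if starts[-1] + patch < length:
        starts.append(length - patch)  # extra window flush with the border
    return starts

def split_image(img, patch=800, overlap=200):
    """Yield ((x, y), window) for every patch of the image."""
    h, w = img.shape[:2]
    for y in split_positions(h, patch, overlap):
        for x in split_positions(w, patch, overlap):
            yield (x, y), img[y:y + patch, x:x + patch]
```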

@kgiannakelos

@akshitac8 Ok thanks!

One last thing: does the evaluation server accept a single mask containing all predictions for a specific category?

Currently my model produces an 800x800 mask for each object it detects. All these predictions amount to a ~600 GB JSON file, which is obviously too big to process.

The results file could be much smaller if my model output a single mask for all predicted objects of a given category, instead of N masks for N predicted objects of that category.

Is it OK if I evaluate my results like this?
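
(For illustration, the per-category reduction could look like the sketch below, which unions COCO-style RLE masks with pycocotools. The file names and the choice of taking the max score per category are assumptions; whether the server accepts merged predictions is exactly the open question here.)

```python
# Sketch: merge per-instance RLE masks into one RLE per (image, category)
# pair to shrink a COCO-style segm.json. Not an endorsed evaluation format.
import json
from collections import defaultdict
from pycocotools import mask as mask_utils

def merge_per_category(detections):
    grouped = defaultdict(list)
    for det in detections:
        grouped[(det["image_id"], det["category_id"])].append(det)

    merged = []
    for (image_id, category_id), dets in grouped.items():
        rle = mask_utils.merge([d["segmentation"] for d in dets])  # mask union
        rle["counts"] = rle["counts"].decode("ascii")  # bytes -> JSON-safe str
        merged.append({
            "image_id": image_id,
            "category_id": category_id,
            "segmentation": rle,
            # Per-instance scores are lost; one score has to stand in for N.
            "score": max(d["score"] for d in dets),
        })
    return merged

with open("segm.json") as f:            # hypothetical input path
    detections = json.load(f)
with open("segm_merged.json", "w") as f:
    json.dump(merge_per_category(detections), f)
```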

@dingjiansw101
Collaborator

It should be ok now.

@czq693497091

@akshitac8 I used maskrcnn-benchmark to generate the segmentation results (segm.json) on the iSAID test set, using the test_info.json from the iSAID website, but the server replied with a JSON format error. The format of my segm.json is the same as the example. Here is the first detection result in my segm.json:

[{"image_id": 0, "category_id": 9, "segmentation": {"size": [800, 800], "counts": "[dm=5jh02N2N3M2N2N1N2O1O000000001O00000000000010O1O100O1O1O2N2N1O3M2N3MTci4"}, "score": 0.9938278794288635}, ...]

I don't know the reason for the format error, could you please help me?

I ran the same experiments on the val dataset and got the correct results with the iSAID_devkit.
Thank you!
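
(A quick local sanity check along these lines can rule out obvious structural problems before uploading. The required fields follow the COCO results format shown above; this is a sketch, not the official server-side validator.)

```python
# Sketch: validate the basic structure of a COCO-style segm.json locally.
import json

REQUIRED_KEYS = {"image_id", "category_id", "segmentation", "score"}

with open("segm.json") as f:  # hypothetical path
    results = json.load(f)

assert isinstance(results, list), "top level must be a JSON array"
for i, det in enumerate(results):
    missing = REQUIRED_KEYS - det.keys()
    assert not missing, f"entry {i} is missing keys: {missing}"
    seg = det["segmentation"]
    assert isinstance(seg, dict) and {"size", "counts"} <= set(seg), \
        f"entry {i}: segmentation must be an RLE dict with 'size' and 'counts'"
    assert seg["size"] == [800, 800], f"entry {i}: patch is not 800x800"
    assert isinstance(det["image_id"], int), f"entry {i}: image_id must be int"
print(f"checked {len(results)} detections, basic structure looks OK")
```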

@kgiannakelos

@czq693497091 Did you figure it out? I got the same problem: correct results on the validation set, but a format error on the test set.

@czq693497091

czq693497091 commented Apr 27, 2021 via email

@kgiannakelos

@czq693497091 I already zipped it, so that's not my problem. I tried a very small portion of my results and it works, so it must be the server's issue. Thanks!
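
(For anyone wanting to reproduce this kind of smoke test, a sketch: slice off a small portion of the results and zip it before uploading. The file names and slice size are placeholders.)

```python
# Sketch: write a small slice of the results and zip it for a test upload.
import json
import zipfile

with open("segm.json") as f:
    results = json.load(f)

with open("segm_small.json", "w") as f:
    json.dump(results[:1000], f)  # first 1000 detections only

with zipfile.ZipFile("segm_small.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("segm_small.json")
```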

@czq693497091

czq693497091 commented May 4, 2021 via email

@akshitac8
Collaborator

@czq693497091 Can you try uploading the results to the server now, and please let me know if the problem persists?

@MJ-inin

MJ-inin commented Feb 2, 2023

> @kgiannakelos the evaluation server only accepts images with an 800x800 patch size.

Hello, I have a question. 🖐🖐

When I generated the test patches by running the split.py code, some image files, such as those from P1032 and P1831, are not 800x800. Can I upload them to the evaluation server?

In other words, for patches whose height or width is less than 800 pixels, can I upload them to the evaluation server as-is, without padding them to 800x800, and still have them evaluated correctly?

Please leave a comment.
Thanks!
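
(If padding does turn out to be needed, a sketch like the one below pads short patches to 800x800 with black pixels at the bottom and right. Whether the server requires this, and whether split.py already has its own padding option, is for the maintainers to confirm; the directory and glob pattern are assumptions.)

```python
# Sketch: pad any patch smaller than 800x800 with black pixels (bottom/right).
from pathlib import Path
from PIL import Image

patch_dir = Path("test_patches")  # hypothetical output directory of split.py
for path in sorted(patch_dir.glob("*.png")):
    img = Image.open(path)
    if img.size != (800, 800):
        padded = Image.new(img.mode, (800, 800))  # zero-filled (black) canvas
        padded.paste(img, (0, 0))                 # keep original in top-left
        padded.save(path)
        print(f"padded {path.name}: {img.size} -> (800, 800)")
```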
