I am trying to train an instance segmentation model with mmdetection and I get this error:
TypeError: 'numpy.float64' object cannot be interpreted as an integer
I want to do instance segmentation with mmdetection (repo here: https://github.com/open-mmlab/mmdetection) using the TACO dataset: https://github.com/pedropro/TACO
TACO is a COCO-format dataset used for garbage detection. Its classes are ['Aluminium foil', 'Battery', 'Aluminium blister pack', 'Carded blister pack', 'Other plastic bottle', 'Clear plastic bottle', 'Glass bottle', 'Plastic bottle cap', 'Metal bottle cap', 'Broken glass', 'Food Can', 'Aerosol', 'Drink can', 'Toilet tube', 'Other carton', 'Egg carton', 'Drink carton', 'Corrugated carton', 'Meal carton', 'Pizza box', 'Paper cup', 'Disposable plastic cup', 'Foam cup', 'Glass cup', 'Other plastic cup', 'Food waste', 'Glass jar', 'Plastic lid', 'Metal lid', 'Other plastic', 'Magazine paper', 'Tissues', 'Wrapping paper', 'Normal paper', 'Paper bag', 'Plastified paper bag', 'Plastic film', 'Six pack rings', 'Garbage bag', 'Other plastic wrapper', 'Single-use carrier bag', 'Polypropylene bag', 'Crisp packet', 'Spread tub', 'Tupperware', 'Disposable food container', 'Foam food container', 'Other plastic container', 'Plastic glooves', 'Plastic utensils', 'Pop tab', 'Rope & strings', 'Scrap metal', 'Shoe', 'Squeezable tube', 'Plastic straw', 'Paper straw', 'Styrofoam piece', 'Unlabeled litter', 'Cigarette']
I obtained the data by following the instructions in the Getting Started section of the TACO GitHub repo, and split the annotations.json file into test, train, and valid sets using https://pypi.org/project/echo1-coco-split/.
After pointing the mmdetection config at the resulting annotation files and images and starting training, the training epoch finishes, but the Evaluating step then fails with the traceback below:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/tmp/ipykernel_26640/1144535353.py in
17 # Create work_dir
18 mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
---> 19 train_detector(model, datasets, cfg, distributed=False, validate=True)
~/violations-tracing-project/mmdetection/mmdet/apis/train.py in train_detector(model, dataset, cfg, distributed, validate, timestamp, meta)
244 elif cfg.load_from:
245 runner.load_checkpoint(cfg.load_from)
--> 246 runner.run(data_loaders, cfg.workflow)
/opt/conda/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py in run(self, data_loaders, workflow, max_epochs, **kwargs)
134 if mode == 'train' and self.epoch >= self._max_epochs:
135 break
--> 136 epoch_runner(data_loaders[i], **kwargs)
137
138 time.sleep(1) # wait for some hooks like loggers to finish
/opt/conda/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py in train(self, data_loader, **kwargs)
56 self._iter += 1
57
---> 58 self.call_hook('after_train_epoch')
59 self._epoch += 1
60
/opt/conda/lib/python3.7/site-packages/mmcv/runner/base_runner.py in call_hook(self, fn_name)
315 """
316 for hook in self._hooks:
--> 317 getattr(hook, fn_name)(self)
318
319 def get_hook_info(self) -> str:
/opt/conda/lib/python3.7/site-packages/mmcv/runner/hooks/evaluation.py in after_train_epoch(self, runner)
269 """Called after every training epoch to evaluate the results."""
270 if self.by_epoch and self._should_evaluate(runner):
--> 271 self._do_evaluate(runner)
272
273 def _do_evaluate(self, runner):
~/violations-tracing-project/mmdetection/mmdet/core/evaluation/eval_hooks.py in _do_evaluate(self, runner)
61 self.latest_results = results
62 runner.log_buffer.output['eval_iter_num'] = len(self.dataloader)
---> 63 key_score = self.evaluate(runner, results)
64 # the key_score may be None so it needs to skip the action to save
65 # the best checkpoint
/opt/conda/lib/python3.7/site-packages/mmcv/runner/hooks/evaluation.py in evaluate(self, runner, results)
366 """
367 eval_res = self.dataloader.dataset.evaluate(
--> 368 results, logger=runner.logger, **self.eval_kwargs)
369
370 for name, val in eval_res.items():
~/violations-tracing-project/mmdetection/mmdet/datasets/coco.py in evaluate(self, results, metric, logger, jsonfile_prefix, classwise, proposal_nums, iou_thrs, metric_items)
643 metrics, logger, classwise,
644 proposal_nums, iou_thrs,
--> 645 metric_items)
646
647 if tmp_dir is not None:
~/violations-tracing-project/mmdetection/mmdet/datasets/coco.py in evaluate_det_segm(self, results, result_files, coco_gt, metrics, logger, classwise, proposal_nums, iou_thrs, metric_items)
481 break
482
--> 483 cocoEval = COCOeval(coco_gt, coco_det, iou_type)
484 cocoEval.params.catIds = self.cat_ids
485 cocoEval.params.imgIds = self.img_ids
/opt/conda/lib/python3.7/site-packages/pycocotools/cocoeval.py in init(self, cocoGt, cocoDt, iouType)
74 self._gts = defaultdict(list) # gt for evaluation
75 self._dts = defaultdict(list) # dt for evaluation
---> 76 self.params = Params(iouType=iouType) # parameters
77 self._paramsEval = {} # parameters for evaluation
78 self.stats = [] # result summarization
/opt/conda/lib/python3.7/site-packages/pycocotools/cocoeval.py in init(self, iouType)
525 def init(self, iouType='segm'):
526 if iouType == 'segm' or iouType == 'bbox':
--> 527 self.setDetParams()
528 elif iouType == 'keypoints':
529 self.setKpParams()
/opt/conda/lib/python3.7/site-packages/pycocotools/cocoeval.py in setDetParams(self)
505 self.catIds = []
506 # np.arange causes trouble. the data point on arange is slightly larger than the true value
--> 507 self.iouThrs = np.linspace(.5, 0.95, np.round((0.95 - .5) / .05) + 1, endpoint=True)
508 self.recThrs = np.linspace(.0, 1.00, np.round((1.00 - .0) / .01) + 1, endpoint=True)
509 self.maxDets = [1, 10, 100]
<array_function internals> in linspace(*args, **kwargs)
/opt/conda/lib/python3.7/site-packages/numpy/core/function_base.py in linspace(start, stop, num, endpoint, retstep, dtype, axis)
118
119 """
--> 120 num = operator.index(num)
121 if num < 0:
122 raise ValueError("Number of samples, %s, must be non-negative." % num)
TypeError: 'numpy.float64' object cannot be interpreted as an integer
I converted the values in the segmentation part of the annotation files from float to integer, and did the same for the values in the bbox part, but the problem was not fixed.
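For reference, the failure can be reproduced in isolation, without mmdetection or the annotation files: the traceback ends inside pycocotools' Params.setDetParams, where np.round() returns a numpy.float64 that NumPy (>= 1.18) refuses to accept as the num argument of np.linspace. A minimal sketch:

```python
import numpy as np

# np.round() returns a numpy.float64, not a Python int
num = np.round((0.95 - 0.5) / 0.05) + 1
assert isinstance(num, np.float64)

# On NumPy >= 1.18, passing a float as linspace's `num` raises TypeError
try:
    np.linspace(0.5, 0.95, num, endpoint=True)
except TypeError as err:
    print(err)  # 'numpy.float64' object cannot be interpreted as an integer

# Casting to int restores the intended behaviour: 10 IoU thresholds
iou_thrs = np.linspace(0.5, 0.95, int(num), endpoint=True)
print(iou_thrs)  # [0.5 0.55 0.6 ... 0.95]
```

This suggests the annotations are not the cause: the float is produced inside pycocotools itself, regardless of the dataset contents.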
Training is started with this line:
train_detector(model, datasets, cfg, distributed=False, validate=True)
With validate=False, training finishes without any error, but then there is no validation, so I cannot measure accuracy (precision, recall, etc.). Also, only the last checkpoint gets saved rather than the best one, so the training is not really usable. If I can solve this float problem in the validation step, I can continue training properly.
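In case it helps: the suggestions I have seen for this error are upgrading pycocotools (recent releases cast these counts to int) or downgrading NumPy below 1.18. As a stopgap, it should also be possible to monkey-patch Params.setDetParams before calling train_detector. This is only a sketch: the iouThrs/recThrs lines mirror the source shown in the traceback, while the other attribute defaults (areaRng, areaRngLbl, useCats) are my reading of the upstream cocoeval.py, so please verify them against your installed version.

```python
import numpy as np

def patched_set_det_params(self):
    """Replacement for pycocotools Params.setDetParams; the int() casts
    around the linspace counts are the only functional change."""
    self.imgIds = []
    self.catIds = []
    self.iouThrs = np.linspace(0.5, 0.95, int(np.round((0.95 - 0.5) / 0.05)) + 1, endpoint=True)
    self.recThrs = np.linspace(0.0, 1.00, int(np.round((1.00 - 0.0) / 0.01)) + 1, endpoint=True)
    self.maxDets = [1, 10, 100]
    self.areaRng = [[0**2, 1e5**2], [0**2, 32**2], [32**2, 96**2], [96**2, 1e5**2]]
    self.areaRngLbl = ['all', 'small', 'medium', 'large']
    self.useCats = 1

try:
    from pycocotools.cocoeval import Params
    Params.setDetParams = patched_set_det_params  # apply before train_detector(...)
except ImportError:
    pass  # pycocotools not available in this environment; nothing to patch
```

Run this once before train_detector(model, datasets, cfg, distributed=False, validate=True); the evaluation hook then builds its IoU and recall thresholds with integer counts.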