[python] Errors I ran into before getting mmdet training to run [mmdet]

These notes are based on mmdet 3.0.0.

ValueError: train_dataloader, train_cfg, and optim_wrapper should be either all None or not None

Traceback (most recent call last):
  File "/notebooks/mmdetection-main/tools/train.py", line 133, in <module>
    main()
  File "/notebooks/mmdetection-main/tools/train.py", line 122, in main
    runner = Runner.from_cfg(cfg)
  File "/usr/local/lib/python3.9/dist-packages/mmengine/runner/runner.py", line 439, in from_cfg
    runner = cls(
  File "/usr/local/lib/python3.9/dist-packages/mmengine/runner/runner.py", line 290, in __init__
    raise ValueError(
ValueError: train_dataloader, train_cfg, and optim_wrapper should be either all None or not None, but got train_dataloader={'batch_size': 16, 'num_workers': 2, 'persistent_workers': True, 'sampler': {'type': 'DefaultSampler', 'shuffle': True}, 'batch_sampler': {'type': 'AspectRatioBatchSampler'}, 'dataset': {'type': 'CocoDataset', 'data_root': 'data/', 'ann_file': 'annotations/coco_annotations_train_all_fold1.json', 'data_prefix': {'img': 'train2017/'}, 'filter_cfg': {'filter_empty_gt': True, 'min_size': 32}, 'pipeline': [{'type': 'LoadImageFromFile', 'backend_args': None}, {'type': 'LoadAnnotations', 'with_bbox': True, 'with_mask': True, 'poly2mask': True}, {'type': 'PackDetInputs'}], 'backend_args': None}}, train_cfg=None, optim_wrapper=None.

Solution

・Add train_cfg and optim_wrapper to the config file
Reading the error message carefully, it ends with "train_cfg=None, optim_wrapper=None". train_cfg and optim_wrapper were simply never defined, so adding both to the config file fixes the error.
See https://mmdetection.readthedocs.io/en/3.x/user_guides/config.html#training-and-testing-config for how to write train_cfg and the related settings.
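
As a rough sketch, the missing entries could look like the following; the loop type, epoch count, and optimizer settings here are illustrative values, not ones taken from this article:

# Epoch-based training loop: train for 12 epochs, run validation every epoch
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=12, val_interval=1)

# Optimizer wrapped by mmengine's OptimWrapper; plain SGD as an example
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001))

Note that the same all-or-none rule applies to validation and testing: if val_dataloader is set, val_cfg and val_evaluator must be defined as well.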

AttributeError: 'NoneType' object has no attribute 'get'

Traceback (most recent call last):
  File "/notebooks/mmdetection-main/tools/train.py", line 133, in <module>
    main()
  File "/notebooks/mmdetection-main/tools/train.py", line 122, in main
    runner = Runner.from_cfg(cfg)
  File "/usr/local/lib/python3.9/dist-packages/mmengine/runner/runner.py", line 439, in from_cfg
    runner = cls(
  File "/usr/local/lib/python3.9/dist-packages/mmengine/runner/runner.py", line 353, in __init__
    self.setup_env(env_cfg)
  File "/usr/local/lib/python3.9/dist-packages/mmengine/runner/runner.py", line 643, in setup_env
    if env_cfg.get('cudnn_benchmark'):
AttributeError: 'NoneType' object has no attribute 'get'

Solution

・Add env_cfg to the config file
The root cause is simply that env_cfg was never defined in the config.
See https://mmdetection.readthedocs.io/en/3.x/migration/config_migration.html?highlight=env_cfg%20 for how to write env_cfg.
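
For reference, the env_cfg used in the standard mmdet 3.x runtime configs looks roughly like this (a sketch of the defaults, not settings specific to this article):

env_cfg = dict(
    cudnn_benchmark=False,  # enable cuDNN autotuning only when input sizes are fixed
    mp_cfg=dict(
        mp_start_method='fork',  # start method for dataloader worker processes
        opencv_num_threads=0),   # disable OpenCV threading to avoid CPU oversubscription
    dist_cfg=dict(backend='nccl'))  # communication backend for distributed training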

KeyError: 'MaskRCNN is not in the model registry'

Traceback (most recent call last):
  File "/notebooks/mmdetection-main/tools/train.py", line 133, in <module>
    main()
  File "/notebooks/mmdetection-main/tools/train.py", line 122, in main
    runner = Runner.from_cfg(cfg)
  File "/usr/local/lib/python3.9/dist-packages/mmengine/runner/runner.py", line 439, in from_cfg
    runner = cls(
  File "/usr/local/lib/python3.9/dist-packages/mmengine/runner/runner.py", line 406, in __init__
    self.model = self.build_model(model)
  File "/usr/local/lib/python3.9/dist-packages/mmengine/runner/runner.py", line 813, in build_model
    model = MODELS.build(model)
  File "/usr/local/lib/python3.9/dist-packages/mmengine/registry/registry.py", line 548, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/usr/local/lib/python3.9/dist-packages/mmengine/registry/build_functions.py", line 250, in build_model_from_cfg
    return build_from_cfg(cfg, registry, default_args)
  File "/usr/local/lib/python3.9/dist-packages/mmengine/registry/build_functions.py", line 100, in build_from_cfg
    raise KeyError(
KeyError: 'MaskRCNN is not in the model registry. 
Please check whether the value of `MaskRCNN` is correct or it was registered as expected. 
More details can be found at https://mmengine.readthedocs.io/en/latest/advanced_tutorials/config.html#import-the-custom-module'

Solution

・Add default_scope = 'mmdet' to the config file
I did not fully work out the reason at first, but mmdet's models are registered under the 'mmdet' registry scope; without default_scope = 'mmdet', mmengine searches its own default registry and cannot find MaskRCNN.
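
Concretely, one line at the top of the config is enough (a minimal sketch):

default_scope = 'mmdet'  # resolve registry names such as 'MaskRCNN' and 'CocoDataset' in the mmdet scope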

ValueError: need at least one array to concatenate

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/mmengine/registry/build_functions.py", line 122, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
  File "/usr/local/lib/python3.9/dist-packages/mmdet/datasets/base_det_dataset.py", line 40, in __init__
    super().__init__(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/mmengine/dataset/base_dataset.py", line 245, in __init__
    self.full_init()
  File "/usr/local/lib/python3.9/dist-packages/mmdet/datasets/base_det_dataset.py", line 78, in full_init
    self.data_bytes, self.data_address = self._serialize_data()
  File "/usr/local/lib/python3.9/dist-packages/mmengine/dataset/base_dataset.py", line 765, in _serialize_data
    data_bytes = np.concatenate(data_list)
  File "<__array_function__ internals>", line 180, in concatenate
ValueError: need at least one array to concatenate

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/mmengine/registry/build_functions.py", line 122, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
  File "/usr/local/lib/python3.9/dist-packages/mmengine/runner/loops.py", line 44, in __init__
    super().__init__(runner, dataloader)
  File "/usr/local/lib/python3.9/dist-packages/mmengine/runner/base_loop.py", line 26, in __init__
    self.dataloader = runner.build_dataloader(
  File "/usr/local/lib/python3.9/dist-packages/mmengine/runner/runner.py", line 1346, in build_dataloader
    dataset = DATASETS.build(dataset_cfg)
  File "/usr/local/lib/python3.9/dist-packages/mmengine/registry/registry.py", line 548, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/usr/local/lib/python3.9/dist-packages/mmengine/registry/build_functions.py", line 144, in build_from_cfg
    raise type(e)(
ValueError: class `CocoDataset` in mmdet/datasets/coco.py: need at least one array to concatenate

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/notebooks/mmdetection/tools/train.py", line 135, in <module>
    main()
  File "/notebooks/mmdetection/tools/train.py", line 131, in main
    runner.train()
  File "/usr/local/lib/python3.9/dist-packages/mmengine/runner/runner.py", line 1687, in train
    self._train_loop = self.build_train_loop(
  File "/usr/local/lib/python3.9/dist-packages/mmengine/runner/runner.py", line 1479, in build_train_loop
    loop = LOOPS.build(
  File "/usr/local/lib/python3.9/dist-packages/mmengine/registry/registry.py", line 548, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/usr/local/lib/python3.9/dist-packages/mmengine/registry/build_functions.py", line 144, in build_from_cfg
    raise type(e)(
ValueError: class `EpochBasedTrainLoop` in mmengine/runner/loops.py: class `CocoDataset` in mmdet/datasets/coco.py: need at least one array to concatenate

Solution

・Define classes in the config file and add them as metainfo
・Change the model's num_classes to the number of classes in your dataset (see the model snippet after the dataloader example below)
When using a custom dataset, you need to define your class names and register them in the dataset's metainfo. The model's default number of classes is also 80 (COCO), so num_classes has to be changed to match your dataset.
Reference: https://mmdetection.readthedocs.io/en/dev-3.x/advanced_guides/customize_dataset.html#:~:text=your%20class%20names%20to%20the%20field%20%60metainfo%60-,metainfo%3Ddict(classes%3Dclasses)%2C,-data_root%3Ddata_root%2C%0A%20%20%20%20%20%20%20%20ann_file%3D%27train

dataset_type = 'CocoDataset'
classes = ('a', 'b', 'c', 'd', 'e')
data_root='path/to/your/'

train_dataloader = dict(
    batch_size=2,
    num_workers=2,
    dataset=dict(
        type=dataset_type,
        # explicitly add your class names to the field `metainfo`
        metainfo=dict(classes=classes),
        data_root=data_root,
        ann_file='train/annotation_data',
        data_prefix=dict(img='train/image_data')
        )
    )
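
For the second point, with a Mask R-CNN style model the class count lives in the ROI head. A minimal override for the five classes above might look like this, assuming the rest of the model is inherited from a base Mask R-CNN config:

model = dict(
    roi_head=dict(
        bbox_head=dict(num_classes=5),    # detection head: number of foreground classes
        mask_head=dict(num_classes=5)))   # mask head: must match bbox_head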

AssertionError: scale_factor is not found in results

Traceback (most recent call last):
  File "/notebooks/mmdetection/tools/train.py", line 135, in <module>
    main()
  File "/notebooks/mmdetection/tools/train.py", line 131, in main
    runner.train()
  File "/usr/local/lib/python3.9/dist-packages/mmengine/runner/runner.py", line 1721, in train
    model = self.train_loop.run()  # type: ignore
  File "/usr/local/lib/python3.9/dist-packages/mmengine/runner/loops.py", line 96, in run
    self.run_epoch()
  File "/usr/local/lib/python3.9/dist-packages/mmengine/runner/loops.py", line 111, in run_epoch
    for idx, data_batch in enumerate(self.dataloader):
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/dataloader.py", line 681, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/dataloader.py", line 1376, in _next_data
    return self._process_data(data)
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/dataloader.py", line 1402, in _process_data
    data.reraise()
  File "/usr/local/lib/python3.9/dist-packages/torch/_utils.py", line 461, in reraise
    raise exception
AssertionError: Caught AssertionError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.9/dist-packages/mmengine/dataset/base_dataset.py", line 408, in __getitem__
    data = self.prepare_data(idx)
  File "/usr/local/lib/python3.9/dist-packages/mmengine/dataset/base_dataset.py", line 790, in prepare_data
    return self.pipeline(data_info)
  File "/usr/local/lib/python3.9/dist-packages/mmengine/dataset/base_dataset.py", line 58, in __call__
    data = t(data)
  File "/usr/local/lib/python3.9/dist-packages/mmcv/transforms/base.py", line 12, in __call__
    return self.transform(results)
  File "/usr/local/lib/python3.9/dist-packages/mmdet/datasets/transforms/formatting.py", line 130, in transform
    assert key in results, f'`{key}` is not found in `results`, ' \
AssertionError: `scale_factor` is not found in `results`, the valid keys are ['img_path', 'img_id', 'seg_map_path', 'height', 'width', 'instances', 'sample_idx', 'img', 'img_shape', 'ori_shape', 'gt_bboxes', 'gt_ignore_flags', 'gt_bboxes_labels', 'gt_masks'].

Solution

・Add "Resize" to the train/test pipeline entries in the config file
I thought Resize would be unnecessary because all of my images are the same size, but it apparently has to be written anyway: Resize is what writes scale_factor into results, and PackDetInputs asserts that it is there.
Similarly, "AssertionError: flip is not found in results" was caused by the train_pipeline below missing the "RandomFlip" entry, which is what writes the flip key.
In general, the required keys seem to come from the default meta_keys of PackDetInputs ('img_id', 'img_path', 'ori_shape', 'img_shape', 'scale_factor', 'flip', 'flip_direction'), so every key listed there has to be produced by some transform earlier in the pipeline.

backend_args = None  # defined elsewhere in the full config; no special file backend is used here

train_pipeline = [  # Training data processing pipeline
    dict(type='LoadImageFromFile', backend_args=backend_args),  # First pipeline to load images from file path
    dict(
        type='LoadAnnotations',  # Second pipeline to load annotations for current image
        with_bbox=True,  # Whether to use bounding box, True for detection
        with_mask=True,  # Whether to use instance mask, True for instance segmentation
        poly2mask=True),  # Whether to convert the polygon mask to instance mask, set False for acceleration and to save memory
    dict(
        type='Resize',  # Pipeline that resizes the images and their annotations
        scale=(512, 512),  # The largest scale of the images
        keep_ratio=True  # Whether to keep the ratio between height and width
        ),
    dict(
        type='RandomFlip',  # Augmentation pipeline that flips the images and their annotations
        prob=0.0),  # The probability to flip; even at 0.0, RandomFlip still records the `flip` key that PackDetInputs expects
    dict(type='PackDetInputs')  # Pipeline that formats the annotation data and decides which keys in the data should be packed into data_samples
]
test_pipeline = [  # Testing data processing pipeline
    dict(type='LoadImageFromFile', backend_args=backend_args),  # First pipeline to load images from file path
    dict(type='Resize', scale=(512, 512), keep_ratio=True),  # Pipeline that resizes the images
    dict(
        type='PackDetInputs',  # Pipeline that formats the annotation data and decides which keys in the data should be packed into data_samples
        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'scale_factor'))
]
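
Finally, these pipelines only take effect once they are referenced from each dataloader's dataset entry. A hedged sketch of that wiring, reusing the placeholder names from the earlier snippet, is:

train_dataloader = dict(
    batch_size=2,
    num_workers=2,
    dataset=dict(
        type=dataset_type,
        metainfo=dict(classes=classes),
        data_root=data_root,
        ann_file='train/annotation_data',
        data_prefix=dict(img='train/image_data'),
        pipeline=train_pipeline))  # the training pipeline defined above

val_dataloader and test_dataloader reference test_pipeline in the same way, with test_mode=True on their dataset.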