Training your own VOC dataset with MMDetection v2.0

1 Create a new container

The MMDetection Docker environment was introduced in the previous post; now we create a new container.

sudo nvidia-docker run --shm-size=8g --name mm_det -it -v /train_data:/mmdetection/data <mmdetection-image>

(<mmdetection-image> is a placeholder for the MMDetection image built in the previous post; replace it with your image name.)

nvidia-docker : lets the new container access the GPU

--name : the container name; change it as you like

-v : maps a host directory to a container directory; here the host directory /train_data is mapped to /mmdetection/data inside the container

exit the container

exit

re-enter the container

sudo docker exec -i -t mm_det /bin/bash

docker exec : run a command in a running container

-i -t : execute in interactive mode

mm_det : container name

/bin/bash : the shell to start inside the container

2 Prepare your own VOC dataset

mmdetection supports the VOC and COCO dataset formats, and custom data formats can also be defined. Here we use the VOC format. The mm_det container already maps the host directory, so anything placed in /train_data on the host appears under /mmdetection/data inside the container. In the host directory /train_data, create a new directory to hold the dataset with the following structure:

VOCdevkit

--VOC2007

----Annotations

----ImageSets

------Main

----JPEGImages

The Annotations directory stores the .xml annotation files and JPEGImages stores the training images. The following script splits the dataset;

save it in the /VOCdevkit/VOC2007 directory and run it there.

import os
import random

# fraction of all samples used for trainval (the rest becomes the test set)
trainval_percent = 0.8
# fraction of trainval used for train (the rest becomes the val set)
train_percent = 0.8
xmlfilepath = 'Annotations'
txtsavepath = 'ImageSets/Main'
total_xml = os.listdir(xmlfilepath)

num = len(total_xml)
indices = range(num)
tv = int(num * trainval_percent)
tr = int(tv * train_percent)
trainval = random.sample(indices, tv)
train = random.sample(trainval, tr)

ftrainval = open(os.path.join(txtsavepath, 'trainval.txt'), 'w')
ftest = open(os.path.join(txtsavepath, 'test.txt'), 'w')
ftrain = open(os.path.join(txtsavepath, 'train.txt'), 'w')
fval = open(os.path.join(txtsavepath, 'val.txt'), 'w')

for i in indices:
    name = total_xml[i][:-4] + '\n'  # drop the .xml extension, keep the file stem
    if i in trainval:
        ftrainval.write(name)
        if i in train:
            ftrain.write(name)
        else:
            fval.write(name)
    else:
        ftest.write(name)

ftrainval.close()
ftrain.close()
fval.close()
ftest.close()

The script above splits the dataset: 80% of the samples go to trainval (of which 80% become train and 20% val), and the remaining 20% go to test. After running it you will see four .txt files in /VOCdevkit/VOC2007/ImageSets/Main.

These .txt files list the image names for each split, and the dataset is now ready.

3 Modify the voc0712.py file

cd /mmdetection/configs/_base_/datasets

After entering the directory, open voc0712.py

In the data configuration, delete (or comment out) the VOC2012 paths and the VOC2012-related entries so that only VOC2007 remains, then save the file.
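For reference, this is roughly what the data block in voc0712.py looks like once the VOC2012 entries are removed (only the relevant fragment is shown; dataset_type, data_root and the pipelines are defined earlier in the same file):

data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type='RepeatDataset',
        times=3,
        dataset=dict(
            type=dataset_type,
            # only the VOC2007 annotation list and image prefix remain
            ann_file=[data_root + 'VOC2007/ImageSets/Main/trainval.txt'],
            img_prefix=[data_root + 'VOC2007/'],
            pipeline=train_pipeline)),
    val=dict(
        type=dataset_type,
        ann_file=data_root + 'VOC2007/ImageSets/Main/test.txt',
        img_prefix=data_root + 'VOC2007/',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        ann_file=data_root + 'VOC2007/ImageSets/Main/test.txt',
        img_prefix=data_root + 'VOC2007/',
        pipeline=test_pipeline))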

4 Modify the voc.py file

cd /mmdetection/mmdet/datasets

Open the voc.py file

The CLASSES tuple in this file lists the VOC category names; replace it with the category names of your own dataset.
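A minimal sketch of the change (the class names here are placeholders; use your own categories, and note that a single-class dataset still needs a trailing comma, e.g. CLASSES = ('person',)):

@DATASETS.register_module()
class VOCDataset(XMLDataset):
    # placeholder category names; replace with your own
    CLASSES = ('cat', 'dog', 'rabbit')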

5 Modify the class_names.py file

cd /mmdetection/mmdet/core/evaluation

Open the class_names.py file

Modify the list returned by the voc_classes() function, replacing it with the labels of your own dataset, then save and exit.
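A sketch of the edit (again, the names are placeholders and must match CLASSES in voc.py):

def voc_classes():
    # placeholder category names; keep the same names and order as in voc.py
    return ['cat', 'dog', 'rabbit']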

6 Modify faster_rcnn_r50_fpn_1x_coco.py

cd mmdetection/configs/faster_rcnn

We use the Faster R-CNN model for training this time; open the faster_rcnn_r50_fpn_1x_coco.py file.

The faster_rcnn_r50_fpn_1x_coco.py file inherits from four files via _base_: the first is the model config, the second is the dataset config, and the last two configure the learning rate schedule and the runtime settings (number of epochs, checkpoint loading, logging, etc.). Change the original coco_detection.py entry to voc0712.py.
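After the change, the _base_ list at the top of faster_rcnn_r50_fpn_1x_coco.py should look roughly like this:

_base_ = [
    '../_base_/models/faster_rcnn_r50_fpn.py',
    '../_base_/datasets/voc0712.py',        # was ../_base_/datasets/coco_detection.py
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py'
]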

7 Modify faster_rcnn_r50_fpn.py

cd /mmdetection/configs/_base_/models

Open the faster_rcnn_r50_fpn.py file and modify num_classes. In MMDetection v2.x, num_classes equals the number of categories in your dataset; there is no need to add one for the background.
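For example, if your dataset has 3 categories, the relevant fragment of the model config becomes (num_classes=3 is a placeholder for your own class count; the same override could also be placed in the top-level config instead of editing the base file):

model = dict(
    roi_head=dict(
        bbox_head=dict(
            # number of your own categories; background is NOT counted in v2.x
            num_classes=3)))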

That is everything that needs to be modified. Once the changes are complete, start training the model.

8 Train the model

python3 ./tools/train.py ./configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py

After training, you can refer to /mmdetection/demo/image_demo.py to test the model.
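A minimal inference sketch using the mmdet Python API (the checkpoint path assumes the default work_dirs output directory created by tools/train.py, and demo/demo.jpg is just an example image; adjust both to your setup):

from mmdet.apis import init_detector, inference_detector, show_result_pyplot

config_file = 'configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
# assumed default checkpoint location written by tools/train.py
checkpoint_file = 'work_dirs/faster_rcnn_r50_fpn_1x_coco/latest.pth'

model = init_detector(config_file, checkpoint_file, device='cuda:0')
result = inference_detector(model, 'demo/demo.jpg')
show_result_pyplot(model, 'demo/demo.jpg', result, score_thr=0.3)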

That's all for training the mmdetection faster_rcnn model with your own dataset

Reprinted: https://zhuanlan.zhihu.com/p/162730118
