Semantic Segmentation of Remote Sensing Images with a U-Net Network: Computer Science Graduation Design (Source Code + Paper)

0 Project description

**Semantic Segmentation of Remote Sensing Images Based on U-Net Network**

Tips: suitable for a course project or graduation design; the workload meets the requirement, and the source code is open.

Training was run with TensorFlow-GPU 1.8 under Anaconda with Python 3.7.
Because GPU memory is limited when generating full-size images in the later stage, the CPU version of TensorFlow is used to compute the predicted images.
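As an aside, an alternative to installing the separate CPU build is to hide the GPU from TensorFlow via an environment variable. This is a generic sketch, not taken from the project source:

```python
import os

# Hiding all CUDA devices forces TensorFlow to fall back to the CPU,
# which avoids GPU out-of-memory errors when predicting on large images.
# The variable must be set before tensorflow is imported.
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
```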

Project sharing:

https://gitee.com/asoonis/feed-neo

1 Research purpose

U-Net is a symmetric network inspired by fully convolutional networks and has achieved good results in medical image segmentation. This study trains a U-Net on a multispectral remote sensing image dataset and uses the convolutional network to segment buildings automatically, aiming to obtain a simple method for automatic semantic segmentation of remote sensing images.

2 Research methods

First, a cross-entropy loss function based on the class proportions in remote sensing images, called class-balanced cross-entropy, is proposed. It is combined with the U-Net architecture from medical image segmentation and applied to remote sensing image semantic segmentation.

Two convolutional networks were trained on the Inria Aerial Image Labeling Dataset training set, one with the ordinary cross-entropy loss and one with the class-balanced cross-entropy loss. The two trained networks were then used to generate predicted images on the Inria test set for comparison.
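The paper does not reproduce the exact formula here, but a class-balanced cross-entropy typically weights each pixel's loss by the inverse frequency of its class, so that the rare building class is not drowned out by background pixels. The NumPy sketch below illustrates that idea; the function name and the inverse-frequency weighting scheme are assumptions, not the project's code:

```python
import numpy as np

def class_balanced_cross_entropy(y_true, y_pred, eps=1e-7):
    """Pixel-wise cross-entropy weighted by inverse class frequency.

    y_true: one-hot labels, shape (rows, cols, n_class)
    y_pred: softmax probabilities, same shape
    """
    # Fraction of pixels belonging to each class in this image
    class_freq = y_true.mean(axis=(0, 1))        # shape (n_class,)
    weights = 1.0 / (class_freq + eps)           # rare classes get larger weights
    weights /= weights.sum()                     # normalize so the weights sum to 1
    # Weighted negative log-likelihood, averaged over all pixels
    log_p = np.log(np.clip(y_pred, eps, 1.0))
    return float(-np.mean(np.sum(weights * y_true * log_p, axis=-1)))
```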

3 Research conclusions

The two methods differ little in accuracy and cross entropy, with plain cross entropy slightly ahead of class-balanced cross entropy. The gap in F1 score is larger: 0.47 for cross entropy versus 0.51 for class-balanced cross entropy, a relative improvement of about 8.5%.
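For reference, F1 is the harmonic mean of precision and recall. The helper below is the generic textbook definition, not the project's evaluation code, and the arithmetic reproduces the 8.5% relative-improvement figure above:

```python
def f1_score(tp, fp, fn):
    """F1 from counts of true positives, false positives and false
    negatives (here, for the building class)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Relative improvement of class-balanced over plain cross-entropy:
improvement = (0.51 - 0.47) / 0.47  # ~0.085, i.e. about 8.5%
```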

4 Paper outline

Chapter 1 Introduction
1.1 Research background and significance
1.2 Research status at home and abroad
1.2.1 Research Status of Semantic Segmentation
1.2.2 Applying Deep Learning to Remote Sensing Image Segmentation
1.3 The main work of this paper
1.4 Chapter arrangement of the thesis
Chapter 2 Background Knowledge
2.1 Fully Convolutional Network
2.2 Precise Segmentation Using Fully Convolutional Networks
2.2.1 Linear structured network
2.2.2 Symmetrical structured network
Chapter 3 Experimental Design
3.1 Dataset selection and processing
3.2 Image processing flow design
3.2.1 Network structure
3.2.2 Convolution Kernel Initialization Scheme
3.2.3 Output image recovery and optimization
3.3 Loss design
3.3.1 Cross entropy
3.3.2 Cross-entropy with weights
3.3.3 Class Balanced Cross Entropy
3.4 Results evaluation
3.5 Implementation
3.5.1 Experimental platform
3.5.2 Model Implementation
Chapter 4 Experiment
4.1 Network initialization design
4.2 Experimental results
4.2.1 The first group: cross entropy
4.2.2 The second group: class-balanced cross-entropy
4.2.3 Result analysis
4.3 Typical errors
4.3.1 Buildings that are too large
4.3.2 Misidentification
4.3.3 Trees and Light and Shadow
4.4 Final result
Chapter 5 Summary and Outlook
5.1 Summary of the full text
5.2 Future Outlook
Acknowledgements
References

5 Project source code

from glob import glob
from PIL import Image
from os import path, makedirs
import numpy as np
from itertools import count, cycle
import logging

logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')

# Grayscale label value (as a 1-tuple of channel values) -> class index
color_class_dict = {(0,): 1, (255,): 0}
n_class = len(color_class_dict)


def name_generator(file_path, ex_name=None, cycle_num=None):
    """Yield numbered file names derived from file_path.

    ex_name: extension for the generated names; a falsy value yields
    names without an extension (the original extension is discarded).
    cycle_num: if given, the numeric suffix cycles through
    0 .. cycle_num - 1 instead of counting up forever.
    """
    true_file_name = path.splitext(path.basename(file_path))[0]
    suffix = '.' + ex_name if ex_name else ''
    counter = cycle(range(cycle_num)) if cycle_num is not None else count(0)
    for i in counter:
        yield '{}_{}{}'.format(true_file_name, i, suffix)


def resize_image(image_path, new_width, new_height, save_dir=None):
    """Resize an image; saves into save_dir, or overwrites in place if save_dir is None."""
    image = Image.open(image_path)
    image = image.resize((new_width, new_height))
    if save_dir is not None:
        file_name = path.basename(image_path)
        image.save(path.join(save_dir, file_name))
    else:
        image.save(image_path)


def split_image(image_path,
                split_width,
                split_height,
                save_dir,
                save_as_img=False):
    # Split in the following order:
    # 1 2 3
    # 4 5 6
    image = Image.open(image_path)
    image_width, image_height = image.size
    if image_width % split_width or image_height % split_height:
        raise ValueError('image size {}x{} cannot be evenly divided into {}x{} tiles'.format(
            image_width, image_height, split_width, split_height))
    new_name = name_generator(image_path, ex_name=False)
    for i in range(0, image_height, split_height):
        for j in range(0, image_width, split_width):
            cutting_box = (j, i, j + split_width, i + split_height)
            slice_of_image = image.crop(cutting_box)
            if save_as_img:
                slice_of_image.save(
                    path.join(save_dir,
                              next(new_name) + '.png'))
            else:
                slice_of_image = np.array(slice_of_image)
                np.save(path.join(save_dir, next(new_name)), slice_of_image)
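The row-major tiling order (1 2 3 / 4 5 6) used by split_image can be illustrated with a small NumPy sketch. split_array below is illustrative only, kept PIL-free so it stands alone, and is not part of the project source:

```python
import numpy as np

def split_array(image, split_h, split_w):
    """Split a 2-D array into tiles in row-major order: 1 2 3 / 4 5 6."""
    h, w = image.shape[:2]
    if h % split_h or w % split_w:
        raise ValueError('image {}x{} cannot be evenly tiled into {}x{}'
                         .format(w, h, split_w, split_h))
    return [image[i:i + split_h, j:j + split_w]
            for i in range(0, h, split_h)
            for j in range(0, w, split_w)]

# A 4x4 array split into four 2x2 tiles
tiles = split_array(np.arange(16).reshape(4, 4), 2, 2)
# tiles[0] is the top-left block: [[0, 1], [4, 5]]
```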


def glut_image(image_list, glut_cols, glut_rows, image_width, image_height,
               save_path):
    """Stitch tiles back into one image in row-major order (reverse of split_image)."""
    glutted_image = Image.new(
        'RGB', (image_width * glut_cols, image_height * glut_rows))
    for i in range(glut_rows):
        for j in range(glut_cols):
            # image_list holds already-opened PIL Image tiles
            paste_it = image_list[k]
            glutted_image.paste(paste_it, (j * image_width, i * image_height))
            k += 1
    glutted_image.save(save_path)


def color_to_class(image_path, save_path=None):
    raw_image = np.load(image_path)
    raw_image = np.reshape(raw_image,
                           (raw_image.shape[0], raw_image.shape[1], -1))
    [rows, cols, _] = raw_image.shape
    classed_image = np.zeros((rows, cols, n_class))
    for i in range(rows):
        for j in range(cols):
            classed_image[i, j, color_class_dict[tuple(raw_image[i, j])]] = 1
    if save_path is not None:
        np.save(path.join(save_path,
                          path.splitext(path.basename(image_path))[0]),
                classed_image)
    else:
        return classed_image


def output_map_to_class(output_map):
    most_possible_label = np.argmax(output_map, 2)
    classed_image = np.zeros(shape=output_map.shape)
    [rows, cols] = most_possible_label.shape
    for i in range(rows):
        for j in range(cols):
            classed_image[i, j, most_possible_label[i, j]] = 1
    return classed_image


def class_to_color(classed_image):
    """Map a one-hot classed image back to grayscale label colors."""
    reverse_color_class_dict = dict(
        zip(color_class_dict.values(), color_class_dict.keys()))
    # The output has as many channels as the color tuples in color_class_dict
    colored_image = np.zeros(shape=(classed_image.shape[0],
                                    classed_image.shape[1],
                                    len(list(color_class_dict.keys())[0])))
    for i in range(classed_image.shape[0]):
        for j in range(classed_image.shape[1]):
            for k in range(n_class):
                if classed_image[i][j][k] == 1:
                    colored_image[i][j] = reverse_color_class_dict[k]
    return colored_image
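For comparison, the last two helpers (output_map_to_class followed by class_to_color) can be expressed as one vectorized NumPy step. The sketch below is an illustrative equivalent using the same color_class_dict, not a drop-in replacement from the project:

```python
import numpy as np

color_class_dict = {(0,): 1, (255,): 0}
reverse_dict = {v: k for k, v in color_class_dict.items()}

def output_map_to_color(output_map):
    """Vectorized equivalent of output_map_to_class + class_to_color."""
    labels = np.argmax(output_map, axis=2)        # (rows, cols) class indices
    # Row c of the palette is the color tuple for class c
    palette = np.array([reverse_dict[c] for c in range(len(reverse_dict))])
    return palette[labels]                        # (rows, cols, channels)

# A 1x2 "prediction": the left pixel favors class 0, the right pixel class 1
colored = output_map_to_color(np.array([[[0.9, 0.1], [0.2, 0.8]]]))
# class 0 maps back to gray value 255, class 1 to gray value 0
```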



Posted by solarith on Fri, 10 Mar 2023 19:08:48 +0530