Point cloud registration literature reading and simple implementation

Literature reading

Literature 1

I Deep learning-based dynamic object classification using LiDAR point cloud augmented by layer-based accumulation for intelligent vehicles

(i.e., deep-learning-based dynamic object classification for intelligent vehicles, using LiDAR point clouds enhanced by layer-based accumulation)

Background: because point cloud data are sparse, a single scan often cannot provide enough shape information about a detected object, which makes it difficult to apply deep learning methods for classification

Deep-learning-based point cloud classification methods:

  • Convert the 3D point cloud into a 2D projection

  • Convert the point cloud into voxels

  • Directly input the raw point cloud

    The first two methods may lose information during the conversion process

Main content of the article

1. Point cloud accumulation method

Point cloud registration algorithm

The **iterative closest point (ICP)** method is used to obtain the rigid coordinate transformation matrix between point clouds of the same target captured at different times.

ICP algorithm principle:
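ICP alternates between two steps until the alignment error converges: (1) match every point of the moving cloud to its nearest neighbor in the fixed (reference) cloud, and (2) estimate the rigid transformation (rotation and translation) that minimizes the mean squared distance between the matched pairs, then apply it to the moving cloud and repeat. A minimal MATLAB sketch of a single point-to-point iteration (the variable names P and Q and the use of knnsearch/SVD are my illustration, not code from the papers; the implementation below relies on the toolbox function pcregistericp instead):

    % P: N-by-3 moving points, Q: M-by-3 fixed points (illustrative names)
    idx = knnsearch(Q, P);                      % 1) nearest fixed point for every moving point
    Qc  = Q(idx, :);                            %    matched correspondences
    muP = mean(P, 1);  muQ = mean(Qc, 1);       %    centroids of the matched point sets
    [U, ~, V] = svd((P - muP)' * (Qc - muQ));   % 2) rotation from the 3-by-3 cross-covariance (SVD / Kabsch)
    R = V * diag([1 1 sign(det(V*U'))]) * U';   %    force a proper rotation, det(R) = +1
    t = muQ' - R * muP';                        %    translation that aligns the centroids
    P = P * R' + t';                            % 3) apply the transform; repeat 1)-3) until convergence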

2. Experimental design of point cloud classification

CAD software is used to create segmented 3D models of the object classes to be recognized. Irrelevant data actually collected by the lidar, such as trees, are used as negative samples for training, and the trained model is validated on the KITTI dataset.

Literature 2

II LIDAR based detection of road boundaries using the density of accumulated point clouds and their gradients

(i.e., LiDAR-based road boundary detection using the density and gradients of accumulated point clouds)

Only the data from the four scanning lines that hit the road surface are used, and the road boundary is then extracted with the accumulation method
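The post does not show how those scan lines are isolated. As a rough MATLAB sketch (the row indices and variable names are my assumptions, not from the paper), the organized point cloud returned by velodyneFileReader has one row per laser channel, so individual scan lines can be picked out by row index:

    frame = ptCloud_original{1};     % one organized frame (16-by-N-by-3 for a VLP-16); ptCloud_original is built in the code section below
    roadRows = 1:4;                  % assumed indices of the channels that hit the road surface; depends on sensor mounting
    roadLines = select(frame, roadRows, 1:size(frame.Location, 2));   % keep only those scan lines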

1. Point cloud accumulation method


Assuming the vehicle travels at 50 km/h and the lidar scans at 50 Hz, a point cloud is acquired roughly every 0.28 m (13.9 m/s ÷ 50 scans/s). In this paper, every four consecutive scans are used as one input frame from which the coordinate transformation relationship is computed.
The effect is shown below: red is the current laser scan, and white is the previously accumulated data

Verification

1 Data description

The practical principle of the point cloud accumulation method is similar to multi-frame synthesis: the key lies in obtaining the coordinate transformation between two frames of point clouds. Through this coordinate transformation, scans of the same target object can be combined into a single frame. In the literature above, the transformation between two frames is obtained with a point cloud registration method, the typical one being ICP. The figure shows the route along which the data were collected.
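One step worth making explicit (the notation here is mine, not from the papers): pairwise registration only yields the transformation from each frame to the frame immediately before it. To place every frame in a single global coordinate system, these per-frame transformations are chained:

    T(k -> 1) = T(2 -> 1) ∘ T(3 -> 2) ∘ ... ∘ T(k -> k-1)    (the rightmost transformation is applied first)

In the MATLAB code below, this running composition is stored in the variable accumTform and updated once per frame.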

2 Code implementation process

  1. Read the pcap file data collected by a Velodyne VLP-16 lidar

     filename = 'D:\matlabwenji\lidar_pcap\a1.pcap';          % file location
     veloReader = velodyneFileReader(filename, "VLP16");      % read the .pcap file
     ptCloud_original = cell(1, veloReader.NumberOfFrames);   % cell array storing the original point cloud of every frame
     for i = 1:veloReader.NumberOfFrames
         ptCloud_original{1,i} = readFrame(veloReader, i);    % read each frame in sequence and store it as a pointCloud object
     end
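A quick sanity check that the recording was read correctly (not part of the original post) is to print the frame count and display the first frame:

     fprintf('Read %d frames\n', veloReader.NumberOfFrames);   % number of frames in the pcap recording
     pcshow(ptCloud_original{1})                                % visualize the first frame
     title('First frame')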
    
  2. Read the first two frames collected by the lidar, take the point cloud of the first frame as the reference, and obtain the coordinate transformation matrix from the second frame to the first frame through ICP. In addition, to reduce the amount of computation during ICP and improve the matching accuracy, the original point clouds first need to be downsampled, i.e., the number of points is reduced.

    ptCloudRef = ptCloud_original{1};       % frame 1 is the reference point cloud
    ptCloudCurrent = ptCloud_original{2};   % frame 2 is the point cloud to be registered
    %% Downsampling
    gridSize = 0.1;                         % downsampling grid size (m)
    fixed = pcdownsample(ptCloudRef, 'gridAverage', gridSize);
    moving = pcdownsample(ptCloudCurrent, 'gridAverage', gridSize);   % downsample both point clouds
    % MATLAB's built-in ICP: tform is the coordinate transformation from the moving cloud to the fixed cloud
    tform = pcregistericp(moving, fixed, 'Metric','pointToPlane','Extrapolate', true);
    ptCloudAligned = pctransform(ptCloudCurrent, tform);   % transform the second frame with the estimated matrix
    mergeSize = 0.015;
    ptCloudScene = pcmerge(ptCloudRef, ptCloudAligned, mergeSize);   % merge the two frames into one scene
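Before accumulating further frames, the quality of this pairwise registration can be checked visually. One option (my addition, not in the original post) is pcshowpair, which overlays the two clouds in different colors:

    pcshowpair(pctransform(moving, tform), fixed)   % aligned moving cloud overlaid on the fixed cloud
    title('Pairwise registration result')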
    
  3. Taking the point cloud coordinates of the first frame as the global reference coordinates, repeat the above steps for each remaining frame to complete the point cloud accumulation over the whole sequence

    accumTform = tform;                     % accumulated transformation (current frame -> frame 1)
    mergeSize = 0.015;
    ptCloudScene = pcmerge(ptCloudRef, ptCloudAligned, mergeSize);
    figure
    hAxes = pcshow(ptCloudScene, 'VerticalAxis','Y', 'VerticalAxisDir', 'Down');
    title('Updated world scene')
    % Set axes properties for faster rendering
    hAxes.CameraViewAngleMode = 'auto';
    hScatter = hAxes.Children;
    for i = 3:length(ptCloud_original)      % loop over the remaining frames in sequence
        ptCloudCurrent = ptCloud_original{i};   % the i-th frame becomes the point cloud to be registered
        fixed = moving;                         % the previous moving cloud becomes the new reference
        moving = pcdownsample(ptCloudCurrent, 'gridAverage', gridSize);   % downsample the current frame
        % Apply ICP to obtain the coordinate transformation from moving to fixed
        tform = pcregistericp(moving, fixed, 'Metric','pointToPlane','Extrapolate', true);
        % Compose the current transformation with the accumulated one to get the
        % transformation from the current frame to the first frame
        accumTform = affine3d(tform.T * accumTform.T);
        ptCloudAligned = pctransform(ptCloudCurrent, accumTform);
        % Update the globally accumulated point cloud
        ptCloudScene = pcmerge(ptCloudScene, ptCloudAligned, mergeSize);
        hScatter.XData = ptCloudScene.Location(:,1);   % refresh the displayed scene
        hScatter.YData = ptCloudScene.Location(:,2);
        hScatter.ZData = ptCloudScene.Location(:,3);
    end
    % Visualize the final accumulated scene
    pcshow(ptCloudScene, 'VerticalAxis','Y', 'VerticalAxisDir', 'Down', ...
        'Parent', hAxes)
    title('Updated world scene')
    xlabel('X (m)')
    ylabel('Y (m)')
    zlabel('Z (m)')
    
  4. Operation results


  5. Result analysis
    This method significantly increases the point cloud density: zebra crossings, turn markings, and other traffic markings on the ground become clearly visible. However, the registration takes too long (a possible speed-up is sketched below).
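The post does not explore speed-ups. Two options worth noting (my suggestions, not from the post or the papers) are downsampling with a coarser grid before registration, or replacing ICP with NDT registration, which is often faster on large outdoor scans. A minimal sketch of the latter, as a drop-in replacement for the pcregistericp call in the loop above:

    gridStep = 1.0;                                  % NDT voxel size in metres (assumed value; tune per data set)
    tform = pcregisterndt(moving, fixed, gridStep);  % returns a rigid transformation usable in place of the ICP result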
