| 000 | | 05835nam a22002537a 4500 | |
|---|---|---|---|
| 008 | | 220426b2022 \|\|\|ad\|\|\| mb\|\| 00\| 0 eng d | |
| 040 | | _aEG-CaNU _cEG-CaNU | |
| 041 | 0 | _aeng _beng | |
| 082 | | _a610 | |
| 100 | 0 | _aMohammed Moustafa Mohamed Hassoubah _91447 | |
| 245 | 1 | _aEnhanced Transformer-based Deep Semantic Segmentation Architecture for Lidar 3D Point Clouds / _cMohammed Moustafa Mohamed Hassoubah | |
| 260 | | _c2022 | |
| 300 | | _a94 p. _bill. _c21 cm. | |
| 500 | | _3Supervisor: Mohamed Elhelw | |
| 502 | | _aThesis (M.A.)--Nile University, Egypt, 2022. | |
| 504 | | _a"Includes bibliographical references" | |
| 505 | 0 | _aContents: Abstract -- Dedication -- List of Tables -- List of Figures -- 1. Introduction: 1.1 Motivation; 1.2 Problem Statement; 1.3 Thesis Outline and Summary of Contributions -- 2. Literature Survey: 2.1 KITTI Velodyne Dataset; 2.2 Point cloud segmentation; 2.3 Transformer Mechanism; 2.4 Transformer applications for 3D point cloud; 2.5 Transformer and Attention mechanisms applications for 2D images; 2.6 Self-Supervised Learning; 2.7 Uncertainty estimation in Deep neural networks applications -- 3. Methodology: 3.1 Spherical projection; 3.2 Semantic Segmentation Network; 3.3 Self-Supervision pre-training (3.3.1 Data Augmentation and Corruption; 3.3.2 Pre-Training Tasks; 3.3.3 Noise-Contrastive estimation; 3.3.4 Learning Process Smoothness); 3.4 Estimating the Model Uncertainty; 3.5 Summary -- 4. Results and Discussions: 4.1 Experiments and Results (4.1.1 Datasets; 4.1.2 Evaluation metrics; 4.1.3 Training configuration; 4.1.4 Results; 4.1.5 Ablation Studies); 4.2 Discussion; 4.3 Summary -- 5. Conclusions and Future work: 5.1 Conclusion; 5.2 Future directions -- Bibliography | |
| 520 | 3 | _aAbstract: For the task of semantic segmentation of 2D or 3D inputs, the transformer architecture suffers from limited localization ability because it lacks low-level details. In addition, a transformer must be pre-trained to function well, and pre-training transformers is still an open area of research. In this work, we introduce a novel architecture for semantic segmentation of 3D point clouds generated from Light Detection and Ranging (LiDAR) sensors. A transformer is integrated into the U-Net 2D segmentation network [1], and the new architecture is trained to conduct semantic segmentation of 2D spherical images generated by projecting 3D LiDAR point clouds. Such integration captures local and region-level dependencies through CNN backbone processing of the input, followed by transformer processing to capture long-range dependencies. The obtained results demonstrate that the new architecture provides enhanced segmentation results over existing state-of-the-art approaches. Furthermore, to define the best pre-training settings, multiple ablations were conducted on the network architecture, the self-training loss function, and the self-training procedures. It is shown that the integrated architecture, pre-trained on an augmented version of the training dataset to reconstruct the original data from corrupted input, with the batch normalization layers randomly initialized during fine-tuning, outperforms SalsaNext [2] (to our knowledge, the best projection-based semantic segmentation network), where results are reported on the SemanticKITTI [3] validation dataset with a 2D input dimension of 1024 × 64. In our evaluation, we found that self-supervision pre-training substantially reduces the epistemic uncertainty (the uncertainty in the model weights; e.g., presenting the model with an input example different from those seen in the training dataset would increase such uncertainty) of the output of the segmentation model. | |
| 546 | | _aText in English, abstracts in English. | |
| 650 | 4 | _aInformatics-IFM _9266 | |
| 655 | 7 | _2NULIB _aDissertation, Academic _9187 | |
| 690 | | _aInformatics-IFM _9266 | |
| 942 | | _2ddc _cTH | |
| 999 | | _c9580 _d9580 | |