A Hybrid Deep CNN-Reinforcement Learning Model for Autonomous Driving / (Record no. 8816)

MARC details
000 -LEADER
fixed length control field 13522nam a22002537a 4500
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION
fixed length control field 210112b2018 a|||f mb|| 00| 0 eng d
040 ## - CATALOGING SOURCE
Original cataloging agency EG-CaNU
Transcribing agency EG-CaNU
041 0# - Language Code
Language code of text eng
Language code of abstract eng
082 ## - DEWEY DECIMAL CLASSIFICATION NUMBER
Classification number 610
100 0# - MAIN ENTRY--PERSONAL NAME
Personal name Karim Mansour
245 1# - TITLE STATEMENT
Title A Hybrid Deep CNN-Reinforcement Learning Model for Autonomous Driving /
Statement of responsibility, etc. Karim Mansour
260 ## - PUBLICATION, DISTRIBUTION, ETC.
Date of publication, distribution, etc. 2018
300 ## - PHYSICAL DESCRIPTION
Extent 108 p.
Other physical details ill.
Dimensions 21 cm.
500 ## - GENERAL NOTE
Materials specified Supervisor: Mohamed A. El-Helw
502 ## - Dissertation Note
Dissertation type Thesis (M.A.)--Nile University, Egypt, 2018.
504 ## - Bibliography
Bibliography "Includes bibliographical references"
505 0# - Contents
Formatted contents note Contents:
1 CHAPTER 1: INTRODUCTION
1.1 Motivation
1.2 Problem Definition
1.3 Thesis Contributions
1.4 Autonomous Driving and Artificial Intelligence
1.5 Organization of Thesis
2 CHAPTER 2: LITERATURE REVIEW
2.1 Introduction
2.2 Autonomous Driving
2.2.1 5 Levels of Autonomy
2.2.1.1 Level 1: One Control Automated
2.2.1.2 Level 2: Two Controls Automated
2.2.1.3 Level 3: City to City Automation
2.2.1.4 Level 4: Full Autonomous
2.2.1.5 Level 5: No Driver
2.2.2 Autonomous Driving Block Diagram
2.3 Artificial Intelligence
2.3.1 Machine Learning
2.3.2 Deep Learning
2.3.2.1 Convolutional Neural Networks
2.3.2.2 Reinforcement Learning
2.3.3 Recurrent Neural Networks
2.4 Rule-Based Control Systems
2.4.1 PID Controller
2.4.2 Automotive Rule-Based Systems
2.4.3 Advantages and Disadvantages
2.5 End-to-End Deep Learning
2.5.1 Advantages and Disadvantages
2.5.2 Existing End-to-End Deep Learning Networks
2.5.2.1 CNN Only Steering: NVIDIA Model
2.5.2.2 CNN + LSTM Steering: VALEO Model
2.5.2.3 Reinforcement Learning Steering: STANFORD Model
2.6 Conclusions
3 CHAPTER 3: NOVEL CNN-REINFORCEMENT LEARNING STEERING MODEL
3.1 Introduction
3.2 Sensors Setup
3.2.1 Camera
3.2.2 Lidar
3.2.3 Steering Angles
3.3 Deep Network Architecture to Predict Future Steering Angles
3.3.1 Model Architecture
3.3.1.1 CNN-Only Model Limitations
3.3.2 Predicting Travel Path Instead of Steering
3.4 Adding Vehicle Speed
3.5 Hybrid Model
3.5.1 Free Space Acquisition
3.5.1.1 Cameras
3.5.1.2 Lidar
3.5.2 Real-time Reward Integration
3.6 Conclusion
4 CHAPTER 4: EXPERIMENTS AND RESULTS
4.1 Introduction
4.2 Country Road Driving Simulator
4.3 Datasets
4.4 Data Augmentation
4.4.1 Data Processing
4.4.1.1 Data Flipping
4.4.1.2 Contrast Modifications
4.4.1.3 Shadow Addition
4.4.2 Extra Sensors for Data Augmentation
4.4.2.1 Angled Cameras
4.4.2.2 Distanced Cameras
4.5 Results
4.5.1 Test Track
4.5.2 Metrics Used
4.5.2.1 Mean Square Error
4.5.2.2 Validation Loss
4.5.2.3 Learning Rate
4.5.3 NVIDIA Model Results
4.5.4 Predicting Future Steering Angles
4.5.4.1 One Second into Future
4.5.4.2 Two Seconds into Future
4.5.5 Speed Integration
4.5.6 Free Space Only Results
4.5.7 Autonomous Driving Hybrid Model
4.6 Conclusion
5 CHAPTER 5: CONCLUSIONS AND FUTURE WORK
5.1 Conclusions
5.2 Avoiding Objects
5.3 Different Vehicles Data
5.4 Human Brain Comparison
5.5 End-to-End Deep Learning Limitations Overcome
5.6 Safety and Reliability Considerations
6 APPENDIX – DESCRIPTION OF USED LIBRARIES
6.1 Python
6.2 TensorFlow
6.3 Keras
6.4 Unity
7 REFERENCES
520 3# - Abstract
Abstract Abstract:
Nowadays, to reduce the number of accidents and increase car safety levels, all car manufacturers are racing to reach high levels of autonomy and achieve driverless cars as soon as possible. The conventional way of controlling the vehicle in current Advanced Driving Assistance Systems (ADAS) using traditional control theory has proved insufficient to handle the unlimited scenarios of autonomous driving in a city environment. It has become clear that Artificial Intelligence (AI), in the form of deep learning, is needed to aid current ADAS functions (e.g. Adaptive Cruise Control and Lane Keeping) in reaching high levels of autonomy using big data. Currently, the automotive industry uses deep learning extensively to classify objects captured by on-board sensors (e.g. cameras), but lately much research has been done on using deep learning to control steering, either with Convolutional Neural Networks (CNN) on raw camera images or with Reinforcement Learning (RL) that learns from mistakes using huge amounts of simulated data. The limitation of the CNN-only solution is that it lacks live feedback to correct itself, and CNNs that use Long Short-Term Memory (LSTM) layers to reduce steering noise require high on-board computational power. On the other hand, the RL-only solution relies heavily on simulations because RL learns by making mistakes, so it cannot easily be used with real-life data. On top of that, neither solution considers the speed of the vehicle as an input; both rely only on raw data from the front camera, which leads to a system that works only at constant velocities.
This thesis proposes a novel hybrid CNN-Reinforcement Learning solution in which both CNN and RL are used at the same time in a supervised mode to provide live feedback, so the approach can be used with real-life data and not only in simulations. It also proposes a major modification to the traditional CNN architecture: the network predicts the driving path instead of the steering angle, which reduces output noise and avoids LSTMs and their large computational cost. In addition, the architecture is altered to take the vehicle's velocity into account to ensure smooth driving at different speeds. The different approaches available in the research community are reviewed in detail. Experiments and tests are carried out in computer simulations on both the hybrid and the CNN-only solutions, and the results are examined in detail. Finally, future work to enhance the system for better and safer driving is discussed.
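To make the architectural ideas described in the abstract concrete, the following is a minimal sketch in Python/Keras (the libraries named in the thesis appendix) of a network that takes the front-camera image together with the vehicle speed as inputs and predicts a short driving path instead of a single steering angle. It is not the thesis' actual network; the input shape, layer sizes, number of path points, and the function name build_path_prediction_model are illustrative assumptions.

# Minimal sketch (not the thesis' exact network) of two ideas from the abstract:
# the CNN receives the front-camera image plus the vehicle speed as inputs, and
# predicts several future path points instead of one steering angle. Input shape,
# layer sizes, and the number of path points are illustrative assumptions only.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_path_prediction_model(image_shape=(66, 200, 3), path_points=5):
    # Camera branch: a small NVIDIA-style convolutional stack over the raw image.
    image_in = layers.Input(shape=image_shape, name="front_camera")
    x = layers.Conv2D(24, 5, strides=2, activation="relu")(image_in)
    x = layers.Conv2D(36, 5, strides=2, activation="relu")(x)
    x = layers.Conv2D(48, 5, strides=2, activation="relu")(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.Flatten()(x)

    # Speed branch: the vehicle velocity is fed in as an extra scalar input.
    speed_in = layers.Input(shape=(1,), name="vehicle_speed")
    merged = layers.Concatenate()([x, speed_in])

    # Fully connected head: outputs (x, y) offsets for a few future path points,
    # giving a smoother output than a single steering angle without using LSTMs.
    h = layers.Dense(100, activation="relu")(merged)
    h = layers.Dense(50, activation="relu")(h)
    path_out = layers.Dense(path_points * 2, name="predicted_path")(h)

    model = Model(inputs=[image_in, speed_in], outputs=path_out)
    # Trained in a supervised way with mean squared error against recorded paths.
    model.compile(optimizer="adam", loss="mse")
    return model

Under these assumptions the model would be trained in a supervised way, e.g. model.fit([images, speeds], recorded_paths), while the hybrid scheme described above would additionally use a reinforcement-style reward (derived from the free space observed by the cameras and lidar) to correct the prediction at run time.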
546 ## - Language Note
Language Note Text in English, abstracts in English.
650 #4 - Subject
Subject Informatics-IFM
655 #7 - Index Term-Genre/Form
Source of term NULIB
Genre/form data or focus term Dissertation, Academic
690 ## - Subject
School Informatics-IFM
942 ## - ADDED ENTRY ELEMENTS (KOHA)
Source of classification or shelving scheme Dewey Decimal Classification
Koha item type Thesis
Holdings
Source of classification or shelving scheme Dewey Decimal Classification
Not for loan Not For Loan
Home library Main library
Current library Main library
Date acquired 01/12/2021
Full call number 610 / K.M.H / 2018
Date last seen 01/12/2021
Price effective from 01/12/2021
Koha item type Thesis