
Investigating the Impact of Adverse Weather Conditions on Object Detection Performance and Time to Collision for Self-Driving Cars

Recently, the development of level four and level five autonomous vehicles has gained significant attention, and object detection technology has become a crucial requirement for their stable maneuvering. Accurate and rapid object detection using diverse sensors, such as LiDAR, radar, and cameras, is essential; however, detecting objects in adverse weather conditions (e.g., stormy, snowy, or foggy) is challenging and can reduce the time to collision (TTC), which can lead to dangerous autonomous driving. To address this challenge, this study used the two-stream image-to-image translation (TSIT) model to obtain clear, cloudy, and dark weather datasets, including synthesized quantitative rainfall and snowfall images derived from experimental data. We evaluated the performance of the object detection model in terms of TTC under different weather conditions, specifically investigating how the performance of the real-time object detector YOLO decreased and how much the TTC value changed under adverse weather. As a result, this study quantitatively showed that bad weather can have a significant impact on the maneuverability of autonomous vehicles, and it provides convincing insights for improving object detection and TTC performance to enhance the safety of self-driving cars.
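
As a minimal illustration of the TTC metric used in the evaluation, the sketch below computes a constant-velocity TTC from a detected object's range and relative speed; the function name and the constant-velocity assumption are ours for illustration, not the estimator used in the paper.

import numpy as np

def time_to_collision(distance_m, ego_speed_mps, object_speed_mps):
    """Constant-velocity TTC: gap divided by closing speed.
    Returns np.inf when the gap is not closing."""
    closing_speed = ego_speed_mps - object_speed_mps
    if closing_speed <= 0.0:
        return np.inf
    return distance_m / closing_speed

# Example: ego at 20 m/s, lead vehicle at 12 m/s, 40 m ahead.
print(time_to_collision(40.0, 20.0, 12.0))  # 5.0 s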

Integrated path tracking with DYC and MPC using LSTM based tire force estimator for four-wheel independent steering and driving vehicle

Active collision avoidance systems play a crucial role in ensuring the lateral safety of autonomous vehicles, and they are primarily related to path planning and tracking control algorithms. In particular, the direct yaw-moment control (DYC) system can significantly improve the lateral stability of a vehicle in environments with sudden changes in road conditions. To apply the DYC algorithm, it is very important to accurately account for the complex nonlinear properties of tire forces so that the controller can ensure the lateral stability of the vehicle. In this study, longitudinal and lateral tire forces for safe path tracking were simultaneously estimated using a long short-term memory (LSTM) neural network-based estimator. Furthermore, to improve path tracking performance under sudden changes in road conditions, a system was developed by combining four-wheel independent steering (4WIS) model predictive control (MPC) and four-wheel independent drive (4WID) direct yaw-moment control (DYC). The estimation performance of the LSTM was compared with that of the extended Kalman filter (EKF), which is commonly used for tire force estimation. In addition, the estimated longitudinal and lateral tire forces of each wheel were applied to the proposed system, and the system was verified through simulation using a vehicle dynamics simulator. Consequently, the proposed method, an integrated path tracking algorithm with DYC and MPC using the LSTM-based estimator, was validated to significantly improve vehicle stability under diverse road conditions.
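
To illustrate the kind of LSTM-based tire force estimator described above, here is a minimal PyTorch sketch that maps a history of vehicle states to eight per-wheel force outputs (longitudinal and lateral for four wheels); the feature set, layer sizes, and names are assumptions for illustration, not the architecture from the paper.

import torch
import torch.nn as nn

class TireForceLSTM(nn.Module):
    """Sequence-to-one LSTM regressor: vehicle state history ->
    longitudinal/lateral tire forces of the four wheels (8 outputs).
    The input features (e.g., wheel speeds, steering angles, yaw rate,
    accelerations) are illustrative assumptions."""
    def __init__(self, n_features=10, hidden=64, n_forces=8):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_forces)

    def forward(self, x):              # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # force estimate at the last step

model = TireForceLSTM()
states = torch.randn(4, 50, 10)        # 4 sequences of 50 time steps
print(model(states).shape)             # torch.Size([4, 8])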

LuminanceGAN: Controlling the Brightness of Generated Image for Data Augmentation in Various Night Conditions

Advanced deep learning technology has been increasingly used for perception tasks in autonomous driving. Although most perception tasks in autonomous maneuvering exhibit excellent performance in day conditions, their performance degrades under night conditions. Therefore, to train a deep learning model that performs well even at night, night-condition datasets are required; however, most existing datasets consist of images captured in day conditions. Many day-to-night image translation models have been proposed to address the lack of night-condition data, but these models often generate artifacts and cannot control the brightness of the output image. Thus, in this study, we propose LuminanceGAN, a novel model for controlling the brightness level of night conditions to obtain realistic night image outputs. The proposed Y-control loss makes the brightness level of the output image converge to a specific value. Moreover, a self-attention method provides our model with additional information to distinguish objects in the input image, which reduces artifacts in the generated output. In qualitative comparisons with other image-to-image translation models, our model showed much better day-to-night translation performance. For quantitative evaluation, the KITTI dataset was augmented from day to night conditions using the proposed model, and a depth estimation model was trained on this augmented dataset. By training the depth estimation model on the KITTI dataset augmented with brightness-controlled night images generated by the proposed method, depth estimation performance in night conditions improved considerably.
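
The Y-control idea can be sketched as a loss that drives the mean luma of the generated image toward a commanded brightness level; the snippet below is our hedged reading of that idea, not the paper's exact formulation.

import torch

def y_control_loss(fake_rgb, target_y):
    """Penalize deviation of the generated image's mean luma (BT.601 Y)
    from a commanded brightness level in [0, 1]. A sketch of the
    Y-control concept, not the published loss."""
    r, g, b = fake_rgb[:, 0], fake_rgb[:, 1], fake_rgb[:, 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b          # per-pixel luma
    mean_y = y.mean(dim=(1, 2))                    # one value per image
    return ((mean_y - target_y) ** 2).mean()

fake = torch.rand(8, 3, 128, 128)                  # generator output in [0, 1]
loss = y_control_loss(fake, target_y=torch.full((8,), 0.2))
print(loss.item())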

CARLA Simulator-Based Evaluation Framework Development of Lane Detection Accuracy Performance Under Sensor Blockage Caused by Heavy Rain for Autonomous Vehicle

As self-driving cars have been developed targeting level 4 and level 5 autonomous driving, the capability of the vehicle to handle environmental effects has become increasingly important. The sensors installed on autonomous vehicles can easily be affected by blockages (e.g., rain, snow, dust, and fog) covering their surfaces. In a virtual environment, we can safely observe the behavior of the vehicle and the degradation of the sensors caused by blockages. In this letter, a CARLA simulator-based evaluation framework was developed, and lane detection performance was assessed under sensor blockage caused by heavy rain, which was modeled using experimental data. We note that the accuracy of lane detection for the autonomous vehicle decreases as the rainfall rate increases, and that the impact of the blockage is more critical for curved lanes than for straight lanes. Finally, we suggest a critical rainfall rate that causes safety failures of autonomous vehicles, based on a rainfall equation established from experimental rain datasets.
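
As a rough sketch of how rain blockage can be emulated on camera frames in such a framework, the snippet below occludes droplet-shaped regions whose count scales with the rainfall rate; the linear count-versus-rate relation is purely our assumption, whereas the paper fits its blockage model to experimental rain data.

import numpy as np

def apply_rain_blockage(image, rain_mm_per_hr, rng=None):
    """Occlude the image with droplet-like blobs whose count grows with
    rainfall rate. The proportionality constant is illustrative."""
    rng = rng or np.random.default_rng(0)
    out = image.copy()
    h, w = image.shape[:2]
    n_drops = int(0.5 * rain_mm_per_hr)            # assumed linear relation
    for _ in range(n_drops):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        r = rng.integers(3, 10)
        yy, xx = np.ogrid[:h, :w]
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
        out[mask] = 0                              # blocked pixels
    return out

camera_frame = np.full((240, 320, 3), 128, dtype=np.uint8)
degraded = apply_rain_blockage(camera_frame, rain_mm_per_hr=50)
print((degraded == 0).mean())                      # fraction of blocked pixels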

Improved collision avoidance performance by using infra-surveillance sensor detecting objects in blind spots for autonomous mobile robot

In this study, we propose an improved navigation method in which an autonomous mobile robot detects objects in blind spots early by using infra-surveillance camera sensors that observe objects hidden from the robot's own detection range, enabling efficient path planning. Moreover, we propose, for the first time, a method for selecting the optimal positions of infra-surveillance sensors in a given global map. The proposed algorithm is simulated in a ROS and Gazebo environment with an omnidirectional mecanum-wheeled robot model and compared with conventional algorithms (i.e., robot sensor detection only, and robot sensor detection with velocity limits in blind spots). As a result, the autonomous mobile robot using the proposed algorithm shows significantly improved navigation performance, such as faster arrival at the destination and a longer time-to-event (TTE) value. We also considered simulation scenarios not only with a single human object but also with multiple human objects, which can occur in real-world environments.
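
A minimal sketch of the underlying idea, fusing the robot's own occupancy grid with one built from infra-surveillance detections so that blind-spot objects appear in the planner's map, is shown below; the grid alignment and the cell-wise maximum rule are our simplifying assumptions.

import numpy as np

def fuse_occupancy(robot_grid, infra_grid):
    """Cell-wise fusion of the robot's occupancy grid with one built
    from infra-surveillance detections: a cell is occupied if either
    source marks it. Both grids are assumed already co-registered."""
    return np.maximum(robot_grid, infra_grid)

robot = np.zeros((10, 10), dtype=np.uint8)
infra = np.zeros((10, 10), dtype=np.uint8)
infra[4, 7] = 1          # pedestrian hidden from the robot's own sensors
fused = fuse_occupancy(robot, infra)
print(fused[4, 7])       # 1 -> the planner can react before line of sight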

Improved Lane Changing Control System with Varying Look-Ahead Distance in accordance with Longitudinal Vehicle Velocity

Many path tracking control algorithms have been introduced over the years, such as pure pursuit, the Stanley method, and model predictive control. However, simple controllers (e.g., pure pursuit and the Stanley method) can suffer from tracking error noise on a real car model. Moreover, vehicle dynamics-based complex path tracking controllers (e.g., model predictive control) require large amounts of computational resources and are hard to implement in real time. Therefore, we studied a simple lane changing control system (LXC) based on a look-ahead distance that varies with the longitudinal vehicle velocity. The proposed algorithm is designed to improve tracking control performance and is validated in CarSim and Matlab/Simulink. It enables an ego vehicle to track the reference trajectory with a lane keeping system (LKS) that uses both the estimated lateral distance error and the look-ahead distance error with a PID controller. To verify the tracking performance of the designed controller in simulation, we compared it with a static look-ahead distance lane changing system in a double lane change scenario at low and high vehicle velocities. The simulation verified whether an appropriately varying look-ahead distance tracks the reference trajectory across diverse longitudinal vehicle velocities. As a result, with the varying look-ahead distance method, the tracking error stays within a lateral error of 0.55 m, smaller than that of the non-varying look-ahead distance lane keeping control system. To obtain the relationship between look-ahead distance and vehicle velocity, we observed tracking performance at different vehicle velocities and found the optimal look-ahead distance for each velocity. Consequently, with the varying look-ahead distance, the system secures convincing tracking performance up to a vehicle speed of 80 km/h and keeps the ego vehicle in the stable region of the steering control system.
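
The varying look-ahead idea can be sketched with the standard pure-pursuit steering law combined with a velocity-proportional, clamped look-ahead distance; the gains, clamp limits, and wheelbase below are illustrative assumptions, not the tuned values from this study.

import numpy as np

def look_ahead_distance(v_mps, k=0.6, d_min=3.0, d_max=25.0):
    """Velocity-proportional look-ahead, clamped; gains are illustrative."""
    return float(np.clip(k * v_mps, d_min, d_max))

def pure_pursuit_steer(target_xy, v_mps, wheelbase=2.7):
    """Steering angle toward a target point given in the ego frame
    (x forward, y left), using the standard pure-pursuit law."""
    ld = look_ahead_distance(v_mps)
    x, y = target_xy
    alpha = np.arctan2(y, x)                       # heading error to target
    return np.arctan2(2.0 * wheelbase * np.sin(alpha), ld)

# Higher speed -> longer look-ahead -> gentler steering command.
print(np.degrees(pure_pursuit_steer((10.0, 1.5), v_mps=5.0)))
print(np.degrees(pure_pursuit_steer((10.0, 1.5), v_mps=20.0)))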

Fast Lane Detection Algorithm by Using Real-Time Image Processing

This study proposes a fast image processing algorithm using efficient computing resources for autonomous vehicles requiring accurate lane recognition. With the proposed algorithm, the image processing time was dramatically reduced, running about twice as fast as in previous studies. Finally, using the proposed algorithm, the image processing frame rate reached about 60 fps, and the lane recognition algorithm was verified under various road conditions on the DGIST campus.
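
A minimal sketch of a fast lane detection pipeline in this spirit, restricting processing to a road region of interest before edge detection and a probabilistic Hough transform, is given below; the OpenCV stages and thresholds are our assumptions, not the exact pipeline of the study.

import cv2
import numpy as np

def detect_lane_lines(frame_bgr):
    """Minimal lane pipeline: work only on the lower half of the image
    (the road region), then Canny edges and a probabilistic Hough
    transform. All thresholds are illustrative."""
    h = frame_bgr.shape[0]
    roi = frame_bgr[h // 2:, :]                    # cheap speedup: skip sky
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=10)
    return lines                                   # N x 1 x 4 (x1, y1, x2, y2)

frame = np.zeros((240, 320, 3), dtype=np.uint8)
cv2.line(frame, (60, 239), (160, 120), (255, 255, 255), 3)  # painted lane mark
print(detect_lane_lines(frame))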

Adaptive Urban Auto-driving Algorithm based on Sensor Weighted Integration Field

We integrate data from three sensors: vision, LiDAR, and GPS. The suggested algorithm decides the critical motions of the autonomous vehicle, such as acceleration, deceleration, and steering angle. For flexible motion planning, a novel sensor integration method, named the Sensor Weighted Integration Field (SWIF), is proposed to generate a safe trajectory of vehicle motion when the weighting function in the SWIF is applied. Before forming the SWIF, vision data are processed by a deep learning network, the Spatial CNN (SCNN), to recognize the lanes adjacent to the vehicle. Then, within the SWIF, these adjacent lanes receive a higher weighting value toward the center of the lane. Furthermore, obstacles detected by LiDAR are judged as dangerous areas where the algorithm lowers the weighting values. Finally, with the SWIF built from all the sensor data, the weighting function, in which the area where the vehicle is expected to maneuver is weighted higher, generates a safe motion trajectory. As a result, the suggested algorithm produces flexible, adaptive vehicle motion with minimized steering angle and successfully avoids dangerous areas without lane or path departure in the presence of various factors on urban roads, such as large trucks, buses, and construction sites. (KASA 2019-Fall Conference)
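
A toy version of such a weighted field is sketched below: a ridge of high weights along the recognized lane center and zeroed weights around LiDAR-detected obstacles; the grid form and the Gaussian lane weighting are illustrative assumptions, not the published SWIF formulation.

import numpy as np

def build_swif(grid_shape, lane_center_col, obstacles, sigma=3.0):
    """Toy weighted field on a grid: a Gaussian ridge along the lane
    center raises weights; detected obstacles zero them out nearby."""
    h, w = grid_shape
    cols = np.arange(w)
    lane_weight = np.exp(-((cols - lane_center_col) ** 2) / (2 * sigma ** 2))
    field = np.tile(lane_weight, (h, 1))
    for (r, c) in obstacles:                       # dangerous areas
        rr, cc = np.ogrid[:h, :w]
        field[(rr - r) ** 2 + (cc - c) ** 2 <= 4 ** 2] = 0.0
    return field

field = build_swif((40, 20), lane_center_col=10, obstacles=[(15, 10)])
print(field[0].argmax(), field[15, 10])  # ridge at col 10; zero at the obstacle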

Sparse Spatial CNN for Traffic Lane Recognition on Urban Road Environments

We suggest a deep learning-based lane recognition algorithm for autonomous vehicles in urban road environments. Urban roads contain various environments, including straight sections, curves, crossroads, and diverse road marks. Moreover, when avoiding or overtaking objects, the vehicle maneuvers at various positions and orientations on these roads. For the deep learning-based lane recognition algorithm, a dataset was first designed to evaluate performance on urban roads in both normal and abnormal maneuvering, with classes for straight roads, curves, crossroads, and road marks (e.g., arrows, diamonds, speed bumps, and crosswalks). Next, the deep learning network was constructed and trained on this dataset for the autonomous urban driving test. The Spatial CNN (SCNN), implemented as the base model, is well suited to strong spatial relationships with similar continuous shape structures, as it slices the feature map within a layer and passes messages in a specific direction between slices. However, SCNN is not fast enough to run in real time. Thus, this study proposes a Sparse Spatial CNN (SSCNN), which reduces the computational steps and improves the execution speed. As a result, the proposed model showed a significant speed improvement with minimal performance degradation on the lane detection datasets. (KASA 2019-Fall Conference)
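
The slice-wise message passing that SCNN performs, and that SSCNN sparsifies, can be sketched in a few lines of PyTorch: each row slice of the feature map receives a convolved copy of the slice above it. The channel count and kernel size below are illustrative assumptions.

import torch
import torch.nn as nn

class SliceMessagePassing(nn.Module):
    """SCNN-style top-to-bottom message passing: each row of the feature
    map is updated with a convolved copy of the row above it. A minimal
    sketch of the spatial propagation idea, not the full SCNN/SSCNN."""
    def __init__(self, channels=32, k=9):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, k, padding=k // 2)

    def forward(self, x):                  # x: (batch, C, H, W)
        rows = list(x.unbind(dim=2))       # H slices of shape (batch, C, W)
        for i in range(1, len(rows)):
            rows[i] = rows[i] + torch.relu(self.conv(rows[i - 1]))
        return torch.stack(rows, dim=2)

feat = torch.randn(1, 32, 18, 50)
print(SliceMessagePassing()(feat).shape)   # torch.Size([1, 32, 18, 50])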

Collision-Free Path Planning Algorithm: Dynamic Obstacles in Artificial Potential Field for Autonomous Vehicles

We propose a path planning algorithm that considers the motion of dynamic moving objects for autonomous vehicles. Securing the safety of autonomous vehicles by preventing collisions with other vehicles or pedestrians in high traffic density is a critical requirement. However, collision-free path planning for autonomous vehicles is a difficult problem because of steering angle limits and the movement trajectories of other objects. Thus, the proposed algorithm for avoiding dynamically moving objects, named the Dynamic Artificial Potential Field (DAPF), was modified to account for the expected dynamic path of the vehicle. Using the modified DAPF, the collision-free path of the autonomous vehicle is planned under the non-holonomic constraints of the vehicle. The proposed algorithm generates collision-free paths with better performance than conventional APF-based algorithms. (KASA 2019-Fall Conference)
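
A toy version of a dynamic potential field in this spirit is sketched below: the repulsive term is centered on the obstacle's position predicted under a constant-velocity assumption over a short horizon. All gains and the prediction rule are illustrative assumptions, not the paper's formulation.

import numpy as np

def dapf_force(pos, goal, obs_pos, obs_vel, horizon=1.0,
               k_att=1.0, k_rep=50.0, rep_radius=8.0):
    """Gradient of a toy dynamic potential field: attraction to the goal
    plus repulsion from the obstacle's *predicted* position (constant
    velocity over `horizon`). Gains are illustrative."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    obs_future = np.asarray(obs_pos, float) + horizon * np.asarray(obs_vel, float)
    f_att = k_att * (goal - pos)                    # pull toward the goal
    d = pos - obs_future
    dist = np.linalg.norm(d)
    f_rep = np.zeros(2)
    if 1e-6 < dist < rep_radius:                    # push away when close
        f_rep = k_rep * (1.0 / dist - 1.0 / rep_radius) * d / dist ** 3
    return f_att + f_rep

print(dapf_force(pos=(0, 0), goal=(30, 0), obs_pos=(8, 1), obs_vel=(-2, 0)))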

Smart Agricultural System:

Autonomous Driving-based Transplanting Machine

We investigated the self-driving performance of a transplanting machine by using a vision-based transplanted-line estimation algorithm. The algorithm includes a series of image processing steps for an autonomously navigating machine. The RANSAC (Random Sample Consensus) algorithm is applied to detect transplanted lines. Based on experiments with the proposed algorithm, we suggest installing the vision camera on the frontal side at a height of 1 m to 1.4 m. Moreover, shadows and changes in lighting affect the image processing and planted-line detection performance. Thus, a planted-line detection algorithm using an SVM classifier was investigated. Consequently, the proposed algorithm performed convincingly in the presence of disturbances, such as shadows and lighting changes, during agricultural operation. (KASA 2019-Spring Conference)
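
As a minimal sketch of RANSAC line fitting for planted-line detection, the snippet below samples point pairs from candidate plant pixels and keeps the line hypothesis with the most inliers; the tolerance and iteration count are standard RANSAC choices, not the exact parameters of the study.

import numpy as np

def ransac_line(points, n_iter=200, inlier_tol=2.0, rng=None):
    """Fit one line to 2D points (e.g., plant-stem pixel coordinates)
    with plain RANSAC: sample two points, count inliers by perpendicular
    distance, keep the best hypothesis."""
    rng = rng or np.random.default_rng(0)
    pts = np.asarray(points, float)
    best_inliers, best_line = None, None
    for _ in range(n_iter):
        p1, p2 = pts[rng.choice(len(pts), 2, replace=False)]
        d = p2 - p1
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        # perpendicular distance of all points to the line through p1, p2
        dist = np.abs(d[0] * (pts[:, 1] - p1[1])
                      - d[1] * (pts[:, 0] - p1[0])) / norm
        inliers = pts[dist < inlier_tol]
        if best_inliers is None or len(inliers) > len(best_inliers):
            best_inliers, best_line = inliers, (p1, p2)
    return best_line, best_inliers

xs = np.arange(0, 50, 2.0)
pts = np.c_[xs, 0.5 * xs + 3 + np.random.default_rng(1).normal(0, 0.5, xs.size)]
line, inliers = ransac_line(pts)
print(len(inliers), "of", len(pts), "points on the planted line")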
