Academic Positions

  • 2021 – Present

    Assistant Professor

    Computer Engineering Department, Universidad de Alcalá
    INVETT Research Group (Intelligent Vehicles and Traffic Technologies), Universidad de Alcalá

  • 2016 – 2020

    PhD Grant (FPU)

    INVETT Research Group (Intelligent Vehicles and Traffic Technologies), Universidad de Alcalá

  • 2013 – 2016

    Researcher

    INVETT Research Group (Intelligent Vehicles and Traffic Technologies), Universidad de Alcalá

Education & Training

  • PhD, 2020

    PhD in Information and Communications Technologies

    Universidad de Alcalá, Spain

  • Internship, 2017

    3-month internship at TNO (Netherlands Organisation for Applied Scientific Research)

    TNO, The Hague, Netherlands

  • MSc., 2016

    Master of Science in Industrial Engineering | Robotics and Perception

    Universidad de Alcalá, Spain

  • BSc., 2014

    Bachelor in Electronics and Industrial Automation Engineering

    Universidad de Alcalá, Spain


Current Teaching

  • 2019 – 2022

    C/C++ Programming

    The objective of the course is the in-depth study of structured programming using the C programming language. The syllabus covers: a review of basic pointer concepts, advanced use of pointers, advanced handling of functions, creation and manipulation of files, and dynamic data structures and algorithms.

Publications

Goal-Oriented Transformer to Predict Context-Aware Trajectories in Urban Scenarios

Conference Paper A. Quintanar, R. Izquierdo, I. Parra, and D. Fernández-Llorca, "Goal-Oriented Transformer to Predict Context-Aware Trajectories in Urban Scenarios", Eng. Proc. 2023, 39, 57, doi: 10.3390/engproc2023039057.

Abstract

The accurate prediction of road user behaviour is of paramount importance for the design and implementation of effective trajectory prediction systems. Advances in this domain have recently been centred on incorporating the social interactions between agents in a scene through the use of RNNs. Transformers have become a very useful alternative to solve this problem by making use of positional information in a straightforward fashion. The proposed model leverages positional information together with underlying information of the scenario, through goals in the digital map, in addition to the velocity and heading of the agent, to predict vehicle trajectories over a prediction horizon of up to 5 s. This approach allows the model to generate multimodal trajectories, considering different possible actions for each agent. It has been tested on a variety of urban scenarios, including intersections and roundabouts, achieving state-of-the-art performance in terms of generalization capability and providing an alternative to more complex models.
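
The following is a minimal, illustrative sketch (assuming PyTorch; not the authors' implementation) of how past kinematic states and a map-derived goal point can be fed to a transformer encoder for trajectory prediction; the feature layout, layer sizes and 25-step horizon are assumptions. Multimodality can then be obtained by querying the model with several candidate goals.

```python
# Hypothetical goal-conditioned transformer encoder for trajectory prediction.
import math
import torch
import torch.nn as nn


class GoalConditionedTransformer(nn.Module):
    def __init__(self, d_model=64, nhead=4, num_layers=2, horizon=25):
        super().__init__()
        # Per-timestep input: x, y, vx, vy, heading + broadcast goal (gx, gy)
        self.input_proj = nn.Linear(7, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, horizon * 2)  # future (x, y) offsets
        self.horizon = horizon

    def positional_encoding(self, seq_len, d_model):
        # Standard sinusoidal positional encoding over the observation window.
        pos = torch.arange(seq_len).unsqueeze(1).float()
        div = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe = torch.zeros(seq_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        return pe

    def forward(self, past_states, goal):
        # past_states: (B, T, 5); goal: (B, 2), e.g. a reachable point from the map
        goal_rep = goal.unsqueeze(1).expand(-1, past_states.size(1), -1)
        x = torch.cat([past_states, goal_rep], dim=-1)
        x = self.input_proj(x) + self.positional_encoding(x.size(1), x.size(-1))
        h = self.encoder(x)[:, -1]                # summary of the observed track
        return self.head(h).view(-1, self.horizon, 2)


model = GoalConditionedTransformer()
pred = model(torch.randn(8, 20, 5), torch.randn(8, 2))  # -> (8, 25, 2)
print(pred.shape)
```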

Vehicle trajectory prediction on highways using bird eye view representations and deep learning

Journal Paper R. Izquierdo, A. Quintanar, D. Fernández-Llorca, I. García-Daza, N. Hernández, I. Parra, and M. A. Sotelo, "Vehicle trajectory prediction on highways using bird eye view representations and deep learning", Applied Intelligence, 2022, (Springer-IF-2021: 5.019), doi: 10.1007/s10489-022-03961-y.

Abstract

This work presents a novel method for predicting vehicle trajectories in highway scenarios using efficient bird's eye view representations and convolutional neural networks. Vehicle positions, motion histories, road configuration, and vehicle interactions are easily included in the prediction model using basic visual representations. The U-net model has been selected as the prediction kernel to generate future visual representations of the scene using an image-to-image regression approach. A method has been implemented to extract vehicle positions from the generated graphical representations to achieve subpixel resolution. The method has been trained and evaluated using the PREVENTION dataset, an on-board sensor dataset. Different network configurations and scene representations have been evaluated. This study found that U-net with 6 depth levels using a linear terminal layer and a Gaussian representation of the vehicles is the best performing configuration. The use of lane markings was found to produce no improvement in prediction performance. The average prediction error is 0.47 and 0.38 meters and the final prediction error is 0.76 and 0.53 meters for longitudinal and lateral coordinates, respectively, for a predicted trajectory length of 2.0 seconds. The prediction error is up to 50% lower compared to the baseline method.
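
A small numpy-only sketch of the two representation steps described above: rendering a vehicle as a Gaussian blob on a bird's eye view grid and recovering its position with subpixel resolution via an intensity-weighted centroid. The grid size, resolution and sigma are assumed values, not those used in the paper.

```python
import numpy as np

RES = 0.5          # metres per pixel (assumption)
GRID = (100, 200)  # rows (lateral), cols (longitudinal)


def render_gaussian(grid_shape, pos_m, sigma_px=2.0):
    """Draw a unit-height Gaussian centred at a metric position."""
    rows, cols = np.mgrid[0:grid_shape[0], 0:grid_shape[1]]
    r0, c0 = pos_m[1] / RES, pos_m[0] / RES
    return np.exp(-((rows - r0) ** 2 + (cols - c0) ** 2) / (2 * sigma_px ** 2))


def extract_subpixel(img, threshold=0.1):
    """Intensity-weighted centroid of the blob, converted back to metres."""
    w = img * (img > threshold)
    rows, cols = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    r = (rows * w).sum() / w.sum()
    c = (cols * w).sum() / w.sum()
    return c * RES, r * RES  # (longitudinal, lateral) in metres


bev = render_gaussian(GRID, pos_m=(40.3, 12.7))
print(extract_subpixel(bev))  # close to (40.3, 12.7) despite the 0.5 m grid
```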

Vehicle Lane Change Prediction on Highways Using Efficient Environment Representation and Deep Learning

Journal Paper R. Izquierdo, A. Quintanar, J. Lorenzo, I. García-Daza, I. Parra, D. Fernández-Llorca, and M. A. Sotelo, "Vehicle Lane Change Prediction on Highways Using Efficient Environment Representation and Deep Learning", IEEE Access, 2021, (JCR-IF-2020: 3.367; Q2), doi: 10.1109/ACCESS.2021.3106692.

Abstract

This paper introduces a novel method for lane-change and lane-keeping detection and prediction of surrounding vehicles based on a Convolutional Neural Network (CNN) classification approach. Context, interaction, vehicle trajectories, and scene appearance are efficiently combined into a single RGB image that is fed as input to the classification model. Several state-of-the-art classification-CNN models of varying complexity are evaluated to find the most suitable one in terms of anticipation and prediction. The model has been trained and evaluated using the PREVENTION dataset, a specific dataset oriented to vehicle maneuver and trajectory prediction. The proposed model can be trained and used to detect lane changes as soon as they are observed, and to predict them before the lane change maneuver is initiated. Concurrently, a study on human performance in predicting lane-change maneuvers using visual inputs has been conducted, so as to establish a solid benchmark for comparison. The empirical study reveals that humans are able to detect 83.9% of lane changes on average 1.66 seconds in advance. The proposed automated maneuver detection model increases anticipation by 0.43 seconds and accuracy by 2.5% compared to human results, while the maneuver prediction model increases anticipation by 1.03 seconds with an accuracy decrease of only 0.5%.

Predicting Vehicles Trajectories in Urban Scenarios with Transformer Networks and Augmented Information

Conference Paper A. Quintanar, D. Fernández-Llorca, I. Parra, R. Izquierdo, and M. A. Sotelo, "Predicting Vehicles Trajectories in Urban Scenarios with Transformer Networks and Augmented Information," 2021 IEEE Intelligent Vehicles Symposium (IV), Nagoya, Japan, 2021, pp. 1051-1056, doi: 10.1109/IV48863.2021.9575242.

Abstract

Understanding the behavior of road users is of vital importance for the development of trajectory prediction systems. In this context, the latest advances have focused on recurrent structures, establishing the social interaction between the agents involved in the scene. More recently, simpler structures have also been introduced for predicting pedestrian trajectories, based on Transformer Networks, and using positional information. They allow each agent's trajectory to be modelled separately, without any complex interaction terms. Our model exploits these simple structures by adding augmented data (position and heading), and adapting their use to the problem of vehicle trajectory prediction in urban scenarios in prediction horizons of up to 5 seconds. In addition, a cross-performance analysis is performed between different types of scenarios, including highways, intersections and roundabouts, using recent datasets (inD, rounD, highD and INTERACTION). Our model achieves state-of-the-art results and proves to be flexible and adaptable to different types of urban contexts.

The PREVENTION Challenge: How Good Are Humans Predicting Lane Changes?

Conference Paper A. Quintanar, R. Izquierdo, I. Parra, D. Fernández-Llorca, and M. A. Sotelo, "The PREVENTION Challenge: How Good Are Humans Predicting Lane Changes?," 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 2020, pp. 45-50, doi: 10.1109/IV47402.2020.9304640.

Abstract

While driving on highways, every driver tries to be aware of the behavior of surrounding vehicles, including possible emergency braking, evasive maneuvers to avoid obstacles, unexpected lane changes, or other emergencies that could lead to an accident. In this paper, humans' ability to predict lane changes in highway scenarios is analyzed through the use of video sequences extracted from the PREVENTION dataset, a database focused on the development of research on vehicle intention and trajectory prediction. Users had to indicate the moment at which they considered that a lane change maneuver was taking place in a target vehicle, subsequently indicating its direction: left or right. The results have been carefully analyzed and compared to ground truth labels, evaluating statistical models to understand whether humans can actually predict these maneuvers. The study has revealed that most participants are unable to anticipate lane-change maneuvers, detecting them only after they have started. These results might serve as a baseline for evaluating the prediction ability of AI systems, assessing whether they can outperform human skills by analyzing hidden cues that seem to go unnoticed, improving the detection time, and even anticipating maneuvers in some cases.

Vehicle Trajectory Prediction in Crowded Highway Scenarios Using Bird Eye View Representations and CNNs.

Conference Paper R. Izquierdo, A. Quintanar, I. Parra, D. Fernández-Llorca, and M. A. Sotelo, "Vehicle Trajectory Prediction in Crowded Highway Scenarios Using Bird Eye View Representations and CNNs," 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, 2020, pp. 1-6, doi: 10.1109/ITSC45102.2020.9294732.

Abstract

This paper describes a novel approach to perform vehicle trajectory predictions employing graphic representations. The vehicles are represented as Gaussian distributions in a bird's eye view. Then the U-net model is used to perform sequence-to-sequence predictions. This deep learning-based methodology has been trained using the HighD dataset, which contains vehicle detections in a highway scenario from aerial imagery. The problem is posed as an image-to-image regression problem, training the network to learn the underlying relations between the traffic participants. This approach generates an estimation of the future appearance of the input scene, not trajectories or numeric positions. An extra step is conducted to extract the positions from the predicted representation with subpixel resolution. Different network configurations have been tested, and the prediction error up to three seconds ahead is on the order of the representation resolution. The model has been tested in highway scenarios with more than 30 vehicles simultaneously in two opposite traffic flow streams, showing good qualitative and quantitative results.

Experimental validation of lane-change intention prediction methodologies based on CNN and LSTM

Conference Paper R. Izquierdo, A. Quintanar, I. Parra, D. Fernández-Llorca, and M. A. Sotelo, "Experimental validation of lane-change intention prediction methodologies based on CNN and LSTM," 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 2019, pp. 3657-3662, doi: 10.1109/ITSC.2019.8917331.

Abstract

This paper describes preliminary results of two different methodologies used to predict lane changes of surrounding vehicles. These methodologies are deep learning based and the training procedure can be easily deployed by making use of the labeling and data provided by The PREVENTION dataset. In this case, only visual information (data collected from the cameras) is used for both methodologies. On the one hand, visual information is processed using a new multi-channel representation of the temporal information which is provided to a CNN model. On the other hand, a CNN-LSTM ensemble is also used to integrate temporal features. In both cases, the idea is to encode local and global context features as well as temporal information as the input of a CNN-based approach to perform lane change intention prediction. Preliminary results showed that the dataset proved to be highly versatile to deal with different vehicle intention prediction approaches.
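
A hedged sketch (assuming PyTorch) of a CNN-LSTM pipeline in the spirit of the second methodology: a small CNN encodes each frame, an LSTM integrates the temporal features, and a linear layer outputs the manoeuvre class. The architecture and sizes are illustrative, not the paper's.

```python
import torch
import torch.nn as nn


class CnnLstmClassifier(nn.Module):
    def __init__(self, num_classes=3, feat_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(                  # per-frame feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, 64, batch_first=True)
        self.fc = nn.Linear(64, num_classes)       # e.g. keep / left / right

    def forward(self, clips):                      # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1])                 # logits from the last time step


logits = CnnLstmClassifier()(torch.randn(2, 8, 3, 64, 64))
print(logits.shape)  # (2, 3)
```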

The PREVENTION dataset: a novel benchmark for PREdiction of VEhicles iNTentIONs

Conference Paper R. Izquierdo, A. Quintanar, I. Parra, D. Fernández-Llorca, and M. A. Sotelo, "The PREVENTION dataset: a novel benchmark for PREdiction of VEhicles iNTentIONs," 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 2019, pp. 3114-3121, doi: 10.1109/ITSC.2019.8917433.

Abstract

Recent advances in autonomous driving have shown the importance of endowing self-driving cars with the ability to predict the intentions and future trajectories of other traffic participants. In this paper, we introduce the PREVENTION dataset, which provides a large number of accurate and detailed annotations of vehicle trajectories, categories, lanes, and events, including cut-in, cut-out, left/right lane changes, and hazardous maneuvers. Data is collected from 6 sensors of different nature (LiDAR, radar, and cameras), providing both redundancy and complementarity, using an instrumented vehicle driven under naturalistic conditions. The dataset contains 356 minutes, corresponding to 540 km of distance traveled, including more than 4M detections, and more than 3K trajectories. Each vehicle is unequivocally identified with a unique id and the corresponding image, LiDAR and radar coordinates. No other public dataset provides such a rich amount of data on different road scenarios and critical situations and such a long-range coverage around the ego-vehicle (up to 80 m) using a redundant sensor set-up and providing enhanced lane-change annotations of surrounding vehicles. The dataset is ready to develop learning and inference algorithms for predicting vehicle intentions and future trajectories, including inter-vehicle interactions.

CNNs for Fine-Grained Car Model Classification

Conference Paper H. Corrales, D. F. Llorca, I. Parra, S. Vigre, A. Quintanar, J. Lorenzo, N. Hernández, International Conference on Computer Aided Systems Theory, 2019, doi: 10.1007/978-3-030-45096-0_13.

Abstract

This paper describes an end-to-end training methodology for CNN-based fine-grained vehicle model classification. The method relies exclusively on images, without using complicated architectures. No extra annotations, pose normalization or part localization are needed. Different full CNN-based models are trained and validated using the CompCars [31] dataset, covering a total of 431 different car models. We obtained a top-1 validation accuracy of 97.62%, which substantially outperforms previous works.
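
A minimal sketch, assuming torchvision, of this kind of image-only fine-grained classifier: a standard CNN backbone whose final layer is resized to the 431 CompCars classes. The backbone choice and hyper-parameters are assumptions, not the configurations evaluated in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights=None)            # or pretrained ImageNet weights
backbone.fc = nn.Linear(backbone.fc.in_features, 431)  # 431 car model classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9)

# one illustrative training step on dummy data
images, labels = torch.randn(4, 3, 224, 224), torch.randint(0, 431, (4,))
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```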

Semi-Automatic High-Accuracy Labelling Tool for Multi-Modal Long-Range Sensor Dataset

Conference Paper R. Izquierdo, I. Parra, C. Salinas, D. Fernández-Llorca and M. A. Sotelo, 2018 IEEE Intelligent Vehicles Symposium (IV)

Abstract

Many research works have contributed to achieving SAE levels 3 and 4 in some pre-defined areas under certain restrictions. A deeper scene understanding and precise predictions of drivers' intentions are needed to continue improving autonomous driving capabilities to reach higher SAE levels. Deployment of accurate and detailed datasets could be considered one of the most pressing needs to enhance autonomous driving capabilities. This work presents a novel data acquisition methodology for on-road vehicle trajectory collection. The proposed sensor setup improves the range and detection accuracy by using a high-accuracy laser scanner covering 360° and two high-speed, high-resolution cameras. The sensor fusion increases the labelling resolution and extends the detection range, exploiting the best of each sensor. A Median Flow tracking algorithm and a Convolutional Neural Network enable a semi-automatic labelling process, which reduces the effort to create detailed annotated datasets. Highly accurate trajectories are reconstructed with few manual annotations up to 60 m with a mean error below 2 cm. This methodology has been developed with a view to creating a dataset which enables the development of advanced vehicle trajectory prediction systems, and thus contributing to human-like automated driving.
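
An illustrative sketch of the tracking side of such a semi-automatic labelling loop, using OpenCV's Median Flow tracker (requires opencv-contrib-python; the constructor moved between OpenCV versions, hence the fallback). The synthetic frames stand in for the camera images; this is not the paper's tool.

```python
import numpy as np
import cv2

try:
    create_tracker = cv2.legacy.TrackerMedianFlow_create   # OpenCV >= 4.5.1
except AttributeError:
    create_tracker = cv2.TrackerMedianFlow_create          # older releases

rng = np.random.default_rng(0)
PATCH = rng.integers(0, 255, (40, 40, 3), dtype=np.uint8)   # textured target


def synthetic_frame(t, size=240):
    """Black frame with the textured target moving 2 px per frame to the right."""
    frame = np.zeros((size, size, 3), np.uint8)
    x = 20 + 2 * t
    frame[100:140, x:x + 40] = PATCH
    return frame


tracker = create_tracker()
tracker.init(synthetic_frame(0), (20, 100, 40, 40))   # (x, y, w, h) seed box

ok, box = True, None
for t in range(1, 30):
    ok, box = tracker.update(synthetic_frame(t))
    if not ok:
        break   # in a real pipeline, this is where a manual correction would be requested
print(ok, box)
```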

Multi-Radar Self-Calibration Method using High-Definition Digital Maps for Autonomous Driving

Conference Paper R. Izquierdo, I. Parra, D. Fernández-Llorca and M. A. Sotelo, 2018 21st International Conference on Intelligent Transportation Systems (ITSC)

Abstract

Advanced Driving Assistance Systems rely on very precise sensor-based environmental perception. The quality of the perception depends on the quality of the calibration when multiple and/or redundant sensors are used. This work presents a novel self-calibration method for radars based on high-definition digital maps and highly radar-sensitive structural elements. The calibration targets are transformed from the world into the vehicle reference system according to the estimated vehicle state. Then, the calibration between the radar and the vehicle frame is split into two phases: alignment and translation estimation. The alignment is based on the trajectory described by the calibration targets when the vehicle is moving, and the translation is based on position differences when it is standing. The uncertainties of the detections are treated in a scoring fashion. Three radars of two different models have been calibrated with this method, achieving alignment errors below the angular accuracy and mean range errors below the radar range accuracy.
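
A numpy sketch of the split alignment/translation idea: given calibration targets matched in the radar and vehicle frames, a closed-form 2D rigid fit recovers the mounting yaw and offset. The Procrustes solution here is a stand-in for the paper's trajectory- and score-based procedure, and the mounting values are assumed.

```python
import numpy as np


def fit_rigid_2d(radar_pts, vehicle_pts):
    """Least-squares R, t such that vehicle_pts ~ radar_pts @ R.T + t."""
    pr, pv = radar_pts.mean(0), vehicle_pts.mean(0)
    H = (radar_pts - pr).T @ (vehicle_pts - pv)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, pv - R @ pr


# Assumed ground-truth mounting: 3 deg yaw, 1.2 m forward, 0.4 m to the left
yaw = np.deg2rad(3.0)
R_true = np.array([[np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)]])
t_true = np.array([1.2, 0.4])

targets_radar = np.random.default_rng(1).uniform(5, 60, (20, 2))
noise = 0.05 * np.random.default_rng(2).normal(size=(20, 2))
targets_vehicle = targets_radar @ R_true.T + t_true + noise

R_est, t_est = fit_rigid_2d(targets_radar, targets_vehicle)
print(np.rad2deg(np.arctan2(R_est[1, 0], R_est[0, 0])), t_est)  # ~3 deg, ~[1.2, 0.4]
```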

Analysis of ITS-G5A V2X communications performance in autonomous cooperative driving experiments

Conference Paper I. Parra, A. García-Morcillo, R. Izquierdo, J. Alonso, D. Fernández-Llorca and M. A. Sotelo, 2017 IEEE Intelligent Vehicles Symposium (IV)

Abstract

In this paper the performance of ITS-G5A communications for an autonomous driving application is analyzed in a real high-density scenario. The data was collected during the cooperative platooning tests that took place in Helmond in the frame of the Grand Cooperative Driving Challenge 2016. In the competition, 8-10 autonomous vehicles formed two platoons in different lanes and were required to merge into a predefined competition zone. The performance is characterized using CAM CCDFs, which serve as a basis for the evaluation of a Cooperative Adaptive Cruise Control application. Two important effects have been identified that affect the reliability of the communications. Firstly, there is a degradation with distance that appears to be stronger for cars and gentler for trucks. This indicates that occlusions heavily affect the connectivity of ITS-G5A. Secondly, the reliability is below expectations and some of the vehicles perform consistently worse than others. Although further investigation is required, a possible explanation is that a highly congested channel makes some of the vehicles get stuck, unable to regularly access the channel.
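
A small numpy sketch of the evaluation metric mentioned above, the empirical CCDF of CAM inter-reception times, P(T > t); the timestamps here are synthetic, whereas in the experiments they come from the recorded V2X logs.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic inter-reception times with a 0.1 s mean, standing in for the
# nominal 10 Hz CAM stream (with jitter and losses) recorded in the tests
inter_rx = rng.exponential(0.1, 2000)


def ccdf(samples, thresholds):
    """Empirical P(T > t): fraction of inter-reception times exceeding each t."""
    samples = np.asarray(samples)
    return np.array([(samples > t).mean() for t in thresholds])


thresholds = [0.1, 0.2, 0.5, 1.0]   # seconds
for t, p in zip(thresholds, ccdf(inter_rx, thresholds)):
    print(f"P(inter-reception time > {t:.1f} s) = {p:.3f}")
```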

Vehicle Trajectory and Lane Change Prediction Using ANN and SVM Classifiers

Conference Paper R. Izquierdo, I. Parra, J. Muñoz-Bulnes, D. Fernández-Llorca and M. A. Sotelo, 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC)

Abstract

Millions of traffic accidents take place every year on roads around the world. Some advanced assistance systems have been released in commercial vehicles in the past few years, contributing to the transition towards semiautonomous vehicles. Some of the best known are adaptive cruise control and lane keeping systems. These systems keep a desired distance with respect to the preceding vehicle or a fixed speed in the center of the lane. It is very useful for these systems to know what the surrounding vehicles' trajectories will be or whether they will perform a lane change manoeuvre. This paper evaluates two kinds of artificial neural networks over two different datasets to predict their trajectories. A Support Vector Machine classifier is used to classify the action that will be carried out. The proposed trajectory prediction systems are 30% better than the vehicle motion model in a time horizon of 4 seconds and are able to predict a lane change action 3 seconds before it happens.
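
A hedged sketch, assuming scikit-learn, of the classification stage: an SVM predicting the upcoming action (keep / left / right) from a few hand-picked kinematic features. The features and data are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
# assumed features: lateral offset [m], lateral velocity [m/s], yaw rate [rad/s]
X = rng.normal(size=(n, 3)) * [0.5, 0.3, 0.05]
# toy labels: 0 = keep, 1 = left lane change, 2 = right lane change
y = np.where(X[:, 1] > 0.15, 1, np.where(X[:, 1] < -0.15, 2, 0))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```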

Ego-Lane Estimation by Modeling Lanes and Sensor Failures

Conference Paper A. L. Ballardini, D. Cattaneo, R. Izquierdo, I. Parra, M. A. Sotelo, D. G. Sorrenti, 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC)

Abstract

In this paper we present a probabilistic lane-localization algorithm for highway-like scenarios designed to increase the accuracy of the vehicle localization estimate. The contribution relies on a Hidden Markov Model (HMM) with a transient failure model. The idea behind the proposed approach is to exploit the availability of OpenStreetMap road properties in order to reduce the localization uncertainties that would result from relying only on a noisy line detector, by leveraging consecutive, possibly incomplete, observations. The algorithm's effectiveness is proven by employing a line detection algorithm and showing that we could achieve a much more usable, i.e., stable and reliable, lane localization over more than 100 km of highway scenarios, recorded both in Italy and Spain. Moreover, as we could not find a suitable dataset for a quantitative comparison of our results with other approaches, we collected datasets and manually annotated the ground truth of the vehicle ego-lane. Such datasets are made publicly available for use by the scientific community.
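
A minimal numpy sketch of the HMM idea: the hidden state is the ego lane (with the number of lanes taken from OpenStreetMap) and noisy, possibly failing line detections are fused over time with the forward algorithm. The transition and emission probabilities are illustrative assumptions, not the paper's values.

```python
import numpy as np

n_lanes = 3
# Transition model: the vehicle usually stays in its lane.
A = np.full((n_lanes, n_lanes), 0.05)
np.fill_diagonal(A, 0.90)
A /= A.sum(1, keepdims=True)

# Emission model per lane. Observation 0..2 = "detector reports lane i",
# observation 3 = "no/invalid detection" (the transient failure case).
B = np.array([[0.60, 0.15, 0.05, 0.20],
              [0.15, 0.60, 0.05, 0.20],
              [0.05, 0.15, 0.60, 0.20]])


def forward(observations, prior=None):
    belief = np.full(n_lanes, 1.0 / n_lanes) if prior is None else prior
    for z in observations:
        belief = (A.T @ belief) * B[:, z]   # predict, then correct
        belief /= belief.sum()
    return belief


# detector mostly reports the middle lane (1), with failures (z = 3) in between
print(forward([1, 3, 1, 1, 3, 1]))
```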

Two-camera based accurate vehicle speed measurement using average speed at a fixed point

Conference Paper D. F. Llorca, C. Salinas, M. Jiménez, I. Parra, A. G. Morcillo, R. Izquierdo, J. Lorenzo, M. A. Sotelo, 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC)

Abstract

In this paper we present a novel two-camera-based accurate vehicle speed detection system. Two high-resolution cameras, with high speed and narrow fields of view, are mounted on a fixed pole. Using different focal lengths and orientations, each camera points to a different stretch of the road. Unlike standard average speed cameras, where the cameras are separated by several kilometers and the errors in measurement of distance can be on the order of several meters, our approach deals with a short stretch of a few meters, which involves a challenging scenario where distance estimation errors should be on the order of centimeters. The relative distance of the vehicles w.r.t. the cameras is computed using the license plate as a known reference. We demonstrate that there is a specific geometry between the cameras that minimizes the speed error. The system was tested in a real scenario using a vehicle equipped with DGPS to compute ground truth speed values. The obtained results validate the proposal, with maximum speed errors < 3 km/h at speeds up to 80 km/h.
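
A back-of-the-envelope sketch of the average-speed-at-a-fixed-point principle: each camera provides a licence-plate-based distance at a known timestamp, and the speed follows from the travelled distance over the time gap. The numbers are made up, purely to illustrate the sensitivity to ranging errors over such a short baseline.

```python
# Hypothetical measurements: distance along the road [m] and timestamp [s]
d1, t1 = 18.40, 0.000   # camera 1
d2, t2 = 11.15, 0.350   # camera 2

speed = abs(d2 - d1) / (t2 - t1)          # m/s
print(f"speed = {speed * 3.6:.1f} km/h")

# With a baseline of only a few metres, centimetre-level ranging errors matter:
err = 0.05 / (t2 - t1) * 3.6              # effect of a 5 cm distance error
print(f"+/- {err:.2f} km/h per 5 cm of ranging error")
```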

Fusing directional passive UHF RFID and stereo vision for tag association in outdoor scenarios.

Conference Paper D. F. Llorca, R. Quintero, I. Parra, M. Jimenez, C. Fernández, R. Izquierdo, M. A. Sotelo. Fusing directional passive UHF RFID and stereo vision for tag association in outdoor scenarios. 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro. (2016).

Abstract

Stereo-based object detection systems can be greatly enhanced thanks to the use of passive UHF RFID technology. By combining tag localization with its identification capability, new features can be associated with each detected object, extending the set of potential applications. The main problem lies in the association between RFID tags and objects due to the intrinsic limitations of RSSI-based localization approaches. In this paper, a new directional RSSI-distance model is proposed, taking into account the angle between the object and the antenna. The parameters of the model are automatically obtained by means of a stereo-RSSI automatic calibration process. A robust data association method is presented to deal with complex outdoor scenarios in medium-sized areas with a measurement range of up to 15 m. The proposed approach is validated in crosswalks with pedestrians wearing portable passive RFID tags.
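
An illustrative numpy fit of a plain log-distance RSSI model, the usual starting point for this kind of RSSI-distance calibration (the directional, angle-dependent term proposed in the paper is omitted here); the path-loss parameters and measurements are synthetic.

```python
# Model assumed for the sketch: RSSI(d) = RSSI_0 - 10 * n * log10(d / d_0), d_0 = 1 m
import numpy as np

rng = np.random.default_rng(0)
d = rng.uniform(1.0, 15.0, 200)                                   # metres
rssi = -45.0 - 10 * 2.1 * np.log10(d) + rng.normal(0, 2.0, 200)   # synthetic readings

# Linear least squares in log10(d): rssi = a + b * log10(d), with n = -b / 10
b, a = np.polyfit(np.log10(d), rssi, 1)
print(f"RSSI_0 ~ {a:.1f} dBm, path-loss exponent n ~ {-b / 10:.2f}")


def rssi_to_distance(r):
    """Invert the fitted model to get a (coarse) distance estimate from RSSI."""
    return 10 ** ((r - a) / b)


print(rssi_to_distance(-60.0))   # roughly 5 m with these synthetic parameters
```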

Comparison between UHF RFID and BLE for Stereo-Based Tag Association in Outdoor Scenarios.

Conference Paper D. F. Llorca, R. Quintero, I. Parra, M. Jimenez, C. Fernández, R. Izquierdo, M. A. Sotelo. Comparison between UHF RFID and BLE for Stereo-Based Tag Association in Outdoor Scenarios. 2016 6th International Conference on IT Convergence and Security (ICITCS), Prague (2016).

Abstract

Stereo-based object detection systems can be greatly enhanced thanks to the use of wireless identification technology. By combining tag localization with its identification capability, new features can be associated with each detected object, extending the set of potential applications. The main problem lies in the association between wireless tags and objects due to the intrinsic limitations of Received Signal Strength Indicator-based localization approaches. In this paper, an experimental comparison between two specific technologies is presented: passive UHF Radio Frequency IDentification (RFID) and Bluetooth Low Energy (BLE). An automatic calibration process is used to model the relationship between RSSI and distance values. A robust data association method is presented to deal with complex outdoor scenarios in medium-sized areas with a measurement range of up to 15 m. The proposed approach is validated in crosswalks with pedestrians wearing portable RFID passive tags and active BLE beacons.

Assistive Pedestrian Crossings by Means of Stereo Localization and RFID Anonymous Disability Identification.

Conference Paper D. F. Llorca, R. Quintero, I. Parra, R. Izquierdo, C. Fernández and M. A. Sotelo. Assistive Pedestrian Crossings by Means of Stereo Localization and RFID Anonymous Disability Identification. 2015 IEEE 18th International Conference on Intelligent Transportation Systems, Las Palmas (2015).

Abstract

Assistive technology usually refers to systems used to increase, maintain, or improve the functional capabilities of individuals with disabilities. This idea is here extended to transportation infrastructures, using pedestrian crossings as a specific case study. We define an Assistive Pedestrian Crossing as a pedestrian crossing able to interact with users with disabilities and provide an adaptive response to increase, maintain or improve their functional capabilities while crossing. Thus, the infrastructure should be able to locate the pedestrians with special needs as well as to identify their specific disability. In this paper, user location is obtained by means of a stereo-based pedestrian detection system. Disability identification is proposed by means of an RFID-based anonymous procedure in which pedestrians are only required to wear a portable and passive RFID tag. Global nearest neighbor is applied to solve data association between stereo targets and RFID measurements. The proposed assistive technology is validated in a real crosswalk, including different complex scenarios with multiple RFID tags.
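
A sketch of global-nearest-neighbour association between stereo pedestrian tracks and RFID tag range estimates, assuming scipy; the gating threshold and the range-difference cost are simplified placeholders for the paper's formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# distances of stereo-detected pedestrians to the antenna [m] (assumed values)
stereo_range = np.array([3.2, 7.8, 12.5])
# distances inferred from the RSSI of each detected tag [m] (assumed values)
tag_range = np.array([12.0, 3.5])

cost = np.abs(stereo_range[:, None] - tag_range[None, :])   # |range difference|
rows, cols = linear_sum_assignment(cost)                    # globally optimal pairing

GATE = 2.0   # reject pairings with a range mismatch above 2 m
for r, c in zip(rows, cols):
    if cost[r, c] < GATE:
        print(f"pedestrian {r} <-> tag {c} (mismatch {cost[r, c]:.1f} m)")
```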

A Comparative Analysis of Decision Trees Based Classifiers for Road Detection in Urban Environments.

Conference Paper C. Fernández, R. Izquierdo, D. F. Llorca and M. A. Sotelo. A Comparative Analysis of Decision Trees Based Classifiers for Road Detection in Urban Environments. 2015 IEEE 18th International Conference on Intelligent Transportation Systems, Las Palmas (2015).

Abstract

In this paper a comparative analysis of decision-tree-based classifiers is presented. Two different approaches are compared: the first one is a specific classifier depending on the type of scene, and the second one is a general classifier for every type of scene. Both approaches are trained with a set of features that includes texture, color, shadows, vegetation and other 2D features. In addition to 2D features, 3D features are taken into account, such as normals, curvatures and heights with respect to the ground plane. Several tests are run on five different classifiers to find the best parameter configuration and obtain the importance of each feature in the final classification. In order to compare the results of this paper with the state of the art, the system has been tested on the KITTI Benchmark public dataset.

Stereo-based Pedestrian Detection in Crosswalks for Pedestrian Behavioural Modelling Assessment.

Conference Paper D. F. Llorca, I. Parra, R. Quintero, C. Fernández, R. Izquierdo, M. A. Sotelo. Stereo-based Pedestrian Detection in Crosswalks for Pedestrian Behavioural Modelling Assessment. 2014 IEEE ICINCO Conference. Vienna, Austria (2014).

Abstract

In this paper, a stereo- and infrastructure-based pedestrian detection system is presented to deal with infrastructure-based pedestrian safety measurements as well as to assess pedestrian behaviour modelling methods. Pedestrian detection is performed by region growing over temporal 3D density maps, which are obtained by means of stereo reconstruction and background modelling. 3D tracking makes it possible to correlate the pedestrian position with the different pedestrian crossing regions (waiting and crossing areas). As an example of an infrastructure safety system, a blinking luminous traffic sign is switched on to warn drivers about the presence of pedestrians in the waiting and crossing regions. The detection system provides accurate results even in nighttime conditions: an overall detection rate of 97.43% with one false alarm every 10 minutes. In addition, the proposed approach is validated for use in pedestrian behaviour modelling, applying logistic regression to model the probability that a pedestrian will cross or wait. Some of the predictor variables are automatically obtained by using the pedestrian detection system. Other variables still need to be labelled under manual supervision. A sequential feature selection method showed that time-to-collision and pedestrian waiting time (both variables automatically collected) are the most significant parameters when predicting the pedestrian intent. An overall predictive accuracy of 93.10% is obtained, which clearly validates the proposed methodology.
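
A hedged sketch, assuming scikit-learn, of the behaviour-modelling step: a logistic regression over the two predictors the study found most significant (time-to-collision and waiting time). The data here is synthetic, so the fitted coefficients are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
ttc = rng.uniform(0.5, 10.0, n)        # time-to-collision [s]
wait = rng.uniform(0.0, 15.0, n)       # waiting time at the kerb [s]
# toy ground truth: crossing becomes more likely with larger TTC and longer waits
p_cross = 1 / (1 + np.exp(-(0.8 * ttc + 0.3 * wait - 6.0)))
crossed = rng.uniform(size=n) < p_cross

model = LogisticRegression().fit(np.column_stack([ttc, wait]), crossed)
print(model.coef_, model.intercept_)
print("P(cross | TTC=4 s, wait=5 s) =", model.predict_proba([[4.0, 5.0]])[0, 1])
```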

Road curb and lanes detection for autonomous driving on urban scenarios

Conference Paper C. Fernández, R. Izquierdo, D. F. Llorca, M. A. Sotelo. Road curb and lanes detection for autonomous driving on urban scenarios. 2014 IEEE Intelligent Transportation Systems Conference. Qingdao, China (2014).

Abstract

This paper addresses a framework for road curb and lane detection in the context of urban autonomous driving, with particular emphasis on unmarked roads. Based on a 3D point cloud, the 3D parameters of several curb models are computed using curvature features and Conditional Random Fields (CRF). Information regarding obstacles is also computed based on the 3D point cloud, including vehicles and urban elements such as lampposts, fences, walls, etc. In addition, a gray-scale image provides the input for computing lane markings whenever they are present and visible in the scene. A high-level decision-making system yields accurate information regarding the number and location of drivable lanes, based on curbs, lane markings, and obstacles. Our algorithm can deal with curbs of different curvatures and heights, from as low as 3 cm, in a range of up to 20 m. The system has been successfully tested on images from the KITTI dataset in real traffic conditions, containing different numbers of lanes, marked and unmarked roads, as well as curbs of quite different heights. Although preliminary results are promising, further research is needed in order to deal with intersection scenes where no curbs are present and lane markings are absent or misleading.

Location & Address


Room E-333.
Dpto. de Automática
Escuela Politécnica. Campus Universitario.
Ctra. Madrid-Barcelona, Km. 33,600.
28805 Alcalá de Henares (Madrid), Spain