Collections of traffic images used to improve the perception capabilities of the autonomous car


Original traffic scene. Credit: Panagiotis Meletis

If self-driving cars are to navigate traffic safely, they need to “understand” the things around them.

They can learn this by training with existing images of traffic situations. Panagiotis Meletis combined various collections of traffic images to improve the perceptual abilities of TU/e’s autonomous car.

Is this young man with the cap waiting for someone, or is he planning to cross the street? A ball rolls into the street; will a child run after it? And is this little blue Toyota going to parallel park, or is it about to pull away? When driving a car in town, you have to constantly assess these kinds of situations, which requires in-depth knowledge of traffic on the part of the person behind the wheel. One of the major challenges for an autonomous car is therefore to draw the right conclusions from the things it “sees” around it, so that it is able to anticipate unexpected situations, a prerequisite for participating safely in traffic.

The first step towards a better understanding of traffic situations is to correctly identify the different objects in the images that an autonomous vehicle receives from its camera, explains Greek researcher Panagiotis Meletis. Within the Video Coding & Architectures group, which specializes in image recognition, he worked on a Mobile Perception Systems Lab project: an autonomous car that regularly takes a test drive on the TU/e campus. “It must be able to determine whether it sees a traffic light or a tree, a pedestrian, a cyclist or a vehicle.” And at a more detailed level, it should also be able to recognize wheels or limbs, as these indicate the direction of movement and the intentions of road users.

Gray torsos

You can train an artificial neural network (artificial intelligence, or AI) to analyze a traffic scene by providing it with large numbers of images of traffic situations in which all relevant elements have been tagged. You can then measure the AI’s level of understanding by providing it with new, unlabeled images. An example of such an image can be seen below. In the bottom image, the colors show us how the AI interpreted the image: the cars are blue, the bikes are dark red, people’s arms are colored orange, and their torsos are gray.


Ideal processed image showing objects (white outlines) and semantic information (colors). Credit: Panagiotis Meletis
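
To make this train-then-test idea concrete, here is a minimal sketch, assuming a PyTorch setup, of how a per-pixel labeling network could be trained on tagged traffic images and then applied to a new, unlabeled image. The model choice (an off-the-shelf DeepLabV3), the class count and the random stand-in data are illustrative assumptions, not details of the actual research.

```python
# Minimal sketch: train a semantic segmentation network on labeled traffic
# images, then label a new image pixel by pixel. Illustrative only; the model
# and class list are assumptions, not the setup used in the thesis.
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 20  # e.g. road, car, cyclist, pedestrian, traffic light, ...

model = deeplabv3_resnet50(num_classes=NUM_CLASSES)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()  # compares per-pixel logits with the tags

def train_step(images, labels):
    """One update on a batch of images [B,3,H,W] with per-pixel tags [B,H,W]."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]      # [B, NUM_CLASSES, H, W]
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def predict(image):
    """Assign each pixel of one unlabeled image [3,H,W] a class id."""
    model.eval()
    logits = model(image.unsqueeze(0))["out"]
    return logits.argmax(dim=1).squeeze(0)   # [H, W] class map

# Smoke test with random tensors standing in for real annotated images.
loss = train_step(torch.randn(2, 3, 128, 128),
                  torch.randint(0, NUM_CLASSES, (2, 128, 128)))
print(loss, predict(torch.randn(3, 128, 128)).shape)
```

The colored overlay in the figure above corresponds to exactly such a class map: each pixel’s predicted class id is rendered as a color.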

When Meletis began his doctoral research, there were only a few publicly available datasets of images depicting traffic scenes, he says. “Now there are dozens of them, each with its own goal. Think of images containing traffic lights, images containing cyclists, pedestrians, and so on. The problem, however, was that each dataset was labeled with a different system.”

Meletis’s contribution is that he succeeded in connecting these labels at a higher semantic level. “To give you an idea: cars, buses and trucks all fall into the ‘vehicle’ category. And cyclists and motorcyclists are both specific examples of ‘riders’. With the help of these definitions, I was able to train our AI with all the available datasets at the same time. That immediately gave much better results.”
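
The idea behind that merging step can be illustrated with a small sketch: a taxonomy that lifts each dataset’s own labels to a shared higher level, so that annotations from differently labeled collections land in one common label space. The dataset names and label lists below are hypothetical examples, not the actual mapping from the thesis.

```python
# Hypothetical illustration of merging differently labeled datasets by
# lifting their labels to shared parent categories.
SHARED = ["vehicle", "rider", "pedestrian", "traffic light"]  # common level

TAXONOMY = {  # per-dataset vocabulary -> shared category (made-up examples)
    "dataset_a": {"car": "vehicle", "bus": "vehicle", "truck": "vehicle",
                  "person": "pedestrian"},
    "dataset_b": {"cyclist": "rider", "motorcyclist": "rider",
                  "traffic_light": "traffic light", "pedestrian": "pedestrian"},
}

def to_shared_id(dataset: str, label: str) -> int:
    """Translate a dataset-specific label into a shared class id."""
    return SHARED.index(TAXONOMY[dataset][label])

# Annotations from both datasets now use one label space, so batches from
# all of them can train the same network at the same time.
assert to_shared_id("dataset_a", "bus") == to_shared_id("dataset_a", "car")
print(to_shared_id("dataset_b", "cyclist"))  # 1 -> "rider"
```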

Heavy rain

The power of his method was proven during a workshop organized as part of the Computer Vision and Pattern Recognition conference in 2018. “This is a large annual conference with over ten thousand participants, including all the big tech companies. One of the competitions that year was dedicated to ‘robust vision’, in which images of traffic with visual impediments, such as heavy rain or overexposure to the sun, had to be analyzed. Our system performed better than any of the other participants on a dataset that contained many of these degraded images.”

Over the past year, Meletis has continued to work on this project as a post-doctoral fellow while completing his thesis, which is standard practice in his group. “During this time, we managed to compile two new datasets for holistic scene understanding.” And before the pandemic broke out, Meletis was part of TU/e’s communications team, recruiting students and doctoral candidates in his country of origin, Greece. “I am very enthusiastic about the university and the atmosphere in Eindhoven, and I wanted to pass that on to my compatriots, who might otherwise not take the step of going to TU/e for fear of the unknown.”

What about the future? “I am looking for a job in which I hope to put the latest scientific knowledge into practice, somewhere at the interface between academia and industry. Preferably in the Netherlands or elsewhere in Europe, because of the pandemic, but I could also move to the United States when the time is right.”




Provided by Eindhoven University of Technology


Citation: Collections of traffic images used to improve the perception capabilities of the autonomous car (2021, November 4) retrieved 4 November 2021 from https://techxplore.com/news/2021-11-traffic-images-perception-capabilities-self-conduct.html

This document is subject to copyright. Other than fair use for private study or research purposes, no part may be reproduced without written permission. The content is provided for information only.

