Hyundai Mobis Introduces Autonomous Driving Test Using 3D Game Technology
-Utilizing 3D game and deep learning technology to develop a camera sensor with innovative performance for autonomous driving vehicles
-Using software for 3D game development to produce the imagery of mock autonomous driving in various environments
-Enhancing camera performance to verify the ability to recognize vehicles, pedestrians, traffic lights, and road signs, etc.
-Increasing the accuracy of sensors by creating various environments, such as bad weather, unusual terrains and road conditions
-Also developing technology for automatically classifying 18 million driving images for each country through deep learning
An image of the virtual driving environment created with the mock autonomous driving imaging technology that Hyundai Mobis is currently developing, based on high-definition 3D game development software. (Graphic: Business Wire)
Hyundai Mobis is actively introducing innovative technologies such as 3D imaging and deep learning to drastically improve the accuracy of the autonomous driving sensors it is developing in-house. The aim is to accelerate its roadmap for future technologies by adopting diverse ideas in the R&D process.
On September 16, Hyundai Mobis announced that it has begun developing mock autonomous driving imaging technology using high-definition software for 3D game development.
The technology conducts autonomous driving tests in 3D virtual environments across a variety of scenarios, much like those used in computer games. Because any desired test environment can be created without the constraints of real-world testing, camera performance can be refined far more quickly. The biggest automotive supplier in Korea expects to greatly improve the object recognition accuracy of the cameras used in the autonomous driving vehicles it is developing in-house.
To develop the related technologies, Hyundai Mobis's Technical Center of India recently entered into a contract with Tata Elxsi and is spearheading this development. Tata Elxsi is an Indian software company that provides solutions optimized for ICT (Information and Communications Technology) sectors, such as artificial intelligence, IoT (Internet of Things), big data and autonomous driving.
In 2007, Hyundai Mobis opened a research center in Hyderabad, often referred to as India's equivalent of Silicon Valley. The Technical Center of India has hired a large number of excellent local researchers specializing in the development and verification of software for DAS (Driver Assistance Systems), autonomous driving systems and multimedia.
“We are planning to finish developing the mock autonomous driving imaging technology by the end of this year,” said Yang Seung-wook, vice president and head of the Hyundai Mobis ICT Research Center. “We will proactively make the most of top-notch tech companies both at home and abroad in various fields, including artificial intelligence, to develop core future car technologies which will enable us to get ahead of the global competition.”
-Unlimited testing by implementing various scenarios, such as bad weather and busy downtown streets
Hyundai Mobis will fully leverage the technology to drastically enhance the performance of the cameras used in autonomous driving cars. The recognition accuracy of the camera, the core sensor, is essential to safe autonomous driving, and leading global companies are now competing to improve it.
The virtual driving environment, which Hyundai Mobis is seeking to introduce to autonomous driving testing, is made by using the imaging software for developing 3D games. If high-definition 3D images are used, it is possible to make various driving scenarios, such as night roads on a rainy day, congested downtown area, puddles, and road construction sites.
If the cameras for autonomous driving cars are tested in these various virtual environments, the recognition performance can be improved so that it is possible to correctly classify a wide range of objects including vehicles, pedestrians, traffic light infrastructure and road markings under any harsh condition.
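One way to picture how a virtual environment multiplies test coverage is to enumerate combinations of scenario dimensions. The sketch below is a generic illustration; the dimension names and values are hypothetical and do not describe Hyundai Mobis's actual tooling.

```python
from itertools import product

# Hypothetical scenario dimensions a virtual test environment could vary.
WEATHER = ["clear", "rain", "fog", "snow"]
TIME_OF_DAY = ["day", "dusk", "night"]
ROAD = ["highway", "downtown", "construction_zone", "puddles"]

def enumerate_scenarios():
    """Yield every combination of the scenario dimensions as a dict."""
    for weather, time_of_day, road in product(WEATHER, TIME_OF_DAY, ROAD):
        yield {"weather": weather, "time_of_day": time_of_day, "road": road}

scenarios = list(enumerate_scenarios())
# 4 weather x 3 times of day x 4 road types = 48 distinct test scenarios
print(len(scenarios))
```

Even this toy example yields 48 distinct scenarios, including combinations (night snow on a construction site, say) that would be impractical to stage on a real test track.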
In addition to testing autonomous driving vehicles around the world and collecting information on various climates, unusual terrain and road conditions, the company will be able to use computer-game-like mock environments to verify sensor performance in the desired manner, anytime and anywhere, gaining the upper hand over the competition.
-Using deep learning technology to automatically classify 18 million images…radically improving sensor performance
Hyundai Mobis is also striving to develop the technology for automatically classifying driving images based on deep learning technology, an area of artificial intelligence, by the first half of next year. The purpose is, again, to drastically improve the recognition performance of the cameras for autonomous driving vehicles.
The front-view camera installed in autonomous driving cars captures a myriad of objects, such as vehicles, lanes, pedestrians and traffic lights, in place of human eyes. Correctly reading these images requires a great deal of information: the more data that is secured, the more the system can learn, and the better the sensor's recognition accuracy becomes.
Within this training data, the image itself is important, but so is labeling, that is, assigning a name to each piece of data. For the camera to learn, the type of each captured object must be specified, for example, as a vehicle, a pedestrian or a traffic sign.
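In practice, a labeled training sample pairs an image with annotations that name each object and locate it in the frame. The record format below is a generic illustration of such labeling, not the format Hyundai Mobis uses.

```python
# A hypothetical labeled sample: one captured frame plus its annotations.
sample = {
    "image": "frame_000123.png",
    "annotations": [
        # Each object gets a class label and a pixel bounding box [x1, y1, x2, y2].
        {"label": "vehicle",      "bbox": [120, 340, 410, 520]},
        {"label": "pedestrian",   "bbox": [600, 300, 660, 480]},
        {"label": "traffic_sign", "bbox": [820, 100, 870, 160]},
    ],
}

def class_counts(sample):
    """Count how many objects of each class were labeled in a frame."""
    counts = {}
    for ann in sample["annotations"]:
        counts[ann["label"]] = counts.get(ann["label"], 0) + 1
    return counts

print(class_counts(sample))
```

Producing millions of such records by hand is exactly the bottleneck the article describes, which is why automating the labeling step matters.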
“In general, around 1,000 people are allocated and they manually label each of the images called in by the sensors,” said director Lee Jin-eon, head of the Autonomous Driving Advanced Development Department of Hyundai Mobis. “Hyundai Mobis is aiming to take advantage of the deep learning-based computing technology to improve its efficiency, including accuracy and speed.”
In the industry, it is known that about a million images are required per object type for the cameras in autonomous driving vehicles to recognize it correctly. Hyundai Mobis has defined a total of 18 classification categories of its own (vehicles, pedestrians, lanes, road conditions, etc.), and the company is developing technology to automatically label an average of 18 million driving images for each country.
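A common pattern for this kind of labeling automation is pseudo-labeling: a trained model labels each image, high-confidence predictions are accepted automatically, and low-confidence ones are routed to human reviewers. The sketch below illustrates that general pattern under assumed names and thresholds; it is not Hyundai Mobis's actual pipeline.

```python
CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; a real pipeline would tune this

def auto_label(images, model, threshold=CONFIDENCE_THRESHOLD):
    """Split images into an auto-labeled set and a human-review queue.

    `model(image)` is assumed to return a (label, confidence) pair.
    """
    auto, review = [], []
    for image in images:
        label, confidence = model(image)
        if confidence >= threshold:
            auto.append((image, label))   # accept the machine label
        else:
            review.append(image)          # defer to a human annotator
    return auto, review

# Toy stand-in model: confident about vehicles, unsure about everything else.
def toy_model(image):
    return ("vehicle", 0.95) if "car" in image else ("unknown", 0.4)

auto, review = auto_label(["car_01.png", "blur_02.png"], toy_model)
print(len(auto), len(review))
```

The design trade-off is the threshold: raise it and more images fall back to costly human labeling; lower it and mislabeled data leaks into the training set.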
Because the quantity and quality of this database (DB) determine the recognition accuracy of the sensors, applying deep learning-based automation is expected to improve that accuracy drastically.
Hyundai Mobis is currently carrying out various projects to flesh out its blueprint for future technology. The company recently invested in StradVision, a domestic startup with deep learning camera imaging technology, and is working with a German radar specialist to accelerate the development of high-performance radar.
Hyundai Mobis is also actively recruiting global talents. Last July, it hired Gregory Baratoff, an autonomous driving sensor expert, and in the first half of this year, it hired Carsten Weiss to reinforce the global competitiveness of its software division. Both came from Germany’s Continental.
In addition, Hyundai Mobis is planning to increase its autonomous driving R&D workforce from the current level of about 600 to more than 1,000 by 2021. The company will focus its energy on developing core future car technologies, for example by increasing the number of software designers in its domestic technical center from about 800 today to 4,000 by 2025.
Choon Kee Hwang, +82-2-2018-5519