AFEELA AD/ADAS real-world sensor data: Anytime, Anywhere. 

AD/ADAS Architecture

AD/ADAS controls a car’s intelligence, playing a crucial role in automated driving and a society built around it.  

AFEELA’s AD/ADAS architecture simplifies the tasks of recognizing the external environment, planning routes, and computing control commands by using a learning-based approach instead of traditional analytical or rule-based methods. The objective is to streamline route planning and control into a single learning-based step, so the system can handle complex decisions rather than simply controlling car functions against clear-cut criteria such as “decelerate or accelerate.” For spatial recognition, AFEELA uses learning-based technology built on ViT (Vision Transformer), a newer image recognition model.

The ViT Contextual Image Model  

Convolutional Neural Networks (CNNs) are currently the dominant paradigm for recognizing vehicle sensor input in image classification. A CNN efficiently extracts texture-based features by sliding small, learned rectangular filters over camera-acquired images.
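To make the filter idea concrete, here is a minimal sketch of a single 2D convolution in plain NumPy. The filter values below are a fixed, illustrative edge detector; in a real CNN these weights are learned from data, and this is not AFEELA’s implementation.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small rectangular filter over the image, producing a
    feature map that responds to local texture such as edges."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge filter (Sobel-like); in a CNN these weights are learned.
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])

# Toy image: dark left half, bright right half -> strong vertical edge.
image = np.zeros((5, 5))
image[:, 3:] = 1.0

feature_map = conv2d(image, kernel)
print(feature_map)  # responds strongly where the brightness changes
```

Note that each output value depends only on a small local window of pixels; this locality is exactly the inductive bias that later sections contrast with ViT’s global attention.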

ViT handles input from various sensors — for example, camera, LiDAR, and RADAR — by vectorizing the information from each and learning the relationships among those vectors. The approach is borrowed from language models, which interpret each word within the context of the whole sentence; ViT classifies images by treating image patches in the same contextual way.
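The paragraph above can be sketched as follows: embeddings (“tokens”) from different sensors share one vector space, and self-attention lets every token draw on every other token for context. This is a simplified single-head sketch with identity projections, not AFEELA’s model; the token counts and dimensions are illustrative assumptions, and real ViTs add learned Q/K/V projections, multiple heads, and MLP blocks.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    """Scaled dot-product self-attention: every token attends to every
    other token, so each output vector mixes in global context.
    (Simplified: identity Q/K/V projections, single head.)"""
    d = tokens.shape[-1]
    q = k = v = tokens
    scores = q @ k.T / np.sqrt(d)
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ v

# Hypothetical sensor tokens: camera patches, LiDAR points, and RADAR
# returns are each embedded ("vectorized") into the same 8-dim space.
camera_tokens = rng.normal(size=(4, 8))   # 4 image-patch embeddings
lidar_tokens  = rng.normal(size=(2, 8))   # 2 LiDAR embeddings
radar_tokens  = rng.normal(size=(1, 8))   # 1 RADAR embedding

tokens = np.concatenate([camera_tokens, lidar_tokens, radar_tokens])
out = self_attention(tokens)              # each token now carries context
print(out.shape)  # (7, 8)
```

Because attention spans all tokens regardless of sensor or position, a camera patch can be interpreted in light of LiDAR geometry — the cross-sensor, whole-scene context the article describes.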

Because it combines visual information with the structure of the surrounding scene, ViT can detect people and cars even in challenging conditions — at night or in bad weather, for example — where CNNs struggle. The system also estimates attributes of people and vehicles, such as their spatial positions and postures.

ViT vs. CNN

Anytime, Anywhere.  

In the future, autonomous vehicles will expand the possibilities of people’s mobility. Most of us have dreamed, at one time or another, of getting into a car that automatically takes us to our destination. This is what is called Level 5 automated driving. The technological hurdles are of course high, and it remains a technology goal for car brands around the world.

AFEELA’s AD/ADAS concept for the future is anytime, anywhere. The objective is a technology that accurately identifies the relevant details of the surroundings (such as objects, lanes, and driving paths) from sensor data — reliably at any time of day, in all weather conditions, and even in areas without maps for autonomous driving. In some cases, local information is insufficient for lane detection, because of road deterioration, occlusion by surrounding vehicles, weather, or the absence of white lines. Such cases cannot be solved from local visibility alone, so contextual cues — road-surface structure and the positions and movement of surrounding vehicles — must also be considered. ViT’s strength lies in its ability to use this big-picture context effectively.


However, there are still performance issues. The “last mile” — narrow alleys in residential areas, for example — is hard to recognize and assess because map data there is insufficient. Intersections, too, lack lane markings, making it problematic to turn without map data. Another hurdle is the large amount of data needed for learning and for processing spatial relationships.

Unlike CNNs, ViT has no local inductive bias: it generalizes well but needs a great deal of data to learn. The more it is trained, the better it generalizes, yielding high performance anytime, anywhere. AFEELA addresses this by gathering training data through on-vehicle verification and simulators.

The traditional approach to minimizing risk is to analyze and model the surrounding environment and vehicle movements analytically. However, the behavior of people and motorbikes on urban public roads is so complex that risk is hard to determine accurately this way. With neural-network-based planning technology, the route planner instead finds low-risk routes from data-driven models of surrounding hazards, even on ordinary roads. Further work on the architecture, together with expanded data collection, will continue to improve the learning-based planner’s routes.
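The planning idea above can be illustrated with a toy sketch: score each candidate route against a risk model and pick the lowest-risk one. The `risk_model` function here is a hypothetical stand-in — a real system would predict risk from data with a learned model, and the routes and coordinates are invented for illustration.

```python
import math

def route_risk(route, risk_model):
    """Sum the predicted risk at each waypoint of a candidate route."""
    return sum(risk_model(x, y) for x, y in route)

def plan(candidates, risk_model):
    """Pick the candidate route with the lowest total predicted risk."""
    return min(candidates, key=lambda r: route_risk(r, risk_model))

# Hypothetical stand-in for a learned risk model: high risk near a
# pedestrian predicted at (2, 1). A real planner would get this surface
# from a data-driven model of surrounding hazards, not a formula.
def risk_model(x, y):
    return math.exp(-((x - 2) ** 2 + (y - 1) ** 2))

# Two candidate routes from (0, 0) to (4, 0).
straight = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
swerve   = [(0, 0), (1, -1), (2, -1), (3, -1), (4, 0)]

best = plan([straight, swerve], risk_model)
print(best is swerve)  # prints True: the planner avoids the risky region
```

In practice the candidate set and the risk surface both come from learned models, but the selection principle — minimize a data-driven risk cost rather than hand-written rules — is the same.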


Looking to the Invisible Future  

While improving ViT’s recognition accuracy for public-road driving, it is also important to look further into the future. In the field of automated driving, companies are working on technologies that accurately identify vehicles and other objects from sensor data and other sources.

In the future, as in-vehicle image recognition advances, the next consideration may be a vehicle that can make accurate decisions even when sensor input is unavailable.

Such achievements would mark a noteworthy milestone toward a car capable of making reliable decisions in all circumstances. Some aspects may require a fundamental shift in technology. ViT’s development is meaningful because it builds the knowledge needed to envision that future — a future in which “anytime, anywhere” is truly realized as the technology evolves through many challenges.

Interviewer: Takuya Wada
Writer: Asuka Kawanabe