Welcome to the Sony Honda Mobility Tech Blog, where our engineers unveil the cutting-edge research and development shaping the future of intelligent mobility. Our mission is to redefine mobility, transforming it into a living, connected experience. This first blog post takes you behind the scenes of AFEELA, Sony Honda Mobility’s groundbreaking project, focusing on innovations that push the boundaries of perception in autonomous driving technology.

Transforming Recognition into Contextual Reasoning

AFEELA’s Intelligent Drive is more than an Advanced Driver-Assistance System (ADAS); it is a leap forward in understanding the intricate relationships between objects. Our AI interprets interactions and their implications for real-world driving decisions. This is achieved by integrating data from a variety of sensors, forming a comprehensive system that we refer to as “Understanding AI.” This intelligence excels not only in recognizing visible objects but also in reasoning about why they matter in context.

Enhanced Perception with SPAD-based LiDAR

Equipped with 40 sensors, AFEELA’s ADAS leverages a fusion of technologies including cameras, radar, and LiDAR. The inclusion of Sony’s Single Photon Avalanche Diode (SPAD) in the LiDAR enhances accuracy, especially in less favorable conditions. The SPAD-based LiDAR significantly boosts AFEELA’s perception AI accuracy by capturing high-density 3D point cloud data, an advantage that is especially evident when testing under challenging real-world conditions.
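To give a feel for why point cloud density matters, here is a minimal sketch (not AFEELA’s actual pipeline) of binning a LiDAR point cloud into voxels with NumPy; denser clouds fill more voxels, giving a perception model a finer view of object surfaces. The point data and voxel size here are synthetic assumptions for illustration.

```python
import numpy as np

def voxel_occupancy(points, voxel_size):
    """Bin a point cloud into voxels and count points per voxel.

    points: (N, 3) array of x, y, z coordinates in metres.
    Returns the unique occupied voxel indices and per-voxel point counts.
    """
    idx = np.floor(points / voxel_size).astype(np.int64)
    occupied, counts = np.unique(idx, axis=0, return_counts=True)
    return occupied, counts

# Synthetic cloud: 5000 points scattered in a 10 m cube.
rng = np.random.default_rng(0)
points = rng.uniform(0.0, 10.0, size=(5000, 3))
voxels, counts = voxel_occupancy(points, voxel_size=0.5)
```

A higher-density sensor produces more points per surface, so the same scene yields more occupied voxels and higher counts per voxel, which downstream detectors can exploit.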

From Object Recognition to Contextual Understanding

By employing topology for structural insight, AFEELA’s AI transcends basic recognition. It understands how objects relate spatially and logically, allowing it to interpret broader environmental contexts. The use of a Transformer architecture with its “attention” mechanism enables the AI to resolve connections between spatially disparate elements, offering deeper situational awareness and more nuanced decision-making.
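The attention mechanism mentioned above can be sketched in a few lines. This is a generic scaled dot-product self-attention over a set of object feature vectors, not AFEELA’s model; the random projection matrices stand in for learned weights. Note that every token attends to every other token, regardless of how far apart the objects are in the scene.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, d_k, seed=0):
    """Scaled dot-product self-attention over (n, d) object features.

    In a trained Transformer the projections W_q, W_k, W_v are learned;
    here they are random placeholders for illustration.
    """
    rng = np.random.default_rng(seed)
    n, d = tokens.shape
    W_q = rng.standard_normal((d, d_k)) / np.sqrt(d)
    W_k = rng.standard_normal((d, d_k)) / np.sqrt(d)
    W_v = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Q, K, V = tokens @ W_q, tokens @ W_k, tokens @ W_v
    # Each row of `weights` says how much one object attends to every other.
    weights = softmax(Q @ K.T / np.sqrt(d_k))
    return weights @ V, weights

# Five hypothetical objects, each described by an 8-dim feature vector.
tokens = np.random.default_rng(1).standard_normal((5, 8))
out, weights = self_attention(tokens, d_k=4)
```

Because the attention weights are computed pairwise over all tokens, a pedestrian on one side of the road can directly influence the representation of a vehicle on the other, which is the property the text describes as resolving spatially disparate connections.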

Overcoming Challenges with Real-Time Efficiency

Implementing sophisticated Transformer models in real-time automotive scenarios poses challenges, particularly in execution efficiency. By collaborating with Qualcomm, we have optimized these models, achieving a fivefold increase in execution efficiency and thereby enabling real-time operation without sacrificing performance.
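One common family of techniques for this kind of efficiency gain is reduced-precision inference. The sketch below shows generic symmetric int8 post-training quantization of a weight matrix in NumPy; it is an illustrative example of the technique, not the specific optimizations developed with Qualcomm.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization of a float32 weight matrix."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 matrix from int8 weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
# Rounding bounds the per-weight error by half the quantization step.
max_err = np.abs(dequantize(q, scale) - w).max()
```

int8 storage is four times smaller than float32, and automotive accelerators typically execute integer math far faster than floating point, which is why quantization is a standard lever for real-time Transformer inference.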

Realizing Practical AI Through Multimodal Integration

AFEELA’s AI thrives in dynamic driving environments by integrating inputs from LiDAR, radar, and SD maps, which enhances adaptive intelligence under varying conditions. This multi-modal synthesis is crucial for creating “Real-World Usable Intelligence” that is robust and reliable in practice.
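As a toy illustration of multi-modal synthesis (again, not AFEELA’s actual architecture), the sketch below performs simple late fusion: per-sensor confidence grids are combined as a weighted average, where the hypothetical reliability weights could, for example, down-weight a camera at night.

```python
import numpy as np

def fuse_detections(sensor_scores, weights):
    """Late fusion of per-sensor confidence grids by weighted average.

    sensor_scores: dict mapping sensor name -> (H, W) confidence grid in [0, 1].
    weights: dict mapping sensor name -> reliability weight (assumed values).
    """
    total = sum(weights[s] for s in sensor_scores)
    fused = sum(weights[s] * sensor_scores[s] for s in sensor_scores)
    return fused / total

# Synthetic 4x4 confidence grids for three sensor modalities.
rng = np.random.default_rng(0)
scores = {name: rng.random((4, 4)) for name in ("camera", "radar", "lidar")}
weights = {"camera": 0.5, "radar": 1.0, "lidar": 1.0}
fused = fuse_detections(scores, weights)
```

Even this trivial scheme shows the robustness argument: when one modality degrades, its contribution can be reduced while the others keep the fused estimate usable.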

AFEELA’s pioneering work connects world-leading AI research with tangible automotive engineering, crafting AI that truly comprehends the complex world around it. In future blog posts, we will delve deeper into our strides in AI learning efficiency, further illustrating the evolution and impact of our work in mobility technology.