
This study introduces an AI-based multi-sensor fusion system to enhance human detection and collision avoidance in autonomous vehicles and drones. Fusing camera, LiDAR, and radar data, the system integrates deep learning detection models such as YOLO and Faster R-CNN with reinforcement learning algorithms such as DQN and PPO, leveraging the complementary strengths of the different sensors. Object recognition and path prediction are handled by CNN, RNN, and reinforcement learning components, enabling real-time collision avoidance even in complex environments. A key innovation is the interaction between vehicles and drones: object detections from aerial and ground viewpoints are shared so that both platforms can avoid collisions cooperatively based on predicted paths. The system employs distributed learning that combines cloud and edge computing to improve real-time responsiveness and energy efficiency, enabling data sharing without imposing heavy computational demands on any single platform. This strategy contrasts with previous research by reducing processing load and supporting coordinated operation. In addition, ethical considerations are embedded through algorithms designed to make optimal decisions in high-risk scenarios, promoting safer, cooperative operation and strengthening public trust. Together, these technologies aim to enhance both the effectiveness and the societal acceptance of autonomous systems.
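To make the fusion step concrete, the sketch below illustrates one common way such multi-sensor late fusion can be realized: per-sensor human detections are merged into fused tracks, with confidence rising when sensors agree. The `Detection` class, the IoU threshold, and the score-combination rule are illustrative assumptions for this sketch, not the paper's actual pipeline.

```python
# Illustrative sketch only: a minimal late-fusion step over per-sensor
# detections. All names and thresholds are assumptions, not the paper's
# implementation.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    sensor: str                              # "camera", "lidar", or "radar"
    box: Tuple[float, float, float, float]   # (x_min, y_min, x_max, y_max) in a shared frame
    confidence: float                        # detector score in [0, 1]

def iou(a: Tuple[float, ...], b: Tuple[float, ...]) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def fuse(detections: List[Detection], iou_thresh: float = 0.3) -> List[Detection]:
    """Greedy late fusion: overlapping detections from different sensors are
    merged into one track whose confidence grows with sensor agreement."""
    fused: List[Detection] = []
    for det in sorted(detections, key=lambda d: d.confidence, reverse=True):
        for track in fused:
            if iou(det.box, track.box) >= iou_thresh:
                # Combine scores as 1 - prod(1 - c): multi-sensor agreement
                # pushes confidence toward 1 without exceeding it.
                track.confidence = 1.0 - (1.0 - track.confidence) * (1.0 - det.confidence)
                break
        else:
            fused.append(Detection(det.sensor, det.box, det.confidence))
    return fused

if __name__ == "__main__":
    frame = [
        Detection("camera", (2.0, 5.0, 2.8, 6.8), 0.81),  # e.g., YOLO output
        Detection("lidar",  (2.1, 5.1, 2.9, 6.9), 0.64),  # clustered point cloud
        Detection("radar",  (9.0, 1.0, 9.6, 2.4), 0.40),  # lone radar return
    ]
    for d in fuse(frame):
        print(f"{d.sensor}-seeded track, confidence {d.confidence:.2f}")
```

In this toy frame, the overlapping camera and LiDAR detections merge into a single high-confidence track while the lone radar return survives as a weaker one; downstream path-prediction and avoidance logic of the kind the abstract describes would consume such fused tracks rather than raw sensor outputs.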