OPTIMIZATION OF ALGORITHMS AND PROCESSES FOR ACOUSTICAL AWARENESS IN INTELLIGENT ROBOTIC ACTIONS
Abstract
The classification of complex acoustic scenes in real-time scenarios is a rapidly advancing area, drawing considerable interest from the machine learning community. Techniques for classifying acoustic patterns in natural and urban soundscapes have been extensively explored. In this study, we present a novel framework for automatic acoustic classification tailored to behavioral robotics. Drawing on state-of-the-art texture descriptors from computer vision, we propose a feature descriptor that combines one-dimensional local ternary patterns (1D-LTP) with mel-frequency cepstral coefficients (MFCCs). The resulting feature vectors are classified with a multiclass support vector machine (SVM). Validated on the benchmark DCASE and RWCP datasets, our approach achieves accuracies of 97.38% and 94.10%, respectively, outperforming other feature descriptors. Furthermore, we introduce a multi-layer classification system for communicating non-verbal information in human-robot interaction (HRI) via sound, allowing robots to convey states such as urgency and directionality intuitively. Evaluations show that human participants understand these sounds effectively. Additionally, we review state-of-the-art acoustic scene analysis (ASA) technologies, focusing on source localization, signal enhancement, and ego-noise suppression, and highlight their role in supporting the self-awareness of autonomous systems. We also explore the potential of active sensing in ASA, particularly for humanoid robots, and propose a theoretical framework for mapping musical parameters to robotic motions, facilitating communication and coordination within robotic swarms. Overall, this study underscores the transformative potential of optimized acoustical-awareness algorithms for enhancing the capabilities and interactions of intelligent robotic systems across diverse applications.
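The feature-extraction and classification pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `ltp_1d` and `spectral_features` functions, the ternary threshold, the band count, and the synthetic two-class data are all assumptions, and the linear-band log-energy feature merely stands in for true MFCCs (which require mel filter banks and a DCT).

```python
import numpy as np
from sklearn.svm import SVC

def ltp_1d(signal, threshold=0.05):
    """Simplified 1D local ternary pattern histogram (illustrative).
    Each sample's two neighbours are coded 2/1/0 depending on whether
    they exceed centre+threshold, lie within the band, or fall below
    centre-threshold; the pair of ternary digits gives 9 codes."""
    c = signal[1:-1]
    left, right = signal[:-2], signal[2:]
    tern = lambda x: np.where(x > c + threshold, 2,
                              np.where(x < c - threshold, 0, 1))
    codes = tern(left) * 3 + tern(right)            # codes in 0..8
    hist = np.bincount(codes, minlength=9).astype(float)
    return hist / hist.sum()                        # normalized histogram

def spectral_features(signal, n_bands=13):
    """Stand-in for MFCCs: log power in n_bands linear FFT bands."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    return np.log1p([b.sum() for b in np.array_split(spec, n_bands)])

def describe(signal):
    """Concatenate the 1D-LTP histogram with the spectral features."""
    return np.concatenate([ltp_1d(signal), spectral_features(signal)])

# Synthetic two-class toy data: low- vs. high-frequency tones plus noise.
rng = np.random.default_rng(0)
def make(freq, n, length=1024):
    t = np.linspace(0, 1, length, endpoint=False)
    return [np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(length)
            for _ in range(n)]

X = np.array([describe(s) for s in make(30, 20) + make(200, 20)])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf").fit(X, y)   # multiclass SVM classifier
print(clf.score(X, y))
```

A real system would extract these descriptors per frame of the audio stream and train the SVM on labeled scene recordings such as those in DCASE or RWCP.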