Hybrid AlexNet–TabNet with Lyrebird Feature Selection for Injury Prevention and Athlete Health Monitoring
DOI: https://doi.org/10.70062/jmih.v1i2.165
Keywords: AlexNet, Athlete health monitoring, Injury prevention, Lyrebird Optimization Algorithm, Motion classification, Sports healthcare, TabNet, Wearable sensors
Abstract
Sports healthcare increasingly relies on intelligent motion analysis to monitor athlete performance, identify risky movements, and prevent injuries. This study targets athlete motion data gathered from wearable sensors, which record multidimensional signals such as acceleration, angular velocity, and joint kinematics. Its goal is a real-time, interpretable motion classification framework that can accurately distinguish biomechanically similar movements, such as a jump versus a kick, which earlier models often confuse. To this end, we propose a hybrid approach combining AlexNet for spatial feature extraction, TabNet for attention-based interpretability, the discrete wavelet transform (DWT) for time–frequency analysis, and the Lyrebird Optimization Algorithm (LOA) for feature selection. Experiments on the 6G-SDN Sports Motion Dataset show that the framework achieves 98.85% accuracy, 98.60% precision, 98.61% sensitivity, and a 98.65% F1-score, outperforming CNN-only, TabNet-only, and LSTM baselines by 2.7–4.6%. Interpretability analysis highlights ankle angular velocity and knee joint angle as key predictors, consistent with sports medicine findings on anterior cruciate ligament (ACL) strain and lower-limb injury risk. Overall, the hybrid model delivers state-of-the-art classification performance together with biomechanically meaningful insights, supporting its use as a real-time healthcare tool for injury prevention, athlete monitoring, and rehabilitation.
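To make the pipeline described in the abstract concrete, the sketch below shows one way its four components could be wired together. It is illustrative only, not the published implementation: all names (dwt_features, lyrebird_select, HybridAlexNetTabNet) are hypothetical, the softmax feature mask is a simplified stand-in for TabNet's attentive transformer, and a random subset search stands in for the Lyrebird Optimization Algorithm.

```python
# Minimal sketch of the hybrid pipeline; hypothetical names, simplified
# stand-ins for TabNet's attentive transformer and the Lyrebird Optimization
# Algorithm (LOA). Not the authors' implementation.
import itertools

import numpy as np
import pywt
import torch
import torch.nn as nn


def dwt_features(signal, wavelet="db4", level=3):
    """Time-frequency features: per-band energies of a discrete wavelet
    decomposition of a 1-D sensor signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([float(np.sum(c ** 2)) for c in coeffs])


def lyrebird_select(X, y, n_keep, n_iters=100, seed=0):
    """Placeholder for LOA: random search over feature subsets, scored by
    mean distance between class centroids (a crude separability proxy)."""
    rng = np.random.default_rng(seed)
    best_mask, best_score = None, -np.inf
    for _ in range(n_iters):
        mask = rng.choice(X.shape[1], size=n_keep, replace=False)
        centroids = [X[y == c][:, mask].mean(axis=0) for c in np.unique(y)]
        score = np.mean([np.linalg.norm(a - b)
                         for a, b in itertools.combinations(centroids, 2)])
        if score > best_score:
            best_mask, best_score = mask, score
    return best_mask


class HybridAlexNetTabNet(nn.Module):
    """AlexNet-style 1-D convolutional extractor for raw sensor windows,
    plus a softmax feature mask loosely mimicking TabNet's attention over
    the selected DWT features."""

    def __init__(self, n_channels, n_features, n_classes):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=11, stride=4), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.attn = nn.Linear(n_features, n_features)  # produces the mask
        self.head = nn.Linear(128 + n_features, n_classes)

    def forward(self, x_seq, x_tab):
        spatial = self.conv(x_seq).squeeze(-1)           # (B, 128)
        mask = torch.softmax(self.attn(x_tab), dim=-1)   # interpretable weights
        return self.head(torch.cat([spatial, x_tab * mask], dim=-1)), mask
```

In a full system, dwt_features would be applied per sensor channel, the real LOA would replace the random subset search, and the per-feature mask returned by forward is the kind of signal that would surface predictors such as ankle angular velocity and knee joint angle in the interpretability analysis.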