Application of Natural Language Processing for Sentiment Analysis in Digital Application User Feedback
Keywords:
BERT, Natural Language Processing, Sentiment Analysis, Transformer, User Feedback

Abstract
The increasing volume of user feedback on digital applications presents a major challenge for developers and analysts, as manual analysis is time-consuming, subjective, and inefficient. This research aims to automatically identify sentiment patterns within large-scale user feedback using Natural Language Processing (NLP) techniques based on the Transformer architecture. The study applies a Transformer-based model, specifically BERT, to classify sentiment into positive, neutral, and negative categories. User feedback data were collected from various digital application platforms and then preprocessed through tokenization, stopword removal, and stemming to ensure text quality and consistency. The fine-tuned Transformer model achieved high accuracy in classifying sentiment patterns, demonstrating its ability to capture nuanced contextual meaning in textual data. The results revealed that positive feedback accounted for 45.2%, neutral for 23.8%, and negative for 31.0% of the total dataset. Compared to manual sentiment analysis, the Transformer-based approach showed greater efficiency, reduced analysis time, and minimized human bias. These findings highlight the transformative potential of deep learning models in automating large-scale text analytics. In conclusion, this research confirms that Transformer-based NLP methods provide a robust and scalable solution for sentiment analysis of user feedback, enabling digital application developers to monitor user satisfaction and improve service quality based on data-driven insights.
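To make the pipeline summarized above concrete, the sketch below shows one possible implementation of the described steps: preprocessing (tokenization, stopword removal, stemming) followed by fine-tuning BERT for three-class sentiment classification. It assumes the Hugging Face `transformers` library, the `bert-base-uncased` checkpoint, and NLTK for stopwords and stemming; the paper does not name a specific toolkit, checkpoint, dataset, or hyperparameters, so all of these, including the toy examples and training settings, are illustrative assumptions rather than the study's reported configuration.

```python
"""Minimal sketch: three-class sentiment classification of user feedback with
a fine-tuned BERT model. Checkpoint, dataset, and hyperparameters are
illustrative assumptions, not the paper's reported configuration."""

import nltk
import torch
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

nltk.download("stopwords", quiet=True)  # one-time download of the stopword list

STOP_WORDS = set(stopwords.words("english"))
STEMMER = PorterStemmer()
LABELS = {"negative": 0, "neutral": 1, "positive": 2}


def preprocess(text: str) -> str:
    """Lowercase, remove stopwords, and stem each token, mirroring the
    preprocessing steps described in the abstract."""
    tokens = [t for t in text.lower().split() if t not in STOP_WORDS]
    return " ".join(STEMMER.stem(t) for t in tokens)


class FeedbackDataset(Dataset):
    """Wraps (text, label) pairs as tokenized tensors for the Trainer."""

    def __init__(self, texts, labels, tokenizer, max_len=128):
        cleaned = [preprocess(t) for t in texts]
        self.enc = tokenizer(cleaned, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: v[idx] for k, v in self.enc.items()}
        item["labels"] = self.labels[idx]
        return item


if __name__ == "__main__":
    # Toy examples standing in for the collected app-review dataset.
    texts = ["Love the new update, works great!",
             "The app crashes every time I open it.",
             "It is okay, nothing special."]
    labels = [LABELS["positive"], LABELS["negative"], LABELS["neutral"]]

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=len(LABELS))

    args = TrainingArguments(output_dir="bert-feedback-sentiment",
                             num_train_epochs=3,
                             per_device_train_batch_size=16,
                             learning_rate=2e-5)

    Trainer(model=model, args=args,
            train_dataset=FeedbackDataset(texts, labels, tokenizer)).train()
```

After fine-tuning, the model's per-review predictions over the three classes can be aggregated to produce corpus-level distributions such as the 45.2% positive, 23.8% neutral, and 31.0% negative split reported above.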


