TY - THES
N1 - Supervisor: Dr. Agung Fatwanto, S.Si., M.Kom.
ID - digilib66502
UR - https://digilib.uin-suka.ac.id/id/eprint/66502/
A1 - Fardan Zamakhsyari, NIM.: 22206051005
Y1 - 2024/06/27/
N2 - Question answering (QA) is one of the tasks in natural language processing (NLP) in which BERT-based language models have shown remarkable results. However, no research has yet compared the performance of the BERT variants M-BERT and IndoBERT on the mental health domain with an Indonesian-language dataset. This study aims to assess how well IndoBERT and M-BERT perform on QA tasks using Indonesian-language datasets on mental health topics. The research compares the performance of the IndoBERT and M-BERT models, both before and after fine-tuning, on QA tasks in the mental health domain. The dataset used is a translation of the Amod/mental_health_counseling_conversations dataset on Hugging Face. The performance of these models was evaluated using BERTScore. The results show that IndoBERT outperforms M-BERT after fine-tuning, with an F1-BERTScore of 91.8%, recall BERTScore of 89.9%, and precision BERTScore of 93.9%, compared to M-BERT, which achieves only an F1-BERTScore of 79.2%, recall BERTScore of 73.4%, and precision BERTScore of 86.2%. These findings highlight the need for language-specific models, such as IndoBERT for Indonesian, to improve the performance and relevance of responses in question-answering systems. In addition, this study demonstrates the effectiveness of fine-tuning in improving model performance: IndoBERT improved by 28%, while M-BERT improved by about 5%. The larger improvement of IndoBERT shows that models pre-trained on a specific language (in this case, Indonesian) can improve significantly when optimized for NLP tasks in that language.
PB - UIN SUNAN KALIJAGA YOGYAKARTA
KW - Natural Language Processing
KW - Question Answering
KW - BERT
KW - IndoBERT
KW - M-BERT
M1 - masters
TI - OPTIMASI MODEL NATURAL LANGUAGE PROCESSING UNTUK APLIKASI QUESTION-ANSWER KESEHATAN MENTAL: PERBANDINGAN MODEL INDOBERT DAN M-BERT
AV - restricted
EP - 83
ER -