eprintid: 72937
rev_number: 10
eprint_status: archive
userid: 12460
dir: disk0/00/07/29/37
datestamp: 2025-09-16 07:24:03
lastmod: 2025-09-16 07:24:03
status_changed: 2025-09-16 07:24:03
type: thesis
metadata_visibility: show
contact_email: muh.khabib@uin-suka.ac.id
creators_name: R. Abdullah Hammami, NIM.: 22206051019
title: KOMPARASI PERFORMA LARGE LANGUAGE MODELS UNTUK TUGAS PERINGKASAN TEKS BERBAHASA INDONESIA
ispublished: pub
subjects: 004.
divisions: S2_inf
full_text_status: restricted
keywords: Text Summarization, Fine-tuning, Gemma2, LLaMA3
note: Dr. Agung Fatwanto, S.Si., M.Kom.
abstract: The rapid growth of online information, coupled with low reading interest and heterogeneous literacy levels in Indonesia, necessitates concise, accurate, and context-sensitive automatic summarization. Given Indonesian’s low-resource status, systematic evaluation of locally adapted models is warranted. This study compares four Indonesian-capable large language models (Gemma2 9B CPT Sahabat-AI v1 Instruct, Llama3 8B CPT Sahabat-AI v1 Instruct, Gemma-SEA-LION-v3-9B-IT, and Llama-SEA-LION-v3-8B-IT) on news summarization to identify the most suitable model for practical use. We employ a benchmarking protocol on the IndoSum test subset (3,762 articles), comprising preprocessing (token reconstruction and punctuation cleanup), prompt design, 8-bit quantized inference, and automated evaluation with ROUGE (1/2/L; precision, recall, F1), BLEU, METEOR, and BERTScore. Inference is executed in four batches to meet computational constraints, and evaluation is standardized across all models. Llama3 8B CPT Sahabat-AI v1 Instruct achieves the most balanced performance: ROUGE F1 42.05% (precision 42.27%; recall 42.68%), BLEU 25.10%, and BERTScore P/R/F1 88.68%/88.43%/88.54%. Gemma2 9B CPT Sahabat-AI v1 Instruct excels in coverage, with ROUGE recall 48.23%, ROUGE F1 39.50%, BLEU 22.70%, METEOR 47.20%, and BERTScore P/R/F1 86.78%/89.17%/87.95%. The SEA-LION models score lower: Gemma-SEA-LION-v3-9B-IT (ROUGE P/R/F1 25.77%/37.58%/30.37%; BLEU 12.65%; METEOR 37.72%; BERTScore P/R/F1 84.63%/87.36%/85.97%) and Llama-SEA-LION-v3-8B-IT (ROUGE P/R/F1 25.22%/33.84%/28.71%; BLEU 11.06%; METEOR 34.57%; BERTScore P/R/F1 84.46%/86.80%/85.61%). Overall, the Indonesian-optimized Sahabat-AI models are superior and more stable: Llama3 8B is preferable when balancing precision, coverage, and structural consistency, while Gemma2 9B is better when recall and semantic alignment with the source are prioritized.
date: 2025-08-22
date_type: published
pages: 119
institution: UIN SUNAN KALIJAGA YOGYAKARTA
department: FAKULTAS SAINS DAN TEKNOLOGI
thesis_type: masters
thesis_name: other
citation: R. Abdullah Hammami, NIM.: 22206051019 (2025) KOMPARASI PERFORMA LARGE LANGUAGE MODELS UNTUK TUGAS PERINGKASAN TEKS BERBAHASA INDONESIA. Masters thesis, UIN SUNAN KALIJAGA YOGYAKARTA.
document_url: https://digilib.uin-suka.ac.id/id/eprint/72937/1/22206051019_BAB-I_IV-atau-V_DAFTAR-PUSTAKA.pdf
document_url: https://digilib.uin-suka.ac.id/id/eprint/72937/2/22206051019_BAB-II_sampai_SEBELUM-BAB-TERAKHIR.pdf