Offensive Language and Hate Speech Detection using BERT Model

https://doi.org/10.22146/ijccs.99841

Fadila Shely Amalia(1), Yohanes Suyanto(2*)

(1) Department of Computer Science and Electronics, Universitas Gadjah Mada
(2) (Scopus ID: 57193142907); Department of Computer Science and Electronics, Universitas Gadjah Mada
(*) Corresponding Author

Abstract


Hate speech detection is an important problem in sentiment analysis and natural language processing. This study aims to improve the effectiveness of hate speech detection in English text using the BERT model, combined with modified preprocessing techniques to raise the F1-score. The dataset, sourced from Kaggle, contains English text with hate speech content. Evaluation results show a significant improvement in the model's accuracy and overall text classification performance. The BERT model achieved 89.11% accuracy on the test data, correctly predicting 85 out of 95 samples. While the model classifies offensive text well, at around 95% accuracy, it struggles to distinguish hate from offensive text, and there is some confusion between the "neither" and "offensive" categories. The classification report shows F1-scores of 0.43 for the "hate" class, 0.94 for the "offensive" class, and 0.84 for the "neither" class, with a weighted average F1-score of 0.89 and a macro average of 0.73. These results indicate that the BERT model delivers solid performance in detecting hate speech, though there is room for improvement, particularly in separating certain classes.
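The macro and weighted averages quoted in the abstract are simple functions of the per-class F1-scores and the class supports. The sketch below illustrates that relationship, preceded by a hypothetical tweet-cleaning step; the paper's exact preprocessing pipeline and test-set class supports are not given on this page, so `clean_tweet` and the support counts are illustrative assumptions only, not the authors' method.

```python
import re

def clean_tweet(text: str) -> str:
    """Assumed minimal tweet preprocessing: strip URLs and @mentions,
    keep hashtag words without '#', lowercase, letters only."""
    text = re.sub(r"https?://\S+", " ", text)      # remove URLs
    text = re.sub(r"@\w+", " ", text)              # remove @mentions
    text = text.replace("#", "")                   # keep hashtag word, drop '#'
    text = re.sub(r"[^a-z\s]", " ", text.lower())  # keep letters and spaces
    return re.sub(r"\s+", " ", text).strip()       # collapse whitespace

def macro_f1(f1_by_class: dict) -> float:
    """Unweighted mean of per-class F1: every class counts equally."""
    return sum(f1_by_class.values()) / len(f1_by_class)

def weighted_f1(f1_by_class: dict, support_by_class: dict) -> float:
    """Support-weighted mean of per-class F1: large classes dominate."""
    total = sum(support_by_class.values())
    return sum(f1_by_class[c] * support_by_class[c] / total
               for c in f1_by_class)

f1 = {"hate": 0.43, "offensive": 0.94, "neither": 0.84}  # from the abstract
print(round(macro_f1(f1), 2))  # 0.74 on these rounded scores; the paper
                               # reports 0.73, presumably averaging the
                               # unrounded per-class values
# Illustrative supports only (true test-set counts are not given here):
print(round(weighted_f1(f1, {"hate": 9, "offensive": 62, "neither": 24}), 2))
```

The gap between the two averages is the abstract's point in miniature: the weighted score is pulled up by the large, well-classified "offensive" class, while the macro score exposes the weak "hate" class.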

Keywords


Hate speech; Offensive; Deep Learning; BERT; Twitter











Copyright (c) 2024 IJCCS (Indonesian Journal of Computing and Cybernetics Systems)

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.



Copyright of :
IJCCS (Indonesian Journal of Computing and Cybernetics Systems)
ISSN 1978-1520 (print); ISSN 2460-7258 (online)
is a scientific journal publishing results in Computing
and Cybernetics Systems.
A publication of IndoCEISS.
Gedung S1 Ruang 416 FMIPA UGM, Sekip Utara, Yogyakarta 55281
Fax: +62274 555133
email: ijccs.mipa@ugm.ac.id | http://jurnal.ugm.ac.id/ijccs


