Identification of Incung Characters (Kerinci) to Latin Characters Using Convolutional Neural Network

https://doi.org/10.22146/ijccs.70939

Tesa Ananda Putri(1), Tri Suratno(2*), Ulfa Khaira(3)

(1) Department of Information Systems, FST Universitas Jambi, Jambi
(2) Department of Information Systems, FST Universitas Jambi, Jambi
(3) Department of Information Systems, FST Universitas Jambi, Jambi
(*) Corresponding Author

Abstract


The Incung script is a legacy of the Kerinci people of Kerinci Regency, Jambi Province. On October 17, 2014, the script was designated by the Ministry of Education and Culture as an intangible cultural heritage of Jambi Province, yet in practice it is nearly extinct in everyday use. This study aims to identify Incung (Kerinci) script characters and produce their Latin equivalents as output. The classification method used is a Convolutional Neural Network (CNN). The dataset consists of 1,400 Incung character images divided into 28 classes, and experiments were conducted to obtain the most optimal model. Using the optimal hyperparameters found in these experiments, namely a batch size of 32, 100 epochs, and the Adam optimizer, the CNN reached 99% accuracy on the training data and 91% accuracy on the testing data. The model was further evaluated on 80 word images (combinations of several characters) across four test scenarios, showing that it can recognize images scanned from printed books, digitally written test images, images containing more than two characters, and images with different font sizes.
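As an illustration of the training setup reported above, the sketch below shows a minimal Keras CNN trained with the stated hyperparameters (batch size 32, 100 epochs, Adam optimizer, 28 output classes). The layer configuration, input image size, and dataset directory layout are assumptions made for illustration only; the abstract does not specify the authors' exact architecture or preprocessing.

```python
# Minimal sketch, NOT the authors' exact architecture: a small CNN for
# 28-class Incung character classification using the hyperparameters
# reported in the abstract (batch size 32, 100 epochs, Adam optimizer).
# Image size (64x64 grayscale) and the "data/incung" directory are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (64, 64)   # assumed input size
NUM_CLASSES = 28      # 28 Incung character classes, as stated in the abstract

# Assumed directory layout: data/incung/<class_name>/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/incung", validation_split=0.2, subset="training", seed=42,
    color_mode="grayscale", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/incung", validation_split=0.2, subset="validation", seed=42,
    color_mode="grayscale", image_size=IMG_SIZE, batch_size=32)

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (1,)),  # normalize pixels
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # one output per character class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(train_ds, validation_data=val_ds, epochs=100)
```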

Keywords


Incung Script; Convolutional Neural Network; Deep Learning; Image; Pattern Recognition








Copyright (c) 2022 IJCCS (Indonesian Journal of Computing and Cybernetics Systems)

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.





