
Issue

Vol. 9, No. 2, May 2024

Issue Published: May 31, 2024

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Enhancing Qur'anic Recitation Experience with CNN and MFCC Features for Emotion Identification

https://doi.org/10.22219/kinetik.v9i2.2007
Lailis Syafa'ah
Universitas Muhammadiyah Malang
Roby Prasetyono
Universitas Muhammadiyah Malang
Hariyady Hariyady
Universitas Muhammadiyah Malang

Corresponding Author(s): Lailis Syafa'ah

lailis@umm.ac.id

Kinetik: Game Technology, Information System, Computer Network, Computing, Electronics, and Control, Vol. 9, No. 2, May 2024
Article Published: May 27, 2024


Abstract

This study examines emotion identification in Qur'anic murottal recitation using MFCC feature extraction and a CNN. Murottal recordings from a variety of reciters are collected, MFCC features are extracted to capture their acoustic properties, and a CNN model is trained and tested on the emotion-labelled data. The results show that combining MFCC and CNN significantly improves emotion identification: the CNN model attains an accuracy of 56 percent with the Adam optimizer (batch size 8) and a minimum of 45 percent with the RMSprop optimizer (batch size 16). Notably, accuracy improves when fewer emotional parameters are used, and the Adam optimizer remains stable across a range of batch sizes. With its analysis of emotional expression and user-specific recommendations, this work advances emotion identification technology in the context of multitonal music.
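
To make the pipeline concrete, the sketch below shows how MFCC features can be extracted from a recitation clip and fed to a small CNN trained with the Adam optimizer at batch size 8, as reported above. It is a minimal illustration only, assuming librosa and TensorFlow/Keras are available; the 40-coefficient MFCC setting, the fixed frame count, the emotion label set, and the network depth are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch of an MFCC + CNN emotion classifier as described in the abstract.
# Assumptions (not taken from the paper): 40 MFCC coefficients, clips padded or
# cropped to a fixed number of frames, a hypothetical label set, and a small
# two-block CNN compiled with the Adam optimizer.
import numpy as np
import librosa
import tensorflow as tf

N_MFCC, N_FRAMES = 40, 216            # assumed feature-matrix shape
EMOTIONS = ["sad", "calm", "happy"]   # hypothetical emotion labels

def extract_mfcc(path: str) -> np.ndarray:
    """Load one murottal clip and return a fixed-size MFCC matrix."""
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=N_MFCC)
    # Pad with zeros or crop along the time axis so every clip has the same shape.
    if mfcc.shape[1] < N_FRAMES:
        mfcc = np.pad(mfcc, ((0, 0), (0, N_FRAMES - mfcc.shape[1])))
    return mfcc[:, :N_FRAMES]

def build_cnn(n_classes: int) -> tf.keras.Model:
    """Small CNN over the MFCC matrix, compiled with Adam as in the abstract."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(N_MFCC, N_FRAMES, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage sketch: X has shape (n_clips, N_MFCC, N_FRAMES, 1), y holds integer labels.
# model = build_cnn(len(EMOTIONS))
# model.fit(X, y, batch_size=8, epochs=50, validation_split=0.2)

Swapping tf.keras.optimizers.RMSprop() into model.compile and batch_size=16 into model.fit would correspond to the comparison condition mentioned in the abstract.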

Keywords

Emotion identification, Qur'an murottal sound, Mel-Frequency Cepstral Coefficients (MFCC), Convolutional Neural Network (CNN)
Syafa’ah, L., Prasetyono, R., & Hariyady, H. (2024). Enhancing Qur’anic Recitation Experience with CNN and MFCC Features for Emotion Identification. Kinetik: Game Technology, Information System, Computer Network, Computing, Electronics, and Control, 9(2), 181-192. https://doi.org/10.22219/kinetik.v9i2.2007


Author Biography

Lailis Syafa'ah, Universitas Muhammadiyah Malang

Scopus Profile:

https://www.scopus.com/authid/detail.uri?authorId=57194553869

Google Scholar Profile:

https://scholar.google.co.id/citations?user=amt_SrgAAAAJ&hl=id

SINTA Profile:

http://sinta2.ristekdikti.go.id/authors/detail?id=5994572&view=overview



 
