
Issue

Vol. 9, No. 4, November 2024

Issue Published: Nov 1, 2024

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Predicting the Sentiment of Review Aspects in the Peer Review Text using Machine Learning

https://doi.org/10.22219/kinetik.v9i4.2042
Setio Basuki
Universitas Muhammadiyah Malang
Zamah Sari
Universitas Muhammadiyah Malang
Masatoshi Tsuchiya
Toyohashi University of Technology
Rizky Indrabayu
Universitas Muhammadiyah Malang

Corresponding Author(s): Setio Basuki

setio_basuki@umm.ac.id

Kinetik: Game Technology, Information System, Computer Network, Computing, Electronics, and Control, Vol. 9, No. 4, November 2024
Article Published: Nov 1, 2024

Abstract

This paper develops a Machine Learning (ML) model to classify the sentiment of review aspects in peer review text. During the review process, reviewers use review aspects as indicators of paper quality, such as motivation, originality, clarity, soundness, substance, replicability, meaningful comparison, and summary. The proposed model addresses criticisms of the existing peer review process, including the high volume of submitted papers, the limited number of reviewers, and reviewer bias. This paper uses citation functions, which represent an author's motivation for citing previous research, as the main predictor. Specifically, the predictor comprises citing sentence features representing the citation function scheme, regular sentence features representing the same scheme applied to non-citation sentences, and reference-based features representing the source of the citation. This paper utilizes a dataset of papers from the International Conference on Learning Representations (ICLR) 2017-2020, which includes sentiment values (positive or negative) for all review aspects. Our experiments combining XGBoost, oversampling, and hyper-parameter optimization revealed that not all review aspects can be estimated effectively by the ML model. The highest result was achieved when predicting Replicability sentiment, with 97.74% accuracy. The model also achieved accuracies of 94.03% for Motivation and 93.93% for Meaningful Comparison. However, it was less effective on Originality and Substance (85.21% and 79.94%) and performed least effectively on Clarity and Soundness, with accuracies of 61.22% and 61.11%, respectively. The combined predictor was the best for five of the review aspects, while the other two aspects were estimated effectively by the regular sentence and reference-based predictors.
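
To make the pipeline concrete, the sketch below shows one way such an aspect-level sentiment classifier could be assembled in Python with XGBoost, oversampling, and a hyper-parameter search. It is an illustration only, not the authors' implementation: the feature matrix stands in for the citation-function predictors (citing sentence, regular sentence, and reference-based features) and is filled with synthetic data, and the parameter grid is an assumed, reduced search space.

import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import accuracy_score
from imblearn.over_sampling import RandomOverSampler
from xgboost import XGBClassifier

# Synthetic stand-in for the citation-function feature matrix and for one review
# aspect's positive/negative sentiment labels, deliberately imbalanced.
rng = np.random.default_rng(0)
X = rng.random((1000, 20))
y = (rng.random(1000) < 0.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Oversample the minority sentiment class on the training split only, so the
# test split keeps its original class balance.
X_res, y_res = RandomOverSampler(random_state=0).fit_resample(X_train, y_train)

# Assumed, reduced hyper-parameter grid; the actual search space is not specified here.
param_grid = {
    "n_estimators": [200, 400],
    "max_depth": [3, 6],
    "learning_rate": [0.05, 0.1],
}
search = GridSearchCV(
    XGBClassifier(eval_metric="logloss"),
    param_grid,
    cv=5,
    scoring="accuracy",
)
search.fit(X_res, y_res)

# Accuracy on the untouched test split, analogous to the per-aspect accuracies reported above.
y_pred = search.best_estimator_.predict(X_test)
print("Best parameters:", search.best_params_)
print(f"Test accuracy: {accuracy_score(y_test, y_pred):.4f}")

In practice, one such model would be trained separately for each review aspect, and the choice among citing sentence, regular sentence, reference-based, or combined feature sets would be made per aspect, in line with the results summarized above.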

Keywords

Citation Functions; Paper Quality; Peer Review; Review Aspects; Sentiment Analysis

Cite

Basuki, S., Sari, Z., Tsuchiya, M., & Indrabayu, R. (2024). Predicting the Sentiment of Review Aspects in the Peer Review Text using Machine Learning. Kinetik: Game Technology, Information System, Computer Network, Computing, Electronics, and Control, 9(4). https://doi.org/10.22219/kinetik.v9i4.2042
References
  1. F. Rowland, “The peer-review,” Learn. Publ., vol. 15, no. 4, pp. 247–258, 2002. https://doi.org/10.1087/095315102760319206
  2. R. Johnson, A. Watkinson, and M. Mabe, “The STM Report - An overview of scientific and scholarly publishing.” Oct. 2018.
  3. A. Checco, L. Bracciale, P. Loreti, S. Pinfield, and G. Bianchi, “AI-assisted peer review,” Humanit Soc Sci Commun, vol. 8, no. 1, Dec. 2021. https://doi.org/10.1057/s41599-020-00703-8
  4. M. Jubb, “Peer review: The current landscape and future trends,” Learn. Publ., vol. 29, no. 1, pp. 13–21, 2016. https://doi.org/10.1002/leap.1008
  5. Z. Tong, Y. Huan, S. Lei, W. Jing, and X. Daojia, “Application and classification of artificial intelligence-assisted academic peer review,” Chinese J. Sci. Tech. Periodicals, vol. 32, no. 1, pp. 65–74, 2021. https://doi.org/10.11946/cjstp.201911220799
  6. J. P. Tennant, “The state of the art in peer review,” FEMS Microbiol Lett, vol. 365, no. 19, pp. 1–10, 2018. https://doi.org/10.1093/femsle/fny204
  7. S. Schroter, N. Black, S. Evans, J. Carpenter, F. Godlee, and R. Smith, “Effects of training on quality of peer review: Randomised controlled trial,” Br. Med. J., vol. 328, no. 7441, pp. 673–675, Mar. 2004. https://doi.org/10.1136/bmj.38023.700775.AE
  8. C. A. Pierson, “Peer review and journal quality,” J. Am. Assoc. Nurse Pract., vol. 30, no. 1, pp. 1–2, Jan. 2018. https://doi.org/10.1097/jxx.0000000000000018
  9. D. Moher and others, “Core competencies for scientific editors of biomedical journals: Consensus statement,” BMC Med, vol. 15, no. 1, Sep. 2017. https://doi.org/10.1186/s12916-017-0927-0
  10. S. Jana, “A history and development of peer-review process,” Ann. Libr. Inf. Stud., vol. 66, no. 4, pp. 152–162, 2019.
  11. K. Wang and X. Wan, “Sentiment analysis of peer review texts for scholarly papers,” in 41st International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2018, Ann Arbor Michigan, U. S. A.: Association for Computing Machinery, 2018, pp. 175–184. https://doi.org/10.1145/3209978.3210056
  12. A. J. Casey, B. Webber, and D. Glowacka, “Can models of author intention support quality assessment of content?,” in Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries (BIRNDL 2019), Paris, France: CEUR Workshop Proceedings (CEUR-WS.org), 2019, pp. 92–99.
  13. P. Fytas, G. Rizos, and L. Specia, “What Makes a Scientific Paper be Accepted for Publication?,” in Proceedings of the First Workshop on Causal Inference and NLP, Punta Cana, Dominican Republic: Association for Computational Linguistics, 2021, pp. 44–60. https://doi.org/10.18653/v1/2021.cinlp-1.4
  14. A. C. Ribeiro, A. Sizo, H. L. Cardoso, and L. P. Reis, “Acceptance Decision Prediction in Peer-Review Through Sentiment Analysis,” in EPIA 2021: Progress in Artificial Intelligence, Online, Cham: Springer International Publishing, 2021, pp. 766–777. https://doi.org/10.1007/978-3-030-86230-5_60
  15. T. Ghosal, R. Verma, A. Ekbal, and P. Bhattacharyya, “DeepSentipeer: Harnessing sentiment in review texts to recommend peer review decisions,” in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy: Association for Computational Linguistics, 2019, pp. 1120–1130. https://doi.org/10.18653/v1/P19-1106
  16. W. Jen and M. Chen, “Predicting Conference Paper Acceptance.” 2018.
  17. A. Ghosh, N. Pande, R. Goel, R. Mujumdar, and S. S. Sistla, “Prediction, Conference Paper Acceptance (Acceptometer).” 2020.
  18. M. Skorikov and S. Momen, “Machine learning approach to predicting the acceptance of academic papers,” in The 2020 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT), IEEE, 2020, pp. 113–117. https://doi.org/10.1109/IAICT50021.2020.9172011
  19. G. M. de Buy Wenniger, T. van Dongen, E. Aedmaa, H. T. Kruitbosch, E. A. Valentijn, and L. Schomaker, “Structure-tags improve text classification for scholarly document quality prediction,” in Proceedings of the First Workshop on Scholarly Document Processing, Online, 2020, pp. 158–167. https://doi.org/10.18653/v1/2020.sdp-1.18
  20. A. Ciloglu and M. Merdan, “Big Peer Review Challenge.” 2022.
  21. D. J. Joshi, A. Kulkarni, R. Pande, I. Kulkarni, S. Patil, and N. Saini, “Conference Paper Acceptance Prediction: Using Machine Learning,” in Machine Learning and Information Processing, Singapore: Springer, 2021, pp. 143–152. https://doi.org/10.1007/978-981-33-4859-2_14
  22. P. Vincent-Lamarre and V. Larivière, “Textual analysis of artificial intelligence manuscripts reveals features associated with peer review outcome,” Quant. Sci. Stud., vol. 2, no. 2, pp. 662–677, 2021. https://doi.org/10.1162/qss_a_00125
  23. P. Bao, W. Hong, and X. Li, “Predicting Paper Acceptance via Interpretable Decision Sets,” in The Web Conference 2021 - Companion of the World Wide Web Conference (WWW 2021), Ljubljana, Slovenia: Association for Computing Machinery (ACM), 2021, pp. 461–467. https://doi.org/10.1145/3442442.3451370
  24. P. K. Bharti, S. Ranjan, T. Ghosal, M. Agrawal, and A. Ekbal, “PEERAssist: Leveraging on Paper-Review Interactions to Predict Peer Review Decisions,” in International Conference on Asian Digital Libraries, Springer International Publishing, 2021, pp. 421–435. https://doi.org/10.1007/978-3-030-91669-5_33
  25. T. Pradhan, C. Bhatia, P. Kumar, and S. Pal, “A deep neural architecture based meta-review generation and final decision prediction of a scholarly article,” Neurocomputing, vol. 428, pp. 218–238, 2021. https://doi.org/10.1016/j.neucom.2020.11.004
  26. R. L. Kravitz, P. Franks, M. D. Feldman, M. Gerrity, C. Byrne, and W. M. Tierney, “Editorial peer reviewers’ recommendations at a general medical journal: Are they reliable and do editors care?,” PLoS One, vol. 5, no. 4, pp. 2–6, 2010. https://doi.org/10.1371/journal.pone.0010072
  27. S. Chakraborty, P. Goyal, and A. Mukherjee, “Aspect-based sentiment analysis of scientific reviews,” in Proceedings of the ACM/IEEE Joint Conference on Digital Libraries, 2020, pp. 207–216. https://doi.org/10.1145/3383583.3398541
  28. Z. J. Beasley, “Sentiment Analysis in Peer Review.” 2020.
  29. M. Meng, R. Han, J. Zhong, H. Zhou, and C. Zhang, “Aspect-based sentiment analysis of online peer reviews and prediction of paper acceptance results,” DATA Sci. Inf., vol. 3, no. 1, 2023. https://doi.org/10.59494/dsi.2023.1.4
  30. S. Kumar, H. Arora, T. Ghosal, and A. Ekbal, “DeepASPeer: Towards an aspect-level sentiment controllable framework for decision prediction from academic peer reviews,” in Proceedings of the ACM/IEEE Joint Conference on Digital Libraries, Institute of Electrical and Electronics Engineers Inc., Jun. 2022. https://doi.org/10.1145/3529372.3530937
  31. H. Arora, K. Shinde, and T. Ghosal, “Deciphering the Reviewer’s Aspectual Perspective: A Joint Multitask Framework for Aspect and Sentiment Extraction from Scholarly Peer Reviews,” in Proceedings of the ACM/IEEE Joint Conference on Digital Libraries, Institute of Electrical and Electronics Engineers Inc., 2023, pp. 35–46. https://doi.org/10.1109/JCDL57899.2023.00015
  32. S. Basuki and M. Tsuchiya, “SDCF: semi-automatically structured dataset of citation functions,” Scientometrics, vol. 127, no. 8, pp. 4569–4608, Aug. 2022. https://doi.org/10.1007/s11192-022-04471-x
  33. K. L. Lin and S. X. Sui, “Citation Functions in the Opening Phase of Research Articles: A Corpus-based Comparative Study,” in Corpus-based Approaches to Grammar, Media and Health Discourses (The M. A. K. Halliday Library Functional Linguistics Series), Singapore: Springer, 2020, pp. 233–250. https://doi.org/10.1007/978-981-15-4771-3_10
  34. F. Qayyum and M. T. Afzal, “Identification of important citations by exploiting research articles’ metadata and cue-terms from content,” Scientometrics, vol. 118, pp. 21–43, 2018. https://doi.org/10.1007/s11192-018-2961-x
  35. I. Tahamtan and L. Bornmann, “What do citation counts measure? An updated review of studies on citations in scientific documents published between 2006 and 2018,” Scientometrics, vol. 121, pp. 1635–1684, 2019. https://doi.org/10.1007/s11192-019-03243-4
  36. S. Basuki and M. Tsuchiya, “The Quality Assist: A Technology-Assisted Peer Review Based on Citation Functions to Predict the Paper Quality,” IEEE Access, vol. 10, pp. 126815–126831, 2022. https://doi.org/10.1109/ACCESS.2022.3225871
  37. W. Yuan, P. Liu, and G. Neubig, “Can We Automate Scientific Reviewing?,” J. Artif. Intell. Res., vol. 75, pp. 171–212, 2022. https://doi.org/10.1613/jair.1.12862
  38. R. Shwartz-Ziv and A. Armon, “Tabular data: Deep learning is not all you need,” Inf. Fusion, vol. 81, pp. 84–90, 2022. https://doi.org/10.1016/j.inffus.2021.11.011