Improving the Load Balancing Mechanism for Cluster Computing under Dynamic Conditions
Corresponding Author(s) : Mohammad Zarkasi
Kinetik: Game Technology, Information System, Computer Network, Computing, Electronics, and Control,
Vol 2, No 2, May-2017
Abstract
A common problem in cluster computing environments is load imbalance, which can degrade Quality of Service (QoS). A reliable load balancing method is therefore needed to distribute the load. In some cases, dynamic load balancing methods fail to act as an optimal load balancing method when the deployment environment differs from the environment assumed when the method was developed. This study proposes an adaptive load balancing method based on a reinforcement learning (RL) algorithm that can adapt to changes in its environment. Test results show that the proposed load balancing method performs more efficiently than a dynamic load balancing method, improving total execution time by 45% when the environment carries no external load and by 21% when the environment experiences an imbalanced load.
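The abstract describes the approach only at a high level (an RL-based dispatcher that adapts to changing node load), so the sketch below is an illustration rather than the paper's method: a minimal epsilon-greedy action-value learner that chooses which cluster node receives each task and updates its estimates from observed execution times. The class name RLLoadBalancer, the node identifiers, and all parameter values are hypothetical assumptions, not taken from the paper.

```python
import random
from collections import defaultdict

class RLLoadBalancer:
    """Minimal sketch (assumed design, not the paper's implementation):
    the 'action' is choosing which cluster node receives the next task,
    and the reward is derived from the observed execution time on that
    node (shorter is better)."""

    def __init__(self, nodes, epsilon=0.1, alpha=0.5):
        self.nodes = list(nodes)          # hypothetical node identifiers
        self.epsilon = epsilon            # exploration rate
        self.alpha = alpha                # learning rate (step size)
        self.value = defaultdict(float)   # estimated value of dispatching to each node

    def pick_node(self):
        # Explore occasionally so the balancer keeps tracking environment changes.
        if random.random() < self.epsilon:
            return random.choice(self.nodes)
        return max(self.nodes, key=lambda n: self.value[n])

    def update(self, node, exec_time):
        # Faster completion -> higher reward; the incremental update lets the
        # estimate follow node load as it drifts over time.
        reward = -exec_time
        self.value[node] += self.alpha * (reward - self.value[node])


if __name__ == "__main__":
    balancer = RLLoadBalancer(["node-1", "node-2", "node-3"])
    # Simulated dispatch loop: node-2 is artificially slower (imbalanced load).
    slowdown = {"node-1": 1.0, "node-2": 2.5, "node-3": 1.2}
    for _ in range(200):
        node = balancer.pick_node()
        exec_time = slowdown[node] * random.uniform(0.8, 1.2)
        balancer.update(node, exec_time)
    print({n: round(balancer.value[n], 2) for n in balancer.nodes})
```

In this toy run the learned values converge toward the negated mean execution times, so the slower node is selected less often while the epsilon-greedy exploration keeps probing it in case its load changes, which is the adaptive behavior the abstract attributes to the RL-based method.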
Keywords