
Optimization of Genetic Algorithm Performance using Naïve Bayes for Basis Path Generation

Achmad Arwan, Denny Sagita Rusdianto

Abstract

Basis path testing is a method that can be used to find code defects. The independent paths required for basis path testing can be generated with a Genetic Algorithm. This approach has a weakness: the number of iterations affects which basis paths appear. When the iteration count is low, not all paths are found; with a high iteration count all paths do appear, but beyond a certain point further iterations no longer change the result. This study aims to optimize the performance of the Genetic Algorithm for independent path generation by determining the iteration count that matches the characteristics of the code. The code characteristics used are Node, Edge, VG (cyclomatic complexity), NBD (nested block depth), and LOC (lines of code). Naïve Bayes is used to predict the appropriate number of iterations, with 17 selected code samples used as training data and 16 as test data. The accuracy test shows that the system correctly predicts the iteration count for 93.75% of the 16 test cases. Timing results show that the new system completes the independent path search 15% faster than the old system.
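To illustrate the idea described above, the following minimal sketch (not taken from the paper) shows how a Naïve Bayes classifier could map the five code characteristics to a predicted iteration count for the Genetic Algorithm. It is written in Python with scikit-learn rather than the WEKA toolkit cited by the authors, and the metric values and iteration labels are hypothetical placeholders.

# Minimal sketch: predicting a GA iteration budget from static code metrics
# with Gaussian Naive Bayes. Values below are hypothetical, for illustration only.
from sklearn.naive_bayes import GaussianNB

# Features per method: [nodes, edges, cyclomatic complexity (VG),
#                       nested block depth (NBD), lines of code (LOC)]
X_train = [
    [ 5,  5, 2, 1, 10],
    [ 9, 11, 4, 2, 25],
    [14, 18, 6, 3, 40],
]
# Target: iteration count judged sufficient for the Genetic Algorithm
# to produce all independent paths for that method.
y_train = [50, 100, 200]

model = GaussianNB()
model.fit(X_train, y_train)

# Predict the iteration budget for a new method's metrics before
# running the genetic algorithm for basis path generation.
print(model.predict([[10, 13, 5, 2, 30]]))  # e.g. -> [100]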

Keywords

Basis Path Testing, Genetic Algorithm, Naïve Bayes




