Definition of terms
===================

This section defines the terms used in this document and correlates them with
what is currently used in QEMU.

Automated tests
---------------

An automated test is written using a test framework and its generic test
functions/classes. The test framework can run the tests and report their
success or failure [1]_.

An automated test has essentially three parts, illustrated by the sketch
after this list:

1. The initialization, where the test parameters, like inputs and expected
   results, are set up;
2. The call to the code that should be tested;
3. An assertion, comparing the result from the previous call with the expected
   result set during the initialization of the parameters. If the result
   matches the expected result, the test has been successful; otherwise, it has
   failed.
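As a concrete illustration, here is a minimal sketch of such a test written
with glib's GTest framework, which QEMU's unit tests are built on. The
``add_positive`` function and the test path are hypothetical examples for
illustration, not QEMU code:

.. code-block:: c

   #include <glib.h>

   /* Hypothetical code under test. */
   static int add_positive(int a, int b)
   {
       return (a < 0 || b < 0) ? -1 : a + b;
   }

   static void test_add_positive(void)
   {
       /* 1. Initialization: set up the inputs and the expected result. */
       int a = 2, b = 3, expected = 5;

       /* 2. Call the code under test. */
       int result = add_positive(a, b);

       /* 3. Assertion: compare the actual result with the expected one. */
       g_assert_cmpint(result, ==, expected);
   }

   int main(int argc, char **argv)
   {
       g_test_init(&argc, &argv, NULL);
       g_test_add_func("/example/add-positive", test_add_positive);
       return g_test_run();
   }

The framework registers the test case under a path, runs it, and reports its
success or failure, matching the three parts enumerated above.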
Unit testing
------------

A unit test is responsible for exercising individual software components as a
unit, like interfaces, data structures, and functionality, uncovering errors
within the boundaries of a component. The verification effort is in the
smallest software unit and focuses on the internal processing logic and data
structures. A unit test case should be designed to uncover errors due to
erroneous computations, incorrect comparisons, or improper control flow [2]_.

In QEMU, unit testing is represented by the ``check-unit`` target of ``make``.

Functional testing
------------------

A functional test focuses on the functional requirements of the software.
Deriving sets of input conditions, the functional tests should fully exercise
all the functional requirements of a program. Functional testing is
complementary to other testing techniques, attempting to find errors like
incorrect or missing functions, interface errors, behavior errors, and
initialization and termination errors [3]_.

In QEMU, functional testing is represented by the ``check-qtest`` target of
``make``.

System testing
--------------

System tests ensure all application elements mesh properly while the overall
functionality and performance are achieved [4]_. Some or all system components
are integrated to create a complete system to be tested as a whole. System
testing ensures that components are compatible, interact correctly, and
transfer the right data at the right time across their interfaces. As system
testing focuses on interactions, use-case-based testing is a practical approach
to system testing [5]_. Note that, in some cases, system testing may require
interaction with third-party software, like operating system images, databases,
networks, and so on.

In QEMU, system testing is represented by the ``check-acceptance`` target of
``make``.

Flaky tests
-----------

A flaky test is defined as a test that exhibits both a passing and a failing
result with the same code on different runs. Some usual reasons for an
intermittent/flaky test are async waits, concurrency, and test order dependency
[6]_.

Gating
------

A gate restricts the movement of code from one stage of a test/deployment
pipeline to the next. Passage through a gate requires approval, which can be a
manual intervention or a set of tests succeeding [7]_.

In QEMU, the gating process happens during the pull request. The approval is
done by the project leader, who runs their own set of tests. The pull request
is merged when the tests succeed.

Continuous Integration (CI)
---------------------------

Continuous integration (CI) requires the entire application to be built and a
comprehensive set of automated tests to be executed every time a set of changes
needs to be committed [8]_. The automated tests can be composed of unit,
functional, system, and other tests.

Key points about continuous integration (CI) [9]_:

1. System tests may depend on external software (operating system images,
   firmware, databases, networks).
2. Building and testing may take a long time, so it may be impractical to
   build the system being developed several times per day.
3. If the development platform is different from the target platform, it may
   not be possible to run system tests in the developer's private workspace.
   There may be differences in hardware, operating system, or installed
   software. Therefore, more time is required for testing the system.

References
----------

.. [1] Sommerville, Ian (2016). Software Engineering. p. 233.
.. [2] Pressman, Roger S. & Maxim, Bruce R. (2020). Software Engineering,
       A Practitioner's Approach. pp. 48, 376, 378, 381.
.. [3] Pressman, Roger S. & Maxim, Bruce R. (2020). Software Engineering,
       A Practitioner's Approach. p. 388.
.. [4] Pressman, Roger S. & Maxim, Bruce R. (2020). Software Engineering,
       A Practitioner's Approach. p. 377.
.. [5] Sommerville, Ian (2016). Software Engineering. pp. 59, 232, 240.
.. [6] Luo, Qingzhou, et al. (2014). An Empirical Analysis of Flaky Tests.
       Proceedings of the 22nd ACM SIGSOFT International Symposium on
       Foundations of Software Engineering.
.. [7] Humble, Jez & Farley, David (2010). Continuous Delivery: Reliable
       Software Releases Through Build, Test, and Deployment Automation.
       p. 122.
.. [8] Humble, Jez & Farley, David (2010). Continuous Delivery: Reliable
       Software Releases Through Build, Test, and Deployment Automation.
       p. 55.
.. [9] Sommerville, Ian (2016). Software Engineering. p. 743.