The bottom-up integration strategy is the one in which integration begins with the lowest modules in the use hierarchy, i.e., it starts with the modules that do not use any other module in the system, and continues by incrementally adding modules that use already-tested modules. In this way there is no need for stubs, but complex drivers are needed.
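To make the role of a driver concrete, the following is a minimal sketch in Python. The module under test (`count_in_stock`) and the scenario are hypothetical; the point is that the driver stands in for the higher-level callers that have not yet been integrated.

```python
# Hypothetical lowest-level module: it uses no other module in the system,
# so in bottom-up integration it is tested first.
def count_in_stock(inventory, item):
    """Return how many units of `item` the inventory holds."""
    return inventory.get(item, 0)

# Driver: replaces the not-yet-integrated higher-level modules by feeding
# the unit representative inputs and checking its observable behaviour.
def driver():
    inventory = {"bolt": 12, "nut": 0}
    assert count_in_stock(inventory, "bolt") == 12
    assert count_in_stock(inventory, "nut") == 0
    assert count_in_stock(inventory, "washer") == 0  # item not present
    print("driver: all checks passed")

if __name__ == "__main__":
    driver()
```

Once the real calling modules are integrated, the driver is discarded; this throwaway scaffolding is the cost the strategy trades against not needing stubs.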
To avoid the construction of drivers and stubs it is possible to follow the big-bang integration order, where all modules are integrated at once. While avoiding the problem of scaffolding construction, this approach has severe drawbacks. First of all, identification and removal of faults are much more difficult when coping with the entire system instead of with subsystems. In addition, the achieved degree of testing of the code is lower than with the two alternative approaches, where the modules composing the incrementally growing subsystems are tested again at each integration step.
In the threads integration strategy, units are merged according to expected execution threads. Finally, in the critical modules integration strategy, units are merged according to their criticality level, i.e., the most critical units are integrated first.
4.3.3 Functional testing
Functional testing is the testing of the system as a whole. It is characterized by being performed on code that is in general not visible, for reasons of both accessibility and complexity. This kind of test addresses all the properties of the software that cannot be expressed in terms of the properties of its constituent subsystems. At this level the software's behaviour is compared with the behaviour expected according to the specifications. An example of system testing is load testing, which aims at verifying whether the software under test is robust enough to handle a workload larger than expected (Alessandro Orso, 2004).
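A load test of the kind mentioned above can be sketched in a few lines of Python. This is a minimal illustration, not a production load tester: the entry point `handle_request` is a hypothetical stand-in, whereas a real load test would drive the deployed system through its external interface.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical system entry point standing in for the real interface.
def handle_request(payload):
    return {"status": "ok", "echo": payload}

def load_test(n_requests=1000, workers=50):
    """Fire many concurrent requests and verify they all succeed."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(handle_request, range(n_requests)))
    elapsed = time.perf_counter() - start
    failures = [r for r in results if r["status"] != "ok"]
    assert not failures, f"{len(failures)} requests failed under load"
    print(f"{n_requests} requests completed in {elapsed:.2f}s")

if __name__ == "__main__":
    load_test()
```

The essential structure is the same at any scale: generate a load above the expected one, then check that the externally visible behaviour still conforms to the specification.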
Functional testing is much more than just testing. It is also about communication between developers, analysts, and testers. It is about understanding the requirements, the business domain, and your system as a solution addressing business problems. Jim Shore states, “In the same way that test-driven development, when done well, facilitates thinking about design, [functional testing] done well facilitates thinking about the domain. This thinking happens at the requirements level and at the design level” (Jim Shore, 2002).
Ultimately, functional tests become a domain-level language spoken among the various members of the development team. So as you embark on functional tests, be sure to focus on communicating requirements and building up the domain language; in fact, functional testing is an excellent way to start. We would add that service-driven functional testing also facilitates thinking about system architecture: you simply cannot put much logic in your GUI if you have to run your functional tests without the GUI. Functional testing is also very tool-sensitive. If the tools are not up to par in speed and feedback, functional tests lose much of their benefit. Once you have the right tools, you need to know how to use them.
Functional tests should iteratively cover use cases, one thin scenario slice at a time (Jean Whitmore et al., 2008).
4.3.4 Unit testing
Unit testing is just one of the levels of testing which together make up the “big picture” of testing a system. It complements integration and system-level testing. It should also complement (rather than compete with) code reviews and walkthroughs. Unit testing is generally seen as a “white box” test class. That is, it is biased toward examining and evaluating the code as implemented, rather than evaluating conformance to some set of requirements.
For any system of more than trivial complexity, it is highly inefficient and ineffective to test the system solely as a “big black box”. Any attempt to do so quickly gets lost in lots of assumptions and potential interactions. The only viable approach is to perform a hierarchy of tests, with higher-level tests assuming “reasonable and consistent behaviour” by the lower-level components, and separate lower-level tests to demonstrate these assumptions.
Boris Beizer has defined a progression of levels of sophistication in software testing. At the lowest level, testing is considered no different from debugging. At the higher levels, testing becomes a mindset which aims to maximise the system's reliability. His approach stresses that you should “test” in the way that returns the greatest reliability improvement for the resources spent, rather than mindlessly performing some “theoretically neat” collection of tests.
Usually unit testing is primarily focused on the implementation: does the code implement what the designer intended? For each conditional statement, is the condition correct? Do all the special cases work correctly? Are error cases correctly detected?
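The questions above map directly onto individual test cases. The following is a small sketch using Python's standard `unittest` module; the unit under test (`classify_triangle`) is a hypothetical example chosen because it has several conditional branches, special cases, and error cases to exercise.

```python
import unittest

# Hypothetical unit under test: classify a triangle by its side lengths.
def classify_triangle(a, b, c):
    if min(a, b, c) <= 0:
        raise ValueError("sides must be positive")
    if a + b <= c or a + c <= b or b + c <= a:
        raise ValueError("violates the triangle inequality")
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class TestClassifyTriangle(unittest.TestCase):
    def test_each_conditional_branch(self):
        # Is each condition correct? One case per branch outcome.
        self.assertEqual(classify_triangle(3, 3, 3), "equilateral")
        self.assertEqual(classify_triangle(3, 3, 5), "isosceles")
        self.assertEqual(classify_triangle(3, 4, 5), "scalene")

    def test_error_cases_are_detected(self):
        # Are error cases correctly detected?
        with self.assertRaises(ValueError):
            classify_triangle(0, 1, 1)   # non-positive side
        with self.assertRaises(ValueError):
            classify_triangle(1, 2, 10)  # degenerate "triangle"

if __name__ == "__main__":
    unittest.main(exit=False)
```

Each test name records which of the designer's intentions it checks, so a failing test points directly at the conditional or error path that is wrong.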
The required level of formality and the appropriate level of documentation for unit testing vary from project to project,