author | René den Hertog | 2017-10-02 00:08:16 +0200 |
---|---|---|
committer | René den Hertog | 2017-10-02 00:20:31 +0200 |
commit | 7fde6c1dd56878680483ba86566b1009faf9da2d (patch) | |
tree | cb412b35c7406a1dd0b06cbeb9e1717d2438b6fd /assignments | |
parent | Improve the 'Test Goals' section of the 'Test Plan'. (diff) |
Improve the 'Test Method' section of the 'Test Plan'.
Diffstat (limited to 'assignments')
-rw-r--r-- | assignments/assignment1.tex | 52 |
1 file changed, 34 insertions, 18 deletions
diff --git a/assignments/assignment1.tex b/assignments/assignment1.tex
index 639635e..fc601b0 100644
--- a/assignments/assignment1.tex
+++ b/assignments/assignment1.tex
@@ -45,7 +45,6 @@ The SUT is used by providing it with, via the standard input, a text file contai
 As is common with all software, the SUT is likely to contain faults. However, the risks surrounding the program are near zero due to the very low impact of the program faulting. We expect the most ``risky'' scenario for the SUT to be a notable Chess competition. Should the program not behave as intended there, it is almost certain that a human referee would take its place. Furthermore, we expect such a system to be used as a supporting tool for match officials and not as a replacement for regulation.

-\section{Test method}

 \subsection*{Test Goals}

 \paragraph{Specification}
@@ -105,23 +104,6 @@ In order to transform the {\tt chesshs} package into a system with a command lin
 Due to its strong influence on the behavior of the SUT, we will also be testing the wrapper. Nonetheless, testing will still focus mainly on the {\tt chesshs} package and its individual components, as this is the (sub)system of interest. Specifically, we will be testing
 \begin{enumerate}
-	\setcounter{enumi}{7}
-	\item
-	Unit, Integration. The scale and the number of sub-components of the project limits us to these types of tests.
-	\item
-	Mostly Black box testing, for the input/output of entire games in pgn notation.
-	%TODO: Fix sentence
-	The functions of the library will also be tested using a White box model.
-	\item
-	\begin{enumerate}[a)]
-		\item
-		For test generation of the black box testing, we will use a mixture of error guessing and the parsing of PGN files in a database.
-		White box testing will be done through statement coverage and branch/condition coverage.
-		\item
-		Equivalence partitioning and boundary value analysis is hard to implement given our SUT, given that all possible chess games are nearly impossible to enumerate.
-		Use case testing is also very hard to test, given that Chess.hs is a library, no user will likely every interact with the library directly.
-		Path coverage and condition coverage are not implemented in Quickcheck, making it substantially harder to implement.
-	\end{enumerate}
 	\item
 	the wrapper, \label{EI:W}
 	\item
 	PGN parsing, \label{EI:PGNP}
 	\item
 	move legality verification and \label{EI:MLV}
@@ -138,6 +120,40 @@ Functionality is the most relevant quality characteristic of the tests. We would
 White box and gray box testing would look somewhat similar when testing the SUT, because the system is a package. Gray box testing would only look at the components available to other programs, that is, the exported elements. White box testing would also look at every component of the library, including its implementation. Gray box testing would consider every component as a black box; that is, it would only test whether each component adheres to its specification. White box testing would consider every component by its literal code. In conclusion, white box testing would focus on the actual implementation of the library's functionality, whereas gray box testing would only focus on the functionality of the library's exported components without considering any actual code.

+\subsection*{Test Method}
+
+The testing level is mainly the unit level, and somewhat the integration level. The SUT consists of only two components: the wrapper and the {\tt chesshs} package. Both are ``atomic'' enough to be tested via unit testing. The interaction between the two, mostly the wrapper's usage of the library, will be tested via integration testing. Testing at the module, system and acceptance levels is not reasonable for our SUT, as its scale and number of subsystems limit us to the lower testing levels.
+
+\paragraph{Test Generation Techniques}
+
+The majority of our test generation techniques are based upon or similar to black box methods. However, we will also apply some white box testing techniques.
+
+\subparagraph{Black Box Techniques}
+
+The focus of the black box testing is on the SUT in its entirety, that is, the {\tt chesshs} package surrounded by the wrapper. The black box tester will try to find faults in the input-output behavior of the program as described in \S\ref{P:ERSUT}. Here, test generation is based on error guessing and an examples database.
+
+Error guessing is based mostly on our collective experience of developing and testing software, and somewhat on our experience with the game of Chess and with other, similar SUTs.
+
+The examples database consists of a collection of text files describing a game of Chess in (possibly illegal) PGN (the `{\tt .in}' files) together with the expected output of the SUT (the `{\tt .out}' files).
+
+We will manually create test cases of this form, possibly introducing illegal PGN guided by error guessing.
+
+Unfortunately, equivalence partitioning and boundary value analysis are hard to apply to our SUT, as the collection of all possible Chess games is nearly impossible to categorize. Use case testing is also tricky to apply. The wrapper is very limited in its interaction and is designed purely to make the {\tt chesshs} package ``usable'' for testing. Since the core of our SUT is a library, no \emph{end} user is likely to interact with it directly. In summary, applying use case testing to the SUT does not seem reasonable.
+
+\subparagraph{White Box Techniques}
+
+The focus of the white box testing is on the {\tt chesshs} package only, that is, the components of the library. The white box tester will try to find faults in the implementation of the package with regard to its specification. Here, test generation is based on statement coverage and manual path-based coverage.
+
+Statement coverage, in Haskell's case, marks the lines of code that have been executed at least once.
+
+Path-based coverage analysis is not included in the testing tools used. To compensate, we will measure path-based coverage manually: the implementation is transformed by hand into a control flow graph, and we determine how the test cases ``traverse'' the code.
+
+We will continuously adapt the white box test suite until it reaches a sufficient level of coverage, that is, full coverage, unless achieving this is infeasible or unreasonable.
+
+Unfortunately, path, branch and condition coverage are not included in the tools we will be using during the testing. Hence, these test generation techniques are substantially harder to implement and will not be applied during our testing. \\
+
+As every software developer and tester knows, it is very hard, if not impossible, to test a given system completely. Since the risks surrounding our SUT are almost negligible (see \S\ref{P:ERSUT}), the pressure on the testing is low. Nonetheless, we feel the tests as described should provide more than sufficient scrutiny. The black box testing will likely find faults, if any, in the wrapper and in its interaction with the {\tt chesshs} package. The white box testing will likely find errors, if any, in the implementation of the library.
+
 \begin{figure}
 	\centering
 	\begin{tikzpicture}[
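The error-guessing step that produces illegal `{\tt .in}' files, as described in the added ``Black Box Techniques'' text above, can be sketched in Haskell. Everything here is illustrative: the seed game, the function names ({\tt dropChar}, {\tt mutants}) and the mutation strategies are assumptions for the example, not part of the {\tt chesshs} package or the actual test suite.

```haskell
-- Illustrative sketch of error-guessing mutations for the examples
-- database; names and strategies are assumptions, not chesshs API.
import Data.List (inits, tails)

-- A legal PGN move-text fragment used as the mutation seed.
seedGame :: String
seedGame = "1. e4 e5 2. Nf3 Nc6 3. Bb5 a6"

-- Delete each character in turn, yielding slightly malformed inputs.
dropChar :: String -> [String]
dropChar s = [pre ++ post | (pre, _:post) <- zip (inits s) (tails s)]

-- Error-guessed mutants: truncations, single deletions, a duplicated game.
mutants :: String -> [String]
mutants s = takeWhile (not . null) (iterate init s)  -- truncated games
         ++ dropChar s                               -- one-character holes
         ++ [s ++ " " ++ s]                          -- game pasted twice
```

Each mutant would be written to an `{\tt .in}' file and paired with a hand-written `{\tt .out}' file holding the expected (likely error-reporting) output of the SUT.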
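The manual path-based coverage bookkeeping described in the ``White Box Techniques'' text above can likewise be sketched. This is a sketch under stated assumptions: the node numbering, the {\tt CFG} representation and the function names are ours, and the hand-built graph is assumed acyclic (loops would be unrolled a bounded number of times by hand).

```haskell
-- Illustrative path-based coverage over a hand-built control flow graph.
type CFG = [(Int, [Int])]  -- node -> successor nodes; [] marks an exit

succsOf :: CFG -> Int -> [Int]
succsOf g n = concat [ss | (m, ss) <- g, m == n]

-- All entry-to-exit paths of an acyclic graph.
paths :: CFG -> Int -> [[Int]]
paths g n = case succsOf g n of
  []    -> [[n]]
  succs -> [n : p | s <- succs, p <- paths g s]

-- (covered, total) paths, given the traces recorded for the test cases.
pathCoverage :: CFG -> Int -> [[Int]] -> (Int, Int)
pathCoverage g entry traces =
  let ps = paths g entry
  in (length (filter (`elem` traces) ps), length ps)

-- A small diamond-shaped graph: an if/else joining again at node 3.
diamond :: CFG
diamond = [(0, [1, 2]), (1, [3]), (2, [3]), (3, [])]
```

For the diamond, a test suite whose only recorded trace is {\tt [0,1,3]} covers one of the two paths, signalling that the else-branch path still needs a test case.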