\documentclass[british]{scrartcl}

\usepackage[british]{babel}
\usepackage{csquotes}
\usepackage{enumerate}
\usepackage[hidelinks]{hyperref}
\usepackage{graphicx}
\usepackage{caption}
\usepackage{cleveref}
\usepackage{subfig}

\let\oldurl\url
\def\url#1{{\small\oldurl{#1}}}

\title{Testing Techniques}
\subtitle{Assignment 2: Automata Learning}
\author{Ren\'e den Hertog \and Camil Staps \and Erin van der Veen}
% Remove the date from '\maketitle'.
\makeatletter
	\patchcmd
		{\@maketitle}
		{{\usekomafont{date}{\@date\par}}\vskip\z@\@plus 1em}
		{}
		{}
		{}
\makeatother

\begin{document}

\maketitle

\section{Chocolate Bar Machine}
\begin{enumerate}
	\item
		L* builds a large, well-supported hypothesis model from the beginning.
		This means that it needs comparatively few equivalence queries,
			so it works well even with an inefficient equivalence oracle.
		Rivest-Schapire and TTT, on the other hand, create small intermediate hypotheses,
			which can perform better with a fast equivalence oracle,
			because these algorithms spend fewer membership queries on building each hypothesis.
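
		To make this concrete, a standard back-of-the-envelope comparison
			(textbook query complexities, not measurements from our runs):
			when L* processes a counterexample $\pi$,
			it adds all prefixes of $\pi$ as rows to the observation table,
			costing on the order of $|\pi| \cdot |\Sigma| \cdot |E|$ extra membership queries
			(with $\Sigma$ the input alphabet and $E$ the suffix set),
			whereas Rivest-Schapire locates a single distinguishing suffix of $\pi$
			by binary search, using only $O(\log |\pi|)$ membership queries.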

	\item
		We used L* with the following counterexamples:

		\begin{itemize}
			\item 10ct 10ct 5ct snickers
			\item 10ct 10ct 10ct 10ct 10ct snickers snickers
			\item 10ct 10ct 10ct snickers 10ct twix
			\item 10ct 10ct 10ct 5ct snickers twix
			\item 10ct 10ct 5ct 10ct snickers mars
			\item 10ct 10ct 5ct 10ct 10ct mars mars snickers
		\end{itemize}

		The hypotheses are given in \cref{fig:lstar-run} (p.~\pageref{fig:lstar-run}).

	\item
		States are in principle uniquely identified by the amount of money inserted into the machine.
		The machine does not accept more than 45ct, and the granularity is 5ct.
		Hence, there are $\frac{45}{5}+1=10$ such states.
		However, there are two states in which we have 35ct: one in which we can still insert money
			(reached through \enquote{10ct 10ct 5ct 10ct})
			and one in which we cannot (e.g.\ \enquote{10ct 10ct 10ct 5ct}).
		Hence, the complete model has 11 states.

		The learned model also contains 11 states.
		Since the hypothesis is by construction consistent with all observed behaviour
			and no counterexample distinguishing it from the SUT was found,
			the two models must be equivalent.
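
		As a sanity check on this count, a minimal self-contained sketch in Java
			(it merely replays the counting argument above;
			the transition structure of the real SUT is not modelled):

\begin{verbatim}
import java.util.ArrayList;
import java.util.List;

public class ChocolateStateCount {
    public static void main(String[] args) {
        int cap = 45, coin = 5; // maximum credit and coin granularity, in ct
        List<String> states = new ArrayList<>();
        for (int credit = 0; credit <= cap; credit += coin)
            states.add(credit + "ct");
        // The second 35ct state, in which no further coins are accepted:
        states.add("35ct'");
        System.out.println(states.size() + " states: " + states);
    }
}
\end{verbatim}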

	\item
		A counterexample $\pi$ consists of a prefix $\sigma$, an input $u$ at which an inconsistency appears,
			and a suffix $\rho$ exhibiting the inconsistency:
			$\pi = \sigma u \rho$.
		However, a given sequence $\pi$ may admit distinct decompositions
			$\pi = \sigma_1 u_1 \rho_1$ and $\pi = \sigma_2 u_2 \rho_2$,
			both of which help to improve the hypothesis.
		If on the first run we add $\rho_1$ to the suffixes of our table,
			the decomposition $\sigma_2 u_2 \rho_2$ may not be covered yet,
			and we may be able to add $\pi$ a second time.
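
		For instance (a schematic illustration, not taken from our run):
			for $\pi = abb$ over the alphabet $\{a, b\}$, both
			\[
				\sigma_1 = \varepsilon,\ u_1 = a,\ \rho_1 = bb
				\qquad\mbox{and}\qquad
				\sigma_2 = ab,\ u_2 = b,\ \rho_2 = \varepsilon
			\]
			are decompositions of $\pi$;
			adding $\rho_1 = bb$ to the suffixes need not resolve the inconsistency
			witnessed by the second decomposition,
			so $\pi$ can be supplied as a counterexample again.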

	\item
		We use Rivest-Schapire.

	\item
		The models found are identical up to the naming of states.
		This is because, as argued in the answer to question 3, the SUT has 11 states
			and all testing methods come up with a model of 11 states.

		With a random walk we needed only 56s,
			whereas the W-method took 15154s and the Wp-method 5956s.
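
		The size difference is in line with the standard constructions of these test suites
			(textbook complexity, not something we measured separately):
			the W-method executes the suite
			\[
				T_W = P \cdot \Sigma^{\leq k} \cdot W,
			\]
			where $P$ is a transition cover of the hypothesis,
			$W$ a characterisation set,
			$\Sigma^{\leq k}$ the set of all input words of length at most $k$,
			and $k$ a bound on the number of extra states in the SUT.
		The Wp-method shrinks this suite by replacing the full set $W$
			with smaller state-specific identification sets in part of the tests,
			and a random walk simply samples input sequences until a discrepancy appears,
			which on a model this small happens almost immediately.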

	\item
		The testing method with user queries worked very efficiently in combination with L*.
		It allows the user to insert cleverly chosen counterexamples,
			but does not query the user as often as the other learning methods would.
		However, we realise that for more complex systems it may be difficult for the user to provide counterexamples.

		As for the testing method,
			on our small example a random walk appears to be by far the fastest.
		However, as noted in the source code (\texttt{BasicLearner.java}),
			it may perform worse than the others on large models.

\end{enumerate}

\begin{figure}[p]
	\includegraphics[width=.5\textwidth]{LStar_hypothesis0}
	\includegraphics[width=.5\textwidth]{LStar_hypothesis1}
	\includegraphics[width=.5\textwidth]{LStar_hypothesis2}
	\includegraphics[width=.5\textwidth]{LStar_hypothesis3}
	\includegraphics[width=.5\textwidth]{LStar_hypothesis4}
	\includegraphics[width=.5\textwidth]{LStar_hypothesis5}
	\caption{Learning the chocolate bar machine with L* (left to right, top to bottom). Continued on the next page.\label{fig:lstar-run}}
\end{figure}
\begin{figure}[p]
	\ContinuedFloat
	\includegraphics[width=\textwidth]{LStar_hypothesis6}
	\caption[]{Continued from the previous page: learning the chocolate bar machine with L*; the final learned model.}
\end{figure}
\thispagestyle{empty}

\section{Bounded Retransmission Protocol}

\end{document}