\documentclass[british]{scrartcl}

\usepackage[british]{babel}
\usepackage{csquotes}
\usepackage{enumerate}
\usepackage[hidelinks]{hyperref}
\usepackage{graphicx}
\usepackage{caption}
\usepackage{cleveref}

\let\oldurl\url
\def\url#1{{\small\oldurl{#1}}}

\title{Testing Techniques}
\subtitle{Assignment 2: Automata Learning}
\author{Ren\'e den Hertog \and Camil Staps \and Erin van der Veen}
% Remove the date from '\maketitle'.
\makeatletter
	\patchcmd
		{\@maketitle}
		{{\usekomafont{date}{\@date\par}}\vskip\z@\@plus 1em}
		{}
		{}
		{}
\makeatother

\begin{document}

\maketitle

\section{Chocolate Bar Machine}
\begin{enumerate}
	\item
		L* builds a large, well-founded hypothesis model from the beginning,
			which makes it work comparatively well with slow (inefficient) equivalence oracles:
			because each hypothesis is already well-founded, fewer equivalence queries are needed.
		Rivest-Schapire and TTT, on the other hand, create small intermediate models
			and spend less time on building a well-founded hypothesis,
			so they can perform better with a fast equivalence oracle,
			where many cheap refinement rounds are affordable.
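
		To make this concrete, here is a rough count (our own back-of-the-envelope figure, not from the assignment)
			of the membership queries triggered by a single counterexample of length $m$
			against an observation table with suffix set $E$:
		\[
			\underbrace{m \cdot |E|}_{\text{L*: all prefixes added as rows}}
			\quad\text{versus}\quad
			\underbrace{\lceil \log_2 m \rceil}_{\text{Rivest-Schapire: binary search for one suffix}}
		\]
		Per counterexample, Rivest-Schapire is thus far cheaper in membership queries,
			but its smaller refinement steps typically produce more intermediate hypotheses
			and hence more equivalence queries,
			which is affordable only when the equivalence oracle is fast.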

	\item
		We used L* with the following counterexamples:

		\begin{itemize}
			\item 10ct 10ct 5ct snickers
			\item 10ct 10ct 10ct 10ct 10ct snickers snickers
			\item 10ct 10ct 10ct snickers 10ct twix
			\item 10ct 10ct 10ct 5ct snickers twix
		\end{itemize}

		The hypotheses are given in \cref{fig:lstar-run} (p.~\pageref{fig:lstar-run}).

		\begin{figure}[p]
			\includegraphics[width=.5\textwidth]{LStar_hypothesis0}
			\includegraphics[width=.5\textwidth]{LStar_hypothesis1}
			\includegraphics[width=.5\textwidth]{LStar_hypothesis2}
			\includegraphics[width=.5\textwidth]{LStar_hypothesis3}
			\caption{Learning the chocolate bar machine with L* (top to bottom, left to right).\label{fig:lstar-run}}
		\end{figure}

	\item
		States are uniquely identified by the amount of money inserted into the machine.
		The machine does not accept more than 40ct, and the granularity is 5ct.
		Hence, there are $\frac{40}{5}+1=9$ states.
		The learned model also contains 9 states and agrees with the machine on all observations,
			therefore the two must be equivalent.
		The sketch below enumerates the reachable amounts.
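
		As a sanity check, the reachable amounts can be enumerated directly.
		The following is a minimal Python model of the credit component only;
			the coin values come from the input alphabet used above,
			and the exact behaviour on a refused coin is an assumption made for illustration.

\begin{verbatim}
# Credit component of the chocolate bar machine: states are the
# possible amounts inserted.  Coin values (5ct, 10ct) are taken from
# the input alphabet; refusing coins past the 40ct cap is an
# illustrative assumption.
CAP, COINS = 40, (5, 10)

states, frontier = {0}, [0]
while frontier:
    credit = frontier.pop()
    for coin in COINS:
        nxt = credit + coin
        if nxt <= CAP and nxt not in states:  # over-cap coins refused
            states.add(nxt)
            frontier.append(nxt)

print(sorted(states))  # [0, 5, 10, 15, 20, 25, 30, 35, 40]
print(len(states))     # 9, matching 40/5 + 1
\end{verbatim}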

	\item
		A counterexample $\pi$ consists of a prefix, an input at which an inconsistency appears, and a suffix exposing the inconsistency:
			$\pi = \sigma u \rho$.
		However, for a given sequence $\pi$ there may be distinct decompositions
			$\pi = \sigma_1 u_1 \rho_1$ and $\pi = \sigma_2 u_2 \rho_2$
			which both help to improve the hypothesis.
		If on the first run we only add $\rho_1$ to the suffixes of our table,
			the inconsistency witnessed by $\sigma_2 u_2 \rho_2$ may not be covered yet,
			and we may be able to use $\pi$ as a counterexample a second time.
		A sketch of how such a suffix is found appears below.
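
		For reference, here is a minimal Python sketch of the Rivest-Schapire binary search
			that extracts one such suffix $\rho$ from a counterexample.
		The helpers \texttt{membership}, \texttt{hyp\_state} and \texttt{access} are hypothetical,
			standing in for the learner's internals.

\begin{verbatim}
def rs_suffix(membership, hyp_state, access, counterexample):
    """Rivest-Schapire-style counterexample analysis (sketch).

    Hypothetical helpers standing in for the learner's internals:
      membership(word) -> bool   membership oracle for the target
      hyp_state(word)  -> state  hypothesis state reached on word
      access(state)    -> word   access sequence of that state

    Returns one distinguishing suffix (a rho) for the table columns.
    """
    w = counterexample

    def alpha(i):
        # Replace the prefix w[:i] by the access sequence of the
        # hypothesis state it reaches, then query the remainder.
        return membership(access(hyp_state(w[:i])) + w[i:])

    # alpha(0) is the target's verdict on w, which disagrees with
    # the hypothesis; alpha(len(w)) agrees with it.  The verdict
    # therefore flips at some index, found in O(log |w|) queries.
    lo, hi = 0, len(w)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if alpha(mid) == alpha(0):
            lo = mid
        else:
            hi = mid
    return w[hi:]  # the suffix that exposes the inconsistency
\end{verbatim}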

	\item

	\item

	\item
\end{enumerate}

\section{Bounded Retransmission Protocol}

\end{document}