\documentclass[british]{scrartcl}
\usepackage[british]{babel}
\usepackage{csquotes}
\usepackage{enumerate}
\usepackage[hidelinks]{hyperref}
\usepackage{graphicx}
\usepackage{caption}
\usepackage{cleveref}
\let\oldurl\url
\def\url#1{{\small\oldurl{#1}}}
\title{Testing Techniques}
\subtitle{Assignment 2 Automata Learning}
\author{Ren\'e den Hertog \and Camil Staps \and Erin van der Veen}
% Remove the date from '\maketitle'.
\makeatletter
\patchcmd
{\@maketitle}
{{\usekomafont{date}{\@date\par}}\vskip\z@\@plus 1em}
{}
{}
{}
\makeatother
\begin{document}
\maketitle
\section{Chocolate Bar Machine}
\begin{enumerate}
\item
L* builds a large hypothesis model from the start, so it performs better when the equivalence oracle is inefficient, since fewer equivalence queries are needed.
Rivest-Schapire and TTT, on the other hand, build small hypothesis models and can perform better with a fast equivalence oracle,
because these algorithms spend less time constructing a well-founded hypothesis.
\item
We used L* with the following counterexamples:
\begin{itemize}
\item 10ct 10ct 5ct snickers
\item 10ct 10ct 10ct 10ct 10ct snickers snickers
\item 10ct 10ct 10ct snickers 10ct twix
\item 10ct 10ct 10ct 5ct snickers twix
\end{itemize}
The hypotheses are given in \cref{fig:lstar-run} (p.~\pageref{fig:lstar-run}).
\begin{figure}[p]
\includegraphics[width=.5\textwidth]{LStar_hypothesis0}
\includegraphics[width=.5\textwidth]{LStar_hypothesis1}
\includegraphics[width=.5\textwidth]{LStar_hypothesis2}
\includegraphics[width=.5\textwidth]{LStar_hypothesis3}
\caption{Learning the chocolate bar machine with L* (top to bottom, left to right).\label{fig:lstar-run}}
\end{figure}
\item
States are uniquely identified by the amount of money inserted into the machine.
The machine does not accept more than 40ct, and the granularity is 5ct.
Hence, there are $\frac{40}{5}+1=9$ states.
The learned model also contains 9 states, therefore the two must be equivalent.
\item
\item
\item
\item
\end{enumerate}
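The state count in item 3 can be checked with a small reachability sketch. This is a hypothetical model, not the actual system under test from the assignment: it assumes the machine accepts only 5ct and 10ct coins, rejects any coin that would push the total past 40ct, and identifies each state with the amount inserted so far.

\begin{verbatim}
from collections import deque

# Hypothetical sketch of the chocolate bar machine (assumption: it
# accepts 5ct and 10ct coins and rejects coins that would exceed 40ct).
# A state is the total amount inserted; rejected coins are self-loops
# and buying a bar returns to 0ct, so neither adds new states.
COINS = [5, 10]
LIMIT = 40

def reachable_states():
    seen = {0}
    queue = deque([0])
    while queue:
        amount = queue.popleft()
        for coin in COINS:
            nxt = amount + coin
            if nxt <= LIMIT and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(len(reachable_states()))  # 9: the amounts 0, 5, 10, ..., 40
\end{verbatim}

Breadth-first search from the empty machine visits exactly the multiples of 5 up to 40, confirming the $\frac{40}{5}+1=9$ count above.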
\section{Bounded Retransmission Protocol}
\end{document}