$Id: README,v 1.9 2006-01-07 07:19:27 matju Exp $
PureUnity
Copyright 2006 by Mathieu Bouchard <matju at artengine dot ca>
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation; either version 2
of the License, or (at your option) any later version.
See file ./COPYING for further information on licensing terms.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+-+-+--+---+-----+--------+-------------+---------------------+
GOALS
1. To provide a unit-test framework that also provides benchmarking
features, all made in Pd for use in Pd.
2. To provide tests for functionality in internals, externals, abstractions,
etc., in a modular, DRY/OAOO fashion, abstracting out common features so
that many objects can share the same test patch for the features they have
in common.
+-+-+--+---+-----+--------+-------------+---------------------+
REQUIREMENTS
1. Pd 0.39 (PureMSP or Devel)
+-+-+--+---+-----+--------+-------------+---------------------+
TEST PROTOCOL
new:
  creates common (reusable) fixtures.
inlet 0:
  bang:
    runs all available tests in that class. Individual tests don't have to
    be available through individual methods, but they may be; if they are,
    the method names must match those given in the test results. Each test
    should build its own non-reusable fixtures and reinitialize common
    fixtures, without assuming that previous tests have left the common
    fixtures in a normal state.
outlet 0:
  test results: a sequence of lists like
    list $passed? $accuracy $elapsed $name1 ...
  where:
    $passed? is 0 for failure or 1 for success.
    $accuracy is a float proportional to the relative error on math
      (if not applicable, use 0).
    $elapsed is a nonnegative float, the time elapsed in milliseconds,
      or any negative float, meaning the time hasn't been measured.
    $name1 and the rest are symbols and/or floats identifying the test.
  for example:
    list 1 0 -1 commutative f + *
  which means that the commutativity test passed ($2=1), was perfectly
  accurate ($3=0), and its running time wasn't measured ($4=-1).
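For illustration, here is a minimal C sketch (not part of PureUnity; the
struct and function names are made up) of how a test external could emit
one such result line through outlet 0:

    #include "m_pd.h"
    typedef struct _test { t_object x_obj; t_outlet *x_out; } t_test;
    /* emit: list $passed? $accuracy $elapsed $name */
    static void test_report(t_test *x, int passed, t_float accuracy,
                            t_float elapsed_ms, t_symbol *name) {
        t_atom a[4];
        SETFLOAT(&a[0], (t_float)passed);  /* 0 = failure, 1 = success */
        SETFLOAT(&a[1], accuracy);         /* 0 = perfectly accurate */
        SETFLOAT(&a[2], elapsed_ms);       /* negative = not measured */
        SETSYMBOL(&a[3], name);            /* test identifier */
        outlet_list(x->x_out, &s_list, 4, a);
    }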
+-+-+--+---+-----+--------+-------------+---------------------+
SEVERITIES (in decreasing order)
* crash: Segmentation Fault, Bus Error, Illegal Instruction, Infinite Loop,
etc. You can't deal with those errors at the level of the tests. Maybe there
should be a way to tell a test object to skip certain tests, by name, in
order to perform as many tests as possible while waiting for a fix. It
could become possible to recover from some of those crashes if Pd supported
exceptions (stack-unwinding).
* corruption: this may cause future crashes and failures on innocent
objects/features. I have no solution for this except to be careful.
* post(), error(), pd_error(): get printed in the console. The problem is
that those can't be handled by the test objects, so someone has to read and
interpret them. They also prevent test objects from checking that error
conditions actually produce error messages.
* pd_error2(): I wish this existed. It would be sort of like pd_error(),
but it would produce a Pd message instead, whose selector would be an
error code, designed to be both localizable and [route]able. By default,
that message would be sent to the console, but there would be an internal
class designed to catch those messages. (If stack-unwinding were possible,
it would be disabled by default on pd_error2 and could be enabled
explicitly by selector.) A sketch of the idea appears after this list.
* failure: a test object reports a problem through outlet 0.
* dropout: a failure to keep up with real time; difficult for an object
to detect.
* inaccuracy: a test more or less succeeds, but the measured error is
worse than the epsilon ought to allow.
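Here is what the pd_error2() idea could look like, as an entirely
hypothetical C sketch: neither the function nor the bind-point symbol
exists in Pd, and both names are made up.

    #include "m_pd.h"
    void pd_error2(void *originator, t_symbol *errcode, int argc, t_atom *argv) {
        t_symbol *catcher = gensym("pd-error2-catcher");  /* assumed bind point */
        (void)originator;  /* would support find-last-error, as pd_error() does */
        if (catcher->s_thing) {
            /* an internal class has bound itself here: deliver the error as
               a message whose selector is the error code, so it can be
               [route]d and localized */
            pd_typedmess(catcher->s_thing, errcode, argc, argv);
        } else {
            /* default: print to the console, like pd_error() */
            startpost("error (%s):", errcode->s_name);
            postatom(argc, argv);
            endpost();
        }
    }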
+-+-+--+---+-----+--------+-------------+---------------------+
PROTOCOL FOR [error]
new:
  takes an optional argument, either a float (e.g. the $0 of the enclosing
  abstraction) or a pointer.
inlet 0:
  set $scapegoat:
    replaces the originator of the message with $scapegoat, which can be a
    float or a pointer.
  error $1 ...:
    causes its arguments to be concatenated, space-separated (they may
    include floats), and then sent through pd_error() using the appropriate
    originator (the scapegoat).
  list $1 ...:
    for future use: would use pd_error2() (see SEVERITIES above).
    $1 has to be a symbol.
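A minimal C sketch of the "error" method (this is not the actual
pureunity source; the struct and method names are made up):

    #include <string.h>
    #include "m_pd.h"
    typedef struct _error_obj {
        t_object x_obj;
        void *x_scapegoat;  /* reported originator: set by "set" or creation arg */
    } t_error_obj;
    static void error_obj_error(t_error_obj *x, t_symbol *s,
                                int argc, t_atom *argv) {
        char buf[MAXPDSTRING], one[MAXPDSTRING];
        int i;
        buf[0] = 0;
        for (i = 0; i < argc; i++) {  /* concatenate, space-separated */
            atom_string(&argv[i], one, MAXPDSTRING);  /* floats and symbols alike */
            if (i) strncat(buf, " ", MAXPDSTRING - strlen(buf) - 1);
            strncat(buf, one, MAXPDSTRING - strlen(buf) - 1);
        }
        pd_error(x->x_scapegoat, "%s", buf);  /* blamed on the scapegoat */
    }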
+-+-+--+---+-----+--------+-------------+---------------------+
ACCURACY AND ERROR (in math-related unit tests)
The "absolute error" between a practical result and the expected value
is considered to be the distance between the two value. That is the
absolute value of the difference.
In the case of positions in 2D, 3D, etc., use the L2 norm, which is a
generalized Pythagorean theorem: dist^2 = x^2 + y^2 + z^2 + ...
A norm is a distance between something and zero.
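For instance, as a plain C sketch (not part of PureUnity):

    #include <math.h>
    /* L2 distance between an expected and a practical N-dimensional result:
       dist = sqrt((a1-b1)^2 + (a2-b2)^2 + ...) */
    double l2_dist(const double *a, const double *b, int n) {
        double sum = 0;
        for (int i = 0; i < n; i++) { double d = a[i] - b[i]; sum += d * d; }
        return sqrt(sum);
    }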
Sometimes you have several practical results for one expected value
and must extract a single absolute error out of that. Then you should pick
the largest of the individual absolute errors.
Sometimes you don't have an expected value; you just have several
practical results that you expect to be nearly the same. In that case,
the absolute error is the "diameter" of those results, where diameter
means the largest distance between any two results.
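For example, for 1-D results (again a plain C sketch):

    #include <math.h>
    /* diameter: the largest distance between any two practical results */
    double diameter(const double *r, int n) {
        double best = 0;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (fabs(r[i] - r[j]) > best) best = fabs(r[i] - r[j]);
        return best;
    }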
If in a single test you must compare 2D errors with 3D errors and 1D
errors, etc., you may have to adjust them by dividing each error by the
square root of N (N being the number of dimensions). In that case, the
resulting value is called an RMS (Root Mean Square).
The error introduced by just representing a number as a float (instead of
as an exact value) is at most proportional to the magnitude of the number
(for single precision, about 16 million times smaller: about 7 decimals).
Also, often we are only interested in relative error, which is absolute
error divided by the norm of the expected result, because small absolute
errors don't matter much with large results. This is the reason floats
exist in the first place. By default, use relative error as the $accuracy
in Pd tests.
If you don't have an expected result, then compute the relative error as
being the absolute error divided by the norm of the average of practical
results.
In the RMS case of relative error, the norms of expected results should
also be adjusted, but the two adjustments cancel because they get divided
by each other. That means: don't divide by sqrt(N) at all and you'll get
an appropriate result.
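For example, a plain C sketch of relative error in the N-dimensional
case, where the sqrt(N) factors would cancel anyway:

    #include <math.h>
    /* relative error = |practical - expected| / |expected| (L2 norms) */
    double relative_error(const double *expected, const double *practical,
                          int n) {
        double err = 0, norm = 0;
        for (int i = 0; i < n; i++) {
            double d = practical[i] - expected[i];
            err  += d * d;
            norm += expected[i] * expected[i];
        }
        return sqrt(err) / sqrt(norm);  /* dividing both by sqrt(N) changes nothing */
    }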
+-+-+--+---+-----+--------+-------------+---------------------+
TYPE PREFIXES
These have to be prefixes in order to be honored by DOLLSYM:
[$1.norm] should expand to [f.norm], [lf.norm], [#.norm], etc.
Those prefixes are necessary in order to achieve polymorphism through
abstraction arguments.
CURRENT:
f float
~ signal
FUTURE (from PureData):
s symbol
p gpointer
a anything
l list (of whatever)
lf list of floats
ls list of symbols
lp list of pointers
FUTURE (from DesireData):
t stringpointer
L listpointer
v varpointer (instance symbol)
FUTURE (from GridFlow):
# grid (of whatever)
#b grid of bytes (uint8)
#s grid of shorts (int16)
#i grid of ints (int32)
#l grid of longs (int64)
#f grid of floats (float32)
#d grid of doubles (float64)
#r grid of rubies (VALUE*)
For a type prefix to be considered implemented, it has to have the
following class set:
metaabstraction   for floats   for signals  for grids
[$1.inlet]        [inlet]      [inlet~]     [inlet]
[$1.outlet]       [outlet]     [outlet~]    [outlet]
[$1.do $2 $3]     [$2 $3]      [$2~ $3]     [# $2 $3]
[$1.taa]          [t a a]      noop         [t a a]
[$1.swap]         [swap]       noop         TODO
[$1.norm]         [abs]        [env~]       [# sq]->[#ravel]->[#fold +]->[#export]->[sqrt]
[$1.packunpack3]  pack,unpack  noop         TODO
The first two cannot be implemented as abstractions and instead must be
defined as aliases in pureunity.c.
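For example (assuming a hypothetical abstraction [foo] that takes the
type prefix as its first argument): inside [foo f], the box [$1.norm]
expands to [f.norm], which is implemented with [abs]; inside [foo ~],
the same box expands to [~.norm], i.e. [env~].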
+-+-+--+---+-----+--------+-------------+---------------------+
OTHER PROTOCOLS
The four classes below are operators that verify algebraic properties of
other operators: the closer their outputs are to zero, the more faithful
those other operators are to the corresponding algebraic property.
(here, supported $types are f and ~)
[commutator $type $class] (2 inlets) ab-ba
[associator $type $class] (2 inlets) (ab)c-a(bc)
[distributor $type $class1 $class2] (3 inlets) a&(b^c)-(a&b^a&c)
[invertor $type $class1 $class2] (2 inlets) ab/b-a
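For example, [commutator f +] fed a=2 and b=3 computes
(2+3)-(3+2) = 0; any nonzero output measures how far the tested
operator is from being commutative.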
+-+-+--+---+-----+--------+-------------+---------------------+
TESTS AND RULES
For each class, a test file's name is the class name followed by "-test.pd",
and a rule file's name is the class name followed by "-rule.pd",
in the same way as it is for help files.
For a class called $foo, the protocol (aka interface, aka rule) $foo is
the set of behaviours expected from the $foo class. The class called
$foo-rule must respect the $foo protocol as well; in addition it should
test that the inputs are valid, and if they are, it should test for one or
several results and report any errors.
To report errors and inaccuracies, output them through the properties
outlet at the right. If there is no properties outlet in $foo (currently
almost nothing in Pd has one), then $foo-rule must have one more outlet
than $foo.
Float messages coming out of the properties outlet of $foo-rule report
accuracy. Named error messages come out with selector "error", followed by
an error-symbol and then its arguments.
In the case of true/false logic, a value of 0 means that a test has passed
and 1 means that it has failed: these values measure failure, not success.
The reason is that this matches accuracy levels, where 0 is perfectly
accurate and any inaccuracy shows up as a relative-error fraction. Any
finite nonnegative value is allowed for accuracy, because it is expected
to be the result of a norm.
(In standard math, the "discrete metric" is the one in which only two
distances are possible: together = 0 and apart = 1.)
+-+-+--+---+-----+--------+-------------+---------------------+
ETC
(write me!)
If +-test.pd tests [+], it can test for hotness and coldness, that only
one result is produced per hot message, that all results are floats, that
a few example additions work, and that with random inputs [+] respects
commutativity, associativity and invertibility within appropriate
relative-error bounds, etc.
However, +-test.pd can't test that error messages aren't printed during
the testing. This may be something we want to check for; currently the
best way to handle it is to search the console for error messages and, if
there are any, restart the tests in verbose mode to see exactly where the
error happens.
[...]
Floating-point is the scientific notation for numbers that we all learned
on paper in school. Rounding and inaccuracy are two sides of the same
coin: they are required whenever perfect results would be pointless, that
is, whenever they would mean too many computations for too little gain.
However sometimes we want to make sure that our math is accurate enough.
Many algorithms are data-recursive: each computation uses previous
results. Many of those algorithms have chaotic and/or unstable
behaviours, which means that the inaccuracies may skyrocket instead of
fading out.
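As a small illustration (not from PureUnity), the same chaotic recurrence
iterated in single and in double precision drifts apart after a few dozen
steps, because each step amplifies the rounding error of the previous one:

    #include <stdio.h>
    int main(void) {
        float  xf = 0.25f;
        double xd = 0.25;
        for (int i = 0; i < 60; i++) {
            xf = 3.99f * xf * (1.0f - xf);  /* logistic map, float */
            xd = 3.99  * xd * (1.0  - xd);  /* same map, double */
        }
        printf("float: %f   double: %f\n", (double)xf, xd);  /* visibly different */
        return 0;
    }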