$Id: README,v 1.5 2005-12-31 18:44:54 matju Exp $

PureUnity
Copyright 2006 by Mathieu Bouchard

This program is free software; you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by the
Free Software Foundation; either version 2 of the License, or (at your
option) any later version. See file ./COPYING for further information
on licensing terms.

This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.

You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.

+-+-+--+---+-----+--------+-------------+---------------------+
GOALS

1. To provide a unit-test framework, which also provides benchmarking
   features, all made in Pd for use in Pd.

2. To provide tests for functionality in internals, externals,
   abstractions, etc., in a modularized way, in a DRY/OAOO fashion,
   abstracting out common features so that many objects share the same
   test patch for the features they have in common.

+-+-+--+---+-----+--------+-------------+---------------------+
TEST PROTOCOL

new: create common (reusable) fixtures.

inlet 0:
  bang: run all available tests in that class. Individual tests don't
  have to be available through individual methods, but they may be. If
  they are, the names of the methods must match those given in the
  test results. Each test should build its own non-reusable fixtures
  and reinitialize common fixtures, not assuming that previous tests
  have left the common fixtures in a normal state.

outlet 0: test results, as a sequence of lists like:

  list $name $passed? $accuracy $elapsed

where: $name is a symbol; $passed?
is either 0 for failure or 1
for success; $accuracy is a float proportional to the relative error
on math (use 0 if not applicable); $elapsed is a float, the time
elapsed in milliseconds, or the symbol "-" if not measured. For
example:

  list commutative1 1 0 -

means that the first test about commutativity passed ($2==1), that it
was perfectly accurate ($3==0), and that we didn't measure the time
($4==-).

+-+-+--+---+-----+--------+-------------+---------------------+
SEVERITIES (in decreasing order)

* crash: Segmentation Fault, Bus Error, Illegal Instruction, Infinite
  Loop, etc. You can't deal with those errors at the level of the
  tests. Maybe there should be a way to tell a test object to skip
  certain tests, by name, in order to perform as many tests as
  possible while waiting for a fix. It could become possible to rescue
  from some of those crashes if Pd supported exceptions
  (stack-unwinding).

* corruption: this may cause future crashes and failures in innocent
  objects/features. I have no solution for this except to be careful.

* post(), error(), pd_error(): gets printed in the console. The
  problem is that those can't be handled by the test objects, so
  someone has to read them and interpret them. They also prevent test
  objects from ensuring that error conditions produce error messages.

* pd_error2(): I wish this existed. It would be sort of like
  pd_error(), but it would produce a Pd message instead, whose
  selector would be an error code, designed to be both localizable and
  [route]able. By default, that message would be sent to the console,
  but there would be an internal class designed to catch those
  messages. (If stack-unwinding were possible, it would be disabled by
  default on pd_error2 and could be enabled explicitly by selector.)

* failure: a test object reports a problem through outlet 0.

* dropout: a failure in realtimeness; difficult for an object to
  detect.

* inaccuracy: a test more or less succeeds, but only within a larger
  error margin (epsilon) than expected.
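As an illustration of the test-result format from the TEST PROTOCOL
section above, here is a small Python sketch of a result-list checker.
It is not part of PureUnity (which is pure Pd); the function name
check_result and the use of Python strings for Pd symbols are my own
assumptions for the example.

```python
# Hypothetical validator for PureUnity result lists of the form:
#   list $name $passed? $accuracy $elapsed
# Pd symbols are modeled as Python strings; this is a sketch of the
# protocol, not PureUnity code.

def check_result(name, passed, accuracy, elapsed):
    """Return True if the four fields form a valid test result."""
    if not isinstance(name, str):                 # $name is a symbol
        return False
    if passed not in (0, 1):                      # $passed? is 0 or 1
        return False
    if not (isinstance(accuracy, (int, float)) and accuracy >= 0):
        return False                              # $accuracy: relative error, 0 if n/a
    # $elapsed is a float (milliseconds) or the symbol "-" if not measured
    if not (elapsed == "-" or isinstance(elapsed, (int, float))):
        return False
    return True

# The example from the text: test "commutative1" passed, was perfectly
# accurate, and the elapsed time was not measured.
print(check_result("commutative1", 1, 0, "-"))  # True
```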
+-+-+--+---+-----+--------+-------------+---------------------+
PROTOCOL FOR [error]

new: optional argument, either a float (e.g. the $0 of the enclosing
abstraction) or a pointer.

inlet 0:
  set $scapegoat: replaces the originator of the message by
  $scapegoat, which can be a float or a pointer.
  error $1 ...: causes its arguments to be concatenated,
  space-separated (they may include floats), and then sent through
  pd_error() using the appropriate originator (scapegoat).
  list $1 ...: for future use. Would use pd_error2() (see README or
  previous mail). $1 has to be a symbol.

+-+-+--+---+-----+--------+-------------+---------------------+
ACCURACY AND ERROR (in math-related unit tests)

The "absolute error" between a practical result and the expected value
is the distance between the two values, that is, the absolute value of
their difference. For positions in 2D, 3D, etc., use the L2 norm,
which is a generalized Pythagorean theorem:

  dist^2 = x^2 + y^2 + z^2 + ...

A norm is the distance between something and zero.

Sometimes you have several practical results for one expected value
and must extract a single absolute error out of them. In that case,
pick the largest of the individual absolute errors.

Sometimes you don't have an expected value at all; you just have
several practical results that you expect to be nearly the same. In
that case, the absolute error is the "diameter" of those results: the
largest distance between any two of them.

If in a single test you must compare 2D errors with 3D errors, 1D
errors, etc., you may have to adjust them by dividing each error by
the square root of N (where N is the number of dimensions). The
resulting value is called an RMS (Root-Mean-Square).

The maximum error introduced by just representing a number as a float
(instead of as an exact value) is at most proportional to the
magnitude of the number (usually about 16 million times smaller:
about 6 decimal digits).
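The absolute-error conventions above can be sketched in Python. The
function names (dist, abs_error_many, diameter, rms_adjust) are mine,
chosen for the example; PureUnity itself does this in Pd patches.

```python
import math
from itertools import combinations

def dist(a, b):
    """L2 norm of the difference: generalized Pythagorean distance."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def abs_error_many(results, expected):
    """Several practical results, one expected value:
    take the largest of the individual absolute errors."""
    return max(dist(r, expected) for r in results)

def diameter(results):
    """No expected value: the largest distance between any two results."""
    return max(dist(a, b) for a, b in combinations(results, 2))

def rms_adjust(error, n_dims):
    """Divide by sqrt(N) so errors of different dimensionality
    become comparable (RMS)."""
    return error / math.sqrt(n_dims)

# 2D example: two measurements, one expected point (1, 0).
print(abs_error_many([(1.0, 3.0), (1.0, 0.0)], (1.0, 0.0)))  # 3.0
print(diameter([(1.0, 3.0), (1.0, 0.0)]))                    # 3.0
```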
Also, often we are only interested in the relative error, which is the
absolute error divided by the norm of the expected result, because
small absolute errors don't matter much with large results. This is
the reason floats exist in the first place. By default, use the
relative error as the $accuracy in Pd tests.

If you don't have an expected result, compute the relative error as
the absolute error divided by the norm of the average of the practical
results.

In the RMS case of relative error, the norms of the expected results
should also be adjusted, but the two adjustments cancel out because
they get divided by each other. That means: don't divide by sqrt(N)
at all and you will get an appropriate result.

+-+-+--+---+-----+--------+-------------+---------------------+
ETC (write me!)

If +-test.pd tests [+], it can test for hotness and coldness, it can
test that only one result is produced per hot message, that all
results are floats, that a few example additions work, and that with
random inputs [+] respects commutativity, associativity,
invertibility, etc., within appropriate relative-error bounds.
However, +-test.pd can't test that error messages aren't printed
during the testing. This may be something that we want to check for;
currently the best way to handle it is to search the console for error
messages and, if there are any, restart the tests in verbose mode to
see where the error happens exactly.

[...]

Floating-point is the scientific notation for numbers that we all
learned on paper in school. Rounding and inaccuracy are two sides of
the same coin. They are required when perfect results would be
pointless, that is, when they would mean too many computations for too
little gain. However, sometimes we want to make sure that our math is
accurate enough. Many algorithms are data-recursive: each computation
uses previous results. Many of those algorithms have chaotic and/or
unstable behaviours, which means that the inaccuracies may skyrocket
instead of fading out.
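The relative-error rules from the ACCURACY AND ERROR section can be
sketched in Python as well. Again, the function names are illustrative
assumptions, not PureUnity API.

```python
import math
from itertools import combinations

def norm(v):
    """L2 norm: the distance between v and zero."""
    return math.sqrt(sum(x * x for x in v))

def relative_error(result, expected):
    """Absolute error divided by the norm of the expected result."""
    diff = [r - e for r, e in zip(result, expected)]
    return norm(diff) / norm(expected)

def relative_error_no_expected(results):
    """No expected value: the diameter of the results divided by the
    norm of their average."""
    diam = max(norm([a - b for a, b in zip(p, q)])
               for p, q in combinations(results, 2))
    avg = [sum(xs) / len(results) for xs in zip(*results)]
    return diam / norm(avg)

# A result of (3.03, 4.04) against an expected (3, 4): the absolute
# error is 0.05 and the expected norm is 5, so the relative error is
# close to 0.01 (up to float rounding).
print(relative_error((3.03, 4.04), (3.0, 4.0)))
```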