path: root/examples/ann_som/ann_som.pd
#N canvas 258 163 640 725 10;
#X msg 131 535 print;
#X msg 132 568 new 5 8 8;
#X msg 127 139 init;
#X msg 128 314 train;
#X msg 129 336 test;
#X msg 128 427 write;
#X obj 70 599 ann_som 4 9 10;
#X msg 70 49 1 0 0 1;
#X msg 70 68 0 1 0 1;
#X msg 70 87 2 1 0 0;
#X msg 128 158 init 0.5;
#X msg 128 178 init 1 0.5 0 0.5;
#X text 234 141 init all weights with "0";
#X text 235 160 init all weights with "0.5";
#X text 235 177 init weights for each sensor;
#X msg 128 203 learn 1;
#X msg 128 237 learn 1 0.9 0.1;
#X text 226 203 set learning rate to 1;
#X msg 128 220 learn 0.5 0.999;
#X text 227 219 set learning rate to 0.5 and factor to 0.999;
#X text 227 237 set learning rate to 1 \, factor to 0.9 and offset
to 0.1;
#X msg 128 254 neighbour 1;
#X msg 128 271 neighbour 0.5 0.999;
#X msg 128 288 neighbour 1 0.9 0.1;
#X text 248 255 set neighbourhood to 1;
#X text 249 271 set neighbourhood to 0.5 and factor to 0.999;
#X text 249 289 set neighbourhood to 1 \, factor to 0.9 and offset
to 0.1;
#X text 180 309 set som to "train" mode (learn from sensor-input and
output winning neuron);
#X text 179 331 set som to "test" mode (output winning neuron for sensor-input
\, but do not learn!);
#X msg 129 368 rule INSTAR;
#X msg 129 385 rule OUTSTAR;
#X msg 129 402 rule KOHONEN;
#X text 218 367 learn with IN-STAR rule;
#X text 219 385 learn with OUT-STAR rule;
#X text 219 402 learn with KOHONEN rule;
#X msg 128 445 write mysom.som;
#X msg 129 469 read;
#X msg 129 487 read mysom.som;
#X text 156 68 present various data to the SOM;
#X text 203 535 for debugging;
#X text 207 570 create a new SOM with 8x8 neurons \, each having 5
sensors;
#X text 204 601 create a new SOM with 9x10 neurons \, each having 4
sensors;
#X floatatom 70 654 4 0 0 0 - - -;
#X text 113 658 winning neuron;
#N canvas 13 0 889 630 SOMs 0;
#X text 76 27 SOM :: Self-Organized Maps;
#X text 55 53 SOMs are "Artificial Neural Networks" that try to learn
something about the data presented to them without a supervisor/teacher.
;
#X text 59 118 in short:;
#X text 120 119 the neuron whose weight-configuration best matches the
presented data is the winner (its number \, counting from the lower-left
corner \, is sent to the output);
#X text 121 163 to match the data better the next time it is presented
\, the weights of the winning neuron are adjusted.;
#X text 121 188 the weights of the neurons neighbouring the winner
are adjusted to match the data too \, but not as strongly as the winner's.
;
#X text 121 276 lr(n+1)=lr(n)*factor;
#X text 275 277 learning_rate=lr+offset;
#X text 121 289 nb(n+1)=nb(n)*factor;
#X text 275 290 neighbourhood=nb+offset;
#X text 121 230 both the neighbourhood and the learning-rate (== the amount
by which the weights of the winner (and \, proportionally \, the weights
of the neighbours) are adjusted) decrease recursively with time \, following
the formulas below (see the worked example on the right).
;
#X text 119 319 thus you will sooner or (most of the time) later get
a "brain map" \, where similar inputs activate neurons in specific
regions (like the regions for seeing and for hearing in our brains);
#X text 97 381 there are various rules for re-adjusting the weights
of the neurons: in-star \, out-star and kohonen (there may be others
\, but these are the ones i found in the literature);
#X obj 607 220 +;
#X text 640 182 ...;
#X obj 579 185 * \$1;
#X obj 607 185 * \$2;
#X obj 670 185 * \$0;
#X obj 579 128 unpack 0 0 0 0 0;
#X text 602 111 n sensors;
#X text 705 186 weights 1 to n;
#X obj 579 90 inlet;
#X obj 607 288 outlet;
#X text 594 62 a neuron;
#X text 566 307 the neuron with the highest weighted sum;
#X text 567 318 matches best and is therefore the winner;
#X text 53 452 notes:;
#X text 101 453 each neuron of the SOM has n sensors \, so you have to present
a list of n floats to the SOM to make it work (e.g. 5 floats like "0.2
0.5 0.1 0.9 0.3" for a SOM created with "new 5 8 8");
#X text 102 482 you should init the weights for each sensor with the
expected mean of the sensor values before you start training \, to get
the best and fastest results;
#X text 55 87 they were first proposed by the Finnish scientist T. Kohonen
in the early 1980s.;
#X text 98 543 if you have no clue what this is all about \, maybe
you do not need SOMs (which i doubt) or you should have a look at;
#X text 118 577 http://www.eas.asu.edu/~eee511;
#X text 118 591 http://www.cis.hut.fi/projects/ica;
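#X text 566 345 a worked example with assumed numbers (not taken from this
patch): starting from lr=0.5 \, factor=0.999 and offset=0 \, lr becomes
0.4995 after one step and about 0.5*0.999^1000 (roughly 0.18) after 1000
presentations;
#X text 566 395 the textbook kohonen update (an assumption \, not read from
this external's source) is w_new = w_old + lr*(input - w_old) \, i.e. the
winner's weights are pulled toward the input;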
#X connect 13 0 22 0;
#X connect 15 0 13 0;
#X connect 16 0 13 0;
#X connect 17 0 13 0;
#X connect 18 0 15 0;
#X connect 18 1 16 0;
#X connect 18 4 17 0;
#X connect 21 0 18 0;
#X restore 535 44 pd SOMs;
#X text 81 13 ann_som :: train and test Self-Organized Maps;
#X obj 73 700 ann_som test.som;
#X text 211 704 load a SOM-file;
#X msg 128 119 rinit 10;
#X text 234 121 init all weights with time-seeded random values from
0 to 10;
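#X text 330 630 a one-click setup sketch (the values are assumptions \, not
recommendations): init the weights \, set decaying learning-rate and
neighbourhood \, then switch to train mode;
#X msg 330 668 init 0.5 \, learn 0.9 0.999 \, neighbour 2 0.95 \, train;
#X connect 51 0 6 0;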
#X connect 0 0 6 0;
#X connect 1 0 6 0;
#X connect 2 0 6 0;
#X connect 3 0 6 0;
#X connect 4 0 6 0;
#X connect 5 0 6 0;
#X connect 6 0 42 0;
#X connect 7 0 6 0;
#X connect 8 0 6 0;
#X connect 9 0 6 0;
#X connect 11 0 6 0;
#X connect 15 0 6 0;
#X connect 16 0 6 0;
#X connect 18 0 6 0;
#X connect 21 0 6 0;
#X connect 22 0 6 0;
#X connect 23 0 6 0;
#X connect 29 0 6 0;
#X connect 30 0 6 0;
#X connect 31 0 6 0;
#X connect 35 0 6 0;
#X connect 36 0 6 0;
#X connect 37 0 6 0;
#X connect 48 0 6 0;