Saturday, November 17, 2018

Neural Network with Red language

Thanks to:  
Andrew Blais (onlymice@gnosis.cx), Gnosis Software, Inc. 
David Mertz (mertz@gnosis.cx), Gnosis Software, Inc.
Michael Shook, http://mshook.webfactional.com/talkabout/archive/

Just for fun, we'll test Red's capabilities for building neural networks. Here we use a simple network with 2 input neurons, 3 hidden neurons, and 1 output neuron. The Red code is based on the Back-Propagation Neural Networks Python code by Neil Schemenauer (nas@arctrix.com) and on Karl Lewin's code for the Rebol language.

You will easily find detailed explanations of neural networks on the Internet.

Neural Networks

Simply speaking, the human brain consists of billions of neurons, and each neuron is connected to many other neurons. Through these connections, neurons both send and receive varying quantities of signals. One very important feature of neurons is that they don't react immediately to incoming signals: they sum the signals they receive, and they send their own signal to other neurons only when this sum has reached a threshold. The human brain learns by adjusting the number and strength of the connections between neurons.

Threshold logic units (TLUs)

The first step toward understanding neural networks is to abstract from the biological neuron and to consider artificial neurons as threshold logic units (TLUs). A TLU is an object that takes an array of weighted input values, sums them, and outputs a signal if this sum is greater than or equal to some threshold. This means that TLUs can classify data. Imagine an artificial neuron with two inputs, both of whose weights equal 1, and whose threshold equals 1.5. With the inputs [0 0], [0 1], [1 0], and [1 1], the neuron will output 0, 0, 0, and 1 respectively. The hidden neurons used in the Red code are TLUs.
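To make this concrete, here is a minimal, self-contained Red sketch of such a TLU (the function name tlu is only illustrative; the program below builds its neurons differently, with floating-point weights learned by training):

Red [Title: "TLU sketch"]

; a threshold logic unit: weighted sum of the inputs, then a hard threshold
tlu: function [inputs [block!] weights [block!] threshold [number!]][
    sum: 0.0
    repeat i length? inputs [sum: sum + (inputs/:i * weights/:i)]
    either sum >= threshold [1][0]
]

; two inputs, both weights equal 1, threshold 1.5
; prints: [0 0] -> 0, [0 1] -> 0, [1 0] -> 0, [1 1] -> 1
foreach inp [[0 0] [0 1] [1 0] [1 1]][
    print [mold inp "->" tlu inp [1 1] 1.5]
]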


Network training 

Since TLUs can classify, neural networks can be built to learn simple rules such as Boolean operators. The learning mechanism is modeled on the brain's adjustment of its neural connections: a TLU learns by changing its weights and threshold. This is done by a process called training. The concept is not difficult to understand. Basically, we need a set of input values and the desired output for each set of inputs. This corresponds to the truth table of the Boolean operator we want the network to learn, such as XOR:
Input 1   Input 2   Output
   0         0         0
   0         1         1
   1         0         1
   1         1         0
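In the Red program below, this truth table is encoded as a block of input/output pairs (this is the pattern block defined in the listing):

pattern: [
    [[0 0] [0]]     ; inputs [0 0] -> desired output 0
    [[1 0] [1]]
    [[0 1] [1]]
    [[1 1] [0]]
]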

We first set the weights to random values. Then each set of input values is evaluated and compared to the desired output for that set. We add up each of these differences to get a summed error value for this set of weights. We then modify the weights and go through each of the input/output sets again to find out the total error for the new weights. Finally, we use the backpropagation algorithm to drive this adjustment: it looks for the minimum of the error function in weight space using a technique called the delta rule, or gradient descent. The weights that minimize the error function are then considered to be a solution to the learning problem.
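As a rough illustration, each individual weight update performed by the backPropagation function below has the following shape (a minimal sketch with made-up numbers; N and M are the learning rate and momentum factor from the listing):

Red [Title: "Delta rule sketch"]

; one weight update of the delta rule, with illustrative values
N: 0.5              ; learning rate
M: 0.1              ; momentum factor
delta: 0.2          ; error term of the downstream neuron
activation: 0.8     ; output of the upstream neuron
lastChange: 0.05    ; previous change of this weight (momentum term)
weight: 0.3

change: delta * activation
weight: weight + (N * change) + (M * lastChange)
print weight        ; 0.385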



Different Boolean operators ["XOR" "OR" "NOR" "AND" "NAND"] can be used to test the network. You can also play with the number of iterations used to train the network. Finally, two activation functions are implemented for the hidden and output neurons: the standard exponential (logistic) form 1/(1 + e^-x) or a sigmoid based on the hyperbolic tangent, selected with the "Sigmoid" checkbox.
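For reference, here is a small, self-contained sketch of these two activation functions (the names logistic and tanh-act are only illustrative; the listing below defines sigmoid/tanh and uses the logistic form inline):

Red [Title: "Activation sketch"]

; logistic function: values between 0 and 1
logistic: function [x [number!]][1 / (1 + EXP negate x)]

; hyperbolic tangent: values between -1 and 1
tanh-act: function [x [number!]][
    ((EXP x) - (EXP negate x)) / ((EXP x) + (EXP negate x))
]

foreach x [-2.0 0.0 2.0][
    print [x "logistic:" logistic x "tanh:" tanh-act x]
]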

Code

Red [
    Title:  "Red Neural Network"
    Author: "Francois Jouen"
    File:   %neuraln.red
    Needs:  View
]
{This code is based on Back-Propagation Neural Networks 
by Neil Schemenauer <nas@arctrix.com>
Thanks to  Karl Lewin for the Rebol version}

; default number of input, hidden, and output nodes
nInput: 2
nHidden: 3
nOutput: 1
; activations for nodes
aInput: []
aHidden: []
aOutput: []
; weights matrices
wInput: []
wOutput: []
; matrices for last change in weights for momentum
cInput: []
cOutput: []
learningRate: 0.5     ; learning rate
momentumFactor: 0.1   ; momentum factor

n: 1280               ; number of training iterations
netR: copy []         ; learning result
step: 8

; XOR by default
pattern: [
    [[0 0] [0]]
    [[1 0] [1]]
    [[0 1] [1]]
    [[1 1] [0]]
]


;calculate a random number where: a <= rand < b
rand: function [a [number!] b [number!]] [(b - a) * ((random 10000.0) / 10000.0) + a]

; Make matrices
make1DMatrix: function [mSize [integer!] value [number!] return: [block!]][
    m: copy []
    repeat i mSize [append m value]
    m
]

make2DMatrix: function [line [integer!] col [integer!] value [number!] return: [block!]][
    m: copy []
    repeat i line [
        blk: copy []
        repeat j col [append blk value]
        append/only m blk
    ]
    m
]

; hyperbolic tangent; the parentheses force the intended grouping,
; since Red evaluates expressions strictly left to right
tanh: function [x [number!] return: [number!]][
    ((EXP x) - (EXP negate x)) / ((EXP x) + (EXP negate x))
]

; sigmoid function, tanh seems better than the standard 1/(1+e^-x)
sigmoid: function [x [number!] return: [number!]][tanh x]

; derivative of the sigmoid (tanh) function: 1 - y^2
dsigmoid: function [y [number!] return: [number!]][1.0 - (y * y)]

createMatrices: func [][
    aInput:  make1DMatrix nInput 1.0
    aHidden: make1DMatrix nHidden 1.0
    aOutput: make1DMatrix nOutput 1.0
    wInput:  make2DMatrix nInput nHidden 0.0
    wOutput: make2DMatrix nHidden nOutput 0.0
    cInput:  make2DMatrix nInput nHidden 0.0
    cOutput: make2DMatrix nHidden nOutput 0.0
    randomizeMatrix wInput -2.0 2.0
    randomizeMatrix wOutput -2.0 2.0
]

randomizeMatrix: function [mat [block!] v1 [number!] v2 [number!]][
    foreach elt mat [loop length? elt [elt: change/part elt rand v1 v2 1]]
]

computeMatrices: func [inputs [block!] return: [block!]][
    ; input activations (the last input node is the bias, kept at 1.0)
    repeat i (nInput - 1) [poke aInput i to float! inputs/:i]
    ; hidden activations
    repeat j nHidden [
        sum: 0.0
        repeat i nInput [sum: sum + (aInput/:i * wInput/:i/:j)]
        either cb/data [poke aHidden j sigmoid sum][
            poke aHidden j 1 / (1 + EXP negate sum)
        ]
    ]
    ; output activations
    repeat j nOutput [
        sum: 0.0
        repeat i nHidden [sum: sum + (aHidden/:i * wOutput/:i/:j)]
        either cb/data [poke aOutput j sigmoid sum][
            poke aOutput j 1 / (1 + EXP negate sum)
        ]
    ]
    aOutput
]
backPropagation: func [targets [block!] N [number!] M [number!] return: [number!]][
    ; calculate error terms for output
    oDeltas: make1DMatrix nOutput 0.0
    sum: 0.0
    repeat k nOutput [
        either cb/data [
            sum: targets/:k - aOutput/:k
            poke oDeltas k (dsigmoid aOutput/:k) * sum
        ][
            ao: aOutput/:k
            poke oDeltas k ao * (1 - ao) * (targets/:k - ao)
        ]
    ]
    ; calculate error terms for hidden
    hDeltas: make1DMatrix nHidden 0.0
    repeat j nHidden [
        sum: 0.0
        repeat k nOutput [sum: sum + (oDeltas/:k * wOutput/:j/:k)]
        either cb/data [poke hDeltas j (dsigmoid aHidden/:j) * sum][
            poke hDeltas j (aHidden/:j * (1 - aHidden/:j) * sum)
        ]
    ]
    ; update output weights
    repeat j nHidden [
        repeat k nOutput [
            chnge: oDeltas/:k * aHidden/:j
            poke wOutput/:j k (wOutput/:j/:k + (N * chnge) + (M * cOutput/:j/:k))
            poke cOutput/:j k chnge
        ]
    ]
    ; update hidden weights
    repeat i nInput [
        repeat j nHidden [
            chnge: hDeltas/:j * aInput/:i
            poke wInput/:i j (wInput/:i/:j + (N * chnge) + (M * cInput/:i/:j))
            poke cInput/:i j chnge
        ]
    ]
    ; calculate error
    error: 0
    repeat k nOutput [error: error + (learningRate * ((targets/:k - aOutput/:k) ** 2))]
    error
]
trainNetwork: func [patterns [block!] iterations [number!] return: [block!]][
    blk: copy []
    count: 0
    x: 10
    plot: compose [line-width 1 pen red line 0x230 660x230 pen green]
    repeat i iterations [
        ;sbcount/text: form i
        error: 0
        foreach p patterns [
            r: computeMatrices p/1
            error: error + backPropagation p/2 learningRate momentumFactor
            sberr/text: form round/to error 0.001
            if system/platform = 'Windows [do-events/no-wait]   ; for Windows users
            do-events/no-wait
            append blk error
            count: count + 1
        ]
        ; visualization
        if (mod count step) = 0 [
            y: 230 - (error * 320)
            if x = 10 [append append plot 'line (as-pair x y)]
            append plot (as-pair x y)
            x: x + 1
        ]
        visu/draw: plot
        do-events/no-wait
    ]
    sb/text: copy "Neural Network rendered in: "
    blk
]
testLearning: func [patterns [block!]][
    result2/text: copy ""
    foreach p patterns [
        r: computeMatrices p/1
        append result2/text form to integer! round/half-ceiling first r
        append result2/text newline
    ]
]


changePattern: func [v1 v2 v3 v4][
    change second first pattern  v1
    change second second pattern v2
    change second third pattern  v3
    change second fourth pattern v4
    result2/text: copy ""
    result1/text: copy ""
    append append result1/text form second first pattern newline
    append append result1/text form second second pattern newline
    append append result1/text form second third pattern newline
    append append result1/text form second fourth pattern newline
]


makeNetwork: func [ni [integer!] nh [integer!] no [integer!] lr [float!] mf [float!]][
    random/seed now/time/precise
    nInput: ni + 1      ; one extra input node used as the bias
    nHidden: nh
    nOutput: no
    learningRate: lr
    momentumFactor: mf
    createMatrices
    s: copy "Neural Network created: "
    append s form ni
    append s " input neurons "
    append s form nh
    append s " hidden neurons "
    append s form no
    append s " output neuron(s) "
    sb/text: s
    result2/text: copy ""
    sberr/text: copy ""
]


makeTraining: does [
    t1: now/time/precise
    netR: trainNetwork pattern n    ; network training
    t2: now/time/precise
    testLearning pattern            ; test output values after training
    append sb/text form t2 - t1
]

view win: layout [
    title "Back-Propagation Neural Network"
    text "Pattern"
    dpt: drop-down 70
        data ["XOR" "OR" "NOR" "AND" "NAND"]
        select 1
        on-change [
            switch face/text [
                "XOR"  [changePattern 0 1 1 0]
                "AND"  [changePattern 0 0 0 1]
                "OR"   [changePattern 0 1 1 1]
                "NOR"  [changePattern 1 0 0 0]
                "NAND" [changePattern 1 1 1 0]
            ]
            isCreated: false
        ]
    text "Sample"
    dp2: drop-down 70
        data ["640" "1280" "1920" "2560"]
        select 2
        on-change [n: to integer! face/text step: (n / 640) * 4]
    cb: check "Sigmoid" []
    button "Run Network" [makeNetwork 2 3 1 0.5 0.1 makeTraining]
    text 40 "Error"
    sberr: field 60
    pad 10x0
    button "Quit" [quit]
    return
    visu: base 660x240 black
    result1: area 35x80
    result2: area 35x80
    return
    sb: field 660
    do [changePattern 0 1 1 0]
]





