TITLE:: FluidMLPRegressor
summary:: Regression with a multi-layer perceptron
categories:: Machine learning
related:: Classes/FluidMLPClassifier, Classes/FluidDataSet

DESCRIPTION::
Perform regression between link::Classes/FluidDataSet::s using a Multilayer Perceptron neural network.
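
A minimal sketch of typical use, assuming a running server code::s:: (the names code::~inputs:: and code::~outputs:: stand for two populated, size-matched DataSets):

code::
// map each point in ~inputs to the corresponding point in ~outputs
~mlp = FluidMLPRegressor(s);
~mlp.fit(~inputs, ~outputs, action: {|loss| loss.postln });
::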
ARGUMENT:: hidden
An link::Classes/Array:: that gives the sizes of any hidden layers in the network (default is two hidden layers of three units each).

ARGUMENT:: activation
The activation function to use for the hidden layer units.

ARGUMENT:: maxIter
The maximum number of iterations to use in training.

ARGUMENT:: learnRate
The learning rate of the network. Start small, increase slowly.

ARGUMENT:: momentum
The training momentum (default 0.9).

ARGUMENT:: batchSize
The training batch size.

ARGUMENT:: validation
The fraction of the DataSet to hold back during training to validate the network against.

METHOD:: identity, relu, sigmoid, tanh
A set of convenience constants for the available activation functions.
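
For example, to create the network used in the examples below, with a single hidden layer of two code::tanh:: units:

code::
// hidden = [2], activation = tanh, maxIter = 1000,
// learnRate = 0.1, momentum = 0.1, batchSize = 1, validation = 0
~regressor = FluidMLPRegressor(s, [2], FluidMLPRegressor.tanh, 1000, 0.1, 0.1, 1, 0);
::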
INSTANCEMETHODS::

ARGUMENT:: action
Function to run when training is complete.

returns:: The training loss, or -1 if training failed.
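
For example, a sketch of calling code::fit:: with a completion action that reports success or failure (assuming code::~source:: and code::~target:: are populated, size-matched DataSets):

code::
~regressor.fit(~source, ~target, action: {|loss|
	if(loss != -1)
	{ ("MLP trained with loss" + loss).postln }
	{ "Training failed. Try again (perhaps with a lower learning rate)".postln };
});
::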
METHOD:: predict
Apply the learned mapping to a link::Classes/FluidDataSet:: (given a trained network).

ARGUMENT:: sourceDataSet
The input data.
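
A sketch of typical use, assuming a trained network and a destination DataSet for the output (as in the examples below):

code::
// write a prediction for every point in ~test into ~output
~regressor.predict(~test, ~output, action: { "prediction done".postln });
::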
EXAMPLES::

code::
//Make a simple mapping between a ramp and a sine cycle, then test with a squared ramp
(
~source = FluidDataSet(s, \mlp_regressor_source);
~target = FluidDataSet(s, \mlp_regressor_target);
~test = FluidDataSet(s, \mlp_regressor_test);
~output = FluidDataSet(s, \mlp_regress_out);
~tmpbuf = Buffer.alloc(s, 1); //single input point, for predictPoint below
~destpoint = Buffer.new(s);   //single output point, for predictPoint below
~regressor = FluidMLPRegressor(s, [2], FluidMLPRegressor.tanh, 1000, 0.1, 0.1, 1, 0);
)
//Make source, target and test data
(
~sourcedata = 128.collect{|i| i/128 };
~targetdata = 128.collect{|i| sin(2*pi*i/128) };
~testdata = 128.collect{|i| (i/128)**2 };
~source.load(
	Dictionary.with(
		*[\cols -> 1, \data -> Dictionary.newFrom(
			~sourcedata.collect{|x, i| [i.asString, [x]] }.flatten)])
);
~target.load(
	Dictionary.with(
		*[\cols -> 1, \data -> Dictionary.newFrom(
			~targetdata.collect{|x, i| [i.asString, [x]] }.flatten)])
);
~test.load(
	Dictionary.with(
		*[\cols -> 1, \data -> Dictionary.newFrom(
			~testdata.collect{|x, i| [i.asString, [x]] }.flatten)])
);
~targetdata.plot;
~source.print;
~target.print;
~test.print;
)
// Now fit the regressor to the source and target, then predict against the test set.
// Run this to train the network for up to maxIter (here 1000) epochs to map source to target.
// fit() returns the loss: if this is -1, training has failed. Re-run until the printed error is satisfactory to you
~regressor.fit(~source, ~target, {|x| x.postln });
//you can change parameters of the MLPRegressor with setters
~regressor.learnRate = 0.01;
~regressor.momentum = 0;
~regressor.validation = 0.2;
// Batch predict takes a source FluidDataSet and a destination FluidDataSet to write the network output to.
// Grab the output data whilst we're at it, so we can inspect it
(
~outputdata = Array(128);
~regressor.predict(~test, ~output, action: {
	~output.dump{|x| 128.do{|i|
		~outputdata.add(x["data"][i.asString][0])
	}};
});
)
//We should see a single cycle of a chirp. If not, train some more (perhaps with a lower learning rate)
~outputdata.plot;
//Single point predict uses Buffers rather than FluidDataSets
(
{
	~tmpbuf.set(0, 0.5);
	~regressor.predictPoint(~tmpbuf, ~destpoint);
	s.sync;
	//should post a value near sin(2pi * 0.5) = 0
	~destpoint.loadToFloatArray(action: {|a| a.postln });
}.fork
)
::