updated helpfiles and examples for MLP(Regressor | Classifier)

nix
Pierre Alexandre Tremblay 6 years ago
parent d6b5ebd686
commit 28728e79e8

@@ -85,7 +85,7 @@ d = Dictionary.with(
~testdata.collect{|x, i| [i.asString, [x]]}.flatten)])
);
~targetdata.plot;
~source.print;
~target.print;
~test.print;

@@ -1,84 +1,80 @@
TITLE:: FluidMLPClassifier
summary:: Classification with a multi-layer perceptron
categories:: Machine learning
related:: Classes/FluidMLPRegressor, Classes/FluidDataSet
DESCRIPTION::
Perform classification between a link::Classes/FluidDataSet:: and a link::Classes/FluidLabelSet:: using a multi-layer perceptron neural network.
CLASSMETHODS::
METHOD:: new
Creates a new instance on the server.
ARGUMENT:: server
The link::Classes/Server:: on which to run this model.
ARGUMENT:: hidden
An link::Classes/Array:: that gives the sizes of any hidden layers in the network (default is two hidden layers of three units each).
ARGUMENT:: activation
The activation function to use for the hidden layer units.
ARGUMENT:: maxIter
The maximum number of iterations to use in training.
ARGUMENT:: learnRate
The learning rate of the network. Start small, increase slowly.
ARGUMENT:: momentum
The training momentum, default 0.9.
ARGUMENT:: batchSize
The training batch size.
ARGUMENT:: validation
The fraction of the DataSet size to hold back during training to validate the network against.
METHOD:: identity, relu, sigmoid, tanh
A set of convenience constants for the available activation functions.
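For instance, a sketch of a possible instantiation using these constants. The keyword argument names follow the arguments documented above; the values are illustrative, not recommendations.
code::
// A sketch only: two hidden layers of 6 and 3 units, tanh activation,
// a cap of 500 training iterations and a deliberately small learning rate
~mlp = FluidMLPClassifier(s, [6, 3], FluidMLPClassifier.tanh, maxIter: 500, learnRate: 0.01);
::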
INSTANCEMETHODS::
PRIVATE:: init, uid
METHOD:: fit
Train the network to map between a source link::Classes/FluidDataSet:: and a target link::Classes/FluidLabelSet::.
ARGUMENT:: sourceDataSet
Source data
ARGUMENT:: targetLabelSet
Target data
ARGUMENT:: action
Function to run when training is complete
returns:: The training loss, or -1 if training failed
METHOD:: predict
Apply the learned mapping to a link::Classes/FluidDataSet:: (given a trained network)
ARGUMENT:: sourceDataSet
Input data
ARGUMENT:: targetLabelSet
Output data
ARGUMENT:: action
Function to run when complete
METHOD:: predictPoint
Apply the learned mapping to a single data point in a link::Classes/Buffer::
ARGUMENT:: sourceBuffer
Input point
ARGUMENT:: action
A function to run when complete
EXAMPLES::
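What follows is a minimal sketch rather than a canonical example. It assumes that link::Classes/FluidLabelSet:: is constructed like link::Classes/FluidDataSet:: and accepts the same Dictionary load format, with labels given as strings, and that predictPoint passes the predicted label to its action; treat those as assumptions.
code::
// A sketch: classify 1-D points as "low" or "high" depending on which side of 0.5 they fall
(
~source = FluidDataSet(s,\mlp_classifier_source);
~labels = FluidLabelSet(s,\mlp_classifier_labels);
~classifier = FluidMLPClassifier(s,[3,3],FluidMLPClassifier.sigmoid,1000,0.1,0.9,1,0);
)

// Load 32 points and their labels
// (assumes FluidLabelSet.load takes the same Dictionary format as FluidDataSet.load)
(
~data = 32.collect{|i| i / 32 };
~source.load(
	Dictionary.with(
		*[\cols -> 1, \data -> Dictionary.newFrom(
			~data.collect{|x, i| [i.asString, [x]]}.flatten)])
);
~labels.load(
	Dictionary.with(
		*[\cols -> 1, \data -> Dictionary.newFrom(
			~data.collect{|x, i| [i.asString, [if(x < 0.5){"low"}{"high"}]]}.flatten)])
);
)

// Train: fit() passes the loss to the action, or -1 if training failed
~classifier.fit(~source, ~labels, {|loss| loss.postln });

// Predict the label of a single point held in a Buffer
// (the action is assumed here to receive the predicted label)
(
~point = Buffer.loadCollection(s, [0.25], action: {|buf|
	~classifier.predictPoint(buf, {|label| label.postln });
});
)
::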

@@ -1,7 +1,7 @@
TITLE:: FluidMLPRegressor
summary:: Regression with a multi-layer perceptron
categories:: Machine learning
related:: Classes/FluidMLPClassifier, Classes/FluidDataSet
DESCRIPTION::
Perform regression between link::Classes/FluidDataSet::s using a multi-layer perceptron neural network.
@@ -18,25 +18,25 @@ ARGUMENT:: hidden
An link::Classes/Array:: that gives the sizes of any hidden layers in the network (default is two hidden layers of three units each).
ARGUMENT:: activation
The activation function to use for the hidden layer units.
ARGUMENT:: maxIter
The maximum number of iterations to use in training.
ARGUMENT:: learnRate
The learning rate of the network. Start small, increase slowly.
ARGUMENT:: momentum
The training momentum, default 0.9.
ARGUMENT:: batchSize
The training batch size.
ARGUMENT:: validation
The fraction of the DataSet size to hold back during training to validate the network against.
METHOD:: identity, relu, sigmoid, tanh
A set of convenience constants for the available activation functions.
INSTANCEMETHODS::
@@ -57,7 +57,7 @@ Function to run when training is complete
returns:: The training loss, or -1 if training failed
METHOD:: predict
Apply the learned mapping to a link::Classes/FluidDataSet:: (given a trained network)
ARGUMENT:: sourceDataSet
Input data
@@ -90,41 +90,67 @@ EXAMPLES::
code::
//Make a simple mapping between a ramp and a sine cycle, test with an exponential ramp
(
~source = FluidDataSet(s,\mlp_regressor_source);
~target = FluidDataSet(s,\mlp_regressor_target);
~test = FluidDataSet(s,\mlp_regressor_dest);
~output = FluidDataSet(s,\mlp_regress_out);
~tmpbuf = Buffer.alloc(s,1);
~regressor = FluidMLPRegressor(s,[2],FluidMLPRegressor.tanh,1000,0.1,0.1,1,0);
)

//Make source, target and test data
(
~sourcedata = 128.collect{|i| i / 128 };
~targetdata = 128.collect{|i| sin(2 * pi * i / 128) };
~testdata = 128.collect{|i| (i / 128) ** 2 };
~source.load(
	Dictionary.with(
		*[\cols -> 1,\data -> Dictionary.newFrom(
			~sourcedata.collect{|x, i| [i.asString, [x]]}.flatten)])
);
~target.load(
	Dictionary.with(
		*[\cols -> 1,\data -> Dictionary.newFrom(
			~targetdata.collect{|x, i| [i.asString, [x]]}.flatten)])
);
~test.load(
	Dictionary.with(
		*[\cols -> 1,\data -> Dictionary.newFrom(
			~testdata.collect{|x, i| [i.asString, [x]]}.flatten)])
);
~targetdata.plot;
~source.print;
~target.print;
~test.print;
)

//Train the network for up to 1000 epochs (the maxIter argument) to map source to target.
//fit() returns the loss; if this is -1, training has failed.
//Run this repeatedly until the printed error is satisfactory to you.
~regressor.fit(~source, ~target, {|x| x.postln });

//you can change parameters of the regressor with setters
~regressor.learnRate = 0.01;
~regressor.momentum = 0;
~regressor.validation = 0.2;

//Predict against the test set, grabbing the output data so we can inspect it
(
~outputdata = Array(128);
~regressor.predict(~test, ~output, action: {
	~output.dump{|x| 128.do{|i|
		~outputdata.add(x["data"][i.asString][0])
	}};
});
)

//We should see a single cycle of a chirp; if not, run fit() again to train further
~outputdata.plot;
::
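An earlier revision of this example also demonstrated single-point prediction. Here is a sketch of the same idea against the network trained above, assuming the regressor's predictPoint takes an input link::Classes/Buffer::, an output link::Classes/Buffer:: and an action; check the predictPoint documentation for the actual signature.
code::
// A sketch only: predict a single point, writing the result into the
// one-frame ~tmpbuf allocated above. An input of 0.5 should map to
// roughly sin(2pi * 0.5) = 0 once the network is trained.
(
~inbuf = Buffer.loadCollection(s, [0.5], action: {|buf|
	~regressor.predictPoint(buf, ~tmpbuf, {
		~tmpbuf.loadToFloatArray(action: {|a| a.postln });
	});
});
)
::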
