TITLE:: FluidMLPRegressor
summary:: Regression with a multi-layer perceptron
categories:: Machine learning
related:: Classes/FluidMLPClassifier, Classes/FluidDataSet
DESCRIPTION::
Perform regression between link::Classes/FluidDataSet::s using a Multi-Layer Perceptron neural network.
CLASSMETHODS::
METHOD:: new
Creates a new instance on the server.
ARGUMENT:: server
The link::Classes/Server:: on which to run this model.
ARGUMENT:: hidden
An link::Classes/Array:: that gives the sizes of any hidden layers in the network (default is two hidden layers of three units each).
ARGUMENT:: activation
The activation function to use for the hidden layer units. Beware of the output range of each: relu (0 -> inf), sigmoid (0 -> 1), tanh (-1 -> 1).
ARGUMENT:: outputActivation
The activation function to use for the final layer units. Beware of the output range of each: relu (0 -> inf), sigmoid (0 -> 1), tanh (-1 -> 1).
ARGUMENT:: tapIn
The index of the layer whose input is used for code::predict:: and code::predictPoint::. Counting starts at 0, so the default of 0 is the input layer, 1 is the first hidden layer, and so on.
ARGUMENT:: tapOut
The index of the layer whose output is returned. Counting starts at 0 as the input layer, so 1 is the first hidden layer, and so on. The default of -1 is the last layer of the whole network.
ARGUMENT:: maxIter
The maximum number of iterations to use in training.
ARGUMENT:: learnRate
The learning rate of the network. Start small, increase slowly.
ARGUMENT:: momentum
The training momentum (default: 0.9).
ARGUMENT:: batchSize
The training batch size.
ARGUMENT:: validation
The fraction of the DataSet to hold back from training and use to validate the network.
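discussion::
A minimal sketch of the tap arguments (the layer sizes here are illustrative, not prescriptive): with hidden layers of code::[16, 2, 16]::, setting code::tapOut:: to 2 makes predictions return the output of the two-unit middle layer rather than of the final layer, an autoencoder-style use. This assumes the parameter setters used elsewhere in this file.
code::
(
// assumes ~regressor was created with hidden: [16, 2, 16] and has been fitted
~regressor.tapIn_(0);  // feed the network from its input layer (the default)
~regressor.tapOut_(2); // read predictions from the 2-unit bottleneck layer
)
::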
METHOD:: identity, sigmoid, relu, tanh
A set of convenience constants for the available activation functions.
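code::
// a minimal sketch, assuming a running server s: pass the constants as the activation arguments
FluidMLPRegressor(s, [3, 3], FluidMLPRegressor.sigmoid, FluidMLPRegressor.identity);
::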
INSTANCEMETHODS::
PRIVATE:: init, uid
METHOD:: fit
Train the network to map between a source and a target link::Classes/FluidDataSet::.
ARGUMENT:: sourceDataSet
Source data
ARGUMENT:: targetDataSet
Target data
ARGUMENT:: action
Function to run when training is complete
returns:: The training loss, or -1 if training failed
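discussion::
Training is iterative: each call to code::fit:: runs up to code::maxIter:: epochs starting from the network's current weights, so it can be called repeatedly. A minimal sketch, assuming code::~source:: and code::~target:: are populated link::Classes/FluidDataSet::s whose points share identifiers:
code::
~regressor.fit(~source, ~target, {|loss| "training loss: %".format(loss).postln });
::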
METHOD:: predict
Apply the learned mapping to a link::Classes/FluidDataSet:: (given a trained network)
ARGUMENT:: sourceDataSet
Input data
ARGUMENT:: targetDataSet
Output data
ARGUMENT:: action
Function to run when complete
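discussion::
A minimal sketch, assuming a trained network, code::~test:: holding input points and code::~output:: an existing destination link::Classes/FluidDataSet:::
code::
~regressor.predict(~test, ~output, { ~output.print });
::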
METHOD:: predictPoint
Apply the learned mapping to a single data point in a link::Classes/Buffer::
ARGUMENT:: sourceBuffer
Input point
ARGUMENT:: targetBuffer
Output point
ARGUMENT:: action
A function to run when complete
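discussion::
A minimal sketch, assuming a trained network with one input and one output dimension (run line by line, so the buffer is allocated before use):
code::
~in = Buffer.loadCollection(s, [0.25]); // one value per input dimension
~out = Buffer.new(s);
~regressor.predictPoint(~in, ~out, { ~out.getn(0, 1, {|v| v.postln }) });
::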
METHOD:: clear
Reset the network, erasing all trained weights and biases.
ARGUMENT:: action
A function to run when complete
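discussion::
code::
// a minimal sketch: after clearing, the network must be fitted again before it can predict
~regressor.clear({ "network reset".postln });
::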
EXAMPLES::
code::
//Make a simple mapping between a ramp and a sine cycle, then test with a squared ramp
(
~source = FluidDataSet(s,\mlp_regressor_source);
~target = FluidDataSet(s,\mlp_regressor_target);
~test = FluidDataSet(s,\mlp_regressor_dest);
~output = FluidDataSet(s,\mlp_regress_out);
~tmpbuf = Buffer.alloc(s,1);
~regressor = FluidMLPRegressor(s,[2], FluidMLPRegressor.tanh, FluidMLPRegressor.tanh, maxIter: 1000, learnRate: 0.1, momentum: 0.1, batchSize: 1, validation: 0);
)
//Make source, target and test data
(
~sourcedata = 128.collect{|i|i/128};
~targetdata = 128.collect{|i| sin(2*pi*i/128) };
~testdata = 128.collect{|i|(i/128)**2};
~source.load(
	Dictionary.with(
		*[\cols -> 1, \data -> Dictionary.newFrom(
			~sourcedata.collect{|x, i| [i.asString, [x]]}.flatten)])
);
~target.load(
	Dictionary.with(
		*[\cols -> 1, \data -> Dictionary.newFrom(
			~targetdata.collect{|x, i| [i.asString, [x]]}.flatten)])
);
~test.load(
	Dictionary.with(
		*[\cols -> 1, \data -> Dictionary.newFrom(
			~testdata.collect{|x, i| [i.asString, [x]]}.flatten)])
);
~targetdata.plot;
~source.print;
~target.print;
~test.print;
)
// Now fit the regressor to the source and target, and predict against the test data
// grab the output data whilst we're at it, so we can inspect it
// Run this to train the network for up to 1000 epochs (maxIter), mapping source to target. fit() returns the loss; if this is -1, training has failed. Re-run until the printed error is satisfactory to you
~regressor.fit(~source, ~target, {|x|x.postln;});
//you can change parameters of the FluidMLPRegressor with setters
~regressor.learnRate = 0.01;
~regressor.momentum = 0;
~regressor.validation = 0.2;
(
~outputdata = Array(128);
~regressor.predict(~test, ~output, action:{
~output.dump{|x| 128.do{|i|
~outputdata.add(x["data"][i.asString][0])
}};
});
)
//We should see a single cycle of a chirp. If not, run fit a few more times
~outputdata.plot;
// single point transform on arbitrary value
~inbuf = Buffer.loadCollection(s,[0.5]);
~outbuf = Buffer.new(s);
~regressor.predictPoint(~inbuf, ~outbuf, {|x| x.postln; x.getn(0, 1, {|y| y.postln }) });
::
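Trained models can be saved to and restored from JSON files. A minimal sketch, assuming the code::write:: and code::read:: methods common to FluCoMa model objects and a writable temporary directory:
code::
~regressor.write(Platform.defaultTempDir +/+ "mlp_regressor_demo.json");
~regressor.read(Platform.defaultTempDir +/+ "mlp_regressor_demo.json");
::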
subsection:: Server Side Queries
The model can also be queried from the server: when a trigger arrives on the assigned code::inBus::, the point currently held in code::inBuffer:: is transformed and written to code::outBuffer::, and a trigger is emitted on code::outBus::.
code::
//Setup
(
~inputPoint = Buffer.alloc(s,1);
~predictPoint = Buffer.alloc(s,1);
~pitchingBus = Bus.control;
~catchingBus = Bus.control;
)
(
~regressor.inBus_(~pitchingBus).outBus_(~catchingBus).inBuffer_(~inputPoint).outBuffer_(~predictPoint);
~inputSynth = {
var input = Saw.kr(2).linlin(-1,1,0,1);
var trig = Impulse.kr(ControlRate.ir/10);
BufWr.kr(input,~inputPoint,0);
Out.kr(~pitchingBus.index,[trig]);
};
~inputSynth.play(~regressor.synth,addAction:\addBefore);
~outputSynth = {
	Poll.kr(In.kr(~catchingBus.index), BufRd.kr(1, ~predictPoint, 0), "mapped value")
};
~outputSynth.play(~regressor.synth, addAction: \addAfter);
)
::