TITLE:: FluidMLPRegressor
summary:: Regression with a multi-layer perceptron
categories:: Machine learning
related:: Classes/FluidMLPClassifier, Classes/FluidDataSet

DESCRIPTION::
Perform regression between link::Classes/FluidDataSet::s using a multi-layer perceptron neural network.

CLASSMETHODS::

METHOD:: new
Creates a new instance on the server.
ARGUMENT:: server
The link::Classes/Server:: on which to run this model.
ARGUMENT:: hidden
An link::Classes/Array:: that gives the sizes of any hidden layers in the network (default is two hidden layers of three units each).
ARGUMENT:: activation
The activation function to use for the hidden layer units.
ARGUMENT:: maxIter
The maximum number of iterations to use in training.
ARGUMENT:: learnRate
The learning rate of the network. Start small and increase slowly.
ARGUMENT:: momentum
The training momentum (default 0.9).
ARGUMENT:: batchSize
The training batch size.
ARGUMENT:: validation
The fraction of the DataSet to hold back during training in order to validate the network.

METHOD:: identity, relu, sigmoid, tanh
A set of convenience constants for the available activation functions. Beware of the output range of each: relu (0 → inf), sigmoid (0 → 1), tanh (-1 → 1).
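As a brief illustration of the constants above (a sketch, assuming a booted server in code::s::), each constant is simply a value that can be passed as the code::activation:: argument, by position or by keyword:

code::
(
// Hypothetical instance for illustration: two hidden layers of three units,
// sigmoid hidden activations (squashed to the range 0 -> 1)
~mlp = FluidMLPRegressor(s,
	hidden: [3, 3],
	activation: FluidMLPRegressor.sigmoid,
	maxIter: 1000
);
)
::

Because each activation has a bounded (or half-bounded) range, it is worth scaling your data to suit the activation you choose.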
ARGUMENT:: action
Function to run when complete.

METHOD:: predictPoint
Apply the learned mapping to a single data point in a link::Classes/Buffer::.
ARGUMENT:: sourceBuffer
Input point.
ARGUMENT:: targetBuffer
Output point.
ARGUMENT:: layer
The layer whose output to return.
ARGUMENT:: action
A function to run when complete.

METHOD:: init
Reset the network, erasing all learning done so far.
ARGUMENT:: action
A function to run when complete.

EXAMPLES::

code::
// Make a simple mapping between a ramp and a sine cycle; test with an exponential ramp
(
~source = FluidDataSet(s, \mlp_regressor_source);
~target = FluidDataSet(s, \mlp_regressor_target);
~test = FluidDataSet(s, \mlp_regressor_dest);
~output = FluidDataSet(s, \mlp_regress_out);
~tmpbuf = Buffer.alloc(s, 1);
~regressor = FluidMLPRegressor(s, [2], FluidMLPRegressor.tanh, 1000, 0.1, 0.1, 1, 0);
)

// Make source, target and test data
(
~sourcedata = 128.collect{ |i| i / 128 };
~targetdata = 128.collect{ |i| sin(2 * pi * i / 128) };
~testdata = 128.collect{ |i| (i / 128) ** 2 };

~source.load(
	Dictionary.with(
		*[\cols -> 1, \data -> Dictionary.newFrom(
			~sourcedata.collect{ |x, i| [i.asString, [x]] }.flatten)])
);

~target.load(
	Dictionary.with(
		*[\cols -> 1, \data -> Dictionary.newFrom(
			~targetdata.collect{ |x, i| [i.asString, [x]] }.flatten)])
);

~test.load(
	Dictionary.with(
		*[\cols -> 1, \data -> Dictionary.newFrom(
			~testdata.collect{ |x, i| [i.asString, [x]] }.flatten)])
);

~targetdata.plot;
~source.print;
~target.print;
~test.print;
)

// Now fit the regressor to the source and target, then predict against the test data,
// grabbing the output data whilst we're at it so we can inspect it.
// Run this to train the network (for up to maxIter epochs) to map source to target.
// fit() returns the loss; if this is -1, training has failed.
// Re-run until the printed error is satisfactory to you.
~regressor.fit(~source, ~target, { |loss| loss.postln });

// You can change parameters of the regressor with setters
~regressor.learnRate = 0.01;
~regressor.momentum = 0;
~regressor.validation = 0.2;

(
~outputdata = Array(128);
~regressor.predict(~test, ~output, action: {
	~output.dump{ |x|
		128.do{ |i|
			~outputdata.add(x["data"][i.asString][0])
		}
	};
});
)

// We should see a single cycle of a chirp. If not, fit for a few more epochs.
~outputdata.plot;
::
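The example above maps a whole DataSet with code::predict::; code::predictPoint:: does the same for a single point held in a link::Classes/Buffer::. A minimal sketch, assuming the regressor above has already been fitted and the buffer names are illustrative:

code::
(
// One input value in the source range (0..1), one slot for the output
~inbuf = Buffer.loadCollection(s, [0.25]);
~outbuf = Buffer.alloc(s, 1);
~regressor.predictPoint(~inbuf, ~outbuf, action: {
	// Read the prediction back; ideally close to sin(2pi * 0.25)
	~outbuf.getn(0, 1, { |vals| vals.postln });
});
)
::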