Merge branch 'clients/inter_client_comms' of https://bitbucket.org/flucoma/flucoma-supercollider into clients/inter_client_comms

nix
Gerard 5 years ago
commit 63832bc512

@@ -5,7 +5,7 @@ FluidMLPRegressor : FluidRTDataClient {
const <relu = 2;
const <tanh = 3;
-*new {|server, hidden = #[3,3], activation = 0, outputActivation = 0, inputTap = 0, outputTap = 0, maxIter = 1000, learnRate = 0.0001, momentum = 0.9, batchSize = 50, validation = 0.2|
+*new {|server, hidden = #[3,3], activation = 2, outputActivation = 0, tapIn = 0, tapOut = -1, maxIter = 1000, learnRate = 0.0001, momentum = 0.9, batchSize = 50, validation = 0.2|
var hiddenCtrlLabels;
hidden = [hidden.size]++hidden;
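
This signature change renames inputTap/outputTap to tapIn/tapOut, makes relu (2) the default hidden activation, and makes tapOut default to -1 (the final layer). A minimal sketch of an instantiation against the new signature, assuming a running server s and using standard SuperCollider keyword arguments:

code::
(
// tapIn/tapOut replace inputTap/outputTap; relu is now the default hidden activation
~mlp = FluidMLPRegressor(s,
	hidden: [6, 4],
	outputActivation: FluidMLPRegressor.sigmoid,
	tapIn: 0,   // read from the input layer (default)
	tapOut: -1  // emit from the last layer of the network (new default)
);
)
::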
@@ -16,8 +16,8 @@ FluidMLPRegressor : FluidRTDataClient {
[
\activation,activation,
\outputActivation, outputActivation,
-\inputTap, inputTap,
-\outputTap, outputTap,
+\tapIn, tapIn,
+\tapOut, tapOut,
\maxIter, maxIter,
\learnRate,learnRate,
\momentum, momentum,

@@ -34,7 +34,7 @@ The training batch size.
ARGUMENT:: validation
The fraction of the DataSet size to hold back during training to validate the network against.
-METHOD:: identity, relu, sigmoid, tanh
+METHOD:: identity, sigmoid, relu, tanh
A set of convenience constants for the available activation functions.
INSTANCEMETHODS::
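
The reordering matches the numeric values of the constants in the class file above (relu = 2, tanh = 3, which implies identity = 0 and sigmoid = 1). A small sketch showing that a constant and its raw integer are interchangeable, assuming a running server s:

code::
// these two calls configure the same tanh hidden activation
~a = FluidMLPRegressor(s, [3, 3], activation: FluidMLPRegressor.tanh);
~b = FluidMLPRegressor(s, [3, 3], activation: 3);
::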

@@ -20,11 +20,14 @@ An link::Classes/Array:: that gives the sizes of any hidden layers in the networ
ARGUMENT:: activation
The activation function to use for the hidden layer units. Beware of the permitted ranges of each: relu (0->inf), sigmoid (0->1), tanh (-1->1).
-ARGUMENT:: finalActivation
+ARGUMENT:: outputActivation
The activation function to use for the final layer units. Beware of the permitted ranges of each: relu (0->inf), sigmoid (0->1), tanh (-1->1).
-ARGUMENT:: outputLayer
-The layer whose output to return. It counts backwards from the end of the network, where the default of 0 is the output layer, 1 is the last hidden layer, and so on.
+ARGUMENT:: tapIn
+The layer whose input is used by predict and predictPoint. Counting starts at 0, where the default of 0 is the input layer, 1 is the first hidden layer, and so on.
+ARGUMENT:: tapOut
+The layer whose output to return. Counting starts at 0 as the input layer, with 1 being the first hidden layer, and so on. The default of -1 is the last layer of the whole network.
ARGUMENT:: maxIter
The maximum number of iterations to use in training.
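
Between them, tapIn and tapOut let a trained network be queried through an arbitrary sub-network, for example reading a bottleneck layer as an autoencoder-style feature extractor. A hedged sketch for a network with two hidden layers; the DataSet names are placeholders, and it assumes the parameter is settable after instantiation and that predict takes source and destination DataSets, as in the FluCoMa SC client API:

code::
// layers: 0 = input, 1 = first hidden (8 units), 2 = second hidden (2 units), -1 = output
~mlp = FluidMLPRegressor(s, hidden: [8, 2], tapIn: 0, tapOut: -1);
// ...after fitting, read the 2-unit bottleneck instead of the network output:
~mlp.tapOut = 2;
~mlp.predict(~sourceDataSet, ~latentDataSet);
::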
@@ -41,7 +44,7 @@ The training batch size.
ARGUMENT:: validation
The fraction of the DataSet size to hold back during training to validate the network against.
-METHOD:: identity, relu, sigmoid, tanh
+METHOD:: identity, sigmoid, relu, tanh
A set of convenience constants for the available activation functions.
INSTANCEMETHODS::
@@ -103,7 +106,7 @@ code::
~test = FluidDataSet(s,\mlp_regressor_dest);
~output = FluidDataSet(s,\mlp_regress_out);
~tmpbuf = Buffer.alloc(s,1);
-~regressor = FluidMLPRegressor(s,[2], FluidMLPRegressor.tanh, FluidMLPRegressor.tanh, 0, 1000,0.1,0.1,1,0);
+~regressor = FluidMLPRegressor(s,[2], FluidMLPRegressor.tanh, FluidMLPRegressor.tanh, maxIter: 1000, learnRate: 0.1, momentum: 0.1, batchSize: 1, validation: 0);
)
//Make source, target and test data
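
The updated call uses keyword arguments so that tapIn and tapOut keep their new defaults rather than being set positionally. For context, a hedged sketch of how the regressor declared above is then typically trained and queried, with fit/predict signatures assumed from the FluCoMa SC client API:

code::
// train against the source/target DataSets, then map the test set into ~output
~regressor.fit(~source, ~target, action: {|error| ("training error: " ++ error).postln });
~regressor.predict(~test, ~output, action: { ~output.print });
::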

@@ -73,6 +73,10 @@ An link::Classes/IdentityDictionary:: that details labels and start-end position
ARGUMENT:: action
A function that runs on completion; it will be passed the link::Classes/IdentityDictionary:: from link::#index:: as an argument.
+ARGUMENT:: tasks
+ANCHOR::ntasks::
+The number of parallel processing tasks to run on the server. Default 4. This should probably never be greater than the number of available CPU cores.
METHOD:: index
A link::Classes/IdentityDictionary:: containing information about the position of each discovered slice, using labels based on those passed into link::#play:: (see link::#labelling::). This dictionary copies all other entries from the source dictionary on a per-key basis (so you can store arbitrary stuff in there should you wish, and it will remain associated with its original source chunk).
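
A heavily hedged sketch of how the new tasks argument might be passed to play; the receiver, buffer, and index names here are hypothetical placeholders, since this diff does not show which class the method belongs to:

code::
// hypothetical objects: ~proc (the batch processor), ~srcBuffer, ~slices (an index dictionary)
~proc.play(s, ~srcBuffer, ~slices, action: {|index| ~index = index; "done".postln }, tasks: 4);
::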
