diff --git a/release-packaging/HelpSource/Classes/FluidMLPRegressor.schelp b/release-packaging/HelpSource/Classes/FluidMLPRegressor.schelp
index 512c87a..16e599f 100644
--- a/release-packaging/HelpSource/Classes/FluidMLPRegressor.schelp
+++ b/release-packaging/HelpSource/Classes/FluidMLPRegressor.schelp
@@ -20,10 +20,13 @@ An link::Classes/Array:: that gives the sizes of any hidden layers in the networ
 ARGUMENT:: activation
 The activation function to use for the hidden layer units. Beware of the permitted ranges of each: relu (0->inf), sigmoid (0->1), tanh (-1,1).
 
-ARGUMENT:: finalActivation
+ARGUMENT:: outputActivation
 The activation function to use for the final layer units. Beware of the permitted ranges of each: relu (0->inf), sigmoid (0->1), tanh (-1,1).
 
-ARGUMENT:: outputLayer
+ARGUMENT:: inputTap
+The layer whose input is used to predict and predictPoint. It is 0 counting, where the default of 0 is the input layer, and 1 would be the first hidden layer, and so on.
+
+ARGUMENT:: outputTap
 The layer whose output to return. It is negative 0 counting, where the default of 0 is the output layer, and 1 would be the last hidden layer, and so on.
 
 ARGUMENT:: maxIter
@@ -103,7 +106,7 @@ code::
 ~test = FluidDataSet(s,\mlp_regressor_dest);
 ~output = FluidDataSet(s,\mlp_regress_out);
 ~tmpbuf = Buffer.alloc(s,1);
-~regressor = FluidMLPRegressor(s,[2], FluidMLPRegressor.tanh, FluidMLPRegressor.tanh, 0, 1000,0.1,0.1,1,0);
+~regressor = FluidMLPRegressor(s,[2], FluidMLPRegressor.tanh, FluidMLPRegressor.tanh, 0, 0, 1000,0.1,0.1,1,0);
 )
 
 //Make source, target and test data
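
For review purposes, a minimal annotated sketch of the updated constructor call (not part of the patch): the comments map each position to the argument names documented above, so the newly inserted inputTap argument is easy to spot. The trailing training values are carried over from the patched example unlabelled, since this patch does not document them.

// Sketch only: the same call as in the patched example, annotated per position.
~regressor = FluidMLPRegressor(s,
	[2],                    // hidden layer sizes: one hidden layer of two units
	FluidMLPRegressor.tanh, // activation, for the hidden layer units
	FluidMLPRegressor.tanh, // outputActivation (renamed from finalActivation)
	0,                      // inputTap: 0 = the input layer (new argument)
	0,                      // outputTap: 0 = the output layer (renamed from outputLayer)
	1000,                   // maxIter
	0.1, 0.1, 1, 0          // remaining training parameters, unchanged by this patch
);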