[Release] 1.0.0-beta5 (#73)
* FluidPitch.schelp
* FluidPitch.schelp
* edited example 8b-mlp-synth-control
brought it in line with the max example.
the user must put the synth params and 2D slider where they want them, _and then_ click 'add point'.
also simplified some of the SC code.
* FluidWaveform draws features & gate
* new team is credited (#39)
* adding guides from UKH and CIRMMT
* FluidBufToKr has optional numFrames argument
* FluidBufToKr has optional numFrames argument
* default for numFrames argument is -1
* FluidBufToKr help file
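The new optional numFrames argument can be sketched in use like this (the analysis buffer here is a hypothetical stand-in; numFrames defaults to -1, meaning "to the end of the buffer"):

```supercollider
// hedged sketch: poll only the first 3 frames of a control-rate analysis buffer
(
{
	var statsBuf = LocalBuf(13); // assumption: an analysis process fills this elsewhere
	var vals = FluidBufToKr.kr(statsBuf, numFrames: 3); // -1 (the default) would read all 13 frames
	vals.poll;
	Silent.ar;
}.play;
)
```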
* created FluidFilesPath and a test file
* fixed FluidPlotter call to createCatColors
* allow for passing the file name in as well
* one extra slash check!
* add section to bottom of nn-->fm for testing with audio files
* Revert "one extra slash check!"
This reverts commit 3ba5c4bf3e.
* add section to bottom of nn->fm to test with audio files
* FluidPlotter typo
* fix reference to FLUID_PARAMDUMP (#43)
* FluidFilesPath helpfile
* comments added, ted's TODO added in folder
* don't show rms color and allow passing waveformcolor
none of this is breaking (i think)
* FluidPlotter now allows > 1 identifier to highlight
* updated ted's to do list
* updated ted_helpfiles_outline
* deleted ted_helpfiles_outline
* clean up examples/guides folder
* typo
* made audio buffer optional for fluid waveform
* fluid waveform features are stackable or not
* fluid waveform spectrogram scratch paper
* colors
* fluid waveform spectrogram tests
* spectrogram flag is working -- there are some color considerations to make
* spectrogram alpha available
* read from color-schemes folder
* normalizeFeaturesIndependently argument, close method
* Nightly Builds and Continuous Integration (#38)
* build macos supercollider to begin with
* build on nightlies
* compile all 3
* sloppy indentation
* remove ninja for configs
* try all the builds
* fix indentation and dependency
* try packaging
* make full installation
* fix bigobj whinging in github
* move bigobj down
* remove huge pdb files
* remove pdb files
* build linux on ubuntu 18.04 LTS
* only build on dev branch and ci/nightlies branch
* parallelise zipping and correct the name
* use max cores on mac
* use windows-ly way of zipping files
* package things into non-nested zips
* download to here
* max -> supercollider 🤦
* use ninja and make release builds on windows
* sudo apt
* and PRs
* clone the dev branch of supercollider
* clone with https
* delete the old release before making a new one
* remove extraneous comment
* Revert "move bigobj down"
This reverts commit 5cd4a3532d6a629a071b1210e397f21fe416307f.
* Revert "fix bigobj whinging in github"
This reverts commit cb172b9c7ec2398ad0fbe6bb9456de91bfee990e.
* get core not SC
* use proper CMAKE variable for CORE
* use DFLUID_PATH not DFLUID_CORE
* update tags and remove make
* use choco to install ninja
* use ninja on windows
* update incorrect core link
* add working directory
* use composite action
* correctly point to the composite action
* specify toolchain for cmake
* use v2 of env flucoma action
* use an env variable to call CMAKE
* use composite action to build release
* remove env
* use flucoma actions to do building
* use sc not scbuild
* moved CSVs
* delete scratch paper file
* fluid waveform help file
* added more color schemes to choose from, also new grey scale
* [CI] Actions@v4 (#49)
* use v4 of sc actions
* dont build on PR
* amended the cmake to copy Resources and to capitalise Plugin
* omission in the NoveltySlice
* WIP towards a 'rasterBuffer' approach, waiting on interface decisions and scaling decisions
* melbands weirdness sorted
* no more error when audioBuffer is not passed
* bump
* user specified lin or log scaling
* log
* agnostic 🪵
* 'imageBuffer'
* removed word 'raster'
* waveform help file
* removed 'teds to do list' from repo
* implement startFrame as suggested in https://github.com/flucoma/flucoma-sc/issues/51
* remove extraneous postln
* test code for multiple overlays
* dummy commit
* FluCoMa-ize argument order and defaults, more error checks
* 🚧 updating help file examples
* still 🚧
* FluidWaveform: featureBuffer to featuresBuffer
* Fluid waveform layers (#53)
* layers cause race conditions
* front method keeps race conditions from happening
* allow for image color to be based on alpha
* bump
* bump
* more tests
* updated FluidWaveform help file examples
* download instructions
* made some helpfile examples
* change release action
* changed first argument to kr
to match the default for the restructured text 'schelp_descriptor.schelp' file in the 'flucoma-docs' repo
this needs to happen or else SCDoc will throw a warning every time the user opens this helpfile
* begin cleaning up of the examples folder
* argument typo in FluidLoudness 'maxwindowSize' --> 'maxWindowSize'
* typo: maxWindowSize in FluidLoudness
* fix FluidMFCC argument ordering
* [Enhance] Update resources folder structure (#57)
* copy the whole resources folder from core
* make fluidfilespath respect the new structure
* FluidChroma and FluidBufChroma help files alignment
* FluidMFCC docs repo alignment
* FluidLoudness docs repo alignment
* FluidCorpusManipulationToolkit Guide
* FluidBufNMF removed 'randomSeed' and 'windowType' (docs repo alignment)
* converted all ugly paths to FluidFilesPath
* fix color-schemes lookup per new folder structure
* BufAudioTransport now has A-B based Arguments
* Update nightly.yaml
Add workflow dispatch for manual launch
* moved the sc-only resources to a SC only folder, and change the cmake to copy the right stuff (#61)
* Enhance/integrate doc (#68)
* Add docs targets to CMake
* Add docs targets to nightly workflow
* fix doc copying for nightly
* try again to fix doc copying for nightly
* syntax error in yaml
* added the missing 'setLabel' method to FluidLabelSet
* a more convenient method call to FluidViewer to get colors
* NRT and Data objects ensure params can be set in NRT queue immediately after creation (#71)
fixes #70
Co-authored-by: tremblap
Co-authored-by: James Bradbury
Co-authored-by: Till
Co-authored-by: James Bradbury
Co-authored-by: Owen Green
parent b79331d024
commit b89e4c5c7f
@@ -1,37 +1,37 @@
 FluidBufNMF : FluidBufProcessor {
-	*kr {|source, startFrame = 0, numFrames = -1, startChan = 0, numChans = -1, resynth, bases, basesMode = 0, activations, actMode = 0, components = 1, iterations = 100, windowSize = 1024, hopSize = -1, fftSize = -1, windowType = 0, randomSeed = -1, trig = 1, blocking = 0|
+	*kr {|source, startFrame = 0, numFrames = -1, startChan = 0, numChans = -1, resynth, bases, basesMode = 0, activations, actMode = 0, components = 1, iterations = 100, windowSize = 1024, hopSize = -1, fftSize = -1, trig = 1, blocking = 0|
 		source.isNil.if {"FluidBufNMF: Invalid source buffer".throw};
 		resynth = resynth ? -1;
 		bases = bases ? -1;
 		activations = activations ? -1;

 		^FluidProxyUgen.kr(\FluidBufNMFTrigger,-1,source.asUGenInput, startFrame, numFrames, startChan, numChans, resynth.asUGenInput, bases.asUGenInput, basesMode, activations.asUGenInput, actMode, components, iterations, windowSize, hopSize, fftSize, trig, blocking);
 	}

-	*process { |server, source, startFrame = 0, numFrames = -1, startChan = 0, numChans = -1, resynth = -1, bases = -1, basesMode = 0, activations = -1, actMode = 0, components = 1, iterations = 100, windowSize = 1024, hopSize = -1, fftSize = -1, windowType = 0, randomSeed = -1,freeWhenDone = true, action|
+	*process { |server, source, startFrame = 0, numFrames = -1, startChan = 0, numChans = -1, resynth = -1, bases = -1, basesMode = 0, activations = -1, actMode = 0, components = 1, iterations = 100, windowSize = 1024, hopSize = -1, fftSize = -1,freeWhenDone = true, action|
 		source.isNil.if {"FluidBufNMF: Invalid source buffer".throw};
 		resynth = resynth ? -1;
 		bases = bases ? -1;
 		activations = activations ? -1;

 		^this.new(
 			server,nil,[resynth, bases, activations].select{|x| x!= -1}
 		).processList([source, startFrame, numFrames, startChan, numChans, resynth, bases, basesMode, activations, actMode, components,iterations, windowSize, hopSize, fftSize,0],freeWhenDone,action);
 	}

-	*processBlocking { |server, source, startFrame = 0, numFrames = -1, startChan = 0, numChans = -1, resynth = -1, bases = -1, basesMode = 0, activations = -1, actMode = 0, components = 1, iterations = 100, windowSize = 1024, hopSize = -1, fftSize = -1, windowType = 0, randomSeed = -1,freeWhenDone = true, action|
+	*processBlocking { |server, source, startFrame = 0, numFrames = -1, startChan = 0, numChans = -1, resynth = -1, bases = -1, basesMode = 0, activations = -1, actMode = 0, components = 1, iterations = 100, windowSize = 1024, hopSize = -1, fftSize = -1,freeWhenDone = true, action|
 		source.isNil.if {"FluidBufNMF: Invalid source buffer".throw};
 		resynth = resynth ? -1;
 		bases = bases ? -1;
 		activations = activations ? -1;

 		^this.new(
 			server,nil,[resynth, bases, activations].select{|x| x!= -1}
 		).processList([source, startFrame, numFrames, startChan, numChans, resynth, bases, basesMode, activations, actMode, components,iterations, windowSize, hopSize, fftSize, 1],freeWhenDone,action);
 	}
 }

 FluidBufNMFTrigger : FluidProxyUgen {}
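With windowType and randomSeed removed from the signatures above, a call to the updated *process would look roughly like this (~src, ~bases and ~activations are hypothetical buffers the user has set up):

```supercollider
// hedged sketch: decompose a user-supplied source buffer into 2 components
(
~bases = Buffer(s);
~activations = Buffer(s);
FluidBufNMF.process(s, ~src, bases: ~bases, activations: ~activations, components: 2, action: { "NMF done".postln });
// note: windowType: and randomSeed: are no longer accepted keywords here
)
```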
@@ -1,28 +1,75 @@
 FluidKrToBuf {
 	*kr {
-		arg krStream, buffer;
-		if(buffer.numFrames == 0) {"FluidKrToBuf: UGen will have 0 outputs!".warn};
-		if(buffer.numFrames > 1000) {"FluidKrToBuf: Buffer is % frames. This is probably not the buffer you intended.".format(buffer.numFrames).error};
-		^buffer.numFrames.do{
-			arg i;
-			BufWr.kr(krStream[i], buffer, i);
-		}
+		arg krStream, buffer, krStartChan = 0, krNumChans = -1, destStartFrame = 0;
+		var endChan;
+
+		// fix -1 default
+		if(krNumChans == -1,{krNumChans = krStream.numChannels - krStartChan});
+
+		// what is the last channel that will be used
+		endChan = (krStartChan + krNumChans) - 1;
+
+		if(buffer.isKindOf(Buffer).or(buffer.isKindOf(LocalBuf)),{
+
+			// sanity check
+			if(buffer.numFrames == 0){"% Buffer has 0 frames".format(this.class).warn};
+
+			// oopsie check
+			if(buffer.numFrames > 1000){
+				Error("% Buffer is % frames. This is probably not the buffer you intended.".format(this.class,buffer.numFrames)).throw;
+			};
+
+			// out of bounds check
+			if((destStartFrame + krNumChans) > buffer.numFrames,{
+				Error("% (destStartFrame + krNumChans) > buffer.numFrames".format(this.class)).throw;
+			});
+		});
+
+		^(krStartChan..endChan).do{
+			arg kr_i, i;
+			BufWr.kr(krStream[kr_i], buffer, destStartFrame + i);
+		}
 	}
 }

 FluidBufToKr {
 	*kr {
-		arg buffer;
-		if(buffer.numFrames == 0) {"FluidKrToBuf: Buffer has 0 frames!".warn};
-		if(buffer.numFrames > 1000) {"FluidKrToBuf: Buffer is % frames. This is probably not the buffer you intended.".format(buffer.numFrames).error};
-		if(buffer.numFrames > 1,{
-			^buffer.numFrames.collect{
-				arg i;
-				BufRd.kr(1,buffer,i,0,0);
-			}
-		},{
-			^BufRd.kr(1,buffer,0,0,0);
-		});
+		arg buffer, startFrame = 0, numFrames = -1;
+
+		// out of bounds check
+		if(startFrame < 0,{Error("% startFrame must be >= 0".format(this.class)).throw;});
+
+		if(buffer.isKindOf(Buffer) or: {buffer.isKindOf(LocalBuf)},{
+
+			// fix default -1
+			if(numFrames == -1,{numFrames = buffer.numFrames - startFrame});
+
+			// dummy check
+			if(numFrames < 1,{Error("% numFrames must be >= 1".format(this.class)).throw});
+
+			// out of bounds check
+			if((startFrame+numFrames) > buffer.numFrames,{Error("% (startFrame + numFrames) > buffer.numFrames".format(this.class)).throw;});
+
+		},{
+			// make sure the numFrames given is a positive integer
+			if((numFrames < 1) || (numFrames.isInteger.not),{
+				Error("% if no buffer is specified, numFrames must be a value >= 1.".format(this.class)).throw;
+			});
+		});
+
+		// oopsie check
+		if(numFrames > 1000) {
+			Error("%: numFrames is % frames. This is probably not what you intended.".format(this.class, numFrames)).throw;
+		};
+
+		if(numFrames > 1,{
+			^numFrames.collect{
+				arg i;
+				BufRd.kr(1,buffer,i+startFrame,0,0);
+			}
+		},{
+			^BufRd.kr(1,buffer,startFrame,0,0);
+		});
 	}
 }
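The new FluidKrToBuf channel and frame arguments can be sketched in use like this (the stream and destination buffer are hypothetical):

```supercollider
// hedged sketch: write channels 1-2 of a 3-channel kr stream into frames 4-5 of a buffer
(
{
	var dest = LocalBuf(8);
	var stream = [SinOsc.kr(1), SinOsc.kr(2), SinOsc.kr(3)];
	FluidKrToBuf.kr(stream, dest, krStartChan: 1, krNumChans: 2, destStartFrame: 4);
	Silent.ar;
}.play;
)
```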
@@ -0,0 +1,7 @@
+FluidFilesPath {
+	*new {
+		arg fileName;
+		fileName = fileName ? "";
+		^("%/../Resources/AudioFiles/".format(File.realpath(FluidDataSet.class.filenameSymbol).dirname) +/+ fileName);
+	}
+}
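Given the class above, usage is simply (the file name below is a hypothetical example, not a file guaranteed to ship with the package):

```supercollider
// with no argument, returns the bundled AudioFiles folder itself
FluidFilesPath().postln;

// with a file name, returns the full path to that file
b = Buffer.read(s, FluidFilesPath("some-example-file.wav"));
```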
@@ -1,47 +1,436 @@
-FluidWaveform {
-	*new {
-		arg audio_buf, slices_buf, bounds;
-		^super.new.init(audio_buf,slices_buf, bounds);
-	}
-
-	init {
-		arg audio_buf, slices_buf, bounds;
-		Task{
-			var path = "%%_%_FluidWaveform.wav".format(PathName.tmp,Date.localtime.stamp,UniqueID.next);
-			var sfv, win, userView;
-			bounds = bounds ? Rect(0,0,800,200);
-			win = Window("FluidWaveform",bounds);
-			audio_buf.write(path,"wav");
-			audio_buf.server.sync;
-			sfv = SoundFileView(win,Rect(0,0,bounds.width,bounds.height));
-			sfv.readFile(SoundFile(path));
-			sfv.gridOn_(false);
-			File.delete(path);
-
-			if(slices_buf.notNil,{
-				slices_buf.loadToFloatArray(action:{
-					arg slices_fa;
-					userView = UserView(win,Rect(0,0,bounds.width,bounds.height))
-					.drawFunc_({
-						slices_fa.do{
-							arg start_samp;
-							var x = start_samp.linlin(0,audio_buf.numFrames,0,bounds.width);
-							Pen.line(Point(x,0),Point(x,bounds.height));
-							Pen.color_(Color.red);
-							Pen.stroke;
-						};
-					});
-				});
-			});
-
-			win.front;
-		}.play(AppClock);
-	}
-
-	close {
-		win.close;
-	}
-}
+FluidViewer {
+
+	createCatColors {
+		^FluidViewer.createCatColors;
+	}
+
+	*createCatColors {
+		// colors from: https://github.com/d3/d3-scale-chromatic/blob/main/src/categorical/category10.js
+		^"1f77b4ff7f0e2ca02cd627289467bd8c564be377c27f7f7fbcbd2217becf".clump(6).collect{
+			arg six;
+			Color(*six.clump(2).collect{
+				arg two;
+				"0x%".format(two).interpret / 255;
+			});
+		}
+	}
+
+	*categoryColors {
+		^FluidViewer.createCatColors;
+	}
+}
+
+FluidWaveformAudioLayer {
+	var audioBuffer, waveformColor;
+
+	*new {
+		arg audioBuffer, waveformColor;
+		^super.new.init(audioBuffer,waveformColor);
+	}
+
+	init {
+		arg audioBuffer_, waveformColor_;
+		audioBuffer = audioBuffer_;
+		waveformColor = waveformColor_ ? Color.gray;
+	}
+
+	draw {
+		arg win, bounds;
+		fork({
+			var path = "%%_%_FluidWaveform.wav".format(PathName.tmp,Date.localtime.stamp,UniqueID.next);
+			var sfv;
+
+			audioBuffer.write(path,"wav");
+			audioBuffer.server.sync;
+
+			sfv = SoundFileView(win,bounds);
+			sfv.peakColor_(waveformColor);
+			sfv.drawsBoundingLines_(false);
+			sfv.rmsColor_(Color.clear);
+			sfv.background_(Color.clear);
+			sfv.readFile(SoundFile(path));
+			sfv.gridOn_(false);
+
+			File.delete(path);
+		},AppClock);
+		^audioBuffer.server;
+	}
+}
+
+FluidWaveformIndicesLayer : FluidViewer {
+	var indicesBuffer, audioBuffer, color, lineWidth;
+
+	*new {
+		arg indicesBuffer, audioBuffer, color, lineWidth = 1;
+		^super.new.init(indicesBuffer, audioBuffer, color, lineWidth);
+	}
+
+	init {
+		arg indicesBuffer_, audioBuffer_, color_, lineWidth_;
+		indicesBuffer = indicesBuffer_;
+		audioBuffer = audioBuffer_;
+		color = color_ ? Color.red;
+		lineWidth = lineWidth_;
+	}
+
+	draw {
+		arg win, bounds;
+
+		if(audioBuffer.notNil,{
+			fork({
+				indicesBuffer.numChannels.switch(
+					1,{
+						indicesBuffer.loadToFloatArray(action:{
+							arg slices_fa;
+							UserView(win,bounds)
+							.drawFunc_({
+								Pen.width_(lineWidth);
+								slices_fa.do{
+									arg start_samp;
+									var x = start_samp.linlin(0,audioBuffer.numFrames,0,bounds.width);
+									Pen.line(Point(x,0),Point(x,bounds.height));
+									Pen.color_(color);
+									Pen.stroke;
+								};
+							});
+						});
+					},
+					2,{
+						indicesBuffer.loadToFloatArray(action:{
+							arg slices_fa;
+							slices_fa = slices_fa.clump(2);
+							UserView(win,bounds)
+							.drawFunc_({
+								Pen.width_(lineWidth);
+								slices_fa.do{
+									arg arr;
+									var start = arr[0].linlin(0,audioBuffer.numFrames,0,bounds.width);
+									var end = arr[1].linlin(0,audioBuffer.numFrames,0,bounds.width);
+									Pen.addRect(Rect(start,0,end-start,bounds.height));
+									Pen.color_(color.alpha_(0.25));
+									Pen.fill;
+								};
+							});
+						});
+					},{
+						Error("% indicesBuffer must have either 1 or 2 channels.".format(this.class)).throw;
+					}
+				);
+			},AppClock);
+			^indicesBuffer.server;
+		},{
+			Error("% In order to display an indicesBuffer an audioBuffer must be included.".format(this.class)).throw;
+		});
+	}
+}
+
+FluidWaveformFeaturesLayer : FluidViewer {
+	var featuresBuffer, colors, stackFeatures, normalizeFeaturesIndependently;
+
+	*new {
+		arg featuresBuffer, colors, stackFeatures = false, normalizeFeaturesIndependently = true;
+		^super.new.init(featuresBuffer,colors,stackFeatures,normalizeFeaturesIndependently);
+	}
+
+	init {
+		arg featuresBuffer_, colors_, stackFeatures_ = false, normalizeFeaturesIndependently_ = true;
+		featuresBuffer = featuresBuffer_;
+		normalizeFeaturesIndependently = normalizeFeaturesIndependently_;
+		stackFeatures = stackFeatures_;
+		colors = colors_ ?? {this.createCatColors};
+
+		// we'll index into it to draw, so just in case the user passed just one color, this will ensure it can be "indexed" into
+		if(colors.isKindOf(SequenceableCollection).not,{colors = [colors]});
+	}
+
+	draw {
+		arg win, bounds;
+
+		featuresBuffer.loadToFloatArray(action:{
+			arg fa;
+			var minVal = 0, maxVal = 0;
+			var stacked_height;
+
+			if(stackFeatures,{
+				stacked_height = bounds.height / featuresBuffer.numChannels;
+			});
+
+			if(normalizeFeaturesIndependently.not,{
+				minVal = fa.minItem;
+				maxVal = fa.maxItem;
+			});
+
+			fa = fa.clump(featuresBuffer.numChannels).flop;
+
+			fork({
+				fa.do({
+					arg channel, channel_i;
+					var maxy; // a lower value
+					var miny; // a higher value
+
+					if(stackFeatures,{
+						miny = stacked_height * (channel_i + 1);
+						maxy = stacked_height * channel_i;
+					},{
+						miny = bounds.height;
+						maxy = 0;
+					});
+
+					if(normalizeFeaturesIndependently,{
+						minVal = channel.minItem;
+						maxVal = channel.maxItem;
+					});
+
+					channel = channel.resamp1(bounds.width).linlin(minVal,maxVal,miny,maxy);
+
+					UserView(win,bounds)
+					.drawFunc_({
+						Pen.moveTo(Point(0,channel[0]));
+						channel[1..].do{
+							arg val, i;
+							Pen.lineTo(Point(i+1,val));
+						};
+						Pen.color_(colors[channel_i % colors.size]);
+						Pen.stroke;
+					});
+				});
+			},AppClock);
+		});
+		^featuresBuffer.server;
+	}
+}
+
+FluidWaveformImageLayer {
+	var imageBuffer, imageColorScheme, imageColorScaling, imageAlpha;
+
+	*new {
+		arg imageBuffer, imageColorScheme = 0, imageColorScaling = 0, imageAlpha = 1;
+		^super.new.init(imageBuffer,imageColorScheme,imageColorScaling,imageAlpha);
+	}
+
+	init {
+		arg imageBuffer_, imageColorScheme_ = 0, imageColorScaling_ = 0, imageAlpha_ = 1;
+		imageBuffer = imageBuffer_;
+		imageColorScheme = imageColorScheme_;
+		imageColorScaling = imageColorScaling_;
+		imageAlpha = imageAlpha_;
+	}
+
+	draw {
+		arg win, bounds;
+		var colors;
+
+		if(imageColorScheme.isKindOf(Color),{
+			colors = 256.collect{
+				arg i;
+				Color(imageColorScheme.red,imageColorScheme.green,imageColorScheme.blue,i.linlin(0,255,0.0,1.0));
+			};
+		},{
+			imageColorScheme.switch(
+				0,{ colors = this.loadColorFile("CET-L02"); },
+				1,{ colors = this.loadColorFile("CET-L16"); },
+				2,{ colors = this.loadColorFile("CET-L08"); },
+				3,{ colors = this.loadColorFile("CET-L03"); },
+				4,{ colors = this.loadColorFile("CET-L04"); },
+				{
+					"% imageColorScheme: % is not valid.".format(thisMethod,imageColorScheme).warn;
+				}
+			);
+		});
+
+		imageBuffer.loadToFloatArray(action:{
+			arg vals;
+			fork({
+				var img = Image(imageBuffer.numFrames,imageBuffer.numChannels);
+
+				imageColorScaling.switch(
+					FluidWaveform.lin,{
+						var minItem = vals.minItem;
+						vals = (vals - minItem) / (vals.maxItem - minItem);
+						vals = (vals * 255).asInteger;
+					},
+					FluidWaveform.log,{
+						vals = (vals + 1e-6).log;
+						vals = vals.linlin(vals.minItem,vals.maxItem,0.0,255.0).asInteger;
+					},
+					{
+						"% colorScaling argument % is invalid.".format(thisMethod,imageColorScaling).warn;
+					}
+				);
+
+				vals.do{
+					arg val, index;
+					img.setColor(colors[val], index.div(imageBuffer.numChannels), imageBuffer.numChannels - 1 - index.mod(imageBuffer.numChannels));
+				};
+
+				UserView(win,bounds)
+				.drawFunc_{
+					img.drawInRect(Rect(0,0,bounds.width,bounds.height),fraction:imageAlpha);
+				};
+			},AppClock);
+		});
+		^imageBuffer.server;
+	}
+
+	loadColorFile {
+		arg filename;
+		^CSVFileReader.readInterpret(FluidFilesPath("../color-schemes/%.csv".format(filename))).collect{
+			arg row;
+			Color.fromArray(row);
+		}
+	}
+}
+
+FluidWaveform : FluidViewer {
+	classvar <lin = 0, <log = 1;
+	var <win, bounds, display_bounds, <layers;
+
+	*new {
+		arg audioBuffer, indicesBuffer, featuresBuffer, parent, bounds, lineWidth = 1, waveformColor, stackFeatures = false, imageBuffer, imageColorScheme = 0, imageAlpha = 1, normalizeFeaturesIndependently = true, imageColorScaling = 0;
+		^super.new.init(audioBuffer,indicesBuffer, featuresBuffer, parent, bounds, lineWidth, waveformColor,stackFeatures,imageBuffer,imageColorScheme,imageAlpha,normalizeFeaturesIndependently,imageColorScaling);
+	}
+
+	init {
+		arg audio_buf, slices_buf, feature_buf, parent_, bounds_, lineWidth = 1, waveformColor, stackFeatures = false, imageBuffer, imageColorScheme = 0, imageAlpha = 1, normalizeFeaturesIndependently = true, imageColorScaling = 0;
+		layers = List.new;
+
+		fork({
+			var plotImmediately = false;
+
+			bounds = bounds_;
+
+			waveformColor = waveformColor ? Color(*0.6.dup(3));
+
+			if(bounds.isNil && imageBuffer.notNil,{
+				bounds = Rect(0,0,imageBuffer.numFrames,imageBuffer.numChannels);
+			});
+
+			bounds = bounds ? Rect(0,0,800,200);
+
+			if(parent_.isNil,{
+				win = Window("FluidWaveform",bounds);
+				win.background_(Color.white);
+				display_bounds = Rect(0,0,bounds.width,bounds.height);
+			},{
+				win = parent_;
+				display_bounds = bounds;
+			});
+
+			if(imageBuffer.notNil,{
+				this.addImageLayer(imageBuffer,imageColorScheme,imageColorScaling,imageAlpha);
+				imageBuffer.server.sync;
+				plotImmediately = true;
+			});
+
+			if(audio_buf.notNil,{
+				this.addAudioLayer(audio_buf,waveformColor);
+				audio_buf.server.sync;
+				plotImmediately = true;
+			});
+
+			if(feature_buf.notNil,{
+				this.addFeaturesLayer(feature_buf,this.createCatColors,stackFeatures,normalizeFeaturesIndependently);
+				feature_buf.server.sync;
+				plotImmediately = true;
+			});
+
+			if(slices_buf.notNil,{
+				this.addIndicesLayer(slices_buf,audio_buf,Color.red,lineWidth);
+				slices_buf.server.sync;
+				plotImmediately = true;
+			});
+
+			if(plotImmediately,{this.front;});
+		},AppClock);
+	}
+
+	addImageLayer {
+		arg imageBuffer, imageColorScheme = 0, imageColorScaling = 0, imageAlpha = 1;
+		var l = FluidWaveformImageLayer(imageBuffer,imageColorScheme,imageColorScaling,imageAlpha);
+		layers.add(l);
+	}
+
+	addAudioLayer {
+		arg audioBuffer, waveformColor;
+		var l = FluidWaveformAudioLayer(audioBuffer,waveformColor);
+		layers.add(l);
+	}
+
+	addIndicesLayer {
+		arg indicesBuffer, audioBuffer, color, lineWidth = 1;
+		var l = FluidWaveformIndicesLayer(indicesBuffer,audioBuffer,color,lineWidth);
+		layers.add(l);
+	}
+
+	addFeaturesLayer {
+		arg featuresBuffer, colors, stackFeatures = false, normalizeFeaturesIndependently = true;
+		var l = FluidWaveformFeaturesLayer(featuresBuffer,colors,stackFeatures,normalizeFeaturesIndependently);
+		layers.add(l);
+	}
+
+	addLayer {
+		arg fluidWaveformLayer;
+		layers.add(fluidWaveformLayer);
+	}
+
+	front {
+		fork({
+			UserView(win,display_bounds)
+			.drawFunc_{
+				Pen.fillColor_(Color.white);
+				Pen.addRect(Rect(0,0,bounds.width,bounds.height));
+				Pen.fill;
+			};
+
+			layers.do{
+				arg layer;
+				layer.draw(win,display_bounds).sync;
+			};
+
+			win.front;
+		},AppClock);
+	}
+
+	close {
+		win.close;
+	}
+}
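The layered interface above can be sketched in use like this (~audio and ~slices are hypothetical, previously filled buffers):

```supercollider
// one-shot: pass buffers directly and the window fronts itself
~fw = FluidWaveform(audioBuffer: ~audio, indicesBuffer: ~slices, bounds: Rect(0, 0, 800, 200));

// or compose layers manually, then draw
(
~fw = FluidWaveform(bounds: Rect(0, 0, 800, 200));
~fw.addAudioLayer(~audio, Color.blue);
~fw.addIndicesLayer(~slices, ~audio, Color.red, lineWidth: 2);
~fw.front;
)
```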
@@ -1,100 +0,0 @@
-(
-var win, soundFileView, freqSscope, loadButton, loopButton;
-var harmSlider, percSlider, mixSlider;
-var soundFile, buffer;
-var synthDef, synth;
-var makeSynthDef;
-
-Font.default = Font("Monaco", 16);
-buffer = Buffer.new;
-win = Window.new("HPSS", Rect(200,200,800,450)).background_(Color.gray);
-
-soundFileView = SoundFileView.new(win)
-.gridOn_(false)
-.waveColors_([Color.white]);
-
-loadButton = Button(win, Rect(0, 0, 100, 100))
-.minHeight_(150)
-.states_([["Load", Color.grey, Color.grey(0.8)]]);
-
-loopButton = Button(win, Rect(0, 0, 100, 100))
-.minHeight_(150)
-.states_(
-	[["Play", Color.grey, Color.grey(0.8)],
-	["Stop", Color.grey, Color.grey(0.2)]]
-);
-
-harmSlider = Slider(win, Rect(0, 0, 100, 10)).value_(0.5);
-percSlider = Slider(win, Rect(0, 0, 100, 10)).value_(0.5);
-mixSlider = Slider(win, Rect(0, 0, 100, 10)).value_(0.5);
-freqSscope = FreqScopeView(win, server:Server.default);
-freqSscope.active_(true);
-
-loadButton.action_{
-	FileDialog({ |path|
-		soundFile = SoundFile.new;
-		soundFile.openRead(path[0]);
-		buffer = Buffer.read(Server.default, path[0]);
-		soundFileView.soundfile = soundFile;
-		soundFileView.read(0, soundFile.numFrames);
-	});
-};
-
-loopButton.action_{|but|
-	if(but.value == 1, {
-		synth = Synth(\hpssExtractionDemo, [\buffer, buffer.bufnum]);
-		mixSlider.action.value(mixSlider);
-	},{
-		synth.free;
-	});
-};
-
-mixSlider.action_{|slider|
-	synth.set(\bal, ControlSpec(0, 1).map(slider.value));
-};
-
-makeSynthDef = {
-	synthDef = SynthDef(\hpssExtractionDemo,
-		{|buffer, bal = 0.5|
-			var player, fhpss, mix;
-			var harmSize = (2 * ControlSpec(1, 100, step:1).map(harmSlider.value)) - 1;
-			var percSize = (2 * ControlSpec(1,100, step:1).map(percSlider.value)) - 1;
-			player = PlayBuf.ar(1, buffer, loop:1);
-			fhpss = FluidHPSS.ar(in: player, harmFilterSize: harmSize, percFilterSize: percSize, maskingMode: 1, harmThreshFreq1: 0.1, harmThreshAmp1: 0, harmThreshFreq2: 0.5, harmThreshAmp2: 0, percThreshFreq1: 0.1, percThreshAmp1: 0, percThreshFreq2: 0.5, percThreshAmp2: 0, windowSize: 1024, hopSize: 256, fftSize: -1);
-
-			mix = (bal * fhpss[0]) + ((1 - bal) * fhpss[1]);
-			Out.ar(0,Pan2.ar(mix));
-		}
-	).add;
-};
-
-win.layout_(
-	VLayout(
-		[
-			HLayout(
-				[loadButton, stretch:1],
-				[soundFileView, stretch:5]
-			), stretch:2
-		],
-		[
-			HLayout(
-				[loopButton, stretch:1],
-				[VLayout(
-					HLayout(StaticText(win).string_("H Size ").minWidth_(100), harmSlider),
-					HLayout(StaticText(win).string_("P Size").minWidth_(100), percSlider),
-					HLayout(StaticText(win).string_("Mix").minWidth_(100), mixSlider)
-				), stretch:5]
-			), stretch:2
-		],
-		[freqSscope, stretch:2]
-	)
-);
-
-makeSynthDef.value;
-win.front;
-)
@ -1,111 +0,0 @@
(
var server;
var win, soundFileView, loadButton, loopButton;
var sliders;
var soundFile, audioBuffer, destBuffer;
var synthDef, synth;
var sl1, sl2, sl3, sl4;

server = Server.default;
Font.default = Font("Monaco", 16);

audioBuffer = Buffer.new;
destBuffer = Buffer.new;

synthDef = SynthDef(\nmfDemo, { |bufnum, a1, a2, a3, a4|
	var p = PlayBuf.ar(4, bufnum, loop: 1);
	var mix = (a1 * p[0]) + (a2 * p[1]) + (a3 * p[2]) + (a4 * p[3]);
	Out.ar(0, Pan2.ar(mix));
}).add;

win = Window.new("NMF4",
	Rect(200, 200, 800, 450)).background_(Color.gray);

soundFileView = SoundFileView.new(win)
.gridOn_(false)
.waveColors_([Color.white]);

loadButton = Button(win, Rect(0, 0, 100, 100))
.minHeight_(150)
.states_([["Load", Color.grey, Color.grey(0.8)],
	["Wait", Color.grey, Color.grey(0.2)]]
);

loopButton = Button(win, Rect(0, 0, 100, 100))
.minHeight_(150)
.states_(
	[["Play", Color.grey, Color.grey(0.8)],
	["Stop", Color.grey, Color.grey(0.2)]]
);

sliders = Array.fill(4, { |i|
	var s = Slider(win, Rect(0, 0, 100, 10)).value_(0.5);
	s.action_{
		var sym = ("a" ++ (i + 1)).asSymbol;
		synth.set(sym, ControlSpec(0, 1).map(s.value));
	}
});

loadButton.action_{
	FileDialog({ |path|
		soundFile = SoundFile.new;
		soundFile.openRead(path[0]);
		soundFileView.soundfile = soundFile;
		soundFileView.read(0, soundFile.numFrames);
		Routine{
			audioBuffer = Buffer.read(server, path[0]);
			server.sync;
			FluidBufNMF.process(server,
				audioBuffer.bufnum, resynth: destBuffer.bufnum, components: 4
			);
			server.sync;
			destBuffer.query;
			server.sync;
			{ loadButton.value_(0) }.defer;
		}.play;
	});
};

loopButton.action_{ |but|
	var a1 = ControlSpec(0, 1).map(sliders[0].value);
	var a2 = ControlSpec(0, 1).map(sliders[1].value);
	var a3 = ControlSpec(0, 1).map(sliders[2].value);
	var a4 = ControlSpec(0, 1).map(sliders[3].value);

	if(but.value == 1, {
		synth = Synth(\nmfDemo,
			[\bufnum, destBuffer.bufnum, \a1, a1, \a2, a2, \a3, a3, \a4, a4]);
	},{
		synth.free;
	});
};

win.layout_(
	VLayout(
		[
			HLayout(
				[loadButton, stretch: 1],
				[soundFileView, stretch: 5]
			), stretch: 2
		],
		[
			HLayout(
				[loopButton, stretch: 1],
				[VLayout(
					HLayout(StaticText(win).string_("source 1 ").minWidth_(100), sliders[0]),
					HLayout(StaticText(win).string_("source 2 ").minWidth_(100), sliders[1]),
					HLayout(StaticText(win).string_("source 3 ").minWidth_(100), sliders[2]),
					HLayout(StaticText(win).string_("source 4 ").minWidth_(100), sliders[3])
				), stretch: 5]
			), stretch: 2
		]
	)
);

win.front;
)
@ -1,150 +0,0 @@
(
var server;
var win, soundFileView, loadButton, processButton;
var ksSlider, thSlider;
var soundFile, audioBuffer, slicesBuffer, slicesArray;
var addSelections, playFunc, stopFunc;
var synthDef, synth;
var synths;

var playing, currentSelection, colors, prevColor;
var qwerty = "1234567890qwertyuiopasdfghjklzxcvbnm";
playing = Array.fill(qwerty.size, { false });
server = Server.default;
Font.default = Font("Monaco", 16);

audioBuffer = Buffer.new;
slicesBuffer = Buffer.new;

colors = Array.fill(qwerty.size, { Color.rand });
synths = Array.fill(qwerty.size, { nil });

synthDef = SynthDef(\noveltySegDemo, { |buf, start, end|
	Out.ar(0, BufRd.ar(1, buf, Phasor.ar(1, 1, start, end)));
}).add;

playFunc = { |index|
	var dur;
	currentSelection = index;
	if(playing[index].not){
		synths[index] = Synth(\noveltySegDemo,
			[\buf, audioBuffer.bufnum,
				\start, slicesArray[index],
				\end, slicesArray[index + 1]
		]);
		playing[index] = true;
	};
	soundFileView.setSelectionColor(currentSelection, Color.white);
};

stopFunc = { |index|
	synths[index].free;
	playing[index] = false;
	soundFileView.setSelectionColor(
		index, colors[index]
	);
};

win = Window.new("NoveltySegmentation",
	Rect(200, 200, 800, 450)).background_(Color.gray);

win.view.keyDownAction_{ |view, char, modifiers, unicode, keycode, key|
	var num = qwerty.indexOf(char);
	if(num.notNil && slicesArray.notNil){
		playFunc.value(num);
	}
};

win.view.keyUpAction_{ |view, char|
	var num = qwerty.indexOf(char);
	if(num.notNil){
		stopFunc.value(num);
	}
};

soundFileView = SoundFileView.new(win)
.gridOn_(false)
.waveColors_([Color.white]);

loadButton = Button(win, Rect(0, 0, 100, 100))
.minHeight_(150)
.states_([["Load", Color.grey, Color.grey(0.8)]]);

processButton = Button(win, Rect(0, 0, 100, 100))
.minHeight_(150)
.states_(
	[["Process", Color.grey, Color.grey(0.8)],
	["Wait", Color.grey, Color.grey(0.2)]]
);

ksSlider = Slider(win, Rect(0, 0, 100, 10)).value_(0.5);
thSlider = Slider(win, Rect(0, 0, 100, 10)).value_(0.5);

loadButton.action_{
	FileDialog({ |path|
		soundFile = SoundFile.new;
		soundFile.openRead(path[0]);
		audioBuffer = Buffer.read(server, path[0]);
		soundFileView.soundfile = soundFile;
		soundFileView.read(0, soundFile.numFrames);
	});
};

processButton.action_{ |but|
	var ks = (2 * ControlSpec(2, 100, step: 1).map(ksSlider.value)) - 1;
	var th = ControlSpec(0, 1).map(thSlider.value);
	if(but.value == 1, {
		Routine{
			FluidBufNoveltySlice.process(
				server,
				source: audioBuffer.bufnum,
				indices: slicesBuffer.bufnum,
				kernelSize: ks,
				threshold: th
			);
			server.sync;
			slicesBuffer.loadToFloatArray(action: { |arr|
				slicesArray = arr;
				{
					processButton.value_(0);
					addSelections.value(slicesArray)
				}.defer;
			});
		}.play;
	});
};

addSelections = { |array|
	var nSegments = min(array.size, soundFileView.selections.size) - 1;
	soundFileView.selections.do({ |sel, i| soundFileView.selectNone(i) });
	nSegments.do({ |i|
		soundFileView.setSelectionStart(i, array[i]);
		soundFileView.setSelectionSize(i, array[i + 1] - array[i]);
		soundFileView.setSelectionColor(i, colors[i]);
	});
};

win.layout_(
	VLayout(
		[
			HLayout(
				[loadButton, stretch: 1],
				[soundFileView, stretch: 5]
			), stretch: 2
		],
		[
			HLayout(
				[processButton, stretch: 1],
				[VLayout(
					HLayout(StaticText(win).string_("Kernel ").minWidth_(100), ksSlider),
					HLayout(StaticText(win).string_("Threshold ").minWidth_(100), thSlider)
				), stretch: 5]
			), stretch: 2
		]
	)
);

win.front;
)
@ -1,107 +0,0 @@
(
var win, soundFileView, freqScope, loadButton, loopButton;
var thresholdSlider, lenSlider, mixSlider;
var soundFile, buffer;
var synthDef, synth;

Font.default = Font("Monaco", 16);
buffer = Buffer.new;
win = Window.new("SineExtraction",
	Rect(200, 200, 800, 450)).background_(Color.gray);

soundFileView = SoundFileView.new(win)
.gridOn_(false)
.waveColors_([Color.white]);

loadButton = Button(win, Rect(0, 0, 100, 100))
.minHeight_(150)
.states_([["Load", Color.grey, Color.grey(0.8)]]);

loopButton = Button(win, Rect(0, 0, 100, 100))
.minHeight_(150)
.states_(
	[["Play", Color.grey, Color.grey(0.8)],
	["Stop", Color.grey, Color.grey(0.2)]]
);

thresholdSlider = Slider(win, Rect(0, 0, 100, 10)).value_(0.5);
lenSlider = Slider(win, Rect(0, 0, 100, 10)).value_(0.5);
mixSlider = Slider(win, Rect(0, 0, 100, 10)).value_(0.5);
freqScope = FreqScopeView(win, server: Server.default);
freqScope.active_(true);

loadButton.action_{
	FileDialog({ |path|
		soundFile = SoundFile.new;
		soundFile.openRead(path[0]);
		buffer = Buffer.read(Server.default, path[0]);
		soundFileView.soundfile = soundFile;
		soundFileView.read(0, soundFile.numFrames);
	});
};

loopButton.action_{ |but|
	if(but.value == 1, {
		synth = Synth(\sineExtractionDemo, [\buffer, buffer.bufnum]);
		mixSlider.action.value(mixSlider);
		thresholdSlider.action.value(thresholdSlider);
		lenSlider.action.value(lenSlider);
	},{
		synth.free;
	});
};

mixSlider.action_{ |slider|
	synth.set(\bal, ControlSpec(0, 1).map(slider.value));
};

thresholdSlider.action_{ |slider|
	synth.set(\threshold, ControlSpec(-144, 0).map(slider.value));
};

lenSlider.action_{ |slider|
	synth.set(\minLength, ControlSpec(0, 30).map(slider.value));
};

synthDef = SynthDef(\sineExtractionDemo,
	{ |buffer, threshold = 0.9, minLength = 15, bal = 0.5|
		var player, fse, mix;
		player = PlayBuf.ar(1, buffer, loop: 1);
		fse = FluidSines.ar(in: player, bandwidth: 76,
			detectionThreshold: threshold, minTrackLen: minLength,
			windowSize: 2048,
			hopSize: 512, fftSize: 8192
		);
		mix = (bal * fse[0]) + ((1 - bal) * fse[1]);
		Out.ar(0, Pan2.ar(mix));
	}
).add;

win.layout_(
	VLayout(
		[
			HLayout(
				[loadButton, stretch: 1],
				[soundFileView, stretch: 5]
			), stretch: 2
		],
		[
			HLayout(
				[loopButton, stretch: 1],
				[VLayout(
					HLayout(StaticText(win).string_("Threshold ").minWidth_(100), thresholdSlider),
					HLayout(StaticText(win).string_("Min Length").minWidth_(100), lenSlider),
					HLayout(StaticText(win).string_("Mix").minWidth_(100), mixSlider)
				), stretch: 5]
			), stretch: 2
		],
		[freqScope, stretch: 2]
	)
);

win.front;
)
@ -1,103 +0,0 @@
(
var win, soundFileView, freqScope, loadButton, loopButton;
var fwSlider, bwSlider, mixSlider;
var soundFile, buffer;
var synthDef, synth;

Font.default = Font("Monaco", 16);
buffer = Buffer.new;
win = Window.new("TransientExtraction",
	Rect(200, 200, 800, 450)).background_(Color.gray);

soundFileView = SoundFileView.new(win)
.gridOn_(false)
.waveColors_([Color.white]);

loadButton = Button(win, Rect(0, 0, 100, 100))
.minHeight_(150)
.states_([["Load", Color.grey, Color.grey(0.8)]]);

loopButton = Button(win, Rect(0, 0, 100, 100))
.minHeight_(150)
.states_(
	[["Play", Color.grey, Color.grey(0.8)],
	["Stop", Color.grey, Color.grey(0.2)]]
);

fwSlider = Slider(win, Rect(0, 0, 100, 10)).value_(0.5);
bwSlider = Slider(win, Rect(0, 0, 100, 10)).value_(0.5);
mixSlider = Slider(win, Rect(0, 0, 100, 10)).value_(0.5);
freqScope = FreqScopeView(win, server: Server.default);
freqScope.active_(true);

loadButton.action_{
	FileDialog({ |path|
		soundFile = SoundFile.new;
		soundFile.openRead(path[0]);
		buffer = Buffer.read(Server.default, path[0]);
		soundFileView.soundfile = soundFile;
		soundFileView.read(0, soundFile.numFrames);
	});
};

loopButton.action_{ |but|
	if(but.value == 1, {
		synth = Synth(\transientExtractionDemo, [\buffer, buffer.bufnum]);
		mixSlider.action.value(mixSlider);
		fwSlider.action.value(fwSlider);
		bwSlider.action.value(bwSlider);
	},{
		synth.free;
	});
};

mixSlider.action_{ |slider|
	synth.set(\bal, ControlSpec(0, 1).map(slider.value));
};

fwSlider.action_{ |slider|
	synth.set(\fw, ControlSpec(0.0001, 3, \exp).map(slider.value));
};

bwSlider.action_{ |slider|
	synth.set(\bw, ControlSpec(0.0001, 3, \exp).map(slider.value));
};

synthDef = SynthDef(\transientExtractionDemo,
	{ |buffer, fw = 3, bw = 1, bal = 0.5|
		var player, fte, mix;
		player = PlayBuf.ar(1, buffer, loop: 1);
		fte = FluidTransients.ar(in: player, threshFwd: fw, threshBack: bw, clumpLength: 256);
		mix = (bal * fte[0]) + ((1 - bal) * fte[1]);
		Out.ar(0, Pan2.ar(mix));
	}
).add;

win.layout_(
	VLayout(
		[
			HLayout(
				[loadButton, stretch: 1],
				[soundFileView, stretch: 5]
			), stretch: 2
		],
		[
			HLayout(
				[loopButton, stretch: 1],
				[VLayout(
					HLayout(StaticText(win).string_("Forward Th ").minWidth_(100), fwSlider),
					HLayout(StaticText(win).string_("Backward Th").minWidth_(100), bwSlider),
					HLayout(StaticText(win).string_("Mix").minWidth_(100), mixSlider)
				), stretch: 5]
			), stretch: 2
		],
		[freqScope, stretch: 2]
	)
);

win.front;
)
@ -1,148 +0,0 @@
(
var server;
var win, soundFileView, loadButton, processButton;
var fwSlider, bwSlider, debounceSlider;
var soundFile, audioBuffer, slicesBuffer, slicesArray;
var addSelections, playFunc, stopFunc;
var synthDef, synth;

var playing, currentSelection, colors, prevColor;
var qwerty = "1234567890qwertyuiopasdfghjklzxcvbnm";

playing = false;
server = Server.default;
Font.default = Font("Monaco", 16);

audioBuffer = Buffer.new;
slicesBuffer = Buffer.new;

colors = Array.fill(64, { Color.rand });

synthDef = SynthDef(\transientSegDemo, { |buf, start, end|
	Out.ar(0, BufRd.ar(1, buf, Phasor.ar(1, 1, start, end)));
}).add;

playFunc = { |index|
	var dur;
	currentSelection = index;
	if(playing.not){
		synth = Synth(\transientSegDemo,
			[\buf, audioBuffer.bufnum,
				\start, slicesArray[index],
				\end, slicesArray[index + 1]
		]);
		playing = true;
	};
	soundFileView.setSelectionColor(currentSelection, Color.white);
};

stopFunc = {
	synth.free;
	playing = false;
	soundFileView.setSelectionColor(currentSelection, colors[currentSelection]);
};

win = Window.new("TransientSegmentation",
	Rect(200, 200, 800, 450)).background_(Color.gray);

win.view.keyDownAction_{ |view, char, modifiers, unicode, keycode, key|
	var num = qwerty.indexOf(char);
	if(num.notNil && slicesArray.notNil){
		playFunc.value(num);
	}
};

win.view.keyUpAction_{ stopFunc.value; };

soundFileView = SoundFileView.new(win)
.gridOn_(false)
.waveColors_([Color.white]);

loadButton = Button(win, Rect(0, 0, 100, 100))
.minHeight_(150)
.states_([["Load", Color.grey, Color.grey(0.8)]]);

processButton = Button(win, Rect(0, 0, 100, 100))
.minHeight_(150)
.states_(
	[["Process", Color.grey, Color.grey(0.8)],
	["Wait", Color.grey, Color.grey(0.2)]]
);

fwSlider = Slider(win, Rect(0, 0, 100, 10)).value_(0.5);
bwSlider = Slider(win, Rect(0, 0, 100, 10)).value_(0.5);
debounceSlider = Slider(win, Rect(0, 0, 100, 10)).value_(0.5);

loadButton.action_{
	FileDialog({ |path|
		soundFile = SoundFile.new;
		soundFile.openRead(path[0]);
		audioBuffer = Buffer.read(server, path[0]);
		soundFileView.soundfile = soundFile;
		soundFileView.read(0, soundFile.numFrames);
	});
};

processButton.action_{ |but|
	var fw = ControlSpec(0.0001, 3, \exp).map(fwSlider.value);
	var bw = ControlSpec(0.0001, 3, \exp).map(bwSlider.value);
	var db = ControlSpec(1, 4410).map(debounceSlider.value);
	if(but.value == 1, {
		Routine{
			FluidBufTransientSlice.process(
				server,
				source: audioBuffer.bufnum,
				indices: slicesBuffer.bufnum,
				threshFwd: fw,
				threshBack: bw,
				clumpLength: db
			);
			server.sync;
			slicesBuffer.loadToFloatArray(action: { |arr|
				slicesArray = arr;
				{
					processButton.value_(0);
					addSelections.value(slicesArray)
				}.defer;
			});
		}.play;
	});
};

addSelections = { |array|
	var nSegments = min(array.size, soundFileView.selections.size) - 1;
	soundFileView.selections.do({ |sel, i| soundFileView.selectNone(i) });
	nSegments.do({ |i|
		soundFileView.setSelectionStart(i, array[i]);
		soundFileView.setSelectionSize(i, array[i + 1] - array[i]);
		soundFileView.setSelectionColor(i, colors[i]);
	});
};

win.layout_(
	VLayout(
		[
			HLayout(
				[loadButton, stretch: 1],
				[soundFileView, stretch: 5]
			), stretch: 2
		],
		[
			HLayout(
				[processButton, stretch: 1],
				[VLayout(
					HLayout(StaticText(win).string_("Forward Th ").minWidth_(100), fwSlider),
					HLayout(StaticText(win).string_("Backward Th").minWidth_(100), bwSlider),
					HLayout(StaticText(win).string_("Debounce").minWidth_(100), debounceSlider)
				), stretch: 5]
			), stretch: 2
		]
	)
);

win.front;
)
@ -0,0 +1,288 @@
/*
=================================================
|                                               |
|    LOAD AND ANALYZE THE SOURCE MATERIAL       |
|                                               |
=================================================
*/

(
// ============= 1. LOAD SOME FILES TO BE THE SOURCE MATERIAL ===================
// put your own folder path here! it's best if they're all mono for now.
~source_files_folder = "/Users/macprocomputer/Desktop/sccm/files_fabrizio_01/src_files/";

~loader = FluidLoadFolder(~source_files_folder); // a nice helper class that will load a bunch of files from a folder
~loader.play(s,{ // .play will cause it to *actually* do the loading

	// we really just want access to the buffer. there is also a .index with some info about the files,
	// but we'll ignore that for now
	~source_buf = ~loader.buffer;

	"all files loaded".postln;

	// double check that they're all mono: the buffer of the loaded files will have as many channels as the file with the most channels,
	// so if this is 1, then we know all the files were mono.
	"num channels: %".format(~source_buf.numChannels).postln;
});
)

(
// ==================== 2. SLICE THE SOURCE MATERIAL ACCORDING TO SPECTRAL ONSETS =========================
~source_indices_buf = Buffer(s); // a buffer for writing the indices into
FluidBufOnsetSlice.process(s,~source_buf,indices:~source_indices_buf,metric:9,threshold:0.5,minSliceLength:9,action:{ // do the slicing
	~source_indices_buf.loadToFloatArray(action:{
		arg indices_array;

		// post the results so that you can tweak the parameters and get what you want
		"found % slices".format(indices_array.size - 1).postln;
		"average length: % seconds".format((~source_buf.duration / (indices_array.size - 1)).round(0.001)).postln;
	})
});
)

(
// =========================== 3. DEFINE A FUNCTION FOR DOING THE ANALYSIS ===================================
~analyze_to_dataset = {
	arg audio_buffer, slices_buffer, action; // the audio buffer to analyze, a buffer with the slice points, and an action to execute when done
	~nmfccs = 13;
	Routine{
		var features_buf = Buffer(s); // a buffer for writing the MFCC analyses into
		var stats_buf = Buffer(s); // a buffer for writing the statistical summary of the MFCC analyses into
		var flat_buf = Buffer(s); // a buffer for writing only the mean MFCC values into
		var dataset = FluidDataSet(s); // the dataset that all of these analyses will be stored in
		slices_buffer.loadToFloatArray(action:{ // get the indices from the server loaded down to the language
			arg slices_array;

			// iterate over each index in this array, paired with its next neighbor, so that we know where to start
			// and stop the analysis
			slices_array.doAdjacentPairs{
				arg start_frame, end_frame, slice_index;
				var num_frames = end_frame - start_frame;

				"analyzing slice: % / %".format(slice_index + 1,slices_array.size - 1).postln;

				// mfcc analysis, skipping the 0th coefficient because it relates to loudness and here we want to focus on timbre
				FluidBufMFCC.process(s,audio_buffer,start_frame,num_frames,features:features_buf,startCoeff:1,numCoeffs:~nmfccs).wait;

				// get a statistical summary of the MFCC analysis for this slice
				FluidBufStats.process(s,features_buf,stats:stats_buf).wait;

				// extract and flatten just the 0th frame (numFrames:1) of the statistical summary (because that is the mean)
				FluidBufFlatten.process(s,stats_buf,numFrames:1,destination:flat_buf).wait;

				// now that the means are extracted and flattened, we can add this datapoint to the dataset:
				dataset.addPoint("slice-%".format(slice_index),flat_buf);
			};
		});

		action.value(dataset); // execute the function and pass in the dataset that was created!
	}.play;
};
)

(
// =================== 4. DO THE ANALYSIS =====================
~analyze_to_dataset.(~source_buf,~source_indices_buf,{ // pass in the audio buffer of the source, and the slice points
	arg ds;
	~source_dataset = ds; // store the dataset in a global variable so we can access it later
	~source_dataset.print;
});
)

/*
=================================================
|                                               |
|        LOAD AND ANALYZE THE TARGET            |
|                                               |
=================================================
*/

(
// ============= 5. LOAD THE FILE ===================
~target_path = FluidFilesPath("Nicol-LoopE-M.wav");
~target_buf = Buffer.read(s,~target_path);
)

(
// ============= 6. SLICE ===================
~target_indices_buf = Buffer(s);
FluidBufOnsetSlice.process(s,~target_buf,indices:~target_indices_buf,metric:9,threshold:0.5,action:{
	~target_indices_buf.loadToFloatArray(action:{
		arg indices_array;

		// post the results so that you can tweak the parameters and get what you want
		"found % slices".format(indices_array.size - 1).postln;
		"average length: % seconds".format((~target_buf.duration / (indices_array.size - 1)).round(0.001)).postln;
	})
});
)

(
// =========== 7. USE THE SAME ANALYSIS FUNCTION ===========
~analyze_to_dataset.(~target_buf,~target_indices_buf,{
	arg ds;
	~target_dataset = ds;
	~target_dataset.print;
});
)

(
// ======================= 8. TEST DRUM LOOP PLAYBACK ====================
// play back the drum slices with a .wait in between so we hear the drum loop
Routine{
	~target_indices_buf.loadToFloatArray(action:{
		arg target_indices_array;

		// prepend 0 (the start of the file) to the indices array
		target_indices_array = [0] ++ target_indices_array;

		// append the total number of frames to know how long to play the last slice for
		target_indices_array = target_indices_array ++ [~target_buf.numFrames];

		inf.do{ // loop for infinity
			arg i;

			// get the index to play by modulo one less than the number of slices (we don't want to *start* playing from the
			// last slice point, because that's the end of the file!)
			var index = i % (target_indices_array.size - 1);

			// nb. the minus one is so that the drum slice from the beginning of the file to the first index is called "-1";
			// this is because that slice didn't actually get analyzed
			var slice_id = index - 1;
			var start_frame = target_indices_array[index];
			var dur_frames = target_indices_array[index + 1] - start_frame;
			var dur_secs = dur_frames / ~target_buf.sampleRate;

			"playing slice: %".format(slice_id).postln;

			{
				var sig = PlayBuf.ar(1,~target_buf,BufRateScale.ir(~target_buf),0,start_frame,0,2);
				var env = EnvGen.kr(Env([0,1,1,0],[0.03,dur_secs - 0.06,0.03]),doneAction:2);
				// sig = sig * env; // include this env if you like, but keep the line above because it will free the synth after the slice!
				sig.dup;
			}.play;
			dur_secs.wait;
		};
	});
}.play;
)

/*
=================================================
|                                               |
|      KDTREE THE DATA AND DO THE LOOKUP        |
|                                               |
=================================================
*/

(
// ========== 9. FIT THE KDTREE TO THE SOURCE DATASET SO THAT WE CAN QUICKLY LOOK UP NEIGHBORS ===============
Routine{
	~kdtree = FluidKDTree(s);
	~scaled_dataset = FluidDataSet(s);

	// leave only one of these scalers *not* commented-out. try all of them!
	//~scaler = FluidStandardize(s);
	~scaler = FluidNormalize(s);
	// ~scaler = FluidRobustScale(s);

	s.sync;
	~scaler.fitTransform(~source_dataset,~scaled_dataset,{
		~kdtree.fit(~scaled_dataset,{
			"kdtree fit".postln;
		});
	});
}.play;
)
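
(
// A minimal sketch (not part of the original example) of the lookup the fitted objects
// above enable: scale a single analysis point, then ask the tree for its nearest
// neighbour. It assumes the point-wise interfaces (transformPoint, kNearest) behave as
// in the FluCoMa helpfiles, and the buffer names here are hypothetical; in practice the
// query buffer would first be filled with one flattened MFCC mean vector as in step 3.
Routine{
	var query_buf = Buffer.alloc(s,~nmfccs);  // hypothetical buffer holding one analysis point
	var scaled_buf = Buffer.alloc(s,~nmfccs); // the same point after scaling
	s.sync;
	// scale the query with the *same* scaler that was fit to the source dataset,
	// so the query lives in the same space as the tree:
	~scaler.transformPoint(query_buf,scaled_buf,{
		~kdtree.kNearest(scaled_buf,{ |nearest|
			"nearest neighbour: %".format(nearest).postln;
		});
	});
}.play;
)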
|
||||||
|
|
||||||
|
(
|
||||||
|
// ========= 10. A LITTLE HELPER FUNCTION THAT WILL PLAY BACK A SLICE FROM THE SOURCE BY JUST PASSING THE INDEX =============
|
||||||
|
~play_source_index = {
|
||||||
|
arg index, src_dur;
|
||||||
|
{
|
||||||
|
var start_frame = Index.kr(~source_indices_buf,index); // lookup the start frame with the index *one the server* using Index.kr
|
||||||
|
var end_frame = Index.kr(~source_indices_buf,index+1); // same for the end frame
|
||||||
|
var num_frames = end_frame - start_frame;
|
||||||
|
var dur_secs = min(num_frames / SampleRate.ir(~source_buf),src_dur);
|
||||||
|
var sig = PlayBuf.ar(1,~source_buf,BufRateScale.ir(~source_buf),0,start_frame,0,2);
|
||||||
|
var env = EnvGen.kr(Env([0,1,1,0],[0.03,dur_secs-0.06,0.03]),doneAction:2);
|
||||||
|
// sig = sig * env; // include this env if you like, but keep the line above because it will free the synth after the slice!
|
||||||
|
sig.dup;
|
||||||
|
}.play;
|
||||||
|
};
|
||||||
|
)
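
// a quick sanity check for the helper above (a sketch -- the slice index 3 and the
// half-second cap are arbitrary placeholder values; any valid source slice index will do):
// ~play_source_index.(3,0.5);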
(
// ======================= 11. QUERY THE DRUM SOUNDS TO FIND "REPLACEMENTS" ====================
// play back the drum slices with a .wait in between so we hear the drum loop
// this is very similar to step 8 above, but now instead of playing the slice of
// the drum loop, it gets the analysis of the drum loop's slice into "query_buf",
// then uses that info to look up the nearest neighbour in the source dataset and
// play that slice
Routine{
var query_buf = Buffer.alloc(s,~nmfccs); // a buffer for doing the neighbor lookup with
var scaled_buf = Buffer.alloc(s,~nmfccs);
~target_indices_buf.loadToFloatArray(action:{
arg target_indices_array;

// prepend 0 (the start of the file) to the indices array
target_indices_array = [0] ++ target_indices_array;

// append the total number of frames to know how long to play the last slice for
target_indices_array = target_indices_array ++ [~target_buf.numFrames];

inf.do{ // loop forever
arg i;

// get the index to play by modulo one less than the number of slices (we don't want to *start* playing from the
// last slice point, because that's the end of the file!)
var index = i % (target_indices_array.size - 1);

// nb. the minus one is so that the drum slice from the beginning of the file to the first index is called "-1"
// this is because that slice didn't actually get analyzed
var slice_id = index - 1;
var start_frame = target_indices_array[index];
var dur_frames = target_indices_array[index + 1] - start_frame;

// this will be used to space out the source slices according to the target timings
var dur_secs = dur_frames / ~target_buf.sampleRate;

"target slice: %".format(slice_id).postln;

// as long as this slice is not the one that starts at the beginning of the file (-1) and
// not the slice at the end of the file (because neither of these have analyses), let's
// do the lookup
if((slice_id >= 0) && (slice_id < (target_indices_array.size - 3)),{

// use the slice id to (re)create the slice identifier and load the data point into "query_buf"
~target_dataset.getPoint("slice-%".format(slice_id.asInteger),query_buf,{
// once it's loaded, scale it using the scaler
~scaler.transformPoint(query_buf,scaled_buf,{
// once it's scaled, find its nearest neighbour data point in the kdtree of source slices
~kdtree.kNearest(scaled_buf,{
arg nearest;

// peel off just the integer part of the slice identifier to use in the helper function
var nearest_index = nearest.asString.split($-)[1].asInteger;
nearest_index.postln;
~play_source_index.(nearest_index,dur_secs);
});
});
});
});

// if you want to hear the drum slices alongside the neighbor slices, uncomment this function
/*{
var sig = PlayBuf.ar(1,~target_buf,BufRateScale.ir(~target_buf),0,start_frame,0,2);
var env = EnvGen.kr(Env([0,1,1,0],[0.03,dur_secs-0.06,0.03]),doneAction:2);
// sig = sig * env; // include this env if you like, but keep the line above because it will free the synth after the slice!
sig.dup;
}.play;*/

dur_secs.wait;
};
});
}.play;
)

(
// Window.closeAll;
s.waitForBoot{

Task{
var buf = Buffer.read(s,FluidFilesPath("Nicol-LoopE-M.wav"));
var slicepoints = Buffer(s); // FluidBufAmpSlice will write into this buffer the samples at which slices are detected.
var features_buf = Buffer(s); // a buffer for writing the analysis from FluidSpectralShape into
var stats_buf = Buffer(s); // a buffer for writing the statistical analyses into
var point_buf = Buffer(s,2); // a buffer that will be used to add points to the dataset - the analyses will be written into this buffer first
var ds = FluidDataSet(s); // a data set for storing the analysis of each slice (mean centroid & mean loudness)
var scaler = FluidNormalize(s); // a tool for normalizing a dataset (making it all range between zero and one)
var kdtree = FluidKDTree(s); // a kdtree for fast nearest neighbour lookup

s.sync;

FluidBufAmpSlice.processBlocking(s,buf,indices:slicepoints,fastRampUp:10,fastRampDown:2205,slowRampUp:4410,slowRampDown:4410,onThreshold:10,offThreshold:5,floor:-40,minSliceLength:4410,highPassFreq:20);
// slice the drums buffer based on amplitude
// the samples at which slices are detected will be written into the "slicepoints" buffer

s.sync;

FluidWaveform(buf,slicepoints,bounds:Rect(0,0,1600,400));
// plot the drums buffer with the slicepoints overlaid

slicepoints.loadToFloatArray(action:{ // bring the values in the slicepoints buffer from the server to the language as a float array
arg slicepoints_fa; // fa stands for float array
slicepoints_fa.postln;
slicepoints_fa.doAdjacentPairs{
/*
take each of the adjacent pairs and pass them to this function as an array of 2 values

nb. for example [0,1,2,3,4] will execute this function 4 times, passing these 2 value arrays:
[0,1]
[1,2]
[2,3]
[3,4]

this will give us each slice point *and* the next slice point so that we
can tell the analyzers where to start analyzing and how many frames to analyze
*/
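// a quick way to see this for yourself in isolation (a sketch; doAdjacentPairs also
// passes the pair's index as a third argument, which we use as the slice index below):
// (0..4).doAdjacentPairs{arg a, b, i; [a,b,i].postln};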
arg start_samps, end_samps, slice_i;
var num_samps = end_samps - start_samps; // the next slice point minus the current one gives us how many samples to analyze

slice_i.postln; // post which slice index we're currently analyzing

// the ".wait"s will pause the Task (that this whole thing is in) until the analysis is done

FluidBufSpectralShape.process(s,buf,start_samps,num_samps,features:features_buf).wait;
/* analyze the drum buffer starting at `start_samps` and for `num_samps` samples
this returns a buffer (features_buf) that is 7 channels wide (for the 7 spectral analyses, see helpfile) and
as many frames long as there are fft frames in the slice */

FluidBufStats.process(s,features_buf,numChans:1,stats:stats_buf).wait;
/* perform a statistical analysis of the spectral analysis, doing only the first channel (specified by `numChans:1`)
this will return just one channel because we asked it to analyze only 1 channel. that one channel will have 7 frames
corresponding to the 7 statistical analyses that it performs */

FluidBufCompose.process(s,stats_buf,0,1,destination:point_buf,destStartFrame:0).wait;
/* FluidBufCompose is essentially a "buf copy" operation. this will copy just the zeroth frame from `stats_buf` (the mean)
into the zeroth frame of `point_buf`, which is what we'll eventually use to add the data to the dataset */

FluidBufLoudness.process(s,buf,start_samps,num_samps,features:features_buf).wait;
// do a loudness analysis

FluidBufStats.process(s,features_buf,numChans:1,stats:stats_buf).wait;
// see above

FluidBufCompose.process(s,stats_buf,0,1,destination:point_buf,destStartFrame:1).wait;
/* see above, but this time the mean loudness is being copied into the 1st frame of `point_buf` so that it doesn't overwrite the mean centroid */

ds.addPoint("point-%".format(slice_i),point_buf);
/* now that we've written the mean centroid and mean loudness into `point_buf`, we can use that buf to add the data that is in it to the dataset.
we also need to give it an identifier. here we're calling it "point-%", where the "%" is replaced by the index of the slice */

s.sync;
};
});

scaler.fitTransform(ds,ds,{
/* scale the dataset so that each dimension is scaled to between 0 and 1. this will do the operation "in place": once the
scaling is done on the dataset "ds" it will overwrite that dataset with the normalized values. that is why both the "sourceDataSet" and
"destDataSet" are the same here
*/

kdtree.fit(ds,{ // fit the kdtree to the (now) normalized dataset
ds.dump({ // dump out that dataset to a dictionary so that we can use it with the plotter!
arg ds_dict; // the dictionary version of this dataset
var previous = nil; // a variable for checking if the currently passed nearest neighbour is the same or different from the previous one
FluidPlotter(bounds:Rect(0,0,800,800),dict:ds_dict,mouseMoveAction:{
/* make a FluidPlotter. nb. the dict is the dict from a FluidDataSet.dump. the mouseMoveAction is a callback function
that is called anytime you click or drag on this plotter */

arg view, x, y, modifiers;
/* the function is passed:
(1) itself
(2) mouse x position (scaled to what the view's scales are)
(3) mouse y position (scaled to what the view's scales are)
(4) modifier keys that are pressed while clicking or dragging
*/
point_buf.setn(0,[x,y]); // write the x y position into a buffer so that we can use it to...
kdtree.kNearest(point_buf,{ // look up the nearest slice to that x y position
arg nearest; // this is reported back as a symbol, so...
nearest = nearest.asString; // we'll convert it to a string here

if(nearest != previous,{
/* if it's not the last one that was found, we can do something with it. this
is kind of like a debounce: we just don't want to retrigger this action each time a drag
happens if it is actually the same nearest neighbor */

var index = nearest.split($-)[1].interpret;
// split at the hyphen and interpret the integer on the end to find out which slice index it is

{
var startPos = Index.kr(slicepoints,index); // look up the start sample based on the index
var endPos = Index.kr(slicepoints,index + 1); // look up the end sample based on the index
var dur_secs = (endPos - startPos) / BufSampleRate.ir(buf); // figure out how long it is in seconds to create an envelope
var env = EnvGen.kr(Env([0,1,1,0],[0.03,dur_secs-0.06,0.03]),doneAction:2);
var sig = PlayBuf.ar(1,buf,BufRateScale.ir(buf),startPos:startPos);
sig.dup * env;
}.play; // play it!

view.highlight_(nearest); // make this point a little bit bigger in the plot
previous = nearest;
});
});
});
});
});
});
}.play(AppClock);
}
)

(
// 1. Instantiate some of the things we need.
Window.closeAll;
s.options.sampleRate_(48000);
// s.options.device_("Fireface UC Mac (24006457)"); // set your own audio device here if needed
s.waitForBoot{
Task{
var win;
~nMFCCs = 13;
~trombone = Buffer.read(s,FluidFilesPath("Olencki-TenTromboneLongTones-M.wav"));
~oboe = Buffer.read(s,FluidFilesPath("Harker-DS-TenOboeMultiphonics-M.wav"));
~timbre_buf = Buffer.alloc(s,~nMFCCs);
~ds = FluidDataSet(s);
~labels = FluidLabelSet(s);
~point_counter = 0;

s.sync;

win = Window("MFCCs",Rect(0,0,800,300));

~mfcc_multislider = MultiSliderView(win,win.bounds)
.elasticMode_(true)
.size_(~nMFCCs);

win.front;

}.play(AppClock);
};
)

/*
2. Play some trombone sounds.
*/
(
{
var sig = PlayBuf.ar(1,~trombone,BufRateScale.ir(~trombone),doneAction:2);
var mfccs = FluidMFCC.kr(sig,~nMFCCs,40,1,maxNumCoeffs:~nMFCCs);
SendReply.kr(Impulse.kr(30),"/mfccs",mfccs);
FluidKrToBuf.kr(mfccs,~timbre_buf);
sig.dup;
}.play;

OSCFunc({
arg msg;
{~mfcc_multislider.value_(msg[3..].linlin(-30,30,0,1))}.defer;
},"/mfccs");
)

/*
3. When you know the MFCC buf has trombone timbre data in it
(because you hear trombone and see it in the multislider),
execute this next block to add points to the dataset and
labels to the label set.

Avoid adding points when there is silence in between trombone
tones, because... silence isn't trombone, so we don't want
to label it that way.

Try adding points continuously during the first three or so
trombone tones. We'll save the rest to test on later.
*/
(
var id = "example-%".format(~point_counter);
~ds.addPoint(id,~timbre_buf);
~labels.addLabel(id,"trombone");
~point_counter = ~point_counter + 1;
)
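
// if executing the block above by hand gets tedious, a Routine like this (just a sketch)
// adds a point every tenth of a second for about three seconds -- stop it during silences:
/*
(
Routine{
30.do{
var id = "example-%".format(~point_counter);
~ds.addPoint(id,~timbre_buf);
~labels.addLabel(id,"trombone");
~point_counter = ~point_counter + 1;
0.1.wait;
};
}.play;
)
*/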

/*
4. Play some oboe sounds.
*/
(
{
var sig = PlayBuf.ar(1,~oboe,BufRateScale.ir(~oboe),doneAction:2);
var mfccs = FluidMFCC.kr(sig,~nMFCCs,40,1,maxNumCoeffs:~nMFCCs);
SendReply.kr(Impulse.kr(30),"/mfccs",mfccs);
FluidKrToBuf.kr(mfccs,~timbre_buf);
sig.dup;
}.play;

OSCFunc({
arg msg;
{~mfcc_multislider.value_(msg[3..].linlin(-30,30,0,1))}.defer;
},"/mfccs");
)

/*
5. All the same as before, but now labeling the oboe sounds.
*/
(
var id = "example-%".format(~point_counter);
~ds.addPoint(id,~timbre_buf);
~labels.addLabel(id,"oboe");
~point_counter = ~point_counter + 1;
)

/*
6. Make an MLPClassifier (neural network) to train. For more information about the parameters
visit: https://learn.flucoma.org/reference/mlpclassifier
*/
~mlpclassifier = FluidMLPClassifier(s,[5],1,learnRate:0.05,batchSize:5,validation:0.1);

/*
7. You may want to run ".fit" more than once. For this task a loss value less than 0.01 would
be pretty good. Loss values, however, are always relative, so it's not really possible
to say objectively what one should "aim" for. The best way to know if a neural network
is successfully performing the task you would like it to is to test it, ideally using
examples that it has never seen before.
*/
(
~mlpclassifier.fit(~ds,~labels,{
arg loss;
loss.postln;
});
)
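
// if you'd rather not re-execute the block above by hand, a Routine like this (just a
// sketch) will run ten fits in a row and post each loss so you can watch it descend:
(
Routine{
10.do{
var done = Condition();
~mlpclassifier.fit(~ds,~labels,{
arg loss;
loss.postln;
done.unhang;
});
done.hang; // wait for each fit to report back before starting the next
};
}.play;
)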

/*
8. Make a prediction buffer to write the MLPClassifier's predictions into. The predictions
it outputs to a buffer are integers. "0" will represent whatever the "zeroth" example
label it saw was (because we always start counting from zero in these cases). "1" will
represent the "first" example label it saw, etc.
*/
~prediction_buf = Buffer.alloc(s,1);

/*
9. Play some trombone sounds and make some predictions. It should show a 0.
*/
(
{
var sig = PlayBuf.ar(1,~trombone,BufRateScale.ir(~trombone),doneAction:2);
var mfccs = FluidMFCC.kr(sig,~nMFCCs,40,1,maxNumCoeffs:~nMFCCs);
FluidKrToBuf.kr(mfccs,~timbre_buf);
~mlpclassifier.kr(Impulse.kr(30),~timbre_buf,~prediction_buf);
FluidBufToKr.kr(~prediction_buf).poll;
sig.dup;
}.play;
)

/*
10. Play some oboe sounds and make some predictions. It should show a 1.
*/
(
{
var sig = PlayBuf.ar(1,~oboe,BufRateScale.ir(~oboe),doneAction:2);
var mfccs = FluidMFCC.kr(sig,~nMFCCs,40,1,maxNumCoeffs:~nMFCCs);
FluidKrToBuf.kr(mfccs,~timbre_buf);
~mlpclassifier.kr(Impulse.kr(30),~timbre_buf,~prediction_buf);
FluidBufToKr.kr(~prediction_buf).poll;
sig.dup;
}.play;
)

/*
11. During the silences it reports either trombone or oboe, because that's all
it knows about. Let's zero out the timbre_buf to simulate silence and then add
some points that are labeled "silence".
*/
~timbre_buf.setn(0,0.dup(~nMFCCs));

(
100.do{
var id = "example-%".format(~point_counter);
~ds.addPoint(id,~timbre_buf);
~labels.addLabel(id,"silence");
~point_counter = ~point_counter + 1;
};
)

~ds.print;
~labels.print;

~ds.write("/Users/macprocomputer/Desktop/_flucoma/code/Utrecht-2021/Lesson_Plans/classifier (pre-workshop)/%_ds.json".format(Date.localtime.stamp));
~labels.write("/Users/macprocomputer/Desktop/_flucoma/code/Utrecht-2021/Lesson_Plans/classifier (pre-workshop)/%_labels.json".format(Date.localtime.stamp));

/*
12. Now go retrain some more and do some more predictions. The silent gaps between
tones should now report a "2".
*/

// ========================= DATA VERIFICATION ADDENDUM ============================

// This data is pretty well separated, except for that one trombone point.
~ds.read("/Users/macprocomputer/Desktop/_flucoma/code/Utrecht-2021/Lesson_Plans/classifier (pre-workshop)/211102_122330_ds.json");
~labels.read("/Users/macprocomputer/Desktop/_flucoma/code/Utrecht-2021/Lesson_Plans/classifier (pre-workshop)/211102_122331_labels.json");

/*
This data is not well separated. One can see that in the cluster that should probably be all silences,
there are a lot of oboe and trombone points mixed in!

This will likely be confusing to a neural network!
*/

~ds.read("/Users/macprocomputer/Desktop/_flucoma/code/Utrecht-2021/Lesson_Plans/classifier (pre-workshop)/211102_122730_ds.json");
~labels.read("/Users/macprocomputer/Desktop/_flucoma/code/Utrecht-2021/Lesson_Plans/classifier (pre-workshop)/211102_122731_labels.json");

(
Task{
~stand = FluidStandardize(s);
~ds_plotter = FluidDataSet(s);
~umap = FluidUMAP(s,2,30,0.5);
~normer = FluidNormalize(s);
~kdtree = FluidKDTree(s);
~pt_buf = Buffer.alloc(s,2);
s.sync;

~stand.fitTransform(~ds,~ds_plotter,{
~umap.fitTransform(~ds_plotter,~ds_plotter,{
~normer.fitTransform(~ds_plotter,~ds_plotter,{
~kdtree.fit(~ds_plotter,{
~ds_plotter.dump({
arg ds_dict;
~labels.dump({
arg label_dict;
// label_dict.postln;
~plotter = FluidPlotter(bounds:Rect(0,0,800,800),dict:ds_dict,mouseMoveAction:{
arg view, x, y;
~pt_buf.setn(0,[x,y]);
~kdtree.kNearest(~pt_buf,{
arg nearest;
"%:\t%".format(nearest,label_dict.at("data").at(nearest.asString)[0]).postln;
});
});
~plotter.categories_(label_dict);
});
});
});
});
});
});
}.play(AppClock);
)

(
// run the analysis
Routine{
var time = Main.elapsedTime;
var ds = FluidDataSet(s);
var labels = FluidLabelSet(s);
var scaler = FluidStandardize(s);
var buf1 = Buffer.alloc(s,1);
var dsq = FluidDataSetQuery(s);

~pitch_features_buf = Buffer.new(s);
// specify some params for the analysis (these are the defaults, but we'll specify them here so we can use them later)
~windowSize = 4096;
~hopSize = 512;

~buf = Buffer.read(s,FluidFilesPath("Tremblay-FMTri-M.wav"));

s.sync;

FluidBufPitch.process(s,~buf,features:~pitch_features_buf,windowSize:~windowSize,hopSize:~hopSize).wait;
// {~pitch_features_buf.plot(separately:true)}.defer;

ds.fromBuffer(~pitch_features_buf,action:{
ds.print;
/*dsq.addRange(0,2,{
dsq.filter(1,">",0.7,{
dsq.transform(ds,ds,{
ds.print;*/
ds.dump({
arg dict;
~pitch_features_array = Array.newClear(dict.at("data").size);
dict.at("data").keysValuesDo({
arg id, pt, i;
~pitch_features_array[i] = [id,pt];
});

~pitch_features_sorted = ~pitch_features_array.sort({
arg a, b;
a[1][0] < b[1][0];
});

~center_pos = ~pitch_features_sorted.collect({arg arr; (arr[0].asInteger * ~hopSize) / ~buf.sampleRate});

~center_pos_buf = Buffer.loadCollection(s,~center_pos);
});
/*});
});
});*/
});
}.play
)

(
OSCdef(\fluidbufpitch_help,{
arg msg;
msg[3].midiname.postln;
},"/fluidbufpitch_help");

{
var trig = Impulse.kr(s.sampleRate / ~hopSize);
var index = (PulseCount.kr(trig) - 1) % BufFrames.ir(~center_pos_buf);
var centerPos = Index.kr(~center_pos_buf,index);
var pan = TRand.kr(-1.0,1.0,trig);
var sig;
var pitch, conf;
sig = TGrains.ar(2,trig,~buf,BufRateScale.ir(~buf),centerPos,~windowSize / BufSampleRate.ir(~buf),pan,0.5);
# pitch, conf = FluidPitch.kr(sig,unit:1,windowSize:4096);
pitch = FluidStats.kr(pitch,25)[0];
SendReply.kr(Impulse.kr(30) * (conf > 0.6),"/fluidbufpitch_help",pitch);
sig;
}.play;
)

/* ================= FluidSines =================
FluidSines decomposes a sound into a sinusoidal component and a residual component. It does this by trying to recreate the input sound with a sinusoidal model. Anything that it can't confidently model as a sinusoid is considered "residual".

Useful for separating the stable, pitched components of a sound from the rest.
*/

// sines in L, residual in R
~buf = Buffer.read(s,FluidFilesPath("Tremblay-AaS-SynthTwoVoices-M.wav"));

(
y = {
var sig = PlayBuf.ar(1,~buf,BufRateScale.ir(~buf),loop:1);
var sines, residual;
# sines, residual = FluidSines.ar(sig,detectionThreshold:-40,minTrackLen:2);
[sines,residual];
}.play;
)

// isolate just the sines or the residual
~song = Buffer.readChannel(s,FluidFilesPath("Tremblay-beatRemember.wav"),channels:[0]);

(
y = {
arg mix = 0.5;
var sig = PlayBuf.ar(1,~song,BufRateScale.ir(~song),loop:1);
var sines, residual;
# sines, residual = FluidSines.ar(sig);
sig = SelectX.ar(mix,[sines,residual]);
sig.dup;
}.play;
)

// just sines
y.set(\mix,0);

// just residual
y.set(\mix,1);

// a stereo example
~song = Buffer.read(s,FluidFilesPath("Tremblay-beatRemember.wav"));

(
y = {
arg mix = 0.5;
var sig = PlayBuf.ar(2,~song,BufRateScale.ir(~song),loop:1);
var l, r, sinesL, residualL, sinesR, residualR, sines, residual;
# l, r = FluidSines.ar(sig);
# sinesL, residualL = l;
# sinesR, residualR = r;
sig = SelectX.ar(mix,[[sinesL,sinesR],[residualL,residualR]]);
sig;
}.play;
)

// just sines
y.set(\mix,0);

// just residual
y.set(\mix,1);

// send just the 'sines' to a reverb
// (nb. these next two blocks assume the mono ~song from above -- re-run the
// Buffer.readChannel line if you last loaded the stereo version)
(
{
var sig = PlayBuf.ar(1,~song,BufRateScale.ir(~song),loop:1);
var sines, residual;
var latency = ((15 * 512) + 1024 ) / ~song.sampleRate;
# sines, residual = FluidSines.ar(sig);
DelayN.ar(sig,latency,latency) + GVerb.ar(sines);
}.play;
)

// send just the 'residual' to a reverb
(
{
var sig = PlayBuf.ar(1,~song,BufRateScale.ir(~song),loop:1);
var sines, residual;
var latency = ((15 * 512) + 1024 ) / ~song.sampleRate;
# sines, residual = FluidSines.ar(sig);
DelayN.ar(sig,latency,latency) + GVerb.ar(residual);
}.play;
)

/* ============== FluidHPSS ===============
FluidHPSS separates a sound into "harmonic" and "percussive" components. This can be useful for material where there is a somewhat realistic basis for these two components to exist, such as in a drum hit. It can also be interesting on material where the two are merged together in more complex ways.
*/

// load a soundfile to play
~buf = Buffer.readChannel(s,FluidFilesPath("Tremblay-beatRemember.wav"),channels:[0]);

// run with basic parameters (left is harmonic, right is percussive)
{FluidHPSS.ar(PlayBuf.ar(1,~buf,loop:1))}.play

// run in mode 2, listening to:
// the harmonic stream
{FluidHPSS.ar(PlayBuf.ar(1,~buf,loop:1),maskingMode:2)[0].dup}.play
// the percussive stream
{FluidHPSS.ar(PlayBuf.ar(1,~buf,loop:1),maskingMode:2)[1].dup}.play
// the residual stream
{FluidHPSS.ar(PlayBuf.ar(1,~buf,loop:1),maskingMode:2)[2].dup}.play
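
// a quick sanity check (a sketch): summing the masked streams should give something very
// close to the original signal
{FluidHPSS.ar(PlayBuf.ar(1,~buf,loop:1),maskingMode:2).sum.dup}.play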

// do the above again with another sound file
~buf = Buffer.read(s,FluidFilesPath("Nicol-LoopE-M.wav"));

/* =================== FluidTransients =========================
FluidTransients can separate transient material from residual material. "Transient" is quite a fuzzy term depending on who you are talking to. Producers might use it to talk about any sound that is bright, loud or percussive, while an engineer could be referring to a short, full-spectrum change in the signal.

This algorithm is based on a "de-clicking" audio restoration approach.
*/

// load some buffer
~buf = Buffer.read(s,FluidFilesPath("Tremblay-AaS-SynthTwoVoices-M.wav"));

// basic parameters
{FluidTransients.ar(PlayBuf.ar(1, ~buf, loop:1))}.play

// just the transients
{FluidTransients.ar(PlayBuf.ar(1, ~buf, loop:1))[0].dup}.play
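
// and just the residual (FluidTransients.ar returns [transients, residual])
{FluidTransients.ar(PlayBuf.ar(1, ~buf, loop:1))[1].dup}.play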

// =================== Audio Transport =========================
// load 2 files
(
b = Buffer.read(s,FluidFilesPath("Tremblay-CEL-GlitchyMusicBoxMelo.wav"));
c = Buffer.read(s,FluidFilesPath("Tremblay-CF-ChurchBells.wav"));
)
// listen to them
b.play
c.play
// stereo cross! (MouseX interpolates from the first sound to the second)
{FluidAudioTransport.ar(PlayBuf.ar(2,b,loop: 1),PlayBuf.ar(2,c,loop: 1),MouseX.kr())}.play;

(
|
||||||
|
// 1. define a function to load a folder of sounds
|
||||||
|
~load_folder = {
|
||||||
|
arg folder_path, action;
|
||||||
|
var loader = FluidLoadFolder(folder_path);
|
||||||
|
loader.play(s,{
|
||||||
|
fork{
|
||||||
|
var mono_buffer = Buffer.alloc(s,loader.buffer.numFrames); // convert to mono for ease of use for this example
|
||||||
|
FluidBufCompose.processBlocking(s,loader.buffer,destination:mono_buffer,numChans:1);
|
||||||
|
s.sync;
|
||||||
|
action.(mono_buffer);
|
||||||
|
}
|
||||||
|
});
|
||||||
|
};
|
||||||
|
|
||||||
|
~load_folder.(FluidFilesPath(),{
|
||||||
|
arg buffer;
|
||||||
|
"mono buffer: %".format(buffer).postln;
|
||||||
|
~buffer = buffer;
|
||||||
|
});
|
||||||
|
)
|
||||||
|
|
||||||
|
(
|
||||||
|
// 2. define a function to slice the sounds, play with the threshold to get different results
|
||||||
|
~slice = {
|
||||||
|
arg buffer, action;
|
||||||
|
Routine{
|
||||||
|
var indices = Buffer(s);
|
||||||
|
s.sync;
|
||||||
|
FluidBufNoveltySlice.process(s,buffer,indices:indices,threshold:0.5,action:{
|
||||||
|
"% slices found".format(indices.numFrames).postln;
|
||||||
|
"average duration in seconds: %".format(buffer.duration/indices.numFrames).postln;
|
||||||
|
action.(buffer,indices);
|
||||||
|
});
|
||||||
|
}.play;
|
||||||
|
};
|
||||||
|
|
||||||
|
~slice.(~buffer,{
|
||||||
|
arg buffer, indices;
|
||||||
|
~indices = indices;
|
||||||
|
});
|
||||||
|
)

(
// 3. analyze the slices
~analyze = {
	arg buffer, indices, action;
	var time = SystemClock.seconds;
	Routine{
		var feature_buf = Buffer(s);
		var stats_buf = Buffer(s);
		var point_buf = Buffer(s);
		var ds = FluidDataSet(s);

		indices.loadToFloatArray(action:{
			arg fa;
			fa.doAdjacentPairs{
				arg start, end, i;
				var num = end - start;

				FluidBufMFCC.processBlocking(s,buffer,start,num,features:feature_buf,numCoeffs:13,startCoeff:1);
				FluidBufStats.processBlocking(s,feature_buf,stats:stats_buf);
				FluidBufFlatten.processBlocking(s,stats_buf,numFrames:1,destination:point_buf);

				ds.addPoint("slice-%".format(i),point_buf);
				"Processing Slice % / %".format(i+1,indices.numFrames-1).postln;
			};

			s.sync;

			feature_buf.free; stats_buf.free; point_buf.free;

			ds.print;

			"Completed in % seconds".format(SystemClock.seconds - time).postln;
			action.(buffer,indices,ds);
		});
	}.play;
};

~analyze.(~buffer,~indices,{
	arg buffer, indices, ds;
	~ds = ds;
});
)

(
// 4. Reduce to 2 Dimensions
~umap = {
	arg buffer, indices, ds, action, numNeighbours = 15, minDist = 0.1;
	Routine{
		var standardizer = FluidStandardize(s);
		var umap = FluidUMAP(s,2,numNeighbours,minDist);

		var redux_ds = FluidDataSet(s);

		s.sync;

		standardizer.fitTransform(ds,redux_ds,{
			"standardization done".postln;
			umap.fitTransform(redux_ds,redux_ds,{
				"umap done".postln;
				action.(buffer,indices,redux_ds);
			});
		});
	}.play;
};

~umap.(~buffer,~indices,~ds,{
	arg buffer, indices, redux_ds;
	~ds = redux_ds;
});
)

(
// 5. Gridify if Desired
~grid = {
	arg buffer, indices, redux_ds, action;
	Routine{
		var normer = FluidNormalize(s);
		var grider = FluidGrid(s);
		var newds = FluidDataSet(s);

		s.sync;

		normer.fitTransform(redux_ds,newds,{
			"normalization done".postln;
			grider.fitTransform(newds,newds,{
				"grid done".postln;
				action.(buffer,indices,newds);
			});
		});
	}.play;
};

~grid.(~buffer,~indices,~ds,{
	arg buffer, indices, grid_ds;
	~ds = grid_ds;
});
)

(
// 6. Plot
~plot = {
	arg buffer, indices, redux_ds, action;
	Routine{
		var kdtree = FluidKDTree(s);
		var buf_2d = Buffer.alloc(s,2);
		var scaler = FluidNormalize(s);
		var newds = FluidDataSet(s);
		var xmin = 0, xmax = 1, ymin = 0, ymax = 1;

		s.sync;

		scaler.fitTransform(redux_ds,newds,{
			"scaling done".postln;
			kdtree.fit(newds,{
				"kdtree fit".postln;
				newds.dump({
					arg dict;
					var previous, fp;
					"ds dumped".postln;
					fp = FluidPlotter(nil,Rect(0,0,800,800),dict,xmin:xmin,xmax:xmax,ymin:ymin,ymax:ymax,mouseMoveAction:{
						arg view, x, y;
						[x,y].postln;
						buf_2d.setn(0,[x,y]);
						kdtree.kNearest(buf_2d,{
							arg nearest;
							if(previous != nearest,{
								var index = nearest.asString.split($-)[1].asInteger;
								previous = nearest;
								nearest.postln;
								index.postln;
								{
									var startPos = Index.kr(indices,index);
									var dur_samps = Index.kr(indices,index + 1) - startPos;
									var sig = PlayBuf.ar(1,buffer,BufRateScale.ir(buffer),startPos:startPos);
									var dur_sec = dur_samps / BufSampleRate.ir(buffer);
									var env;
									dur_sec = min(dur_sec,1);
									env = EnvGen.kr(Env([0,1,1,0],[0.03,dur_sec-0.06,0.03]),doneAction:2);
									sig.dup * env;
								}.play;
							});
						});
					});
					action.(fp,newds);
				});
			});
		});
	}.play;
};

~plot.(~buffer,~indices,~ds);
)

// ============== do all of it =======================
(
var path = "/Users/macprocomputer/Desktop/_flucoma/data_saves/%_2D_browsing_Pitch".format(Date.localtime.stamp);
~load_folder.("/Users/macprocomputer/Desktop/_flucoma/favs mono/",{
	arg buffer0;
	~slice.(buffer0,{
		arg buffer1, indices1;
		~analyze.(buffer1, indices1,{
			arg buffer2, indices2, ds2;

			/*
			path.mkdir;
			buffer2.write(path+/+"buffer.wav","wav");
			indices2.write(path+/+"indices.wav","wav","float");
			ds2.write(path+/+"ds.json");
			*/

			~umap.(buffer2,indices2,ds2,{
				arg buffer3, indices3, ds3;

				/*
				path.mkdir;
				buffer3.write(path+/+"buffer.wav","wav");
				indices3.write(path+/+"indices.wav","wav","float");
				ds3.write(path+/+"ds.json");
				*/

				~plot.(buffer3,indices3,ds3,{
					arg plotter;
					"done with all".postln;
					~fp = plotter;
				});
			});
		});
	});
});
)

/*=============== Know Your Data =================

hmmm... there's a lot of white space in that UMAP plot. A few options:

1. Adjust the parameters of UMAP to make the plot look different.
	- minDist
	- numNeighbours
2. Gridify the whole thing to spread it out.
3. Remove some of the outliers to get a more full shape.

===================================================*/
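
// for option #1, a sketch re-running UMAP with different parameters. the values below are just
// guesses to audition, and this assumes ~buffer, ~indices, and the MFCC dataset from step 3 are
// still around (note that the blocks above overwrite ~ds as they go):
(
~umap.(~buffer, ~indices, ~ds, {
	arg buffer, indices, redux_ds;
	~plot.(buffer, indices, redux_ds);
}, 5, 0.9); // numNeighbours = 5, minDist = 0.9
)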

// #2
(
Window.closeAll;
Task{
	var folder = "/Users/macprocomputer/Desktop/_flucoma/data_saves/211103_121441_2D_browsing/";
	var ds = FluidDataSet(s);
	var buffer = Buffer.read(s,folder+/+"buffer.wav");
	var indices = Buffer.read(s,folder+/+"indices.wav");
	var normalizer = FluidNormalize(s);
	var ds_grid = FluidDataSet(s);
	var grid = FluidGrid(s);
	var kdtree = FluidKDTree(s);
	var pt_buf = Buffer.alloc(s,2);

	s.sync;

	ds.read(folder+/+"ds.json",{
		"read".postln;
		normalizer.fitTransform(ds,ds_grid,{
			"normalized".postln;
			grid.fitTransform(ds_grid,ds_grid,{
				"grid done".postln;
				normalizer.fitTransform(ds_grid,ds_grid,{
					"normalized".postln;
					kdtree.fit(ds_grid,{
						"tree fit".postln;
						normalizer.fitTransform(ds,ds,{
							"normalized".postln;
							ds.dump({
								arg ds_dict;
								ds_grid.dump({
									arg ds_grid_dict;

									defer{
										var distances = Dictionary.new;
										var max_dist = 0;
										var win, plotter, uv;
										var previous;
										ds_dict.at("data").keysValuesDo({
											arg id, pt;
											var other, pt0, pt1, dist, distpoint;

											/*
											id.postln;
											pt.postln;
											"".postln;
											*/

											other = ds_grid_dict.at("data").at(id);
											pt0 = Point(pt[0],pt[1]);
											pt1 = Point(other[0],other[1]);
											dist = pt0.dist(pt1);
											distpoint = Dictionary.new;

											if(dist > max_dist,{max_dist = dist});

											distpoint.put("pt0",pt0);
											distpoint.put("pt1",pt1);
											distpoint.put("dist",dist);
											distances.put(id,distpoint);
										});
										win = Window("FluidGrid",Rect(0,0,800,800));
										win.background_(Color.white);
										uv = UserView(win,win.bounds)
										.drawFunc_({
											var size_pt = Point(uv.bounds.width,uv.bounds.height);

											distances.keysValuesDo({
												arg id, distpoint;
												var alpha = distpoint.at("dist") / max_dist;
												var pt0 = distpoint.at("pt0") * size_pt;
												var pt1 = distpoint.at("pt1") * size_pt;

												pt0.y = uv.bounds.height - pt0.y;
												pt1.y = uv.bounds.height - pt1.y;

												/*
												id.postln;
												distpoint.postln;
												alpha.postln;
												"".postln;
												*/

												Pen.line(pt0,pt1);
												Pen.color_(Color(1.0,0.0,0.0,0.25));
												Pen.stroke;
											});
										});

										plotter = FluidPlotter(win,win.bounds,ds_dict,{
											arg view, x, y;
											pt_buf.setn(0,[x,y]);
											kdtree.kNearest(pt_buf,{
												arg nearest;
												if(previous != nearest,{
													var index = nearest.asString.split($-)[1].asInteger;
													previous = nearest;
													nearest.postln;
													index.postln;
													{
														var startPos = Index.kr(indices,index);
														var dur_samps = Index.kr(indices,index + 1) - startPos;
														var sig = PlayBuf.ar(1,buffer,BufRateScale.ir(buffer),startPos:startPos);
														var dur_sec = dur_samps / BufSampleRate.ir(buffer);
														var env = EnvGen.kr(Env([0,1,1,0],[0.03,dur_sec-0.06,0.03]),doneAction:2);
														sig.dup * env;
													}.play;
												});
											});
										});
										plotter.background_(Color(0,0,0,0));

										ds_grid_dict.at("data").keysValuesDo({
											arg id, pt;
											plotter.addPoint_("%-grid".format(id),pt[0],pt[1],0.75,Color.blue.alpha_(0.5));
										});

										win.front;
									};
								})
							});
						});
					});
				});
			});
		});
	});
}.play(AppClock);
)

// #3
(
Routine{
	var folder = "/Users/macprocomputer/Desktop/_flucoma/data_saves/211103_152523_2D_browsing/";
	var ds = FluidDataSet(s);
	var buffer = Buffer.read(s,folder+/+"buffer.wav");
	var indices = Buffer.read(s,folder+/+"indices.wav");
	var robust_scaler = FluidRobustScale(s,10,90);
	var newds = FluidDataSet(s);
	var dsq = FluidDataSetQuery(s);
	s.sync;

	// {indices.plot}.defer;
	ds.read(folder+/+"ds.json",{
		robust_scaler.fitTransform(ds,newds,{
			dsq.addRange(0,2,{
				dsq.filter(0,">",-1,{
					dsq.and(0,"<",1,{
						dsq.and(1,">",-1,{
							dsq.and(1,"<",1,{
								dsq.transform(newds,newds,{
									~plot.(buffer,indices,newds);
								});
							});
						});
					});
				});
			});
		})
	});
}.play;
)
/*

this script shows how to

1. load a folder of sounds
2. find smaller time segments within the sounds according to novelty
3. analyse the sounds according to MFCC and add these analyses to a dataset
4. dimensionally reduce that dataset to 2D using umap
5. (optional) turn the plot of points in 2D into a grid
6. plot the points!

notice that each step in this process is wrapped in a function so that,
at the bottom of the patch, these functions can all be chained together to
do the whole process in one go!

*/

(
// 1. load a folder of sounds
~load_folder = {
	arg folder_path, action;
	var loader = FluidLoadFolder(folder_path); // pass in the folder to load
	loader.play(s,{ // play will do the actual loading
		var mono_buffer = Buffer.alloc(s,loader.buffer.numFrames);
		FluidBufCompose.processBlocking(s,loader.buffer,destination:mono_buffer,numChans:1,action:{
			action.(mono_buffer);
		});
	});
};

// this will load all the audio files that are included with the flucoma toolkit, but you can put your own path here:
~load_folder.(FluidFilesPath(),{
	arg buffer;
	"mono buffer: %".format(buffer).postln;
	~buffer = buffer; // save the buffer to a global variable so we can use it later
});
)

(
// 2. slice the sounds
~slice = {
	arg buffer, action;
	var indices = Buffer(s); // a buffer for saving the discovered indices into

	// play around with the threshold and feature (see help file) to get different slicing results
	FluidBufNoveltySlice.processBlocking(s,buffer,indices:indices,feature:0,threshold:0.5,action:{
		"% slices found".format(indices.numFrames).postln;
		"average duration in seconds: %".format(buffer.duration/indices.numFrames).postln;
		action.(buffer,indices);
	});
};

~slice.(~buffer,{
	arg buffer, indices;
	~indices = indices;
});
)
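
// for example, a one-off re-slice with different settings (a sketch: the feature index and
// threshold below are just guesses to try; argument names follow the call above, and the
// FluidBufNoveltySlice help file explains what each feature index means):
(
var indices = Buffer(s);
FluidBufNoveltySlice.processBlocking(s,~buffer,indices:indices,feature:1,threshold:0.3,action:{
	"% slices found".format(indices.numFrames).postln;
});
)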

// you may want to check the slice points here using FluidWaveform
FluidWaveform(~buffer,~indices); // it may also be way too many slices to see properly!

(
// 3. analyze the slices
~analyze = {
	arg buffer, indices, action;
	var time = SystemClock.seconds; // a timer just to keep tabs on how long this stuff is taking
	Routine{
		var feature_buf = Buffer(s); // a buffer for storing the mfcc analyses into
		var stats_buf = Buffer(s); // a buffer for storing the stats into
		var point_buf = Buffer(s); // a buffer we will use to add points to the dataset
		var ds = FluidDataSet(s); // the dataset that we'll add all these mfcc analyses to

		// bring the values in the slicepoints buffer from the server to the language as a float array
		indices.loadToFloatArray(action:{
			arg fa; // float array
			fa.doAdjacentPairs{
				/*
				take each adjacent pair of values and pass them to this function (along with an index)

				nb. for example [0,1,2,3,4] will execute this function 4 times, passing these pairs:
				[0,1]
				[1,2]
				[2,3]
				[3,4]

				this will give us each slice point *and* the next slice point so that we
				can tell the analyzers where to start analyzing and how many frames to analyze
				*/
				arg start, end, i;

				// the next slice point minus the current one gives us how many samples to analyze
				var num = end - start;

				/* analyze the buffer starting at sample `start` for `num` samples.
				this returns a buffer (feature_buf) that is 13 channels wide (for the 13 mfccs, see helpfile) and
				as many frames long as there are fft frames in the slice */
				FluidBufMFCC.processBlocking(s,buffer,start,num,features:feature_buf,numCoeffs:13,startCoeff:1);

				/* perform a statistical analysis on the mfcc analysis.
				this will return just 13 channels, one for each mfcc channel in the feature_buf.
				each channel will have 7 frames corresponding to the 7 statistical analyses that it performs
				on that channel */
				FluidBufStats.processBlocking(s,feature_buf,stats:stats_buf);

				/* take all 13 channels from stats_buf, but just the first frame (mean), and convert it into a buffer
				that is 1 channel and 13 frames. this shape will be considered "flat" and therefore able to be
				added to the dataset */
				FluidBufFlatten.processBlocking(s,stats_buf,numFrames:1,destination:point_buf);

				// add it
				ds.addPoint("slice-%".format(i),point_buf);
				"Processing Slice % / %".format(i+1,indices.numFrames-1).postln;
			};

			s.sync;

			feature_buf.free; stats_buf.free; point_buf.free; // free buffers

			ds.print;

			"Completed in % seconds".format(SystemClock.seconds - time).postln;
			action.(buffer,indices,ds);
		});
	}.play;
};

~analyze.(~buffer,~indices,{
	arg buffer, indices, ds;
	~ds = ds;
});
)

(
// 4. Reduce to 2 Dimensions
~umap = {
	arg buffer, indices, ds, action, numNeighbours = 15, minDist = 0.1;
	Routine{

		// get all the dimensions in the same general range so that when umap
		// makes its initial tree structure, the lower order mfcc coefficients
		// aren't over weighted
		var standardizer = FluidStandardize(s);

		// this is the dimensionality reduction algorithm, see helpfile for
		// more info
		var umap = FluidUMAP(s,2,numNeighbours,minDist);

		var redux_ds = FluidDataSet(s); // a new dataset for putting the 2D points into

		s.sync;

		standardizer.fitTransform(ds,redux_ds,{
			"standardization done".postln;
			umap.fitTransform(redux_ds,redux_ds,{
				"umap done".postln;
				action.(buffer,indices,redux_ds);
			});
		});
	}.play;
};

~umap.(~buffer,~indices,~ds,{
	arg buffer, indices, redux_ds;
	~ds = redux_ds;
});
)

(
// 5. Gridify if Desired
~grid = {
	arg buffer, indices, redux_ds, action;
	Routine{

		// first normalize so they're all 0 to 1
		var normer = FluidNormalize(s);

		// this will shift all dots around so they're in a grid shape
		var grider = FluidGrid(s);

		// a new dataset to hold the gridified dots
		var newds = FluidDataSet(s);

		s.sync;

		normer.fitTransform(redux_ds,newds,{
			"normalization done".postln;
			grider.fitTransform(newds,newds,{
				"grid done".postln;
				action.(buffer,indices,newds);
			});
		});
	}.play;
};

~grid.(~buffer,~indices,~ds,{
	arg buffer, indices, grid_ds;
	~ds = grid_ds;
});
)

(
// 6. Plot
~plot = {
	arg buffer, indices, redux_ds, action;
	Routine{
		var kdtree = FluidKDTree(s); // tree structure of the 2D points for fast neighbour lookup

		// a buffer for putting the 2D mouse point into so that it can be used to find the nearest neighbour
		var buf_2d = Buffer.alloc(s,2);

		// scaler just to double check and make sure that the points are 0 to 1
		// if the plotter is receiving the output of umap, they probably won't be...
		var scaler = FluidNormalize(s);

		// a new dataset to hold the normalized data
		var newds = FluidDataSet(s);

		s.sync;

		scaler.fitTransform(redux_ds,newds,{
			"scaling done".postln;
			kdtree.fit(newds,{
				"kdtree fit".postln;
				newds.dump({
					arg dict;
					var previous, fp;
					"ds dumped".postln;

					// pass in the dict from the dumped dataset. this is the data that we want to plot!
					fp = FluidPlotter(nil,Rect(0,0,800,800),dict,mouseMoveAction:{

						// when the mouse is clicked or dragged on the plotter, this function executes

						// the view is the FluidPlotter, the x and y are the position of the mouse according
						// to the range of the plotter. i.e., since our plotter is showing us the range 0 to 1
						// for both x and y, the xy positions will always be between 0 and 1
						arg view, x, y;
						buf_2d.setn(0,[x,y]); // set the mouse position into a buffer

						// then send that buffer to the kdtree to find the nearest point
						kdtree.kNearest(buf_2d,{
							arg nearest; // the identifier of the nearest point is returned (always as a symbol)

							if(previous != nearest,{ // as long as this isn't the same as the last one returned

								// split the integer off the identifier to know how to look it up for playback
								var index = nearest.asString.split($-)[1].asInteger;
								previous = nearest;
								nearest.postln;
								// index.postln;
								{
									var startPos = Index.kr(indices,index); // look in the indices buf to see where to start playback
									var dur_samps = Index.kr(indices,index + 1) - startPos; // and how long
									var sig = PlayBuf.ar(1,buffer,BufRateScale.ir(buffer),startPos:startPos);
									var dur_sec = dur_samps / BufSampleRate.ir(buffer);
									var env;
									dur_sec = min(dur_sec,1); // just in case some of the slices are *very* long...
									env = EnvGen.kr(Env([0,1,1,0],[0.03,dur_sec-0.06,0.03]),doneAction:2);
									sig.dup * env;
								}.play;
							});
						});
					});
					action.(fp,newds);
				});
			});
		});
	}.play;
};

~plot.(~buffer,~indices,~ds);
)

// ============== do all of it in one go =======================
(
var path = FluidFilesPath();
~load_folder.(path,{
	arg buffer0;
	~slice.(buffer0,{
		arg buffer1, indices1;
		~analyze.(buffer1, indices1,{
			arg buffer2, indices2, ds2;
			~umap.(buffer2,indices2,ds2,{
				arg buffer3, indices3, ds3;
				~plot.(buffer3,indices3,ds3,{
					arg plotter;
					"done with all".postln;
					~fp = plotter;
				});
			});
		});
	});
});
)
(
Task{
	var folder = "/Users/macprocomputer/Desktop/_flucoma/data_saves/211103_152953_2D_browsing_MFCC/";
	// var folder = "/Users/macprocomputer/Desktop/_flucoma/data_saves/211103_161354_2D_browsing_SpectralShape/";
	// var folder = "/Users/macprocomputer/Desktop/_flucoma/data_saves/211103_161638_2D_browsing_Pitch/";
	~ds_original = FluidDataSet(s);
	~buffer = Buffer.read(s,folder+/+"buffer.wav");
	~indices = Buffer.read(s,folder+/+"indices.wav");
	~kdtree = FluidKDTree(s,6);
	~ds = FluidDataSet(s);

	s.sync;

	~indices.loadToFloatArray(action:{
		arg fa;
		~indices = fa;
	});

	~ds_original.read(folder+/+"ds.json",{
		~ds.read(folder+/+"ds.json",{
			~kdtree.fit(~ds,{
				~ds.dump({
					arg dict;
					~ds_dict = dict;
					"kdtree fit".postln;
				});
			});
		});
	});
}.play;

~play_id = {
	arg id;
	var index = id.asString.split($-)[1].asInteger;
	var start_samps = ~indices[index];
	var end_samps = ~indices[index+1];
	var dur_secs = (end_samps - start_samps) / ~buffer.sampleRate;
	{
		var sig = PlayBuf.ar(1,~buffer,BufRateScale.ir(~buffer),startPos:start_samps);
		var env = EnvGen.kr(Env([0,1,1,0],[0.03,dur_secs-0.06,0.03]),doneAction:2);
		sig.dup;// * env;
	}.play;
	dur_secs;
};
~pt_buf = Buffer.alloc(s,~ds_dict.at("cols"));
)

(
// hear the 5 nearest points
Routine{
	// var id = "slice-558";
	var id = ~ds_dict.at("data").keys.choose;
	~ds.getPoint(id,~pt_buf,{
		~kdtree.kNearest(~pt_buf,{
			arg nearest;
			Routine{
				id.postln;
				~play_id.(id).wait;
				nearest[1..].do{
					arg near;
					1.wait;
					near.postln;
					~play_id.(near).wait;
				};
			}.play;
		})
	});
}.play;
)

// Standardize
(
Routine{
	var scaler = FluidStandardize(s);
	s.sync;
	scaler.fitTransform(~ds_original,~ds,{
		~kdtree.fit(~ds,{
			"standardized & kdtree fit".postln;
		});
	});
}.play;
)

// Normalize
(
Routine{
	var scaler = FluidNormalize(s);
	s.sync;
	scaler.fitTransform(~ds_original,~ds,{
		~kdtree.fit(~ds,{
			"normalized & kdtree fit".postln;
		});
	});
}.play;
)

// Robust Scaler
(
Routine{
	var scaler = FluidRobustScale(s);
	s.sync;
	scaler.fitTransform(~ds_original,~ds,{
		~kdtree.fit(~ds,{
			"robust scaled & kdtree fit".postln;
		});
	});
}.play;
)
s.options.sampleRate_(44100);
s.options.device_("Fireface UC Mac (24006457)");

(
// decompose!
s.waitForBoot{
	Routine{
		var drums = Buffer.read(s,FluidFilesPath("Nicol-LoopE-M.wav"));
		var resynth = Buffer(s);
		var n_components = 2;
		FluidBufNMF.process(s,drums,resynth:resynth,components:n_components).wait;

		"original sound".postln;
		{
			PlayBuf.ar(1,drums,BufRateScale.ir(drums),doneAction:2).dup;
		}.play;

		(drums.duration + 1).wait;

		n_components.do{
			arg i;

			"decomposed part #%".format(i+1).postln;
			{
				PlayBuf.ar(n_components,resynth,BufRateScale.ir(resynth),doneAction:2)[i].dup;
			}.play;

			(drums.duration + 1).wait;
		};

		"all decomposed parts spread across the stereo field".postln;

		{
			Splay.ar(PlayBuf.ar(n_components,resynth,BufRateScale.ir(resynth),doneAction:2));
		}.play;

	}.play;
}
)

// ok so what is it doing?
(
Routine{
	var n_components = 2;
	var drums = Buffer.read(s,FluidFilesPath("Nicol-LoopE-M.wav"));
	~bases = Buffer(s);
	~activations = Buffer(s);
	~resynth = Buffer(s);
	FluidBufNMF.process(s,drums,bases:~bases,activations:~activations,resynth:~resynth,components:n_components).wait;
	{
		~bases.plot("bases");
		~activations.plot("activations");
	}.defer;
}.play;
)

// base as a filter
(
Routine{
	var drums = Buffer.read(s,FluidFilesPath("Nicol-LoopE-M.wav"));
	var voice = Buffer.read(s,FluidFilesPath("Tremblay-AaS-VoiceQC-B2K-M.wav"));
	var song = Buffer.read(s,FluidFilesPath("Tremblay-beatRemember.wav"));
	s.sync;

	"drums through the drum bases as filters".postln;
	{
		var src = PlayBuf.ar(1,drums,BufRateScale.ir(drums),doneAction:2);
		var sig = FluidNMFFilter.ar(src,~bases,2);
		sig;
	}.play;

	(drums.duration+1).wait;

	"voice through the drum bases as filters".postln;
	{
		var src = PlayBuf.ar(1,voice,BufRateScale.ir(voice),doneAction:2);
		var sig = FluidNMFFilter.ar(src,~bases,2);
		sig;
	}.play;

	(voice.duration+1).wait;

	"song through the drum bases as filters".postln;
	{
		var src = PlayBuf.ar(2,song,BufRateScale.ir(song),doneAction:2)[0];
		var sig = FluidNMFFilter.ar(src,~bases,2);
		sig;
	}.play;
}.play;
)

// activations as an envelope
(
{
	var activation = PlayBuf.ar(2,~activations,BufRateScale.ir(~activations),doneAction:2);
	var sig = WhiteNoise.ar(0.dbamp) * activation;
	sig;
}.play;
)

// put them together...
(
{
	var activation = PlayBuf.ar(2,~activations,BufRateScale.ir(~activations),doneAction:2);
	var sig = WhiteNoise.ar(0.dbamp);
	sig = FluidNMFFilter.ar(sig,~bases,2) * activation;
	sig;
}.play;
)

// as a matcher, train on only 4 of the 22 seconds

(
Task{
	var dog = Buffer.readChannel(s,FluidFilesPath("Tremblay-BaB-SoundscapeGolcarWithDog.wav"),channels:[0]);
	var bases = Buffer(s);
	var match = [0,0];
	var win = Window("FluidNMFMatch",Rect(0,0,200,400));
	var uv = UserView(win,win.bounds)
	.drawFunc_{
		var w = uv.bounds.width / 2;
		Pen.color_(Color.green);
		match.do{
			arg match_val, i;
			var match_norm = match_val.linlin(0,30,0,uv.bounds.height);
			var top = uv.bounds.height - match_norm;
			/*top.postln;*/
			Pen.addRect(Rect(i * w,top,w,match_norm));
			Pen.draw;
		};
	};

	OSCdef(\nmfmatch,{
		arg msg;
		match = msg[3..];
		{uv.refresh}.defer;
	},"/nmfmatch");

	win.front;

	s.sync;

	FluidBufNMF.process(s,dog,numFrames:dog.sampleRate * 4,bases:bases,components:2).wait;
	{
		var sig = PlayBuf.ar(1,dog,BufRateScale.ir(dog),doneAction:2);
		SendReply.kr(Impulse.kr(30),"/nmfmatch",FluidNMFMatch.kr(sig,bases,2));
		sig;
	}.play;
}.play(AppClock);
)
(
s.waitForBoot{
// a counter that will increment each time we add a point to the datasets
// (so that they each can have a unique identifier)
~counter = 0;

~ds_input = FluidDataSet(s); // dataset to hold the input data points (xy position)
~ds_output = FluidDataSet(s); // dataset to hold the output data points (the 10 synth parameters)
~x_buf = Buffer.alloc(s,2); // a buffer for holding the current xy position (2 dimensions)
~y_buf = Buffer.alloc(s,10); // a buffer for holding the current synth parameters (10 parameters)

// the neural network. for more info on these arguments, visit learn.flucoma.org/reference/mlpregressor
~nn = FluidMLPRegressor(s,[7],FluidMLPRegressor.sigmoid,FluidMLPRegressor.sigmoid,learnRate:0.1,batchSize:1,validation:0);

// it's nice to close any open windows, in case this script gets run multiple times...
// that way the windows don't pile up
Window.closeAll;

~win = Window("MLP Regressor",Rect(0,0,1000,400));

Slider2D(~win,Rect(0,0,400,400))
.action_({
arg s2d;
// [s2d.x,s2d.y].postln;

// we're sending these values up to the synth; once there, they will get written into the buffer
// for the mlp to use as input
~synth.set(\x,s2d.x,\y,s2d.y);
});

~multisliderview = MultiSliderView(~win,Rect(400,0,400,400))
.size_(10) // we know that it will need 10 sliders
.elasticMode_(true) // this will ensure that the sliders are spread out evenly across the whole view
.action_({
arg msv;

// here we'll just set these values directly into the buffer
// on the server they get read out of the buffer and used to control the synthesizer
~y_buf.setn(0,msv.value);
});

// a button for adding points to the datasets, both datasets at the same time
// with the same identifier
Button(~win,Rect(800,0,200,20))
.states_([["Add Point"]])
.action_({
arg but;
var id = "example-%".format(~counter); // use the counter to create a unique identifier
~ds_input.addPoint(id,~x_buf); // add a point to the input dataset using whatever values are in x_buf
~ds_output.addPoint(id,~y_buf); // add a point to the output dataset using whatever values are in y_buf
~counter = ~counter + 1; // increment the counter!

// nice to just see every time what is going into the datasets
~ds_input.print;
~ds_output.print;
});

// a button to train the neural network. you can push the button multiple times to watch the loss
// decrease. each time you press it, the neural network doesn't reset, it just keeps training from where it left off
Button(~win,Rect(800,20,200,20))
.states_([["Train"]])
.action_({
arg but;
~nn.fit(~ds_input,~ds_output,{ // provide the dataset to use as input and the dataset to use as output
arg loss;
"loss: %".format(loss).postln; // post the loss so we can watch it go down after multiple trainings
});
});

// a button to control when the neural network is actually making predictions
// we want it to *not* be making predictions while we're adding points to the datasets (because we want
// the neural network to not be writing into y_buf)
Button(~win,Rect(800,40,200,20))
.states_([["Not Predicting",Color.yellow,Color.black],["Is Predicting",Color.green,Color.black]])
.action_({
arg but;
~synth.set(\predicting,but.value); // send the "boolean" (0 or 1) up to the synth
});

~win.front;

~synth = {
arg predicting = 0, x = 0, y = 0;
var osc1, osc2, feed1, feed2, base1=69, base2=69, base3 = 130, val, trig;

FluidKrToBuf.kr([x,y],~x_buf); // receive the xy positions as arguments to the synth, then write them into the buffer here

// if predicting is 1, "trig" will be impulses 30 times per second; if 0, it will be just a stream of zeros
trig = Impulse.kr(30) * predicting;

// the neural network will make a prediction each time a trigger, or impulse, is received in the first argument
// the next two arguments are (1) which buffer to use as input to the neural network, and (2) which buffer
// to write the output prediction into
~nn.kr(trig,~x_buf,~y_buf);

// read the 10 synth parameter values out of this buffer. val is a control rate stream of the 10 values
// when the neural network is making predictions (predicting == 1), it will be writing the predictions
// into that buffer, so that is what will be read out of here. when the neural network is not making predictions
// (predicting == 0) it will not be writing values into the buffer, so you can use the MultiSliderView above to
// write values into the buffer -- they'll still get read out into a control stream right here to control the synth!
val = FluidBufToKr.kr(~y_buf);

// if we are making predictions (trig is a series of impulses), send the values back to the language so that we can
// update the values in the multislider. this is basically only for aesthetic purposes. it's nice to see the multislider
// wiggle as the neural network makes its predictions!
SendReply.kr(trig,"/predictions",val);

// the actual synthesis algorithm. made by PA Tremblay
#feed2,feed1 = LocalIn.ar(2);
osc1 = MoogFF.ar(SinOsc.ar((((feed1 * val[0]) + val[1]) * base1).midicps,mul: (val[2] * 50).dbamp).atan,(base3 - (val[3] * (FluidLoudness.kr(feed2, 1, 0, hopSize: 64)[0].clip(-120,0) + 120))).lag(128/44100).midicps, val[4] * 3.5);
osc2 = MoogFF.ar(SinOsc.ar((((feed2 * val[5]) + val[6]) * base2).midicps,mul: (val[7] * 50).dbamp).atan,(base3 - (val[8] * (FluidLoudness.kr(feed1, 1, 0, hopSize: 64)[0].clip(-120,0) + 120))).lag(128/44100).midicps, val[9] * 3.5);
Out.ar(0,LeakDC.ar([osc1,osc2],mul: 0.1));
LocalOut.ar([osc1,osc2]);
}.play;

// catch the osc messages sent by the SendReply above and update the MultiSliderView
OSCdef(\predictions,{
arg msg;
// msg.postln;
{~multisliderview.value_(msg[3..])}.defer;
},"/predictions");
}
)
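/*
Stripped of the GUI, the control-rate flow above reduces to three steps:
write the inputs into a buffer, trigger a prediction, read the outputs
back as a control stream. A minimal sketch (assumes ~nn has been trained
and ~x_buf / ~y_buf are allocated as in the example; MouseX/MouseY stand
in for the Slider2D):
*/
(
{
var trig = Impulse.kr(30); // predict 30 times per second
FluidKrToBuf.kr([MouseX.kr(0,1), MouseY.kr(0,1)], ~x_buf); // 1. inputs -> buffer
~nn.kr(trig, ~x_buf, ~y_buf); // 2. trigger a prediction
FluidBufToKr.kr(~y_buf); // 3. the 10 outputs as a kr stream
}.play;
)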
(
Window.closeAll;
s.options.inDevice_("MacBook Pro Microphone");
s.options.outDevice_("External Headphones");
// s.options.sampleRate_(48000);
s.options.sampleRate_(44100);
s.waitForBoot{
Task{
var win = Window(bounds:Rect(100,100,1000,800));
var label_width = 120;
var item_width = 300;
var mfcc_multslider;
var nMFCCs = 13;
var mfccbuf = Buffer.alloc(s,nMFCCs);
var parambuf = Buffer.alloc(s,3);
var id_counter = 0;
var continuous_training = false;
var mfcc_ds = FluidDataSet(s);
var param_ds = FluidDataSet(s);
var mfcc_ds_norm = FluidDataSet(s);
var param_ds_norm = FluidDataSet(s);
var scaler_params = FluidNormalize(s);
var scaler_mfcc = FluidNormalize(s);
var nn = FluidMLPRegressor(s,[3,3],FluidMLPRegressor.sigmoid,FluidMLPRegressor.sigmoid,learnRate:0.05,batchSize:5,validation:0);
var synth, loss_st;
var param_sliders = Array.newClear(3);
var statsWinSl, hidden_tf, batchSize_nb, momentum_nb, learnRate_nb, maxIter_nb, outAct_pum, act_pum;
var add_point = {
var id = "point-%".format(id_counter);
mfcc_ds.addPoint(id,mfccbuf);
param_ds.addPoint(id,parambuf);
id_counter = id_counter + 1;
};
var train = {
scaler_mfcc.fitTransform(mfcc_ds,mfcc_ds_norm,{
scaler_params.fitTransform(param_ds,param_ds_norm,{
// mfcc_ds.print;
// param_ds.print;
nn.fit(mfcc_ds_norm,param_ds_norm,{
arg loss;
// loss.postln;
defer{loss_st.string_("loss: %".format(loss))};
if(continuous_training,{
train.value;
});
});
});
});
};
var open_mlp = {
arg path;
// nn.prGetParams.postln;
nn.read(path,{
var params = nn.prGetParams;
var n_layers = params[1];
var layers_string = "";

// params.postln;

n_layers.do({
arg i;
if(i > 0,{layers_string = "% ".format(layers_string)});
layers_string = "%%".format(layers_string,params[2+i]);
});

nn.maxIter_(maxIter_nb.value);
nn.learnRate_(learnRate_nb.value);
nn.momentum_(momentum_nb.value);
nn.batchSize_(batchSize_nb.value);

defer{
hidden_tf.string_(layers_string);
act_pum.value_(nn.activation);
outAct_pum.value_(nn.outputActivation);
/* maxIter_nb.value_(nn.maxIter);
learnRate_nb.value_(nn.learnRate);
momentum_nb.value_(nn.momentum);
batchSize_nb.value_(nn.batchSize);*/
};
});
};

~in_bus = Bus.audio(s);

s.sync;

synth = {
arg vol = -15, isPredicting = 0, avg_win = 0, smooth_params = 0;
var params = FluidStats.kr(FluidBufToKr.kr(parambuf),ControlRate.ir * smooth_params * isPredicting)[0];
var msig = SinOsc.ar(params[1],0,params[2] * params[1]);
var csig = SinOsc.ar(params[0] + msig);
// var sound_in = SoundIn.ar(0);
var sound_in = In.ar(~in_bus);
var analysis_sig, mfccs, trig, mfccbuf_norm, parambuf_norm;

csig = BLowPass4.ar(csig,16000);
csig = BHiPass4.ar(csig,40);
analysis_sig = Select.ar(isPredicting,[csig,sound_in]);
mfccs = FluidMFCC.kr(analysis_sig,nMFCCs,startCoeff:1,maxNumCoeffs:nMFCCs);
trig = Impulse.kr(30);
mfccbuf_norm = LocalBuf(nMFCCs);
parambuf_norm = LocalBuf(3);

mfccs = FluidStats.kr(mfccs,ControlRate.ir * avg_win)[0];
FluidKrToBuf.kr(mfccs,mfccbuf);

scaler_mfcc.kr(trig * isPredicting,mfccbuf,mfccbuf_norm);
nn.kr(trig * isPredicting,mfccbuf_norm,parambuf_norm);
scaler_params.kr(trig * isPredicting,parambuf_norm,parambuf,invert:1);

SendReply.kr(trig * isPredicting,"/params",params);
SendReply.kr(trig,"/mfccs",mfccs);

csig = csig.dup;
csig * Select.kr(isPredicting,[vol.dbamp,FluidLoudness.kr(sound_in)[0].dbamp]);
}.play;

s.sync;

win.view.decorator_(FlowLayout(Rect(0,0,win.bounds.width,win.bounds.height)));

param_sliders[0] = EZSlider(win,Rect(0,0,item_width,20),"carrier freq",\freq.asSpec,{arg sl; parambuf.set(0,sl.value)},440,true,label_width);
win.view.decorator.nextLine;

param_sliders[1] = EZSlider(win,Rect(0,0,item_width,20),"mod freq",\freq.asSpec,{arg sl; parambuf.set(1,sl.value)},100,true,label_width);
win.view.decorator.nextLine;

param_sliders[2] = EZSlider(win,Rect(0,0,item_width,20),"index",ControlSpec(0,20),{arg sl; parambuf.set(2,sl.value)},10,true,label_width);
win.view.decorator.nextLine;

EZSlider(win,Rect(0,0,item_width,20),"params avg smooth",nil.asSpec,{arg sl; synth.set(\smooth_params,sl.value)},0,true,label_width);
win.view.decorator.nextLine;

StaticText(win,Rect(0,0,label_width,20)).string_("% MFCCs".format(nMFCCs));
win.view.decorator.nextLine;

statsWinSl = EZSlider(win,Rect(0,0,item_width,20),"mfcc avg smooth",nil.asSpec,{arg sl; synth.set(\avg_win,sl.value)},0,true,label_width);
win.view.decorator.nextLine;

mfcc_multslider = MultiSliderView(win,Rect(0,0,item_width,200))
.size_(nMFCCs)
.elasticMode_(true);

win.view.decorator.nextLine;

Button(win,Rect(0,0,100,20))
.states_([["Add Point"]])
.action_{
add_point.value;
};

win.view.decorator.nextLine;

// spacer
StaticText(win,Rect(0,0,label_width,20));
win.view.decorator.nextLine;

// MLP Parameters
StaticText(win,Rect(0,0,label_width,20)).align_(\right).string_("hidden layers");
hidden_tf = TextField(win,Rect(0,0,item_width - label_width,20))
.string_(nn.hidden.asString.replace(", "," ")[2..(nn.hidden.asString.size-3)])
.action_{
arg tf;
var hidden_ = "[%]".format(tf.string.replace(" ",",")).interpret;
nn.hidden_(hidden_);
// nn.prGetParams.postln;
};

win.view.decorator.nextLine;
StaticText(win,Rect(0,0,label_width,20)).align_(\right).string_("activation");
act_pum = PopUpMenu(win,Rect(0,0,item_width - label_width,20))
.items_(["identity","sigmoid","relu","tanh"])
.value_(nn.activation)
.action_{
arg pum;
nn.activation_(pum.value);
// nn.prGetParams.postln;
};

win.view.decorator.nextLine;
StaticText(win,Rect(0,0,label_width,20)).align_(\right).string_("output activation");
outAct_pum = PopUpMenu(win,Rect(0,0,item_width - label_width,20))
.items_(["identity","sigmoid","relu","tanh"])
.value_(nn.outputActivation)
.action_{
arg pum;
nn.outputActivation_(pum.value);

// nn.prGetParams.postln;
};

win.view.decorator.nextLine;
maxIter_nb = EZNumber(win,Rect(0,0,item_width,20),"max iter",ControlSpec(1,10000,step:1),{
arg nb;
nn.maxIter_(nb.value.asInteger);

// nn.prGetParams.postln;
},nn.maxIter,false,label_width);

win.view.decorator.nextLine;
learnRate_nb = EZNumber(win,Rect(0,0,item_width,20),"learn rate",ControlSpec(0.001,1.0),{
arg nb;
nn.learnRate_(nb.value);

// nn.prGetParams.postln;
},nn.learnRate,false,label_width);

win.view.decorator.nextLine;
momentum_nb = EZNumber(win,Rect(0,0,item_width,20),"momentum",ControlSpec(0,1),{
arg nb;
nn.momentum_(nb.value);

// nn.prGetParams.postln;
},nn.momentum,false,label_width);

win.view.decorator.nextLine;
batchSize_nb = EZNumber(win,Rect(0,0,item_width,20),"batch size",ControlSpec(1,1000,step:1),{
arg nb;
nn.batchSize_(nb.value.asInteger);

// nn.prGetParams.postln;
},nn.batchSize,false,label_width);

win.view.decorator.nextLine;

Button(win,Rect(0,0,100,20))
.states_([["Train"]])
.action_{
train.value;
};

Button(win,Rect(0,0,200,20))
.states_([["Continuous Training Off"],["Continuous Training On"]])
.action_{
arg but;
continuous_training = but.value.asBoolean;
train.value;
};

win.view.decorator.nextLine;

loss_st = StaticText(win,Rect(0,0,item_width,20)).string_("loss:");

win.view.decorator.nextLine;
Button(win,Rect(0,0,100,20))
.states_([["Not Predicting"],["Predicting"]])
.action_{
arg but;
synth.set(\isPredicting,but.value);
};

win.view.decorator.nextLine;

Button(win,Rect(0,0,100,20))
.states_([["Save MLP"]])
.action_{
Dialog.savePanel({
arg path;
nn.write(path);
});
};

Button(win,Rect(0,0,100,20))
.states_([["Open MLP"]])
.action_{
Dialog.openPanel({
arg path;
open_mlp.(path);
});
};

win.bounds_(win.view.decorator.used);
win.front;

OSCdef(\mfccs,{
arg msg;
// msg.postln;
defer{
mfcc_multslider.value_(msg[3..].linlin(-40,40,0,1));
};
},"/mfccs");

OSCdef(\params,{
arg msg;
// msg.postln;
defer{
param_sliders.do{
arg sl, i;
sl.value_(msg[3 + i]);
};
};
},"/params");

s.sync;

statsWinSl.valueAction_(0.0);

/* 100.do{
var cfreq = exprand(20,20000);
var mfreq = exprand(20,20000);
var index = rrand(0.0,20);
parambuf.setn(0,[cfreq,mfreq,index]);
0.2.wait;
add_point.value;
0.05.wait;
};*/
40.do{
var cfreq = exprand(100.0,1000.0);
var mfreq = exprand(100.0,min(cfreq,500.0));
var index = rrand(0.0,8.0);
var arr = [cfreq,mfreq,index];
parambuf.setn(0,arr);
0.1.wait;
add_point.value;
0.1.wait;
arr.postln;
param_ds.print;
"\n\n".postln;
};

}.play(AppClock);
};
)

(
Routine{
//~path = FluidFilesPath("Tremblay-AaS-VoiceQC-B2K.wav");
~path = FluidFilesPath("Tremblay-CEL-GlitchyMusicBoxMelo.wav");
~test_buf = Buffer.readChannel(s,~path,channels:[0]);
s.sync;
{
var sig = PlayBuf.ar(1,~test_buf,BufRateScale.ir(~test_buf),doneAction:2);
Out.ar(0,sig);
sig;
}.play(outbus:~in_bus);
}.play;
)

s.record;
s.stopRecording
/* ======= 1. Hear the Sound ============

load a part of a sound that has 3 clear components:
- a clear pitch component to start
- a noisy pitchless ending
- DC offset silence on both ends

*/

(
~src = Buffer.read(s,FluidFilesPath("Tremblay-ASWINE-ScratchySynth-M.wav"));//,42250,44100);
)

// listen
~src.play;

// ======= Let's try to extract that frequency from the audio file. ===========

// analyze
~pitches = Buffer(s);
~stats = Buffer(s);

FluidBufPitch.process(s,~src,features: ~pitches);
FluidBufStats.process(s,~pitches,stats:~stats);

(
// get the average freq
~stats.get(0,{
arg f;
~avgfreq = f;
~avgfreq.postln;
});
)

(
// play a sine tone at the avg freq alongside the soundfile
//average freq
~avgfreq_synth = {SinOsc.ar(~avgfreq,mul: 0.05)}.play;
//compare with the source
~src.play;
)

// hmm... that seems wrong...

/*

what if we weight the average frequency by the loudness
of the analysis frame so that the silences are not considered
as strongly.

*/

// do a loudness analysis
~loud = Buffer(s);
FluidBufLoudness.process(s,~src,features:~loud);
FluidBufStats.process(s,~loud,stats:~stats);

(
// get min and max
~stats.loadToFloatArray(action:{
arg stats;
~min_loudness = stats.clump(2).flop[0][4];
~max_loudness = stats.clump(2).flop[0][6];
~min_loudness.postln;
~max_loudness.postln;
});
)

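/*
Why stats.clump(2).flop[0] above picks out the channel-0 statistics:
FluidBufStats writes 7 statistics per channel (mean, stddev, skewness,
kurtosis, min, median, max), and loadToFloatArray flattens the 2-channel
loudness buffer frame-interleaved, so clump(2) re-pairs the channels and
flop groups each channel's 7 stats together. A small language-side sketch
with made-up numbers:
*/
(
var flat = [-20,-12, 3,2, -1,0.5, -60,-40, -35,-30, -10,-6, -5,-3]; // [loudness,peak] pairs, 7 stats
var byChan = flat.clump(2).flop; // [[7 loudness stats],[7 peak stats]]
byChan[0][4].postln; // min loudness: -35
byChan[0][6].postln; // max loudness: -5
)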
// scale the loudness analysis from 0 to 1, using the min and max obtained above
~scaled = Buffer(s);
FluidBufScale.process(s,~loud,numChans: 1,destination: ~scaled,inputLow: ~min_loudness,inputHigh: ~max_loudness);

// then use this scaled analysis to weight the statistical analysis
FluidBufStats.process(s,~pitches,numChans:1,stats:~stats,weights:~scaled);

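/*
What weighting the statistics does, in plain arithmetic: each frame's
pitch estimate contributes in proportion to its weight, so near-silent
frames barely move the mean. A hypothetical language-side sketch:
*/
(
var freqs   = [440, 442, 438, 3000, 50];     // per-frame pitch estimates
var weights = [1.0, 1.0, 1.0, 0.01, 0.0];    // e.g. scaled loudness per frame
var wmean = (freqs * weights).sum / weights.sum;
wmean.postln; // about 448, whereas the unweighted mean is 874
)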
(
// get the average freq (now with the weighted average)
~stats.get(0,{
arg f;
~avgfreq = f;
~avgfreq.postln;
});
)

(
// play a sine tone at the avg freq alongside the soundfile
//average freq
~avgfreq_synth = {SinOsc.ar(~avgfreq,mul: 0.05)}.play;
//compare with the source
~src.play;
)

// hmm... still wrong. too low now.

/*
ok, how about if we weight not by loudness, but by the pitch confidence of the pitch analysis
*/

FluidBufPitch.process(s,~src,features: ~pitches);
~thresh_buf = Buffer(s);
FluidBufThresh.process(s, ~pitches, startChan: 1, numChans: 1, destination: ~thresh_buf, threshold: 0.8);
FluidBufStats.process(s,~pitches,numChans:1,stats:~stats,weights:~thresh_buf);

(
// get the average freq
~stats.get(0,{
arg f;
~avgfreq = f;
~avgfreq.postln;
});
)

(
// play a sine tone at the avg freq alongside the soundfile
//average freq
~avgfreq_synth = {SinOsc.ar(~avgfreq,mul: 0.05)}.play;
//compare with the source
~src.play;
)

// closer!

FluidBufPitch.process(s,~src,features: ~pitches);

(
~pitches.loadToFloatArray(action:{
arg pitches;
defer{pitches.histo(50,1000,20000).plot(discrete:true)};
});
)
// raise the threshold and toss out some outliers
FluidBufPitch.process(s,~src,features: ~pitches);
~pitches.plot(separately:true);
~thresh_buf = Buffer(s);
FluidBufThresh.process(s, ~pitches, startChan: 1, numChans: 1, destination: ~thresh_buf, threshold: 0.9);
FluidBufStats.process(s,~pitches,numChans:1,stats:~stats,weights:~thresh_buf,outliersCutoff:1.5);

(
// get the average freq
~stats.get(0,{
arg f;
~avgfreq = f;
~avgfreq.postln;
});
)

(
// play a sine tone at the avg freq alongside the soundfile
//average freq
~avgfreq_synth = {SinOsc.ar(~avgfreq,mul: 0.05)}.play;
//compare with the source
~src.play;
)
// this patch requests a folder, iterates through all accepted audio files, and concatenates them into the destination buffer. It also yields an array with the frame offsets where each file starts in the new buffer.

(
var fileNames;
c = [];

FileDialog.new({|selection|
var total, totaldur = 0, maxchans = 0;
t = Main.elapsedTime;
fileNames = PathName.new(selection[0])
.entries
.select({|f|
[\wav, \WAV, \mp3,\aif].includes(f.extension.asSymbol);});
total = fileNames.size();
fileNames.do({arg fp;
SoundFile.use(fp.asAbsolutePath , {
arg file;
var dur = file.numFrames;
c = c.add(totaldur);
totaldur = totaldur + dur;
maxchans = maxchans.max(file.numChannels);
});
});
Routine{
b = Buffer.alloc(s,totaldur,maxchans);
s.sync;
fileNames.do{|f, i|
f.postln;
("Loading"+(i+1)+"of"+total).postln;
Buffer.read(s, f.asAbsolutePath,action:{arg tempbuf; FluidBufCompose.process(s,tempbuf,destination:b,destStartFrame:c[i],action:{tempbuf.free});});
};
s.sync;
("loading buffers done in" + (Main.elapsedTime - t).round(0.1) + "seconds.").postln;
}.play;
}, fileMode:2);
)

b.plot
c.postln
b.play

{PlayBuf.ar(1,b.bufnum,startPos:c[15])}.play

Buffer.freeAll
//destination buffer
(
b = Buffer.new();
c = Array.new();
)

// this patch requests a folder, iterates through all accepted audio files, and concatenates them into the destination buffer. It also yields an array with the frame offsets where each file starts in the new buffer.

(
var tempbuf,dest=0, fileNames;

FileDialog.new({|selection|
var total;
t = Main.elapsedTime;
fileNames = PathName.new(selection[0])
.entries
.select({|f|
[\wav, \WAV, \mp3,\aif].includes(f.extension.asSymbol);});
total = fileNames.size();
Routine{
fileNames.do{|f, i|
f.postln;
("Loading"+(i+1)+"of"+total).postln;
tempbuf = Buffer.read(s,f.asAbsolutePath);
s.sync;
c = c.add(dest);
FluidBufCompose.process(s,tempbuf,destStartFrame:dest,destination:b);
s.sync;
dest = b.numFrames;
};
("loading buffers done in" + (Main.elapsedTime - t).round(0.1) + "seconds.").postln;
}.play;
}, fileMode:2);
)

b.plot
c.postln
b.play

{PlayBuf.ar(1,b.bufnum,startPos:c[15])}.play
// define a few processes
(
~ds = FluidDataSet(s);//no name needs to be provided
//define as many buffers as we have parallel voices/threads in the extractor processing (default is 4)
~mfccbuf = 4.collect{Buffer.new};
~statsbuf = 4.collect{Buffer.new};
~flatbuf = 4.collect{Buffer.new};

// here we instantiate a loader which creates a single large buffer with a dictionary of what was included in it
// ~loader = FluidLoadFolder("/Volumes/machins/projets/newsfeed/sons/smallnum/");
~loader = FluidLoadFolder(File.realpath(FluidLoadFolder.class.filenameSymbol).dirname +/+ "../AudioFiles");

// here we instantiate a further slicing step if need be, which iterates through all the items of the FluidLoadFolder and slices them with the declared function.
~slicer = FluidSliceCorpus({ |src,start,num,dest|
FluidBufOnsetSlice.kr(src, start, num, metric: 9, minSliceLength: 17, indices:dest, threshold:0.7, blocking: 1)
});

// here we instantiate a process of description and dataset writing, which will run on each slice from the previous step and write the entry. Note the chain of Done.kr triggers.
~extractor = FluidProcessSlices({|src,start,num,data|
var mfcc, stats, writer, flatten,mfccBuf, statsBuf, flatBuf, identifier, voice;
identifier = data.key;
voice = data.value[\voice];
mfcc = FluidBufMFCC.kr(src, startFrame:start, numFrames:num, numChans:1, features:~mfccbuf[voice], trig:1, blocking: 1);
stats = FluidBufStats.kr(~mfccbuf[voice], stats:~statsbuf[voice], trig:Done.kr(mfcc), blocking: 1);
flatten = FluidBufFlatten.kr(~statsbuf[voice], destination:~flatbuf[voice], trig:Done.kr(stats), blocking: 1);
writer = FluidDataSetWr.kr(~ds, identifier, nil, ~flatbuf[voice], trig: Done.kr(flatten), blocking: 1)
});
)

//////////////////////////////////////////////////////////////////////////
//loading process

// just run the loader
(
t = Main.elapsedTime;
~loader.play(s,action:{(Main.elapsedTime - t).postln;"Loaded".postln;});
)

//load and play to test if it is that quick - it is!
(
t = Main.elapsedTime;
~loader.play(s,action:{(Main.elapsedTime - t).postln;"Loaded".postln;{var start, stop; PlayBuf.ar(~loader.index[~loader.index.keys.asArray.last.asSymbol][\numchans],~loader.buffer,startPos: ~loader.index[~loader.index.keys.asArray.last.asSymbol][\bounds][0])}.play;});
)

//ref to the buffer
~loader.buffer
//size of item
~loader.index.keys.size
//a way to get all keys info sorted by time
~stuff = Array.newFrom(~loader.index.keys).sort.collect{|x|~loader.index[x][\bounds]}.sort{|a,b| a[0]<b[0]};

//or to iterate in the underlying dictionary (unsorted)
(
~loader.index.pairsDo{ |k,v,i|
k.postln;
v.pairsDo{|l,u,j|
"\t\t\t".post;
(l->u).postln;
}
}
)

// or write to file a human readable, sorted version of the database after sorting it by index.
(
a = File(Platform.defaultTempDir ++ "sc-loading.json","w");
~stuffsorted = Array.newFrom(~loader.index.keys).sort{|a,b| ~loader.index[a][\bounds][0]< ~loader.index[b][\bounds][0]}.do{|k|
v = ~loader.index[k];
a.write(k.asString ++ "\n");
v.pairsDo{|l,u,j|
a.write("\t\t\t" ++ (l->u).asString ++ "\n");
}
};
a.close;
)

//////////////////////////////////////////////////////////////////////////
|
|
||||||
// slicing process
|
|
||||||
|
|
||||||
// just run the slicer
|
|
||||||
(
|
|
||||||
t = Main.elapsedTime;
|
|
||||||
~slicer.play(s,~loader.buffer,~loader.index,action:{(Main.elapsedTime - t).postln;"Slicing done".postln});
|
|
||||||
)
|
|
||||||
|
|
||||||
//slice count
|
|
||||||
~slicer.index.keys.size
|
|
||||||
|
|
||||||
// iterate
|
|
||||||
(
|
|
||||||
~slicer.index.pairsDo{ |k,v,i|
|
|
||||||
k.postln;
|
|
||||||
v.pairsDo{|l,u,j|
|
|
||||||
"\t\t\t".post;
|
|
||||||
(l->u).postln;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
)
|
|
||||||
|
|
||||||
///// write to file in human readable format, in order.
|
|
||||||
(
|
|
||||||
a = File(Platform.defaultTempDir ++ "sc-spliting.json","w");
|
|
||||||
~stuffsorted = Array.newFrom(~slicer.index.keys).sort{|a,b| ~slicer.index[a][\bounds][0]< ~slicer.index[b][\bounds][0]}.do{|k|
|
|
||||||
v = ~slicer.index[k];
|
|
||||||
a.write(k.asString ++ "\n");
|
|
||||||
v.pairsDo{|l,u,j|
|
|
||||||
a.write("\t\t\t" ++ (l->u).asString ++ "\n");
|
|
||||||
}
|
|
||||||
};
|
|
||||||
a.close;
|
|
||||||
)
|
|
||||||
|
|
||||||
//////////////////////////////////////////////////////////////////////////
|
|
||||||
// description process
|
|
||||||
|
|
||||||
// just run the descriptor extractor
|
|
||||||
(
|
|
||||||
t = Main.elapsedTime;
|
|
||||||
~extractor.play(s,~loader.buffer,~slicer.index,action:{(Main.elapsedTime - t).postln;"Features done".postln});
|
|
||||||
)
|
|
||||||
|
|
||||||
// write the dataset to file with the native JSON
|
|
||||||
~ds.write(Platform.defaultTempDir ++ "sc-dataset.json")
|
|
||||||
|
|
||||||
// open the file in your default json editor
|
|
||||||
(Platform.defaultTempDir ++ "sc-dataset.json").openOS
|
|
||||||
|
|
||||||
//////////////////////////////////////////////////////////////////////////
|
|
||||||
// manipulating and querying the data
|
|
||||||
|
|
||||||
//building a tree
|
|
||||||
~tree = FluidKDTree(s);
|
|
||||||
~tree.fit(~ds,{"Fitted".postln;});
|
|
||||||
|
|
||||||
//retrieve a sound to match
|
|
||||||
~targetsound = Buffer(s);
|
|
||||||
~targetname = ~slicer.index.keys.asArray.scramble[0].asSymbol;
|
|
||||||
#a,b = ~slicer.index[~targetname][\bounds];
|
|
||||||
FluidBufCompose.process(s,~loader.buffer,a,(b-a),numChans: 1, destination: ~targetsound,action: {~targetsound.play;})
|
|
||||||
|
|
||||||
//describe the sound to match
|
|
||||||
(
|
|
||||||
{
|
|
||||||
var mfcc, stats, flatten;
|
|
||||||
mfcc = FluidBufMFCC.kr(~targetsound,features:~mfccbuf[0],trig:1);
|
|
||||||
stats = FluidBufStats.kr(~mfccbuf[0],stats:~statsbuf[0],trig:Done.kr(mfcc));
|
|
||||||
flatten = FluidBufFlatten.kr(~statsbuf[0],destination:~flatbuf[0],trig:Done.kr(stats));
|
|
||||||
FreeSelfWhenDone.kr(flatten);
|
|
||||||
}.play;
|
|
||||||
)
|
|
||||||
|
|
||||||
//find its nearest neighbours
|
|
||||||
~friends = Array;
|
|
||||||
~tree.numNeighbours = 5;
|
|
||||||
~tree.kNearest(~flatbuf[0],{|x| ~friends = x.postln;})
|
|
||||||
|
|
||||||
// play them in a row
|
|
||||||
(
|
|
||||||
Routine{
|
|
||||||
5.do{|i|
|
|
||||||
var dur;
|
|
||||||
v = ~slicer.index[~friends[i].asSymbol];
|
|
||||||
dur = (v[\bounds][1] - v[\bounds][0]) / s.sampleRate;
|
|
||||||
{BufRd.ar(v[\numchans],~loader.buffer,Line.ar(v[\bounds][0],v[\bounds][1],dur, doneAction: 2))}.play;
|
|
||||||
~friends[i].postln;
|
|
||||||
dur.wait;
|
|
||||||
};
|
|
||||||
}.play;
|
|
||||||
)
|
|
||||||
// define a few processes
(
~ds = FluidDataSet(s);
~dsW = FluidDataSet(s);
~dsL = FluidDataSet(s);
//define as many buffers as we have parallel voices/threads in the extractor processing (default is 4)
~loudbuf = 4.collect{Buffer.new};
~weightbuf = 4.collect{Buffer.new};
~mfccbuf = 4.collect{Buffer.new};
~statsbuf = 4.collect{Buffer.new};
~flatbuf = 4.collect{Buffer.new};

// here we instantiate a loader as per example 0
~loader = FluidLoadFolder(File.realpath(FluidBufMFCC.class.filenameSymbol).dirname.withTrailingSlash ++ "../AudioFiles/");

// here we instantiate a further slicing step as per example 0
~slicer = FluidSliceCorpus({ |src,start,num,dest|
FluidBufOnsetSlice.kr(src,start,num,metric: 9, minSliceLength: 17, indices:dest, threshold:0.2,blocking: 1)
});

// here we instantiate a process of description and dataset writing, as per example 0
~extractor = FluidProcessSlices({|src,start,num,data|
var identifier, voice, mfcc, stats, flatten;
identifier = data.key;
voice = data.value[\voice];
mfcc = FluidBufMFCC.kr(src, startFrame:start, numFrames:num, numChans:1, features:~mfccbuf[voice], padding: 2, trig:1, blocking: 1);
stats = FluidBufStats.kr(~mfccbuf[voice], stats:~statsbuf[voice], numDerivs: 1, trig:Done.kr(mfcc), blocking: 1);
flatten = FluidBufFlatten.kr(~statsbuf[voice], destination:~flatbuf[voice], trig:Done.kr(stats), blocking: 1);
FluidDataSetWr.kr(~ds, identifier, nil, ~flatbuf[voice], Done.kr(flatten), blocking: 1);
});

// here we make another processor, this time doing an amplitude weighting
~extractorW = FluidProcessSlices({|src,start,num,data|
var identifier, voice, loud, weights, mfcc, stats, flatten;
identifier = data.key;
voice = data.value[\voice];
mfcc = FluidBufMFCC.kr(src, startFrame:start, numFrames:num, numChans:1, features:~mfccbuf[voice], padding: 2, trig:1, blocking: 1);
loud = FluidBufLoudness.kr(src, startFrame:start, numFrames:num, numChans:1, features:~loudbuf[voice], padding: 2, trig:Done.kr(mfcc), blocking: 1);
weights = FluidBufScale.kr(~loudbuf[voice], numChans: 1, destination: ~weightbuf[voice], inputLow: -70, inputHigh: 0, trig: Done.kr(loud), blocking: 1);
stats = FluidBufStats.kr(~mfccbuf[voice], stats:~statsbuf[voice], numDerivs: 1, weights: ~weightbuf[voice], trig:Done.kr(weights), blocking: 1);
flatten = FluidBufFlatten.kr(~statsbuf[voice], destination:~flatbuf[voice], trig:Done.kr(stats), blocking: 1);
FluidDataSetWr.kr(~dsW, identifier, nil, ~flatbuf[voice], Done.kr(flatten), blocking: 1);
});

// and here we make a little processor for loudness if we want to poke at it
~extractorL = FluidProcessSlices({|src,start,num,data|
var identifier, voice, loud, stats, flatten;
identifier = data.key;
voice = data.value[\voice];
loud = FluidBufLoudness.kr(src, startFrame:start, numFrames:num, numChans:1, features:~mfccbuf[voice], trig:1, padding: 2, blocking: 1);
stats = FluidBufStats.kr(~mfccbuf[voice], stats:~statsbuf[voice], numDerivs: 1, trig:Done.kr(loud), blocking: 1);
flatten = FluidBufFlatten.kr(~statsbuf[voice], destination:~flatbuf[voice], trig:Done.kr(stats), blocking: 1);
FluidDataSetWr.kr(~dsL, identifier, nil, ~flatbuf[voice], Done.kr(flatten), blocking: 1);
});
)

//////////////////////////////////////////////////////////////////////////
//loading process

//load and play to test if it is that quick - it is!
(
t = Main.elapsedTime;
~loader.play(s,action:{(Main.elapsedTime - t).postln;"Loaded".postln;{var start, stop; PlayBuf.ar(~loader.index[~loader.index.keys.asArray.last.asSymbol][\numchans],~loader.buffer,startPos: ~loader.index[~loader.index.keys.asArray.last.asSymbol][\bounds][0])}.play;});
)

//////////////////////////////////////////////////////////////////////////
// slicing process

// run the slicer
(
t = Main.elapsedTime;
~slicer.play(s,~loader.buffer,~loader.index,action:{(Main.elapsedTime - t).postln;"Slicing done".postln});
)

//slice count
~slicer.index.keys.size

//////////////////////////////////////////////////////////////////////////
// description process

// run both descriptor extractors - they are kept separate here so the batch process durations can be compared
(
t = Main.elapsedTime;
~extractor.play(s,~loader.buffer,~slicer.index,action:{(Main.elapsedTime - t).postln;"Features done".postln});
)

(
t = Main.elapsedTime;
~extractorW.play(s,~loader.buffer,~slicer.index,action:{(Main.elapsedTime - t).postln;"Features done".postln});
)

//////////////////////////////////////////////////////////////////////////
// manipulating and querying the data

// extracting whatever stats we want. In this case, mean/std/lowest/highest, and the same on the first derivative - excluding MFCC0 as it is mostly volume, keeping MFCC1-12

(
~curated = FluidDataSet(s);
~curatedW = FluidDataSet(s);
~curator = FluidDataSetQuery.new(s);
)

(
~curator.addRange(1,12,{
~curator.addRange(14,12,{
~curator.addRange(53,12,{
~curator.addRange(79,12,{
~curator.addRange(92,12,{
~curator.addRange(105,12,{
~curator.addRange(144,12,{
~curator.addRange(170,12);
});
});
});
});
});
});
});
)
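// The start indices above are not arbitrary: FluidBufStats with numDerivs:1 yields 14
// stat frames (7 stats plus 7 derivative stats) for the 13 MFCC channels, and the
// flattened point lays them out frame by frame, so stat n of channel c lands at
// index n*13 + c. A quick check of that arithmetic (a sketch in Python, outside
// SuperCollider, under that layout assumption):

```python
# Layout assumption: 14 stat frames x 13 MFCC channels, flattened frame-major.
n_chans = 13
# stats kept: mean, stddev, low, high (frames 0,1,4,6) and the same on the
# first derivative (frames 7,8,11,13)
kept_stats = [0, 1, 4, 6, 7, 8, 11, 13]
# start at channel 1 to skip MFCC0, then take the 12 remaining channels
starts = [stat * n_chans + 1 for stat in kept_stats]
print(starts)  # the startIndex of each addRange above
```

Each range keeps 12 channels, so the curated datasets end up with 8 x 12 = 96 dimensions.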
~curator.transform(~ds,~curated)
~curator.transform(~dsW,~curatedW)

//check the dimension count
~ds.print
~dsW.print
~curated.print
~curatedW.print

//building a tree for each dataset
~tree = FluidKDTree(s,5);
~tree.fit(~ds,{"Fitted".postln;});
~treeW = FluidKDTree(s,5);
~treeW.fit(~dsW,{"Fitted".postln;});
~treeC = FluidKDTree(s,5);
~treeC.fit(~curated,{"Fitted".postln;});
~treeCW = FluidKDTree(s,5);
~treeCW.fit(~curatedW,{"Fitted".postln;});

//select a sound to match
// EITHER retrieve a random slice
~targetsound = Buffer(s);
~targetname = ~slicer.index.keys.asArray.scramble.last.asSymbol;
#a,b = ~slicer.index[~targetname][\bounds];
FluidBufCompose.process(s,~loader.buffer,a,(b-a),numChans: 1, destination: ~targetsound,action: {~targetsound.play;})

// OR just load a file in that buffer
~targetsound = Buffer.read(s,Platform.resourceDir +/+ "sounds/a11wlk01.wav");

//describe the sound to match
(
{
var loud, weights, mfcc, stats, flatten, stats2, written;
mfcc = FluidBufMFCC.kr(~targetsound,features:~mfccbuf[0],padding: 2, trig:1);
stats = FluidBufStats.kr(~mfccbuf[0],stats:~statsbuf[0], numDerivs: 1,trig:Done.kr(mfcc));
flatten = FluidBufFlatten.kr(~statsbuf[0],destination:~flatbuf[0],trig:Done.kr(stats));
loud = FluidBufLoudness.kr(~targetsound,features:~loudbuf[0],padding: 2,trig:Done.kr(flatten),blocking: 1);
weights = FluidBufScale.kr(~loudbuf[0],numChans: 1,destination: ~weightbuf[0],inputLow: -70,inputHigh: 0,trig: Done.kr(loud),blocking: 1);
stats2 = FluidBufStats.kr(~mfccbuf[0],stats:~statsbuf[0], numDerivs: 1, weights: ~weightbuf[0], trig:Done.kr(weights),blocking: 1);
written = FluidBufFlatten.kr(~statsbuf[0],destination:~flatbuf[1],trig:Done.kr(stats2));
FreeSelf.kr(Done.kr(written));
}.play;
)

//go language side to extract the right dimensions
~flatbuf[0].getn(0,182,{|x|~curatedBuf = Buffer.loadCollection(s, x[[0,1,4,6,7,8,11,13].collect{|x|var y=x*13+1;(y..(y+11))}.flat].postln)})
~flatbuf[1].getn(0,182,{|x|~curatedWBuf = Buffer.loadCollection(s, x[[0,1,4,6,7,8,11,13].collect{|x|var y=x*13+1;(y..(y+11))}.flat].postln)})

//find its nearest neighbours
~tree.kNearest(~flatbuf[0],{|x| ~friends = x.postln;})
~treeW.kNearest(~flatbuf[1],{|x| ~friendsW = x.postln;})
~treeC.kNearest(~curatedBuf,{|x| ~friendsC = x.postln;})
~treeCW.kNearest(~curatedWBuf,{|x| ~friendsCW = x.postln;})


// play them in a row
(
Routine{
5.do{|i|
var dur;
v = ~slicer.index[~friends[i].asSymbol];
dur = (v[\bounds][1] - v[\bounds][0]) / s.sampleRate;
{BufRd.ar(v[\numchans],~loader.buffer,Line.ar(v[\bounds][0],v[\bounds][1],dur, doneAction: 2))}.play;
~friends[i].postln;
dur.wait;
};
}.play;
)

(
Routine{
5.do{|i|
var dur;
v = ~slicer.index[~friendsW[i].asSymbol];
dur = (v[\bounds][1] - v[\bounds][0]) / s.sampleRate;
{BufRd.ar(v[\numchans],~loader.buffer,Line.ar(v[\bounds][0],v[\bounds][1],dur, doneAction: 2))}.play;
~friendsW[i].postln;
dur.wait;
};
}.play;
)

(
Routine{
5.do{|i|
var dur;
v = ~slicer.index[~friendsC[i].asSymbol];
dur = (v[\bounds][1] - v[\bounds][0]) / s.sampleRate;
{BufRd.ar(v[\numchans],~loader.buffer,Line.ar(v[\bounds][0],v[\bounds][1],dur, doneAction: 2))}.play;
~friendsC[i].postln;
dur.wait;
};
}.play;
)

(
Routine{
5.do{|i|
var dur;
v = ~slicer.index[~friendsCW[i].asSymbol];
dur = (v[\bounds][1] - v[\bounds][0]) / s.sampleRate;
{BufRd.ar(v[\numchans],~loader.buffer,Line.ar(v[\bounds][0],v[\bounds][1],dur, doneAction: 2))}.play;
~friendsCW[i].postln;
dur.wait;
};
}.play;
)

//explore dynamic range (changing the weighting's value of 0 in lines 39 and 157 will change the various weights given to quieter parts of the signal)
(
t = Main.elapsedTime;
~extractorL.play(s,~loader.buffer,~slicer.index,action:{(Main.elapsedTime - t).postln;"Features done".postln});
)
~norm = FluidNormalize.new(s)
~norm.fit(~dsL)
~norm.dump({|x|x["data_min"][[8,12]].postln;x["data_max"][[8,12]].postln;})//here we extract the stats from the dataset by retrieving the stored minima and maxima of the fitting process in FluidNormalize
// here we will define a process that creates and populates a series of parallel datasets, one for each 'feature space', which we can then manipulate more easily than individual dimensions.

// define a few datasets
(
~pitchDS = FluidDataSet(s);
~loudDS = FluidDataSet(s);
~mfccDS = FluidDataSet(s);
~durDS = FluidDataSet(s);

//define as many buffers as we have parallel voices/threads in the extractor processing (default is 4)
~pitchbuf = 4.collect{Buffer.new};
~statsPitchbuf = 4.collect{Buffer.new};
~weightPitchbuf = 4.collect{Buffer.new};
~flatPitchbuf = 4.collect{Buffer.new};
~loudbuf = 4.collect{Buffer.new};
~statsLoudbuf = 4.collect{Buffer.new};
~flatLoudbuf = 4.collect{Buffer.new};
~weightMFCCbuf = 4.collect{Buffer.new};
~mfccbuf = 4.collect{Buffer.new};
~statsMFCCbuf = 4.collect{Buffer.new};
~flatMFCCbuf = 4.collect{Buffer.new};

// here we instantiate a loader as per example 0
~loader = FluidLoadFolder(File.realpath(FluidBufPitch.class.filenameSymbol).dirname.withTrailingSlash ++ "../AudioFiles/");

// here we instantiate a further slicing step as per example 0
~slicer = FluidSliceCorpus({ |src,start,num,dest|
FluidBufOnsetSlice.kr(src ,start, num, indices:dest, metric: 9, threshold:0.2, minSliceLength: 17, blocking: 1)
});

// here we make the full processor building our 3 source datasets
~extractor = FluidProcessSlices({|src,start,num,data|
var identifier, voice, pitch, pitchweights, pitchstats, pitchflat, loud, statsLoud, flattenLoud, mfcc, mfccweights, mfccstats, mfccflat, writePitch, writeLoud;
identifier = data.key;
voice = data.value[\voice];
// the pitch computation is independent so it starts right away
pitch = FluidBufPitch.kr(src, startFrame:start, numFrames:num, numChans:1, features:~pitchbuf[voice], unit: 1, trig:1, blocking: 1);
pitchweights = FluidBufThresh.kr(~pitchbuf[voice], numChans: 1, startChan: 1, destination: ~weightPitchbuf[voice], threshold: 0.7, trig:Done.kr(pitch), blocking: 1);//pull down low confidence
pitchstats = FluidBufStats.kr(~pitchbuf[voice], stats:~statsPitchbuf[voice], numDerivs: 1, weights: ~weightPitchbuf[voice], outliersCutoff: 1.5, trig:Done.kr(pitchweights), blocking: 1);
pitchflat = FluidBufFlatten.kr(~statsPitchbuf[voice],destination:~flatPitchbuf[voice],trig:Done.kr(pitchstats),blocking: 1);
writePitch = FluidDataSetWr.kr(~pitchDS,identifier, nil, ~flatPitchbuf[voice], Done.kr(pitchflat),blocking: 1);
// the MFCCs need loudness for weighting, so let's start with that
loud = FluidBufLoudness.kr(src,startFrame:start, numFrames:num, numChans:1, features:~loudbuf[voice], trig:Done.kr(writePitch), blocking: 1);//here trig was 1
//we can now flatten and write loudness in its own trigger tree
statsLoud = FluidBufStats.kr(~loudbuf[voice], stats:~statsLoudbuf[voice], numDerivs: 1, trig:Done.kr(loud), blocking: 1);
flattenLoud = FluidBufFlatten.kr(~statsLoudbuf[voice],destination:~flatLoudbuf[voice],trig:Done.kr(statsLoud),blocking: 1);
writeLoud = FluidDataSetWr.kr(~loudDS,identifier, nil, ~flatLoudbuf[voice], Done.kr(flattenLoud),blocking: 1);
//we can resume from the loudness computation trigger
mfcc = FluidBufMFCC.kr(src,startFrame:start,numFrames:num,numChans:1,features:~mfccbuf[voice],trig:Done.kr(writeLoud),blocking: 1);//here trig was loud
mfccweights = FluidBufScale.kr(~loudbuf[voice],numChans: 1,destination: ~weightMFCCbuf[voice],inputLow: -70,inputHigh: 0, trig: Done.kr(mfcc), blocking: 1);
mfccstats = FluidBufStats.kr(~mfccbuf[voice], stats:~statsMFCCbuf[voice], startChan: 1, numDerivs: 1, weights: ~weightMFCCbuf[voice], trig:Done.kr(mfccweights), blocking: 1);//remove mfcc0 and weigh by loudness instead
mfccflat = FluidBufFlatten.kr(~statsMFCCbuf[voice],destination:~flatMFCCbuf[voice],trig:Done.kr(mfccstats),blocking: 1);
FluidDataSetWr.kr(~mfccDS,identifier, nil, ~flatMFCCbuf[voice], Done.kr(mfccflat),blocking: 1);
});

)
//////////////////////////////////////////////////////////////////////////
//loading process

//load and play to test if it is that quick - it is!
(
t = Main.elapsedTime;
~loader.play(s,action:{(Main.elapsedTime - t).postln;"Loaded".postln;{var start, stop; PlayBuf.ar(~loader.index[~loader.index.keys.asArray.last.asSymbol][\numchans],~loader.buffer,startPos: ~loader.index[~loader.index.keys.asArray.last.asSymbol][\bounds][0])}.play;});
)

//////////////////////////////////////////////////////////////////////////
// slicing process

// run the slicer
(
t = Main.elapsedTime;
~slicer.play(s,~loader.buffer,~loader.index,action:{(Main.elapsedTime - t).postln;"Slicing done".postln});
)
//slice count
~slicer.index.keys.size

//////////////////////////////////////////////////////////////////////////
// description process

// run the descriptor extractor (errors will be posted; this is normal: the pitch conditions are quite exacting, so many slices are not valid)
(
t = Main.elapsedTime;
~extractor.play(s,~loader.buffer,~slicer.index,action:{(Main.elapsedTime - t).postln;"Features done".postln});
)

// make a dataset of durations for querying that too (it could have been made in the process loop, but hey, we have dictionaries we can manipulate too!)
(
~dict = Dictionary.new;
~temp = ~slicer.index.collect{ |k| [k[\bounds][1] - k[\bounds][0]]};
~dict.add(\data -> ~temp);
~dict.add(\cols -> 1);
~durDS.load(~dict)
)

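// The dictionary above mirrors FluidDataSet's storage format: a 'cols' count plus a
// 'data' map from identifier to an array of that many numbers. The same one-column
// duration dataset can be sketched in Python (the slice identifiers and bounds here
// are made up for illustration):

```python
import json

# Hypothetical slice identifiers and sample bounds, standing in for ~slicer.index
bounds = {"slice-0": (0, 22050), "slice-1": (22050, 66150)}

# one-column dataset: each point is [duration-in-samples]
dataset = {
    "cols": 1,
    "data": {name: [end - start] for name, (start, end) in bounds.items()},
}
print(json.dumps(dataset))
```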
//////////////////////////////////////////////////////////////////////////
// manipulating and querying the data

~pitchDS.print;
~loudDS.print;
~mfccDS.print;
~durDS.print;

///////////////////////////////////////////////////////
//reduce the MFCC timbral space stats (many potential ways to explore here... - 2 are provided to compare, with and without the derivatives, before running a dimension reduction)
~tempDS = FluidDataSet(s);

~query = FluidDataSetQuery(s);
~query.addRange(0,24);//add only means and stddev of the 12 coeffs...
~query.addRange((7*12),24);// and the same stats of the first derivative (moving 7 stats x 12 mfccs to the right)
~query.transform(~mfccDS, ~tempDS);

//check that you end up with the expected 48 dimensions
~tempDS.print;

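// Here the MFCC stats were computed with startChan:1, so each flattened point holds
// 14 stat frames x 12 channels = 168 values, frame by frame. The two addRange calls
// above keep mean and stddev (frames 0-1) and their first-derivative counterparts
// (frames 7-8); a quick Python check of those offsets, under that layout assumption:

```python
# Layout assumption: 14 stat frames x 12 MFCC channels (MFCC0 dropped), frame-major.
n_chans = 12
mean_std = list(range(0 * n_chans, 2 * n_chans))        # frames 0-1 -> indices 0..23
deriv_mean_std = list(range(7 * n_chans, 9 * n_chans))  # frames 7-8 -> indices 84..107
print(len(mean_std) + len(deriv_mean_std))  # the expected 48 dimensions
```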
// standardizing before the PCA, as argued here:
// https://scikit-learn.org/stable/auto_examples/preprocessing/plot_scaling_importance.html
~stan = FluidStandardize(s);
~stanDS = FluidDataSet(s);
~stan.fitTransform(~tempDS,~stanDS)

//shrinking A: using 2 stats on the values, and 2 stats on the derivative (12 x 2 x 2 = 48 dim)
~pca = FluidPCA(s,4);//shrink to 4 dimensions
~timbreDSd = FluidDataSet(s);
~pca.fitTransform(~stanDS,~timbreDSd,{|x|x.postln;})//accuracy
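// The standardize-then-PCA step can be sketched numerically (Python/NumPy, with
// synthetic data standing in for the 48-dimensional MFCC stats; the 'explained'
// fraction is analogous to the value the fitTransform action posts):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 48))  # stand-in for 200 slices x 48 MFCC stats

# standardize each dimension to zero mean, unit variance (what FluidStandardize does)
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# PCA via SVD, keeping 4 components (what a 4-dimension PCA does)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
reduced = Xs @ Vt[:4].T
explained = (S[:4] ** 2).sum() / (S ** 2).sum()  # fraction of variance retained
print(reduced.shape, round(explained, 3))
```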

//shrinking B: using only the 2 stats on the values
~query.clear;
~query.addRange(0,24);//add only means and stddev of the 12 coeffs...
~query.transform(~stanDS, ~tempDS);//retrieve the values from the already standardized dataset

//check you have the expected 24 dimensions
~tempDS.print;

//keep its own PCA so we can keep the various states for later transforms
~pca2 = FluidPCA(s,4);//shrink to 4 dimensions
~timbreDS = FluidDataSet(s);
~pca2.fitTransform(~tempDS,~timbreDS,{|x|x.postln;})//accuracy

// comparing NN for fun
~targetDSd = Buffer(s)
~targetDS = Buffer(s)
~tree = FluidKDTree(s,5)

// you can run this a few times to have fun
(
~target = ~slicer.index.keys.asArray.scramble.[0].asSymbol;
~timbreDSd.getPoint(~target, ~targetDSd);
~timbreDS.getPoint(~target, ~targetDS);
)

~tree.fit(~timbreDSd,{~tree.kNearest(~targetDSd,{|x|~nearestDSd = x.postln;})})
~tree.fit(~timbreDS,{~tree.kNearest(~targetDS,{|x|~nearestDS = x.postln;})})

// play them in a row
(
Routine{
5.do{|i|
var dur;
v = ~slicer.index[~nearestDSd[i].asSymbol];
dur = (v[\bounds][1] - v[\bounds][0]) / s.sampleRate;
{BufRd.ar(v[\numchans],~loader.buffer,Line.ar(v[\bounds][0],v[\bounds][1],dur, doneAction: 2))}.play;
~nearestDSd[i].postln;
dur.wait;
};
}.play;
)

(
Routine{
5.do{|i|
var dur;
v = ~slicer.index[~nearestDS[i].asSymbol];
dur = (v[\bounds][1] - v[\bounds][0]) / s.sampleRate;
{BufRd.ar(v[\numchans],~loader.buffer,Line.ar(v[\bounds][0],v[\bounds][1],dur, doneAction: 2))}.play;
~nearestDS[i].postln;
dur.wait;
};
}.play;
)

///////////////////////////////////////////////////////
// compositing queries - defining a target and analysing it

~globalDS = FluidDataSet(s);

// define a source
~targetsound = Buffer.read(s,File.realpath(FluidBufPitch.class.filenameSymbol).dirname.withTrailingSlash ++ "../AudioFiles/Tremblay-ASWINE-ScratchySynth-M.wav",42250,44100);
~targetsound.play

// analyse it as above, using voice 0 in the arrays of buffers to store the info
(
{
var identifier, voice, pitch, pitchweights, pitchstats, pitchflat, loud, statsLoud, flattenLoud, mfcc, mfccweights, mfccstats, mfccflat, writePitch, writeLoud;
pitch = FluidBufPitch.kr(~targetsound, numChans:1, features:~pitchbuf[0], unit: 1, trig:1, blocking: 1);
pitchweights = FluidBufThresh.kr(~pitchbuf[0], numChans: 1, startChan: 1, destination: ~weightPitchbuf[0], threshold: 0.7, trig:Done.kr(pitch), blocking: 1);
pitchstats = FluidBufStats.kr(~pitchbuf[0], stats:~statsPitchbuf[0], numDerivs: 1, weights: ~weightPitchbuf[0], outliersCutoff: 1.5, trig:Done.kr(pitchweights), blocking: 1);
pitchflat = FluidBufFlatten.kr(~statsPitchbuf[0],destination:~flatPitchbuf[0],trig:Done.kr(pitchstats),blocking: 1);
loud = FluidBufLoudness.kr(~targetsound, numChans:1, features:~loudbuf[0], trig:Done.kr(pitchflat), blocking: 1);
statsLoud = FluidBufStats.kr(~loudbuf[0], stats:~statsLoudbuf[0], numDerivs: 1, trig:Done.kr(loud), blocking: 1);
flattenLoud = FluidBufFlatten.kr(~statsLoudbuf[0],destination:~flatLoudbuf[0],trig:Done.kr(statsLoud),blocking: 1);
mfcc = FluidBufMFCC.kr(~targetsound,numChans:1,features:~mfccbuf[0],trig:Done.kr(flattenLoud),blocking: 1);
mfccweights = FluidBufScale.kr(~loudbuf[0],numChans: 1,destination: ~weightMFCCbuf[0],inputLow: -70,inputHigh: 0, trig: Done.kr(mfcc), blocking: 1);
mfccstats = FluidBufStats.kr(~mfccbuf[0], stats:~statsMFCCbuf[0], startChan: 1, numDerivs: 1, weights: ~weightMFCCbuf[0], trig:Done.kr(mfccweights), blocking: 1);
mfccflat = FluidBufFlatten.kr(~statsMFCCbuf[0],destination:~flatMFCCbuf[0],trig:Done.kr(mfccstats),blocking: 1);
FreeSelf.kr(Done.kr(mfccflat));
}.play;
)

// a first query - length and pitch
~query.clear
~query.filter(0,"<",44100+22050)//column 0 (length in samples): at most half a second longer than our 1-second source
~query.and(0,">", 44100-22050)//and at most half a second shorter than the source
~query.transformJoin(~durDS, ~pitchDS, ~tempDS); //this passes to ~tempDS only the points whose labels are in ~durDS and satisfy the condition. No columns were added, so nothing from ~durDS is copied

// print to see how many slices (rows) we have
~tempDS.print

// further conditions to assemble the query
~query.clear
~query.filter(11,">",0.7)//column11 (median of pitch confidence) larger than 0.7
~query.addRange(0,4) //copy only mean and stddev of pitch and confidence
~query.transform(~tempDS, ~globalDS); // pass it to the final search

// print to see that we have fewer items, with only their pitch
~globalDS.print

// compare knearest on both globalDS and tempDS
// assemble search buffer
~targetPitch = Buffer(s)
FluidBufCompose.process(s, ~flatPitchbuf[0],numFrames: 4,destination: ~targetPitch)

// feed the trees
~tree.fit(~pitchDS,{~tree.kNearest(~flatPitchbuf[0],{|x|~nearestA = x.postln;})}) //all the points with all the stats
~tree.fit(~globalDS,{~tree.kNearest(~targetPitch,{|x|~nearestB = x.postln;})}) //just the points with the right length conditions, with the curated stats

// play them in a row
(
Routine{
5.do{|i|
var dur;
v = ~slicer.index[~nearestA[i].asSymbol];
dur = (v[\bounds][1] - v[\bounds][0]) / s.sampleRate;
{BufRd.ar(v[\numchans],~loader.buffer,Line.ar(v[\bounds][0],v[\bounds][1],dur, doneAction: 2))}.play;
~nearestA[i].postln;
dur.wait;
};
}.play;
)

// with our duration limits, strange results appear eventually
(
Routine{
5.do{|i|
var dur;
v = ~slicer.index[~nearestB[i].asSymbol];
dur = (v[\bounds][1] - v[\bounds][0]) / s.sampleRate;
{BufRd.ar(v[\numchans],~loader.buffer,Line.ar(v[\bounds][0],v[\bounds][1],dur, doneAction: 2))}.play;
~nearestB[i].postln;
dur.wait;
};
}.play;
)

///////////////////////////////////////////////////////
// compositing queries to weigh - defining a target and analysing it

// make sure to define and describe the source above (lines 178 to 201)

// let's make normalised versions of the 3 datasets, keeping the normalisers separate to query later
~loudDSn = FluidDataSet(s);
~pitchDSn = FluidDataSet(s);
~timbreDSn = FluidDataSet(s);

~normL = FluidNormalize(s)
~normP = FluidNormalize(s)
~normT = FluidNormalize(s)

~normL.fitTransform(~loudDS, ~loudDSn);
~normP.fitTransform(~pitchDS, ~pitchDSn);
~normT.fitTransform(~timbreDSd, ~timbreDSn);

// let's assemble these datasets
~query.clear
~query.addRange(0,4)
~query.transformJoin(~pitchDSn,~timbreDSn, ~tempDS) //appends 4 dims of pitch to 4 dims of timbre
~query.transformJoin(~loudDSn, ~tempDS, ~globalDS) // appends 4 dims of loud to the 8 dims above

~globalDS.print//12 dim: 4 timbre, 4 pitch, 4 loud, all normalised between 0 and 1
~globalDS.write(Platform.defaultTempDir ++ "test12dims.json") // write to file to look at the values
// open the file in your default json editor
(Platform.defaultTempDir ++ "test12dims.json").openOS

// let's assemble the query
// first let's normalise our target descriptors
(
~targetPitch = Buffer(s);
~targetLoud = Buffer(s);
~targetMFCC = Buffer(s);
~targetMFCCs = Buffer(s);
~targetMFCCsp = Buffer(s);
~targetTimbre = Buffer(s);
~targetAll= Buffer(s);
)

~normL.transformPoint(~flatLoudbuf[0], ~targetLoud) //normalise the loudness (all dims)
~normP.transformPoint(~flatPitchbuf[0], ~targetPitch) //normalise the pitch (all dims)
FluidBufCompose.process(s,~flatMFCCbuf[0],numFrames: 24,destination: ~targetMFCC) // copy the process of dimension reduction above
FluidBufCompose.process(s,~flatMFCCbuf[0],startFrame: (7*12), numFrames: 24, destination: ~targetMFCC,destStartFrame: 24) //keeping 48 dims
~stan.transformPoint(~targetMFCC,~targetMFCCs) //standardize with the same coeffs
~pca.transformPoint(~targetMFCCs, ~targetMFCCsp) //then down to 4
~normT.transformPoint(~targetMFCCsp, ~targetTimbre) //then normalised
FluidBufCompose.process(s, ~targetTimbre,destination: ~targetAll) // assembling the single query
FluidBufCompose.process(s, ~targetPitch, numFrames: 4, destination: ~targetAll, destStartFrame: 4) // copying the 4 stats of pitch we care about
FluidBufCompose.process(s, ~targetLoud, numFrames: 4, destination: ~targetAll, destStartFrame: 8) // same for loudness
//check the sanity
~targetAll.query

// now let's see which is nearest that point
~tree.fit(~globalDS,{~tree.kNearest(~targetAll,{|x|~nearest = x.postln;})})

// play them in a row
(
Routine{
5.do{|i|
var dur;
v = ~slicer.index[~nearest[i].asSymbol];
dur = (v[\bounds][1] - v[\bounds][0]) / s.sampleRate;
{BufRd.ar(v[\numchans],~loader.buffer,Line.ar(v[\bounds][0],v[\bounds][1],dur, doneAction: 2))}.play;
~nearest[i].postln;
dur.wait;
};
}.play;
)

// to change the relative weight of each dataset, we can change the normalisation range. A larger range means larger distances along those dimensions, and therefore more importance for that parameter in the nearest-neighbour search.
// for instance, to emphasise pitch, let's make its range larger by a factor of 10 around the centre of 0.5
~normP.max = 5.5
~normP.min = -4.5
~normP.fitTransform(~pitchDS, ~pitchDSn);
// here we can re-run just the part that composites the pitch
|
|
||||||
~normP.transformPoint(~flatPitchbuf[0], ~targetPitch) //normalise the pitch (all dims)
|
|
||||||
FluidBufCompose.process(s, ~targetPitch, numFrames: 4, destination: ~targetAll, destStartFrame: 4) // copying the 4 stats of pitch we care about
|
|
||||||
|
|
||||||
//see that the middle 4 values are much larger in range
|
|
||||||
~targetAll.getn(0,12,{|x|x.postln;})
|
|
||||||
|
|
||||||
// let's re-assemble these datasets
|
|
||||||
~query.transformJoin(~pitchDSn,~timbreDSn, ~tempDS) //appends 4 dims of pitch to 4 dims of timbre
|
|
||||||
~query.transformJoin(~loudDSn, ~tempDS, ~globalDS) // appends 4 dims of loud to the 8 dims above
|
|
||||||
|
|
||||||
// now let's see which is nearest that point
|
|
||||||
~tree.fit(~globalDS,{~tree.kNearest(~targetAll,{|x|~nearest = x.postln;})}) //just the points with the right lenght conditions, with the curated stats
|
|
||||||
|
|
||||||
///////////////////////////////////////////////
|
|
||||||
// todo: segment then query musaik
|
|
||||||
@ -1,230 +0,0 @@
|
|||||||
// load a source folder
~loader = FluidLoadFolder(File.realpath(FluidBufMFCC.class.filenameSymbol).dirname.withTrailingSlash ++ "../AudioFiles/");
~loader.play;

//slightly oversegment with novelty
//segments should still make sense but might cut a few elements in 2 or 3
~slicer = FluidSliceCorpus({ |src,start,num,dest| FluidBufNoveltySlice.kr(src,start,num,indices:dest, feature: 1, kernelSize: 29, threshold: 0.1, filterSize: 5, hopSize: 128, blocking: 1)});
~slicer.play(s, ~loader.buffer,~loader.index);

//test the segmentation by looping the slices
(
~originalindices = Array.newFrom(~slicer.index.keys).sort{|a,b| ~slicer.index[a][\bounds][0]< ~slicer.index[b][\bounds][0]}.collect{|x|~slicer.index[x][\bounds]};
d = {arg start=0, end = 44100;
    BufRd.ar(1, ~loader.buffer, Phasor.ar(0,1,start,end,start),0,1);
}.play;

w = Window.new(bounds:Rect(100,100,400,60)).front;
b = ControlSpec(0, ~originalindices.size - 1, \linear, 1); // min, max, mapping, step
c = StaticText(w, Rect(340, 20, 50, 20)).align_(\center);
a = Slider(w, Rect(10, 20, 330, 20))
.action_({var val = b.map(a.value).asInteger;
    c.string_(val.asString);
    d.set(\start,~originalindices[val][0], \end, ~originalindices[val][1]);
});
)

//analyse each segment with 20 MFCCs in one dataset and spectral shapes in another
(
~featuresbuf = 4.collect{Buffer.new};
~statsbuf = 4.collect{Buffer.new};
~flatbuf = 4.collect{Buffer.new};
~slicesMFCC = FluidDataSet(s);
~slicesShapes = FluidDataSet(s);
~extractor = FluidProcessSlices({|src,start,num,data|
    var features, stats, writer, flatten, identifier, voice;
    identifier = data.key;
    voice = data.value[\voice];
    features = FluidBufMFCC.kr(src,startFrame:start,numFrames:num,numChans:1, numCoeffs: 20, features:~featuresbuf[voice],trig:1,blocking: 1);
    stats = FluidBufStats.kr(~featuresbuf[voice],stats:~statsbuf[voice],trig:Done.kr(features),blocking: 1);
    flatten = FluidBufFlatten.kr(~statsbuf[voice],destination:~flatbuf[voice],trig:Done.kr(stats),blocking: 1);
    writer = FluidDataSetWr.kr(~slicesMFCC,identifier, nil, ~flatbuf[voice], Done.kr(flatten),blocking: 1);
    features = FluidBufSpectralShape.kr(src,startFrame:start,numFrames:num,numChans:1, features:~featuresbuf[voice],trig:Done.kr(writer),blocking: 1);
    stats = FluidBufStats.kr(~featuresbuf[voice],stats:~statsbuf[voice],trig:Done.kr(features),blocking: 1);
    flatten = FluidBufFlatten.kr(~statsbuf[voice],destination:~flatbuf[voice],trig:Done.kr(stats),blocking: 1);
    writer = FluidDataSetWr.kr(~slicesShapes,identifier, nil, ~flatbuf[voice], Done.kr(flatten),blocking: 1);
});
)

(
t = Main.elapsedTime;
~extractor.play(s,~loader.buffer, ~slicer.index, action:{(Main.elapsedTime - t).postln;"Analysis done".postln});
)

~originalindices.size
~slicesMFCC.print
~slicesShapes.print

//run a window over consecutive segments, forcing them into 2 classes, and merging consecutive segments of the same class
//we overlap the analysis with the last (original) slice to check for continuity
(
~winSize = 4;//the number of consecutive items to split in 2 classes
~curated = FluidDataSet(s);
~query = FluidDataSetQuery(s);
~stan = FluidStandardize(s);
~kmeans = FluidKMeans(s,2,1000);
~windowDS = FluidDataSet(s);
~windowLS = FluidLabelSet(s);
)

//curate stats (MFCCs)
~query.clear
~query.addRange((0*20)+1,10);
~query.transform(~slicesMFCC,~curated);

//OR
//curate stats (moments)
~query.clear
~query.addRange(0,3);
~query.transform(~slicesShapes,~curated);

//OR
//curate both
~query.clear
~query.addColumn(0);//add col 0 (mean of mfcc0 as 'loudness')
~query.transform(~slicesMFCC,~curated);//mfcc0 as loudness
~query.clear;
~query.addRange(0,3);//add some spectral moments
~query.transformJoin(~slicesShapes, ~curated, ~curated);//join in centroids

//optionally standardize in place
~stan.fitTransform(~curated, ~curated);

~curated.print

//retrieve the dataset as a dictionary
~curated.dump{|x|~sliceDict = x;};

~originalslicesarray = ~originalindices.flop[0] ++ ~loader.buffer.numFrames
~orginalkeys = Array.newFrom(~slicer.index.keys).sort{|a,b| ~slicer.index[a][\bounds][0]< ~slicer.index[b][\bounds][0]}

//the windowed function, recursive to deal with sync dependencies
(
~windowedFunct = {arg head, winSize, overlap;
    var nbass = [], assignments = [], tempDict = ();
    //check the size of everything to not overrun
    winSize = (~originalslicesarray.size - head).min(winSize);
    //copy the items to a subdataset from here
    winSize.do{|i|
        tempDict.put((i.asString), ~sliceDict["data"][(~orginalkeys[(i+head)]).asString]);//here one could curate which stats to take
        // "whichslices:%\n".postf(i+head);
    };
    ~windowDS.load(Dictionary.newFrom([\cols, ~sliceDict["cols"].asInteger, \data, tempDict]), action: {
        // "% - loaded\n".postf(head);

        //kmeans in 2 classes and retrieve the ordered array of class assignations
        ~kmeans.fitPredict(~windowDS, ~windowLS, action: {|x|
            nbass = x;
            // "% - fitted1: ".postf(head); nbass.postln;

            //if one cluster ends up empty, refit (up to twice)
            if (nbass.includes(winSize.asFloat), {
                ~kmeans.fitPredict(~windowDS, ~windowLS, {|x|
                    nbass = x;
                    // "% - fitted2: ".postf(head); nbass.postln;
                    if (nbass.includes(winSize.asFloat), {
                        ~kmeans.fitPredict(~windowDS, ~windowLS, {|x|
                            nbass = x;
                            // "% - fitted3: ".postf(head); nbass.postln;
                        });
                    });
                });
            });

            ~windowLS.dump{|x|
                var assignments = x.at("data").asSortedArray.flop[1].flatten;
                "% - assigned ".postf(head);
                assignments.postln;

                (winSize-1).do{|i|
                    if (assignments[i+1] != assignments[i], {
                        ~newindices= ~newindices ++ (~originalslicesarray[head+i+1]).asInteger;
                        ~newkeys = ~newkeys ++ (~orginalkeys[head+i+1]);
                    });
                };
                //if we still have some frames to do, do them
                if (((winSize + head) < ~originalslicesarray.size), {
                    "-----------------".postln;
                    ~windowedFunct.value(head + winSize - overlap, winSize, overlap);
                }, {~newindices = (~newindices ++ ~loader.buffer.numFrames); "done".postln;});//if we're done, close the books
            };
        });
    });
}
)
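// illustrative language-side sketch (hypothetical values, no server needed) of the merge rule used above:
// a slice boundary survives only where the kmeans class changes between adjacent slices
(
var assignments = [0, 0, 1, 1, 0];              // hypothetical class per consecutive slice
var bounds      = [0, 100, 200, 300, 400, 500]; // hypothetical slice start frames, plus total length
var kept = [bounds[0]];
(assignments.size - 1).do{|i|
    if(assignments[i+1] != assignments[i], { kept = kept ++ bounds[i+1] });
};
kept = kept ++ bounds.last;
kept.postln; // -> [ 0, 200, 400, 500 ]
)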
//the job

//test 1 - start at the beginning, consider 4 items at a time, make 2 clusters, overlap 1
~newindices = [~originalslicesarray[0]]; ~newkeys = [~orginalkeys[0]];
~windowedFunct.value(0, 4, 1);

//OPTIONAL: try again with more clusters (3), a wider window (6) and more overlap (2)
~newindices = [~originalslicesarray[0]]; ~newkeys = [~orginalkeys[0]];
~kmeans.numClusters = 3;
~windowedFunct.value(0,6,2);

//compare sizes
~orginalkeys.size
~newkeys.size;

//export to Reaper
(
//first create a new file that ends in .rpp - it will overwrite if the file exists
f = File.new(Platform.defaultTempDir ++ "clusteredslices-" ++ Date.getDate.stamp ++".rpp","w+");

if (f.isOpen , {
    var path, prevpath ="", sr, count, dur, realDur;
    //write the project header
    f.write("<REAPER_PROJECT 0.1 \"5.99/OSX64\" 1603037150\n\n");

    //a first track with the original slice array
    //write the track header
    f.write("<TRACK\nNAME \"novelty output\"\n");
    // iterate through the items in the track
    ~orginalkeys.do{|v, i|
        path = ~slicer.index[v][\path];
        if (path != prevpath, {
            sr = ~slicer.index[v][\sr];
            prevpath = path;
            count = 0;
        });
        dur = ~originalslicesarray[i+1] - ~originalslicesarray[i];
        if ( dur > 0, {
            f.write("<ITEM\nPOSITION " ++ (~originalslicesarray[i] / sr) ++ "\nLENGTH " ++ (dur / sr) ++ "\nNAME \"" ++ v ++ "\"\nSOFFS " ++ (count / sr) ++ "\n<SOURCE WAVE\nFILE \"" ++ path ++ "\"\n>\n>\n");
        });
        count = count + dur;
    };
    //write the track footer
    f.write(">\n");

    // a second track with the new indices
    prevpath = "";
    //write the track header
    f.write("<TRACK\nNAME \"clustered output\"\n");
    // iterate through the items in the track
    ~newkeys.do{|v, i|
        dur = ~newindices[i+1] - ~newindices[i];
        if (dur > 0, {
            path = ~slicer.index[v][\path];
            if (path != prevpath, {
                sr = ~slicer.index[v][\sr];
                prevpath = path;
                count = 0;
            });
            f.write("<ITEM\nPOSITION " ++ (~newindices[i] / sr) ++ "\nLENGTH " ++ (dur / sr) ++ "\nNAME \"" ++ v ++ "\"\nSOFFS " ++ (count / sr) ++ "\n<SOURCE WAVE\nFILE \"" ++ path ++ "\"\n>\n>\n");
            count = count + dur;
        });
    };
    //write the track footer
    f.write(">\n");

    //write the project footer
    f.write(">\n");
    f.close;
});
)

// then open the time-stamped Reaper file (clusteredslices-*.rpp) in the temp folder
Platform.defaultTempDir.openOS
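// for reference, one ITEM chunk as written by the export above looks roughly like this
// (illustrative values, not taken from a real run):
/*
<ITEM
POSITION 1.234
LENGTH 0.567
NAME "slice-3"
SOFFS 0.0
<SOURCE WAVE
FILE "/path/to/source.wav"
>
>
*/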
///////////////////////////////////////////////
// Lookup in a KDTree using melbands
// Demonstration of a massively parallel approach to batch process swiftly in SC

s.options.numBuffers = 16384 //The method below for doing the analysis quickly needs lots of buffers
s.reboot

//Step 0: Make a corpus

//We'll jam together some random flucoma sounds for illustrative purposes
//Get some files
(
~audioexamples_path = File.realpath(FluidBufMelBands.class.filenameSymbol).dirname.withTrailingSlash +/+ "../AudioFiles/*.wav";
~allTheSounds = SoundFile.collect(~audioexamples_path);
~testSounds = ~allTheSounds;
~testSounds.do{|f| f.path.postln}; // print out the files that are loaded
)

//Load the files into individual buffers:
(
~audio_buffers = ~testSounds.collect{|f|
    Buffer.readChannel(
        server: s,
        path:f.path,
        channels:[0],
        action:{("Loaded" + f.path).postln;}
    )
};
)

//Do a segmentation of each buffer, in parallel
(
fork{
    ~index_buffers = ~audio_buffers.collect{Buffer.new};
    s.sync;
    ~count = ~audio_buffers.size;
    ~audio_buffers.do{|src,i|
        FluidBufOnsetSlice.process(
            server:s,
            source:src,
            indices:~index_buffers[i],
            metric: 9,
            threshold:0.2,
            minSliceLength: 17,
            action:{
                (~testSounds[i].path ++ ":" + ~index_buffers[i].numFrames + "slices").postln;
                ~count = ~count - 1;
                if(~count == 0){"Done slicing".postln};
            }
        );
    }
}
)

// we now have an array of index buffers, one per source buffer, each containing the segmentation points as frame positions
// this lets us count the total number of slice points
~index_buffers.collect{|b| b.numFrames}.sum

//For each of these segments, let's make a datapoint using the mean melbands.
// There are a number of ways of skinning this cat w/r/t telling the server what to do, but here we want to minimize traffic between language and server, and also produce understandable code

//First, we'll grab the onset points as language-side arrays, then scroll through each slice getting the mean melbands
(
// - a dataset to keep the mean melbands in
~mels = FluidDataSet(s);
// - a dictionary to keep the slice points in for later playback
~slices = Dictionary();
//The code below (as well as needing lots of buffers) creates lots of threads, so we need a big scheduling queue
~clock = TempoClock(queueSize:8192);
)

// Do the Mel analysis in a cunning parallel fashion
(
{
    var counter, remaining;
    var condition = Condition.new; // used to pause the routine until the callbacks complete
    var index_arrays = Dictionary();

    "Process started. Please wait.".postln;

    ~total_slice_count = ~index_buffers.collect{|b| b.numFrames}.sum + ~index_buffers.size; //we get an extra slice per buffer
    ~featurebuffers = ~total_slice_count.collect{Buffer.new}; // create a buffer per slice

    //Make our dictionary FluidDataSet-shaped
    ~slices.put("cols",3);//[bufnum,start,end] for each slice
    ~slices.put("data",Dictionary());

    //Collect each set of onsets into a language-side array and store them in a dict
    ~index_buffers.do{|b,i| // iterate over the input buffer array
        {
            b.loadToFloatArray( // load to language-side array
                action:{|indices|
                    //Glue the first and last samples of the buffer on to the index list, and place in the dictionary with the
                    //Buffer object as a key
                    index_arrays.put(~audio_buffers[i], Array.newFrom([0] ++ indices ++ (~audio_buffers[i].numFrames - 1)));
                    if(i==(~index_buffers.size-1)) {condition.unhang};
                }
            )
        }.fork(stackSize:~total_slice_count);
    };
    condition.hang; //Pause until all the callbacks above have completed
    "Arrays loaded. Starting on the analysis, please wait.".postln;

    //For each of these lists of points, we want to scroll over the indices in pairs and get some mel bands
    counter = 0;
    remaining = ~total_slice_count;

    s.sync;

    // now iterate over the Dictionary and calculate the melbands
    index_arrays.keysValuesDo{|buffer, indices|
        indices.doAdjacentPairs{|start,end,num|
            var analysis = Routine({|counter|
                FluidBufMelBands.processBlocking(
                    server:s,
                    source:buffer,
                    startFrame:start,
                    numFrames:(end-1) - start,
                    features:~featurebuffers[counter],
                    action:{
                        remaining = remaining - 1;
                        if(remaining == 0) { ~numMelBands = ~featurebuffers[0].numChannels;condition.unhang };
                    }
                );
            });

            ~slices["data"].put(counter,[buffer.bufnum,start,end]);

            //We spawn new threads to wait for the analysis callbacks from the server. The final callback will un-hang this thread
            analysis.value(counter); //Done differently to the other blocks because we need to pass in the value of counter
            counter = counter + 1;
        }
    };
    condition.hang;
    "Analysis of % slices done.\n".postf(~total_slice_count);
}.fork(clock:~clock);
)

// Run stats on each mel buffer

// create a stats buffer for each of the slices - to be filled with (40 mel bands * 7 stats)
~statsbuffers = ~total_slice_count.collect{Buffer.new};

// run stats on all the buffers
(
{
    var remaining = ~total_slice_count;
    ~featurebuffers.do{|buffer,i|
        FluidBufStats.processBlocking(
            server:s,
            source:buffer,
            stats:~statsbuffers[i],
            action:{
                remaining = remaining - 1;
                if(remaining == 0) { "done".postln};
            }
        );
    };
}.fork(clock:~clock);
)

~featurebuffers.size

//Flatten each stats buffer into a data point
~flatbuffers = ~total_slice_count.collect{Buffer.new};// create an array of flattened stats

(
{
    var remaining = ~total_slice_count;
    ~statsbuffers.do{|buffer,i|
        FluidBufFlatten.processBlocking(
            server:s,
            source:buffer,
            destination:~flatbuffers[i],
            action:{
                remaining = remaining - 1;
                if(remaining == 0) { "Got flat points".postln; };
            }
        );
    };
}.fork(clock:~clock);
)

//Ram each flat point into a dataset. At this point we have more data than we need, but we'll prune in a moment
(
"Filling dataset".postln;
~mels.clear;

~flatbuffers.do{|buf,i|
    ~mels.addPoint(i,buf);
};

~mels.print;
)

// Prune & standardise

// Tidy up the temp arrays of buffers we do not need anymore
(
"Cleaning".postln;
(~featurebuffers ++ ~statsbuffers ++ ~flatbuffers).do{|buf| buf.free};
)

//Above we sneakily made a dictionary of slice data for playback (bufnum,start,end). Let's throw it in a dataset
~slicedata = FluidDataSet(s); // will hold slice data (bufnum,start,end) for playback

//dict -> dataset
(
~slicedata.load(~slices);
~slicedata.print;
)

// Step 1. Let's prune and standardize before fitting to a tree
(
~meanmels = FluidDataSet(s);//will hold pruned mel data
~stdmels = FluidDataSet(s);//will hold standardised, pruned mel data
~standardizer = FluidStandardize(s);
~pruner = FluidDataSetQuery(s);
~tree = FluidKDTree(s,numNeighbours:10,lookupDataSet:~slicedata);//we have to supply the lookup dataset when we make the tree (boo!)
)

//Prune, standardize and fit the KDTree
(
{
    ~meanmels.clear;
    ~stdmels.clear;
    ~pruner.addRange(0,~numMelBands).transform(~mels,~meanmels); //prune with a 'query': keep only the first ~numMelBands columns (the means)
    ~standardizer.fitTransform(~meanmels,~stdmels);
    ~tree.fit(~stdmels,{"KDTree ready".postln});
}.fork(clock:~clock);
)

~meanmels.print

//Step 2: Set the FluidStandardize and FluidKDTree up for listening
//set the buffers and busses needed
(
~stdInputPoint = Buffer.alloc(s,40);
~stdOutputPoint = Buffer.alloc(s,40);
~treeOutputPoint = Buffer.alloc(s,3 * 10);//numNeighbours x triples of bufnum,start,end
)

// let's play a random sound (to make sure we understand our data structure!)
(
{
    var randPoint, buf, start, stop, dur;

    randPoint = ~slices["data"].keys.asArray.scramble[0]; // a quick way of getting a random key

    buf = ~slices["data"][randPoint][0];
    start = ~slices["data"][randPoint][1];
    stop = ~slices["data"][randPoint][2];

    dur = stop - start;

    BufRd.ar(1,buf, Line.ar(start,stop,dur/s.sampleRate, doneAction: 2), 0, 2);
}.play
)

// Query the KDTree

// a target sound from outside our dataset
~inBuf = Buffer.readChannel(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav", numFrames:15000, channels:[0]);
~inBuf.play

//OR one from within (but just the beginning, so beware of the difference!)
~inBuf = Buffer.alloc(s,15000);
~randomSlice = ~slices["data"].keys.asArray.scramble[0];
~audio_buffers[~slices["data"][~randomSlice][0]].copyData(~inBuf,srcStartAt: ~slices["data"][~randomSlice][1], numSamples: 15000.min(~slices["data"][~randomSlice][2] - (~slices["data"][~randomSlice][1])));
~inBuf.play

// now try getting a point, playing it, grabbing its nearest neighbours and playing them ...
(
~inBufMels = Buffer(s);
~inBufStats = Buffer(s);
~inBufFlat = Buffer(s);
~inBufComp = Buffer(s);
~inBufStand = Buffer(s);
)

// FluidBufCompose is the buffer version of FluidDataSetQuery: here it prunes the flattened stats down to the means
(
FluidBufMelBands.process(s, ~inBuf, features: ~inBufMels, action: {
    FluidBufStats.process(s, ~inBufMels, stats:~inBufStats, action: {
        FluidBufFlatten.process(s, ~inBufStats, destination:~inBufFlat, action: {
            FluidBufCompose.process(s, ~inBufFlat, numFrames: ~numMelBands, destination: ~inBufComp, action: {
                ~standardizer.transformPoint(~inBufComp, ~inBufStand, {
                    ~tree.kNearest(~inBufStand,{ |a|a.postln;~nearest = a;})
                })
            })
        })
    })
})
)

// playback the nearest matches in order
(
fork{
    ~nearest.do{|i|
        var buf, start, stop, dur;

        buf = ~slices["data"][i.asInteger][0];
        start = ~slices["data"][i.asInteger][1];
        stop = ~slices["data"][i.asInteger][2];
        dur = (stop - start)/ s.sampleRate;
        {BufRd.ar(1,buf, Line.ar(start,stop,dur, doneAction: 2), 0, 2);}.play;

        i.postln;
        dur.wait;
    };
}
)

///////////////////////////////////////////////
s.reboot
~ds = FluidDataSet.new(s)
~point = Buffer.alloc(s,1,1)
(
Routine{
    10.do{|i|
        ~point.set(0,i);
        ~ds.addPoint(i.asString,~point,{("addPoint"+i).postln}); //because Buffer.set does an immediate update in the RT thread, we can take for granted it'll be updated when we call addPoint
        s.sync; //but we need to sync to make sure everything is done on the DataSet before the next iteration
    }
}.play
)
~ds.print;

/*** KDTREE ***/
~tree = FluidKDTree.new(s)
~tree.fit(~ds,action:{"Done indexing".postln})

~tree.numNeighbours = 5; //play with this
(
Routine{
    10.do{|i|
        ~point.set(0,i);
        ~tree.kNearest(~point, {|x| "Neighbours for a value of % are ".postf(i); x.postln});
        s.sync;
    }
}.play
)

/*** KMEANS ***/

~kmeans = FluidKMeans.new(s,maxIter:100);
~kmeans.numClusters = 2; //play with this
~kmeans.fit(~ds,action:{|x| "Done fitting, with this number of items per cluster: ".post;x.postln;})

(
Routine{
    10.do{|i|
        ~point.set(0,i);
        ~kmeans.predictPoint(~point,{|x| ("Predicted cluster for a value of " + i ++ ":" + x).postln});
        s.sync;
    }
}.play
)

~labels = FluidLabelSet(s);

~kmeans.predict(~ds,~labels, {|x| ("Size of each cluster:" + x).postln})

(
~labels.size{|x|
    Routine{
        x.asInteger.do{|i|
            ~labels.getLabel(i,action: {|l|
                ("Label for entry " + i ++ ":" + l).postln;
            });
            s.sync;
        }
    }.play;
};
)

// or simply print it
~labels.print

// or dump and format
(
~labels.dump{|x|
    var keys = x["data"].keys.asArray.sort;
    keys.do{|key|
        "Label for entry % is %\n".postf(key, x["data"][key][0]);
    }
}
)

///////////////////////////////////////////////
s.reboot
~ds = FluidDataSet.new(s)
~point = Buffer.alloc(s,1,1)
(
Routine{
    10.do{|i|
        var d;
        if(i<=4,{d=i},{d=i+5});
        ~point.set(0,d);
        ~ds.addPoint(i.asString,~point,{("addPoint"+i).postln});
        s.sync;
    }
}.play
)
~ds.print;

/*** KDTREE ***/
~tree = FluidKDTree.new(s)
~tree.fit(~ds,action:{"Done indexing".postln})

~tree.numNeighbours = 5; //play with this
(
Routine{
    15.do{|i|
        ~point.set(0,i);
        ~tree.kNearest(~point, {|x| "Neighbours for a value of % are ".postf(i); x.post;" with respective distances of ".post;});
        ~tree.kNearestDist(~point, {|x| x.postln});
        s.sync;
    }
}.play
)

/*** KMEANS ***/

~kmeans = FluidKMeans.new(s,maxIter:100)
~kmeans.numClusters = 2; //play with this
~kmeans.fit(~ds, action:{|x| "Done fitting, with this number of items per cluster: ".post;x.postln;})

(
Routine{
    15.do{|i|
        ~point.set(0,i);
        ~kmeans.predictPoint(~point,{|x| ("Predicted cluster for a value of " + i ++ ":" + x).postln});
        s.sync;
    }
}.play
)

~labels = FluidLabelSet(s);

~kmeans.predict(~ds,~labels, {|x| ("Size of each cluster:" + x).postln})

(
~labels.size{|x| //size does not return a value directly, but we can retrieve it via the action function
    Routine{
        x.asInteger.do{|i|
            ~labels.getLabel(i,action: {|l|
                ("Label for entry " + i ++ ":" + l).postln;
            });
            s.sync;
        }
    }.play;
};
)

// or simply print it
~labels.print

///////////////////////////////////////////////
(
~simpleInput = FluidDataSet(s);
~simpleOutput = FluidLabelSet(s);
b = Buffer.alloc(s,2);
~knn = FluidKNNClassifier(s);
~knn.numNeighbours = 3
)

(
var w,v,myx,myy;

//initialise the mouse position holder
myx=0;
myy=0;

//make a window and a full-size view
w = Window.new("Viewer", Rect(100,Window.screenBounds.height - 400, 310, 310)).front;
v = View.new(w,Rect(0,0, 310, 310));

//create a function that reacts to mouse-down
v.mouseDownAction = {|view, x, y|myx=x;myy=y;w.refresh;
    // myx.postln;myy.postln;
    Routine{
        b.setn(0,[myx,myy]);
        ~knn.predictPoint(b, action: {|x|x.postln;});
        s.sync;
    }.play;};

//custom redraw function
w.drawFunc = {
    100.do { |i|
        if (i < 50, {Pen.color = Color.white;} ,{Pen.color = Color.red;});
        Pen.addRect(Rect(i.div(10)*30+10,i.mod(10)*30+10,20,20));
        Pen.perform(\fill);
    };
    Pen.color = Color.black;
    Pen.addOval(Rect(myx-5, myy-5,10,10));
    Pen.perform(\stroke);
};
)

(
//populate a dataset with the centres of the same squares as the GUI (old method, iterating over buffers; a dictionary approach would be more efficient, see the example in this folder)
Routine{
    50.do{|i|
        var x = i.div(10)*30+20;
        var y = i.mod(10)*30+20;
        b.setn(0,[x,y]);
        ~simpleInput.addPoint(i.asString,b,{("Added Input" + i).postln});
        ~simpleOutput.addLabel(i.asString,"White",{("Added Output" + i).postln});
        s.sync;
        b.setn(0,[x+150,y]);
        ~simpleInput.addPoint((i+50).asString,b,{("Added Input" + (i+50)).postln});
        ~simpleOutput.addLabel((i+50).asString,"Red",{("Added Output" + (i+50)).postln});
        s.sync;
    };
    \done.postln;
}.play;
)

// fit the classifier
~knn.fit(~simpleInput,~simpleOutput, action:{"fitting done".postln})

// now click on the grid and read the estimated class according to the K nearest neighbours.

///////////////////////////////////////////////
s.reboot

~urn = { |n=31416, min=0,max=31415| (min..max).scramble.keep(n) }; // n unique random integers between min and max
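// quick sanity check of ~urn (illustrative): 5 unique values between 0 and 9
~urn.value(5, 0, 9).postln; // e.g. [ 3, 7, 0, 9, 2 ] - never a repeat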
// create 200 indices, then values of the output of a function with a predictable shape (a sinewave)
n = 200
~idx = ~urn.value(n)
~data = n.collect{|i|sin(~idx[i]/5000)}

// create the dataset with these associated indices and values
(
~simpleInput = FluidDataSet(s);
~simpleOutput = FluidDataSet(s);
b = Buffer.alloc(s,1);
c = Buffer.alloc(s,1);
~mappingviz = Buffer.alloc(s,512);
)

(
Routine{
    n.do{|i|
        b.set(0,~idx[i]);
        c.set(0,~data[i]);
        ~simpleInput.addPoint(i.asString,b,{("Added Input" + i).postln});
        ~simpleOutput.addPoint(i.asString,c,{("Added Output" + i).postln});
        ~mappingviz.set((~idx[i]/61.4).asInteger,~data[i]);
        s.sync;
    }
}.play
)

~simpleInput.print
~simpleOutput.print

//look at the seed material
~mappingviz.plot(minval:-1,maxval:1)

//create a buffer to hold the query results
~mappingresult = Buffer.alloc(s,512);

//make the process, then fit the data
~knn = FluidKNNRegressor(s,3,1)
~knn.fit(~simpleInput, ~simpleOutput, action:{"fitting done".postln})

// query 512 points along the line (slow because of all that sync'ing)
(
~knn.numNeighbours = 1; // change this to see how many points the system uses to regress
Routine{
    512.do{|i|
        b.set(0,i*61);
        ~knn.predictPoint(b,action:{|d|~mappingresult.set(i,d);});
        s.sync;
        i.postln;
    }
}.play
)

// look at the interpolated values
~mappingresult.plot

// change the number of neighbours to regress on
~knn.numNeighbours_(5)
~knn.fit(~simpleInput, ~simpleOutput, action:{"fitting done".postln})

// instead of doing the mapping per point, let's do a dataset of 512 points
~target = FluidDataSet(s)
~target.load(Dictionary.newFrom([\cols, 1, \data, Dictionary.newFrom(512.collect{|i|[i.asString, [i.asFloat * 61]]}.flatten)]))
~regressed = FluidDataSet(s)
~knn.predict(~target, ~regressed, action:{"prediction done".postln})
|
|
||||||
|
|
||||||
//dump the regressed values
|
|
||||||
~outputArray = Array.newClear(512);
|
|
||||||
~regressed.dump{|x| x["data"].keysValuesDo{|key,val|~outputArray[key.asInteger] = val[0]}}
|
|
||||||
~outputArray.plot
|
|
||||||
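The prediction in this example is essentially the mean of the output values of the K nearest input points, which is why `numNeighbours: 1` gives a step-like plot and larger values smooth it out. A rough sketch of that averaging in Python (a hypothetical helper for illustration, not the library code):

```python
import math

def knn_regress(xs, ys, query, k=3):
    """Predict by averaging the ys of the k xs closest to the query (1-D inputs)."""
    nearest = sorted(range(len(xs)), key=lambda i: abs(xs[i] - query))[:k]
    return sum(ys[i] for i in nearest) / k

xs = list(range(0, 31416, 157))           # sparse indices over the same range
ys = [math.sin(x / 5000) for x in xs]     # same sine shape as the example
pred = knn_regress(xs, ys, 7854.0, k=3)   # query near the peak of the sine
print(round(pred, 3))
```

Between sample points the output stays flat until a different set of neighbours takes over, so the interpolation is piecewise rather than smooth, matching what `~mappingresult.plot` shows.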

///////////////////////////////////////////////////////

(
// set some variables
~nb_of_dim = 10;
~dataset = FluidDataSet(s);
)

(
// fill the dataset with 20 entries of 10 column/dimension/descriptor values each. The naming of each item's label is arbitrary, as usual
Routine{
	var buf = Buffer.alloc(s,~nb_of_dim);
	20.do({ arg i;
		buf.loadCollection(Array.fill(~nb_of_dim,{rrand(0.0,100.0)}));
		~dataset.addPoint("point-"++i.asInteger.asString, buf);
		s.sync;
	});
	buf.free;
	\done.postln;
}.play
)

~dataset.print;

// make a buf for getting points back
~query_buf = Buffer.alloc(s,~nb_of_dim);

// look at a point to see that it has values in it
~dataset.getPoint("point-0",~query_buf,{~query_buf.getn(0,~nb_of_dim,{|x|x.postln;});});

// look at another point to make sure it's different...
~dataset.getPoint("point-7",~query_buf,{~query_buf.getn(0,~nb_of_dim,{|x|x.postln;});});

///////////////////////////////////////////////////////
// exploring full dataset normalization and standardization

// make a FluidNormalize
~normalize = FluidNormalize(s,0,1);

// fit the dataset to find the coefficients
~normalize.fit(~dataset,{"done".postln;});

// make an empty 'normed_dataset', which is required for the normalize process
~normed_dataset = FluidDataSet(s);

// normalize the full dataset
~normalize.transform(~dataset,~normed_dataset,{"done".postln;});

// look at a point to see its scaled values
~normed_dataset.getPoint("point-0",~query_buf,{~query_buf.getn(0,~nb_of_dim,{|x|x.postln;});});
// 10 numbers between 0.0 and 1.0, where each column/dimension/descriptor is certain to have at least one item at 0 and one at 1
// query a few more for fun

// try FluidStandardize
~standardize = FluidStandardize(s);

// fit the dataset to find the coefficients
~standardize.fit(~dataset,{"done".postln;});

// standardize the full dataset
~standardized_dataset = FluidDataSet(s);
~standardize.transform(~dataset,~standardized_dataset,{"done".postln;});

// look at a point to see its scaled values
~standardized_dataset.getPoint("point-0",~query_buf,{~query_buf.getn(0,~nb_of_dim,{|x|x.postln;});});
// 10 standardized numbers, which means that, for each column/dimension/descriptor, the average over all the points will be 0 and the standard deviation 1.
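The two scalings do different things per column: normalization maps each column's observed [min, max] onto [0, 1] via x' = (x - min)/(max - min), while standardization removes the column mean and divides by the standard deviation, x' = (x - mean)/stddev. A quick one-column sketch of both (illustration only; whether the library uses the population or sample deviation is not specified here, so the sketch assumes the population form):

```python
def normalize(col):
    """Min-max scaling: the result always contains a 0 and a 1."""
    lo, hi = min(col), max(col)
    return [(x - lo) / (hi - lo) for x in col]

def standardize(col):
    """Z-scoring: the result has mean 0 and (population) standard deviation 1."""
    n = len(col)
    mu = sum(col) / n
    sigma = (sum((x - mu) ** 2 for x in col) / n) ** 0.5
    return [(x - mu) / sigma for x in col]

col = [12.0, 50.0, 88.0, 3.0, 97.0]   # made-up values in the 0..100 range used above
norm = normalize(col)
stan = standardize(col)
print(min(norm), max(norm))           # endpoints land exactly on 0 and 1
print(round(sum(stan) / len(stan), 6))
```

Note that normalization is sensitive to outliers (a single extreme value squashes everything else toward one end), whereas standardization only centres and rescales, so values can exceed [-1, 1].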

////////////////////////////////////////////////////
// exploring point querying concepts via norm and std

// Once a dataset is normalized or standardized, query points have to be scaled accordingly before being used in distance measurements. In our case, values were originally between 0 and 100; now they lie between 0 and 1 (norm), or their average is 0 (std). If we want to match data from a similarly ranged input, which is usually the case, we need to scale the query point in each dimension using the same coefficients.

// first, make sure you have run all the code above, since we will query these datasets

// get a known point as a query point
~dataset.getPoint("point-7",~query_buf);

// find the 2 points with the shortest distances in the dataset
~tree = FluidKDTree.new(s,numNeighbours:2);
~tree.fit(~dataset)
~tree.kNearest(~query_buf, {|x| ("Labels:" + x).postln});
~tree.kNearestDist(~query_buf, {|x| ("Distances:" + x).postln});
// its nearest neighbour should be itself, with a distance of 0. The second point depends on your input dataset.

// normalise that point (~query_buf) to be at the right scale
~normbuf = Buffer.alloc(s,~nb_of_dim);
~normalize.transformPoint(~query_buf,~normbuf);
~normbuf.getn(0,~nb_of_dim,{arg vec;vec.postln;});

// make a tree of the normalized database and query with the normalized buffer
~normtree = FluidKDTree.new(s,numNeighbours:2);
~normtree.fit(~normed_dataset)
~normtree.kNearest(~normbuf, {|x| ("Labels:" + x).postln});
~normtree.kNearestDist(~normbuf, {|x| ("Distances:" + x).postln});
// its nearest neighbour is still itself, as it should be, but the 2nd neighbour might have changed. The distance is now different too

// standardize that same point (~query_buf) to be at the right scale
~stdbuf = Buffer.alloc(s,~nb_of_dim);
~standardize.transformPoint(~query_buf,~stdbuf);
~stdbuf.getn(0,~nb_of_dim,{arg vec;vec.postln;});

// make a tree of the standardized database and query with the standardized buffer
~stdtree = FluidKDTree.new(s, numNeighbours: 2);
~stdtree.fit(~standardized_dataset)
~stdtree.kNearest(~stdbuf, {|x| ("Labels:" + x).postln});
~stdtree.kNearestDist(~stdbuf, {|x| ("Distances:" + x).postln});
// its nearest neighbour is still itself, as it should be, but the 2nd neighbour might have changed yet again, and so has the distance

// where it starts to be interesting is when we query points that are not in our original dataset

// fill with known values (50.0 for each of the 10 columns/dimensions/descriptors, i.e. the theoretical middle of the multidimensional space). This could be anything, but it is fun to aim at the middle.
~query_buf.fill(0,~nb_of_dim,50);

// normalize and standardize the query buffer. Note that we do not need to fit, since we have not added a point to our reference dataset
~normalize.transformPoint(~query_buf,~normbuf);
~standardize.transformPoint(~query_buf,~stdbuf);

// query the single nearest neighbour under the 3 different data scalings. Depending on the random source at the beginning, you should get (small or large) differences between the 3 answers!
[~tree,~normtree,~stdtree].do{|t| t.numNeighbours = 1 };
~tree.kNearest(~query_buf, {|x| ("Original:" + x).post;~tree.kNearestDist(~query_buf, {|x| (" with a distance of " + x).postln});});
~normtree.kNearest(~normbuf, {|x| ("Normalized:" + x).post;~normtree.kNearestDist(~normbuf, {|x| (" with a distance of " + x).postln});});
~stdtree.kNearest(~stdbuf, {|x| ("Standardized:" + x).post; ~stdtree.kNearestDist(~stdbuf, {|x| (" with a distance of " + x).postln});});
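The reason the three queries can disagree is that Euclidean distance is not scale-invariant: a column with a larger range dominates the sum of squares, so rescaling can change which point is "nearest". A tiny demonstration with made-up two-column data (not taken from the example):

```python
import math

a = (1.0, 100.0)    # query point
b = (2.0, 100.0)    # differs a little in column 0
c = (1.0, 140.0)    # differs a lot in raw column 1, but column 1 has a wide range

# raw distances: column 1's big range dominates, so b looks much nearer
raw = [math.dist(a, p) for p in (b, c)]

# rescale column 1 by its assumed range of 100, as a normalizer would
scale = lambda p: (p[0], p[1] / 100.0)
scaled = [math.dist(scale(a), scale(p)) for p in (b, c)]

print(raw)      # b is nearest before scaling
print(scaled)   # c is nearest after scaling: the neighbour order flipped
```

This is exactly why a query buffer has to pass through `transformPoint` with the same coefficients as the dataset before being handed to the scaled tree.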

///////////////////////////////////////////////////////

//1 - make the gui then the synth below
(
var trained = 0, entering = 0;
var va = Array.fill(10,{0.5});
var input = Buffer.alloc(s,2);
var output = Buffer.alloc(s,10);
var mlp = FluidMLPRegressor(s,[6],activation: 1,outputActivation: 1,maxIter: 1000,learnRate: 0.1,momentum: 0,batchSize: 1,validation: 0);
var entry = 0;

~inData = FluidDataSet(s);
~outData = FluidDataSet(s);

w = Window("ChaosSynth", Rect(10, 10, 790, 320)).front;
a = MultiSliderView(w,Rect(10, 10, 400, 300)).elasticMode_(1).isFilled_(1);
a.value=va;
a.action = {arg q;
	b.set(\val, q.value);
	va = q.value;};
f = Slider2D(w,Rect(420,10,300, 300));
f.x = 0.5;
f.y = 0.5;
f.action = {arg x,y; //if trained, predict the point f.x f.y
	if (entering == 1, { //if entering a point, add f.x f.y against the array va to the database
		input.setn(0, [f.x, f.y]);
		output.setn(0, va);
		~inData.addPoint(entry.asSymbol,input);
		~outData.addPoint(entry.asSymbol,output);
		entering = 0;
		entry = entry + 1;
		{d.value = 0;}.defer;
	}, { //if not entering a point
		if (trained == 1, { //if trained
			input.setn(0, [f.x, f.y]);
			mlp.predictPoint(input,output,{
				output.getn(0,10,{
					|x|va = x; b.set(\val, va); {a.value = va;}.defer;});
			});
		});
	});
};

c = Button(w, Rect(730,240,50, 20)).states_([["train", Color.red, Color.white], ["trained", Color.white, Color.grey]]).action_{
	mlp.fit(~inData,~outData,{|x|
		trained = 1;
		{
			c.value = 1;
			e.value = x.round(0.001).asString;
		}.defer;
	});//train the network
};
d = Button(w, Rect(730,10,50, 20)).states_([["entry", Color.white, Color.grey], ["entry", Color.red, Color.white]]).action_{
	entering = 1;
};
StaticText(w,Rect(732,260,50,20)).string_("Error:");
e = TextField(w,Rect(730,280,50,20)).string_(0.asString);
StaticText(w,Rect(732,70,50,20)).string_("rate:");
TextField(w,Rect(730,90,50,20)).string_(0.1.asString).action_{|in|mlp.learnRate = in.value.asFloat.postln;};
StaticText(w,Rect(732,110,50,20)).string_("momentum:");
TextField(w,Rect(730,130,50,20)).string_(0.0.asString).action_{|in|mlp.momentum = in.value.asFloat.postln;};
StaticText(w,Rect(732,150,50,20)).string_("maxIter:");
TextField(w,Rect(730,170,50,20)).string_(1000.asString).action_{|in| mlp.maxIter = in.value.asInteger.postln;};
StaticText(w,Rect(732,190,50,20)).string_("validation:");
TextField(w,Rect(730,210,50,20)).string_(0.0.asString).action_{|in|mlp.validation = in.value.asFloat.postln;};
)

//2 - the synth
(
b = {
	arg val = #[0,0,0,0,0,0,0,0,0,0];
	var osc1, osc2, feed1, feed2, base1=69, base2=69, base3 = 130;
	#feed2,feed1 = LocalIn.ar(2);
	osc1 = MoogFF.ar(SinOsc.ar((((feed1 * val[0]) + val[1]) * base1).midicps,mul: (val[2] * 50).dbamp).atan,(base3 - (val[3] * (FluidLoudness.kr(feed2, 1, 0, hopSize: 64)[0].clip(-120,0) + 120))).lag(128/44100).midicps, val[4] * 3.5);
	osc2 = MoogFF.ar(SinOsc.ar((((feed2 * val[5]) + val[6]) * base2).midicps,mul: (val[7] * 50).dbamp).atan,(base3 - (val[8] * (FluidLoudness.kr(feed1, 1, 0, hopSize: 64)[0].clip(-120,0) + 120))).lag(128/44100).midicps, val[9] * 3.5);
	Out.ar(0,LeakDC.ar([osc1,osc2],mul: 0.1));
	LocalOut.ar([osc1,osc2]);
}.play;
)


~inData.print;
~outData.print;

/////////
//3 - play with the multislider
//4 - when you like a spot, click entry (it becomes red), then click a position in the 2D graph where this point should be
//5 - do that for a few points
//6 - click train
//7 - the 2D graph controls the 10D
//8 - if you like a new sound and want to update the graph, just click entry, then where it should be in the 2D graph, then retrain when you are happy
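Once trained, the regressor maps the 2-D slider position through layers of weighted sums and squashing activations out to the 10 slider values. A minimal forward-pass sketch in Python with made-up random weights (the real weights come from `mlp.fit`; the sketch assumes `activation: 1` means a sigmoid, and the shapes match the example: 2 inputs, one hidden layer of 6, 10 outputs):

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer with sigmoid activation."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

random.seed(0)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(6)]   # 2 -> 6
b1 = [0.0] * 6
w2 = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(10)]  # 6 -> 10
b2 = [0.0] * 10

hidden = layer([0.5, 0.5], w1, b1)   # the Slider2D position
out = layer(hidden, w2, b2)          # the 10 multislider values
print(len(out), all(0.0 <= v <= 1.0 for v in out))
```

Because the output activation is also a sigmoid, every predicted value is squashed into [0, 1], which is exactly the range the multislider expects.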

///////////////////////////////////////////////////////

// Make:
// - A kmeans
// - a datasetquery
// - a normalizer
// - a standardizer
// - 3 DataSets of example points (R, G and B descriptions)
// - 3 DataSets for the scaled versions
// - 1 summative dataset and a LabelSet for predicted labels

(
~classifier = FluidKMeans(s,5, 1000);
~query = FluidDataSetQuery(s);
~stan = FluidStandardize(s);
~norm = FluidNormalize(s);
~sourceR = FluidDataSet(s);
~sourceG = FluidDataSet(s);
~sourceB = FluidDataSet(s);
~scaledR = FluidDataSet(s);
~scaledG = FluidDataSet(s);
~scaledB = FluidDataSet(s);
~composited = FluidDataSet(s);
~labels = FluidLabelSet(s);
)

// Make some random, but clustered, test points, each descriptor category in a separate dataset
(
~sourceR.load(Dictionary.newFrom([\cols, 1, \data, (Dictionary.newFrom(40.collect{|x| [x, 1.0.sum3rand]}.flatten))]));
~sourceG.load(Dictionary.newFrom([\cols, 1, \data, (Dictionary.newFrom(40.collect{|x| [x, 1.0.rand2]}.flatten))]));
~sourceB.load(Dictionary.newFrom([\cols, 1, \data, (Dictionary.newFrom(40.collect{|x| [x, (0.5.sum3rand).squared + [0.75,-0.1].choose]}.flatten))]));
)

// here we manipulate the data

// assemble the composite dataset
(
~query.addColumn(0, {
	~query.transformJoin(~sourceB, ~sourceG, ~composited, {
		~query.transformJoin(~sourceR, ~composited, ~composited);
	});
});
)

~composited.print

// Fit the classifier to the example DataSet, and run prediction into our mapping label set
~classifier.fitPredict(~composited,~labels,{~labels.dump{|x|~labeldict = x;};~composited.dump{|x|~compodict=x;};});

// Visualise:
(
w = Window("sourceClasses", Rect(128, 64, 820, 120));
w.drawFunc = {
	Pen.use{
		~compodict["data"].keysValuesDo{|key, colour|
			Pen.fillColor = Color.fromArray((colour * 0.5 + 0.5 ).clip(0,1) ++ 1);
			Pen.fillRect( Rect( (key.asFloat * 20 + 10), (~labeldict["data"].at(key).asInteger[0] * 20 + 10),15,15));
		};
	};
};
w.refresh;
w.front;
)

// standardize our colours and rerun
(
~stan.fitTransform(~sourceR, ~scaledR, {
	~stan.fitTransform(~sourceG, ~scaledG, {
		~stan.fitTransform(~sourceB, ~scaledB, {
			//assemble
			~query.addColumn(0, {
				~query.transformJoin(~scaledB, ~scaledG, ~composited, {
					~query.transformJoin(~scaledR, ~composited, ~composited, {
						//fit
						~classifier.fitPredict(~composited,~labels,{~labels.dump{|x|~labeldict2 = x;};~composited.dump{|x|~compodict2=x;};});
					});
				});
			});
		});
	});
});
)

// Visualise:
(
w = Window("stanClasses", Rect(128, 204, 820, 120));
w.drawFunc = {
	Pen.use{
		~compodict2["data"].keysValuesDo{|key, colour|
			Pen.fillColor = Color.fromArray((colour * 0.25 + 0.5 ).clip(0,1) ++ 1);
			Pen.fillRect( Rect( (key.asFloat * 20 + 10), (~labeldict2["data"].at(key).asInteger[0] * 20 + 10),15,15));
		};
	};
};
w.refresh;
w.front;
)

// now let's normalise instead
(
~norm.fitTransform(~sourceR, ~scaledR, {
	~norm.fitTransform(~sourceG, ~scaledG, {
		~norm.fitTransform(~sourceB, ~scaledB, {
			//assemble
			~query.addColumn(0, {
				~query.transformJoin(~scaledB, ~scaledG, ~composited, {
					~query.transformJoin(~scaledR, ~composited, ~composited, {
						//fit
						~classifier.fitPredict(~composited,~labels,{~labels.dump{|x|~labeldict2 = x;};~composited.dump{|x|~compodict2=x;};});
					});
				});
			});
		});
	});
});
)

// Visualise:
(
w = Window("normClasses", Rect(128, 344, 820, 120));
w.drawFunc = {
	Pen.use{
		~compodict2["data"].keysValuesDo{|key, colour|
			Pen.fillColor = Color.fromArray((colour * 0.25 + 0.5 ).clip(0,1) ++ 1);
			Pen.fillRect( Rect( (key.asFloat * 20 + 10), (~labeldict2["data"].at(key).asInteger[0] * 20 + 10),15,15));
		};
	};
};
w.refresh;
w.front;
)

// let's mess with the scaling of one dimension: multiply the range of Red by 10
~norm.min = -10;
~norm.max = 10;
(
~norm.fitTransform(~sourceR, ~scaledR, {
	//assemble
	~query.addColumn(0, {
		~query.transformJoin(~scaledB, ~scaledG, ~composited, {
			~query.transformJoin(~scaledR, ~composited, ~composited, {
				//fit
				~classifier.fitPredict(~composited,~labels,{~labels.dump{|x|~labeldict2 = x;};~composited.dump{|x|~compodict2=x;};});
			});
		});
	});
});
)

// Visualise:
(
w = Window("norm10rClasses", Rect(128, 484, 820, 120));
w.drawFunc = {
	Pen.use{
		~compodict2["data"].keysValuesDo{|key, colour|
			Pen.fillColor = Color.fromArray((colour * 0.25 + 0.5 ).clip(0,1) ++ 1);
			Pen.fillRect( Rect( (key.asFloat * 20 + 10), (~labeldict2["data"].at(key).asInteger[0] * 20 + 10),15,15));
		};
	};
};
w.refresh;
w.front;
)
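Behind `fitPredict`, k-means alternates two steps until the means stop moving: assign every point to its nearest mean, then move each mean to the centroid of the points assigned to it. One iteration of that loop, sketched in Python on 1-D data (an illustration of the algorithm, not the FluCoMa implementation):

```python
def kmeans_step(points, means):
    """One Lloyd iteration: assign each point to the nearest mean, then recompute means."""
    labels = [min(range(len(means)), key=lambda k: abs(p - means[k]))
              for p in points]
    new_means = [sum(p for p, l in zip(points, labels) if l == k) /
                 max(1, sum(1 for l in labels if l == k))
                 for k in range(len(means))]
    return labels, new_means

points = [-1.1, -0.9, -1.0, 0.9, 1.1, 1.0]   # two obvious clusters
labels, means = kmeans_step(points, [-0.5, 0.5])
print(labels, means)
```

The assignment step is a pure distance comparison, which is why the scaling games above matter: stretching one descriptor's range by 10 makes that descriptor dominate the cluster assignments, visibly reshuffling the classes in the last window.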

///////////////////////////////////////////////////////

// using nmf in 'real-time' as a classifier
// how it works: a circular buffer is recording, and attacks trigger the process
// if in learning mode, it does a one-component nmf, which makes an approximation of the base. 3 of those will be copied into 3 different positions of our final 3-component base
// if in guessing mode, it does a three-component nmf from the trained bases and yields the 3 activation peaks, on which it thresholds the resynthesis

// how to use:
// 1. start the server
// 2. select the code between parentheses below and execute it. You should get a window with 3 pads (bd sn hh) and various menus
// 3. train the 3 classes:
// 3.1 select the learn option
// 3.2 select which class you want to train
// 3.3 play the sound you want to associate with that class a few times (the left audio channel is the source)
// 3.4 click the transfer button
// 3.5 repeat (3.2-3.4) for the other 2 classes.
// 3.x you can observe the 3 bases here:
~classify_bases.plot(numChannels:3)

// 4. classify
// 4.1 select the classify option
// 4.2 press a pad and look at the activation
// 4.3 tweak the thresholds and enjoy the resynthesis. (the right audio channel is the detected class, where classA is a bd sound)
// 4.x you can observe the 3 activations here:
~activations.plot(numChannels:3)

/// code to execute first
(
var circle_buf = Buffer.alloc(s,s.sampleRate * 2); // b
var input_bus = Bus.audio(s,1); // g
var classifying = 0; // c
var cur_training_class = 0; // d
var train_base = Buffer.alloc(s, 65); // e
var activation_vals = [0.0,0.0,0.0]; // j
var thresholds = [0.5,0.5,0.5]; // k
var activations_disps;
var analysis_synth;
var osc_func;
var update_rout;

~classify_bases = Buffer.alloc(s, 65, 3); // f
~activations = Buffer.new(s);

// the circular buffer, with triggered actions sending the location of the head at the attack
Routine {
	SynthDef(\JITcircular,{arg bufnum = 0, input = 0, env = 0;
		var head, head2, duration, audioin, halfdur, trig;
		duration = BufFrames.kr(bufnum) / 2;
		halfdur = duration / 2;
		head = Phasor.ar(0,1,0,duration);
		head2 = (head + halfdur) % duration;

		// circular buffer writer
		audioin = In.ar(input,1);
		BufWr.ar(audioin,bufnum,head,0);
		BufWr.ar(audioin,bufnum,head+duration,0);
		trig = FluidAmpSlice.ar(audioin, 10, 1666, 2205, 2205, 12, 9, -47,4410, 85);

		// cue the calculations via the language
		SendReply.ar(trig, '/attack',head);

		Out.ar(0,audioin);
	}).add;

	// drum sounds taken from original code by snappizz
	// https://sccode.org/1-523
	// produced further and humanised by PA
	SynthDef(\fluidbd, {
		|out = 0|
		var body, bodyFreq, bodyAmp;
		var pop, popFreq, popAmp;
		var click, clickAmp;
		var snd;

		// body starts midrange, quickly drops down to low freqs, and trails off
		bodyFreq = EnvGen.ar(Env([Rand(200,300), 120, Rand(45,49)], [0.035, Rand(0.07,0.1)], curve: \exp));
		bodyAmp = EnvGen.ar(Env([0,Rand(0.8,1.3),1,0],[0.005,Rand(0.08,0.085),Rand(0.25,0.35)]), doneAction: 2);
		body = SinOsc.ar(bodyFreq) * bodyAmp;
		// pop sweeps over the midrange
		popFreq = XLine.kr(Rand(700,800), Rand(250,270), Rand(0.018,0.02));
		popAmp = EnvGen.ar(Env([0,Rand(0.8,1.3),1,0],[0.001,Rand(0.018,0.02),Rand(0.0008,0.0013)]));
		pop = SinOsc.ar(popFreq) * popAmp;
		// click is spectrally rich, covering the high-freq range
		// you can use Formant, FM, noise, whatever
		clickAmp = EnvGen.ar(Env.perc(0.001,Rand(0.008,0.012),Rand(0.07,0.12),-5));
		click = RLPF.ar(VarSaw.ar(Rand(900,920),0,0.1), 4760, 0.50150150150) * clickAmp;

		snd = body + pop + click;
		snd = snd.tanh;

		Out.ar(out, snd);
	}).add;

	SynthDef(\fluidsn, {
		|out = 0|
		var pop, popAmp, popFreq;
		var noise, noiseAmp;
		var click;
		var snd;

		// pop makes a click coming from very high frequencies
		// slowing down a little and stopping in mid-to-low
		popFreq = EnvGen.ar(Env([Rand(3210,3310), 410, Rand(150,170)], [0.005, Rand(0.008,0.012)], curve: \exp));
		popAmp = EnvGen.ar(Env.perc(0.001, Rand(0.1,0.12), Rand(0.7,0.9),-5));
		pop = SinOsc.ar(popFreq) * popAmp;
		// bandpass-filtered white noise
		noiseAmp = EnvGen.ar(Env.perc(0.001, Rand(0.13,0.15), Rand(1.2,1.5),-5), doneAction: 2);
		noise = BPF.ar(WhiteNoise.ar, 810, 1.6) * noiseAmp;

		click = Impulse.ar(0);
		snd = (pop + click + noise) * 1.4;

		Out.ar(out, snd);
	}).add;

	SynthDef(\fluidhh, {
		|out = 0|
		var click, clickAmp;
		var noise, noiseAmp, noiseFreq;

		// noise -> resonance -> expodec envelope
		noiseAmp = EnvGen.ar(Env.perc(0.001, Rand(0.28,0.3), Rand(0.4,0.6), [-20,-15]), doneAction: 2);
		noiseFreq = Rand(3900,4100);
		noise = Mix(BPF.ar(ClipNoise.ar, [noiseFreq, noiseFreq+141], [0.12, 0.31], [2.0, 1.2])) * noiseAmp;

		Out.ar(out, noise);
	}).add;

	// make sure all the synthdefs are on the server
	s.sync;

	// instantiate the JIT-circular-buffer
	analysis_synth = Synth(\JITcircular,[\bufnum, circle_buf, \input, input_bus]);
	train_base.fill(0,65,0.1);

	// instantiate the listener to cue the processing from the language side
	osc_func = OSCFunc({ arg msg;
		var head_pos = msg[3];
		// when an attack happens
		if (classifying == 0, {
			// if in training mode, make a single-component nmf
			FluidBufNMF.process(s, circle_buf, head_pos, 128, bases:train_base, basesMode: 1, windowSize: 128);
		}, {
			// if in classifying mode, make a 3-component nmf from the pretrained bases and compare the activations with the set thresholds
			FluidBufNMF.process(s, circle_buf, head_pos, 128, components:3, bases:~classify_bases, basesMode: 2, activations:~activations, windowSize: 128, action:{
				// we retrieve and compare against the 2nd activation, because FFT processes are zero-padded on each side, so the complete 128 samples are in the middle of the analysis.
				~activations.getn(3,3,{|x|
					activation_vals = x;
					if (activation_vals[0] >= thresholds[0], {Synth(\fluidbd,[\out,1])});
					if (activation_vals[1] >= thresholds[1], {Synth(\fluidsn,[\out,1])});
					if (activation_vals[2] >= thresholds[2], {Synth(\fluidhh,[\out,1])});
					defer{
						activations_disps[0].string_("A:" ++ activation_vals[0].round(0.01));
						activations_disps[1].string_("B:" ++ activation_vals[1].round(0.01));
						activations_disps[2].string_("C:" ++ activation_vals[2].round(0.01));
					};
				});
			};
			);
		});
	}, '/attack', s.addr);

	// make sure all the synths are instantiated
	s.sync;

	// GUI for control
	{
		var win = Window("Control", Rect(100,100,610,100)).front;

		Button(win, Rect(10,10,80, 80)).states_([["bd",Color.black,Color.white]]).mouseDownAction_({Synth(\fluidbd, [\out, input_bus], analysis_synth, \addBefore)});
		Button(win, Rect(100,10,80, 80)).states_([["sn",Color.black,Color.white]]).mouseDownAction_({Synth(\fluidsn, [\out, input_bus], analysis_synth, \addBefore)});
		Button(win, Rect(190,10,80, 80)).states_([["hh",Color.black,Color.white]]).mouseDownAction_({Synth(\fluidhh, [\out, input_bus], analysis_synth,\addBefore)});
		StaticText(win, Rect(280,7,85,25)).string_("Select").align_(\center);
		PopUpMenu(win, Rect(280,32,85,25)).items_(["learn","classify"]).action_({|value|
			classifying = value.value;
			if(classifying == 0, {
				train_base.fill(0,65,0.1)
			});
		});
		PopUpMenu(win, Rect(280,65,85,25)).items_(["classA","classB","classC"]).action_({|value|
			cur_training_class = value.value;
			train_base.fill(0,65,0.1);
		});
		Button(win, Rect(375,65,85,25)).states_([["transfer",Color.black,Color.white]]).mouseDownAction_({
			if(classifying == 0, {
				// if training
				FluidBufCompose.process(s, train_base, numChans:1, destination:~classify_bases, destStartChan:cur_training_class);
			});
		});
		StaticText(win, Rect(470,7,75,25)).string_("Acts");
		activations_disps = Array.fill(3, {arg i;
			StaticText(win, Rect(470,((i+1) * 20 )+ 7,80,25));
		});
		StaticText(win, Rect(540,7,55,25)).string_("Thresh").align_(\center);
		3.do {arg i;
			TextField(win, Rect(540,((i+1) * 20 )+ 7,55,25)).string_("0.5").action_({|x| thresholds[i] = x.value.asFloat;});
		};

		win.onClose_({circle_buf.free;input_bus.free;osc_func.clear;analysis_synth.free;});
	}.defer;
}.play;
)

// thanks to Ted Moore for the SC code cleaning and improvements!
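In training mode the example asks for a single NMF component, i.e. it factors the magnitude spectrogram V into one non-negative basis vector w times one activation row h, refined by multiplicative updates. A small rank-1 sketch of that factorisation in pure Python (Euclidean objective; an illustration of the technique, not the library's exact update rule):

```python
def nmf_rank1(V, iters=50):
    """Rank-1 NMF: V[i][j] ~ w[i] * h[j], via multiplicative updates."""
    m, n = len(V), len(V[0])
    w = [1.0] * m
    h = [1.0] * n
    for _ in range(iters):
        ww = sum(x * x for x in w)
        # update activations: h_j *= (w . V_col_j) / (|w|^2 * h_j)
        h = [hj * sum(w[i] * V[i][j] for i in range(m)) / (ww * hj + 1e-12)
             for j, hj in enumerate(h)]
        hh = sum(x * x for x in h)
        # update basis: w_i *= (V_row_i . h) / (w_i * |h|^2)
        w = [wi * sum(V[i][j] * h[j] for j in range(n)) / (wi * hh + 1e-12)
             for i, wi in enumerate(w)]
    return w, h

# a perfectly rank-1 "spectrogram": column j is the template [1, 2, 4] times act[j]
act = [0.0, 1.0, 0.5, 0.0]
V = [[t * a for a in act] for t in (1.0, 2.0, 4.0)]
w, h = nmf_rank1(V)
recon = [[w[i] * h[j] for j in range(len(h))] for i in range(len(w))]
```

Since everything stays non-negative, the recovered h behaves like the activation buffer in the example: it peaks exactly where the template is present, which is what the threshold comparison keys on.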
TITLE:: FluidFilesPath
summary:: A convenience class for accessing the audio files provided with the FluCoMa Extension
categories:: Libraries>FluidCorpusManipulation
related:: Classes/FluidLoadFolder

DESCRIPTION::


CLASSMETHODS::

METHOD:: new
Get the path to the "AudioFiles" folder inside the FluCoMa extensions folder. Following this with a ++ "name_Of_The_File-You-Want.wav" will create the path to the file you want.

ARGUMENT:: fileName
Optionally, you may pass in the name of the file you want to use, and the *new class method will return the path to that file.

returns:: The path to the "AudioFiles" folder inside the FluCoMa extensions folder (optionally with the provided file name appended).

EXAMPLES::

code::
(
// these will return the same path
(FluidFilesPath()++"Nicol-LoopE-M.wav").postln;
FluidFilesPath("Nicol-LoopE-M.wav").postln;
)

(
// test it one way
s.waitForBoot{
	Routine{
		var path = FluidFilesPath()++"Nicol-LoopE-M.wav";
		var buf = Buffer.read(s,path);

		s.sync;

		buf.play;
	}.play;
}
)

(
// test it another way
s.waitForBoot{
	Routine{
		var path = FluidFilesPath("Nicol-LoopE-M.wav");
		var buf = Buffer.read(s,path);

		s.sync;

		buf.play;
	}.play;
}
)
::
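The two call forms in the example differ only in where the file name gets appended. A hypothetical Python analogue of that convenience (the folder path and function name are invented for illustration; the real class resolves the actual extension install location):

```python
def fluid_files_path(file_name=""):
    """Return the AudioFiles folder path, optionally with a file name appended."""
    base = "/Extensions/FluCoMa/AudioFiles/"   # stand-in for the real location
    return base + file_name

# the same two spellings as the schelp example
p1 = fluid_files_path() + "Nicol-LoopE-M.wav"
p2 = fluid_files_path("Nicol-LoopE-M.wav")
print(p1 == p2)
```

Accepting an optional file name simply saves the caller the string concatenation; both forms yield an identical path.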