thetechnobear's Recent Posts

I've been trying to use Aalto and Kaivo with a sustain pedal (CC 64) and I'm having issues with notes sometimes getting stuck on.
It seems to be worst if you go over the voice count, but it also happens sometimes when that's not the case.

I've put a MIDI monitor in front of Kaivo/Aalto and I can clearly see CC 64 = 0 being sent to it.

(it's very easy to reproduce, as it happens pretty frequently)

related, I'd really like to be able to hold sustain when using the Soundplane over OSC.
how could this be achieved?



ok, this is a bit of a weird one :)
I'm using my Soundplane with my Virus TI.

all works ok, except that sending CCs was causing issues, as those CCs are used by the TI for other purposes. So I decided to use M4L to filter them out.
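Not the actual M4L device, but the filtering logic is roughly this, expressed on raw 3-byte MIDI messages. The specific CC numbers are hypothetical — the real list would be whichever CCs the TI reserves:

```python
# Sketch of the CC filtering idea, on (status, data1, data2) MIDI tuples.
# CCS_TO_DROP is a placeholder: the CCs the synth interprets itself.
CCS_TO_DROP = {73, 74}

def is_filtered_cc(msg):
    """True if msg is a Control Change (0xBn) carrying a CC we want to drop."""
    status, data1, _ = msg
    return (status & 0xF0) == 0xB0 and data1 in CCS_TO_DROP

def filter_stream(messages):
    """Pass everything through except the unwanted CCs."""
    return [m for m in messages if not is_filtered_cc(m)]
```

Note-ons and pitchbends pass through untouched; only the matching CC messages are dropped, on any channel.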

when I did this, I started getting instability in the note pitch (a constant rapid vibrato).

when I slid to a new note, I noticed the instability was not there, and found a pattern (U=unstable, S=stable)


(regardless of starting note, bend range etc... and it's hardly affected by lowering the data rate)

odd: so starting on C it's unstable; slide to D and it's stable. but then start on D and it's unstable, and you can slide back to C and it's stable. (i.e. it's the pitchbend values)

Initially I assumed it was the Virus, but then noticed that if I don't have the Max device in the chain, it's absolutely fine.

so I tried in Max directly: if I send MIDI data straight through, no issue, but if pitchbends are 'parsed' (e.g. midiin to midiparse to midiformat to midiout), the data gets 'garbled'.
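For reference on what that parse/format round-trip has to do: pitchbend is a 14-bit value (0..16383, centre 8192) split across two 7-bit data bytes, so any reassembly bug garbles the value. The arithmetic is standard MIDI:

```python
def bend_to_bytes(value):
    """Encode a 14-bit pitchbend value (0..16383, centre 8192) as (lsb, msb)."""
    return value & 0x7F, (value >> 7) & 0x7F

def bytes_to_bend(lsb, msb):
    """Decode the two 7-bit data bytes back into the 14-bit value."""
    return (msb << 7) | lsb
```

So centre pitch is lsb=0, msb=64, and a correct round-trip must be lossless for every value in 0..16383.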

ok, so I thought it's a Max issue...
so I plugged in my Eigenharp: no issue at all (it's every bit as sensitive, and if anything has faster data rates).

So, it 'appears' (I did say it was odd :o) ) to be some combination of the Soundplane software + Max.
my only 'guess' (having looked at the MIDI data) is that the Soundplane software seems to be 'beating' between a few values even when your finger is still, and I wonder if these rapid changes are causing issues in Max. (Perhaps EigenD smooths it; I'd need to check the code)

But I kind of thought that's what vibrato would do, kind of smooth out the data a bit?
(my usual settings are vibrato 0.5, bend range +/-24)


Nope, MIDI note/CC input is deactivated when OSC is active.
(but plugin automation is still possible)

out of interest, why not just use the pitch/gate from the OSC inputs?

Just one touch, I've not tried with more touches.

Max patch: simple notein->noteout, bendin->bendout.

that's it, there is no processing going on.
I get the same if I do midiin -> midiparse -> midiformat -> midiout (and connect note and pb only).

as I say it's odd: if instead I do midiin->midiout, it works fine, including pitchbends.

I'm sure somehow the TI is a factor, as I don't see it with VSTs, but as I say, I cannot really blame it, as it doesn't do it when Max isn't processing the messages, and Max doesn't do it when I use the Eigenharp.

I suppose I'd need to see exactly what PBs are being sent by the SP software.

a question: if you are using quantize, and vibrato at 0.5, how much movement is required before a pitchbend should be sent?

I'm assuming that at different levels of vibrato, some compress the X movement of the signal, such that small movements are ignored?


yeah, /t3d/sustain could work for me :o) I could then easily support this in EigenD.

I could also look into changing the Soundplane app to listen on a MIDI port, to allow for some pedal inputs that could then be routed over OSC for synths.
(the 'issue' here being: how do musicians get their MIDI pedals to work alongside the Soundplane
when using Aalto/Kaivo?)

currently in EigenD I've done something similar in t3d for breath etc.
e.g. I have the messages
/t3d/breath float
/t3d/strip1 float
/t3d/strip2 float
/t3d/pedal1 float
/t3d/pedal2 float
/t3d/pedal3 float
/t3d/pedal4 float

(I configured these in the Soundplane app as zones, for use with the Soundplane into EigenD via t3dInput, and I also output these on my t3doutput agent)
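On the receiving side, handling those messages is just a dispatch on the OSC address. A minimal sketch (the function names are mine, not from any real API):

```python
# Map incoming (/t3d/<name>, float) messages into a state table.
def make_dispatcher():
    state = {}  # e.g. {"breath": 0.7, "pedal1": 1.0}

    def handle(address, value):
        # only the /t3d/ namespace is of interest here
        if address.startswith("/t3d/"):
            state[address[len("/t3d/"):]] = float(value)

    return state, handle
```

A synth would then read `state["breath"]`, `state["pedal1"]` etc. each control cycle, exactly as it would read a MIDI CC value.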

I think the most common 'additional' inputs used are:

  • sustain

  • breath

  • expression

(I guess there are other 'pedals' for things like hold, legato etc)

I guess the 'issue' for Aalto/Kaivo is that only sustain has a known function,
whereas the others would all need outputs on the device section, so we could route them appropriately. (I'd settle for breath and expression :o) )

I know we also talked about automation names over OSC, which is also useful,
but it's not as useful, as it's not saved per patch, and I can also do this routing by using the plugin's automation; a simple M4L device could do this.

I know, lots of 'ideas', but I'd settle for sustain working over MIDI, and some way of getting sustain when using the Soundplane, for now.

but of course I don't want to delay you getting 1.6 out... as I think there are quite a few waiting for it.

cool, like Windrush... would be nice to hear more about what synths you're using, and how you're using Aalto etc.

Thanks, I hope to be doing more over time.
Currently doing more stuff with Reaktor, which is also fun... pity its OSC implementation is broken; I'm still trying to 'perfect' the multi-touch handling with Reaktor.

I've also ordered a few Axoloti boards, which I will be using for voice-per-channel with both my Eigenharp and Soundplane... very excited by this prospect.


My latest video is the start of a series where I'm going to show how you can use EigenD to build a modular synth, with full per-note expression.
part 1 goes from the basics... and then we get more serious and fruity.

You Tube link

I'm using the Soundplane as my controller, as it's great for this... but the techniques are applicable to all controllers.
With the Soundplane I'm using t3d OSC, but you can also use MIDI (including voice per channel).

Note: please subscribe, as I don't want to spam this forum with my videos, so I may not always update this thread.

great idea, I've got a few Aalto and Kaivo patches I've been working on...
perhaps we should have a separate 'Patches for Soundplane' thread, as the Aalto/Kaivo patches threads are very long.

Yeah, I'd love to see some improvement in this area, as I find chords challenging.
with fourths, there are fingerings (I've found):
a) linear, which is a bit of a stretch, and takes a little too much space

b) over two rows, which can work, but I find getting equal pressure tricky due to the fingering
e.g. (left hand)





the main issue though is that many inversions & other chords cannot be done, since you end up with adjacent notes in either the vertical or horizontal axis.

I think 'how musicians play the Soundplane' is a possibly rich area for discussion;
I'll start another topic rather than derail this one :o)

Playing chords (assuming fourths layouts)

here's how I'm trying to play chords

a) on one row - most obvious, but can take up quite a bit of surface to play a chord

b) over two rows - more compact; fingering not too bad; not all chords/inversions possible due to adjacent (vertical) notes; equal pressure can be difficult; and practice is required to consistently get the correct spacing.

example (left hand)





My answers

a) Currently primarily I use it as a playing surface

b) Using rows as fourths

c) I'm getting pretty comfortable with the SP, playing solo parts with perhaps 2-3 touches active using one or both hands. I'm still practicing multi-part pieces (see Difficulties)

d) Difficulties

Arps: getting even pressure, and ensuring each note sounds and does not slide

Chords: fingering is difficult (see next post), and I find it easy to either be too close to a border and trigger an incorrect note, or not get enough pressure on some fingers
(it's getting better but it's still hard)

playing non-legato with adjacent notes, when played faster... too often I end up with a slide. I think this is partly me, and partly the software not always treating it as a new touch (regardless of LP setting)

Consistent velocity over MIDI: I don't seem to be able to get very light or very hard touches; it seems to play in the range 40-90 (rather than 0-127), which can make subtle playing on some soft synths tricky

e) Enjoyment

I love playing with both hands where only 2-3 touches are used, i.e. 1 touch in the left hand playing 'bass', 2-3 touches in the right; the sliding between notes is brilliant. the 'poly pressure' is great, and the Y movement for timbre is excellent.

It's a really different instrument to the Eigenharp.
The Eigenharp excels at playing 'anything', as its key action is faster and there are no limitations on layout.
The Soundplane excels with multi-finger expressiveness; it's hard to explain why, but I think it's partly that the size of the key zone means you can really slide around it, a more 'exaggerated' action. Overall I'm glad I have both, as they complement each other really well.

part 3 is up... that's the last of the basics.
this rounds off this 'section', covering the sub oscillator, LFO for PWM, and envelopes... and a bit of FM.
From here it will be less regular, and will concentrate on more complex patches and techniques, and integrating things in perhaps unexpected ways.
EigenD : Modular Synth part3

Note: Link to downloads and documentation in youtube description of each video

Thanks Randy. Lots more developments to come... it's great being able to collaborate with both you and Antonio; it's a lot of fun, and there's so much potential... looking forward to 2015.

part 2 is up, getting fruity... again using the Soundplane
EigenD : Modular Synth part2

Here's a video of my new EigenD agent which allows full control of EigenD from the Soundplane, providing scales, splits, step sequencers, and loop control, all directly from the Soundplane's surface
You tube demonstration

note: this video concentrates on showing the features available; I'm going to do a follow-up video which will show 'exclusive' features for the Soundplane, and in particular how to build a per-note expressive synth in EigenD, and more... :)

yeah, though... I think it's Reaktor... as exactly the same tests work in Max/MSP and also in a C++ app I'm writing, on the same machine.

yeah, the frame message idea should work, and given the speed of NI fixes, I think it's probably the only realistic solution for now.

touch-off won't help in this scenario, as it would still look like a touch-off followed by a new touch-on... really, timestamps/sequencing is the only real solution.
and I totally agree, really it's Reaktor that should provide us with access to the bundle timestamp.

my only 'concern' over using the bundle, though, is that I think there are a few apps that don't explicitly support OSC bundles and their timestamps, e.g. Numerology doesn't either (though Jim may be willing to add it).

the channel property only works for "controllers", not note_rows.

it would be nice for splits, but would only work for non-multichannel setups.

here you go... not 1 but 2 t3d-based Reaktor synths

included is the t3d macro, which replaces Mark's 'continuum front end' and is pin-compatible.
I then updated two of his synths, NanoWave and Matrix, using this macro.
(the whole thing took less than 5 minutes)

you can easily do his other synths by simply downloading his ensembles and replacing the front end with the macro... just be careful to wire the correct things up.
(tip: import the t3d macro and wire up one connection at a time; as you attach the t3d wires, the continuum front end wires will disappear, so you know what's left to do... and only delete the continuum front end once you have done all the wires)

thanks to Mark Smart for sharing the originals, and I hope others here find these useful and instructive.
p.s. I checked with Mark and he was fine with me sharing.

(another!) Mark

EDIT: ok, I've noticed there are issues with Reaktor and t3d OSC:
a) stuck notes: there appears to be a bug in Reaktor where, when a lot of OSC data is sent quickly, it is presented to the application out of order. this is most noticeable when the last couple of pressure values get reversed, so we 'miss' the note-off and get a stuck note.

possibly this could be circumvented by watching frame messages, and if we haven't had an update for N ms, turning the note off.
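The workaround would look something like this watchdog sketch. The class and callback names are mine, and the 20 ms timeout is an assumed value for "N ms":

```python
import time

class TouchWatchdog:
    """Force a note-off for any touch that stops receiving updates,
    covering the case where the real note-off arrived out of order."""

    def __init__(self, note_off, timeout=0.020, clock=time.monotonic):
        self.note_off = note_off   # callback used to silence a voice
        self.timeout = timeout     # seconds without an update before we act
        self.clock = clock
        self.last_seen = {}        # touch id -> time of last update

    def touch_update(self, touch_id):
        """Call on every tch/frame message for this touch."""
        self.last_seen[touch_id] = self.clock()

    def poll(self):
        """Call periodically; kills any touch that has gone quiet."""
        now = self.clock()
        for tid, t in list(self.last_seen.items()):
            if now - t > self.timeout:
                self.note_off(tid)
                del self.last_seen[tid]
```

It's not nice — a brief network stall would also kill held notes — which is why a per-message timestamp/sequence number would make this so much cleaner.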

b) continuous event streams in OSC:
these should be sent at a continuous rate as specified in the Soundplane app (in Hz),
but I'm seeing quite a variation in this, e.g. at 250Hz I see between 175 and 375Hz;
in fairness I see the same behaviour in Max/MSP.
BUT the issue with Reaktor is there seems to be a 'ceiling' at around 300Hz: over this, I still see the data at similar rates to 250Hz.
THIS is not the case in Max, so it's a Reaktor issue.

I'm checking on the NI forums to see if it's a known issue.
really, (a) is the problem; I could work around it, but it's not nice...
it's a pity we don't have a timestamp/seq on the individual messages, as this would make a workaround trivial.
(I know the time is on the OSC bundle, but Reaktor does not expose this)
perhaps an OSC option which enables a seq on the tch messages... less efficient, but useful for some hosts.

thanks @timoka, those are really good examples.
they could very easily be adapted to use OSC; I might give that a go in the next few days.
if so, I will share.
(I've emailed Mark Smart to see if he minds me sharing 'derivative' products)

nice track, you're getting a lot of variety out of Aalto... percussion to pads, great job.

Ah, I think I may have misread (or not looked closely enough at) the t3d code, and assumed the note number was the starting note, not that it was continually changing.
(such is the problem with reading code without being able to run it :))

EDIT: actually, I just looked at my code; I also did continuous pitch, so I'd just 'forgotten' this temporarily :)

that alters the approach a little when converting existing Reaktor instruments
(since they follow a MIDI model: note-on with pitch, then pitchbend), but it's still possible;
you just need to track the voices (=touches) as mentioned.

my plans are more around building my own instruments, so it's not an issue, as these don't need to follow the same MIDI model.

yeah, the tchN comment was not that it would be better the other way, just that it's a bit of a pain in Reaktor, as it is unable to match partial paths or even process the path, so you have to be explicit.
there are of course some use-cases where being able to match a particular voice is useful.


I'm trying to get my head around how the Soundplane software works, and I'd like to understand the principles of how it determines touch x/y/z from the raw matrix.
This is really just 'out of interest', as I love to know how things tick :)

as the source code is open, I can work through that, but I wondered if there is something a bit higher level.

In particular, are there details of the basic maths involved? e.g. what maths techniques are used to go from the matrix (which I assume is a 2D array of pressure values).
(I can then get the specifics from the internet :))

and/or perhaps a max patch that does something similar?

I guess ideally I'd like to try modelling the process in Max, just so I can get a feel for it.

note: I know Randy has spent a lot of time refining the touch data, eliminating noise etc., so I know it won't be as good… but I'm after more of an understanding of the principles involved.
(it's these refinements that make me think the C++ code might make it tricky to see the basic process)


so I assume I'm after peak-detection algorithms for 2D arrays (3D data), and I'm looking for one that's not a brute-force search through the grid for maxima (with respect to neighbours).
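For concreteness, the brute-force baseline I'd be trying to improve on is just this: scan every cell and keep those above a noise threshold that beat all 8 neighbours. The threshold value is an arbitrary placeholder:

```python
def local_maxima(grid, threshold=0.1):
    """Brute-force 2D peak detection on a pressure matrix (list of rows).
    A cell is a peak if it exceeds the threshold and all of its
    (up to 8) neighbours."""
    rows, cols = len(grid), len(grid[0])
    peaks = []
    for r in range(rows):
        for c in range(cols):
            v = grid[r][c]
            if v < threshold:
                continue  # below noise floor
            neighbours = [grid[rr][cc]
                          for rr in range(max(0, r - 1), min(rows, r + 2))
                          for cc in range(max(0, c - 1), min(cols, c + 2))
                          if (rr, cc) != (r, c)]
            if all(v > n for n in neighbours):
                peaks.append((r, c, v))
    return peaks
```

A real tracker would then refine each peak's x/y to sub-cell accuracy (e.g. by interpolating around the maximum) and match peaks frame-to-frame into touches; this sketch is only the raw search step.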

I've also had a quick check of the DIY projects here; I guess part of this may be a starting point. (just need to remove the bit about converting the audio signal, as this is already done in the Soundplane)

Q. is the matrix from OSC completely raw, or has it had the calibration data applied to remove 'noise'? if the former, I might look at the source to see if I can 'optionally' output the latter.

fair enough, will check out the code.

Max: agreed, I just thought it might be easier to visualize what's going on.
anyway, once I've looked through the code I will have a better idea.

yeah, I'm not an expert either, as I've only built very small things too, really to try to understand Reaktor.

as far as I've played with it, there are two things you can do with OSC

  • osc learn
    this basically allows you to take one of the parameters from the message (using an index); as far as I can tell this has to be 0.0-1.0, and it is then automatically scaled by the control
    this is easy to do :)

  • OscReceive / Osc Receive Array

the first is trivial to use: just give it a pattern and specify the number of parameters (=ports), but it's limited to 10 outputs

the second has standard array semantics in Reaktor; I've not used it, but I assume it can do more than 10 parameters... however, t3d always has fewer than 10, so just use OscReceive :)

ok, playing notes into instruments: this is where it gets kind of tricky :)

you have to do 2 things:

  • you have to go through your instrument, find all the MIDI objects and replace them with an OSC receive, e.g. note pitch. (actually, you'd probably be best using one OscReceive and then sending this through the internal messaging of Reaktor, as you will find many MIDI objects are repeated, e.g. you might find a note pitch connected to the oscillator AND to the filter (to do key tracking))

for simple instruments it's straightforward enough; for more complex ones it can be quite difficult to find all the MIDI objects,
partly because they're not prefixed with 'midi' or anything, just called note pitch, pb, etc.

there are a couple of things I find problematic with the t3d spec when using it with Reaktor:

  • the note-off does not send the pitch; this means you need to look it up in some kind of voice array
  • Reaktor expects fixed patterns; /t3d/tch* does not work, so you need to put in a pattern for each of N voices, which is a bit of a pain
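The voice-array lookup for the first point is simple enough to sketch. Remember the pitch per touch at note-on, so the pitchless t3d note-off can be turned into a MIDI-style note-off (names are mine, for illustration):

```python
class VoiceMap:
    """Track pitch per touch id, so a t3d note-off (which carries no
    pitch) can be resolved to the note that must be turned off."""

    def __init__(self):
        self.pitches = {}  # touch id -> last known pitch

    def note_on(self, touch_id, pitch):
        self.pitches[touch_id] = pitch

    def note_off(self, touch_id):
        # returns the pitch to silence, or None for an unknown touch
        return self.pitches.pop(touch_id, None)
```

With continuous pitch, `note_on` would also be called on every pitch update for the touch, so the note-off releases the last sounding pitch rather than the starting one.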

ok, once you get over that, it should work...

the next step brings 'extra fun':
you want to do per-note expression, e.g. pitch bend on individual fingers.
this is something, as I said, I've looked into. it's possible, but it means you need to track voices and then affect only the correct voice... this is possible in Reaktor, as it has a pretty good poly system, but it's not trivial.
(note: you may also have to redesign part of the instrument, depending on how it's using the poly mapping)

personally, my idea is to probably build a simple synth of my own first, get used to doing the above,
and only then look at retrofitting existing instruments, which will be much more complicated.

I think it's a rich avenue of investigation for sure; my only issue is time... as I'm also wanting to do similar things in Max/MSP,
which currently has my focus.

hope the above helps a bit.

looks interesting, I hope to have a play with this next week, when my Soundplane arrives.

I've not tried SC, but it looks good, very compact. a lot of functionality for a relatively small amount of code.

this is an area I hope to be experimenting with soon (though I've quite a few Soundplane projects, so it may take some time :))

what are you trying to do?

  • use for control? e.g. mapping to sliders?

  • use for note input?


should be pretty much the same as doing it with TouchOSC:
basically you will need to set up a zone file which contains X or Y sliders;
then the messages are
/t3d/zonename value

note input

much more effort, and trickier than mapping :)
first, most ensembles are built around MIDI, so you have to find the appropriate MIDI inputs and replace them with OSC.
one of the difficulties is that t3d is touch-focused, whereas MIDI is note-focused.
(this is a problem with the note-off message in particular, as t3d does not send the note value)
it's actually possible to make it work nicely; I experimented with this in the past with the Eigenharp, and the touchId can be used to drive the poly.
(in fairness, it's quite a task to get it working, and many parts will be instrument-specific)

as I said, I've not got my Soundplane yet... but when I do, I will be experimenting.
probably simple control first, and then note input much later.

Randy: perhaps the final touch message should contain the note (but zero x, y, z etc.); this would make a trivial conversion to MIDI possible, one which ignores touchId.
(usually this is not best practice, but for some applications it's 'good enough')

I bought it last night ... very excited :)

I'm sure Randy will be along soon, but I had a little play to see if I could reproduce it,
as I use LPX and couldn't remember any issue.

I'm using LPX on 10.9.4 with Aalto (I could try Kaivo, but I have no reason to believe it's different).

with OSC on my Eigenharp, I don't notice any appreciable latency, so it's unlikely to be above 1-2ms, certainly not 80ms...
(I've used this before with both aalto and kaivo and not had issues)

the OSC latency is not really possible to time, but I tried with MIDI:
I created a MIDI track, used a kick on the 1st beat, then directed that to an audio channel;
that showed no latency, in fact if anything it showed negative latency.
(PDC mismatch?)

I also tested in AULab, and no issue.

I wonder if, for OSC, you have your network set up correctly?
I don't have access to a Soundplane or Yosemite (I keep my music machine on proven releases only), but perhaps it's an issue with the Soundplane software rather than Aalto/Kaivo?

one thing I did find odd with LPX/Aalto:
I got into a state where Aalto was ignoring the first few bars of notes. I know that sounds weird... basically, start the transport and it wasn't until bar 4 that the notes would come through as sound. I replaced Aalto with another plugin to check it wasn't me/LPX being stupid... and it immediately worked. I then put Aalto back, and it was fine. (so obviously a fresh Aalto instance fixed it)
The only thing I wonder about is that originally, on the 'bad instance', I'd been using OSC and then switched to MIDI... Aalto did switch (OSC message gone etc.), but I'm wondering if it was in an odd state.

@wanterkeelt, hard to say if your performance is 'normal' without machine specs/operating system etc.,
but I found Live 9.1.6 was not really any different to other hosts in practical use.
(LPX, Vienna Ensemble Pro, Bitwig, Max)

I'm running Mac OSX 10.9.4, Live 9.1.6 on an i5 2.9GHz (so not that powerful); on the default patch Live shows 9%, and on Koto 40% (8 voices).
bear in mind the comments in Randy's article about patches being 'always' active.

as for Live…
it's 'well known' that Live's CPU meter is not a CPU reading at all… it's the percentage of time required to process the audio buffer, i.e. 25% means it used 25% of the maximum time it could take to process a buffer.

this is a reasonable approach, but it's rather subject to external factors, like the operating system preempting it, and can be a little misleading with multiple cores/CPUs.
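To make the numbers concrete, here is the arithmetic under assumed settings (a 256-sample buffer at 44.1kHz; your actual buffer size will vary):

```python
# What Live's meter measures: time used vs. time available per audio buffer.
sample_rate = 44100   # Hz (assumed)
buffer_size = 256     # samples (assumed)

# time available to process one buffer before the audio glitches
buffer_ms = buffer_size / sample_rate * 1000
print(round(buffer_ms, 2))       # ~5.8 ms

# a 25% meter reading means roughly this much processing time per buffer
processing_ms = 0.25 * buffer_ms
print(round(processing_ms, 2))   # ~1.45 ms
```

So the same plugin load reads 'higher' at smaller buffer sizes, without the CPU doing any more total work — which is one reason the meter can mislead.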

one thing worth noting though: it's worth getting to know how a DAW handles plugins with regard to threads (= distribution over cores).
e.g. in Live, do not put FX on the same track as a heavy-CPU plugin, as they will be put in the same thread; instead, put the FX on a return track.

Randy, a question… if you bypass Kaivo/Aalto, does it stop the oscillators and all other processing? I ask because I noticed it keeps the OSC connection open.
It would be nice if, when the plugin is bypassed, everything stopped, including closing the OSC server port.