The ability to run the plugins with both MIDI and OSC (using Soundplane, etc) is essential. PLEASE!
Can you describe how you want to use this in more detail? It doesn't make sense to get notes from MIDI + OSC at the same time because they would step on each other and create stuck notes. So I guess you mean other control data. What kinds are you interested in? Thanks
A small apology for this somewhat redundant post, as I've written about it in the past.
Primary Case: a MIDI sequence feeding pitch and velocity data to the KEY module. While this sequence is running, the Z, DY, X and Y parameters of the Soundplane can gesturally control the plugin without having to worry about influencing the pitch and gating. This has been critically missing functionality imo. While the sequencer can sort of approximate this paradigm, it's not as flexible as working with a MIDI arrangement while focusing on gesture.
Secondary Case: either gate/velocity OR pitch information from the MIDI running while working with gestures. We might want the gates coming from a MIDI sequence while using the Soundplane for pitch information. Or maybe we want the pitch coming from a MIDI sequence while using the Soundplane for gate/velocity information.
Decoupling these control paradigms would open up performance avenues. I think the core idea is cherry-picking where we want pitch and gating information to come from.
Other users please chime in if you'd like to see this or have other approaches in mind.
You know the Soundplane Zones can send MIDI controllers out, right? So Aalto should be able to take notes from a MIDI sequence and controls from the Soundplane at the same time, just as you suggest above, by using MIDI.
Also you can send out rhythmic patterns from the sequencer and at the same time use the Soundplane for pitch, in either OSC or MIDI mode. Unlike with most software synthesizers, pitch and gate are patchable in the matrix. You are free to use any signal you want to control oscillator pitch. So it's pretty flexible already.
I know about the zones. How could we exploit the multi-touch and polyphonic capabilities of the voice module in OSC with MIDI zones, though? If Aalto is set to 2 voices and the zone has a CC slider mapped to timbre, then that CC controls timbre for both voices, which isn't the same as using OSC, where two touches along the y axis can independently modulate the timbre parameter for each voice.
In this instance, it'd be useful to source the pitch/gate from a MIDI arrangement. The sequencer could certainly still control the gates and pitch, but its small number of steps is limiting compared to the flexibility of working with MIDI arrangements in the DAW.
You could use MIDI MPE, sending out a control on one channel to affect the pitch of just one voice. Then have a few of these controls for different voices.
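To make the per-voice idea concrete, here's a minimal sketch in raw MIDI bytes (not Aalto-specific; the channel and controller assignments are just illustrative, with CC 74 used as MPE's conventional per-note "timbre" controller). The point is that in MPE each voice lives on its own MIDI channel, so a controller sent on that channel affects only that voice:

```python
# Sketch of the MPE idea: each voice gets its own MIDI channel, so a
# per-channel controller (e.g. CC 74, the MPE "timbre" control) moves
# only that voice. Channels here are 0-based status-byte values.

def note_on(channel, note, velocity=100):
    # Status byte 0x90 | channel starts a note on that channel.
    return bytes([0x90 | channel, note, velocity])

def cc(channel, controller, value):
    # Status byte 0xB0 | channel sends a controller on that channel only.
    return bytes([0xB0 | channel, controller, value])

# Two independent voices, each with its own timbre value:
voice1 = note_on(1, 60) + cc(1, 74, 20)   # member channel 2 (1-based)
voice2 = note_on(2, 64) + cc(2, 74, 110)  # member channel 3 (1-based)
```

With plain (non-MPE) MIDI, that CC 74 would arrive on one channel and move the timbre of every voice at once, which is exactly the limitation described above.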
It seems like you are starting with two sources of data, one from say a sequencer and one from a controller. In my view you want to first merge the sources into one, then send them to Aalto. You have some particular vision in mind for control and to get to it you are probably going to need control over that merging process.
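The merging step could be sketched like this (a hypothetical filter, not anything built into Aalto): keep only note messages from the sequencer stream and only controller messages from the Soundplane stream, then hand the combined stream to the synth. Messages are modeled as raw MIDI byte triples:

```python
# Minimal sketch of merging two MIDI sources before they reach the synth:
# notes come from the sequencer, controls come from the Soundplane.
# Messages are (status, data1, data2) triples; timing is ignored here.

def is_note(msg):
    # Note-off (0x8n) or note-on (0x9n) status byte.
    return msg[0] & 0xF0 in (0x80, 0x90)

def is_controller(msg):
    # Control-change (0xBn) status byte.
    return msg[0] & 0xF0 == 0xB0

def merge(sequencer_msgs, soundplane_msgs):
    """Keep notes from one source and controllers from the other."""
    merged = [m for m in sequencer_msgs if is_note(m)]
    merged += [m for m in soundplane_msgs if is_controller(m)]
    return merged

seq = [(0x90, 60, 100), (0xB0, 1, 50)]    # a note-on plus a stray CC
plane = [(0xB0, 74, 90), (0x90, 65, 80)]  # a CC plus a stray note
print(merge(seq, plane))  # → [(144, 60, 100), (176, 74, 90)]
```

A real merger would also need to preserve message order in time, but the cherry-picking itself is just this kind of filtering.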
I think of OSC and MIDI as different languages, French and English, say. I have one translator device that allows me to tune into either language and understand it perfectly. But if I bypass the translator and try to listen to input in both languages at once, it's very confusing. This is a good description of the architecture going on inside the plugins.
I haven't looked into MIDI MPE for this application. I suppose now is the time! :)
Thanks for the info.