A MIDI-compatible gesture synthesizer is provided for use with a conventional music synthesizer to create realistic-sounding musical gestures. The gesture synthesizer is responsive to one or more user-controllable input signals, and includes several transfer function models that may be user-selected. One transfer function models properties of muscles using Hill's force-velocity equation to describe the non-linearity of muscle activation. A second transfer function models the cyclic oscillation produced by opposing effects of two force sources, representing the cyclic oppositional action of muscle systems. A third transfer function emulates the response of muscles to internal electrical impulses. A fourth transfer function provides a model representing and altering the virtual trajectory of gestures. A fifth transfer function models visco-elastic properties of muscle response to simulated loads. The gesture synthesizer outputs MIDI-compatible continuous pitch data, tone volume and tone timbre information. The continuous pitch data is combined with discrete pitch data provided by the discrete pitch generator within the conventional synthesizer, and the combined signal is input to a tone generator, along with the tone volume and tone timbre information. The tone generator outputs tones that are user-controllable in real time during performance of a musical gesture.
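
The abstract's first transfer function is Hill's force-velocity relation, which may be written (F + a)(v + b) = (F0 + a)b. As a minimal sketch only, not the patent's implementation, the Python below solves that relation for force at a given shortening velocity; the constants a, b and f0 are illustrative values, not taken from the patent.

```python
# Sketch of Hill's force-velocity relation: (F + a)(v + b) = (F0 + a) * b.
# Solving for F gives the hyperbolic drop in force as contraction speeds up,
# the non-linearity of muscle activation named in the abstract.
# Constants are illustrative only (normalized units, not from the patent).

def hill_force(v, f0=100.0, a=25.0, b=0.25):
    """Force of a muscle shortening at velocity v."""
    return b * (f0 + a) / (v + b) - a

for v in (0.0, 0.1, 0.5, 1.0):
    print(f"v={v:.1f}  F={hill_force(v):6.1f}")
# v=0.0 gives the isometric force F0; with these constants F reaches 0 at v=1.0.
```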

Patent: RE37654
Priority: Jan 22, 1996
Filed: Jun 13, 2000
Issued: Apr 16, 2002
Expiry: Jan 21, 2017
Entity: Small
Status: EXPIRED
33. A method for providing gesture synthesis for an electronic sound device, comprising the steps of:
(a) synthesizing in response to control data, a first transfer function based upon a muscle emulation model, which includes at least one module selected from the group consisting of (i) a model implementing Hill's equation, (ii) a model representing cyclic opposing effects of two force sources representing cyclic opposition action of a muscular system, (iii) a model representing muscular stimulus response to internal electrical impulses, and (iv) a model representing visco-elastic properties of muscle pairs and elasticity of simulated loads, and that generates synthesized control data responsive to said control data; and
(b) inputting said synthesized control data to said sound device;
wherein said sound device outputs a sound signal including simulated gestures.
1. A gesture synthesizer (gs) for use with an electronic sound synthesizer (ESS), which electronic sound synthesizer provides at least a tone signal responsive to an ESS user-input device that permits user selection of discrete pitches, said gesture synthesizer coupleable to at least a first gs user-input control device, said gesture synthesizer comprising:
first detection means, coupled to said first gs user-input control device, for generating control data representing user-operation of said first gs user-input control device;
at least a first gesture synthesis means, coupled to said first detection means, for synthesizing a desired transfer function simulating muscle action, said transfer function represented by at least one model selected from a group consisting of (i) a model implementing Hill's equation, (ii) a model representing cyclic opposing effects of two force sources representing cyclic opposition action of a muscular system, (iii) a model representing muscular stimulus response to internal electrical impulses, (iv) a model representing visco-elastic properties of muscle pairs and elasticity of simulated loads, and (v) a model altering virtual trajectory of gesture created with said gesture synthesizer;
said first gesture synthesis means outputting synthesized control data responsive to said control data such that said tone signal provided by said electronic sound synthesizer is responsive to and modifiable by said first gs user-input control device.
2. The gesture synthesizer of claim 1, wherein said gesture synthesizer has at least one characteristic selected from a group consisting of (i) said gesture synthesis means accepts MIDI-compatible data, (ii) said gesture synthesis means accepts MIDI-compatible data from said gs user-input control device, (iii) said gesture synthesis means outputs MIDI-compatible data, and (iv) said gesture synthesis means accepts MIDI-compatible data as input and outputs MIDI-compatible data.
3. The gesture synthesizer of claim 1, wherein:
said detection means detects forward and reverse direction deflection of said first gs user-input control device and outputs logic data in accordance with detected said direction;
wherein said first gesture synthesis means includes:
means for generating clock pulses whose individual amplitudes are modulatable according to an amplitude modulation defining an amplitude envelope, said amplitude envelope having a forward portion and a reverse portion wherein at least one of said portions is varied according to a parameter selected from a group consisting of (i) a forward direction of a gs user-input control device, (ii) modulation data from a user-selected modulation source whose modulation data has a direction proportional to said logic data, (iii) a combination of control data and modulation data, and (iv) said synthesized control data; and
at least one accumulator selected from a group consisting of (i) a forward accumulator for controllably combining amplitudes of said clock pulses responsive to digital numbers representing said forward portion of said amplitude envelope, and (ii) a reverse accumulator for controllably combining amplitudes of said clock pulses responsive to digital numbers representing said reverse portion of said amplitude envelope.
4. The gesture synthesizer of claim 1, wherein said gesture synthesizer further includes:
at least a second synthesis means coupled to said first detection means; and
an interpolation means that controllably combines synthesized control data from said first and second synthesis means, said interpolation means including at least one means selected from the group consisting of (i) a switching means that alternately switches between synthesized control data from said first and second synthesis means, and (ii) a crossfade means that crossfades between synthesized control data from said first and second synthesis means.
5. The gesture synthesizer of claim 1, wherein said first detection means detects forward direction and reverse direction deflection of said first gs user-input control device, and outputs first and second logic data corresponding to detected said forward and reverse direction; and
said first gesture synthesis means further including:
switching means for bifurcating said control data so as to provide forward control data and reverse control data; and
persistently available conversion means for converting at least one of said forward control data and said reverse control data according to a conversion characteristic selected from a group consisting of (i) a conversion characteristic generated from a muscle emulation model, (ii) a conversion characteristic sampled from a musical instrument, (iii) a conversion characteristic sampled from a MIDI-compatible controller, (iv) a conversion characteristic emulating sampled data, and (v) a conversion characteristic emulating musical gestures.
6. The gesture synthesizer of claim 1, wherein:
said first detection means detects forward direction and reverse direction deflection of said first gs user-input control device, and outputs first and second logic data corresponding to detected said forward and reverse direction; and
said first gesture synthesis means further includes:
a source of modulation data responsive at least in part to a direction of said first gs user-input control device, said direction being determined by said first and second logic data.
7. The gesture synthesizer of claim 6, wherein said modulation data includes data selected from a group consisting of (i) a control data signal, (ii) peak amplitude of a control data signal, (iii) said synthesized control data, (iv) substantially constant said modulation data, (v) said modulation data includes a first derivative with respect to time of said positional control data, and (vi) said modulation data includes a second derivative with respect to time of said positional control data.
8. The gesture synthesizer of claim 1, wherein said gesture synthesizer further includes:
a second gs user-input control device;
second detection means, coupled to said second gs user-input control device, for generating control data representing user-operation of said second gs user-input control device;
said second detection means detecting forward and reverse direction of said second gs user-input control device, and outputting first and second logic data corresponding to detected said forward and reverse direction; and
a source of modulation data, responsive at least in part to said direction of said second gs user-input control device, said direction being determined by said first and second logic data.
9. The gesture synthesizer of claim 1, wherein:
said first detection means detects forward and reverse direction deflection of said first gs user-input control device, and outputs first and second logic data corresponding to detected said forward and reverse direction;
said first gesture synthesis means further includes:
a source of modulation data, responsive at least in part to said direction of said first gs user-input control device, said direction being determined by said first and second logic data;
switching means for bifurcating said control data so as to provide forward control data and reverse control data;
means for scaling amplitude of said forward control data proportionally to said modulation data; and
means for scaling amplitude of said reverse control data proportionally to said modulation data.
10. The gesture synthesizer of claim 1, wherein said first detection means detects forward and reverse direction of said first gs user-input control device, and outputs first and second logic data corresponding to detected said forward and reverse direction;
wherein said first gesture synthesis means further includes:
a source of modulation data responsive at least in part to a direction of said first gs user-input control device, said direction being determined by said first and second logic data;
a first and second source of control data signals, responsive at least in part to said first gs user-input control device;
means for delaying said first control data signal by an amount of delay determined at least in part by data from said source of modulation data; and
means for controllably combining at least said second control data signal with delayed said first control data.
11. The gesture synthesizer of claim 10, wherein said first gesture synthesis means further includes persistently available conversion means for converting said synthesized control data according to a conversion characteristic selected from a group consisting of (i) a conversion characteristic generated from a muscle emulation model, (ii) a conversion characteristic sampled from a musical instrument, (iii) a conversion characteristic sampled from a MIDI-compatible controller, (iv) a conversion characteristic emulating sampled data, and (v) a conversion characteristic emulating musical gestures.
12. The gesture synthesizer of claim 1, wherein said first detection means detects forward direction and reverse direction deflection of said first gs user-input control device, and outputs first and second logic data corresponding to detected said forward and reverse direction; and
said first gesture synthesis means further includes a source of modulation data responsive at least in part to detected direction of said first gs user-input control device, said direction being determined by said first and second logic data; and
means for combining and using said modulation data to vary at least one parameter selected from a group consisting of (i) peak amplitude of a time-delayed control data signal, (ii) delay time of a time-delayed control data, (iii) salience-curvature applied to a control data signal, (iv) width of a shaping window applied to control data signal, (v) height of a shaping window applied to control data signal, (vi) start point of a shaping window applied to control data signal, and (vii) stop point of a shaping window applied to control data signal.
13. The gesture synthesizer of claim 1, wherein said first detection means detects forward and reverse direction of said first gs user-input control device; wherein:
said first gesture synthesis means includes bifurcation means for separately directing, according to detected said forward and reverse direction of said first gs user-input control device, data selected from a group consisting of (i) control data, and (ii) synthesized control data;
said gesture synthesizer further including bifurcated gesture synthesis means for generating forward synthesized control data by activating a first half of simulated said muscle action responsive to said forward control data, and for generating reverse synthesized control data by activating a second half of simulated said muscle action responsive to said reverse control data.
14. The gesture synthesizer of claim 13, wherein:
said bifurcation means includes at least one comparison means selected from the group consisting of (i) threshold detection means for comparing control data values to a lower threshold and outputting a first logic signal when said lower threshold is traversed, and for comparing control data values to an upper threshold and outputting a second logic signal when said upper threshold is traversed, and (ii) means for comparing subsequent values of said control data, and outputting first and second logic data in accordance with increasing and decreasing values of said control data,
said gs further including:
bipolar switching means for generating alternating switching logic data responsive to logic signal from said threshold detection means;
wherein said bifurcated gesture synthesis means includes
at least one switching means selected from a group consisting of (i) a signal switch for bifurcating said control data in response to said alternating switching logic data from said bipolar switching means, and (ii) a signal gate for selecting forward or reverse direction of said synthesized control data in response to said alternating switching logic data.
15. The gesture synthesizer of claim 13, wherein said bifurcated gesture synthesis means further includes at least one gesture synthesis module selected from a group consisting of (i) a salience module, (ii) a time oscillator module, (iii) a flex filter module, (iv) a waveshaper module, (v) a bifurcated scale module, and (vi) a delay module.
16. The gesture synthesizer of claim 1, wherein:
said detection means detects forward and reverse direction deflection of said first gs user-input control device and outputs logic data in accordance with detected said direction;
said first gesture synthesis means includes:
means for generating clock pulses, said clock pulses having an amplitude magnitude that varies according to a filter modulation function having at least one characteristic selected from a group consisting of (i) said modulation function has salience-curvature, (ii) said modulation function is varied in accordance with control data, (iii) said modulation function is varied in accordance with modulation data responsive at least in part to a direction of said first gs user-input control device, said direction being determined by said first and second logic data, (iv) said modulation function is varied responsive to a combination of control data and modulation data, and (v) said modulation function is varied responsive to said synthesized control data; and
a unit that provides forward synthesized control data functionally proportional to summed amplitude of each said clock pulse.
17. The gesture synthesizer of claim 1, wherein:
said detection means detects forward and reverse direction of said first gs user-input control device and outputs logic data in accordance with detected said direction;
said first gesture synthesis means includes:
means for generating clock pulses, said clock pulses having an amplitude magnitude that varies with at least one characteristic selected from a group consisting of (i) said amplitude is substantially constant, (ii) said amplitude is varied responsive to control data, (iii) said amplitude is varied responsive to modulation data, (iv) said amplitude is varied responsive to said logic data, (v) said amplitude corresponds to at least a partial summation of at least two successive said clock pulses, and (vi) said amplitude is varied responsive to a combination of control data and modulation data;
at least one accumulator selected from a group consisting of (i) a forward accumulator that sums said clock pulses in response to said logic data and provides forward synthesized control data responsive to a running sum of said clock pulses, and (ii) a reverse accumulator that subtracts said clock pulses in response to said logic data and provides reverse synthesized control data responsive to a running subtraction of said clock pulses.
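
As a hedged illustration of the accumulator structure recited above (the function name and pulse values are invented for the example, not taken from the patent), pulse amplitudes are summed during forward deflection and subtracted during reverse deflection, with the running sum clamped the way the threshold detectors of claim 20 would stop it:

```python
# Sketch of claim 17's forward and reverse accumulators. direction_logic
# holds +1 (forward deflection) or -1 (reverse deflection) per pulse.

def accumulate(pulse_amps, direction_logic, lo=0, hi=127):
    """Yield a running sum of pulse amplitudes, clamped to [lo, hi]."""
    total = 0
    for amp, sign in zip(pulse_amps, direction_logic):
        total = max(lo, min(hi, total + sign * amp))
        yield total

pulses = [16] * 10
logic = [+1] * 6 + [-1] * 4             # forward deflection, then reverse
print(list(accumulate(pulses, logic)))  # [16, 32, 48, 64, 80, 96, 80, 64, 48, 32]
```
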
18. The gesture synthesizer of claim 17, wherein said first gesture synthesis means further includes persistently available conversion means for converting said synthesized control data according to a conversion characteristic selected from a group consisting of (i) a muscle emulation model generated conversion characteristic, (ii) a conversion characteristic sampled from a musical instrument, (iii) a conversion characteristic sampled from a MIDI-compatible controller, (iv) a conversion characteristic that emulates sampled data, and (v) a conversion characteristic that emulates musical gestures.
19. The gesture synthesizer of claim 18, wherein said first gesture synthesis means further includes:
at least one scaling means selected from a group consisting of (i) means for scaling amplitude of said forward synthesized control data proportional to modulation data, and (ii) means for scaling amplitude of said reverse synthesized control data proportional to modulation data.
20. The gesture synthesizer of claim 17, wherein said first gesture synthesis means further includes at least one threshold detector selected from a group consisting of (i) an upper threshold detector, coupled to an output of said accumulator, that deactivates said summation when said upper threshold is traversed, and (ii) a lower threshold detector, coupled to an output of said accumulator, that deactivates said subtraction when said lower threshold is traversed.
21. The gesture synthesizer of claim 17, wherein said first gesture synthesis means further includes:
at least one scaling means selected from a group consisting of (i) means for scaling amplitude of said forward synthesized control data proportional to modulation data, and (ii) means for scaling amplitude of said reverse synthesized control data proportional to modulation data.
22. The gesture synthesizer of claim 17, wherein said first gesture synthesis means further includes at least one module selected from the group consisting of (i) a salience module, (ii) a time oscillator module, (iii) a flex filter module, (iv) a waveshaper module, (v) a scale module, and (vi) a delay module.
23. The gesture synthesizer of claim 1, wherein said gesture synthesizer is implemented in a manner selected from a group consisting of (i) said gesture synthesizer is included as part of a stand-alone musical instrument, (ii) said gesture synthesizer is a stand-alone electromechanical device, (iii) said gesture synthesizer is implemented on a machine-readable memory device for use with a host system, and (iv) said gesture synthesizer is implemented on a magnetic storage medium for use in a host system.
24. The gesture synthesizer of claim 1, wherein:
said detection means detects forward and reverse direction deflection of said first gs user-input control device and outputs logic data in accordance with detected said direction;
wherein said first gesture synthesis means includes:
means for generating clock pulses;
envelope generation means for generating, in response to data selected from a group consisting of (i) control data and (ii) modulation data, a time envelope representing a period of said clock pulses, wherein said time envelope includes at least one of an attack portion and a release portion;
period modulation means for modifying said period responsive to a signal selected from a group consisting of (i) said time envelope, (ii) said attack portion of said time envelope, (iii) said release portion of said time envelope, and (iv) modulation data from a user-selected modulation source;
at least one counter selected from a group consisting of (i) a forward pulse counter that increments said clock pulses and provides forward synthesized control data responsive to a running count of said clock pulses, and (ii) a reverse pulse counter that decrements said clock pulses and provides reverse synthesized control data responsive to a running count of said clock pulses.
25. The gesture synthesizer of claim 24, wherein said first gesture synthesis means further includes at least one amplitude modulation means for generating an amplitude envelope comprising a series of digital numbers responsive to pulse data selected from the group consisting of (i) forward pulse data and (ii) reverse pulse data, wherein said amplitude envelope has at least one characteristic selected from the group consisting of (i) said amplitude envelope has salience curvature, (ii) said amplitude envelope is varied according to control data, (iii) said amplitude envelope is varied according to modulation data, and (iv) said amplitude envelope is varied according to said synthesized control data.
26. The gesture synthesizer of claim 24, wherein said first gesture synthesis means further includes persistently available conversion means for converting said synthesized control data according to a conversion characteristic selected from the group consisting of (i) a muscle emulation model generated conversion characteristic, (ii) a conversion characteristic sampled from an actual musical instrument, (iii) a conversion characteristic sampled from a MIDI-compatible controller, (iv) a conversion characteristic that emulates sampled data, and (v) a conversion characteristic that emulates musical gestures.
27. The gesture synthesizer of claim 24, wherein said first gesture synthesis means further includes at least one threshold detector selected from a group consisting of (i) an upper threshold detector, coupled to an output of said pulse counter, that deactivates said incrementation when said upper threshold is traversed, and (ii) a lower threshold detector, coupled to an output of said pulse counter, that deactivates said decrementation when said lower threshold is traversed.
28. The gesture synthesizer of claim 24, wherein said first gesture synthesis means further includes:
at least one scaling means selected from a group consisting of (i) means for scaling amplitude of said forward pulse data proportional to modulation data, and (ii) means for scaling amplitude of said reverse pulse data proportional to modulation data.
29. The gesture synthesizer of claim 24, wherein said first gesture synthesis means further includes at least one means for combining and using modulation data to vary at least one parameter selected from a group consisting of (i) peak amplitude of time-delayed said synthesized control data, (ii) delay time of a time-delayed said synthesized control data, (iii) salience-curvature applied to said synthesized control data, (iv) width of a shaping window applied to said synthesized control data, (v) height of a shaping window applied to said synthesized control data, (vi) start point of a shaping window applied to said synthesized control data, and (vii) stop point of a shaping window applied to said synthesized control data.
30. The method of claim 33, wherein said method is embodied in a medium selected from the group consisting of (i) said method is part of a stand-alone musical instrument, (ii) said method is implemented in a stand-alone electromechanical device, (iii) said method is stored on a machine-readable memory device for use with a host system, and (iv) said method is stored on a magnetic storage medium for use in a host system.
31. The gesture synthesizer of claim 1, wherein said synthesis means includes at least a first and second threshold detection means, for detecting at least one of the group consisting of (i) motion of control data, (ii) velocity of control data, (iii) acceleration of control data, (iv) motion of synthesized control data, (v) velocity of synthesized control data, (vi) acceleration of control data, and (vii) time between said first and second detection, wherein said first and second threshold detection means output logic data in accordance with said first and second detection; and
at least a first and second clock pulse generation means respectively triggered by logic data from said first and second threshold detection means.
32. The gesture synthesizer of claim 1, wherein said synthesis means further includes:
differentiation means for determining the first derivative with respect to time of said control data;
zero detection means for detecting when said first derivative with respect to time is zero and outputting logic data upon said zero detection;
wherein said logic data deactivates said gesture synthesis means.
34. The method of claim 33, wherein step (a) includes providing said gesture synthesis with at least one characteristic selected from a group consisting of (i) control data input to said method is MIDI-compatible, (ii) synthesized control data output by said method is MIDI-compatible, and (iii) control data input to said method is MIDI-compatible and synthesized control data output by said method is MIDI-compatible.
35. The method of claim 33 that further includes the steps of:
(a) synthesizing in response to control data, a second transfer function based upon a muscle emulation model, and generating synthesized control data responsive to said control data;
(b) controllably combining synthesized control data from said first and second transfer functions by at least one method selected from the group consisting of (i) alternately switching between synthesized control data generated by said first and second transfer functions, and (ii) continuously crossfading between synthesized control data generated by said first and second transfer functions.
36. The method of claim 33, wherein step (a) includes:
detecting forward direction and reverse direction of said control data;
outputting first and second logic data corresponding to detected said forward and reverse direction;
bifurcating forward direction and reverse direction of said control data responsive to said first and second logic data; and
converting at least one of forward direction and reverse direction control data according to a conversion characteristic selected from the group consisting of (i) a muscle emulation model generated conversion characteristic, (ii) a conversion characteristic sampled from an actual musical instrument, (iii) a conversion characteristic sampled from a MIDI-compatible controller, (iv) a conversion characteristic that emulates sampled data, and (v) a conversion characteristic that emulates musical gestures.
37. The method of claim 33, wherein step (a) includes:
detecting forward direction and reverse direction of said control data;
outputting first and second logic data corresponding to detected said forward and reverse direction; and
modulating said control data according to modulation data responsive at least in part to a direction of said control data, said direction being determined by said first and second logic data.
38. The method of claim 37, wherein said modulation data includes data selected from a group consisting of (i) a control data signal, (ii) peak amplitude of a control data signal, (iii) said synthesized control data, (iv) substantially constant said modulation data, (v) said modulation data includes a first derivative with respect to time of said positional control data, and (vi) said modulation data includes a second derivative with respect to time of said positional control data.
39. The method of claim 33, wherein step (a) includes:
detecting forward and reverse direction of additional control data;
outputting first and second logic data in accordance with detected said forward and reverse direction;
bifurcating said forward and reverse direction of said additional control data responsive to said first and second logic data; and
modulating said synthesized control data responsive at least in part to said forward and reverse direction of said additional control data.
40. The method of claim 33, wherein step (a) includes:
detecting forward and reverse direction of said control data;
outputting first and second logic data corresponding to detected said forward and reverse direction;
bifurcating said control data so as to provide forward control data and reverse control data;
scaling amplitude of said forward control data proportionally to modulation data responsive at least in part to a direction of said control data, said direction being determined by said first and second logic data; and
scaling amplitude of said reverse control data proportionally to modulation data responsive at least in part to a direction of said control data, said direction being determined by said first and second logic data.
41. The method of claim 33, wherein step (a) includes:
detecting forward and reverse direction of said control data;
outputting first and second logic data corresponding to detected said forward and reverse direction;
delaying said control data by an amount of delay determined at least in part by modulation data responsive at least in part to a direction of said control data, said direction being determined by said first and second logic data; and
controllably combining additional control data with delayed said control data.
42. The method of claim 41, wherein step (a) further includes the step of:
converting said synthesized control data according to a conversion characteristic selected from a group consisting of (i) a conversion characteristic generated from a muscle emulation model, (ii) a conversion characteristic sampled from a musical instrument, (iii) a conversion characteristic sampled from a MIDI-compatible controller, (iv) a conversion characteristic emulating sampled data, and (v) a conversion characteristic emulating musical gestures.
43. The method of claim 33, wherein step (a) includes:
detecting forward and reverse direction of said control data;
outputting first and second logic data corresponding to detected said forward and reverse direction; and
combining and using modulation data to vary at least one parameter selected from a group consisting of (i) peak amplitude of a time-delayed control data signal, (ii) delay time of a time-delayed control data, (iii) salience-curvature applied to a control data signal, (iv) width of a shaping window applied to control data signal, (v) height of a shaping window applied to control data signal, (vi) start point of a shaping window applied to control data signal, and (vii) stop point of a shaping window applied to control data signal, said modulation data being responsive at least in part to detected direction of said control data, said direction being determined by said first and second logic data.
44. The method of claim 33, wherein step (a) includes:
providing control data having first and second logic data corresponding to forward and reverse direction of said control data;
generating clock pulses in which said clock pulses have an amplitude magnitude that varies according to a filter modulation function having at least one characteristic selected from the group consisting of (i) said filter modulation function has salience-curvature, (ii) said filter modulation function is varied in accordance with control data, (iii) said filter modulation function is varied in accordance with modulation data, wherein said modulation data is responsive at least in part to said direction of said first user-input control device, said direction being determined by said first and second logic data, (iv) said filter modulation function is varied responsive to a combination of control data and modulation data, and (v) said filter modulation function is varied responsive to said synthesized control data; and
at least one step selected from a group consisting of (i) summing amplitudes of said clock pulses responsive to said first logic data to provide forward synthesized control data responsive to a running sum, and (ii) subtracting said clock pulses to provide reverse synthesized control data responsive to a running subtraction.
45. The method of claim 33, wherein step (a) includes:
providing control data having first and second logic data corresponding to forward and reverse direction of said control data;
generating clock pulses;
accumulating clock pulses upon receipt of said first logic data and providing forward synthesized control data responsive to said accumulation; and
accumulating negative clock pulses, upon receipt of said second logic data, from an initial data value selected from the group consisting of (i) said data value is a present accumulation and (ii) said data value is a maximum increment, and providing reverse synthesized control data responsive to said accumulation;
wherein said clock pulses have an amplitude magnitude having at least one characteristic selected from the group consisting of (i) said amplitude is substantially constant, (ii) said amplitude is varied responsive to control data, (iii) said amplitude is varied responsive to modulation data, (iv) said amplitude is varied responsive to said logic data, (v) said amplitude corresponds to at least a partial summation of at least two successive said clock pulses, and (vi) said amplitude is varied responsive to a combination of control data and modulation data.
46. The method of claim 45, wherein step (a) further includes at least one step selected from the group consisting of (i) scaling amplitude of said forward synthesized control data proportional to modulation data, and (ii) scaling amplitude of said reverse synthesized control data proportional to modulation data.
47. The method of claim 45, wherein step (a) further includes the step of:
converting said synthesized control data according to a conversion characteristic selected from the group consisting of (i) a muscle emulation model generated conversion characteristic, (ii) a conversion characteristic sampled from an actual musical instrument, (iii) a conversion characteristic sampled from a MIDI-compatible controller, (iv) a conversion characteristic that emulates sampled data, and (v) a conversion characteristic that emulates musical gestures.
48. The method of claim 47, wherein step (a) further includes at least one step selected from a group consisting of (i) scaling amplitude of said forward synthesized control data proportional to modulation data, and (ii) scaling amplitude of said reverse synthesized control data proportional to modulation data.
49. The method of claim 45, wherein step (a) further includes gesture synthesis performed by at least one module selected from the group consisting of (i) a salience module, (ii) a time oscillator module, (iii) a flex filter module, (iv) a waveshaper module, (v) a scale module, and (vi) a delay module.
50. The method of claim 33, wherein step (a) includes providing control data having first and second logic data corresponding to forward and reverse direction of said control data;
generating clock pulses and modulating individual clock pulse amplitudes according to an amplitude modulation defining an amplitude envelope, said amplitude envelope having a forward portion and a reverse portion wherein at least one of said portions is varied according to a parameter selected from a group consisting of (i) a forward direction of a gs user-input control device, (ii) modulation data from a user-selected modulation source whose modulation data has a direction proportional to said logic data, (iii) a combination of control data and modulation data, and (iv) said synthesized control data; and
controllably combining amplitudes of said clock pulses responsive to digital numbers representing at least one said portion of said amplitude envelope selected from the group consisting of (i) said forward portion, and (ii) said reverse portion.
51. The method of claim 33, wherein step (a) includes providing control data having first and second logic data corresponding to forward and reverse direction of said control data;
generating clock pulses;
incrementing said clock pulses upon receipt of said first logic data and providing forward synthesized control data responsive to a running count;
decrementing said clock pulses upon receipt of said second logic data and providing reverse synthesized control data responsive to a running count;
generating in response to data selected from the group consisting of (i) control data and (ii) modulation data, a time envelope comprising a series of digital numbers representing period of said clock pulses, said time envelope including an attack portion and a release portion, said attack portion being responsive to at least one parameter selected from the group consisting of (i) a forward direction of a user-input control device and (ii) modulation data from a user-selected modulation source, and said release portion being responsive to at least one parameter selected from the group consisting of (i) a reverse direction of a user-input control device and (ii) modulation data from a user-selected modulation source; and
modifying said period of said clock pulses responsive to a signal selected from the group consisting of (i) digital numbers representing said time envelope, (ii) digital numbers representing said attack portion of said time envelope, (iii) digital numbers representing said release portion of said time envelope, and (iv) modulation data from a user-selected modulation source.
52. The method of claim 51, wherein step (a) further includes at least one step selected from a group consisting of (i) scaling amplitude of said forward synthesized control data proportional to modulation data, and (ii) scaling amplitude of said reverse synthesized control data proportional to modulation data.
53. The method of claim 51, wherein step (a) further includes the step of:
converting said synthesized control data according to a conversion characteristic selected from the group consisting of (i) a muscle emulation model generated conversion characteristic, (ii) a conversion characteristic sampled from an actual musical instrument, (iii) a conversion characteristic sampled from a MIDI-compatible controller, (iv) a conversion characteristic that emulates sampled data, and (v) a conversion characteristic that emulates musical gestures.
54. The method of claim 53, wherein step (a) further includes at least one step selected from a group consisting of (i) scaling amplitude of said forward synthesized control data proportional to modulation data, and (ii) scaling amplitude of said reverse synthesized control data proportional to modulation data.
55. The method of claim 51, wherein step (a) further includes:
combining and using modulation data to vary at least one parameter selected from a group consisting of (i) peak amplitude of time-delayed said synthesized control data, (ii) delay time of a time-delayed said synthesized control data, (iii) salience-curvature applied to said synthesized control data, (iv) width of a shaping window applied to said synthesized control data, (v) height of a shaping window applied to said synthesized control data, (vi) start point of a shaping window applied to said synthesized control data, and (vii) stop point of a shaping window applied to said synthesized control data.
56. The method of claim 51, wherein step (a) further includes at least one step selected from the group consisting of (i) detecting when an upper threshold is traversed, and then deactivating said incrementing, and (ii) detecting when a lower threshold is traversed and then deactivating said decrementing.
57. The method of claim 33, wherein step (a) includes:
detecting a first threshold of data selected from the group consisting of (i) motion of control data, (ii) velocity of control data, (iii) acceleration of control data, (iv) motion of synthesized control data, (v) velocity of synthesized control data, and (vi) acceleration of control data,
outputting first logic data in accordance with detection of said first threshold,
generating clock pulses responsive to said first logic data,
detecting a second threshold of data selected from the group consisting of (i) motion of control data, (ii) velocity of control data, (iii) acceleration of control data, (iv) motion of synthesized control data, (v) velocity of synthesized control data, (vi) acceleration of control data, and (vii) time from detection of said first threshold,
outputting second logic data in accordance with said detection, and
generating clock pulses responsive to said second logic data.
58. The gesture synthesizer of claim 1, wherein:
said detection means detects forward and reverse direction of said first gs user-input control device and outputs logic data in accordance with detected said direction;
wherein said synthesis means includes:
means for counting time from detected said forward and reverse direction; and
modulation means for modifying said control data with respect to time.
59. The method of gesture synthesis as described in claim 33, wherein step (a) further includes:
detecting forward and reverse direction of said control data;
generating first and second logic data in accordance with detected said forward and reverse direction;
counting time from receipt of said first logic data;
varying forward control data responsive to time;
counting time from receipt of said second logic data; and
varying reverse control data responsive to time.
60. The method of gesture synthesis as described in claim 33, wherein step (a) further includes:
differentiating control data with respect to time;
detecting when the result of said differentiation is zero;
deactivating said gesture synthesis method upon said detection.
61. The gesture synthesizer of claim 1, wherein said gesture synthesis means includes at least a first gesture synthesis chain containing a first delay module, and a second gesture synthesis chain containing a second delay module, wherein:
synthesized control data output from said first gesture synthesis chain is used as modulation data in another gesture synthesis chain; and
synthesized control data output from said second gesture synthesis chain is used as modulation data in another gesture synthesis chain.
62. The method of gesture synthesis as described in claim 33, wherein step (a) includes the steps of:
(a) outputting first synthesized control data from a first gesture synthesis chain which includes at least a first delay module,
(b) outputting second synthesized control data from a second gesture synthesis chain which includes a second delay module,
(c) inputting said first synthesized control data as modulation data to another gesture synthesis chain, and
(d) inputting said second synthesized control data as modulation data to another gesture synthesis chain.

This application is a Continuation-in-Part of application Ser. No. 08/786,150, filed Jan. 21, 1997, now abandoned, all parts of which are herein incorporated by reference.

TABLE 1 contains various equations which may be used to implement the present invention, as depicted in the drawings and referred to in the following descriptions. Applicant has discovered that realistic a y other. For these kinds of gestures, another module is generally required to provide input to the tables. In particular, a time module 500 provides appropriate time-position data which is then converted to note data by the tables. A flex filter module 600, or delay module 900, may also provide useful time-position trajectories for note data.

To effect slow inflected bends, the shaping amount can advantageously be modified in real time using modulation data. In practice, user experimentation with shaping and other programming arguments will provide a sense of how gesture synthesis works, and how certain types of gestures and modification techniques performed on real instruments can be accomplished with the gesture synthesizer.

Ordinarily there must be a scale module 800 at the end of the gesture synthesis chain to scale the gesture amplitude to the desired musical interval. Musical intervals are generally measured in semitones. Thus the programming arguments to the scale module 800 include a Base Interval BI in semitones, an Interval Limit IL in semitones, and an Add Interval AI, also in semitones. Most pitch bend gestures are performed to traverse a distance of two semitones. However, vibrato requires a fractional semitone amount, and so the arguments preferably allow for fine adjustment to two decimals. Other gestures are performed to five or seven or twelve semitones. In addition, some gestures are performed to raise the pitch of a note, and some to lower the pitch. Thus the arguments may be set to positive or negative numbers, allowing for a bend up or a bend down.

The gesture amplitude is usually set as the Base Interval BI. The Interval Limit IL is used to allow amplitude modulation of the gesture as it is performed. This may be used so that the interval of the gesture resulting from deflection of an operator can be changed between gestures during a performance to accommodate requirements of the music. Or the modification can be performed during the gesture to create an additional gesture effect. Modification of the virtual amplitude of a planned gesture is embodied in the virtual trajectory function 260. This may be done with a modulation operator such as a pedal, e.g. pedal 54 in FIG. 2, or with controller velocity MO1v or CTL1v, or with data from a second controller operated simultaneously.

The Add Interval AI amount is used with a second modulation operator, such as velocity or a pedal. It adds a given amount to both the Base Interval BI and the Interval Limit IL. It will thus add an interval amount to any other gesture that is performed with another control operator. Its effect with velocity as modulator is to add slight variations to the amplitude of a performed gesture as might occur if it were performed with greater or lesser force. Its effect with a pedal may be to simulate the action of a guitarist's hand sliding up or down the fretboard of a guitar, while continuing to use the fingers to inflect notes. The effect of such a motion would be to raise the pitch of all the gestures by an amount of semitones equal to the number of frets crossed.
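
A minimal sketch of how the BI, IL and AI arguments might combine, assuming a normalized gesture input and a modulation value that sweeps the interval from the Base Interval toward the Interval Limit; the function and its interpolation scheme are illustrative assumptions, not the patent's specified equations.

```python
# Sketch (assumed form) of an output scale module. With mod = 0 the gesture
# spans the Base Interval; modulation sweeps it toward the Interval Limit;
# the Add Interval shifts both, as a second operator such as a pedal would.
# Values are semitones; negatives bend down.

def scale_module(gesture, bi=2.0, il=2.0, ai=0.0, mod=0.0):
    """Map a normalized gesture value (0.0-1.0) to a pitch offset in semitones."""
    interval = bi + mod * (il - bi) + ai
    return gesture * interval

print(scale_module(1.0, bi=0.35, il=0.35))          # 0.35: fractional vibrato depth
print(scale_module(1.0, bi=2.0, il=5.0, mod=1.0))   # 5.0: pedal widens a 2-semitone bend
print(scale_module(1.0, bi=2.0, il=2.0, ai=3.0))    # 5.0: AI simulates a 3-fret hand shift
```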

In another embodiment, a scale module 800 may be used at the beginning of the gesture synthesis chain to modify the virtual time parameter before input to the waveshaper module 700, or to flex filter module 600, for example. In this case, the Base Interval BI and/or Interval Limit IL would usually be set to at least twelve semitones. This is because twelve semitones would ordinarily indicate the total control data available in one direction. The interval scale equations are specified so that a setting of twelve semitones gives a maximum data output of 127. This ensures that a full range of values is available for further gesture synthesis modules. The Add Interval AI argument can also be set to a negative amount. In this case, the virtual time values will be displaced forward in time, while modulation may compress or distort these values to effectively modify the action of subsequent modules. Alternatively, an input scale module might simply have parameters for start and stop point, indicating the range of values output in response to input control data. The start and stop points may also be modulatable.
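
The text specifies only that the interval scale equations make a twelve-semitone setting yield a maximum output of 127; the sketch below assumes a simple linear form with that property, and treats a negative AI as a constant displacement, both as illustrative guesses rather than the patent's equations.

```python
# Sketch (assumed linear form) of an input scale module placed before the
# waveshaper or flex filter. BI = 12 semitones passes the full 0..127
# control range through unchanged; smaller settings compress the range.

def input_scale(control, bi=12.0, ai=0.0):
    value = control * (bi / 12.0) + ai * (127.0 / 12.0)
    return max(0, min(127, round(value)))

print(input_scale(127))           # 127: twelve semitones preserves the full range
print(input_scale(127, bi=6.0))   # 64: half-range compression
```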

As previously discussed, a bifurcated scale module may include parameters that allow for varying the reverse deflection characteristics separately from those of the forward deflection. Thus gestures of different amplitudes may be represented in the forward and reverse directions.

The delay module 900 acts to delay control data by a variable amount and recombine it in variable amounts with direct control data. The effect of the delay module 900 is to modify the virtual time parameter of gestures in a manner that can be continuously controlled by the user. The effect of varying the amounts of delayed data is to also vary the amplitude of virtual gestures in a way that is somehow dependent on the time modification. This is embodied in the virtual trajectory function 260.

The effect of delay module 900 is similar to the time oscillator module 500, but where module 500 gives the user control over performance data as it is projected into the future by a clock, delay module 900 gives the user control over performance data that is projected into the past. In this way it mimics the force dependent viscous damping element of muscle systems. Velocity controlled delay represents viscous damping of the force exerted by the muscle itself. This is embodied in the visco-elastic properties function 270. Position dependent reflex delay represents delay due to reflected force of a position varying load, such as a guitar string. Constant delay represents damping due to constant force such as friction. In another embodiment, acceleration dependent delay may represent delay from force due to inertia. The amounts of delay time due to these arguments may be modified in real time with modulation data, and/or the amount of delayed signal may also be modified.
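
A minimal ring-buffer sketch of the delay-and-recombine behavior described above; the class name, tick-based timing, and mix law are assumptions for illustration, with delay_ticks and mix standing in for the modulatable delay-time and delayed-signal-amount arguments.

```python
# Sketch (assumed form) of delay module 900: control data is delayed by a
# variable number of ticks and recombined in a variable amount with the
# direct data; both parameters could be driven by modulation data such as
# controller velocity, as the text describes.

from collections import deque

class DelayModule:
    def __init__(self, max_ticks=64):
        self.buf = deque([0] * max_ticks, maxlen=max_ticks)

    def process(self, control, delay_ticks, mix):
        self.buf.append(control)
        delayed = self.buf[-1 - delay_ticks]   # value from delay_ticks ago
        return round((1.0 - mix) * control + mix * delayed)

dm = DelayModule()
for value in range(0, 128, 16):                # a rising bend gesture
    print(dm.process(value, delay_ticks=3, mix=0.5))
```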

The delay module 900 is used at the end of the gesture synthesis chain as shown in FIG. 4 and imparts an added subtle effect to any gesture generated by a previous module. Since its effect is to elongate gestures, care must be exercised in its use, or the effect is sloppy or too languid sounding. It may also be necessary to vary arguments of other modules such as envelope rates to account for the time lag. It may also be effective to use a time oscillator module 500 with little or no period modulation as a source of constant rate control data input to delay module 900. Gesture variation is then accomplished by using a control operator or other source of modulation data to vary delay time of the constant rate control data.

In the gesture synthesis chain of FIG. 4, the delay module 900 is shown after the scale module 800. However, it is also possible to use two delay modules, one for the forward direction of motion and one for the reverse, in order to impart different time modulation effects for each direction. Delay module 900 may also be used alone, giving a distinctly recognizable effect encountered in some musical styles, notably Country and Western. Delay module 900 may also precede the waveshaper module 700. In particular, realistic guitar slides may be thus synthesized using appropriate lookup tables.

A delay module may also be used at the beginning of the gesture synthesis module chain. This may give a different type of gesture variation. In another embodiment, data from each delay line 940 is input to a separate gesture synthesis chain 280, before being recombined in the mix/normalize function 950. Thus one operator may control up to four chains simultaneously. In this case, data from each synthesis chain may be used as modulation data in another. Since the data is ultimately combined to make one gesture, lookup tables in the waveshaper module 700 may represent gesture partials instead of complete gestures. A gesture partial may simulate an effect such as audible clicks that result from crossing frets when performing glissando on a guitar, or a glottal bump from a saxophonist's throat. These may be combined in varying amounts by varying the amount arguments with modulation data.
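
As an illustrative sketch only (the weighting and normalization rule are assumptions, not the patent's mix/normalize function 950), gesture partials from parallel chains might be combined like this:

```python
# Sketch of combining gesture partials from parallel synthesis chains,
# e.g. a main bend partial plus a simulated fret-click partial, with the
# amount arguments available for variation by modulation data.

def mix_normalize(partials, amounts):
    """Weighted average of per-chain outputs, clamped to the MIDI range."""
    total = sum(a * p for a, p in zip(amounts, partials))
    return max(0, min(127, round(total / sum(amounts))))

print(mix_normalize([96, 30], [1.0, 0.25]))  # 83: bend partial dominates the click
```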

The present invention provides a gesture synthesizer for electronic musical instruments, which solves the well-known and much lamented problem of obtaining expressive musical control of electronic instruments. The physical construction is designed to be simple and comfortable to operate while electronic simulations perform difficult gestures. The simulations are very flexible and produce a wide range of realistic results. Since the parameters are specified and manipulated in the electronic domain, they are easily applied to the parameters of electronic sound production. The present invention may be implemented using relatively inexpensive, standardized parts, and can thus be produced cost-effectively.

Other means may be specified of mathematically modifying gesture parameters herein described as virtual time, virtual amplitude, and virtual shape or acceleration-deceleration characteristic. These parameters may also be specified as rate, musical interval, and inflection, where inflection is represented in terms of location of inflection point, amount of curvature or salience, and smoothness or roundness of curvature or "radiance".

The above description should not be construed as limiting the scope of the invention. Those skilled in the art will recognize many other forms in which the above described invention may be embodied. Another embodiment, possibly of greater advantage from a marketing standpoint, would be a black-box-style MIDI accessory containing the electronic components, such as ROM and RAM memory and a dedicated processing unit, necessary to store and execute the program described above. As such the present invention could be implemented on a single printed circuit board. A set of performance operators may be provided such as a joystick, and/or an array of levers, and one or more pedals and/or additional operators to be manipulated by the user's thumb. Such an embodiment might also contain a visual display for a menu-driven user interface and operators such as buttons and/or a slide control for entering programming parameters. Additionally, a means of storing all the programming arguments for a single gesture synthesis chain, or indeed for all specified chains, may be implemented. Such storage capability may include RAM with battery backup, EEPROM or disk storage. In such an embodiment, all specified programming arguments may be stored as a single "preset" and later recalled for a live performance. Storage for a number of presets may be provided, as may ROM storage of presets provided by the manufacturer and designed for general use.

Another embodiment would be a black box as described above, but without the menu driven user interface. In this case, a set of presets would be provided that could be selected with a button or buttons, but the presets could not be modified by the user. Still another embodiment would be the above black box with performance operators but with only one set of synthesized gestures hard-wired to the operators with no means of altering their parameters or selecting alternative gestures. Still another embodiment would be any of the three above-mentioned black boxes without the performance operators but with MIDI inputs provided that would accept data from external operators such as alternative MIDI controllers or conventional MIDI keyboards.

Another embodiment could be contained within an otherwise conventional MIDI or non-MIDI music keyboard or alternative controller, interfacing directly with the synthesis or control architecture of the host unit, and as such may or may not contain means of altering the gesture synthesis parameters. Still another embodiment might be contained on a disk or memory card, so that information representing the gesture synthesis architecture described above could be read into a host unit designed as an open-architecture music system, which is thus configured by data read from such a memory storage device. In a similar way, any or all of the above-described implementations could be burned into a ROM memory chip for installation into a pre-existing music system, such that the ROM chip accesses the resources of the host system and interfaces with its performance operators and tone synthesis architecture to create gesture synthesis as described.

The gesture synthesis architecture could be embodied on a single printed circuit board or card, such as a PCMCIA card, for installation into a host computer system. Finally, a dedicated integrated circuit could be designed and manufactured containing all the electronic components and code necessary to perform gesture synthesis as described, when interfaced with any performance operator or operators and any tone generating means. Such a printed circuit board, PCMCIA card, or integrated circuit could also contain an internal tone generating means, requiring only a physical operator to activate it.

It will be appreciated that gesture synthesis is not limited to modification of data as set forth in the MIDI specification. Other data communications protocols representing musical parameters may likewise be modified. Non-musical data may also be modified.

Non-music applications for the gesture synthesis architecture described above might include computer graphics, games, and animation. A painting, drawing, or "sculpting" program could be implemented that synthesizes brush strokes, or strokes from a pen or other drawing, painting, or sculpting tool, on a two-dimensional surface or within a simulated three-dimensional space. In another embodiment, preset synthesized strokes or gestures could be specified for programs designed to facilitate writing on a computer screen in a particular alphabet or style, one that is pictographic or based more on irregular curves than on regular lines and circles. Other graphics applications might be used to author computer-driven animation of rendered objects, photographs, hand-drawn figures, or other images such that their movements appear natural and lifelike.

Likewise, natural, lifelike motion could be simulated on motor-controlled devices such as prosthetic limbs or robotic appendages using this gesture synthesis model; a sketch of driving such a trajectory with the mass-spring model of Table 1 appears below. Finally, although synthesized speech is well specified in the prior art, natural human speech contains tonal information similar to that in music. This invention could be applied to alter certain tonal attributes, such as the fundamental frequency of synthesized words and phrases, to impart a lifelike sense of emotion and meaning. Modifications and variations may be made to the disclosed embodiments without departing from the subject and spirit of the invention as defined by the following claims.
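
As one illustration (not taken from the patent; the parameter values and function names are assumptions chosen for readability), the mass-spring equation given as Equation 40 of Table 1 below can be integrated numerically to move a simulated limb or robotic joint toward a shifting virtual target:

```python
def mass_spring_step(x, v, x0, mass=1.0, viscosity=6.0, stiffness=40.0,
                     dt=0.01):
    """One Euler step of F = M*x'' + b*x' + k*(x - x0), driven toward
    the virtual trajectory point x0 with no external force (F = 0)."""
    accel = (-viscosity * v - stiffness * (x - x0)) / mass
    v += accel * dt
    x += v * dt
    return x, v

x, v = 0.0, 0.0
for step in range(200):
    target = 1.0                  # virtual trajectory: a fixed target here
    x, v = mass_spring_step(x, v, target)
print(round(x, 3))                # settles near the target with a natural,
                                  # non-instantaneous approach
```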

TABLE 1

1. S + MD - S*MD/127
   Total Salience; normalizes modulation data and the constant Salience programming amount.

2. (651 - 4*TS)*CTLF/(524 - 4*TS + CTLF)
   Hyperbolic Positive Salience Transfer Function; uses a convex hyperbolic curve with variable salience to modify control data in accordance with modulation data.

3. 260 - TS - (260 - TS)*exp((-0.0079*ln(260 - TS) + 0.0079*ln(133 - TS))*CTLF)
   Exponential Positive Salience Transfer Function; uses a convex exponential curve with variable salience to modify control data in accordance with modulation data.

4. sqrt((10835/TS - 0.184*TS - 63.5)^2 - CTLF^2 + 21670*CTLF/TS - 0.368*CTLF*TS + 127*CTLF) - 10835/TS + 0.184*TS + 63.5
   Radial Positive Salience Transfer Function; uses a convex radial curve with variable salience to modify control data in accordance with modulation data.

5. (-524 + 4*TS)*CTLR/(CTLR - 651 + 4*TS)
   Hyperbolic Negative Salience Transfer Function; uses a concave hyperbolic curve to modify control data in accordance with modulation data.

6. 127*(exp(CTLR/(158 - TS)) - 1)/(exp(127/(158 - TS)) - 1)
   Exponential Negative Salience Transfer Function; uses a concave exponential transfer function with variable salience to modify control data in accordance with modulation data.

7. 10835/TS - 0.184*TS + 63.5 - sqrt((10835/TS - 0.184*TS - 63.5)^2 - CTLR^2 - 21669*CTLR/TS + 0.368*CTLR*TS + 127*CTLR - 46.8*TS + 2752007/TS)
   Radial Negative Salience Transfer Function; uses a concave radial transfer function with variable salience to modify control data in accordance with modulation data.

8. (254*TEAS*TEAMx - TEAS*TEAMx*CTLF - TEAS*TEAMx*MD + 254*TEAMn*CTLF + TEAMn*TEAS*MD)/((CTLF + MD + TEAS)*254)
   Time Envelope Attack; modulates clock period in accordance with forward control data, modulation data, and three programming arguments that are the envelope parameters Maximum, Minimum, and Salience.

9. TEAMx/(CTLF + MD + TEAS) + TEAMn
   Time Envelope Attack; modulates clock period in accordance with forward control data, modulation data, and three programming arguments that are the envelope parameters Maximum, Minimum, and Salience.

10. (TEAMx + 254 - CTLF - MD)/(TEAS + CTLF + MD) + TEAMn
    Time Envelope Attack; modulates clock period in accordance with forward control data, modulation data, and three programming arguments that are the envelope parameters Maximum, Minimum, and Salience.

11. (CTR*AEAR*4)/(128 - 0.001*FAMA*MD + CTR + AEAS)
    Amplitude Envelope Attack; modulates amplitude of summed clock pulses in accordance with modulation data, a programming argument Modulation Amount, and two programming arguments that are the envelope parameters Rate and Salience.

12. (CTR*AEAR*CTLF)/(128 - 0.003*FAMA*MD + CTR + AEAS)
    Amplitude Envelope Attack; modulates amplitude of summed clock pulses in accordance with forward control data, modulation data, a programming argument Modulation Amount, and two programming arguments that are the envelope parameters Rate and Salience.

13. (CTR*AEAR*AEAS*10)/(128*AEAS + 500 + CTR*AEAS)
    Amplitude Envelope Attack; modulates amplitude of summed clock pulses in accordance with forward control data and the programming arguments Rate and Salience.

14. (-TERS*TERMx*CTLR - TERS*TERMx*MD - 64516*TERMn + 254*TERMn*CTLR + 254*TERMn*MD - 254*TERMn*TERS + TERMn*TERS*CTLR + TERMn*TERS*MD)/(254*(-254 + CTLR + MD - TERS))
    Time Envelope Release; modulates clock period in accordance with reverse control data, modulation data, and three programming arguments that are the envelope parameters Maximum, Minimum, and Salience.

15. TERMx/(254 - MD - CTLR + TERS) + TERMn
    Time Envelope Release; modulates clock period in accordance with reverse control data, modulation data, and three programming arguments that are the envelope parameters Maximum, Minimum, and Salience.

16. (TERMx + CTLR + MD)/(TERS + 254 - CTLR - MD) + TERMn
    Time Envelope Release; modulates clock period in accordance with reverse control data, modulation data, and three programming arguments that are the envelope parameters Maximum, Minimum, and Salience.

17. 129 - (CTR*AERR*4)/(0.001*RAMA*MD + (127 - CTR) + AERS)
    Amplitude Envelope Release; modulates amplitude of summed clock pulses in accordance with modulation data, a programming argument Modulation Amount, and two programming arguments that are the envelope parameters Rate and Salience.

18. 128 - (CTR*AERR*(127 - CTLF))/(89.9 + 0.003*FAMA*MD + (127 - CTR) + AERS)
    Amplitude Envelope Release; modulates amplitude of summed clock pulses in accordance with reverse control data, modulation data, a programming argument Modulation Amount, and two programming arguments that are the envelope parameters Rate and Salience.

19. 127 - (CTR*AEAR*AEAS*10)/(128*AEAS + 500 + CTR*AEAS)
    Amplitude Envelope Release; modulates amplitude of summed clock pulses in accordance with reverse control data and the programming arguments Rate and Salience.

20. 100*(FCR*CTLF + FMR*MD)/(127*(FCR*FS + FMR*FS) - FS*(FCR*CTLF + FMR*MD) + FS*1000)
    Forward Filter Modulation; Hill's equation in which forward control data and modulation data are treated as a decreasing load. Has programming arguments Control Rate, Modulation Rate, and Salience.

21. (RCR*(127 - CTLR) + RMR*MD)*100/(127*(RCR*RS + RMR*RS) - RS*(RCR*(127 - CTLR) + RMR*MD) + 1000*RS)
    Reverse Filter Modulation; Hill's equation in which reverse control data and modulation data are treated as a decreasing load. Has programming arguments Control Rate, Modulation Rate, and Salience.

22. (127*CTLF - CTLF*FB)/(8*(168 - CTLF))
    Forward Filter Modulation; includes negative feedback term FB.

23. (127 - CTLR)*(127 - FB)/(8*(41 + CTLR))
    Reverse Filter Modulation; includes negative feedback term FB.

24. CTLF*TS/127 + (127 - TS)/2
    Shaping Window; scales and offsets control data in accordance with the Total Shaping parameter.

25. SA + (127 - SA)*MD/127
    Total Shaping; normalizes modulation data and the constant Shaping Amount programming argument.

26. (127 - TS)/2
    Normalize Amount 1; the conversion characteristic of this amount is the lower endpoint of the shaping window, used to normalize converted data.

27. (127 + TS)/2
    Normalize Amount 2; the conversion characteristic of this amount is the upper endpoint of the shaping window, used to normalize converted data.

28. (A1 - A2)*127/(A3 - A2)
    Normalize; normalizes converted data from the forward shaping window.

29. (B1 - B2)*127/(B3 - B2)
    Normalize; normalizes converted data from the reverse shaping window.

30. (CTL1*83*BI + MD2*83*AI)/1000
    Interval Scaling; scales control data in accordance with the Base Interval programming argument, and adds an amount determined by the Add Interval argument in accordance with modulation data.

31. (66*CTL1*(MD1*IL - MD1*BI + 128*BI) + MD2*8300*AI)/100000
    Interval Scaling; scales control data between the Base Interval and Interval Limit programming arguments in accordance with modulation data, and adds an amount determined by the Add Interval argument in accordance with additional modulation data.

32. 0.083*CTL1*(0.06*IL*exp(0.056*MD1) + BI - 0.06*IL)/(0.06*exp(0.056*MD1) + 1) + MD2*0.083*AI
    Interval Scaling; scales control data between the Base Interval and Interval Limit programming arguments in accordance with modulation data, and adds an amount determined by the Add Interval argument in accordance with additional modulation data.

33. CTL1*(RD + MD1)/127
    Controller Delay Time; determines the time delay of control data in accordance with control data, modulation data, and the Reflex Delay programming argument.

34. (CTL1 + MD1)*RD
    Controller Delay Time; determines the time delay of control data in accordance with control data, modulation data, and the Reflex Delay programming argument.

35. CTL1v*(VD + MD2)
    Velocity Delay Time; determines the time delay of control data in accordance with velocity data, modulation data, and the Velocity Delay programming argument.

36. (CTL1v + MD2)*VD
    Velocity Delay Time; determines the time delay of control data in accordance with velocity data, modulation data, and the Velocity Delay programming argument.

37. CD + MD3
    Constant Delay; determines the time delay of control data in accordance with modulation data and the Constant Delay programming argument.

38. (IN1 + IN2*AMT2 + IN3*AMT3 + IN4*AMT4)/(1 + AMT2 + AMT3 + AMT4)
    Mix/Normalize; mixes control data with delayed control data in accordance with the Amount arguments and normalizes the output.

39. ((127 - MD1)*(CTLF1 + CTLR1) + MD1*(CTLF2 + CTLR2))/127
    Interpolation equation; crossfades inputs from two gesture synthesis chains under control of modulation data.

40. F(t) = M(t)*d^2x(t)/dt^2 + b(t)*dx(t)/dt + k(t)*(x(t) - x0(t))
    Mass-Spring differential equation; gives a function for force development over time, in terms of resistance due to mass (times acceleration), viscosity (times velocity), and stiffness or elasticity (times displacement), with an initial displacement x0 possibly due to friction.

41. V = (T0 - T)*a/(b + T)
    Hill's equation; gives the velocity of shortening of a muscle in terms of the tension T developed by the muscle, the maximum tension T0 the muscle can develop, and two constants a and b representing properties that may be modeled as combinations of elastic and viscous elements.
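
To make the table concrete, the following minimal Python sketch transcribes three representative entries (Equations 1, 2, and 20 above) directly; the function names and test values are illustrative assumptions, and standard operator precedence is assumed:

```python
def total_salience(S, MD):
    """Equation 1: combine the constant Salience programming amount S
    with modulation data MD, normalized to the 7-bit range."""
    return S + MD - S * MD / 127

def positive_salience_hyperbolic(CTLF, TS):
    """Equation 2: convex hyperbolic transfer function applied to
    forward control data CTLF, with total salience TS setting the
    curvature. Passes through 0 at CTLF = 0 and 127 at CTLF = 127."""
    return (651 - 4 * TS) * CTLF / (524 - 4 * TS + CTLF)

def forward_filter_modulation(CTLF, MD, FCR, FMR, FS):
    """Equation 20: Hill's equation with forward control data and
    modulation data treated as a decreasing load; FCR, FMR, and FS are
    the Control Rate, Modulation Rate, and Salience arguments."""
    load = FCR * CTLF + FMR * MD
    return 100 * load / (127 * (FCR * FS + FMR * FS) - FS * load + FS * 1000)

TS = total_salience(S=40, MD=64)
for ctl in (0, 32, 64, 96, 127):
    print(ctl,
          round(positive_salience_hyperbolic(ctl, TS), 1),
          round(forward_filter_modulation(ctl, MD=64, FCR=1, FMR=1, FS=2), 2))
```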

Inventor: Longo, Nicholas
