A musical playback amusement system is disclosed. A primary audio track, defined by a plurality of time-sequenced audio data elements and associated synchronization identifiers, is loaded and played back on a first interactive device. A second interactive device is in communication with the first interactive device to receive playback synchronization commands that coordinate playback of a secondary track loaded on the second interactive device. The synchronization identifiers in the primary track are transmitted from the first interactive device to the second interactive device as the playback synchronization commands in coordination with the playback of the primary audio track.
1. A method for synchronized audio output between a first device and a second device, the method comprising:
generating on the first device a first audio output corresponding to a primary track defined by a plurality of sequential audio data elements and one or more first playback synchronization identifiers stored together and associated with specific audio data elements at spaced intervals of the sequential audio data elements of the primary track;
transmitting, to the second device, synchronization commands corresponding to the first playback synchronization identifiers as playback of the primary track on the first device advances to those specific audio data elements that include the associated first playback synchronization identifiers;
generating a second audio output of a secondary track on the second device in synchrony with the first audio output on the first device, the secondary track being defined by a plurality of sequential audio data elements and one or more second playback synchronization identifiers stored together and associated with specific audio data elements at spaced intervals of the sequential audio data elements of the secondary track, wherein relative positions of the first playback synchronization identifiers and the second playback synchronization identifiers along the sequential audio data elements of the respective primary and secondary tracks are substantially the same;
wherein synchrony between the first device and the second device is maintained with the transmitted synchronization commands independent of any internal clocks of the first device and the second device, playback of the second audio output being adjusted to a specific one of the sequential audio data elements with the associated second synchronization identifier as directed by the synchronization commands from the first device.
22. A method for synchronized audio output between a first device and a second device, the method comprising:
generating on the first device a first audio output corresponding to a primary track defined by a plurality of sequential audio data elements of musical notes each associated with a timestamp and one or more first playback synchronization identifiers associated with specific audio data elements at spaced intervals;
corresponding the first audio output generated on the first device to a first one of the musical notes of the primary track;
transmitting, to the second device, synchronization commands corresponding to the first playback synchronization identifiers as playback of the primary track on the first device adjusts to those specific audio data elements including the associated first playback synchronization identifiers, one of the synchronization commands being a first timestamp associated with the first one of the musical notes; and
generating a second audio output of a secondary track on the second device in synchrony with the first audio output on the first device, the secondary track being defined by a plurality of sequential audio data elements of musical notes each associated with a timestamp and one or more second playback synchronization identifiers associated with specific audio data elements at spaced intervals, relative time instances of the first playback synchronization identifiers of the primary track and the second playback synchronization identifiers of the secondary track being substantially the same;
corresponding the second audio output generated on the second device to a first one of the musical notes of the secondary track, the received synchronization timestamp corresponding to a second timestamp associated with the first one of the musical notes of the secondary track being generated as the second audio output;
wherein synchrony between the first device and the second device is maintained with the transmitted synchronization commands, playback of the second audio output being adjusted to a specific one of the sequential audio data elements with the associated second synchronization identifier as directed by the synchronization commands from the first device.
2. The method of
3. The method of
receiving a user input on the first device; and
mixing into the first audio output, in response to the user input, audio data elements of a tertiary track stored on the first device.
4. The method of
5. The method of
discontinuing the mixing in of the audio data elements of the tertiary track into the first audio output in response to a termination of the user input.
6. The method of
7. The method of
the primary track is a melody track of a song;
the secondary track is one of an accompaniment track and a harmony track of the song; and
the tertiary track is one of a riff track and a solo track of the song.
8. The method of
transmitting, to a third device, the synchronization commands corresponding to the first playback synchronization identifiers as the playback of the primary track on the first device advances to those specific audio data elements that include the associated first playback synchronization identifiers;
generating a third audio output of a second secondary track on the third device in synchrony with the first audio output on the first device and the second audio output on the second device, the second secondary track being defined by a plurality of sequential audio data elements and one or more third playback synchronization identifiers associated with specific audio data elements at spaced intervals, relative time instances of the first playback synchronization identifiers of the primary track, the second playback synchronization identifiers of the first secondary track, and the third playback synchronization identifiers of the second secondary track being substantially the same;
wherein synchrony between the first device and the third device is maintained with the transmitted synchronization commands, playback of the third audio output being adjusted to a specific one of the sequential audio data elements with the associated third playback synchronization identifiers as directed by the synchronization commands from the first device.
10. The method of
11. The method of
12. The method of
13. The method of
14. The method of
15. The method of
16. The method of
corresponding the first audio output generated on the first device to a first one of the musical notes of the primary track;
transmitting a first timestamp associated with the first one of the musical notes to the second device as a synchronization timestamp; and
corresponding the second audio output generated on the second device to a first one of the musical notes of the second track, the received synchronization timestamp corresponding to a second timestamp associated with the first one of the musical notes of the secondary track being generated as the second audio output.
17. The method of
18. The method of
generating on the third device a third audio output corresponding to a first one of the musical notes of the third track, the received synchronization timestamp corresponding to a third timestamp associated with the first one of the musical notes of the third track being generated as the third audio output.
19. The method of
20. The method of
21. The method of
23. The method of
24. The method of
generating on the third device a third audio output corresponding to a first one of the musical notes of the third track, the received synchronization timestamp corresponding to a third timestamp associated with the first one of the musical notes of the third track being generated as the third audio output.
25. The method of
26. The method of
27. The method of
Not Applicable
Not Applicable
1. Technical Field
The present disclosure relates generally to toys and various interactive entertainment/amusement devices, as well as playing music and other sounds thereon. More particularly, the present disclosure relates to synchronized multiple device audio playback and interaction.
2. Related Art
Children are often attracted to interactive amusement devices that provide both visual and aural stimulation. In recognition of this attraction, a wide variety of such devices has been developed throughout recent history, beginning with the earliest “talking dolls” that produced simple phrasings with string-activated wood and paper bellows, or crying sounds with weight-activated cylindrical bellows having holes along their sides. These talking dolls were typically limited to crying “mama” or “papa.”
Another well-known apparatus that generates sounds is the music box, which generally comprises a comb, each tooth thereof having a specific length that, when mechanically plucked, emits a sound at a particular frequency or musical note. A disc or cylinder bearing pins or other protuberances was rotated along the comb at a set speed by a manually wound clockwork mechanism. The positions of the pins could be variously arranged and spaced to pluck the desired tooth of the comb at a specific time, and combined to reproduce a musical composition. Music boxes were typically standalone devices enclosed in snuff boxes, though due to their relatively small size, they could be incorporated into dolls and other toys.
Further advancements utilized wax cylinder phonograph recordings. Various phrases were recorded on the phonographs for playback through the dolls to simulate dialogue. Still popular among collectors today, one historically significant embodiment of a talking doll is the “Bebe Phonographe” made by the Jumeau Company in the late 19th century. In addition to spoken words, music was also recorded on the phonograph so that the doll could sing songs and nursery rhymes.
Beyond the audio output capabilities, efforts to make dolls more lifelike led to movable limbs and facial features. In some cases the movement of such features was coordinated with the audio output. For example, when a phrase was uttered, the jaws of the doll could be correspondingly moved. The instructions required for such synchronized animation of the features of the doll were stored in a cassette recording that included the electrical control signals for the servo motors actuating the movable features along with the audio signal.
As the use of digital electronics became more feasible and cost effective, all functions of such toys gradually came to be implemented on programmable integrated circuit devices such as microcontrollers. The play pattern or routine, including all audio information and the mechanical actuation sequences therefor, is stored on memory devices for subsequent retrieval and processing by the microcontroller. Pursuant to the specific programmed instructions, digital audio data is passed to a digital-to-analog converter, with the resulting analog signal being passed to an audio transducer (speaker). Movements of the mechanical features of the toys are represented as a series of motor activation and deactivation signals, which are also generated by the processor pursuant to the programmed instructions.
Earlier digital processor-operated dolls were typically single standalone units that functioned autonomously. To the extent any external inputs affected their play patterns, such inputs were received from the user via buttons, sensors, and other on-board devices connected to the processor. In more sophisticated devices, wired or wireless remote control devices could communicate with the doll to provide operational directions thereto. The availability of inter-processor data communication modalities in microcontrollers led to the development of systems of multiple dolls that can communicate with each other. While each doll can have its own play routine, the flow of that routine may be altered by input signals received from another doll. For example, one doll could generate a first part of a dialogue, while another doll could respond with a second part of the same dialogue.
Along the same lines as talking/singing dolls, musical instruments, and simplified versions thereof, are also popular amusement devices for children. Depending on the target age range, the level of realism may be varied. For instance, in preparation for transitioning to real musical instruments, a scaled down and lower fidelity device, but otherwise requiring the same instrumentation skills, may be appropriate. Alternatively, for younger children with whom the goal is to introduce the joys of playing music, the number of inputs/producible sounds may be greatly reduced, or a single input may be operative to produce a sequence of multiple sounds. Such devices can be driven by electronic synthesizers, which may be controlled by programmable data processors or integrated circuit devices.
Conventional amusement devices that allow the operator to produce or manipulate musical outputs and sounds are usually standalone units with limited possibilities for amalgamation with other sounds from different devices unless independently operated. Just as ensemble performances with real instruments can be more captivating and enjoyable than solo performances for some, such is likewise the case with simulated instruments and other amusement devices that output music. Accordingly, there is a need in the art for synchronized multiple device audio playback and interaction.
In accordance with one embodiment of the present disclosure, a musical playback system is contemplated. There may be a first interactive device with a primary audio track that can be defined by a plurality of time-sequenced audio data elements and associated synchronization identifiers loaded thereon. The primary audio track may be played back on the first interactive device. Additionally, there may be a second interactive device with a secondary track loaded thereon. The second interactive device may be in communication with the first interactive device to receive playback synchronization commands that can coordinate playback of the secondary track on the second interactive device. The synchronization identifiers can be transmitted from the first interactive device to the second interactive device as the playback synchronization commands in coordination with the playback of the primary audio track.
Another embodiment of the present disclosure contemplates an interactive device. The interactive device may include an acoustic transducer, as well as a data communications transceiver linkable to a corresponding data communications transceiver on another interactive device to exchange data therewith. There may also be a memory with audio data stored thereon. The audio data may include a primary track and a secondary track, with each being respectively defined by a plurality of time-sequenced audio data elements with selected ones of the audio data elements linked to playback synchronization identifiers. The device may further include a programmable data processor that can be connected to the acoustic transducer, the data communications transceiver, and the memory. The data processor can be programmed to operate in one of a master mode and a secondary mode. In the master mode, the audio data elements of the primary track can be synthesized as a primary track audio signal to the acoustic transducer. Furthermore, linked ones of the playback synchronization identifiers can be passed to the data communications transceiver as the corresponding audio data elements are being synthesized at given time instants. In the secondary mode, the audio data elements of the secondary track can be synthesized as a secondary track audio signal to the acoustic transducer. Received playback synchronization identifiers from the data communications transceiver may designate particular time-sequenced audio data elements being synthesized at given time instants.
Yet another embodiment contemplates a method for synchronized audio output between a first device and a second device. The method may include a step of generating a first audio output corresponding to a primary track. The first audio output may be generated on the first device. Furthermore, the first audio output may correspond to a primary track that is defined by a plurality of sequential audio data elements and one or more first playback synchronization identifiers associated with specific audio data elements, at spaced intervals. There may also be a step of transmitting, to the second device, synchronization commands corresponding to the first playback synchronization identifiers. This may proceed as playback of the primary track on the first device adjusts to those specific audio data elements that include the associated first playback synchronization identifiers. The method may further include generating a second audio output of a first secondary track on the second device in synchrony with the first audio output on the first device. The first secondary track may be defined by a plurality of sequential audio data elements and one or more second playback synchronization identifiers associated with specific audio data elements at spaced intervals. Relative time instances of the first playback synchronization identifiers of the primary track and the second playback synchronization identifiers of the first secondary track may be substantially the same. Synchrony between the first device and the second device can be maintained with the transmitted synchronization commands. Playback of the second audio output may be adjusted to a specific one of the sequential audio data elements with the associated second synchronization identifiers as directed by the synchronization commands from the first device.
A method for synchronizing audio output between a first device with a first audio track and a second device with a second audio track is also contemplated. The first audio track and the second audio track may each be defined by a plurality of musical notes each in turn associated with a timestamp. The method may include generating on the first device a first audio output corresponding to a first one of the musical notes of the first audio track. There may be a step of transmitting a first timestamp that is associated with the first one of the musical notes to a second device as a synchronization timestamp. The method may further include generating, on the second device, a second audio output that can correspond to a first one of the musical notes of the second audio track. The received synchronization timestamp may further correspond to a second timestamp associated with the first one of the musical notes of the second audio track that is being generated as the second audio output.
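The timestamp-based resynchronization summarized above can be illustrated with a brief sketch. The data layout, note values, and function name below are hypothetical, serving only to show how a received synchronization timestamp may be matched against the timestamps of the second audio track:

```python
from bisect import bisect_right

def resync_index(track, sync_timestamp):
    """Return the index of the note whose timestamp equals the received
    synchronization timestamp, falling back to the nearest earlier note."""
    timestamps = [ts for ts, _note in track]
    return max(bisect_right(timestamps, sync_timestamp) - 1, 0)

# Second audio track as (timestamp, note) pairs, assumed for illustration.
secondary_track = [(0, 60), (480, 62), (960, 64), (1440, 65)]
```

Upon receiving a synchronization timestamp of 960, for instance, playback on the second device would jump to the third element, keeping the two audio outputs aligned without reference to either device's internal clock.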
The present invention will be best understood by reference to the following detailed description when read in conjunction with the accompanying drawings.
These and other features and advantages of the various embodiments disclosed herein will be better understood with respect to the following description and drawings, in which:
Common reference numerals are used throughout the drawings and the detailed description to indicate the same elements.
The detailed description set forth below in connection with the appended drawings is intended as a description of the presently preferred embodiments of the invention, and is not intended to represent the only form in which the present invention may be constructed or utilized. The description sets forth the functions of the invention in connection with the illustrated embodiment. It is to be understood, however, that the same or equivalent functions may be accomplished by different embodiments that are also intended to be encompassed within the scope of the present disclosure. It is further understood that relational terms such as first and second and the like are used solely to distinguish one entity from another without necessarily requiring or implying any actual such relationship or order between such entities.
With reference to
Together with the auditory outputs, various embodiments also contemplate movement of limbs 16 of the anthropomorphized characters to simulate the playing of the instruments 14. For example, one of the limbs 16 of the first interactive device 12a may move from side to side to simulate the stroking of a bow across strings of the cello, while both of the limbs 16 of the second interactive device 12b may move up and down to simulate the striking motion of the mallets against the bars, as exerted by the character. Additionally, there may be various indicator devices that produce illumination in predetermined patterns. It will be recognized by those having ordinary skill in the art that such visual appearances are merely exemplary in nature, and any other suitable appearance may be substituted to match desired thematic characteristics.
In further detail as shown in the block diagram of
Although only one audio sequence is shown for each of the respective first and second interactive devices 12a, 12b for the sake of simplicity, it is expressly contemplated that many different songs can be loaded thereon. In order to identify the particular audio sequence among other available ones, the first audio sequence 18 may have a first audio sequence identifier 22, and the second audio sequence 20 may have a second audio sequence identifier 24. So that the respective first and second audio sequences 18, 20 can be played back synchronously as the single composition, both of the values of the first audio sequence identifier 22 and the second audio sequence identifier 24 are understood to be the same. Other sets of audio sequences for different compositions may have different audio sequence identifier values.
Each of the audio sequences is segregated into multiple tracks. For instance, the first audio sequence 18 includes a primary track 26, a secondary track 28, and a tertiary track 30. Along these lines, the second audio sequence 20 likewise includes a primary track 32, a secondary track 34, and a tertiary track 36. The individual tracks may represent different parts of the composition as would be played on a single instrument, including the melody, harmony, accompaniment, solo, and/or riff parts. Thus, the primary tracks 26, 32 may correspond to the melody portion, the secondary tracks 28, 34 may correspond to the accompaniment or harmony portion, and the tertiary tracks 30, 36 may correspond to a solo or a riff portion. All, some, or just one of these tracks may be selectively played back or generated as an audio output in accordance with various embodiments of the present disclosure. It is expressly contemplated that the interactive device 12 may include more than one secondary track and/or more than one tertiary track, notwithstanding the exemplary implementation shown in
Each of the tracks is further segregated into multiple audio sequence data elements 38. One possible implementation of the audio playback amusement system 10 may utilize MIDI (Musical Instrument Digital Interface) sequence data to represent the musical composition. Each audio sequence data element 38 is understood to have a specific pitch or output signal frequency and correspond to a compositional note. For example, the numerical value 69 may correspond to an audio frequency of 440 Hz, or the “A4” note. Additional data such as clock/sequence identifiers to define a tempo can be included. The exact current playback position among the respective audio sequences 18, 20 may be indicated by a sequence identifier that is a timestamp or time code. Alternatively, the audio sequence data elements 38 may be raw pulse code modulated (PCM) data representative of audio signals. A common format for such audio data is Waveform Audio File Format (WAVE), though others such as AIFF (Audio Interchange File Format) may also be utilized. Although the amount of data may vary between a MIDI audio sequence and a WAVE audio sequence, audio data in general is oftentimes stored as a stream of time-sequenced information “chunks” to which metadata can be attached at particular time instances. In this regard, compressed, lossy audio data formats such as MP3 (MPEG-2 Audio Layer III) may also be utilized.
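The note-number convention mentioned above (value 69 corresponding to 440 Hz, or A4) follows the standard equal-temperament mapping of MIDI note numbers, which can be sketched as follows (the function name is illustrative, not part of the disclosure):

```python
def midi_note_to_hz(note_number: int) -> float:
    """Map a MIDI note number to its equal-temperament frequency,
    anchored at A4 (note number 69) = 440 Hz."""
    return 440.0 * 2.0 ** ((note_number - 69) / 12.0)
```

For example, note 69 yields 440 Hz, while note 81, one octave higher, yields 880 Hz.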
The block diagram of
In addition to the memory 39, there may also be an on-board audio synthesizer module 42 that converts the audio sequence data in the order and speed designated into an audio output. The generated analog audio signal may be amplified and passed to an acoustic transducer 44 or loudspeaker, where it is mechanically reproduced as sound waves. The synthesizer module 42 can be programmed to generate sound signals that mimic those of a particular instrument, and so the tracks may include instrument identifiers specifying the aforementioned cello or marimba, or others such as the piano, harp, flute, trumpet, bells, and congas that can produce musical scales. Additionally, indefinite pitch percussive instruments such as the snare drum may also be specified. Besides reproducing the sounds corresponding to the audio sequence data, the synthesizer module 42 may accept text data, from which speech may be synthesized. In accordance with one embodiment of the present disclosure, the programmable data processor 40 can be the 24-channel MIDI/Speech Controller integrated circuit SNC88681A from Sonix Technology Co., Ltd. of Chupei City, Taiwan.
The programmable data processor 40 has several input/output ports 46 to which various peripheral devices may be connected. Although specific functional details of these peripheral devices as pertaining to the functionality of the interactive device 12 and the audio playback amusement system 10 on a broader level will be discussed more fully below, by way of general overview, these include an input device 48, a transceiver 50, a data link modality/front end 52, an illumination output device or Light Emitting Diodes (LED) 54, and a mechanical actuator 55. Because not all interactive devices 12 in the audio playback amusement system 10 require movement, the mechanical actuator 55 is optional. In further detail, the input device 48 may be connected to a first input/output port 46a, and the transceiver 50 may be connected to a second input/output port 46b. The transceiver 50, in turn, is connected to the data link modality or front end 52. Connected to a third input/output port 46c is the LED 54, and connected to a fourth input/output port 46d is the mechanical actuator 55.
Various embodiments of the present disclosure contemplate a data communications link being established between the first interactive device 12a and the second interactive device 12b, and possibly others. The physical layer of the data communications link may be wired or wireless. It is understood that the transceiver 50 incorporates the pertinent data received from the programmable data processor 40 into a suitable data transmission packet that conforms to such standards as RS-485, RS-232, and so forth. The data link modality 52 converts the individual bits of the data transmission packet into corresponding signals that can be received and converted by the receiving data link modality 52 on another interactive device 12. One possible data link modality 52 is infrared (IR), while another is radio frequency (RF). Other data link modalities 52 such as optical signals, inaudible sounds or tones, Bluetooth, WiFi, ZigBee, and so forth may also be utilized. Those having ordinary skill in the art will recognize that any other suitable data link modality may be substituted without departing from the scope of the present disclosure.
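As one hypothetical illustration of how the transceiver 50 might incorporate a synchronization identifier into a data transmission packet, consider the following sketch. The field layout, start byte, and checksum scheme are assumptions for illustration only; the disclosure requires merely that the packet conform to the chosen link standard (e.g., RS-485 or RS-232):

```python
import struct

START_BYTE = 0x5A  # assumed frame delimiter, not specified by the disclosure

def frame_sync_command(sequence_id: int, sync_identifier: int) -> bytes:
    """Pack a synchronization command into a small big-endian packet:
    start byte, 16-bit audio sequence identifier, 16-bit synchronization
    identifier, and a 1-byte additive checksum appended at the end."""
    payload = struct.pack(">BHH", START_BYTE, sequence_id, sync_identifier)
    checksum = sum(payload) & 0xFF
    return payload + bytes([checksum])
```

The receiving data link modality would verify the checksum before handing the synchronization identifier to its own programmable data processor.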
Having generally described the components of the interactive device 12, additional details pertaining to several exemplary implementations will now be considered with reference to the circuit diagrams of
In all embodiments, the aforementioned programmable data processor 40 is utilized to control the various peripheral devices connected thereto. Along these lines, each embodiment may utilize the same clock/crystal circuits 41 and power connections 43. Additionally, as explained previously, audio signals are synthesized by an on-board synthesizer that is in turn connected to the acoustic transducer 44 or loudspeaker. In embodiments where the interactive device 12 may move the limbs 16 of the depicted characters such as with the cello or marimba, a motor driver circuit that boosts the control signal from the programmable data processor 40 and isolates potential voltage and current spikes from the actuator 55 may be included.
The transceiver 50 of the interactive device may be implemented with various input/output ports, with the programmable data processor 40 being provided with instructions that implement the basic functionality thereof. In embodiments where infrared communication is utilized, the generated transmission signals from the programmable data processor 40 are passed to an infrared-wavelength light emitting diode circuit 58 shown in
With reference to the schematic diagram of
In one embodiment, the LEDs 54 are utilized to indicate the status of the data link between the interactive device 12 and others. A first LED 54a may be colored red to indicate that a connection with any other interactive device 12 has not been established, while a second LED 54b may be colored green to indicate that a connection has been established. The LEDs 54 may be selectively activated or flashed to indicate various operating modes and status conditions of the interactive device 12.
As mentioned above, the audio playback amusement system 10 may include several interactive devices 12 that simulate different instruments. In this regard, the kind of inputs that can be provided to the interactive device 12 via the input device 48 to alter the playback of the audio sequences may vary depending upon the specifics of the simulated instrument. In one example, it may be the most intuitive to sweep a hand from side to side, as contemplated for a cello or a marimba. In this case, the input device 48 shown in
To simulate a percussion instrument such as a conga or a drum, it may be most appropriate to receive an actual strike from the user's hand upon a striking surface. With reference to the schematic diagram of
For keyboard-type instruments with which it is possible to produce multiple notes, a series of inputs each representative of a particular note may be the most suitable.
As can be appreciated from the foregoing description of the numerous variations of the interactive device 12, the variations share several components, including the programmable data processor 40. In order to streamline the manufacturing process, a single software program can be written to cover all functions of these variations. In order to differentiate one variation from another, at the time of manufacture, data inputs to the programmable data processor 40 representative of the specific variant of that particular interactive device 12 may be provided.
By way of example only, a truth table 76 of
Some entries of the truth table 76 show that some inputs may correspond to two different variations of instruments. For example, in the first row 76a, the binary value “000” may correspond to either a piano or a harp. Furthermore, in the third row 76c, the binary value “010” may correspond to either a drum or a conga. Since such interactive devices 12 function differently and require different audio synthesis despite being categorically the same, a selection modality therefor is also contemplated. With reference to the schematic diagrams of
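The variant-selection scheme described by the truth table can be sketched as a simple lookup. The table contents below echo the example rows above, while the secondary selection input used to disambiguate shared entries is an assumption for illustration:

```python
# Example rows from the truth table: some configuration-pin values map to
# two categorically similar instruments, resolved by a selection input.
VARIANT_TABLE = {
    "000": ("piano", "harp"),
    "010": ("drum", "conga"),
}

def instrument_for_pins(pins: str, selection: int = 0) -> str:
    """Resolve the instrument variant for a given configuration-pin value,
    using the selection input to pick between shared entries."""
    return VARIANT_TABLE[pins][selection]
```

A single firmware image could consult such a table at power-up to configure the appropriate synthesis and input behavior.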
Referring back to the block diagram of
Where the second audio sequence 20 on the second interactive device 12b is the same composition, then the track section identifiers 56 of the respective primary, secondary, and tertiary tracks are also understood to reference the same relative time instance or order sequence number within the overall song. Thus, a third identifier 56c on the primary track 32 of the second audio sequence 20 that references the first audio sequence data element 38c corresponds to the same time instance as that referenced by the first identifier 56a on the primary track 26 of the first audio sequence 18 that references the time-wise identical first audio sequence data element 38a. In some cases, such as when the MIDI format is utilized, there are no separate identifiers 56 for each of the respective tracks 38a-38c, and a single set that applies to all tracks is envisioned. Where alternative formats such as WAV or MP3 are involved, each separate track may have its own identifiers 56 in the manner discussed above.
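The alignment property described above — identifiers on the two tracks referencing the same relative time instances — can be expressed as a small check. This is a minimal sketch under an assumed representation (a mapping from identifier to element index); the disclosure does not specify a data layout.

```python
# Sketch: both tracks should carry the same track section identifiers 56
# at the same relative positions along their sequences of audio data
# elements, so that a synchronization command referencing an identifier
# lands on the time-wise identical element in each track.
def identifiers_aligned(primary_ids, secondary_ids):
    """Return True when both identifier maps (identifier -> element
    index) place the same identifiers at the same relative positions."""
    if set(primary_ids) != set(secondary_ids):
        return False
    return all(primary_ids[k] == secondary_ids[k] for k in primary_ids)
```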
While in some implementations, the order of operations is such that the transmission of the synchronization command occurs after playback of the corresponding audio sequence data element, it need not be limited thereto. In some cases where the transmission is fast enough, real-time playback synchronization may be possible regardless of the order in which the operations are executed. Where the transmission speed and the speed at which the programmable data processor 40 responds to the instructions to generate such transmissions may be less than ideal, playback may be delayed, with the transmission of the synchronization command occurring substantially simultaneously with, or even before, playback of a particular audio sequence data element. Thus, to account for inherent time delays associated with generating and propagating the synchronization command, in actual implementation, the track section identifiers 56 may be processed independently of the audio sequence data element 38. In other words, the track section identifiers 56 may be processed and transmitted to the second interactive device 12b before playback of the corresponding audio sequence data element 38 occurs. The delay between these two events may be preset, or adjusted depending on the quality of the data link.
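The lead-time scheduling described above may be sketched as follows. This is illustrative only, assuming a track is a list of `(element, identifier-or-None)` pairs and that a fixed lead measured in elements compensates for the transmission delay; the `lead` parameter and event tuples are assumptions, not disclosed details.

```python
# Sketch: interleave "transmit" and "play" events so that each track
# section identifier's synchronization command is sent `lead` elements
# before its associated audio sequence data element is played, to
# absorb the delay in generating and propagating the command.
def schedule_events(track, lead=1):
    events = []
    for i, (element, _identifier) in enumerate(track):
        # Look ahead: transmit any identifier due `lead` elements from now.
        j = i + lead
        if j < len(track) and track[j][1] is not None:
            events.append(("transmit", track[j][1]))
        events.append(("play", element))
    # Identifiers within the first `lead` elements are sent up front.
    head = [("transmit", ident) for _, ident in track[:lead] if ident is not None]
    return head + events
```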
Various embodiments of the present disclosure contemplate the exchange of such track section identifiers 56 to synchronously play back the first audio sequence 18 and the second audio sequence 20 on the respective first and second interactive devices 12. More particularly, as transmitted by one interactive device 12, the track section identifier 56 may be referred to as a synchronization command that adjusts the playback of the audio sequence on the receiving interactive device 12, by either advancing or retreating, to the specific audio sequence data element 38 specified thereby.
As indicated above, the primary track represents a melody portion of the composition, while the secondary track represents an accompaniment or harmony portion of the composition. Thus, the first interactive device 12a may play back the melody in one simulated instrument, while the second interactive device 12b may play back the harmony or accompaniment in another simulated instrument different from the first. A rich musical experience is possible, with multiple instruments each playing, synchronously, a different part of the composition. Further enhancement of the user experience is also contemplated with the selective activation of solo, riff or tertiary tracks based on inputs received from the user via the respective input devices 48. This is also understood to be synchronized to the other tracks that are being actively played.
Generally, synchronization between entities is predicated on the setting of one entity as a primary or master and another entity as a secondary, with certain values of the primary being applied to the secondary. In an example embodiment of the audio playback amusement system 10, the first interactive device 12a may be designated the primary, while the second interactive device 12b may be designated the secondary. All interactive devices 12 in the audio playback amusement system 10 can function as either the primary or the secondary, and as a consequence, each of the programmable data processors 40 thereof can be said to have a primary or master mode and a secondary mode. In the primary mode, certain functions are performed by the programmable data processor 40, while in the secondary mode, certain other functions not necessarily the same as those of the primary mode are performed.
As the first interactive device 12a plays back the primary track 26, the track section identifiers 56 are encountered and processed. More particularly, the track section identifiers 56 are transmitted to the second interactive device 12b as a synchronization command. Upon receipt, the synchronization command sets the current playback position on the second interactive device 12b with respect to the secondary track 34. Any subsequent interactive device 12 that begins communicating with the primary becomes secondary thereto, and similarly receives synchronization commands that set the current playback position of a secondary track stored thereon to be in synchrony with the playback of the primary track 26 on the first interactive device 12a.
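The primary/secondary exchange described in the two paragraphs above may be sketched as follows. The class and method names are illustrative assumptions — the disclosure specifies behavior, not an API.

```python
# Sketch of the synchronization exchange: the primary transmits each
# track section identifier 56 it encounters during playback as a
# synchronization command; each secondary sets its current playback
# position on its secondary track accordingly.
class Secondary:
    def __init__(self, section_positions):
        # section_positions: track section identifier -> playback position
        self.sections = section_positions
        self.position = 0

    def on_sync_command(self, identifier):
        # Receipt of the command sets the current playback position.
        self.position = self.sections[identifier]

class Primary:
    def __init__(self):
        self.secondaries = []

    def register(self, device):
        # Any device that begins communicating with the primary
        # becomes secondary thereto.
        self.secondaries.append(device)

    def encounter_identifier(self, identifier):
        # Transmit the identifier to every secondary as a sync command.
        for s in self.secondaries:
            s.on_sync_command(identifier)
```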
The primary/secondary status negotiation process amongst the interactive devices 12, as well as maintenance of playback synchrony of the audio sequences 18, 20, is implemented via software code executed by the programmable data processor 40. An exemplary embodiment of such executable code is depicted in the flowcharts of
After boot-up, the interactive device 12, and more particularly the programmable data processor 40, can enter one of four operational categories. With reference to the schematic diagrams of
In a first position, the switch 82 sets a Do-Re-Mi operational category 204. Although implementation specifics may vary, for the interactive devices 12e, 12f capable of producing varying notes or scales, a press of one of the keys 73 generates its corresponding note or tone on the scale. For multi-tone percussive instruments such as the drum or conga, corresponding sounds are generated depending on which of the capacitive touch sensors 70 was activated as discussed above.
In a second position, the switch 82 sets a second, One Key, One Note operational category 206. With this operational category, each user input received on the input devices 48 (whether that is the key 73, the touch sensors 70, or the proximity sensors 66) plays a single note or audio sequence data element 38 of the respective audio sequence loaded on the interactive device 12 in order. This way, the user can experience “playing” music by merely pressing, tapping, and swiping various input devices without knowing which specific note to play.
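The One Key, One Note behavior — each input, of whatever kind, plays the next audio sequence data element in order — amounts to a sequential cursor over the loaded sequence. A minimal sketch, with assumed names:

```python
# Sketch of operational category 206 ("One Key, One Note"): every user
# input event, regardless of which input device 48 produced it, plays
# the next audio sequence data element 38 in order.
def one_key_one_note(sequence):
    """Return a handler that yields the next data element per input."""
    index = 0
    def on_input(_event):
        nonlocal index
        if index >= len(sequence):
            return None  # sequence exhausted
        element = sequence[index]
        index += 1
        return element
    return on_input
```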
A third operational category 208 and a fourth operational category 210 are essentially the same, as both contemplate an interactive orchestra. The difference relates to the functionality that can be activated via the input device 48. With the third operational category 208, the user inputs cause individual notes to be generated. On the other hand, with the fourth operational category 210, the user inputs cause a series of audio sequence data elements 38 to be played back. For purposes of conciseness, only the fourth operational category 210 will be described. Those having ordinary skill in the art will be able to recognize the modifications that can be made to the fourth operational category 210 to yield the functions possible with the third operational category 208.
Referring now to
One of the ways in which the interactive device 12 may be activated is via a local user input 218. As shown in
As shown in
It is expressly contemplated that when the primary interactive device 12a is operating without cooperation with any others in the vicinity, then both the primary track 26 and the secondary track 28 are played back together, synchronously. In a decision block 240, the primary interactive device 12a checks to see if any secondary interactive devices 12b are currently active, as any such active ones should transmit the secondary handshake. If not, then the accompaniment, harmony, or secondary track 28 is played per step 242. To visually indicate that the interactive device 12 is set to be a master/primary, the red-colored first LED 54a may be illuminated. When the accompaniment, harmony, or secondary track 28 is also being played back, the green-colored second LED 54b may also be illuminated. This is by way of example only, and any other configuration and arrangement of lights may be substituted.
As mentioned above, user inputs provided to the input devices 48 can add another layer of interactivity to the playback of the audio sequences or songs on the interactive device. As a master/primary (or even as a secondary, for that matter, as will be described in greater detail below), the execution continues to check for received user input 244. If there has been a user input, then the tertiary track 30, also referred to as the solo or riff track, is played back in a step 246 for a predetermined time period per instance of detected user input. The playback of the solo or tertiary track 30 is also synchronous with the playback of the primary track 26. Considering that the programmable data processor 40 is synthesizing a particular audio sequence data element 38 with a definite sequence number or timestamp, it is understood that the audio sequence data element 38 of the tertiary track 30 with the same sequence number or timestamp is retrieved and synthesized. This also applies to the synchronous playback of the secondary track 28. The playback of the primary track 26 continues so long as a user input via the input devices 48 is received.
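The synchronized tertiary-track retrieval described above may be sketched as follows. The mapping-based data layout is an assumption; what the sketch shows is the stated principle that the tertiary element sharing the current sequence number or timestamp is the one mixed in while user input is present.

```python
# Sketch: at each sequence number, the primary element is synthesized,
# and while user input is detected, the tertiary (solo/riff) element
# carrying the same sequence number is retrieved and mixed in, keeping
# the solo layer synchronous with the melody.
def synchronized_elements(primary, tertiary, current_seq, user_input):
    """Return the data elements to synthesize at `current_seq`.

    `primary` and `tertiary` map sequence numbers to data elements.
    """
    mix = [primary[current_seq]]
    if user_input and current_seq in tertiary:
        mix.append(tertiary[current_seq])
    return mix
```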
When multiple tracks are being synthesized or mixed at once, it is understood that less than ideal acoustic performance may result. Accordingly, the reduction in volume of certain tracks, whether that is the primary track 26, the secondary track 28, or the tertiary track 30, is contemplated. By reducing the volume, it is envisioned to smooth out the audio output mix for more pleasant listening. Although this aspect has been discussed in relation to the mixing of audio output from a single interactive device 12, such volume reduction or output audio shaping is applicable in the context of multiple remote interactive devices 12 generating respective secondary tracks 34 and tertiary tracks 36. In another embodiment, the playback of the tertiary track 30, 36 is understood to be at the highest volume level, while the playback of the primary track 26, 32 is understood to be lower than that of the tertiary track 30, 36 but higher than that of the secondary track 28, 34. The relative volume levels may be pre-set, or in the alternative, determined based on the received volume level 256 in the synchronization command.
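The relative volume shaping above (tertiary loudest, primary next, secondary quietest) can be sketched numerically. The gain values below are illustrative presets only — the disclosure orders the levels but assigns no numbers — and the optional scaling by a received volume level is one plausible reading of step 256.

```python
# Sketch: per-track gains ordered tertiary > primary > secondary, as
# described. The numeric presets are illustrative assumptions.
DEFAULT_LEVELS = {"tertiary": 1.0, "primary": 0.8, "secondary": 0.6}

def mix_levels(active_tracks, received_volume=None):
    """Return the gain for each active track; a volume level received
    in the synchronization command, if any, scales the presets."""
    scale = received_volume if received_volume is not None else 1.0
    return {t: DEFAULT_LEVELS[t] * scale for t in active_tracks}
```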
Upon reaching the end of the first audio sequence 18, in a decision block 248 it is determined whether a play cycle has been completed. The number of compositions or audio sequences played back can be varied, though by way of example, one embodiment contemplates three. In other words, if three songs have been played, the cycle is complete, and execution returns to the standby mode 212. Otherwise, the interactive device 12 sets itself to secondary status 226, and waits until another one asserts itself as a master/primary, or, more likely, until timing out, at which point master/primary status is reestablished per step 228.
Having covered the functionality of the interactive device 12 in the master/primary mode, the functionality of the same in the secondary mode will now be considered. Referring back to the flowchart of
Once a primary interactive device 12a has been established within the local vicinity, it is possible for multiple other secondary interactive devices 12b to join the communications link. Negotiating with the primary interactive device 12a is sufficient, and no other involvement from existing secondary interactive devices 12b is necessary. If the interactive device 12 can receive the foregoing data within the time limit set, then it too can become a secondary interactive device 12b.
Another possible way in which the interactive device 12 may be set to secondary status is by receiving or otherwise detecting the local user input 218 by way of the wake up/play/stop button 80. After checking for the master handshake in decision block 220, it may be ascertained that one has indeed been received. This is understood to mean that another local interactive device 12 had already asserted master/primary status, notwithstanding the local user input 218. In such case, the interactive device 12b defaults to the secondary mode, and once again, waits until the expiration of a timeout or the receipt of the synchronization commands. As indicated above, this includes the mode identifier received in step 250, the song or audio sequence identifier 22 in the step 252, the track section identifier 56 in the step 254, and the volume setting in the step 256. If the entirety of this data has been received per decision block 260, execution continues to the secondary mode functions as more fully discussed below with reference to the flowchart of
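The completeness check of decision block 260 — proceed to secondary mode only once the full synchronization payload has arrived within the time limit — may be sketched as follows. The field names are illustrative stand-ins for the mode identifier (step 250), song identifier (step 252), track section identifier (step 254), and volume setting (step 256).

```python
# Sketch of decision block 260: the secondary payload must be complete
# (mode, song, track section, volume) and the timeout not yet expired
# before secondary-mode playback begins.
REQUIRED_FIELDS = ("mode", "song_id", "track_section", "volume")

def secondary_ready(received, timed_out):
    """Return True when every required field arrived before timeout."""
    if timed_out:
        return False
    return all(field in received for field in REQUIRED_FIELDS)
```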
In the secondary mode, the secondary interactive device 12b plays the secondary track 34 loaded thereon starting at the track section identifier 56 that is part of the synchronization command according to a step 264. If at any point during the playback of the secondary track 34 the wake up/play/stop button 80 is pressed, then such playback is stopped, and the execution jumps back to the standby mode 212. While continuing to play the secondary track 34, the secondary interactive device 12b waits 266 for user input from the input device 48. When a decision block 268 detects such input, the solo, riff, or tertiary track 36 loaded on the secondary interactive device 12b is played in a step 270, in synchrony with the secondary track 34 being played, as well as the primary track 26 being played on the primary interactive device 12a.
The playback of the secondary track 34 is in accordance with the operational category as specified in the received mode identifier 250. For instance, if the mode identifier designates a second operational category 206 (One Key, One Note), then only a single audio sequence data element 38 is generated per input received on the primary interactive device 12a. The behavior of the secondary interactive device 12b when set to the other operational categories likewise follows those discussed above. The operational category designated by the master/primary is understood to be controlling, regardless of whether the switch 82 is set to a different operational category.
Just as the secondary handshake is used to maintain the status of a given interactive device 12 as the master/primary, as described above, the master handshake is utilized to maintain the secondary status of a given interactive device 12. The decision block 272 checks whether the master handshake has been received within a predetermined time limit. If not, this means that there is no ongoing communication with the primary interactive device 12a, and so the existing secondary interactive device 12b is switched over to a master/primary. The periodic transmission of the synchronization command, which includes the track section identifiers, maintains the synchrony of the secondary interactive device 12b by advancing or retreating the playback of the secondary track 34.
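The handshake-based role maintenance of decision block 272 amounts to a small state machine: a secondary that stops receiving the master handshake within the time limit promotes itself to master/primary. A sketch, with tick-based timing as an assumption:

```python
# Sketch of decision block 272: each tick either records a received
# master handshake or advances a silence counter; once the counter
# exceeds the time limit, the secondary takes over as master/primary.
class RoleKeeper:
    def __init__(self, timeout_ticks):
        self.role = "secondary"
        self.timeout = timeout_ticks
        self.since_handshake = 0

    def tick(self, handshake_received):
        if handshake_received:
            self.since_handshake = 0
        else:
            self.since_handshake += 1
            if self.role == "secondary" and self.since_handshake > self.timeout:
                # No ongoing communication with the primary: switch over.
                self.role = "master"
        return self.role
```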
If communication with the primary interactive device 12a is ongoing, the next decision block 274 determines whether a cycle (three songs played) has been completed. Just as a completed cycle in the master/primary functionality led to the standby mode 212, completion of a cycle in the secondary functionality leads to the same standby mode 212. If not, the secondary handshake is transmitted in step 276, and the interactive device 12b again waits in secondary status to receive further playback and synchronization commands from the master/primary.
Although the audio playback amusement system 10 has been described in the context of two similarly configured dolls or interactive devices 12, it is also possible for one interactive device 12 to communicate with and receive synchronization commands from a primary/master that is not so configured. For example, such a device could be a personal computer, tablet, or other general purpose data processing apparatus on which interface software could be installed. The interface software could communicate with one or more interactive devices 12 utilizing conventional data link standards such as USB, a two-way IR dongle, Bluetooth, or WiFi, and permit the control thereof, including the playback of melody tracks, harmony tracks, accompaniment tracks, solo tracks, riff tracks, and the like via the interface software. In general, the same features of the interactive device 12 as described above could be incorporated into the interface software.
The particulars shown herein are by way of example and for purposes of illustrative discussion of the embodiments of the present invention only and are presented in the cause of providing what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the present invention. In this regard, no attempt is made to show structural details of the present invention in more detail than is necessary for the fundamental understanding of the present invention, the description taken with the drawings making apparent to those skilled in the art how the several forms of the present invention may be embodied in practice.
Fong, Peter Sui Lun, Fong, Kelvin Yat Kit, Zhu, Xi-Song, Liu, Chun-Yan