Described herein are embodiments of sensorized spherical input and output devices, systems, and methods for capturing gestural input from a user's physical interactions with a spherical device, including tossing, bouncing, and spinning. In one embodiment, the spherical input and output device includes force sensors arranged to capture a variety of user gestures with the sphere. A microprocessor receives sensor input and transmits the sensor data to receiving devices, which include computer software to translate the sensor signals into audio output, visual output, or various functions on the receiving devices. Embodiments of the invention include the integration of inertial measurement units (IMUs), which may combine accelerometers, gyroscopes, and magnetometers to capture complex user gestures involving motion, direction, and spin of the sensorized sphere in three-dimensional space.
1. A method of generating musical outputs from a spherical input and output device, comprising the steps of:
tossing said device against a surface of an object to generate a return movement;
activating a sensor embedded under a compressive material of said device to generate a force signal corresponding to the impact of said device against said surface;
activating an inertial measurement unit sensor to generate an inertial measurement unit signal corresponding to the acceleration, velocity or spin of said return movement;
processing said force signal to generate force data;
processing said inertial measurement unit signal to generate inertial measurement data;
transmitting said force data and said inertial measurement data to a receiving device electrically coupled to the spherical input and output device;
translating said force data and said inertial measurement unit data to generate a musical output corresponding to the force and one of acceleration, velocity or rotation of the return movement.
9. A spherical input and output device for capturing a return movement from tossing said device against a surface, said device comprising:
a compressive outer shell for absorbing the force of said device against said surface;
a protected inner core with a power source for powering electrical components comprising:
a sensor for capturing a force of impact of said device against said surface to generate force signals;
an inertial measurement unit sensor for measuring the return movement of said device to generate inertial measurement signals corresponding to acceleration, velocity or spherical rotation of said device;
a microprocessor for processing force signals and inertial measurement signals for generating force data and inertial measurement unit data;
a transceiver for transmitting force data and inertial measurement unit data to a receiving device electrically coupled to the spherical input and output device,
said receiving device comprising:
an algorithm for translating said force data and said inertial measurement unit data to generate a musical output corresponding to the force and one of acceleration, velocity or spin of the return movement.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
8. The method of
10. The device of
11. The device of
12. The device of
14. The device of
15. The device of
16. The device of
17. The device of
18. The device of
19. The device of
This U.S. non-provisional utility application is a continuation application of and claims priority to U.S. non-provisional application Ser. No. 15/853,722, filed Dec. 23, 2017, which claims priority to U.S. provisional application No. 62/441,113, filed Dec. 30, 2016.
Aspects of the present disclosure relate to a sensorized spherical input and output device for systems, processes and applications for learning, music, rehabilitation and gaming.
A sphere is an object found universally in nature, from planets and stars to atomic particles. Because of its geometry and its interaction with forces such as gravity and movement through space, the ball has been an ideal object for sport and play across civilizations from ancient to modern. A sphere's inherent qualities and unique interaction with forces in nature render it an ideal object for human use and interaction. And while balls and spheres have been ubiquitous in traditional sports and games, and interactive hardware and connected smart devices have become an integral part of society, there are few devices in the market that combine the sphere's natural ability to capture full ranges of human gestures, forces, and interactions with interactive output in the form of music, data, art, gaming, and learning applications.
Moreover, there is a need for spherical input and output devices in healthcare. Rehabilitation of stroke patients and other patients who have neuromuscular or neurodegenerative disorders with loss of motor or sensory functions requires treatment that includes motor and sensory learning. A patient with such a disorder (for example, following a stroke) may lose fine-tuned motor skills in the hands or fingers or be unable to complete complex tasks that require fine motor coordination. Other patients who suffer neurodegenerative damage may lose visual, auditory, tactile, or other sense impressions that are vital to daily life.
It has been well documented that intensive and repetitive training of motor skills can be used to modify neural organization and support recovery of functional motor skills. Schneider S, Münte T, Rodriguez-Fornells A, Sailer M, et al., Music-Supported Training is More Efficient than Functional Motor Training for Recovery of Fine Motor Skills in Stroke Patients. Music Perception: An Interdisciplinary Journal. 2010; 27(4):271-280. doi:10.1525/mp.2010.27.4.271. Many forms of treatment are currently deployed for such patients, including having patients squeeze objects, place blocks or objects in a puzzle, and even interact with computerized boards on a wall which have been sensorized to detect whether a user has pushed a button in response to visual or auditory feedback. Other brain-training exercises include having a user play learning or memory games on a computer with a traditional mouse and keyboard, which require the user to identify objects or words on a screen and take appropriate responsive action in the game with the mouse or keyboard.
It has also been demonstrated that music and sound are beneficial to patients who have suffered neurodegenerative loss of motor or sensory skills. It is well documented that certain musical tones, timbres, and rhythms can stimulate different parts of the brain, such as the auditory cortex, the visual (occipital) lobes, the cerebellum, the motor cortex, and the amygdala. The complex range of musical tones, timbres, and rhythms can activate different regions of the brain and trigger neurophysiological responses and cognitive learning. Studies have shown that music improves learning and cognitive functions in patients with cognitive or neurodegenerative disorders. Brain mapping studies have shown a clear correlation between musical notes, tones, frequencies, tempo, rhythm, and other musical intonations and the activation of, or interaction between, different regions of the brain. Gaidos S., More than a feeling: Emotionally evocative, yes, but music goes much deeper. Science News. 2010; 178(4):24-29. doi:10.1002/scin.5591780423.
The auditory cortex is organized in terms of sound frequencies, with some cells responding to low frequencies and others to high. Moving from the inside to the outside of part of the auditory cortex, different kinds of auditory analysis take place. In the core, basic musical elements, such as pitch and volume, are analyzed, whereas surrounding regions process more complex elements, such as timbre, melody and rhythm.
There are few activities that require more of the brain than playing music. It uses complex feedback systems that take in information, such as pitch and melody, through the auditory cortex and allow the performer to adjust their playing.
The visual cortex is activated by reading or even imagining a score; the parietal lobe is involved in a number of processes, including computation of finger position; the motor cortex helps control body movements; the sensory cortex is stimulated with each touch of the instrument; the premotor area helps perform movements in the correct order and time; the frontal lobe plans and coordinates the overall activity; and the cerebellum helps create smooth, integrated movements. Habib M, Besson M., What do Music Training and Musical Experience Teach Us About Brain Plasticity? Music Perception: An Interdisciplinary Journal. 2009; 26(3):279-285. doi:10.1525/mp.2009.26.3.279.
It is also well documented that musical learning helps autistic children and children with learning disorders. Research shows that music enhances and optimizes the brain, providing better, more efficient therapy and improved performance of cognitive, motor, and speech/language tasks. Lee H, Noppeney U., Long-term music training tunes how the brain temporally binds signals from multiple senses. Proceedings of the National Academy of Sciences. 2011; 108(51). doi:10.1073/pnas.1115267108. Studies show that people perform these tasks better with music than without.
Research shows musical training in children enhances the activity of important neural systems. Playing a musical instrument results in changes in the brain in specific regions such as the auditory cortex used for processing musical tones; the motor cortex, a region activated when using the hands or fingers; the cerebellum, a part of the brain used in timing and learning; and the corpus callosum, which acts as a bridge between both hemispheres of the brain. Other regions may also be enhanced.
Studies show that music can improve motor skills. Palmer C, Meyer R K., Conceptual and Motor Learning in Music Performance. Psychological Science. 2000; 11(1):63-68. doi:10.1111/1467-9280.00216. Research supports parallels between rhythm and movement. Rhythm can be used as an external timekeeper to organize, coordinate, and improve movement. Halsband U, Binkofski F, Camp M. The Role of the Perception of Rhythmic Grouping in Musical Performance: Evidence from Motor-Skill Development in Piano Playing. Music Perception: An Interdisciplinary Journal. 1994; 11(3):265-288. doi:10.2307/40285623. Musical training and engagement can facilitate more functional, organized, coordinated, and higher quality movements in fine motor and gross motor skills including motor planning, motor control, motor coordination, gait training and body awareness.
Research also demonstrates that music can improve cognitive skills. Music provides an optimal learning environment, organizes information into smaller packages that are easier to learn and retain, and aids in memorization. Music has the capacity to engage attention and encourage concentration. Research indicates that attention is necessary before learning can take place. Research indicates that music is often successful as a mnemonic device for learning new concepts, such as learning the alphabet through the “ABC Song”. Music therapists use music to improve cognitive skills such as attention, memory, mood, and executive functioning (higher level thought processing), including academic skills. Making Material More Memorable . . . with Music. The American Biology Teacher. 2013; 75(9):713-714. doi:10.1525/abt.2013.75.9.16.
Musical learning can improve speech and language. Research supports parallels between singing and speech production, and music's ability to facilitate improved communication skills. Murphy A T, Simons R F., Music Therapy for the Speech-Handicapped. The Elementary School Journal. 1958; 59(1):39-45. doi:10.1086/459687. Musical engagement can enable those without language to communicate and express themselves non-verbally. Additionally, musical engagement often assists in the development of verbal communication, speech, and language skills. Music therapists can assist a person with dysfunction or delays in various speech/language abilities to learn how to speak through singing or communicate nonverbally through music.
Music can also improve social, emotional and behavioral skills. Music is highly motivating and engaging and may be used as a natural reinforcer for desired responses. Musical engagement can stimulate patients to reduce negative and/or self-stimulatory responses and increase participation in more socially appropriate ways. Musical engagement facilitates improved social skills such as shared play, turn-taking, reciprocity, and listening and responding to others. Musical engagement also provides a non-threatening and structured environment in which individuals have the opportunity to develop identification and appropriate expression of their emotions.
Music can improve sensory skills. Music provides concrete, multi-sensory stimulation (auditory, visual, and tactile). The rhythmic component of music is very organizing for the sensory systems, and as a result, auditory, visual, tactile, proprioceptive (input to muscles and joints), vestibular (input for balance) and self-regulation processing skills can be improved through musical engagement.
Since it has been shown that patients with neurodegenerative, sensory, motor, or cognitive disorders react favorably to games and interactive devices, a sensorized ball or sphere is an ideal object for patients: it is easily adaptable to therapies that enhance learning and benefit from dynamic interactions. A ball, which is spherical in shape, can be easily held, rolled, touched, and squeezed. Such a ball can be adapted with a number of different sensors that measure data related to touch or movement, and can be mapped to generate auditory, visual, and haptic feedback for the user. A ball that has sensors and an embedded processor can record the input of the user or patient as they interact with the ball by pressing sensors, rolling, or throwing and catching the object. Interaction with such a ball can stimulate learning, improved motor function, and sensory stimulation, as well as neurophysiological changes which can be recorded through software, hardware, and brain mapping tools such as CAT, PET, EEG, or MRI scanning equipment.
In order to track the motor developmental progress of stroke patients, and others with neuro-motor, neuro-sensory, or neurodegenerative disorders, what is desired is a sensorized ball that is connected to a computer which records user input (in the form of pressure, touch, movement, and other gestures) and has output means to provide a visual display and auditory feedback of the user's interactions with the ball.
Also, given the documented benefits of neuro-musical therapy, what is needed is a ball that adapts as a musical instrument to the abilities and skills of a user. Under the theory that musical learning and engagement enhance general learning and cognitive function, facilitating a meaningful musical experience for a patient with neurodegenerative impairments using a traditional keyboard, trumpet, or even a computer can present prohibitive barriers for a therapist and their patient. Such instruments are difficult to learn even for patients without motor, sensory, or cognitive impairments. However, a ball that simply requires the user to squeeze one or more sensor-activated areas that fit the natural gestures of a user is far more approachable for a user with neurodegenerative and/or cognitive issues. Such a ball can be programmed based on the input and abilities of the user, unlike traditional musical instruments.
Another benefit of such a device is to introduce a variety of musical possibilities that can be customized by a user through a computer user interface. This allows a user to selectively specify the sounds, instruments, and audio samples which are mapped to one or more sensors along the surface area of a sensorized spherical device.
An embodiment of the present invention is a sensorized spherical control interface and input and output device capable of sending and receiving user-input data wirelessly to and from other devices, which can then be mapped to control music and sound, video, lights, motors, video game mechanics, and other systems, and which can also be used to capture and record data streams from user input. In one embodiment, sensors are embedded along the surface of the sphere, allowing it to maintain the physical properties of a sphere. The sensorized spherical interface can be used for medical rehabilitation therapies; as a musical instrument; for dancers and other artists; in a series of learning games for children; in sporting goods; to control video game mechanics; in robotics; and generally as a more ergonomic input/output device for interfacing with a wide variety of other systems.
One embodiment of the invention includes a spherical input and output device with sensors responsive to input from a plurality of user gestures; wherein a plurality of sensors are in spatial proximity along the surface area of said spherical input and output device in a configuration capable of receiving input from the hands and fingers of a user grasping the spherical input and output device with one or both hands; and an inner core with electrical components comprising: a microprocessor for processing signals from one or more of said sensors; a power source for powering said sensors and said electrical components; and a transceiver for transmitting sensor signals corresponding to said plurality of user gestures to a computing device.
One embodiment of the invention includes a method for capturing electrical input from a spherical input and output device, including the steps of: receiving input through sensors from a plurality of user gestures; wherein the plurality of the sensors are in spatial proximity along the surface area of the spherical input and output device in a configuration capable of receiving input from the hands and fingers of a user grasping the spherical input and output device with one or both hands; receiving electrical signals at a microprocessor from a plurality of sensors responsive to each of said plurality of user gestures; processing said electrical signals to create a data output corresponding to each of said plurality of user gestures; and transmitting the data output to a computing device.
One embodiment of the invention includes a sensorized sphere wherein a plurality of sensors capture user gestures through a sensing module responsive to touch, a sensing module responsive to force, and a sensing module responsive to movement or orientation of the device.
In one embodiment, the sensors are selected from a group consisting of tactile sensors, force sensors, pressure sensors, proximity sensors, and inertial measurement units (IMU) including, accelerometers, magnetometers, and gyroscopes.
In one embodiment, the sensorized sphere comprises one or more capacitive sensors or sensor arrays for capturing data from a plurality of user gestures.
In some embodiments, the sensorized sphere includes an outer protective material made of rubber, silicone, plastic, glass, wood, fabric, or a synthetic polymer.
In one embodiment, the inner core of the sensorized sphere is surrounded by a first conductive layer, second resistive layer, and third conductive layer.
In one embodiment, the sensorized sphere has force sensors inlaid under an outer layer in a configuration to conform to a plurality of fingers of a human hand such that when the device is grasped by the user, the sensors are in proximity to a plurality of fingers.
In one embodiment, the sensorized sphere is electrically coupled to a computing device which may include a smartphone, tablet computer, audio output system, television, laptop, desktop computer, MRI machine, EEG machine and other medical devices which are capable of providing real time feedback on a user's neuronal activity.
One embodiment of the invention is a system for processing sensor signals from a spherical input and output device, including: a computing device electrically coupled to the spherical input and output device; a receiver for receiving signals from the sensors of the spherical input and output device; a sensor processing engine for processing the sensor signals; a memory for storing data corresponding to sensor inputs; an audio engine for translating the sensor signals to audio outputs corresponding to said data; and a graphics engine for displaying the data files corresponding to the sensor output signals.
One embodiment of the system includes audio files individually mapped to individual sensors on the spherical input and output device; wherein the audio output can be modified based on the user's gestures with said spherical input and output device to create a multilayered musical experience.
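The per-sensor audio mapping described above can be sketched in software. The following Python fragment is illustrative only: the sensor IDs, file names, and the `play` callback are hypothetical placeholders, not elements from the specification.

```python
# Hypothetical mapping of individual sensors to audio samples. Scaling the
# playback volume by the user's force layers dynamics over the base sample,
# giving the multilayered musical experience described above.

SAMPLE_MAP = {
    0: "kick.wav",
    1: "snare.wav",
    2: "hihat.wav",
}

def on_sensor_event(sensor_id, force_amount, play):
    """Trigger the sample mapped to a sensor, if any, scaling volume
    (0.0-1.0) by the measured force of the user's gesture."""
    sample = SAMPLE_MAP.get(sensor_id)
    if sample is not None:
        play(sample, volume=force_amount)
```

A receiving device would call `on_sensor_event` once per incoming sensor message, passing its own audio-engine playback function as `play`.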
One embodiment of the system includes computer programmable code for generating a map, or graph, and a data history of the user's gestures with the spherical input and output device to track the progress and motor mobility of a user's hands and individual fingers.
One embodiment of the system includes a haptics module for providing tactile feedback to the user based on the user's gestures with the spherical input and output device.
One embodiment of the invention is approximately 5-7 inches in diameter and includes a protective spherical core and a shell (with a skin) which covers the core. The core contains a circuit board, a microprocessor, an accelerometer, a gyroscope, a wireless transmitter, and a battery. The shell contains an array of sensors which are embedded and inlaid into the surface (skin) of the sphere and that connect to processors in the circuit board inside the protective core. In one embodiment, the shell and skin are replaceable and interchangeable as accessories, according to different uses and software mappings.
One embodiment of the invention includes a metal core covered by a resistive fabric material, and a conductive silicone (or conductive rubber) shell and skin. When the user squeezes the sphere, the resistance measured between the metal core and the conductive shell decreases. These fluctuations in measured resistance can be used to measure the user's force on the sphere, and can be mapped to musical gestures such as swelling dynamics in sample playback, or to trigger musical events according to different degree-stages of force.
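The resistance-to-dynamics mapping described above can be expressed as a short software sketch. The resistance endpoints, the MIDI-style 0-127 velocity range, and the degree-stage thresholds below are assumptions for illustration, not values from the specification.

```python
# Illustrative mapping from the resistance measured between the metal core
# and the conductive shell to swelling dynamics and discrete force stages.

R_REST = 100_000.0   # assumed resistance (ohms) with no squeeze
R_FULL = 5_000.0     # assumed resistance at maximum squeeze

def squeeze_amount(resistance_ohms: float) -> float:
    """Normalize measured resistance to a 0.0-1.0 squeeze value;
    squeezing lowers resistance, so lower resistance -> higher value."""
    amount = (R_REST - resistance_ohms) / (R_REST - R_FULL)
    return max(0.0, min(1.0, amount))

def to_velocity(amount: float) -> int:
    """Map squeeze amount to a MIDI-style velocity for swelling dynamics."""
    return round(amount * 127)

def force_stage(amount: float) -> str:
    """Trigger musical events according to degree-stages of force."""
    if amount < 0.25:
        return "soft"
    elif amount < 0.75:
        return "medium"
    return "hard"
```

A continuous stream of resistance samples fed through `to_velocity` would drive sample-playback dynamics, while `force_stage` transitions would trigger discrete musical events.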
One embodiment of the invention includes small silicone force-sensing buttons which are seamlessly inlaid into the surface skin of the sphere in an array that allows for ergonomic use by following the natural positioning and contours of human hands as they naturally and comfortably rest on the surface of the sphere.
One embodiment of the invention includes proximity sensors embedded in the surface of the sphere to measure the user's hands' proximity to the surface of the sphere.
One embodiment of the invention includes fabric force sensors arranged ergonomically around the surface of the sphere.
One embodiment of the invention includes small sensors that are arranged in an array covering the entire surface area of the sphere.
One embodiment of the invention includes an augmented reality engine which can dynamically remap the user interface configuration.
One embodiment of the invention includes a wooden shell and skin, with built-in capacitive touch sensors.
One embodiment of the invention includes a glass shell and skin, with built-in capacitive touch sensors.
One embodiment of the invention includes a metal shell and skin, with built-in capacitive touch sensors.
One embodiment of the invention includes linear potentiometer sensors embedded in the surface of the sphere that measure the distance (along a slider) from the surface of the sphere as the user applies force toward the sphere's core. The linear potentiometer extends from the surface of the sphere into its shell and is fixed to the core at its other end. At the top of the potentiometer is a small button which is fused flush with the surface of the skin. The slider is moved up as the user applies force to the button fused into the surface of the sphere's material, and down as the material recovers from the force. The amount of force required to move the potentiometer is therefore determined by the properties of the uniquely chosen shell and skin materials. A soft silicone shell will require less force than a very hard rubber shell, for example. Also, because different materials recover their form very differently after force has been applied, the materials chosen in each case will affect the recovery of the potentiometer to its resting position. These sensors are very cost-effective and can provide a rich amount of data. The integrated sensors' range will also be affected by the nature of the materials used for the shell and skin.
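The potentiometer readout above can be sketched as a displacement-to-force conversion. The ADC resolution, slider travel, and the linear-spring stiffness model below are illustrative assumptions; the actual relationship depends on the shell and skin materials as described.

```python
# Hypothetical readout of the inlaid linear potentiometer as a
# displacement sensor, with force estimated by treating the shell
# as a linear spring.

ADC_MAX = 1023        # assumed 10-bit ADC
TRAVEL_MM = 10.0      # assumed full slider travel in millimeters

def displacement_mm(adc_reading: int) -> float:
    """Convert a raw ADC reading to slider displacement."""
    return (adc_reading / ADC_MAX) * TRAVEL_MM

def estimated_force_n(adc_reading: int, stiffness_n_per_mm: float) -> float:
    """Estimate applied force from displacement. A soft silicone shell
    corresponds to a low stiffness constant, a hard rubber shell to a
    high one, mirroring the material dependence described above."""
    return displacement_mm(adc_reading) * stiffness_n_per_mm
```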
One embodiment of the invention includes light sensors that orient outwards from the core through holes in the surface of the sphere. These holes are covered by a clear membrane so as to allow light through while also maintaining a seamless surface and skin. As the user covers and uncovers specific holes, data is sent to a microprocessor and corresponding software mapping on a computing device.
One embodiment of the invention includes a charging station with built-in speakers and embedded system, including a CPU and touchscreen. The sphere is wirelessly paired with the docking station and runs the various software systems for each different mapping of the sphere's data stream.
One embodiment of the invention includes a camera (or series of cameras) which can be employed as sensors or as live video feed affected by gestures and other user controls.
One embodiment of the software mapping is to control the spatial position, direction, and speed of musical sounds and other sound design.
One embodiment of the software mapping is to control the spatial position, direction, and speed of a physical drone, or other similar remote-controlled devices and uncrewed vehicles.
The detailed description set forth below, in connection with the appended drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, and in order to avoid obscuring such concepts, well-known structures and components are shown in block diagram form.
One embodiment of the invention includes ten sensors embedded along the surface of the sphere. In one embodiment, the sphere is approximately 5-7 inches in diameter in order to conform to the shape of an average human hand and to allow a user to easily grasp the sphere with one or both hands. Sensors may be inside or outside of the surface. Each sensor position corresponds to one of the fingers of a human hand. In one prototype embodiment, 0.5″ force-sensing resistors (FSRs), Sparkfun SEN-09375 ROHS made by Interlink Electronics, were used. The properties and characteristics of these FSRs as described in the FSR Integration Guide and Evaluation Parts Catalog With Suggested Electrical Interfaces (v.1.0, 90-45632 Rev. D) are incorporated by reference. This is a force-sensitive resistor with a round, 0.5″ diameter sensing area. The FSR varies its resistance depending on how much pressure is applied to the sensing area: the harder the force, the lower the resistance. When no pressure is applied to the FSR, its resistance will be larger than 1 MΩ. This FSR can sense applied force anywhere in the range of 100 g-10 kg. These sensors can be placed at different regions along the sphere or core to allow for a natural grip by the user and to optimally capture user gestures and interactions with the sphere. There is no limit on the number of sensors that can be embedded along the surface of the sphere. "Along the surface" may mean that some sensors are inside the sphere at some distance in proximity to the surface, while others are on the outside of the surface or just underneath a sensor pad. In one embodiment, the sensors may include force sensors which are triggered by the user's compression of the sensor or squeezing of the palms.
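An FSR of this kind is typically read through a voltage divider. The sketch below is one common interfacing approach, not the specific circuit of the prototype: the 10 kΩ pull-down resistor and 10-bit ADC are assumptions, while the 3.3 V supply matches the system voltage of the prototype microprocessor and the 1 MΩ unpressed figure comes from the description above.

```python
# Illustrative voltage-divider readout for a 0.5" FSR: the FSR sits between
# the supply and the ADC pin, with a fixed pull-down resistor to ground.
# Harder presses -> lower FSR resistance -> higher ADC reading.

VCC = 3.3            # assumed supply, matching the 3.3 V system voltage
R_FIXED = 10_000.0   # assumed pull-down resistor (ohms)
ADC_MAX = 1023       # assumed 10-bit ADC

def fsr_resistance(adc_reading: int) -> float:
    """Recover the FSR's resistance from the voltage across R_FIXED."""
    v_out = (adc_reading / ADC_MAX) * VCC
    if v_out <= 0:
        return float("inf")   # unpressed: effectively open circuit
    return R_FIXED * (VCC - v_out) / v_out

def is_pressed(adc_reading: int, threshold_ohms: float = 1_000_000.0) -> bool:
    """Treat readings implying under ~1 MOhm as a touch, per the
    unpressed-resistance figure above."""
    return fsr_resistance(adc_reading) < threshold_ohms
```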
Other types of sensors, including sensor arrays and matrices, tactile sensors and arrays (as well as piezoelectric, piezoresistive, capacitive, elastoresistive sensing) may also be employed, depending on the input that needs to be measured and in consideration of desirable outputs. The different types of sensors that can be used in the construction of the sensorized sphere are known to one of ordinary skill in the art. In one embodiment, a tactile matrix array is formed across the full inner surface of the sphere such that the user can hold or grip the ball from any direction and their fingertips will land on at least one tactile element.
In one embodiment, the force sensors in an array along the surface of the sphere are embedded into a compressive material (such as silicone or a foam, like blown EVA, for example) such that their behavior and data output depend on the materiality of the compressive layer and the positioning of each sensor (FSR, piezoresistive, or otherwise) within that layer. A firmer, less compressive layer with embedded sensors will result in different sensor behavior and data output than a softer, more compressive layer: sensors embedded in the more compressive layer will return more slowly to their default state of electrical resistance when engaged and then disengaged by a user, and likewise will be slightly slower and more measured in their response to being 'attacked' or compressed. A more compressive layer also potentially allows for a wider range of sensor sensitivity to certain gestures, and often a steadier, smoother data stream. For each application and use case there is an optimal range of material compression relative to the embedded sensors, which affects the sensor behavior and the quality of data output, and in turn the possibilities for mapping user gestures to meaningful outputs. If the compressive layer is too soft, for example, it may degrade the discrete data localization coming from each individual sensor as the shape of the surface becomes excessively distorted through user compression. If it is too firm, the user may lose some expressive control, which limits the range of effective user gestures and therefore the richness and diversity of applications.
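The slower, smoother response of sensors under a more compressive layer can be modeled (or imposed in software) with an exponential moving average. This is a generic signal-processing sketch, not a method from the specification; the smoothing factor is an assumed tuning parameter.

```python
# Exponential smoothing of a raw sensor stream. A factor near 1.0 behaves
# like a firm layer (fast, direct response); a factor near 0.0 behaves like
# a very soft layer (slow, heavily smoothed response to 'attacks').

def smooth(readings, alpha):
    """Return the exponentially smoothed version of a sensor stream."""
    smoothed, state = [], None
    for r in readings:
        state = r if state is None else alpha * r + (1 - alpha) * state
        smoothed.append(state)
    return smoothed
```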
The sensor signals are picked up at the core microprocessor. In one prototype embodiment, a Sparkfun Fio v3-ATmega32U4 DEV-11520 ROHS development board (central processor) is used as the core microprocessor. Its JST connector and 3.3 V system voltage make this processor suitable for portable devices. The processor is compatible with Li-Poly (or lithium-ion) batteries. Wireless sensor networks and communications are provided by an on-board XBee socket. The ATmega32U4, running at 8 MHz, makes it possible to use the on-board USB jack not only to charge a connected Li-Poly battery, but to program the device as well. The features and properties of this part as noted in the schematics and datasheets for the Fio v3-ATmega32U4 are incorporated by reference herein. https://www.sparkfun.com/products/11520.
In one prototype embodiment, the microprocessor is connected to a wireless Bluetooth transmitter, the Sparkfun RN42XV Bluetooth Module (WRL-11601 ROHS). The RN42XV is a small form factor, low power Bluetooth radio module offering plug-in compatibility for the widely used 2×10 (2 mm) socket typically used for 802.15.4 radio modules. Based on the 2×10 (2 mm) socket footprint often found in embedded applications, the Roving Networks RN42XV module provides Bluetooth connectivity in legacy and existing designs that may have been based upon the 802.15.4 standard. The RN42XV Class 2 Bluetooth module is based on the RN42. The module supports multiple interface protocols, an on-chip antenna, and Bluetooth EDR, and delivers up to a 3 Mbps data rate for distances up to 20 meters. The properties and characteristics of this Bluetooth module as described in its datasheet RN4142XV-DS by Roving Networks (v.1.0) are incorporated by reference herein.
In one prototype embodiment, the microprocessor is further connected to an Inertial Measurement Unit (IMU) comprising an accelerometer, magnetometer, and gyroscope on an Adafruit 9-DOF LSM9DS0 chip. The chip is 33 mm×20 mm×2 mm/1.30″×0.79″×0.08″ and weighs approximately 2.3 g. Inside the chip are three sensors: a 3-axis accelerometer, which can inform the user which direction is down towards the Earth (by measuring gravity) or how fast the board is accelerating in 3D space; a 3-axis magnetometer, which can sense where the strongest magnetic force is coming from and is generally used to detect magnetic north; and a 3-axis gyroscope, which can measure spin and twist. By combining this data a user can orient the sphere in 3D and use the unique gestural measurements of direction, acceleration, position and spin to uniquely manipulate the output of sound, music, mathematical or visual data, and also calibrate the sphere. The properties and characteristics of this IMU are described in the datasheet for Adafruit 9-DOF Accel/Mag/Gyro+Temp Breakout Board—LSM9DS0, PRODUCT ID: 2021, which is incorporated by reference herein.
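As a rough illustration of how the three sensor types might be combined, the sketch below derives pitch and roll from the accelerometer's gravity vector and a tilt-compensated heading from the magnetometer. The axis conventions, units, and fusion formulas are common textbook choices, not values taken from the LSM9DS0 datasheet.

```python
import math

# Illustrative 9-DOF fusion (assumed axis conventions, not from the
# datasheet): the accelerometer supplies the gravity vector, giving
# roll and pitch; the magnetometer, rotated back to the horizontal
# plane, gives a heading; the gyroscope would report spin rate directly.
def orientation(accel, mag):
    ax, ay, az = accel                       # accelerometer, gravity included
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    mx, my, mz = mag                         # raw magnetometer readings
    # Tilt-compensate: project the magnetic vector onto the horizontal plane.
    mx2 = mx * math.cos(pitch) + mz * math.sin(pitch)
    my2 = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))
    heading = math.atan2(-my2, mx2)
    return roll, pitch, heading

# Sphere resting flat (gravity straight down), pointed at magnetic north:
roll, pitch, heading = orientation((0.0, 0.0, 9.81), (1.0, 0.0, 0.0))
```

With the sphere at rest and aligned with the field, all three angles come out zero, which is the reference pose a calibration step might use.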
In one prototype embodiment, the microprocessor is further connected to a vibration motor, Sparkfun Vibration Motor ROB-08449 ROHS. The vibration motor provides haptic feedback to the user when one or more sensors are activated. The properties and characteristics of this vibration motor as described in the Product Specification by Zhejiang Yuesui Electron Stock Co., Ltd., Model B1034.FL45-00-015 (2016 Jan. 12) are incorporated herein by reference.
In one prototype embodiment, the microprocessor, Bluetooth module, sensors and haptics are powered by a Sparkfun Lithium-Ion battery, 2 Ah, 3.7 V at 2000 mAh (PRT-13855 ROHS). The battery is lightweight, based on lithium-ion chemistry; each cell outputs a nominal 3.7 V at 2000 mAh and comes terminated with a standard 2-pin JST-PH connector. The properties and characteristics of this battery as described in Sparkfun datasheets and specifications at https://www.sparkfun.com/products/13855 are incorporated by reference herein.
In one embodiment, radio frequency signals from the Bluetooth module or other appropriate RF transmitting device are output to a wireless receiver or transceiver of a computer, smartphone or tablet, where the signals are further processed by the Central Processing Unit (CPU) and software resident on the user's computing device. The signal output may be translated by the computer's software system into musical notes, pre-programmed sounds, melodies, colors, shapes, graphs, or any other logical output that corresponds to the sensory input from the sphere. The accelerometer, magnetometer and gyroscope, in combination or individually, may change the octave, frequency or amplitude of the note or audio data file that is mapped to each sensor, thereby allowing the user to use a single sensor to create a multitude of sounds, musical effects or visual outputs, including digital drawings.
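The idea of one sensor yielding a multitude of sounds can be sketched as a small mapping on the receiving device. This is illustrative only: the tilt normalization and the ±0.5 thresholds are assumptions, not values from the embodiments.

```python
# Illustrative mapping (assumed thresholds): a single force pad drives
# one base pitch, while normalized IMU tilt selects the octave, so the
# same pad produces different notes depending on how the sphere is held.
def pad_to_note(base_freq, force, tilt):
    """force in [0, 1] sets amplitude; tilt in [-1, 1] picks octave -1/0/+1."""
    if tilt > 0.5:
        octave = 1
    elif tilt < -0.5:
        octave = -1
    else:
        octave = 0
    return base_freq * (2.0 ** octave), force  # (frequency in Hz, amplitude)

freq, amp = pad_to_note(261.626, 0.8, 0.9)  # middle-C pad, tilted upward
```

Tilting the sphere upward doubles the pad's frequency (one octave up) while the pressure on the pad still controls amplitude independently.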
In one embodiment, the shell body of the sensorized sphere consists of foam, and the core and its constituent processors are protected by a plastic inner ball or other suitable material which keeps the components safe and damage free. The core may also be made of solid rubber, silicone, 3D-printable materials, or any other suitable material that keeps the hardware of the sphere intact and safe from damage. Because the sensorized sphere may be thrown, dropped, tossed, or squeezed, the sphere is designed to contain and protect the core electronics from damage due to movement and/or impact; as well as to maintain their fixed positioning inside the sphere.
In one embodiment, the sensorized sphere 300 also includes a first inner core 320 which encapsulates the components of core 350. The core 320 may be made out of foam, Styrofoam, rubber, silicone, plastic, or any other suitable materials that can provide the sphere its shape while serving as a protective layer for the inner core 350 components. In one embodiment, the sensorized sphere 300 can be opened through connector 310 which may include a zipper, or in other embodiments, a thread and screw twist-off mechanism which allows the user to open the sphere and reveal its two hemispheres. As shown here, in this embodiment, the two hemispheres of the sensorized sphere 300 are connected by hinge 340 and connector 310 which allows the sphere to be easily opened in order to change the battery 360, replace sensors 410, or access the processors, hardware, and firmware of core 350.
In one embodiment, the inner core 320 of the sphere includes one or more sensors which react to a user's touch, compression, or other gestures. As discussed herein, the sensors may lie along the surface, inside the core at a suitable distance from the surface to detect changes in force, pressure, or resistance, just underneath the surface of the sensor pads 110, or on top of the surface of the sphere. The outer surface of the sphere may also include a USB port which can be used to charge the battery 360 inside the core. The battery is necessary to power the sensors, microprocessor, accelerometer, gyroscope, magnetometer, haptics, and any other electrical components of the sphere. The battery 360 may be a chargeable lithium-ion or lithium-polymer battery, or may consist of non-chargeable standard batteries which may be replaced by the user. The USB charging port is not shown here, but may reside inside the inner core 320 and be accessible to a user on the outer surface 310. The USB charging port may also serve as a connector to a remote computer for uploading data to a user computer as shown in
In one embodiment, the microprocessor 356 processes signals from the sensors 410, the accelerometer 358 and output modifier 362 circuit. The output modifier circuit 362 is electrically connected to the binary modifiers 120. In one embodiment, the data signals from sensors 410 are processed at microprocessor 356 and relayed to a remote computer through the wireless transceiver 354. The electrical connection of the core 350 to a user's computing device is shown in
The computer 600 may also receive programming and data via network communications through the receiver 620 and communicator interface 630 which may include any number of inputs such as USB, or any number of various wired or wireless protocol inputs known to one of ordinary skill in the art. The computer 600 also includes a display 650 and graphical user interface 640 for output of data signals from the sensorized sphere to the user's computer 600. The GUI 640 itself may be a touch screen display which serves as an input and output device for communications with the core 350 of the sensorized sphere. The graphical user interfaces are described in further detail in
In one embodiment, computer 600 may be used to program audio or data files associated with each of the sensors 410. For example, if the sensorized sphere is to serve as a musical instrument, in one embodiment, a user may program audio files or musical notes through computer 600 which are stored in the storage device 690. The data or audio files will be subsequently activated by a user through sensors 410 and the output of such signals will be communicated through the receiver 620 or communicator interface 630 and processed by the processor 610 or firmware or software resident on the user's computer or computing device 600. The output may then be displayed to the user via the GUI 640 or output through some other means such as a sound card on the user's computer to the user's speakers.
The left and right binary modifiers 704 and 706 can also trigger different functions for the sensorized sphere in some embodiments. For example, if the L binary modifier 704 is triggered by the user at step 736, it can be set to either an “on” position 738 or an “off” position 740. In the “on” scenario, the binary modifier 704 can effect a number of different functions, including activation of LEDs 742, change of audio/visual sample 744, volume adjustment 746, or remapping and change of function at 748, which may include changing the audio samples or data files associated with particular sensors. Similarly, the right binary modifier 706, if triggered by the user at step 750, can be set to an “on” position 710 or “off” position 752. If in the “on” position, the binary modifier can effect functions such as game mode selection 754, motor activation at 756, setting or locking the current state 758 of sensor output, or toggling between different modes 760. In these embodiments, the right and left binary modifiers are able to trigger a number of different functions for the user, from changing the audio sample to visual effects, volume, sound effects and a variety of other functions. The functions provided here in 742-748 and 754-760 are exemplary, and one of ordinary skill in the art can program any number of customized user functions that are triggered by the binary modifiers.
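The on/off routing of steps 736-760 can be sketched as a small dispatch structure. The class and function names below are illustrative placeholders, not identifiers from the embodiments:

```python
# Illustrative dispatch for a binary modifier: each press toggles the
# on/off state, and while "on" the modifier routes named requests to
# whatever functions have been mapped to it (cf. steps 742-760).
class BinaryModifier:
    def __init__(self, functions):
        self.on = False
        self.functions = functions  # name -> callable, e.g. {"volume": ...}

    def press(self):
        """Toggle between the 'on' and 'off' positions; return new state."""
        self.on = not self.on
        return self.on

    def trigger(self, name, *args):
        """Run a mapped function only while the modifier is 'on'."""
        if not self.on:
            return None
        return self.functions[name](*args)

# Hypothetical left modifier mapped only to a volume-clamping function.
left = BinaryModifier({"volume": lambda level: max(0.0, min(1.0, level))})
left.press()  # first press: set to "on"
```

Because the mapping is just a dictionary of callables, remapping a modifier to new functions at runtime (step 748) amounts to replacing entries in that dictionary.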
In one embodiment, the sensors are activated at step 702 through force sensing, linear potentiometers, or other capacitive means. The binary modifiers may also serve to affect modifications of signal output, such as sample playback speed (or filter sweeps, panning, amplitude), through signal modifiers 708 and 710, before further processing by the audio engine 720. For example, if the binary modifiers 704 or 706 are activated, the resulting input modifier signals at step 708 or 710 can uniquely change the sound, speed, frequency, octave, or other musical or visual functions that are processed by the audio and graphics engines.
Also, in one embodiment, the binary modifiers 704 or 706 may activate a gate to affect the filtering of the audio output signal. In this embodiment, the sample playback speed and filter sweep can also be depicted at GUI 640 at the user's computer 600. The audio filtering may also be activated or adjusted from input from the accelerometer 358, such that higher acceleration of the sphere by the user may increase or decrease the frequency of the filter sweep. The combination of the sensor inputs, binary modifier inputs, and IMU may result in an elaborate change in the tone, pitch, volume or frequency of the audio file that is linked to each sensor input. In this way, the sensorized sphere serves as a complex musical instrument and control structure enabling the user to simultaneously play music and adjust frequency and tempo of programmed beats, musical notes and tunings, samples, and other features, by movements of the ball.
Depending on the movement of the sensorized sphere along an X, Y, or Z axis, when a certain threshold is reached (for example along a scale of 0.0-1.0, with 1.0 representing more movement along an X, Y or Z axis, or movement towards one or more axes), IMU data 1030 may be generated to affect the sample playback functions of audio outputs. In one example, according to the user's quick or gradual movement of the sensorized sphere through space and the accompanying activation of the accelerometer, the sample playback speed may be accelerated or decelerated in audio output 722. For example, faster acceleration of the ball through space may increase the speed of audio output. In another use case, the acceleration or movement of the sphere may affect the filter sweep of the signal, resulting in an increased frequency of the sound signal in audio output 722. Signal frequency and musical tunings and pitches may also be increased or decreased depending on IMU data.
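The threshold behavior described above can be made concrete with a small mapping function. The 0.5 threshold and the doubling of playback rate at full deflection are assumed example values, not parameters from the embodiments:

```python
# Illustrative threshold mapping (assumed constants): below the
# threshold the sample plays at its base rate; above it, the rate
# scales linearly up to base + max_boost at full deflection (1.0).
def playback_speed(axis_value, threshold=0.5, base=1.0, max_boost=1.0):
    """axis_value in [0.0, 1.0]; returns a playback-rate multiplier."""
    if axis_value <= threshold:
        return base
    excess = (axis_value - threshold) / (1.0 - threshold)
    return base + max_boost * excess
```

The same shape of mapping could drive a filter-sweep frequency instead of playback rate, as in the second use case above.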
In one embodiment of the invention, the sensorized sphere can be calibrated or programmed through machine learning. Machine learning refers to a group of statistical analysis methods that learn from training by example, allowing for inference and generalization over complex data structures, and for pattern identification and comparison with pre-existing or previously trained patterns. B. Caramiaux and A. Tanaka, “Machine learning of musical gestures,” in Proc. International Conference on New Interfaces for Musical Expression, 2013. The process is divided into two phases: training (learning from a series of data samples) and testing (taking new samples and inferring decisions based on the previously-learned data structure). Different training approaches are applied according to each scenario. Some common strategies are supervised, unsupervised, and semi-supervised learning. A supervised learning algorithm can be used when the output goal is known, training with data pairs of corresponding inputs and desired outputs. This may be musically meaningful when mapping specific gestures to discrete musical output scenarios. An unsupervised learning approach may be useful when the goal is unknown and must be learned from the data. This may be helpful when attempting to understand data patterns representing discursive and complex sets of user input data, and in grouping these data into categories. Semi-supervised approaches combine a supervised approach (considering pairs of inputs with their desired outputs) with a refining of the data by considering more unanticipated data.
In one type of supervised learning algorithm, any type of N-dimensional signal may be classified by fitting M clusters to each data class during a (machine learning) training phase. A new data sample can then be classified by identifying the cluster with minimum divergence from the new sample. This is useful for mapping musically-meaningful gestures that will naturally vary slightly as the user moves the sphere in three-dimensional space, over time.
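A minimal sketch of this supervised scheme follows. For brevity a single centroid (M = 1) summarizes each gesture class during training, and a new sample is assigned to the class whose centroid has minimum Euclidean divergence from it; the gesture names and sample values are invented for illustration.

```python
import math

# Nearest-centroid classification: the training phase averages each
# class's samples into a centroid; the testing phase assigns a new
# sample to the class with the closest centroid.
def train_centroids(samples_by_class):
    centroids = {}
    for label, samples in samples_by_class.items():
        dims = len(samples[0])
        centroids[label] = tuple(
            sum(s[d] for s in samples) / len(samples) for d in range(dims))
    return centroids

def classify(centroids, sample):
    return min(centroids, key=lambda label: math.dist(centroids[label], sample))

# Hypothetical 2-D gesture features: (force, spin rate).
centroids = train_centroids({
    "tap":  [(1.0, 0.0), (1.2, 0.0)],
    "spin": [(0.0, 5.0), (0.0, 5.5)],
})
```

Because classification is by minimum distance, a repeated gesture that varies slightly from its training samples still lands in the right class, which is exactly the robustness property noted above.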
The following table depicts an exemplary relationship between data sets from a user's interaction and training input with a sensorized sphere to corresponding machine learning signals and mapped output functions:
TABLE 1

| Training Input | X axis value (X) | Y axis value (Y) | Z axis value (Z) | Acceleration (a) | Velocity (v) | Orbital velocity (ov) | Force value (F) | f(x, y, z, a, v, ov, F) = N | Output |
|---|---|---|---|---|---|---|---|---|---|
| Gesture 1 | X1 | Y1 | Z1 | A1 | V1 | Ov1 | F1 | 10 | Rhythm1 |
| Gesture 2 | X2 | Y2 | Z2 | A2 | V2 | Ov2 | F2 | 9.5 | Rhythm1 |
| Gesture 3 | X3 | Y3 | Z3 | A3 | V3 | Ov3 | F3 | 10.5 | Rhythm1 |
| Gesture 4 | X4 | Y4 | Z4 | A4 | V4 | Ov4 | F4 | 15 | Rhythm2 |
| Gesture 5 | X5 | Y5 | Z5 | A5 | V5 | Ov5 | F5 | 14.5 | Rhythm2 |
| Gesture 6 | X6 | Y6 | Z6 | A6 | V6 | Ov6 | F6 | 15.5 | Rhythm2 |

Columns X through ov are IMU data (accelerometer, gyroscope, magnetometer) forming the M clusters; F is the force sensor value; the machine learning function f(x, y, z, a, v, ov, F) yields the N signal.
In one embodiment, the user's interactions with the sphere in three dimensional space are measured by IMU data across x, y, and z axes, acceleration data (a), velocity data (v), orbital velocity data (ov) and force values (F) resulting from the user's movement and gripping of the sphere. The method may also capture angular velocity, rotation and other data points and their combinatorial sets. For example, the user may be gripping the ball, activating a combination of force sensors, and also rotating the ball through 3D space to create a unique gesture. These movements will be translated to data via the IMU sensors and the force sensors. Data from these sensors can be defined as M clusters. A machine learning function or algorithm can be defined to generate an N-signal value, which in one embodiment is a machine learning mathematical function (f) of (x, y, z, a, v, ov and F). The M clusters may consist of data points from any number of the IMU or F sensor input data values. For example, in one embodiment, as shown in Table 1, Gestures 1, 2 and 3 generate N-signal values of 10, 9.5 and 10.5. In this example, the machine learning algorithm for the sensorized sphere correlates values within 0.5 of the mean value 10 as corresponding to Rhythm1. In this embodiment, a user may also interact with the sensorized sphere to generate Gestures 4, 5, and 6. These gestures generate N-signal values of 15, 14.5, and 15.5 through the machine learning mathematical function (f) of (x, y, z, a, v, ov, and F). These values, which fall within 0.5 of the mean value of 15, can be used to map to a unique output of Rhythm2.
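The Table 1 logic can be sketched in a few lines. The function f below is a stand-in (a plain sum) for whatever function the training phase actually learns; the trained means and the 0.5 tolerance come from the example values in the table.

```python
# Sketch of the Table 1 mapping: an N-signal value is computed from the
# gesture vector and matched to the rhythm whose trained mean lies
# within the tolerance. n_signal is a placeholder, not the learned f.
RHYTHM_MEANS = {"Rhythm1": 10.0, "Rhythm2": 15.0}

def n_signal(x, y, z, a, v, ov, force):
    # Placeholder for the learned f(x, y, z, a, v, ov, F) = N.
    return x + y + z + a + v + ov + force

def map_to_rhythm(value, tolerance=0.5):
    for rhythm, mean in RHYTHM_MEANS.items():
        if abs(value - mean) <= tolerance:
            return rhythm
    return None  # gesture not close enough to any trained cluster
```

A gesture whose N-signal value falls outside every tolerance band simply maps to no output, which is where retraining or widening the trained clusters would come in.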
Therefore, according to embodiments described herein, complex (but reproducible) physical gestures with a sensorized sphere can be trained and identified for mapping to specific musical outputs. These relationships may be defined by preset parameters, but can also be defined by allowing a user to train new and unique customized gestures to their preferred musical or data outputs. This is especially helpful in a therapeutic setting when calibrating the sphere's system to accurately reflect the movements and spherical interactions of a patient with physical and/or cognitive impairments. Such users' ability to express certain gestures with their body will often improve throughout their therapy. In one embodiment, machine learning algorithms would allow the mappings to evolve comfortably with a patient's individual rehabilitation progress with the sensorized sphere.
One embodiment makes use of time series analysis (analysis of a series of temporally-indexed data points). Dynamic time warping is an algorithm that may be employed to measure similarity between two temporal data sequences that may vary in speed over time. Dynamic time warping can be used to find pattern similarity even when internal temporal relationships are inconsistent between samples. This is useful in developing musical mappings that recognize certain user gestures regardless of varying speeds over time.
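A textbook dynamic time warping distance can illustrate this speed-invariance concretely; the gesture traces below are invented 1-D examples.

```python
# Classic dynamic time warping: cost[i][j] is the minimum cumulative
# distance aligning the first i samples of a with the first j of b,
# allowing samples to be stretched (repeated) in time.
def dtw_distance(a, b):
    inf = float("inf")
    cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match
    return cost[len(a)][len(b)]

# The same gesture performed at half speed still matches perfectly,
# while a different gesture does not.
fast = [0, 1, 2, 1, 0]
slow = [0, 0, 1, 1, 2, 2, 1, 1, 0, 0]
other = [2, 0, 2, 0, 2]
```

Here `slow` is exactly `fast` stretched to twice the length, and its warped distance to `fast` is zero, whereas the unrelated `other` trace scores a strictly positive distance: the speed of execution is factored out, as the passage above describes.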
The technique of regression uses supervised learning and is the task of modeling samples of an unknown function by learning to identify the function generating the samples, based on samples of input variables paired with their target variables. The relationship between input and output is a function learned by the method. This may be used for gesture representation and to map simple, linear relationships, such as between acceleration and audio amplitude, as well as more complex combinations of non-linear data. For example, the sphere's movements in three-dimensional space may be represented as a concatenation and integration of a variety of gesture parameters observed over time. Data parameters such as upward, downward, and lateral force; X, Y, Z movements and acceleration; and three-dimensional angular velocity can all be combined to represent and musically map dynamic and complex user gestures, and unlock complex sets of musical (and multimedia/multimodal) outputs.
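The simple acceleration-to-amplitude case mentioned above can be sketched with ordinary least squares; the training pairs below are hypothetical values, not measured data.

```python
# Ordinary least-squares fit of a 1-D linear map, as a sketch of the
# regression idea: learn amplitude = slope * acceleration + intercept
# from paired training examples.
def fit_linear(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

accel = [0.0, 1.0, 2.0, 3.0]   # hypothetical acceleration samples
amp = [0.1, 0.3, 0.5, 0.7]     # hypothetical desired amplitudes
slope, intercept = fit_linear(accel, amp)
```

Once fitted, the learned function generalizes to unseen accelerations; the non-linear, multi-parameter mappings described above would replace this linear model with a richer one over the concatenated gesture features.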
The technique of classification is the task of categorizing datasets into groups called classes and deciding to which categories these datasets pertain. Unlike the continuous output of regression, classification provides a discrete, high-level representation of user gesture. In classification, continuous input variables are labeled as discrete gestural classes. During the testing phase, a new input sample is assigned an output label. This method allows certain qualities of discrete musical gestures and patterns to be categorized in ways that continuous streaming input cannot.
A common classification method useful for musical interfaces and sensorized devices that employ simultaneous multidimensional data streams (such as the sensorized sphere) is the artificial neural network. Such methods can be useful in cases where there may be non-linear noise and data redundancy across a dynamic set of multidimensional inputs, and in mapping these reliably and meaningfully to reproducible musical (and multimedia) outputs as described herein.
In one embodiment that contemplates ten force sensors, the device is mapped to an audio synthesizer such that each of the 10 force-sensing resistors controls the pitch and amplitude for (each channel of) the synthesizer output. Different musical scales may be mapped to the sensorized sphere and one or more pitches may be mapped to each of the force-sensing ‘pads’. For example, the user may choose to map a C major scale to be output by the device, such that pad 1 will generate a C at frequency 261.626 Hz (known as middle C) at the amplitude controlled by the amount of pressure applied to the surface of each force-sensing resistor. Extending the C major scale across all 10 pads would follow with pad 2 generating a D (293.665 Hz); pad 3, E (329.628 Hz); pad 4, F (349.228 Hz); pad 5, G (391.995 Hz); pad 6, A (440 Hz); pad 7, B (493.883 Hz); pad 8, C (523.251 Hz); pad 9, D (587.330 Hz); and pad 10, E (659.255 Hz). Different settings allow for different scales and ranges of frequencies that can be dynamically re-mapped by the user in real time (using the binary switch buttons to change between these settings). In this manner, the sensorized sphere enables a wide range of musical possibilities by allowing a user to change between scales, instruments, sound effects, and other musical mappings made possible by the unique combination of force sensors, IMU data, and binary modifiers.
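The pad frequencies above follow directly from the equal-temperament formula 440 × 2^((n − 69) / 12), so the table can be generated rather than hard-coded. A short sketch, taking MIDI note 60 as middle C:

```python
# Generate the ten-pad C major mapping from the equal-temperament
# formula instead of hard-coding frequencies. Semitone offsets above
# the root cover C D E F G A B C D E across the ten pads.
C_MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11, 12, 14, 16]

def pad_frequencies(root_midi=60, steps=C_MAJOR_STEPS):
    """Return one frequency (Hz) per pad for the given scale steps."""
    return [440.0 * 2 ** ((root_midi + s - 69) / 12) for s in steps]

freqs = pad_frequencies()  # pad 1 -> ~261.626 Hz, pad 6 -> 440 Hz
```

Re-mapping the sphere to another scale in real time then amounts to swapping in a different step list (or root note), which matches the dynamic re-mapping described above.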
Large data sets from multiple patients can likewise be analyzed to better understand larger trends in research populations. The sphere can be used in the home to continue care initiated in the clinical environment and as a therapeutic means for traumatic brain injury (TBI) patients. By analyzing and utilizing data from the sphere during in-home use, therapists and doctors can learn from patients' home care habits and engagement with their therapies and with their personal care-givers. In a musical context, the recorded data might be played back, mimicking the sonic implications of the gestures of the user during a performance. These performative musical gestures could be re-mapped during the data playback phase, where the recorded data might control entirely different musical sounds or even accompanying lights or motors, for example.
The various computerized aspects of a computing device described in connection with the disclosure herein may be implemented or performed with a processor shown as CPU, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. The processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, or microcontroller. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
At least some aspects of the methods described herein may be embodied in programming. Program aspects of the technology may be thought of as “products” or “articles of manufacture” typically in the form of executable code and/or associated data that is carried on, embodied in, or physically stored on a type of machine-readable medium. Tangible non-transitory “storage” type media include any or all of the memory or other storage for the computers, processors or the like, or associated modules thereof, such as various semiconductor memories, tape drives, disk drives and the like, which may provide storage at any time for the software programming.
Those of skill would further appreciate that the various computer instructions or methods in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
All or portions of the software may at times be communicated through a network such as the Internet or various other telecommunication networks. Such communications, for example, may enable loading of the software from one computer or processor into another. Thus, another type of media that may bear the software elements includes optical, electrical and electromagnetic waves, such as used across physical interfaces between local devices, through wired and optical landline networks and over various air-links. The physical elements that carry such waves, such as wired or wireless links, optical links or the like, also may be considered as media bearing the software. As used herein, unless restricted to tangible “storage” media, terms such as computer or machine “readable medium” refer to any medium that participates in providing instructions to a processor for execution.
Hence, a machine-readable medium may take many forms, including but not limited to, a tangible storage medium, a carrier wave medium or physical transmission medium. Non-volatile storage media include, for example, optical or magnetic disks, such as any of the storage devices in any computer(s) or the like, which may be used to implement the system or any of its components as shown in the drawings. Volatile storage media include dynamic memory, such as a main memory of such a computer platform. Tangible transmission media include coaxial cables, copper wire and fiber optics, including the wires that form a bus within a computer system. Carrier-wave transmission media can take the form of electric or electromagnetic signals, or acoustic or light waves such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer-readable media therefore include, for example: a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD or DVD-ROM, any other optical medium, punch cards, paper tape, any other physical storage medium with patterns of holes, a RAM, a PROM and EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer can read programming code and/or data. Many of these forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
Those skilled in the art will recognize that the present teachings are amenable to a variety of modifications and/or enhancements. For example, although the implementation of various components described above may be embodied in a hardware device, it can also be implemented as a software-only solution, e.g., an installation on an existing server. In addition, the systems and components disclosed herein can be implemented as firmware, a firmware/software combination, a firmware/hardware combination, or a hardware/firmware/software combination.
While the foregoing has described what are considered to be the best mode and/or other examples, it is understood that various modifications may be made therein and that the subject matter disclosed herein may be implemented in various forms and examples, and that the teachings may be applied in numerous applications, only some of which have been described herein. It is intended by the following claims to claim any and all applications, modifications and variations that fall within the true scope of the present teachings.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.