Presented herein are techniques for extracting features from sound signals received at a hearing prosthesis at least partially based on an environmental classification of the sound signals. More specifically, one or more sound signals are received at a hearing prosthesis and are converted into stimulation control signals for use in delivering stimulation to a recipient of the hearing prosthesis. The hearing prosthesis determines an environmental classification of the sound environment associated with the one or more sound signals and is configured to use the environmental classification in the determination of a feature-based adjustment for incorporation into the stimulation control signals.
1. A method, comprising:
receiving one or more sound signals at a hearing prosthesis;
determining, from the one or more sound signals, an environmental classification of a current sound environment associated with the one or more sound signals; and
performing one or more feature-extraction operations to extract at least one signal feature from the one or more sound signals,
wherein the one or more feature-extraction operations are controlled at least partially based on the environmental classification of the current sound environment.
26. A hearing prosthesis, comprising:
means for receiving one or more sound signals at the hearing prosthesis;
means for generating, from the one or more sound signals, stimulation control signals for use in stimulating a recipient of the hearing prosthesis;
means for determining, from the one or more sound signals, an environmental classification of a current sound environment associated with the one or more sound signals; and
means for determining at least one feature-based adjustment for incorporation into the stimulation control signals at least partially based on the environmental classification of the current sound environment and one or more features extracted from the one or more sound signals.
18. A hearing prosthesis, comprising:
one or more input devices configured to receive one or more sound signals; and
one or more processors coupled to the one or more input devices, the one or more processors configured to:
convert the one or more sound signals into one or more stimulation control signals for use in delivering stimulation to a recipient of the hearing prosthesis,
determine, from the one or more sound signals, an environmental classification of a current sound environment associated with the one or more sound signals, and
determine at least one feature-based adjustment for incorporation into the one or more stimulation control signals at least partially based on the environmental classification of the current sound environment and one or more features extracted from the one or more sound signals.
11. A method, comprising:
receiving at least one sound signal at a hearing prosthesis;
determining, from the at least one sound signal, an environmental classification of a current sound environment associated with the at least one sound signal;
converting the at least one sound signal into stimulation control signals for use in stimulating a recipient of the hearing prosthesis;
extracting, using a feature extraction process, one or more signal features from the at least one sound signal, wherein one or more parameters of the feature extraction process are controlled based on the environmental classification of the current sound environment; and
determining at least one feature-based adjustment for incorporation into the stimulation control signals, wherein the at least one feature-based adjustment is based on at least one of the one or more signal features extracted from the at least one sound signal.
2. The method of
generating, from the one or more sound signals, stimulation control signals for use in stimulating a recipient of the hearing prosthesis; and
incorporating at least one feature-based adjustment into the stimulation control signals, wherein the at least one feature-based adjustment is based on the at least one signal feature extracted from the one or more sound signals.
3. The method of
executing one or more feature-based adjustment operations to determine the at least one feature-based adjustment for incorporation into the stimulation control signals, wherein the one or more feature-based adjustment operations are adjusted at least partially based on the environmental classification of the current sound environment.
4. The method of
performing one or more feature-extraction pre-processing operations on the one or more sound signals to generate one or more pre-processed signals;
extracting the at least one signal feature from the one or more pre-processed signals; and
adjusting the one or more feature-extraction pre-processing operations at least partially based on the environmental classification of the current sound environment.
5. The method of
generating, from the one or more sound signals, stimulation control signals for use in stimulating a recipient of the hearing prosthesis,
wherein the one or more feature-extraction pre-processing operations and the extracting of the at least one signal feature are performed separate from the generating of the stimulation control signals.
6. The method of
executing sound processing operations; and
adjusting one or more of the sound processing operations at least partially based on the environmental classification of the current sound environment.
7. The method of
8. The method of
9. The method of
receiving one or more user inputs; and
adjusting the one or more feature-extraction pre-processing operations at least partially based on the environmental classification of the current sound environment and at least partially based on the one or more user inputs.
10. The method of
extracting at least one measure of fundamental frequency (F0) from the one or more sound signals.
12. The method of
executing one or more feature-based adjustment operations to determine the at least one feature-based adjustment for incorporation into the stimulation control signals, wherein one or more parameters of the one or more feature-based adjustment operations are controlled based on the environmental classification of the current sound environment.
13. The method of
performing one or more feature-extraction pre-processing operations on the at least one sound signal to generate one or more pre-processed signals;
extracting the one or more signal features from the one or more pre-processed signals; and
adjusting one or more parameters of the one or more feature-extraction pre-processing operations based on the environmental classification of the current sound environment.
14. The method of
15. The method of
receiving one or more user inputs; and
adjusting the one or more parameters of the one or more feature-extraction pre-processing operations based on the environmental classification of the current sound environment and based on the one or more user inputs.
16. The method of
executing sound processing operations; and
adjusting one or more parameters of one or more of the sound processing operations based on the environmental classification of the current sound environment.
17. The method of
extracting at least one measure of fundamental frequency (F0) from the at least one sound signal.
19. The hearing prosthesis of
execute one or more feature-based adjustment operations to determine the at least one feature-based adjustment for incorporation into the one or more stimulation control signals, wherein one or more parameters of the one or more feature-based adjustment operations are adjusted based on the environmental classification of the current sound environment.
20. The hearing prosthesis of
extract, using one or more feature extraction operations, the one or more features from the one or more sound signals, wherein one or more parameters of the one or more feature extraction operations are controlled based on the environmental classification of the current sound environment.
21. The hearing prosthesis of
perform one or more feature-extraction pre-processing operations on the one or more sound signals to generate one or more pre-processed signals;
extract the one or more features from the one or more pre-processed signals; and
adjust the one or more parameters of the one or more feature-extraction pre-processing operations based on the environmental classification of the current sound environment.
22. The hearing prosthesis of
23. The hearing prosthesis of
receive one or more user inputs; and
adjust the one or more parameters of the one or more feature-extraction pre-processing operations based on the environmental classification of the current sound environment and based on the one or more user inputs.
24. The hearing prosthesis of
extract at least one measure of fundamental frequency (F0) from the one or more sound signals.
25. The hearing prosthesis of
execute sound processing operations; and
adjust one or more parameters of one or more of the sound processing operations based on the environmental classification of the current sound environment.
27. The hearing prosthesis of
means for executing one or more feature-based adjustment operations to determine the at least one feature-based adjustment for incorporation into the stimulation control signals, wherein one or more parameters of the one or more feature-based adjustment operations are adjusted at least partially based on the environmental classification of the current sound environment.
28. The hearing prosthesis of
means for performing one or more feature-extraction operations to extract at least one signal feature from the one or more sound signals,
wherein one or more parameters of the one or more feature-extraction operations are at least partially controlled based on the environmental classification of the current sound environment.
29. The hearing prosthesis of
means for performing one or more feature-extraction pre-processing operations on the one or more sound signals to generate one or more pre-processed signals;
means for extracting the at least one signal feature from the one or more pre-processed signals; and
means for adjusting one or more parameters of the one or more feature-extraction pre-processing operations at least partially based on the environmental classification of the current sound environment.
The present invention relates generally to feature extraction in hearing prostheses.
Hearing loss, which may be due to many different causes, is generally of two types, conductive and/or sensorineural. Conductive hearing loss occurs when the normal mechanical pathways of the outer and/or middle ear are impeded, for example, by damage to the ossicular chain or ear canal. Sensorineural hearing loss occurs when there is damage to the inner ear, or to the nerve pathways from the inner ear to the brain.
Individuals who suffer from conductive hearing loss typically have some form of residual hearing because the hair cells in the cochlea are undamaged. As such, individuals suffering from conductive hearing loss typically receive an auditory prosthesis that generates motion of the cochlea fluid. Such auditory prostheses include, for example, acoustic hearing aids, bone conduction devices, and direct acoustic stimulators.
In many people who are profoundly deaf, however, the reason for their deafness is sensorineural hearing loss. Those suffering from some forms of sensorineural hearing loss are unable to derive suitable benefit from auditory prostheses that generate mechanical motion of the cochlea fluid. Such individuals can benefit from implantable auditory prostheses that stimulate nerve cells of the recipient's auditory system in other ways (e.g., electrical, optical and the like). Cochlear implants are often proposed when the sensorineural hearing loss is due to the absence or destruction of the cochlea hair cells, which transduce acoustic signals into nerve impulses. An auditory brainstem stimulator is another type of stimulating auditory prosthesis that might also be proposed when a recipient experiences sensorineural hearing loss due to damage to the auditory nerve.
Certain individuals suffer from only partial sensorineural hearing loss and, as such, retain at least some residual hearing. These individuals may be candidates for electro-acoustic hearing prostheses.
In one aspect, a method is provided. The method comprises: receiving one or more sound signals at a hearing prosthesis; determining, from the one or more sound signals, an environmental classification of a current sound environment associated with the one or more sound signals; and performing one or more feature-extraction operations to extract at least one signal feature from the one or more sound signals, wherein the one or more feature-extraction operations are controlled at least partially based on the environmental classification of the current sound environment.
In another aspect, a method is provided. The method comprises: receiving at least one sound signal at a hearing prosthesis; determining, from the at least one sound signal, an environmental classification of a current sound environment associated with the at least one sound signal; converting the at least one sound signal into stimulation control signals for use in stimulating a recipient of the hearing prosthesis; extracting, using a feature extraction process, one or more signal features from the at least one sound signal, wherein one or more parameters of the feature extraction process are controlled based on the environmental classification of the current sound environment; and determining at least one feature-based adjustment for incorporation into the stimulation control signals, wherein the at least one feature-based adjustment is based on at least one of the one or more signal features extracted from the at least one sound signal.
In another aspect, a hearing prosthesis is provided. The hearing prosthesis comprises: one or more input devices configured to receive one or more sound signals; and one or more processors coupled to the one or more input devices, the one or more processors configured to: convert the one or more sound signals into one or more stimulation control signals for use in delivering stimulation to a recipient of the hearing prosthesis, determine, from the one or more sound signals, an environmental classification of a current sound environment associated with the one or more sound signals, and extract, using one or more feature extraction operations, one or more features from the one or more sound signals, wherein one or more parameters of the one or more feature extraction operations are controlled based on the environmental classification of the current sound environment.
In another aspect, a hearing prosthesis is provided. The hearing prosthesis comprises: means for receiving one or more sound signals at a hearing prosthesis; means for determining, from the one or more sound signals, an environmental classification of a current sound environment associated with the one or more sound signals; and means for performing one or more feature-extraction operations to extract at least one signal feature from the one or more sound signals, wherein one or more parameters of the one or more feature-extraction operations are at least partially controlled based on the environmental classification of the current sound environment.
Embodiments of the present invention are described herein in conjunction with the accompanying drawings.
Presented herein are techniques for extracting features from sound signals received at a hearing prosthesis at least partially based on an environmental classification of the sound signals. More specifically, one or more sound signals are received at a hearing prosthesis and are converted into stimulation control signals for use in delivering stimulation to a recipient of the hearing prosthesis. The hearing prosthesis determines an environmental classification of the sound environment (i.e., determines the “class” or “category” of the sound environment) associated with the one or more sound signals and is configured to use the environmental classification in the determination of a feature-based adjustment for incorporation into the stimulation control signals. In certain embodiments, the hearing prosthesis is configured to perform a feature extraction process to extract at least one feature from the one or more sound signals for use in the feature-based adjustment. One or more parameters of the feature extraction process are controlled/adjusted based on the environmental classification of the current sound environment.
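Merely as a non-limiting illustration of this control flow, the following Python sketch chains the stages just described: classify the sound environment, select feature-extraction parameters for that class, and extract a feature under those parameters. All names (PRESETS, classify_environment, extract_feature), the toy classification rule, and the placeholder feature are assumptions for illustration, not elements of any particular embodiment.

```python
import numpy as np

# Hypothetical per-environment parameters for the feature extraction process.
PRESETS = {
    "Speech": {"mic_mode": "zoom", "noise_reduction": True},
    "Music":  {"mic_mode": "omni", "noise_reduction": False},
    "Noise":  {"mic_mode": "zoom", "noise_reduction": True},
}

def classify_environment(signal: np.ndarray, fs: int) -> str:
    """Toy stand-in for the environmental classifier: treat strongly
    modulated short-term energy as 'Speech', otherwise 'Noise'."""
    frames = signal[: len(signal) // 256 * 256].reshape(-1, 256)
    energy = (frames ** 2).mean(axis=1)
    return "Speech" if energy.std() > 0.5 * energy.mean() else "Noise"

def extract_feature(signal: np.ndarray, params: dict) -> float:
    """Placeholder feature extractor (here: RMS level); in the embodiments
    below this would be, e.g., an F0 estimator."""
    return float(np.sqrt((signal ** 2).mean()))

def process(signal: np.ndarray, fs: int) -> float:
    env = classify_environment(signal, fs)       # determine environment class
    params = PRESETS.get(env, PRESETS["Noise"])  # classification controls extraction
    return extract_feature(signal, params)       # extraction under those parameters
```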
There are a number of different types of hearing prostheses in which embodiments of the present invention may be implemented. However, merely for ease of illustration, the techniques presented herein are primarily described with reference to one type of hearing prosthesis, namely a cochlear implant. It is to be appreciated that the techniques presented herein may be used in other hearing prostheses, such as auditory brainstem stimulators, hearing aids, electro-acoustic hearing prostheses, bimodal hearing prostheses, bilateral hearing prostheses, etc.
The cochlear implant 100 comprises an external component 102 and an internal/implantable component 104. The external component 102 is directly or indirectly attached to the body of the recipient and typically comprises an external coil 106 and, generally, a magnet (not shown in
The sound processing unit 112 also includes, for example, at least one battery 107, a radio-frequency (RF) transceiver 121, and a processing module 125. The processing module 125 comprises one or more processors 130 (e.g., one or more Digital Signal Processors (DSPs), one or more uC cores, etc.) and a number of logic elements (e.g., firmware), including environment classifier logic (classifier logic) 131, sound processing logic 133, classifier-based feature extraction logic 134, and feature-based adjustment logic 135.
In the examples of
Returning to the example embodiment of
As noted, stimulating assembly 118 is configured to be at least partially implanted in the recipient's cochlea 137. Stimulating assembly 118 includes a plurality of longitudinally spaced intra-cochlear electrical stimulating contacts (electrodes) 126 that collectively form a contact or electrode array 128 for delivery of electrical stimulation (current) to the recipient's cochlea. Stimulating assembly 118 extends through an opening in the recipient's cochlea (e.g., cochleostomy, the round window, etc.) and has a proximal end connected to stimulator unit 120 via lead region 116 and a hermetic feedthrough (not shown in
As noted, the cochlear implant 100 includes the external coil 106 and the implantable coil 122. The coils 106 and 122 are typically wire antenna coils each comprised of multiple turns of electrically insulated single-strand or multi-strand platinum or gold wire. Generally, a magnet is fixed relative to each of the external coil 106 and the implantable coil 122. The magnets fixed relative to the external coil 106 and the implantable coil 122 facilitate the operational alignment of the external coil with the implantable coil. This operational alignment of the coils 106 and 122 enables the external component 102 to transmit data, as well as possibly power, to the implantable component 104 via a closely-coupled wireless link formed between the external coil 106 and the implantable coil 122. In certain examples, the closely-coupled wireless link is a radio frequency (RF) link. However, various other types of energy transfer, such as infrared (IR), electromagnetic, capacitive, and inductive transfer, may be used to transfer the power and/or data from an external component to an implantable component.
As noted above, sound processing unit 112 includes the processing module 125. The processing module 125 is configured to convert received sound signals into stimulation control signals 136 for use in stimulating a first ear of a recipient (i.e., the processing module 125 is configured to perform sound processing on sound signals received at the sound processing unit 112). Stated differently, the one or more processors 130 are configured to execute sound processing logic 133 (e.g., firmware) to convert the received sound signals into stimulation control signals 136 that represent electrical stimulation for delivery to the recipient. The sound signals that are processed and converted into stimulation control signals may be signals received via the sound input devices 108, signals received via the auxiliary input devices 109, and/or signals received via the wireless transceiver 111.
In the embodiment of
As noted, the processing module 125 in sound processing unit 112 is configured to execute sound processing operations to convert the received sound signals into stimulation control signals 136 (i.e., processed sound signals). In addition to these sound processing operations, as described further below, the processing module 125 is configured to determine an environmental classification of the sound environment (i.e., to determine the “class” or “category” of the sound environment) associated with the sound signals. Also as described further below, the processing module 125 is configured to extract at least one signal feature (“feature”) from the received sound signals and, based on the at least one signal feature, determine at least one feature-based adjustment for incorporation into the stimulation control signals 136. The at least one feature-based adjustment is controlled at least partially based on the signal feature extracted from the sound signals. For example, in certain embodiments, one or more parameters of the feature extraction operations are controlled at least partially based on the environmental classification of the current sound environment.
Cochlear implant 200 includes an implant body (main implantable component) 214, one or more input elements 213 for receiving sound signals (e.g., one or more implantable microphones 208 and a wireless transceiver 211), an implantable coil 222, and an elongate intra-cochlear stimulating assembly 118 as described above with reference to
In the embodiment of
As noted above.
In addition to the sound processing operations, as described further below, the processing module 225 is configured to determine an environmental classification of the sound environment associated with the received sound signals. Also as described further below, the processing module 225 is configured to extract at least one signal feature (“feature”) from the received sound signals and, based on the at least one signal feature, determine at least one feature-based adjustment for incorporation into the stimulation control signals 236. The at least one feature-based adjustment is controlled at least partially based on the signal feature extracted from the sound signals. For example, in certain embodiments, one or more parameters of the feature extraction operations are controlled at least partially based on the environmental classification of the current sound environment.
As noted, the techniques presented herein may be implemented in a number of different types of hearing prostheses. However, for ease of description, further details of the techniques presented herein will generally be described with reference to cochlear implant 100 of
More specifically,
As noted, the cochlear implant 100 comprises one or more input devices 113. In the example of
Also as noted above, the cochlear implant 100 comprises the processing module 125 which includes, among other elements, sound processing logic 133, classifier-based feature extraction logic 134, and feature-based adjustment logic 135. The sound processing logic 133, when executed by the one or more processors 130, enables the processing module 125 to perform sound processing operations that convert sound signals into stimulation control signals for use in delivery of stimulation to the recipient. In
More specifically, the electrical sound signals 153 generated by the input devices 113 are provided to the pre-filterbank processing module 154. The pre-filterbank processing module 154 is configured to, as needed, combine the electrical sound signals 153 received from the input devices 113 and prepare/enhance those signals for subsequent processing. The operations performed by the pre-filterbank processing module 154 may include, for example, microphone directionality operations, noise reduction operations, input mixing operations, input selection/reduction operations, dynamic range control operations, and/or other types of signal enhancement operations. The operations at the pre-filterbank processing module 154 generate a pre-filterbank output signal 155 that, as described further below, is the basis of further sound processing operations. The pre-filterbank output signal 155 represents the combination (e.g., mixed, selected, etc.) of the input signals received at the sound input devices 113 at a given point in time.
In operation, the pre-filterbank output signal 155 generated by the pre-filterbank processing module 154 is provided to the filterbank module 156. The filterbank module 156 generates a suitable set of bandwidth-limited channels, or frequency bins, that each includes a spectral component of the received sound signals. That is, the filterbank module 156 comprises a plurality of band-pass filters that separate the pre-filterbank output signal 155 into multiple components/channels, each one carrying a frequency sub-band of the original signal (i.e., frequency components of the received sound signals).
The channels created by the filterbank module 156 are sometimes referred to herein as sound processing channels, and the sound signal components within each of the sound processing channels are sometimes referred to herein as band-pass filtered signals or channelized signals. The band-pass filtered or channelized signals created by the filterbank module 156 are processed (e.g., modified/adjusted) as they pass through the sound processing path 151. As such, the band-pass filtered or channelized signals are referred to differently at different stages of the sound processing path 151. However, it will be appreciated that reference herein to a band-pass filtered signal or a channelized signal may refer to the spectral component of the received sound signals at any point within the sound processing path 151 (e.g., pre-processed, processed, selected, etc.).
At the output of the filterbank module 156, the channelized signals are initially referred to herein as pre-processed signals 157. The number ‘m’ of channels and pre-processed signals 157 generated by the filterbank module 156 may depend on a number of different factors including, but not limited to, implant design, number of active electrodes, coding strategy, and/or recipient preference(s). In certain arrangements, twenty-two (22) channelized signals are created and the sound processing path 151 is said to include 22 channels.
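As a minimal sketch of such a filterbank stage, the fragment below splits an input signal into m band-pass channels with logarithmically spaced band edges; the edge frequencies, filter order, and the m = 22 default are illustrative assumptions, not device parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def make_filterbank(fs: int, m: int = 22, f_lo: float = 188.0, f_hi: float = 7938.0):
    """Return m band-pass filters with logarithmically spaced band edges."""
    edges = np.geomspace(f_lo, f_hi, m + 1)
    return [butter(2, [edges[i], edges[i + 1]], btype="bandpass", fs=fs, output="sos")
            for i in range(m)]

def channelize(signal: np.ndarray, fs: int, m: int = 22) -> np.ndarray:
    """Split a signal into an (m, n_samples) array of channelized signals."""
    return np.stack([sosfiltfilt(sos, signal) for sos in make_filterbank(fs, m)])
```

For a 16 kHz input this yields the twenty-two channelized signals referred to above, one row per sound processing channel.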
The pre-processed signals 157 are provided to the post-filterbank processing module 158. The post-filterbank processing module 158 is configured to perform a number of sound processing operations on the pre-processed signals 157. These sound processing operations include, for example, channelized gain adjustments for hearing loss compensation (e.g., gain adjustments to one or more discrete frequency ranges of the sound signals), noise reduction operations, speech enhancement operations, etc., in one or more of the channels. After performing the sound processing operations, the post-filterbank processing module 158 outputs a plurality of processed channelized signals 159.
In the specific arrangement of
In the embodiment of
The sound processing path 151 also comprises the channel mapping module 162. The channel mapping module 162 is configured to map the amplitudes of the selected signals 161 (or the processed channelized signals 159 in embodiments that do not include channel selection) into a set of stimulation control signals (e.g., stimulation commands) that represent the attributes of the electrical stimulation signals that are to be delivered to the recipient so as to evoke perception of at least a portion of the received sound signals. This channel mapping may include, for example, threshold and comfort level mapping, dynamic range adjustments (e.g., compression), volume adjustments, etc., and may encompass selection of various sequential and/or simultaneous stimulation strategies.
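A minimal sketch of one possible mapping stage follows, compressing each channel amplitude into the recipient's electrical dynamic range between threshold (T) and comfort (C) levels; the logarithmic loudness-growth shape and its constants (base_level, sat_level, rho) are assumptions chosen for illustration.

```python
import numpy as np

def map_channels(amplitudes: np.ndarray, t_levels: np.ndarray, c_levels: np.ndarray,
                 base_level: float = 0.0156, sat_level: float = 1.0) -> np.ndarray:
    """Map per-channel acoustic amplitudes onto each electrode's electrical
    dynamic range [T, C] through a logarithmic loudness-growth curve."""
    a = np.clip((amplitudes - base_level) / (sat_level - base_level), 0.0, 1.0)
    rho = 416.0                                   # compression steepness (assumed)
    p = np.log1p(rho * a) / np.log1p(rho)         # 0..1 proportion of the range
    return t_levels + p * (c_levels - t_levels)   # stimulation level per channel

# e.g., map_channels(np.array([0.05, 0.4]),
#                    t_levels=np.array([100.0, 105.0]),
#                    c_levels=np.array([190.0, 200.0]))
```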
In the embodiment of
As noted, the sound processing path 151 generally operates to convert received sound signals into stimulation control signals 136 for use in delivering stimulation to the recipient in a manner that evokes perception of the sound signals. In certain examples, the stimulation control signals 136 may include one or more adjustments (enhancements) that are based on specific “signal features” (“features”) of the received sound signals. That is, the processing module 125 is configured to determine one or more “feature-based” adjustments for incorporation into the stimulation control signals 136, where the one or more feature-based adjustments are incorporated at one or more points within the processing path 151 (e.g., at module 158). The feature-based adjustments may take a number of different forms. For example, one or more measures of fundamental frequency (F0) (e.g., frequency or magnitude) may be extracted from the received sound signals and then used to apply an enhancement within the sound processing path such that the F0 present in the sound signals is incorporated into the stimulation control signals 136 in a manner that produces a more salient pitch percept for the recipient. Henceforth, the percept of pitch elicited by the acoustic feature F0 in the sound signals is referred to as “F0-pitch”.
As noted, different feature-based adjustments may be incorporated into a sound processing path, such as sound processing path 151. However, in accordance with embodiments presented herein, a common element of these adjustments is that they are all made based on one or more “signal features” and, as described further below, the feature-based adjustments in accordance with embodiments presented herein are controlled at least partially based on an environmental classification of the sound signals. As used herein, a “signal feature” or “feature” of a received sound signal refers to an acoustic property of the signal that has a perceptual correlate. For instance, intensity is an acoustic signal property that affects perception of loudness, while the fundamental frequency (F0) of an acoustic signal (or set of signals) is an acoustic property of the signal that affects perception of pitch. In other examples, the signal features may include properties underlying other percepts or sensations (e.g., the first formant (F1) frequency, the second formant (F2) frequency, and/or other formants), other harmonicity measures, rhythmicity measures, measures regarding the static and/or dynamic nature of the sound signals, etc. As described further below, these or other signal features may be extracted and used as the basis for one or more feature-based adjustments for incorporation into the stimulation control signals 136.
In order to incorporate feature-based adjustments into the stimulation control signals 136, the one or more signal features that form the basis for the adjustment(s) need to first be extracted from the received sound signals using a feature extraction process. Certain embodiments presented herein are directed to techniques for controlling/adjusting one or more parameters of the feature extraction process based on a sound environment of the input sound signals. As described further below, controlling/adjusting one or more parameters of the feature extraction process based on the sound environment tailors/optimizes the feature extraction process for the current/present sound environment, thereby improving the feature extraction processing (e.g., increasing the likelihood that the signal features are correctly identified and extracted) and improving the feature-based adjustments, which ultimately improves the stimulation control signals 136 that are used for generation of stimulation signals for delivery to the recipient.
Merely for ease of illustration, further details of the techniques presented herein will generally be described with reference to extraction and use of one specific signal feature, namely the fundamental frequency (F0) and its perceptual correlate (F0-pitch) of the received sound signal. The fundamental frequency is the lowest frequency of vibration in a sound signal such as a voiced-vowel in speech or a tone played by a musical instrument (i.e., the rate at which the periodic shape of the signal repeats). In these illustrative examples, the feature-based adjustment incorporated into the sound processing path is an F0-pitch enhancement where the amplitudes of the signals in certain channels are modulated at the F0 frequency, thereby improving the recipient's perception of the F0-pitch. It is to be appreciated that specific reference to the extraction and use of the F0 frequency (and the subsequent F0-pitch enhancement) is merely illustrative and, as such, the techniques presented herein may be implemented to extract other features of sound signals for use in feature-based adjustments. For example, extracted signal features may include the F0 harmonic frequencies and magnitudes, non-harmonic signal frequencies and magnitudes, etc.
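To make the extraction step concrete, a minimal autocorrelation-based F0 estimator is sketched below; practical F0 extractors in hearing prostheses are considerably more elaborate, and the search range and voicing threshold used here are assumptions.

```python
import numpy as np

def estimate_f0(frame: np.ndarray, fs: int, f0_min: float = 80.0,
                f0_max: float = 400.0) -> float:
    """Estimate F0 from one frame by finding the autocorrelation peak within
    the expected pitch-lag range; return 0.0 when the frame looks unvoiced."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min, lag_max = int(fs / f0_max), int(fs / f0_min)
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max + 1]))
    # crude voicing decision: the pitch peak must carry a fair share of energy
    return fs / lag if ac[lag] > 0.3 * ac[0] else 0.0
```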
Returning to the specific example of
The category of the sound environment associated with the sound signals may be used to adjust one or more settings/parameters of the sound processing path 151 for different listening situations or environments encountered by the recipient, such as noisy or quiet environments, windy environments, or other uncontrolled noise environments. In
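By way of a toy illustration only, a decision-rule classifier over two inexpensive statistics is sketched below (a fuller stand-in for the classify_environment stub sketched earlier); deployed classifiers typically use trained models over many more features, and the thresholds here are assumptions.

```python
import numpy as np
from scipy.signal import welch

def classify_environment(x: np.ndarray, fs: int) -> str:
    """Decision-rule classifier over two cheap statistics: envelope
    modulation (speech is strongly modulated) and spectral centroid."""
    f, psd = welch(x, fs=fs, nperseg=512)
    centroid = float((f * psd).sum() / psd.sum())     # spectral centroid, Hz
    frames = x[: len(x) // 512 * 512].reshape(-1, 512)
    env = np.sqrt((frames ** 2).mean(axis=1))         # frame RMS envelope
    mod = env.std() / (env.mean() + 1e-12)            # modulation index
    if mod > 0.5:
        return "Speech"
    return "Music" if centroid < 1500.0 else "Noise"  # illustrative split
```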
In accordance with embodiments of the present invention, in addition to controlling sound processing operations of the signal processing path 151, the environmental classifier 164 may also be configured to optimize extraction of the features used for feature-based adjustments included in the signal processing path 151. As noted above, the processing module 125 includes classifier-based feature extraction logic 134 that, when executed by the one or more processors 130, enables the processing module 125 to extract one or more features of the sound signals. In
Also as noted above, the processing module 125 includes feature-based adjustment logic 135 that, when executed by the one or more processors 130, enables the processing module 125 to generate, from extracted signal features, one or more signal adjustments for incorporation into the sound processing path 151. In
The feature extraction pre-processing module 166 may be configured to perform operations that are similar to those performed by the pre-filterbank processing module 154. For example, the pre-processing module 166 may be configured to perform microphone directionality operations, noise reduction operations, input mixing operations, input selection/reduction operations, dynamic range control operations, and/or other types of signal enhancement operations. However, in the embodiment of
The present applicants have determined that different sound environments may affect the quality of the signal feature(s) extracted from the sound signals. As such, in certain embodiments, the environment classifier 164 controls which feature-extraction pre-processing operations are, or are not, performed on the sound signals before the extraction of the signal features therefrom to improve the feature extraction process (e.g., increasing the likelihood that the signal features are correctly identified and extracted). The operations of the feature extraction pre-processing module 166, which are at least partially controlled by the environmental classifier 164, result in the generation of a pre-processed signal that is provided to the feature extraction module 168. The pre-processed signal is generally represented in
As represented by arrow 169, the extracted features obtained by the feature extraction module 168 are provided to the feature-based adjustment module 170. The feature-based adjustment module 170 then performs one or more operations to generate a feature-based adjustment for incorporation into the signal processing path 151 and, accordingly, into the stimulation control signals 136.
In summary,
Further understanding of the embodiment of
Now consider a recipient attending a music concert. In this example, the environment classifier 164 determines that the recipient is located in a “Music” environment. As a result of this determination, the environment classifier 164 configures the pre-filterbank processing module 154 to a “Moderate” microphone directionality, which is only partly directional, allowing for sound input from a broad area ahead of the recipient. Also as a result of the “Music” classification, the environment classifier 164 configures the feature extraction pre-processing module 166 to an omnidirectional microphone setting, which again is a broad area of input, from all around the recipient.
The above two examples illustrate two specific techniques in which the environment classifier 164 adjusts the operations of the feature extraction pre-processing module 166 to optimize the ability to extract the F0 frequency information from the input, based on the sound environment. More generally, in accordance with certain embodiments presented herein, the environment classifier 164 operates to control what operations are, or are not, performed on the sound signals (i.e., electrical signals 153) before extraction of one or more signal features therefrom, in accordance with (i.e., based on) the sound environment of the received sound signals. As can be seen from the above two examples, the specific feature-extraction pre-processing operations performed at the feature extraction pre-processing module 166 may be different from the operations performed at the pre-filterbank processing module 154 because the goals/intents of the two modules are different (i.e., the pre-filterbank processing module 154 is configured to prepare the sound signals for sound processing and conversion to the stimulation commands, while the feature extraction pre-processing module 166 is configured to prepare the sound signals for feature extraction).
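The split control illustrated by these examples might be rendered as follows; the “Music” row follows the example above, while the “Speech in Noise” row and the default value are assumptions for illustration.

```python
# Directionality settings per environment for the two paths. The main path
# prepares the signal for sound processing; the feature path prepares it
# for feature extraction, so the same environment may map to different settings.
MAIN_PATH_DIRECTIONALITY = {"Music": "Moderate", "Speech in Noise": "Zoom"}
FEATURE_PATH_DIRECTIONALITY = {"Music": "Omni", "Speech in Noise": "Zoom"}

def pre_processing_settings(env: str):
    """Return (main-path, feature-path) microphone directionality settings,
    defaulting to omnidirectional for unlisted environments."""
    return (MAIN_PATH_DIRECTIONALITY.get(env, "Omni"),
            FEATURE_PATH_DIRECTIONALITY.get(env, "Omni"))

# pre_processing_settings("Music") -> ("Moderate", "Omni"): the main path keeps
# a broad forward pickup while the feature path listens in all directions.
```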
As noted,
For example,
In the embodiment of
In the embodiment of
In summary,
More specifically,
The environmental classifier 664(1) operates, similarly to the above embodiments, to optimize the extraction of the desired feature(s) within the feature adjustment processing path 173 (i.e., to control one or more parameters of the one or more feature extraction operations, as represented by arrows 665). In this example, the feature extraction pre-processing module 166 includes a dynamic range compression module 676 and a voice activity detector (VAD) 678. The dynamic range compression module 676 is configured to perform aggressive dynamic range compression, which differs by environment, such that there is minimal or no difference in long-term signal level (i.e., the environmental classifier 664(1) controls the applied dynamic range compression based on the environmental classification). As a result, the voice activity detector 678 will detect voice activity, regardless of the environment or the loudness of the voice present in the sound signals.
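A minimal rendering of this compression/VAD pairing is sketched below; the compression ratio, frame sizes, and energy threshold are assumptions chosen for illustration.

```python
import numpy as np

def compress(x: np.ndarray, ratio: float = 8.0, eps: float = 1e-8) -> np.ndarray:
    """Aggressive dynamic range compression: pull each frame's level toward a
    common long-term level so the downstream VAD sees a stable input."""
    frames = x[: len(x) // 256 * 256].reshape(-1, 256)
    rms = np.sqrt((frames ** 2).mean(axis=1, keepdims=True)) + eps
    target = rms.mean() * (rms / rms.mean()) ** (1.0 / ratio)
    return (frames * (target / rms)).ravel()

def vad(x: np.ndarray, fs: int, threshold: float = 2.0) -> np.ndarray:
    """Energy-based voice activity decision per 16 ms frame."""
    n = int(0.016 * fs)
    frames = x[: len(x) // n * n].reshape(-1, n)
    energy = (frames ** 2).mean(axis=1)
    return energy > threshold * np.median(energy)
```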
In addition to controlling the feature extraction pre-processing at module 166, the environmental classifier 664(1) of
Further understanding of the embodiment of
Now consider a recipient attending a music concert. In this example, the environment classifier 664(1) determines that the recipient is located in a “Music” environment. As a result of this “Music” classification, the environment classifier 664(1) configures the feature extraction pre-processing module 166 to an omnidirectional microphone setting. In addition, in this example, the feature-based adjustment performed at module 170 is again configured to modulate the signals within one or more channels with the F0 frequency, thereby enhancing the perception of the F0-pitch by the recipient. Given the “Music” classification, the environment classifier 664(1) configures the feature-based adjustment performed at module 170 to operate in accordance with a “maximum” setting in which the F0 frequency modulation is strongly applied to one or more channels. In one example, a “maximum” setting of pitch enhancement would provide 100% of the possible enhancement range (e.g., F0 modulation that spans 100% (all) of the electrical channel signals' dynamic range).
The above two examples illustrate two specific techniques in which the environment classifier 664(1) controls/adjusts one or more parameters of one or more feature-based adjustment operations performed at module 170 to optimize the perception of the F0-pitch information by the recipient, based on the sound environment. More generally, in accordance with certain embodiments presented herein, the environment classifier 664(1) operates to control the feature-based adjustments for incorporation into the sound processing path in accordance with (i.e., at least partially based on) the sound environment of the received sound signals.
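To make the depth control concrete, the sketch below modulates the channelized signals at the F0 rate with a depth selected by the environmental classification; depth 1.0 corresponds to the “maximum” setting described above, and the non-“Music” depths are assumptions.

```python
import numpy as np

# Modulation depth per environment; "Music" is the "maximum" setting described
# above (modulation spanning the full envelope range). Other depths are assumed.
MOD_DEPTH = {"Music": 1.0, "Speech": 0.6, "Noise": 0.3}

def apply_f0_enhancement(channels: np.ndarray, f0: float, fs: int, env: str) -> np.ndarray:
    """Modulate the (m, n_samples) channel signals at the F0 rate, with the
    depth chosen by the environmental classification."""
    depth = MOD_DEPTH.get(env, 0.3)
    t = np.arange(channels.shape[1]) / fs
    # modulator swings between (1 - depth) and 1; depth = 1.0 uses the full range
    mod = 1.0 - depth * 0.5 * (1.0 + np.sin(2.0 * np.pi * f0 * t))
    return channels * mod
```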
As noted,
As noted above, embodiments of the present invention are generally directed to techniques for determining a feature-based adjustment for incorporation into stimulation control signals, where the feature-based adjustment is at least partially controlled/adjusted based on the current sound environment.
More specifically, the arrangement shown in
However, as shown in
In addition, the embodiment of
For example, in one F0-pitch enhancement strategy, an estimate of the harmonic signal power-to-total power ratio (STR) is utilized as a metric for the probability that the sound input is a harmonic (voiced/musical) signal that is harmonically related to the estimated F0 frequency. As such, the STR is used to programmatically control the overall degree of pitch enhancement applied, by adjustment of the F0 modulation depth applied to the channel signals.
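A minimal computation of such an STR estimate, together with a linear mapping from STR to modulation depth, might look as follows; the harmonic count, frequency tolerance, and linear mapping are assumptions.

```python
import numpy as np

def harmonic_str(frame: np.ndarray, fs: int, f0: float, n_harmonics: int = 10,
                 tol_hz: float = 20.0) -> float:
    """Estimate the harmonic signal power-to-total power ratio (STR): spectral
    power within +/- tol_hz of each F0 harmonic, over total power."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    harmonic = sum(spec[np.abs(freqs - k * f0) <= tol_hz].sum()
                   for k in range(1, n_harmonics + 1))
    return float(harmonic / (spec.sum() + 1e-12))

def modulation_depth_from_str(str_value: float, max_depth: float = 1.0) -> float:
    """Scale the F0 modulation depth by the harmonicity estimate: near max_depth
    for strongly harmonic input, near zero for inharmonic input."""
    return max_depth * float(np.clip(str_value, 0.0, 1.0))
```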
In the embodiment of
Returning to
In certain embodiments of
Extending upon the above, an environmental classifier could be trained using the features that are used by a feature-based adjustment process, thereby making the classifier more robust.
More specifically,
An environmental classifier could also be trained using supplemental signal features 989. For example, by learning harmonic probability measures and the voiced-speech pitch ranges of the training data, a speech classifier may exhibit higher performance. A music classifier or noise classifier may also be improved.
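As a sketch of such training, the fragment below appends the supplemental measures (the estimate_f0 and harmonic_str functions from the sketches above) to conventional statistics and fits an off-the-shelf learner; the feature set and the choice of a random forest are assumptions, as no particular model is prescribed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def feature_vector(frame: np.ndarray, fs: int) -> np.ndarray:
    """Combine conventional statistics with the supplemental measures
    (estimate_f0 and harmonic_str as defined in the sketches above)."""
    rms = float(np.sqrt((frame ** 2).mean()))                    # level
    zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0)  # zero-crossing rate
    f0 = estimate_f0(frame, fs)                                  # supplemental: F0
    str_value = harmonic_str(frame, fs, f0) if f0 else 0.0       # supplemental: STR
    return np.array([rms, zcr, f0, str_value])

# Given labeled training frames (X built with feature_vector, y = environment
# labels), fit any off-the-shelf learner, e.g.:
#   clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```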
It is to be appreciated that the above described embodiments are not mutually exclusive and that the various embodiments can be combined in various manners and arrangements.
The invention described and claimed herein is not to be limited in scope by the specific preferred embodiments herein disclosed, since these embodiments are intended as illustrations, and not limitations, of several aspects of the invention. Any equivalent embodiments are intended to be within the scope of this invention. Indeed, various modifications of the invention in addition to those shown and described herein will become apparent to those skilled in the art from the foregoing description. Such modifications are also intended to fall within the scope of the appended claims.
Inventors: Vandali, Andrew; Brown, Matthew; Goorevich, Michael