A software-only real time text-to-speech system includes intonation control which does not introduce discontinuities into the output speech stream. The text-to-speech system includes a module for translating text to a sequence of sound segment codes and intonation control signals. A decoder is coupled to the translator to produce sets of digital frames of speech data, which represent sounds for the respective sound segment codes in the sequence. An intonation control system is responsive to the intonation control signals for modifying a block of one or more frames in the sets of frames of speech data to generate a modified block. The modified block substantially preserves the continuity of the beginning and ending segments of the block with adjacent frames in the sequence. Thus, when the modified block is inserted in the sequence, no discontinuities are introduced and smooth intonation control is accomplished. The intonation control system provides for both pitch and duration control.

Patent
   5642466
Priority
Jan 21 1993
Filed
Jan 21 1993
Issued
Jun 24 1997
Expiry
Jun 24 2014
2. An apparatus for adjusting an intonation of a sound wherein the sound is specified by a sequence of frames each comprising a set of digital samples, the apparatus comprising:
means for receiving a set of intonation control signals that indicate a pitch adjustment and a duration adjustment to the sound;
a buffer that stores the sequence of frames;
intonation control means that generates an intonation adjusted sequence of frames by accessing a block of one or more frames of the sequence of frames from the buffer and by generating a modified block in response to the intonation control signals and by inserting the modified block into the sequence of frames such that the intonation control means minimizes a discontinuity between a beginning segment and an ending segment of the block and a pair of adjacent frames in the intonation adjusted sequence of frames;
wherein the intonation control signals indicate a change in a nominal length of a specified frame of the sequence of frames to indicate the pitch adjustment and indicate a change in a number of frames in the sequence of frames to indicate the duration adjustment, and
wherein the intonation control means includes pitch lowering means for increasing a length n of the specified frame by an amount equal to Δ samples wherein the block of one or more frames consists of the specified frame, the pitch lowering means including means for applying a first weighting function to the block emphasizing the beginning segment to generate a first vector and means for applying a second weighting function to the block emphasizing the ending segment to generate a second vector and means for combining the first vector with the second vector shifted by Δ samples to generate the modified block having a length n+Δ.
4. An apparatus for adjusting an intonation of a sound wherein the sound is specified by a sequence of frames each comprising a set of digital samples, the apparatus comprising:
means for receiving a set of intonation control signals that indicate a pitch adjustment and a duration adjustment to the sound;
a buffer that stores the sequence of frames;
intonation control means that generates an intonation adjusted sequence of frames by accessing a block of one or more frames of the sequence of frames from the buffer and by generating a modified block in response to the intonation control signals and by inserting the modified block into the sequence of frames such that the intonation control means minimizes a discontinuity between a beginning segment and an ending segment of the block and a pair of adjacent frames in the intonation adjusted sequence of frames;
wherein the intonation control signals indicate a change in a nominal length of a specified frame of the sequence of frames to indicate the pitch adjustment and indicate a change in a number of frames in the sequence of frames to indicate the duration adjustment, and
wherein the intonation control means includes duration shortening means for modifying the block to reduce the number of frames in the sequence of frames wherein the block consists of a pair of sequential frames having lengths NL and NR respectively, the duration shortening means including means for applying a first weighting function to the block emphasizing the beginning segment to generate a first vector and means for applying a second weighting function to the block emphasizing the ending segment to generate a second vector and means for combining the first vector with the second vector to generate the modified block having the length NL or the length NR.
3. An apparatus for adjusting an intonation of a sound wherein the sound is specified by a sequence of frames each comprising a set of digital samples, the apparatus comprising:
means for receiving a set of intonation control signals that indicate a pitch adjustment and a duration adjustment to the sound;
a buffer that stores the sequence of frames;
intonation control means that generates an intonation adjusted sequence of frames by accessing a block of one or more frames of the sequence of frames from the buffer and by generating a modified block in response to the intonation control signals and by inserting the modified block into the sequence of frames such that the intonation control means minimizes a discontinuity between a beginning segment and an ending segment of the block and a pair of adjacent frames in the intonation adjusted sequence of frames;
wherein the intonation control signals indicate a change in a nominal length of a specified frame of the sequence of frames to indicate the pitch adjustment and indicate a change in a number of frames in the sequence of frames to indicate the duration adjustment, and
wherein the intonation control means includes pitch raising means for decreasing a length n of the specified frame by an amount equal to Δ samples wherein the block of one or more frames consists of the specified frame and a next frame having a length NR in the sequence of frames, the pitch raising means including means for applying a first weighting function to the block emphasizing the beginning segment to generate a first vector and means for applying a second weighting function to the block emphasizing the ending segment to generate a second vector and means for combining the first vector with the second vector shifted by Δ samples to generate a shortened frame with the next frame to generate the modified block having a length n-Δ+NR.
5. An apparatus for adjusting an intonation of a sound wherein the sound is specified by a sequence of frames each comprising a set of digital samples, the apparatus comprising:
means for receiving a set of intonation control signals that indicate a pitch adjustment and a duration adjustment to the sound;
a buffer that stores the sequence of frames;
intonation control means that generates an intonation adjusted sequence of frames by accessing a block of one or more frames of the sequence of frames from the buffer and by generating a modified block in response to the intonation control signals and by inserting the modified block into the sequence of frames such that the intonation control means minimizes a discontinuity between a beginning segment and an ending segment of the block and a pair of adjacent frames in the intonation adjusted sequence of frames;
wherein the intonation control signals indicate a change in a nominal length of a specified frame of the sequence of frames to indicate the pitch adjustment and indicate a change in a number of frames in the sequence of frames to indicate the duration adjustment, and
wherein the intonation control means includes duration lengthening means for modifying the block to increase the number of frames in the sequence of frames wherein the block consists of a pair of left and right sequential frames having lengths NL and NR respectively, the duration lengthening means including means for applying a first weighting function to the block emphasizing the beginning segment to generate a first vector and means for applying a second weighting function to the block emphasizing the ending segment to generate a second vector and means for combining the first vector with the second vector to generate a new frame and means for concatenating the left frame, the new frame, and the right frame to generate the modified block.
1. An apparatus for adjusting an intonation of a sound wherein the sound is specified by a sequence of frames each comprising a set of digital samples, the apparatus comprising:
means for receiving a set of intonation control signals that indicate a pitch adjustment and a duration adjustment to the sound;
a buffer that stores the sequence of frames;
intonation control means that generates an intonation adjusted sequence of frames by accessing a block of one or more frames of the sequence of frames from the buffer and by generating a modified block in response to the intonation control signals and by inserting the modified block into the sequence of frames wherein the intonation control means minimizes discontinuity between a beginning segment and an ending segment of the block and a pair of adjacent frames in the intonation adjusted sequence of frames, wherein the intonation control signals indicate a change in a nominal length of a specified frame of the sequence of frames to indicate the pitch adjustment and indicate a change in a number of frames in the sequence of frames to indicate the duration adjustment, and wherein the intonation control means includes
pitch lowering means for increasing a length n of the specified frame by an amount equal to Δ samples wherein the block of one or more frames consists of the specified frame, the pitch lowering means including means for applying a first weighting function to the block emphasizing the beginning segment to generate a first vector and means for applying a second weighting function to the block emphasizing the ending segment to generate a second vector and means for combining the first vector with the second vector shifted by Δ samples to generate the modified block having a length n+Δ,
pitch raising means for decreasing the length n of the specified frame by an amount equal to Δ samples wherein the block of one or more frames consists of the specified frame and a next frame having a length NR in the sequence of frames, the pitch raising means including means for applying the first weighting function to the block emphasizing the beginning segment to generate the first vector and means for applying the second weighting function to the block emphasizing the ending segment to generate the second vector and means for combining the first vector with the second vector shifted by Δ samples to generate a shortened frame with the next frame to generate the modified block having a length n-Δ+NR,
duration shortening means for modifying the block to reduce the number of frames in the sequence of frames wherein the block consists of a pair of sequential frames having lengths NL and NR respectively, the duration shortening means including means for applying the first weighting function to the block emphasizing the beginning segment to generate the first vector and means for applying the second weighting function to the block emphasizing the ending segment to generate the second vector and means for combining the first vector with the second vector to generate the modified block having the length NL or the length NR, and
duration lengthening means for modifying the block to increase the number of frames in the sequence of frames wherein the block consists of a pair of left and right sequential frames having the lengths NL and NR respectively, the duration lengthening means including means for applying the first weighting function to the block emphasizing the beginning segment to generate the first vector and means for applying the second weighting function to the block emphasizing the ending segment to generate the second vector and means for combining the first vector with the second vector to generate a new frame and means for concatenating the left frame, the new frame, and the right frame to generate the modified block.

The present application is related to U.S. Patent Application entitled METHOD AND APPARATUS FOR PROSODY OF SYNTHETIC SPEECH, invented by Scott E. Meredith, U.S. Patent Application entitled DIRECT MANIPULATION INTERFACE FOR PROSODY CONTROL OF SPEECH, invented by Scott E. Meredith, and U.S. Patent Application entitled METHOD AND APPARATUS FOR AUTOMATIC ASSIGNMENT OF DURATION VALUES FOR SYNTHETIC SPEECH, invented by Scott E. Meredith, which are being filed on the same day as the present application, and are owned now and were owned at the time of the inventions by the same Assignee. These related applications are incorporated by reference as if fully set forth herein.

A portion of the disclosure of this patent document contains material to which the claim of copyright protection is made. The copyright owner has no objection to the facsimile reproduction by any person of the patent document or the patent disclosure, as it appears in the U.S. Patent and Trademark Office file or records, but reserves all other rights whatsoever.

1. Field of the Invention

The present invention relates to translating text in a computer system to synthesized speech; and more particularly to techniques used in such systems for control of intonation in synthesized speech.

2. Description of the Related Art

In text-to-speech systems, stored text in a computer is translated to synthesized speech. As can be appreciated, this kind of system would have widespread application if it were of reasonable cost. For instance, a text-to-speech system could be used for reviewing electronic mail remotely across a telephone line, by causing the computer storing the electronic mail to synthesize speech representing the electronic mail. Also, such systems could be used for reading to people who are visually impaired. In the word processing context, text-to-speech systems might be used to assist in proofreading a large document.

However, in prior art systems of reasonable cost, the quality of the speech has been relatively poor, making it uncomfortable to use or difficult to understand. In order to achieve good quality speech, prior art speech synthesis systems need specialized hardware which is very expensive, and/or a large amount of memory space in the computer system generating the sound.

Prior art systems which have addressed this problem are described in part in U.S. Pat. No. 4,852,168, entitled COMPRESSION OF STORED WAVE FORMS FOR ARTIFICIAL SPEECH, invented by Sprague; and U.S. Pat. No. 4,692,941, entitled REAL-TIME TEXT-TO-SPEECH CONVERSION SYSTEM, invented by Jacks, et al. Further background concerning speech synthesis may be found in U.S. Pat. No. 4,384,169, entitled METHOD AND APPARATUS FOR SPEECH SYNTHESIZING, invented by Mozer, et al.

In text-to-speech systems, an algorithm reviews an input text string, and translates the words in the text string into a sequence of diphones which must be translated into synthesized speech. Also, text-to-speech systems analyze the text based on word type and context to generate intonation control used for adjusting the duration of the sounds and the pitch of the sounds involved in the speech.

A diphone is a unit of speech composed of the transition between one sound, or phoneme, and an adjacent sound, or phoneme. Diphones typically are encoded as a sequence of frames of sound data starting at the center of one phoneme and ending at the center of a neighboring phoneme. This preserves the transition between the sounds relatively well. The encoded diphones have a nominal pitch determined by the length of a pitch period in the encoded speech and a nominal duration determined by the number of pitch periods corresponding to a particular encoded sound. These nominal values must be adjusted to synthesize natural sounding speech.

Intonation control in such systems involves lengthening or shortening particular frames, or pitch periods, of speech data for pitch control, and inserting or deleting frames associated with particular sounds for duration control. Prior art systems have accomplished these modifications by relatively crude clipping and extrapolation on pitch period boundaries that introduce discontinuities in output speech data sequences. In some cases, these discontinuities may introduce audible clicks or other noise.

Notwithstanding the prior work in this area, the use of text-to-speech systems has not gained widespread acceptance. It is desirable therefore to provide a software-only text-to-speech system which is portable to a wide variety of microcomputer platforms, conserves memory space in such platforms for other uses, and performs intonation control with high quality.

The present invention provides a software-only real time text-to-speech system including intonation control which does not introduce discontinuities into the output speech stream. The intonation control system adjusts the intonation of sounds represented by a sequence of frames having respective lengths of digital samples. It includes a means that receives intonation control signals and a buffer for storing frames in the sequence of sound data. The intonation control system is responsive to the intonation control signals for modifying a block of one or more frames in the sequence to generate a modified block. The modified block substantially preserves the continuity of the beginning and ending segments of the block with adjacent frames in the sequence. Thus, when the modified block is inserted in the sequence, no discontinuities are introduced and smooth intonation control is accomplished.

According to one aspect of the invention, the intonation control signals include pitch control signals which indicate an amount of adjustment of the nominal lengths of particular frames in the sequence. Also, the intonation control signal may include duration control signals which indicate an amount to reduce or increase the number of frames in the sequence corresponding to particular sounds.

The pitch adjustment means includes a pitch lowering module which increases the length N of a particular frame by an amount of Δ samples. In this case, the block which is modified consists of the particular frame. A first weighting function is applied to the block in the buffer emphasizing the beginning segment to generate a first vector, and a second weighting function is applied to the block emphasizing the ending segment to generate a second vector. The first vector is combined with the second vector shifted by Δ samples to generate a modified block of length N+Δ.

A pitch raising module is included for decreasing the length N of a particular frame by an amount Δ. In this case, the block stored in the buffer consists of the particular frame that is the subject of the pitch adjustment and the next frame in the sequence, of length NR. A first weighting function is applied to the block emphasizing the beginning segment to generate a first vector, and a second weighting function is applied to the block emphasizing the ending segment to generate a second vector. The first vector is combined with the second vector shifted by Δ samples to generate a shortened frame, and the shortened frame is concatenated with the next frame to produce a modified block of length N-Δ+NR.

Duration control includes duration shortening modules and duration lengthening modules. In the duration shortening module, the duration control signals indicate an amount to reduce the number of frames in a sequence that correspond to a particular sound. In this case, the block stored in the buffer consists of two sequential frames of respective lengths NL and NR which correspond to a particular sound. A first weighting function is applied to the block emphasizing the beginning segment to generate a first vector, and a second weighting function is applied to the block emphasizing the ending segment to generate a second vector. The first and second vectors are combined to generate a modified block having either the length NL or the length NR.

The duration lengthening module is responsive to duration control signals which indicate an amount to increase the number of frames in the sequence which correspond to a particular sound. In this case, the block to be modified consists of left and right sequential frames of respective lengths NL and NR which correspond to the particular sound. A first weighting function is applied to the block emphasizing the beginning segment to generate a first vector. A second weighting function is applied to the block emphasizing the ending segment to generate a second vector. The first and second vectors are combined to generate a new frame for insertion in the sequence. The left frame, the new frame, and the right frame are concatenated to produce the modified block.

According to another aspect of the invention, the intonation control is explicitly applied to speech data in a text-to-speech system. The text-to-speech system includes a module for translating text to a sequence of sound segment codes and intonation control signals. A decoder is coupled to the translator to produce sets of digital frames which represent sounds for the respective sound segment codes in the sequence. An intonation adjustment module as described above is included, which is responsive to the translator and modifies the outputs of the decoder to produce an intonation adjusted sequence of data. An audio transducer receives the intonation adjusted sequence to produce synthesized speech.

By modifying speech data to adjust the intonation without introducing discontinuities between frames of speech data, a much improved text-to-speech system is achieved. Furthermore, the present invention is well suited to real time application in a wide variety of standard microcomputer platforms, such as the Apple Macintosh class computers, DOS based computers, UNIX based computers, and the like. The system occupies a relatively small amount of system memory, and utilizes a relatively small amount of processor resources to achieve very high quality synthesized speech.

Other aspects and advantages of the present invention can be seen upon review of the figures, the detailed description, and the claims which follow.

FIG. 1 is a block diagram of a generic hardware platform incorporating the text-to-speech system of the present invention.

FIG. 2 is a flow chart illustrating the basic text-to-speech routine according to the present invention.

FIG. 3 illustrates the format of diphone records according to one embodiment of the present invention.

FIG. 4 is a flow chart illustrating the encoder for speech data according to the present invention.

FIG. 5 is a graph discussed in reference to the estimation of pitch filter parameters in the encoder of FIG. 4.

FIG. 6 is a flow chart illustrating the full search used in the encoder of FIG. 4.

FIG. 7 is a flow chart illustrating a decoder for speech data according to the present invention.

FIG. 8 is a flow chart illustrating a technique for blending the beginning and ending of adjacent diphone records.

FIGS. 9a-c consist of a set of graphs referred to in explanation of the blending technique of FIG. 8.

FIG. 10 is a graph illustrating a typical pitch versus time diagram for a sequence of frames of speech data.

FIG. 11 is a flow chart illustrating a technique for increasing the pitch period of a particular frame.

FIGS. 12a-e are a set of graphs referred to in explanation of the technique of FIG. 11.

FIG. 13 is a flow chart illustrating a technique for decreasing the pitch period of a particular frame.

FIGS. 14a-c are a set of graphs referred to in explanation of the technique of FIG. 13.

FIG. 15 is a flow chart illustrating a technique for inserting a pitch period between two frames in a sequence.

FIGS. 16a-c are a set of graphs referred to in explanation of the technique of FIG. 15.

FIG. 17 is a flow chart illustrating a technique for deleting a pitch period in a sequence of frames.

FIGS. 18a-c are a set of graphs referred to in explanation of the technique of FIG. 17.

A detailed description of preferred embodiments of the present invention is provided with reference to the figures. FIGS. 1 and 2 provide an overview of a system incorporating the present invention. FIG. 3 illustrates the basic manner in which diphone records are stored according to the present invention. FIGS. 4-6 illustrate the encoding methods based on vector quantization of the present invention. FIG. 7 illustrates the decoding algorithm according to the present invention.

FIGS. 8 and 9a-c illustrate a preferred technique for blending the beginning and ending of adjacent diphone records. FIGS. 10, 11, 12a-e, 13, 14a-c, 15, 16a-c, 17, and 18a-c illustrate the techniques for controlling the pitch and duration of sounds in the text-to-speech system.

I. System Overview (FIGS. 1-3)

FIG. 1 illustrates a basic microcomputer platform incorporating a text-to-speech system based on vector quantization according to the present invention. The platform includes a central processing unit 10 coupled to a host system bus 11. A keyboard 12 or other text input device is provided in the system. Also, a display system 13 is coupled to the host system bus. The host system also includes a non-volatile storage system such as a disk drive 14. Further, the system includes host memory 15. The host memory includes the text-to-speech (TTS) code, including encoded voice tables, buffers, and other data. The text-to-speech code is used to generate speech data for supply to an audio output module 16 which includes a speaker 17.

According to the present invention, the encoded voice tables include a TTS dictionary which is used to translate text to a string of diphones. Also included is a diphone table which translates the diphones to identified strings of quantization vectors. A quantization vector table is used for decoding the sound segment codes of the diphone table into the speech data for audio output. Also, the system may include a vector quantization table for encoding which is loaded into the host memory 15 when necessary. Also, the text-to-speech code in the instruction memory includes an intonation control module which preserves the continuity of encoded speech, while providing sophisticated pitch and duration control.

The platform illustrated in FIG. 1 represents any generic microcomputer system, including a Macintosh-based system, a DOS-based system, a UNIX-based system, or other types of microcomputers. The text-to-speech code and encoded voice tables according to the present invention for decoding occupy a relatively small amount of host memory 15. For instance, a text-to-speech decoding system according to the present invention may be implemented which occupies less than 640 kilobytes of main memory, and yet produces high quality, natural sounding synthesized speech.

The basic algorithm executed by the text-to-speech code is illustrated in FIG. 2. The system first receives the input text (block 20). The input text is translated to diphone strings using the TTS dictionary (block 21). At the same time, the input text is analyzed to generate intonation control data, to control the pitch and duration of the diphones making up the speech (block 22). The intonation control signals in the preferred system may be produced for instance as described in the related applications, incorporated by reference above.

After the text has been translated to diphone strings, the diphone strings are decompressed to generate vector quantized data frames (block 23). After the vector quantized (VQ) data frames are produced, the beginnings and endings of adjacent diphones are blended to smooth any discontinuities (block 24). Next, the duration and pitch of the diphone VQ data frames are adjusted in response to the intonation control data (blocks 25 and 26). Finally, the speech data is supplied to the audio output system for real time speech production (block 27). For systems having sufficient processing power, an adaptive post filter may be applied to further improve the speech quality.
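For illustration only, the flow of FIG. 2 can be outlined in code. The sketch below is a minimal skeleton in which every function name is a hypothetical placeholder for the corresponding block of FIG. 2; none of these names appear in the patent itself.
______________________________________
/* Outline of the FIG. 2 flow.  Every function name here is a
 * hypothetical placeholder for the corresponding block of FIG. 2,
 * not an API defined by the patent. */
typedef struct TTSContext TTSContext;

void text_to_diphones(TTSContext *ctx, const char *text);    /* block 21 */
void generate_intonation(TTSContext *ctx, const char *text); /* block 22 */
void decode_diphones(TTSContext *ctx);                       /* block 23 */
void blend_boundaries(TTSContext *ctx);                      /* block 24 */
void adjust_intonation(TTSContext *ctx);                     /* blocks 25-26 */
void audio_output(TTSContext *ctx);                          /* block 27 */

void synthesize(TTSContext *ctx, const char *text)
{
    text_to_diphones(ctx, text);
    generate_intonation(ctx, text);
    decode_diphones(ctx);
    blend_boundaries(ctx);
    adjust_intonation(ctx);
    audio_output(ctx);
}
______________________________________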

The TTS dictionary can be implemented using any one of a variety of techniques known in the art. According to the present invention, diphone records are implemented as shown in FIG. 3 in a highly compressed format.

FIG. 3 shows a record for a left diphone 30 and a record for a right diphone 31. The record for the left diphone 30 includes a count 32 of the number NL of pitch periods in the diphone. Next, a pointer 33 is included which points to a table of length NL storing the pitch value LPi, for i from 0 to NL-1, of each pitch period of the corresponding compressed frame records. Finally, a pointer 34 is included to a table 36 of ML vector quantized compressed speech records, each having a fixed set length of encoded frame size related to the nominal pitch of the encoded speech for the left diphone. The nominal pitch is based upon the average number of samples for a given pitch period for the speech data base.

A similar structure can be seen for the right diphone 31. Using vector quantization, the length of the compressed speech records is very short relative to the quality of the speech generated.

The format of the vector quantized speech records can be understood further with reference to the frame encoder routine and the frame decoder routine described below with reference to FIGS. 4-7.

II. The Encoder/Decoder Routines (FIGS. 4-7)

The encoder routine is illustrated in FIG. 4. The encoder accepts as input a frame sn of speech data. In the preferred system, the speech samples are represented as 12 or 16 bit two's complement numbers, sampled at 22,252 Hz. This data is divided into non-overlapping frames sn having a length of N, where N is referred to as the frame size. The value of N depends on the nominal pitch of the speech data. If the nominal pitch of the recorded speech is less than 165 samples (or 135 Hz), the value of N is chosen to be 96. Otherwise a frame size of 160 is used. The encoder transforms the N-point data sequence sn into a byte stream of shorter length, which depends on the desired compression rate. For example, if N=160 and very high data compression is desired, the output byte stream can be as short as 12 eight bit bytes. A block diagram of the encoder is shown in FIG. 4.

Thus, the routine begins by accepting a frame sn (block 50). To remove low frequency noise, such as DC or 60 Hz power line noise, and produce offset free speech data, signal sn is passed through a high pass filter. A difference equation used in a preferred system to accomplish this is set out in Equation 1 for 0≦n<N.

xn = sn - s(n-1) + 0.999*x(n-1) (Equation 1)

The value xn is the "offset free" signal. The variables s-1 and x-1 are initialized to zero for each diphone and are subsequently updated using the relation of Equation 2.

x(-1) = xN and s(-1) = sN (Equation 2)

This step can be referred to as offset compensation or DC removal (block 51).

In order to partially decorrelate the speech samples and the quantization noise, the sequence xn is passed through a fixed first order linear prediction filter. The difference equation to accomplish this is set forth in Equation 3.

yn = xn - 0.875*x(n-1) (Equation 3)

The linear prediction filtering of Equation 3 produces a frame yn (block 52). The filter parameter, which is equal to 0.875 in Equation 3, will have to be modified if a different speech sampling rate is used. The value of x-1 is initialized to zero for each diphone, but will be updated in the step of inverse linear prediction filtering (block 60) as described below.

It is possible to use a variety of filter types, including, for instance, an adaptive filter in which the filter parameters are dependent on the diphones to be encoded, or higher order filters.
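As a concrete illustration of blocks 51 and 52, the following sketch applies the offset compensation of Equation 1 and the fixed first-order prediction filter of Equation 3 to a single frame. The function name and the use of floating point are illustrative assumptions; the state variables correspond to s(-1) and x(-1) of Equations 1 through 3, and the prediction filter state is simply taken from the offset-free signal here rather than from the decoded signal of block 60.
______________________________________
#include <stddef.h>

/* Sketch of blocks 51-52 of FIG. 4: offset compensation (Equation 1)
 * followed by the fixed first-order prediction filter (Equation 3).
 * st_s1 and st_x1 carry s(-1) and x(-1) across frames per Equation 2;
 * lpc_x1 carries the x(-1) state of Equation 3, which the patent
 * refreshes from the inverse prediction step of block 60 (taken here
 * from the offset-free signal, a simplification). */
void preemphasize_frame(const short *s, float *y, size_t N,
                        float *st_s1, float *st_x1, float *lpc_x1)
{
    float s_prev = *st_s1;    /* s(n-1) for Equation 1 */
    float x_prev = *st_x1;    /* x(n-1) for Equation 1 */
    float xp     = *lpc_x1;   /* x(n-1) for Equation 3 */
    size_t n;

    for (n = 0; n < N; n++) {
        float x = (float)s[n] - s_prev + 0.999f * x_prev;  /* Equation 1 */
        y[n] = x - 0.875f * xp;                            /* Equation 3 */
        s_prev = (float)s[n];
        x_prev = x;
        xp = x;
    }
    *st_s1 = s_prev;    /* Equation 2 */
    *st_x1 = x_prev;    /* Equation 2 */
    *lpc_x1 = xp;
}
______________________________________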

The sequence yn produced by Equation 3 is then utilized to determine an optimum pitch value, Popt, and an associated gain factor, β. Popt is computed using the functions sxy (P), sxx (P), syy (P), and the coherence function Coh(P) defined by Equations 4, 5, 6 and 7 as set out below. ##EQU1##

PBUF is a pitch buffer of size Pmax, which is initialized to zero, and updated in the pitch buffer update block 59 as described below. Popt is the value of P for which Coh(P) is maximum and sxy (P) is positive. The range of P considered depends on the nominal pitch of the speech being coded. The range is (96 to 350) if the frame size is equal to 96 and is (160 to 414) if the frame size is equal to 160. Pmax is 350 if nominal pitch is less than 160 and is equal to 414 otherwise. The parameter Popt can be represented using 8 bits.

The computation of Popt can be understood with reference to FIG. 5. In FIG. 5, the buffer PBUF is represented by the sequence 100 and the frame yn is represented by the sequence 101. In a segment of speech data in which the preceding frames are substantially equal to the frame yn, PBUF and yn will look as shown in FIG. 5. Popt will have the value at point 102, where the vector yn 101 matches as closely as possible a corresponding segment of similar length in PBUF 100.

The pitch filter gain parameter β is determined using the expression of Equation 8.

β = sxy(Popt) / syy(Popt) (Equation 8)

β is quantized to four bits, so that the quantized value of β can range from 1/16 to 1, in steps of 1/16.
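Equations 4 through 7 are not reproduced in this text, so the following sketch assumes the conventional definitions implied by the surrounding description: sxy(P) is the correlation between the frame yn and the segment of PBUF located P samples back, syy(P) is the energy of that segment, sxx is the energy of yn, and Coh(P) = sxy(P)*sxy(P)/(sxx*syy(P)). Popt maximizes Coh(P) subject to sxy(P) being positive, and β follows from Equation 8.
______________________________________
#include <stddef.h>

/* Sketch of block 53 of FIG. 4 under assumed forms of Equations 4-7.
 * y is the current frame of N samples; PBUF is the pitch buffer of
 * Pmax samples.  Pmin..Pmax_search is the search range (96..350 or
 * 160..414 per the text); Pmin is at least N, so the PBUF segment of
 * length N always lies inside the buffer. */
void estimate_pitch(const float *y, size_t N,
                    const float *PBUF, size_t Pmax,
                    size_t Pmin, size_t Pmax_search,
                    size_t *Popt, float *beta)
{
    float best_coh = -1.0f;
    size_t P;

    *Popt = Pmin;
    *beta = 0.0f;
    for (P = Pmin; P <= Pmax_search && P <= Pmax; P++) {
        const float *seg = PBUF + (Pmax - P);   /* segment P samples back */
        float sxy = 0.0f, syy = 0.0f, sxx = 0.0f;
        size_t n;
        for (n = 0; n < N; n++) {
            sxy += y[n] * seg[n];               /* assumed Equation 4 */
            sxx += y[n] * y[n];                 /* assumed Equation 5 */
            syy += seg[n] * seg[n];             /* assumed Equation 6 */
        }
        if (sxy > 0.0f && sxx > 0.0f && syy > 0.0f) {
            float coh = (sxy * sxy) / (sxx * syy);   /* assumed Equation 7 */
            if (coh > best_coh) {
                best_coh = coh;
                *Popt = P;
                *beta = sxy / syy;              /* Equation 8 */
            }
        }
    }
    /* beta is subsequently quantized to 4 bits (1/16 .. 1 in steps of 1/16). */
}
______________________________________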

Next, a pitch filter is applied (block 54). The long term correlations in the pre-emphasized speech data yn are removed using the relation of Equation 9. ##EQU2##

This results in computation of a residual signal rn.

Next, a scaling parameter G is generated using a block gain estimation routine (block 55). In order to increase the computational accuracy of the following stages of processing, the residual signal rn is rescaled. The scaling parameter, G, is obtained by first determining the largest magnitude of the signal rn and quantizing it using a 7-level quantizer. The parameter G can take one of the following 7 values: 256, 512, 1024, 2048, 4096, 8192, and 16384. The consequence of choosing these quantization levels is that the rescaling operation can be implemented using only shift operations.
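Because the seven gain levels are powers of two, the gain search reduces to comparisons and the rescaling to shifts. The following sketch of block 55 is illustrative only; the function name, and the direction of the final rescaling (which the text does not spell out), are assumptions.
______________________________________
#include <stddef.h>
#include <stdlib.h>

/* Sketch of block 55 of FIG. 4: choose the smallest of the seven
 * power-of-two gain levels that covers the largest residual magnitude.
 * Because every level equals 1L << (8 + i), the subsequent rescaling of
 * the residual can be carried out with shift operations alone. */
static const long G_LEVELS[7] = { 256, 512, 1024, 2048, 4096, 8192, 16384 };

long estimate_block_gain(const long *r, size_t N, int *shift_out)
{
    long peak = 0;
    size_t n;
    int i;

    for (n = 0; n < N; n++) {
        long m = labs(r[n]);
        if (m > peak)
            peak = m;
    }
    for (i = 0; i < 6 && G_LEVELS[i] < peak; i++)
        ;                            /* quantize the peak upward */
    *shift_out = 8 + i;              /* G_LEVELS[i] == 1L << (8 + i) */
    return G_LEVELS[i];
}
______________________________________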

Next, the routine proceeds to residual coding using a full search vector quantization code (block 56). In order to code the residual signal rn, the N-point sequence rn is divided into non-overlapping blocks of length M, where M is referred to as the "vector size". Thus, M sample blocks bij are created, where i is an index from zero to N/M-1 on the block number, and j is an index from zero to M-1 on the sample within the block. Each block may be defined as set out in Equation 10.

bij = r(M*i + j), (0≦i<N/M and 0≦j<M) (Equation 10)

Each of these M sample blocks bij will be coded into an 8 bit number using vector quantization. The value of M depends on the desired compression ratio. For example, with M equal to 16, very high compression is achieved (i.e., 16 residual samples are coded using only 8 bits). However, the decoded speech quality can be perceived to be somewhat noisy with M=16. On the other hand, with M=2, the decompressed speech quality will be very close to that of uncompressed speech. However, the length of the compressed speech records will be longer. In the preferred implementation, the value of M can be 2, 4, 8, or 16.

The vector quantization is performed as shown in FIG. 6. Thus, for all blocks bij a sequence of quantization vectors is identified (block 120). First, the components of block bij are passed through a noise shaping filter and scaled as set out in Equation 11 (block 121). ##EQU3##

Thus, vij is the jth component of the vector vi, and the values w-1, w-2 and w-3 are the states of the noise shaping filter and are initialized to zero for each diphone. The filter coefficients are chosen to shape the quantization noise spectra in order to improve the subjective quality of the decompressed speech. After each vector is coded and decoded, these states are updated as described below with reference to blocks 124-126.

Next, the routine finds a pointer to the best match in a vector quantization table (block 122). The vector quantization table 123 consists of a sequence of vectors C0 through C255 (block 123).

Thus, the vector vi is compared against 256 M-point vectors, which are precomputed and stored in the code table 123. The vector Cqi which is closest to vi is determined according to Equation 12. The value Cp for p=0 through 255 represents the pth encoding vector from the vector quantization code table 123. ##EQU4##

The closest vector Cqi can also be determined efficiently using the technique of Equation 13.

viT · Cqi ≧ viT · Cp for all p (0≦p≦255) (Equation 13)

In Equation 13, the value vT represents the transpose of the vector v, and "·" represents the inner product operation in the inequality.

The encoding vectors Cp in table 123 are utilized to match against the noise filtered vector vi. However, in decoding, a decoding vector table 125 is used which consists of a sequence of vectors QVp. The values QVp are selected for the purpose of achieving quality sound data using the vector quantization technique. Thus, after finding the vector Cqi, the pointer q is utilized to access the vector QVqi. The decoded samples corresponding to the vector bi, which is produced at step 55 of FIG. 4, are the M-point vector (1/G)*QVqi. The vector Cp is related to the vector QVp by the noise shaping filter operation of Equation 11. Thus, when the decoding vector QVp is accessed, no inverse noise shaping filter needs to be computed in the decode operation. The table 125 of FIG. 6 thus includes noise compensated quantization vectors.
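A sketch of the codebook search of block 122 follows, using the squared-distance criterion of Equation 12; the inner-product shortcut of Equation 13 applies when the stored codevectors have equal energy. The function name and the fixed table dimensions are illustrative assumptions.
______________________________________
#include <stddef.h>
#include <float.h>

/* Sketch of block 122 of FIG. 6: find the index q of the encoding
 * vector Cq (out of 256 codevectors of up to 16 points) closest to the
 * noise-shaped vector v of length M, per the distance criterion of
 * Equation 12. */
int find_codevector(const float *v, const float codebook[256][16], size_t M)
{
    int best = 0, p;
    float best_d = FLT_MAX;

    for (p = 0; p < 256; p++) {
        float d = 0.0f;
        size_t j;
        for (j = 0; j < M; j++) {
            float e = v[j] - codebook[p][j];
            d += e * e;
        }
        if (d < best_d) {
            best_d = d;
            best = p;
        }
    }
    return best;   /* 8-bit index q; decoding accesses the companion vector QVq */
}
______________________________________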

Continuing the computation of the encoding vectors for the vectors bij which make up the residual signal rn, the decoding vector identified by the pointer for the vector bi is accessed (block 124). That decoding vector is used for the filter and PBUF updates (block 126).

For the noise shaping filter, after the decoded samples are computed for each sub-block bi, the error vector (bi -QVqi) is passed through the noise shaping filter as shown in Equation 14. ##EQU5##

In Equation 14, the value QVqi (j) represents the jth component of the decoding vector QVqi. The noise shaping filter states for the next block are updated as shown in Equation 15.

w(-1) = w(M-1), w(-2) = w(M-2), w(-3) = w(M-3) (Equation 15)

This coding and decoding is performed for all of the N/M sub-blocks to obtain N/M indices to the decoding vector table 125. This string of indices Qn, for n going from zero to N/M-1, represents identifiers for a string of decoding vectors for the residual signal rn.

Thus, four parameters represent the N-point data sequence yn :

1) Optimum pitch, Popt (8 bits),

2) Pitch filter gain, β (4 bits),

3) Scaling parameter, G (3 bits), and

4) A string of decoding table indices, Qn (0≦n<N/M).

The parameters β and G can be coded into a single byte. Thus, only (N/M) plus 2 bytes are used to represent N samples of speech. For example, suppose the nominal pitch is 100 samples long, and M=16. In this case, a frame of 96 samples of speech is represented by 8 bytes: 1 byte for Popt, 1 byte for β and G, and 6 bytes for the decoding table indices Qn. If the uncompressed speech consists of 16 bit samples, then this represents a compression of 24:1.

Returning to FIG. 4, the four parameters identifying the speech data are stored (block 57). In a preferred system, they are stored in a structure as described with respect to FIG. 3, where the structure of the frame can be characterized as follows:

______________________________________
#define NumOfVectorsPerFrame (FrameSize / VectorSize)
struct frame {
    unsigned Gain : 4;
    unsigned Beta : 3;
    unsigned UnusedBit : 1;
    unsigned char Pitch;
    unsigned char VQcodes[NumOfVectorsPerFrame];
};
______________________________________

The diphone record of FIG. 3 utilizing this frame structure can be characterized as follows:

______________________________________
struct DiphoneRecord {
    char LeftPhone, RightPhone;
    short LeftPitchPeriodCount, RightPitchPeriodCount;
    short *LeftPeriods, *RightPeriods;
    struct frame *LeftData, *RightData;
};
______________________________________

These stored parameters uniquely provide for identification of the diphones required for text-to-speech synthesis.

As mentioned above with respect to FIG. 6, the encoder continues decoding the data being encoded in order to update the filter and PBUF values. The first step involved in this is an inverse pitch filter (block 58). With the vector r'n corresponding to the decoded residual signal, formed by concatenating the string of decoding vectors, the inverse filter is implemented as set out in Equation 16. ##EQU6##

Next, the pitch buffer is updated (block 59) with the output of the inverse pitch filter. The pitch buffer PBUF is updated as set out in Equation 17. ##EQU7##

Finally, the linear prediction filter parameters are updated using an inverse linear prediction filter step (block 60). The output of the inverse pitch filter is passed through a first order inverse linear prediction filter to obtain the decoded speech. The difference equation to implement this filter is set out in Equation 18.

x'n = 0.875*x'(n-1) + y'n (Equation 18)

In Equation 18, x'n is the decompressed speech. From this, the value of x(-1) for the next frame is set to the final value of x'n, for use in the step of block 52.

FIG. 7 illustrates the decoder routine. The decoder module accepts as input (N/M)+2 bytes of data, generated by the encoder module, and produces as output N samples of speech. The value of N depends on the nominal pitch of the speech data and the value of M depends on the desired compression ratio.

In software-only text-to-speech systems, the computational complexity of the decoder must be as small as possible to ensure that the text-to-speech system can run in real time even on slow computers. A block diagram of the decoder is shown in FIG. 7.

The routine starts by accepting diphone records at block 200. The first step involves parsing the parameters G, β, Popt, and the vector quantization string Qn (block 201). Next, the residual signal r'n is decoded (block 202). This involves accessing and concatenating the decoding vectors for the vector quantization string as shown schematically at block 203 with access to the decoding quantization vector table 125.

After the residual signal r'n is decoded, an inverse pitch filter is applied (block 204). This inverse pitch filter is implemented as shown in Equation 19:

y'n = r'n + β*SPBUF(Pmax - Popt + n), 0≦n<N (Equation 19)

SPBUF is a synthesizer pitch buffer of length Pmax, initialized to zero for each diphone, as described above with respect to the encoder pitch buffer PBUF.

For each frame, the synthesis pitch buffer is updated (block 205). The manner in which it is updated is shown in Equation 20: ##EQU8##

After updating SPBUF, the sequence y'n is applied to an inverse linear prediction filtering step (block 206). Thus, the output of the inverse pitch filter y'n is passed through a first order inverse linear prediction filter to obtain the decoded speech. The difference equation to implement the inverse linear prediction filter is set out in Equation 21:

x'n = 0.875*x'(n-1) + y'n (Equation 21)

In Equation 21, the vector x'n corresponds to the decompressed speech. This filtering operation can be implemented using simple shift operations without requiring any multiplication. Therefore, it executes very quickly and utilizes a very small amount of the host computer resources.
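Putting FIG. 7 together, the following sketch decodes one frame: it reconstructs the residual from the decoding vector table, applies the inverse pitch filter of Equation 19, updates the synthesis pitch buffer, and applies the inverse prediction filter of Equation 21. Because Equation 20 is not reproduced in this text, the shift-and-append form of the SPBUF update shown here is an assumption, and the floating point arithmetic is a simplification of the shift-based implementation described above.
______________________________________
#include <stddef.h>
#include <string.h>

#define PMAX 414   /* largest pitch buffer length used in the text */

/* Sketch of FIG. 7 for one frame.  QV is the decoding vector table
 * (256 vectors of up to 16 points), codes holds the N/M one-byte
 * indices, G and beta are the dequantized gain and pitch filter gain,
 * and Popt is the decoded pitch (at least N, per the search range of
 * FIG. 4).  spbuf and lpc_x1 persist across frames. */
void decode_frame(const unsigned char *codes, size_t N, size_t M,
                  const float QV[256][16], float G, float beta, size_t Popt,
                  float *spbuf /* length PMAX */, float *lpc_x1,
                  float *out /* length N */)
{
    size_t n, i, j;

    /* Blocks 202-203: residual r'(n) from the decoding vectors, scaled by 1/G. */
    for (i = 0; i < N / M; i++)
        for (j = 0; j < M; j++)
            out[i * M + j] = QV[codes[i]][j] / G;

    /* Block 204: inverse pitch filter, Equation 19. */
    for (n = 0; n < N; n++)
        out[n] += beta * spbuf[PMAX - Popt + n];

    /* Block 205: update SPBUF with y'(n); the shift-and-append form is
     * an assumed reading of Equation 20. */
    memmove(spbuf, spbuf + N, (PMAX - N) * sizeof(float));
    memcpy(spbuf + PMAX - N, out, N * sizeof(float));

    /* Block 206: inverse linear prediction filter, Equation 21. */
    for (n = 0; n < N; n++) {
        out[n] = 0.875f * (*lpc_x1) + out[n];
        *lpc_x1 = out[n];
    }
}
______________________________________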

Encoding and decoding speech according to the algorithms described above provides several advantages over prior art systems. First, this technique offers higher speech compression rates with decoders simple enough to be used in the implementation of software-only text-to-speech systems on computer systems with low processing power. Second, the technique offers a very flexible trade-off between the compression ratio and synthesized speech quality. A high-end computer system can opt for higher quality synthesized speech at the expense of a bigger RAM memory requirement.

III. Waveform Blending For Discontinuity Smoothing (FIGS. 8 and 9a-c)

As mentioned above with respect to FIG. 2, the synthesized frames of speech data generated using the vector quantization technique may result in slight discontinuities between diphones in a text string. Thus, the text-to-speech system provides a module for blending the diphone data frames to smooth such discontinuities. The blending technique of the preferred embodiment is shown with respect to FIGS. 8 and 9a-c.

Two concatenated diphones will have an ending frame and a beginning frame. The ending frame of the left diphone must be blended with the beginning frame of the right diphone without audible discontinuities or clicks being generated. Since the right boundary of the first diphone and the left boundary of the second diphone correspond to the same phoneme in most situations, they are expected to look similar at the point of concatenation. However, because the two diphone codings are extracted from different contexts, they will not look identical. This blending technique is applied to eliminate discontinuities at the point of concatenation. In FIGS. 9a-c, the last frame, referring here to one pitch period, of the left diphone is designated Ln (0≦n<PL) at the top of the page. The first frame (pitch period) of the right diphone is designated Rn (0≦n<PR). The blending of Ln and Rn according to the present invention will alter these two pitch periods only and is performed as discussed with reference to FIG. 8. The waveforms in FIGS. 9a-c are chosen to illustrate the algorithm, and may not be representative of real speech data.

Thus, the algorithm as shown in FIG. 8 begins with receiving the left and right diphone in a sequence (block 300). Next, the last frame of the left diphone is stored in the buffer Ln (block 301). Also, the first frame of the right diphone is stored in buffer Rn (block 302).

Next, the algorithm replicates and concatenates the left frame Ln to form an extended frame (block 303). In the next step, the discontinuities in the extended frame between the replicated left frames are smoothed (block 304). This smoothed and extended left frame is referred to as Eln in FIGS. 9a-c.

The extended sequence Eln (0≦n<2PL) is obtained in the first step as shown in Equation 22: ##EQU9## Then discontinuity smoothing from the point n=PL is conducted according to the filter of Equation 23: ##EQU10## In Equation 23, the value Δ is equal to 15/16 and El'(PL-1) = El2 + 3*(El1 - El0). Thus, as indicated in FIGS. 9a-c, the extended sequence Eln is substantially equal to Ln on the left hand side, has a smoothed region beginning at the point PL, and converges on the original shape of Ln toward the point 2PL. If Ln were perfectly periodic, the smoothed sequence El'n would equal Eln.

In the next step, the optimum match of Rn with the vector Eln is found. This match point is referred to as Popt. (Block 305.) This is accomplished essentially as shown in FIGS. 9a-c by comparing Rn with Eln to find the section of Eln which most closely matches Rn. This optimum blend point determination is performed using Equation 24, where W is the minimum of PL and PR, and AMDF represents the average magnitude difference function. ##EQU11##

This function is computed for values of p in the range of 0 to PL-1. The vertical bars in the operation denote the absolute value. W is the window size for the AMDF computation. Popt is chosen to be the value at which AMDF(p) is minimum. This means that p=Popt corresponds to the point at which sequences Eln+p (0≦n<W) and Rn (0≦n<W) are very close to each other.
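A sketch of the optimum blend point search of block 305 follows. Equation 24 is not reproduced in this text, so the window form used below, an average of absolute differences over W samples, is an assumption consistent with the description of the AMDF.
______________________________________
#include <stddef.h>
#include <math.h>
#include <float.h>

/* Sketch of block 305 of FIG. 8: find the offset p into the extended,
 * smoothed left frame El (of length 2*PL) at which the first frame R
 * of the right diphone matches best, using an average magnitude
 * difference over a window of W samples. */
size_t find_blend_point(const float *El, size_t PL,
                        const float *R, size_t PR)
{
    size_t W = (PL < PR) ? PL : PR;   /* window size, per the text */
    size_t p, n, best_p = 0;
    float best = FLT_MAX;

    for (p = 0; p < PL; p++) {        /* p ranges over 0 .. PL-1 */
        float amdf = 0.0f;
        for (n = 0; n < W; n++)
            amdf += fabsf(El[p + n] - R[n]);
        if (amdf < best) {
            best = amdf;
            best_p = p;
        }
    }
    return best_p;                    /* the blend point Popt */
}
______________________________________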

After determining the optimum blend point Popt, the waveforms are blended (block 306). The blending utilizes a first weighting ramp WL, which is shown in FIGS. 9a-c beginning at Popt in the Eln trace. A second ramp WR is shown in FIGS. 9a-c on the Rn trace, which is lined up with Popt. Thus, in the beginning of the blending operation, the value of Eln is emphasized. At the end of the blending operation, the value of Rn is emphasized.

Before blending, the length PL of Ln is altered as needed to ensure that when the modified Ln and Rn are concatenated, the waveforms are as continuous as possible. Thus, the length P'L is set to Popt if Popt is greater than PL/2. Otherwise, the length P'L is equal to W+Popt and the sequence Ln is equal to Eln for 0≦n<(P'L-1).

The blending ramp beginning at Popt is set out in Equation 25: ##EQU12##

Thus, the sequences Ln and Rn are windowed and added to get the blended Rn. The beginning of Ln and the ending of Rn are preserved to prevent any discontinuities with adjacent frames.
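Equation 25 is not reproduced in this text, so the sketch below assumes a simple linear crossfade of width W beginning at Popt, which matches the description of the ramps WL and WR: Eln is emphasized at the start of the blend and Rn at the end, and samples outside the blend window are left untouched.
______________________________________
#include <stddef.h>

/* Sketch of block 306 of FIG. 8: crossfade W samples of the extended
 * left frame El, starting at the blend point p_opt, into the right
 * frame R, writing the blended result back into R.  The linear ramp
 * stands in for the ramps WL and WR of Equation 25. */
void blend_frames(const float *El, size_t p_opt, float *R, size_t W)
{
    size_t n;
    for (n = 0; n < W; n++) {
        float wr = (float)(n + 1) / (float)(W + 1);  /* rises toward 1 */
        float wl = 1.0f - wr;                        /* falls toward 0 */
        R[n] = wl * El[p_opt + n] + wr * R[n];
    }
    /* Samples of R beyond the window, and of L before the blend point,
     * are preserved to avoid discontinuities with adjacent frames. */
}
______________________________________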

This blending technique is believed to minimize blending noise in synthesized speech produced by any concatenated speech synthesis.

IV. Pitch and Duration Modification (FIGS. 10, 11, 12a-e, 13, 14a-c, 15, 16a-c, 17, and 18a-c)

As mentioned above with respect to FIG. 2, a text analysis program analyzes the text and determines the duration and pitch contour of each phone that needs to be synthesized and generates intonation control signals. A typical control for a phone will indicate that a given phoneme, such as AE, should have a duration of 200 milliseconds and that its pitch should rise linearly from 220 Hz to 300 Hz. This requirement is graphically shown in FIG. 10. As shown in FIG. 10, T equals the desired duration (e.g. 200 milliseconds) of the phoneme. The frequency fb is the desired beginning pitch in Hz. The frequency fe is the desired ending pitch in Hz. The labels P1, P2, . . . , P6 indicate the number of samples of each frame needed to achieve the desired pitch frequencies f1, f2, . . . , f6. The relationship between the desired number of samples, Pi, and the desired pitch frequency fi (f1 = fb), is defined by the relation:

Pi = Fs / fi, where Fs is the sampling frequency for the data.
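For example, at the 22,252 Hz sampling rate used here, a desired pitch of 220 Hz corresponds to a pitch period of roughly 101 samples, and a desired pitch of 300 Hz corresponds to roughly 74 samples.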

As can be seen in FIG. 10, the pitch period for a lower frequency period of the phoneme is longer than the pitch period for a higher frequency period of the phoneme. If the nominal pitch period were P3, then the algorithm would be required to lengthen the pitch periods for frames P1 and P2 and decrease the pitch periods for frames P4, P5 and P6. Also, the given duration T of the phoneme will indicate how many pitch periods should be inserted or deleted from the encoded phoneme to achieve the desired duration period. FIGS. 11, 12a-e, 13, 14a-c, 15, 16a-c, 17, and 18a-c illustrate a preferred implementation of such algorithms.

FIG. 11 illustrates an algorithm for increasing the pitch period, with reference to the graphs of FIGS. 12a-e. The algorithm begins by receiving a control to increase the pitch period to N+Δ, where N is the pitch period of the encoded frame. (Block 350). In the next step, the pitch period data is stored in a buffer xn (block 351). xn is shown in FIGS. 12a-e at the top of the page. In the next step, a left vector Ln is generated by applying a weighting function WL to the pitch period data xn with reference to Δ (block 352). This weighting function is illustrated in Equation 26 where M=N-Δ: ##EQU13## As can be seen in FIGS. 12a-e, the weighting function WL is constant from the first sample to sample Δ, and decreases from Δ to N.

Next, a weighting function WR is applied to xn (block 353) as can be seen in the FIGS. 12a-e. This weighting function is executed as shown in Equation 27: ##EQU14##

As can be seen in FIGS. 12a-e, the weighting function WR increases from 0 to N-Δ and remains constant from N-Δ to N. The resulting waveforms Ln and Rn are shown conceptually in FIGS. 12a-e. As can be seen, Ln maintains the beginning of the sequence xn, while Rn maintains the ending of the data xn.

The pitch modified sequence yn is formed (block 354) by adding the two sequences as shown in Equation 28:

yn = Ln + R(n-Δ) (Equation 28)

This is graphically shown in FIGS. 12a-e by placing Rn shifted by Δ below Ln. The combination of Ln and Rn shifted by Δ is shown to be yn at the bottom of FIGS. 12a-e. The pitch period for yn is N+Δ. The beginning of yn is the same as the beginning of xn, and the ending of yn is substantially the same as the ending of xn. This maintains continuity with adjacent frames in the sequence, and accomplishes a smooth transition while extending the pitch period of the data.

Equation 28 is executed with the assumption that Ln is 0 for n≧N, and Rn is 0 for n<0. This is illustrated pictorially in FIGS. 12a-e.

An efficient implementation of this scheme which requires at most one multiply per sample, is shown in Equation 29: ##EQU15## This results in a new pitch period having a pitch period of N+Δ.
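The following sketch illustrates the pitch period lengthening of FIG. 11. Equations 26, 27 and 29 are not reproduced in this text, so a linear ramp over the overlap region is assumed; it keeps WL + WR equal to one, so that the beginning of yn matches the beginning of xn and the ending of yn matches the ending of xn, and it uses one multiply per sample in the spirit of Equation 29.
______________________________________
#include <stddef.h>

/* Sketch of FIG. 11: lengthen one pitch period x of N samples to N + delta
 * samples (delta is assumed smaller than N).  The exact windows of
 * Equations 26-27 are not given in this text; a linear ramp over the
 * overlap region is assumed, keeping WL + WR = 1 so that the beginning
 * of y matches the beginning of x and the ending of y matches the
 * ending of x. */
void lengthen_pitch_period(const float *x, size_t N, size_t delta, float *y)
{
    size_t n;

    for (n = 0; n < delta; n++)               /* beginning preserved */
        y[n] = x[n];

    for (n = delta; n < N; n++) {             /* overlap region */
        float wl = (float)(N - n) / (float)(N - delta);   /* falls 1 -> 0 */
        /* one multiply per sample, in the spirit of Equation 29 */
        y[n] = x[n - delta] + wl * (x[n] - x[n - delta]);
    }

    for (n = N; n < N + delta; n++)           /* ending preserved */
        y[n] = x[n - delta];
}
______________________________________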

There are also instances in which the pitch period must be decreased. The algorithm for decreasing the pitch period is shown in FIG. 13 with reference to the graphs of FIGS. 14a-c. Thus, the algorithm begins with a control signal indicating that the pitch period must be decreased to N-Δ. (Block 400). The first step is to store two consecutive pitch periods in the buffer xn (block 401). Thus, the buffer xn as can be seen in FIGS. 14a-c consists of two consecutive pitch periods, with the period Nl being the length of the first pitch period, and Nr being the length of the second pitch period. Next, two sequences Ln and Rn are conceptually created using weighting functions WL and WR (blocks 402 and 403). The weighting function WL emphasizes the beginning of the first pitch period, and the weighting function WR emphasizes the ending of the second pitch period. These functions can be conceptually represented as shown in Equations 30 and 31, respectively: ##EQU16##

In these equations, Δ is equal to the difference between Nl and the desired pitch period Nd. The value W is equal to 2*Δ, unless 2*Δ is greater than Nd, in which case W is equal to Nd.

These two sequences Ln and Rn are blended to form a pitch modified sequence yn (block 404). The length of the pitch modified sequence yn will be equal to the sum of the desired length and the length of the right phoneme frame Nr. It is formed by adding the two sequences as shown in Equation 32:

yn = Ln + R(n+Δ) (Equation 32)

Thus, when a pitch period is decreased, two consecutive pitch periods of data are affected, even though only the length of one pitch period is changed. This is done because pitch periods are divided at places where short-term energy is the lowest within a pitch period. Thus, this strategy affects only the low energy portion of the pitch periods. This minimizes the degradation in speech quality due to the pitch modification. It should be appreciated that the drawings in FIGS. 14a-c are simplified and do not represent actual pitch period data.

An efficient implementation of this scheme, which requires at most one multiply per sample, is set out in Equations 33 and 34.

The first pitch period of length Nd is given by Equation 33: ##EQU17##

The second pitch period of length Nr is generated as shown in Equation 34: ##EQU18##

As can be seen in FIGS. 14a-c, the sequence Ln is essentially equal to the first pitch period until the point Nl -W. At that point, a decreasing ramp WL is applied to the signal to dampen the effect of the first pitch period.

As also can be seen, the weighting function WR begins at the point Nl -W+Δ and applies an increasing ramp to the sequence xn until the point Nl +Δ. From that point, a constant value is applied. This has the effect of damping the right sequence and emphasizing the left at the beginning of the weighting functions, and of generating an ending segment which is substantially equal to the ending segment of xn, emphasizing the right sequence and damping the left. When the two functions are blended, the resulting waveform yn is substantially equal to the beginning of xn at the beginning of the sequence; from the point Nl -W to the point Nl a modified sequence is generated; and from Nl to the ending, the sequence xn shifted by Δ results.
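The following sketch illustrates the pitch period shortening of FIG. 13. Equations 30, 31, 33 and 34 are not reproduced in this text, so the linear crossfade of width W used below is an assumption; the overall shape (copy up to Nl -W, blend over W samples ending at Nl, then the input shifted by Δ) follows the description above.
______________________________________
#include <stddef.h>

/* Sketch of FIG. 13: shorten the first of two consecutive pitch periods.
 * x holds Nl + Nr samples, the desired first-period length is
 * Nd = Nl - delta, and y receives Nd + Nr samples (delta is assumed to
 * be no larger than Nr).  The window width W follows the text; the
 * linear crossfade stands in for the weighting of Equations 30-34. */
void shorten_pitch_period(const float *x, size_t Nl, size_t Nr,
                          size_t delta, float *y)
{
    size_t Nd = Nl - delta;                        /* desired length  */
    size_t W  = (2 * delta > Nd) ? Nd : 2 * delta; /* blend window    */
    size_t n;

    for (n = 0; n + W < Nl; n++)                   /* untouched low-energy part */
        y[n] = x[n];

    for (n = Nl - W; n < Nl; n++) {                /* crossfade of width W */
        float wl = (float)(Nl - n) / (float)W;     /* falls 1 -> 0 */
        y[n] = x[n + delta] + wl * (x[n] - x[n + delta]);
    }

    for (n = Nl; n < Nd + Nr; n++)                 /* remainder: x shifted by delta */
        y[n] = x[n + delta];
}
______________________________________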

A need also arises for insertion of pitch periods to increase the duration of a given sound. A pitch period is inserted according to the algorithm shown in FIG. 15 with reference to the drawings of FIGS. 16a-c.

The algorithm begins by receiving a control signal to insert a pitch period between frames Ln and Rn (block 450). Next, both Ln and Rn are stored in the buffer (block 451), where Ln and Rn are two adjacent pitch periods of a voice diphone. (Without loss of generality, it is assumed for the description that the two sequences are of equal lengths N.)

In order to insert a pitch period xn of the same duration without causing a discontinuity between Ln and xn or between xn and Rn, the pitch period xn should resemble Rn around n=0 (preserving Ln to xn continuity), and should resemble Ln around n=N (preserving xn to Rn continuity). This is accomplished by defining xn as shown in Equation 35: ##EQU19##

Conceptually, as shown in FIG. 15, the algorithm proceeds by generating a left vector WL(Ln), essentially applying the increasing ramp WL to the signal Ln (block 452).

A right vector WR(Rn) is generated using the weighting vector WR (block 453), which is essentially a decreasing ramp as shown in FIGS. 16a-c. Thus, the ending of Ln is emphasized by the left vector, and the beginning of Rn is emphasized by the vector WR.

Next, WL(Ln) and WR(Rn) are blended to create an inserted period xn (block 454).

The computation requirement for inserting a pitch period is thus just a multiplication and two additions per speech sample.

Finally, concatenation of Ln, xn and Rn produces a sequence with an inserted pitch period (block 455).
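A minimal C sketch of pitch period insertion follows; it is illustrative only and not the Appendix listing. It assumes a simple linear ramp w(n) = n/(N-1) for the weighting functions, equal period lengths N, and 16-bit samples; the names are hypothetical.

    /* Illustrative sketch only (not the Appendix listing): create an inserted
     * pitch period xi of length N that resembles R near n = 0 and resembles L
     * near n = N, so that the concatenation L, xi, R has no discontinuities.
     * A linear ramp is assumed for the weighting functions.
     */
    void insert_pitch_period(const short *L, const short *R, short *xi, int N)
    {
        int n;
        for (n = 0; n < N; n++) {
            float wl = (float)n / (float)(N - 1);   /* increasing ramp applied to L */
            /* one multiplication and two additions per speech sample */
            xi[n] = (short)(R[n] + wl * (L[n] - R[n]));
        }
    }

Writing the blend as R[n] plus a weighted difference is what keeps the per-sample cost to one multiplication and two additions, as noted above.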

Deletion of a pitch period is accomplished as shown in FIG. 17 with reference to the graphs of FIGS. 18a-c. This algorithm, which is very similar to the algorithm for inserting a pitch period, begins with receiving a control signal indicating deletion of pitch period Rn which follows Ln (block 500). Next, the pitch periods Ln and Rn are stored in the buffer (block 501). This is pictorially illustrated in FIGS. 18a-c at the top of the page. Again, without loss of generality, it is assumed that the two sequences have equal lengths N.

The algorithm operates to modify the pitch period Ln which precedes Rn (to be deleted) so that it resembles Rn, as n approaches N. This is done as set forth in Equation 36: ##EQU20## In Equation 36, the resulting sequence L'n is shown at the bottom of FIGS. 18a-c. Conceptually, Equation 36 applies a weighting function WL to the sequence Ln (block 502). This emphasizes the beginning of the sequence Ln as shown. Next, a right vector WR(Rn) is generated by applying a weighting vector WR to the sequence Rn that emphasizes the ending of Rn (block 503).

WL(Ln) and WR(Rn) are blended to create the resulting vector L'n (block 504). Finally, the two-period sequence Ln, Rn is replaced with the single sequence L'n in the pitch period string (block 505).
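For completeness, a corresponding C sketch of pitch period deletion is given below, under the same illustrative assumptions (linear ramps, equal period lengths N, hypothetical names); it produces the single blended period that replaces the pair Ln, Rn.

    /* Illustrative sketch only: delete pitch period R by replacing the pair
     * L, R with a single blended period Lp that begins like L and ends like R.
     * Assumes equal period lengths N and a linear ramp weighting.
     */
    void delete_pitch_period(const short *L, const short *R, short *Lp, int N)
    {
        int n;
        for (n = 0; n < N; n++) {
            float wr = (float)n / (float)(N - 1);   /* increasing ramp applied to R */
            Lp[n] = (short)(L[n] + wr * (R[n] - L[n]));
        }
    }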

IV. Conclusion

Accordingly, the present invention presents a software-only text-to-speech system which is efficient, uses a very small amount of memory, and is portable to a wide variety of standard microcomputer platforms. It takes advantage of knowledge about speech data to create speech compression, blending, and duration control routines which produce very high quality speech with very little computational resources.

A source code listing of the software for executing the compression and decompression, the blending, and the duration and pitch control routines is provided in the Appendix as an example of a preferred embodiment of the present invention.

The foregoing description of preferred embodiments of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in this art. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to understand the invention for various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents. ##SPC1##

Narayan, Shankar
