A system is disclosed for recognizing typing from typing transducers that provide the typist with only limited tactile feedback of key position. The system includes a typing decoder sensitive to the geometric pattern of a keystroke sequence as well as the distance between individual finger touches and nearby keys. The typing decoder hypothesizes plausible key sequences and compares their geometric pattern to the geometric pattern of corresponding finger touches. It may also hypothesize home row key locations for touches caused by hands resting on or near home row. The resulting pattern match metrics may be combined with character sequence transition probabilities from a spelling model. The typing decoder then chooses the hypothesis sequence with the best cumulative match metric and sends it as key codes or commands to a host computing device.

Patent: RE40993
Priority: Jan 28, 2001
Filed: Jan 13, 2006
Issued: Nov 24, 2009
Expiry: Jan 28, 2021
22. A method for recognizing typing, the method comprising:
receiving a touch location and time sequence for a plurality of keystrokes;
generating a set of key hypothesis sequences for the plurality of keystrokes;
computing a geometry match metric for each key hypothesis sequence; and
choosing a best hypothesized key sequence based on the geometry match metrics.
19. A typing recognition apparatus comprising:
a typing surface;
at least one touch sensor configured to provide surface coordinates of each touch by a typist to the typing surface;
a hypothesis tree generator configured to generate key hypothesis sequences from the surface coordinates of each touch; and
a pattern geometry evaluator configured to compute a geometry match metric for each of the key hypothesis sequences.
35. A method for recognizing typing, the method comprising:
receiving a touch location and time sequence for a plurality of keystrokes;
generating a set of key hypothesis sequences for the plurality of keystrokes;
computing a geometry match metric for each key hypothesis sequence;
computing a character transition cost for each key hypothesis sequence based on whether the key hypothesis sequence is building a dictionary word; and
selecting a best hypothesized key sequence from the hypothesized key sequences, the best hypothesized key sequence having a best cumulative match metric formulated from the geometry match metric and the character transition cost.
31. A typing recognition apparatus comprising:
a typing surface;
at least one touch sensor integrated with the typing surface and configured to provide surface coordinates of each touch on the typing surface;
a hypothesis tree generator configured to generate key hypothesis sequences from the surface coordinates of each touch;
a pattern geometry evaluator configured to compute a geometry match metric for each of the key hypothesis sequences;
a dictionary selector configured to compute a character transition cost for each of the key hypothesis sequences based on whether the hypothesized key sequence is building a dictionary word; and
a decoder configured for selecting a best hypothesized key sequence from the hypothesized key sequences, the best hypothesized key sequence having a best cumulative match metric formulated from the geometry match metric and the character transition cost.
23. A typing recognition apparatus that compensates for finger and hand drift during typing on a touch-sensitive surface, the apparatus comprising:
sensor scanning hardware configured for providing surface coordinates of each touch received on the touch-sensitive surface; and
a processor programmed for
extending existing key hypothesis sequences with hypotheses for keys in a neighborhood of each new touch,
computing geometry match metrics for the hypothesized key sequences by comparing touch separation vectors between successive touch locations with key separation vectors between successively hypothesized key locations and measuring zero-order key/touch alignment error,
computing a character transition cost for each of the hypothesized key sequences based on whether the hypothesized key sequence is building a dictionary word,
selecting a best hypothesized key sequence from the hypothesized key sequences, the best hypothesized key sequence having a best cumulative match metric formulated from the geometry match metric and the character transition cost, and
communicating symbols and commands represented by the best hypothesized key sequence to a host computer application.
1. A typing recognition apparatus for touch typing on surfaces with limited tactile feedback that compensates for finger and hand drift during typing and discourages any integrated spelling model from choosing dictionary words over unusual but carefully typed strings, the apparatus comprising:
a typing surface means that displays symbols indicating the locations of touchable keys;
touch sensor means that provides the surface coordinates of each touch by a typist attempting to strike said key symbols on said surface;
hypothesis tree generator means that extends existing key hypothesis sequences with hypotheses for keys in the neighborhood of each new touch;
pattern geometry evaluation means that computes geometry match metrics for the hypothesized key sequences by comparing separation vectors between the successive touch locations with separation vectors between the successively hypothesized key locations as well as by measuring the zero-order key/touch alignment error;
decoding means that finds the hypothesized key sequence with the best cumulative match metric; and,
transmission means for communicating the symbols and commands represented by the best hypothesized key sequence to host computer applications.
27. A method for compensating for finger and hand drift during typing on a touch-sensitive surface, comprising:
obtaining a touch location and time sequence for each detected touch in a touch sequence;
computing a set of touch separation vectors of increasing orders between the detected touches in the touch sequence;
generating a set of key hypothesis sequences for each touch in the touch sequence, each key hypothesis sequence associated with a key near the location of the touch;
for each key hypothesis sequence, computing a set of key separation vectors of increasing orders between the keys in the hypothesized key sequence;
for each key hypothesis sequence, computing a geometry match metric as a function of a magnitude of a zero-order touch/key alignment error and the magnitudes of each order's touch and key separation vector difference;
computing a character transition cost for each of the hypothesized key sequences based on whether the hypothesized key sequence is building a dictionary word;
selecting a best hypothesized key sequence from the hypothesized key sequences, the best hypothesized key sequence having a best cumulative match metric formulated from the geometry match metric and the character transition cost; and
transmitting symbols and commands represented by the best hypothesized key sequence to a host computer for further action.
13. A method for recognizing typing from typing devices that sense lateral finger position but provide limited tactile feedback of key location, the method advantageously compensating for finger and hand drift during typing and discouraging any integrated spelling model from choosing dictionary words over unusual but carefully typed strings, wherein the method comprises the following steps:
forming a touch location and time sequence from the fingertip position at the end of each keystroke as measured by typing sensors;
generating a set of key hypothesis sequences for the given touch sequence, each hypothesis in a sequence being for a key near the location of the touch causing the hypothesis;
for each key hypothesis, computing a key/touch alignment error vector as the difference between the location of the hypothesized key and the location of its causing touch;
for each key hypothesis, computing a geometry match metric as a function of the magnitude of the hypothesis' key/touch alignment error as well as of the magnitude of differences between the hypothesis' key/touch alignment error vector and that of preceding hypotheses in its sequence;
combining the geometry match metrics from each hypothesis in a key hypothesis sequence into a cumulative match metric for the hypothesis sequence;
choosing the hypothesized key sequence with the best cumulative metric as the best hypothesized key sequence; and,
transmitting the symbols and commands represented by the best hypothesized key sequence to a host computer for further action.
7. A method for recognizing typing from typing devices that sense lateral finger position but provide limited tactile feedback of key location, the method advantageously compensating for finger and hand drift during typing and discouraging any integrated spelling model from choosing dictionary words over unusual but carefully typed strings, wherein the method comprises the following steps:
forming a touch location and time sequence from the fingertip position at the end of each keystroke as measured by typing sensors;
computing a set of touch separation vectors of increasing orders from the location difference between the newest touch and previous touch in said touch location sequence;
generating a set of key hypothesis sequences for the given touch sequence, each hypothesis in a sequence being for a key near the location of the touch causing the hypothesis;
for each key hypothesis, computing a set of key separation vectors of increasing orders from differences between the position of the newest key and previous keys in the hypothesized sequence;
for each key hypothesis, computing a geometry match metric as a function of the magnitude of the zero-order touch/key alignment error as well as of the magnitudes of each order's touch and key separation vector difference;
combining the geometry match metrics from each hypothesis in a key hypothesis sequence into a cumulative match metric for the hypothesis sequence;
choosing the hypothesized key sequence with the best cumulative metric as the best hypothesized key sequence; and,
transmitting the symbols and commands represented by the best hypothesized key sequence to a host computer for further action.
2. The apparatus of claim 1 wherein a synchronization detection means inserts resting finger hypotheses into the hypothesis tree upon detection of a hand resting substantially on home row, and wherein said resting hypotheses are given for key separation vector computation purposes the coordinates of the home row key that their touch's identified finger normally rests upon.
3. The apparatus of claim 1 wherein a stack decoder is utilized as the particular decoding means.
4. The apparatus of claim 1 wherein the geometry match metric for a hypothesized key is substantially formulated as the squared distance between a touch and its hypothesized key plus the sum of squared differences between corresponding key and touch separation vectors of all valid orders.
5. The apparatus of claim 4 wherein the difference between a touch separation vector and the corresponding key separation vector is weighted in roughly inverse proportion to the touch time difference between the two touches from which the touch separation vector was computed.
6. The apparatus of claim 4 wherein the difference between a touch separation vector and the corresponding key separation vector is weighted less if the touch separation vector is large.
8. The method of claim 7 wherein the magnitude of each difference between a touch separation vector and the corresponding key separation vector is weighted in roughly inverse proportion to the time between the two touches from which the touch separation vector was computed.
9. The method of claim 7 wherein the magnitude of each difference between a touch separation vector and the corresponding key separation vector is weighted less if the touch separation vector is large.
10. The method of claim 7 wherein a synchronization detection means inserts resting finger hypotheses into the hypothesis tree upon detection of a hand resting substantially on home row, and wherein said resting hypotheses are given for key separation vector computation purposes the coordinates of the home row key that their touch's identified finger normally rests upon.
11. The method of claim 7 wherein the set of key hypothesis sequences are stored as a hypothesis tree that can extend the sequences upon reception of a new touch by sprouting new hypotheses.
12. The method of claim 11 wherein a stack decoder is utilized to find the best hypothesized key sequence.
14. The method of claim 13 wherein the magnitude of the difference between two hypotheses' key/touch alignment error vectors is weighted in roughly inverse proportion to the time between the two touches from which the touch separation vector was computed.
15. The method of claim 13 wherein the magnitude of the difference between two hypotheses' key/touch alignment error vectors is weighted less if the separation between the corresponding touches is large.
16. The method of claim 13 wherein a synchronization detection means inserts resting finger hypotheses into the hypothesis tree upon detection of a hand resting substantially on home row, and wherein said resting hypotheses are given for key/touch alignment error vector computation purposes the coordinates of the home row key that their touch's identified finger normally rests upon.
17. The method of claim 13 wherein the set of key hypothesis sequences are stored as a hypothesis tree that can extend the sequences upon reception of a new touch by sprouting new hypotheses.
18. The method of claim 17 wherein a stack decoder is utilized to find the best hypothesized key sequence.
20. The typing recognition apparatus of claim 19 further comprising:
a decoder configured to select a best hypothesized key sequence from among the key hypothesis sequences based on the computed geometry match metrics.
21. The typing recognition apparatus of claim 20 further comprising:
a transmitter configured to send at least one symbol or command represented by the best hypothesized key sequence.
24. The typing recognition apparatus of claim 23, further comprising a touch-sensitive surface configured for displaying symbols indicating locations of touchable keys.
25. The typing recognition apparatus of claim 23, wherein the character transition cost is high when a dictionary match is not found.
26. The typing recognition apparatus of claim 23, wherein the character transition cost is set to neutral or zero when the hypothesized key location is a command or editing key.
28. The method of claim 27, further comprising detecting the touches in the touch sequence on a touch-sensitive surface configured for displaying symbols indicating locations of touchable keys.
29. The method of claim 27, wherein the character transition cost is high when a dictionary match is not found.
30. The method of claim 27, wherein the character transition cost is set to neutral or zero when a hypothesized key location is a command or editing key.
32. The typing recognition apparatus of claim 31, the typing surface configured for displaying symbols indicating locations of touchable keys.
33. The typing recognition apparatus of claim 31, wherein the character transition cost is high when a dictionary match is not found.
34. The typing recognition apparatus of claim 31, wherein the character transition cost is set to neutral or zero when a hypothesized key location is a command or editing key.
36. The method of claim 35, further comprising detecting the plurality of keystrokes on a touch-sensitive surface configured for displaying symbols indicating locations of touchable keys.
37. The method of claim 35, wherein the character transition cost is high when a dictionary match is not found.
38. The method of claim 35, wherein the character transition cost is set to neutral or zero when a hypothesized key location is a command or editing key.

Ser. No. 09/236,513 (Jan. 1, 1999)
U.S. Pat. No. 5,463,388 (Jan. 29, 1993)
U.S. Pat. No. 5,812,698 (Jul. 14, 1997)
U.S. Pat. No. 5,818,437 (Jul. 26, 1995)
U.S. Pat. No. 6,137,908 (Jun. 29, 1994)
U.S. Pat. No. 6,107,997 (Jun. 27, 1996)

1. Field of the Invention

The present invention pertains to typing recognition systems and methods, and more particularly to recognition of typing in air or on a relatively smooth surface that provides less tactile feedback than conventional mechanical keyboards.

2. The Related Art

Typists generally employ various combinations of two typing techniques: hunt and peck and touch typing. When hunting and pecking, the typist visually searches for the key center and strikes the key with the index or middle finger. When touch typing, the fingers initially rest on home row keys, each finger is responsible for striking a certain column of keys and the typist is discouraged from looking down at the keys. The contours and depression of mechanical keys provide strong tactile feedback that helps typists keep their fingers aligned with the key layout. The finger motions of touch typists are ballistic rather than guided by a slow visual search, making touch typing faster than hunt and peck. However, even skilled touch typists occasionally fall back on hunt and peck to find rarely-used punctuation or command keys at the periphery of the key layout.

Many touchscreen devices display pop-up or soft keyboards meant to be activated by lightly tapping a displayed button or key symbol with a finger or stylus. Touch typing is considered impractical on such devices for several reasons: a shrunken key layout may have a key spacing too small for each finger to be aligned with its own key column, the smooth screen surface provides no tactile feedback of finger/key alignment as keys are struck, and most touchscreens cannot accurately report finger positions when touched by more than one finger at a time. Such temporal touch overlap often occurs when typing a quick burst of keys with both hands, holding the finger on modifier keys while striking normal keys, or attempting to rest the hands. Thus users of touchscreen key layouts have had to fall back on a slow, visual search for one key at a time.

Since touchscreen and touch keyboard users are expected to visually aim for the center of each key, typing recognition software for touch surfaces can use one of two simple, nearly equivalent methods to decide which key is being touched. Like the present invention, these methods apply to devices that report touch coordinates interpolated over a fine grid of sensors rather than devices that place a single large sensor under the center of each key. In the first method, described in U.S. patent application Ser. No. 09/236,513 by Westerman and Elias, the system computes for each key the distance from key center to the sensed touch location. The software then selects the key nearest the finger touch. In the second method, described in U.S. Pat. No. 5,463,388 to Boie et al., the software establishes a rectangle or bounding box around each key and decides which, if any, bounding box the reported touch coordinates lie within. The former method requires less computation, and the latter method allows simpler control over individual key shape and guard bands between keys, but both methods essentially report the key nearest to the finger touch, independent of past touches. Hence we refer to them as ‘nearest key’ recognizers.
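The two 'nearest key' approaches can be sketched in a few lines of Python. This is not code from either cited reference; the key layout, coordinates, and 19 mm key pitch below are hypothetical values chosen for illustration:

```python
import math

# Hypothetical key layout: key symbol -> (x, y) center coordinates in mm.
KEY_CENTERS = {"D": (0.0, 0.0), "F": (19.0, 0.0), "C": (4.0, 19.0)}
KEY_SPACING = 19.0  # typical full-size key pitch

def nearest_key(touch):
    """Method 1: select the key whose center is closest to the touch."""
    return min(KEY_CENTERS, key=lambda k: math.dist(KEY_CENTERS[k], touch))

def key_by_bounding_box(touch, half=KEY_SPACING / 2):
    """Method 2: select the key whose bounding box contains the touch.

    Returns None when the touch falls outside every box (a guard band)."""
    x, y = touch
    for key, (kx, ky) in KEY_CENTERS.items():
        if abs(x - kx) <= half and abs(y - ky) <= half:
            return key
    return None
```

Both functions consider each touch in isolation, which is exactly the independence assumption the present invention abandons.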

Unlike touchscreens, the multi-touch surface (MTS) described by Westerman and Elias in Ser. No. 09/236,513 can handle resting hands and temporal finger overlap during quick typing bursts. Since the MTS sensing technology is fully scalable, an MTS can easily be built large enough for a full-size QWERTY key layout. The only remaining barrier to fast touch typing on an MTS is the lack of tactile feedback. While it is possible to add either textures or compressibility to an MTS to enhance tactile feedback, there are two good reasons to keep the surface firm and smooth. First, any textures added to the surface to indicate key centers can potentially interfere with smooth sliding across the surface during multi-finger pointing and dragging operations. Second, the MTS proximity sensors actually allow zero-force typing by sensing the presence of a fingertip on the surface whether or not the finger applies noticeable downward pressure to the surface. Zero-force typing reduces the strain on finger muscles and tendons as each key is touched.

Without rich tactile feedback, the hands and individual fingers of an MTS touch typist tend to drift out of perfect alignment with the keys. Typists can limit the hand drift by anchoring their palms in home position on the surface, but many keystrokes will still be slightly off center due to drift and reach errors by individual fingers. Such hand drift and erroneous finger placements wreak havoc with the simple ‘nearest key’ recognizers disclosed in the related touchscreen and touch keyboard art. For example, if the hand alignment with respect to the key layout drifts by half a key-spacing (˜9 mm or ⅜″), all keystrokes may land half-way between adjacent keys. A ‘nearest key’ recognizer is left to choose one of the two adjacent keys essentially at random, recognizing only 50% of the keystrokes correctly. A spelling model integrated into the recognizer can help assuming the typist intended to enter a dictionary word, but then actually hinders entry of other strings. Thus there exists a need in the touchscreen and touch keyboard art for typing recognition methods that are less sensitive to the hand drift and finger placement errors that occur without strong tactile feedback from key centers.

For many years, speech, handwriting, and optical character recognition systems have employed spelling or language models to help guess users' intended words when speech, handwriting, or other input is ambiguous. For example, in U.S. Pat. No. 5,812,698, Platt et al. teach a handwriting recognizer that analyzes pen strokes to create a list of probable character strings and then invokes a Markov language model and spelling dictionary to pick the most common English word from that list of potential strings. However, such systems have a major weakness: they assume all user input will be a word contained in their spelling or language model, actually impeding entry of words not anticipated by the model. Even if the user intentionally and unambiguously enters a random character string or foreign word not found in the system vocabulary, the system tries to interpret that input as one of its vocabulary words. The typical solution is to provide the user an alternative (often comparatively clumsy) process with which to enter or select strings outside the system vocabulary. For example, U.S. Pat. No. 5,818,437 to Grover et al. teaches use of a dictionary and vocabulary models to disambiguate text entered on a ‘reduced’ keyboard such as a telephone keypad that assigns multiple characters to each physical key. In cases where the most common dictionary word matching an input key sequence is not the desired word, users must select from a list of alternate strings. Likewise, users of speech recognition systems typically fall back on a keyboard to enter words missing from the system's vocabulary.

Unfortunately, heavy reliance on spelling models and alternative entry processes is simply impractical for a general-purpose typing recognizer. Typing, after all, is the fallback entry process for many handwriting and speech recognition systems, and the only fallback conceivable for typing is a slower, clumsier typing mode. Likewise, personal computer users have to type into a wide variety of applications requiring strange character strings like passwords, filenames, abbreviated commands, and programming variable names. To avoid annoying the user with frequent corrections or dictionary additions, spelling model influence must be weak enough that strings missing from it will always be accepted when typed at moderate speed with reasonable care. Thus a general-purpose typing recognizer should only rely on spelling models as a last resort, when all possible measurements of the actual typing are ambiguous.
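The weak-spelling-influence principle can be illustrated with a small sketch. The bigram values, floor, and weight below are hypothetical, not the patent's actual model; the point is only that geometry evidence dominates and an unseen character pair incurs a finite, floored cost rather than vetoing the string:

```python
# Hypothetical bigram log-probabilities for a character-level spelling model.
# Unseen pairs receive a floor cost instead of an infinite one, so
# out-of-vocabulary strings (passwords, filenames) remain typeable.
BIGRAM_LOGP = {("q", "u"): -0.1, ("t", "h"): -0.5}
FLOOR_LOGP = -6.0        # cost floor for unseen character pairs
SPELLING_WEIGHT = 0.3    # deliberately weak spelling-model influence

def transition_cost(prev_char, char):
    """Negative log-probability of the character transition, floored."""
    return -BIGRAM_LOGP.get((prev_char, char), FLOOR_LOGP)

def combined_cost(geometry_cost, prev_char, char):
    """Geometry evidence dominates; spelling only breaks near-ties."""
    return geometry_cost + SPELLING_WEIGHT * transition_cost(prev_char, char)
```

With a small enough weight, a carefully typed non-word (low geometry cost) still beats a dictionary word whose keys fit the touch pattern poorly.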

Since a typing recognizer cannot depend too much on spelling models, there still exists a need in the touchscreen and touch keyboard art for spelling-independent methods to improve recognition accuracy. The main aspect of the present invention is to search for the geometric pattern of keys that best matches the geometric pattern of a touch sequence, rather than just searching for the key closest to each touch. This method improves recognition accuracy without any assumptions about the character content being typed.

According to this aspect of the invention, touch or finger stroke coordinates reported by a sensing device and key coordinates from a key layout feed into a typing recognizer module. The typing recognizer then hypothesizes plausible sequences of keys by extending existing sequences with keys that are within the immediate neighborhood of the newest finger touch. It can also hypothesize home row key locations for touches caused by hands resting on or near the home row keys. For each hypothesized sequence, the typing recognizer computes separation vectors between the layout position of successive keys in the sequence. The typing recognizer also computes separation vectors between successive touch positions in the touch sequence. Each key sequence is evaluated according to a pattern match metric that includes not only the distance between each finger touch and the corresponding key but also how closely the separation vectors between successive touches match the separation vectors between successive keys. The hypothesized sequence with the best cumulative match metric is transmitted to the host computer, possibly replacing an older, higher cost partial sequence that was transmitted previously.

It is therefore an objective of this invention to provide typing recognition methods that overcome the shortcomings of the related touchscreen and touch keyboard art.

A primary object of the present invention is to recognize typing accurately even when lack of tactile key position feedback leads to significant hand and finger drift.

Yet another objective of this invention is to improve typing recognition accuracy without excessive dependence on spelling models.

A further objective of this invention is to disambiguate typing as much as possible with measurements of its geometric pattern before falling back on a spelling model to resolve any remaining recognition ambiguities.

A secondary objective of this invention is to beneficially incorporate key/hand alignment measurements from resting hands into recognition decisions without explicitly shifting the key layout into alignment with the resting hands.

FIG. 1 is a block level diagram of the preferred surface typing detection and recognition system for the present invention.

FIG. 2 contains illustrations of a sample touch sequence on the left half of a standard QWERTY key layout (FIG. 2A), the touch separation vectors for the sample touch sequence (FIG. 2B), and the key separation vectors for several hypothesized key sequences that might correspond to the key sequence intended by the touch typist (FIGS. 2C-J).

FIG. 3 illustrates the contents of the touch data structure used to store measured touch parameters, a decoding stack, and the key finally output for a touch.

FIG. 4 illustrates the contents of the hypothesis data structure that serves as nodes of the hypothesis trees for the present invention.

FIG. 5 is a flow chart illustrating the preferred embodiment of key hypothesis tree generation according to the present invention.

FIG. 6 is a diagram illustrating a hypothesis tree that could be generated by the process of FIG. 5 during recognition of the sample touch sequence in FIG. 2.

FIG. 7 is a flow chart illustrating the steps for computing the geometry match metric of each key hypothesis.

FIG. 8 is a flow chart illustrating the process that outputs the best new key hypothesis to the host computer, erasing as necessary previously output keys that differ from past keys in the current best sequence.

In the preferred embodiment, the typing recognition methods of this invention are utilized within a multi-touch system like that shown in FIG. 1. The sensor scanning hardware 6 detects touches by fingers 2 on the surface 4. The proximity image formation 8 and contact tracking 10 modules determine the touch timing and surface coordinates and report these to the typing recognizer 12. The typing recognizer decides which keys the user intended to press and tells the host communications interface 16 to send those keys to the host computer 18. The system may also include a chord motion recognizer module 14 that interprets lateral sliding of multiple fingers as pointing or gesture input and effectively disables the typing recognizer for such touches. The synchronization detector 13 searches for simultaneous presses or releases of multiple fingers, thereby aiding in detection of chord slides, chord taps, and resting hands. All modules besides the typing recognizer are fully described in related U.S. patent application Ser. No. 09/236,513 by Westerman and Elias. That application is incorporated herein by reference in its entirety. The present invention constitutes improvements to the rudimentary ‘nearest key’ typing recognizer described in that application.

Those skilled in the art will recognize that the typing recognizer disclosed herein could be utilized with any sensing device that accurately reports the lateral position of fingertips as they near the end of their stroke, whether or not the fingers actually touch a surface or depress physical keys. Examples of such alternative finger position sensing systems include micro radar, data gloves, and pressure-sensitive surface materials. The term ‘touch location’ will be used hereafter for the lateral position or x and y coordinates detected for fingertips within a plane roughly normal to the fingertips at the end of their stroke, even for sensing devices that require no physical contact with a surface at the end of the stroke. Likewise, the typing recognition software need not reside within a microprocessor packaged with the sensing device. It could just as easily execute within the host computer system, or the host computer system and sensing device might be combined such that the same microprocessor executes finger tracking, typing recognition, and user application software.

Related art ‘nearest key’ typing recognizers typically assume that touch location errors are independent from keystroke to keystroke. But for typing devices that do not provide strong tactile feedback of key position, the hand sometimes drifts slightly out of alignment with the key layout. This causes the absolute location errors for most touches to be biased in the drift direction and statistically dependent. However, if the typist still reaches the proper amount (a whole number of key spacings) relative to recent touches, the lateral separations between finger touches will closely match the separations between the keys the typist intended to strike, regardless of the overall hand drift.
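This drift invariance is easy to verify numerically. A minimal sketch (the coordinates are hypothetical):

```python
def separations(points):
    """First-order separation vectors between successive points."""
    return [(b[0] - a[0], b[1] - a[1]) for a, b in zip(points, points[1:])]

# A whole-hand drift offsets every touch equally: the absolute key/touch
# alignment errors grow, but the separation vectors are unchanged.
touches = [(2.0, 1.0), (21.0, 1.0), (40.0, 20.0)]
drifted = [(x + 9.0, y + 4.0) for x, y in touches]  # 9 mm right, 4 mm down
```

Because `separations(touches) == separations(drifted)`, a metric built on separation vectors keeps recognizing drifted typing that a nearest-key test would misread.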

A related type of bias occurs when individual fingers drift relative to the rest of the hand. This causes the absolute location errors to be biased the same way for all keys typed by the drifting finger(s). However, keys typed by adjacent fingers may not share this bias.

An important discovery of the present invention is that when trying to recognize a sequence of touches located ambiguously between keys, searching for key sequences whose relative geometric pattern matches the touch pattern greatly narrows the list of plausible key sequences. This is illustrated intuitively in FIG. 2. FIG. 2A shows a series of four touches as triangles t0, t1, t2, t3, on the left half of a QWERTY key layout 29. The distance between a given key and touch, herein referred to as the zero-order key/touch alignment error, is apparent by inspection. The radii of the dotted circles 30 indicate the distance from a touch to the nearest key. Touch t0 is roughly equidistant from keys ‘D’ and ‘F’, as indicated by t0's circle passing through both key symbols, and t0 is not far from ‘C’ or ‘V’ either. A ‘nearest key’ recognizer would associate t0 with ‘D’, but with little confidence. If t0 were just a bit farther right, ‘F’ would become the nearest choice. A nearest key recognizer also faces a tossup between ‘E’ and ‘R’ for t3, and cannot be terribly confident of recognizing t2 as ‘R’. Touch t1 is the only touch close enough to a single key (‘A’) to be confidently interpreted as that key.

FIG. 2B illustrates the vectors separating successive touches. Solid lines 32 are ‘first-order’ vectors from t0 to t1, t1 to t2, and t2 to t3. Dashed lines 34 are ‘second-order’ vectors from t0 to t2 and t1 to t3. The dotted line 36 is the ‘third-order’ vector from t0 to t3. FIGS. 2C-2J show corresponding key separation vectors for possible matching key sequences. In all cases but FIGS. 2H and 2J, at least one of the key separation vectors clearly differs from a corresponding touch separation vector. For the ‘CARE’ hypothesis in FIG. 2C, the third-order ‘C’-‘E’ vector is significantly longer than the corresponding t0-t3 vector. For the ‘FARE’ hypothesis in FIG. 2D, the second-order ‘F’-‘R’ and third-order ‘F’-‘E’ vectors have clearly different angles than the corresponding t0-t2 and t0-t3 vectors. For the ‘CARR’ and ‘DARR’ hypotheses in FIGS. 2E and 2G, the first-order ‘R’-‘R’ vector will have length 0, quite different from the first-order t2-t3 vector's length of one full key-spacing. For the ‘FATE’ hypothesis of FIG. 2F, the ‘T’-‘E’ vector is a full key-spacing longer than the t2-t3 vector. Even though all the hypotheses shown are nearly indistinguishable in terms of the zero-order alignment error between each touch and corresponding key, an improved typing recognizer that compares the touch separation and key separation vectors can quickly eliminate all but hypotheses ‘DARE’ and ‘FRST’ in FIGS. 2H and 2J. The final decision can be made based upon ‘DARE’s smaller zero-order, absolute error between t1 and ‘A’ than between t1 and ‘S’. In even more ambiguous cases, a language model can help choose English words (like ‘DARE’ instead of ‘FRST’) from the list of remaining hypotheses.
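The comparison described above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation: key centers sit on an assumed unit grid, the four touches are drifted half a key-spacing to the right of the intended ‘DARE’ keys, and the cost is an unweighted sum of squared separation-vector mismatches over all orders.

```python
# Illustrative key layout on a unit grid (coordinates are assumptions, not
# taken from the patent's figures).
KEYS = {'D': (2.0, 1.0), 'F': (3.0, 1.0), 'A': (0.0, 1.0),
        'S': (1.0, 1.0), 'R': (3.0, 0.0), 'E': (2.0, 0.0), 'T': (4.0, 0.0)}

# Four touches drifted +0.5 key-spacings in x from the intended 'D','A','R','E'.
touches = [(2.5, 1.0), (0.5, 1.0), (3.5, 0.0), (2.5, 0.0)]

def pattern_cost(word):
    """Sum of squared mismatches between touch and key separation vectors
    of every order, ignoring the zero-order alignment error."""
    cost = 0.0
    for n in range(len(word)):
        for m in range(1, n + 1):                        # mth-order vectors
            tx = touches[n][0] - touches[n - m][0]       # touch separation
            ty = touches[n][1] - touches[n - m][1]
            kx = KEYS[word[n]][0] - KEYS[word[n - m]][0] # key separation
            ky = KEYS[word[n]][1] - KEYS[word[n - m]][1]
            cost += (tx - kx) ** 2 + (ty - ky) ** 2
    return cost

# 'DARE' matches the drifted touch pattern exactly; 'FATE' does not.
assert pattern_cost('DARE') == 0.0
assert pattern_cost('FATE') > pattern_cost('DARE')
```

Because every separation vector of ‘DARE’ equals the corresponding touch separation vector, its pattern cost is zero despite the uniform hand drift, which is exactly the drift-invariance property the paragraph above describes.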

Since typists expect the symbol of each touched key to appear on the host computer screen immediately after each corresponding finger stroke, a typing recognizer cannot wait for an entire touch sequence to complete before choosing the best key sequence. In a preferred embodiment of this invention, the recognizer module decodes the touch sequence incrementally, extending key hypothesis sequences by one key each time a new touch is detected. This process will form a hypothesis tree whose nodes are individual key hypotheses. It is important to note that related art ‘nearest key’ recognizers need not construct a hypothesis tree since they assume that finger placement errors from each keystroke are statistically independent.

FIG. 3 lists the basic parameters the recognizer needs to store in each touch data structure 79. A ring or chain of such data structures ordered by touchdown time represents a touch sequence. Each touch data structure 79 must contain the touch's x and y surface coordinates 70 as reported by the touch sensors. These should estimate the center of the touch, which for proximity or pressure sensors is typically computed as the centroid of fingertip flesh contacting the surface. To help look up the home row key of each touch from a resting hand, each touch data structure should have a copy of the hand and finger identity 71 estimated for the touch by the contact tracking and identification module 10. To keep track of the recency of past touches, the touch data should also include the finger touchdown time or press time 72. For compressible surfaces, this should correspond to the time the finger stroke bottomed out. The touch release time 73 should be set to either the time of finger liftoff from the surface or the current system time if the finger is still touching. To aid in decoding the most likely hypothesis sequence, all hypotheses caused by a touch will be inserted into a stack 76 and sorted so that the hypothesis with the best cumulative metric 98 rises to the top of the stack. Finally, to support undoing preliminary key outputs, the touch structure should maintain a reference 77 to the hypothesis whose key gets output in response to the touch. This reference will be null until a key is chosen to be output through the host communications interface 16.

FIG. 4 shows that to establish the tree structure, each hypothesis data structure 85 needs a reference 86 to its parent hypothesis from the previous touch. For the very first touch, this reference will be null, representing the root of the hypothesis tree. Having a reference to the data structure 88 of the touch causing the hypothesis is also convenient. The key center coordinates 92, and key code, symbol or command to be output 94, are retrieved from the key layout according to which key the hypothesis represents. Once computed, a measure of the match between the touch pattern and key pattern represented by the key hypothesis and its parent sequence will be stored as the geometry match metric 96. Though the embodiment presented herein formulates this geometry match metric as a cost to be minimized, it can just as easily be formulated as a probability to be maximized and remain well within the scope of this invention. It will be added in step 222 of FIG. 7 to the parent 86 hypothesis' cumulative match metric to obtain a new, extended cumulative match metric 98 for the sequence. In embodiments that include a spelling model, each hypothesis data structure 85 will also need to hold a spelling match metric 97. The spelling match metric may also be formulated as either a bad spelling cost to be minimized or a character transition probability to be maximized.
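The two records of FIGS. 3 and 4 can be sketched as Python dataclasses. Field names follow the patent's pseudocode where it supplies them (x, y, tpress, trelease, geomcost, cumulcost); the remaining names are illustrative choices, and the reference numerals from the figures are noted in comments.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Touch:                       # touch data structure 79
    x: float                       # surface coordinates 70
    y: float
    hand_identity: int = 0         # hand and finger identity 71
    tpress: float = 0.0            # touchdown/press time 72
    trelease: float = 0.0          # release time 73
    stack: List['Hypothesis'] = field(default_factory=list)  # sorted stack 76
    output_hyp: Optional['Hypothesis'] = None  # output hypothesis reference 77

@dataclass
class Hypothesis:                  # hypothesis data structure 85
    parent: Optional['Hypothesis'] # parent reference 86 (None at tree root)
    touch: Touch                   # causing touch reference 88
    x: float                       # key center coordinates 92
    y: float
    keycode: Optional[str]         # key code/symbol/command 94
    geomcost: float = 0.0          # geometry match metric 96
    spellcost: float = 0.0         # spelling match metric 97
    cumulcost: float = 0.0         # cumulative match metric 98
```

A null parent marks the root of the tree for the very first touch, matching the description above.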

FIG. 5 is a flowchart illustrating the preferred embodiment of the hypothesis tree extension, evaluation, and decoding processes. Step 100 shows that the typing recognizer starts up with the touch count n set to 0 and the hypothesis tree empty. Decision diamond 102 waits for a new touch to be detected by the sensors and recorded as T[n], the newest touch data structure 79 of the chain. We will use the pseudo-code notation T[n].x and T[n].y for the touch coordinates 70. Step 104 resets the parent hypothesis index p to 0. Step 106 retrieves a parent hypothesis hp[n−1] data structure 85 associated with the previous touch T[n−1]. In the case that n equals 0, step 106 simply sets the parent hypothesis to null, representing the root of the empty tree. Step 108 resets the new hypothesis counter j to 0. Step 110 picks a key from the key layout, an array of key coordinates and symbols that describes the arrangement of keys across the surface. Decision diamond 112 tests whether the key center is within a maximum activation radius Ract of the new touch T[n]'s surface coordinates. If the key is too far away, it need not be evaluated further, and decision diamond 111 will pick another key from the layout 110 until all keys in the vicinity of the touch have been hypothesized. About one standard key-spacing (~2 cm or ¾″) is sufficiently large for Ract, but Ract can be enlarged for oversize keys like Space, Shift, and Enter. Choosing Ract too large wastes computation by hypothesizing keys that are nowhere near the finger touch and that the typist clearly did not intend to hit. Choosing Ract too small limits the amount of hand drift that the typing recognizer can correct for.
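The candidate-key test of decision diamonds 111 and 112 amounts to a radius filter over the key layout. A minimal sketch, with an assumed layout and Ract of one key-spacing (the function name and coordinates are illustrative):

```python
import math

R_ACT = 2.0   # ~ one standard key-spacing, in cm

def keys_near(layout, tx, ty, r_act=R_ACT):
    """Return only keys whose centers lie within r_act of the touch;
    farther keys need not be evaluated further (decision diamond 112)."""
    return [k for k, (kx, ky) in layout.items()
            if math.hypot(kx - tx, ky - ty) <= r_act]

# Illustrative key centers, in cm:
layout = {'D': (4.0, 2.0), 'F': (6.0, 2.0), 'J': (14.0, 2.0)}
assert set(keys_near(layout, 5.0, 2.0)) == {'D', 'F'}   # 'J' is too far away
```

Only the surviving keys are hypothesized, so enlarging r_act directly trades computation for drift tolerance, as the paragraph above notes.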

If a key is within the radius Ract of the new touch, step 114 creates a new hypothesis hj[n] (using data structure 85) descended from the current parent hp[n−1]. The new hypothesis' parent hypothesis reference 86 is set accordingly. Block 116 evaluates how well the new key hypothesis hj[n] and its parent sequence match the touch sequence T[0] . . . T[n]. FIG. 7 will describe this critical block in more detail. Step 118 inserts the new hypothesis hj[n] into T[n]'s stack 76, which is sorted such that hypotheses with the best cumulative match metric (either lowest sum of costs or highest product of probabilities) rise to the top.

Once hypotheses descended from parent hp[n−1] have been generated for all keys near the touch T[n], decision diamond 120 decides whether the previous touch T[n−1]'s stack 76 contains additional parent hypotheses that need to be extended. If so, the parent hypothesis index p is incremented in step 122, and steps 106-122 repeat for the next parent. Once all parent hypotheses have been extended, block 124 actually outputs the best hypothesis sequence as described further in FIG. 8. Step 126 prunes from the tree those hypotheses whose cumulative match metric is already so poor that they are very unlikely to spawn best hypotheses in the future. This prevents exponential growth of the hypothesis tree by discarding clearly bad hypotheses but preserving competitive hypotheses that might become parents of the best hypothesis for a future touch. The most efficient pruning method is to start at the bottom of T[n]'s stack 76 and discard all hypotheses whose cumulative metric is not within a future cost margin of the top (best) hypothesis's cumulative match metric. When all of a parent's child hypotheses have been discarded the parent is discarded as well. The pruning step 126 completes all processing of touch T[n], leaving step 128 to increment the touch index n so decision diamond 102 can resume waiting for the next touch.

Working together, steps 118, 124, and 126 constitute a stack decoder. They sort all of the new hypotheses for the current touch T[n] according to their cumulative match metric, choose the lowest cost sequence that winds up at the top of the stack as the best hypothesis sequence to output, and prune the implausible sequences at the bottom of the stack whose costs are much greater than the current best sequence. The stack decoder is a well-known method in the speech recognition, handwriting recognition, and digital communications arts for finding the optimal path through a hypothesis tree. For example, see F. Jelinek, Statistical Methods for Speech Recognition (published by The MIT Press, pages 93-110, 1997). Those skilled in the art will recognize that a basic Viterbi decoder would only be appropriate in place of the stack decoder if the touch geometry metric only included first-order separation vectors. Including higher order separation vectors, as is necessary to get a holistic hand drift estimate, makes the touch cost dependent on more than the previous touch and thus violates the first-order Markov condition for basic Viterbi decoders.
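The margin-based pruning of step 126 can be sketched as follows. For brevity this illustration stores bare cumulative costs instead of hypothesis records; the function name and margin value are assumptions, not the patent's.

```python
def prune(stack, margin):
    """Keep only hypotheses whose cumulative cost is within a future cost
    margin of the best (lowest-cost) hypothesis; stack is sorted best-first."""
    best = stack[0]
    return [cost for cost in stack if cost - best <= margin]

# Sorted stack of cumulative costs for the current touch:
stack = sorted([3.1, 2.0, 9.5, 2.7])
assert prune(stack, margin=2.0) == [2.0, 2.7, 3.1]   # 9.5 is clearly worse
```

Discarding the 9.5-cost sequence bounds tree growth while keeping every sequence that could still overtake the current best on a future touch, which is the compromise the paragraph above describes.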

FIG. 6 shows an example of a hypothesis tree that the typing recognition process in FIG. 5 might generate while decoding the touch sequence described in FIG. 2. The tree starts empty while waiting for the first touch, consisting only of the null root 150. When touch t0 152 is detected, the typing recognizer will sprout hypotheses 154 for the keys ‘D’, ‘F’, and ‘C’ neighboring t0. Because the sequence so far contains only one touch, the match metric for these first keys will only include the zero-order, key/touch alignment error distance. In this case, the typing recognizer would be ready to output the ‘D’ key since, referring to FIG. 2A, ‘D’ is closest to t0. When touch t1 arrives 156, each hypothesis for t0 branches into hypotheses 158 for the keys nearest t1, namely ‘A’ and ‘S’. The match metric for these t1 hypotheses can include both the zero-order key/touch alignment error and first-order separation vectors between t1 and t0. With a second touch, the typing recognizer is ready to start picking the best hypothesis sequence. To do so, for each t1 hypothesis it must compute a cumulative cost that also includes the cost of the parent t0 hypothesis. The t1 hypothesis with lowest cumulative cost will be selected, in this case ‘DA’. Since ‘D’ was just output, only ‘A’ need be sent to the host.

In case the previous touch's output had been some key other than ‘D’, say ‘F’, the preliminary ‘F’ output would need to be undone and replaced with ‘D’ by sending a Backspace or Erase key followed by ‘DA’ to the host. The hypothesis tree extensions and output of best sequence would continue similarly for the t2 and t3 touches, except that the match metrics for these touches would include second and third-order separation vectors, respectively. Pruning of hypothesis chains 160 that accumulate relatively high total costs prevents the tree from growing exponentially as more touches occur.

The flowchart in FIG. 7 illustrates how the preferred embodiment of the typing recognizer evaluates the quality of the match between a hypothesized key sequence and the corresponding touch sequence. This expanded flowchart corresponds to step 116 of FIG. 5. For the convenience of those skilled in the art, the evaluation process is also shown below as pseudocode:

Copy hj[n] and its parent hypothesis sequence into hseq[n] ... hseq[0]
for (m = 0; m < 10 && n−m >= 0; m++) {
    if (m == 0) { // zero-order key/touch alignment error
        hseq[n].geomcost = d0(T[n].x − hseq[n].x, T[n].y − hseq[n].y)
        continue;
    } else if (T[n].hand_identity != T[n−m].hand_identity)
        continue;
    else if (T[n−m] not keystroke or resting finger)
        break;
    τ[m].x = T[n].x − T[n−m].x // touch separation vectors
    τ[m].y = T[n].y − T[n−m].y
    λ[m].x = hseq[n].x − hseq[n−m].x // key separation vectors
    λ[m].y = hseq[n].y − hseq[n−m].y
    wt[m] = ft(T[n].tpress − T[n−m].trelease)
    wa[m] = fa(τ[m].x, τ[m].y)
    hseq[n].geomcost += wt[m]*wa[m]*dM(τ[m].x−λ[m].x, τ[m].y−λ[m].y)
}
hseq[n].cumulcost = hseq[n].geomcost + hseq[n−1].cumulcost
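The pseudocode above can be transcribed into runnable form. This sketch assumes d0 and dM are squared Euclidean metrics and substitutes simple placeholder decays for ft and fa (the patent leaves their exact shapes to empirical tuning); the check for non-typing touches is noted but omitted for brevity.

```python
def ft(dt):          # placeholder temporal confidence: newer touches weigh more
    return max(0.0, 1.0 - dt / 10.0)

def fa(dx, dy):      # placeholder adjacency confidence: nearer touches weigh more
    return 1.0 / (1.0 + dx * dx + dy * dy)

def evaluate(hseq, touches, n):
    """hseq[i] = (key_x, key_y); touches[i] = (x, y, tpress, trelease, hand).
    Returns the geometry match metric (geomcost) for the hypothesis sequence."""
    geomcost = 0.0
    for m in range(0, 10):
        if n - m < 0:
            break
        if m == 0:   # zero-order key/touch alignment error, d0 = squared Euclidean
            ex, ey = touches[n][0] - hseq[n][0], touches[n][1] - hseq[n][1]
            geomcost = ex * ex + ey * ey
            continue
        if touches[n][4] != touches[n - m][4]:   # other hand: no drift information
            continue
        # (a real implementation would also break on non-typing touches here)
        tau_x = touches[n][0] - touches[n - m][0]    # touch separation vector
        tau_y = touches[n][1] - touches[n - m][1]
        lam_x = hseq[n][0] - hseq[n - m][0]          # key separation vector
        lam_y = hseq[n][1] - hseq[n - m][1]
        w = ft(touches[n][2] - touches[n - m][3]) * fa(tau_x, tau_y)
        # dM = squared Euclidean mismatch between the two vectors
        geomcost += w * ((tau_x - lam_x) ** 2 + (tau_y - lam_y) ** 2)
    return geomcost
```

With touches landing exactly on the hypothesized keys, every term vanishes and the geometry cost is zero; any drift in the touch pattern relative to the key pattern raises it.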

For notational and computational convenience, step 200 copies the particular key hypothesis sequence to be evaluated into the array hseq[ ], starting at hj[n], the new leaf of the hypothesis tree, traversing back through its parent hypothesis references, and stopping at the root. Step 202 computes the zero-order, key/touch misalignment error and stores it as the hypothesis' geometry match metric 96, hseq[n].geomcost. The distance metric d0 determines how the hseq[n].geomcost scales with misalignment in the x and y dimensions. Those skilled in the art will realize that a Manhattan metric, Euclidean distance, squared Euclidean distance, or other metrics would be suitable here. Related art ‘nearest key’ typing recognizers essentially stop with this zero-order alignment error as the final geometry metric, but the current invention includes higher order separation vector mismatches in the geometry metric via the following steps.
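The candidate distance metrics for d0 mentioned above differ only in how cost scales with misalignment, as this small sketch shows:

```python
def manhattan(dx, dy):
    """Cost grows linearly along each axis."""
    return abs(dx) + abs(dy)

def euclidean(dx, dy):
    """Cost grows linearly with straight-line distance."""
    return (dx * dx + dy * dy) ** 0.5

def sq_euclidean(dx, dy):
    """Cost grows quadratically, punishing large errors disproportionately."""
    return dx * dx + dy * dy

assert manhattan(3, 4) == 7
assert euclidean(3, 4) == 5.0
assert sq_euclidean(3, 4) == 25
```

The squared Euclidean form avoids a square root and, as noted later for dM, favors sequences with uniformly small errors over sequences with one large error.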

Step 204 initializes the order index m to 1. Since each hand's drift is presumed to be independent of the other's drift, only separation vectors for touches and keys typed within the same hand should be considered. Decision diamond 206 tests whether the m th previous hypothesized key hseq[n−m] is normally typed by the same hand as the currently hypothesized key hseq[n]. If not, hseq[n−m] presumably contains no information about the drift of the current hand, so the evaluation process skips m th-order separation vector computations and advances to step 218.

If both touches come from the same hand, decision diamond 207 decides whether the m th previous touch was actually typing related and thus a possible predictor of hand drift. Decision diamond 207 is particularly important for multi-touch systems that support non-typing synchronous touches such as chord taps, lateral chord slides, and hand resting. For instance, finger location at the beginning or end of pointing motions has nothing to do with subsequent typing drift, so decision diamond 207 should break the loop and skip to the final cost accumulation step 222 when it encounters a touch involved in pointing or any other sliding gesture. However, when typing on a surface, resting a hand (all fingers simultaneously) on home row in between words is quite convenient. Any slight misalignments between the home row keys and finger locations within the resting chord are a good predictor of hand/key misalignment during subsequent typing. Such resting finger locations can be incorporated into separation vector evaluation by having the synchronization detector 13 insert a chain of five special resting finger hypotheses into the hypothesis tree for any five nearly simultaneous touches deemed to be part of a hand resting on or near its home row keys. Each resting finger hypothesis is given key coordinates 92 from the home row key that its finger normally rests on. The hypothesis can look up its finger and hand identity 71 through its causing touch reference 88, and the identities can then index into a table of home row key center coordinates. Resting finger hypotheses are given a null key code 94 so that they produce no output signals to the host computer. For the purpose of key and touch separation vector matching, however, decision diamond 207 and steps 208-216 of FIG. 7 treat them as typing-related hypotheses. This subtle incorporation of resting hand alignment is an alternative to the key layout morphing method described by Westerman and Elias in U.S. patent application Ser. No. 
09/236,513. The morphing method snaps the home row keys to the precise resting finger locations and shifts the rest of the key layout accordingly, thus removing any misalignment between the resting hand and the key layout, but is only practical for touch surfaces integrated onto a video display that indicates key location shifts to the user.

For typing-related touches from the same hand, step 208 creates the m th-order touch separation vector τ[m] by subtracting the spatial and temporal coordinates of the m th previous touch T[n−m] from the current touch T[n]. Likewise, step 210 creates the m th-order key separation vector λ[m] by subtracting the layout coordinates of hseq[n−m]'s key from the currently hypothesized key hseq[n].

Step 212 computes the temporal confidence weighting wt[m] that should decrease monotonically toward 0 with the time elapsed between the press 72 of the current touch, T[n].tpress, and the release 73 of the m th previous touch, T[n−m].trelease. The release time is used in case the preceding touch was caused by a hand that began resting near home row many seconds ago but lifted off quite recently. This temporal confidence weighting is meant to reflect the fact that old touches are poorer predictors of the current hand drift than newer touches. Those skilled in the art will realize that the exact downward slope for this weighting function can be empirically optimized by computing old and new touch drift correlations from actual typing samples. For instance, if the typing samples showed that the hand/layout alignment error remained fairly consistent over ten second periods, then the weighting function should be designed to stay well above 0 for touches less than ten seconds old.

Step 214 computes a touch adjacency weighting wa[m] that should decrease monotonically toward 0 as the separation between the current and m th previous touch increases. The touch adjacency weighting is meant to reflect the fact that the separation between touches by the same finger or an adjacent finger, especially if the fingers have not reached far between the touches, is a better predictor of finger drift and overall hand drift than separation vectors for touches by non-adjacent fingers. Thus the second-order separation vector between t2 and t0 in FIG. 2B should be weighted more heavily than the long, first-order separation vector between t2 and t1. The adjacency weighting should be strongest when the m th previous touch occurred at the same surface location as the current touch, as this is a very strong indication both touches were intended to produce the same key. In this situation, the m th-order key separation vector λ[m] of the matching key sequence is expected to have zero length, and any hypothesized key sequences with a non-zero m th-order vector length should be punished with a strongly weighted cost.
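One plausible shape for each weighting function is a simple exponential decay. These are assumptions for illustration, not the patent's empirically tuned functions: ft stays well above 0 for touches less than ten seconds old, and fa is strongest for coincident touches, decaying as the separation grows (the time constant and the 2 cm key-spacing scale are assumed).

```python
import math

def ft(elapsed_seconds):
    """Temporal confidence wt[m]: decays with time since the previous
    touch's release, but stays high inside a ~10 s window."""
    return math.exp(-elapsed_seconds / 15.0)

def fa(sep_x, sep_y, key_spacing=2.0):
    """Adjacency confidence wa[m]: decays with touch separation (in cm),
    strongest when the previous touch landed on the same spot."""
    return math.exp(-math.hypot(sep_x, sep_y) / (2.0 * key_spacing))

assert ft(10.0) > 0.5                       # ten-second-old touches still count
assert ft(60.0) < 0.05                      # minute-old touches nearly ignored
assert fa(0.0, 0.0) == 1.0                  # same-spot repeat: strongest weight
assert fa(2.0, 0.0) > fa(6.0, 0.0)          # nearer touches beat far reaches
```

In practice the decay rates would be fit from typing samples, as the temporal-weighting paragraph above suggests.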

Step 216 adds to the geometry metric a cost for any mismatch between the m th-order touch separation vector τ[m] and the m th-order key separation vector λ[m]. This incremental cost should generally increase with the magnitude of the difference between the two vectors. In the preferred embodiment, the square of the magnitude of the vector difference is weighted by the temporal confidence wt[m] and adjacency confidence wa[m] to obtain the m th-order cost increment. The squared Euclidean metric is preferred for dM because it favors sequences with uniformly small vector differences.

Step 218 increments the order index m so that decision diamond 220 can decide whether to continue evaluating higher order separation vectors. Ideally, the evaluation process would continue with previous touches all the way back to the tree root, where m reaches n, but in practice it is usually sufficient to include separation vectors from the ten or so most recent typing-related touches. Once decision diamond 220 decides m has reached its useful limit, flow falls through to the final step 222. Step 222 sets the sequence cumulative match metric hj[n].cumulcost to the sum of the new touch cost hseq[n].geomcost and the parent's cumulative metric hseq[n−1].cumulcost.

It is also instructive to examine an alternative embodiment of geometry match metric evaluation that, mathematically, is the exact equivalent of and produces the same result as the process in FIG. 7. However, a different factoring of the computations lends this alternative embodiment a different, intuitive interpretation. For the convenience of those of ordinary skill in the art, this alternative embodiment is shown below as pseudocode:

Copy hj[n] and its parent hypothesis sequence into hseq[n] ... hseq[0]
Allocate key/touch error array e[ ] for different orders
for (m = 0; m < 10 && n−m >= 0; m++) {
    e[m].x = T[n−m].x − hseq[n−m].x // alignment errors
    e[m].y = T[n−m].y − hseq[n−m].y
    if (m == 0) { // zero-order key/touch alignment error
        hseq[n].geomcost = d0(e[0].x, e[0].y)
        continue;
    } else if (T[n].hand_identity != T[n−m].hand_identity)
        continue;
    else if (T[n−m] not keystroke or resting finger)
        break;
    wt[m] = ft(T[n].tpress − T[n−m].trelease)
    τ[m].x = T[n].x − T[n−m].x // touch separation vectors
    τ[m].y = T[n].y − T[n−m].y
    wa[m] = fa(τ[m].x, τ[m].y)
    hseq[n].geomcost += wt[m]*wa[m]*dM(e[0].x−e[m].x, e[0].y−e[m].y)
}
hseq[n].cumulcost = hseq[n].geomcost + hseq[n−1].cumulcost

Both embodiments compute the zero-order alignment error component the same way, but this alternative embodiment restates the comparison between the m th-order key and touch separation vectors as a comparison between the new touch T[n]'s key/touch alignment error vector, e[0], and the m th previous touch T[n−m]'s key/touch alignment error vector, e[m]. This suggests that the stack decoder in either embodiment will tend to pick as the best sequence a key hypothesis sequence whose individual key/touch alignment error vectors are small yet consistent with one another. Clearly this alternative, equivalent embodiment falls well within the scope of this invention.
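The claimed equivalence is a simple vector identity: τ[m] − λ[m] = (T[n] − T[n−m]) − (K[n] − K[n−m]) = (T[n] − K[n]) − (T[n−m] − K[n−m]) = e[0] − e[m]. A quick numeric check, using arbitrary illustrative coordinates:

```python
T = [(1.2, 0.3), (2.9, 0.4)]        # touch locations
K = [(1.0, 0.0), (3.0, 0.0)]        # hypothesized key centers

# tau - lambda, from separation vectors (FIG. 7 embodiment):
tau = (T[1][0] - T[0][0], T[1][1] - T[0][1])
lam = (K[1][0] - K[0][0], K[1][1] - K[0][1])
lhs = (tau[0] - lam[0], tau[1] - lam[1])

# e[0] - e[m], from per-touch alignment errors (alternative embodiment):
e0 = (T[1][0] - K[1][0], T[1][1] - K[1][1])
e1 = (T[0][0] - K[0][0], T[0][1] - K[0][1])
rhs = (e0[0] - e1[0], e0[1] - e1[1])

assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```

Since the two factorings feed identical arguments to dM, both embodiments accumulate exactly the same geometry cost.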

The output module in FIG. 8 is responsible for transmitting the key code, command or symbol 94 from the best hypothesis hbest[n] to the host application. This job is complicated by the fact that any keys sent for previous touches may not have come from hbest[n]'s parent sequence. This happens when, based on additional cost evaluations from a new touch, a stack decoder decides a totally different sequence is optimal than was considered optimal from previous touch information alone. This occurrence presents the human interface designer with a tough question—leave the old character sequence or partial word on the screen, even though the new key is likely to be from a different word, or erase characters that have already been displayed to the typist and replace them with the better sequence. This question is important because in rare instances the old characters may actually be what the user intended to type, in which case replacing them with the new, supposedly more optimal sequence will annoy and surprise the typist.

The preferred embodiment of the output module adopts a compromise. It will only replace characters within the current word (i.e. it will not go back past any space characters and change any completed words), and it will only replace these characters if they have only been typed within the last couple of seconds, before the typist has had a chance to notice and correct the probably erroneous old characters himself. The output module starts with the current best hypothesis 350 hbest[n] from the stack decoder. Step 352 sets the previous output index m to 1. Decision diamond 354 checks whether the hypothesis 77 whose key was output for touch T[n−m] was hbest[n]'s parent hypothesis hbest[n−m]. If not, decision diamond 356 checks whether the old key was a word-breaking space or was output more than a few seconds ago. If not, step 358 sends an Erase or Backspace key to the host to undo the old character, and step 360 increments m to continue checking for a parent hypothesis that both the best sequence and previously sent sequence share. Once that parent is found or the search is aborted at a word boundary, step 362 begins sending the replacement key codes 94 from the hbest[ ] sequence, looping through step 363 to increment m until the decision diamond finds that m has reached 0 and hbest[n]'s key code 94 has been transmitted.
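The replacement policy above can be sketched as a pure function over character strings. This is an illustrative simplification (the function name and return convention are assumptions): it finds the longest shared prefix between what was already sent and the new best sequence, refuses to cross a word-breaking space or edit stale text, and otherwise reports how many Backspaces to send followed by the replacement suffix.

```python
def retype(sent, best, recent_enough=True):
    """Return (backspace_count, replacement_suffix), or None to leave the
    old characters on screen (word boundary crossed or text too old)."""
    # Longest common prefix between the sent text and the best sequence:
    k = 0
    while k < min(len(sent), len(best)) and sent[k] == best[k]:
        k += 1
    # Don't erase past a completed word, and don't edit seconds-old text:
    if ' ' in sent[k:] or not recent_enough:
        return None
    return (len(sent) - k, best[k:])

assert retype('FA', 'DA') == (2, 'DA')        # undo 'FA', resend 'DA'
assert retype('DA', 'DAR') == (0, 'R')        # best extends sent: just append
assert retype('FA X', 'DA X') is None         # would cross a word boundary
```

The caller would emit the indicated number of Erase/Backspace keys and then the suffix, mirroring steps 358-363.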

Now that the preferred embodiment of the typing recognizer has been described, it is instructive to consider additional consequences of its design. One important consequence is that the key activated may not always be the key nearest the fingertip. Generating a neighboring key when the finger actually lands right on top of another key would be startling to the user. However, if the adjacency weightings are kept sufficiently low, the separation vectors cannot override a zero-order, key/touch position error near zero. Proper tuning of the adjacency weighting function ensures that separation vectors can only be decisive when the finger lies in a zone between keys, at least 2-4 mm (⅛″-¼″) from the center of any key.

To further improve recognition accuracy when typing plain English or another predictable language, alternative embodiments of the typing recognizer may incorporate a spelling model. Such integration of spelling models into character recognizers is clearly taught in the handwriting recognition art (see, for example, the post-processing with Markov model and Dictionary in U.S. Pat. No. 5,812,698 to Platt et al. and the use of trigrams in U.S. Pat. No. 6,137,908), and will only be summarized here briefly. Basically, the spelling model computes for each hypothesis a character transition cost that indicates whether the hypothesized key/character is building a dictionary word out of its parent hypothesis sequence. Costs will be high for character transitions that cannot be found in the dictionary. Command or editing keys can be given a neutral or zero spelling cost. Step 222 of FIG. 7 can then be modified to include the character transition cost weighted with the geometry cost in the cumulative cost total. Character transition costs need only be determinative of the best sequence when different hypothesized key sequences have equally high touch geometry costs.
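A character transition cost of this kind can be sketched with a tiny bigram table. This is an illustrative toy (a three-word list and an assumed penalty value, not a real dictionary or the patent's model): transitions seen in the word list are free, unseen transitions are expensive, and command keys are neutral.

```python
WORDS = ['DARE', 'CARE', 'FARE']
# All two-character transitions occurring in the word list:
SEEN = {w[i:i + 2] for w in WORDS for i in range(len(w) - 1)}

def spelling_cost(prev_char, char):
    """Bigram character transition cost: cheap if the pair occurs in the
    dictionary, expensive otherwise; None marks a command/editing key."""
    if char is None or prev_char is None:
        return 0.0                 # command or editing key: neutral cost
    return 0.0 if prev_char + char in SEEN else 5.0   # assumed penalty

assert spelling_cost('D', 'A') == 0.0    # 'DA' occurs in 'DARE'
assert spelling_cost('F', 'Q') == 5.0    # 'FQ' never occurs
assert spelling_cost(None, 'A') == 0.0   # after a command key: neutral
```

Adding this cost into step 222's cumulative total lets spelling break ties, while leaving the geometry metric dominant when the touch pattern is unambiguous, as the paragraph above prescribes.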

The case of a finger repetitively striking the same location halfway between keys is a good example of the advantages of considering touch sequence geometry in addition to zero-order alignment error, especially for typing recognizers that include a spelling model. Typists find it disconcerting if they strike the same location repeatedly yet the decoder outputs different neighboring characters. This can happen, say, if the user intended to type ‘DDD’ but the three consecutive finger strikes occur roughly between the ‘S’, ‘E’, ‘W’, and ‘D’ keys. For a ‘nearest key’ recognizer with spelling model, the zero-order alignment errors for all four keys would be roughly equal, leaving the character transition costs to dominate and encourage the stack decoder to output common spelling sequences like ‘WES’, ‘SEW’, and ‘DES.’ But for a typing recognizer improved with touch geometry matching, only the key sequences ‘SSS’, ‘EEE’, ‘DDD’ and ‘WWW’ have small key separation vectors matching the small touch separations, so these sequences' relatively low geometry match costs would override the spelling model, causing one of them to be output. Even though the ‘SSS’ or ‘EEE’ sequences may not be what the typist intended, they are less disconcerting than a mixed output sequence like ‘SEW’ when the typist knows her finger was not hopping between keys. Thus separation vector matching can overcome misleading character transition costs to ensure the typist sees a consistent, homogeneous output sequence when a finger strikes approximately the same location repeatedly.

Though embodiments and applications of this invention have been shown and described, it will be apparent to those skilled in the art that numerous further embodiments and modifications than mentioned above are possible without departing from the inventive concepts disclosed herein. The invention, therefore, is not to be restricted except in the true spirit and scope of the appended claims.

Westerman, Wayne Carl

4734685, Jul 28 1983 Canon Kabushiki Kaisha Position control apparatus
4746770, Feb 17 1987 Sensor Frame Incorporated; SENSOR FRAME INCORPORATION Method and apparatus for isolating and manipulating graphic objects on computer video monitor
4771276, Apr 15 1985 International Business Machines Corporation Electromagnetic touch sensor input system in a cathode ray tube display device
4788384, Dec 18 1986 Centre National de la Recherche Scientifique Device for two-dimensional localization of events that generate current on a resistive surface
4806846, Jul 06 1987 High accuracy direct reading capacitance-to-voltage converter
4898555, Mar 23 1989 PROQUEST BUSINESS SOLUTIONS INC Display screen bezel and assembly method
4968877, Sep 14 1988 Sensor Frame Corporation VideoHarp
5003519, May 26 1988 ETA S.A. Fabriques d'Ebauches Alarm arrangement for a timepiece
5017030, Jul 07 1986 Ergonomically designed keyboard
5178477, Jun 06 1991 MOTIONLESS KEYBOARD COMPANY Ergonomic keyboard input device
5189403, Sep 26 1989 HANGER SOLUTIONS, LLC Integrated keyboard and pointing device system with automatic mode change
5194862, Jun 29 1990 U.S. Philips Corporation Touch sensor array systems and display systems incorporating such
5224861, Sep 17 1990 L-3 Communications Corporation Training device onboard instruction station
5241308, Feb 22 1990 AVI SYSTEMS, INC ; TMY, INC Force sensitive touch panel
5252951, Apr 28 1989 INTERNATIONAL BUSINESS MACHINES CORPORATION A CORP OF NEW YORK Graphical user interface with gesture recognition in a multiapplication environment
5281966, Jan 31 1992 Method of encoding alphabetic characters for a chord keyboard
5305017, Sep 04 1991 Cirque Corporation Methods and apparatus for data input
5345543, Nov 16 1992 Apple Inc Method for manipulating objects on a computer display
5376948, Mar 25 1992 3M Innovative Properties Company Method of and apparatus for touch-input computer and related display employing touch force location external to the display
5398310, Apr 13 1992 Apple Inc Pointing gesture based computer note pad paging and scrolling interface
5442742, Dec 21 1990 Apple Inc Method and apparatus for the manipulation of text on a computer display screen
5463388, Jan 29 1993 THE CHASE MANHATTAN BANK, AS COLLATERAL AGENT Computer mouse or keyboard input device utilizing capacitive sensors
5463696, May 27 1992 Apple Inc Recognition system and method for user inputs to a computer system
5483261, Feb 14 1992 ORGPRO NEXUS INC Graphical input controller and method with rear screen image detection
5488204, Jun 08 1992 Synaptics Incorporated; Synaptics, Incorporated Paintbrush stylus for capacitive touch sensor pad
5495077, Jun 08 1992 Synaptics, Inc. Object position and proximity detector
5513309, Jan 05 1993 Apple Computer, Inc. Graphic editor user interface for a pointer-based computer system
5523775, May 26 1992 Apple Inc Method for selecting objects on a computer display
5530455, Aug 10 1994 KYE SYSTEMS AMERICA CORPORATION Roller mouse for implementing scrolling in windows applications
5543590, Jun 08 1992 SYNAPTICS, INC ; Synaptics, Incorporated Object position detector with edge motion feature
5543591, Jun 08 1992 SYNAPTICS, INC Object position detector with edge motion feature and gesture recognition
5563632, Apr 30 1993 3M Innovative Properties Company Method of and apparatus for the elimination of the effects of internal interference in force measurement systems, including touch - input computer and related displays employing touch force location measurement techniques
5563996, Apr 13 1992 Apple Inc Computer note pad including gesture based note division tools and method
5565658, Jul 13 1992 Cirque Corporation Capacitance-based proximity with interference rejection apparatus and methods
5579036, Apr 28 1994 NCR Corporation Touch screen device and shielding bracket therefor
5581681, Dec 14 1994 Apple Inc Pointing gesture based computer note pad paging and scrolling interface
5583946, Sep 30 1993 Apple Inc Method and apparatus for recognizing gestures on a computer system
5590219, Sep 30 1993 Apple Inc Method and apparatus for recognizing gestures on a computer system
5592566, Jan 05 1993 Apple Inc Method and apparatus for computerized recognition
5594810, Sep 19 1993 Apple Inc Method and apparatus for recognizing gestures on a computer system
5596694, May 27 1992 Apple Inc Method and apparatus for indicating a change in status of an object and its disposition using animation
5612719, Dec 03 1992 Apple Inc Gesture sensitive buttons for graphical user interfaces
5631805, Sep 27 1995 3M Innovative Properties Company Touch screen enclosure having an insertable graphic sheet
5633955, May 27 1992 Apple Computer, Inc. Method of connecting shapes on a display of a computer system
5634102, Aug 07 1995 Apple Inc Methods and apparatus for a selectable backdrop
5636101, Sep 27 1995 3M Innovative Properties Company Touch screen enclosure system having touch screen pan and hinged rear enclosure section for ease of serviceability
5642108, Jun 28 1991 Infogrip, Inc. Chordic keyboard system for generating a signal in response to a chord that is assigned using a correlation based on a composite chord-difficulty index
5644657, Jan 05 1993 Apple Computer, Inc. Method for locating and displaying information in a pointer-based computer system
5666113, Jul 31 1991 3M Innovative Properties Company System for using a touchpad input device for cursor control and keyboard emulation
5666502, Aug 07 1995 Apple Inc Graphical user interface using historical lists with field classes
5666552, Dec 21 1990 Apple Inc Method and apparatus for the manipulation of text on a computer display screen
5675361, Aug 23 1995 Computer keyboard pointing device
5677710, May 10 1993 Apple Inc Recognition keypad
5689253, Apr 10 1991 Kinesis Corporation Ergonomic keyboard apparatus
5710844, May 27 1992 Apple Inc Method for searching and displaying results in a pen-based computer system
5729250, May 08 1995 Toshiba Global Commerce Solutions Holdings Corporation Front cover assembly for a touch sensitive device
5730165, Dec 26 1995 Atmel Corporation Time domain capacitive field detector
5736976, Feb 13 1995 Computer data entry apparatus with hand motion sensing and monitoring
5741990, Feb 17 1989 Notepool, Ltd. Method of and means for producing musical note relationships
5745116, Sep 09 1996 Google Technology Holdings LLC Intuitive gesture-based graphical user interface
5745716, Aug 07 1995 Apple Inc Method and apparatus for tab access and tab cycling in a pen-based computer system
5746818, Aug 31 1995 Seiko Epson Corporation Pigment ink composition capable of forming image having no significant bleeding or feathering
5748269, Nov 21 1996 Westinghouse Air Brake Company Environmentally-sealed, convectively-cooled active matrix liquid crystal display (LCD)
5764222, May 28 1996 International Business Machines Corporation Virtual pointing device for touchscreens
5767457, Nov 13 1995 Cirque Corporation Apparatus and method for audible feedback from input device
5767842, Feb 07 1992 International Business Machines Corporation Method and device for optical input of commands or data
5790104, Jun 25 1996 International Business Machines Corporation Multiple, moveable, customizable virtual pointing devices
5790107, Jun 07 1995 ELAN MICROELECTRONICS CORP Touch sensing method and apparatus
5802516, Nov 03 1993 Apple Inc Method of controlling an electronic book for a computer system
5808567, May 17 1993 DSI DATOTECH SYSTEMS INC Apparatus and method of communicating using three digits of a hand
5809267, Dec 30 1993 Xerox Corporation Apparatus and method for executing multiple-concatenated command gestures in a gesture based input system
5812698, May 12 1995 Synaptics, Inc. Handwriting recognition system and method
5821690, Aug 26 1993 Cambridge Display Technology Limited Electroluminescent devices having a light-emitting layer
5821930, Aug 23 1992 Qwest Communications International Inc Method and system for generating a working window in a computer system
5823782, Dec 29 1995 Tinkers & Chance Character recognition educational system
5825351, May 12 1994 Apple Computer, Inc. Method and apparatus for noise filtering for an input device
5825352, Jan 04 1996 ELAN MICROELECTRONICS CORP Multiple fingers contact sensing method for emulating mouse buttons and mouse operations on a touch sensor pad
5854625, Nov 06 1996 Synaptics Incorporated Force sensing touchpad
5880411, Jun 08 1992 Synaptics Incorporated Object position detector with edge motion feature and gesture recognition
5898434, May 15 1991 Apple Inc User interface system having programmable user interface elements
5920309, Jan 04 1996 ELAN MICROELECTRONICS CORP Touch sensing method and apparatus
5923319, May 08 1995 Toshiba Global Commerce Solutions Holdings Corporation Front cover assembly for touch sensitive device
5933134, Jun 25 1996 LENOVO SINGAPORE PTE LTD Touch screen virtual pointing device which goes into a translucent hibernation state when not in use
5943044, Aug 05 1996 INTERLINK ELECTRONIC Force sensing semiconductive touchpad
6002389, Apr 24 1996 ELAN MICROELECTRONICS CORP Touch and pressure sensing method and apparatus
6002808, Jul 26 1996 Mitsubishi Electric Research Laboratories, Inc Hand gesture control system
6020881, May 24 1993 Sun Microsystems Graphical user interface with method and apparatus for interfacing to remote devices
6031524, Jun 07 1995 Intermec IP CORP Hand-held portable data terminal having removably interchangeable, washable, user-replaceable components with liquid-impervious seal
6037882, Sep 30 1997 Apple Inc Method and apparatus for inputting data to an electronic system
6050825, May 08 1998 Speedskin LLC Opaque, one-size-fits-all computer keyboard cover which covers only the three or four alpha-numeric rows
6052339, Jun 11 1997 Asulab S.A. Watch with touch reading and setting of time functions
6072494, Oct 15 1997 Microsoft Technology Licensing, LLC Method and apparatus for real-time gesture recognition
6084576, Sep 27 1997 User friendly keyboard
6107997, Jun 27 1996 Touch-sensitive keyboard/mouse and computing device using the same
6128003, Dec 20 1996 Hitachi, Ltd. Hand gesture recognition system and method
6131299, Jul 01 1998 FARO TECHNOLOGIES, INC Display device for a coordinate measurement machine
6135958, Aug 06 1998 Siemens Medical Solutions USA, Inc Ultrasound imaging system with touch-pad pointing device
6137908, Jun 29 1994 Microsoft Technology Licensing, LLC Handwriting recognition system simultaneously considering shape and context information
6144380, Nov 03 1993 Apple Inc Method of entering and using handwriting to identify locations within an electronic book
6188391, Jul 09 1998 Synaptics, Incorporated Two-layer capacitive touchpad and method of making same
6198515, Mar 16 1998 Apparatus and method for controlled sealing between bezel and CRT
6208329, Aug 13 1996 AVAGO TECHNOLOGIES GENERAL IP SINGAPORE PTE LTD Supplemental mouse button emulation system, method and apparatus for a coordinate based data input device
6222465, Dec 09 1998 Lucent Technologies Inc. Gesture-based computer interface
6239790, Aug 05 1996 Interlink Electronics Force sensing semiconductive touchpad
6243071, Nov 03 1993 Apple Inc Tool set for navigating through an electronic book
6246862, Feb 03 1999 Google Technology Holdings LLC Sensor controlled user interface for portable communication device
6249606, Feb 19 1998 CREATIVE TECHNOLOGY LTD Method and system for gesture category recognition and training using a feature vector
6288707, Jul 29 1996 NEODRÓN LIMITED Capacitive position sensor
6289326, Jun 04 1997 Portable interactive kiosk
6292178, Oct 19 1998 JOHNSON & JOHNSON SURGICAL VISION, INC Screen navigation control apparatus for ophthalmic surgical instruments
6323846, Jan 26 1998 Apple Inc Method and apparatus for integrating manual input
6347290, Jun 24 1998 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Apparatus and method for detecting and executing positional and gesture commands corresponding to movement of handheld computing device
6377009, Sep 08 1999 UUSI, LLC Capacitive closure obstruction sensor
6378234, Apr 09 1999 Sequential stroke keyboard
6380931, Jun 08 1992 Synaptics Incorporated Object position detector with edge motion feature and gesture recognition
6411287, Sep 08 1999 Tyco Electronics Corporation Stress seal for acoustic wave touchscreens
6414671, Jun 08 1992 Synaptics Incorporated Object position detector with edge motion feature and gesture recognition
6421234, Oct 10 2000 JUNIPER SYSTEMS INC Handheld electronics device having ergonomic features
6452514, Jan 26 1999 Atmel Corporation Capacitive sensor and array
6457355, Aug 27 1999 Level sensing
6466036, Nov 25 1998 NEODRÓN LIMITED Charge transfer capacitance measurement circuit
6515669, Oct 23 1998 Olympus Optical Co., Ltd. Operation input device applied to three-dimensional input device
6525749, Dec 30 1993 Xerox Corporation Apparatus and method for supporting the implicit structure of freeform lists, outlines, text, tables and diagrams in a gesture-based input system and editing system
6535200, Jul 29 1996 NEODRÓN LIMITED Capacitive position sensor
6543684, Mar 28 2000 NCR Voyix Corporation Transaction terminal with privacy shield for touch-screen pin entry
6543947, Mar 14 2001 Keyboard having keys arranged in a pan configuration
6570557, Feb 10 2001 Apple Inc Multi-touch system and method for emulating modifier keys via fingertip chords
6593916, Nov 03 2000 ELO TOUCH SOLUTIONS, INC Touchscreen having multiple parallel connections to each electrode in a series resistor chain on the periphery of the touch area
6610936, Jun 08 1992 Synaptics, Inc. Object position detector with edge motion feature and gesture recognition
6624833, Apr 17 2000 WSOU Investments, LLC Gesture-based input interface system with shadow detection
6639577, Mar 04 1998 Rovi Technologies Corporation Portable information display device with ergonomic bezel
6650319, Oct 29 1996 ELO TOUCH SOLUTIONS, INC Touch screen based topological mapping with resistance framing design
6658994, Apr 10 2002 Antares Capital LP; ANTARES CAPITAL LP, AS SUCCESSOR AGENT Modular assembly for a holding cabinet controller
6670894, Feb 05 2001 System and method for keyboard independent touch typing
6677932, Jan 28 2001 Apple Inc System and method for recognizing touch typing under limited tactile feedback conditions
6677934, Jul 30 1999 L-3 Communications Corporation Infrared touch panel with improved sunlight rejection
6724366, Apr 03 2001 PINEAPPLE34, LLC Thumb actuated x-y input device
6757002, Nov 04 1999 Hewlett-Packard Company Track pad pointing device with areas of specialized function
6803906, Jul 05 2000 SMART Technologies ULC Passive touch system and method of detecting user input
6842672, Feb 28 2002 Garmin International, Inc. Cockpit instrument panel systems and methods with redundant flight data display
6856259, Feb 06 2004 ELO TOUCH SOLUTIONS, INC Touch sensor system to detect multiple touch events
6888536, Jan 26 1998 Apple Inc Method and apparatus for integrating manual input
6900795, Feb 27 2002 WORD MACHINERY, INC Unitary molded lens filter for touch screen interface
6927761, Mar 29 2002 3M Innovative Properties Company Moisture deflector for capacitive NFI touch screens for use with bezels of conductive material
6942571, Oct 16 2000 SG GAMING, INC Gaming device with directional and speed control of mechanical reels using touch screen
6965375, Apr 27 2001 Qualcomm Incorporated Compact integrated touch panel display for a handheld device
6972401, Jan 30 2003 SMART Technologies ULC Illuminated bezel and touch system incorporating the same
6977666, Sep 04 1998 INNOVATIVE SOLUTIONS AND SUPPORT INC Flat panel display using dual CPU's for an aircraft cockpit
6985801, Feb 28 2002 Garmin International, Inc. Cockpit instrument panel systems and methods with redundant flight data display
6992659, May 22 2001 Qualcomm Incorporated High transparency integrated enclosure touch screen assembly for a portable hand held device
7031228, Aug 30 2002 Asulab S.A. Timepiece with touch-type reading and control of time data
20020118848,
20030006974,
20030076301,
20030076303,
20030076306,
20030095095,
20030095096,
20030098858,
20030206202,
20030234768,
20040263484,
20050012723,
20050052425,
20050104867,
20050110768,
20060022955,
20060022956,
20060026521,
20060026535,
20060026536,
20060032680,
20060033724,
20060053387,
20060066582,
20060085757,
20060097991,
20060197753,
CA1243096,
DE10251296,
EP288692,
EP464908,
EP664504,
EP1014295,
WO2003088176,
WO2006023569,
WO97018547,
WO97023738,
WO9814863,
Executed on    Assignor    Assignee    Conveyance    Frame/Reel/Doc
Jan 13 2006    Apple Inc. (assignment on the face of the patent)
Aug 31 2007    FINGERWORKS, INC.    Apple Inc.    ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS)    019856/0474 (pdf)
Date Maintenance Fee Events
Nov 04 2009    ASPN: Payor Number Assigned.
Jun 15 2011    M1552: Payment of Maintenance Fee, 8th Year, Large Entity.
Jul 01 2015    M1553: Payment of Maintenance Fee, 12th Year, Large Entity.


Date Maintenance Schedule
Nov 24 2012    4 years fee payment window open
May 24 2013    6 months grace period start (w/ surcharge)
Nov 24 2013    patent expiry (for year 4)
Nov 24 2015    2 years to revive unintentionally abandoned end (for year 4)
Nov 24 2016    8 years fee payment window open
May 24 2017    6 months grace period start (w/ surcharge)
Nov 24 2017    patent expiry (for year 8)
Nov 24 2019    2 years to revive unintentionally abandoned end (for year 8)
Nov 24 2020    12 years fee payment window open
May 24 2021    6 months grace period start (w/ surcharge)
Nov 24 2021    patent expiry (for year 12)
Nov 24 2023    2 years to revive unintentionally abandoned end (for year 12)