The preferred embodiments described herein relate to voice-enhanced diagnostic medical ultrasound imaging systems and review stations as well as to voice-related user interfaces. With these preferred embodiments, a user can interact with an imaging system or review station by issuing verbal commands instead of using a mouse, keyboard, or other user interface that requires physical manipulation by the user. This provides a very user-friendly interface, especially to those users who have difficulty navigating complex window and menu hierarchies or who have trouble manipulating pointing devices. This also improves patient flow and provides a more efficient report generation system. Voice feedback can also be used to allow the imaging system or review station to better communicate with a user.
32. An ultrasound transducer comprising:
at least one transducer element; and
a microphone for receiving voice input, wherein said microphone is part of the ultrasound transducer;
said at least one transducer element and said microphone providing electrical signals to an ultrasound imaging system coupled with the ultrasound transducer.
1. An ultrasound review station comprising:
a voice input device;
a processor; and
a voice recognition unit coupled with the voice input device and processor, the voice recognition unit being operative to convert a voice signal received from the voice input device into a command signal recognizable by the processor;
wherein the processor is operative to provide the voice recognition unit with voice information correlating a set of command signals recognizable by the processor with an associated set of respective voice signals.
21. An ultrasound imaging system comprising:
a processor;
a voice output device;
a voice production unit coupled with the processor and the voice output device, the voice production unit being operative to convert a voice output signal from the processor into a voice reproducible by the voice output device;
a voice input device; and
a voice recognition unit coupled with the voice input device and processor, the voice recognition unit being operative to convert a voice signal received from the voice input device into a command signal recognizable by the processor.
14. An ultrasound review station comprising:
a voice output device;
a processor;
a voice production unit coupled with the processor and the voice output device, the voice production unit being operative to convert a voice output signal from the processor into a voice reproducible by the voice output device;
a voice input device; and
a voice recognition unit coupled with the voice input device and processor, the voice recognition unit being operative to convert a voice signal received from the voice input device into a command signal recognizable by the processor.
30. A method for converting a received voiced command into a command signal recognizable by an ultrasound review station processor, the method comprising:
(a) with an ultrasound review station processor, providing a voice recognition unit with voice information correlating a set of command signals recognizable by the ultrasound review station processor with an associated set of respective voiced commands;
(b) with the voice recognition unit, receiving a voiced command; and
(c) converting the received voiced command into a command signal using the voice information provided by the ultrasound review station processor.
28. A method for converting a received voiced command into a command signal recognizable by an ultrasound imaging system processor, the method comprising:
(a) with an ultrasound imaging system processor, providing a voice recognition unit with voice information correlating a set of command signals recognizable by the ultrasound imaging system processor with an associated set of respective voiced commands;
(b) with the voice recognition unit, receiving a voiced command; and
(c) converting the received voiced command into a command signal using the voice information provided by the ultrasound imaging system processor.
2. The ultrasound review station of
3. The ultrasound review station of
4. The ultrasound review station of
5. The ultrasound review station of
6. The ultrasound review station of
7. The ultrasound review station of
8. The ultrasound review station of
9. The ultrasound review station of
11. The ultrasound review station of
12. The ultrasound review station of
a voice output device; and
a voice production unit coupled with the processor and the voice output device, the voice production unit being operative to convert a voice output signal from the processor into a voice reproducible by the voice output device.
13. The ultrasound review station of
15. The ultrasound review station of
16. The ultrasound review station of
17. The ultrasound review station of
18. The ultrasound review station of
19. The ultrasound review station of
20. The ultrasound review station of
22. The ultrasound imaging system of
23. The ultrasound imaging system of
24. The ultrasound imaging system of
25. The ultrasound imaging system of
26. The ultrasound imaging system of
27. The ultrasound imaging system of
29. The method of
(d) with the voice recognition unit, providing the command signal to the ultrasound imaging system processor.
31. The method of
(d) with the voice recognition unit, providing the command signal to the ultrasound review station processor.
33. The ultrasound transducer of
34. The ultrasound transducer of
There are several steps involved in providing a diagnosis of a patient based on an ultrasound examination. First, the ultrasound examination is performed on an ultrasound imaging system. The images generated from this examination can then be digitally stored and reviewed by a physician on an ultrasound review station, which is typically coupled with an ultrasound imaging system through a network. The ultrasound review station can display images, text, and measurement and calculation data and can also be used to facilitate the production of ultrasound examination reports. Based on his analysis at the review station, the physician generates an ultrasound examination report to provide a diagnosis. Often, a physician will dictate his diagnosis onto an audio tape or recording system, and the diagnosis is later transcribed and entered into an ultrasound examination report. Alternatively, the diagnosis can be typed into the ultrasound imaging system.
To assist in the performance of an ultrasound examination, some ultrasound imaging systems allow voice control of some of the operations of the system. Typically, a voice recognition unit, which is either part of or separate from the ultrasound imaging system's processor, converts an incoming voice signal to a control signal using voice information stored in the voice recognition unit. To enhance recognition performance, U.S. Pat. No. 5,544,654, which is assigned to the assignee of the present invention, describes an ultrasound imaging system in which a subset of voice information is used based on the operating state of the ultrasound imaging system. Specifically, the ultrasound imaging system's processor provides the voice recognition unit with an indication of its operating state, and the voice recognition unit selects only the portions of the voice information that are relevant to the operating state. Because the voice recognition unit makes the selection based on the provided indication of operating state, the voice recognition unit and processor must be synchronized to ensure proper selection, especially when the processor is shipped separately from the voice recognition unit and when the processor is updated without updating the voice recognition unit.
To assist the physician in reviewing ultrasound images at a review station, graphical user interfaces have been used to provide a more user-friendly environment for the physician. Typically, these graphical user interfaces have windows, menus, and buttons, and a visual focus manipulated by a pointing device such as a mouse, keyboard, or trackball. Ultrasound review stations often have so many functions that applications are divided into hierarchies of menus and sub-menus, dialogs and sub-dialogs, and windows and sub-windows. Although graphical user interfaces were intended to facilitate interaction with the review station, some users have difficulty finding the desired functionality in the complex windows and menu hierarchies. Some users also find it difficult to fluidly manipulate pointing devices that require click and double-click actions.
Finally, to reduce the time needed to produce an ultrasound examination report and to improve the overall diagnostic workflow for a patient, automatic transcription systems have been used, such as Medspeak from IBM, Clinical Reporter from Lernout & Hauspie, and Powerscribe from the MRC Group. These systems are stand-alone devices with specialized vocabularies and are not incorporated with the ultrasound imaging system or review station. Some transcription systems attempt to transcribe every word voiced by the physician. Because of limitations in current transcription technology, these systems often produce inaccurate transcriptions. To overcome this problem, some systems reduce the amount of automatic dictation that is needed by creating macros, which, when spoken, trigger a longer text to be inserted into the report. Although typically more accurate than automatic dictation systems, these systems also encounter recognition problems. To further enhance accuracy, some systems use inline-style macro displays, in which a proposed textual phrase is displayed to a user for acceptance. Because these systems only need to recognize the command to accept or reject the proposed textual phrase, recognition accuracy is increased. However, presenting proposed textual phrases to a user can be a time consuming process, especially if the user rejects several proposed phrases before reaching an acceptable phrase.
There is, therefore, a need for an improved diagnostic medical ultrasound imaging system and review station to overcome the problems described above.
The present invention is defined by the following claims, and nothing in this section should be taken as a limitation on those claims.
By way of introduction, the preferred embodiments described below relate to voice-enhanced diagnostic medical ultrasound imaging systems and review stations as well as to voice-related user interfaces. With these preferred embodiments, a user can interact with an imaging system or review station by issuing verbal commands instead of using a mouse, keyboard, or other user interface that requires physical manipulation by the user. This provides a very user-friendly interface, especially to those users who have difficulty navigating complex window and menu hierarchies or who have trouble manipulating pointing devices. This also improves patient flow and provides a more efficient report generation system. Voice feedback can also be used to allow the imaging system or review station to better communicate with a user.
The preferred embodiments will now be described with reference to the attached drawings.
A voice recognition unit can be used to provide voiced commands to an ultrasound imaging system or review station, and
In regard to architecture, the voice recognition unit 10 can be separate from the processor 20, such as when the voice recognition unit 10 takes the form of software running on a separate processor. In one embodiment, the separate processor is a general-purpose computer directly coupled with the ultrasound imaging system or review station. For example, a general-purpose computer can be directly connected to an ultrasound imaging system and carried on the system cart, thereby appearing to a user to be integrated with the system. As described below, a separate processor can also be located in a server coupled with the ultrasound imaging system or review station through a network. In another preferred embodiment, some or all of the functionality of the voice recognition unit 10 is implemented with the ultrasound imaging system's or review station's processor 20.
For simplicity, the term "voice recognition unit" is used in the specification and claims to broadly refer to hardware and/or software components that use voice information to recognize an incoming voice signal from a voice input device 15 to generate and provide a command signal to a processor 20 of an ultrasound imaging system or review station. As used herein, the term "voice information" refers to data that correlates a set of voice signals (e.g., voiced commands from a user) with an associated set of respective command signals recognizable by the processor of the ultrasound imaging system or review station. The term "set" in the specification and claims refers to one or more than one element. In addition to providing this recognition profile, voice information can include engine usage information (e.g., percent of the CPU dedicated to recognition), user-adjustable recognition parameters (e.g., minimum volumes, timeouts to recognize complete and incomplete phrases), and a list of the voice input devices, and their capabilities, that the user trained with and for which the recognition profile is appropriate. Voice information can also include user-specific voice commands and non-GUI user-specific voice parameters, such as preferred speak-back voice, dictation parameters, and dialog parameters. The recognition engine of the voice recognition unit 10 compares the incoming voice signal with the recognition profile of the voice information to determine which command signal should be sent to the processor 20. To enhance recognition performance, it is preferred that a finite-state-machine language description (e.g., Backus-Naur form) be used to provide the voice recognition unit 10 with various forms of legal speech and that a dictionary of synonyms be used to recognize equivalent voiced commands.
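As an illustrative sketch only, the voice information described above might be organized as in the following Python fragment; the field names, the string-based command signals, and the lookup-style recognize routine are assumptions made for illustration rather than a description of any particular recognition engine:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class VoiceInformation:
    """Hypothetical container for the voice information discussed above."""
    # Recognition profile: voiced command -> command signal recognizable by the processor.
    recognition_profile: Dict[str, str] = field(default_factory=dict)
    # Dictionary of synonyms: equivalent utterance -> canonical voiced command.
    synonyms: Dict[str, str] = field(default_factory=dict)
    # Engine usage information, e.g., percent of the CPU dedicated to recognition.
    cpu_percent: int = 25
    # User-adjustable recognition parameters.
    minimum_volume_db: float = -30.0
    phrase_timeout_s: float = 1.5
    # Voice input devices the user trained with and that the profile is appropriate for.
    trained_devices: List[str] = field(default_factory=list)

def recognize(voiced_command: str, info: VoiceInformation) -> Optional[str]:
    """Map an incoming voiced command to a command signal, honoring synonyms."""
    canonical = info.synonyms.get(voiced_command.lower(), voiced_command.lower())
    return info.recognition_profile.get(canonical)
```

A finite-state grammar would replace the flat dictionary lookup in a fuller implementation, but the lookup conveys the correlation between voiced commands and command signals that the voice information provides.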
Also as used herein, the term "command signal" is used to refer to any signal that is recognizable by the processor 20 as an instruction to perform an operation or function performable by the processor 20 (e.g., the selection of a field, window, or monitor). In addition to controlling some aspect of the ultrasound imaging system or review station, a command signal can be a signal that provides the processor 20 with a text or other message. For example, the command signal can comprise a textual phrase that will be inserted into an ultrasound examination report.
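A command signal can therefore carry either an operation to perform or a message such as a textual phrase. The following is a minimal sketch of such a signal type, with hypothetical field names chosen only for illustration:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class OperationCommand:
    operation: str       # e.g., "select_window"
    argument: str = ""   # e.g., "monitor_2"

@dataclass
class TextCommand:
    text: str            # a textual phrase to be inserted into an examination report

# A command signal is either an operation request or a text message to the processor.
CommandSignal = Union[OperationCommand, TextCommand]
```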
As described above, the voice recognition unit 10 uses voice information to generate a command signal in response to a voiced command from a user. Voice information can be provided to the voice recognition unit 10 by any appropriate source. In the preferred embodiment shown in
The voice information can also be provided to the voice recognition unit 10 by a server 25 externally coupled with the voice recognition unit 10, as shown in FIG. 3. This preferred embodiment is particularly useful in a network environment in which the server 25 is coupled with several ultrasound imaging systems or review stations. In such an environment, voice information customized for a particular user is centrally located in the server 25. When that user identifies himself to a device on the network, the user's customized voice information is provided to the voice recognition unit of that device. This provides a coherent network environment in which commands are consistently recognized. If the user changes the voice information when using the device (such as when the user corrects a misidentified word), the centrally-stored voice information is updated as well. In this way, the user is not only provided with greater access to his customized voice information, but he is also given more opportunities to update the voice information to enhance recognition performance. If a user is using multiple ultrasound devices on the network simultaneously, the voice information that is accessed first or, alternatively, the voice information that is the most up-to-date can be used.
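One hypothetical way to model this central storage and its conflict-resolution rule (use the profile accessed first or, alternatively, the most up-to-date copy) is sketched below; the in-memory dictionary stands in for the server 25, and the function names are illustrative:

```python
import datetime

# Stand-in for the profile store on server 25: user id -> (voice information, last update).
_server_profiles = {}

def load_voice_information(user_id: str):
    """Fetch the user's centrally stored voice information when the user logs in to a device."""
    profile, _ = _server_profiles[user_id]
    return profile

def save_voice_information(user_id: str, profile) -> None:
    """Push corrections (e.g., a fixed misrecognized word) back so all devices see the update."""
    _server_profiles[user_id] = (profile, datetime.datetime.now(datetime.timezone.utc))

def resolve_concurrent_profiles(candidates):
    """When one user is active on several devices, prefer the most up-to-date copy."""
    return max(candidates, key=lambda entry: entry[1])[0]   # entry = (profile, timestamp)
```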
To select customized voice information (stored in an external server 25 or in a server integrated with the ultrasound imaging system or review station), the user can identify himself by providing non-verbal identification information, such as by typing his user name and password into a log-in screen. If the voice recognition unit 10 comprises a speaker identification engine (such as Keyware by Keyware Technologies or SpeakEZ Voice Print by T-Netix Inc.), the user can also identify himself by providing verbal identification information. For example, with a speaker identification engine, the voice recognition unit 10 can identify the user when he voices a command to the voice input device 15 or when he provides a voice sample to gain access to the ultrasound system or review station, as described in U.S. Pat. No. 6,129,671, which is hereby incorporated by reference and assigned to the assignee of the present application.
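The identification step could be combined with profile loading roughly as follows; speaker_id_engine and profile_store are assumed interfaces standing in for a commercial speaker identification engine and the central store sketched above:

```python
def identify_and_load(voice_sample: bytes, speaker_id_engine, profile_store):
    """Identify the speaker from a voice sample, then load that user's voice information."""
    user_id = speaker_id_engine.identify(voice_sample)   # e.g., returns "dr_smith" or None
    if user_id is None:
        # Unknown speaker: deny access rather than fall back to a generic profile.
        raise PermissionError("Speaker not recognized; access denied.")
    return profile_store.load(user_id)
```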
As described above, the voice recognition unit 10 can be implemented in either an ultrasound imaging system or an ultrasound review station to control the operation of the system or station. For example, by using the voice recognition unit 10 with an ultrasound review station, instructions to the review station can be given using voice commands instead of or in combination with using a mouse, keyboard, or other user interface that requires physical manipulation by the user. A verbal interface provides a user with a much more user-friendly interface, especially for those users who have difficulty finding the desired function in complex window and menu hierarchies or who have trouble manipulating pointing devices. With the voice recognition unit 10, a user can instruct the review station to view a desired report, worksheet, study list, or image. The user can also use voice commands to navigate through display information (e.g., "page up"), respond to visual requests (e.g., "press ok"), and perform operations (e.g., "zoom image 5"). The voice command can also be associated with multiple operations. For example, the command "use equation A to calculate birth weight" can trigger the ultrasound review system to perform the requested calculation and place the result into a particular section of an ultrasound examination report. Such operations can reference imaging, post-processing, and computational and calculation data.
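Associating one voiced command with several operations can be modeled as a simple macro table; the command strings, operation names, and the station controller interface below are hypothetical, chosen only to mirror the examples in the text:

```python
# One voiced command may expand into a sequence of review-station operations.
COMMAND_MACROS = {
    "use equation a to calculate birth weight": [
        ("calculate", {"equation": "A", "measurement": "birth_weight"}),
        ("insert_result", {"report_section": "diagnosis"}),
    ],
    "page up": [("scroll", {"direction": "up"})],
    "press ok": [("activate_button", {"label": "OK"})],
    "zoom image 5": [("zoom_image", {"image_number": 5})],
}

def execute_voiced_command(command_signal: str, station) -> None:
    """Run every operation associated with a recognized command, in order."""
    for operation, kwargs in COMMAND_MACROS.get(command_signal, []):
        getattr(station, operation)(**kwargs)   # station is the review-station controller
```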
The following is a description of specific implementations of methods that can be performed with a voice recognition unit. For simplicity, the following embodiments will be described in terms of an ultrasound review station. It is important to note that all of these methods can also be implemented on an ultrasound imaging system and that one or more of these applications can be used in combination.
In one preferred embodiment, the voice recognition unit 10 implements a method for using voiced commands to insert a textual phrase into a section of an ultrasound examination report. As used herein, the term "phrase" refers to a string of one or more characters, such as one or more letters, words, numbers, or symbols, and the term "insert" refers to inserting, overwriting, replacing, and/or adding at a specified location. Such a method is illustrated in the flow chart of FIG. 4. The first step in this method is to receive a voice request to create an ultrasound examination report (40). As shown in
In response to the voiced request to create a particular ultrasound examination report, the ultrasound review station displays the report template to the user. For example, if the user requests a normal obstetrics examination report, the template with the sections associated with such a report is displayed to the user, as shown in FIG. 6. The user is also informed which section in the report is active (i.e., which section will receive input from the user). For example, a cursor positioned in the diagnosis section would inform the user that the diagnosis section will receive input.
Next, a voiced command is received from the user to insert a textual phrase into the active section (41). Preferably, the voiced command is a single word or short phrase that triggers a macro for the insertion of a longer textual phrase into the report. For example, the command "normal diagnosis" can be associated with the textual phrase "There is no evidence of abnormal development. The fetus is normal." To assist the user in selecting a command, the available voice commands for the active section can be provided to the user, for example, by displaying a menu of available commands for that section. In the example shown in
To increase recognition performance, each section of the report can be associated with a respective set of textual phrases. To convert the voiced command into a textual phrase, first the set of textual phrases associated with the active section is identified (42), and then the voiced command is converted into one of the textual phrases from the set (43). By basing the conversion on only those textual phrases associated with the active section instead of all available textual phrases, recognition performance is enhanced. Finally, the textual phrase is inserted into the section (44). This method can be used instead of or in conjunction with a pure transcription service.
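Steps (41) through (44) can be sketched as follows; the section names and macro texts are hypothetical examples patterned on the "normal diagnosis" example above, not a fixed vocabulary:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ExamReport:
    sections: Dict[str, str] = field(default_factory=dict)   # section name -> section text

# Each report section has its own set of macros; scoping recognition to the active
# section's set (step 42) is what improves recognition performance.
SECTION_MACROS = {
    "diagnosis": {
        "normal diagnosis": "There is no evidence of abnormal development. The fetus is normal.",
    },
    "findings": {
        "normal findings": "All measured parameters are within normal limits for gestational age.",
    },
}

def insert_phrase(report: ExamReport, active_section: str, voiced_command: str) -> None:
    macros = SECTION_MACROS[active_section]              # step (42): identify the section's set
    phrase = macros[voiced_command.lower()]              # step (43): convert the voiced command
    existing = report.sections.get(active_section, "")
    report.sections[active_section] = existing + phrase  # step (44): insert into the section
```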
Additionally, a user can voice commands to perform an editing operation (e.g., cut-and-paste) to the ultrasound examination report or to insert an ultrasound image into the report.
In another preferred embodiment, the voice recognition unit is used to place a marker on an ultrasound image displayed on a display device of an ultrasound review station. As used herein, the term "marker" is intended to broadly refer to any textual word or phrase or any graphic that can be displayed on an ultrasound image displayed on a display device of an ultrasound imaging system or review station. A marker includes, but is not limited to, a word or phrase used to identify anatomy and a geometric shape (such as a square or circle) used to identify a region of interest.
Next, the received voiced command is converted into a marker displayable on the display device (82). In one embodiment, the received voiced command is compared to the system's entire voice information to determine which marker is associated with the voiced command. To improve recognition, it is preferred that only a subset of the voice information be used to convert the voiced command into a marker. Preferably, ultrasound images are classified by a study type, and only those markers associated with that study type are used to convert the voiced command into a marker. An indication of the study type can be provided with the image, or the system can analyze the image to determine the set of anatomical regions and the corresponding vocabulary.
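Restricting the conversion to the study type's vocabulary might look like the following sketch; the study types and marker names are illustrative placeholders:

```python
# Hypothetical marker vocabularies keyed by study type; only the vocabulary for the
# displayed image's study type is consulted, which narrows the recognition search.
MARKERS_BY_STUDY_TYPE = {
    "obstetrics": {"head": "HEAD", "femur": "FEMUR", "placenta": "PLACENTA"},
    "cardiac": {"left ventricle": "LV", "right ventricle": "RV", "aorta": "AO"},
}

def voiced_command_to_marker(voiced_command: str, study_type: str) -> str:
    """Convert a voiced command into a marker label using the study type's subset."""
    vocabulary = MARKERS_BY_STUDY_TYPE[study_type]
    return vocabulary[voiced_command.lower()]
```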
After the voiced command is converted to a marker, the marker is displayed on the ultrasound image (84). As shown in
In the embodiment described above, a user first positions a pointer over an ultrasound image, and the marker is placed on the image at the location identified by the pointer. In an alternate embodiment, the marker that is displayed on the ultrasound image can be positioned by the user with a pointing device such as a mouse. Pressing the mouse button would pin the marker in place. Further, a voiced command can be converted into a plurality of markers that are each positioned by the user. For example, if the user says "tumors," multiple "tumor" markers are displayed. The user then can use a pointing device to drag and drop each marker at the appropriate location on the image. Additionally, the user can position the pointer over a displayed marker and voice a command to copy or delete the marker.
In another preferred embodiment shown in
With the network relationship shown in
These preferred embodiments are particularly useful when the first ultrasound device is an ultrasound imaging system and the second ultrasound device is an ultrasound review station. After an ultrasound examination is performed with the ultrasound imaging system, a user can dictate information into the voice input device for a report to be generated for the examination on the review station. The dictated information is recorded by a voice recorder of the ultrasound imaging system, and the voice data is provided to the ultrasound review station via the server or portable storage device. The voice recognition unit of the server or ultrasound review station transcribes the voice data and inserts the transcription into an ultrasound examination report. The above-described embodiments associated with generating an ultrasound examination report can be used to enhance recognition. For example, in dictating a Normal OB report, the user can utter: "Diagnosis section: Normal diagnosis". On conversion, the command "normal diagnosis" would trigger a macro to provide a textual phrase in the diagnosis section of a Normal OB report.
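Parsing a dictated utterance of the form "Diagnosis section: Normal diagnosis" into a report section and an expanded textual phrase could be sketched as follows, reusing the per-section macro sets illustrated earlier; the "X section:" prefix convention is an assumption made for illustration:

```python
def expand_dictation(utterance: str, section_macros: dict) -> tuple:
    """Split 'Diagnosis section: Normal diagnosis' into a section name and the macro's text."""
    prefix, _, command = utterance.partition(":")
    section = prefix.lower().replace("section", "").strip()     # -> "diagnosis"
    phrase = section_macros[section][command.strip().lower()]   # expand the macro
    return section, phrase
```

With the SECTION_MACROS sketched earlier, expand_dictation("Diagnosis section: Normal diagnosis", SECTION_MACROS) returns the diagnosis section paired with its expanded phrase.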
A voice production unit can be used to provide voice feedback with an ultrasound imaging system or review station, and
With a voice production unit 310, an ultrasound imaging system or review station can bring information to the user's attention using voice feedback instead of or along with displaying such information visually. In this way, the ultrasound imaging system or review station can communicate with a user without cluttering a display screen. Additionally, voice feedback can be used to provide the user with information that is of interest but not important enough to merit distracting the user by presenting it visually. In this way, the information can be provided to the user as background audio, which the user can choose to ignore. The voice production unit can be used with or integrated with a voice recognition unit, such as when the voice production unit and the voice recognition unit share some or all of their hardware and/or software components. An ultrasound imaging system or review station using both a voice recognition unit and a voice production unit can provide a fluid voice environment. For example, voice feedback can be used to confirm an action ("Are you sure?") and can also be used to reply to a voiced request (e.g., the user asks, "Is the study in room 10 done?" and the system responds in voice, "The study is complete and prior studies are being obtained"). The voice production unit can also be used to provide verbal alerts, the status of a voice recognition unit, and an indication of the completion of an activity. To avoid interruptions, the user can also command the system not to provide voice feedback.
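A minimal sketch of such a voice feedback front end is shown below; speak_fn and listen_fn stand in for the text-to-speech and recognition engines, which the description leaves unspecified:

```python
class VoiceFeedback:
    """Sketch of a voice production front end for confirmations, replies, and alerts."""

    def __init__(self, speak_fn):
        self._speak = speak_fn
        self.muted = False            # the user can command the system not to give feedback

    def notify(self, message: str) -> None:
        """Bring information to the user's attention without using screen space."""
        if not self.muted:
            self._speak(message)

    def confirm(self, question: str, listen_fn) -> bool:
        """Voice a confirmation such as 'Are you sure?' and interpret the spoken reply."""
        self._speak(question)
        return listen_fn() == "yes"
```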
There are several voice-related user interfaces that can be used with an ultrasound imaging system or review station. A user can issue voiced commands to an ultrasound imaging system or review station via a headset, a wireless microphone (such as a microphone manufactured by Shure), an attached microphone, or an array microphone. To allow a user to work closer to a patient, the ultrasound transducer shown in
In another preferred embodiment, voice commands can be used to assign a function to a user interface device built into or attached to an ultrasound imaging system or ultrasound review station. The user interface device can take any suitable form (such as, but not limited to, a wheel, button, trackball, slider, and knob) attached directly or indirectly to the ultrasound imaging system or review station. For example, the user interface can be a pre-existing button on an ultrasound review station keyboard or a specially-designed knob added to an ultrasound imaging system. The user interface device can also be part of or attached to an ultrasound transducer. For example,
With these preferred embodiments, the function of a user interface device can be easily changed by voice, such as when an ultrasound transducer has both a built-in depressible wheel and an attached microphone. As a user is performing an ultrasound examination, he speaks the word "gain" into the microphone of the ultrasound transducer. The ultrasound imaging system would then assign the gain function to the wheel. When the user scrolls the wheel forward, the ultrasound imaging system would increase the gain, and when the user scrolls the wheel backwards, the ultrasound imaging system would decrease the gain. If the user then says "image," an image would be generated when the user presses the depressible wheel. In this way, by issuing simple voice commands, the user can control a variety of functions with a single user interface device. Of course multiple user interface devices can be used in combination.
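The voice-assigned wheel described in this example could be modeled roughly as follows; the function table, method names, and the imaging-system controller interface are assumptions made for illustration:

```python
class AssignableWheel:
    """Sketch of a depressible transducer wheel whose function is reassigned by voice."""

    # Assignable functions: what scrolling and pressing do under each assignment.
    FUNCTIONS = {
        "gain": {"forward": "increase_gain", "backward": "decrease_gain", "press": None},
        "image": {"forward": None, "backward": None, "press": "capture_image"},
    }

    def __init__(self, system):
        self.system = system          # the imaging-system controller
        self.assigned = "gain"        # default assignment

    def assign(self, voiced_function: str) -> None:
        """Called when the user speaks a function name, e.g., 'gain' or 'image'."""
        if voiced_function in self.FUNCTIONS:
            self.assigned = voiced_function

    def scroll(self, direction: str) -> None:
        action = self.FUNCTIONS[self.assigned].get(direction)   # "forward" or "backward"
        if action:
            getattr(self.system, action)()

    def press(self) -> None:
        action = self.FUNCTIONS[self.assigned].get("press")
        if action:
            getattr(self.system, action)()
```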
Appendix A provides further details of a presently preferred embodiment. Additionally, while the preferred embodiments were described above with respect to an ultrasound review station, review stations for use with other imaging modalities can be used.
The foregoing detailed description has described only a few of the many forms that this invention can take. Of course, many changes and modifications are possible to the preferred embodiments described above. For this reason it is intended that this detailed description be regarded as an illustration and not as a limitation of the invention. It is only the following claims, including all equivalents, that are intended to define the scope of this invention.
Patent | Priority | Assignee | Title |
4516207, | Mar 09 1981 | Toyota Jidosha Kogyo Kabushiki Kaisha | Apparatus for controlling air conditioner by voice |
4819271, | May 29 1985 | International Business Machines Corporation | Constructing Markov model word baseforms from multiple utterances by concatenating model sequences for word segments |
5146439, | Jan 04 1989 | Pitney Bowes Inc. | Records management system having dictation/transcription capability |
5377303, | Jun 23 1989 | Multimodal Technologies, LLC | Controlled computer interface |
5544654, | Jun 06 1995 | Siemens Medical Solutions USA, Inc | Voice control of a medical ultrasound scanning machine |
5553620, | May 02 1995 | Siemens Medical Solutions USA, Inc | Interactive goal-directed ultrasound measurement system |
5581460, | Nov 06 1990 | Kabushiki Kaisha Toshiba | Medical diagnostic report forming apparatus capable of attaching image data on report |
5581657, | Jul 29 1994 | Xerox Corporation | System for integrating multiple genetic algorithm applications |
5592374, | Jul 02 1993 | CARESTREAM HEALTH, INC | Patient identification and x-ray exam data collection bar code system |
5605153, | Jun 19 1992 | Kabushiki Kaisha Toshiba | Medical image diagnostic system |
5611060, | Feb 22 1995 | Microsoft Technology Licensing, LLC | Auto-scrolling during a drag and drop operation |
5619991, | Apr 26 1995 | THE CHASE MANHATTAN BANK, AS COLLATERAL AGENT | Delivery of medical services using electronic data communications |
5636631, | May 12 1992 | CARESTREAM HEALTH, INC | Ultrasonic image data formats |
5651099, | Jan 26 1995 | HEWLETT-PACKARD DEVELOPMENT COMPANY, L P | Use of a genetic algorithm to optimize memory space |
5655084, | Nov 26 1993 | EMED Technologies Corporation | Radiological image interpretation apparatus and method |
5659665, | Dec 08 1994 | THE CHASE MANHATTAN BANK, AS COLLATERAL AGENT | Method and apparatus for including speech recognition capabilities in a computer system |
5660176, | Dec 29 1993 | Clinical Decision Support, LLC | Computerized medical diagnostic and treatment advice system |
5724968, | Dec 29 1993 | Clinical Decision Support, LLC | Computerized medical diagnostic system including meta function |
5740801, | Mar 31 1993 | KARL STORZ ENDOSCOPY-AMERICA, INC | Managing information in an endoscopy system |
5748191, | Jul 31 1995 | Microsoft Technology Licensing, LLC | Method and system for creating voice commands using an automatically maintained log interactions performed by a user |
5758322, | Dec 09 1994 | INTERNATIONAL VOICE REGISTER, INC | Method and apparatus for conducting point-of-sale transactions using voice recognition |
5761641, | Jul 31 1995 | Microsoft Technology Licensing, LLC | Method and system for creating voice commands for inserting previously entered information |
5853367, | Mar 17 1997 | General Electric Company | Task-interface and communications system and method for ultrasound imager control |
5920317, | Jun 11 1996 | CAMTRONICS MEDICAL SYSTEMS LTD | System and method for storing and displaying ultrasound images |
5957849, | Jun 30 1997 | The Regents of the University of California | Endoluminal ultrasound-guided resectoscope |
5970457, | Oct 25 1995 | Johns Hopkins University | Voice command and control medical care system |
5971923, | Dec 31 1997 | Siemens Medical Solutions USA, Inc | Ultrasound system and method for interfacing with peripherals |
6031526, | Aug 08 1996 | APOLLO CAMERA, L L C | Voice controlled medical text and image reporting system |
6032120, | Dec 16 1997 | Siemens Medical Solutions USA, Inc | Accessing stored ultrasound images and other digital medical images |
6083167, | Feb 10 1998 | Emory University | Systems and methods for providing radiation therapy and catheter guides |
6159150, | Nov 20 1998 | Siemens Medical Solutions USA, Inc | Medical diagnostic ultrasonic imaging system with auxiliary processor |
6238344, | Mar 30 2000 | Siemens Medical Solutions USA, Inc | Medical diagnostic ultrasound imaging system with a wirelessly-controlled peripheral |