Ontologies are used to comprehend regular expressions, by selecting, based on a context relating to a domain of a regular expression, an ontology and an assertion base, parsing the regular expression to identify at least one fragment of the regular expression, identifying one or more assertions in the assertion base corresponding to one of the identified fragments, identifying, for each identified assertion, an associated node in the ontology, and returning, based on the associated nodes, a concept in the ontology as representing the associated fragment of the regular expression.

Patent: 9466027
Priority: Dec 31, 2013
Filed: Jun 25, 2014
Issued: Oct 11, 2016
Expiry: Jul 04, 2034 (terminal disclaimer; 185-day extension)
Assignee entity: Large
1. A method, comprising:
selecting, based on a context relating to a domain of a regular expression, an ontology and an assertion base;
parsing the regular expression to identify at least one fragment of the regular expression;
identifying one or more assertions in the assertion base corresponding to one of the identified fragments;
identifying, for each identified assertion, an associated node in the ontology; and
returning, based on the associated nodes, a concept in the ontology as representing the associated fragment of the regular expression.
2. The method of claim 1, wherein the context comprises at least one of: (i) an explicit context, (ii) an implicit context derived from an entry in a business glossary, and (iii) an implicit context derived from metadata associated with a database table.
3. The method of claim 1, wherein the concept in the ontology is returned upon determining that one or more of the associated nodes point to the concept.
4. The method of claim 1, further comprising:
receiving feedback for at least one returned concept; and
updating a weight between the identified node of each concept and the associated assertion in the assertion base based on the received feedback.
5. The method of claim 4, further comprising:
upon determining that the weight between the at least one concept and the associated assertion falls below a predefined threshold, removing the association between the at least one concept and the associated assertion.
6. The method of claim 1, further comprising:
identifying one or more terms in a knowledge base related to each of the returned concepts; and
updating metadata of each identified term in the knowledge base to include the regular expression.
7. The method of claim 1, wherein each of a plurality of assertions in the assertion base is defined based on at least one of: (i) user input, and (ii) auto-retrieval of one or more existing assertions.

This application is a continuation of co-pending U.S. patent application Ser. No. 14/145,113, filed Dec. 31, 2013. The aforementioned related patent application is herein incorporated by reference in its entirety.

The present disclosure relates to computer software, and more specifically, to computer software which uses ontologies to comprehend regular expressions. A regular expression (regex) is an expression that specifies a set of rules for identifying strings that satisfy the expression. While computers can easily process regular expressions, they are typically difficult for humans to understand. One challenge for computer users is understanding what kind of pattern a regex is meant to identify. Users may also wish to tweak an existing regex to determine why certain expected strings are, or are not, being matched against the regex. Regular expressions, however, are typically hard-coded or pre-exist in tools in a format not readily understandable to humans. As such, to transform a regex into a more human-readable form, the user must first determine what the regex means.

Ontologies are used to comprehend regular expressions, by selecting, based on a context relating to a domain of a regular expression, an ontology and an assertion base, parsing the regular expression to identify at least one fragment of the regular expression, identifying one or more assertions in the assertion base corresponding to one of the identified fragments, identifying, for each identified assertion, an associated node in the ontology, and returning, based on the associated nodes, a concept in the ontology as representing the associated fragment of the regular expression.

FIG. 1 illustrates techniques to use ontologies to understand regular expressions, according to one embodiment.

FIG. 2 illustrates a system to use ontologies to understand regular expressions, according to one embodiment.

FIG. 3 illustrates a method to use ontologies to understand regular expressions, according to one embodiment.

Embodiments disclosed herein describe techniques that provide users with detailed information about regular expressions by using ontologies and assertion bases (also referred to as knowledge bases) to determine the semantic meaning of a given regular expression. The assertion bases and ontologies may be linked together by the assertions in the assertion base. When a regular expression (regex) is supplied, a context of the regular expression may be identified, and an appropriate ontology and assertion base may be selected in response. For example, a regex for longitude and latitude may appear in a geo-coding application. Given the context of the geo-coding application, a relevant ontology and assertion base may be selected. The regex may then be broken down into smaller fragments, which are then matched to one or more assertions in the selected assertion base. Embodiments disclosed herein may then perform semantic searches on semantic or ontology-based graphs, storing intermediate and final nodes as abstract and auxiliary representations for the regex. Continuing with the geo-coding example, detailed information may be outputted indicating that the regular expression is related to the geo-coding application. For example, the output may specify that the regex is generally related to location based services and indicate that a first fragment of the regex is related to latitude, while a second fragment is related to longitude.

Stated differently, embodiments disclosed herein pair the knowledge captured in ontologies with the context in which a regular expression appears to determine higher-level abstraction and structure from fragments of a regular expression. For a regular expression provided as input, users are provided with a semantic translation of the regular expression along with meaningful examples of what the regular expression captures. Doing so improves user understanding of a regular expression. As a result, debugging regular expressions is simplified since the user need not guess the meaning of a regex. By providing a semantically annotated library of regular expressions, users are able to develop new regular expressions much faster by using the library, where regular expressions can be used as a starting point for creating new regular expressions.

FIG. 1 is a schematic 100 illustrating techniques to use ontologies to understand regular expressions, according to one embodiment. As shown, at box 101, a user (or an application) specifies one or more regular expressions and a context of the regular expressions. Any regular expression may be provided, and may include several constituent parts. For example, the regular expression R may include elements r1, r3, and r4, which, unknown to most users, represent latitude, longitude, and temperature, respectively. In this example, r1 and r3 may be “^(\-?\d+(\.\d+)?),\s*(\-?\d+(\.\d+)?)$” while r4 may be “(?!\d+(?:[.,]\d+)?|\bdeg\b|\b[CF]\b|>)”. The regular expression may be provided by a user or an application, while the context may be explicitly defined or inferred based on the tool or application using the regular expression. For example, if the regular expression is used by an application that ensures data quality in a database, the fields and tables of the database may provide insight as to the context of the regular expression. The database field or table may have metadata associated with it, or may be linked to a term in a business glossary defining a specific term. If the metadata or business glossary term indicates, for example, that the field is related to a GPS coordinate, embodiments disclosed herein may infer that the context of the regular expression is geography or global positioning system (GPS) related. Generally, the context may be inferred from any source, such as a script, application, application source code, a database, an ontology, and the like.

At box 102, the system analyzes the context of the regular expression and selects an appropriate ontology and assertion base. Continuing with the geographic location example, the system may select an ontology and assertion base related to geo-coding that includes concepts linked to a geographical location. At box 103, the regular expression is analyzed and matched to assertions in the assertion base 104. To do so, the regular expression is broken down into fragments by parsing the regular expression. One technique is to identify an “open” parenthesis and push it, along with all following characters, onto a stack until the matching “close” parenthesis is met. The stack is then popped; if the stack is empty, the characters accumulated in the buffer are identified as a separate regex fragment.
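The parenthesis-matching heuristic described above can be sketched as follows. This is a minimal illustration assuming a balanced expression; a production parser would also have to handle escaped parentheses, character classes, and non-capturing groups:

```python
def extract_fragments(regex):
    """Split a regex into its top-level parenthesized fragments
    using a simple stack-based scan (assumes balanced parentheses;
    ignores escapes and character classes for brevity)."""
    fragments = []
    stack = []    # tracks open parentheses
    buffer = []   # accumulates characters of the current group
    for ch in regex:
        if ch == '(':
            stack.append(ch)
            buffer.append(ch)
        elif ch == ')':
            buffer.append(ch)
            stack.pop()
            if not stack:  # stack empty: a complete top-level group
                fragments.append(''.join(buffer))
                buffer = []
        elif stack:  # only accumulate characters inside a group
            buffer.append(ch)
    return fragments

# The latitude/longitude regex from FIG. 1 splits into two groups.
frags = extract_fragments(r"(\-?\d+(\.\d+)?),\s*(\-?\d+(\.\d+)?)")
assert len(frags) == 2
```

Characters between top-level groups (such as the literal comma and `\s*` above) are discarded here; the disclosure only requires that each complete group be identified as a separate fragment.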

The assertion base 104 includes a plurality of assertions, which in this example, include assertions 121-126. Each of the assertions includes a text description of a regular expression fragment and a mapping to concepts in the ontology 105. For example, r1 123 in the assertion base may map to a latitude concept in the ontology 105. The description of the assertion r1 123 may indicate that r1 is a regular expression pattern used to identify latitude. Any description may be provided for each assertion, such as “r3 126 is a regular expression pattern to identify temperature,” “L1 125 is a label for a possible location,” and “W11 122 is a label for a possible weather concept.” The assertions in the assertion base 104 may be populated and maintained by a data steward, or may be automatically harvested by searching the Internet. For example, an Internet search may reveal that r1 123 and r2 124 both relate to GPS coordinates, and the information in the assertion base may be populated accordingly.
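An assertion base of this shape can be sketched as a simple lookup table. The fragment patterns, descriptions, and ontology node names below are illustrative, not taken from the disclosure:

```python
# A minimal, hypothetical in-memory assertion base. Each assertion
# pairs a regex fragment with a text description and the ontology
# node it maps to.
ASSERTION_BASE = {
    r"(\-?\d+(\.\d+)?)": {
        "description": "regular expression pattern used to identify latitude",
        "ontology_node": "latitude",
    },
    r"(?!\d+(?:[.,]\d+)?)": {
        "description": "regular expression pattern used to identify temperature",
        "ontology_node": "temperature",
    },
}

def match_fragment(fragment):
    """Return the assertion describing a fragment, or None if unmatched."""
    return ASSERTION_BASE.get(fragment)
```

A data steward would maintain entries like these by hand, while automated harvesting would append new fragment-to-concept pairs discovered on the Internet.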

Once the system matches the fragments of the regular expression to the assertions in the assertion base 104, the system identifies the corresponding fine-grained regex nodes (r1, r3, and r4 in this example) and identifies incoming edges to these nodes in order to walk a graph of the ontology to find possible label nodes until all paths converge to a common node. Any directional graph traversal algorithm may be used, including, without limitation, a breadth-first or depth-first search. Continuing with the example depicted in FIG. 1, the system may find L1 125 from r1 123 and r3 126, and W11 122 from r4 121. The system may further traverse the graph to find W11 122 from L1 125. As W11 122 is the node where all paths converge, the system may identify W11 as a final abstraction. Since W11 is a type of “weather” node, as defined in node 110 of the ontology, the system may output “Weather” as the high-level pattern identified by the regex. In addition, the system may output “Location” as an auxiliary sub-structure contained in the regex.
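The convergence search can be sketched with a toy graph mirroring FIG. 1 (r1 and r3 lead to L1, which leads to W11; r4 leads to W11 directly). The breadth-first traversal and the edge map below are illustrative:

```python
from collections import deque

# Hypothetical ontology edges mirroring FIG. 1: child -> parent labels.
ONTOLOGY = {
    "r1": ["L1"], "r3": ["L1"], "r4": ["W11"],
    "L1": ["W11"], "W11": [],
}

def ancestors(node):
    """All nodes reachable from `node` (itself included) by
    following parent edges, found via breadth-first search."""
    seen, queue = {node}, deque([node])
    while queue:
        for parent in ONTOLOGY.get(queue.popleft(), []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

def convergence(nodes):
    """Nodes common to every start node's ancestor set: the
    candidate high-level abstractions for the regex."""
    return set.intersection(*(ancestors(n) for n in nodes))
```

For the FIG. 1 example, `convergence(["r1", "r3", "r4"])` yields only W11, the final abstraction; the intermediate node L1 survives only for the r1/r3 subset, which is why location is reported as an auxiliary sub-structure.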

In some embodiments, the system may present examples of formats and/or data that are accepted or rejected by the provided regular expression. For instance, the system may present examples of a valid weather specification including valid location and temperature instances. The system may indicate whether a validation is unidirectional (i.e., has a specific ordering). For instance, latitude may be required to come before longitude even if logically speaking either can come first. Another example is regular expressions used to parse address data, where the regular expressions enforce a different ordering of certain tokens in an address by country. For example, in the United States, the structure of an address is:

(Optional Title) First Name (Optional Middle Name) Last Name

House Number—(Optional Unit Number)—Street Name

City—State—Zip Code—United States

While in Germany, addresses are structured as:

(Optional Title)—First Name—(Optional Middle Name)—Last Name

Street Name—House Number—(Optional Unit Number)

Zip Code—City

Germany

As these address formats show, the order and permissible range values vary within sub-domains of a given domain. Therefore, if in matching the regex fragments to the assertions in the assertion base, a subdomain/domain like latitude is found, any available knowledge on ordering from that domain is considered to improve the accuracy of the output generated by the system relative to the regular expression.
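Domain-specific ordering knowledge of this kind can be captured as per-subdomain rules and checked against the order in which fragments appear. The rule tables and token names below are hypothetical:

```python
# Hypothetical per-country ordering constraints for address tokens,
# following the US and German formats described above.
ORDER_RULES = {
    "US": ["house_number", "street_name", "city", "state", "zip_code"],
    "DE": ["street_name", "house_number", "zip_code", "city"],
}

def order_matches(tokens, country):
    """True if `tokens` appear in the order the country's rule
    requires (tokens not covered by the rule are ignored)."""
    rule = ORDER_RULES[country]
    positions = [rule.index(t) for t in tokens if t in rule]
    return positions == sorted(positions)
```

A regex whose fragments appear as house number, street name, zip code would satisfy the US rule but violate the German one, letting the system refine which sub-domain the regex belongs to.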

In still another embodiment, terms in a business glossary may be enriched by linking regular expressions to terms in the business glossary using the high-level patterns identified by the system. These regular expressions may also define key performance indicators for data quality, etc. For instance, in the above example, after determining “weather” and “location” as main and auxiliary patterns identified by the regex, the system may also search the business glossary trying to find a semantic match for the patterns. If matches are found, the system writes extra metadata (i.e., the regular expressions) to the term or terms in the business glossary.

In some embodiments, the user may provide feedback on output generated by the system. For example, the user may indicate that the “weather” designation was accurate, but that the “location” designation was inaccurate. In such embodiments, the system may use the user feedback to assign weights to the mappings between the assertions in the assertion base 104 and the ontology 105. The system may then refine the mappings: links whose weight remains above a threshold are reinforced, while links whose weight falls below the threshold are removed.
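The feedback loop above can be sketched as a simple weight update with threshold pruning. The initial weights, step size, and threshold are hypothetical values, not taken from the disclosure:

```python
# Hypothetical feedback loop: each (assertion, concept) link carries
# a weight that user feedback nudges up or down; links falling below
# the threshold are pruned.
THRESHOLD = 0.2
STEP = 0.1

def apply_feedback(links, assertion, concept, accurate):
    """Reinforce or weaken one assertion->concept link based on
    user feedback; remove the link if its weight drops below
    THRESHOLD."""
    key = (assertion, concept)
    links[key] = links.get(key, 0.5) + (STEP if accurate else -STEP)
    if links[key] < THRESHOLD:
        del links[key]  # prune links that users have contradicted

links = {("r4", "weather"): 0.5, ("r1", "location"): 0.25}
apply_feedback(links, "r4", "weather", accurate=True)    # reinforced
apply_feedback(links, "r1", "location", accurate=False)  # pruned
```

Repeated positive feedback thus raises confidence in a mapping, while repeated negative feedback eventually removes it from the assertion base entirely.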

FIG. 2 illustrates a system 200 to use ontologies to understand regular expressions, according to one embodiment. The networked system 200 includes a computer 202. The computer 202 may also be connected to other computers via a network 230. In general, the network 230 may be a telecommunications network and/or a wide area network (WAN). In a particular embodiment, the network 230 is the Internet.

The computer 202 generally includes a processor 204 connected via a bus 220 to a memory 206, a network interface device 218, a storage 208, an input device 222, and an output device 224. The computer 202 is generally under the control of an operating system (not shown). Examples of operating systems include the UNIX operating system, versions of the Microsoft Windows operating system, and distributions of the Linux operating system. (UNIX is a registered trademark of The Open Group in the United States and other countries. Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other countries, or both. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.) More generally, any operating system supporting the functions disclosed herein may be used. The processor 204 is included to be representative of a single CPU, multiple CPUs, a single CPU having multiple processing cores, and the like. The network interface device 218 may be any type of network communications device allowing the computer 202 to communicate with other computers via the network 230.

The storage 208 may be a persistent storage device. Although the storage 208 is shown as a single unit, the storage 208 may be a combination of fixed and/or removable storage devices, such as fixed disc drives, solid state drives, SAN storage, NAS storage, removable memory cards or optical storage. The memory 206 and the storage 208 may be part of one virtual address space spanning multiple primary and secondary storage devices.

The input device 222 may be any device for providing input to the computer 202. For example, a keyboard and/or a mouse may be used. The output device 224 may be any device for providing output to a user of the computer 202. For example, the output device 224 may be any conventional display screen or set of speakers. Although shown separately from the input device 222, the output device 224 and input device 222 may be combined. For example, a display screen with an integrated touch-screen may be used.

As shown, the memory 206 contains the regex tool 212, which is an application generally configured to provide users with detailed information about regular expressions. For a regular expression (and context) received as input, the regex tool 212 analyzes the context to select an appropriate ontology and knowledge base. For example, if the context expressly indicates that the regex is part of a geo-coding application, the regex tool 212 may select a relevant ontology and knowledge base having terms and assertions related to geo-coding. The context may be expressly defined, or the regex tool 212 may infer the context using a variety of existing data that may provide insight to the context. The regex tool 212 may then break the regex down into fragments, which may then be matched to assertions in the assertion base 248. The regex tool 212 may then identify common nodes shared by the nodes in the assertion base 248. The assertions in the assertion base 248 may be linked to concepts in one or more ontologies 247. The regex tool 212 may then identify one or more concepts in the ontology 247 as related to the regular expression based on the information in the ontology 247 related to the common nodes. The regex tool 212 may then output detailed information to the user regarding the regex.

As shown, storage 208 contains ontology 247, which provides the structural framework for organizing information. An ontology formally represents knowledge as a set of concepts within a domain, and the relationships between those concepts. Assertion bases 248 each include a plurality of assertions related to a specific domain. Each assertion includes a text description of a regular expression fragment and a mapping to one or more concepts in the ontology 247. Business glossary 249 allows users to create and manage an enterprise vocabulary and classification system using a domain-specific collection of industry standard terms and definitions. A plurality of client computers 250 1-N may access the regex tool 212 through a client interface 260. In at least some embodiments, the regex tool 212 executes on one or more of the client computers 250 1-N.

FIG. 3 illustrates a method 300 to use ontologies to understand regular expressions, according to one embodiment. Executing the steps of the method 300 generates detailed information regarding a regular expression for a user. At step 310, assertions in one or more assertion bases 248 may be populated by a user or programmatically populated by leveraging information retrieved from the Internet. Each assertion includes a text description of a regular expression fragment and a mapping to one or more concepts in one of the ontologies. For example, an assertion may state that “R1 represents airplane models.” At step 320, the regex tool 212 may receive a regular expression and a context from a user or application. At step 330, the regex tool 212 selects an appropriate ontology 247 and assertion base 248 based on an analysis of the context received at step 320. For example, if the context indicates that the regular expression may be related to aviation, the regex tool 212 may select an assertion base and ontology related to aviation. At step 340, the regex tool 212 may break down the regular expression into one or more smaller fragments using known heuristics.

At step 350, the regex tool 212 may match the fragments to assertions in the selected assertion base 248 by searching the assertions in the assertion base 248 to identify assertions describing each respective fragment of the regular expression. Once the assertions are identified, the regex tool 212 may traverse a graph of the assertions in the assertion base 248 to identify one or more nodes that are common to the identified assertions at step 360. As each of the assertions in the assertion base 248 is mapped to one or more concepts in the ontology 247, the regex tool 212 may return the mapped concepts in the ontology 247 as being associated with the regular expression at step 370. For example, the regex tool 212 may output one or more broad focuses of the regular expression, such as “aviation,” and then include a detailed description of each fragment of the regular expression. At step 380, the regex tool 212 may optionally update relationships between the assertions in the assertion base 248 and the ontology 247 based on user feedback. For example, if the user indicates that the regex tool 212 returned correct information regarding the regular expression, a weight for the identified assertions may be increased, indicating greater confidence in the assertions and the mappings to the ontology. If, however, the user indicates that the regex tool 212 was incorrect, the weight may be decreased. If the weight falls below a threshold, the regex tool 212 may remove the assertion from the assertion base 248. At step 390, the regex tool 212 may optionally enrich terms in the business glossary 249 with the regular expression by linking reverse-engineered regular expressions to business terms using the identified high-level patterns. For instance, after identifying “aviation” and “airplanes” as main and auxiliary patterns matched by the regex, the regex tool 212 may also search the business glossary 249 trying to find a semantic match for the patterns. If matches are found, the system writes extra metadata (i.e., the regular expressions) to the term or terms in the business glossary.
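The core of the method-300 flow can be condensed into a self-contained sketch: select an assertion base by context (step 330), match fragments against its assertions (step 350), and return the mapped ontology concepts (step 370). All names and mappings below are illustrative:

```python
# Hypothetical assertion bases keyed by context; each maps a regex
# fragment to the ontology concept it represents.
ASSERTION_BASES = {
    "aviation": {"R1": "airplane_model", "R2": "airport_code"},
    "geo-coding": {"r1": "latitude", "r3": "longitude"},
}

def comprehend(fragments, context):
    """Map each matched regex fragment to an ontology concept,
    using the assertion base selected for the given context."""
    base = ASSERTION_BASES[context]            # step 330
    return {f: base[f] for f in fragments if f in base}  # steps 350-370
```

Unmatched fragments are simply omitted here; a fuller implementation would surface them to the user as candidates for new assertions.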

Advantageously, embodiments disclosed herein take regular expressions, which are not easily understood by human readers, and provide insight as to their meaning and utility. Users may receive automatic semantic translations of any regular expression alongside meaningful examples of what the regular expression captures. With the improved understanding of the regular expressions, users can significantly reduce debugging time, since the user no longer needs to determine what the regular expressions are intended to capture. Additionally, by providing a semantically annotated library of regular expressions, users may be able to develop new regular expressions by using the existing regular expressions as a starting point.

The descriptions of the various embodiments of the present disclosure have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.

Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.

A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.

Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.

Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).

Aspects of the present disclosure are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.

The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.

Embodiments of the disclosure may be provided to end users through a cloud computing infrastructure. Cloud computing generally refers to the provision of scalable computing resources as a service over a network. More formally, cloud computing may be defined as a computing capability that provides an abstraction between the computing resource and its underlying technical architecture (e.g., servers, storage, networks), enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. Thus, cloud computing allows a user to access virtual computing resources (e.g., storage, data, applications, and even complete virtualized computing systems) in “the cloud,” without regard for the underlying physical systems (or locations of those systems) used to provide the computing resources.

Typically, cloud computing resources are provided to a user on a pay-per-use basis, where users are charged only for the computing resources actually used (e.g. an amount of storage space consumed by a user or a number of virtualized systems instantiated by the user). A user can access any of the resources that reside in the cloud at any time, and from anywhere across the Internet. In context of the present disclosure, a user may access applications such as the regex tool or related data available in the cloud. For example, the regex tool could execute on a computing system in the cloud and determine the conceptual meaning of regular expressions. In such a case, the regex tool could analyze a regular expression and store higher-level abstractions describing the regular expression at a storage location in the cloud. Doing so allows a user to access this information from any computing system attached to a network connected to the cloud (e.g., the Internet).

The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Byrne, Brian P., Pandit, Sushain, Milman, Ivan M., Oberhofer, Martin A.

Patent Priority Assignee Title
10755051, Sep 29 2017 Apple Inc Rule-based natural language processing
10878809, May 30 2014 Apple Inc. Multi-command single utterance input method
10978090, Feb 07 2013 Apple Inc. Voice trigger for a digital assistant
10984798, Jun 01 2018 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
11009970, Jun 01 2018 Apple Inc. Attention aware virtual assistant dismissal
11037565, Jun 10 2016 Apple Inc. Intelligent digital assistant in a multi-tasking environment
11070949, May 27 2015 Apple Inc. Systems and methods for proactively identifying and surfacing relevant content on an electronic device with a touch-sensitive display
11087759, Mar 08 2015 Apple Inc. Virtual assistant activation
11120372, Jun 03 2011 Apple Inc. Performing actions associated with task items that represent tasks to perform
11126400, Sep 08 2015 Apple Inc. Zero latency digital assistant
11133008, May 30 2014 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
11152002, Jun 11 2016 Apple Inc. Application integration with a digital assistant
11169616, May 07 2018 Apple Inc. Raise to speak
11237797, May 31 2019 Apple Inc. User activity shortcut suggestions
11257504, May 30 2014 Apple Inc. Intelligent assistant for home automation
11321116, May 15 2012 Apple Inc. Systems and methods for integrating third party services with a digital assistant
11348582, Oct 02 2008 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
11360577, Jun 01 2018 Apple Inc. Attention aware virtual assistant dismissal
11360739, May 31 2019 Apple Inc User activity shortcut suggestions
11380310, May 12 2017 Apple Inc. Low-latency intelligent automated assistant
11388291, Mar 14 2013 Apple Inc. System and method for processing voicemail
11405466, May 12 2017 Apple Inc. Synchronization and task delegation of a digital assistant
11423886, Jan 18 2010 Apple Inc. Task flow identification based on user intent
11431642, Jun 01 2018 Apple Inc. Variable latency device coordination
11467802, May 11 2017 Apple Inc. Maintaining privacy of personal information
11468282, May 15 2015 Apple Inc. Virtual assistant in a communication session
11487364, May 07 2018 Apple Inc. Raise to speak
11488406, Sep 25 2019 Apple Inc Text detection using global geometry estimators
11500672, Sep 08 2015 Apple Inc. Distributed personal assistant
11516537, Jun 30 2014 Apple Inc. Intelligent automated assistant for TV user interactions
11526368, Nov 06 2015 Apple Inc. Intelligent automated assistant in a messaging environment
11532306, May 16 2017 Apple Inc. Detecting a trigger of a digital assistant
11538469, May 12 2017 Apple Inc. Low-latency intelligent automated assistant
11550542, Sep 08 2015 Apple Inc. Zero latency digital assistant
11557310, Feb 07 2013 Apple Inc. Voice trigger for a digital assistant
11580990, May 12 2017 Apple Inc. User-specific acoustic models
11599331, May 11 2017 Apple Inc. Maintaining privacy of personal information
11630525, Jun 01 2018 Apple Inc. Attention aware virtual assistant dismissal
11636869, Feb 07 2013 Apple Inc. Voice trigger for a digital assistant
11656884, Jan 09 2017 Apple Inc. Application integration with a digital assistant
11657813, May 31 2019 Apple Inc Voice identification in digital assistant systems
11657820, Jun 10 2016 Apple Inc. Intelligent digital assistant in a multi-tasking environment
11670289, May 30 2014 Apple Inc. Multi-command single utterance input method
11671920, Apr 03 2007 Apple Inc. Method and system for operating a multifunction portable electronic device using voice-activation
11675491, May 06 2019 Apple Inc. User configurable task triggers
11675829, May 16 2017 Apple Inc. Intelligent automated assistant for media exploration
11696060, Jul 21 2020 Apple Inc. User identification using headphones
11699448, May 30 2014 Apple Inc. Intelligent assistant for home automation
11705130, May 06 2019 Apple Inc. Spoken notifications
11710482, Mar 26 2018 Apple Inc. Natural assistant interaction
11727219, Jun 09 2013 Apple Inc. System and method for inferring user intent from speech inputs
11749275, Jun 11 2016 Apple Inc. Application integration with a digital assistant
11750962, Jul 21 2020 Apple Inc. User identification using headphones
11755276, May 12 2020 Apple Inc Reducing description length based on confidence
11765209, May 11 2020 Apple Inc. Digital assistant hardware abstraction
11783815, Mar 18 2019 Apple Inc. Multimodality in digital assistant systems
11790914, Jun 01 2019 Apple Inc. Methods and user interfaces for voice-based control of electronic devices
11798547, Mar 15 2013 Apple Inc. Voice activated device for use with a voice-based digital assistant
11809483, Sep 08 2015 Apple Inc. Intelligent automated assistant for media search and playback
11809783, Jun 11 2016 Apple Inc. Intelligent device arbitration and control
11809886, Nov 06 2015 Apple Inc. Intelligent automated assistant in a messaging environment
11810562, May 30 2014 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
11837237, May 12 2017 Apple Inc. User-specific acoustic models
11838579, Jun 30 2014 Apple Inc. Intelligent automated assistant for TV user interactions
11838734, Jul 20 2020 Apple Inc. Multi-device audio adjustment coordination
11842734, Mar 08 2015 Apple Inc. Virtual assistant activation
11853536, Sep 08 2015 Apple Inc. Intelligent automated assistant in a media environment
11853647, Dec 23 2015 Apple Inc. Proactive assistance based on dialog communication between devices
11854539, May 07 2018 Apple Inc. Intelligent automated assistant for delivering content from user experiences
11862151, May 12 2017 Apple Inc. Low-latency intelligent automated assistant
11862186, Feb 07 2013 Apple Inc. Voice trigger for a digital assistant
11886805, Nov 09 2015 Apple Inc. Unconventional virtual assistant interactions
11888791, May 21 2019 Apple Inc. Providing message response suggestions
11893992, Sep 28 2018 Apple Inc. Multi-modal inputs for voice commands
11900923, May 07 2018 Apple Inc. Intelligent automated assistant for delivering content from user experiences
11900936, Oct 02 2008 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
11907436, May 07 2018 Apple Inc. Raise to speak
11914848, May 11 2020 Apple Inc. Providing relevant data items based on context
11924254, May 11 2020 Apple Inc. Digital assistant hardware abstraction
11947873, Jun 29 2015 Apple Inc. Virtual assistant for media playback
11954405, Sep 08 2015 Apple Inc. Zero latency digital assistant
11979836, Apr 03 2007 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
12061752, Jun 01 2018 Apple Inc. Attention aware virtual assistant dismissal
12067985, Jun 01 2018 Apple Inc. Virtual assistant operations in multi-device environments
12067990, May 30 2014 Apple Inc. Intelligent assistant for home automation
12073147, Jun 09 2013 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
12080287, Jun 01 2018 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
12087308, Jan 18 2010 Apple Inc. Intelligent automated assistant
12118999, May 30 2014 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
Patent Priority Assignee Title
8386530, Jun 05 2006 Intel Corporation Systems and methods for processing regular expressions
8433715, Dec 16 2009 Board of Regents, The University of Texas System Method and system for text understanding in an ontology driven platform
20070050343,
20070265833,
20090070328,
20090112841,
20110213823,
20120005184,
20120102456,
20120124064,
20120143881,
20120311529,
20130041921,
20130054285,
20130054286,
20130338998,
20150074034,
JP8063350,
Executed on | Assignor | Assignee | Conveyance | Reel/Frame | Doc
Dec 18 2013 | BYRNE, BRIAN P. | International Business Machines Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 033179/0702 | pdf
Dec 18 2013 | OBERHOFER, MARTIN A. | International Business Machines Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 033179/0702 | pdf
Dec 19 2013 | MILMAN, IVAN M. | International Business Machines Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 033179/0702 | pdf
Dec 20 2013 | PANDIT, SUSHAIN | International Business Machines Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 033179/0702 | pdf
Jun 25 2014 | International Business Machines Corporation (assignment on the face of the patent)
Date Maintenance Fee Events
Jan 16 2020M1551: Payment of Maintenance Fee, 4th Year, Large Entity.
Feb 23 2024M1552: Payment of Maintenance Fee, 8th Year, Large Entity.


Date Maintenance Schedule
Oct 11 2019 | 4 years fee payment window open
Apr 11 2020 | 6 months grace period start (w surcharge)
Oct 11 2020 | patent expiry (for year 4)
Oct 11 2022 | 2 years to revive unintentionally abandoned end (for year 4)
Oct 11 2023 | 8 years fee payment window open
Apr 11 2024 | 6 months grace period start (w surcharge)
Oct 11 2024 | patent expiry (for year 8)
Oct 11 2026 | 2 years to revive unintentionally abandoned end (for year 8)
Oct 11 2027 | 12 years fee payment window open
Apr 11 2028 | 6 months grace period start (w surcharge)
Oct 11 2028 | patent expiry (for year 12)
Oct 11 2030 | 2 years to revive unintentionally abandoned end (for year 12)