A method and apparatus are provided for prioritizing message flows in a state machine execution environment. A state machine is disclosed that employs a flow graph that provides a flow control mechanism. The flow control mechanism defines a plurality of states and one or more transitions between the plurality of states, wherein one or more tokens circulate within the flow graph and execute functions during the one or more transitions between the states. The disclosed state machine parses one of the tokens to extract one or more predefined information elements; and assigns a priority to the token based on the extracted information elements and a state occupancy of the token, wherein the assigned priority controls an order in which the token is processed.
1. A state machine, comprising:
a memory containing a flow graph that provides a flow control mechanism, wherein said flow control mechanism defines a plurality of states and one or more transitions between said plurality of states, wherein one or more tokens circulate within said flow graph and execute functions during said one or more transitions between said states; and
at least one processor, coupled to the memory, said at least one processor operative to:
parse one of said tokens to extract one or more predefined information elements; and
assign a priority to said token based on said extracted information elements and a state occupancy of said token, wherein said assigned priority controls an order in which said token is processed and wherein said state occupancy is a position of said token in said flow graph.
10. A method for prioritizing message flows in a state machine execution environment, wherein said state machine execution environment employs a flow graph that provides a flow control mechanism, wherein said flow control mechanism defines a plurality of states and one or more transitions between said plurality of states, wherein one or more tokens circulate within said flow graph and execute functions during said one or more transitions between said states, said method comprising:
parsing one of said tokens to extract one or more predefined information elements; and
assigning a priority to said token based on said extracted information elements and a state occupancy of said token, wherein said assigned priority controls an order in which said token is processed and wherein said state occupancy is a position of said token in said flow graph.
18. An article of manufacture for prioritizing message flows in a state machine execution environment, wherein said state machine execution environment employs a flow graph that provides a flow control mechanism, wherein said flow control mechanism defines a plurality of states and one or more transitions between said plurality of states, wherein one or more tokens circulate within said flow graph and execute functions during said one or more transitions between said states, said article of manufacture comprising a machine readable storage medium containing one or more programs which when executed implement the steps of:
parsing one of said tokens to extract one or more predefined information elements; and
assigning a priority to said token based on said extracted information elements and a state occupancy of said token, wherein said assigned priority controls an order in which said token is processed and wherein said state occupancy is a position of said token in said flow graph.
2. The state machine of
3. The state machine of
4. The state machine of
5. The state machine of
6. The state machine of
7. The state machine of
8. The state machine of
9. The state machine of
11. The method of
12. The method of
13. The method of
14. The method of
15. The method of
16. The method of
17. The method of
The present invention relates to message processing techniques and, more particularly, to techniques for prioritizing message flows in a state machine execution environment.
Software applications are often constructed using an abstract state description language. Voice over Internet Protocol (IP) (or VoIP) call control applications, for example, have been realized by using a high level description language to describe the data processing and message exchange behavior of a VoIP Media Gateway Controller (MGC). In this system, an “execution engine” accepts a high level “graph language” used by the execution engine to create internal data structures that realize the VoIP MGC behaviors described in the input graph language. The combination of the execution engine and the input file that contains the graph language describing the VoIP MGC behavior results in a system that exhibits the desired MGC processing characteristics.
The description language supports an enhanced transition graph model. As a result, the computation can be best understood as a graph consisting of a set of nodes connected by arcs. Using this model, there may be many tokens that exist at a given node. The arcs represent transitions that tokens take between nodes. Tokens traverse arcs in response to events that can be derived from either timers or messages.
In many systems, especially those that implement VoIP call control, the system must perform predictably in the face of high demand, such as spikes of traffic. Typically, a call flow will be started with an event in the form of a message, generated on behalf of a user, when a user goes off-hook or completes entering a telephone number of a desired destination. The initial event will be followed by a series of message exchanges used by the specific protocol to move the call into a quiescent state.
In a conventional system under heavy or overload conditions, all messages (and therefore transitions between states) will be given an equal priority. This may result in a delay for the completion of a call setup message sequence (representing a call accepted by the system and already in progress), as requests for new service (e.g., a new call) will compete for resources with calls that are already in the process of being torn down. Thus, additional resources will be expended for a given offered load, as the system will not give precedence to a call in a tear-down phase that would be returning resources to the system.
A need therefore exists for an improved state machine execution environment that prioritizes message flows.
Generally, a method and apparatus are provided for prioritizing message flows in a state machine execution environment. According to one aspect of the invention, a state machine is disclosed that employs a flow graph that provides a flow control mechanism. The flow control mechanism defines a plurality of states and one or more transitions between the plurality of states, wherein one or more tokens circulate within the flow graph and execute functions during the one or more transitions between the states. The disclosed state machine parses one of the tokens to extract one or more predefined information elements; and assigns a priority to the token based on the extracted information elements and a state occupancy of the token, wherein the assigned priority controls an order in which the token is processed.
For example, the extracted information elements can comprise one or more name/value pairs. As used herein, the state occupancy is a position of the token in the flow graph. The assigned priority can be based, for example, on an effect that a particular transition by the token will have on other resources. In further variations, the assigned priority can be based, for example, on one or more of an amount of computation resources required by the token; a time sensitivity of the token; a determination of whether the token includes idle time; and a determination of whether a response is expected by another entity to the token. In one embodiment, the flow graph is a call flow graph that controls a processing of call control messages and wherein the functions include one or more of sending messages to other tokens and to external devices; and parsing incoming messages from other tokens or external devices.
According to another aspect of the invention, a higher priority can be assigned to a token that will trigger a transition that will return one or more currently used system resources to an available status. The state machine can optionally assign the token to one of a plurality of priority-based queues. In addition, the state machine can optionally dynamically adjust a priority assigned to the token based on an overload status. The state machine can discard one or more lower priority tokens when a priority queue of a predefined priority level is full.
A more complete understanding of the present invention, as well as further features and advantages of the present invention, will be obtained by reference to the following detailed description and drawings.
The present invention provides methods and apparatus for prioritizing message flows within a formal state machine execution environment. As used herein, a flow control mechanism defines a plurality of states (or nodes) in a graph and the transition arcs between such states. The present invention provides a state machine execution environment that employs an explicit flow control mechanism. According to one aspect of the invention, the disclosed state machine execution environment employs a two-stage processing of messages. In a first stage, a preliminary pass is performed over messages (or selected portions thereof) to extract information, such as one or more name/value pairs, that can be employed to prioritize the message. In a second stage, the messages are processed using the context of the computation to choose a specific message parsing behavior.
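By way of a hedged illustration only (the disclosure contains no source code, and the header format and field names below are assumptions, not part of the patent), the first-stage preliminary pass that extracts name/value pairs from a message might be sketched as follows:

```python
# Sketch of the first-stage preliminary pass: extract name/value pairs
# from a raw message without performing full protocol parsing. The
# one-"Name: value"-pair-per-line format is a hypothetical stand-in for
# a real call-control protocol message.

def preliminary_parse(raw_message: str) -> dict:
    """Extract name/value pairs usable for prioritizing the message."""
    elements = {}
    for line in raw_message.splitlines():
        if ":" in line:
            name, _, value = line.partition(":")
            elements[name.strip()] = value.strip()
    return elements

msg = "Method: BYE\nCall-ID: abc123\nFrom: user@example.com"
info = preliminary_parse(msg)
```

The second stage would then process the full message in the context of the token identified by these elements.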
Using a combination of the preliminary parsing in the first stage and the position of the target token in the call flow graph (referred to herein as the “state occupancy” of the token), it is possible to assign a priority to the subsequent processing of the message within the formal state machine execution environment. The assigned priority can be used to control the order in which messages are processed.
In general, the prioritization can be based on, for example, the effect that a particular state transition has on other resources. In various embodiments, the prioritization can be based, for example, on one or more of the amount of required computation resources, the time sensitivity of the message, whether the required message processing includes idle time, and whether a prompt response is expected to the message. If no response is expected, the recipient can typically replay the message out of order. In addition, using the assigned priority, call flows that are in the process of being set up, or being torn down, can be assigned a higher priority, thus minimizing the peak resources needed by the system to function responsively and providing extra margin for handling overload.
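As a minimal sketch of this prioritization (the state names, message types, and numeric priority levels are illustrative assumptions, not drawn from the disclosure), a priority could be derived from the extracted elements and the token's state occupancy as:

```python
# Sketch of priority assignment from extracted information elements and
# state occupancy. Priority 0 is highest. The state names and the "BYE"
# message type are hypothetical examples of transitions that return
# resources to the system or that belong to calls already in progress.

TEARDOWN_STATES = {"tearing_down", "releasing"}
SETUP_STATES = {"setting_up", "ringing"}

def assign_priority(elements: dict, state_occupancy: str) -> int:
    """Return a processing priority (0 = highest) for a token."""
    # Transitions that free system resources get precedence.
    if state_occupancy in TEARDOWN_STATES or elements.get("Method") == "BYE":
        return 0
    # Calls already being set up come before brand-new requests.
    if state_occupancy in SETUP_STATES:
        return 1
    # New, unassociated requests get the lowest priority.
    return 2
```

Under this policy, a tear-down message for an in-progress call would be serviced before a request for new service, matching the resource-return rationale above.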
According to another aspect of the invention, a conditional prioritization technique can be employed. Building upon the information that is available to the execution engine as a side effect of prioritization of the messages, the queue length of the lower priority messages can be used to set a dynamic policy for treatment of message processing. Since in this system, messages are partially processed upon reception, this mechanism can be used to selectively discard unassociated messages (i.e., messages that are not part of an existing call flow/computation) and/or unsolicited messages (i.e., messages that are not expected responses).
In a conditional prioritization implementation, as the load on the system reaches a predefined threshold, such as a “yellow zone” queue length of the lower priority messages currently awaiting processing, the system would start to discard messages currently unassociated with a current flow. If the queue length of the lower priority messages continues to grow, reaching a predefined “red zone,” the system begins to discard messages that are associated but are unsolicited. As the queue length shrinks below each of the thresholds described above, the system ceases discarding the related class of messages. In this way, the system will maximize its ability to properly handle extreme spikes in message traffic, minimizing the impact on memory and processing burden on the host system.
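The discard policy above can be sketched as a simple threshold test (the numeric threshold values are assumptions chosen only for illustration; the disclosure does not specify them):

```python
# Sketch of the conditional prioritization discard policy. Past the
# "yellow zone" queue length, unassociated messages are dropped; past
# the "red zone," unsolicited messages are dropped as well, even if
# associated with an existing flow. Threshold values are hypothetical.

YELLOW_ZONE = 100  # low-priority queue length: drop unassociated messages
RED_ZONE = 500     # low-priority queue length: also drop unsolicited ones

def should_discard(queue_len: int, associated: bool, solicited: bool) -> bool:
    """Decide whether to discard a partially parsed incoming message."""
    if queue_len >= RED_ZONE and not solicited:
        return True
    if queue_len >= YELLOW_ZONE and not associated:
        return True
    return False
```

Because the thresholds are tested on every reception, discarding stops automatically once the queue drains below the relevant zone boundary.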
While the present invention is illustrated herein in the context of an exemplary Voice over Internet Protocol Media Gateway Controller, the present invention may be employed in any environment where a number of tokens circulate within a flow graph in a state machine and execute functions during transitions between states, as would be apparent to a person of ordinary skill in the art.
As discussed further below in conjunction with
As discussed hereinafter, in the exemplary embodiment, the functionality of the present invention resides in the personal call agent 110. In this example, the VoIP control is being managed by the personal call agent 110, and the desired behavior is created by managing the messages at the personal call agent 110 without burdening the gateway 130 with additional processing requirements.
As discussed further below in conjunction with
Generally, a number of tokens (corresponding to messages in the exemplary VoIP environment) circulate within the graph 300 and execute functions during transitions 320 between states 310. In the exemplary VoIP environment, the transition functions include sending messages to other tokens and to external devices; and parsing incoming messages from other tokens or external devices.
As shown in
As previously indicated, a number of tokens (e.g., corresponding to messages 520) circulate within the graph 300 and execute functions during transitions between states. In the exemplary VoIP environment, the transition functions include sending messages to other tokens and to external devices; and parsing incoming messages from other tokens or external devices. In the example shown in
During step 620, the process determines if there are additional incoming messages to be processed that are waiting on the server input of the state machine. If there are additional incoming messages, the message is partially parsed during step 625 to recover information that relates the message to a particular token. If a token is not found, then the message is assigned to a base token.
A test is performed during step 630 to determine if the state machine is in an overload condition. If the state machine is in an overload condition, the state of the token is examined during step 635 to determine an appropriate input message handling priority. The message is then placed in a priority queue.
If it is determined during step 640 that the message has an immediate priority, then the message is processed to completion and control is passed to that token. Thereafter, the process 600 breaks out of the while “incoming messages” loop during step 645.
If the message does not have an immediate priority, then the process 600 continues at the top of the while “incoming messages” loop during step 650 to check for more messages waiting on input.
If the state machine is not in an overload condition, the message is processed during step 660 and control is passed to the token. Thereafter, the process 600 breaks out of the while “incoming messages” loop during step 665.
If it is determined during step 675 that either priority queue is full, then the oldest message is selected during step 680 from the highest priority queue, the selected message is processed, and control is passed to the associated token. Thereafter, the process 600 breaks out of the while “incoming messages” loop during step 685.
During step 698, the process 600 selects the oldest message from the highest priority queue, processes the selected message and passes control to the associated token before returning.
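The message-handling loop of steps 620 through 698 might be sketched as follows; the queue structures, the two-level priority mapping, and the token-lookup callback are hypothetical stand-ins for the execution environment's internals, which the disclosure does not specify at the code level:

```python
# Sketch of the message-handling loop (steps 620-698, as described).
# find_token_state and process are caller-supplied callbacks standing in
# for the state machine's token lookup and message processing.

import collections

def priority_for(token_state: str) -> int:
    """Illustrative mapping from token state to queue index (0 = immediate)."""
    return 0 if token_state == "tearing_down" else 1

def handle_messages(incoming, overloaded, find_token_state, process):
    """One pass of the loop: process at most one message, queue the rest."""
    queues = [collections.deque(), collections.deque()]
    while incoming:                              # step 620: messages waiting?
        msg = incoming.popleft()
        state = find_token_state(msg) or "base"  # step 625: partial parse;
                                                 # fall back to a base token
        if not overloaded:
            process(msg, state)                  # step 660: process directly
            break                                # step 665: leave the loop
        prio = priority_for(state)               # step 635: examine the token
        queues[prio].append((msg, state))        # place in a priority queue
        if prio == 0:                            # step 640: immediate priority
            process(*queues[0].popleft())
            break                                # step 645: leave the loop
    for q in queues:                             # steps 675-698: drain the
        if q:                                    # highest-priority non-empty
            process(*q.popleft())                # queue, oldest message first
            break
```

Each invocation therefore services exactly one message when possible, with immediate-priority and non-overload messages processed inline and the remainder deferred to the priority queues.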
While
In addition, while exemplary embodiments of the present invention have been described with respect to processing steps in a software program, as would be apparent to one skilled in the art, various functions may be implemented in the digital domain as processing steps in a software program, in hardware by circuit elements of state machines, or in combination of both software and hardware. Such software may be employed in, for example, a digital signal processor, micro-controller, or general-purpose computer. Such hardware and software may be embodied within circuits implemented within an integrated circuit.
Thus, the functions of the present invention can be embodied in the form of methods and apparatuses for practicing those methods. One or more aspects of the present invention can be embodied in the form of program code, for example, whether stored in a storage medium, loaded into and/or executed by a machine, or transmitted over some transmission medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention. When implemented on a general-purpose processor, the program code segments combine with the processor to provide a device that operates analogously to specific logic circuits. The invention can also be implemented in one or more of an integrated circuit, a digital signal processor, a microprocessor, and a micro-controller.
System and Article of Manufacture Details
As is known in the art, the methods and apparatus discussed herein may be distributed as an article of manufacture that itself comprises a computer readable medium having computer readable code means embodied thereon. The computer readable program code means is operable, in conjunction with a computer system, to carry out all or some of the steps to perform the methods or create the apparatuses discussed herein. The computer readable medium may be a recordable medium (e.g., floppy disks, hard drives, compact disks, memory cards, semiconductor devices, chips, application specific integrated circuits (ASICs)) or may be a transmission medium (e.g., a network comprising fiber-optics, the world-wide web, cables, or a wireless channel using time-division multiple access, code-division multiple access, or other radio-frequency channel). Any medium known or developed that can store information suitable for use with a computer system may be used. The computer-readable code means is any mechanism for allowing a computer to read instructions and data, such as magnetic variations on a magnetic media or height variations on the surface of a compact disk.
The computer systems and servers described herein each contain a memory that will configure associated processors to implement the methods, steps, and functions disclosed herein. The memories could be distributed or local and the processors could be distributed or singular. The memories could be implemented as an electrical, magnetic or optical memory, or any combination of these or other types of storage devices. Moreover, the term “memory” should be construed broadly enough to encompass any information able to be read from or written to an address in the addressable space accessed by an associated processor. With this definition, information on a network is still within a memory because the associated processor can retrieve the information from the network.
It is to be understood that the embodiments and variations shown and described herein are merely illustrative of the principles of this invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention.
Shamilian, John H, Wood, Thomas L