Improved techniques for managing the propagation of data through software modules used by computer systems are disclosed. More particularly, the improved techniques provide improved propagation of data messages to and from synchronization queues, which back up main queues associated with the software modules. A segregated synchronization queue allows segregation of data pertaining to events from data that does not pertain to events. In addition, data can be organized and processed in accordance with different priority levels.
15. A method for managing flow of messages in a computer system having a synchronization queue suitable for holding one or more message blocks while the one or more message blocks are waiting to be processed, said synchronization queue comprising an event queue, used only for storing and dynamically processing within the event queue at least one message block pertaining to an operational event, and a data queue suitable for storing at least one message block not pertaining to an operational event, said method comprising:
(a) determining whether a message pertains to an operational event; (b) placing the message in the event queue of said synchronization queue when said determining (a) determines that the message pertains to an operational event; (c) placing the message in the data queue of said synchronization queue when said determining (a) determines that the message does not pertain to an operational event; and reading a synchronization queue header that provides references to both the event queue and the data queue in order to respectively place messages in the event queue and the data queue.

7. A method for managing flow of messages between a first layer software module and a second layer software module, the first and second layer software modules being arranged in a layered stack, said method comprising:
(a) determining whether a message pertains to an operational event; (b) placing the message in an event queue portion of a synchronization queue associated with the first layer software module when said determining (a) determines that the message pertains to an operational event, wherein said synchronization queue is suitable for storing one or more message blocks while the one or more message blocks are waiting to be processed by a main queue, and wherein the event queue is used to store only messages relating to operational events, which are arranged for dynamic processing within the event queue portion; and (c) placing the message in an appropriate data queue portion of the synchronization queue associated with the first layer software module when said determining (a) determines that the message does not pertain to an operational event, wherein the data queue portion is used to store for dynamic processing within the data queue portion only messages that are to be propagated between the first and second software modules.
19. A computer readable media including computer program code for managing flow of messages between a first software module and a second software module, said computer readable media comprising:
computer program code for receiving a message that is to be stored and dynamically processed in a synchronization queue that stores messages that are waiting to be processed in a main queue that facilitates processing of messages between the first software module and the second software module, wherein the synchronization queue includes at least one event queue and at least one data queue, wherein the at least one event queue is used to store and dynamically process only messages relating to events, and two or more data queues that are not used to store messages relating to events; computer program code for determining whether the message pertains to an operational event; computer program code for placing the message in the event queue associated with the first software module when said computer program code for determining determines that the message pertains to an operational event; and computer program code for placing the message in an appropriate data queue associated with the first software module when said computer program code for determining determines that the message does not pertain to an operational event.
1. A synchronization queue for managing flow of messages between a first layer software module and a second layer software module, the first and second layer software modules being arranged in a layered stack, the synchronization queue comprising:
a first synchronization queue container suitable for storing one or more message blocks while the one or more message blocks are waiting to be processed by a main queue that is used for processing messages between two software modules, the one or more first message blocks of the first synchronization queue container being arranged in accordance with a first desired order for dynamic processing within the first synchronization queue, wherein the first synchronization queue is used to store and process only message blocks relating to operational events associated with managing flow of data between the first layer software module and the second layer software module; a second synchronization queue container suitable for storing one or more message blocks while the one or more message blocks are waiting to be processed by the main queue used for processing messages between the two software modules, the one or more second message blocks of the second synchronization queue container being arranged in accordance with a second desired order for dynamic processing within the second synchronization queue, wherein the second synchronization queue is not used to store any message blocks relating to operational events; and a synchronization queue header providing reference to at least the first and second synchronization queue containers.
2. A data synchronization queue as recited in
3. A data synchronization queue as recited in
4. A data synchronization queue as recited in
5. A data synchronization queue as recited in
wherein the synchronization queue further comprises: a queue control header providing two or more control references to at least the first and second synchronized queue containers, the two or more control references being arranged in accordance with a desired order, wherein each of the first and second synchronized queue containers is referenced by at least one control reference, and wherein the synchronization queue header provides reference to the first and second synchronization queue containers by referencing the queue control header.
6. A data synchronization queue as recited in
8. A method as recited in
determining the appropriate data queue to place the message; and placing the message at the end of the appropriate data queue.
9. A method as recited in
determining whether the appropriate data queue is empty; passing the message to the second layer software module when the determining determines that the appropriate data queue is empty; and placing the message at the end of the appropriate data queue when the determining determines that the data queue is not empty.
10. A method as recited in
obtaining a first message from the appropriate data queue; and passing the first message to the second layer software module.
11. A method as recited in
obtaining a first message from the appropriate data queue; and passing the first message to the second layer software module.
12. A method as recited in
determining whether an event has been posted in the event queue after the first message has been passed to the second layer software module; and processing one or more events when the determining whether an event has been posted in the event queue determines that an event has been posted.
13. A method as recited in
determining whether another message is in the appropriate data queue after the message in the appropriate data queue is sent to the second layer software module; and passing the another message from the appropriate data queue to the second layer software module when the determining determines that the another message is in the appropriate data queue.
14. A method as recited in
determining whether any events are pending in the event queue container.
16. A method as recited in
wherein said synchronization queue comprises a plurality of data queues, and wherein said method further comprises: determining an appropriate data queue of said synchronization queue to place the message when said determining (a) determines that the message does not pertain to an operational event; and placing the message in the appropriate data queue of said synchronization queue when said determining (a) determines that the message does not pertain to an operational event.

17. A method as recited in
determining whether the appropriate data queue is empty; propagating the message when said determining determines that the appropriate data queue is empty; and placing the message at the end of the appropriate data queue when said determining determines that the data queue is not empty.
18. A method as recited in
obtaining a first message from the appropriate data queue; and propagating the message when said determining determines that the appropriate data queue is empty.
20. A computer readable media as recited in
computer program code for determining the appropriate data queue to place the message; and computer program code for placing the message at the end of the appropriate data queue.
21. A computer readable media as recited in
computer program code for determining whether the appropriate data queue is empty; computer program code for passing the message to the second layer software module when the determining determines that the appropriate data queue is empty; and computer program code for placing the message at the end of the appropriate data queue when the determining determines that the data queue is not empty.
22. A computer readable media as recited in
computer program code for obtaining a first message from the appropriate data queue; and computer program code for passing the first message to the second layer software module.
23. A computer readable media as recited in
computer program code for obtaining a first message from the appropriate data queue; and computer program code for passing the first message to the second layer software module.
This application is related to U.S. patent application Ser. No. 09/513,706, entitled "METHOD AND APPARATUS FOR CONCURRENT PROPAGATION OF DATA BETWEEN SOFTWARE MODULES", filed concurrently herewith, and hereby incorporated herein by reference.
1. Field of the Invention
The present invention relates to computing systems and, more particularly, to data communications for computing systems.
2. Description of the Related Art
Recent developments in data communication for computing systems arrange software modules in a layered model. One feature of UNIX System V uses a layered model of software modules, referred to herein as the STREAMS model. The STREAMS model provides a standard way of dynamically building and passing messages through software modules that are placed in layers in a protocol stack. In the STREAMS programming model, the protocol stack can be dynamically changed, e.g., software modules can be added or removed (pushed and popped) at run-time. Broadly speaking, a "stream" generally refers to an instance of a full-duplex path using the model and data communication facilities between a process in user space and a driver which is typically located within the kernel space of the computing system. In other words, a stream can be described as a data path that passes data in both directions between a stream driver in the kernel space and a process in user space.
After the stream head 102 and stream driver 108 are provided, one or more stream modules, such as stream modules 104 and 106, can be pushed on the stream between the stream head 102 and stream driver 108. An application can dynamically add or remove (push or pop) stream modules on the stream stack at run-time. Each of the stream modules 104 and 106 includes a defined set of kernel-level routines and data structures to process data that passes through these modules. For example, stream modules can operate to convert lower case to upper case, or to add network routing information, etc.
As depicted in
It should be noted that in some situations messages cannot be placed in queues 112 and 114. For example, when a stream queue has reached its allotted size, messages can no longer be placed in that queue. As another example, messages cannot be placed in the queues 112 and 114 when other processing threads have acquired their software locks. In such cases, messages are stored in another queue that can serve as a back-up queue, herein referred to as a "synchronization queue". For example, synchronization queues 116 and 118 depicted in
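The back-up behavior described above can be sketched as follows. This is a minimal illustration, not the STREAMS implementation: the helper name `put_or_backup` is hypothetical, and only the queue-full case is modeled (a held software lock would divert the message the same way).

```python
import queue

def put_or_backup(main_q, sync_backup, message):
    """Divert a message to the synchronization (back-up) queue when
    the main queue cannot accept it.  Hypothetical helper; only the
    full-queue case is modeled here."""
    try:
        main_q.put_nowait(message)
        return "main"
    except queue.Full:
        sync_backup.append(message)
        return "backup"

main_q = queue.Queue(maxsize=1)   # tiny allotted size, for illustration
backup = []
first = put_or_backup(main_q, backup, "m1")   # accepted by the main queue
second = put_or_backup(main_q, backup, "m2")  # main queue full -> backed up
```

When the main queue later drains, messages held in the back-up list would be moved forward in order, preserving the stream's message sequence.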
One problem with conventional implementations of the STREAMS model is that messages are intermixed in one synchronization queue regardless of their type. As will be appreciated by those skilled in the art, some of the messages held in synchronization queues 116 and 118 contain data pertaining to operational events (events) that may affect the flow of data and/or how data is to be processed. For example, one such operational event may be related to changing the path of data flow to facilitate re-routing messages through a different physical router. Typically, data pertaining to operational events needs to be processed before other data in the synchronized queue can be processed. However, since data pertaining to operational events is intermixed with data not pertaining to any operational events, the conventional models do not provide an efficient mechanism for identifying and processing data pertaining to events.
Another problem with conventional implementations of the STREAMS model is that there is no mechanism for arranging or prioritizing data held in a synchronized queue. All messages are maintained in one synchronized queue regardless of their relative importance. As a result, messages with less importance may be processed by a high priority processing thread. Thus, the conventional models do not provide an effective mechanism to process data held in synchronized queues in accordance with the relative importance of the data.
In view of the foregoing, there is a need for improved methods for managing data propagation between software modules.
Broadly speaking, the invention relates to techniques for managing propagation of data through software modules used by computer systems. More particularly, the invention obtains improved propagation of data (namely, messages to and from synchronization queues which back up main queues associated with the software modules). In one aspect, the invention provides a segregated synchronization queue which allows segregation of data pertaining to events from data that does not pertain to events. In accordance with another aspect, data can be organized within a synchronization queue and processed in accordance with priorities. The invention is particularly well suited for use with the STREAMS model that uses software modules arranged in a stack to provide data communications.
The invention can be implemented in numerous ways, including a system, an apparatus, a method or a computer readable medium. Several embodiments of the invention are discussed below.
As a synchronization queue for a computer system, one embodiment of the invention includes: a first synchronization queue container suitable for storing one or more message blocks while the one or more message blocks are waiting to be processed, the one or more first message blocks of the first synchronization queue container being arranged in accordance with a first desired order; a second synchronization queue container suitable for storing one or more message blocks while the one or more message blocks are waiting to be processed, the one or more second message blocks of the second synchronization queue container being arranged in accordance with a second desired order; and a synchronization queue header providing reference to at least the first and second synchronization queue containers.
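The structure of that embodiment can be sketched as a small data type. This is an illustrative sketch under assumptions: the class and attribute names are invented here, and the "synchronization queue header" is modeled simply as a dictionary giving reference to both kinds of containers.

```python
from collections import deque

class SegregatedSyncQueue:
    """Illustrative sketch of the segregated synchronization queue:
    an event container, per-priority data containers, and a header
    providing reference to both.  Names are not from the patent."""

    def __init__(self):
        self.event_container = deque()   # first container: event message blocks, FIFO
        self.data_containers = {}        # second container(s): priority level -> deque
        # the "synchronization queue header" is modeled as a reference object
        self.header = {"events": self.event_container,
                       "data": self.data_containers}

sq = SegregatedSyncQueue()
sq.event_container.append("route-change")                 # an operational event
sq.data_containers.setdefault(1, deque()).append("payload")  # ordinary data, priority 1
```

Because the header references both containers, a reader of the queue can reach pending events without scanning through the data blocks.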
As a method for managing flow of messages between a first layer software module and a second layer software module with the first and second layer software modules being arranged in a layered stack, one embodiment of the invention includes the acts of: determining whether a message pertains to an operational event; placing the message in an event queue associated with the first layer software module when it is determined that the message pertains to an operational event; and placing the message in an appropriate data queue associated with the first layer software module when it is determined that the message does not pertain to an operational event.
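The acts of that method embodiment can be sketched as a short routine. This is a hedged sketch, not the claimed implementation: the message shape (a dict with `event` and `priority` keys) and the function name are assumptions made for illustration.

```python
def classify_and_place(sync_queue, message):
    """Sketch of the claimed acts: determine whether the message
    pertains to an operational event, then place it in the event
    queue or in the data queue appropriate for its priority.
    The dict-based message shape is an assumption."""
    if message.get("event", False):                  # determining act
        sync_queue["event_queue"].append(message)    # event placement
        return "event"
    prio = message.get("priority", 0)
    sync_queue["data_queues"].setdefault(prio, []).append(message)  # data placement
    return "data"

sq = {"event_queue": [], "data_queues": {}}
classify_and_place(sq, {"event": True, "body": "reroute"})
classify_and_place(sq, {"priority": 2, "body": "chunk"})
```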
As a computer readable media including computer program code for managing flow of messages between a first software module and a second software module, one embodiment of the invention includes: computer program code for determining whether a message pertains to an operational event; computer program code for placing the message in an event queue associated with the first software module when it is determined that the message pertains to an operational event; and computer program code for placing the message in an appropriate data queue associated with the first software module when said computer program code for determining has determined that the message does not pertain to an operational event.
The advantages of the invention are numerous. Different embodiments or implementations may have one or more of the following advantages. One advantage of the invention is that, within synchronization queues, data pertaining to events can be segregated from data that does not pertain to events. Another advantage of the invention is that data within synchronization queues can be organized and/or processed in accordance with desired priorities. Yet another advantage is that more efficient organization and propagation of data can be achieved.
Other aspects and advantages of the invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrating by way of example the principles of the invention.
The present invention will be readily understood by the following detailed description in conjunction with the accompanying drawings, wherein like reference numerals designate like structural elements, and in which:
The invention pertains to techniques for managing propagation of data through software modules used by computer systems. More particularly, the invention obtains improved propagation of data (namely, messages to and from synchronization queues which back up main queues associated with the software modules). In one aspect, the invention provides a segregated synchronization queue which allows segregation of data pertaining to events from data that does not pertain to events. In accordance with another aspect, data can be organized within a synchronization queue and processed in accordance with priorities. The invention is particularly well suited for use with the STREAMS model that uses software modules arranged in a stack to provide data communications.
Embodiments of the invention are discussed below with reference to
On the other hand, if it is determined at operation 504 that the message does not pertain to an event, then the data flow management 500 proceeds to operation 508 where a determination is made as to whether any events are pending. The determination in operation 508 can be efficiently performed by checking the event queue container of the segregated synchronized queue to determine whether the event queue container contains an event. Typically, the message should not be processed and/or passed to the next software module when events are pending. Accordingly, as noted in operation 510, if one or more events are pending, the message is queued in an appropriate data queue container for future processing. It should be noted that the data queue containers can be organized based on priority levels. Accordingly, the appropriate data queue container for the message can be built or located (if it already exists), allowing the message to be placed in the appropriate one of the data queue containers in accordance with the priority assigned to the message.
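The events-pending gate of operations 508-510 can be sketched as follows. This is an illustrative sketch: the helper names are invented, and `deliver` stands in for the synchronized processing of operation 512, which the description covers in more detail below.

```python
from collections import deque

def handle_data_message(event_container, data_containers, message, priority, deliver):
    """Sketch of operations 508-512: a data message is processed
    only when no events are pending; otherwise it is parked in the
    data container for its priority (built on demand)."""
    if event_container:                                   # operation 508: events pending?
        data_containers.setdefault(priority, deque()).append(message)  # operation 510
        return "queued"
    deliver(message)                                      # operation 512 (simplified)
    return "processed"

events = deque(["pending-event"])
data = {}
out = []
r1 = handle_data_message(events, data, "msg-a", 3, out.append)  # event pending -> queued
events.clear()
r2 = handle_data_message(events, data, "msg-b", 3, out.append)  # no events -> processed
```

The check at operation 508 is constant-time precisely because events live in their own container; no scan of the data blocks is needed.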
However, if it is determined at operation 508 that there are no events pending, the data flow management 500 proceeds to operation 512 where synchronized data processing of the message is performed. The synchronized data processing is performed by (or under the control of) the software module. It should be noted that the processing performed in operation 512 may result in passing the message to the next software module. Alternatively, the processing performed in operation 512 may result in the message being queued in a data queue container and/or another message previously queued in the data queue container being passed to the software module in the next level. The operation 512 is described in greater detail below with respect to
Initially, at operation 602, the appropriate data queue container for the message being processed is determined. In one implementation, each of the data queue containers pertains to a different priority level. A variety of algorithms, such as hashing algorithms, can be utilized to build, maintain and search a list of data queue containers to determine the appropriate one of the data queue containers for the message. Next, at operation 604, a determination is made as to whether the appropriate data queue container for the message is empty. In other words, the determination at operation 604 determines whether one or more messages are already queued in the data queue container appropriate for the message. If the data queue container is empty, the message can be passed to the next software module as noted in operation 606. The synchronized data processing method 600 then ends following operation 606.
On the other hand, if the operation 604 determines that the appropriate data queue container is not empty, the synchronized data processing method 600 proceeds to operation 608 where the message is placed at the end of the appropriate data queue container. Next, at operation 610, the first message in the data queue container is obtained. The first message obtained from the data queue container is then passed to the next software module as noted by operation 612. After the message has been passed to the next software module, a determination is made at operation 614 as to whether any events are pending in the event queue container. It should be noted that an event could have been posted in the event queue container by another thread (or process) than the thread (or process) performing the processing of the synchronized data processing method 600.
Accordingly, the synchronized data processing method 600 proceeds to operation 616 if it is determined at operation 614 that the event queue container contains one or more events. At operation 616, a first event in the event queue container is performed. Next, at operation 618, a determination is made as to whether there are more events pending in the event queue container. If there are one or more events still pending they can be processed by operation 620 until all the events in the event queue container have been processed. When there are no more events in the event queue container, the synchronized data processing method 600 ends.
Alternatively, if the determination at operation 614 determines that there are no events pending, the synchronized data processing method 600 proceeds to operation 622 where a determination is made as to whether there are more messages queued in the data queue container. When there are no more messages in the data queue container, the data queue container can be removed from the list of data queue containers of the segregated synchronized queue at operation 624. Following the operation 624, the synchronized data processing method 600 ends. On the other hand, if there are more messages in the data queue container, the synchronized data processing method 600 proceeds back to operation 610 where the next message is obtained from the head of the data queue container.
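The whole of method 600 described above can be sketched in one routine. This is a hedged sketch under assumptions: the function and parameter names are invented, `pass_downstream` stands in for passing a message to the next software module, and `run_event` stands in for performing an event.

```python
from collections import deque

def synchronized_data_processing(message, priority, data_containers,
                                 event_container, pass_downstream, run_event):
    """Sketch of method 600.  An empty container passes the message
    straight downstream; otherwise the message is queued at the tail
    and messages are drained head-first, pausing to run any events
    posted in the meantime (possibly by another thread)."""
    dq = data_containers.setdefault(priority, deque())   # operation 602
    if not dq:                                           # operation 604
        pass_downstream(message)                         # operation 606
        return
    dq.append(message)                                   # operation 608
    while True:
        pass_downstream(dq.popleft())                    # operations 610-612
        if event_container:                              # operation 614
            while event_container:                       # operations 616-620
                run_event(event_container.popleft())
            return
        if not dq:                                       # operation 622
            del data_containers[priority]                # operation 624: drop empty container
            return

data = {0: deque(["x", "y"])}   # two messages already queued at priority 0
events = deque()
sent, ran = [], []
synchronized_data_processing("z", 0, data, events, sent.append, ran.append)
```

Here "z" lands behind "x" and "y", the three messages drain in FIFO order, and the emptied container is removed from the list, matching operations 608-624.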
The system bus architecture of computer system 701 is represented by arrows 767. However, these arrows are illustrative of any interconnection scheme serving to link the subsystems. For example, a local bus could be utilized to connect the central processor to the system memory and display adapter. Computer system 701 shown in
The invention can use a combination of hardware and software components. The software can be embodied as computer readable code (or computer program code) on a computer readable medium. The computer readable medium is any data storage device that can store data which can thereafter be read by a computer system. Examples of the computer readable medium include read-only memory, random-access memory, CD-ROMs, magnetic tape, and optical data storage devices. The computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
The advantages of the invention are numerous. Different embodiments or implementations may have one or more of the following advantages. One advantage of the invention is that, within synchronization queues, data pertaining to events can be segregated from data that does not pertain to events. Another advantage of the invention is that data within synchronization queues can be organized and/or processed in accordance with desired priorities. Yet another advantage is that more efficient organization and propagation of data can be achieved.
The many features and advantages of the present invention are apparent from the written description, and thus, it is intended by the appended claims to cover all such features and advantages of the invention. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the invention to the exact construction and operation as illustrated and described. Hence, all suitable modifications and equivalents may be resorted to as falling within the scope of the invention.
Lodrige, Paul F., Fishel, Randy S.