A method and apparatus are provided for reducing latency associated with processing events of a hardware interrupt. Send and receive events share the same hardware interrupt. A receive handler and a separate send handler are provided to simultaneously process completion of a send event and a receive event. In addition, separate queues are provided to communicate receipt of an event to the respective interrupt handler.
1. A method for processing data, comprising:
receiving an event from a processor that generates a hardware interrupt, wherein the received event is in the form of one of a receive event and a send event;
limiting placement of the receive event in a receive queue, and limiting placement of the send event in a send queue;
invoking an interrupt handler to complete processing of data associated with said event, said interrupt handler having a send handler to process send data in the send queue and associated with completion of a send transaction and a receive handler to process receive data in the receive queue and associated with completion of a receive transaction, wherein said interrupt handler supports simultaneous and separate processing of send process data and receive process data; and
the receive and send handlers operating on different processor threads.
15. An article comprising:
a tangible computer readable carrier including computer program instructions configured to cause a computer to process data comprising:
instructions to receive an event from a processor that generates a hardware interrupt, wherein a receive event is assigned to a receive event queue and a send event is assigned to a send event queue separate from said receive event queue; and
instructions to invoke an interrupt handler to complete processing of data associated with said event, said interrupt handler having a send handler to process send data in the send event queue associated with completion of a send transaction and a receive handler to process receive data in the receive event queue associated with completion of a receive transaction, wherein said interrupt handler supports simultaneous and separate processing of send process data and receive process data; and
the receive handler operating on a first processor thread and the send handler operating on a second processor thread.
8. A computer system comprising:
a processor operatively connected to an event manager;
the processor to receive an event;
the event manager to receive the event from said processor and to generate a hardware interrupt in response to receipt of said event, wherein each event is assigned to one of a receive event queue and a send event queue, wherein the receive event queue is limited to receipt of a receive event and the send event queue is limited to receipt of a send event; and
an interrupt handler to complete processing of data associated with said event received from said event manager, said interrupt handler having a send handler to process send data in the send event queue associated with completion of a send transaction, and a receive handler to process receive data in the receive event queue associated with completion of a receive transaction, wherein said interrupt handler supports simultaneous and separate processing of send process data and receive process data; and
the receive handler operating on a first processor thread and the send handler operating on a second processor thread.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
9. The system of
10. The system of
11. The system of
12. The system of
13. The system of
14. The system of
16. The article of
17. The article of
18. The article of
19. The article of
20. The article of
1. Technical Field
This invention relates to an interrupt handler for a hardware interrupt. More specifically, the invention relates to separating management of tasks in the interrupt handler based upon categorization of the task.
2. Description of the Prior Art
An interrupt is a signal informing a CPU that an event has occurred. The interrupt can be in the form of a software interrupt or a hardware interrupt. In one embodiment, the interrupt is a digital signal to a CPU that indicates some event has happened. When the CPU receives an interrupt, it takes a specified action. For example, an interrupt can cause the CPU to suspend an interruptible task to temporarily service the interrupt. Before the CPU can respond to an interrupt, it must reach an interruptible state in its processing. For example, if the processor is writing to memory, it must wait until the write is completed before processing the interrupt. Once the CPU detects the interrupt, it must save all of the information it will need to resume normal processing of the interrupted task once the interrupt is over. An interrupt handler is a callback subroutine in an operating system or device driver whose execution is triggered by the receipt of an interrupt. After the event that caused the interrupt is complete, an interrupt service routine resumes the interrupted task from where it had previously been suspended.
An event placed in the mixed interrupt event queue awaits processing. As events are processed, they are placed in a completion queue so that an interrupt handler can complete processing of the event data. Both completed send and receive events are placed in the completion queue in the order in which they are processed from the mixed interrupt event queue (108). Presence of send or receive data in the completion queue invokes an interrupt handler, which uses a single thread to periodically poll the completion queue to determine whether there are items present to be processed (110). In one embodiment, the thread that monitors the completion queue is inactive, i.e. sleeps, when the completion queue is empty and is woken by placement of any new data in the completion queue. If it is determined that there is no data present in the completion queue, the interrupt handler waits a preset time interval (112) before it returns to step (110) to poll the completion queue again, or the thread goes to sleep and waits for new data to arrive in the completion queue. However, if it is determined that there is an item present in the completion queue, it must be determined whether the next data item is associated with an originating send event (114). In one embodiment, the determination at step (114) may instead ask whether the next data item is associated with a receive event. A positive determination at step (114) indicates that the send has completed, and a send handler in the interrupt handler is invoked to process the send data (116). The send handler needs to find the buffer pointer of the send data (118), validate the data packet (120), and release the data packet (122). Alternatively, if it is determined at step (114) that the next data item is not associated with completion data from a send event, then by default it is associated with completion data from a receive event (124), and the receive handler in the interrupt handler is invoked to process the receive data (126). The receive handler needs to find the buffer pointer for the data being received (128), allocate a new buffer for a new direct memory access (130), process the data packet (132), and pass the data packet to upper-layer protocols (134). Accordingly, a single interrupt handler polls a single completion queue to process data from both send and receive events, and forwards the next item in the queue to the appropriate send or receive handler within the interrupt handler.
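For illustration only, the single-queue, single-thread flow just described can be sketched in user-space C. The names used below (completion_t, handle_send_done, handle_recv_done, interrupt_handler_thread) are hypothetical and are not taken from any particular driver; a real interrupt handler would run in kernel context rather than as a plain polling thread, so this is a minimal sketch of the control flow under those simplifying assumptions.

```c
/* Minimal sketch of the prior-art scheme: ONE completion queue shared by
 * send and receive completions, drained by a SINGLE polling thread.
 * All names here (completion_t, handle_send_done, ...) are illustrative. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

typedef struct completion {
    bool  is_send;             /* true: completion of an originating send event */
    void *buffer;              /* buffer pointer located in steps (118)/(128)   */
    struct completion *next;
} completion_t;

static completion_t   *queue_head;                 /* the single completion queue */
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

static void handle_send_done(completion_t *c)      /* steps (116)-(122) */
{
    /* find the buffer pointer, validate the packet, release the packet */
    printf("send completion: release buffer %p\n", c->buffer);
}

static void handle_recv_done(completion_t *c)      /* steps (126)-(134) */
{
    /* find the buffer, allocate a replacement DMA buffer, process the
     * packet, and pass it to the upper-layer protocol */
    printf("receive completion: deliver buffer %p\n", c->buffer);
}

void *interrupt_handler_thread(void *arg)          /* the single polling thread */
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&queue_lock);
        completion_t *c = queue_head;              /* step (110): poll the queue */
        if (c)
            queue_head = c->next;
        pthread_mutex_unlock(&queue_lock);

        if (!c) {                                  /* step (112): back off/sleep */
            usleep(1000);
            continue;
        }
        if (c->is_send)                            /* step (114): classify item  */
            handle_send_done(c);
        else
            handle_recv_done(c);
        /* Items are handled strictly one at a time, so send and receive
         * completions serialize behind each other in this shared queue. */
    }
}
```

The limitation is visible in the loop: completions are dequeued one at a time from a shared queue, so send completions and receive completions wait behind one another.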
As shown above, the prior art process for processing data associated with a hardware interrupt has a single queue for executing both send and receive events, and another single queue for completing processing of data associated with such events. In addition, the interrupt handler is limited to a single thread for polling the completion queue. This structure restricts the interrupt handler to initiating data processing from items in the completion queue one item at a time, whether the items are associated with completion of data processing from a receive event or a send event. Accordingly, there is a need to accelerate processing of data in the completion queue by separating the processing of completion data based upon whether it is associated with an originating send event or a receive event.
This invention comprises a method and apparatus for improving operation of an interrupt handler by separating task items based upon their respective categorization.
In one aspect of the invention, a method is provided for processing data. An event is received from a processor that generates a hardware interrupt. An interrupt handler is invoked to complete processing of data associated with the event. The interrupt handler has a send handler to process data associated with completion of a send transaction and a receive handler to process data associated with completion of a receive transaction. The interrupt handler supports simultaneous and separate processing of send process data and receive process data.
In another aspect of the invention, a computer system is provided with a processor in communication with an operating system. An event manager in communication with the operating system is provided to receive an event from the processor and to generate a hardware interrupt in response to receipt of the event. An interrupt handler in communication with the event manager is provided to complete processing of data associated with the event. The interrupt handler has a send handler to process data associated with completion of a send transaction and a receive handler to process data associated with completion of a receive transaction. The interrupt handler supports simultaneous and separate processing of send process data and receive process data.
In yet another aspect of the invention, an article is provided with a tangible computer readable carrier including computer program instructions configured to cause a computer to process data. Instructions are provided to receive an event from a processor that generates a hardware interrupt. Instructions are also provided to invoke an interrupt handler to complete processing of data associated with the event. The interrupt handler has a send handler to process data associated with completion of a send transaction and a receive handler to process data associated with completion of a receive transaction. The interrupt handler supports simultaneous and separate processing of send process data and receive process data.
Other features and advantages of this invention will become apparent from the following detailed description of the presently preferred embodiment of the invention, taken in conjunction with the accompanying drawings.
Two separate queues are provided for completion of data processing for tasks associated with an interrupt event. More specifically, the device queues include a send completion queue for processing data associated with a send interrupt event, and a receive completion queue for processing data associated with a receive interrupt event. In addition, the interrupt handler is provided with a send handler to manage send data and a receive handler to manage receive data. The send handler and the receive handler have separate threads to manage the respective device queues: the send handler has a thread to monitor items in the send completion queue, and the receive handler has a separate thread to monitor items in the receive completion queue. Each thread pulls items from its completion queue to process the associated data packets. Separation of the completion queues, together with separate threads to manage each of them, enables the interrupt handler to process a send event and a receive event simultaneously.
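By contrast with the prior-art sketch above, the separation described in the preceding paragraph can be sketched as two independent completion queues, each drained by its own thread. Again this is an illustrative user-space C sketch rather than driver code; the names (cqueue_t, complete, next_completion, send_handler, recv_handler) are assumptions made for the example, and a condition variable stands in for the "sleep until new data arrives" behavior described elsewhere in this document.

```c
/* Sketch of the separated scheme: a dedicated completion queue and a
 * dedicated handler thread per direction. All names are illustrative. */
#include <pthread.h>
#include <stddef.h>

typedef struct item {
    void        *buffer;       /* data packet buffer for this completion */
    struct item *next;
} item_t;

typedef struct {
    item_t         *head, *tail;   /* FIFO: completions leave in arrival order */
    pthread_mutex_t lock;
    pthread_cond_t  nonempty;      /* handler thread sleeps here when empty    */
} cqueue_t;

static cqueue_t send_cq = { NULL, NULL, PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER };
static cqueue_t recv_cq = { NULL, NULL, PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER };

/* Route a completed transaction to the queue for its direction and wake
 * only that queue's handler thread. */
void complete(cqueue_t *q, item_t *it)
{
    pthread_mutex_lock(&q->lock);
    it->next = NULL;
    if (q->tail) q->tail->next = it; else q->head = it;
    q->tail = it;
    pthread_cond_signal(&q->nonempty);
    pthread_mutex_unlock(&q->lock);
}

static item_t *next_completion(cqueue_t *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->head == NULL)                        /* sleep until new data arrives */
        pthread_cond_wait(&q->nonempty, &q->lock);
    item_t *it = q->head;
    q->head = it->next;
    if (q->head == NULL) q->tail = NULL;
    pthread_mutex_unlock(&q->lock);
    return it;
}

void *send_handler(void *arg)      /* validates and releases send buffers */
{
    (void)arg;
    for (;;) {
        item_t *it = next_completion(&send_cq);
        (void)it;                  /* ... validate and release it->buffer ... */
    }
}

void *recv_handler(void *arg)      /* refills DMA buffers, hands packets upward */
{
    (void)arg;
    for (;;) {
        item_t *it = next_completion(&recv_cq);
        (void)it;                  /* ... process it->buffer, pass it up ...   */
    }
}
```

Because each handler thread blocks only on its own queue, a burst of receive completions cannot delay a pending send completion, and each queue still preserves the first-in, first-out ordering of its own events.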
Events in the separate send and receive queues are processed in the order in which they are placed in the respective queue. The send queue and the receive queue are independent of one another, and events in the two queues are processed separately by separate threads.
At the same time as the receive handler in the interrupt handler is managing completion of processing of receive data, the separate send handler may be managing completion of processing of send data.
Embodiments within the scope of the present invention also include articles of manufacture comprising program storage means having encoded therein program code to facilitate processing of an interrupt. The program code may include event manager instructions responsible for receipt of an event from the processor (704), and event handler instructions to initiate program code of the interrupt handler to generate a hardware interrupt with the operating system (706) in response to receipt of the event. Similarly, the program code may include instructions responsible for completing processing of data associated with the event. Program code is provided in the event manager to process and separate a send event and a receive event. The event manager program code associated with a send event processes data associated with completion of a send transaction. Similarly, the event manager program code associated with a receive event processes data associated with completion of a receive transaction. Such program storage means can be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such program storage means can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired program code means and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included in the scope of the program storage means.
The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, random access memory (RAM), read-only memory (ROM), a rigid magnetic disk, and an optical disk. Current examples of optical disks include compact disk read-only memory (CD-ROM), compact disk read/write (CD-R/W) and DVD.
A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks.
The software implementation can take the form of a computer program product accessible from a computer-useable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.
The configuration of the interrupt handler with separate send and receive handlers enables send and receive tasks to be processed simultaneously and separately when they share the same hardware interrupt. This helps reduce the latency associated with processing data for both send and receive tasks. In addition, the provision of separate device queues and event queues enables the send and receive tasks to be separated and organized prior to forwarding to the respective handlers. Similarly, the separation of the queues enables tasks associated with receive events to be processed separately from tasks associated with send events. Accordingly, separation of the queues and interrupt handlers reduces the latency of processing send and receive events, and it also increases both unidirectional and bidirectional throughput bandwidth.
It will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without departing from the spirit and scope of the invention. In particular, the send and receive threads of the send and receive interrupt handlers, respectively, may run on different CPUs where the receive interrupt handler would be configured to complete processing of receive events, and the send interrupt handler would be configured to complete processing of send events. In one embodiment, the receive interrupt thread may be invoked on the same CPU in which the hardware interrupt is invoked, and the send interrupt thread may be scheduled for execution on a different CPU. Accordingly, the scope of protection of this invention is limited only by the following claims and their equivalents.
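As one concrete way to picture the CPU placement just described, the following user-space C sketch pins a receive handler thread to the CPU that services the hardware interrupt and a send handler thread to a different CPU, using the GNU pthread_setaffinity_np extension. The function names and the irq_cpu/other_cpu parameters are hypothetical; a kernel-resident implementation would use the operating system's own interrupt and scheduling facilities instead.

```c
/* Sketch: pin the receive handler thread to the CPU that services the
 * hardware interrupt and the send handler thread to a different CPU.
 * recv_handler/send_handler are the hypothetical thread functions from the
 * earlier sketch; irq_cpu would come from the platform in a real system. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

static void pin_to_cpu(pthread_t t, int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(t, sizeof(set), &set);  /* GNU extension */
}

void start_handlers(void *(*recv_handler)(void *), void *(*send_handler)(void *),
                    int irq_cpu, int other_cpu)
{
    pthread_t recv_tid, send_tid;
    pthread_create(&recv_tid, NULL, recv_handler, NULL);
    pthread_create(&send_tid, NULL, send_handler, NULL);
    pin_to_cpu(recv_tid, irq_cpu);    /* receive work stays on the interrupt CPU */
    pin_to_cpu(send_tid, other_cpu);  /* send completions proceed in parallel    */
}
```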