This disclosure is directed to a fast and scalable concurrent queuing system. A device may comprise, for example, at least a memory module and a processing module. The memory module may be to store a queue comprising at least a head and a tail. The processing module may be to execute at least one thread desiring to enqueue at least one new node to the queue, enqueue the at least one new node to the queue, a first state being observed based on information in the tail identifying a predecessor node when the at least one new node is enqueued, observe a second state based on the predecessor node, determine if the predecessor node has changed based on comparing the first state to the second state, and set ordering in the queue based on the determination.
8. A method, comprising:
executing at least one thread desiring to enqueue at least one new node to a queue including at least a head and a tail, the head, tail and at least one new node each comprising at least a pointer and a one-bit node counter, the head pointer including the address of a node situated first in the queue, the tail pointer including the address of a node situated last in the queue, and the at least one new node pointer including the address of the tail;
enqueuing the at least one new node to the queue, a first state being observed based on the tail pointer and tail counter when the at least one new node is enqueued;
observing a second state based on the predecessor node;
determining if the predecessor node has changed based on comparing the first state to the second state;
setting ordering in the queue based on the determination;
executing at least one thread desiring to dequeue a node from the queue;
reading the head pointer and head node counter;
determining if the head pointer is pointing at the tail;
setting a new head pointer to point at the next node in the queue if it is determined that the head is not pointing at the tail;
setting the address of the head pointer equal to the address of the new head pointer; and
taking corrective action if the head pointer is pointing at the tail.
1. A device, comprising:
a memory module to store a queue comprising at least a head and a tail; and
a processing module to:
execute at least one thread desiring to enqueue at least one new node to the queue;
enqueue the at least one new node to the queue, a first state being observed based on information in the tail identifying a predecessor node when the at least one new node is enqueued;
observe a second state based on the predecessor node;
determine if the predecessor node has changed based on comparing the first state to the second state;
set ordering in the queue based on the determination; and
wherein:
the head, tail and at least one new node each comprise at least a pointer and a one-bit node counter, the head pointer including the address of a node situated first in the queue, the tail pointer including the address of a node situated last in the queue, and the at least one new node pointer including the address of the tail; and
the processing module is further to:
execute at least one thread desiring to dequeue a node from the queue;
read the head pointer and head node counter;
determine if the head pointer is pointing at the tail;
set a new head pointer to point at the next node in the queue if it is determined that the head is not pointing at the tail;
set the address of the head pointer equal to the address of the new head pointer; and
take corrective action if the head pointer is pointing at the tail.
15. At least one machine-readable storage medium having stored thereon, individually or in combination, instructions that when executed by one or more processors result in the following operations comprising:
executing at least one thread desiring to enqueue at least one new node to a queue including at least a head and a tail, the head, tail and at least one new node each comprising at least a pointer and a one-bit node counter, the head pointer including the address of a node situated first in the queue, the tail pointer including the address of a node situated last in the queue, and the at least one new node pointer including the address of the tail;
enqueuing the at least one new node to the queue, a first state being observed based on the tail pointer and tail counter when the at least one new node is enqueued;
observing a second state based on the predecessor node;
determining if the predecessor node has changed based on comparing the first state to the second state;
setting ordering in the queue based on the determination;
executing at least one thread desiring to dequeue a node from the queue;
reading the head pointer and head node counter;
determining if the head pointer is pointing at the tail;
setting a new head pointer to point at the next node in the queue if it is determined that the head is not pointing at the tail;
setting the address of the head pointer equal to the address of the new head pointer; and
taking corrective action if the head pointer is pointing at the tail.
2. The device of
3. The device of
4. The device of
5. The device of
6. The device of
7. The device of
9. The method of
incrementing the node counter of the at least one new node; and
setting the pointer of the at least one new node to the tail address.
10. The method of
11. The method of
setting a predecessor address associated with the at least one new node to the node indicated by the temporary node pointer.
12. The method of
13. The method of
14. The method of
16. The medium of
incrementing the node counter of the at least one new node; and
setting the pointer of the at least one new node to the tail address.
17. The medium of
18. The medium of
setting a predecessor address in the at least one new node to the node indicated by the temporary node pointer.
19. The medium of
20. The medium of
21. The medium of
|
The present disclosure relates to data processing, and more particularly, to systems for enqueuing and dequeuing data in a manner that provides traceability for possible data changes.
In data processing, the manner in which a data processing device may place information into a data processing queue (e.g., enqueuing) and remove information from the data processing queue (e.g., dequeuing) may have a substantial impact on the maximum speed at which the data processing device may operate. The concurrent queuing of data permits a plurality of processor threads (e.g., a sequence of programmed instructions executed by the data processor) to enqueue and dequeue information from the same processing queue at substantially the same time. While the ability to process data in this manner allows for multiple queue operations to be performed in parallel, and thus for data to be processed more quickly, it is not without some inherent issues.
For example, at least one issue in existing concurrent queuing schemes is the “A-B-A” problem. An example A-B-A problem scenario initiates with a particular data processing queue location (e.g., node) containing a value “A” when first read by a data processing device. While the value of the node may subsequently change (e.g., to “B”), it may still appear to be “A” (or may even change back to “A”) when the transition goes unnoticed by the data processing device. For example, a node may be removed from the queue, deleted, and then replaced by a new node that appears to be original node, which may occur frequently in concurrent queuing. This quick transition may occur because, as stated above, threads in concurrent queuing may enqueue and dequeue nodes at substantially the same time. Not being aware of changes in the data processing queue may result in, for example, errors, corrupted data processing results, delays in receiving data processing results due to the need to reprocess, etc. Thus, any increases in speed that may be realized from concurrent queuing may be decreased or even nullified by the overall negative impact in performance due to the A-B-A problem and/or other similar processing-related issues.
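The counter-tagging remedy implied above can be illustrated with a minimal sketch: an observation pairs the node address with a version counter, so a node that is dequeued and re-inserted at the same address still registers as changed. All names here are hypothetical and are not taken from the disclosure.

```c
#include <stdint.h>

/* An observation records a node's address together with a version
 * counter captured at the same moment. */
typedef struct {
    uintptr_t addr;   /* node address at the time of observation    */
    unsigned  count;  /* version counter at the time of observation */
} observation;

/* Two observations match only when BOTH the address and the counter
 * match; comparing addresses alone is exactly what lets an A-B-A
 * transition go unnoticed. */
static int unchanged(observation first, observation second) {
    return first.addr == second.addr && first.count == second.count;
}
```

If a node is removed and a new node reappears at the same address, the intervening operation will have advanced the counter, so the second observation no longer matches the first even though the raw addresses are identical.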
Features and advantages of various embodiments of the claimed subject matter will become apparent as the following Detailed Description proceeds, and upon reference to the Drawings, wherein like numerals designate like parts, and in which:
Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications and variations thereof will be apparent to those skilled in the art.
This disclosure is directed to a fast and scalable concurrent queuing system. In general, a queue in a device may include at least a head and a tail and possibly one or more nodes, wherein the head, tail and nodes include at least a pointer and a node-counter. When a thread desires to add a node to the queue, two observations may be made with respect to the state of a predecessor node of the newly-added node. State, as referenced herein, may pertain to the values of variables (e.g., pointer addresses, counter values, etc.) in a node, or at least associated with the node, at the time an observation is made. A first observation may be made while adding the new node and a second observation may be made after the new node is added. A determination may then be made, based on the two observations, as to whether the predecessor node has changed (e.g., been dequeued) after adding the new node to the queue. If it is determined that the predecessor node has changed, then the head pointer may be set to point to the new node. If it is determined that the predecessor has not changed, the pointer of the predecessor may be updated to point to the new node. An example dequeuing function may include updating the head address to point to the next node in the queue if it is determined that the queue contains at least one other node.
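The enqueue flow described above can be sketched in simplified, single-threaded form. Field names, the empty-queue convention (head and tail pointing at each other), and the exact shape of the state snapshot are assumptions for illustration only; the disclosure performs the swap and the comparison/ordering steps atomically, which this sketch does not.

```c
#include <stddef.h>

/* Each of the head, tail and nodes carries a pointer and a counter. */
typedef struct node {
    struct node *ptr;    /* the node's pointer  */
    unsigned     count;  /* the node's counter  */
} node;

typedef struct {
    node head;           /* head.ptr: first node in the queue */
    node tail;           /* tail.ptr: last node in the queue  */
} queue;

static void enqueue(queue *q, node *n) {
    n->count++;                       /* increment the new node's counter          */
    n->ptr = &q->tail;                /* new node's pointer holds the tail address */

    /* First observation: the tail identifies the predecessor, whose
     * state (pointer + counter) is snapshotted while enqueuing. */
    node *pred = q->tail.ptr;
    node  snap = *pred;
    q->tail.ptr = n;                  /* the new node is now last in the queue */

    /* ...a concurrent dequeue could remove the predecessor here... */

    /* Second observation: compare the predecessor's current state to
     * the snapshot to determine whether it changed (e.g., was dequeued). */
    if (pred->ptr != snap.ptr || pred->count != snap.count) {
        q->head.ptr = n;              /* predecessor changed: new node is first    */
    } else {
        pred->ptr = n;                /* predecessor unchanged: link it forward    */
    }
}
```

With the empty-queue convention assumed here, enqueuing into an empty queue takes the "unchanged" branch against the head itself, which correctly installs the new node as the first node.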
In one embodiment there is a device comprising, for example, at least a memory module and a processing module. The memory module may be to store a queue comprising at least a head and a tail. The processing module may be to execute at least one thread desiring to enqueue at least one new node to the queue, enqueue the at least one new node to the queue, a first state being observed based on information in the tail identifying a predecessor node when the at least one new node is enqueued, observe a second state based on the predecessor node, determine if the predecessor node has changed based on comparing the first state to the second state, and set ordering in the queue based on the determination.
In the same or a different embodiment, the head, tail and at least one new node may each comprise, for example, at least a pointer and a one-bit node counter. For example, the head pointer may include the address of a node situated first in the queue, the tail pointer including the address of a node situated last in the queue and the at least one new node pointer including the address of a node situated after the at least one new node in the queue. The processing module may then be to increment the node counter of the at least one new node and to set the pointer of the at least one new node to the tail address. In one embodiment, in observing the first state the processing module may be to set the pointer and node counter in a temporary node equal to the tail pointer and tail node counter. The processing module may then be to set a predecessor address associated with the at least one new node to the node indicated by the temporary node pointer.
In determining if the predecessor has changed, the processing module may be to compare at least the node pointer and node counter of the predecessor node to the node pointer and node counter of the temporary node. In setting ordering in the queue the processing module may be to set the head pointer to the address of the at least one new node if it is determined that the node pointer and node counter for the predecessor node are different than the node pointer and node counter for the temporary node. Alternatively, if it is determined that the node pointer and node counter for the predecessor node are the same as the node pointer and node counter for the temporary node, in setting ordering in the queue the processing module may be to set the pointer of the predecessor node to the address of the at least one new node. In at least one embodiment the comparison and subsequent order-setting operations may be performed as single/atomic operations.
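One way to make the comparison and order-setting a single atomic step, sketched below under assumed names, is to pack the predecessor's pointer and counter into one machine word so that "compare state, then link the new node" collapses into one compare-and-swap. The packing scheme (counter in the low address bit) anticipates the alignment discussion later in this disclosure and is an assumption here.

```c
#include <stdatomic.h>
#include <stdint.h>

/* A node's state is one packed word: pointer bits plus a one-bit
 * counter in the (otherwise unused) low bit. */
typedef struct node { _Atomic uintptr_t state; } node;

static uintptr_t pack(node *n, unsigned bit) {
    return (uintptr_t)n | (bit & 1u);
}

/* Atomically: if pred's state still equals the first observation,
 * replace it with a pointer to the new node. Returns 1 on success
 * (predecessor unchanged and linked), 0 if the predecessor changed. */
static int link_if_unchanged(node *pred, uintptr_t observed,
                             node *n, unsigned bit) {
    uintptr_t expected = observed;
    return atomic_compare_exchange_strong(&pred->state, &expected,
                                          pack(n, bit));
}
```

Because both the comparison against the first observation and the installation of the new pointer happen in one compare-and-swap, no other thread can dequeue the predecessor between the check and the update.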
In the same or a different embodiment, the processing module may further be to execute at least one thread desiring to dequeue a node from the queue, read the head pointer and head node counter, determine if the head pointer is pointing at the tail, set a new head pointer to point at the next node in the queue if it is determined that the head is not pointing at the tail, set the address of the head pointer equal to the address of the new head pointer, and take corrective action if the head pointer is pointing at the tail. An example method consistent with embodiments of the present disclosure may include executing at least one thread desiring to enqueue at least one new node to a queue including at least a head and a tail, the head, tail and at least one new node each comprising at least a pointer and a one-bit node counter, the head pointer including the address of a node situated first in the queue, the tail pointer including the address of a node situated last in the queue, and the at least one new node pointer including the address of the tail, enqueue the at least one new node to the queue, a first state being observed based on information in the tail identifying a predecessor node when the at least one new node is enqueued, observe a second state based on the predecessor node, determine if the predecessor node has changed based on comparing the first state to the second state, and set ordering in the queue based on the determination.
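The dequeue flow just described can likewise be sketched in single-threaded form. Structure names match the enqueue discussion above but remain assumptions; the disclosure's implementation would perform the head update atomically, and "corrective action" is reduced here to reporting an empty queue.

```c
#include <stddef.h>

typedef struct node {
    struct node *ptr;    /* the node's pointer  */
    unsigned     count;  /* the node's counter  */
} node;

typedef struct {
    node head;           /* head.ptr: first node in the queue */
    node tail;           /* tail.ptr: last node in the queue  */
} queue;

static node *dequeue(queue *q) {
    node first = q->head;             /* read head pointer and head node counter */
    if (first.ptr == &q->tail)
        return NULL;                  /* head points at the tail: queue is empty,
                                       * so take corrective action (here: report it) */
    node *out      = first.ptr;
    node *new_head = out->ptr;        /* new head pointer: next node in the queue */
    q->head.ptr    = new_head;        /* head now holds the new head address      */
    return out;
}
```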
Device 200 may comprise system module 202 configured to manage device operations. System module 202 may include, for example, processing module 204, memory module 206, power module 208, user interface module 210 and communication interface module 212 that may be configured to interact with communication module 214. While communication module 214 has been illustrated as separate from system module 202, this configuration is merely for the sake of explanation herein. It is also possible for some or all of the functionality associated with communication module 214 to be incorporated within system module 202.
In device 200, processing module 204 may comprise one or more processors situated in separate components, or alternatively, may comprise one or more processing cores embodied in a single component (e.g., a multi-core configuration) and any processor-related support circuitry (e.g., bridging interfaces, etc.). Example processors may include, but are not limited to, various x86-based microprocessors available from the Intel Corporation including those in the Pentium, Xeon, Itanium, Celeron, Atom, Core i-series product families, Advanced RISC (e.g., Reduced Instruction Set Computing) Machine or "ARM" processors, etc. Examples of support circuitry may include chipsets (e.g., Northbridge, Southbridge, etc. available from the Intel Corporation) configured to provide an interface through which processing module 204 may interact with other system components that may be operating at different speeds, on different buses, etc. in device 200. Some or all of the functionality commonly associated with the support circuitry may also be included in the same physical package as the processor (e.g., a System-on-Chip (SoC) package like the Sandy Bridge integrated circuit available from the Intel Corporation).
Processing module 204 may be configured to execute various instructions in device 200. Instructions may include program code configured to cause processing module 204 to perform activities related to reading data, writing data, processing data, formulating data, converting data, transforming data, etc. Information (e.g., instructions, data, etc.) may be stored in memory module 206. Memory module 206 may comprise random access memory (RAM) or read-only memory (ROM) in a fixed or removable format. RAM may include memory configured to hold information during the operation of device 200 such as, for example, static RAM (SRAM) or Dynamic RAM (DRAM). ROM may include memories such as bios memory configured to provide instructions when device 200 activates, programmable memories such as electronic programmable ROMs (EPROMS), Flash, etc. Other fixed and/or removable memory may include magnetic memories such as, for example, floppy disks, hard drives, etc., electronic memories such as solid state flash memory (e.g., embedded multimedia card (eMMC), etc.), removable memory cards or sticks (e.g., micro storage device (uSD), USB, etc.), optical memories such as compact disc-based ROM (CD-ROM), etc.
Power module 208 may include internal power sources (e.g., a battery) and/or external power sources (e.g., electromechanical or solar generator, power grid, fuel cell, etc.), and related circuitry configured to supply device 200 with the power needed to operate. User interface module 210 may include circuitry configured to allow users to interact with device 200 such as, for example, various input mechanisms (e.g., microphones, switches, buttons, knobs, keyboards, speakers, touch-sensitive surfaces, one or more sensors configured to capture images and/or sense proximity, distance, motion, gestures, etc.) and output mechanisms (e.g., speakers, displays, lighted/flashing indicators, electromechanical components for vibration, motion, etc.). Communication interface module 212 may be configured to handle packet routing and other control functions for communication module 214, which may include resources configured to support wired and/or wireless communications. Wired communications may include serial and parallel wired mediums such as, for example, Ethernet, Universal Serial Bus (USB), Firewire, Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI), etc. Wireless communications may include, for example, close-proximity wireless mediums (e.g., radio frequency (RF) such as based on the Near Field Communications (NFC) standard, infrared (IR), optical character recognition (OCR), magnetic character sensing, etc.), short-range wireless mediums (e.g., Bluetooth, WLAN, Wi-Fi, etc.) and long range wireless mediums (e.g., cellular, satellite, etc.). In one embodiment, communication interface module 212 may be configured to prevent wireless communications that are active in communication module 214 from interfering with each other. In performing this function, communication interface module 212 may schedule activities for communication module 214 based on, for example, the relative priority of messages awaiting transmission.
As operations related to queue 100 may occur at a somewhat low level within device 200, in some embodiments consistent with the present disclosure only processing module 204 and/or memory module 206 may be active. For example, memory module 206 may comprise various memory locations corresponding to the nodes in queue 100. Threads executed by processing module 204 may then proceed to enqueue or dequeue nodes in queue 100 as will be described in
The pseudo-code disclosed in
At least one advantage of using a per-node counter (e.g., instead of a tail counter) is that the per-node counter may be optimized if any given node will not be enqueued into more than one queue (e.g., or into a fixed and small number of queues). A counter of a single bit therefore suffices in the case in which a node may be enqueued in one queue only. This is easily extended to N queues with N separate counters per node, one bit per counter. Given that N is a small number, all nodes may be aligned on a 2^N-byte boundary, so that the per-node counters may be encoded into unused address bits. Such optimization allows enqueuing to be done using an XCHG operation in place of a double-width CAS operation. Experiments show that such optimization may result in a substantial (e.g., 10%-15%) overall queue throughput improvement. In practice, aligned nodes are common in I/O queues (e.g., Network/USB stacks), where this variant may be a good fit.
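The bit-packing idea above can be sketched for the N = 1 case: a node aligned to at least a 2-byte boundary has a zero low address bit, which can carry the node's one-bit counter, and the tail can then be swung to a new node with a single XCHG-style exchange rather than a double-width CAS. Names are illustrative; this shows only the encoding and the exchange, not the full enqueue path.

```c
#include <stdatomic.h>
#include <stdint.h>

typedef struct node { struct node *next; } node;

/* Pack a node address and its one-bit counter into a single word.
 * The low bit is free because the node's alignment (at least that of
 * a pointer) guarantees it is zero in the raw address. */
static uintptr_t pack(node *n, unsigned bit) {
    return (uintptr_t)n | (bit & 1u);
}
static node *unpack_ptr(uintptr_t v)    { return (node *)(v & ~(uintptr_t)1); }
static unsigned unpack_bit(uintptr_t v) { return (unsigned)(v & 1u); }

/* Swing the tail to the new node with one atomic exchange (an XCHG on
 * x86) instead of a double-width compare-and-swap; the returned value
 * is the prior tail state, i.e., the packed predecessor. */
static uintptr_t swing_tail(_Atomic uintptr_t *tail, node *n, unsigned bit) {
    return atomic_exchange(tail, pack(n, bit));
}
```

The exchange both installs the new tail and hands back the predecessor's packed pointer-plus-counter in one step, which is where the cited throughput advantage over a double-width CAS would come from.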
A determination may then be made in operation 1010 as to whether the values (e.g., at least the pointer and counter) of the predecessor node have changed. If it is determined in operation 1010 that the values of the predecessor node have changed, then in operation 1012 the pointer of the queue header may be set to point to the new node (e.g., the new node is now the first node in the queue). Alternatively, if it is determined in operation 1010 that the predecessor values have not changed, then in operation 1014 the pointer of the predecessor node may be set to point at the newly enqueued node. In at least one embodiment operations 1010 to 1014 may be performed as single/atomic operations.
While
As used in this application and in the claims, a list of items joined by the term “and/or” can mean any combination of the listed items. For example, the phrase “A, B and/or C” can mean A; B; C; A and B; A and C; B and C; or A, B and C. As used in this application and in the claims, a list of items joined by the term “at least one of” can mean any combination of the listed terms. For example, the phrases “at least one of A, B or C” can mean A; B; C; A and B; A and C; B and C; or A, B and C.
As used in any embodiment herein, the term “module” may refer to software, firmware and/or circuitry configured to perform any of the aforementioned operations. Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on non-transitory computer readable storage mediums. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in memory devices. “Circuitry”, as used in any embodiment herein, may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry such as computer processors comprising one or more individual instruction processing cores, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. The modules may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), system on-chip (SoC), desktop computers, laptop computers, tablet computers, servers, smartphones, etc.
Any of the operations described herein may be implemented in a system that includes one or more storage mediums having stored thereon, individually or in combination, instructions that when executed by one or more processors perform the methods. Here, the processor may include, for example, a server CPU, a mobile device CPU, and/or other programmable circuitry. Also, it is intended that operations described herein may be distributed across a plurality of physical devices, such as processing structures at more than one different physical location. The storage medium may include any type of tangible medium, for example, any type of disk including hard disks, floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, Solid State Disks (SSDs), embedded multimedia cards (eMMCs), secure digital input/output (SDIO) cards, magnetic or optical cards, or any type of media suitable for storing electronic instructions. Other embodiments may be implemented as software modules executed by a programmable control device.
Thus, this disclosure is directed to a fast and scalable concurrent queuing system. A device may comprise, for example, at least a memory module and a processing module. The memory module may be to store a queue comprising at least a head and a tail. The processing module may be to execute at least one thread desiring to enqueue at least one new node to the queue, enqueue the at least one new node to the queue, a first state being observed based on information in the tail identifying a predecessor node when the at least one new node is enqueued, observe a second state based on the predecessor node, determine if the predecessor node has changed based on comparing the first state to the second state, and set ordering in the queue based on the determination.
The following examples pertain to further embodiments. In one example there is provided a device. The device may include a memory module to store a queue comprising at least a head and a tail and a processing module to execute at least one thread desiring to enqueue at least one new node to the queue, enqueue the at least one new node to the queue, a first state being observed based on information in the tail identifying a predecessor node when the at least one new node is enqueued, observe a second state based on the predecessor node, determine if the predecessor node has changed based on comparing the first state to the second state, and set ordering in the queue based on the determination.
The above example device may be further configured, wherein the head, tail and at least one new node each comprise at least a pointer and a one-bit node counter, the head pointer including the address of a node situated first in the queue, the tail pointer including the address of a node situated last in the queue, and the at least one new node pointer including the address of the tail. In this configuration the example device may be further configured, wherein the processing module is to increment the node counter of the at least one new node and to set the pointer of the at least one new node to the tail address. In this configuration the example device may be further configured, wherein in observing the first state, the processing module is to set the pointer and node counter in a temporary node equal to the tail pointer and tail node counter. In this configuration the example device may be further configured, wherein the processing module is to set a predecessor address associated with the at least one new node to the node indicated by the temporary node pointer. In this configuration the example device may be further configured, wherein in determining if the predecessor node has changed, the processing module is to compare at least the node pointer and node counter of the predecessor node to the node pointer and node counter of the temporary node. In this configuration the example device may be further configured, wherein in setting ordering in the queue, the processing module is to set the head pointer to the address of the at least one new node if it is determined that the node pointer and node counter for the predecessor node are different than the node pointer and node counter for the temporary node. 
In this configuration the example device may be further configured, wherein in setting ordering in the queue, the processing module is to set the pointer of the predecessor node to the address of the at least one new node if it is determined that the node pointer and node counter for predecessor node are the same as the node pointer and node counter for the temporary node. In this configuration the example device may be further configured, wherein the processing module is further to execute at least one thread desiring to dequeue a node from the queue, read the head pointer and head node counter, determine if the head pointer is pointing at the tail, set a new head pointer to point at the next node in the queue if it is determined that the head is not pointing at the tail, set the address of the head pointer equal to the address of the new head pointer, and take corrective action if the head pointer is pointing at the tail.
In another example there is provided a method. The method may include executing at least one thread desiring to enqueue at least one new node to a queue including at least a head and a tail, the head, tail and at least one new node each comprising at least a pointer and a one-bit node counter, the head pointer including the address of a node situated first in the queue, the tail pointer including the address of a node situated last in the queue, and the at least one new node pointer including the address of the tail, enqueuing the at least one new node to the queue, a first state being observed based on the tail pointer and tail counter when the at least one new node is enqueued, observing a second state based on the predecessor node, determining if the predecessor node has changed based on comparing the first state to the second state, and setting ordering in the queue based on the determination.
The above example method may further comprise incrementing the node counter of the at least one new node, and setting the pointer of the at least one new node to the tail address.
The above example method may be further configured, alone or in combination with the above further configurations, wherein observing the first state comprises setting the pointer and node counter in a temporary node equal to the tail pointer and tail node counter. In this configuration the example method may further comprise setting a predecessor address associated with the at least one new node to the node indicated by the temporary node pointer. In this configuration the example method may be further configured, wherein determining if the predecessor node has changed comprises comparing at least the node pointer and node counter of the predecessor node to the node pointer and node counter of the temporary node. In this configuration the example method may be further configured, wherein setting ordering in the queue comprises setting the head pointer to the address of the at least one new node if it is determined that the node pointer and node counter for the predecessor node are different than the node pointer and node counter for the temporary node. In this configuration the example method may be further configured, wherein setting ordering in the queue comprises setting the pointer of the predecessor node to the address of the at least one new node if it is determined that the node pointer and node counter for the predecessor node are the same as the node pointer and node counter for the temporary node.
The above example method may further comprise, alone or in combination with the above further configurations, executing at least one thread desiring to dequeue a node from the queue, reading the head pointer and head node counter, determining if the head pointer is pointing at the tail, setting a new head pointer to point at the next node in the queue if it is determined that the head is not pointing at the tail, setting the address of the head pointer equal to the address of the new head pointer, and taking corrective action if the head pointer is pointing at the tail.
In another example there is provided a system comprising at least a device, the system being arranged to perform any of the above example methods.
In another example there is provided a chipset arranged to perform any of the above example methods.
In another example there is provided at least one machine readable medium comprising a plurality of instructions that, in response to being executed on a computing device, cause the computing device to carry out any of the above example methods.
In another example there is provided a device configured for use with a fast and scalable concurrent queuing system, the device being arranged to perform any of the above example methods.
In another example there is provided a device having means to perform any of the above example methods.
In another example there is provided a system comprising at least one machine-readable storage medium having stored thereon, individually or in combination, instructions that, when executed by one or more processors, result in the system carrying out any of the above example methods.
In another example there is provided a device. The device may include a memory module to store a queue comprising at least a head and a tail and a processing module to execute at least one thread desiring to enqueue at least one new node to the queue, enqueue the at least one new node to the queue, a first state being observed based on information in the tail identifying a predecessor node when the at least one new node is enqueued, observe a second state based on the predecessor node, determine if the predecessor node has changed based on comparing the first state to the second state, and set ordering in the queue based on the determination.
The above example device may be further configured, wherein the head, tail and at least one new node each comprise at least a pointer and a one-bit node counter, the head pointer including the address of a node situated first in the queue, the tail pointer including the address of a node situated last in the queue, and the at least one new node pointer including the address of the tail. In this configuration the example device may be further configured, wherein in observing the first state, the processing module is to set the pointer and node counter in a temporary node equal to the tail pointer and tail node counter, and set a predecessor address associated with the at least one new node to the node indicated by the temporary node pointer. In this configuration the example device may be further configured, wherein in determining if the predecessor node has changed, the processing module is to compare at least the node pointer and node counter of the predecessor node to the node pointer and node counter of the temporary node. In this configuration the example device may be further configured, wherein in setting ordering in the queue, the processing module is to set the head pointer to the address of the at least one new node if it is determined that the node pointer and node counter for the predecessor node are different than the node pointer and node counter for the temporary node, and set the pointer of the predecessor node to the address of the at least one new node if it is determined that the node pointer and node counter for the predecessor node are the same as the node pointer and node counter for the temporary node.
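One way a pointer and a one-bit node counter could share a single word — purely an illustrative assumption, since the disclosure does not specify a memory layout — is to tag the pointer's low bit, which is free whenever nodes are at least 2-byte aligned. Packing both fields into one word would let a pointer and its counter be read or swapped together in a single atomic access.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical packed representation of a "pointer plus one-bit counter".
// Node addresses are assumed at least 2-byte aligned, so bit 0 is free.
using PackedRef = std::uintptr_t;

inline PackedRef pack(void* p, unsigned ctr) {
    // Store the counter in the pointer's low bit.
    return reinterpret_cast<std::uintptr_t>(p) | (ctr & 1u);
}

inline void* unpack_ptr(PackedRef r) {
    // Mask off the counter bit to recover the address.
    return reinterpret_cast<void*>(r & ~static_cast<std::uintptr_t>(1));
}

inline unsigned unpack_ctr(PackedRef r) {
    return static_cast<unsigned>(r & 1u);
}
```

Under this (assumed) layout, the "compare pointer and counter" steps of the disclosure reduce to comparing two words for equality.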
In this configuration the example device may be further configured, wherein the processing module is further to execute at least one thread desiring to dequeue a node from the queue, read the head pointer and head node counter, determine if the head pointer is pointing at the tail, set a new head pointer to point at the next node in the queue if it is determined that the head is not pointing at the tail, set the address of the head pointer equal to the address of the new head pointer, and take corrective action if the head pointer is pointing at the tail.
In another example there is provided a method. The method may include executing at least one thread desiring to enqueue at least one new node to a queue including at least a head and a tail, the head, tail and at least one new node each comprising at least a pointer and a one-bit node counter, the head pointer including the address of a node situated first in the queue, the tail pointer including the address of a node situated last in the queue, and the at least one new node pointer including the address of the tail, enqueuing the at least one new node to the queue, a first state being observed based on the tail pointer and tail counter when the at least one new node is enqueued, observing a second state based on the predecessor node, determining if the predecessor node has changed based on comparing the first state to the second state, and setting ordering in the queue based on the determination.
The above example method may be further configured, wherein observing the first state comprises setting the pointer and node counter in a temporary node equal to the tail pointer and tail node counter, and setting a predecessor address associated with the at least one new node to the node indicated by the temporary node pointer. In this configuration the example method may be further configured, wherein determining if the predecessor node has changed comprises comparing at least the node pointer and node counter of the predecessor node to the node pointer and node counter of the temporary node. In this configuration the example method may be further configured, wherein setting ordering in the queue comprises setting the head pointer to the address of the at least one new node if it is determined that the node pointer and node counter for the predecessor node are different than the node pointer and node counter for the temporary node, and setting the pointer of the predecessor node to the address of the at least one new node if it is determined that the node pointer and node counter for the predecessor node are the same as the node pointer and node counter for the temporary node.
The above example method may further comprise, alone or in combination with the above further configurations, executing at least one thread desiring to dequeue a node from the queue, reading the head pointer and head node counter, determining if the head pointer is pointing at the tail, setting a new head pointer to point at the next node in the queue if it is determined that the head is not pointing at the tail, setting the address of the head pointer equal to the address of the new head pointer, and taking corrective action if the head pointer is pointing at the tail.
In another example there is provided a system comprising at least a device, the system being arranged to perform any of the above example methods.
In another example there is provided a chipset arranged to perform any of the above example methods.
In another example there is provided at least one machine readable medium comprising a plurality of instructions that, in response to being executed on a computing device, cause the computing device to carry out any of the above example methods.
In another example there is provided a device having means to perform any of the above example methods.
In another example there is provided a device. The device may include a memory module to store a queue comprising at least a head and a tail and a processing module to execute at least one thread desiring to enqueue at least one new node to the queue, enqueue the at least one new node to the queue, a first state being observed based on information in the tail identifying a predecessor node when the at least one new node is enqueued, observe a second state based on the predecessor node, determine if the predecessor node has changed based on comparing the first state to the second state, and set ordering in the queue based on the determination.
The above example device may be further configured, wherein the head, tail and at least one new node each comprise at least a pointer and a one-bit node counter, the head pointer including the address of a node situated first in the queue, the tail pointer including the address of a node situated last in the queue, and the at least one new node pointer including the address of the tail. In this configuration the example device may be further configured, wherein the processing module is to increment the node counter of the at least one new node and to set the pointer of the at least one new node to the tail address. In this configuration the example device may be further configured, wherein in observing the first state, the processing module is to set the pointer and node counter in a temporary node equal to the tail pointer and tail node counter. In this configuration the example device may be further configured, wherein the processing module is to set a predecessor address associated with the at least one new node to the node indicated by the temporary node pointer. In this configuration the example device may be further configured, wherein in determining if the predecessor node has changed, the processing module is to compare at least the node pointer and node counter of the predecessor node to the node pointer and node counter of the temporary node. In this configuration the example device may be further configured, wherein in setting ordering in the queue, the processing module is to set the head pointer to the address of the at least one new node if it is determined that the node pointer and node counter for the predecessor node are different than the node pointer and node counter for the temporary node. 
In this configuration the example device may be further configured, wherein in setting ordering in the queue, the processing module is to set the pointer of the predecessor node to the address of the at least one new node if it is determined that the node pointer and node counter for the predecessor node are the same as the node pointer and node counter for the temporary node. In this configuration the example device may be further configured, wherein the processing module is further to execute at least one thread desiring to dequeue a node from the queue, read the head pointer and head node counter, determine if the head pointer is pointing at the tail, set a new head pointer to point at the next node in the queue if it is determined that the head is not pointing at the tail, set the address of the head pointer equal to the address of the new head pointer, and take corrective action if the head pointer is pointing at the tail.
In another example there is provided a method. The method may include executing at least one thread desiring to enqueue at least one new node to a queue including at least a head and a tail, the head, tail and at least one new node each comprising at least a pointer and a one-bit node counter, the head pointer including the address of a node situated first in the queue, the tail pointer including the address of a node situated last in the queue, and the at least one new node pointer including the address of the tail, enqueuing the at least one new node to the queue, a first state being observed based on the tail pointer and tail counter when the at least one new node is enqueued, observing a second state based on the predecessor node, determining if the predecessor node has changed based on comparing the first state to the second state, and setting ordering in the queue based on the determination.
The above example method may further comprise incrementing the node counter of the at least one new node, and setting the pointer of the at least one new node to the tail address.
The above example method may be further configured, alone or in combination with the above further configurations, wherein observing the first state comprises setting the pointer and node counter in a temporary node equal to the tail pointer and tail node counter. In this configuration the example method may further comprise setting a predecessor address associated with the at least one new node to the node indicated by the temporary node pointer. In this configuration the example method may be further configured, wherein determining if the predecessor node has changed comprises comparing at least the node pointer and node counter of the predecessor node to the node pointer and node counter of the temporary node. In this configuration the example method may be further configured, wherein setting ordering in the queue comprises setting the head pointer to the address of the at least one new node if it is determined that the node pointer and node counter for the predecessor node are different than the node pointer and node counter for the temporary node. In this configuration the example method may be further configured, wherein setting ordering in the queue comprises setting the pointer of the predecessor node to the address of the at least one new node if it is determined that the node pointer and node counter for the predecessor node are the same as the node pointer and node counter for the temporary node.
The above example method may further comprise, alone or in combination with the above further configurations, executing at least one thread desiring to dequeue a node from the queue, reading the head pointer and head node counter, determining if the head pointer is pointing at the tail, setting a new head pointer to point at the next node in the queue if it is determined that the head is not pointing at the tail, setting the address of the head pointer equal to the address of the new head pointer, and taking corrective action if the head pointer is pointing at the tail.
In another example there is provided a system. The system may include means for executing at least one thread desiring to enqueue at least one new node to a queue including at least a head and a tail, the head, tail and at least one new node each comprising at least a pointer and a one-bit node counter, the head pointer including the address of a node situated first in the queue, the tail pointer including the address of a node situated last in the queue, and the at least one new node pointer including the address of the tail, means for enqueuing the at least one new node to the queue, a first state being observed based on the tail pointer and tail counter when the at least one new node is enqueued, means for observing a second state based on the predecessor node, means for determining if the predecessor node has changed based on comparing the first state to the second state, and means for setting ordering in the queue based on the determination.
The above example system may further comprise means for incrementing the node counter of the at least one new node, and means for setting the pointer of the at least one new node to the tail address.
The above example system may be further configured, alone or in combination with the above further configurations, wherein observing the first state comprises setting the pointer and node counter in a temporary node equal to the tail pointer and tail node counter. In this configuration the example system may further comprise means for setting a predecessor address in the at least one new node to the node indicated by the temporary node pointer. In this configuration the example system may be further configured, wherein determining if the predecessor node has changed comprises comparing at least the node pointer and node counter of the predecessor node to the node pointer and node counter of the temporary node. In this configuration the example system may be further configured, wherein setting ordering in the queue comprises setting the head pointer to the address of the at least one new node if it is determined that the node pointer and node counter for the predecessor node are different than the node pointer and node counter for the temporary node. In this configuration the example system may be further configured, wherein setting ordering in the queue comprises setting the pointer of the predecessor node to the address of the at least one new node if it is determined that the node pointer and node counter for the predecessor node are the same as the node pointer and node counter for the temporary node.
The above example system may further comprise, alone or in combination with the above further configurations, means for executing at least one thread desiring to dequeue a node from the queue, means for reading the head pointer and head node counter, means for determining if the head pointer is pointing at the tail, means for setting a new head pointer to point at the next node in the queue if it is determined that the head is not pointing at the tail, means for setting the address of the head pointer equal to the address of the new head pointer, and means for taking corrective action if the head pointer is pointing at the tail.
The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents.
Xing, Bin, Del Cuvillo, Juan B.
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Mar 14 2013 | Intel Corporation | (assignment on the face of the patent) | / | |||
Feb 19 2015 | DEL CUVILLO, JUAN B | Intel Corporation | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 035408 | /0023 | |
Mar 26 2015 | XING, BIN | Intel Corporation | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 035408 | /0023 |
Date | Maintenance Fee Events |
Jul 24 2015 | ASPN: Payor Number Assigned. |
Apr 15 2019 | REM: Maintenance Fee Reminder Mailed. |
Sep 30 2019 | EXP: Patent Expired for Failure to Pay Maintenance Fees. |