A parallel processing apparatus including a plurality of compute nodes and a management node including a first processor configured to execute a process including collecting failure information regarding a plurality of ports of the plurality of compute nodes, and transmitting, to the plurality of compute nodes, failed port information including information on a failed port of the plurality of ports when an update in the failure information is detected in the collecting, wherein each of the plurality of compute nodes includes a second processor configured to execute a process including determining a retransmission route based on the failed port information when an inter-compute node communication in a low-level communication library has failed, and re-executing the inter-node communication by using the determined retransmission route.
1. A parallel processing apparatus, comprising:
a plurality of compute nodes; and
a management node including a first processor configured to:
collect failure information regarding a plurality of ports of the plurality of compute nodes by checking the compute nodes at intervals of a predetermined period; and
transmit, to the plurality of compute nodes, failed port information including information on a failed port of the plurality of ports when an update in the failure information is detected, wherein
each of the plurality of compute nodes includes a second processor configured to:
determine a retransmission route based on the failed port information when an inter-compute node communication has failed; and
re-execute the inter-node communication by using the determined retransmission route.
5. A parallel processing method executed by a parallel processing apparatus including a plurality of compute nodes, the parallel processing method comprising:
collecting failure information regarding a plurality of ports of the plurality of compute nodes by checking the compute nodes at intervals of a predetermined period;
transmitting, to the plurality of compute nodes, failed port information including information on a failed port of the plurality of ports when an update in the failure information is detected in the collecting;
determining, by at least one of the plurality of compute nodes, a retransmission route based on the failed port information when an inter-compute node communication has failed; and
re-executing, by the at least one of the plurality of compute nodes, the inter-node communication by using the determined retransmission route.
4. A non-transitory computer-readable storage medium storing a program that causes a parallel processing apparatus to execute a process, the parallel processing apparatus including a plurality of compute nodes, the process comprising:
collecting failure information regarding a plurality of ports of the plurality of compute nodes by checking the compute nodes at intervals of a predetermined period;
transmitting, to the plurality of compute nodes, failed port information including information on a failed port of the plurality of ports when an update in the failure information is detected in the collecting;
causing at least one of the plurality of compute nodes to determine a retransmission route based on the failed port information when an inter-compute node communication has failed; and
causing the at least one of the plurality of compute nodes to re-execute the inter-node communication by using the determined retransmission route.
2. The parallel processing apparatus according to
3. The parallel processing apparatus according to
a node identifier of the plurality of compute nodes,
coordinate information indicating coordinates of each of the plurality of compute nodes in a parallel processing executed by the parallel processing apparatus,
a port identifier of the plurality of ports used for data communication to an adjacent compute node, and
state information that indicates a state of the plurality of ports.
This application is based upon and claims the benefit of priority of the prior Japanese Patent Application No. 2016-238848, filed on Dec. 8, 2016, the entire contents of which are incorporated herein by reference.
The embodiments discussed herein are related to a parallel processing apparatus and a non-transitory computer-readable storage medium.
In high performance computing (HPC), where a plurality of compute nodes connected by a network execute a parallel program by cooperating with each other, communication paths are set so as to avoid passing through failed locations when a failed compute node and a failed path are known at the time when a job is assigned. However, if a communication path is interrupted during execution of a job, an issued communication command does not reach the destination compute node and is thus lost.
To address this issue, a communication command is retransmitted. For example, when communication is performed by using a message passing interface (MPI) library, since the MPI library performs communication via a low-level communication library at a lower layer, retransmission is performed by calling a transmission function and a reception-confirmation function of the low-level communication library a plurality of times.
In addition, there is a technique in which when it is notified from a destination processor that a reception buffer is not permitted to be used because of the occurrence of an error in the reception buffer, retransmission of data to be transmitted is stopped, and when it is notified that the reception buffer is in use, retransmission of data to be transmitted is performed at a predetermined timing. According to this technique, it is possible to inhibit meaningless repeats of retransmission.
There is also a technique in which, when a failure occurs at the receiving destination, the correspondence between logical addresses and physical addresses in a conversion table for converting the logical address of the destination into the physical address is changed, so that dynamic reconfiguration of network routing is performed rapidly and effectively.
Japanese Laid-open Patent Publication No. 5-265989 and Japanese Laid-open Patent Publication No. 7-262146 are examples of the related art.
According to an aspect of the invention, a parallel processing apparatus includes a plurality of compute nodes and a management node including a first processor configured to execute a process including collecting failure information regarding a plurality of ports of the plurality of compute nodes, and transmitting, to the plurality of compute nodes, failed port information including information on a failed port of the plurality of ports when an update in the failure information is detected in the collecting, wherein each of the plurality of compute nodes includes a second processor configured to execute a process including determining a retransmission route based on the failed port information when an inter-compute node communication in a low-level communication library has failed, and re-executing the inter-node communication by using the determined retransmission route.
The object and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention, as claimed.
There is a problem in that if a transmission function and a reception-confirmation function of a low-level communication library are called a plurality of times in order to retransmit a communication command, it takes a long time to execute a parallel program.
The embodiments disclosed herein, for example, provide techniques for decreasing the execution time of a parallel program.
Hereinafter, embodiments of a parallel processing apparatus and an inter-node communication program disclosed in the present application will be described in detail with reference to the accompanying drawings. Note that the embodiments are not intended to limit the techniques disclosed herein.
First, the configuration of a parallel processing apparatus according to a first embodiment will be described.
Although only four compute nodes 1 are illustrated in
The compute node 1 is an information processing apparatus that executes a parallel program by cooperating with other compute nodes 1. The compute node 1 includes a central processing unit (CPU) & memory 11 and a network interface (NI) 12.
The CPU & memory 11 is composed of a CPU that reads a program from memory and executes the program, and main memory that stores therein programs, intermediate execution results of programs, and the like. A program, for example, is stored in a digital versatile disc (DVD), is read from the DVD, and is installed in the compute node 1. Alternatively, a program is stored in a database or the like of a computer system coupled via a network, is read from the database or the like, and is installed in the compute node 1. Further, the installed program is stored in a hard disk drive (HDD), is read into main memory, and is executed by the CPU. The NI 12 is an interface for communication with another compute node 1 via the NS 2.
The boot-IO node 1a performs input of data to the compute node 1 and output of data from the compute node 1, and the like. The boot-IO node 1a is responsible for input and output of a predetermined number of compute nodes 1. The boot-IO node 1a includes the CPU & memory 11 and the NI 12. The boot-IO node 1a is coupled to the control node 3 and relays transmission and reception of data between the control node 3 and the compute node 1.
The NS 2 is a switch for coupling the compute nodes 1 and the boot-IO node 1a. The compute node 1 communicates with another compute node 1 or the boot-IO node 1a via the NS 2.
The control node 3 is a device that controls the parallel processing apparatus 6. The control node 3 is coupled to the boot-IO node 1a. The control node 3 monitors the compute nodes 1 and communication paths as will be described in detail below.
Next, the hierarchy of components related to communication will be described.
The low-level communication library 22 is achieved by a network interface driver 23. The network interface driver 23 is software that runs in the kernel space. The network interface driver 23 uses the NI 12 to communicate with the network interface driver 23 of another compute node 1.
Next, the relationship among a transmission command queue, a transmission complete queue, and a reception complete queue will be described.
As illustrated in
Upon executing the transmission command written to the transmission command queue 12a, the NI 12 sets a transmission complete notification in the transmission complete queue 12b (2). Further, when data is transmitted to the receiving node 1c on the other end of communication (3), a reception complete notification is set in the reception complete queue 12c (4).
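The following sketch, provided for illustration only, models this handshake in C as three fixed-size ring buffers and replays notifications (1) to (4) in order. All type and function names are hypothetical; the actual queues are managed by the NI 12 and its driver, not by user-space structures such as these.

#include <stdbool.h>
#include <stdio.h>

#define QUEUE_DEPTH 8

/* Hypothetical model of one NI queue as a ring buffer of command IDs. */
typedef struct {
    int entries[QUEUE_DEPTH];
    int head, tail;
} ni_queue;

static void q_push(ni_queue *q, int id) { q->entries[q->tail++ % QUEUE_DEPTH] = id; }
static bool q_pop(ni_queue *q, int *id)
{
    if (q->head == q->tail) return false;
    *id = q->entries[q->head++ % QUEUE_DEPTH];
    return true;
}

int main(void)
{
    ni_queue cmd = {0}, tx_done = {0}, rx_done = {0};
    int id;

    /* (1) the low-level communication library writes a transmission command */
    q_push(&cmd, 42);

    /* (2) the NI executes the command and sets a transmission complete notification */
    if (q_pop(&cmd, &id))
        q_push(&tx_done, id);

    /* (3)(4) once the data reaches the receiving node, a reception complete
     * notification for the same command becomes visible on the sending side */
    q_push(&rx_done, 42);

    if (q_pop(&tx_done, &id))
        printf("transmission complete for command %d\n", id);
    if (q_pop(&rx_done, &id))
        printf("reception complete for command %d\n", id);
    return 0;
}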
Next, the functional configuration of the parallel processing apparatus 6 will be described.
As illustrated in
The failure monitor daemon 31 is a daemon that runs on the control node 3, checks the states of all of the compute nodes 1 and communication paths at regular time intervals, and creates a failure information file and writes the failure information file to the failure information storage unit 32 if a port failure has occurred.
The failure information storage unit 32 stores therein a failure information file. Information about a failed port is described in the failure information file.
The node ID is an identifier identifying the compute node 1 where a failure has occurred. Coordinate information is the coordinates of the compute node 1 where a failure has occurred. For example, when the compute node 1 is arranged in three dimensions, the coordinate information is represented by x, y, and z. Here, x is a coordinate on the X-axis, y is a coordinate on the Y-axis, and z is a coordinate on the Z-axis; each of x, y, and z is an integer greater than or equal to zero.
The port number is the number of a port used for communication with the adjacent compute node 1. For example, when the compute node 1 is arranged in three dimensions, “0” corresponds to “Z+”, “1” corresponds to “Z−”, “2” corresponds to “X−”, “3” corresponds to “X+”, “4” corresponds to “Y−”, and “5” corresponds to “Y+”. Here, “+” indicates the positive direction of each axis, and “−” indicates the negative direction of each axis. For example, the port with a port number “0” is used when data is transmitted in the positive direction of the Z-axis.
The port state represents whether the port is in failure. For example, “1” indicates “a fatal hardware error of a router is detected”, “2” indicates “a fatal hardware error of a port router is detected”, and “3” indicates “an alarm for a hardware error is detected”.
For example, the compute node 1 with a node ID “011” has coordinates (1, 1, 2), and a port used when data is transmitted in the positive direction of the Z-axis is in a state where “a fatal hardware error of a port router is detected”.
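For illustration, the following C sketch defines one entry of such a failure information file and parses the example entry above. The record layout and the single-line text format are assumptions made for this sketch; the embodiment does not prescribe a particular file syntax.

#include <stdio.h>

/* Port-number-to-direction mapping for a 3-D arrangement, as described above:
 * 0:Z+ 1:Z- 2:X- 3:X+ 4:Y- 5:Y+ */
static const char *PORT_DIR[6] = { "Z+", "Z-", "X-", "X+", "Y-", "Y+" };

/* One entry of the failure information file (assumed layout). */
typedef struct {
    char node_id[16];   /* identifier of the compute node where the failure occurred */
    int  x, y, z;       /* coordinates of that compute node */
    int  port;          /* failed port number, 0..5 */
    int  state;         /* 1..3: type of detected hardware error */
} failed_port_entry;

int main(void)
{
    /* Assumed text form of the example entry: node 011 at (1,1,2), port 0 (Z+), state 2. */
    const char *line = "011 1 1 2 0 2";
    failed_port_entry e;

    if (sscanf(line, "%15s %d %d %d %d %d",
               e.node_id, &e.x, &e.y, &e.z, &e.port, &e.state) == 6) {
        printf("node %s at (%d,%d,%d): port %d (%s) state %d\n",
               e.node_id, e.x, e.y, e.z, e.port, PORT_DIR[e.port], e.state);
    }
    return 0;
}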
Upon updating a failure information file, the failure monitor daemon 31 distributes the failure information file to all of the compute nodes 1.
The failure information storage unit 41 stores therein the failure information file 32a. Upon receiving a transmission command from the MPI library 21, the transmitting unit 42 writes the transmission command to the transmission command queue 12a for execution. When the transmission command written to the transmission command queue 12a is executed by the NI 12, the NI 12 writes a transmission completion notification to the transmission complete queue 12b.
The transmission confirmation unit 43 confirms that the transmission completion notification has been written to the transmission complete queue 12b. Then, the transmission confirmation unit 43 verifies whether a reception completion notification has been written to the reception complete queue 12c, checking for a reception completion notification that corresponds to the transmission command in the transmission command queue 12a. Upon confirming such a reception completion notification, the transmission confirmation unit 43 passes a transmission completion to the MPI library 21.
Otherwise, if a reception completion notification cannot be confirmed after a certain time period has elapsed since the transmission command was written to the transmission command queue 12a, the transmission confirmation unit 43 determines that a failure has occurred and performs retransmission processing. The transmission confirmation unit 43 includes a retransmitting unit 43a, and the retransmitting unit 43a performs the retransmission processing.
The retransmitting unit 43a acquires the failure information file 32a and rebuilds a path by using the failure information file 32a. At this point, the retransmitting unit 43a searches again for paths from its compute node 1 to all of the other compute nodes 1. The retransmitting unit 43a then retransmits the data by using the rebuilt path and stores, in the retransmission information storage unit 45, information indicating that the data has been retransmitted and information on the directions of transmission for the paths searched for again.
A transmission command designates the direction in which data is to be transmitted, among the positive and negative directions of the X-axis and the positive and negative directions of the Y-axis. In addition, the transfer path of the data is determined by the transmission direction. For example, data transferred in the positive direction of the Y-axis is first transferred to the compute node 1 having the same y-coordinate as the receiving node 1c, and is then transferred in the positive direction of the X-axis to arrive at the receiving node 1c. There are two such paths in the case of a two-dimensional mesh arrangement, while there are four paths in the case of a two-dimensional torus arrangement.
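The following C sketch illustrates, under the assumption of a two-dimensional torus of hypothetical size 4, how an initial transmission direction toward the receiving node 1c might be chosen while skipping directions whose ports appear in the failed port information. It is a minimal sketch of the idea of searching again for a path, not the path search actually performed by the low-level communication library 22.

#include <stdbool.h>
#include <stdio.h>

/* Directions for a 2-D arrangement; the encoding is an assumption. */
enum dir { XP, XM, YP, YM, NDIR };
static const char *DIR_NAME[NDIR] = { "X+", "X-", "Y+", "Y-" };

#define SIZE 4  /* hypothetical torus size in each dimension */

/* Pick an initial transmission direction toward (dx,dy) from (sx,sy) on a
 * 2-D torus, skipping directions whose local port is marked as failed. */
static int choose_direction(int sx, int sy, int dx, int dy, const bool failed[NDIR])
{
    int cand[4], n = 0;

    /* Candidate first directions: the shorter way around each axis first. */
    if (dy != sy) {
        int fwd = (dy - sy + SIZE) % SIZE;
        cand[n++] = (fwd <= SIZE - fwd) ? YP : YM;
        cand[n++] = (fwd <= SIZE - fwd) ? YM : YP;
    }
    if (dx != sx) {
        int fwd = (dx - sx + SIZE) % SIZE;
        cand[n++] = (fwd <= SIZE - fwd) ? XP : XM;
        cand[n++] = (fwd <= SIZE - fwd) ? XM : XP;
    }
    for (int i = 0; i < n; i++)
        if (!failed[cand[i]])
            return cand[i];
    return -1;  /* no usable port toward the destination */
}

int main(void)
{
    bool failed[NDIR] = { false };
    failed[YP] = true;  /* suppose the Y+ port of the sending node has failed */

    int d = choose_direction(0, 0, 2, 1, failed);
    printf("retransmit using direction %s\n", d >= 0 ? DIR_NAME[d] : "(none)");
    return 0;
}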
The reception confirmation unit 44 notifies the MPI library 21 of reception completion when it is confirmed that a reception completion notification has been written to the reception complete queue 12c.
The retransmission information storage unit 45 stores therein information indicating whether retransmission has been performed and information on the directions of transmission using paths searched for again by the retransmitting unit 43a.
As illustrated in
Next, the flow of transmission processing will be described.
Further, the low-level communication library 22 checks the transmission complete queue 12b (step S3) and repeats step S4 to step S7 until the condition of step S6 or step S7 is satisfied. That is, the low-level communication library 22 checks the reception complete queue 12c (step S4) and determines whether a reception completion notification is present (step S5).
If a reception completion notification is present, the low-level communication library 22 determines whether the reception completion notification is a reception completion notification of the transmission of interest (step S6) and proceeds to step S13 if the reception completion notification is a reception completion notification of the transmission of interest. Otherwise, if the reception completion notification is not a reception completion notification of the transmission of interest, the low-level communication library 22 returns to step S4. In addition, if no reception completion notification is present, the low-level communication library 22 determines whether a certain time period has elapsed after transmission (step S7) and returns to step S4 if the certain time period has not elapsed.
Otherwise, if the certain time period has elapsed, the low-level communication library 22 acquires the failure information file 32a (step S8) and searches again for a path by using the acquired failure information file 32a (step S9). Further, the low-level communication library 22 sets a transmission command based on the path searched for again in the transmission command queue 12a (step S10) and issues an instruction for retransmission of data (step S11). Further, the low-level communication library 22 stores, in the retransmission information storage unit 45, information indicating that retransmission has been performed and information on the path searched for again (step S12) and returns to step S3.
If, in step S6, the notification is a reception completion notification of the transmission of interest, the low-level communication library 22 provides a transmission completion notification as a response to the MPI library 21 (step S13) and then provides a reception completion notification as a response to the MPI library 21 (step S14).
In this way, when a certain time period has elapsed after transmission of data, the low-level communication library 22 searches for a path again by using the failure information file 32a and retransmits data by using the path searched for again. This may make it unnecessary for the MPI library 21 to perform retransmission.
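As a rough sketch of this flow, the following C function mirrors steps S1 to S14. The queue accessors, the path-search helper, the time-out value, and the simulated failure are all hypothetical placeholders standing in for the low-level communication library 22 and the NI 12.

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Hypothetical stand-ins for the low-level communication library and the NI 12. */
typedef struct { int dest; int dir; } tx_command;

static int  attempts;                          /* simulated NI: the first path fails */
static void post_tx_command(tx_command c)      { (void)c; attempts++; }
static bool tx_complete(void)                  { return true; }
static bool rx_complete_for(tx_command c)      { (void)c; return attempts >= 2; }
static int  search_path_again(int dest)        { (void)dest; return 1; }
static void record_retransmission(int dir)     { printf("recorded new direction %d\n", dir); }

#define TIMEOUT_SEC 1   /* assumed "certain time period" */

/* Sketch of steps S1 to S14: transmit, wait for a reception completion
 * notification, and on timeout rebuild the path from the failure
 * information file 32a and retransmit. */
static bool send_with_retransmit(int dest)
{
    tx_command cmd = { dest, 0 };
    post_tx_command(cmd);                                  /* S1, S2: set command, transmit  */

    for (;;) {
        while (!tx_complete())                             /* S3: transmission complete?     */
            ;
        time_t start = time(NULL);
        bool retransmitted = false;
        while (!retransmitted) {
            if (rx_complete_for(cmd))                      /* S4-S6: reception complete?     */
                return true;                               /* S13, S14: notify MPI library   */
            if (time(NULL) - start >= TIMEOUT_SEC) {       /* S7: certain time period passed */
                cmd.dir = search_path_again(dest);         /* S8, S9: use failure info file  */
                post_tx_command(cmd);                      /* S10, S11: retransmit the data  */
                record_retransmission(cmd.dir);            /* S12: store retransmission info */
                retransmitted = true;                      /* return to S3                   */
            }
        }
    }
}

int main(void)
{
    printf("send %s\n", send_with_retransmit(7) ? "succeeded" : "failed");
    return 0;
}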
Note that the process of step S1 to step S2 is a process executed by the transmitting unit 42, which corresponds to a transmission function of the low-level communication library 22. The process of step S3 to step S13 is a process executed by the transmission confirmation unit 43. The process of step S14 is a process executed by the reception confirmation unit 44.
Next, the flow of a process of creating the failure information file 32a will be described.
That is, the failure monitor daemon 31 monitors all of the compute nodes 1 and paths at regular time intervals (step S21) and determines whether there is a failure in ports (step S22). If there is no failure in ports, the failure monitor daemon 31 returns to step S21.
Otherwise, if there is a failure in ports, the failure monitor daemon 31 creates the failure information file 32a (step S23) and distributes the created failure information file 32a to all of the compute nodes 1 (step S24).
In this way, the failure monitor daemon 31 creates and distributes the failure information file 32a to all of the compute nodes 1, enabling each compute node 1 to search again for communication paths.
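A minimal sketch of this monitoring loop is given below in C. The monitoring period, the port check, the file creation, and the distribution step are all reduced to hypothetical placeholders.

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define CHECK_INTERVAL_SEC 10   /* assumed monitoring period */

/* Hypothetical placeholders for the real port checks and file handling. */
static bool any_port_failed(void)                 { return false; }
static void create_failure_info_file(void)        { puts("failure information file created"); }
static void distribute_to_all_compute_nodes(void) { puts("file distributed to all compute nodes"); }

int main(void)
{
    for (;;) {
        sleep(CHECK_INTERVAL_SEC);             /* S21: check at regular time intervals */
        if (!any_port_failed())                /* S22: any failed port?                */
            continue;
        create_failure_info_file();            /* S23 */
        distribute_to_all_compute_nodes();     /* S24 */
    }
}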
As described above, according to the first embodiment, the failure monitor daemon 31 monitors the compute nodes 1 and paths at regular time intervals and, if a port failure is detected, creates and distributes the failure information file 32a to all of the compute nodes 1. Further, if the low-level communication library 22 is not notified of completion of data reception even after a certain time period has elapsed since the data was transmitted, the low-level communication library 22 searches again for a path by using the failure information file 32a and retransmits data by using the path searched for again. Accordingly, retransmission performed by the MPI library 21 may be made unnecessary, and thus the time for execution of a parallel program may be decreased.
According to the first embodiment, when the transmission confirmation unit 43 is not notified of completion of data reception, the transmission confirmation unit 43 searches again for a path by using the failure information file 32a and retransmits data by using the path searched for again. Accordingly, the MPI library 21 may be inhibited from being notified of transmission completion before reception completion is confirmed.
According to the first embodiment, since the failure information file 32a includes a node ID, coordinate information, a port number, and a port state, the low-level communication library 22 may search again for a path that does not use a failed port.
It is to be noted that although the case where the low-level communication library 22 searches again for a path by using the failure information file 32a has been described in the first embodiment, the present disclosure is not limited to this and may be applied in a similar manner to the case where the path search is delegated to an external path search program.
According to the first embodiment described above, the transmission confirmation unit 43 stores, in the retransmission information storage unit 45, information on a path searched for by using the failure information file 32a. This information is available not only at the time when data is retransmitted but also at the time when data is transmitted for the first time. If this information is not used, a timeout sometimes occurs when data is transmitted to another receiving node 1c. When a large number of nodes may each become the receiving node 1c in such a situation, the problem is that the timeout duration is multiplied accordingly.
However, using information in the retransmission information storage unit 45 from the first data transmission may resolve the problem of multiplication of the timeout duration.
Therefore, in the second embodiment, a low-level communication library that transmits data by using information in the retransmission information storage unit 45 will be described.
For the sake of explanatory convenience, here, functional components that fulfil roles similar to those of the components illustrated in
If retransmission has been performed, the transmitting unit 42a sets a transmission command based on a path searched for again in the transmission command queue 12a (step S33) and, if not, sets a transmission command based on a predetermined path in the transmission command queue 12a (step S34). The transmitting unit 42a then issues an instruction for transmitting data (step S35).
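The following C sketch illustrates steps S33 to S35 under the assumption that the retransmission information storage unit 45 holds a simple per-destination record; the record layout and the function names are hypothetical.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical record kept in the retransmission information storage unit 45. */
typedef struct {
    bool retransmitted;     /* has a retransmission to this destination occurred? */
    int  new_direction;     /* direction found when the path was searched again   */
} retransmit_info;

static void post_tx_command(int dest, int dir)
{
    printf("transmit to node %d using direction %d\n", dest, dir);   /* S35 */
}

/* Sketch of the second-embodiment transmitting unit: reuse the direction of a
 * previously rebuilt path even for the first transmission of new data. */
static void transmit(int dest, int default_dir, const retransmit_info *info)
{
    if (info->retransmitted)
        post_tx_command(dest, info->new_direction);   /* S33: path searched again */
    else
        post_tx_command(dest, default_dir);           /* S34: predetermined path  */
}

int main(void)
{
    retransmit_info info = { true, 3 };   /* e.g. direction 3 corresponds to "X+" */
    transmit(7, 0, &info);
    return 0;
}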
As described above, according to the second embodiment, the transmitting unit 42a changes the direction of transmission by using information in the retransmission information storage unit 45. This may reduce retransmission between the compute nodes 1.
It is to be noted that the case where the low-level communication library 22 changes the direction of transmission by using information in the retransmission information storage unit 45 has been described in the second embodiment; however, the present disclosure is not limited to this and may be applied in a similar manner to the case where, for example, the MPI library 21 issues an instruction for a transmission direction by using information in the retransmission information storage unit 45.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although the embodiments of the present invention have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.
Patent | Priority | Assignee | Title
5161156 | Feb 02 1990 | Cisco Technology, Inc., a corporation of California | Multiprocessing packet switching connection system having provision for error correction and recovery
5170393 | May 18 1989 | California Institute of Technology | Adaptive routing of messages in parallel and distributed processor systems
5303383 | May 01 1991 | Teradata US, Inc. | Multiprocessor computer system
5321813 | May 01 1991 | Teradata US, Inc. | Reconfigurable, fault tolerant, multistage interconnect network and protocol
5808886 | Mar 17 1994 | Hitachi, Ltd. | Reconfiguring control system in a parallel processing system by replacing an error-detected processing unit
5822605 | Mar 24 1994 | Hitachi, Ltd.; Hitachi ULSI Engineering Corp. | Parallel processor system with a broadcast message serializing circuit provided within a network
6543014 | Sep 01 1998 | Hitachi, Ltd.; Hitachi Information Technology Co., Ltd. | Data transmitting/receiving apparatus for executing data retransmission and parallel processor system
7516320 | Mar 27 2006 | Quartics, Inc. | Distributed processing architecture with scalable processing layers
9749222 | Sep 24 2012 | Fujitsu Limited | Parallel computer, node apparatus, and control method for the parallel computer
US 2017/0078222
US 2017/0126482
US 2018/0048518
US 2018/0159770
US 2018/0212792
JP 5-265989
JP 7-262146