Example systems for determining a configuration of a test system execute operations that include receiving first parameters specifying at least part of an operation of a test system; receiving second parameters specifying at least part of a first configuration of the test system; determining a second configuration of the test system based, at least in part, on the first parameters and the second parameters, with the second configuration being determined to impact a cost of test of the test system; generating, by one or more processing devices, data for a graphical user interface representing information about the second configuration and the cost of test; and outputting the data for the graphical user interface for rendering on a display device.

Patent: 11169203
Priority: Sep 26 2018
Filed: Sep 26 2018
Issued: Nov 09 2021
Expiry: Feb 06 2039
Extension: 133 days
Assignee: Teradyne, Inc. (Large Entity)
1. A method performed by one or more processing devices, comprising:
obtaining first parameters relating to at least part of an operation of a test system, the first parameters relating to hardware components of the test system, the hardware components comprising a number of controllers to be included in the test system and a number of test sites to be included in the test system, where each controller is configured to control testing in one or more of the test sites;
obtaining second parameters specifying at least part of a first configuration of the test system, the first configuration comprising a first assembly of hardware components;
determining a second configuration of the test system based, at least in part, on the first parameters and the second parameters, the second configuration comprising a second assembly of hardware components that is different from the first assembly of hardware components, wherein determining the second configuration comprises iterating through different configurations of the test system including different numbers of controllers and test sites tested by each of the controllers, and where the second configuration is determined to impact a cost of test of the test system;
generating, by one or more processing devices, data for a graphical user interface representing information about the second configuration and the cost of test, the information comprising one or more plots showing a change in cost of test based on at least the first configuration and the second configuration; and
outputting the data for the graphical user interface for rendering on a display device.
20. One or more non-transitory machine-readable storage devices storing instructions that are executable by one or more processing devices to perform operations comprising:
obtaining first parameters relating to at least part of an operation of a test system, the first parameters relating to hardware components of the test system, the hardware components comprising a number of controllers to be included in the test system and a number of test sites to be included in the test system, where each controller is configured to control testing in one or more of the test sites;
obtaining second parameters specifying at least part of a first configuration of the test system, the first configuration comprising a first assembly of hardware components;
determining a second configuration of the test system based, at least in part, on the first parameters and the second parameters, the second configuration comprising a second assembly of hardware components that is different from the first assembly of hardware components, wherein determining the second configuration comprises iterating through different configurations of the test system including different numbers of controllers and test sites tested by each of the controllers, and where the second configuration is determined to impact a cost of test of the test system;
generating data for a graphical user interface representing information about the second configuration and the cost of test, the information comprising one or more plots showing a change in cost of test based on at least the first configuration and the second configuration; and
outputting the data for the graphical user interface for rendering on a display device.
2. The method of claim 1, further comprising:
configuring the test system automatically based on the second configuration.
3. The method of claim 2, wherein configuring the test system comprises controlling assembly of hardware components of the test system.
4. The method of claim 3, wherein controlling assembly comprises controlling robotics to assemble the test system using the hardware components.
5. The method of claim 2, wherein configuring the test system comprises specifying a number of test slots and a number of test boards to incorporate into the test system to perform testing.
6. The method of claim 2, wherein configuring the test system comprises specifying a number of controllers to include in the test system.
7. The method of claim 1, wherein the first parameters comprise one or more of the following: single test time, parallel test site efficiency, test controller efficiency, or multi-test time.
8. The method of claim 1, wherein the second parameters comprise one or more of the following: types of test head slots, test instrument assignments for test head slot types, test controllers per test instrument, or test system component costs.
9. The method of claim 1, wherein the test system is for testing a device;
wherein the method further comprises receiving third parameters based on the device to be tested; and
wherein the second configuration is determined based also on the third parameters.
10. The method of claim 9, wherein the first parameters comprise a test time;
wherein the second parameters comprise at least one of a number of test slots, a number of test boards, or a number of controllers; and
wherein the cost of test comprises a minimized cost of test corresponding to the second configuration.
11. The method of claim 1, wherein the second configuration is determined to minimize a cost of test of the test system.
12. The method of claim 1, wherein the at least part of the first configuration is based on operational parameters of one or more test instruments in the test system that are used to test one or more devices.
13. The method of claim 1, wherein the second configuration is determined to optimize a cost of test of the test system;
wherein the method further comprises determining an initial cost of test for the first configuration; and
wherein an optimized cost of test is less than the initial cost of test.
14. The method of claim 13, wherein the optimized cost of test is different from a minimized cost of test of the test system.
15. The method of claim 1, further comprising:
determining third parameters based on the second parameters, the third parameters specifying at least part of the second configuration;
wherein the second configuration is determined also based on the third parameters.
16. The method of claim 15, wherein the third parameters comprise information about the test system unknown from the second parameters.
17. The method of claim 1, further comprising:
determining third parameters based on the first parameters, the third parameters specifying at least part of the operation of the test system;
wherein the second configuration is determined also based on the third parameters.
18. The method of claim 17, wherein the third parameters comprise information about the test system unknown from the first parameters.
19. The method of claim 18, wherein the second configuration is determined to minimize a cost of test of the test system.
21. The one or more non-transitory machine-readable storage devices of claim 20, wherein the operations further comprise:
configuring the test system automatically based on the second configuration.
22. The one or more non-transitory machine-readable storage devices of claim 21, wherein configuring the test system comprises controlling assembly of hardware components of the test system.
23. The one or more non-transitory machine-readable storage devices of claim 22, wherein controlling assembly comprises controlling robotics to assemble the test system using the hardware components.
24. The one or more non-transitory machine-readable storage devices of claim 21, wherein configuring the test system comprises specifying a number of test slots and a number of test boards to incorporate into the test system to perform testing.
25. The one or more non-transitory machine-readable storage devices of claim 21, wherein configuring the test system comprises specifying a number of controllers to include in the test system.
26. The one or more non-transitory machine-readable storage devices of claim 20, wherein the first parameters comprise one or more of the following: single test time, parallel test site efficiency, test controller efficiency, or multi-test time.
27. The one or more non-transitory machine-readable storage devices of claim 20, wherein the second parameters comprise one or more of the following: types of test head slots, test instrument assignments for test head slot types, test controllers per test instrument, or test system component costs.
28. The one or more non-transitory machine-readable storage devices of claim 20, wherein the test system is for testing a device;
wherein the operations comprise receiving third parameters based on the device to be tested; and
wherein the second configuration is determined based also on the third parameters.
29. The one or more non-transitory machine-readable storage devices of claim 28, wherein the first parameters comprise a test time;
wherein the second parameters comprise at least one of a number of test slots, a number of test boards, or a number of controllers; and
wherein the cost of test comprises a minimized cost of test corresponding to the second configuration.
30. The one or more non-transitory machine-readable storage devices of claim 20, wherein the second configuration is determined to minimize a cost of test of the test system.
31. The one or more non-transitory machine-readable storage devices of claim 20, wherein the at least part of the first configuration is based on operational parameters of one or more test instruments in the test system that are used to test one or more devices.
32. The one or more non-transitory machine-readable storage devices of claim 31, wherein the second configuration is determined to optimize a cost of test of the test system;
wherein the operations comprise determining an initial cost of test for the first configuration; and
wherein an optimized cost of test is less than the initial cost of test.
33. The one or more non-transitory machine-readable storage devices of claim 32, wherein the optimized cost of test is different from a minimized cost of test of the test system.
34. The one or more non-transitory machine-readable storage devices of claim 30, wherein the operations comprise:
determining third parameters based on the second parameters, the third parameters specifying at least part of the second configuration;
wherein the second configuration is determined also based on the third parameters.
35. The one or more non-transitory machine-readable storage devices of claim 34, wherein the third parameters comprise information about the test system unknown from the second parameters.
36. The one or more non-transitory machine-readable storage devices of claim 20, wherein the operations comprise:
determining third parameters based on the first parameters, the third parameters specifying at least part of the operation of the test system;
wherein the second configuration is determined also based on the third parameters.
37. The one or more non-transitory machine-readable storage devices of claim 36, wherein the third parameters comprise information about the test system unknown from the first parameters.
38. The one or more non-transitory machine-readable storage devices of claim 37, wherein the second configuration is determined to minimize a cost of test of the test system.

This specification relates generally to determining a configuration of a test system.

Test systems are configured to test the operation of electronic devices, such as microprocessors and memory chips. Testing may include sending signals to a device and determining, from the device's response, how the device reacted to those signals. The device's reaction dictates whether the device has passed or failed testing.

Test systems may be configurable. Different configurations may have different costs of test (COT). In some cases, a COT includes monetary expenses associated with testing a type of device, a device from a manufacturer, or a group of such devices. The cost of testing a device may be influenced by the number of test sites for implementing parallel testing. Several parameters about the device and the test system may be required to determine an optimal tester configuration, including the number of test sites, for a specific tester.

Example techniques for determining a configuration of a test system include receiving first parameters specifying at least part of an operation of a test system; receiving second parameters specifying at least part of a first configuration of the test system; and determining a second configuration of the test system based, at least in part, on the first parameters and the second parameters. The second configuration is determined to impact a cost of test of the test system. The example techniques also include generating, by one or more processing devices, data for a graphical user interface representing information about the second configuration and the cost of test; and outputting the data for the graphical user interface for rendering on a display device. The example techniques may include one or more of the following features, either alone or in combination.

The example techniques may include configuring the test system automatically based on the second configuration. Configuring the test system may include controlling assembly of hardware components of the test system. Controlling assembly may include controlling robotics to assemble the test system using the hardware components. Configuring the test system may include specifying a number of test slots and a number of test boards to incorporate into the test system to perform testing. Configuring the test system may include specifying a number of controllers to include in the test system.

The first parameters may include one or more of the following: single test time, parallel test site efficiency, test controller efficiency, a number of controllers in the first configuration, or multi-test time. The second parameters may include one or more of the following: types of test head slots, test instrument assignments for test head slot types, test controllers per test instrument, or test system component costs.

The test system may be for testing a device. The example techniques may include receiving third parameters based on the device to be tested. The second configuration may be determined based also on the third parameters. The first parameters may include a test time. The second parameters may include at least one of a number of test slots, a number of test boards, or a number of controllers. The cost of test may include a minimized cost of test corresponding to the second configuration. The second configuration may be determined to minimize a cost of test of the test system. The data for the graphical user interface may also represent information about one or more other configurations of the test system.

At least part of the first configuration may be based on operational parameters of one or more test instruments in the test system that are used to test one or more devices. The second configuration may be determined to optimize a cost of test of the test system. The example techniques may include determining an initial cost of test for the first configuration. An optimized cost of test is less than the initial cost of test. The optimized cost of test may be different from a minimized cost of test of the test system.

The example techniques may include determining third parameters based on the second parameters, with the third parameters specifying at least part of the second configuration. The second configuration may be determined also based on the third parameters. The third parameters may include information about the test system unknown from the second parameters.

The example techniques may include determining third parameters based on the first parameters, with the third parameters specifying at least part of the operation of the test system. The second configuration may be determined also based on the third parameters. The third parameters may include information about the test system unknown from the first parameters.

Any two or more of the features described in this specification, including in this summary section, can be combined to form implementations not specifically described herein.

The techniques, systems, and processes described herein, or portions thereof, can be implemented as or controlled by a computer program product that includes instructions that are stored on one or more non-transitory machine-readable storage media, and that are executable on one or more processing devices to implement or to control (e.g., to coordinate) the operations described herein. The techniques, systems, and processes described herein, or portions thereof, can be implemented as an apparatus, method, or electronic system that can include one or more processing devices and memory to store executable instructions to implement various operations.

The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.

FIG. 1 is a block diagram of components of an example test system.

FIG. 2 is a block diagram of a single test controller architecture for the test system.

FIG. 3 is a block diagram of a multiple test controller architecture for the test system.

FIG. 4 is a flowchart containing example operations for determining and displaying a configuration and an optimized cost of test of a test system.

FIGS. 5, 6, 7, 8, and 9 show components of a graphical user interface for displaying parameters for, and an optimized cost of test of, a test system.

Like reference numerals in different figures indicate like elements.

Described herein are example implementations of processes for determining a configuration of a test system, such as automatic test equipment (ATE). In an example, the processes are for determining an optimal configuration of a test system on a production floor to reduce the overall cost of testing a semiconductor device.

An example process may be performed on a computing system comprised of one or more processing devices, such as microprocessors. Operations performed by the process may include rendering a graphical user interface (GUI) on the display device of a computing system. The GUI is configured to receive parameters relating to the configuration of the test system, and to present information, such as numbers and graphics, relating to a test system having an improved or optimal cost of test (COT). An optimal cost of test may be, but is not limited to, a minimum cost of test. As noted, in some cases, a COT includes the monetary expenses associated with testing a type of device, a device from a manufacturer, or a group of such devices.

An example implementation of the process includes receiving first parameters specifying at least part of an operation of a test system. For example, the first parameters may include the time allocated to perform testing or “test time”. The example process may include receiving second parameters specifying at least part of a first configuration of the test system. The first configuration may be a current configuration of the test system having a COT that has not been optimized. The second parameters may include, for example, the number of test instruments or test controllers included in the current configuration of the test system.

The process may include determining a second configuration of the test system based, at least in part, on the first parameters and the second parameters. The second configuration may be determined to impact—for example, to reduce or to optimize—a COT of the test system for one or more devices under test (DUT). For example, the process may generate a second configuration of the test system that will have a minimum COT to test one or more DUTs based on constraints, such as available hardware. Information about the second configuration may be presented on the GUI.

For example, the information may represent the second configuration numerically, graphically, or both numerically and graphically. The information may also include the COT of the second configuration, along with COTs of one or more other configurations. The GUI may be rendered on a display locally or remotely.

The information may be used to configure the test system automatically. For example, the test system may be configured so that the test system is in the second configuration. Configuring the test system may include controlling assembly of hardware components of the test system. In some implementations, robotics may be used to configure the test system. In an example, one or more robots may be configured to connect, or to disconnect, hardware components from the test system to produce the second configuration. In an example, one or more robots may be configured to assemble the test system in the second configuration using hardware components. In some implementations, a computing system controls connecting, or disconnecting, hardware components from the test system to produce the second configuration. For example, the computing system may be configured—for example, programmed—to control switches or other electrical, mechanical, or electromechanical devices to configure the test system into the second configuration. For example, as part of the configuration process, the computing system may be configured to close some switches to connect some test instruments to the test system and to open other switches to disconnect some test instruments from the test system.

In some implementations, configuration of the test system may be controlled by a test engineer directing the robots via the computing system or specifying, on the computing system, control over switches or other electrical, mechanical, or electromechanical devices to configure the test system into the second configuration. In some implementations, the computing system may be programmed to direct the robots or to control the switches or other electrical, mechanical, or electromechanical devices automatically. For example, operations for directing the robots or controlling the switches or other electrical, mechanical, or electromechanical devices may be initiated independent of input on the computing system from the test engineer.

FIG. 1 shows components of example ATE 10. Notably, however, the systems and processes described in this specification are not limited to use with the ATE of FIG. 1 or to use with any particular type of DUT, but rather may be used in any appropriate technical context. In FIG. 1, the dashed lines represent, conceptually, potential signal paths between instruments.

ATE 10 includes a test head 11 and a test computer 12. Test head 11 interfaces to DUTs (not shown) on which tests are to be performed. Test computer 12 communicates with test head 11 to control testing. For example, the test computer may download test program sets to test instruments on the test head, which then run the test program sets to test DUTs in communication with the test head.

ATE 10 includes test instruments 13A to 13N (where “N” indicates more than three test instruments). In this example, the test instruments are housed in the test head. Each test instrument may be housed in a separate slot in the test head. In some implementations, the test instruments are modular. For example, one test instrument may be replaced with a different test instrument that performs a different function or the same function, without replacing other test instruments. Each test instrument may be configured to output test signals to test a DUT, and to receive signals from the DUT.

The signals may be digital, analog, wireless, or wired, for example. The signals received may include response signals that are based on the test signals, signals that originate from the DUT that are not prompted by (e.g., are not in response to) test signals, or both types of these signals.

ATE 10 includes a connection interface 14, which connects test instrument outputs 15 to a device interface board (DIB) 16. Connection interface 14 may include connectors 20 or other devices for routing signals between the test instruments and DIB 16. For example, the connection interface may include one or more circuit boards or other substrates on which such connectors are mounted. Other types of connections may be used.

In the example of FIG. 1, DIB 16 connects, electrically and mechanically, to test head 11. DIB 16 includes test sites 21, which may include pins, traces, or other points of electrical and mechanical connection to which DUTs connect. Test signals, response signals, and other signals pass over the test sites between the DUT and test instruments. DIB 16 also may include, for example, connectors, conductive traces, circuitry, or some combination thereof for routing signals between the test instruments and the DUTs.

FIGS. 2 and 3 represent, respectively, a single controller test architecture 25 and a multiple controller test architecture 26. FIG. 2 shows a DIB 27 containing eight test sites 28. In this example, test controller 29 is a processing device, such as a microprocessor. Test controller 29 may be part of a test instrument such as 13A to 13N and is used to control testing over all eight test sites 28 and, thus, over DUTs in those test sites. Control over all eight test sites is represented by dashed ellipse 30.

FIG. 3 shows a DIB 31 containing eight test sites 32. In this example, there are multiple test controllers 33, 34—each for controlling four test sites on the DIB. In this example, each test controller is a processing device, such as a microprocessor. Each test controller may be part of a separate test instrument and is used to control testing over four test sites and, thus, over DUTs in those test sites. Each test controller may contain or control digital resources, analog resources, or both digital and analog resources to test a DUT in a test site. Control over four test sites by controller 33 is represented by dashed ellipse 35. Control over four test sites by controller 34 is represented by dashed ellipse 36.

The example process described herein may configure features of a test system such as ATE 10 to optimize—for example, to minimize—a COT for one or more DUTs using that test system. An example process 40 for optimizing a COT is shown in FIG. 4. The following variables are used in a description of the process.

In an example, “SSTT”, or single test time, is the test time required to test a single DUT. The SSTT may be measured in seconds (“s” or “sec”). In an example, “TEST SITES” defines the total number of DUTs being tested in parallel. Parallel testing may include testing that is simultaneous, concurrent, or contemporaneous. In an example, “MSTT”, or single multi-site test time, is the total test time required to test all DUTs in parallel for a test insertion. The MSTT may be expressed in seconds. In an example, “PTE”, or parallel test efficiency, is a measure of efficiency when testing multiple DUTs in parallel during a single test sequence compared to the amount of time required to test a single DUT. The PTE may be expressed in terms of percentage (%).

In an example, the “Relative COT” is a COT value that is determined by dividing the total test system cost by the number of seconds in a year and multiplying the resulting quotient by the test time per DUT (“TEST TIME”). The test system cost or “system cost” is the actual cost—for example, in US dollars or other currency—of the test system. The Relative COT may be expressed in cost/time, e.g., US dollars/second, or shorthanded to exclude the time component. The “TEST TIME” variable is the total time to test a single DUT. TEST TIME is also expressed as “Seconds/Device” and is based on the MSTT and TEST SITES variables as shown in EQUATIONS 1 below.
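To make the arithmetic concrete with invented numbers: a test system costing $500,000, spread over the 31,536,000 seconds in a year, costs roughly $0.016 per second; at a TEST TIME of 0.2 seconds per device, the Relative COT is roughly $0.0032 per device.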

COT may be determined for a test architecture that includes a single test controller, as in the example of FIG. 2. In this example, the relative COT is determined using “EQUATIONS 1” as follows:
PTE=1−(MSTT−SSTT)/(SSTT*(TEST SITES−1))
MSTT=((1−PTE)*(SSTT*(TEST SITES−1)))+SSTT
SSTT=MSTT/(((1−PTE)*(TEST SITES−1))+1)
TEST SITES=(((MSTT/SSTT)−1)/(1−PTE))+1
TEST TIME=MSTT/TEST SITES
Relative COT=(Total System Cost/Seconds in One Year)*TEST TIME  EQUATIONS 1
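As an illustration only, and not part of the patent disclosure, the single-controller relations in EQUATIONS 1 translate directly into code. This is a minimal sketch: the function names and example values are invented, and the seconds-in-one-year figure assumes a 365-day year.

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000 seconds, assuming a 365-day year

def mstt_single(sstt: float, pte: float, test_sites: int) -> float:
    """Multi-site test time (MSTT) for a single test controller (EQUATIONS 1)."""
    return (1 - pte) * (sstt * (test_sites - 1)) + sstt

def relative_cot_single(total_system_cost: float, sstt: float,
                        pte: float, test_sites: int) -> float:
    """Relative COT: (system cost per second) * (test time per DUT)."""
    test_time = mstt_single(sstt, pte, test_sites) / test_sites  # seconds/device
    return (total_system_cost / SECONDS_PER_YEAR) * test_time

# Invented example: $500,000 system, 1.2 s SSTT, 95% PTE, 8 test sites.
print(relative_cot_single(500_000.0, sstt=1.2, pte=0.95, test_sites=8))
```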

For a test system that includes multiple test controllers, such as the system of FIG. 3, “CNTRLS” is the number of test controllers in a system configuration. In some implementations, a test system includes one or more test controllers responsible for running a test program. Some test system architectures have multiple test controllers in which each test controller runs a test program on a subset of test sites. The test controllers may be part of the test instruments, as noted. In some implementations, there may be one test controller per test instrument. In some implementations, there may be more than one test controller per test instrument. CNTRLS may be expressed as an integer value. In an example, “CntrlPTE” is the test controller parallel test efficiency. CntrlPTE is a measure of efficiency when operating multiple test controllers at the same time (e.g., simultaneously, concurrently, or contemporaneously). CntrlPTE is expressed as a percentage (%).

In an example, “TEST SITEScntrl” defines the total number of DUTs being tested in parallel using one test controller in a multiple controller architecture. In an example, “PTEcntrl”, or parallel test efficiency within a test controller, is a measure of efficiency when testing multiple DUTs in parallel during a single test sequence compared to the amount of time required to test a single DUT. For the total test sites assigned to one test controller, this is the PTE for those test sites tested using that test controller. In an example, “PTEcompound” is the overall parallel test efficiency of a test system configuration when using a multiple test controller architecture. PTEcompound may be expressed in terms of percentage (%). In an example, “PTEacrosscntrl” is a measure of parallel test efficiency when operating multiple test controllers at the same time (e.g., simultaneously, concurrently, or contemporaneously). In some implementations, this value is greater than 99%.

In an example, “MSTTcntrl” is the multi-site test time within a test controller. For the total test sites assigned to one test controller in a multi-controller system, this is the total test time for those test sites using that test controller. This is a parameter used for determining the compound PTE (PTEcompound). MSTTcntrl may be expressed in seconds. In an example, “MSTTtotal” is the test time for all test sites using all test controllers. This is the total test time during which all test sites operate with all test controllers. MSTTtotal may be expressed in seconds. In an example, “SSTTcntrl” is the single site test time for testing one DUT in one test site. SSTTcntrl may be expressed in seconds.

COT may be determined for a test architecture that includes multiple test controllers, as in the example of FIG. 3. In this example, the relative COT is determined using “EQUATIONS 2” as follows:
TEST SITEStotal=total number of test sites in the test system
TEST SITEScntrl=TEST SITEStotal/CNTRLS
MSTTcntrl=((1−PTEcntrl)*(SSTTcntrl*(TEST SITEScntrl−1)))+SSTTcntrl
SSTT=MSTTcntrl
MSTTtotal=((1−PTEcntrl)*(SSTT*(CNTRLS−1)))+SSTT
PTEcompound=1−(MSTTtotal−SSTTcntrl)/(SSTTcntrl*(TEST SITEStotal−1))
TEST TIME=MSTTtotal/TEST SITEStotal
Relative COT=(Total System Cost/Seconds in One Year)*TEST TIME  EQUATIONS 2
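As a companion sketch, again not from the patent itself, EQUATIONS 2 can be coded the same way, reusing SECONDS_PER_YEAR from the previous sketch. The equations as printed reuse PTEcntrl across controllers; PTEacrosscntrl could be substituted at the marked line if it is tracked separately.

```python
def relative_cot_multi(total_system_cost: float, sstt_cntrl: float,
                       pte_cntrl: float, test_sites_total: int,
                       cntrls: int) -> float:
    """Relative COT for a multiple test controller configuration (EQUATIONS 2)."""
    sites_per_cntrl = test_sites_total / cntrls
    # MSTT within one controller, over the test sites assigned to it.
    mstt_cntrl = (1 - pte_cntrl) * (sstt_cntrl * (sites_per_cntrl - 1)) + sstt_cntrl
    # Across controllers, each controller's MSTT plays the role of SSTT
    # (substitute PTEacrosscntrl for pte_cntrl here if tracked separately).
    mstt_total = (1 - pte_cntrl) * (mstt_cntrl * (cntrls - 1)) + mstt_cntrl
    test_time = mstt_total / test_sites_total  # seconds per device
    return (total_system_cost / SECONDS_PER_YEAR) * test_time
```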

Analysis constraints are parameters that restrict how a test system can be configured in a minimum COT analysis. Example analysis constraints for the process include site count, site count resolution, and maximum number of test controllers. For example, the process for determining the COT can be limited to a maximum number of test sites to analyze: a “site count” value of 32 instructs the process to evaluate test sites of a test system that are numbered from 1 to 32. The process may also be limited to a particular site count resolution. For example, a constraint of 128 test sites at a four test-site resolution would limit the process to analyzing 128 test sites in four-site increments. In an example, “maximum number of test controllers” is the maximum number of test controllers that a test system can support. Some example test systems may be configured to support one test controller, four test controllers, or eight test controllers.

In an example, analysis constraints for the process also include the following. “Test instrument” is the name of the test instrument available for inclusion in a test system configuration. Another example constraint parameter is “test slot head type”. In this regard, test systems may have different types of slots for different types of instruments. The “test slot head type” parameter defines the number of slots for a particular type of instrument that are available on a test system. This parameter matters because, as the test site count is increased during processing, a point may be reached where no more slots are available for a particular type of test instrument, which limits the possible test configurations. Another example parameter is “test controllers per test instrument”. In this regard, system architectures may allow multiple test controllers on a single test instrument board. This parameter defines the total number of test controllers each test instrument can include and may be used for multiple controller test system architecture configurations. Another example parameter is “test sites per test instrument”. This parameter is based on a device to be tested. For example, if the DUT has 64 digital pins and the test instrument supports 256 digital pins, then the instrument will support a maximum of four device test sites. Another example parameter is “price”, which represents the price of individual configurable test instruments. This parameter is used to determine the total configured price of the test system and is used in relative COT determinations.
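For concreteness, here is one way the constraints and per-instrument parameters described above might be grouped in code. This is a sketch only; every field name is illustrative rather than taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class InstrumentSpec:
    name: str                    # e.g., "digital", "DC V/I", "power supply"
    slots_available: int         # test head slots of this type in the system
    controllers_per_board: int   # test controllers per test instrument
    sites_per_instrument: int    # device test sites one instrument supports
    price: float                 # per-instrument price, used in relative COT

@dataclass
class AnalysisConstraints:
    max_sites: int        # maximum number of test sites to analyze, e.g., 128
    site_resolution: int  # analyze sites in increments of this size, e.g., 4
    max_controllers: int  # maximum test controllers the system supports
```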

FIG. 4 is a flowchart showing a process 40 containing example operations for determining a configuration of a test system and a COT associated with that configuration. Process 40 includes generating (41) a graphical user interface (GUI). The GUI includes fields for entering parameters relating to test time, parameters defining the test system configuration, and parameters defining analysis constraints. The GUI may also include fields for displaying a COT and parameters for an optimized tester configuration.

Referring also to FIG. 5, process 40 includes receiving (42) known test time parameters entered into fields of column 43 of GUI component 39. In this regard, GUI component 39 may be part of a single GUI that includes the GUI components shown in FIGS. 5 to 9. Examples of known test time parameters include SSTT, SITES, MSTT, PTE, SITEScntrl, MSTTcntrl, PTEcntrl, CNTRLS, and PTEacrosscntrl. In some implementations, all or some test time parameters may be known beforehand. Parameters that are known may be entered; those that are unknown may be determined from the known ones. Known test time parameters may be entered by a test engineer into a computing system that displays the GUI.

Referring also to FIG. 6, process 40 includes determining (44) unknown test time parameters for single test controller and multiple test controller systems. Operations performed to determine the unknown test time parameters may be performed by a test time calculation engine. The test time calculation engine may be implemented using executable instructions that determine the unknown test time parameters and that populate fields 45 of GUI component 46 with determined values of those test time parameters. The operations may determine the unknown test time parameters using the following equations for a single test controller test system.
SSTT=MSTT/(((1−PTE)*(TEST SITES−1))+1)
TEST SITES=(((MSTT/SSTT)−1)/(1−PTE))+1
MSTT=((1−PTE)*(SSTT*(TEST SITES−1)))+SSTT
PTE=1−(MSTT−SSTT)/(SSTT*(TEST SITES−1))
The operations may determine unknown test time parameters using the following equations for a multiple test controller test system.
TEST SITEScntrl=TEST SITES/CNTRLS
MSTTcntrl=((1−PTEcntrl)*(SSTT*(TEST SITEScntrl−1)))+SSTT
PTE=1−(MSTTcntrl−SSTT)/(SSTT*(TEST SITEScntrl−1))
MSTTtotal=((1−PTEcntrl)*(MSTTcntrl*(CNTRLS−1)))+MSTTcntrl
MSTT=MSTTtotal
The test time calculation engine populates fields 45 of GUI component 46 with values for the unknown test parameters that were determined based on the known test parameters.
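A minimal sketch, with an invented interface, of how such a calculation engine might fill in a single unknown single-controller parameter from the known ones using the equations above:

```python
def solve_single_controller(sstt=None, mstt=None, pte=None, sites=None):
    """Solve for whichever one of SSTT, MSTT, PTE, or TEST SITES is None."""
    if sstt is None:
        sstt = mstt / (((1 - pte) * (sites - 1)) + 1)
    elif sites is None:
        sites = (((mstt / sstt) - 1) / (1 - pte)) + 1
    elif mstt is None:
        mstt = ((1 - pte) * (sstt * (sites - 1))) + sstt
    elif pte is None:
        pte = 1 - (mstt - sstt) / (sstt * (sites - 1))
    return {"SSTT": sstt, "MSTT": mstt, "PTE": pte, "TEST SITES": sites}

# Invented example: PTE unknown; 1.2 s SSTT, 1.62 s MSTT, 8 sites -> PTE = 0.95.
print(solve_single_controller(sstt=1.2, mstt=1.62, sites=8))
```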

Referring also to FIG. 7, process 40 includes receiving (47), within field 48 of GUI component 49, information identifying a test system to analyze. The identity of the test system to analyze (“System-X”) may be entered or selected using a drop-down menu. The test system to be analyzed is associated with configuration data, some of which may be used to populate corresponding fields of the GUI. The configuration data specifies a configuration of the test system, including whether the test system uses a single test controller or multiple test controllers. In some implementations, the configuration data may specify the number and types of test instruments in the test system. For example, the configuration data may specify the number of digital test instruments in the test system, the number of serial digital test instruments in the test system, the number of alternating current (AC) source test instruments in the test system, the number of direct current (DC) voltage/current (V/I) test instruments in the test system, and the number of power supply test instruments in the test system.

Process 40 includes receiving (50) analysis constraints into fields of column 51 of GUI component 52. The analysis constraints may be entered or selected using a drop-down menu for each field. Examples of analysis constraints are provided above and include a maximum number of test sites for the test system, the total test site increment (test site resolution), and a maximum number of test controllers for the test system.

Process 40 includes receiving (55) modifications to entries in fields of columns 56, 57, 58, and 59 of GUI component 60. For example, parameters populated automatically based on the test configuration data loaded when the test system is selected may be modified. In this example, parameters that may be modified include the test head and test instrument type, the slot count for each test head and test instrument type, the number of test controllers per test instrument (also referred to as “test instrument board”), and the price of each component of the test system.

Process 40 includes receiving (61), into fields of column 62 of GUI component 60, parameters specifying the number of test sites that each test instrument supports. The number of test sites may be entered or selected using a drop-down menu for each field.

Operations 64, 65, and 66 of FIG. 4 may be performed using EQUATIONS 1 for single test controller test systems and EQUATIONS 2 for multiple test controller test systems. Operations 64, 65, and 66 may be performed by a relative COT calculation engine. The relative COT calculation engine may include executable instructions to perform calculations using EQUATIONS 1 and EQUATIONS 2, including performing iterative calculations for different test system configurations. In this example, process 40 includes determining (64) the relative COT for a test system configuration having a specified number of test sites, a specified configuration of test instruments, a specified cost of the configuration, and a specified number of test slots. In this example, process 40 includes determining (65) a configuration of the test system that results in a minimum COT. For example, process 40 determines the number of test sites, the configuration of test instruments, the number of test slots, and the number of test controllers that together produce a minimum COT. In this example, process 40 includes determining (66) the relative COT for all potential (e.g., configurable) numbers of test sites in the test system and for all potential (e.g., configurable) numbers of test controllers in the test system.
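One way the iterative search of operations 64, 65, and 66 might look in code, reusing relative_cot_multi and AnalysisConstraints from the earlier sketches. The pricing callback system_cost_fn is a placeholder for summing the prices of the instruments needed for a given configuration; it is an assumption, not an interface from the patent.

```python
def minimize_cot(system_cost_fn, sstt_cntrl, pte_cntrl, constraints):
    """Iterate over configurable site/controller counts; return the minimum COT."""
    best = None
    for cntrls in range(1, constraints.max_controllers + 1):
        for sites in range(constraints.site_resolution,
                           constraints.max_sites + 1,
                           constraints.site_resolution):
            if sites % cntrls:  # require sites to divide evenly among controllers
                continue
            cost = system_cost_fn(sites, cntrls)  # configured system price
            cot = relative_cot_multi(cost, sstt_cntrl, pte_cntrl, sites, cntrls)
            if best is None or cot < best[0]:
                best = (cot, sites, cntrls)
    return best  # (minimum relative COT, test sites, test controllers)
```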

Referring also to FIG. 8, process 40 includes outputting (70), for display on the GUI, parameters for the configuration of the current test system and the COT of the current test system. For example, the parameters for the current test system configuration 71 may be displayed in parts 72 of GUI component 74. The current test system configuration may include the number and types of digital test instruments in the test system, the number of slots in the test system, and the number of test sites used for testing on the DIB. The COT of the current test system is also displayed. Field 75 includes the cost of the configuration of the test system; this cost is a sum of the costs incurred from the various components in column 76. Field 76 includes the relative COT, which is determined from EQUATIONS 1 or EQUATIONS 2 above.

Process 40 includes outputting (79), for display on the GUI, configuration parameters of the test system determined to have the minimum COT. The minimum COT is typically less than the COT of the current (or initial) test system configuration. The configuration of the test system having the minimum COT obeys whatever constraints are input into the system. In this example, parameters for the test system configuration may be displayed in columns 80. For example, the test system configuration having the minimum COT may include the number and types of digital test instruments in the test system, the number of slots in the test system used, and the number of test sites used for testing on the DIB. The minimum COT is also displayed in columns 80. Field 81 includes the cost of the configuration of the test system having the minimum COT. Field 83 includes the minimum relative COT, which is determined from EQUATIONS 1 or EQUATIONS 2 above.

Process 40 includes determining (85) the relative COT for multiple different configurations of the test system—for example, for test system configurations having different numbers of test sites or controllers. Determining (85) may be part of the operations performed to determine the configuration of the test system having the minimum COT. For example, in order to determine a configuration having a minimum COT, process 40 iterates over various different configurations of the test system. Process 40 saves these configurations in computer memory and uses the data for these configurations to compare the different configurations graphically.

Process 40 outputs (85), for display on the GUI, a graphical representation of the relative COT of different configurations of the test system. In the example of FIG. 9, graphical representation 90 plots the change in relative COT for different numbers of test sites in the test system, assuming all remaining configuration parameters are the same. The plot shows the minimum relative COT, which occurs at about 12 test sites in this example. In some implementations, differently colored plots may be included in the graphical representation, with each differently colored plot being for a test system having a different number of test controllers.
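To suggest what such a plot might look like, here is a sketch using matplotlib and relative_cot_multi from the earlier sketch; the pricing function and all numbers are invented for illustration.

```python
import matplotlib.pyplot as plt

def toy_system_cost(sites: int, cntrls: int) -> float:
    # Invented pricing: base chassis plus per-site and per-controller costs.
    return 200_000 + 3_000 * sites + 25_000 * cntrls

site_counts = list(range(4, 129, 4))
for cntrls in (1, 4):
    cots = [relative_cot_multi(toy_system_cost(s, cntrls), 1.2, 0.95, s, cntrls)
            for s in site_counts]
    plt.plot(site_counts, cots, label=f"{cntrls} controller(s)")

plt.xlabel("Test sites")
plt.ylabel("Relative COT (cost per device)")
plt.legend()
plt.show()
```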

Configurations determined by process 40 are limited by the hardware capability of the test system. In an example, process 40 determines a test system configuration that produces the lowest COT. Process 40 may identify a test system having the hardware capability to support this configuration. Process 40 may then control a tester network server to configure the test system into that configuration.

The tester network server may be or include a computing system of the type described herein. The tester network server may be connected to communicate with process 40 and with the various test instruments contained in the test system. Process 40 may instruct the tester network server to include certain test instruments, to exclude certain test instruments, to set the number of each included type of test instrument, and to vary operational parameters of the included test instruments. For example, process 40 may instruct the tester network server to include zero, one or more digital test instruments in the test system; zero, one or more serial digital test instruments in the test system; zero, one or more AC source test instruments in the test system; zero, one or more DC V/I test instruments in the test system; and/or zero, one or more power supply test instruments in the test system. For all or some of the test instruments included in the system, process 40 may instruct the tester network server to change one or more operational parameters. For example, process 40 may instruct the tester network server to set the voltage and current outputs of a DC V/I test instrument or to set the output power level of a power supply test instrument. Process 40 may send, to a production floor supervisor, one or more notifications specifying the test system configuration and which test system to use for testing.

In FIG. 4, process 40 includes identifying (91) a test system having components that can be configured into the configuration determined to have the minimum COT. Process 40 includes configuring (92) the test system so that it has the minimum COT. Configuring the test system may include controlling assembly of hardware components, such as test instruments, of the test system. In some examples, assembly may include connecting test instruments together to build the test system or disconnecting test instruments from an existing test system. Configuring the test system may include controlling robotics to assemble the test system using appropriate hardware components. For example, robotics may be used to move test instruments across a test floor to insert the test instruments into slots of a test system.

In some implementations, process 40 may configure the test system automatically. Automatic in this example includes operations performed absent user intervention unless an error occurs. In some implementations, process 40 may configure the test system based on input from a test engineer. For example, a test engineer may review the configuration determined by process 40 and instruct process 40 to produce the test system. In some implementations, process 40 may receive inputs during the configuring process to guide assembly of hardware components.

The configuration of the test system may be validated (93). For example, process 40 may confirm that the configuration is appropriate for the test system given any constraints associated with the test system.

The example process described herein may be implemented by, and/or controlled using, one or more computer systems comprising hardware or a combination of hardware and software. For example, a system like the ones described herein may include various test controllers and/or processing devices located at various points in the system to control operation of the automated elements. A central computer may coordinate operation among the various test controllers or processing devices. The central computer, test controllers, and processing devices may execute various software routines to effect control and coordination of the various automated elements.

The example process described herein can be implemented and/or controlled, at least in part, using one or more computer program products, e.g., one or more computer programs tangibly embodied in one or more information carriers, such as one or more non-transitory machine-readable media, for execution by, or to control the operation of, one or more data processing apparatus, e.g., a programmable processor, a computer, multiple computers, and/or programmable logic components.

A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one test site or distributed across multiple test sites and interconnected by a network.

Actions associated with implementing all or part of the testing can be performed by one or more programmable processors executing one or more computer programs to perform the functions described herein. All or part of the testing can be implemented using special purpose logic circuitry, e.g., an FPGA (field programmable gate array) and/or an ASIC (application-specific integrated circuit).

Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only storage area or a random access storage area or both. Elements of a computer (including a server) include one or more processors for executing instructions and one or more storage area devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from, or transfer data to, or both, one or more machine-readable storage media, such as mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. Machine-readable storage media suitable for embodying computer program instructions and data include all forms of non-volatile storage area, including by way of example, semiconductor storage area devices, e.g., EPROM, EEPROM, and flash storage area devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.

Each computing device may include a hard drive for storing data and computer programs, and a processing device (e.g., a microprocessor) and memory (e.g., RAM) for executing computer programs. Each computing device may include an image capture device, such as a still camera or video camera. The image capture device may be built-in or simply accessible to the computing device and may be used to capture images of, and to control operations of, robotics to configure a test system.

Each computing device may include a graphics system, including a display screen. A display screen, such as an LCD or a CRT (Cathode Ray Tube) displays, to a user, images that are generated by the graphics system of the computing device. As is well known, display on a computer display (e.g., a monitor) physically transforms the computer display. For example, if the computer display is LCD-based, the orientation of liquid crystals can be changed by the application of biasing voltages in a physical transformation that is visually apparent to the user. As another example, if the computer display is a CRT, the state of a fluorescent screen can be changed by the impact of electrons in a physical transformation that is also visually apparent. Each display screen may be touch-sensitive, allowing a user to enter information onto the display screen via a virtual keyboard. On some computing devices, such as a desktop or smartphone, a physical QWERTY keyboard and scroll wheel may be provided for entering information onto the display screen. Each computing device, and computer programs executed thereon, may also be configured to accept voice commands, and to perform functions in response to such commands. For example, the example processes described herein may be initiated at a client, to the extent possible, via voice commands.

Elements of different implementations described herein may be combined to form other embodiments not specifically set forth above. Elements may be left out of the structures described herein without adversely affecting their operation. Furthermore, various separate elements may be combined into one or more individual elements to perform the functions described herein.

Inventor: Kramer, Randall T.
