A system and method for yield management is disclosed wherein a data set containing one or more prediction variable values and one or more response values is input into the system. The system can pre-process the input data set to remove prediction variables with excessive missing values and data cases with missing values. The pre-processed data can then be used to generate a model that may be a decision tree. The system can accept user input to modify the generated model. Once the model is complete, one or more statistical analysis tools can be used to analyze the data and generate a list of the key yield factors for the particular data set.

Patent: RE42481
Priority: Dec 08 1999
Filed: Oct 22 2004
Issued: Jun 21 2011
Expiry: Dec 08 2019
27. A yield management method, comprising:
pre-processing, via a computing device, an input data set comprising one or more prediction variables and one or more response variables containing data about a particular semiconductor process to remove data having at least a predetermined number of missing values to generate pre-processed data, the pre-processing further comprising removing one or more prediction variables from the input data set having more than a predetermined number of classes;
generating, via the computing device, a model based on the pre-processed data, the model being a decision tree identifying one or more variables as key yield factors; and
analyzing the model using a statistical tool to examine one or more key yield factors based on the input data set.
12. A yield management method, comprising:
pre-processing, via a computing device, an input data set comprising one or more prediction variables and one or more response variables containing data about a particular semiconductor process, the pre-processing further comprising removing one or more prediction variables from the input data set having more than a predetermined number of missing values, removing one or more prediction variables from the input data set having more than a predetermined number of classes and removing data having more than a predetermined number of missing values to generate pre-processed data;
generating, via the computing device, a model based on the pre-processed data, the model being a decision tree;
modifying, via the computing device, the model based on user input; and
analyzing, via the computing device, the model using statistical tools to examine one or more key yield factors based on the input data set.
31. A computerized yield management system, comprising:
pre-processing computing device means executed by a data pre-processor in a computer processing unit for pre-processing an input data set comprising one or more prediction variables and one or more response variables containing data about a particular semiconductor process to remove data having at least a predetermined number of missing values to generate pre-processed data, and the pre-processing computing device means comprising means for removing one or more prediction variables from the input data set having more than a predetermined number of classes;
computing device means executed by a model builder in the computer processing unit, the model builder being in communication with the data pre-processor, for generating a model based on the pre-processed data, the model being a decision tree identifying one or more variables as key yield factors; and
computing device means executed by a statistical tool in the computer processing unit, the statistical tool being in communication with the model builder, for analyzing the model using the statistical tool to generate yield management information.
23. A computerized yield management system, comprising:
pre-processing computing device means executed by a data pre-processor in a computer processing unit for pre-processing an input data set comprising one or more prediction variables and one or more response variables containing data about a particular semiconductor process to remove data having at least a predetermined number of missing values to generate pre-processed data, and the pre-processing computing device means comprising means for removing one or more prediction variables from the input data set having more than a predetermined number of classes;
computing device means executed by a model builder in the computer processing unit, the model builder being in communication with the data pre-processor, for generating a model based on the pre-processed data, the model being a decision tree identifying one or more variables as key yield factors; and
computing device means executed by a statistical tool library in the computer processing unit, the statistical tool library being in communication with the model builder, for analyzing the model using a statistical tool to examine one or more key yield factors based on the input data set.
1. A computerized yield management system, comprising:
pre-processing computing device means executed by a data pre-processor in a computer processing unit for pre-processing an input data set comprising one or more prediction variables and one or more response variables containing data about a particular semiconductor process, the pre-processing computing device means further comprising means for removing one or more prediction variables from the input data set having more than a predetermined number of missing values, means for removing one or more prediction variables from the input data set having more than a predetermined number of classes, and means for removing data having more than a predetermined number of missing values to generate pre-processed data;
model generating computing device means executed by a model builder in the computer processing unit, the model builder being in communication with the data pre-processor, for generating a model based on the pre-processed data, the model being a decision tree;
computing device means executed by the model builder for modifying the model based on user input; and
computing device means executed by a statistical tool library in the computer processing unit, the statistical tool library being in communication with the model builder, for analyzing the model using a statistical tool to generate one or more key yield factors based on the input data set.
2. The system of claim 1, wherein the model generating computing device means further comprises means for building a decision tree containing a root node, one or more intermediate nodes and one or more terminal nodes wherein a response value at the one or more terminal nodes is presented to the user and splitting means for splitting a node in the tree into one or more sub-nodes based on prediction variables contained in the node.
3. The system of claim 2, wherein the splitting means further comprises means for determining if a number of data cases in a node is less than a predetermined threshold value, means for calculating a goodness of split value for splitting the node based on each prediction variable in the node, means for selecting prediction variables having a maximum goodness of split value and means for splitting the node into one or more sub-nodes based on the prediction variables having the maximum goodness of split value.
4. The system of claim 3, wherein the one or more prediction variables and the one or more response variables are categorical variables.
5. The system of claim 4, wherein the splitting means further comprises a splitting rule and a goodness of split rule, the splitting rule comprising means for placing a case into a left sub-node if the case is included in the values of the prediction variable and wherein the goodness of split rule is of the form:
$$\Phi(s) = g(T) - \frac{N_{T_L}}{N_T}\,g(T_L) - \frac{N_{T_R}}{N_T}\,g(T_R)$$
where $\Phi(s)$ represents a goodness of split rule for a split, s; $g(T)$ represents noisiness of a node T; $g(T_L)$ represents noisiness of a left sub-node of node T; $g(T_R)$ represents noisiness of a right sub-node of node T; $N_T$ is a number of cases in node T; $N_{T_L}$ is a number of cases in the left sub-node of node T; and $N_{T_R}$ is a number of cases in the right sub-node of node T.
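Although the claims are legal text, the goodness-of-split computation they recite is concrete enough to sketch. The following is a minimal illustrative Python sketch, not the patented implementation: the "noisiness" measure g(T) is assumed here to be Gini impurity for a categorical response (the claims do not fix the form of g), and all function names are hypothetical.

```python
from collections import Counter

def gini(responses):
    """One possible 'noisiness' measure g(T) for a categorical response
    (an assumption; the claims do not fix the form of g)."""
    n = len(responses)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(responses).values())

def goodness_of_split(node_y, left_y, right_y, g=gini):
    """Phi(s) = g(T) - (N_TL / N_T) * g(TL) - (N_TR / N_T) * g(TR)."""
    n = len(node_y)
    return (g(node_y)
            - (len(left_y) / n) * g(left_y)
            - (len(right_y) / n) * g(right_y))

def best_split(cases, candidate_splits, g=gini):
    """Evaluate each candidate split (a predicate sending a case left)
    over (x, y) cases and return the split maximizing Phi(s)."""
    node_y = [y for _, y in cases]
    best_phi, best = float("-inf"), None
    for split in candidate_splits:
        left_y = [y for x, y in cases if split(x)]
        right_y = [y for x, y in cases if not split(x)]
        phi = goodness_of_split(node_y, left_y, right_y, g)
        if phi > best_phi:
            best_phi, best = phi, split
    return best, best_phi
```

The same helper covers the node-splitting loop of claim 3: compute the goodness of split for each prediction variable and split on the variable with the maximum value.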
6. The system of claim 3, wherein the one or more prediction variables and the one or more response variables are numerical.
7. The system of claim 6, wherein the splitting means further comprises a splitting rule and a goodness of split rule, the splitting rule comprising means for placing a case into a left sub-node if the value of the prediction variable for a particular case is less than or equal to a first predetermined value of the prediction variables or if the value of the prediction variable for the particular case is between the first predetermined value and a second predetermined value of the prediction variables and wherein the goodness of split rule is of the form:
$$\Phi(S^*) = \begin{cases} g(T) - \dfrac{N_{T_L}}{N_T}\,g(T_L) - \dfrac{N_{T_R}}{N_T}\,g(T_R) & \text{if } S^* \text{ has a first form} \\ c\left(g(T) - \dfrac{N_{T_L}}{N_T}\,g(T_L) - \dfrac{N_{T_R}}{N_T}\,g(T_R)\right) & \text{if } S^* \text{ has a second form} \end{cases}$$
where $\Phi(S^*)$ represents a goodness of split rule for a split, $S^*$; $g(T)$ represents noisiness of a node T; $g(T_L)$ represents noisiness of a left sub-node of node T; $g(T_R)$ represents noisiness of a right sub-node of node T; $N_T$ is a number of cases in node T; $N_{T_L}$ is a number of cases in the left sub-node of node T; $N_{T_R}$ is a number of cases in the right sub-node of node T; and c is a user-selectable variable between 0 and 1.
8. The system of claim 3, wherein a response variable from the one or more response variables is a categorical variable and a prediction variable from the one or more prediction variables is a numerical variable.
9. The system of claim 8, wherein the splitting means further comprises a splitting rule and a goodness of split rule, the splitting rule comprising means for placing a case into a left sub-node if the value of the prediction variable for a particular case is less than or equal to a first predetermined value of the prediction variables or if the value of the prediction variable for the particular case is between the first predetermined value and a second predetermined value of the prediction variables and wherein the goodness of split rule is of the form:
$$\Phi(S) = \begin{cases} g(T) - \dfrac{N_{T_L}}{N_T}\,g(T_L) - \dfrac{N_{T_R}}{N_T}\,g(T_R) & \text{if } S \text{ has a first form} \\ c\left(g(T) - \dfrac{N_{T_L}}{N_T}\,g(T_L) - \dfrac{N_{T_R}}{N_T}\,g(T_R)\right) & \text{if } S \text{ has a second form} \end{cases}$$
where $\Phi(S)$ represents a goodness of split rule for a split, S; $g(T)$ represents noisiness of a node T; $g(T_L)$ represents noisiness of a left sub-node of node T; $g(T_R)$ represents noisiness of a right sub-node of node T; $N_T$ is a number of cases in node T; $N_{T_L}$ is a number of cases in the left sub-node of node T; $N_{T_R}$ is a number of cases in the right sub-node of node T; and c is a user-selectable variable between 0 and 1.
10. The system of claim 3, wherein a response variable from the one or more response variables is a numerical variable and a prediction variable from the one or more prediction variables is a categorical variable.
11. The system of claim 10, wherein the splitting means further comprises a splitting rule and a goodness of split rule, the splitting rule comprising means for placing a case into a left sub-node if the case is included in the values of the prediction variable and wherein the goodness of split rule is of the form:
$$\Phi(s) = g(T) - \frac{N_{T_L}}{N_T}\,g(T_L) - \frac{N_{T_R}}{N_T}\,g(T_R)$$
where $\Phi(s)$ represents a goodness of split rule for a split, s; $g(T)$ represents noisiness of a node T; $g(T_L)$ represents noisiness of a left sub-node of node T; $g(T_R)$ represents noisiness of a right sub-node of node T; $N_T$ is a number of cases in node T; $N_{T_L}$ is a number of cases in the left sub-node of node T; and $N_{T_R}$ is a number of cases in the right sub-node of node T.
13. The method of claim 12, wherein generating the model further comprises building a decision tree containing a root node, one or more intermediate nodes and one or more terminal nodes, wherein a response value at the one or more terminal nodes is presented to the user, and splitting a node in the tree into one or more sub-nodes based on prediction variables contained in the node.
14. The method of claim 13, wherein the splitting further comprises determining if a number of data cases in a node is less than a predetermined threshold value, calculating a goodness of split value for splitting the node based on each prediction variable in the node, selecting prediction variables having a maximum goodness of split value and splitting the node into one or more sub-nodes based on the prediction variables having the maximum goodness of split value.
15. The method of claim 14, wherein the one or more prediction variables and the one or more response variables are categorical variables.
16. The method of claim 15, wherein the splitting further comprises a splitting rule and a goodness of split rule, the splitting rule comprising placing a case into a left sub-node if the case is included in the values of the prediction variable and wherein the goodness of split rule is of the form:
$$\Phi(s) = g(T) - \frac{N_{T_L}}{N_T}\,g(T_L) - \frac{N_{T_R}}{N_T}\,g(T_R)$$
where $\Phi(s)$ represents a goodness of split rule for a split, s; $g(T)$ represents noisiness of a node T; $g(T_L)$ represents noisiness of a left sub-node of node T; $g(T_R)$ represents noisiness of a right sub-node of node T; $N_T$ is a number of cases in node T; $N_{T_L}$ is a number of cases in the left sub-node of node T; and $N_{T_R}$ is a number of cases in the right sub-node of node T.
17. The method of claim 14, wherein the one or more prediction variables and the one or more response variables are numerical.
18. The method of claim 17, wherein the splitting further comprises a splitting rule and a goodness of split rule, the splitting rule comprising placing a case into a left sub-node if the value of the prediction variable for a particular case is less than or equal to a first predetermined value of the prediction variables or if the value of the prediction variable for the particular case is between the first predetermined value and a second predetermined value of the prediction variables and wherein the goodness of split rule is of the form:
$$\Phi(S^*) = \begin{cases} g(T) - \dfrac{N_{T_L}}{N_T}\,g(T_L) - \dfrac{N_{T_R}}{N_T}\,g(T_R) & \text{if } S^* \text{ has a first form} \\ c\left(g(T) - \dfrac{N_{T_L}}{N_T}\,g(T_L) - \dfrac{N_{T_R}}{N_T}\,g(T_R)\right) & \text{if } S^* \text{ has a second form} \end{cases}$$
where $\Phi(S^*)$ represents a goodness of split rule for a split, $S^*$; $g(T)$ represents noisiness of a node T; $g(T_L)$ represents noisiness of a left sub-node of node T; $g(T_R)$ represents noisiness of a right sub-node of node T; $N_T$ is a number of cases in node T; $N_{T_L}$ is a number of cases in the left sub-node of node T; $N_{T_R}$ is a number of cases in the right sub-node of node T; and c is a user-selectable variable between 0 and 1.
19. The method of claim 14, wherein a response variable from the one or more response variables is a categorical variable and a prediction variable from the one or more prediction variables is a numerical variable.
20. The method of claim 19, wherein the splitting further comprises a splitting rule and a goodness of split rule, the splitting rule comprising placing a case into a left sub-node if the value of the prediction variable for a particular case is less than or equal to a first predetermined value of the prediction variables or if the value of the prediction variable for the particular case is between the first predetermined value and a second predetermined value of the prediction variables and wherein the goodness of split rule is of the form:
$$\Phi(S) = \begin{cases} g(T) - \dfrac{N_{T_L}}{N_T}\,g(T_L) - \dfrac{N_{T_R}}{N_T}\,g(T_R) & \text{if } S \text{ has a first form} \\ c\left(g(T) - \dfrac{N_{T_L}}{N_T}\,g(T_L) - \dfrac{N_{T_R}}{N_T}\,g(T_R)\right) & \text{if } S \text{ has a second form} \end{cases}$$
where $\Phi(S)$ represents a goodness of split rule for a split, S; $g(T)$ represents noisiness of a node T; $g(T_L)$ represents noisiness of a left sub-node of node T; $g(T_R)$ represents noisiness of a right sub-node of node T; $N_T$ is a number of cases in node T; $N_{T_L}$ is a number of cases in the left sub-node of node T; $N_{T_R}$ is a number of cases in the right sub-node of node T; and c is a user-selectable variable between 0 and 1.
21. The method of claim 14, wherein a response variable from the one or more response variables is a numerical variable and a prediction variable from the one or more prediction variables is a categorical variable.
22. The method of claim 21, wherein the splitting further comprises a splitting rule and a goodness of split rule, the splitting rule comprising placing a case into a left sub-node if the case is included in the values of the prediction variable and wherein the goodness of split rule is of the form:
$$\Phi(s) = g(T) - \frac{N_{T_L}}{N_T}\,g(T_L) - \frac{N_{T_R}}{N_T}\,g(T_R)$$
where $\Phi(s)$ represents a goodness of split rule for a split, s; $g(T)$ represents noisiness of a node T; $g(T_L)$ represents noisiness of a left sub-node of node T; $g(T_R)$ represents noisiness of a right sub-node of node T; $N_T$ is a number of cases in node T; $N_{T_L}$ is a number of cases in the left sub-node of node T; and $N_{T_R}$ is a number of cases in the right sub-node of node T.
24. The system of claim 23 wherein the pre-processing computing device means further comprises means for removing one or more prediction variables from the input data set having more than a predetermined number of missing values.
25. The system of claim 23 wherein the pre-processing computing device means further comprises means for removing data having more than a predetermined number of missing values.
26. The system of claim 23, further comprising means for modifying the model based on user input.
28. The method of claim 27 wherein the pre-processing further comprises removing one or more prediction variables from the input data set having more than a predetermined number of missing values.
29. The method of claim 27 wherein the pre-processing further comprises removing data having more than a predetermined number of missing values.
30. The method of claim 27, further comprising modifying the model based on user input.
32. The system of claim 31 wherein the pre-processing computing device means further comprises means for removing data containing erroneous values.
33. The system of claim 31 wherein the pre-processing computing device means further comprises means for removing data containing invalid values.
34. The system of claim 31, further comprising means for modifying the model based on user input.

This invention relates generally to a system and method for managing a semiconductor process and in particular to a system and method for managing yield in a semiconductor process.

The semiconductor industry is continually pushing toward smaller and smaller geometries of the semiconductor devices being produced since smaller devices generate less heat and operate at a higher speed than larger devices. Currently, a single chip may contain over one billion patterns. The semiconductor manufacturing process is extremely complicated since it involves hundreds of processing steps. A mistake or small error at any of the process steps or tool specifications may cause lower yield in the final semiconductor product, wherein yield may be defined as the number of functional devices produced by the process as compared to the theoretical number of devices that could be produced assuming no bad devices. Improving yield is a critical problem in the semiconductor industry and has a direct economic impact on it: a higher yield translates into more devices that may be sold by the manufacturer.

Semiconductor manufacturing companies have long collected data about various process parameters in an attempt to improve the yield of the semiconductor process. Today, explosive growth in database technology has contributed to the yield analysis that each company performs. In particular, database technology has far outpaced the ability of conventional statistical methods to interpret the data and relate yield to the major yield factors. This has created a need for a new generation of tools and techniques for automated and intelligent database analysis for yield management.

Current conventional yield management systems have a number of limitations and disadvantages which make them less desirable to the semiconductor industry. For example, the conventional systems may require some manual processing which slows the analysis and makes it susceptible to human error. In addition, these conventional systems may not handle both continuous and categorical yield management variables. Some conventional systems cannot handle missing data elements and do not permit rapid searching through hundreds of yield parameters to identify key yield factors. Some conventional systems output data that is difficult to understand or interpret even by knowledgeable semiconductor yield management people. In addition, the conventional systems typically process each yield parameter separately, which is time consuming and cumbersome and cannot identify more than one parameter at a time.

Thus, it is desirable to provide a yield management system and method which solves the above limitations and disadvantages of the conventional systems and it is to this end that the present invention is directed.

The yield management system and method in accordance with the invention may provide many advantages over conventional methods and systems which make the yield management system and method more useful to semiconductor device manufacturers. In particular, the system may be fully automated and easy to use so that no extra training is necessary to make use of the yield management system. In addition, the system handles both continuous (e.g., temperature) and categorical (e.g., Lot 1, Lot 2, etc.) variables. The system also automatically handles missing data during a pre-processing step. The system can rapidly search through hundreds of yield parameters and generate an output indicating the one or more key yield factors/parameters. The system generates an output (a decision tree) that is easy to interpret and understand. The system is also very flexible in that it permits prior yield parameter knowledge (from users) to be easily incorporated into the building of the model in accordance with the invention. Unlike conventional systems, if there is more than one yield factor/parameter affecting the yield of the process, the system can identify all of the parameters/factors simultaneously so that the multiple factors are identified during a single pass through the yield data.

In accordance with a preferred embodiment of the invention, the yield management method may receive a yield data set. When a data set comes in, it first goes through a data preprocessing step in which the validity of the data in the data set is checked and cases or parameters with missing data are eliminated. Using the cleaned-up data set, a Yield Mine model is built during a model building step. Once the model is generated automatically by the yield management system, the model may be modified by one or more users based on their experience or prior knowledge of the data set. Once the model has been modified, the data set may be processed using various statistical analysis tools to help the user better understand the relationship between the response and prediction variables.
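The pre-processing step described above maps onto a small illustrative sketch. All thresholds and names below are assumptions made for illustration; the patent says only that the cut-offs are "predetermined":

```python
import pandas as pd

def preprocess(df: pd.DataFrame,
               max_missing_per_var: int = 10,   # assumed threshold
               max_classes: int = 50,           # assumed threshold
               max_missing_per_case: int = 0):  # assumed threshold
    """Sketch of the pre-processing step: drop prediction variables
    (columns) with too many missing values or too many classes, then
    drop cases (rows) exceeding the allowed number of missing values."""
    # Remove variables with more than the allowed number of missing values.
    df = df.loc[:, df.isna().sum() <= max_missing_per_var]
    # Remove categorical variables with more than the allowed number of classes.
    keep = [c for c in df.columns
            if df[c].dtype != object or df[c].nunique() <= max_classes]
    df = df[keep]
    # Remove cases with more than the allowed number of missing values.
    return df[df.isna().sum(axis=1) <= max_missing_per_case]
```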

FIG. 1 is a diagram illustrating an example of a yield management system in accordance with the invention implemented on a personal computer;

FIG. 2 is a block diagram illustrating more details of the yield management system in accordance with the invention;

FIG. 3 is a flowchart illustrating an example of a yield management method in accordance with the invention;

FIG. 4 is a diagram illustrating the data preprocessing procedure in accordance with the invention;

FIG. 5 illustrates an example of a yield parameter being selected by the user and a tree node being automatically split or manually split in accordance with the invention;

FIG. 6 is a flowchart illustrating a recursive node splitting method in accordance with the invention;

FIG. 7 is a diagram illustrating two decision trees wherein one tree is split using the parameter selected by the method and the other is split using a user-selected parameter in accordance with the invention;

FIG. 8 is a flowchart illustrating a method for tree prediction in accordance with the invention; and

FIG. 9 is a diagram illustrating examples of the statistical analysis tools available at a node in accordance with the invention.

where ε (the error term of the fitted linear model) is assumed to be i.i.d. Gaussian with mean 0 and variance σ². Now, let $\hat{y}_i$ denote the fitted value of the model for case i, and let r be the $L_p$ norm of the residuals. That is,

$$r = \left(\sum_{i=1}^{N_T} (y_i - \hat{y}_i)^p\right)^{1/p} \qquad (6)$$

If $\Phi(S^*) < c \cdot r$, then $S^*$ is the best split. Otherwise, the linear model fits better than split forms 1 and 2. In this case, the node T is split into d sub-nodes, $T_1, T_2, \ldots, T_d$. Let $\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_{N_T}$ denote the values of $x_1, x_2, \ldots, x_{N_T}$ ordered increasingly. Then a case $(x, y) \in T_i$ if

$$\hat{x}_{L_i} \le x < \hat{x}_{R_i}$$

where

$$L_i = \operatorname{floor}\!\left(\frac{N_T}{d}\right)(i-1) + h_1, \qquad R_i = L_i + \operatorname{floor}\!\left(\frac{N_T}{d}\right) + h_2,$$

$$h_1 = \max\{i,\ (N_T \bmod d)\},$$

$$h_2 = \begin{cases} 1 & \text{if } i < N_T \bmod d \\ 0 & \text{otherwise} \end{cases}$$

and d is a user-defined parameter with a default value of 4. Now, assigning a value or class to a terminal node will be described.
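Before turning to terminal nodes, the computations above can be made concrete. The sketch below is illustrative only: it transcribes the printed formulas literally (including $h_1 = \max\{i, N_T \bmod d\}$, which may be a transcription quirk of the source), and the helper for r evaluates equation (6) so that the $\Phi(S^*) < c \cdot r$ test can be checked. Names are hypothetical.

```python
def lp_residual_norm(y, y_hat, p=2):
    """r from equation (6): the L_p norm of the model residuals."""
    return sum(abs(yi - fi) ** p for yi, fi in zip(y, y_hat)) ** (1.0 / p)

def subnode_boundaries(n_cases, d=4):
    """Boundary index pairs (L_i, R_i) used to assign ordered values to
    the d sub-nodes T_1..T_d, following the formulas as printed above."""
    base = n_cases // d               # floor(N_T / d)
    rem = n_cases % d                 # N_T mod d
    bounds = []
    for i in range(1, d + 1):         # i indexes the sub-node, 1..d
        h1 = max(i, rem)              # h_1 = max{i, N_T mod d}, per the text
        h2 = 1 if i < rem else 0      # h_2 from the piecewise definition
        L_i = base * (i - 1) + h1
        R_i = L_i + base + h2
        bounds.append((L_i, R_i))     # case x in T_i if x_hat[L_i] <= x < x_hat[R_i]
    return bounds
```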

When a terminal node is reached, a value or a class, f(T), is assigned to all cases in the node depending on the type of the response variable. If the type of the response variable is numerical, f(T) is a real-valued number. Otherwise, f(T) is set to be a class member of the set $A = \{A_1, A_2, \ldots, A_k\}$. Now, the cost function may be determined if Y is categorical or numerical.

Y is Categorical

Assume Y takes values in set $A = \{A_1, A_2, \ldots, A_k\}$. T is a terminal node with $N_T$ cases. Let $N_i^T$ be the number of cases with Y equal to $A_i$ in T, $i \in \{1, 2, \ldots, k\}$. If the node is pure (i.e., all the cases in the node have the same response $A_j$), then $f(T) = A_j$. Otherwise, the node is not pure and, no matter which class f(T) is assigned, there is at least one case misclassified in the node. Let $u(i \mid j)$ be the cost of assigning a class j case to class i. Then the total cost of assigning f(T) to node T is

$$U(f(T)) = \sum_{i=1}^{N_T} u(f(T) \mid y_i) \qquad (7)$$

where $f(T) = A_j$ such that $U(A_j) = \min\{U(A_i),\ i \in \{1, 2, \ldots, k\}\}$.

If $u(i \mid j)$ is constant for all i and j, then f(T) is assigned to the biggest class in the node. When there is a tie for the best choice of f(T) among several classes, f(T) is picked arbitrarily among those classes. Now, the case where Y is numerical is described.
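Before the numerical case, the class-assignment rule above admits a short sketch. The cost function signature and all names are illustrative assumptions:

```python
from collections import Counter

def assign_class(node_y, classes, cost=None):
    """Assign f(T) for a terminal node with categorical responses node_y.
    With no cost function (constant u(i|j)), f(T) is the biggest class;
    otherwise f(T) minimizes U(f(T)) = sum_i u(f(T) | y_i), equation (7).
    Ties are broken arbitrarily, as in the text."""
    if cost is None:
        return Counter(node_y).most_common(1)[0][0]
    return min(classes, key=lambda c: sum(cost(c, y) for y in node_y))
```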

Y is Numerical

In this case, the cost function is the same function g(T) that measures the noisiness of the node, as described above.

Theorem: Let $T_0$ be a node such that $g(T_0) = g(T_R)$. Then, pruning off all sub-nodes of $T_0$ will not increase the complexity of the tree.

Proof

Let $T_R^N$ be the tree obtained by pruning $T_0$ off $T_R$. Every node $T^N$ in tree $T_R^N$ comes from a node T in $T_R$. If we can show that $g(T^N) \le g(T)$ for every $T^N$, then, by definition, $g(T_R^N) \le g(T_R)$.

There are two scenarios: 1) for node $T^N$, its counterpart T contains $T_0$ as one of its sub-nodes; 2) for node $T^N$, its counterpart T does not contain $T_0$ as a sub-node. In the second scenario, $T^N$ and T have the same structure; therefore, $g(T^N) = g(T)$. Now, let us consider the first scenario. If $T^N$ has no sub-node, then $g(T^N) = 0 \le g(T)$. Otherwise, by definition,

$$g(T^N) = \frac{C(T^N) - C(T_{T^N})}{|T_{T^N}| - 1}, \qquad g(T) = \frac{C(T) - C(T_T)}{|T_T| - 1}.$$

Since $C(T^N) = C(T)$, $C(T) - C(T_T) - (C(T^N) - C(T_{T^N})) = C(T_0) - C(T_{0T})$, $|T_T| - 1 - (|T_{T^N}| - 1) = |T_0| - 1$, and $g(T_0) = g(T_R)$, it follows that $g(T) \le g(T_0)$. Hence, $g(T^N) \le g(T)$.

This theorem establishes a relationship between the size of a tree structure model and its complexity $g(T_R)$. In general, the larger the complexity value, the larger the number of nodes in the tree.

Cross validation can point out which complexity value v is likely to produce the most accurate tree structure. Using this v, we can prune the tree generated from the whole data set until its complexity is just below v. This pruned tree is used as the final tree structure model. Now, the model modification step will be described.
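Before the modification step, here is a hedged sketch of this cross-validation selection, under the assumption that tree building, pruning-to-complexity, and error measurement are available as callables (none of these names come from the patent, and the candidate grid is illustrative):

```python
def choose_complexity(build_tree, prune_to, error, folds,
                      candidate_vs=(0.0, 0.01, 0.05, 0.1, 0.5)):
    """For each candidate complexity value v, build a tree on each
    training fold, prune it until its complexity is just below v, and
    measure error on the held-out fold; return the v with the lowest
    average cross-validated error."""
    best_v, best_err = None, float("inf")
    for v in candidate_vs:
        avg = sum(error(prune_to(build_tree(train), v), test)
                  for train, test in folds) / len(folds)
        if avg < best_err:
            best_v, best_err = v, avg
    return best_v
```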

In some cases, the predictor variables can be correlated with each other, and splits of a node based on different parameters can produce similar results. In such cases, it is really up to the process engineer who uses the software to identify which parameter is the real cause of the yield problem. To help the engineer identify candidate parameters at any node split, all predictor variables are ranked according to their relative significance if the split were based on them. To be more precise, let $X_i$ be the variable, picked by the method, on which the split $S^*$ is based.

For any $j \ne i$, let $S_j$ denote the best split based on $X_j$. Then, define

$$q(j) = \frac{\Phi(S_j)}{\Phi(S^*)}$$

Since $S^*$ is the best split, $0 \le q(j) \le 1$. Then, when double-clicking on a node, a list of all predictor variables ranked by their q values is shown as illustrated in FIG. 5. If the user decides it is more appropriate to split on a predictor variable other than the one picked by the method, the user can highlight it in the list, and a single click of the mouse will rebuild the tree and force node T to be split based on the user-picked parameter. For example, FIG. 7 illustrates two trees 130, 140 wherein the first tree 130 is built using the PWELLASH variable selected by the method and the second tree 140 is built using the user-selected parameter PWELLCB. Now, a method for tree prediction in accordance with the invention will be described in more detail.
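Before tree prediction, the q-value ranking admits a one-function sketch (names are assumed, and $\Phi(S^*) > 0$ is taken for granted):

```python
def rank_predictors(phi_by_variable):
    """Rank predictors by q(j) = Phi(S_j) / Phi(S*), where Phi(S*) is the
    best goodness-of-split value among all variables; phi_by_variable maps
    variable name -> Phi(S_j). Returns (name, q) pairs sorted from most to
    least significant, as in the FIG. 5 list."""
    phi_star = max(phi_by_variable.values())   # Phi(S*), assumed positive
    return sorted(((var, phi / phi_star)
                   for var, phi in phi_by_variable.items()),
                  key=lambda item: item[1], reverse=True)
```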

FIG. 8 is a flowchart illustrating a method 150 for tree prediction in accordance with the invention. In step 152, the previously generated tree model is input. Next, the prediction variable values, X, of interest are input. In step 156, the prediction using the tree starts at the root node, T. In step 158, the method may determine if the node is terminal. If the current node is not terminal, then the method may assign X to one of the sub-nodes in step 160 according to the split rule described above. After the assignment in step 160, the method loops back to step 158 to test whether the next node is a terminal node. If the node is terminal, then the method outputs the prediction value in step 162 and the method is completed. Now, the analysis step in accordance with the invention will be described.
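Before the analysis step, here is a minimal sketch of the FIG. 8 walk-down. The node representation is an assumption made for illustration; the patent does not prescribe one:

```python
class Node:
    """Minimal tree node: terminal nodes carry a prediction f(T),
    internal nodes carry a routing rule (an assumed representation)."""
    def __init__(self, prediction=None, route=None, children=None):
        self.prediction = prediction   # f(T) for terminal nodes
        self.route = route             # maps a case x to a child index
        self.children = children or []

def predict(root, x):
    """FIG. 8 procedure: start at the root and follow the split rules
    until a terminal node is reached, then output its value."""
    node = root
    while node.children:               # step 158: not terminal yet
        node = node.children[node.route(x)]   # step 160: assign to sub-node
    return node.prediction             # step 162: output prediction
```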

All of the basic statistical analysis tools are available to help the user validate the model and identify the yield problem. At each node, a right click of the mouse produces a list of the available tools as shown in FIG. 9. Every analysis is done at the node level (i.e., it uses only the data from that particular node). An example of the analysis tools available at the right node after the first split is shown in FIG. 9. In this example, the analysis tools may include a box-and-whisker chart, CUSUM control chart, Shewhart control chart, histogram, one-way ANOVA, two-sample comparison and X-Y correlation analysis. The particular tools available to the user depend upon the nature of the X and Y parameters (e.g., continuous vs. categorical).
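As one concrete example of such a node-level analysis, a one-way ANOVA across groups of response values drawn from a single node can be run with scipy (an assumed dependency; the patent does not name any library):

```python
from scipy import stats

def node_anova(groups):
    """One-way ANOVA over two or more groups of response values taken
    from a single node, e.g. one group per level of a categorical X."""
    f_stat, p_value = stats.f_oneway(*groups)
    return f_stat, p_value
```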

After each model is built, the tree can be saved for future predictions. If a new set of parameter values becomes available, it can be fed into the model to generate a prediction of the response value for each case. This functionality can be very handy for the user.
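A minimal sketch of saving and reloading a built tree; pickle is an assumed serialization choice, not one specified by the patent:

```python
import pickle

def save_tree(tree, path):
    """Persist a built tree model for future prediction runs."""
    with open(path, "wb") as f:
        pickle.dump(tree, f)

def load_tree(path):
    """Reload a previously saved tree model."""
    with open(path, "rb") as f:
        return pickle.load(f)
```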

While the foregoing has been with reference to a particular embodiment of the invention, it will be appreciated by those skilled in the art that changes in this embodiment may be made without departing from the principles and spirit of the invention, the scope of which is defined by the appended claims.

Inventors: Wang, Weidong; Buckheit, Jonathan B.; Budd, David W.
