A method for segmenting a small feature in a multidimensional digital array of intensity values in a data processor computes an edge metric along each ray of a plurality of multidimensional rays originating at a local intensity extreme (a local maximum or minimum). A multidimensional point corresponding to the maximum edge metric on each ray is identified as a ray edge point. Every point on each ray from the local extreme to the ray edge point is labeled as part of the small feature. Further points are added to the feature by labeling an unlabeled point if it is adjacent to a labeled point, has a more extreme intensity than the labeled point, and is closer than the labeled point to the local extreme. The resulting segmentation is fast, identifies boundaries of small features comparable to those identified by human analysts, and requires neither statistical parameterizations nor thresholds manually set by a user.

Patent
   RE43152
Priority
May 04 1998
Filed
Sep 12 2008
Issued
Jan 31 2012
Expiry
May 04 2019
Entity
Large
Status
EXPIRED
37. A method of labeling a subset of pixels of an image, the method comprising:
labeling pixels of an image as belonging to a first object that is encompassed by a first edge;
labeling pixels of the image as belonging to a second object that is encompassed by a second edge; and
assembling the first and second objects into a third object that is larger than either of the first and second objects if a distance between the first and second edges is no more than a join distance.
40. A non-transitory computer-readable medium having instructions stored thereon, the instructions comprising:
instructions for labeling pixels of an image as belonging to a first object that is encompassed by a first edge;
instructions for labeling pixels of the image as belonging to a second object that is encompassed by a second edge; and
instructions for assembling the first and second objects into a third object that is larger than either of the first and second objects if a distance between the first and second edges is no more than a join distance.
22. A computer program embodied in a non-transitory computer readable medium for performing the steps of:
computing an edge metric along each ray of a plurality of multidimensional rays originating at a local intensity extreme, identifying a multidimensional edge point corresponding to a maximum edge metric on each said ray, labeling every point on each said ray from said local intensity extreme to said edge point, and labeling an unlabeled point if the unlabeled point is adjacent to a labeled point and the unlabeled point has a more extreme intensity than the labeled point and the unlabeled point is closer than the labeled point to the local intensity extreme.
1. A method for segmenting a small feature in a multidimensional digital array of intensity values in a data processor, the method comprising:
computing an edge metric along each ray of a plurality of multidimensional rays originating at a local intensity extreme;
identifying a multidimensional edge point corresponding to a maximum edge metric on each said ray;
labeling every point on each said ray from said local intensity extreme to said edge point; and
labeling an unlabeled point if the unlabeled point is adjacent to a labeled point and the unlabeled point has a more extreme intensity than the labeled point and the unlabeled point is closer than the labeled point to the local intensity extreme.
41. A method of labeling pixels of an image so as to designate portions of the image that are associated with an object, the method comprising:
identifying a first pixel as belonging to an object due to the first pixel having an intensity that is a local intensity extreme, wherein the first pixel is spaced from an edge of the object;
identifying as belonging to the object a second pixel that lies on a first substantially straight line on which the first pixel also lies;
identifying a third pixel as belonging to the object after having identified the second pixel as belonging to the object, wherein the third pixel lies on the line at a position that is closer to the first pixel than is the second pixel; and
labeling each of the first, second, and third pixels as belonging to the object.
50. A non-transitory computer-readable medium having instructions stored thereon, the instructions comprising:
instructions for identifying a first pixel as belonging to an object due to the first pixel having an intensity that is a local intensity extreme, wherein the first pixel is spaced from an edge of the object;
instructions for identifying as belonging to the object a second pixel that lies on a first substantially straight line on which the first pixel also lies;
instructions for identifying a third pixel as belonging to the object after having identified the second pixel as belonging to the object, wherein the third pixel lies on the line at a position that is closer to the first pixel than is the second pixel; and
instructions for labeling each of the first, second, and third pixels as belonging to the object.
23. A method of labeling pixels of an image so as to designate portions of the image that are associated with an object, the method comprising:
identifying a first pixel as belonging to an object due to the first pixel having an intensity that is a local intensity extreme, wherein the first pixel is at an interior of the object;
determining that a second pixel that lies on a ray that emanates from the first pixel has a maximum edge metric on the ray, wherein the second pixel has an intensity that is smaller in magnitude than the intensity of the first pixel;
labeling the second pixel as an edge pixel that lies on an edge of the object;
determining that a third pixel that is adjacent to the second pixel satisfies a predetermined criterion relative to one or more of the first and second pixels; and
labeling the third pixel as belonging to the object.
51. A non-transitory computer-readable medium having instructions stored thereon, the instructions comprising:
instructions for identifying a first pixel as belonging to an object due to the first pixel having an intensity that is a local intensity extreme, wherein the first pixel is at an interior of the object;
instructions for determining that a second pixel that lies on a ray that emanates from the first pixel has a maximum edge metric on the ray, wherein the second pixel has an intensity that is smaller in magnitude than the intensity of the first pixel;
instructions for labeling the second pixel as an edge pixel that lies on an edge of the object;
instructions for determining that a third pixel that is adjacent to the second pixel satisfies a predetermined criterion relative to one or more of the first and second pixels; and
instructions for labeling the third pixel as belonging to the object.
19. A data processing apparatus for segmenting a small feature in a multidimensional digital array of intensity values comprising:
an input for a plurality of intensity values arranged along regular increments in each of a plurality of dimensions;
a memory medium for storing the plurality of intensity values as a multidimensional digital array;
a processor configured to detect a local intensity extreme in the multidimensional digital array, to identify points along a plurality of rays originating at the local intensity extreme, to identify one edge point on each ray of said plurality of rays, said edge point associated with a maximum edge metric along said ray, to label each point on each ray from the local intensity extreme to the edge point, and to label an unlabeled point adjacent to a labeled point if the unlabeled point has a more extreme intensity than the labeled point and the unlabeled point is closer than the labeled point to the local intensity extreme until no more unlabeled points can be labeled; and
an output for providing the labeled points for subsequent processing.
18. A method for segmenting a small feature in a multidimensional digital array of intensity values in a data processor, the method comprising:
computing an edge metric along each ray of a plurality of multidimensional rays originating at a local intensity extreme;
identifying a multidimensional edge point corresponding to a maximum edge metric on each said ray;
labeling every point on each said ray from said local intensity extreme to said edge point;
labeling an unlabeled point if the unlabeled point is adjacent to a labeled point and the unlabeled point has a more extreme intensity than the labeled point and the unlabeled point is closer than the labeled point to the local intensity extreme; and
additionally labeling an unlabeled point if the unlabeled point is adjacent to a labeled point and has a more extreme intensity than the labeled point and is no farther from the local intensity extreme than the sum of a distance from the labeled point to the local intensity extreme plus an expansive tolerance distance less than the spacing between adjacent points; wherein
an expected size of a small feature is twice an integral number n times a spacing distance between adjacent points in the array,
n is greater than 1,
the maximum value of the difference in distances from the labeled point and the unlabeled point to the local intensity extreme is Gmax = −n + √(n² + 2), and
the expansive tolerance distance is less than about Gmax.
2. The method of claim 1 wherein intensity is a vector of values and an edge metric is a magnitude of a vector difference in intensities between two points along each said ray divided by a multidimensional distance between the same two points.
3. The method of claim 1 further comprising additionally labeling an unlabeled point if the unlabeled point is adjacent to a labeled point and has a more extreme intensity than the labeled point and is no farther from the local intensity extreme than the sum of a distance from the labeled point to the local intensity extreme plus an expansive tolerance distance less than the spacing between adjacent points.
4. The method of claim 1 further comprising also labeling an unlabeled point if the unlabeled point is adjacent to a labeled point and the unlabeled point has a less extreme intensity than the labeled point and the unlabeled point is closer than the labeled point to the local intensity extreme by an inclusion tolerance distance.
5. The method of claim 4, wherein the inclusion tolerance distance is about a spacing distance between adjacent points in the array or more.
6. The method of claim 1, wherein the edge metric at a ray point along each ray is computed as the quotient of the absolute value of an intensity difference between the local intensity extreme and the ray point divided by the absolute value of a distance between the ray point and the local intensity extreme.
7. The method of claim 1, wherein a ray length of each said ray is scaled by an expected size of a small feature.
8. The method of claim 1, wherein
the local intensity extreme is a point with the maximum intensity among a subarray of the multidimensional digital array of intensity values, the subarray having a certain multidimensional size, and
the intensity of the local intensity extreme exceeds a bright threshold intensity.
9. The method of claim 8, wherein the certain multidimensional size is an expected size of a small feature.
10. The method of claim 1, wherein
the local intensity extreme is a point with the minimum intensity among a subarray of the multidimensional digital array of intensity values, the subarray having a certain multidimensional size, and
the intensity of the local intensity extreme is less than a dark threshold intensity.
11. The method of claim 10, wherein the certain multidimensional size is an expected size of a small feature.
12. The method of claim 1, wherein the multidimensional array is a digital image, and each point is a pixel.
13. The method of claim 12, wherein the digital image is a digitized mammogram and the small feature is a microcalcification candidate.
14. The method of claim 12, wherein the digital image is a video frame of a military scene and the small feature is a candidate target of a firing system.
15. The method of claim 1, wherein said labeling continues until no further unlabeled point can be labeled.
16. The method of claim 15, further comprising relabeling a labeled point as a feature edge point if an adjacent point is an unlabeled point.
17. The method of claim 16, further comprising joining a plurality of small features into a composite feature when a feature edge point from one small feature of the plurality of small features is within a join distance of a feature edge point of another small feature of the plurality of small features.
20. The apparatus of claim 19, wherein the plurality of intensity values arranged along regular increments in each of a plurality of dimensions is at least one digital image, and each point is a pixel.
21. The apparatus of claim 20, wherein the digital image is a digitized mammogram and the small feature is a microcalcification candidate.
24. The method of claim 23, wherein the intensity of the first pixel is greater than the intensities of all pixels immediately adjacent to the first pixel.
25. The method of claim 23, wherein the intensity of the first pixel is less than the intensities of all pixels immediately adjacent to the first pixel.
26. The method of claim 23, wherein the edge metric comprises a slope quotient that compares a difference between intensities of the first pixel and a pixel that is being evaluated to a distance between the first pixel and the pixel that is being evaluated.
27. The method of claim 23, wherein the predetermined criterion comprises the third pixel being disposed along a substantially straight line between the first and second pixels.
28. The method of claim 23, wherein the predetermined criterion comprises:
an intensity of the third pixel being less than an intensity of the second pixel; and
a distance between the first and third pixels being smaller than a distance between the first and second pixels by no less than an inclusion tolerance distance.
29. The method of claim 23 wherein the predetermined criterion comprises an intensity of the third pixel being greater than an intensity of the second pixel.
30. The method of claim 23 wherein the predetermined criterion comprises:
an intensity of the third pixel being no less than an intensity of the second pixel; and
the third pixel being closer to the first pixel than the second pixel is to the first pixel.
31. The method of claim 23 wherein the predetermined criterion comprises:
an intensity of the third pixel being no greater than an intensity of the second pixel; and
the third pixel being closer to the first pixel than the second pixel is to the first pixel.
32. The method of claim 23 wherein the predetermined criterion comprises:
an intensity of the third pixel being no less than an intensity of the second pixel; and
a distance between the first and third pixels being no more than an expansive tolerance distance greater than a distance between the first and second pixels.
33. The method of claim 23 wherein the predetermined criterion comprises:
an intensity of the third pixel being no greater than an intensity of the second pixel; and
a distance between the first and third pixels being no more than an expansive tolerance distance greater than a distance between the first and second pixels.
34. The method of claim 23 wherein the predetermined criterion comprises:
an intensity of the third pixel being no less than an intensity of the second pixel; and
no less than an inclusion portion of the third pixel being on a side of a substantially straight inclusion line closest to the first pixel, the inclusion line intersecting the second pixel and being substantially perpendicular to a substantially straight line that intersects the first and second pixels.
35. The method of claim 23 wherein the predetermined criterion comprises:
an intensity of the third pixel being no greater than an intensity of the second pixel; and
no less than an inclusion portion of the third pixel being on a side of a substantially straight inclusion line closest to the first pixel, the inclusion line intersecting the second pixel and being substantially perpendicular to a substantially straight line that intersects the first and second pixels.
36. The method of claim 23, further comprising identifying as part of the edge of the object a fourth pixel that is immediately adjacent to at least one pixel that is identified as part of the object and that is immediately adjacent to at least four other pixels that are outside of the object.
38. The method of claim 37 wherein assembling the first and second objects into the third object comprises identifying as part of the third object a pixel that is disposed between the first and second edges.
39. The method of claim 37 wherein assembling the first and second objects into the third object comprises identifying as part of the third object a pixel that is disposed between the first and second edges and that is no farther than the join distance from the first edge and from the second edge.
42. The method of claim 41, wherein the intensity of the first pixel is greater than the intensities of all pixels immediately adjacent to the first pixel.
43. The method of claim 41, wherein the intensity of the first pixel is less than the intensities of all pixels immediately adjacent to the first pixel.
44. The method of claim 41, wherein identifying the second pixel as belonging to the object comprises:
calculating respective slope quotients of respective differences between intensities of the first pixel and other pixels that are intersected by the line and respective distances between the first pixel and the other pixels, wherein the second pixel is one of the other pixels; and
determining that a magnitude of the slope quotient of the second pixel is larger than the magnitudes of the slope quotients for the remainder of the other pixels.
45. The method of claim 41, further comprising identifying as belonging to the object at least a fourth pixel that intersects a second line that also intersects the first pixel but that does not intersect the second and third pixels, the fourth pixel being identified before any pixel other than one or more of the first, second, and third pixels is identified as belonging to the object.
46. The method of claim 41, wherein the object comprises an edge and the second pixel forms at least a portion of the edge.
47. The method of claim 41, wherein the second and third pixels are identified before any pixel other than the first pixel is identified as belonging to the object.
48. The method of claim 41, wherein one or both of the second and third pixels have respective intensities that are smaller in magnitude than the intensity of the first pixel.
49. The method of claim 48, wherein the intensity of the third pixel is less than the intensity of the second pixel.
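To make the claimed steps concrete, the following is a minimal two-dimensional sketch of the ray search recited in claims 1, 18, 19, and 22, assuming an 8-bit gray-scale image in row-major order and a bright local maximum (a dark local minimum would invert the extreme test); NUM_RAYS, the helper names, and the rounding of ray coordinates are illustrative choices, not limitations drawn from the patent.

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define NUM_RAYS 16  /* K radial directions, as in the study described below */

/* Edge metric of claim 6: |f(P0) - f(P)| / d(P0, P). */
static double edge_metric(int f0, int f, double dist)
{
    return fabs((double)(f0 - f)) / dist;
}

/* For a bright local maximum at (x0, y0), walk NUM_RAYS rays of length n
 * pixels, find the maximum-edge-metric point on each ray (the ray edge
 * point), and mark every pixel from the extreme out to that point in lab. */
void label_ray_interiors(const unsigned char *img, unsigned char *lab,
                         int w, int h, int x0, int y0, int n)
{
    for (int r = 0; r < NUM_RAYS; r++) {
        double ang = 2.0 * M_PI * r / NUM_RAYS;
        double dx = cos(ang), dy = sin(ang);
        int bestk = 0;
        double best = -1.0;

        for (int k = 1; k <= n; k++) {      /* search for the ray edge point */
            int x = x0 + (int)lround(k * dx);
            int y = y0 + (int)lround(k * dy);
            if (x < 0 || x >= w || y < 0 || y >= h)
                break;
            double m = edge_metric(img[y0 * w + x0], img[y * w + x],
                                   hypot(x - x0, y - y0));
            if (m > best) { best = m; bestk = k; }
        }
        for (int k = 0; k <= bestk; k++) {  /* label extreme..ray edge point */
            int x = x0 + (int)lround(k * dx);
            int y = y0 + (int)lround(k * dy);
            lab[y * w + x] = 1;
        }
    }
}
```

The remaining claimed step, growing the region inward from these labeled ray interiors, repeatedly labels an unlabeled 8-neighbor of a labeled pixel when it is more extreme in intensity and closer to (x0, y0); the description below sketches the tolerance tests that refine that rule.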


The distance, d, between any two multidimensional points, Pa and Pb, with different indices {a1, a2, a3, . . . aD} and {b1, b2, b3, . . . bD}, can be defined as the square root of the sum of the squares of the differences in their indices. That is,

d(Pa, Pb) = d(P(a1, a2, …, aD), P(b1, b2, …, bD)) = √[(b1 − a1)² + (b2 − a2)² + … + (bD − aD)²]   (2)

The intensity, f, varies with position in the multidimensional array and may be represented by the symbol f(P). The intensity f at each multidimensional point can be a single value, also called a scalar quantity. Alternatively, the intensity can be a vector of several values, e.g., f(P) = {f1(P), f2(P), f3(P)}. For example, a three-color image can be treated as a three-dimensional array or as a two-dimensional image with a three-element vector intensity. In this terminology, the vector elements of the intensity are not used in the calculation of distance using Equation 2. Instead, the magnitude of the intensity at point P can follow any vector magnitude convention, such as the square root of the sum of the squares of the vector components or the sum of the absolute values of the vector components. Similarly, the difference in intensity between two points Pa and Pb is given by the magnitude of the difference in their components, using any conventional method.

Thus, though the preferred embodiment is described for a digital data array that is an image having two-dimensional pixels, each pixel having a scalar image intensity, the method can readily be extended to multiple dimensions using the above relationships. In the following, each pixel P has a first coordinate represented by x, a second coordinate represented by y, and an intensity represented by f(P) or f(x,y). Separate pixels are designated by separate subscripts.
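As a minimal sketch of Equation 2 and the vector-intensity conventions just described (the function names are illustrative):

```c
#include <math.h>

/* Euclidean distance of Equation 2 between two D-dimensional indices. */
double point_distance(const int *a, const int *b, int D)
{
    double sum = 0.0;
    for (int i = 0; i < D; i++) {
        double diff = (double)(b[i] - a[i]);
        sum += diff * diff;
    }
    return sqrt(sum);
}

/* Magnitude of a vector intensity difference between two points, using the
 * square-root-of-sum-of-squares convention mentioned above; the sum of
 * absolute values would serve equally well. */
double intensity_difference(const double *fa, const double *fb, int C)
{
    double sum = 0.0;
    for (int c = 0; c < C; c++) {
        double diff = fb[c] - fa[c];
        sum += diff * diff;
    }
    return sqrt(sum);
}
```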

Though the invention applies to any imagery, the preferred embodiments segment two-dimensional images with a gray-scale intensity representative of a mammogram.
When the unlabeled pixel P is closer to the local maximum P0 than the labeled pixel Pr, then G is negative. Therefore, the negative of G is compared to the inclusion tolerance to determine if the unlabeled pixel is close enough to the local extreme to be engulfed, as shown in step 263 of FIGS. 2C and 2D. In the preferred embodiment, the inclusion tolerance is one pixel. Thus, lower intensity pixels that are closer to the local maximum than the already labeled point Pr by at least one pixel are close enough to be labeled. That is, a new pixel P with intensity f(P) less extreme than the intensity f(Pr) of the reference pixel Pr is appended to the region if its distance to the local extreme is such that −G is greater than or equal to the inclusion tolerance distance, as shown in step 265 of FIGS. 2C and 2D. If the unlabeled pixel with the less extreme value is less than the inclusion tolerance closer to the local extreme, or is farther from the local extreme, then the unlabeled pixel is not labeled, as also shown in step 265 of FIGS. 2C and 2D.
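The engulfing test reduces to a one-line comparison; a sketch, assuming two-dimensional pixel coordinates and the distance of Equation 2 (the function name is illustrative):

```c
#include <math.h>
#include <stdbool.h>

/* Inclusion test of steps 263/265 for a less-extreme unlabeled pixel P
 * adjacent to a labeled pixel Pr, relative to the local extreme P0:
 * G = d(P, P0) - d(Pr, P0), and P is engulfed when -G is at least the
 * inclusion tolerance (one pixel in the preferred embodiment). */
bool engulf_less_extreme(int px, int py, int rx, int ry,
                         int x0, int y0, double inclusion_tol)
{
    double G = hypot(px - x0, py - y0) - hypot(rx - x0, ry - y0);
    return -G >= inclusion_tol;
}
```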

The other branch from step 262 in FIGS. 2C and 2D is followed when the adjacent pixel P that is unlabeled has an intensity that is greater than or equal to the intensity of the labeled pixel Pr. In the case of a local minimum, the corresponding condition is that the unlabeled pixel has a lower intensity than the labeled pixel Pr. That is, the “yes” branch is followed from box 267, in general, if the unlabeled pixel P has an intensity that is no less extreme than the intensity at the labeled pixel Pr. Either of two different criteria can be used to determine whether the unlabeled pixel P is in a position that constitutes a step from the labeled pixel Pr toward the extreme pixel P0.

The first criterion, Criterion 1, is indicated in FIG. 2C at step 264a and is based on the angle of the line perpendicular to the line segment connecting the local extreme P0 with the reference pixel Pr. The line perpendicular to the segment connecting the local extreme to the labeled pixel is called the reference line 430 and is shown in FIG. 4. For arrays of more than two dimensions, the reference would be a surface having at least one dimension fewer than the multidimensional array. The numbered pixels of FIG. 4 are approved for appending to the small feature if they appear in the list of approved pixels in Table 1 for the quadrant in which the angle θ varies from 0° to 90°. The angle θ

TABLE 1
Criterion 1 for First Quadrant.

Position              θ                  Approved Pixels
xr = xo, yr < yo      0°                 1, 2, 3, 4, 8
xr > xo               0 < tan θ ≦ ⅓      1, 2, 3, 4, 8
                      ⅓ < tan θ < 1      1, 2, 3, 8
                      tan θ = 1          1, 2, 3, 7, 8
                      1 < tan θ ≦ 3      1, 2, 7, 8
                      3 < tan θ < ∞      1, 2, 6, 7, 8
yr = yo               90°                1, 2, 6, 7, 8

between the reference line 430 and the x-axis is also shown in FIG. 4. The first two columns of Table 1 show the relationship between the coordinates xr and yr of the reference pixel Pr and the coordinates x0 and y0 of the local maximum P0. For different values of the angle θ, or its tangent tan θ, different numbered pixels in FIG. 4 are approved. Table 1 captures the condition that the unlabeled pixel P and the local maximum P0 must lie on the same side of the reference line 430. Among the eight pixels that surround a reference pixel, only some will meet the spatial criterion of Criterion 1, depending on the angle θ of the reference line. The angle θ is measured positive counterclockwise from the x-axis. The allowable pixels for values of θ in the other three quadrants are obtained in a symmetrical manner. An extended table would have to be drafted for data arrays of more than two dimensions.
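Rather than tabulating angles, the same-side condition that Table 1 encodes can be tested directly with a dot product; a sketch (treating pixels exactly on the reference line as not approved is an assumption here, and the table's own quadrant conventions govern the boundary cases):

```c
#include <stdbool.h>

/* Criterion 1 as geometry: the unlabeled pixel P and the local maximum P0
 * lie on the same side of the reference line (the line through Pr
 * perpendicular to segment P0-Pr) exactly when the projection of the
 * vector Pr->P onto Pr->P0 is positive. The same test extends to higher
 * dimensions, where the reference line becomes a hyperplane. */
bool criterion1_same_side(int px, int py, int rx, int ry, int x0, int y0)
{
    long dot = (long)(x0 - rx) * (px - rx) + (long)(y0 - ry) * (py - ry);
    return dot > 0;
}
```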


where τ is the tolerance parameter, and Fmax and Fmin are the current maximum and minimum values in the region grown so far. The value of τ is not manually selected by the user; the best value is automatically determined for each segmented structure by repeating the growth with multiple values of τ between 0.01 and 0.4, with steps of s = 1/v, where v is the 8-bit value of the seed pixel. Three features are extracted from each region grown with a different tolerance level: shape compactness, center of gravity, and size. The algorithm determines the value of τ that results in the minimal change in the vector of these three features with respect to the previous τ value in the sequence by computing a normalized difference between consecutive vectors. The vector with minimal difference indicates the best choice of τ.
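A sketch of this tolerance sweep follows; the normalized difference is assumed here to be a relative Euclidean distance between consecutive three-component feature vectors, since the text does not give the formula explicitly, and `grow` stands in for the multi-tolerance region-growing routine itself:

```c
#include <math.h>

/* Feature vector extracted after growing a region with tolerance tau:
 * shape compactness, center of gravity, and size, folded into three
 * components as assumed here. */
typedef struct { double v[3]; } Features;

/* Sweep tau from 0.01 to 0.4 in steps of 1/v (v = 8-bit seed value) and
 * return the tau whose feature vector changes least, in normalized terms,
 * from the vector of the previous tau in the sequence. */
double best_tolerance(Features (*grow)(double tau, void *ctx), void *ctx,
                      int seed_value)
{
    double step = 1.0 / seed_value;
    double best_tau = 0.01, best_diff = HUGE_VAL;
    Features prev = grow(0.01, ctx);

    for (double tau = 0.01 + step; tau <= 0.4 + 1e-9; tau += step) {
        Features cur = grow(tau, ctx);
        double num = 0.0, den = 0.0;
        for (int i = 0; i < 3; i++) {   /* normalized difference */
            num += (cur.v[i] - prev.v[i]) * (cur.v[i] - prev.v[i]);
            den += prev.v[i] * prev.v[i];
        }
        double diff = sqrt(num) / (sqrt(den) + 1e-12);
        if (diff < best_diff) { best_diff = diff; best_tau = tau; }
        prev = cur;
    }
    return best_tau;
}
```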

The segmentation outcome of the multi-tolerance region growing procedure on 5 subtle microcalcification candidates depended partly on the intensity structure of the microcalcification. When the intensity transition from the edge to the background was relatively abrupt, the segmented region coincided closely with the visually perceived edge. When the intensity at the edge decreased gradually toward the background level, this algorithm generally produced a relatively large region. Nevertheless, the growth was consistently contained, i.e., it did not grow to an unacceptable size, and it generated boundaries that can be used as an estimate of the immediate background around the microcalcification.

The active contours model represents the contour points as v(s) = (x(s), y(s)). The contour is obtained by minimizing the energy functional:
E[v(s)] = ∫Ω { Eint[v(s)] + PE[v(s)] + Eext[v(s)] } ds   (6)
where Eint is the internal energy due to the elasticity and the rigidity, PE is the potential energy obtained from the image data, and Eext is the energy of external forces that can be applied to the contour. The integration is performed over the entire contour Ω. The internal energy is expressed by:
Eint = w1|v′(s)|² + w2|v″(s)|²   (7)
where w1 and w2 are coefficients that control the elasticity and rigidity, respectively, and primes denote differentiation. The choice of potential energy depends on the application; it is typically the negative squared gradient magnitude, and is used as such for mammograms.

The active contour that minimizes E(v) satisfies the Euler-Lagrange equation:
−(w1v′)′ + (w2v″)″ = F(v)   (8)
where F(v) represents the force due to the combined effects of the potential energy and external energy. In this study we implemented the balloon forces and the image force normalization suggested by Cohen, resulting in

F(v) = k1 n(s) − k2 ∇PE/‖∇PE‖   (9)
where n(s) is the unit vector normal to the contour at point v(s), oriented toward the outside of the contour, k1 is the magnitude of the balloon inflation force, and k2 is the coefficient of the normalized image force. The value of k2 is selected to be slightly larger than k1 to allow edge points to stop the inflation force.

The numerical solution was implemented using finite differences and the iterative evolution suggested by Cohen:
(I + τA)vt = vt−1 + τF(vt−1)   (10)
where I is the identity matrix, τ is the time step, A is the pentadiagonal matrix obtained with the finite difference formulation of Eint, vt is the active contour vector at time t, and F(vt) is the external force vector at time t. We used the negative squared magnitude of the image gradient as the potential energy. Pixels detected with an edge detector were not used in this study. The gradient of the image was computed with the Sobel operator.
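For concreteness, under a standard finite-difference discretization (a conventional formulation assumed here, not text from the patent), with unit spacing and periodic indices on the closed contour, the internal energy of Equation 7 gives A the pentadiagonal rows

(Av)i = w1(2vi − vi−1 − vi+1) + w2(vi−2 − 4vi−1 + 6vi − 4vi+1 + vi+2)

so that each step of Equation 10 amounts to solving vt = (I + τA)−1(vt−1 + τF(vt−1)). Because (I + τA) is constant, it can be inverted or factored once and reused, leaving essentially a matrix-vector product per iteration; this is consistent with the order-2n² per-iteration cost quoted in the complexity comparison below.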

The initial position of the contour was set automatically for each structure to be segmented. Since each structure of interest is a local intensity extreme, pixels that were local maxima across the entire image were selected. Each local maximum was used to segment a region around it. The width of the smallest microcalcifications considered in this study was about 0.25 mm, and the majority of the microcalcifications in our database had widths in the range 0.3 to 0.5 mm. A circle of 0.2 mm diameter around the local maximum pixel was used as the initial position of the active contour. The initial contour points were 24 8-connected pixels forming this circle.

The selection of parameters for the active contour segmentation required some trial and error to obtain good segmentation. The segmentation of the same 5 subtle microcalcification candidates was performed using different active contour parameters. First, following the recommendations of Cohen (Cohen, L. D. “On Active Contour Models and Balloons,” CVGIP: Image Understanding, vol. 53, pp. 211-218, 1991), we selected the values of w1 and w2 as a function of the spatial discretization step size h, such that w1 was of the order of h² and w2 was of the order of h⁴ (w1 = 6, w2 = 40). The time step τ was likewise set to 0.1. When k1 and k2 were relatively small (2 and 4), the image force and the balloon force did not act sufficiently on the active contour, producing contours that were only slightly different from the initial position. When these two parameters were increased (14 and 16), the resulting segmentation was very close to that expected visually. Increasing these parameters further (24 and 26) increased the combined effect of the image gradient and balloon forces, producing contours that extended beyond the expected edges. Within this range, segmentation with the active contour model was not very sensitive to the values of the other parameters. The effect of doubling w1 to 12 was that contours became slightly smaller due to the increased stiffness of the active contour model. Sensitivity to w2 was also low. When w2 was doubled to 80, the contours became slightly smoother due to the increased rigidity of the model.

The segmentation steps of the hill climbing approach of the present invention are illustrated in FIG. 6. FIG. 6A shows a microcalcification candidate that has a width of about 0.3 mm. The 16 ray edge points 624 determined by the radial line search of the hill climbing algorithm are shown in FIG. 6B. The region grown using spatial Constraint 1 is shown in FIG. 6C; the region grown with spatial Constraint 2 was identical for this microcalcification candidate. The edge pixels 642 of the entire microcalcification candidate are shown in FIG. 6D. The hill climbing method produced almost identical segmentations of microcalcifications under spatial Constraints 1 and 2. In this study, about a quarter of the microcalcifications were segmented identically by the two spatial constraints, and the rest differed by a few pixels, a negligible change over the entire microcalcification. Both spatial constraints directed the growth of the regions successfully, resulting in regions that were compatible with visual interpretation.

The differences between the three methods are illustrated in FIG. 7. Three subtle microcalcification candidates are shown in FIG. 7A. When the contrast of a microcalcification candidate was relatively low, or parts of it exhibited a very gradual decrease in intensity toward the background, the multi-tolerance algorithm (FIG. 7B) segmented a larger region than the other two algorithms. Good segmentation with active contours (FIG. 7C) was obtained using w1 = 6, w2 = 40, τ = 0.1, k1 = 14, and k2 = 16 for all microcalcification candidates of this study. Using these parameters, segmentation with active contours provided edges 735 that were smoother than the edges 725 and 745 produced by the other two methods. The selection of w1 and w2 provided the flexibility needed to adapt relatively well to the shapes of diverse microcalcification candidates. The elasticity level allowed the contour to grow to the highest gradient locations when the segmented structures were relatively large, and the rigidity level allowed the contour to develop sharp bends dictated by the data in some microcalcifications. The edges 745 of regions grown by the hill climbing algorithm, shown in FIG. 7D, were not as smooth as the edges 735 of the active contours, but their convolutions were consistent with visually perceived edges around microcalcification candidates.

Segmentation of microcalcification candidates serves as an initial step for discriminating between the population of microcalcifications and that of background structures. The discrimination potential of each segmentation algorithm was quantified using features extracted from structures segmented around all the local maxima in the 5 mammograms. These structures consisted of the 124 microcalcifications mentioned above and 2,212 background structures segmented in the same mammograms. Four characteristics were used to assess the discrimination potential in this study.

1. Contrast was measured as the gray level difference between the local maximum pixel P0 in the structure and the mean of the pixels around its edge.

2. Relative contrast was computed as the ratio of the contrast to the value at the local maximum.

3. Area was computed as the number of labeled pixels in the grown region.

4. Edge sharpness was the mean of the gradient computed with a Sobel operator across all edge pixels. The Sobel operator is a mask that weights the eight neighbors of a pixel to compute a sum proportional to the x gradient, the y gradient, or the total gradient; a sketch of these four measurements follows this list.
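A compact reading of these four measurements, sketched below with an assumed 8-bit image and illustrative names (the study's exact Sobel scaling is not specified in the text):

```c
#include <math.h>

/* Sobel gradient magnitude at pixel (x, y); assumes 1 <= x < w-1 and
 * 1 <= y < h-1 so all eight neighbors exist. */
static double sobel_mag(const unsigned char *img, int w, int x, int y)
{
    const unsigned char *p = img + y * w + x;
    int gx = -p[-w-1] - 2*p[-1] - p[w-1] + p[-w+1] + 2*p[1] + p[w+1];
    int gy = -p[-w-1] - 2*p[-w] - p[-w+1] + p[w-1] + 2*p[w] + p[w+1];
    return hypot((double)gx, (double)gy);
}

/* The four discrimination features for one segmented structure, given the
 * local-maximum value f0, the edge pixel coordinates, and the region size. */
void structure_features(const unsigned char *img, int w, int f0,
                        const int *ex, const int *ey, int n_edge,
                        int region_size, double out[4])
{
    double edge_mean = 0.0, sharp = 0.0;
    for (int i = 0; i < n_edge; i++) {
        edge_mean += img[ey[i] * w + ex[i]];
        sharp += sobel_mag(img, w, ex[i], ey[i]);
    }
    edge_mean /= n_edge;
    sharp /= n_edge;
    out[0] = f0 - edge_mean;          /* 1. contrast */
    out[1] = out[0] / (double)f0;     /* 2. relative contrast */
    out[2] = (double)region_size;     /* 3. area */
    out[3] = sharp;                   /* 4. edge sharpness */
}
```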

The discrimination ability of each feature was determined separately using the area under a receiver operating characteristic (ROC) curve obtained with that feature. The ROC curve plots the percentage of correctly detected microcalcifications against the percentage of detected background structures as a detection threshold is changed. The ROC curve area is higher when the two populations of a feature are more separable. When both populations overlap completely, the ROC curve area is 0.5. In general, effective discrimination power is indicated by a value above 0.8. Table 2 summarizes the results for all three procedures. The area feature had very low discrimination power for all three algorithms, indicating that the two types of structures cannot be discriminated well on the basis of their segmented area. However, the other

TABLE 2

                    Multi-tolerance      Active        Hill
                    Region Growing       Contours      Climbing
Contrast            0.80                 0.82          0.83
Relative Contrast   0.83                 0.90          0.90
Area                0.63                 0.60          0.54
Sharpness           0.80                 0.85          0.85

three features suggested good discrimination potential for all three algorithms. A comparison among algorithms shows that both the hill climbing method of the present invention and the active contours algorithm provide segmentation with the same discrimination power, and they both perform slightly better than the multi-tolerance segmentation. Thus, the hill climbing method produces edges as good as the best produced by the conventional approaches tested.
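The ROC areas in Table 2 can be computed without explicitly sweeping a threshold, since the area equals the probability that a randomly chosen microcalcification feature value exceeds a randomly chosen background value (the Mann-Whitney statistic). A minimal sketch, with an illustrative name and the O(n_mc · n_bg) pairwise form chosen for clarity rather than speed:

```c
/* Area under the ROC curve as the Mann-Whitney statistic: the fraction of
 * (microcalcification, background) pairs in which the microcalcification
 * scores higher; ties count half. */
double roc_area(const double *mc, int n_mc, const double *bg, int n_bg)
{
    double wins = 0.0;
    for (int i = 0; i < n_mc; i++)
        for (int j = 0; j < n_bg; j++)
            wins += (mc[i] > bg[j]) ? 1.0 : (mc[i] == bg[j]) ? 0.5 : 0.0;
    return wins / ((double)n_mc * (double)n_bg);
}
```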

The significant advantage of the hill climbing algorithm is its speed. While the multi-tolerance algorithm provides a good way to avoid statistical models, local statistics estimators, and the manual selection of thresholds, its cost is multiple segmentations of the same structure and computation of features during the segmentation of each structure. Furthermore, in some cases, this algorithm segments regions that are somewhat larger than expected. Consequently, the time required to segment a mammogram with this algorithm is high. The segmented regions were comparable to those of the other two algorithms in many cases; the differences arose because the growth mechanism of this algorithm is constrained only by an intensity range criterion applied to a new pixel. In contrast, active contours are constrained by internal forces that regulate the growth away from the local maximum, and hill climbing has an inward growth mechanism based on edge points.

The active contours also circumvent the statistical and manual threshold selection issues for each mammogram, but the selection of the operational parameters for a set of mammograms requires some trial and error. However, when an appropriate set of parameters is determined, it appears to be valid for a wide range of microcalcifications so it need not be modified with each mammogram. The choice of negative squared gradient magnitude as the image energy function seems to be an appropriate one to segment microcalcifications.

The computational complexity cm of the multi-tolerance region growing algorithm is of the order O(4smo), where s is the number of steps in the tolerance search, m is the number of pixels in the region, and o is the number of operations per pixel. The factor 4 is included because the algorithm visits the 4-connected neighbors of each pixel in the region. Considering 125 to be an average intensity value for the local maximum, the average step size is 0.008, resulting in about s = 50 steps on average to cover the range 0.01 to 0.4. The average size of segmented structures is about 200 pixels. At each pixel, the computations performed include intensity comparisons, update of Fmax and Fmin, and calculation of the center of gravity. Considering about 12 operations per pixel on average, the numerical estimate for the average number of operations per segmentation is cm = 480,000.

The computational complexity ca of the active contour model is O[2(n + 2n²)t], where n is the number of contour points and t is the number of iterations. The factor of 2 is included because the x and y coordinates of each contour point are computed separately, with identical operations. At each iteration, order n computations are needed to determine the normal vectors, and order 2n² operations are needed to perform a matrix multiplication. In this study 24 contour points were used, and the number of iterations depended on the size of the structure. On average, however, the active contour model converged in about 20 iterations. This resulted in an average value of ca = 47,040, a factor of ten improvement over the multi-tolerance method.

The complexity ch of the hill climbing method is O(KN + 8m), where K is the number of radial directions from the local maximum, N is the number of pixels searched in each direction, and m is the number of pixels in the grown region. A factor of 8 is included since all 8 neighbors of each pixel are visited. In this study K was 16 and N was 40, and considering an average structure size of m = 200, the average estimate of the number of operations is ch = 2,240, a factor of about 20 improvement over the active contour method and about 200 over the multi-tolerance method. The proportions of cm, ca, and ch are approximately 214:21:1, respectively, with hill climbing far less complex than the other two methods.

The speed of the different methods was compared using a section of a mammogram containing 456 local maxima, 35 of which were in microcalcifications. The sizes of the microcalcifications ranged between 0.25 mm and 0.5 mm. With the three algorithms implemented in C on an IBM 6000 computer rated at 10 million floating-point operations per second (MFLOPS), the times to complete the segmentation of this section of mammogram were 17 minutes 47 seconds for the multi-tolerance algorithm, 1 minute 47 seconds for the active contours, 7 seconds for hill climbing with spatial Constraint 1, and 5.4 seconds for hill climbing with spatial Constraint 2.

Hill climbing with spatial Constraints 1 and 2 yielded practically identical segmentations, but the method was about 20% faster using spatial Constraint 2, resulting in 11.8 ms on average for segmenting a structure, as opposed to 15.3 ms obtained with spatial Constraint 1.

A common technique to determine the edges of an object uses an edge enhancement algorithm such as the Sobel operator, thresholding to separate the pixels on edges, and pixel linking to string together edge pixels that belong to the same object. Selection of the threshold is critical, and linking poses problems in segmenting microcalcifications because the background contains many closely spaced small structures that are likely to produce considerable numbers of edge pixels. The hill climbing method of the preferred embodiment determines edge points that are on the edge of the same object by virtue of the radial line search emanating from the same local maximum. It does not require a threshold to separate edge pixels because the slope in Equation 3 is referred to the local maximum and is greatest at pixels that are on, or very near, the visually perceived edges. Finally, the hill climbing method avoids some pitfalls of the region growing mechanism by growing a region inward, toward the local maximum.

There has been disclosed a segmentation method and apparatus for data arranged in a multidimensional array which overcomes the problems of the prior art. Although the present invention has been described above by way of detailed embodiments thereof, it is clearly understood that variations and modifications may be made by one of ordinary skill in the art and still lie within the spirit and scope of the invention as defined by the appended claims and their equivalents.

Bankman, Isaac N., Nizialek, Tanya

Patent Priority Assignee Title
4618989, Jan 21 1983 Michio Kawata, Director-General of Agency of Industrial Science and Technology; Kabushiki Kaisha Meidensha Method and system for detecting elliptical objects
4948974, Jun 25 1984 High resolution imaging apparatus and method for approximating scattering effects
5116115, May 09 1990 Wyko Corporation Method and apparatus for measuring corneal topography
5163094, Mar 20 1991 INFRARED INDENTIFICATION, INC Method for identifying individuals from analysis of elemental shapes derived from biosensor data
5170440, Jan 30 1991 NEC Corporation Perceptual grouping by multiple hypothesis probabilistic data association
5185809, Aug 14 1987 The General Hospital Corporation; Massachusetts Institute of Technology Morphometric analysis of anatomical tomographic data
5239591, Jul 03 1991 U.S. Philips Corp.; NORTH AMERICAN PHILIPS CORPORATION A CORP OF DELAWARE Contour extraction in multi-phase, multi-slice cardiac MRI studies by propagation of seed contours between images
5309228, May 23 1991 FUJIFILM Corporation Method of extracting feature image data and method of extracting person's face data
5345941, Apr 24 1989 Massachusetts Institute of Technology Contour mapping of spectral diagnostics
5361763, Mar 02 1993 Wisconsin Alumni Research Foundation Method for segmenting features in an image
5365429, Jan 11 1993 North American Philips Corporation Computer detection of microcalcifications in mammograms
5412563, Sep 16 1993 General Electric Company Gradient image segmentation method
5421330, Apr 25 1991 Inria Institut National de Recherche en Informatique et en Automatique Method and device for examining a body, particularly for tomography
5452367, Nov 29 1993 Arch Development Corporation Automated method and system for the segmentation of medical images
5457754, Aug 02 1990 University of Cincinnati Method for automatic contour extraction of a cardiac image
5467404, Aug 14 1991 AGFA HEALTHCARE N V Method and apparatus for contrast enhancement
5506913, Feb 11 1993 AGFA HEALTHCARE N V Method of recognizing an irradiation field
5572565, Dec 30 1994 U S PHILIPS CORPORATION Automatic segmentation, skinline and nipple detection in digital mammograms
5574799, Jun 12 1992 The Johns Hopkins University; Johns Hopkins University, The Method and system for automated detection of microcalcification clusters in mammograms
5627907, Dec 01 1994 University of Pittsburgh Computerized detection of masses and microcalcifications in digital mammograms
5646742, Jul 27 1992 Xerox Corporation System for adjusting color intensity of neighboring pixels
5651042, May 11 1995 AGFA HEALTHCARE N V Method of recognizing one or more irradiation fields
5740266, Apr 15 1994 Base Ten Systems, Inc. Image processing system and method
5768333, Dec 02 1996 U S PHILIPS CORPORATION Mass detection in digital radiologic images using a two stage classifier
5768406, Jul 14 1994 U S PHILIPS CORPORATION Mass detection in digital X-ray images using multiple threshold levels to discriminate spots
5825910, Dec 30 1993 U S PHILIPS CORPORATION Automatic segmentation and skinline detection in digital mammograms
5835620, Dec 19 1995 AUTOCYTE NORTH CAROLINA, L L C Boundary mapping system and method
5854851, Aug 13 1993 SIEMENS COMPUTER AIDED DIAGNOSIS LTD System and method for diagnosis of living tissue diseases using digital image processing
5982916, Sep 30 1996 Siemens Aktiengesellschaft Method and apparatus for automatically locating a region of interest in a radiograph
6249594, Mar 07 1997 CMSI HOLDINGS CORP ; IMPAC MEDICAL SYSTEMS, INC Autosegmentation/autocontouring system and method
6535623, Apr 15 1999 Curvature based system for the segmentation and analysis of cardiac magnetic resonance images
6738500, Oct 26 1995 The Johns Hopkins University Method and system for detecting small structures in images
7155067, Jul 11 2000 ARRIS ENTERPRISES LLC Adaptive edge detection and enhancement for image processing
WO9957683,
Executed on | Assignor | Assignee | Conveyance | Frame/Reel/Doc
Sep 22 1999 | NIZIALEK, TANYA | The Johns Hopkins University | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 028387/0891
Oct 12 1999 | BANKMAN, ISAAC N. | The Johns Hopkins University | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 028387/0891
Sep 12 2008 | The Johns Hopkins University (assignment on the face of the patent)
Date Maintenance Fee Events
Apr 23 2018 REM: Maintenance Fee Reminder Mailed.
Oct 15 2018 EXP: Patent Expired for Failure to Pay Maintenance Fees.


Date Maintenance Schedule
Jan 31 2015: 4 years fee payment window open
Jul 31 2015: 6 months grace period start (w surcharge)
Jan 31 2016: patent expiry (for year 4)
Jan 31 2018: 2 years to revive unintentionally abandoned end (for year 4)
Jan 31 2019: 8 years fee payment window open
Jul 31 2019: 6 months grace period start (w surcharge)
Jan 31 2020: patent expiry (for year 8)
Jan 31 2022: 2 years to revive unintentionally abandoned end (for year 8)
Jan 31 2023: 12 years fee payment window open
Jul 31 2023: 6 months grace period start (w surcharge)
Jan 31 2024: patent expiry (for year 12)
Jan 31 2026: 2 years to revive unintentionally abandoned end (for year 12)