A method and system for evaluating image segmentation is disclosed. In order to quantitatively evaluate an image segmentation technique, synthetic image data is generated and the synthetic image data is segmented to extract an object using the segmentation technique. This segmentation results in a foreground containing the extracted object and a background. The visibility of the extracted object is quantitatively measured based on the intensity distributions of the segmented foreground and background. The visibility is quantitatively measured by calculating the Jeffries-Matusita distance between the foreground and background intensity distributions. This method can be used to evaluate segmentation of vessels in fluoroscopic image sequences by coronary digital subtraction angiography (DSA).
1. A method for evaluating an image segmentation technique, comprising:
generating synthetic image data by
generating a ground truth image of an object to be segmented, and
combining the ground truth image with a background image resulting in a synthetic image;
segmenting the synthetic image data to extract the object using the segmentation technique, resulting in a foreground containing the extracted object and a background; and
quantitatively measuring a visibility of the extracted object based on intensity distributions of the segmented foreground and background.
12. An apparatus for evaluating an image segmentation technique, comprising:
means for generating synthetic image data comprising:
means for generating a ground truth image of an object to be segmented, and
means for combining the ground truth image with a background image resulting in a synthetic image;
means for segmenting the synthetic image data to extract an object using the segmentation technique, resulting in a foreground containing the extracted object and a background; and
means for quantitatively measuring a visibility of the extracted object based on intensity distributions of the segmented foreground and background.
18. A non-transitory computer readable medium encoded with computer executable instructions for evaluating an image segmentation technique, the computer executable instructions defining steps comprising:
generating synthetic image data by:
generating a ground truth image of an object to be segmented, and
combining the ground truth image with a background image resulting in a synthetic image;
segmenting the synthetic image data to extract an object using the segmentation technique, resulting in a foreground containing the extracted object and a background; and
quantitatively measuring a visibility of the extracted object based on intensity distributions of the segmented foreground and background.
2. The method of
generating a ground truth image based on human annotations.
3. The method of
adding noise to the synthetic image to simulate a certain noise level.
4. The method of
generating a uniformly distributed white noise image;
blurring the noise image with a Gaussian filter;
multiplying the noise image by a noise scale to obtain the certain noise level; and
adding the noise image to the synthetic image.
5. The method of
segmenting the synthetic fluoroscopic image sequence to extract vessels using coronary digital subtraction angiography (DSA).
6. The method of
generating a sequence of ground truth images of vessels from a fluoroscopic image sequence; and
summing the sequence of ground truth images with a sequence of background x-ray images in logarithm space to simulate the composition of x-ray images, resulting in a synthetic fluoroscopic image sequence.
7. The method of
estimating motion fields of vessel branches between frames of the synthetic fluoroscopic image sequence;
segmenting a vessel layer and a background layer based on the estimated motion fields; and
normalizing the extracted vessel layers.
8. The method of
separating the vessel branches into a plurality of sets; and
tracking the vessel branches in each set to estimate the motion fields.
9. The method of
segmenting a plurality of vessel layers corresponding to the plurality of sets of vessel branches; and
combining the plurality of vessel layers to generate a single foreground layer.
10. The method of
calculating a Jeffries-Matusita (JM) distance between the intensity distributions of the segmented foreground and background.
11. The method of
comparing the quantitatively measured visibility of the extracted object with a qualitatively measured visibility of the object in the synthetic image data.
13. The apparatus of
means for adding noise to the synthetic image to simulate a certain noise level.
14. The apparatus of
means for generating a sequence of ground truth images of vessels from a fluoroscopic image sequence; and
means for summing the sequence of ground truth images with a sequence of background x-ray images in logarithm space to simulate the composition of x-ray images, resulting in a synthetic fluoroscopic image sequence.
15. The apparatus of
means for estimating motion fields of vessel branches between frames of the synthetic fluoroscopic image sequence;
means for segmenting a vessel layer and a background layer based on the estimated motion fields; and
means for normalizing the extracted vessel layers.
16. The apparatus of
means for calculating a Jeffries-Matusita (JM) distance between the intensity distributions of the segmented foreground and background.
17. The apparatus of
means for comparing the quantitatively measured visibility of the extracted object with a qualitatively measured visibility of the object in the synthetic image data.
19. The non-transitory computer readable medium of
adding noise to the synthetic image to simulate a certain noise level.
20. The non-transitory computer readable medium of
generating a sequence of ground truth images of vessels from a fluoroscopic image sequence; and
summing the sequence of ground truth images with a sequence of background x-ray images in logarithm space to simulate the composition of x-ray images, resulting in a synthetic fluoroscopic image sequence.
21. The non-transitory computer readable medium of
estimating motion fields of vessel branches between frames of the synthetic fluoroscopic image sequence;
segmenting a vessel layer and a background layer based on the estimated motion fields; and
normalizing the extracted vessel layers.
22. The non-transitory computer readable medium of
calculating a Jeffries-Matusita (JM) distance between the intensity distributions of the segmented foreground and background.
23. The non-transitory computer readable medium of
comparing the quantitatively measured visibility of the extracted object with a qualitatively measured visibility of the object in the synthetic image data.
This application claims the benefit of U.S. Provisional Application No. 60/974,099, filed Sep. 21, 2007, the disclosure of which is herein incorporated by reference.
The present invention relates to object segmentation in medical images, and more particularly to evaluating image segmentation based on visibility.
Angiography is a medical imaging technique in which X-ray images are used to visualize internal blood-filled structures, such as arteries, veins, and the heart chambers. Since blood has the same radiodensity as the surrounding tissues, these blood-filled structures cannot be differentiated from the surrounding tissue using conventional radiology. Thus, in angiography, a contrast agent is added to the blood, usually via a catheter, to make the blood vessels visible via X-ray. In many angiography procedures, X-ray images are taken over a period of time, resulting in a sequence of fluoroscopic images that shows the motion of the blood over that period. Such fluoroscopic image sequences contain useful information that can be difficult to decipher due to the collapsing of 3-dimensional information into the 2-dimensional images.
Since different objects in a fluoroscopic image sequence have different patterns of motion, objects can be extracted from a fluoroscopic image sequence in layers based on motion patterns found in the fluoroscopic image sequence. Coronary digital subtraction angiography (DSA) is a method for segmenting vessels in the heart by extracting motion-based layers from fluoroscopic image sequences of the heart. Coronary DSA separates the vessels from the background in the fluoroscopic images, such that the segmented vessels are highly visible. Although a human observer can readily perceive that the visibility of the segmented vessels has increased, there is no quantitative measurement of visibility that can be used to evaluate segmentation techniques.
The present invention provides a method and system for evaluating image segmentation based on visibility. Embodiments of the present invention utilize a quantitative measurement of visibility that is consistent with human perception. Furthermore, embodiments of the present invention can be used to evaluate image segmentation techniques for images with various levels of noise.
In one embodiment of the present invention, synthetic image data is generated. The synthetic image data can be generated by generating a ground truth image of an object to be segmented and combining the ground truth image with a background image. The synthetic image data is segmented to extract an object, resulting in a foreground containing the extracted object and a background. The visibility of the extracted object is quantitatively measured based on intensity distributions of the segmented foreground and background. The quantitative measure of visibility can be obtained by calculating the Jeffries-Matusita (JM) distance between the intensity distributions of the segmented foreground and background. The quantitative visibility measure of the extracted object can be compared to the quantitative visibility measure of the object in the synthetic image data.
In a particular embodiment of the present invention, segmentation of vessels in fluoroscopic image sequences by coronary digital subtraction angiography (DSA) is quantitatively evaluated based on visibility. A synthetic fluoroscopic image sequence can be generated by generating a sequence of ground truth images of vessels and summing the sequence of ground truth images with a sequence of background images in logarithm space. Coronary DSA can be used to segment vessels in the synthetic fluoroscopic image sequence by estimating motion fields of vessel branches between frames of the sequence, segmenting a vessel layer based on the motion fields, and normalizing the segmented vessel layer. The JM distance between the intensity distributions of the vessels and the background in the segmented vessel layer can then be calculated as a quantitative visibility measure of the segmented vessels.
These and other advantages of the invention will be apparent to those of ordinary skill in the art by reference to the following detailed description and the accompanying drawings.
The present invention is directed to a method for evaluating image segmentation based on visibility. Although the embodiments of the present invention described herein are directed toward evaluating segmentation of fluoroscopic images, the present invention is not limited thereto, and may be similarly applied to segmentation of other types of images, such as computed tomography (CT), magnetic resonance (MR), and ultrasound images. Embodiments of the present invention are described herein to give a visual understanding of the segmentation evaluation method. A digital image is often composed of digital representations of one or more objects (or shapes). The digital representation of an object is often described herein in terms of identifying and manipulating the objects. Such manipulations are virtual manipulations accomplished in the memory or other circuitry/hardware of a computer system. Accordingly, it is to be understood that embodiments of the present invention may be performed within a computer system using data stored within the computer system.
At step 102, synthetic image data is generated. In order to generate the synthetic image data, ground truth images of an object to be segmented are obtained, and the ground truth images are combined with background images, resulting in synthetic images having a known foreground (ground truth object) and background. It is also possible to add varying levels of noise and contrast to the synthetic image data in order to test robustness of an image segmentation technique to varying levels of noise and contrast.
The ground truth images can be obtained based on human annotation.
In order to generate a synthetic fluoroscopic image sequence for evaluating the coronary DSA method, a sequence of ground truth images is obtained as shown in
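The log-space composition of ground truth and background images (summing in logarithm space to simulate how x-ray attenuations combine multiplicatively) can be sketched as follows. This is an illustrative sketch, not the disclosed implementation; the function name, the `eps` guard against taking the log of zero, and the assumption that both inputs are transmittance-like intensities in (0, 1] are ours.

```python
import numpy as np

def compose_synthetic_frame(ground_truth, background, eps=1e-6):
    """Combine a vessel ground truth image with a background x-ray image
    by summing in logarithm space. X-ray attenuations multiply, so their
    log-intensities add; eps guards against log(0)."""
    log_sum = np.log(ground_truth + eps) + np.log(background + eps)
    return np.exp(log_sum)  # equivalent to pixelwise multiplication
```

Because logarithms turn the multiplicative composition of x-ray attenuation into addition, summing in log space is equivalent to a pixelwise product of the two images.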
Returning to
At step 402, motion fields are estimated between frames of the synthetic image sequence. The motion fields can be estimated based on annotations. Portions of the vessel branches are detected, and point-to-point matches between the vessel branches are obtained between two images (frames) in the synthetic image sequence. A thin plate spline is then used to generate the motion field for the vessel between the images. In practice, some vessel branches may overlap in the images, and their topological relationship may not hold between frames. This problem can be solved by separating the vessel branches into two disjoint sets so that there is no overlapping branch within each set, and separately tracking the motion of the branches in each set.
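The thin plate spline step can be illustrated with a minimal sketch. The code below fits a 2D thin plate spline to point-to-point matches between two frames and returns a function that maps any pixel location to its estimated position in the next frame. The function names and the direct dense-solve formulation are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2D thin plate spline mapping the (n, 2) control points `src`
    to their matched positions `dst`, and return a warp function."""
    n = len(src)
    # Radial kernel phi(r) = r^2 log r, with phi(0) = 0
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    K = np.where(d > 0, d**2 * np.log(d + (d == 0)), 0.0)
    P = np.hstack([np.ones((n, 1)), src])        # affine part [1, x, y]
    # Standard TPS linear system: [[K, P], [P^T, 0]] [w; a] = [dst; 0]
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    b = np.vstack([dst, np.zeros((3, 2))])
    coef = np.linalg.solve(A, b)                 # kernel weights + affine coeffs

    def warp(pts):
        d2 = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1)
        U = np.where(d2 > 0, d2**2 * np.log(d2 + (d2 == 0)), 0.0)
        return U @ coef[:n] + np.hstack([np.ones((len(pts), 1)), pts]) @ coef[n:]

    return warp
```

Evaluating the fitted warp on a dense pixel grid and subtracting the original coordinates yields the motion field for that set of branches; fitting one spline per disjoint set handles the overlapping-branch case described above.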
Returning to
Returning to
Returning to
$$JM_{ij} = \int \left[ \sqrt{p_i(x)} - \sqrt{p_j(x)} \right]^2 dx$$
where i and j represent the foreground and background, respectively, and pi(x) and pj(x) are the intensity distributions of the foreground and background, respectively.
In a special case where the two classes can be modeled as Gaussian distributions, the JM distance becomes:

$$JM_{ij} = 2\left(1 - e^{-B_{ij}}\right)$$

$$B_{ij} = \frac{1}{8}(\mu_i - \mu_j)^T \left[\frac{\Sigma_i + \Sigma_j}{2}\right]^{-1} (\mu_i - \mu_j) + \frac{1}{2}\ln\left(\frac{\left|\frac{1}{2}(\Sigma_i + \Sigma_j)\right|}{\sqrt{|\Sigma_i|\,|\Sigma_j|}}\right)$$

where $\Sigma_j$ is the covariance of class $j$, $\mu_j$ is the mean of class $j$, and $B_{ij}$ is the Bhattacharyya distance between the two classes. A benefit of using the JM distance to measure visibility is that it is bounded, with a range of [0,2]. When using the JM distance to measure visibility, since the two classes are distributed in the 1D space of intensity, the covariance of an intensity distribution simplifies to the square of the standard deviation of the intensity distribution. Accordingly, the Bhattacharyya distance can be rewritten as:
$$B_{ij} = \frac{(\mu_i - \mu_j)^2}{4\left(\sigma_i^2 + \sigma_j^2 + \epsilon\right)} + \frac{1}{2}\ln\left(\frac{\sigma_i^2 + \sigma_j^2}{2\,\sigma_i\sigma_j}\right)$$

where σ is the standard deviation of an intensity distribution. Note that there is a tuning term ε in the denominator of the contrast term that can be used to increase or decrease the importance of the contrast between foreground and background according to human perception. For example, the value of ε can be tuned based on the input of multiple observers. In an exemplary implementation, ε can be set to 100. Based on the above equations, the JM distance can be calculated as a quantitative measure of the visibility of the segmentation results. The JM distance can be calculated for the original synthetic image data and for the segmented image data, and the two JM distances can be compared to evaluate the segmentation procedure used to segment the image data. For example, the JM distance can be calculated for the segmentation results for each frame of a synthetic fluoroscopic image sequence segmented using coronary DSA.
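Under the Gaussian assumption above, the visibility measure reduces to a few lines of arithmetic. The sketch below computes the JM distance from foreground and background intensity samples; the function name and the use of sample mean and variance are our illustrative choices, with ε defaulting to the exemplary value of 100.

```python
import numpy as np

def jm_distance(fg, bg, eps=100.0):
    """Jeffries-Matusita distance between two 1D intensity samples,
    modeling each class as Gaussian. Bounded in [0, 2]; `eps` is the
    perceptual tuning term in the contrast denominator."""
    mu_i, mu_j = fg.mean(), bg.mean()
    var_i, var_j = fg.var(), bg.var()
    # 1D Bhattacharyya distance with tuning term eps in the contrast term
    b = 0.25 * (mu_i - mu_j) ** 2 / (var_i + var_j + eps) \
        + 0.5 * np.log((var_i + var_j) / (2.0 * np.sqrt(var_i * var_j)))
    return 2.0 * (1.0 - np.exp(-b))
```

Identical foreground and background distributions give a distance of 0 (no visibility), while well-separated distributions approach the upper bound of 2.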
When generating the synthetic image data, it is possible to add varying levels of noise in order to test robustness of an image segmentation technique to varying levels of noise. The noise can be added by generating a uniformly distributed white noise image. The noise image can be blurred, for example, using a Gaussian filter of size 5. The generated noise image is then multiplied by different noise scales to simulate images with different levels of noise and added to the synthetic image data.
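The noise-generation procedure just described can be sketched as follows. The 5-tap binomial kernel used here to approximate the "Gaussian filter of size 5", and the [-1, 1] range of the uniform noise, are illustrative assumptions.

```python
import numpy as np

def add_noise(image, noise_scale, rng=None):
    """Add blurred, uniformly distributed white noise at a given scale."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.uniform(-1.0, 1.0, size=image.shape)   # uniform white noise
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0     # 5-tap binomial ~ Gaussian
    # Separable blur: convolve rows, then columns
    noise = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, noise)
    noise = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, noise)
    return image + noise_scale * noise                 # scale sets the noise level
```

Sweeping `noise_scale` over a range of values yields a family of synthetic images at different noise levels against which a segmentation technique's robustness can be measured.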
The visibility measured by JM distance can be used to compare the synthetic image data with the segmentation results to evaluate the segmentation technique used to segment the image data.
The visibility measured by JM distance can also be used to evaluate variations in an image segmentation technique. For example, the visibility measure can be used to evaluate coronary DSA when different numbers of branches are tracked for motion estimation.
The above-described methods for evaluating an image segmentation based on visibility can be implemented on a computer using well-known computer processors, memory units, storage devices, computer software, and other components. A high-level block diagram of such a computer is illustrated in
The foregoing Detailed Description is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention.
Zhang, Wei, Comaniciu, Dorin, Barbu, Adrian, Prummer, Simone, Ostermeier, Martin
Patent | Priority | Assignee | Title |
5557684, | Mar 15 1993 | Massachusetts Institute of Technology | System for encoding image data into multiple layers representing regions of coherent motion and associated motion parameters |
6826292, | Jun 23 2000 | SRI International | Method and apparatus for tracking moving objects in a sequence of two-dimensional images using a dynamic layered representation |
6987865, | Sep 09 2000 | Microsoft Technology Licensing, LLC | System and method for extracting reflection and transparency layers from multiple images |
7155032, | Sep 09 2000 | Microsoft Technology Licensing, LLC | System and method for extracting reflection and transparency layers from multiple images |
7756305, | Jan 23 2002 | The Regents of the University of California | Fast 3D cytometry for information in tissue engineering |
7936922, | Nov 22 2006 | Adobe Inc | Method and apparatus for segmenting images |
20060110036, | |||
20060285747, | |||
20070116356, | |||
20070165921, | |||
20070165943, | |||
20080247621, | |||
20090147919, |
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Sep 04 2008 | Siemens Aktiengesellschaft | (assignment on the face of the patent) | / | |||
Oct 06 2008 | PRUMMER, SIMONE | Siemens Aktiengesellschaft | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 021747 | /0388 | |
Oct 07 2008 | OSTERMEIER, MARTIN | Siemens Aktiengesellschaft | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 021747 | /0388 | |
Oct 07 2008 | ZHANG, WEI | Siemens Corporate Research, Inc | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 021747 | /0489 | |
Oct 07 2008 | COMANICIU, DORIN | Siemens Corporate Research, Inc | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 021747 | /0489 | |
Oct 21 2008 | BARBU, ADRIAN | Siemens Corporate Research, Inc | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 021747 | /0456 | |
Apr 03 2009 | Siemens Corporate Research, Inc | Siemens Aktiengesellschaft | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 022506 | /0596 | |
Jun 10 2016 | Siemens Aktiengesellschaft | Siemens Healthcare GmbH | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 039271 | /0561 |
Date | Maintenance Fee Events |
May 14 2015 | M1551: Payment of Maintenance Fee, 4th Year, Large Entity. |
May 14 2019 | M1552: Payment of Maintenance Fee, 8th Year, Large Entity. |
Aug 14 2023 | REM: Maintenance Fee Reminder Mailed. |
Jan 29 2024 | EXP: Patent Expired for Failure to Pay Maintenance Fees. |