A seed point is selected inside a structure that is to be segmented in image data. An adaptive model is defined around the seed point, and a preprocessing filter is applied only within the bounding region. A presegmentation of the preprocessed result is performed, and the bounding region is expanded if necessary to accommodate the presegmentation result. An adaptive model for post-processing may be used. The model is translated, rotated and scaled to find a best fit with the pre-segmented data.
1. A method for segmenting a structure in a set of image data, the structure being identified by one or more seed points, the method comprising a processor configured to perform the following steps:
a) defining an adaptive model around the seed point, the adaptive model having one or more boundaries and sectors;
b) applying prefiltering only on the area defined by the adaptive model;
c) applying presegmentation within the result of the prefiltering;
d) analyzing results of the presegmentation for rotation or translation of the adaptive model; and
e) if the result of the presegmentation touches any of the one or more boundaries of the adaptive model, expanding the corresponding boundary and sector.
5. A method for segmenting a structure in a set of image data, the structure being identified by one or more seed points, comprising a processor configured to perform the following steps of:
(a) presegmenting the structure;
(b) defining a bounding shape around the seed point, the bounding shape having one or more boundaries;
(c) filtering at least a portion of the set of image data;
(d) determining whether any part of the one or more boundaries is within the bounding shape; and
(e) if any part of the one or more boundaries is within the bounding shape, growing a region within the bounding shape and, if the region touches any of the one or more boundaries before reaching the structure, expanding the bounding shape.
13. A computer system for segmenting a structure in a set of image data, the structure being identified by one or more seed points, comprising:
one or more processors;
a memory readable by the processor, the memory comprising program code executable by the processor, the program code adapted to perform the following steps:
(a) presegmenting the structure;
(b) defining a bounding shape around the seed point, the bounding shape having one or more boundaries;
(c) filtering at least a portion of the set of image data;
(d) determining whether any part of the one or more boundaries is within the bounding shape; and
(e) if any part of the one or more boundaries is within the bounding shape, growing a region within the bounding shape and, if the region touches any of the one or more boundaries before reaching the structure, expanding the bounding shape.
2. The method as claimed in
6. The method as claimed in
7. The method as claimed in
8. The method as claimed in
11. The method as claimed in
12. The method as claimed in
14. The system as claimed in
15. The system as claimed in
16. The system as claimed in
19. The system as claimed in
20. The system as claimed in
This application claims the benefit of U.S. Provisional Application No. 60/646,705, filed Jan. 25, 2005, which is incorporated herein by reference.
The present invention relates to the segmentation of anatomical structures and more particularly to the segmentation of structures within medical imaging data.
The identification of structures like tumors, the bladder, kidney, etc. is one of the most time consuming tasks in the workflow of the medical diagnostic workplace. One of the reasons is the difficulty in generalizing the steps necessary for the successful segmentation of these objects in medical imaging data.
Despite extensive research, the identification of these structures is still carried out by manual segmentation tools. Further, with the increasing resolution of the new generations of scanners, the segmentation time is expected to increase. This will lead to an even more substantial and time consuming process.
Each of these issues directly affects the quality of the measurements as well as the quality of the medical diagnosis and service provided to patients. The tedium of the present procedures can result in user fatigue as well as poor segmentation of the structures.
Accordingly, new and improved methods and systems for segmenting structures within imaging data are needed. In particular, it would be highly desirable to make available systems that reduce the amount of time and effort that medical personnel must exert to segment structures in medical imaging data.
The present invention is an image processing method and system that allows the segmentation of structures such as tumors, kidneys and bladders with a "one mouse click" approach and that is based on the combination of several image processing filters. In accordance with one aspect of the present invention, a seed point is placed inside the structure to be segmented. The procedure applies preprocessing, pre-segmentation and post-processing filters with advanced image processing tools that are based on partial differential equations.
The kind of structure that can be segmented is not limited: both inhomogeneous and homogeneous structures can be segmented successfully. Adaptive models are used to restrict the area where the segmentation procedures are applied, so that the time for segmenting the structure is reduced to a minimum. The models are computed on the fly and are defined by predefined constraints based on the geometrical appearance of the structure, a statistical description based on principal components, and shape.
In accordance with one aspect of the present invention, a method is provided for segmenting a structure in a set of image data, the structure being identified by a seed point. The method includes defining a bounding shape around the seed point, the bounding shape having one or more boundaries. The area covered by the bounding shape is pre-processed and presegmented, and the presegmented region is tested within the bounding shape. If the region touches any of the one or more boundaries, the touched boundaries are expanded. In accordance with another aspect of the present invention, the bounding shape is expanded at any of the one or more boundaries that are touched by the region. The method further involves repeating the steps of expanding the bounding shape, preprocessing and pre-segmentation until the pre-segmented region does not touch any of the one or more boundaries, and generating a final segmentation of the structure.
Another aspect of the present invention involves outputting the final segmentation to an output or storage device.
In accordance with a further aspect of the present invention, the bounding shape is a bounding box. In accordance with another aspect, the bounding shape is elliptic.
The present invention also involves pre-filtering the image data with an edge-preserving smoothing filter. The filtering is preferably performed only within a selected area, that is, the area in the bounding shape, thereby minimizing processing time and requirements. It is preferred to use an anisotropic diffusion filter to pre-filter the image data.
In step 14, there are two different adaptive models that can be used in accordance with various aspects of the present invention. The first adaptive model is an adaptive bounding box model and the second is an elliptical model approach. The steps illustrated in
The first step 8 is to input medical image data. While the segmentation has been applied to CT, PET and MR images, there are no limitations on the kind of input images or modalities that the segmentation procedure of the present invention supports. Ultrasound images, MRI, etc. can be used as input images. There are no restrictions regarding the format of the image or physical variations such as pixel spacing, slice location, image dimensions, etc. It is assumed that one can always convert the input images into gray-scale images, and that one can handle the gray values of the images. The segmentation procedure does not need any predefined models or settings.
The only input needed to begin the process is one or more mouse clicks on the structure to be segmented. The rest of the workflow is performed automatically. Around the seed points, the initial adaptive model is set. The initial adaptive model can be a bounding box, an ellipsoid or another shape. The refinement of the model can also be performed semi-automatically in dialog with the user. The user can change the model in order to prevent the algorithm from growing out of all bounds, which is especially important in extreme cases.
The next step 10 in the segmentation process is the preprocessing that is performed inside the initial adaptive model. The textures of the input images vary from case to case. One difficulty arises when the dataset exhibits noise or the area to be segmented is inhomogeneous. Noise in the image or inhomogeneous areas reduces the capacity of the image processing filter to identify regions. In those cases, high gradient textures may interfere with the expansion of the contour when using segmentation functions like region growing. Using a gradient approach without preprocessing could cause, for example, under-segmentation of the structure. It is therefore necessary to include preprocessing in the segmentation workflow, preferably by using an edge-preserving smoothing filter.
Although filters such as the median and Gaussian filters smooth the image area and can also be applied to a 3D dataset without straining the CPU, they tend to blur away sharp boundaries and to distort the fine structure of the image, thereby changing aspects of the anatomical shapes. For a discussion of the median and Gaussian filters, see Gonzalez R C, Woods R E: Digital Image Processing, Addison-Wesley, Reading, Mass., 1993.
Consequently, it is preferred to apply more sophisticated techniques. In accordance with a preferred embodiment of the present invention, a partial differential equation (PDE) based method called the "non linear diffusion" filter is used. See, for example, Morton K W, Mayers D F: Numerical Solution of Partial Differential Equations, Cambridge University Press, Cambridge, 1994, and Weickert J: Non linear Isotropic Diffusion II: Semidiscrete and Discrete Theory, Fakultaet fuer Mathematik und Informatik, Universitaet Mannheim, 2001. This type of filter was first proposed by Perona and Malik. See, for example, Perona P, Malik J: Scale-space and edge detection using anisotropic diffusion, IEEE Transactions on Pattern Analysis and Machine Intelligence, 12:629-639, 1990, as well as Weickert J: Foundations and application of non linear anisotropic diffusion filtering, Zeitschrift fuer angewandte Mathematik und Mechanik, Vol. 76, pp. 283-286, 1996.
This filter offers the option to smooth the image and simultaneously enhance edges by using a non-uniform process which is adapted to the local image structures and which applies less diffusion in those regions where high contrast is found. These areas can be measured by using the diffusion coefficient equation (Eqn. 1), where λ is a constant and |∇I| is the image gradient magnitude. By solving the partial differential equation (Eqn. 2), the diffusion and edge detection converge in one single process, where I=I(x,y,z,t) is the 3D image and t represents the iteration step or time.
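The two equations themselves are not reproduced in this text. A plausible reconstruction, following the cited Perona-Malik formulation and the definitions given above (the exact variant of the diffusivity, e.g. an exponential form, may differ), is

\[ c(|\nabla I|) = \frac{1}{1 + \left(|\nabla I| / \lambda\right)^{2}} \qquad \text{(Eqn. 1)} \]

\[ \frac{\partial I}{\partial t} = \operatorname{div}\!\left( c(|\nabla I|)\, \nabla I \right) \qquad \text{(Eqn. 2)} \]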
The anisotropic diffusion filter has been successfully applied for smoothing the inhomogeneous areas, reducing noise while preserving and also enhancing the contours of the image. The images become homogeneous so that region growing based approaches can be used to apply segmentation successfully.
The non linear diffusion filter requires a lot of computing power. The time necessary for applying it to a dataset with about 60 slices and a resolution of 512 by 512 on a Pentium 4 machine would exceed 10 minutes, which makes this preprocessing step unacceptable for the clinical routine. In order to still take advantage of the characteristics of the non linear diffusion filter, the preprocessing step is preferably applied only inside the adaptive model. Restricting the filter to this region of interest reduces the time for preprocessing to an interval of less than a second to a couple of seconds, depending on the size of the structure to be segmented.
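By way of illustration, confining a Perona-Malik style diffusion step to the adaptive model could look as follows in C++. The image container, the box structure and the explicit 6-neighbour discretization are illustrative assumptions for this sketch, not the implementation of the invention:

#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical minimal 3D image type; values stored x-fastest.
struct Image3D {
    int nx, ny, nz;
    std::vector<float> data;
    float& at(int x, int y, int z) {
        return data[((std::size_t)z * ny + y) * nx + x];
    }
};

// Bounding box of the adaptive model (inclusive voxel bounds, assumed to lie
// strictly inside the image so that all 6-neighbours of box voxels exist).
struct Box { int x0, y0, z0, x1, y1, z1; };

// Perona-Malik diffusivity: little diffusion across high-contrast (edge) regions.
static float diffusivity(float gradMag, float lambda) {
    float r = gradMag / lambda;
    return 1.0f / (1.0f + r * r);
}

// One explicit diffusion step applied only inside the bounding box.
// dt should be <= 1/6 for stability of this simple 6-neighbour scheme.
void diffuseInsideBox(Image3D& img, const Box& b, float lambda, float dt) {
    Image3D out = img;  // voxels outside the box are left untouched
    static const int nb[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    for (int z = b.z0; z <= b.z1; ++z)
        for (int y = b.y0; y <= b.y1; ++y)
            for (int x = b.x0; x <= b.x1; ++x) {
                float c = img.at(x, y, z);
                float flux = 0.0f;
                for (const auto& d : nb) {
                    float diff = img.at(x + d[0], y + d[1], z + d[2]) - c;
                    flux += diffusivity(std::fabs(diff), lambda) * diff;
                }
                out.at(x, y, z) = c + dt * flux;
            }
    img = out;
}

Iterating this step a handful of times only over the box voxels is what keeps the preprocessing in the sub-second range mentioned above.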
The pre-segmentation step involves segmenting the structure only inside the area defined by the adaptive model (step 12). Image processing functions such as region growing, snakes, etc. can be used to perform this step.
In step 14, an adaptive model is used. Two different adaptive models have been developed and implemented. They are: an adaptive bounding box model approach and an adaptive elliptical model based approach.
The bounding box model will be described first.
Referring now to
The procedure starts after a seed point is set inside the area to be segmented. Around this point an initial three-dimensional bounding box (30) is set (e.g. 10×10×3). The non linear diffusion is performed only inside the area defined by the bounding box. After the preprocessing, a presegmentation filter is applied. For example, a region based filter such as the region growing described in Bernd J.: Digitale Bildverarbeitung, Springer-Verlag, Berlin, Heidelberg, New York, 1993, is applied to detect all the neighboring voxels with similar gray value intensities. The preprocessing smooths the texture so that the region marked by the bounding box becomes homogeneous.
The boundary test is applied on each side of the bounding box (30). In the case that one side of the bounding box is touched by the result of the presegmentation, this side is enlarged. So, for example, in
If at least one side was enlarged, then the procedure is repeated using the new enlarged bounding box as the input parameter. Thus, the model is rescaled. After rescaling the model, two different options can be used: the already segmented area can be reused, or the procedure can perform the prefiltering and presegmentation on the whole area based on the rescaled model.
The final segmentation step will now be discussed. If the intensity values of the area to be segmented differ from the surrounding area, a simple segmentation method such as region growing is sufficient to achieve the segmentation. However, in many images this is not the case. Thus, other criteria were integrated to find the boundaries of the structures. One of the criteria used with success for preventing leaking is monitoring the number of voxels found on each bounding box side in each iteration. When the number of voxels changes drastically on one side of the bounding box, the iteration on this side is stopped. Using this method, the growth procedure is kept under control.
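Such a leak-prevention criterion could be sketched as follows; the side enumeration, helper names and the threshold factor are illustrative assumptions, not values taken from the disclosure:

#include <array>

enum Side { FRONT, BACK, TOP, BOTTOM, RIGHT, LEFT, NUM_SIDES };

// If the number of segmented voxels on a box side jumps by more than a chosen
// factor between iterations, the growth on that side is treated as a likely leak.
bool sideGrowthLooksLikeLeak(int previousCount, int currentCount, double maxRatio = 3.0) {
    if (previousCount == 0) return false;            // nothing to compare against yet
    double ratio = (double)currentCount / (double)previousCount;
    return ratio > maxRatio;                         // "drastic" change => stop this side
}

void updateGrowthFlags(std::array<int, NUM_SIDES>& prevCounts,
                       const std::array<int, NUM_SIDES>& currCounts,
                       std::array<bool, NUM_SIDES>& allowedToGrow) {
    for (int s = 0; s < NUM_SIDES; ++s) {
        if (allowedToGrow[s] && sideGrowthLooksLikeLeak(prevCounts[s], currCounts[s]))
            allowedToGrow[s] = false;                // freeze this side of the box
        prevCounts[s] = currCounts[s];
    }
}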
In the case that none of the sides was enlarged, a post-processing procedure is applied which is based on performing a morphological opening operation with an elliptically shaped filter element to separate the structure to be segmented from leak regions. On that result a connected component filter is applied, and only the component touching the seed point is kept while the others are discarded.
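The component-selection part of this post-processing can be realized, for example, with a simple flood fill from the seed point on the binary result of the morphological opening. The sketch below assumes the mask is stored as a flat vector and is only an illustration; toolkits such as ITK provide equivalent opening and connected-component filters:

#include <array>
#include <cstddef>
#include <queue>
#include <vector>

// Keep only the foreground component of a binary 3D mask that contains the seed;
// everything else is discarded. Mask is stored x-fastest, value 1 = foreground.
void keepSeedComponent(std::vector<unsigned char>& mask, int nx, int ny, int nz,
                       int sx, int sy, int sz) {
    auto idx = [&](int x, int y, int z) { return (std::size_t)(z * ny + y) * nx + x; };
    std::vector<unsigned char> kept(mask.size(), 0);
    if (!mask[idx(sx, sy, sz)]) { mask.swap(kept); return; }   // seed not on foreground

    std::queue<std::array<int, 3>> q;
    q.push({sx, sy, sz});
    kept[idx(sx, sy, sz)] = 1;
    static const int nb[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    while (!q.empty()) {
        std::array<int, 3> p = q.front(); q.pop();
        for (const auto& d : nb) {
            int x = p[0] + d[0], y = p[1] + d[1], z = p[2] + d[2];
            if (x < 0 || y < 0 || z < 0 || x >= nx || y >= ny || z >= nz) continue;
            std::size_t i = idx(x, y, z);
            if (mask[i] && !kept[i]) { kept[i] = 1; q.push({x, y, z}); }
        }
    }
    mask.swap(kept);   // only the seed-connected component remains
}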
The pseudo code below shows an implementation of the method of the invention after an initial bounding box has been placed around a seed point:
do {
    // increase the size of the bounding box on every side that is still allowed to grow
    if (isBoxAllowedToGrowFront)  if (!increaseArea( FRONT ))  isBoxAllowedToGrowFront  = false;
    if (isBoxAllowedToGrowBack)   if (!increaseArea( BACK ))   isBoxAllowedToGrowBack   = false;
    if (isBoxAllowedToGrowTop)    if (!increaseArea( TOP ))    isBoxAllowedToGrowTop    = false;
    if (isBoxAllowedToGrowBottom) if (!increaseArea( BOTTOM )) isBoxAllowedToGrowBottom = false;
    if (isBoxAllowedToGrowRight)  if (!increaseArea( RIGHT ))  isBoxAllowedToGrowRight  = false;
    if (isBoxAllowedToGrowLeft)   if (!increaseArea( LEFT ))   isBoxAllowedToGrowLeft   = false;

    // run preprocessing filter (e.g. non linear diffusion filter)
    filteredRawImage = runPreprocessingFilter();

    // run criteria filter (e.g. region growing)
    preSegmentedImage = runPresegmentationFilter(filteredRawImage);

    // test whether bounding box sides are touched by the result of the criteria filter
    if (!letBoxGrowUp( FRONT,  preSegmentedImage )) isBoxAllowedToGrowFront  = false;
    if (!letBoxGrowUp( BACK,   preSegmentedImage )) isBoxAllowedToGrowBack   = false;
    if (!letBoxGrowUp( TOP,    preSegmentedImage )) isBoxAllowedToGrowTop    = false;
    if (!letBoxGrowUp( BOTTOM, preSegmentedImage )) isBoxAllowedToGrowBottom = false;
    if (!letBoxGrowUp( RIGHT,  preSegmentedImage )) isBoxAllowedToGrowRight  = false;
    if (!letBoxGrowUp( LEFT,   preSegmentedImage )) isBoxAllowedToGrowLeft   = false;

    // keep iterating until no side can grow any more
} while (isBoxAllowedToGrowFront || isBoxAllowedToGrowBack ||
         isBoxAllowedToGrowTop   || isBoxAllowedToGrowBottom ||
         isBoxAllowedToGrowRight || isBoxAllowedToGrowLeft);
Although the above embodiment discloses the use of an expanding rectangular bounding region, it should be noted that shapes other than rectangular may be used for the bounding region. Further, although this approach shows good results, there are some limitations that produce over-segmented results together with some artifacts. These limitations are related to the failure to adapt a circular object (e.g. a tumor) with a quadratic model (the 3D bounding box). When segmenting circular objects, leakage cannot be resolved in many cases because the corners of the bounding box cannot be tested. The bounding box does not allow every part of the structure boundary to be taken into account. Therefore, the approach has been expanded to add an elliptical model to the segmentation procedure, where the 3D bounding box is still used to limit the area where the preprocessing filter is applied.
This approach, called the adaptive elliptical model approach, is discussed next. Referring to
In comparison with using a bounding box to restrict the result of the pre-segmentation procedure, the use of an elliptical model allows more efficient boundary criteria tests when segmenting an ellipsoidal structure, since its shape can be better modeled. The procedure is based on an ellipsoid model using the equation below, whose origin is the seed point selected by the user.
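The equation itself is not reproduced in this text; given the parameter description that follows, it is presumably the standard ellipsoid equation, with x, y and z measured relative to the seed point at the center:

\[ \frac{x^{2}}{a^{2}} + \frac{y^{2}}{b^{2}} + \frac{z^{2}}{c^{2}} \le 1 \]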
Parameters a, b and c are the radii of the ellipsoid along its three axes, and x, y and z are the coordinates of all points (i.e., pixels or voxels) within the ellipsoid. Arbitrary rotations around the three coordinate axes, and a translation to bring the ellipsoid into best correspondence with the area to be segmented, are permitted. Rotation and translation parameters are determined by boundary tests, which are explained in the following.
At the beginning of the procedure, in step 68, an initial 3D bounding box is created around the seed point. The seed point also becomes the center of the ellipsoid, which is created inside the bounding box in accordance with the equation discussed above. As the size of the ellipsoid increases during the segmentation process, the size of the bounding box is increased as well. The purpose of the bounding box here is just to limit the area where the pre-processing filter and the pre-segmentation filter are applied. All boundary tests are performed by the ellipsoid.
After running a smoothing filter (as previously discussed) and the pre-segmentation inside the bounding box, the boundary voxels of the ellipsoid are determined, and for each sector the number of boundary voxels that are not part of the pre-segmented region (i.e. background voxels) is calculated. Each sector competes with the sector on its opposite side, such that the sector with more background voxels causes a translation of the whole adaptive model towards its opposite position. See
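A minimal sketch of this sector competition, assuming six axis-aligned sectors and a fixed translation step (both illustrative choices, not taken from the disclosure), could look as follows:

#include <array>

struct Ellipsoid { double cx, cy, cz, a, b, c; };   // center and radii of the model

enum Sector { X_POS, X_NEG, Y_POS, Y_NEG, Z_POS, Z_NEG, NUM_SECTORS };

// For each axis, the sector with more background boundary voxels pushes the model
// towards the opposite side, i.e. towards the structure.
void translateByCompetition(Ellipsoid& e,
                            const std::array<int, NUM_SECTORS>& backgroundCount,
                            double step = 1.0) {
    if (backgroundCount[X_POS] > backgroundCount[X_NEG]) e.cx -= step;
    else if (backgroundCount[X_NEG] > backgroundCount[X_POS]) e.cx += step;

    if (backgroundCount[Y_POS] > backgroundCount[Y_NEG]) e.cy -= step;
    else if (backgroundCount[Y_NEG] > backgroundCount[Y_POS]) e.cy += step;

    if (backgroundCount[Z_POS] > backgroundCount[Z_NEG]) e.cz -= step;
    else if (backgroundCount[Z_NEG] > backgroundCount[Z_POS]) e.cz += step;
}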
Reorientation of the model and finalizing the segmentation is the next step 72. A part of this workflow is to determine in which directions the ellipsoid has to grow in order to cover the whole structure. It is preferred to use principal component analysis to calculate the current axes of inertia of the pre-segmented structure within the ellipsoid (gray area inside the circle in
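The axes of inertia can be obtained, for example, from an eigen-decomposition of the covariance matrix of the pre-segmented voxel coordinates; the sketch below uses the Eigen library for the 3x3 decomposition and a hypothetical voxel list, and is only an illustration of the principal component step:

#include <Eigen/Dense>
#include <vector>

struct Voxel { double x, y, z; };

// Returns the eigenvectors (columns) of the covariance matrix of the voxel positions,
// i.e. the principal axes along which the pre-segmented structure extends.
Eigen::Matrix3d principalAxes(const std::vector<Voxel>& voxels) {
    if (voxels.empty()) return Eigen::Matrix3d::Identity();

    Eigen::Vector3d mean = Eigen::Vector3d::Zero();
    for (const auto& v : voxels) mean += Eigen::Vector3d(v.x, v.y, v.z);
    mean /= (double)voxels.size();

    Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
    for (const auto& v : voxels) {
        Eigen::Vector3d d = Eigen::Vector3d(v.x, v.y, v.z) - mean;
        cov += d * d.transpose();
    }
    cov /= (double)voxels.size();

    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> solver(cov);
    return solver.eigenvectors();   // columns ordered by increasing eigenvalue
}

The resulting axes define the rotation that aligns the ellipsoid with the structure before the next growth iteration.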
In the next step, the model is rescaled. For each pair of opposing sectors, the number of contour voxels intersecting the structure to be segmented is counted and compared with the overall number of contour voxels in these sectors. If this ratio is higher than a specified threshold (0.75 has proven to work very well), the ellipsoid radius along the respective axis is increased. Otherwise the ellipsoid stops its growth process on this axis. This test is applied to each axis until all of them reach the stop criterion, i.e. until the ratio is less than the threshold.
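The per-axis growth test can be expressed compactly as a ratio check; the structure and helper names below are illustrative:

struct AxisSectorStats {
    int contourVoxelsInStructure;   // contour voxels of both opposing sectors inside the structure
    int contourVoxelsTotal;         // all contour voxels of both opposing sectors
};

// Grow the ellipsoid radius on this axis while the occupancy ratio stays above
// the threshold (0.75 as quoted above); below the threshold the growth stops.
bool shouldGrowAxis(const AxisSectorStats& s, double threshold = 0.75) {
    if (s.contourVoxelsTotal == 0) return false;
    double ratio = (double)s.contourVoxelsInStructure / (double)s.contourVoxelsTotal;
    return ratio > threshold;
}

The caller would then increase the corresponding radius (a, b or c) as long as shouldGrowAxis returns true for that axis.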
Referring to
Only two parameters are used in order to adapt the approach to different datasets: elasticity and the range of gray values. Elasticity defines how well the ellipsoid is adapted to the pre-segmented data. Elasticity has a range between 0 and 4. When elasticity is set to zero, the ellipsoid conserves its shape. This option is used in the case when the boundaries are not very clear. The value four is used when the tumor is isolated, so that the ellipsoid can adapt itself 100% to the pre-segmented image without needing to cut away any leakage. The default parameter used in the implemented prototype is 3, which helps to segment up to 70% of the datasets. In the other cases this parameter has to be adapted.
The model has been tested by segmenting 66 liver tumors of different shapes and sizes in CT datasets of different shapes and dimensions. All of the tumors were successfully segmented.
Any type of computer system can be used, including without limitation, a personal computer, a workstation, a multi-processor system, a parallel processing system or the like. Further, the processor can be a GPU, a CPU or any other processor circuit.
The following references are incorporated herein by reference: (1) Russ J C: Image Processing Handbook, CRC Press, Boca Raton, London, New York, Washington D.C., 131-166, 2002; (2) Gonzalez R C, Woods R E: Digital Image Processing, Addison-Wesley, Reading, Mass., 1993; (3) Perona P, Malik J: Scale-space and edge detection using anisotropic diffusion, IEEE Transactions on Pattern Analysis and Machine Intelligence, 12:629-639, 1990; (4) Weickert J: Foundations and application of non linear anisotropic diffusion filtering, Zeitschrift fuer angewandte Mathematik und Mechanik, Vol. 76, pp. 283-286, 1996; (5) Morton K W, Mayers D F: Numerical Solution of Partial Differential Equations, Cambridge University Press, Cambridge, 1994; (6) Weickert J: Non linear Isotropic Diffusion II: Semidiscrete and Discrete Theory, Fakultaet fuer Mathematik und Informatik, Universitaet Mannheim, 2001; (7) Bernd J: Digitale Bildverarbeitung, Springer-Verlag, Berlin, Heidelberg, New York, 1993; (8) Sabins F F: Remote Sensing: Principles and Interpretation, Freeman and Company, New York, US, 1996; (9) Anton H: Elementary Linear Algebra, Wiley & Sons, 1987.
In the following disclosure, the term “pixel” is used to indicate a data structure that is used to compose an image. Although the term typically indicates a two-dimensional element, for purposes of the following disclosure, “pixel” is also intended to include three-dimensional picture elements, i.e., voxels.
While there have been shown, described and pointed out fundamental novel features of the invention as applied to preferred embodiments thereof, it will be understood that various omissions and substitutions and changes in the form and details of the device illustrated and in its operation may be made by those skilled in the art without departing from the spirit of the invention. It is the intention, therefore, to be limited only as indicated by the scope of the claims appended hereto.
Stingl, Thomas, Cardenas, Carlos E., Rose, Marc