Optimizations are provided for segmenting tissue objects included in an ultrasound image. Initially, raw pixel data is received, where each pixel corresponds to ultrasound information. This raw pixel data is processed through a first fully convolutional network to generate a first segmentation label map, which includes a first set of objects that have been segmented into a coarse segmentation class. This first map is then processed through a second fully convolutional network, using the raw pixel data as a base reference, to generate a second segmentation label map. This second map includes a second set of objects that have been segmented into a fine segmentation class. A contour optimization algorithm is then applied to at least one of the second set of objects in order to refine that object's contour boundary. Subsequently, that object is identified as corresponding to a lymph node.
11. One or more hardware storage devices having stored thereon computer-executable instructions that, when executed by one or more processors of a computer system, cause the computer system to:
receive raw image data that is comprised of an array of pixels, each pixel within the array of pixels comprising ultrasound information;
process the raw image data through a first fully convolutional network to generate a first segmentation label map, wherein:
the first segmentation label map comprises a first set of objects that have been segmented into at least a coarse segmentation class, and
each object within the first set of objects corresponds to a group of pixels from the array of pixels;
process the first segmentation label map through a second fully convolutional network to generate a second segmentation label map, wherein:
processing the first segmentation label map through the second fully convolutional network is performed using the raw image data as a base reference,
the second segmentation label map comprises a second set of objects that have been segmented into a fine segmentation class, and
each object within the second set of objects corresponds to a group of pixels from the array of pixels;
apply a contour optimization algorithm to at least one object within the second set of objects, wherein the contour optimization algorithm refines a corresponding contour boundary for the at least one object; and
generate an identification that the at least one object corresponds to a lymph node.
16. A method for segmenting tissue objects that are included within an ultrasound image, the method being implemented by one or more processors of a computer system, the method comprising:
receiving raw image data that is comprised of an array of pixels, each pixel within the array of pixels comprising ultrasound information;
processing the raw image data through a first fully convolutional network to generate a first segmentation label map, wherein:
the first segmentation label map comprises a first set of objects that have been segmented into at least a coarse segmentation class, and
each object within the first set of objects corresponds to a group of pixels from the array of pixels;
processing the first segmentation label map through a second fully convolutional network to generate a second segmentation label map, wherein:
processing the first segmentation label map through the second fully convolutional network is performed using the raw image data as a base reference,
the second segmentation label map comprises a second set of objects that have been segmented into a fine segmentation class, and
each object within the second set of objects corresponds to a group of pixels from the array of pixels;
applying a contour optimization algorithm to at least one object within the second set of objects, wherein the contour optimization algorithm refines a corresponding contour boundary for the at least one object; and
generating an identification that the at least one object corresponds to a lymph node.
1. A computer system comprising:
one or more processors; and
one or more computer-readable hardware storage devices having stored thereon computer-executable instructions that, when executed by the one or more processors, cause the computer system to:
receive raw image data that is comprised of an array of pixels, each pixel within the array of pixels comprising ultrasound information;
process the raw image data through a first fully convolutional network to generate a first segmentation label map, wherein:
the first segmentation label map comprises a first set of objects that have been segmented into at least a coarse segmentation class, and
each object within the first set of objects corresponds to a group of pixels from the array of pixels;
process the first segmentation label map through a second fully convolutional network to generate a second segmentation label map, wherein:
processing the first segmentation label map through the second fully convolutional network is performed using the raw image data as a base reference,
the second segmentation label map comprises a second set of objects that have been segmented into a fine segmentation class, and
each object within the second set of objects corresponds to a group of pixels from the array of pixels;
apply a contour optimization algorithm to at least one object within the second set of objects, wherein the contour optimization algorithm refines a corresponding contour boundary for the at least one object; and
generate an identification that the at least one object corresponds to a lymph node.
2. The computer system of
3. The computer system of
4. The computer system of
5. The computer system of
6. The computer system of
8. The computer system of
9. The computer system of
10. The computer system of
12. The one or more hardware storage devices of
13. The one or more hardware storage devices of
14. The one or more hardware storage devices of
15. The one or more hardware storage devices of
18. The method of
19. The method of
20. The method of
This application claims priority to PCT Application No. PCT/US2017/065913 filed Dec. 12, 2017, entitled “SEGMENTING ULTRASOUND IMAGES,” which claims the benefit of and priority to U.S. Provisional Patent Application Ser. No. 62/432,849, filed Dec. 12, 2016 entitled “SEGMENTING ULTRASOUND IMAGES”. All of the aforementioned are incorporated herein by reference in their entirety.
One of the first tasks that a human learns as an infant is the process of recognizing objects. As an infant grows older, that infant's ability to immediately identify objects within his/her surroundings continuously improves. Eventually, infants get to the point where they can scan their surroundings and immediately understand the environment in which they are situated. Similar to scanning an environment, humans also have the ability to examine an image (e.g., a picture) and immediately understand the scene that is illustrated in the image. This ability to examine, recognize, and identify/categorize objects is a learned trait that is developed over time.
In contrast, this ability (i.e. recognizing objects in an image and then classifying those objects) is not an innate process for a computer system. To clarify, computers do not view images in the same manner that a human does. For instance, instead of seeing an artful canvas on which many different colors and objects are illustrated, a computer simply “sees” an array of pixels. The computer must then analyze each of these pixels to determine which pixels belong to which objects in the image.
Similar to how an infant progressively learns to recognize objects, a computer can also be trained to recognize objects. In the case of machine learning, this training process can be accomplished by providing the computer with a large number of images. The computer is then “taught” what a particular object is through a process of identifying that particular object within the images to the computer. By way of example, suppose a user wanted to teach the computer to recognize a dog within an image. To do so, the user will feed a selected number of dog images to the computer and tell the computer that a dog is present in each of those images. The computer can then learn (i.e. machine learning) about the various features of a dog.
For the most part, efforts in teaching a computer how to perform image recognition/classification have focused on the use of natural images (i.e. images that capture real-world objects) as opposed to medical images (e.g., ultrasound images or MRI images). This bias is due, in part, to the nearly unlimited availability of natural images as compared to the availability of medical images. Another reason is the limited number of personnel who are qualified to teach the computer system about the objects that are captured in a medical image.
To date, the analysis of medical images is mostly performed by human inspection. In many instances, this process can be quite laborious. Furthermore, the analysis can be fraught with inconsistencies and misidentifications. Accordingly, there exists a substantial need in the field of image recognition and classification to assist a human in analyzing medical images. Even further, there exists a substantial need in the field to enable a computer to examine, recognize, and identify/classify objects within a medical image.
In the case that computer systems are used to analyze medical images, significant processing and algorithm maintenance is required. Further, the resulting digital classifications of images can be error prone. While computer processing of medical images would provide significant technical advantages, the various inaccuracies and processing requirements associated with conventional computer systems place significant technical barriers in the way of widespread adoption.
The subject matter claimed herein is not limited to embodiments that solve any disadvantages or that operate only in environments such as those described above. Rather, this background is provided to illustrate only one exemplary technology area where some embodiments described herein may be practiced.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Disclosed embodiments are directed to systems, hardware storage devices, and methods for segmenting tissue objects that are included within an ultrasound image.
Initially, raw image data (e.g., an ultrasound image) is received. Here, this raw image data is comprised of an array of pixels, and each pixel comprises ultrasound information. This raw image data is processed through a first fully convolutional network to generate a first segmentation label map. This first segmentation label map includes a first set of objects that have been segmented into a “coarse” segmentation class. Of note, each object within this first set corresponds to a group of pixels from the array of pixels. Then, this first segmentation label map is processed through a second fully convolutional network to generate a second segmentation label map. When the first segmentation label map is processed through the second fully convolutional network, the second fully convolutional network uses the raw image data (e.g., the ultrasound image) as a base reference. The resulting second segmentation label map includes a second set of objects that have been segmented into a “fine” segmentation class. Here, each object within the second set also corresponds to a group of pixels from the array of pixels. Subsequently, a contour optimization algorithm is applied to at least one of the second set of objects in order to refine that object's contour boundary. Additionally, that object is identified as corresponding to a lymph node.
These and other objects and features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
To further clarify the above and other advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. It is appreciated that these drawings depict only illustrated embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
Disclosed embodiments are directed to systems, hardware storage devices, and methods for segmenting tissue objects within an ultrasound image.
As used herein, the term “segmenting” generally refers to the process of examining, recognizing, and identifying/categorizing an object within an image. As used herein, “semantic segmentation” is an analogous term and can be interchangeably used in connection with “segmenting.” Further, as used herein an “object” comprises a visually distinguishable portion of an image that is distinct from at least another portion of the image. For example, an object within a medical image may comprise a particular organ, a portion of an organ, a tissue mass, or a particular type of tissue.
The embodiments may be implemented to overcome many of the technical difficulties and computational expenses associated with a computer performing image identification and classification (i.e. segmentation). In particular, the embodiments provide a computerized, automated method of accurately segmenting tissue image objects from within a complex ultrasound image. Such a process greatly assists medical practitioners when they conduct a medical examination. For instance, objects can be identified within medical images with greater accuracy and through the use of fewer computing resources than previously possible. Accordingly, medical practitioners will be able to provide more accurate and timely medical assistance to patients.
The disclosed embodiments provide additional benefits by not only identifying objects within a medical image, but by also removing any uncertainties that are associated with those objects. For instance, some objects within a medical image may have visual impairments (e.g., blurred edges or other irregular features) as a result of being captured in the medical image. Disclosed embodiments are able to correct these visual impairments and provide an accurate depiction of those objects.
Additionally, one of skill in the art will appreciate that some tissues may appear to be visually similar to other tissues (e.g., a lymph node may appear to be visually similar to a certain type of blood vessel). It may be difficult for a trained professional, much less a conventional image processing system, to correctly identify tissue types from a medical image. Nevertheless, disclosed embodiments are able to accurately distinguish between visually similar tissue types. Accordingly, the disclosed embodiments provide significant advances in diagnosis and disease identification.
The present embodiments also improve the underlying functionality of a computer system that performs image processing. For instance, the disclosed embodiments are able to perform semantic segmentation in one or more stages. By utilizing a unique staging of the segmentation process, the disclosed embodiments significantly improve how the computer system operates because the computer system's resources are utilized in a much more efficient manner.
To achieve these benefits (and others), the disclosed embodiments segment tissue objects that are included within an ultrasound image. At a high level, the embodiments initially receive raw image data (e.g., an ultrasound image). Here, this raw image data is comprised of an array of pixels, and each pixel comprises ultrasound information. This raw image data, in the form of the array of pixels, is processed through a first fully convolutional network to generate a first segmentation label map. This first segmentation label map includes a first set of objects that have been segmented into a “coarse” segmentation class. Of note, each object within this first set corresponds to a group of pixels from the array of pixels. Then, this first segmentation label map is processed through a second fully convolutional network to generate a second segmentation label map. Of note, this second segmentation label map is processed using the raw image data as a base reference. Further, this second segmentation label map includes a second set of objects that have been segmented into a “fine” segmentation class. Here, each object within the second set also corresponds to a group of pixels from the array of pixels. Then, a contour optimization algorithm is applied to at least one of the second set of objects in order to refine that object's contour boundary. Subsequently, that object is identified as corresponding to a lymph node.
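For readers who prefer code, the staged data flow just described can be summarized with the following Python sketch. Every name in it (e.g., fcn_a, fcn_b, extract_contours, refine_contour) is a hypothetical placeholder for illustration, not an identifier taken from this disclosure.

```python
import numpy as np

def segment_ultrasound(raw_pixels: np.ndarray, fcn_a, fcn_b,
                       extract_contours, refine_contour):
    """Coarse-to-fine segmentation flow (hypothetical API, illustration only).

    raw_pixels:       H x W array of ultrasound pixel values.
    fcn_a, fcn_b:     callables standing in for the two trained FCN stages.
    extract_contours: callable that pulls candidate object contours from a map.
    refine_contour:   callable standing in for the contour optimization step.
    """
    # Stage 1: coarse segmentation of the raw pixel array.
    intermediate_map = fcn_a(raw_pixels)

    # Stage 2: fine segmentation; the raw image is passed along as well so it
    # can serve as a base reference.
    final_map = fcn_b(intermediate_map, raw_pixels)

    # Post-processing: refine the contour boundary of each candidate object.
    # The refined objects are then identified as lymph nodes.
    lymph_nodes = [refine_contour(contour, raw_pixels)
                   for contour in extract_contours(final_map)]
    return final_map, lymph_nodes
```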
Having just described various high-level features and benefits of the disclosed embodiments, the disclosure will now turn to the supporting computing architectures and example methods.
As illustrated in the accompanying figure, an example computing system 100 includes one or more processors 105, a graphics processing unit (GPU) 110, a graphics rendering engine 115, and storage 125.
The storage 125 may be physical system memory, which may be volatile, non-volatile, or some combination of the two. The term “memory” may also be used herein to refer to non-volatile mass storage such as physical storage media. If the computing system 100 is distributed, the processing, memory, and/or storage capability may be distributed as well. As used herein, the term “executable module,” “executable component,” or even “component” can refer to software objects, routines, or methods that may be executed on the computing system 100. The different components, modules, engines, and services described herein may be implemented as objects or processors that execute on the computing system 100 (e.g. as separate threads).
The disclosed embodiments may comprise or utilize a special-purpose or general-purpose computer including computer hardware, such as, for example, one or more processors (such as processor 105) and system memory (such as storage 125), as discussed in greater detail below. Embodiments also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general-purpose or special-purpose computer system. Computer-readable media that store computer-executable instructions in the form of data are physical computer storage media. Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example and not limitation, the current embodiments can comprise at least two distinctly different kinds of computer-readable media: computer storage media and transmission media.
Computer storage media are hardware storage devices, such as RAM, ROM, EEPROM, CD-ROM, solid state drives (SSDs) that are based on RAM, Flash memory, phase-change memory (PCM), or other types of memory, or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code means in the form of computer-executable instructions, data, or data structures and that can be accessed by a general-purpose or special-purpose computer.
The computer system 100 may also be connected (via a wired or wireless connection) to external sensors 140 (e.g., ultrasound devices, MRI devices, etc.). Further, the computer system 100 may also be connected through one or more wired or wireless networks 135 to remote system(s) that are configured to perform any of the processing described with regard to computer system 100.
The graphics rendering engine 115 is configured, with the processor(s) 105 and the GPU 110, to render one or more objects on a user interface.
A “network,” like the network 135 mentioned above, is defined as one or more data links that enable the transport of electronic data between computer systems, modules, and/or other electronic devices.
Upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to computer storage media (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a network interface card or “NIC”) and then eventually transferred to computer system RAM and/or to less volatile computer storage media at a computer system. Thus, it should be understood that computer storage media can be included in computer system components that also (or even primarily) utilize transmission media.
Computer-executable (or computer-interpretable) instructions comprise, for example, instructions that cause a general-purpose computer, special-purpose computer, or special-purpose processing device to perform a certain function or group of functions. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
Those skilled in the art will appreciate that the embodiments may be practiced in network computing environments with many types of computer system configurations, including personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, pagers, routers, switches, and the like. The embodiments may also be practiced in distributed system environments where local and remote computer systems that are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network each perform tasks (e.g. cloud computing, cloud services and the like). In a distributed system environment, program modules may be located in both local and remote memory storage devices.
As discussed above, computer systems are able to provide a broad variety of different functions. One such function includes performing image processing. Accordingly, attention will now be directed to an example computer system that is configured to perform such image processing.
As illustrated, computer system 200 includes a Fully Convolutional Network (FCN) component A 205 and a FCN component B 210. Computer system 200 also includes a post-processing component 215 and storage. Included within this storage is a set of rules 220. The computer system 200 is configured to segment tissue image objects from within an ultrasound image. Further detail on computer system 200's components will be provided later in the disclosure in connection with the methods that are presented herein. Accordingly, attention will now be directed to a general discussion of digital image analysis.
There are various different methods for analyzing a digital image. Such methods include object recognition/detection and semantic segmentation, to name a few. Briefly, object recognition is the process of generally identifying one or more objects within an image and distinguishing those objects from one another through the use of bounding boxes. In contrast, semantic segmentation is the process of classifying one or more pixels of a digital image so that each classified pixel belongs to a particular object. Semantic segmentation is a more comprehensive classification scheme. In view of this understanding, the remainder of this disclosure will focus on semantic segmentation.
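One way to see the difference between the two methods is in the shape of each method's output. The short sketch below contrasts them using invented example values (a vase on a table, mirroring the example discussed shortly).

```python
import numpy as np

# Object detection: a sparse result -- one bounding box plus label per object.
detections = [
    {"label": "vase",  "box": (120, 40, 260, 310)},    # (x0, y0, x1, y1)
    {"label": "table", "box": (0, 280, 1020, 1020)},
]

# Semantic segmentation: a dense result -- one class id for every pixel.
# 0 = background, 1 = vase, 2 = table.
label_map = np.zeros((1020, 1020), dtype=np.int64)
label_map[280:, :] = 2            # pixels belonging to the table
label_map[40:310, 120:260] = 1    # vase pixels (override where regions overlap)
```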
Turning now to an illustrative example, consider a digital image 305 that is composed of an array of pixels 310.
It will be appreciated that this array of pixels 310 may be any size. For example, the size of the array of pixels 310 may be 1020×1020, meaning that the array of pixels 310 is 1020 pixels in height by 1020 pixels in width. Depending on whether the image is a color image or a black and white image, the array of pixels 310 may have another dimension value. For example, if the digital image 305 is a color image, then the size of the array of pixels 310 may be 1020×1020×3, where the 3 indicates that there are three color channels (e.g., RGB). Alternatively, if the digital image 305 is a black and white image, then the size of the array of pixels 310 may be 1020×1020×1, where the 1 indicates that only a single color channel is present. Here, it will be appreciated that these values are being used for example purposes only and should not be considered as binding or limiting in any manner.
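To make those shapes concrete, here is how the example arrays look in NumPy (the 1020×1020 size is simply the example value from above):

```python
import numpy as np

# A black and white (single channel) image, as is typical of raw ultrasound data.
gray = np.zeros((1020, 1020, 1), dtype=np.uint8)

# A color image carries three channels (e.g., RGB).
color = np.zeros((1020, 1020, 3), dtype=np.uint8)

print(gray.shape)   # (1020, 1020, 1)
print(color.shape)  # (1020, 1020, 3)
```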
As such, when a computer analyzes the digital image 305, it is actually analyzing the array of pixels 310. Accordingly, the end result of the semantic segmentation process is to enable the computer to accurately examine, recognize, and identify/categorize each object that is present in the digital image 305.
To perform semantic segmentation, the computer system analyzes each pixel that is included in a digital image (e.g., the digital image 305). After understanding the digital image at a pixel-level, the computer system then attempts to group each pixel so that it is associated with a particular identifiable object. As such, the computer system assigns each pixel to an object class.
In the scenario presented in this example, the digital image 305 depicts multiple distinct objects (e.g., a vase, a table, and a background).
As discussed earlier and as will be discussed in more detail later, a computer system is trained on how to recognize an image object. For example, at an earlier time, the computer system was provided with a selected number of vase images, table images, background images, etc. By processing these training images through a machine learning algorithm, the computer system learns what a vase looks like, what a table looks like, and so on. When the computer system encounters a new image, such as digital image 305, then the computer system is able to examine the image and use its past learning to identify the objects within that image. As discussed earlier, the computer system assigns a probability metric, or value, to each pixel. This metric indicates a level of confidence that the computer system has with regard to its classifying a particular pixel to a particular object class (e.g., a vase class, a table class, etc.).
The goal of semantic segmentation is to not only accurately identify each object within an image but to also distinguish between the contour boundaries for each of those objects. In the context of the example image, this means cleanly separating the vase's pixels from the table's pixels and from the background pixels.
Accordingly, in at least one embodiment, semantic segmentation is a process for examining, recognizing, and identifying/categorizing the various objects that are included within an image. Currently, various methods exist for performing semantic segmentation. One such method for performing semantic segmentation is through the use of a “fully convolutional network” (hereinafter FCN). Additional details on a FCN will be discussed later. Now, however, attention will be directed to an introductory discussion on medical imaging.
Turning now to medical imaging, consider an example ultrasound image 605 in which one or more lymph nodes 610 have been captured.
To provide some background, ultrasound is a widely used modality for imaging lymph nodes and other tissues for clinical diagnosis. Indeed, ultrasound imaging is a common first-line examination for patients who present with certain kinds of medical issues (e.g., neck lumps). An ultrasound device is often used first because it is non-invasive and readily available in most hospitals.
The remaining portion of this disclosure will focus on lymph nodes. It will be appreciated, however, that the disclosed embodiments are able to operate with any kind of tissue and not just lymph nodes. For brevity, however, only lymph nodes will be discussed hereinafter.
Quantitative analysis of lymph nodes' size, shape, morphology, and their relations in an ultrasound image provides useful and reliable information for clinical diagnosis, cancer staging, patient prognosis, and treatment planning. It also helps in understanding which features are solid and effective for diagnosing lymph node related diseases.
Returning to the example ultrasound image 605, accurately segmenting lymph nodes from such an image presents several non-trivial challenges.
Furthermore, an ultrasound image (e.g., the ultrasound image 605) may contain multiple lymph nodes (e.g., the multiple lymph nodes labeled as lymph nodes 610). In some instances, lymph node areas in the ultrasound image may be unclear and the contour boundaries may be blurred. While some systems have been developed to perform semantic segmentation on natural images, such systems are inadequate when it comes to performing semantic segmentation on medical images because medical images are significantly more complex and less intuitive than natural images. Furthermore, additional non-trivial difficulties arise because of the stark differences between natural images and medical images. By way of example, lymph node object areas can be in dark or bright conditions, and non-lymph node objects (e.g., blood vessels and background tissue) can also contain dark or bright areas. As a result, using only pixel-level intensity will not ensure satisfactory segmentation results. Accordingly, existing techniques for semantic segmentation are deficient when it comes to segmenting a medical image because those techniques either (1) have no detection component and instead require manual delineation (e.g., based on intensity level) or (2) are too simple to give accurate results. The disclosed embodiments provide significant advantages because they provide accurate segmentation results in medical images.
A fully convolutional network (FCN) provides an end-to-end, pixel-to-pixel method for performing semantic segmentation. According to the disclosed embodiments, a “coarse-to-fine” stacked FCN model is provided. This model is structured to incrementally learn segmentation knowledge from a non-expert level to an expert level for tissue (e.g., lymph node) segmentation. As discussed earlier, a computer system is trained to recognize image objects. According to the principles disclosed herein, the embodiments recognize image objects in a coarse-to-fine approach, which will be discussed in more detail momentarily.
A FCN module is a deep learning model that mainly contains “convolutional layers” and does not contain any “fully connected layers,” which is in contrast to a “convolutional neural network” (aka a CNN). Each FCN module is able to process an image to identify objects within that image. The disclosed embodiments are able to support a stacked configuration in which multiple FCN modules are stacked, or rather staged, together. By staged, it is meant that the output of one FCN module is used as the input to another FCN module. As a result, the disclosed embodiments are configured to support any number of serially-arranged FCN modules. By stacking a number of FCN modules, the embodiments are able to realize a much more accurate understanding of the objects included within an image.
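As a concrete (and deliberately tiny) illustration of a module built only from convolution and pooling layers, consider the following PyTorch sketch. The framework choice, layer widths, and depth are assumptions made for illustration; they are not details taken from this disclosure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MiniFCN(nn.Module):
    """A toy FCN stage: only convolution/pooling layers, no fully connected layers."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)  # downsample by a factor of 2
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        self.score = nn.Conv2d(64, num_classes, kernel_size=1)  # per-pixel class scores

    def forward(self, x):
        h = self.pool(F.relu(self.conv1(x)))
        h = F.relu(self.conv2(h))
        scores = self.score(h)
        # Upsample back to the input resolution so the output stays pixel-aligned.
        return F.interpolate(scores, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)
```

Because every layer is convolutional, such a module accepts inputs of any size and emits a same-size map of per-class scores, which is precisely what makes feeding one module's output into the next straightforward.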
For semantic segmentation on a 2D image (e.g., the ultrasound image 605 discussed above), the output of each FCN module is a probability tensor that contains, for each pixel position (x, y) and each of the s object classes, a probability value indicating how likely it is that the pixel belongs to that class.
By way of example, for a pixel with coordinates (x, y), if that pixel belongs to object class 1 (as determined by the ground truth understanding of the digital image), then the entry (x, y, 1) in the output tensor should have a very large probability value (close to 1), meaning that if the FCN module accurately segmented that pixel, then the FCN module should have a high level of confidence for that class. Similarly, that pixel will have very low probability values for the other object classes. To illustrate, for that same pixel, the entries (x, y, i), where i=2, . . . , s, should all have quite low probability values (close to 0). As a result, a single pixel may have multiple probability values associated with it, one probability value for each of the identified object classes. Accordingly, each pixel is given a probability metric, or value, which value indicates a level of confidence that the FCN module has in its classifying that pixel as belonging to a particular object class.
In some embodiments, objects (i.e. groups of pixels) that have been segmented into a first object class/set will have associated therewith a similarity probability that satisfies a first threshold level. In this context, the first threshold level indicates that the FCN module is sufficiently confident in its classification of that pixel. If the probability is below that first threshold level, then the FCN module is not sufficiently confident. By way of example and not limitation, suppose the FCN module determines that a pixel must have a probability value of at least 65% to be accurately categorized as belonging to a particular class. Now, suppose there are three object classes within an image. Further, suppose that the pixel is assigned a probability of 33% for object class A, 33% for object class B, and 34% for object class C. Here, none of the probabilities satisfy the 65% threshold value. As a result, it can be determined that the FCN module is not sufficiently confident in labeling that pixel as belonging to a particular object class.
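The 65% example can be written out directly. The sketch below assumes that the per-class confidence values have already been normalized into probabilities (e.g., via a softmax):

```python
import numpy as np

def confident_class(probs: np.ndarray, threshold: float = 0.65):
    """Return the winning class index, or None if no class clears the threshold."""
    best = int(np.argmax(probs))
    return best if probs[best] >= threshold else None

# Three object classes A, B, and C with probabilities 33%, 33%, and 34%.
print(confident_class(np.array([0.33, 0.33, 0.34])))  # None -- not confident
print(confident_class(np.array([0.05, 0.90, 0.05])))  # 1 -- confidently class B
```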
As another example, consider lymph node objects and other tissue objects. Some of the other tissue objects may appear to be visually similar to a lymph node. For this first threshold value, the FCN module may determine that if an object has a 60% probability of being a lymph node, then it satisfies the first threshold level and may be initially categorized as a lymph node. Accordingly, this first threshold level acts as an initial gate in classifying objects as lymph nodes.
As such, the first threshold level may be set so as to differentiate between objects that are visually similar to lymph nodes and objects that are not visually similar to lymph nodes. In at least one embodiment, the first threshold level is used as an initial filter for distinguishing between tissues that are visually similar to lymph nodes and tissues that are not visually similar. In this manner, if a pixel is given a probability value that satisfies the first threshold level, then the FCN module is at least somewhat confident that the pixel corresponds to a lymph node. Of note, the first threshold level is simply a minimum confidence level. As a result, some false positives may be present, as discussed above.
In this manner, the similarity probability is based on an estimated similarity in visual appearance between each of the objects in the first class/set and an identifiable lymph node. Such a first threshold value may be used during a first stage FCN module. In other words, after a first FCN module processes the digital image, the FCN module may use this first threshold value to distinguish between objects that appear to be similar to lymph nodes and objects that are not visually similar to lymph nodes.
For subsequent FCN stages, a second threshold level may be used. For example, a second stage FCN module may classify objects into a second class/set. Here, these second class/set objects all have a similarity probability that satisfies the second threshold level, which is stricter than the first threshold level. To clarify, the second threshold level indicates that the FCN module is confident that those objects are actually lymph nodes and not just objects that appear to be visually similar to lymph nodes. By way of example and not limitation, the second threshold level may be set to 90% (whereas the first threshold level was set at 60%). After processing the image data through the second FCN module, the model will have a better understanding of the objects that are in the digital image. During the pass through the first FCN module, the segmentation was a “coarse” segmentation; during subsequent passes through FCN modules, the segmentation becomes a better, or rather “fine,” segmentation.
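Expressed as code, the two gates might look as follows. The 60% and 90% values are the example thresholds from above, and lymph_probs is assumed to be a per-pixel probability map for the lymph node class:

```python
import numpy as np

COARSE_THRESHOLD = 0.60  # stage 1: "visually similar to a lymph node"
FINE_THRESHOLD = 0.90    # stage 2: "confidently an actual lymph node"

def coarse_candidates(lymph_probs: np.ndarray) -> np.ndarray:
    """Stage-1 mask: pixels that at least resemble lymph nodes (false positives possible)."""
    return lymph_probs >= COARSE_THRESHOLD

def fine_lymph_nodes(lymph_probs: np.ndarray) -> np.ndarray:
    """Stage-2 mask: pixels the model is confident are actual lymph nodes."""
    return lymph_probs >= FINE_THRESHOLD
```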
In this manner, objects may be accurately segmented into lymph nodes and non-lymph nodes. Accordingly, the above discussion illustrates how each pixel is assigned a likelihood of belonging to a particular object class.
After the image data is processed through a first FCN module, the first FCN module generates an “intermediate” segmentation label map. Similar to the above discussion, this “intermediate” segmentation label map is coarse because it may contain one or more false positives (i.e. objects that were classified as lymph nodes even though they are not actually lymph nodes). After the image data is passed through one or more subsequent FCN modules, a final segmentation label map will be produced. This final segmentation label map is a “fine” segmentation label map because it has an expert-level understanding of the image data.
Accordingly, in some of the disclosed embodiments, there are at least two object classes/sets for the intermediate segmentation label map. The first class includes objects that are visually similar to lymph nodes while the second class includes objects that are not visually similar to lymph nodes. Relatedly, the final segmentation label map also includes at least two object classes, namely, objects that are real lymph nodes and objects that are other types of tissues and/or background images.
Turning now to the internal design of each FCN module, the module processes an image through a series of convolutional layers that are interleaved with max-pooling layers.
Making the model deeper (i.e. adding more max-pooling layers) can help the model capture larger-scale object-level information, since each pooling step enlarges the effective receptive field of the layers that follow it.
By fusing (i.e. via an element-wise addition function) feature maps drawn from different depths of the model, the model is able to combine fine, local detail with larger-scale contextual information.
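A sketch of that fusion step in PyTorch (with invented tensor shapes): the deeper, coarser score map is upsampled back to the finer map's resolution, and the two are then added element-wise.

```python
import torch
import torch.nn.functional as F

# Two per-class score maps from different depths of the network: a fine map at
# full resolution and a coarse map at one-quarter resolution.
fine = torch.randn(1, 3, 256, 256)   # (batch, classes, height, width)
coarse = torch.randn(1, 3, 64, 64)

# Upsample the coarse map, then fuse the two maps by element-wise addition.
fused = fine + F.interpolate(coarse, size=fine.shape[-2:], mode="bilinear",
                             align_corners=False)
print(fused.shape)  # torch.Size([1, 3, 256, 256])
```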
Accordingly, each of the FCN modules (i.e. each FCN stage) may be designed in the manner presented above.
Turning now to the overall segmentation flow, raw image data is initially fed as input into a first segmentation module (i.e. segmentation module A), which produces an intermediate segmentation result.
This intermediate segmentation result is then fed into a second segmentation module (i.e. segmentation module B) at step 1020. In addition to the intermediate segmentation result, the raw image data is also fed as input into the segmentation module B. Here, the raw image data acts as a base reference for the segmentation module B. The segmentation module B then produces a final segmentation result at step 1025. This final segmentation result accurately identifies all lymph nodes and distinguishes those lymph nodes from all other tissues, even tissues that appear to be visually similar to a lymph node. Accordingly, because the final segmentation result (i.e. the final segmentation label map) includes an accurate identification of the lymph nodes, this final segmentation label map is considered to be a “fine” label map. As a result, the disclosed segmentation process is a coarse-to-fine segmentation process.
Accordingly, segmentation module A is trained to learn segmentation knowledge from the raw input image to produce a segmentation label map (an intermediate result) that shows all the areas that are visually similar to lymph nodes. Here, it will be appreciated that this intermediate result is based on non-expert knowledge and may include false positives. In contrast to segmentation module A, segmentation module B is trained to use the intermediate result combined with the raw image to produce the final (i.e. expert-level) lymph node segmentation label map.
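Building on the hypothetical MiniFCN sketch from earlier, the two stages might be wired together as shown below. Note how module B's input channels make room for both the raw image (the base reference) and module A's intermediate output; the channel counts are illustrative assumptions.

```python
import torch

# Module A sees the raw grayscale image (1 channel) and scores 2 coarse classes.
module_a = MiniFCN(in_channels=1, num_classes=2)
# Module B sees the raw image plus module A's 2-channel output: 3 input channels.
module_b = MiniFCN(in_channels=3, num_classes=2)

raw = torch.randn(1, 1, 256, 256)                   # a raw ultrasound image
intermediate = torch.softmax(module_a(raw), dim=1)  # coarse per-class probabilities
# The raw image is concatenated in as the base reference for the second stage.
final = module_b(torch.cat([raw, intermediate], dim=1))
```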
Up to this point, the disclosure has focused on embodiments that use two stages. It will be appreciated, however, that any number of FCN stages may be stacked together, with each additional stage further refining the segmentation result.
Additionally, the disclosed embodiments are able to perform a post-processing method. This post-processing will be discussed in much more detail later on. However, by way of a brief introduction, the post-processing step includes the implementation of a convex-shape constraint based graph search method to improve the lymph node contour boundaries. This post-processing significantly improves the accuracy of the final lymph node segmentation label map.
As discussed earlier, the FCN modules are trained to recognize lymph nodes and other tissues. In this manner, some of the disclosed embodiments make use of a multi-stage incremental learning concept for designing deep learning models. Based on this concept, the deep learning model learns how to perform semantic segmentation in a coarse-to-fine, simple-to-complex manner. Furthermore, some of the disclosed embodiments use a stacked FCN model with the guidance of the coarse-to-fine segmentation label maps (i.e. the intermediate segmentation label map and the final segmentation label map).
With regard to training the FCN modules, a non-expert is permitted to train the first FCN module (i.e. segmentation module A in the flow described above), whereas expert-level segmentation knowledge is used to train the subsequent FCN module(s).
The disclosed embodiments provide significant advantages in that they improve the training process when training the FCN modules. Accordingly, the following disclosure presents some of the methods for training FCN modules.
For example, in some training situations, a stochastic gradient descent based method (e.g., Adam or RMSProp) may be applied to train the modules. In some instances, all of the FCN modules are trained at the same time using the same image data. Here, each FCN module influences all of the other FCN modules. In a different scenario, the first FCN module is trained using only intermediate segmentation label maps while subsequent FCN modules are trained using only final segmentation label maps. Here, the intermediate segmentation label maps influence the subsequent FCN modules but the final segmentation label maps and the subsequent FCN modules do not influence the first FCN module. In yet another training scenario, the first FCN module may be trained using intermediate segmentation label maps. Then, the first FCN module is fixed and the subsequent FCN modules are trained. In this context, the first FCN module influences the subsequent FCN modules, but not vice versa. Different from the earlier training scenario, in this scenario the influence from the first FCN module to the subsequent FCN modules remains the same for the same image samples in different situations.
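The third scenario (train the first module, freeze it, then train the subsequent module) could be sketched as follows. Adam stands in for the stochastic gradient descent based optimizer mentioned above, while the data loader, label format, and epoch count are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

def train_stagewise(module_a, module_b, loader, epochs: int = 10):
    """Hypothetical stage-wise training: fit A on intermediate labels, freeze A, fit B."""
    ce = nn.CrossEntropyLoss()

    # Phase 1: train module A against the coarse (intermediate) label maps.
    opt_a = torch.optim.Adam(module_a.parameters())
    for _ in range(epochs):
        for raw, coarse_labels, fine_labels in loader:
            opt_a.zero_grad()
            loss = ce(module_a(raw), coarse_labels)
            loss.backward()
            opt_a.step()

    # Phase 2: freeze module A so its influence on module B stays fixed.
    for p in module_a.parameters():
        p.requires_grad_(False)

    # Phase 3: train module B against the fine (final) label maps.
    opt_b = torch.optim.Adam(module_b.parameters())
    for _ in range(epochs):
        for raw, coarse_labels, fine_labels in loader:
            opt_b.zero_grad()
            intermediate = torch.softmax(module_a(raw), dim=1)
            out = module_b(torch.cat([raw, intermediate], dim=1))
            loss = ce(out, fine_labels)
            loss.backward()
            opt_b.step()
```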
Attention will now be directed to a practical example of the coarse-to-fine segmentation process. In the scenario presented in this example, the first FCN module produces an intermediate segmentation label map that includes class one objects 1105 (i.e. objects that are visually similar to lymph nodes) and class two objects 1110 (i.e. objects that are not visually similar to lymph nodes). Accordingly, the second FCN module then produces a final segmentation label map.
As illustrated, the final segmentation label map includes class one objects 1210, class two objects 1215, and class three objects 1205. Here, the class two objects 1215 are analogous to the class two objects 1110 of the intermediate segmentation label map discussed above.
Similar to the intermediate segmentation label map, each object within the final segmentation label map corresponds to a group of pixels from the original array of pixels.
Having just provided a practical example of the semantic segmentation process according to the disclosed principles, attention will now be directed to the post-processing operations that refine the resulting contour boundaries.
As a general matter, most of the time lymph nodes have a convex shape when portrayed in an ultrasound image. Although alternative shapes are possible, it is not very common to find concave points on the contour boundary of a lymph node. In light of this phenomenon, the disclosed embodiments are configured to use a soft convex-shape constraint to refine the border contours of lymph nodes. Such a refinement process helps generate a more accurate lymph node segmentation.
This contour optimization is modeled as a shortest path problem on a graph. For instance, given a contour C for a lymph node segmented according to the principles discussed earlier, some of the embodiments uniformly sample g points on C in a clockwise manner on the input image (i.e. the original ultrasound image). For each sample point aj, let rj be a ray of h pixels orthogonal to the direction of the curvature of C at aj (rj centers at aj∈C).
Now, denote the i-th point (pixel) on the ray rj as pij = (xpij, ypij). The embodiments then seek a new closed contour C′ that passes through exactly one point on each ray, subject to a smoothness constraint: for any two points pij and pi′j+1 that are consecutive along C′, |i′−i| ≤ s, where i=1, 2, . . . , h, and j=1, 2, . . . , g (where s may be chosen to be 5 in this instance, but some other value may also be used).
Some embodiments also enforce a convexity shape constraint, in which any concave edge-to-edge connection along C′ (e.g., from an edge pij−1pi′j to an edge pi′jpi″j+1 that forms a concave internal angle) is penalized by incurring a large connection cost. A graph G is then built on the sample points (graph nodes) of these rays, with node weights reflecting inverse image gradient responses and edge weights reflecting the degrees of convexity at the internal angles of the sought contour C′. A parameter w is used to control the relative importance between the node weights and edge weights in G. Computing the optimal convex-shape constrained closed contour C′ in G takes O(s³h²g) time. Using these principles, this boundary refinement process produces a cleaner and more accurate lymph node segmentation. Accordingly, some embodiments use the contour optimization algorithm to refine an object's boundary as a function of convexity.
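To make the graph-search idea more concrete, here is a heavily simplified Python sketch. It enforces only the smoothness constraint |i′−i| ≤ s with gradient-derived node costs, using dynamic programming across the rays; the convexity edge penalty, the weighting parameter w, and the optimal closed-contour guarantee of the full method are intentionally omitted.

```python
import numpy as np

def refine_open_contour(node_cost: np.ndarray, s: int = 5) -> np.ndarray:
    """Pick one point per ray minimizing total cost, with |i' - i| <= s between rays.

    node_cost: (g, h) array, where node_cost[j, i] is the cost (e.g., the inverse
    image gradient response) of the i-th point on ray j. Returns the chosen point
    index for each ray. This open-chain DP is a simplification of the closed-
    contour shortest path problem described above.
    """
    g, h = node_cost.shape
    dp = np.full((g, h), np.inf)
    back = np.zeros((g, h), dtype=np.int64)
    dp[0] = node_cost[0]
    for j in range(1, g):
        for i in range(h):
            lo, hi = max(0, i - s), min(h, i + s + 1)  # smoothness window
            k = int(np.argmin(dp[j - 1, lo:hi])) + lo
            dp[j, i] = dp[j - 1, k] + node_cost[j, i]
            back[j, i] = k
    # Trace back the cheapest path, one point per ray.
    path = np.zeros(g, dtype=np.int64)
    path[-1] = int(np.argmin(dp[-1]))
    for j in range(g - 1, 0, -1):
        path[j - 1] = back[j, path[j]]
    return path
```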
To this point, the disclosure has focused on embodiments that refine the contours of only final segmentation label map objects. It will be appreciated, however, that other embodiments apply the refinement process at other stages of the segmentation process. For instance, some embodiments apply refinements to the intermediate segmentation label map. Still further, other embodiments apply refinements to both the intermediate segmentation label map and the final segmentation label map. Even further, some embodiments apply refinements to every resulting segmentation label map produced during the segmentation process. By way of example and not limitation, if the segmentation process included five stages, then the refinement process may be performed five separate times. Accordingly, from this disclosure it will be appreciated that the refinement process may be performed any number of times and may be implemented at any stage throughout the segmentation process.
The following discussion now refers to a number of methods and method acts that may be performed. Although the method acts may be discussed in a certain order or illustrated in a flow chart as occurring in a particular order, no particular ordering is required unless specifically stated, or required because an act is dependent on another act being completed prior to the act being performed. The methods are implemented by one or more processors of a computer system (e.g., the computer system 100 described above).
Turning now to an example method 1500 for segmenting tissue objects that are included within an ultrasound image, method 1500 initially includes an act of receiving raw image data that is comprised of an array of pixels, with each pixel comprising ultrasound information.
Method 1500 is also shown as including an act of processing the raw image data through a first fully convolutional network to generate a first segmentation label map (act 1510). In some instances, this first segmentation label map comprises a first set of objects that have been segmented into at least a coarse segmentation class (e.g., the class one objects 1105 of the intermediate segmentation label map discussed earlier).
Method 1500 also includes an act of processing the first segmentation label map through a second fully convolutional network to generate a second segmentation label map (act 1515). Here, the processing may be performed using the raw image data as a base reference (e.g., as shown in the segmentation flow discussed above, in which the raw image data is fed into segmentation module B alongside the intermediate segmentation result).
Method 1500 also includes an act of applying a contour optimization algorithm to at least one object within the second set of objects (act 1520). As discussed earlier, this contour optimization algorithm refines a corresponding contour boundary for the object. This act is performed using the rules 220 stored in the storage of computer system 200.
Method 1500 also includes an act of generating an identification that the at least one object corresponds to a lymph node (act 1525). Here, this act is performed by the FCN Component B 210 of computer system 200.
FCN Module B produces a second segmentation label map. Here, this second segmentation label map includes objects that have been segmented into a third class (e.g., the class three objects 1205 shown in the final segmentation label map discussed earlier).
Next, a set of rules is evaluated against the final segmentation label map. Here, the set of rules defines a contour optimization algorithm that is evaluated against at least one of the third-class objects. This algorithm refines the contour boundaries of that object so as to remove any fuzziness or irregular portions. As a result of evaluating the set of rules against the final segmentation label map, a refined final (i.e. second) segmentation label map is produced, which map includes one or more refined elements that belong to the third class.
Having just described various example methods, the remaining disclosure will discuss various example user interfaces for displaying the resulting segmentation label maps.
Accordingly, some example user interfaces are configured to display the original raw image, the intermediate segmentation label map, the final segmentation label map, or various combinations of the above.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Yang, Lin, Zhang, Yizhe, Chen, Danny Ziyi, Ying, Michael Tin-Cheung, Ahuja, Anil Tejbhan