An image processing apparatus includes: a region-of-interest setting unit configured to set a region of interest in an image; a linear convex region extracting unit configured to extract, from the region of interest, a linear region having a predetermined number or more of continuously-arranged pixels whose pixel values are higher than pixel values of neighboring pixels; an intra-region curvature feature data computing unit configured to compute curvature feature data based on curvatures of one or more arcs along the linear region; and an abnormality determining unit configured to determine whether there is an abnormal portion in the region of interest, based on a distribution of the curvature feature data.
1. An image processing apparatus comprising:
a region-of-interest setting unit configured to set a region of interest in an image;
a linear convex region extracting unit configured to extract, from the region of interest, a linear region having a predetermined number or more of continuously-arranged pixels whose pixel values are higher than pixel values of neighboring pixels;
an intra-region curvature feature data computing unit configured to compute curvature feature data based on curvatures of one or more arcs along the linear region; and
an abnormality determining unit configured to determine whether there is an abnormal portion in the region of interest, based on a distribution of the curvature feature data.
2. The image processing apparatus according to claim 1, wherein
the intra-region curvature feature data computing unit comprises a size feature data computing unit configured to compute distance information and the curvatures of one or more arcs, and
the abnormality determining unit is configured to determine that there is an abnormal portion in the region of interest when the distribution of the curvatures is within a range smaller than a predetermined threshold value, the predetermined threshold value being determined according to the distance information.
3. The image processing apparatus according to claim 2, wherein
the size feature data computing unit comprises:
a curvature computing unit configured to compute the curvatures of one or more arcs from each of sections of the linear region, the sections being delimited by endpoints of the linear region and/or intersection points of linear regions;
a curvature representative value computing unit configured to compute a representative value from the curvatures of one or more arcs; and
a distance information computing unit configured to compute the distance information from an imaging position of the image to the linear region.
4. The image processing apparatus according to claim 1, wherein
the intra-region curvature feature data computing unit comprises a shape feature data computing unit configured to compute a variation in the curvatures of one or more arcs, and
when the variation is greater than a predetermined value, the abnormality determining unit is configured to determine that there is an abnormal portion in the region of interest.
5. The image processing apparatus according to claim 4, wherein
the shape feature data computing unit comprises:
a curvature computing unit configured to compute the curvatures of one or more arcs along the linear region from each of sections of the linear region, the sections being delimited by endpoints of the linear region and/or intersection points of linear regions; and
a curvature standard deviation computing unit configured to compute, for each of the sections, a standard deviation of the curvatures of one or more arcs, wherein
the abnormality determining unit is configured to make a determination based on the standard deviation.
6. The image processing apparatus according to claim 1, wherein
the intra-region curvature feature data computing unit comprises a gradient direction feature data computing unit configured to compute directions going toward a curvature center from each of the one or more arcs, and directions in which an object is inclined in a depth direction of the image, at positions of each of the one or more arcs, and
when a variance of frequency of the directions going toward the curvature center is less than or equal to a predetermined threshold value, the abnormality determining unit is configured to determine that there is an abnormal portion in the region of interest, the variance of frequency being created for each of the directions in which the object is inclined.
7. The image processing apparatus according to
8. An image processing method comprising:
setting a region of interest in an image;
extracting, from the region of interest, a linear region having a predetermined number or more of continuously-arranged pixels whose pixel values are higher than pixel values of neighboring pixels;
computing curvature feature data based on curvatures of one or more arcs along the linear region; and
determining whether there is an abnormal portion in the region of interest, based on a distribution of the curvature feature data.
9. A non-transitory computer-readable recording medium with an executable program stored thereon, the program instructing a processor to execute:
setting a region of interest in an image;
extracting, from the region of interest, a linear region having a predetermined number or more of continuously-arranged pixels whose pixel values are higher than pixel values of neighboring pixels;
computing curvature feature data based on curvatures of one or more arcs along the linear region; and
determining whether there is an abnormal portion in the region of interest, based on a distribution of the curvature feature data.
This application is a continuation of PCT international application Ser. No. PCT/JP2013/084137, filed on Dec. 19, 2013, which designates the United States and is incorporated herein by reference.
1. Technical Field
The disclosure relates to an image processing apparatus, an image processing method, and a computer-readable recording medium for determining whether there is an abnormal portion in an image obtained by imaging an in-vivo lumen.
2. Related Art
For image processing performed on an intraluminal image (hereinafter, also simply referred to as an image) acquired by observing the mucosa of the large intestine with a magnification endoscope, for example, Japanese Laid-open Patent Publication No. 2007-236956 discloses a technique for classifying a pit pattern of a mucosal surface (called a large intestine pit pattern) into a plurality of types. Specifically, Gabor filters with m frequencies and k phase orientations are applied to each pixel in a region of interest set in an image, by which m×k-dimensional feature vectors are computed, and the mean or variance of the feature vectors is computed. Then, the large intestine pit pattern is classified based on the mean or variance of the feature vectors, and it is determined whether the large intestine pit pattern is abnormal.
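For illustration, a minimal Python sketch of this kind of Gabor filter bank feature computation follows, using OpenCV; the kernel size, wavelengths, and orientation count are assumptions for the sketch, not values from the cited publication.

```python
import cv2
import numpy as np

def gabor_feature_stats(gray, wavelengths=(4, 8, 16), n_orient=8, roi_mask=None):
    """Apply an m x k Gabor filter bank and return the mean and variance
    of the resulting m*k-dimensional feature vectors inside a region of
    interest (assumed parameter values; a sketch of the related art)."""
    responses = []
    for lam in wavelengths:                      # m frequencies
        for j in range(n_orient):                # k phase orientations
            theta = j * np.pi / n_orient
            kern = cv2.getGaborKernel((21, 21), sigma=lam / 2.0, theta=theta,
                                      lambd=lam, gamma=0.5, psi=0)
            responses.append(cv2.filter2D(gray.astype(np.float32), -1, kern))
    feats = np.stack(responses, axis=-1)         # H x W x (m*k) feature vectors
    if roi_mask is not None:
        feats = feats[roi_mask > 0]              # vectors inside the ROI only
    else:
        feats = feats.reshape(-1, feats.shape[-1])
    return feats.mean(axis=0), feats.var(axis=0)
```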
In some embodiments, an image processing apparatus includes: a region-of-interest setting unit configured to set a region of interest in an image; a linear convex region extracting unit configured to extract, from the region of interest, a linear region having a predetermined number or more of continuously-arranged pixels whose pixel values are higher than pixel values of neighboring pixels; an intra-region curvature feature data computing unit configured to compute curvature feature data based on curvatures of one or more arcs along the linear region; and an abnormality determining unit configured to determine whether there is an abnormal portion in the region of interest, based on a distribution of the curvature feature data.
In some embodiments, an image processing method includes: setting a region of interest in an image; extracting, from the region of interest, a linear region having a predetermined number or more of continuously-arranged pixels whose pixel values are higher than pixel values of neighboring pixels; computing curvature feature data based on curvatures of one or more arcs along the linear region; and determining whether there is an abnormal portion in the region of interest, based on a distribution of the curvature feature data.
In some embodiments, a non-transitory computer-readable recording medium with an executable program stored thereon is provided. The program instructs a processor to execute: setting a region of interest in an image; extracting, from the region of interest, a linear region having a predetermined number or more of continuously-arranged pixels whose pixel values are higher than pixel values of neighboring pixels; computing curvature feature data based on curvatures of one or more arcs along the linear region; and determining whether there is an abnormal portion in the region of interest, based on a distribution of the curvature feature data.
The above and other features, advantages and technical and industrial significance of this invention will be better understood by reading the following detailed description of presently preferred embodiments of the invention, when considered in connection with the accompanying drawings.
Image processing apparatuses, image processing methods, and image processing programs according to some embodiments of the present invention will be described below with reference to the drawings. The present invention is not limited to these embodiments. The same reference signs are used to designate the same elements throughout the drawings.
As illustrated in the drawings, the image processing apparatus 1 includes a control unit 10 that controls the entire operation of the image processing apparatus 1, an image acquiring unit 20 that acquires image data of an image captured by an endoscope, an input unit 30 that accepts external operations, a display unit 40, a recording unit 50, and a calculating unit 100 that performs predetermined image processing on the image data.
The control unit 10 is implemented by hardware such as a CPU. By reading various types of programs recorded in the recording unit 50, the control unit 10 gives instructions and transfers data to the units composing the image processing apparatus 1 according to image data inputted from the image acquiring unit 20, an operation signal inputted from the input unit 30, and the like, thereby controlling the entire operation of the image processing apparatus 1.
The image acquiring unit 20 is configured as appropriate according to the mode of the system including the endoscope. For example, when a portable recording medium is used to exchange image data with a capsule endoscope, the image acquiring unit 20 is composed of a reader device into which the recording medium is removably inserted and which reads the image data of a recorded image. When a server that saves image data of images captured by the endoscope is set up, the image acquiring unit 20 is composed of a communication device or the like connected to the server, and acquires the image data by performing data communication with the server. Alternatively, the image acquiring unit 20 may be composed of an interface device or the like that accepts an image signal inputted from the endoscope through a cable.
The input unit 30 is implemented by input devices, e.g., a keyboard, a mouse, a touch panel, and various types of switches, and outputs an accepted input signal to the control unit 10.
The display unit 40 is implemented by a display device such as an LCD or an EL display, and displays various types of screens including an intraluminal image, under control of the control unit 10.
The recording unit 50 is implemented by, for example, various types of IC memories such as a ROM and a RAM, e.g., an updatable and recordable flash memory, a hard disk which is built in or connected via a data communication terminal, or an information recording device such as a CD-ROM and a reading device therefor. The recording unit 50 stores, in addition to the image data acquired by the image acquiring unit 20, a program that causes the image processing apparatus 1 to operate and to perform various functions, data to be used during the execution of the program, and the like. Specifically, the recording unit 50 stores an image processing program 51 for determining whether there is an abnormal portion of a villus on a mucosal surface, and various information to be used during the execution of the program.
The calculating unit 100 is implemented by hardware such as a CPU. By reading the image processing program 51, the calculating unit 100 performs image processing on an intraluminal image, and performs various calculation processes for determining whether there is an abnormal portion of a villus on a mucosal surface.
Next, a configuration of the calculating unit 100 will be described.
As illustrated in the drawings, the calculating unit 100 includes a region-of-interest setting unit 110 that sets a region of interest in an image, a linear convex region extracting unit 120 that extracts linear regions from the region of interest, an intra-region curvature feature data computing unit 130 that computes curvature feature data based on the curvatures of arcs along the linear regions, and an abnormality determining unit 140 that determines whether there is an abnormal portion in the region of interest based on the curvature feature data.
Of them, the region-of-interest setting unit 110 includes a mucosal region extracting unit 111 that extracts a mucosal region by excluding regions other than mucosa such as residues and dark portions from the image, and sets the extracted mucosal region as a region of interest.
The linear convex region extracting unit 120 includes a convex shape high-frequency component computing unit 121, an isolated-point excluding unit 122, and a thinning unit 123.
The convex shape high-frequency component computing unit 121 computes the strengths of components whose spatial frequencies are greater than or equal to a predetermined value (hereinafter, referred to as high-frequency components), in a pixel region with higher pixel values than its neighboring pixels. Note that in the following the pixel region with higher pixel values than its neighboring pixels is also referred to as a convex-shaped region or simply a convex shape, and a pixel region that has the convex shape and has high-frequency components whose strengths are greater than or equal to the predetermined value is also referred to as a convex-shaped high-frequency region. In addition, the strengths of high-frequency components in the convex-shaped high-frequency region are also referred to as projection shape's high-frequency components.
The isolated-point excluding unit 122 excludes isolated points as normal villi, based on the projection shape's high-frequency components computed by the convex shape high-frequency component computing unit 121. Here, the isolated point refers to a region in the convex-shaped high-frequency region that has a smaller number of consecutive pixels than a predetermined threshold value in all circumferential directions of the convex-shaped high-frequency region.
The thinning unit 123 thins the convex-shaped high-frequency region.
The intra-region curvature feature data computing unit 130 includes a size feature data computing unit 131 that computes the curvatures of one or more arcs along a linear region and distance information corresponding to imaging distances to the arcs, as arc size feature data; and a frequency distribution creating unit 132 that creates a frequency distribution of the arc size feature data. Of them, the size feature data computing unit 131 includes a curvature computing unit 131a that computes the curvatures of one or more arcs from a linear region; a curvature representative value computing unit 131b that computes a representative value from the computed curvatures of one or more arcs; and a distance information computing unit 131c that computes distance information from an intraluminal image imaging position (i.e., the position of the capsule endoscope) to the convex region that forms a linear shape.
Next, the operation of the image processing apparatus 1 will be described.
First, at step S10, the calculating unit 100 reads image data recorded in the recording unit 50 and thereby acquires a processing target intraluminal image.
At subsequent step S20, the region-of-interest setting unit 110 sets a region of interest in the intraluminal image.
At step S201, the mucosal region extracting unit 111 divides the intraluminal image into a plurality of small regions, based on the edge strengths of the respective pixels in the intraluminal image. The process of dividing the intraluminal image into a plurality of small regions will be described in detail below. First, at step S2011, the mucosal region extracting unit 111 creates a G-component image using the G components included in the pixel values of the respective pixels in the intraluminal image; the G components are used because they are close to the blood absorption band and thus provide a good representation of mucosal structures.
At step S2012, the mucosal region extracting unit 111 computes edge strengths by performing a filtering process on the G-component image, and creates an edge image that uses the edge strengths as the pixel values of the respective pixels. For the filtering process for computing edge strengths, a first derivative filter such as a Prewitt filter or a Sobel filter, a second derivative filter such as a Laplacian filter or a LOG (Laplacian of Gaussian) filter, or the like, is used (reference: Computer Graphic Arts Society, “Digital Image Processing”, pp. 114-121).
The mucosal region extracting unit 111 further performs region division based on the edge strengths of the edge image. As an example of a region division technique, in the first embodiment, a method disclosed in WO 2006/080239 A is applied. Specifically, at step S2013, a smoothing process is performed on the edge image to remove noise. At subsequent step S2014, a direction in which the maximum gradient of edge strength is obtained (maximum gradient direction) is acquired for each pixel position in the edge image. Then, at step S2015, a search is performed in the edge image from each pixel (starting pixel) in the maximum gradient direction, and a pixel position is acquired where the pixel value (edge strength) reaches an extreme value. Furthermore, at step S2016, a labeled image is created in which starting pixels that obtain an extreme value at the same pixel position are labeled as one region. Here, the labeled image refers to an image that uses label numbers (1 to n, n is the number of labels) which are obtained by labeling, as the pixel values of the respective pixels. A pixel region having the same pixel value in the labeled image corresponds to each small region obtained by performing region division on the intraluminal image. Thereafter, processing returns to the main routine.
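The following is a minimal Python sketch of this region division by gradient-ascent labeling, assuming numpy and scipy are available; the smoothing sigma and the 8-neighbour stepping rule are assumptions for the sketch rather than details taken from WO 2006/080239 A.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def divide_by_gradient_ascent(gray):
    """Label each pixel by the edge-strength extreme value reached by
    repeatedly stepping to the highest-valued 8-neighbour (a rough sketch
    of the maximum-gradient-direction search described in the text)."""
    edge = np.hypot(sobel(gray.astype(float), 0), sobel(gray.astype(float), 1))
    edge = gaussian_filter(edge, sigma=1.0)          # smoothing to remove noise
    h, w = edge.shape
    pad = np.pad(edge, 1, mode='constant', constant_values=-np.inf)
    offs = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    stacked = np.stack([pad[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
                        for dy, dx in offs])
    best = np.argmax(stacked, axis=0)                # index 4 == stay (extreme)
    labels = -np.ones((h, w), dtype=int)
    next_label = 0
    for y in range(h):
        for x in range(w):
            path, cy, cx = [], y, x
            # follow the maximum gradient direction until an extreme value
            # or an already-labelled pixel is reached
            while labels[cy, cx] < 0 and best[cy, cx] != 4:
                path.append((cy, cx))
                dy, dx = offs[best[cy, cx]]
                cy, cx = cy + dy, cx + dx
            if labels[cy, cx] < 0:                   # new extreme value found
                labels[cy, cx] = next_label
                next_label += 1
            for p in path:                           # same extreme -> one region
                labels[p] = labels[cy, cx]
    return labels + 1                                # label numbers 1..n
```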
Note that for a method of dividing the intraluminal image into regions, any publicly known method other than the above-described method may be applied. Specifically, a watershed algorithm (reference: Luc Vincent and Pierre Soille, “Watersheds in digital spaces: An efficient algorithm based on immersion simulations”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 13, No. 6, pp. 583-598, June 1991) may be used. The watershed algorithm is a method of dividing an image such that, when water is filled into terrain where the pixel value information of an image is regarded as a height, boundaries are created between water accumulated in different depressions.
In addition, in the first embodiment, region division based on the edges is performed in order to reduce the influence of isolated pixel noise and to facilitate the abnormality determination at a subsequent stage through region division along mucosal and residue boundaries; however, as another region division technique, the intraluminal image may simply be divided into rectangular regions of a predetermined size. In this case, the processes of creating an edge image and performing region division based on the edge strengths become unnecessary, which reduces processing time. Alternatively, without performing region division, each pixel may be used as a processing unit in each step described below. In this case, the process at step S201 can be omitted.
At step S202 subsequent to step S201, the mucosal region extracting unit 111 computes a mean value of feature data for each small region which is obtained by the region division. More specifically, the mucosal region extracting unit 111 computes a mean value of each of R, G, and B components from the pixel values of the respective pixels in the small region. Then, by performing HSI conversion (reference: Computer Graphic Arts Society, “Digital Image Processing”, p. 64 (HSI conversion and inverse conversion)) on these mean values, a hue mean value of the small region is computed. The reason that the hue mean value is computed here is that objects such as the mucosa and the contents in a lumen can be determined to a certain extent by color feature data such as hue, from differences in the absorption characteristics of blood, bile, etc., which are the components of the mucosa and the contents in a lumen. Basically, the hues of stomach mucosa, intestinal mucosa, and contents change in this order from a red-based hue to a yellow-based hue.
Note that a hue mean value of each small region may be obtained by computing hues by performing HSI conversion on the pixel values (R, G, and B components) of the respective pixels in the small region, and then computing a mean value of the hues of the respective pixels. Alternatively, instead of hues, for example, a mean value of feature data such as color differences or color ratios (G/R or B/G) may be computed.
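A small Python sketch of the per-region hue mean computation follows; it uses the HSV hue from the standard colorsys module as a stand-in for the cited HSI conversion (the hue angle is defined the same way), and the helper name is hypothetical.

```python
import colorsys
import numpy as np

def hue_mean_per_region(rgb, labels):
    """Per-region mean of R, G, B followed by a conversion to hue in
    degrees; labels is the labelled image from the region division."""
    hues = {}
    for lab in np.unique(labels):
        if lab == 0:                       # 0 = unlabelled / excluded pixels
            continue
        mask = labels == lab
        r, g, b = (rgb[..., c][mask].mean() / 255.0 for c in range(3))
        hues[lab] = colorsys.rgb_to_hsv(r, g, b)[0] * 360.0
        # per-pixel variant: convert every pixel first, then average the hues
    return hues
```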
At step S203, the mucosal region extracting unit 111 determines, based on the mean values of feature data, whether each small region is a dark region. More specifically, the mucosal region extracting unit 111 first creates a determination result list for the small regions. Here, the determination result list refers to a list where the label numbers in the labeled image obtained when the intraluminal image is divided into a plurality of small regions are associated with pieces of flag information for the label numbers, respectively. Note that the size of the determination result list corresponds to the number of labels n. Subsequently, the mucosal region extracting unit 111 initializes the determination result list, and assigns a mucosal region flag (0: mucosal region) to all of the small regions. Then, using, as feature data, G components that provide the best representation of the structures of objects such as mucosal bumps and residue boundaries, a dark region flag (1: dark region) is assigned to a small region whose mean value of the feature data is less than or equal to a predetermined threshold value. Note that the threshold value used at this time is determined within a range where a change in the hue mean value, etc., of the small region maintains linearity with respect to a change in intensity. This is because in the dark region the linearity breaks down due to the influence of noise, etc.
At step S204, the mucosal region extracting unit 111 clusters the distribution of the mean values of feature data (hue mean values) computed at step S202. Here, clustering is a technique for dividing a data distribution in a feature space into chunks called clusters, based on the similarity between data, and can be performed by various publicly known techniques such as a hierarchical method and the k-means method (reference: Computer Graphic Arts Society, "Digital Image Processing", pp. 231-232 (clustering)). In the first embodiment, since a process using a histogram of the data in a hue space is performed at a subsequent stage, a procedure of clustering based on the histogram of the data is described, with the feature space being a one-dimensional hue space. Note that the clustering need not be limited thereto, and other techniques may be used.
The process of clustering a distribution of hue mean values will be described in detail below. First, a histogram of the hue mean values of the respective small regions is created.
Subsequently, a distribution of the hue mean values is divided into a plurality of clusters using the valleys of the histogram as boundaries.
For such frequency data, a gradient direction is determined for each coordinate. Here, the gradient direction is a direction that is determined based on the difference between the frequency data for a coordinate of interest and the frequency data for the coordinates adjacent to the coordinate of interest, and points in the direction in which the frequency increases. Note that a coordinate whose frequency is higher than that of any of its adjacent coordinates is an extreme-value (maximum-value) coordinate.
Furthermore, an extreme-value coordinate is searched for from each coordinate by following the gradient directions.
As a result of this search, for example, the coordinates "1" to "9" that obtain the coordinate "5" as an extreme-value coordinate are set as a first cluster, and the coordinates "10" to "15" that obtain the coordinate "12" as an extreme-value coordinate are set as a second cluster, by which the data distribution is divided into clusters using the valleys of the histogram as boundaries.
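A compact Python sketch of this histogram-valley clustering follows; the bin count is an assumption, and the helper is hypothetical rather than part of the publication.

```python
import numpy as np

def cluster_histogram(values, n_bins=36):
    """Split a 1-D distribution at histogram valleys: each bin climbs in
    its gradient direction until an extreme-value (maximum) bin is
    reached, and bins sharing the same maximum form one cluster."""
    hist, edges = np.histogram(values, bins=n_bins)
    ext = np.empty(n_bins, dtype=int)
    for i in range(n_bins):
        j = i
        while True:
            left = hist[j - 1] if j > 0 else -1
            right = hist[j + 1] if j < n_bins - 1 else -1
            if left <= hist[j] >= right:      # extreme-value coordinate
                break
            j = j - 1 if left > right else j + 1
        ext[i] = j
    # map each extreme bin to a cluster number 1..n, then label the data
    cluster_of_ext = {e: c + 1 for c, e in enumerate(np.unique(ext))}
    bin_cluster = np.array([cluster_of_ext[e] for e in ext])
    idx = np.clip(np.digitize(values, edges[1:-1]), 0, n_bins - 1)
    return bin_cluster[idx]                   # cluster id for every value
```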
At step S205 subsequent to step S204, the mucosal region extracting unit 111 determines unnecessary regions such as residues, based on the distribution of the feature data mean values. In general, due to the difference in absorption characteristics between bile and blood, unnecessary regions in a lumen such as residues have a yellow-based color. Hence, of the clusters obtained at step S204, a cluster whose hue leans toward yellow is estimated to be the cluster including the hue mean values of small regions that are unnecessary regions (hereinafter referred to as the unnecessary region cluster), by which the unnecessary region cluster is identified.
More specifically, first, a centroid ω̄a (ω with a bar representing the mean) of each cluster "a" (a is the cluster number) is computed by the following equation (1):

ω̄a = Σi=1..n c(hi) × hi / Σi=1..n c(hi)  (1)

where

c(hi) = 1 (a_min ≤ hi ≤ a_max), c(hi) = 0 (otherwise)

In equation (1), the symbol i represents the label number of a small region. The symbol n represents the number of labels, i.e., the number of small regions in the image. The symbol hi represents the hue mean value of the small region i. The symbol a_min represents the minimum value of the hue mean values of the cluster "a", and the symbol a_max represents the maximum value of the hue mean values of the cluster "a". A cluster whose centroid ω̄a falls within a yellow-based hue range in the histogram of hue mean values is determined to be the unnecessary region cluster.
Furthermore, the mucosal region extracting unit 111 assigns an unnecessary region flag (2: unnecessary region) to small regions belonging to the unnecessary region cluster among the small regions that are assigned the mucosal region flag (0: mucosal region) in the determination result list.
Note that in the determination of unnecessary regions, the same process may be performed using the mean values of feature data other than hue.
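As a sketch of the unnecessary-region cluster identification, the following assumes hue values in degrees and an illustrative yellow-based range; both the range and the function name are hypothetical.

```python
import numpy as np

def find_unnecessary_cluster(hue_means, cluster_ids, yellow_range=(40.0, 90.0)):
    """Identify the unnecessary region cluster: the cluster whose hue
    centroid (equation (1)) falls in a yellow-based range (assumed here
    as 40-90 degrees for illustration)."""
    for a in np.unique(cluster_ids):
        centroid = hue_means[cluster_ids == a].mean()   # equation (1)
        if yellow_range[0] <= centroid <= yellow_range[1]:
            return a
    return None          # no yellow-based cluster -> no unnecessary regions
```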
At subsequent step S206, the region-of-interest setting unit 110 acquires coordinate information of mucosal regions (regions other than the unnecessary regions) in the image, based on the labeled image and the flag information in the determination result list, and outputs the coordinate information of mucosal regions as a region of interest. Thereafter, processing returns to the main routine.
At step S30 subsequent to step S20, the linear convex region extracting unit 120 extracts, from the region of interest in the image, linear regions each having a predetermined number or more of continuously-arranged pixels whose pixel values are higher than those of their neighboring pixels.
At step S301, the convex shape high-frequency component computing unit 121 computes the strengths of projection shape's high-frequency components. More specifically, first, the convex shape high-frequency component computing unit 121 creates a G-component image using the G components included in the pixel values of the respective pixels in the intraluminal image. The G components are used because, as described above, they are close to the blood absorption band and thus provide the best representation of the structures of objects. Note that although the G components are used in the first embodiment, other color components, or a luminance value, a color difference (YCbCr conversion), hue, saturation, intensity (HSI conversion), or a color ratio obtained by converting the pixel values (R, G, and B components), may be used instead.
Subsequently, the convex shape high-frequency component computing unit 121 performs a top-hat transform by grayscale morphology (reference: Hidefumi Kobatake, "Morphology", CORONA PUBLISHING CO., LTD., pp. 103-104) on the G-component image. Here, the top-hat transform refers to a process in which an opening process by grayscale morphology is performed on the G-component image and the opened image is subtracted from the original G-component image; the pixel values remaining after the subtraction give the strengths of the projection shape's high-frequency components.
Note that the strengths of projection shape's high-frequency components may be computed using a high-pass filter process in a Fourier space, Difference of Gaussian (DoG), etc., in addition to the above.
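A one-step Python sketch of the top-hat computation with OpenCV follows; the structuring element shape and size are assumptions tied to the expected width of the convex structures.

```python
import cv2

def projection_highfreq_strength(g_image, kernel_size=15):
    """Grayscale top-hat: subtract the morphological opening from the
    image, leaving the convex-shaped high-frequency components."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size,) * 2)
    return cv2.morphologyEx(g_image, cv2.MORPH_TOPHAT, kernel)
```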
At subsequent step S302, the isolated-point excluding unit 122 extracts isolated points based on the strengths of projection shape's high-frequency components computed at step S301, and excludes the isolated points as normal villi.
Specifically, the isolated-point excluding unit 122 first performs a threshold value process on the strengths of the projection shape's high-frequency components, and thereby creates a binarized image. At this time, the isolated-point excluding unit 122 assigns 0 to a pixel whose strength of a projection shape's high-frequency component is less than or equal to a predetermined threshold value, and assigns 1 to a pixel whose strength is greater than the threshold value. Note that for the threshold value, for example, a value of a predetermined ratio to a representative value (maximum value or mean value) computed from the strengths of the projection shape's high-frequency components in the region of interest is set. Alternatively, a fixed value greater than 0 may be set as the threshold value. In the latter case, the processes of calculating a representative value of the projection shape's high-frequency components and calculating a threshold value from the representative value can be omitted. Note that the binarized image created by this process serves as an image from which the convex-shaped high-frequency regions have been extracted.
Subsequently, the isolated-point excluding unit 122 performs particle analysis and thereby excludes isolated points from the binarized image. In the first embodiment, as the particle analysis, analysis is performed based on the areas of target regions (particles) in the binarized image. Specifically, the isolated-point excluding unit 122 labels regions with the pixel value 1 in the binarized image and thereby creates a labeled image that is assigned label numbers 1 to s (s is the number of labels). Then, by scanning the labeled image, the number of pixels is counted for each label number, by which an area list is created. Here, the area list is composed of the label numbers and areas for the respective label numbers, and the size of the area list corresponds to the number of labels s. The isolated-point excluding unit 122 further performs a threshold value process on the areas of labeled regions associated with the respective label numbers, using a predetermined threshold value, and determines a labeled region with a label number whose corresponding area is less than or equal to the threshold value, to be an isolated point. The label number of the labeled region determined to be an isolated point is set to 0.
Note that, when an isolated-point exclusion process is performed, particle analysis may be performed using the perimeter length of a target region, a Feret diameter (maximum Feret diameter), an absolute maximum length indicating the maximum value of a distance between any two points on a contour of a target region, an equivalent circular diameter indicating the diameter of a circle having an equal area to a target region, or the like, in addition to the area of a target region.
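The binarization and area-based particle analysis can be sketched with OpenCV's connected components as follows; the thresholds are left as parameters since the text derives them from the data.

```python
import cv2
import numpy as np

def exclude_isolated_points(strength, strength_thresh, area_thresh):
    """Binarise the high-frequency strengths, then set to label 0 any
    labelled region whose area is at or below a threshold (the
    area-based particle analysis step)."""
    binary = (strength > strength_thresh).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary,
                                                           connectivity=8)
    for lab in range(1, n):                     # label 0 is the background
        if stats[lab, cv2.CC_STAT_AREA] <= area_thresh:
            labels[labels == lab] = 0           # isolated point -> label 0
    return labels
```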
At step S303, the thinning unit 123 thins the convex-shaped high-frequency regions. More specifically, the thinning unit 123 performs a thinning process on regions whose label numbers are 1 or more in the labeled image from which isolated points have been excluded by the isolated-point excluding unit 122. To do so, the thinning unit 123 first assigns the pixel value 0 to the region with the label number 0 and the pixel value 1 to regions with the label number 1 or more in the labeled image, and thereby creates a binarized image. Subsequently, the thinning unit 123 performs a thinning filtering process on the binarized image. The thinning filtering process refers to a process in which patterns M1 to M8 of local regions including 3×3 pixels are compared against the binarized image, and the pixel value of the central pixel of any neighborhood that matches one of the patterns is replaced with 0, the comparison and replacement being repeated over the entire image until there is no more pixel to be replaced.
Furthermore, at step S304, the linear convex region extracting unit 120 outputs the binarized image having been subjected to the thinning process (hereinafter, also referred to as a linear image). Thereafter, processing returns to the main routine.
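A minimal sketch of the thinning step follows, using scikit-image's `thin` as a stand-in for the pattern-based filtering described above.

```python
import numpy as np
from skimage.morphology import thin

def to_linear_image(labels):
    """Thin the remaining convex-shaped high-frequency regions down to
    one-pixel-wide lines, producing the binary linear image."""
    return thin(labels > 0).astype(np.uint8)
```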
At step S40 subsequent to step S30, the intra-region curvature feature data computing unit 130 computes curvature feature data based on the curvatures of arcs along the linear regions.
At step S401, the size feature data computing unit 131 computes, as arc size feature data, the curvatures of arcs along the linear regions observed in the linear image, and distance information corresponding to the imaging distance from the imaging position (capsule endoscope) to each arc. The process performed by the size feature data computing unit 131 will be described in detail below.
At step S4011, the size feature data computing unit 131 labels a continuous linear region in the linear image. In the first embodiment, labeling for an eight-connection connecting component is performed. Specifically, the size feature data computing unit 131 first performs a raster scan on the linear image and searches for a pixel that is not assigned a label number among the pixels with the pixel value 1. Then, when a pixel that is not assigned a label number is found, the pixel is set as a pixel of interest.
When a pixel adjacent above or present at the upper left of the pixel of interest has a label number, the size feature data computing unit 131 assigns the label number of the pixel adjacent above or present at the upper left to the pixel of interest. Thereafter, when the label number of a pixel adjacent to the left side of the pixel of interest differs from the label number of the pixel of interest, the fact that the label numbers of the pixel of interest and the pixel adjacent to the left side thereof belong to the same connecting component is recorded in a look-up table which is prepared in advance.
In addition, when the label numbers of the pixels adjacent above and present at the upper left of the pixel of interest are 0 (no label number) and the pixel adjacent to the left side has a label number, the label number of the pixel adjacent to the left side is assigned to the pixel of interest.
Furthermore, when a label number is not assigned to any of the pixels adjacent above, adjacent to the left side, and present at the upper left of the pixel of interest, the size feature data computing unit 131 assigns a new label number to the pixel of interest.
The size feature data computing unit 131 performs these processes on all pixels in the linear image. Then, finally, a raster scan is performed again, and by referring to the look-up table, the smallest label number among label numbers that are assigned to a pixel group belonging to the same connecting component is selected and reassigned to the pixel group (reference: Computer Graphic Arts Society, “Digital Image Processing”, pp. 181-182).
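In practice the two-pass raster-scan labeling can be delegated to a library; the following sketch uses scipy's `label` with a 3×3 structuring element to obtain the same eight-connection labeling.

```python
import numpy as np
from scipy.ndimage import label

def label_linear_regions(linear_image):
    """Eight-connection labelling of the continuous linear regions in
    the binary linear image; returns the labelled image and the number
    of labels."""
    return label(linear_image > 0, structure=np.ones((3, 3), dtype=int))
```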
At step S4012, the curvature computing unit 131a computes the curvatures of one or more arcs from each of sections of the linear region, the sections being delimited by endpoints of the linear region and/or intersection points of linear regions.
More specifically, the curvature computing unit 131a scans the linear image and thereby detects a pixel at an endpoint Pt of a linear region or an intersection point Pc of linear regions. Then, a start point pixel M and an end point pixel N are set along a linear region L, within a section delimited by the endpoint Pt and/or the intersection point Pc.
Subsequently, the curvature computing unit 131a computes a slope of a tangent line at the start point pixel M and a slope of a tangent line at the end point pixel N. In the present embodiment, the slopes of the respective tangent lines are represented by angles DEGs and DEGe formed with an x-axis. The angle DEGi (i = s, e) formed between each tangent line and the x-axis is computed using the following equation (2) by setting two pixels (coordinates (xa, ya) and (xb, yb)) which are separated forward and backward from the start point pixel M (or the end point pixel N) along the linear region L by a predetermined number of pixels:

DEGi = (180/π) × tan⁻¹((yb − ya)/(xb − xa))  (2)

Note that when the upper left of the linear image is set to the coordinates (0, 0) and each row is scanned in the right direction from the upper left, the coordinates (xa, ya) are forward coordinates of the start point pixel M (or the end point pixel N), and the coordinates (xb, yb) are backward coordinates of the start point pixel M (or the end point pixel N). Note, however, that when coordinates go out of the linear region L by the separation from the start point pixel M (or the end point pixel N) by the predetermined number of pixels (when coordinates go beyond the endpoint Pt or the intersection point Pc), the coordinates of the endpoint Pt or the intersection point Pc of the linear region L are set as either one of the above-described coordinates (xa, ya) and (xb, yb).
Subsequently, the curvature computing unit 131a computes a central angle α of the arc between the start point pixel M and the end point pixel N. The central angle α corresponds to the difference between the angle DEGs and the angle DEGe, i.e., α = DEGe − DEGs.
The curvature computing unit 131a further computes a curvature κ of the arc between the start point pixel M and the end point pixel N. Here, the curvature κ is the reciprocal of a curvature radius R of the arc, and the number of pixels Δs corresponding to the length of the arc can be approximated by R × α × (π/180); thus, the curvature κ is given by the following equation (3):

κ = 1/R = (π × α)/(180 × Δs)  (3)
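A small Python sketch of equations (2) and (3) follows, assuming the section has already been traced into an ordered array of pixel coordinates; the pixel gap used for the tangent estimate is an assumption.

```python
import numpy as np

def arc_curvature(points, i_start, i_end):
    """Curvature of the arc between two pixels of a traced linear region
    (`points` is an ordered N x 2 array of (x, y) coordinates): tangent
    angles at both ends via equation (2), central angle, then equation (3)."""
    def tangent_deg(i, gap=3):                  # gap: assumed pixel separation
        a = points[max(i - gap, 0)]             # clamp at endpoints/intersections
        b = points[min(i + gap, len(points) - 1)]
        return np.degrees(np.arctan2(b[1] - a[1], b[0] - a[0]))  # equation (2)
    alpha = tangent_deg(i_end) - tangent_deg(i_start)            # central angle
    delta_s = abs(i_end - i_start) + 1          # arc length in pixels
    return abs(np.radians(alpha)) / delta_s     # equation (3): kappa = 1/R
```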
The curvature computing unit 131a performs such computation of the curvature κ, and an assignment of a label number to the computed curvatures, on all linear regions L detected from the linear image.
At subsequent step S4013, the curvature representative value computing unit 131b computes a representative value of the curvature of an arc for each linear region that is assigned the same label number by the curvature computing unit 131a. Note that although in the first embodiment a median is computed as the representative value, the mean value, maximum value, minimum value, etc., of the curvature may be computed as the representative value.
At subsequent step S4014, the distance information computing unit 131c computes distance information in a depth direction from an imaging position to the linear regions. Specifically, the distance information computing unit 131c acquires R component values that have low hemoglobin absorption among the pixel values of the respective pixels and thus provide the best representation of the shape of a mucosal surface layer, from a portion of the original intraluminal image corresponding to the region of interest, and creates an R-component image.
Note that at this step S4014, distance information may be acquired from an image created by other techniques, provided that the image has only a small influence on the microstructures of villi on the mucosa. For example, an image having been subjected to an opening process by grayscale morphology which is described in the process of the convex shape high-frequency component computing unit 121 may be used.
Subsequently, the distance information computing unit 131c computes distance information for the position of each arc whose curvatures are computed by the curvature representative value computing unit 131b. Specifically, R component values for the positions of pixels on an arc are acquired, and a representative value such as the mean value, median, minimum value, or maximum value of the R component values is acquired for each arc. In the first embodiment, the representative value of the R component values is used as distance information corresponding to an imaging distance. Note that in this case the larger the value of the distance information the shorter the imaging distance, and the smaller the value of the distance information the longer the imaging distance. Note also that an R component value for the position of a pixel at the center of curvature of an arc or R component values of an inner region of a circular region or fan-shaped region which is made up of the center of curvature of an arc and a curvature radius may be used as distance information of the arc.
Furthermore, at step S4015, the size feature data computing unit 131 outputs the curvatures and distance information of the arcs as arc size feature data. Thereafter, processing returns to the main routine.
At step S402 subsequent to step S401, the frequency distribution creating unit 132 creates a frequency distribution of the feature data outputted from the size feature data computing unit 131. More specifically, a frequency distribution having two axes, arc curvature and distance information, is created. At this time, the frequency distribution creating unit 132 also computes statistics, such as the variance, from the created frequency distribution.
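A sketch of the two-axis frequency distribution, together with the per-distance-bin curvature variance used at the decision step, follows; the bin counts are assumptions.

```python
import numpy as np

def curvature_feature_distribution(curvatures, distances, bins=(16, 16)):
    """Two-axis frequency distribution (arc curvature x distance
    information) plus the variance of curvature within each distance
    bin."""
    hist, k_edges, d_edges = np.histogram2d(curvatures, distances, bins=bins)
    d_idx = np.clip(np.digitize(distances, d_edges[1:-1]), 0, bins[1] - 1)
    var_per_dist = [np.var(curvatures[d_idx == j]) if np.any(d_idx == j) else 0.0
                    for j in range(bins[1])]
    return hist, np.asarray(var_per_dist)
```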
Furthermore, at step S403, the intra-region curvature feature data computing unit 130 outputs, as curvature feature data, the frequency distribution consisting of two axes, arc curvature and distance information. Thereafter, processing returns to the main routine.
At step S50 subsequent to step S40, the abnormality determining unit 140 determines whether there is an abnormal portion in the region of interest, based on the curvature feature data.
Here, arc-shaped regions seen in the intraluminal image include, for example, an abnormal portion where villi on the mucosal surface are swollen, a bubble region where intraluminal fluid forms a bubble shape, and a contour of a structure such as a mucosal groove (hereinafter, referred to as a mucosal contour). In the following, the swollen villi are referred to as abnormal villi.
In addition, the curvatures of arcs along mucosal contours generally take small values compared to abnormal villi and bubble regions.
Hence, the abnormality determining unit 140 makes a determination as follows.
First, when the variance of curvature for given distance information Larb is greater than a predetermined threshold value (variance threshold value) TH1 which is determined in advance according to the distance information Larb, a bubble region is displayed in the region of interest and thus the abnormality determining unit 140 determines that there is no abnormal portion.
In addition, in the case in which the variance of curvature for the distance information Larb is less than or equal to the variance threshold value TH1, when the curvatures are distributed in a range smaller than a predetermined threshold value (curvature threshold value) TH2, mucosal contours are displayed in the region of interest and thus the abnormality determining unit 140 determines that there is no abnormal portion.
On the other hand, in the case in which the variance of curvature for the distance information Larb is less than or equal to the variance threshold value TH1, when the curvatures are distributed in a range greater than or equal to the curvature threshold value TH2, abnormal villi are displayed in the region of interest and thus the abnormality determining unit 140 determines that there is an abnormal portion.
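The decision logic at step S50 can be sketched as follows for the arcs sharing one distance value; both thresholds TH1 and TH2 are assumed to have been chosen in advance according to the distance information, as the text describes.

```python
import numpy as np

def has_abnormal_portion(curvatures, th1_variance, th2_curvature):
    """Decision rule from the text: high curvature variance -> bubble
    region; low variance with only small curvatures -> mucosal contours;
    low variance with large curvatures -> abnormal villi."""
    if np.var(curvatures) > th1_variance:
        return False                  # bubble region, no abnormal portion
    if np.all(curvatures < th2_curvature):
        return False                  # mucosal contours, no abnormal portion
    return True                       # abnormal villi present
```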
At subsequent step S60, the calculating unit 100 outputs a result of the determination as to whether there is an abnormal portion which is made at step S50. Accordingly, the control unit 10 displays the result of the determination on the display unit 40 and records the result of the determination in the recording unit 50 so as to be associated with the image data of the processing target intraluminal image.
As described above, according to the first embodiment, a mucosal region extracted from an intraluminal image is set as a region of interest, and high-frequency components included in convex regions, each of which is composed of a pixel group having higher pixel values than its neighboring pixels, are extracted from the region of interest. Then, as curvature feature data, curvatures and distance information are computed for the linear regions obtained by thinning the high-frequency components, and an object displayed in the region of interest is identified based on a frequency distribution of the feature data. Accordingly, erroneous detection (overdetection) of mucosal contours, bubbles, and the like is suppressed, and changes in the size of the microstructures of villi according to the imaging distance, as well as changes in shape according to the orientation of the villi with respect to the imaging direction, are accounted for; abnormal villi are thus accurately identified, making it possible to determine whether there is an abnormal portion.
Modification 1-1
Although, in the above-described first embodiment, the entire remaining region obtained after removing unnecessary regions from an intraluminal image is set as one region of interest, each of a plurality of regions into which the remaining region is divided may be set as a region of interest. In this case, it becomes possible to identify at which location in the intraluminal image an abnormal portion is present. A division method is not particularly limited; for example, the remaining region obtained after removing unnecessary regions may simply be divided in a matrix pattern. Alternatively, distance information corresponding to the imaging distance may be acquired from the pixel value of each pixel in the remaining region, and the region may be divided hierarchically according to the distance information. In this case, the process of computing distance information at step S4014 can be omitted.
Modification 1-2
Next, a modification 1-2 of the first embodiment will be described.
The calculating unit 100-2 includes a linear convex region extracting unit 150 instead of the linear convex region extracting unit 120 of the first embodiment. The linear convex region extracting unit 150 includes a ridge-shape extracting unit 151, an isolated-point excluding unit 152, and a thinning unit 153.
The overall operation of the calculating unit 100-2 is similar to that of the first embodiment, and differs in the process of extracting linear regions at step S30, which is performed as follows.
First, at step S311, the ridge-shape extracting unit 151 extracts a ridge-shaped region from a region of interest. More specifically, the ridge-shape extracting unit 151 sequentially sets pixels of interest in the region of interest, and calculates a maximum direction (hereinafter, referred to as a maximum gradient direction) and a minimum direction (hereinafter, referred to as a minimum gradient direction) of a gradient of a pixel value for each pixel of interest. Then, curvatures of a shape whose pixel values change are computed for each of the maximum gradient direction and the minimum gradient direction, and the ridge-shaped region is detected based on a ratio between these curvatures.
To do so, the ridge-shape extracting unit 151 first solves an eigenvalue equation of a Hessian matrix H shown in the following equation (4) for a pixel of interest, and thereby computes, as eigenvalues, a curvature k1 for the maximum gradient direction and a curvature k2 for the minimum gradient direction.
det(H−λE)=0 (4)
In equation (4), the Hessian matrix H is given by the following equation (5), where f(x, y) represents the pixel value at coordinates (x, y):

H = | ∂²f/∂x²   ∂²f/∂x∂y |
    | ∂²f/∂x∂y  ∂²f/∂y²  |  (5)
In addition, the symbol E is the identity matrix, and λE is as shown in the following equation (6):

λE = | λ 0 |
     | 0 λ |  (6)
Subsequently, a Gaussian curvature κG and a mean curvature Ak which are given by the following equations (7-1) and (7-2) are computed from the curvatures k1 and k2.
κG = k1 × k2  (7-1)
Ak = (k1 + k2)/2  (7-2)
At this time, a pixel group where Ak < 0 and κG ≈ 0, i.e., a pixel group where the absolute value |κG| is less than or equal to a predetermined threshold value, is determined to be a ridge-shaped region. The ridge-shape extracting unit 151 assigns the pixel value 1 to the pixel group determined to be a ridge-shaped region, and assigns the pixel value 0 to the other pixels, thereby creating a binarized image.
Note that after computing the curvatures k1 and k2, a ridge-shaped region may be determined using any publicly known technique that recognizes arbitrary three-dimensional surface shapes like, for example, shape index and curvedness (reference: Chitra Dorai, Anil K. Jain, “COSMOS—A Representation Scheme for Free-Form Surfaces”).
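A compact Python sketch of this Hessian-based ridge extraction follows, computing the eigenvalues k1 and k2 in closed form instead of solving equation (4) numerically; the threshold on |κG| is an assumption.

```python
import numpy as np

def ridge_mask(image, kg_thresh=1e-4):
    """Ridge-shaped region from the Hessian of the pixel values:
    eigenvalues k1, k2 (equations (4)-(6)), Gaussian curvature
    kG = k1*k2 and mean curvature Ak = (k1+k2)/2 (equations (7-1) and
    (7-2)), keeping pixels with Ak < 0 and |kG| close to 0."""
    gy, gx = np.gradient(image.astype(float))
    fxx = np.gradient(gx, axis=1)
    fxy = np.gradient(gx, axis=0)
    fyy = np.gradient(gy, axis=0)
    # closed-form eigenvalues of the symmetric 2x2 Hessian at every pixel
    tr, det = fxx + fyy, fxx * fyy - fxy ** 2
    root = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
    k1, k2 = tr / 2 + root, tr / 2 - root
    kg, ak = k1 * k2, (k1 + k2) / 2
    return (ak < 0) & (np.abs(kg) <= kg_thresh)
```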
At subsequent step S312, the isolated-point excluding unit 152 extracts isolated points from the ridge-shaped region which is extracted at step S311, and excludes the isolated points as normal villi. More specifically, a pixel group where the number of continuously-arranged pixels is less than or equal to a predetermined value (i.e., a region of a size less than or equal to a predetermined value) is extracted as an isolated point from the binarized image created at step S311. Then, the pixel value 0 is assigned to the pixel group extracted as an isolated point.
At subsequent step S313, the thinning unit 153 thins the ridge-shaped region obtained after excluding the isolated points. More specifically, the thinning unit 153 performs a thinning filtering process using the patterns M1 to M8 described in the first embodiment.
Furthermore, at step S314, the linear convex region extracting unit 150 outputs the binarized image having been subjected to the thinning process. Thereafter, processing returns to the main routine.
Modification 1-3
In the first embodiment, a pattern-based thinning filtering process is used for the process of thinning a convex-shaped high-frequency region extracted from a region of interest (see step S303); however, a sequential thinning process such as the following may be used instead.
First, as a first step, in a binarized image from which a convex-shaped high-frequency region has been extracted, pixels that satisfy six conditions shown below are deleted sequentially as boundary pixels, among pixels Pk which are target pixels for thinning. Here, k is the pixel number of a pixel in the binarized image (k is a natural number). In addition, the pixel value of the pixel Pk is represented by B(Pk) (B(Pk)=1). In this case, the first step corresponds to a process of replacing the pixel value B(Pk) from 1 to −1. Note that the pixel value of a non-target pixel for thinning is B(Pk)=0.
Condition 1: A pixel of interest is a target pixel for thinning. That is, the following equation (a1) is satisfied.
B(Pk)=1 (a1)
Condition 2: A pixel value of any one of the pixels that are adjacent to the pixel of interest in the vertical and horizontal directions is 0. That is, denoting the pixels adjacent to Pk above, to the left, below, and to the right by Pk1, Pk3, Pk5, and Pk7, respectively, and the diagonally adjacent pixels by Pk2, Pk4, Pk6, and Pk8, the following equation (a2) is satisfied.
B(Pk1) × B(Pk3) × B(Pk5) × B(Pk7) = 0 (a2)
Condition 3: Not an end point. That is, the following equation (a3) is satisfied.
Σi=1..8 |B(Pki)| ≥ 2 (a3)
Condition 4: Not an isolated point. That is, the following equation (a4) is satisfied. Here, a result obtained by the sequential process is represented by B(Pk) and a result obtained when the immediately previous raster operation is completed is represented by B′(Pk), by which the two are distinguished from each other. Note that the pixel value B′(Pk) does not take −1.
Σi=1..8 B′(Pki) ≥ 1 (a4)
Condition 5: Connectivity is maintained. That is, the 3×3 neighborhood of the pixel of interest matches any of patterns M11 to M14.
Condition 6: For a line segment with a line width of 2, only one side of the segment is deleted. That is, the result B′(Pk) obtained when the immediately previous raster operation is completed applies to any one of patterns M21 to M24.
Next, as a second step, the pixel values of the pixels having been deleted sequentially as boundary pixels (i.e., the pixels whose pixel values B(Pk) have been replaced from 1 to −1) are replaced by the pixel value of a non-target region B(Pk)=0.
These first and second steps are repeated until no more pixels are replaced with the pixel value of a non-target region, by which the target region is thinned.
Next, a second embodiment of the present invention will be described.
The intra-region curvature feature data computing unit 210 includes a shape feature data computing unit 211 that computes the curvatures of one or more arcs along each linear region which is extracted by the linear convex region extracting unit 120, and computes variation in those curvatures as feature data; and a frequency distribution creating unit 212 that creates a frequency distribution of the feature data. More specifically, the shape feature data computing unit 211 includes a curvature computing unit 211a that computes the curvatures of one or more arcs from each section of a linear region that is delimited by endpoints of the linear region and/or intersection points of linear regions; and a curvature standard deviation computing unit 211b that computes a standard deviation of the curvatures of one or more arcs which are computed from each section.
The abnormality determining unit 220 determines whether there is an abnormal portion, based on the variation in the curvatures.
Next, the operation of the image processing apparatus 2 will be described.
The overall operation of the image processing apparatus 2 is similar to that of the first embodiment, and differs in the processes performed by the intra-region curvature feature data computing unit 210 and the abnormality determining unit 220, which are described below.
At step S4213 subsequent to step S4212, the curvature standard deviation computing unit 211b computes, for each section of the linear region, a standard deviation of the curvatures of one or more arcs. Specifically, the curvature standard deviation computing unit 211b computes a standard deviation σ given by the following equation (8), from the m curvatures κi (i = 1 to m) computed for the arcs having the same label number assigned by the curvature computing unit 211a:

σ = √((1/m) × Σi=1..m (κi − κ̄)²)  (8)

In equation (8), κ̄ (κ with a bar) is the mean value of the curvatures κi, and is given by the following equation (9):

κ̄ = (1/m) × Σi=1..m κi  (9)
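Equations (8) and (9) amount to the population standard deviation, as the following short sketch shows.

```python
import numpy as np

def section_std(curvatures):
    """Equations (8) and (9): population standard deviation of the m arc
    curvatures computed for one section of a linear region."""
    kappa = np.asarray(curvatures, dtype=float)
    return np.sqrt(np.mean((kappa - kappa.mean()) ** 2))   # == np.std(kappa)
```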
At subsequent step S4214, the shape feature data computing unit 211 outputs the standard deviation computed for each section of the linear region, as feature data representing the shape of the linear region. Thereafter, processing returns to the main routine.
At step S422 subsequent to step S421, the frequency distribution creating unit 212 creates, as a frequency distribution of the feature data, a frequency distribution of the standard deviations outputted from the shape feature data computing unit 211.
Furthermore, at step S423, the intra-region curvature feature data computing unit 210 outputs the frequency distribution of the standard deviations as curvature feature data. Thereafter, processing returns to the main routine.
Next, processes performed by the abnormality determining unit 220 will be described.
The abnormality determining unit 220 determines whether there is an abnormal portion in a region of interest, based on the frequency distribution of standard deviations which is curvature feature data.
Here, when there are abnormal villi in the region of interest, the linear regions along the swollen villi have distorted, irregular shapes, so the curvatures of the arcs computed from each section vary widely and the standard deviations take large values.
On the other hand, when there is a bubble region in the region of interest, the contour of each bubble is close to a circular arc, so the curvatures of the arcs computed from each section are nearly uniform and the standard deviations take small values.
Hence, the abnormality determining unit 220 determines whether there is an abnormal portion, based on the frequency distribution of standard deviations. Specifically, when the frequency distribution of standard deviations leans toward a range greater than a predetermined threshold value, it is determined that there is an abnormal portion in the region of interest.
As described above, according to the second embodiment, a mucosal region extracted from an intraluminal image is set as a region of interest, and high-frequency components included in convex regions, each of which is composed of a pixel group having higher pixel values than its neighboring pixels, are extracted from the region of interest. Then, for the linear regions obtained by thinning the high-frequency components, a frequency distribution of the standard deviations of the curvatures of arcs along the linear regions is computed, and it is determined whether there is an abnormal portion in the region of interest, based on that frequency distribution. Accordingly, abnormal villi and bubble regions are accurately distinguished, making it possible to improve the accuracy of determination of an abnormal portion.
In addition, according to the second embodiment, since a determination is made without using an imaging distance (distance information) to an object in the region of interest, a calculation process can be simplified.
Next, a third embodiment of the present invention will be described.
The intra-region curvature feature data computing unit 310 includes a direction feature data computing unit 311 that computes curvature feature data representing the central directions of one or more arcs along each linear region which is extracted by the linear convex region extracting unit 120; and a frequency distribution creating unit 312 that creates a frequency distribution of the curvature feature data. Of them, the direction feature data computing unit 311 includes a curvature central direction computing unit 311a that computes a direction going toward a curvature center from each arc (hereinafter, referred to as a curvature central direction); and a gradient direction computing unit 311b that computes a gradient direction of an object for the position of each arc. Here, the gradient direction of an object indicates a direction in which the object (specifically, a mucosal structure) is inclined in a depth direction of an image.
The abnormality determining unit 320 determines whether there is an abnormal portion, based on the gradient direction of the mucosal structure and the distribution of the curvature central directions.
Next, the operation of the image processing apparatus 3 will be described.
The overall operation of the image processing apparatus 3 is similar to that illustrated in
At step S4311, the curvature central direction computing unit 311a computes a curvature central direction of an arc along a linear region. Specifically, first, in the same manner as step S4012 described above, the curvature radii R of the arcs are computed.
Subsequently, the curvature central direction computing unit 311a performs a Hough transform using the position coordinates (x, y) on the arc and the curvature radii R to create an approximate circle (reference: Computer Graphic Arts Society, "Digital Image Processing", pp. 213-214). The approximate circle is obtained by voting, for each position (x, y) on the arc, a circle of radius R passing through that position into a parameter space consisting of the central coordinates (a, b) of the circle and the radius R, and then examining the voting results.
Furthermore, the curvature central direction computing unit 311a computes a direction vector going toward the center of the approximate circle from an arbitrary point (e.g., a central point) on the arc, and outputs the direction vector as a curvature central direction.
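A minimal sketch of this step follows, assuming the arc points and their per-point curvature radii are already available. The accumulator resolution, angle sampling, and function names are illustrative assumptions; a production implementation would vote more sparsely.

```python
import numpy as np

def fit_circle_hough(arc_points, radii, img_shape, radius_bins):
    """Vote circles of radius R through each arc point (x, y) into an
    (a, b, R) parameter space, then take the highest-voted cell."""
    radius_bins = np.asarray(radius_bins, dtype=float)
    acc = np.zeros((img_shape[0], img_shape[1], len(radius_bins)), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
    for (x, y), r in zip(arc_points, radii):
        k = int(np.argmin(np.abs(radius_bins - r)))  # nearest radius bin
        # Candidate centers lie on a circle of radius R around (x, y).
        a = np.round(x + radius_bins[k] * np.cos(thetas)).astype(int)
        b = np.round(y + radius_bins[k] * np.sin(thetas)).astype(int)
        ok = (a >= 0) & (a < img_shape[1]) & (b >= 0) & (b < img_shape[0])
        acc[b[ok], a[ok], k] += 1
    b0, a0, k0 = np.unravel_index(int(np.argmax(acc)), acc.shape)
    return (a0, b0), radius_bins[k0]  # center (a, b) and radius of the circle

def curvature_central_direction(arc_points, center):
    """Unit vector from the arc's central point toward the circle center."""
    p = np.asarray(arc_points[len(arc_points) // 2], dtype=float)
    v = np.asarray(center, dtype=float) - p
    return v / (np.linalg.norm(v) + 1e-12)
```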
At subsequent step S4312, the gradient direction computing unit 311b computes a gradient direction of the object at the position of the arc along the linear region. Specifically, the gradient direction computing unit 311b acquires, from the portion of the original intraluminal image corresponding to the region of interest being processed, the R component values of the pixels, which undergo little hemoglobin absorption and therefore most closely reflect the shape of the mucosal surface layer, and creates an R-component image.
Note that at step S4312, the gradient direction may instead be acquired from an image created by other techniques, provided that the image is only slightly affected by the microstructures of the villi on the mucosa. For example, an image obtained by an opening process using grayscale morphology, as described for the convex shape high-frequency component computing unit 121, may be used.
Subsequently, the gradient direction computing unit 311b computes, from the R-component image, a gradient direction at the position of the arc whose curvature central direction was computed by the curvature central direction computing unit 311a. Specifically, a first-derivative filtering process for computing edge strength, such as a Prewitt filter or a Sobel filter (reference: Computer Graphic Arts Society, "Digital Image Processing", pp. 114-117), is performed.
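The sketch below, which assumes OpenCV's Sobel filter as one possible first-derivative filter, computes the gradient direction of the R-component image at an arc position; the function name and kernel size are illustrative.

```python
import numpy as np
import cv2

def gradient_direction_at(r_component, point):
    """Gradient direction (radians) of the R-component image at (x, y)."""
    img = r_component.astype(np.float32)
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)  # horizontal derivative
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)  # vertical derivative
    x, y = point
    # Direction of steepest increase of the R component: the in-plane
    # direction along which the mucosal surface is inclined.
    return float(np.arctan2(gy[y, x], gx[y, x]))
```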
At step S4313, the direction feature data computing unit 311 outputs the curvature central direction of the arc and the gradient direction as feature data. Thereafter, processing returns to the main routine.
At step S432 subsequent to step S431, the frequency distribution creating unit 312 creates a frequency distribution of the feature data outputted from the direction feature data computing unit 311. Specifically, a frequency distribution of the curvature central directions of the arcs is created for each gradient direction.
At step S433, the intra-region curvature feature data computing unit 310 outputs the frequency distribution of the curvature central directions of the arcs as curvature feature data.
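One plausible shape for this curvature feature data is a 2D histogram with one row per gradient-direction bin; the bin counts below are illustrative assumptions.

```python
import numpy as np

def direction_frequency_distribution(central_dirs, gradient_dirs,
                                     n_grad_bins=8, n_dir_bins=16):
    """Frequency distribution of curvature central directions, created
    separately for each gradient-direction bin (angles in radians)."""
    hist, _, _ = np.histogram2d(
        gradient_dirs, central_dirs,
        bins=[n_grad_bins, n_dir_bins],
        range=[[-np.pi, np.pi], [-np.pi, np.pi]])
    return hist  # hist[g] is the central-direction distribution for bin g
```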
Next, processes performed by the abnormality determining unit 320 will be described.
The abnormality determining unit 320 determines whether there is an abnormal portion in the region of interest, based on the frequency distribution of curvature central directions, which serves as the curvature feature data.
Here, if there are abnormal villi in the region of interest and the inclined object (mucosal structure) is imaged from one imaging direction, the villi appear to lean according to the inclination, so the curvature central directions of the arcs are biased toward a particular direction for each gradient direction.
On the other hand, if there is a bubble region in the region of interest and the inclined object (mucosal structure) is imaged from one imaging direction, the contours of the bubbles remain nearly circular regardless of the inclination, so the curvature central directions of the arcs are distributed over all directions without bias.
Hence, when the frequency distribution of curvature central directions created for each gradient direction exhibits a bias in curvature central direction, i.e., when the variance of the curvature central directions is less than or equal to a predetermined threshold value, the abnormality determining unit 320 determines that there is an abnormal portion in the region of interest.
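A minimal sketch of the bias test, using the circular variance of angles as the variance measure; the threshold value is an illustrative assumption.

```python
import numpy as np

def has_direction_bias(central_dirs, variance_threshold=0.3):
    """True when the curvature central directions (radians) within one
    gradient-direction bin cluster around a single direction."""
    c = np.mean(np.cos(central_dirs))
    s = np.mean(np.sin(central_dirs))
    circular_variance = 1.0 - np.hypot(c, s)  # 0 = aligned, 1 = uniform
    # Small circular variance = biased directions, as with abnormal villi
    # that all lean the same way under one imaging direction.
    return circular_variance <= variance_threshold
```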
As described above, according to the third embodiment, a mucosal region extracted from an intraluminal image is set as a region of interest, and high-frequency components included in convex regions, each composed of a pixel group having higher pixel values than its neighboring pixels, are extracted from the region of interest. Then, for the linear regions obtained by thinning the high-frequency components, the curvature central directions of arcs along the linear regions and the gradient directions are computed, and whether there is an abnormal portion in the region of interest is determined based on the frequency distribution of the curvature central directions created for each gradient direction. Accordingly, abnormal villi and bubble regions are accurately distinguished according to the orientations of villi, which change depending on the imaging direction, making it possible to improve the accuracy of determination of an abnormal portion.
Next, a fourth embodiment of the present invention will be described.
The region-of-interest setting unit 410 includes a mucosal region extracting unit 111 that extracts a mucosal region by excluding regions other than mucosa, such as residues and dark portions, from a processing target intraluminal image; and a region dividing unit 411 that further divides the extracted mucosal region into a plurality of regions, and sets each of the divided regions as a region of interest. Note that the operation of the mucosal region extracting unit 111 is the same as that of the first embodiment.
The intra-region curvature feature data computing unit 420 includes a size feature data computing unit 421, which includes a curvature computing unit 131a and a curvature representative value computing unit 131b; and a frequency distribution creating unit 132. Note that the operations of the curvature computing unit 131a, the curvature representative value computing unit 131b, and the frequency distribution creating unit 132 are the same as those in the first embodiment.
The abnormality determining unit 430 determines whether there is an abnormal portion in each region of interest, based on curvature feature data computed by the intra-region curvature feature data computing unit 420.
Next, the operation of the image processing apparatus 4 will be described.
The overall operation of the image processing apparatus 4 is similar to that of the foregoing embodiments; the fourth embodiment differs in the processes described below.
At step S241 subsequent to step S205, the region-of-interest setting unit 410 divides the mucosal region remaining after the removal of unnecessary regions into a plurality of regions, each having a predetermined size or less. The division method is not particularly limited; in the fourth embodiment, the mucosal region is divided into rectangular regions. In addition, the size of each divided region is preset such that the difference in imaging distance to the object within the region falls within a predetermined range.
At subsequent step S242, the region-of-interest setting unit 410 outputs coordinate information of each of the divided regions as a region of interest.
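A minimal sketch of this division, assuming the mucosal region is given as a binary mask; block_size stands in for the preset size at which the imaging-distance difference within a block is negligible.

```python
import numpy as np

def divide_into_blocks(mucosa_mask, block_size=64):
    """Split the bounding box of the mucosal mask into square blocks and
    keep, as regions of interest, those containing mucosal pixels."""
    ys, xs = np.nonzero(mucosa_mask)
    regions = []
    for y0 in range(ys.min(), ys.max() + 1, block_size):
        for x0 in range(xs.min(), xs.max() + 1, block_size):
            block = mucosa_mask[y0:y0 + block_size, x0:x0 + block_size]
            if block.any():
                regions.append((x0, y0, block_size, block_size))
    return regions  # coordinate information of each region of interest
```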
At step S30, the linear convex region extracting unit 120 extracts linear regions from each of the regions of interest outputted at step S20.
At step S40, the intra-region curvature feature data computing unit 420 computes, for each region of interest outputted at step S20, curvature feature data based on the curvatures of arcs along the linear regions. As in the first embodiment, the curvature feature data is a frequency distribution of the representative values of the curvatures of one or more arcs computed from each section of the linear regions; unlike the first embodiment, however, the distance information is not computed.
At step S50, the abnormality determining unit 430 determines, for each region of interest, whether there is an abnormal portion in the region of interest, based on the frequency distribution of the representative values of arc curvatures, which serves as the curvature feature data. Specifically, first, when the curvatures are smaller than a predetermined threshold value (curvature threshold value), the region of interest is regarded as showing mucosal contours, and it is determined that there is no abnormal portion.
In addition, when the curvatures are greater than the curvature threshold value and the variance of the curvatures is greater than a predetermined threshold value (variance threshold value), the region of interest is regarded as showing a bubble region, and the abnormality determining unit 430 determines that there is no abnormal portion. This is because a bubble region inherently includes bubbles with various curvatures, even if the imaging distance is uniform.
On the other hand, when the curvatures are greater than the curvature threshold value and the variance of the curvatures is less than or equal to the predetermined threshold value (variance threshold value), the region of interest is regarded as showing abnormal villi, and the abnormality determining unit 430 determines that there is an abnormal portion. This is because neighboring abnormal villi inherently have similar shapes; thus, when the region of interest is small enough that the difference in imaging distance within it can be ignored, the curvatures of the arcs corresponding to the contours of the abnormal villi closely match one another.
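Taken together, the three cases of step S50 amount to the following decision function; the representative statistic (median) and the threshold values are illustrative assumptions, not values given in this document.

```python
import numpy as np

def classify_region(curvatures, curvature_threshold=0.02,
                    variance_threshold=1e-4):
    """Three-way decision over the representative curvatures of the arcs
    in one small region of interest."""
    curvatures = np.asarray(curvatures, dtype=float)
    if np.median(curvatures) < curvature_threshold:
        return "mucosal contour"   # gently curved outlines, no abnormality
    if np.var(curvatures) > variance_threshold:
        return "bubble region"     # bubbles of many sizes, no abnormality
    return "abnormal portion"      # similar, tightly curved villi contours
```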
At step S60, the calculating unit 400 outputs the results of the determination made for the respective regions of interest at step S50, together with the coordinate information of the regions of interest. Accordingly, the control unit 10 displays the results of the determination on the display unit 40, and records them in the recording unit 50 in association with the image data of the processing target intraluminal image. At this time, the control unit 10 may superimpose, on the intraluminal image displayed on the display unit 40, a mark or the like indicating the position of a region of interest determined to have an abnormal portion.
As described above, according to the fourth embodiment, a plurality of regions of interest are set by dividing a mucosal region extracted from an intraluminal image, and high-frequency components included in convex regions, each composed of a pixel group having higher pixel values than its neighboring pixels, are extracted from each region of interest. Then, for the linear regions obtained by thinning the high-frequency components, a frequency distribution of curvatures is computed as the curvature feature data, and the object shown in each region of interest is determined based on this feature data. Accordingly, abnormal villi, bubble regions, and mucosal contours are accurately distinguished, making it possible to determine whether there is an abnormal portion.
In addition, according to the fourth embodiment, the mucosal region extracted from the intraluminal image is divided into regions of a size at which the difference in imaging distance to the object can be ignored, each divided region is set as a region of interest, and the processes for determining whether there is an abnormal portion are then performed. Thus, the distance information computation and the distance-dependent abnormality determination can be omitted, simplifying the calculation process.
In addition, according to the fourth embodiment, since a determination as to whether there is an abnormal portion is made for each region of interest, a region having an abnormal portion in the intraluminal image can be identified.
Modification 4-1
Next, a modification 4-1 of the fourth embodiment will be described.
The size of a region of interest to be set by the region-of-interest setting unit 410 may be variable. The operation of the image processing apparatus 4 performed when the size of a region of interest is variable will be described below.
At step S20, the region-of-interest setting unit 410 sets the mucosal region remaining after the removal of unnecessary regions as one region of interest.
The contents of processes at subsequent steps S30 to S50 are the same as those of the fourth embodiment.
At step S71 subsequent to step S50, the calculating unit 400 determines whether any of the set regions of interest has yielded a determination result indicating that there is an abnormal portion. Note that in the first loop of the flowchart, the number of regions of interest is one.
If a determination result indicating that there is an abnormal portion has been obtained (step S71: Yes), processing proceeds to step S60. Note that the content of the process at step S60 is the same as that of the fourth embodiment.
On the other hand, if a determination result indicating that there is an abnormal portion has not been obtained (step S71: No), the calculating unit 400 determines whether the size of the currently set region of interest is less than or equal to a predetermined size (step S72). Note that the predetermined size at this time is set to a size at which the difference in imaging distance to an object in the region of interest can be ignored.
If the size of the region of interest is less than or equal to the predetermined size (step S72: Yes), processing proceeds to step S60.
On the other hand, if the size of the region of interest is greater than the predetermined size (step S72: No), the region-of-interest setting unit 410 reduces the size of the currently set region of interest and re-sets the region of interest in a mucosal region (step S73). Thereafter, processing proceeds to step S30.
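The loop of steps S30 to S73 can be summarized as the coarse-to-fine sketch below; `determine` and `subdivide` are hypothetical callables standing in for steps S30-S50 and S73, and regions are (x0, y0, w, h) tuples.

```python
def detect_with_shrinking_rois(regions, min_size, determine, subdivide):
    """Repeat the determination while shrinking the regions of interest."""
    while True:
        results = [determine(r) for r in regions]              # steps S30-S50
        if any(results):                                       # step S71: Yes
            return regions, results
        if all(max(r[2], r[3]) <= min_size for r in regions):  # step S72: Yes
            return regions, results
        # Step S73: reduce the region size and re-set regions of interest.
        regions = [s for r in regions for s in subdivide(r)]
```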
As described above, according to modification 4-1, a determination process as to whether there is an abnormal portion is performed for each region of interest while the size of the regions of interest is gradually reduced. Reducing the size of a region of interest reduces the difference in imaging distance to the object within it, so the accuracy of determination increases with each pass through the loop of steps S30 to S73. By thus proceeding from a coarse to a fine determination across the intraluminal image, abnormal portions are less likely to be overlooked while the determination process remains efficient.
The image processing apparatuses according to the above-described first to fourth embodiments and their modifications can be implemented by a computer system, such as a personal computer or a workstation, executing an image processing program recorded on a recording device. In addition, such a computer system may be connected to other computer systems or to devices such as servers through a local area network or wide area network (LAN/WAN), or through a public line such as the Internet. In this case, the image processing apparatuses according to the first to fourth embodiments may acquire the image data of intraluminal images through these networks, output image processing results to various output devices (a viewer, a printer, etc.) connected through these networks, or store image processing results in storage devices (e.g., a recording device and a reading device therefor) connected through these networks.
According to some embodiments, a linear region having a predetermined number or more of continuously-arranged pixels whose pixel values are higher than those of their neighboring pixels is extracted, curvature feature data based on the curvatures of one or more arcs along the linear region is computed, and it is determined whether there are abnormal villi, based on a distribution of the curvature feature data. With this feature, even if one image includes regions with different imaging distances to an object, it is possible to accurately determine whether there are abnormal villi.
Note that the present invention is not limited to the first to fourth embodiments and the modifications thereof, and various inventions can be formed by appropriately combining together a plurality of components which are disclosed in each embodiment or modification. For example, an invention may be formed by excluding some components from all components shown in each embodiment or modification, or an invention may be formed by appropriately combining together components shown in different embodiments or modifications.
Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents.