Volume data is used for extracting a contour of a measurement object, and measurement information describing anatomical structure useful for diagnosis is acquired from the contour. When volume data of a subject is inputted (S301), an image processing device detects feature points in the volume data (S302); detects contours of a plurality of parts in the volume data based on the detected feature points and anatomical definitions (S303); and optimizes boundary lines defining contours of parts contacting each other, out of the detected plural parts, so as to combine the optimized contours of the plural parts into a contour of the measurement object (S304). Measurements of diagnostic items useful for diagnosis are taken based on the created contour (S305), and the acquired measurement information is outputted as measurement results (S306), which are displayed on a display.
6. An image processing method for an image processing device that includes a processor coupled to a memory, the processor executing instructions stored in the memory comprising:
detecting feature points in volume data of a measurement object;
extracting contours of a plurality of parts of the measurement object based on the detected feature points and anatomical definitions;
optimizing the contours of parts contacting each other, out of the extracted contours of the plural parts, to combine the plural parts to create a contour of the measurement object;
acquiring measurements of diagnostic items of anatomical structure from the optimized contours; and
creating boundary lines between the contours of the parts contacting each other,
wherein out of the extracted contours of the parts, the processor modifies the contours of the parts contacting each other by optimizing the boundary lines between the contours of the parts contacting each other; and displays the modified contours,
wherein the processor displays the extracted contours of the plural parts and the contour of the measurement object on a display,
wherein the processor displays a control bar on the display for parameter adjustment and modifies the contours of the plural parts in response to an instruction for correction by adjustment of the control bar, and
wherein the processor further executes instructions stored in the memory to optimize the boundary lines by determining distances from the respective contours of the parts contacting each other to the boundary line between the respective contours of the parts contacting each other and minimizing the distances, and determining deviations of the boundary line with respect to shape model information of the contours of the parts contacting each other and minimizing the deviations.
1. An image processing device, comprising:
a processor;
a display coupled to the processor; and
a memory coupled to the processor storing instructions that, when executed by the processor, cause the processor to:
detect feature points in volume data of a measurement object;
extract contours of a plurality of parts of the measurement object based on the detected feature points and anatomical definitions;
optimize the contours of parts contacting each other, out of the extracted contours of the plural parts, to combine the plural parts to create a contour of the measurement object;
acquire measurements of diagnostic items of anatomical structure from the optimized contours; and
create boundary lines between the contours of the parts contacting each other,
wherein out of the extracted contours of the parts, the processor modifies the contours of the parts contacting each other by optimizing the boundary lines between the contours of the parts contacting each other; and displays the modified contours on the display,
wherein the processor displays the extracted contours of the plural parts and the contour of the measurement object on the display,
wherein the processor displays a control bar for parameter adjustment on the display and modifies the contours of the plural parts in response to an instruction for correction by adjustment of the control bar, and
wherein executing the stored instructions further causes the processor to optimize the boundary lines by determining distances from the respective contours of the parts contacting each other to the boundary line between the respective contours of the parts contacting each other and minimizing the distances, and determining deviations of the boundary line with respect to shape model information of the contours of the parts contacting each other and minimizing the deviations.
2. The image processing device according to
wherein the processor modifies one of the contours of the parts contacting each other, concurrently modifying the contour of the other one of the parts contacting each other; and displays the modified contours on the display.
3. The image processing device according to
wherein the processor measures a distance between particular regions of the measurement object based on the contour of the measurement object and displays the measurement results on the display.
4. The image processing device according to
wherein the processor modifies the contours of the plural parts in response to an instruction for correction of the contours of the plural parts as inputted from the input portion.
5. The image processing device according to
wherein in response to a drag-and-drop operation for a desired region of the contours of the plural parts displayed on the display, the processor modifies the contour of the desired region.
7. The image processing method according to
wherein the image processing device modifies one of the contours of the parts contacting each other, concurrently modifying the contour of the other one of the parts contacting each other and displays the modified contours.
8. The image processing method according to
wherein the image processing device measures a distance between particular regions of the measurement object based on the contour of the measurement object and displays the measurement results.
The present invention relates to an image processing device and more particularly, to an image processing technique that acquires measurement information for diagnostic use by extracting a contour of a measurement object from three-dimensional information.
Ultrasonic systems such as ultrasonic diagnostic equipment have the characteristic of enabling observation of the inside of a subject without destroying the subject. Particularly in the medical field, ultrasonic systems negate the need for surgical operations such as laparotomy for treatment of the human body and hence have found wide-ranging application as a means of safely observing internal organs. The heart is one of the subjects of such ultrasonic systems. With the advent of the aging society, recent years have seen an increase in the number of people suffering from cardiac valvular diseases such as mitral regurgitation. Valvuloplasty, valve-sparing surgery and the like have been widely performed as treatments for cardiac valvular diseases. For the surgery to succeed, an exact diagnosis of the disease based on echocardiographic examination must be made beforehand.
In conventional practice, an examiner, as a user, acquires the measurement information necessary for making a diagnosis, such as annulus diameter, valve height and valvular area, as follows. The examiner captures two-dimensional echocardiographic images and manually extracts a contour of the cardiac valve while watching the cross-sectional images. Unfortunately, the manual operations of extracting the cardiac valve contour and taking measurements of the various diagnostic items involve complicated processes and take much time. It is also difficult to clarify a complicated three-dimensional structure such as that of the cardiac valve by using two-dimensional sectional images. More recently, a system has been provided which uses a special ultrasonic probe to acquire volume data, that is, three-dimensional information such as a stereoscopic ultrasonic image of the heart. The system automatically acquires measurement information for diagnosis from the volume data.
Patent Literature 1 is cited as an example of the related prior art documents. The technique of Patent Literature 1 is for acquiring clinically required information such as the annulus area and height of the cardiac mitral valve. For acquisition of a three-dimensional image of the cardiac valve, a three-dimensional echocardiography image is generated from two-dimensional echocardiography images obtained by scanning with an echocardiographic machine. Namely, the patent literature relates to a method of automatically extracting the three-dimensional cardiac valve image through computer processing, in which the three-dimensional cardiac valve image providing for the measurement of clinically required data is automatically extracted by optimizing a fitting evaluation function (potential energy) of an annulus model, in a fitting model considering the physical shapes of the heart and the annulus, by means of a replica exchange method/expansion slow cooling method.
PTL 1: International Publication No. WO2006/068271
According to Patent Literature 1, the contour of the cardiac valve, regarded as one shape, is extracted from the volume data. However, a measurement object such as the cardiac valve has such a complicated shape that it is not easy to extract the whole contour at one stroke with a high degree of precision. The cardiac valve is reported to include a plurality of parts based on anatomical definitions, every one of which is useful for diagnosis of disease. It is therefore necessary to extract not only the whole contour of the measurement object but also the boundary lines between the parts, which define its internal structure. However, Patent Literature 1 does not disclose a method for extracting the boundary lines between the parts of the measurement object.
Accordingly, it is an object of the invention to address the above-described problem by providing an image processing device capable of extracting the contour of the measurement object with high precision, together with the measurement information necessary for diagnosis of the measurement object, as well as a corresponding method.
According to an aspect of the invention for achieving the above object, an image processing device including a processor has an arrangement wherein the processor detects feature points in volume data of a measurement object; extracts contours of a plurality of parts of the measurement object based on the detected feature points and anatomical definitions; optimizes the contours of parts contacting each other, out of the extracted contours of the plural parts, so as to combine together the plural parts for creating a contour of the measurement object; and acquires measurement information therefrom.
According to another aspect of the invention for achieving the above object, an image processing method of an image processing device has an arrangement wherein the image processing device detects feature points in volume data of a measurement object; extracts contours of a plurality of parts of the measurement object based on the detected feature points and anatomical definitions; optimizes the contours of parts contacting each other, out of the extracted contours of the plural parts, so as to combine together the plural parts for creating a contour of the measurement object; and acquires measurement information therefrom.
According to the invention, the high-precision contour and measurement information of the measurement object can be acquired.
A variety of examples of the invention will be specifically described below with reference to the accompanying drawings. Throughout the figures illustrating the examples, the same or similar reference numerals are assigned to the same or similar components, which are in most cases explained only once to avoid repetition. It is noted that feature points and a plurality of parts in the subject are defined to mean artificially set positions or regions of a subject organ, such as the heart, which are regarded as anatomically meaningful. Out of the contours of the plural parts in the subject, a contour of a portion where the parts contact each other is referred to as a boundary line.
Example 1 illustrates an image processing device of an ultrasonic imaging system. The image processing device has a configuration where a processor acquires measurement information by taking the steps of: detecting feature points of a measurement object which are contained in volume data acquired by transmitting/receiving an ultrasonic wave; extracting contours of plural parts of the measurement object based on the detected feature points and anatomical definitions; and optimizing the contours of the parts contacting each other (out of the detected contours of the plural parts) and combining together the plural parts so as to create a contour of the measurement object. Further, the example illustrates an image processing method of the image processing device. In this method, the image processing device acquires the measurement information by: detecting the feature points in the volume data of the measurement object; extracting the contours of the plural parts of the measurement object based on the detected feature points and the anatomical definitions; and optimizing the contours of the parts contacting each other (out of the detected contours of the plural parts) and combining together the plural parts so as to create the contour of the measurement object.
As shown in
<System Configuration and Operations>
A detailed description of a specific configuration of the ultrasonic imaging system of the example is given below. In addition to the ultrasonic probe 7, the image generating portion 107 and the image processing device 108 as shown in
The transmitter 102 generates a transmission signal under control of the controller 106 and delivers the signal to each of the plural ultrasonic elements constituting the ultrasonic probe 7. This causes the plural ultrasonic elements of the ultrasonic probe 7 to transmit ultrasonic waves to the subject 120. The ultrasonic waves reflected by the subject 120 return to the plural ultrasonic elements of the ultrasonic probe 7, where they are converted to electric signals. The signals received by the ultrasonic elements are transmitted to the receiver 105 via the duplexer 101, and the receiver 105 delays the signals by predetermined delay amounts corresponding to reception focus points and adds the delayed signals; that is, the signals are phased and added. Such signal transmission and reception are repeated for each of the plural reception focus points. The phased and added signal is delivered to the image generating portion 107. The duplexer 101 selectively connects the transmitter 102 or the receiver 105 to the ultrasonic probe 7.
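The receive focusing just described, delaying each element signal by an amount corresponding to the reception focus point and adding the delayed signals, is conventionally known as delay-and-sum beamforming. A minimal NumPy sketch of that operation follows; the array geometry, sound speed, sampling rate and all variable names are illustrative assumptions and not part of the described receiver 105.

```python
import numpy as np

def delay_and_sum(element_signals, element_positions, focus_point, c=1540.0, fs=40e6):
    """Phase (delay) and add the element signals for one reception focus point.

    element_signals:   (n_elements, n_samples) RF traces from the probe elements
    element_positions: (n_elements, 3) element coordinates in metres
    focus_point:       (3,) coordinates of the reception focus point in metres
    c, fs:             assumed speed of sound [m/s] and sampling frequency [Hz]
    """
    n_elements = element_signals.shape[0]
    # Receive delay of each element, taken relative to the earliest-arriving element.
    distances = np.linalg.norm(element_positions - focus_point, axis=1)
    delay_samples = np.round((distances - distances.min()) / c * fs).astype(int)

    summed = np.zeros(element_signals.shape[1])
    for trace, d in zip(element_signals, delay_samples):
        # Shift each trace so echoes from the focus point line up, then add.
        # np.roll wraps around at the ends; acceptable for this simplified sketch.
        summed += np.roll(trace, -d)
    return summed / n_elements

# Illustrative use with synthetic element signals and a 64-element linear array.
rng = np.random.default_rng(0)
signals = rng.standard_normal((64, 2048))
positions = np.column_stack((np.linspace(-0.01, 0.01, 64), np.zeros(64), np.zeros(64)))
focused = delay_and_sum(signals, positions, np.array([0.0, 0.0, 0.03]))
```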
The image generating portion 107 generates an ultrasonic image as the volume data by performing processing for receiving the phased and added signals from the receiver 105 and arranging the received signals at positions corresponding to the reception focus points. The image processing device 108 receives the volume data from the image generating portion 107 and extracts standard sections. It is noted here that the standard section is defined to mean a sectional image complying with guidelines for standard section acquisition. Although
Referring to an exemplary hardware configuration of the image processing device 108 and the user interface 121 shown in
As shown in
At least one of the ROM2 and the RAM3 is previously stored with an arithmetic processing program for the CPU1 and a variety of data necessary for implementing the operations of the image processing device 108. The various processes of the image processing device 108, which will be described hereinafter, are implemented by the CPU1 executing the program previously stored in at least one of the ROM2 and the RAM3. Incidentally, the programs executed by the CPU1 may be previously stored in a storage medium 12 such as an optical disk, such that the medium input portion 11 such as an optical disk drive may read the program into the RAM3.
Further, the storage unit 4 may be previously stored with the program such that the program in the storage unit 4 may be loaded into the RAM3. Otherwise, the ROM2 may be previously stored with the program. The storage unit 4 includes an unillustrated shape model database. The shape model database includes, as information on the contours of the anatomical parts described hereinafter, average shapes of plural parts of a test object such as the cardiac valve of the subject, shape parameters of principal components, and the like. The contour of the object is extracted by fitting the average shape and the principal component shapes stored in the database to the image.
The input device 14 of the user interface 121 is for receiving a user operation and includes, for example, a keyboard, trackball, operation panel, foot switch and the like. The input controller 13 receives an operation instruction inputted by a user via the input device 14. The operation instruction received by the input controller 13 is processed and executed by the CPU1.
Next, the operations of the image processing device 108 of the example are described with reference to a flow chart of
First in Step 301 (hereinafter, written as S301), ultrasonic volume data generated by the image generating portion 107 is inputted.
In S302, feature points are detected from the inputted volume data. As will be described with reference to
In S303, the object organ is divided into plural parts based on the anatomical definitions and the contours of the respective parts are detected. In a case where the object organ is a cardiac mitral valve, for example, the plural parts are six anatomical parts including a lateral margin of the anterior cusp, a lateral margin of the posterior cusp and the like, as shown in
Constructed profile models and shape models are previously stored in the shape model database in the storage unit 4. These models are retrieved, and the shape model is fitted to the image of the inputted unknown volume data using the feature points detected in S302 as initial positions. Next, the positions of the vertices are optimized based on the restriction on the image feature value imposed by the profile model and the restriction on the shape imposed by the shape model. The shape of the anatomical part can be extracted by repeating this process until all the vertices reach stable positions.
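The following is a minimal sketch of the fitting loop just described: each vertex is first moved toward the strongest nearby image evidence (the profile-model restriction is approximated here by a plain gradient search), the result is then projected onto the learned shape space (the shape-model restriction), and the process repeats until all the vertices reach stable positions. The gradient search, the toy inputs in the usage lines and the function name are simplified stand-ins, not the actual models stored in the storage unit 4.

```python
import numpy as np

def fit_active_shape_model(gradient_image, mean_shape, components, init_offset,
                           n_iter=50, tol=1e-3, search_radius=3):
    """Simplified 2D ASM fitting loop.

    gradient_image: 2D gradient-magnitude image (stand-in for profile-model evidence)
    mean_shape:     (n_vertices, 2) average shape retrieved from the shape model database
    components:     (n_modes, n_vertices*2) principal component vectors of the shape model
    init_offset:    (2,) initial translation given by a detected feature point
    """
    shape = mean_shape + init_offset
    h, w = gradient_image.shape
    for _ in range(n_iter):
        # Profile-model step: move every vertex to the strongest nearby gradient.
        proposed = shape.copy()
        for i, (x, y) in enumerate(shape):
            xi, yi = int(round(x)), int(round(y))
            best = gradient_image[yi, xi] if (0 <= xi < w and 0 <= yi < h) else -np.inf
            best_xy = (xi, yi)
            for dy in range(-search_radius, search_radius + 1):
                for dx in range(-search_radius, search_radius + 1):
                    u, v = xi + dx, yi + dy
                    if 0 <= u < w and 0 <= v < h and gradient_image[v, u] > best:
                        best, best_xy = gradient_image[v, u], (u, v)
            proposed[i] = best_xy
        # Shape-model step: project the proposed shape onto the learned shape space.
        centred = (proposed - proposed.mean(axis=0) + mean_shape.mean(axis=0)).ravel()
        b = components @ (centred - mean_shape.ravel())
        constrained = (mean_shape.ravel() + components.T @ b).reshape(-1, 2)
        new_shape = constrained + (proposed.mean(axis=0) - mean_shape.mean(axis=0))
        if np.max(np.abs(new_shape - shape)) < tol:     # all vertices are stable
            return new_shape
        shape = new_shape
    return shape

# Illustrative use: a ring-shaped edge image and a circular mean shape.
t = np.linspace(0, 2 * np.pi, 24, endpoint=False)
mean = 20.0 + 10.0 * np.column_stack((np.cos(t), np.sin(t)))
modes = np.eye(48)                              # identity stand-in: no shape constraint
yy, xx = np.mgrid[0:64, 0:64]
grad = (np.abs(np.hypot(xx - 32, yy - 32) - 12) < 1.0).astype(float)
fitted = fit_active_shape_model(grad, mean, modes, init_offset=np.array([12.0, 12.0]))
```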
In S304, the contours of the parts that contact each other, out of the plurality of parts, namely the boundary lines, are optimized. As will be described hereinafter with reference to
In S305, measurements of the diagnostic items useful for the diagnosis such as annulus diameter, valve height, and valvular area are taken based on the created overall contour.
In S306, the CPU1 of the image processing device 108 outputs the contours of the parts extracted in S303 (part contour extraction), the whole contour reconstructed by the boundary line optimization in S304, and the measurement information of the annulus diameter, valve height, valvular area and the like as measured in S305, while the display controller 15 transmits the outputs to the display 16 which displays the outputs.
Next, a detailed description is made on the feature point detection in Step 302 of the example with reference to
Methods such as the Hough forest and the Support Vector Machine (SVM) are used for detection of the feature points. In a case where a Hough forest is used, decision trees are prepared that make decisions on the direction and distance between various particular regions in the volume data, such that a feature classifier capable of acquiring, in the same way, the direction and distance to an arbitrary feature point from unknown volume data can be generated by combining a plurality of the decision trees. That is, when unknown volume data is inputted to the feature classifier, vector information pointing to the sought feature point is finally obtained by sequentially passing through the decision trees in which the direction and positional relation of the inputted region image match those of the object region.
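As a rough illustration of this voting scheme, the sketch below trains a random regression forest to predict the offset (direction and distance) from a local patch to a landmark and then lets patches sampled from an unseen image vote for the landmark position. It uses scikit-learn's generic random forest in place of a true Hough forest, and the 2D images, patch features and landmark are synthetic, so it should be read only as a conceptual sketch of the idea.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def make_image(landmark, size=64):
    """Synthetic 2D 'image': a bright blob around the landmark plus noise."""
    y, x = np.mgrid[0:size, 0:size]
    blob = np.exp(-((x - landmark[0]) ** 2 + (y - landmark[1]) ** 2) / 40.0)
    return blob + 0.1 * rng.standard_normal((size, size))

def sample_patches(image, n=200, half=4):
    """Sample patch centres and flattened patch intensities as features."""
    size = image.shape[0]
    centres = rng.integers(half, size - half, size=(n, 2))
    feats = np.array([image[cy - half:cy + half, cx - half:cx + half].ravel()
                      for cx, cy in centres])
    return centres, feats

# Training set: many images with known landmark positions; each patch is labelled
# with its offset (direction and distance) to the landmark, as in Hough-forest training.
X, y = [], []
for _ in range(30):
    lm = rng.uniform(16, 48, size=2)
    img = make_image(lm)
    centres, feats = sample_patches(img)
    X.append(feats)
    y.append(lm - centres)                    # offset from patch centre to landmark
forest = RandomForestRegressor(n_estimators=20, random_state=0)
forest.fit(np.vstack(X), np.vstack(y))

# Detection on an unseen image: every patch casts a vote; the mean vote is the estimate.
true_lm = np.array([40.0, 25.0])
centres, feats = sample_patches(make_image(true_lm))
votes = centres + forest.predict(feats)
print("estimated landmark:", votes.mean(axis=0), "true:", true_lm)
```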
Next, a detailed description is made on Step 303 of extraction of the contours of plural parts according to the example.
A method based on the well-known Active Shape Model (ASM) is used for contour extraction. The shape model is a series of vertices constituting the outline of an organ present in the medical image.
The profile model 601 is for extraction of an image feature value based on a method such as edge detection. The image feature value is extracted from a local region 604 around a vertex 603 defining the outline. The shape model 602 is constructed using Principal Component Analysis (PCA). The shape model 602 includes an average shape and shape principal component vectors representing variation types. The average shape is determined by calculating the average of all the training shapes and can be construed as a feature that all mitral valves have in common. Each variation type is determined by subtracting the average shape from each of the training shapes and represents how much the shapes vary from the average shape. Hence, the shape of a particular region is generated by choosing some variation types as bases and adding the average shape to a weighted combination of these bases.
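A minimal sketch of building such a shape model: the average shape is the mean over a set of aligned training contours, the variation types are the principal component vectors of the deviations from that mean, and a new shape is generated as the average shape plus a weighted combination of those components. The training contours below are synthetic stand-ins for the annotated valve outlines assumed to exist in the shape model database.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training contours: noisy ellipses standing in for annotated valve outlines,
# each represented as a fixed number of (x, y) vertices flattened into one row.
n_shapes, n_vertices = 40, 30
t = np.linspace(0, 2 * np.pi, n_vertices, endpoint=False)
shapes = np.array([
    np.column_stack(((1.0 + 0.2 * rng.standard_normal()) * np.cos(t),
                     (0.6 + 0.1 * rng.standard_normal()) * np.sin(t))).ravel()
    for _ in range(n_shapes)
])

# Average shape: the feature all training shapes have in common.
mean_shape = shapes.mean(axis=0)

# Variation types: principal components of the deviations from the average shape.
deviations = shapes - mean_shape
_, singular_values, vt = np.linalg.svd(deviations, full_matrices=False)
n_modes = 3
components = vt[:n_modes]                       # (n_modes, n_vertices*2)
mode_std = singular_values[:n_modes] / np.sqrt(n_shapes - 1)

def generate_shape(b):
    """Generate a contour as average shape + weighted combination of variation types."""
    return (mean_shape + components.T @ (b * mode_std)).reshape(-1, 2)

new_contour = generate_shape(np.array([1.0, -0.5, 0.0]))   # weights in units of mode std
```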
The contour extraction from the constructed profile model 601 and shape model 602 is performed as follows. The feature points detected in Step 302 as represented by the hatched circles on the parts 501 to 506 in
It is noted that in place of the above-described ASM method, the well-known Active Appearance Model (AAM) method, the Constrained Local Model (CLM) method or the like may also be applied to the shape extraction of the particular region.
Now referring to
\(\phi(X) = \varepsilon_1 \sum_{i=1}^{n} \left( P_i X_i + Q_i X_i \right) + \varepsilon_2 \sum_{i=1}^{n} \left| X_i - \bar{S}_i \right|\)   (Expression 1)
It is noted here that X = (X1, X2, . . . , Xn) denotes the boundary line 705 defining the contour of the portion where the parts contact each other; Xi denotes the i-th vertex on the boundary line 705; Pi denotes the i-th vertex on the boundary line 703 of the part 701; Qi denotes the i-th vertex on the boundary line 704 of the part 702; PiXi denotes the distance between the vertex Pi and the vertex Xi; QiXi denotes the distance between the vertex Qi and the vertex Xi; S̄ (S with a bar above) denotes the average shape of the boundary line contacting the part 701 and the part 702, which is stored in the storage unit 4, S̄i being its i-th vertex; and ε1 and ε2 denote weighting factors of the distance information and of the deviation from the average shape, respectively.
The first half of Expression 1 denotes information about the distances from the boundary line 705 to the boundary line 703 and to the boundary line 704. The latter half of Expression 1 denotes shape information indicating the deviation of the boundary line 705 from the average shape stored in the storage unit 4. The boundary line 705 is updated by minimizing both the distance information in the first half of the expression and the shape information in the latter half of the expression. An optimum boundary line 707 can be obtained by repeating the processing until the function φ(X) reaches a value equal to or less than a threshold value. Namely, out of the extracted contours of the plural parts, the processor of the image processing device 108 optimizes the boundary lines defining the contours of the parts contacting each other based on the distance information and shape information of the contours of the contacting parts. A contour 706 of the whole measurement object including the plural parts can be created by combining the contours of the plural parts obtained by optimizing the boundary lines.
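A minimal sketch of this boundary-line update is given below, under the assumption of a simple relaxation step rather than the device's actual optimization procedure: each vertex Xi of the shared boundary line is pulled toward the corresponding vertices Pi and Qi of the two contacting contours and toward the stored average shape, and the update is repeated until φ(X) falls to a threshold or an iteration limit is reached. The inputs in the usage lines are synthetic.

```python
import numpy as np

def optimize_boundary(P, Q, S_bar, eps1=1.0, eps2=0.5, step=0.1,
                      threshold=1e-3, max_iter=500):
    """Optimize a shared boundary line X between two contacting part contours.

    P, Q:   (n, 2) corresponding vertices on the boundary contours of the two parts
    S_bar:  (n, 2) average boundary shape stored in the shape model database
    Returns the optimized boundary line X.
    """
    X = 0.5 * (P + Q)                       # initial boundary between the two contours

    def phi(X):
        # Expression 1: weighted sum of distance terms and average-shape deviation.
        dist_term = np.linalg.norm(X - P, axis=1) + np.linalg.norm(X - Q, axis=1)
        shape_term = np.linalg.norm(X - S_bar, axis=1)
        return eps1 * dist_term.sum() + eps2 * shape_term.sum()

    # Relaxation target: a weighted average of P, Q and the average shape.
    target = (eps1 * (P + Q) + eps2 * S_bar) / (2.0 * eps1 + eps2)
    for _ in range(max_iter):
        X = (1.0 - step) * X + step * target
        if phi(X) <= threshold:
            break
    return X

# Illustrative use: two slightly different boundary contours and an average shape.
n = 20
t = np.linspace(0.0, 1.0, n)
P = np.column_stack((t, 0.05 * np.sin(2 * np.pi * t) + 0.02))
Q = np.column_stack((t, 0.05 * np.sin(2 * np.pi * t) - 0.02))
S_bar = np.column_stack((t, 0.05 * np.sin(2 * np.pi * t)))
X_opt = optimize_boundary(P, Q, S_bar)
```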
As just described, the processor of the image processing device according to the example displays the extracted contours of the plural parts and the whole contour of the measurement object on the display as the image display portion. Out of the extracted contours of the plural parts, the processor also modifies the contours of the parts contacting each other by optimizing the boundary lines defining the contours of the parts contacting each other and displays the modified contours on the display. Further, when correcting one of the contours of the parts contacting each other, the processor also modifies the contour of the other one of the contacting parts in conjunction with the correction, and displays the modified contours on the display.
Next, measurements are taken of the items useful for diagnosis, such as annulus diameter, valve height, and valvular area, based on the extracted contour. For instance, the annulus diameter means the maximum distance between two points on the valve ring, while the annulus height means the distance between the highest point and the lowest point on a valve ring spline. Based on the created contour, the maximum diameter and the area of the valve ring are automatically calculated. The extracted contour and the measurement information, including the measured annulus diameter, valve height, valvular area and the like, are transmitted to the display 16 for display. As just described, the processor of the image processing device is adapted to measure a distance between particular regions of the measurement object based on the contour of the measurement object and to indicate the measurement information as the measurement results on the display portion.
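As an illustration of these measurements, the following sketch computes a maximum annulus diameter (the largest distance between any two points on the ring), the annulus height (the vertical extent of the ring) and a valvular area (the area of the ring's plane projection) from a contour. The saddle-shaped ring and the simplified definitions are illustrative assumptions, not the exact measurement definitions used by the device.

```python
import numpy as np

def annulus_measurements(contour):
    """Simplified diagnostic measurements from a closed 3D annulus contour.

    contour: (n, 3) ordered points on the valve ring (x, y, z with z as 'height').
    Returns (maximum diameter, height, projected area).
    """
    # Maximum diameter: largest distance between any two points on the ring.
    diffs = contour[:, None, :] - contour[None, :, :]
    diameter = np.sqrt((diffs ** 2).sum(axis=-1)).max()

    # Annulus height: distance between the highest and lowest points on the ring.
    height = contour[:, 2].max() - contour[:, 2].min()

    # Valvular (projected) area: shoelace formula on the x-y projection.
    x, y = contour[:, 0], contour[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return diameter, height, area

# Illustrative use: a saddle-shaped ring loosely resembling a mitral annulus (units: mm).
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
ring = np.column_stack((30 * np.cos(t), 20 * np.sin(t), 3 * np.cos(2 * t)))
d, h, a = annulus_measurements(ring)
print(f"annulus diameter {d:.1f} mm, height {h:.1f} mm, area {a:.0f} mm^2")
```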
The ultrasonic imaging system of the example as described above is adapted to acquire the robust, high-precision contour and measurement information by taking the steps of: extracting, from the three-dimensional information on the subject, the contours of the plural parts based on the anatomical definitions of the measurement object; optimizing the boundary lines defining the contours of the parts contacting each other, out of the plural parts; and measuring the necessary diagnostic items from the optimum contour.
In the image processing device of the ultrasonic imaging system of Example 1, Example 2 illustrates a configuration where if the user checking the extracted contour on the screen determines that the contour needs correction, the user can modify the contour by manual adjustment or can semi-automatically modify the contour by way of shape parameter adjustment.
If it is determined that the contour needs correction (YES), the contour is manually corrected via the interface or semi-automatically corrected by way of parameter adjustment or the like in S906 of
The user can check a numerical value of the extracted shape displayed in the measurement value display region 1008. If the shape needs correction, the user can manually make a fine adjustment of the shape. Specifically, when the user presses the manual correction button 1003, a previously stored manual correction program becomes executable on the CPU1. The user performs the modification by dragging and dropping a desired region 1006 of the particular regional shape 1007 displayed in the image display region 1002, so that a local contour of the area around the related region can be corrected manually. Further, the whole shape can be adjusted by way of parameter adjustment. The user can semi-automatically accomplish scaling, rotation, shape modification and the like of the corresponding mitral valve by manipulating the control bar 1004. Next, the size, area and the like of the region to be observed are calculated from the corrected contour. These calculated values are redisplayed in the measurement value display region 1008 on the screen of the interface 1000.
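As a rough sketch of the semi-automatic adjustment driven by the control bar 1004, the following maps control-bar style parameters (a scale, a rotation angle and one shape-model weight) onto a regenerated contour. The parameter names, the toy shape model and the function are illustrative assumptions and do not reproduce the actual interface 1000.

```python
import numpy as np

def adjust_contour(mean_shape, components, scale=1.0, angle_deg=0.0, mode_weight=0.0):
    """Regenerate a 2D contour from control-bar style parameters.

    mean_shape: (n, 2) average contour from the shape model database
    components: (n_modes, n*2) principal component vectors (variation types)
    """
    # Shape modification: add a weighted first variation type to the average shape.
    shape = (mean_shape.ravel() + mode_weight * components[0]).reshape(-1, 2)
    # Scaling and rotation about the contour centroid.
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    centroid = shape.mean(axis=0)
    return (shape - centroid) @ rot.T * scale + centroid

# Illustrative use with a toy shape model (an ellipse plus one variation mode).
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
mean_shape = np.column_stack((np.cos(t), 0.6 * np.sin(t)))
components = np.column_stack((0.2 * np.cos(3 * t), 0.1 * np.sin(3 * t))).ravel()[None, :]
corrected = adjust_contour(mean_shape, components, scale=1.1, angle_deg=5.0, mode_weight=0.8)
```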
As just described, the image processing device of the example includes an input portion such as the interface 1000, and the processor modifies the contours of the plural parts in response to an instruction to correct the contours of the plural parts inputted from the input portion. The processor of the image processing device displays the control bar for parameter adjustment on the image display portion and modifies the contours of the plural parts in response to a correction instruction given by adjustment of the control bar. Further, in conjunction with a drag-and-drop operation on a desired region of the contours of the plural parts shown on the display portion, the processor of the image processing device modifies the contour of the desired region.
According to the example, a more accurate contour can be extracted to support the measurement of the diagnostic items, because the user checks the extracted contour on the screen and, if the contour needs correction, can correct the contour manually or semi-automatically by way of adjustment of the shape parameters.
In the image processing device of the ultrasonic imaging system according to Examples 1 and 2, Example 3 illustrates a configuration where a contour of two or more adjoining organs, such as the mitral valve and aortic valve, is wholly extracted so as to provide measurement information useful for diagnosis.
It is noted that the invention is not limited to the foregoing examples but includes examples corresponding to a variety of organs. The foregoing examples are detailed illustrations given to clarify the invention, and the invention is not necessarily limited to one that includes all the components described above. Some component of one example can be replaced by some component of another example, and some component of one example can be added to the arrangement of another example. For a part of the arrangement of each example, the addition, omission or replacement of some component of another example is permitted.
The above-described components, functions, processors and the like have been described by way of the example in which a program implementing a part or all of them is generated and executed by the CPU. It goes without saying that a part or all of the components, functions, processors and the like can instead be implemented in hardware, for example by designing them as an integrated circuit. Namely, all or a part of the functions of the image processing device can be implemented in an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array) in place of the program.
Patent | Priority | Assignee | Title
U.S. Pat. No. 5,889,524 | Sep. 11, 1995 | University of Washington | Reconstruction of three-dimensional objects using labeled piecewise smooth subdivision surfaces
US 2001/0024517 | | |
US 2005/0148852 | | |
US 2008/0085043 | | |
US 2009/0077497 | | |
US 2012/0281895 | | |
US 2013/0261447 | | |
US 2015/0070523 | | |
US 2015/0371420 | | |
US 2016/0086049 | | |
JP 2005-169120 | | |
JP 2007-312971 | | |
WO 2006/068271 | | |