A visibility index for medical images. The method includes generating a visibility index from a training set of images; making a number of measurements of a set of features from an image of an abnormality that is not a member of the training set; and combining the number of measurements to generate a visibility score mapped to the visibility index.
1. A method for using a visibility index for medical images, the method comprising:
performing a plurality of reader studies on a set of training images to assign a training visibility score to each member of the set of training images, to build a set of training visibility scores;
operating a processor to map the set of training visibility scores to a visibility index so that the visibility index is derived from the plurality of reader studies, wherein the visibility index has a range including a portion corresponding to typical radiologist performance so as to allow an objective determination as to whether a particular abnormality on a particular image was reasonably missed by a reader;
operating a processor for making a number of measurements of a set of features from an image of a selected abnormality that is not a member of the training set; and
operating a processor for combining the number of measurements to generate an assigned visibility score mapped to the visibility index, where the assigned visibility score's mapping to the visibility index determines whether the selected abnormality was reasonably missed by a reader.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
8. The method of
10. The method of
11. The process of
12. The process of
13. The process of
14. The process of
This application is related to co-pending provisional application of Yankelevitz, et al., application No. 60/784,683, filed Mar. 22, 2006, entitled “MEDICAL IMAGING VISIBILITY INDEX SYSTEM AND METHOD FOR CANCER LESIONS” and, by this reference, claims the benefit of the priority filing date of the co-pending provisional application. All of the contents of the aforementioned co-pending provisional application No. 60/784,683 are incorporated by reference.
The present invention relates generally to analysis of medical imaging data, and, more particularly to analysis of medical imaging data using a visibility index for cancer lesions.
A primary reason for malpractice suits against radiologists is missed cancer. Such suits are often based on missed breast cancer on mammography and missed lung cancer on chest x-rays. To a smaller degree, errors on CT scans have also led to legal actions. Currently, there is no objective standard for measuring the effectiveness of human observers and/or computer-controlled visioning equipment for finding cancer lesions. Missed cancers are often reviewed using hindsight and knowledge of facts not available to the original observer. As a result, such reviews often produce diametrically opposed opinions regarding whether an incidence of missed cancer fell below a standard of care.
In a typical scenario, a patient has undergone an imaging procedure producing a medical image. Although present in the medical image, a lesion is missed and not identified by a reader, such as a radiologist. Subsequently, cancer is discovered when the patient either has the imaging done again or has become symptomatic. A malpractice claim may result where it is typically alleged that the radiologist should have seen the lesion on the original study. It may be further alleged that, because the lesion was missed, it has progressed making the lesion less amenable to treatment and thereby increasing the risk of death.
One of the most challenging aspects of such malpractice cases turns on whether the missed abnormality was readily identifiable on the initial examination according to accepted medical practice. Unfortunately, criteria for determining the visibility of a cancerous abnormality are quite vague. A lesion may have low conspicuity on an early scan image. However, once a lesion is known to exist in a specific location, an informed observer may opine that the lesion is fairly obvious on a medical image. Using such hindsight, a fact-finding body, such as a jury, may make a determination of malpractice in questionable circumstances. As a general rule, when an expert participates in a case, the expert reviews the images and determines whether a particular lesion should have been missed or found. Often the expert reviews a series of images taken over time and determines the point at which the lesion should have been visible. Ultimately, there are no objective standards for determining the visibility of lesions on medical images.
A visibility index for medical imaging has heretofore been lacking. As a result, no objective standard for measuring the effectiveness of the interpretation of a medical image, whether by human or automated system, has been made available.
In one example, a system and method for creating and using a visibility index for medical images is described. The method includes generating a visibility index from a training set of images;
making a number of measurements of a set of features from an image of an abnormality that is not a member of the training set; and
combining the number of measurements to generate a visibility score mapped to the visibility index.
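The three steps summarized above can be sketched end to end. The following is a minimal illustration, not the patented implementation: a 1-nearest-neighbour rule (one of the classifier families named later in the description) stands in for the trained classifier, and the feature names, values, and visibility scores are invented for demonstration.

```python
import math

# Hypothetical sketch of the method's three steps: build a scored training
# set (the visibility index), measure features of a new lesion, and combine
# the measurements into an assigned visibility score. A 1-nearest-neighbour
# rule stands in for the trained classifier; feature names are illustrative.

def nearest_neighbour_score(train_set, features):
    """Assign the visibility score of the closest training lesion."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(train_set, key=lambda item: dist(item[0], features))
    return best[1]

# Training set: (feature vector, reader-study visibility score) pairs.
# Features might be, e.g., (diameter_mm, contrast, border_sharpness).
training = [
    ((4.0, 0.10, 0.2), 1.0),   # small, faint lesion: low visibility
    ((12.0, 0.45, 0.7), 5.0),  # mid-sized, moderate contrast
    ((25.0, 0.80, 0.9), 9.0),  # large, conspicuous lesion
]

# Score a lesion that was not part of the training set.
score = nearest_neighbour_score(training, (11.0, 0.40, 0.6))
```

In this toy example the new lesion most resembles the second training case and inherits its score of 5.0; a production classifier would interpolate rather than copy a neighbour's score.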
While the novel features of the invention are set forth with particularity in the appended claims, the invention, both as to organization and content, will be better understood and appreciated, along with other objects and features thereof, from the following detailed description taken in conjunction with the drawings, in which:
Preliminarily, it should be noted that while a particular system and method described in detail herein is for analyzing medical imaging data, such as radiology data, this is not by way of limitation, but solely for the purposes of illustration, and the system and method described may be employed for analyzing data of other types.
Referring now to
Referring now to
Each of the visibility scores 30 is also assigned to its respective medical image and later used as one of a set of target output scores for training an automated classifier using at least a portion of the set of training images 10 as described below with reference to
The medical imaging data may include portions of medical scans and/or entire scans. The set of training images 10 of known conditions may include, for example, radiology data, radiology images, medical image data, pathology image data, digital images of medical data, photographic images, scanned images, molecular imaging data and medical genetic imaging data. The medical imaging data may be generated from medical imaging procedures including, for example, Computerized Tomography (CT) scans, Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), X-Rays, Vascular Interventional and Angiogram/Angiography procedures, Ultrasound imaging, and similar procedures. A set of training images preferably comprises images of like disease conditions from the same type of medical imaging device. However, the process of the invention is not so limited, and there may be applications wherein training sets of dissimilar images may prove useful.
In other cases, simulated nodules and/or synthetically generated images may be used for training, validating or other purposes. In one example, synthetic or actual images may be employed to rate an imaging system or a computer aided diagnostic device using the visibility index. In another example, the visibility index may be employed for grading a database of nodule images.
Referring now to
In one exemplary embodiment, the classifier 130 may comprise a neural network. Neural networks are characterized by having processing units {u_j}, where each u_j has a certain activation level a_j(t) at any point in time. Weighted interconnections between the various processing units determine how the activation of one unit leads to input for another unit. An activation rule acts on the set of input signals at a unit to produce a new output signal, or activation. A learning rule that specifies how to adjust the weights for a given input/output pair may also be optionally included.
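The neural-network elements just described can be illustrated with a single processing unit. This is a minimal sketch under toy values, not the embodiment's network: the activation rule is a sigmoid of the weighted input sum, and the learning rule is a simple delta-style weight update for a given input/output pair.

```python
import math

# One processing unit: an activation rule over weighted interconnections,
# plus a learning rule that adjusts the weights toward a target output.
# All values are toy; a real classifier would have many units and layers.

def activation(weights, inputs):
    """Activation rule: sigmoid of the weighted interconnection sum."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-s))

def learn(weights, inputs, target, rate=0.5):
    """Learning rule: nudge each weight to reduce the output error."""
    out = activation(weights, inputs)
    err = target - out
    return [w + rate * err * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(200):                   # repeated input/output pairs
    weights = learn(weights, [1.0, 1.0], target=0.9)
out = activation(weights, [1.0, 1.0])  # converges toward the target 0.9
```

Repeated application of the learning rule drives the unit's output toward the target, which is the mechanism by which the classifier's weights are fitted to the reader-derived visibility scores.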
In other exemplary embodiments, the classifier 130 may advantageously employ conventional techniques such as linear regression algorithms, nearest neighbor thresholds, discriminant analysis, Bayesian approaches, adaptive pattern recognition, fuzzy-set theory, and adaptive processing, as well as artificial neural networks, Kohonen maps and equivalents. For a list of features and classifiers previously used to classify nodule candidates for computer-aided diagnosis, see van Ginneken, et al., "Computer-Aided Diagnosis in Chest Radiography: A Survey," IEEE Transactions on Medical Imaging, Vol. 20, No. 12, pp. 1228-1237, 2001.
Referring now to
Examples of useful features for classifying nodule candidates from chest radiography images include radius, contrast, distance from hilum, shape features, features based on histograms and filter outputs applied to regions of interest (ROIs), location, degree of circularity, degree of irregularity, density, texture, power spectrum features, diameter, size, gradient measures, statistical parameters including mean, maximum, width, and standard deviation, combinations of such features, and the like. The features may also include image-quality values automatically assigned based on grey-scale, color, size, border definition and other characteristics typically used in image processing. In one exemplary embodiment, the feature extraction processor, classifier, error processor and parameter correction processor may advantageously be operated as software programs residing in a computer, such as a personal computer or the like.
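A few of the listed features can be sketched concretely. The following toy extractor computes mean density, contrast, and a pixel-count size estimate from a tiny grey-scale region of interest represented as a 2-D list; the threshold and pixel values are invented, and a real extractor would operate on full scan pixel data.

```python
# Illustrative extraction of three of the features named above (mean
# density, contrast, size) from a small grey-scale ROI with values in
# [0, 255]. The bright-pixel threshold is an assumption for this sketch.

def extract_features(roi, threshold=128):
    pixels = [p for row in roi for p in row]
    lesion = [p for p in pixels if p >= threshold]      # bright pixels
    background = [p for p in pixels if p < threshold]
    mean_density = sum(lesion) / len(lesion)
    contrast = mean_density - sum(background) / len(background)
    size = len(lesion)                                  # area in pixels
    return {"mean_density": mean_density, "contrast": contrast, "size": size}

roi = [
    [10,  20,  15, 12],
    [18, 200, 210, 14],
    [16, 205, 198, 11],
    [12,  19,  13, 17],
]
feats = extract_features(roi)   # the four bright pixels form the "nodule"
```

Feature vectors of this kind, one per lesion, are what the classifier combines into a single assigned visibility score.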
Referring now particularly to
Referring now to
Note that, in some cases, a nodule may be present in an image being scored but, due to shadows, lighting or proximity to other organs, may be particularly difficult to discern. Since the location of the nodule is known, a radiologist may employ available computer drawing tools to outline the nodule, using, for example, boundary 216, before it is scored by the classifier in order to ensure a better result. Examples of such useful computer drawing tools include those described in U.S. patent application Ser. No. 11/552,516 to Yankelevitz et al., filed Oct. 24, 2006 and entitled, "MEDICAL IMAGING SYSTEM FOR ACCURATE MEASUREMENT EVALUATION OF CHANGES IN A TARGET LESION," the full disclosure and contents of which are incorporated herein by reference.
Referring now to
For lung nodules a classifier may be constructed using images representing a plurality of varying categories of nodules as inputs for training and reader studies. Further, the varying categories of nodules may advantageously comprise actual cases or computer generated nodule images. The visibility index may be generated from reader study tests where combinations of cases, with a range of findings, are presented to a group of radiologists for interpretation.
The reader study is not limited to human readers. The visibility index may also be applied to automated medical imaging systems in order to provide an indication of the quality and/or reliability of such medical imaging systems. For specific uses, a medical imaging system can be tested to measure its performance on nodules with pre-selected criteria.
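One simple way to derive per-nodule scores from such a reader study can be sketched as follows. This is a hedged illustration: the score here is the fraction of readers who detected the nodule, and the band of the index treated as typical radiologist performance is an invented threshold, not one specified by the method.

```python
# Hypothetical derivation of visibility scores from a reader study: each
# nodule's score is the fraction of readers who detected it, and a miss is
# deemed "reasonable" when the score falls at or below a band representing
# typical radiologist performance. The band values are invented.

TYPICAL_BAND = (0.3, 0.7)   # assumed band of typical reader performance

def visibility_scores(detections):
    """detections maps nodule id -> list of per-reader hits (True/False)."""
    return {n: sum(hits) / len(hits) for n, hits in detections.items()}

def reasonably_missed(score, band=TYPICAL_BAND):
    """Treat a miss as reasonable when the score is within or below the
    band of typical radiologist performance."""
    return score <= band[1]

study = {
    "nodule_a": [True, True, True, True, True],     # seen by all 5 readers
    "nodule_b": [True, False, False, True, False],  # seen by 2 of 5
}
scores = visibility_scores(study)
```

Under these assumed thresholds, nodule_b (score 0.4) falls inside the typical band and its miss would be judged reasonable, while missing nodule_a (score 1.0) would not; the same scoring applies unchanged when the "readers" are automated medical imaging systems.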
While specific embodiments of the invention have been illustrated and described herein, it is realized that numerous modifications and changes will occur to those skilled in the art. It is therefore to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit and scope of the invention.
Yankelevitz, David F., Henschke, Claudia Ingrid, Reeves, Anthony P.