An optical system for the detection of skin disease, such as melanoma, acquires images of a lesion on a subject's skin at different wavelengths and utilizes a sweeping arm rotating about the lesion in a clock-like sweep to produce diagnostically relevant metrics and classifiers from the image data so as to enhance detection of the skin disease.
9. A method for detecting skin disease in a lesion on a subject's skin, comprising:
with a processor, obtaining image data from an image of a lesion on the subject's skin; forming a line segment between the center of the lesion image and the border of the lesion image; rotating the line segment around the center of the lesion with one end of the line segment fixed at the center of the lesion image; obtaining brightness of pixels on the line segment as the line segment rotates around the center of the lesion; evaluating pixel brightness in the image as a function of rotation angle to obtain metrics and/or one or more classifiers defining the rotational symmetry of the lesion; and displaying a likelihood of the presence or absence of skin disease in the subject based on the metrics and/or one or more classifiers.
1. An apparatus for detecting skin disease in a lesion on a subject's skin, comprising:
a mechanical fixture having a flat surface to position or press against the subject's skin to define a distal imaging plane containing said lesion;
a camera adapted to obtain a lesion image and image data from light reflected by said distal imaging plane;
a processor adapted to process the image data with a clock-like sweep algorithm, comprising forming a line segment between the center of the lesion image and the border of the lesion image, evaluating brightness of pixels on the line segment as the line segment rotates around the center of the lesion with one end of the line segment fixed at the center of the lesion image, and evaluating the pixel brightness in the image to obtain metrics and/or one or more classifiers defining the rotational symmetry of the lesion; and
an output device that indicates a likelihood of the presence or absence of skin disease in the subject from the metrics and/or one or more classifiers.
2. The apparatus according to
3. The apparatus according to
4. The apparatus according to
5. The apparatus according to
6. The apparatus according to
7. The apparatus according to
8. The apparatus according to
This application is a continuation of U.S. application Ser. No. 16/415,295, filed May 17, 2019, which is a continuation of U.S. application Ser. No. 14/907,208, filed Jan. 22, 2016, which is a United States national stage application of International Application No. PCT/US14/47636, having international filing date Jul. 22, 2014, which is a continuation-in-part of U.S. Ser. No. 14/051,053, filed Oct. 10, 2013 and claims the benefit of U.S. Provisional Application No. 61/857,143, filed Jul. 22, 2013, all of which are incorporated by reference.
A computer program listing submitted on CD-ROM was provided as a computer program listing appendix and is incorporated by reference. One disc is provided, with one file having file name Table 4. The listing describes and enables in detail the data processing steps described herein.
The invention is directed to systems and methods for optical detection of skin disease and in particular apparatus and methods adapted to detect the presence of melanoma and to distinguish, for example, malignant melanoma from non-malignant dysplastic nevi and/or common nevi, using metrics and classifiers obtained from rotational analysis of image data obtained from a subject's skin lesion. The data obtained may be processed by one or more computer processors, and the processed data, a diagnosis or an indicator of the presence or absence of skin disease may be output to and displayed by one or more display modules.
Melanoma, the most lethal skin cancer, incurs immense human and financial cost. Early detection is critical to prevent metastasis by removal of primary tumors. The early lateral growth phase is a vastly preferable detection window to the subsequent phase of metastatic initiation. Optical detection technologies for automated quantitative metrics of malignancy are needed to more accurately guide decisions regarding the need to biopsy and to make preoperative determination of adequate margins for surgical excision. After invasive biopsy or excision, diagnosis obtained by histopathologic evaluation is nearly 100% accurate; however deciding which lesions to biopsy is challenging. Only 3% to 25% of surgically-excised pigmented lesions are diagnosed as melanomas. Hence there is a need for noninvasive screening mechanisms that are both widespread and more accurate.
Dermoscopy is a common dermatological technique to evaluate skin lesions which may or may not be pigmented lesions. A dermatoscope typically consists of a light emitting diode (LED) illuminator, a low magnification microscope, and a clear window surface to flatten the skin against. The use of polarization enables partial rejection of deeply penetrating light, which can enhance superficial features of particular diagnostic interest. A digital imaging camera may also be attached to the dermatoscope.
U.S. Pat. Nos. 7,006,223, 7,027,153, 7,167,243, and 7,167,244 describe handheld dermoscopic epiluminescence devices.
Methods and apparatuses for evaluating optical image data obtained from a skin lesion on a subject's body are taught in U.S. Pat. Nos. 6,208,749 and 7,894,651, assigned to Mela Sciences, Inc.
U.S. Pat. No. 7,603,031 is directed to a multi-flash wide-field photography system adapted for medical or cosmetic facial photography. U.S. Pat. No. 8,218,862 describes feature detection for computer aided skin analysis using related wide-field photographic techniques. U.S. Pat. No. 8,498,460 describes wide-field imaging methods and apparatuses used to estimate the diffuse reflection component of an image of tissue, such as skin, which can then be further processed to obtain red and brown pigmentation images to indicate the distribution of hemoglobin and melanin in the skin.
One of the objects of the present invention is to employ algorithms that perform evaluations of image data obtained from reflecting light off of skin lesions with greater sensitivity, specificity and overall diagnostic accuracy, and which can be used to produce diagnostically relevant quantitative metrics in real time, in some cases without further evaluation of the lesion. (It will be understood that this application sometimes refers to the image of the lesion and the lesion itself interchangeably.)
Another object of the invention is to combine a dermatoscope, digital camera and automated screening by computer vision to bridge the diagnostic accuracy gap between invasive and noninvasive pathological analyses. Though the sophistication of the human brain may never be matched by computers, the present invention provides at least three benefits over traditional dermatologist screening: standardization, quantification and the enhanced ability to perform brute-force calculations. As outlined in the following description and claims, objective analytical diagnostic technologies have the potential to dramatically improve the diagnostic accuracy of widespread melanoma screening.
In particular, using rotational analysis of image data obtained from a skin lesion yields improved diagnostic accuracy compared to the prior art. The novel mathematical descriptors generated by the polar transformation of the image data may be trained on a set of skin lesions of known pathology to yield classifiers which provide a percent likelihood that a given lesion is malignant melanoma, paired with a percentage uncertainty for the prediction. The invention also provides enhanced opportunities to visualize the data obtained. In addition to a standard red-green-blue (RGB) image of the lesion, the present invention provides the user (doctor or patient) with a version of the image with suspicious regions highlighted, and the user may toggle between these display modes. The user may cycle through a set of gray scale images obtained at different wavelengths. The display may be toggled between x-y coordinates and a brightness map in polar coordinates (r, θ). In addition, rotational analysis may be performed using a clock sweep arm integrated with imaging at successively finer resolution, such as confocal microscopy and Raman spectroscopy.
Still another object of the invention is to use a wide field imaging system to image a large portion of skin and identify one or more skin lesions for further analysis, and then use a second imaging system with a narrower field of view to conduct such analysis.
In one aspect, the invention is an apparatus for detecting skin disease in a lesion on a subject's skin, comprising: a mechanical fixture having a flat surface to position or press against the subject's skin to define a distal imaging plane containing said lesion; a camera adapted to obtain image data from the lesion; a processor adapted to process the image data with a clock-like sweep algorithm to obtain metrics and/or one or more classifiers defining the rotational symmetry of the pigmented lesion; and an output device that indicates a likelihood of the presence or absence of skin disease in the subject obtained from the metrics and/or one or more classifiers. In this context, “metrics and/or one or more classifiers” means the likelihood may be obtained from metrics, from one or more classifiers or from a combination of metrics and one or more classifiers.
The clock-like sweep algorithm, for example, evaluates the brightness of pixels on a line segment between the center of the lesion image and the lesion image border as the line segment rotates around the center of the lesion with one end of the line segment fixed at the center of the lesion image. Rotational symmetry refers to different information obtained on the line segment at different angular positions. Such information may be directly related to the image, such as the image brightness, or may be information indirectly related to the image such as the average pigmented network branch length for the pigmented network branches encountered by a line segment. In the case of indirect information, pre-processing of the image is completed to define such information for each part of the image. Continuing the example, a circle with uniform brightness throughout exhibits perfect rotational symmetry. However, if the distance from the border of the lesion to the center of the lesion is different at different angular positions, or if the brightness of pixels differs at different positions on the line segment, or at different angular positions of the line segment, then the lesion is not rotationally symmetric, but asymmetric. This asymmetry may be quantified and used to produce diagnostically relevant metrics and/or one or more classifiers.
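The clock-like sweep described above can be sketched in code. In this minimal Python sketch (function and variable names are illustrative, not taken from the specification), pixel brightness is sampled along a line segment that rotates about the lesion center, and the variance of the angular brightness profile serves as one simple asymmetry metric; a circle of uniform brightness yields a variance of zero.

```python
import numpy as np

def clock_sweep_profile(image, center, radius, n_angles=360, n_samples=100):
    """Sample mean brightness along a rotating line segment (clock-like sweep).

    image: 2-D array of pixel brightness; center: (row, col) of the lesion
    center; radius: segment length in pixels. One end of the segment stays
    fixed at the center while the segment rotates through n_angles positions.
    Returns one mean brightness value per angular position.
    """
    r0, c0 = center
    t = np.linspace(0.0, 1.0, n_samples)  # fractional positions along the segment
    profile = np.empty(n_angles)
    for k, theta in enumerate(np.linspace(0, 2 * np.pi, n_angles, endpoint=False)):
        rows = np.clip((r0 + t * radius * np.sin(theta)).astype(int), 0, image.shape[0] - 1)
        cols = np.clip((c0 + t * radius * np.cos(theta)).astype(int), 0, image.shape[1] - 1)
        profile[k] = image[rows, cols].mean()  # brightness on the segment at this angle
    return profile

def asymmetry_metric(profile):
    """Variance across angles: zero for a perfectly rotationally symmetric lesion."""
    return float(np.var(profile))
```

The same sweep framework can carry indirect information as well (for example, average pigmented-network branch length per angle) by substituting a pre-processed feature map for the raw brightness image.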
In another aspect of the invention, the camera is adapted to obtain multispectral images. For example, the skin lesion is illuminated with an array of LEDs that emit light of different spectral profiles (including, importantly, one or more LEDs that emit light in the non-visible UV range, such as 300 nm to 400 nm). The camera acquires M images, storing each pixel in the image as a set of M numbers that form a spectral measurement, which are then fitted as the weighted sum of N chromophores.
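The fitting step can be illustrated with a short sketch. Assuming the absorption spectra of the N chromophores at the M illumination wavelengths are known and arranged as the columns of an M×N matrix (an assumption for illustration; the specification does not fix a solver), each pixel's M-number spectral measurement can be decomposed by least squares:

```python
import numpy as np

def fit_chromophores(spectral_pixel, absorption_matrix):
    """Fit one pixel's M-wavelength measurement as a weighted sum of N chromophores.

    spectral_pixel: length-M vector of reflectance/brightness values for one pixel.
    absorption_matrix: M x N matrix whose columns are the chromophore spectra
    sampled at the M illumination wavelengths.
    Returns the length-N vector of least-squares chromophore weights.
    """
    weights, *_ = np.linalg.lstsq(absorption_matrix, spectral_pixel, rcond=None)
    return weights
```

Applied pixel-by-pixel, this yields per-chromophore concentration maps (for example, melanin and hemoglobin) across the lesion image.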
In another aspect, the invention is a method for obtaining an indication of a likelihood of the presence or absence of skin disease in a lesion on a subject's skin, comprising the steps of illuminating the subject's skin including the lesion (preferably flattened); obtaining image data from the reflection of light off the illuminated subject's skin with a camera; and processing the image data with a computer processor adapted to implement a clock-like sweep algorithm to obtain diagnostically relevant metrics and/or one or more classifiers defining the rotational symmetry of the lesion on the subject's skin. In the method, at least one processor transforms the image data into diagnostically relevant metrics and/or one or more classifiers defining the rotational distribution of one or more properties selected from the group consisting of [a] spatial texture features; [b] brightness features; [c] features of the edge/border; [d] color variation; [e] variations in features of a pigmented network including the length, shape, brightness and organization of pigmented network segments in the network; and [f] oxygen saturation of tissue as defined by the amount and ratio of oxyhemoglobin and deoxyhemoglobin. This group of properties may also include [g] the heterogeneity of pigment species such as eumelanin, pheomelanin and other species of pigment.
In still another aspect, the invention is embodied as a system for detecting skin disease in a skin lesion on a subject's skin utilizing a commercially-widespread imaging device, such as a cellular phone having an integrated processor and camera. In this embodiment, the image data may be obtained using an illumination system selected from an external illumination system or a built-in flash. As used herein, “cellular phone” includes, for example, a smart phone which has image capture capability, and may also have a built-in flash (which may or may not be disabled during image acquisition), image processing capability, the ability to download data processing applications, and/or transmit images for remote processing. The cellular phone processor is adapted (for example, by using an application downloaded to the cellular phone) to process image data obtained using a sweeping arm positioned between the border of the lesion and the center of the lesion and rotated with a clock-like sweep around the lesion to obtain metrics and/or one or more classifiers defining the rotational symmetry of the lesion and generate a display depicting regions of interest in the subject's skin and/or an indication of a likelihood of the presence or absence of skin disease in the subject.
Processed data obtained with a cellular phone application in the form of a diagnostic indication or a representation of a region of interest on the subject's skin may be transmitted to a remote processor. For example, a patient may photograph his or her lesion and transmit the image to a doctor's office, database or other facility for further processing. As used herein, “a display depicting regions of interest in the subject's skin and/or an indication of a likelihood of the presence or absence of skin disease in the subject” may mean that a display i) provides only a processed image of the lesion with regions highlighted, or ii) only an indication that skin disease is more or less likely to be present (for example in the form of a number or text), or iii) the display may provide both a processed image and an indication that skin disease is more or less likely to be present (which display formats may toggle back and forth). Other forms of display are also within the scope of this invention.
In yet still another aspect, the invention is a method of diagnostically imaging of one or more skin lesions on a subject's skin, comprising the steps of: illuminating with a first illumination system a first area on a subject's skin; obtaining wide field image data from the illuminated skin using a camera having a wide field of view; processing the wide field image data to identify a target area within the first area which includes at least one skin lesion; illuminating the target area with a second illumination system, and obtaining narrow field image data from the illuminated target area with a camera having a narrow field of view; and processing the narrow field image data to obtain diagnostic information pertaining to the at least one skin lesion.
These and other aspects of the invention are shown and described below.
System and Apparatus
One embodiment of the present invention is directed to a system including a camera, a mechanical fixture for illuminating the subject's skin and positioning the camera fixedly against the subject's skin, at least one processor adapted to perform the clock sweep algorithm, and at least one output device.
The camera is preferably a digital camera and may include a charged coupled device (CCD) sensor or complementary metal oxide semiconductor (CMOS), as known in the art. The camera may be a commercially available portable camera with an integrated illumination system or flash and a sensor array detecting Red Green and Blue (RGB) light. Alternatively an external illumination system may be provided and the camera sensor array may be adapted to receive “hyperspectral” light, meaning light divided into more spectral wavelength bands than the conventional RGB bands, which may be in both the visible and non-visible range. Hyperspectral imaging is described in more detail below.
In the clinical embodiment depicted in
An illumination apparatus, such as LED mounting ring 15, includes LEDs positioned around the optical axis of the camera which may be located proximally of the distal imaging plane which frames the skin lesion, but still forward of the imaging apparatus. The illumination apparatus includes a set of devices that emit light of different spectral profiles to illuminate the skin lesion with light at desired wavelengths. In
The LED wavelengths are selected based on the methods used to extract relevant information from the image data to identify diagnostically relevant patterns in the lesion. For example, it is known in the art that blue light is absorbed by melanin (one of N chromophores in the skin). Thus, at least one of the LEDs in the array, and preferably a plurality, emit light in the violet-indigo-blue wavelength ranges, 400-500 nm. Blood absorbs in the green, so that at least one of the LEDs in the array, and preferably a plurality, emit light in the 500-600 nm wavelength range. Pigment at the deepest portion of a lesion, in a relatively deep lesion, has absorption shifted to the red, so that one or more LEDs emit light in the range of 600 nm to 750 nm, and even into the infrared (IR) (780 nm and above), which may be helpful to determine the deepest portion of a lesion to be excised, for example. Illumination in the non-visible ultraviolet (UV) range to obtain information about the skin lesion is another novel aspect of the invention. Thus at least one, and preferably a plurality of LEDs in the array, are adapted to illuminate the skin at a wavelength of 300 nm to 400 nm. At least one, and preferably a plurality of LEDs are adapted to illuminate the skin in accordance with the absorption profile of eumelanin as distinct from the absorption profile of pheomelanin. In this way, at each angular position of the sweeping arm, as the camera acquires M images at different wavelengths, each pixel in the image is stored as a set of M numbers that form a spectral measurement which may be fit as the weighted sum of N chromophores in the skin lesion.
In embodiments, particularly where off-the-shelf LEDs are used, the illumination system may comprise a set of LEDs having illumination spectra that overlap. In this case, correction may be made digitally, by providing a processor adapted to remove overlapping regions of the spectra, thereby improving spectral resolution. For example, a set of LEDs may have spectra L1, L2, L3 . . . Ln that overlap, and image data obtained at one illumination, I_Li, may be used to correct the illumination at I_(Li-1) by subtracting C*(I_Li) from I_(Li-1), where C is a constant related to the amount of overlap between the two spectra. Alternatively, during the fitting process wherein N chromophores are specified by fitting M reflectance values at the M wavelengths, the known absorption spectra of the chromophores can be integrated over each of the M LED emission spectra so that the absorption from each chromophore at each wavelength is uniquely specified.
The correction for overlapping spectra may be programmed in advance based on the specifications from the manufacturer of the LED. Alternatively, an apparatus according to the invention may be provided with an internal spectrometer to measure the emission spectra of the LED or other illumination device during the skin imaging or during calibration procedures, and that measurement may be used to implement the correction for overlapping spectra. A fiber optic element located distally of the illumination devices guides the actual emission spectra of each illumination device to the onboard spectrometer which provides the spectrum to the processor to perform the steps described above for resolving the overlapping spectra.
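The subtraction-based overlap correction described above can be sketched as follows. This is an illustrative Python model of the stated formula (the constant names and data layout are assumptions, not from the specification): each per-LED image has a scaled copy of the next LED's image subtracted, with the scaling constant C reflecting the degree of spectral overlap between the two LEDs.

```python
import numpy as np

def correct_overlap(images, overlap_constants):
    """Digitally sharpen spectral resolution for LEDs with overlapping spectra.

    images: list of per-LED images [I_L1, ..., I_Ln], one per illumination.
    overlap_constants: overlap_constants[i] is the constant C relating the
    overlap of LED spectrum i+1 into spectrum i (illustrative model).
    Returns the corrected image list; the last image is left unchanged.
    """
    corrected = [np.asarray(im, dtype=float) for im in images]
    for i in range(len(images) - 1):
        # Subtract the scaled contribution of the neighboring LED's spectrum.
        corrected[i] = corrected[i] - overlap_constants[i] * np.asarray(images[i + 1], dtype=float)
    return corrected
```

In practice the constants could be derived either from the manufacturer's published emission spectra or from spectra measured by the onboard spectrometer, as described above.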
Thus, an appropriate array of LEDs for an illumination system may be selected from commercially available LEDs by the person of ordinary skill in the art, taking care to match the input requirements and output levels for different LEDs, as well as differences between output wavelengths provided in the manufacturer's specifications and measured wavelength. Preferably 3 to 50, and more preferably 10 to 30 LEDs are included in an array.
Conventional dermoscopy, with imaging by the eye or by conventional digital cameras, illuminates the skin with white light and obtains three intensity images at the red, green and blue (RGB) wavelength ranges where the three cones in the human retina are sensitive. RGB imaging technology in a conventional digital camera was developed to mimic the three cones found in the retina. In addition, an RGB camera's sensor optically couples to the target skin through a slight magnifier (<10×), and produces images that have a bit depth of 8 bits. This means that only 2^8 (256) different brightness levels can be detected.
Hyperspectral dermoscopic images according to the present invention, however, are acquired by illuminating the skin with light emitting diodes (LEDs) at a plurality of different wavelengths sequentially for short duration (on the order of 100 ms). The resulting set of images yields more information about different features in the skin than RGB wavelengths, because the hyperspectral light interacts in a unique way with the complex biological structures found in the skin.
Additionally, more information may be obtained with the hyperspectral images, because the images are stored with increased "bit depth." Hyperspectral dermoscopy acquires images at preferably 4-50 wavelengths (an embodiment described herein uses 21 LEDs at distinct wavelengths) and each image has a bit depth of at least 12 bits. This means that at least 2^12 (4096) different brightness levels can be obtained—sixteen times greater than conventional cameras. This increased bit depth results in a greater ability to resolve brightness differences within dark regions such as pigmented lesions.
The augmented spectral content available by acquiring images at, for example, 21 wavelengths instead of 3, has two advantages: the device can “see” colors outside of the visible spectrum such as the UVA and near infrared (nIR) ranges, and also distinguish colors that are too similar for the eye or the conventional RGB imaging sensors to resolve. Thus, hyperspectral dermoscopy has both a wider spectral range of imaging and better spectral resolution, which may result in enhanced detection of melanoma.
An exemplary array covering the hyperspectral range was constructed from commercially available LEDs having the following wavelengths, as specified by the manufacturer(s) ("λ spec"). In addition, the measured wavelengths of the LEDs ("λ meas") were obtained with an onboard spectrometer, with the ability to feed the measured information to the processor. Although peak measured LED emission wavelength is provided in Table 1, the spectrometer is capable of measuring the entire spectral emission profile which may also be fed back to the processor to optimize operation and data collection.
TABLE 1

LED    λ(spec) nm    Resistance (Ohms)    I_meas (mA)    λ(meas) nm
 1        361              100                30             364
 2        375               22                25             374
 3        385               39                24             386
 4        400               22                24             396
 5        405               39                20             400
 6        440               33                24             434
 7        470              100                14             466
 8        490               56                24             488
 9        507               56                16             508
10        525              100                14             518
11        557               56                27             558
12        571               56                22             571
13        590               56                25             593
14        610               82                23             610
15        630               82                26             632
16        645               82                26             655
17        680                0                32             677
18        740               56                33             740
19        770               18                34             766
20        850               82               22.5            843
21        940               22                20             934
The camera used for hyperspectral imaging may be a gray-scale camera or an RGB camera where the three color channels are added together to form one gray-scale image. The radiometric calibration enables specification of the fractional reflectance at any particular location in the imaging plane. Such a calibration is performed by obtaining an image with a calibration standard (e.g., Labsphere Spectralon diffuse reflection standard calibration target) and using said image in combination with the skin image and the relevant exposure information.
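The radiometric calibration described above can be sketched in a few lines. This is a simplified Python model under stated assumptions (exposure-time normalization only; the names and the ~99% Spectralon reflectance value are illustrative, and a full calibration may also subtract dark frames): the skin image is ratioed against the calibration-standard image, each normalized by its exposure time.

```python
import numpy as np

def fractional_reflectance(skin_image, skin_exposure,
                           standard_image, standard_exposure,
                           standard_reflectance=0.99):
    """Estimate per-pixel fractional reflectance using a diffuse-reflection standard.

    skin_image / standard_image: 2-D brightness arrays from the same imaging plane.
    skin_exposure / standard_exposure: exposure times for each acquisition.
    standard_reflectance: known reflectance of the standard (e.g., ~0.99 for Spectralon).
    """
    skin = np.asarray(skin_image, dtype=float) / skin_exposure
    std = np.asarray(standard_image, dtype=float) / standard_exposure
    return standard_reflectance * skin / std
```

Because the standard's reflectance is known, the ratio converts raw camera counts into fractional reflectance at any location in the imaging plane, independent of exposure settings.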
As shown in
To calibrate the apparatus, as shown in
As depicted in
In the embodiment of
In one aspect of the invention, means are provided to adjust the focal length of the lens system at different wavelengths of illuminated light to adjust for the different focal length of the lens at different wavelengths. A lens generally has a different refractive index at different wavelengths of light. A motor may be provided in a fixture between the sensor array and the skin lesion to move the lens system according to the wavelength of illuminating light. Under illumination of a particular wavelength an image may be processed to obtain a metric that measures the focal degree. This metric may be maximized to optimize the focus at the particular wavelength, either in real time as the camera focuses or in post-processing to calculate the optimum position of the lens to obtain focus at the particular wavelength. This process may be repeated for each wavelength and the focal positions thereby determined may be stored to instruct the lens movement during skin imaging. In embodiments, the motor may receive programmed instructions to adjust the position of the lens according to the LED wavelength specified by the manufacturer of the LED. Alternatively, the motor may be programmed to position the lens system according to wavelengths of light measured at the lesion site with a spectrometer. The spectrometer may be a fiber optic element positioned near the site of the skin lesion.
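The focus-metric maximization described above can be sketched as follows. The specification does not fix a particular metric, so this Python sketch uses one common choice (variance of a Laplacian high-pass response, which grows with high-frequency content) and picks the lens position at which the metric is largest; all names are illustrative.

```python
import numpy as np

def focus_metric(image):
    """Degree-of-focus metric: variance of a simple 5-point Laplacian response.

    Sharper images contain stronger high-frequency content, so the metric
    is larger at the in-focus lens position.
    """
    im = np.asarray(image, dtype=float)
    lap = (-4 * im[1:-1, 1:-1] + im[:-2, 1:-1] + im[2:, 1:-1]
           + im[1:-1, :-2] + im[1:-1, 2:])
    return float(np.var(lap))

def best_lens_position(images_by_position):
    """Pick the lens position whose captured image maximizes the focus metric.

    images_by_position: dict mapping a lens position to the image captured there.
    """
    return max(images_by_position, key=lambda pos: focus_metric(images_by_position[pos]))
```

Repeating this per illumination wavelength yields the stored table of focal positions used to drive the lens motor during skin imaging.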
The processing functions may be shared between first and second processors. The first processor is typically an onboard processor such as circuit board 11 adapted to drive the camera and illumination system to acquire the image data and provide real time information display to the user. The first processor may transmit image data to a second processor adapted to perform data-intensive processing functions which cannot readily be provided as real time display. The second processor may deliver messages back to the first processor for display. The second processor, if present, is typically a remote processor. The second processor may create data files, image files, and the like, for later use.
In the embodiment of
As shown in
Provided sufficient image data are obtained at different wavelengths, diagnostically relevant areas of interest on the skin lesion may be identified and differentiated using a variety of display modules. Thus, colors or hyperspectral signatures correlating to blood vessels within the lesion border; colors correlating to blue and blue white structures in the lesion; colors correlating to pigmented networks which may be regular or irregular; colors correlating to negatively pigmented networks; patterns of oxygen saturation; and patterns of eumelanin and pheomelanin (which have different absorption profiles) all may be highlighted and separately displayed with the display modules described below.
The processor(s) is adapted to transform the image data into diagnostically relevant metrics and/or one or more classifiers indicating the likelihood that skin disease is present in a lesion by defining one or more properties selected from the group consisting of [a] spatial texture features; [b] brightness features; [c] features of the edge/border; [d] color variation of a lesion on the subject's skin; [e] variations in the features of the pigmented network including the length, shape, brightness and organization of the pigmented network segments; and [f] oxygen saturation of the tissue defined by the amount and ratio of oxyhemoglobin and deoxyhemoglobin. These characteristics may be displayed in one or more display modules to render a version of the lesion image depicting the lesion, or segments of the lesion, with one or more of these features of interest highlighted on a display for the user. In one display module, depicted in
In another display module schematically depicted in
The display module of
In identifying pigmented networks, especially to distinguish a pigmented network from a blood vessel structure, the variation in brightness across wavelengths is useful, because the blood vessel structure absorbs at different wavelengths than the pigmented structure.
The ratio of length to width is used to differentiate globular pigment patterns (where the ratio is closer to 1) from reticular patterns (where the ratio is much greater than 1). Variation in the ratio across the angular sweep is another metric correlated with melanoma.
A network includes branches connected by nodes and ends that are not connected to other branches. The ratio of the number of nodes to the number of ends produces a metric correlated with melanoma because a broken network (i.e., a lower node:end ratio) correlates to melanoma.
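The node:end ratio can be computed from a simple representation of the network. In this illustrative Python sketch (the branch representation is an assumption for illustration), each branch is a pair of endpoints; a point shared by two or more branches counts as a node, and a point belonging to only one branch counts as an end, so a more broken network yields a lower ratio.

```python
from collections import Counter

def node_end_ratio(branches):
    """Compute the node:end ratio of a pigmented network.

    branches: list of (endpoint_a, endpoint_b) pairs, where endpoints are
    hashable labels (e.g., pixel coordinates). A point appearing in two or
    more branches is a node; a point appearing in exactly one branch is an end.
    A lower ratio (more broken network) correlates with melanoma.
    """
    counts = Counter(pt for a, b in branches for pt in (a, b))
    nodes = sum(1 for c in counts.values() if c >= 2)
    ends = sum(1 for c in counts.values() if c == 1)
    # A fully connected loop has no ends; report infinity rather than divide by zero.
    return nodes / ends if ends else float("inf")
```

For example, a three-branch chain 1-2-3-4 has two nodes and two ends (ratio 1.0), while two disconnected branches have four ends and no nodes (ratio 0.0).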
In addition to LCD viewer 19, the apparatus may comprise additional display outputs, adapted to display the M black-and-white or color coded scale images taken at M wavelengths as views in sequence or in a selectable manner, which may be facilitated by a server application between a computer and the data acquisition device. Data analysis of the multispectral imaging described herein was performed in the Matlab environment. However, transferring these program elements to a different programming platform is within the skill of one having ordinary skill in the art and this transfer is contemplated for commercial applications.
The camera may also be controlled with a server application that facilitates the image acquisition process and which can be operated independently or controlled through any separate software system capable of file input and output and simulating keystrokes. The server application acts as a bridge between the data gathering process and the data analysis code, to power the LEDs that illuminate the sample, to send image acquisition triggers to the camera, and to receive image information from the camera for data analysis in an efficient manner. The server application works by waiting for keystrokes (real or simulated) using a Windows message loop, which it then interprets and uses to send different commands to the camera. Additional data transfer between the server application and third party programs is accomplished using standard file input/output (“I/O”) functions.
This server may be developed as a console application in C++ computer language, for example, with the ability to be re-implemented as a Windows application, to handle image acquisition and changing resolution, exposure time and gain settings, with the ability to add additional functionality as necessary. By enabling the server to be controlled by keystrokes, it can be used on its own to acquire images from the camera or in conjunction with third party applications that can simulate keystrokes. Total acquisition time for imaging 21 different wavelengths of light can be reduced to about 30 seconds or less (as opposed to around 60 seconds using software provided by the camera manufacturer). This server also enables a live feed display, enabling the user to position the assembly 100 around a suspicious lesion, for example, with a frame rate of at least 5 frames/second. Additional features may be included in the script to prevent accidental keyboard input from interfering with the server application while it is being controlled by a third-party application.
The functionality of the server application is enhanced by code that controls the lighting process, allowing for images to be taken at different wavelengths of light and with exposure times individually suited to each wavelength and as necessary, adjusted on the fly to prevent under-exposure or saturation, as well as code that enables the images to be displayed as a live feed either on a monitor or on a small screen attached to the imaging device.
The flow chart of
A refinement of the flow of commands and data acquisition is shown in
In the embodiment depicted in
According to the embodiment of
In the embodiment of
Where the camera's built in illumination system is used to illuminate the lesion, a mirror may be used to direct light from the source to the surface of the lesion at an oblique angle, so as to avoid glare caused by reflection from the camera lens window. As shown in
As with the clinical apparatus, external server applications may be adapted to drive the camera provided with the cellular phone and external illumination systems. The cellular phone or smart phone generally has a screen which serves as the output device which provides the user with an indication that a skin lesion is melanoma. The output may take the form of a percentage likelihood that a skin lesion is melanoma, together with a percentage uncertainty, or the program may provide the user with a qualitative message, such as “suspicious lesion: see your dermatologist.”
In another embodiment, the invention combines wide and narrow field of view imaging systems for effectively delivering the technology to the end user, i.e., patients, doctors and the public. This combination may include a first illumination system for illuminating a first area on a subject's skin; a camera having a wide field of view for obtaining wide field image data from the illuminated skin; a processor for processing the wide field image data to obtain a target area within the first area which includes at least one skin lesion; a second illumination system for illuminating the target area; a camera having a narrow field of view for obtaining narrow field image data from the illuminated target area; and a processor for processing the narrow field image data to obtain diagnostic information pertaining to the at least one skin lesion. The wide field image data can be processed with rotational analysis using the clock-sweep algorithm described above, or other techniques may be employed to identify a target area containing a lesion on the subject's skin. Narrow field image data may then be obtained from the target area with a camera having a second field of view narrower than the field of view of the first camera, using a second illumination system.
The wide field of view is intended to image a relatively large portion of a subject's skin, potentially containing a plurality of skin lesions ("target areas" or "areas of interest") for further evaluation. Areas of interest, such as a skin lesion, are identified in this wide field area and then isolated, for example, by adapting techniques and apparatus for facial photography described in U.S. Pat. Nos. 7,603,031, 8,218,862, and 8,498,460, referenced above. Alternatively, wide field image data may be obtained with a cellular phone or smart phone. In still another embodiment, a wearable computer, capable of projecting images to the wearer with interactive processing capability, is well suited to obtain the initial wide field image data according to this aspect of the invention. In preferred embodiments of the invention, the wide field image data is processed to obtain a statistical evaluation of the size and irregularity of lesions in the first area.
In this aspect of the invention, narrow field image data may be RGB image data obtained with a conventional smart phone camera, or more preferably, hyperspectral image data obtained and processed using the apparatus, methods and systems described above. That is, after a lesion is identified, a camera adapted with an illumination and sensor array for hyperspectral imaging processes the image data with a clock sweep algorithm to obtain diagnostically relevant metrics and/or one or more classifiers defining the rotational symmetry on a per lesion basis from the rotational distribution of properties selected from the group consisting of: [a] spatial texture features; [b] brightness features or [c] features of the lesion image edge/border, including the sharpness with which the lesion borders normal skin; [d] color variation of a lesion on the subject's skin; [e] variations in features of a pigmented network including the length, shape, brightness and organization of pigmented network segments; and [f] oxygen saturation of tissue as defined by the amount and ratio of oxyhemoglobin and deoxyhemoglobin. This group of properties may also include [g] the heterogeneity of pigment species such as eumelanin, pheomelanin and other species of pigment.
Thus, successively more sensitive and selective diagnostic indications are obtained, first on the meter scale, with a wide field image data acquisition system, and thereafter on the centimeter scale with narrow field image data. When a target area is identified in the wide field image data, the narrow field image data processor is able to locate a center and border of the lesion and determine that the lesion is in fact the target area.
Successively finer resolution imaging systems may be used to provide increased diagnostic sensitivity and selectivity. For example, after a lesion is evaluated with the narrow field image data processing and an indication of the likelihood of the presence or absence of skin disease is obtained, the clock sweep algorithm may be applied to more finely resolved image data, for example, image data obtained with a confocal microscope. The identified lesion, or a target area within a lesion, may be evaluated with a still finer resolution image acquisition system, such as a Raman spectroscope.
Methods, Metrics and Classifiers
The methods according to the invention may be described as a series of conceptual “steps.” As would be apparent to the person of ordinary skill in the art, the steps may be followed sequentially, or in an order different from the order stated; the steps may be done in parallel, done at the same time, or done iteratively, without departing from the scope of the invention. Describing a step as the “first step” or “next step” is for convenience only. The image data obtained from a subject's skin lesion may be manipulated by computer according to these steps and output to display modules.
The first step of the method consists of obtaining image data from a subject's skin with a camera. Generally, this means photographing a lesion on the skin. The resulting image data will comprise data from the lesion and the surrounding skin, and may include data which are not part of the lesion or surrounding skin, including hair, markings made by a dermatologist or other data elements that are not analyzed and simply need to be removed from the image. To complete this step, the processor may replace pixel brightness and color values of the hair-containing locations with pixel brightness and color values of the skin underlying or immediately adjacent the hair, for example.
The image data consists of pixel gray-scale or brightness information in M different color layers. As used herein, a "multispectral image" is an image obtained at a plurality of wavelengths or "layers," so that each pixel in the image is associated with M numbers that form a spectral measurement, each being a brightness or gray scale measurement at a different color layer. Thus, the image data consists of M images sequentially acquired by the camera while illuminating the skin at wavelengths that range from 300 nm to 950 nm. The spectral measurement is fit as the weighted sum of N chromophores, corresponding to the number M of images obtained. Typically, pixel brightness information is obtained at least in the red-green-blue ("RGB") layers, but pixel brightness information is also preferably obtained for other spectral bands. Relevant information is obtained by illuminating the skin and detecting reflected light in the visible and non-visible range, including the blue and UV range at 300 nm to 500 nm, and even more particularly in the non-visible 300 nm to 400 nm UV range.
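The fit of each pixel's spectral measurement as a weighted sum of chromophores can be sketched as a per-pixel least-squares problem. The following is a minimal numpy illustration, not the patented implementation; the chromophore basis matrix is an assumed input that would in practice hold the measured absorption spectra of melanin, oxyhemoglobin, deoxyhemoglobin, and so on.

```python
import numpy as np

def unmix_chromophores(spectra, basis):
    """Fit M-wavelength spectra as weighted sums of N chromophores.

    spectra: (P, M) array, one M-number spectral measurement per pixel
    basis:   (M, N) array, one chromophore absorption spectrum per column
    Returns: (P, N) array of chromophore weights, one row per pixel.
    """
    # Least-squares solution of basis @ w = spectrum for every pixel at once.
    weights, *_ = np.linalg.lstsq(basis, spectra.T, rcond=None)
    return weights.T
```

A pixel's fitted weights then serve as per-chromophore concentration estimates for the downstream metrics.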
As used herein, “chromophores” refers to color components found in a skin lesion, such as melanin (including eu-melanin distinct from pheo-melanin), oxygenated hemoglobin and deoxygenated hemoglobin. Generally, at least these four have distinct absorption profiles such that the spectral images can be analytically fit as the weighted sum of at least these four chromophores. However, skin contains water, which absorbs in the infrared, bilirubin, which has a distinct absorption in the visible spectrum, and potentially could be found to contain other diagnostically relevant components, such that a measurement could be fit as a weighted sum of N chromophores, wherein N is 4, 5, 6, or more chromophores.
Once the image data is obtained, the border, shape and center of the lesion are identified. The first step in determining the shape is known as "segmenting," and various computer implemented techniques known in the art may be used to identify the shape and border of a lesion. Briefly, segmenting results in a mask being applied so that pixel brightness at a given wavelength is reduced to a mask image, in which pixels have a brightness value of 1 inside the lesion and 0 outside the lesion. A "mask" as used herein is an image having a brightness value of 1 inside the image border and 0 outside the image border.
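As a minimal sketch of the segmenting step, a global brightness threshold reduces a gray-scale layer to such a mask; this stands in for the more elaborate segmentation techniques known in the art, and the mean-brightness threshold is an assumption for illustration only.

```python
import numpy as np

def segment_lesion(gray):
    """Reduce a gray-scale image to a mask: 1 inside the lesion, 0 outside.

    A lesion is darker than the surrounding skin, so pixels below a crude
    global threshold (here, the mean brightness) are labeled as lesion.
    """
    return (gray < gray.mean()).astype(np.uint8)
```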
In a subsequent step, the center of the lesion (or close approximation of the center) is determined. The center of the lesion may be calculated as the center of mass or geometric centroid of the mask image, such that each region of the lesion shape is treated as having identical density. Alternatively, the center of mass may take into account the variation of brightness in the shape. Unless stated otherwise, in the following examples, the center of mass is obtained from a mask image, such that the lesion is treated as having uniform brightness to determine the center of mass. As the image will have a different mask and therefore a different border at each wavelength, the image at each wavelength may be associated with a respective center, and the distance between the “centers” (“Δr”) may be used with other metrics. The variance (“var Δr”), range (“range Δr”) and mean (“mean Δr”) may also be combined into classifiers.
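The uniform-density center of mass of a mask image, and the distance Δr between the centers found at two wavelengths, can be sketched as:

```python
import numpy as np

def mask_centroid(mask):
    """Geometric centroid of a mask image (uniform-density center of mass)."""
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()

def center_shift(mask_a, mask_b):
    """Delta-r: distance between the lesion centers at two wavelengths."""
    (ya, xa), (yb, xb) = mask_centroid(mask_a), mask_centroid(mask_b)
    return np.hypot(ya - yb, xa - xb)
```

The variance, range and mean of Δr over all wavelength pairs follow directly from collecting these distances.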
A sweeping arm is a line segment connecting the center of the lesion to the border. The “clock-like” sweep as used herein, means rotating the sweeping arm about the fixed center of the image in either a clockwise or counter-clockwise direction to obtain information about the pixels on the sweeping arm as a function of rotation angle. To obtain metrics from the image data, the sweeping arm rotates around the center with one end fixed at the center for 2 pi (2π) radians or 360° (one complete sweep). Data is sampled at regular intervals of radians or degrees.
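The clock-like sweep can be sketched as sampling pixel brightness along the arm at regular angular intervals; this illustration uses nearest-neighbour sampling and a fixed arm length, both simplifying assumptions.

```python
import numpy as np

def clock_sweep(image, center, radius, n_angles=360, n_samples=100):
    """Sample brightness along a sweeping arm rotated through 2*pi radians.

    Returns an (n_angles, n_samples) array: row i holds the pixel
    brightness on the arm at angular position i.
    """
    cy, cx = center
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    radii = np.linspace(0.0, radius, n_samples)
    sweep = np.empty((n_angles, n_samples))
    for i, theta in enumerate(angles):
        # Nearest-neighbour sampling of points on the arm, clipped to the image.
        ys = np.clip(np.round(cy + radii * np.sin(theta)).astype(int), 0, image.shape[0] - 1)
        xs = np.clip(np.round(cx + radii * np.cos(theta)).astype(int), 0, image.shape[1] - 1)
        sweep[i] = image[ys, xs]
    return sweep
```

The resulting angle-by-radius array is the raw material from which the metrics below are computed.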
As used herein, “metrics” are values calculated from the image data which bear a correlation to disease states (melanoma in the preferred examples). Examples of metrics are listed in Table 2.
TABLE 2
V1   Angular brightness range
V2   Mean standard deviation (S.D.) of brightness
V3   Range in S.D. of brightness
V4   Standard deviation (S.D.) of S.D. in radial brightness over all angles
V5   Mean absolute brightness shift between successive angular positions
V6   S.D. of absolute brightness shifts
V7   Sum of the brightness shifts over full sweep
V8   Maximum border asymmetry
V9   Border asymmetry evaluated at 90° with respect to the minimum asymmetry axis
V10  Lesion border length/lesion area
V11  Mean lesion demarcation (edge slope)
V12  S.D. of lesion demarcation
V13  Fractal dimension
V14  Lesion brightness variation over all lesion
V15  Mean demarcation (edge slope) fit error
V16  S.D. demarcation (edge slope) fit error
V17  Lesion brightness variation over all lesion
V18  Mean length/area of pigment segments
V19  S.D. length/area of pigment segments
Metrics V1 through V7 and V14 capture measurements and statistical information relating to the variation in brightness of pixels on the sweeping arm in relation to other pixels on the sweeping arm, and over different angular positions of the sweeping arm. Metrics V8 through V13 and V15 through V19 capture measurements and statistical information relating to the edge characteristics and presence of reticulated structures in the lesion.
The metrics enable quantitative analysis of parameters familiar from conventional dermatological examination, such as the ABCD technique of lesion screening, which evaluates the asymmetry (A) of a lesion, and lesion border (B), color (C) and dermoscopic structures (D). But the systems and methods of the invention also provide a wealth of information that cannot be obtained from conventional screening, ultimately yielding a percent likelihood that a lesion is melanoma or nevus, which conventional screening could never do. According to the invention, the factors relevant to conventional dermatology are synthesized in a series of metrics, which are then combined in one or more classifiers that may be trained on a set of lesions of known pathology to yield a system of diagnosis of skin disease.
One metric that may be obtained from the angularly sampled data is the angular brightness range (V1), defined as the maximum value of mean brightness on the sweeping arm minus the minimum value of mean brightness over the full rotational sweep. Thus, the mean brightness of the pixels on the sweeping arm is calculated at each angular sample position of the sweeping arm, and the minimum value calculated is subtracted from the maximum value to obtain (V1). The angular brightness range (V1) will vary more if the lesion has overall non-uniformity in pigment.
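The computation of (V1) from the angle-by-radius sweep data reduces to a few lines; this sketch assumes the sweep array produced by a clock-sweep sampler with one row per angular position.

```python
import numpy as np

def angular_brightness_range(sweep):
    """V1: max minus min of the arm's mean brightness over the full sweep.

    sweep: (n_angles, n_samples) array of pixel brightness on the
    sweeping arm at each angular sample position.
    """
    mean_per_angle = sweep.mean(axis=1)  # mean brightness on the arm per angle
    return mean_per_angle.max() - mean_per_angle.min()
```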
The right hand side of
Another metric that may be obtained is the range in standard deviation of brightness (V3). A standard deviation is obtained from all the values of brightness on the sweeping arm at each angular position, and the range of these values over all angular positions is calculated to obtain (V3); in the corresponding plot, the individual standard deviations appear as vertical black lines. The mean standard deviation of the radial brightness is evaluated over all angular positions. This variable (V3) measures the variation of brightness along the radial clock arm that sweeps the lesion. Though this variable (V3) will be higher for a heterogeneous pigment distribution, it does not distinguish between globular and reticular pigmented patterns.
Another metric is the standard deviation over all angles of the standard deviations at each angular position (V4). This variable describes to what degree the heterogeneity of pigment distribution itself is heterogeneous. This variable (V4) would be high, for example, if there were some angles at which the lesion contained an even pigment distribution and other angles that contained a reticular or globular pattern of bright/dark areas.
Other metrics evaluate the brightness shift (absolute value) at successive angular positions (V5), the standard deviation of the absolute value of the brightness shift over all angular positions (V6), and the sum of the brightness shift (absolute value) over all angular positions (V7). The mean instantaneous brightness shift at successive angular positions (V5) is the average derivative of remittance of the angular brightness over all possible angles. Because the average derivative of remittance adds up the instantaneous changes in brightness over all possible angles, the variable (V5) is similar to variable (V2). The standard deviation of the absolute value of the brightness shift over all angular positions (V6) is the derivative variance. The variance of the derivative of remittance describes how much variability exists in the instantaneous change in brightness over the angular sweep. If some angular ranges are flat (i.e., low intra-range brightness derivative) and some ranges vary wildly, the variable (V6) will have a high value. The sum of the brightness shift over all angular positions (V7) is the total variation. For a uniformly colored lesion, the variable (V7) is zero.
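Given the per-angle mean brightness from the sweep, (V5) through (V7) can be sketched as statistics of the wrapped successive differences; the wrap-around comparison of the last angular position with the first is an assumption of this illustration.

```python
import numpy as np

def brightness_shift_metrics(sweep):
    """V5-V7: absolute brightness shifts between successive angular positions."""
    mean_per_angle = sweep.mean(axis=1)
    # Wrap around so the last angular position is compared with the first.
    shifts = np.abs(np.diff(mean_per_angle, append=mean_per_angle[0]))
    return shifts.mean(), shifts.std(), shifts.sum()  # V5, V6, V7
```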
The person of ordinary skill in the art of computer-implemented diagnostic analysis of dermoscopic images will recognize that the angularly sampled spectral image data lend themselves to mathematical combination and statistical manipulation once the data is obtained, so that the foregoing list of metrics having correlation to disease states is not exhaustive.
The maximum border asymmetry (V8) is another metric, along with the border asymmetry perpendicular to the axis of most symmetry (V9). The border asymmetry is calculated by flipping the silhouette of the lesion and dividing the mismatched area by the total area. An irregularly shaped border will result in a high value for this variable (V8). In embodiments, border asymmetry was obtained by converting the lesion segment in the blue channel to a binary mask and flipping the binary lesion about a bisecting axis, thereafter rotating the axis in 10 degree increments from zero to 180 degrees to obtain 18 samples of asymmetry as a function of analysis axis. The subtraction of the original mask from its flipped counterpart yielded a map where overlapping regions had zero values (1−1=0), regions not occupied by either the original or flipped mask had zero values (0−0=0), and regions of mismatch had an absolute value of 1 (1−0=1 or 0−1=−1). The absolute value for a perfect circle would be zero everywhere and the sum would be zero, indicating perfect symmetry. Real lesions had mismatched areas, which led to non-zero values in the subtraction map, which when summed and divided by the sum of just the original mask, equaled the fractional area of mismatch, and represented the asymmetry of the border of the lesion. The angle at which the minimum asymmetry factor occurred was designated as the axis of most symmetry. Then, the asymmetry of the lesion was evaluated at 90 degrees with respect to the symmetry axis. The individual asymmetry images are depicted in
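The mismatch map for a single flip axis can be sketched as below; the full method repeats this after rotating the analysis axis in 10 degree increments and takes the maximum (V8) and the value 90 degrees from the most symmetric axis (V9).

```python
import numpy as np

def flip_asymmetry(mask, axis=1):
    """Fractional area of mismatch between a binary mask and its mirror.

    Overlapping regions give 1-1=0, empty regions 0-0=0, and mismatched
    regions |+-1|=1; the summed mismatch divided by the lesion area is
    the asymmetry about the chosen flip axis.
    """
    diff = mask.astype(int) - np.flip(mask, axis=axis)
    return np.abs(diff).sum() / mask.sum()
```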
Some of the metrics obtained from scanning and analysis of the pixel brightness information are obtained for a given wavelength. Other metrics require a combination and/or comparison of image data obtained at different wavelengths. Regions of interest in a lesion may be associated with different colors, including blood vessels (red) within the lesion border, blue or blue-white skin structures, and pigmented networks associated with eumelanin (brown) or pheomelanin (red).
The border roughness metric (V10) is the length of the border of the lesion segment squared divided by the area of the lesion. For a circle, this would be the circumference squared divided by the area. The border roughness (V10) describes how much the radius of the lesion varies during the clock sweep scan of the lesion. For a circle, the variable will be minimized but for a lesion that has many fingers protruding into the normal skin, this variable (V10) will be high.
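Given a measured border length and lesion area, (V10) is a one-line ratio; for a circle of radius r it evaluates to (2πr)²/(πr²) = 4π, its minimum.

```python
import numpy as np

def border_roughness(border_length, lesion_area):
    """V10: lesion border length squared divided by lesion area.

    A circle minimizes this ratio at 4*pi; a lesion with many fingers
    protruding into the normal skin scores much higher.
    """
    return border_length ** 2 / lesion_area
```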
Initially, the clock sweep may be used to enhance the determination of the border. An edge fit algorithm runs during the clock sweep and utilizes the variation in pixel brightness at the edge of the lesion shape to iteratively determine a more accurate edge.
The "edge slope" metric (V11) is the mean gradient in brightness at the border during the transition from dark (inside the lesion) to light (outside the lesion) over the full sweeping arm rotational range. Also characterized as edge sharpness, the edge slope (V11) quantifies lesion demarcation. If the lesion has an abrupt border, as in melanoma, this variable (V11) will have a high value. The standard deviation of edge slope over all angular positions produces the standard deviation of lesion demarcation (V12). For the standard deviation of the lesion demarcation (V12), the variation in the edge sharpness will be high if the lesion border in some locations is sharply demarked and in other locations is a more gradual transition. An edge fit algorithm may be used to produce a function defining the border of the lesion from the edge slope, which also produces an edge slope fit error (V15). An edge slope fit error for the standard deviation of lesion demarcation (V16) may be similarly obtained. The fractal dimension (V13), which can be a Hausdorff fractal dimension of the lesion silhouette at a particular wavelength, is another measure of border irregularity which may be calculated according to known methods. The length-to-area ratio of pigment segments (V18) and the standard deviation of this ratio (V19) are also metrics which bear correlation to melanoma.
Additional metrics include variables (V20) through (V30). The ratio of the mean diameter of the lesion segment to the maximum correlation distance (V20) describes the size of the pigmented network features relative to the size of the lesion. For a lesion with small pigmented features such as a well-defined reticular pattern, this variable (V20) will be high while for a lesion that has large areas of dark pigment such as globules, this variable (V20) will be low.
The eccentricity factor of the cross correlation matrix of the lesion segment image (V21) is the ratio of correlation in X to correlation in Y. This variable (V21) quantifies asymmetry in the cross correlation of the lesion. If a lesion has long pigmented ridges, for example, the correlation length along the direction of the fingers will be high while the correlation length perpendicular to the fingers will be small. Such a lesion will have a high value for this variable (V21).
The standard deviation of the lengths of the pigmented network branches in the entire lesion (V22) is another metric. For the branch analysis, which skeletonizes the pigmented network, this variable (V22) quantifies the variability in branch lengths. If the lesion has some areas with many small branches but other areas with long branches, this variable (V22) will be high.
The standard deviation of the brightness of the pigmented network branches in the entire lesion (V23) is another metric. If the branches have variable intensities (i.e. some branches are dark and some are light), this variable (V23) will be high.
The mean value of the standard deviation in branch brightness over the mean branch brightness, over all the branches (V24) describes the intra-branch variability in intensity.
The average eccentricity of the original dark network segments (long axis diameter divided by short axis diameter) (V25) describes how elliptical the original (un-skeletonized) pigmented regions are. If the pigmented regions are natural branches such as in a reticular pattern, this variable (V25) will be high. If the pattern is globular and the pigmented regions are more round, this variable (V25) will be low.
The standard deviation of the eccentricity of the original dark network segments (long axis diameter divided by short axis diameter) (V26) quantifies the variation in elliptical factors of the pigmented regions. If a lesion has a globular component as well as a reticular component, this variable (V26) will be high.
The connectedness of the pigmented network (V27) is the number of branch points divided by the number of end points. The connectedness of the pigmented network will be higher for a globular pattern than a reticular pattern because globules do not connect. This variable (V27) will also be higher for a reticular pattern if the branches are broken.
The range of the average branch length in an incremental angular zone evaluated over all possible angles (V28) evaluates how the branch lengths change in different directions. If a reticular lesion has an irregular pigmented network where the branches in some regions are longer than in others, this variable (V28) will be high.
The range of the average branch brightness in an incremental angular zone evaluated over all possible angles (V29) quantifies the brightness of the branches in the same angular way that the previous variable (V28) quantifies branch lengths.
The range of the average number of branches in an incremental angular zone evaluated over all possible angles (V30) is another metric.
Correlations of metrics to disease states may be obtained from a sample of lesions obtained from human subjects, containing known melanoma and nevi, and applying two-sided unpaired t-tests. Table 3 below tabulates P-values for preferred metrics in two-sided unpaired t-tests applied using the methods of the invention to a sample including melanoma and non-cancerous nevi (n=115 samples). In Table 3, a lower P-value represents a higher correlation of the metric with the correct prediction that a given lesion is melanoma. The discriminatory power of the metrics improves as the wavelength of illuminating light is shifted toward shorter wavelengths, particularly into the blue and ultraviolet. This is shown in Table 3, wherein metrics M1 through M16 correspond to (V1) through (V16) described above, and M17 through M27 correspond to (V20) through (V30) described above. Table 3 shows the P-values—the statistical correlation between a particular metric prediction and the occurrence of melanoma in a lesion—repeated for each wavelength of red, green and blue illuminating light. The P-values trend lower as the wavelength of illuminating light moves toward the blue.
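The two-sided unpaired t-test reduces, for each metric, to the pooled-variance t statistic below (a numpy-only sketch); the P-values tabulated in Table 3 follow by evaluating the t distribution with len(a) + len(b) − 2 degrees of freedom at |t|.

```python
import numpy as np

def unpaired_t(a, b):
    """Two-sided unpaired (pooled-variance) t statistic for one metric.

    a: metric values for known melanomas; b: values for known nevi.
    """
    na, nb = len(a), len(b)
    pooled = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled * (1 / na + 1 / nb))
```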
TABLE 3
Metric   Red Channel    Green Channel  Blue Channel
M1       0.038512853    0.005974978    0.005413393
M2       0.064100668    0.004186356    0.000931948
M3       0.051076855    0.049752151    0.004417105
M4       0.015508981    0.004775704    0.000322272
M5       0.053177386    0.000288015    3.11E−05
M6       0.083413521    0.0017528      0.000203987
M7       0.053177386    0.000288015    3.11E−05
M8       0.06168296     0.355771648    0.373633602
M9       0.18969333     0.941812711    0.51577414
M10      0.764701562    0.118919328    0.071004505
M11      0.223854987    0.017938675    0.001834162
M12      0.595301519    0.341014351    0.566527499
M13      0.000128953    0.014482528    0.023037402
M14      0.019109506    0.050021307    0.041666677
M15      0.013434262    0.005961503    0.000900939
M16      0.042338391    0.068554129    0.046165566
M17      1.67E−05       0.005296628    0.00494726
M18      0.707233508    0.794075037    0.825754151
M19      0.013854117    0.770162679    0.99699408
M20      0.13132109     0.018472359    0.004819414
M21      0.464474471    0.192611265    0.167729501
M22      0.050291628    0.032035539    0.047297197
M23      0.066784433    0.041333049    0.052544662
M24      0.105241821    0.404152353    0.474939953
M25      0.166005642    0.044997689    0.200169654
M26      0.021380908    0.339045255    0.857779693
M27      7.43E−05       0.000717461    0.027130568
As used herein, "classifiers" are combinations of metrics in functions built using multivariate methods to increase the predictive ability of the method to distinguish melanoma from nevi. Classifiers may be obtained and optimized according to known techniques by maximizing the performance of a set of classifiers in receiver operating characteristic ("ROC") curve maximization. An example of ROC maximization for classifiers distinguishing melanoma from nevi is reproduced in
The output of a classifier is a percent likelihood that a lesion is melanoma, which may be coupled with a percent error or uncertainty for the classifier. This can be output for the user in any desired format. A dermatologist may want to see the underlying statistical information displayed as numbers and graphs, either on the device LCD screen, or on the screen of the computer communicating with the device. The ordinary patient may prefer an intuitive system of identifying potentially dangerous lesions, where the lesions most likely to be melanomas are identified with a red light and the least dangerous with a green light.
In order to develop the classifiers, a sample of nevi of known pathology was obtained and classifiers were developed using a "training" subset of the sample, using both linear techniques (such as regressions and linear discriminant analysis) and nonlinear techniques (such as neural networks and decision tree algorithms). The following linear classifier is an example of a classifier developed from a training set having some predictive ability to discriminate between nevi and melanoma:
L=0.16*range−0.87*edge+0.68
where range and edge are metrics defined above and L represents a classifier that may be compared to a threshold to yield a classification of melanoma or nevus. More robust classifiers can be created by incorporating more of the metrics, such as the classifier in the accompanying computer code that uses all the metrics. "Training" was possible because the pathology of the lesions was known from prior pathologist screening, and the metrics and constants may be selected to maximize the area under the ROC curve. Subsequently, the "trained" classifiers are applied to lesions having unknown pathology. (In the experimental setting this means that the investigator was blind to the pathology of the lesions and did not adjust the classifiers; in the real world setting the device will typically be applied only to lesions having unknown pathology, and the thresholds and classifiers will be pre-programmed.) As would be apparent to one of ordinary skill in the art, a larger training sample and slight variation of the metrics will likely yield improved classifiers, without departing from the scope of the invention. Once obtained, the classifiers are applied to the image data of lesions whose pathology is unknown. According to the invention, a sensitivity/specificity of 86%/91% was obtained, with an overall diagnostic accuracy of 89%. This result is expected to improve with routine optimization; at 99% sensitivity, the specificity was as high as 56% in some test sets, showing significant improvement over the prior art.
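Applying the example classifier is a direct evaluation of L against a cutoff; the 0.5 threshold below is a hypothetical value for illustration, since in practice the threshold would be chosen on the training set by ROC maximization.

```python
def linear_classifier(range_metric, edge_metric, threshold=0.5):
    """Evaluate the example classifier L = 0.16*range - 0.87*edge + 0.68.

    Returns the classifier value L and a boolean melanoma call obtained
    by comparing L to a (hypothetical) decision threshold.
    """
    L = 0.16 * range_metric - 0.87 * edge_metric + 0.68
    return L, L > threshold
```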
Using the camera in the embodiment of
The foregoing description of the preferred embodiments is for illustration and is not to be deemed as limiting the invention defined by the following claims. The primary application of the invention is to detect melanoma in humans and to distinguish cancerous from non-cancerous lesions. However, in principle, the apparatus and methods have broad application in the detection and display of other skin diseases and diseases in other human tissues. Moreover, using the clock sweep method of analyzing multispectral image data according to the invention lends itself to the development of improved metrics and more discriminating classifiers for the detection of melanoma, without departing from the scope of the invention. The foregoing descriptions of a clinical apparatus and cellular phone apparatus enable the person of ordinary skill to practice variants thereof without departing from the scope of the invention.
Patent | Priority | Assignee | Title |
10182757 | Jul 22 2013 | The Rockefeller University | System and method for optical detection of skin disease
10307098 | Jul 22 2013 | The Rockefeller University | System and method for optical detection of skin disease
5016173 | Apr 13 1989 | VANGUARD IMAGING LTD | Apparatus and method for monitoring visually accessible surfaces of the body
5706821 | Dec 30 1994 | HAMAMATSU PHOTONICS K K | Spectroscopic method for quantitatively determining the change of concentration of a light or other radiation absorbing compound in a medium which is interrogated through an intervening medium
5836872 | Apr 13 1989 | VANGUARD IMAGING LTD | Digital optical visualization, enhancement, quantification, and classification of surface and subsurface features of body surfaces
5944598 | Aug 23 1996 | HER MAJESTY THE QUEEN IN RIGHT OF CANADA AS REPRESENTED BY THE DEPARTMENT OF AGRICULTURE AND AGRI-FOOD CANADA | Method and apparatus for using image analysis to determine meat and carcass characteristics
6208749 | Feb 28 1997 | MELA SCIENCES, INC | Systems and methods for the multispectral imaging and characterization of skin tissue
6993167 | Nov 12 1999 | Polartechnics Limited | System and method for examining, recording and analyzing dermatological conditions
7006223 | Mar 07 2003 | 3GEN, LLC | Dermoscopy epiluminescence device employing cross and parallel polarization
7027153 | Mar 07 2003 | 3GEN, LLC | Dermoscopy epiluminescence device employing multiple color illumination sources
7167243 | Mar 07 2003 | 3gen, LLC | Dermoscopy epiluminescence device employing cross and parallel polarization
7167244 | Mar 07 2003 | 3gen, LLC | Dermoscopy epiluminescence device employing multiple color illumination sources
7603031 | Dec 15 2004 | Canfield Scientific, Incorporated | Programmable, multi-spectral, image-capture environment
7894651 | Mar 02 2007 | STRATA SKIN SCIENCES, INC | Quantitative analysis of skin characteristics
8218862 | Feb 01 2008 | Canfield Scientific, Incorporated | Automatic mask design and registration and feature detection for computer-aided skin analysis
8498460 | Feb 22 2010 | Canfield Scientific, Incorporated | Reflectance imaging and analysis for evaluating tissue pigmentation
8971609 | Sep 09 2009 | Oregon Health & Science University | Automated detection of melanoma
20030078482
20040267102
20050228264
20050232474
20070232930
20080123106
20080132794
20080214907
20080275315
20090016491
20090174878
20090220415
20090279760
20100056928
20100185064
20100255795
20100256469
20100271470
20100302358
20110013006
20120041284
20120041285
20120071764
20120170828
20120172685
20120259229
20120320340
20130014868
20130053701
20130108981
20130114868
20130116538
20140036054
20140213909
20140350395
20150025343
20150051498
20150082498
20160199665
20170205344
20190307391
CN101686819
GB2502672
JP2005111260
JP2005192944
JP2006074259
JP2007511243
JP2010520774
JP2013514520
WO2006078902
WO2011087807
WO2011112559
WO2012162596
WO2015013288
WO2017027881
WO2020146489
WO9013901
Executed on | Assignor | Assignee | Conveyance | Reel/Frame
Aug 05 2013 | GAREAU, DANIEL | The Rockefeller University | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 066312/0374
Jan 24 2023 | | The Rockefeller University | (assignment on the face of the patent) |
Date | Maintenance Fee Events
Jan 24 2023 | BIG: Entity status set to Undiscounted
Feb 14 2023 | SMAL: Entity status set to Small
Maintenance Schedule
Fee year | Payment window opens | Surcharge grace period begins | Patent expires if unpaid | Deadline to revive (unintentional abandonment)
4 | Mar 19 2027 | Sep 19 2027 | Mar 19 2028 | Mar 19 2030
8 | Mar 19 2031 | Sep 19 2031 | Mar 19 2032 | Mar 19 2034
12 | Mar 19 2035 | Sep 19 2035 | Mar 19 2036 | Mar 19 2038