Embodiments include applying a compensation to an image signal based on nonuniformity of a display device. The compensation is based on information about variations in light-output response among elements of the display device. The compensation is also modified based on a characteristic of a desired use of the display.

Patent: RE43707
Priority: May 17, 2005
Filed: Dec. 28, 2011
Issued: Oct. 2, 2012
Expiry: May 23, 2025
96. A method of image processing comprising:
receiving an image signal representative of at least one physical and tangible object;
obtaining correction data configured to produce a desired non-uniform light-output response; and
generating a display signal based on the image signal and the correction data,
wherein the display signal is configured to cause the display to depict a display image of the at least one physical and tangible object.
62. An image processing apparatus comprising:
an array of logic elements configured to generate a display signal based on a map and an image signal that represents at least one physical and tangible object,
wherein the display signal is configured to cause a display to depict a display image of the at least one physical and tangible object, and
wherein the map comprises correction data configured to produce a desired non-uniform light-output response.
105. An image processing apparatus comprising:
an array of logic elements configured to generate a display signal based on a map and an image signal that represents at least one physical and tangible object,
wherein the display signal is configured to cause a display to depict a display image of the at least one physical and tangible object,
wherein the map comprises correction data calculated based on whether at least a portion of a pixel is defective.
100. An image processing apparatus comprising:
an array of logic elements configured to generate a display signal based on a map and an image signal that represents at least one physical and tangible object,
wherein the display signal is configured to cause a display to depict a display image of the at least one physical and tangible object,
wherein the map comprises correction data configured to correct for pixel non-uniformity only when the pixel non-uniformity is outside of a tolerance level.
58. A method of processing an image for a display, the display comprising a plurality of pixels, said method comprising:
based on a modified map and an image signal that represents at least one physical and tangible object, obtaining a display signal that is configured to cause the display to depict the at least one physical and tangible object,
wherein the modified map is based on a measure of a light-output response of at least a portion of each pixel in at least a portion of the plurality of pixels at a plurality of driving levels.
98. An image processing apparatus comprising:
an array of logic elements configured to generate a display signal based on a map and an image signal that represents at least one physical and tangible object,
wherein the display signal is configured to cause a display to depict a display image of the at least one physical and tangible object,
wherein the map comprises correction data configured to produce a light-output response with a lower degree of non-uniformity for driving levels within a first range of driving levels than for driving levels outside the first range of driving levels.
1. A method of image processing, said method comprising:
for each of a plurality of pixels of a display, obtaining a measure of a light-output response of at least a portion of the pixel at each of a plurality of driving levels;
to increase a visibility of a characteristic of a displayed image during a use of the display, modifying a map that is based on the obtained measures; and
based on the modified map and an image signal that represents at least one physical and tangible object, obtaining a display signal that is configured to cause the display to depict the at least one physical and tangible object.
53. A method of processing an image for a display, the display comprising a plurality of pixels, said method comprising:
modifying a map to increase a visibility of a characteristic of a displayed image, wherein the map is based on a measure of a light-output response of at least a portion of each pixel in at least a portion of the plurality of pixels at each of a plurality of driving levels; and
based on the modified map and an image signal that represents at least one physical and tangible object, obtaining a display signal that is configured to cause the display to depict the at least one physical and tangible object.
37. A method of image processing, said method comprising:
for each of a plurality of pixels of a display, obtaining a measure of a luminance of at least a portion of the pixel in response to each of a plurality of different electrical driving levels;
to increase a visibility of a characteristic of a displayed image during a use of the display, modifying a map that is based on the obtained measures; and
based on the modified map and an image signal that represents at least one physical and tangible object, obtaining a display signal that is configured to cause the display to depict the at least one physical and tangible object.
60. An image processing apparatus comprising:
an array of storage elements configured to store, for each pixel in at least a portion of a plurality of pixels of a display, data representative of a light-output response of at least a portion of the pixel at a plurality of driving levels; and
an array of logic elements configured to obtain, based on a modified map and an image signal that represents at least one physical and tangible object, a display signal that is configured to cause the display to depict the at least one physical and tangible object,
wherein the modified map increases a visibility of a characteristic of a displayed image during use of the display.
41. A method of image processing, the method comprising:
for each of a plurality of groups of pixels of a display, obtaining corresponding data representative of a light-output response of the group of pixels at each of a plurality of driving levels;
to increase a visibility of a characteristic of a displayed image during use of the display, modifying a map that is based on the obtained corresponding data for each of the plurality of groups; and
based on the modified map and an image signal that represents at least one physical and tangible object, obtaining a display signal that is configured to cause the display to depict the at least one physical and tangible object.
34. An image processing apparatus comprising:
an array of storage elements configured to store, for each of a plurality of pixels of a display, a measure of a light-output response of at least a portion of the pixel at each of a plurality of driving levels; and
an array of logic elements configured to modify a map based on the stored measures and to obtain, based on the modified map and an image signal that represents at least one physical and tangible object, a display signal that is configured to cause the display to depict the at least one physical and tangible object,
wherein the array of logic elements is configured to modify the map to increase a visibility of a characteristic of a displayed image during a use of the display.
108. A method of processing an image for a display, the method comprising:
obtaining first data representative of a light-output response of a first group of pixels of the display at a first plurality of driving levels;
obtaining second data representative of a light-output response of a second group of pixels of the display at a second plurality of driving levels;
to increase a visibility of a characteristic of a displayed image during use of the display, modifying a map that is based on the obtained first and second data; and
based on the modified map and an image signal that represents at least one physical and tangible object, obtaining a display signal that is configured to cause the display to depict the at least one physical and tangible object.
114. An image processing apparatus comprising:
an array of storage elements configured to store first data representative of a light-output response of a first group of pixels of a display and second data representative of a light-output response of a second group of pixels of the display; and
an array of logic elements configured to modify a map based on the stored first and second data and to obtain, based on the modified map and an image signal that represents at least one physical and tangible object, a display signal that is configured to cause the display to depict the at least one physical and tangible object,
wherein the array of logic elements is configured to modify the map to increase a visibility of a characteristic of a displayed image during use of the display.
50. An image processing apparatus comprising:
an array of storage elements configured to store, for each of a plurality of groups of pixels of a display, corresponding data representative of a light-output response of the group of pixels at each of a plurality of driving levels; and
an array of logic elements configured to modify a map based on the stored corresponding data for each of the plurality of groups and to obtain, based on the modified map and an image signal that represents at least one physical and tangible object, a display signal that is configured to cause the display to depict the at least one physical and tangible object,
wherein the array of logic elements is configured to modify the map to increase a visibility of a characteristic of a displayed image during use of the display.
36. A method of image processing, said method comprising:
for each of a plurality of pixels of a display, obtaining a measure of a light-output response of at least a portion of the pixel at each of a plurality of driving levels;
modifying a map of the display that is based on the obtained measures, said modifying including, with respect to a magnitude of a component having a spatial period between one and fifty millimeters, decreasing a magnitude of a component having a spatial period less than one millimeter and decreasing a magnitude of a component having a spatial period greater than fifty millimeters; and
based on the modified map and an image signal that represents at least one physical and tangible object, obtaining a display signal that is configured to cause the display to depict the at least one physical and tangible object.
2. The method of image processing according to claim 1, wherein the map comprises at least one of a luminance map of the display and a chrominance map of the display.
3. The method of image processing according to claim 1, wherein said modifying a map is based on a characteristic of a feature to be detected during display of an image.
4. The method of image processing according to claim 1, wherein said modifying a map is based on a characteristic of a class of images to be displayed.
5. The method of image processing according to claim 1, wherein said modifying a map to increase a visibility of a characteristic of a displayed image includes:
obtaining a characteristic of an image to be displayed; and
modifying the map according to the obtained characteristic.
6. The method of image processing according to claim 1, wherein said modifying a map includes modifying the map according to a desired frequency response of the display.
7. The method of image processing according to claim 1, wherein said modifying a map includes modifying the map according to a desired response of the display to a predetermined image characteristic.
8. The method of image processing according to claim 1, wherein said modifying a map includes attenuating a magnitude of a first component of the map relative to a magnitude of a second component of the map, wherein the second component has a higher spatial frequency than the first component.
9. The method of image processing according to claim 8, wherein said modifying a map includes attenuating a magnitude of a third component of the map relative to the magnitude of the second component of the map, wherein the third component has a higher spatial frequency than the second component.
10. The method of image processing according to claim 1, wherein said modifying a map to increase a visibility of a characteristic of a displayed image includes modifying the map to increase a visibility of an image area having a spatial frequency greater than 0.1 cycles per degree.
11. The method of image processing according to claim 1, wherein said modifying a map to increase a visibility of a characteristic of a displayed image includes modifying the map to increase a perceptibility of features of a displayed image that are mutually separated by more than one arc-minute.
12. The method of image processing according to claim 1, wherein said modifying a map to increase a visibility of a characteristic of a displayed image includes modifying the map to increase a visibility of an area in accordance with a contrast of the area.
13. The method of image processing according to claim 1, wherein said modifying a map to increase a visibility of a characteristic of a displayed image includes modifying the map according to a contrast sensitivity function.
14. The method of image processing according to claim 1, wherein said modifying a map to increase a visibility of a characteristic of a displayed image includes modifying the map according to a predetermined feature of interest.
15. The method of image processing according to claim 1, wherein said modifying a map to increase a visibility of a characteristic of a displayed image includes modifying the map to increase a visibility of a clinically relevant feature.
16. The method of image processing according to claim 1, wherein said modifying a map to increase a visibility of a characteristic of a displayed image includes modifying the map according to a shape and size of a clinically relevant feature.
17. The method of image processing according to claim 1, wherein said modifying a map to increase a visibility of a characteristic of a displayed image includes modifying the map to increase a visibility of rounded shapes.
18. The method of image processing according to claim 1, wherein the image signal is derived from a photographic representation of living tissue obtained using at least one among a penetrating radiation and a penetrating emission.
19. The method of image processing according to claim 1, wherein the image signal is derived from an X-ray photograph.
20. The method of image processing according to claim 1, said method comprising verifying that a desired structure was not removed from the map during said modifying.
21. The method of image processing according to claim 20, wherein said verifying includes calculating a difference between the map and the modified map.
22. The method of image processing according to claim 1, wherein said modifying a map to increase a visibility of a characteristic of a displayed image includes modifying the map according to a first selected characteristic in a first region of the display, and modifying the map according to a second selected characteristic in a second region of the display,
wherein the first characteristic is different than the second characteristic, and wherein the first region is separate from the second region.
23. The method of image processing according to claim 1, said method comprising calculating a plurality of correction functions based on the modified map.
24. The method of image processing according to claim 23, wherein said obtaining a display signal comprises applying, to a value of the image signal that corresponds to a pixel of the display, a correction function that corresponds to the pixel from among the plurality of correction functions.
25. The method of image processing according to claim 1, wherein the map comprises a plurality of correction functions, each of the plurality of correction functions corresponding to at least one of the plurality of pixels.
26. The method of image processing according to claim 25, wherein said obtaining a display signal comprises applying, to a value of the image signal that corresponds to a pixel of the display, a correction function that corresponds to the pixel from among the plurality of correction functions.
27. The method of image processing according to claim 1, wherein the luminance resolution of the display signal is greater than the luminance resolution of the image signal.
28. The method of image processing according to claim 1, wherein the luminance resolution of the display signal is greater than the luminance resolution of the display.
29. The method of image processing according to claim 28, said method comprising displaying the display signal on the display, said displaying including performing an error diffusion technique based on the display signal.
30. The method of image processing according to claim 1, said method comprising attenuating a component of the image signal according to a characterization of noise of an image detector.
31. The method of image processing according to claim 1, wherein said modifying a map includes modifying the map to reduce a visibility of a defective pixel of the display.
32. A data storage medium having machine-readable instructions describing the method of image processing according to claim 1.
33. The method of image processing according to claim 1, wherein said method comprises using a correction circuit to perform said obtaining a display signal.
35. The image processing apparatus according to claim 34, wherein said array of logic elements is configured to attenuate a magnitude of a first component of the map relative to a magnitude of a second component of the map, wherein the second component has a higher spatial frequency than the first component.
38. The method of image processing of claim 1, wherein the plurality of pixels excludes a second plurality of pixels of the display.
39. The image processing apparatus of claim 34, wherein the plurality of pixels excludes a second plurality of pixels of the display.
40. The method of image processing of claim 1, wherein said obtaining a measure of a light-output response comprises retrieving the measure from storage.
42. The method of image processing according to claim 41, wherein the plurality of driving levels is less than a number of driving levels of the display.
43. The method of image processing according to claim 41, wherein the plurality of groups of pixels excludes at least one other pixel of the display.
44. The method of image processing according to claim 41, wherein at least one of the plurality of groups comprises a plurality of portions of pixels.
45. The method of image processing according to claim 41, wherein, for at least one of the plurality of groups, each pixel of the group comprises a plurality of portions, and the corresponding data comprises data representative of a light-output response of the group of pixels for each of the plurality of portions.
46. The method of image processing according to claim 41, wherein the modifying includes, with respect to a magnitude of a component having a spatial period between one and fifty millimeters, decreasing a magnitude of a component having a spatial period less than one millimeter and decreasing a magnitude of a component having a spatial period greater than fifty millimeters.
47. The method of image processing according to claim 41, wherein the light-output response comprises luminance.
48. The method of image processing according to claim 41, wherein the light-output response comprises chromaticity.
49. The method of image processing according to claim 41, wherein the modified map comprises correction data configured to produce a desired non-uniform light-output response.
51. The image processing apparatus of claim 50, wherein at least one of the plurality of groups comprises a plurality of portions of pixels.
52. The image processing apparatus of claim 50, wherein the modified map comprises correction data configured to produce a desired non-uniform light-output response.
54. The method of processing an image for a display of claim 53, wherein the modifying includes, with respect to a magnitude of a component having a spatial period between one and fifty millimeters, decreasing a magnitude of a component having a spatial period less than one millimeter and decreasing a magnitude of a component having a spatial period greater than fifty millimeters.
55. The method of processing an image for a display of claim 53, wherein the light-output response comprises luminance.
56. The method of processing an image for a display of claim 53, wherein the light-output response comprises chromaticity.
57. The method of processing an image for a display of claim 53, wherein the modified map comprises correction data configured to produce a desired non-uniform light-output response.
59. The method of processing an image for a display of claim 58, wherein the modified map comprises correction data configured to produce a desired non-uniform light-output response.
61. The image processing apparatus of claim 60, wherein the modified map comprises correction data configured to produce a desired non-uniform light-output response.
63. The image processing apparatus according to claim 62, wherein said desired non-uniform light-output response is a desired spatially non-uniform light-output response of the display.
64. The image processing apparatus of claim 62, wherein the desired non-uniform light-output response comprises a lower degree of non-uniformity for pixels substantially at a center of the display than for pixels substantially at edges of the display.
65. The image processing apparatus of claim 62, wherein the desired non-uniform light-output response comprises lower display noise for pixels substantially at a center of the display than for pixels substantially at edges of the display.
66. The image processing apparatus of claim 62, wherein the desired non-uniform light-output response comprises a constant contrast for a majority of the display.
67. The image processing apparatus of claim 62, wherein the desired non-uniform light-output response comprises a lower degree of non-uniformity for luminance than for color.
68. The image processing apparatus of claim 62, wherein the desired non-uniform light-output response comprises a lower degree of non-uniformity for driving levels in a first range of driving levels than for driving levels outside the first range of driving levels.
69. The image processing apparatus of claim 68, wherein the first range of driving levels comprises a plurality of driving levels between a minimum driving level of the display and a maximum driving level of the display.
70. The image processing apparatus of claim 68, wherein the first range of driving levels comprises less than a total number of driving levels of the display.
71. The image processing apparatus of claim 62, wherein the desired non-uniform light-output response comprises:
a first degree of non-uniformity at a first range of driving levels; and
a second degree of non-uniformity at a second range of driving levels.
72. The image processing apparatus of claim 71, wherein a value of the image signal corresponding to a maximum driving level is substantially uncorrected.
73. The image processing apparatus of claim 71, wherein a value of the image signal corresponding to a minimum driving level is substantially uncorrected.
74. The image processing apparatus of claim 62, wherein the desired non-uniform light-output response comprises:
a first degree of non-uniformity for driving levels within a first range of driving levels; and
a second degree of non-uniformity for driving levels outside the first range of driving levels.
75. The image processing apparatus of claim 74, wherein the first degree of non-uniformity is less than twenty percent and the second degree of non-uniformity is greater than twenty percent.
76. The image processing apparatus of claim 62, wherein the desired non-uniform light-output response comprises a contrast that is at least ninety percent of an uncorrected display contrast.
77. The image processing apparatus of claim 62, wherein the desired non-uniform light-output response comprises a first contrast substantially at a center of the display and a second contrast substantially at edges of the display.
78. The image processing apparatus of claim 77, wherein the first contrast is greater than the second contrast.
79. The image processing apparatus of claim 62, wherein the desired non-uniform light-output response comprises contrasts substantially at a center of the display that are greater than contrasts substantially at edges of the display.
80. The image processing apparatus of claim 62, wherein the desired non-uniform light-output response comprises a lower degree of non-uniformity within a first range of spatial frequencies than outside the first range of spatial frequencies.
81. The image processing apparatus of claim 62, wherein the desired non-uniform light-output response comprises less correction for a defective pixel than for a non-defective pixel.
82. The image processing apparatus of claim 62, wherein the desired non-uniform light-output response comprises less correction for pixels having a degree of non-uniformity greater than or equal to a first threshold than for pixels having a degree of non-uniformity less than the first threshold.
83. The image processing apparatus of claim 62, wherein the desired non-uniform light-output response is based in part on a type of image being displayed.
84. The image processing apparatus of claim 62, wherein the desired non-uniform light-output response is based in part on a physical characteristic of an image being displayed.
85. The image processing apparatus of claim 62, wherein the desired non-uniform light-output response is based in part on an effect of a post-processing system.
86. The image processing apparatus of claim 85, wherein a resulting light-output that includes post-processing is substantially uniform.
87. The image processing apparatus of claim 85, wherein the post-processing system comprises a projection system.
88. The image processing apparatus of claim 85, wherein the post-processing system comprises a lens.
89. The image processing apparatus of claim 62, wherein the desired non-uniform light-output response is based in part on a desired frequency response.
90. The image processing apparatus of claim 62, wherein the desired non-uniform light-output response increases the visibility of a feature of the at least one physical and tangible object.
91. The image processing apparatus of claim 62, wherein the map includes at least one lookup table.
92. The image processing apparatus of claim 62, wherein the map is implemented in a compressed form.
93. The image processing apparatus of claim 62, wherein the map includes a plurality of coefficients of a polynomial description.
94. The image processing apparatus of claim 62, wherein the map has a lower resolution than a resolution of the display.
95. The image processing apparatus according to claim 62, wherein the correction data is based on at least one characteristic of an image represented in the image signal.
97. The method of image processing according to claim 96, wherein said desired non-uniform light-output response is a desired spatially non-uniform light-output response of the display.
99. The image processing apparatus of claim 98, wherein a degree of non-uniformity gradually increases from a driving level within the first range of driving levels to a driving level outside the first range of driving levels.
101. The image processing apparatus of claim 100, wherein the tolerance level varies among pixels of the display.
102. The image processing apparatus of claim 101, wherein at least one pixel of the display has a plurality of portions and wherein the tolerance level varies among portions of the pixel.
103. The image processing apparatus of claim 101, wherein the tolerance level varies among pixels of the display based in part on a position of a pixel on the display.
104. The image processing apparatus of claim 103, wherein the tolerance level is greater for pixels substantially at edges of the display than for pixels substantially at a center of the display.
106. The image processing apparatus according to claim 105, wherein the map comprises correction data for a first pixel in the display based in part on a light-output response of at least one defective pixel in the display adjacent to the first pixel.
107. The image processing apparatus according to claim 105, wherein the map comprises correction data for a first pixel in the display based in part on a light-output response of at least one pixel in the display adjacent to the first pixel that has a degree of non-uniformity greater than a predetermined threshold.
109. The method of processing according to claim 108, wherein said display includes a third group of pixels.
110. The method of processing according to claim 108, wherein said first group of pixels includes at least one pixel that is excluded from said second group of pixels.
111. The method of processing according to claim 108, wherein said first plurality of driving levels is the same as said second plurality of driving levels.
112. The method of processing according to claim 108, wherein said first group of pixels comprises at least one pixel of the display, and wherein said second group of pixels comprises at least one pixel of the display.
113. The method of processing according to claim 108, wherein said first group of pixels comprises at least one portion of a pixel of the display, and wherein said second group of pixels comprises at least one portion of a pixel of the display.
115. The image processing apparatus according to claim 114, wherein said first data is representative of a light-output response of the first group at a first plurality of driving levels, and wherein said second data is representative of a light-output response of the second group at a second plurality of driving levels.

This application claims benefit of U.S. Provisional Patent Application No. 60/681,429, entitled “METHODS, APPARATUS, AND DEVICES FOR NOISE REDUCTION,” filed May 17, 2005.


where the color output of the pixel is L1*, u1*, v1* in L*u*v* space and the required color is L2*, u2*, v2*. For derivation and application of this equation, see the book by Myers mentioned above. Provided this error figure is small enough, small deviations in the color output go unnoticed. This means that there is a certain tolerance on differences in the luminosity relationships of sub-pixel elements that still provides an apparently uniform display. Therefore an optimization of a color display according to one embodiment includes capturing the luminosity and/or color output of all the pixels and/or pixel elements and optimizing the drive characteristics so that all pixels (or a selected number of such pixels, the others being defective pixels) have a luminosity and color range within the acceptable limits as defined by the above equation.
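
For reference, the error figure referred to above is presumably the standard CIE 1976 color difference in L*u*v* space (the exact expression appears earlier in the specification; the standard form is shown here on the assumption that it matches):

$$\Delta E_{uv}^{*}=\sqrt{(L_{2}^{*}-L_{1}^{*})^{2}+(u_{2}^{*}-u_{1}^{*})^{2}+(v_{2}^{*}-v_{1}^{*})^{2}}$$

A difference of roughly one unit of ΔE is commonly taken as near the threshold of perceptibility, which is what creates the tolerance described above.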

A similar technique can also be used to realize a desired non-uniformity of the screen with respect to its light-output (i.e. color and/or luminance) behavior. Instead of realizing a flat, spatially uniform light-output, it can be the object of the matrix display to realize a non-uniform spatial behavior that corresponds to a target spatial function. As an example, certain visualization systems may include post-processing of the image displayed by the matrix display element, e.g. optical post-processing, which introduces spatial non-uniformities. Examples of such systems include, but are not limited to, projection systems and tiled display systems using magnification lenses. The techniques disclosed herein can be used to introduce a non-uniformity of the light-output behavior that pre-corrects for the behavior of the post-processing system, so as to realize a more uniform behavior of the image that is produced as the result of the combination of said matrix display and said optical post-processing system.
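
As a minimal sketch of this pre-correction idea (all names and the vignetting model are illustrative assumptions, not taken from the patent), a measured gain map of the optical post-processing system can be inverted and folded into the display's target light output:

```python
import numpy as np

def precorrect_target(uniform_target, postprocess_gain, max_level=1.0):
    """Compute a deliberately non-uniform display target that cancels a
    measured post-processing gain map (e.g. lens vignetting).

    uniform_target: desired luminance after post-processing (scalar)
    postprocess_gain: per-pixel relative transmission of the optics (H x W)
    """
    # Dividing by the optical gain pre-brightens the regions that the
    # optics attenuate.
    target = uniform_target / np.clip(postprocess_gain, 1e-6, None)
    # The display cannot exceed its maximum light output, so rescale if
    # necessary; this trades peak brightness for final uniformity.
    scale = min(1.0, max_level / target.max())
    return target * scale

# Example: a lens passing 100% at the center, about 70% at the corners.
yy, xx = np.mgrid[-1:1:480j, -1:1:640j]
vignette = 1.0 - 0.15 * (xx**2 + yy**2)
display_target = precorrect_target(0.8, vignette)
```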

In particular, the scope of disclosed embodiments includes configurations in which certain pixels are defined as defect pixels, i.e. certain pixels are deliberately allowed to provide sub-optimal luminosity rather than reducing the brightness of the rest of the display in order to bring the operation of the remaining pixels within the range of the sub-optimal pixels. Such defect pixels may be dealt with in accordance with a method or device as described in U.S. patent application Ser. No. 10/719,881, entitled “Method and device for avoiding image misinterpretation due to defective pixels in a matrix display”. Thus embodiments may include at least two user-defined states: a maximum-brightness display in which some of the pixels perform less than optimally but the remaining pixels are all optimized so that each pixel element operates within the same luminance range as other pixel elements having the same function, e.g. all blue pixel elements; and a uniform display in which overall brightness is reduced so that all pixels, including the otherwise sub-optimal ones, operate within a common range.

Storing a large amount of data as suggested above (i.e. one luminance response function for every individual pixel 4) is technologically possible, but may not be cost-effective. Accordingly, the range of embodiments includes a method to classify a pixel's luminance response and thus reduce the data required for correction implementations. For example, the characterization data may be classified into a predetermined number N of categories, where N is greater than one and less than the number of pixels, with the characterization data of at least two pixels being assigned to one of the categories.

As explained above, every pixel 4 has its own characteristic luminance response. It is possible to characterize the luminance response function, and hence the required correction function for a pixel 4, by a set of parameters. More specifically, it may be desired to map the behavior of each pixel 4, although possibly different for each individual pixel 4, into categories that describe the required correction for a set of pixels. In that sense, various similarly behaving pixels can be categorized as suitable for using the same correction curve.

A potential advantage of this technique is a reduction of the data volume and associated storage memory that may be needed to realize the correction in hardware circuitry. As an example, a one-megapixel display 2 would have one characteristic luminance response for each pixel, which can be stored in the form of, e.g., a LUT. This means that one million LUTs may need to be stored, in the absence of a reduction as described herein.

It may be an objective to define every correction curve for this display using a value that need not be able to point to each of the one million LUTs individually, for example using only an 8-bit value. This means that at most 256 different correction curves, and thus 256 categories, are available for correction of the one million pixels 4 of the display 2. The objective of the data reduction technique is to find similarly behaving pixels 4 that can be corrected with one and the same correction curve, so that an 8-bit value for each pixel (together with the 256 correction curves) suffices for correction of the complete display. It is to be remarked that the data reduction technique may be applied to the pixel characteristic curve itself, or to the correction curve that is associated with the pixel, since the latter is derived from the former.
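
To make the savings concrete, assume (hypothetically) that each correction curve is an 8-bit-in/8-bit-out LUT of 256 one-byte entries. Then for a one-megapixel display:

$$10^{6}\times 256\ \text{B}=256\ \text{MB}\quad\text{versus}\quad 10^{6}\times 1\ \text{B}+256\times 256\ \text{B}\approx 1.07\ \text{MB},$$

i.e. one full LUT per pixel versus one 8-bit category index per pixel plus a shared bank of 256 correction curves.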

Another possibility is storing an additional correction value for each pixel next to the category to which the pixel belongs. In one example, an offset value is stored, which technique may be used to avoid storage of many characteristic curves that differ only in an offset value. Of course, other additional correction values can also be stored (e.g., but not limited to, gain, offset, shift, maximum).

An embodiment includes classifying the actual luminance response functions or curves found into a set of typical curves, based on closest resemblance of the actual curves to the typical curves. Different techniques exist that may be used to classify the pixel response functions or curves, or curves in general, ranging from minimized least-squares approaches, through k-means and harmonic-means clustering approaches, to neural network and Hopfield net based techniques. The type of technique used is not generally a limitation of the invention except as may be claimed below, but the fact that data reduction techniques are used to select a typical correction curve is an aspect of some embodiments.
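
A minimal sketch of one such classification, here using k-means over synthetic response curves (the data model and all names are illustrative; any of the techniques named above could be substituted):

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for measured characterization data: one luminance
# curve per pixel, sampled at 17 driving levels.
rng = np.random.default_rng(0)
levels = np.linspace(0.0, 1.0, 17)
gains = rng.normal(1.0, 0.05, size=(10_000, 1))  # pixel-to-pixel spread
responses = gains * levels**2.2                  # gamma-like curves

# Classify into at most 256 typical curves so that a single 8-bit value
# per pixel can index its correction category.
kmeans = KMeans(n_clusters=256, n_init=4, random_state=0).fit(responses)
ppm = kmeans.labels_.astype(np.uint8)      # 8-bit category per pixel
typical_curves = kmeans.cluster_centers_   # the 256 representative curves
```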

The end result is that, e.g., for a set of more than a million pixels that make up a typical computer data display, the correction curves can be fully defined by a limited amount of data (e.g. an 8-bit value per pixel), reducing the hardware (mainly memory) required to implement the correction. This data set may be called the pixel profile map (PPM). For example, the 8-bit value may be a pointer into a set of 256 different typical functions. These typical functions may be stored, e.g., as curves (a set of data points), as look-up tables, or in any other suitable form, for example as polynomials or as a vector of curve points, in a memory for later use.

A further embodiment does not use classification into typical curves to obtain the PPM. This method describes the actually found pixel characterization data (PCD) by means of a polynomial description of the form:
y = a + bx + cx² + … + zxⁿ.

Instead of storing the typical curves (as in a method as described above), the coefficients a, b, . . . , z will be stored for each pixel in this case. Depending upon the desired precision and the implementation method to be used (e.g. software versus hardware), an order of the polynomial form can be selected. To a first approximation, the PCD can for example be approximated by a linear curve defined by just an offset (coefficient a) and a gain (coefficient b) parameter. In that case, for every pixel, the coefficients a and b may be stored in memory for later use. The parameters can be quantized at various resolutions depending on the desired precision. Any combination of typical curves and polynomial description (or any other mathematical description method, such as but not limited to sine or cosine series, etc.) is also possible.
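
A brief sketch of this per-pixel polynomial reduction (synthetic data; numpy's least-squares fit stands in for whatever fitting the factory software would actually use):

```python
import numpy as np

# Synthetic stand-in for the pixel characterization data (PCD):
# measured luminance of each pixel at a few driving levels.
rng = np.random.default_rng(1)
drive = np.linspace(0, 255, 17)
offsets = rng.normal(2.0, 0.5, size=1000)             # per-pixel offset a
gains = rng.normal(1.0, 0.02, size=1000)              # per-pixel gain b
measured = offsets[:, None] + gains[:, None] * drive  # (1000, 17)

# First-order fit per pixel: np.polyfit accepts a 2-D y with one dataset
# per column and returns coefficients highest order first.
b, a = np.polyfit(drive, measured.T, deg=1)           # each shape (1000,)

def approx_response(i, x):
    """Reconstructed response of pixel i at drive level x."""
    return a[i] + b[i] * x
```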

The overall result of the pixel characterization and classification is that the PPM is obtained for every pixel 4 of the display device 2 under test (or selected portion of the display). It may be desirable to obtain the PPM offline (e.g. within the factory), and then to perform correction based on the PPM on-line (in real-time).

Based on the PPM, an embodiment provides a correction circuit to generate a required pixel response curve in real time. The correction circuit will apply a specific transfer curve correction for each individual pixel 4, which application may be performed synchronously with a pixel clock. Hereinafter, different embodiments of implementation methods are provided as an illustration. The methods are not meant to be exhaustive.

In a first embodiment, the transfer curve correction is realized by means of a look-up table. The correction circuit provides a dynamic switching of the look-up table at the frequency of the pixel clock. Associated with every pixel value is information about its typical luminance response curve. Thus, at every pixel, the correct look-up table is pointed to, i.e. the look-up table containing the right correction function for that individual pixel.

In the first implementation example, the video memory 40 is 16 bits wide per color (e.g. a 48-bit-wide digital word to define a color pixel). It contains, for every (sub)pixel, the pixel value itself (an 8-bit value) and another 8-bit value identifying the pixel's response curve. This latter value is the result of the characterization process of the pixel, followed by the classification process of the pixel's response curve. At read-out from the video memory 40 at the rhythm of the pixel clock, this curve-identifying value is used as a pointer into a bank of 8-bit-to-8-bit look-up tables 42, representing the 256 different correction classes available for this display's pixels. The principle of look-up tables is well known by persons skilled in the art and allows for a real-time implementation at the highest pixel clock speeds found in today's display controllers.
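
A software model of this LUT-bank scheme (a sketch only; the patent describes a hardware circuit, and all names and the synthetic data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Bank of 256 correction classes, each an 8-bit-in/8-bit-out table.
# Here: identity tables with a small perturbation, standing in for the
# real correction curves produced by the classification step.
identity = np.arange(256, dtype=np.int16)
lut_bank = np.clip(identity + rng.integers(-3, 4, size=(256, 256)),
                   0, 255).astype(np.uint8)

pixel_values = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)
curve_ids = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)  # PPM

# Per pixel, select the correction class and look up the corrected drive:
# the software equivalent of switching LUTs at the pixel clock.
corrected = lut_bank[curve_ids, pixel_values]
```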

A second embodiment can be based on the second classification method that stores the pixel correction curves by means of polynomial descriptors. In such a case, the required response will be calculated by a processing unit capable of calculating the required drive to the pixel based on the polynomial form:
y = a + bx + cx² + … + zxⁿ.
A processing unit will retrieve for every pixel the stored coefficients a, b, c, . . . , z and will calculate in real time or off-line the required drive y for the pixel, at a given value x defined by the actual pixel value.
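
A sketch of the per-pixel polynomial evaluation; Horner's scheme keeps the arithmetic cheap enough for a real-time implementation (the function name and example coefficients are illustrative):

```python
def corrected_drive(coeffs, x):
    """Evaluate y = a + b*x + c*x**2 + ... using Horner's scheme;
    coeffs holds (a, b, c, ...), lowest order first."""
    y = 0.0
    for c in reversed(coeffs):
        y = y * x + c
    return y

# Example: a pixel with offset a = 2.0 and gain b = 0.98, driven at 128.
print(corrected_drive((2.0, 0.98), 128))  # -> 127.44
```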

For embodiments that include a correction task, a correction of the drive value to the pixels can be applied in real time using hardware or software methods, but it can also be carried out off-line (not in real time), e.g. by means of software. Software methods may be preferred where cost must be minimized or where dedicated hardware is not available or is to be avoided. Such software methods may be based upon a microprocessor, an embedded microcontroller, or a similar processing engine such as a Programmable Logic Array (PLA), Programmable Array Logic (PAL), or gate array, especially a Field Programmable Gate Array (FPGA), for executing methods as described herein. In particular, such processing engines may be embedded in dedicated circuitry such as a VLSI.

As an example of the latter case, the PPM of the complete display 2 can be made accessible by a software application. This application may be configured to process every individual pixel with a LUT correction as defined by the PPM data. In that way, the image will be pre-corrected according to the actual display characteristics, before it is generated by the imaging hardware.

It is to be understood that although preferred embodiments, specific constructions and configurations, as well as materials, have been discussed herein for devices according to the present invention, various changes or modifications in form and detail may be made without departing from the scope and spirit of this invention.

Above, a basic algorithm for correction of non-uniformities was explained. In some applications, however, problems may exist that impair the usefulness of this basic version of the algorithm.

A first problem relates to pixel defects. Pixel defects are, for instance, defective pixels that are stuck in one state, such as the bright state or the dark state. These defective pixels are often the result of a short or open transistor. In some applications of the basic algorithm, pixel defects would simply be treated as any other form of non-uniformity. However, this could result in making the defects more visible instead of less visible.

Such a principle will now be explained: a typical medical monochrome display (such as the dual-domain five-megapixel monochrome medical LCD from International Display Technology Co., Ltd. (Yasu, Japan)) has pixels that each consist of three sub-pixels. If one of those sub-pixels is defective, for instance always dark, then this pixel measured as a unit will be perceived as being too dark when driven at values larger than zero. The result would be that a basic algorithm as described above could drive the pixel (meaning the two sub-pixels that are still functioning normally) so as to have higher luminance. However, doing this will further increase the contrast between the two normally functioning sub-pixels in that pixel and the defective sub-pixel, with the result that the defective sub-pixel becomes much more visible than if no correction had been applied. Note that the same principle is valid for other pixel organizations and if more than one sub-pixel inside one LCD pixel is defective. Also, the defective sub-pixel(s) can have a luminance value other than completely black or completely white.

Some embodiments may be configured to solve this problem by first analyzing the display system for defective pixels and adding this information to the luminance map (and/or chrominance map) of the display. In addition to the transfer curve of each individual pixel, for example, information about pixel defects may also be added to this map. The correction algorithm may then behave differently if a pixel is to be corrected that is marked as being defective or if a pixel is being corrected that has a defective pixel in its neighborhood. For example, the correction algorithm may try to make the luminance output as uniform as possible and also try to minimize the visibility of the defect. This can be done for instance by applying a special correction algorithm for defective pixels and pixels in the neighborhood of the defect. An algorithm for masking faulty sub-pixels by modifying values of nearby sub-pixels as described in International Patent Publication No. WO03/100756 may be used.

Another correction algorithm that may be used is described in European Patent Application No. EP1536399 (03078717.0), entitled “Method and device for visual masking of defects in matrix displays by using characteristics of the human vision system.” At least some embodiments including such an algorithm may be applied to solve the problem of defective pixels and/or sub-pixels in matrix displays by making them almost invisible to the human eye under normal usage circumstances. This may be done by changing the drive signal of “masking elements,” i.e. non-defective pixels and/or sub-pixels in the neighborhood of the defective pixel or sub-pixel. The document EP1536399 describes, for example, a method and device for making pixel defects less visible, and thus avoiding an incorrect image interpretation even without repair of the defective pixels, the method being usable for different types of matrix displays without a trial-and-error method being required to obtain acceptable correction results. Such a method and device are now described.

By a defective pixel or sub-pixel is meant a pixel or sub-pixel that always shows the same luminance and/or color behavior independent of the drive stimulus applied to it, i.e. one stuck in a specific state (for instance, but not limited to, always black or always full white), or a pixel or sub-pixel whose luminance or color behavior shows a severe distortion compared to non-defective pixels or sub-pixels of the display (due e.g. to contamination). For example, a pixel that reacts to an applied drive signal, but that has a luminance behavior that is very different from the luminance behavior of neighboring pixels, for instance significantly darker or brighter than surrounding pixels, can be considered a defective pixel. By visually masking is meant minimizing the visibility and/or negative effects of the defect for the user of the display.

A defect may be caused by a defective display element or by an external cause, such as dust adhering on or between display elements for example. One method for reducing the visual impact of defects present in a matrix display comprising a plurality of display elements, as described in EP1536399, includes providing a representation of a human vision system. Providing a representation of the human vision system may comprise calculating an expected response of a human eye to a stimulus applied to a display element.

For calculating the expected response of a human eye to a stimulus applied to a display element, use may be made of any of a point spread function, a pupil function, a line spread function, an optical transfer function, a modulation transfer function or a phase transfer function of the eye. These functions may be described analytically, for example based on using any of Taylor, Seidel or Zernike polynomials, or numerically.

The range of embodiments is not limited to any particular manner of describing the complex pupil function or the PSF. The description may be done analytically (for instance but not limited to a mathematical function in Cartesian or polar coordinates, by means of standard polynomials, or by means of any other suitable analytical method) or numerically by describing the function value at certain points. It is also possible to use (instead of the PSF) other (equivalent) representations of the optical system such as but not limited to the ‘Pupil Function (or aberration)’, the ‘Line Spread Function (LSF)’, the ‘Optical Transfer Function (OTF)’, the ‘Modulation Transfer function (MTF)’ and ‘Phase Transfer Function (PTF)’. Clear mathematical relations exist between such representation-methods, so that it may be possible to transform one form into another form.
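
For completeness, the standard relationships among these representations (well-known optics identities, not specific to the patent documents) are:

$$\mathrm{OTF}=\mathcal{F}\{\mathrm{PSF}\},\qquad \mathrm{MTF}=\left|\mathrm{OTF}\right|,\qquad \mathrm{PTF}=\arg\left(\mathrm{OTF}\right),$$

and the LSF is obtained by integrating the PSF along one direction, so that a one-dimensional Fourier transform of the LSF yields the corresponding slice of the OTF.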

In one example, such a method includes a mathematical model that is able to calculate the optimal driving signal for the masking elements in order to minimize the visibility of the defect(s). It may be possible to use the same algorithm for different display configurations, because it uses parameters that describe the display characteristics. The model may be based on characteristics of the human eye, using algorithms to calculate the actual response of the human eye to the superposition of the stimuli applied (in this case, to the defect and to the masking pixels). In this way the optimal drive signals of the masking elements can be described as a mathematical minimization problem of a function with one or more variables. It is possible to add one or more boundary conditions to this minimization problem. Extra boundary conditions may be needed, for example, in cases of defects among the masking elements themselves, limitations on the possible drive signals of the masking elements, dependencies among the drive signals of the masking elements, etc.
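
One plausible formalization of this minimization problem (the patent documents describe it only in general terms, so the notation here is an illustrative assumption) is:

$$\mathbf{d}^{*}=\arg\min_{\mathbf{0}\le\mathbf{d}\le\mathbf{d}_{\max}}\left\|\mathrm{PSF}*\left(I(\mathbf{d})-I_{\text{intended}}\right)\right\|,$$

where d collects the drive signals of the masking elements, I(d) is the light output of the display including the defect, I_intended is the defect-free output, and convolution with the eye's PSF models the expected retinal response. The box constraint on d is one example of the boundary conditions mentioned above.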

A method for reducing the visual impact of defects present in a matrix display as described in EP1536399 also includes characterizing at least one defect present in the display, the defect being surrounded by a plurality of non-defective display elements. Characterizing at least one defect present in the display may comprise storing characterization data characterizing the location and non-linear light output response of individual display elements, the characterization data representing light outputs of an individual display element as a function of its drive signals.

A method may further comprise generating the characterization data from images captured from individual display elements. Generating the characterization data may comprise building a display element profile map representing characterization data for each display element of the display.

A method for reducing the visual impact of defects present in a matrix display as described in EP1536399 also includes deriving drive signals for at least some of the plurality of non-defective display elements in accordance with the representation of the human vision system and the characterizing of the at least one defect, to thereby minimize an expected response of the human vision system to the defect. Minimizing the response of the human vision system to the defect may comprise changing the light output value of at least one non-defective display element surrounding the defect in the display. When minimizing the response of the human vision system to the defect, boundary conditions may be taken into account. Minimizing the response of the human vision system may be carried out in real-time or off-line.

A method for reducing the visual impact of defects present in a matrix display as described in EP1536399 also includes driving at least some of the plurality of non-defective display elements with the derived drive signals.

In a system as described in EP1536399 for reducing the visual impact of defects present in a matrix display comprising a plurality of display elements and intended to be looked at by a human vision system, first characterization data for a human vision system is provided. For example, the first characterization data may be provided by a vision characterizing device having calculating means for calculating the response of a human eye to a stimulus applied to a display element.

A system as described in EP1536399 includes a defect characterizing device for generating second characterization data for at least one defect present in the display, the defect being surrounded by a plurality of non-defective display elements. The defect characterizing device may comprise an image capturing device for generating an image of the display elements of the display. The defect characterizing device may also comprise a display element location identifying device for identifying the actual location of individual display elements of the display.

A system as described in EP1536399 for reducing the visual impact of defects present in a matrix display also includes a correction device for deriving drive signals for at least some of the plurality of non-defective display elements in accordance with the first characterization data and the second characterization data, to thereby minimize an expected response of the human vision system to the defect. The correction device may comprise means to change the light output value of at least one non-defective display element surrounding the defect in the display. Such a system may also include means for driving at least some of the plurality of non-defective display elements with the derived drive signals.

A control unit as described in EP1536399 for use with a system for reducing the visual impact of defects present in a matrix display, the display comprising a plurality of display elements and intended to be looked at by a human vision system, includes a first memory for storing first characterization data for a human vision system and a second memory for storing second characterization data for at least one defect present in the display. The first and the second memory may physically be a same memory device.

Such a control unit also includes modulating means for modulating, in accordance with the first characterization data and the second characterization data, drive signals for non-defective display elements surrounding the defect so as to reduce the visual impact of the defect. A matrix display device as described in EP1536399 for displaying an image intended to be looked at by a human vision system may include such a control unit and a plurality of display elements.

In another configuration, the correction algorithm skips the pixels in the neighborhood of a defect, which may avoid the problem of the correction making the defect more visible. A further configuration uses an average correction of pixels and/or sub-pixels in a neighborhood of the defect (where the average may include or exclude the defective pixel itself) to correct the defective pixel and/or pixels in the neighborhood of the defective pixel.
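
A sketch of the two fallback behaviors just described (hypothetical arrays and rule details; the patent leaves the exact behavior to the implementation):

```python
import numpy as np

def correct_near_defects(gain, defect_map, mode="average", radius=2):
    """Adjust a per-pixel correction gain map around marked defects.

    gain: (H, W) correction factors produced by the basic algorithm
    defect_map: (H, W) boolean map of pixels flagged as defective
    mode: "skip" leaves the neighborhood uncorrected; "average" applies
          a local average correction that excludes the defect itself.
    """
    out = gain.copy()
    H, W = gain.shape
    for r, c in zip(*np.nonzero(defect_map)):
        r0, r1 = max(0, r - radius), min(H, r + radius + 1)
        c0, c1 = max(0, c - radius), min(W, c + radius + 1)
        if mode == "skip":
            out[r0:r1, c0:c1] = 1.0  # unity gain: no correction applied
        else:
            ok = ~defect_map[r0:r1, c0:c1]  # exclude defective pixels
            out[r0:r1, c0:c1] = gain[r0:r1, c0:c1][ok].mean()
        out[r, c] = 1.0  # never try to "correct" the defect itself
    return out

# Example: a 16x16 gain map with one flagged defect at (8, 8).
defects = np.zeros((16, 16), dtype=bool)
defects[8, 8] = True
fixed = correct_near_defects(np.full((16, 16), 1.05), defects)
```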

Note that the same principle may be valid even if the pixel (or sub pixel) is not completely defective. For example, the luminance behavior may differ significantly due to reasons that may include, but are not limited to, dust in the LC cell (which may result in small bright or dark spots), dirt or contamination in or on the LC glass, and dirt or contamination in or on a protective glass above the LCD. Information on such types of “defects” may be added to the luminance map of the display, and the correction algorithm may use such information to change its behavior in a neighborhood of the defect.

For example, contamination or dirt in the LC cell of a pixel may cause light to scatter, resulting in severe light leakage inside the cell. One result may be tiny but extremely bright and visible spots when that pixel is driven to a dark video level: the leakage may be several orders of magnitude brighter than the normal luminance of a pixel driven to that video level, and thus may be extremely visible. At a bright video level, however, the effect of the defect may become nil or negligible.

It is possible that, due to the brightness of the light leakage, even neighboring LCD pixels are perceived as being bright by the imaging device. In one example, the light leaking from this cell is captured by an imaging device that is used to characterize the LCD. In front of that imaging device may be a lens in which light scattering takes place. If the leakage light is very bright, then it could even affect the measured luminance of neighboring LCD pixels due to scatter in the LCD display and/or in the lens in front of the imaging device. Alternatively, the bright spots due to leakage can cause saturation in a sensor of the imaging device, such as a CCD (charge-coupled device) sensor. In that case, the saturated site may affect neighboring sites (blooming), such that a bright pixel can affect how neighboring pixels are imaged, resulting in smear in the captured image. In such a situation of scattering and/or saturation, applying a regular correction algorithm that does not account for such effects could result in severe and very visible artifacts in a large area around the defect, as the algorithm may be configured to decrease the output luminance of the LC cell containing the defect and also of many other LCD pixels in the neighborhood of the defect.

In such a case, information on this defect may be added to the luminance map of the display, and the correction algorithm may be configured to behave differently in the neighborhood of such a defect. For example, the correction algorithm may ignore the defect and correct the area around it (such as the area in which the luminance of LCD pixels is influenced by the defect) using an average or typical correction derived from a broader region surrounding that area, or from another region of the display that has similar characteristics.

To summarize, one improvement to the basic algorithm for correction of non-uniformities is to add information on display defects to the luminance map that is an input to the correction algorithm. In one example, these display defects are detected by an imaging device that is used to image the display. These defects can be divided into categories such as, but not limited to, dead subpixel, dead pixel, bright subpixel, bright pixel, contamination in the LC cell, dust in the LC cell, dust on or in the LC glass, and dust on or in the protective glass on the display.

FIGS. 13-20 show numerous examples of a neighborhood of a defective pixel or subpixel as discussed herein. FIG. 13A shows an organization of red, blue, and green subpixels in a pixel of a color LCD, and FIG. 13B shows a subpixel organization of a monochrome (greyscale) display, which may be obtained by removing or omitting color filters from the LCD panel. The panel may be, for example, a monochrome dual-domain IPS (in-plane switching) or MVA (multi-domain vertical alignment) display for medical applications. The example organization of pixels in a matrix of FIG. 13C is repeated in the examples of FIGS. 14-20, where the defective pixel or subpixel is indicated in black and the neighborhood portions are indicated with crosses. Typically, a center-to-center distance between pixels of an LCD panel has a value between 0.1 and 0.5 mm (e.g. 0.15 to 0.3 mm) in each of the vertical and horizontal directions.

A neighborhood may be square (e.g. 3×3, 5×5, etc., as in the example of FIG. 14), or may approximate another shape such as a circle (as in the examples of FIGS. 15 and 16). It may include subpixels of one color (as in the example of FIG. 17), or of all the colors (as in the example of FIG. 18), or be weighted in favor of one or more colors. In some applications, a neighborhood that is not continuous may be desired (as in the example of FIG. 19). In some cases, such as where only a part of a pixel is defective (e.g. due to contamination), part of the pixel itself (e.g. a nondefective part) may be included in the neighborhood (as in the example of FIG. 20). It is expressly contemplated that the disclosure of neighborhoods herein is also extended to greyscale displays, which may also include pixels having subpixels. It is also noted that these or other neighborhood configurations may be used with different panel and/or pixel organizations, such as a PenTile RGBW pixel structure, and that many neighborhoods other than the examples expressly shown here are contemplated.
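
For illustration, such neighborhoods may be represented as boolean masks; a minimal Python sketch follows (the shapes and sizes are illustrative, and the actual neighborhoods are defined by the figures):

    import numpy as np

    def square_neighborhood(size=5):
        # size x size square neighborhood; the defective center is excluded.
        mask = np.ones((size, size), dtype=bool)
        mask[size // 2, size // 2] = False
        return mask

    def circular_neighborhood(radius=2):
        # Approximate circular neighborhood of the given radius in pixels.
        y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
        mask = (x * x + y * y) <= radius * radius
        mask[radius, radius] = False   # exclude the defective center
        return mask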

Of course, other parameters may be stored in addition to the defect type. Such parameters may include without limitation exact defect location (possibly a floating point number for row and column, since some defects may not be directly linked to a particular pixel: for example, contamination in the glass can be in between two LCD pixels) and other information such as luminance value (for instance, for light leakage). Instead of (or in addition to) measuring or obtaining the list of defects using the imaging device, it is also possible to obtain this map of defects from another source such as the manufacturer of the device (for example, stored in a non-volatile memory of the device), or it can even be created by manually inspecting the device.

Once such information on defects is added to the luminance map, the correction algorithm can use this information to change its behavior in a neighborhood of such a defect. Potential advantages of such a configuration include obtaining a better correction and avoiding an increase in the visibility of the defect. Such a configuration may be described as prefiltering of the correction and/or luminance map.

Another reason to prefilter the luminance and/or correction map could be that the measurement data is rather noisy, so that it may be desirable to apply a low-pass filter to the luminance and/or correction map (for example, to partially remove this noise) or to reject outliers in the map and replace these values with more typical values. Many statistical methods exist that may be used for automated detection and/or correction of outliers in measurement data. For example, a temporal filter, a median filter, or a k-nearest-neighbor median filter may be used. Alternatively, outliers may be detected based on a comparison of a threshold value to a distance measure (such as the Euclidean or Mahalanobis distance), and a detected outlier value may be replaced by e.g. an average of the measured values in a neighborhood of the outlier, as in the sketch below.
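
A minimal Python sketch of such outlier rejection follows (using SciPy's median filter; the threshold, the window size, and the use of the median absolute deviation as a robust scale estimate are illustrative assumptions):

    import numpy as np
    from scipy.ndimage import median_filter

    def reject_outliers(luminance_map, threshold=3.0, size=3):
        # Local median serves as the "typical" value for each site.
        med = median_filter(luminance_map, size=size)
        resid = luminance_map - med
        # Robust scale estimate from the median absolute deviation (MAD).
        mad = np.median(np.abs(resid))
        sigma = 1.4826 * mad if mad > 0 else np.inf
        outliers = np.abs(resid) > threshold * sigma
        # Replace flagged values with the local median.
        return np.where(outliers, med, luminance_map), outliers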

Another potential disadvantage of a basic version of the algorithm for correcting non-uniformities is a severe reduction in display luminance and contrast. If the display response is made perfectly uniform across all pixels, then the maximum brightness of the display is constrained to the luminance of the dimmest pixel (as measured with all pixels driven to their maximum value), because the actual brightness of a pixel cannot be increased once it is already driven to its maximum. The same holds for the minimum brightness (low video level), in that the lowest achievable display luminance is constrained to the luminance of the brightest pixel when all pixels are driven to minimum brightness. These two constraints may lead to a reduction in the contrast ratio of the display, as the following numeric illustration shows.
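
For illustration, suppose a display of four pixels with the following hypothetical per-pixel luminance extremes; the corrected contrast ratio is bounded by the dimmest white and the brightest black:

    import numpy as np

    L_max = np.array([480.0, 500.0, 510.0, 470.0])  # per-pixel peak luminance, cd/m2
    L_min = np.array([0.50, 0.45, 0.55, 0.60])      # per-pixel black level, cd/m2

    # A perfectly uniform display can be no brighter than its dimmest pixel
    # and no darker than its brightest black level.
    white = L_max.min()                  # 470.0 cd/m2
    black = L_min.max()                  # 0.60 cd/m2
    print(white / black)                 # corrected contrast ratio, ~783:1
    print(L_max.max() / L_min.min())     # best uncorrected pixel, ~1133:1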

Further embodiments may be configured to provide a solution for such a problem. For example, a correction apparatus or method may be configured not to make the display as uniform as possible, but rather to make the user perceive the display as being as uniform as possible. Such an embodiment may be configured to exploit the fact that the human eye is far from perfect, so that some variations in luminance and color (and therefore also some non-uniformities) cannot be perceived.

Several models of the human eye exist, such as the Barten model, which describes the contrast sensitivity function of the human eye (e.g. as described in Barten, Peter G. J. (1999), Contrast Sensitivity of the Human Eye and Its Effects on Image Quality, SPIE Press, Bellingham, Wash.), and also more complex models such as the proprietary JNDmetrix model (Sarnoff Corporation, Princeton, N.J.). Any model of the human visual system may be used to modify a correction algorithm to increase the contrast and peak luminance of a display system while still keeping the same impression of luminance and/or color non-uniformity. For example, any of these models may be used to configure the correction algorithm to correct predominantly or exclusively for those non-uniformities in luminance and/or color that can actually be perceived by the human observer (or any other observer, such as but not limited to another type of animal, a sensor as may be used in a machine vision application, etc.).

For example, a model of the contrast sensitivity function describes which spatial sine-wave patterns can be perceived by the (e.g. human) observer. For each sine-wave frequency (in cycles per degree, for instance), the model gives the amplitude threshold (% modulation) required for a human observer to be able to see that sine-wave pattern. Consequently, if a display system has non-uniformities containing spatial frequencies for which the amplitude is below the visual threshold, then according to the model these non-uniformities will not be visible to the human observer. There is therefore no need to correct for them, and modifying the correction algorithm according to the model may result in smaller required corrections and therefore less loss in peak luminance and contrast ratio.
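
To make the threshold test concrete, the following minimal Python sketch substitutes the published Mannos-Sakrison (1974) contrast sensitivity approximation for the Barten model (whose full form depends on further parameters such as luminance and field size); the assumed absolute peak sensitivity of 200 and the function names are illustrative:

    import numpy as np

    def csf_mannos_sakrison(f):
        # Normalized contrast sensitivity at spatial frequency f (cycles/degree).
        f = np.asarray(f, dtype=float)
        return 2.6 * (0.0192 + 0.114 * f) * np.exp(-((0.114 * f) ** 1.1))

    def modulation_threshold(f, peak_sensitivity=200.0):
        # Threshold modulation (as a fraction) at frequency f; peak_sensitivity
        # is an assumed absolute peak contrast sensitivity of the observer.
        s = peak_sensitivity * csf_mannos_sakrison(f)
        return 1.0 / np.maximum(s, 1e-9)

    def is_visible(modulation, f):
        # A sine-wave non-uniformity is treated as visible (and hence as a
        # candidate for correction) when its modulation exceeds the threshold.
        return modulation > modulation_threshold(f)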

Such a principle may also be used to only partially correct for non-uniformities. If a particular non-uniformity is visible to the human eye according to the model, then it may be desirable to apply a correction that achieves not perfect uniformity but sufficient uniformity that the remaining non-uniformity is no longer noticeable to the human observer (based on the model). A potential advantage of only partially correcting for non-uniformities is an increase in the remaining peak luminance and contrast ratio of the display system after correction. Of course, it is possible to take some safety margin in the correction to make sure that more sensitive observers (i.e. exceptional cases) will not be able to see the remaining non-uniformities.

A typical example is the luminance fall-off near the borders of the display. This fall-off typically has a low spatial frequency, and the human eye is not very sensitive at low spatial frequencies, so the fall-off is difficult to perceive. At the same time, the fall-off is typically rather large (e.g. 30% lower luminance at the borders than at the center), so that correcting it perfectly could cause a large loss of peak luminance and contrast ratio. Embodiments include configurations in which this luminance fall-off is not corrected for or, alternatively, is corrected for only partially, so that the fall-off just becomes invisible to the human eye. A potential advantage of such a configuration is that the contrast and peak luminance loss will be much smaller.

FIG. 12 shows an example of a contrast sensitivity function, here expressed as a plot of sensitivity versus spatial frequency (in cycles per degree). The plot shows the sensitivity of the eye to sine-wave patterns of a given spatial frequency (i.e. the reciprocal of the threshold, not the threshold itself). Also note that this contrast sensitivity function depends on the absolute luminance value (in this example, 500 cd/m2). According to this perceptibility model, it may be desirable to correct nonuniformities to increase visibility of an area of an image having a spatial frequency greater than 0.1 cycles per degree, while nonuniformities affecting visibility of spatial frequencies greater than 10 cycles per degree may be ignored (in another example, the range of frequencies of interest may be narrowed to, e.g., 1-8 cycles per degree). The distance in the viewing plane of the display device which corresponds to a degree or portion thereof may be determined according to a customary or recommended viewing distance, which may be in the range of 50 to 120 centimeters (e.g. 65-100 cm) for medical applications. For example, a frequency of one cycle per degree corresponds to a period of about 8.7 millimeters at a distance of 50 cm, and to a period of about 2.1 cm at a distance of 120 cm.
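
The conversions quoted here follow from simple trigonometry, as the following short Python check of the figures in the text shows:

    import math

    def period_mm(cycles_per_degree, viewing_distance_cm):
        # Spatial period on the display, in mm, of one cycle at the given
        # angular frequency for a viewer at the given distance.
        deg = 1.0 / cycles_per_degree
        return 10.0 * viewing_distance_cm * math.tan(math.radians(deg))

    print(period_mm(1.0, 50))     # ~8.7 mm per cycle at 50 cm
    print(period_mm(1.0, 120))    # ~21 mm (2.1 cm) per cycle at 120 cm
    # One arc-minute (1/60 degree) subtends ~0.15 mm at 50 cm:
    print(10.0 * 50 * math.tan(math.radians(1.0 / 60.0)))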

In another embodiment, perceptibility is determined on a basis of separation of features by arc-minutes (sixtieths of a degree). For example, it may be assumed that points less than one arc-minute apart will not be distinguished by the observer. Again, the distance in the viewing plane of the display device which corresponds to a degree or portion thereof may be determined according to a customary or recommended viewing distance, which may be in the range of 50 to 120 centimeters (e.g. 65-100 cm) for medical applications. For example, an angle of one arc-minute corresponds to a distance of about 0.15 mm in a perpendicular plane 50 cm distant, and to a distance of about 0.35 mm in a perpendicular plane 120 cm distant.

The range of embodiments includes configurations in which the magnitude of one or more components corresponding to a range of frequencies is increased relative to a magnitude of components corresponding to frequencies above and below the range (alternatively, a magnitude of components corresponding to frequencies above and below the range is reduced). For example, a magnitude of a component having a spatial period between one and fifty millimeters may be increased relative to a magnitude of a component having a spatial period less than one millimeter and a magnitude of a component having a spatial period greater than fifty millimeters.

As already mentioned, much more complex models can be used to determine whether or not a non-uniformity is visible to the human observer, and the scope of disclosed embodiments is not limited to any particular model or set of models. The same idea of perceptibility-based correction may also be applied to correction of color displays, in that correction of luminance and/or color difference may be limited according to a determination of what can actually be observed by the human observer. In such a case, a program, package, or methodology such as JNDmetrix (Sarnoff Corp., Princeton, N.J.), for example, may be used in making the determination. Alternatively or additionally, an extension of the contrast sensitivity function to color perception may be applied.

A method of analysis and correction that takes account of visibility need not be limited to a frame-by-frame basis. Typically, a video sequence of images will be perceived differently than still image frames. Therefore, deciding whether specific noise is visible may also include analyzing a sequence of images, and in this situation as well a program, package, or methodology such as the JNDmetrix tool may be used. The various modes available for the correction of a particular defect may also be increased by applying a different modification to the same pixel or area of a display at different times. For example, such corrections may be applied in accordance with an expected temporal sensitivity of the observer. A series of two or more different corrections over time may be applied even in a case where the original image signal mapped to that pixel or area remains constant or relatively constant.

Embodiments configured to correct non-uniformities according to their perceptibility or observability may include preprocessing the correction map. Such preprocessing may be performed on a basis other than pixel-by-pixel. For example, the correction value of a particular pixel may depend not only on that pixel but also on pixels and/or sub-pixels in its neighborhood, or even on the behavior (e.g. required correction, luminance behavior, color behavior) of many or all other pixels of the display, and possibly even on one or more pixel values of the display during another frame.

It may be desirable to perform such a correction (e.g. based on perceivability or observability) on luminance values rather than on digital driving values. In one example, the “visibility” analysis is done in the luminance domain, and the result is a required correction in the luminance domain. The correction may then be translated (e.g. by means of the inverse transfer curve of each pixel) to the digital driving level domain.

A determination of whether a non-uniformity (luminance and/or color) can be perceived may be made in different ways. One could perform this visibility test based on the luminance and/or color behavior of the pixels (transfer curve) as measured by the imaging device. In one implementation, for each pixel of the display (or of some portion of interest of the display), the transfer curve is measured, resulting in a luminance map for the display. For example, the transfer curve may be measured for each pixel at several luminance levels, resulting in a luminance map for the display at several luminance levels. Based on the luminance map, a best correction is calculated, e.g. the correction that results in the lowest reduction in luminance and/or contrast when only those non-uniformities that are shown to be visible on that map are corrected. One could then use this calculated correction map in the future to pre-compensate images to be shown on that display. In most situations such a method will work well and will give nearly optimal performance.

However, in theory such a method may not be the optimal solution. Indeed, the actual image content shown on the display (the actual image being displayed at any time) also affects whether and to what extent a particular non-uniformity in that image will be visible. For instance, it is typically much easier to see a non-uniformity when a uniform background is shown on the display than when a realistic photographic image is displayed. Therefore, it may be desirable to calculate a correction map (or a modification of such a map) based on a characteristic of the actual image being displayed, and potential advantages of such an implementation may include a further reduction in the loss of peak luminance and contrast ratio. Such a calculation can be done by one or more of the same methods as described before (e.g. contrast sensitivity function, JNDmetrix), and it may be done in software (each time the image changes, for instance) and/or in hardware, and off-line and/or on-line (in real time).

A particular implementation is described now as an example. In medical imaging, such as in mammography, radiologists often base their diagnosis on very small and subtle differences in the medical image. Noise that becomes visible as small high-frequency structures may therefore have a much more negative impact on the quality of diagnosis than large-area low-frequency noise structures. Consequently, it is often more important to reduce high-frequency noise than low-frequency noise, and in some cases only high-frequency noise need be reduced.

As described above, a potential advantage of not compensating for low-frequency components of the display noise is that the remaining peak luminance and contrast ratio after noise reduction will be higher. The terms “high-frequency” and “low-frequency,” as used here with reference to noise structures or noise components, indicate spatial frequencies (and/or directions: the noise could run in the horizontal or vertical direction or a combination of both) that may depend on the relevant structures in the images being displayed. If relevant clinical structures in a medical image have a spatial period (where the period is defined as 1/frequency) of only a few pixels, for example, then it may be desirable to reduce high-frequency noise in the form of a noise structure or component having a spatial period equal to or shorter than the period of the relevant clinical structures. Of course, it is possible to take some safety margin in calculating the frequencies for which noise reduction should be applied.

In some cases, it may be desired to use complex mathematical models to predict the visibility of non-uniformities (in other words, noise). In other situations, it may be desired to use ad-hoc models such as splitting the noise pattern (or, alternatively, the luminance map and/or the correction map) into frequency bands and assigning gain factors for each band that determine whether and to what extent the noise pattern in that band will be reduced (i.e. to what extent the image will be compensated for that noise).

For example, if we split the noise pattern into two bands (e.g. a low-frequency band of noise patterns with period higher than or equal to 32 display pixels, and a high-frequency band with frequencies having periods lower than 32 display pixels), then we could assign a gain factor such as 1.0 to the high-frequency band, meaning that the noise patterns in this frequency band will be completely corrected for. The low-frequency band could be assigned a gain factor such as 0.2, meaning that the ideal correction coefficients needed to compensate for the noise in that band will be multiplied by 0.2, resulting in a less than complete compensation for noise in that frequency band. In some embodiments, it may be desired to apply a low-valued gain factor to a low-frequency component, a high-valued gain factor to a high-frequency component, and a low-valued gain factor to a component of even higher frequency, such that less visible components on both sides of the spectrum are less compensated than a more visible component between them.
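
A minimal Python/NumPy sketch of such a two-band weighting follows; the 32-pixel cutoff and the gain values 0.2 and 1.0 are taken from the example above, while the radial definition of the frequency bands is an illustrative assumption:

    import numpy as np

    def two_band_correction(correction_map, cutoff_period_px=32,
                            low_gain=0.2, high_gain=1.0):
        F = np.fft.fft2(correction_map)
        fy = np.fft.fftfreq(correction_map.shape[0])   # cycles per pixel
        fx = np.fft.fftfreq(correction_map.shape[1])
        radial = np.sqrt(fx[np.newaxis, :] ** 2 + fy[:, np.newaxis] ** 2)
        # Periods of cutoff_period_px or more (low frequencies, including DC)
        # get low_gain; everything else gets high_gain (full correction).
        gain = np.where(radial <= 1.0 / cutoff_period_px, low_gain, high_gain)
        return np.real(np.fft.ifft2(F * gain))

As noted in the next paragraphs, the hard step between the two gain values may be replaced by a gradual transition to avoid discontinuities at the band border.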

Note that the scope of disclosed embodiments is not limited to any particular number or range of frequency bands. Also, it is possible to define continuous bands in order to reduce or avoid any discontinuities at the border between two bands. For example, such a method or apparatus may be implemented so that a gain factor changes gradually from one band to another.

Also note that in practice it may not be optimal to apply the gain factor to the correction map in a case where the native transfer function of the display is not linear. To illustrate, suppose we split the display noise into two frequency bands: a low-frequency band and a high-frequency band. Also assume that a specific pixel, when driven at level 234, should have corrected pixel value 250 when corrected perfectly (with a target of perfect uniformity over the display area). This means that the correction value would be +16 video levels for that pixel. Assume for example that we only want to correct for the high-frequency noise patterns, such that we apply a gain factor of zero for the low frequencies. Furthermore as an example assume that the correction of +16 video levels includes a +12 correction due to low-frequency noise and a +4 correction due to high-frequency noise. Then a simple reasoning would suggest that the desired correction value for correcting high-frequency noise would be +4. However, this would only be correct if the transfer function of the display is linear in this interval. Thus, it may be desirable to consider the correction to be applied in the luminance domain rather than in the digital driving level domain. In this case, although the +4 correction corresponds to the desired high-frequency correction around video level 250, it might not correspond to the desired (or correct) correction at the lower video level 234 if the transfer curve of that pixel is not sufficiently linear in the range being considered.

Thus, it may be desirable to perform the split between the frequency bands not on the digital driving level values (the correction values) but rather in the luminance domain. In one example, the noise map is expressed in absolute luminance values (e.g. in cd/m2), and the division into frequency bands is done on this data. Such a division into frequency bands can, for example, be done using a transformation to a frequency domain such as the Fourier domain. The noise map is transformed to the Fourier domain, and the result is a map of coefficients in the Fourier domain. The gain factors can then be applied directly to the Fourier coefficients. Applying the inverse transform back to the luminance domain then yields the desired corrected luminance values corresponding to the selected frequency bands and gain factors.

The scope of disclosed embodiments is not limited to any particular method of performing a division into frequency bands or of applying the gain factors. Other methods that may be used, for example, are based on a wavelet or other transform and/or are based solely on operations in the luminance domain. Once the luminance values corresponding to the desired correction have been obtained, these luminance values can be easily transformed into digital driving value correction values by, for example, applying the inverse transfer curve for each individual pixel being corrected.
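
As an illustration of that last step, a minimal Python sketch that inverts a per-pixel measured transfer curve by linear interpolation follows; the array layout and the assumption that luminance increases monotonically with driving level are illustrative:

    import numpy as np

    def luminance_to_driving_level(target_lum, levels, lum_at_level):
        # levels: 1-D array of measured driving levels (increasing).
        # lum_at_level[i, r, c]: measured luminance of pixel (r, c) at
        # levels[i], assumed monotonically increasing in i for each pixel.
        rows, cols = target_lum.shape
        out = np.empty_like(target_lum)
        for r in range(rows):
            for c in range(cols):
                # Invert the per-pixel transfer curve by interpolation.
                out[r, c] = np.interp(target_lum[r, c],
                                      lum_at_level[:, r, c], levels)
        return out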

Methods as described herein may also be applied to correction of color non-uniformity. For example, determining whether a non-uniformity is visible may again be done according to a luminance map (or luminance and chromaticity map) and/or according to an actual image to be shown on the display, and correction may include processing of chromaticity values.

Yet another improvement to a basic algorithm for correction of non-uniformities is increasing the number of gray shades. Consider a display system with 1024 shades of gray. After correcting for the non-uniformities, not all of the pixels will be driven between their minimum (0) and maximum (1023) values anymore. This is because some pixels have higher or lower peak luminance values and minimum luminance values than other pixels. Moreover, the number of gray shades may now depend on the spatial location on the display.

For demanding applications such as medical imaging, an inability to guarantee that a small difference in gray level can be perceived at all locations on the display (suppose we want to show an image containing 1024 shades of gray) may be unacceptable. The mere fact that a small difference could be visible at one location on the display and not at another is most likely not acceptable for many applications.

Embodiments also include configurations in which the number of output shades of gray of the display system is chosen to be higher than the maximum number of gray shades that are input to the display system. For example, if the input to the display has 1024 shades of gray, then the correction map in such a configuration has a resolution of more than 1024 shades of gray (for instance, 2048 or 4096, although the scope of disclosed embodiments is not limited to either of these examples or to any particular numbers).

In one such example, a particular pixel input gray value of 234 (in a resolution of 1024 shades of gray) is converted to a corrected output gray value of 256.25 (as denoted in a system of 1024 shades of gray) or, equivalently, 1025 (as denoted in a system of 4096 shades of gray). Thus 4096 correction values are available in this example for the correction of 1024 input values. It may be desirable to choose the number of output shades of gray high enough that it will always be possible, for any pixel, to select correction values such that no pair of input values is mapped to the same output value after correction, although a smaller degree of expansion may be acceptable for other applications.

It is possible to apply such a method so that at all locations on the display, the display system has the same number of gray shades after correction. Note that such a technique may also be used for correction of color non-uniformities. The higher number of gray shades may, for instance, be created by using a dithering technique such as error diffusion. Such a technique may be applied spatially (e.g. over some neighborhood) and/or temporally (e.g. over two or more frames). Any dithering or error-diffusion technique may be used, such as (but not limited to) Floyd-Steinberg, Stucki, or Stevenson-Arce error diffusion. In at least some cases, such a technique may even be applied to display more gray shades than are natively available on the display system.
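
As an illustration of the spatial variant, the following Python sketch applies standard Floyd-Steinberg error diffusion to a corrected image whose values are assumed to be expressed already in output-level units (the scaling convention is an assumption; e.g. the value 256.25 on a 1024-shade scale becomes 1025.0 on a 4096-shade scale before quantization):

    import numpy as np

    def floyd_steinberg(corrected, levels_out):
        img = corrected.astype(float).copy()
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                old = img[y, x]
                new = min(max(int(round(old)), 0), levels_out - 1)
                img[y, x] = new
                err = old - new
                # Diffuse the quantization error to unvisited neighbors
                # with the standard Floyd-Steinberg weights.
                if x + 1 < w:
                    img[y, x + 1] += err * 7 / 16
                if y + 1 < h:
                    if x > 0:
                        img[y + 1, x - 1] += err * 3 / 16
                    img[y + 1, x] += err * 5 / 16
                    if x + 1 < w:
                        img[y + 1, x + 1] += err * 1 / 16
        return img.astype(int)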

Although correction of individual pixel values is described above, it is also expressly contemplated that correction as disclosed herein may be performed on sets or zones of pixels, with individual pixel correction values obtained by interpolation (e.g. linear, bilinear, cubic) or any other method that calculates individual correction values from the zonal correction values. A potential advantage of set- or zone-based correction is a reduction in the computational complexity of the algorithms.
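
A minimal sketch of such zone-based expansion using bilinear interpolation follows (Python with SciPy; the zoom-based implementation is only one of several possible interpolation methods):

    from scipy.ndimage import zoom

    def expand_zonal_corrections(zonal, display_shape):
        # Expand a coarse grid of per-zone correction values to per-pixel
        # values by bilinear interpolation (order=1).
        fy = display_shape[0] / zonal.shape[0]
        fx = display_shape[1] / zonal.shape[1]
        return zoom(zonal, (fy, fx), order=1)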

According to further embodiments, a technique of noise reduction is optimized for medical imaging. Ultimately, it may be desired to apply noise reduction in such a way that the quality of diagnosis is increased. Therefore it may be desirable to understand which noise structures really can impact the accuracy of diagnosis and/or what the effects of noise reduction are (e.g. both positive effects, such as lower noise, and negative effects, such as lower contrast ratio and peak luminance).

In one example of such an algorithm, a first task includes measuring the luminance behavior of the display, e.g. as described above. In particular, such a task may include measuring the luminance behavior (and therefore noise) of each individual pixel (or groups of pixels). A next task includes determining whether or not the measured noise can lower the quality of diagnosis for the particular application being executed at that particular time (for instance, but not limited to, mammogram reading, chest X-ray reading, lung nodule detection, CT scan reading, etc.). This task may be based on information about the clinical features in the medical images being studied. For example, such information may be used to determine whether the noise pattern of the display can have negative impact on the accuracy of diagnosis for this particular application. Further, such information may be used to determine which parts or components of the display noise pattern may have a negative impact on the accuracy of diagnosis and/or how strong such negative impact is expected to be.

In reading mammogram images, for example, the contrast ratio of the display may be very important, and high-frequency noise may have a significant negative impact on the quality of diagnosis. Moreover, a radiologist examining the image is likely to be looking especially for circular or elliptical structures. Embodiments may be configured to apply the noise reduction in such a way that, for example, circular and/or ellipse-like noise structures will be compensated for. Such an embodiment may be configured such that the only operation is to compensate for circular or ellipse-like structures, or other noise reductions as disclosed herein may be applied as well. A task of identifying such structures may include applying one or more shape-discriminant filters (such as Gabor filters) to the luminance map. In an application such as this one, it may also be desirable to achieve a balance between contrast ratio and noise reduction. Moreover, it may be desirable to apply noise reduction only to high-frequency noise. Such a principle may also be applied to color non-uniformities, and identification of relevant structures may be done on the luminance map and/or a chromaticity map.

In one implementation, a luminance noise map is analyzed according to the following algorithm. In a first task, the luminance noise map is transformed to the Fourier domain. In the Fourier domain, coefficients at frequencies corresponding to clinically relevant structures are multiplied by a gain factor of 1.0, and coefficients not corresponding to those frequencies are multiplied by a gain factor of zero. To avoid artifacts or discontinuities, coefficients in between the two extremes may be assigned gradually changing multiplication factors between 1.0 and zero.

Once the multiplication factors are applied, the luminance noise map is transformed back from the Fourier domain to the luminance domain. Then a difference between the original luminance noise map and the resulting map (after processing in the Fourier domain) is calculated. This difference luminance noise map is analyzed for relevant structures, such as the circular and ellipse-like structures that may be relevant for mammograms. The presence of such structures in the difference map indicates that they were removed in the Fourier domain and that a displayed image would not have been compensated for them. As this is not the desired result for this application, if such relevant structures are present in the difference map then these structures are isolated from the background of the difference map and added back to the map that resulted from the processing in the Fourier domain.

In this manner, the difference map may be used to verify that no clinically relevant noise structures were removed (and so would go uncompensated) in the Fourier-domain processing. Any of various methods may be used to add such clinically relevant structures back to the luminance map once they are located, such as isolating the features from the background of the difference map (background subtraction).
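
A compact Python/NumPy sketch of this Fourier-domain filtering, with a gradual gain roll-off and the difference map used for verification, follows (the band limits, the roll-off width, and the choice of a radial band are illustrative assumptions):

    import numpy as np

    def filter_noise_map(noise_map, f_low, f_high, transition=0.2):
        # Keep Fourier components in the clinically relevant band
        # [f_low, f_high] (cycles/pixel), with a gradual roll-off of
        # fractional width `transition` to avoid discontinuities.
        F = np.fft.fft2(noise_map)
        fy = np.fft.fftfreq(noise_map.shape[0])
        fx = np.fft.fftfreq(noise_map.shape[1])
        f = np.sqrt(fx[np.newaxis, :] ** 2 + fy[:, np.newaxis] ** 2)
        ramp_in = np.clip((f - f_low * (1 - transition))
                          / (f_low * transition + 1e-12), 0.0, 1.0)
        ramp_out = np.clip((f_high * (1 + transition) - f)
                           / (f_high * transition + 1e-12), 0.0, 1.0)
        filtered = np.real(np.fft.ifft2(F * ramp_in * ramp_out))
        # The difference map is inspected (e.g. with shape-discriminant
        # filters) for clinically relevant residue to be added back.
        difference = noise_map - filtered
        return filtered, difference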

Once a final luminance noise map has been determined, then the transfer curve of each individual pixel can be used to transform the luminance values to digital driving level values. One or more of the algorithms described above may then be used to actually carry out the noise reduction in the display.

Note that the above description of a mammography application is only intended to be an example, as other types of structures and other frequency bands may be clinically important for other types of images. A desired trade-off between peak luminance, contrast ratio, and noise levels may also differ for other image types. In some situations, it may suffice to execute fewer than all of the tasks in the above description, and also the order of executing these tasks may vary for different images or applications.

In some cases, an image signal may contain a noise structure or component that has the same shape as a feature it is desired to detect (such as a clinically relevant feature). For example, the image signal may include noise from a detector such as an X-ray device. It may be desirable to remove such noise from the image signal before it is displayed.

In some cases, the noise structure or component may be distinguished from the feature it is desired to detect by its contrast. For example, the noise may have a smaller amplitude (e.g. a lower contrast) than the feature. In a further embodiment, the image signal is processed to remove a signal or level representing a noise floor of the detector. For example, the noise representation may be subtracted from the image signal. It may be desirable to limit such removal to a particular frequency band of the image signal (e.g. a band in which the noise is considered to have a significant component) or to some other component of the image signal (e.g. as distinguished by a shape-discriminant filter). Such an operation may be performed before applying a nonuniformity correction as disclosed herein, or after such correction. For example, an image corresponding to a component of the noise floor of the detector which relates to the feature it is desired to detect (e.g. in terms of its shape) may be subtracted from an image signal that has already been corrected for nonuniformity according to an algorithm as disclosed herein.

It may be desirable to use different specific noise reduction methods for different images or image types (such as a mammogram; a chest image; a CT, MRI, or PET scan). Profiles may be created, for example, that link the use of a specific program to the use of a specific noise reduction method, or that link specific images to the use of specific noise reduction methods. Such a profile may be created at least in part in software. Detecting which noise reduction method to apply may be done automatically according to the image or video sequence to be shown on the display (for instance, based on (without limitation) neural networks that classify images or on statistical characteristics of the images/video). Alternatively or additionally, such detection may be based on inputs (hints or messages) from the applications running on the host PC or on inputs of the user of the display. Embodiments may also be configured such that different parts of the display may simultaneously use different noise reduction methods, as in a case where on the left-hand side a PET image is shown and on the right-hand side a CT image is shown, for example. In some cases, it may be desirable to change the type of noise reduction algorithm used dynamically over time.

Combinations of techniques as described above (such as visibility analysis, tailoring correction according to diagnostic relevance, frequency-based correction) are also expressly contemplated, as are applications of such combinations to greyscale images and to color images as appropriate.

FIG. 21 shows a flow chart of a method M100 according to an embodiment. For each of a plurality of pixels of a display, task T100 obtains a measure of a light-output response of at least a portion of the pixel at each of a plurality of driving levels. For example, task T100 may obtain the measures from an image capturing device or may retrieve the measures from storage (e.g. a non-volatile memory of the display). To increase a visibility of a characteristic of a displayed image during a use of the display, task T200 modifies a map that is based on the obtained measures. Based on the modified map and an image signal, task T300 obtains a display signal. Method M100 may be implemented as one or more sets (e.g. sequences) of instructions to be executed by one or more arrays of logic elements such as microprocessors, embedded controllers, or IP cores.

FIG. 22 shows a flow chart of an implementation M110 of method M100. Task T150 creates a light-output map based on the obtained measures. For example, task T150 may create a luminance map and/or a chrominance map. Task T210 is an implementation of task T200 that modifies the light-output map according to an image characteristic (e.g. a frequency or feature of interest). Task T250 calculates a correction map based on the modified light-output map. Task T310 is an implementation of task T300 that obtains a display signal based on the correction map and an image signal.

FIG. 23 shows a flow chart of an implementation M120 of method M100. Task T260 calculates a correction map based on the light-output map. Task T220 is an implementation of task T200 that modifies the correction map according to an image characteristic (e.g. a frequency or feature of interest). Task T320 is an implementation of task T300 that obtains a display signal based on the modified correction map and an image signal. In some applications, task T220 may be altered at run-time to modify the correction map according to a different image characteristic.

FIG. 24 shows a block diagram of an apparatus 100 according to an embodiment. Transformation circuit 110 stores a correction map that may include one or more lookup tables or other correction functions (e.g. according to a classification). For each pixel value, correction circuit 120 obtains a corresponding function or value from transformation circuit 110 and outputs a corrected value for display. As shown in FIG. 11, an apparatus according to an embodiment may also be configured to output display values directly from a lookup table. FIG. 25 shows a block diagram of a system 200 according to an embodiment that also includes video memory 40 and a display 130. Transformation circuit 110 may be implemented as an array of storage elements (e.g. a semiconductor memory such as DRAM or flash RAM) and may be implemented in the same storage device as video memory 40.

FIG. 26 shows a block diagram of an implementation 102 of apparatus 100 that includes a modifying circuit 130 configured to calculate the correction map of transformation circuit 110 (e.g. from another correction map, or from a light-output response map) according to a characteristic of an image feature that it is desired to distinguish. Modifying circuit 130 may be implemented to perform any of the methods or algorithms described herein, such as applying different gain factors to different frequency bands of the map. One or both of correction circuit 120 and modifying circuit 130 may be implemented as an array of logic elements (e.g. a microprocessor or embedded controller) or as one of several tasks executing on such an array.

The foregoing presentation of the described embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments are possible, and the generic principles presented herein may be applied to other embodiments as well. For example, operations described as obtaining or being performed on or with a luminance map may also be used to obtain or be performed on or with a chrominance map. An embodiment may be implemented in part or in whole as a hard-wired circuit, or as a circuit configuration fabricated into a device such as an application-specific integrated circuit (ASIC) or application-specific standard product (ASSP), or into a field-programmable gate array (FPGA) or other programmable array.

An embodiment may also be implemented in part or in whole as a firmware program loaded into non-volatile storage (for example, an array of storage elements such as flash RAM or ferroelectric memory) or a software program loaded from or into a data storage medium (for example, an array of storage elements such as a semiconductor or ferroelectric memory, or a magnetic or optical medium such as a disk) as machine-readable code, such code being instructions executable by an array of logic elements such as a microprocessor, embedded microcontroller, or other digital signal processing unit. Embodiments also include computer program products for executing any of the methods disclosed herein, and transmission of such a product over a communications network (e.g. a local area network, a wide area network, or the Internet). Thus, the present invention is not intended to be limited to the embodiments shown above but rather is to be accorded the widest scope consistent with the principles and novel features disclosed in any fashion herein.

Inventors: Matthijs, Paul; Kimpe, Tom

References Cited (patent number, priority date, assignee, title):

US 5208689, Sep 13 1990, U.S. Philips Corporation: Electro-optic display device with increased number of transmission levels
US 5225919, Jun 21 1990, Matsushita Electric Industrial Co., Ltd.: Optical modulation element including subelectrodes
US 5359342, Jun 15 1989, Matsushita Electric Industrial Co., Ltd.: Video signal compensation apparatus
US 5621821, Jun 04 1992, Sony Corporation; Sony United Kingdom Limited: Apparatus and method for detecting distortions in processed image data
US 5706816, Jul 17 1995, Hitachi Aloka Medical, Ltd.: Image processing apparatus and image processing method for use in the image processing apparatus
US 5708451, Jul 20 1995, SGS-Thomson Microelectronics, S.r.l.: Method and device for uniforming luminosity and reducing phosphor degradation of a field emission flat display
US 5774599, Mar 14 1995, Eastman Kodak Company; Washington University: Method for precompensation of digital images for enhanced presentation on digital displays with limited capabilities
US 5793344, Mar 24 1994, Semiconductor Energy Laboratory Co., Ltd.: System for correcting display device and method for correcting the same
US 5838396, Dec 14 1994, Matsushita Electric Industrial Co., Ltd.: Projection type image display apparatus with circuit for correcting luminance nonuniformity
US 6084981, Mar 18 1996, Hitachi Medical Corporation: Image processing apparatus for performing image converting process by neural network
US 6089739, Sep 30 1997, Sony Corporation: Surface light source device
US 6115092, Sep 15 1999, Transpacific Exchange, LLC: Compensation for edge effects and cell gap variation in tiled flat-panel, liquid crystal displays
US 6154561, Apr 07 1997, Photon Dynamics, Inc.: Method and apparatus for detecting Mura defects
US 6473065, Nov 16 1998, Canon Kabushiki Kaisha: Methods of improving display uniformity of organic light emitting displays by calibrating individual pixel
US 6704008, Jan 26 2000, Seiko Epson Corporation: Non-uniformity correction for displayed images
US 6738035, Sep 22 1997, RD&IP, L.L.C.: Active matrix LCD based on diode switches and methods of improving display uniformity of same
US 6782137, Nov 24 1999, General Electric Company: Digital image display improvement system and method
US 6844883, Jun 24 2002, Funai Electric Co., Ltd.: Color non-uniformity correction method and apparatus
US 6897842, Sep 19 2001, Intel Corporation: Nonlinearly mapping video date to pixel intensity while compensating for non-uniformities and degradations in a display
US 6963321, May 09 2001, Clare Micronix Integrated Systems, Inc.: Method of providing pulse amplitude modulation for OLED display drivers
US 7068333, Oct 16 2001, Eizo Nanao Corporation: Liquid crystal display with photodetectors having polarizing plates mounted thereon and its correcting method
US 7088318, Oct 22 2004, AGL OLED Limited: System and method for compensation of active element variations in an active-matrix organic light-emitting diode (OLED) flat-panel display
US 7129920, May 17 2002, Google LLC: Method and apparatus for reducing the visual effects of nonuniformities in display systems
US 7211452, Sep 22 2004, Global Oled Technology LLC: Method and apparatus for uniformity and brightness correction in an OLED display
US 7227519, Oct 04 1999, Matsushita Electric Industrial Co., Ltd.: Method of driving display panel, luminance correction device for display panel, and driving device for display panel
US 7301618, Mar 29 2005, Global Oled Technology LLC: Method and apparatus for uniformity and brightness correction in an OLED display
US 7502038, Oct 23 2003, EIZO Corporation: Display characteristics calibration method, display characteristics calibration apparatus, and computer program
US 7576750, Aug 21 2003, Eizo GmbH: Method and arrangement for optimizing a luminance characteristic curve
US 7952555, Nov 19 2003, EIZO Corporation: Luminance control method, liquid crystal display device and computer program
US 2001/0024178; US 2001/0041489; US 2002/0047568; US 2002/0154076; US 2004/0174320; US 2006/0071886; US 2010/0033497
EP 30787170; EP 1424672; EP 1536399
JP 11295699; JP 2000305532; JP 2002116728; JP 60171573; JP 9198019
WO 3100756
Assigned to Barco N.V. (assignment on the face of the patent, executed Dec 28 2011).