An apparatus for compensating for pixel distortion while reproducing hologram data includes an extraction unit, a determination and calculation unit, a table, and a compensation unit. The extraction unit extracts a reproduced data image from a reproduced image frame including the reproduced data image and borders. The determination and calculation unit determines position values of edges of the extracted reproduced data image, and calculates average magnification error values of pixels within line data from position values of start and end point pixels thereof, which are based on the determined position values of the edges. The table stores misalignment compensation values for the pixels within the line data, wherein the misalignment compensation values correspond to predetermined references for average magnification error values. The compensation unit compensates for pixel positions in the extracted reproduced data image using the misalignment compensation values that correspond to the calculated average magnification error values.

Patent
   RE45496
Priority
Jun 24 2004
Filed
Oct 22 2013
Issued
Apr 28 2015
Expiry
Feb 02 2025
Entity
Large
Status
EXPIRED
0. 17. A method of reading data recorded as a holographic image on an optical storage medium, comprising:
reading an image frame from the optical storage medium;
obtaining a reproduced data image having pixels and at least one alignment mark using a 1:1 pixel matching operation;
determining at least one misalignment correction value based on the at least one alignment mark;
producing an aligned data image by aligning the pixels of the reproduced data image from a start point of origin according to the at least one misalignment correction value;
determining one or more average values associated with magnification error based on the at least one misalignment correction value;
selecting a number of blocks on the aligned data image based on the one or more average values;
mapping the one or more average values to one or more misalignment compensation values; and
compensating pixels for each block based on the one or more misalignment compensation values.
0. 28. A method of reading data recorded as a holographic image on an optical storage medium, comprising:
reading an image frame from the optical storage medium to obtain a reproduced image frame, and at least two alignment marks;
photoelectrically converting the image frame using a 1:1 pixel matching operation, to obtain a reproduced data image, a border and at least two alignment marks provided in the image frame;
determining a vertical (x) and a horizontal (y) misalignment correction value based on the alignment marks and aligning the pixels of the reproduced data image from a start point of origin and the misalignment correction value, to obtain an aligned data image;
ascertaining a vertical (X) and a horizontal (Y) average magnification error per pixel value based on the respective misalignment correction values and selecting a number of blocks on the aligned data image based on the average magnification error per pixel values;
mapping the X and Y average magnification error per pixel values to a misalignment compensation value and performing pixel compensation for each block using the respective misalignment compensation value.
0. 1. A method of using a pixel compensation apparatus to compensate for pixel distortion while reproducing hologram data, which is recorded in a storage medium as an interference pattern that is obtained through interference of reference light with signal light that is modulated in accordance with the data, the method comprising the steps of:
reading an image frame from said storage medium to obtain a reproduced image frame;
said pixel compensation apparatus extracting a reproduced data image from said reproduced image frame including the reproduced data image and borders;
said pixel compensation apparatus determining position values of edges of the extracted reproduced data image;
said pixel compensation apparatus determining position values of start and end point pixels of line data based on the determined position values of the edges;
said pixel compensation apparatus calculating average magnification error values of pixels within the line data based on the determined position values of the start and end point pixels, wherein the average magnification error values represent a ratio of a difference between a size of the extracted reproduced data image and an actual data size to the actual data size; and
said pixel compensation apparatus compensating for pixel positions in the extracted reproduced data image using predetermined misalignment compensation values for the pixels that correspond to the calculated average magnification error values,
wherein the calculated average magnification error values include an x-directional average magnification error value and a y-directional average magnification error value,
wherein the step of calculating the average magnification error values includes the steps of:
calculating an x-directional magnification error value X_mag_error and a y-directional magnification error value Y_mag_error using the following Equations; and

X_mag_error=(x2−x1)−X_data_size

Y_mag_error=(y3−y1)−Y_data_size
calculating the degree of x-directional misalignment per pixel Δx and the degree of y-directional misalignment per pixel Δy using the following Equations,

Δx=X_mag_error/X_data_size

Δy=Y_mag_error/Y_data_size, and
wherein X_data_size is an actual data size in the x direction, x1 is the position value of the start point pixel in the x direction, x2 is the position value of the end point pixel in the x direction, Y_data_size is an actual data size in the y direction, y1 is the position value of the start point pixel in the y direction, and y3 is the position value of the end point pixel in the y direction.
0. 2. The method of claim 1, further comprising the step of differentiating data sections, which determine compensation directions of the pixel positions, based on the position values of the start or end point pixels and the calculated average magnification error values,
wherein the respective pixel positions are compensated for in accordance with the compensation directions of the differentiated data sections.
0. 3. The method of claim 1, wherein the predetermined misalignment compensation values for the pixels are values that are set to correspond to each of predetermined references for average magnification error values, and are stored in a table.
0. 4. A method of using a pixel compensation apparatus to compensate for pixel distortion while reproducing hologram data, which is recorded in a storage medium as an interference pattern that is obtained through interference of reference light with signal light that is modulated in accordance with the data, the method comprising the steps of:
reading an image frame from said storage medium to obtain a reproduced image frame;
said pixel compensation apparatus extracting a reproduced data image and alignment marks inserted into predetermined positions from said reproduced image frame including the reproduced data image and borders;
said pixel compensation apparatus detecting degrees of misalignments of pixels within the reproduced data image using the extracted alignment marks, and calculating misalignment correction values based on the detection results;
said pixel compensation apparatus correcting pixel positions of the reproduced data image based on the calculated misalignment correction values, determining position values of edges of the reproduced data image the pixel positions of which are corrected, and determining position values of start and end point pixels of line data based on the determined position values of the edges;
said pixel compensation apparatus calculating average magnification error values of pixels based on the determined position values of the start and end point pixels, wherein the average magnification error values represent a ratio of a difference between a size of the extracted reproduced data image and an actual data size to the actual data size; and
said pixel compensation apparatus compensating for pixel positions in the extracted reproduced data image using predetermined misalignment compensation values for the pixels that correspond to the calculated average magnification error values,
wherein the calculated average magnification error values include an x-directional average magnification error value and a y-directional average magnification error value,
wherein the step of calculating the average magnification error values includes the steps of:
calculating an x-directional magnification error value X_mag_error and a y-directional magnification error value Y_mag_error using the following Equations; and

X_mag_error=(x2−x1)−X_data_size

Y_mag_error=(y3−y1)−Y_data_size
calculating the degree of x-directional misalignment per pixel Δx and the degree of y-directional misalignment per pixel Δy using the following Equations,

Δx=X_mag_error/X_data_size

Δy=Y_mag_error/Y_data_size, and
wherein X_data_size is an actual data size in the x direction, x1 is the position value of the start point pixel in the x direction, x2 is the position value of the end point pixel in the x direction, Y_data_size is an actual data size in the y direction, y1 is the position value of the start point pixel in the y direction, and y3 is the position value of the end point pixel in the y direction.
0. 5. The method of claim 4, further comprising the step of differentiating data sections, which determine compensation directions of the pixel positions, based on the calculated average magnification error values,
wherein the pixel positions are compensated for in accordance with the compensation directions of the differentiated data sections.
0. 6. The method of claim 4, wherein the calculated average magnification error values include an x-directional average magnification error value and a y-directional average magnification error value.
0. 7. The method of claim 4, wherein the alignment marks are formed at the predetermined positions in a boundary region between the borders and the reproduced data image.
0. 8. The method of claim 4, wherein the alignment marks are formed at the predetermined positions in the borders.
0. 9. The method of claim 4, wherein each of the alignment marks has a 4×4 block size and a shape in which 2×2 on/off sub-blocks are alternately arranged.
0. 10. A method of using a pixel compensation apparatus to compensate for pixel distortion while reproducing hologram data, which is recorded in a storage medium as an interference pattern that is obtained through interference of reference light with signal light that is modulated in accordance with the data, the method comprising the steps of:
reading an image frame from said storage medium to obtain a reproduced image frame;
said pixel compensation apparatus extracting a reproduced data image and a plurality of alignment marks inserted into predetermined positions from a reproduced image frame including the reproduced data image, borders and the plurality of alignment marks;
said pixel compensation apparatus dividing the extracted reproduced data image into a plurality of sub-image blocks based on the plurality of extracted alignment marks;
said pixel compensation apparatus determining position values of edges of each of the sub-image blocks;
said pixel compensation apparatus determining position values of start and end point pixels of line data based on the determined position values of the edges;
said pixel compensation apparatus calculating average magnification error values of pixels within the respective sub-image blocks, based on the determined position values of the start and end point pixels, wherein the average magnification error values represent a ratio of a difference between a size of the sub-image block and an actual data size to the actual data; and
said pixel compensation apparatus compensating for pixel positions in the sub-image blocks using predetermined misalignment compensation values for the pixels that correspond to the calculated average magnification error values,
wherein the calculated average magnification error values include an x-directional average magnification error value and a y-directional average magnification error value,
wherein the step of calculating the average magnification error values includes the steps of:
calculating an x-directional magnification error value X_mag_error and a y-directional magnification error value Y_mag_error using the following Equations; and

X_mag_error=(x2−x1)−X_data_size

Y_mag_error=(y3−y1)−Y_data_size
calculating the degree of x-directional misalignment per pixel Δx and the degree of y-directional misalignment per pixel Δy using the following Equations,

Δx=X_mag_error/X_data_size

Δy=Y_mag_error/Y_data_size, and
wherein X_data_size is an actual data size in the x direction, x1 is the position value of the start point pixel in the x direction, x2 is the position value of the end point pixel in the x direction, Y_data_size is an actual data size in the y direction, y1 is the position value of the start point pixel in the y direction, and y3 is the position value of the end point pixel in the y direction.
0. 11. The method of claim 10, further comprising the step of differentiating data sections, which determine compensation directions of the pixel positions within the respective sub-image blocks, based on the position values of the start or end point pixels and the calculated average magnification error values,
wherein the pixel positions are compensated for in accordance with the compensation directions of the differentiated data sections.
0. 12. The method of claim 11, wherein each of the compensation directions with respect to each of the data sections is changed whenever a value, which is obtained by accumulating the calculated average magnification error values on the position value of the start or end point pixel, reaches 0.5×n, wherein n is an integer.
0. 13. The method of claim 11, wherein the predetermined misalignment compensation values for the pixels are values that are set to correspond to each of predetermined references for average magnification error values, and are stored in a table.
0. 14. The method of claim 10, wherein the alignment marks are formed at the predetermined positions in a boundary region between the borders and the reproduced data image at predetermined intervals.
0. 15. The method of claim 10, wherein the alignment marks are formed at the predetermined positions in the borders at predetermined intervals.
0. 16. The method of claim 10, wherein each of the alignment marks has a 4×4 block size and a shape in which 2×2 on/off sub-blocks are alternately arranged.
0. 18. The method of claim 17, wherein the blocks are selected using points on the aligned data image where a fraction part of accumulation of the at least one misalignment correction value for the pixels in at least one direction becomes a predetermined value.
0. 19. The method of claim 18, wherein the fraction part of accumulation of misalignment correction value for the pixels in respective vertical and horizontal directions becomes a predetermined value.
0. 20. The method of claim 18, wherein the predetermined value is one of half a pixel and a pixel.
0. 21. The method of claim 17, further comprising determining a direction of compensating the pixels.
0. 22. The method of claim 20, wherein determining the direction of compensating the pixels comprises:
dividing the aligned data image into three data sections starting from the start point of origin; and assigning different compensation directions to adjacent sections.
0. 23. The method of claim 17, wherein said determining at least one misalignment correction value comprises determining a vertical (x) and a horizontal (y) misalignment correction value.
0. 24. The method of claim 22, wherein determining a vertical (x) and a horizontal (y) misalignment correction value comprises:
detecting a misalignment based on amounts of light in pixels in vicinity of off-pixels or on-pixels in the at least one alignment marks; and
calculating the vertical (x) and the horizontal (y) misalignment correction value based on detecting the misalignment.
0. 25. The method of claim 22, wherein determining one or more average values associated with magnification error comprises ascertaining a vertical (X) and the horizontal (Y) average magnification error per pixel value based on the vertical (x) and a horizontal (y) misalignment correction value.
0. 26. The method of claim 24, wherein ascertaining a vertical (X) and a horizontal (Y) average magnification error per pixel value comprises:
calculating a vertical magnification error value and a horizontal magnification error value based on position values of start and end point pixels; and
calculating a degree of a vertical misalignment per pixel based on the vertical magnification error value and a degree of a horizontal misalignment per pixel based on the horizontal magnification error value.
0. 27. The method of claim 17, wherein the mapping between the one or more average magnification error per pixel values and the one or more misalignment compensation values is provided in a lookup table.


Y_mag_error=(y_3−y_1)−Y_data_size   Eq. 6

Thereafter, average magnification error values per pixel are calculated by calculating the degree of an x-directional misalignment per pixel Δx and the degree of a y-directional misalignment per pixel Δy using the following Equations 7 and 8, respectively. The calculated average magnification error values per pixel are transferred to the pixel compensation block 1126 through a line L13.
Δx=X_mag_error/X_data_size   Eq. 7
Δy=Y_mag_error/Y_data_size   Eq. 8

In Equations 5 to 8, X_data_size is an actual data size in the x direction, x_1 is the position value of the start point pixel in the x direction, x_2 is the position value of the end point pixel in the x direction, Y_data_size is an actual data size in the y direction, y_1 is the position value of the start point pixel in the y direction, and y_3 is the position value of the end point pixel in the y direction.

For example, if x_1 is 0.4 pixel, x_2 is 101.3 pixels and X_data_size is 100 pixels, Δx is 0.9 pixel/100=0.009 pixel. That is, the average magnification error value per pixel (i.e., a misalignment value) in the x direction is 0.009 pixel.
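By way of illustration only, the computation of Equations 5 to 8 can be sketched as follows. This is a minimal Python sketch written for this description, not part of the patented implementation; the function name, argument layout and example values follow the definitions and the worked example above.

```python
def average_magnification_error(x1, x2, y1, y3, x_data_size, y_data_size):
    """Per-pixel average magnification error values (Equations 5 to 8).

    x1, x2 -- position values of the start and end point pixels in the x direction
    y1, y3 -- position values of the start and end point pixels in the y direction
    x_data_size, y_data_size -- actual data sizes in the x and y directions (pixels)
    """
    x_mag_error = (x2 - x1) - x_data_size   # Eq. 5
    y_mag_error = (y3 - y1) - y_data_size   # Eq. 6
    dx = x_mag_error / x_data_size          # Eq. 7: x-directional misalignment per pixel
    dy = y_mag_error / y_data_size          # Eq. 8: y-directional misalignment per pixel
    return dx, dy

# Worked example from the text: x_1 = 0.4 pixel, x_2 = 101.3 pixels, X_data_size = 100 pixels.
dx, dy = average_magnification_error(0.4, 101.3, 0.0, 100.0, 100, 100)
print(round(dx, 3))   # 0.009 pixel of misalignment per pixel in the x direction
```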

Thereafter, the pixel compensation direction determination block 1125 divides the reproduced data image region into a predetermined number of sections based on the average magnification error values per pixel (i.e., the misalignment values) that are received from the pixel magnification error value calculation block 1124. In detail, based on the accumulatively added misalignment values (for example, Δx, 2Δx, 3Δx, . . . , as shown in FIG. 5A), it generates a compensation direction determination signal that causes the compensation directionality to differ in accordance with the divided region (the data section), and then provides the generated compensation direction determination signal to the pixel compensation block 1126 through a line L14.

That is, the data image region is divided by determining a point (position) where the fraction part of the accumulation of the misalignment value becomes 0.5 (half a pixel) or 0. For example, as described above, if it is assumed that x_1 is 0.4 pixel and Δx is 0.009 pixel, the point (position) where the fraction part of the accumulation of the misalignment value becomes greater than 0.5 pixel is the minimum integer n_1 that satisfies the equation 0.4+n_1*Δx>0.5, and accordingly, n_1 becomes the minimum integer that is greater than (0.5−0.4)/Δx, that is, 12 (11.1111). Further, the point (position) where the fraction part of the accumulation of the misalignment value becomes 0 is the minimum integer n_2 that satisfies the equation 0.4+n_2*Δx>1.0, and accordingly, n_2 becomes the minimum integer that is greater than (1.0−0.4)/Δx, that is, 67 (66.6666). Furthermore, the point (position) where the fraction part of the accumulation of the misalignment value becomes greater than 0.5 again is the minimum integer n_3 that satisfies the equation 0.4+n_3*Δx>1.5, and accordingly, n_3 becomes the minimum integer that is greater than (1.5−0.4)/Δx, that is, 123 (122.2222). However, since the size of the data image is 100 pixels, n_3 is disregarded.

Accordingly, in consideration of the above-assumed values, the pixel compensation direction determination block 1125 divides the image region into a data section from the 1st pixel point to the 11th pixel point, a data section from the 12th pixel point to the 66th pixel point, and a data section from the 67th pixel point to the 100th pixel point. The divided data sections are assigned selectively different compensation directions, as shown in FIG. 5B. That is, the data section from the 1st pixel point to the 11th pixel point has a compensation direction indicated by an arrow A, the data section from the 12th pixel point to the 66th pixel point has a compensation direction indicated by an arrow B, and the data section from the 67th pixel point to the 100th pixel point has a compensation direction indicated by an arrow C that is identical to that indicated by the arrow A.
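The determination of the section boundaries described above (the minimum integers n_1, n_2, . . . at which the fractional part of the accumulated misalignment crosses the next half-pixel threshold) can likewise be sketched. The routine below is an illustration written for this description under the same example values; it is not the circuitry of the pixel compensation direction determination block 1125.

```python
import math

def section_boundaries(start_frac, delta, data_size):
    """Pixel points at which the fractional part of the accumulated misalignment
    (start_frac + n*delta) first exceeds the next half-pixel threshold 0.5, 1.0, 1.5, ...
    A sketch of the division rule only; delta is the misalignment per pixel."""
    if delta <= 0:
        return []
    boundaries = []
    threshold = 0.5
    while True:
        if start_frac > threshold:          # threshold already exceeded before the first pixel
            threshold += 0.5
            continue
        # smallest integer n satisfying start_frac + n*delta > threshold
        n = math.floor((threshold - start_frac) / delta) + 1
        if n > data_size:                   # beyond the data image; disregarded (cf. n_3 above)
            break
        boundaries.append(n)
        threshold += 0.5
    return boundaries

print(section_boundaries(0.4, 0.009, 100))   # [12, 67] -> sections 1-11, 12-66 and 67-100
```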

Thereafter, when the average magnification error value per pixel is received through the line L13, the pixel compensation block 1126 searches the lookup table 1127 to determine a corresponding reference among a plurality of predetermined references for average magnification error values; and performs position compensation on the pixels of the reproduced data image, which is provided through the line L11, using the misalignment compensation values for the pixels that correspond to the reference for the average magnification error values. For this purpose, the lookup table 1127 stores the plurality of predetermined references for average magnification error values, and the misalignment compensation values for the pixels that correspond to each of the references for the average magnification error values.
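The table-driven compensation step can be sketched as below. The reference values and compensation entries shown are placeholders invented for this illustration; the text does not disclose the actual contents of the lookup table 1127, and indexing by the nearest reference is an assumption made here.

```python
def lookup_compensation(avg_error, table):
    """Search a lookup table keyed by predetermined references for average
    magnification error values and return the misalignment compensation
    values stored for the reference closest to the calculated error.
    (Nearest-reference selection is an assumption made for this sketch.)"""
    reference = min(table, key=lambda r: abs(r - avg_error))
    return table[reference]

# Placeholder table: reference average error per pixel -> compensation values per data section.
table = {
    0.000: [0, 0, 0],
    0.009: [0, 1, 0],   # e.g. shift the middle data section by one pixel
    0.020: [1, 1, 1],
}
print(lookup_compensation(0.0093, table))   # -> [0, 1, 0]
```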

Accordingly, in accordance with the present invention, it is possible to realize the reproduction of hologram data that can suppress a degradation of picture quality, through position compensation per pixel using a 1:1 pixel matching method, even without using an oversampling technique.

A second preferred embodiment of the present invention will be described hereinafter in detail with reference to the accompanying drawings. In the present preferred embodiment, identical reference numerals are used to designate components identical to those of the first preferred embodiment, thus omitting descriptions thereof.

FIG. 6 is a block diagram of a holographic reproduction system that employs an apparatus for compensating for the pixel distortion in reproduction of hologram data in accordance with a second preferred embodiment of the present invention.

With reference to FIG. 6, the holographic reproduction system includes a spindle motor 102, a storage medium 104, a reading light path 106, a reproduced light path 108, an image detection block 110 and a reproduced pixel compensation block 612. In this case, the reproduced pixel compensation block 612 refers to a reproduced pixel compensation apparatus in accordance with the present preferred embodiment. The reproduced pixel compensation apparatus of the present preferred embodiment includes a border/alignment mark detection block 6121, a sub-block generating block 6122, a misalignment correction value calculation block 1122, a pixel position determination block 1123, a pixel magnification error value calculation block 1124, a pixel compensation direction determination block 1125, a pixel compensation block 1126 and a lookup table 1127.

That is, the holographic reproduction system that employs the reproduced pixel compensation apparatus of the present preferred embodiment is provided with the storage medium 104 rotated by the spindle motor 102. Further provided are the reading light path 106, along which reading light necessary to reproduce recorded hologram data is irradiated onto the storage medium 104, and the reproduced light path 108, along which data image light (i.e., a checker-shaped pattern of binary data) that is reproduced through the irradiation of the reading light is output.

Furthermore, the image detection block 110, such as a CCD camera, is provided on the reproduced light path 108. Unlike a conventional holographic reproduction system, which represents each pixel of the reproduced image light as n×n pixels (for example, 3×3 pixels) and outputs the data, the CCD camera generates a reproduced image frame that is photoelectrically converted by a 1:1 pixel matching method, and transfers the generated image frame to the border/alignment mark detection block 6121 in the reproduced pixel compensation block 612. For example, when the data image light has a resolution of 240×240 and 3-pixel upper, lower, right and left borders, the reproduced image frame includes a 240×240-sized reproduced data image and 3-pixel upper, lower, right and left borders.

Thereafter, the border/alignment mark detection block 6121 detects the borders of the reproduced data image using, e.g., the total brightness of pixels with respect to each line of the reproduced image frame; detects a plurality of alignment marks inserted into, e.g., a boundary region between a border region and an image; and extracts the reproduced data image from the reproduced image frame, based on information on the detected borders. In this case, the alignment marks are arranged at predetermined intervals along the four sides of the data image region. The reason for this is to divide the reproduced data image into a plurality of sub-image blocks having a predetermined size based on the alignment marks, so that it is possible not only to compensate for misalignment attributable to the magnification error of pixels and the linear distortion of an entire image but also to compensate for the nonlinear distortion of the entire image by approximating the nonlinear distortion to linear distortions of the sub-image blocks.
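A possible realization of the border detection mentioned above (using the total brightness of the pixels of each line) is sketched below. Treating the border lines as darker than the data lines, the threshold value and the use of NumPy row/column sums are assumptions made for this sketch, not details taken from the patent.

```python
import numpy as np

def find_data_region(frame, threshold):
    """Locate the reproduced data image inside a reproduced image frame by
    summing pixel brightness per row and per column and keeping the lines
    whose total brightness exceeds a threshold (assumed criterion)."""
    row_sums = frame.sum(axis=1)
    col_sums = frame.sum(axis=0)
    rows = np.where(row_sums > threshold)[0]
    cols = np.where(col_sums > threshold)[0]
    # first/last bright row and column give the edge positions of the data image
    return rows[0], rows[-1], cols[0], cols[-1]

# frame: 2-D array of the reproduced image frame (e.g. 246x246 with 3-pixel borders);
# top, bottom, left, right = find_data_region(frame, threshold=0.5 * frame.shape[1])
```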

That is, as shown in FIG. 7, when hologram data is recorded in a storage medium, a plurality of alignment marks AM1 to AM12 arranged at predetermined intervals can be inserted into a boundary region 704 between a border region 702 and a data image region 706. The border/alignment mark detection block 6121 extracts the plurality of alignment marks AM1 to AM12 and the reproduced data image, and provides them to the sub-block generating block 6122. In FIG. 7, reference numerals SB1 to SB9 refer to sub-image blocks that are formed to have a uniform size (or different sizes) based on the alignment marks AM1 to AM12, respectively. These sub-image blocks SB1 to SB9 will be described in detail later.

Thereafter, the sub-block generating block 6122 divides the data image into predetermined-sized sub-image blocks based on the plurality of alignment marks AM1 to AM12 extracted from the boundary region 704 between the borders and the image. For example, as shown in FIG. 7, if it is assumed that the 12 alignment marks AM1 to AM12 are inserted into the boundary region 704 between the borders and the image at predetermined intervals, the data image extracted from the reproduced frame is divided into the 9 sub-image blocks SB1 to SB9. For example, if it is assumed that each of the alignment marks has a 4×4 size, the data image can be divided into the sub-image blocks by cutting the data image with reference to the central point of each of the alignment marks. Although the case where the data image is divided into the 9 sub-image blocks has been described as an example, the present invention is not limited thereto. That is, the data image can be divided into a larger number of sub-image blocks by inserting a larger number of alignment marks into a frame at regular intervals (or irregular intervals) at the time of recording alignment marks.
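The division into sub-image blocks can be sketched as follows. Cutting the image at the centre point of each alignment mark follows the 4×4-mark example above; representing the mark centres as two coordinate lists is an assumption made for this sketch.

```python
def split_into_subblocks(image, mark_centers_x, mark_centers_y):
    """Cut the extracted data image (a 2-D array) into sub-image blocks at the
    centre points of the alignment marks along the horizontal and vertical
    boundary regions (e.g. 12 marks around the image -> 9 blocks SB1..SB9)."""
    height, width = image.shape
    x_cuts = [0] + sorted(mark_centers_x) + [width]
    y_cuts = [0] + sorted(mark_centers_y) + [height]
    blocks = []
    for yi in range(len(y_cuts) - 1):
        for xi in range(len(x_cuts) - 1):
            blocks.append(image[y_cuts[yi]:y_cuts[yi + 1], x_cuts[xi]:x_cuts[xi + 1]])
    return blocks

# With two interior cut positions per axis, a 240x240 data image yields 3x3 = 9 sub-image blocks.
```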

If it is assumed that linear/nonlinear distortion of an entire image has not occurred in a reproduction frame, the reproduction frame can have a shape as shown in FIG. 7. However, linear/nonlinear distortion can occur due to the factors described above in the section of background of the invention. As an example, nonlinear distortion, wherein, e.g., sides that connect vertexes are nonlinearly curved into a reproduced frame (indicated by reference characters “a” and “b”) while the locations of the vertexes of the reproduction frame are kept intact, may occur in an entire image, as shown in FIG. 8.

Therefore, if the linear/nonlinear distortion occurs, the region “b” becomes a region where a data image should have existed but does not actually exist. If a data image is extracted using the conventional oversampling technique after the linear/nonlinear distortion has occurred, pixels may be extracted from a region where an actual data image does not exist (i.e., the region “b”).

In the present invention, in the case of linear distortion, the distortions of pixels are compensated for by treating the pixels as having been shifted. Meanwhile, when the linear/nonlinear distortion occurs in the entire image, the alignment marks undergo almost the same linear/nonlinear distortion as the data image. Further, in the present invention, the nonlinear distortion can be approximated by linear distortions of the sub-image blocks, and data pixels are extracted from the sub-image blocks, which are formed based on the alignment marks, using the 1:1 matching method. Accordingly, the extraction of data pixels from a region where a data image does not exist (i.e., the region “b”) can be prevented in the case of nonlinear distortion as well as in the case of linear distortion.

Thereafter, the sub-block generating block 6122 provides the sub-image blocks SB1 to SB9, which are obtained using the plurality of alignment marks AM1 to AM12, to the pixel position determination block 1123 and the pixel compensation block 1126 in a sequential manner (i.e., from SB1 to SB9) through the line L11. Furthermore, the sub-block generating block 6122 provides the alignment marks, which are extracted at the predetermined positions of the reproduced image frame (for example, predetermined positions within the boundary region between the border region and the reproduced data image region, or predetermined positions within the border region), to the misalignment correction value calculation block 1122. For example, the alignment marks are provided in such a way that, when the sub-image block SB1 is provided onto the line L11, the alignment marks AM1, AM2 and AM12 are selectively provided to the misalignment correction value calculation block 1122, and when the sub-image block SB2 is provided onto the line L11, the alignment marks AM1, AM2, AM3 and AM12 are selectively provided to the misalignment correction value calculation block 1122.

Hereinafter, the misalignment correction value calculation block 1122, the pixel position determination block 1123, the pixel magnification error value calculation block 1124, the pixel compensation direction determination block 1125, the pixel compensation block 1126 and the lookup table 1127 perform operations, which are identical to those performed on the alignment marks and the reproduced data image in the first preferred embodiment, on the sub-image blocks and alignment marks corresponding thereto, respectively, thus performing position compensation on the respective pixels in the respective sub-image blocks.

Therefore, in accordance with the present invention, the reproduction of hologram data, which can effectively prevent a degradation of reproduced picture quality due to linear/nonlinear distortion of an entire image, can be realized in such a way that a data image is divided into a plurality of sub-image blocks using a plurality of alignment marks and position compensation is performed on the pixels of the respective sub-image blocks.

As described above, the conventional method divides a data image into equal parts the number of which is identical to the number of distorted pixels, and performs distortion compensation by a pixel unit. In contrast, the present invention finds the position values of the quadrilateral edges of a reproduced data image, calculates the average magnification error values of respective pixels using the position values of the quadrilateral edges, and compensates for the respective pixel positions of the reproduced data image extracted using the misalignment compensation values for the respective pixels corresponding to the calculated average values. Alternatively, the present invention divides a reproduced data image into a plurality of sub-image blocks using a plurality of alignment marks that are inserted into and recorded in predetermined regions in a reproduced image frame at the time of recording, finds the position values of the quadrilateral edges of the divided sub-image blocks, calculates the average magnification error values of respective pixels within the sub-image blocks using the position values of the quadrilateral edges, and compensates for the pixel positions of the divided sub-image blocks using the misalignment compensation values for the respective pixels that correspond to the calculated average values. Accordingly, compared to the conventional method using the oversampling technique, the present invention is advantageous in that it can not only realize the compact size and low price of a holographic reproduction system but can also effectively prevent the problems of a reduction in reproduction rate.

While the invention has been shown and described with respect to the preferred embodiments, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims.

Yoon, Pil-Sang

Patent Priority Assignee Title
5285438, Oct 31 1991 Regents of the University of California Motionless parallel readout head for an optical disk recorded with arrayed one-dimensional holograms
5511058, Dec 23 1993 FORCETEC CO , LTD Distortion correction of a reconstructed holographic data image
5694488, Dec 23 1993 FORCETEC CO , LTD Method and apparatus for processing of reconstructed holographic images of digital data patterns
5940537, Jan 16 1996 FORCETEC CO , LTD Method and system for compensating for geometric distortion of images
6064586, Dec 31 1998 RESEARCH INVESTMENT NETWORK, INC Method for holographic data storage and retrieval
6104420, Feb 20 1998 Toshiba Tec Kabushiki Kaisha; Kabushiki Kaisha Toshiba Image forming apparatus and exposure scanning apparatus
6222754, Mar 20 1998 Pioneer Electronic Corporation Digital signal recording/reproducing method
6233083, Oct 13 1998 Pioneer Corporation Light modulation apparatus and optical information processing system
6281994, Dec 22 1998 Nippon Telegraph and Telephone Corporation Method and apparatus for three-dimensional holographic display suitable for video image display
6369831, Jan 22 1998 Sony Corporation Picture data generating method and apparatus
6384942, Feb 27 1998 NEC Corporation Image scanning unit
6430125, Jul 03 1996 Xylon LLC Methods and apparatus for detecting and correcting magnification error in a multi-beam optical disk drive
6573919, Jul 31 2001 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Method and apparatus for alignment of image planes
6738332, Mar 23 2000 Pioneer Corporation Optical information recording and reproducing apparatus
7200254, Feb 14 2002 NGK Insulators, Ltd Probe reactive chip, sample analysis apparatus, and method thereof
20030044084,
GB2387060,
JP2000122012,
JP2002366014,
JP2003078746,
JP200378746,
WO9743669,
Executed on | Assignor | Assignee | Conveyance | Frame/Reel/Doc
Oct 22 2013 | Maple Vision Technologies Inc. | (assignment on the face of the patent)
Oct 06 2016 | MAPLE VISION TECHNOLOGIES INC | TAIWAN SEMICONDUCTOR MANUFACTURING CO., LTD. | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 0411740710 pdf
Date Maintenance Fee Events
Sep 18 2017  REM: Maintenance Fee Reminder Mailed.
Mar 05 2018  EXP: Patent Expired for Failure to Pay Maintenance Fees.


Date Maintenance Schedule
Apr 28 2018  4 years fee payment window open
Oct 28 2018  6 months grace period start (w surcharge)
Apr 28 2019  patent expiry (for year 4)
Apr 28 2021  2 years to revive unintentionally abandoned end. (for year 4)
Apr 28 2022  8 years fee payment window open
Oct 28 2022  6 months grace period start (w surcharge)
Apr 28 2023  patent expiry (for year 8)
Apr 28 2025  2 years to revive unintentionally abandoned end. (for year 8)
Apr 28 2026  12 years fee payment window open
Oct 28 2026  6 months grace period start (w surcharge)
Apr 28 2027  patent expiry (for year 12)
Apr 28 2029  2 years to revive unintentionally abandoned end. (for year 12)