In described examples, a geometric progression of structured light elements is iteratively projected for display on a projection screen surface. The displayed progression is for determining a three-dimensional characterization of the projection screen surface. Points of the three-dimensional characterization of a projection screen surface are respaced in accordance with a spacing grid and an indication of an observer position. A compensated depth for each of the respaced points is determined in response to the three-dimensional characterization of the projection screen surface. A compensated image can be projected on the projection screen surface in response to the respaced points and respective compensated depths.
1. A method comprising:
projecting, by a projector, a sparse structured light pattern on a projection screen surface as displayed structured light elements;
receiving, by a camera, an image of the displayed structured light elements;
determining, by at least one processor, based on the sparse structured light pattern and based on the image of the displayed structured light elements, a three-dimensional point cloud characterizing the projection screen surface;
determining, by the at least one processor, an indication of an observer position relative to the projection screen surface;
generating, by the at least one processor, a flattened characterization of the projection screen surface with respect to the observer position, the flattened characterization including two-dimensional points associated with positions on the projection screen surface;
respacing, by the at least one processor, two-dimensional points of the flattened characterization in accordance with a spacing grid;
determining, by the at least one processor, a depth for the respaced two-dimensional points in response to the three-dimensional point cloud characterizing the projection screen surface; and
determining, by the at least one processor, compensated control points for generating an inverse image in response to the respaced two-dimensional points and the depth determined for the two-dimensional points.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
8. The method of
9. The method of
10. The method of
11. The method of
12. The method of
13. The method of
14. A method comprising:
projecting, by a projector, a sparse structured light pattern on a projection screen surface as displayed structured light elements;
receiving, by a camera, an image of the displayed structured light elements;
determining, by at least one processor, based on the sparse structured light pattern and based on the image of the displayed structured light elements, a three-dimensional point cloud characterizing the projection screen surface;
determining, by the at least one processor, an indication of an observer position relative to the projection screen surface;
respacing, by the at least one processor, points of the three-dimensional point cloud characterizing the projection screen surface, wherein the points are respaced in accordance with a spacing grid;
determining, by the at least one processor, compensated depths for the respaced points in response to the three-dimensional point cloud characterizing the projection screen surface; and
projecting, by the at least one processor, an inverse image in response to the respaced points and the compensated depths.
15. The method of
16. The method of
17. The method of
18. The method of
When a projection system projects onto a non-uniform surface, local aspect ratio distortion can occur. For example, surface irregularities of the projection surface can cause the projected image to be distorted as observed from an observer perspective. In addition to screen surfaces being irregular, keystone distortion can occur when the projection optical axis is not perpendicular to the screen. The combination of keystone and projection surface distortion can severely limit the ability of an observer to correctly perceive the projected images.
In described examples, a geometric progression of structured light elements is iteratively projected for display on a projection screen surface. The displayed progression is for determining a three-dimensional characterization of the projection screen surface. Points of the three-dimensional characterization of a projection screen surface are respaced in accordance with a spacing grid and an indication of an observer position. A compensated depth for each of the respaced points is determined in response to the three-dimensional characterization of the projection screen surface. A compensated image can be projected on the projection screen surface in response to the respaced points and respective compensated depths.
In this description: (a) the term “portion” can mean an entire portion or a portion that is less than the entire portion; (b) the terms “angles” and “coordinates” can overlap in meaning when geometric principles can be used to convert (or estimate) values or relationships therebetween; (c) the term “screen” can mean any surface (whether symmetrical or asymmetrical) for displaying a portion of a projected image; (d) the term “asymmetric” can mean containing non-planar features (e.g., notwithstanding any other symmetrical arrangement or property of the non-planar features); (e) the term “non-planar” can mean being non-planar with respect to a plane perpendicular to an axis of projection or observation (for viewing or capturing images); and (f) the term “correction” can mean a partial correction, including compensation determined in accordance with an asymmetrical screen surface.
Correcting keystone and projection surface distortion includes scaling or warping an image and/or video before projection so a rectangular image is perceived by an observer. Unlike keystone distortion, which can be parameterized in the form of screen rotation angles and manually corrected by users, manual surface correction for projection on irregular surfaces is extremely difficult, if not virtually impossible, for a novice observer to accomplish successfully. For example, arbitrary non-planar projection surfaces/screens cannot be easily parameterized (e.g., represented as a set of geometric equations), especially by novice observers.
A camera-assisted arbitrary surface characterization and correction process and apparatus is described hereinbelow. The described process for arbitrary surface characterization and correction implicitly corrects for incidental keystoning and arbitrary screen features (e.g., including arbitrary surface irregularities). The process includes generating an accurate characterization of the projection screen in response to a geometry of components of the projection system and an observer location. The described camera-assisted and structured light-based system accurately determines the distance, position, shape and orientation of features of the projection screen surface and stores the determined characteristics as a 3D point cloud (a three-dimensional point cloud for characterizing the projection screen surface). Control points for determining an inverse image are created in response to the projection screen surface 3D point cloud so an observer located at a determined position perceives a rectangular image on the projection surface. Accordingly, the projected inverse image compensates for the otherwise apparent distortion introduced by the projection surface. The described camera-assisted and structured light-based system can also correct for any keystone distortion.
The perspective transformations between the camera and the projector are analytically based (e.g., rather than numerically processed in response to user estimations and measurements), which increases the accuracy of results determined for a particular system geometry and reduces the iterations and operator skill otherwise required for obtaining satisfactory results. Iterative (and time-consuming) user-assisted calibration routines can be avoided because system geometry (such as relative screen offset angles) is implicitly determined by planar and non-planar homographic transformations performed in response to camera-provided input (e.g., which reduces or eliminates user intervention otherwise involved).
Example embodiments include dual-camera systems for minimizing the effects of tolerances occurring in projector manufacturing. For example, the dual-camera systems can include projection, image capturing and analysis of structured light patterns for correcting for manufacturing tolerances and for reducing the amount of calibration for a projector and projection system, which also lowers costs.
The compensation process described hereinbelow includes both projection and detection of sparse structured light pattern elements, which greatly reduces the often-large amount of image processing otherwise needed to accurately characterize a non-planar projection surface. The detection and analysis of sparse structured light pattern elements can also reduce the required resolution and cost of a digital camera (e.g., built-in or coupled to the projector), which greatly reduces the number of computations for characterizing a screen surface. Accordingly, various embodiments can be embodied within an ASIC (application-specific integrated circuit).
The described compensation process includes three-dimensional interpolation and extrapolation algorithms for estimating projection screen surface 3D data to substitute for missing portions of the projected sparse structured light pattern elements not captured by the camera (e.g., due to ambient light and other sources of noise). The estimation of uncaptured data points increases the robustness of the described process (which is able to operate in a wide variety of ambient light conditions).
Accordingly, predetermined information about the projection surface is not necessarily required by the described compensation process, which determines a system geometry for reducing distortion introduced by the projection screen surface geometries without requiring calibration input operations by the end-user.
Distortion results when a digital projector (e.g., 110a or 110b) projects images onto a non-perpendicular (e.g., 130a) and/or non-planar (e.g., 130b) projection surface. The resulting distortion often leads to cognitive incongruity (e.g., in which a viewed image does not agree with an image expected by an observer). Accordingly, the distortion degrades the viewing experience of an observer. Both asymmetrical and non-perpendicular screen surfaces can cause local aspect-ratio distortion in which subportions of the projected image appear deformed and non-rectangular with respect to the rest of the displayed image. Such distortion affects the ability of observers to correctly perceive information from projected images, so the user experience of observing the image is adversely impacted.
In contrast to planar keystone distortion (which can often be avoided by aligning a projector with a projection surface until a rectangular image is observed), geometric compensation determination can be automatically (and quickly) determined for asymmetrical surface distortion resulting from projection of an image upon a non-planar (or otherwise asymmetrical) fixed screen. The geometries and processes for such geometric compensation are described hereinbelow.
A projection screen surface characterization is generated by determining the surface topography of an asymmetric screen in three-dimensional space. The projection screen surface characterization is a detailed (e.g., pointwise) characterization of the projection surface. For example, the projection screen surface characterization includes parameters such as the position, shape and orientation of the screen surface. The projection screen surface characterization information (e.g., stored as a “3D point cloud”) is combined with information about the position of the observers to generate a pre-warped (or otherwise inversely compensated) image for projection and display upon the characterized projection screen surface. The pre-warping tends to reduce (if not virtually eliminate) any observed distortion of the displayed projected pre-warped image when observed from a determined position of an observer.
In flow 300, an image for projection is obtained in 310. In 312, the image for projection is projected onto an asymmetric screen surface (which can be one or both of non-planar and keystoned). In 314, a camera (e.g., camera 120a and/or 120b) captures the displayed projected image as distorted by the asymmetric screen surface. In 320, an image to be corrected is “pre-warped” (e.g., corrected for projection) by adjusting pixels within the image to values inversely correlated with corresponding pixels of a screen surface characterization of the asymmetric screen. (The image to be pre-warped need not be the same image used for generating the screen surface characterization.) In 322, the pre-warped image for projection is projected onto the asymmetric screen surface. In 324, the pre-warped image displayed on the asymmetric screen appears to be similar to the original (or similar to an image conceptualized by the observer), so many, if not all, of the distortions are compensated for and the viewing experience is enhanced (e.g., as compared with a projection of an image that is not pre-warped).
Each point of the set of control points 410 and 510 defines a localized, progressive degree of warping for warping images by a warping engine for camera-assisted asymmetric characterization and correction. Because image warping is processed in response to each of the control points, various portions of an image to be pre-warped can be warped locally (e.g., with respect to other non-adjacent control points). Because warping can be accomplished with localized portions of an image, highly complex images can be generated in response to screens having highly asymmetric surfaces. While each one of the control points could be manually moved by a human operator, such manual movement of individual control points for the generation of highly complex warped images would be excessively time consuming and tedious (and would often result in errors). Further, the number of such control points increases quadratically as image resolution increases (where the manual input of the increased numbers of control points by an observer increases input time and the probability of errors).
In contrast, the warping engine for camera-assisted asymmetric characterization and correction includes automated methods for defining the warping engine control points without otherwise requiring input from the user. In accordance with various embodiments, a single camera system, or a dual-camera system, can be used for analyzing and generating screen surface characterizations of asymmetric screens.
Various embodiments include single-camera or dual-camera systems as described hereinbelow. Various embodiments of dual-camera systems can optionally perform functions described herein with respect to a single-camera system embodiment.
In general, flow 600 includes operation 602 in which a determination is made between a single-camera mode and a dual-camera mode. When the determination is made for a single-camera mode, the process flow proceeds through operation 610 (system calibration), operation 612 (surface characterization), operation 614 (observer position location), operation 616 (observer perspective correction) and operation 630 (inverse image generation). When the determination is made for a dual-camera mode, the process flow proceeds through operation 620 (system calibration), operation 622 (surface characterization), operation 624 (observer position location), operation 626 (observer perspective correction) and operation 630 (inverse image generation). Accordingly, as a whole, the described process flow 600 comprises two main processing branches: single and dual camera modes.
While the various operations of the process flow for the single-camera system and the process flow for the dual-camera system are similar in name and function, certain details can vary between respective operations. Accordingly, the various operations of the various flows can share code between modes as well as having unique code for execution in a particular mode of operation.
The five described operations for each mode are subsequently described below in greater detail. Operation in a single-camera mode is described below with respect to
The flow of single-camera mode process 700 begins in 710, where the projection system is calibrated by camera-projector calibration techniques in accordance with a pin-hole camera model. Pinhole camera-based system calibration is described hereinbelow with respect to
In 720, the projection screen surface is characterized by capturing information projected on the projection screen surface. For example, sparse discrete structured light patterns are projected by the projector upon a projection screen surface in sequence and captured by the camera. The sparse discrete structured light patterns include points for representing the pixels of maximum illumination wherein each point can be associated with the peak of a Gaussian distribution of luminance values. The positions of various points in the sparse discrete structured light patterns in the captured frames are skewed (e.g., shifted) in response to non-planar or keystoned portions of the projection screen surfaces. The captured camera frames of the structured light patterns can be stored in an embodying ASIC's memory for processing and for generation of the three-dimensional point cloud (3D point cloud) for characterizing the projection screen surface. The structured light pattern processing is described hereinbelow with respect to
In 730, the projection screen surface is characterized in accordance with optical ray intersection parameters (such as shape, distance and orientation with respect to the projector optical axis). The projection screen surface is characterized, for example, and stored as points of the 3D point cloud. Accordingly, the 3D point cloud includes data points for modeling the screen surface from the perspective of the projector. The projection screen surface characterization is described hereinbelow with respect to
In 740, the observer position coordinates can be determined in various ways: retrieved from storage in the ASIC memory; entered by the observer at run-time; or determined at run-time in response to triangulation of points of a displayed image. The observer position coordinates can be determined by analysis of (e.g., triangulation of) displayed images captured by a digital camera in a predetermined spatial relationship to the projector. When the observer position is determined by triangulation, the observer position is assumed to be perpendicular to the projection screen (and more particularly, the orientation of the observer can be presumed to be perpendicular to a best fitted plane passing through the points in the 3D point cloud). The calculations for determining an observer perspective are described hereinbelow with respect to
In 750, the three-dimensional points of the 3D point cloud are rotated and translated to internally model what an observer would perceive from the observer position. With the perspective of the observer being determined, points in the 3D point cloud are rearranged to form (e.g., in outline form) a rectangular shape with the correct aspect ratio. The correction of points in the 3D point cloud is described hereinbelow with respect to
In 760, the rearranged points are transformed (e.g., rotated back) to the projector perspective for input as warping points to the warping engine. The warping engine generates a warped (e.g., inverse) image in response to the warping points. Processing for the inverse image generation is described hereinbelow with respect to
With reference to 710 again, camera-projector calibration data is obtained (in the single camera mode). Camera-projector calibration is in response to a simple pinhole camera model, which includes the position and orientation of the optical rays for single camera-assisted asymmetric characterization and correction. In the pinhole camera model, the following parameters can be determined: focal length (fc), principal point (cc), pixel skew (αc) and a distortion coefficients vector (kc). Two sets of parameters are determined, one set for the camera and a second set for the projector (which is modeled as an inverse camera).
The intrinsic camera/projector parameters characterize information about the camera/projector internal geometry. Accordingly, a system for single-camera-assisted asymmetric characterization and correction is calibrated in accordance with a first set of intrinsic parameters for the camera and a second set of intrinsic parameters for the projector. In contrast, extrinsic parameters are determined for describing the relative position and orientation of the camera with respect to the projector's optical axis. The camera/projector system extrinsic parameters (described hereinbelow) include the translation vector (the distance Tcam from the projector to the camera in millimeters) and the rotation matrix (including the angles ψ and φ) of the camera with respect to the projector.
As shown in
The translation vector 925 Tcam is the distance from the camera center 920 (xc, yc, zc) to the origin 950 (0,0,0) and can be expressed in millimeters. The rotation matrix Rcam (which includes the first rotation ψ and the second rotation φ) accounts for the relative pitch, yaw and roll of the camera normal vector {right arrow over (n)}cam with respect to the projector normal vector (e.g., the optical axis {right arrow over (n)}proj). Both intrinsic and extrinsic parameters can be obtained by iterative camera calibration methods described hereinbelow. The calibration data can be stored in a file in ASIC memory and retrieved in the course of executing the functions described herein.
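As an illustrative aid only (not part of the described calibration procedure), the calibration data named above can be held in a simple structure; the field names below are hypothetical and merely mirror the parameters in this description (fc, cc, αc, kc, Tcam, Rcam). A minimal sketch in Python:

# Hypothetical container for the pinhole-model calibration data described above.
# Field names mirror the parameters in this description; no values are implied.
from dataclasses import dataclass
import numpy as np

@dataclass
class PinholeCalibration:
    fc: np.ndarray        # focal length in pixels, (fx, fy)
    cc: np.ndarray        # principal point in pixels, (cx, cy)
    alpha_c: float        # pixel skew coefficient
    kc: np.ndarray        # lens distortion coefficients vector

@dataclass
class SystemCalibration:
    projector: PinholeCalibration   # projector modeled as an inverse camera
    camera: PinholeCalibration
    T_cam: np.ndarray               # translation of camera w.r.t. projector center, in mm
    R_cam: np.ndarray               # 3x3 rotation of camera w.r.t. projector optical axis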
With reference to 720 again, sparse discrete structured light patterns are displayed by the projector in temporal sequence, where each projected and displayed pattern is captured by the camera and processed (in the single camera mode). Discrete and sparse structured light patterns are used to establish a correlation between the camera and the projector: the correlation is determined in response to the positions of the structured light elements in both the projected (e.g., undistorted) and captured (e.g., distorted by an asymmetric non-planar screen) images.
As introduced above, the warping engine provides a set of discrete control points, distributed over an entire programmable light modulator (imager) such as a digital micromirror device (DMD) for DLP®-brand digital light projection, for warping an image. Individual portions of the image for projection can be moved/edited to warp the input image (e.g., to correct for surface distortion) before the DMD is programmed with the pre-warped image for projection. The number of points in the 3D point cloud (which includes projection screen surface spatial information) is usually the same as the number of (e.g., usable) control points in the warping engine. Accordingly, the position of each of the structured light pattern elements corresponds with a respective position of a warping engine control point (e.g., because the projection screen surface is characterized at locations corresponding to respective warping engine control points).
Given the discrete and relatively sparse nature of the warping engine control points (e.g., in which a single warping engine control point is used to warp multiple pixels in a local area), the correspondence between the camera and projector image planes is usually determined (e.g., only) at the structured light element positions. The structured light elements are projected, captured and processed to establish the camera-projector correspondence. Sparse structured light patterns such as circles, rectangles or Gaussians (such as mentioned above) can be more rapidly processed as compared with processing other more complex structured light patterns (e.g., sinusoidal and De Bruijn sequences). Additionally, because of the degree of ambient light in a usual projection environment (e.g., which ranges from complete darkness to varying degrees of ambient light), bright elements over a dark background can be relatively easily identified and noise minimized.
Determining the correspondence between the camera and projector includes matching the positions (e.g., the centroids) of structured light elements from each projected pattern to each camera-captured pattern. Accordingly, a correspondence is determined in which each element in the projected image corresponds to a respective element displayed and captured in the camera image plane. While the element centroids in the projected image are known (e.g., the initial positions of the warping engine control points are normally predetermined), the element centroids in the camera-captured image are unknown (e.g., due to being skewed by projection for display on an asymmetric surface). The determination of the centroid correspondence is normally computationally intensive.
To help determine the correspondence between each centroid of the projected image 1100 and a respective centroid of the displayed image 1102, time multiplexed patterns are used. As shown in
In an embodiment, Gray codes are used to encode the column and row indexes, so each structured light element is associated with two binary Gray-encoded sequences: a first sequence is for encoding column information for a particular structured light element and a second sequence is for encoding row information for the particular structured light element. The binary sequences are obtained from patterns projected, displayed and captured in sequence: each successive, displayed pattern contributes additional bits (e.g., binary digits) for appending to the associated sequence. In each pattern, a series of elements, usually entire rows or columns, can be turned on (displayed) or off (not displayed). Each successive pattern halves the width of the previously projected entire column (or halves the height of the previously projected row) so the number of columns (or rows) in the successive pattern is doubled in accordance with a Gray code encoding. For example, a Gray code series can be 1, 01, 0110, 01100110, 0110011001100110, . . . . Accordingly, the number of patterns for encoding column information is a logarithmic function of the number of columns, which exponentially reduces the number of (e.g., required) unique codes. The number of unique codes for the column information Nh is:
Nh=log2(N) (Eq. 1)
where N is the number of columns in the array. The number of structured light patterns for encoding row information Nv is:
Nv=log2(M) (Eq. 2)
where M is the number of rows in the array.
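As a minimal sketch of the row/column encoding idea (not the exact patterns of the described embodiments), the following assumes a hypothetical element array of N columns and shows how each column index can be Gray-encoded into roughly log2(N) bits, one bit contributed per projected pattern, and later decoded from the captured bit sequence:

# Minimal sketch: Gray-encode column indices so each structured-light element
# accumulates one bit per projected pattern (Eq. 1 / Eq. 2 give the pattern counts).
import math

def to_gray(index: int) -> int:
    return index ^ (index >> 1)

def from_gray(gray: int) -> int:
    index = 0
    while gray:
        index ^= gray
        gray >>= 1
    return index

def column_bits(column: int, num_columns: int) -> list[int]:
    """Bits contributed by each successive pattern for one element's column."""
    n_patterns = math.ceil(math.log2(num_columns))   # Nh = log2(N), rounded up
    g = to_gray(column)
    return [(g >> b) & 1 for b in reversed(range(n_patterns))]

# Example: an element in column 11 of a 32-column array needs 5 patterns.
bits = column_bits(11, 32)
decoded = from_gray(int("".join(map(str, bits)), 2))
assert decoded == 11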
In order to optimize memory utilization in an example, one pattern is processed at a time and the search for element bit information is limited to regions in a captured image in which a structured light element is likely to exist. Initially, an entire structured light pattern (e.g., in which all of the structured light elements are visible) is projected, displayed and captured. The captured structured light elements are identified in accordance with a single-pass connected component analysis (CCA) process. The single-pass CCA is a clustering process in which connected components with intensities above a predetermined threshold value are searched for (e.g., which lowers processing of false hits due to low intensity noise). Identified clusters of connected elements are assigned local identifiers (or tags), and their centers of mass (e.g., centroids) and relative areas are numerically determined. When the entire image has been searched and all of the detected connected component clusters identified and tagged, the connected components are sorted in descending order in response to the relative area of each connected component. For a camera-captured image in which N elements are expected, only the first N elements in area-based descending order are considered; any remaining connected element clusters of lesser areas are considered noise and discarded, in accordance with a heuristic in which the structured light pattern elements are the larger (and brighter) elements in the camera-captured image and the smaller bright elements are noise. In contrast, two-pass methods for detecting connected components and blobs in images consume a considerable amount of memory and processing power. The single-pass CCA method described herein retrieves the camera-captured image from the embedded DLP® ASIC memory as a unitary operation and analyzes the image line by line. The line-by-line analysis optimizes execution speed without sacrificing robustness, for example.
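A rough sketch of the thresholding, clustering and area-based pruning described above is shown below; it substitutes library connected-component labeling for the line-by-line single-pass implementation, and the threshold and expected element count are hypothetical tuning parameters:

# Rough sketch of the clustering step described above, using library connected-
# component labeling in place of the line-by-line single-pass CCA.
import numpy as np
from scipy import ndimage

def detect_element_centroids(image: np.ndarray, threshold: float, n_expected: int):
    """Return up to n_expected (row, col) centroids of the brightest blobs."""
    mask = image > threshold                       # keep only bright pixels
    labels, n_found = ndimage.label(mask)          # cluster connected bright pixels
    if n_found == 0:
        return []
    areas = ndimage.sum(mask, labels, index=range(1, n_found + 1))
    # Keep the n_expected largest clusters; smaller bright clusters are treated as noise.
    keep = np.argsort(areas)[::-1][:n_expected] + 1
    return ndimage.center_of_mass(image, labels, keep)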
Because the positions of any displayed structured light elements do not change from pattern to pattern, (e.g., only) the full (e.g., first) pattern is analyzed in response to the single-pass CCA. Subsequent patterns are (e.g., only) searched in locations in which a structured light element is positioned for display. The narrowing of search areas helps optimize ASIC memory utilization by processing patterns in sequence. Accordingly, images containing the Gray-encoded column and row pattern information are not necessarily fully analyzed. For example, only the pixels corresponding to the locations of the structured light elements identified from the entire pattern are analyzed, which greatly improves the execution speed of the detection of the captured structured light elements.
The projected structured light elements are matched with the corresponding captured and identified structured light elements. The centroids of the structured light elements and the associated row-column information are stored in the ASIC memory for retrieval during subsequent processing described herein. The centroid information for each identified structured light element and the geometry information determined during system calibration are provided as input to processes for determining an optical ray orientation (described hereinbelow with respect to
While Gray-encoding of rows and columns of the structured light elements is relatively robust with respect to matching the projected and the captured structured light elements, it is possible to have missing points (control point data holes, or “holes”) for which structured light elements are projected but insufficient data is captured by processing captured images. A hole is a structured light element for which sufficient information (e.g., for fully characterizing a point on a screen) is not detected by the single-pass CCA. Holes can result when not all projected Gray sequences of structured light elements are captured. For example, holes can occur when the ambient light conditions are not optimal, when an external illumination source contaminates the scene and/or when a structured light element is projected on a screen surface discontinuity.
Data for filling holes (e.g., sufficient data for characterizing a warping engine control point) can be generated (e.g., estimated) in response to three-dimensional interpolation and extrapolation of information sampled from valid neighboring structured light elements when sufficient neighboring information is available. Each unfilled warping engine control point (e.g., a control point for which sufficient control point data have not been captured and extracted) is indicated by storing the address of each unfilled warping engine control point in an array structure (hole map) for such missing data. The hole map contains status information for all possible row and column pairs of the structured light element array and indicates (for example) whether row-column information was correctly decoded (e.g., a point having information determined in response to the analysis of Gray-code encoding), estimated by interpolation and/or extrapolation, or undetermined (in which case a hole remains).
With reference to 730 again, the projection screen surface is characterized in accordance with optical ray intersection parameters (in the single camera mode). The optical ray intersection parameters are determined for estimating information for filling holes in the hole map. The accuracy of the screen surface characterization parameters is increased by “filling in” holes in the hole map by populating the centroid array with information estimated (e.g., by interpolation and/or extrapolation) of neighboring structured light element information. The observed or estimated screen surface characterization parameters are stored for indexed retrieval in the 3D point cloud. The points of the 3D point cloud represent the projection screen surface from the perspective of the projector.
The projector center 1410 is a point located at the origin (0, 0, 0) of a 3D Cartesian coordinate system. The projector center 1410 point is considered to be the center of projection. Optical rays 1412 originate at the center of projection 1410, pass through each one of the structured light pattern elements in the projector image plane 1414, pass through the projection surface 1440 at point 1442 and extend onto infinity.
The camera center 1420 is a point located at a certain distance (baseline) 1421 from the center of projection 1410. The camera center 1420 is determined by the camera-projector calibration data and is represented by a translation vector and a rotation matrix (discussed above with respect to
Each one of the optical rays from the projector intersects a corresponding (e.g., matched) camera optical ray exactly at a respective point 1442 of the projection screen surface 1440. When the length of the baseline 1421 (e.g., in real units) is determined, the real position of each intersection point 1442 can be determined. The set of intersection points 1442 lying on the projection screen surface forms the 3D point cloud for characterizing the projection screen surface.
With reference to 1510, the orientation of the projector optical rays is determined in response to the location of the centroids of the structured light elements in the projected structured light pattern and in response to the projector calibration data. The structured light elements are positioned at the initial locations of the warping engine control points. The optical rays are defined as vectors in three-dimensional space originating at the center of projection.
The lenses of a projector introduce tangential and radial distortion to each structured light pattern projected through the lenses. Accordingly, the projector optical rays are distorted in accordance with:
where fcx and fcy are the x and y components of the projector's focal length in pixels and ccx and ccy are the x and y components of the principal point. The optical distortion can be corrected in response to an inverse distortion model. After correcting for the projection lens-induced distortion, the optical rays are normalized to be unit vectors in accordance with:
With reference to 1520 again, the orientation of the camera optical rays is determined. Similarly to the origin of the projector optical rays, the camera optical rays originate at the camera center, pass through the centroids of each one of the camera-captured structured light pattern elements and extend onto infinity. The equations for characterizing the optical rays of the camera are similar to the projector optical rays: optical rays are defined as vectors in 3D space (e.g., being undistorted and normalized).
However, at least two differences between the camera optical rays and projection optical rays exist. Firstly, the intrinsic parameters of the camera are for defining the orientation of the optical rays and the distortion coefficients of the camera are for correcting the tangential and radial distortion introduced by the camera optics. Secondly, each of the undistorted and normalized camera optical rays is rotated (e.g., multiplied by the extrinsic rotation matrix Rcam) to compensate for the relative orientation of the camera with respect to the projector's optical axis. Accordingly, the equations of the camera optical rays are as follows:
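The undistortion and normalization equations referenced above are not reproduced in this text. As a simplified sketch assuming a standard pinhole model and ignoring the lens-distortion correction step, the ray directions can be formed from the element centroids, the focal length (fcx, fcy) and principal point (ccx, ccy), with each camera ray additionally rotated by Rcam:

# Simplified sketch (pinhole model, lens distortion ignored) of forming unit-length
# optical rays from element centroids. Camera rays are rotated by R_cam so both ray
# sets are expressed in the projector's coordinate frame.
import numpy as np

def pixel_to_unit_ray(centroid_xy, fc, cc):
    """Back-project a pixel centroid to a unit direction vector from the optical center."""
    x = (centroid_xy[0] - cc[0]) / fc[0]
    y = (centroid_xy[1] - cc[1]) / fc[1]
    ray = np.array([x, y, 1.0])
    return ray / np.linalg.norm(ray)

def camera_unit_ray(centroid_xy, fc_cam, cc_cam, R_cam):
    """Camera ray, rotated into the projector frame by the extrinsic rotation R_cam."""
    return R_cam @ pixel_to_unit_ray(centroid_xy, fc_cam, cc_cam)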
With reference to 1530, the intersection point for each projector optical ray and a respective camera optical ray is determined in three-dimensional space. The intersection points for each optical ray pair are determined in accordance with geometric and vector principles described hereinbelow.
The two sets of optical rays (e.g., a first set of projector rays and a second set of camera rays) intersect (or pass each other within a margin resulting from numerical errors and/or rounding factors) in accordance with a surface of a projection screen being characterized. For N elements in the structured light patterns, there exist N projector optical rays, N camera optical rays and N intersection points in 3D space.
The magnitude of the translation vector {right arrow over (T)}cam 1820 can be expressed in mm (or other convenient unit of distance) and represents an actual position of the camera with respect to the projector optical axis. Accordingly, the coordinates for the XYZ position can be expressed in millimeters and can be determined for each of the N intersection points for a first set of optical rays extending through a camera image plane and a second set of optical rays extending through a projector image plane. For each intersection of a ray pair {right arrow over (P)}n and {right arrow over (C)}n, an XYZ position with respect to the projector center is:
The closest point XYZn (in vector form) between the two rays {right arrow over (P)}n and {right arrow over (C)}n is:
An intersection is determined for each N optical ray pair so the actual displayed position of each structured light element projected onto the screen is determined. The (e.g., entire) set of such intersection points defines a 3D point cloud for describing the spatial information of the projection screen surface.
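The XYZ intersection equations referenced above are not reproduced here. The sketch below illustrates the standard computation of the closest point between a projector ray (through the origin) and a camera ray (through the camera center Tcam), taking the midpoint of the shortest segment between the two rays as the surface point:

# Illustrative sketch: closest point between two (possibly skew) rays, taken as the
# midpoint of their common perpendicular. P-rays start at the projector center (origin);
# C-rays start at the camera center T_cam. All quantities in the projector frame.
import numpy as np

def ray_intersection_point(p_dir, c_dir, t_cam):
    """Approximate 3D intersection of a projector ray and camera ray (unit direction vectors)."""
    a = np.dot(p_dir, p_dir)
    b = np.dot(p_dir, c_dir)
    c = np.dot(c_dir, c_dir)
    d = np.dot(p_dir, t_cam)
    e = np.dot(c_dir, t_cam)
    denom = a * c - b * b                   # ~0 only if the rays are parallel
    s = (d * c - b * e) / denom             # distance along the projector ray
    t = (b * d - a * e) / denom             # distance along the camera ray
    point_on_p = s * p_dir                  # closest point on the projector ray
    point_on_c = t_cam + t * c_dir          # closest point on the camera ray
    return 0.5 * (point_on_p + point_on_c)  # midpoint of the shortest segment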
With reference to 1540, spatial constraints are imposed on points in the 3D point cloud 1930. Although the ray intersection usually produces very accurate results, it remains possible to obtain 3D points that do not accurately characterize the projection surface, for example. Accordingly, some points of the 3D point cloud do not lie within (e.g., within a pixel-width of) the projection screen surface. The inaccurate points can produce visually apparent errors in subsequent processing operations that rely upon the 3D point cloud as input. Such errors can be caused by errors in the camera-projection calibration data or inaccuracies in the centroid information of the camera-captured structured light pattern elements.
Because points in the cloud are arranged in raster scan order, they can be analyzed line by line and column by column. Heuristics are applied to detect any outliers (i.e., points not lying on the projection screen surface), wherein the set of applied heuristics includes heuristics for application in response to a raster scan order. The outliers can be normalized or otherwise compensated for in response to one or more locations of neighboring points determined (e.g., by the heuristics) to lie within the projection screen surface.
For example, a first heuristic is for determining whether distortion introduced by the projection surface results in neighboring points being mapped out of order in horizontal or vertical raster scans. In an example situation, a first optical ray located to the left of a second neighboring (e.g., adjacent and/or diagonally adjacent) optical ray should result in corresponding 3D point-cloud points calculated to have x-values in the same order as the first and second optical rays: no error is determined when the 3D point-cloud point associated with the first optical ray is to the left of the 3D point-cloud point associated with the second optical ray; in contrast, an error is determined when the 3D point-cloud point associated with the first optical ray is to the right of the 3D point-cloud point associated with the second optical ray.
A second heuristic is for determining whether a discontinuity (e.g., caused by an uneven surface) of the actual projection screen surface is supported by the 3D point cloud and whether any such discontinuities are so large that they can cause a vertical or horizontal reordering of any 3D point-cloud points (e.g., as considered to be an error by the first heuristic).
In view of the first and second example heuristics, all 3D point-cloud points in any single row are considered to be monotonically increasing so the x-component of successive points increases from left to right. When a 3D point-cloud point is out of order (when compared to neighboring points in the same row), the out-of-order 3D point-cloud point is considered to be invalid. The same constraint applies to all 3D point-cloud points in a column: out-of-order 3D point-cloud points are considered to be invalid in the event any y-components of successive points do not increase monotonically from the top to the bottom of a single column.
Any 3D point-cloud point determined to be invalid is stored (e.g., indicated as invalid) in the holes map. In the event a location in the holes map was previously indicated to be valid, that indication is overwritten so the invalid 3D point-cloud point is considered to be a hole.
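A minimal sketch of the raster-order monotonicity heuristic described above, assuming the point cloud is stored as an M×N×3 array in raster order and that out-of-order points are flagged in a boolean hole map (valid-neighbor handling is omitted for brevity):

# Minimal sketch of the monotonicity heuristic: within each row, x must increase
# left-to-right; within each column, y must increase top-to-bottom. Points that
# violate either constraint are flagged in the hole map. Assumes an MxNx3 cloud.
import numpy as np

def flag_non_monotonic(cloud: np.ndarray, hole_map: np.ndarray) -> None:
    rows, cols, _ = cloud.shape
    for r in range(rows):
        for c in range(1, cols):
            if cloud[r, c, 0] <= cloud[r, c - 1, 0]:   # x out of order in this row
                hole_map[r, c] = True
    for c in range(cols):
        for r in range(1, rows):
            if cloud[r, c, 1] <= cloud[r - 1, c, 1]:   # y out of order in this column
                hole_map[r, c] = True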
With reference to 1550, holes in the holes map can be replaced with indications of valid values when sufficient information exists to determine a heuristically valid value. For example, a hole can be “filled” with valid information when sufficient information exists with respect to values of neighboring valid 3D point-cloud points so 3D interpolation and/or 3D extrapolation can determine sufficiently valid information (e.g., information associated with a heuristically valid value).
Because the locations of the projected structured light elements are defined in response to the initial positions of the warping engine points, each point in the 3D point cloud is evaluated (e.g., in raster order) to determine whether each 3D point-cloud point is heuristically located within the projection screen surface. Errors (e.g., in which a 3D point-cloud point does not lie within the projection screen surface) can occur during calibration or as a result of a failure to correctly determine the row-column information in the structured light pattern processing block (e.g., see 720). In such cases, the missing point (or hole) 3D information can be estimated (e.g., calculated) in response to sufficient neighboring point spatial information. In an embodiment, the 3D point cloud is searched for missing points in response to missing and/or invalid holes indicated by the holes map. When a hole is indicated, an estimation of a correct 3D position can usually be calculated by 3D extrapolation or interpolation in response to linear sequences of successive and valid 3D point-cloud points. After a correct 3D position is estimated for a “former” hole, the estimated position can be used in determining estimated position information for any remaining hole(s).
The horizontal sequencing of holes indicated by the hole map can be examined on a row-by-row basis. Each of the points of a row of the 3D point cloud is examined to determine whether any hole (including other indications of invalidity) exists for the examined point. When a hole is indicated to be associated with the examined point, the two closest (e.g., valid) neighbors in the 3D point cloud associated with valid 3D information are selected. For example, the closest neighbors in the row are 3D point-cloud points not indicated to be holes themselves (or otherwise invalid themselves). Scans for searching for the two closest neighbors can proceed in an outward direction from (e.g., proceeding to the left and proceeding to the right of) the 3D point-cloud point determined to be a hole.
Various scenarios exist in which a distance, a directional vector and a reference vector can be determined for sufficiently generating missing point data of the 3D point cloud. The various scenarios include the possibility that both left and right neighbors were found, the possibility that only the left neighbor was found, the possibility that only the right neighbor was found and the possibility that no neighbors were found. Methods for generating missing point data of the 3D point cloud are described hereinbelow.
A directional vector is defined with p1 as its origin and its orientation defined by the 3D position of points p1 and p2:
The distances Δ1 and Δ2 are defined in terms of the 3D point cloud array column indexes of the missing point and its two neighbors:
The reference vector is set to be equal to {right arrow over (v)}1:
The vectors and distances for “filling” holes are:
The vectors and distances are:
When no right- and left-side neighboring valid three-dimensional point-cloud points can be found, information from sources other than the neighboring points in the same row can be used to supply an estimated missing point 3D position. For example, when a row n includes (e.g., exclusively) invalid 3D point-cloud points and a row n+1 includes (e.g., exclusively) valid 3D point-cloud points, insufficient data exists within row n alone to generate valid data for the invalid 3D point-cloud points.
Regardless of whether the missing point is determined by interpolation or by extrapolation techniques, a new 3D position value for the missing point can be determined as:
Accordingly, horizontal scanning processes generate information for filling control point data holes in response to 3D position data from valid neighbors arrayed in a first dimension (e.g., along the same row). When incomplete spatial information results from processing data from neighbors arrayed in the first dimension, further spatial information for filling holes is obtained from vertical scans of valid neighbor 3D point-cloud points. The vertical scans include searching the 3D point cloud array on a column-by-column basis (e.g., where a selected n-th element of each row is scanned as a single column). The vertical scanning process is similar to the horizontal scanning process described hereinabove (except that the searching is scanned in a top-to-bottom order instead of a left-to-right order, or in a bottom-to-top order instead of a right-to-left order). Accordingly, the associated vector equations described hereinabove are applicable for both the described vertical and horizontal scanning processes. The data for filling the new 3D point-cloud point can be determined in response to the average of the filled values obtained in the horizontal and vertical scans.
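The vector equations for the fill are not reproduced above; the following sketch illustrates only the row-wise scan with interpolation between the nearest valid left and right neighbors (single-neighbor extrapolation and the column-wise scan would be handled analogously):

# Illustrative sketch of the horizontal (row-wise) hole-filling scan. For each hole,
# the nearest valid left/right neighbors in the same row are found and the missing 3D
# position is linearly interpolated between them. Assumes an MxNx3 cloud and hole map.
import numpy as np

def fill_holes_by_row(cloud: np.ndarray, hole_map: np.ndarray) -> None:
    rows, cols, _ = cloud.shape
    for r in range(rows):
        for c in range(cols):
            if not hole_map[r, c]:
                continue
            left = next((k for k in range(c - 1, -1, -1) if not hole_map[r, k]), None)
            right = next((k for k in range(c + 1, cols) if not hole_map[r, k]), None)
            if left is not None and right is not None:
                # Interpolate along the direction vector between the two valid neighbors.
                frac = (c - left) / (right - left)
                cloud[r, c] = cloud[r, left] + frac * (cloud[r, right] - cloud[r, left])
                hole_map[r, c] = False
            # Single-neighbor extrapolation and the vertical scan are omitted here.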
With reference to 740 again, the observer position coordinates are determined (in the single camera mode). While the obtained 3D point cloud can be used for accurately describing the projection surface position and orientation in real-world units, the origin of the associated coordinate system is the center of projection. Accordingly, the projection screen surface is characterized from the projector perspective (as compared to being characterized from the perspective of the camera or an observer). The position coordinates are transformed to a second perspective for correcting for distortion introduced by non-planar and non-smooth projection screen surfaces so a rectangular image with an appropriate aspect ratio can be observed from the second perspective (e.g., observer perspective) without perceiving (e.g., uncorrected) local area distortion during projection of an image.
Accordingly, the 3D point cloud of the projection screen surface is transformed in accordance with the observer position coordinates. The observer position can be defined in response to a user input or an optimal observer position heuristically estimated from projection screen surface information from the 3D point cloud, for example. Defining the observer position in response to user input includes the user defining the position of the observer as a pair of pitch and yaw angles with respect to the projection surface. Defining the observer position in response to estimation includes optimizing the image correction for conditions in which the observer is assumed beforehand to be located in a position perpendicular (e.g., generally perpendicular) to a point of the projection screen surface.
The process flow 2300 begins in 2310, in which it is determined whether an observer position has been provided or otherwise determined. For example, the observer position can be determined from user inputs via a user interface, a predetermined location or Wi-Fi imaging. When it is determined an observer position has been provided, the process flow proceeds to 2340 (described further below). Otherwise the process flow proceeds to 2320.
In 2320, a plane is fitted through the 3D point-cloud points. For example, each point of the 3D point cloud is translated and rotated in accordance with the determined position of the observer in order to move the points and calculate the observer's perspective or point of view. As mentioned above, the position of the observer can be defined as a pair of pitch and yaw angles with respect to the center of the projection surface and can be either defined by the user or assumed to be a central location generally perpendicular to a center portion of the screen. To move the points, a plane that best fits the entire 3D point cloud (or at least portions of the 3D point cloud) can be determined in response to a least squares fitting analysis.
The least squares fitting analysis is for generating coefficients of the equation of a plane in 3D space in response to the 3D point-cloud points. For example:
Ax+By+Cz=D (Eq. 36)
where D is the determinant defined by the coefficients A, B and C.
Expressed in terms of z:
z=a1x+a2y+a3 (Eq. 37)
where the coefficient vector {right arrow over (a)} is defined by the coefficients a1, a2 and a3.
The x-y points in the 3D point cloud are received as input for determining a pseudo-Vandermonde matrix for the objective matrix input when executing a least squares optimization analysis. The z points are received as input for determining the right-hand side (RHS) vector of the equation:
{right arrow over (b)}=V{right arrow over (a)} (Eq. 38)
For N points in the point cloud, the pseudo-Vandermonde matrix and the RHS z-vector are:
The solution to Eq. 38, which contains the coefficients of the plane as specified in Eq. 37, is:
{right arrow over (a)}=(V^T V)^−1 V^T {right arrow over (b)} (Eq. 40)
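A compact sketch of Eqs. 37-40 (fitting z = a1·x + a2·y + a3 to the cloud by least squares), assuming the cloud is supplied as an array of XYZ points:

# Sketch of the least-squares plane fit of Eqs. 37-40: build the pseudo-Vandermonde
# matrix V from the x-y components, the RHS vector b from the z components, and solve
# for the coefficients a = (a1, a2, a3) of z = a1*x + a2*y + a3.
import numpy as np

def fit_plane(points_xyz: np.ndarray) -> np.ndarray:
    """points_xyz: (N, 3) array of 3D point-cloud points. Returns (a1, a2, a3)."""
    V = np.column_stack([points_xyz[:, 0], points_xyz[:, 1], np.ones(len(points_xyz))])
    b = points_xyz[:, 2]
    # Equivalent to a = (V^T V)^-1 V^T b (Eq. 40), solved in a numerically stable way.
    a, *_ = np.linalg.lstsq(V, b, rcond=None)
    return a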
With reference back again to 2330, the fitted plane (e.g., 2440) orientation is determined with respect to the angles formed between the projector optical axis and the fitted plane normal vector. The plane's normal vector coefficients are obtained from the plane equation coefficients of Eq. 36:
The pitch and yaw angles of the fitted plane can be determined by calculating the angles between the projector optical axis and the screen plane normal vector. The optical axis of the projector can be defined as the vector:
A yaw angle (φ) of the fitted plane normal vector 2650 {right arrow over (n)}plane is described with reference to
With reference again to 2340, a rotation matrix is determined (e.g., built) for a rotation operation of the fitted plane and the 3D point cloud. The rotation operation is defined by a 3D rotation matrix. For proper rotation, each point in the point cloud (in vector form) is respectively multiplied by the rotation matrix. Assuming a roll angle of zero degrees, the rotation matrix M is:
In 2350, the 3D point cloud is rotated in accordance with the rotation operation. The rotation operation is in response to the pitch and yaw angles of the observer with respect to the fitted plane. The rotated 3D points ({right arrow over (p)}r) can be expressed from the observer point of view (the observer perspective) as follows:
{right arrow over (p)}r=M{right arrow over (p)} (Eq. 48)
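The angle and rotation-matrix equations referenced above are not reproduced in this text. The sketch below uses one assumed convention: the fitted-plane normal is taken as (a1, a2, −1) from Eq. 37, pitch and yaw are the angles between that normal and the projector optical axis about the x and y axes, and the cloud is rotated with roll fixed at zero (as in Eq. 48):

# Sketch under assumed conventions: derive pitch/yaw from the fitted-plane normal and
# rotate every point of the 3D point cloud into the observer perspective (roll = 0).
import numpy as np

def observer_rotation(a1: float, a2: float) -> np.ndarray:
    n = np.array([a1, a2, -1.0])              # normal of z = a1*x + a2*y + a3
    n = -n / np.linalg.norm(n)                # orient the normal toward the projector
    yaw = np.arctan2(n[0], n[2])              # angle about the y axis
    pitch = np.arctan2(n[1], n[2])            # angle about the x axis
    cy, sy, cp, sp = np.cos(yaw), np.sin(yaw), np.cos(pitch), np.sin(pitch)
    R_yaw = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    R_pitch = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    return R_pitch @ R_yaw                    # roll assumed to be zero

def rotate_cloud(points_xyz: np.ndarray, M: np.ndarray) -> np.ndarray:
    return points_xyz @ M.T                   # p_r = M p for every point (Eq. 48)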
With reference to 750 again, the three-dimensional points of the 3D point cloud are rotated (in the single camera mode) and translated for mathematically modeling what an observer would see from the observer position. Accordingly, the distortion perceived by the observer can be both identified and corrected. The rotation of the 3D point cloud is for rearranging the points in the 3D point cloud to create a rectangular image with little or no apparent local aspect ratio distortion. The points in the 3D point cloud are rearranged to form (e.g., in outline) a rectangular area with the correct aspect ratio in response to the determined perspective of the observer. The correction of various points within the 3D point cloud is described hereinbelow with respect to
The points in the rotated 3D point cloud represent the projection screen surface position, shape and orientation as perceived by an observer. The points of the 3D point cloud are mathematically derived in response to the intersection of (e.g., notional) optical rays emanating from each of the projector and the camera. The projector optical rays, in turn, are defined by the positions of the warping engine control points, each of which controls a portion of an entire imager plane area (e.g., an array of micromirrors) so the entire imager plane area is controlled by the warping engine control points. While the 3D point cloud ideally covers the largest image that could be projected by the projector, the projection of the corrected image can entail reducing the number of pixels of the projected image. Accordingly, portions of the edges of the 3D point cloud are bounded by an interior bounding box before projection of the corrected image.
The interior bounding box defines which pixels are to be corrected (e.g., defining a limited subset of pixels to be processed reduces processing requirements and speeds performance). The interior bounding box also forms the rectangular outside border of the displayed image (e.g., projected onto an asymmetric screen surface and observed from the observer coordinates).
The interior bounding box (IBB) defines the working area, that is, the valid area in the x-y plane in which all of the points are moved for correction of distortion. The point cloud raster scan array is searched along the edges of the 3D point cloud to help determine the maximum and minimum values in the x and y directions.
The corners of the IBB 3010 are defined with respect to the edges of the IBB 3010. For example: the left edge (LE) corner is adjacent to the top-most x component of the first column in the array; the right edge (RE) corner is adjacent to the bottom-most x component of the last column in the array; the bottom edge (BE) corner is adjacent to the left-most y component of the last row in the array; and the top edge (TE) corner is adjacent to the right-most y component of the first row in the array. (Accordingly, the corner being denoted lies in a clockwise direction adjacent to the describing edge.)
With reference to 2920, the optimal image size and placement is evaluated. The optimal placement for the corrected image is evaluated after the edges of the IBB 3010 are defined. The aspect ratio of the IBB 3010 is determined in accordance with the expression:
Similarly, the aspect ratio of the imager (e.g., DMD, or liquid crystal display) is calculated as follows:
Depending on the relationship between the IBB 3010 and DMD aspect ratios, the corners of the corrected image (x-y points) from the observer perspective are:
a. If AIBB≥ADMD:
b. If AIBB<ADMD:
where q is the vertical or horizontal (depending on the aspect ratio) padding for helping define the location of the first corrected point inside the IBB 3010 so the aspect ratio is maintained in the corrected image.
In 2930, the x-y components of the 3D point cloud interior points of the rotated point cloud are moved to respective new locations inside the rectilinear interior bounding box. The x-y components of the 3D points are rearranged into a rectangular grid for reconstructing the distorted image (in which the distortion results from non-planar and non-smooth regions in the projection screen surface). Local aspect ratio distortion can result from uneven rotation and rearrangement of the x-y components of the 3D point cloud interior points of the rotated point cloud. The local aspect ratio distortion is corrected by spacing the points evenly in the x and y directions. The spacing between x components and the spacing between y components are:
where N is the number of columns and M is the number of rows in the point cloud array, respectively. After the vertical and horizontal spacings are determined, each of the points in the point cloud is mapped to a respective new location in the observer perspective x-y plane, where the respective new locations are:
When the uncorrected x-y spacing points 3142 are respectively mapped to corrected x-y spacing points (e.g., as shown by rectilinear grid 3144), the points of the corrected x-y spacing point cloud appear to be uniformly distributed in a normal grid pattern. Accordingly, the corrected x-y spacing points in the observer perspective x-y plane 3100b are separated from adjacent corrected x-y spacing points in accordance with a uniform aspect ratio distributed across the entire set of corrected x-y spacing points.
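As a sketch of the even respacing (assuming N columns, M rows, and a corrected-image rectangle such as the one computed above; the exact spacing and mapping equations are not reproduced in the text):

import numpy as np

def respace_xy(x0, y0, x1, y1, n_cols, m_rows):
    # Spacing between x components and between y components.
    dx = (x1 - x0) / (n_cols - 1)
    dy = (y1 - y0) / (m_rows - 1)
    xs = x0 + dx * np.arange(n_cols)
    ys = y0 + dy * np.arange(m_rows)
    # grid[row, col] = corrected (x, y) location of the point at
    # (row, col) in the observer perspective x-y plane.
    return np.stack(np.meshgrid(xs, ys), axis=-1)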
The corrected x-y spacing points are generated by mapping the point cloud points to a new location in the x-y plane independently of depth information (e.g., generated in accordance with Eq. 67 without regard for changes in depth resulting from the respacing). Correcting the spacing of the x-y point cloud points without adjusting the depths of the pixels being mapped can lead to distortion when the corrected x-y point cloud points are repositioned over an asymmetric screen surface with an uneven topology.
With reference again to 2940, the depth of each corrected x-y spacing point of the observer-perspective two-dimensional plane is generated in accordance with localized plane fitting on a non-rectangular grid. For example, the compensated depths can be determined by replacing uncorrected depth values with “reconstituted” depth values determined from the original 3D point cloud.
The newly generated depth information is calculated by a local plane fitting algorithm over a non-regular grid array. The original point cloud is aligned with the corrected x-y (but not z) spacing cloud so the depth information of each point in the corrected x-y (but not z) spacing cloud is determined by the depth information of the closest neighbors in a geometric projection of the 3D point cloud in accordance with the observer perspective x-y plane.
For example, searching, selection, fitting and depth generation operations are performed for determining the compensated depth information for each point {right arrow over (p)}c in the corrected x-y (but not z) spacing cloud. The searching operation includes searching for the closest point in a geometric projection of the 3D point cloud in accordance with the observer perspective x-y plane. The selection operation includes selecting the closest point neighbors on the original 3D point cloud as determined by associated vector-angles. The fitting operation includes fitting a plane through the selected point neighbors. The depth determination operation includes determining a depth in response to the fitted plane and the selected closest point neighbors.
In the searching operation for depth determination, the point {right arrow over (p)}ref closest to {right arrow over (p)}c is searched for (e.g., only) in a geometric projection of the 3D point cloud into the observer perspective x-y plane. Accordingly, the Euclidean distance from the point (for which depth information is being generated) to the closest point in the geometric projection of the 3D point cloud in the observer perspective x-y plane is minimized. To reduce processing requirements, the 3D point cloud interior points are searched, while the 3D point cloud exterior points can be excluded (see
$\vec{p}_{ref} = \min\left( \lVert \vec{p}_c - \vec{p}_{u,all} \rVert \right)$ (Eq. 68)
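A minimal sketch of the search of Eq. 68, assuming the interior points of the geometric projection are stored as an M x N x 2 NumPy array:

import numpy as np

def closest_reference_point(p_c, interior_xy):
    # Euclidean distance from the corrected point to every interior
    # point of the projected (uncorrected) 3D point cloud.
    d = np.linalg.norm(interior_xy - np.asarray(p_c), axis=-1)
    row, col = np.unravel_index(np.argmin(d), d.shape)
    return (row, col), interior_xy[row, col]   # array indexes and p_ref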
The geometric projection 3320 determines an uncorrected grid, whereas the corrected x-y spacing points 3330 determine a corrected grid. The uncorrected grid includes a set 3310 of closest neighbors for surrounding a closest neighbor {right arrow over (p)}ref. Four quadrilaterals are formed by the set 3310 of closest neighbors and the closest neighbor {right arrow over (p)}ref. One of the four quadrilaterals surrounds a particular point {right arrow over (p)}c of the corrected grid for which depth information is being determined.
Accordingly, the (e.g., entire) geometric projection 3320 of the 3D point cloud into the observer perspective x-y plane is searched to determine the closest neighbor {right arrow over (p)}ref to the particular point {right arrow over (p)}c for which depth information is being determined. After the closest neighbor {right arrow over (p)}ref to the particular point {right arrow over (p)}c is determined, the particular one of the four quadrilaterals surrounding a particular point {right arrow over (p)}c is identified and selected.
In the selection operation for depth determination, each of the four quadrilaterals is evaluated to identify and select the particular quadrilateral surrounding a particular point {right arrow over (p)}c. Except for the apparent distortion in the uncorrected array, the corrected point would otherwise be located inside a rectangle. However, when such distortion occurs, the corrected point {right arrow over (p)}c occurs in one of the four adjacent, irregular quadrilaterals (of the uncorrected grid), which share the closest point {right arrow over (p)}ref as one corner of each of the four quadrilaterals. A two-dimensional, corrected-point locator vector (in the observer perspective x-y plane) {right arrow over (v)}c defines a spatial relationship between the {right arrow over (p)}ref vector and the particular point {right arrow over (p)}c vector. Accordingly:
$\vec{v}_c = \vec{p}_c - \vec{p}_{ref}$ (Eq. 69)
The angle ωc of the directional vector is:
The four directional vectors {right arrow over (v)}d1-4 of the edges of a quadrilateral are respectively:
$\vec{v}_{d1} = \vec{p}_c(row, col+1) - \vec{p}_c(row, col)$ (Eq. 71)
$\vec{v}_{d2} = \vec{p}_c(row-1, col) - \vec{p}_c(row, col)$ (Eq. 72)
$\vec{v}_{d3} = \vec{p}_c(row, col-1) - \vec{p}_c(row, col)$ (Eq. 73)
$\vec{v}_{d4} = \vec{p}_c(row+1, col) - \vec{p}_c(row, col)$ (Eq. 74)
The angles of the four directional vectors {right arrow over (v)}d1-4 are calculated similarly as in Eq. 70 and are stored as a vector {right arrow over (ω)}d. The angles ωc and ωd are rotated with respect to the angle of the directional vector corresponding to the right directional vector {right arrow over (v)}d1.
$\omega_c = \omega_c - \omega_{d1}$ (Eq. 75)
$\vec{\omega}_d = \vec{\omega}_d - \omega_{d1}$ (Eq. 76)
Accordingly, each of the four directional vectors {right arrow over (v)}d1-4 is defined in terms of the vector {right arrow over (v)}d1. The quadrilateral in which the point {right arrow over (p)}c vector is located is determined in response to each of the four directional vectors {right arrow over (v)}d1-4. The identification and selection of the enclosing quadrilateral defines which points in the original 3D point cloud are for recalculating the depth information of the corrected x-y spacing points in the observer perspective x-y plane.
The four quadrilaterals can be defined with respect to the row and column number of the closest point ({right arrow over (p)}ref) 3420. A preceding row is one row before the row including the closest point in a vertical direction. A succeeding row is one row after the row including the closest point in the vertical direction. A preceding column is one column before the column including the closest point in a horizontal direction. A succeeding column is one column after the column including the closest point in the horizontal direction.
A first quadrilateral Q1 includes a first point (which is the closest point 3420), a second point (which is pointed to by the vector {right arrow over (v)}d2 and is in a preceding row and the same column with respect to the closest point 3420), a third point (which is in the preceding row and a succeeding column with respect to the closest point 3420) and a fourth point (which is pointed to by the vector {right arrow over (v)}d1 and is in the same row and the succeeding column with respect to the closest point 3420).
A second quadrilateral Q2 includes a first point (which is the closest point 3420), a second point (which is pointed to by the vector {right arrow over (v)}d2 and is in a preceding row and the same column with respect to the closest point 3420), a third point (which is in the preceding row and a preceding column with respect to the closest point 3420) and a fourth point (which is pointed to by the vector {right arrow over (v)}d3 and is in the same row and the preceding column with respect to the closest point 3420).
A third quadrilateral Q3 includes a first point (which is the closest point 3420), a second point (which is pointed to by the vector {right arrow over (v)}d4 and is in a succeeding row and the same column with respect to the closest point 3420), a third point (which is in the succeeding row and a preceding column with respect to the closest point 3420) and a fourth point (which is pointed to by the vector {right arrow over (v)}d3 and is in the same row and the preceding column with respect to the closest point 3420).
A fourth quadrilateral Q4 includes a first point (which is the closest point 3420), a second point (which is pointed to by the vector {right arrow over (v)}d4 and is in a succeeding row and the same column with respect to the closest point 3420), a third point (which is in the succeeding row and a succeeding column with respect to the closest point 3420) and a fourth point (which is pointed to by the vector {right arrow over (v)}d1 and is in the same row and the succeeding column with respect to the closest point 3420).
The quadrilaterals are formed from points of the uncorrected point cloud (e.g., the original 3D point cloud characterizing the projection screen surface). Collectively the quadrilaterals are defined in terms of {right arrow over (C)}. Each of the quadrilaterals of {right arrow over (C)} is searched to determine which quadrilateral includes (encloses) a geometrical projection (e.g., from the observer perspective) of the corrected point {right arrow over (p)}c. The determined enclosing quadrilateral is for determining a depth for the corrected point {right arrow over (p)}c in response to local plane fitting of the enclosing quadrilateral.
For Q1, and assuming ωd1≤ωc<ωd2, the vertices of the quadrilateral Q1 can be expressed in 3D space:
For Q2, and assuming ωd2≤ωc<ωd3, the vertices of the quadrilateral Q2 can be expressed in 3D space:
For Q3, and assuming ωd3≤ωc<ωd4, the vertices of the quadrilateral Q3 can be expressed in 3D space:
For Q4, and assuming ωd4≤ωc, the vertices of the quadrilateral Q4 can be expressed in 3D space:
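To summarize the selection operation, the following sketch combines the vector-angle test of Eqs. 69-76 with the Q1-Q4 corner definitions; it is a hedged illustration only, with edge-of-array handling omitted and the row/column orientation assumed:

import numpy as np

def select_quadrilateral(cloud_xy, row, col, p_c):
    p_ref = cloud_xy[row, col]
    ang = lambda v: np.arctan2(v[1], v[0]) % (2 * np.pi)
    w_c = ang(np.asarray(p_c) - p_ref)                 # Eqs. 69-70
    v_d = [cloud_xy[row, col + 1] - p_ref,             # v_d1
           cloud_xy[row - 1, col] - p_ref,             # v_d2
           cloud_xy[row, col - 1] - p_ref,             # v_d3
           cloud_xy[row + 1, col] - p_ref]             # v_d4
    w_d = np.array([ang(v) for v in v_d])
    w_c = (w_c - w_d[0]) % (2 * np.pi)                 # Eq. 75
    w_d = (w_d - w_d[0]) % (2 * np.pi)                 # Eq. 76
    if w_c < w_d[1]:        # Q1
        return [(row, col), (row - 1, col), (row - 1, col + 1), (row, col + 1)]
    if w_c < w_d[2]:        # Q2
        return [(row, col), (row - 1, col), (row - 1, col - 1), (row, col - 1)]
    if w_c < w_d[3]:        # Q3
        return [(row, col), (row + 1, col), (row + 1, col - 1), (row, col - 1)]
    return [(row, col), (row + 1, col), (row + 1, col + 1), (row, col + 1)]  # Q4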
In the fitting operation for depth determination, a plane is fitted in response to corners selected from the enclosing quadrilateral. Depth information can be generated for the corrected point {right arrow over (p)}c 3430 in response to the fitted plane. The generated depth information can be a z component for projecting the corrected point {right arrow over (p)}c 3430 onto a virtual surface defined by the original 3D point cloud (e.g., so a new point is generated for use in generating an inverse image for distortion reduction in accordance with the observer perspective). The planes can be fitted through four points of the enclosing quadrilateral of {right arrow over (C)}, in accordance with equations 77-80. Accordingly, the equation of a fitted local plane is calculated similarly to Eqs. 39-40:
In the depth determination operation, the depth information is generated for the corrected point {right arrow over (p)}c in response to the fitted local plane. The depth information for the corrected point {right arrow over (p)}c can be determined in accordance with:
$p_{c,z} = a_1 p_{c,x} + a_2 p_{c,y} + a_3$ (Eq. 84)
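A short sketch of the fitting and depth operations (a least-squares plane through the four enclosing-quadrilateral vertices, then evaluation per Eq. 84), assuming the vertices are supplied as a 4 x 3 array:

import numpy as np

def depth_from_local_plane(quad_xyz, p_c_x, p_c_y):
    # Fit z = a1*x + a2*y + a3 through the quadrilateral vertices.
    A = np.column_stack([quad_xyz[:, 0], quad_xyz[:, 1], np.ones(4)])
    a, *_ = np.linalg.lstsq(A, quad_xyz[:, 2], rcond=None)
    # Evaluate the fitted plane at the corrected point (Eq. 84).
    return a[0] * p_c_x + a[1] * p_c_y + a[2]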
With reference to 760 again, the compensated-depth 3D point cloud is transformed (e.g., rotated back) to the projector perspective for input as warping points to the warping engine. The warping engine generates a warped (e.g., inverse) image in response to the warping points. Processing for the inverse image generation is described hereinbelow with respect to
The projector-perspective compensated-depth 3D point cloud describes a compensated point arrangement so an observer perceives a substantially rectangular, distortion-free image with a correct aspect ratio when viewed from the determined observer perspective. Each point of the compensated-depth 3D point cloud can be defined in a 3D space in real-world linear units (such as mm, inches, meters or feet). The translated x-y positions of the compensated-depth 3D point-cloud points determine adjustments for the warping engine to generate the inverse image for projection onto the asymmetric projection screen surface.
The point information from the compensated-depth 3D point cloud, however, is not directly input to the warping engine because the points of the compensated-depth 3D point cloud are defined from the observer perspective and because the points are denominated in three-dimensional real-world units.
The process flow 3600 begins in 3610, where the compensated-depth 3D point cloud is transformed to the projector perspective. The transformation includes rotating the compensated-depth 3D point cloud from the observer perspective to the projector perspective. Each of the points in the compensated-depth 3D point cloud is multiplied by a rotation matrix. The rotation matrix encodes the rotation transformation for executing the inverse operation of the rotation in Eq. 47. Accordingly, the inverse rotation is:
$R = M^{-1} = M^{T}$ (Eq. 85)
The observer perspective compensated-depth 3D point cloud is transformed to generate a projector-perspective compensated-depth 3D point cloud. For each point in the observer-perspective compensated-depth 3D point cloud:
$\vec{p}_{co} = R\,\vec{p}_c$ (Eq. 86)
where $\vec{p}_{co}$ is a corrected point-cloud point in 3D space from the projector perspective and $\vec{p}_c$ is the corrected point from the observer perspective.
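A minimal sketch of the inverse rotation of Eqs. 85-86, assuming the compensated-depth points are rows of an N x 3 array and M is the original observer-rotation matrix:

import numpy as np

def to_projector_perspective(points_obs, M):
    R = M.T                   # Eq. 85: inverse of an orthonormal rotation
    return points_obs @ R.T   # Eq. 86 applied to every point (row vector)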
In 3620, the projector-perspective compensated-depth 3D point cloud is geometrically projected for fitting the image plane of a DMD. The point cloud is scaled for projection from the projector perspective. However, the points of the projector-perspective compensated-depth 3D point cloud are defined in terms of real-world linear units, whereas the digital micromirror image plane is defined in terms of pixels and the camera-projector calibration data is defined in terms of projector optical rays.
For each point in the projector-perspective compensated-depth 3D point cloud, the corresponding projected pixel position is:
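The projection expression is not reproduced above; as a hedged sketch, a standard pin-hole projection with projector intrinsics (focal lengths fx, fy and principal point cx, cy in pixels, all assumed parameters) would be:

import numpy as np

def project_to_imager(points_proj, fx, fy, cx, cy):
    # points_proj: (N, 3) projector-perspective points in real-world units.
    # Sign conventions depend on the adopted axis orientation (the
    # projector optical axis points toward -z in this description).
    X, Y, Z = points_proj[:, 0], points_proj[:, 1], points_proj[:, 2]
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return np.column_stack([u, v])   # pixel positions on the imager plane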
The imager plane 3740 includes a two-dimensional geometric projection of the projector-perspective compensated-depth 3D point cloud 3730. Accordingly, the geometrically projected points of the projector-perspective compensated-depth 3D point cloud 3730 are defined in pixels rather than being defined in three-dimensional space.
With reference again to 3630, the warping engine control points are edited. The warping map points can be “edited” by mapping (e.g., transferring) the points of the two-dimensional geometric projection of the projector-perspective compensated-depth 3D point cloud 3730 (e.g., as geometrically projected onto the imager plane) to the warping engine.
With reference to 3640, the warping engine generates one or more inverse images in response to the warping engine control points. For example, the warping engine control points are input to the warping engine hardware so real-time frame rates can be achieved to create the appearance of uninterrupted moving images (e.g., movies). The output of the warping engine is a frame that includes an inverse image with respect to a raw (e.g., uncorrected) input image for projection (e.g., where the input images are received as a video stream). The inverse image, when projected onto the projection screen surface, compensates for the physical irregularities of the asymmetric projection screen surface, so an observer looking at the screen will perceive a rectangular image substantially free of distortion.
As described following, the cameras 4020 and 4030 are arranged for automatically capturing views of the screen 4040 image for geometric compensation determination. For example, the first camera 4030 is distanced from (e.g., the projection lens of) projector 4010 by a baseline distance 4031, whereas the second camera 4020 is distanced from (e.g., the projection lens of) projector 4010 by a baseline distance 4021.
While the single-camera mode system described hereinabove accurately characterizes a projection surface and compensates for anomalies therein to reduce distortion, the single-camera mode system is dependent upon the accuracy of the camera-projector calibration. Accordingly, increased inaccuracies in calibration reduce the quality of such compensation.
The camera-projector calibration is dependent upon the focal length of the projector. Accordingly, when the focal length of the projector changes (such as when being “zoomed”) the camera-projector calibration (e.g., files and settings) are no longer optimal, and the quality of the asymmetrical screen compensation decreases. For example, when the projector focal length (zoom) changes from a setting upon which the calibration is determined, the projector optical rays are reoriented so the intersection points (e.g., between the projector and camera rays) determined during calibration become less accurate for representing the projection screen surface.
The dependence upon the focal length of the projector often results in less-than-satisfactory results when the projector focal length is re-zoomed. Further, dependence upon the focal length of the projector affects the cost and precision of mass-scale production of systems implementing asymmetric screen surface characterization and compensation (e.g., selected manufacturing tolerances for the projection optics determined in response to a variable projector focal length and principal point are not usually achieved without incurring additional cost).
Mechanical sensors attached to the zoom and focus control rings of a projector can provide real-time feedback on the zoom and focus controls. Also, statistical methods can estimate a focal length of a projector by capturing and statistically analyzing images projected onto a screen. While control feedback and statistical methods can be used to detect a change in a projector focal length, the calibration files (e.g., dependent upon a different focal length) do not necessarily remain acceptably valid. For example, a considerable amount of calibration data and user expertise are often employed for acceptably (e.g., so the changes in compensation are not noticed by a human observer) responding to changes in the projector focal length. Further, such control feedback and statistical methods often implement relatively tight tolerances for manufacturing the projector optics.
In contrast, the dual-camera mode asymmetric screen surface characterization and compensation systems described herein can generate calibration files, wherein the calibration files are independent of any relationship with a particular focal length of a projector. In such a dual-camera-based asymmetric screen surface characterization and compensation system, a second digital camera is coupled mechanically and/or electronically to the projection system. The second camera is positioned in a different place (and oriented with a different orientation) from the first camera. The second camera is modeled as an optical ray source, which differs from the projector-based optical ray source of the single-camera mode systems.
Accordingly, the projector in the dual-camera mode asymmetric screen surface characterization and compensation system projects the structured light patterns for display, and the projected structured light patterns as displayed on a screen are captured by both of the dual cameras. As described hereinbelow, the optical rays of each camera originate in each respective camera center, pass through the centroids of the structured light elements and intersect on the projection screen surface.
The dependence of the optical rays upon the respective cameras theoretically eliminates dependence upon the projector as an origin for generation of calibration geometries. Accordingly, any post-calibration change in the projector zoom ratio and focus does not affect the robustness and accuracy of the calibration processing for determining variable geometries (including the determination of positioning and orientation of distant objects with respect to an observer). Manufacturing tolerances of dual-camera-mode asymmetric screen surface characterization and compensation systems (which are not necessarily functionally dependent on focal length as an input variable) are relaxed with respect to tolerances of single-camera mode asymmetric screen surface characterization and compensation systems. Accordingly, satisfactory results can be obtained during the mass production of dual-camera-mode asymmetric screen surface characterization and compensation systems by determining system geometries in accordance with structured light-based surface metrology statistical processing.
The flow of dual-camera mode process 4100 begins in 4110, where the projection system is calibrated by camera-projector calibration techniques in accordance with a dual-camera pin-hole camera model. The dual-camera-based system calibration is described hereinbelow with respect to
In 4120, the projection screen surface is characterized by capturing information (e.g., structured light patterns) projected on the projection screen surface. For example, sparse discrete structured light patterns are projected by the projector upon a projection screen surface in sequence and captured by the camera. The sparse discrete structured light patterns include points for representing the pixels of maximum illumination wherein each point can be associated with the peak of a Gaussian distribution of luminance values. The positions of various points in the sparse discrete structured light patterns in the captured frames are skewed (e.g., shifted) in response to non-planar or keystoned portions of the projection screen surfaces. The captured camera frames of the structured light patterns are stored in the ASIC's memory for processing and for generation of a three-dimensional point cloud (3D point cloud) for characterizing the projection screen surface. (Structured light pattern processing is described hereinabove with respect to
In 4130, the projection screen surface is characterized in accordance with optical ray intersection parameters (such as shape, distance and orientation with respect to the projector optical axis). The projection screen surface is characterized, for example, and stored as points of the 3D point cloud. Accordingly, the 3D point cloud includes data points for modeling the screen surface. (Projection screen surface characterization is discussed above with respect to
In 4140, the observer position coordinates can be determined in various ways: retrieved from storage in the ASIC memory; entered by the observer at run-time; or determined at run-time in response to triangulation of points of a displayed image. The observer position coordinates can be determined by analysis of displayed images captured by a digital camera in a predetermined spatial relationship to the projector. When the observer position is determined by triangulating points in a captured image, the observer position can be assumed to be generally perpendicular to the projection screen (and more particularly, the orientation of the observer can be presumed to be perpendicular to a best fitted plane passing through the points in the 3D point cloud). (The calculations for determining an observer perspective are described hereinabove with respect to
In 4150, the three-dimensional points of the 3D point cloud are rotated and translated to internally model what an observer would perceive from the observer position. With the perspective of the observer being determined, points in the 3D point cloud are rearranged to form, in outline, a rectangular shape with the correct aspect ratio. The correction of points in the 3D point cloud is described hereinbelow with respect to
In 4160, the rearranged points are transformed (e.g., rotated back) to the projector perspective for input as warping points to the warping engine. The warping engine generates a warped image (e.g., warped inverse image) in response to the warping points. Processing for the inverse image generation is described hereinbelow with respect to
With reference to 4110 again, camera-to-camera calibration data is obtained (in the dual camera mode). The extrinsic parameters (e.g., camera-to-camera calibration data) of each of the cameras are defined with respect to the center of projection or other common point (e.g., projector origin and orientation). The two-camera system extrinsic parameters include the translation vector (e.g., distance from projector to each camera in millimeters) and the rotation matrix of each camera with respect to a common orientation (e.g., the projector orientation). For convenience (and to maintain some commonality with the equations described hereinabove), the origin can be selected to be the center of projection of the projector, from which the optical axis of the projector points towards the −z axis.
The geometry 4200 includes a center of the projector 4220 positioned at the origin (0,0,0) and having an orientation 4210 defined by the x, y and z axes. The projector normal vector 4230 {right arrow over (n)}proj is oriented in the opposite direction of the z axis.
The translation vectors Tcam give the position of a respective camera center (xc, yc, zc) relative to the origin and are expressed in millimeters. For example, a center of the camera 4250 is positioned at the point (xc1, yc1, zc1) and includes a camera normal vector 4240 {right arrow over (n)}cam1. The center of the camera 4250 is offset from the center of the projector 4220 by the offset distance Tcam1, which extends from point (0,0,0) to point (xc1, yc1, zc1). A center of the camera 4270 is positioned at the point (xc2, yc2, zc2) and includes a camera normal vector 4260 {right arrow over (n)}cam2. The center of the camera 4270 is offset from the center of the projector 4220 by the offset distance Tcam2, which extends from point (0,0,0) to point (xc2, yc2, zc2).
Extrinsic parameters also include a rotation matrix Rcam for the relative pitch, yaw and roll of each camera normal vector {right arrow over (n)}cam with respect to the projector normal vector (i.e. the optical axis) {right arrow over (n)}proj or other common point between the cameras 4250 and 4270.
Accordingly, two sets of translation vectors and rotation matrices are determined in calibration. The extrinsic (as well as the intrinsic) parameters for the two cameras 4250 and 4270 can be obtained by iterative calibration methods. The calibration data can be stored in a file in ASIC memory and retrieved in the course of executing instructions for performing the functions described herein.
With reference to 4120 again, the projection screen surface is characterized by capturing information projected on the projection screen surface. For example, sparse discrete structured light patterns are projected by the projector upon a projection screen surface in sequence and captured by both cameras. The positions of various points in the sparse discrete structured light patterns in the captured frames are skewed (e.g., shifted) in response to non-planar or keystoned portions of the projection screen surfaces. The captured camera frames of the structured light patterns are stored in the ASIC's memory for processing and for generation of a three-dimensional point cloud (3D point cloud) for characterizing the projection screen surface. Because both cameras capture the displayed structured light patterns, the processing requirements are usually increased over a single-camera mode system. For an N×M sparse structured light element set of patterns, the number of images processed by the ASIC (e.g., encoded by Gray encoding) is:
X=log2(N)+log2(M)+2 (Eq. 90)
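For example (purely illustrative numbers), a 32×16 set of sparse structured light elements yields X = log2(32) + log2(16) + 2 = 5 + 4 + 2 = 11 Gray-coded images to be processed.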
In 4130, the projection screen surface is characterized in accordance with the intersections of pairs of optical rays (where each intersecting ray originates from a different camera), where the intersection is associated with a respective point of a structured light element displayed on the projection screen surface. The real 3D position of each one of the structured light elements projected onto the surface is calculated by determining the intersection between each pair of optical rays, each originating from respective cameras. The projection screen surface characterization is described hereinbelow with respect to
The projector center 4310 is a point located at the origin (0, 0, 0) of a 3D Cartesian coordinate system. The projector center 4310 point is considered to be the center of projection. Projector rays 4312 originate at the center of projection 4310, pass through a centroid of each one of the structured light pattern elements in the projector image plane 4314, intersect the projection surface 4340 at point 4342 and extend into infinity.
The first camera center 4320 is a point located at a certain distance (baseline) 4321 from the center of projection 4310. The first camera center 4320 is determined by the camera-projector calibration data and is represented by a translation vector and a rotation matrix. Optical rays 4322 originate at the camera center 4320, pass through each one of the centroids of the camera captured structured light elements of the camera image plane 4324, intersect the projection surface 4340 at point 4342 and extend into infinity.
The second camera center 4330 is a point located at a certain distance (baseline) 4331 from the center of projection 4310. The second camera center 4330 is determined by the camera-projector calibration data and is represented by a translation vector and a rotation matrix. Optical rays 4332 originate at the camera center 4330, pass through each one of the centroids of the camera captured structured light elements of the camera image plane 4334, intersect the projection surface 4340 at point 4342 and extend into infinity.
Each one of the optical rays from the projector intersects a corresponding (e.g., matched) camera ray exactly at a respective point 4342 of the projection screen surface 4340. When the lengths of the baseline 4321 and baseline 4331 are determined (e.g., in real units), the real position of each intersection point 4342 can be determined. The set of intersection points 4342 lying on the projection screen surface form the 3D point cloud for characterizing the projection screen surface.
Accordingly, the formulation of the (dual-camera mode) cameras optical rays equations is similar to the (single-camera mode) projector optical rays equations in which optical rays are defined as vectors in 3D space, undistorted and normalized. The orientation of the optical rays is in response to the intrinsic parameters of each respective camera and distortion coefficients can be used to correct for the tangential and radial distortion introduced by the optics of a respective camera.
To compensate for the relative orientation of a camera with respect to the projector optical axis or other common point, each of the undistorted and normalized camera optical rays is rotated (e.g., multiplied by the extrinsic rotation matrix Rcam). The equations of the camera optical rays are accordingly defined as follows, in which each camera is defined in terms of the intrinsic and extrinsic parameters associated with the respective camera:
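The camera-ray equations themselves are not reproduced above; a hedged sketch of one common formulation (undistorted, normalized pin-hole rays rotated by the extrinsic matrix Rcam, with paired rays intersected at the midpoint of their closest approach) is:

import numpy as np

def camera_ray(xn, yn, R_cam):
    # xn, yn: undistorted, normalized image coordinates of a centroid.
    d = R_cam @ np.array([xn, yn, 1.0])   # rotate into the common frame
    return d / np.linalg.norm(d)          # unit direction vector

def intersect_rays(o1, d1, o2, d2):
    # Midpoint of closest approach of rays o + t*d (d1, d2 unit vectors);
    # one common triangulation choice, assumed here rather than quoted.
    b, w = np.dot(d1, d2), o1 - o2
    denom = 1.0 - b * b                   # near zero for parallel rays
    t1 = (b * np.dot(d2, w) - np.dot(d1, w)) / denom
    t2 = (np.dot(d2, w) - b * np.dot(d1, w)) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))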
With reference to 4130 again, the projection screen surface is characterized similarly to the projection screen surface characterization of the single-camera mode systems discussed above. Accordingly, the optical ray intersection parameters (such as shape, distance and orientation with respect to the projector optical axis) determine points of the 3D point cloud.
In 4140, the observer position coordinates can be determined in a manner similar to the observer perspective calculation of the single-camera mode systems discussed above. Accordingly, the observer position coordinates can be determined by analysis of displayed images captured by a digital camera in a predetermined spatial relationship to the projector. The analysis can include triangulation of centroid points captured in the displayed images.
In 4150, the observer perspective corrections can be determined in a manner similar to the observer perspective corrections of the single-camera mode systems described hereinabove; however, the depth recalculations are determined by homography transformations described with respect to
In 4620, the optimal image size and placement is evaluated. The optimal placement for the corrected image is evaluated in response to the edges of the IBB. Operation 4620 is similar to operation 2920 described hereinabove.
In 4630, the x-y components of the 3D point cloud interior points of the rotated point cloud are moved (e.g., respaced) to a respective new location inside the rectilinear interior bounding box. The local aspect ratio distortion is corrected by spacing the points in the x and y directions evenly. Operation 4630 is similar to operation 2930 described hereinabove.
In 4640, the depth information is determined in accordance with a homographic mapping of a corrected point in a geometric projection 3320 of the (e.g., uncorrected) 3D point cloud into an imager plane (e.g., see imager plane 3740). For example, the neighboring points of the corrected point cloud are identified by the vector-angle searching on the uncorrected point irregular grid as described hereinabove with respect to a closest point (pref) 3420. However, the depth for the corrected point is determined in accordance with a local homography between the uncorrected point cloud quadrilateral (e.g., defined by the identified neighboring points) and a corresponding rectangle (e.g., square) in the imager plane, by which the corrected point is mapped to the imager plane.
The rectangle in the imager plane can be addressed in response to the row and column indexes of the trapezoid corners in the uncorrected point cloud and the locations of the warping engine control points. (As described hereinabove, the point cloud and the warping engine are arranged as a rectangular array, where the array elements can be addressed via respective row and column indexes.)
Each local homography is encoded in a 3×3 matrix and represents the perspective transform for mapping a point 4710a from the irregular uncorrected point cloud grid 4700a to a point 4710b of the imager plane 4700b. The grid of the imager plane 4700b is regular and corresponds to the grid of the warping engine control points. Because of the irregularity of the uncorrected point cloud x-y grid 4700a, corrected point mapping is implemented by local homography matrices: each quadrilateral in the irregular uncorrected point cloud grid 4700a is related to a rectangle in the regular grid of the imager plane 4700b via a unique homography. For each quadrilateral-rectangle pair, the homography matrix is calculated by a least-squares approach:
where the matrix A is an optimization (coefficients) matrix, vector {right arrow over (b)} is the right-hand-side vector and vector {right arrow over (h)} is the objective vector containing the coefficients of the homography matrix. Solving for vector {right arrow over (h)}:
$A\vec{h} = \vec{b} \;\Rightarrow\; \vec{h} = (A^{T}A)^{-1}A^{T}\vec{b}$ (Eq. 96)
The homography matrix H is populated in accordance with Eq. 96:
The corrected point {right arrow over (p)}c is mapped to its corresponding location in the imager plane as follows:
where
is the pixel position of the corrected point (e.g., 4710b) in the imager plane.
Accordingly, the corrected point in imager plane space,
is calculated directly from the corrected and uncorrected point clouds from the observer perspective (e.g., so the projector perspective does not need to be recovered). In the dual-camera mode, corrected points can be directly mapped from the observer perspective to the imager plane. Accordingly, some of the intermediate operations in the single-camera mode (e.g., such as rotation to the projector perspective and projection from 3D real-world coordinates to the imager plane) are not necessary.
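A sketch of one local homography estimated from the four quadrilateral-rectangle corner correspondences (standard direct-linear-transform constraints solved through the normal equations of Eq. 96, with the last homography coefficient fixed to 1; the exact matrix layout of the source is not reproduced):

import numpy as np

def local_homography(quad_xy, rect_uv):
    # quad_xy: four uncorrected-grid corners (observer x-y);
    # rect_uv: the four corresponding warping-grid corners (imager pixels).
    A, b = [], []
    for (x, y), (u, v) in zip(quad_xy, rect_uv):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    A, b = np.asarray(A, float), np.asarray(b, float)
    h = np.linalg.solve(A.T @ A, A.T @ b)     # h = (A^T A)^-1 A^T b
    return np.append(h, 1.0).reshape(3, 3)

def map_corrected_point(H, p_c):
    u, v, w = H @ np.array([p_c[0], p_c[1], 1.0])
    return u / w, v / w                       # pixel position in the imager plane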
The geometric projection of the 3D point cloud 4842 into the observer perspective x-y plane and the rectilinear grid 4844 of corrected x-y spacing points in the observer perspective x-y plane are transformed (similarly as described hereinabove with reference to
Accordingly, the corrected control points 4846 are defined in pixel space rather than being defined in three-dimensional space. The corrected control points 4846 are input to the warping engine so the warping engine is arranged for receiving an image for projection and warping the received image in response to the corrected control points 4846. The warped image for projection is projected on the characterized projection screen surface so, for example, the projected warped image is displayed as corrected for (and/or compensated for) non-planarities or keystoning of the characterized projection screen surface.
Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims.