Embodiments include methods and systems for context-adaptive pixel processing based, in part, on a respective weighting-value for each pixel or a group of pixels. The weighting-values provide an indication as to which pixels are more pertinent to pixel processing computations. Computational resources and effort can be focused on pixels with higher weights, which are generally more pertinent for certain pixel processing determinations.
1. A method of context-adaptive pixel processing comprising:
classifying, via an electronic device, a first plurality of pixels of a frame into one class of a plurality of classes;
determining a respective weighting-value for each of the first plurality of pixels of the frame, wherein each weighting-value is based at least in part on a comparison of a classification of a particular one of the pixels with the classification of other pixels near the particular pixel; and
generating a content indicator for the frame based at least in part upon the weighting-value of at least one pixel of the first plurality of pixels.
11. A non-transitory computer readable medium comprising instructions for context-adaptive pixel processing between frames that, when executed, cause an apparatus to:
classify a first plurality of pixels of a frame into one class of a plurality of classes;
determine a respective weighting-value for each of the first plurality of pixels of the frame, wherein each weighting-value is based at least in part on a comparison of a classification of a particular one of the pixels with the classification of other pixels near the particular pixel; and
generate a content indicator for the frame based at least in part upon the weighting-value of at least one pixel of the first plurality of pixels.
21. A content-adaptive image processing apparatus comprising:
means for classifying a first plurality of pixels of a frame into one class of a plurality of classes;
means for determining a respective weighting-value for each of the first plurality of pixels of the frame, wherein each weighting-value is based at least in part on a comparison of a classification of a particular pixel with the classification of other pixels near the particular pixel; and
means for generating a content indicator for the frame based at least in part upon the weighting-value of at least one pixel of the first plurality of pixels.
30. A content-adaptive image processing apparatus comprising:
a memory; and
an image processor configured to:
classify a first plurality of pixels of a frame into one class of a plurality of classes;
determine a respective weighting-value for each of the first plurality of pixels of the frame, wherein each weighting-value is based at least in part on a comparison of a classification of a particular one of the pixels with a classification of other pixels near the particular pixel; and
generate a content indicator for the frame based at least in part upon the weighting-value of at least one pixel of the first plurality of pixels.
2.–9. The method of …
12.–20. The non-transitory computer readable medium of …
22.–29. The apparatus of …
31.–38. The apparatus of …
This application is a continuation of prior U.S. Non-Provisional patent application Ser. No. 13/160,457, filed Jun. 14, 2011, which issued on Oct. 8, 2013 as U.S. Pat. No. 8,553,943. The foregoing application is hereby expressly incorporated by reference in its entirety. Furthermore, any and all priority claims identified in the Application Data Sheet, or any correction thereto, are hereby incorporated by reference under 37 C.F.R. §1.57.
1. Field
The present embodiments relate to machine vision, and in particular, to content-adaptive pixel processing systems, methods and apparatus.
2. Background
A wide range of electronic devices, including mobile wireless communication devices, personal digital assistants (PDAs), laptop computers, desktop computers, digital cameras, digital recording devices, and the like, employ machine vision techniques to provide versatile imaging capabilities. These capabilities may include augmented reality functions that assist users in recognizing landmarks, identifying friends and/or strangers, and performing a variety of other tasks.
Augmented reality functions may also identify motion of one or more objects within an image. Optical flow is a known method for motion tracking. Rather than first trying to recognize an object from raw image pixel data and then track the motion of the object among a sequence of image frames, optical flow determination instead tracks the motion of features from raw image pixel data. There are, however, a number of issues, such as computational complexity, that make it difficult to implement known optical flow determination techniques on a mobile platform.
Various embodiments of systems, methods and devices within the scope of the appended claims each have several aspects, no single one of which is solely responsible for the desirable attributes described herein. Without limiting the scope of the appended claims, some prominent features are described herein. After considering this discussion, and particularly after reading the section entitled "Detailed Description," one will understand how the features of various embodiments are used to contextually adapt the processing of pixels from a sequence of image frames, performed on at least one computer processor.
One aspect of the disclosure is a method of determining pixel displacement between frames. The method includes determining a respective weighting-value for each of a first plurality of pixels. Each weighting-value is based at least in part on the relationship of an attribute of a particular pixel with the corresponding attributes of other pixels near the particular pixel. The method further includes determining an optical flow indicator of pixel displacement between two frames based at least in part upon the weighting-value of at least one pixel of the first plurality of pixels.
Another aspect of the disclosure is a computer program product for determining pixel displacement between frames, comprising instructions that, when executed, cause an apparatus to determine a respective weighting-value for each of a first plurality of pixels. Each weighting-value is based at least in part on the relationship of an attribute of a particular pixel with the corresponding attributes of other pixels near the particular pixel. The instructions further cause the apparatus to determine an optical flow indicator of pixel displacement between two frames based at least in part upon the weighting-value of at least one pixel.
Another aspect of the disclosure is an apparatus configured to determine pixel displacement between frames comprising: means for determining a respective weighting-value for each of a first plurality of pixels, wherein each weighting-value is based at least in part on the relationship of an attribute of a particular pixel with the corresponding attributes of other pixels near the particular pixel; and means for determining an optical flow indicator of pixel displacement between two frames based at least in part upon the weighting-value of at least one pixel.
Another aspect of the disclosure is an apparatus configured to determine pixel displacement between frames comprising: a memory; and a processor configured to: determine a respective weighting-value for each of a first plurality of pixels, wherein each weighting-value is based at least in part on the relationship of an attribute of a particular pixel with the corresponding attributes of other pixels near the particular pixel; and determine an optical flow indicator of pixel displacement between two frames based at least in part upon the weighting-value of at least one pixel.
So that the manner in which features of the present disclosure can be understood in detail, a more particular description, briefly summarized above, may be had by reference to aspects, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only certain typical aspects of this disclosure and are therefore not to be considered limiting of its scope, for the description may admit to other equally effective aspects.
In accordance with common practice the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, method or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures.
Implementations disclosed herein provide systems, methods and apparatus configured to provide optical flow determinations for augmented and mediated reality applications on mobile platforms. Generally, augmented reality is a term for a live direct and/or indirect view of a real-world environment whose elements are supplemented by computer-generated sensory inputs, such as audible, visual and/or tactile features. Augmented reality is a subset of mediated reality, in which a real-world image is modified by adding and/or simplifying elements of the image using computer-based techniques. For the sake of convenience and brevity, the terms “augmented reality” and “mediated reality” will be used interchangeably in the description provided below. In some applications the use of mediated reality techniques allows information about the environment accessible to a user to become interactive.
Object motion tracking is a useful tool for augmented reality applications. Optical flow is a known method for motion tracking. Rather than first trying to recognize an object from raw image pixel data and then track the motion of the object among a sequence of image frames, optical flow determination instead tracks the motion of features from raw image pixel data. However, motion tracking using optical flow determination techniques is computationally expensive, thus making known optical flow determination techniques impractical for implementation on many current mobile platforms. The present embodiments contemplate modifying optical flow determination techniques so as to provide for content-adaptive optical flow determination.
Some embodiments relate to a method or system for providing optical flow determinations from a sequence of image frames, performed on at least one computer processor. According to examples of the method, weighting values associated with corresponding portions of an image may be generated. In some implementations, each weighting value represents the information value of one or more pixels. Pixels that have relatively low information value are deemphasized and pixels that have relatively high information value are emphasized in optical flow computations.
The amount of information provided by a pixel may be based on a determination of the context that the pixel provides within an image relative to the neighboring pixels. For example, in one implementation, a pixel that is surrounded by highly structured and/or organized pixel neighbors may be considered a pixel with a relatively high information value. Such a pixel is assumed to belong to a discernable feature, whose motion can be tracked within a sequence of image frames. On the other hand, a pixel surrounded by a uniform or pseudo-random cluster of pixel neighbors may be considered a pixel with a relatively low information value. Such a pixel is assumed not to belong to a feature that is easily discernible within the image.
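As a rough, hypothetical illustration of these ideas, the classification step can be sketched as uniform intensity quantization, with a pixel's neighborhood then examined for class agreement. The function names, the quantization scheme, and the agreement measure below are illustrative assumptions rather than the patent's prescribed implementation.

```python
import numpy as np

def classify_pixels(img, num_classes=8):
    """Assign each pixel of a grayscale image (floats in [0, 1]) to one
    of num_classes classes by uniform intensity quantization, a simple
    stand-in for the color quantization described later."""
    return np.minimum((img * num_classes).astype(int), num_classes - 1)

def class_agreement(classes, x, y, radius=2):
    """Fraction of pixels in a (2*radius+1)-sized neighborhood sharing
    the center pixel's class: one simple way to compare a pixel's
    classification with those of nearby pixels. Note that a perfectly
    uniform patch also scores high here; the Fisher-style measure
    described later separates that low-information case."""
    patch = classes[max(0, y - radius):y + radius + 1,
                    max(0, x - radius):x + radius + 1]
    return float(np.mean(patch == classes[y, x]))
```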
In another example, the weighting values may also be a function of dispersion. Dispersion generally refers to the separation of similar pixels from one another in an image region. Dispersion is lower where similar pixels reside near one another. Dispersion is higher where many intervening dissimilar pixels separate the similar pixels. One may also define dispersion as “high” when the pixels are so similar that no distinguishable groups can be said to exist. In turn, such pixels may be categorized as having low information values. Note that similar pixels may be “dispersed” as a whole, even if individual similar pixels reside beside one another in widely separated “clumps” or groups in the image. A “dispersion value” may indicate the degree of dispersion and provide a basis for determining the information value of one or more pixels.
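The dispersion idea can be turned into a number in many ways. One plausible sketch, assuming the quantized class labels from the previous example, compares the spatial spread of one label's pixels against the spread of the whole patch; the formula is an illustrative choice, not one fixed by the text.

```python
import numpy as np

def dispersion_value(classes, label):
    """Spatial spread of the pixels carrying `label`, relative to the
    spread of the whole patch. Values near 1 indicate the label's
    pixels are scattered across the patch (high dispersion); small
    values indicate a tight clump (low dispersion). Two widely
    separated clumps still yield a large value, matching the note
    above about pixels that are dispersed "as a whole"."""
    ys, xs = np.nonzero(classes == label)
    if len(xs) < 2:
        return 0.0
    pts = np.column_stack([xs, ys]).astype(float)
    h, w = classes.shape
    yy, xx = np.mgrid[0:h, 0:w]
    all_pts = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
    # Total coordinate variance of the label's pixels vs. the patch.
    return pts.var(axis=0).sum() / all_pts.var(axis=0).sum()
```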
In some implementations, low information value pixels are deemphasized in optical flow determinations based on the respective weighting values determined. One skilled in the art will appreciate that the weighting values referred to reflect the degree of information conveyed by one or more pixels, and that an actual implementation may represent a threshold or a range in a variety of forms (such as an inverse or a range). Thus, as described herein, "exceeding a threshold" indicates that a sufficient amount of information is present, regardless of whatever particular implementation is employed to describe that condition. Various implementations of the optical flow determination methods may be useful, for example, in a cellular telephone that is performing motion tracking on a sequence of captured image frames. By appropriately weighting portions of the image that have low information values, the system can deemphasize those portions in an optical flow determination process. Other regions, such as regions of relatively high information value pixels, may be emphasized. This method may save computational resources by reducing the number and/or complexity of optical flow computations related to pixels that are unlikely to provide useful information.
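For illustration only, gating by a weight threshold might look like the following; the helper name and the quantile-based threshold are hypothetical choices, and, per the note above, a real implementation could just as well encode the same condition as an inverse or a range.

```python
import numpy as np

def select_pixels_for_flow(weights, threshold):
    """Boolean mask of pixels whose information weight exceeds the
    threshold; only these pixels would receive full optical-flow
    effort, while the rest are deemphasized."""
    return weights > threshold

# Example: spend full effort on the most informative quartile of pixels.
# weights = ...  # per-pixel information weights from the steps above
# mask = select_pixels_for_flow(weights, np.quantile(weights, 0.75))
```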
In the following description, specific details are given to provide a thorough understanding of the examples. However, it will be understood by one of ordinary skill in the art that the examples may be practiced without these specific details. For example, electrical components/devices may be shown in block diagrams in order not to obscure the examples in unnecessary detail. In other instances, such components, other structures and techniques may be shown in detail to further explain the examples.
It is also noted that the examples may be described as a process, which is depicted as a flowchart, a flow diagram, a finite state diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel, or concurrently, and the process can be repeated. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a software function, its termination corresponds to a return of the function to the calling function or the main function.
Those of skill in the art will understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Various aspects of embodiments within the scope of the appended claims are described below. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the present disclosure one skilled in the art should appreciate that an aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method may be practiced using any number of the aspects set forth herein. In addition, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to or other than one or more of the aspects set forth herein.
As noted above, optical flow is a known method for motion tracking. Rather than trying to first recognize an object from raw image pixel data and then track the motion of the object among a sequence of images, optical flow determination instead tracks the motion of features from raw image pixel data. Optical flow techniques have a number of uses, such as determining the disparity of stereo vision in 3D imaging. A method of optical flow determination known as the total variation method with L1 norm regularization (TV-L1) can be implemented to provide real-time performance using a modern desktop graphics processing unit (GPU). There are, however, a number of issues that make it difficult to implement the TV-L1 method, as well as other optical flow techniques, on a mobile platform.
The TV-L1 method can be formulated according to the following cost-function:

$$E(u) = \int_{\Omega} \left( \left|\nabla u_1(x)\right| + \left|\nabla u_2(x)\right| \right) dx + \lambda \int_{\Omega} \left| I_1(x + u(x)) - I_0(x) \right| dx \tag{1}$$
The terms I0 and I1 are the first and second consecutive image frames, respectively, and the integrals in equation (1) are taken over the image domain Ω. The term u(x)=(u1(x), u2(x))T is the two-dimensional displacement field, and λ is a weighting parameter between the first and second terms in the cost function. The first integral term in equation (1) is a smoothing term that penalizes high variation in u(x) to obtain smooth displacement fields between sequential images. The second integral term in equation (1) is a fidelity term, formulated under the assumption that the intensity values in the second image, I1(x+u(x)), do not vary substantially from those in the first image, I0(x). With additional local linearization and a convexity assumption, equation (1) can be solved iteratively.
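For intuition, the discrete cost in equation (1) can be evaluated directly. The sketch below assumes grayscale frames as NumPy arrays and a displacement field of shape (2, H, W), and uses SciPy's map_coordinates for the warp I1(x+u(x)); it is illustrative only, since practical solvers minimize the cost iteratively rather than evaluating it wholesale.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def tv_l1_cost(I0, I1, u, lam):
    """Discrete evaluation of the TV-L1 cost of equation (1) for a
    candidate displacement field u, where u[0] is the horizontal and
    u[1] the vertical component."""
    # Smoothing term: gradient magnitudes of both flow components.
    smooth = 0.0
    for k in range(2):
        gy, gx = np.gradient(u[k])
        smooth += np.sqrt(gx ** 2 + gy ** 2).sum()
    # Fidelity term: |I1(x + u(x)) - I0(x)| via bilinear warping.
    h, w = I0.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    warped = map_coordinates(I1, [yy + u[1], xx + u[0]],
                             order=1, mode='nearest')
    fidelity = np.abs(warped - I0).sum()
    return smooth + lam * fidelity
```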
The TV-L1 method, however, has a number of problems that make it impractical to implement on a mobile platform with limited processing resources. First, the TV-L1 method heavily regularizes motion discontinuities, so the method has difficulty processing relatively large motion fields. To compensate for the heavy regularization, a "coarse-to-fine" image pyramid is typically used. Although the coarse-to-fine approach alleviates the constraint on relatively large motion, two additional problems are created. First, the coarse-to-fine image pyramid adds significant computational cost because computations have to be repeated on every level of the image pyramid. Second, the coarse-to-fine image pyramid effectively converts the original convex method into a non-convex method. Those skilled in the art will appreciate that a convex method is typically defined by a solution that will converge with iterative processing. For a non-convex method, on the other hand, convergence may occur, but it is less certain that it will occur after iterative processing.
Additionally, the TV-L1 method does not work well on poorly textured regions, such as a uniform or pseudo-random cluster of pixels, where matching cannot be performed reliably. For example, it is difficult to reliably match pixels between frames for features such as tree leaves swaying in the wind. Conventional solutions rely on strong regularization to propagate the motion from structured regions into unstructured regions. Finally, the magnitude of a flow computed using the TV-L1 method may converge while the direction of the flow vectors may not, leaving the results useful for object segmentation but not for motion tracking or 3D reconstruction.
As mentioned above, optical flow methods may be used to track motion within a sequence of images. In some embodiments, database 103 may comprise a collection of features used to identify objects and/or features within images whose motion may be tracked. When an application on the mobile device 101 attempts to track motion within a sequence of images, the mobile device 101 may retrieve features from database 103 in order to apply the features as part of a motion tracking process. Alternatively, the features and other commonly recognized patterns may be stored locally on the mobile device 101. Server 102 may also be in communication with the network 104 and receive images or features remotely from the network. Thus, either mobile device 101 or database 103 may contain images or features stored for recognition, which may in turn be used to aid motion tracking methods.
Mobile device 101 may also comprise a modem 204 for transmitting or receiving information, including images and optical flow data, via an antenna 208 or connector 207. Antenna 208 may be a wireless antenna used for connecting to a Wi-Fi network, a cellular network, or the like. Once optical flow determinations have been made, the user may upload them via modem 204 to database 103. Alternatively, the optical flow determinations may be stored locally and used locally.
The image capture system 201 may comprise a stand-alone hardware, software or firmware module. However, in some embodiments the image capture system may be integrated into the processor 202 as depicted in
Processor 202 may comprise an optical flow determination module 203, which may itself comprise software running on the processor 202, a dedicated component within a processor, standalone hardware, software, or firmware, and/or a combination thereof. The optical flow determination module 203 may implement a modified content-adaptive TV-L1 method as described below. The optical flow determination module 203 may also comprise software, firmware, or hardware designed to perform one or more additional optical flow processes. As such, the optical flow determination module 203 may be able to select among various optical flow determination processes given a particular set of circumstances. One skilled in the art will readily recognize alternative applications of the present embodiments to other optical flow determination processes.
Implementations of the present invention include a content-adaptive, total variation (CA-TV) method that, in some implementations, allows optical flow computations to converge more efficiently and accurately than some implementations of the TV-L1 method, including the "coarse-to-fine" image pyramid technique. More specifically, the CA-TV method disclosed herein gives more weight to pixels that are surrounded by highly structured/organized pixel neighbors in the iteration process while de-emphasizing low-information pixels, such as pixels surrounded by a uniform or a pseudo-random cluster. This information- or content-adaptive use of image pixels enables the method, in some implementations, to converge more accurately with a reduced number of iterations, thereby reducing computational complexity. In other words, by reducing the computations concerning low-information pixel clusters, computational speed may improve so that the method may be implementable on a mobile platform having limited computational resources.
Thus, in contrast to total variation methods previously available, such as the TV-L1 method discussed above, implementations of methods in accordance with aspects of the invention provide a respective weighting-value for each pixel or group of pixels. The weighting-values provide an indication as to which pixels are more pertinent to optical flow computations. In turn, computational resources and effort are focused on pixels with higher weights, which are more pertinent to optical flow determinations. In other words, the weighting-values enable a scheme that discriminates between pixels that are more pertinent to optical flow computations and pixels that are less pertinent.
Referring again to equation (1) provided above, all pixels have the same scaling factor, λ (i.e., weight), for the fidelity term in the conventional TV-L1 method, implying that all pixels contribute equally to the fidelity term in the optical flow computation. However, as recognized by aspects of the embodiments disclosed herein, not all pixels have the same significance in optical flow computations. A pixel cluster with an almost uniform or pseudo-random distribution, in fact, contributes little because such a cluster cannot be matched reliably between sequential image frames. In accordance with aspects of the present invention, low information pixels are preferably de-emphasized in the optical flow computations whereas useful pixels are preferably emphasized. In turn, emphasizing and de-emphasizing various pixels in this way may encourage the method of solving the cost-function to converge more efficiently and accurately. Because the information level or signal strength of an image is an important factor, the content-adaptive weighting of the fidelity term effectively increases the signal-to-noise ratio of the computation, leading to a potentially fast and reliable solution in the iterative process of solving equation (1).
In accordance with some implementations, a content-adaptive weighting term, λ(x), can be used to weight the fidelity term so that equation (1) becomes equation (2) as follows:

$$E(u) = \int_{\Omega} \left( \left|\nabla u_1(x)\right| + \left|\nabla u_2(x)\right| \right) dx + \int_{\Omega} \lambda(x) \left| I_1(x + u(x)) - I_0(x) \right| dx \tag{2}$$
In equation (2), the content-adaptive weighting value, λ(x), is a function of the location variable x = (x1, x2) that measures the information content in the neighborhood of the pixel at that location. In a non-limiting example, the content-adaptive weighting value, λ(x), can be defined as the Fisher discrimination function that measures the informational/structural strength of the patch surrounding the pixel:
$$\lambda(x) = c \cdot J(x) \tag{3}$$
In equation (3), the term c is a scaling constant and x is an element of Z, the set of all N locations in the neighborhood of a pixel. Suppose Z is classified into C classes. In a non-limiting example, the number of classes, C, represents the number of different colors obtained as a result of color quantization. In one possible implementation, mi represents the mean of the Ni data points of class Zi, and the following relationships can be defined.
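A standard scatter-matrix formulation of the Fisher criterion, consistent with the quantities just defined, is sketched below as an assumed reconstruction of equations (4) through (8), with m denoting the overall mean of the N locations in Z and tr(·) the matrix trace:

$$m = \frac{1}{N}\sum_{x \in Z} x \tag{4}$$

$$m_i = \frac{1}{N_i}\sum_{x \in Z_i} x, \quad i = 1, \dots, C \tag{5}$$

$$S_W = \sum_{i=1}^{C} \sum_{x \in Z_i} (x - m_i)(x - m_i)^T \tag{6}$$

$$S_B = \sum_{i=1}^{C} N_i \, (m_i - m)(m_i - m)^T \tag{7}$$

$$J(x) = \frac{\operatorname{tr}(S_B)}{\operatorname{tr}(S_W)} \tag{8}$$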
Equations (3) to (8) can be used to discriminate between orderly and diffused local patterns of pixels. Some examples are shown in
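Under that reading, a compact NumPy sketch of the structure score for a patch of quantized class labels might look as follows; orderly patches (classes in distinct clumps) score high, while uniform or pseudo-random patches score low. The trace of each scatter matrix is computed directly as a sum of squared distances.

```python
import numpy as np

def fisher_structure_score(classes):
    """Fisher-style score J for a patch of quantized class labels:
    trace of between-class spatial scatter over trace of within-class
    spatial scatter. An illustrative reading of equations (3)-(8),
    not a verbatim implementation."""
    h, w = classes.shape
    yy, xx = np.mgrid[0:h, 0:w]
    pts = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
    labels = classes.ravel()
    m = pts.mean(axis=0)                           # overall mean location
    s_w = 0.0                                      # tr(S_W)
    s_b = 0.0                                      # tr(S_B)
    for c in np.unique(labels):
        cluster = pts[labels == c]
        m_i = cluster.mean(axis=0)                 # class mean location
        s_w += ((cluster - m_i) ** 2).sum()
        s_b += len(cluster) * ((m_i - m) ** 2).sum()
    return s_b / s_w if s_w > 0 else 0.0
```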
Moreover, those skilled in the art will appreciate that the content-adaptive modification made to convert equation (1) to equation (2) may be applied to other optical flow determination methods. For example, the content-adaptive weighting value, λ(x), can be used to modify the total variation method with L2 regularization (TV-L2) to produce equation (9) as follows:

$$E(u) = \int_{\Omega} \left( \left|\nabla u_1(x)\right| + \left|\nabla u_2(x)\right| \right) dx + \int_{\Omega} \lambda(x) \left( I_1(x + u(x)) - I_0(x) \right)^2 dx \tag{9}$$
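To make the substitution concrete, the hypothetical tv_l1_cost sketch above needs only one change for equation (2): the scalar λ becomes a per-pixel weight map, for example c * J(x) from equation (3), applied inside the fidelity sum. The interface below remains an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def ca_tv_cost(I0, I1, u, lam_map):
    """Content-adaptive cost of equation (2): as tv_l1_cost, but with a
    per-pixel weight array lam_map scaling each fidelity residual."""
    smooth = 0.0
    for k in range(2):
        gy, gx = np.gradient(u[k])
        smooth += np.sqrt(gx ** 2 + gy ** 2).sum()
    h, w = I0.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    warped = map_coordinates(I1, [yy + u[1], xx + u[0]],
                             order=1, mode='nearest')
    return smooth + (lam_map * np.abs(warped - I0)).sum()
```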
Those having skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and process steps described in connection with the implementations disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. One skilled in the art will recognize that a portion, or a part, may comprise something less than or equal to a whole. For example, a portion of a collection of pixels may refer to a sub-collection of those pixels.
The various illustrative logical blocks, modules, and circuits described in connection with the implementations disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or process described in connection with the implementations disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of non-transitory storage medium known in the art. An exemplary computer-readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the computer-readable storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal, camera, or other device. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal, camera, or other device.
Headings are included herein for reference and to aid in locating various sections. These headings are not intended to limit the scope of the concepts described with respect thereto. Such concepts may have applicability throughout the entire specification.
Moreover, the word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments.
The previous description of the disclosed implementations is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these implementations will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the implementations shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Inventors: Ning Bi; Yingyong Qi; Xuerui Zhang