Methods and systems may provide for an image processing pipeline having a hardware module to spatially filter a raw image in the horizontal direction to obtain intermediate image data. The pipeline can also include a set of instructions which, if executed by a processor, cause the pipeline to spatially filter the intermediate image data in the vertical direction.
4. An apparatus comprising:
a hardware module to spatially filter a raw image in a horizontal direction to obtain intermediate image data, each pixel in a row of the intermediate image data including red plus green values or blue plus green values;
a computer readable storage medium including a set of stored instructions which, if executed by a processor, cause the apparatus to spatially filter the intermediate image data in a vertical direction, each pixel in a column of the intermediate image data including red plus green values or blue plus green values; and
an output pixel requestor to select coefficients for a first filter and a green module based on pixel position,
wherein the hardware module includes:
the first filter to determine red-blue average values for pixels in the raw image on a row-by-row basis;
the green module to determine green values for pixels in the raw image on a row-by-row basis; and
a summation module to correct the red-blue average values based on the green values.
1. A system comprising:
a processor;
an image sensor to generate a raw image;
a hardware module to spatially filter the raw image in a horizontal direction to obtain intermediate image data, each pixel in a row of the intermediate image data including red plus green values or blue plus green values;
a computer readable storage medium including a set of stored instructions which, if executed by the processor, cause the system to spatially filter the intermediate image data in a vertical direction, each pixel in a column of the intermediate image data including red plus green values or blue plus green values; and
an output pixel requestor to select coefficients for a first filter and a green module based on pixel position, and to generate a valid output flag based on a down-sample rate,
wherein the hardware module includes:
the first filter to determine red-blue average values for pixels in the raw image on a row-by-row basis;
the green module to determine green values for pixels in the raw image on a row-by-row basis; and
a summation module to correct the red-blue average values based on the green values.
13. A method comprising:
using a hardware module to spatially filter a raw image in a horizontal direction to obtain intermediate image data, each pixel in a row of the intermediate image data including red plus green values or blue plus green values; and
using software to spatially filter the intermediate image data in a vertical direction, each pixel in a column of the intermediate image data including red plus green values or blue plus green values, wherein using the software to spatially filter the intermediate image data includes determining red-blue average values in the intermediate image data on a column-by-column basis, determining green values for pixels in the intermediate image data on a column-by-column basis, correcting the red-blue average values based on the green values, calculating a green average value for pixels in the intermediate image data on a column-by-column basis, calculating a green nearest neighbor value for pixels in the intermediate image data on a column-by-column basis, calculating relative weights for the green average values and the green nearest neighbor values based on a difference calculation for pixels in the intermediate image data on a column-by-column basis, and calculating the green values based on the green average values, the green nearest neighbor values and the relative weights.
2. The system of
a second filter to calculate a green average value for pixels in the raw image on a row-by-row basis;
a third filter to calculate a green nearest neighbor value for pixels in the raw image on a row-by-row basis;
a difference calculator to calculate relative weights for the green average values and the green nearest neighbor values based on a difference calculation for pixels in the raw image on a row-by-row basis; and
a blend module to calculate the green values based on the green average values, the green nearest neighbor values and the relative weights.
3. The system of
determine red-blue average values for pixels in the intermediate image data on a column-by-column basis,
determine green values for pixels in the intermediate image data on a column-by-column basis, and
correct the red-blue average values based on the determined green values.
5. The apparatus of
a second filter to calculate a green average value for pixels in the raw image on a row-by-row basis;
a third filter to calculate a green nearest neighbor value for pixels in the raw image on a row-by-row basis;
a difference calculator to calculate relative weights for the green average values and the green nearest neighbor values based on a difference calculation for pixels in the raw image on a row-by-row basis; and
a blend module to calculate the green values based on the green average values, the green nearest neighbor values and the relative weights.
6. The apparatus of
8. The apparatus of
10. The apparatus of
determine red-blue average values for pixels in the intermediate image data on a column-by-column basis,
determine green values for pixels in the intermediate image data on a column-by-column basis, and
correct the red-blue average values based on the green values.
11. The apparatus of
calculate a green average value for pixels in the intermediate image data on a column-by-column basis,
calculate a green nearest neighbor value for pixels in the intermediate image data on a column-by-column basis,
calculate relative weights for the green average values and the green nearest neighbor values based on a difference calculation for pixels in the intermediate image data on a column-by-column basis, and
calculate the green values based on the green average values, the green nearest neighbor values and the relative weights.
12. The apparatus of
14. The method of
applying a first filter to the raw image to determine red-blue average values for pixels in the raw image on a row-by-row basis;
applying a green module to the raw image to determine green values for pixels in the raw image on a row-by-row basis; and
using a summation module to correct the red-blue average values based on the green values.
15. The method of
Digital cameras include image processing pipelines that re-sample and spatially filter (e.g., interpolate) raw image data. For example, camera pipeline components such as de-mosaicing, down-sampling, optical distortion correction and chromatic aberration correction components could all apply interpolation techniques to a single image. Conventional image processing pipelines may implement these functions in series and entirely in hardware. Such series processing could degrade the image quality due to the application of several low-pass type filters in succession. Conducting the interpolation fully in hardware can also have efficiency shortcomings. Each of these concerns may be particularly relevant in high data rate operation modes such as preview and video recording.
The various advantages of the embodiments of the present invention will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which:
Embodiments may provide for a method in which a hardware module is used to spatially filter a raw image in a horizontal direction to obtain intermediate image data. The method can also involve the use of software to spatially filter the intermediate image data in a vertical direction.
Embodiments can also include an apparatus having a hardware module to spatially filter a raw image in a horizontal direction to obtain intermediate image data. In addition, the apparatus may include a computer readable storage medium including a set of stored instructions which, if executed by a processor, cause the apparatus to spatially filter the intermediate image data in a vertical direction.
Other embodiments may include a system having a processor, an image sensor to generate a raw image, and a hardware module to spatially filter the raw image in a horizontal direction to obtain intermediate image data. The system can also include a computer readable storage medium including a set of stored instructions which, if executed by the processor, cause the system to spatially filter the intermediate image data in a vertical direction.
Turning now to
Generally, the illustrated hardware module 18 spatially filters the raw images 24 in the horizontal direction to obtain geometrically corrected and horizontally down-sampled intermediate image data 26a, which may be stored to a buffer/memory 28 as rows are processed by the hardware module 18 (e.g., on a row-by-row basis). As will be discussed in greater detail below, each pixel in a row of the intermediate image data 26a may include red and green (R+G) values or blue and green (B+G) values, wherein these values might be expressed on any appropriate scale (e.g., 0-256, 0-4096, etc.). When a sufficient number of rows have been processed by the hardware module 18 for the vertical interpolation software module 20 to begin processing columns (e.g., five rows before+five rows after a row in question=eleven rows), intermediate image data 26b may be retrieved from the memory 28 on a column-by-column basis. Each pixel in a column of the intermediate image data 26b might include R+G values or B+G values.
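The buffering scheme just described can be sketched in a few lines. This is an illustrative sketch only (the names and window size here follow the eleven-row example above, not any specific implementation detail of the patent): horizontally filtered rows accumulate in a bounded buffer until enough vertical context (five rows before plus five rows after the row in question) is available for column processing to begin.

```python
from collections import deque

WINDOW = 11  # five rows before + five rows after + the row in question

def stream_rows(rows, window=WINDOW):
    """Yield (center_row_index, window_of_rows) once enough rows are buffered."""
    buf = deque(maxlen=window)  # bounded buffer: oldest row drops off automatically
    for i, row in enumerate(rows):
        buf.append(row)
        if len(buf) == window:
            center = i - window // 2  # row with full context above and below
            yield center, list(buf)

# Usage: with 13 rows, the first full window centers on row 5.
rows = [[r] for r in range(13)]
centers = [c for c, _ in stream_rows(rows)]
# centers == [5, 6, 7]
```

The bounded buffer keeps memory proportional to the window height rather than the full image, which matches the motivation of processing rows as they arrive from the hardware module.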
The illustrated software module 20 may be implemented as a set of instructions which, if executed by a processor, cause the software module 20 to spatially filter the intermediate image data 26b in a vertical direction to obtain geometrically corrected and vertically down-sampled final image data 30 that may be further processed by other software modules 32. In one example, each pixel of the final image data 30 may include red, green and blue (R+G+B, RGB) values.
In the illustrated example, the hardware module 34 processes a raw pixel stream 38 having alternating R/G and B/G lines and uses a low pass (LP) R/B filter 36 to determine R/B average values 40 (e.g., R/B_AV) for pixels in the raw image on a row-by-row basis. Thus, the pixels of each row output from the LP R/B filter 36 might have either a red or a blue value based on the filter coefficients established for the LP R/B filter 36. These coefficients can be set via a coefficient line 50 from an output pixel requestor 54 based on a pixel position obtained from a pixel counter input 52. For example, the coefficients established by the output pixel requestor 54 may depend on the exact sampling point relative to the input raw data grid. The illustrated output pixel requestor 54 may also select the filter coefficients for a green module 42, discussed in greater detail below. In addition, the output pixel requestor 54 may generate a valid output flag 56 based on a down-sample rate (e.g., 1.875) obtained from a control signal 58.
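One way to picture the output pixel requestor's role is as a fractional position accumulator: the integer part of each sampling point addresses the input pixel, the fractional part selects a coefficient phase, and an output is valid only at those sampling points. The sketch below is a hypothetical illustration (the phase count and function names are our assumptions, not values from the patent), using the example down-sample rate of 1.875.

```python
NUM_PHASES = 8  # assumed number of coefficient phases in the polyphase bank

def request_outputs(num_inputs, rate=1.875, phases=NUM_PHASES):
    """Return (input_index, phase) pairs: one entry per valid output pixel."""
    out = []
    pos = 0.0
    while pos < num_inputs:
        idx = int(pos)                       # input pixel at/left of the sampling point
        frac = pos - idx                     # sub-pixel offset relative to the raw grid
        phase = int(frac * phases) % phases  # quantized offset selects the coefficient set
        out.append((idx, phase))
        pos += rate                          # advance by the down-sample rate
    return out

taps = request_outputs(8)
# With rate 1.875 and 8 inputs, the sampling points are 0, 1.875, 3.75, 5.625, 7.5,
# so five outputs are valid and each carries a different coefficient phase.
```

This captures why the coefficients "depend on the exact sampling point relative to the input raw data grid": each fractional offset maps to a different phase of the polyphase filter bank.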
The hardware module 34 may also include a green module 42 to determine green values 44 (e.g., G_OUT) for pixels in the raw image on a row-by-row basis. A multiplication module 48 (e.g., having multiplication value “K”) and a summation module 46 can be used to correct the R/B average values 40 based on the green values 44, wherein the LP R/B filter 36 could also include a high pass (HP) green (G) filtering component to capture the derivative of the green pixels in each row. The HP G filtering component may be associated with the value (i.e., “K”) of the multiplication module 48. As a result, corrected R/B values 47 may be output from the summation module 46, wherein each pixel in a row can have a green value 44 and a corrected R/B value 47 (i.e., either R+G or B+G).
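The correction path above (low-pass R/B average plus K times a high-pass green estimate) can be illustrated with a toy example. The 3-tap high-pass kernel and the value of K below are illustrative assumptions, not the patent's filters; the point is only that high-frequency green detail removed by the low-pass filter is partially restored into the R/B averages.

```python
K = 0.5  # assumed multiplication value for the correction term

def hp_green(g, i):
    """Simple 3-tap high-pass on the green samples around index i (edges replicated)."""
    lo = g[max(i - 1, 0)]
    hi = g[min(i + 1, len(g) - 1)]
    return g[i] - (lo + hi) / 2.0

def correct_rb(rb_av, g):
    """Summation module: add K * high-pass(green) to each R/B average."""
    return [rb + K * hp_green(g, i) for i, rb in enumerate(rb_av)]

rb = correct_rb([10.0, 10.0, 10.0], [4.0, 8.0, 4.0])
# The green peak at the middle pixel raises its corrected R/B value above its neighbors.
```

Each output pixel then pairs a green value with a corrected R/B value, giving the R+G or B+G intermediate format described above.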
In the illustrated example, the green module 42 includes an LP G filter 60 to determine a green average value 62 (e.g., G_AV) for pixels in the raw image on a row-by-row basis. Thus, each green average value 62 may represent the average green value over a certain number of pixels in a row. The LP G filter 60 could also have an HP R/B filtering component to capture the derivative of the R/B pixels in each row. In addition, a green nearest neighbor filter 64 can be used to calculate a green nearest neighbor value 66 (e.g., G_NN) for pixels in the raw image on a row-by-row basis. Each green nearest neighbor value 66 may therefore indicate the green value of the nearest pixel in the row (e.g., the pixel on either side of the pixel in question). The filters of the green module 42 and the LP R/B filter 36 may include polyphase filters designed to support a wide range of sampling ratios.
The illustrated green module 42 also includes a difference calculator 68 to calculate relative weights for the green average values 62 and the green nearest neighbor values 66 based on a difference calculation for pixels in the raw image on a row-by-row basis. The relative weights might be expressed in a single parameter signal 70 (e.g., alpha), wherein a blend module 72 can be used to calculate the green values 44 based on the green average values 62, the green nearest neighbor values 66, and the relative weights reflected in signal 70. For example, the blend module 72 might use the following expression to calculate each green value,
blend_out = alpha*G_NN + (1 − alpha)*G_AV  (1)
Thus, as the calculated row-based pixel difference (alpha) increases, an edge/border is more likely to be present in the row of the image and the green values 44 can be more heavily weighted towards the green nearest neighbor values 66 to better capture the edge/border. Alternatively, as the calculated row-based pixel difference decreases, the row of the image is likely to be smooth in texture and the green values may be more heavily weighted towards the green average values 62. Simply put, the larger the variability in a certain direction, the narrower the interpolation in that direction.
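A compact sketch of equation (1) follows. The difference metric used to derive alpha here is an illustrative choice (a normalized absolute difference), not the patent's specific calculation; what matters is the blending behavior: alpha near one favors the nearest-neighbor green at edges, alpha near zero favors the averaged green in smooth regions.

```python
def blend_green(g_nn, g_av, diffs, scale=255.0):
    """blend_out = alpha * G_NN + (1 - alpha) * G_AV, per pixel."""
    out = []
    for nn, av, d in zip(g_nn, g_av, diffs):
        alpha = min(abs(d) / scale, 1.0)  # large local difference -> likely edge
        out.append(alpha * nn + (1.0 - alpha) * av)
    return out

# A smooth pixel (diff 0) follows the average; a strong edge (diff 255) follows G_NN.
vals = blend_green([100.0, 200.0], [120.0, 120.0], [0.0, 255.0])
# vals == [120.0, 200.0]
```

The same "narrower interpolation where variability is larger" behavior applies unchanged in the vertical pass, with differences computed column-by-column.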
Turning now to
Processing block 76 provides for determining R/B average values for pixels in the intermediate image data on a column-by-column basis and processing block 78 provides for determining green values for pixels in the intermediate image data on a column-by-column basis. In addition, the R/B average values may be corrected based on the green values at block 80.
Turning now to
The illustrated network controller could provide off-platform communication functionality for a wide variety of purposes such as cellular telephone (e.g., W-CDMA (UMTS), CDMA2000 (IS-856/IS-2000), etc.), WiFi (e.g., IEEE 802.11, 1999 Edition, LAN/MAN Wireless LANs), Bluetooth (e.g., IEEE 802.15.1-2005, Wireless Personal Area Networks), WiMax (e.g., IEEE 802.16-2004, LAN/MAN Broadband Wireless LANs), Global Positioning System (GPS), spread spectrum (e.g., 900 MHz), and other radio frequency (RF) telephony purposes.
The illustrated platform 92 also includes a digital camera image sensor 112 and a horizontal interpolation hardware module 114, wherein the image sensor 112 can generate raw images at high bit rates (e.g., from image preview and/or video capture operations) and the hardware module may spatially filter the raw images in the horizontal direction to obtain intermediate image data. Thus, the hardware module 18 (
The above described techniques may therefore provide an efficient image processing implementation that produces high quality results when the data is captured at high rates. In addition, combining minimized dedicated hardware with software processing can enable mobile computers and/or smaller handheld devices to stay within performance and power consumption requirements.
Embodiments of the present invention are applicable for use with all types of semiconductor integrated circuit (“IC”) chips. Examples of these IC chips include but are not limited to processors, controllers, chipset components, programmable logic arrays (PLA), memory chips, network chips, and the like. In addition, in some of the drawings, signal conductor lines are represented with lines. Some may be thicker, to indicate more constituent signal paths, have a number label, to indicate a number of constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines.
Example sizes/models/values/ranges may have been given, although embodiments of the present invention are not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments of the invention. Further, arrangements may be shown in block diagram form in order to avoid obscuring embodiments of the invention, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the platform within which the embodiment is to be implemented, i.e., such specifics should be well within purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the invention, it should be apparent to one skilled in the art that embodiments of the invention can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting.
Some embodiments may be implemented, for example, using a machine or tangible computer-readable medium or article which may store an instruction or a set of instructions that, if executed by a machine, may cause the machine to perform a method and/or operations in accordance with the embodiments. Such a machine may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, processor, or the like, and may be implemented using any suitable combination of hardware and/or software. The machine-readable medium or article may include, for example, any suitable type of memory unit, memory device, memory article, memory medium, storage device, storage article, storage medium and/or storage unit, for example, memory, removable or non-removable media, erasable or non-erasable media, writeable or re-writeable media, digital or analog media, hard disk, floppy disk, Compact Disk Read Only Memory (CD-ROM), Compact Disk Recordable (CD-R), Compact Disk Rewriteable (CD-RW), optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of Digital Versatile Disk (DVD), a tape, a cassette, or the like. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, encrypted code, and the like, implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.
Unless specifically stated otherwise, it may be appreciated that terms such as “processing,” “computing,” “calculating,” “determining,” or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical quantities (e.g., electronic) within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices. The embodiments are not limited in this context.
The term “coupled” may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical or other connections. In addition, the terms “first”, “second”, etc. are used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.
Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments of the present invention can be implemented in a variety of forms. Therefore, while the embodiments of this invention have been described in connection with particular examples thereof, the true scope of the embodiments of the invention should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims.
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Jun 24 2010 | STANHILL, DAVID | Intel Corporation | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 024618 | /0161 | |
Jun 28 2010 | Intel Corporation | (assignment on the face of the patent) | / |
Date | Maintenance Fee Events |
Sep 11 2013 | ASPN: Payor Number Assigned. |
May 04 2017 | M1551: Payment of Maintenance Fee, 4th Year, Large Entity. |
Jul 12 2021 | REM: Maintenance Fee Reminder Mailed. |
Dec 27 2021 | EXP: Patent Expired for Failure to Pay Maintenance Fees. |