An information processing device, configured to perform color gamut conversion for compressing or enlarging the color gamut of image data, includes: a selecting unit configured to select a plurality of coordinate movement directions to be synthesized for determining the coordinate movement destination of a pixel to be processed during the color gamut conversion; a coordinate moving unit configured to move the coordinates of the pixel to be processed in each of the selected plurality of directions; and a synthesizing unit configured to synthesize coordinate movement in the selected plurality of directions.

Patent: RE45927
Priority: Dec 13, 2007
Filed: Nov 21, 2014
Issued: Mar 15, 2016
Expiry: Dec 10, 2028
Entity: Large
Status: Active
8. An information processing method to perform color gamut conversion for compressing or enlarging the color gamut of input image data, comprising the steps of:
selecting a plurality of coordinate movement directions to be synthesized for determining the coordinate movement destination of a pixel to be processed during said color gamut conversion;
moving the coordinates of said pixel to be processed in each of the selected plurality of directions; and
synthesizing, by a processor, coordinate movement in the selected plurality of directions so as to generate synthesized image data,
wherein color gamut conversion is performed according to original color gamut information and target color gamut information, and
wherein the original color gamut information and the target color gamut information are generated according to a cusp table which is rendered according to one of an index and xy chromaticity data of three primary colors,
wherein a plurality of said coordinate movement directions are selected based on whether or not color gamut enlargement processing for enlarging a color gamut is performed as said color gamut conversion,
wherein
a saturation direction, and a rectilinear direction which connects a black point or white point and said pixel to be processed as said coordinate movement directions are selected in a case wherein said color gamut enlargement processing is not performed, and
a saturation direction, and a rectilinear direction which connects a point, which is disposed on a luminance axis, having the same luminance value as that of the maximum saturation point, and said pixel to be processed as said coordinate movement directions are selected in a case wherein said color gamut enlargement processing is performed.
1. An information processing device configured to perform color gamut conversion for compressing or enlarging the color gamut of image data, comprising:
selecting means configured to select a plurality of coordinate movement directions to be synthesized for determining the coordinate movement destination of a pixel to be processed during said color gamut conversion;
coordinate moving means configured to move the coordinates of said pixel to be processed in each of the selected plurality of directions; and
synthesizing means configured to synthesize coordinate movement in the selected plurality of directions,
wherein color gamut conversion is performed according to original color gamut information and target color gamut information, and wherein the original color gamut information and the target color gamut information are generated according to a cusp table which is rendered according to one of an index and xy chromaticity data of three primary colors,
wherein said selecting means selects a plurality of said coordinate movement directions based on whether or not color gamut enlargement processing for enlarging a color gamut is performed as said color gamut conversion,
wherein said selecting means selects a saturation direction, and a rectilinear direction which connects a black point or white point and said pixel to be processed as said coordinate movement directions in a case wherein said color gamut enlargement processing is not performed, and selects a saturation direction, and a rectilinear direction which connects a point, which is disposed on a luminance axis, having the same luminance value as that of the maximum saturation point, and said pixel to be processed as said coordinate movement directions in a case wherein said color gamut enlargement processing is performed.
2. An information processing device configured to perform color gamut conversion for compressing or enlarging the color gamut of input image data, comprising:
a selecting unit configured to select a plurality of coordinate movement directions to be synthesized for determining the coordinate movement destination of a pixel to be processed during said color gamut conversion;
a coordinate moving unit configured to move the coordinates of said pixel to be processed in each of the selected plurality of directions; and
a synthesizing unit configured to synthesize coordinate movement in the selected plurality of directions so as to generate synthesized image data,
wherein color gamut conversion is performed according to original color gamut information and target color gamut information, and
wherein the original color gamut information and the target color gamut information are generated according to a cusp table which is rendered according to one of an index and xy chromaticity data of three primary colors,
wherein said selecting unit selects a plurality of said coordinate movement directions based on whether or not color gamut enlargement processing for enlarging a color gamut is performed as said color gamut conversion,
wherein said selecting unit
selects a saturation direction, and a rectilinear direction which connects a black point or white point and said pixel to be processed as said coordinate movement directions in a case wherein said color gamut enlargement processing is not performed, and
selects a saturation direction, and a rectilinear direction which connects a point, which is disposed on a luminance axis, having the same luminance value as that of the maximum saturation point, and said pixel to be processed as said coordinate movement directions in a case wherein said color gamut enlargement processing is performed.
14. A non-transitory computer readable recording medium having stored thereon a program enabling a computer to execute an information processing method to perform color gamut conversion for compressing or enlarging the color gamut of input image data, the method comprising the steps of:
selecting a plurality of coordinate movement directions to be synthesized for determining the coordinate movement destination of a pixel to be processed during said color gamut conversion;
moving the coordinates of said pixel to be processed in each of the selected plurality of directions; and
synthesizing coordinate movement in the selected plurality of directions so as to generate synthesized image data,
wherein color gamut conversion is performed according to original color gamut information and target color gamut information, and
wherein the original color gamut information and the target color gamut information are generated according to a cusp table which is rendered according to one of an index and xy chromaticity data of three primary colors,
wherein a plurality of said coordinate movement directions are selected based on whether or not color gamut enlargement processing for enlarging a color gamut is performed as said color gamut conversion,
wherein
a saturation direction, and a rectilinear direction which connects a black point or white point and said pixel to be processed as said coordinate movement directions are selected in a case wherein said color gamut enlargement processing is not performed, and
a saturation direction, and a rectilinear direction which connects a point, which is disposed on a luminance axis, having the same luminance value as that of the maximum saturation point, and said pixel to be processed as said coordinate movement directions are selected in a case wherein said color gamut enlargement processing is performed.
3. The information processing device according to claim 2,
wherein said coordinate moving unit moves said pixel to be processed in the rectilinear direction which connects the black point and said pixel to be processed in a case wherein the luminance of said pixel to be processed is brighter than the luminance of the maximum saturation point, and moves said pixel to be processed in the rectilinear direction which connects the white point and said pixel to be processed in a case wherein the luminance of said pixel to be processed is darker than the luminance of the maximum saturation point.
4. The information processing device according to claim 2,
wherein said synthesizing unit synthesizes coordinate movement performed in the selected plurality of directions with a ratio based on a blend function.
5. The information processing device according to claim 4,
wherein the blend function changes depending on a hue.
6. The information processing device according to claim 2,
wherein the input image data and the synthesized image data are in a YCH format.
7. The information processing device according to claim 6, further comprising:
a format conversion unit configured to convert the synthesized image data from the YCH format into a YCC format.
9. The information processing method according to claim 8,
wherein said pixel to be processed is moved in the rectilinear direction which connects the black point and said pixel to be processed in a case wherein the luminance of said pixel to be processed is brighter than the luminance of the maximum saturation point, and said pixel to be processed is moved in the rectilinear direction which connects the white point and said pixel to be processed in a case wherein the luminance of said pixel to be processed is darker than the luminance of the maximum saturation point.
10. The information processing method according to claim 8,
wherein coordinate movement performed in the selected plurality of directions is synthesized with a ratio based on a blend function.
11. The information processing method according to claim 10,
wherein the blend function changes depending on a hue.
12. The information processing method according to claim 8,
wherein the input image data and the synthesized image data are in a YCH format.
13. The information processing method according to claim 12, further comprising the step of:
converting the synthesized image data from the YCH format into a YCC format.
15. The non-transitory computer readable recording medium according to claim 14,
wherein said pixel to be processed is moved in the rectilinear direction which connects the black point and said pixel to be processed in a case wherein the luminance of said pixel to be processed is brighter than the luminance of the maximum saturation point, and said pixel to be processed is moved in the rectilinear direction which connects the white point and said pixel to be processed in a case wherein the luminance of said pixel to be processed is darker than the luminance of the maximum saturation point.
16. The non-transitory computer readable recording medium according to claim 14,
wherein coordinate movement performed in the selected plurality of directions is synthesized with a ratio based on a blend function.
17. The non-transitory computer readable recording medium according to claim 16,
wherein the blend function changes depending on a hue.
18. The non-transitory computer readable recording medium according to claim 14,
wherein the input image data and the synthesized image data are in a YCH format.
19. The non-transitory computer readable recording medium according to claim 18, the method further comprising the step of converting the synthesized image data from the YCH format into a YCC format.


A virtual clip boundary (V-boundary) 191 is determined from the YC coordinates of the clip point Cusp_V. For example, as shown in FIG. 21, the virtual clip boundary (V-boundary) 191 of the Cusp point is made up of a line segment with the clip point Cusp_V and a white point as both ends, and a line segment with the clip point Cusp_V and a black point as both ends.

That is to say, the V-boundary 191 is determined with the above-mentioned compression function, and the ratio (p:q) between the distance to the L-boundary 153 and the distance to the U-boundary 152 of a pixel to be processed. In other words, pixels to be processed having the same ratio (p:q) between the distance to the L-boundary 153 and the distance to the U-boundary 152 share the V-boundary 191.

Note that description has been made so far regarding the case of compressing a color gamut, but the method for determining the V-boundary 191 in the case of enlarging a color gamut is basically the same as that in the case of compressing a color gamut.

Returning to FIG. 5: in step S106, the mapping processing unit 114 executes blend mapping processing, wherein each pixel to be processed is mapped (subjected to coordinate movement) onto the V-boundary 191 determined for that pixel as described above, in a direction in which multiple mapping directions are blended. A detailed flow example of this blend mapping processing will be described later.

Upon the processing in step S106 being ended, the color gamut conversion device 100 ends the color gamut conversion processing. As described above, the color gamut conversion device 100 converts a color gamut from an original color gamut to a target color gamut appropriately.

Next, a flow example of the blend mapping processing executed in step S106 in FIG. 5 will be described with reference to the flowchart in FIG. 22. Description will be made with reference to FIGS. 23 through 30 as appropriate.

Upon the blend mapping processing being started, in step S121 the combination selecting unit 121 determines whether or not both the color gamut compression processing and the color gamut enlargement processing are to be performed as the color gamut conversion of the input picture content data. At this time, the combination selecting unit 121 references the LU table to determine whether or not the enlargement processing is performed, based on whether or not there is a value less than 1 among the values of the L-boundary 153. In a case wherein determination is made that the enlargement processing is also performed, the combination selecting unit 121 selects the C-direction mapping processing unit 122 and the Cusp-direction mapping processing unit 123, and advances the processing to step S122.

In step S122, the C-direction mapping processing unit 122 executes C-direction mapping processing wherein a pixel to be processed is moved (mapped) onto the virtual clip boundary (V-boundary) 191 in the saturation (C) direction.

FIG. 23 is a diagram illustrating an example of the C-direction mapping. As shown in FIG. 23, in this case only the saturation direction is compressed; the luminance direction is not. That is to say, pixels mapped onto the same virtual clip boundary (V-boundary) 191 have mutually different luminance values, and so are mapped to mutually different positions; each pixel to be processed corresponds one-to-one with its mapping destination. Accordingly, although FIG. 23 shows only an example in the compression direction, the C-direction mapping may also be applied in the enlargement direction (it is reversible). The C-direction mapping has the property of eliminating colors in the compression direction, and of retaining colors in the enlargement direction.
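The C-direction mapping can be sketched as follows. This is a minimal illustration, not the device's actual implementation: the helper `boundary_c_at`, which returns the V-boundary saturation at a given luminance, is a hypothetical callback, and a hard clip stands in for the compression function described earlier.

```python
def c_direction_map(pixel, boundary_c_at):
    """Map a pixel onto the V-boundary in the saturation (C) direction only.

    Luminance is held fixed, so pixels with different luminance values
    land on mutually different boundary positions (one-to-one mapping).
    """
    y, c = pixel
    c_limit = boundary_c_at(y)  # V-boundary saturation at this luminance
    return (y, min(c, c_limit))
```

Because luminance never changes, a pixel and its mapping destination determine each other uniquely, which is why this direction can in principle also be run in the enlargement direction.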

Returning to FIG. 22: in step S123, the Cusp-direction mapping processing unit 123 performs mapping processing wherein the YC coordinates are moved (mapped) onto the virtual clip boundary (V-boundary) 191 in a rectilinear direction connecting a point (Ycp, 0) and the pixel to be processed.

FIG. 24 is a diagram illustrating an example of the Cusp-direction mapping. As shown in FIG. 24, in this case a pixel to be processed is mapped onto the virtual clip boundary (V-boundary) 191 with the point (Ycp, 0) on the luminance (Y) axis, having the same luminance value as the Cusp point, serving as the convergent point. Accordingly, in this case as well, pixels mapped onto the same virtual clip boundary (V-boundary) 191 are mapped to mutually different positions. Although FIG. 24 shows only an example in the compression direction, the Cusp-direction mapping may also be applied in the enlargement direction (it is reversible). The Cusp-direction mapping has the property of retaining colors to some extent in the compression direction, and of eliminating colors to some extent in the enlargement direction.
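The Cusp-direction movement amounts to scaling about the convergent point (Ycp, 0) on the luminance axis. In this sketch the scale factor `t` is assumed to be supplied by the caller so that the result lands on the V-boundary; computing that intersection is omitted here.

```python
def cusp_direction_move(pixel, y_cp, t):
    """Move a pixel along the ray from the convergent point (Ycp, 0)
    through the pixel.  t < 1 compresses toward the convergent point,
    t > 1 enlarges away from it, and t = 1 leaves the pixel in place."""
    y, c = pixel
    return (y_cp + t * (y - y_cp), t * c)
```

Since the movement is a pure scaling, it can be undone by applying the reciprocal factor, which reflects the reversibility noted above.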

Returning to FIG. 22: upon the C-direction mapping processing and Cusp-direction mapping processing being completed, the processing proceeds to step S126.

Also, in a case wherein determination is made in step S121 that only the compression processing is performed on the input picture content data to be subjected to color gamut conversion, and the enlargement processing is not performed, the combination selecting unit 121 selects the C-direction mapping processing unit 122 and the BW-direction mapping processing unit 124, and advances the processing to step S124.

In step S124, the C-direction mapping processing unit 122 executes the C-direction mapping processing in the same way as in the case of step S122.

In step S125, the BW-direction mapping processing unit 124 performs mapping processing wherein in a case in which the luminance of a pixel to be processed is brighter than the luminance of the Cusp point, the pixel to be processed is moved (mapped) onto the virtual clip boundary (V-boundary) 191 in a rectilinear direction connecting a black point and the pixel to be processed, and in a case in which the luminance of the pixel to be processed is darker than the luminance of the Cusp point, the pixel to be processed is moved (mapped) onto the virtual clip boundary (V-boundary) 191 in a rectilinear direction connecting a white point and the pixel to be processed.

FIG. 25 is a diagram illustrating an example of the BW-direction mapping. As shown in FIG. 25, in this case a pixel to be processed which is brighter than the Cusp point is mapped onto the virtual clip boundary (V-boundary) 191 with the black point as the convergent point, and a pixel to be processed which is darker than the Cusp point is mapped onto the V-boundary 191 with the white point as the convergent point. In this case, all of the pixels to be processed disposed in the hatched portion of FIG. 25 are mapped onto the Cusp point. Accordingly, this method is available only in the compression direction, and is not available in the enlargement direction (it is irreversible). Of the above-mentioned three fixed mapping methods, the BW-direction mapping, as compression-direction mapping, retains colors the most.
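The choice of convergent point in the BW-direction mapping can be sketched as below. Luminance is assumed normalized so that the black point sits at Y = 0.0 and the white point at Y = 1.0, and the scale factor `t` is again assumed to be supplied so the result lands on the V-boundary; both are assumptions of this sketch, not specified values.

```python
def bw_direction_move(pixel, y_cusp, t):
    """Pick the BW-direction convergent point and move the pixel toward it.

    Pixels brighter than the Cusp point move on the line through the
    black point (Y=0, C=0); darker pixels move on the line through the
    white point (Y=1, C=0).  Distinct pixels can collapse onto the Cusp
    point, so this mapping is irreversible (compression only).
    """
    y, c = pixel
    conv_y = 0.0 if y > y_cusp else 1.0  # black point if brighter, else white
    return (conv_y + t * (y - conv_y), t * c)
```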

Returning to FIG. 22: upon the C-direction mapping processing and BW-direction mapping processing being completed, the processing proceeds to step S126.

In step S126, the synthesis processing unit 125 blends the mapping results in the two mapping directions, performed in steps S122 and S123, or in steps S124 and S125, based on a blend function.

With the above-mentioned three fixed mapping methods, as shown in FIG. 26, the mapping directions differ mutually. In FIG. 26, a white circle denotes an example of a pixel to be processed, Pc denotes a mapping destination example of the pixel to be processed according to the C-direction mapping, Pcp denotes a mapping destination example of the pixel to be processed according to the Cusp-direction mapping, and Pbw denotes a mapping destination example of the pixel to be processed according to the BW-direction mapping.

In order to determine a final mapping direction, the synthesis processing unit 125 blends at least the two directions selected by the combination selecting unit 121 from among the multiple fixed mapping directions, whose directions mutually differ. At this time, two mapping directions with differing properties, such as a direction that retains colors and a direction that eliminates colors, are blended, whereby the synthesis processing unit 125 can adjust toward a desired mapping direction according to the blend ratio. In the case of the above-mentioned three fixed mapping directions, the following two methods can be conceived, for example.

That is to say, there are a method for synthesizing the C-direction mapping and BW-direction mapping (steps S124 and S125), and a method for synthesizing the C-direction mapping and Cusp-direction mapping (steps S122 and S123). The combination selecting unit 121 selects which combination is employed depending on whether or not there is conversion in the enlargement direction.

The method of synthesizing the C-direction mapping and BW-direction mapping combines the two mapping directions whose color-eliminating and color-retaining properties differ the most, so adjustment can be made readily (the adjustable range is wide). In particular, the BW-direction mapping retains colors with deeper hue, so the contrast adjustment width is very wide, and accordingly an image can be adjusted to obtain a more natural appearance. However, the BW-direction mapping is an irreversible mapping direction, and accordingly cannot be employed for the enlargement processing.

On the other hand, in the case of synthesizing the C-direction mapping and Cusp-direction mapping, the properties of the Cusp-direction mapping are somewhat ambiguous compared to the BW-direction mapping, so the adjustable range in the case of color gamut compression or the like is narrower than with the C-direction/BW-direction combination. An image resulting from compression may also appear to have insufficient contrast in some cases, compared to the C-direction/BW-direction combination. However, the combined mapping directions are both reversible, and can also be employed for the enlargement processing.

That is to say, in general, in the case of performing color gamut compression alone, the method for synthesizing the C-direction mapping and BW-direction mapping can obtain a more natural appearance result as compared to the method for synthesizing the C-direction mapping and Cusp-direction mapping, but in the event of performing color gamut enlargement, the method for synthesizing the C-direction mapping and Cusp-direction mapping can obtain a more desirable result.

In general, in order to approximate an ideal clip direction, the combination selecting unit 121 defines at least two types of fixed mapping directions. As shown in FIG. 27, mapping wherein only the saturation direction is compressed and colors are eliminated (direction A) is taken as one of the fixed mapping directions, and mapping wherein both the saturation direction and luminance direction are moved and colors are retained (direction B) is taken as the other. A final mapping direction is determined by the synthesis processing unit 125 blending the two directions with an appropriate ratio. In the example in FIG. 27, the directions A and B are blended with a ratio of 1:2. That is to say, if the blend ratio between the fixed mapping directions can be defined appropriately for each pixel to be processed, mapping can be performed so as to approximate an ideal mapping direction. Therefore, the synthesis processing unit 125 performs mapping by employing a blend function wherein a mixing ratio is specified for each hue.

An example of such a blend function is shown in FIG. 28. With this blend function, for the color gamut 300 shown on the left side of FIG. 28, the use ratio of the C-direction mapping point dominates for a pixel to be processed whose luminance is near the white or black point, such as the areas A indicated by the arrows 301 and 302 in the center of FIG. 28, and the use ratio of the BW-direction mapping point dominates for a pixel to be processed whose luminance is near the Cusp, such as the area B indicated by the arrows 303. As shown on the right side of FIG. 28, when a blend function such as the curve 305 is given to one of the two mapping directions to be blended, the value obtained by subtracting the value of the curve 305 from 1.0, i.e., the curve 304, is given to the other mapping direction as its blend function.

Note that this blend function may be defined so as to blend two mapping directions as shown in FIG. 28, or three mapping directions including the Cusp-direction mapping. In either case, the blend function is defined by adjusting it so as to realize a mapping direction such as that shown in FIG. 28.

With regard to the blend functions shown in the curves 304 and 305 in the upper stage 311 of FIG. 29, in reality two types of blend functions (curves 321 and 322) are prepared, as shown in the middle stage 312 of FIG. 29, wherein only a BW-direction use ratio is defined. One (curve 322) is a function corresponding to a pixel to be processed on the brighter side of the Cusp point, and the other (curve 321) is a function corresponding to a pixel to be processed on the darker side of the Cusp point. The horizontal axis of the blend function (graph) shown in the middle stage 312 represents luminance, wherein the range from the luminance of the Cusp point to the white point, and the range from the luminance of the Cusp point to the black point, are each normalized to 0.0 through 1.0. Note that the C-direction use ratio can be obtained by subtracting the BW-direction use ratio from 1.0.
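The normalization and two-curve lookup can be sketched as follows. The curves passed in are hypothetical stand-ins for the prepared blend functions (curves 321 and 322), and the white and black points are assumed to sit at normalized luminance 1.0 and 0.0.

```python
def bw_use_ratio(y_i, y_cusp, curve_bright, curve_dark):
    """Sample the blend function for a pixel's luminance Yi.

    The luminance range from the Cusp point to the white point (brighter
    side) or to the black point (darker side) is normalized to 0.0-1.0,
    and the corresponding curve gives the BW-direction use ratio.  The
    C-direction use ratio is 1.0 minus the returned value.
    """
    if y_i >= y_cusp:
        u = (y_i - y_cusp) / (1.0 - y_cusp)  # 0.0 at Cusp, 1.0 at white point
        return curve_bright(u)
    u = (y_cusp - y_i) / y_cusp              # 0.0 at Cusp, 1.0 at black point
    return curve_dark(u)
```

A toy curve such as `lambda u: 1.0 - u` gives full BW-direction weight at the Cusp and none at the white or black point, matching the qualitative shape described for FIG. 28.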

The luminance and saturation of the Cusp point of a color gamut differ significantly depending on the hue, as shown for example in the curve 351 in the graph in the upper stage of FIG. 30, and the shape of the color gamut changes accordingly (color gamuts 361A through 367A in the middle stage of FIG. 30). Accordingly, it is desirable to change the blend function depending on the hue; by defining it as shown in the middle stage 312 of FIG. 29, the synthesis processing unit 125 can change the blend function appropriately for each hue in accordance with the luminance position of the Cusp point of the color gamut. For example, situations of the blend function at hues A and B, where the luminance of the Cusp point is lower or higher, are shown in the upper stage 311 and lower stage 313 of FIG. 29, respectively; it can be confirmed that the blend function changes in accordance with the luminance of the Cusp point. Thus, by changing the blend function, even if the color gamut shape changes for each hue as shown in the middle stage of FIG. 30, an ideal mapping direction can be realized, as with the color gamuts 362B through 367B shown in the lower stage of FIG. 30: a direction wherein colors are eliminated near the white or black point, and a direction wherein colors are retained near the Cusp point.

As described above, let us say that the blend function is referenced by employing the luminance Yi of a pixel to be processed, and the obtained BW-direction use ratio is taken as UseR_BW. A final mapping point Pout(Yo, Co) can then be calculated with the following Expressions (7) and (8), employing the C-direction mapping point (Yc, Cc) and the BW-direction mapping point (Ybw, Cbw).
Yo=UseR_BW×Ybw+(1.0−UseR_BW)×Yc  (7)
Co=UseR_BW×Cbw+(1.0−UseR_BW)×Cc  (8)
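Expressions (7) and (8) amount to a per-coordinate linear blend of the two mapping points; a direct transcription:

```python
def blend_mapping_points(use_r_bw, p_bw, p_c):
    """Blend the BW-direction point (Ybw, Cbw) and the C-direction point
    (Yc, Cc) with the ratio UseR_BW, per Expressions (7) and (8)."""
    y_bw, c_bw = p_bw
    y_c, c_c = p_c
    y_o = use_r_bw * y_bw + (1.0 - use_r_bw) * y_c  # Expression (7)
    c_o = use_r_bw * c_bw + (1.0 - use_r_bw) * c_c  # Expression (8)
    return (y_o, c_o)
```

With UseR_BW = 1.0 the result coincides with the BW-direction mapping point, and with UseR_BW = 0.0 with the C-direction mapping point, so intermediate ratios interpolate between the two fixed directions.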

Returning to FIG. 22: in step S127, the format conversion unit 126 converts the format of the output content data, for example from YCH to YCC. The format conversion unit 126 employs the following Expressions (9) through (12) to convert the obtained final mapping point Pout(Yo, Co) from YCH coordinates to YCC coordinates, and calculates the YCC coordinates Pout(Yo, Cbo, Cro) of the final mapping point.
Ho=Hi  (9)
Yo=Yo  (10)
Cbo=Co×cos(Ho)  (11)
Cro=Co×sin(Ho)  (12)
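Expressions (9) through (12) can be transcribed directly; the hue angle is assumed here to be in radians.

```python
import math

def ych_to_ycc(y_o, c_o, h_i):
    """Convert the final mapping point from YCH to YCC coordinates.

    Hue passes through unchanged (Ho = Hi, Expression (9)) and luminance
    is untouched (Expression (10)); the chroma magnitude Co is resolved
    into Cb/Cr components (Expressions (11) and (12)).
    """
    h_o = h_i                   # Expression (9)
    cb_o = c_o * math.cos(h_o)  # Expression (11)
    cr_o = c_o * math.sin(h_o)  # Expression (12)
    return (y_o, cb_o, cr_o)    # Yo unchanged per Expression (10)
```

This is simply a polar-to-Cartesian conversion of the chroma plane, with the hue angle as the polar angle and the saturation as the radius.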

Upon the processing in step S127 being completed, the blend mapping processing ends, the processing returns to step S106 in FIG. 5, and the color gamut conversion processing ends.

As described above, with this color gamut conversion, multiple mutually differing mapping directions are blended with an appropriate ratio to determine a final mapping direction, whereby the color gamut conversion device 100 can realize mapping direction control with higher flexibility, and can readily realize a mapping direction better suited to a given purpose.

Description has been made so far exemplifying three fixed mapping directions, from which the mapping processing unit 114 selects two to synthesize, but a fixed mapping direction may be a direction other than those mentioned above. Also, the number of fixed mapping directions to be prepared may be four or more. Further, the mapping processing unit 114 may synthesize multiple fixed mapping directions in a combination other than the above-mentioned combinations. For example, the mapping processing unit 114 may select and synthesize three or more mapping directions.

Also, description has been made so far wherein the mapping directions to be synthesized are selected by the mapping processing unit 114 depending on whether the color gamut enlargement is performed, but the selection condition for mapping directions may be any condition, and the mapping directions selected under each condition are arbitrary as long as no inconvenience arises. For example, in the flowchart in FIG. 22, description has been made wherein, in a case in which the color gamut enlargement is not performed, the C-direction mapping and BW-direction mapping are selected and synthesized; however, the present invention is not restricted to this, and other mapping directions may be selected. For example, even in a case wherein the color gamut enlargement is not performed, the mapping processing unit 114 may select the C-direction mapping and Cusp-direction mapping according to the color gamut or the like of an output device.

That is to say, any kind of method may be employed as long as mapping directions can be selected appropriately in accordance with a predetermined condition, and the conditions for selecting each method, as well as the number of directions to be synthesized, are arbitrary.

Information processing system examples employing a color gamut conversion method such as described above are shown in FIGS. 31A and 31B.

The respective information processing systems shown in FIGS. 31A and 31B are information processing systems to which an embodiment of the present invention has been applied. The color gamut conversion such as described above is performed in the case of picture content data being exchanged between multiple devices, or in the case of expecting picture content data to be exchanged between multiple devices. With regard to a combination of devices to perform exchange of picture content data, and the exchange method thereof, there can be conceived various combinations and various methods, but in FIGS. 31A and 31B, description will be made regarding a case wherein with an information processing system configured of a supply-side device 401 for supplying picture content data, and an obtaining-side device 402 for obtaining the picture content data, the color gamut conversion is performed, for convenience of explanation.

FIG. 31A illustrates an example in the case of performing the color gamut conversion at the obtaining-side device 402. As shown in FIG. 31A, the supply-side device 401 supplies input picture content data 411 and original color gamut information 412 to the obtaining-side device 402. The obtaining-side device 402 has the same function as that of the color gamut conversion device 100 in FIG. 3, includes a color gamut conversion unit 421 for performing similar processing, and has further obtained target color gamut information 422. The color gamut conversion unit 421 performs color gamut conversion based on the original color gamut information 412 supplied from the supply-side device 401 and the target color gamut information 422, thereby converting the input picture content data 411 supplied from the supply-side device 401 into output picture content data 423.

FIG. 31B illustrates another example, in the case of performing the color gamut conversion at the supply-side device 401. As shown in FIG. 31B, the supply-side device 401 includes the color gamut conversion unit 421, and has obtained the input picture content data 411 and original color gamut information 412. Also, the obtaining-side device 402 supplies the target color gamut information 422 to the supply-side device 401. The color gamut conversion unit 421 performs color gamut conversion based on the original color gamut information 412 and the target color gamut information 422 supplied from the obtaining-side device 402, thereby converting the input picture content data 411 into output picture content data 423. The supply-side device 401 supplies the converted output picture content data 423 to the obtaining-side device 402.
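The two configurations differ only in which device runs the conversion and, accordingly, in what crosses the link between them. This can be sketched as a single switch; `convert` is a hypothetical stand-in for the processing of the color gamut conversion unit 421, and its signature is an assumption.

```python
def exchange_content(convert_at_supplier, input_data, original_gamut,
                     target_gamut, convert):
    """Sketch of the two system configurations in FIGS. 31A and 31B.

    FIG. 31A: the supply-side device sends the input data together with its
    original color gamut information, and the obtaining-side device converts.
    FIG. 31B: the obtaining-side device sends its target color gamut
    information, and the supply-side device converts before transmitting.
    """
    if convert_at_supplier:
        # FIG. 31B: converted data crosses the link.
        output = convert(input_data, original_gamut, target_gamut)
    else:
        # FIG. 31A: raw data and the original gamut info cross the link;
        # conversion happens on the obtaining side.
        transmitted = input_data
        output = convert(transmitted, original_gamut, target_gamut)
    return output
```

Either way the same output picture content data results; the choice affects only which device must hold the conversion unit and which gamut information must be communicated.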

As described above, the present invention may be applied to any kind of device as long as the device has the same configuration as that of the color gamut conversion device 100 in FIG. 3 and includes the color gamut conversion unit 421 for performing similar processing. That is to say, for example, as described with reference to FIGS. 31A and 31B, the color gamut conversion unit 421 can select an appropriate color gamut conversion method according to the device and conditions, and can perform color gamut conversion appropriately under a greater variety of conditions.

The above-mentioned series of processing can be executed not only by hardware but also by software. In this case, for example, the device executing the above-mentioned series of processing may be configured as a personal computer such as shown in FIG. 32.

In FIG. 32, a CPU (Central Processing Unit) 501 of a personal computer 500 executes various types of processing in accordance with a program stored in ROM (Read Only Memory) 502, or a program loaded into RAM (Random Access Memory) 503 from a storing unit 513. Data or the like used by the CPU 501 to execute various types of processing is also stored in the RAM 503 as appropriate.

The CPU 501, ROM 502, and RAM 503 are mutually connected through a bus 504. An input/output interface 510 is also connected to the bus 504.

The input/output interface 510 is connected with an input unit 511 made up of a keyboard, mouse, and so forth; an output unit 512 made up of a display such as a CRT (Cathode Ray Tube) or LCD (Liquid Crystal Display), a speaker, and so forth; a storing unit 513 configured of a hard disk or the like; and a communication unit 514 configured of a modem or the like. The communication unit 514 performs communication processing through a network including the Internet.

The input/output interface 510 is also connected with a drive 515 on which a removable medium 521, such as a magnetic disk, optical disc, magneto-optical disk, or semiconductor memory, is mounted as appropriate, and a computer program read out therefrom is installed into the storing unit 513 as appropriate.

In a case wherein the above-mentioned series of processing is executed by software, a program making up the software thereof is installed from a network or recording medium.

The recording medium is not restricted to the removable medium 521, which is distributed separately from the device main unit such as shown in FIG. 32 in order to distribute a program to a user, and which is made up of a magnetic disk (including a flexible disk), an optical disc (including CD-ROM (Compact Disc-Read Only Memory) and DVD (Digital Versatile Disc)), a magneto-optical disk (including MD (Mini Disc)), semiconductor memory, or the like, wherein the program is recorded. The recording medium may also be the ROM 502, a hard disk included in the storing unit 513, or the like, wherein the program is recorded in a state built into the device main unit beforehand.

Note that, with the present Specification, the steps describing a program recorded in a recording medium include not only processing performed in time series along the described order, but also processing executed in parallel or individually, which is not necessarily performed in time series.

Also, with the present Specification, the term “system” represents the entirety of equipment configured of multiple devices.

Note that the configuration described above as a single device may be configured as multiple devices. Conversely, the configuration described above as multiple devices may be configured as a single device collectively. Also, a configuration other than the above-mentioned configuration may be added to the configuration of each device. Further, if the configuration and operation as the entire system are substantially the same, a part of the configuration of a certain device may be included in another device. That is to say, embodiments of the present invention are not restricted to the above-mentioned embodiment, and various changes can be made without departing from the essence and spirit of the present invention.

It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.

Mizukura, Takami, Katoh, Naoya
