A neighboring vector, which is a boundary portion between two overlapping objects, is extracted. To calculate luminance levels of the objects on both sides of the neighboring vector, a predetermined number of coordinate points (sample points) in the vicinity of the neighboring vector are extracted at least from the image side. A rendering process is performed on an area including all the extracted sample points to acquire color values at the sample points. The luminance level of the image is calculated based on the acquired color values, and the luminance levels of the objects on both sides of the neighboring vector are compared to each other to determine the position (direction) in which to generate a trap graphic.
17. An image processing method for reading print data containing a plurality of objects including image objects each having one or more colors and color objects each having one color, and generating a trap graphic along a neighboring vector, which is a boundary portion between any two overlapping objects from among the plurality of objects, the boundary portion being included in a contour of one of the two overlapping objects that is disposed on top of the other object, the method comprising:
a sample coordinate extraction step for extracting coordinate points in the vicinity of the neighboring vector as sample coordinate points from among all coordinate points at which the image object is rendered, wherein at least one of the two overlapping objects is an image object;
a color value acquisition step for acquiring color values representing colors at the sample coordinate points; and
a trap graphic generation position (TGGP) determination step for determining a position in which to generate the trap graphic with reference to the neighboring vector, based on the color values at the sample coordinate points acquired by the color value acquisition step.
1. An image processing apparatus for reading print data containing a plurality of objects including image objects each having one or more colors and color objects each having one color, and generating a trap graphic along a neighboring vector, which is a boundary portion between any two overlapping objects from among the plurality of objects, the boundary portion being included in a contour of one of the two overlapping objects that is disposed on top of the other object, the apparatus comprising:
a sample coordinate extraction portion for extracting coordinate points in the vicinity of the neighboring vector as sample coordinate points from among all coordinate points at which the image object is rendered, wherein at least one of the two overlapping objects is an image object;
a color value acquisition portion for acquiring color values representing colors at the sample coordinate points; and
a trap graphic generation position (TGGP) determination portion for determining a position in which to generate the trap graphic with reference to the neighboring vector, based on the color values at the sample coordinate points acquired by the color value acquisition portion.
9. A non-transitory computer-readable recording medium having recorded therein an image processing program for use with an image processing apparatus for reading print data containing a plurality of objects including image objects each having one or more colors and color objects each having one color, and generating a trap graphic along a neighboring vector, which is a boundary portion between any two overlapping objects from among the plurality of objects, the boundary portion being included in a contour of one of the two overlapping objects that is disposed on top of the other object, the program causing the apparatus to execute the following steps:
a sample coordinate extraction step for extracting coordinate points in the vicinity of the neighboring vector as sample coordinate points from among all coordinate points at which the image object is rendered, wherein at least one of the two overlapping objects is an image object;
a color value acquisition step for acquiring color values representing colors at the sample coordinate points; and
a trap graphic generation position (TGGP) determination step for determining a position in which to generate the trap graphic with reference to the neighboring vector, based on the color values at the sample coordinate points acquired by the color value acquisition step.
2. The image processing apparatus according to
3. The image processing apparatus according to
4. The image processing apparatus according to
5. The image processing apparatus according to
6. The image processing apparatus according to
7. The image processing apparatus according to
8. The image processing apparatus according to
wherein, for the character string consisting of the plurality of characters, the TGGP determination portion determines the position in which to generate the trap graphic per character when the selection portion selects a determination per character, and
wherein, for the character string consisting of the plurality of characters, the TGGP determination portion determines the position in which to generate the trap graphic when the selection portion selects a determination per character string, such that a relative positional relationship between the neighboring vector and the trap graphic is identical among all the plurality of characters.
10. The computer-readable recording medium according to
11. The computer-readable recording medium according to
12. The computer-readable recording medium according to
13. The computer-readable recording medium according to
14. The computer-readable recording medium according to
15. The computer-readable recording medium according to
wherein, in the TGGP determination step, a determination is made to generate the trap graphic so as to center around the neighboring vector when a difference in luminance level between the two overlapping objects is less than or equal to a value entered through the value entry portion.
16. The computer-readable recording medium according to
wherein, in the TGGP determination step, for the character string consisting of the plurality of characters, the position in which to generate the trap graphic is determined per character when a determination per character is selected by the selection step, and
wherein, in the TGGP determination step, for the character string consisting of the plurality of characters, the position in which to generate the trap graphic is determined when a determination per character string is selected by the selection step, such that a relative positional relationship between the neighboring vector and the trap graphic is identical among all the plurality of characters.
1. Field of the Invention
The present invention relates to image processing apparatuses and methods for printing and plate making, and more specifically to technology for trapping between objects including image objects (data).
2. Description of the Background Art
In the field of printing and plate making, a personal computer or the like is used first to perform an edit process based on the characters to be contained in a print and a plurality of other types of print elements, such as logos, patterns, and illustrations, and to generate page data describing the print target in a page-description language. Thereafter, a RIP process is performed on the page data to generate image data for use in producing a plate (a press plate) to be mounted in a printing machine.
Incidentally, in the case of multicolor printing, in order to prevent an underlying portion from being exposed at a boundary between colors due to misregistration, a trapping process is performed before the RIP process is applied to the page data. The trapping process disposes, along a boundary portion between two adjacent colors on an image, a hairline graphic (hereinafter referred to as a "trap graphic") whose color contains color elements from both sides of the boundary portion. For example, in the case where a Y color halftone tint object 71, a C color halftone tint object 72, and an M color halftone tint object 73 are located so as to overlap each other as shown in the drawings, trap graphics are generated along the boundary portions between the objects.
In the above-described trapping process for objects consisting of colored line drawings such as halftone tints, strokes, and characters (such objects, which are rendered using a plurality of color plates, are referred to hereinafter as "color objects"), a value representing brightness (a luminance level) is calculated for each color object, and the trap graphic is then generated on the color object with the lower luminance level, i.e., the relatively dark-colored object. This makes the trap graphic less noticeable, preventing the trapping process from reducing image quality.
On the other hand, in the trapping process between image objects (hereinafter simply referred to as “images”) or between an image and a color object, the luminance level of each image varies from one pixel to another, and therefore the side on which a trap graphic is to be generated (hereinafter, referred to as a “trap direction”) cannot be determined from among the two objects according to the luminance levels. Thus, for example, the trap direction is determined according to a trap rule designated by the user from among preset trap rules (rules for automatically determining the trap direction).
Examples of the trap rules between the image and the color object include “no trap graphic”, “increase in area of color object relative to image”, “decrease in area of color object relative to image”, “centering”, etc. Note that each of the trap rules can be designated per (color object) type, such as halftone tint, stroke, or character (e.g., “the halftone tint can be increased in area relative to the image”). In addition, examples of the trap rules between images include “no trap graphic”, “increase in area of top (front) image relative to bottom (back) image”, “decrease in area of top (front) image relative to bottom (back) image”, “centering”, etc.
Trap graphics generated according to these trap rules will be described with reference to the drawings.
Note that the following techniques have been disclosed in relation to the trapping process. Japanese Laid-Open Patent Publication No. 2004-34636 discloses a technique related to a trapping process for use in the case where black characters overlap a design. Japanese Laid-Open Patent Publication No. 2006-5481 discloses a technique for predicting a direction in which a plate significantly deviates, and setting a wide trap width in that direction. Japanese Laid-Open Patent Publication No. 2004-122692 discloses a technique for changing the size of a trap area depending on printing methods. Japanese Laid-Open Patent Publication No. 2001-129986 discloses a technique for eliminating variations between printing and proofing in terms of the accuracy of trapping and the degree of garbling. Japanese Laid-Open Patent Publication No. 2003-87548 discloses a technique related to a trapping process for a document with a plurality of pages. Japanese Laid-Open Patent Publication No. 2004-155001 discloses a technique related to a trapping process between graphics. Japanese Laid-Open Patent Publication No. 2006-202198 discloses a technique for speeding up a trapping process. Japanese Laid-Open Patent Publication No. 2006-129007 discloses a technique for speeding up a trapping process using an object-type-information bitmap, and a processing parameter per object, as well as allowing the user to readily designate a trapping parameter.
However, when the trapping process is automatically carried out between images or between an image and a color object according to the above-described trap rules, a trap graphic might be generated in an undesirable direction. Such a case will be described with reference to the drawings.
The first assumption is that relatively dark color objects (a halftone tint 92 and characters 93) are disposed on (in front of) a relatively bright image 91 prior to the trapping process, as shown in the drawings.
The next assumption is that relatively bright color objects (a halftone tint 95 and characters 96) are disposed on (in front of) a relatively dark image 94 prior to the trapping process, as shown in the drawings.
In addition, when data such as that shown in the drawings is processed, a trap graphic can likewise be generated in an undesirable direction.
As described above, when the trapping process is automatically carried out between objects including an image, a trap graphic might not be generated in a desirable direction, leading to a reduction in image quality. In addition, the user is required to manually correct any portion in which the trap graphic is generated in an undesirable direction, leading to a reduction in working efficiency.
Therefore, an object of the present invention is to provide an image processing apparatus and an image processing method that are capable of automatically generating a trap graphic in a desirable direction between an image and a color object or between images, without requiring the user's manual operation.
The present invention has the following features to attain the above object.
One aspect of the present invention is directed to an image processing apparatus for reading print data containing a plurality of objects including image objects each having one or more colors and color objects each having one color, and generating a trap graphic along a neighboring vector, which is a boundary portion between any two overlapping objects from among the plurality of objects, the boundary portion being included in a contour of one of the two overlapping objects that is disposed on top of the other object, the apparatus comprising:
a sample coordinate extraction portion for extracting coordinate points in the vicinity of the neighboring vector as sample coordinate points from among all coordinate points at which the image object is rendered, wherein at least one of the two overlapping objects is an image object;
a color value acquisition portion for acquiring color values representing colors at the sample coordinate points; and
a trap graphic generation position (TGGP) determination portion for determining a position in which to generate the trap graphic with reference to the neighboring vector, based on the color values at the sample coordinate points acquired by the color value acquisition portion.
The image processing apparatus thus configured includes: the sample coordinate extraction portion for extracting coordinate points in the vicinity of a neighboring vector, which is a boundary portion between objects, as sample coordinate points; the color value acquisition portion for acquiring color values at the extracted sample coordinate points; and the TGGP determination portion for determining a trap direction based on the color values at the sample coordinate points. Accordingly, even if one of two overlapping objects is an image object, the trap direction can be determined based on the color values at the sample coordinate points in the vicinity of the neighboring vector. Therefore, when generating a trap graphic between image objects or between an image object and a color object, the trap direction is determined based on brightness of the two objects in the vicinity of the neighboring vector. Thus, even if the image object contains color objects with various levels of brightness, it is possible to generate the trap graphic in a desirable trap direction between the objects.
In the apparatus as configured above, the color value acquisition portion preferably acquires the color values at the sample coordinate points by rendering a rectangular area in the smallest possible size, the rectangular area containing all the sample coordinate points extracted by the sample coordinate extraction portion.
According to this configuration, a rendering process is performed only on the smallest possible rectangular area containing all the sample coordinate points in order to acquire the color values at those points. Thus, it is possible to prevent an increase in processing time for the trapping process.
In the apparatus as configured above, the sample coordinate extraction portion preferably extracts a predetermined number of coordinate points from among all coordinate points on the neighboring vector as reference coordinate points for sample coordinate extraction, which are referenced for extracting the sample coordinate points, and the sample coordinate extraction portion preferably also extracts predetermined points as the sample coordinate points, the predetermined points being present on a line that is perpendicular to the neighboring vector and extends through the reference coordinate points for sample coordinate extraction.
According to this configuration, a predetermined number of coordinate points on the neighboring vector are extracted for sample coordinate extraction. Accordingly, coordinate points to be referenced for sample coordinate extraction can be extracted based on the entire length of the neighboring vector and lengths of line segments constituting the neighboring vector. Thus, coordinate points used for obtaining average brightness of each object in the vicinity of the neighboring vector can be suitably and readily extracted.
In the apparatus as configured above, it is preferable that the TGGP determination portion calculates luminance levels of the image objects based on the color values at the sample coordinate points acquired by the color value acquisition portion, and determines the position in which to generate the trap graphic with reference to the neighboring vector, such that the trap graphic is generated on one of the two overlapping objects that has a lower luminance level.
According to this configuration, luminance levels of two objects are compared to each other to generate a trap graphic on an object with a lower luminance level. Thus, it is possible to generate a trap graphic on a darker one of two objects including an image without requiring the user's manual operation.
Preferably, the apparatus as configured above further includes a selection portion for externally selecting whether to determine the position in which to generate the trap graphic per character or per character string when one of the two overlapping objects is a color object including a character string consisting of a plurality of characters,
wherein, for the character string consisting of the plurality of characters, the TGGP determination portion determines the position in which to generate the trap graphic per character when the selection portion selects a determination per character, and
wherein, for the character string consisting of the plurality of characters, the TGGP determination portion determines the position in which to generate the trap graphic when the selection portion selects a determination per character string, such that a relative positional relationship between the neighboring vector and the trap graphic is identical among all the plurality of characters.
According to this configuration, the user can select whether to determine the trap direction per character or per character string when generating a trap graphic between an image object and a color object including the character string. Thus, through the selection by the user, it is possible to set the same trap direction for all characters in the character string.
Another aspect of the present invention is directed to a computer-readable recording medium having recorded therein an image processing program for use with an image processing apparatus for reading print data containing a plurality of objects including image objects each having one or more colors and color objects each having one color, and generating a trap graphic along a neighboring vector, which is a boundary portion between any two overlapping objects from among the plurality of objects, the boundary portion being included in a contour of one of the two overlapping objects that is disposed on top of the other object, the program causing the apparatus to execute the following steps:
a sample coordinate extraction step for extracting coordinate points in the vicinity of the neighboring vector as sample coordinate points from among all coordinate points at which the image object is rendered, wherein at least one of the two overlapping objects is an image object;
a color value acquisition step for acquiring color values representing colors at the sample coordinate points; and
a TGGP determination step for determining a position in which to generate the trap graphic with reference to the neighboring vector, based on the color values at the sample coordinate points acquired by the color value acquisition step.
Still another aspect of the present invention is directed to an image processing method for reading print data containing a plurality of objects including image objects each having one or more colors and color objects each having one color, and generating a trap graphic along a neighboring vector, which is a boundary portion between any two overlapping objects from among the plurality of objects, the boundary portion being included in a contour of one of the two overlapping objects that is disposed on top of the other object, the method comprising:
a sample coordinate extraction step for extracting coordinate points in the vicinity of the neighboring vector as sample coordinate points from among all coordinate points at which the image object is rendered, wherein at least one of the two overlapping objects is an image object;
a color value acquisition step for acquiring color values representing colors at the sample coordinate points; and
a TGGP determination step for determining a position in which to generate the trap graphic with reference to the neighboring vector, based on the color values at the sample coordinate points acquired by the color value acquisition step.
These and other objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.
Hereinafter, an embodiment of the present invention will be described with reference to the accompanying drawings.
<1. Hardware Configuration of the Image Processing Apparatus>
A program P for image processing (hereinafter, referred to as an “image processing program”) is stored in the auxiliary storage device 20, and when the image processing apparatus is activated, the image processing program P is loaded into the memory device 110 via the disk interface portion 130. Then, the CPU 100 executes the image processing program P to implement image processing for a trapping process. Note that the image processing program P is provided, for example, through a computer-readable recording medium, such as a CD-ROM, which has the program recorded therein. Specifically, after purchasing a CD-ROM as a recording medium containing the image processing program P, the user installs the image processing program P into the auxiliary storage device 20 by inserting the CD-ROM into a CD-ROM drive unit to cause the CD-ROM drive unit to read the image processing program P from the CD-ROM. Alternatively, the image processing program P may be received via the LAN 24, and installed into the auxiliary storage device 20.
<2. General Outline of the Trapping Process Between Objects Including an Image>
Described next is the general outline of the trapping process between objects including an image in the present embodiment. In the present embodiment, first, a neighboring vector between the objects is extracted. The neighboring vector as used herein refers to a reference line for determining the position in which to generate a trap graphic: it is the portion of the contour of the top (front) one of two overlapping objects that lies over the bottom (back) object. After the neighboring vector is extracted, coordinate points (sample coordinate points), which are used as samples for calculating the brightness of the objects, are extracted in order to compare the brightness of the object on one side of the neighboring vector to that of the object on the other side. Then, luminance levels of the image objects are calculated based on the color values at the extracted coordinate points and compared to each other to determine a trap direction. Note that the color value refers to the ratio of halftone dot area per unit area for each plate used for printing (C plate, M plate, Y plate, and K plate); it is expressed as a percentage and is also referred to as a "dot percentage" or a "tone value".
The following description is given on the assumption that a halftone tint 32 is disposed as a color object on an image 31, as shown in the drawings.
<3. Procedure of Image Processing for the Trapping Process>
The CPU 100 first loads page data, which is written in a page-description language in the format of, for example, PostScript (registered trademark of Adobe Systems Incorporated) or PDF (Portable Document Format), from the auxiliary storage device 20 to the memory 110 (step S100). Then, depending on the format of the loaded page data, the data is internally converted to a format that can be handled by the image processing program P according to the present embodiment (step S110).
After the internal conversion of the data, a process (hereinafter referred to as an "ID rendering process") is performed for correlating each pixel within a display area with the object that is to be displayed by that pixel (step S120). By the ID rendering process, each object is assigned a unique identification number (hereinafter referred to as an "ID"). For example, in the case where n objects are present within a page, the objects are assigned IDs "1", "2", . . . , "n" in order from bottom (back) to top (front). Where the objects are disposed in the order described above, the image 31 is assigned ID "1" and the halftone tint 32 is assigned ID "2".
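Conceptually, the ID rendering process amounts to rendering the objects back to front into a per-pixel ID buffer, letting upper objects overwrite lower ones. The following Python sketch is illustrative only; the `covers` method is a hypothetical stand-in for the actual rasterization, which the patent does not detail.

```python
# Minimal sketch of the ID rendering process (step S120), assuming each
# object exposes a hypothetical covers(x, y) test. Objects are drawn from
# bottom (back) to top (front), so upper objects overwrite lower IDs.

def id_render(objects, width, height):
    """Return a 2-D buffer holding, per pixel, the ID (1..n) of the
    topmost object rendered there, or 0 where no object is present."""
    id_buffer = [[0] * width for _ in range(height)]
    for obj_id, obj in enumerate(objects, start=1):   # bottom to top
        for y in range(height):
            for x in range(width):
                if obj.covers(x, y):                  # hypothetical test
                    id_buffer[y][x] = obj_id          # top object wins
    return id_buffer
```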
After the ID rendering process, a list (hereinafter, referred to as a “related graphic list”) that indicates overlapping of the objects (relative positional relationships in the vertical direction) is generated based on the ID rendering result obtained in step S120 (step S130).
First, each pixel within the display area is scanned, focusing on "ID=1". In this case, if an ID other than "1" is rendered for any pixel adjacent to a pixel for which "1" is rendered, that ID is added to the upward-direction related graphic listing. In this example, "2" is rendered adjacent to pixels for which "1" is rendered, so "2" is added to the upward-direction listing for ID "1".
Next, each pixel within the display area is scanned, focusing on "ID=2". In this case, when an ID other than "2" is rendered for any pixel adjacent to a pixel for which "2" is rendered, that ID is added to the downward-direction related graphic listing if its value is less than "2", or to the upward-direction related graphic listing if greater than "2". Accordingly, "1" is added to the downward-direction listing for ID "2", completing the related graphic list for this example.
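The list generation can be sketched as follows, under the assumption (not spelled out in the text) that checking each pixel's right and lower neighbors is enough to record every adjacent ID pair once. For the two-object example above, this yields upward[1] == {2} and downward[2] == {1}.

```python
# Sketch of related-graphic-list generation (step S130). For each pair of
# adjacent pixels carrying different IDs, the smaller (lower) ID records the
# larger one in its upward-direction listing, and the larger ID records the
# smaller one in its downward-direction listing.

def build_related_graphic_list(id_buffer):
    height, width = len(id_buffer), len(id_buffer[0])
    upward = {}    # ID -> set of IDs rendered above it
    downward = {}  # ID -> set of IDs rendered below it
    for y in range(height):
        for x in range(width):
            a = id_buffer[y][x]
            if a == 0:
                continue
            for nx, ny in ((x + 1, y), (x, y + 1)):   # right and lower neighbours
                if nx >= width or ny >= height:
                    continue
                b = id_buffer[ny][nx]
                if b == 0 or b == a:
                    continue
                lo, hi = min(a, b), max(a, b)
                upward.setdefault(lo, set()).add(hi)
                downward.setdefault(hi, set()).add(lo)
    return upward, downward
```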
In step S140, a trap rule is applied to all the objects within the page. The “trap rule” as used herein refers to configuration information, which has been previously set for determining the attributes (color, width, trap direction, etc.) of a trap graphic. In addition, the “application of the trap rule” is to determine (set) the attributes for each trap graphic that is to be generated, in accordance with the trap rule. In the present embodiment, for any trap graphic that is to be generated between objects including an image, the trap direction is set to “automatic”.
In step S150, whether the trap direction is set at “automatic” is determined for each trap graphic to be generated. If the determination result is that the trap direction is set at “automatic”, the procedure advances to step S160. On the other hand, if the trap direction is not set at “automatic”, the procedure advances to step S200, and then to step S210 after normal processing is performed.
In step S160, a neighboring vector is extracted. For example, in the case where a halftone tint 45 is disposed on an image 44 as shown in the drawings, the portion of the contour of the halftone tint 45 that lies over the image 44 is extracted as the neighboring vector.
Incidentally, at the time of extracting the neighboring vector in step S160, the graphic path direction is corrected, for example, such that the left side of the graphic boundary of each object is the inside of the object and the right side is the outside. This will be described with reference to the drawings.
When there is an object 51 rendered counterclockwise (leftward) as shown in the drawings, its path direction is checked against this convention and reversed if necessary.
In addition, if there is an object including a graphic 53 that has been subjected to a process called "knockout" (a hollowed-out portion) as shown in the drawings, the path of the knocked-out portion is oriented in the opposite direction so that the same left-inside convention holds.
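The patent does not say how the direction check is performed; one standard technique is the shoelace (signed-area) formula, sketched below under an assumed y-up coordinate convention, in which a counterclockwise path encloses positive area and keeps its interior on the left. A knockout (hole) path is simply normalized to the opposite orientation.

```python
# Hedged sketch: orientation normalization via the shoelace formula. This is
# a common technique, not necessarily the one used by the patent. Assumes a
# y-up coordinate system, where counterclockwise paths enclose positive area
# and keep the interior on the left of the direction of travel.

def signed_area(path):
    """Signed area of a closed polygon given as a list of (x, y) vertices."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:] + path[:1]):
        area += x0 * y1 - x1 * y0
    return area / 2.0

def normalize_left_inside(path, is_knockout=False):
    """Orient a path so the object's inside is on its left; knockout (hole)
    paths get the opposite orientation."""
    ccw = signed_area(path) > 0
    want_ccw = not is_knockout
    return path if ccw == want_ccw else list(reversed(path))
```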
After the neighboring vector is extracted, the procedure advances to step S170. In step S170, sample points are extracted. The sample points as used herein refer to coordinate points extracted for calculating average brightness in the vicinity of a neighboring vector for two overlapping objects as a luminance level. The sample points are extracted based on, for example, coordinate points on a neighboring vector (or coordinates at each point therein), the number of coordinate points that are to be extracted as sample points (hereinafter, the number of coordinate points that are to be extracted as sample points and present on one object is referred to as a “sample point count”), and the distances between the sample points and the neighboring vector. Note that the sample point count is preset for each system or designated by the user. In addition, the distances between the sample points and the neighboring vector are determined based on the width of a trap graphic, and the resolution for the rendering process to be described later.
First, assume that the entire length of the neighboring vector 54 is 4L, as shown in the drawings; with a sample point count of four, for example, reference coordinate points can then be taken at equal intervals along the vector.
Note that when both the first object and the second object are images, the sample points must be extracted for each of the two objects, but when only one of them is an image, the sample points may be extracted only for that image object.
First, the length of each line segment of a neighboring vector 60 is calculated. Thereafter, the coordinates of the midpoint (a reference coordinate point for sample coordinate extraction) are acquired for each of the four longest line segments. A line perpendicular to the neighboring vector 60 is drawn through each acquired midpoint as described above, and coordinate points on the perpendicular line are acquired such that their distances from the midpoint equal a distance determined based on the width of the trap graphic and the resolution for the rendering process described later. For example, when the line segments are ordered by length (from longest to shortest) as shown in the drawings, the midpoints of the four longest segments serve as the reference coordinate points, and the sample points are extracted on the perpendiculars through them.
Note that instead of acquiring the midpoints of the four longest line segments at once, the midpoint of the single longest segment may be acquired first; that segment is then treated as divided into two at the acquired midpoint, and the midpoint of the longest segment among all remaining segments (including the two halves) is acquired next. This process of re-acquiring the midpoint of the current longest segment is repeated to extract the four sample points.
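The midpoint-based extraction can be sketched as follows, assuming the neighboring vector is given as a list of straight line segments; the function name and the offset parameter d are illustrative, with d standing for the distance derived from the trap width and rendering resolution.

```python
import math

# Sketch of sample-point extraction (step S170): take the midpoints of the
# longest segments of the neighboring vector as reference coordinate points,
# then step off a distance d along the perpendicular on both sides.

def extract_sample_points(segments, count, d):
    """segments: list of ((x0, y0), (x1, y1)); returns (left, right) lists of
    sample points for the `count` longest segments."""
    def seg_length(seg):
        (x0, y0), (x1, y1) = seg
        return math.hypot(x1 - x0, y1 - y0)

    left, right = [], []
    for seg in sorted(segments, key=seg_length, reverse=True)[:count]:
        (x0, y0), (x1, y1) = seg
        mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0   # reference coordinate point
        ln = seg_length(seg)
        ux, uy = -(y1 - y0) / ln, (x1 - x0) / ln    # unit normal, left of travel
        left.append((mx + ux * d, my + uy * d))
        right.append((mx - ux * d, my - uy * d))
    return left, right
```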
After the extraction of the sample points, the procedure advances to step S180. In step S180, color values at the sample points are acquired. The color values are acquired by rendering, at a predetermined resolution, the smallest possible rectangular area that includes all the sample points. For example, when sample points 63 are extracted as shown in the drawings, the smallest rectangle containing all of them is rendered, and the color values at the sample points are read from the rendering result.
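A sketch of this step follows; `render_area` is a hypothetical stand-in for the actual rendering routine, assumed to return a mapping from integer pixel coordinates (relative to the rectangle's corner) to CMYK tuples.

```python
# Sketch of color-value acquisition (step S180): render only the smallest
# rectangle containing all sample points, then read CMYK values back at the
# sample positions.

def acquire_color_values(sample_points, render_area, resolution):
    xs = [x for x, _ in sample_points]
    ys = [y for _, y in sample_points]
    x_min, y_min = min(xs), min(ys)
    x_max, y_max = max(xs), max(ys)
    pixels = render_area(x_min, y_min, x_max, y_max, resolution)
    # Map each sample point onto the pixel grid of the rendered rectangle.
    return [pixels[(round((x - x_min) * resolution),
                    round((y - y_min) * resolution))]
            for x, y in sample_points]
```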
In step S190, the trap direction between objects at least one of which includes an image is determined. The trap direction is determined by comparing an average color value among sample points on the left side of a neighboring vector to an average color value among sample points on the right side. Specifically, the average color value is calculated per plate for each object, and the calculated averages (of the plates) are substituted into the following equation (1) to calculate luminance level L.
L=100−(0.3×C+0.59×M+0.11×Y+K) (1)
where C is the average color value for the C plate, M is the average color value for the M plate, Y is the average color value for the Y plate, and K is the average color value for the K plate.
After the luminance level L is calculated for each object (the objects on both sides of the neighboring vector) in the manner described above, the trap direction is determined in the following manner. Note that when the objects on both sides of the neighboring vector are both images, the luminance levels of both must be calculated by the above equation (1), but when one of the objects is a color object, its luminance level may be acquired based on the attributes of the color object rather than by equation (1). For example, when a halftone tint 62 is disposed on an image 61 as shown in the drawings, the luminance level of the halftone tint 62 can be obtained directly from its color attributes, and only the luminance level of the image 61 needs to be calculated by equation (1).
When the luminance level of the object on the left side of the neighboring vector is lower than that of the object on the right side, a determination is made to generate the trap graphic on the object present on the left side of the neighboring vector. On the other hand, when the luminance level of the object on the left side of the neighboring vector is higher than that of the object on the right side, a determination is made to generate the trap graphic on the object present on the right side of the neighboring vector. In addition, when the two objects have the same luminance level, a determination is made to generate the trap graphic centering around the neighboring vector.
In the example shown in the drawings, the trap graphic is accordingly generated on the side of the neighboring vector with the lower luminance level.
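A sketch combining equation (1) with the decision rules above; each argument is a list of CMYK color values (dot percentages, 0-100) sampled on one side of the neighboring vector. As a worked check of equation (1), a solid 100% K area gives L = 100 − 100 = 0 (dark), while blank paper gives L = 100 (bright).

```python
# Sketch of the trap-direction determination (step S190) using equation (1).

def luminance(cmyk_samples):
    """Average the per-plate color values, then apply equation (1)."""
    n = len(cmyk_samples)
    c, m, y, k = (sum(s[i] for s in cmyk_samples) / n for i in range(4))
    return 100 - (0.3 * c + 0.59 * m + 0.11 * y + k)

def trap_direction(left_samples, right_samples):
    l_left, l_right = luminance(left_samples), luminance(right_samples)
    if l_left < l_right:
        return "left"    # trap graphic goes on the darker left-side object
    if l_left > l_right:
        return "right"   # trap graphic goes on the darker right-side object
    return "center"      # equal luminance: center on the neighboring vector
```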
When the trap direction between the objects including an image has been determined, the procedure advances to step S210. In step S210, the trap graphic is outputted for all objects within the page. Thereafter, data conversion is performed in accordance with the format of the page data to be outputted (step S220), and the page data that has been subjected to the trapping process is outputted to the auxiliary storage device 20 (step S230). Thus, the image processing for the trapping process is completed.
Note that in the present embodiment, a sample coordinate extraction portion/step is implemented by step S170, a color value acquisition portion/step is implemented by step S180, and a trap graphic generation position (TGGP) determination portion/step is implemented by step S190.
<4. Effect>
According to the present embodiment, the image processing for the trapping process includes: a process for extracting a predetermined number of coordinate points (sample points) in the vicinity of a neighboring vector, which is a boundary between objects; a process for acquiring color values at the extracted sample points; and a process for calculating luminance levels of the objects based on the color values at the sample points, and comparing the luminance levels with each other to determine a trap direction. Accordingly, even if one of two overlapping objects is an image, average brightness of the image in the vicinity of the neighboring vector is calculated as the luminance level. Therefore, when generating a trap graphic between images or between an image and a color object, the trap direction can be determined based on the brightness of the two objects, rather than a predesignated trap rule. As a result, even if an image contains color objects with various levels of brightness, the trap graphic is generated in a desirable trap direction between the objects. Thus, the burden of manual correction, etc., on the user is lessened, resulting in efficient processing.
<5. Variant>
In the above embodiment, even a slight difference in luminance level between the objects on the left and right sides of the neighboring vector causes the trap graphic to be generated on the object with the lower luminance level, but the present invention is not limited thereto. When the difference in luminance level between the two objects is relatively small, it might be desirable to generate the trap graphic centering around the neighboring vector rather than on one object. Therefore, a menu (a dialog or the like) may be provided as a value entry portion for the user to enter a luminance-level difference, so that if the difference in luminance level between the two objects is less than or equal to the entered value, a determination is made to generate the trap graphic centering around the neighboring vector.
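This variant can be expressed as a small extension of trap_direction() from the earlier sketch; the threshold parameter stands for the user-entered luminance-level difference.

```python
# Sketch of the centering variant: when the luminance difference is at or
# below the user-entered threshold, center the trap graphic on the
# neighboring vector instead of favoring the darker object. Reuses
# luminance() from the previous sketch.

def trap_direction_with_threshold(left_samples, right_samples, threshold):
    l_left, l_right = luminance(left_samples), luminance(right_samples)
    if abs(l_left - l_right) <= threshold:
        return "center"
    return "left" if l_left < l_right else "right"
```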
In addition, when a character string consisting of a plurality of characters is disposed on an image as a color object, it might be desirable to determine the trap direction per character string rather than per character. The following description is given, for example, on the assumption that a character string as shown in the drawings is disposed on an image; if the trap direction were determined per character, characters at different luminance levels could be trapped in different directions, so that the relative positional relationship between the neighboring vector and the trap graphic would differ from character to character.
Note that examples of the method for determining the trap direction per character string include a trap direction determination method in which any character at a luminance level less than or equal to the level designated by the user is “decreased in area relative to the image” regardless of the luminance level of the image; and a trap direction determination method in which the user designates a plate and a color value, and any character for which the actual color value of the designated plate is equal to or more than the color value designated by the user is “decreased in area relative to the image” regardless of the luminance level of the image.
Furthermore, the sample point extraction methods are not limited to those described in the above embodiment, and other methods are applicable so long as average brightness in the vicinity of the neighboring vector can be obtained for each object.
While the invention has been described in detail, the foregoing description is in all aspects illustrative and not restrictive. It is understood that numerous other modifications and variations can be devised without departing from the scope of the invention.
Note that the present application claims priority to Japanese Patent Application No. 2006-352243, titled “IMAGE PROCESSING APPARATUS AND PROGRAM FOR PRINTING AND PLATE MAKING”, filed on Dec. 27, 2006, which is incorporated herein by reference.
References Cited
U.S. Pat. No. 5,581,667 (priority Dec. 23, 1994), Xerox Corporation, "Electronic trapping system for digitized text and images".
U.S. Pat. No. 5,613,046 (priority Mar. 31, 1993), Agfa Corporation, "Method and apparatus for correcting for plate misregistration in color printing".
U.S. Pat. No. 6,366,361 (priority Sep. 3, 1997), Adobe Systems Incorporated, "Peeker detection and correction".
U.S. Pat. No. 6,378,983 (priority Aug. 25, 1999), Mitsubishi Paper Mills Limited, "Digital prepress system".
U.S. Pat. No. 7,391,536 (priority Jul. 9, 2004), Xerox Corporation, "Method for smooth trapping suppression of small graphical objects using color interpolation".
U.S. Patent Application Publication No. 2003/0048475.
EP 0445066.
JP 2004-122692.
JP 2004-155001.
JP 2004-34636.
JP 2006-129007.
JP 2006-202198.
JP 2006-5481.