This image processing apparatus includes an additional-object registration unit and a read-image processing unit. A setting form contains: (a) an additional-object specification field used to present an additional object that is placed onto a document in order to specify an extract area to be extracted from an image read from the document; and (b) a processing specification field used to select processing to be performed on information obtained from the extract area. The additional-object registration unit identifies an image of the additional object presented in the additional-object specification field and the processing selected in the processing specification field, and registers the image and the processing associated therewith. The read-image processing unit searches the read image of the document for the image of the additional object and performs the processing associated with the additional-object image on the information obtained from the extract area specified by the additional-object image.
1. An image processing apparatus comprising:
an additional-object registration unit that refers to an image read from a setting form,
the setting form including
(a) an additional-object specification field used by a user to present an additional object that is placed onto a document in order to specify an extract area to be extracted from an image read from the document, and
(b) a processing specification field used by the user to specify processing to be performed on information obtained from the extract area,
identifies an image of the additional object presented in the additional-object specification field and the processing specified in the processing specification field on the read image of the setting form, establishes an association between the image of the additional object and the processing, and registers the image of the additional object and the processing associated therewith,
a read-image processing unit that searches the read image of the document for the image of the additional object and performs the processing associated with the image of the additional object on the information obtained from the extract area specified by the image of the additional object;
wherein the setting form includes a plurality of additional-object specification fields including the additional-object specification field and a plurality of processing specification fields including the processing specification field, the additional-object specification fields being associated with the processing specification fields, respectively,
the additional-object registration unit identifies images of the additional objects presented in the additional-object specification fields and the processing specified in the processing specification fields on the read image of the setting form, establishes associations between the images of the additional objects and the processing, and registers the images of the additional objects and the processing associated therewith, and
the read-image processing unit searches the read image of the document for the images of the additional objects and performs the processing associated with the images of the additional objects on information obtained from the extract areas specified by the images of the additional objects,
the plurality of additional objects are transparent sticky notes having different colors, and
the read-image processing unit searches for the images of additional objects in consideration of the color mixture of the transparent sticky notes.
2. The image processing apparatus according to
the additional object is a sticky note having a predetermined shape, color and pattern, and
the read-image processing unit searches the read image of the document for the image of the additional object by pattern-matching.
3. The image processing apparatus according to
the plurality of additional objects are sticky notes having the same shape, but different patterns.
4. The image processing apparatus according to
the processing is at least one of: (a) creation of a file name of a file for the read image of the document based on information extracted from the extract area; (b) character recognition processing performed on the information extracted from the extract area; and (c) creation of metadata of the file based on the information extracted from the extract area.
5. The image processing apparatus according to
after the additional-object registration unit establishes an association between the image of the additional object and the processing and registers the image and processing associated therewith, a registration information sheet containing the image of the additional object and positional information of the extract area is output.
6. The image processing apparatus according to
the registration information sheet includes a two-dimensional code containing the positional information of the extract area and information about the processing associated with the extract area.
7. The image processing apparatus according to
the registration information sheet includes an image in the extract area obtained from the read image of the document.
8. The image processing apparatus according to
a key used to execute a predetermined function in response to a user's single operation, wherein
the function assigned to the key is an output operation of the registration information sheet.
9. The image processing apparatus according to
the setting form includes a translation specification field used by a user to specify whether to use a translation rule to translate a plurality of similar character strings in the extract area into a single character string,
when the additional-object registration unit detects that the translation specification field specifies to use the translation rule, the additional-object registration unit acquires the translation rule, establishes an association between the translation rule and the image of the additional object associated with the extract area, and registers the translation rule, and
the read-image processing unit translates the character string obtained from the extract area under the translation rule associated with the extract area.
10. The image processing apparatus according to
a printing device that prints the setting form, and an image reading device that obtains the image read from the setting form with the additional object presented thereon and the image read from the document with the additional object placed thereon.
The disclosure of Japanese Patent Application No. 2013-271375 filed on Dec. 27, 2013 including the specification, drawings and abstract is incorporated herein by reference in its entirety.
This disclosure relates to an image processing apparatus.
Some systems use a business management server that extracts specific information from images of application forms using suitable clipping patterns for various types of business document formats.
An image processing apparatus according to an aspect of the present disclosure includes an additional-object registration unit and a read-image processing unit. A setting form contains: (a) an additional-object specification field used by a user to present an additional object that is placed onto a document in order to specify an extract area to be extracted from an image read from the document; and (b) a processing specification field used by the user to select processing to be performed on information obtained from the extract area. The additional-object registration unit identifies an image of the additional object presented in the additional-object specification field and the processing selected in the processing specification field on the read image of the setting form. The additional-object registration unit establishes an association between the image of the additional object and the processing, and registers the image and the processing. The read-image processing unit searches the read image of the document for the image of the additional object. The read-image processing unit performs the processing associated with the image of the additional object on the information obtained from the extract area specified by the image of the additional object.
With reference to the accompanying drawings, embodiments of the present disclosure will be described below.
The communication device 11 is connectable to a server 2 via a network and performs data communications using a predetermined communications protocol. The server 2 is used to perform character recognition processing to read handwritten characters.
The printing device 12 is an internal device that prints an image of an original document onto a print sheet using, for example, an electrophotographic method. The printing device 12 subjects original image data to predetermined image processing, such as rasterization, color conversion, and screen processing, to produce output image data that is then printed out. The printing device 12 is used to print the various kinds of forms and sheets described later.
The image reading device 13 is an internal device that optically reads an image from an original document (various forms and sheets described below) to produce image data of the original document image.
The processing device 14 is a computer equipped with a central processing unit (CPU), a read only memory (ROM), a random access memory (RAM) and other components and functionally operates as various processing units by loading a program stored in the storage device 15, such as the ROM, into the RAM and executing the program with the CPU. The storage device 15 is a nonvolatile storage device that stores data, programs, etc.
The processing device 14 functions as an additional-object registration unit 21 and a read-image processing unit 22.
A setting form includes: (a) an additional-object specification field used by a user to present an additional object that is placed onto a document to specify an extract area to be extracted from an image read from the document; and (b) a processing specification field used by the user to select processing to be performed on information obtained from the extract area. The additional-object registration unit 21 identifies an image of the additional object presented in the additional-object specification field and the processing selected in the processing specification field on the read image of the setting form, establishes an association between the image of the additional object and the processing, and registers the image and processing associated therewith.
The read-image processing unit searches the read image of the document for the image of the additional object and performs the processing associated with the image of the additional object on information obtained from the extract area specified by the image of the additional object.
In the first embodiment, a single setting form includes a plurality of additional-object specification fields and a plurality of processing specification fields associated with the additional-object specification fields, respectively. The additional-object registration unit 21 identifies a plurality of images of additional objects presented in the additional-object specification fields and a plurality of types of processing selected in the processing specification fields on the read image of the setting form, establishes associations between each of the images of the additional objects and the processing, and registers the images and the processing associated therewith. Then, the read-image processing unit 22 searches the read image of the document for the registered images of the additional objects and performs processing associated with the detected images of the additional objects on information obtained from extract areas specified by the images of the detected additional objects.
In the first embodiment, the additional objects are sticky notes (repositionable notes) having a predetermined shape, a color and a pattern. The read-image processing unit 22 searches the read image of the document for the images of the additional objects by pattern-matching.
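The pattern-matching search mentioned above can be sketched as a naive template scan over the read image. This is an illustrative sketch only; the patent does not specify an algorithm, the function name is an assumption, and a practical scanner pipeline would use a similarity threshold (or normalized correlation) rather than exact pixel equality to tolerate noise.

```python
def find_template(image, template):
    """Naive template matching: return every (row, col) at which
    `template` (a 2-D list of pixel values) exactly matches a
    region of `image`."""
    th, tw = len(template), len(template[0])
    ih, iw = len(image), len(image[0])
    hits = []
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            # Compare the template against the window anchored at (r, c).
            if all(image[r + dr][c + dc] == template[dr][dc]
                   for dr in range(th) for dc in range(tw)):
                hits.append((r, c))
    return hits
```

Because the sticky notes have a predetermined shape, color, and pattern, one template per registered note kind suffices for this style of search.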
In addition, the additional objects in the first embodiment may be sticky notes having the same shape, but different patterns (e.g., sequential numbers, 1, 2, 3 . . . or alphabetical letters, a, b, c . . . ).
Furthermore, the aforementioned “processing” in the first embodiment includes: (a) creation of a file name of a file for the read image of the document based on the information extracted from the extract areas; (b) character recognition processing performed on the information extracted from the extract areas; and (c) creation of metadata of the file based on the information extracted from the extract areas. The metadata includes various types of attribute data contained in the files.
The setting form shown in
Each of the check box arrays 42 includes three check boxes ((1), (2), (3) of FILE NAME) to create a file name, two check boxes ((1), (2) of FOLDER NAME) to create a folder name, a check box to select typewritten character recognition processing (OCR TYPE), a check box to select handwritten character recognition processing (OCR HAND), a check box to cut out an image in the extract area (CUT IMAGE), and five check boxes (HEADING, NUMBER, DATE, ADDRESS, and NAME of METADATA) to create metadata. If a checkmark is placed in the check box of "OCR HAND", the read-image processing unit 22 transmits an image in an extract area to the server 2 through the communication device 11, causes the server 2 to perform the handwritten character recognition processing on the image in the extract area, and receives the processing results from the server 2.
The check box (i) (i=1, 2, 3) of FILE NAME is used to designate text obtained from the corresponding extract area as the i-th word of a file name, while the check box (i) (i=1, 2) of FOLDER NAME is used to designate text obtained from the corresponding extract area as the i-th word of a folder name. The i-th word and the (i+1)-th word are linked with a punctuation character that the user selects by placing a checkmark in a punctuation-selection check box on the setting form.
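How the indexed FILE NAME words and the user-selected punctuation character combine can be illustrated with a short sketch; the function name and the dictionary representation of the extracted words are assumptions, not from the patent.

```python
def build_file_name(indexed_words, punctuation="_"):
    """indexed_words maps the FILE NAME check-box index i to the text
    extracted from the corresponding extract area.  The i-th and
    (i+1)-th words are joined by the user-selected punctuation
    character."""
    return punctuation.join(indexed_words[i] for i in sorted(indexed_words))
```

The same scheme applies to the FOLDER NAME check boxes, with indices 1 and 2.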
For example,
The setting form shown in
As shown in
Next, the operation of the image processing apparatus will be described.
(1) Registration of Additional Object Used to Specify Extract Area in Document
In response to a predetermined user operation, the additional-object registration unit 21 causes the printing device 12 to print out a setting form. Since image data of the setting form is stored in the storage device 15 in advance, the setting form is printed out from that image data. Then, a user uses the printed setting form as shown in
In the image processing apparatus 1, the image reading device 13 produces image data of the read image of the setting form with the additional objects placed in the additional-object specification fields 41 and the checkmarks placed in the check boxes of the check box arrays 42. The additional-object registration unit 21 refers to the image data to extract images of the additional objects in the additional-object specification fields 41, while identifying check boxes with the checkmarks in the check box arrays 42 associated with the additional objects to identify processing specified by the user based on the identified check boxes. The additional-object registration unit 21 establishes associations between the images of the additional objects and the processing and stores the images and processing in the storage device 15.
Through the procedure, the user's desired additional objects and the associated processing are registered.
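The registration procedure described above pairs each note image extracted from an additional-object specification field 41 with the processing whose check boxes are marked in the associated check box array 42. A minimal sketch, assuming a simple dictionary registry and hypothetical identifiers (the patent does not prescribe a data structure):

```python
def register_additional_objects(note_images, checkbox_arrays):
    """note_images: identifiers of the note images extracted from the
    additional-object specification fields, in field order.
    checkbox_arrays: one {label: checked?} dict per field, in the
    same order.  Returns a registry mapping each note image to the
    processing labels the user selected for it."""
    registry = {}
    for note, checks in zip(note_images, checkbox_arrays):
        registry[note] = [label for label, on in checks.items() if on]
    return registry
```

In the apparatus this registry would be persisted to the storage device 15 so the read-image processing unit 22 can consult it later.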
(2) Document Processing
After registration of the additional objects and processing associated therewith, for example, the user uses a document to be read as shown in
In the image processing apparatus 1, the image reading device 13 produces image data of the read image of the document with the additional objects placed thereon. The read-image processing unit 22 refers to the image data to search the read image of the document for the registered images of the additional objects by pattern-matching.
Upon detecting two additional object images of one kind, the read-image processing unit 22 identifies an extract area enclosed by the two additional object images. For example, the extract area identified is a rectangle with a diagonal line connecting the two additional object images at the shortest distance. Alternatively, if the images of the additional objects are in a predetermined shape, like a rectangle, the extract area may be configured to be a rectangle with a diagonal line connecting predetermined vertices of the two additional object images at the shortest distance.
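The rectangle whose diagonal connects the two detected note images can be derived from their positions alone. A minimal sketch, where the coordinate convention and function name are assumptions:

```python
def extract_area(p1, p2):
    """p1 and p2 are (x, y) anchor points of the two detected images
    of one kind of additional object.  Returns the extract area as
    (left, top, right, bottom): the axis-aligned rectangle whose
    diagonal connects the two points."""
    (x1, y1), (x2, y2) = p1, p2
    return (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))
```

For the variant using predetermined vertices of rectangular note images, the same computation would simply be applied to those vertices instead of the anchor points.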
Then, the read-image processing unit 22 extracts an image in the identified extract area and performs specified processing on the image in the extract area (e.g., character recognition, creation of file name and metadata).
For example, if the additional objects and processing on the setting form as shown in
According to the above-described first embodiment, the read image of the setting form includes: (a) the additional-object specification fields used by a user to present the additional objects that are placed onto a document to specify extract areas to be extracted from the image read from the document; and (b) the processing specification fields used by the user to select processing to be performed on information obtained from the extract areas. The additional-object registration unit 21 identifies the images of the additional objects presented in the additional-object specification fields and the processing selected in the processing specification fields, establishes associations between the images of the additional objects and the processing, and registers the images of the identified additional objects and the processing associated therewith. The read-image processing unit 22 searches the read image of the document for the images of the additional objects and performs the processing associated with the image of the additional objects on the information obtained from the extract areas specified by the images of the additional objects.
Thus, the user can set a position to extract particular information from various types of documents in a suitable way for the respective document formats.
An image processing apparatus 1 according to the second embodiment includes a function of outputting a registration information sheet from a printing device 12 or other output units in response to a user's predetermined operation after the additional-object registration unit 21 registers images of additional objects and processing associated with the additional objects, in addition to functions the same as those of the image processing apparatus 1 of the first embodiment.
In the case where the image processing apparatus 1 of the second embodiment is equipped with a key (e.g., a shortcut key) that performs a predetermined function in response to a user's single operation, the output operation of the registration information sheet may be assigned to the key as that function. This allows the user to print out the registration information sheet with a single operation and to see the images (extracted images) in the extract areas obtained from the read image of the document.
The other configurations of the image processing apparatus 1 of the second embodiment are the same as those of the first embodiment, and therefore the explanation will not be reiterated.
An image processing apparatus 1 according to the third embodiment enables use of a plurality of transparent sticky notes in different colors as additional objects. In the third embodiment, the read-image processing unit 22 searches for the images of the additional objects by pattern-matching in consideration of the color mixture of the transparent sticky notes.
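Where two transparent notes overlap, the scanner sees a blended color, so the search can add expected mixtures to its set of target colors. One common approximation is a multiplicative (subtractive) blend of the two note colors; the blend model and function name are assumptions here, since the patent states only that color mixture is taken into consideration.

```python
def mixed_color(c1, c2):
    """Approximate the color of the region where two transparent
    sticky notes overlap as a multiplicative blend of their RGB
    colors (each channel scaled back into the 0-255 range)."""
    return tuple(round(a * b / 255) for a, b in zip(c1, c2))
```

For example, a yellow note over a cyan note would be expected to read as green in the scanned image.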
The other configurations of the image processing apparatus 1 of the third embodiment are the same as those of the first and second embodiments, and therefore the explanation will not be reiterated.
In the fourth embodiment, the setting form includes a translation specification field used by a user to select whether to use a translation rule to change a plurality of similar character strings in an extract area into a single character string. The translation specification field in the fourth embodiment is check boxes 81 in
In the image processing apparatus 1 of the fourth embodiment, when the additional-object registration unit 21 detects that the translation rule is specified to be used in a translation specification field, the additional-object registration unit 21 acquires the translation rule, establishes an association between the translation rule and an image of an additional object associated with an extract area, and registers the image of the additional object and the translation rule associated therewith.
If at least one check box 81 is marked in the fourth embodiment, the user operates the image processing apparatus 1 to cause the image reading device 13 to read a condition sheet on which translation rules, including the aforementioned translation rule, are written, and to cause the additional-object registration unit 21 to identify the translation rule from the read image of the condition sheet or from text information obtained from the read image through character recognition processing.
Then, the read-image processing unit 22 translates the character string obtained from the extract area under the translation rule associated with the extract area and performs the aforementioned processing (e.g., creation of a file name) on the translated character string.
Accordingly, even if there are different character strings of the same meaning, the character strings are replaced with a single character string that is in turn subjected to subsequent processing.
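The replacement of similar character strings with a single character string can be sketched as a dictionary of canonical forms and their variants; this rule representation is an assumption, since the patent leaves the rule format to the condition sheet.

```python
def apply_translation_rule(text, rule):
    """rule maps a canonical string to the list of similar variant
    strings that should be translated into it before subsequent
    processing (e.g., file-name creation)."""
    for canonical, variants in rule.items():
        for variant in variants:
            text = text.replace(variant, canonical)
    return text
```

With a rule such as {"Invoice No.": ["Invoice Number", "Inv. No."]}, differently labeled invoices all yield the same string for file naming.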
The other configurations of the image processing apparatus 1 of the fourth embodiment are the same as those of the first to third embodiments, and therefore the explanation will not be reiterated.
Although the foregoing embodiments are preferred examples of the present disclosure, it is to be noted that the present disclosure is not limited by the embodiments, and that various modifications and changes can be made without departing from the spirit of the present disclosure.
For example, the additional objects are sticky notes throughout the first to fourth embodiments; however, the additional objects may instead be characters or symbols handwritten in ink or graphite with a pen, pencil, or other writing implement.
In addition, the pattern-matching performed in the first to fourth embodiments can detect inclined additional objects, and therefore users are allowed to place the additional objects at an angle.
The present disclosure is applicable to, for example, multifunctional peripherals.
Assignment: HARADA, HIROYUKI to KYOCERA Document Solutions Inc., assignment of assignors interest executed Dec. 11, 2014 (Reel/Frame 034537/0810); recorded Dec. 18, 2014.