Methods and systems for training a neural network to distinguish between text documents and image documents are described. A corpus of text and image documents is obtained. A page of a text document is scanned by shifting a text window to a plurality of locations. In accordance with a determination that the text in the window at a respective location meets text line criteria, the text in the window is stored as a respective text snippet. A plurality of image windows are superimposed over at least one page of an image document. In accordance with a determination that the content of a respective image window meets image criteria, content of the image window is stored as a respective image snippet. The respective text snippet and the respective image snippet are provided to a classifier.
1. A method implemented by an electronic device having one or more processors for determining if a document is a text page, the method comprising:
partitioning the document into a plurality of cells;
scaling each of the cells to a standardized number of pixels to provide a corresponding snippet for each of the cells;
classifying the snippets, using a neural network, to determine (i) a first set of cells classified as text and (ii) a second set of cells classified as non-text;
determining a volume of text for the document based on a total amount of text in the document corresponding to a sum of an amount of text in each cell of the first set of cells; and
in response to a determination that (i) the total amount of text in the document is within a predetermined range and (ii) the first set of cells is aligned to one or more horizontal or vertical lines, determining that the document is a text page.
12. A non-transitory computer readable medium storing one or more programs, the one or more programs comprising instructions, which when executed by a device with a camera, cause the device to:
partition a document into a plurality of cells;
scale each of the cells to a standardized number of pixels to provide a corresponding snippet for each of the cells;
classify the snippets, using a neural network, to determine (i) a first set of cells classified as text and (ii) a second set of cells classified as non-text;
determine a volume of text for the document based on a total amount of text in the document corresponding to a sum of an amount of text in each cell of the first set of cells; and
in response to a determination that (i) the total amount of text in the document is not within a predetermined range and (ii) the first set of cells is aligned to one or more horizontal or vertical lines, determine that the document is a text page.
18. A device with a camera, the device comprising:
one or more processors; and
memory storing one or more instructions that, when executed by the one or more processors, cause the device to perform operations including:
partitioning a document into a plurality of cells;
scaling each of the cells to a standardized number of pixels to provide a corresponding snippet for each of the cells;
classifying the snippets, using a neural network, to determine (i) a first set of cells classified as text and (ii) a second set of cells classified as non-text;
determining a volume of text for the document based on a total amount of text in the document corresponding to a sum of an amount of text in each cell of the first set of cells; and
in response to a determination that (i) the total amount of text in the document is not within a predetermined range and (ii) a boundary of the first set of cells has a geometry aligned along one or more horizontal or vertical lines, determining that the document is a text page.
2. The method of claim 1, further comprising:
in response to a determination that (i) the total amount of text in the document is not within the predetermined range and (ii) that partitioning criteria are met for the second set of cells, partitioning the second set of cells to form a partitioned set of cells;
scaling each of the partitioned cells of the partitioned set of cells to a standardized number of pixels to provide a respective snippet for each of the partitioned cells of the partitioned set of cells;
classifying the respective snippets, using a neural network, to determine (i) a first set of partitioned cells classified as text and (ii) a second set of partitioned cells classified as non-text;
determining an updated volume of text for the document based on an updated total amount of text in the document corresponding to a sum of an amount of text in each cell of the first set of cells and each cell of the first set of partitioned cells; and
in response to a determination that the updated total amount of text in the document is not within the predetermined range, determining that the document is a text page.
3. The method of claim 2, further comprising:
in response to a determination that (i) the updated total amount of text in the document is not within the predetermined range and (ii) that partitioning criteria are not met for the second set of partitioned cells, determining whether the first set of cells and the first set of partitioned cells have a satisfactory geometry; and
in response to a determination that the first set of cells and the first set of partitioned cells have a satisfactory geometry, determining that the document is a text page.
4. The method of claim 3, further comprising:
in response to a determination that the first set of cells and the first set of partitioned cells do not have a satisfactory geometry, determining that the document is not a text page.
6. The method of
7. The method of
8. The method of
9. The method of
11. The method of
13. The non-transitory computer readable medium of claim 12, wherein the instructions further cause the device to:
in response to a determination that (i) the total amount of text in the document is not within the predetermined range and (ii) that partitioning criteria are met for the second set of cells, partition the second set of cells to form a partitioned set of cells;
scale each of the partitioned cells of the partitioned set of cells to a standardized number of pixels to provide a respective snippet for each of the partitioned cells of the partitioned set of cells;
classify the respective snippets, using a neural network, to determine (i) a first set of partitioned cells classified as text and (ii) a second set of partitioned cells classified as non-text;
determine an updated volume of text for the document based on an updated total amount of text in the document corresponding to a sum of an amount of text in each cell of the first set of cells and each cell of the first set of partitioned cells; and
in response to a determination that the updated total amount of text in the document is not within the predetermined range, determine that the document is a text page.
14. The non-transitory computer readable medium of claim 13, wherein the instructions further cause the device to:
in response to a determination that (i) the updated total amount of text in the document is not within the predetermined range and (ii) that partitioning criteria are not met for the second set of partitioned cells, determine whether the first set of cells and the first set of partitioned cells have a satisfactory geometry; and
in response to a determination that the first set of cells and the first set of partitioned cells have a satisfactory geometry, determine that the document is a text page.
15. The non-transitory computer readable medium of claim 14, wherein the instructions further cause the device to:
in response to a determination that the first set of cells and the first set of partitioned cells do not have a satisfactory geometry, determine that the document is not a text page.
16. The non-transitory computer readable medium of
17. The non-transitory computer readable medium of
19. The device of claim 18, wherein the operations further include:
in response to a determination that (i) the total amount of text in the document is not within the predetermined range and (ii) that partitioning criteria are met for the second set of cells, partitioning the second set of cells to form a partitioned set of cells;
scaling each of the partitioned cells of the partitioned set of cells to a standardized number of pixels to provide a respective snippet for each of the partitioned cells of the partitioned set of cells;
classifying the respective snippets, using a neural network, to determine (i) a first set of partitioned cells classified as text and (ii) a second set of partitioned cells classified as non-text;
determining an updated volume of text for the document based on an updated total amount of text in the document corresponding to a sum of an amount of text in each cell of the first set of cells and each cell of the first set of partitioned cells; and
in response to a determination that the updated total amount of text in the document is within the predetermined range, determining that the document is a text page.
20. The device of claim 19, wherein the operations further include:
in response to a determination that (i) the updated total amount of text in the document is not within the predetermined range and (ii) that partitioning criteria are not met for the second set of partitioned cells, determining whether the first set of cells and the first set of partitioned cells have a satisfactory geometry; and
in response to a determination that the first set of cells and the first set of partitioned cells have a satisfactory geometry, determining that the document is a text page.
This application is a continuation of U.S. patent application Ser. No. 16/455,543, filed on Jun. 27, 2019, entitled “FAST IDENTIFICATION OF TEXT INTENSIVE PAGES FROM PHOTOGRAPHS,” now U.S. Pat. No. 11,195,003, which is a divisional of U.S. patent application Ser. No. 15/272,744, filed Sep. 22, 2016 and entitled “FAST IDENTIFICATION OF TEXT INTENSIVE PAGES FROM PHOTOGRAPHS,” now U.S. Pat. No. 10,372,981, which claims priority to U.S. Provisional Patent Application No. 62/222,368, filed Sep. 23, 2015, entitled, “FAST IDENTIFICATION OF TEXT INTENSIVE PAGES FROM PHOTOGRAPHS,” all of which are hereby incorporated by reference in their entirety.
This application is directed to the field of image processing, and more particularly to the field of estimating the volume of text in photographs of physical media via a fast iterative process based on machine learning.
Mobile phones with digital cameras are broadly available in every worldwide market. According to market statistics and forecasts, annual smartphone shipments are expected to grow to 1.87 billion units by 2018, and over 80% of all mobile phones will arrive to customers with embedded digital cameras. New shipments will expand the already massive current audience of approximately 4.3 billion mobile phone users and 6.7 billion mobile subscribers; they will also replace mobile phones currently used by the subscribers.
The volume of photographs taken with phone cameras is growing rapidly and is beginning to dominate online image repositories and offline storage alike. According to Pew Research, photographing with phone cameras remains the most popular activity of smartphone owners. InfoTrends has reported that the annual volume of digital photographs nearly tripled between 2010 and 2015 and is expected to reach 1.3 trillion photographs in 2017, while the number of stored photos in 2017 may approach five trillion. It is projected that, of the total 2017 volume of digital photographs, 79% will be taken by phone cameras, 8% by tablets, and only 13% by conventional cameras. On social photo sharing sites, the volume of images taken with smartphones has long exceeded the quantity of photographs taken with any other equipment.
Hundreds of millions of smartphone users are blending their everyday mobile work and home digital lifestyles with paper habits. Paper documents retain a significant role in the everyday information flow of business users and households. Digitizing and capturing of paper-based information has further progressed with the arrival of multi-platform cloud-based content management systems, such as the Evernote service and software developed by Evernote Corporation of Redwood City, Calif., the Scannable software application for iPhone and iPad by Evernote, and other document imaging software. These applications and services offer seamless capturing of multiple document pages and provide perspective correction, glare mitigation, advanced processing, grouping, and sharing of scanned document pages. After the documents are captured and stored, the Evernote software and service further enhance user productivity with advanced document search capabilities based on finding and indexing text in images. Additionally, photographs that include images without significant amounts of surrounding text may be enhanced using advanced color correction methods for storage, sharing, printing, composition of documents and presentations, etc.
Determination of a relevant processing path for a scanned document page presents a challenging aspect of smartphone-based scanning solutions. After initial pre-processing steps for a photographed page image have been accomplished (which may include glare mitigation, perspective and other spatial corrections, etc.), there may be several different directions for further image processing. Pages with significant amounts of text may be optimized for text retrieval and search purposes; accordingly, processing algorithms may increase contrast between the page text and the page background, which in many cases results in a black-and-white image where the text is reliably separated from the rest of the image. On the other hand, images taken for aesthetic, illustration, and presentation purposes typically undergo color correction and color enhancement steps that enrich the color palette, attempt to adequately reproduce lighting conditions, and provide a visually pleasing balance between contrasting and smooth image areas. Therefore, errors in determining adequate processing paths for captured images may lead to expensive and unnecessary post-processing diagnostics, double processing steps, and an undesired need for user intervention.
Accordingly, it would be useful to develop efficient mechanisms for quick automatic identification of document page photographs as text vs. image types at early processing steps of automatic mobile image scanning and processing.
According to the system described herein, determining if a document is a text page includes partitioning the document into a plurality of cells, scaling each of the cells to a standardized number of pixels to provide a corresponding snippet for each of the cells, using a classifier to examine the snippets to determine which of the cells are classified as text and which of the cells are not classified as text, determining a volume of text for the document based on a total amount of text in the document corresponding to a sum of an amount of text in each of the cells classified as text, and determining that the document is a text page in response to the total amount exceeding a pre-determined threshold. In response to the total amount being less than the pre-determined threshold, cells not classified as text may be examined further. Further examining cells not classified as text may include subdividing ones of the cells not classified as text to provide further subdivisions and using the classifier to determine which of the subdivisions are classified as text and to determine a revised total amount based on an additional volume of text according to the subdivisions classified as text to add to the total amount. Determining if a document is a text page may also include determining that the document is a text page in response to the revised total amount exceeding the pre-determined threshold. The classifier may examine the subdivisions in a random order or in an order that prioritizes subdivisions adjacent to snippets previously classified as text. Determining if a document is a text page may also include determining that the document is a text page in response to cells that are classified as text having a satisfactory geometry. At least some of the cells corresponding to snippets that are classified as text may be aligned to form at least one text line and the at least one text line may be horizontal or vertical. The snippets that are not classified as text may be classified as images. Alternatively, the snippets that are not classified as text may be classified as images or unknown. The document may be partitioned into six cells. The document may be captured using a smartphone. The classifier may be provided by training a neural network using a plurality of image documents and a plurality of text pages having various formats, layouts, text sizes, and ranges of word, line, and paragraph spacing.
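To make the partition-and-scale steps concrete, the following is a minimal Python sketch, assuming a grayscale page image held as a NumPy uint8 array. The six-cell partition and the 32×32 snippet size appear in this document; the 2×3 grid shape, the helper names, and the use of NumPy/Pillow are illustrative assumptions.

```python
# Minimal sketch of partitioning a page into six cells and scaling each cell
# to a standardized snippet size. Assumes a grayscale uint8 NumPy array;
# the 2x3 grid shape and helper names are illustrative assumptions.
import numpy as np
from PIL import Image

SNIPPET_SIZE = (32, 32)  # standardized number of pixels per snippet

def partition_into_cells(page: np.ndarray, rows: int = 2, cols: int = 3) -> list:
    """Split a page image into rows * cols rectangular cells (six by default)."""
    h, w = page.shape[:2]
    return [page[r * h // rows:(r + 1) * h // rows,
                 c * w // cols:(c + 1) * w // cols]
            for r in range(rows) for c in range(cols)]

def to_snippet(cell: np.ndarray) -> np.ndarray:
    """Scale one cell to the standardized snippet resolution."""
    return np.asarray(Image.fromarray(cell).resize(SNIPPET_SIZE))

# Usage: snippets = [to_snippet(c) for c in partition_into_cells(page)]
```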
According further to the system described herein, training a neural network to distinguish between text documents and image documents includes obtaining a corpus of text and image documents; for each of the text documents, creating text snippets by scanning the text document with a window that is shifted horizontally and vertically and discarding windows that contain less than a first number of lines of text or more than a second number of lines of text; for each of the image documents, creating image snippets by scanning the image document with a window that is shifted horizontally and vertically; normalizing resolution of the windows; and providing the text snippets and the image snippets to a classifier. Normalizing resolution of the windows may include converting each of the windows to a 32×32 pixel resolution. The first number of lines of text may be two and the second number of lines of text may be four. The classifier may be an MNIST-style neural network, provided through Google TensorFlow.
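As a rough illustration of the training-data generation just described, the sketch below slides a square window over a page and keeps only windows whose text-line count falls in the prescribed range. The `count_text_lines` helper is hypothetical (the document assumes text lines are identified in advance), and the window and step sizes are assumptions.

```python
# Sketch of training-snippet generation: shift a square window horizontally
# and vertically over a page, keep windows containing between `min_lines`
# and `max_lines` text lines (two to four in the described embodiment), and
# normalize each kept window to 32x32 pixels. `count_text_lines` is a
# hypothetical helper returning the number of text lines inside a patch.
import numpy as np
from PIL import Image

def extract_text_snippets(page: np.ndarray, count_text_lines,
                          window: int = 256, step: int = 128,
                          min_lines: int = 2, max_lines: int = 4) -> list:
    snippets = []
    h, w = page.shape[:2]
    for top in range(0, h - window + 1, step):        # shift vertically
        for left in range(0, w - window + 1, step):   # shift horizontally
            patch = page[top:top + window, left:left + window]
            if min_lines <= count_text_lines(patch) <= max_lines:
                snippets.append(np.asarray(
                    Image.fromarray(patch).resize((32, 32))))
            # windows outside the prescribed range are discarded
    return snippets
```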
According further to the system described herein, a non-transitory computer readable medium contains software that determines if a document is a text page. The software includes executable code that partitions the document into a plurality of cells, executable code that scales each of the cells to a standardized number of pixels to provide a corresponding snippet for each of the cells, executable code that uses a classifier to examine the snippets to determine which of the cells are classified as text and which of the cells are not classified as text, executable code that determines a volume of text for the document based on a total amount of text in the document corresponding to a sum of an amount of text in each of the cells classified as text, and executable code that determines that the document is a text page in response to the total amount exceeding a pre-determined threshold. In response to the total amount being less than the pre-determined threshold, cells not classified as text may be examined further. Further examining cells not classified as text may include subdividing ones of the cells not classified as text to provide further subdivisions and using the classifier to determine which of the subdivisions are classified as text and to determine a revised total amount based on an additional volume of text according to the subdivisions classified as text to add to the total amount. The software may also include executable code that determines that the document is a text page in response to the revised total amount exceeding the predetermined threshold. The classifier may examine the subdivisions in a random order or in an order that prioritizes subdivisions adjacent to snippets previously classified as text. The software may also include executable code that determines that the document is a text page in response to cells that are classified as text having a satisfactory geometry. At least some of the cells corresponding to snippets that are classified as text may be aligned to form at least one text line and the at least one text line may be horizontal or vertical. The snippets that are not classified as text may be classified as images. Alternatively, the snippets that are not classified as text may be classified as images or unknown. The document may be partitioned into six cells. The document may be captured using a smartphone. The classifier may be provided by training a neural network using a plurality of image documents and a plurality of text pages having various formats, layouts, text sizes, and ranges of word, line, and paragraph spacing.
According further to the system described herein, a non-transitory computer readable medium contains software that trains a neural network to distinguish between text documents and image documents using a corpus of text and image documents. The software includes executable code that creates, for each of the text documents, text snippets by scanning the text document with a window that is shifted horizontally and vertically and discarding windows that contain less than a first number of lines of text or more than a second number of lines of text; executable code that creates, for each of the image documents, image snippets by scanning the image document with a window that is shifted horizontally and vertically; executable code that normalizes resolution of the windows; and executable code that provides the text snippets and the image snippets to a classifier. Normalizing resolution of the windows may include converting each of the windows to a 32×32 pixel resolution. The first number of lines of text may be two and the second number of lines of text may be four. The classifier may be an MNIST-style neural network, provided through Google TensorFlow.
The proposed system offers automatic identification of document page photographs as text intensive pages (or not) by selective hierarchical partitioning and zooming down of page areas into normalized snippets, classifying the snippets using a pre-trained text/image classifier, and accumulating reliably identified text areas until a threshold for sufficient text content is achieved; if the iterative process has not revealed a sufficient amount of text, the page is deemed not to be a text page (i.e., it is an image page).
At a preliminary phase of system development, large corpora of text and image content may be obtained and used for training a robust text/image classifier based on neural networks or other classification mechanisms. The classifier is built to distinguish small snippets of text pages that enclose a small number of text lines (and therefore have a characteristic linear geometry) from snippets of images, which represent a non-linear variety and more complex configuration of shapes within a snippet.
Accordingly, at a pre-processing phase for the corpus of training textual material, the following preparation steps preceding automatic classification are performed: text lines on each page of a text document are identified; each page is scanned with a small window that is shifted horizontally and vertically; windows that contain a predefined range of text lines (in an embodiment, two to four lines, irrespective of text size) are retained as text snippets, while the remaining windows are discarded; and the retained snippets are normalized to a standard low-resolution size (in an embodiment, 32×32 pixels) and stored.
Similarly, portions of individual images in the image corpus may be obtained, preprocessed substantially in the same way as text pages, normalized to the same snippet size, and stored. The difference in building text vs. image snippet collections lies in the criteria for choosing or discarding a square portion of content: a text window is retained only when it encloses the predefined range of text lines, whereas an image window is retained when its content meets image criteria.
The two collections of content snippets (text and images snippets) are subsequently used for training and testing a text/image classifier using standard methods, such as neural networks. Depending on the use of the classifier (e.g. one or two acceptance thresholds), it may categorize a new input snippet using a binary response <text/image> or a ternary response <text/image/unknown>.
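A small sketch of the binary vs. ternary use of acceptance thresholds follows, assuming the classifier outputs a text-likelihood score in [0, 1]; the threshold values themselves are illustrative assumptions.

```python
# One threshold yields a binary <text/image> response; two thresholds yield
# a ternary <text/image/unknown> response for scores in the ambiguous band.
# The 0.8/0.2 values are illustrative assumptions, not from the source.
def classify_snippet(text_score: float, accept: float = 0.8,
                     reject: float = 0.2) -> str:
    if text_score >= accept:
        return "text"
    if text_score <= reject:
        return "image"
    return "unknown"  # only reachable when accept > reject (ternary mode)
```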
After the text/image classifier has been created, the runtime system functioning may include the following: partitioning a captured document page into a small number of cells; scaling each cell to a normalized snippet; classifying the snippets with the pre-trained classifier; accumulating the text volume of cells classified as text; comparing the accumulated volume against the threshold for sufficient text content; and, when the threshold has not been reached, subdividing the remaining non-text cells and repeating the process for as long as further partitioning is feasible.
In some embodiments, various empiric optimization techniques may be used to further accelerate the decision process. Examples may include, without limitation, examining subdivided cells in a random order or prioritizing cells adjacent to snippets previously classified as text.
Embodiments of the system described herein will now be explained in more detail in accordance with the figures of the drawings.
The system described herein provides a mechanism for fast identification of text intensive pages from page photographs or scans by selective hierarchical partitioning and zooming down of page areas into normalized snippets, classifying snippets using a pre-trained text/image classifier, and accumulating reliably identified text areas until a threshold for sufficient text content is achieved.
In an embodiment herein, text lines on each page of an arbitrary text document in the text corpus 110 are identified (e.g., by an operator) prior to adding the text document to the text corpus 110. A separate training module (not shown) scans the text document with a small window that is shifted horizontally and vertically along the page. Windows that contain a predefined range of text lines (in an embodiment, two to four lines of text, irrespective of text size in each line) are stored for future input and training of the classifier 190. Prior to training, the size of the windows is normalized to a standard low-resolution format (in an embodiment, 32×32 pixels) so that all text snippets, which reflect the configuration of text lines and their division into words, have the same size. The training module also obtains image snippets from the image corpus 150 in a similar manner and then provides the text snippets along with the image snippets to the classifier 190 for training.
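The document names an MNIST-style neural network provided through Google TensorFlow; the Keras sketch below shows one plausible shape of such a classifier for 32×32 snippets. The specific layers, sizes, and training settings are assumptions.

```python
# A plausible MNIST-style convolutional classifier for 32x32 snippets,
# sketched with the TensorFlow Keras API. The architecture and
# hyperparameters are assumptions; the source specifies only an MNIST-style
# network in TensorFlow operating on normalized 32x32 snippets.
import tensorflow as tf

def build_text_image_classifier() -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32, 32, 1)),        # one-channel snippet
        tf.keras.layers.Conv2D(32, 5, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 5, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(2, activation="softmax"),  # text vs. image
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```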
The process of subsequent partitions may continue until either the document page 230 is categorized as a text intensive page or process termination criteria are met, as explained elsewhere herein (and the page is not declared text intensive).
After the step 540, processing proceeds to a test step 545, where it is determined whether text cells (cells of the current partition for which normalized snippets have been classified as text) are present. If so, processing proceeds to a step 550 where a previous count of total text volume of the document page is augmented with a cumulative text volume in the text cells of the current partition. After the step 550, processing proceeds to a test step 555, where it is determined whether a total text volume detected in all previously identified text cells is sufficient to identify the document page as a text intensive page. If not, processing proceeds to a test step 560, where it is determined whether a next partition level is feasible, according to criteria explained elsewhere herein. Note that the step 560 can also be reached directly from the test step 545 if it was determined at the step 545 that text cells are not present in a current partition. If the next partition level is feasible, processing proceeds to a step 565, where the system builds a next level of page partition, as illustrated in
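The decision loop traced through steps 540-565 can be summarized in a short sketch. All helpers (`classify_cells`, `text_volume`, `can_partition`, `subdivide`, `has_satisfactory_geometry`) are hypothetical stand-ins for the operations described above and elsewhere herein.

```python
# Sketch of the runtime loop: classify the cells of the current partition,
# accumulate text volume from text cells (step 550), stop as soon as the
# volume is sufficient (step 555), otherwise build the next partition level
# while one is feasible (steps 560/565); if partitioning is exhausted, fall
# back to the geometry check described elsewhere herein.
def is_text_page(cells: list, volume_threshold: float) -> bool:
    total_text = 0.0
    text_cells = []
    while True:
        text, non_text = classify_cells(cells)           # snippet classification
        text_cells.extend(text)
        total_text += sum(text_volume(c) for c in text)  # step 550
        if total_text >= volume_threshold:               # step 555
            return True                                  # text intensive page
        if not can_partition(non_text):                  # step 560
            break
        cells = subdivide(non_text)                      # step 565
    return has_satisfactory_geometry(text_cells)         # fallback geometry test
```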
Various embodiments discussed herein may be combined with each other in appropriate combinations in connection with the system described herein. Additionally, in some instances, the order of steps in the flowcharts, flow diagrams and/or described flow processing may be modified, where appropriate. Also, elements and areas of screens described in screen layouts may vary from the illustrations presented herein. Further, various aspects of the system described herein may be implemented using software, hardware, a combination of software and hardware and/or other computer-implemented modules or devices having the described features and performing the described functions. The mobile device used for page capturing may be a cell phone with a camera, although other devices are also possible.
Note that the mobile device(s) may include software that is pre-loaded with the device, installed from an app store, installed from a desktop (after possibly being pre-loaded thereon), installed from media such as a CD, DVD, etc., and/or downloaded from a Web site. The mobile device may use an operating system such as iOS, Android OS, Windows Phone OS, Blackberry OS and mobile versions of Linux OS.
Software implementations of the system described herein may include executable code that is stored in a computer readable medium and executed by one or more processors, including one or more processors of a desktop computer. The desktop computer may receive input from a capturing device that may be connected to, part of, or otherwise in communication with the desktop computer. The desktop computer may include software that is pre-loaded with the device, installed from an app store, installed from media such as a CD, DVD, etc., and/or downloaded from a Web site. The computer readable medium may be non-transitory and include a computer hard drive, ROM, RAM, flash memory, portable computer storage media such as a CD-ROM, a DVD-ROM, a flash drive, an SD card and/or other drive with, for example, a universal serial bus (USB) interface, and/or any other appropriate tangible or non-transitory computer readable medium or computer memory on which executable code may be stored and executed by a processor. The system described herein may be used in connection with any appropriate operating system.
Other embodiments of the invention will be apparent to those skilled in the art from a consideration of the specification or practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the invention being indicated by the following claims.
Pashintsev, Alexander, Gorbatov, Boris, Livshitz, Eugene, Glazkov, Vitaly