A method of analyzing one or more images of a user to determine the likelihood of user interest in materials that can be sent for display to the user includes selecting one or more images by a user; automatically analyzing the one or more user images to determine the likelihood that materials in a set will be of interest to the user; and selecting one or more items of materials based on their likelihood of interest to the user.
1. A method of analyzing one or more images of a user to determine the likelihood of user interest in product or promotional materials that can be sent to the user, comprising:
a) selecting one or more images having pixels by a user; b) providing a plurality of product and promotional materials; c) automatically analyzing, without using the product or promotional materials, the pixels of the one or more user images to determine the likelihood that one or more particular product or promotional materials will be of interest to the user; and d) selecting one or more of the product or promotional materials based on their likelihood of interest to the user.
3. A method of analyzing one or more images of a user to determine the likelihood of user interest in product or promotional materials that can be sent to the user, comprising:
a) receiving one or more user images having pixels from the user; b) storing in a database sets of database images, and sets of product or promotional materials, wherein each set of product or promotional materials corresponds to a different set of database images; c) automatically analyzing the pixels of the one or more user images and the received database images to determine based upon the analysis if there is a likelihood that one or more of the sets of product or promotional materials will be of interest to the user; and d) selecting the one or more likely sets of product or promotional materials based on their likelihood of interest to the user.
2. A computer storage product having at least one computer storage medium having instructions stored therein causing one or more computers to perform the method of
4. The method of
e) automatically extracting a representation from the one or more sets of database images in terms of one or more features that characterize the one or more sets of database images; and f) storing the one or more sets of database images and the representation in a database.
5. The method according to
6. The method of
7. The method of
8. The method of
9. The method of
10. The method of
11. The method according to
12. The method of
13. The method according to
Reference is made to commonly assigned U.S. patent application Ser. No. 09/291,857, filed Apr. 14, 1999 entitled "Perceptually Significant Feature-Based Image Archival and Retrieval" by Wei Zhu et al., the disclosure of which is incorporated herein by reference.
The present invention relates to analyzing one or more images of a user to determine the likelihood of user interest in materials that can be sent to the user.
Photographic imaging service applications have been extended to include digital imaging technology. For Internet-based photographic service applications a consumer is provided a means for displaying his/her digital images with the digital images residing on a remote computer. Typical scenarios for these Internet-based digital imaging applications include viewing thumbnail versions of a collection of digital images, selection of a particular digital image for viewing, enhancement, and/or printing. While there exist many different methods for an Internet-based photographic service provider to receive payment for the service rendered, many have chosen to display advertisement messages on the consumer's display screen and collect payment not from the consumer but from an advertisement client. At present, it is possible for the photographic service provider to perform directed advertisement if prior knowledge of the consumer in the form of a consumer profile is available. However, if no prior knowledge of the consumer is available, directed advertising is not possible. Furthermore, the consumer profile may not be up-to-date. Moreover, the profile may not account for some facets of a consumer's buying habits. If an employee of the photographic service provider were to view the consumer's photographs, the employee could make intelligent decisions as to which advertisement client would most likely desire directed advertisement to the particular consumer. Aside from issues of privacy of consumer photographs, the process of humans observing photographs and making directed advertising decisions is too costly for consideration. Research has shown that unrelated directed advertisements are often considered a nuisance by the consumer, while directed advertisements which relate to the interests of the consumer are considered desirable. Digital imaging algorithms have long been devised to analyze the content of digital images.
In particular, the methods disclosed by Cullen et al. in U.S. Pat. No. 5,933,823, Ravela et al. in U.S. Pat. No. 5,987,456, and De Bonet et al. in U.S. Pat. No. 5,819,288 analyze digital images. In these digital imaging applications a database of digital images is maintained. For each digital image in the database a set of image features, expressed in mathematical form, are calculated. A query digital image is selected, usually initiated from the user of the digital imaging application, and compared to the digital images in the database. The same set of image features is calculated for the query digital image. A comparison between the calculated image features for the query digital image and the database digital images is performed and yields an image similarity value for each of the database digital images as a measure of overall similarity. The image similarity values are analyzed and the digital images with the highest image similarity values are displayed for the user.
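The generic query flow these disclosures share (calculate image features for each database image, calculate the same features for the query image, compare, and rank by image similarity value) can be sketched as follows. The cosine measure and the two-element feature vectors are illustrative stand-ins, not the feature sets of the cited patents.

```python
import math

def cosine_similarity(a, b):
    """Single measure of similarity between two calculated feature vectors.
    Cosine similarity is an illustrative stand-in for the patents' own
    feature comparisons."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_database(query_features, database):
    """Compute an image similarity value for each database digital image
    and return the images ordered by decreasing similarity, as in the
    query applications described above."""
    scored = [(name, cosine_similarity(query_features, feats))
              for name, feats in database.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical two-feature representations of two database images.
database = {"img_a": [1.0, 0.0], "img_b": [0.6, 0.8]}
ranking = rank_database([1.0, 0.1], database)
# img_a ranks first: its features align more closely with the query
```

The images with the highest image similarity values would then be displayed for the user.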
While these digital image query applications are capable of analyzing digital images, none of the above mentioned disclosed methods relate the content of a set of consumer digital images to the likelihood of an advertisement client's desire to direct advertisement material to that particular consumer.
It is an object of the present invention to provide a digital imaging algorithm which can make intelligent directed advertising decisions by analyzing the image content of consumer digital images.
It is another object of the present invention to make use of the content of a user's image(s) to determine the likelihood that materials would be of interest to the user. Such materials can include product or service promotional materials.
This object is achieved by a method of analyzing one or more images of a user to determine the likelihood of user interest in materials that can be sent for display to the user, comprising:
a) selecting one or more images by a user;
b) automatically analyzing the one or more user images to determine the likelihood that materials in a set will be of interest to the user; and
c) selecting one or more items of materials based on their likelihood of interest to the user.
It is an advantage of the present invention that it provides an advertiser or other purveyor of information with the opportunity to automatically make intelligent directed advertising decisions by analyzing the image content of consumer digital images.
It is another advantage of the present invention that by making use of the content of a user's image(s), the likelihood can efficiently and effectively be determined that materials would be of interest to the user. It is a feature of the invention that such materials can include product or service promotional materials.
In the following description, a preferred embodiment of the present invention will be described as a software program. Those skilled in the art will readily recognize that the equivalent of such software may also be constructed in hardware. Because image manipulation algorithms and systems are well known, the present description will be directed in particular to algorithms and systems forming part of, or cooperating more directly with, the method in accordance with the present invention. Other aspects of such algorithms and systems, and hardware and/or software for producing and otherwise processing the image signals involved therewith, not specifically shown or described herein may be selected from such systems, algorithms, components, and elements known in the art. Given the description as set forth in the following specification, all software implementation thereof is conventional and within the ordinary skill in such arts.
The present invention may be implemented with multiple computers connected via a communications network. A communications network of multiple computers is illustrated in FIG. 2. Such a network of connected computers provides a means of sending and receiving information between any two or more connected computers. A communications network may include physical connections from one computer to another such as can be achieved with a conventional communications phone line. It is also possible for the communications network to include non-physically connected communications lines such as can be achieved with microwave communications links, radio communications links, coaxial cable television communications links, fiber optic communication links, or cellular telephone communications links. Thus the present invention may be practiced with any of the communications systems mentioned above, but is not limited solely to these systems since the present invention relies on exchange of information not the means of achieving the exchange of information.
An image capable computer 100 is any device capable of executing a series of computational instructions which includes the manipulation of digital image data. Although fundamentally any image capable computer may have the capability to perform any computational instructions, the image capable computers illustrated in
The connection computer 120 shown in
Computers not shown in diagram of
A personal computer 150, a mobile computer 160, and a kiosk computer 170 are shown connected to the communications computer network 110 via a connection computer 120. These computers have the capability for the exchange and display of information. In particular, as it relates to the present invention, these computers have the ability to display text, graphic, and image information, but are not limited to these. Such a computer is typically connected to the Internet with software which understands a variety of protocols and manages the visual display of information. One such combination of display software and software protocol is a World Wide Web (WWW) browser which understands Hypertext Markup Language (HTML). Other display software and other software protocols exist. The present invention is not limited to a Web browser processing HTML documents and may be practiced with any combination of software which manages and displays information.
A personal computer 150 represents a computer which is often operated by a single person at a time. Typical personal computers are installed in homes and businesses. Individual users may access the Internet with a connected personal computer. Personal computers may be portable units such as a lap-top computer. If a personal computer is connected to the Internet with a wireless connection it may be located almost anywhere. In such a configuration, the personal computer may represent a mobile computer 160. Fundamentally, a mobile computer and personal computer may differ mostly in size and weight.
A kiosk computer 170 represents a computer which may be dedicated to a commercial task of performing a specialized service. These computers are generally owned and maintained by businesses and operated primarily by consumers. An automatic teller machine (ATM) is an example of a kiosk computer. A typical kiosk computer might include a means for displaying information, selecting service choices, and indicating a payment method for the service selection. Although these three features of a typical kiosk computer are common, the present invention may be practiced with kiosk computers with fewer or more features than the ones described.
A retail computer 130 represents a computer which may also be dedicated to a commercial task of performing a specialized service set in a retail business. These computers are generally owned and maintained by the retail business and operated either by consumers or store personnel. Typical retail computers may include a variety of devices connected.
Referring to
It is assumed that all of the above mentioned computers may have the capability to store the computational instructions, or software in a variety of means which include, but are not limited to, random access memory (RAM), read only memory (ROM), or some form of off-line storage means such as magnetic or optical storage devices.
An image can refer to any form of visual information in recorded or displayed form. Examples of recorded images may include, but are not limited to, a photographic film negative, a photographic slide film, a motion picture film, and a photographic print. Displayed forms of images may include, but are not limited to, visual presentations made on electronic displays such as CRT monitors, LCD panels, electroluminescent devices, and LASER projection systems.
A digital image is comprised of one or more digital image channels. Each digital image channel is comprised of a two-dimensional array of pixels. Each pixel value relates to the amount of light received by the image capture device 10 corresponding to the geometrical domain of the pixel. For color imaging applications a digital image will typically consist of red, green, and blue digital image channels. Although the preferred embodiment of the present invention is practiced with digital images produced with a capture device 10, the present invention is not limited to pixel data relating to photographs. For example, graphic or other synthetic data may be merged with photographically captured pixel data and still be considered a digital image. Other configurations are also practiced, e.g. cyan, magenta, and yellow digital image channels. For monochrome applications, the digital image consists of one digital image channel. Motion imaging applications can be thought of as a time sequence of digital images. Those skilled in the art will recognize that the present invention may be applied to, but is not limited to, a digital image for any of the above mentioned applications. Although the present invention describes a digital image channel as a two dimensional array of pixel values arranged by rows and columns, those skilled in the art will recognize that the present invention can be applied to mosaic (non rectilinear) arrays with equal effect.
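The channel structure just described can be mirrored in a minimal sketch: each digital image channel is a two-dimensional array of pixel values arranged by rows and columns. The 4x6 dimensions, channel names, and zero fill below are illustrative assumptions.

```python
ROWS, COLS = 4, 6

def blank_channel(rows, cols):
    """One digital image channel: rows x cols of pixel values."""
    return [[0] * cols for _ in range(rows)]

# Color imaging application: red, green, and blue digital image channels.
color_image = {name: blank_channel(ROWS, COLS)
               for name in ("red", "green", "blue")}
# Monochrome application: a single digital image channel.
monochrome_image = {"gray": blank_channel(ROWS, COLS)}
# Motion imaging application: a time sequence of digital images.
motion_sequence = [{name: blank_channel(ROWS, COLS)
                    for name in ("red", "green", "blue")} for _ in range(3)]
```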
The present invention may be implemented in a combination of computer hardware software as shown in
The general control processor 40 shown in
It should also be noted that the present invention implemented in a combination of software and/or hardware is not limited to devices which are physically connected and/or located within the same physical location. One or more of the devices illustrated in
The diagram illustrated in
The distribution items are received by a personal computer 150 as shown in FIG. 3.
The present invention may also be practiced with distribution items that do not have a visual representation, such as auditory messages in the form of audio clips. For these audio forms of distribution items, the personal computer 150 can present the information through its internal speaker.
An alternative embodiment of the present invention is practiced with distribution items that are not in electronic form. For example, such distribution items may include, but are not limited to, printed materials or free promotional products or services. An example of a promotional material or product would be a sample of shampoo, soap, or toothpaste. An example of a promotional service would be a coupon for a car wash. For this embodiment of the present invention the image capable computer 100 shown in
Alternatively, the present invention may be practiced with a set of comparison digital images which are not received by the image capable computer 100 shown in
The consumer may also have a set of photographic film negatives which are physically brought to the retail store. For this case, an image capture device 10, as shown in
The digital imaging components of the image capable computer 100 shown in
The individual digital images which make up a set of database digital images are related images which have been chosen to be representative of a particular theme. Digital images relating to an ocean/beach resort may be collected to form a set of database images representing an ocean/beach resort theme. For example, the digital images in a set of database digital images may include images of the beach with sky and clouds, tropical trees, images with sailboats, or images with swimming pools. The important aspect of a set of database digital images is that the set of images as a whole represents the particular theme. Another example of a particular theme would be automobiles. An example set of database images might include individual digital images of automobiles, groups of automobiles, distant automobiles on a race track, or people congregating around a food stand. Here, too, the individual digital images constituting a set of database digital images are related images which support a particular theme.
Let the multiple sets of database digital images be represented by E with each set of database digital images identified by an index j. Thus Ej refers to the jth set of database digital images. Each set of database digital images may contain a different number of digital images. Let N represent multiple numbers with Nj representing the jth number indicating the number of digital images in the jth set of database digital images received by the digital image processor 20. The individual digital images of a set of database digital images are identified with an index k. Thus the kth digital image of the jth set of database digital images is represented symbolically as Fjk. The jth set of database digital images is represented as
The individual digital images contained in the set of query digital images also relate to a single person, a single family, or a group of related people, and represent a collection of consumer digital images. The set of query digital images may vary in number and may have been collected over a relatively short period of time of minutes or hours or over a relatively long period of time such as months or years. The important aspect of the set of query digital images is the fact that the individual digital images relate to one or more activities of importance to the person or people who are either photographed in the digital images or photographed by the person or people. The individual digital images of the set of query digital images are identified by an index i. The set of query digital images is represented as
where M represents the number of digital images included in the set of query digital images and Qi represents the ith individual digital image of the set.
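The notation above can be mirrored in simple data structures: E holds the sets of database digital images Ej, N the per-set counts Nj, and Q the set of M query digital images Qi. The image names and themes below are hypothetical placeholders standing in for pixel data.

```python
# E_j: the jth set of database digital images (themes are illustrative).
E = {
    0: ["beach_1", "beach_2", "beach_3"],   # E_0: ocean/beach resort theme
    1: ["car_1", "car_2"],                  # E_1: automobile theme
}
N = {j: len(images) for j, images in E.items()}   # N_j = size of E_j
Q = ["vacation_1", "vacation_2"]                  # set of query digital images
M = len(Q)                                        # M query images

F_0_1 = E[0][1]   # F_jk: the kth digital image of the jth set (j=0, k=1)
```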
Referring to
The digital image processor 20 analyzes the set of query digital images with respect to multiple sets of database digital images. More specifically, the individual digital images of the set of query digital images are analyzed with respect to the individual digital images of the multiple sets of database digital images.
The numerical analysis performed by the digital image processor 20 results in a database similarity table 252, represented by the variable γ, which is a set of database similarity values, i.e. one database similarity value corresponding to each set of database digital images where γj represents the database similarity value of the jth set of database digital images. The database similarity values included in the database similarity table 252 are single numerical values which are an indication of how similar the set of query digital images 220 are to each set of database digital images. Thus the database similarity table 252 contains information which relates the set of query digital images 220 to the individual sets of database digital images 221, 222, and 223.
The general control processor 40 shown in
The details of the general control processor 40 shown in
the resulting database ranking table 254 is given by
Referring to
The display screen on the consumer's personal computer 150 can display one or more distribution items. The selected set of distribution materials may have more distribution items than the personal computer can display at one time. The preferred embodiment of the present invention cycles through the selected set of distribution materials by selecting individual distribution items for transmission to the personal computer 150. For example, if the personal computer 150 has the ability to display two distribution items and the selected set of distribution materials includes ten distribution items, the first two distribution items are selected, transmitted, and displayed on the personal computer 150. After a time period of five seconds, the next two distribution items, the third and fourth, are selected, transmitted, and displayed on the personal computer 150. This process is repeated until all of the individual distribution items included in the selected set of distribution items have been transmitted and displayed, at which point the process of cycling through the distribution items is repeated.
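The cycling behavior described above can be sketched as a simple pager over the selected set of distribution materials. The item names and the two-items-per-screen assumption are illustrative; the five-second timing is omitted since only the page sequence matters here.

```python
from itertools import cycle, islice

def page_items(items, per_screen):
    """Yield successive groups of distribution items for display, repeating
    from the start once every item has been shown, as described above."""
    pages = [items[i:i + per_screen] for i in range(0, len(items), per_screen)]
    return cycle(pages)

items = [f"ad_{n}" for n in range(1, 11)]   # ten distribution items
pager = page_items(items, per_screen=2)     # display two at a time
first_three = list(islice(pager, 3))
# first_three == [["ad_1", "ad_2"], ["ad_3", "ad_4"], ["ad_5", "ad_6"]]
```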
Those skilled in the art will recognize that the present invention may be used effectively with other configurations of sequencing through a set of distribution items. For example, once the set of distribution materials is determined, the individual distribution items may be selected at random. This type of display of information breaks up the monotony for the person viewing the display screen. It should also be noted that the time period of displaying distribution items is completely up to the digital imaging application designer.
An alternative embodiment of the present invention displays distribution items from more than one set of distribution materials. In this embodiment, individual distribution items from two or more sets of distribution materials are selected, transmitted, and displayed on the personal computer 150. The two sets of database digital images with the two highest associated database similarity values are selected (indicated by indices R1 and R2). If the personal computer has the ability to display two distribution items at a time, one distribution item from the first set of distribution materials is displayed in a portion of the personal computer display while one distribution item from the second set of distribution materials is displayed in the other portion. Each of the sets of the distribution materials is cycled through the respective individual distribution items as described above.
Another alternative embodiment of the present invention displays distribution items from one or more sets of distribution materials simultaneously and varies the length of time that the distribution items are displayed on the personal computer 150. In this embodiment, individual distribution items from two or more sets of distribution materials are selected, transmitted, and displayed on the personal computer 150. In each portion of the personal computer 150 devoted to displaying a distribution item, a distribution item selected from the set of distribution materials with the highest associated database similarity value is selected (R1), transmitted, and displayed. These distribution items are displayed for eight seconds. After eight seconds have elapsed, a distribution item from the set of distribution materials with the next highest associated database similarity value is selected (R2), transmitted, and displayed. These distribution items are displayed for three seconds. In this manner, the length of time devoted to the display of distribution items is related to the corresponding database similarity values. The process continues until all of the sets of distribution materials have been selected.
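The relationship between display time and database similarity value can be sketched as a mapping from similarity values to durations. The disclosed embodiment uses fixed eight- and three-second periods for the two highest-ranked sets; the linear interpolation between those bounds below is an illustrative generalization, not the disclosed method itself.

```python
def display_durations(similarity_values, max_seconds=8.0, min_seconds=3.0):
    """Map database similarity values to display times in seconds so that a
    higher similarity value yields a longer display of the corresponding
    set of distribution materials."""
    hi, lo = max(similarity_values), min(similarity_values)
    if hi == lo:
        return [max_seconds for _ in similarity_values]
    span = max_seconds - min_seconds
    return [min_seconds + span * (v - lo) / (hi - lo) for v in similarity_values]

# Two sets of distribution materials, ranked R1 and R2 by similarity.
durations = display_durations([0.9, 0.4])   # R1 displays longer than R2
```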
Those skilled in the art will recognize that the present invention may be used effectively with other configurations of varying the length of time devoted to the display of distribution items. For example, the set of distribution materials may be selected at random with the individual distribution items cycled sequentially. Each time a distribution item is selected, the length of time it remains on the personal computer display is determined by its associated database similarity value. It should also be noted, as above, that the actual length of time for displaying a distribution item is completely up to the designer of the digital imaging application.
The digital image processor 20 as part of an image capable computer 100 shown in
The image similarity value resulting from the image-to-image comparison of the ith digital image of the set of query digital images and the kth digital image of the jth set of database digital images is represented by βijk.
The image database evaluator 250 receives the image similarity values from the image similarity calculator 240 and derives a table of database similarity values. Each database similarity value is a single numerical value that relates to the degree to which the set of query digital images is similar to a set of database digital images. Recall that the variable γj represents the database similarity value corresponding to the jth set of database digital images.
The database similarity values may be calculated in a variety of different ways. The preferred embodiment of the present invention uses the arithmetic mean of the image similarity values to derive a database similarity value. The equation for calculating γj is given by
which represents the average similarity value.
An alternative embodiment of the present invention uses a subset of the image similarity values to calculate a database similarity value. The image similarity values βijk (there are M·Nj such values) corresponding to the jth set of database digital images are ranked in numerical order. An average of only the five highest image similarity values is used to calculate the database similarity value. Those skilled in the art will recognize that the present invention can be practiced with a different number of the highest image similarity values. Using only the highest image similarity values raises the numerical values of the database similarity values. Since only a few digital images contribute to the calculation of the database similarity value, no penalty is placed on the inclusion of a large number of digital images in a given set of database digital images.
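Both the preferred arithmetic-mean computation of γj and the top-N alternative above can be sketched in a few lines; the βijk values below are hypothetical.

```python
def database_similarity(image_similarity_values, top_n=None):
    """Compute gamma_j for one set of database digital images from its
    beta_ijk image similarity values. With top_n=None this is the
    arithmetic mean over all M*N_j values (the preferred embodiment);
    with top_n set, only the top_n highest values are averaged (the
    alternative embodiment, top_n=5 in the description above)."""
    values = sorted(image_similarity_values, reverse=True)
    if top_n is not None:
        values = values[:top_n]
    return sum(values) / len(values)

beta_j = [0.9, 0.8, 0.7, 0.1, 0.1, 0.1]           # hypothetical beta_ijk
gamma_mean = database_similarity(beta_j)           # arithmetic mean
gamma_top = database_similarity(beta_j, top_n=3)   # average of 3 highest
```

Note how averaging only the highest values raises γj relative to the plain mean, matching the observation above.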
The present invention can be practiced with any method of producing image similarity values. The essential elements of a digital image processing method required to practice the present invention are the calculation of one or more image features for a database of digital images and a query digital image and a method for calculating a single measure of similarity of the calculated image features. For example, the present invention could be practiced with adapted versions of, but is not limited to, the methods disclosed by Cullen et al. in U.S. Pat. No. 5,933,823; Ravela et al. in U.S. Pat. No. 5,987,456; De Bonet in U.S. Pat. No. 5,819,288; Choo et al. in U.S. Pat. No. 5,832,131; Barber et al. in U.S. Pat. No. 5,579,471; and described by M. J. Swain and D. H. Ballard in "Color Indexing," Intl. Journal of Computer Vision, Vol. 7, No. 1, 1991, pp. 11-32; and by G. Pass et al. in "Comparing Images Using Color Coherence Vectors," Proceedings ACM Multimedia Conf., 1996, since all of these methods are image similarity methods based on one or more calculated image features.
The present invention provides a depictive feature-based image comparison system, which consists of two functional phases. In the first phase, called the image feature representation phase, every digital image in the multiple sets of database images managed by the system is processed to automatically extract its depictive feature-based representation. The image feature representation and the digital image are stored in a database and a search index is updated to enable the image feature representation to participate in future depictive feature-based image comparisons. The second phase, called the image comparison phase, is concerned with the comparison of the digital images included in the set of query digital images with the digital images included in the sets of database digital images. Note that the image color space can be transformed into any predefined or desired color space for both phases. The embodiment details given below are applicable to digital images of any color space (e.g., RGB, YCC, HSV, CIE color spaces, etc.). Also, digital images can be transformed to a desired compressed dynamic range in both phases to reduce the computational cost and storage requirements.
The key steps of the image feature representation phase are shown in FIG. 7. Each input digital image is analyzed to build its representation. A digital image can be represented in terms of several different depictive features such as color, texture, and color composition. Referring to
According to the present invention, color feature-based representation of a digital image is in terms of perceptually significant colors present in the digital image. The preferred approach to identifying perceptually significant colors of a digital image is based on the assumption that significantly sized coherently colored regions of a digital image are perceptually significant. Therefore, colors of significantly sized coherently colored regions are considered to be perceptually significant colors. The preferred embodiment offers two different methods for the identification of perceptually significant colors of a digital image. One of these methods is selected for setting up a database. The key steps of the first approach are shown in FIG. 8. For every input digital image, its coherent color histogram is first computed, S100. A coherent color histogram of a digital image is a function of the form H (c)=number of pixels of color c that belong to coherently colored regions. Here c is a valid color in the dynamic range of the digital image. A pixel is considered to belong to a coherently colored region if its color is equal or similar to the colors of a pre-specified minimum number of neighboring pixels. The present implementation has two definitions of coherency: (i) a minimum of 2 matching or similar neighbors, and (ii) all neighbors are matching/similar. The same coherency definition must be used for analyzing all digital images in both the image archival and retrieval phases. Two colors are considered equal if all the corresponding channel values are equal. Two colors c1 and c2 are considered similar if their difference diff(c1, c2) is less than a user specified threshold CT. The preferred value of CT is in the range of 15% to 20% of the maximum possible value of diff(c1, c2). Several different color difference computation methods are possible. 
In the preferred embodiment, one of the following three methods for comparing two L-channel colors cx and cy can be selected at system initialization time:
(i) Colors cx and cy are considered similar if |cxi−cyi|<CTi for every color digital image channel i, where cxi and cyi denote the values of the ith color digital image channel of cx and cy, and CTi denotes the pre-specified threshold value for the difference of the ith color digital image channel values.
(ii) Colors cx and cy are considered similar if Σi=1,L wi·(cxi−cyi)²<CT, where wi is the weight of the ith color digital image channel and CT denotes the pre-specified threshold value.
(iii) Colors cx and cy are considered similar if Σi=1,L wi·|cxi−cyi|<CT, where wi is the weight of the ith color digital image channel and CT denotes the pre-specified threshold value.
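The three selectable similarity tests can be sketched as follows. Colors are represented as tuples of channel values; the weights and thresholds in the examples are illustrative assumptions, since the text fixes only the preferred range of CT:

```python
def similar_per_channel(cx, cy, ct):
    """Method (i): every channel difference must fall below its own threshold CTi."""
    return all(abs(a - b) < t for a, b, t in zip(cx, cy, ct))

def similar_weighted_sq(cx, cy, w, ct):
    """Method (ii): weighted sum of squared channel differences below CT."""
    return sum(wi * (a - b) ** 2 for wi, a, b in zip(w, cx, cy)) < ct

def similar_weighted_abs(cx, cy, w, ct):
    """Method (iii): weighted sum of absolute channel differences below CT."""
    return sum(wi * abs(a - b) for wi, a, b in zip(w, cx, cy)) < ct
```

Whichever test is chosen must then be used consistently for both archival and retrieval, as the text requires.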
Then the coherent color histogram is analyzed to identify the perceptually significant colors, S110. A color k is considered to be a perceptually significant color if H(k)>T, where T is a threshold. In the present implementation, T=0.5% of the total number of pixels in the image. The next step is to represent the digital image in terms of its perceptually significant colors, S120. Specifically, a digital image I is represented by a vector of the form
Here, N is the number of perceptually significant colors in digital image I, Z=Σ Si, Ci is the color value of the ith perceptually significant color of digital image I, and Si is the ratio of H(Ci) to the total number of pixels in image I.
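A minimal sketch of steps S100-S120, assuming a single-channel image, exact color matching, and the "at least 2 matching 4-neighbors" coherency definition (the text also permits similarity within CT and an all-neighbors definition):

```python
from collections import Counter

def coherent_color_histogram(img, min_neighbors=2):
    """S100: count, per color, the pixels whose color matches at least
    min_neighbors of their 4-neighbors."""
    h, w = len(img), len(img[0])
    hist = Counter()
    for y in range(h):
        for x in range(w):
            c = img[y][x]
            matches = sum(
                1
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                if 0 <= y + dy < h and 0 <= x + dx < w
                and img[y + dy][x + dx] == c)
            if matches >= min_neighbors:
                hist[c] += 1
    return hist

def significant_color_vector(img, t_frac=0.005):
    """S110/S120: keep colors with H(c) > T (T = 0.5% of the pixel count)
    and represent the image as (color, size-ratio) pairs."""
    total = len(img) * len(img[0])
    hist = coherent_color_histogram(img)
    return [(c, n / total) for c, n in hist.items() if n > t_frac * total]
```

For a uniformly colored image, every pixel is coherent and the representation collapses to a single (color, 1.0) pair.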
The key steps of the second method for identifying perceptually significant colors of a digital image are shown in FIG. 9. This method is an extension of the first method. In this case, steps S100 and S110 of the first method are performed to detect perceptually significant colors, S200. The set of perceptually significant colors so obtained is considered the initial set of perceptually significant colors, and it is refined to obtain the set of dominant perceptually significant colors. The refinement process starts with the finding of connected components composed solely of pixels of colors belonging to the initial set of perceptually significant colors, S210. This is accomplished by performing connected component analysis on the input digital image considering only the pixels of perceptually significant colors (i.e., considering them as the object pixels) and ignoring others (i.e., considering them as the background pixels). Two neighboring pixels (4- or 8-neighbors) with perceptually significant colors (i.e., colors in the initial set of perceptually significant colors) are considered connected only if they are of matching/similar colors. The connected components so obtained are analyzed to determine the set of dominant perceptually significant colors, S220. A connected component of size greater than a pre-specified threshold Ts is considered a dominant perceptually significant segment. In the present implementation, Ts=0.25% of the total number of pixels in the digital image. Colors belonging to a dominant perceptually significant segment form the set of perceptually significant colors for image feature representation. The final step is again to represent the digital image in terms of its perceptually significant colors, S230. Note that this final set of perceptually significant colors is a subset of the initial set of perceptually significant colors.
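The refinement of steps S210-S220 can be sketched as a connected-component pass. This assumes exact color matching for connectivity and 4-neighbors; the text also permits similar colors and 8-neighbors:

```python
from collections import deque

def dominant_colors(img, significant, ts_frac=0.0025):
    """S210/S220 sketch: find 4-connected components over pixels whose color
    is in the initial significant set; a component larger than Ts
    (0.25% of the pixels) is a dominant perceptually significant segment,
    and its color joins the dominant set."""
    h, w = len(img), len(img[0])
    ts = ts_frac * h * w
    seen = [[False] * w for _ in range(h)]
    dominant = set()
    for y in range(h):
        for x in range(w):
            if seen[y][x] or img[y][x] not in significant:
                continue
            color, size, q = img[y][x], 0, deque([(y, x)])
            seen[y][x] = True
            while q:
                cy, cx = q.popleft()
                size += 1
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = cy + dy, cx + dx
                    if (0 <= ny < h and 0 <= nx < w and not seen[ny][nx]
                            and img[ny][nx] == color):
                        seen[ny][nx] = True
                        q.append((ny, nx))
            if size > ts:
                dominant.add(color)
    return dominant
```

The returned set is, by construction, a subset of the initial significant colors, matching the note above.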
Those skilled in the art would recognize that several variations of the above two color-based image feature representations are possible within the scope of this work. For example, one straightforward extension is a combination of the two representations, where the representation of method 1 is extended by qualifying each perceptually significant color with a type that indicates whether or not that color belongs to a dominant perceptually significant segment.
According to the present invention, texture feature-based representation of a digital image is in terms of the perceptually significant textures present in the digital image, random or structured. The preferred approach to identifying perceptually significant textures of a digital image is based on the assumption that each perceptually significant texture is composed of a large number of repetitions of the same color transition(s). Therefore, by identifying the frequently occurring color transitions and analyzing their textural properties, perceptually significant textures can be extracted and represented. The preferred embodiment for the identification of perceptually significant textures of a digital image is shown in FIG. 10. For every input digital image, the first step in the process is to detect all the color transitions that are present in the image, S300. A color transition occurs between a current pixel (c) and its previous pixel (p) if the change of color value, dist(c, p), is greater than a predefined threshold th. The preferred value of th is in the range of 15% to 20% of the maximum possible value of dist(c, p). A pixel where a color transition occurs is referred to as a color-transition-pixel. In the present embodiment, one of the following two methods for comparing two L-channel colors can be selected to determine the occurrence of a change of color value and, hence, a color transition:
(i) The current pixel is identified to be a color-transition-pixel if |c.chi−p.chi|>th.chi for any color digital image channel i, where c.chi represents the ith color digital image channel value of the current pixel, p.chi represents the ith color digital image channel value of the previous pixel, and th.chi represents the predefined difference threshold for the ith color digital image channel.
(ii) The current pixel is identified to be a color-transition-pixel if dist(c, p)>th, where dist(c, p) is the selected color difference metric computed from the color digital image channel values of the current pixel c and the previous pixel p, and th represents the predefined color difference threshold.
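Color-transition detection along one scan line (S300) can be sketched as below, with the difference metric `dist` supplied by whichever comparison method the system was initialized with:

```python
def color_transitions(row, dist, th):
    """Scan one line of pixels and emit (previous, current) color pairs
    wherever the chosen color difference metric exceeds th; the current
    pixel of each emitted pair is a color-transition-pixel."""
    return [(row[i - 1], row[i]) for i in range(1, len(row))
            if dist(row[i - 1], row[i]) > th]
```

Running the same function over the columns gives the vertical scan; each emitted pair (c1, c2) is one entry for the color transition histograms built in S310.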
Those skilled in the art would recognize that the concept of color transition can be defined as a gray-level or brightness transition in the case of monochromatic images. They would also recognize that other color difference metrics can be employed for determining the existence of a color transition within the scope of this invention. A digital image is scanned horizontally and vertically to identify all color-transition-pixels using one of the above methods. Every color-transition-pixel signals a color transition, and each color transition is represented by the two colors (c1, c2) corresponding to the previous and the current pixel color values that form the color transition. The second step in the process is to identify all the frequently occurring color transitions, S310. Two-dimensional color transition histograms, with c1 and c2 being the two dimensions, are constructed to record the frequency of the various color transitions found in the previous step. The preferred embodiment of the present invention offers three options for constructing and populating the color transition histograms. The first option involves the construction of a global color transition histogram, which is populated by all the color transitions found in the image. Finding all the peaks in the color-transition histogram that also exceed a predefined minimum frequency of occurrence threshold identifies the frequently occurring color transitions. The preferred minimum frequency threshold for identifying frequently occurring color transitions for the global color transition histogram is 0.25% of the total number of pixels in the digital image. The second option involves tessellating the digital image into non-overlapping sections and then constructing a set of sectional transition histograms, which are populated by the color transitions found in the corresponding image sections. In the present embodiment, a set of 24 sectional histograms is constructed.
Finding all the peaks in all of the sectional transition histograms that also exceed a predefined minimum frequency of occurrence threshold identifies the frequently occurring color transitions. The preferred minimum frequency threshold for identifying frequently occurring color transitions for a sectional color transition histogram is 2.5% of the total number of pixels in each tessellated section. The final option is a combination of the two above-mentioned methods, where both the global and the sectional histograms are constructed and all the peaks are identified in the above-mentioned manner. These peaks represent the most frequently occurring color transitions, which correspond to perceptually significant textures in the image. The third step in the process is texture property analysis of the frequently occurring color transitions to represent perceptually significant textures, S320. For every frequently occurring color transition, all the occurrences of this particular color transition in the entire digital image are found, and a scale and a gradient value are calculated. In the current embodiment, scale is calculated as the distance, in pixels, between the occurrence of color c1 and color c2. Gradient is calculated as arctan(gy/gx), where gy and gx are the vertical and horizontal edge information at the color transition, respectively, calculated using the Sobel operator. Note that other techniques for calculating scale and gradient values are possible without exceeding the scope of this invention. The calculated scale and gradient values for each occurrence are used to construct a scale-gradient histogram. After all the occurrences have been accounted for, the scale-gradient histogram is used to analyze the textural properties of the perceptually significant texture.
For random textures, the scale gradient histogram is randomly distributed, while for structured textures, a significantly sharp mode in scale, gradient, or both can be detected in the scale-gradient histogram. For a color transition corresponding to a random texture, the scale-gradient histogram is used to compute the scale-gradient mean vector and the scale-gradient co-variance matrix. For a color transition corresponding to a structured texture, the corresponding histogram mode is used to compute the scale-gradient mean vector and the scale-gradient co-variance matrix. These properties are used to represent a perceptually significant texture. The final step is to represent a digital image in terms of its perceptually significant textures, S330. A digital image I is represented by a vector of the form:
Here N is the number of dominant perceptually significant textures in image I; Z=ΣSi; C1i and C2i are the color values of the frequently occurring color transition corresponding to the ith perceptually significant texture; Pi is the textural type of the ith perceptually significant texture, taking on one of the following possible values: random, scale-structured, gradient-structured, or scale-gradient-structured; Mi and Vi are the scale-gradient mean vector and scale-gradient covariance matrix of the ith perceptually significant texture in the set, respectively; and Si is the total area coverage of the ith perceptually significant texture, calculated by accumulating all the scale values over all the occurrences of the frequently occurring color transition corresponding to the ith perceptually significant texture. Those skilled in the art would recognize that other textural properties, or a subset/superset of these, can be employed to represent a perceptually significant texture.
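The scale-gradient mean vector Mi and covariance matrix Vi summarizing one texture's occurrences can be computed directly from the (scale, gradient) samples; this plain-Python sketch stands in for the histogram-based computation described above:

```python
def scale_gradient_stats(samples):
    """Mean vector M and 2x2 covariance matrix V over the (scale, gradient)
    samples gathered from all occurrences of one color transition."""
    n = len(samples)
    ms = sum(s for s, _ in samples) / n
    mg = sum(g for _, g in samples) / n
    vss = sum((s - ms) ** 2 for s, _ in samples) / n
    vgg = sum((g - mg) ** 2 for _, g in samples) / n
    vsg = sum((s - ms) * (g - mg) for s, g in samples) / n
    return (ms, mg), [[vss, vsg], [vsg, vgg]]
```

For a structured texture, these statistics would be computed only over the samples falling in the detected histogram mode rather than over all occurrences.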
After generating the perceptually significant feature-based image feature representation, the next step is to insert the digital image and the associated representation into the corresponding database and appropriate index structures. Those skilled in the art would recognize that the overall database organization is dependent on the underlying database/file management system. In the present implementation, the digital images reside in the image database. The image feature representations (metadata) are also stored in the database, as well as in the indexing structures. In addition to the perceptually significant feature representations, an image feature representation (metadata) also contains the image identifier/locator, which acts as a reference to the digital image file. The image name/id acts as a locator of its representation. Note that in the current implementation color and texture representations are organized in separate structures, but they share the common digital image.
The steps outlined in
In the image comparison phase, each query digital image in the set of query digital images is analyzed separately. An image similarity value is computed for the comparison between a query digital image and a database digital image. This process is repeated for all the query images. Before a similarity value is calculated between a query digital image and a database digital image, certain constraints must be satisfied by the database image with respect to the query image. In the preferred embodiment of the present invention, these constraints on the perceptually significant features are: (i) a minimum number of the query digital image's perceptually significant features that are present in the database image's features; (ii) a minimum percentage of the total size of the perceptually significant features that are common between the query digital image and the database digital image; and (iii) a logical combination of the first two constraints. To accomplish this, the appropriate index search is first performed to select database digital images that contain one or more perceptually significant features (or principal perceptually significant features if a principal perceptually significant feature-based index is employed) of the query digital image. The representation of every selected database digital image is then analyzed to determine if it satisfies the specified size constraints mentioned above, and for database digital images that satisfy the constraints, a measure of similarity with the query digital image, or image similarity value, is computed. The database digital images which are not selected as part of the above procedure are given an image similarity value of zero, since the comparison between the query digital image and the database digital image did not satisfy the constraints on the perceptually significant features.
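The screening constraints (i)-(iii) can be sketched as below. Each image representation is assumed to be a mapping from feature to size attribute, and the particular size measure used for constraint (ii) (the share of the query covered by common features) is an illustrative choice, not one the text fixes:

```python
def satisfies_constraints(query, db, min_common=1, min_size_frac=0.0,
                          combine=all):
    """Screen a database image before any similarity value is computed.
    query/db: dict mapping feature id -> size attribute (area fraction).
    combine: all (AND) or any (OR), realizing constraint (iii)."""
    common = set(query) & set(db)
    count_ok = len(common) >= min_common           # constraint (i)
    common_size = sum(query[f] for f in common)    # constraint (ii)
    size_ok = common_size >= min_size_frac
    return combine([count_ok, size_ok])
```

Images failing the screen simply receive a similarity value of zero, as stated above.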
The key steps of the example image-based similar image retrieval/selection process are shown in FIG. 12. Given a query digital image, its desired representation (i.e., color or texture-based) is computed, as shown in step S500. Next, the database and the associated index structure are searched to find candidate database digital images that can potentially meet the search criteria, S510. This is accomplished by searching the index structure to identify digital images that contain at least one perceptually significant feature (or principal perceptually significant feature if a principal perceptually significant feature-based representation is employed) in common with the query digital image (as indicated in step S520). For the preferred index structure, this is accomplished by searching the index structure for every perceptually significant (or principal perceptually significant) feature fp to find database digital images with feature fp as a perceptually significant (or principal perceptually significant) feature. For each database digital image that satisfies the search/retrieval constraints, an image similarity value is computed. For database digital images that do not satisfy the search/retrieval constraints, an image similarity value of zero is assigned.
For color-based image comparison, the index structure based on perceptually significant or principal perceptually significant colors is searched to find database digital images containing in their representation at least one of the query digital image's perceptually significant (or principal perceptually significant) colors. The preferred options for a measure of similarity (the calculation of the image similarity value as indicated in step S520) for the color-based representations are:
wherein N is the number of matching colors of query digital image q and database digital image d; Siq and Sid are the size attribute values for the ith matching color of images q and d, respectively; diff is a normalized distance function of the type Lx norm for a given x; and Ωq and Ωd are the sets of size attribute values of the corresponding perceptually significant colors of digital images q and d.
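The similarity equations themselves are not reproduced above. As a purely illustrative stand-in (an assumption, not the patent's equations (10) and (11)), one simple size-based color similarity over matching colors could be:

```python
def color_similarity(q, d):
    """Illustrative only: sum, over matching colors, of the smaller size
    attribute, so identical representations score 1.0 and representations
    with no colors in common score 0.0."""
    return sum(min(q[c], d[c]) for c in set(q) & set(d))
```

Any such measure would be computed only for database images that already passed the constraint screening described earlier.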
For texture-based image comparison, the index structure based on perceptually significant (or principal perceptually significant) textures is searched to find database digital images that contain at least one of the query digital image's perceptually significant (or principal perceptually significant) textures in their representation. The resulting set of candidate database digital images is further compared with the query digital image to determine each candidate's texture-based similarity to the query image. The preferred measure of similarity between the query digital image and a candidate digital image depends on the similarity of the matching, or common, perceptually significant textures, and also on the total area coverage in the query digital image and the candidate digital image of the matching/common perceptually significant textures. Two perceptually significant textures are matching/common if they have matching color values C1, C2, and the same texture property value P (random or structured) in their representation. In the preferred embodiment, for each matching/common perceptually significant texture, the similarity of the matching perceptually significant textures is calculated from the scale-gradient mean vector M and the scale-gradient covariance matrix V using either the Euclidean distance or the Mahalanobis distance. Note that other distance functions may also be used within the scope of the present invention. The overall image similarity score between the candidate and the query digital image is determined as the sum of the similarity values of all the matching perceptually significant textures, each multiplied by the relative area coverage S of the texture in the digital image. Note that, in general, the Mahalanobis distance is not symmetric: the distance from distribution A to distribution B differs from the distance from distribution B to distribution A.
In addition, the relative area coverage S is different in the candidate and the query digital image. Therefore, two image similarity values generally result from the similarity calculation: one from the query to the candidate Sq-c, and one from the candidate to the query Sc-q. The preferred embodiment has 5 different options for obtaining one single image similarity value. The first two options take either Sq-c or Sc-q as the final image similarity value; the third option takes the maximum of Sq-c and Sc-q; the fourth option takes the average of Sq-c and Sc-q; and the fifth option takes the product of Sq-c and Sc-q. Other combinatorial methods can also be used without exceeding the scope of the present invention.
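The five combination options for reducing the two asymmetric similarity values to a single score can be written down directly:

```python
def combine_similarity(s_qc, s_cq, option):
    """Options 1-5 for combining the query-to-candidate value s_qc and the
    candidate-to-query value s_cq into one image similarity value."""
    return {1: s_qc,
            2: s_cq,
            3: max(s_qc, s_cq),
            4: (s_qc + s_cq) / 2,
            5: s_qc * s_cq}[option]
```

Note that the product (option 5) penalizes asymmetry the most, while the maximum (option 3) rewards whichever direction matched better.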
The calculation of the image similarity value as described in the preferred embodiment of the present invention which includes index structure searches and similarity calculation only on the candidate database images can result in many query/database digital image comparisons that have a zero value. In an alternative embodiment of the present invention, the indexing structure can be eliminated. In this embodiment, every database digital image will be selected for further evaluation. The method of calculating the image similarity value is employed as in the preferred embodiment as indicated by equations (10) and (11).
Those skilled in the art would recognize that other similarity measures could be employed within the scope of the present invention.
A computer program product may include one or more storage media, for example: magnetic storage media such as a magnetic disk (such as a floppy disk) or magnetic tape; optical storage media such as an optical disk, optical tape, or machine-readable bar code; solid-state electronic storage devices such as random access memory (RAM) or read-only memory (ROM); or any other physical device or medium employed to store a computer program having instructions for practicing a method according to the present invention.
The invention has been described in detail with particular reference to certain preferred embodiments thereof, but it will be understood that variations and modifications can be effected within the spirit and scope of the invention.
10 image capture device
20 digital image processor
30 image output device
40 general control processor
42 monitor device
46 offline memory device
100 image capable computer
110 communications computer network
120 connection computer
130 retail computer
140 wholesale computer
150 personal computer
160 mobile computer
170 kiosk computer
201 active image
202 thumbnail image
203 thumbnail image
204 thumbnail image
205 thumbnail image
206 thumbnail image
207 thumbnail image
210 distribution item
211 distribution item
212 distribution item
220 set of query digital images
221 set of database digital images
222 set of database digital images
223 set of database digital images
231 set of distribution materials
232 set of distribution materials
233 set of distribution materials
240 image similarity calculator
250 image database evaluator
252 database similarity table
254 database ranking table
280 distribution controller
Mehrotra, Rajiv, Gindele, Edward B., Zhu, Wei
9514134, | Apr 01 2004 | Kyocera Corporation | Triggering actions in response to optically or acoustically capturing keywords from a rendered document |
9535563, | Feb 01 1999 | Blanding Hovenweep, LLC; HOFFBERG FAMILY TRUST 1 | Internet appliance system and method |
9542419, | May 09 2005 | GOOGLE LLC | Computer-implemented method for performing similarity searches |
9633013, | Apr 01 2004 | Kyocera Corporation | Triggering actions in response to optically or acoustically capturing keywords from a rendered document |
9678989, | May 09 2005 | GOOGLE LLC | System and method for use of images with recognition analysis |
9678991, | Feb 24 2014 | Samsung Electronics Co., Ltd. | Apparatus and method for processing image |
9690979, | Mar 12 2006 | GOOGLE LLC | Techniques for enabling or establishing the use of face recognition algorithms |
Patent | Priority | Assignee | Title |
5579471, | Nov 09 1992 | GOOGLE LLC | Image query system and method |
5696964, | Apr 16 1996 | NEC Corporation | Multimedia database retrieval system which maintains a posterior probability distribution that each item in the database is a target of a search |
5819288, | Oct 16 1996 | Microsoft Technology Licensing, LLC | Statistically based image group descriptor particularly suited for use in an image classification and retrieval system |
5832131, | May 03 1995 | National Semiconductor Corporation | Hashing-based vector quantization |
5933823, | Mar 01 1996 | Ricoh Company Limited | Image database browsing and query using texture analysis |
5963670, | Feb 12 1996 | Massachusetts Institute of Technology | Method and apparatus for classifying and identifying images |
5987456, | Oct 28 1997 | MASSACHUSETTS, UNIVERSITY OF | Image retrieval by syntactic characterization of appearance |
6035055, | Nov 03 1997 | HEWLETT-PACKARD DEVELOPMENT COMPANY, L P | Digital image management system in a distributed data access network system |
6104835, | Nov 14 1997 | KLA-Tencor Corporation; Uniphase Corporation | Automatic knowledge database generation for classifying objects and systems therefor |
6128663, | Feb 11 1997 | WORLDWIDE CREATIVE TECHNIQUES, INC | Method and apparatus for customization of information content provided to a requestor over a network using demographic information yet the user remains anonymous to the server |
6154771, | Jun 01 1998 | Tata America International Corporation | Real-time receipt, decompression and play of compressed streaming video/hypervideo; with thumbnail display of past scenes and with replay, hyperlinking and/or recording permissively initiated retrospectively |
6285788, | Jun 13 1997 | RAKUTEN, INC | Method for fast return of abstracted images from a digital image database |
6295061, | Feb 12 1999 | DBM Korea | Computer system and method for dynamic information display |
6295526, | Oct 14 1997 | TUMBLEWEED HOLDINGS LLC | Method and system for processing a memory map to provide listing information representing data within a database |
6379251, | Feb 24 1997 | REALTIME ACQUISITION, LLC, A DELAWARE LIMITED LIABILITY COMPANY | System and method for increasing click through rates of internet banner advertisements |
6389417, | Jun 29 1999 | SAMSUNG ELECTRONICS CO , LTD | Method and apparatus for searching a digital image |
6487538, | Nov 16 1998 | ConneXus Corporation | Method and apparatus for local advertising |
6496857, | Feb 08 2000 | MIRROR WORLDS TECHNOLOGIES, LLC | Delivering targeted, enhanced advertisements across electronic networks |
6513052, | Dec 15 1999 | Uber Technologies, Inc | Targeted advertising over global computer networks |
6519584, | Jun 26 1996 | Sun Microsystems, Inc. | Dynamic display advertising |
6539375, | Aug 04 1998 | Microsoft Technology Licensing, LLC | Method and system for generating and using a computer user's personal interest profile |
Date | Maintenance Fee Events |
Aug 02 2004 | ASPN: Payor Number Assigned. |
Jan 17 2008 | M1551: Payment of Maintenance Fee, 4th Year, Large Entity. |
Jan 27 2012 | M1552: Payment of Maintenance Fee, 8th Year, Large Entity. |
Jan 25 2016 | M1553: Payment of Maintenance Fee, 12th Year, Large Entity. |
Date | Maintenance Schedule |
Aug 31 2007 | 4 years fee payment window open |
Mar 02 2008 | 6 months grace period start (w surcharge) |
Aug 31 2008 | patent expiry (for year 4) |
Aug 31 2010 | 2 years to revive unintentionally abandoned end. (for year 4) |
Aug 31 2011 | 8 years fee payment window open |
Mar 02 2012 | 6 months grace period start (w surcharge) |
Aug 31 2012 | patent expiry (for year 8) |
Aug 31 2014 | 2 years to revive unintentionally abandoned end. (for year 8) |
Aug 31 2015 | 12 years fee payment window open |
Mar 02 2016 | 6 months grace period start (w surcharge) |
Aug 31 2016 | patent expiry (for year 12) |
Aug 31 2018 | 2 years to revive unintentionally abandoned end. (for year 12) |