An automated system checks networked computers, such as computers on the internet, for watermarked audio, video, or image data. A report listing the locations of such audio, video, or image data is generated and provided to the proprietor(s) thereof, as identified by the watermark information.

Patent: RE40919
Priority: Nov 18, 1993
Filed: Jan 27, 2004
Issued: Sep 22, 2009
Expiry: Nov 18, 2013
10. A method for surveying distribution of proprietary empirical data sets on computer sites accessible via the internet, comprising:
providing a master code signal useful for detecting steganographic coding within empirical data sets;
automatically downloading data, including empirical data sets, from a plurality of computer sites over the internet;
for each of a plurality of empirical data sets obtained by said downloading operation, discerning certain identification data, if any, steganographically encoded therein, said discerning employing said master code signal as a decoding key; and
generating a report identifying steganographically encoded empirical data sets identified by the foregoing steps, and the site from which each was downloaded;
wherein there is calibration data steganographically encoded within at least one empirical data set, said calibration data having one or more known properties facilitating identification thereof during the discerning step;
the method including identifying the calibration data within the empirical data set and using data obtained thereby to aid in discerning the identification data from the empirical data set;
wherein the empirical data set has been corrupted since being encoded, said corruption including a process selected from the group consisting of: misregistration and scaling of the empirical data set;
the method further including using said data to compensate for said corruption, wherein the identification data can nonetheless be recovered from the empirical data set notwithstanding said corruption.
1. A method for surveying distribution of proprietary empirical data sets, such as audio, image, or video data, on computer sites accessible via the internet, comprising:
automatically downloading data, including empirical data sets, from a plurality of computer sites over the internet;
for each of a plurality of empirical data sets obtained by said downloading operation, automatically screening same to identify the potential presence of identification data steganographically encoded therein;
for each of a plurality of empirical data sets screened by said screening operation, discerning identification data, if any, steganographically encoded therein; and
generating a report identifying steganographically encoded empirical data sets identified by the foregoing steps, and the site from which each was downloaded;
wherein there is calibration data steganographically encoded within at least one empirical data set, said calibration data having one or more known properties facilitating identification thereof during the discerning step;
the method including identifying the calibration data within the empirical data set and using data obtained thereby to aid in discerning the identification data from the empirical data set;
wherein the empirical data set has been corrupted since being encoded, said corruption including a process selected from the group consisting of: misregistration and scaling of the empirical data set;
the method further including using said data to compensate for said corruption, wherein the identification data can nonetheless be recovered from the empirical data set notwithstanding said corruption.
2. The method of claim 1 which includes providing a master code signal, and using said code signal in discerning said steganographically encoded identification data from said screened empirical data sets.
3. The method of claim 2 in which said master code signal has the appearance of unpatterned snow if represented in a pixel domain.
4. The method of claim 1 in which said discerning of identification data from said downloaded empirical data is accomplished without previous knowledge of the audio, image, or video information represented thereby.
5. The method of claim 1 which includes identifying proprietors of empirical data sets by reference to identification data steganographically discerned therefrom, and reporting to said proprietors the sites from which their empirical data sets were downloaded.
6. The method of claim 5 in which said identification data includes information in addition to data identifying said proprietor, and the method includes providing said additional data to said proprietors.
7. The method of claim 5 in which said identification data is a serial number index into a registry database containing names and contact information for proprietors identified by said identification data.
8. The method of claim 1 in which the empirical data sets include image data, and the method includes:
converting said image data to pixel form, if not already in said form; and
performing a plurality of statistical analyses on said pixel form image data to discern the identification data therefrom.
9. The method of claim 8 in which each statistical analysis includes analyzing a collection of spaced apart pixels to decode a single, first bit of the identification data therefrom, said analysis to decode the first bit encompassing not just said spaced apart pixels, but also pixels adjacent thereto, said adjacent pixels not being encoded with said first bit.
11. The method of claim 10 which includes automatically screening each of a plurality of said empirical data sets obtained by said downloading operation, to identify the potential presence of identification data steganographically encoded therein and, for those data sets that pass said screening process, discerning identification data, if any, steganographically encoded therein.
12. The method of claim 10 in which said master code signal has the appearance of unpatterned snow if represented in a pixel domain.
13. The method of claim 10 in which said discerning of identification data from said downloaded empirical data is accomplished without previous knowledge of the audio, image, or video information represented thereby.
14. The method of claim 10 which includes identifying proprietors of empirical data sets by reference to identification data steganographically discerned therefrom, and reporting to said proprietors the sites from which their empirical data sets were downloaded.
15. The method of claim 14 in which said identification data includes information in addition to data identifying said proprietor, and the method includes providing said additional data to said proprietors.
16. The method of claim 14 in which said identification data is a serial number index into a registry database containing names and contact information for proprietors identified by said identification data.
17. The method of claim 10 in which the empirical data sets include image data, and the method includes:
converting said image data to pixel form, if not already in said form; and
performing a plurality of statistical analyses on said pixel form image data to discern the identification data therefrom.
18. The method of claim 17 in which each statistical analysis includes analyzing a collection of spaced apart pixels to decode a single, first bit of the identification data therefrom, said analysis to decode the first bit encompassing not just said spaced apart pixels, but also pixels adjacent thereto, said adjacent pixels not being encoded with said first bit.

This application is a continuation in part of the following copending applications: PCT/US96/06618 (May 7, 1996), Ser. No. 08/637,531 (filed Apr. 25, 1996) allowed, Ser. No. 08/534,005 (filed Sep. 25, 1995) allowed, Ser. No. 10
Here, n and m are simple indexing values on rows and columns of the image ranging from 0 to 3999. Sqrt is the square root. V is the DN of a given indexed pixel on the original digital image. The < > brackets around the RMS noise merely indicate that this is an expected average value; each and every pixel will have its own individual random error. Thus, for a pixel value having 1200 as a digital number or “brightness value”, we find that its expected rms noise value is sqrt(1204)=34.70, which is quite close to 34.64, the square root of 1200.

We furthermore realize that the square root of the innate brightness value of a pixel is not precisely what the eye perceives as a minimum objectionable noise, thus we come up with the formula:
<RMS Addable Noise_{n,m}> = X * sqrt(4 + (V_{n,m} − 30)^Y)   (2)
where X and Y have been added as empirical parameters which we will adjust, and “addable” noise refers to our acceptable perceived noise level from the definitions above. We now intend to experiment with what exact values of X and Y to choose, but we will do so at the same time that we are performing the next steps in the process.

The next step in our process is to choose N, the length of our N-bit identification word. We decide that a 16-bit main identification value with its 65536 possible values will be sufficiently large to identify the image as ours, and that we will be directly selling no more than 128 copies of the image which we wish to track, giving 7 bits plus an eighth bit for an odd/even adding of the first 7 bits (i.e. an error checking bit on the first seven). The total bits required now are at 4 bits for the 0101 calibration sequence, 16 for the main identification, 8 for the version, and we now throw in another 4 as a further error checking value on the first 28 bits, giving 32 bits as N. The final 4 bits can use one of many industry standard error checking methods to choose their four values.
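By way of illustration, a minimal Python sketch of assembling such a 32-bit identification word follows. It is not part of the original disclosure; the simple XOR fold used for the trailing 4 check bits and the even-parity eighth version bit are illustrative assumptions, since the disclosure leaves the particular error-checking method open.

```python
# Minimal sketch of assembling the 32-bit identification word described above.
# The 4-bit trailing check value here is a simple XOR fold of the first 28 bits;
# the text leaves the choice of error-checking method open.

def parity(bits):
    """Odd/even check bit over a list of 0/1 values."""
    return sum(bits) % 2

def build_id_word(main_id_16, copy_number_7):
    calibration = [0, 1, 0, 1]                                    # fixed 0101 calibration sequence
    main_id = [(main_id_16 >> (15 - i)) & 1 for i in range(16)]   # 16-bit main identification
    version = [(copy_number_7 >> (6 - i)) & 1 for i in range(7)]  # 7-bit copy/version number
    version.append(parity(version))                               # 8th bit checks the first seven
    first_28 = calibration + main_id + version
    check = [0, 0, 0, 0]
    for i, b in enumerate(first_28):                              # 4-bit XOR fold of the first 28 bits
        check[i % 4] ^= b
    return first_28 + check                                       # 32 bits total

word = build_id_word(0b1101000110011110, 0)   # main ID from the example, version 0
assert len(word) == 32
print("".join(map(str, word)))
```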

We now randomly determine the 16 bit main identification number, finding for example, 1101 0001 1001 1110; our first versions of the original sold will have all 0's as the version identifier, and the error checking bits will fall out where they may. We now have our unique 32 bit identification word which we will embed on the original digital image.

To do this, we generate 32 independent random 4000 by 4000 encoding images for each bit of our 32 bit identification word. The manner of generating these random images is revealing. There are numerous ways to generate these. By far the simplest is to turn up the gain on the same scanner that was used to scan in the original photograph, only this time placing a pure black image as the input, then scanning this 32 times. The only drawback to this technique is that it does require a large amount of memory and that “fixed pattern” noise will be part of each independent “noise image.” But, the fixed pattern noise can be removed via normal “dark frame” subtraction techniques. Assume that we set the absolute black average value at digital number ‘100,’ and that rather than finding a 2 DN rms noise as we did in the normal gain setting, we now find an rms noise of 10 DN about each and every pixel's mean value.
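A minimal sketch of this step follows, assuming NumPy-generated Gaussian noise stands in for the high-gain scanner output and a 512×512 frame stands in for the 4000×4000 frames of the example; the dark-frame subtraction removes the simulated fixed-pattern component.

```python
import numpy as np

# Sketch: simulate the 32 independent "noise images" described above.  A real
# implementation would scan a black target 32 times; here Gaussian noise with a
# 10 DN rms about a mean of 100 DN stands in for the high-gain scanner output,
# and a fixed-pattern component is removed by ordinary dark-frame subtraction.

rng = np.random.default_rng(0)
SIZE = 512
N_BITS = 32

fixed_pattern = rng.normal(0, 3, (SIZE, SIZE))   # per-pixel bias shared by every scan
dark_frame = fixed_pattern + 100                 # averaged scan of the black target

noise_images = []
for _ in range(N_BITS):
    scan = 100 + fixed_pattern + rng.normal(0, 10, (SIZE, SIZE))  # one "scan" of pure black
    noise_images.append(scan - dark_frame)       # dark-frame subtraction removes the bias
noise_images = np.stack(noise_images)            # shape (32, 512, 512), zero-mean noise
print(noise_images.std())                        # roughly 10 DN rms, as in the text
```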

We next apply a mid-spatial-frequency bandpass filter (spatial convolution) to each and every independent random image, essentially removing the very high and the very low spatial frequencies from them. We remove the very low frequencies because simple real-world error sources like geometrical warping, splotches on scanners, mis-registrations, and the like will exhibit themselves most at lower frequencies also, and so we want to concentrate our identification signal at higher spatial frequencies in order to avoid these types of corruptions. Likewise, we remove the higher frequencies because multiple generation copies of a given image, as well as compression-decompression transformations, tend to wipe out higher frequencies anyway, so there is no point in placing too much identification signal into these frequencies if they will be the ones most prone to being attenuated. Therefore, our new filtered independent noise images will be dominated by mid-spatial frequencies. On a practical note, since we are using 12-bit values on our scanner and we have removed the DC value effectively and our new rms noise will be slightly less than 10 digital numbers, it is useful to boil this down to a 6-bit value ranging from −32 through 0 to 31 as the resultant random image.
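The following sketch shows one way such a mid-spatial-frequency bandpass and 6-bit quantization might be implemented; the FFT-based annular mask and the particular cutoff fractions are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def midband_filter(img, lo=0.05, hi=0.35):
    """FFT bandpass keeping only mid spatial frequencies (cutoffs are illustrative,
    expressed as fractions of the Nyquist frequency)."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    r = np.sqrt(fx ** 2 + fy ** 2) / 0.5          # radial frequency as a fraction of Nyquist
    mask = (r >= lo) & (r <= hi)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

def to_six_bit(img):
    """Round and clip the filtered noise to the -32..31 range used in the text."""
    return np.clip(np.rint(img), -32, 31).astype(np.int8)

# filtered = np.stack([to_six_bit(midband_filter(n)) for n in noise_images])
```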

Next we add all of the random images together which have a ‘1’ in their corresponding bit value of the 32-bit identification word, accumulating the result in a 16-bit signed integer image. This is the unattenuated and un-scaled version of the composite embedded signal.
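A sketch of this accumulation step, assuming the identification word and filtered code images from the sketches above:

```python
import numpy as np

def composite_signal(id_word, code_images):
    """Sum the code images whose corresponding bit is '1', accumulating the
    result in a 16-bit signed image (the un-scaled composite embedded signal)."""
    acc = np.zeros(code_images.shape[1:], dtype=np.int16)
    for bit, img in zip(id_word, code_images):
        if bit:
            acc += img.astype(np.int16)
    return acc

# composite = composite_signal(word, filtered)   # arrays from the sketches above
```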

Next we experiment visually with adding the composite embedded signal to the original digital image, through varying the X and Y parameters of equation 2. In formula, we visually iterate to both maximize X and to find the appropriate Y in the following.
V_{dist;n,m} = V_{orig;n,m} + V_{comp;n,m} * X * sqrt(4 + (V_{orig;n,m} − 30)^Y)
where dist refers to the candidate distributable image, i.e. we are visually iterating to find what X and Y will give us an acceptable image; orig refers to the pixel value of the original image; and comp refers to the pixel value of the composite image. The n's and m's still index rows and columns of the image and indicate that this operation is done on all 4000 by 4000 pixels. The symbol V is the DN of a given pixel and a given image.

As an arbitrary assumption, now, we assume that our visual experimentation has found that the value of X=0.025 and Y=0.6 are acceptable values when comparing the original image with the candidate distributable image. This is to say, the distributable image with the “extra noise” is acceptably close to the original in an aesthetic sense. Note that since our individual random images had a random rms noise value around 10 DN, and that adding approximately 16 of these images together will increase the composite noise to around 40 DN, the X multiplication value of 0.025 will bring the added rms noise back to around 1 DN, or half the amplitude of our innate noise on the original. This is roughly a 1 dB gain in noise at the dark pixel values and correspondingly more at the brighter values modified by the Y value of 0.6.
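A sketch of forming the candidate distributable image with the chosen X and Y follows, assuming the per-pixel gain follows equation (2) applied to the original's brightness; clipping the base of the exponent at zero is purely a numerical safeguard added here.

```python
import numpy as np

def make_distributable(original, composite, X=0.025, Y=0.6):
    """Add the scaled composite signal to the original, per pixel, using the
    addable-noise formula; X and Y are the visually chosen values from the text."""
    gain = X * np.sqrt(4.0 + np.maximum(original - 30.0, 0.0) ** Y)
    return original + composite * gain

# distributable = make_distributable(original.astype(float), composite)
```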

So with these two values of X and Y, we now have constructed our first versions of a distributable copy of the original. Other versions will merely create a new composite signal and possibly change the X slightly if deemed necessary. We now lock up the original digital image along with the 32-bit identification word for each version, and the 32 independent random 4-bit images, waiting for our first case of a suspected piracy of our original. Storage-wise, this is about 14 Megabytes for the original image and 32*0.5 bytes*16 million ≈ 256 Megabytes for the random individual encoded images. This is quite acceptable for a single valuable image. Some storage economy can be gained by simple lossless compression.

Finding a Suspected Piracy of our Image

We sell our image and several months later find our two heads of state in the exact poses we sold them in, seemingly cut and lifted out of our image and placed into another stylized background scene. This new “suspect” image is, let us say, being printed in 100,000 copies of a given magazine issue. We now go about determining if a portion of our original image has indeed been used in an unauthorized manner. FIG. 3 summarizes the details.

The first step is to take an issue of the magazine, cut out the page with the image on it, then carefully but not too carefully cut out the two figures from the background image using ordinary scissors. If possible, we will cut out only one connected piece rather than the two figures separately. We paste this onto a black background and scan this into a digital form. Next we electronically flag or mask out the black background, which is easy to do by visual inspection.

We now procure the original digital image from our secured place along with the 32-bit identification word and the 32 individual embedded images. We place the original digital image onto our computer screen using standard image manipulation software, and we roughly cut along the same borders as our masked area of the suspect image, masking this image at the same time in roughly the same manner. The word ‘roughly’ is used since an exact cutting is not needed; getting the cut reasonably close merely aids the identification statistics.

Next we rescale the masked suspect image to roughly match the size of our masked original digital image, that is, we digitally scale up or down the suspect image and roughly overlay it on the original image. Once we have performed this rough registration, we then throw the two images into an automated scaling and registration program. The program performs a search on the three parameters of x position, y position, and spatial scale, with the figure of merit being the mean squared error between the two images for any given scale variable and x and y offset. This is a fairly standard image processing methodology. Typically this would be done using generally smooth interpolation techniques and done to sub-pixel accuracy. The search method can be one of many, where the simplex method is a typical one.
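One possible realization of this search follows, using SciPy's Nelder-Mead routine as the simplex method and bilinear resampling for sub-pixel accuracy; the parameterization (a scale about the origin plus an x-y offset) is an illustrative simplification.

```python
import numpy as np
from scipy import ndimage, optimize

# Sketch of the automated registration/scaling search: resample the suspect image
# onto the original's grid for a trial (scale, dy, dx), use the mean squared error
# inside the mask as the figure of merit, and search the three parameters with the
# Nelder-Mead simplex routine.

def resample(suspect, scale, dy, dx, shape):
    scale = max(scale, 1e-3)                          # guard against degenerate trial scales
    matrix = np.eye(2) / scale                        # output -> input coordinate mapping
    offset = np.array([-dy, -dx]) / scale
    return ndimage.affine_transform(suspect, matrix, offset=offset,
                                    output_shape=shape, order=1)

def registration_error(params, suspect, original, mask):
    scale, dy, dx = params
    warped = resample(suspect, scale, dy, dx, original.shape)
    return np.mean(((warped - original) ** 2)[mask])

def register(suspect, original, mask, guess=(1.0, 0.0, 0.0)):
    result = optimize.minimize(registration_error, guess,
                               args=(suspect, original, mask),
                               method="Nelder-Mead")
    return result.x                                   # best-fit (scale, dy, dx)
```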

Once the optimal scaling and x-y position variables are found, next comes another search on optimizing the black level, brightness gain, and gamma of the two images. Again, the figure of merit to be used is mean squared error, and again the simplex or other search methodologies can be used to optimize the three variables. After these three variables are optimized, we apply their corrections to the suspect image and align it to exactly the pixel spacing and masking of the original digital image and its mask. We can now call this the standard mask.

The next step is to subtract the original digital image from the newly normalized suspect image only within the standard mask region. This new image is called the difference image.

Then we step through all 32 individual random embedded images, doing a local cross-correlation between the masked difference image and the masked individual embedded image. ‘Local’ refers to the idea that one need only start correlating over an offset region of ±1 pixels of offset between the nominal registration points of the two images found during the search procedures above. The peak correlation should be very close to the nominal registration point of 0,0 offset, and we can add the 3 by 3 correlation values together to give one grand correlation value for each of the 32 individual bits of our 32-bit identification word.
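A sketch of the local cross-correlation over a ±1 pixel offset window follows; the wrap-around shift used here is an illustrative shortcut that is harmless away from the mask border.

```python
import numpy as np

# Correlate the masked difference image against one masked embedded code image
# over offsets of +/-1 pixel and sum the 3x3 correlation values into a single
# grand correlation value for that bit.

def grand_correlation(diff, code, mask):
    total = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            shifted = np.roll(np.roll(code, dy, axis=0), dx, axis=1)
            total += np.sum(diff[mask] * shifted[mask])
    return total

# values = [grand_correlation(difference_image, c, standard_mask) for c in code_images]
```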

After doing this for all 32 bit places and their corresponding random images, we have a quasi-floating point sequence of 32 values. The first four values represent our calibration signal of 0101. We now take the mean of the first and third floating point values and call this floating point value ‘0,’ and we take the mean of the second and the fourth values and call this floating point value ‘1.’ We then step through all remaining 28 bit values and assign either a ‘0’ or a ‘1’ based simply on which mean value they are closer to. Stated simply, if the suspect image is indeed a copy of our original, the embedded 32-bit resulting code should match that of our records, and if it is not a copy, we should get general randomness. The third and fourth possibilities, namely 3) the image is a copy but does not match the identification number, and 4) the image is not a copy but does match, are, in the case of 3), possible if the signal to noise ratio of the process has plummeted, i.e. the ‘suspect image’ is truly a very poor copy of the original, and, in the case of 4), basically one chance in four billion, since we were using a 32-bit identification number. If we are truly worried about 4), we can just have a second independent lab perform their own tests on a different issue of the same magazine. Finally, checking the error-check bits against what the decoded values give is one final and possibly overkill check on the whole process. In situations where signal to noise is a possible problem, these error checking bits might be eliminated without too much harm.
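A sketch of this bit-assignment step, using the 0101 calibration values to set the ‘0’ and ‘1’ levels:

```python
import numpy as np

# Turn the 32 quasi-floating-point correlation values into bits.  The first four
# carry the 0101 calibration pattern: the mean of values 1 and 3 defines "0", the
# mean of values 2 and 4 defines "1", and the remaining 28 values are assigned to
# whichever mean they are closer to.

def decode_bits(correlations):
    c = np.asarray(correlations, dtype=float)
    zero_level = (c[0] + c[2]) / 2.0
    one_level = (c[1] + c[3]) / 2.0
    bits = [0, 1, 0, 1]                 # calibration bits, known by construction
    for v in c[4:]:
        bits.append(1 if abs(v - one_level) < abs(v - zero_level) else 0)
    return bits

# recovered = decode_bits(values)
# match = (recovered == word)           # compare against the stored identification word
```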

Benefits

Now that a full description of the first embodiment has been described via a detailed example, it is appropriate to point out the rationale of some of the process steps and their benefits.

The ultimate benefit of the foregoing process is that obtaining an identification number is fully independent of the manner and method of preparing the difference image. That is to say, the manner of preparing the difference image, such as cutting, registering, scaling, etcetera, cannot increase the odds of finding an identification number when none exists; it can only help the signal-to-noise ratio of the identification process when a true identification number is present. Methods of preparing images for identification can even differ from each other, providing the possibility of multiple independent methodologies for making a match.

The ability to obtain a match even on sub-sets of the original signal or image is a key point in today's information-rich world. Cutting and pasting both images and sound clips is becoming more common, and this embodiment can detect a copy even when the original material has been thus corrupted. Finally, matching should begin to become difficult, in signal to noise terms, only when the copy material itself has been significantly altered either by noise or by significant distortion; both of these will also affect that copy's commercial value, so that trying to thwart the system can only be done at the expense of a huge decrease in commercial value.

An early conception of this technology was the case where only a single “snowy image” or random signal was added to an original image, i.e. the case where N=1. “Decoding” this signal would involve a subsequent mathematical analysis using (generally statistical) algorithms to make a judgment on the presence or absence of this signal. The reason this approach was abandoned as the preferred embodiment was that there was an inherent gray area in the certainty of detecting the presence or absence of the signal. By moving onward to a multitude of bit planes, i.e. N>1, combined with simple pre-defined algorithms prescribing the manner of choosing between a “0” and a “1”, the certainty question moved from the realm of expert statistical analysis into the realm of guessing a random binary event such as a coin flip. This is seen as a powerful feature relative to the intuitive acceptance of this technology in both the courtroom and the marketplace. The analogy which summarizes the inventor's thoughts on this whole question is as follows: The search for a single identification signal amounts to calling a coin flip only once, and relying on arcane experts to make the call; whereas the N>1 embodiment relies on the broadly intuitive principle of correctly calling a coin flip N times in a row. The problem of “interpreting” the presence of a single signal is greatly exacerbated when images and sound clips get smaller and smaller in extent.

Another important reason that the N>1 case is preferred over the N=1 embodiment is that in the N=1 case, the manner in which a suspect image is prepared and manipulated has a direct bearing on the likelihood of making a positive identification. Thus, the manner with which an expert makes an identification determination becomes an integral part of that determination. The existence of a multitude of mathematical and statistical approaches to making this determination leaves open the possibility that some tests might make positive identifications while others might make negative determinations, inviting further arcane debate about the relative merits of the various identification approaches. The N>1 embodiment avoids this further gray area by presenting a method where no amount of pre-processing of a signal (other than pre-processing which surreptitiously uses knowledge of the private code signals) can increase the likelihood of “calling the coin flip N times in a row.”

The fullest expression of the present system will come when it becomes an industry standard and numerous independent groups set up their own means, or ‘in-house’ brands, of applying embedded identification numbers and of deciphering them. Identification by numerous independent groups will further enhance the ultimate objectivity of the method, thereby enhancing its appeal as an industry standard.

Use of True Polarity in Creating the Composite Embedded Code Signal

The foregoing discussion made use of the 0 and 1 formalism of binary technology to accomplish its ends. Specifically, the 0's and 1's of the N-bit identification word directly multiplied their corresponding individual embedded code signal to form the composite embedded code signal (step 8, FIG. 2). This approach certainly has its conceptual simplicity, but the multiplication of an embedded code signal by 0 along with the storage of that embedded code contains a kind of inefficiency.

It is preferred to maintain the formalism of the 0 and 1 nature of the N-bit identification word, but to have the 0's of the word induce a subtraction of their corresponding embedded code signal. Thus, in step 8 of FIG. 2, rather than only ‘adding’ the individual embedded code signals which correspond to a ‘1’ in the N-bit identification word, we will also ‘subtract’ the individual embedded code signals which correspond to a ‘0’ in the N-bit identification word.
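A sketch of forming the composite with true polarity follows, mapping each ‘0’ bit to a subtraction and each ‘1’ bit to an addition.

```python
import numpy as np

def composite_true_polarity(id_word, code_images):
    """True-polarity variant: code images for '1' bits are added and code images
    for '0' bits are subtracted, widening the energy separation between symbols."""
    acc = np.zeros(code_images.shape[1:], dtype=np.int16)
    for bit, img in zip(id_word, code_images):
        acc += (1 if bit else -1) * img.astype(np.int16)
    return acc
```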

At first glance this seems to add more apparent noise to the final composite signal. But it also increases the energy-wise separation of the 0's from the 1's, and thus the ‘gain’ which is applied in step 10, FIG. 2, can be correspondingly lower.

We can refer to this improvement as the use of true polarity. The main advantage of this improvement can largely be summarized as ‘informational efficiency.’

‘Perceptual Orthogonality’ of the Individual Embedded Code Signals

The foregoing discussion contemplates the use of generally random noise-like signals as the individual embedded code signals. This is perhaps the simplest form of signal to generate. However, there is a form of informational optimization which can be applied to the set of the individual embedded signals, which the applicant describes under the rubric ‘perceptual orthogonality.’ This term is loosely based on the mathematical concept of the orthogonality of vectors, with the current additional requirement that this orthogonality should maximize the signal energy of the identification information while maintaining it below some perceptibility threshold. Put another way, the embedded code signals need not necessarily be random in nature.

Use and Improvements of the First Embodiment in the Field of Emulsion-Based Photography

The foregoing discussion outlined techniques that are applicable to photographic materials. The following section explores the details of this area further and discloses certain improvements which lend themselves to a broad range of applications.

The first area to be discussed involves the pre-application or pre-exposing of a serial number onto traditional photographic products, such as negative film, print paper, transparencies, etc. In general, this is a way to embed a priori unique serial numbers (and by implication, ownership and tracking information) into photographic material. The serial numbers themselves would be a permanent part of the normally exposed picture, as opposed to being relegated to the margins or stamped on the back of a printed photograph, which all require separate locations and separate methods of copying. The ‘serial number’ as it is called here is generally synonymous with the N-bit identification word, only now we are using a more common industrial terminology.

In FIG. 2, step 11, the disclosure calls for the storage of the “original [image]” along with code images. Then in FIG. 3, step 9, it directs that the original be subtracted from the suspect image, thereby leaving the possible identification codes plus whatever noise and corruption has accumulated. Therefore, the previous disclosure made the tacit assumption that there exists an original without the composite embedded signals.

Now in the case of selling print paper and other duplication film products, this will still be the case, i.e., an “original” without the embedded codes will indeed exist and the basic methodology of the first embodiment can be employed. The original film serves perfectly well as an ‘unencoded original.’

However, in the case where pre-exposed negative film is used, the composite embedded signal pre-exists on the original film and thus there will never be an “original” separate from the pre-embedded signal. It is this latter case, therefore, which will be examined a bit more closely, along with observations on how to best use the principles discussed above (the former cases adhering to the previously outlined methods).

The clearest point of departure for the case of pre-numbered negative film, i.e. negative film which has had each and every frame pre-exposed with a very faint and unique composite embedded signal, comes at step 9 of FIG. 3 as previously noted. There are certainly other differences as well, but they are mostly logistical in nature, such as how and when to embed the signals on the film, how to store the code numbers and serial number, etc. Obviously the pre-exposing of film would involve a major change to the general mass production process of creating and packaging film.

FIG. 4 has a schematic outlining one potential post-hoc mechanism for pre-exposing film. ‘Post-hoc’ refers to applying a process after the full common manufacturing process of film has already taken place. Eventually, economies of scale may dictate placing this pre-exposing process directly into the chain of manufacturing film. Depicted in FIG. 4 is what is commonly known as a film writing system. The computer, 106, displays the composite signal produced in step 8, FIG. 2, on its phosphor screen. A given frame of film is then exposed by imaging this phosphor screen, where the exposure level is generally very faint, i.e. generally imperceptible. Clearly, the marketplace will set its own demands on how faint this should be, that is, the level of added ‘graininess’ as practitioners would put it. Each frame of film is sequentially exposed, where in general the composite image displayed on the CRT 102 is changed for each and every frame, thereby giving each frame of film a different serial number. The transfer lens 104 highlights the focal conjugate planes of a film frame and the CRT face.

Getting back to applying the principles of the foregoing embodiment in the case of pre-exposed negative film . . . At step 9, FIG. 3, if we were to subtract the “original” with its embedded code, we would obviously be “erasing” the code as well since the code is an integral part of the original. Fortunately, remedies do exist and identifications can still be made. However, it will be a challenge to artisans who refine this embodiment to have the signal to noise ratio of the identification process in the pre-exposed negative case approach the signal to noise ratio of the case where the un-encoded original exists.

A succinct definition of the problem is in order at this point. Given a suspect picture (signal), find the embedded identification code IF a code exists at all. The problem reduces to one of finding the amplitude of each and every individual embedded code signal within the suspect picture, not only within the context of noise and corruption as was previously explained, but now also within the context of the coupling between a captured image and the codes. ‘Coupling’ here refers to the idea that the captured image “randomly biases” the cross-correlation.

So, bearing in mind this additional item of signal coupling, the identification process now estimates the signal amplitude of each and every individual embedded code signal (as opposed to taking the cross-correlation result of step 12, FIG. 3). If our identification signal exists in the suspect picture, the amplitudes thus found will split into a polarity, with positive amplitudes being assigned a ‘1’ and negative amplitudes being assigned a ‘0.’

The new image is applied to the fast Fourier transform routine and a scale factor is eventually found which minimizes the integrated high frequency content of the new image. This general type of search operation with its minimization of a particular quantity is exceedingly common. The scale factor thus found is the sought-for “amplitude.” Refinements which are contemplated but not yet implemented are where the coupling of the higher derivatives of the acquired image and the embedded codes are estimated and removed from the calculated scale factor. In other words, certain bias effects from the coupling mentioned earlier are present and should be eventually accounted for and removed both through theoretical and empirical experimentation.

Use and Improvements in the Detection of Signal or Image Alteration

Apart from the basic need of identifying a signal or image as a whole, there is also a rather ubiquitous need to detect possible alterations to a signal or image. The following section describes how the foregoing embodiment, with certain modifications and improvements, can be used as a powerful tool in this area. The potential scenarios and applications of detecting alterations are innumerable.

To first summarize, assume that we have a given signal or image which has been positively identified using the basic methods outlined above. In other words, we know its N-bit identification word, its individual embedded code signals, and its composite embedded code. We can then fairly simply create a spatial map of the composite code's amplitude within our given signal or image. Furthermore, we can divide this amplitude map by the known composite code's spatial amplitude, giving a normalized map, i.e. a map which should fluctuate about some global mean value. By simple examination of this map, we can visually detect any areas which have been significantly altered wherein the value of the normalized amplitude dips below some statistically set threshold based purely on typical noise and corruption (error).

The details of implementing the creation of the amplitude map admit a variety of choices. One is to perform the same procedure which is used to determine the signal amplitude as described above, only now we step and repeat the multiplication of any given area of the signal/image with a Gaussian weight function centered about the area we are investigating.
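A sketch of one such implementation follows, using a Gaussian-weighted local estimate of the composite code's amplitude normalized by the code's own local energy; the window width and the threshold rule are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

# Alteration-detection map: a local, Gaussian-weighted estimate of the composite
# code's amplitude at each point of the (registered) image, normalized by the
# code's own local energy.  Regions whose normalized amplitude dips well below
# the global mean are flagged as likely alterations.

def amplitude_map(difference, composite, sigma=16.0):
    num = ndimage.gaussian_filter(difference * composite, sigma)
    den = ndimage.gaussian_filter(composite * composite, sigma)
    return num / np.maximum(den, 1e-9)

def altered_regions(amp_map, n_sigma=3.0):
    return amp_map < (amp_map.mean() - n_sigma * amp_map.std())
```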

Universal Versus Custom Codes

The disclosure thus far has outlined how each and every source signal has its own unique set of individual embedded code signals. This entails the storage of a significant amount of additional code information above and beyond the original, and many applications may merit some form of economizing.

One such approach to economizing is to have a given set of individual embedded code signals be common to a batch of source materials. For example, one thousand images can all utilize the same basic set of individual embedded code signals. The storage requirements of these codes then become a small fraction of the overall storage requirements of the source material.

Furthermore, some applications can utilize a universal set of individual embedded code signals, i.e., codes which remain the same for all instances of distributed material. This type of requirement would be seen by systems which wish to hide the N-bit identification word itself, yet have standardized equipment be able to read that word. This can be used in systems which make go/no go decisions at point-of-read locations. The potential drawback to this set-up is that the universal codes are more prone to be sleuthed or stolen; therefore they will not be as secure as the apparatus and methodology of the previously disclosed arrangement. Perhaps this is just the difference between ‘high security’ and ‘air-tight security,’ a distinction carrying little weight with the bulk of potential applications.

Use in Printing, Paper, Documents, Plastic Coated Identification Cards, and Other Material Where Global Embedded Codes Can Be Imprinted

The term ‘signal’ is often used narrowly to refer to digital data information, audio signals, images, etc. A broader interpretation of ‘signal,’ and the one more generally intended, includes any form of modulation of any material whatsoever. Thus, the micro-topology of a piece of common paper becomes a ‘signal’ (e.g. its height as a function of x-y coordinates). The reflective properties of a flat piece of plastic (as a function of space also) become a signal. The point is that photographic emulsions, audio signals, and digitized information are not the only types of signals capable of utilizing the principles described herein.

As a case in point, a machine very much resembling a braille printing machine can be designed so as to imprint unique ‘noise-like’ indentations as outlined above. These indentations can be applied with a pressure which is much smaller than is typically applied in creating braille, to the point where the patterns are not noticed by a normal user of the paper. But by following the steps of the present disclosure and applying them via the mechanism of micro-indentations, a unique identification code can be placed onto any given sheet of paper, be it intended for everyday stationery purposes, or be it for important documents, legal tender, or other secured material.

The reading of the identification material in such an embodiment generally proceeds by merely reading the document optically at a variety of angles. This would become an inexpensive method for deducing the micro-topology of the paper surface. Certainly other forms of reading the topology of the paper are possible as well.

In the case of plastic encased material such as identification cards, e.g. driver's licenses, a similar braille-like impressions machine can be utilized to imprint unique identification codes. Subtle layers of photoreactive materials can also be embedded inside the plastic and ‘exposed.’

It is clear that wherever a material exists which is capable of being modulated by ‘noise-like’ signals, that material is an appropriate carrier for unique identification codes and utilization of the principles disclosed herein. All that remains is the matter of economically applying the identification information and maintaining the signal level below an acceptability threshold which each and every application will define for itself.

While the first class of embodiments most commonly employs a standard microprocessor or computer to perform the encodation of an image or signal, it is possible to utilize a custom encodation device which may be faster than a typical von Neumann-type processor. Such a system can be utilized with all manner of serial data streams.

Music and videotape recordings are examples of serial data streams—data streams which are often pirated. It would assist enforcement efforts if authorized recordings were encoded with identification data so that pirated knock-offs could be traced to the original from which they were made.

Piracy is but one concern driving the need for applicant's technology. Another is authentication. Often it is important to confirm that a given set of data is really what it is purported to be (often several years after its generation).

To address these and other needs, the system 200 of FIG. 5 can be employed. System 200 can be thought of as an identification coding black box 202. The system 200 receives an input signal (sometimes termed the “master” or “unencoded” signal) and a code word, and produces (generally in real time) an identification-coded output signal. (Usually, the system provides key data for use in later decoding.)

The contents of the “black box” 202 can take various forms. An exemplary black box system is shown in FIG. 6 and includes a look-up table 204, a digital noise source 206, first and second scalers 208, 210, an adder/subtracter 212, a memory 214, and a register 216.

The input signal (which in the illustrated embodiment is an 8-20 bit data signal provided at a rate of one million samples per second, but which in other embodiments could be an analog signal if appropriate A/D and D/A conversion is provided) is applied from an input 218 to the address input 220 of the look-up table 204. For each input sample (i.e. look-up table address), the table provides a corresponding 8-bit digital output word. This output word is used as a scaling factor that is applied to one input of the first scaler 208.

The first scaler 208 has a second input, to which is applied an 8-bit digital noise signal from source 206. (In the illustrated embodiment, the noise source 206 comprises an analog noise source 222 and an analog-to-digital converter 224 although, again, other implementations can be used.) The noise source in the illustrated embodiment has a zero mean output value, with a full width half maximum (FWHM) of 50-100 digital numbers (e.g. from −75 to +75).

The first scaler 208 multiplies the two 8-bit words at its inputs (scale factor and noise) to produce—for each sample of the system input signal—a 16-bit output word. Since the noise signal has a zero mean value, the output of the first scaler likewise has a zero mean value.

The output of the first scaler 208 is applied to the input of the second scaler 210. The second scaler serves a global scaling function, establishing the absolute magnitude of the identification signal that will ultimately be embedded into the input data signal. The scaling factor is set through a scale control device 226 (which may take a number of forms, from a simple rheostat to a graphically implemented control in a graphical user interface), permitting this factor to be changed in accordance with the requirements of different applications. The second scaler 210 provides on its output line 228 a scaled noise signal. Each sample of this scaled noise signal is successively stored in the memory 214.

(In the illustrated embodiment, the output from the first scaler 208 may range between −1500 and +1500 (decimal), while the output from the second scaler 210 is in the low single digits (such as between −2 and +2).)

Register 216 stores a multi-bit identification code word. In the illustrated embodiment this code word consists of 8 bits, although larger code words (up to hundreds of bits) are commonly used. These bits are referenced, one at a time, to control how the input signal is modulated with the scaled noise signal.

In particular, a pointer 230 is cycled sequentially through the bit positions of the code word in register 216 to provide a control bit of “0” or “1” to a control input 232 of the adder/subtracter 212. If, for a particular input signal sample, the control bit is a “1”, the scaled noise signal sample on line 228 is added to the input signal sample; if the control bit is a “0”, the scaled noise signal sample is subtracted from the input signal sample.
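A software sketch of this signal path follows. The look-up table contents, noise standard deviation, and gain value are illustrative placeholders; the sketch simply mirrors the scale-noise-scale-add/subtract flow of FIG. 6.

```python
import numpy as np

# Software sketch of the FIG. 6 "black box": per-sample scale factor from a
# look-up table, a zero-mean noise source, a global gain, and an adder/subtracter
# steered by the bits of the identification code word.

def encode_stream(samples, code_word, lookup, global_gain, rng=np.random.default_rng(0)):
    encoded = np.empty_like(samples, dtype=float)
    key = np.empty_like(samples, dtype=float)        # memory 214: scaled noise kept for decoding
    for i, sample in enumerate(samples):
        noise = rng.normal(0.0, 30.0)                # noise source 206 (zero mean)
        scaled = noise * lookup[int(sample)]         # first scaler 208 (LUT indexed by sample value)
        scaled *= global_gain                        # second scaler 210 (global gain 226)
        key[i] = scaled
        bit = code_word[i % len(code_word)]          # pointer 230 cycles through the code word
        encoded[i] = sample + scaled if bit else sample - scaled   # adder/subtracter 212
    return encoded, key

# example: 8-bit samples, unity look-up table, small global gain
# out, key = encode_stream(np.random.randint(0, 256, 1000),
#                          [1, 0, 1, 1, 0, 1, 0, 1], np.ones(256), 0.02)
```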
where “=\=” means “is not equal to.” This is somewhat an abstract notion to introduce at this point in the disclosure and will become more clear as FIG. 15 is discussed. The general idea, however, is that there will be a variety of algebras that can be used to optimize the pass-through of “invisible” signatures through compression procedures. Clearly, the same principles as depicted in FIG. 15 also work on still images and the JPEG or any other still image compression standard.

Turning now to the details of FIG. 15, we begin with the simple stepping through of all Z frames of a movie or video. For a two hour movie played at 30 frames per second, Z turns out to be (30*2*60*60) or 216,000. The inner loop of 700, 702 and 704 merely mimics FIG. 13's steps. The logo frame optionally can change during the stepping through frames. The two arrows emanating from the box 704 represent both the continuation of the loop 750 and the depositing of output frames into the rendered master Snowy Image 752.

To take a brief but potentially appropriate digression at this point, the use of the concept of a Markov process brings certain clarity to the discussion of optimizing the engineering implementation of the methods of FIG. 15. Briefly, a Markov process is one in which a sequence of events takes place and in general there is no memory between one step in the sequence and the next. In the context of FIG. 15 and a sequence of images, a Markovian sequence of images would be one in which there is no apparent or appreciable correlation between a given frame and the next. Imagine taking the set of all movies ever produced, stepping one frame at a time and selecting a random frame from a random movie to be inserted into an output movie, and then stepping through, say, one minute or 1800 of these frames. The resulting “movie” would be a fine example of a Markovian movie. One point of this discussion is that depending on how the logo frames are rendered and depending on how the encryption/scrambling step 702 is performed, the Master Snowy Movie 752 will exhibit some generally quantifiable degree of Markovian characteristics. The point here is that the compression procedure itself will be affected by this degree of Markovian nature and thus needs to be accounted for in designing the process of FIG. 15. Likewise, and only in general, even if a fully Markovian movie is created in the High Brightness Master Snowy Movie, 752, then the processing of compressing and decompressing that movie 752, represented as the MPEG box 754, will break down some of the Markovian nature of 752 and create at least a marginally non-Markovian compressed master Snowy Movie 756. This point will be utilized when the disclosure briefly discusses the idea of using multiple frames of a video stream in order to find a single N-bit identification word, that is, the same N-bit identification word may be embedded into several frames of a movie, and it is quite reasonable to use the information derived from those multiple frames to find that single N-bit identification word. The non-Markovian nature of 756 thus adds certain tools to reading and recognizing the invisible signatures. Enough of this tangent.

With the intent of preconditioning the ultimately utilized Master Snowy Movie 756, we now send the rendered High Brightness Master Snowy Movie 752 through both the MPEG compression AND decompression procedure 754. With the caveat previously discussed where it is acknowledged that the MPEG compression process is generally not distributive, the idea of the step 754 is to crudely segregate the initially rendered Snowy Movie 752 into two components, the component which survives the compression process 754, which is 756, and the component which does not survive, also crudely estimated using the difference operation 758 to produce the “Cheap Master Snowy Movie” 760. The reason use is made of the deliberately loose term “Cheap” is that we can later add this signature signal as well to a distributable movie, knowing that it probably won't survive common compression processes but that nevertheless it can provide “cheap” extra signature signal energy for applications or situations which will never experience compression. [Thus it is at least noted in FIG. 15]. Back to FIG. 15 proper, we now have a rough cut at signatures which we know have a higher likelihood of surviving intact through the compression process, and we use this “Compressed Master Snowy Movie” 756 to then go through this procedure of being scaled down 764, added to the original movie 766, producing a candidate distributable movie 770, then compared to the original movie (768) to ensure that it meets whatever commercially viable criteria have been set up (i.e. the acceptable perceived noise level). The arrow from the side-by-side step 768 back to the scale down step 764 corresponds quite directly to the “experiment visually . . . ” step of FIG. 2, and the gain control 226 of FIG. 6. Those practiced in the art of image and audio information theory can recognize that the whole of FIG. 15 can be summarized as attempting to pre-condition the invisible signature signals in such a way that they are better able to withstand even quite appreciable compression. To reiterate a previously mentioned item as well, this idea equally applies to ANY such pre-identifiable process to which an image, an image sequence, or an audio track might be subjected. This clearly includes the JPEG process on still images.
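A sketch of the preconditioning idea follows, using a JPEG round trip (via Pillow) on a still snowy image as a stand-in for the MPEG compress/decompress step 754; as noted above, the same principle applies to JPEG on still images. The quality setting is an illustrative assumption.

```python
import io
import numpy as np
from PIL import Image

# Segregate a snowy image into the component that survives a lossy round trip
# (the preferred signature) and the remainder (the "cheap" extra signature energy).

def jpeg_round_trip(img_u8, quality=75):
    buf = io.BytesIO()
    Image.fromarray(img_u8, mode="L").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf), dtype=np.uint8)

def precondition(snowy_u8):
    survived = jpeg_round_trip(snowy_u8).astype(np.int16)   # "compressed master snowy" component
    cheap = snowy_u8.astype(np.int16) - survived            # component that did not survive
    return survived, cheap
```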

Additional Elements of the Realtime Encoder Circuitry

It should be noted that the method steps represented in FIG. 15, generally following from box 750 up through the creation of the compressed master snowy movie 756, could with certain modification be implemented in hardware. In particular, the overall analog noise source 206 in FIG. 6 could be replaced by such a hardware circuit. Likewise the steps and associated procedures depicted in FIG. 13 could be implemented in hardware and replace the analog noise source 206.

Recognition Based on More Than One Frame: Non-Markovian Signatures

As noted in the digression on Markov and non-Markov sequences of images, where the embedded invisible signature signals are non-Markovian in nature (i.e., there is some correlation between the master snowy image of one frame and that of the next), and where, furthermore, a single N-bit identification word is used across a range of frames and the sequence of N-bit identification words associated with those frames is itself not Markovian, it is possible to utilize the data from several frames of a movie or video in order to recognize a single N-bit identification word. All of this is a fancy way of saying that the process of recognizing the invisible signatures should use as much information as is available, in this case multiple frames of a motion image sequence.

The concept of the “header” on a digital image or audio file is a well established practice in the art. The top of FIG. 16 has a simplified look at the concept of the header, wherein a data file begins with a generally comprehensive set of information about the file as a whole, often including information about who the author or copyright holder of the data is, if there is a copyright holder at all. This header 800 is then typically followed by the data itself 802, such as an audio stream.

Appendix B

A software version of a steganographic marking/decoding “plug-in” for use with Adobe Photoshop software, presented as commented source code, is included in the file of this patent on a compact disc in a file named Appendix B.txt, which is incorporated by reference. The code was written for compilation with Microsoft's Visual C++ compiler, version 4.0, and can be understood by those skilled in the art.

If marking of images becomes widespread (e.g. by software compatible with Adobe's image processing software), a user of such software can decode the embedded data from an image and consult a public registry to identify the proprietor of the image. In some embodiments, the registry can serve as the conduit through which appropriate royalty payments are forwarded to the proprietor for the user's use of an image. (In an illustrative embodiment, the registry is a server on the Internet, accessible via the World Wide Web, coupled to a database. The database includes detailed information on catalogued images (e.g. name, address, phone number of proprietor, and a schedule of charges for different types of uses to which the image may be put), indexed by identification codes with which the images themselves are encoded. A person who decodes an image queries the registry with the codes thereby gleaned to obtain the desired data and, if appropriate, to forward electronic payment of a copyright royalty to the image's proprietor.)
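A client-side sketch of such a registry query follows; the URL, query parameter, and response fields are entirely hypothetical, since an actual registry would publish its own interface.

```python
import json
import urllib.request

# Hypothetical registry lookup: submit the identification code decoded from an
# image and retrieve the proprietor record and fee schedule.  The endpoint and
# field names below are illustrative placeholders, not a real service.

def lookup_proprietor(id_code, registry="https://registry.example.com/lookup"):
    with urllib.request.urlopen(f"{registry}?code={id_code}") as resp:
        record = json.load(resp)
    return record.get("proprietor"), record.get("fee_schedule")
```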

Particular Data Formats

While the foregoing steganography techniques are broadly applicable, their commercial acceptance will be aided by establishment of standards setting forth which pixels/bit cells represent what. The following discussion proposes one set of possible standards. For expository convenience, this discussion focuses on decoding of the data; encoding follows in a straightforward manner.

Referring to FIG. 42, an image 1202 includes a plurality of tiled “signature blocks” 1204. (Partial signature blocks may be present at the image edges.) Each signature block 1204 includes an 8×8 array of sub-blocks 1206. Each sub-block 1206 includes an 8×8 array of bit cells 1208. Each bit cell comprises a 2×2 array of “bumps” 1210. Each bump 1210, in turn, comprises a square grouping of 16 individual pixels 1212.

The individual pixels 1212 are the smallest quanta of image data. In this arrangement, however, pixel values are not, individually, the data carrying elements. Instead, this role is served by bit cells 1208 (i.e. 2×2 arrays of bumps 1210). In particular, the bumps comprising the bit cells are encoded to assume one of the two patterns shown in FIG. 41. As noted earlier, the pattern shown in FIG. 41A represents a “0” bit, while the pattern shown in FIG. 41B represents a “1” bit. Each bit cell 1208 (64 image pixels) thus represents a single bit of the embedded data. Each sub-block 1206 includes 64 bit cells, and thus conveys 64 bits of embedded data.

(The nature of the image changes effected by the encoding follows the techniques set forth above under the heading MORE ON PERCEPTUALLY ADAPTIVE SIGNING; that discussion is not repeated here.)

In the illustrated embodiment, the embedded data includes two parts: control bits and message bits. The 16 bit cells 1208A in the center of each sub-block 1206 serve to convey 16 control bits. The surrounding 48 bit cells 1208B serve to convey 48 message bits. This 64-bit chunk of data is encoded in each of the sub-blocks 1206, and is repeated 64 times in each signature block 1204.
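A sketch of this geometry follows, assuming the 16 control bit cells occupy the central 4×4 region of the 8×8 cell grid (the exact placement of the center cells is an assumption); the pixel dimensions are those given in the text (4-pixel bumps, 8-pixel bit cells, 64-pixel sub-blocks, 512-pixel signature blocks).

```python
import numpy as np

# Layout of one 8x8-cell sub-block: the central 4x4 region carries the 16 control
# bits and the surrounding 48 cells carry message bits.

BUMP, CELL, SUBBLOCK, SIGBLOCK = 4, 8, 64, 512    # edge lengths in pixels

def cell_roles():
    roles = np.full((8, 8), "message", dtype=object)
    roles[2:6, 2:6] = "control"                   # central 4x4 = 16 control cells
    return roles

roles = cell_roles()
assert (roles == "control").sum() == 16 and (roles == "message").sum() == 48
```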

A digression: in addition to encoding of the image to redundantly embed the 64 control/message bits therein, the values of individual pixels are additionally adjusted to effect encoding of subliminal graticules through the image. In this embodiment, the graticules discussed in conjunction with FIG. 29A are used, resulting in an imperceptible texturing of the image. When the image is to be decoded, the image is transformed into the spatial frequency domain, the Fourier-Mellin technique is applied to match the graticule energy points with their expected positions, and the processed data is then inverse-transformed, providing a registered image ready for decoding. (The sequence of first tweaking the image to effect encoding of the subliminal graticules, or first tweaking the image to effect encoding of the embedded data, is not believed to be critical. As presently practiced, the local gain factors (discussed above) are computed; then the data is encoded; then the subliminal graticule encoding is performed. (Both of these encoding steps make use of the local gain factors.))

Returning to the data format, once the encoded image has been thus registered, the locations of the control bits in sub-block 1206 are known. The image is then analyzed, in the aggregate (i.e. considering the “northwestern-most” sub-block 1206 from each signature block 1204), to determine the value of control bit #1 (represented in sub-block 1206 by bit cell 1208Aa). If this value is determined (e.g. by statistical techniques of the sort detailed above) to be a “1,” this indicates that the format of the embedded data conforms to the standard detailed herein (the Digimarc Beta Data Format).

According to this standard, control bit #2 (represented by bit cells 1208Ab) is a flag indicating whether the image is copyrighted. Control bit #3 (represented by bit cells 1208Ac) is a flag indicating whether the image is unsuitable for viewing by children. Certain of the remaining bits are used for error detection/correction purposes.

The 48 message bits of each sub-block 1206 can be put to any use; they are not specified in this format. One possible use is to define a numeric “owner” field and a numeric “image/item” field (e.g. 24 bits each).

If this data format is used, each sub-block 1206 contains the entire control/message data, so same is repeated 64 times within each signature block of the image. If control bit #1 is not a “1,” then the format of the embedded data does not conform to the above described standard. In this case, the reading software analyzes the image data to determine the value of control bit #4. If this bit is set (i.e. equal to “1”), this signifies an embedded ASCII message.

The reading software then examines control bits #5 and #6 to determine the length of the embedded ASCII message.

If control bits #5 and #6 both are “0,” this indicates the ASCII message is 6 characters in length. In this case, the 48 bit cells 1208B surrounding the control bits 1208A are interpreted as six ASCII characters (8 bits each). Again, each sub-block 1206 contains the entire control/message data, so same is repeated 64 times within each signature block 1204 of the image.

If control bit #5 is “0” and control bit #6 is “1,” this indicates the embedded ASCII message is 14 characters in length. In this case, the 48 bit cells 1208B surrounding the control bits 1208A are interpreted as the first six ASCII characters. The 64 bit cells 1208 of the immediately-adjoining sub-block 1220 are interpreted as the final eight ASCII characters.

Note that in this arrangement, the bit cells 1208 in the center of sub-block 1220 are not interpreted as control bits. Instead, the entire sub-block serves to convey additional message bits. In this case, there is just one group of control bits for two sub-blocks.

Also note that in this arrangement, each pair of sub-blocks 1206 contains the entire control/message data, so same is repeated 32 times within each signature block 1204 of the image.

Likewise, if control bit #5 is “1” and control bit #6 is “0,” this indicates the embedded ASCII message is 30 characters in length. In this case, 2×2 arrays of sub-blocks are used for each representation of the data. The 48 bit cells 1208B surrounding control bits 1208A are interpreted as the first six ASCII characters. The 64 bit cells of adjoining sub-block 1220 are interpreted as representing the next eight characters. The 64 bit cells of sub-block 1222 are interpreted as representing the next eight characters. And the 64 bit cells of sub-block 1224 are interpreted as representing the final eight characters. In this case, there is just one group of control bits for four sub-blocks. And the control/message data is repeated 16 times within each signature block 1204 of the image.

If control bits #5 and #6 are both “1”s, this indicates an ASCII message of programmable length. In this case, the reading software examines the first 16 bit cells 1208B surrounding the control bits. Instead of interpreting these bit cells as message bits, they are interpreted as additional control bits (the opposite of the case described above, where bit cells normally used to represent control bits represented message bits instead). In particular, the reading software interprets these 16 bits as representing, in binary, the length of the ASCII message. An algorithm is then applied to this data (matching a similar algorithm used during the encoding process) to establish a corresponding tiling pattern (i.e. to specify which sub-blocks convey which bits of the ASCII message, and which convey control bits).

In this programmable-length ASCII message case, control bits are desirably repeated several times within a single representation of the message so that, e.g., there is one set of control bits for approximately every 24 ASCII characters. To increase packing efficiency, the tiling algorithm can allocate (divide) a sub-block so that some of its bit cells are used for a first representation of the message, and others are used for another representation of the message.
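
By way of illustration only, the foregoing decision tree might be sketched as follows. The enumeration names are illustrative, and the programmable-length case is reduced to a single classification (the subsequent reading of the 16-bit length and the tiling algorithm are not shown).

    // Control bit #1 selects the Digimarc Beta Data Format; otherwise control
    // bit #4 flags an embedded ASCII message whose length is selected by
    // control bits #5 and #6.
    enum class PayloadFormat { BetaFormat, Ascii6, Ascii14, Ascii30, AsciiProgrammable, None };

    PayloadFormat classifyFormat(bool bit1, bool bit4, bool bit5, bool bit6) {
      if (bit1)           return PayloadFormat::BetaFormat;    // 16 control + 48 message bits per sub-block
      if (!bit4)          return PayloadFormat::None;          // no embedded ASCII message
      if (!bit5 && !bit6) return PayloadFormat::Ascii6;        // one sub-block per copy (64 copies)
      if (!bit5 &&  bit6) return PayloadFormat::Ascii14;       // pair of sub-blocks per copy (32 copies)
      if ( bit5 && !bit6) return PayloadFormat::Ascii30;       // 2 x 2 array of sub-blocks per copy (16 copies)
      return PayloadFormat::AsciiProgrammable;                 // length carried in 16 further bit cells
    }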

Reference was earlier made to beginning the decoding of the registered image by considering the “northwestern-most” sub-block 1206 in each signature block 1204. This bears elaboration.

Depending on the data format used, some of the sub-blocks 1206 in each signature block 1204 may not include control bits. Accordingly, the decoding software desirably determines the data format by first examining the “northwestern-most” sub-block 1206 in each signature block 1204; the 16 bit cells in the centers of these sub-blocks will reliably represent control bits. Based on the value(s) of one or more of these bits (e.g. the Digimarc Beta Data Format bit), the decoding software can identify all other locations throughout each signature block 1204 where the control bits are also encoded (e.g. at the center of each of the 64 sub-blocks 1206 comprising a signature block 1204), and can use the larger statistical base of data thereby provided to extract the remaining control bits from the image (and to confirm, if desired, the earlier control bit(s) determination). After all control bits have thereby been discerned, the decoding software determines (from the control bits) the mapping of message bits to bit cells throughout the image.

To reduce the likelihood of visual artifacts, the numbering of bit cells within sub-blocks is alternated in a checkerboard-like fashion. That is, the “northwestern-most” bit cell in the “northwestern-most” sub-block is numbered “0.” Numbering increases left to right, and successively through the rows, up to bit cell 63. Each sub-block diametrically adjoining one of its corners (i.e. sub-block 1224) has the same ordering of bit cells. But sub-blocks adjoining its edges (i.e. sub-blocks 1220 and 1222) have the opposite numbering. That is, the “northwestern-most” bit cell in sub-blocks 1220 and 1222 is numbered “63.” Numbering decreases left to right, and successively through the rows, down to 0. Likewise throughout each signature block 1204.
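
By way of illustration only, the checkerboard numbering might be sketched as follows. The row/column indexing convention is an assumption consistent with the description: sub-blocks whose row and column indices sum to an even number use raster order 0..63, edge-adjacent sub-blocks use the reversed order, and diagonally adjacent sub-blocks therefore share the same ordering.

    // Returns the bit-cell number for cell (cellRow, cellCol) of the sub-block
    // at (subBlockRow, subBlockCol) within a signature block.
    int bitCellNumber(int subBlockRow, int subBlockCol, int cellRow, int cellCol) {
      const int raster = cellRow * 8 + cellCol;                     // 0..63 within the 8 x 8 sub-block
      const bool reversed = ((subBlockRow + subBlockCol) % 2) != 0; // edge-adjacent sub-blocks reverse
      return reversed ? 63 - raster : raster;
    }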

In a variant of the Digimarc beta format, a pair of sub-blocks is used for each representation of the data, providing 128 bit cells. The center 16 bit cells 1208 in the first sub-block 1206 are used to represent control bits. The 48 remaining bit cells in that sub-block, together with all 64 bit cells 1208 in the adjoining sub-block 1220, are used to provide a 112-bit message field. Likewise for every pair of sub-blocks throughout each signature block 1204. In such an arrangement, each signature block 1204 thus includes 32 complete representations of the encoded data (as opposed to 64 representations in the earlier-described standard). This additional length allows encoding of longer data strings, such as a numeric IP address (e.g. URL).

Obviously, numerous alternative data formats can be designed. The particular format used can be indicated to the decoding software by values of one or more control bits in the encoded image.

In the Appendix B software, the program SIGN_PUBLIC.CPP effects encoding of an image using a signature block/sub-block/bit cell arrangement like that detailed above. As of this writing, the corresponding decoding software has not yet been written, but its operation is straightforward given the foregoing discussion and the details in the SIGN_PUBLIC.CPP software.

Other Applications

Before concluding, it may be instructive to review some of the other fields where principles of applicant's technology can be employed.

One is smart business cards, wherein a business card is provided with a photograph having unobtrusive, machine-readable contact data embedded therein. (The same function can be achieved by changing the surface microtopology of the card to embed the data therein.)

Another promising application is in content regulation. Television signals, images on the internet, and other content sources (audio, image, video, etc.) can have data indicating their “appropriateness” (i.e. their rating for sex, violence, suitability for children, etc.) actually embedded in the content itself rather than externally associated therewith. Television receivers, internet surfing software, etc., can discern such appropriateness ratings (e.g. by use of universal code decoding) and can take appropriate action (e.g. not permitting viewing of an image or video, or play-back of an audio source).

In a simple embodiment of the foregoing, the embedded data can have one or more “flag” bits, as discussed earlier. One flag bit can signify “inappropriate for children.” (Others can be, e.g., “this image is copyrighted” or “this image is in the public domain.”) Such flag bits can be in a field of control bits distinct from an embedded message, or can—themselves—be the message. By examining the state of these flag bits, the decoder software can quickly apprise the user of various attributes of the image.
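
By way of illustration only, a decoder's test of such flag bits might be sketched as follows. The mapping of control bits #2 and #3 to particular positions within a 16-bit control word is an assumption.

    #include <cstdint>

    // Test individual flag bits in a decoded 16-bit control field.
    bool isCopyrighted(std::uint16_t controlBits)           { return (controlBits >> 1) & 1; }  // control bit #2
    bool isUnsuitableForChildren(std::uint16_t controlBits) { return (controlBits >> 2) & 1; }  // control bit #3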

(As discussed earlier, control bits can be encoded in known locations in the image—known relative to the subliminal graticules—and can indicate the format of the embedded data (e.g. its length, its type, etc.) As such, these control bits are analogous to data sometimes conveyed in prior art file headers, but in this case they are embedded within an image, instead of prepended to a file.)

The field of merchandise marking is generally well served by familiar bar codes and universal product codes. However, in certain applications, such bar codes are undesirable (e.g. for aesthetic considerations, or where security is a concern). In such applications, applicant's technology may be used to mark merchandise, either through an innocuous carrier (e.g. a photograph associated with the product), or by encoding the microtopology of the merchandise's surface, or a label thereon.

There are applications—too numerous to detail—in which steganography can advantageously be combined with encryption and/or digital signature technology to provide enhanced security.

Medical records appear to be an area in which authentication is important. Steganographic principles—applied either to film-based records or to the microtopology of documents—can be employed to provide some protection against tampering.

Many industries, e.g. automobile and airline, rely on tags to mark critical parts. Such tags, however, are easily removed, and can often be counterfeited. In applications wherein better security is desired, industrial parts can be steganographically marked to provide an inconspicuous identification/authentication tag.

In various of the applications reviewed in this specification, different messages can be steganographically conveyed by different regions of an image (e.g. different regions of an image can provide different internet URLs, or different regions of a photocollage can identify different photographers). Likewise with other media (e.g. sound).

Some software visionaries look to the day when data blobs will roam the datawaves and interact with other data blobs. In such an era, it will be necessary for such blobs to have robust and incorruptible ways of identifying themselves. Steganographic techniques again hold much promise here.

Finally, message changing codes—recursive systems in which steganographically encoded messages actually change underlying steganographic code patterns—offer new levels of sophistication and security. Such message changing codes are particularly well suited to applications such as plastic cash cards where time-changing elements are important to enhance security.

Again, while applicant prefers the particular forms of steganographic encoding detailed above, the diverse applications disclosed in this specification can largely be practiced with other steganographic marking techniques, many of which are known in the prior art. Likewise, while the specification has focused on applications of this technology to images, the principles thereof are equally applicable to embedding such information in audio, physical media, or any other carrier of information.

Finally, while the specification has been illustrated with particular embodiments, it will be recognized that elements, components and steps from these embodiments can be recombined in different arrangements to serve different needs and applications, as will be readily apparent to those of ordinary skill in the art.

In view of the wide variety of implementations and applications to which the principles of this technology can be put, it should be apparent that the detailed embodiments are illustrative only and in no way limit the scope of my invention. Instead, I claim as my invention all such embodiments as come within the scope and spirit of the following claims and equivalents thereto.

Rhoads, Geoffrey B.

Patent Priority Assignee Title
10235465, Jun 22 2004 Digimarc Corporation Internet and database searching with handheld devices
10304152, Mar 24 2000 Digimarc Corporation Decoding a watermark and processing in response thereto
10510000, Oct 26 2010 Intelligent control with hierarchical stacked neural networks
11514305, Oct 26 2010 Intelligent control with hierarchical stacked neural networks
11868883, Oct 26 2010 Intelligent control with hierarchical stacked neural networks
7650009, May 08 1995 DIGIMARC CORPORATION AN OREGON CORPORATION Controlling use of audio or image content
7697719, Nov 18 1993 DIGIMARC CORPORATION AN OREGON CORPORATION Methods for analyzing electronic media including video and audio
7805500, May 08 1995 DIGIMARC CORPORATION AN OREGON CORPORATION Network linking methods and apparatus
7945781, Nov 18 1993 DIGIMARC CORPORATION AN OREGON CORPORATION Method and systems for inserting watermarks in digital signals
7949147, Aug 26 1997 DIGIMARC CORPORATION AN OREGON CORPORATION Watermarking compressed data
7953270, Nov 12 1996 DIGIMARC CORPORATION AN OREGON CORPORATION Methods and arrangements employing digital content items
7953824, Aug 06 1998 DIGIMARC CORPORATION AN OREGON CORPORATION Image sensors worn or attached on humans for imagery identification
7957553, Apr 24 2001 DIGIMARC CORPORATION AN OREGON CORPORATION Digital watermarking apparatus and methods
7961949, May 08 1995 DIGIMARC CORPORATION AN OREGON CORPORATION Extracting multiple identifiers from audio and video content
7965863, Feb 19 2000 DIGIMARC CORPORATION AN OREGON CORPORATION Digital watermarks as a gateway and control mechanism
7970166, Apr 21 2000 DIGIMARC CORPORATION AN OREGON CORPORATION Steganographic encoding methods and apparatus
7970167, May 08 1995 DIGIMARC CORPORATION AN OREGON CORPORATION Deriving identifying data from video and audio
7974436, Dec 21 2000 DIGIMARC CORPORATION AN OREGON CORPORATION Methods, apparatus and programs for generating and utilizing content signatures
7978874, Oct 21 2002 DIGIMARC CORPORATION AN OREGON CORPORATION Digital watermarking for workflow by tracking content or content identifiers with respect to time
7983443, May 08 1995 DIGIMARC CORPORATION AN OREGON CORPORATION Methods for managing content using intentional degradation and insertion of steganographic codes
7986845, Jul 27 1995 DIGIMARC CORPORATION AN OREGON CORPORATION Steganographic systems and methods
7991182, Jan 20 1998 DIGIMARC CORPORATION AN OREGON CORPORATION Methods for steganographic encoding media
7992003, Nov 18 1993 DIGIMARC CORPORATION AN OREGON CORPORATION Methods and systems for inserting watermarks in digital signals
8000495, Jul 27 1995 DIGIMARC CORPORATION AN OREGON CORPORATION Digital watermarking systems and methods
8005254, Nov 12 1996 DIGIMARC CORPORATION AN OREGON CORPORATION Background watermark processing
8023691, Apr 24 2001 DIGIMARC CORPORATION AN OREGON CORPORATION Methods involving maps, imagery, video and steganography
8023695, Nov 18 1993 Digimarc Corporation Methods for analyzing electronic media including video and audio
8027507, Sep 25 1998 DIGIMARC CORPORATION AN OREGON CORPORATION Method and apparatus for embedding auxiliary information within original data
8027509, Apr 19 2000 Digimarc Corporation Digital watermarking in data representing color channels
8027510, Jan 13 2000 Digimarc Corporation Encoding and decoding media signals
8027520, Nov 12 1996 DIGIMARC CORPORATION AN OREGON CORPORATION Methods and arrangements employing digital content items
8036419, Apr 16 1998 DIGIMARC CORPORATION AN OREGON CORPORATION Digital watermarks
8036420, Dec 28 1999 Digimarc Corporation Substituting or replacing components in sound based on steganographic encoding
8045748, Mar 18 2000 DIGIMARC CORPORATION AN OREGON CORPORATION Watermark embedding functions adapted for transmission channels
8051169, Mar 18 2000 DIGIMARC CORPORATION AN OREGON CORPORATION Methods and systems useful in linking from objects to remote resources
8055014, Jun 01 2000 Digimarc Corporation Bi-directional image capture methods and apparatuses
8077911, Dec 21 2000 DIGIMARC CORPORATION AN OREGON CORPORATION Methods, apparatus and programs for generating and utilizing content signatures
8078697, May 08 1995 Digimarc Corporation Network linking methods and apparatus
8091025, Mar 24 2000 DIGIMARC CORPORATION AN OREGON CORPORATION Systems and methods for processing content objects
8094869, Jul 02 2001 DIGIMARC CORPORATION AN OREGON CORPORATION Fragile and emerging digital watermarks
8099403, Jul 20 2000 Digimarc Corporation Content identification and management in content distribution networks
8103053, Nov 04 1999 Digimarc Corporation Method and apparatus for associating identifiers with content
8103542, Jun 29 1999 DIGIMARC CORPORATION AN OREGON CORPORATION Digitally marked objects and promotional methods
8103879, Apr 25 1996 DIGIMARC CORPORATION AN OREGON CORPORATION Processing audio or video content with multiple watermark layers
8107674, Feb 04 2000 DIGIMARC CORPORATION AN OREGON CORPORATION Synchronizing rendering of multimedia content
8108484, May 19 1999 DIGIMARC CORPORATION AN OREGON CORPORATION Fingerprints and machine-readable codes combined with user characteristics to obtain content or information
8116516, May 08 1995 Digimarc Corporation Controlling use of audio or image content
8123134, Aug 31 2001 Digimarc Corporation Apparatus to analyze security features on objects
8126201, Sep 11 2000 DIGIMARC CORPORATION AN OREGON CORPORATION Watermark decoding from streaming media
8150032, May 08 1995 Digimarc Corporation Methods for controlling rendering of images and video
8155378, Feb 14 2000 Digimarc Corporation Color image or video processing
8160304, May 19 1999 DIGIMARC CORPORATION AN OREGON CORPORATION Interactive systems and methods employing wireless mobile devices
8165341, Apr 16 1998 Digimarc Corporation Methods and apparatus to process imagery or audio content
8165342, Feb 14 2000 Digimarc Corporation Color image or video processing
8180844, Mar 18 2000 DIGIMARC CORPORATION AN OREGON CORPORATION System for linking from objects to remote resources
8181884, Nov 17 2003 DIGIMARC CORPORATION AN OREGON CORPORATION Machine-readable features for objects
8184849, May 07 1996 Digimarc Corporation Error processing of steganographic message signals
8184851, Nov 18 1993 Digimarc Corporation Inserting watermarks into portions of digital signals
8194915, Feb 14 2000 DIGIMARC CORPORATION AN OREGON CORPORATION Wavelet domain watermarks
8230337, Oct 17 2000 DIGIMARC CORPORATION AN OREGON CORPORATION Associating objects with corresponding behaviors
8243980, Apr 25 1996 DIGIMARC CORPORATION AN OREGON CORPORATION Image processing using embedded registration data to determine and compensate for geometric transformation
8256665, May 19 1999 Digimarc Corporation Methods and systems for interacting with physical objects
8301453, Dec 21 2000 DIGIMARC CORPORATION AN OREGON CORPORATION Watermark synchronization signals conveying payload data
8312168, Mar 18 2000 DIGIMARC CORPORATION AN OREGON CORPORATION Methods for linking from objects to remote resources
8355525, Feb 14 2000 DIGIMARC CORPORATION AN OREGON CORPORATION Parallel processing of digital watermarking operations
8355526, Apr 16 1998 DIGIMARC CORPORATION AN OREGON CORPORATION Digitally watermarking holograms
8364966, Feb 20 1997 DIGIMARC CORPORATION AN OREGON CORPORATION Digital watermark systems and methods
8379908, Aug 06 1998 DIGIMARC CORPORATION AN OREGON CORPORATION Embedding and reading codes on objects
8391851, Nov 03 1999 DIGIMARC CORPORATION AN OREGON CORPORATION Gestural techniques with wireless mobile phone devices
8429205, Jul 27 1995 DIGIMARC CORPORATION AN OREGON CORPORATION Associating data with media signals in media signal systems through auxiliary data steganographically embedded in the media signals
8434073, Nov 03 2008 CA, INC Systems and methods for preventing exploitation of byte sequences that violate compiler-generated alignment
8447067, May 19 1999 Digimarc Corporation Location-based arrangements employing mobile devices
8457346, Apr 24 2001 DIGIMARC CORPORATION AN OREGON CORPORATION Digital watermarking image signals on-chip
8457449, May 19 1999 Digimarc Corporation Wireless mobile phone methods
8483426, May 07 1996 Digimarc Corporation Digital watermarks
8489598, May 19 1999 DIGIMARC CORPORATION AN OREGON CORPORATION Methods and devices employing content identifiers
8520900, May 19 1999 Digimarc Corporation Methods and devices involving imagery and gestures
8528103, Oct 01 1998 Digimarc Corporation System for managing display and retrieval of image content on a network with image identification and linking to network content
8538064, May 19 1999 Digimarc Corporation Methods and devices employing content identifiers
8542870, Dec 21 2000 Digimarc Corporation Methods, apparatus and programs for generating and utilizing content signatures
8543661, May 19 1999 Digimarc Corporation Fingerprints and machine-readable codes combined with user characteristics to obtain content or information
8543823, Apr 30 2001 Digimarc Corporation Digital watermarking for identification documents
8565473, Feb 04 2004 DIGIMARC CORPORATION AN OREGON CORPORATION Noise influenced watermarking methods and apparatus
8607354, Apr 20 2001 DIGIMARC CORPORATION AN OREGON CORPORATION Deriving multiple fingerprints from audio or video content
8611589, Sep 25 1998 Digimarc Corporation Method and apparatus for embedding auxiliary information within original data
8615471, May 02 2001 DIGIMARC CORPORATION AN OREGON CORPORATION Methods and related toy and game applications using encoded information
8644548, Apr 16 1998 Digimarc Corporation Digital watermarks
8645838, Oct 01 1998 DIGIMARC CORPORATION AN OREGON CORPORATION Method for enhancing content using persistent content identification
8745742, Nov 03 2008 CA, INC Methods and systems for processing web content encoded with malicious code
8792675, Feb 14 2000 Digimarc Corporation Color image or video processing
8825518, Dec 21 2000 DIGIMARC CORPORATION AN OREGON CORPORATION Media methods and systems
8953908, Jun 22 2004 DIGIMARC CORPORATION AN OREGON CORPORATION Metadata management and generation using perceptual features
8976998, Apr 24 2001 Digimarc Corporation Methods involving maps, imagery, video and steganography
9053431, Oct 26 2010 Intelligent control with hierarchical stacked neural networks
9058388, Jun 22 2004 DIGIMARC CORPORATION AN OREGON CORPORATION Internet and database searching with handheld devices
9179033, Apr 19 2000 Digimarc Corporation Digital watermarking in data representing color channels
9275053, Mar 24 2000 Digimarc Corporation Decoding a watermark and processing in response thereto
9497341, May 19 1999 DIGIMARC CORPORATION AN OREGON CORPORATION Methods and systems for user-association of visual stimuli with corresponding responses
9792661, Apr 24 2001 Digimarc Corporation Methods involving maps, imagery, video and steganography
9843846, Dec 21 2000 DIGIMARC CORPORATION AN OREGON CORPORATION Watermark and fingerprint systems for media
9875440, Oct 26 2010 Intelligent control with hierarchical stacked neural networks
9940685, Apr 19 2000 Digimarc Corporation Digital watermarking in data representing color channels
Patent Priority Assignee Title
3493674,
3569619,
3585290,
3655162,
3703628,
3805238,
3809806,
3838444,
3914877,
3922074,
3971917, Aug 27 1971 Labels and label readers
3982064, Sep 04 1973 The General Electric Company Limited Combined television/data transmission system
3984624, Jul 25 1974 Weston Instruments, Inc. Video system for conveying digital and analog information
4025851, Nov 28 1975 A.C. Nielsen Company Automatic monitor for programs broadcast
4225967, Jan 09 1978 Fujitsu Limited Broadcast acknowledgement method and system
4230990, Mar 16 1979 JOHN G LERT, JR Broadcast program identification method and system
4231113, Jun 26 1964 International Business Machines Corporation Anti-jam communications system
4238849, Dec 22 1977 NOKIA DEUTSCHLAND GMBH Method of and system for transmitting two different messages on a carrier wave over a single transmission channel of predetermined bandwidth
4252995, Feb 25 1977 U.S. Philips Corporation Radio broadcasting system with transmitter identification
4262329, Mar 27 1978 COMPUTER PLANNING, INC Security system for data processing
4313197, Apr 09 1980 Bell Telephone Laboratories, Incorporated Spread spectrum arrangement for (de)multiplexing speech signals and nonspeech signals
4367488, Dec 08 1980 Sterling Television Presentations Inc. Video Data Systems Division Data encoding for television
4379947, Feb 02 1979 MUZAK, LLC AND MUZAK HOLDINGS, LLC System for transmitting data simultaneously with audio
4380027, Dec 08 1980 STERLING TELEVISION PRESENTATIONS, INC Data encoding for television
4389671, Sep 29 1980 Harris Corporation Digitally-controlled analog encrypton
4395600, Nov 26 1980 PROACTIVE SYSTEMS, INC Auditory subliminal message system and method
4423415, Jun 23 1980 LIGHT SIGNATURES, INC , FORMERLY NEW LSI, INC , 1901 AVENUE OF THE STARS, LOS ANGELES CA 90067 A CORP OF CA Non-counterfeitable document system
4425642, Jan 08 1982 APPLIED SPECTRUM TECHNOLOGIES, INC Simultaneous transmission of two information signals within a band-limited communications channel
4476468, Jun 22 1981 LIGHT SIGNATURES, INC , 1901 AVENUE OF THE STARS, LOS ANGELES CA 90067 Secure transaction card and verification system
4528588, Sep 26 1980 Method and apparatus for marking the information content of an information carrying signal
4532508, Apr 01 1983 Siemens Corporate Research & Support, Inc. Personal authentication system
4547804, Mar 21 1983 NIELSEN MEDIA RESEARCH, INC , A DELAWARE CORP Method and apparatus for the automatic identification and verification of commercial broadcast programs
4553261, May 31 1983 Document and data handling and retrieval system
4590366, Jul 01 1983 Esselte Security Systems AB Method of securing simple codes
4595950, Sep 26 1980 Method and apparatus for marking the information content of an information carrying signal
4637051, Jul 18 1983 Pitney Bowes Inc. System having a character generator for printing encrypted messages
4639779, Mar 21 1983 NIELSEN MEDIA RESEARCH, INC , A DELAWARE CORP Method and apparatus for the automatic identification and verification of television broadcast programs
4647974, Apr 12 1985 RCA Corporation Station signature system
4654867, Jul 13 1984 Motorola, Inc. Cellular voice and data radiotelephone system
4660221, Jul 18 1983 Pitney Bowes Inc. System for printing encrypted messages with bar-code representation
4663518, Sep 04 1984 YAMA CAPITAL, LLC Optical storage identification card and read/write system
4665431, Jun 24 1982 Technology Licensing Corporation Apparatus and method for receiving audio signals transmitted as part of a television video signal
4672605, Mar 20 1984 APPLIED SPECTRUM TECHNOLOGIES, INC Data and voice communications system
4675746, Jul 22 1983 Data Card Corporation System for forming picture, alphanumeric and micrographic images on the surface of a plastic card
4677435, Jul 23 1984 Communaute Europeenne de l'Energie Atomique (Euratom); Association pour la Promotion de la Technologie (Promotech) Surface texture reading access checking system
4682794, Jul 22 1985 PHOTON IMAGING CORP , A DE CORP Secure identification card and system
4703476, Sep 16 1983 ASONIC DATA SERVICES, INC Encoding of transmitted program material
4712103, Dec 03 1985 Door lock control system
4718106, May 12 1986 PRETESTING COMPANY, INC , THE Survey of radio audience
4739377, Oct 10 1986 Eastman Kodak Company Confidential document reproduction method and apparatus
4750173, May 21 1985 POLYGRAM INTERNATIONAL HOLDING B V , A CORP OF THE NETHERLANDS Method of transmitting audio information and additional information in digital form
4765656, Oct 15 1985 GAO Gesellschaft fur Automation und Organisation mbH Data carrier having an optical authenticity feature and methods for producing and testing said data carrier
4775901, Dec 04 1985 Sony Corporation Apparatus and method for preventing unauthorized dubbing of a recorded signal
4776013, Apr 18 1986 Rotlex Optics Ltd. Method and apparatus of encryption of optical images
4805020, Mar 21 1983 NIELSEN MEDIA RESEARCH, INC , A DELAWARE CORP Television program transmission verification method and apparatus
4807031, Oct 20 1987 KOPLAR INTERACTIVE SYSTEMS INTERNATIONAL, L L C Interactive video method and apparatus
4811357, Jan 04 1988 Rembrandt Communications, LP Secondary channel for digital modems using spread spectrum subliminal induced modulation
4811408, Nov 13 1987 Light Signatures, Inc. Image dissecting document verification system
4820912, Sep 19 1985 N. V. Bekaert S.A. Method and apparatus for checking the authenticity of documents
4835517, Jan 26 1984 The University of British Columbia Modem for pseudo noise communication on A.C. lines
4855827, Jul 21 1987 PHYXATION, INC Method of providing identification, other digital data and multiple audio tracks in video systems
4864618, Nov 26 1986 Pitney Bowes Inc Automated transaction system with modular printhead having print authentication feature
4866771, Jan 20 1987 The Analytic Sciences Corporation Signaling system
4874936, Apr 08 1988 UNITED PARCEL SERVICE OF AMERICA, INC , A DE CORP Hexagonal, information encoding article, process and system
4876617, May 06 1986 MEDIAGUIDE HOLDINGS, LLC Signal identification
4879747, Mar 21 1988 YAMA CAPITAL, LLC Method and system for personal identification
4884139, Dec 24 1986 Etat Francais, Represente Par Le Secretariat D'etat Aux Post Es Et Method of digital sound broadcasting in television channels with spectrum interlacing
4885632, Feb 29 1988 AGB TELEVISION RESEARCH AGB , 9145 GUILFORD ROAD, COLUMBIA, MD 21046 System and methods for monitoring TV viewing system including a VCR and/or a cable converter
4903301, Feb 27 1987 Hitachi, Ltd. Method and system for transmitting variable rate speech signal
4908836, Oct 11 1988 UNISYS CORPORATION, BLUE BELL, PA , A CORP OF DE Method and apparatus for decoding multiple bit sequences that are transmitted simultaneously in a single channel
4908873, May 13 1983 TOLTEK ELECTRONICS CORPORATION Document reproduction security system
4920503, May 27 1988 GALLUP, PATRICIA; HALL, DAVID Computer remote control through a video signal
4921278, Apr 01 1985 Chinese Academy of Sciences Identification system using computer generated moire
4939515, Sep 30 1988 GENERAL ELECTRIC COMPANY, A CORP OF NEW YORK Digital signal encoding and decoding apparatus
4941150, May 06 1987 Victor Company of Japan, Ltd. Spread spectrum communication system
4943973, Mar 31 1989 AT&T Company; AT&T INFORMATION SYSTEMS INC , 100 SOUTHGATE PARKWAY, MORRISTOWN, NJ 07960, A CORP OF DE; AMERICAN TELEPHONE AND TELEGRAPH COMPANY, 550 MADISON AVE , NEW YORK, NY 10022-3201, A CORP OF NY Spread-spectrum identification signal for communications system
4943976, Sep 16 1988 Victor Company of Japan, Ltd. Spread spectrum communication system
4944036, Dec 28 1970 Signature filter system
4963998, Apr 20 1988 Thorn EM plc Apparatus for marking a recorded signal
4965827, May 19 1987 GENERAL ELECTRIC COMPANY THE, P L C , 1 STANHOPE GATE, LONDON W1A 1EH,UNITED KINGDOM Authenticator
4967273, Apr 15 1985 NIELSEN MEDIA RESEARCH, INC , A DELAWARE CORP Television program transmission verification method and apparatus
4969041, Sep 23 1988 Tektronix, Inc Embedment of data in a video signal
4972471, May 15 1989 Encoding system
4972476, May 11 1989 Counterfeit proof ID card having a scrambled facial image
4977594, Oct 14 1986 ELECTRONIC PUBLISHING RESOURCES, INC Database usage metering and protection system and method
4979210, Jul 08 1987 Matsushita Electric Industrial Co., Ltd. Method and apparatus for protection of signal copy
4993068, Nov 27 1989 Motorola, Inc. Unforgeable personal identification system
4996530, Nov 27 1989 Agilent Technologies Inc Statistically based continuous autocalibration method and apparatus
5010405, Feb 02 1989 Massachusetts Institute of Technology Receiver-compatible enhanced definition television system
5027401, Jul 03 1990 ZERCO SYSTEMS INTERNATONAL, INC System for the secure storage and transmission of data
5036513, Jun 21 1989 ACADEMY OF APPLIED SCIENCE INC , 98 WASHINGTON ST NH, A CORP OF MA Method of and apparatus for integrated voice (audio) communication simultaneously with "under voice" user-transparent digital data between telephone instruments
5063446, Aug 11 1989 General Electric Company Apparatus for transmitting auxiliary signal in a TV channel
5073899, Jul 13 1988 U S PHILIPS CORPORATION Transmission system for sending two signals simultaneously on the same communications channel
5075773, Dec 07 1987 British Broadcasting Corporation Data transmission in active picture period
5077608, Sep 19 1990 PRODUCT ACTIVATION CORPORATION Video effects system able to intersect a 3-D image with a 2-D image
5077795, Sep 28 1990 Xerox Corporation Security system for electronic printing systems
5079648, Apr 20 1988 Thorn EMI plc Marked recorded signals
5086469, Jun 29 1990 ENTERASYS NETWORKS, INC Encryption with selective disclosure of protocol identifiers
5091966, Jul 31 1990 XEROX CORPORATION, STAMFORD, CT, A CORP OF NY Adaptive scaling for decoding spatially periodic self-clocking glyph shape codes
5095196, Dec 28 1988 OKI ELECTRIC INDUSTRY CO , LTD Security system with imaging function
5113437, Oct 25 1988 MEDIAGUIDE HOLDINGS, LLC Signal identification system
5128525, Jul 31 1990 XEROX CORPORATION, STAMFORD, CT A CORP OF NY Convolution filtering for decoding self-clocking glyph shape codes
5144660, Aug 31 1988 Securing a computer against undesired write operations to or read operations from a mass storage device
5148498, Aug 01 1990 AWARE, INC , A CORP OF MA Image coding apparatus and method utilizing separable transformations
5150409, Aug 13 1987 Device for the identification of messages
5161210, Nov 10 1988 U S PHILIPS CORPORATION Coder for incorporating an auxiliary information signal in a digital audio signal, decoder for recovering such signals from the combined signal, and record carrier having such combined signal recorded thereon
5166676, Feb 15 1984 Destron Fearing Corporation Identification system
5168147, Jul 31 1990 XEROX CORPORATION, STAMFORD, CT A CORP OF NY Binary image processing for decoding self-clocking glyph shape codes
5181786, Nov 15 1989 N V NEDERLANDSCHE APPARATENFABRIEK NEDAP A LIMITED COMPANY OF THE NETHERLANDS Method and apparatus for producing admission tickets
5185736, May 12 1989 ALCATEL NETWORK SYSTEMS, INC Synchronous optical transmission system
5199081, Dec 15 1989 Kabushiki Kaisha Toshiba System for recording an image having a facial image and ID information
5200822, Apr 23 1991 NATIONAL BROADCASTING COMPANY, INC Arrangement for and method of processing data, especially for identifying and verifying airing of television broadcast programs
5212551, Oct 16 1989 Method and apparatus for adaptively superimposing bursts of texts over audio signals and decoder thereof
5213337, Jul 06 1988 RPX Corporation System for communication using a broadcast audio signal
5228056, Dec 14 1990 InterDigital Technology Corp Synchronous spread-spectrum communications system and method
5243423, Dec 20 1991 NIELSEN MEDIA RESEARCH, INC , A DELAWARE CORP Spread spectrum digital data transmission over TV video
5245165, Dec 27 1991 Xerox Corporation Self-clocking glyph code for encoding dual bit digital values robustly
5245329, Feb 27 1989 SECURITY PEOPLE INC Access control system with mechanical keys which store data
5247364, Nov 29 1991 Cisco Technology, Inc Method and apparatus for tuning data channels in a subscription television system having in-band data transmissions
5253078, Mar 14 1990 LSI Logic Corporation System for compression and decompression of video data using discrete cosine transform and coding techniques
5257119, Mar 25 1991 Canon Kabushiki Kaisha Image processing apparatus which adds apparatus identification data to images
5258998, Oct 07 1985 Canon Kabushiki Kaisha Data communication apparatus permitting confidential communication
5259025, Jun 12 1992 Audio Digitalimaging, Inc. Method of verifying fake-proof video identification data
5267334, May 24 1991 Apple Inc Encoding/decoding moving images with forward and backward keyframes for forward and reverse display
5280537, Nov 26 1991 Nippon Telegraph and Telephone Corporation Digital communication system using superposed transmission of high speed and low speed digital signals
5293399, Oct 08 1987 DATAMARS SA Identification system
5295203, Mar 26 1992 GENERAL INSTRUMENT CORPORATION GIC-4 Method and apparatus for vector coding of video transform coefficients
5299019, Feb 28 1992 Samsung Electronics Co., Ltd. Image signal band compressing system for digital video tape recorder
5305400, Dec 05 1990 Deutsche ITT Industries GmbH Method of encoding and decoding the video data of an image sequence
5315098, Dec 27 1990 Xerox Corporation; XEROX CORPORATION, A CORP OF NY Methods and means for embedding machine readable digital data in halftone images
5319453, Jun 22 1989 Airtrax Method and apparatus for video signal encoding, decoding and monitoring
5319724, Apr 19 1990 RICOH COMPANY, LTD A CORP OF JAPAN; RICOH CORPORATION A CORP OF DELAWARE Apparatus and method for compressing still images
5319735, Dec 17 1991 Raytheon BBN Technologies Corp Embedded signalling
5325167, May 11 1992 CANON INC Record document authentication by microscopic grain structure and method
5327237, Jun 14 1991 PLYMOUTH DEWITT, INC Transmitting data with video
5337362, Apr 15 1993 RICOH COMPANY, LTD , A CORP OF JAPAN; RICOH CORPORATION, A DE CORP Method and apparatus for placing data onto plain paper
5349655, May 24 1991 Symantec Corporation Method for recovery of a computer program infected by a computer virus
5351302, May 26 1993 Method for authenticating objects identified by images or other identifying information
5379345, Jan 29 1993 NIELSEN COMPANY US , LLC, THE Method and apparatus for the processing of encoded data in conjunction with an audio broadcast
5387941, Jun 14 1991 PLYMOUTH DEWITT, INC Data with video transmitter
5394274, Jan 22 1988 Anti-copy system utilizing audible and inaudible protection signals
5396559, Aug 24 1990 Anticounterfeiting method and device utilizing holograms and pseudorandom dot patterns
5398283, Sep 21 1992 KRYPTOFAX PARTNERS L P Encryption device
5404160, Jun 24 1993 Berkeley Varitronics Systems, Inc. System and method for identifying a television program
5404377, Apr 08 1994 Intel Corporation Simultaneous transmission of data and audio signals by means of perceptual coding
5408542, May 12 1992 Apple Inc Method and apparatus for real-time lossless compression and decompression of image data
5418853, Jul 24 1992 Sony Corporation Apparatus and method for preventing unauthorized copying of video signals
5422963, Oct 15 1993 American Telephone and Telegraph Company Block transform coder for arbitrarily shaped image segments
5422995, Mar 30 1992 International Business Machiens Corporation Method and means for fast writing of run length coded bit strings into bit mapped memory and the like
5425100, Nov 25 1992 NIELSEN COMPANY US , LLC, THE Universal broadcast code and multi-level encoded signal monitoring system
5428606, Jun 30 1993 Wistaria Trading Ltd Digital information commodities exchange
5428607, Dec 20 1993 AT&T IPM Corp Intra-switch communications in narrow band ATM networks
5432542, Aug 31 1992 AMBATO MEDIA, LLC Television receiver location identification
5432870, Jun 30 1993 Ricoh Company, LTD Method and apparatus for compressing and decompressing images of documents
5446273, Mar 13 1992 Credit card security system
5450122, Nov 22 1991 NIELSEN COMPANY US , LLC, THE In-station television program encoding and monitoring system and method
5450490, Mar 31 1994 THE NIELSEN COMPANY US , LLC Apparatus and methods for including codes in audio signals and decoding
5461426, Aug 20 1993 SAMSUNG ELECTRONICS CO , LTD Apparatus for processing modified NTSC television signals, with digital signals buried therewithin
5469506, Jun 27 1994 Pitney Bowes Inc. Apparatus for verifying an identification card and identifying a person by means of a biometric characteristic
5473631, Apr 08 1994 Intel Corporation Simultaneous transmission of data and audio signals by means of perceptual coding
5479168, May 29 1991 Microsoft Technology Licensing, LLC Compatible signal encode/decode system
5481294, Oct 27 1993 NIELSEN COMPANY US , LLC Audience measurement system utilizing ancillary codes and passive signatures
5488664, Apr 22 1994 YEDA RESEARCH AND DEVELOPMENT CO , LTD Method and apparatus for protecting visual information with printed cryptographic watermarks
5499294, Nov 24 1993 The United States of America as represented by the Administrator of the Digital camera with apparatus for authentication of images produced from an image file
5515081, Nov 30 1993 Borland Software Corporation System and methods for improved storage and processing of BITMAP images
5524933, May 29 1992 Alpvision SA Method for the marking of documents
5530759, Feb 01 1995 International Business Machines Corporation Color correct digital watermarking of images
5530852, Dec 20 1994 Sun Microsystems, Inc Method for extracting profiles and topics from a first file written in a first markup language and generating files in different markup languages containing the profiles and topics for use in accessing data described by the profiles and topics
5532920, Apr 29 1992 International Business Machines Corporation Data processing system and method to enforce payment of royalties when copying softcopy books
5537223, Jun 02 1994 Xerox Corporation Rotating non-rotationally symmetrical halftone dots for encoding embedded data in a hyperacuity printer
5539471, May 03 1994 Rovi Technologies Corporation System and method for inserting and recovering an add-on data signal for transmission with a video signal
5539735, Jun 30 1993 Wistaria Trading Ltd Digital information commodities exchange
5541662, Sep 30 1994 Intel Corporation Content programmer control of video and data display using associated data
5541741, Sep 30 1991 Canon Kabushiki Kaisha Image processing with anti-forgery provision
5544255, Aug 31 1994 CIC ACQUISTION CORP ; CIC ACQUISITION CORP Method and system for the capture, storage, transport and authentication of handwritten signatures
5548646, Sep 15 1994 Sun Microsystems, Inc System for signatureless transmission and reception of data packets between computer networks
5557333, Jun 14 1991 PLYMOUTH DEWITT, INC System for transparent transmission and reception of a secondary data signal with a video signal in the video band
5559559, Jun 14 1991 PLYMOUTH DEWITT, INC Transmitting a secondary signal with dynamic injection level control
5568179, May 19 1992 THOMSON CONSUMER ELECTRONICS S A Method and apparatus for device control by data transmission in TV lines
5568570, Sep 30 1994 Intellectual Ventures Fund 83 LLC Method and apparatus for reducing quantization artifacts in a hierarchical image storage and retrieval system
5572010, Jan 03 1995 Xerox Corporation Distributed type labeling for embedded data blocks
5572247, Jun 14 1991 PLYMOUTH DEWITT, INC Processor for receiving data from a video signal
5576532, Jan 03 1995 Xerox Corporation Interleaved and interlaced sync codes and address codes for self-clocking glyph codes
5579124, Nov 16 1992 THE NIELSEN COMPANY US , LLC Method and apparatus for encoding/decoding broadcast or recorded segments and monitoring audience exposure thereto
5582103, Jun 04 1992 NATIONAL PRINTING BUREAU INCORPORATTED ADMINISTRATIVE AGENCY, JAPAN Method for making an anti-counterfeit latent image formation object for bills, credit cards, etc.
5587743, Jun 14 1991 PLYMOUTH DEWITT, INC Signal processors for transparent and simultaneous transmission and reception of a data signal in a video signal
5590197, Apr 04 1995 SSL SERVICES LLC Electronic payment system and method
5602920, May 31 1995 L G Electronics Inc Combined DCAM and transport demultiplexer
5606609, Sep 19 1994 SILANIS TECHNOLOGY INC Electronic document verification system and method
5611575, Jan 03 1995 Xerox Corporation Distributed state flags or other unordered information for embedded data blocks
5613004, Jun 07 1995 Wistaria Trading Ltd Steganographic method and device
5613012, Nov 28 1994 Open Invention Network, LLC Tokenless identification system for authorization of electronic transactions and electronic transmissions
5614940, Oct 21 1994 Intel Corporation Method and apparatus for providing broadcast information with indexing
5617148, Jun 14 1991 PLYMOUTH DEWITT, INC Filter by-pass for transmitting an additional signal with a video signal
5629770, Dec 20 1993 THE CHASE MANHATTAN BANK, AS COLLATERAL AGENT Document copying deterrent method using line and word shift techniques
5629980, Nov 23 1994 CONTENTGUARD HOLDINGS, INC System for controlling the distribution and use of digital works
5636292, May 08 1995 DIGIMARC CORPORATION AN OREGON CORPORATION Steganography methods employing embedded calibration data
5638443, Nov 23 1994 CONTENTGUARD HOLDINGS, INC System for controlling the distribution and use of composite digital works
5638446, Aug 28 1995 NYTELL SOFTWARE LLC Method for the secure distribution of electronic files in a distributed environment
5646997, Dec 14 1994 Sony Corporation Method and apparatus for embedding authentication information within digital data
5647017, Aug 31 1994 Peripheral Vision Limited; CIC ACQUISITION CORP Method and system for the verification of handwritten signatures
5659726, Feb 23 1995 Regents of the University of California, The Data embedding
5659732, May 17 1995 Google, Inc Document retrieval over networks wherein ranking and relevance scores are computed at the client for multiple database documents
5661574, Sep 30 1994 Canon Kabushiki Kaisha Image processing method and apparatus for adding identifying information to an image portion and forming the portion at a lower of plural resolutions
5664018, Mar 12 1996 Watermarking process resilient to collusion attacks
5666487, Jun 28 1995 Verizon Patent and Licensing Inc Network providing signals of different formats to a user by multplexing compressed broadband data with data of a different format into MPEG encoded data stream
5721788, Jul 31 1992 DIGIMARC CORPORATION AN OREGON CORPORATION Method and system for digital image signatures
5862260, Nov 18 1993 DIGIMARC CORPORATION AN OREGON CORPORATION Methods for surveying dissemination of proprietary empirical data
DE3806411,
EP58482,
EP372601,
EP411232,
EP441702,
EP493091,
EP551016,
EP581317,
EP605208,
EP629972,
EP649074,
EP650146,
EP705025,
GB2063018,
GB2067871,
GB2196167,
GB2204984,
JP4248771,
JP5242217,
JP830759,
WO8908915,
WO9325038,
WO9510835,
WO9514289,
WO9520291,
WO9626494,
WO9627259,
Executed on | Assignor | Assignee | Conveyance | Frame/Reel/Doc
Jul 22 1996 | RHOADS, GEOFFREY B | Digimarc Corporation | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 0250080073 pdf
Nov 24 1999 | DIGIMARC CORPORATION AN OREGON CORPORATION | DIGIMARC CORPORATION A DELAWARE CORPORATION | MERGER SEE DOCUMENT FOR DETAILS | 0249820707 pdf
Jan 27 2004 | Digimarc Corporation | (assignment on the face of the patent)
Aug 01 2008 | DIGIMARC CORPORATION A DELAWARE CORPORATION | DMRC LLC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 0252170508 pdf
Aug 01 2008 | DMRC LLC | DMRC CORPORATION | MERGER SEE DOCUMENT FOR DETAILS | 0252270808 pdf
Sep 03 2008 | DMRC CORPORATION | Digimarc Corporation | MERGER SEE DOCUMENT FOR DETAILS | 0252270832 pdf
Oct 24 2008 | L-1 SECURE CREDENTIALING, INC FORMERLY KNOWN AS DIGIMARC CORPORATION | DIGIMARC CORPORATION FORMERLY DMRC CORPORATION | CONFIRMATION OF TRANSFER OF UNITED STATES PATENT RIGHTS | 0217850796 pdf
Apr 30 2010 | DIGIMARC CORPORATION A DELAWARE CORPORATION | DIGIMARC CORPORATION AN OREGON CORPORATION | MERGER SEE DOCUMENT FOR DETAILS | 0243690582 pdf
Date Maintenance Fee Events
Jun 22 2010 - M1553: Payment of Maintenance Fee, 12th Year, Large Entity.
Jan 06 2011 - ASPN: Payor Number Assigned.
Jan 06 2011 - RMPN: Payer Number De-assigned.


Date Maintenance Schedule
Sep 22 2012 - 4 years fee payment window open
Mar 22 2013 - 6 months grace period start (w surcharge)
Sep 22 2013 - patent expiry (for year 4)
Sep 22 2015 - 2 years to revive unintentionally abandoned end. (for year 4)
Sep 22 2016 - 8 years fee payment window open
Mar 22 2017 - 6 months grace period start (w surcharge)
Sep 22 2017 - patent expiry (for year 8)
Sep 22 2019 - 2 years to revive unintentionally abandoned end. (for year 8)
Sep 22 2020 - 12 years fee payment window open
Mar 22 2021 - 6 months grace period start (w surcharge)
Sep 22 2021 - patent expiry (for year 12)
Sep 22 2023 - 2 years to revive unintentionally abandoned end. (for year 12)