A system and method for creating audiovisual programming has media elements, such as audiovisual clips, stored in a library. A database contains selected information about each of the media elements. The stored information in the database does not dictate the temporal sequence of the media elements. Media elements are selected in response to a request for media programming and arranged in a temporal organization. A user does not select the individual media elements or their temporal organization. Transitions between audiovisual clips are determined by the system based on information stored in the database and predetermined preferences as to types of transitions. Transition information includes a variety of possible transition points in an individual clip, capable of selection by the system. Separate transitions may be provided for the audio and video portions of audiovisual clips. For unique media programming, a unique sequence of cues may be included within the program for use in verification of viewing and comprehension. Upon completion of the selection of the media elements, the sequence, and the transitions, the media elements are assembled into a media program, such as a videotape.
30. A method of creating audiovisual media programming from a plurality of stored audiovisual media elements, comprising the steps of:
automatically selecting by a processor from a database containing information concerning said audiovisual media elements a plurality of said audiovisual media elements and automatically designating a temporal sequence for said selected audiovisual media elements, the selecting and designating employing a template defining a sequence of temporal positions for the media elements, the media elements being selected for each position in the template in accordance with correspondence between definitions associated with each position and the information in the database; and
automatically selecting by the processor transitions for each of said audiovisual media elements to create a file of element identifiers and transition information for creation of media programming.
36. A system for creating audiovisual programming from a plurality of stored audiovisual media elements, comprising:
means including a processor for automatically selecting from a database containing information concerning said audiovisual media elements a plurality of said audiovisual media elements and automatically designating a temporal sequence for said selected audiovisual media elements, the selecting and designating employing a template defining a sequence of temporal positions for the media elements, the media elements being selected for each position in the template in accordance with correspondence between definitions associated with each position and the information in the database; and
means including a processor for selecting automatically transitions for each of said audiovisual media elements.
11. A system for creating media programming from a library of media assets, comprising:
a database containing selected information about each of said media assets;
selection means including a processor for automatically selecting a plurality of said media assets in response to a request for media programming, and for automatically selecting a temporal organization for said selected media assets, employing definitions associated with the request, correspondence between the definitions and information in the database, and a sequence of temporal positions for the media elements, to select fewer than all the media elements in the database responsive to the request and to select the temporal organization, said temporal organization not being dictated by said selected information; and
assembling means including a processor for assembling said media elements into media programming.
1. A method of creating media programming, comprising the steps of:
maintaining in a memory device a database containing selected information about each of a plurality of media elements;
automatically selecting by a processor in communication with the memory device a plurality of said media elements in response to a request for media programming, and automatically selecting by the processor a temporal organization for said selected media elements, employing by the processor definitions associated with the request, correspondence between the definitions and information in the database, and a sequence of temporal positions for the media elements, to select fewer than all the media elements in the database responsive to the request and to select the temporal organization, said temporal organization not being dictated by said selected information; and
assembling said media elements into media programming.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
8. The method of
9. The method of
10. The method of
12. The system of
13. The system of claim 11, wherein said media elements are still photographs, and said media programming comprises a series of said still photographs.
14. The system of
15. The system of
16. The system of
17. The system of
18. The system of
19. The system of
20. The system of
21. The system of
22. The system of
23. A method for verifying viewing and comprehension of a unique media program, comprising the steps of:
providing in a unique media program a unique sequence of cues;
receiving from a viewer of said unique media program information relative to said cues; and
comparing said received information to said sequence of cues.
24. The method of
25. The method of
26. The method of
27. The method of
28. The method of
29. The method of
31. The method of
32. The method of
33. The method of
34. The method of
35. The method of
37. The system of
38. The system of
39. The system of
40. The system of
41. The system of
42. The method of claim 6 wherein said transition information comprises:
a transition point.
43. The method of claim 6 wherein said transition information comprises:
a transition type.
44. The method of claim 43 wherein said transition type is a dissolve.
45. The method of claim 43 wherein said transition type is a cut.
46. The method of claim 43 wherein said transition type is a fade.
47. The method of claim 1 further comprising the step of obtaining desired content information concerning an intended viewer of the programming prior to said step of selecting, and employing said desired content information in said step of selecting.
48. The method of claim 6 wherein said transition information comprises:
a modification parameter wherein said modification parameter is used to modify a transition.
49. The method of claim 1 further comprising the step of obtaining desired style information concerning an intended viewer of the programming prior to said step of selecting, and employing said desired style information in said step of selecting.
50. The method of claim 11 further comprising:
deriving said selected information from said media assets.
51. The method of claim 11 further comprising:
automatically deriving said selected information from said media assets.
52. The method of claim 16 wherein said transition information comprises:
a transition point.
53. The method of claim 16 wherein said transition information comprises:
a transition type.
54. The method of claim 53 wherein said transition type is a dissolve.
55. The method of claim 53 wherein said transition type is a cut.
56. The method of claim 53 wherein said transition type is a fade.
57. The method of claim 30 wherein said transitions comprise a dissolve.
58. The method of claim 30 wherein said transitions comprise a cut.
59. The method of claim 30 wherein said transitions comprise a fade of an audio portion of said element.
60. The method of claim 36 wherein said transitions comprise a dissolve.
61. The method of claim 36 wherein said transitions comprise a cut.
62. The method of claim 36 wherein said transitions comprise a fade of an audio portion of said element.
63. The method of claim 1 further comprising:
assembling an automatically assembled media clip into said media programming.
64. The method of claim 1 further comprising:
obtaining psychographic information concerning an intended viewer of the programming prior to said step of selecting, and employing said psychographic information in said step of selecting.
65. The method of claim 1 wherein said step of selecting comprises:
filtering a first media element out of consideration for inclusion in said media programming wherein said filtering is performed by a mediating layer.
66. The method of claim 5 wherein at least one of said tags is a taxonomic tag.
67. The method of claim 5 wherein at least one of said tags is an attribute tag.
68. The method of claim 5 wherein at least one of said tags is a reusability tag.
69. The method of claim 1, wherein the temporal positions are in a sequence defined by a template stored in the database.
70. The method of claim 69, wherein the media elements are further selected for each position of the template in accordance with demographic characteristics of an intended viewer.
71. The method of claim 1, wherein the selecting comprises selecting media elements having an aggregate duration limited to a predetermined duration of the media programming.
72. The method of claim 11, wherein the definitions associated with the request are further associated with a template.
73. The method of claim 72, wherein the template defines the sequence of temporal positions for media elements.
Notice: More than one reissue application has been filed for the reissue of U.S. Pat. No. 6,032,156. The reissue applications are application Ser. Nos. 10/087,003 (the present application) and 10/616,602 (Now Pat. No. Re 41,493) which is a divisional reissue of U.S. Pat. No. 6,032,156.
This application claims priority from U.S. Provisional Patent Application No. 60/042,564, filed Apr. 1, 1997, which is hereby incorporated by reference in its entirety.
This invention relates to a method and computer-implemented system for creation of audiovisual programming.
There have been recent substantial advances in the capacity to design customized audiovisual programs for specific purposes from a library of existing video clips and audio elements. Customization of audiovisual programming is useful in many applications. For example, in advertising certain products, and in particular automobiles, one promotional technique is to prepare promotional videotapes which are sent to potential customers on their request. The desirability of customizing such videotapes to demographic or other characteristics of individual consumers is, of course, substantial. Health care practitioners and managed care entities have begun to provide instructional videotapes to patients with information regarding management of various diseases and conditions. Customizing such information to the disease and condition of the individual, and to demographic characteristics of the individual, such as age, income, and educational level, psychographic characteristics such as perceived wellness and willingness to change behaviors, and other factors, would be valuable for increasing the effectiveness of such videotapes in communicating the information to the recipient.
In accordance with present technology, it is possible to create and store a library of brief video clips, and to provide a database of information regarding these clips. However, with present technology, a human editor must make the ultimate selection of individual clips, provide the final editing decisions, create and select transitions so that there is a smooth visual and audio transition between adjoining clips in the program, and check the content of the clips to determine that there is proper coverage of the appropriate subject matter in an appropriate sequence. Automating this editing process would make possible substantial flexibility and new possibilities for creation of audiovisual programming.
Once videotapes have been provided to the user, it is difficult to verify whether or not the user has viewed the program. Even if the program has been viewed, the level of comprehension is difficult to assess.
It is accordingly an advantage of this invention that the disadvantages of the prior art may be overcome.
Additional advantages of the invention, and objects of the invention, will become apparent from the detailed description of a preferred embodiment which follows.
According to a first aspect of the invention, a system and method of creating media programming are provided. A database is provided which contains selected information about each of a large number of media elements. The media elements may be, for example, audiovisual clips. The elements themselves are maintained in a suitable library. The method provides for selecting some of those media elements in response to a request for media programming, and selecting a temporal organization for the media elements. However, the temporal organization is not dictated by the selected information regarding each of the media elements. The system selects and orders the media elements according to the data in the request, and according to information, such as permitted transitions, regarding the media elements. The system prevents a user from selecting individual media elements. The media elements are then assembled into media programming.
In another aspect of the invention, a method is provided for verifying viewing and comprehension of a unique media program. The method includes providing, in the unique media program, a unique sequence of cues. The method includes receiving from a viewer of the unique media program information relative to said cues, such as responses to questions included on a questionnaire, or in response to a telephone call made by the viewer. The received information is then compared to the sequence of cues to determine whether or not the program was viewed, and the level of comprehension by the viewer.
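The comparison step described above can be sketched as follows. This is an illustrative sketch only; the function name and the representation of cues as simple strings are assumptions, not part of the described system.

```python
def verify_viewing(embedded_cues, reported_cues):
    """Compare the cues reported by a viewer against the unique
    sequence embedded in the program.  Returns (viewed, comprehension):
    `viewed` is True only when every cue matches in order, and
    `comprehension` is the fraction of cues correctly reported."""
    if not embedded_cues:
        return False, 0.0
    matches = sum(1 for e, r in zip(embedded_cues, reported_cues) if e == r)
    comprehension = matches / len(embedded_cues)
    return comprehension == 1.0, comprehension
```

A partial match would indicate that the program was likely viewed but not fully comprehended, supporting the graded assessment described above.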
In another aspect of the invention, a method of creating audiovisual programming from stored audiovisual media elements is provided. In a first step, from a database containing information concerning the audiovisual media elements, certain audiovisual media elements are selected. A temporal sequence for the selected elements is designated. Transitions between the media elements are automatically selected.
Referring to
Database.
Computer 20 is also suitably associated with a database 100. Database 100 contains unique identifying information for each clip and has associated therewith additional information often arranged in a hierarchical manner. Referring to
The organizational structure of the database may be hierarchical, with each layer of hierarchy defining a specific set of organizational principles. Referring to
Also at the highest level of organization, typically used in on-line applications only, there may be provided the viewer/user interface options which define the ways in which any given class and security level of user will be allowed to actively as well as passively interact with media assets. We will call this the INTERFACE LAYER 305. At this level of organization, the behaviors of ancillary assets such as promotional segments, information identifying the system, advertisements and news-flashes are defined. These assets embody aesthetic, program or instructional design, as well as market-driven, or viewer defined behaviors.
Immediately below this layer is preferably the meta-content layer. This is called the PROGRAM LAYER 310. Here are defined the type of assets and the core content descriptions of those assets. By way of example, the types of assets may be defined as training, informational, and entertainment assets. Examples of core subject matter would be “medical”, at the highest level, “health management”, at a lower level, and “diabetes mellitus”, at a still lower level.
Next in the hierarchy is the instructional design layer, or TEMPLATE LAYER 315. This layer is characterized by a family of defining values which describe the range of the target audience in specific demographic and psychographic terms. Additionally, the overall outline of the subject matter is contained in this layer and is associated with demography where appropriate. These outlining functions are secondary, however, to the temporal organizational templates embodied in this layer. Here the instructional designer, or interactive author, defines the preferred temporal modes of presentation of the universe of assets. For example, the instructional designer might define that the block of programming content called EFFECTS ON THE HEART is presented across three fundamental age groups, two levels of detail (summary/cursory and in-depth), both gender-specific groups and four distinct ethnicity components. Within this multi-dimensional array of program assets, the instructional designer might also define that the material be presented in the preferred sequence of—INTRODUCTION TO THE HEART, IMPACT OF DIABETES ON THE CARDIOVASCULAR STRUCTURES, EFFECTS OF DIET, EFFECTS OF EXERCISE, Q&A SESSION, SUMMARY.
Below the instructional design layer are the smaller organizational elements which allow for elasticity in the specifics of the implementation of the temporal design. This is called the MODULE LAYER 320 and in special instances the SEQUENCE LAYER. Fundamental to this layer are weighting factors which control likelihood of asset use, and allow for the deployment of elements which are free to float temporally in order to accomplish certain transitions and effective deployment of those elements which are slave to the temporality functions. These elements as a group are shorter sequentially-patterned program elements of content which organize under the temporality principles of the higher layer. The free floating elements may have various linking geometries or parameters at the opening and closing thereof. Such elements can be used to bridge elements that cannot themselves be linked because a direct link is either disallowed or would involve use of disfavored transitions.
The lowest level of organization is that of the individual media elements or assets themselves. This is called the CLIP LAYER 325. These elements carry tags which define their specific content, such as: DIABETIC HEART, LEFT VENTRICLE, DAMAGE TO, HYPERGLYCEMIA, MALE, AGE 50, TALKING HEAD. The first three content tags will be noted as being in hierarchical order from most general to most specific. The next two are examples of demographic tags, and the final tag is a simple example of a tag denoting style. These elements also carry production-specific control tags, which, as discussed in more detail below, define such characteristics as allowable exit/entrance transitions for both audio and video.
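A clip-layer record carrying the tags described above might be represented as in the following sketch. The field names and the particular tag groupings are assumptions for illustration; the document does not prescribe a concrete data structure.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: str
    content_tags: list      # hierarchical, most general to most specific
    demographic_tags: dict  # e.g. gender, age
    style_tag: str          # e.g. "TALKING HEAD"
    entry_transitions: set  # allowable entrance transitions (control tags)
    exit_transitions: set   # allowable exit transitions (control tags)

# Example record mirroring the tags named in the text.
clip = Clip(
    clip_id="HEART-0042",
    content_tags=["DIABETIC HEART", "LEFT VENTRICLE", "DAMAGE TO"],
    demographic_tags={"gender": "MALE", "age": 50},
    style_tag="TALKING HEAD",
    entry_transitions={"cut", "dissolve"},
    exit_transitions={"cut", "fade"},
)
```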
It is important to note that, in the operation of the system, there is an inhibition layer between the clip and the searching mechanism. The inhibition layer assures that the system does not include in the programming every clip that is responsive to a query provided by a user. The inhibition mechanism may be responsive to psychographic characteristics of the user, such as age, level of education, or even the reason for the query. The tags are responsive to this type of information. The inhibition mechanism may be modified dynamically as the database is mined for suitable assets. The inhibition mechanism may be viewed as a multi-dimensional set of psychographic requirements. Clips can be evaluated for their responsiveness in the various dimensions. The system may set, or the user may select, a length of time for the program, and the inhibition mechanism will operate to limit the total duration of the clips selected to the selected time, as well as choosing clips according to suitability for the viewer and the viewer's purpose.
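The inhibition behavior, evaluating responsiveness across dimensions and limiting the total selected duration to the program length, can be sketched as below. The dictionary-based clip and profile representations are assumptions for illustration.

```python
def inhibit(clips, profile, program_seconds):
    """Score each clip's responsiveness to the viewer profile across
    its tag dimensions, filter out unresponsive clips, and admit the
    best-scoring clips until the selected program length is reached."""
    def score(clip):
        return sum(1 for k, v in profile.items() if clip["tags"].get(k) == v)

    total, selected = 0, []
    for clip in sorted(clips, key=score, reverse=True):
        if score(clip) == 0:
            continue  # responsive in no dimension: inhibited entirely
        if total + clip["seconds"] <= program_seconds:
            selected.append(clip["id"])
            total += clip["seconds"]
    return selected
```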
Referring again to
Creation of Database.
The database is created by identifying each clip or other asset and defining values of the control tags and content tags for each. Values of the various control tags and content tags may be defined by a user, either during development of the script for the clip or upon completion of the clip. For example, the program may include screens prompting a user to select a value for each control tag from a menu of options. Different screens may be provided for different users, such as dialog editors, audio editors, and video editors, to enter values for appropriate control and content tags. Alternatively, values of various tags may be created automatically by truth tables or decision-capture systems, or other automated techniques, either with or without human assistance. Such systems may act from information derived from analysis of existing clips using image-recognition software, from analysis of scripts entered into text, or from other information provided by human editors.
By way of example only, a system for creation of a database for use in connection with creation of medical programming will now be described with reference to
Each of the foregoing components operates as an independent entity in the exemplary system. Each component is started by the user selecting an icon from an appropriate folder. The user interface is preferably graphical in nature. Actual keyed data entry is preferably kept to a minimum, with most entries made by selecting an option from a set or an entry from a list.
The Production Interface
The production interface 405 is the interface designed to provide a structured, interactive means of importing a video clip into the system. A standard container, such as OMF, may be utilized as a standardized vehicle for the transmission of the proprietary tags. When the production interface is selected, the user will be presented with a screen containing fields for the entry of the following:
Clip ID—This will become the primary key or file name.
Source—Where the clip will be read from. Options may include DAT tape or network.
The user will also be presented with one or more interfaces to enter the audio/video transition fields and the coding (medical/patient demographic) information. Before being allowed to exit the process, defaults will be applied to the transition fields not specified by the user. The user may be required to preview the clip just imported.
The Coding Interface
The coding interface 440 is the GUI designed to perform the entry of the medical and socio-demographic selection codes which apply to the clip. These include the ICDs for which this clip is applicable, the socio-demographic and medical attributes of those patients assumed to be the most likely potential viewers of the clip, and any special fields which when present in the client/patient profile will cause this clip to be selected. All fields may be selected from preestablished lists or groups displayed as “radio-buttons”. As the ICD set may require the designation of hundreds or thousands of codes, a suitable method of use of the hierarchical nature of the ICD structure may be utilized to simplify the selection process. These may be selected during the script outlining or scripting process by the use of menus or existing fields incorporated in the scripting software.
The Patient Profile Entry Interface
The Patient profile entry interface 415 is the GUI designed to perform the entry of the general, medical and socio-demographic codes for the specific patient for whom the product is to be produced. The general section may require the most data entry, including patient name, address, social security number, insurance billing numbers, referring entity identification and any text which is to be added to the final video. Systems, such as those used in the health care industry, may be employed to extract relevant information from existing patient records. In the case of on-line use, the medical and socio-demographic sections will be “click to select” entry similar in nature to the entry process performed by the coder when the clip is imported. The result of this process will be to create an entry in the patient database 420. The information is then forwarded to the editor program 425.
Based on the topic requested, such as a condition which the patient has been diagnosed with, and the demographic information, editor program 425 will, as discussed below, produce a recommended decision list, or preliminary edit decision list (EDL) file for further processing. Subsequently, the final EDL is created in an evolutionary manner. At first, a sample order of clips, without transitions, is analyzed. The order and identities of the clips are then revised. When the order and identities of the clips have been finalized, the transitions are computed. The transitions are then executed and inserted into the stream by use of a system of removal of portions of the tag extremities used in temporal transitions.
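The final pass of this evolutionary process, interleaving computed transitions into a finalized clip order, can be sketched as follows. The function names and the tuple-based EDL representation are assumptions for illustration.

```python
def build_edl(ordered_clips, choose_transition):
    """Once the order and identities of the clips are final, compute a
    transition for each adjacent pair and interleave the results into
    the edit decision list."""
    edl = []
    for i, clip in enumerate(ordered_clips):
        edl.append(("PLAY", clip))
        if i + 1 < len(ordered_clips):
            edl.append(("TRANSITION",
                        choose_transition(clip, ordered_clips[i + 1])))
    return edl
```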
The List Editor/Previewer
The List Editor/Previewer 430 is the GUI designed to provide the production editor with the ability to change, add to and preview the EDL that has been produced by the editor program 425 in response to information entered in the Patient Profile Entry Interface 415.
The Production Player
Each clip, clip component, or other media asset, is stored in three distinct segments:
1. lead-in segment
2. main body
3. exit segment
Transitions can only be performed on the lead-in and exit segments. The production player 435 is launched with the name of the patient EDL to be played. The EDL will be analyzed and each transition rendition time estimated. The production player will then process each command in the EDL. Most commands, which require very little processing, cause a more detailed command to be written to the player FIFO which controls the player output thread. Other process-intensive commands, such as a transition, will be rendered, stored to disc, and the detailed command with its disc location is added to the player FIFO. As each EDL command is processed, the remaining estimated time flow is modified until the program determines that future transition rendering will be able to be completed before the player output thread requires it. At this point, the player output thread will be started, taking its input instructions from the player FIFO. The player output thread will operate asynchronously, transferring each data component specified in the player FIFO to the appropriate hardware component, such as video and audio decoder modules, text merge processor, etc., whose output drives the device for recording the program, such as a videotape deck.
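The player's first pass, queuing cheap commands directly while rendering process-intensive transitions ahead of time, can be sketched as below. The command vocabulary and the `render_transition` callback are assumptions for illustration.

```python
from collections import deque

def queue_commands(edl, render_transition):
    """Sketch of the production player's first pass: commands requiring
    little processing go straight to the player FIFO, while
    process-intensive transitions are rendered first (e.g. to disc) and
    queued by their storage location."""
    fifo = deque()
    for cmd, arg in edl:
        if cmd == "TRANSITION":
            fifo.append(("PLAY_RENDERED", render_transition(arg)))
        else:
            fifo.append((cmd, arg))
    return fifo
```

In the full system the output thread would drain this FIFO asynchronously once enough rendering headroom has accumulated.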
Upon completion, the required managerial and billing information will be generated and stored in an appropriate database.
The transition process will require the identification of both an exit and a lead-in point, and a transition function specification. This pair of fields is refined into a single recommended field as defined above, and then the final recommended EDL is obtained. These fields are contained in the preliminary EDL which is being played. The appropriate time positions are located in both the exit and lead-in segments. The first frame of each segment is decoded into separate buffers and then the transition function is applied to each matching pixel of both images to produce the third rendered frame buffer. A similar process is also applied to the matching audio sample points associated with the currently rendered frame. The video and audio are then re-compressed and written to the output transition file.
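The per-pixel application of the transition function can be sketched as a simple linear dissolve. Representing frames as nested lists of luminance values is an assumption for illustration; a real implementation would operate on decoded frame buffers.

```python
def blend_frames(exit_frame, lead_frame, alpha):
    """Apply a dissolve-style transition function to each matching pixel
    of the exit and lead-in frames; alpha runs from 0.0 (exit frame
    only) to 1.0 (lead-in frame only) across the transition interval."""
    return [
        [(1.0 - alpha) * e + alpha * l for e, l in zip(erow, lrow)]
        for erow, lrow in zip(exit_frame, lead_frame)
    ]
```

The same weighting could be applied to the matching audio sample points associated with each rendered frame.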
Values of certain control and content tags, such as key or tempo of music, may be determined by suitable software from the clip.
It is to be emphasized that the foregoing system is merely exemplary, and does not limit the scope of systems that may be implemented within the scope of the inventions described in this application.
Editor Program.
Programs are created from clips by an editor program.
The identity and order of the clips that comprise a program may be defined by a variety of methods varying in the degree of control exercised by the individual user. At the level of least control by the user, the user defines only the overall functional requirements for the program. For example, in the assembly of programs for education of patients about health issues, the user may define only the demographic characteristics of the intended recipient of the program and the information to be conveyed, e.g., management of a particular condition. In one embodiment, the system may simply select appropriate clips from the database and apply suitable transitions. Alternatively, an expert system included within the editor program selects and orders one or more suitable templates as shown in block 505 of
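The core selection mechanism recited in the claims, filling each temporal position of a template with a clip whose tags correspond to that position's definitions, can be sketched as follows. The set-based tag matching and first-match policy are simplifying assumptions.

```python
def fill_template(template, clip_db):
    """Fill each temporal position in a template with the first clip
    whose tags satisfy that position's definitions (set containment)."""
    program = []
    for position in template:
        for clip in clip_db:
            if position["requires"] <= clip["tags"]:
                program.append(clip["id"])
                break
    return program
```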
The tags may include information regarding such attributes as luminance, chrominance, music tempo and key, and colors, at the beginning and end of each clip. Alternatively, a MIDI record of the audio portion of the clip may be analyzed. The editor program may apply expert system software or truth tables to determine whether a direct transition between any two clips meets aesthetic requirements. If not, the system may identify a suitable type of transition for both clips, add a bridge between the clips, or determine that one of the clips must be discarded. For example, if the keys of the music in two adjacent clips are incompatible, the system may add a bridge consisting of a burst of percussion in the audio portion of the program.
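A toy version of the aesthetic compatibility decision, examining music key and luminance at the join and choosing a direct cut, a dissolve, or a bridge, might look like the sketch below. The compatibility table, threshold, and field names are assumptions; the document describes expert systems or truth tables performing this role.

```python
# Toy compatibility table (e.g. circle-of-fifths neighbors).
COMPATIBLE_KEYS = {("C", "G"), ("G", "C"), ("G", "D"), ("D", "G")}

def plan_join(clip_a, clip_b, max_luma_jump=0.3):
    """Decide how to join two adjacent clips: a direct cut when keys
    and luminance are compatible, a dissolve when only luminance
    clashes, and a percussion bridge when the music keys are
    incompatible."""
    keys = (clip_a["key"], clip_b["key"])
    if clip_a["key"] != clip_b["key"] and keys not in COMPATIBLE_KEYS:
        return "percussion_bridge"
    if abs(clip_a["end_luma"] - clip_b["start_luma"]) > max_luma_jump:
        return "dissolve"
    return "cut"
```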
The viewer or user may directly select templates, modules, or sequences, depending on the degree of control desired. Decisions regarding specific production choices, global stylistic choices, content, or order can be captured by the system through expert systems, truth-table or other decision-tree programming, or through specific decision-collection fields provided to content developers. Expert systems, truth tables and other systems may be used to create the tags associated with the clips.
Use of expert systems or decision-capture systems is of interest because the system might organically evolve stylistic tendencies which mimic or mirror those of creative professionals. Templates might be provided to content providers or even to end-users which would allow a specific style of editing, audio cutting or mixing, or program formation; perhaps a template may be provided that is associated with an individual editor. Such decision-capture systems already exist for other uses and could be adapted to the assembly of audio and video.
In one embodiment, the user creates queries depending on the qualities desired for the organization levels to be identified. The editor program then identifies suitable templates, modules or sequences based on the queries. The relative importance of different items in the queries may be weighted. The weighting may be supplied by the user in response to suitable prompts, or may be generated automatically by the system based on existing programming. Using suitable relationships among data items, the user may be presented by the editor program with one or more templates, modules or sequences that represent a fundamental response, another set that represent a secondary response, a third set that represent a tertiary response, and so forth.
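The weighted-query ranking described above can be sketched as follows; the ordering into fundamental, secondary, and tertiary responses corresponds to descending score groups. The dictionary representations are assumptions for illustration.

```python
def rank_candidates(candidates, weighted_query):
    """Score each template/module/sequence by the weighted sum of query
    items its attributes satisfy, ordered best first: top scorers form
    the fundamental response, the next group the secondary response,
    and so forth."""
    def score(candidate):
        return sum(w for item, w in weighted_query.items()
                   if item in candidate["attributes"])
    return sorted(candidates, key=score, reverse=True)
```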
Assembly Program.
Once the set of clips has been defined, the assembly of the clips from the library takes place. This is accomplished by an assembly program. Referring to
The assembly program may also dictate particular transitions. For example, asymmetrical transitions in audio are advantageous. A leading-edge audio transition which is exponential and short, and a trailing-edge audio transition which is linear and long, are preferred. The video transitions need not match the audio transitions.
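The preferred asymmetrical audio envelopes, a short exponential fade-in against a long linear fade-out, can be sketched as gain functions of time. The specific lengths and the normalization of the exponential curve are assumptions for illustration.

```python
import math

def lead_in_gain(t, length=0.25):
    """Exponential, short fade-in for the incoming clip's audio,
    normalized so gain runs from 0.0 at t=0 to 1.0 at t=length."""
    if t >= length:
        return 1.0
    return (math.exp(t / length) - 1.0) / (math.e - 1.0)

def trail_out_gain(t, length=1.5):
    """Linear, long fade-out for the outgoing clip's audio."""
    return max(0.0, 1.0 - t / length)
```

Because the two envelopes overlap, the incoming audio arrives quickly while the outgoing audio recedes gradually, and the video transition may be chosen independently.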
The following is a technique which is believed by the inventor to be suitable for improving performance of compression of the programming, for use in compression formats, of which MPEG is an example, which achieve performance using predictive compression which includes both forward and backward looking frames. The technique is the elimination of the complex frames from the MPEG stream using the last-in and first-out video marker tags to determine the amount of leading and trailing video which might, in a worst-case scenario, participate in a visual effect. By eliminating these (P and B) frames, it is possible to employ a partial decoding of the (Huffman) algorithm, rather than a full decode/encode cycle. Additionally, the elimination of these PB frames allows impunity in the concatenation points employed. This freedom is bought at the price of increased bandwidth requirements, but only for small segments of video rather than entire clips.
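The frame-elimination idea can be sketched abstractly as re-coding predictive frames near the clip boundaries so that concatenation points land only on independently decodable frames. Representing a stream as a list of frame-type letters is a simplification; real MPEG processing operates on the compressed bitstream.

```python
def intra_code_edges(frame_types, window):
    """Re-code P and B frames within `window` frames of either clip
    boundary as I (intra) frames, so any concatenation point in that
    region lands on independently decodable frames, at the cost of
    extra bandwidth for those short segments only."""
    n = len(frame_types)
    return ["I" if (i < window or i >= n - window) else f
            for i, f in enumerate(frame_types)]
```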
Another technique is applicable to compression formats, such as MPEG, in which the chrominance data is subsidiary to the luminance data. This technique involves emulating the MPEG transitions by extracting luminance-only data. Because the chrominance data ‘rides on’ the luminance, this data can still be employed to generate usable dissolves. Using this technique, the full decode process can be further reduced, and thus accelerated, by processing the luminance in averaged blocks of pixels. These pixel blocks are employed in the encode process. Even without the use of MPEG, it is possible that some or all of these shortcuts might be effectively employed to accelerate the creation of dissolves without use of a full decode/recode cycle.
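The block-averaged luminance dissolve can be sketched as follows, assuming the luminance plane has already been extracted as a grid of sample values. The block size and the linear blend are illustrative choices, not requirements of the technique.

```python
def block_average(luma, block):
    """Reduce a luminance plane (list of rows of samples) to coarse
    block averages, cutting the work of the dissolve computation."""
    h, w = len(luma), len(luma[0])
    out = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            vals = [luma[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

def dissolve(luma_a, luma_b, t):
    """Linearly blend two block-averaged luminance planes; t in [0, 1]."""
    return [[(1 - t) * a + t * b for a, b in zip(ra, rb)]
            for ra, rb in zip(luma_a, luma_b)]

# Two tiny 2x4 luminance planes reduced to 2x2 blocks, then blended.
a = block_average([[0, 0, 255, 255], [0, 0, 255, 255]], block=2)
b = block_average([[255, 255, 0, 0], [255, 255, 0, 0]], block=2)
print(dissolve(a, b, 0.5))  # midpoint of the dissolve
```

In this sketch only the luminance participates in the blend; in the technique described, the chrominance is carried along with it rather than processed separately.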
Viewer Database.
Separately from the database of media assets described above, there may be provided a database of viewer information. This database defines the identity of each viewer, including name and address information, and may also include social, economic and medical data regarding the viewer, the history of items viewed by the viewer, and a file of preference information for the viewer. In an on-line environment, the viewer may be prompted to provide viewer identification information which may be entered into a database or may be retained temporarily as a viewer data record for use in creation of customized programming.
Verification of Viewing.
The verification of viewing of the delivered video program is difficult to accomplish. It is important for the well-being of the viewer when the program contains therapeutic or training video programming, and useful for all content to assess the effectiveness of delivering information via customized video programming. An effective system would: (a) allow the distributor of the programming to know if no effort had been made on the part of the recipient to watch the delivered material; (b) provide disincentives for fast-forwarding and skipping through the program; (c) allow for the confidential verification of the particular program watched, thus allowing for the confidential generation of viewer-comprehension testing; (d) provide incentives, perhaps monetary or material, for the successful viewing of the material; and (e) maintain confidentiality of the viewer and/or of the precise content of the delivered program. The following viewing verification system achieves all of the objectives outlined above.
Referring to
As indicated by block 715, the viewer is provided with a method for response. The response method includes a suitable means to record the numbers, characters, colors, shapes or other cues contained within the sequence. For viewers who are receiving the programming online through a modem or Internet link, this might be a window which remains open and active during the playing of the video sequence, or which becomes open and active upon the presentation of the appropriate superimposed strings, and which allows the viewer the opportunity to record that sequence. For videotapes delivered to the viewer, a card may be delivered together with the videotape. The card may contain boxes to be checked, blanks to be filled in, or choices of opaque coverings to be removed. Suitable instructions will be provided in the video for the viewer to scratch off or mark the card in the appropriate place.
In the case of scratch-off cards, or other pre-prepared cards, it is significant that colors and/or shapes might be employed rather than known characters, or as elements in known characters, like the commonly employed segments of characters used in LED displays. Some action is required of the viewer to cause these characters or signs to be recorded or transferred to the recording area. These cards might bear sponsorship or advertising data, and this data might also generate or contain further fields. These cards, or on-line components, might come from a third-party source. For instance, the on-line window might be generated for, or by, an advertiser or vendor. A paper-based recording space, which might be contained in a booklet, magazine, newspaper or as part of the packaging itself, may also be generated by or for an advertiser or vendor. Such a recording space may be a peel-off videocassette label or a peel-off, tear-off, scratch-off, or stick-on component of the videotape packaging or its electronic analog. This component might also contain one or more other characters, colors, icons or strings which might be included in the final string, or used separately. This component might also be custom-generated as part of the custom packaging. For instance, the sequence ABC- might already occupy a place in the sequence-recording fields, such as ABC-xxx-xxx-xxx. The ABC- might be visible or might require manual intervention or problem solving to reveal. This problem solving might be related to the material presented on the video. For example, a scratch-off grid of multiple-choice questions, or an on-line emulation of such a grid, presented by the programming or packaging itself, might yield a unique sequence of characters or signs.
The final uses of this character string are manifold, but the overarching intention is to provide a motivational arena for the recording of this verification data. The verification data, unless used in an on-line environment where the respondent is known to the system, usually contains fixed identifying data used to confirm the identity of either the viewer/respondent or the specific program viewed. It may also include, or consist solely of, data strings generated in response to queries. It may further include gaming fields and/or third-party sponsored information.
In the case of traditional videotaped or other passively consumed media, the response data can be collected swiftly by 800-number, or standard toll-line, call-in to an automated collection system which has been programmed in a suitable manner to collect response data. The response data can also be collected by mail-in, and even correlated with sealed-envelope or other hidden-message systems, such as peel-open and scratch-off answer fields which provide interim instructions. For viewers with suitable computer equipment, the response data can be returned by electronic mail. The receipt of responses is indicated by box 720. The end result, though, is, at minimum, the verification of active viewing of the temporal stream of information. The responses are compared to cues and expected responses, as indicated by block 725. At maximum, the information is also capable of verifying time of viewing, comprehension of content, and attitudes regarding content (by asking for reactions in some or all of the fields), as well as motivating viewers with the possibility of prizes, secrets, entry into further lotteries or drawings, or other perceived rewards. The information can be used to instruct viewers to view the material again, make calls or other responses to the system, or view or read additional, supporting or further material. All of this, significantly, can be done while maintaining complete viewer privacy. It may also be important that some of the feedback can be local to the viewer, while other portions of the response strings may involve system collection of responses.
Techniques may be used for distinguishing between incorrect responses resulting from a failure to view or comprehend the programming and an error in completing a response card. Techniques may also be used for identifying the most likely error in optical character recognition review of responses. The sequence of numbers, letters, or other visual or audio cues which is generated for and inserted in the programming is also used to generate computer-readable printed information, such as bar code, corresponding uniquely to that sequence. Customized codes may be developed to correspond to cues other than alphanumeric characters. The printed information is applied to a return card supplied with a videocassette on which the programming has been recorded. The printed information may be encoded, by altering the order of codes or including additional codes or other known techniques, for security purposes. Correct answers to question fields may be embedded in the bar coded string.
Acceptable tolerances for error within which the respondent will be deemed to have viewed and/or adequately comprehended the programming may be determined for each position of the character string. Numerous considerations may be used in determining these tolerances. Exemplary considerations are the following: (1) the first position of the string is prone to a heightened incidence of general user error; (2) fields which call for the answering of questions or scratching off of a layer of obscuring material are subject to a heightened incidence of general user error; (3) for positions known to include only a limited number of possible responses, statistical weighting can be employed to assign unclear responses into one of the known responses, e.g., if the possible responses are to write out one of the capital letters A, B, C and D, and an OCR program reads a response as the numeral 0, the response will be assigned as the letter D; (4) in a system employing scratching off of selected areas on the face of a response card, columns which contain more than one removed field may be accepted where one field is correct and not more than two fields are removed. A weighted tolerance system may also employ other strategies. For example, a parity field may be derived from a modulo-cycling of the total character string, where the base number of the modulus might be the number of total possible characters potentially employed in a single field of the string. By way of further example, a system may be used which causes a known and repeatable limitation to be placed on the possible use of a given character, derived from an earlier or other character in the string. The source of such a check character is from within the printed information, whereas the checked characters are from the handwritten or scratch-off fields. For example, the presence of an even number in the first position of a string might dictate the use of an odd number in the second field. 
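The modulo-cycling parity field can be sketched as follows. The sketch assumes a ten-character numeric alphabet, so that the modulus equals the number of possible characters in a single field; the alphabet itself is an illustrative assumption.

```python
ALPHABET = "0123456789"  # assumed: ten possible characters per field

def parity_char(cues):
    """Derive a parity field by modulo-cycling the cue string: the sum
    of the character values, taken modulo the alphabet size."""
    total = sum(ALPHABET.index(c) for c in cues)
    return ALPHABET[total % len(ALPHABET)]

def with_parity(cues):
    """Append the parity character to the generated cue string."""
    return cues + parity_char(cues)

def parity_ok(string_with_parity):
    """Check a returned response string against its parity character."""
    return parity_char(string_with_parity[:-1]) == string_with_parity[-1]

code = with_parity("4719")  # 4+7+1+9 = 21, 21 mod 10 = 1, so "47191"
print(code, parity_ok(code))
```

A single misread or miswritten field then shifts the parity by a detectable amount, which is what the cross-referencing strategy in the following paragraphs exploits.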
Alternatively, a template could be created which would dynamically or in a predetermined manner limit the possible characters or other cues which could be employed in each position of the generated string. This data can be used to optimize the OCR function of each position of the response string, as well as to dynamically shift the error tolerance on a position-by-position basis.
By creating a master parity character and a known pattern in the string of characters, it is many times more probable that a single incorrectly read character can be isolated by the logical cross-reference of the two strategies set forth above. For example, if the parity check is incorrect by one value, and the fourth position of the string should be an odd number but is read as the numeral 8, it is likely that the fourth field was misread, since by design it cannot contain an even number. When such techniques are combined with dynamically weighted general error correction, the scheme becomes tolerant of user inaccuracies and OCR errors.
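Cross-referencing the parity check with the per-position constraints to isolate a single misread character might look like the following sketch. The even/odd position rule and the simple digit-sum parity used here are hypothetical stand-ins for whatever constraint pattern and parity scheme a real system would employ.

```python
def locate_misread(read, parity_check, position_allowed):
    """Flag positions whose character violates its per-position
    constraint; if the parity check also fails and exactly one position
    is flagged, that position is the likely single OCR misread."""
    suspects = [i for i, c in enumerate(read) if not position_allowed(i, c)]
    if not parity_check(read) and len(suspects) == 1:
        return suspects[0]
    return None  # either the string is consistent or the error is ambiguous

# Assumed illustrative rules: even positions must hold even digits and
# odd positions odd digits; parity requires the digit sum to be 0 mod 10.
allowed = lambda i, c: int(c) % 2 == i % 2
parity = lambda s: sum(map(int, s)) % 10 == 0

# "2388": the fourth field (index 3) should be odd but reads as 8,
# and the parity check fails, so index 3 is flagged as the misread.
print(locate_misread("2388", parity, allowed))  # prints 3
```

When the flagged position is combined with dynamically weighted error correction, the reader can substitute the most statistically likely character at that position rather than rejecting the whole response.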
For the presentation of video-superimposed characters, a static template could also be used which matches an accompanying printed template. Alternatively, video-generated characters could be derived dynamically from the character sequence used to identify the viewer. In this scenario, multiple-choice questions asked of the viewer would be presented on screen as would be the appropriate multiple-choice answers, but the characters used to identify the answers would be dynamically generated in response to the viewer or programming identification sequence. For example, if an algorithm generated the output X, then the characters A, B, C and D would be generated next to four multiple-choice answers appearing on screen. If the algorithm generated an output of Y, the answers would be assigned the numerals 1, 2, 3 and 4. Such a technique has the advantage of allowing the video assets containing test materials to remain mutable while responding dynamically to the needs of the error-correcting/checking system. It will be understood that the foregoing techniques may be employed with any visual, sonic or other cue.
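The dynamic assignment of answer labels from the identification sequence might be sketched as follows. Hashing the ID into one of several label sets is an illustrative algorithm, not one prescribed by the specification; the label sets themselves are hypothetical.

```python
def answer_labels(id_sequence, n_choices=4):
    """Choose the label set for on-screen multiple-choice answers from
    the viewer or programming identification sequence, so that the video
    asset containing the test material can remain mutable while the
    labels respond to the error-checking system's needs."""
    label_sets = [
        ["A", "B", "C", "D"],   # the algorithm's 'X' output in the text
        ["1", "2", "3", "4"],   # the algorithm's 'Y' output
        ["W", "X", "Y", "Z"],
    ]
    # Illustrative algorithm: hash the ID into one of the label sets.
    index = sum(ord(ch) for ch in id_sequence) % len(label_sets)
    return label_sets[index][:n_choices]

print(answer_labels("ABC-123"))
```

The same function would be run when reading back the response card, so that the recorded label can be mapped to the underlying answer regardless of which label set the viewer saw.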
Various points of the system that are believed by the inventor to be novel will now be emphasized. However, the enumeration of certain features believed to be novel should not be construed as implying that other features are not novel.
The system provides for a database of assets which must be arranged temporally in order to be used, but which does not dictate the temporal organization. Rather, the audio and video assets in the database are characterized by such parameters as content, aesthetic features, and suitability to viewers with certain demographic or psychographic characteristics. The user can select assets through a suitable query without attention to temporal organization. The organization of assets contrasts with the organization of assets in such fields as video games. In a video game, the permitted temporal orders of the assets are predetermined by a strict branching system. Such a branching system defines precisely which asset may be permitted to follow which under all possible circumstances. The database of the present system may be termed amorphous, in contrast to the rigid organization of a database having a branching system. The general principle can be extended to other types of assets that require temporal organization in order to be used. For example, a database can be constructed for use with still photographs to be arranged into a program with an audio track. Interactive assets, such as two-dimensional or three-dimensional models and graphical user interfaces, can also be catalogued in a database of this type. The present system also contrasts with traditional databases in that it mines non-linear assets for presentation in a linear manner, the linear manner not being contained within the assets. The use of a moderation layer that limits the assets responsive to the query that are actually presented to the user contrasts with traditional database organization, in which all items responsive to a query are presented to the user.
The system also provides for the automatic creation of new programming, uniquely configured to the requirements of the viewer, but which may be viewed passively. In contrast, in such fields as video games, while the creation of unique programming occurs, the viewer must interact with the system in order to create the programming.
The system is also capable of arranging assets in an order, and creating transitions between assets, to create a concatenated stream of audiovisual programming that has transitions that appear to the viewer to have been edited by a human editor. Even without temporal order, the assets contain sufficient transition information, and the system contains default transition choices, to permit assembly of programming with fluid transitions without the intervention of a human editor. Numerous methods are available for ordering the assets in such a program. As discussed, assets may be ordered by using a template that imposes sequential requirements on the assets, an expert system, a truth table, or a human editor may decide the order of the assets. The attributes of the assets impose certain limitations on the ordering of the assets. For example, it may not be possible for aesthetic reasons to place a certain clip immediately before or after a certain other clip.
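The ordering constraint described above, in which aesthetic attributes may forbid placing one clip immediately before or after another, can be sketched as a search over candidate orderings. The function name and the forbidden-pair representation are hypothetical; a brute-force search is shown only because one module contains a handful of clips.

```python
from itertools import permutations

def order_assets(assets, forbidden_pairs):
    """Return an ordering of the selected assets that avoids the
    aesthetically forbidden adjacencies (x, y) meaning 'x may not
    immediately precede y', or None if no such ordering exists."""
    for order in permutations(assets):
        if all((a, b) not in forbidden_pairs
               for a, b in zip(order, order[1:])):
            return list(order)
    return None  # no valid ordering; the system must reselect assets

# Hypothetical clips and adjacency restrictions.
print(order_assets(["intro", "demo", "outro"],
                   {("intro", "outro"), ("demo", "intro")}))
```

In practice the template, expert system, truth table, or human editor mentioned above would supply most of the order, with a check of this kind enforcing the remaining adjacency limitations.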
The individual assets may have demographic characteristics. For example, an audiovisual clip may feature actors of a particular ethnic group, geographic origin, age, or other demographic characteristic. As a result, the assets may be selected based on demographic search criteria.
The system may permit, in an on-line environment, switching between a predetermined program and interactive video. For example, the system may provide delivery of a predetermined audiovisual program so long as the viewer does not seek to interact with the system, e.g., by positioning a mouse pointer on the screen and clicking. At this point, the system may select assets that are appropriate to the particular information that the viewer appears to be seeking based on the information on the screen at the time. The system will then generate suitable transitions and add those assets to the programming. For example, the program may be a tour of a three-dimensional model of the heart. When the user moves a pointer or mouse to a particular portion of the screen, and clicks on the screen, the system may select, for example, assets incorporating more detailed information on certain portions of the heart corresponding to material on the portion of the screen where the mouse was located when clicked. The system applies suitable transitions to the assets and incorporates these assets into the programming interactively in response to mouse clicks or other suitable input from the viewer. This interactive generation of programming for passive viewing contrasts with existing systems that include a predetermined loop if the viewer does not provide inputs.
The system may also be employed in an interactive system that leads the viewer to select a desired sequence. The system may, for example, provide more interesting visual data in areas of the screen that lead to the selection of material that the viewer should see, or provide smooth transitions from audiovisual assets incorporated in the program as a result of viewer input to audiovisual assets incorporated in the program to achieve a predetermined purpose, e.g., to convey certain information to the viewer.
While specific embodiments of the invention have been described in detail, it will be appreciated by those skilled in the art that various modifications and alternatives to those details could be developed in light of the overall teachings of the disclosure. Accordingly, the particular arrangements of the system and method disclosed are meant to be illustrative only and not limiting to the scope of the invention, which is to be given the full breadth of the following claims, and any and all embodiments thereof.
Assignment records: Feb 28, 2002, NTech Properties, Inc. (assignment on the face of the patent); Mar 14, 2008, Dwight Marcus assigned his interest to NTech Properties, Inc. (Reel 020679, Frame 0981); Dec 21, 2023, NTech Properties, Inc. granted a security interest to HPCF Litigation Finance US I LLC (Reel 066199, Frame 0758).