Interactive interfaces to video information provide a displayed view of a quasi-object called a root image. The root image consists of a plurality of basic frames selected from the video information, arranged such that their respective x and y directions are aligned with the x and y directions in the root image and the z direction in the root image corresponds to time, such that base frames are spaced apart in the z direction of the root image in accordance with their time separation. The displayed view of the root image changes in accordance with a designated viewing position, as if the root image were a three-dimensional object. The user can manipulate the displayed image by designating different viewing positions, selecting portions of the video information for playback and by special effects, such as cutting open the quasi-object for a better view. A toolkit permits interface designers to design such interfaces, notably so as to control the types of interaction which will be possible between the interface and an end user. Implementations of the interfaces including editors and viewers are also disclosed.

Patent
   RE45594
Priority
Jul 03 1997
Filed
Jan 28 2011
Issued
Jun 30 2015
Expiry
Jul 03 2017

TERM.DISCL.
EXPIRED
0. 33. A method of delivering video over a network, comprising:
storing video data representing a video sequence in a memory at a first device;
providing a hyper-media container in a primary storage format, the hyper-media container including data associated with the video data; and
sending the video data and the hyper-media container over the network to a second device, the hyper-media container being sent in a secondary storage format, the secondary storage format being a format different than the primary storage format and being a format that is readable at the second device,
wherein the hyper-media container in the secondary storage format includes address information of annotation data.
0. 39. A method for receiving data over a network, comprising:
receiving video data and a hyper-media container over the network from a device, the video data representing a video sequence, the hyper-media container including data associated with the video data, and the hyper-media container being received in a secondary storage format; and
storing the hyper-media container in the secondary storage format,
wherein the hyper-media container is received after the hyper-media container is stored at the device in a primary storage format, the secondary storage format is a format different than the primary storage format and is a format that is readable at the apparatus, and the hyper-media container in the secondary storage format includes address information of annotation data.
0. 34. An apparatus for transmitting data over a network, comprising:
a storage section configured to store video data and a hyper-media container, the video data representing a video sequence, the hyper-media container including data associated with the video data, and the hyper-media container being stored in a primary storage format; and
a transmitting section configured to transmit the video data and the hyper-media container over the network to a device, the hyper-media container being transmitted in a secondary storage format, the secondary storage format being a format different than the primary storage format and being a format that is readable at the device,
wherein the hyper-media container in the secondary storage format includes address information of annotation data.
0. 35. An apparatus for receiving data over a network, comprising:
a receiving section configured to receive video data and a hyper-media container over the network from a device, the video data representing a video sequence, the hyper-media container including data associated with the video data, and the hyper-media container being received in a secondary storage format; and
a storage section configured to store the hyper-media container in the secondary storage format,
wherein the hyper-media container is received at the receiving section after the hyper-media container is stored at the device in a primary storage format, the secondary storage format is a format different than the primary storage format and is a format that is readable at the apparatus, and the hyper-media container in the secondary storage format includes address information of annotation data.
0. 37. A non-transitory computer readable medium encoded with instructions which, when executed by a processor, cause the processor to execute a method for transmitting data over a network, said method comprising:
storing video data and a hyper-media container in a memory, the video data representing a video sequence, the hyper-media container including data associated with the video data, and the hyper-media container being stored in a primary storage format; and
transmitting the video data and the hyper-media container over the network to a device, the hyper-media container being transmitted in a secondary storage format, the secondary storage format being a format different than the primary storage format and being a format that is readable at the device,
wherein the hyper-media container in the secondary storage format includes address information of annotation data.
0. 38. A non-transitory computer readable medium encoded with instructions which, when executed by a processor of an apparatus, cause the apparatus to execute a method for receiving data over a network, said method comprising:
receiving video data and a hyper-media container over the network from a device, the video data representing a video sequence, the hyper-media container including data associated with the video data, and the hyper-media container being received in a secondary storage format; and
storing the hyper-media container in the secondary storage format at a memory,
wherein the hyper-media container is received after the hyper-media container is stored at the device in a primary storage format, the secondary storage format is a format different than the primary storage format and is a format that is readable at the apparatus, and the hyper-media container in the secondary storage format includes address information of annotation data.
0. 36. A system of delivering video over a network, comprising:
a first storage section configured to store video data representing a video sequence;
a processor configured to generate a hyper-media container, the hyper-media container including data associated with the video data, and the hyper-media container being generated into a primary storage format;
a transmitting section configured to transmit the video data and the hyper-media container over the network to a device, the hyper-media container being transmitted in a secondary storage format, the secondary storage format being a format different than the primary storage format and being a format that is readable at the device; and
a receiving section configured to receive the video data and the hyper-media container over the network; and
a second storage section configured to store the hyper-media container in the secondary storage format,
wherein the hyper-media container in the secondary storage format includes address information of annotation data.
0. 1. A method of delivering video over a network, comprising:
receiving video data representing a video sequence;
generating a hyper-media container containing data associated with the video data;
storing the video data;
storing the hyper-media container;
providing the video data and the hyper-media container available over the network to a remote user.
0. 2. The method as set forth in claim 1, wherein generating the hyper-media container comprises providing annotations to the video data.
0. 3. The method as set forth in claim 1, wherein generating the hyper-media container includes providing segmentation data associated with the video data.
0. 4. The method as set forth in claim 1, further comprising controlling access to the video data and the hyper-media container.
0. 5. The method as set forth in claim 4, wherein controlling access comprises controlling access to annotations.
0. 6. The method as set forth in claim 4, wherein controlling access comprises controlling access to annotation packs.
0. 7. The method as set forth in claim 4, wherein controlling access comprises controlling access to versions of annotations.
0. 8. The method as set forth in claim 1, wherein the providing the video data and the hyper-media container available to the remote user comprises publishing the video data and the hyper-media container.
0. 9. The method as set forth in claim 1, wherein the providing the video data and the hyper-media container available to the remote user includes distributing at least the hyper-media container.
0. 10. The method as set forth in claim 9, wherein the distributing comprises providing the hyper-media container available on-demand.
0. 11. The method as set forth in claim 9, wherein the distributing comprises streaming the video data to the remote user over the network.
0. 12. The method as set forth in claim 9, wherein the distributing comprises immerse streaming the video data to the remote user over the network.
0. 13. The method as set forth in claim 9, wherein the distributing comprises broadcasting the hyper-media container over the network to the remote user.
0. 14. The method as set forth in claim 1, further comprising indexing the video data.
0. 15. The method as set forth in claim 1, further comprising receiving modifications to one of the hyper-media container and the video data from the remote user and modifying the corresponding one of the hyper-media container and video data.
0. 16. The method as set forth in claim 1, further comprising allowing the remote user to collaborate with other remote users on at least one of the video data and the hyper-media container.
0. 17. The method as set forth in claim 16, further comprising maintaining version control of modifications to at least one of the video data and the hyper-media container.
0. 18. The method as set forth in claim 1, wherein generating the hyper-media container comprises including an identification of a location for the video data associated with the hyper-media container.
0. 19. The method as set forth in claim 1, wherein generating the hyper-media container comprises including an identifier for the video data associated with the hyper-media container.
0. 20. The method as set forth in claim 1, wherein the data in the hyper-media container that is associated with the video data comprises an identifier for a data object.
0. 21. The method as set forth in claim 1, wherein the receiving comprises receiving the video data over the network.
0. 22. The method as set forth in claim 1, wherein the receiving comprises receiving the video data from a second remote user.
0. 23. The method as set forth in claim 1, further comprising enabling the remote user to send the hyper-media container directly to a second remote user.
0. 24. A method of delivering video over a network, comprising:
receiving video data representing a video sequence;
generating a hyper-media container containing data associated with the video data;
storing the video data;
storing the hyper-media container;
providing the video data and the hyper-media container available over the network to a remote user;
wherein generating the hyper-media container includes analyzing the video data and associating results of the analyzing with the hyper-media container.
0. 25. The method as set forth in claim 24, wherein the analyzing comprises selecting an object from within the video.
0. 26. The method as set forth in claim 24, wherein the analyzing includes extracting an object from within the video.
0. 27. The method as set forth in claim 24, wherein the analyzing includes ranking frames of the video.
0. 28. The method as set forth in claim 24, wherein the analyzing includes analyzing camera motion.
0. 29. The method as set forth in claim 24, wherein the analyzing includes generating zooming effects.
0. 30. The method as set forth in claim 24, wherein the analyzing includes generating scripts effects.
0. 31. The method as set forth in claim 24, wherein the analyzing includes generating special effects.
0. 32. A method of delivering video over a network, comprising:
receiving video data representing a video sequence;
generating a hyper-media container containing data associated with the video data;
storing the video data;
storing the hyper-media container;
providing the video data and the hyper-media container available over the network to a remote user;
the method further comprising:
receiving modifications to one of the hyper-media container and the video data from the remote user and modifying the corresponding one of the hyper-media container and video data; and
publishing versions of the modifications from the remote user to other remote users.
0. 40. The method of delivering video according to claim 33, wherein the address information is a link to an application.
0. 41. The apparatus according to claim 34, wherein the address information is a link to an application.
0. 42. The apparatus according to claim 35, wherein the address information is a link to an application.
0. 43. The system according to claim 36, wherein the address information is a link to an application.
0. 44. The non-transitory computer readable medium according to claim 37, wherein the address information is a link to an application.
0. 45. The non-transitory computer readable medium according to claim 38, wherein the address information is a link to an application.
0. 46. The method for receiving data according to claim 39, wherein the address information is a link to an application.


3.2.2 Service Handler for the OMS
3.2.3 Service Handler for NetShow
3.2.4 Service Handler for NetShow Theater

The NetShow Theater server is completely different from the NetShow server. Its SDK gives access to several ActiveX controls that allow management of the whole server. In particular, what the NetShow Theater SDK calls the MediaServer object is an ActiveX control that permits client applications to retrieve useful information about the remote players: which title they are streaming, the available bandwidth, etc. The service handler for NetShow Theater internally uses this ActiveX control to calculate a global load measure.

4 Access Control

Besides replication management, the Obvious Site Manager also handles access control to the site to which it belongs. For a given site, client applications must first connect to the corresponding Obvious Site Manager for access control.

In addition to protocol-specific access control mechanisms, the Obvious Network Architecture defines its own security schema based on the following:

FIG. 19C enumerates the network protocols used during site management. The first protocol concerns the interaction between client applications and the Obvious Site Manager. The second and third protocols concern the interaction between the Obvious Site Manager and the Obvious Load Manager. Two protocols are defined: one for the management of the mapping of the services, the other for the management of load balancing. In the first one, the Obvious Site Manager acts as a server and answers to OLM's requests. In the other one, the Obvious Load Manager acts as a server and answers to OSM's requests.

6 Implementation

6.1 Modules

Site management is implemented as a set of 4 modules:

FIG. 20 gives the hierarchy of these modules. The example depicted in FIG. 20 corresponds to the one described in the previous section.

The Obvious Site Manager machine contains the ISAPI module that handles requests from client applications. This ISAPI module manages access control and load balancing. The protocol used between client applications and the Obvious Site Manager is described later in this chapter. The Obvious Site Manager machine also contains an NT Service that polls for the available services, retrieves their load, and updates database tables in the OIS.

The three other machines each contain one NT Service for the Obvious Load Manager, at least one service process (OMS, NS, RM, NST), and at least one Load Handler DLL. Each OLM NT Service handles requests from the Obvious Site Manager.

6.2 Interaction Between the OLM and the Load Handlers

The load handler of a given service must be implemented as a DLL with the following exported functions:

By calling this function, the OLM initializes the load handler. It also gets an opaque handle that contains service-specific data. The OLM does not interpret the content of this handle and simply uses it for subsequent calls to the LH_GetLoad and LH_Release functions.

6.2.2 LH_Release

The OLM calls this function before unloading the DLL from memory. It allows the load handler to free its resources.

6.2.3 LH_GetLoad

The OLM calls this function to get the current load estimation of the service. The load is returned in the pLoad parameter, as a long value.
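Taken together, these exported functions form a small C interface. The sketch below is illustrative only: the document does not give the exact signatures, so the initialization function's name (LH_Init here), the return types and the handle type are assumptions; only pLoad's type, long, is stated above. A dummy in-memory handler is included so the contract can be exercised.

```c
#include <assert.h>

/* Opaque, service-specific handle; the OLM never interprets its content.
   (Hypothetical definition -- the real type is private to each DLL.) */
typedef void *LH_HANDLE;

/* Assumed signatures for the three exported functions. */
int LH_Init(LH_HANDLE *phHandle);               /* called once at load time   */
int LH_GetLoad(LH_HANDLE hHandle, long *pLoad); /* returns current load value */
int LH_Release(LH_HANDLE hHandle);              /* called before DLL unload   */

/* Dummy in-memory implementation, for illustration only. */
static long g_dummyLoad = 42;

int LH_Init(LH_HANDLE *phHandle)
{
    *phHandle = &g_dummyLoad; /* service-specific data */
    return 0;
}

int LH_GetLoad(LH_HANDLE hHandle, long *pLoad)
{
    *pLoad = *(long *)hHandle; /* the load, as a long value */
    return 0;
}

int LH_Release(LH_HANDLE hHandle)
{
    (void)hHandle; /* nothing to free in this dummy handler */
    return 0;
}
```

A real load handler would replace the dummy body of LH_GetLoad with a service-specific measurement (e.g. the number of connected clients).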

6.3 Load Handler for Real Server G2

The load handler for Real Server G2 uses the Real Server itself to retrieve the number of connected clients. It has been implemented as a monitor plugin, as described in the Real Server SDK documentation. This monitor plugin, implemented as a DLL, registers itself to receive monitoring information from Real Server G2.

At runtime, this DLL is loaded by 2 processes:

The communication between the two corresponding threads (which run in these two separate processes) is implemented using a segment of shared memory. The first thread, running in the RealServer process, writes load values into this shared memory. The second thread, running in the OLM process, reads these values and transmits them, on demand, to the OSM.

The DLL file that implements the load handler for Real Server G2 is called lh_rm.dll.

6.4 Load Handler for NetShow Theater

With NetShow Theater, we do not need to write a plugin to access the internal state of the server (bandwidth, number of clients, etc.). The NetShow Theater SDK describes a set of ActiveX objects that can be used by any client application for managing and tuning the server. For the purposes of the Load Handler, the MediaServer ActiveX control is of interest because it gives direct access to the number of connected players, the available bandwidth, the description of the title being played, etc.

6.5 Load Handler for the OSM

7 Protocol between the OSM and the OLM

The Obvious Load Manager creates a TCP/IP socket on port 15600 and listens for incoming connections from the Obvious Site Manager. Each connection corresponds to one request. Each request has a code, from one of the following values:

The binary message of each request is represented by a structure. The following sections give the details of each structure. All requests are merged in a union structure as follows:

The vRequestType must be initialised to one of the REQ_TYPE_XXX values. Depending on that value, the corresponding entry of the union structure must be properly initialised.
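As a concrete illustration, the union could look like the following C sketch. The REQ_TYPE_XXX numeric values and the per-request field names other than vRequestType are not specified in the document and are assumptions here.

```c
#include <assert.h>
#include <stdint.h>

/* Request codes -- the actual numeric values are assumptions. */
enum {
    REQ_TYPE_GETVERSION = 1,
    REQ_TYPE_GETLOAD    = 2
};

/* Each request structure starts with vRequestType. */
typedef struct {
    uint32_t vRequestType;  /* REQ_TYPE_GETVERSION */
} ReqGetVersion;

typedef struct {
    uint32_t vRequestType;  /* REQ_TYPE_GETLOAD */
    uint32_t vServiceId;    /* hypothetical: identifies the polled service */
} ReqGetLoad;

/* All requests merged in a union; vRequestType selects the active member. */
typedef union {
    uint32_t      vRequestType;
    ReqGetVersion getVersion;
    ReqGetLoad    getLoad;
} OlmRequest;
```

Because every member starts with the same uint32_t, the OLM can read vRequestType first and then interpret the rest of the message accordingly.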

7.1 Getting the OLM version

The OLM version can be retrieved by using a request with the following binary format:

The vRequestType member must be initialized to REQ_TYPE_GETVERSION.

7.2 Retrieving the Current Load

For retrieving the load of a particular service, the OSM sends a request to the OLM. The binary message corresponding to this request is defined in the following structure:

The vRequestType member must be initialized to REQ_TYPE_GETLOAD.

The OLM sends back the following response:

The vLoad member represents the load value, as extracted from the service (via the corresponding service handler). The range of possible values depends on the service. However, low values must correspond to a low load, and high values to a significant load. Load values retrieved from different types of services cannot be compared. Load values from the same type of service are compared by the Obvious Site Manager to determine the best service for a given client session.
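Since loads of the same service type are directly comparable, the OSM's selection step reduces to picking the replica with the smallest load value. A minimal sketch follows; the actual selection policy used by the OSM is not detailed in the document, so this is an assumption about its simplest form.

```c
#include <assert.h>

/* Return the index of the least-loaded service among n replicas of the
   SAME service type, or -1 if n <= 0. Loads from different service types
   must never be mixed in one call, since they are not comparable. */
static int pick_least_loaded(const long *loads, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (best < 0 || loads[i] < loads[best])
            best = i;
    }
    return best;
}
```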

8 Protocol Between the OLM and the OSM

8.1 Concepts

An OLM can send its status to its parent OSM. This allows dynamic tracking of changes in the configuration of the services. When a new OLM is installed and deployed, it automatically registers itself with the OSM, which will in turn add a corresponding entry in the OIS database. If a service stops (due to a failure or a manual operation by the administrator), the OLM sends the appropriate information to the OSM, which updates the OIS database accordingly.

8.2 Protocol Specification

8.2.1 Getting the OSM Version

The OLM can get the version of the remote OSM by sending a query whose binary structure is defined as follows:

8.2.2 Registering a Status Change

When the status of the OLM changes, it sends a message to the OSM. This message is defined by the following structure:

9 Protocol Between Client Applications and the OSM

9.1 Concepts

The protocol between client applications and the OSM is built on top of the HTTP protocol. The OSM module responsible for interacting with these client applications is implemented as an ISAPI script for Internet Information Server.

Three kinds of client applications can access the OSM:

This request allows a client application to open a runtime session. After performing access control and replication management, the OSM returns the necessary information that will allow the client application to access a video server and an OMS.

Syntax

Code values Client application
0 to 9 Obvious Media Viewer (OMV)
10 to 19 Obvious Media Editor (OME)
20 to 29 Obvious Media Manager (OMM)
30 to 39 Obvious Java Viewer (OJV)
40 to 49 Obvious Web Editor (OJE)

This request allows a client application to get information about the Obvious Asset Manager.

Syntax

This request has no parameters.

Response

9.2.3 GetOAS

This request allows a client application to get information about the Obvious Administration Server.

Syntax

This request has no parameters.

Response

VII. The Obvious Media Server

1 Concepts

The Obvious Media Server is a server that allows client applications to view OBVIs. It is a runtime component in the sense that it is involved when an OBVI is opened and viewed from a client application. Here, “client application” refers to any application, developed by Obvious Technology (OMM/OME/OMV, Obvious Java Viewer) or a third party, that is able to open, visualise or edit OBVIs.

The main goal of the Obvious Media Server is to serve metadata, images, structure, and annotations (it also accomplishes various other tasks that will be described later).

The Obvious Media Server acts as a service. It can be replicated and is subject to access control. Thus, it is handled by the Obvious Site Manager and has an entry in the SERVICE table of the OIS. An Obvious Load Manager can monitor its activity via a specific service handler.

An Obvious Media Server can handle several sites. This configuration is done during the installation process. An OMS internally manages a mapping between sites and DSNs, each DSN representing a connection to the OIS database of the corresponding site.

Serving metadata, structure information and annotations is a matter of extracting the information from the OIS database and sending it to the client in the appropriate format. However, serving images involves a more complex mechanism. The following sub-section will focus on that specific task.

2 Serving Images

For serving images, the Obvious Media Server uses an Image Proxy File (IPF). The IPF file contains the images that can be distributed by the OMS. When it receives a request for a specific image, the OMS determines the location of the IPF file corresponding to the desired media file (by looking at the OIS database). Then, it extracts the requested frame, scales it if necessary, and sends it to the client application.

In the simplest case, the IPF file is just the original video file. However, to improve performance, it is more efficient to build a special file, called an OBF, that the OMS will use for extracting the requested frames. The OBF is designed to be a very efficient way of distributing video images.

On the other hand, building the OBF file is a time-consuming process and implies a disk space overhead. Experiments show that the OBF file has approximately the same size as the original video file. In some cases it may be better to let the OMS use the original video file.

The current implementation of the Obvious Media Server can use both methods.

2.1 Supported Input Formats

2.1.1 Popular Formats

An IPF file is usually an AVI, a QT or an MPEG file, even though the OMS can work with virtually any video format recognised by DirectShow. In that case, the OMS works directly on the original video file and its performance depends on the video codec. For example, random access to MPEG frames is very slow, while some AVI codecs (Intel Indeo) allow very fast decompression and access times. Obvious Technology will recommend a set of codecs that should be used with the OMS. If the original video file is not encoded with one of these preferred codecs, then it is more efficient to choose the OBF file format, described below.

2.1.2 OBF Format

The OMS can also access video images from an OBF (Obvious Backend Format) file. This file format has been designed to improve the performance of the Obvious Media Server. The OBF file is built from the original video file. During this build process, the user can choose the portion of the original video file that must be converted into OBF, as well as the image size of the output OBF file.

Allowing the user to build an OBF with a custom image size is very important. Original video files often have a high resolution, and the OMS does not need to handle large images: most of the client applications that retrieve images from the OMS use them in thumbnails and 2D/3D storyboards. The image size needed in such applications is typically around 128×96.

An OBF file is a proprietary MJPEG format. It contains 3 sections: a header, an index and a body.

The structure of the header is described in the following table:

Field Name Size Description
NbFrames 4 bytes (DWORD) Number of images in the OBF
FrameRate 8 bytes (double) Frame rate
Width 4 bytes (DWORD) Image width (in pixels)
Height 4 bytes (DWORD) Image height (in pixels)

All the JPG images that constitute the OBF file have the same size, stored in the Width and Height fields. The common size of the images is chosen during the build process of the OBF file and depends on the desired image quality and the available bandwidth. A typical and recommended size is 128×96 (for a 4/3-ratio movie).

Right after the header comes the OBF index. It consists of NbFrames entries, each defined by the following 12 bytes:

Field Name Size Description
NumFrame 4 bytes (DWORD) Frame number
Offset 4 bytes (DWORD) Offset of the beginning of the image
Size 4 bytes (DWORD) Size of the JPEG data

The body contains a set of JPG images. They can be compressed at different ratios. A typical compression ratio is 75%.

The size of the whole OBF file depends, of course, on the complexity of the images. A size of 5K can be achieved for a 128×96 image compressed at a 75% compression ratio. OBF files are designed to store low-resolution versions of original media files and to be used efficiently by the Obvious Media Server for distributing individual images.
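The header and index layouts above translate directly into packed C structures. The sketch below assumes 1-byte packing (consistent with the stated sizes: a 20-byte header and 12-byte index entries) and little-endian byte order; the structure names are ours, not the document's.

```c
#include <assert.h>
#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint32_t NbFrames;  /* number of images in the OBF */
    double   FrameRate; /* frame rate */
    uint32_t Width;     /* image width in pixels, same for all frames */
    uint32_t Height;    /* image height in pixels */
} ObfHeader;            /* 4 + 8 + 4 + 4 = 20 bytes */

typedef struct {
    uint32_t NumFrame;  /* frame number */
    uint32_t Offset;    /* offset of the beginning of the JPEG image */
    uint32_t Size;      /* size of the JPEG data */
} ObfIndexEntry;        /* 12 bytes */
#pragma pack(pop)

/* File offset of the index entry for frame i: the index starts right
   after the header. */
static long obf_index_entry_pos(uint32_t i)
{
    return (long)sizeof(ObfHeader) + (long)i * (long)sizeof(ObfIndexEntry);
}
```

To serve frame i, the OMS would seek to obf_index_entry_pos(i), read the 12-byte entry, then seek to Offset and read Size bytes of JPEG data.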

2.2 Advanced Caching

The Obvious Media Server maintains a local cache of extracted images. This cache is used only when AVI, QT or MPEG files are the source. For OBF files, the cache is not used because the performance gain would not be relevant (extracting an image from an OBF file is very fast, so there is no need to cache extracted images).

Intelligent cache management is implemented: the OMS tries to predict future requests by pre-extracting images and putting them in the cache. The prediction algorithm involves two mechanisms:

A cache cleanup mechanism has also been implemented. It keeps the cache size below a given threshold, specified during the installation process of the OMS. Each image in the cache has a counter that gives the number of times the image has been requested. After a certain amount of time, images with a low counter value are removed from the cache.
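The counter-based cleanup described above can be sketched as a simple compaction pass over the cache entries. The entry layout and the threshold semantics are illustrative assumptions, not the OMS's actual data structures.

```c
#include <assert.h>

typedef struct {
    long frame;    /* which image this cache entry holds */
    long hitCount; /* number of times the image has been requested */
} CacheEntry;

/* Remove entries whose request counter is below minHits, compacting the
   array in place. Returns the number of surviving entries. */
static int cache_cleanup(CacheEntry *entries, int n, long minHits)
{
    int kept = 0;
    for (int i = 0; i < n; i++) {
        if (entries[i].hitCount >= minHits)
            entries[kept++] = entries[i];
    }
    return kept;
}
```

In the real OMS this pass would also delete the corresponding image files on disk, so that the total cache size stays below the installation-time threshold.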

3 Protocol Between the OMS and the OSM

3.1 Concepts

There are basically two kinds of interaction between the OMS and the OSM. The first concerns security and replication. As described above in Section VI, the OMS is considered one of the services managed by the OSM's security and replication mechanisms. Each OMS has a Load Handler that is permanently polled for its load. The corresponding protocol has already been described in previous pages.

The second kind of interaction between the OMS and the OSM concerns the dispatch of session keys. The Obvious Site Manager handles access control by verifying the user credentials for a given site. It then calculates a session key that must be transmitted to both the client application and the Obvious Media Server. The following sections describe the protocol used between the OMS and the OSM for enabling and disabling session keys.

The protocol between the OMS and the OSM is built on top of HTTP. Encryption of data transmitted between the OMS and the OSM is handled using a DES algorithm in CBC mode. Authentication is handled using an N-pass zero-knowledge algorithm.

3.2 Protocol Specification

3.2.1 Setting a New Session Key

Syntax

Field Offset Size Description
RequestVersion 0  8 bits Version of the request (0 in current implementation)
Reserved 1 24 bits Not used
SiteID 4 32 bits Site Identifier
ObviID 8 32 bits Obvi Identifier
VdocID 16 32 bits Vdoc Identifier
MediaID 20 32 bits Media Identifier
SessionKey 24 128 bits  Session key
Reserved 40 32 bits Not used
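The request layout in the table maps onto a packed C structure. Note that the table leaves offsets 12 to 15 unlisted (ObviID at offset 8 is 32 bits, but VdocID only starts at offset 16); the sketch pads that gap with an explicitly unknown field and makes no claim about its meaning. Field names follow the table; the structure name is ours.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint8_t  RequestVersion; /* offset 0: 0 in the current implementation */
    uint8_t  Reserved1[3];   /* offset 1: not used                        */
    uint32_t SiteID;         /* offset 4: site identifier                 */
    uint32_t ObviID;         /* offset 8: Obvi identifier                 */
    uint8_t  Unknown[4];     /* offsets 12-15: not listed in the table    */
    uint32_t VdocID;         /* offset 16: Vdoc identifier                */
    uint32_t MediaID;        /* offset 20: media identifier               */
    uint8_t  SessionKey[16]; /* offset 24: 128-bit session key            */
    uint32_t Reserved2;      /* offset 40: not used                       */
} SetSessionKeyRequest;      /* 44 bytes total */
#pragma pack(pop)
```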

Two kinds of requests are supported by the OMS:

1) Requests concerning the runtime phase, i.e. the viewing of an OBVI:

Given a specific version of an OBVI (referenced by two identifiers: ObviID and VersionID), these requests allow client applications to get all the information necessary to view the OBVI.

2) Requests concerning the retrieval of an OBVI:

Given a specific OBVI version (referenced by two identifiers: ObviID and VersionID), these requests allow client applications to retrieve the OBVI in a secondary storage format (OVI or XML).1

1 OSF extraction is not implemented in the OMS because OSF streams are supposed to be used by multicast/unicast tools such as the Obvious Multicaster and the Obvious Multicast Receiver.

4.2 Protocol Specification

The protocol between client applications and the OMS is implemented on top of HTTP.

4.2.1 GetImage

This request allows the client to retrieve an image from a video media.

Syntax

This request allows the client to retrieve the metadata of a given video document.

Syntax

This request allows the client to retrieve the blocks of a given OBVI, at a specified level.

Syntax

This request allows the client to retrieve the structure of a given OBVI.

Syntax

This request allows client applications to retrieve the URL of an annotation. An annotation is represented by its identifier.

Syntax

This request allows client applications to send hints to the OMS concerning the image extraction process. These hints help the OMS update its cache and improve its performance.

Syntax

This request allows the client to retrieve the metadata of a given OBVI.

Syntax

This request allows the client to retrieve the metadata of a given OBVI version. The GetObviMetadata request returns

Syntax

This request allows client applications to retrieve the VAMT results of a given registered Media. The user can retrieve a subset of the measures by specifying valid values for the FirstFrame and LastFrame parameters. If both parameters are null, the whole set of measures is sent.

Syntax

This request allows client applications to retrieve an OBVI as an OVI file. This operation corresponds to the transformation from the primary storage format to the OVI secondary storage format.

Syntax

This request allows client applications to retrieve an OBVI as an XML file. This operation corresponds to the transformation from the primary storage format to the XML secondary storage format.

Syntax

The Obvious Media Server is currently implemented for Windows NT and consists of two modules:

The NT service is responsible for extracting images from the corresponding IPF files (which can be OBF, AVI, QT or MPEG files). The extraction of images is not achieved in the ISAPI script because of multithreading constraints in DirectShow. The other requests are handled by the ISAPI script itself.

The ISAPI script has a cached connection to the OIS database, improving the speed of the SQL requests. ADO is used for all database operations.

A running OMS is available at http://odyssee.opus.obvioustech.com/OMSscript/oms.dll

VIII. The Obvious Administration Server

1 Concepts

The Obvious Administration Server (OAS) allows remote administration of a given site. Administering a site is essentially a matter of modifying entries in the OIS database. Administration tools (developed by Obvious Technology or by third parties) never directly access the OIS database. They must send their requests to the OAS which is responsible for managing the database. By putting this additional layer between administration applications and the database repository, we ensure a higher level of security. We also facilitate the maintenance of the system: changes in the internal structure of the database will not have any impact on the administration tools as long as they use the standard interface of the OAS.

XML is extensively used for formatting the responses of the OAS. In particular, recordsets corresponding to data fetched from the OIS tables are formatted as XML documents and sent to the client application.

2 Protocol

The protocol between administration applications and the OAS is built on top of HTTP. The OAS is implemented as an ISAPI script for Internet Information Server. The following pages give the definitions of all requests accepted by the OAS.

2.1 Category/Vdoc Manipulation

2.1.1 GetVdocCategories

This request allows client applications to retrieve the list of Vdoc categories.

Syntax

This request allows client applications to add a new Vdoc category. They must specify a name, a description and the parent category identifier.

Syntax

This request allows client applications to delete a Vdoc category.

Syntax

The Obvious Asset Server has 2 roles:

Media files are typically created on client machines, with video acquisition cards, sound cards, closed-caption devices etc. Once a media file is ready for . . .

1.2 Video Analysis

The Video Analysis and Measuring Tool (VAMT) allows fast video analysis of a media. Its current features allow automatic detection of scene changes in AVI, QT and MPEG files.

The goal of the VAMT is to analyse and gather various spatial and time-related information from a video sequence. Its goal is not to find cuts. The VAMT process must be seen as a pre-processing step. The decision step is application-dependent. For example, two different applications may use the same VAMT results and interpret them differently, thus producing two completely different segmentations of the media.

The separation of the pre-processing step from the decision step is very important in the Obvious architecture. It ensures the reusability of the analysis processes (preserving time-consuming analysis in applications where several different OBVIs may be built from the same source media). This feature also allows the measures collected during the pre-processing step to be reinterpreted at any time, allowing the user, for example, to add or remove blocks. The addition and removal of blocks is simply a matter of reinterpreting the VAMT results with a different threshold.
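The decision step can be sketched as a simple thresholding pass over the pre-processing output; representing the VAMT measures as one difference value per frame is an assumption made here for illustration.

```cpp
#include <cstddef>
#include <vector>

// Sketch of the decision step: given one difference measure per frame (an
// assumed representation of the VAMT output), declare a block boundary
// wherever the measure exceeds a caller-chosen threshold. Re-running this
// with a different threshold adds or removes blocks without re-analysing
// the source media.
std::vector<std::size_t> boundariesFromMeasures(
        const std::vector<double>& measures, double threshold) {
    std::vector<std::size_t> cuts;
    for (std::size_t frame = 0; frame < measures.size(); ++frame)
        if (measures[frame] > threshold)
            cuts.push_back(frame);
    return cuts;
}
```

Two applications calling this with different thresholds obtain two different segmentations from the same stored measures, which is exactly the reuse the architecture is after.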

The VAMT is designed to run on large amounts of data. It is also designed to be used in parallel on multiple media sources. A special module called the VAMT Manager has been designed for handling multiple video analysis jobs. External applications (on the same machine or at remote locations) can access the VAMT Manager and perform the following tasks:

A job is defined as the process of analyzing a given media instance, from a timecode in to a timecode out. A media instance is uniquely identified by two identifiers VdocID and MediaID.

Several jobs can run in parallel. In addition, the current architecture defines a way of transparently using different VAMT algorithms and flavours.

2 Implementation

The implementation involves 3 modules:

The core engine of the VAMT is implemented as a DirectShow filter. Thus, it can be used to parse any file format recognised by the DirectX Media architecture. An improved version for the Pentium III processor is available. By using SIMD instructions for the comparison of image pixels, an improvement ratio of 70% can be achieved.

The current implementation of the VAMT works in the pixel domain. It handles the decoded frame buffer of a rendering chain for its computations. Future versions of the VAMT will handle specific file formats such as MPEG for rapid extraction of spatial and/or time-related information.

The DirectShow filter implementing the VAMT is called vamt.ax. The Pentium III version is available in vamtkatmai.ax. Since these filters act as COM objects, they present a custom interface that can be used from a container application to control the behaviour of the filter. This COM interface is called IVAMTCustom and is described below.

Future versions of the VAMT Engine (MPEG domain processing for example) will also be encapsulated as DirectShow filters. That will allow a complete compatibility between different VAMT Engine implementations.

Any VAMT Engine implementation must:

A container application that wants to use the VAMT Engine must first create a rendering chain containing the VAMT Engine filter.

2.2 VAMT Service

The VAMT Service is implemented as a Windows NT service, as shown in FIG. 21.

The VAMT Service has 2 operating modes.

The first one is automatic: the VAMT Service is configured to scan a given directory structure on the local file system, take all video files and analyse them. The results of the analysis are automatically stored in the OIS database. The second mode has been developed for demonstration and testing purposes. It is NOT used in normal operations. It will be described here because it gives a good understanding of the internal structure of the VAMT Service.

2.2.1 First Operating Mode (Normal)

The Windows registry contains a list of directories that must be scanned by the VAMT Service. The administrator can edit this list of directories by using the VAMT Manager, described in the following pages. Each directory is scanned for recognised media files: MPEG, AVI and MOV. Each media file must have a corresponding description file. The description file is an XML file that contains various information necessary for the analysis:

The description file is generated by the Obvious Management Console when the user creates a new Media entry. The original media file and the description file are uploaded via FTP to one of the Obvious Asset Manager machines.

2.2.2 Second Operating Mode (Testing Only)

The VAMT Service opens a TCP/IP socket and listens for incoming connections. The protocol used for controlling the VAMT Service is described below.

For each request sent by the client application, a TCP/IP connection must be open to the VAMT Service. On that connection, a binary-formatted message describing the request is sent. Once processed by the VAMT Service, a response message is sent back to the client application and the TCP/IP connection is dropped.

Each request is identified by a code. The possible code values are:

#define REQ_ADD_JOB          0
#define REQ_GET_NB_JOB       1
#define REQ_GET_JOB_INFO     2
#define REQ_REMOVE_JOB       3
#define REQ_GET_JOB_RESULT   4
#define REQ_SET_JOB_PRIORITY 5

The following structures describe, for each type of request, the binary message that must be sent by a client application to the VAMT Service. Every structure has a vType field that contains one of the predefined REQ_ constants.

Adding a New Job
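As an illustration, the request structure for adding a job might look like the following sketch. The field names, types and packing here are assumptions derived from the description above (a vType field holding a REQ_ constant, plus the VdocID/MediaID and frame range that define a job); they are not the actual wire format.

```cpp
#include <cstdint>

#define REQ_ADD_JOB 0

// Hypothetical sketch of the binary message for adding a job. Every
// request carries vType first (one of the REQ_ constants); the remaining
// fields are illustrative assumptions, since a job is described in the
// text as a media instance (VdocID, MediaID) plus an analysis range.
#pragma pack(push, 1)
struct ReqAddJob {
    std::int32_t vType;     // REQ_ADD_JOB
    std::int32_t vdocId;    // Vdoc identifier
    std::int32_t mediaId;   // Media identifier
    std::int32_t frameIn;   // first frame of the analysis range
    std::int32_t frameOut;  // last frame of the analysis range
};
#pragma pack(pop)
```

The client would open a TCP/IP connection to the VAMT Service, write one such message, read the response message, and close the connection, as described in the protocol overview above.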

The VAMT Manager is the client application that can be used for driving the VAMT Service from a remote location. It is implemented in Visual C++ with the MFC library. FIG. 22 displays the main graphical user interface of the VAMT Manager. From this simple interface, a user can remotely control the VAMT Service, starting and removing jobs.

As explained above, this operating mode is NOT used during normal operations. The VAMT Service is supposed to be autonomous and does not need the manual creation of analysis jobs. However, the VAMT Manager can be useful in many situations.

The list displays the current jobs. Each job is represented by:

When the user clicks on the Add button, the window in FIG. 23 appears. This window allows the user to select a pre-registered media to be analyzed. The user can also define the analysis range by entering the first frame number and the last frame number.

When the user clicks on the Set Job Priority button of the main interface, the window in FIG. 24 appears. This window allows the user to modify the priority of the selected job.

X. The Obvious Indexing System

The Obvious Indexing System (OIS) is the database technology used for managing and indexing all the objects of the system: machines, video, media, video servers, OBVIs, Obvious Media Servers, Obvious Site Managers, etc. It is the global repository for registering these objects. As explained above, the OIS is the central component of the Obvious Network Architecture.

1 Concepts

1.1 Vdoc and Media

A Video Document (Vdoc) is the format-independent concept of a video. Any physical copy of a Vdoc, in whole or in part, is called a Media, regardless of the copy's format. For example, from a Vdoc representing a TV movie, you can create 3 media:

The OIS architecture handles both drop-frame (29.97 fps) and non-drop-frame SMPTE timecodes. For non-drop-frame SMPTE, the string representation of the timecode is HH:MM:SS:FF. For drop-frame SMPTE, a semicolon is used (HH:MM:SS;FF).
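The two string representations above can be sketched as a small formatting helper; the helper's name is illustrative, and the only difference between the variants is the separator before the frame field.

```cpp
#include <cstdio>
#include <string>

// Sketch: render an SMPTE timecode string in the form stored by the OIS.
// Non-drop-frame uses ':' before the frame count (HH:MM:SS:FF); drop-frame
// uses ';' (HH:MM:SS;FF), per the convention described above.
std::string formatSmpte(int hh, int mm, int ss, int ff, bool dropFrame) {
    char buf[16];
    std::snprintf(buf, sizeof buf, "%02d:%02d:%02d%c%02d",
                  hh, mm, ss, dropFrame ? ';' : ':', ff);
    return buf;
}
```

A parser reading timecodes back from the TcIN/TcOUT columns can likewise detect drop-frame by checking the third separator character.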

1.3 Annotation and Stratification

The basic process of annotation involves the creation of a relationship between a media chunk and a description. A media chunk is described by two timecodes. The description can be a combination of:

A stratum is a logical group of annotated chunks. All annotated chunks in a stratum share the same semantics. For example, a Who stratum may consist of a set of annotated chunks describing the persons present in the video. The What stratum describes the objects present in the video. The process of stratification, i.e. the process of creating various strata, can occur at the user interface level (manual annotation) or as a result of a computerised process (object tracking, speaker identification, etc.).

The stratification process can occur as many times as necessary, for a particular application. New strata correspond to new entries in the OIS repository. As users create annotations, new users and automated processes can select media chunks of interest with ever-increasing precision.
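The annotation model described above (chunks, descriptions and strata) can be sketched as a small data structure. The field and type names here are illustrative, not the OIS column names.

```cpp
#include <string>
#include <vector>

// Sketch of the annotation model: an annotated chunk relates a media chunk
// (bounded by two timecodes) to a description, and a stratum groups
// annotated chunks that share the same semantics.
struct AnnotatedChunk {
    std::string tcIn;   // timecode in (SMPTE string)
    std::string tcOut;  // timecode out (SMPTE string)
    std::string text;   // description of the chunk
};

struct Stratum {
    std::string name;                    // e.g. "Who", "What", "Speech"
    std::vector<AnnotatedChunk> chunks;  // annotated chunks in this stratum
};
```

In the OIS itself this model maps onto the CHUNK, ANNOTATION and STRATA tables described later in this section.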

2 Schemas

The OIS database is composed of 4 schemas.

2.1 Video Schema

This schema concerns the management and the cataloging of video documents and corresponding media. The OMS is the main component that uses this schema.

2.2 OBVI Schema

This schema concerns the management and the indexing of OBVIs. Published and indexed OBVIs are stored in this schema. The OMS is the main component that uses the OBVI Schema.

2.3 Access Control Schema

This schema concerns the access control facilities in the system. The OSM is the main component that uses this schema.

2.4 Replication Schema

This schema contains database objects that are related to the replication features. The OSM is the main component that uses this schema.

3 Oracle 8

The Obvious Indexing System is based on Oracle 8. Its advanced features for object management, content indexing, security and replication make it a good choice for supporting the core database technology of the OIS. Several extension modules called Cartridges can be used to add new features to the core Oracle database system. In particular, the Obvious Network Architecture extensively uses the ConText Cartridge for implementing full-text search capabilities on OBVI annotations.

4 Database Schema Objects

The following pages describe all the database schema objects that have been defined in the OIS. These database schema objects, which concern the four schemas previously defined, are users, tables and sequences.

Tables in the OIS database use only four built-in datatypes: NUMBER, VARCHAR2, CLOB, DATE. For more details about these type definitions, refer to the Oracle 8 documentation. Porting to another database environment should be easy for NUMBER, VARCHAR2 and DATE. The CLOB datatype can be emulated with a raw binary datatype.

An * symbol is used to show table columns that are part of a primary key.

4.1 Video tables

4.1.1 VDOC

Field Type Description
VideoID* NUMBER Unique Identifier
Title VARCHAR2 Title of the Vdoc
Description VARCHAR2 Description of the Vdoc
CategoryID VARCHAR2 Category identifier
Proxy VARCHAR2 Proxy
A Vdoc can be associated with a category. The CategoryID field is used to store the identifier of the category to which the Vdoc belongs.

4.1.2 VDOCCATEGORY

Field Type Description
CategoryID * NUMBER Unique Identifier
Name VARCHAR2 Name of the category
Description VARCHAR2 Description of the category
ParentCategoryID NUMBER Parent category identifier

Field Type Description
MediaID* NUMBER Unique Identifier
VideoID* NUMBER Unique video document ID
Name VARCHAR2 Name of the Media
StandardID NUMBER Standard Identifier
FormatID NUMBER Format Identifier
FrameRate NUMBER Frame rate in frames/sec
MediaDerivedID NUMBER Source media ID or 0 if source

4.1.4 DMEDIA

Field Type Description
MediaID * NUMBER Media Identifier
VideoID * NUMBER Vdoc Identifier
VcodecID NUMBER Not used yet
AcodecID NUMBER Not used yet
Fwidth NUMBER Frame width in pixels
Fheight NUMBER Frame height in pixels
Location VARCHAR2 Full location path
Datasize NUMBER Size in megabytes of the Media file
When DATE Creation date
VamtLocation VARCHAR2 Vamt file title and extension
IPFLocation VARCHAR2 Image Proxy File title and extension

4.1.5 FORMAT

Field Type Description
FormatID * NUMBER Unique Identifier
Name VARCHAR2 Format name
Description VARCHAR2 Format description

FormatID Name Description
1 Unknown Unknown format
2 D-1 D-1
3 D-2 D-2
4 D-3 D-3
5 VHS VHS
6 Hi-8 Hi-8
7 8 mm 8 mm
8 S-VHS S-VHS
9 Film Film
10 BetaSP BetaSP
11 BetaSP-30 BetaSP-30
12 BetaSP-60 BetaSP-60
13 AVI AVI format
14 MPEG MPEG format
15 QT Quick-Time format
16 Compressed Compressed

4.1.6 STANDARD

Field Type Description
StandardID * NUMBER Unique Identifier
Name VARCHAR2 Standard name
FrameRate NUMBER Not used yet
LinesPerFrame NUMBER Not used yet
VisibleLines NUMBER Not used yet

StandardID Name FrameRate LinesPerFrame VisibleLines
1 Unknown NULL NULL NULL
2 NTSC 29.97 525 483
3 SECAM 25.0 625 576
4 D-SECAM 25.0 625 576
5 K-SECAM 25.0 625 576
6 L-SECAM 25.0 625 576
7 PAL 25.0 625 576
8 PAL-M 29.97 525 483
9 PAL-N 25.0 625 576
10 PAL-B 25.0 625 576
11 PAL-G 25.0 625 576
12 PAL-H 25.0 625 576
13 PAL-I 25.0 625 576

4.1.7 VIDEOSTREAM

Field Type Description
StreamID * NUMBER Unique Identifier
MediaID * NUMBER Media Identifier
VideoID * NUMBER Vdoc Identifier
Name VARCHAR2 Stream name
Description VARCHAR2 Stream description
BitrateMin NUMBER Bitrate minimum
BitrateMax NUMBER Bitrate maximum
Filename VARCHAR2 Filename without path

4.2 OBVI tables
4.2.1 OBVICATEGORY

Field Type Description
CategoryID * NUMBER Unique Identifier for the category
Name VARCHAR2 Name of the category
Description VARCHAR2 Description of the category
ParentCategoryID NUMBER Parent category identifier

4.2.2 OBVI

Field Type Description
ObviID * NUMBER Unique Identifier for the OBVI
VideoID NUMBER Vdoc identifier
MediaID NUMBER Media identifier
Name VARCHAR2 OBVI name
Description VARCHAR2 OBVI description
ObviCategoryID NUMBER OBVI category identifier
ExportFlag NUMBER Export possibilities

Field Type Description
VersionID NUMBER Unique Identifier for the Version
ObviID NUMBER OBVI identifier
Author VARCHAR2 Author of the version
CreationDate DATE Creation date
OVIURL VARCHAR2 URL of the OVI file
XMLURL VARCHAR2 URL of the XML file
OSFURL VARCHAR2 URL of the OSF file
ParentVersionID NUMBER Identifier of the parent version

4.2.4 STRATA

Field Type Description
StrataID * NUMBER Unique Identifier for the strata
Description VARCHAR2 Description of the strata

StrataID Description
1 Who
2 What
3 Where
4 Free Annotation
5 Speech

4.2.5 CHUNK

Field Type Description
ChunkID * NUMBER Unique Identifier for the chunk
TcIN VARCHAR2 Timecode In
TcOUT VARCHAR2 Timecode Out

4.2.6 ANNOTATION

Field Type Description
ObviID * NUMBER Unique Identifier for the Obvi
StrataID * NUMBER Strata identifier
ChunkID * NUMBER Chunk identifier
Text CLOB Indexed text of the annotation
Location VARCHAR2 Location of the annotation

4.2.7 ANNOTFOROBVI

Field Type Description
ObviID * NUMBER Unique Identifier for the Obvi
VersionID NUMBER Version identifier
AnnotID NUMBER Annotation identifier

4.2.8 BLOCK

Field Type Description
BlockID * NUMBER Unique Identifier for the Block
TcIN VARCHAR2 Timecode In
TcOUT VARCHAR2 Timecode Out
ParentBlockID NUMBER Identifier of the parent Block

4.2.9 BLOCKFOROBVI

Field Type Description
BlockID * NUMBER Unique Identifier for the Block
ObviID * NUMBER OBVI identifier
VersionID * NUMBER Version identifier

4.2.10 FILTER

Field Type Description
FilterID * NUMBER Unique Identifier for the Filter
Name VARCHAR2 Name of the filter
AnnotationType NUMBER Annotation type
Guid VARCHAR2 GUID of the COM object

4.3 Unit Tables

4.3.1 UNIT

Field Type Description
UnitID * NUMBER Unique Identifier
IP VARCHAR2 Unit IP address
Name VARCHAR2 Unit name
Position VARCHAR2 Position of the unit

Field Type Description
ServiceID * NUMBER Unique Identifier
UnitID NUMBER Unit identifier
Type NUMBER OMS or VS
Name VARCHAR2 Service name
Protocol VARCHAR2 Protocol used
IP VARCHAR2 Unit IP
VirtualDir VARCHAR2 Unit Virtual Directory
FtpVirtualDir VARCHAR2 Ftp Virtual Directory
FtpLogin VARCHAR2 Ftp login
FtpPassword VARCHAR2 Ftp password
Load NUMBER Loading Balance

4.3.3 DUPOMS

Field Type Description
VideoID * VARCHAR2 Vdoc identifier
MediaID * VARCHAR2 Media identifier
UnitID NUMBER Unit identifier

4.3.4 DUPVS

Field Type Description
StreamID * NUMBER Stream identifier
MediaID * NUMBER Media identifier
VideoID * NUMBER Vdoc identifier
UnitID NUMBER Unit identifier

4.3.5 STATOMS

Field Type Description
UnitID * NUMBER Unique Identifier
Date DATE Self explanatory
Load NUMBER Load value

4.3.6 STATVS

Field Type Description
UnitID * NUMBER Unique Identifier
Date DATE Self explanatory
Load NUMBER Load value

4.4 Security Tables
4.4.1 USERACCOUNT

Field Type Description
UserID NUMBER Unique Identifier
Login VARCHAR2 Login string
Password VARCHAR2 Password string (message digest)
Description VARCHAR2 Description of the user account
GroupID NUMBER Unique Identifier for the group
(TO BE DELETED !!!!)
VideoAdmin NUMBER Flag for video administration
ObviAdmin NUMBER Flag for obvi administration
RepAdmin NUMBER Flag for replication administration
SecAdmin NUMBER Flag for security administration

4.4.2 GROUPACCOUNT

Field Type Description
GroupID NUMBER Unique Identifier
Name VARCHAR2 Name of the group
Description VARCHAR2 Description for the group account
VideoAdmin NUMBER Flag for video administration
ObviAdmin NUMBER Flag for obvi administration
RepAdmin NUMBER Flag for replication administration
SecAdmin NUMBER Flag for security administration

4.4.3 USERMAPPING

Field Type Description
UserID NUMBER Unique Identifier for the user
GroupID NUMBER Unique Identifier for the group

4.5 Site Tables

These tables are not available in all sites. They will be present on master sites, such as Obvious Technology's site, that will host the site directory service (via the Obvious Site Directory server, described in more detail below in Section V).

4.5.1 SITE

Field Type Description
SiteID NUMBER Unique Identifier
IP VARCHAR2 List of IP addresses for the OSMs
Name VARCHAR2 Name of the site
Description VARCHAR2 Description of the site
CategoryID NUMBER Identifier of the category
Email NUMBER Email
Web NUMBER Web address

4.5.2 SITECATEGORY

Field Type Description
CategoryID NUMBER Unique Identifier
Name VARCHAR2 Name of the category
ParentCategoryID VARCHAR2 Parent category identifier

4.6 Sequences
4.6.1 OSequence

A Sequence object, as defined by Oracle 8, is used to generate unique identifiers for various purposes. In the OIS, one Sequence object is created for each database instance. This Sequence object is called OSequence and is created with the following SQL command:

CREATE SEQUENCE OSequence START WITH 1

4.7 Package

An Oracle package called OBVIPACKAGE has been created. It contains several stored procedures and functions used internally by several components.

4.7.1 Types Definitions

4.7.2 Stored Procedures

Two stored procedures are used for searching: PROC_SEARCH and PROC_ADVSEARCH. For more details on searching, see Section XIV, entitled OBVI Searching.

4.7.3 Functions

5 Installing the Obvious Indexing System

5.1 Creating the OIS database

Oracle 8 must be properly installed on the system. The following procedures describe how to create the OIS database and prepare Oracle 8 for hosting.

The easiest way to build the OIS database is to use the Oracle Database Assistant tool, provided with the standard installation of Oracle 8. The Figures denoted below show how to configure the parameters of the OIS database. Another method is to use the provided SQL scripts, which will automatically create and set up the OIS database.

Step 1—FIG. 25

The custom way of creating databases must be selected.

Step 2—FIG. 26:

Select the ConText Cartridge if you are installing the indexing components. The Advanced Replication option must be selected.

Step 3—FIG. 27:

Select the size of the database that you require.

Step 4—FIG. 27A:

The database name is OISDB. The SID must be OIS. The internal password can be freely defined.

Step 5—FIG. 28:

Database options window.

Step 6—FIG. 29:

File parameters window

Step 7—FIG. 30:

Step 8—FIG. 31

Step 9—FIG. 32

Step 10—FIG. 33:

5.2 Adding ConText Support

ConText support is required for database instances that handle the OBVI schema. Other schemas do not require ConText support.

1) Run a ConText server

The OIS distribution files are available in a self-extractable archive called oisntb1.exe. Executing this file launches the InstallShield installation program. During installation, the following parameters are required:

Name of the OIS home directory:

5.4 Creation of the OIS Schemas

There are four schemas that can be installed on a specific OIS database instance. The choice of the schemas that have to be installed depends on the nature of the particular database instance that the administrator wants to install.

Type of system Schema to install
For systems hosting the Obvious Site Management Component Security Schema
For systems hosting the Obvious . . . Video & OBVI Indexing Schema

It is possible to install all these schemas on the same system.

The installation of the schemas is performed by a set of four SQL scripts:

These scripts are located in $OIS_HOME\Scripts and can be executed with the SQL Worksheet utility provided with Oracle 8. Upon completion, these scripts create log files in the same directory. Check them for any errors.

XI. Obvious Management Console

1 Concepts

The Obvious Management Console is the application that is used for administering the whole system. From a single graphical interface, the administrator of a site can browse for the different kinds of objects defined by the Obvious Network Architecture (Vdoc, Media, Groups, Users, Streams, Units, Services, etc.) and manage them.

These objects can be grouped into meaningful administration realms. For example, the video realm contains the Vdocs, Media and Streams objects. The security realm contains Groups and Users objects.

Each realm is graphically represented by a tree in the Obvious Management Console. Several realms can be displayed at the same time and can be dynamically added or removed.

Administration realms are not available to all users. Even if the Obvious Management Console is able to access and manage several realms, the user's credentials will prohibit access to specific realms. The installation procedure of the Obvious Management Console also allows the configuration of the realms that can be administered from a particular machine.

2 Administration Realms

The following realms have been defined. They should cover most administration tasks in the current implementation of the system.

2.1 Site Realm

The site realm involves two kinds of objects: SiteCategory and Site. A SiteCategory object may contain other SiteCategory objects and Site objects. The corresponding tree has two levels.

2.2 Security Realm

The security realm involves two kinds of objects: Group and User. The corresponding tree has two levels. A group can contain several users and a user can be part of several groups. Depending on the preference of the administrator, the tree can show the groups at the first level and the users at the second level, or it can show the users at the first level and the groups at the second level.

2.3 Video Realm

The video realm involves four kinds of objects: VdocCategory, Vdoc, Media and Stream. A VdocCategory object can contain other VdocCategory objects and Vdoc objects. A Vdoc object may contain several Media objects. A Media object may contain several Stream objects. The corresponding tree can have four levels.

2.4 OBVI Realm

The OBVI realm involves two kinds of objects: ObviCategory and Obvi. An ObviCategory object may contain other ObviCategory objects and Obvi objects.

3 Site Map

The Obvious Management Console can also display a geographical map showing the location of the units and services involved in a specific site. As shown in FIG. 34, units are represented by coloured squares. Each square may have several colours, one for each service. The OMS service is represented in red. The VS service is represented in blue.

This map tool is implemented as an ActiveX control called the Obvious Map and implemented in C++. It is currently used in the Obvious Management Console but it can be embedded in any other management application.

The user interface of the Obvious Map allows the user to manually define the 2D position of each unit with a simple drag-and-drop operation. The corresponding geographical coordinates are stored in the Position field of the UNIT table.

When the Obvious Map is launched, it connects to the OIS database via the Obvious Administration Server (described in more detail in Section VIII) to retrieve the configuration of a given site, in terms of units, services and replication information.

4 Implementation

The Obvious Management Console is currently implemented in C++/MFC. It offers an explorer-like graphical user interface: a left pane displays a hierarchy of objects and the right pane shows the details of a specific object. A new version is being implemented in VB and should offer the same level of functionality.

FIG. 35 shows a screen shot of an administration session on the video realm. The left pane displays the hierarchy of Vdoc categories, Vdoc, Media and Streams objects.

Each managed object is called an AdminItem. An AdminItem has a set of properties, handles a set of child AdminItems and can display its configuration dialog. It can respond to basic events such as Configure (tells an object to display its configuration dialog), Add (tells an object to add a sub object) and Delete (tells the object to remove itself from the system). By right-clicking on an item in the left view, a contextual menu appears. For example, FIG. 36 shows the contextual menu for a Vdoc item.

Selecting an entry in the contextual menu will display a dialog box for object-specific operations. For instance, when the user selects the “Edit Media” menu entry from the contextual menu of a Media object, FIG. 37 shows the dialog box that appears, allowing the user to modify the definition of the media.

The architecture of the Obvious Management Console is modular: new objects (corresponding to a new administration realm) can easily be added and administered. For that purpose, each object must be represented by a C++ class derived from the CAdminItem class. The derived class must override some member functions.

For each administration realm, a tree is constructed. The nodes of the tree are derived from the CAdminItem class. The graphical part of the Obvious Management Console displays the trees in the left pane, dispatches the events between the objects and updates the right pane when necessary. The Obvious Management Console is completely independent of the nature of the AdminItems it displays.
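A minimal sketch of this extension mechanism follows, assuming a hypothetical, simplified CAdminItem interface; the real class's member functions are not reproduced here, so the virtual members shown are assumptions for illustration only.

```cpp
#include <string>
#include <vector>

// Simplified stand-in for the CAdminItem base class described above. The
// member names (GetName, Configure, children) are assumptions: they
// illustrate the pattern of a tree node that labels itself, shows its own
// configuration dialog, and holds child AdminItems.
class CAdminItem {
public:
    virtual ~CAdminItem() {}
    virtual std::string GetName() const = 0;  // label shown in the left-pane tree
    virtual void Configure() {}               // display the configuration dialog
    std::vector<CAdminItem*> children;        // child AdminItems
};

// A new realm object only needs to derive from the base and override a
// few member functions; the console treats all nodes uniformly.
class CVdocItem : public CAdminItem {
public:
    std::string GetName() const override { return "Vdoc"; }
};
```

Because the console works only through the base-class interface, adding a new administration realm never requires changes to the tree-display or event-dispatch code.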

FIG. 38 shows an administration session on the security realm. Here, the hierarchy of objects is composed of Groups and Users objects.

XII. Video Registering

This Section will describe the steps performed at the video registering level. This concerns all the steps involved from the video acquisition to the creation of database entries for a specific Vdoc/Media. Video registering has nothing to do with OBVIs. It prepares and registers media files. Registered media files can then be used for creating and authoring OBVIs.

As explained before, these steps are accomplished from the Obvious Management Console. By right-clicking on a Vdoc category the user can create sub-categories. Then, from a Vdoc category, he creates a Vdoc. Each Vdoc is represented by a name and a description. From that Vdoc, he creates a new Media. Each Media is represented by various tags (name, description, format, standard, etc.).

The dialog box for the creation of the Media gives 2 possibilities to the user:

Video characteristics (such as the frame rate, the number of frames, the codec and the image size) are automatically extracted from the video file. Other user-defined fields (such as the name and the description of the Media) must be filled in by the user.

At the end, the digital media file is processed as follows:

Then a new Vdoc/Media entry is created in the OIS database. Of course, if the Vdoc entry already exists, a new entry is appended to the list of Media entries for that Vdoc.

After the creation of the Media, the user can create Streams. By right-clicking on a Media, he launches an external tool for stream building. Typically, this tool is NetShow Encoder in the case of ASF streams or the Real Producer in the case of RealMedia streams. Then, he defines the new Stream entry by its name, description, bandwidth, etc. A corresponding entry is created in the OIS database.

These constitute the only steps that must be manually accomplished from the Obvious Management Console. The other steps, described below, are performed in the background, asynchronously with this first phase.

Once the original media file is uploaded to the Obvious Asset Manager, it is automatically analysed by the VAMT Service. A new analysis job is created and runs in parallel with other analysis jobs. At the end, a measures file, containing the VAMT pre-processing measures, is created and stored in the OIS database. These measures can be retrieved by any client application by sending the appropriate request to the Obvious Media Server.

XIII. OVI Publishing and Indexing

This chapter details the process of publishing and indexing OBVIs. As explained in previous chapters, an OBVI is a database object that can be exported in several forms: OVI, XML or OSF file. Currently, the only format that can support editing and authoring is the OVI file. The OMM/OME suite of tools allows the user to load an OVI file, modify it and save it locally.

The export functionality is a conversion from the primary storage format to one of the available secondary storage formats. Publishing and indexing an OVI file simply means converting an OBVI from a secondary storage format (the OVI file) to the primary (database-centric) storage format.

1 The Oracle 8 ConText Cartridge

ConText Cartridge is an Oracle extension module that gives full text search capabilities to the Oracle 8 Server. In addition, ConText provides advanced linguistic processing of English-language text.

ConText provides advanced text searching and viewing functionality, such as full text retrieval, relevance ranking, and query term highlighting. Text queries support a wide range of search options, including: logical operators (AND, OR, NOT, etc.), proximity searches, thesaural expansion, and stored queries. Text viewing capabilities include WYSIWYG and plain text viewing of selected documents, as well as highlighting of query terms.

ConText provides in-depth linguistic analysis of English-language text. The output from this linguistic processing can be used to perform theme queries, which retrieve documents based on the main topics and concepts found in the documents.

2 Concepts

Basically, the publishing and indexing process involves 3 major steps:

These steps are accomplished by the Obvious Publishing Engine.

2.1 Publishing the Annotations

The annotations contained in an OVI file are converted into HTML. The following table describes how this conversion is achieved, depending on the annotation type.

Original annotation format    Conversion technique          Comments
Wordpad                       Microsoft Word automation
Embedded HTML                 No conversion
Link to a Web Site            Spider engine

The conversion between Wordpad and HTML can be easily achieved by using Microsoft Word's automation features. This makes it possible to programmatically launch a Microsoft Word application, load the Wordpad document and convert it into HTML. Microsoft Word automatically handles the conversion of the graphics and other embedded objects.

A specific converter is needed for each kind of annotation. The mapping between annotation types and converters is stored in the CONVERTER table of the OIS. For each annotation type, this table gives the GUID of the COM (or DCOM) object that can be used for processing it. A converter is a COM object that implements the IAnnotConverter COM interface, described in more detail in Section XXIII entitled The IAnnotConverter COM Interface.

After being converted into HTML, each annotation is published on a Web Server and the corresponding URL is stored in the URL column of the ANNOTATION table. The publishing is achieved by doing an FTP upload to a specific directory on the remote Web Server. On this Web server, each annotation is stored in a separate directory whose name has the following syntax:

Regardless of the annotation format, the ConText Cartridge requires text to be filtered for the purposes of text indexing or text processing through the Linguistic Services (as well as highlighting the text for viewing).

Text extracted from OBVI annotations and OBVI metadata is stored in the Text column of the ANNOTATION table. Refer to section X for more details. This column stores data as a CLOB, i.e. a Character Large Object. Under Oracle 8, the CLOB data type can store single-byte text, up to 4 gigabytes in size. CLOBs have full transactional support: the CLOB value manipulations can be committed or rolled back.

At a certain point, the publishing/indexing process of an OBVI involves the extraction of text data from each annotation. The implementation details of this extraction depend on the type of the annotation. In the current version of the OVI file format, the following annotation types can be found.

A specific filter is used for filtering each type of annotation. To permit future extensions and enhancements, these filters are implemented as external modules (COM objects) that can be dynamically loaded and used by the Obvious Publishing Engine for retrieving text data from a given annotation. For example, when the Obvious Publishing Engine finds a Web annotation (an HTTP link to a remote HTML page), it uses a specific filter that will download the HTML code, parse it and produce raw text. All filters are supposed to output raw text that will be stored in the Text column of the ANNOTATION table. All filters present the same interface to the Obvious Publishing Engine, FIG. 39.

The mapping between annotation types and filters is stored in the FILTER table of the OIS. For each annotation type, this table gives the GUID of the COM (or DCOM) object that can be used for processing it. A filter is a COM object that implements the IAnnotFilter COM interface, described in more detail below in Section XXII entitled The IAnnotFilter COM Interface.
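The table-driven dispatch described above can be sketched as follows. This is a minimal, hypothetical analogue of the FILTER table and the IAnnotFilter COM objects: the class names, the `get_text` entry point and the registry keys are illustrative stand-ins (the real system stores GUIDs and instantiates COM objects), not the actual implementation.

```python
import re

# Hypothetical sketch of the FILTER-table dispatch: each annotation type
# maps to a filter object exposing a single get_text() entry point,
# analogous to a COM object implementing IAnnotFilter.

class WordpadFilter:
    def get_text(self, data):
        # A real filter would parse the document format; here we just
        # strip an illustrative marker.
        return data.replace("{rtf}", "")

class HtmlFilter:
    def get_text(self, data):
        # A real filter would properly parse HTML; minimal illustration only.
        return re.sub(r"<[^>]+>", "", data)

# Stand-in for the FILTER table: annotation type -> filter instance
# (the real table stores the GUID of the COM object to instantiate).
FILTER_REGISTRY = {
    "wordpad": WordpadFilter(),
    "html": HtmlFilter(),
}

def extract_text(annotation_type, raw_data):
    """Dispatch to the filter registered for this annotation type."""
    flt = FILTER_REGISTRY.get(annotation_type)
    if flt is None:
        raise KeyError(f"no filter registered for {annotation_type!r}")
    return flt.get_text(raw_data)
```

New annotation types can then be supported by registering a new filter, without modifying the engine itself, which mirrors the extension mechanism described above.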

2.3 Creating Database Entries

The database format is the primary storage format for an OBVI. An OBVI is uniquely represented by 2 identifiers: the OBVI Identifier and the Version Identifier. If the OVI file is already bound to an OBVI in the database then the publishing process consists of creating a new version. Otherwise, a new OBVI (with a starting version identifier) is created. The creation of database entries for a new OBVI involves the manipulation of several database tables. First, a new OBVI Identifier and a new Version Identifier are allocated (see OBVI and VERSION tables). Then, new Blocks are created (see BLOCK table) and bound to the OBVI (see BLOCKFOROBVI table). Finally, new Chunks are created (see CHUNK table) and bound to the OBVI (see CHUNKFOROBVI table). Annotation entries are likewise created in the ANNOTATION table.

In the case of publishing a new version of the OBVI (rather than publishing a new OBVI), the procedure is roughly the same. The only difference concerns the reuse of Blocks, Chunks and Annotations. As explained before, since changes between versions are supposed to be small (the user typically adds or removes some blocks and edits a few annotations) the system tries to reuse Blocks, Chunks and Annotations from the previous version.
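The reuse decision described above can be sketched as a simple diff between versions. The representation below (block identifier mapped to a content hash) and the function name are assumptions for illustration; the actual BLOCK/BLOCKFOROBVI schema is described elsewhere in this document.

```python
def plan_version_update(prev_blocks, new_blocks):
    """Given the block sets of the previous and the new OBVI version,
    decide which blocks can be reused and which must be created.

    Blocks are represented as {block_id: content_hash} dicts; this is an
    illustrative stand-in, not the actual BLOCK table layout.
    """
    reused, created = [], []
    for bid, h in new_blocks.items():
        if prev_blocks.get(bid) == h:
            reused.append(bid)   # identical block: bind the existing row
        else:
            created.append(bid)  # new or modified block: insert a new row
    return sorted(reused), sorted(created)
```

Because versions usually differ only slightly, most blocks fall into the reused list, which keeps the cost of publishing a new version low.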

3 Implementation

Three modules are implemented:

The most important module is the Obvious Publishing Engine, responsible for the publishing and indexing process. It internally uses a set of filters for gathering text information from the various kinds of annotations found in the OVI file. It also uses a set of converters for transforming OVI annotations into HTML.

3.1 Filters

The following filters have been implemented. As explained before, they are COM objects that implement the IAnnotFilter interface.

Wordpad Filter

Under Windows NT, the Obvious Publishing Engine is implemented as an NT service. It scans a list of predefined directories. For every OVI file found, the Obvious Publishing Engine starts an indexing process (a new thread). Several OVI files can be indexed at the same time.

The code of the core indexing process is located in a DLL called LibINDEX.dll. This DLL contains several exported functions but the most important one is called LIBINDEX_IndexOVI. This function accomplishes all the necessary steps for publishing and indexing an OVI file.

Most of the code uses ADO for accessing and updating the various tables of the OIS database. It also uses the OCI library for Oracle specific code concerning the handling of CLOB data.

3.4 Obvious Publishing Manager

As described in previous pages, the Obvious Publishing Engine can be controlled by a client application, by using a TCP/IP connection. The Obvious Publishing Manager is a sample of such a client application. It is implemented in C++/MFC. It opens a TCP/IP connection to the machine hosting the Obvious Publishing Engine and sends requests for:

The Obvious Publishing Manager is a configuration tool that can be used by administrators for controlling and tuning the Obvious Publishing Engine.

3.5 Obvious Publisher

The Obvious Publisher is the graphical interface from which a user launches the publishing and indexing of his OVI files. The Obvious Publisher is supposed to run on a client machine, where OVI files are located.

The Obvious Publisher is implemented as a Wizard encapsulated in an ActiveX Control. It has been developed in C++/MFC. This ActiveX Control has only one automation function: RunWizard. A container application can call this function to launch the Wizard. It has the following steps:

After these steps, the OVI file is now on the machine where the Obvious Publishing Engine is located. The Obvious Publishing Engine will automatically handle all the steps for publishing and indexing the OVI. It will parse the annotations, extract raw text for indexing purposes, convert them into HTML and publish these annotations on Web servers. It will also create database entries for the new OBVI version.

By using the Obvious Publisher wizard, the user can send several OVI files for publishing. They will be handled by the Obvious Publishing Engine in batch. The Email address that the user entered in the fourth page of the Wizard is used by the Obvious Indexing Engine for sending any error report to the author.

XIV. OBVI Searching

1 Concepts

Under the current implementation, search capabilities are provided by the ConText Cartridge engine. As explained before, one of the tasks accomplished by the Obvious Publishing Engine is the filtering of annotations: the OVI annotations are extracted and filtered to produce raw text that can be indexed by the ConText Cartridge. This raw text is stored in the Text column of the ANNOTATION table.

The ConText Cartridge has its own indexing servers. They run in background and they continuously update the internal index if the content of the Text column changes. This chapter will focus on using the search capabilities of the ConText Cartridge to build a global search platform in the Obvious Network Architecture.

Two search methods have been implemented: the basic search and the advanced search.

1.1 Basic Search

The basic search procedure allows the user to enter a keyword (or a list of keywords). This keyword is searched in every annotation, for all OBVIs.
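The behaviour of the basic search can be sketched as a small in-memory analogue. Note the assumptions: the real query runs inside the ConText Cartridge via PROC_SEARCH, and the AND semantics across multiple keywords is an interpretation, not something the text above specifies.

```python
def basic_search(annotations, keywords):
    """Minimal in-memory analogue of the basic search: return the IDs of
    annotations whose text contains every keyword.

    `annotations` maps an annotation ID to its raw text, as stored in the
    Text column of the ANNOTATION table. AND semantics across keywords is
    an assumption for illustration.
    """
    kws = [k.lower() for k in keywords]
    return sorted(aid for aid, text in annotations.items()
                  if all(k in text.lower() for k in kws))
```

A real deployment would of course delegate this to the database, which can also apply relevance ranking and linguistic expansion.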

1.2 Advanced Search

The advanced search procedure allows the user to enter different keywords for different strata.

2 Implementation

2.1 Stored Procedures for Searching

For performance reasons, the search code has been implemented as a set of Oracle stored procedures, written in the PL/SQL language. These stored procedures are part of the OBVIPACKAGE package2. Two procedures are of interest: PROC_SEARCH and PROC_ADVSEARCH. They correspond to the basic search and the advanced search mechanisms respectively. 2The OBVIPACKAGE package contains all the Oracle stored procedures that have been implemented in the OIS.

Given a Category Identifier and a keyword (or a list of keywords), this function runs the ConText Cartridge's search engine for finding all the annotations (in all indexed OBVIs) that are in the specified category and that contain the specified keyword.

FUNC_ISCHILDOF is another function of the OBVIPACKAGE package. This helper function determines the parent/child relationship between two categories.

TCursor is an Oracle cursor type. Its definition is given in the OBVIPACKAGE package definition.

The PROC_ADVSEARCH procedure:

-- PROC_ADVSEARCH
PROCEDURE PROC_ADVSEARCH(
    vKeywordWho   IN VARCHAR2,
    vKeywordWhat  IN VARCHAR2,
    vKeywordWhere IN VARCHAR2,
    vKeywordAnnot IN VARCHAR2,
    vCategoryID   IN NUMBER,
    vCursor       IN OUT TCursor)
IS
    n      NUMBER;
    bFirst NUMBER;
BEGIN
    n := 0;
    bFirst := 0;
    IF Length(vKeywordWho) <> 0 THEN
        CTX_QUERY.CONTAINS('ObviPolicy',
            '%' || vKeywordWho || '%',
            'SEARCHRESULT', bFirst, 36, 0, 1, 'StrataID = 1');
        n := n + 1;
        bFirst := 1;
    END IF;
    IF Length(vKeywordWhat) <> 0 THEN
        CTX_QUERY.CONTAINS('ObviPolicy',
            '%' || vKeywordWhat || '%',
            'SEARCHRESULT', bFirst, 37, 0, 1, 'StrataID = 2');
        n := n + 1;
        bFirst := 1;
    END IF;
    IF Length(vKeywordWhere) <> 0 THEN
        CTX_QUERY.CONTAINS('ObviPolicy',
            '%' || vKeywordWhere || '%',
            'SEARCHRESULT', bFirst, 38, 0, 1, 'StrataID = 3');
        n := n + 1;
        bFirst := 1;
    END IF;
    IF Length(vKeywordAnnot) <> 0 THEN
        CTX_QUERY.CONTAINS('ObviPolicy',
            '%' || vKeywordAnnot || '%',
            'SEARCHRESULT', bFirst, 39, 0, 1, 'StrataID = 4');
        n := n + 1;
        bFirst := 1;
    END IF;
    OPEN vCursor FOR
        SELECT
            a.AnnotID, b.ObviID, c.Name, c.Description,
            g.VersionID, e.Proxy, h.Description,
            LTCIN, LTCOUT, g.Author, g.CreationDate
        FROM
            Annotation a, AnnotForObvi b, Obvi c,
            Media d, Vdoc e, Category f, Version g,
            Strata h, Chunk i, SearchResult j
        WHERE
            j.Textkey = a.AnnotID
            AND j.Textkey2 = h.StrataID
            AND j.Textkey3 = i.ChunkID
            AND a.AnnotID = b.AnnotID
            AND a.StrataID = h.StrataID
            AND a.ChunkID = i.ChunkID
            AND b.ObviID = c.ObviID
            AND c.ObviID = g.ObviID
            AND b.ObviID IN
            (
                SELECT ObviID
                FROM
                    (SELECT DISTINCT
                         b.ObviID,
                         a.TextKey2
                     FROM
                         SearchResult a,
                         AnnotForObvi b
                     WHERE a.TextKey = b.AnnotID)
                GROUP BY ObviID
                HAVING Count(ObviID) = n
            )
            AND c.MediaID = d.MediaID
            AND d.VideoID = e.VideoID
            AND e.CategoryID = f.CategoryID
            AND FUNC_ISCHILDOF(f.CategoryID, vCategoryID) = 1
        ORDER BY b.ObviID, g.VersionID;
    RETURN;
END PROC_ADVSEARCH;

Given a keyword (or a list of keywords) for each stratum, this function runs the ConText Cartridge's search engine on each stratum, for a given category.

2.2 Search Pages

The search pages (for basic and advanced search) have been written as ASP pages. Although some of these pages use ADO for accessing the OIS database, they do not use ADO for executing a search request. Searches are handled by an Active Server Object called Obvious Search Engine. This module has been implemented in C++/ATL and contains the Oracle-specific code necessary for calling the PROC_SEARCH stored procedure responsible for the search3. 3Calling Oracle stored procedures from ADO is tricky. The ObviousSearchEngine uses the OCI library for direct access to all Oracle features.

The latest version of the ASP search pages can be seen at http://odyssee.opus.obvioustech.com/XXX

The first page allows the user to choose between the basic search and the advanced search, FIG. 47.

FIG. 47 depicts the basic search screen. The user can navigate in the hierarchy of Vdoc categories. When a search request is sent, it concerns all the categories below the current Vdoc category. By doing a search from the Root category, the user can access all OBVIs.

From the given list of keywords, a request is sent to the database, via the Obvious Search Engine. Results are grouped by OBVI and by version. For each OBVI version, a list of chunks (timecodes) shows the exact location of the hits.

By clicking on the image, the corresponding video is played. In FIG. 48, since the video is an ASF stream, the Windows Media Player will be automatically launched.

By clicking on the Version ID field, the OBVI is downloaded in OVI form. For that purpose a GetObviAsOvi request is sent to the OMS. The OMS sends back the OVI file corresponding to the specific OBVI version. In the next version, the user will also be able to click on a chunk. In that case, the OVI will be downloaded and the OMM will automatically position itself on that specific chunk.

FIG. 49 shows the advanced search page. Here, an edit box is displayed for each annotation stratum.

XV. OBVI Indexing with MIS

1 Microsoft Index Server

Microsoft® Index Server is a full-text indexing and search engine for Microsoft Internet Information Server (IIS) and Microsoft Windows NT® Server. It allows any Web browser to search documents for key words, phrases, or properties such as an author's name.

Index Server is designed for use on a single Web server on an intranet or the Internet. It can easily handle large numbers of queries on a busy site. Automatic updating and support for Microsoft Office documents is ideal for an intranet where files change frequently.

Index Server is capable of indexing textual information in any document type through content filters. Filters are provided for HTML, text, and Microsoft Office documents. Application developers can provide support for any other document by writing to the open IFilter interface. An IFilter knows how to read a file and extract the text. This text can then be indexed.

2 Indexing OBVIs with MIS

MIS uses catalogs for storing the index information related to a set of directories. By default, the Web catalog is bound to the root hierarchy of the local Web site. The administrator of the system can create other catalogs. It is recommended to create a dedicated catalog for OBVI content.

3 Implementation

An MIS filter has been implemented in C++. It allows MIS's indexing engine to parse OVI files and gather useful information for indexing. This filter basically implements the IFilter COM interface and internally uses the OBVI SDK, described in more detail in Section IV, for opening and reading OVI files.

The filter must be registered on the system. Then, any OVI file present will be automatically indexed by MIS. Once indexed, queries can be run from a web browser or any MIS-compliant application.

Search results can then be fetched; the administrator can either create a new MIS catalog or use the pre-defined Web catalog.

XVI. OBVI Streaming

As described above, OSF is, along with OVI and XML, another secondary storage format. OBVIs saved as OSF files can be efficiently streamed. This Section will focus on the specific tools that have been developed for building OSF files, streaming OSF data over IP multicast channels and receiving channel content at the client side.

1 The OSF Specification

An OSF file is composed of several chunks: the metadata chunks, the structure chunks, the image chunks and the annotation chunks. Each chunk is encoded as several data packets. A packet has the following binary structure:

The pSync field allows the client applications to parse asynchronous OSF streams and synchronise the OSF reading.

Several OSF streams can be transmitted on the same communication channel. In that case, packets corresponding to different OSFs can be interleaved. The vOsfID field allows the identification of each packet. It allows client applications to group received packets by OSF and rebuild the original stream.

The chunk type, stored in the vType field, can be one of the following:

The vDataSize field gives the number of bytes that constitute the packet data. This data starts at the pData field.

Each chunk type can be transmitted using several packets. In that case, the vNumPacket and vNbPacket fields allow client applications to reconstruct the original data chunk. This is useful with UDP-based protocols, for example, where the maximum block of data that can be transmitted in each call is limited.
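The packet handling described above can be sketched as follows. The field names (pSync, vOsfID, vType, vNumPacket, vNbPacket, vDataSize, pData) come from the specification, but the field widths, byte order and sync value are assumptions for illustration; the actual OSF binary layout is not reproduced here.

```python
import struct

# Hypothetical OSF packet layout: 32-bit little-endian fields, in the order
# pSync, vOsfID, vType, vNumPacket, vNbPacket, vDataSize, followed by pData.
SYNC = 0x4F53464B                   # arbitrary sync marker for this sketch
HEADER = struct.Struct("<IIIIII")

def pack_packet(osf_id, chunk_type, num_packet, nb_packets, data):
    """Build one packet carrying a fragment of a chunk."""
    return HEADER.pack(SYNC, osf_id, chunk_type, num_packet, nb_packets,
                       len(data)) + data

def parse_packet(buf):
    """Decode a packet, checking the sync word (pSync) first."""
    psync, osf_id, vtype, num, nb, size = HEADER.unpack_from(buf)
    assert psync == SYNC, "lost synchronisation"
    data = buf[HEADER.size:HEADER.size + size]
    return {"osf_id": osf_id, "type": vtype, "num": num, "nb": nb,
            "data": data}

def reassemble(packets):
    """Group interleaved packets by OSF identifier (vOsfID) and rebuild
    each chunk from its ordered fragments (vNumPacket of vNbPacket)."""
    chunks = {}
    for p in map(parse_packet, packets):
        frags = chunks.setdefault(p["osf_id"], {})
        frags[p["num"]] = p["data"]
    return {osf: b"".join(frags[i] for i in sorted(frags))
            for osf, frags in chunks.items()}
```

The vDataSize field makes each fragment self-describing, so packets from different OSF streams can be interleaved on one channel and still be reassembled, as the specification requires.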

2 The Obvious Stream Builder

The Obvious Stream Builder is a simple tool that allows the conversion of OVI files into OSF files. It internally uses the OBVI SDK (LibOBVI.dll) for parsing the input OVI file and creating corresponding packets for the OSF file.

3 The Obvious Multicaster

FIG. 50 depicts the main window of the Obvious Multicaster. It shows a list of channels, each channel being represented by a name, a description, a multicast IP address and a status.

The New button allows the user to create a new channel. FIG. 51 shows the dialog that appears. By selecting a channel and clicking on the Configure button, the user can define the list of OSF files that constitute that specific channel. The Browse button permits the user to load an existing OSF file from the hard drive, FIG. 52.

4 The Obvious Multicast Listener

XVII. The Whole Picture

This section gives an overview of the whole process. This process concerns the following tasks:

From the same Obvious Administration Console, the user starts the Obvious VAMT Manager. Then he creates and launches VAMT analysis jobs for the media that he has just created.

Cycle 2: Creating and Authoring OBVIs from Pre-registered Media

In the Obvious Server Architecture, many HTTP requests give a response that can be represented as a recordset, i.e. a table consisting of N fields and M rows. Each field has a type and a name.

An XML format has been designed for representing a generic recordset. This allows a common representation of all these HTTP responses.
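Generating such a generic recordset representation can be sketched as follows. The element names (RECORDSET, FIELDS, FIELD, ROW) and attribute names used here are illustrative assumptions; the actual element structure is fixed by the DTD defined in the Obvious Server Architecture.

```python
import xml.etree.ElementTree as ET

def recordset_to_xml(fields, rows):
    """Serialise an N-field, M-row recordset as XML.

    `fields` is a list of (name, type) pairs and `rows` a list of value
    tuples. The element and attribute names are hypothetical stand-ins
    for the architecture's actual recordset DTD.
    """
    root = ET.Element("RECORDSET")
    flds = ET.SubElement(root, "FIELDS")
    for name, ftype in fields:
        # Each field carries its name and type, as described above.
        ET.SubElement(flds, "FIELD", name=name, type=ftype)
    for row in rows:
        r = ET.SubElement(root, "ROW")
        for (name, _), value in zip(fields, row):
            ET.SubElement(r, name).text = str(value)
    return ET.tostring(root, encoding="unicode")
```

Any HTTP response that is logically a table can then be emitted through this one serialiser, which is the point of having a common recordset format.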

The DTD is given below:

The GetStructure request of the Obvious Media Server returns an XML-formatted response whose DTD is described below:

A sample XML is given below. It represents a structure with 3 levels. First level is composed of 2 blocks, with 2 child blocks each.

Object annotations are internally represented by an XML file whose DTD is described below:

A sample XML file is given below.

A Global Unique Identifier (GUID), also called Universal Unique Identifier (UUID), is a 128-bit value used in cross-process communication to identify entities such as client and server interfaces, manager entry-point vectors, and RPC objects.

As previously described, the Obvious Network Architecture defines unique identifiers for various objects, such as Vdocs, Media, Streams, Users, Groups, Units, OBVIs, Versions, etc. However, these identifiers are not unique over all sites. Two objects from two different sites may have the same identifier.

This section describes a way of creating global unique identifiers. These GUIDs would permit the referencing of objects across site boundaries, making it possible for one site to access the objects of another site.

The structure of a GUID is given by the following formula:
GUID(Object)=Site(Object)+Identifier(Object)

Suppose we have 2 Sites called A and B. From a client application a user fetches an object XA from site A. This object has a unique identifier in Site A. However it is not guaranteed that this identifier is not already in use in Site B. A GetGUID request is sent to the OSM of Site A. This allows the client application to get the GUID corresponding to XA.
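The composition GUID(Object) = Site(Object) + Identifier(Object) can be sketched as packing a site identifier and a site-local identifier into one 128-bit value. The 64/64-bit split chosen below is an assumption for illustration; the document only specifies the total width of 128 bits.

```python
def make_guid(site_id, object_id):
    """Compose a 128-bit GUID from a site identifier and a site-local
    object identifier. The 64/64-bit split is an illustrative assumption."""
    assert 0 <= site_id < 2**64 and 0 <= object_id < 2**64
    return (site_id << 64) | object_id

def split_guid(guid):
    """Recover the site part and the local identifier from a GUID."""
    return guid >> 64, guid & (2**64 - 1)
```

With this scheme, the object XA of Site A and an object with the same local identifier in Site B get distinct GUIDs, because the site part differs, which is exactly the cross-site uniqueness property the GetGUID request relies on.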

XXII. The IAnnotFilter COM Interface

The Obvious Publishing Engine uses a set of filters for extracting raw text from the various kinds of annotations that can be found in an OVI. Each OVI annotation corresponds to a specific filter that acts as a parser for that annotation. The Obvious Network Architecture specifies that filters are COM objects that implement the IAnnotFilter interface. This COM interface is described below.

XXIII. The IAnnotConverter COM Interface

During the publishing/indexing process, the Obvious Publishing Engine uses a set of converters for converting OVI annotations into HTML. Each OVI annotation must be handled by a specific converter. The Obvious Network Architecture specifies that converters are COM objects that implement the IAnnotConverter interface. This COM interface is described below.

Madrane, Nabil
