A real-time low frame-rate video compression system and method that allows the user to perform face-to-face communication through an extremely low bandwidth network. The system and method employs novel eye tracking and blink detection techniques in order to select images for transmission. Experimental results show that the system is superior to more traditional video codecs for low bit-rate face-to-face communication.
1. A process for tracking eyes and detecting eye blinks, comprising the process actions of:
using a computing device for,
defining eye templates for a person depicted in a video frame;
inputting a video frame of the person's face;
using a face detector to find a face box which surrounds the face of the person;
searching the upper part of the face box for eyes using feature based matching to match image patches for each eye in the video frame to the eye templates to locate the eyes, wherein the feature based matching comprises the process actions of:
for each image patch,
computing a grayscale image corresponding to the image patch;
creating horizontal and vertical edge maps of the image patch;
summing columns of pixels for the grayscale image and the vertical edge map to project the image patch to the horizontal axis to create two one dimensional (1D) signals;
summing rows of pixels for the horizontal edge map to project the image patch to the vertical axis to produce one 1D signal; and
computing the similarity between the eye template and the image patch as a weighted sum of the correlations between corresponding 1D signals, wherein the similarity is determined by a signal correlation function S(A,B),
where L is the length of the signals and A and B are two arrays, A=a1, a2, . . . , aL, B=b1, b2, . . . , bL, where ai and bi are elements in the two arrays.
10. A system for detecting eye blinks, comprising:
a general purpose computing device; and
a computer program comprising program modules executable by the computing device, wherein the computing device is directed by the program modules of the computer program to,
define eye templates for a person depicted in a video frame;
input a sequence of video frames, at least some of which contain image frames of the person;
use a face detector to find a face;
if a face is found, search the face for eyes using said eye templates and feature based matching, wherein image patches for each frame in the video sequence are extracted and compared to the eye templates wherein the feature based matching comprises the sub-modules to:
for each extracted image patch,
compute a grayscale image corresponding to the image patch;
create horizontal and vertical edge maps of the image patch;
sum columns of pixels of the grayscale image and the vertical edge map to project the image patch to the horizontal axis to create two one dimensional (1D) signals;
sum rows of pixels for the horizontal edge map to project the image patch to the vertical axis to produce one 1D signal; and
compute the similarity between the eye template and the image patch as a weighted sum of the correlations between corresponding 1D signals, wherein the similarity is determined by a signal correlation function S(A,B),
where L is the length of the signals and A and B are two arrays, A=a1, a2, . . . , aL, B=b1, b2, . . . , bL, where ai and bi are elements in the two arrays.
2. The process of
S(TL,I)=wG·S(GT,GI)+wH·S(HT,HI)+wV·S(VT,VI), where wG, wH and wV are predefined weights.
3. The process of
S(TR,I)=wG·S(GT,GI)+wH·S(HT,HI)+wV·S(VT,VI), where wG, wH and wV are predefined weights.
4. The process of
manually indicating the pupil positions on the first frame of the video sequence in which the person is depicted with wide open eyes; and
extracting two image patches at the pupil positions as templates, one for each eye.
5. The process of
6. The process of
7. The process of
9. A computer-readable storage medium having computer-executable instructions stored thereon for performing the process recited in
11. The system of
manually indicate the pupil positions on the first frame of the video sequence in which the person is depicted with wide open eyes; and
extract two image patches at the pupil positions as templates, one for each eye.
12. The system of
1. Technical Field
The invention is related to video conferencing, and in particular, to a system and method for very low frame rate video streaming for face-to-face videoconferencing that employs novel eye tracking and blink detection techniques.
2. Related Art
Face-to-face video communication is a potentially important component of real time communication systems. Inexpensive cameras connected to devices ranging from desktop computers to cell phones enable video conferencing in a variety of modes such as one-to-one and multi-party conferences.
Most video teleconference solutions are specifically designed for broadband networks and cannot be applied to low bandwidth networks. Previous face video compression techniques are not able to operate efficiently at very low bit rates because they compress and transmit the entirety of every video frame. Thus, reducing the bandwidth necessarily degrades the image in every frame, and there is a minimum number of bits allocated per frame below which conventional compression techniques cannot produce visually acceptable results. Multi-party video conferences put an added strain on bandwidth requirements, since multiple video streams must be transmitted simultaneously so that all of the participants can take part.
Different approaches have been proposed to reduce the bandwidth requirements for streaming video, such as the MPEG-4 face animation standard and H.26x video coding [1]. By taking advantage of face models, the MPEG-4 face animation standard can achieve a high compression ratio by sending only face model parameters. However, it is difficult to make the synthesized faces look natural and match the original video. H.26x waveform-based coding techniques are fully automatic and robust, but are not efficient for low bit-rate face video since their generality does not take advantage of any face models. These two types of techniques are combined in a recently proposed low bit-rate face video streaming system [2], where prior knowledge about faces is incorporated into traditional waveform-based compression techniques to achieve better compression performance. This system is, however, not able to operate efficiently at very low bit rates (e.g., on the order of 8 kb/s).
Therefore, what is needed is a system and method that can provide face-to-face video conferencing at very low bit rates with natural looking results. Additionally, this system and method should be able to provide face-to-face video conferencing in real time.
A very low bit rate video conferencing system and method that includes a novel system and process for tracking eyes and detecting eye blinks. Eye templates for a person depicted in a video frame are created by manually indicating the pupil positions on the first frame of the video sequence in which the person is depicted with wide open eyes, and extracting two image patches at the pupil positions as templates, one for each eye. A sequence of video frames, at least some of which contain images of the person, is then captured or input. A face detector scans the neighborhood of the face location in the previous frame to find a face box which surrounds the face of the person. If a face box is found, the upper part of the face box is searched for eyes using the eye templates and feature based matching, wherein image patches for each frame in the video sequence are extracted and compared to the eye templates. The feature based matching involves computing a grayscale image corresponding to the image patch and creating horizontal and vertical edge maps for this image patch. Columns of pixels of the grayscale image and the vertical edge map are summed to project the image patch onto the horizontal axis as two one dimensional (1D) signals. The rows of pixels of the horizontal edge map are then summed to project the image patch onto the vertical axis as another 1D signal. The similarity between the eye template and the image patch is computed as the weighted sum of the correlations between corresponding 1D signals. Eye closing is detected when the correlation values for a given frame drop below a given threshold.
It is noted that in this section and the remainder of this specification, the description refers to various individual publications identified by a numeric designator contained within a pair of brackets. For example, such a reference may be identified by reciting, “reference [1]” or simply “[1]”. A listing of the publications corresponding to each designator can be found at the end of the Detailed Description section.
The file of this patent or application contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the U.S. Patent and Trademark Office upon request and payment of the necessary fee.
The specific features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:
In the following description of the preferred embodiments of the present invention, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. It is understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention.
1.0 Exemplary Operating Environment:
The invention is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, server computers, hand-held, laptop or mobile computer or communications devices such as cell phones and PDA's, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer in combination with hardware modules, including components of a microphone array 198. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices. With reference to
Components of computer 110 may include, but are not limited to, a processing unit 120, a system memory 130, and a system bus 121 that couples various system components including the system memory to the processing unit 120. The system bus 121 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus.
Computer 110 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 110 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data.
Computer storage media includes, but is not limited to, RAM, ROM, PROM, EPROM, EEPROM, flash memory, or other memory technology; CD-ROM, digital versatile disks (DVD), or other optical disk storage; magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices; or any other medium which can be used to store the desired information and which can be accessed by computer 110. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
The system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132. A basic input/output system 133 (BIOS), containing the basic routines that help to transfer information between elements within computer 110, such as during start-up, is typically stored in ROM 131. RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120. By way of example, and not limitation,
The computer 110 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only,
The drives and their associated computer storage media discussed above and illustrated in
Other input devices (not shown) may include a joystick, game pad, satellite dish, scanner, radio receiver, and a television or broadcast video receiver, or the like. These and other input devices are often connected to the processing unit 120 through a wired or wireless user input interface 160 that is coupled to the system bus 121, but may be connected by other conventional interface and bus structures, such as, for example, a parallel port, a game port, a universal serial bus (USB), an IEEE 1394 interface, a Bluetooth™ wireless interface, an IEEE 802.11 wireless interface, etc. Further, the computer 110 may also include a speech or audio input device, such as a microphone or a microphone array 198, as well as a loudspeaker 197 or other sound output device connected via an audio interface 199, again including conventional wired or wireless interfaces, such as, for example, parallel, serial, USB, IEEE 1394, Bluetooth™, etc.
A monitor 191 or other type of display device is also connected to the system bus 121 via an interface, such as a video interface 190. In addition to the monitor, computers may also include other peripheral output devices such as a printer 196, which may be connected through an output peripheral interface 195.
The computer 110 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 180. The remote computer 180 may be a personal computer, a server, a router, a network PC, a peer device, or other common network node, and typically includes many or all of the elements described above relative to the computer 110, although only a memory storage device 181 has been illustrated in
When used in a LAN networking environment, the computer 110 is connected to the LAN 171 through a network interface or adapter 170. When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173, such as the Internet. The modem 172, which may be internal or external, may be connected to the system bus 121 via the user input interface 160, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 110, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation,
The exemplary operating environment having now been discussed, the remaining part of this description will be devoted to a discussion of the very low bit rate video conferencing system and method of the invention.
2.0 The Very Low Bit Rate Video Conferencing System and Method:
The very low bit rate video conferencing system and method can be used individually in very low bandwidth networks, or as a complement to existing video conferencing systems. In a teleconference involving a group of people, each person's face will be captured and transmitted to others. Since there is generally only one speaker at a time, the very low bit rate video conferencing system and method can transmit the face of the speaker with a higher frame-rate, high quality video while transmitting all the listeners using low frame-rate video to save overall network bandwidth.
The following paragraphs discuss the details of the very low bit rate video conferencing system and method of the invention.
2.1 The Encoder
The encoding process of the very low bit rate video conferencing system and method is shown in
2.2 The Decoder
A flow chart of the decoding process of the low bit rate video conferencing system and method is shown in
3.0 Finding Good Faces
The very low frame rate setting provides the freedom to choose which frame to transmit. For example, if the camera operates at 15 fps, but one wishes to transmit only one frame every 2 seconds, then one has up to 30 frames to choose from (however, in practice one may wish to limit the choice to minimize latency). Since each frame will be seen for 2 seconds, it becomes critical to select “good” frames. Choosing the specific features that distinguish a “good” frame from a “bad” one is somewhat objective. In an informal study many frames were examined and their quality was judged. It was originally hypothesized that some aspects of the eyes and mouth would correlate with the subjective judgment. In fact, only how open the eyes are had a significant correlation with “goodness.” Examining
3.1 Real-Time Eye Tracking.
Face tracking has been extensively used for efficient face video compression [4, 5]. The very low bit rate video conferencing system and method begins with the efficient face detection algorithm proposed in [6] to locate a rectangular face box containing the face. For robustness, the low bit rate video conferencing system and method employs a template matching based method for both eye tracking and blink detection. For each video conference participant, the pupil positions are manually indicated on the first frame with wide open eyes (process action 502). The user is also asked to indicate the mouth corners for later morphing (process action 504). Two image patches are extracted at the pupil positions as templates, one for each eye, (process action 506), an example of which is shown in
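The template-extraction step described above can be sketched as follows. This is an illustrative sketch only: the patch size, function names, and the assumption that frames are NumPy image arrays are not specified in the patent.

```python
import numpy as np  # frames are assumed to be 2D grayscale numpy arrays


def extract_eye_templates(first_frame, left_pupil, right_pupil, half=12):
    """Cut fixed-size patches centred on the manually marked pupil positions
    of a wide-open-eyes frame. The patch half-size (12 px) is illustrative;
    the patent does not state a template size."""
    def patch(cx, cy):
        return first_frame[cy - half:cy + half, cx - half:cx + half].copy()
    return patch(*left_pupil), patch(*right_pupil)
```

These two patches then serve as the per-participant left- and right-eye templates matched against candidate locations in subsequent frames.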
On each frame, the very low bit rate video conferencing system and method iteratively matches the templates to possible locations and selects the best matching positions as the eye locations. For real-time detection and tracking, for each input frame (process action 508), the detector scans only the neighborhood of the face location in the previous frame to find a face box (process action 510). If a face is not found, the frame is not processed further (process action 512). If a face box is found, the very low bit rate video conferencing system and method searches the upper part of the box for eyes and extracts two image patches, one for each eye (process action 514). Eyes from different people or under different illumination conditions may have significantly different appearances. For efficiency and robustness to illumination changes, the very low bit rate video conferencing system and method uses image feature based matching instead of directly comparing image patches. As shown in
SP(TL,I)=wG·S(GT,GI)+wH·S(HT,HI)+wV·S(VT,VI)  (1)
where wG, wH and wV are predefined weights. In one exemplary embodiment these predefined weights are set to be 0.4, 0.3 and 0.3, respectively.
S(A,B) is the signal correlation function computed as
where L is the length of the signal. Equation 2 describes the function on the right hand side of equation 1. In Equation 2, A and B are two arrays, A=a1, a2, . . . , aL, B=b1, b2, . . . , bL where ai and bi are elements in the two arrays.
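The projection and matching steps can be sketched in Python as follows. Two points are assumptions for illustration: the finite-difference edge operator (the patent does not name one) and the normalized cross-correlation form of S(A,B) (the correlation formula itself is not reproduced in this excerpt). The weights 0.4, 0.3 and 0.3 follow the exemplary embodiment above.

```python
import numpy as np


def projections(patch):
    """Project an eye patch to three 1D signals: grayscale and vertical-edge
    column sums (horizontal axis), and horizontal-edge row sums (vertical axis)."""
    g = patch.astype(np.float64)
    # Simple finite-difference edge maps; the actual operator is an assumption.
    h_edges = np.abs(np.diff(g, axis=0))  # horizontal edge map
    v_edges = np.abs(np.diff(g, axis=1))  # vertical edge map
    G = g.sum(axis=0)        # grayscale columns -> horizontal axis
    V = v_edges.sum(axis=0)  # vertical edge map columns -> horizontal axis
    H = h_edges.sum(axis=1)  # horizontal edge map rows -> vertical axis
    return G, H, V


def correlate(a, b):
    """Normalized correlation of two equal-length 1D signals
    (assumed form of S(A,B); the excerpt omits the formula)."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0


def similarity(template, patch, w_g=0.4, w_h=0.3, w_v=0.3):
    """Weighted sum of correlations between corresponding 1D signals."""
    Gt, Ht, Vt = projections(template)
    Gi, Hi, Vi = projections(patch)
    return (w_g * correlate(Gt, Gi)
            + w_h * correlate(Ht, Hi)
            + w_v * correlate(Vt, Vi))
```

Because the 2D patches are reduced to three short 1D signals before correlation, each candidate location is scored far more cheaply than with direct 2D patch comparison, which is what makes the per-frame template search real-time.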
3.2 Eye Blink Detection
The advantage of template-matching based eye tracking is that it not only gives the best possible locations for the eyes, but also tells how well the templates match at those locations, as indicated by the computed correlation values. Since the low bit rate video conferencing system and method uses open eyes as templates, when the eyes are blinking the correlation values drop significantly.
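This drop in match score can be turned into a per-frame blink flag. The patent only states that scores drop significantly during a blink; the running-baseline scheme and the 0.7 drop ratio below are illustrative assumptions.

```python
def detect_blinks(scores, drop_ratio=0.7):
    """Given per-frame open-eye template-match scores, return a boolean list:
    True where the eye is judged closed (score well below the recent
    open-eye baseline). drop_ratio and the baseline update are illustrative."""
    baseline = scores[0]  # assume the first frame shows open eyes
    closed = []
    for s in scores:
        closed.append(s < drop_ratio * baseline)
        if not closed[-1]:
            baseline = 0.9 * baseline + 0.1 * s  # slowly adapt on open frames
    return closed
```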
3.3 Good Frame Selection.
Good frames for transmission are selected as follows. For each good frame FGi, there is a time stamp tGi. Good frames are selected from the original sequence based on the following criteria:
(1) tmin ≤ tGi − tGi−1 ≤ tmax,
where tmin and tmax are parameters determining how frequently one wants to pick out good frames; essentially, the frequency of the good frames determines the required bandwidth of the transmitted video. The variables tmin and tmax can be user defined.
(2) Both the face tracker and eye blink detector give positive results (e.g., the face-tracker gives a positive result when the frame contains a face and the eye blink detector gives a positive result when the eyes are open), which ensures the selected face has good visual quality (e.g., the eyes are open).
In cases where the user is temporarily away from the camera, so that the second criterion cannot be satisfied, in one embodiment the system sends a random frame every tmax interval to keep the transmitted video alive. More specifically, in the time interval [tmin, tmax], the system searches for a frame which satisfies the two criteria. However, if none of the frames in this time interval satisfies the two criteria, the system randomly chooses one frame.
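The two selection criteria above can be sketched as a simple scan over time-stamped frames. The Frame record, field names, and open-eye threshold are illustrative, and the fallback deterministically picks the earliest frame in the window rather than a random one.

```python
from dataclasses import dataclass


@dataclass
class Frame:
    t: float          # capture time stamp (seconds)
    has_face: bool    # face tracker result
    eye_score: float  # template-match correlation (higher = eyes more open)


def select_good_frames(frames, t_min=1.0, t_max=3.0, open_thresh=0.6):
    """Pick frames whose spacing from the last pick lies in [t_min, t_max]
    and whose face/eye checks pass; if the window closes with no good
    candidate, fall back to the earliest frame to keep the video alive."""
    selected = []
    last_t = frames[0].t - t_max  # allow an early first pick
    window = []                   # candidates seen since t_min elapsed
    for f in frames:
        dt = f.t - last_t
        if dt < t_min:
            continue              # too soon after the last selected frame
        window.append(f)
        if f.has_face and f.eye_score >= open_thresh:
            selected.append(f)    # good frame inside [t_min, t_max]
            last_t = f.t
            window = []
        elif dt >= t_max:
            pick = window[0]      # no good frame in the window: fallback
            selected.append(pick)
            last_t = pick.t
            window = []
    return selected
```

With all frames good and t_min = 1 s, the sketch picks roughly one frame per second, matching the intuition that the good-frame frequency sets the transmitted bandwidth.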
4.0 Compression and Rendering of Faces
4.1 Improved Motion Compensation
Selected good faces are compressed before transmission. The frames containing the good faces can be compressed using a standard video codec. Since the good faces are sparsely selected from the original video, the frame difference is typically larger than in a high frame-rate system, making standard motion compensation less efficient. The face and eye tracking used to select frames can also inform the compression subsystem. First, by transmitting only the face region, the very low bit rate video conferencing system and method avoids redundant transmission of the background. Second, the face tracking approximately aligns subsequent frames, significantly reducing the size of the interframe difference. Finally, by applying an image morph [7] that aligns the eye and mouth positions, the difference between subsequent frames is further reduced. In one embodiment, the view area is limited to the area surrounding the head. However, it is also possible to send the background once or very infrequently.
As shown in
4.2 Image Morphing for Rendering.
As shown in
In this way one can create a new video in the same frame-rate as the original video captured at the encoder (process action 1000).
5.0 Results
5.1 Good Face Finding.
As can be seen in
5.2 Compression.
Table 1 shows the compression result of the system on a sample video with different settings of tmin and tmax, which control the desired frame rate. Note that the very low bit rate video conferencing system and method only requires a very low bit-rate to transmit a semi-alive video. The last row of the table shows the compression result of the codec of the very low bit rate video conferencing system and method without using image morphing.
TABLE 1
Compression results of the low frame-rate system
Codec | Configurations | Bit-Rate
MPEG 2 | 640*480, 30 f/s, good quality | 322 Kb/s
H.264 | 640*480, 30 f/s, lowest quality | 12.4 Kb/s
Low frame-rate | 240*280, tmin = 1, tmax = 3 | 7.4 Kb/s
Low frame-rate | 240*280, tmin = 2, tmax = 4 | 3.8 Kb/s
Low frame-rate w/o morphing | 240*280, tmin = 2, tmax = 4 | 5.4 Kb/s
H.264 also achieves a low bit-rate compression, but the visual quality of the compressed video is significantly worse spatially, as shown in
The foregoing description of the very low bit rate video conferencing system and method has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. Further, it should be noted that any or all of the aforementioned alternate embodiments may be used in any combination desired to form additional hybrid embodiments. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto.
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Mar 21 2005 | COHEN, MICHAEL | Microsoft Corporation | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 015915 | /0788 | |
Mar 21 2005 | WANG, JUE | Microsoft Corporation | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 015915 | /0788 | |
Mar 22 2005 | Microsoft Corp. | (assignment on the face of the patent) | / | |||
Oct 14 2014 | Microsoft Corporation | Microsoft Technology Licensing, LLC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 034543 | /0001 |