A mobile device detects a moveable foreground object in captured images, e.g., a series of video frames without depth information. The object may be one or more of the user's fingers. The object may be detected by warping one of a captured image of a scene that includes the object and a reference image of the scene without the object so they have the same view and comparing the captured image and the reference image after warping. A mask may be used to segment the object from the captured image. Pixels are detected in the extracted image of the object and the pixels are used to detect the point of interest on the foreground object. The object may then be tracked in subsequent images. Augmentations may be rendered and interacted with or temporal gestures may be detected and desired actions performed accordingly.
1. A method comprising:
capturing an image of a scene with a foreground object that is not attached to the scene, the foreground object including a point of interest that is a distinct physical aspect, wherein the foreground object is at least one of a finger of a user or a pointer and the point of interest is a tip of the at least one of the finger of the user or the pointer;
warping at least one of the image and a reference image of the scene that does not include the foreground object so the image and the reference image have a same view;
comparing the image to the reference image after warping to detect pixels that belong to the point of interest on the foreground object;
detecting the point of interest on the foreground object using the detected pixels;
displaying the image on a display; and
rendering an augmentation on the display over the image based on the point of interest.
31. An apparatus comprising:
means for capturing an image of a scene with a foreground object that is not attached to the scene, the foreground object including a point of interest that is a distinct physical aspect, wherein the foreground object is at least one of a finger of a user or a pointer and the point of interest is a tip of the at least one of the finger of the user or the pointer;
means for warping at least one of the image and a reference image of the scene that does not include the foreground object so the image and the reference image have a same view;
means for comparing the image to the reference image after warping to detect pixels that belong to the point of interest on the foreground object;
means for detecting the point of interest on the foreground object using the detected pixels;
means for displaying the image on a display; and
means for rendering an augmentation on the display over the image based on the point of interest.
16. An apparatus comprising:
a camera;
a display; and
a processor coupled to the display and coupled to the camera to receive an image of a scene with a foreground object that is not attached to the scene, the foreground object including a point of interest that is a distinct physical aspect, wherein the foreground object is at least one of a finger of a user or a pointer and the point of interest is a tip of the at least one of the finger of the user or the pointer, the processor configured to warp at least one of the image and a reference image of the scene that does not include the foreground object so the image and the reference image have a same view, compare the image to the reference image after warping to detect pixels that belong to the point of interest on the foreground object, detect the point of interest on the foreground object using the detected pixels, display the image on the display, and render an augmentation on the display over the image based on the point of interest.
39. A non-transitory storage medium including program code stored thereon, comprising:
program code to receive an image of a scene with a foreground object that is not attached to the scene, the foreground object including a point of interest that is a distinct physical aspect, wherein the foreground object is at least one of a finger of a user or a pointer and the point of interest is a tip of the at least one of the finger of the user or the pointer;
program code to warp at least one of the image and a reference image of the scene that does not include the foreground object so the image and the reference image have a same view;
program code to compare the image to the reference image after warping to detect pixels that belong to the point of interest on the foreground object;
program code to detect the point of interest on the foreground object using the detected pixels;
program code to display the image on a display; and
program code to render an augmentation on the display over the image based on the point of interest.
2. The method of
3. The method of
generating a mask for the foreground object;
segmenting the foreground object from the image using the mask; and
detecting the pixels using the foreground object segmented from the image.
4. The method of
generating a foreground object image using pixels in the image that are different than corresponding pixels in the reference image; and
detecting the pixels that belong to the point of interest on the foreground object in the foreground object image.
5. The method of
subtracting pixels in the image from corresponding pixels in the reference image to generate a difference for each pixel after warping; and
comparing the difference for each pixel to a threshold.
6. The method of
generating ratios for corresponding pixels in the image and the reference image after warping; and
comparing the ratios for corresponding pixels to a threshold.
7. The method of
generating a pose between the image and the reference image; and
warping one of the image and the reference image based on the pose.
8. The method of
displaying subsequently captured images on the display; and
altering the augmentation based on the point of interest in the subsequently captured images.
9. The method of
10. The method of
11. The method of
tracking the point of interest on the foreground object in subsequently captured images;
detecting a temporal gesture based on movement of the point of interest on the foreground object; and
performing an action associated with the temporal gesture.
12. The method of
comparing a configuration of the point of interest on the foreground object to a library of gesture configurations;
identifying a gesture from the configuration of the point of interest on the foreground object; and
performing an action associated with the gesture.
14. The method of
for each subsequently captured image, warping at least one of the subsequently captured image and the reference image of the scene;
comparing the subsequently captured image to the reference image after warping to detect pixels that belong to the point of interest on the foreground object; and
detecting the point of interest on the foreground object using the detected pixels in the subsequently captured image.
15. The method of
17. The apparatus of
18. The apparatus of
generate a mask for the foreground object;
segment the foreground object from the image using the mask; and
detect the pixels using the foreground object segmented from the image.
19. The apparatus of
generate a foreground object image using pixels in the image that are different than corresponding pixels in the reference image; and
detect the pixels that belong to the point of interest on the foreground object in the foreground object image.
20. The apparatus of
21. The apparatus of
22. The apparatus of
23. The apparatus of
24. The apparatus of
25. The apparatus of
detect pixels that belong to the points of interest on the multiple foreground objects by comparing the image to the reference image after warping; and
detect the points of interest on the multiple foreground objects using the detected pixels.
26. The apparatus of
track the point of interest on the foreground object in subsequently captured images;
detect a temporal gesture based on movement of the point of interest on the foreground object; and
perform an action associated with the temporal gesture.
27. The apparatus of
compare a configuration of the point of interest on the foreground object to a library of gesture configurations;
identify a gesture from the configuration of the point of interest on the foreground object; and
perform an action associated with the gesture.
28. The apparatus of
29. The apparatus of
for each subsequently captured image, warp at least one of the subsequently captured image and the reference image of the scene;
compare the subsequently captured image to the reference image after warping to detect pixels that belong to the point of interest on the foreground object; and
detect the point of interest on the foreground object using the detected pixels in the subsequently captured image.
30. The apparatus of
32. The apparatus of
33. The apparatus of
34. The apparatus of
means for generating a pose between the image and the reference image; and
means for warping one of the image and the reference image based on the pose.
35. The apparatus of
36. The apparatus of
37. The apparatus of
means for displaying subsequently captured images on the display; and
means for altering the augmentation based on the point of interest in the subsequently captured images.
38. The apparatus of
means for tracking the point of interest on the foreground object in subsequently captured images;
means for detecting a temporal gesture based on movement of the point of interest on the foreground object; and
means for performing an action associated with the temporal gesture.
40. The non-transitory storage medium of
41. The non-transitory medium of
program code to generate a pose between the image and the reference image; and
program code to warp one of the image and the reference image based on the pose.
42. The non-transitory medium of
program code to subtract pixels in the image from corresponding pixels in the reference image to generate a difference for each pixel after warping; and
program code to compare the difference for each pixel to a threshold.
43. The non-transitory medium of
program code to generate ratios for corresponding pixels in the image and the reference image after warping; and
program code to compare the ratios for corresponding pixels to a threshold.
44. The non-transitory medium of
program code to display subsequently captured images; and
program code to alter the augmentation based on the point of interest in the subsequently captured images.
45. The non-transitory medium of
program code to track the point of interest on the foreground object in subsequently captured images;
program code to detect a temporal gesture based on movement of the point of interest on the foreground object; and
program code to perform an action associated with the temporal gesture.
1. Background Field
Embodiments of the subject matter described herein are related generally to detecting and tracking a movable object in a series of captured images, such as a video stream, and more particularly to using the moveable object to interact with augmentations rendered in the display of the captured images.
2. Relevant Background
In augmented reality (AR) applications, a real world object is imaged and displayed on a screen along with computer generated information, such as an image, graphics, or textual information. The computer generated information is rendered over the real world object and may be used, e.g., to provide graphical or textual information about the real world object or for entertainment purposes, such as animations or gaming. Conventional ways for a user to interact with rendered objects displayed in AR type applications, however, are limited and non-intuitive.
Current approaches for a user to interact with rendered objects typically use physical input elements on the device, such as buttons or a touch screen. Another approach to interaction between the user and a rendered augmentation is referred to as virtual buttons. A user may interact with virtual buttons by occluding a pre-designated area of the imaged scene with an object, such as a finger. The occlusion of the pre-designated area can be visually detected and in response an action may be performed. The resulting augmentation with virtual buttons, however, is limited, as the user does not interact with the virtual button as if the virtual button actually exists in the same space as the user.
A mobile device detects a moveable foreground object in captured images, e.g., a series of video frames without depth information. The object may be one or more of the user's fingers. The object may be detected by warping one of a captured image of a scene that includes the object and a reference image of the scene without the object so they have the same view and comparing the captured image and the reference image after warping. A mask may be used to segment the object from the captured image. Pixels are detected in the extracted image of the object and the pixels are used to detect the point of interest on the foreground object. The object may then be tracked in subsequent images. Augmentations may be rendered and interacted with or temporal gestures may be detected and desired actions performed accordingly.
In one implementation, a method includes capturing an image of a scene with a foreground object that is not attached to the scene, the foreground object including a point of interest that is a distinct physical aspect; warping at least one of the image and a reference image of the scene that does not include the foreground object so the image and the reference image have a same view; comparing the image to the reference image after warping to detect pixels that belong to the point of interest on the foreground object; detecting the point of interest on the foreground object using the detected pixels; displaying the image on a display; and rendering an augmentation on the display over the image based on the point of interest.
In one implementation, an apparatus includes a camera; a display; and a processor coupled to the display and coupled to the camera to receive an image of a scene with a foreground object that is not attached to the scene, the foreground object including a point of interest that is a distinct physical aspect, the processor configured to warp at least one of the image and a reference image of the scene that does not include the foreground object so the image and the reference image have a same view, compare the image to the reference image after warping to detect pixels that belong to the point of interest on the foreground object, detect the point of interest on the foreground object using the detected pixels, display the image on the display, and render an augmentation on the display over the image based on the point of interest.
In one implementation, an apparatus includes means for capturing an image of a scene with a foreground object that is not attached to the scene, the foreground object including a point of interest that is a distinct physical aspect; means for warping at least one of the image and a reference image of the scene that does not include the foreground object so the image and the reference image have a same view; means for comparing the image to the reference image after warping to detect pixels that belong to the point of interest on the foreground object; means for detecting the point of interest on the foreground object using the detected pixels; means for displaying the image on a display; and means for rendering an augmentation on the display over the image based on the point of interest.
In one implementation, a storage medium including program code stored thereon, includes program code to receive an image of a scene with a foreground object that is not attached to the scene, the foreground object including a point of interest that is a distinct physical aspect; program code to warp at least one of the image and a reference image of the scene that does not include the foreground object so the image and the reference image have a same view; program code to compare the image to the reference image after warping to detect pixels that belong to the point of interest on the foreground object; program code to detect the point of interest on the foreground object using the detected pixels; program code to display the image on a display; and program code to render an augmentation on the display over the image based on the point of interest.
Mobile device 100 is shown in
As used herein, a mobile device refers to any portable electronic device capable of vision-based position detection and tracking from captured images or video streams, and may include, e.g., a cellular or other wireless communication device, personal communication system (PCS) device, personal navigation device (PND), Personal Information Manager (PIM), Personal Digital Assistant (PDA), or other suitable mobile device including cameras, wireless communication devices, computers, laptops, tablet computers, etc. The mobile device may be, but need not necessarily be, capable of receiving wireless communication and/or navigation signals, such as navigation positioning signals. The term “mobile device” is also intended to include devices which communicate with a personal navigation device (PND), such as by short-range wireless, infrared, wireline connection, or other connection—regardless of whether satellite signal reception, assistance data reception, and/or position-related processing occurs at the device or at the PND.
The mobile device 100 is capable of detecting and tracking the position of one or more objects 112, such as the fingers of a user 111 or other movable objects that are not attached to the imaged scene. A classifier, such as a Random Forest classifier, may be used, for example, to robustly detect the foreground object. The object, which is in the foreground as it is not attached to the scene, may be detected using background segmentation. Background segmentation, however, typically requires the use of depth information. Mobile device 100 may nevertheless perform background segmentation without depth information by estimating the background with a known target 106 and a current pose (position and orientation) of mobile device 100 with respect to the target 106. The estimated background may be subtracted from the image to identify a foreground object. Thus, to interact with a displayed augmentation, the user 111 may bring an object 112, such as a finger, over the background in the captured image so that the object 112 may be segmented and detected, e.g., using the classifier.
The mobile device 100 uses information that is already available in the tracking system (i.e., a known target and pose) to perform the background segmentation, which simplifies and accelerates the object detection process. Moreover, with the object segmented from the background, a classifier, such as a Random Forest classifier, may be used to quickly detect the object. Thus, the object can be detected as well as tracked in an efficient manner allowing the user to naturally interact with an AR augmentation, thereby enhancing user experience.
At least one of the image and a reference image of the scene, which does not include the foreground object, is warped so that the image and the reference image have the same view (204), e.g., a frontal view. The reference image is of the scene, or a portion of the scene, without the foreground object and thus represents the background of the scene. For example, the reference image may be an image of only the known target or an image that includes the known target and an area around the target. The image is compared to the reference image after warping to detect pixels that belong to the point of interest on the foreground object (206). The comparison of the image and the reference image identifies the portion of the image that is the foreground object, from which pixels may be detected as extracted features, e.g., using SIFT, SURF, etc. If desired, but not necessarily, a mask of the foreground object may be generated based on the comparison of the image and the reference image, and the foreground object may be segmented from the image using the mask. The pixels may then be detected using the foreground object segmented from the image. The point of interest on the foreground object is then detected using the pixels (208). By way of example, a classifier may be used to detect the point of interest on the foreground object. The input to the classifier may be, e.g., the segmented foreground object or the foreground mask; the training data fed to the classifier would differ between the two cases. With the use of a classifier to detect the point of interest on the foreground object, no predetermined geometric constraints on the foreground object are required. The point of interest may then be used in any desired application. For example, the image is displayed on the display (210) and an augmentation is rendered on the display over the image based on the point of interest (212). The augmentation may be rendered to appear as if it is underneath the foreground object displayed on the display. Additionally, subsequently captured images may be displayed on the display and the augmentation may be altered based on the point of interest in the subsequently captured images.
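As a rough illustration of steps 204 through 208, the following sketch (using OpenCV and NumPy, which are assumptions, as the text names no library) compares a view-aligned frame to the reference image, builds a foreground mask, segments the foreground object, and takes the topmost foreground pixel as a naive stand-in for the classifier-based point-of-interest detection; it is not the patented implementation.

```python
# Minimal sketch of steps 204-208, assuming `warped_bgr` is the captured frame
# already warped into the reference view (see the backwarping sketch below).
# The topmost foreground pixel is a naive stand-in for the classifier step.
import cv2
import numpy as np

def detect_foreground_point(warped_bgr, reference_bgr, threshold=30):
    # Compare the warped frame to the reference to find changed (foreground) pixels.
    diff = cv2.absdiff(warped_bgr, reference_bgr).max(axis=2)
    mask = (diff > threshold).astype(np.uint8) * 255

    # Optional mask clean-up before segmenting the foreground object.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Segment the foreground object from the image using the mask.
    foreground = cv2.bitwise_and(warped_bgr, warped_bgr, mask=mask)

    # Naive point of interest: the topmost foreground pixel (step 208 would
    # instead use extracted features and a trained classifier).
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None, foreground
    top = int(ys.argmin())
    return (int(xs[top]), int(ys[top])), foreground
```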
In one embodiment, the captured image is warped based on the pose (234) to have the same view as the reference image, i.e., the captured image is backwarped. In this embodiment, a reference image may be produced (231) by warping an image captured during initialization based on a homography between that initial image, which includes the target but not the foreground object, and a known target image, i.e., a stored reference image for the target. Using an image captured during initialization as the reference image is advantageous because the reference image then has lighting conditions similar to those of subsequently captured images. By way of illustration,
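One possible way to obtain such a warp is sketched below with OpenCV and NumPy (an assumption, since no library is named): features are matched between the captured frame and a stored target image to estimate a homography, and the frame is then backwarped into the reference view. In the described system, the homography could equally be derived from the pose returned by the tracker.

```python
# Sketch: estimate a homography between the current camera frame and a stored
# target image, then "backwarp" the frame into the reference (target) view.
# ORB feature matching is one way to get the homography; the described system
# could instead derive it from the tracker's pose.
import cv2
import numpy as np

def backwarp_to_reference(frame_bgr, target_bgr):
    orb = cv2.ORB_create(nfeatures=1000)
    gray_f = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray_t = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2GRAY)
    kp_f, des_f = orb.detectAndCompute(gray_f, None)
    kp_t, des_t = orb.detectAndCompute(gray_t, None)

    # Match descriptors and keep the best correspondences for a RANSAC homography.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_f, des_t), key=lambda m: m.distance)[:200]
    src = np.float32([kp_f[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_t[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the captured frame so it shares the target's (e.g., frontal) view.
    h, w = target_bgr.shape[:2]
    return cv2.warpPerspective(frame_bgr, H, (w, h)), H
```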
As discussed in
The threshold may be fixed or may be adapted for every image. Moreover, the threshold may be the same for each pixel (or patch) or may vary. In one implementation, the threshold may be generated as a percentile of the pixel-wise difference between the two images, e.g., the threshold may be the 90th percentile of the difference values. In another implementation, the threshold may be determined by dividing the range of colors (in the chroma channels U and V) into N blocks such that each block visually appears to be the same color, with the threshold being the length of a single block. The value used for N may be based on the range of the U and V channels and may be determined empirically. For example, if U and V range from 0 to 1, dividing the range into 20 blocks produces a block length of 0.05, which is the threshold. In another implementation, the threshold may be based on statistics collected for the mean and variance of the color of each pixel over a few frames collected during initialization, where, e.g., mean±2.5*standard-deviation may be used as the threshold for a particular pixel.
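The three threshold choices just described might be computed as in the following sketch (NumPy/OpenCV assumed; the channel handling and the constants simply restate the examples in the text).

```python
# Sketch of the three threshold choices above, computed on the per-pixel chroma
# difference between the warped frame and the reference image.
import cv2
import numpy as np

def chroma_difference(warped_bgr, reference_bgr):
    yuv_w = cv2.cvtColor(warped_bgr, cv2.COLOR_BGR2YUV).astype(np.float32)
    yuv_r = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2YUV).astype(np.float32)
    # Compare only the chroma channels (U and V) to reduce lighting sensitivity.
    return np.abs(yuv_w[..., 1:] - yuv_r[..., 1:]).max(axis=2)

def percentile_threshold(diff, pct=90):
    # One global threshold per image: the 90th percentile of the difference values.
    return np.percentile(diff, pct)

def block_threshold(value_range=1.0, n_blocks=20):
    # Divide the chroma range into N visually similar blocks; the length of a
    # single block is the threshold (0.05 for a 0..1 range and N = 20).
    return value_range / n_blocks

def per_pixel_threshold(init_frames_yuv):
    # Per-pixel statistics over a few initialization frames; a pixel in a new
    # frame is treated as foreground if it falls outside mean +/- 2.5 * std.
    stack = np.stack(init_frames_yuv).astype(np.float32)
    mu, sigma = stack.mean(axis=0), stack.std(axis=0)
    return mu - 2.5 * sigma, mu + 2.5 * sigma
```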
As illustrated in
Alternatively, the comparison of the captured image and the reference image may be used to identify the foreground object in the captured image without the intermediate steps of generating a mask and segmenting the foreground object from the captured image. For example, during the comparison of the warped image 284 and the reference image 282, for any pixel that is detected as being different, and thus belongs to the foreground object, the pixel's intensities are copied to the corresponding location of a new image, which may be initially black. In this manner, the foreground pixels are mapped to a foreground image 290 shown in
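A minimal NumPy sketch of this direct copy, assuming the per-pixel difference and threshold from the earlier sketches:

```python
# Build the foreground image directly, without a separate mask/segmentation
# step: wherever a warped-frame pixel differs from the reference by more than
# the threshold, copy its intensities into an initially black image.
import numpy as np

def foreground_image(warped_bgr, reference_bgr, diff, threshold):
    fg = np.zeros_like(warped_bgr)        # start from a black image
    changed = diff > threshold            # pixels judged to belong to the foreground
    fg[changed] = warped_bgr[changed]     # copy intensities to the same locations
    return fg
```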
As illustrated in
If a mask is generated, as discussed in
The pixels that belong to the point of interest on the foreground object can then be detected. The pixels may be detected by extracting features using, e.g., SIFT, SURF, or any other appropriate technique. The pixels may be detected, e.g., on the foreground image 290 shown in
As discussed above, the pixels may be used to detect a point of interest on the foreground object in the image (step 208 in
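One way such a detection stage could look is sketched below: candidate pixels are detected on the segmented foreground (ORB is used here as a stand-in for SIFT or SURF), and a classifier trained offline on labeled patches, e.g., a Random Forest, scores each candidate as fingertip or not. The 16-pixel patch size and the `clf` interface are illustrative assumptions, not the patented design.

```python
# Sketch: detect candidate pixels on the segmented foreground (ORB as a
# stand-in for SIFT/SURF) and let a pre-trained classifier (e.g., a
# scikit-learn RandomForestClassifier trained on labeled fingertip and
# non-fingertip patches) pick which candidate is the point of interest.
import cv2
import numpy as np

def find_point_of_interest(foreground_bgr, mask, clf, patch=16):
    gray = cv2.cvtColor(foreground_bgr, cv2.COLOR_BGR2GRAY)
    keypoints = cv2.ORB_create(nfeatures=500).detect(gray, mask)

    best_pt, best_score = None, 0.0
    half = patch // 2
    for kp in keypoints:
        x, y = int(kp.pt[0]), int(kp.pt[1])
        window = gray[y - half:y + half, x - half:x + half]
        if window.shape != (patch, patch):
            continue  # skip keypoints too close to the image border
        # Probability that the patch around this keypoint is the fingertip.
        prob = clf.predict_proba(window.reshape(1, -1).astype(np.float32))[0, 1]
        if prob > best_score:
            best_pt, best_score = (x, y), prob
    return best_pt  # image coordinates of the detected point of interest
```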
Thus, the foreground object, and specifically, a point of interest on the foreground object, is detected in the captured image. The foreground object may be detected over the target 106, when the reference image 282 includes only the target 106, as illustrated in
With the foreground object in the captured image detected, augmentations may be rendered with respect to the foreground object in the captured image. For example, an augmentation may be rendered so that it appears to be under the foreground object, e.g., with the finger 112 partially occluding the disk 294 as illustrated in
With the foreground object 112 detected in the captured image, the foreground object may be tracked in subsequently captured images, e.g., by repeating the process for each subsequently captured image: warping at least one of the subsequently captured image and the reference image of the scene; comparing the subsequently captured image to the reference image after warping to detect pixels that belong to the point of interest on the foreground object; and detecting the point of interest on the foreground object using the detected pixels in the subsequently captured image. Alternatively, or additionally, the foreground object may be tracked in subsequently captured images using a process such as optical flow to track the movement of the detected pixels of the point of interest, e.g., the extracted features, in the subsequently captured images. The subsequently captured images may be displayed on the display 101 of the mobile device 100 while the augmentation is rendered on the display based on the tracked foreground object so that it appears that the user may interact with the augmentation. For example, as illustrated in
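Optical flow is not spelled out further in the text; a minimal pyramidal Lucas-Kanade sketch with OpenCV, where the window size and pyramid depth are guesses, could look like this:

```python
# Sketch: track the detected point(s) of interest into the next frame with
# pyramidal Lucas-Kanade optical flow.
import cv2
import numpy as np

def track_points(prev_gray, next_gray, prev_pts):
    """prev_pts: float32 array of shape (N, 1, 2) holding the tracked points."""
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_pts, None, winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1  # keep only points that were successfully tracked
    return next_pts[good], prev_pts[good]
```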
It may be desirable to update the reference image if there is a scene change. For example, if a pencil is placed on the target 106 after the reference image has been initialized, the pencil will be detected as a foreground object. A scene change may be detected, e.g., by detecting consistent and stationary regions that appear in the foreground mask over a number of frames.
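One plausible realization of this heuristic is sketched below; the window length and the agreement level are arbitrary illustrative choices.

```python
# Sketch: pixels that stay "foreground" in nearly every frame over a window are
# treated as a new static object (e.g., a pencil laid on the target) and folded
# into the reference image so they stop being reported as foreground.
import numpy as np
from collections import deque

class SceneChangeDetector:
    def __init__(self, window=60, agreement=0.95):
        self.masks = deque(maxlen=window)
        self.agreement = agreement

    def update(self, fg_mask_bool, warped_bgr, reference_bgr):
        self.masks.append(fg_mask_bool)
        if len(self.masks) < self.masks.maxlen:
            return reference_bgr
        # Fraction of recent frames in which each pixel was flagged foreground.
        persistence = np.mean(np.stack(self.masks), axis=0)
        stationary = persistence >= self.agreement
        # Fold consistently occupied pixels into the reference image.
        updated = reference_bgr.copy()
        updated[stationary] = warped_bgr[stationary]
        return updated
```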
Additionally, variations in the appearance of the target under current illumination conditions can be learned by backwarping the first few camera frames and generating statistics therefrom. This would also make the system less vulnerable to errors in the pose returned by tracker 302.
Tracking the foreground object over multiple images may be used to discern a user's intended action, thus enabling the user to interact with the augmentation or perform other desired actions. For example, as illustrated in
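Temporal gesture matching could, for instance, compare the normalized fingertip trajectory against a small library of gesture templates, as in the sketch below; the fixed-length resampling and nearest-template distance are illustrative assumptions (a real system might use dynamic time warping or a hidden Markov model instead).

```python
# Sketch: match a tracked fingertip trajectory against a small library of
# gesture templates by resampling, normalizing, and taking the nearest template.
import numpy as np

def normalize_trajectory(points, n=32):
    """points: (M, 2) array of fingertip positions over time."""
    pts = np.asarray(points, dtype=np.float32)
    # Resample to n points along the path, then remove translation and scale.
    t = np.linspace(0, len(pts) - 1, n)
    resampled = np.stack(
        [np.interp(t, np.arange(len(pts)), pts[:, i]) for i in (0, 1)], axis=1)
    resampled -= resampled.mean(axis=0)
    scale = np.linalg.norm(resampled, axis=1).max()
    return resampled / scale if scale > 0 else resampled

def recognize_gesture(points, templates, max_dist=0.5):
    """templates: dict mapping gesture name -> (n, 2) normalized trajectory."""
    traj = normalize_trajectory(points)
    name, dist = min(((k, np.linalg.norm(traj - v)) for k, v in templates.items()),
                     key=lambda kv: kv[1])
    return name if dist < max_dist else None  # the action mapped to `name` is performed
```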
The mobile device 100 also includes a control unit 105 that is connected to and communicates with the camera 110 and display 101, and other elements, such as motion sensors 114 if used. The control unit 105 accepts and processes data obtained from the camera 110 and causes the display 101 to display rendered augmentation as discussed herein. The control unit 105 may be provided by a bus 105b, processor 105p and associated memory 105m, hardware 105h, firmware 105f, and software 105s. The control unit 105 is further illustrated as including a tracker 302 that tracks the pose of the mobile device 100, or more specifically, the camera 110 with respect to the imaged scene, which may include a target 106. The control unit 105 may further include a background estimator 306 that may be used to generate a reference image, e.g., by warping an initial image without a foreground object into a reference image based on the pose generated by tracker 302. A foreground mask generator 308 in the control unit 105 compares the reference image to the current image to generate a mask for the foreground object. A foreground extractor 310 may be used to segment the foreground object from the current image based on the mask, e.g., if the geometry of the object is not already known. A detector 312 may include an extractor 312e for detecting pixels that belong to the point of interest on the foreground object, and a classifier 312c to detect the point of interest using the pixels, while the rendering module 314 is used to generate the augmentation that is shown on the display 101 over the captured image.
The various modules 302, 306, 308, 310, 312, and 314 are illustrated separately from processor 105p for clarity, but may be part of the processor 105p or implemented in the processor based on instructions in the software 105s which is run in the processor 105p. It will be understood, as used herein, that the processor 105p can, but need not necessarily, include one or more microprocessors, embedded processors, controllers, application specific integrated circuits (ASICs), digital signal processors (DSPs), and the like. The term processor is intended to describe the functions implemented by the system rather than specific hardware. Moreover, as used herein, the term “memory” refers to any type of computer storage medium, including long term, short term, or other memory associated with the mobile device, and is not to be limited to any particular type of memory or number of memories, or type of media upon which memory is stored.
The methodologies described herein may be implemented by various means depending upon the application. For example, these methodologies may be implemented in hardware 105h, firmware 105f, software 105s, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, electronic devices, other electronic units designed to perform the functions described herein, or a combination thereof.
For a firmware and/or software implementation, the methodologies may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. Any machine-readable medium tangibly embodying instructions may be used in implementing the methodologies described herein. For example, software codes may be stored in memory 105m and executed by the processor 105p. Memory 105m may be implemented within or external to the processor 105p. If implemented in firmware and/or software, the functions may be stored as one or more instructions or code on a storage medium that is computer-readable, wherein the storage medium does not include transitory propagating signals. Examples include storage media encoded with a data structure and storage media encoded with a computer program. Storage media includes physical computer storage media. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer; disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of storage media.
Thus, the mobile device 100 includes means for capturing an image of a scene with a foreground object that is not attached to the scene, the foreground object including a point of interest that is a distinct physical aspect, which may be, e.g., the camera 110. A means for warping at least one of the image and a reference image of the scene that does not include the foreground object so the image and the reference image have a same view may be, e.g., the tracker 302, background estimator 306, and foreground mask generator 308 or hardware 105h, firmware 105f, or processor 105p performing instructions received from software 105s. A means for comparing the image to the reference image after warping to detect pixels that belong to the point of interest on the foreground object may be, e.g., foreground mask generator 308, foreground extractor 310, and detector 312 and more specifically, an extractor 312e, or hardware 105h, firmware 105f, or processor 105p performing instructions received from software 105s. A means for detecting the point of interest on the foreground object using the detected pixels may be, e.g., the detector 312, and more specifically, a classifier, or hardware 105h, firmware 105f, or processor 105p performing instructions received from software 105s. A means for displaying the image on a display may be, e.g., the display 101. A means for rendering an augmentation on the display over the image based on the point of interest may be, e.g., the rendering module 314, or hardware 105h, firmware 105f, or processor 105p performing instructions received from software 105s.
A means for segmenting the foreground object from the image using a mask may be, e.g., foreground extractor 310, or hardware 105h, firmware 105f, or processor 105p performing instructions received from software 105s. A means for extracting the foreground object from the image may be, e.g., foreground extractor 310, or hardware 105h, firmware 105f, or processor 105p performing instructions received from software 105s. The means for warping at least one of the image and the reference image may include a means for generating a pose between the image and the reference image, which may be, e.g., the tracker 302 or hardware 105h, firmware 105f, or processor 105p performing instructions received from software 105s; and means for warping one of the image and the reference image based on the pose, which may be, e.g., the background estimator 306 or hardware 105h, firmware 105f, or processor 105p performing instructions received from software 105s. The mobile device 100 may include means for displaying subsequently captured images on a display, which may be, e.g., the display 101. Means for altering the augmentation based on the point of interest in the subsequently captured images may be, e.g., the rendering module 314 or hardware 105h, firmware 105f, or processor 105p performing instructions received from software 105s. The mobile device 100 may further include means for tracking the point of interest on the foreground object in subsequently captured images, which may be the tracker 302, background estimator 306, foreground mask generator 308, foreground extractor 310, and detector 312 or hardware 105h, firmware 105f, or processor 105p performing instructions received from software 105s. Means for detecting a temporal gesture based on movement of the point of interest on the foreground object may be, e.g., hardware 105h, firmware 105f, or processor 105p performing instructions received from software 105s; and means for performing an action associated with the temporal gesture may be, e.g., hardware 105h, firmware 105f, or processor 105p performing instructions received from software 105s.
Although the present invention is illustrated in connection with specific embodiments for instructional purposes, the present invention is not limited thereto. Various adaptations and modifications may be made without departing from the scope of the invention. Therefore, the spirit and scope of the appended claims should not be limited to the foregoing description.
Rezaiifar, Ramin, Sharma, Piyush