The location of a user's head, for purposes such as head tracking or motion input, can be determined using a two-step process. In a first step, at least one image is captured including a representation of at least a portion of the user, with distance information also being obtained. The distance information can be used to segment the at least one image into a foreground portion, which then can be analyzed to recognize a head and shoulder signature of a user. In a second step, a contour of the foreground portion can be determined, and a center point of that contour determined. The distances from the center point to locations along the contour can be used to locate transition points associated with the head and shoulders. A center point of the portion of the contour between the transition points gives an approximation of the relative head position.
14. A non-transitory computer-readable storage medium including instructions that, when executed by at least one processor of a computing device, cause the computing device to:
acquire image data using a camera, the image data including a representation of a plurality of objects;
acquire distance information for the plurality of objects;
determine a foreground portion and a background portion of the image data, the foreground portion representing at least one object of the plurality of objects that is closer than a threshold distance to the computing device, based at least in part upon the distance information for the at least one object;
locate a head and shoulder shape of a person represented in the foreground portion;
determine a contour of the head and shoulder shape;
determine a first center point of the head and shoulder shape;
determine a plurality of distances from the first center point to a plurality of points along the contour;
determine at least one first point that is closer to the first center point of the contour than other points of the plurality of points using the plurality of distances;
determine a head shape by segmenting the head and shoulder shape between the first point and a second point along the contour;
determine a second center point of the head shape as a location of the head of the person; and
perform a head tracking process.
1. A computing device, comprising:
at least one processor;
a camera; and
memory including instructions that, when executed by the at least one processor, cause the computing device to:
acquire an image using the camera, the image including a representation of a plurality of objects;
acquire distance information for the plurality of objects;
determine a foreground portion and a background portion of the image, the foreground portion representing at least one object of the plurality of objects that is closer than a threshold distance to the computing device, based at least in part upon the distance information for the at least one object;
locate a head and shoulder shape of a person represented in the foreground portion;
determine a contour of the head and shoulder shape;
determine a first center point of the head and shoulder shape;
determine a respective distance from the first center point to each pixel location of a plurality of pixel locations along the contour;
determine a first transition point and a second transition point along the contour, the first transition point having a shortest distance to the first center point and the second transition point having a next shortest distance to the first center point;
determine a head shape by segmenting the head and shoulder shape between the first transition point and the second transition point;
determine a head position of the person as a second center point of the head shape; and
perform a head tracking process.
6. A computer-implemented method, comprising:
acquiring image data using a camera and a computing device, the image data including a representation of a plurality of objects;
acquiring distance information for the plurality of objects;
determining a foreground portion and a background portion of the image data, the foreground portion representing at least one object of the plurality of objects that is closer than a threshold distance to the computing device, based at least in part upon the distance information for the at least one object;
determining at least a portion of a head and shoulder shape of a person represented in the foreground portion;
determining a first center point of the head and shoulder shape;
determining a contour of the head and shoulder shape;
determining a plurality of distances from the first center point to a plurality of points along the contour;
determining at least one first point that is closer to the first center point of the contour than other points of the plurality of points using the plurality of distances;
determining a head shape by segmenting the head and shoulder shape between the first point and a second point along the contour;
determining a second center point of the head shape as a location of the head of the person; and
performing a head tracking process.
2. The computing device of claim 1, wherein the instructions when executed further cause the computing device to:
determine a set of pixels corresponding to the contour of the head and shoulder shape, each pixel of the set of pixels having a respective set of coordinates with respect to the image; and
calculate a centroid location with respect to the set of pixels, the centroid location being selected as the first center point of the head and shoulder shape.
3. The computing device of claim 1, wherein the instructions when executed further cause the computing device to:
identify the head and shoulder shape as a largest object represented in the foreground portion; or
determine that the head and shoulder shape matches a head and shoulders pattern.
4. The computing device of claim 1, wherein the instructions when executed further cause the computing device to:
determine a set of coordinates associated with the second center point; and
provide the set of coordinates to the head tracking process as a current head position.
5. The computing device of claim 1, wherein the instructions when executed further cause the computing device to:
locate an outer edge region of at least a portion of the head and shoulder shape represented in the image; and
determine a set of contour pixels representative of the outer edge region, the set of contour pixels defining the contour of the head and shoulder shape.
7. The computer-implemented method of claim 6, further comprising:
determining disparity data from the image data;
determining distance information for one or more objects represented in the image data using the disparity data; and
determining a foreground portion and a background portion of the image data using the distance information.
8. The computer-implemented method of claim 7, further comprising:
locating a foreground object represented in the foreground portion from among the one or more objects that are closer than a threshold distance to one or more cameras used to acquire the image data.
9. The computer-implemented method of claim 6, further comprising:
processing the contour using at least one image filter before determining the at least one first point, the at least one image filter including at least one of a temporal filter, a median filter, a box filter, a morphology opening process, or a noise removal process.
10. The computer-implemented method of claim 6, further comprising:
performing a first centroid calculation to determine the first center point; and
performing a second centroid calculation to determine the second center point.
11. The computer-implemented method of claim 6, further comprising:
identifying the head and shoulder shape by selecting a largest foreground object or determining that the foreground object matches a head and shoulders pattern.
12. The computer-implemented method of claim 6, further comprising:
determining the plurality of points from a set of contour pixels defining the contour.
13. The computer-implemented method of claim 8, further comprising:
determining that the foreground object corresponds to at least the portion of the head and shoulder shape using at least one of a face detection algorithm, a facial recognition algorithm, a feature detection algorithm, or a pattern matching algorithm.
15. The non-transitory computer-readable storage medium of claim 14, wherein the instructions when executed further cause the computing device to:
compare a foreground object to a head and shoulders signature to verify that the foreground object corresponds to at least the contour of the head and shoulder shape.
16. The non-transitory computer-readable storage medium of claim 14, wherein the instructions when executed further cause the computing device to:
determine a foreground portion of the image data at least in part by segmenting a stereo disparity map associated with the image data to determine pixel locations that correspond to distances closer than a threshold distance.
17. The non-transitory computer-readable storage medium of claim 14, wherein the instructions when executed further cause the computing device to:
extract the head and shoulder shape from a binarized portion of a depth map of the image data.
18. The non-transitory computer-readable storage medium of claim 14, wherein the instructions when executed further cause the computing device to:
process the contour using at least one filter before determination of the at least one first point, the at least one filter including at least one of a temporal filter, a median filter, a box filter, a morphology opening process, or a noise removal process.
19. The non-transitory computer-readable storage medium of claim 14, wherein the instructions when executed further cause the computing device to:
perform a first centroid calculation to determine the first center point; and
perform a second centroid calculation to determine the second center point.
20. The non-transitory computer-readable storage medium of claim 14, wherein the instructions when executed further cause the computing device to:
determine the threshold distance based upon at least one of an imaging condition or an environmental condition.
As the functionality offered by computing devices continues to improve, users are utilizing these devices in different ways for an increasing variety of purposes. For example, certain devices utilize one or more cameras to attempt to detect motions or locations of various objects, such as for head tracking or motion input. Continually analyzing full resolution images can be very resource intensive, and can quickly drain the battery of a mobile device. Further, such approaches are often sensitive to variations in lighting conditions. Using lower resolution cameras and less robust algorithms, however, can lead to an increase in the number of false positives and/or a decrease in the accuracy of the object tracking process.
Various embodiments in accordance with the present disclosure will be described with reference to the drawings.
Systems and methods in accordance with various embodiments of the present disclosure may overcome one or more of the aforementioned and other deficiencies experienced in conventional approaches to determining the position of one or more objects with respect to an electronic device. In particular, various embodiments utilize distance information with image data to determine the presence and/or location of at least one object of interest, such as a head or body of a user. In one embodiment, image data (e.g., still image or video) with depth information is acquired to attempt to locate such an object. The depth or distance information can be obtained in a number of different ways, such as by capturing stereoscopic or three-dimensional image data, or by using a proximity or distance sensor, among other such options. The distance information can be used to designate objects within a determined distance of the device as foreground objects, eliminating other objects from consideration as background objects. The determined distance can be set such that the distance includes the typical distance a user would be from a computing device with which that user is interacting. If more than one such object exists, the largest object can be selected as the user, although other approaches can be used as well, such as to use shape or object recognition processes. In some embodiments, an object location algorithm might look for objects in the image data that meet a certain object detection criterion, such as objects (e.g., areas of a relatively common intensity in an image) of a certain size, shape, and/or location. In some embodiments, the image data represents infrared (IR) light emitted by the device, reflected by the object, and detected by the device.
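As an illustrative sketch of this distance-based segmentation (not an implementation taken from this disclosure), the following assumes a per-pixel depth map in meters, an example one-meter threshold, and NumPy/OpenCV conventions:

```python
import numpy as np
import cv2

def segment_foreground(depth_map: np.ndarray, threshold_m: float = 1.0) -> np.ndarray:
    """Binary mask of pixels closer than threshold_m.

    depth_map: HxW per-pixel distances in meters (e.g., derived from
    stereo disparity or a distance sensor); a value of 0 is assumed to
    mean "no depth estimate" and is treated as background.
    """
    return ((depth_map > 0) & (depth_map < threshold_m)).astype(np.uint8)

def largest_foreground_object(mask: np.ndarray) -> np.ndarray:
    """Keep only the largest connected foreground region (the likely user)."""
    num_labels, labels = cv2.connectedComponents(mask)
    if num_labels <= 1:  # label 0 is the background
        return np.zeros_like(mask)
    sizes = [(labels == i).sum() for i in range(1, num_labels)]
    return (labels == 1 + int(np.argmax(sizes))).astype(np.uint8)
```

Selecting the largest connected region implements the "largest object can be selected as the user" heuristic; a shape- or pattern-matching check, as described above, could replace or supplement it.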
Once a foreground object has been identified, a contour of that object can be determined. This can include, for example, determining an edge region or portion of the foreground object, and then selecting a set of pixels that are representative of the edge, such as to provide a relatively continuous contour line that is approximately one pixel in line width. Various averaging or interpolation or other such processes can be used to attempt to generate an appropriate set of contour pixels in at least some embodiments. One or more image processing routines can be used to attempt to obtain a relatively smooth contour with few gaps or holes. In some embodiments, verification can be performed to ensure that the contour of the object substantially corresponds to a signature of a head and shoulders of a user. A center point, such as a center of mass or centroid, then can be calculated for the contour. The distance from this center point to each pixel location along the contour then can be determined, in an attempt to locate the two shortest distances (or at least one such distance in various embodiments). These shortest distances can be determined to correspond to the approximate locations where the user's neck meets the user's shoulders. These “transition” points then can be used to quickly identify a portion of the object and/or contour that corresponds substantially to the user's head. Another centroid then can be calculated, in at least some embodiments, for the portion of the object or contour that is between the located transition points. The coordinates of this centroid then can be determined to correspond to the approximate head location of the user. Information associated with these coordinates then can be used to determine and/or track the relative position of a user's or viewer's head with respect to the computing device.
Various other functions and advantages are described and suggested below as may be provided in accordance with the various embodiments.
For example, as illustrated in the example 120 of
For example, the head location is determined in a set of subsequently captured images, with these locations then compared for purposes of head tracking. The speed with which images can be analyzed limits the number of head positions that can be compared.
If the device 202 is rotated and/or the head 208 moves as illustrated in the example situation 220 of
In order for such a process to work in varying lighting conditions, the device might use a light source to illuminate the head or other such object. Since flashing or directing light from a white light source at a user may be distracting, and potentially power intensive, various devices might instead utilize IR light, or radiation of another such wavelength, band, or range. In this way, the IR light can be emitted as needed without being detectable by the user, and the reflected IR light can be detected by at least one camera sensor of the device. It should be understood that stereoscopic imaging is not required, and that other ways of determining depth information for an object represented in an image can be used as well within the scope of the various embodiments. These can include, for example, a single image with structured light data captured using a structured light assembly, varying intensity data due to different amounts of reflected light, proximity sensor data, and the like. As known in the art, a structured light assembly can determine distance and shape data by projecting a pattern of light and examining the light reflected back by one or more objects to determine variations in the reflected pattern that are indicative of shape and distance. Differences in intensity data can be used to infer distance, as the amount of light reflected by an object drops off as the distance to that object increases.
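For reference, the standard relations behind two of these options can be written explicitly (these are textbook formulas, not equations reproduced from this disclosure). For a rectified stereo pair with focal length $f$ and baseline $B$, a disparity of $d$ pixels corresponds to a depth of

$$Z = \frac{fB}{d},$$

and under an inverse-square falloff model for reflected light, a measured intensity $I$ relates to distance $r$ through a calibration pair $(I_0, r_0)$ as

$$I \propto \frac{1}{r^2} \quad\Rightarrow\quad r \approx r_0 \sqrt{\frac{I_0}{I}}.$$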
When analyzing each of the images to attempt to determine the head position, it can be beneficial to quickly eliminate background regions from consideration, in order to reduce the amount of processing needed to locate the head. As discussed, however, this typically involves analyzing the entire image to attempt to recognize various objects, or types of objects, in the image. It would be beneficial to utilize an approach that automatically removes at least some of the background objects, such that these objects do not need to be included in an object recognition or head location process. For example, in the situation 300 illustrated in
In many embodiments, however, the actual feature data of that portion of the image is not needed for head tracking and can be discarded or otherwise used only as needed. For example, once the foreground portion 422 of the image is located, as illustrated in
In at least some embodiments, standard image processing algorithms or image filters can be applied once the background is subtracted or otherwise removed or ignored, which can help to provide an acceptable region for the foreground portion. In some embodiments, a temporal filter can be applied to reduce temporal noise. Temporal filters are typically used to filter the raw image data, before other filtering or processing, to reduce noise due to, for example, motion artifacts. For each pixel in an image, a temporal filter can determine the spatial location of a current pixel and identify at least one collocated reference pixel from a previous frame. A temporal filter can also (or alternatively) process the image data to reduce noise by averaging image frames in the determined temporal direction. Other filters, such as a three frame median filter or box filter can be used as well, such as to remove discrete noise and/or smooth the shape of the foreground object, among other such options. A three-frame median filter can be used to attempt to provide noise reduction while preserving edges and other such features, as such a filter runs through the pixels of images in a set of three frames and replaces each pixel value with the median of the corresponding pixel values in the three frames. A box filter can be used to average the surrounding pixels to adjust pixel values, in some cases multiplying an image sample and a filter kernel to obtain a filtering result. The selection of an appropriate filtering kernel can enable different filtering to be applied, as may include a sharpen, emboss, edge-detect, smooth, or motion-blur filter to be applied. The quality of the segmentation then can be improved, in at least some embodiments, by applying a morphology opening or other such noise removal process, in order to attempt to remove any small holes or gaps in the contour.
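A minimal sketch of such a cleanup pipeline is shown below, assuming OpenCV, binary foreground masks, and illustrative kernel sizes; none of these specific parameters are prescribed by the disclosure.

```python
import numpy as np
import cv2

def clean_mask(mask_history: list[np.ndarray]) -> np.ndarray:
    """Denoise a binary foreground mask using the filters described above.

    mask_history: the three most recent binary masks (uint8, 0 or 1),
    newest last; three frames allow a per-pixel temporal median.
    """
    # Three-frame temporal median: suppresses single-frame noise while
    # preserving edges better than a plain average would.
    stack = np.stack(mask_history[-3:], axis=0)
    mask = np.median(stack, axis=0).astype(np.uint8)

    # Box filter to smooth the shape of the foreground object, then
    # re-binarize the result.
    mask = cv2.boxFilter(mask.astype(np.float32), -1, (5, 5))
    mask = (mask > 0.5).astype(np.uint8)

    # Morphology opening removes small specks; closing fills small
    # holes and gaps in the contour.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```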
Once an appropriate contour is determined, a process can be used to attempt to identify a center of mass, central point within a central region or portion, or other center point 432 of the contour 422, as illustrated in the image state 430 of
and the centroid coordinates of a plane figure can be computed in some embodiments using:
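Assuming the standard definitions are what is intended here: for a contour sampled as $k$ pixel coordinates $(x_i, y_i)$, the centroid is

$$C_x = \frac{1}{k}\sum_{i=1}^{k} x_i, \qquad C_y = \frac{1}{k}\sum_{i=1}^{k} y_i,$$

and the centroid coordinates of a plane figure of area $A$ are

$$C_x = \frac{1}{A}\iint x \, dA, \qquad C_y = \frac{1}{A}\iint y \, dA.$$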
Once a center point such as a centroid has been located, the distance from that center point to each (or at least a subset) of the pixel locations along the contour can be determined. For example, the image state 440 of
From the portion of the contour 442 between the transition points 452, 454, or the area 524 defined by that region, another centroid 542 or other such point can be calculated as illustrated in the image state 540 of
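The sketch below ties these last few steps together under illustrative assumptions: the contour is an ordered array of pixel coordinates with image y increasing downward, the two smallest local minima of the distance profile are taken as the transition points, and the arc with the smaller mean y is taken as the head segment. A robust implementation would smooth the distance profile and reject noisy minima, as discussed above.

```python
import numpy as np

def head_position(contour: np.ndarray) -> np.ndarray:
    """Estimate head location from an ordered (N, 2) array of contour pixels.

    Returns the (x, y) centroid of the contour segment between the two
    points nearest the full-contour centroid, which this approach treats
    as the neck-to-shoulder transition points.
    """
    center = contour.mean(axis=0)                     # first centroid
    dists = np.linalg.norm(contour - center, axis=1)  # distance profile

    # Local minima of the distance profile (with wraparound); the two
    # smallest are taken as the transition points.
    prev_d, next_d = np.roll(dists, 1), np.roll(dists, -1)
    minima = np.where((dists < prev_d) & (dists < next_d))[0]
    if len(minima) < 2:
        return center  # fall back to the body centroid
    t1, t2 = sorted(minima[np.argsort(dists[minima])[:2]])

    # Pick whichever arc between the transition points lies above the
    # centroid (smaller mean y in image coordinates): the head segment.
    arc_a = contour[t1:t2 + 1]
    arc_b = np.vstack([contour[t2:], contour[:t1 + 1]])
    head = arc_a if arc_a[:, 1].mean() < arc_b[:, 1].mean() else arc_b
    return head.mean(axis=0)                          # second centroid
```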
It is also possible that a portion of the head may be obscured, such as by being at least partially occluded by a portion of the camera aperture, or may be located in shadow or otherwise unavailable or unsuitable for imaging and/or analysis. Accordingly, an approach can be utilized that can locate approximate head position in cases where the head is partially occluded as well as cases where the head is not occluded.
As in the previously discussed approach, the distances from the centroid to points along the contour for the selected half can be analyzed to locate the point 562 along the selected portion of the contour that is the shortest distance from the centroid 558, as illustrated in the example situation 560 of
As mentioned, in some embodiments a higher level of precision may be required. In such cases, the transition points and/or head centroid location can be used to designate a portion and/or starting point for an image to be analyzed. From this, a feature, shape, or object recognition algorithm, or other such process, can attempt to identify one or more specific locations for tracking purposes. For example, this could be the mid-point between the viewer's eyes or another such location, which can potentially provide for more accurate tracking than other processes discussed herein.
Other object recognition processes can be used as well within the scope of the various embodiments. These can include, for example, appearance-based processes that search for features such as edges, changes in lighting or color, or changes in shape or size. Various other approaches utilize gradient generation and matching or histogram analysis for object recognition. Other approaches include feature-based approaches, such as may utilize interpretation trees, geometric hashing, or invariance analysis. Algorithms such as scale-invariant feature transform (SIFT) or speeded up robust features (SURF) algorithms can also be used within the scope of the various embodiments. For computer vision applications, a bag of words or similar approach can also be utilized.
Once a foreground portion of the image is determined, a contour of the largest object (if more than one) in the foreground is determined 606. This can include processing the image to attempt to smooth edges and remove gaps in the image, and then determining an outer edge of the object in the image. As discussed, in at least some embodiments a determination can be made as to whether the object matches, within an allowable deviation, a head and shoulders pattern or signature. Once the contour is determined, a center point can be determined 608, such as by calculating a centroid of the contour in the image. The distance from the center point to at least a subset of the pixel locations along the contour can be determined 610, and the pixel locations that are at the shortest distances from the center point can be determined 612. In most cases, the transition points will be different distances from the center point, so a process might take the points with the two shortest distances, points within a given range of distances, points located substantially symmetrically with respect to the center point, etc. As mentioned, smoothing can attempt to remove false positives due to noise in the image, and in some embodiments points with more than an acceptable amount of uncertainty may be removed from the minimum distance calculation. As mentioned, some images might only have one such point, where the other transition point might be out of the image or at least partially obscured, etc. Once the locations of the transition points corresponding to the shortest distances are determined, the portion of the contour between (i.e., above) those points can be determined 614, which in most cases should represent that portion of the contour that is substantially related to the user's head and/or neck region(s). As discussed, in the case of an occlusion or other such issue only one such transition point may be determinable. Using the "head" portion of the contour, a centroid or other such representative (e.g., central) point can be determined 616. Coordinates or other information associated with the representative point then can be returned 618 as indicating the current head position of the user or viewer, at least with respect to the computing device. As discussed, the position information then can be used for purposes such as head tracking, motion input, and the like.
Once a foreground portion of the image is determined, a contour of the largest object (if more than one) in the foreground is determined 656. This can include processing the image to attempt to smooth edges and remove gaps in the image, and then determining an outer edge of the object in the image. As discussed, in at least some embodiments a determination can be made as to whether the object matches, within an allowable deviation, a head and shoulders pattern or signature. Once the contour is determined, a highest point of the contour is determined 658, and the contour is separated 660 into left and right (or other such) portions using an imaginary line running through the highest point. A centroid (or other central point) of the contour can be determined, and the portion containing the centroid can be selected 662 for analysis, as the portion containing the centroid likely will correspond to the least occluded portion of the user. The transition point corresponding to the approximate transition from the neck to the shoulders can be determined 664, and the height (or y-coordinate, etc.) of that transition point determined 666. A point along the imaginary line that is halfway between the transition height and the highest point can be determined 668, and selected as the approximate location of the head at the time the image was captured. The coordinates of this selected point then can be returned as the "current" location of the head of the user.
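A sketch of this fallback path, under the same illustrative assumptions as the earlier snippets (ordered contour pixels, image y increasing downward); the split at the topmost point and the halfway-point estimate follow the steps above, while the remaining details are assumed:

```python
import numpy as np

def head_position_occluded(contour: np.ndarray) -> np.ndarray:
    """Estimate head location when one side of the contour may be occluded.

    contour: (N, 2) array of (x, y) contour pixels, y increasing downward.
    """
    top = contour[np.argmin(contour[:, 1])]  # highest contour point
    center = contour.mean(axis=0)            # contour centroid

    # Split the contour by a vertical line through the highest point and
    # keep the half containing the centroid (likely less occluded).
    if center[0] < top[0]:
        half = contour[contour[:, 0] <= top[0]]
    else:
        half = contour[contour[:, 0] >= top[0]]

    # Transition point: the point on the selected half nearest the
    # centroid, approximating where the neck meets the shoulder.
    dists = np.linalg.norm(half - center, axis=1)
    transition_y = half[np.argmin(dists)][1]

    # Head estimate: on the vertical line through the top point, halfway
    # between the top of the head and the transition height.
    return np.array([top[0], (top[1] + transition_y) / 2.0])
```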
In order to provide various functionality described herein,
As discussed, the device in many embodiments will include at least two image capture elements 808, such as two or more cameras (or at least one stereoscopic camera) that are able to image a user, people, or objects in the vicinity of the device. An image capture element can include, or be based at least in part upon, any appropriate technology, such as a CCD or CMOS image capture element having a determined resolution, focal range, viewable area, and capture rate. The image capture elements can also include at least one IR sensor or detector operable to capture image information for use in determining gestures or motions of the user. The example computing device includes at least one light sensor 810 which can determine the need for light when capturing an image, among other such functions. The example device 800 includes at least one illumination component 812, as may include one or more light sources (e.g., white light LEDs, IR emitters, or flash lamps) for providing illumination and/or one or more light sensors or detectors for detecting ambient light or intensity, etc.
The example device can include at least one additional input device able to receive conventional input from a user. This conventional input can include, for example, a push button, touch pad, touch screen, wheel, joystick, keyboard, mouse, trackball, keypad or any other such device or element whereby a user can input a command to the device. These I/O devices could even be connected by a wireless infrared or Bluetooth or other link as well in some embodiments. In some embodiments, however, such a device might not include any buttons at all and might be controlled only through a combination of visual (e.g., gesture) and audio (e.g., spoken) commands such that a user can control the device without having to be in contact with the device.
The device also can include at least one orientation or motion sensor. As discussed, such a sensor can include an accelerometer or gyroscope operable to detect an orientation and/or change in orientation, or an electronic or digital compass, which can indicate a direction in which the device is determined to be facing. The mechanism(s) also (or alternatively) can include or comprise a global positioning system (GPS) or similar positioning element operable to determine relative coordinates for a position of the computing device, as well as information about relatively large movements of the device. The device can include other elements as well, such as may enable location determinations through triangulation or another such approach. These mechanisms can communicate with the processor, whereby the device can perform any of a number of actions described or suggested herein.
As discussed, different approaches can be implemented in various environments in accordance with the described embodiments. For example,
The illustrative environment includes at least one application server 908 and a data store 910. It should be understood that there can be several application servers, layers or other elements, processes or components, which may be chained or otherwise configured, which can interact to perform tasks such as obtaining data from an appropriate data store. As used herein the term “data store” refers to any device or combination of devices capable of storing, accessing and retrieving data, which may include any combination and number of data servers, databases, data storage devices and data storage media, in any standard, distributed or clustered environment. The application server can include any appropriate hardware and software for integrating with the data store as needed to execute aspects of one or more applications for the client device and handling a majority of the data access and business logic for an application. The application server provides access control services in cooperation with the data store and is able to generate content such as text, graphics, audio and/or video to be transferred to the user, which may be served to the user by the Web server in the form of HTML, XML or another appropriate structured language in this example. The handling of all requests and responses, as well as the delivery of content between the client device 902 and the application server 908, can be handled by the Web server 906. It should be understood that the Web and application servers are not required and are merely example components, as structured code discussed herein can be executed on any appropriate device or host machine as discussed elsewhere herein.
The data store 910 can include several separate data tables, databases or other data storage mechanisms and media for storing data relating to a particular aspect. For example, the data store illustrated includes mechanisms for storing production data 912 and user information 916, which can be used to serve content for the production side. The data store also is shown to include a mechanism for storing log or session data 914. It should be understood that there can be many other aspects that may need to be stored in the data store, such as page image information and access rights information, which can be stored in any of the above listed mechanisms as appropriate or in additional mechanisms in the data store 910. The data store 910 is operable, through logic associated therewith, to receive instructions from the application server 908 and obtain, update or otherwise process data in response thereto. In one example, a user might submit a search request for a certain type of element. In this case, the data store might access the user information to verify the identity of the user and can access the catalog detail information to obtain information about elements of that type. The information can then be returned to the user, such as in a results listing on a Web page that the user is able to view via a browser on the user device 902. Information for a particular element of interest can be viewed in a dedicated page or window of the browser.
Each server typically will include an operating system that provides executable program instructions for the general administration and operation of that server and typically will include a computer-readable medium storing instructions that, when executed by a processor of the server, allow the server to perform its intended functions. Suitable implementations for the operating system and general functionality of the servers are known or commercially available and are readily implemented by persons having ordinary skill in the art, particularly in light of the disclosure herein.
The environment in one embodiment is a distributed computing environment utilizing several computer systems and components that are interconnected via communication links, using one or more computer networks or direct connections. However, it will be appreciated by those of ordinary skill in the art that such a system could operate equally well in a system having fewer or a greater number of components than are illustrated in
As discussed above, the various embodiments can be implemented in a wide variety of operating environments, which in some cases can include one or more user computers, computing devices, or processing devices which can be used to operate any of a number of applications. User or client devices can include any of a number of general purpose personal computers, such as desktop or laptop computers running a standard operating system, as well as cellular, wireless, and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols. Such a system also can include a number of workstations running any of a variety of commercially-available operating systems and other known applications for purposes such as development and database management. These devices also can include other electronic devices, such as dummy terminals, thin-clients, gaming systems, and other devices capable of communicating via a network.
Various aspects also can be implemented as part of at least one service or Web service, such as may be part of a service-oriented architecture. Services such as Web services can communicate using any appropriate type of messaging, such as by using messages in extensible markup language (XML) format and exchanged using an appropriate protocol such as SOAP (derived from the “Simple Object Access Protocol”). Processes provided or executed by such services can be written in any appropriate language, such as the Web Services Description Language (WSDL). Using a language such as WSDL allows for functionality such as the automated generation of client-side code in various SOAP frameworks.
Most embodiments utilize at least one network that would be familiar to those skilled in the art for supporting communications using any of a variety of commercially-available protocols, such as TCP/IP, OSI, FTP, UPnP, NFS, CIFS, and AppleTalk. The network can be, for example, a local area network, a wide-area network, a virtual private network, the Internet, an intranet, an extranet, a public switched telephone network, an infrared network, a wireless network, and any combination thereof.
In embodiments utilizing a Web server, the Web server can run any of a variety of server or mid-tier applications, including HTTP servers, FTP servers, CGI servers, data servers, Java servers, and business application servers. The server(s) also may be capable of executing programs or scripts in response to requests from user devices, such as by executing one or more Web applications that may be implemented as one or more scripts or programs written in any programming language, such as Java®, C, C# or C++, or any scripting language, such as Perl, Python, or TCL, as well as combinations thereof. The server(s) may also include database servers, including without limitation those commercially available from Oracle®, Microsoft®, Sybase®, and IBM®.
The environment can include a variety of data stores and other memory and storage media as discussed above. These can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In a particular set of embodiments, the information may reside in a storage-area network (“SAN”) familiar to those skilled in the art. Similarly, any necessary files for performing the functions attributed to the computers, servers, or other network devices may be stored locally and/or remotely, as appropriate. Where a system includes computerized devices, each such device can include hardware elements that may be electrically coupled via a bus, the elements including, for example, at least one central processing unit (CPU), at least one input device (e.g., a mouse, keyboard, controller, touch screen, or keypad), and at least one output device (e.g., a display device, printer, or speaker). Such a system may also include one or more storage devices, such as disk drives, optical storage devices, and solid-state storage devices such as random access memory (“RAM”) or read-only memory (“ROM”), as well as removable media devices, memory cards, flash cards, etc.
Such devices also can include a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device, etc.), and working memory as described above. The computer-readable storage media reader can be connected with, or configured to receive, a computer-readable storage medium, representing remote, local, fixed, and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information. The system and various devices also typically will include a number of software applications, modules, services, or other elements located within at least one working memory device, including an operating system and application programs, such as a client application or Web browser. It should be appreciated that alternate embodiments may have numerous variations from that described above. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, software (including portable software, such as applets), or both. Further, connection to other computing devices such as network input/output devices may be employed.
Storage media and computer readable media for containing code, or portions of code, can include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules, or other data, including RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a system device. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims.
Executed on | Assignor | Assignee | Conveyance | Reel/Frame
Sep 25 2013 | | Amazon Technologies, Inc. | (assignment on the face of the patent) |
Oct 16 2013 | KHOKHLOV, DIMITRI YURIEVICH | Amazon Technologies, Inc. | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 031700/0440
Date | Maintenance Fee Events |
Jun 28 2021 | REM: Maintenance Fee Reminder Mailed. |
Dec 13 2021 | EXP: Patent Expired for Failure to Pay Maintenance Fees. |