Method and apparatus for secure data entry. In the method, a virtual data entry interface is generated and is outputted so as to be readable only by the user. The user then enters data using the interface. The apparatus includes at least one display, or optionally a pair of displays that output a 3D stereo image. It also includes a data processor and at least one sensor, or optionally a pair of sensors that capture 3D stereo data. The data processor generates a virtual data entry interface and communicates it to the display or displays. The display or displays output the virtual interface such that it is readable only by the user. The sensor or sensors receive data entered by the user's actions and send signals representing those actions to the processor. The processor then detects the data from the signals.
38. An apparatus, comprising:
means for generating a virtual interface;
means for generating a first input configuration of the virtual interface for a head-mounted display to securely receive a first input from a viewer;
means for displaying the virtual interface with the first input configuration at a defined focus distance and a defined location relative to an eye of the viewer to securely display the virtual interface, wherein:
the defined focus distance is a distance such that the virtual interface is in focus to the eye of the viewer and is not in focus to an eye of another individual;
the defined location is a location on the head-mounted display that is viewable to the eye of the viewer and is not viewable to the eye of the other individual;
means for receiving a first input from the viewer, the first input corresponding to the virtual interface with the first input configuration;
means for, in response to receiving the first input, automatically generating a second input configuration for the virtual interface for the head-mounted display to securely receive a subsequent second input from the viewer, wherein the first input configuration is different than the second input configuration such that the viewer performs a second action to input the second input that is different than a first action to input the first input;
means for displaying the virtual interface with the second input configuration at the defined focus distance and location relative to the viewer; and
means for receiving the second input from the viewer, the second input corresponding to the virtual interface with the second input configuration.
36. A method, comprising:
generating a first input configuration of a stereo three dimensional (3D) virtual interface to securely receive a first input from a viewer;
outputting the stereo 3D virtual interface with the first input configuration on a head-mounted display, the stereo 3D virtual interface being displayed at a defined focus distance and a defined location relative to an eye of the viewer to securely display the stereo 3D virtual interface, wherein:
the defined focus distance is a distance such that the stereo 3D virtual interface is in focus to the eye of the viewer and is not in focus to an eye of another individual;
the defined location is a location on the head-mounted display that is viewable to the eye of the viewer and is not viewable to the eye of the other individual;
detecting, with stereo 3D image capture, the viewer manipulating an end-effector in relation to the stereo 3D virtual interface;
receiving a first data set entered by the viewer through the manipulation;
in response to receiving the first data set, automatically generating a second input configuration for the stereo 3D virtual interface to securely receive a subsequent second input from the viewer, wherein the first input configuration is different than the second input configuration such that the viewer performs a second action to input the second input that is different than a first action to input the first input;
outputting the stereo 3D virtual interface with the second input configuration at the defined focus distance and the defined location relative to the viewer; and
receiving the second input from the viewer, the second input corresponding to the stereo 3D virtual interface with the second input configuration.
25. An apparatus, comprising:
a first display;
a processor in communication with the first display; and
a first sensor in communication with the processor, wherein the processor is to:
generate a first input configuration of a virtual data entry interface for the first display to securely receive a first input from a viewer;
send, to the first display, the virtual data entry interface with the first input configuration at a defined focus distance and a defined location relative to an eye of the viewer to securely display the virtual data entry interface, wherein:
the defined focus distance is a distance such that the virtual data entry interface is in focus to the eye of the viewer and is not in focus to an eye of another individual;
the defined location is a location on a head-mounted display that is viewable to the eye of the viewer and is not viewable to the eye of the other individual;
receive the first input from the viewer, the first input corresponding to the virtual data entry interface with the first input configuration;
in response to receiving the first input, automatically generate a second input configuration for the virtual data entry interface for the first display to securely receive a subsequent second input from the viewer, wherein the first input configuration is different than the second input configuration such that the viewer performs a second action to input the second input that is different than a first action to input the first input;
send, to the first display, the virtual data entry interface with the second input configuration at the defined focus distance and the defined location relative to the viewer; and
receive the second input from the viewer, the second input corresponding to the virtual data entry interface with the second input configuration.
1. A method, comprising:
generating, by a processor, a virtual data entry interface for a head-mounted display;
generating, by the processor, a first input configuration of the virtual data entry interface for the head-mounted display to securely receive a first input from a viewer;
displaying, by the head-mounted display, the virtual data entry interface with the first input configuration at a defined focus distance and a defined location relative to an eye of the viewer to securely display the virtual data entry interface, wherein:
the defined focus distance is a distance such that the virtual data entry interface is in focus to the eye of the viewer and is not in focus to an eye of another individual;
the defined location is a location on the head-mounted display that is viewable to the eye of the viewer and is not viewable to the eye of the other individual;
receiving, by a sensor, the first input from the viewer, the first input corresponding to the virtual data entry interface with the first input configuration;
in response to receiving the first input, automatically generating, by the processor, a second input configuration for the virtual data entry interface for the head-mounted display to securely receive a subsequent second input from the viewer, wherein the first input configuration is different than the second input configuration such that the viewer performs a second action to input the second input that is different than a first action to input the first input;
displaying, by the head-mounted display, the virtual data entry interface with the second input configuration at the defined focus distance and the defined location relative to the viewer; and
receiving, by the sensor, the second input from the viewer, the second input corresponding to the virtual data entry interface with the second input configuration.
37. An apparatus, comprising:
a first display and a second display adapted for stereo 3D output;
a processor in communication with the first display and the second display; and
a first sensor and a second sensor in communication with the processor, the first sensor and the second sensor being adapted for stereo 3D image capture, wherein the processor is to:
generate a first input configuration of a virtual data entry interface for the first display or the second display to securely receive a first input from a viewer;
send, to the first display or the second display, the virtual data entry interface with the first input configuration at a defined focus distance and a defined location relative to an eye of the viewer to securely display the virtual data entry interface, wherein:
the defined focus distance is a distance such that the virtual data entry interface is in focus to the eye of the viewer and is not in focus to an eye of another individual;
the defined location is a location on the first display that is viewable to the eye of the viewer and is not viewable to the eye of the other individual;
receive the first input from the viewer, the first input corresponding to the virtual data entry interface with the first input configuration;
in response to receiving the first input, automatically generate a second input configuration for the virtual data entry interface for the first display or the second display to securely receive a subsequent second input from the viewer, wherein the first input configuration is different than the second input configuration such that the viewer performs a second action to input the second input that is different than a first action to input the first input;
send, to the first display or the second display, the virtual data entry interface with the second input configuration at the defined focus distance and location relative to the viewer; and
receive the second input from the viewer, the second input corresponding to the virtual data entry interface with the second input configuration.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
8. The method of
9. The method of
10. The method of
11. The method of
12. The method of
13. The method of
14. The method of
15. The method of
16. The method of
17. The method of
18. The method of
19. The method of
20. The method of
the first input configuration comprises a first keypad in a first key configuration; and
the second input configuration comprises the first keypad in a second key configuration, the first key configuration being different from the second key configuration.
21. The method of
the first input configuration comprises at least one of a keypad, a slider, a wheel, a dial, or a color selection palette; and
the second input configuration comprises at least another configuration of the keypad, the slider, the wheel, the dial, or the color selection palette.
22. The method of
23. The method of
24. The method of
27. The apparatus of
the first display and the second display are adapted to output a 3D stereo image; and
the virtual data entry interface is a 3D stereo virtual interface.
30. The apparatus of
31. The apparatus of
35. The apparatus of
39. The apparatus of
41. The apparatus of
43. The apparatus of
This invention relates to secure data entry, wherein data being entered is protected from being intercepted as the data is being entered. This invention relates more particularly to a method and apparatus for securing data entry against such interception through the use of virtual data entry interfaces such as virtual keypads.
The notion of a keypad as a data entry interface is well known. Computers in particular utilize keypads for data entry, as do numerous other fixed and portable devices such as automatic teller machines (ATMs), calculators, telephones, etc.
Typically the term “keypad” is taken to imply a physical device. However, the functionality of a keypad does not require a physical device, and may be accomplished without one. For example, a touch-sensitive display may be used to show an image of a keypad, with the user touching the screen at appropriate points to simulate keypad data entry. Similarly, an image of a keypad may be projected onto a table or other flat surface, with a camera or other monitoring device used to determine when a user presses the projected keys. In such cases, the keypad may be considered to be a virtual device, in that the keypad does not physically exist as an independent object. That is, while the touch screen, table, etc. may have physical substance, there is no physical substance to the keypad itself, even though it is functional as a keypad.
Thus, a virtual keypad is one that is perceived to be, and functions as, a keypad, but that is not a physical object in and of itself. Nevertheless, the keypad is both functional and perceivable to its user, and is therefore a “real” working device. The keypad is somewhat analogous to a projected image in that sense; an image projected on a screen has no physical substance, but nevertheless is a real image, and can serve the same function as a physical photograph, painting, etc. Likewise, a keypad can be a functional device even without physical substance, and can therefore serve as a data interface.
However, conventional virtual keypads suffer from many of the same security weaknesses as physical keypads. Notably, the act of data entry itself provides an opportunity for unauthorized interception of the data. Such interception is sometimes referred to as “peeping”.
Peeping circumvents many conventional forms of data security. For instance, firewalls can protect stored or transmitted data by restricting access to data in a system from outside the system, and encryption can protect stored or transmitted data while that data is inside a system by making the data unreadable. However, when a user enters data, that data typically is unencrypted, and the user's actions in entering data (e.g. by typing on a keypad) take place in the physical world, rather than within the electronic system. Thus, approaches directed to protecting data systems themselves frequently are ineffective against peeping attacks, since the data is observed/intercepted while outside the system.
In a simple form, peeping can be carried out by “looking over the shoulder” of a person using a conventional keypad, virtual or otherwise. By watching a person enter data on a keypad, an observer can determine what data is being entered. Peeping is particularly problematic for security data such as passwords, PIN codes, encryption keys, and so forth, but is a concern for most or all types of data.
It will be understood that for such peeping, where the person intercepting the data can see both the keypad and the data entry process, it makes no difference whether the keypad is physical or virtual. Both types of keypad are susceptible.
Physical keypads have a disadvantage of being fixed in a single configuration. That is, because they are physical devices, the configuration of the keys cannot readily be changed; for example, on a particular numeric keypad, the number 1 is always in the same place. Thus, if a peeper can observe the keypad configuration at any time, they will know what the keypad configuration is at the time of data entry. As a result, they need not observe the keypad during data entry; watching the motions of the user is sufficient to determine what data is being entered.
Attempts have been made to rectify these problems using virtual keypads. Since they are less limited by a physical structure, virtual keypads can be reconfigured from time to time. Use of virtual keypads makes it more difficult to peep in circumstances where the peeper can only see the user, and cannot see the keypad at the same time. However, if a peeper can see both the user entering data and the keypad, they can still intercept the data as it is being entered, regardless of the configuration of the keypad.
It is noted that a peeper need not directly view the user and keypad in order to intercept the data being entered. Mirrors, wireless cameras, and other devices may be used while a peeper remains out of direct line of sight, stays at another location altogether, or even records the data entry activity for later viewing. Suitable surveillance devices are widely available, compact, and inexpensive.
It should be understood that although a virtual keypad is used as an example, the functions and security concerns described are relevant to other interfaces as well.
In view of the preceding, there is a need for a more secure method of data entry, one resistant to peeping.
The present invention contemplates a variety of systems, apparatus, methods, and paradigms for data entry.
In one embodiment of the present invention, a method is provided for secure data entry that includes generating a virtual data entry interface, and receiving data entered by a user using that virtual data entry interface. The virtual data entry interface is outputted so as to be readable substantially only to the user. Limited readability tends to provide security against peeping attacks, since the interface is not visible to a person attempting such an attack.
The virtual interface may be generated to be visible substantially only to the user. Alternately, the virtual interface may be generated to be in focus substantially only for the user. The virtual interface may be outputted such that the user himself or herself substantially blocks the line of sight to the interface for anyone except the user.
The virtual interface may be outputted in front of the user's eyes, and close to them, so as not to be visible from behind or adjacent to the user. The virtual interface may be outputted so as to be in focus only at a distance corresponding to the position of the user's eyes. For example, the virtual interface may be outputted on a head mounted display worn by the user.
The configuration of the virtual interface may be variable, such that when the interface is generated, its configuration can differ from its previous configuration(s). The user may manually change the configuration of the interface, and/or may actuate a function to change the configuration of the interface. The virtual interface may change automatically, and may change automatically each time the interface is generated.
The virtual interface may be generated as a 3D virtual object. In particular, the interface may be generated as a virtual keypad. The virtual keypad may include multiple iterations of individual symbols. The virtual interface may be generated as a color selection palette.
The step of receiving data may include detecting actions of the user. Such detection may include detecting manipulation of an end-effector such as a finger by the user, detecting a hand manipulation of the user, detecting the user's eye movement, and/or detecting the user's brain events, such as through electromagnetic sensing.
The virtual interface may be outputted so as to appear to the user to substantially align with a physical object.
In another embodiment of the present invention, an apparatus is provided for secure data entry. The apparatus includes at least one display, a data processor in communication with the display, and at least one sensor in communication with the processor. The data processor generates a virtual interface. The display outputs the interface generated by the processor, such that the virtual interface is readable only by the user. The sensor receives data entered by actions of the user in using the interface, and sends a signal representative of those actions to the processor. The processor determines the data based on the signal.
The apparatus may include first and second displays, which may be adapted to output a 3D stereo image. The virtual interface may be a 3D stereo interface.
The apparatus may include first and second sensors, and those sensors may be cameras. The sensor or sensors may be adapted to capture a 3D stereo image, and the processor may be adapted to detect the action of the user from that 3D stereo image. The sensor may be directed towards the user's hands, or towards the user's face.
The sensor may be a brain sensor.
The display, data processor, and sensor may be part of an integrated head mounted display.
In yet another embodiment of the present invention, a virtual data entry interface is provided. The interface includes a plurality of virtual features in communication with a data system. The virtual features are manipulatable by the user, such that the user can enter data by manipulating them. The interface is readable substantially only to the user.
In another embodiment of the present invention, a method of secure data entry is provided. The method includes generating a stereo 3D virtual data entry interface, visually detecting through stereo 3D image capture a user's manipulation of an end-effector in relation to the virtual interface, and receiving data entered by the user through that manipulation. The virtual interface is outputted on a head mounted display, so as to be readable substantially only to the user.
In another embodiment of the present invention, an apparatus is provided for secure data entry. The apparatus includes first and second displays adapted for 3D stereo output, a data processor in communication with the displays, and first and second sensors adapted for stereo 3D image capture in communication with the processor. The data processor is adapted to generate a stereo 3D virtual data entry interface. The displays are adapted to output the virtual interface to a user such that the virtual interface is readable substantially only to the user. The sensors are adapted to receive data entered by an action of the user using the virtual interface, and to send a signal representative of the action to the processor. The processor is adapted to detect the data from that signal. The displays, processor, and sensors are disposed on a head mounted display.
In still another embodiment of the present invention, an apparatus is provided for secure data entry. The apparatus includes means for generating a virtual data entry interface, means for outputting the virtual interface to a user, and means for receiving data entry by the user to the virtual interface, with the virtual interface being readable substantially only by the user.
The virtual interface may be outputted in front of the user's eyes and proximate thereto, so as not to be visible from a point of view behind or adjacent the user. The virtual interface may be a stereo 3D virtual interface. The means for receiving user data entry may detect motions. The generating means, outputting means, and receiving means may be disposed on a head mounted display. The means for generating the virtual interface may generate the interface in a new configuration each time the virtual interface is generated.
Like reference numbers generally indicate corresponding elements in the figures.
Referring to the figures, an apparatus 10 for secure data entry in accordance with the present invention is described.
As shown, the apparatus 10 includes a first display 12 and a second display 14, adapted to output a virtual data entry interface to a user of the apparatus 10. Although a pair of displays 12 and 14 is shown, for example so as to support output of a 3D stereo image, this is an example only; an arrangement with a single display may be equally suitable.
A range of devices may be suitable for use as the first and second displays 12 and 14, including but not limited to light emitting diodes (LED), organic light emitting diodes (OLED), plasma display panels (PDP), liquid crystal displays (LCD), etc. Likewise, the use of projected or transmitted displays, where the viewed surface is essentially a passive screen for an image projected or otherwise transmitted after being generated elsewhere, may also be suitable. In addition, either digital or analog display technologies may be equally suitable. Moreover, although as illustrated the displays 12 and 14 are in the form of screens that display the interface on their surfaces, this is an example only. Other arrangements, including but not limited to systems that display images directly onto a user's eyes, may be equally suitable.
The apparatus 10 also includes a first sensor 16 and a second sensor 18. These sensors are adapted to detect actions by a user of the apparatus 10, in particular actions that represent the entry of data to the apparatus 10. The sensors 16 and 18 are also adapted to generate a signal representative of the user's actions.
As noted with regard to the displays 12 and 14, an arrangement of first and second sensors 16 and 18 as shown is an example only; other arrangements, including arrangements with a single sensor, may be equally suitable. Use of a pair of sensors 16 and 18 does, however, support stereo 3D image capture.
As illustrated, the sensors 16 and 18 are compact digital cameras. A range of cameras, including but not limited to CMOS and CCD cameras, may be suitable. Moreover, sensors other than cameras likewise may be equally suitable.
The apparatus 10 also includes a data processor 20. The processor 20 is in communication with the first and second displays 12 and 14, and also with the first and second sensors 16 and 18. The manner by which the communication is accomplished may vary from one embodiment to another; in one embodiment the components may communicate by direct wire connection, but other arrangements may be equally suitable. The processor 20 is adapted to generate the virtual data entry interface, and to output that interface to the first and second displays 12 and 14. The processor 20 is also adapted to receive the signal representative of the user's actions as generated by sensors 16 and 18.
In addition, the processor 20 is adapted to detect the data being entered by the user, based on the signal received from the sensors 16 and 18. The manner or manners by which the processor detects the data may vary based on the types of sensors 16 and 18 used in any particular embodiment of the apparatus 10, and on the types of user actions that are anticipated to be used for data entry. In one embodiment, sensors 16 and 18 are cameras arranged to generate 3D information regarding objects in their combined field of view, and users enter data by entering keystrokes on a virtual keypad. In such an embodiment, the processor may detect the data by determining the positions and/or motions of the user's fingers as they manipulate the keypad. For example, the user's hands and fingers could be distinguished from the background based on their shape, their color, their texture, specific features, etc., and the positions and/or motions of the user's hands in the physical 3D world could then be correlated with the positions of individual keys on the virtual keypad in virtual 3D space, so as to determine which keys the user is striking. A range of algorithms and data processing techniques may be suitable for such an embodiment.
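As the preceding paragraph notes, a range of algorithms may be suitable for correlating finger positions with the positions of virtual keys. The sketch below is illustrative only and is not an implementation prescribed by the patent: it assumes a simple pinhole stereo-camera model, hypothetical values for the focal length, baseline, and principal point, and that the fingertip has already been located in each camera image; all function and parameter names are the author's own.

```python
import numpy as np

# Hypothetical camera parameters for a pair of head-mounted sensors.
FOCAL_LENGTH_PX = 800.0   # focal length in pixels (assumed)
BASELINE_M = 0.06         # separation between the two cameras, in meters (assumed)
CENTER_X_PX, CENTER_Y_PX = 640.0, 360.0   # assumed principal point (image center)

def triangulate(left_px, right_px):
    """Recover a 3D point, in meters in the camera frame, from matched pixel
    coordinates of the fingertip in the left and right images."""
    (xl, yl), (xr, _) = left_px, right_px
    disparity = xl - xr
    if disparity <= 0:
        return None                                   # no reliable depth estimate
    z = FOCAL_LENGTH_PX * BASELINE_M / disparity      # depth from disparity
    x = (xl - CENTER_X_PX) * z / FOCAL_LENGTH_PX
    y = (yl - CENTER_Y_PX) * z / FOCAL_LENGTH_PX
    return np.array([x, y, z])

def struck_key(fingertip_xyz, key_centers, press_radius=0.015):
    """Return the label of the virtual key whose 3D center is nearest the
    fingertip, provided the fingertip is within press_radius of that center."""
    label, center = min(key_centers.items(),
                        key=lambda kv: np.linalg.norm(fingertip_xyz - kv[1]))
    return label if np.linalg.norm(fingertip_xyz - center) <= press_radius else None

# Example: a 3x3 block of digit keys laid out 0.40 m in front of the cameras,
# spaced 3 cm apart, and a fingertip observed in both images.
keypad = {str(d): np.array([(d % 3 - 1) * 0.03, (d // 3 - 1) * 0.03, 0.40])
          for d in range(9)}
tip = triangulate((700.0, 380.0), (580.0, 380.0))
print(struck_key(tip, keypad))   # -> '5' for these example values
```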
A range of general-purpose, special-purpose, and embedded systems may be suitable for use as the data processor 20. Moreover, it may be equally suitable for the data processor 20 to consist of two or more physical or logical processor components.
Because the interface 22 appears as an object in 3D space to the user, the user can interact with the interface in a fashion similar to that for interacting with a solid object. For instance, the user may enter data by moving a fingertip to the apparent position of a key of the virtual interface 22, in much the same manner as pressing a key on a physical keypad.
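As a concrete but purely illustrative sketch of such interaction (not a technique specified by the patent), a key press may be registered when the tracked fingertip first crosses the apparent depth at which the interface 22 is rendered, and a release registered when it withdraws; the depth values below are hypothetical.

```python
# A sketch of press-event detection: a press is registered when the tracked
# fingertip first moves beyond the apparent depth of the virtual interface,
# and a release when it withdraws. The depth values are hypothetical.

INTERFACE_DEPTH_M = 0.40   # apparent distance of the virtual interface (assumed)

def press_events(fingertip_depths):
    """Yield 'press'/'release' events from a time series of fingertip depths."""
    pressed = False
    for z in fingertip_depths:
        if not pressed and z >= INTERFACE_DEPTH_M:
            pressed = True
            yield "press"
        elif pressed and z < INTERFACE_DEPTH_M:
            pressed = False
            yield "release"

# Example: the fingertip approaches, pushes through the interface plane, withdraws.
print(list(press_events([0.30, 0.35, 0.41, 0.42, 0.36])))   # ['press', 'release']
```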
As illustrated, the virtual interface 22 is generated in the form of a simple numeric keypad, with a plurality of virtual keys that the user 26 may manipulate in order to enter data.
It should be understood that the arrangements for the virtual interface 22 as shown and described herein are examples only, and that other arrangements may be equally suitable.
According to the principles of the present invention, the apparatus 10 displays the virtual interface in such a manner that the user 26 may see it, but that a peeper 34A, 34B, or 34C may not. Given an arrangement wherein the displays 12 and 14 are disposed in front of and proximate to the eyes 30 and 32 of the user 26, several features of the apparatus 10 contribute to this security.
For example, given an apparatus 10 configured as glasses, the distance between the user's eyes 30 and 32 and the displays 12 and 14 is typically small, on the order of one to several centimeters at most. However, as may be understood from the geometry of such an arrangement, the distance between the displays 12 and 14 and the eyes of a prospective peeper 34A, 34B, or 34C is necessarily much greater.
The relative differences in eye-to-display distance for the user 26 and a peeper 34A, 34B, or 34C allow for convenient steps to oppose peeping. For example, the images on the displays 12 and 14, and thus the interface itself, can be presented in such a fashion as to be in focus only for the user 26. A peeper 34A, 34B, or 34C, at a much greater distance from the displays 12 and 14, might see nothing more than a blur of light, from which they would be unable to determine any information. Likewise, the images on the displays 12 and 14, and thus the interface, can be presented at such a size, or at such a level of contrast, as to be readable only at distances suited for the user 26 but not for a prospective peeper 34A, 34B, or 34C. Other arrangements for limiting the readability of the interface may also be equally suitable.
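As a rough numerical illustration of the size argument (not taken from the patent, treating the displays as simple surfaces viewed directly and ignoring any display optics; the legibility threshold is an approximate rule of thumb), characters rendered 2 mm tall subtend a comfortably readable visual angle at a few centimeters, but an unreadably small one at a peeper's distance:

```python
import math

def visual_angle_deg(height_m, distance_m):
    """Visual angle, in degrees, subtended by an object of the given height
    viewed from the given distance."""
    return math.degrees(2 * math.atan(height_m / (2 * distance_m)))

TEXT_HEIGHT_M = 0.002     # characters rendered 2 mm tall (assumed)
LEGIBLE_DEG = 0.2         # rough legibility threshold, about 12 arcminutes (rule of thumb)

for label, d in [("user at 3 cm", 0.03), ("peeper at 1 m", 1.0), ("peeper at 3 m", 3.0)]:
    angle = visual_angle_deg(TEXT_HEIGHT_M, d)
    verdict = "readable" if angle >= LEGIBLE_DEG else "too small to read"
    print(f"{label}: {angle:.3f} degrees -> {verdict}")
```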
In addition, the apparatus 10 may take advantage of geometry in providing data security. As shown in the embodiment wherein the apparatus 10 is configured as glasses, the displays 12 and 14 face the eyes 30 and 32 of the user 26, so that the head of the user 26 substantially blocks the line of sight to the displays 12 and 14 from points of view behind or adjacent to the user 26. A peeper 34A, 34B, or 34C at such a point of view therefore has no direct view of the virtual interface 22 at all.
As will be understood, if a prospective peeper 34A, 34B, or 34C cannot see the virtual interface 22, determining the data being entered by a user 26 on that interface 22 is made more difficult. However, as noted above, it is possible for a peeper 34A, 34B, or 34C to intercept data without seeing the interface, if they know the arrangement of that interface.
With reference to the configuration of the virtual interface 22, it is noted that because the interface 22 is a virtual construct, the arrangement of its keys is not fixed, and may differ from one generation of the interface 22 to the next.
For example, a virtual numeric keypad may present its keys in a conventional arrangement at one time, and in a rearranged arrangement at another time.
Given such an arrangement, the motion that would constitute entry of the key with the number 8 by a user interacting with the interface 22 in one configuration is different from the motion that would constitute entry of the number 8 in another configuration. A peeper who observes only the motions of the user 26, without also seeing the current configuration of the interface 22, therefore cannot reliably determine which keys are being struck.
Unlike a mechanical data entry interface, a virtual interface 22, lacking physical substance, can be readily rearranged or reconfigured. For example, an apparatus 10 could shift between one configuration of the interface 22 and another each time the interface 22 is generated, each time data entry is invoked, or in response to an action by the user.
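A minimal sketch of such reconfiguration is given below; it is an illustration only, with a hypothetical layout function and dimensions rather than anything specified by the patent. The key labels of a numeric keypad are simply shuffled each time the interface is generated, so that the position struck for a given digit differs from one data-entry session to the next.

```python
import random

def generate_keypad_layout(rows=4, cols=3):
    """Return a freshly shuffled numeric keypad layout as a grid of key labels.
    A new arrangement is produced each time the virtual interface is generated."""
    labels = list("0123456789") + ["*", "#"]
    random.shuffle(labels)
    return [labels[r * cols:(r + 1) * cols] for r in range(rows)]

# Two consecutive generations of the interface: the same keys appear, but in
# different positions, so the motion needed to enter a given digit differs.
for layout in (generate_keypad_layout(), generate_keypad_layout()):
    for row in layout:
        print(" ".join(f"[{k}]" for k in row))
    print()
```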
In addition, as a virtual construct, a virtual interface 22 is not limited only to a specific geometry or functionality. For example, another virtual interface 22 may include multiple iterations of individual symbols, or may take the form of a slider, a wheel, a dial, or some other control rather than a keypad.
Such flexibility in the configuration of the virtual interface 22 secures the apparatus 10 against memorization of any particular configuration. Even if individual configurations somehow could be seen (which, as described above, the present invention makes difficult), memorizing one or even many configurations does not provide reliable insight into the configuration of the virtual interface 22 at the time that a user is entering data.
Thus, an apparatus 10 in accordance with the principles of the present invention is doubly secure against peeping: the virtual interface 22 is visible substantially only to the user, and the actions of the user cannot be reliably correlated by a peeper with any particular configuration of an interface 22 in such a way as to determine the data being entered, because the configuration of the interface 22 cannot be reliably predetermined by the peeper.
Although the preceding description has referred, for simplicity, to a virtual interface 22 in the form of a simple numeric keypad, this is an example only. While keypads, including but not limited to numeric, alphabetic, and alphanumeric keypads, may be advantageous for certain embodiments, a wide range of other interfaces may be equally suitable.
For example, a virtual interface 22 that is an analog of a different mechanical interface or device may be suitable for some embodiments.
It is noted that although the above examples refer to the use of letters and numbers as markings, this is done as an example only. Other symbols, markings, or distinguishing features may be equally suitable, including but not limited to non-alphanumeric characters, musical notes, icons, shapes, colors, etc. In particular, the virtual interface 22 in accordance with the principles of the present invention is not limited to analogs of conventional mechanical or symbol based systems.
For example, the virtual interface 22 may be generated as a color selection palette, with the user entering data by selecting colors rather than letters or numbers.
The virtual interfaces 22 shown and described are examples only. Other arrangements, including but not limited to virtual combination locks, virtual geometric interfaces, virtual puzzles, virtual photo manipulations, and other constructs may be equally suitable.
In addition, the apparatus 10 as described and illustrated is also an example only. In particular, the approaches for implementing secure data entry as described herein are not hardware dependent, and could be executed on a wide range of apparatuses.
For example, the first and second sensors 16 and 18 may be directed toward the face of the user 26, such that data is entered by, and detected from, the motion of the user's eyes relative to the virtual interface 22.
As another example, the sensor may be a brain sensor, such that data is entered by, and detected from, brain events of the user 26, for example through electromagnetic sensing.
Detection of eye motion and brain events is described by way of example only; other actions or events may be equally suitable for determining data entry.
In addition, although the invention is illustrated herein as an integrated unit, e.g. a head mounted display, this is an example only. For certain embodiments, it may be advantageous for components to be physically and/or logically separated. For example, sensors 16 and 18 may not be proximate the other elements and the user as illustrated, but could be disposed at some distance from the user, so as to view both any hand gestures by the user and any face/body motions that the user makes. Likewise, the processor 20 might be at some distance from the user and/or the other elements of the apparatus 10, e.g. in communication by wireless means.
One example of such a distributed arrangement would be an embodiment wherein the displays 12 and 14 are used to display content generated by an external processor, as when a user utilizes the present invention as a 3D display for a PC, game console, supercomputer array, etc.
The above specification, examples, and data provide a complete description of the manufacture and use of the composition of the invention. Since many embodiments of the invention can be made without departing from the spirit and scope of the invention, the invention resides in the claims hereinafter appended.