The present invention allows for the creation of a biometrically secure environment in which confidential documents, or the like, can be viewed, edited and shared in public places without worrying that someone will see their contents. The invention provides privacy, for example for the purposes of reading documents, in a public environment, with the confidence that the user is the only one able to read the document. Privacy may be achieved through methods of identification using biometric features, such as face, iris or voice recognition. Verification that a real person is viewing the document may also be achieved by pulse recognition. In one embodiment, the screen will shut down when more than one person looks directly at the screen.
17. A method for limiting access to a document on a user device to an authorized user, the document stored on a storage device, comprising the steps of:
creating a user profile associated with a user, wherein the user profile includes a plurality of biometric features associated with the user and a weight associated with each of the plurality of biometric features;
generating a profile associated with the document, the document profile associated with at least one authorized user, functions that the authorized user can perform on the document, and a minimum confidence level required to interact with the document in a selected manner;
capturing a field of view in proximity to the user device;
upon detecting in the field of view a person,
determining a first biometric feature of the person;
calculating a confidence level using:
the first biometric feature of the person,
the plurality of biometric features from the user profile, and
the weight associated with each of the plurality of biometric features of the user profile;
if the calculated confidence level exceeds a predetermined threshold, identifying the person as the authorized user of the document,
if the calculated confidence level falls below the predetermined threshold, requesting the person to provide an additional biometric feature and re-calculating the confidence level with the additional biometric feature and the additional feature's weight until either the calculated confidence level exceeds the predetermined threshold or there are no more additional biometric features to be determined, in which case the user is not identified as the authorized user; and
allowing the authorized user to perform the functions that the authorized user can perform on the document.
10. A system for limiting access to a document on a user device to an authorized user, the system comprising:
a storage device for storing the document;
a processor for:
creating a user profile associated with a user, wherein the user profile includes a plurality of biometric features associated with the user and a weight associated with each of the plurality of biometric features;
generating a profile associated with the document, the document profile associated with at least one authorized user, functions that the authorized user can perform on the document, and a minimum confidence level required to interact with the document in a selected manner;
capturing a field of view in proximity to the user device;
upon detecting in the field of view at least one biometric feature of a person,
determining a first biometric feature of the person;
calculating a confidence level using:
the first biometric feature of the person,
the plurality of biometric features from the user profile, and
the weight associated with each of the plurality of biometric features of the user profile;
if the calculated confidence level exceeds a predetermined threshold, identifying the person as the authorized user of the document;
if the calculated confidence level falls below the predetermined threshold, requesting the person to provide an additional biometric feature and re-calculating the confidence level with the additional biometric feature and the additional biometric feature's weight until either the calculated confidence level exceeds the predetermined threshold or there are no more additional biometric features to be determined, in which case the user is not identified as the authorized user; and
allowing the authorized user to perform the functions that the authorized user can perform on the document.
6. A method for managing access to a display of a user device, comprising the steps of:
creating a user profile associated with a user, wherein the user profile includes a plurality of biometric features associated with the user and a weight associated with each of the plurality of biometric features;
creating a document profile associated with a document, wherein the document profile includes an authorized user, functions that the authorized user can perform on the document, and a minimum confidence level required for the user to be identified as the authorized user;
capturing a field of view in proximity to the display;
upon detecting in the field of view a person,
determining a first biometric feature of the person;
calculating a confidence level using:
the first biometric feature of the person,
the plurality of biometric features from the user profile, and
the weight associated with each of the plurality of biometric features of the user profile;
if the calculated confidence level exceeds a predetermined threshold, identifying the person as the authorized user of the document;
if the calculated confidence level falls below the predetermined threshold, requesting the person to provide an additional biometric feature and re-calculating the confidence level with the additional biometric feature and that feature's weight until either the calculated confidence level exceeds the predetermined threshold or there are no more additional biometric features to be determined, in which case the user is not identified as the authorized user;
if the person is identified as the authorized user, activating the display to allow the authorized user to perform the functions that the authorized user can perform on the document; and
if the person is not the authorized user, deactivating the display to prevent the person from performing the functions that the authorized user can perform on the document.
1. A system for managing access to a display of a user device, comprising:
a user device with a display;
a detection device for capturing a field of view in proximity to the display; and
a processor for:
creating a user profile associated with a user, wherein the user profile includes a plurality of biometric features associated with the user and a weight associated with each of the plurality of biometric features;
creating a document profile associated with a document, wherein the document profile includes an authorized user, functions that the authorized user can perform on the document, and a minimum confidence level required for the user to be identified as the authorized user;
upon detecting in the field of view a person,
determining a first biometric feature of the person;
calculating a confidence level using:
the first biometric feature of the person,
the plurality of biometric features from the user profile, and
the weight associated with each of the plurality of biometric features of the user profile;
if the calculated confidence level exceeds a predetermined threshold, identifying the person as the authorized user of the document;
if the calculated confidence level falls below the predetermined threshold, requesting the person to provide an additional biometric feature and re-calculating the confidence level with the additional biometric feature and the additional biometric feature's weight until either the calculated confidence level exceeds the predetermined threshold or there are no more additional biometric features to be determined, in which case the user is not identified as the authorized user;
if the person is identified as the authorized user, activating the display to allow the authorized user to perform the functions that the authorized user can perform on the document; and
if the person is not the authorized user, deactivating the display to prevent the person from performing the functions that the authorized user can perform on the document.
24. A non-transitory computer-readable medium with computer executable instructions embodied thereon for managing access to a display of a user device, the computer-executable instructions causing a computer to perform the process of:
creating a user profile associated with a user, wherein the user profile includes a plurality of biometric features associated with the user and a weight associated with each of the plurality of biometric features;
creating a document profile associated with a document on a user device, wherein the document profile includes an authorized user, functions that the authorized user can perform on the document, and a minimum confidence level required for the user to be identified as the authorized user;
capturing a field of view in proximity to the display;
upon detecting in the field of view a person, checking the profile to determine if the person is the authorized user of the associated opened document,
upon detecting in the field of view a person,
determining a first biometric feature of the person;
calculating a confidence level using:
the first biometric feature of the person,
the plurality of biometric features from the user profile, and
the weight associated with each of the plurality of biometric features of the user profile;
if the calculated confidence level exceeds a predetermined threshold, identifying the person as the authorized user of the document;
if the calculated confidence level falls below the predetermined threshold, requesting the person to provide an additional biometric feature and re-calculating the confidence level with the additional biometric feature and that feature's weight until either the calculated confidence level exceeds the predetermined threshold or there are no more additional biometric features to be determined, in which case the user is not identified as the authorized user;
if the person is identified as the authorized user, activating the display to allow the authorized user to perform the functions that the authorized user can perform on the document; and
if the person is not the authorized user, deactivating the display to prevent the person from performing the functions that the authorized user can perform on the document.
3. The system of
4. The system of
5. The system of
asking the person to say a random sequence;
comparing the person's speech to a pattern in a data base to determine if the person is the authorized user;
if the person is the authorized user, activating the display to permit the authorized user to perform the functions that an authorized user can perform on the document, commensurate with the profile; and
if the person is not the authorized user, deactivating the display to prevent the person from performing the functions that an authorized user can perform on the document.
8. The method of
9. The method of
11. The system of
continuing to capture the field of view in proximity to the user device; and
upon detecting in the field of view a second user who is not considered the authorized user, disallowing the authorized user to interact with the document in the selected manner.
12. The system of
13. The system of
14. The system of
16. The system of
18. The method of
continuing to capture the field of view in proximity to the user device; and
upon detecting in the field of view a second user who is not considered the authorized user, disallowing the authorized user to interact with the document in the selected manner.
19. The method of
20. The method of
21. The method of
23. The method of
The present patent application claims priority to U.S. Provisional Patent Application No. 62/720,543, filed Aug. 21, 2018, and entitled “System and Method for Securely Viewing and Editing Documents and Other Information”, the disclosure of which is incorporated herein by reference thereto.
The present invention allows for the creation of a biometrically secure environment in which confidential documents, or the like, can be viewed, edited and shared in public places without worrying that someone will see their contents.
The invention provides privacy, for example for the purposes of reading documents, in a public environment, with the confidence that the user is the only one able to read the document. Privacy may be achieved through methods of identification using biometric features, such as face, iris or voice recognition. Verification that a real person is viewing the document may also be achieved by pulse recognition. In one embodiment, the screen will shut down when more than one person looks directly at the screen.
A second layer of security in the form of liveness checks may also be provided. For example, this may be accomplished using pulse detection, in one embodiment.
In one embodiment, the present invention may be implemented as a hardware product, a software product, or a combination thereof.
As will be explained in further detail below, some of the features of the present invention may include:
In various embodiments, the present invention may be implemented on a variety of platforms, such as mobile device, tablet, laptop, desktop, etc., using a camera, microphone, etc.
To use some applications and systems, a user has traditionally needed to perform a log-in operation, using a user ID and password that can identify the user. However, a password leaves the user with a very low level of protection. Most users are not fully aware of the dangers of cyber security and use passwords that are easily hackable. The use of more elaborate passwords leads to forgotten passwords and can, in some cases, lead to keeping a log of all passwords, which obviously contradicts the entire purpose of a complex password.
In contrast, biometric identification provides a solution serving both purposes. The use of biometric technologies ensures that the user will never forget a password again and simultaneously provides a very high level of security.
Biometric identification methods include, amongst others, face recognition (identification based on an image or video of a subject's face), iris recognition (identification based on an image or video of a subject's iris), fingerprint recognition (identification based on a subject's fingerprint), voice recognition (identification based on a voice sample of the subject), a combination of any of the above methods, or other methods.
Hacking a biometric system is not simple at all, and yet it is possible. Many recent cases have been documented and publicized, such as the fooling of the Samsung Galaxy S8 iris scanner in May 2017. The iris scanner was fooled using a camera, a laser printer and a contact lens. In September 2017, researchers were able to bypass Apple's Face ID with a 3D-printed mask of a researcher's face, made of stone powder. The total cost of the materials used was approximately $150.
A solution for avoiding such hacks is, as taught according to the present invention, the use of "liveness checks." A liveness check allows a biometric application to discriminate between the real biometric factor of a subject and artificial copies of the features making up that biometric factor. Liveness detection reduces the likelihood that spoofing attempts will succeed, and as such reduces the false acceptance rate. An example of a liveness check is facial recognition software requiring the subject to blink, smile or give a head nod. However, even these gestures are quite easily faked using a mask with holes in it.
As will be described further below, these limitations may be overcome, such as by using pulse detection using a video that can be obtained using a dual camera array.
A secure environment must be protected at all times, because identifying a person at the beginning of a document viewing/editing session is not enough. A solution according to the present invention is that, during the viewing/editing of the document, the identification process runs in the background and yet provides the highest possible level of security by performing an identification screening every X seconds. In addition, a backup identification is offered as a precaution in the event that the continuous identification fails.
For this method to be seamless and yet achieve the highest levels of performance, a weighting method is devised, comparing the validity of the verification results from the different methods.
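As an illustration of such a weighting method, the sketch below combines per-method match scores into a single weight-normalized confidence value, and requests additional biometric features until a threshold is met, mirroring the re-calculation loop recited in the claims. The method names, weights, threshold, and the request_additional_feature() helper are illustrative assumptions, not values taken from the specification.

```python
# Illustrative sketch of the claimed weighted-confidence calculation.
# Method names, weights, the threshold, and request_additional_feature()
# are assumptions for illustration, not taken from the specification.

def confidence(scores, weights):
    """Weight-normalized combination of per-method match scores in [0, 1]."""
    used = [m for m in scores if m in weights]
    if not used:
        return 0.0
    total = sum(weights[m] for m in used)
    return sum(weights[m] * scores[m] for m in used) / total

def request_additional_feature(method):
    # Placeholder: a real system would run the named recognizer here.
    return {"iris": 0.95, "voice": 0.80}.get(method, 0.0)

def identify(initial_scores, weights, threshold, remaining):
    """Re-calculate confidence with additional biometric features until the
    threshold is exceeded or no more features remain (the claimed loop)."""
    scores = dict(initial_scores)
    while confidence(scores, weights) < threshold and remaining:
        method = remaining.pop(0)
        scores[method] = request_additional_feature(method)
    return confidence(scores, weights) >= threshold

weights = {"face": 0.5, "iris": 0.3, "voice": 0.2}   # per-user profile weights
# Face alone scores 0.8 (< 0.85); adding the iris feature lifts the
# weight-normalized confidence above the threshold.
print(identify({"face": 0.8}, weights, threshold=0.85, remaining=["iris", "voice"]))
```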
A secure environment should provide complete confidentiality, which means that it is necessary to determine that only one person is looking at a device at a given time. Where the system detects a breach, i.e., another face looking at the camera, meaning that another person is attempting to violate the privacy of the prime user, the system may provide an alert or, at certain levels of confidence, even shut off the screen.
Behavioral profiling may be used in order to give the user the best secure environment without harming convenience of use, while improving the user experience. Behavioral profiling is a definition of a person by his habits: location, the Wi-Fi networks to which he often connects, often-visited sites, manner of text input, the way in which a person holds his phone, etc. Documents may be secured by using a behavioral profiling score to define the security levels that need to be applied. For example, GPS coordinates or the Wi-Fi network may define the score: a low score corresponds to a public place, where the security level needs to be set higher; a medium score means the person is in a workplace, where security would be at a medium level; and a high score corresponds to, for example, the user being at home, which results in a lower level of security.
In one embodiment, behavioral profiling scores can be calculated according to the sensors present in a device. For example, a smartphone or tablet may provide a location sensor, a gyro sensor, and a Wi-Fi/mobile module that can supply relevant information.
For laptops or desktops, location may, for example, be obtained from the IP address, the browser history may be checked, and many more signals can be learned from different sensors and modules.
Behavioral profiling can be used in addition to existing biometric methods but, in one embodiment, cannot supply a secure environment if biometric methods do not exist.
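A minimal sketch of how a behavioral profiling score might be computed from two such signals (GPS proximity to home and a recognized Wi-Fi network) and mapped to a required security level. All signal names, weights, coordinates, and the score-to-level mapping are illustrative assumptions.

```python
import math

# Illustrative behavioral-profiling score; values are assumptions, not
# taken from the specification.
HOME = (32.08, 34.78)                  # hypothetical home coordinates
TRUSTED_WIFI = {"home-ap", "office-ap"}

def distance_km(a, b):
    # Equirectangular approximation; adequate for a coarse "near home" test.
    dlat = math.radians(b[0] - a[0])
    dlon = math.radians(b[1] - a[1]) * math.cos(math.radians(a[0]))
    return 6371 * math.hypot(dlat, dlon)

def behavior_score(gps, ssid):
    score = 0.0
    if ssid in TRUSTED_WIFI:
        score += 0.5
    if distance_km(gps, HOME) < 0.2:   # within ~200 m of home
        score += 0.5
    return score

def required_security_level(score):
    # High score (home) -> lower security; low score (public) -> higher.
    if score >= 0.8:
        return 1
    if score >= 0.4:
        return 3
    return 5

print(required_security_level(behavior_score((32.081, 34.781), "home-ap")))  # 1
print(required_security_level(behavior_score((31.5, 34.0), "cafe-wifi")))    # 5
```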
The accompanying figures depict the structure and operation of various aspects of the present invention, in one embodiment.
Biometrics
Biometric technology is the science of detecting and recognizing human characteristics using technologies that measure and analyze individual biological data. The way we are genetically composed creates identifiable traits that can uniquely represent us as individuals. DNA can be used to distinguish between two individuals, except for identical twins. Some biometric traits, such as fingerprints and iris prints, are distinctive even among identical twins.
Current antiquated mechanisms such as keys, passes, tokens, passwords, PINs and secret questions are easily stolen and shared. Biometrics, however, identifies a person based on distinctive physiological or behavioral characteristics, and these attributes cannot be shared, misplaced or forgotten. Going forward, it is becoming increasingly important to have confidence in the secure authentication of electronically stored information.
In enrollment step 101, a user's biometric information is presented, captured in step 102, processed in step 103, and stored in a database 104. In the verification/recognition steps, biometric information is presented 111 and captured (step 112), processed in step 113, and in step 114 the biometric data processed in step 113 is compared to the enrollment biometric data stored in the database 104. The result is either a match (115) or no match (116).
There are different types of biometric processes and techniques that may be used, including, for example: facial biometrics, fingerprint recognition, speaker biometrics, liveness checks, iris recognition, etc.
1. Facial Biometrics
The face is an important part of who you are and how people identify you. Except in the case of identical twins, the face is arguably a person's most unique physical characteristic. While humans have had the innate ability to recognize and distinguish faces for millions of years, computers are only now catching up.
A typical facial biometric flow is depicted in
Face detection is the identification of human faces in digital images; in essence, it is the task of defining bounding boxes that surround human faces. Face detection is a crucial pre-processing step in any face verification system: detecting the faces in an image allows each face to be extracted from the background and analyzed separately.
In a typical embodiment of face detection, with reference to
For purposes of the present invention, face detection may be the tool used to determine whether only a single person is currently viewing the screen (a single face is located in the frame captured by the camera) or multiple people.
Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks is a face detection algorithm that utilizes Multi-Task Cascaded Convolutional Networks (MTCNN). MTCNN makes use of the different levels of the image's Gaussian pyramid to create a bounding box and extract facial landmarks of the face. The procedure may be performed using a three-stage deep convolutional network, where each stage outputs a more refined and more accurate depiction of the face's location in the image.
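For illustration, the sketch below runs an off-the-shelf MTCNN implementation (the open-source `mtcnn` Python package, an assumption about tooling; the specification does not name a library) and applies the single-viewer check described earlier. The file name is a placeholder.

```python
# Sketch: detecting and counting faces with an off-the-shelf MTCNN
# implementation. Assumes `pip install mtcnn opencv-python`; "frame.jpg"
# is a placeholder for a captured camera frame.
import cv2
from mtcnn import MTCNN

detector = MTCNN()
frame = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
faces = detector.detect_faces(frame)   # list of {'box', 'confidence', 'keypoints'}

for f in faces:
    x, y, w, h = f["box"]
    print(f"face at ({x},{y}) size {w}x{h}, confidence {f['confidence']:.2f}")

# The specification's privacy rule: more than one detected face is a breach.
if len(faces) > 1:
    print("warning: multiple viewers detected")
```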
YOLO: Real-Time Object Detection is a concept recently introduced in the field of real-time object detection. YOLO divides a given image into a square grid, predicts bounding boxes for each grid cell and predicts a class probability for each. As a final step, the bounding boxes are merged and a final class prediction is given. YOLO is known to work at real-time rates even when running on relatively simple devices. Since a viewed face is simply an object in a digital image, the use of such an advanced, state-of-the-art detection system would be a suitable choice.
Face alignment is the task of warping facial images so that their landmarks eventually share the same orientation. The task is vital in order to compensate for the high variability of poses in which the face may be captured; it allows face verification to be feasible without having the user enroll in all possible poses to the camera. The procedure usually involves the use of facial landmarks and creates the best possible correspondence between these landmarks.
One Millisecond Face Alignment with an Ensemble of Regression Trees. In this technique, the framework learns face landmarks in preparation for the actual face alignment, thus allowing for an initial extraction of facial landmarks from the image and aligning the image according to said landmarks. This a priori step allows for real-time performance. The alignment process itself may use a cascaded regression pipeline.
Deep Alignment Network: A convolutional neural network for robust face alignment. Deep Alignment Network (DAN) is a deep neural network that includes several stages. Each single stage of DAN includes a feed-forward neural network which performs landmark location estimation, and connection layers that generate the input for the next stage. DAN passes three inputs between stages: the input image, warped so that the current landmark estimates are aligned with the canonical shape; a landmark heatmap; and a feature image. The advantage of DAN is that it extracts features from the entire face image rather than from patches around landmark locations. An additional advantage is the use of a landmark heatmap, an image with high intensity values around landmark locations where intensity decreases with the distance from the nearest landmark.
Feature extraction is the stage in which a biometric vector is extracted from the image to later be used by the matcher to determine whether two facial images are identical.
FaceNet: A Unified Embedding for Face Recognition and Clustering. FaceNet is a system that directly learns mappings from facial images to a compact Euclidean space where distances directly correspond to a measure of face similarity. The framework goes through a learning procedure allowing it to extract features capable of differentiating different facial images. This framework is based on the use of positive and negative examples of facial images. For each user, an anchor is determined; the framework attempts to bring positive examples (images of the same individual) "closer" in the formed feature space and push negative examples (images of different individuals) "further away." Said framework makes it possible to determine whether two facial images came from the same source or from different sources.
Video-Based Face Recognition Using Ensemble of Haar-Like Deep Convolutional Neural Networks. This framework treats feature extraction as a two-stage operation. At the first stage, a neural network extracts features from the facial image. At the second stage, three networks embed asymmetrical and complex facial features. This framework extracts facial embeddings that can be fed into a matcher deciding if two faces came from the same source or from different sources.
Feature matching is the matching of feature vectors extracted from two different images. The matching may be performed in various ways, for example by applying a distance measure to the two vectors and thresholding the result.
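For example, with FaceNet-style embeddings, matching often reduces to a distance threshold. The sketch below is illustrative; the threshold values and the synthetic vectors are assumptions.

```python
import numpy as np

# Sketch of distance-based matching of two embedding vectors (e.g.
# FaceNet-style 128-D face embeddings). Thresholds are illustrative.

def euclidean_match(a, b, threshold=1.1):
    return np.linalg.norm(a - b) < threshold

def cosine_match(a, b, threshold=0.6):
    sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return sim > threshold

rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)
probe = enrolled + rng.normal(scale=0.05, size=128)   # same person, slight noise
print(euclidean_match(enrolled, probe), cosine_match(enrolled, probe))  # True True
```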
Some advantages of Face Recognition include:
Disadvantages of Face Recognition include:
2. Fingerprint Recognition
A fingerprint, in its narrow sense, is an impression left by the friction ridges of a human finger. The fingerprint pattern is permanent and unchangeable. The probability that the fingerprints of two individuals are alike is approximately 1 in a quadrillion.
Most automatic systems for fingerprint matching are based on minutiae matching. Minutiae are classified by the type of ridge characteristic, principally ridge endings and ridge bifurcations.
In step 501, a fingerprint is captured; in step 502, pre-processing occurs; features are extracted in step 503; and fingerprint matching is performed in step 504, based on stored fingerprints in database 505.
Two main technologies may be used to capture an image of the fingerprint.
Pre-processing prepares the image to facilitate further work with it. Pre-processing can include enhancement of the image, binarization of the image, finding the region of interest (ROI), thinning of the fingerprint image, detecting the core point, and minutiae extraction.
Thinning of the fingerprint image: Generally, the gray values of pixels of ridges in the fingerprint image gradually decrease going from an edge towards the center of the ridge line, then increase again going towards the other edge. This matches the definition of a local minimum. The idea is to capture this local-minimum line so as to convert a ridge of several (e.g., 5) pixels wide into one pixel wide.
Core point detection: The core (or singular) point of a fingerprint is defined as “the point of the maximum curvature on the convex ridge”, which is usually located in the central area of fingerprint. The reliable detection of the position of a reference point can be accomplished by detecting the maximum curvature.
Minutiae extraction: Most fingerprint minutia extraction methods are thinning based where the skeletonization process converts each ridge to one pixel wide. Minutia points are detected by locating the end points and bifurcation points on the thinned ridge skeleton based on the number of neighboring pixels. The end points are selected if they have a single neighbor and the bifurcation points are selected if they have more than two neighbors.
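A minimal sketch of this selection rule, implemented with the crossing-number variant (counting 0-to-1 transitions around each skeleton pixel), which is more robust at diagonal junctions than a raw neighbor count; the test pattern below is synthetic.

```python
import numpy as np

# Minimal crossing-number minutiae detector on a thinned (one-pixel-wide)
# ridge skeleton. CN == 1 marks a ridge ending, CN == 3 a bifurcation.

def crossing_number(skel, y, x):
    ring = [skel[y-1, x-1], skel[y-1, x], skel[y-1, x+1], skel[y, x+1],
            skel[y+1, x+1], skel[y+1, x], skel[y+1, x-1], skel[y, x-1]]
    return sum(ring[i] != ring[(i + 1) % 8] for i in range(8)) // 2

def find_minutiae(skeleton):
    minutiae = []
    for y in range(1, skeleton.shape[0] - 1):
        for x in range(1, skeleton.shape[1] - 1):
            if skeleton[y, x]:
                cn = crossing_number(skeleton, y, x)
                if cn == 1:
                    minutiae.append((x, y, "ending"))
                elif cn == 3:
                    minutiae.append((x, y, "bifurcation"))
    return minutiae

skeleton = np.zeros((7, 7), dtype=bool)
skeleton[3, 1:6] = True   # horizontal ridge
skeleton[1:3, 3] = True   # vertical branch joining it at (3, 3)
print(find_minutiae(skeleton))
# [(3, 1, 'ending'), (1, 3, 'ending'), (3, 3, 'bifurcation'), (5, 3, 'ending')]
```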
A feature is a piece of information that describes a specific part of an input image. In feature extraction, features are extracted that create a unique ID code for each fingerprint. The extracted features are used in the final matching stage to perform fingerprint recognition.
The features of the fingerprint will be represented by the number of minutiae of each type within a specific distance from the core point. This is achieved by dividing the fingerprint image into concentric tracks around the core point.
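A sketch of the concentric-track feature vector just described: minutiae of each type are counted per ring around the core point. The track width, track count, and sample minutiae are illustrative assumptions.

```python
import numpy as np

# Sketch of the concentric-track fingerprint feature: count minutiae of each
# type (ending / bifurcation) in rings around the core point.

def track_features(minutiae, core, n_tracks=5, track_width=20):
    """minutiae: list of (x, y, type) with type in {'ending', 'bifurcation'}."""
    types = ("ending", "bifurcation")
    feats = np.zeros((n_tracks, len(types)), dtype=int)
    for x, y, t in minutiae:
        r = np.hypot(x - core[0], y - core[1])
        track = int(r // track_width)
        if track < n_tracks:
            feats[track, types.index(t)] += 1
    return feats.ravel()          # per-fingerprint ID code

minutiae = [(110, 95, "ending"), (130, 140, "bifurcation"), (60, 60, "ending")]
print(track_features(minutiae, core=(100, 100)))  # [1 0 0 0 1 1 0 0 0 0]
```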
Fingerprint matching is the process used to determine whether two sets of fingerprint features come from the same finger. One set of features is stored in the database and the other is computed from the acquired fingerprint image.
The matching may be performed in various ways, for example by comparing the two extracted feature vectors.
Advantages of fingerprint recognition include that it is a very fast technology and that the probability of two individuals' fingerprints being alike is approximately 1 in a quadrillion. The disadvantages include that a fingerprint scanner is required, and that acidity can change a fingerprint.
3. Speaker Biometrics
Speaker recognition is the identification of a person from the characteristics of his or her voice. It is also called voice recognition. There is a difference between speaker recognition (recognizing who is speaking) and speech recognition (recognizing what is being said). These two terms are frequently confused, and "voice recognition" can be used for both.
Speaker verification may be performed text-dependently (a unique pass phrase for a user) or text-independently (the user is verified based on the voice print alone, independently of what is said). This section focuses solely on text-independent methods, as we see these methods as the future of the field. Moreover, using text-independent recognition, one may add a second layer of pass-phrase matching to convert it into a text-dependent method.
Voice Activity Detection is the process in which voiced segments are extracted from the entire speech signal. Speaker analysis should preferably be performed only on segments recognized as speech, as silent segments are shared amongst all speech signals.
Most speaker verification techniques are based on features called Mel-Frequency-Cepstral-Coefficients (MFCC). MFCC is a representation of the short-term power spectrum of a sound, based on a linear cosine transform of a log power spectrum on a nonlinear mel scale of frequency. The MFCC features are extracted directly from the audio signal after several pre-processing steps that may vary as a result of different classification algorithms.
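For illustration, MFCC features can be extracted with an off-the-shelf audio library such as librosa (an assumption about tooling; the specification names no library, and the file name and parameters below are placeholders).

```python
# Sketch of MFCC extraction with librosa; "sample.wav" is a placeholder.
# Assumes `pip install librosa`.
import numpy as np
import librosa

y, sr = librosa.load("sample.wav", sr=16000)          # audio resampled to 16 kHz
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)    # shape: (20, n_frames)

# One simple per-utterance representation: mean and variance per coefficient.
features = np.concatenate([mfcc.mean(axis=1), mfcc.var(axis=1)])
print(features.shape)   # (40,)
```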
In one embodiment, two overall processes may be used for speaker biometrics—enrollment 600 and verification/recognition 610.
In step 601 of the enrollment process, the speech biometrics of a user to be enrolled may be presented. Features of the speech may be extracted in step 602, and a model may be trained (as will be described in further detail below) in step 603. A voiceprint is created and stored in step 604.
In the verification/recognition process 610, a person's speech biometrics may be presented in step 611, features are extracted in step 612, and in step 613, the extracted features are compared to the voiceprints (stored in step 604), resulting in a decision—match (614) or no match (615).
Again, most speaker verification techniques are based on MFCC features, extracted directly from the audio signal after several pre-processing steps that may vary between classification algorithms.
More recent approaches make use of various deep learning algorithms. The learning portion of these methods attempts to find the best possible features for discriminating between different speakers. These algorithms provide an "extractor" to be used in extracting features from new, unseen utterances.
In one embodiment of a speaker verification system, the system is built around the likelihood-ratio test for verification, using simple but effective GMMs for the likelihood functions, a universal background model (UBM) for the alternative speaker representation, and a form of Bayesian adaptation to derive speaker models from the UBM. This method requires an initial learning step in which the UBM is created. This model is meant to capture the total variance of all possible speakers; the result is a model of the distribution of the background speaker population.
At the verification phase, an utterance is tested for the probability of it belonging to a set of speaker features or of it belonging to the UBM, and a decision is made based on the likelihood ratio. More recent approaches use SVMs (support vector machines) and deep networks for binary classification for the task of verification.
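A minimal sketch of the GMM/UBM likelihood-ratio test using scikit-learn, with synthetic stand-in features; for brevity the speaker model is trained directly on the speaker's data rather than Bayesian-adapted from the UBM as in the full method, and the decision threshold is an illustrative assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-ins for MFCC feature frames (20-D per frame).
rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.5, size=(2000, 20))
speaker = rng.normal(0.8, 1.0, size=(300, 20))

ubm = GaussianMixture(n_components=8, random_state=0).fit(background)
spk = GaussianMixture(n_components=4, random_state=0).fit(speaker)

def verify(utterance_frames, threshold=0.0):
    # Average log-likelihood ratio: speaker model vs. universal background model.
    llr = spk.score(utterance_frames) - ubm.score(utterance_frames)
    return llr > threshold

print(verify(rng.normal(0.8, 1.0, size=(50, 20))))   # same-speaker test: True
print(verify(rng.normal(-1.0, 1.5, size=(50, 20))))  # impostor test: False
```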
4. Liveness Checks
A liveness check is a method that verifies that a real person is attempting biometric recognition in order to enter the device. A liveness check method is used in addition to a biometric recognition method (such as face recognition, iris recognition or voice recognition). Liveness detection reduces the likelihood that spoofing attempts will succeed, and as such reduces the false acceptance rate. Examples of liveness checks are described below.
Pulse recognition—Pulse may be extracted from video, for example using known techniques.
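One such known technique is remote photoplethysmography: average the green channel over the face region in each frame, band-pass filter the resulting signal to plausible heart-rate frequencies, and take the dominant spectral peak. A minimal sketch (face cropping and a fixed frame rate are assumed handled upstream; the synthetic input is only for the self-check):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_bpm(frames, fps=30.0):
    """frames: array of shape (n_frames, h, w, 3), RGB, face-cropped."""
    green = frames[..., 1].mean(axis=(1, 2))          # one sample per frame
    green = green - green.mean()
    low, high = 0.7, 4.0                              # 42-240 bpm band
    b, a = butter(3, [low / (fps / 2), high / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, green)
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    return 60.0 * freqs[np.argmax(spectrum)]          # dominant peak, in bpm

# Synthetic check: a 72-bpm (1.2 Hz) brightness oscillation over 10 seconds.
t = np.arange(300) / 30.0
frames = 128 + 2 * np.sin(2 * np.pi * 1.2 * t)[:, None, None, None] * np.ones((1, 8, 8, 3))
print(round(estimate_bpm(frames)))   # ~72
```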
Advantages of pulse recognition include:
Disadvantages of pulse recognition include:
Blinking—a person may be asked to blink and then a camera may be used to recognize when a person is blinking.
Advantages of blinking recognition include:
Disadvantage of blinking recognition include:
Voice recognition—ask a person to say one of several random sequences that appear in a database, and then match the recorded voice pattern with the voice pattern that appears in the database.
Advantages of voice recognition include:
Disadvantages of voice recognition include:
5. Iris Recognition
The iris is a thin circular diaphragm, which lies between the cornea and the lens of the human eye. It is perforated close to its center by a circular aperture known as the pupil. The function of the iris is to control the amount of light entering the eye by controlling the diameter and size of the pupil. The eye color is defined by that of the iris.
In the enrollment stage 800, image acquisition 801 deals with capturing a sequence of iris images from the subject using cameras, desirably with sensors that have high resolution and good sharpness. Today it is possible to do so using a front smartphone camera or a modern camera, as examples.
To perform iris recognition, the iris needs to be at least 70 pixels in size, in one embodiment. To achieve an iris size of at least 70 pixels, the maximum distance that the phone can be placed from the eyes is around 50 centimeters. The iris needs to be clearly visible in the image, in one embodiment.
Referring back to
Iris segmentation may be performed in various ways, as described in further detail below.
Daugman's algorithm method for iris recognition: a biometric approach. The iris region can be approximated by two circles, one for the iris/sclera boundary and another, interior to the first, for the iris/pupil boundary. To find the two circles used for approximating the iris region, a combination of circular and linear Hough transforms can be used. The use of the two transforms yields a valid approximation of the iris in the image.
Deep Learning-Based Iris Segmentation for Iris Recognition in Visible Light Environment. This method uses a two-stage iris segmentation approach based on convolutional neural networks (CNNs), which is capable of robustly finding the true iris boundary in difficult cases with limited user cooperation. The first stage is pre-processing, which includes bottom-hat filtering, noise removal, a Canny edge detector, contrast enhancement, and a modified Hough transform to approximate the iris boundary. The second stage is a deep CNN that takes a fixed-size input image and fits the true iris boundary. This second stage is applied only to a region of interest (ROI) defined by the approximate iris boundary detected in the first stage, which reduces the processing time and the error of iris segmentation. The result of the segmentation stage can be given to iris feature extraction for future matching.
After detecting the bounded iris in the segmentation stage (803), a normalization stage 804 may be performed, in which the bounded iris is converted to an accepted iris template. This is needed, in one embodiment, for future matching 807 between the iris template and iris templates from database 806. Typically, the process is a conversion from Cartesian to non-concentric polar representation of the iris template.
Daugman's Rubber Sheet Model. Daugman's rubber sheet model ensures the proper handling of the particular geometry of the iris. This model converts the iris area from a Cartesian representation to a polar representation, mapping each pixel in the iris area to a pair of polar coordinates (r, θ), where r and θ are on the intervals [0, 1] and [0, 2π] respectively. It accounts for size inconsistencies and pupil dilation of the iris area, but does not compensate for rotational inconsistencies between templates. The output of this stage is an iris template with a polar representation that is consistent with the template sizes in the database.
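A minimal numpy sketch of the rubber-sheet mapping: the annulus between the (possibly non-concentric) pupil and iris circles is sampled onto a fixed (r, θ) grid. The circle parameters are assumed to come from the segmentation stage; the template size and nearest-neighbor sampling are illustrative simplifications.

```python
import numpy as np

def rubber_sheet(image, pupil, iris, n_r=64, n_theta=256):
    """pupil, iris: (cx, cy, radius). Returns an (n_r, n_theta) polar template."""
    r = np.linspace(0, 1, n_r)[:, None]            # radial coordinate in [0, 1]
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)[None, :]
    # Boundary points on the pupil and iris circles for every angle.
    x_p = pupil[0] + pupil[2] * np.cos(theta)
    y_p = pupil[1] + pupil[2] * np.sin(theta)
    x_i = iris[0] + iris[2] * np.cos(theta)
    y_i = iris[1] + iris[2] * np.sin(theta)
    # Linear interpolation between the two circles, then nearest-pixel lookup.
    x = ((1 - r) * x_p + r * x_i).round().astype(int).clip(0, image.shape[1] - 1)
    y = ((1 - r) * y_p + r * y_i).round().astype(int).clip(0, image.shape[0] - 1)
    return image[y, x]

eye = np.random.default_rng(0).integers(0, 256, size=(240, 320))  # stand-in image
template = rubber_sheet(eye, pupil=(160, 120, 30), iris=(162, 121, 80))
print(template.shape)   # (64, 256)
```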
Image Registration, proposed by Wildes et al. Wildes proposed an image registration technique for normalizing iris textures. A newly acquired image is aligned with an image in the database, and a comparison is performed. The alignment process is a transformation using a choice of mapping function, and it compensates for rotation and scale variations. It must be noted that this normalization is performed at matching time.
A feature is a piece of information that describes a specific part of an input image. In the feature extraction stage 805, features are extracted that create a unique ID code for each normalized iris representation. The extracted features are used in the final matching stage 807 to perform iris recognition (with a result of match 808 or no match 809).
To use the iris recognition of
Gabor Filters: To extract features from an iris pattern in polar representation, a demodulation process may be used. Local regions of the iris are projected onto quadrature 2-D Gabor wavelets, generating complex-valued coefficients whose real and imaginary parts specify the coordinates of a phasor in the complex plane. The angle of each phasor is quantized to one of the four quadrants, setting two bits of phase information. This process is repeated all across the iris with many wavelet sizes, frequencies, and orientations, so that a vector of 2048 phase bits (256 bytes) is computed for each iris. In one embodiment, only phase information is used for recognizing irises, because amplitude information is not very discriminating.
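A minimal sketch of this phase-quantization idea using a single quadrature Gabor kernel over a normalized template; a full implementation sweeps many wavelet sizes, frequencies, and orientations to reach 2048 bits, and the kernel parameters below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=9, wavelength=6.0, sigma=3.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    carrier = np.exp(1j * 2 * np.pi * x / wavelength)      # complex sinusoid
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))     # Gaussian window
    return carrier * envelope

def iris_code(template):
    response = convolve2d(template.astype(float), gabor_kernel(), mode="valid")
    # Two phase bits per coefficient: the signs of the real and imaginary parts
    # select one of the four phasor quadrants.
    return np.stack([response.real > 0, response.imag > 0]).ravel()

template = np.random.default_rng(1).integers(0, 256, size=(64, 256))  # stand-in
code = iris_code(template)
print(code.size, code.dtype)   # number of phase bits, bool
```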
Iris Recognition with Off-the-Shelf CNN Features: A Deep Learning Perspective. Training a new deep network requires a large amount of data, and suitable iris databases may be proprietary, still small, or not yet created. Therefore, this method suggests using one of the best-known deep networks, such as AlexNet, VGG, GoogLeNet and Inception, ResNet or DenseNet. These deep networks are already trained on huge databases with a large number of classes, and are designed to recognize visual patterns directly from pixel images with minimal preprocessing. To achieve performance close to human recognition performance, these deep networks extract unique features that help to later recognize a detected object and classify it into the class with similar features. This method therefore suggests using the aforementioned deep networks up to, but not including, the classification stage, namely up to the feature extraction stage. The extracted iris features may then be used for matching against features in the database.
A multi-biometric iris recognition system based on a deep learning approach. This method is a real-time multimodal biometric system called IrisConvNet. Its architecture is based on a combination of a Convolutional Neural Network (CNN) and a Softmax classifier to extract discriminative features from the input images of both the right and left irises of a person without any domain knowledge. The input image represents the localized iris region, which is then classified into one of N classes by fusing the results obtained using a ranking-level fusion method.
Feature matching means finding corresponding features from two similar feature vectors based on a search distance. In one embodiment, one of the feature vectors is named the source and the other the target. The feature matching process analyzes the similarity of source and target using a suitable method. The accuracy of feature matching depends on data similarity, complexity, and quality. In general, the more similar the two datasets, the better the matching results.
Hamming distance, employed by Daugman. The Hamming distance uses XOR to measure how many bits disagree between two templates. To achieve invariance to rotation when calculating the Hamming distance of two templates, one template is shifted left and right bit-wise, and a Hamming distance value is calculated for each successive shift. The actual number of shifts required to normalize for rotation is determined by the maximum angle difference between two images of the same eye. One shift is defined as one shift to the left, followed by one shift to the right.
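A minimal sketch of this shift-tolerant Hamming matching on boolean iris codes. For simplicity the codes are treated as 1-D bit vectors; a full implementation shifts along the angular axis of the 2-D template, and the shift range, noise levels, and decision threshold here are illustrative assumptions.

```python
import numpy as np

def hamming_distance(code_a, code_b, max_shift=8):
    """Minimum normalized XOR distance over circular bit shifts."""
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        best = min(best, np.mean(code_a ^ np.roll(code_b, s)))
    return best

rng = np.random.default_rng(2)
enrolled = rng.random(2048) > 0.5                 # stand-in 2048-bit iris code
same = np.roll(enrolled, 3)                       # same iris, rotated
same[rng.random(2048) < 0.05] ^= True             # plus 5% bit noise
different = rng.random(2048) > 0.5                # unrelated iris

print(hamming_distance(enrolled, same))        # ~0.05 -> match
print(hamming_distance(enrolled, different))   # ~0.47 -> no match
```

Unrelated codes agree on about half their bits, so their normalized distance hovers near 0.5, while genuine pairs fall far below it; a fixed threshold between the two separates matches from non-matches.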
Deep neural networks for an iris recognition system based on video: stacked sparse auto-encoder (SSAE) and bi-propagation neural network models. For iris feature matching, this method describes two different algorithms: the first is a Stacked Sparse Auto-Encoder (SSAE) deep neural network model, and the second is a bi-propagation deep neural network, described further below.
Advantages of iris recognition include:
Disadvantages of iris recognition include:
The present invention may allow for various functions and embodiments, such as the following three functions, each of which can be implemented in a variety of ways.
Informally, we can refer to the three overall functions as:
To detect whether an authorized user is attempting to view the screen (
To use an application (1300), in one embodiment a user may log in to an existing account (1301) or register a new one (1302). If the user logs in to the application, the Cloud loads all necessary data for user identification (1303). If the user chooses to create an account, he or she may go through the enrollment phase beginning at step 1304.
Enrollment phase 1305: Application asks user to define secure profiles:
At step 1306, user may choose secure profile mode (automatic 1307 or manual 1308).
Proceeding to secure phase at step 1315:
Different documents typically need different levels of security. For this purpose, secure profiles can be used. They will determine the security level of each document and of all secure environments; for example, in a public place the security level may be the highest, whereas at home it may be the lowest.
Table 1 below provides an example of different secure profile levels which may be used with the present invention. This is just one example—these secure profiles may be implemented in a variety of ways.
TABLE 1

Secure Profile | Secure methods | Effect on
Secure Level 1 | 1. Face Recognition | Open application
Secure Level 2 | 1. Face Recognition; 2. Iris Recognition | Open application; Open edit documents
Secure Level 3 | 1. Face Recognition; 2. Iris Recognition; 3. Continue Identification | Open application; Open edit documents; Share Document
Secure Level 4 | 1. Face Recognition; 2. Iris Recognition; 3. Liveness Check; 4. Continue Identification; 5. Finger Recognition | Open application; Open edit documents; Share Document
Secure Level 5 | 1. Face Recognition; 2. Iris Recognition; 3. Liveness Check; 4. Continue Identification; 5. Finger Recognition; 6. Voice Recognition | Open application; Open edit documents; Share Document
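For illustration, Table 1 might be consulted programmatically as a simple lookup by a policy engine; the structure below is an assumption about representation, with the contents taken from the table.

```python
# Sketch: Table 1 as a policy lookup. The dictionary structure is an
# assumption; the method lists come from the table above.
SECURE_PROFILES = {
    1: ("Face Recognition",),
    2: ("Face Recognition", "Iris Recognition"),
    3: ("Face Recognition", "Iris Recognition", "Continue Identification"),
    4: ("Face Recognition", "Iris Recognition", "Liveness Check",
        "Continue Identification", "Finger Recognition"),
    5: ("Face Recognition", "Iris Recognition", "Liveness Check",
        "Continue Identification", "Finger Recognition", "Voice Recognition"),
}

def required_methods(level):
    """Methods that must pass before the document action is allowed."""
    return SECURE_PROFILES[max(1, min(level, 5))]

print(required_methods(4))
```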
When a user opens an application (1401), the present invention asks for identification in accordance with the activated secure profile, and proceeds based on whether identification succeeds (1402) or not (1403). When a user tries to open a document (step 1404), the present invention again asks for identification per the activated secure profile, succeeding (1405) or not (1406). Similarly, when a user tries to share a file (step 1410), the present invention asks for identification per the activated secure profile, either succeeding (1411, in which case the file is shared in step 1413) or not (1412). While the user reads/edits a document (step 1407), continuous identification (1408) is performed according to the activated secure profile, maintaining confidentiality (1409).
While the user reads/edits a document, the user can pause the secure process by clicking on a button, in order to show the document to a non-registered person. While a user reads/edits a document, the system checks that nobody else is looking at the screen, per the activated secure profile. If an additional person is looking at the screen, the present invention performs the following steps:
When the user finishes reading/editing the document and closes the application, all secure processes are stopped, to give the device better performance. In one embodiment, the document may be encoded before sharing, and decoding can be performed using the same secure application.
The above-described steps describe the present invention as performed in “online mode”.
For offline mode: the user can download documents to the device and continue to work with them when a network connection (Ethernet, WiFi, etc.) is not available.
However, in one embodiment, the most secure documents cannot be downloaded to the device in this situation. Also, in offline mode, and in one embodiment, not all biometric recognition methods will be available.
After a network connection is available once again, the downloaded documents may be merged with documents in the Cloud, and all secured documents will be erased.
Implementation Details
The present invention provides security of documents during storage, and also security while reading or editing the document. The security of the documents may be provided through an application/software. As described previously, at the opening of the application/software the user will have to pass a primary person identification.
Primary person identification may be based on all (or a subset) of the biometric identification technologies described previously, such as face recognition, fingerprint recognition, voice recognition and iris recognition. Moreover, primary person identification may depend on the secure profile level: a higher secure profile level may require more methods of identification of a person's identity. In one embodiment, the identification method may be chosen randomly each time the user tries to log in to the application. To achieve better security, in addition to the primary person identification, a liveness check may also be applied depending on the secure profile level.
For a secure environment, a user may be required to register with the application, performing an enrollment phase (as described previously) that includes: enrollment of the iris for iris recognition, enrollment of the face for face recognition, enrollment of speech for voice recognition, and enrollment of a finger for fingerprint recognition. The enrollment phase may be necessary even if the device already has some biometric signature, given the need to obtain the most up-to-date biometric signature.
Additional personality recognition may be performed by device verification, such as by:
As described previously, a secure profile associated with each document may be used to determine the security level for the documents, and for all secure environments. The user can change existing secure profiles as needed, and will be able to determine, for each individual document, the level of protection and the methods for recognizing the user. For each document, biometric personality recognition may be used, depending on the security level of the document.
During the viewing and editing of the document, continuous identification may be performed, based, for example, on iris recognition technology, to verify that the authorized person is still working with the device. A continuous identification process may run at all times in the background; for example, iris recognition may be performed periodically (such as every 10 seconds, or another suitable interval). Having the iris recognition performed only periodically balances effectiveness and performance. For the iris recognition to perform properly, the person's eyes must be open. If the person blinks, iris detection may fail to detect the person's iris. In this case, backup identification based on another biometric parameter, such as face recognition, may be performed to verify that an authorized person is still working with the device.
Additionally, during the viewing and editing of the document, face detection technology may be used to verify that only an authorized person is looking at the device screen. This process may run at all times (or periodically) in the background, search for additional faces that enter the camera range, and then check how long each extra face stays in the camera range. If the face is in the camera range less than, for example, 2 seconds, then no action may be taken, in one embodiment. Otherwise, the present invention may show a warning on the screen that notifies the owner that someone else is looking at his phone or device. If, within, for example, a 10-second period after the message appears, no response is received from the owner, then the device screen will automatically turn off. Additionally, when a warning appears, the owner will have the option to pause "confidentiality" and "continuous identification" to show the document to a companion. The user will be able to change the delay and related options.
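A sketch of the background confidentiality loop just described, using the example 2-second grace period and 10-second response window; count_faces(), show_warning(), owner_responded() and turn_off_screen() are hypothetical hooks into the face-detection and UI layers, not APIs named by the specification.

```python
import time

GRACE_SECONDS = 2       # ignore an extra face that only glances at the screen
RESPONSE_SECONDS = 10   # time the owner has to respond to the warning

def confidentiality_loop(count_faces, show_warning, owner_responded, turn_off_screen):
    extra_face_since = warned_at = None
    while True:
        now = time.monotonic()
        if count_faces() > 1:
            extra_face_since = extra_face_since or now
            if warned_at is None and now - extra_face_since >= GRACE_SECONDS:
                show_warning("Someone else is looking at your screen")
                warned_at = now
            elif warned_at is not None and now - warned_at >= RESPONSE_SECONDS:
                if owner_responded():       # e.g. owner chose Pause Security
                    warned_at = None
                else:
                    turn_off_screen()
                    return
        else:
            extra_face_since = warned_at = None   # breach cleared; reset timers
        time.sleep(0.5)   # poll the detector twice per second
```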
When a second person is detected in the camera's field of view, the present invention may also try to identify this second person using, for example, face recognition. If it turns out that this second user already has permission to see the document (the owner defined him in the enrollment phase), then the alert may automatically be removed. Otherwise, the screen may be turned off if there is no response from the owner's side.
Pause Security is an option to allow the pause of the “confidentiality” and “continuous identification” feature—for example, if an authorized user wants to show the document to a companion. The Pause Security option can be enabled when the system detects an additional face in the camera range or when the owner enables it from the settings. In one embodiment, and for additional security, to enable Pause Security the user may need to perform one random recognition.
To ensure full security on all platforms, all documents and all biometric vectors may be stored in the cloud, in one embodiment. This means that in this embodiment the full engine is based in the cloud, to prevent identity theft from the device. For situations where access to the Internet is not available, an offline mode may be turned on. In order to be able to continue working offline, it may be necessary to download the document and the biometric vector to a local disk before offline mode is activated.
In offline mode, the present invention is able to determine which documents can be viewed or edited. In one embodiment, an option may be included whereby especially important documents cannot be downloaded to the device. This option may exist because offline mode offers only limited possibilities for biometric identification, and because the device may not have all the hardware necessary to support all secure options. In both offline and online modes, the continuous and backup identification functions may be available.
After working with documents offline, and thereafter going back online, the downloaded document may be synchronized with the document in the cloud, and after synchronization is complete, documents and biometric vectors may be erased automatically from the local disk.
In one embodiment, if the security level of the downloaded document is 0, then it can be stored on the device for an unlimited time, but after X time the biometric vector will be erased automatically from the local disk. In one embodiment, the user must set X before enabling offline mode.
Additional possible features are described below.
The user may be able to create/select a folder and define it as a secure zone for downloaded documents and biometric vectors.
The user can add applications (for example eMail/word/pdf) to a secure environment.
Different types of information about the document may be stored in the cloud, for example: the date the document was edited, the name of the last editor, etc.
Document sharing may be possible, but only through the application, which means that the receiving party must also be authorized in the application/software and be able to open the document with its biometric identification.
In addition to the present invention, which provides "confidentiality" for documents, several hardware solutions are available, such as a screen protector that narrows the viewing angle, or a polarized screen protector whereby the user wears glasses.
In addition to the biometric technologies described herein, behavioral profiling may also be used as a passive method of owner recognition.
Overall Architecture of the Present Invention
The present invention may be implemented on a variety of computing and communications technology platform configurations. In one embodiment, a typical architecture may be as depicted below, with respect to
B2C Implementation of Present Invention
An app developed in accordance with the teachings of the present invention may be developed using appropriate iOS, Android or other types of tools. If iOS or Android, the resulting app may appear in the iOS and/or Android store as a B2C app and will function accordingly:
In the B2C app, in one embodiment, the system may use a cloud service (such as the Amazon AWS cloud) to store all of its data.
Operation of the Present Invention in One Embodiment
While the present invention may be implemented in a variety of ways, sample screenshots in one embodiment from, for example, a smartphone operating in accordance with the present invention are depicted in
For example:
Level 1: One biometric, push notification
Level 2: Two biometrics, and a pattern swipe
Level 3: Three biometrics
Level 4: Four biometrics
FIG. 16AAA. This is the screen for the introduction on setting up iris recognition.
FIG. 16BBB. This is where the user sets up the iris recognition.
FIG. 16CCC. This is where it shows that iris recognition is successfully set up.
FIG. 16DDD. This is the introduction to setting up facial recognition.
FIG. 16EEE. This is where the user sets up the facial recognition.
FIG. 16FFF. This is where it shows that facial recognition is successfully set up.
FIG. 16GGG. This is the introduction of setting up swipe pattern.
FIG. 16HHH. This is where the user has set up the swipe pattern.
FIG. 16III. This is where the swipe pattern is confirmed.
FIG. 16JJJ. This is the introduction for setting up voice recognition.
FIG. 16KKK. This is where the user adds their voice recognition.
FIG. 16LLL. This is where the user has successfully set up the voice recognition.
B2B Implementation of the Present Invention, and Administration of Same
The app of the present invention may have an admin panel, for example for business clients. The admin panel may include a variety of features that will help security professionals who manage the company's documents have oversight of documents and files.
Below is a list of features that may in various embodiments be included in the admin panel:
The Admin may optionally receive notifications, as described below:
Additionally, the below describes some security features that may be implemented:
It will be apparent to persons skilled in the relevant fields that various modules and features of the present disclosure, as described herein, can be implemented in hardware using analog and/or digital circuits, in software, through the execution of computer instructions by one or more general purpose or special-purpose processors, or as a combination of hardware and software.
Embodiments of the present disclosure can be implemented in hardware, or as a combination of software and hardware. Consequently, embodiments of the disclosure may be implemented in the environment of a computer system or other processing system.
Suitable development platforms may be used to implement the various features of the present invention, whether implemented on a server, on the client side (e.g., as an app on a mobile device), or the like. Those skilled in the art will be familiar with such development platforms.
In another embodiment, features of the present invention may be implemented in hardware using, for example, hardware components such as application-specific integrated circuits (ASICs) and gate arrays, or the like. Implementation of a hardware state machine so as to perform the functions described herein will also be apparent to persons skilled in the relevant art(s).