A method including receiving an unknown vector including a data structure populated with unknown features describing a user. The method also includes executing a primary machine learning model (MLM) trained using a prediction data set to predict a score representing a prediction regarding the user. The prediction data set includes the unknown vector stripped of a biased data set including markers that directly indicate that the user belongs to a cohort against which bias is to be avoided. The method also includes executing a supervisory MLM trained using the prediction data set to predict whether the user belongs to the cohort. The method also includes performing, using an industry tool, a computer-implemented action using the score after executing the primary MLM and the supervisory MLM.
1. A system comprising:
a computer processor;
a non-transitory computer readable storage medium in communication with the computer processor and storing:
training data, wherein the training data comprises a plurality of features arranged as a first vector for input into a primary machine learning model (mlm) and a supervisory mlm, the plurality of features corresponding to information describing a plurality of users, and
a biased data set, wherein the biased data set comprises a subset of the plurality of features belonging to a cohort against which bias is to be avoided, the subset of the plurality of features including markers useable by the primary mlm to make a prediction of credit worthiness, of a user of the plurality of users who belongs to the cohort, on a basis of bias against the cohort;
a stripping utility which, when executed by the computer processor, is configured to strip the biased data set from the training data to form a first modified training data;
the primary mlm, stored on the non-transitory computer readable storage medium, and trained with the first modified training data to predict credit worthiness scores of the plurality of users;
a supervisory mlm, stored on the non-transitory computer readable storage medium, and trained with the first modified training data to predict which of the plurality of users belongs to the cohort;
a transform mlm, stored on the non-transitory computer readable storage medium, and trained, responsive to the supervisory mlm predicting at least some of the plurality of users belong to the cohort, to transform the first modified training data into a remediated data vector; and
a machine learning system which, when executed by the computer processor, is configured to re-train the primary mlm using the remediated data vector.
9. A computer-implemented method, comprising:
receiving, at a computer processor, training data, wherein the training data comprises a plurality of features arranged as a first vector for input into a primary machine learning model (mlm) and a supervisory mlm, the plurality of features corresponding to information describing a plurality of users;
stripping, by the computer processor, the training data of overt markers and known proxy markers, the overt markers and known proxy markers corresponding to a cohort against which bias is to be avoided, and forming a first modified training data comprising a subset of the plurality of features;
training, by the computer processor, the primary mlm with the first modified training data to predict credit worthiness scores of the plurality of users;
training, by the computer processor, the supervisory mlm with the first modified training data to predict which of the plurality of users belong to the cohort;
receiving, by the computer processor, a prediction data set from which the overt markers and known proxy markers have been stripped, the prediction data set comprising a second vector including sample data related to a user in the plurality of users;
executing, by the computer processor, the supervisory mlm using the prediction data set to predict whether the user belongs to the cohort, wherein as a result of executing, the supervisory mlm converges on the cohort;
responsive to the supervisory mlm converging on the cohort, determining, by the computer processor, that the primary mlm is biased against the cohort;
transforming, by the computer processor executing a transform mlm and responsive to determining that the primary machine learning model is biased against the cohort, the first vector into a remediated data vector;
remediating, by the computer processor, the primary mlm to form a remediated primary mlm, wherein remediating comprises re-training the primary mlm using the remediated data vector, and wherein the remediated primary mlm can no longer make predictions on a basis of the cohort;
predicting, using the remediated primary mlm, a credit worthiness score for the user; and
executing a finance tool and, responsive to the credit worthiness score exceeding a threshold, transmitting an offer to the user, the offer comprising a widget operable on a remote computer to indicate acceptance of the offer.
19. A method, executed using a computer, comprising:
receiving training data, wherein the training data comprises a plurality of features arranged as a first vector for input into a primary machine learning model (mlm) and a supervisory mlm, the plurality of features corresponding to information describing a plurality of users;
stripping the training data of overt markers and known proxy markers, the overt markers and known proxy markers corresponding to a cohort against which bias is to be avoided, and forming a first modified training data comprising a subset of the plurality of features;
training the primary mlm with the first modified training data to predict credit worthiness scores of the plurality of users;
training the supervisory mlm with the first modified training data to predict which of the plurality of users belong to the cohort;
receiving a prediction data set from which the overt markers and known proxy markers have been stripped, the prediction data set comprising a second vector including sample data related to a user in the plurality of users;
executing the supervisory mlm using the prediction data set to predict whether the user belongs to the cohort, wherein as a result of executing, the supervisory mlm converges on the cohort;
responsive to the supervisory mlm converging on the cohort, determining that the primary mlm is biased against the cohort;
responsive to the primary mlm being biased against the cohort, remediating the primary mlm to form a remediated primary mlm, wherein the remediated primary mlm can no longer make predictions on a basis of the cohort, wherein remediating comprises:
training a plurality of additional mlms to produce a residual vector containing only residual data in the first modified training data that results in the supervisory mlm, when executed, converging on the cohort, wherein:
training the plurality of additional mlms comprises training a discriminator mlm to predict whether an input vector “x” corresponds to real data or a fake vector and training a generator mlm to generate a fake vector,
the generator mlm and the discriminator mlm are programmed to oppose each other until the generator mlm creates a fake vector output which the discriminator mlm predicts is real, and
wherein the fake vector output comprises a first plurality of vectors corresponding to the cohort, represented by “z-min”, and a second plurality of vectors not corresponding to the cohort, represented by “z-maj”;
subtracting z-min from z-maj to create a third plurality of vectors, represented by “z-diff”;
mapping z-diff to the first vector using the generator mlm to generate the residual data between the cohort and users not in the cohort;
sampling the residual data to reconstruct a probability function that corresponds to probabilities of particular features within the first vector resulting in hidden bias when the primary mlm is executed using the first vector as input;
training a transform mlm, using the probability function, to transform the first vector into a remediated data vector which, when input to the supervisory mlm, causes the supervisory mlm to fail to converge when executed; and
executing the transform mlm using the first vector as input to produce the remediated data vector, wherein providing the remediated data vector as input to the primary mlm forms the remediated primary mlm.
2. The system of
a finance tool configured to transmit a loan offer to the user when both a first credit worthiness score of the user exceeds a threshold and also the supervisory mlm fails to converge on the cohort, wherein the loan offer comprises a widget operable by the user using a computer.
3. The system of
a plurality of additional mlms trained to produce a residual vector containing only residual data in the first modified training data that, when input to the supervisory mlm, causes the supervisory mlm to converge on the cohort;
wherein the stripping utility is further configured to strip the residual data from the first modified training data as part of forming the remediated data vector.
4. The system of
a discriminator mlm trained to predict whether an input vector “x” corresponds to real data or a fake vector; and
a generator mlm trained to generate a fake vector;
wherein the generator mlm and the discriminator mlm are programmed to oppose each other until the generator mlm creates a fake vector output which the discriminator mlm predicts is real.
5. The system of
a difference utility configured to subtract z-min from z-maj to create a third plurality of vectors, represented by “z-diff”.
6. The system of
the generator mlm is further configured to map z-diff to the first vector using the generator mlm to generate the residual data between the cohort and users not in the cohort;
the difference utility is further configured to sample the residual data to reconstruct a probability function that corresponds to probabilities of particular features within the first vector resulting in hidden bias when the primary mlm is executed using the first vector as input; and
the stripping utility is further configured to strip the particular features from the first vector.
7. The system of
the generator mlm is further configured to map z-diff to the first vector using the generator mlm to generate the residual data between the cohort and users not in the cohort; the difference utility is further configured to sample the residual data to reconstruct a probability function that corresponds to probabilities of particular features within the first vector resulting in hidden bias when the primary mlm is executed using the first vector as input; and the transform mlm is trained, using the probability function, to transform the first vector into a remediated data vector which, when input to the supervisory mlm, causes the supervisory mlm to fail to converge when executed by the computer processor.
8. The system of
10. The method of
training a plurality of additional mlms to produce a residual vector containing only residual data in the first modified training data that results in the supervisory mlm, when executed, converging on the cohort;
stripping the first modified training data of the residual data to form a second modified training data; and
retraining the primary mlm and the supervisory mlm using the second modified training data.
11. The method of
training a discriminator mlm to predict whether an input vector “x” corresponds to real data or a fake vector; and
training a generator mlm to generate a fake vector;
wherein the generator mlm and the discriminator mlm are programmed to oppose each other until the generator mlm creates a fake vector output which the discriminator mlm predicts is real.
12. The method of
subtracting z-min from z-maj to create a third plurality of vectors, represented by “z-diff”.
13. The method of
mapping z-diff to the first vector using the generator mlm to generate the residual data between the cohort and users not in the cohort;
sampling the residual data to reconstruct a probability function that corresponds to probabilities of particular features within the first vector resulting in hidden bias when the primary mlm is executed using the first vector as input; and
stripping the particular features from the first vector.
14. The method of
mapping z-diff to the first vector using the generator mlm to generate residual data between the cohort and users not in the cohort;
sampling the residual data to reconstruct a probability function that corresponds to probabilities of particular features within the first vector resulting in hidden bias when the primary mlm is executed using the first vector as input; and
training the transform mlm, using the probability function.
15. The method of
highlighting the residual data; and
displaying the residual data for review.
16. The method of
providing a plurality of additional sets of unknown features corresponding to a plurality of additional unknown users;
executing the primary mlm using the plurality of additional sets of unknown features to predict a plurality of credit worthiness scores for the plurality of additional unknown users; and
using the supervisory mlm and the generator mlm to reconstruct probability distribution functions in an input vector space that correspond to probabilities of certain features being significant or insignificant in determining hidden bias, in order to generate a statistical profile that characterizes potential biases of the primary mlm against the cohort.
17. The method of
executing the remediated primary mlm to determine a new credit worthiness score for the user; and
inputting the new credit worthiness score into a finance tool.
18. The method of
This application is related to U.S. application Ser. No. 16/360,368, filed on Mar. 21, 2019, and entitled “METHOD FOR TRACKING LACK OF BIAS OF DEEP LEARNING AI SYSTEMS”.
Artificial intelligence (AI) systems are increasingly used to perform business transactions, particularly in online business systems, such as enterprise systems. An enterprise system is a combination of hardware and software that supports network-centric business processes, information flows, reporting, and data analytics in complex organizations.
AI is the ability of a computer to perform tasks commonly associated with intelligent beings, such as to draw inferences from data. AI is often implemented in the form of machine learning. Machine learning is the computer science study of algorithms and statistical methods that computer systems use to progressively improve their performance on a specific task. Machine learning algorithms, sometimes referred to as machine learning models (MLMs), build a mathematical model of sample data, known as training data, in order to make predictions or decisions without being explicitly programmed to perform the specific task.
One kind of machine learning model is known as a deep learning model, such as but not limited to an artificial neural network. A known issue with deep learning models is that it is impossible for a human to know how a deep learning model arrives at a prediction or decision with respect to performing the specific task. In other words, a deep learning AI MLM may be thought of as a “black box,” to which input may be provided to achieve a desired output. While the output may be independently verified, to within a measurable degree of statistical probability, as being accurate or inaccurate, understanding how the deep learning MLM used the input to arrive at the output is more complicated.
In general, in one aspect, one or more embodiments relate to a method. The method includes receiving an unknown vector including a data structure populated with unknown features describing a user. The method also includes executing a primary machine learning model (MLM) trained using a prediction data set to predict a score representing a prediction regarding the user. The prediction data set includes the unknown vector stripped of a biased data set including markers that directly indicate that the user belongs to a cohort against which bias is to be avoided. The method also includes executing a supervisory MLM trained using the prediction data set to predict whether the user belongs to the cohort. The method also includes performing, using an industry tool, a computer-implemented action using the score after executing the primary MLM and the supervisory MLM.
One or more embodiments relate to a system. The system includes a repository storing training data and a biased data set. The training data includes features arranged as a first vector for input into a primary machine learning model (MLM) and a supervisory MLM, the features corresponding to information describing users. The biased data set includes a subset of the features belonging to a cohort against which bias is to be avoided. The subset of the features includes markers useable by the primary MLM to make a prediction of credit worthiness, of a user of the users who belongs to the cohort, on a basis of bias against the cohort. The system also includes a stripping utility including functionality for stripping the biased data set from the training data to form a first modified training data. The primary MLM is trained with the first modified training data to predict credit worthiness scores of the users. The supervisory MLM is trained with the first modified training data to predict which of the users belongs to the cohort.
One or more embodiments also include another method. The method includes receiving training data. The training data includes features arranged as a first vector for input into a primary machine learning model (MLM) and a supervisory MLM, the features corresponding to information describing users. The method also includes stripping the training data of overt markers and known proxy markers, the overt markers and known proxy markers corresponding to a cohort against which bias is to be avoided, and forming a first modified training data including a subset of the features. The method also includes training the primary MLM with the first modified training data to predict credit worthiness scores of the users. The method also includes training the supervisory MLM with the first modified training data to predict which of the users belong to the cohort. The method also includes receiving a prediction data set from which the overt markers and known proxy markers have been stripped, the prediction data set including a second vector including sample data related to a user in the users. The method also includes executing the supervisory MLM using the prediction data set to predict whether the user belongs to the cohort. The method also includes executing the primary MLM using the prediction data set to predict a credit worthiness score of the user. The method also includes, responsive to the supervisory MLM failing to converge on the cohort, inputting the credit worthiness score into a finance tool configured to determine whether a loan offer should be extended to the user. The loan offer includes a widget operable by the user using a computer. The method also includes, responsive to the supervisory MLM converging on the cohort, determining whether the primary MLM is biased against the cohort. The method also includes, responsive to the primary MLM being non-biased against the cohort, inputting the credit worthiness score into the finance tool. The method also includes, responsive to the primary MLM being biased against the cohort, remediating the primary MLM to form a remediated primary MLM. The method also includes executing the remediated primary MLM to determine a new credit worthiness score for the user. The method also includes inputting the new credit worthiness score into the finance tool.
Other aspects of the invention will be apparent from the following description and the appended claims.
Specific embodiments of the invention will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency.
In the following detailed description of embodiments of the invention, numerous specific details are set forth in order to provide a more thorough understanding of the invention. However, it will be apparent to one of ordinary skill in the art that the invention may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.
Throughout the application, ordinal numbers (e.g., first, second, third, etc.) may be used as an adjective for an element (i.e., any noun in the application). The use of ordinal numbers is not to imply or create any particular ordering of the elements nor to limit any element to being only a single element unless expressly disclosed, such as by the use of the terms “before”, “after”, “single”, and other such terminology. Rather, the use of ordinal numbers is to distinguish between the elements. By way of an example, a first element is distinct from a second element, and the first element may encompass more than one element and succeed (or precede) the second element in an ordering of elements.
Artificial Intelligence (AI) systems, particularly in the form of machine learning, are powerful computer tools for quickly arriving at determinations that have an acceptable degree of accuracy. For example, in the financial industry, AI systems rapidly determine an acceptably accurate probability that a loan applicant will default if a loan is made to the applicant, and such a determination helps inform the lender's final decision to extend a loan to the applicant. Similarly, in medical research, AI systems may be able to find correlations which are invisible to a user in a vast pool of medical data, and those correlations help inform a principal investigator regarding what might be causes for a medical condition.
The one or more embodiments described herein present a method and system for ensuring and verifying that deep learning AI systems do not output a result which has been arrived at on the basis of consideration of an impermissible cohort. Thus, for example, the one or more embodiments present a method and a system for verifying that an AI system did not use race, or other legally excluded consideration, as a factor in determining a probability of loan default for a loan applicant. The one or more embodiments may also provide a method and a system for verifying that another AI system did not use an excluded consideration as a factor in determining a cause for a medical condition.
In particular, the one or more embodiments use a system of deep learning machine learning models (MLMs) to confirm that the primary, decision-making MLM (i.e., the AI responsible for making the primary prediction) did not use the impermissible cohort in an impermissible manner in deriving its output. The one or more embodiments also provide for a means for characterizing the cause of such bias when the primary MLM did predict the output on the basis of the cohort. The one or more embodiments also provide for a way to eliminate such bias from future predictions of the primary MLM.
Attention is now turned to the figures. In particular,
A data repository (102) includes training data (104) and model data (106). The training data (104) is used to train one or more machine learning models of a machine learning system, such as those described herein. For example, training data may be a data structure (108), such as a “vector” which contains a series of data values (called “values” or “markers”) within data categories (called “features”).
In a finance application, the training data (104) may be a vector of features for a large number of “users” (individuals described by the data values), with the features describing information about the users. The information describes the loan histories of the users, such as whether the users defaulted on one or more prior loans. In a medical research application, the training data may be a vector of features for a large number of users, with the features relating to medical information about the users, including information describing the user's medical condition of interest.
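By way of a non-limiting illustration, the following is a minimal sketch of one possible in-memory layout for such a vector of features and markers; the feature names, marker values, and use of Python with numpy are assumptions made here for concreteness, not requirements of the embodiments.

```python
# Illustrative sketch only: one possible layout for training data (104).
# Feature names and marker values are hypothetical.
import numpy as np

feature_names = ["age", "income", "num_prior_loans", "num_prior_defaults"]

# One row (vector) per user; each entry is a marker within a feature.
X_train = np.array([
    [34.0, 52_000.0, 3.0, 1.0],
    [51.0, 87_000.0, 5.0, 0.0],
])

# One label per user: 1 = defaulted on a prior loan, 0 = repaid.
y_train = np.array([1, 0])
```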
The model data (106) is data provided to, or used by, a machine learning model (MLM) during use. Thus, after an MLM has been trained, the input data (110) is the data regarding the unknown user or users that is provided to the MLM. The output data (112) is the MLM's prediction or calculation that is output as a result of feeding the input data (110) to the MLM and executing the MLM.
In a finance application, the output data (112) may be a probability that the unknown user or users will default on a loan if a loan is accepted by the unknown user or users. In a medical research application, the output data (112) may be a correlation between features, so that the principal researcher can readily see that feature X is correlated to feature Y in some hidden way.
In one or more embodiments, the data repository (102) may be in communication with one or more computers over a network. For example, the data repository (102) may communicate over the network (114) with computer A (116) and computer B (118). An example of a repository, network, and a computer is provided with respect to
In one or more embodiments, computer A (116) is the computer upon which a machine learning system (120) is executed. The machine learning system (120) may include one or more machine learning models, such as machine learning model (122). In one or more embodiments, the MLMs of the machine learning system (120) are trained using the training data (104), receive as input the input data (110) of the model data (106), and generate as output the output data (112) of the model data (106).
In one or more embodiments, the machine learning system (120) is configured or programmed to determine whether an MLM of the machine learning system (120) used unacceptable bias to generate the output data (112). Details regarding how the machine learning system (120) is so configured or programmed are described below and, at least in some aspects, with respect to
Computer A (116) may also include a stripping utility (124). In one or more embodiments, the stripping utility is software having functionality, when executed by the computer A (116), to remove selected markers or features from the training data (104) or the input data (110) that are known or suspected to lead to undesirable bias. The functionality may take the form of a computer useable program code that deletes the indicated features from a vector or deletes markers from a feature. For example, the stripping utility (124) may contain functionality to strip information relating to the race, religion, or ethnicity of a loan applicant from the training data (104) or the input data (110), or to strip proxies. A proxy is data which may serve as a hidden indicator of the cohort; for example, the fact that a user listens to a particular genre of music could be a proxy for the fact that the user belongs to an ethnic group that is an impermissible cohort.
In another example, the stripping utility (124) may contain functionality to strip information from the training data (104) or the model data (106) that relates to a pathology pathway that is to be excluded from a medical research study. In one or more embodiments, the stripping utility (124) may be programmed to provide new inputs to the MLM (122) that instruct the MLM (122) to ignore information in the training data (104) or the model data (106). In one or more embodiments, the stripping utility (124) may be programmed to adjust weights assigned to the training data (104) or the model data (106) such that the importance of such data is reduced or eliminated with respect to calculating an output.
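By way of a non-limiting illustration, a stripping utility along these lines might be sketched as follows, assuming tabular data held in a pandas DataFrame; the column names are hypothetical stand-ins for overt markers and known proxy markers.

```python
# Illustrative sketch of a stripping utility (124); the marker names
# are hypothetical examples of overt and known proxy markers.
import pandas as pd

OVERT_MARKERS = ["gender", "ethnicity", "religion"]
KNOWN_PROXY_MARKERS = ["music_genre_preference"]

def strip_biased_features(data: pd.DataFrame) -> pd.DataFrame:
    """Delete overt and known proxy features from the training or
    input data, returning the modified data. Absent columns are
    ignored rather than raising an error."""
    return data.drop(columns=OVERT_MARKERS + KNOWN_PROXY_MARKERS,
                     errors="ignore")
```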
In one or more embodiments, the machine learning system (120) may be in communication with an industry tool (126). The industry tool (126) is software, hardware, or a combination thereof which is programmed to perform some other task of interest to the user, based on input from the machine learning system (120). For example, the industry tool (126) may be an enterprise system or may be a web-based software program. The industry tool (126) may be part of computer A (116), or may be executed by a system which is external to computer A (116).
In a more specific example, the industry tool (126) may be a loan determination system. In this case, the loan determination system takes as input the output of the machine learning system (120). For example, the output of the machine learning system (120) may be the probability that a given user will default on a loan if a loan offer is extended to the user and accepted by the user. The probability may be the only input to the industry tool (126), or may be one of many forms of input provided to the industry tool (126). Nevertheless, the industry tool is programmed to make the final determination of whether to extend a loan offer to the user applying for the loan, and at what interest rate.
In another specific example, the industry tool (126) may be a research tool which calculates some other result based on the correlation discovered by the machine learning system (120). Thus, the one or more embodiments are not necessarily limited only to financial applications.
Other components may be present in the system shown in
For example, the user application (128) may be web-based software hosted on computer A (116) which allows a user to input user data for a loan application. The machine learning system (120) then takes the user data as input and, among other data (such as data retrieved from a credit report on the user), calculates a probability that the user (the loan applicant) will default if a loan is extended to the user. The user application (128) could also be software directly installed on or otherwise instantiated on the computer B (118) which serves the same function and communicates retrieved data to the machine learning system (120) via the network (114).
In another example, the user application (128) could be a database or spreadsheet program which allows a principal investigator of a medical research team to input medical data regarding the study in question. In this case, the medical data is input to the machine learning system (120) via the network (114). Such software, again, may be web-based and hosted on computer A (116) (or some other remote computer) or may be locally instantiated on computer B (118).
In one or more embodiments, implementation details regarding the operation of the machine learning system (120) are described with respect to and shown by
Note that the examples described with respect to
In one or more embodiments, a data repository (202) includes training data (204) and model data (206), which may be examples of the data repository (102), training data (104), and model data (106) of
In one or more embodiments, the training data (204) also includes a biased data set (214). The biased data set (214) is a subset of features (216) among all of the features in vector A (208). In particular, the subset of features (216) includes overt markers (218) and proxy markers (220), or other features or markers, which directly indicate, or are known to indirectly indicate, that a user belongs to a cohort against which bias is to be avoided. For example, the biased data set (214) may include an overt marker (218) that a user is a member of a gender, ethnic, racial, or religious group, or may include a proxy marker that implies that the user is a member of such a group. In either case, the overt markers (218) and proxy markers (220) directly or indirectly indicate that the user belongs to a cohort against which bias is to be avoided.
As explained further below, the subset of the features (216) is to be stripped from the features in the vector A (208). Therefore, the training data includes a modified training data set (222). The modified training data set (222) is the set of features in the vector A (208), but stripped of the subset of features (216). Thus, no overt markers (218) or proxy markers (220) are included in the modified training data set (222).
Attention is now turned to the model data (206) in the data repository (202). Again, the model data (206) is an example of the model data (106) described with respect to
In one or more embodiments, the input data (224) is data which is input into one or more machine learning models (MLMs) in order to achieve an output. Therefore, in one or more embodiments, the training data (204) may be, but is not necessarily, considered part of the input data (224).
However, in most cases, the input data (224) refers to other types of input data. For example, the input data (224) may be an unknown vector (228) which contains unknown features (230) containing markers describing an unknown user for whom a primary MLM is to draw a prediction. In a specific example, the unknown vector may be information in a credit report, possibly together with other information drawn from a loan application, describing a person who is applying for a loan. Because the primary MLM (258), below, has never calculated a credit worthiness score for this user, the vector describing this user is referred to as the “unknown vector,” the features in the unknown vector are “unknown features,” and the markers for the unknown features are “unknown markers.” Stated differently, the unknown vector (228) with unknown features (230) and unknown markers are not “unknown” in the sense that the vector, features, and markers are not defined or are invisible to a programmer. Rather, the unknown vector (228) and unknown features (230) simply relate to a vector, with features and markers, that has not yet been subject to analysis by any of the MLMs in the machine learning system (200).
In one or more embodiments, the input data may also include fake vector input (232). The fake vector input (232) may be a vector which contains fake markers or a mixture of fake and real markers. The fake vector input is used in the machine learning system (200) to identify hidden proxy markers within a modified training data set (222), as described further with respect to
In one or more embodiments, the input data (224) may also include a cohort (234). The cohort is a data structure containing data values which describe a group against which bias is to be avoided. For example, a cohort may be a data structure including data values which describe or imply a race or ethnicity (such as for a financial application) or which describe a medical condition that is to be excluded in a study (such as for a medical research application). In any case, the primary MLM in the machine learning system (200) should not draw inferences based on the cohort (234), because bias against the cohort (234) is to be avoided.
In one or more embodiments, the input data (224) also includes a prediction data set (236), which is defined by vector B (238). The prediction data set (236) may be the unknown vector (228), but may also be some other test prediction data set. The vector B (238) of the prediction data set (236) is stripped of any biased data, including any overt markers and known proxy markers, and thus may also serve as the modified training data set (222).
Attention is now turned to the output data (226). In one or more embodiments, the output data (226) is data which is output from one or more machine learning models (MLMs). In some cases, the output data (226) is provided as input to another MLM in the machine learning system (200). In other cases, the output data (226) is provided to an industry tool, such as a finance tool (256). In still other cases, the output data (226) is stored or displayed, possibly for review by a programmer. In yet other cases, the output data (226) may include the actual model data (i.e., the parameters or weights that the model has learned).
The output data may include credit worthiness scores (240), a predicted cohort (242), a fake vector output (244), a remediated data vector (246), and a statistical profile (248). In one or more embodiments, the credit worthiness scores (240) are the output of the primary machine learning model (258), and may describe a probability that a given user will default if extended a loan. The credit worthiness scores (240) may be output by the primary MLM (258) in the manner described with respect to
In one or more embodiments, the predicted cohort (242) is the cohort to which a user is predicted to belong, according to the determination of an MLM, such as the supervisory MLM (260). Generation and use of the predicted cohort (242) are described further with respect to
The remediated data vector (246) is a data structure containing features and markers from which all hidden proxy markers have been stripped, or otherwise adjusted. As used herein, a data vector is “adjusted” if the data contained therein has been weighted to disfavor or eliminate certain vector features or has been changed by the application of a different vector. In other words, after operation of the machine learning system (200) to identify hidden proxy markers which were used by the primary MLM (258) to determine an output which is biased against the cohort (234), as described with respect to
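By way of a non-limiting illustration, the re-weighting sense of “adjusted” might be sketched as follows; the weight values are hypothetical.

```python
# Illustrative sketch of "adjusting" a data vector: features implicated
# in hidden bias are down-weighted or zeroed out. Weights are hypothetical.
import numpy as np

def adjust_vector(x: np.ndarray, feature_weights: np.ndarray) -> np.ndarray:
    # Element-wise re-weighting; a weight of 0.0 eliminates a feature.
    return x * feature_weights

x = np.array([1.0, 0.3, 2.2, 0.9])
weights = np.array([1.0, 0.0, 1.0, 0.5])  # strip feature 1, damp feature 3
remediated_x = adjust_vector(x, weights)
```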
In one or more embodiments, the statistical profile (248) is a data structure containing data, derived according to statistical methods, that characterizes potential biases of the primary MLM (258) against the cohort (234). As described with respect to
Attention is now turned to the other components of
In one or more embodiments, the local computer (252) is described as “local” because the local computer (252) is operated by the company or organization that uses the machine learning system (200) to detect hidden bias against a cohort (234) in the operation of a primary MLM (258). However, the local computer (252) may, itself, be a distributed computing system over another network, e.g., a local area network or a wide area network, such as the Internet, and may also be interconnected via the network (250). Nevertheless, the local computer (252) is responsible for executing the various components of the machine learning system (200) and other executable software useful in the detection and elimination of undesirable bias against the cohort (234) in the operation of the primary MLM (258).
To that end, in addition to the primary MLM (258), the machine learning system (200) also includes a supervisory MLM (260), a generator MLM (262), a discriminator MLM (264), a transform MLM (266), and a remediated primary MLM (268). In one or more embodiments, the primary MLM (258) is the MLM that is trained to calculate the prediction of interest. For example, in the finance application example, the primary MLM is programmed to receive an unknown vector (228) or prediction data set (236) as input and to calculate credit worthiness scores (240) as output. The primary MLM (258) is a deep learning MLM, such as but not limited to a neural network. Operation of the primary MLM (258) is described with respect to
In addition, the machine learning system (200) includes the supervisory MLM (260). In one or more embodiments, the supervisory MLM (260) is trained to predict whether a given user described by the unknown vector (228) or the prediction data set (236) belongs to the cohort (234). In other words, the supervisory MLM (260) is trained to predict, based on data that has already been stripped of the overt markers (218) and the proxy markers (220) indicative of bias against the cohort (234), whether the user or users belong to the cohort (234). If the supervisory MLM (260) converges on the cohort (234) (i.e., by outputting the predicted cohort (242)), then the programmer may conclude that some hidden bias may exist against the user or users when the primary MLM (258) output the credit worthiness scores (240) for that user. Operation of the supervisory MLM (260) is described further with respect to
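By way of a non-limiting illustration, the supervisory check might be sketched as follows, assuming scikit-learn; the model architecture and the margin over chance used to declare “convergence” are illustrative assumptions.

```python
# Illustrative sketch of the supervisory MLM (260): if cohort membership
# can still be predicted from the stripped features at a rate meaningfully
# above chance, hidden proxy markers likely remain in the data.
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

def supervisory_converges(X_stripped, cohort_labels,
                          chance_rate: float, margin: float = 0.05) -> bool:
    X_tr, X_te, y_tr, y_te = train_test_split(
        X_stripped, cohort_labels, test_size=0.25, random_state=0)
    supervisory = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
    supervisory.fit(X_tr, y_tr)
    accuracy = accuracy_score(y_te, supervisory.predict(X_te))
    return accuracy > chance_rate + margin  # "converges on the cohort"
```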
The machine learning system (200) also includes the generator MLM (262) and the discriminator MLM (264), which operate in opposition to each other. In one or more embodiments, the generator MLM (262) is trained to predict a fake vector output (244) which can trick the discriminator MLM (264) into outputting a prediction that the fake vector output (244) is actually real. In turn, in one or more embodiments, the discriminator MLM (264) is trained to predict whether the fake vector output (244) of the generator MLM (262) is actually fake. The opposition of generator MLM (262) and the discriminator MLM (264) to each other allows the creation of the remediated data vector (246) and the statistical profile (248), as described with respect to
Continuing with the description of
In turn, the remediated data vector (246) may then be used to retrain the primary MLM (258) so that the primary MLM no longer generates predictions based on the membership of a user to the cohort (234). The result of retraining the primary MLM is the remediated primary MLM (268). In one or more embodiments, the remediated primary MLM (268) then makes future predictions without bias against a user on account of the user's membership in the cohort (234). The operation of the remediated primary MLM (268) is described further with respect to
The local computer (252) may also execute other software useful to the machine learning system (200). Examples of such software may include a stripping utility (270) and a difference utility (272). In one or more embodiments, the stripping utility (270) is software programmed to strip data from a vector, such as any of those described above. The data may be stripped by deleting a feature from a vector, deleting a marker from a feature, or any combination thereof. Deleted data may be discarded, or in some embodiments may be stored elsewhere for purposes of later study. Operation of the stripping utility (270) is described with respect to
In one or more embodiments, the difference utility (272) is software programmed to compare two vectors. More specifically, the difference utility (272) may be programmed to identify differences between two vectors. The differences, and the operation of the difference utility (272), are described with respect to
The local computer (252) may contain additional MLMs or other types of utilities. Thus, the examples described with respect to
In another example, the local computer (252) may also execute the finance tool (256). However, the finance tool (256) may be executed by a remote computer, possibly maintained by an entity different than the entity which operates the local computer (252). Therefore, the finance tool (256) is shown in
In one or more embodiments, the finance tool (256) is software programmed to determine whether to extend a loan offer (274) to an applicant when the finance tool (256) receives a loan application from the remote computer (254), which is operated by the user. The data entered for the loan application may be entered via a program instantiated on the remote computer (254) or via a web site hosted by the local computer (252) or by some other computer not shown.
In response, the finance tool (256) may call the primary MLM (258) of the machine learning system (200) to predict the credit worthiness scores (240) for the user, and then use the credit worthiness scores (240), along with other information in the loan application, to determine whether to transmit the loan offer (274) to the remote computer (254). In one or more embodiments, the finance tool (256) includes functionality or is programmed to transmit the loan offer (274) to the remote computer (254) (and hence the applicant) when the credit worthiness score (as possibly modified by further data processing performed by the finance tool (256)) exceeds a threshold (276). In other words, the threshold (276) is a number, chosen at the discretion of a user or possibly a programmer, which reflects a minimum credit worthiness score necessary to cause the finance tool (256) to transmit the loan offer (274) to the remote computer (254).
The loan offer (274) contains a widget (278). The widget is a script or other software element which, when manipulated by a computer (e.g., the remote computer (254)), indicates to the finance tool (256) that the applicant has accepted or declined the loan offer. The widget may take the form of a button, a dialog box, a drop-down menu, or any other convenient computerized tool for receiving data input.
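By way of a non-limiting illustration, the threshold logic of the finance tool (256) might be sketched as follows; the threshold value and the offer payload, including the widget representation, are hypothetical.

```python
# Illustrative sketch of the finance tool's threshold (276) check.
from typing import Optional

THRESHOLD = 0.85  # hypothetical minimum credit worthiness score

def maybe_transmit_offer(credit_worthiness_score: float) -> Optional[dict]:
    """Return a loan offer payload to transmit if the score clears the
    threshold; otherwise return None (no offer is transmitted)."""
    if credit_worthiness_score < THRESHOLD:
        return None
    # The offer carries a widget the applicant can operate to accept
    # or decline; a dict stands in for the real payload here.
    return {"type": "loan_offer",
            "widget": {"kind": "button", "actions": ["accept", "decline"]}}
```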
Attention is now turned to
Either method
As indicated above,
At step 300, an unknown vector is received from a data repository, possibly via a network or possibly via a physical bus connecting the data repository to the local computer executing the machine learning system that processes the unknown vector. As described above, the unknown vector is a data structure populated with unknown features describing a user. At step 302, a primary MLM is executed in the manner described above to generate a score.
Attention is now turned to
Turning to
The advantage of deriving MLM model P (404) is that MLM model P (404) may be used to predict correlations within new, previously unseen, vectors by feeding the vectors into MLM model P (404) and observing the values of prediction ŷ (406). For example, in a finance application example, a prediction may be made by MLM model P (404) whether a new borrower is likely to default or not. Using this prediction, a finance tool can decide whether or not to underwrite a loan.
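By way of a non-limiting illustration, deriving and applying MLM model P (404) might be sketched as follows, assuming scikit-learn; the architecture is hypothetical, since the embodiments only require some deep learning model.

```python
# Illustrative sketch of training primary model P on (x, y) pairs and
# using it to predict on a new, previously unseen vector.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_primary(X: np.ndarray, y: np.ndarray) -> MLPClassifier:
    P = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=1000)
    P.fit(X, y)  # learn the mapping from feature vectors x to labels y
    return P

# Prediction y_hat for a new borrower vector x_new:
# y_hat = P.predict_proba(x_new.reshape(1, -1))[0, 1]
```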
The example of
Additionally, a variety of learning approaches (i.e., MLMs) may be used to make predictions (i.e., prediction ŷ (406)) that as closely as possible match the actual label data y. An example of MLM model P (404), therefore, might be an artificial neural network, which is highly non-linear. However, other deep learning MLMs are contemplated, such as but not limited to deep belief networks, recurrent neural networks, supervised deep learning models, semi-supervised deep learning models, and unsupervised deep learning models.
The model described with respect to
Stated differently, MLM model P (404) is a “black box,” which can be tested to verify that the predictions of MLM model P (404) are accurate to within a known degree of accuracy, but the process by which MLM model P (404) arrived at the prediction cannot be interpreted easily. In particular, it is difficult to understand how any individual feature or features are associated with the prediction. In some cases, even computational methods cannot gauge how MLM model P (404) arrived at prediction ŷ (406) from vector x (400). The one or more embodiments address this issue.
However, under some circumstances, a programmer desires to know if the MLM P (404) is exhibiting any kind of bias in the sense of penalizing certain input vectors in a way that is unacceptable to the programmers of the function, MLM P (404). One example might be that the programmer desires to avoid the violation of fair-lending principles in a financial application of machine learning.
This potential violation might arise when the MLM model P (404) is, without the programmer's knowledge, making predictions based upon a variable such as the borrower's gender or ethnicity, even when the input data has been stripped of overt markers and known proxy markers indicative of gender or ethnicity. A first step in preventing this undesirable calculation is to remove all “negative selection” variables from vector x (400). Negative selection variables are variables, markers, or features that are pre-determined to be overt or known proxy markers of bias.
For example, features or markers describing gender and ethnicity may be removed from the input variables in vector x (400) and vector y (402). However, if vector x (400) and vector y (402) are very large and encompass many variables, it is possible that some of the variables, or combinations thereof, are acting as hidden proxies, or implicit markers, of bias.
In a highly non-linear MLM model P (404) and with very large input dimensions from vector x (400) or vector y (402), it is possible for these biases to be implied in a way that is not at all obvious by available means of inspection. For example, if vector x (400) includes data from a user's television viewing habits and eating habits, some of this data, or a non-obvious combination thereof, might indicate ethnicity. Thus, the MLM model P (404) may end up causing predictions that are unfairly biased in the sense that the outputs are being heavily influenced only by factors that indirectly indicate the negative selection variable of ethnicity.
Under these circumstances it might be possible that MLM model P (404) could be accused of being biased in an unacceptable manner. Namely, MLM model P (404) might be negatively impacting predictions due only to implied negative selection variables. In the case of lending, this fact might result in unfair lending, such as the decline of borrowers of a certain ethnicity solely because of their ethnicity, albeit via hidden (i.e., indirect) markers which were used to infer membership of the applicant in the cohort.
The goal, then, is to detect whether or not MLM model P (404) exhibits such bias and, if desirable, to gain some insight into what aspects of vector x (400) are causing P (404) to be biased. The one or more embodiments address these technical issues. The one or more embodiments also address the issue of remediating MLM model P (404) by retraining or by removing any hidden negative selection variables or markers. In other words, the one or more embodiments provide for the detection, characterization, and elimination of hidden negative selection variables that might cause MLM model P (404) to make a prediction on the basis of undesirable bias.
Returning to
Attention is now turned to
Thus, MLM model B (418) is trained using input vector x (408) and vector y (416) to generate as output prediction ŷ (420). Prediction ŷ (420) is a prediction regarding whether the applicant belongs to the cohort (i.e., the ethnicity or race).
Stated differently, MLM model B (418) uses the same dataset as MLM P (412), but training targets the labels that indicate membership, or not, in a protected class (e.g., an ethnic minority), by using the overt markers. For example, the MLM model B (418) is trained to identify ethnicity or race. Note that these negative selection markers or variables (ethnicity or race) were explicitly excluded from vector x (408). After training the MLM model B (418), the MLM model B (418) can predict the ethnicity of the borrower purely by observing x (that does not contain the overt markers).
Two MLM models have now been trained. MLM P (412) is the primary model that performs some predictive function (based upon classification). Parallel MLM model B (418) is trained using the same data, but is programmed to classify the input vectors into classes that relate to bias, such as the ethnicity of the applicant.
If MLM model B (418) is able to classify samples to within a sufficient degree of accuracy (or loss rate), then this fact indicates that negative selection criteria exist somewhere within the input vector x (408) (i.e., there is something implicit, namely one or more hidden proxy markers, in the input vector that is sufficient for a machine learning algorithm (instantiated as MLM model B (418)) to reconstruct the overt markers that had been removed from vector x (408)). It may then be assumed that one feature or marker in the input vector, or a combination thereof, exists such that the MLM model B (418) can infer ethnicity. Examples of such hidden negative selection features or markers might be a borrower's eating and shopping habits in some combination.
If the MLM model B (418) failed to converge with sufficient accuracy (i.e., MLM model B (418) could not really tell the ethnicity of any borrower), then it may be inferred that hidden negative features or markers do not exist in the vector x (408). Accordingly, it may be likewise inferred that the MLM P (412) is not biased in the undesirable manner. Thus, the prediction ŷ (414) of the MLM P (412) can be trusted as being unbiased against the cohort, and used in future applications (i.e., used as input to the finance tool).
However, if the MLM model B (418) does converge on a prediction ŷ (420) that the applicant belongs to the cohort, then the MLM P (412) should be tested to determine whether the MLM P (412) is indeed biased (as described with respect to step 321 of
Otherwise, if no such correlation exists, then the MLM P (412) may be determined to be unbiased, even if the supervisory MLM (MLM model B (418)) converged on the cohort. If the MLM P (412) is unbiased, even with the convergence of MLM model B (418), then the prediction of MLM P (412) may still be provided as input to the finance tool.
Nevertheless, even if the primary MLM P (412) is determined to be biased against the cohort, the primary MLM P (412) may be remediated. Remediation of the primary MLM P (412) is described further with respect to
Returning to
Nevertheless, optionally, at step 310, the primary MLM may be remediated. The remediated MLM is not biased against the cohort. Then, at step 312, the remediated MLM may be executed to determine a new score that predicts whether the applicant will default on the loan. Thereafter, at step 314, the computer-implemented action may be performed using the industry tool with the new score as input. The method of
The details of steps 310, 312, and 314 are described further below with respect to
Attention is now turned to
At step 301, training data is received from a data repository, possibly via a network or possibly via a physical bus connecting the data repository to the local computer executing the machine learning system that processes the training data. Again, the training data is one or more vectors containing features and markers that describe many different individuals and information regarding whether those individuals defaulted on one or more loans. At step 303, the training data is stripped of overt and known proxy markers corresponding to a cohort against which bias is to be avoided. The result of stripping the overt and known proxy markers is modified training data, which is also a vector data structure containing a subset of the original markers in the initial training data.
At step 305, the primary MLM is trained with the modified training data. Likewise, at step 307, the supervisory MLM is trained with the modified training data. Training the primary and supervisory MLMs, like training any MLM, is the process of creating a candidate model, then testing it with some held-back data, until a final model is selected for use. Together, the primary MLM and the supervisory MLM form part or all of a machine learning system, such as machine learning system 200 of
Thus, at step 309, a prediction data set that describes a user is received from a data repository, possibly via a network or possibly via a physical bus connecting the data repository to the local computer executing the machine learning system that processes the prediction data set. The user is a loan applicant in this example. The prediction data set has been stripped of any such overt or known proxy markers or features.
Then, at step 311, the primary MLM is executed by the machine learning system using the prediction data set to predict a score. In this example, the score is a prediction that the user will default on a loan, or is a prediction that the user will pay back a loan that is offered. In either case, the score may be referred to as a “credit worthiness score.” At step 313, possibly in tandem with step 311, the supervisory MLM is executed using the prediction data set. Operation of the supervisory MLM is described with respect to
At step 315, a determination is made by the machine learning system whether the supervisory MLM converges on the cohort. If not (a “no” answer at step 315), then at step 317 the credit worthiness score is input into a finance tool. Transmission of the score to the finance tool may be via a network or via a physical bus. If the credit worthiness score exceeds a pre-selected threshold, then at step 319 an electronic loan offer is transmitted to the user using the finance tool. The finance tool thus uses the credit worthiness score in deciding to transmit the electronic loan offer to the user.
Returning to step 315, if the supervisory MLM did converge on the cohort (a “yes” answer at step 315), then at step 321 a determination by the machine learning system is made whether the primary MLM is biased against the cohort. Referring again to
If, at step 321, the primary MLM is not biased against the cohort (a “no” answer at step 321), then the method returns to step 317 and continues as described above. However, if the primary MLM is biased against the cohort (a “yes” answer at step 321), then the method proceeds to step 323. At step 323, the hidden proxy markers or features are characterized.
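By way of a non-limiting illustration, the decision flow of steps 311 through 323 might be sketched as follows; the helper functions named here are hypothetical wrappers around the MLMs and tools described above, not functions defined by the embodiments.

```python
# Illustrative sketch of the decision flow (steps 311-323); every helper
# named below is a hypothetical stand-in.
def process_application(prediction_vector):
    score = primary_mlm_predict(prediction_vector)              # step 311
    if not supervisory_converges_on_cohort(prediction_vector):  # step 315
        return finance_tool(score)                              # steps 317/319
    if not primary_mlm_is_biased_against_cohort():              # step 321
        return finance_tool(score)                              # step 317
    characterize_hidden_proxies()                               # step 323
    remediated_mlm = remediate_primary_mlm()                    # remediation
    return finance_tool(remediated_mlm.predict(prediction_vector))
```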
Attention is now turned to
One method of identifying the combination of variables that enable primary MLM P (412) in
Turning to
In the case of loan applications, discriminator MLM D (700) is trained to predict whether a particular candidate vector x is real or fake. The term “fake” means that the input vector x (702) did not come from the real world, but was deliberately fabricated to look like a real loan applicant. The fake input vector x (702) could be composed entirely of fake markers, but could also contain a combination of real and fake markers. In any case, the real vectors are sampled from the real world of loan applications, whereas the fake vectors are made up. Generation of the fake vectors may be performed manually, or by a fourth MLM, such as generator MLM G (800) of
In one or more embodiments, discriminator MLM D (700) is a differentiable function, meaning that it can be trained via a method such as gradient descent to discriminate fake data from real data. An example of such a differentiable function is an artificial neural network.
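By way of a non-limiting illustration, such a differentiable discriminator might be sketched as a small artificial neural network; the PyTorch framework and the layer sizes are assumptions made only for this sketch.

# Sketch of discriminator MLM D: a differentiable function, trainable
# by gradient descent, mapping an applicant vector x to P(x is real).
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid())  # output near 1 means "real"

    def forward(self, x):
        return self.net(x)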
Turning to
In one or more embodiments, generator MLM G (800) should produce high quality fake data. “High quality” means that the fake data is difficult to detect as a fake by some other machine learning process. In order to train G to produce high quality fake data, the discriminator MLM D (700) may be used to receive input vectors from generator MLM G (800). In other words, the discriminator MLM D (700) attempts to predict whether the output of the generator MLM G (800) is fake. If the probability of a vector output by generator MLM G (800) being real is near to 1, then the vector output is considered a “high quality” fake. If the probability is near to 0, then the vector output is considered a “bad” fake.
In turn, the output from discriminator MLM D (700) can be fed back to generator MLM G (800). In this manner, generator MLM G (800) is optimized to produce better and better fakes. These improved fakes are then sent back to discriminator MLM D (700) repeatedly.
Thus, generator MLM G (800) and discriminator MLM D (700) are in an adversarial relationship because, in effect, G is trying to trick (or defeat) D via an adverse (fake) input that looks real. However, at the same time, D is allowed to re-train using the fake data to ensure that such fake samples are indeed detected (i.e., classified with a low probability of being real). Thus, generator MLM G (800) is constantly trying to generate fakes, while discriminator MLM D (700) is trying to detect them and, in the process, improving its ability to detect more convincing fakes. Based upon the feedback received from discriminator MLM D (700), the generator MLM G (800) converges upon an input value of vector z (802) that generates a successful fake output x (702) that can fool discriminator MLM D (700). Expressed more succinctly, D tries to make D(G(z)) near to 0, while G tries to make D(G(z)) near to 1. In the case of trying to generate fake values of vector z (802), the application of the adversarial-generator network shown in
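By way of a non-limiting illustration, one adversarial training step might be sketched as follows; the generator architecture, the optimizers, and the dimensions are assumptions made only for this sketch.

# Sketch of the adversarial loop: D is pushed to make D(G(z)) near 0
# on fakes, while G is pushed to make D(G(z)) near 1.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim, n_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 64), nn.ReLU(),
            nn.Linear(64, n_features))

    def forward(self, z):
        return self.net(z)

def adversarial_step(G, D, real_x, z_dim, opt_g, opt_d):
    loss = nn.BCELoss()
    batch = real_x.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)
    # Train D: classify real vectors toward 1 and generated fakes toward 0.
    fake_x = G(torch.randn(batch, z_dim)).detach()
    d_loss = loss(D(real_x), ones) + loss(D(fake_x), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Train G: push D(G(z)) toward 1 so the fakes look real.
    fake_x = G(torch.randn(batch, z_dim))
    g_loss = loss(D(fake_x), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()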
Turning to
Any given vector z (802) may be thought of as a set of latent variables that describe what will be seen in the x vector, which in this example is a set of features describing a loan applicant. It is possible to take a z vector corresponding to a majority ethnicity applicant (z-maj 902) and subtract (using vector subtraction) a z vector corresponding to a minority ethnicity applicant (z-min 900), ending up with a difference vector, z-diff (904). In the latent space, z-diff (904) represents what makes the difference between a majority and a minority ethnicity applicant (i.e., the difference between non-members of the cohort and members of the cohort).
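By way of a non-limiting illustration, the latent-space subtraction might be sketched in a few lines; the numeric values are arbitrary examples.

# Sketch of the latent-space arithmetic: z_diff is the direction in
# latent space that separates majority from minority applicants.
import numpy as np

z_maj = np.array([0.8, -0.1, 0.4])   # hypothetical latent vector, majority applicant
z_min = np.array([0.2,  0.3, 0.4])   # hypothetical latent vector, minority applicant
z_diff = z_maj - z_min               # what makes the difference between the two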
Turning to
By sampling the residuals in x-vector space, a probability distribution function (i.e., statistical profile (248) of
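By way of a non-limiting illustration, and under the assumption (made only for this sketch) that the residual is taken as the feature-space image of the latent difference, the sampling might be sketched as follows.

# Sketch of profiling residuals in x-vector space: map paired latents
# through the generator, subtract, and build a per-feature statistical
# profile. The sampling scheme is an illustrative assumption.
import numpy as np
import torch

def residual_profile(G, z_min_samples, z_diff, bins=20):
    z_min = torch.as_tensor(np.asarray(z_min_samples), dtype=torch.float32)
    z_maj = z_min + torch.as_tensor(z_diff, dtype=torch.float32)
    with torch.no_grad():
        residuals = (G(z_maj) - G(z_min)).numpy()   # residual vectors x
    # One histogram per feature approximates its probability distribution.
    return [np.histogram(residuals[:, j], bins=bins)
            for j in range(residuals.shape[1])]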
Returning to
Once the residual vectors x (1000) have been identified, along with the probability distribution functions of the features potentially causing bias against the cohort, the primary MLM P (412) can be remediated. In one embodiment, the vector space of the input vector x (408) can be stripped of the hidden markers identified by the above techniques. An example might be noticing in the residual vectors x (1000) that subscription to religious programs in an on-demand media service is contributing towards bias against an ethnic group. This data could be selectively removed without having to remove all on-demand media vectors whose wholesale removal might compromise the effectiveness of the original primary MLM P (412).
Another method, shown in
Note that transform MLM T (1100) only affects the features or markers of vector x (408) that were identified as potentially being negative selection features or markers. Thus, transform MLM T (1100) may be described as a “de-biasing” transform that can “neutralize” any original vector x (408) so as to remove bias-contributing features or markers without compromising features useful to predicting the credit worthiness score by the primary MLM P (412).
Additionally, the primary MLM P (412) can now be retrained using the transformed vector x (1102), or the set of all such vectors, X. Thus, the primary MLM P (412) is remediated, meaning that the primary MLM P (412) can no longer make predictions of credit worthiness on the basis of the applicant being a member of the cohort against which bias is to be avoided. Likewise, along with the overt features and markers indicative of the cohort, the hidden proxy features and markers characterized in the residual vector x (1000) can be removed from the unknown vector received for a new loan applicant. In this manner, bias against the cohort by the primary MLM P (412) can be avoided.
One method of training transform MLM T (1100) is as just described (i.e., using the residual vector x (1000)). However, another method of training transform MLM T (1100) is to formally encode the decisions of a human operative into the transform MLM T (1100). In other words, if a human declares that a certain feature of vector x (408) should be removed, then the transform MLM T (1100) can be constructed that assigns the corresponding features or markers of the vector to zero (or any arbitrary constant) for all values of x.
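By way of a non-limiting illustration, the human-encoded variant might be sketched as follows; the flagged feature positions are hypothetical.

# Sketch of a human-encoded transform T: pin the flagged features or
# markers of any input vector to a constant, leaving the rest untouched.
import numpy as np

def make_transform(flagged_indices, constant=0.0):
    flagged = list(flagged_indices)
    def transform(x):
        x = np.array(x, dtype=float, copy=True)
        x[flagged] = constant        # neutralize the flagged markers for all x
        return x
    return transform

T = make_transform([3, 7])           # hypothetical flagged feature positions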
Returning to
Attention is now turned to
In both cases, the input data being fed to the machine learning models is stripped of overt markers and overt features that directly tie the respective user to the protected classes. Thus, for example, input data (1204), describing Chris (1200), is stripped of overt markers indicating that Chris (1200) belongs to the protected class, or to any protected class. Likewise, input data (1206), describing Alex (1202), is stripped of overt markers indicating that Alex (1202) belongs to the non-protected class, or to any protected class.
In both cases, the respective data is fed to the MLM. Thus, the input data (1204) describing Chris (1200), who is an actual member of the protected class, is fed as input to MLM model P (1208). Likewise, the input data (1206) describing Alex (1202), who is not a member of the protected class, is fed as input to MLM model P (1208).
In the example shown, in
As a matter of technological fact, it is impossible to know whether the MLM model P (1208) produced the output ŷ (1210) and the output ŷ (1212) using hidden correlations among the data (1204) or the data (1206). The hidden correlations may indicate that Chris (1200) belongs to the protected class, or that Alex (1202) belongs to the non-protected class. Thus, the possibility exists that the MLM model P (1208) predicted the output ŷ (1210) and the output ŷ (1212) purely on the basis of either (i) that Chris (1200) is in the protected class, or (ii) that Alex (1202) is in the non-protected class. In either case, the decision to offer or decline the loan, or to offer the same loan at different interest rates, purely on the basis of membership or non-membership in the protected class, would be unacceptable.
Note that the company (1201) using the MLM model P (1208) does not actually know whether unacceptable bias has influenced the loan offer. It is also possible that the decision by the MLM model P (1208) to decline a loan to the user (1200) in the protected class, or to offer the user (1200) a loan with a higher interest rate, was legitimate. In other words, the MLM model P (1208) could have made the prediction of output ŷ (1210) without any reference, hidden or otherwise, to the membership of the user (1200) in the protected class. Accordingly, it is also possible that the output ŷ (1210) and the output ŷ (1212) are legitimately determined and should be used.
Therefore, turning to
After training, the same stripped data used to describe Chris (1200), that is the input data (1204), is input to the MLM B (1300). If the output ŷ (1304) successfully predicts that Chris (1200) is a member of the protected class, then the company (1201) may assume that some hidden bias against the protected class exists when the MLM model P (1208) produces the output ŷ (1210) on the basis of the data stripped of overt markers (i.e., the input data (1204)). On the other hand, if the output ŷ (1304) does not converge on a protected class, that is, if the output ŷ (1304) does not indicate that Chris (1200) is in a protected class, then the output ŷ (1210) of
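By way of a non-limiting illustration, the convergence test might be sketched as follows, assuming the stripped data is a numeric matrix and the protected-class labels form a binary NumPy array; the model class and the margin are illustrative assumptions.

# Sketch of the supervisory check: if MLM B recovers protected-class
# membership from the stripped data better than chance, hidden proxy
# markers likely remain in the data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def hidden_bias_suspected(X_stripped, y_protected, margin=0.05):
    scores = cross_val_score(LogisticRegression(max_iter=1000),
                             X_stripped, y_protected, cv=5)
    baseline = max(np.mean(y_protected), 1 - np.mean(y_protected))  # majority-class rate
    return scores.mean() > baseline + margin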
However, again, if the output ŷ (1304) converges on a protected class, then steps are taken to remediate the MLM model P (1208). Remediation may take a number of different forms. For example, as explained with respect to
Alternatively, a transform MLM model (see
As a still different alternative, if the MLM model B (1300) predicts bias by the MLM model P (1208) against the protected class, then the loan decision can be made by a person instead of automatically by a machine learning model executed by a computer. The person can then ensure that any loan decision, or interest rate decision, is made without reference to the protected class against which bias is to be avoided.
The features used with respect to the one or more embodiments may include many different kinds of information. Specific kinds of information include information from a credit report, such as credit score, number of credit cards, etc. Other specific kinds of information may be publicly available or searchable, such as what kinds of NETFLIX® programs the user watches, user car brand ownership, user music taste, user food preferences, etc. Overt features are features that directly indicate whether or not the user belongs to a protected class (i.e., a cohort). Examples of overt markers include race (e.g., that Chris (1200) is African-American or that Alex (1202) is Caucasian), gender, religious preference, etc.
Note that the examples described with respect to
Embodiments of the invention may be implemented on a computing system. Any combination of mobile, desktop, server, router, switch, embedded device, or other types of hardware may be used. For example, as shown in
The computer processor(s) (1402) may be an integrated circuit for processing instructions. For example, the computer processor(s) may be one or more cores or micro-cores of a processor. The computing system (1400) may also include one or more input devices (1410), such as a touchscreen, keyboard, mouse, microphone, touchpad, electronic pen, or any other type of input device.
The communication interface (1412) may include an integrated circuit for connecting the computing system (1400) to a network (not shown) (e.g., a local area network (LAN), a wide area network (WAN) such as the Internet, mobile network, or any other type of network) and/or to another device, such as another computing device.
Further, the computing system (1400) may include one or more output devices (1408), such as a screen (e.g., a liquid crystal display (LCD), a plasma display, touchscreen, cathode ray tube (CRT) monitor, projector, or other display device), a printer, external storage, or any other output device. One or more of the output devices may be the same or different from the input device(s). The input and output device(s) may be locally or remotely connected to the computer processor(s) (1402), non-persistent storage (1404), and persistent storage (1406). Many different types of computing systems exist, and the aforementioned input and output device(s) may take other forms.
Software instructions in the form of computer readable program code to perform embodiments of the invention may be stored, in whole or in part, temporarily or permanently, on a non-transitory computer readable medium such as a CD, DVD, storage device, a diskette, a tape, flash memory, physical memory, or any other computer readable storage medium. Specifically, the software instructions may correspond to computer readable program code that, when executed by a processor(s), is configured to perform one or more embodiments of the invention.
The computing system (1400) in
Although not shown in
The nodes (e.g., node X (1422), node Y (1424)) in the network (1420) may be configured to provide services for a client device (1426). For example, the nodes may be part of a cloud computing system. The nodes may include functionality to receive requests from the client device (1426) and transmit responses to the client device (1426). The client device (1426) may be a computing system, such as the computing system shown in
The computing system or group of computing systems described in
Based on the client-server networking model, sockets may serve as interfaces or communication channel end-points enabling bidirectional data transfer between processes on the same device. First, following the client-server networking model, a server process (e.g., a process that provides data) may create a first socket object. Next, the server process binds the first socket object, thereby associating the first socket object with a unique name and/or address. After creating and binding the first socket object, the server process then waits and listens for incoming connection requests from one or more client processes (e.g., processes that seek data). At this point, when a client process wishes to obtain data from a server process, the client process starts by creating a second socket object. The client process then proceeds to generate a connection request that includes at least the second socket object and the unique name and/or address associated with the first socket object. The client process then transmits the connection request to the server process. Depending on availability, the server process may accept the connection request, establishing a communication channel with the client process, or the server process, busy handling other operations, may queue the connection request in a buffer until the server process is ready. An established connection informs the client process that communications may commence. In response, the client process may generate a data request specifying the data that the client process wishes to obtain. The data request is subsequently transmitted to the server process. Upon receiving the data request, the server process analyzes the request and gathers the requested data. Finally, the server process generates a reply including at least the requested data and transmits the reply to the client process. The data may be transferred as datagrams or, more commonly, as a stream of characters (e.g., bytes).
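By way of a non-limiting illustration, the exchange described above might be sketched with the standard socket module; the host, port, and message contents are arbitrary examples.

# Sketch of the client-server socket exchange: the server binds,
# listens, accepts, and replies; the client connects and requests data.
import socket

def serve_once(host="127.0.0.1", port=5050):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))           # associate the socket with an address
        srv.listen(1)                    # wait for incoming connection requests
        conn, _ = srv.accept()           # accepting establishes the channel
        with conn:
            request = conn.recv(1024)    # the client's data request
            conn.sendall(b"reply to: " + request)

def request_data(host="127.0.0.1", port=5050):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, port))        # connection request to the server
        cli.sendall(b"GET data")         # data request
        return cli.recv(1024)            # the server's reply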
Shared memory refers to the allocation of virtual memory space in order to provide a mechanism by which data may be communicated and/or accessed by multiple processes. In implementing shared memory, an initializing process first creates a shareable segment in persistent or non-persistent storage. Post creation, the initializing process then mounts the shareable segment, subsequently mapping the shareable segment into the address space associated with the initializing process. Following the mounting, the initializing process proceeds to identify and grant access permission to one or more authorized processes that may also write and read data to and from the shareable segment. Changes made to the data in the shareable segment by one process may immediately affect other processes, which are also linked to the shareable segment. Further, when one of the authorized processes accesses the shareable segment, the shareable segment maps to the address space of that authorized process. Often, other than the initializing process, only one authorized process may mount the shareable segment at any given time.
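By way of a non-limiting illustration, the shared-memory mechanism might be sketched with the standard multiprocessing.shared_memory module; the segment name and size are arbitrary examples.

# Sketch of shared memory: one process creates and maps a shareable
# segment; an authorized process attaches by name and sees changes
# immediately.
from multiprocessing import shared_memory

seg = shared_memory.SharedMemory(create=True, size=16, name="profile_seg")
seg.buf[0] = 42                                        # initializing process writes

peer = shared_memory.SharedMemory(name="profile_seg")  # authorized process attaches
assert peer.buf[0] == 42                               # the change is visible at once

peer.close()
seg.close()
seg.unlink()                                           # release the segment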
Other techniques may be used to share data, such as the various data described in the present application, between processes without departing from the scope of the invention. The processes may be part of the same or different application and may execute on the same or different computing system.
Rather than or in addition to sharing data between processes, the computing system performing one or more embodiments of the invention may include functionality to receive data from a user. For example, in one or more embodiments, a user may submit data via a graphical user interface (GUI) on the user device. Data may be submitted via the graphical user interface by a user selecting one or more graphical user interface widgets or inserting text and other data into graphical user interface widgets using a touchpad, a keyboard, a mouse, or any other input device. In response to selecting a particular item, information regarding the particular item may be obtained from persistent or non-persistent storage by the computer processor. Upon selection of the item by the user, the contents of the obtained data regarding the particular item may be displayed on the user device in response to the user's selection.
By way of another example, a request to obtain data regarding the particular item may be sent to a server operatively connected to the user device through a network. For example, the user may select a uniform resource locator (URL) link within a web client of the user device, thereby initiating a Hypertext Transfer Protocol (HTTP) or other protocol request being sent to the network host associated with the URL. In response to the request, the server may extract the data regarding the particular selected item and send the data to the device that initiated the request. Once the user device has received the data regarding the particular item, the contents of the received data regarding the particular item may be displayed on the user device in response to the user's selection. Further to the above example, the data received from the server after selecting the URL link may provide a web page in Hyper Text Markup Language (HTML) that may be rendered by the web client and displayed on the user device.
Once data is obtained, such as by using techniques described above or from storage, the computing system, in performing one or more embodiments of the invention, may extract one or more data items from the obtained data. For example, the extraction may be performed as follows by the computing system in
Next, extraction criteria are used to extract one or more data items from the token stream or structure, where the extraction criteria are processed according to the organizing pattern to extract one or more tokens (or nodes from a layered structure). For position-based data, the token(s) at the position(s) identified by the extraction criteria are extracted. For attribute/value-based data, the token(s) and/or node(s) associated with the attribute(s) satisfying the extraction criteria are extracted. For hierarchical/layered data, the token(s) associated with the node(s) matching the extraction criteria are extracted. The extraction criteria may be as simple as an identifier string or may be a query presented to a structured data repository (where the data repository may be organized according to a database schema or data format, such as XML).
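By way of a non-limiting illustration, attribute/value-based extraction against a layered structure might be sketched as follows; the tag and attribute names are hypothetical.

# Sketch of extraction: parse the obtained data into a layered (XML)
# structure, then extract the nodes matching the extraction criteria.
import xml.etree.ElementTree as ET

document = "<users><user id='1'><score>640</score></user></users>"
root = ET.fromstring(document)                    # organizing pattern: XML
nodes = root.findall(".//user[@id='1']/score")    # extraction criteria as a query
values = [node.text for node in nodes]            # -> ['640']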
The extracted data may be used for further processing by the computing system. For example, the computing system of
The computing system in
The user, or software application, may submit a statement or query into the DBMS. Then the DBMS interprets the statement. The statement may be a select statement to request information, update statement, create statement, delete statement, etc. Moreover, the statement may include parameters that specify data, or data container (database, table, record, column, view, etc.), identifier(s), conditions (comparison operators), functions (e.g. join, full join, count, average, etc.), sort (e.g. ascending, descending), or others. The DBMS may execute the statement. For example, the DBMS may access a memory buffer, a reference or index a file for read, write, deletion, or any combination thereof, for responding to the statement. The DBMS may load the data from persistent or non-persistent storage and perform computations to respond to the query. The DBMS may return the result(s) to the user or software application.
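By way of a non-limiting illustration, submitting a statement to a DBMS might be sketched with the standard sqlite3 module; the table and column names are hypothetical.

# Sketch of a DBMS interaction: a create statement, an insert with
# parameters, and a select with a condition and a sort.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE scores (user TEXT, score REAL)")
db.execute("INSERT INTO scores VALUES (?, ?)", ("chris", 0.64))
rows = db.execute(
    "SELECT user, score FROM scores WHERE score > ? ORDER BY score DESC",
    (0.5,)).fetchall()                 # the DBMS executes and returns results
db.close()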
The computing system of
For example, a GUI may first obtain a notification from a software application requesting that a particular data object be presented within the GUI. Next, the GUI may determine a data object type associated with the particular data object, e.g., by obtaining data from a data attribute within the data object that identifies the data object type. Then, the GUI may determine any rules designated for displaying that data object type, e.g., rules specified by a software framework for a data object class or according to any local parameters defined by the GUI for presenting that data object type. Finally, the GUI may obtain data values from the particular data object and render a visual representation of the data values within a display device according to the designated rules for that data object type.
Data may also be presented through various audio methods. In particular, data may be rendered into an audio format and presented as sound through one or more speakers operably connected to a computing device.
Data may also be presented to a user through haptic methods. For example, haptic methods may include vibrations or other physical signals generated by the computing system. For example, data may be presented to a user using a vibration generated by a handheld computer device with a predefined duration and intensity of the vibration to communicate the data.
The above description of functions presents only a few examples of functions performed by the computing system of
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this disclosure, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as disclosed herein. Accordingly, the scope of the invention should be limited only by the attached claims.