The present disclosure relates to secure broker-mediated data analysis and prediction. One example embodiment includes a method. The method includes receiving, by a managing computing device, a plurality of datasets from client computing devices. The method also includes computing, by the managing computing device, a shared representation based on a shared function having one or more shared parameters. Further, the method includes transmitting, by the managing computing device, the shared representation and other data to the client computing devices. In addition, the method includes updating, by the client computing devices, partial representations and individual functions having one or more individual parameters based on the shared representation and the other data. Still further, the method includes determining, by the client computing devices, feedback values to provide to the managing computing device. Additionally, the method includes updating, by the managing computing device, the one or more shared parameters based on the feedback values.
25. A non-transitory, computer-readable medium with instructions stored thereon, wherein the instructions are executable by a processor to perform a method, comprising:
receiving a plurality of datasets, wherein each dataset of the plurality of datasets is a respective dataset received from a respective client computing device of a plurality of client computing devices, wherein each dataset corresponds to a set of recorded values, and wherein each dataset comprises objects;
determining a respective list of identifiers for each dataset and a composite list of identifiers comprising a combination of the lists of identifiers of each dataset of the plurality of datasets;
determining a set of unique objects from among the plurality of datasets;
selecting a subset of identifiers from the composite list of identifiers;
determining a subset of the set of unique objects corresponding to each identifier in the subset of identifiers;
computing a shared representation of the plurality of datasets based on the subset of the set of unique objects and a shared function having one or more shared parameters;
determining a subset of objects for the respective dataset received from each respective client computing device based on an intersection of the subset of identifiers with the list of identifiers for the respective dataset;
determining a partial representation for the respective dataset received from each respective client computing device based on the subset of objects for the respective dataset and the shared representation;
acquiring one or more feedback values, wherein each of the one or more feedback values is determined based on a relation of predictions of the partial representation to the set of recorded values for the respective dataset;
determining, based on the subsets of objects and the one or more feedback values from the client computing devices, one or more aggregated feedback values; and
updating the one or more shared parameters based on the one or more aggregated feedback values.
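For purposes of illustration only, the following minimal Python sketch walks through the managing-device operations recited above: combining identifier lists, merging unique objects, computing a shared representation for a selected subset via a shared function, forming per-client partial representations by identifier intersection, and updating the shared parameters from aggregated feedback. The NumPy arrays, the tanh linear layer, the stand-in feedback values, and all variable names are assumptions introduced for this sketch, not features of the claim.

    # Illustrative sketch only; the data, the tanh shared function, and the
    # stand-in feedback below are assumptions, not part of the claim.
    import numpy as np

    rng = np.random.default_rng(0)

    # Each client dataset: rows are objects keyed by identifiers.
    datasets = {
        "client_a": {"ids": ["obj1", "obj2", "obj3"], "X": rng.random((3, 4))},
        "client_b": {"ids": ["obj2", "obj3", "obj4"], "X": rng.random((3, 4))},
    }
    id_lists = {c: d["ids"] for c, d in datasets.items()}

    # Composite list of identifiers: a combination of every client's list.
    composite_ids = sorted(set().union(*id_lists.values()))

    # Set of unique objects: one feature row per identifier.
    unique_objects = {}
    for d in datasets.values():
        for i, obj_id in enumerate(d["ids"]):
            unique_objects.setdefault(obj_id, d["X"][i])

    # Select a subset of identifiers and the corresponding unique objects.
    subset_ids = composite_ids[:2]
    X_subset = np.stack([unique_objects[i] for i in subset_ids])

    # Shared representation from a shared function with shared parameters W.
    W = rng.random((4, 2))
    shared_repr = np.tanh(X_subset @ W)

    # Partial representation per client: the rows whose identifiers fall in
    # the intersection of the subset with that client's identifier list.
    partials = {}
    for c, ids in id_lists.items():
        rows = [k for k, i in enumerate(subset_ids) if i in ids]
        partials[c] = (rows, shared_repr[rows])

    # Stand-in feedback from each client; scatter, aggregate, and update W.
    feedback = {c: rng.random(p[1].shape) for c, p in partials.items()}
    agg = np.zeros_like(shared_repr)
    for c, (rows, _) in partials.items():
        agg[rows] += feedback[c]                    # aggregated feedback values
    W -= 0.01 * (X_subset.T @ (agg * (1 - shared_repr ** 2)))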
1. A method for machine learning, comprising:
receiving, by a managing computing device, a plurality of datasets, wherein each dataset of the plurality of datasets is a respective dataset received from a respective client computing device of a plurality of client computing devices, wherein each dataset corresponds to a set of recorded values, and wherein each dataset comprises objects;
determining, by the managing computing device, a respective list of identifiers for each dataset and a composite list of identifiers comprising a combination of the lists of identifiers of each dataset of the plurality of datasets;
determining, by the managing computing device, a set of unique objects from among the plurality of datasets;
selecting, by the managing computing device, a subset of identifiers from the composite list of identifiers;
determining, by the managing computing device, a subset of the set of unique objects corresponding to each identifier in the subset of identifiers;
computing, by the managing computing device, a shared representation of the plurality of datasets based on the subset of the set of unique objects and a shared function having one or more shared parameters;
determining, by the managing computing device, a subset of objects for the respective dataset received from each respective client computing device based on an intersection of the subset of identifiers with the list of identifiers for the respective dataset;
determining, by the managing computing device, a partial representation for the respective dataset received from each respective client computing device based on the subset of objects for the respective dataset and the shared representation;
acquiring, by the managing computing device, one or more feedback values, wherein each of the one or more feedback values is determined as a change in the partial representation that corresponds to an improvement in a set of predicted values as compared to the set of recorded values corresponding to the respective dataset;
determining, by the managing computing device, based on the subsets of objects and the one or more feedback values from the client computing devices, one or more aggregated feedback values; and
updating, by the managing computing device, the one or more shared parameters based on the one or more aggregated feedback values.
26. A memory with a model stored thereon, wherein the model is generated according to a method, comprising:
receiving, by a managing computing device, a plurality of datasets, wherein each dataset of the plurality of datasets is a respective dataset received from a respective client computing device of a plurality of client computing devices, wherein each dataset corresponds to a set of recorded values, and wherein each dataset comprises objects;
determining, by the managing computing device, a respective list of identifiers for each dataset and a composite list of identifiers comprising a combination of the lists of identifiers of each dataset of the plurality of datasets;
determining, by the managing computing device, a set of unique objects from among the plurality of datasets;
selecting, by the managing computing device, a subset of identifiers from the composite list of identifiers;
determining, by the managing computing device, a subset of the set of unique objects corresponding to each identifier in the subset of identifiers;
computing, by the managing computing device, a shared representation of the plurality of datasets based on the subset of the set of unique objects and a shared function having one or more shared parameters;
determining, by the managing computing device, a subset of objects for the respective dataset received from each respective client computing device based on an intersection of the subset of identifiers with the list of identifiers for the respective dataset;
determining, by the managing computing device, a partial representation for the respective dataset received from each respective client computing device based on the subset of objects for the respective dataset and the shared representation;
acquiring, by the managing computing device, one or more feedback values, wherein each of the one or more feedback values is determined based on a relation of predictions of the partial representation to the set of recorded values for the respective dataset;
determining, by the managing computing device, based on the subsets of objects and the one or more feedback values from the client computing devices, one or more aggregated feedback values;
updating, by the managing computing device, the one or more shared parameters based on the one or more aggregated feedback values; and
storing, by the managing computing device, the shared representation, the shared function, and the one or more shared parameters on the memory.
27. A method for machine learning, comprising:
transmitting, by a first client computing device to a managing computing device, a first dataset corresponding to the first client computing device,
wherein the first dataset is one of a plurality of datasets transmitted to the managing computing device by a plurality of client computing devices,
wherein each dataset corresponds to a set of recorded values, and
wherein each dataset comprises objects;
receiving, by the first client computing device, a first subset of objects for the first dataset and a first partial representation for the first dataset, wherein the first subset of objects is a subset of the objects of the first dataset that forms part of a shared representation of the plurality of datasets, wherein the shared representation is defined by a shared function having one or more shared parameters, and wherein the first partial representation for the first dataset is based on the first subset of objects and the shared representation;
determining, by the first client computing device, a first set of predicted values corresponding to the first dataset, wherein the first set of predicted values is based on the first partial representation and a first individual function with one or more first individual parameters corresponding to the first dataset;
determining, by the first client computing device, a first error for the first dataset based on a first individual loss function for the first dataset, the first set of predicted values corresponding to the first dataset, the first subset of objects, and non-empty entries in the set of recorded values corresponding to the first dataset;
updating, by the first client computing device, the one or more first individual parameters for the first dataset;
determining, by the first client computing device, one or more feedback values, wherein the one or more feedback values are used to determine a change in the first partial representation that corresponds to an improvement in the first set of predicted values; and
transmitting, by the first client computing device to the managing computing device, the one or more feedback values,
wherein the one or more feedback values are usable by the managing computing device along with subsets of objects from the plurality of client computing devices to determine one or more aggregated feedback values, and
wherein the one or more aggregated feedback values are usable by the managing computing device to update the one or more shared parameters.
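For illustration only, a minimal sketch of the client-side steps of claim 27 follows, assuming a linear first individual function, a squared-error first individual loss function, and NaN entries marking empty recorded values; all of these choices and names are assumptions rather than features of the claim.

    # Illustrative sketch only; linear individual function, squared-error loss,
    # and NaN-for-empty entries are assumptions, not part of the claim.
    import numpy as np

    rng = np.random.default_rng(1)

    partial_repr = rng.random((3, 2))    # first partial representation (received)
    B = rng.random((2, 2))               # one or more first individual parameters
    recorded = np.array([[1.0, np.nan],  # set of recorded values; NaN marks an
                         [np.nan, 0.0],  # empty entry excluded from the error
                         [0.5, 1.0]])

    # First set of predicted values from the first individual function.
    predicted = partial_repr @ B

    # First error over the non-empty entries only.
    mask = ~np.isnan(recorded)
    residual = np.where(mask, predicted - np.nan_to_num(recorded), 0.0)
    error = 0.5 * np.sum(residual ** 2)

    # Feedback values: a change in the partial representation that would
    # improve the predictions (here, the negative gradient).
    feedback = -(residual @ B.T)

    # Update the first individual parameters, then transmit `feedback`.
    B -= 0.01 * (partial_repr.T @ residual)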
2. The method of
3. The method of
4. The method of
5. The method of
determining, by the respective client computing device, a set of predicted values corresponding to the respective dataset, wherein the set of predicted values is based on the partial representation and an individual function with one or more individual parameters corresponding to the respective dataset;
determining, by the respective client computing device, an error for the respective dataset based on an individual loss function for the respective dataset, the set of predicted values corresponding to the respective dataset, the subset of objects, and non-empty entries in the set of recorded values corresponding to the respective dataset;
updating, by the respective client computing device, the one or more individual parameters for the respective dataset; and
determining, by the respective client computing device, the one or more feedback values, wherein the one or more feedback values are used to determine a change in the partial representation that corresponds to an improvement in the set of predicted values.
6. The method of
7. The method of
creating, by the managing computing device, a composite set of objects that is a combination of the objects from each dataset; and
removing, by the managing computing device, duplicate objects from the composite set of objects based on an intersection of the lists of identifiers for each of the plurality of datasets.
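For illustration only, a short sketch of the de-duplication of claim 7 follows; keying objects by string identifiers and keeping the first copy of any object shared between datasets are assumptions of the sketch.

    # Illustrative sketch only; string identifiers are an assumed keying scheme.
    def merge_unique_objects(datasets):
        """datasets: {client: {identifier: object}} -> one object per identifier."""
        composite = {}
        for per_client in datasets.values():
            for identifier, obj in per_client.items():
                # Identifiers in the intersection of the lists keep only the
                # first copy, removing duplicates from the composite set.
                composite.setdefault(identifier, obj)
        return composite

    merged = merge_unique_objects({
        "a": {"obj1": [1, 2], "obj2": [3, 4]},
        "b": {"obj2": [3, 4], "obj3": [5, 6]},
    })
    assert sorted(merged) == ["obj1", "obj2", "obj3"]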
8. The method of
identifying, by the respective client computing device, which of the non-empty entries in the set of recorded values corresponding to the respective dataset corresponds to an object in the subset of objects;
determining, by the respective client computing device, a partial error value for each of the identified non-empty entries in the set of recorded values corresponding to the respective dataset by applying the individual loss function between each identified non-empty entry and its corresponding predicted value in the set of predicted values corresponding to the respective dataset; and
combining, by the respective client computing device, the partial error values.
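For illustration only, the error computation of claim 8 may be sketched as below, with a squared-error loss standing in for the individual loss function and NaN marking empty entries; both are assumptions of the sketch.

    # Illustrative sketch only; squared error and NaN-for-empty are assumptions.
    import numpy as np

    def dataset_error(recorded, predicted, loss=lambda y, p: (y - p) ** 2):
        """Combine per-entry partial errors over the non-empty recorded values."""
        mask = ~np.isnan(recorded)               # identify non-empty entries
        partial_errors = loss(recorded[mask], predicted[mask])
        return partial_errors.sum()              # combine the partial error values

    recorded = np.array([[1.0, np.nan], [0.0, 0.5]])
    predicted = np.array([[0.8, 0.3], [0.1, 0.5]])
    print(dataset_error(recorded, predicted))    # 0.04 + 0.01 + 0.0 ~= 0.05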
9. The method of
calculating, by the managing computing device, a final shared representation of the plurality of datasets based on the set of unique objects, the shared function, and the one or more shared parameters; and
transmitting, by the managing computing device, the final shared representation of the plurality of datasets to each of the client computing devices.
10. The method of
11. The method of
receiving, by the respective client computing device, the subset of objects for the respective dataset;
determining, by the respective client computing device, a final partial representation for the respective dataset based on the subset of objects and the final shared representation; and
determining, by the respective client computing device, the final set of predicted values corresponding to the respective dataset based on the final partial representation, the individual function, and the one or more individual parameters corresponding to the respective dataset.
12. The method of
13. The method of
14. The method of
wherein each of the plurality of datasets is represented by a tensor, and
wherein at least one of the plurality of datasets is represented by a sparse tensor.
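For illustration only, one way to realize claim 14 is sketched below, with a SciPy CSR matrix standing in for a sparse tensor; the choice of format is an assumption.

    # Illustrative sketch only; SciPy's CSR format is an assumed representation.
    import numpy as np
    from scipy.sparse import csr_matrix

    dense_dataset = np.array([[1, 0, 1],         # a small dense tensor
                              [0, 1, 0]])
    sparse_dataset = csr_matrix(
        (np.ones(3), ([0, 1, 1], [2, 0, 4])), shape=(2, 5))

    # Only the three non-zero entries of the 2x5 sparse dataset are stored.
    print(sparse_dataset.nnz, sparse_dataset.toarray())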
15. The method of
selecting, by the managing computing device, an additional subset of identifiers from the composite list of identifiers;
determining, by the managing computing device, an additional subset of the set of unique objects corresponding to each identifier in the additional subset of identifiers;
computing, by the managing computing device, a revised shared representation of the plurality of datasets based on the additional subset of the set of unique objects and the shared function having the one or more shared parameters;
determining, by the managing computing device, an additional subset of objects for the respective dataset of each client computing device based on an intersection of the additional subset of identifiers with the list of identifiers for the respective dataset;
determining, by the managing computing device, a revised partial representation for the respective dataset of each client computing device based on the additional subset of objects for the respective dataset and the revised shared representation;
transmitting, by the managing computing device, to each of the client computing devices:
the additional subset of objects for the respective dataset; and
the revised partial representation for the respective dataset;
receiving, by the managing computing device, one or more revised feedback values from at least one of the client computing devices, wherein the one or more revised feedback values are determined by the client computing devices by:
determining, by the respective client computing device, a revised set of predicted values corresponding to the respective dataset, wherein the revised set of predicted values is based on the revised partial representation and the individual function with the one or more individual parameters corresponding to the respective dataset;
determining, by the respective client computing device, a revised error for the respective dataset based on the individual loss function for the respective dataset, the revised set of predicted values corresponding to the respective dataset, the additional subset of objects, and the non-empty entries in the set of recorded values corresponding to the respective dataset;
updating, by the respective client computing device, the one or more individual parameters for the respective dataset; and
determining, by the respective client computing device, the one or more revised feedback values, wherein the one or more revised feedback values are used to determine a change in the revised partial representation that corresponds to an improvement in the set of predicted values;
determining, by the managing computing device, based on the additional subsets of objects and the one or more revised feedback values, one or more revised aggregated feedback values;
updating, by the managing computing device, the one or more shared parameters based on the one or more revised aggregated feedback values; and
determining, by the managing computing device based on the one or more revised aggregated feedback values, that an aggregated error corresponding to the revised errors for all respective datasets has been minimized.
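For illustration only, the repeated rounds of claim 15 may be sketched as a loop that re-samples a subset of identifiers, recomputes the representations, applies the aggregated feedback, and stops when successive aggregated errors stop changing; the linear model, the learning rate, the tolerance, and the folding of all client computations into a single process are assumptions of the sketch.

    # Illustrative sketch only; linear model, learning rate, tolerance, and
    # in-process "clients" are assumptions, not part of the claim.
    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.random((8, 4))       # unique objects pooled from all datasets
    Y = rng.random((8, 2))       # recorded values, fully observed here
    W = rng.random((4, 2))       # one or more shared parameters

    prev_error, tol = np.inf, 1e-6
    for round_no in range(5000):
        rows = rng.choice(8, size=4, replace=False)  # additional subset of ids
        repr_ = X[rows] @ W                          # revised shared representation
        residual = repr_ - Y[rows]                   # clients' revised errors
        W -= 0.05 * (X[rows].T @ residual)           # revised aggregated feedback

        error = 0.5 * np.sum((X @ W - Y) ** 2)       # aggregated error, all datasets
        if abs(prev_error - error) < tol:            # crude test for "minimized"
            break
        prev_error = error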
16. The method of
17. The method of
18. The method of
wherein each of the plurality of datasets comprises at least two dimensions,
wherein a first dimension of each of the plurality of datasets comprises a plurality of chemical compounds,
wherein a second dimension of each of the plurality of datasets comprises descriptors of the chemical compounds,
wherein entries in each of the plurality of datasets correspond to a binary indication of whether a respective chemical compound exhibits a respective descriptor,
wherein each of the sets of recorded values corresponding to each of the plurality of datasets comprises at least two dimensions,
wherein a first dimension of each of the sets of recorded values comprises the plurality of chemical compounds,
wherein a second dimension of each of the sets of recorded values comprises activities of the chemical compounds in a plurality of biological assays, and
wherein entries in each of the sets of recorded values correspond to a binary indication of whether a respective chemical compound exhibits a respective activity.
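As a purely illustrative toy instance of the data layout recited in claim 18 (every compound name, descriptor, and activity value below is invented for the example):

    # Toy instance only; all values below are invented for illustration.
    import numpy as np

    compounds = ["compound_A", "compound_B", "compound_C"]   # first dimension
    descriptors = ["has_carboxyl", "has_aromatic_ring"]      # second dimension

    dataset = np.array([[1, 1],      # binary: does the compound exhibit
                        [1, 0],      # the descriptor?
                        [0, 1]])

    assays = ["assay_1", "assay_2"]                          # recorded values
    recorded = np.array([[1, 0],     # binary: was the compound active
                         [1, 1],     # in the assay?
                         [0, 0]])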
19. The method of
calculating, by the managing computing device, a final shared representation of the plurality of datasets based on the set of unique objects, the shared function, and the one or more shared parameters; and
transmitting, by the managing computing device, the final shared representation of the plurality of datasets to each of the client computing devices,
wherein the final shared representation of the plurality of datasets is usable by each of the client computing devices to determine a final set of predicted values corresponding to the respective dataset, and
wherein the final set of predicted values is used by at least one of the client computing devices to identify one or more effective treatment compounds among the plurality of chemical compounds.
20. The method of
wherein each of the plurality of datasets comprises at least two dimensions,
wherein a first dimension of each of the plurality of datasets comprises a plurality of patients,
wherein a second dimension of each of the plurality of datasets comprises descriptors of the patients,
wherein entries in each of the plurality of datasets correspond to a binary indication of whether a respective patient exhibits a respective descriptor,
wherein each of the sets of recorded values corresponding to each of the plurality of datasets comprises at least two dimensions,
wherein a first dimension of each of the sets of recorded values comprises the plurality of patients,
wherein a second dimension of each of the sets of recorded values comprises clinical diagnoses of the patients, and
wherein entries in each of the sets of recorded values correspond to a binary indication of whether a respective patient exhibits a respective clinical diagnosis.
21. The method of
calculating, by the managing computing device, a final shared representation of the plurality of datasets based on the set of unique objects, the shared function, and the one or more shared parameters; and
transmitting, by the managing computing device, the final shared representation of the plurality of datasets to each of the client computing devices,
wherein the final shared representation of the plurality of datasets is usable by each of the client computing devices to determine a final set of predicted values corresponding to the respective dataset, and
wherein the final set of predicted values is used by at least one of the client computing devices to diagnose at least one of the plurality of patients.
22. The method of
wherein each of the sets of predicted values corresponding to one of the plurality of datasets corresponds to a predicted value tensor,
wherein the predicted value tensor is factored into a first tensor multiplied by a second tensor, and
wherein the first tensor corresponds to the respective dataset multiplied by the one or more shared parameters.
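For illustration only, a worked instance of the factorization of claim 22 follows; the small matrices are arbitrary stand-ins.

    # Worked toy instance of the factorization; all matrices are arbitrary.
    import numpy as np

    X = np.arange(6.0).reshape(2, 3)    # a dataset: 2 objects x 3 features
    W = np.ones((3, 2))                 # one or more shared parameters
    B = np.array([[1.0, 0.0],
                  [0.0, 2.0]])          # second tensor (individual parameters)

    first_tensor = X @ W                # the dataset times the shared parameters
    predicted = first_tensor @ B        # the predicted value tensor
    # first_tensor == [[3, 3], [12, 12]]; predicted == [[3, 6], [12, 24]]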
23. The method of
24. The method of
Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.
Machine learning is a branch of computer science that seeks to automate the building of an analytical model. In machine learning, algorithms are used to create models that “learn” using datasets. Once “taught,” the machine-learned models may be used to make predictions about other datasets, including future datasets. Machine learning has proven useful for developing models in a variety of fields. For example, machine learning has been applied to computer vision, statistics, data analytics, bioinformatics, deoxyribonucleic acid (DNA) sequence identification, marketing, linguistics, economics, advertising, speech recognition, gaming, etc.
Machine learning involves training the model on a set of data, usually called “training data.” Approaches to training may be divided into two main subclasses: supervised learning and unsupervised learning.
In supervised learning, training data may include a plurality of datasets for which the outcome is known. For example, training data in the area of image recognition may correspond to images depicting certain objects which have been labeled (e.g., by a human) as containing a specific type of object (e.g., a dog, a pencil, a car, etc.). Such training data may be referred to as “labeled training data.”
In unsupervised learning, the training data may not necessarily correspond to a known value or outcome. As such, the training data may be “unlabeled.” Because the outcome for each piece of training data is unknown, the machine learning algorithm may infer a function from the training data. As an example, the function may be weighted based on one or more dimensions within the training data. Further, the function may be used to make predictions about new data to which the model is applied.
Upon training a model using training data, predictions may be made using the model. The more training data used to train a given model, the more the model may be refined and the more accurate the model may become. A common goal in machine learning is thus to obtain the most robust and reliable model possible while using the least amount of training data.
In some cases, additional sources of training data may provide a better-trained machine-learned model. However, in some scenarios, obtaining more training data may not be possible. For example, two corporations may possess respective sets of training data that could collectively be used to train a machine-learned model superior to a model trained on either set of training data individually. However, each corporation may desire that its data remain private (e.g., neither corporation wants to reveal its private data to the other).
The specification and drawings disclose embodiments that relate to secure broker-mediated data analysis and prediction.
The disclosure describes a method for performing joint machine learning using multiple datasets from multiple parties without revealing private information between the multiple parties. Such a method may include multiple client computing devices that transmit respective datasets to a managing computing device. The managing computing device may then combine the datasets, perform a portion of a machine learning algorithm on the combined datasets, and then transmit a portion of the results of the machine learning algorithm back to each of the client computing devices. Each client computing device may then perform another portion of the machine learning algorithm and send its portion of the results back to the managing computing device. Based on the results received from each client computing device, the managing computing device may then perform an additional portion of the machine learning algorithm to update a corresponding machine-learned model. In some cases, the method may be carried out multiple times in an optimization-type or recursive-type manner and/or may be carried out on an on-going basis.
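For illustration only, the round trip described above can be sketched in-process as follows; the linear model, the function boundaries standing in for network messages, and all names are assumptions of the sketch rather than features of the disclosure.

    # Illustrative sketch only; in practice each call below would cross a
    # network boundary between the managing device and a client device.
    import numpy as np

    rng = np.random.default_rng(3)

    def manager_round(X, W, rows):
        """Managing device: partial representation for one client, one round."""
        return X[rows] @ W

    def client_round(partial_repr, y, B):
        """Client: predict, measure error, update B, and return feedback."""
        residual = partial_repr @ B - y
        feedback = -(residual @ B.T)               # desired change in the repr
        B -= 0.05 * (partial_repr.T @ residual)    # update individual parameters
        return feedback

    X, W = rng.random((6, 4)), rng.random((4, 2))  # pooled objects, shared params
    clients = [
        {"rows": [0, 1, 2], "y": rng.random((3, 2)), "B": rng.random((2, 2))},
        {"rows": [3, 4, 5], "y": rng.random((3, 2)), "B": rng.random((2, 2))},
    ]

    for _ in range(50):                            # repeated, as noted above
        for c in clients:
            fb = client_round(manager_round(X, W, c["rows"]), c["y"], c["B"])
            W += 0.05 * (X[c["rows"]].T @ fb)      # aggregate feedback, update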
In a first aspect, the disclosure describes a method. The method includes receiving, by a managing computing device, a plurality of datasets. Each dataset of the plurality of datasets is received from a respective client computing device of a plurality of client computing devices. Each dataset corresponds to a set of recorded values. Each dataset includes objects. The method also includes determining, by the managing computing device, a respective list of identifiers for each dataset and a composite list of identifiers including a combination of the lists of identifiers of each dataset of the plurality of datasets. Further, the method includes determining, by the managing computing device, a list of unique objects from among the plurality of datasets. In addition, the method includes selecting, by the managing computing device, a subset of identifiers from the composite list of identifiers. The method additionally includes determining, by the managing computing device, a subset of the list of unique objects corresponding to each identifier in the subset of identifiers. Still further, the method includes computing, by the managing computing device, a shared representation of the datasets based on the subset of the list of unique objects and a shared function having one or more shared parameters. Even further, the method includes determining, by the managing computing device, a sublist of objects for the respective dataset of each client computing device based on an intersection of the subset of identifiers with the list of identifiers for the respective dataset. Still even further, the method includes determining, by the managing computing device, a partial representation for the respective dataset of each client computing device based on the sublist of objects for the respective dataset and the shared representation. Even yet further, the method includes transmitting, by the managing computing device, to each of the client computing devices the sublist of objects for the respective dataset and the partial representation for the respective dataset. Yet further, the method includes receiving, by the managing computing device, one or more feedback values from at least one of the client computing devices. The one or more feedback values are determined by the client computing devices by determining, by the respective client computing device, a set of predicted values corresponding to the respective dataset. The set of predicted values is based on the partial representation and an individual function with one or more individual parameters corresponding to the respective dataset. In addition, the one or more feedback values are also determined by the client computing devices by determining, by the respective client computing device, an error for the respective dataset based on an individual loss function for the respective dataset, the set of predicted values corresponding to the respective dataset, the sublist of objects, and non-empty entries in the set of recorded values corresponding to the respective dataset. Even further, the one or more feedback values are also determined by the client computing devices by updating, by the respective client computing device, the one or more individual parameters for the respective dataset. 
Still further, the one or more feedback values are also determined by the client computing devices by determining, by the respective client computing device, the one or more feedback values, wherein the one or more feedback values are used to determine a change in the partial representation that corresponds to an improvement in the set of predicted values. The method also includes determining, by the managing computing device, based on the sublists of objects and the one or more feedback values from the client computing devices, one or more aggregated feedback values. Yet still further, the method includes updating, by the managing computing device, the one or more shared parameters based on the one or more aggregated feedback values.
In a second aspect, the disclosure describes a method. The method includes receiving, by a managing computing device, a plurality of datasets. Each dataset of the plurality of datasets is received from a respective client computing device of a plurality of client computing devices. Each dataset corresponds to a set of recorded values, wherein each dataset relates a plurality of chemical compounds to a plurality of descriptors of the chemical compounds. Each dataset includes objects. The method also includes determining, by the managing computing device, a respective list of identifiers for each dataset and a composite list of identifiers including a combination of the lists of identifiers of each dataset of the plurality of datasets. Further, the method includes determining, by the managing computing device, a list of unique objects from among the plurality of datasets. In addition, the method includes selecting, by the managing computing device, a subset of identifiers from the composite list of identifiers. Still further, the method includes determining, by the managing computing device, a subset of the list of unique objects corresponding to each identifier in the subset of identifiers. Additionally, the method includes computing, by the managing computing device, a shared representation of the datasets based on the subset of the list of unique objects and a shared function having one or more shared parameters. Even further, the method includes determining, by the managing computing device, a sublist of objects for the respective dataset of each client computing device based on an intersection of the subset of identifiers with the list of identifiers for the respective dataset. Still even further, the method includes determining, by the managing computing device, a partial representation for the respective dataset of each client computing device based on the sublist of objects for the respective dataset and the shared representation. Even yet further, the method includes transmitting, by the managing computing device, to each of the client computing devices the sublist of objects for the respective dataset and the partial representation for the respective dataset. Yet further, the method includes receiving, by the managing computing device, one or more feedback values from at least one of the client computing devices. The one or more feedback values are determined by the client computing devices by determining, by the respective client computing device, a set of predicted values corresponding to the respective dataset. The set of predicted values is based on the partial representation and an individual function with one or more individual parameters corresponding to the respective dataset. Even further, the one or more feedback values are determined by the client computing devices by determining, by the respective client computing device, an error for the respective dataset based on an individual loss function for the respective dataset, the set of predicted values corresponding to the respective dataset, the sublist of objects, and non-empty entries in the set of recorded values corresponding to the respective dataset. The set of recorded values corresponding to the respective dataset relates the plurality of chemical compounds to activities of the chemical compounds in a plurality of biological assays. 
In addition, the one or more feedback values are determined by the client computing devices by updating, by the respective client computing device, the one or more individual parameters for the respective dataset. Still further, the one or more feedback values are determined by the client computing devices by determining, by the respective client computing device, the one or more feedback values. The one or more feedback values are used to determine a change in the partial representation that corresponds to an improvement in the set of predicted values. The method additionally includes determining, by the managing computing device, based on the sublists of objects and the one or more feedback values from the client computing devices, one or more aggregated feedback values. Yet still further, the method includes updating, by the managing computing device, the one or more shared parameters based on the one or more aggregated feedback values. The shared representation, the shared function, or the one or more shared parameters are usable by at least one of the plurality of client computing devices to identify one or more effective treatment compounds among the plurality of chemical compounds.
In a third aspect, the disclosure describes a method. The method includes receiving, by a managing computing device, a plurality of datasets. Each dataset of the plurality of datasets is received from a respective client computing device of a plurality of client computing devices. Each dataset corresponds to a set of recorded values, wherein each dataset relates a plurality of patients to a plurality of descriptors of the patients. Each dataset includes objects. The method also includes determining, by the managing computing device, a respective list of identifiers for each dataset and a composite list of identifiers including a combination of the lists of identifiers of each dataset of the plurality of datasets. Further, the method includes determining, by the managing computing device, a list of unique objects from among the plurality of datasets. In addition, the method includes selecting, by the managing computing device, a subset of identifiers from the composite list of identifiers. Still further, the method includes determining, by the managing computing device, a subset of the list of unique objects corresponding to each identifier in the subset of identifiers. Additionally, the method includes computing, by the managing computing device, a shared representation of the datasets based on the subset of the list of unique objects and a shared function having one or more shared parameters. Even further, the method includes determining, by the managing computing device, a sublist of objects for the respective dataset of each client computing device based on an intersection of the subset of identifiers with the list of identifiers for the respective dataset. Still even further, the method includes determining, by the managing computing device, a partial representation for the respective dataset of each client computing device based on the sublist of objects for the respective dataset and the shared representation. Even yet further, the method includes transmitting, by the managing computing device, to each of the client computing devices the sublist of objects for the respective dataset and the partial representation for the respective dataset. Yet further, the method includes receiving, by the managing computing device, one or more feedback values from at least one of the client computing devices. The one or more feedback values are determined by the client computing devices by determining, by the respective client computing device, a set of predicted values corresponding to the respective dataset. The set of predicted values is based on the partial representation and an individual function with one or more individual parameters corresponding to the respective dataset. Even further, the one or more feedback values are determined by the client computing devices by determining, by the respective client computing device, an error for the respective dataset based on an individual loss function for the respective dataset, the set of predicted values corresponding to the respective dataset, the sublist of objects, and non-empty entries in the set of recorded values corresponding to the respective dataset. The set of recorded values corresponding to the respective dataset relates the plurality of patients to clinical diagnoses of patients. In addition, the one or more feedback values are determined by the client computing devices by updating, by the respective client computing device, the one or more individual parameters for the respective dataset. 
Still further, the one or more feedback values are determined by the client computing devices by determining, by the respective client computing device, the one or more feedback values. The one or more feedback values are used to determine a change in the partial representation that corresponds to an improvement in the set of predicted values. The method additionally includes determining, by the managing computing device, based on the sublists of objects and the one or more feedback values from the client computing devices, one or more aggregated feedback values. Yet still further, the method includes updating, by the managing computing device, the one or more shared parameters based on the one or more aggregated feedback values. The shared representation, the shared function, or the one or more shared parameters are usable by at least one of the plurality of client computing devices to diagnose one or more of the plurality of patients.
In a fourth aspect, the disclosure describes a method. The method includes receiving, by a managing computing device, a plurality of datasets. Each dataset of the plurality of datasets is received from a respective client computing device of a plurality of client computing devices. Each dataset corresponds to a set of recorded values, wherein each dataset provides a set of book ratings for a plurality of book titles by a plurality of users. Each dataset includes objects. The method also includes determining, by the managing computing device, a respective list of identifiers for each dataset and a composite list of identifiers including a combination of the lists of identifiers of each dataset of the plurality of datasets. Further, the method includes determining, by the managing computing device, a list of unique objects from among the plurality of datasets. In addition, the method includes selecting, by the managing computing device, a subset of identifiers from the composite list of identifiers. Still further, the method includes determining, by the managing computing device, a subset of the list of unique objects corresponding to each identifier in the subset of identifiers. Additionally, the method includes computing, by the managing computing device, a shared representation of the datasets based on the subset of the list of unique objects and a shared function having one or more shared parameters. Even further, the method includes determining, by the managing computing device, a sublist of objects for the respective dataset of each client computing device based on an intersection of the subset of identifiers with the list of identifiers for the respective dataset. Still even further, the method includes determining, by the managing computing device, a partial representation for the respective dataset of each client computing device based on the sublist of objects for the respective dataset and the shared representation. Even yet further, the method includes transmitting, by the managing computing device, to each of the client computing devices the sublist of objects for the respective dataset and the partial representation for the respective dataset. Yet further, the method includes receiving, by the managing computing device, one or more feedback values from at least one of the client computing devices. The one or more feedback values are determined by the client computing devices by determining, by the respective client computing device, a set of predicted values corresponding to the respective dataset. The set of predicted values is based on the partial representation and an individual function with one or more individual parameters corresponding to the respective dataset. Even further, the one or more feedback values are determined by the client computing devices by determining, by the respective client computing device, an error for the respective dataset based on an individual loss function for the respective dataset, the set of predicted values corresponding to the respective dataset, the sublist of objects, and non-empty entries in the set of recorded values corresponding to the respective dataset. The set of recorded values corresponding to the respective dataset provides a set of movie ratings for a plurality of movie titles by the plurality of users. In addition, the one or more feedback values are determined by the client computing devices by updating, by the respective client computing device, the one or more individual parameters for the respective dataset. 
Still further, the one or more feedback values are determined by the client computing devices by determining, by the respective client computing device, the one or more feedback values. The one or more feedback values are used to determine a change in the partial representation that corresponds to an improvement in the set of predicted values. The method additionally includes determining, by the managing computing device, based on the sublists of objects and the one or more feedback values from the client computing devices, one or more aggregated feedback values. Yet still further, the method includes updating, by the managing computing device, the one or more shared parameters based on the one or more aggregated feedback values. The shared representation, the shared function, or the one or more shared parameters are usable by at least one of the plurality of client computing devices to recommend a movie to one or more of the plurality of users.
In a fifth aspect, the disclosure describes a method. The method includes transmitting, by a first client computing device to a managing computing device, a first dataset corresponding to the first client computing device. The first dataset is one of a plurality of datasets transmitted to the managing computing device by a plurality of client computing devices. Each dataset corresponds to a set of recorded values. Each dataset includes objects. The method also includes receiving, by the first client computing device, a first sublist of objects for the first dataset and a first partial representation for the first dataset. The first sublist of objects for the first dataset and the first partial representation for the first dataset are determined by determining, by the managing computing device, a respective list of identifiers for each dataset and a composite list of identifiers that includes a combination of the lists of identifiers of each dataset of the plurality of datasets. The first sublist of objects for the first dataset and the first partial representation for the first dataset are also determined by determining, by the managing computing device, a list of unique objects from among the plurality of datasets. Further, the first sublist of objects for the first dataset and the first partial representation for the first dataset are determined by selecting, by the managing computing device, the subset of identifiers from the composite list of identifiers. Even further, the first sublist of objects for the first dataset and the first partial representation for the first dataset are determined by determining, by the managing computing device, a subset of the list of unique objects corresponding to each identifier in the subset of identifiers. Still further, the first sublist of objects for the first dataset and the first partial representation for the first dataset are determined by computing, by the managing computing device, the shared representation of the plurality of datasets based on the subset of the list of unique objects and a shared function having one or more shared parameters. Even further, the first sublist of objects for the first dataset and the first partial representation for the first dataset are determined by determining, by the managing computing device, the first sublist of objects for the first dataset based on an intersection of the subset of identifiers with the list of identifiers for the first dataset. Yet further, the first sublist of objects for the first dataset and the first partial representation for the first dataset are determined by determining, by the managing computing device, the first partial representation for the first dataset based on the first sublist of objects and the shared representation. Even further, the method includes determining, by the first client computing device, a first set of predicted values corresponding to the first dataset. The first set of predicted values is based on the first partial representation and a first individual function with one or more first individual parameters corresponding to the first dataset. In addition, the method includes determining, by the first client computing device, a first error for the first dataset based on a first individual loss function for the first dataset, the first set of predicted values corresponding to the first dataset, the first sublist of objects, and non-empty entries in the set of recorded values corresponding to the first dataset. 
Still further, the method includes updating, by the first client computing device, the one or more first individual parameters for the first dataset. Yet further, the method includes determining, by the first client computing device, one or more feedback values. The one or more feedback values are used to determine a change in the first partial representation that corresponds to an improvement in the first set of predicted values. Yet still further, the method includes transmitting, by the first client computing device to the managing computing device, the one or more feedback values. The one or more feedback values are usable by the managing computing device along with sublists of objects from the plurality of client computing devices to determine one or more aggregated feedback values. The one or more aggregated feedback values are usable by the managing computing device to update the one or more shared parameters.
In a sixth aspect, the disclosure describes a non-transitory, computer-readable medium with instructions stored thereon. The instructions are executable by a processor to perform a method. The method includes receiving, by a managing computing device, a plurality of datasets. Each dataset of the plurality of datasets is received from a respective client computing device of a plurality of client computing devices. Each dataset corresponds to a set of recorded values. Each dataset includes objects. The method also includes determining, by the managing computing device, a respective list of identifiers for each dataset and a composite list of identifiers including a combination of the lists of identifiers of each dataset of the plurality of datasets. Further, the method includes determining, by the managing computing device, a list of unique objects from among the plurality of datasets. In addition, the method includes selecting, by the managing computing device, a subset of identifiers from the composite list of identifiers. The method additionally includes determining, by the managing computing device, a subset of the list of unique objects corresponding to each identifier in the subset of identifiers. Still further, the method includes computing, by the managing computing device, a shared representation of the datasets based on the subset of the list of unique objects and a shared function having one or more shared parameters. Even further, the method includes determining, by the managing computing device, a sublist of objects for the respective dataset of each client computing device based on an intersection of the subset of identifiers with the list of identifiers for the respective dataset. Still even further, the method includes determining, by the managing computing device, a partial representation for the respective dataset of each client computing device based on the sublist of objects for the respective dataset and the shared representation. Even yet further, the method includes transmitting, by the managing computing device, to each of the client computing devices the sublist of objects for the respective dataset and the partial representation for the respective dataset. Yet further, the method includes receiving, by the managing computing device, one or more feedback values from at least one of the client computing devices. The one or more feedback values are determined by the client computing devices by determining, by the respective client computing device, a set of predicted values corresponding to the respective dataset. The set of predicted values is based on the partial representation and an individual function with one or more individual parameters corresponding to the respective dataset. In addition, the one or more feedback values are also determined by the client computing devices by determining, by the respective client computing device, an error for the respective dataset based on an individual loss function for the respective dataset, the set of predicted values corresponding to the respective dataset, the sublist of objects, and non-empty entries in the set of recorded values corresponding to the respective dataset. Even further, the one or more feedback values are also determined by the client computing devices by updating, by the respective client computing device, the one or more individual parameters for the respective dataset. 
Still further, the one or more feedback values are also determined by the client computing devices by determining, by the respective client computing device, the one or more feedback values, wherein the one or more feedback values are used to determine a change in the partial representation that corresponds to an improvement in the set of predicted values. The method also includes determining, by the managing computing device, based on the sublists of objects and the one or more feedback values from the client computing devices, one or more aggregated feedback values. Yet still further, the method includes updating, by the managing computing device, the one or more shared parameters based on the one or more aggregated feedback values.
In a seventh aspect, the disclosure describes a memory with a model stored thereon. The model is generated according to a method. The method includes receiving, by a managing computing device, a plurality of datasets. Each dataset of the plurality of datasets is received from a respective client computing device of a plurality of client computing devices. Each dataset corresponds to a set of recorded values. Each dataset includes objects. The method also includes determining, by the managing computing device, a respective list of identifiers for each dataset and a composite list of identifiers including a combination of the lists of identifiers of each dataset of the plurality of datasets. Further, the method includes determining, by the managing computing device, a list of unique objects from among the plurality of datasets. In addition, the method includes selecting, by the managing computing device, a subset of identifiers from the composite list of identifiers. The method additionally includes determining, by the managing computing device, a subset of the list of unique objects corresponding to each identifier in the subset of identifiers. Still further, the method includes computing, by the managing computing device, a shared representation of the datasets based on the subset of the list of unique objects and a shared function having one or more shared parameters. Even further, the method includes determining, by the managing computing device, a sublist of objects for the respective dataset of each client computing device based on an intersection of the subset of identifiers with the list of identifiers for the respective dataset. Still even further, the method includes determining, by the managing computing device, a partial representation for the respective dataset of each client computing device based on the sublist of objects for the respective dataset and the shared representation. Even yet further, the method includes transmitting, by the managing computing device, to each of the client computing devices the sublist of objects for the respective dataset and the partial representation for the respective dataset. Yet further, the method includes receiving, by the managing computing device, one or more feedback values from at least one of the client computing devices. The one or more feedback values are determined by the client computing devices by determining, by the respective client computing device, a set of predicted values corresponding to the respective dataset. The set of predicted values is based on the partial representation and an individual function with one or more individual parameters corresponding to the respective dataset. In addition, the one or more feedback values are also determined by the client computing devices by determining, by the respective client computing device, an error for the respective dataset based on an individual loss function for the respective dataset, the set of predicted values corresponding to the respective dataset, the sublist of objects, and non-empty entries in the set of recorded values corresponding to the respective dataset. Even further, the one or more feedback values are also determined by the client computing devices by updating, by the respective client computing device, the one or more individual parameters for the respective dataset. 
Still further, the one or more feedback values are also determined by the client computing devices by determining, by the respective client computing device, the one or more feedback values, wherein the one or more feedback values are used to determine a change in the partial representation that corresponds to an improvement in the set of predicted values. The method also includes determining, by the managing computing device, based on the sublists of objects and the one or more feedback values from the client computing devices, one or more aggregated feedback values. Yet still further, the method includes updating, by the managing computing device, the one or more shared parameters based on the one or more aggregated feedback values. Yet even further, the method includes storing, by the managing computing device, the shared representation, the shared function, and the one or more shared parameters on the memory.
In an eighth aspect, the disclosure describes a method. The method includes receiving, by a managing computing device, a plurality of datasets. Each dataset of the plurality of datasets is received from a respective client computing device of a plurality of client computing devices. Each dataset corresponds to a set of recorded values. Each dataset includes objects. The method also includes determining, by the managing computing device, a respective list of identifiers for each dataset and a composite list of identifiers including a combination of the lists of identifiers of each dataset of the plurality of datasets. Further, the method includes determining, by the managing computing device, a list of unique objects from among the plurality of datasets. In addition, the method includes selecting, by the managing computing device, a subset of identifiers from the composite list of identifiers. The method additionally includes determining, by the managing computing device, a subset of the list of unique objects corresponding to each identifier in the subset of identifiers. Still further, the method includes computing, by the managing computing device, a shared representation of the datasets based on the subset of the list of unique objects and a shared function having one or more shared parameters. Even further, the method includes determining, by the managing computing device, a sublist of objects for the respective dataset of each client computing device based on an intersection of the subset of identifiers with the list of identifiers for the respective dataset. Still even further, the method includes determining, by the managing computing device, a partial representation for the respective dataset of each client computing device based on the sublist of objects for the respective dataset and the shared representation. Even yet further, the method includes transmitting, by the managing computing device, to each of the client computing devices the sublist of objects for the respective dataset and the partial representation for the respective dataset. Yet further, the method includes receiving, by the managing computing device, one or more feedback values from at least one of the client computing devices. The one or more feedback values are determined by the client computing devices by determining, by the respective client computing device, a set of predicted values corresponding to the respective dataset. The set of predicted values is based on the partial representation and an individual function with one or more individual parameters corresponding to the respective dataset. In addition, the one or more feedback values are also determined by the client computing devices by determining, by the respective client computing device, an error for the respective dataset based on an individual loss function for the respective dataset, the set of predicted values corresponding to the respective dataset, the sublist of objects, and non-empty entries in the set of recorded values corresponding to the respective dataset. Even further, the one or more feedback values are also determined by the client computing devices by updating, by the respective client computing device, the one or more individual parameters for the respective dataset. 
Still further, the one or more feedback values are also determined by the client computing devices by determining, by the respective client computing device, the one or more feedback values, wherein the one or more feedback values are used to determine a change in the partial representation that corresponds to an improvement in the set of predicted values. The method also includes determining, by the managing computing device, based on the sublists of objects and the one or more feedback values from the client computing devices, one or more aggregated feedback values. Yet still further, the method includes updating, by the managing computing device, the one or more shared parameters based on the one or more aggregated feedback values. Yet even further, the method includes using, by a computing device, the shared representation, the shared function, or the one or more shared parameters to determine an additional set of predicted values corresponding to a dataset.
In a ninth aspect, the disclosure describes a server device. The server device has instructions stored thereon that, when executed by a processor, perform a method. The method includes receiving a plurality of datasets. Each dataset of the plurality of datasets is received from a respective client computing device of a plurality of client computing devices. Each dataset corresponds to a set of recorded values. Each dataset includes objects. The method also includes determining a respective list of identifiers for each dataset and a composite list of identifiers that includes a combination of the lists of identifiers of each dataset of the plurality of datasets. Further, the method includes determining a list of unique objects from among the plurality of datasets. In addition, the method includes selecting a subset of identifiers from the composite list of identifiers. Still further, the method includes determining a subset of the list of unique objects corresponding to each identifier in the subset of identifiers. The method additionally includes computing a shared representation of the datasets based on the subset of the list of unique objects and a shared function having one or more shared parameters. Even further, the method includes determining a sublist of objects for the respective dataset of each client computing device based on an intersection of the subset of identifiers with the list of identifiers for the respective dataset. Yet further, the method includes determining a partial representation for the respective dataset of each client computing device based on the sublist of objects for the respective dataset and the shared representation. Even still further, the method includes transmitting to each of the client computing devices: the sublist of objects for the respective dataset and the partial representation for the respective dataset. Yet still further, the method includes receiving one or more feedback values from at least one of the client computing devices. The one or more feedback values are determined by the client computing devices by determining, by the respective client computing device, a set of predicted values corresponding to the respective dataset. The set of predicted values is based on the partial representation and an individual function with one or more individual parameters corresponding to the respective dataset. The one or more feedback values are also determined by the client computing devices by determining, by the respective client computing device, an error for the respective dataset based on an individual loss function for the respective dataset, the set of predicted values corresponding to the respective dataset, the sublist of objects, and non-empty entries in the set of recorded values corresponding to the respective dataset. Further, the one or more feedback values are determined by the client computing devices by updating, by the respective client computing device, the one or more individual parameters for the respective dataset. In addition, the one or more feedback values are determined by the client computing devices by determining, by the respective client computing device, the one or more feedback values. The one or more feedback values are used to determine a change in the partial representation that corresponds to an improvement in the set of predicted values. 
Even yet further, the method includes determining, based on the sublists of objects and the one or more feedback values from the client computing devices, one or more aggregated feedback values. Still yet further, the method includes updating the one or more shared parameters based on the one or more aggregated feedback values.
In a tenth aspect, the disclosure describes a server device. The server device has instructions stored thereon that, when executed by a processor, perform a method. The method includes transmitting, to a managing computing device, a first dataset corresponding to the server device. The first dataset is one of a plurality of datasets transmitted to the managing computing device by a plurality of server devices. Each dataset corresponds to a set of recorded values. Each dataset includes objects. The method also includes receiving a first sublist of objects for the first dataset and a first partial representation for the first dataset. The first sublist of objects for the first dataset and the first partial representation for the first dataset are determined by the managing computing device by determining, by the managing computing device, a respective list of identifiers for each dataset and a composite list of identifiers that includes a combination of the lists of identifiers of each dataset of the plurality of datasets. The first sublist of objects for the first dataset and the first partial representation for the first dataset are also determined by the managing computing device by determining, by the managing computing device, a list of unique objects from among the plurality of datasets. Further, the first sublist of objects for the first dataset and the first partial representation for the first dataset are determined by the managing computing device by selecting, by the managing computing device, a subset of identifiers from the composite list of identifiers. In addition, the first sublist of objects for the first dataset and the first partial representation for the first dataset are determined by the managing computing device by determining, by the managing computing device, a subset of the list of unique objects corresponding to each identifier in the subset of identifiers. Still further, the first sublist of objects for the first dataset and the first partial representation for the first dataset are determined by the managing computing device by computing, by the managing computing device, a shared representation of the plurality of datasets based on the subset of the list of unique objects and a shared function having one or more shared parameters. The first sublist of objects for the first dataset and the first partial representation for the first dataset are additionally determined by the managing computing device by determining, by the managing computing device, the first sublist of objects for the first dataset based on an intersection of the subset of identifiers with the list of identifiers for the first dataset. Yet further, the first sublist of objects for the first dataset and the first partial representation for the first dataset are determined by the managing computing device by determining, by the managing computing device, the first partial representation for the first dataset based on the first sublist of objects and the shared representation. Further, the method includes determining a first set of predicted values corresponding to the first dataset. The first set of predicted values is based on the first partial representation and a first individual function with one or more first individual parameters corresponding to the first dataset. 
Additionally, the method includes determining a first error for the first dataset based on a first individual loss function for the first dataset, the first set of predicted values corresponding to the first dataset, the first sublist of objects, and non-empty entries in the set of recorded values corresponding to the first dataset. Even further, the method includes updating the one or more first individual parameters for the first dataset. The method additionally includes determining one or more feedback values. The one or more feedback values are used to determine a change in the first partial representation that corresponds to an improvement in the first set of predicted values. Yet further, the method includes transmitting, to the managing computing device, the one or more feedback values. The one or more feedback values are usable by the managing computing device along with sublists of objects from the plurality of server devices to determine one or more aggregated feedback values. The one or more aggregated feedback values are usable by the managing computing device to update the one or more shared parameters.
In an eleventh aspect, the disclosure describes a system. The system includes a server device. The system also includes a plurality of client devices each communicatively coupled to the server device. The server device has instructions stored thereon that, when executed by a processor, perform a first method. The first method includes receiving a plurality of datasets. Each dataset of the plurality of datasets is received from a respective client device of the plurality of client devices. Each dataset corresponds to a set of recorded values. Each dataset includes objects. The first method also includes determining a respective list of identifiers for each dataset and a composite list of identifiers that includes a combination of the lists of identifiers of each dataset of the plurality of datasets. Additionally, the first method includes determining a list of unique objects from among the plurality of datasets. Further, the first method includes selecting a subset of identifiers from the composite list of identifiers. The first method additionally includes determining a subset of the list of unique objects corresponding to each identifier in the subset of identifiers. The first method further includes computing a shared representation of the datasets based on the subset of the list of unique objects and a shared function having one or more shared parameters. Still further, the first method includes determining a sublist of objects for the respective dataset of each client device based on an intersection of the subset of identifiers with the list of identifiers for the respective dataset. Yet further, the first method includes determining a partial representation for the respective dataset of each client device based on the sublist of objects for the respective dataset and the shared representation. Even further, the first method includes transmitting to each of the client devices: the sublist of objects for the respective dataset and the partial representation for the respective dataset. Each client device has instructions stored thereon that, when executed by a processor, perform a second method. The second method includes determining a set of predicted values corresponding to the respective dataset. The set of predicted values is based on the partial representation and an individual function with one or more individual parameters corresponding to the respective dataset. The second method also includes determining an error for the respective dataset based on an individual loss function for the respective dataset, the set of predicted values corresponding to the respective dataset, the sublist of objects, and non-empty entries in the set of recorded values corresponding to the respective dataset. Further, the second method includes updating the one or more individual parameters for the respective dataset. The second method additionally includes determining one or more feedback values. The one or more feedback values are used to determine a change in the partial representation that corresponds to an improvement in the set of predicted values. The second method further includes transmitting, to the server device, the one or more feedback values. Yet even further, the first method includes determining, based on the sublists of objects and the one or more feedback values from the client devices, one or more aggregated feedback values. Still yet further, the first method includes updating the one or more shared parameters based on the one or more aggregated feedback values.
In a twelfth aspect, the disclosure describes an optimized model. The model is optimized according to a method. The method includes receiving, by a managing computing device, a plurality of datasets. Each dataset of the plurality of datasets is received from a respective client computing device of a plurality of client computing devices. Each dataset corresponds to a set of recorded values. Each dataset includes objects. The method also includes determining, by the managing computing device, a respective list of identifiers for each dataset and a composite list of identifiers that includes a combination of the lists of identifiers of each dataset of the plurality of datasets. Further, the method includes determining, by the managing computing device, a list of unique objects from among the plurality of datasets. The method additionally includes selecting, by the managing computing device, a subset of identifiers from the composite list of identifiers. The method further includes determining, by the managing computing device, a subset of the list of unique objects corresponding to each identifier in the subset of identifiers. Additionally, the method includes computing, by the managing computing device, a shared representation of the datasets based on the subset of the list of unique objects and a shared function having one or more shared parameters. Even further, the method includes determining, by the managing computing device, a sublist of objects for the respective dataset of each client computing device based on an intersection of the subset of identifiers with the list of identifiers for the respective dataset. Yet further, the method includes determining, by the managing computing device, a partial representation for the respective dataset of each client computing device based on the sublist of objects for the respective dataset and the shared representation. Still further, the method includes transmitting, by the managing computing device, to each of the client computing devices: the sublist of objects for the respective dataset and the partial representation for the respective dataset. Even still further, the method includes receiving, by the managing computing device, one or more feedback values from at least one of the client computing devices. The one or more feedback values are determined by the client computing devices by determining, by the respective client computing device, a set of predicted values corresponding to the respective dataset. The set of predicted values is based on the partial representation and an individual function with one or more individual parameters corresponding to the respective dataset. The one or more feedback values are also determined by the client computing devices by determining, by the respective client computing device, an error for the respective dataset based on an individual loss function for the respective dataset, the set of predicted values corresponding to the respective dataset, the sublist of objects, and non-empty entries in the set of recorded values corresponding to the respective dataset. Further, the one or more feedback values are determined by the client computing devices by updating, by the respective client computing device, the one or more individual parameters for the respective dataset. The one or more feedback values are additionally determined by the client computing devices by determining, by the respective client computing device, the one or more feedback values. 
The one or more feedback values are used to determine a change in the partial representation that corresponds to an improvement in the set of predicted values. Even yet further, the method includes determining, by the managing computing device, based on the sublists of objects and the one or more feedback values from the client computing devices, one or more aggregated feedback values. Still even further, the method includes updating, by the managing computing device, the one or more shared parameters based on the one or more aggregated feedback values. Still yet even further, the method includes computing, by the managing computing device, an updated shared representation of the datasets based on the shared function and the one or more updated shared parameters. The updated shared representation corresponds to the optimized model.
In a thirteenth aspect, the disclosure describes a method for machine learning, comprising: receiving, by a managing computing device, a plurality of datasets, wherein each dataset of the plurality of datasets is received from a respective client computing device of a plurality of client computing devices, wherein each dataset corresponds to a set of recorded values, and wherein each dataset comprises objects; determining, by the managing computing device, a respective list of identifiers for each dataset and a composite list of identifiers comprising a combination of the lists of identifiers of each dataset of the plurality of datasets; determining, by the managing computing device, a set of unique objects from among the plurality of datasets; selecting, by the managing computing device, a subset of identifiers from the composite list of identifiers; determining, by the managing computing device, a subset of the set of unique objects corresponding to each identifier in the subset of identifiers; computing, by the managing computing device, a shared representation of the datasets based on the subset of the set of unique objects and a shared function having one or more shared parameters; determining, by the managing computing device, a subset of objects for the respective dataset of each client computing device based on an intersection of the subset of identifiers with the list of identifiers for the respective dataset; determining, by the managing computing device, a partial representation for the respective dataset of each client computing device based on the subset of objects for the respective dataset and the shared representation; acquiring, by the managing computing device, one or more feedback values, wherein each of the one or more feedback values is determined as a change in the partial representation that corresponds to an improvement in a set of predicted values as compared to the set of recorded values for the respective dataset; determining, by the managing computing device, based on the subsets of objects and the one or more feedback values from the client computing devices, one or more aggregated feedback values; and updating, by the managing computing device, the one or more shared parameters based on the one or more aggregated feedback values.
In a fourteenth aspect, the disclosure describes a method for machine learning, comprising: transmitting, by a first client computing device to a managing computing device, a first dataset corresponding to the first client computing device, wherein the first dataset is one of a plurality of datasets transmitted to the managing computing device by a plurality of client computing devices, wherein each dataset corresponds to a set of recorded values, and wherein each dataset comprises objects; receiving, by the first client computing device, a first subset of objects for the first dataset and a first partial representation for the first dataset, wherein the first subset of objects is a subset of the objects of the first dataset that forms part of a shared representation of the plurality of datasets, the shared representation being defined by a shared function having one or more shared parameters, and wherein the first partial representation for the first dataset is based on the first subset of objects and the shared representation; determining, by the first client computing device, a first set of predicted values corresponding to the first dataset, wherein the first set of predicted values is based on the first partial representation and a first individual function with one or more first individual parameters corresponding to the first dataset; determining, by the first client computing device, a first error for the first dataset based on a first individual loss function for the first dataset, the first set of predicted values corresponding to the first dataset, the first subset of objects, and non-empty entries in the set of recorded values corresponding to the first dataset; updating, by the first client computing device, the one or more first individual parameters for the first dataset; determining, by the first client computing device, one or more feedback values, wherein the one or more feedback values are used to determine a change in the first partial representation that corresponds to an improvement in the first set of predicted values; and transmitting, by the first client computing device to the managing computing device, the one or more feedback values, wherein the one or more feedback values are usable by the managing computing device along with subsets of objects from the plurality of client computing devices to determine one or more aggregated feedback values, and wherein the one or more aggregated feedback values are usable by the managing computing device to update the one or more shared parameters.
According to one or more of the above aspects, machine learning based on a plurality of datasets is provided. This implies that a plurality of sources may be used for obtaining training data, such that a robust and reliable machine-learned model may be formed. The machine learning according to at least some of the aspects enables datasets to be provided from a plurality of sources without data leaking from one source to another. Thus, even though a client computing device may provide a dataset relating to private data in order to train the model, the machine learning may be set up such that the private data will not be shared with unauthorized parties, such as competitors that are also providing datasets to the machine learning.
The shared representation is determined based on the plurality of datasets from the respective client computing devices. Thus, a shared representation for the datasets may be formed, which may contribute to forming a model for each of the plurality of datasets. The shared representation may be based on a shared function, shared for the plurality of datasets, that represents relations between objects and features in the datasets. Thus, the shared function may be determined based on a plurality of datasets, enabling the shared function to be determined in a robust and reliable manner. Further, because the partial representations are determined, the contribution of each dataset to the forming of the shared representation may be extracted. Thus, each client computing device may receive back only the partial representation relating to the dataset that it provided. This implies that the client computing device may benefit from the use of the plurality of datasets in forming a robust model of the shared function, while not receiving the full shared representation; the client computing device thus receives no information about relations between objects and features that are not deducible from its own dataset (e.g., relating to features not present in the dataset of the client computing device).
Hence, a client computing device providing datasets to the managing computing device will not share sensitive data with other client computing devices. In fact, the client computing devices may even strip the datasets of sensitive information, so that the datasets may be provided in an encoded format. The datasets may provide information about relations between objects and features, while the managing computing device need not know what physical properties the objects and features represent. Corporations desiring to benefit from the secure machine learning method may have a (secret) agreement on a format in which the datasets are provided, such that the datasets have a common format enabling the managing computing device to determine the shared representation without knowing what physical properties the objects and features represent.
Hence, as set out above, the use of a managing computing device for determining the shared representation enables client computing devices to provide datasets without compromising sensitive data. Thus, although implementation of a machine learning method may very well be within the competence of a host of a client computing device, the datasets may be provided from the client computing devices to the managing computing device for computing the shared representation so as to benefit from a large set of training data.
However, it should also be realized that, in some embodiments, the host of a client computing device may not have the competence or resources to implement a machine learning method. Thus, the managing computing device may be used to provide a possibility to create a machine-learned model to predict relations between objects and features, where a client computing device would otherwise not be able to make such predictions. In such a case, sensitivity of data may not be as important, and the managing computing device may be configured to generate an individual function with one or more individual parameters corresponding to the respective dataset. Thus, the managing computing device may not only develop the one or more shared parameters of the shared function, but may also develop the individual function corresponding to the respective dataset.
For example, the client computing devices may relate to online stores, where each store wants to provide suggestions on merchandise to a user based on a machine learning method. The managing computing device may thus use datasets of a plurality of client computing devices in order to provide a robust determination of the shared function. However, the client computing devices may not want to completely share the information of their datasets. Thus, the managing computing device may develop the individual function for the respective dataset, such that each client computing device benefits from the plurality of datasets in forming the shared function while, even though the managing computing device has access to data of all the datasets, an individual function is still formed for each client computing device based on the contribution of that client computing device's dataset.
Thus, as set out above, the acquiring, by the managing computing device, of one or more feedback values may be achieved by the managing computing device transmitting information to the client computing devices and the client computing devices determining the one or more feedback values. However, as an alternative, the one or more feedback values may be acquired by the managing computing device applying an individual function for the respective dataset to the partial representation of the respective dataset.
According to an embodiment, each dataset comprises objects in a plurality of dimensions, and the objects in at least one of the plurality of dimensions for each of the datasets are of a same type. This implies that the datasets comprise corresponding information such that the managing computing device may easily determine shared relations between the objects so as to form the shared representation. However, it should be realized that different object types may still be used by the managing computing device to develop a shared representation. In some embodiments, using different object types may include normalizing values corresponding to different object types.
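For instance, a minimal sketch in Python (NumPy assumed; the zero-mean, unit-variance scheme shown is just one possible normalization, not one mandated by the disclosure):

    import numpy as np

    def normalize_columns(X: np.ndarray) -> np.ndarray:
        # Scale each column (one dimension of the objects) to zero mean and
        # unit variance so that dimensions holding different object types
        # become comparable.
        mu = X.mean(axis=0)
        sigma = X.std(axis=0)
        sigma[sigma == 0] = 1.0  # guard against constant columns
        return (X - mu) / sigma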
According to an embodiment, the respective list of identifiers for each dataset corresponds to a set of objects for each dataset, wherein the identifiers for the plurality of datasets are defined according to a common syntax. This implies that identifiers for the respective datasets are determined in the same way, so that the managing computing device forms identical identifiers for identical objects and a combined list of identifiers may be formed in which unique objects are easily identified. The managing computing device may provide information on the common syntax to the client computing devices in order to ensure that the client computing devices provide the datasets in a desired format. However, according to an alternative, the client computing devices may agree on a common syntax without the managing computing device. This implies that the client computing devices may provide the datasets in a format that does not reveal what physical properties the objects and features represent, while still ensuring that the datasets are provided in a common manner allowing the managing computing device to determine the shared representation.
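As a minimal, hypothetical illustration of such a common syntax in Python: if the parties agree that an identifier is the SHA-256 digest of a canonicalized object name, identical objects yield identical identifiers at the managing computing device without revealing the names themselves.

    import hashlib

    def to_identifier(object_name: str) -> str:
        # Canonicalize, then hash: identical objects produce identical
        # identifiers regardless of which client computed them, and the
        # managing computing device cannot recover the original name.
        canonical = object_name.strip().lower()
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

    # Two clients encoding the same object independently agree:
    assert to_identifier("Aspirin ") == to_identifier("aspirin")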
A client computing device may be set up to determine feedback values for improving the shared parameters. Since the shared representation is based on a plurality of datasets, the partial representation for a client computing device may predict values for objects for which no recorded value exists. Thus, the client computing device may be configured to take into account only non-empty entries in the set of recorded values when determining the error for the dataset, even though predicted values may be available for further entries.
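A minimal sketch of such masking, assuming the recorded values are held in a NumPy array in which missing entries are stored as NaN (the sentinel choice is an assumption; any marker for empty entries would do):

    import numpy as np

    def masked_error(Y: np.ndarray, Y_hat: np.ndarray) -> float:
        # W marks the non-empty entries of the recorded values Y; only those
        # entries contribute to the error, even though predictions exist for
        # every entry of Y_hat.
        W = ~np.isnan(Y)
        return float(np.sum((Y_hat[W] - Y[W]) ** 2) / W.sum())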
The managing computing device may be configured to transmit the partial representation to the client computing device in order to ensure that the client computing device only receives model information applying to the data provided by that client computing device. However, it should be realized that, in some embodiments, secrecy of data in the datasets may not be of high importance and the client computing devices may be willing to share data. In such a case, the managing computing device may be configured to transmit the shared representation to each of the client computing devices, such that the partial representation may be determined in the client computing device and the processing for determining the partial representations may be distributed to the client computing devices. This may increase processing speed, especially if a large number of client computing devices are providing datasets for the shared representation, as the determining of the partial representations may be distributed to a plurality of client computing devices, which may determine the partial representations in parallel. The managing computing device may be configured to transmit the shared representation, the subset of identifiers, and the list of identifiers for the respective dataset to the respective client computing devices in order to enable the client computing devices to determine the respective partial representations.
Thus, in a fifteenth aspect, the disclosure describes a method for machine learning, comprising: receiving, by a managing computing device, a plurality of datasets, wherein each dataset of the plurality of datasets is received from a respective client computing device of a plurality of client computing devices, wherein each dataset corresponds to a set of recorded values, and wherein each dataset comprises objects; determining, by the managing computing device, a respective list of identifiers for each dataset and a composite list of identifiers comprising a combination of the lists of identifiers of each dataset of the plurality of datasets; determining, by the managing computing device, a set of unique objects from among the plurality of datasets; selecting, by the managing computing device, a subset of identifiers from the composite list of identifiers; determining, by the managing computing device, a subset of the set of unique objects corresponding to each identifier in the subset of identifiers; computing, by the managing computing device, a shared representation of the datasets based on the subset of the set of unique objects and a shared function having one or more shared parameters; transmitting, by the managing computing device, to each of the client computing devices: the shared representation, the subset of identifiers, and the list of identifiers for the respective dataset; receiving, by the managing computing device, one or more feedback values from at least one of the client computing devices, wherein each of the one or more feedback values is determined as a change in a partial representation for the respective dataset that corresponds to an improvement in a set of predicted values as compared to the set of recorded values for the respective dataset; determining, by the managing computing device, based on the subsets of objects and the one or more feedback values from the client computing devices, one or more aggregated feedback values; and updating, by the managing computing device, the one or more shared parameters based on the one or more aggregated feedback values.
The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the figures and the following detailed description.
Example methods and systems are described herein. Any example embodiment or feature described herein is not necessarily to be construed as preferred or advantageous over other embodiments or features. The example embodiments described herein are not meant to be limiting. It will be readily understood that certain aspects of the disclosed systems and methods can be arranged and combined in a wide variety of different configurations, all of which are contemplated herein.
Furthermore, the particular arrangements shown in the figures should not be viewed as limiting. It should be understood that other embodiments might include more or fewer of each element shown in a given figure. In addition, some of the illustrated elements may be combined or omitted. Similarly, an example embodiment may include elements that are not illustrated in the figures.
Example embodiments relate to secure broker-mediated data analysis and prediction. The secure broker-mediated data analysis and prediction may be used to develop a machine learning model (e.g., an artificial neural network) using private data from multiple parties without revealing the private data of one party to another party. One example embodiment described herein relates to a method.
The method may include a managing computing device (e.g., a server) transmitting data to and receiving data from a plurality of client computing devices. The data may be transmitted to and received from the plurality of client computing devices to develop a machine learning model. For example, the managing computing device may receive multiple datasets from multiple client computing devices. Subsequently, the managing computing device may use these datasets to establish an initial version of the machine learning model (e.g., based on a function with initial parameters).
Then, the managing computing device may transmit different portions of the initial version of the machine learning model to different client computing devices. In some embodiments, the managing computing device may send the entire model to the client computing devices, as well as indications of which portion corresponds to a respective client computing device's dataset. Subsequently, the respective client computing device may extract the appropriate portion of the model from the entire model. Upon receiving/extracting a portion of the machine learning model that corresponds to the respective client computing device, each client computing device may update a local machine learning model that is stored within the respective client computing device. Then, each client computing device may use the updated local machine learning model to make a prediction (e.g., compute a set of predicted or expected values).
Upon making the prediction, the respective client computing device may compare the prediction to a set of recorded values stored within the client computing device. Based on the comparison, the client computing device may determine one or more errors with the prediction (and therefore one or more errors with the updated local machine learning model on which the prediction was based). In some embodiments, such errors may be calculated using a loss function. Based on the errors and the portion of the entire machine learning model that corresponds to the respective client computing device, the client computing device may transmit one or more feedback values to the managing computing device indicating that there is an error in a respective portion of the entire machine learning model. Similar feedback may be provided to the managing computing device by one, multiple, or all client computing devices.
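For concreteness, one plausible realization of such feedback (an assumption; the disclosure does not mandate gradient-based feedback or a linear individual function) is for a client to return the gradient of its masked squared error with respect to the received partial representation:

    import numpy as np

    def client_feedback(U_part: np.ndarray, Y: np.ndarray, gamma: np.ndarray):
        # Individual function g: predictions from the partial representation
        # U_part and the individual parameters gamma (linear for simplicity).
        Y_hat = U_part @ gamma
        # Only non-empty recorded values contribute to the residual.
        W = ~np.isnan(Y)
        residual = np.where(W, Y_hat - Y, 0.0)
        E = 0.5 * np.sum(residual ** 2)  # error for the dataset
        C = residual @ gamma.T           # feedback values: dE/dU_part
        return C, E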
Upon receiving the feedback from the client computing devices, the managing computing device may then update the entire machine learning model (e.g., including the function and the parameters) based on the feedback. In such a scenario, the entire machine learning model may be improved and/or refined. The steps outlined above of transmitting portions of the model to the client computing devices, having the client computing devices make and evaluate predictions based on the portions of the model, and then receiving feedback from the client computing devices may be repeated for multiple iterations until the entire machine learning model can no longer be improved (e.g., the predictions made by each client computing device have no error) or until the improvement between each iteration is below a threshold value of improvement (e.g., the errors in the predictions made by each client computing device do not substantially change from one iteration to the next).
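A high-level sketch of that iteration (the broker and client objects and their methods are hypothetical placeholders, and the stopping rule shown is the threshold-on-improvement variant):

    def train(broker, clients, tol=1e-6, max_iters=1000):
        prev_total = float("inf")
        for _ in range(max_iters):
            # Broker: compute and distribute per-client portions of the model.
            portions = broker.compute_partial_representations()
            feedback, errors = [], []
            for client, U_part in zip(clients, portions):
                C, E = client.evaluate(U_part)  # predict, compare, report
                feedback.append(C)
                errors.append(E)
            # Broker: aggregate the feedback and update the shared parameters.
            broker.update_shared_parameters(feedback)
            total = sum(errors)
            if prev_total - total < tol:  # improvement below threshold
                return
            prev_total = total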
Once the training of the entire machine learning model is complete (e.g., the improvement between subsequent iterations is below a threshold value), the machine learning model can be used to make predictions or recommendations. For example, the managing computing device could utilize the machine learning model to make predictions about future events. Additionally or alternatively, once the model is trained, the entire machine learning model (or a portion thereof) may be transmitted to one or more of the client computing devices by the managing computing device. At this point, at least one of the one or more client computing devices may utilize the model to make predictions or recommendations (e.g., recommend a book to one of its users based on the machine learning model). Still further, the model could also be transmitted to one or more third parties who did not provide data used in the training of the machine learning model. Such third parties may be required to pay a fee, join a subscription service, view an advertisement, or log in with validation credentials before being able to view and/or utilize the machine learning model. Additionally, the third parties may utilize the machine learning model to make predictions or recommendations based on their own data (e.g., data other than the data provided to the managing computing device in the training of the machine learning model).
The following description and accompanying drawings will elucidate features of various example embodiments. The embodiments provided are by way of example, and are not intended to be limiting. As such, the dimensions of the drawings are not necessarily to scale.
Client device 102 may be any type of device, such as a personal computer, a laptop computer, a wearable computing device, a wireless computing device, a head-mountable computing device, a mobile telephone, or a tablet computing device, that is configured to transmit data 106 to and/or receive data 108 from a server device 104 in accordance with the embodiments described herein.
Client device 102 may include a user interface, a communication interface, a main processor, and data storage (e.g., memory). The data storage may contain instructions executable by the main processor for carrying out one or more operations relating to the data sent to, or received from, server device 104. The user interface of client device 102 may include buttons, a touchscreen, a microphone, and/or any other elements for receiving inputs, as well as a speaker, one or more displays, and/or any other elements for communicating outputs.
Server device 104 may be any entity or computing device arranged to carry out the server operations described herein. Further, server device 104 may be configured to send data 108 to and/or receive data 106 from the client device 102. In some embodiments, the server device 104 may correspond to a “managing computing device” (e.g., a “broker”). Additionally or alternatively, in some embodiments, the server device 104 may correspond to one or more “client computing devices” (e.g., a “data-supplying party”).
Data 106 and data 108 may take various forms. For example, data 106 and 108 may represent packets transmitted by client device 102 or server device 104, respectively, as part of one or more communication sessions. Such a communication session may include packets transmitted on a signaling plane (e.g., session setup, management, and teardown messages), and/or packets transmitted on a media plane (e.g., text, graphics, audio, and/or video data).
Regardless of the exact architecture, the operations of client device 102, server device 104, and any other operations associated with this architecture may be carried out by one or more computing devices, such as computing device 200 described below.
In this example, computing device 200 includes a processor 202, a data storage 204, a network interface 206, and an input/output function 208, all of which may be coupled by a system bus 210 or a similar mechanism. Processor 202 can include one or more CPUs, such as one or more general purpose processors and/or one or more dedicated processors (e.g., application specific integrated circuits (ASICs), digital signal processors (DSPs), network processors, etc.).
Data storage 204, in turn, may include volatile and/or non-volatile data storage devices and can be integrated in whole or in part with processor 202. Data storage 204 can hold program instructions, executable by processor 202, and data that may be manipulated by such program instructions to carry out the various methods, processes, or operations described herein. Alternatively, these methods, processes, or operations can be defined by hardware, firmware, and/or any combination of hardware, firmware, and software. By way of example, the data in data storage 204 may contain program instructions, perhaps stored on a non-transitory, computer-readable medium, executable by processor 202 to carry out any of the methods, processes, or operations disclosed in this specification or the accompanying drawings. The data storage 204 may include non-volatile memory (e.g., a read-only memory, ROM) and/or volatile memory (e.g., random-access memory, RAM), in various embodiments. For example, the data storage 204 may include a hard drive (e.g., hard disk), flash memory, a solid-state drive (SSD), electrically erasable programmable read-only memory (EEPROM), dynamic random-access memory (DRAM), and/or static random-access memory (SRAM). It will be understood that other types of transitory or non-transitory data storage devices are possible and contemplated within the scope of the present disclosure.
Network interface 206 may take the form of a wireline connection, such as an Ethernet, Token Ring, or T-carrier connection. Network interface 206 may also take the form of a wireless connection, such as IEEE 802.11 (WiFi), BLUETOOTH®, BLUETOOTH LOW ENERGY (BLE)®, or a wide-area wireless connection. However, other forms of physical layer connections and other types of standard or proprietary communication protocols may be used over network interface 206. Furthermore, network interface 206 may include multiple physical interfaces.
Input/output function 208 may facilitate user interaction with example computing device 200. Input/output function 208 may comprise multiple types of input devices, such as a keyboard, a mouse, a touch screen, and so on. Similarly, input/output function 208 may comprise multiple types of output devices, such as a screen, monitor, printer, or one or more light emitting diodes (LEDs). Additionally or alternatively, example computing device 200 may support remote access from another device, via network interface 206 or via another interface (not shown), such as a universal serial bus (USB) or high-definition multimedia interface (HDMI) port.
In some embodiments, one or more computing devices may be deployed in a networked architecture. The exact physical location, connectivity, and configuration of the computing devices may be unknown and/or unimportant to client devices. Accordingly, the computing devices may be referred to as “cloud-based” devices that may be housed at various remote locations.
For example, server devices 306 can be configured to perform various computing tasks of computing device 200. Thus, computing tasks can be distributed among one or more of server devices 306. To the extent that these computing tasks can be performed in parallel, such a distribution of tasks may reduce the total time to complete these tasks and return a result. For purposes of simplicity, both server cluster 304 and individual server devices 306 may be referred to as “a server device.” This nomenclature should be understood to imply that one or more distinct server devices, data storage devices, and cluster routers may be involved in server device operations.
Cluster data storage 308 may be data storage arrays that include disk array controllers configured to manage read and write access to groups of hard disk drives. The disk array controllers, alone or in conjunction with server devices 306, may also be configured to manage backup or redundant copies of the data stored in cluster data storage 308 to protect against disk drive failures or other types of failures that prevent one or more of server devices 306 from accessing units of cluster data storage 308.
Cluster routers 310 may include networking equipment configured to provide internal and external communications for the server clusters. For example, cluster routers 310 may include one or more packet-switching and/or routing devices configured to provide (i) network communications between server devices 306 and cluster data storage 308 via cluster network 312, and/or (ii) network communications between the server cluster 304 and other devices via communication link 302 to network 300.
Additionally, the configuration of cluster routers 310 can be based at least in part on the data communication requirements of server devices 306 and cluster data storage 308, the latency and throughput of the local cluster networks 312, the latency, throughput, and cost of communication link 302, and/or other factors that may contribute to the cost, speed, fault-tolerance, resiliency, efficiency and/or other design goals of the system architecture.
As a possible example, cluster data storage 308 may include any form of database, such as a structured query language (SQL) database. Various types of data structures may store the information in such a database, including but not limited to tables, arrays, lists, trees, and tuples. Furthermore, any databases in cluster data storage 308 may be monolithic or distributed across multiple physical devices.
Server devices 306 may be configured to transmit data to and receive data from cluster data storage 308. This transmission and retrieval may take the form of SQL queries or other types of database queries, and the output of such queries, respectively. Additional text, images, video, and/or audio may be included as well. Furthermore, server devices 306 may organize the received data into web page representations. Such a representation may take the form of a markup language, such as the hypertext markup language (HTML), the extensible markup language (XML), or some other standardized or proprietary format. Moreover, server devices 306 may have the capability of executing various types of computerized scripting languages, such as but not limited to Perl, Python, PHP Hypertext Preprocessor (PHP), Active Server Pages (ASP), JavaScript, and so on. Computer program code written in these languages may facilitate the provision of web pages to client devices, as well as client device interaction with the web pages.
Also as illustrated, client computing device A 402 and client computing device B 404 may be communicatively uncoupled from one another. In this way, privacy of communications between a respective client computing device and the managing computing device 406 may be preserved. In some embodiments, additional technologies to preserve privacy may be implemented (e.g., a private key/public key encryption mechanism for communications between a client computing device and the managing computing device 406).
In alternate embodiments, client computing device A 402 and client computing device B 404 may be communicatively coupled to one another. This may allow a sharing of some or all of a client computing device's data with another client computing device. In still other embodiments, there may be any number of client computing devices communicating with the managing computing device 406 (e.g., more than two client computing devices). For example, three or more client computing devices may provide data to and receive data from the managing computing device 406.
In an example embodiment, the third-party computing device 405 need not be directly involved in the determination of a machine learning model (e.g., shared representation U) as described with regard to method 400 herein. In such scenarios, third-party computing device 405 need not carry out all of the operations of method 400, as described herein. Rather, the third-party computing device 405 could be a recipient of information based on the machine learning model determined using other operations of method 400.
In various embodiments described herein, some operations are described as being performed by managing computing device(s) and/or client computing device(s). It is understood that operations performed by a single managing computing device could be spread over multiple managing computing devices (e.g., to decrease computational time). Similarly, operations performed by a single client computing device could instead be performed by a collection of computing devices that are each part of the single client computing device. Even further, multiple operations are described herein as being performed by a client computing device (e.g., client computing device A 402). It is also understood that any operation performed on a single client computing device could, but need not, be mirrored and performed on one or more other client computing devices (e.g., for respective datasets/sets of recorded values). Similarly, some operations are described herein as being performed by multiple client computing devices, but it is understood that in alternate embodiments, only one client computing device may perform the operation (e.g., transmitting one or more feedback values to the managing computing device may only be performed by one client computing device, in some embodiments). Even further, some client computing devices may store multiple datasets and/or multiple sets of recorded values, and therefore may perform actions corresponding to multiple computing devices disclosed herein. Still further, in some embodiments, the managing computing device may be integrated with one or more of the client computing devices (e.g., depending on the level of privacy required by the interacting parties/client computing devices).
The methods and systems described herein may be described using mathematical syntax (e.g., to represent variables in order to succinctly describe processes). It is understood that a variety of mathematical syntax, including mathematical syntax different from that used herein, supplemental to that used herein, or no mathematical syntax at all, may be employed when describing the methods and systems disclosed herein. Further, it is understood that the mathematical syntax employed may be based on the given machine-learning model used (e.g., syntax used to represent an artificial neural network model may be different than syntax used to represent a support vector machine model). By way of example, the following table summarizes the mathematical syntax used herein to illustrate the methods and systems of the disclosure (e.g., the mathematical syntax used in reference to the method 400 described below):
Variable      Description of Variable
U             Shared Representation
f             Shared Function
β             One or More Shared Parameters
              Weight Tensor Representing the One or More Shared Parameters
XA            Dataset A
XB            Dataset B
d             Number of Dimensions in a Dataset
YA            Set of Recorded Values Corresponding to Dataset A
YB            Set of Recorded Values Corresponding to Dataset B
MA            List of Identifiers for Dataset A
MB            List of Identifiers for Dataset B
MComb         Composite List of Identifiers
XComb         List of Unique Objects
S             Subset of Identifiers
Z             Subset of the List of Unique Objects
SA            Sublist of Objects for Dataset A
SB            Sublist of Objects for Dataset B
UA            Partial Representation for Dataset A
UB            Partial Representation for Dataset B
ŶA            Set of Predicted Values Corresponding to Dataset A
ŶB            Set of Predicted Values Corresponding to Dataset B
gA            Individual Function Corresponding to Dataset A
gB            Individual Function Corresponding to Dataset B
γA            One or More Individual Parameters Corresponding to Dataset A
γB            One or More Individual Parameters Corresponding to Dataset B
              Weight Tensor Representing the One or More Individual Parameters
EA            Error for Dataset A
EB            Error for Dataset B
LA            Individual Loss Function for Dataset A
LB            Individual Loss Function for Dataset B
WA            Non-Empty Entries in the Set of Recorded Values for Dataset A
WB            Non-Empty Entries in the Set of Recorded Values for Dataset B
CA            One or More Feedback Values Corresponding to Dataset A
CB            One or More Feedback Values Corresponding to Dataset B
CComb         One or More Aggregated Feedback Values
UFinal        Final Shared Representation
ŶA            Final Set of Predicted Values Corresponding to Dataset A
ŶB            Final Set of Predicted Values Corresponding to Dataset B
UFinal        Final Partial Representation for Dataset A
UFinal        Final Partial Representation for Dataset B
              Tensor Representing Values for First-Level Neurons
              Tensor Representing the Values for Mid-Level Neurons
              Tensor Representing Values for Upper-Level Neurons
IA            Individual Representation for Dataset A
IB            Individual Representation for Dataset B
RA            Individual Learning Rate Corresponding to Dataset A
RB            Individual Learning Rate Corresponding to Dataset B
η             Shared Learning Rate
              Feedback Tensor Corresponding to Dataset A
              Feedback Tensor Corresponding to Dataset B
              Tensor Representing the One or More Aggregated Feedback Values
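Read as code, the syntax above might map onto the following toy setup (a hedged sketch: all shapes, the identifier overlap, and the tanh-based shared function are assumptions made for illustration; the sketch is continued alongside the operation descriptions below):

    import numpy as np

    rng = np.random.default_rng(0)
    d = 4                                  # number of dimensions in a dataset
    X_A = rng.normal(size=(6, d))          # dataset A: 6 objects
    X_B = rng.normal(size=(5, d))          # dataset B: 5 objects
    M_A = [f"id{i}" for i in range(6)]     # list of identifiers for dataset A
    M_B = [f"id{i}" for i in range(3, 8)]  # list of identifiers for dataset B
    beta = rng.normal(size=(d, 3))         # one or more shared parameters
    f = lambda Z, b: np.tanh(Z @ b)        # shared function: U = f(Z, beta)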
According to an example embodiment, the operations of the method 400 will be enumerated below with reference to client computing device A 402, client computing device B 404, and the managing computing device 406.
At operation 412, the method 400 may include client computing device A 402 transmitting dataset A (XA) to the managing computing device 406. Dataset A (XA) may correspond to a set of recorded values (YA). Further, dataset A (XA) may include objects. In alternate embodiments, client computing device A 402 may also transmit the corresponding set of recorded values (YA) to the managing computing device 406 (e.g., in addition to dataset A (XA)).
At operation 414, the method 400 may include client computing device B 404 transmitting dataset B (XB) to the managing computing device 406. Dataset B (XB) may correspond to a set of recorded values (YB). Further, dataset B (XB) may include objects. In alternate embodiments, client computing device B 404 may also transmit the corresponding set of recorded values (YB) to the managing computing device 406 (e.g., in addition to dataset B (XB)).
At operation 416, the method 400 may include the managing computing device 406 determining a list of identifiers for dataset A (MA) and a list of identifiers for dataset B (MB).
At operation 418, the method 400 may include the managing computing device 406 determining a composite list of identifiers (MComb, e.g., representing the “combined” list of identifiers). The composite list of identifiers may include a combination of the lists of identifiers for dataset A (MA) and dataset B (MB).
At operation 420, the method 400 may include the managing computing device 406 determining a list of unique objects (XComb, e.g., representing the “combined” list of objects from the datasets) from dataset A (XA) and dataset B (XB).
At operation 422, the method 400 may include the managing computing device 406 selecting a subset of identifiers (S) from the composite list of identifiers (MComb).
At operation 424, the method 400 may include the managing computing device 406 determining a subset of the list of unique objects (Z) corresponding to each identifier in the subset of identifiers (S).
At operation 426, the method 400 may include the managing computing device 406 computing a shared representation (U) of the datasets (XA/XB) based on the subset (Z) of the list of unique objects (XComb) and a shared function (f) having one or more shared parameters (β).
At operation 428, the method 400 may include the managing computing device 406 determining a sublist of objects (SA/SB) for the respective dataset (XA/XB) of each client computing device. The sublist of objects (SA/SB) may be based on an intersection of the subset of identifiers (S) with the list of identifiers (MA/MB) for the respective dataset (XA/XB).
At operation 430, the method 400 may include the managing computing device 406 determining a partial representation (UA/UB) for the respective dataset (XA/XB) of each client computing device based on the sublist of objects (SA/SB) for the respective dataset (XA/XB) and the shared representation (U).
At operation 432, the method 400 may include the managing computing device 406 transmitting the sublist of objects (SA) for dataset A (XA) and the partial representation (UA) for dataset A (XA) to client computing device A 402.
At operation 434, the method 400 may include the managing computing device 406 transmitting the sublist of objects (SB) for dataset B (XB) and the partial representation (UB) for dataset B (XB) to client computing device B 404. It is understood that, in some embodiments, operation 434 may also occur before, or substantially simultaneously with, operation 432.
In alternate embodiments, instead of operations 428, 430, 432, and 434, the managing computing device 406 may transmit: the shared representation (U), the subset of identifiers (S), and the list of identifiers (MA) for dataset A (XA) to client computing device A 402; and the shared representation (U), the subset of identifiers (S), and the list of identifiers (MB) for dataset B (XB) to client computing device B 404. Then, each respective client computing device may determine, for itself, the corresponding sublist of objects (SA/SB) for its respective dataset (XA/XB). The sublist of objects (SA) for dataset A (XA) may be based on an intersection of the subset of identifiers (S) with the list of identifiers (MA) for dataset A (XA). The sublist of objects (SB) for dataset B (XB) may be based on an intersection of the subset of identifiers (S) with the list of identifiers (MB) for dataset B (XB). Further, each respective client computing device may determine, for itself, the partial representation (UA/UB) for its respective dataset (XA/XB). The partial representation (UA) for dataset A (XA) may be based on the sublist of objects (SA) and the shared representation (U). The partial representation (UB) for dataset B (XB) may be based on the sublist of objects (SB) and the shared representation (U). A sketch of this alternate embodiment is shown below.
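By way of illustration only, the following Python sketch shows one way a client computing device might determine its sublist of objects and its partial representation from the subset of identifiers (S), its own list of identifiers (M), and the shared representation (U). All function and variable names here are illustrative assumptions rather than part of the disclosure.

```python
import numpy as np

def client_partial_representation(U, S, M):
    """Determine a client's sublist of objects and partial representation.

    U: shared representation, one row per identifier in the subset S.
    S: subset of identifiers selected by the managing computing device.
    M: the client's own list of identifiers for its dataset.
    """
    client_ids = set(M)
    # Sublist of objects: identifiers in S that the client actually holds.
    S_client = [s for s in S if s in client_ids]
    # Partial representation: the rows of U whose identifiers fall in S_client.
    row_of = {s: i for i, s in enumerate(S)}
    U_client = U[[row_of[s] for s in S_client], :]
    return S_client, U_client

# Toy usage: five sampled identifiers, three of which belong to this client.
U = np.arange(15, dtype=float).reshape(5, 3)   # shared representation (5 x 3)
S = ["id1", "id2", "id3", "id4", "id5"]        # subset of identifiers
M_A = ["id2", "id4", "id5", "id9"]             # client A's list of identifiers
S_A, U_A = client_partial_representation(U, S, M_A)
print(S_A)         # ['id2', 'id4', 'id5']
print(U_A.shape)   # (3, 3)
```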
At operation 436A, the method 400 may include client computing device A 402 determining a set of predicted values (ŶA) that corresponds to dataset A (XA). The set of predicted values may be based on the partial representation (UA) and an individual function (gA) with one or more individual parameters (γA) corresponding to dataset A (XA).
At operation 436B, the method 400 may include client computing device B 404 determining a set of predicted values (ŶB) that corresponds to dataset B (XB). The set of predicted values may be based on the partial representation (UB) and an individual function (gB) with one or more individual parameters (γB) corresponding to dataset B (XB).
At operation 438A, the method 400 may include client computing device A 402 determining an error (EA) for dataset A (XA). The error (EA) may be based on an individual loss function (LA) for dataset A (XA), the set of predicted values (ŶA) that corresponds to dataset A (XA), the sublist of objects (SA), and non-empty entries (WA) in the set of recorded values (YA) corresponding to dataset A (XA).
At operation 438B, the method 400 may include client computing device B 404 determining an error (EB) for dataset B (XB). The error (EB) may be based on an individual loss function (LB) for dataset B (XB), the set of predicted values (ŶB) that corresponds to dataset B (XB), the sublist of objects (SB), and non-empty entries (WB) in the set of recorded values (YB) corresponding to dataset B (XB).
At operation 440A, the method 400 may include client computing device A 402 updating the one or more individual parameters (γA) for dataset A (XA).
At operation 440B, the method 400 may include client computing device B 404 updating the one or more individual parameters (γB) for dataset B (XB).
At operation 442A, the method 400 may include client computing device A 402 determining one or more feedback values (CA). The one or more feedback values (CA) may be used to determine a change in the partial representation (UA) that corresponds to an improvement in the set of predicted values (ŶA).
At operation 442B, the method 400 may include client computing device B 404 determining one or more feedback values (CB). The one or more feedback values (CB) may be used to determine a change in the partial representation (UB) that corresponds to an improvement in the set of predicted values (ŶB).
At operation 444, the method 400 may include client computing device A 402 transmitting the one or more feedback values (CA) to the managing computing device 406.
At operation 446, the method 400 may include client computing device B 404 transmitting the one or more feedback values (CB) to the managing computing device 406. In some embodiments, operation 444 may occur after, or substantially simultaneously with, operation 446.
In some embodiments (e.g., embodiments including more than two datasets and/or more than two client computing devices), only a subset of the client computing devices may transmit one or more feedback values (e.g., CA/CB) to the managing computing device 406. In such embodiments, the managing computing device 406 may update the one or more shared parameters (β) based only on the one or more feedback values provided, rather than on all possible sets of one or more feedback values from all client computing devices. Alternatively, in embodiments where only a subset of the client computing devices transmit one or more feedback values to the managing computing device 406, the managing computing device 406 may transmit, to each client computing device that did not provide feedback values, a request for one or more feedback values.
At operation 448, the method 400 may include the managing computing device 406 determining one or more aggregated feedback values (CComb). The one or more aggregated feedback values (CComb) may be based on the sublists of objects (SA/SB) corresponding to dataset A (XA) and dataset B (XB), respectively, and the one or more feedback values (CA/CB) from client computing device A 402 and client computing device B 404, respectively.
At operation 450, the method 400 may include the managing computing device 406 updating the one or more shared parameters (β) based on the one or more aggregated feedback values (CComb). In some embodiments (e.g., embodiments where the one or more aggregated feedback values (CComb) correspond to back-propagated errors), the one or more shared parameters (β) may be updated according to a gradient descent method.
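For illustration, a minimal Python sketch of such a gradient-descent update follows, under the assumption that the aggregated feedback values (CComb) are already the back-propagated gradient of the total error with respect to β; the names and shapes are illustrative assumptions.

```python
import numpy as np

def update_shared_parameters(beta, c_comb, eta):
    """One gradient-descent step on the shared parameters (cf. operation 450).

    beta:   weight tensor representing the one or more shared parameters.
    c_comb: aggregated feedback values, assumed here to already be the
            gradient of the total error with respect to beta.
    eta:    shared learning rate.
    """
    return beta - eta * c_comb

beta = np.full((8, 3), 0.5)                            # illustrative weights
c_comb = np.random.default_rng(0).normal(size=(8, 3))  # illustrative gradient
beta = update_shared_parameters(beta, c_comb, eta=0.01)
```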
Optionally (e.g., in embodiments that include the third-party computing device 405), as illustrated in
In yet other embodiments, the method may include the managing computing device 406 transmitting only a portion of the shared function (f) and/or only a portion of the one or more shared parameters (β) to the client computing devices 402/404 and/or the third-party computing device 405. The portions received by the client computing devices 402/404 or the third-party computing device 405 may be different. For example, each client computing device may only receive the portion of the one or more shared parameters (β) that corresponds to the portion of the shared representation (U) that was developed according to the dataset (XA/XB) supplied by the client computing device.
Additionally or alternatively, the method may include the managing computing device 406 computing a final shared representation (UFinal) of the datasets (XA/XB). Such a final shared representation (UFinal) may be based on the list of unique objects (XComb), the shared function (f), and the one or more shared parameters (β). Further, the method may include the managing computing device 406 transmitting the final shared representation (UFinal) of the datasets (XA/XB) to client computing device A 402 and/or client computing device B 404. The final shared representation (UFinal) may be usable by each client computing device (or a subset of the client computing devices) to determine a final set of predicted values (ŶAFinal/ŶBFinal) corresponding to its respective dataset (XA/XB).
In some embodiments, multiple operations may be repeated one or more times to further refine the machine-learned model. For example, operations 422, 424, 426, 428, 430, 432, 434, 436A, 436B, 438A, 438B, 440A, 440B, 442A, 442B, 444, 446, 448, and 450 may be repeated one or more times to refine the machine-learned model. When these operations are repeated, they may be repeated such that each determining/selecting operation includes determining/selecting additional subsets of data (e.g., possibly different from the selections in the previous iteration).
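As a sketch of this repetition, the loop below re-samples a fresh subset of identifiers each round; the per-round model updates (operations 426-450) are represented only by a comment, and all names are illustrative assumptions rather than part of the disclosure.

```python
import numpy as np

rng = np.random.default_rng(seed=42)   # pseudo-random subset selection

def sample_subset(m_comb, x_comb, subset_size):
    """Sample a subset of identifiers (S) and its objects (Z); cf. operations 422-424."""
    s = list(rng.choice(m_comb, size=subset_size, replace=False))
    z = np.stack([x_comb[ident] for ident in s])
    return s, z

m_comb = [f"id{i}" for i in range(100)]            # composite list of identifiers
x_comb = {m: rng.normal(size=8) for m in m_comb}   # toy unique objects
for round_index in range(5):                       # repeated refinement rounds
    s, z = sample_subset(m_comb, x_comb, subset_size=16)
    # ... operations 426-450 (shared/partial representations, feedback,
    #     and parameter updates) would run here using s and z ...
```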
In some embodiments, the method may also include the managing computing device 406 removing (“purging”) the list of unique objects (XComb), the shared representation (U), the lists of identifiers (MA/MB) for each dataset (XA/XB), and the composite list of identifiers (MComb) from a memory (e.g., a data storage 204).
As illustrated, dataset A (XA) includes four objects in a first dimension (e.g., “Timmy,” “Johnny,” “Sally,” and “Sue”). These objects may correspond to user profiles of an online book-rating and movie-rating platform (e.g., including a database) run by client computing device A 402 or associated with client computing device A 402, for example. Such user profiles may be identified by their profile name, their user identification (ID) number, a hash code, a social security number, or an internet protocol (IP) address associated with a user profile. Other methods of identifying user profiles are also possible.
In a second dimension of dataset A (XA), there may be seven features corresponding to each of the four objects (e.g., “Book Title 1,” “Book Title 2,” “Book Title 3,” “Book Title 4,” “Book Title 5,” “Book Title 6,” and “Book Title 7”). Such features may alternatively be identified by their 10-digit or 13-digit international standard book number (ISBN), for example, or by a corresponding uniform resource locator (URL) from the book-rating and movie-rating platform.
The entries corresponding to each pair of (object, feature) may correspond to a rating value (e.g., ranging from 0-100). For example, user “Sally” may have rated “Book Title 4” with a rating of “17.” These entries may be scaled such that they all have the same range (e.g., some of the ratings may have originally been issued on a 0-10 scale and then rescaled to the 0-100 range so that they are normalized to the other rating values).
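A sketch of the rescaling just described (e.g., mapping a 0-10 rating onto the common 0-100 range); the function name is an illustrative assumption.

```python
def rescale(rating, old_min, old_max, new_min=0.0, new_max=100.0):
    """Linearly map a rating from its original range onto a common range."""
    return new_min + (rating - old_min) * (new_max - new_min) / (old_max - old_min)

print(rescale(7, 0, 10))    # 70.0: a 7-out-of-10 rating on the 0-100 scale
```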
In alternate embodiments, other types of objects and/or features may be used within dataset A (XA). Further, in other embodiments, other numbers of objects and/or features may be used. In addition, in some embodiments, other types of data entries may be used. Additionally or alternatively, in some embodiments, there may be greater than two dimensions (d) for dataset A (XA).
As illustrated, the corresponding set of recorded values (YA) includes the same four objects in a first dimension (e.g., “Timmy,” “Johnny,” “Sally,” and “Sue”) as the four objects in dataset A (XA). As with above, these objects may correspond to user profiles of an online book-rating and movie-rating platform run by client computing device A 402 or associated with client computing device A 402, for example. Again, such user profiles may be identified by their profile name, their user ID number, a hash code, a social security number, or an IP address associated with a user profile. Other methods of identifying user profiles are also possible.
In a second dimension of the corresponding set of recorded values (YA), there may be five features corresponding to each of the four objects (e.g., “Movie Title 1,” “Movie Title 2,” “Movie Title 3,” “Movie Title 4,” and “Movie Title 5”). Such features may alternatively be identified by their barcode or a corresponding URL from the book-rating and movie-rating platform.
The entries corresponding to each pair of (object, feature) in the corresponding set of recorded values (YA) may correspond to a rating value (e.g., ranging from 0-100). For example, object “Johnny” may have rated feature “Movie Title 2” with a value of “18.” These entries may be scaled such that they all have the same range (e.g., some of the ratings may have originally been issued on a 0-10 scale and then rescaled to the 0-100 range so that they are normalized to the other rating values).
In alternate embodiments, other types of objects and/or features may be used within the corresponding set of recorded values (YA). Further, in other embodiments, other numbers of objects and/or features may be used. In addition, in some embodiments, other types of data entries may be used. Additionally or alternatively, in some embodiments, there may be greater than two dimensions (d) for the corresponding set of recorded values (YA). The number of dimensions (d) in the corresponding set of recorded values (YA) may be the same as the number of dimensions (d) in dataset A (XA). Alternate embodiments of datasets and corresponding sets of recorded values are illustrated and described further with reference to
As illustrated, the number of features in additional dimensions of dataset B (XB) and the corresponding set of recorded values (YB) may be the same as the number of features in additional dimensions of dataset A (XA) and its corresponding set of recorded values (YA). As illustrated, there are seven features in the additional dimension of dataset B (XB) (e.g., “Book Title 1,” “Book Title 2,” “Book Title 3,” “Book Title 4,” “Book Title 5,” “Book Title 6,” and “Book Title 7”). As shown, these features are the same as the features of dataset A (XA) illustrated in
Still further, the number of features in the corresponding set of recorded values (YB) may be different than the number of features in the corresponding set of recorded values (YA). As illustrated, there are nine features in the additional dimension of the corresponding set of recorded values (YB) (e.g., “Movie Title 1,” “Movie Title 3,” “Movie Title 4,” “Movie Title 5,” “Movie Title 6,” “Movie Title 7,” “Movie Title 8,” “Movie Title 9,” and “Movie Title 10”). As shown, some of these features are the same as the features of the corresponding set of recorded values (YA) illustrated in
Additionally, in some embodiments, the numbers of dimensions (d) in dataset B (XB) and the corresponding set of recorded values (YB) may be different from the numbers of dimensions (d) in dataset A (XA) and the corresponding set of recorded values (YA).
Additionally or alternatively, in some embodiments, at least one of the corresponding sets of recorded values may be represented by a tensor. In addition, in some embodiments, at least one of the corresponding sets of recorded values may be represented by a sparse tensor, as sketched below.
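As one way of realizing such a sparse representation, the sketch below stores only the non-empty entries of a set of recorded values using SciPy's coordinate-format sparse matrix; the particular values and shapes are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import coo_matrix

# Only observed (object, feature) ratings are stored; all else is empty.
rows = np.array([0, 1, 3])              # object indices (e.g., user profiles)
cols = np.array([2, 1, 4])              # feature indices (e.g., movie titles)
vals = np.array([55.0, 18.0, 90.0])     # recorded rating values
y_a = coo_matrix((vals, (rows, cols)), shape=(4, 5))

print(y_a.nnz)         # 3 non-empty entries (cf. the set W_A)
print(y_a.toarray())   # dense view; unrated entries appear as 0
```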
In some embodiments, there may be more than two pairs of datasets and corresponding sets of recorded values. Additional pairs of datasets and sets of recorded values may be transmitted by and/or stored within client computing device A 402 or client computing device B 404. Additionally or alternatively, additional pairs of datasets and sets of recorded values may be transmitted by and/or stored within additional client computing devices (e.g., client computing devices not illustrated).
In various embodiments, the data represented by the datasets and the corresponding sets of recorded values may vary. For example, the method 400 may be used in a variety of applications, each application potentially corresponding to a different set of data stored within the datasets and different corresponding sets of recorded values. This is described and illustrated further with reference to
Determining, by the managing computing device 406, a respective list of identifiers for each dataset (XA/XB) may include various operations in various embodiments. In some embodiments, the lists of identifiers (MA/MB) may be provided to the managing computing device 406 by the respective client computing devices (computing device A 402/computing device B 404). In other embodiments, the managing computing device 406 may define the unique IDs based on an algorithm of the managing computing device 406. In still other embodiments, determining, by the managing computing device, the lists of identifiers (MA/MB) may include receiving commercially available identifiers for the objects (e.g., ISBNs for books or social security numbers for patients). Such commercially available identifiers for the objects may be provided to the managing computing device 406 by a third party (e.g., a domain name registrar may provide corresponding IP addresses for objects corresponding to domain names). Regardless of the technique used to define the unique IDs in the lists of identifiers (MA/MB), the unique IDs may be defined such that each unique ID is defined according to a common syntax, thereby enabling a creation of a composite list of identifiers (MComb).
In some embodiments, two or more datasets may include the same object (e.g., the object “Timmy” may appear in both dataset A (XA) and dataset B (XB)).
In some embodiments, operation 420 may be performed before operation 418. For example, the objects from dataset A (XA) and the objects from dataset B (XB), and any other dataset in embodiments having greater than two datasets, may be combined prior to creating a composite list of identifiers (MComb). Once combined, repeated (i.e., “duplicate”) objects in the combined set of objects may be removed, thereby forming the list of unique objects (XComb). The repeated objects may be removed based on an intersection of the lists of identifiers (MA/MB) for each of the plurality of datasets (XA/XB). Then, based on the list of unique objects, the composite list of identifiers (MComb) may be generated.
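A sketch of this alternate ordering in Python: objects are combined first, duplicates are dropped by identifier, and the composite list of identifiers falls out of the deduplicated mapping. Names are illustrative assumptions.

```python
def build_unique_objects(datasets):
    """Combine datasets and drop duplicate objects, keyed by identifier.

    datasets: list of {identifier: object} mappings, one per client dataset.
    Returns the composite list of identifiers and the list of unique objects.
    """
    x_comb = {}
    for data in datasets:
        for ident, obj in data.items():
            x_comb.setdefault(ident, obj)   # first occurrence wins; repeats dropped
    m_comb = list(x_comb)                   # composite list of identifiers
    return m_comb, x_comb

x_a = {"timmy": [1, 0], "sally": [0, 1]}
x_b = {"timmy": [1, 0], "bob": [1, 1]}      # "timmy" appears in both datasets
m_comb, x_comb = build_unique_objects([x_a, x_b])
print(m_comb)   # ['timmy', 'sally', 'bob']
```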
Connecting the first-level neurons 1102 to the mid-level neurons 1104 may include using the first-level neurons 1102 as inputs to the mid-level neurons 1104. Using the first-level neurons 1102 as inputs to the mid-level neurons 1104 may include multiplying a value of each first-level neuron 1102 by one or more shared parameters (β) (i.e., one or more “weights”). These one or more shared parameters (β) may be stored as a tensor (e.g., a “weight matrix” in two-dimensional examples). For example, in
U := f(Z; β)

where the shared function (f), parameterized by the one or more shared parameters (β), maps the subset (Z) of the list of unique objects to the shared representation (U).
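For illustration, the sketch below realizes the shared function (f) as a single fully connected layer whose weight tensor holds the one or more shared parameters (β); the tanh activation and the layer sizes are illustrative assumptions, not prescribed by the disclosure.

```python
import numpy as np

def shared_function(z, beta):
    """U := f(Z; beta), here a single fully connected layer."""
    return np.tanh(z @ beta)   # tanh is an assumed activation, not prescribed

rng = np.random.default_rng(1)
z = rng.normal(size=(16, 8))      # 16 sampled objects with 8 features each
beta = rng.normal(size=(8, 3))    # weight tensor: 8 inputs to 3 neurons
u = shared_function(z, beta)
print(u.shape)                    # (16, 3): one row of U per sampled object
```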
In some embodiments, prior to computing the shared representation (U), the shared function (f) and the one or more shared parameters (β) (e.g., stored within a “weight” tensor in artificial neural network embodiments) may be initialized by the managing computing device 406. For example, the shared function (f) and the one or more shared parameters (β) may be initialized based on a related shared function used to model a similar relationship (e.g., if the shared function (f) is being used to model a relationship between risk factors of a given insured individual and a given premium for providing life insurance to that individual, the shared function (f) and the one or more shared parameters (β) may be set based on a previous model developed to model a relationship between risk factors of a given insured individual and a given premium for providing health insurance to that individual). Initializing the shared function (f) and the one or more shared parameters (β) based on a related shared function may include setting the shared function (f) and the one or more shared parameters (β) equal to the related shared function and the one or more shared parameters for the related shared function. Alternatively, initializing the shared function (f) and the one or more shared parameters (β) based on a related shared function may include scaling and/or normalizing the related shared function and the one or more shared parameters for the related shared function, and then using the scaled and/or normalized values as initial values for the shared function (f) and the one or more shared parameters (β).
In alternate embodiments, the managing computing device 406 initializing the shared function (f) or the one or more shared parameters (β) may include the managing computing device 406 receiving initial values for the one or more shared parameters (β) from one of the client computing devices (e.g., computing device A 402 or computing device B 404). The managing computing device 406 may initialize the shared function (f) and the one or more shared parameters (β) based on the initial values received. The initial values for the one or more shared parameters (β) may be determined by the respective client computing device (e.g., client computing device A 402 or client computing device B 404) based upon one or more public models (e.g., publicly available machine-learned models).
In still other embodiments, the managing computing device 406 may initialize the shared function (f) and the one or more shared parameters (β) based on a random number generator or a pseudo-random number generator (e.g., a Mersenne Twister pseudo-random number generator). For example, the managing computing device 406 may select random values for each of the one or more shared parameters (β) within a corresponding weight tensor of an artificial neural network. In yet other embodiments, the managing computing device 406 may initialize the shared function (f) and the one or more shared parameters (β) such that the one or more shared parameters (β) are each at the midpoint of all possible values for the one or more shared parameters (β). For example, if the one or more shared parameters (β) represent a weight tensor (and each parameter of the one or more shared parameters (β) represents a weight), all values of the one or more shared parameters (β) may be initialized to 0.5 (the midpoint between 0.0 and 1.0) by the managing computing device 406.
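The initialization strategies above might be sketched as follows; note that NumPy's legacy RandomState generator is in fact a Mersenne Twister, and the shapes are illustrative assumptions.

```python
import numpy as np

shape = (8, 3)   # illustrative shape for the weight tensor

# Pseudo-random initialization (NumPy's RandomState uses the Mersenne Twister).
mt = np.random.RandomState(seed=123)
beta_random = mt.uniform(0.0, 1.0, size=shape)

# Midpoint initialization: every weight starts at 0.5, the midpoint of 0.0-1.0.
beta_midpoint = np.full(shape, 0.5)
```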
In example embodiments employing an artificial neural network model, the sets of predicted values (ŶA/ŶB) may be determined as described below.
Variables described below that use the subscript ‘i’ are intended to reference the corresponding dataset. For example, the subscript ‘i’ may be replaced with an ‘A’ when referring to any variable corresponding to dataset A (XA) and may be replaced with a ‘B’ when referring to any variable corresponding to dataset B (XB). Hence, the individual function (gi) corresponding to a given dataset may be replaced with variable names (gi→gA/gB), where appropriate. Similar replacement may be used for the partial representation for a given dataset (Ui→UA/UB) and the individual parameters for a given dataset (γi→γA/γB).
As described above, the shared representation (U) may be defined by the shared function (f) and the objects in the subset (Z) (e.g., U:=f(Z;β)). Further, the shared function (f) may correspond to one or more layers of an artificial neural network model, as illustrated in
Ii := gi(Ui; γi)

where the individual function (gi), parameterized by the one or more individual parameters (γi), maps the partial representation (Ui) for the respective dataset to the individual representation (Ii).
In some embodiments, prior to computing an individual representation (Ii), the individual function (gi) and the one or more individual parameters (γi) (e.g., stored within a “weight” tensor in artificial neural network embodiments) may be initialized by the respective client computing device. The respective client computing devices may initialize the individual function (gi) and the one or more individual parameters (γi) corresponding to the respective dataset (XA/XB) based on a random number generator or a pseudo-random number generator (e.g., a Mersenne Twister pseudo-random number generator).
For both datasets, the set of recorded values 1502 (YA/YB) and the set of predicted values 1504 (ŶA/ŶB) are illustrated in
The sets of predicted values 1504 (ŶA/ŶB), similar to the sets of recorded values 1502 (YA/YB), may be stored as a tensor (e.g., a matrix). The sets of predicted values 1504 (ŶA/ŶB) may then be compared to the sets of recorded values 1502 (YA/YB) to compute an error for the respective dataset (XA/XB). The error (EA/EB) may be calculated according to an individual loss function (LA/LB). For example, the error (EA/EB) corresponding to each dataset (XA/XB) may be the sum of the individual loss function (LA/LB) evaluated at each non-empty entry (WA/WB) in a tensor representing the set of recorded values 1502 (YA/YB). As illustrated in FIG. 15A, the individual loss function (LA/LB) evaluated for a given entry may be based on the difference 1506 between the predicted value 1504 for that entry and the recorded value 1502 for that entry. Only one difference 1506 is labeled in each of
EA = Σ LA(ŶA(s, w), YA(s, w)), summed over each non-empty entry (s, w) ∈ WA for each object s in the sublist of objects (SA)
The above equation represents that the error (EA) corresponding to dataset A (XA) equals the sum of the individual loss function (LA), evaluated between the entry in the set of predicted values (ŶA) and the entry in the set of recorded values (YA), for each non-empty entry (WA) of each object in the sublist of objects (SA) corresponding to the set of recorded values (YA) for dataset A (XA). The addend in the above sum may be referred to as a “partial error value.” The mathematical representation for dataset B (XB) is analogous with the mathematical representation for dataset A (XA), mutatis mutandis.
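A sketch of this error computation with a quadratic individual loss, summing partial error values over the non-empty entries only; the variable names are illustrative assumptions.

```python
import numpy as np

def dataset_error(y_hat, y, mask):
    """E = sum of the individual loss over the non-empty entries (W).

    y_hat: set of predicted values for the client's sublist of objects.
    y:     set of recorded values.
    mask:  boolean tensor marking the non-empty entries of y.
    """
    diff = y_hat[mask] - y[mask]       # one partial error value per entry
    return float(np.sum(diff ** 2))    # quadratic loss: L(j, k) = (j - k)^2

y = np.array([[55.0, 0.0], [0.0, 17.0]])
mask = y != 0.0                        # here, zero denotes an empty entry
y_hat = np.array([[50.0, 4.0], [9.0, 20.0]])
print(dataset_error(y_hat, y, mask))   # (50-55)^2 + (20-17)^2 = 34.0
```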
In some embodiments, the individual loss functions (LA/LB) may include at least one of: a quadratic loss function, a logarithmic loss function, a hinge loss function, a quantile loss function, or a loss function associated with the Cox proportional hazard model. Further, in some embodiments, different client computing devices may use the same individual loss function (e.g., the individual loss function used by each client computing device may be prescribed by the managing computing device 406), referred to as a “shared loss function.” For example, both client computing device A 402 and client computing device B 404 may use a quadratic loss function for their respective individual loss functions (LA/LB) (e.g., LA(j,k)=(j−k)2). In alternate embodiments, the individual loss functions (LA/LB) corresponding to different datasets (XA/XB) may be different. However, in such embodiments, certain results (e.g., errors (EA/EB)) generated by a dataset's (XA/XB) respective client computing device may be normalized prior to or after transmission to the managing computing device 406, such that the results can be meaningfully compared against one another even though different individual loss functions (LA/LB) were used. In some such embodiments, information about the particular individual loss function (LA/LB) used by a client computing device and/or about a corresponding set of recorded values (YA/YB) may be transmitted to the managing computing device 406. The information about the particular individual loss function (LA/LB) used and/or about the corresponding set of recorded values (YA/YB) may be used by the managing computing device 406 to perform a normalization.
The amount by which the respective one or more individual parameters (γA/γB) are updated may correspond to a respective individual learning rate (RA/RB). The individual learning rates (RA/RB) may be determined by each of the client computing devices independently, in some embodiments. In other embodiments, the individual learning rates (RA/RB) may be determined by the managing computing device 406 and then transmitted to the client computing devices, individually. Alternatively, the individual learning rates (RA/RB) may be inherent in a gradient descent method used by each respective client computing device. In some embodiments, either or both of the individual learning rates (RA/RB) may include at least one of: an exponentially decayed learning rate, a harmonically decayed learning rate, a step-wise exponentially decayed learning rate, or an adaptive learning rate.
In some embodiments, determining one or more feedback values (CA/CB) may correspond to performing, by the respective client computing device (client computing device A 402/client computing device B 404), a gradient descent method. Additionally or alternatively, when determining a change in the respective partial representation (UA/UB) that corresponds to an improvement in the respective sets of predicted values (ŶA/ŶB), the respective sets of predicted values may correspond to a threshold improvement value. The threshold improvement value may be based on a shared learning rate (η). In some embodiments, the shared learning rate (η) may be determined by the managing computing device 406 and transmitted by the managing computing device 406 to each of the client computing devices. Additionally or alternatively, in some embodiments, the shared learning rate (η) may be defined by the gradient descent method used by the client computing devices. For example, the client computing devices may each use the same gradient descent method (e.g., a shared gradient descent method prescribed by the managing computing device 406) that has an associated shared learning rate (η). Further, the shared learning rate (η) may include at least one of: an exponentially decayed learning rate, a harmonically decayed learning rate, or a step-wise exponentially decayed learning rate. In other embodiments, the shared learning rate (η) may include an adaptive learning rate.
In some embodiments, when determining a change in the respective partial representation (UA/UB) that corresponds to an improvement in the respective sets of predicted values (ŶA/ŶB), improvement in the set of predicted values (ŶA/ŶB) corresponds to a threshold improvement value defined by an individual learning rate (RA/RB) that is determined by each of the client computing devices independently. In some embodiments, the individual learning rate (RA/RB) may include at least one of: an exponentially decayed learning rate, a harmonically decayed learning rate, or a step-wise exponentially decayed learning rate.
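The named learning-rate schedules could be realized as in the sketch below; the decay constants are illustrative assumptions.

```python
import math

def exponential_decay(r0, t, k=0.1):
    """Exponentially decayed rate: R(t) = R0 * exp(-k * t)."""
    return r0 * math.exp(-k * t)

def harmonic_decay(r0, t):
    """Harmonically decayed rate: R(t) = R0 / (1 + t)."""
    return r0 / (1.0 + t)

def stepwise_exponential_decay(r0, t, step=10, factor=0.5):
    """Step-wise exponential decay: the rate drops by `factor` every `step` iterations."""
    return r0 * factor ** (t // step)

for t in (0, 10, 20):
    print(exponential_decay(0.1, t),
          harmonic_decay(0.1, t),
          stepwise_exponential_decay(0.1, t))
```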
In some embodiments, the one or more feedback values (CA/CB) may be organized into respective feedback tensors.
In some embodiments, the one or more aggregated feedback values (CComb) may be organized into an aggregated feedback tensor.
In some embodiments, the one or more aggregated feedback values (CComb) may be determined by summing, entry-wise, the one or more feedback values (Ci) received from the client computing devices:

CComb = Σi Ci

where ‘i’ in the above sum is an index corresponding to a respective dataset (e.g., dataset A (XA)).
Additionally or alternatively, in some embodiments, updating the one or more shared parameters (β) based on the one or more aggregated feedback values (CComb) may include evaluating a sum of products of first partial derivatives and second partial derivatives, where each of the first partial derivatives is a partial derivative of the error (Ei) for a respective dataset with respect to the respective partial representation (Ui), and where each of the second partial derivatives is a partial derivative of the respective partial representation (Ui) with respect to the one or more shared parameters (β). Such a sum may be represented mathematically as:

Σi (∂Ei/∂Ui)(∂Ui/∂β)

where ‘i’ in the above sum is an index corresponding to a respective dataset (e.g., dataset A (XA)).
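For illustration, the sketch below aggregates per-client feedback by summing entries that refer to the same object, and then applies the chain rule under the simplifying assumption that the shared function is linear (U = Zβ), so that the gradient with respect to β is Zᵀ CComb. The names and the linearity assumption are illustrative, not part of the disclosure.

```python
import numpy as np

def aggregate_feedback(s, feedback_by_client):
    """Scatter per-client feedback into one aggregated tensor (cf. C_comb).

    s:                  subset of identifiers defining the row order of U.
    feedback_by_client: list of (s_i, c_i) pairs, where c_i holds one row of
                        feedback (dE_i/dU_i) per identifier in s_i.
    """
    width = feedback_by_client[0][1].shape[1]
    c_comb = np.zeros((len(s), width))
    row_of = {ident: r for r, ident in enumerate(s)}
    for s_i, c_i in feedback_by_client:
        for ident, row in zip(s_i, c_i):
            c_comb[row_of[ident]] += row   # objects shared by clients sum up
    return c_comb

# With a linear shared function U = Z @ beta, the chain rule gives
# dE/dbeta = sum_i (dE_i/dU_i)(dU_i/dbeta) = Z.T @ C_comb.
s = ["id1", "id2", "id3"]
z = np.random.default_rng(3).normal(size=(3, 4))
c_a = np.ones((2, 2))                      # feedback from client A
c_b = 0.5 * np.ones((2, 2))                # feedback from client B
c_comb = aggregate_feedback(s, [(["id1", "id2"], c_a), (["id2", "id3"], c_b)])
grad_beta = z.T @ c_comb                   # gradient used to update beta
```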
In some embodiments of both datasets and sets of recorded values, data for values of various entries may be purely binary data, purely textual data (e.g., classification data), or purely numeric data, for example. In alternate embodiments of both datasets and sets of recorded values, data for values of various entries may be a combination of binary data, textual data (e.g., classification data), and numeric data. In still other embodiments, data may be in the form of sound recordings (e.g., used to develop a machine learning model that can be used to perform voice recognition) or images (e.g., used to develop a machine learning model that can be used to perform object recognition).
In some embodiments (e.g., embodiments that are not illustrated), datasets and corresponding sets of recorded values may have greater than two dimensions. In such example embodiments, the datasets and the corresponding sets of recorded values may correspond to n-dimensional tensors (e.g., rather than two-dimensional tensors, i.e., matrices).
In some embodiments, all objects within a given dimension of a dataset or a corresponding set of recorded values may be the same (e.g., all objects within a first dimension of a dataset may be of designated type “user”).
Dimensions from different datasets containing different object types may still be used by the managing computing device 406 to develop a shared representation and to update the one or more shared parameters (β). In some embodiments, using different object types to develop a shared representation may include normalizing (e.g., by the managing computing device 406 or a respective client computing device) values corresponding to the different object types, such that those values can be meaningfully compared with one another.
The method described above, and various embodiments both explicitly and implicitly contemplated herein, may be used in a wide variety of applications. One example application includes a pharmaceutical discovery method. For example, in one embodiment, a first dimension of each of the plurality of datasets (XA/XB) transmitted to a managing computing device may include a plurality of chemical compounds and a second dimension of each of the plurality of datasets (XA/XB) may include descriptors of the chemical compounds (e.g., chemistry-derived fingerprints or descriptors identified via transcriptomics or image screening). In such an embodiment, entries in each of the plurality of datasets (XA/XB) may correspond to a binary indication of whether a respective chemical compound exhibits a respective descriptor. Further, in such an embodiment, a first dimension of each of the sets of recorded values (YA/YB) respectively corresponding to the plurality of datasets (XA/XB) may include the plurality of chemical compounds and a second dimension of each of the sets of recorded values (YA/YB) may include activities of the chemical compounds in a plurality of biological assays (e.g., concentration of a given product of a chemical reaction produced per unit time, fluorescence level, cellular reproduction rate, coloration of solution, pH of solution, or cellular death rate). In such an embodiment, entries in each of the sets of recorded values (YA/YB) may correspond to a binary indication of whether a respective chemical compound exhibits a respective activity.
The pharmaceutical discovery method may include the managing computing device 406 calculating a final shared representation (UFinal) of the datasets (XA/XB) based on the list of unique objects (XComb), the shared function (f), and the one or more shared parameters (β). The pharmaceutical discovery method may also include the managing computing device 406 transmitting the final shared representation (UFinal) of the datasets (XA/XB) to each of the client computing devices. The final shared representation (UFinal) of the datasets (XA/XB) may be usable by each of the client computing devices to determine a final set of predicted values (ŶAFinal/ŶBFinal) corresponding to the respective dataset (XA/XB).
An additional example application includes a pharmaceutical diagnostic method. For example, in one embodiment, a first dimension of each of the plurality of datasets (XA/XB) transmitted to a managing computing device may include a plurality of patients and a second dimension of each of the plurality of datasets (XA/XB) may include descriptors of the patients (e.g., genomic-based descriptors, patient demographics, patient age, patient height, patient weight, or patient gender). In such an embodiment, entries in each of the plurality of datasets (XA/XB) may correspond to a binary indication of whether a respective patient exhibits a respective descriptor. Further, in such an embodiment, a first dimension of each of the sets of recorded values (YA/YB) respectively corresponding to the plurality of datasets (XA/XB) may include the plurality of patients and a second dimension of each of the sets of recorded values (YA/YB) may include clinical diagnoses of the patients (e.g., cancer diagnosis, heart disease diagnosis, broken bone diagnosis, skin infection diagnosis, psychological diagnosis, genetic disorder diagnosis, or torn ligament diagnosis). In such an embodiment, entries in each of the sets of recorded values (YA/YB) may correspond to a binary indication of whether a respective patient exhibits a respective clinical diagnosis.
The pharmaceutical diagnostic method may include the managing computing device 406 calculating a final shared representation (UFinal) of the datasets (XA/XB) based on the list of unique objects (XComb), the shared function (f), and the one or more shared parameters (β). The pharmaceutical diagnostic method may also include the managing computing device 406 transmitting the final shared representation (UFinal) of the datasets (XA/XB) to each of the client computing devices. The final shared representation (UFinal) of the datasets (XA/XB) may be usable by each of the client computing devices to determine a final set of predicted values (ŶAFinal/ŶBFinal) corresponding to the respective dataset (XA/XB).
A further example application includes a media (e.g., book, movie, music, etc.) recommendation method. For example, in one embodiment, a first dimension of each of the plurality of datasets (XA/XB) transmitted to a managing computing device may include a plurality of users and a second dimension of each of the plurality of datasets (XA/XB) may include a plurality of book titles. In such an embodiment, entries in each of the plurality of datasets (XA/XB) may correspond to a rating of a respective book title by a respective user. Further, in such an embodiment, a first dimension of each of the sets of recorded values (YA/YB) respectively corresponding to the plurality of datasets (XA/XB) may include the plurality of users and a second dimension of each of the sets of recorded values (YA/YB) may include a plurality of movie titles. In such an embodiment, entries in each of the sets of recorded values (YA/YB) may correspond to a rating of a respective movie title by a respective user.
The media recommendation method may include the managing computing device 406 calculating a final shared representation (UFinal) of the datasets (XA/XB) based on the list of unique objects (XComb), the shared function (f), and the one or more shared parameters (β). The media recommendation method may also include the managing computing device 406 transmitting the final shared representation (UFinal) of the datasets (XA/XB) to each of the client computing devices. The final shared representation (UFinal) of the datasets (XA/XB) may be usable by each of the client computing devices to determine a final set of predicted values (ŶAFinal/ŶBFinal) corresponding to the respective dataset (XA/XB).
Another example application includes an insurance policy determination method. For example, in one embodiment, a first dimension of each of the plurality of datasets (XA/XB) transmitted to a managing computing device may include a plurality of insurance policies and a second dimension of each of the plurality of datasets (XA/XB) may include a plurality of deductible amounts. In such an embodiment, entries in each of the plurality of datasets (XA/XB) may correspond to a binary indication of whether a respective insurance policy has a respective deductible amount. Further, in such an embodiment, a first dimension of each of the sets of recorded values (YA/YB) respectively corresponding to the plurality of datasets (XA/XB) may include the plurality of insurance policies and a second dimension of each of the sets of recorded values (YA/YB) may include a plurality of insurance premiums. In such an embodiment, entries in each of the sets of recorded values (YA/YB) may correspond to a binary indication of whether a respective insurance policy has a respective insurance premium.
The insurance policy determination method may include the managing computing device 406 calculating a final shared representation (UFinal) of the datasets (XA/XB) based on the list of unique objects (XComb), the shared function (f), and the one or more shared parameters (β). The insurance policy determination method may also include the managing computing device 406 transmitting the final shared representation (UFinal) of the datasets (XA/XB) to each of the client computing devices. The final shared representation (UFinal) of the datasets (XA/XB) may be usable by each of the client computing devices to determine a final set of predicted values (ŶAFinal/ŶBFinal) corresponding to the respective dataset (XA/XB).
Yet another example application includes an automotive fuel-efficiency-prediction method. For example, in one embodiment, a first dimension of each of the plurality of datasets (XA/XB) transmitted to a managing computing device may include a plurality of automobiles (e.g., identified by vehicle identification number (VIN) or serial number) and a second dimension of each of the plurality of datasets (XA/XB) may include a plurality of automobile parts. In such an embodiment, entries in each of the plurality of datasets (XA/XB) may correspond to a binary indication of whether a respective automobile has a respective automobile part equipped. Further, in such an embodiment, a first dimension of each of the sets of recorded values (YA/YB) respectively corresponding to the plurality of datasets (XA/XB) may include the plurality of automobiles and a second dimension of each of the sets of recorded values (YA/YB) may include a plurality of average fuel efficiencies. In such an embodiment, entries in each of the sets of recorded values (YA/YB) may correspond to a binary indication of whether a respective automobile has a respective average fuel efficiency.
The automotive fuel-efficiency-prediction method may include the managing computing device 406 calculating a final shared representation (UFinal) of the datasets (XA/XB) based on the list of unique objects (XComb), the shared function (f), and the one or more shared parameters (β). The automotive fuel-efficiency-prediction method may also include the managing computing device 406 transmitting the final shared representation (UFinal) of the datasets (XA/XB) to each of the client computing devices. The final shared representation (UFinal) of the datasets (XA/XB) may be usable by each of the client computing devices to determine a final set of predicted values (ŶAFinal/ŶBFinal) corresponding to the respective dataset (XA/XB).
In addition to those embodiments enumerated above, several other applications will be apparent. For example, various sets of datasets (XA/XB) and corresponding sets of recorded values (YA/YB) may be used to generate various machine-learned models, which could be used to perform various tasks (e.g., make recommendations and/or predictions). In various embodiments, such machine-learned models may be used to: make an attorney recommendation for a potential client based on various factors (e.g., competency of the attorney, specialty of the attorney, nature of client issue, timeline, cost, location of the client, location of the attorney, etc.), make a physician recommendation for a patient based on various factors (e.g., competency of the physician, specialty of the physician, nature of patient issue, timeline, cost, insurance, location of the physician, location of the patient, etc.), make a route recommendation or selection for navigation (e.g., for an autonomous vehicle) based on various factors (e.g., traffic data, automobile type, real-time weather data, construction data, etc.), make an air travel reservation and/or recommendation based on various factors (e.g., historical airline price data, weather data, calendar data, passenger preferences, airplane specifications, airport location, etc.), make a recommendation of a vacation destination based on various factors (e.g., prior travel locations of a traveler, traveler preferences, price data, weather data, calendar information for multiple travelers, ratings from prior travelers for a given destination, etc.), provide translations of text or speech from one language to another based on various factors (e.g., input accent detected, rate of speech, punctuation used, context of text, etc.), perform object recognition on one or more images in an image database based on various factors (e.g., size of the image and/or object, shape of the image and/or object, color(s) of the image and/or object, texture(s) of the image and/or object, saturation of the image and/or object, hue of the image and/or object, location of the object within the image, etc.), recommend an insurance premium, deductible, and/or coverage amount for automotive insurance, home insurance, life insurance, health insurance, dental insurance, boat insurance, malpractice insurance, and/or long-term disability insurance based on various factors (e.g., age of the insured, health of the insured, gender of the insured, marital status of the insured, insurance premium amount, insurance deductible amount, insurance coverage amount, credit history, other demographic information about the insured, etc.), recommend an interest rate for a home, automotive, and/or boat loan based on various factors (e.g., age of the home/automobile/boat, credit score of the borrower, down payment amount, repayment term, resale value of the home/automobile/boat, reliability statistics about homes/automobiles/boats, etc.), recommend a lender for a home, automotive, and/or boat loan based on various factors (e.g., credit score of the borrower, amount of the loan, repayment term of the loan, down payment amount on the loan, etc.), calculate a credit score for a creditor based on various factors (e.g., creditor age, creditor gender, creditor repayment history, creditor credit card ownership data, creditor average interest rate, amount creditor is in debt, etc.), grant or deny biometric access for a requester based on various factors (e.g., object recognition in a set of biometric images, fingerprint data, retinal
data, requester height, requester gender, requester age, requester eye color, requester hair color, requester race, etc.), recommend a restaurant to a patron based on various factors (e.g., prior restaurants visited by the patron, cuisine preference data, weather data, calendar data, real-time restaurant wait-time data, ratings from prior patrons of various restaurants, restaurant location data, patron location data, etc.), predict the outcome of a sporting event based on various data (e.g., record of prior sports contests involving the participants, current betting statistics for the sporting event, betting statistics for prior sporting events, weather data, location of the sporting event data, etc.), recommend a menu-item selection for a restaurant patron based on various factors (e.g., patron preferences, ratings of previous patrons, cuisine type data, alternative options on the menu, price, spice-level data, preparation-time data, etc.), determine viability and success of one or more genetic modifications to an organism with regard to curing a disease and/or palliating symptoms based on various factors (e.g., locus of genetic modification, probability of occurrence of genetic modification, complexity of the genome of the organism, side-effects of the genetic modification, number of mutations/splices required to create genetic modification, gender of organism with genetic modification, symptoms of organism with genetic modification, age of organism with genetic modification, etc.), determine how to allocate funding in a smart city according to various factors (e.g., amount of funds, traffic data, ages of buildings/infrastructure, population density data, hospital data, power supply demand data, criminal statistics, etc.), recommend an investment strategy based on various factors (e.g., investor income data, investor risk aversion data, stock market data, etc.), and/or recommend preventative maintenance be performed (e.g., on an automobile, boat, industrial machine, factory equipment, airplane, infrastructure, etc.) based on various factors (e.g., age of the object, money invested into the object, statistical data regarding similar objects, year when the object was built, object use data, weather data, last preventative maintenance performed, cost of maintenance, etc.).
Any of the “various factors” provided as examples above could be components of datasets (e.g., one or more dimensions of a dataset). Similarly, corresponding “outcomes” (e.g., the recommended attorney or recommended physician) could be components of sets of predicted values (ŶA/ŶB) (e.g., one or more dimensions of a set of predicted values) and/or sets of recorded values (YA/YB) (e.g., one or more dimensions of a set of recorded values).
Upon creating the list of unique objects (XComb), the managing computing device may determine a shared representation (U) based on one or more shared parameters (β). In some embodiments, the one or more shared parameters may be randomly instantiated prior to updating. This may happen according to the equation U := f(XComb; β).
In some embodiments, a method may then be employed to update/refine the one or more shared parameters (β). The method may include client computing device A 1702 using the determined partial representation (UA) to calculate a set of predicted values (ŶA).
Upon receiving the updated partial representation matrices (ŪA1/ŪB1), the managing computing device may recombine them into a shared representation tensor (Ū1). Using the shared representation tensor, a tensor representing the one or more shared parameters (β) may be updated.
Thereafter, the updated shared representation tensor (Ū1) may then be transmitted to each of the client computing devices 1702/1704 by the managing computing device 1706. Then, each client computing device may determine which portion of the updated shared representation tensor (Ū1) corresponds to its respective dataset (e.g., based on a subset of identifiers (S) and a list of identifiers (MA/MB) for the respective dataset). This determined portion may represent the respective partial representation (UA/UB). Based on this partial representation (UA/UB), a second iteration of a set of predicted values (ŶA/ŶB) may be determined.
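The recombination step might look like the sketch below, which scatters each client's updated partial representation matrix back into the shared tensor by identifier and averages rows updated by more than one client; the averaging rule and all names are illustrative assumptions.

```python
import numpy as np

def recombine(s, updates):
    """Recombine updated partial representation matrices into a shared tensor.

    s:       subset of identifiers ordering the rows of the shared tensor.
    updates: list of (s_i, u_i) pairs returned by the client computing devices.
    """
    width = updates[0][1].shape[1]
    u = np.zeros((len(s), width))
    counts = np.zeros(len(s))
    row_of = {ident: r for r, ident in enumerate(s)}
    for s_i, u_i in updates:
        for ident, row in zip(s_i, u_i):
            u[row_of[ident]] += row
            counts[row_of[ident]] += 1
    touched = counts > 0
    u[touched] /= counts[touched, None]   # average rows updated by several clients
    return u

s = ["id1", "id2", "id3"]
u_a1 = np.ones((2, 2))                    # updated rows from client A
u_b1 = np.zeros((2, 2))                   # updated rows from client B
u1 = recombine(s, [(["id1", "id2"], u_a1), (["id2", "id3"], u_b1)])
print(u1)   # the row for "id2" is the average of both clients' updates
```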
If the shared representation (U) and/or the one or more shared parameters (β) are distributed to the client computing devices, the client computing devices may use the shared representation (U) and/or the one or more shared parameters (β) to make future predictions. For example, if a bookseller corresponding to client computing device A 402 and dataset A (XA) offers a new book for sale, that bookseller may predict the ratings that pre-existing customers will give to the new book (e.g., based on the new book's values corresponding to the features of “Genre,” “Author,” “ISBN,” “Language,” and/or “Publication Year”).
In another example embodiment, the objects within the datasets (XA/XB) may be chemical compounds.
The present disclosure is not to be limited in terms of the particular embodiments described in this application, which are intended as illustrations of various aspects. Many modifications and variations can be made without departing from its scope, as will be apparent to those skilled in the art. Functionally equivalent methods and apparatuses within the scope of the disclosure, in addition to those described herein, will be apparent to those skilled in the art from the foregoing descriptions. Such modifications and variations are intended to fall within the scope of the appended claims.
The above detailed description describes various features and operations of the disclosed systems, devices, and methods with reference to the accompanying figures. The example embodiments described herein and in the figures are not meant to be limiting. Other embodiments can be utilized, and other changes can be made, without departing from the scope of the subject matter presented herein. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations.
With respect to any or all of the message flow diagrams, scenarios, and flow charts in the figures and as discussed herein, each step, block, operation, and/or communication can represent a processing of information and/or a transmission of information in accordance with example embodiments. Alternative embodiments are included within the scope of these example embodiments. In these alternative embodiments, for example, operations described as steps, blocks, transmissions, communications, requests, responses, and/or messages can be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved. Further, more or fewer blocks and/or operations can be used with any of the message flow diagrams, scenarios, and flow charts discussed herein, and these message flow diagrams, scenarios, and flow charts can be combined with one another, in part or in whole.
A step, block, or operation that represents a processing of information can correspond to circuitry that can be configured to perform the specific logical functions of a herein-described method or technique. Alternatively or additionally, a step or block that represents a processing of information can correspond to a module, a segment, or a portion of program code (including related data). The program code can include one or more instructions executable by a processor for implementing specific logical operations or actions in the method or technique. The program code and/or related data can be stored on any type of computer-readable medium such as a storage device including RAM, a disk drive, a solid state drive, or another storage medium.
The computer-readable medium can also include non-transitory computer-readable media such as computer-readable media that store data for short periods of time like register memory and processor cache. The computer-readable media can further include non-transitory computer-readable media that store program code and/or data for longer periods of time. Thus, the computer-readable media may include secondary or persistent long term storage, like ROM, optical or magnetic disks, solid state drives, compact-disc read only memory (CD-ROM), for example. The computer-readable media can also be any other volatile or non-volatile storage systems. A computer-readable medium can be considered a computer-readable storage medium, for example, or a tangible storage device.
Moreover, a step, block, or operation that represents one or more information transmissions can correspond to information transmissions between software and/or hardware modules in the same physical device. However, other information transmissions can be between software modules and/or hardware modules in different physical devices.
The particular arrangements shown in the figures should not be viewed as limiting. It should be understood that other embodiments can include more or fewer of each element shown in a given figure. Further, some of the illustrated elements can be combined or omitted. Yet further, an example embodiment can include elements that are not illustrated in the figures.
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope being indicated by the following claims.
Embodiments of the present disclosure may thus relate to one of the enumerated example embodiments (EEEs) listed below.
EEE 1 is a method, comprising:
EEE 2 is the method of EEE 1, further comprising transmitting, by the managing computing device, the shared function and the one or more shared parameters to each of the client computing devices.
EEE 3 is the method of EEEs 1 or 2, wherein each identifier of the subset of identifiers is selected randomly or pseudo-randomly.
EEE 4 is the method of EEE 3, wherein each identifier of the subset of identifiers is selected based on a Mersenne Twister pseudo-random number generator.
EEE 5 is the method of EEEs 1 or 2, wherein each identifier of the subset of identifiers is selected according to an algorithm known only to the managing computing device.
EEE 6 is the method of EEEs 1 or 2, wherein each identifier of the subset of identifiers is selected according to a publicly available algorithm.
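As a non-limiting illustration of EEEs 3-6, the sketch below selects a subset of identifiers with a Mersenne Twister pseudo-random number generator (Python's random.Random is an MT19937 implementation). The composite list, seed, and subset size are illustrative assumptions, not values taken from the disclosure.

import random

# Sketch: pseudo-random selection of a subset of identifiers (EEEs 3-4).
# The composite list, seed, and subset size below are illustrative only.
composite_identifiers = ["id-001", "id-002", "id-003", "id-004", "id-005"]

rng = random.Random(2018)  # a seed kept private would realize EEE 5; a
                           # published seed and algorithm would realize EEE 6
subset = rng.sample(composite_identifiers, k=3)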
EEE 7 is the method of any of EEEs 1-6, wherein determining, by the managing computing device, the list of unique objects from among the plurality of datasets comprises:
EEE 8 is the method of any of EEEs 1-7, wherein the individual loss function comprises at least one of: a quadratic loss function, a logarithmic loss function, a hinge loss function, or a quantile loss function.
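As a non-limiting illustration of EEE 8, the four named loss functions can be sketched as follows for a single recorded value y and prediction p; the label conventions noted in the comments are assumptions.

import numpy as np

def quadratic_loss(y, p):
    return 0.5 * (y - p) ** 2

def logarithmic_loss(y, p, eps=1e-12):
    # assumes y is a 0/1 label and p a probability in (0, 1)
    p = np.clip(p, eps, 1.0 - eps)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def hinge_loss(y, p):
    # assumes y is a -1/+1 label and p a real-valued score
    return max(0.0, 1.0 - y * p)

def quantile_loss(y, p, tau=0.5):
    # tau is the target quantile; tau = 0.5 gives half the absolute error
    diff = y - p
    return max(tau * diff, (tau - 1.0) * diff)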
EEE 9 is the method of any of EEEs 1-8, wherein determining the error for the respective dataset comprises:
EEE 10 is the method of any of EEEs 1-9, further comprising:
EEE 11 is the method of EEE 10, wherein the final shared representation of the datasets is usable by each of the client computing devices to determine a final set of predicted values corresponding to the respective dataset.
EEE 12 is the method of EEE 11, wherein determining the final set of predicted values corresponding to the respective dataset comprises:
EEE 13 is the method of EEEs 11 or 12, wherein the final set of predicted values is:
EEE 14 is the method of any of EEEs 1-13, wherein the one or more feedback values from each of the client computing devices are based on back-propagated errors.
EEE 15 is the method of any of EEEs 1-14,
EEE 16 is the method of EEE 15, wherein the one or more shared parameters each corresponds to one or more weights in a weight tensor of the shared representation.
EEE 17 is the method of EEEs 15 or 16, wherein the one or more individual parameters each corresponds to one or more weights in a weight tensor of the individual function for the respective dataset.
EEE 18 is the method of any of EEEs 1-17, wherein each of the plurality of datasets comprises an equal number of dimensions.
EEE 19 is the method of EEE 18, wherein the number of dimensions is two, three, four, five, six, seven, eight, nine, ten, sixteen, thirty-two, sixty-four, one hundred twenty-eight, two hundred fifty-six, five hundred twelve, or one thousand twenty-four.
EEE 20 is the method of any of EEEs 1-19,
EEE 21 is the method of any of EEEs 1-20, wherein the shared function comprises an artificial neural network having rectified linear unit (ReLU) non-linearity and dropout.
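As a non-limiting illustration of EEE 21, a shared function of this kind can be sketched in PyTorch as below; the layer sizes and dropout probability are illustrative assumptions.

import torch.nn as nn

def make_shared_function(in_dim=1024, hidden_dim=256, out_dim=32, p_drop=0.5):
    # a feed-forward network with ReLU non-linearity and dropout (EEE 21)
    return nn.Sequential(
        nn.Linear(in_dim, hidden_dim),
        nn.ReLU(),
        nn.Dropout(p=p_drop),
        nn.Linear(hidden_dim, out_dim),
    )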
EEE 22 is the method of any of EEEs 1-21, wherein determining, by the respective client computing device, the one or more feedback values corresponds to performing, by the respective client computing device, a gradient descent method.
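As a non-limiting illustration of EEEs 14 and 22 together, a client computing device might compute its feedback values by back-propagating the error of its predictions through to the shared representation. The mean-squared-error loss and the individual_function argument are illustrative assumptions, not the disclosure's method.

import torch

def client_feedback(shared_repr, individual_function, recorded_values):
    # detach so the gradient is computed locally on the client's graph
    shared_repr = shared_repr.detach().requires_grad_(True)
    predictions = individual_function(shared_repr)
    loss = torch.nn.functional.mse_loss(predictions, recorded_values)
    loss.backward()            # back-propagated error (EEE 14)
    return shared_repr.grad    # feedback value for gradient descent (EEE 22)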
EEE 23 is the method of any of EEEs 1-22, wherein the improvement in the set of predicted values corresponds to a threshold improvement value based on a shared learning rate.
EEE 24 is the method of EEE 23, further comprising:
EEE 25 is the method of EEE 23,
EEE 26 is the method of any of EEEs 23-25, wherein the shared learning rate comprises at least one of: an exponentially decayed learning rate, a harmonically decayed learning rate, or a step-wise exponentially decayed learning rate.
EEE 27 is the method of any of EEEs 1-22, wherein improvement in the set of predicted values corresponds to a threshold improvement value defined by an individual learning rate that is determined by each of the client computing devices independently.
EEE 28 is the method of EEE 27, wherein the individual learning rate comprises at least one of: an exponentially decayed learning rate, a harmonically decayed learning rate, or a step-wise exponentially decayed learning rate.
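As a non-limiting illustration of EEEs 26 and 28, the decayed learning rates named there can be sketched as functions of the iteration count t; lr0, k, gamma, and step_size are assumed hyperparameters.

import math

def exponential_decay(lr0, k, t):
    return lr0 * math.exp(-k * t)

def harmonic_decay(lr0, k, t):
    return lr0 / (1.0 + k * t)

def stepwise_exponential_decay(lr0, gamma, step_size, t):
    return lr0 * (gamma ** (t // step_size))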
EEE 29 is the method of any of EEEs 1-28, further comprising:
EEE 30 is the method of any of EEEs 1-29,
EEE 31 is the method of any of EEEs 1-30,
EEE 32 is the method of any of EEEs 1-31, further comprising initializing, by the managing computing device, the shared function and the one or more shared parameters based on a related shared function used to model a similar relationship.
EEE 33 is the method of any of EEEs 1-31, further comprising:
EEE 34 is the method of EEE 33, wherein the initial values for the one or more shared parameters are determined by the first client computing device based upon one or more public models.
EEE 35 is the method of any of EEEs 1-31, further comprising initializing, by the managing computing device, the shared function and the one or more shared parameters based on at least one of: a random number generator or a pseudo-random number generator.
EEE 36 is the method of any of EEEs 1-35, wherein determining the one or more feedback values by the client computing devices further comprises initializing, by the respective client computing device, the individual function and the one or more individual parameters corresponding to the respective data set based on a random number generator or a pseudo-random number generator.
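As a non-limiting illustration of EEEs 35 and 36, both the shared and the individual parameters can be initialized from a pseudo-random number generator; the tensor shapes, seed, and normal distribution below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(seed=42)                       # PRNG initialization
shared_weights = rng.normal(0.0, 0.01, size=(1024, 256))   # shared parameters
individual_weights = rng.normal(0.0, 0.01, size=(256, 1))  # individual parameters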
EEE 37 is the method of any of EEEs 1-36, wherein determining, based on the sublists of objects and the one or more feedback values from the client computing devices, the one or more aggregated feedback values comprises:
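The body of EEE 37 is elided above. One plausible aggregation, sketched below under stated assumptions, scatters each client's feedback into a shared-gradient accumulator according to that client's sublist of objects and averages rows that several clients touch; the shapes and indexing are assumptions, not the disclosure's method.

import numpy as np

def aggregate_feedback(n_objects, repr_dim, feedbacks, sublists):
    # feedbacks[i] is assumed to have one row per object in sublists[i]
    aggregated = np.zeros((n_objects, repr_dim))
    counts = np.zeros(n_objects)
    for feedback, sublist in zip(feedbacks, sublists):
        aggregated[sublist] += feedback
        counts[sublist] += 1
    nonzero = counts > 0
    aggregated[nonzero] /= counts[nonzero, None]  # average overlapping rows
    return aggregated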
EEE 38 is the method of any of EEEs 1-37,
EEE 39 is the method of any of EEEs 1-37,
EEE 40 is the method of any of EEEs 1-39, further comprising removing, by the managing computing device, the list of unique objects, the shared representation, the lists of identifiers of each dataset of the plurality of datasets, and the composite list of identifiers from a memory of the managing computing device.
EEE 41 is the method of any of EEEs 1-40,
EEE 42 is the method of EEE 41, further comprising:
EEE 43 is the method of any of EEEs 1-40,
EEE 44 is the method of EEE 43, further comprising:
EEE 45 is the method of any of EEEs 1-40,
EEE 46 is the method of EEE 45, further comprising:
EEE 47 is the method of any of EEEs 1-40,
EEE 48 is the method of EEE 47, further comprising:
EEE 49 is the method of any of EEEs 1-40,
EEE 50 is the method of EEE 49, further comprising:
EEE 51 is the method of any of EEEs 1-50,
EEE 52 is the method of any of EEEs 1-14, 18-20, or 22-51,
EEE 53 is the method of EEE 52, wherein the respective dataset encodes side information about the objects of the dataset.
EEE 54 is the method of EEEs 52 or 53, wherein the predicted value tensor is factored using a Macau factorization method.
EEE 55 is the method of any of EEEs 1-54, wherein the individual loss function of each respective client computing device comprises a shared loss function.
EEE 56 is the method of any of EEEs 1-54, wherein a first individual loss function of the individual loss functions is different from a second individual loss function of the individual loss functions.
EEE 57 is a method, comprising:
EEE 58 is the method of EEE 57, wherein the plurality of descriptors comprises chemistry-derived fingerprints or descriptors identified via transcriptomics or image screening.
EEE 59 is a method, comprising:
EEE 60 is the method of EEE 59, wherein the plurality of descriptors of the patients comprises genomic-based descriptors, patient demographics, patient age, patient height, patient weight, or patient gender.
EEE 61 is a method, comprising:
EEE 62 is the method of EEE 61, wherein the book ratings comprise at least one of: binary ratings, classification ratings, or numerical ratings.
EEE 63 is the method of EEEs 61 or 62, wherein the movie ratings comprise at least one of: binary ratings, classification ratings, or numerical ratings.
EEE 64 is a method, comprising:
EEE 65 is a non-transitory, computer-readable medium with instructions stored thereon, wherein the instructions are executable by a processor to perform a method, comprising:
EEE 66 is a memory with a model stored thereon, wherein the model is generated according to a method, comprising:
EEE 67 is a method, comprising:
EEE 68 is a server device, wherein the server device has instructions stored thereon that, when executed by a processor, perform a method, the method comprising:
EEE 69 is a server device, wherein the server device has instructions stored thereon that, when executed by a processor, perform a method, the method comprising:
EEE 70 is a system, comprising:
EEE 71 is an optimized model, wherein the model is optimized according to a method, the method comprising:
EEE 72 is a computer-implemented method, comprising:
EEE 73 is a computer-implemented method, comprising:
EEE 74 is a computer-implemented method, comprising:
EEE 75 is a computer-implemented method, comprising:
EEE 76 is a computer-implemented method, comprising:
EEE 77 is a computer-implemented method, comprising:
Inventors: Wilfried Verachtert, Hugo Ceulemans, Roel Wuyts, Jaak Simm, Adam Arany, Yves Moreau, Charlotte Herzeel