Features related to systems and methods expediting generation of a machine learning model, such as an image recognition model, are described. Existing machine learning models are analyzed to identify a starting point for creating the new machine learning model. An existing machine learning model can suggest learning parameters (e.g., training parameters or structural features of the model) that can be used to expedite the generating and training process along with training data that can augment the training of the new machine learning model.
|
13. A computer-implemented method comprising:
under control of one or more processors,
receiving, from an electronic device, a request for an image model that identifies a characteristic of an object shown in a set of reference images;
determining a first image processing result indicating a first confidence for detecting the characteristic of the object shown in an image included in the set of reference images;
determining a second image processing result indicating a second confidence for detecting the characteristic of the object shown in the image included in the set of reference images;
determining that the first confidence is associated with a higher confidence level than the second confidence;
selecting, from a library of machine learning models, a first machine learning model based on the request and the first confidence;
generating the image model from the first machine learning model and the set of reference images; and
storing the image model in the library of machine learning models.
5. A system comprising:
one or more computing devices having a processor and a memory, wherein the one or more computing devices execute computer-readable instructions to:
receive, from an electronic device, a request for an image model that identifies a characteristic of an object shown in a set of reference images;
determine a first image processing result indicating a first confidence for detecting the characteristic of the object shown in an image included in the set of reference images;
determine a second image processing result indicating a second confidence for detecting the characteristic of the object shown in the image included in the set of reference images;
determine that the first confidence is associated with a higher confidence level than the second confidence;
select, from a library of machine learning models, a first machine learning model based on the request and the first confidence;
generate the image model from the first machine learning model and the set of reference images; and
store the image model in the library of machine learning models.
1. A computer-implemented method comprising:
under control of one or more processors,
receiving, from an electronic communication device, a request for an image model, wherein the request identifies:
(i) a task for the image model to perform, wherein the task comprises at least one of identifying a location of an object within an image or identifying the object within the image, and
(ii) training data including a labeled image, wherein the labeled image includes an identification of pixels showing the object;
identifying, from a library of machine learning models, a first machine learning model and a second machine learning model, based on the task or the object;
determining a first confidence level that the first machine learning model identifies the pixels showing the object included in the labeled image;
determining a second confidence level that the second machine learning model identifies the pixels showing the object included in the labeled image;
determining that the first confidence level is greater than the second confidence level;
generating a learning parameter for the image model based on a set of labeled image data, the first machine learning model, and the second machine learning model, wherein the learning parameter defines a property of the image model or a property of a training of the image model;
generating the image model from the first machine learning model, the training data, and the learning parameter; and
storing the image model in the library of machine learning models.
2. The computer-implemented method of
determining that a size of the set of labeled data exceeds a size of a set of training data used to train the first machine learning model; and
selecting a portion of the first machine learning model to include in the image model, wherein the portion receives image data as an input and provides a vector of feature detection results as an output.
3. The computer-implemented method of
receiving the image model in association with the request; and
storing the image model in the library of machine learning models.
4. The computer-implemented method of
receiving, from the electronic communication device, an image for processing by the image model;
retrieving the image model from the library;
processing the image using the image model to generate an image processing result, the image processing result including at least one of segmentation information or classification information for the object shown in the image; and
transmitting the image processing result to the electronic communication device.
6. The system of
7. The system of
wherein the first machine learning model is associated with metadata identifying the object, and
wherein the one or more computing devices execute computer-readable instructions to select the first machine learning model based on the metadata and the object associated with the request.
8. The system of
wherein the first machine learning model is associated with training data, and
wherein the one or more computing devices execute computer-readable instructions to:
generate a first metric characterizing a property of the set of reference images, wherein the property comprises one of: a size of the set of reference images, pixel intensity for the set of reference images, or a size of images included in the set of reference images;
generate a second metric characterizing the property for the training data.
9. The system of
receive, from a client device, an image for processing by the image model;
retrieve the image model from the library;
process the image using the image model to generate an image processing result, the image processing result including at least one of segmentation information or classification information for the object shown in the image; and
transmit the image processing result to the client device.
10. The system of
determine that a size of the set of reference images exceeds a size of a set of training data used to train the first machine learning model; and
select a portion of the first machine learning model to include in the image model, wherein the portion receives image data as an input and provides a vector of feature detection results as an output.
11. The system of
12. The system of
wherein the first machine learning model is associated with a first learning parameter identifying a property of the first machine learning model or training of the first machine learning model,
wherein a second machine learning model is associated with a second learning parameter identifying the property of the second machine learning model, and wherein the one or more computing devices execute computer-readable instructions to:
train a first version of the image model using: (i) the first learning parameter and (ii) at least a portion of the set of reference images for a number of generations;
train a second version of the image model using: (i) the second learning parameter and (ii) at least the portion of the set of reference images for the number of generations;
generate a first confidence metric for the first version of the image model, wherein the first confidence metric identifies a number of objects correctly identified by the first version of the image model;
generate a second confidence metric for the second version of the image model, wherein the second confidence metric identifies another number of objects correctly identified by the second version of the image model;
determine that the second confidence metric corresponds to a higher degree of confidence than the first confidence metric; and
train the image model using the second learning parameter.
14. The computer-implemented method of
15. The computer-implemented method of
wherein the first machine learning model is associated with training data, and
wherein the computer-implemented method further comprises determining that a property for the set of reference images corresponds to the property for the training data, wherein the property identifies one of: a number of images, a pixel intensity for the images, or a size of the images.
16. The computer-implemented method of
receiving, from a client device, an image for processing by the image model;
retrieving the image model from the library;
processing the image using the image model to generate an image processing result, the image processing result including at least one of segmentation information or classification information for the object shown in the image; and
transmitting the image processing result to the client device.
17. The computer-implemented method of
wherein the first machine learning model comprises a convolution neural network including a convolution layer and a fully-connected layer,
wherein the convolution layer receives image data as an input and provides a vector of feature detection results as an output, and
wherein the computer-implemented method further comprises:
determining that a size of the set of reference images exceeds a size of a set of training data used to train the first machine learning model; and
including the convolution layer of the first machine learning model in the image model.
18. The computer-implemented method of
wherein the first machine learning model is associated with a first learning parameter identifying a property of the first machine learning model or training of the first machine learning model,
wherein a second machine learning model is associated with a second learning parameter identifying the property of the second machine learning model, and
wherein the computer-implemented method comprises:
training a first version of the image model using: (i) the first learning parameter, and (ii) at least a portion of the set of reference images for a number of generations;
training a second version of the image model using: (i) the second learning parameter, and (ii) at least the portion of the set of reference images for the number of generations; and
determining, based at least partly on the first version of the image model and the second version of the image model, to train the image model using the second learning parameter; and
training the image model using the second learning parameter.
19. The computer-implemented method of
identifying a topical domain for a user associated with the request, wherein selecting the first machine learning model comprises:
determining that the topical domain is associated with a domain associated with the first machine learning model.
|
A service provider may make a network service available for use by third parties. For example, the field of utility computing involves a service provisioning model in which a service provider makes computing resources and infrastructure management available to client devices on demand. For example, a user may wish to deploy an image processing service to analyze image data such as images of products, users, or documents. Image processing services can rely on sophisticated modeling and training to provide accurate analysis of image data. The modeling and training can be resource- and time-intensive operations and can require significant understanding of the complexities involved in producing a satisfactory model.
Network service models allow users to access networked resources (e.g., applications, services, and data) via a client program, such as a web browser. Network services, such as web services, provide programmatic access to networked resources including technology platforms (e.g., image processing applications and services) and data (e.g., image data and other databases) hosted on networked computers via a service interface. Generally speaking, a network service interface provides a standard, cross-platform API (Application Programming Interface) for communication between a client requesting some service to be performed and the service provider. In some embodiments, a network service interface may be configured to support the exchange of documents or messages including information describing the service request and response to that request. Such documents, or messages, may be exchanged using standardized or proprietary messaging protocols, such as the Hypertext Transfer Protocol (HTTP), and may be formatted in a platform-independent data format, such as eXtensible Markup Language (XML).
Embodiments of various inventive features will now be described with reference to the following drawings. Throughout the drawings, reference numbers may be re-used to indicate correspondence between referenced elements. The drawings are provided to illustrate example embodiments described herein and are not intended to limit the scope of the disclosure.
Network services can be powerful tools that allow clients to perform a wide variety of processing operations. For example, image analysis algorithms can be applied to many domains, such as medical or health care, social networks, autonomous driving, and others. With advances in artificial intelligence, machine learning, and related applications, more and more users are engaging with such systems. Wide adoption, however, can be hindered in part because not all users in these domains have sufficient time or resources to deploy state-of-the-art solutions. The features described in this application provide an end-to-end solution to generate image processing services for users with little or no prior knowledge of image analysis or artificial intelligence techniques.
Users can engage with the system in a variety of modes. In one mode, the system trains deep learning models for a user-specified labeled data set. In one mode, the system can apply minor adjustments to pre-trained models available within the system to tailor the model for processing the user-specified labeled data set. In one mode, the system can score labeled images using either pre-trained, adjusted, or generated machine learning models.
The features described help guide the generation of a new image model in an efficient manner according to a modeling request. The modeling request includes information that drives the generation, such as training data or specific task information to be performed by the model. A requesting device need not understand the different technical requirements of machine learning models, training techniques, or other nuances of artificial intelligence to generate a robust image processing service.
The systems and methods described include features for expediting generation of a new machine learning model, such as an image recognition model. Existing models are analyzed to identify a starting point for creating the new machine learning model. An existing model can suggest learning parameters (e.g., training parameters or structural features of the model) that can be used to expedite the generating and training process along with training data that can augment the training of the new machine learning model.
In the embodiment shown in
The access device 110 transmits the modeling request to a modeling request service 120. The modeling request service 120 interprets the modeling request 102 and coordinates the generation of an image processing service 190 for the modeling request 102. In previous systems, a new image model may be trained to perform the task specified in the modeling request 102. However, training each model from scratch for each request can be time or resource intensive. Embodiments of the present disclosure can avoid this inefficiency and high resource demand.
To address training inefficiencies, the modeling request service 120 identifies previously trained models from an image model data store 180 based on the request. For example, if the image model data store 180 includes a previously trained model associated with descriptive metadata corresponding to the descriptive metadata provided in the modeling request 102, the previously trained model may serve as a starting point for generating the requested model. Metadata, such as domain information, may be associated with a client requesting the previously trained model and used to identify a previously trained model.
As used herein a “data store” may be embodied in hard disk drives, solid state memories and/or any other type of non-transitory computer-readable storage medium accessible to or by a device such as an access device, server, or other computing device described. A data store may also or alternatively be distributed or partitioned across multiple local and/or remote storage devices as is known in the art without departing from the scope of the present disclosure. In yet other embodiments, a data store may include or be embodied in a data storage web service.
In one embodiment, the structural features of the previously trained model are used to generate the requested model. For example, the model may be implemented using a convolution neural network including a convolution layer and fully connected layer. Based on the comparison between the model request and the previously trained model, the modeling request service 120 selects one or both of the layers for inclusion in the new machine learning model. Additional structural features include: a number of nodes, a number of fully connected layers, organization of the output layer, network architecture, activation function, or the combination of layers.
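The layer selection described above can be sketched as follows. This is an illustrative example only: the Layer representation, the layer names, and the select_layers helper are assumptions for the sketch, not part of any particular framework.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    kind: str  # e.g., "convolution" or "fully_connected"

def select_layers(pretrained, reuse_kinds):
    """Return the portion of a previously trained model whose layer
    kinds should be carried into the new image model."""
    return [layer for layer in pretrained if layer.kind in reuse_kinds]

# A previously trained model with two convolution layers and one
# fully connected classifier layer.
pretrained_model = [
    Layer("conv1", "convolution"),
    Layer("conv2", "convolution"),
    Layer("fc1", "fully_connected"),
]

# Reuse only the convolution layers as a feature extractor; a new
# fully connected classifier would then be trained on top.
portion = select_layers(pretrained_model, {"convolution"})
print([layer.name for layer in portion])  # ['conv1', 'conv2']
```

In this sketch, the comparison between the model request and the previously trained model determines which layer kinds are passed to select_layers.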
In one embodiment, the training parameters used to train the previous model are used to generate the requested model. Training parameters may include one or more of: a number of training generations, a node activation function, an error gradient, optimizer, learning rate, and the like.
In addition to or as an alternative to metadata comparisons, embodiments of the modeling request service 120 include consideration of the reference image data associated with the modeling request 102. For example, the modeling request service 120 may generate a metric describing one or more aspects of the reference image data and training data for a model. By comparing the metrics, similarities between the data can be identified, which further indicates that a model trained on similar images may be a good starting model for generating the requested model. In some embodiments, one or more reference images may be processed using an existing model, and the existing model that provides the highest confidence image processing result may be selected.
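The metric comparison might look like the following minimal sketch. The particular metrics (image count, mean pixel intensity) and the relative tolerance are assumptions chosen for illustration; a real service could use any of the dataset properties described elsewhere in this disclosure.

```python
def dataset_metrics(images):
    """Compute simple descriptive metrics for a set of images, where
    each image is a 2D list of pixel intensities."""
    pixels = [p for img in images for row in img for p in row]
    return {
        "num_images": len(images),
        "mean_intensity": sum(pixels) / len(pixels),
    }

def similar(m1, m2, tolerance=0.2):
    """Treat two datasets as similar when every metric differs by less
    than `tolerance` in relative terms."""
    for key in m1:
        denom = max(abs(m1[key]), abs(m2[key]), 1e-9)
        if abs(m1[key] - m2[key]) / denom >= tolerance:
            return False
    return True

reference = [[[100, 110], [120, 130]]]  # one 2x2 reference image
training = [[[105, 115], [118, 128]]]   # training image for a library model

print(similar(dataset_metrics(reference), dataset_metrics(training)))
```

A model whose training data passes this similarity check would be a candidate starting point for generating the requested model.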
In addition to or as an alternative to metadata comparisons, embodiments of the modeling request service 120 include consideration of interaction data with image processing services associated with different models. For example, if a model is used many times over a period of time as compared to another model, the model's utilization may indicate that the model is superior to other models.
The process of using learning parameters from one or more previously trained models may be referred to as meta-hyper parameter optimization (Meta HPO). The system searches not only through common parameters related to the models and training (such as number of layers and learning rate) but it also searches through the type of models being trained. For example, where the models are neural networks, the types of models may include Visual Geometry Group (VGG) networks, residual networks (ResNets), or densely connected neural networks (DenseNets) and related sub-networks. In one embodiment, the optimization is based on a resource budget for generating the requested model. The budget may be included in the modeling request. The resources identified in a budget include one or more of: time, processor cycles, memory, network bandwidth, a monetary unit, machine type (e.g., where the training is performed using virtual machines or distributed servers, the user may specify a type of machine used for generating the new machine learning model), or number of graphical processing units (GPUs) used. The learning parameters may be identified by comparing models generated from previous modeling requests with similar budgets. In one embodiment, a first budget is deemed similar to a second budget if the difference between the first budget and the second budget is within a threshold tolerance range.
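The budget comparison described above can be expressed as a simple tolerance check over prior requests. This is a hedged sketch: the record structure, the resource unit, and the 25% tolerance are illustrative assumptions.

```python
def budgets_similar(first, second, tolerance=0.25):
    """A first budget is deemed similar to a second budget if the
    difference between them is within a threshold tolerance range
    (here, relative to the second budget)."""
    return abs(first - second) <= tolerance * second

# Records of previously fulfilled modeling requests: the budget that
# was specified (e.g., GPU-hours) and the configuration that resulted.
previous_requests = [
    {"budget": 4.0, "model_type": "ResNet", "learning_rate": 0.01},
    {"budget": 40.0, "model_type": "DenseNet", "learning_rate": 0.001},
]

new_budget = 5.0
candidates = [r for r in previous_requests
              if budgets_similar(r["budget"], new_budget)]
print([c["model_type"] for c in candidates])  # ['ResNet']
```

The learning parameters of the matching prior request(s) could then seed the meta-hyper parameter search for the new model.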
Over multiple trainings across users and use-cases, the system determines efficient starting points (e.g., initial learning parameters) for HPO. The system learns the mapping between the characteristics (features) of training data (such as number of samples, analysis type, data size, average pixel intensity etc.) and initializations (such as network type, subnetwork to be trained, algorithm type, learning rate, etc.). In one embodiment, the learning is achieved by aggregating the characteristics and initializations for trained models and storing the aggregation data in a look up table or other searchable data store. In such embodiments, when a modeling request is received by the system, the parameters included in the request may be used to search the look up table or other searchable data store for aggregated data most closely associated with the modeling request. In one embodiment, the learning is achieved by searching a library of trained models based on the modeling parameter and, in some embodiments, model interaction data to identify relevant training data or initializations for the requested model.
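The look up table described above can be sketched as a nearest-match search from training data characteristics to initializations. The characteristic keys, the distance measure, and the stored initializations below are all assumptions for illustration.

```python
def nearest_initialization(table, features):
    """Return the initialization whose stored characteristics are most
    closely associated with the request's features, using Euclidean
    distance over the numeric keys present in the request."""
    def distance(entry):
        return sum((entry["features"][k] - features[k]) ** 2
                   for k in features)
    return min(table, key=distance)["init"]

# Aggregated (characteristics -> initialization) data for trained models.
lookup_table = [
    {"features": {"num_samples": 500, "data_size_mb": 50},
     "init": {"network_type": "VGG", "learning_rate": 0.01}},
    {"features": {"num_samples": 100000, "data_size_mb": 9000},
     "init": {"network_type": "ResNet", "learning_rate": 0.001}},
]

request_features = {"num_samples": 800, "data_size_mb": 70}
print(nearest_initialization(lookup_table, request_features))
```

Here the small request is closest to the small-dataset entry, so its initialization (network type and learning rate) would seed the new training run.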
Based on one or more of the factors described, the modeling request service 120 identifies a trained model to use as the starting point to train a new machine learning model for the modeling request 102. An image model trainer 130 executes the training of the requested model based, at least in part, on the trained image model retrieved from the image model data store 180.
For machine learning embodiments, training a model includes adjusting the weights between nodes included in the model such that the model including the adjusted weights provides an image processing result that is more accurate than the pre-adjusted weights. To increase the accuracy, backpropagation or other machine learning techniques are used by the image model trainer 130.
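As a toy illustration of adjusting weights so that the adjusted model is more accurate than the pre-adjusted one, consider gradient descent on a single-weight linear model with squared-error loss. This is a deliberately minimal stand-in for the backpropagation performed by the image model trainer 130, not a representation of any actual network.

```python
def train_weight(w, inputs, targets, learning_rate=0.1, steps=50):
    """Repeatedly adjust w along the negative gradient of the mean
    squared error for the prediction w * x."""
    for _ in range(steps):
        grad = sum(2 * (w * x - t) * x for x, t in zip(inputs, targets))
        grad /= len(inputs)
        w -= learning_rate * grad
    return w

def loss(w, inputs, targets):
    return sum((w * x - t) ** 2 for x, t in zip(inputs, targets)) / len(inputs)

xs, ts = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]  # target relationship: t = 2x
w0 = 0.0
w1 = train_weight(w0, xs, ts)
# The adjusted weight yields a more accurate result than the original.
print(loss(w1, xs, ts) < loss(w0, xs, ts))  # True
```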
Some embodiments of the image model trainer 130 receive learning parameters to further guide the training of the new machine learning model. The learning parameters may be received along with the model retrieved from the image model data store 180. The learning parameters may be received for a different model than the model which is used as the starting point for the training.
After generating the new machine learning model, the image model trainer 130 shown in the environment 100 of
The image model trainer 130 provides the identifier for the trained image model to the modeling request service 120. The modeling request service 120 generates the image processing service 190 based on the trained image model. Generating the image processing service 190 may include creating a service instance to receive image requests which are processed using the trained image model to provide image processing results.
If the modeling request 102 includes an identifier for an existing model, then when the modeling request service 120 requests training models from the model data store 180, the identifier will be used to retrieve the training model. If the model is stored in the image model data store 180, the image model trainer may retrain the model according to the modeling request 102. For example, additional training data may be provided to further tune the model.
In the embodiment shown in
To access the image processing service 190, the network service client 124 sends a request message to the network service interface 224 via the network 210. The network service provider server 220 identifies a requested service based on the request and provides the request to the appropriate service. For example, if the request includes modeling parameters to create or update an image processing service, the network service interface 224 detects the modeling parameters as one indicator of the destination for the request. In some embodiments, the endpoint to which the request is presented identifies the application or service to handle the request. For example, the modeling request service 120 may be hosted at a known network location (e.g., http://networkserviceprovider.com/services/modelingService). In such embodiments, requests presented to the endpoint will be routed to the modeling request service 120. The application provides a response to the request to the network service interface 224 which, in turn, provides the response to the device that transmitted the request (e.g., the server 202 or the access device 110).
As the network service interface 224 receives requests and transmits responses, the network service interface stores information regarding the service interactions in a network service metrics data store 230. The service interaction information may include one or more of: number of requests routed to the service, number of responses sent from the service, the confidence of the responses, time taken for a service to respond to a request, resources utilized by a service to respond to a request, or memory requirements for the service.
In one embodiment, the network service interface 224 monitors a service to collect metrics while it is processing. For example, the modeling request service 120 may generate training information about a model such as learning parameters, model training metrics such as number of generations for training or model accuracy, training speed, memory requirements, or training convergence rate that is collected by the network service interface 224. A training image data store 240 is included in the embodiment shown in
As another example, a model generated by the modeling request service 120 may be published as a new image processing service of the server 220. As shown in
The method 300 begins at block 302. At block 304, a request for an image model is received from an electronic communication device. In one embodiment, the request identifies a task for the image model to perform, and training data including a labeled image. The task is at least one of: identifying a location of an object within an image or identifying the object within the image. The labeled image includes an identification of pixels showing the object within an image.
At block 306, a first machine learning model and a second machine learning model are identified from a library of machine learning models. The identification is based on the task or the object associated with the request received at block 304. For example, the first and second machine learning models may include metadata indicating the types of objects identified by the models. Descriptive metadata included in the modeling request is compared with the metadata associated with the models included in the library to identify those machine learning models which are relevant to the requested model. Domain information for the requesting client or user is compared in some embodiments to the domain associated with machine learning models in the library to identify those machine learning models which may be relevant to the requested model.
In one embodiment, the identification includes comparing the data used to train the models with the reference image data associated with the request. For example, where the first machine learning model is associated with training data, identifying the first machine learning model includes determining that a property for the set of reference images corresponds to the property for the training data. The property identifies at least one of: a number of images, a pixel intensity for the images, or a size of the images in the respective set of images.
At block 308, a first confidence level that the first machine learning model identifies the pixels showing the object included in the labeled image is determined. The first confidence level indicates how well the first machine learning model can perform the requested task for the training data associated with the request. In one embodiment, the first confidence level is generated by processing the labeled image with the first machine learning model to receive, as an output, the confidence for predicting the object.
At block 310, a second confidence level that the second machine learning model identifies the pixels showing the object included in the labeled image is determined. The second confidence level indicates how well the second machine learning model can perform the requested task for the training data associated with the request. In one embodiment, the second confidence level is generated by processing the labeled image with the second machine learning model to receive, as an output, the confidence for predicting the object.
At block 312, the image model is generated using the model associated with the highest confidence level. In some embodiments, generating the image model includes generating a learning parameter for the image model based on the set of labeled image data, the first machine learning model, and the second machine learning model. The learning parameter defines a property of the image model or a property of the training of the image model. In some embodiments, generating the image model may include selecting a portion of the model associated with the highest confidence to include in the image model. For example, a size of the set of labeled data is compared to a size of a set of training data used to train the model and, based on the comparison, one or more portions of the machine learning model is selected to include in the image model. A portion, in one embodiment, includes a convolution layer that receives image data as an input and provides a vector of feature detection results (e.g., predictions) as an output.
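Blocks 308 through 312 can be sketched as scoring each candidate model on the labeled image and generating the new image model from the higher-confidence candidate. In this sketch the score functions are placeholders standing in for real inference, and the record layout is an assumption.

```python
def pick_starting_model(candidates, labeled_image):
    """Return the candidate whose confidence for identifying the
    object pixels in the labeled image is highest."""
    return max(candidates, key=lambda m: m["score"](labeled_image))

# Stand-ins for the first and second machine learning models; a real
# implementation would run each model on the labeled image and read
# back the confidence of its prediction.
first_model = {"name": "model_a", "score": lambda image: 0.91}
second_model = {"name": "model_b", "score": lambda image: 0.78}

labeled_image = {"pixels": [[0, 1], [1, 0]], "object": "cat"}
best = pick_starting_model([first_model, second_model], labeled_image)
print(best["name"])  # model_a
```

The selected model then serves as the starting point for generating the image model, together with any learning parameters derived from both candidates.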
Table 1 provides an example of comparisons between data sets of the requested model and a pre-trained model for determining what, if any, structural features from the pre-trained model should be used in generating the image model.
TABLE 1

                   Similar dataset                   Different dataset
Small dataset      Use pre-trained model             Use pre-trained model low level
                   feature extractor and             feature detection and classifier
                   classifier (transfer learning)    (fine-tuning)
Large dataset      Fine-tune pre-trained             Train from scratch without
                   model classifier                  structure from pre-trained model
                   (fine-tuning)                     (full training)
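Table 1 can be expressed directly as a decision function over the two comparisons. The thresholds that would decide whether a dataset is "small" or "similar" are left to the caller; the four strategies are the four cells of the table.

```python
def training_strategy(dataset_is_small, dataset_is_similar):
    """Map the Table 1 comparisons to a training strategy for
    generating the image model from a pre-trained model."""
    if dataset_is_small and dataset_is_similar:
        return "transfer learning"  # reuse feature extractor and classifier
    if dataset_is_small and not dataset_is_similar:
        return "fine-tuning"        # reuse low level feature detection
    if not dataset_is_small and dataset_is_similar:
        return "fine-tuning"        # fine-tune pre-trained model classifier
    return "full training"          # train from scratch

print(training_strategy(dataset_is_small=True, dataset_is_similar=True))
# transfer learning
```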
Returning to
The memory 570 generally includes RAM, ROM, and/or other persistent, non-transitory computer readable media. The memory 570 stores an operating system 574 that provides computer program instructions for use by the processing unit 540 or other elements included in the computing device in the general administration and operation of the network service provider server 120. In one embodiment, the memory 570 further includes computer program instructions and other information for implementing aspects of generating models described.
For example, in one embodiment, the memory 570 includes a modeling service configuration 576. The modeling service configuration 576 includes thresholds or other values to support the modeling operations, such as generating a model and an associated image processing service, described herein. The memory 570 shown in
In one embodiment, the configurations store specific values for a given configuration. For example, the threshold for determining whether a value included in a modeling request is similar to aggregated data or other information associated with a previously trained model may be specified as an absolute value (e.g., every day, week, 10 days, etc.). In some embodiments, the values are provided in a look up table indexed by one or more characteristics of the model or the information upon which the model was generated (e.g., a modeling request value, training data, training data metrics, or training result(s)).
Rather than storing express values for a particular configuration element, one embodiment stores information that allows the network service provider server 120 to obtain a dynamically generated value for the given configuration element. For example, the identity of the default constraint engine may be specified as a network location (e.g., URL) in conjunction with username and password information to access the network location to obtain the modeling or image processing service parameters used by the network service provider server 120.
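A dynamically resolved configuration element of this kind might be modeled as below. The class, field names, and injected fetcher are hypothetical; the sketch only shows the pattern of storing a network location plus credentials instead of an express value:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DynamicConfigElement:
    """A configuration element resolved at lookup time rather than stored
    as an express value."""
    location: str   # e.g., a URL where the parameter value can be obtained
    username: str
    password: str

    def resolve(self, fetch: Callable[[str, str, str], str]) -> str:
        # `fetch` performs the authenticated retrieval (e.g., an HTTP GET);
        # it is injected so the element stays transport-agnostic.
        return fetch(self.location, self.username, self.password)
```

In use, the server would call `resolve` with its network client each time the configuration element is needed, so the value can change without updating stored configuration.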
In the embodiment shown in
The elements included in the network service provider server 120 are coupled by a bus 590. The bus 590 includes one or more of: a data bus, communication bus, or other bus mechanism to enable the various components of the network service provider server 120 to exchange information.
In some embodiments, the network service provider server 120 includes additional or fewer components than are shown in
Depending on the embodiment, certain acts, events, or functions of any of the processes or algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (e.g., not all described operations or events are necessary for the practice of the algorithm). Moreover, in certain embodiments, operations or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially.
The various illustrative logical blocks, modules, routines, and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or as a combination of electronic hardware and executable software. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware, or as software that runs on hardware, depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure.
Moreover, the various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a network service provider server, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A network service provider server can be or include a microprocessor, but in the alternative, the network service provider server can be or include a controller, microcontroller, or state machine, combinations of the same, or the like configured to generate and publish image processing services backed by a machine learning model. A network service provider server can include electrical circuitry configured to process computer-executable instructions. Although described herein primarily with respect to digital technology, a network service provider server may also include primarily analog components. For example, some or all of the modeling and service algorithms described herein may be implemented in analog circuitry or mixed analog and digital circuitry. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
The elements of a method, process, routine, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a network service provider server, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of a non-transitory computer-readable storage medium. An illustrative storage medium can be coupled to the network service provider server such that the network service provider server can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the network service provider server. The network service provider server and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the network service provider server and the storage medium can reside as discrete components in a user terminal (e.g., access device or network service client device).
Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without other input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list.
Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
Unless otherwise explicitly stated, articles such as “a” or “an” should generally be interpreted to include one or more described items. Accordingly, phrases such as “a device configured to” are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out recitations A, B and C” can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C.
As used herein, the terms “determine” or “determining” encompass a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” may include resolving, selecting, choosing, establishing, and the like.
As used herein, the term “selectively” or “selective” may encompass a wide variety of actions. For example, a “selective” process may include determining one option from multiple options. A “selective” process may include one or more of: dynamically determined inputs, preconfigured inputs, or user-initiated inputs for making the determination. In some embodiments, an n-input switch may be included to provide selective functionality where n is the number of inputs used to make the selection.
As used herein, the terms “provide” or “providing” encompass a wide variety of actions. For example, “providing” may include storing a value in a location for subsequent retrieval, transmitting a value directly to the recipient, transmitting or storing a reference to a value, and the like. “Providing” may also include encoding, decoding, encrypting, decrypting, validating, verifying, and the like.
As used herein, the term “message” encompasses a wide variety of formats for communicating (e.g., transmitting or receiving) information. A message may include a machine readable aggregation of information such as an XML document, fixed field message, comma separated message, or the like. A message may, in some embodiments, include a signal utilized to transmit one or more representations of the information. While recited in the singular, it will be understood that a message may be composed, transmitted, stored, received, etc. in multiple parts.
As used herein “receive” or “receiving” may include specific algorithms for obtaining information. For example, receiving may include transmitting a request message for the information. The request message may be transmitted via a network as described above. The request message may be transmitted according to one or more well-defined, machine readable standards which are known in the art. The request message may be stateful in which case the requesting device and the device to which the request was transmitted maintain a state between requests. The request message may be a stateless request in which case the state information for the request is contained within the messages exchanged between the requesting device and the device serving the request. One example of such state information includes a unique token that can be generated by either the requesting or serving device and included in messages exchanged. For example, the response message may include the state information to indicate what request message caused the serving device to transmit the response message.
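The stateless exchange described above, where a token carried in the messages ties a response back to the request that caused it, can be sketched as follows (the message shape and field names are illustrative assumptions):

```python
import uuid

def make_request(payload):
    """Build a stateless request message; the generated token is the only
    state, and it travels inside the message itself."""
    return {"token": uuid.uuid4().hex, "payload": payload}

def make_response(request, result):
    """Echo the request's token so the requesting device can determine
    which request message caused this response."""
    return {"token": request["token"], "result": result}
```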
As used herein “generate” or “generating” may include specific algorithms for creating information based on or using other input information. Generating may include retrieving the input information such as from memory or as provided input parameters to the hardware performing the generating. Once obtained, the generating may include combining the input information. The combination may be performed through specific circuitry configured to provide an output indicating the result of the generating. The combination may be dynamically performed such as through dynamic selection of execution paths based on, for example, the input information, device operational characteristics (e.g., hardware resources available, power level, power source, memory levels, network connectivity, bandwidth, and the like). Generating may also include storing the generated information in a memory location. The memory location may be identified as part of the request message that initiates the generating. In some embodiments, the generating may return location information identifying where the generated information can be accessed. The location information may include a memory location, network locate, file system location, or the like.
As used herein a “user interface” (also referred to as an interactive user interface, a graphical user interface or a UI) may refer to a network based interface including data fields and/or other controls for receiving input signals or providing electronic information and/or for providing information to the user in response to any received input signals. A UI may be implemented in whole or in part using technologies such as hyper-text mark-up language (HTML), FLASH™, JAVA™, .NET™, web services, and rich site summary (RSS). In some embodiments, a UI may be included in a stand-alone client (for example, thick client, fat client) configured to communicate (e.g., send or receive data) in accordance with one or more of the aspects described.
While the above detailed description has shown, described, and pointed out novel features as applied to various embodiments, it can be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the spirit of the disclosure. As can be recognized, certain embodiments described herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others. The scope of certain embodiments disclosed herein is indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Swaminathan, Gurumurthy, Zhou, Xiong, Khare, Vineet, Dirac, Leo Parker
Assignment (executed Nov 16, 2017): Leo Parker Dirac, Vineet Khare, Gurumurthy Swaminathan, and Xiong Zhou assigned their interest to Amazon Technologies, Inc. (Reel/Frame 050444/0883).