Systems and methods for selecting optimal policies that maximize expected return subject to given risk tolerance and confidence levels. In particular, methods and systems for selecting an optimal ad recommendation policy—based on user data, a set of ad recommendation policies, and risk thresholds—by sampling the user data and estimating gradients. The system or methods utilize the estimated gradients to select a good ad recommendation policy (an ad recommendation policy with high expected return) subject to the risk tolerance and confidence levels. To assist in selecting a risk-sensitive ad recommendation policy, a gradient-based algorithm is disclosed to find a near-optimal policy for conditional-value-at-risk (CVaR) risk-sensitive optimization.
|
13. In a digital medium environment for identifying and deploying policies, where policies can be altered, removed, or replaced on demand, a method of using risk-sensitive, lifetime value optimization based on a conditional value at risk measure to improve accuracy, efficiency, and stability in selecting and executing a policy comprising:
identifying, by one or more processors, a risk-tolerance value indicating a measure of permissible variance for policies and a confidence level, the risk-tolerance value and the confidence level corresponding to a conditional value at risk indicating a threshold measure of a mean of an alpha-tail distribution;
identifying a set of sample data comprising prior interactions by user client devices;
receiving a set of policies, each policy being defined by one or more policy parameters; and
determining, by the one or more processors, an optimized policy that is subject to the risk-tolerance value within the confidence level using a trajectory-based policy gradient algorithm by converging a policy parameter, a risk parameter indicating policy conditional value at risk, and a constraint parameter by sampling the sample data to identify the optimized policy that satisfies the threshold measure of the mean of the alpha-tail distribution; and
in response to determining the optimized policy, executing the optimized policy subject to the risk-tolerance value and the confidence level corresponding to the conditional value at risk such that digital content is provided to client devices in accordance with the optimized policy.
1. In a digital medium environment for identifying and deploying digital advertising campaigns across a plurality of client devices, where campaigns can be altered, removed, or replaced on demand, a computer-implemented method of using risk-sensitive, lifetime value optimization based on a conditional value at risk measure to improve accuracy, efficiency, and stability in selecting and executing ad recommendation policies comprising:
identifying, at a server, a risk-tolerance value indicating a measure of permissible variance for ad recommendation policies in a digital content campaign, the risk-tolerance value corresponding to a conditional value at risk;
identifying a set of user data indicating prior interactions by user client devices in relation to digital advertisements of one or more digital content campaigns;
determining, by the server, an optimized ad recommendation policy subject to the risk-tolerance value by converging a policy parameter and a risk parameter, the risk parameter indicating policy conditional value at risk, to identify the optimized ad recommendation policy such that the optimized ad recommendation policy satisfies the conditional value at risk by:
determining a gradient for the policy parameter by sampling the set of user data;
determining a gradient for the risk parameter by sampling the set of user data;
using the determined gradient for the risk parameter to select an updated risk parameter that indicates updated policy conditional value at risk; and
using the determined gradient for the policy parameter to identify an updated policy parameter; and
in response to determining the optimized ad recommendation policy, executing the digital content campaign subject to the risk-tolerance value corresponding to the conditional value at risk by providing digital advertisements to client devices in accordance with the optimized ad recommendation policy.
19. A system using risk-sensitive, lifetime value optimization based on a conditional value at risk measure to improve accuracy, efficiency, and stability in selecting ad recommendation policies in connection with identifying and deploying digital advertising campaigns, where campaigns can be altered, removed, or replaced on demand, comprising:
at least one processor;
at least one non-transitory computer readable storage medium storing instructions thereon, that, when executed by the at least one processor, cause the system to:
receive a risk-tolerance value and a confidence level for the risk-tolerance value, the risk-tolerance value and confidence level corresponding to a conditional value at risk;
identify a set of user data indicating prior interactions by user client devices in relation to digital advertisements of one or more digital content campaigns;
determine an optimized ad recommendation policy that is subject to the risk-tolerance value within the confidence level by converging a policy parameter, a risk parameter indicating policy conditional value at risk, and a constraint parameter to identify the optimized ad recommendation policy such that the optimized ad recommendation policy satisfies the conditional value at risk by repeatedly, for each of the policy parameter, the risk parameter, and the constraint parameter:
estimating a gradient for a given parameter of the policy parameter, the risk parameter, or the constraint parameter based on sampling of the set of user data with respect to an initial ad recommendation policy; and
using the estimated gradient for the given parameter to update the given parameter; and
in response to determining the optimized ad recommendation policy, execute the digital content campaign subject to the risk-tolerance value and the confidence level corresponding to the conditional value at risk by providing digital advertisements to client devices in accordance with the optimized ad recommendation policy.
2. The method as recited in
determining a gradient for the constraint parameter by sampling the set of user data; and
using the determined gradient for the constraint parameter to select an updated constraint parameter.
3. The method as recited in
4. The method as recited in
the first time scale is faster than the second time scale; and
the second time scale is faster than the third time scale.
5. The method as recited in
6. The method as recited in
the conditional value at risk comprises a threshold measure of a mean of an alpha-tail distribution; and
determining the optimized ad recommendation policy subject to the risk-tolerance value comprises converging the policy parameter and the risk parameter such that an alpha-tail distribution corresponding to the optimized ad recommendation policy satisfies the threshold measure of the mean of the alpha-tail distribution.
7. The method as recited in
receiving a confidence level for the risk-tolerance value;
determining the threshold measure of the mean of the alpha-tail distribution based on the risk-tolerance value and the confidence level; and
determining that the optimized ad recommendation policy satisfies the risk-tolerance value and the confidence level by determining that the optimized ad recommendation policy satisfies the threshold measure of the mean of the alpha-tail distribution.
8. The method as recited in
selecting the policy parameter from a set of ad recommendation policies; and
selecting the updated risk parameter from the set of ad recommendation policies.
9. The method as recited in
10. The method as recited in
11. The method as recited in
12. The method as recited in
14. The method as recited in
generating one or more trajectories of a given parameter of the policy parameter, the risk parameter, or the constraint parameter by sampling the set of sample data;
estimating, by the one or more processors, a gradient for the given parameter based on the generated one or more trajectories of the given parameter; and
using the estimated gradient for the given parameter to update the given parameter.
15. The method as recited in
16. The method as recited in
17. The method as recited in
the first time scale is faster than the second time scale; and
the second time scale is faster than the third time scale.
18. The method as recited in
20. The system as recited in
generating one or more trajectories of the given parameter by sampling the set of sample data; and
using the generated one or more trajectories of the given parameter to estimate the gradient for the given parameter.
|
One or more embodiments of the present disclosure relate generally to selecting, altering, or replacing policies. More specifically, one or more embodiments of the present disclosure relate to systems and methods for selecting advertising recommendation policies in light of risk tolerance.
Product marketers commonly seek to target advertisements to the particular individual or group of consumers viewing the advertisement. The more a marketer can target advertising to consumers' unique characteristics, the more likely consumers will have interest in, and purchase, the pertinent products. In today's digital economy, marketers can increasingly tailor their advertisements based on digital information regarding potential customers and interact with them on an individual basis. This is particularly true with regard to advertising campaigns on personal computers, hand-held devices, smartphones, tablets, or web-enabled devices, where a marketer can gather and analyze digital information regarding the consumer and uniquely tailor advertisements to each user or device, on demand.
Accordingly, many businesses commonly gather and statistically analyze data and then adopt advertising policy programs that select advertising content based on discoverable characteristics of target consumers. Such advertising policy programs commonly select the “optimal” content for a consumer or group of consumers using a risk-neutral approach: that is, the analysis for adopting advertising policies maximizes the expected average return without taking into account the variation, or risk, associated with adopting any given policy.
While this approach tends to maximize value over an infinite time horizon, it may overlook problems that arise in risk-sensitive applications, i.e., business applications that cannot sustain large-scale variability. Thus, if a business or marketer cannot sustain wide variability—such as businesses with a lack of liquid assets or a marketer with sensitive business clients—a conventional risk-neutral approach may lead to catastrophic business results (i.e., an unsustainable loss of revenue in the short term).
In the past, particularly with print advertising, it has often been sufficient to select advertisement recommendation policies in a manner that maximizes lifetime expected return without regard to risk. Given the effort and time needed to alter or replace traditional print advertising, it has typically been impractical to account for potential short-term losses. Thus, advertisers have focused on policies that are projected to provide the most revenue over the long term.
Advances in digital marketing and advertising allow for on-demand altering and replacing of advertising recommendation policies and advertisement campaigns. In the current digital advertising realm, a website can go viral or bust in relatively short periods of time. Given that web traffic and clicks can drive revenue, changes in website design or ad recommendation policy can have dire effects if bad policy choices are made.
That said, the nature of advertising information generally makes it difficult to analyze advertising policies in light of risk. In particular, the stochastic nature of website visitor behavior can mean the precise dynamic interaction between variables is typically unknown even though marketers often have access to data regarding advertising content, customer background, and customer decisions. Indeed, a precise model describing the complex interaction between customer background information, advertising content, and consumer decisions is generally impossible, or at least cost-prohibitive, to develop. Moreover, conventional techniques do not account for variability and risk tolerance when optimizing lifetime value in advertising recommendation policies or other complex decision-making scenarios.
These and other problems exist with regard to selecting, switching, and modifying digital ad recommendation policies.
Embodiments of the present disclosure provide benefits and/or solve one or more of the foregoing or other problems in the art with systems and methods that use a new stochastic optimization theory to select a digital advertising recommendation policy that seeks to maximize value in light of risk sensitivity. In particular, in one or more embodiments, the systems and methods identify an ad recommendation policy that optimizes expected lifetime value within the parameters of a selected risk threshold. In order to assist a user in identifying a more optimal ad recommendation policy in light of applicable risk, in one or more embodiments, systems and methods use a risk-tolerance value and confidence level, a set of user data indicating prior consumer behavior in relation to advertising, and a set of ad recommendation policies to select an optimal policy. Although the precise dynamic interaction among the pertinent variables may not be known, the systems and methods estimate gradients by sampling the set of user data and applying it to the current policy. The systems and methods then utilize the gradients to point toward parameters of an ad recommendation policy that falls within the risk-tolerance value and confidence level.
The systems and methods disclosed herein allow a user to identify an ad recommendation policy that tends to optimize the expected return value while staying within a user-defined measure of risk. Thus, one or more embodiments enable a user in a risk-sensitive environment to minimize the risk of adverse business outcomes that may result from optimizing the mean return value without consideration of variability. Furthermore, the systems and methods disclosed herein can operate through reinforcement learning techniques using historical data sampling, without a need for defined dynamic interaction amongst the variables.
Additional features and advantages of exemplary embodiments of the present disclosure will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of such exemplary embodiments. The features and advantages of such embodiments may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features will become more fully apparent from the following description and appended claims, or may be learned by the practice of such exemplary embodiments as set forth hereinafter. The foregoing summary is not an extensive overview, and it is not intended to identify key elements or indicate a scope. Rather the foregoing summary identifies aspects of embodiments as a prelude to the detailed description presented below.
In order to describe the manner in which the above recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof that are illustrated in the appended drawings. It should be noted that the figures are not drawn to scale, and that elements of similar structure or function are generally represented by like reference numerals for illustrative purposes throughout the figures. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
One or more embodiments of the present disclosure include a policy selection system that uses stochastic optimization to select near-optimal policies in light of risk. In particular, in one or more embodiments, the policy selection system allows the user to account for risk in selecting, changing, or altering an ad recommendation policy. Based on an identified acceptable level of risk, the policy selection system assists the user in identifying an ad recommendation policy that increases or optimizes expected return given the acceptable level of risk.
Because the precise interaction between variables in selecting ad recommendation policies is unknown, the policy selection system utilizes reinforcement-learning techniques based on prior user data. In one or more embodiments, the policy selection system receives a risk-tolerance value and a confidence level, prior user data, and ad recommendation policies. The system estimates gradients of selection parameters by sampling the set of user data and applying the data to the current ad recommendation policy. The policy selection system utilizes these gradients to update the selection parameters and repeat the processes of estimating gradients using the updated selection parameters. The policy selection system repeats this process until the selection parameters converge, indicating an optimal ad recommendation policy that falls within the applicable risk-tolerance value and the confidence level.
The disclosed systems and methods allow a user to select an ad recommendation policy that tends to maximize expected returns while staying within a user-defined tolerance for risk. Thus, a user in risk-sensitive applications can state, to a selected confidence level, that the selected policy will remain within a certain range of results. This allows the user to reduce or minimize the risk of impermissible variance that may come from simply optimizing mean performance. Accordingly, one or more embodiments of the present disclosure allow a user to control, within a selected confidence level, the amount of risk in expected results without unnecessarily sacrificing lifetime value.
For example, suppose that each trajectory represents a week of company revenue data from ad recommendations. If a conventional ad recommendation policy selection approach is used that maximizes revenue without taking risk into account, the average expected revenue may be maximized, but there might be weeks where revenue drastically falls and causes panic in the company. On the other hand, the policy selection system can help ensure that a risk constraint is satisfied. For example, the policy selection system can select an ad recommendation policy that increases expected revenue while also helping ensure that the company's revenue stays above a minimum pre-specified threshold. In addition to helping avoid the deployment of risky policies, the policy selection system can give an ad manager the ability to shape revenue distributions and risk profiles.
To optimize lifetime value in light of risk, in one or more embodiments, the policy selection system can utilize a policy gradient algorithm. Specifically, the policy selection system can employ a policy gradient algorithm for mean-conditional value at risk optimization. It will be appreciated that conditional value at risk (CVaR) is a measure of the mean of the α-tail distribution—a variance measure that addresses some shortcomings of well-known variance-related risk measures. One or more embodiments employ reinforcement learning techniques through CVaR optimization in infinite horizon Markov decision processes (MDPs) as applied to large-scale digital problems, such as selection of an ad recommendation policy. Furthermore, embodiments of the present disclosure can operate in scenarios involving both continuous and discrete loss distributions.
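For reference, the following are standard definitions of value at risk (VaR) and conditional value at risk (CVaR) for a loss (cost) variable Z, consistent with the description of CVaR as the mean of the α-tail of a loss distribution; they are provided here only as a sketch of the underlying notation:

```latex
\mathrm{VaR}_{\alpha}(Z) = \min\{\, z \mid \Pr(Z \le z) \ge \alpha \,\},
\qquad
\mathrm{CVaR}_{\alpha}(Z)
  = \min_{\nu \in \mathbb{R}} \Big\{ \nu + \tfrac{1}{1-\alpha}\,\mathbb{E}\big[(Z - \nu)^{+}\big] \Big\}.
```

For a continuous loss distribution, the second expression equals E[Z | Z ≥ VaR_α(Z)], i.e., the mean of the α-tail; the minimization form also covers discrete loss distributions.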
In particular, the policy selection system can use a policy gradient algorithm for mean-conditional value at risk optimization by estimating gradients of the applicable risk-sensitive function. The gradient estimates are used to update the selection parameters until arriving at a risk-sensitive optimal policy. For example, one or more embodiments estimate gradients with regard to three sets of selection parameters: one gradient estimate directed to one or more risk parameters, one directed to one or more ad recommendation policy parameters, and one directed to one or more constraint parameters (e.g., Lagrangian parameters).
To inform the system without knowing the precise system dynamics in advance, embodiments estimate the gradients in the policy gradient algorithm by calculating one or more trajectories. These trajectories provide the policy selection system with information regarding how the policy may operate in practice, thus providing a more accurate gradient estimation for the applicable risk function. Specifically, the system or methods generate one or more trajectories based on samples from user data, allowing the policy selection system to identify a more optimal ad recommendation policy that falls within applicable risk-thresholds without knowing the exact dynamic interaction amongst the variables. Using Monte Carlo methods or other reinforcement learning techniques, the one or more trajectories direct the gradient estimates to update the parameters toward a more optimal ad recommendation policy. Moreover, one or more embodiments update the applicable parameters after observing several trajectories.
Thus, the policy selection system may use user data to calculate one or more trajectories. The policy selection system can use the trajectories to provide direction for gradient estimates. The policy selection system can then use the gradient estimates to update policy parameters. The policy selection system can repeat this process until the algorithm converges toward a new ad recommendation policy within the risk-tolerance value and confidence level.
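As a rough sketch of this loop (illustrative only; the helper names such as sample_trajectories and estimate_gradients, and the step-size functions discussed in the next paragraph, are hypothetical stand-ins for the trajectory-based estimators described herein, and the descent/ascent directions assume a minimization-style objective with a nonnegative constraint multiplier):

```python
import numpy as np

def select_policy(user_data, theta, nu, lam,
                  sample_trajectories, estimate_gradients,
                  step_theta, step_nu, step_lam,
                  num_trajectories=100, max_iters=10000, tol=1e-4):
    """Hypothetical sketch of the repeated sample -> estimate -> update cycle.

    theta: policy parameters; nu: risk parameter; lam: constraint (Lagrangian) parameter.
    sample_trajectories and estimate_gradients are caller-supplied placeholders
    for the trajectory-based estimators described in this disclosure.
    """
    theta = np.asarray(theta, dtype=float)
    for i in range(max_iters):
        # Generate trajectories by sampling the user data under the current policy.
        trajectories = sample_trajectories(user_data, theta, num_trajectories)

        # Estimate a gradient for each of the three parameter sets.
        g_theta, g_nu, g_lam = estimate_gradients(trajectories, theta, nu, lam)

        # Update each parameter in the direction indicated by its gradient estimate.
        new_theta = theta - step_theta(i) * g_theta        # policy parameters
        new_nu = nu - step_nu(i) * g_nu                    # risk parameter
        new_lam = max(0.0, lam + step_lam(i) * g_lam)      # constraint parameter, kept nonnegative

        # Stop once all three parameter sets have (approximately) converged.
        if (np.linalg.norm(new_theta - theta) < tol
                and abs(new_nu - nu) < tol and abs(new_lam - lam) < tol):
            return new_theta, new_nu, new_lam
        theta, nu, lam = new_theta, new_nu, new_lam
    return theta, nu, lam
```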
To help converge on an ad recommendation policy efficiently, in one or more embodiments the trajectories are used to estimate the gradients on differing time-scales. For example, the policy selection system can estimate the risk tolerance parameters on the fastest time-scale, the ad recommendation policy parameters on an intermediate time scale, and the Lagrangian parameters on a slowest time-scale. This assists the policy gradient algorithm in converging efficiently on an ad recommendation policy subject to the applicable risk and system constraints.
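One simple way to realize these differing time scales is with diminishing step-size schedules whose rates separate the three updates; the exponents below are illustrative example values, not values prescribed by this disclosure:

```python
# Illustrative step-size schedules on three time scales: the risk parameter uses
# the fastest (largest) steps, the policy parameters an intermediate rate, and
# the constraint/Lagrangian parameter the slowest rate. The exponents are chosen
# so each schedule is square-summable but not summable, as is typical for
# stochastic approximation.
def step_nu(i):      # fastest time scale (risk parameter)
    return 1.0 / (i + 1) ** 0.55

def step_theta(i):   # intermediate time scale (policy parameters)
    return 1.0 / (i + 1) ** 0.75

def step_lam(i):     # slowest time scale (constraint/Lagrangian parameter)
    return 1.0 / (i + 1)
```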
Furthermore, one or more embodiments employ projection operators to help ensure convergence of the policy gradient algorithm. The projection operators project vectors onto the gradient estimates. This helps to ensure that the policy gradient algorithm converges to an optimal set of ad recommendation policies. As outlined below, using ordinary differential equation (ODE) analysis, these algorithms asymptotically converge to locally risk-sensitive optimal policies.
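As one simple instance (a sketch that assumes each parameter set is constrained to a known compact box; other compact convex sets would use their own projection operators):

```python
import numpy as np

def project_onto_box(vec, lower, upper):
    """Euclidean projection of a parameter vector onto a compact box [lower, upper].

    Applying a projection such as this after each gradient update keeps the
    iterates bounded, which supports the ODE-based convergence analysis.
    """
    return np.minimum(np.maximum(np.asarray(vec, dtype=float), lower), upper)

# Example: keep the constraint (Lagrangian) parameter within [0, 100] after an ascent step.
lam = project_onto_box([3.7], 0.0, 100.0)
```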
As used herein, the term “ad recommendation policy” refers to a set of one or more rules or parameters that recommends advertising content to serve for a given set of conditions. For example, an ad recommendation policy can select or identify an advertisement to present on a webpage (or other marketing medium) when a user with a given set of characteristics/user profile information visits the webpage. The characteristics/user profile information can include any type of information, including information regarding location, gender, age, interests, previous purchases, previous searches, and so forth. An advertisement can comprise advertising content. Specifically, advertising content can include advertisements directed or served to any number of marketing mediums, including, but not limited to, websites, native applications, handheld devices, personal computers, tablets, cellphones, smartphones, smart-watches, electronic readers, or other device or medium. Moreover, advertising content includes any advertising material or component thereof, including, but not limited to, images, colors, sound, text, and video, in isolation or in combination.
An ad recommendation policy may recommend advertising content based on one or more pieces of information in isolation or in conjunction with other pieces of information regarding a user. For example, an ad recommendation policy can recommend advertising based on age, gender, location, and previous purchases, in isolation or in combination. In particular, an ad recommendation policy can include a rule that recommends an advertisement for a particular toy when a child visits a website and recommends an advertisement for a particular automobile when an adult visits the website. Similarly, the more general term “policy” as used herein, refers to a set of rules that recommends a decision in response to a given set of data.
As used herein, the term “optimized ad recommendation policy” refers to an ad recommendation policy selected based on a projection that advertisements selected based on the optimized ad recommendation policy will lead to a high or optimized expected number of clicks, conversion, revenue, or other metric. As used herein, the term “optimized ad recommendation policy” does not necessarily mean the most optimal policy, but rather one that is projected to be optimal. Furthermore, an optimized ad recommendation policy can further be subject to one or more constraints. For example, an optimized ad recommendation policy can be “optimal” for a given time period, a given website, a given geographic location, a given risk tolerance, etc.
As used herein, the term “risk-tolerance value” or “loss-tolerance value” refers to a measure of permissible variance in a desired result. The statistical meaning of the term “risk-tolerance value” or “loss-tolerance value” is often expressed as the Greek letter “β.” The risk-tolerance value, as initially received, can take any number of forms, including a statistical conditional-value-at-risk measure or a minimum result in performance. For example, consider an advertising business that has historically lost clients upon selecting an advertising recommendation policy that falls below a 0.1% click rate. Even if the advertising business selects an advertising recommendation policy that optimizes the number of clicks, the advertising business may not have clients if the click rate drops below 0.1%. Accordingly, the advertising business can adopt a “risk-tolerance value” of a 0.1% click rate, which would allow the user to control the risk of losing clients as a result of excessive variance.
As used herein, the term “confidence level” refers to a measure of the probability that actual results will remain within the threshold described by the risk-tolerance value. The confidence level is generally described as a percent probability that the desired results will occur. The statistical meaning of the term “confidence level” is often expressed as the Greek letter “α.” A user that needs to be confident that risk, or variance, will not exceed a certain level should select a correspondingly high confidence level. For example, consider an advertising business that sets a “risk-tolerance value” of a 0.1% click rate. If the risk of losing clients is particularly high when the click-rate drops below 0.1%, the advertising business may wish to ensure, with 95% probability, that the click-rate will not drop below 0.1%. In such circumstances, the confidence level would be 95%.
As used herein, the terms “user data” or “sample data” refer to data of past policy performance. For example, user data can comprise data representing prior user behavior in relation to a set of conditions upon which a policy is based. In particular, user data can represent user responses to the same advertisement; to a variety of different advertisements; to advertisements using different features; to similar advertisements in different media forms; to similar advertising with different images or actors; and so forth. The user data can represent a variety of consumer behaviors in relation to advertising, such as clicks, non-clicks, views, non-views, touches, non-touches, likes, dislikes, purchases, non-purchases, selections, non-selections, and so forth. The user data can also represent many types of consumer information, such as gender, location, age, device-type, ethnicity, page-views, purchases, and so forth. For example, the user data can indicate users that visited a particular website, characteristics about the users, advertising content presented to the users, and whether the user clicked on the advertising content.
As used herein, the term “policy parameter” refers to the set of measurable factors that define the scope of the one or more policies. Similarly, the term “risk parameter” refers to the set of measurable factors that define variance. Moreover, the term “constraint parameter” refers to a set of factors that define the strategy conditions for optimizing the constrained problem. In other words, the constraint parameters are conditions for a minimum or maximum value to exist, and hence, for identifying an optimal expected result. Although these parameters can take a variety of forms, in one or more embodiments, these parameters are addressed below in the discussion of θ (policy parameter); ν (risk parameter); and λ (constraint parameter).
It will be recognized that any of components 102-108 may be combined into fewer components, such as into a single component, or divided into more components as may serve a particular embodiment. The components 102-108 can comprise software, hardware, or both. For example, the components 102-108 can comprise one or more instructions stored on a computer-readable storage medium and executable by processors of one or more computing devices, such as a client device or server device. When executed by the one or more processors, the computer-executable instructions of the policy selection system 100 can cause the computing device(s) to perform the policy selection methods described herein. Alternatively, the components 102-108 can comprise hardware, such as a special purpose processing device to perform a certain function or group of functions. Additionally or alternatively, the components 102-108 can comprise a combination of computer-executable instructions and hardware.
One or more embodiments can partially implement the policy selection system 100 as a native application installed on a client-computing device. For example, the policy selection system 100 may include a mobile application that installs and runs on a mobile device, such as a smart phone or a tablet. Alternatively, the policy selection system 100 can include a personal computer application, widget, or other form of a native computer program. Alternatively, the policy selection system 100 may be a remote application that a client-computing device accesses. For example, the policy selection system 100 may include a web application that is executed within a web browser of a client-computing device. Furthermore, the components 102-108 of the policy selection system 100 may, for example, be implemented as a stand-alone application, as a module of an application, as a plug-in for applications including marketing functions, as a library function or functions that may be called by other applications such as digital marketing applications, and/or as a cloud-computing model. Alternatively or additionally, the components of the policy selection system 100 may be implemented in any policy recommendation application, including but not limited to ADOBE Test&Target 1:1. “ADOBE,” “Target,” and “Test&Target 1:1” are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States and/or other countries.
As mentioned above, the policy selection system 100 can optimize a conditional value at risk for a Markov decision process (see equations 1-4 below) using a trajectory-based policy gradient algorithm (see algorithm 1 below). In particular, the trajectory generator 102 can generate a set of trajectories for a policy parameter, a risk parameter, and a constraint parameter. Specifically, the trajectory generator 102 can generate one or more trajectories of a given parameter of the policy parameter, the risk parameter, or the constraint parameter by sampling the set of user data.
Once the trajectory generator 102 generates the trajectories for each of the policy parameter, the risk parameter, and the constraint parameter, the gradient estimator 104 can use the trajectories to estimate the gradients for the parameters. In particular, the gradient estimator 104 can use the trajectories to estimate the gradients in equations 8-10 included below. The gradient estimator 104 can then use the estimated gradients to update the policy parameter, the risk parameter, and the constraint parameter.
The trajectory generator 102 can then generate another set of trajectories for the updated policy parameter, the updated risk parameter, and the updated constraint parameter. In particular, the trajectory generator 102 can generate one or more trajectories of a given updated parameter by sampling the set of user data. In turn, the gradient estimator 104 can then use the updated trajectories to estimate the gradients for the updated parameters. In particular, the gradient estimator 104 can use the updated trajectories to estimate the gradients in equations 8-10 included below. The gradient estimator 104 can then use the estimated gradients to again update the policy parameter, the risk parameter, and the constraint parameter. This process can be repeated to simultaneously converge the policy parameter, the risk parameter, and the constraint parameter to identify an optimized policy that is subject to a risk-tolerance value within a confidence level.
As part of simultaneously converging the policy parameter, the risk parameter, and the constraint parameter, the trajectory generator 102 can converge the risk parameter at a first time scale, converge the policy parameter at a second time scale, and converge the constraint parameter at a third time scale. The first time scale can be faster than the second time scale and the second time scale can be faster than the third time scale.
The policy selection system can also include a policy selector 106 to facilitate the selection of an ad recommendation policy. In particular, the policy selector 106 can select an ad recommendation policy in light of the updated policy parameters generated by the gradients estimated by the gradient estimator 104. In one or more embodiments the policy selector 106 identifies ad recommendation policies in accordance with generated policy parameters. Depending on the embodiment, the policy selector 106 can select ad recommendation policies from the ad recommendation policy information 112 (or some other source), can combine ad recommendation policies from the ad recommendation policy information 112 (or some other source), or can create ad recommendation policies or combinations of ad recommendation policies. One or more alternative embodiments may not include a policy selector 106; rather, the policy selection can occur within one or more alternative components or may occur in conjunction with estimation of the gradients.
As described above, the policy selection system 100 can include a data storage manager 108 to facilitate storage of information for the policy selection system 100. In particular, the data storage manager 108 can store information used by one or more of the components of the policy selection system 100 to facilitate the performance of various operations associated with the policy selection system 100. In one or more embodiments as shown in
In one or more embodiments, the user data 110 can include one or more samples of consumer behavior in relation to advertising. In particular, the user data can contain samples of one or more consumer interactions with advertising content, information associated with the consumer or transaction, or information associated with the advertisement. The user data can represent a variety of consumer behaviors in relation to advertising, such as, clicks, non-clicks, views, non-views, touches, non-touches, likes, dislikes, purchases, non-purchases, selections, non-selections, and so forth. The user data can also represent many types of user information, such as, gender, location, age, device-type, ethnicity, page-views, purchases, and so forth. The user data can also represent user interaction with any type of digital advertisement, and any component thereof, in any digital medium. In one or more embodiments, the trajectory generator 102 can generate trajectories of the policy, risk, and constraint parameters by sampling the user data 110.
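For concreteness, one hypothetical layout for a single entry of the user data 110 (an illustrative sketch only, not a required schema) might be:

```python
from dataclasses import dataclass

@dataclass
class LoggedInteraction:
    """Hypothetical example of one logged interaction in the user data 110."""
    user_features: dict   # e.g., {"age_band": "25-34", "location": "US", "device": "mobile"}
    ad_shown: str         # identifier of the advertising content served
    clicked: bool         # whether the user clicked the advertisement
    revenue: float        # revenue (or other reward) attributed to the interaction
```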
In one or more embodiments, the ad recommendation policy information 112 can include data representing one or more possible ad recommendation policies. For example, the ad recommendation policy information 112 can include recommended advertising content based on information known about a potential consumer. The ad recommendation policy information 112 can contain portions of an ad recommendation policy that can be combined together. The ad recommendation policy information 112 can gather information directly from client devices or receive information from an analytics server or another source. In one or more embodiments, the trajectory generator 102, the gradient estimator 104, and the policy selector 106 can access the ad recommendation policy information 112 to select an ad recommendation policy.
Moreover, the data storage manager 108 can also store risk preference information 114. The risk preference information can include risk-tolerance values, confidence levels, or other user preferences required or desired for alternative embodiments. The risk-tolerance values, confidence levels, and other preferences can be defined by a user or client or can be automatically generated by the system based on default settings.
One or more embodiments of the client devices 204a-204n can access a webpage (or another marketing medium such as a native application) supported by the content server 202. In particular, the content server 202 can host or support the webpage. For example, the content server can include a web hosting application that allows the client devices 204a-204n to interact with content hosted at the content server 202. To illustrate, each of the client devices 204a-204n can run separate instances of a web application (e.g., a web browser) to allow users to access, view, and/or interact with a webpage or website hosted at the content server 202.
Upon a client device 204a accessing the webpage, the ad server 203 can provide an advertisement to be presented in connection with the webpage. The ad server 203 can select the advertisement based on an ad recommendation policy provided by the policy selection system 100 and user profile information obtained about the particular user or client device 204a accessing the webpage.
In one or more embodiments, an advertiser can use the policy selection system 100 to select an ad recommendation policy that optimizes clicks, conversions, or revenue, while also ensuring a minimum success rate subject to a predetermined confidence level. The policy selection system 100 can provide an ad recommendation policy for the ad server 203 to follow, or the policy selection system 100 can facilitate interaction between the ad server 203 and the content server 202. In particular, the policy selection system 100 can form part of a marketing agent that identifies ads for the ad server 203 to serve in connection with the webpage hosted by the content server 202.
The policy selection system 100 can store, maintain, or have access to the profile information associated with one or more users or client devices 204a-204n. In particular, the policy selection system 100 can access information that can be used to select an ad recommendation policy or select advertisements. For example, the policy selection system 100 can obtain profile information for a user or other information that indicates preferences, characteristics, or interests associated with the user or client device 204a. In one implementation, the policy selection system 100 can access the profile information from an analytics server.
As previously described, the policy selection system 100 can be implemented on one or more devices. For example, the policy selection system 100 can include separate devices for performing one or more of the operations associated with the policy selection system 100. In one or more implementations, each of the devices can be in communication with the content server 202 or the ad server 203 via the network 206. In an alternative implementation, at least some of the devices in the policy selection system 100 may not be in communication with the content server 202 or the ad server 203 via the network 206, but may communicate with a specific device in the policy selection system 100 that is in communication with the ad server 203. In still further embodiments, the content server 202 or the ad server 203 can implement the policy selection system 100.
In alternative embodiments, the environment 200 can include a third party analytics system. In particular, the analytics system can be a third-party system that facilitates the collection of data, such as user data or ad recommendation policy information. In such instances, one or more content servers 202 can establish one or more accounts with the policy selection system 100 and the analytics system. The account(s) may allow the policy selection system 100 to obtain information from the analytics system. For example, a third party analytics system can monitor information regarding visitors to a website and provide potential consumer information to the policy selection system 100 for one or more accounts. The policy selection system can actively provide ad recommendation policies based on user data or other information. Similarly, a third party analytics system can actively collect and monitor user data for the policy selection system 100. The policy selection system 100 can provide updated ad recommendation policies as additional user data informs the selection process. In other embodiments, the policy selection system 100 can track user data or ad recommendation policy information directly.
In alternative embodiments, the content server 202 or the ad server 203 can employ the selected ad recommendation policy. In particular, the policy selection system 100 can provide an ad recommendation policy to the content server 202 or the ad server 203, which can then monitor user information and select advertising content for individual consumers in accordance with the ad recommendation policy. The policy selection system 100 can adjust the ad recommendation policy on demand for any number of reasons, such as a client request, updated user data, new advertising content, and so forth.
As discussed above, the policy selection system 100 can allow a marketer to select an ad recommendation policy that is optimized with regard to expected results, while also ensuring that the expected results do not fall below a threshold within a given confidence level. Thus, the policy selection system 100 can prevent short-term losses due to variance in an ad recommendation policy. One will appreciate in light of the disclosure provided herein that the policy selection system 100 can trade some potential lifetime value or return for lower risk. In other words, an optimized ad recommendation policy selected by the policy selection system 100 can project a click-through rate less than that of an ad recommendation policy optimized for lifetime value without regard to risk.
For example,
As discussed, in implementing an ad recommendation policy, marketers often seek to maximize the mean return value. In risk-sensitive applications, however, a client may also need to control for the variation or risk associated with an ad recommendation policy. The amount of variation can be reflected graphically by the width of the distribution curve. The risk-sensitive alternative distribution 304 represents a sample return distribution for an ad recommendation policy in accordance with one or more embodiments of the present disclosure. The risk sensitive alternative 304 maximizes expected mean returns while also considering risk values associated with the particular client application.
One or more embodiments of the present disclosure control for risk by allowing a user to select a risk-tolerance level, β 306, and a confidence level α 308, such that the user can determine, to a confidence level α, that the expected results would not fall below the risk-tolerance level β. Graphically, the risk-tolerance level β 306 can be represented by a vertical line of results below which the client is unwilling, or unable, to fall. A return below this level would be unacceptable, based on the applicable level of risk in a particular application. Graphically, the confidence level α 308 can be represented by the area of the distribution falling within the selected risk-tolerance level. One or more embodiments measure or detect the area of the distribution falling within the selected risk-tolerance value and seek to identify an optimal ad recommendation policy that maximizes the expected return while also maintaining the area falling within the selected risk-tolerance value.
As shown, the probability of returning a result outside the selected risk-tolerance level can be represented as 1−α 310. The area falling outside the selected risk-tolerance level 310 can also be referred to as the tail of the distribution curve or an unsatisfactory tail of the distribution curve. One or more embodiments detect or measure the size of the tail and seek to minimize or maintain the size of the tail within risk-tolerance thresholds while maximizing the mean. As shown, the probability, 1−α 310, that the expected results will fall below the designated risk-tolerance level, β, is much smaller utilizing a risk-sensitive approach.
A number of methods exist for measuring or detecting the size of the tail for a given distribution, for example, the number of standard deviations or a value-at-risk measure (VaR). One measure, conditional value-at-risk (CVaR), is a measure of the mean of the tail distribution. One or more embodiments measure or detect CVaR and utilize CVaR to control variance in selecting an ad recommendation policy. Other embodiments could utilize other variance measures in selecting an ad recommendation policy.
In some applications, a client may seek to avoid only certain kinds of variation, or risk, in selecting an ad recommendation policy. For example, a marketing client seeking to maximize the rate of clicks for advertising content may lose business if the click rate drops below a certain level, but will generally not be concerned if the click rate is unexpectedly high. In such a scenario, the client only seeks to avoid variation in the lower range of results. Graphically, this range of expected results is reflected by a region of the return distribution that falls below the client's risk threshold.
In such a non-Gaussian distribution, the goal may be to minimize the size of the unsatisfactory tail. For example, a marketer may seek an ad recommendation policy that reduces the risk of results that fall below a certain threshold while maximizing the expected return. Although
One or more embodiments allow a user to select a risk-tolerance level, β 406, and a confidence level α 408, such that the user can determine, to a confidence level α, that the expected results would not fall below the risk-tolerance level β, even as applied to a non-normal probability distribution. As discussed above, the risk-tolerance level β 406 can be reflected graphically as a threshold line for risk that a user seeks to stay above. Similarly, a confidence level α 408 can be represented graphically by the area of the distribution falling within the selected risk-tolerance level. One or more embodiments measure or detect the area of the distribution falling within the selected risk-tolerance level and select an ad recommendation policy that maximizes the return while maintaining an acceptable risk tolerance and confidence interval.
The probability of a result falling outside the selected risk-tolerance level is shown as 1−α 410 (often referred to as the tail, or unsatisfactory tail). One or more embodiments measure or detect the size of the unsatisfactory tail and utilize that measure to select a more optimal policy. As shown, the probability, 1−α 410, that the expected results will fall below the designated risk-tolerance level, β, is much smaller utilizing a risk-sensitive approach.
Although normal distribution curves are symmetrical, meaning a reduction in variation on one side of the curve may result in a symmetrical reduction on the opposite side of the curve, non-normal distributions are not symmetrical. Accordingly, embodiments can detect unsatisfactory tails and seek to identify ad recommendation policies that reduce the size of the tail and increase the size of the desired reward. As discussed, one or more embodiments utilize the CVaR measure to detect and measure variance in identifying an optimal ad recommendation policy. CVaR is a measure of the mean of the α-tail distribution. CVaR is representative of the tail, more stable, and easier to work with numerically in non-normal distributions.
One or more embodiments come to an optimal ad recommendation policy by identifying the CVaR of a random variable (or more than one random variable) and the expectation of the variable(s). The policy selection system 100 can seek to optimize by identifying an ad recommendation policy that maximizes the expectation subject to the CVaR being less than a certain risk tolerance at a certain confidence level. Graphically, the optimization problem equates to identifying the mean of the tail, identifying the expected return value of the distribution curve, accessing ad recommendation policies, analyzing the other ad recommendation policies based on their expected return and mean tail value, and then selecting a different ad recommendation policy with a higher expected return on the distribution curve and a mean tail value within the risk-tolerance level line at a certain confidence level.
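Stated formally (a sketch of the formulation implied above, writing R^θ for the random return under the policy parameterized by θ, D^θ for the corresponding loss, β for the risk-tolerance value, and α for the confidence level):

```latex
\max_{\theta}\; \mathbb{E}\big[R^{\theta}\big]
\quad \text{subject to} \quad
\mathrm{CVaR}_{\alpha}\big(D^{\theta}\big) \;\le\; \beta .
```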
Consider, for example, a user that visits a website. The policy selection system 100 can generate or receive a vector representing the characteristics of the user; such as, demographics, location, recency of visits, frequency, etc. The policy selection system 100 takes the vector as input and can map the vector to a distribution over the range of possible actions. The client or user can inform the policy selection system 100 regarding the type of distribution (for example, Gaussian or non-Gaussian) and the applicable parameters. The policy selection system 100 chooses the appropriate policy according to the probability of the expected return and variance.
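A minimal sketch of such a mapping (assuming, purely for illustration, a softmax policy over a discrete set of candidate advertisements; the array shapes and names are hypothetical):

```python
import numpy as np

def action_distribution(theta, user_features):
    """Map a user feature vector to a probability distribution over candidate ads.

    theta: array of shape (num_ads, num_features) -- the policy parameters.
    user_features: array of shape (num_features,) -- e.g., demographics, recency, frequency.
    Returns a vector of probabilities, one per candidate advertisement.
    """
    theta = np.asarray(theta, dtype=float)
    scores = theta @ np.asarray(user_features, dtype=float)  # one score per candidate ad
    scores -= scores.max()                                   # numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

def recommend_ad(theta, user_features, rng=np.random.default_rng()):
    """Sample an advertisement index according to the policy's action distribution."""
    return rng.choice(len(theta), p=action_distribution(theta, user_features))
```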
One or more embodiments solve the optimization problem by estimating gradients that point toward the optimal ad recommendation policy. The gradient estimates incorporate parameters that constrain the optimization problem to the particular scenario at issue. For example, some embodiments utilize a gradient estimate directed to variance parameters. In particular, the gradient estimate directed to variance parameters can constrain the optimization so that the policy selection system 100 selects policies with distributions that satisfy the applicable risk tolerance threshold and confidence level. One or more embodiments receive or identify one or more variance parameters and update those parameters in the descent direction in selecting an ad recommendation policy.
Similarly, some embodiments estimate a gradient with regard to policy parameters. In particular, policy parameters could include all available ad recommendation policies, all possible ad recommendation policy combinations, possible percentile combinations of ad recommendation policies, or desired policy objectives, such as maximizing a particular reward. For example, the ad recommendation policy parameters can comprise ad recommendation policies A and B, while specifying that the policy selection system can generate a final ad recommendation policy consisting of any combination of A and B ranging from 48 to 52 percent of A and 48 to 52 percent of B such that the combination maximizes the number of clicks within the given risk parameters. One or more embodiments identify policy parameters and update those parameters in the descent direction to identify an optimal ad recommendation policy. The policy selection system 100 can identify policy parameters in a number of ways. For example, a user/marketer can provide a set of policies. The user/marketer operating the policy selection system 100 can decide on a set of policies. One or more embodiments assist in deciding on a range of policies or parameters to consider.
Moreover, some embodiments utilize a gradient estimate directed to Lagrangian parameters, or a Lagrangian multiplier. Specifically, the Lagrangian parameters can reflect the strategy for obtaining an optimal solution. Lagrangian parameters can reflect the trade-off between optimizing the objective and satisfying the constraints. Moreover, Lagrangian parameters can comprise the conditions for optimality in the constrained problem. For example, to identify a maximum return value in a function subject to some independent condition, it is possible to identify certain conditions. The Lagrangian parameters can reflect these conditions and a strategy for locating the optimal solution. By estimating gradients subject to the Lagrangian parameters, one or more embodiments select an ad recommendation policy with conditions more likely to solve the optimization problem. One or more embodiments identify Lagrangian parameters and update those parameters in the descent direction in selecting an optimal ad recommendation policy.
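One standard way to make this concrete (a sketch using a Lagrangian relaxation together with the CVaR expression given earlier; ν is the risk parameter and λ ≥ 0 the constraint parameter):

```latex
L(\theta, \nu, \lambda)
  \;=\; -\,\mathbb{E}\big[R^{\theta}\big]
  \;+\; \lambda \Big( \nu + \tfrac{1}{1-\alpha}\,
        \mathbb{E}\big[(D^{\theta} - \nu)^{+}\big] \;-\; \beta \Big).
```

Under such a relaxation, descending in θ and ν while ascending in λ drives the iterates toward a policy with high expected return whose conditional value at risk satisfies the risk-tolerance value β; the three gradient estimates described above then correspond to the partial derivatives of L with respect to θ, ν, and λ.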
One or more embodiments estimate separate gradients with regard to the three sets of parameters described above: risk parameters, policy parameters, and constraint (e.g., Lagrangian) parameters. Other embodiments can estimate gradients with regard to other parameters depending on the particular application. The policy selection system 100 can estimate gradients applicable to any number or type of risk-sensitive functions, including ad recommendation policies for a variety of applications, advertising content, or consumer groups, and also inclusive of other complex decision-making processes (such as the stopping problem described below).
Estimating gradients with regard to one or more parameters allows the policy selection system 100 to operate without the benefit of knowing the precise dynamics of the system. Thus, although
Again, the risk-neutral distribution 502 tends to have a higher mean value, and higher range, than the risk-sensitive distribution 504 calculated to control for a risk-tolerance value, β 506. This reflects a typical expected result in applying one or more embodiments to a risk-sensitive decision-making problem where the precise system model is unknown. In exchange for a reduction in expected return, a user can control the risk associated with selection of a recommendation policy even without an understanding of the exact interaction amongst all variables.
Even though the precise system dynamics may be unknown, one or more embodiments of the policy selection system 100 can identify an optimal ad recommendation policy by estimating the gradient with regard to various parameters using sampling methods, such as Monte Carlo sampling. Some embodiments select a current policy, generate samples from sample data, and create trajectories following the current policy in light of the sample data. The sample data can come from a number of sources and reflect data correlating a number of variables. For example, the policy selection system 100 can track individuals that visit a website and interact with advertising content, gather available information regarding the individual visitors, and correlate that information with how the user interacted with the advertising content or website. Alternatively, a client could provide the data or a third-party system could gather and provide the data.
The policy selection system 100 can generate samples from the data to calculate trajectories. Each of the trajectories reflects an anticipated result from following the current policy based on a sample data point. The policy selection system 100 can create trajectories from the data samples and then utilize those trajectories to assist in selecting an optimal ad recommendation policy. Specifically, the policy selection system 100 can utilize the trajectories to estimate gradients and update parameters associated with the gradient estimates, pointing toward an optimal ad recommendation policy. In light of the updated parameters, the policy selection system 100 can select a new policy based on the gradient estimates and updated parameters. One or more embodiments update parameters after observing several trajectories. The number of trajectories selected can vary depending on the embodiment, the application, and the available data.
In one or more embodiments, the policy selection system 100 can repeatedly select an ad recommendation policy, sample data, create trajectories, estimate gradients, and select a more optimal policy. Specifically, after identifying a new policy, the policy selection system 100 can create additional trajectories for the new policy by generating samples from the sample data. The disclosed systems and methods can then utilize the additional trajectories to estimate gradients. The gradient estimates can update the applicable parameters, pointing toward a more optimal ad recommendation policy. The policy selection system 100 can repeat this process until converging to an optimal ad recommendation policy. Thus, the present disclosure allows a user to identify an optimal ad recommendation policy without knowing a system model by using sample data to inform the policy selection system.
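For illustration, the following Python sketch shows the sample-and-roll-out loop described above. The policy and step interfaces are hypothetical stand-ins for a simulator built from logged user interactions, and the embodiment is not limited to this structure.

import numpy as np

def generate_trajectories(policy, step, x0, num_trajectories, horizon):
    # Roll out the current policy to create trajectories from sample data.
    # policy(x) returns an action for state x; step(x, a) returns
    # (next_state, cost, done). Both are hypothetical stand-ins for a
    # simulator built from logged user interactions.
    trajectories = []
    for _ in range(num_trajectories):
        x, traj = x0, []
        for _ in range(horizon):
            a = policy(x)
            x_next, cost, done = step(x, a)
            traj.append((x, a, cost))
            x = x_next
            if done:
                break
        trajectories.append(traj)
    return trajectories

def discounted_loss(traj, gamma):
    # D(xi) = sum_k gamma^k * c(x_k, a_k) for one trajectory
    return sum((gamma ** k) * c for k, (_, _, c) in enumerate(traj))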
Because of the complexity of the optimization problem for selecting a policy, one or more embodiments converge the various sets of parameters toward a solution simultaneously (i.e., converge one set of parameters toward a solution while also converging another set of parameters toward a solution), rather than iteratively (i.e., by solving for one set of parameters before seeking to solve for another set of parameters). Moreover, one or more embodiments update some parameters on a faster time scale than others. Embodiments that utilize varying time scales can converge to an optimal solution more easily in some applications.
Because one or more embodiments seek to maximize expected return subject to a risk threshold, converging on parameters that fail to meet a risk-tolerance would be counter-productive. Accordingly, the policy selection system 100 can seek to estimate the gradient related to risk parameters on the fastest time scale. If the risk parameter converges, the risk constraints are more likely to be satisfied. For example, in embodiments utilizing CVaR as a risk measure, convergence of risk parameters on a faster time scale can help ensure that CVaR falls within the applicable risk constraints.
In addition to satisfying constraints related to variance, the policy selection system 100 can seek to come to a decision or policy that maximizes expected return. Accordingly, one or more embodiments estimate the gradient related to the policy parameters on an intermediate time scale. As the policy selection system 100 converges on risk and policy parameters, the policy selection system 100 can converge constraint or Lagrangian parameters. Thus, the policy selection system 100 can estimate the gradient related to Lagrangian parameters on the slowest time scale. Embodiments utilizing this approach may converge on policies meeting the desired reward objectives subject to the constraints on variance.
As outlined in greater detail below, applying multiple time-scales assists the policy gradient algorithm in converging efficiently on an ad recommendation policy subject to the applicable risk and system constraints. Accordingly, some embodiments choose a policy, generate trajectories based on samples, estimate gradients, change parameters simultaneously (although on different time scales), identify another policy, and repeat as necessary or desired. This process can be repeated until the user wishes to stop the process, for example because a client is satisfied with the selected ad recommendation policy, or until the method has converged to an optimal ad recommendation policy. Alternatively, the process can repeat only a subset of steps, such as the step of estimating gradients, or the steps of estimating gradients and selecting a new policy based on those gradients. Other embodiments employ fewer than all of these steps, or perform these steps in a different order.
The number and type of gradient estimates, however, can vary depending on the particular embodiment. Moreover, some embodiments may allow the client or user to adjust the time scale applicable to each parameter, depending on the particular application and data. Still further, in some embodiments the policy selection system 100 may or may not utilize multiple time scales. For example, in some applications it may be more efficient for the policy selection system 100 to apply the fastest time scale to a different parameter. Similarly, in some embodiments, the policy selection system 100 can set a greater separation between time scales than in other use cases.
Furthermore, one or more embodiments employ projection operators to help ensure convergence of the gradient estimates in a policy gradient algorithm. The projection operators project vectors onto the gradient estimates. This can help to ensure that the policy gradient algorithm converges. The particular projection operators may differ depending on the embodiment and particular application. Some embodiments may not utilize projection operators to help converge to an optimal policy.
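As a minimal sketch, the projection operators used in one or more embodiments can be as simple as Euclidean projections onto a box and an interval. The bounds below are illustrative (they happen to match the experimental settings reported later in this disclosure) and are not required values.

import numpy as np

def project_box(theta, low=-60.0, high=60.0):
    # Projection onto a compact, convex box for the policy parameter.
    return np.clip(theta, low, high)

def project_interval(lam, lam_max=1000.0):
    # Projection onto [0, lam_max] for the Lagrangian parameter.
    return float(np.clip(lam, 0.0, lam_max))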
As described above, one or more embodiments calculate trajectories, estimate gradients, and identify an ad recommendation policy according to a policy gradient algorithm. One embodiment of such an algorithm is described below. The unit of observation in this algorithm is a system trajectory generated by following the current policy. At each iteration, the algorithm generates N trajectories by following the current policy, uses them to estimate the gradients, and then uses these estimates to update the parameters, resulting in parameters pointing toward a new ad recommendation policy.
For purposes of the present embodiment, consider an environment modeled as a Markov decision process (MDP) defined as a tuple M=(𝒳, 𝒜, C, P, P0), where 𝒳={1, . . . , n} and 𝒜={1, . . . , m} are the state and action spaces; C(x, a)∈[−Cmax, Cmax] is the bounded cost random variable whose expectation is denoted by c(x, a)=𝔼[C(x, a)]; P(⋅|x, a) is the transition probability distribution; and P0(⋅) is the initial state distribution. For simplicity, the present embodiment assumes that the system has a single initial state x0, i.e., P0(x)=1{x=x0}. It will be recognized that the disclosed embodiment can be easily extended to the case in which the system has more than one initial state.
A stationary policy μ(⋅|x) is a probability distribution over actions, conditioned on the current state. In the policy gradient algorithm, this embodiment defines a class of parameterized stochastic policies {μ(⋅|x; θ), x∈𝒳, θ∈Θ⊆ℝκ}.
In the present embodiment, Z is a bounded-mean random variable, i.e., 𝔼[|Z|]<∞, with the cumulative distribution function F(z)=ℙ(Z≤z). At a confidence level α∈(0,1), the present embodiment defines the value-at-risk (VaR) as
VaRα(Z)=min {z|F(z)≥α}.
The γ-discounted visiting distribution of state x and state-action pair (x, a) under policy μ is denoted:
dγμ(x|x0)=(1−γ)Σk=0∞γkℙ(xk=x|x0=x0;μ) and
πγμ(x,a|x0)=dγμ(x|x0)μ(a|x).
Returning to the definition of VaR above, the minimum is attained because F is non-decreasing and right-continuous in z. When F is continuous and strictly increasing, VaRα(Z) is the unique z satisfying F(z)=α; otherwise, the equation F(z)=α can have no solution or a whole range of solutions. VaR suffers from being unstable and difficult to work with numerically when Z is not normally distributed, which is often the case, as loss distributions tend to exhibit fat tails or empirical discreteness. Moreover, VaR is not a coherent risk measure and, more importantly, does not quantify the losses that might be suffered beyond its value in the α-tail of the distribution of Z.
The CVaR is the mean of the α-tail distribution of Z. Where there is no probability atom at VaRα(Z), CVaRα(Z) has a unique value defined as:
CVaRα(Z)=[Z|Z≥VaRα(Z)].
As established by Rockafellar and Uryasev, see R. Rockafellar and S. Uryasev, Optimization of Conditional Value-at-Risk, JOURNAL OF RISK 26:1443-1471 (2002)—incorporated herein by reference in its entirety—it will be recognized that:
CVaRα(Z)=minν∈ℝ Hα(Z,ν), where Hα(Z,ν):=ν+(1/(1−α))𝔼[(Z−ν)+]. (1)
Here (x)+=max(x, 0) represents the positive part of x. As a function of ν, Hα(Z, ν) is finite, convex, and continuous.
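The following Python sketch numerically illustrates this relationship on synthetic loss samples: the minimum of Hα(Z, ν) over ν closely matches the empirical CVaR computed directly as the mean of the α-tail. The sample distribution and the grid search over ν are assumptions made purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(20_000)          # synthetic loss samples (an assumption)
alpha = 0.95

var_alpha = np.quantile(z, alpha)         # empirical VaR_alpha
cvar_alpha = z[z >= var_alpha].mean()     # empirical CVaR_alpha: mean of the alpha-tail

def h_alpha(nu):
    # H_alpha(Z, nu) = nu + E[(Z - nu)+] / (1 - alpha), estimated on the samples
    return nu + np.maximum(z - nu, 0.0).mean() / (1.0 - alpha)

nus = np.linspace(z.min(), z.max(), 1001)
cvar_via_h = min(h_alpha(nu) for nu in nus)
print(var_alpha, cvar_alpha, cvar_via_h)  # the last two values should nearly agree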
The present embodiment further defines the loss of a state x, or the loss of a state-action pair (x, a), as the sum of costs encountered by the agent when it starts at state x, or state-action pair (x, a), and then follows policy μ. This can be articulated as follows:
Dθ(x)=Σk=0∞γkC(xk,ak)|x0=x,μ
Dθ(x,a)=Σk=0∞γkC(xk,ak)|x0=x,a0=a,μ.
Furthermore, the expected values of these two random variables are the value and action-value functions of policy μ, expressed as:
Vθ(x)=𝔼[Dθ(x)] and
Qθ(x,a)=𝔼[Dθ(x,a)].
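As a minimal sketch, these expectations can be approximated by Monte Carlo averaging over sampled trajectories, reusing the hypothetical generate_trajectories and discounted_loss helpers sketched earlier; the interfaces remain stand-ins rather than a required implementation.

import numpy as np

def estimate_value(policy, step, x0, gamma=0.95, num_trajectories=1000, horizon=15):
    # Monte Carlo estimate of V_theta(x0) = E[D_theta(x0)], reusing the
    # generate_trajectories and discounted_loss helpers sketched earlier.
    trajs = generate_trajectories(policy, step, x0, num_trajectories, horizon)
    return float(np.mean([discounted_loss(t, gamma) for t in trajs]))

def estimate_action_value(policy, step, x0, a0, gamma=0.95, num_trajectories=1000, horizon=15):
    # Monte Carlo estimate of Q_theta(x0, a0): force the first action to be a0,
    # then follow the policy for the remaining steps.
    total = 0.0
    for _ in range(num_trajectories):
        x, a, loss = x0, a0, 0.0
        for k in range(horizon):
            x_next, cost, done = step(x, a)
            loss += (gamma ** k) * cost
            x = x_next
            if done:
                break
            a = policy(x)
        total += loss
    return total / num_trajectories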
CVaR Optimization Problem
The goal in the standard discounted formulation is to find an optimal policy θ*=argminθ Vθ(x0). For CVaR optimization as utilized in the present embodiment, and for a given confidence level α∈(0,1) and loss tolerance β∈ℝ, the optimization problem can be expressed as
minθ Vθ(x0) subject to CVaRα(Dθ(x0))≤β. (2)
Given the discussion of CVaR and Hα above (see Equation 1), the optimization problem (Equation 2) can be rewritten:
minθ,ν Vθ(x0) subject to Hα(Dθ(x0),ν)≤β. (3)
It will be recognized that by employing Lagrangian relaxation procedures, the optimization problem (Equation 3) can be converted to the following unconstrained problem:
maxλ≥0 minθ,ν L(θ,ν,λ):=Vθ(x0)+λ(ν+(1/(1−α))𝔼[(Dθ(x0)−ν)+]−β), (4)
where λ is the Lagrange multiplier. The goal is to find the saddle point of L(θ,ν,λ), i.e., a point (θ*,ν*,λ*) that satisfies:
L(θ,ν,λ*)≥L(θ*,ν*,λ*)≥L(θ*,ν*,λ),∀θ,ν,∀λ≥0.
This can be achieved by descending in θ and ν and ascending in λ using the gradients of L(θ,ν,λ) with regard to θ, ν, and λ.
Calculating the Gradients
The gradients of L(θ,ν,λ) with regard to θ, ν, and λ are each calculated below. A trajectory generated by following the policy θ can be represented as
ξ={x0,a0,x1,a1, . . . ,xT−1,aT−1,xT}
where x0=x0 and xT is usually a terminal state of the system. After xk visits the terminal state, it enters a recurrent sink state xS at the next time step, incurring zero cost, i.e., C(xS, a)=0, ∀a∈𝒜. Time index T is the stopping time of the MDP. Since the transition is stochastic, T is a non-deterministic quantity. It is assumed that the policy μ is proper. This means that, with probability 1, the MDP exits the transient states and hits xS (and stays in xS) in finite time T. For simplicity, assume that the agent incurs zero cost at the terminal state, although analogous results for the general case with a non-zero terminal cost can be derived using identical arguments. The loss of ξ can be defined as
D(ξ)=Σk=0T−1γkc(xk,ak)
and the probability of ξ can be defined as
ℙθ(ξ)=P0(x0)Πk=0T−1μ(ak|xk;θ)P(xk+1|xk,ak).
It will be appreciated that
∇θ log ℙθ(ξ)=Σk=0T−1∇θ log μ(ak|xk;θ).
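This identity is what makes trajectory-based gradient estimation practical: the gradient of the log-trajectory-probability reduces to a sum of per-step policy score functions. The following Python sketch checks the per-step score for a Boltzmann (softmax) policy of the kind used in the experiments later in this disclosure; the feature matrix, dimensions, and tolerance are arbitrary illustrative choices.

import numpy as np

def boltzmann_probs(theta, features):
    # features: (num_actions, dim) matrix of phi(x, a) for one fixed state x
    logits = features @ theta
    logits -= logits.max()                 # numerical stabilization only
    p = np.exp(logits)
    return p / p.sum()

def score(theta, features, a):
    # grad_theta log mu(a|x; theta) = phi(x, a) - sum_b mu(b|x; theta) phi(x, b)
    p = boltzmann_probs(theta, features)
    return features[a] - p @ features

rng = np.random.default_rng(1)
features = rng.standard_normal((4, 3))     # 4 actions, 3-dimensional features (illustrative)
theta = rng.standard_normal(3)
a = 2

eps = 1e-6                                 # central finite-difference check
numerical = np.array([
    (np.log(boltzmann_probs(theta + eps * e, features)[a])
     - np.log(boltzmann_probs(theta - eps * e, features)[a])) / (2 * eps)
    for e in np.eye(3)
])
print(np.allclose(numerical, score(theta, features, a), atol=1e-5))   # True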
Gradient of L(θ,ν,λ) with Regard to θ:
By expanding the expectations in the definition of the objective function L(θ,ν,λ) in Equation 4, the following results:
L(θ,ν,λ)=Σξℙθ(ξ)D(ξ)+λ(ν+(1/(1−α))Σξℙθ(ξ)(D(ξ)−ν)+−β). (5)
By taking the gradient with respect to θ:
∇θL(θ,ν,λ)=Σξ∇θℙθ(ξ)D(ξ)+(λ/(1−α))Σξ∇θℙθ(ξ)(D(ξ)−ν)+.
This gradient can be rewritten as
∇θL(θ,ν,λ)=Σξℙθ(ξ)∇θ log ℙθ(ξ)[D(ξ)+(λ/(1−α))(D(ξ)−ν)1{D(ξ)≥ν}].
Sub-Differential of L(θ,ν,λ) with Regard to ν:
From the definition of L(θ,ν,λ), it will be appreciated that L(θ,ν,λ) is a convex function in ν for any fixed θ∈Θ. Note that for every fixed ν and ν′
(D(ξ)−ν′)+−(D(ξ)−ν)+≥g·(ν′−ν)
where g is any element in the set of sub-derivatives:
Since L(θ,ν,λ) is finite-valued for any ν∈ℝ, by the additive rule of sub-derivatives:
In particular, for q=1, the sub-gradient of L(θ,ν,λ) with regard to ν can be written:
∂νL(θ,ν,λ)∋λ−(λ/(1−α))Σξℙθ(ξ)1{D(ξ)≥ν}. (6)
Gradient of L(θ,ν,λ) with Regard to λ:
Because L(θ,ν,λ) is a linear function in λ, it will be appreciated that the gradient of L(θ,ν,λ) with regard to λ can be expressed as follows:
∇λL(θ,ν,λ)=ν−β+(1/(1−α))Σξℙθ(ξ)(D(ξ)−ν)1{D(ξ)≥ν}. (7)
Accordingly, one way to find the desired saddle point discussed above is to descend in θ and ν and ascend in λ using the gradients of L(θ,ν,λ) with regard to θ, ν, and λ as follows:
This assumes that there exists a policy μ(⋅|⋅; θ) such that CVaRα(Dθ(x0))≤β (feasibility assumption). It will be appreciated that there exists a deterministic history-dependent optimal policy for CVaR optimization. Moreover, note that this policy does not depend on the complete history, but only on the current time step k, current state of the system xk, and accumulated discounted cost, Σi=0kγiC(xi, ai).
Optimization Algorithm
Algorithm 1 below contains the pseudo-code for one embodiment solving the CVaR optimization problem described above (see Equation 4). What appears inside the parentheses on the right-hand-side of the update equations are the estimates of the gradients of L(θ,ν,λ) (estimates of Equations 8-10) with regard to θ, ν, and λ, i.e., the gradients related to the policy parameter (θ), risk parameter (ν), and Lagrangian parameter (λ). Γθ is an operator that projects a vector to the closest point in a compact and convex set Θ⊂ℝκ,
and Γλ is a projection operator to [0, λmax]. These projection operators help ensure convergence of the algorithm. The algorithm also contains step-size schedules, ζ. The step-size schedules ensure that the risk parameter ν update is on the fastest time-scale {ζ3(i)}, the policy parameter θ update is on the intermediate time-scale {ζ2(i)}, and the Lagrangian parameter λ update is on the slowest time scale {ζ1(i)}. For purposes of the algorithms, the following assumptions apply:
Assumption 1:
For any state-action pair (x, a), μ(a|x; θ) is continuously differentiable in θ and ∇θμ(a|x; θ) is a Lipschitz function in θ for every a∈𝒜 and x∈𝒳.
Assumption 2:
The Markov chain induced by any policy θ is irreducible and aperiodic.
Assumption 3:
The step size schedules satisfy:
Σiζ1(i)=Σiζ2(i)=Σiζ3(i)=∞ (11)
Σiζ1(i)2,Σiζ2(i)2,Σiζ3(i)2<∞ (12)
ζ1(i)=o(ζ2(i)),ζ2(i)=o(ζ3(i)) (13)
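As one hedged example, polynomially decaying schedules ζ(i)=1/(i+1)^p with exponents p∈(1/2, 1] satisfy Equations 11 and 12, and choosing strictly decreasing exponents across the three schedules provides the separation required by Equation 13. The particular exponents below are illustrative, not prescribed.

def zeta1(i):
    # Lagrangian parameter update: slowest time scale
    return 1.0 / (i + 1)

def zeta2(i):
    # policy parameter update: intermediate time scale
    return 1.0 / (i + 1) ** 0.8

def zeta3(i):
    # risk parameter update: fastest time scale
    return 1.0 / (i + 1) ** 0.6

# Each exponent lies in (1/2, 1], so the sums in Equation 11 diverge and those in
# Equation 12 converge; zeta1(i)/zeta2(i) -> 0 and zeta2(i)/zeta3(i) -> 0, so
# Equation 13 holds as well.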
This results in a three time-scale stochastic approximation algorithm, as shown in Algorithm 1.
Algorithm 1
Input: parameterized policy μ(· | ·; θ), confidence level α, and loss tolerance β.
Initialization: policy parameter θ = θ0, risk parameter ν = ν0, and Lagrangian parameter λ = λ0
for i = 0, 1, 2, . . . do
for j = 1, 2, . . . , N do
Generate trajectory ξj,i by starting at x0 = x0 and following the current policy θi.
end for
Update the risk parameter νi, the policy parameter θi, and the Lagrangian parameter λi using the corresponding gradient estimates (see Equations 8-10 and the accompanying discussion), the step-size schedules ζ3(i), ζ2(i), and ζ1(i), and the projection operators.
end for
return parameters ν, θ, and λ.
As shown by the foregoing pseudo code, the policy gradient algorithm (i.e., Algorithm 1) estimates the gradient of a performance measure with respect to the policy parameter θ from observed system trajectories. Then the policy gradient algorithm improves the policy by adjusting the parameters θ, ν, and λ in the direction of the gradients.
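To make the loop concrete, the following Python sketch implements a three time-scale primal-dual update in the spirit of Algorithm 1. Because the exact update equations (Equations 8-10) are presented above only in summary, the gradient estimates below are written in the standard sampled likelihood-ratio form implied by the gradients of L(θ,ν,λ) derived earlier; the environment and policy interfaces, the projection interval used for ν, and the step-size exponents are all assumptions for illustration, not a definitive implementation.

import numpy as np

def cvar_policy_gradient(env_step, sample_action, log_prob_grad, x0,
                         alpha=0.95, beta=3.0, gamma=0.95, num_iterations=2000,
                         N=100, horizon=20, theta_bound=60.0, lam_max=1000.0,
                         dim=3, seed=0):
    # Three time-scale primal-dual sketch in the spirit of Algorithm 1.
    # env_step(x, a, rng) -> (next_state, cost, done), sample_action(x, theta, rng) -> a,
    # and log_prob_grad(x, a, theta) -> grad_theta log mu(a|x; theta) are hypothetical
    # stand-ins for the simulator and the parameterized policy.
    rng = np.random.default_rng(seed)
    theta, nu, lam = np.zeros(dim), 0.0, 1.0
    for i in range(num_iterations):
        zeta1 = 1.0 / (i + 1)             # slowest time scale: lambda
        zeta2 = 1.0 / (i + 1) ** 0.8      # intermediate time scale: theta
        zeta3 = 1.0 / (i + 1) ** 0.6      # fastest time scale: nu
        losses, scores = [], []
        for _ in range(N):                # generate N trajectories under the current policy
            x, loss, score = x0, 0.0, np.zeros(dim)
            for k in range(horizon):
                a = sample_action(x, theta, rng)
                score += log_prob_grad(x, a, theta)
                x, cost, done = env_step(x, a, rng)
                loss += (gamma ** k) * cost
                if done:
                    break
            losses.append(loss)
            scores.append(score)
        losses, scores = np.array(losses), np.array(scores)
        tail = (losses >= nu).astype(float)
        # Sampled likelihood-ratio gradient estimates (an assumed form of Equations 8-10)
        g_nu = lam * (1.0 - tail.mean() / (1.0 - alpha))
        g_theta = (scores * losses[:, None]).mean(axis=0) \
            + (lam / (1.0 - alpha)) * (scores * (tail * (losses - nu))[:, None]).mean(axis=0)
        g_lam = nu - beta + ((losses - nu) * tail).mean() / (1.0 - alpha)
        # Descend in nu and theta, ascend in lambda, with projections
        nu = float(np.clip(nu - zeta3 * g_nu, losses.min(), losses.max()))
        theta = np.clip(theta - zeta2 * g_theta, -theta_bound, theta_bound)
        lam = float(np.clip(lam + zeta1 * g_lam, 0.0, lam_max))
    return theta, nu, lam

In this sketch the ν-update receives the largest step size and the λ-update the smallest, mirroring the fastest, intermediate, and slowest time scales discussed above; projecting ν onto the sampled loss range is a pragmatic stand-in for a fixed compact interval.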
Convergence of the Policy Gradient Algorithm
One or more embodiments that utilize Algorithm 1 will converge to a local saddle point of the risk-sensitive objective function L(θ,ν,λ). The following proof establishes convergence of the policy gradient algorithm described above.
Theorem 1:
Suppose λ*∈[0,λmax). Then the sequence of (θ,ν,λ)-updates in Algorithm 1 converges to a (local) saddle point (θ*,ν*,λ*) of the objective function L(θ,ν,λ) almost surely, i.e., it satisfies L(θ,ν,λ*)≥L(θ*,ν*,λ*)≥L(θ*,ν*,λ) for all (θ,ν) in a neighborhood B(θ*,ν*)(r),
for some r>0 and ∀λ∈[0,λmax]. Note that B(θ*,ν*)(r) represents a hyper-dimensional ball centered at (θ*,ν*) with radius r.
Since ν converges on a faster timescale than θ and λ, the ν-update can be rewritten by assuming (θ,λ) as invariant quantities, i.e.,
Consider the continuous time dynamics of ν defined using differential inclusion
and Γν is the Euclidean projection operator of ν to
In general Γν(ν) is not necessarily differentiable. Υν[K(ν)] is the left directional derivative of the function Γν(ν) in the direction of K(ν). By using the left directional derivative Υν[−g(ν)] in the sub-gradient descent algorithm for ν, the gradient will point in the descent direction along the boundary of the feasible set for ν whenever the ν-update hits its boundary.
Furthermore, since ν converges on a faster timescale than θ, and λ is on the slowest time-scale, the θ-update can be rewritten using the converged ν*(θ) and assuming λ as an invariant quantity, i.e.,
Consider the continuous time dynamics of θ∈Θ:
and Γθ is the Euclidean projection operator of θ to Θ, i.e., Γθ(θ)=argmin{circumflex over (θ)}∈Θ ½∥θ−{circumflex over (θ)}∥22. Similar to the analysis of ν, Υθ[K(θ)] is the left directional derivative of the function Γθ(θ) in the direction of K(θ). By using the left directional derivative Υθ[−∇θL(θ,ν,λ)] in the gradient descent algorithm for θ, the gradient will point in the descent direction along the boundary of Θ whenever the θ-update hits its boundary.
Finally, since the λ-update converges on the slowest time-scale, the λ-update can be rewritten using the converged θ*(λ) and ν*(λ), i.e.,
Consider the continuous time system
and Γλ is the Euclidean projection operator of λ to [0,λmax], i.e., Γλ(λ)=argmin{circumflex over (λ)}∈[0,λmax] ½(λ−{circumflex over (λ)})2.
Next, it is desirable to show that the ordinary differential equation 18 is actually a gradient ascent of the Lagrangian function, using the envelope theorem from mathematical economics. The envelope theorem describes sufficient conditions under which the derivative of L* with respect to λ equals the partial derivative of the objective function L with respect to λ, holding (θ,ν) at its local optimum (θ,ν)=(θ*(λ),ν*(λ)). It can be shown that ∇λL*(λ) coincides with ∇λL(θ,ν,λ)|θ=θ*(λ),ν=ν*(λ) as follows.
Theorem 2:
The value function L* is absolutely continuous. Furthermore,
L*(λ)=L*(0)+∫0λ∇λ′L(θ,ν,λ′)|θ=θ*(s),ν=ν*(s),λ′=sds,λ≥0. (19)
Proof.
The proof follows from analogous arguments of Lemma 4.3 in V. Borkar, An Actor-Critic Algorithm for Constrained Markov Decision Processes, IEEE TRANSACTION ON AUTOMATIC CONTROL (2014)—incorporated herein by reference in its entirety. From the definition of L*, observe that for any λ′, λ″≥0 with λ′<λ″,
This implies that L* is absolutely continuous. Therefore, L* is continuous everywhere and differentiable almost everywhere.
By the Milgrom-Segal envelope theorem of mathematical economics, see P. Milgrom and I. Segal, Envelope Theorems for Arbitrary Choice Sets, ECONOMETRICA 70:583-601 (2002)—incorporated herein by reference in its entirety—the expression in Equation 19 follows.
Remark 1:
It can be shown that L*(λ) is a concave function, since for given θ and ν, L(θ,ν,λ) is a linear function of λ. Therefore, for any α′∈[0, 1], α′L*(λ1)+(1−α′)L*(λ2)≤L*(α′λ1+(1−α′)λ2), i.e., L*(λ) is a concave function. Concavity of L* implies that it is continuous and directionally (both left hand and right hand) differentiable in int dom(L*). Furthermore, at any λ={circumflex over (λ)} such that the derivative of L(θ,ν,λ) with respect to λ at θ=θ*(λ), ν=ν*(λ) exists,
Furthermore concavity of L* implies ∇λL*(λ)|λ={circumflex over (λ)}
In order to prove the main convergence result, the following standard assumptions and remarks are needed.
Assumption 4:
For any given x0∈𝒳 and θ∈Θ, the set {(ν,g(ν))|g(ν)∈∂νL(θ,ν,λ)} is closed.
Remark 2:
For any given θ∈Θ, λ≥0, and g(ν)∈∂νL(θ,ν,λ), we have
|g(ν)|≤3λ(1+|ν|)/(1−α). (20)
To see this, recall that g can be parameterized by q as, for q∈[0,1],
It is obvious that |1{D(ξ)=ν}|, |1{D(ξ)>ν}|≤1+|ν|. Thus, |Σξℙθ(ξ)1{D(ξ)>ν}|≤supξ|1{D(ξ)>ν}|≤1+|ν|, and |Σξℙθ(ξ)1{D(ξ)=ν}|≤1+|ν|. Recalling 0<(1−q), (1−α)<1, these arguments imply the claim of Equation 20.
Before getting into the main result, the following technical proposition is also needed.
Proposition 1: ∇θL(θ,ν,λ) is Lipschitz in θ.
Proof.
Recall that
and ∇θ log ℙθ(ξ)=Σk=0T−1∇θμ(ak|xk; θ)/μ(ak|xk; θ) whenever μ(ak|xk; θ)∈(0,1]. Now Assumption 1 implies that ∇θμ(ak|xk; θ) is a Lipschitz function in θ for any a∈𝒜 and k∈{0, . . . , T−1} and that μ(ak|xk; θ) is differentiable in θ. Therefore, by recalling that ℙθ(ξ)=Πk=0T−1P(xk+1|xk,ak)μ(ak|xk; θ)1{x0=x0} and by combining these arguments and noting that the sum of products of Lipschitz functions is Lipschitz, one concludes that ∇θL(θ,ν,λ) is Lipschitz in θ.
Remark 3:
The fact that ∇θL(θ,ν,λ) is Lipschitz in θ implies that ∥∇θL(θ,ν,λ)∥2≤2(∥∇θL(θ0,ν,λ)∥+∥θ0∥)2+2∥θ∥2, which further implies that
∥∇θL(θ,ν,λ)∥2≤K1(1+∥θ∥2)
for K1=2 max(1,(∥∇θL(θ0,ν,λ)∥+∥θ0∥)2)>0. Similarly, the fact that ∇θ log ℙθ(ξ) is Lipschitz in θ implies that
∥∇θ log ℙθ(ξ)∥2≤K2(ξ)(1+∥θ∥2)
for a positive random variable K2(ξ). Furthermore, since T<∞ w.p. 1, μ(ak|xk; θ)∈(0,1] and ∇θμ(ak|xk; θ) is Lipschitz for any k<T, K2(ξ)<∞ w.p. 1.
It is now possible to prove the convergence analysis of Theorem 1.
Proof.
[Proof of Theorem 1] The following four steps establish the proof:
Step 1 (Convergence of ν-Update):
Since ν converges in a faster time scale than θ and λ, one can assume both θ and λ as fixed quantities in the ν-update, i.e.,
First, one can show that δνi+1 is square integrable, i.e.
where ℱν,i=σ(νm,δνm, m≤i) is the filtration of νi generated by different independent trajectories. Second, since the history trajectories are generated based on the sampling probability mass function ℙθ(ξ), Equation 6 implies that 𝔼[δνi+1|ℱν,i]=0. Therefore, the ν-update is a stochastic approximation of the ordinary differential equation 15 with a Martingale difference error term, i.e.,
Then one can invoke Corollary 4 in Chapter 5 of V. Borkar, Stochastic Approximation: A Dynamical Systems Viewpoint, Cambridge University Press (2008) (discussing stochastic approximation theory for non-differentiable systems)—incorporated herein by reference in its entirety—to show that the sequence {νi},
converges almost surely to a fixed point
of the differential inclusion (Equation 15), where
To justify the assumptions of this theorem, 1) from Remark 2, the Lipschitz property is satisfied, i.e., supg(ν)∈∂νL(θ,ν,λ)|g(ν)|≤3λ(1+|ν|)/(1−α), and 2) the fact that νi lies in a compact set
∀i implies that supi∥νi∥<∞ almost surely.
Consider the ordinary differential equation of ν∈ℝ in Equation 15, and define the set-valued derivative of L as follows:
DtL(θ,ν,λ)={g(ν)Υν[−g(ν)]|∀g(ν)∈∂νL(θ,ν,λ)}.
One may conclude that
Consider the following cases:
Case 1:
When
For every g(ν)∈∂νL(θ,ν,λ), there exists a sufficiently small η0>0 such that
and
Γν(ν−η0g(ν))−ν=−η0g(ν).
Therefore, the definition of Υν[−g(ν)] implies
The maximum is attained because ∂νL(θ,ν,λ) is a convex compact set and g(ν)Υν[−g(ν)] is a continuous function. At the same time, we have maxg(ν)DtL(θ,ν,λ)<0 whenever 0∉∂νL(θ,ν,λ).
Case 2:
When
and for any g(ν)∈∂νL(θ,ν,λ) such that
for any η∈(0,η0] and some η0>0.
The condition
implies that Υν[−g(ν)]=−g(ν).
Then the following obtains:
Furthermore,
whenever 0∉∂νL(θ,ν,λ).
Case 3:
When
and there exists a non-empty set
First, consider any g(ν)|(ν). For any η>0, define νη:=ν−ηg(ν). The above condition implies that when 0<η→0, Γν[νη] is the projection of νη to the tangent space of
For any elements
since the following set
is compact, the projection of νη on
exists. Furthermore, since ƒ(ν):=½(ν−νη)2 is a strongly convex function and ∇ƒ(ν)=ν−νη, by first order optimality condition, one obtains
where νη* is a unique projection of νη (the projection is unique because ƒ(ν) is strongly convex and
is a convex compact set). Since the projection (minimizer) is unique, the above equality holds if and only if ν=νη*.
Therefore, for any
and η>0.
Second, for any g(ν)∈∂νL(θ,ν,λ)∩(ν)c, one obtains
for any η∈(0,η0] and some η0>0. In this case, the arguments follow from case 2 and the following expression holds, Υν[−g(ν)]=−g(ν).
Combining these arguments, one concludes that
This quantity is non-zero whenever 0∉{g(ν)Υν[−g(ν)]|∀g(ν)∈∂νL(θ,ν,λ)} (this is because, for any g(ν)∈∂νL(θ,ν,λ)∩(ν)c, one obtains g(ν)Υν[−g(ν)]=−g(ν)2).
Thus, by similar arguments one may conclude that maxg(ν)DtL(θ,ν,λ)≤0 and it is non-zero if Υν[−g(ν)]≠0 for every g(ν)∈∂νL(θ,ν,λ).
Now for any given θ and λ, define the following Lyapunov function
ℒθ,λ(ν)=L(θ,ν,λ)−L(θ,ν*,λ)
where ν* is a minimum point (for any given (θ,λ), L is a convex function in ν). Then ℒθ,λ(ν) is a positive definite function, i.e., ℒθ,λ(ν)≥0. On the other hand, by definition of a minimum point, one easily obtains 0∈{g(ν*)Υν[−g(ν*)]|ν=ν*|∀g(ν*)∈∂νL(θ,ν,λ)|ν=ν*} which means that ν* is also a stationary point, i.e., ν*∈Nc.
Note that maxg(ν)Dtℒθ,λ(ν)=maxg(ν)DtL(θ,ν,λ)≤0 and this quantity is non-zero if Υν[−g(ν)]≠0 for every g(ν)∈∂νL(θ,ν,λ). Therefore, by Lyapunov theory for asymptotically stable differential inclusions, the above arguments imply that with any initial condition ν(0), the state trajectory ν(t) of Equation 15 converges to a stable stationary minimum point ν*, i.e., L(θ,ν*,λ)≤L(θ,ν(t),λ)≤L(θ,ν(0),λ) for any t≥0.
Based on the previous stochastic approximation analysis, the sequence {νi}
converges almost surely to the solution of differential inclusion (Equation 15) which further converges almost surely to ν*∈Nc. Also, it can be easily seen that Nc is a closed subset of the compact set
which is a compact set as well.
Step 2 (Convergence of θ-Update):
Since θ converges in a faster time scale than λ, and ν converges faster than θ, one can assume λ as a fixed quantity and ν as a converged quantity ν*(θ) in the θ-update. The θ-update can be rewritten as a stochastic approximation, i.e.,
First, one can show that δθi+1 is square integrable, i.e., 𝔼[∥δθi+1∥2|ℱθ,i]≤Ki(1+∥θi∥2) for some Ki>0, where ℱθ,i=σ(θm,δθm, m≤i) is the filtration of θi generated by different independent trajectories. To see this, notice that
The Lipschitz upper bounds are due to results in Remark 3. Since K2(ξj,i)<∞ w.p. 1, there exists K2,i<∞ such that max1≤j≤NK2(ξj,i)≤K2,i. By combining these results, one concludes that 𝔼[∥δθi+1∥2|ℱθ,i]≤Ki(1+∥θi∥2), where
Second, since the history trajectories are generated based on the sampling probability mass function ℙθ(ξ), 𝔼[δθi+1|ℱθ,i]=0, and the θ-update is therefore a stochastic approximation of the continuous time system of Equation 16 with a Martingale difference error term.
Now consider the continuous time system θ∈Θ in Equation 16. The following can be expressed:
Consider the following cases:
Case 1:
When θ∈Θ°. Since Θ° is the interior of the set Θ and Θ is a convex compact set, there exists a sufficiently small η0>0 such that θ−η0∇θL(θ,ν,λ)|ν=ν*(θ)∈Θ and
Therefore, the definition of Υθ[−∇θL(θ,ν,λ)|ν=ν*(θ)] implies
At the same time, dL(θ,ν,λ)/dt|ν=ν*(θ)<0 whenever ∥∇θL(θ,ν,λ)|ν=ν*(θ)∥≠0.
Case 2:
When θ∈∂Θ and θ−η∇θL(θ,ν,λ)|ν=ν*(θ)∈Θ for any η∈(0,η0] and some η0>0. The condition θ−η∇θL(θ,ν,λ)|ν=ν*(θ)∈Θ implies that
Then the following obtains
Υθ[−∇θL(θ,ν,λ)|ν=ν*(θ)]=−∇θL(θ,ν,λ)|ν=ν*(θ).
Then the following obtains
Furthermore, dL(θ,ν,λ)/dt|ν=ν*(θ)<0 when ∥∇θL(θ,ν,λ)|ν=ν*(θ)∥≠0.
Case 3:
When θ∈∂Θ and θ−η∇θL(θ,ν,λ)|ν=ν*(θ)∉Θ for some η∈(0,η0] and any η0>0. For any η>0, define θη:=θ−η∇θL(θ,ν,λ)|ν=ν*(θ). The above condition implies that when 0<η→0, Γθ[θη] is the projection of θη to the tangent space of Θ. For any element {circumflex over (θ)}∈Θ, since the following set {θ∈Θ:∥θ−θη∥2≤∥{circumflex over (θ)}−θη∥2} is compact, the projection of θη on Θ exists. Furthermore, since ƒ(θ):=½∥θ−θη∥22 is a strongly convex function and ∇ƒ(θ)=θ−θη, by the first order optimality condition, one obtains
∇ƒ(θη*)T(θ−θη*)=(θη*−θη)T(θ−θη*)≥0, ∀θ∈Θ
where θη* is a unique projection of θη (the projection is unique because ƒ(θ) is strongly convex and Θ is a convex compact set). Since the projection (minimizer) is unique, the above equality holds if and only if θ=θη*.
Therefore, for any θ∈Θ and η>0,
From these arguments, one concludes that dL(θ,ν,λ)/dt|ν=ν*(θ)≤0 and this quantity is non-zero whenever ∥Υθ[−∇θL(θ,ν,λ)|ν=ν*(θ)]∥≠0.
Now for any given λ, define the following Lyapunov function
ℒλ(θ)=L(θ,ν*(θ),λ)−L(θ*,ν*(θ*),λ)
where θ* is a local minimum point. Then there exists a ball centered at θ* with radius r such that for any θ∈Bθ*(r), ℒλ(θ) is a locally positive definite function, i.e., ℒλ(θ)≥0. On the other hand, by the definition of a local minimum point, one obtains Υθ[−∇θL(θ*,ν,λ)|ν=ν*(θ)]|θ=θ*=0 which means that θ* is a stationary point, i.e., θ*∈Θc.
Note that dℒλ(θ(t))/dt=dL(θ(t),ν*(θ(t)),λ)/dt≤0 and the time-derivative is non-zero whenever ∥Υθ[−∇θL(θ,ν,λ)|ν=ν*(θ)]∥≠0. Therefore, by Lyapunov theory for asymptotically stable systems, the above arguments imply that with any initial condition θ(0)∈Bθ*(r), the state trajectory θ(t) of Equation 16 converges to a stable stationary and local minimum θ*, i.e., L(θ*,ν*(θ*),λ)≤L(θ(t),ν*(θ(t)),λ)≤L(θ(0),ν*(θ(0)),λ) for any t≥0.
Based on the above properties and noting that 1) from Proposition 1, ∇θL(θ,ν,λ) is a Lipschitz function in θ, 2) the step-size rule follows from Assumptions 1 to 3 (and corresponding discussion), 3) Equation 31 (below) implies that δθi+1 is a square integrable Martingale difference, and 4) θi∈Θ, ∀i implies that supi∥θi∥<∞ almost surely, one can invoke Theorem 2 in Chapter 6 of V. Borkar, Stochastic Approximation, supra, (addressing multi-time scale stochastic approximation theory), to show that the sequence {θi}, θi∈Θ converges almost surely to the solution of ordinary differential equation 16 which further converges almost surely to θ*∈Θ. Also, it can be easily seen that Θc is a closed subset of the compact set Θ, which is a compact set as well.
Step 3 (Local Minimum):
Now, it is desirable to show that {θi, νi} converges to a local minimum of L(θ,ν,λ) for fixed λ. Recall that {θi, νi} converges to (θ*, ν*):=(θ*,ν*(θ*)). The previous arguments on the (ν,θ) convergence analysis imply that with any initial condition (θ(0),ν(0)), the state trajectories θ(t) and ν(t) of Equations 15 and 16 converge to the set of stationary points (θ*,ν*) in the positive invariant set Θc×Nc and L(θ*,ν*,λ)≤L(θ(t),ν*(θ(t)),λ)≤L(θ(0),ν*(θ(0)),λ)≤L(θ(0),ν(t),λ)≤L(θ(0),ν(0),λ) for any t≥0.
By contradiction, suppose (θ*,ν*) is not a local minimum. Then there exists
such that
The minimum is attained by Weierstrass extreme value theorem. By putting θ(0)=
which is clearly a contradiction. Therefore, the stationary point (θ*,ν*) is a local minimum of L(θ,ν,λ) as well.
Step 4 (Convergence of λ-Update):
Since the λ-update converges on the slowest time scale, it can be rewritten using the converged θ*(λ)=θ*(ν*(λ),λ) and ν*(λ), i.e.,
From Equation 7 it is obvious that ∇λL(θ,ν,λ) is a constant function of λ. Similar to the θ-update, one can easily show that δλi+1 is square integrable, i.e.,
where ℱλ,i=σ(λm,δλm, m≤i) is the filtration of λ generated by different independent trajectories. Furthermore, Equation 7 implies that 𝔼[δλi+1|ℱλ,i]=0. Therefore, the λ-update is a stochastic approximation of the ordinary differential equation 18 with a Martingale difference error term. In addition, from the convergence analysis of the (θ,ν)-update, (θ*(λ),ν*(λ)) is an asymptotically stable equilibrium point of {θi,νi}. Since, from Equation 5, ∇θL(θ,ν,λ) is a linear mapping in λ, it can be easily seen that (θ*(λ),ν*(λ)) is a Lipschitz continuous mapping of λ.
Consider the ordinary differential equation of λ∈[0,λmax] in Equation 18. Analogous to the arguments in the θ-update, it can be written
and shown that −dL(θ,ν,λ)/dt|θ=θ*(λ),ν=ν*(λ)≤0, and that this quantity is non-zero whenever ∥Υλ[dL(θ,ν,λ)/dλ|θ=θ*(λ),ν=ν*(λ)]∥≠0.
Consider the Lyapunov function
ℒ(λ)=−L(θ*(λ),ν*(λ),λ)+L(θ*(λ*),ν*(λ*),λ*)
where λ* is a local maximum point. Then there exists a ball centered at λ* with radius r such that for any λ∈Bλ*(r), ℒ(λ) is a locally positive definite function, i.e., ℒ(λ)≥0. On the other hand, by the definition of a local maximum point, one obtains Υλ[dL(θ,ν,λ)/dλ|θ=θ*(λ),ν=ν*(λ),λ=λ*]|λ=λ*=0 which means that λ* is also a stationary point, i.e., λ*∈Λc.
Since dℒ(λ(t))/dt=−dL(θ*(λ(t)),ν*(λ(t)),λ(t))/dt≤0 and the time-derivative is non-zero whenever ∥Υλ[∇λL(θ,ν,λ)|ν=ν*(λ),θ=θ*(λ)]∥≠0, Lyapunov theory for asymptotically stable systems implies that λ(t) converges to λ*, and λ* is an asymptotically stable stationary point and a local maximum point.
Based on the above properties and noting that the step size rule follows from Assumptions 1 to 3 (and corresponding discussion), one can apply the multi-time scale stochastic approximation theory (Theorem 2 in Chapter 6 of V. Borkar, Stochastic Approximation, supra) to show that the sequence {λi} converges almost surely to the solution of ordinary differential equation 18 which further converges almost surely to λ*∈[0,λmax]. Since [0,λmax] is a compact set, following the same lines of arguments and recalling the envelope theorem (Theorem 3) for local optimum, one further concludes that λ* is a local maximum of L(θ*(λ),ν*(λ),λ)=L*(λ).
Step 5 (Saddle Point):
By letting θ*=θ*(ν*(λ*),λ*) and ν*=ν*(λ*), it can be shown that (θ*,ν*,λ*) is a (local) saddle point of the objective function L(θ,ν,λ) if λ*∈[0,λmax).
Now suppose the sequence {λi} generated from Equation 30 converges to a stationary point λ*∈[0,λmax). Since step 3 implies that (θ*,ν*) is a local minimum of L(θ,ν,λ*) over feasible set
there exists an r>0 such that
In order to complete the proof it is necessary to show
These two equations imply
which further implies (θ*,ν*,λ*) is a saddle point of L(θ,ν,λ). It can now be shown that Equations 32 and 33 hold.
Recall that Υλ[∇λL(θ,ν,λ)|θ=θ*(λ),ν=ν*(λ),λ=λ*]|λ=λ*=0. We show Equation 32 by contradiction. Suppose
This then implies that for λ*∈[0,λmax),
for any η∈[0,ηmax) for some sufficiently small ηmax>0. Therefore,
This contradicts Υλ[∇λL(θ,ν,λ)|θ=θ*(λ),ν=ν*(λ),λ=λ*]|λ=λ*=0. Therefore, Equation 32 holds. To show that Equation 33 holds, it is only necessary to show that λ*=0 if
Suppose λ*∈(0,λmax), then there exists a sufficiently small λ0>0 such that
This again contradicts the assumption Υλ[∇λL(θ,ν,λ)|θ=θ*(λ),ν=ν*(λ),λ=λ*]|λ=λ*=0. Therefore, Equation 33 holds.
Combining the above arguments, one finally concludes that (θ*,ν*,λ*) is a (local) saddle point of L(θ,ν,λ) if λ*∈[0,λmax).
Remark 4:
When λ*=λmax and
For any η>0 and Υλ[∇λL(θ,ν,λ)|θ=θ*(λ),ν=ν*(λ),λ=λ*]|λ=λ*=0. In this case one cannot guarantee feasibility using the above analysis, and (θ*,ν*,λ*) is not a local saddle point. Such a λ* is referred to as a spurious fixed point. Practically, by incrementally increasing λmax, when λmax becomes sufficiently large, one can ensure that the policy gradient algorithm will not get stuck at the spurious fixed point.
Adobe Target Problem
One or more embodiments of the present disclosure can be used in conjunction with the set of programs known as Adobe Target®. As described above, these programs have historically used the Test & Target 1:1 optimization system to help marketers test different variations of websites and optimize the advertisement targets based on different types of visitors. This optimization system uses machine-learning, risk-neutral techniques to decide which content to show each visitor based on consumer background and preferences. One or more embodiments have been utilized to select ad recommendation policies in comparison to the Adobe Test & Target 1:1 simulator. For purposes of this comparison, the loss-tolerance and confidence-level values were set to β=0.12 and α=0.05, respectively. Here, the cost is defined as the negative of the reward, and thus in the cost formulation the loss-tolerance and confidence-level values were set to β=−0.12 and α=0.95, respectively. A stage-wise reward of 1 indicates that the user clicked the advertising content, and a reward of zero otherwise. Therefore, the number of clicks on advertising content defined the long-term reward function. With a fixed time horizon of 15 steps (T=15) and a discount factor γ=0.95, the experimental results can be found in Table 1 below.
TABLE 1
Performance comparison for the policies learned by the risk-sensitive and risk-neutral algorithms in Adobe Target.
                                 𝔼(Rμ)(x0)    σ(Rμ)(x0)    CVaRα(Rμ)(x0)
CVaR Constrained Optimization      0.293        0.864         0.141
Risk Neutral Optimization          0.404        1.919         0.036
From Table 1, the mean reward is 0.404 for the risk-neutral policy gradient, compared to a mean reward of 0.293 for the mean-CVaR constrained policy gradient; the mean reward is thus somewhat higher for the risk-neutral policy. However, the CVaR value is 0.036 for the risk-neutral policy gradient, compared to 0.141 for the mean-CVaR constrained policy gradient. Because the CVaR value is higher for the mean-CVaR constrained policy, the mean of the α-tail of its reward distribution is higher, and its downside variance, or risk, is therefore lower. These results show that while the expected revenue obtained from the risk-neutral method is higher than that of its mean-CVaR constrained counterpart, the risk-neutral method encounters a higher risk of returning a very low return.
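For completeness, the statistics reported in Table 1 can be reproduced for any candidate policy by summarizing its simulated per-user returns, as in the hedged Python sketch below. The synthetic distributions and the lower-tail convention for the reward CVaR are assumptions chosen only to illustrate the qualitative pattern (higher mean but a worse α-tail for the risk-neutral policy); they do not reproduce the actual experiment.

import numpy as np

def summarize_returns(returns, alpha=0.95):
    # Mean, standard deviation, and a lower-tail CVaR of a reward distribution.
    # Treat the lower-tail convention as an assumption about how Table 1 was tabulated.
    returns = np.asarray(returns, dtype=float)
    var_level = np.quantile(returns, 1.0 - alpha)   # low-reward quantile
    tail = returns[returns <= var_level]
    return returns.mean(), returns.std(), tail.mean()

rng = np.random.default_rng(0)
risk_neutral = rng.gamma(shape=0.5, scale=0.8, size=50_000)    # higher mean, heavier spread
risk_sensitive = rng.gamma(shape=1.5, scale=0.2, size=50_000)  # lower mean, tighter spread
print(summarize_returns(risk_neutral))
print(summarize_returns(risk_sensitive))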
Stopping Problem
Furthermore, one or more embodiments of the disclosure can be utilized to provide other policy recommendations. For example, one embodiment has been utilized in conjunction with a recommendation for an investment-stopping problem. In this problem, the state at each time step k≤T consists of the cost ck and the time k, i.e., xk=(ck, k), where T is the stopping time. The agent (buyer) should decide either to accept the present cost or to wait. The objective is to minimize the average price of an equity and control its worst-case loss distribution. If the buyer accepts, i.e., when k=T, the system reaches a terminal state and the cost ck is incurred; otherwise, the buyer incurs the holding cost ph and the new state is (ck+1, k+1), where ck+1 is ƒu ck w.p. p and ƒd ck w.p. 1−p. Here, ƒu>1 and ƒd<1 are constants. Moreover, there is a discount factor γ∈(0,1) to account for the increase in the buyer's affordability.
The stopping problem can be formulated as follows
minθ 𝔼(Dθ(x0)) subject to CVaRα(Dθ(x0))≤β
where Dθ(x0)=Σk=0Tγk(1{μk=1}ck+1{μk=0}ph)|x0=x,μ
This embodiment set the parameters of the MDP as follows in solving this particular stopping problem: x0=[2; 0], ph=0.1, T=20, γ=0.95, ƒu=2, ƒd=0.5, and p=0.35. The step-length sequence was given as:
The confidence level and risk-tolerance threshold were given by α=0.95 and β=3. The number of sample trajectories N was set to 100. The parameter bounds were given as follows: λmax=1000 and Θ=[−60,60]κ.
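The stopping problem just described is simple enough to simulate directly. The following Python sketch rolls out the cost dynamics with the parameters given above under an illustrative threshold rule (accept once the cost falls below 1.5, a value chosen arbitrarily) and reports the empirical mean loss and CVaR0.95; it is a sanity-check harness, not the LSPI or policy-gradient procedure used in the experiment.

import numpy as np

def simulate_stopping(stop_rule, c0=2.0, p_h=0.1, T=20, gamma=0.95,
                      f_u=2.0, f_d=0.5, p=0.35, seed=0):
    # Return the discounted loss D of one episode of the stopping problem.
    # stop_rule(cost, k) -> True to accept the current cost, False to wait.
    rng = np.random.default_rng(seed)
    cost, loss = c0, 0.0
    for k in range(T + 1):
        if stop_rule(cost, k) or k == T:
            loss += (gamma ** k) * cost           # accept: pay the current cost
            break
        loss += (gamma ** k) * p_h                # wait: pay the holding cost p_h
        cost *= f_u if rng.random() < p else f_d
    return loss

# Illustrative threshold rule: accept as soon as the cost drops below 1.5.
losses = np.array([simulate_stopping(lambda c, k: c < 1.5, seed=s) for s in range(10_000)])
alpha = 0.95
var_a = np.quantile(losses, alpha)
print(losses.mean(), losses[losses >= var_a].mean())   # empirical mean loss and CVaR_0.95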
It will be appreciated that a near-optimal policy μ was obtained using the LSPI algorithm with 2-dimensional radial basis function (RBF) features. This embodiment also implemented the 2-dimensional RBF feature function ϕ and considered the family of Boltzmann policies for policy parameterization:
μ(a|x;θ)=exp(θTϕ(x,a))/Σa′ exp(θTϕ(x,a′)).
This experiment implemented two trajectory-based algorithms (as described above), one that considered risk criteria, another that did not.
The experiments for each algorithm comprised the following two phases:
TABLE 2
Performance comparison for the policies learned by the risk-sensitive and risk-neutral algorithms in the optimal stopping problem.
                                 𝔼(Rμ)(x0)    σ(Rμ)(x0)    CVaRα(Rμ)(x0)
CVaR Constrained Optimization      1.84         0.57          2.25
Risk Neutral Optimization          1.35         1.04          4.45
From Table 2, one can see that the mean revenue of the policy returned by the risk-neutral policy gradient algorithm is 1.35, compared to the mean revenue of 1.84 for the policy returned by the mean-CVaR constrained policy gradient algorithm. Although the expected revenue obtained from risk-neutral policy gradient is higher (lower expected cost), it has a heavy-tailed distribution. This can be seen from the values of CVaR, which is 4.45 in the risk-neutral policy gradient and 2.25 in the mean-CVaR constrained policy gradient. This implies that the policy generated by the risk-neutral algorithm has a significant chance to return a very low return (high cost). Comparatively, by trading off some performance on maximizing the expected revenue, the return distribution from the mean-CVaR constrained method results in much smaller deviations (shorter tail).
The method 800 includes an act 802 of receiving a risk-tolerance value β. In particular, act 802 can involve receiving a threshold tolerance value for an ad serving campaign. For example, act 802 can involve receiving a risk threshold indicating how much risk a marketer is willing to allow in the performance of the ad serving campaign. The risk-tolerance value β can take a variety of forms, including a business outcome (such as revenue or sales), consumer behavior (such as clicks, selections, or purchases), a threshold click-thru rate, or a statistical measure (such as CVaR, standard deviations, value-at-risk, etc.). Act 802 can optionally involve receiving or identifying a confidence level α for the risk-tolerance value β. For example, act 802 can involve identifying a default confidence level α, such as, for example, 95%. Alternatively, act 802 can involve receiving input from a marketer that adjusts the default confidence level α or otherwise provides the confidence level α.
The method 800 further includes an act 804 of identifying a set of user data. In particular, act 804 can involve identifying a set of user data indicating prior behavior in relation to advertising. For example, act 804 can involve accessing a set of user data from an analytics server. Alternatively, act 804 can involve gathering the set of user data via known analytic techniques, such as tracking pixels.
Additionally, the method includes an act 806 of determining an optimized ad recommendation policy that is subject to the risk-tolerance value. In particular, act 806 can involve determining an optimized ad recommendation policy μ that is subject to the risk-tolerance value β by simultaneously converging a policy parameter θ and a risk parameter ν to identify the optimized ad recommendation policy μ that is subject to the risk-tolerance value β. For example, act 806 can involve repeatedly estimating a gradient for the policy parameter θ by sampling the set of user data. Additionally, act 806 can involve repeatedly estimating a gradient for the risk parameter ν by sampling the set of user data. Furthermore, act 806 can involve repeatedly using the estimated gradient for the risk parameter ν to select an updated risk parameter ν. Act 806 can also involve repeatedly using the estimated gradient for the policy parameter θ to select an updated policy parameter θ.
Act 806 can also involve simultaneously converging the policy parameter θ, the risk parameter ν, and a constraint parameter λ to identify the optimized ad recommendation policy that is subject to the risk-tolerance value β. In particular, act 806 can involve repeatedly estimating a gradient for the constraint parameter λ by sampling the set of user data.
As mentioned above, act 806 can involve determining an optimized ad recommendation policy μ that is subject to the risk-tolerance value β. In one or more embodiments, act 806 can involve determining an optimized ad recommendation policy that is subject to the risk-tolerance value β within the confidence level α. For example, act 806 can involve optimizing a conditional value-at-risk for a Markov decision process. In particular, act 806 can involve using a policy gradient algorithm for the conditional value-at-risk for the Markov decision process.
As part of estimating the gradient for the policy parameter θ by sampling the set of user data, act 806 can involve using the policy gradient algorithm to generate one or more trajectories for the policy parameter θ by following the policy parameter θ during sampling. Act 806 can further involve using the one or more trajectories to estimate the gradient for the policy parameter θ. Act 806 can also involve applying projection operators when estimating the gradient for the policy parameter θ to ensure convergence. Act 806 can then involve using the estimated gradient of the policy parameter θ to select an updated policy parameter θ from a set of ad recommendation policies.
Along similar lines, as part of estimating the gradient for the risk parameter ν by sampling the set of user data, act 806 can involve using the policy gradient algorithm to generate one or more trajectories for the risk parameter ν by following the policy parameter θ during sampling. Act 806 can further involve using the one or more trajectories to estimate the gradient for the risk parameter ν. Act 806 can also involve applying projection operators when estimating the gradient for the risk parameter ν to ensure convergence. Act 806 can then involve using the estimated gradient of the risk parameter ν to select an updated risk parameter ν.
Still further, as part of estimating the gradient for the constraint parameter λ by sampling the set of user data, act 806 can involve using the policy gradient algorithm to generate one or more trajectories for the constraint parameter λ by following the policy parameter θ during sampling. Act 806 can further involve using the one or more trajectories to estimate the gradient for the constraint parameter λ. Act 806 can also involve applying projection operators when estimating the gradient for the constraint parameter λ to ensure convergence. Act 806 can then involve using the estimated gradient of the constraint parameter λ to select an updated constraint parameter λ.
In one or more embodiments, act 806 can involve converging the risk parameter ν at a first time scale, converging the policy parameter θ at a second time scale, and converging the constraint parameter λ at a third time scale. The first time scale can be faster than the second time scale. Along related lines, the second time scale can be faster than the third time scale.
The method 900 also includes an act 904 of receiving a set of sample data. In particular, act 904 can involve identifying a set of user data indicating prior behavior in relation to parameters of one or more policies. Furthermore, method 900 includes an act 906 of receiving a set of policies.
In addition to the foregoing, method 900 includes an act 908 of determining an optimized policy that is subject to the risk-tolerance value β within the confidence level α using a trajectory-based policy gradient algorithm. For example, act 908 can involve simultaneously converging a policy parameter θ, a risk parameter ν, and a constraint parameter λ by sampling the sample data to identify the optimized policy that is subject to the risk-tolerance value β within the confidence level α. More particularly, act 908 can involve for each of the policy parameter θ, the risk parameter ν, and the constraint parameter λ repeatedly: generating one or more trajectories of a given parameter of the policy parameter, the risk parameter, or the constraint parameter by sampling the set of sample data; estimating, by the one or more processors, a gradient for the given parameter based on the generated one or more trajectories of the given parameter; and using the estimated gradient for the given parameter to update the given parameter.
Act 908 can also involve applying projection operators when estimating gradients for the policy parameter θ, the risk parameter ν, and the constraint parameter λ to ensure convergence to the optimized policy that is subject to the risk-tolerance value within the confidence level.
In one or more embodiments, act 908 can involve converging the risk parameter ν at a first time scale, converging the policy parameter θ at a second time scale, and converging the constraint parameter λ at a third time scale. The first time scale can be faster than the second time scale. Along related lines, the second time scale can be faster than the third time scale.
Embodiments of the present disclosure may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below. Embodiments within the scope of the present disclosure also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. In particular, one or more of the processes described herein may be implemented at least in part as instructions embodied in a non-transitory computer-readable medium and executable by one or more computing devices (e.g., any of the media content access devices described herein). In general, a processor (e.g., a microprocessor) receives instructions, from a non-transitory computer-readable medium, (e.g., a memory, etc.), and executes those instructions, thereby performing one or more processes, including one or more of the processes described herein.
Computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are non-transitory computer-readable storage media (devices). Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, embodiments of the disclosure can comprise at least two distinctly different kinds of computer-readable media: non-transitory computer-readable storage media (devices) and transmission media.
Non-transitory computer-readable storage media (devices) includes RAM, ROM, EEPROM, CD-ROM, solid state drives (“SSDs”) (e.g., based on RAM), Flash memory, phase-change memory (“PCM”), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmissions media can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
Further, upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to non-transitory computer-readable storage media (devices) (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a “NIC”), and then eventually transferred to computer system RAM and/or to less volatile computer storage media (devices) at a computer system. Thus, it should be understood that non-transitory computer-readable storage media (devices) can be included in computer system components that also (or even primarily) utilize transmission media.
Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. In some embodiments, computer-executable instructions are executed on a general-purpose computer to turn the general purpose computer into a special purpose computer implementing elements of the disclosure. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
Those skilled in the art will appreciate that the disclosure may be practiced in network computing environments with many types of computer system configurations, including, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, and the like. The disclosure may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
Embodiments of the present disclosure can also be implemented in cloud computing environments. In this description, “cloud computing” is defined as a model for enabling on-demand network access to a shared pool of configurable computing resources. For example, cloud computing can be employed in the marketplace to offer ubiquitous and convenient on-demand access to the shared pool of configurable computing resources. The shared pool of configurable computing resources can be rapidly provisioned via virtualization and released with low management effort or service provider interaction, and then scaled accordingly.
A cloud-computing model can be composed of various characteristics such as, for example, on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, and so forth. A cloud-computing model can also expose various service models, such as, for example, Software as a Service (“SaaS”), Platform as a Service (“PaaS”), and Infrastructure as a Service (“IaaS”). A cloud-computing model can also be deployed using different deployment models such as private cloud, community cloud, public cloud, hybrid cloud, and so forth. In this description and in the claims, a “cloud-computing environment” is an environment in which cloud computing is employed.
In particular embodiments, the processor 1002 includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, the processor 1002 may retrieve (or fetch) the instructions from an internal register, an internal cache, the memory 1004, or the storage device 1006 and decode and execute them. In particular embodiments, the processor 1002 may include one or more internal caches for data, instructions, or addresses. As an example and not by way of limitation, the processor 1002 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in the memory 1004 or the storage 1006.
The memory 1004 may be used for storing data, metadata, and programs for execution by the processor(s). The memory 1004 may include one or more of volatile and non-volatile memories, such as Random Access Memory (“RAM”), Read Only Memory (“ROM”), a solid state disk (“SSD”), Flash, Phase Change Memory (“PCM”), or other types of data storage. The memory 1004 may be internal or distributed memory.
The storage device 1006 includes storage for storing data or instructions. As an example and not by way of limitation, the storage device 1006 can comprise a non-transitory storage medium described above. The storage device 1006 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. The storage device 1006 may include removable or non-removable (or fixed) media, where appropriate. The storage device 1006 may be internal or external to the computing device 1000. In particular embodiments, the storage device 1006 is non-volatile, solid-state memory. In other embodiments, the storage device 1006 includes read-only memory (ROM). Where appropriate, this ROM may be mask programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these.
The I/O interface 1008 allows a user to provide input to, receive output from, and otherwise transfer data to and receive data from the computing device 1000. The I/O interface 1008 may include a mouse, a keypad or a keyboard, a touch screen, a camera, an optical scanner, network interface, modem, other known I/O devices or a combination of such I/O interfaces. The I/O interface 1008 may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers. In certain embodiments, the I/O interface 1008 is configured to provide graphical data to a display for presentation to a user. The graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation.
The communication interface 1010 can include hardware, software, or both. In any event, the communication interface 1010 can provide one or more interfaces for communication (such as, for example, packet-based communication) between the computing device 1000 and one or more other computing devices or networks. As an example and not by way of limitation, the communication interface 1010 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network.
Additionally or alternatively, the communication interface 1010 may facilitate communications with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, the communication interface 1010 may facilitate communications with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or other suitable wireless network or a combination thereof.
Additionally, the communication interface 1010 may facilitate communications using various communication protocols and technologies. Examples of communication media, devices, protocols, and technologies that may be used include, but are not limited to, data transmission media, communications devices, Transmission Control Protocol (“TCP”), Internet Protocol (“IP”), File Transfer Protocol (“FTP”), Telnet, Hypertext Transfer Protocol (“HTTP”), Hypertext Transfer Protocol Secure (“HTTPS”), Session Initiation Protocol (“SIP”), Simple Object Access Protocol (“SOAP”), Extensible Markup Language (“XML”) and variations thereof, Simple Mail Transfer Protocol (“SMTP”), Real-Time Transport Protocol (“RTP”), User Datagram Protocol (“UDP”), Global System for Mobile Communications (“GSM”) technologies, Code Division Multiple Access (“CDMA”) technologies, Time Division Multiple Access (“TDMA”) technologies, Short Message Service (“SMS”), Multimedia Message Service (“MMS”), radio frequency (“RF”) signaling technologies, Long Term Evolution (“LTE”) technologies, wireless communication technologies, in-band and out-of-band signaling technologies, and other suitable communications networks and technologies.
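By way of illustration only, and not by way of limitation, the following Python sketch uses the standard-library `http.server` module to show digital content being delivered to a requesting client device over HTTP, one of the protocols listed above. The `ContentHandler` class, the placeholder response body, and the port number are hypothetical; a production deployment would typically be served over HTTPS.

```python
# Illustrative sketch only: serving content to client devices over HTTP using
# Python's standard library. Handler, body, and port are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

class ContentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # In the described environment, this body could be digital content
        # selected for the requesting client device by the executing policy.
        body = b"<html><body>Recommended content placeholder</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Bind to localhost for illustration; in practice the server would sit
    # behind TLS termination (HTTPS) and appropriate network infrastructure.
    HTTPServer(("127.0.0.1", 8080), ContentHandler).serve_forever()
```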
The communication infrastructure 1012 may include hardware, software, or both that couples components of the computing device 1000 to each other. As an example and not by way of limitation, the communication infrastructure 1012 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination thereof.
In the foregoing specification, the present disclosure has been described with reference to specific exemplary embodiments thereof. Various embodiments and aspects of the present disclosure(s) are described with reference to details discussed herein, and the accompanying drawings illustrate the various embodiments. The description above and drawings are illustrative of the disclosure and are not to be construed as limiting the disclosure. Numerous specific details are described to provide a thorough understanding of various embodiments of the present disclosure.
The present disclosure may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. For example, the methods described herein may be performed with fewer or more steps/acts, or the steps/acts may be performed in differing orders. Additionally, the steps/acts described herein may be repeated or performed in parallel with one another or in parallel with different instances of the same or similar steps/acts. The scope of the present application is, therefore, indicated by the appended claims rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Inventors: Mohammad Ghavamzadeh; Yinlam Chow