Embodiments are disclosed that relate to the correction of an estimated pose determined from depth image data. One disclosed embodiment provides, on a computing system, a method of obtaining a representation of a pose of an articulated object from depth image data capturing the articulated object. The method comprises receiving the depth image data, obtaining an initial estimated skeleton of the articulated object from the depth image data, applying a random forest subspace regression function to the initial estimated skeleton, and determining the representation of the pose based upon a result of applying the random forest subspace regression function to the initial estimated skeleton.
|
1. On a computing system, a method of obtaining a representation of a pose of an articulated object from depth image data capturing the articulated object, the method comprising:
receiving the depth image data;
obtaining an initial estimated skeleton of the articulated object from the depth image data;
applying a random forest subspace regression function to the initial estimated skeleton; and
determining the representation of the pose based upon a result of applying the random forest subspace regression function to the initial estimated skeleton.
12. A computing system comprising:
a logic subsystem; and
a data-holding subsystem comprising instructions stored thereon that are executable by the logic subsystem to:
receive depth image data from a depth image sensor;
obtain an initial estimated skeleton from the depth image data, the initial estimated skeleton comprising a plurality of initial estimated joints;
apply a regression function to the initial estimated skeleton to determine one or more offsets to apply to a corresponding one or more initial estimated joints of the initial estimated skeleton; and
apply the one or more offsets to the initial estimated skeleton to determine a corrected skeleton.
20. A computing system comprising:
a logic subsystem; and
a data-holding subsystem comprising instructions stored thereon that are executable by the logic subsystem to:
receive depth image data from an image sensor;
obtain an initial estimated skeleton from the depth image data, the initial estimated skeleton comprising a plurality of initial estimated joints;
apply a random forest subspace regression function to the initial estimated skeleton to determine a pose tag to apply to the initial estimated skeleton, the random forest subspace regression function comprising a plurality of decision trees having leaf nodes that each comprise a set of bases that sparsely represents a subspace at the leaf node; and
output to a display an avatar having a pose based upon the pose tag determined.
|
The development of high-speed depth cameras has provided an opportunity for the application of a practical imaging modality to the building of a variety of systems in gaming, human computer interaction, surveillance, and other fields. For example, estimations of human pose determined via depth images acquired by such cameras may be used as input for computing systems and/or applications. As a more specific example, video games may utilize depth images of players as inputs to control game play.
Human poses may be estimated in various manners, such as via classification-based methods. However, poses determined via such methods may be prone to error due, for example, to pose variation and body part occlusion.
Embodiments are disclosed herein that relate to the correction of an estimated pose determined from depth image data. For example, one embodiment provides, on a computing system, a method of obtaining a representation of a pose of an articulated object from depth image data capturing the articulated object. The method comprises receiving the depth image data, obtaining an initial estimated skeleton of the articulated object from the depth image data, applying a random forest subspace regression function to the initial estimated skeleton, and determining the representation of the pose based upon a result of applying the random forest subspace regression function to the initial estimated skeleton.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
As mentioned above, estimations of human pose (or other articulated object pose) from depth images may be used as input for many types of computing systems and applications. To ensure proper performance of such systems and applications, it is desirable for such pose estimations to be robust. However, initial pose estimation from raw depth data using, for example, classification-based approaches may be prone to error due to large pose variation and body part occlusion. As such, additional processing, which may be referred to as pose correction, may be performed to recover the pose from such a noisy initial estimation. Pose correction of various types may be performed, such as skeletal correction and pose tag assignment. Skeletal correction attempts to recover skeletal pose (e.g. by recovering the location of joints of the skeleton from an initial estimate of joint location), while pose tag assignment outputs a value within a range (e.g. a real value ranging from 0 to 1) indicating a particular location of the pose along a movement pathway.
Pose correction may be performed in various manners. For example, some classification-based methods may utilize a nearest neighbor approach in which distances between an experimental point and training set points are calculated, and the nearest training set point is used as the classification for the experimental point. However, a nearest neighbor approach may utilize heuristics to a larger than desired extent.
Thus, embodiments are disclosed herein that may provide a more data-driven approach to pose correction than nearest neighbor or other methods. The disclosed embodiments utilize random forest regression methods to perform pose correction on an estimated skeleton. Briefly, a random forest regression function is trained to learn errors that occur in the initial skeleton estimation. In the case of skeletal correction, a regression function is trained to learn the systematic errors in initial joint estimation, while in tag correction, a regression function is trained to learn pose tag values directly. A random forest regression function also may utilize subspace learning, such that leaf nodes in the decision trees of the random forest regression function each comprise a set of bases that sparsely represent a subspace at the leaf node. In addition to being more data-driven than nearest neighbor methods and other regression methods, random forest regression methods also may be more efficient when processing larger amounts of training data and/or when utilizing features of higher dimensions in a training set.
Prior to discussing these embodiments in more detail, an example use environment is described with reference to
Human target 108 is shown here as a game player within the observed scene 112. Human target 108 is tracked by depth camera 110 so that the movements of human target 108 may be interpreted by gaming system 102 as controls that can be used to affect the game being executed by gaming system 102. In other words, human target 108 may use his or her movements to control the game. The movements of human target 108 may be interpreted as any suitable type of game control. Some movements of human target 108 may be interpreted as controls that serve purposes other than controlling virtual avatar 106. As nonlimiting examples, movements of human target 108 may be interpreted as controls that steer a virtual racing car, shoot a virtual weapon, navigate a first-person perspective through a virtual world, or manipulate various aspects of a simulated world. Movements may also be interpreted as auxiliary game management controls. For example, human target 108 may use movements to end, pause, save, select a level, view high scores, communicate with other players, etc.
Depth camera 110 may also be used to interpret target movements as operating system and/or application controls that are outside the realm of gaming. Virtually any controllable aspect of an operating system and/or application may be controlled by movements of human target 108. The illustrated scenario in
The methods and processes described herein may be tied to a variety of different types of computing systems.
As shown in
The depth information determined for each pixel may be used to generate a depth map 204. Such a depth map may take the form of any suitable data structure, including but not limited to a matrix that includes a depth value for each pixel of the observed scene. It is to be understood that a depth map generally includes depth information for all pixels, not just pixels that image the human target 108. Thus, in some embodiments, background removal algorithms may be used to remove background information from the depth map 204, producing a background-free depth map 206.
After background removal, an initial estimated skeleton 208 is derived from the background-free depth map 206. Initial estimated skeleton 208 may be derived from depth map 204 to provide a machine readable representation of human target 108. Initial estimated skeleton 208 may be derived from depth map 204 in any suitable manner. For example, in some embodiments, one or more skeletal fitting algorithms may be applied to the background-free depth map 206. The present disclosure is compatible with any suitable skeletal modeling techniques.
Initial estimated skeleton 208 may include a plurality of joints, each joint corresponding to a portion of the human target 108. It will be understood that an initial estimated skeleton in accordance with the present disclosure may include any suitable number of joints, each of which can be associated with any suitable number of parameters (e.g., three dimensional joint position, joint rotation, body posture of corresponding body part (e.g., hand open, hand closed, etc.) etc.). It is to be understood that an initial estimated skeleton may take the form of a data structure including one or more parameters for each of a plurality of skeletal joints (e.g., a joint matrix including an x position, a y position, a z position, and a rotation for each joint). In some embodiments, other types of virtual skeletons may be used (e.g., a wireframe, a set of shape primitives, etc.).
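As an illustrative, non-limiting sketch of such a data structure, the joint matrix described above may be modeled as follows; the joint names, coordinates, and single rotation parameter here are hypothetical and chosen only for illustration:

```python
# A hypothetical skeleton data structure: one entry per joint holding a
# three-dimensional position and a single rotation parameter. Joint names
# and values are illustrative only.
initial_estimated_skeleton = {
    "head":            {"x": 0.00,  "y": 1.70, "z": 2.50, "rotation": 0.0},
    "shoulder_center": {"x": 0.00,  "y": 1.45, "z": 2.50, "rotation": 0.0},
    "elbow_left":      {"x": -0.30, "y": 1.20, "z": 2.45, "rotation": 0.1},
    "hand_left":       {"x": -0.35, "y": 0.95, "z": 2.40, "rotation": 0.2},
}

def joint_position(skeleton, joint):
    """Return the (x, y, z) position of a named joint."""
    j = skeleton[joint]
    return (j["x"], j["y"], j["z"])

print(joint_position(initial_estimated_skeleton, "hand_left"))  # → (-0.35, 0.95, 2.4)
```

In practice such a structure might store many more joints and parameters per joint, or take the form of a flat joint matrix rather than a mapping.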
Initial estimated skeleton 208 may contain various errors, for example due to occlusion of body parts by other body parts, as illustrated by arm position error 210.
Thus, a pose correction process may be performed on the initial estimated skeleton to obtain a corrected pose. The pose correction may be used to form a corrected skeleton, as shown at 212, to assign a pose tag 300, or to correct pose in any other suitable manner. In general, to perform pose correction from a noisy initial estimated skeleton, two types of information may be used: temporal motion consistency and systematic bias. While temporal motion consistency has received much attention, less attention has been paid to systematic bias. Systematic biases may be non-linear and associated with complex data manifolds. The bias estimation problem exhibits two properties: (1) human action has a certain regularity, especially when certain actions, e.g. golf or tennis, are performed, and (2) the bias is not homogeneous in the data manifold. For example, when a person is facing the camera with no occlusion, the initial estimates may be quite accurate. On the other hand, when a person is standing in a side view with certain hand motion, there is severe occlusion, and the initial estimation may not be correct, as described above with reference to
The learning and use of a random forest regression function for pose correction may offer various advantages in the correction of systematic errors in initial pose estimation. Briefly, a random forest regression function is a function that utilizes a plurality of random splitting/projection decision trees trained via a set of training data to classify input data. In some embodiments, for each leaf node in the tree, a set of bases is learned to represent the data with sparse coefficients (within a subspace, constraints in sparsity may give rise to a more efficient representation). The overall codebook is the set of all bases from all leaf nodes of the trees. After training, observed data may be input into each random decision tree of the random forest regression function, and a result may be selected based upon a most frequent outcome of the plurality of trees.
A random forest approach may be well-suited for correcting systematic errors in initial estimated pose. For example, random forest regression techniques implement ensemble learning, divide-and-conquer techniques, and sparse coding, which are beneficial properties in light of the high dimensionality of initial estimated pose data. Random forest regression techniques implement these properties via voting, randomizing, partitioning, and sparsity. Ensemble learning is implemented through the use of multiple decision trees. Divide-and-conquer techniques are implemented via the use of decision trees, in which training data are recursively partitioned into subsets. Dividing training data into subsets may help solve difficulties in fitting the overall training data to a global mode. Further, the voting/averaging of multiple independent and/or complementary weak learners (e.g. individual decision trees that together make up a decision forest) helps to provide robustness compared to other correction methods. Further robustness may arise from certain randomness in the data and feature selection stage of training the random forest regression function. Finally, sparse representation of the bases may allow high-dimensional data within intrinsic lower dimension to be well represented by sparse samples of high dimension, wherein the robustness of the sparse representation may assume a subspace of a level of regularity, such as well-aligned data.
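The ensemble voting/averaging behavior described above may be sketched as follows; the trivial callable predictors here stand in for trained decision trees and are purely illustrative:

```python
import statistics

# Each "tree" is stood in for by a simple callable predictor; for regression
# the forest averages the individual predictions (for classification it may
# instead vote for the most frequent outcome).
def forest_predict(trees, x):
    predictions = [tree(x) for tree in trees]
    return statistics.mean(predictions)

# three stand-in trees whose predictions differ by a small learned bias
trees = [lambda x, b=b: x + b for b in (-0.1, 0.0, 0.1)]
print(forest_predict(trees, 1.0))  # → 1.0
```

The averaging step is where the complementary weak learners combine into a more robust estimate than any single tree provides.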
Next, in some embodiments, method 500 may comprise, at 508, normalizing and/or scaling the initial estimated skeleton. This may help to correct for skeletal translation and individual body differences. Such normalizing and/or scaling may be performed in any suitable manner. For example, the initial estimated skeletons shown in
where joint j0 is a direct predecessor of joint j on the directed graph representing the skeleton. The design of the transformed coordinates H(ST) is motivated by kinematic body joint motion. H(ST) observes a certain level of invariance to translation, scaling, and individual body changes. It will be understood that this embodiment of a method for normalizing the joint coordinates of an initial estimated skeleton is presented for the purpose of example, and that any other suitable method may be used. Further, in some embodiments, such normalization may be omitted.
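A minimal sketch of predecessor-relative coordinate normalization in the spirit of H(ST) follows; the skeleton graph, the choice of root, and the coordinates are illustrative assumptions, not the disclosed formulation:

```python
# Skeleton graph: each joint maps to its direct predecessor; the root
# ("shoulder" here, an illustrative choice) keeps absolute coordinates.
PREDECESSOR = {"elbow": "shoulder", "hand": "elbow"}

def normalize(joints):
    """Re-express each joint relative to its direct predecessor."""
    relative = {}
    for name, (x, y, z) in joints.items():
        if name in PREDECESSOR:
            px, py, pz = joints[PREDECESSOR[name]]
            relative[name] = (x - px, y - py, z - pz)
        else:
            relative[name] = (x, y, z)  # root joint: absolute coordinates
    return relative

joints = {"shoulder": (0.0, 1.5, 2.5), "elbow": (0.25, 1.25, 2.5), "hand": (0.5, 1.0, 2.5)}
print(normalize(joints)["hand"])  # → (0.25, -0.25, 0.0)
```

Expressing each joint relative to its predecessor is one way such a transform can gain invariance to overall skeletal translation.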
As mentioned above, scaling of the initial estimated skeleton also may be performed. For example, scaling may be performed in embodiments in which skeletal correction is performed by inferring an offset of skeletal joints between the initial estimated skeleton ST and a ground truth skeleton GT, instead of directly predicting the locations of the joints in the corrected skeleton. Predicting the offset of joints may offer various advantages over directly predicting joint locations. As mentioned above, when a user is facing a depth camera with no occlusion, ST may actually be very accurate, and therefore have nearly zero difference compared to GT. In contrast, when a person is in side view of the depth camera, severe occlusions may exist, which may lead to a large and inhomogeneous difference between ST and GT. The correction of ST is thus a manifold learning problem. As a result, certain clusters of ST on the manifold can be directly mapped to, e.g., very low offset values, while predicting direct coordinates of GT based upon ST may involve exploring all possible ST in the data space.
Scaling of an initial estimated skeleton may be performed in any suitable manner. For example, in some embodiments, initial estimated skeletons may be normalized based upon default lengths of the edges between nodes in a template skeleton. To help avoid scaling errors caused by body part occlusion, this may involve selecting a subset of joints unlikely to be occluded, as indicated at 508, to use for such a scaling process. Such joints also may be referred to as stable joints, a set of which may be denoted as Js. Examples of such stable joints include, but are not limited to, joints in the spine of the initial estimated skeleton, central joints in the shoulder and/or hip, as well as joints in the legs. In comparison, joints such as hand and wrist joints may be more likely to be occluded. Thus, edges involving these joints may be prone to errors.
Next, for each skeleton edge between the stable joints and direct predecessor joints, a proportion to the template skeleton edge length may be computed as
where Tj is the jth joint for the template T, which may be fixed. Then, the scale proportion of the initial estimated skeleton is
where δ(•) is an indicator function that is a robust measure to exclude outliers, and where
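The stable-joint scaling computation described above may be sketched as follows; the median here stands in for the indicator-based outlier rejection δ(•), and the joints, edges, and centimeter-scale lengths are hypothetical:

```python
import math
import statistics

# Edges between joints assumed unlikely to be occluded ("stable" joints);
# both the joint names and the coordinates are hypothetical.
STABLE_EDGES = [("hip_center", "spine"), ("spine", "shoulder_center")]

def edge_length(joints, a, b):
    return math.dist(joints[a], joints[b])

def scale_factor(estimated, template):
    """Robust scale proportion of the estimated skeleton to the template."""
    proportions = [
        edge_length(estimated, a, b) / edge_length(template, a, b)
        for a, b in STABLE_EDGES
    ]
    # the median stands in for the indicator-based exclusion of outliers
    return statistics.median(proportions)

template  = {"hip_center": (0, 0, 0), "spine": (0, 30, 0), "shoulder_center": (0, 60, 0)}
estimated = {"hip_center": (0, 0, 0), "spine": (0, 33, 0), "shoulder_center": (0, 66, 0)}
print(scale_factor(estimated, template))  # → 1.1
```

Restricting the computation to stable edges keeps a single occluded limb from corrupting the overall scale estimate.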
Continuing with
An embodiment of the process of skeletal correction is as follows. Given a training set {STi, GTi}, where STi and GTi are the initial estimated skeleton and ground truth respectively, a random forest subspace regression function ƒ: ST→D may be trained, where D is the offset of ST from GT and λ is the above-described scale factor to be used for normalizing the initial estimated skeleton. After training the function, an offset D may be determined for an observed initial estimated skeleton using this function, as indicated at 516. The offset D may then be added to the initial estimated skeleton, as indicated at 518, to obtain a corrected pose in the form of a corrected skeleton.
As a more detailed example, the offset Δj for a joint j may be expressed as
where D=(Δ1, . . . , Δn) for each skeleton of n joints. For an entire sequence of m images, d=(D1, . . . , Dm). From the offsets, the corrected skeleton CT may be determined by CT=ST+λƒ(ST).
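The correction step CT=ST+λƒ(ST) may be sketched as follows, with a precomputed offset dictionary standing in for the output of the trained regression function:

```python
# The trained regression function is stood in for by a precomputed offset
# dictionary; joints absent from it receive a zero offset.
def apply_offsets(skeleton, offsets, scale):
    """Return CT = ST + scale * offset, joint by joint."""
    corrected = {}
    for joint, (x, y, z) in skeleton.items():
        dx, dy, dz = offsets.get(joint, (0.0, 0.0, 0.0))
        corrected[joint] = (x + scale * dx, y + scale * dy, z + scale * dz)
    return corrected

st = {"hand_left": (1.0, 1.0, 2.0), "head": (0.0, 2.0, 2.0)}
predicted_offsets = {"hand_left": (0.5, -0.25, 0.0)}  # stand-in for f(H(ST))
print(apply_offsets(st, predicted_offsets, scale=1.0)["hand_left"])  # → (1.5, 0.75, 2.0)
```

Note that an accurately estimated joint (the head here) simply receives a near-zero offset and passes through unchanged, consistent with the inhomogeneous-bias observation above.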
The random forest subspace regression function ƒ: ST→D may be trained in any suitable manner. For example, a training set may be represented by S={(st,gt)k} for k=1 through K (where st and gt represent the initial estimated skeleton and ground truth for that initial estimated skeleton). For simplicity, K=1 in this discussion. From the coordinate normalization described above, one may obtain h(st)=(H(ST1), . . . , H(STm)), where each H(ST)=(rj,cj; j=1, . . . , n). Using the offset computation Δj, the offset d=(D1, . . . , Dm) may be computed. Thus, the goal is to predict the mapping h(st)→d.
First, a function is learned to directly predict the mapping f: H(ST)→D by assuming each pose is independent. From this view, the training set may be rewritten as S=(H(STi), Di) for i=1 to m. As mentioned above, a random forest regression function includes an ensemble of tree predictors that naturally perform data partitioning, abstraction, and robust estimation. For the task of regression, tree predictors take on vector values, and the forest votes for the most probable value. Each tree in the forest comprises split nodes and leaf nodes. Each split node stores a feature index with a corresponding threshold ti to decide whether to branch to the left or right sub-tree, and each leaf node stores predictions.
To learn the random forest regression function ƒ: H(ST)→D, following a greedy tree training algorithm, each tree in the forest is learned by recursively partitioning the training set into left Sl and right Sr subsets according to a best splitting strategy
where e(•) is an error function standing for the uncertainty of the set, and θ is a set of splitting candidates. If a number of training samples corresponding to the node (node size) is larger than a maximal κ, and
is satisfied, then recurse for the left and right subsets Sl(θ*) and Sr(θ*), respectively.
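The greedy split selection described above may be sketched as follows, using a sum-of-squared-deviations stand-in for the error function e(•) and one-dimensional (feature, target) samples for simplicity:

```python
import statistics

def split_error(targets):
    """Uncertainty e(.) of a set: sum of squared deviations from the mean."""
    if len(targets) < 2:
        return 0.0
    mu = statistics.mean(targets)
    return sum((t - mu) ** 2 for t in targets)

def best_split(samples):
    """Pick the threshold minimizing the summed error of the two child subsets."""
    best_threshold, best_err = None, None
    for threshold, _ in samples:  # candidate thresholds drawn from the data
        left  = [t for f, t in samples if f < threshold]
        right = [t for f, t in samples if f >= threshold]
        err = split_error(left) + split_error(right)
        if best_err is None or err < best_err:
            best_threshold, best_err = threshold, err
    return best_threshold

samples = [(0.1, 0.0), (0.2, 0.1), (0.8, 1.0), (0.9, 1.1)]
print(best_split(samples))  # → 0.8
```

In a full implementation the same selection would run over many randomly chosen feature indices and thresholds, and recursion would continue on each child subset until the stopping criterion is met.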
Any suitable error function may be selected. One example is the simple standard tree node splitting function comprising the root mean squared differences, which may be expressed as
In the training stage, once a tree t is learned, a set of training samples
would fall into a particular leaf node lƒ. Instead of storing all of the samples Stlƒ for each leaf node lƒ, an abstraction may be performed. For example, one method may comprise storing the mean
In the testing stage, given a test example ST=(x̂j, cj; j=1, . . . , n), for each tree t, the prediction process begins at the root, then recursively branches left or right. The test example then reaches the leaf node Lt(H(ST)) in tree t. The prediction given by tree t is Ft(H(ST))=δ(lƒ=Lt(H(ST)))·
The mean over the predictions of the trees may be considered as the output of the learned regression function ƒ(H(ST)).
Any suitable type of random forest function may be used. Examples include, but are not limited to, extremely randomized trees (ERTs) and random projection trees (RPTs). ERTs randomize both the feature selection and the quantization threshold searching process, which may help to make the trees less correlated. The samples (image patches) in each leaf node are assumed to form a small cluster in the feature space. The leaves in the forest are uniquely indexed and serve as the codes for the codebook. When a query sample reaches a leaf node, the index of that leaf is assigned to the query sample. A histogram then may be formed by accumulating the indices of the leaf nodes.
An RPT, which is a variant of the k-d tree, splits the data set along one coordinate at the median and recursively builds the tree. Based on the realization that high-dimensional data often lie on a low-dimensional manifold, an RPT splits the samples into two roughly balanced sets according to a randomly generated direction. This randomly generated direction approximates the principal component direction, and can adapt to the low-dimensional manifold. The RPT naturally leads to tree-based vector quantization, and an ensemble of RPTs can be used as a codebook.
In embodiments that implement sparse representation of leaf node bases, instead of splitting each sample until the sample cannot be split anymore, splitting may be stopped early. Then, a set of bases may be identified that provide a robust reconstruction of the samples in that node, wherein the identified bases may serve as the codes of the codebook. One possible advantage of sparse coding via random forest functions compared to other sparse coding techniques (e.g. vector quantization, spatial pyramid matching, Laplace sparse coding) is efficiency. Utilizing random forest techniques, the sparse coding is performed in subspaces, which may reduce the computational burden. Another possible advantage is the potential promotion of the discriminative ability, as label information may be used in the tree splitting process, which may allow the resulting codebook to have more discriminative power.
A random forest subspace regression with sparse representation of bases at leaf nodes may be represented in any suitable manner. One example is as follows. Given a set of training data S={xi}i=1n with xiεRD, in a supervised setting, each xi is also associated with a label yiεY={0, . . . , K}. Thus, S={(xi, yi)}i=1n. The goal is to learn a codebook B comprising a set of bases, wherein B={bi}i=1m and bεRD such that
and such that ∀i, Σj|wij|≦τ. The first term in this equation minimizes the reconstruction error, and the second term gives the sparsity constraints on the reconstruction coefficients. In codebook learning, each bj serves as a code, and the reconstruction coefficients with respect to the codes are pooled to form a histogram.
In this equation, the norm of bj may be arbitrarily large, making wij arbitrarily small. Thus, further constraints may be imposed on bj. For example, a constraint may be made that all of the bases in the codebook be from the training set S. With this constraint, the equation above regarding the set of bases may be transformed into
such that Σjvj≦m, vjε{0, 1}, and ∀i, Σj|wij|≦τ. Here, vj serves as an indicator value that is a member of the set {0, 1}, and B={xj: xjεS, vj=1}. While vj may add additional complexity, it also may allow the search space to be greatly reduced.
After an optimal basis set B* is found, for a new sample x, reconstruction coefficients w may be computed via
such that Σj|wij|≦τ. The vector w can be used to characterize the sample x.
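The computation of sparse reconstruction coefficients w for a new sample x given a basis set may be sketched as follows; the greedy matching-pursuit loop here is an illustrative substitute for the constrained optimization above, not the disclosed procedure:

```python
def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def sparse_coefficients(x, bases, n_nonzero=2):
    """Greedily pick the most-correlated basis and subtract its contribution."""
    residual = list(x)
    w = [0.0] * len(bases)
    for _ in range(n_nonzero):  # n_nonzero enforces the sparsity budget
        scores = [abs(dot(residual, b)) for b in bases]
        j = scores.index(max(scores))
        c = dot(residual, bases[j]) / dot(bases[j], bases[j])
        w[j] += c
        residual = [r - c * bj for r, bj in zip(residual, bases[j])]
    return w

# an orthonormal toy basis; real bases would be learned per leaf node
bases = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
x = (2.0, 0.0, 3.0)
print(sparse_coefficients(x, bases))  # → [2.0, 0.0, 3.0]
```

The resulting coefficient vector w, mostly zero by construction, is the sparse characterization of x described above.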
While learning a codebook of size greater than, for example, 5,000 from tens of thousands of samples may be computationally demanding, data of real-world complexity may live in complex manifolds. Thus, a divide-and-conquer strategy to partition the data into local subspaces may allow more efficient learning of bases within a subspace for a sparse representation.
As mentioned above, any suitable random forest regression method may be used to learn a codebook for pose correction, including but not limited to ERT and RPT. Both ERT and RPT partition samples recursively in a top-down manner. ERT adopts the label information and uses normalized Shannon entropy as a criterion to select features. In contrast, RPT is unsupervised and does not utilize label information. Instead, it splits the data via a hyperplane normal to a randomly generated projection basis.
Both ERT and RPT may build the trees to a fine scale and use the leaf nodes as codes. However, as mentioned earlier, instead of building the trees to a very deep level, random forest sparse coding (RFSC) for use in a random forest subspace regression may stop at some relatively higher level (for example, when the number of samples is less than M). At such nodes, the local manifold structure is assumed to be relatively simple and regularized. RFSC seeks a set of bases to sparsely represent the subspaces at those nodes. As one non-limiting example, when the splitting process stops, there may be approximately 80-200 samples (depending upon codebook size) and approximately 3-10 bases per leaf node. Thus, the computational overhead of subspace learning may not be significant compared with directly pursuing bases from the entire sample set.
In some embodiments, a plurality of random forest subspace regression functions may be performed in a cascaded manner, as indicated in
As mentioned above, pose correction also may be utilized to directly assign a pose tag based upon an initial estimated skeleton. This is shown at 518 in
In some embodiments, motion consistency may be taken into account to assist in pose correction by applying a temporal constraint, as indicated at 520 in
A temporal constraint may be applied in any suitable manner. For example, in the instance of pose tag assignment, to add a temporal constraint, a mean shift may be applied to seek multiple modes {Γ} from the votes of the trees. Considering that the multiple modes of the nth frame are {Γ(n)}, a mode Γ*(n) may be selected such that
where α is a weight factor, hp(Γ(n)) is the probability mass function of Γ(n) and
where σ is the tolerable variance between two successive frames.
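The mode selection under a temporal constraint may be sketched as follows; the Gaussian-style consistency term, the weights, and the candidate values are illustrative assumptions rather than the disclosed formulation:

```python
import math

def select_mode(modes, prev_tag, alpha=0.5, sigma=0.1):
    """Pick the (tag, mass) mode balancing probability mass and temporal consistency."""
    def score(tag, mass):
        # consistency decays as the candidate tag departs from the previous frame's tag
        consistency = math.exp(-((tag - prev_tag) ** 2) / (2 * sigma ** 2))
        return alpha * mass + (1 - alpha) * consistency
    return max(modes, key=lambda mode: score(*mode))[0]

# candidate (tag, probability_mass) modes found by mean shift for one frame
modes = [(0.30, 0.6), (0.90, 0.7)]
print(select_mode(modes, prev_tag=0.32))  # → 0.3
```

Here the slightly less probable mode wins because it is far more consistent with the previous frame, illustrating how the temporal term suppresses jitter between successive tags.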
A temporal constraint may be applied to a skeletal correction process in a similar manner. For example, where real-time calculation is desired, one approach may follow a causal model, such that a current prediction depends on past/current inputs/outputs. In such a model, for the ith input estimated skeleton STi, its offset may be computed as
where E(•) is an energy function defined as
E(D|STi,STi-1,Di-1)=α(−log(PH(ST
where α is a weight factor. Equation 15 may be minimized by Gibbs sampling, which minimizes a function cyclically with respect to the coordinate variables. Finally, the corrected skeleton CTi is given by CTi=STi+λ(STi)Di.
The above-described embodiments may help to provide more robust pose estimation than nearest neighbor or other methods, such as Gaussian process regressors and support vector regressors. Further, in embodiments that perform skeletal correction, determination of the offset of joints may provide a more robust corrected skeleton than regression of an absolute joint position. It will be understood that parameters related to the learning of a random forest subspace regression function as disclosed herein, such as the number of trees and the leaf node size, may be selected to have any suitable values. Examples of suitable values include, but are not limited to, 10-50 trees and leaf node sizes of 1-20 bases.
In some embodiments, the above described methods and processes may be tied to a computing system including one or more computers. In particular, the methods and processes described herein may be implemented as a computer application, computer service, computer API, computer library, and/or other computer program product.
Computing system 700 includes a logic subsystem 702 and a data-holding subsystem 704. Computing system 700 may optionally include a display subsystem 706, communication subsystem 708, and/or other components not shown in
Logic subsystem 702 may include one or more physical devices configured to execute one or more instructions. For example, logic subsystem 702 may be configured to execute one or more instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more devices, or otherwise arrive at a desired result.
Logic subsystem 702 may include one or more processors that are configured to execute software instructions. Additionally or alternatively, logic subsystem 702 may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. Processors of logic subsystem 702 may be single core or multicore, and the programs executed thereon may be configured for parallel or distributed processing. Logic subsystem 702 may optionally include individual components that are distributed throughout two or more devices, which may be remotely located and/or configured for coordinated processing. One or more aspects of logic subsystem 702 may be virtualized and executed by remotely accessible networked computing systems configured in a cloud computing configuration.
Data-holding subsystem 704 may include one or more physical, non-transitory, devices configured to hold data and/or instructions executable by logic subsystem 702 to implement the herein described methods and processes. When such methods and processes are implemented, the state of data-holding subsystem 704 may be transformed (e.g., to hold different data).
Data-holding subsystem 704 may include removable media and/or built-in devices. Data-holding subsystem 704 may include optical memory devices (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory devices (e.g., RAM, EPROM, EEPROM, etc.) and/or magnetic memory devices (e.g., hard disk drive, floppy disk drive, tape drive, MRAM, etc.), among others. Data-holding subsystem 704 may include devices with one or more of the following characteristics: volatile, nonvolatile, dynamic, static, read/write, read-only, random access, sequential access, location addressable, file addressable, and content addressable. In some embodiments, logic subsystem 702 and data-holding subsystem 704 may be integrated into one or more common devices, such as an application specific integrated circuit or a system on a chip.
It is to be appreciated that data-holding subsystem 704 includes one or more physical, non-transitory devices. In contrast, in some embodiments aspects of the instructions described herein may be propagated in a transitory fashion by a pure signal (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for at least a finite duration. Furthermore, data and/or other forms of information pertaining to the present disclosure may be propagated by a pure signal.
When included, display subsystem 706 may be used to present a visual representation of data held by data-holding subsystem 704. As the herein described methods and processes change the data held by the data-holding subsystem, and thus transform the state of the data-holding subsystem, the state of display subsystem 706 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 706 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic subsystem 702 and/or data-holding subsystem 704 in a shared enclosure, or such display devices may be peripheral display devices.
When included, communication subsystem 708 may be configured to communicatively couple computing system 700 with one or more other computing systems. Communication subsystem 708 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As nonlimiting examples, the communication subsystem may be configured for communication via a wireless telephone network, a wireless local area network, a wired local area network, a wireless wide area network, a wired wide area network, etc. In some embodiments, the communication subsystem may allow computing system 700 to send and/or receive messages to and/or from other devices via a network such as the Internet.
It is to be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated may be performed in the sequence illustrated, in other sequences, in parallel, or in some cases omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.
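As a non-limiting illustration of the claimed correction flow, the sketch below maps each initial estimated joint to an offset and applies the offsets to yield a corrected skeleton. The joint names, offset values, and the `forest_offset` function are hypothetical stand-ins for a trained random forest subspace regressor, not the patented implementation.

```python
def forest_offset(joint):
    """Stand-in regressor: average the per-tree offset votes for one joint.

    A trained random forest subspace regressor would derive these votes from
    the input joint; here the votes are fixed for illustration.
    """
    tree_votes = [(0.1, 0.0, -0.1), (0.3, 0.0, -0.3), (0.2, 0.0, -0.2)]
    n = len(tree_votes)
    return tuple(sum(vote[axis] for vote in tree_votes) / n for axis in range(3))

def correct_skeleton(initial_skeleton):
    """Apply the regressed per-joint offsets to the initial estimated skeleton."""
    corrected = {}
    for name, (x, y, z) in initial_skeleton.items():
        dx, dy, dz = forest_offset((x, y, z))
        corrected[name] = (x + dx, y + dy, z + dz)
    return corrected

# Example: correct a two-joint initial estimated skeleton.
initial = {"head": (0.0, 1.8, 2.1), "left_hand": (0.5, 1.2, 2.0)}
print(correct_skeleton(initial))
```

In the claimed method the offsets come from a regression function trained on ground-truth poses; the averaging of per-tree votes above mirrors how a random forest combines its trees' predictions.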
Inventors: Shen, Wei; Guo, Baining; Tu, Zhuowen; Deng, Ke; Leyvand, Tommer
Cited By:
Patent | Priority | Assignee | Title
11861779 | Dec 14 2021 | Adobe Inc. | Digital object animation using control points
9747493 | Sep 23 2014 | AMS INTERNATIONAL AG | Face pose rectification method and apparatus
References Cited:
Patent | Priority | Assignee | Title
7804999 | Mar 17 2005 | Siemens Medical Solutions USA, Inc | Method for performing image based regression using boosting
20090154796 | | |
20100278384 | | |
Assignments:
Executed on | Assignor | Assignee | Conveyance | Reel/Frame
Feb 04 2012 | DENG, KE | Microsoft Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 027897/0338
Feb 05 2012 | SHEN, WEI | Microsoft Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 027897/0338
Feb 06 2012 | TU, ZHUOWEN | Microsoft Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 027897/0338
Feb 06 2012 | LEYVAND, TOMMER | Microsoft Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 027897/0338
Feb 07 2012 | GUO, BAINING | Microsoft Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 027897/0338
Mar 20 2012 | | Microsoft Corporation | (assignment on the face of the patent) |
Oct 14 2014 | Microsoft Corporation | Microsoft Technology Licensing, LLC | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 034544/0541
Date | Maintenance Fee Events |
Aug 10 2017 | M1551: Payment of Maintenance Fee, 4th Year, Large Entity. |
Jun 23 2021 | M1552: Payment of Maintenance Fee, 8th Year, Large Entity. |
Date | Maintenance Schedule |
Feb 25 2017 | 4 years fee payment window open |
Aug 25 2017 | 6 months grace period start (w surcharge) |
Feb 25 2018 | patent expiry (for year 4) |
Feb 25 2020 | 2 years to revive unintentionally abandoned end. (for year 4) |
Feb 25 2021 | 8 years fee payment window open |
Aug 25 2021 | 6 months grace period start (w surcharge) |
Feb 25 2022 | patent expiry (for year 8) |
Feb 25 2024 | 2 years to revive unintentionally abandoned end. (for year 8) |
Feb 25 2025 | 12 years fee payment window open |
Aug 25 2025 | 6 months grace period start (w surcharge) |
Feb 25 2026 | patent expiry (for year 12) |
Feb 25 2028 | 2 years to revive unintentionally abandoned end. (for year 12) |