The animation generation system includes: an avatar generation module for generating an avatar in a virtual space, wherein the avatar has a set of skeletons and a skin, and the movable nodes of the set of skeletons are manipulated so that a motion of the skin is induced; and an avatar manipulation module for manipulating the movable nodes, including: a position mark which is moved to at least one first real position in a real space; at least one control mark which is moved to at least one second real position in the real space; a video capturing unit for capturing the images of the real space; and an arithmetic unit for identifying the first real position and the second real position from the images of the real space, and converting the first real position into a first virtual position and the second real position into a second virtual position.
19. A multi-view video generation method for synthesizing a multi-view video and drawing a virtual model in the multi-view video, comprising:
placing a mark on at least one real position in a real space;
capturing the images of the real space by using at least two video capturing units;
identifying at least two real positions from the images of the real space;
converting the at least two real positions into at least two virtual positions in a virtual space; and
synthesizing a multi-view video, comprising:
drawing one of the images captured by the video capturing units as a background;
drawing a virtual model corresponding to the mark on one of the at least two virtual positions onto the background to form a result video; and
synthesizing the background and the virtual model by using known multi-view synthesizing methods to generate a multi-view video, and
reading out skin information in respect of a skin of an avatar, and skeleton information in respect of a set of template skeletons of the avatar, wherein the skin information is composed of data of a plurality of mesh vertices, and the skeleton information comprises geometric data of each bone of the set of template skeletons and the linkage relationship between the bones of the set of template skeletons;
analyzing the skin information to obtain a plurality of appearance features and a plurality of trunk features of the skin;
adjusting the size of the set of template skeletons according to the appearance features to generate the set of skeletons of the avatar, and fitting the set of skeletons to the appearance features;
fitting the set of skeletons to the trunk features of the skin;
calculating an envelope range of a bone of the set of skeletons according to a plurality of mesh vertices in proximity to the bone;
calculating the weight of a mesh vertex of the skin information relative to a bone according to the envelope range, wherein the weight of the mesh vertex indicates how the mesh vertex is affected by the movement of the bone when the bone is moving; and
outputting the avatar to the arithmetic unit, wherein the outputted avatar comprises data in relation to the skin, the set of skeletons and the relationships therebetween.
12. An animation generation method for generating an avatar in a virtual space, wherein the avatar has a set of skeletons and a skin attached to the set of skeletons, and the set of skeletons has a plurality of movable nodes, and the movable nodes are manipulated so that a motion of the skin is induced; comprising:
moving a position mark to at least one first real position in a real space;
moving at least one control mark to at least one second real position in the real space;
capturing the images of the real space;
identifying the first real position and the second real position from the images of the real space;
converting the first real position into a first virtual position where the avatar is in the virtual space;
converting the second real position into a second virtual position where one of the movable nodes of the avatar is in the virtual space, wherein the relative motions between the second virtual position and the first virtual position make the avatar perform a series of successive motions in the virtual space,
drawing one of the images of the real space as a background, and drawing a designated object on the first virtual position onto the background to generate an augmented reality (AR) animation,
reading out skin information in respect of the skin, and skeleton information in respect of a set of template skeletons, wherein the skin information is composed of data of a plurality of mesh vertices, and the skeleton information comprises geometric data of each bone of the set of template skeletons and the linkage relationship between the bones of the set of template skeletons;
analyzing the skin information to obtain a plurality of appearance features and a plurality of trunk features of the skin;
adjusting the size of the set of template skeletons according to the appearance features to generate the set of skeletons of the avatar, and fitting the set of skeletons to the appearance features;
fitting the set of skeletons to the trunk features of the skin;
calculating an envelope range of a bone of the set of skeletons according to a plurality of mesh vertices in proximity to the bone;
calculating the weight of a mesh vertex of the skin information relative to a bone according to the envelope range, wherein the weight of the mesh vertex indicates how the mesh vertex is affected by the movement of the bone when the bone is moving; and
outputting the avatar to the arithmetic unit, wherein the outputted avatar comprises data in relation to the skin, the set of skeletons and the relationships therebetween.
18. A multi-view animation generation system, comprising:
at least one mark which is moved by users to at least one first real position in a real space;
an arithmetic unit, coupled to at least two video capturing units,
wherein the at least two video capturing units capture image streams of the real space and transmit the image streams to the arithmetic unit, and the arithmetic unit identifies at least two first real positions from the images of the real space and converts the at least two first real positions into at least two virtual positions in a virtual space;
a multi-view animation synthesizing unit, coupled to the arithmetic unit, for drawing one of the images captured by the video capturing units as a background;
drawing a virtual model corresponding to the mark on one of the at least two virtual positions onto the background; synthesizing the background and the virtual model to generate an animation; and transmitting the animation to the multi-view display unit,
an avatar generation module, comprising:
a readout unit for reading out skin information in respect of a skin of an avatar, and skeleton information in respect of a set of template skeletons of the avatar, wherein the skin information is composed of data of a plurality of mesh vertices, and the skeleton information comprises geometric data of each bone of the set of template skeletons and the linkage relationship between bones of the set of template skeletons;
an analyzing unit for analyzing the skin information to obtain a plurality of appearance features and a plurality of trunk features of the skin;
a rough fitting unit for adjusting the size of the set of template skeletons according to the appearance features to generate the set of skeletons of the avatar, and fitting the set of skeletons to the appearance features;
a precise fitting unit for fitting the set of skeletons to the trunk features of the skin;
an envelope range calculating unit for calculating an envelope range of a bone of the set of skeletons according to a plurality of mesh vertices in proximity to the bone;
a mesh vertices weight calculating unit for calculating the weight of a mesh vertex of the skin information relative to a bone according to the envelope range, wherein the weight of the mesh vertex indicates how the mesh vertex is affected by the movement of the bone when the bone is moving;
an output unit for outputting the avatar to the arithmetic unit, wherein the outputted avatar comprises data in relation to the skin, the set of skeletons and the relationships therebetween.
1. An animation generation system, comprising:
an avatar generation module for generating an avatar in a virtual space, wherein the avatar has a set of skeletons and a skin attached to the set of skeletons, and the set of skeletons has a plurality of movable nodes, and the movable nodes are manipulated so that a motion of the skin is induced; and
an avatar manipulation module for manipulating the movable nodes of the avatar, comprising:
a position mark which is moved by users to at least one first real position in a real space;
at least one control mark which is moved by the users to at least one second real position in the real space;
a video capturing unit for capturing the images of the real space;
an arithmetic unit, coupled to the video capturing unit, for identifying the first real position and the second real position from the images of the real space, and converting the first real position into a first virtual position where the avatar is in the virtual space, and the second real position into a second virtual position where one of the movable nodes of the avatar is in the virtual space,
wherein the relative motions between the second virtual position and the first virtual position make the avatar perform a series of successive motions in the virtual space, and
one of the images captured by the video capturing unit is drawn as a background, while one of the images of a designated object corresponding to the position mark is drawn onto the background, according to the first virtual position, to generate an augmented reality (AR) animation,
wherein the avatar generation module further comprises:
a readout unit for reading out skin information in respect of the skin, and skeleton information in respect of a set of template skeletons, wherein the skin information is composed of data of a plurality of mesh vertices, and the skeleton information comprises geometric data of each bone of the set of template skeletons and the linkage relationship between the bones of the set of template skeletons;
an analyzing unit for analyzing the skin information to obtain a plurality of appearance features and a plurality of trunk features of the skin;
a rough fitting unit for adjusting the size of the set of template skeletons according to the appearance features to generate the set of skeletons of the avatar, and fitting the set of skeletons to the appearance features;
a precise fitting unit for fitting the set of skeletons to the trunk features of the skin;
an envelope range calculating unit for calculating an envelope range of a bone of the set of skeletons according to a plurality of mesh vertices in proximity to the bone;
a mesh vertices weight calculating unit for calculating the weight of a mesh vertex of the skin information relative to a bone according to the envelope range, wherein the weight of the mesh vertex indicates how the mesh vertex is affected by the movement of the bone when the bone is moving; and
an output unit for outputting the avatar to the arithmetic unit, wherein the outputted avatar comprises data in relation to the skin, the set of skeletons and the relationships therebetween.
2. The animation generation system as claimed in
3. The animation generation system as claimed in
4. The animation generation system as claimed in
5. The animation generation system as claimed in
6. The animation generation system as claimed in
7. The animation generation system as claimed in
8. The animation generation system as claimed in
rule (1): when a mesh vertex is inside the inner layer of the envelope of a bone, the weight of the mesh vertex in relation to the bone is 1.0, which indicates that the mesh vertex is totally affected by the movement of the bone when the bone is moving;
rule (2): when a mesh vertex is outside of the outer layer of the envelope of a bone, the weight of the mesh vertex in relation to the bone is 0.0, which indicates that the mesh vertex is totally unaffected by the movement of the bone when the bone is moving;
rule (3): when a mesh vertex is between the inner and outer layers of the envelope range of a bone, the weight of the mesh vertex in relation to the bone decreases from 1.0 to 0.0, which indicates that the mesh vertex is partially affected by the movement of the bone according to the distance between the mesh vertex and the bone;
rule (4): when a mesh vertex does not belong to any envelope range of any bone in accordance with the former three rules, the weight of the mesh vertex in relation to a bone closest to the mesh vertex is 1.0, which indicates that the mesh vertex is totally affected by the movement of the bone closest to the mesh vertex; and
rule (5): when all the weights of the mesh vertices affected by all the bones are recorded to form a weight table, each value of the weight of the mesh vertices on the weight table has to be normalized so that the sum of all the values is 1.0.
9. The animation generation system as claimed in
a display unit, coupled to the arithmetic unit, for displaying the virtual space and the avatar.
10. The animation generation system as claimed in
11. The animation generation system as claimed in
13. The animation generation method as claimed in
14. The animation generation method as claimed in
15. The animation generation method as claimed in
16. The animation generation method as claimed in
17. The animation generation method as claimed in
rule (1): when a mesh vertex is inside the inner layer of the envelope of a bone, the weight of the mesh vertex in relation to the bone is 1.0, which indicates that the mesh vertex is totally affected by the movement of the bone when the bone is moving;
rule (2): when a mesh vertex is outside of the outer layer of the envelope of a bone, the weight of the mesh vertex in relation to the bone is 0.0, which indicates that the mesh vertex is totally unaffected by the movement of the bone when the bone is moving;
rule (3): when a mesh vertex is between the inner and outer layers of the envelope range of a bone, the weight of the mesh vertex in relation to the bone decreases from 1.0 to 0.0, which indicates that the mesh vertex is partially affected by the movement of the bone according to the distance between the mesh vertex and the bone;
rule (4): when a mesh vertex does not belong to any envelope range of any bone in accordance with the former three rules, the weight of the mesh vertex in relation to a bone closest to the mesh vertex is 1.0, which indicates that the mesh vertex is totally affected by the movement of the bone closest to the mesh vertex; and
rule (5): when all the weights of the mesh vertices affected by all the bones of the set of skeletons are recorded to form a weight table, each value of the weight of the mesh vertices on the weight table has to be normalized so that the sum of all the values is 1.0.
This Non-provisional application claims priority under 35 U.S.C. §119(a) on Provisional Patent Application No. 61/290,848, filed in the United States of America on Dec. 29, 2009, the entire contents of which are hereby incorporated by reference.
1. Technical Field
The present disclosure relates to animation generation systems and methods, and in particular relates to animation generation systems and methods for manipulating avatars in an animation.
2. Description of the Related Art
Due to the increasing popularity of the Internet, network applications and online multiplayer games have grown in membership and usage. As a result, global revenues for digital content providers offering such network applications and online games have reached around US$35 billion per year.
An avatar represents a computer user on the Internet, in the form of a one-dimensional (1D) username or a two-dimensional (2D) icon (picture). Nowadays, an avatar is usually a three-dimensional (3D) model of the kind commonly used in computer games. Conventionally, constructing a 3D avatar involves producing a 2D image, building its 3D mesh details, constructing its skeleton, and so on. Each of these steps requires considerable time and effort, so it is difficult for an ordinary user to construct a personalized 3D virtual avatar.
Accordingly, an integrated system or method, wherein a personalized avatar can be easily generated and manipulated, would fulfill users' enjoyment of such network applications and online games.
The purpose of the present disclosure is to provide systems and methods for rapidly and efficiently generating a 3D avatar and for manipulating the 3D avatar.
The present disclosure provides an animation generation system. The animation generation system comprises an avatar generation module for generating an avatar in a virtual space, wherein the avatar has a set of skeletons and a skin attached to the set of skeletons, and the set of skeletons has a plurality of movable nodes, and the movable nodes are manipulated so that a motion of the skin is induced; and an avatar manipulation module for manipulating the movable nodes of the avatar, comprising a position mark which is moved by users to at least one first real position in a real space; at least one control mark which is moved by the users to at least one second real position in the real space; a video capturing unit for capturing the images of the real space; an arithmetic unit, coupled to the video capturing unit, for identifying the first real position and the second real position from the images of the real space, and converting the first real position into a first virtual position where the avatar is in the virtual space, and the second real position into a second virtual position where one of the movable nodes of the avatar is in the virtual space, wherein the relative motions between the second virtual position and the first virtual position make the avatar perform a series of successive motions in the virtual space, and one of the images captured by the video capturing unit is drawn as a background, while one of the images of a designated object corresponding to the position mark is drawn onto the background, according to the first virtual position, to generate an Augmented Reality (AR) animation.
The present disclosure provides an animation generation method for generating an avatar in a virtual space. The avatar has a set of skeletons and a skin attached to the set of skeletons, and the set of skeletons has a plurality of movable nodes, and the movable nodes are manipulated so that a motion of the skin is induced. The animation generation method comprises: moving a position mark to at least one first real position in a real space; moving at least one control mark to at least one second real position in the real space; capturing the images of the real space; identifying the first real position and the second real position from the images of the real space; converting the first real position into a first virtual position where the avatar is in the virtual space; converting the second real position into a second virtual position where one of the movable nodes of the avatar is in the virtual space, wherein the relative motions between the second virtual position and the first virtual position make the avatar perform a series of successive motions in the virtual space; drawing one of the images of the real space as a background; and drawing a designated object on the first virtual position onto the background to generate an Augmented Reality (AR) animation.
The present disclosure provides a multi-view animation generation system. The multi-view animation generation system comprises at least one mark which is moved by users to at least one first real position in a real space; an arithmetic unit, coupled to at least two video capturing units, wherein the at least two video capturing units capture image streams of the real space and transmit the image streams to the arithmetic unit, and the arithmetic unit identifies at least two first real positions from the images of the real space and converts the at least two first real positions into at least two virtual positions in a virtual space; and a multi-view animation synthesizing unit, coupled to the arithmetic unit, for drawing one of the images captured by the video capturing units as a background; drawing a virtual model corresponding to the mark on one of the at least two virtual positions onto the background; synthesizing the background and the virtual model by using known 3D techniques to generate an animation; and transmitting the animation to the multi-view display unit.
The present disclosure provides a multi-view video generation method for synthesizing a multi-view video and drawing a virtual model in the multi-view video. The multi-view video generation method comprises: placing a mark on at least one real position in a real space; capturing the images of the real space by using at least two video capturing units; identifying at least two real positions from the images of the real space; converting the at least two real positions into at least two virtual positions in a virtual space; and synthesizing a multi-view video, comprising: drawing one of the images captured by the video capturing units as a background; drawing a virtual model corresponding to the mark on one of the at least two virtual positions onto the background to form a result video; and synthesizing the background and the virtual model by using known multi-view synthesizing methods to generate a multi-view video.
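As a rough, purely illustrative sketch of the per-frame flow described above, the loop below pairs each captured image with a virtual position of the mark, draws the virtual model over that background, and packs the per-view results together. The renderer callable and the naive side-by-side packing stand in for whichever rendering engine and known multi-view synthesizing method an implementation actually uses; every name here is hypothetical.

```python
import numpy as np

def synthesize_multiview_frame(camera_images, virtual_positions, virtual_model, renderer):
    """Compose one multi-view frame from per-camera images and a mark's virtual positions.

    camera_images    : list of H x W x 3 arrays, one per video capturing unit
    virtual_positions: list of 3D positions of the mark, already converted into the virtual space
    virtual_model    : the model to be drawn at the mark's virtual position
    renderer         : any render(model, position, background) -> image callable
    """
    per_view_results = []
    for image, position in zip(camera_images, virtual_positions):
        # Draw the captured image as the background of this view.
        background = image.copy()
        # Draw the virtual model corresponding to the mark onto the background
        # at the virtual position, forming the result video frame for this view.
        per_view_results.append(renderer(virtual_model, position, background))
    # Combine the per-view results; a real system would use a proper
    # multi-view synthesizing method instead of this naive side-by-side packing.
    return np.concatenate(per_view_results, axis=1)
```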
A detailed description is given in the following embodiments with reference to the accompanying drawings.
The present disclosure can be more fully understood by reading the subsequent detailed description and examples with references made to the accompanying drawings, wherein:
The following description is made for the purpose of illustrating the general principles of the disclosure and should not be taken in a limiting sense. The scope of the disclosure is best determined by reference to the appended claims.
The analyzing unit 112 of the present disclosure is used to analyze the skin information to obtain a plurality of appearance features and a plurality of trunk features. Specifically, the appearance features are the protruding points on the skin.
The rough fitting unit 113 of the present disclosure is used to adjust the size of the template skeletons according to the appearance features to generate the skeletons of the avatar, and to fit the template skeletons to the appearance features. In an embodiment, the present disclosure can adjust the size of the template skeletons automatically by using Inverse Kinematics. However, in another embodiment, the size of the template skeletons may be adjusted by users manually. Although the template skeletons are not directly obtained from the model shot by the camcorder, since the shapes of human skeletons are similar to each other, the personalized skeletons belonging to the model may be constructed after the scale and the size of the template skeletons are adjusted by the rough fitting unit 113. After adjusting the size of the skeletons, the personalized set of skeletons is fitted to the appearance features of the avatar skin. Specifically, this fitting procedure further comprises a rotating procedure and a locating procedure. The rotating procedure firstly rotates the skin of the avatar toward the +Z axis in the virtual space, and then rotates the top end of the appearance features toward the +Y axis in the virtual space. The locating procedure respectively locates each end of the set of skeletons to a specific coordinate. For example, the locating procedure may: (1) locate the top end of the head to the coordinate which has the highest Y value; (2) locate the end of the left hand to the coordinate which has the highest X value; (3) locate the end of the right hand to the coordinate which has the lowest X value; (4) locate the end of the left foot to the coordinate which has a negative Y value and the highest X value; and (5) locate the end of the right foot to the coordinate which has a negative Y value and the lowest X value.
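A minimal sketch of the locating procedure, assuming the appearance features are taken directly from the skin's mesh vertices after the rotating procedure has oriented the skin toward +Z with the top of the features toward +Y; the function and key names are illustrative only and are not defined by the disclosure.

```python
import numpy as np

def locate_skeleton_ends(mesh_vertices):
    """Pick anchor coordinates for the ends of the template skeleton (rules 1-5 above).

    mesh_vertices: N x 3 array of skin vertices, already rotated so the skin
                   faces +Z and the top of the appearance features points to +Y.
    """
    x, y = mesh_vertices[:, 0], mesh_vertices[:, 1]
    lower = mesh_vertices[y < 0]                       # candidate foot region (negative Y)
    return {
        "head":       mesh_vertices[np.argmax(y)],     # (1) highest Y value
        "left_hand":  mesh_vertices[np.argmax(x)],     # (2) highest X value
        "right_hand": mesh_vertices[np.argmin(x)],     # (3) lowest X value
        "left_foot":  lower[np.argmax(lower[:, 0])],   # (4) negative Y, highest X value
        "right_foot": lower[np.argmin(lower[:, 0])],   # (5) negative Y, lowest X value
    }
```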
The precise fitting unit 114 of the present disclosure is used to fit the set of skeletons to the trunk features of the skin. Although the rough fitting unit 113 fits specific ends of the skeletons of the avatar to the skin, some skeletons may still be located outside of the skin. Therefore, the precise fitting unit 114 has to fit the set of skeletons to the skin more precisely. The precise fitting unit 114 fits a bone which was located at a wrong place to a correct place according to the trunk features of the closest bone, by using Inverse Kinematics. The precise fitting unit 114 may repeat the foregoing procedure until all the skeletons of the avatar have been correctly fitted.
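The disclosure does not specify which Inverse Kinematics solver the precise fitting unit 114 uses; purely as an illustration, one common choice is Cyclic Coordinate Descent (CCD), sketched below. The names are hypothetical, and the target would be the trunk-feature position toward which a misplaced bone end should be pulled.

```python
import numpy as np

def ccd_fit(joints, target, iterations=20, tol=1e-4):
    """Cyclic Coordinate Descent: pull the end of a bone chain onto a target point.

    joints: list of 3D joint positions from the root to the end of the chain
    target: 3D position of the trunk feature the chain end should reach
    Returns the adjusted joint positions (bone lengths are preserved).
    """
    joints = [np.asarray(j, dtype=float) for j in joints]
    target = np.asarray(target, dtype=float)
    for _ in range(iterations):
        if np.linalg.norm(joints[-1] - target) < tol:
            break
        # Walk from the joint nearest the chain end back toward the root.
        for i in range(len(joints) - 2, -1, -1):
            to_end = joints[-1] - joints[i]
            to_tgt = target - joints[i]
            n_end, n_tgt = np.linalg.norm(to_end), np.linalg.norm(to_tgt)
            if n_end < 1e-9 or n_tgt < 1e-9:
                continue
            to_end, to_tgt = to_end / n_end, to_tgt / n_tgt
            axis = np.cross(to_end, to_tgt)
            s = np.linalg.norm(axis)
            c = np.clip(np.dot(to_end, to_tgt), -1.0, 1.0)
            if s < 1e-9:
                continue                       # already aligned (or opposite)
            axis /= s
            angle = np.arctan2(s, c)
            # Rodrigues rotation matrix about 'axis' by 'angle'.
            K = np.array([[0, -axis[2], axis[1]],
                          [axis[2], 0, -axis[0]],
                          [-axis[1], axis[0], 0]])
            R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
            # Rotate every joint downstream of joint i about joint i.
            for j in range(i + 1, len(joints)):
                joints[j] = joints[i] + R @ (joints[j] - joints[i])
    return joints
```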
The envelope range calculating unit 115 of the present disclosure is used to calculate an envelope range of each bone of the set of skeletons according to a plurality of mesh vertices in proximity to the bone.
The mesh vertices weight calculating unit 116 of the present disclosure is used to calculate the weight of a mesh vertex of the skin information relative to a bone according to the envelope range, where the weight of the mesh vertex indicates how the mesh vertex is affected by the movement of the bone when the bone is moving. The mesh vertices weight calculating unit 116 calculates the weight of the mesh vertices in accordance with the following rules: rule (1): when a mesh vertex is inside the inner layer of the envelope of a bone, the weight of the mesh vertex in relation to the bone is 1.0, which indicates that the mesh vertex is totally affected by the movement of the bone when the bone is moving; rule (2): when a mesh vertex is outside of the outer layer of the envelope of a bone, the weight of the mesh vertex in relation to the bone is 0.0, which indicates that the mesh vertex is totally unaffected by the movement of the bone when the bone is moving; rule (3): when a mesh vertex is between the inner and outer layers of the envelope range of a bone, the weight of the mesh vertex in relation to the bone decreases as follows: Weight_{v_i,b_i}(dist_{b_i}) = Decay(dist_{b_i}), where Weight_{v_i,b_i} is the weight of a mesh vertex v_i in relation to the bone b_i, dist_{b_i} is the distance between the mesh vertex v_i and the bone b_i, and Decay(x) is a decreasing function, decreasing from 1 to 0; rule (4): when a mesh vertex does not belong to any envelope range of any bone in accordance with the former three rules, the weight of the mesh vertex in relation to the bone closest to the mesh vertex is 1.0, which indicates that the mesh vertex is totally affected by the movement of the bone closest to the mesh vertex; and rule (5): each value of the weight of the mesh vertices on the weight table has to be normalized so that the sum of all the values is 1.0. Through the above rules, the mesh vertices weight calculating unit 116 can establish a weight table to record all the weights of the mesh vertices affected by all the bones.
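A compact sketch of the five rules, assuming a linear Decay() function and a scalar inner/outer envelope radius per bone; the disclosure only requires Decay(x) to decrease from 1 to 0, so the actual function and envelope geometry may differ, and all names here are illustrative.

```python
import numpy as np

def vertex_weights(distances, inner_radii, outer_radii):
    """Compute mesh-vertex weights from bone envelope ranges (rules 1-5).

    distances  : V x B array, distance from each mesh vertex to each bone
    inner_radii: length-B array, inner envelope layer of each bone
    outer_radii: length-B array, outer envelope layer of each bone
    Returns a V x B weight table whose rows sum to 1.0.
    """
    # rule (3): linear Decay() from 1.0 at the inner layer to 0.0 at the outer layer.
    decay = np.clip((outer_radii - distances) / (outer_radii - inner_radii), 0.0, 1.0)
    weights = np.where(distances <= inner_radii, 1.0,      # rule (1): inside inner layer
               np.where(distances >= outer_radii, 0.0,     # rule (2): outside outer layer
                        decay))
    # rule (4): a vertex outside every envelope is fully bound to its closest bone.
    orphan = weights.sum(axis=1) == 0.0
    weights[orphan, np.argmin(distances[orphan], axis=1)] = 1.0
    # rule (5): normalize each row of the weight table so its values sum to 1.0.
    return weights / weights.sum(axis=1, keepdims=True)
```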
After the procedures described above, the avatar of the present disclosure comprises not only the skin information (data of mesh vertices) and the skeleton information (identification data and geometric data of each bone of the set of skeletons, and linkage relationship data and movable degrees of freedom between the bones of the set of skeletons), but also the relationship information between the skin and the set of skeletons (i.e., the envelope ranges of the set of skeletons, and the weights of the mesh vertices, which indicate how the mesh vertices are affected by the set of skeletons). The output unit 117 of the avatar generation module 110 is used to output the avatar to the display unit 130 for displaying the avatar, and to output the avatar to the arithmetic unit 124 for further manipulation of the avatar.
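Purely as an illustration of the three kinds of data bundled into the outputted avatar, one possible container is sketched below; the field names and types are hypothetical and are not prescribed by the disclosure.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Bone:
    name: str
    head: np.ndarray                      # geometric data: start point of the bone
    tail: np.ndarray                      # geometric data: end point of the bone
    parent: str | None = None             # linkage relationship to another bone
    dof: tuple = ("rx", "ry", "rz")       # movable degrees of freedom

@dataclass
class Avatar:
    mesh_vertices: np.ndarray                                   # skin information (V x 3)
    bones: dict[str, Bone] = field(default_factory=dict)        # skeleton information
    envelopes: dict[str, tuple] = field(default_factory=dict)   # (inner, outer) range per bone
    weight_table: np.ndarray | None = None                      # V x B weights linking skin to bones
```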
In addition to the avatar generation module 110, the present disclosure further comprises an avatar manipulation module 120. As shown in
The video capturing unit 123, e.g., a camcorder, is used to shoot the real space in order to obtain the images of the real space, including the position mark 121 and the control mark 122 in the real space. There may be only one camcorder in an embodiment. In order to achieve a stereoscopic effect or a multi-view effect, two or more camcorders may be employed in other embodiments, which will be described further later.
The arithmetic unit 124 of the present disclosure is coupled to the video capturing unit 123, and is used to identify the first real position of the position mark 121 and the second real position of the control mark 122 from the images captured by the video capturing unit 123. The location and direction of the video capturing unit 123 may be fixed, or may change from time to time. In an embodiment, the position mark 121 and the control mark 122 are barcodes or other objects whose visible appearances have identifiable shapes, sizes or colors, in order to make it easier for the video capturing unit 123 to identify them. By checking the shape and size of the two barcodes, the arithmetic unit 124 may easily determine the relative distance and direction between the two marks 121 and 122 and the camcorders (video capturing unit 123). Open source software, such as ARToolKit and ARTag, may work in coordination with the arithmetic unit 124 to identify the marks and calculate the spatial coordinates of the marks. Further, the arithmetic unit 124 may convert the first real position of the position mark 121 in the real space into a first virtual position where the avatar is in the virtual space, and the second real position of the control mark 122 in the real space into a second virtual position where a movable node of the avatar is in the virtual space. It is appreciated that relative motions between the second virtual position and the first virtual position make the avatar perform a series of successive motions in the virtual space. In an embodiment, if a user wants to control the location of the whole avatar in the virtual space, the position mark 121 may be moved. Also, if a user wants to control the forearm of the avatar with the control mark 122, a point on the forearm bone of the avatar may be set as a movable node (which is controlled by the control mark 122) before moving the control mark 122. Due to the linkage relationship between the forearm skin and the forearm bone, the whole forearm (including the forearm skin) will move when the control mark 122 is moving in the real space. Note that, due to the function performed by the avatar generation module 110 (especially the mesh vertices weight calculating unit 116), when the forearm of the avatar is moving, the skin away from the forearm, for example, the skin of the chest or shoulder, will move accordingly. Thus, the avatar generated by the present disclosure has smoother and more natural movements.
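One possible way the conversion from a real position to a virtual position could be organized, assuming the marker tracker already reports each mark's translation relative to the camera; the transform, function, and attribute names below are illustrative and not taken from the disclosure.

```python
import numpy as np

def real_to_virtual(mark_position_camera, camera_to_virtual):
    """Map a mark's position in camera/real coordinates into the virtual space.

    mark_position_camera: 3-vector, the mark's position relative to the camera,
                          e.g. as reported by a marker tracker such as ARToolKit
    camera_to_virtual   : 4 x 4 homogeneous transform chosen when the scene is set up
    """
    p = np.append(np.asarray(mark_position_camera, dtype=float), 1.0)
    return (camera_to_virtual @ p)[:3]

# Per frame (hypothetical usage): the position mark drives the avatar as a whole,
# while the control mark drives the selected movable node.
# avatar.root_position = real_to_virtual(position_mark_xyz, cam_to_virtual)
# avatar.nodes["forearm"].position = real_to_virtual(control_mark_xyz, cam_to_virtual)
```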
The display unit 130 of the present disclosure is coupled to the avatar generation module 110 and the avatar manipulation module 120, and is used to display the avatar and the virtual space where the avatar exists.
The animation generation system 100 of the present disclosure has been discussed above. It is noted that, for the purpose of illustrating the present disclosure, the avatar generation module 110, the avatar manipulation module 120, the display unit 130, the readout unit 111, the analyzing unit 112, the rough fitting unit 113, the precise fitting unit 114, the envelope range calculating unit 115, the mesh vertices weight calculating unit 116, the output unit 117, the model establishing unit 118 in the avatar generation module 110, and the video capturing unit 123 and the arithmetic unit 124 in the avatar manipulation module 120 are described separately. Any combination of the above parts may be integrated in and performed by a single computer, or distributed over a network and performed by a plurality of computers.
The present disclosure also provides an animation generation method.
While the disclosure has been described by way of example and in terms of the exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. To the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.
Lin, Tzung-Han, Teng, Chih-Jen, Hsiao, Fu-Jen
Patent | Priority | Assignee | Title |
10033992, | Sep 09 2014 | GOOGLE LLC | Generating a 3D video of an event using crowd sourced data |
10102659, | Sep 18 2017 | Systems and methods for utilizing a device as a marker for augmented reality content | |
10105601, | Oct 27 2017 | Systems and methods for rendering a virtual content object in an augmented reality environment | |
10198871, | Apr 27 2018 | Systems and methods for generating and facilitating access to a personalized augmented rendering of a user | |
10245505, | Mar 15 2013 | SONY INTERACTIVE ENTERTAINMENT INC | Generating custom recordings of skeletal animations |
10438631, | Feb 05 2014 | SNAP INC | Method for real-time video processing involving retouching of an object in the video |
10565767, | Sep 18 2017 | Systems and methods for utilizing a device as a marker for augmented reality content | |
10566026, | Feb 05 2014 | Snap Inc. | Method for real-time video processing involving changing features of an object in the video |
10586396, | Apr 30 2019 | Systems, methods, and storage media for conveying virtual content in an augmented reality environment | |
10586570, | Feb 05 2014 | SNAP INC | Real time video processing for changing proportions of an object in the video |
10593121, | Apr 27 2018 | Systems and methods for generating and facilitating access to a personalized augmented rendering of a user | |
10636188, | Feb 09 2018 | Systems and methods for utilizing a living entity as a marker for augmented reality content | |
10661170, | Oct 27 2017 | Systems and methods for rendering a virtual content object in an augmented reality environment | |
10672170, | Sep 18 2017 | Systems and methods for utilizing a device as a marker for augmented reality content | |
10679427, | Apr 30 2019 | Systems, methods, and storage media for conveying virtual content in an augmented reality environment | |
10796467, | Feb 09 2018 | Systems and methods for utilizing a living entity as a marker for augmented reality content | |
10818096, | Apr 30 2019 | Systems, methods, and storage media for conveying virtual content in an augmented reality environment | |
10846931, | Apr 30 2019 | Systems, methods, and storage media for conveying virtual content in an augmented reality environment | |
10848446, | Jul 19 2016 | Snap Inc. | Displaying customized electronic messaging graphics |
10852918, | Mar 08 2019 | Snap Inc.; SNAP INC | Contextual information in chat |
10855632, | Jul 19 2016 | SNAP INC | Displaying customized electronic messaging graphics |
10861170, | Nov 30 2018 | SNAP INC | Efficient human pose tracking in videos |
10861245, | Apr 27 2018 | Systems and methods for generating and facilitating access to a personalized augmented rendering of a user | |
10867424, | Sep 18 2017 | Systems and methods for utilizing a device as a marker for augmented reality content | |
10872451, | Oct 31 2018 | Snap Inc.; SNAP INC | 3D avatar rendering |
10880246, | Oct 24 2016 | Snap Inc. | Generating and displaying customized avatars in electronic messages |
10893385, | Jun 07 2019 | SNAP INC | Detection of a physical collision between two client devices in a location sharing system |
10895964, | Sep 25 2018 | Snap Inc. | Interface to display shared user groups |
10896534, | Sep 19 2018 | Snap Inc. | Avatar style transformation using neural networks |
10902661, | Nov 28 2018 | Snap Inc. | Dynamic composite user identifier |
10904181, | Sep 28 2018 | Snap Inc. | Generating customized graphics having reactions to electronic message content |
10911387, | Aug 12 2019 | Snap Inc. | Message reminder interface |
10936066, | Feb 13 2019 | SNAP INC | Sleep detection in a location sharing system |
10936157, | Nov 29 2017 | SNAP INC | Selectable item including a customized graphic for an electronic messaging application |
10938758, | Oct 24 2016 | SNAP INC | Generating and displaying customized avatars in media overlays |
10939246, | Jan 16 2019 | SNAP INC | Location-based context information sharing in a messaging system |
10945098, | Jan 16 2019 | Snap Inc. | Location-based context information sharing in a messaging system |
10949648, | Jan 23 2018 | Snap Inc. | Region-based stabilized face tracking |
10950271, | Feb 05 2014 | Snap Inc. | Method for triggering events in a video |
10951562, | Jan 18 2017 | Snap. Inc. | Customized contextual media content item generation |
10952013, | Apr 27 2017 | Snap Inc. | Selective location-based identity communication |
10963529, | Apr 27 2017 | SNAP INC | Location-based search mechanism in a graphical user interface |
10964082, | Feb 26 2019 | Snap Inc. | Avatar based on weather |
10964114, | Jun 28 2019 | SNAP INC | 3D object camera customization system |
10979752, | Feb 28 2018 | SNAP INC | Generating media content items based on location information |
10984569, | Jun 30 2016 | Snap Inc. | Avatar based ideogram generation |
10984575, | Feb 06 2019 | Snap Inc. | Body pose estimation |
10991395, | Feb 05 2014 | Snap Inc. | Method for real time video processing involving changing a color of an object on a human face in a video |
10992619, | Apr 30 2019 | SNAP INC | Messaging system with avatar generation |
11010022, | Feb 06 2019 | Snap Inc. | Global event-based avatar |
11030789, | Oct 30 2017 | Snap Inc. | Animated chat presence |
11030813, | Aug 30 2018 | Snap Inc. | Video clip object tracking |
11032670, | Jan 14 2019 | Snap Inc. | Destination sharing in location sharing system |
11036781, | Jan 30 2020 | SNAP INC | Video generation system to render frames on demand using a fleet of servers |
11036989, | Dec 11 2019 | Snap Inc.; SNAP INC | Skeletal tracking using previous frames |
11039270, | Mar 28 2019 | Snap Inc. | Points of interest in a location sharing system |
11048916, | Mar 31 2016 | Snap Inc. | Automated avatar generation |
11055514, | Dec 14 2018 | Snap Inc. | Image face manipulation |
11063891, | Dec 03 2019 | Snap Inc.; SNAP INC | Personalized avatar notification |
11069103, | Apr 20 2017 | Snap Inc. | Customized user interface for electronic communications |
11074675, | Jul 31 2018 | SNAP INC | Eye texture inpainting |
11080917, | Sep 30 2019 | Snap Inc. | Dynamic parameterized user avatar stories |
11100311, | Oct 19 2016 | Snap Inc. | Neural networks for facial modeling |
11103795, | Oct 31 2018 | Snap Inc.; SNAP INC | Game drawer |
11113887, | Jan 08 2018 | Verizon Patent and Licensing Inc | Generating three-dimensional content from two-dimensional images |
11120596, | Feb 09 2018 | Systems and methods for utilizing a living entity as a marker for augmented reality content | |
11120597, | Oct 26 2017 | Snap Inc. | Joint audio-video facial animation system |
11120601, | Feb 28 2018 | Snap Inc. | Animated expressive icon |
11122094, | Jul 28 2017 | Snap Inc. | Software application manager for messaging applications |
11128586, | Dec 09 2019 | Snap Inc.; SNAP INC | Context sensitive avatar captions |
11128715, | Dec 30 2019 | SNAP INC | Physical friend proximity in chat |
11140515, | Dec 30 2019 | SNAP INC | Interfaces for relative device positioning |
11145136, | Apr 30 2019 | Systems, methods, and storage media for conveying virtual content in an augmented reality environment | |
11166123, | Mar 28 2019 | SNAP INC | Grouped transmission of location data in a location sharing system |
11169658, | Dec 31 2019 | Snap Inc.; SNAP INC | Combined map icon with action indicator |
11171902, | Sep 28 2018 | Snap Inc. | Generating customized graphics having reactions to electronic message content |
11176737, | Nov 27 2018 | Snap Inc. | Textured mesh building |
11178083, | Oct 24 2016 | Snap Inc. | Generating and displaying customized avatars in electronic messages |
11185775, | Oct 27 2017 | Systems and methods for rendering a virtual content object in an augmented reality environment | |
11188190, | Jun 28 2019 | Snap Inc.; SNAP INC | Generating animation overlays in a communication session |
11189070, | Sep 28 2018 | Snap Inc. | System and method of generating targeted user lists using customizable avatar characteristics |
11189098, | Jun 28 2019 | SNAP INC | 3D object camera customization system |
11192031, | May 08 2012 | SNAP INC | System and method for generating and displaying avatars |
11195237, | Apr 27 2017 | SNAP INC | Location-based virtual avatars |
11198064, | Oct 27 2017 | Systems and methods for rendering a virtual content object in an augmented reality environment | |
11199957, | Nov 30 2018 | Snap Inc. | Generating customized avatars based on location information |
11200748, | Apr 30 2019 | Systems, methods, and storage media for conveying virtual content in an augmented reality environment | |
11217020, | Mar 16 2020 | Snap Inc. | 3D cutout image modification |
11218433, | Oct 24 2016 | Snap Inc. | Generating and displaying customized avatars in electronic messages |
11218838, | Oct 31 2019 | SNAP INC | Focused map-based context information surfacing |
11227442, | Dec 19 2019 | Snap Inc. | 3D captions with semantic graphical elements |
11229849, | May 08 2012 | SNAP INC | System and method for generating and displaying avatars |
11245658, | Sep 28 2018 | Snap Inc. | System and method of generating private notifications between users in a communication session |
11263254, | Jan 30 2020 | Snap Inc. | Video generation system to render frames on demand using a fleet of servers |
11263817, | Dec 19 2019 | Snap Inc. | 3D captions with face tracking |
11270491, | Sep 30 2019 | Snap Inc. | Dynamic parameterized user avatar stories |
11275439, | Feb 13 2019 | Snap Inc. | Sleep detection in a location sharing system |
11284144, | Jan 30 2020 | SNAP INC | Video generation system to render frames on demand using a fleet of GPUs |
11290682, | Mar 18 2015 | SNAP INC | Background modification in video conferencing |
11294545, | Sep 25 2018 | Snap Inc. | Interface to display shared user groups |
11294936, | Jan 30 2019 | Snap Inc. | Adaptive spatial density based clustering |
11301117, | Mar 08 2019 | Snap Inc. | Contextual information in chat |
11307747, | Jul 11 2019 | SNAP INC | Edge gesture interface with smart interactions |
11310176, | Apr 13 2018 | SNAP INC | Content suggestion system |
11315259, | Nov 30 2018 | Snap Inc. | Efficient human pose tracking in videos |
11320969, | Sep 16 2019 | Snap Inc. | Messaging system with battery level sharing |
11321896, | Oct 31 2018 | Snap Inc. | 3D avatar rendering |
11348301, | Sep 19 2018 | Snap Inc. | Avatar style transformation using neural networks |
11354014, | Apr 27 2017 | SNAP INC | Map-based graphical user interface for multi-type social media galleries |
11354843, | Oct 30 2017 | Snap Inc. | Animated chat presence |
11356720, | Jan 30 2020 | SNAP INC | Video generation system to render frames on demand |
11360733, | Sep 10 2020 | Snap Inc. | Colocated shared augmented reality without shared backend |
11380361, | Feb 05 2014 | Snap Inc. | Method for triggering events in a video |
11385763, | Apr 27 2017 | SNAP INC | Map-based graphical user interface indicating geospatial activity metrics |
11388126, | Jul 19 2016 | Snap Inc. | Displaying customized electronic messaging graphics |
11392264, | Apr 27 2017 | SNAP INC | Map-based graphical user interface for multi-type social media galleries |
11411895, | Nov 29 2017 | SNAP INC | Generating aggregated media content items for a group of users in an electronic messaging application |
11418470, | Jul 19 2016 | Snap Inc. | Displaying customized electronic messaging graphics |
11418906, | Apr 27 2017 | Snap Inc. | Selective location-based identity communication |
11425062, | Sep 27 2019 | Snap Inc. | Recommended content viewed by friends |
11425068, | Feb 03 2009 | Snap Inc. | Interactive avatar in messaging environment |
11438288, | Jul 19 2016 | Snap Inc. | Displaying customized electronic messaging graphics |
11438341, | Oct 10 2016 | Snap Inc. | Social media post subscribe requests for buffer user accounts |
11443491, | Jun 28 2019 | Snap Inc. | 3D object camera customization system |
11443772, | Feb 05 2014 | Snap Inc. | Method for triggering events in a video |
11450051, | Nov 18 2020 | Snap Inc.; SNAP INC | Personalized avatar real-time motion capture |
11450349, | Feb 05 2014 | Snap Inc. | Real time video processing for changing proportions of an object in the video |
11451956, | Apr 27 2017 | SNAP INC | Location privacy management on map-based social media platforms |
11452939, | Sep 21 2020 | Snap Inc. | Graphical marker generation system for synchronizing users |
11455081, | Aug 05 2019 | Snap Inc. | Message thread prioritization interface |
11455082, | Sep 28 2018 | Snap Inc. | Collaborative achievement interface |
11460974, | Nov 28 2017 | Snap Inc. | Content discovery refresh |
11468618, | Feb 28 2018 | Snap Inc. | Animated expressive icon |
11474663, | Apr 27 2017 | Snap Inc. | Location-based search mechanism in a graphical user interface |
11477149, | Sep 28 2018 | Snap Inc. | Generating customized graphics having reactions to electronic message content |
11509615, | Jul 19 2016 | Snap Inc. | Generating customized electronic messaging graphics |
11514635, | Jan 30 2020 | SNAP INC | System for generating media content items on demand |
11514947, | Feb 05 2014 | Snap Inc. | Method for real-time video processing involving changing features of an object in the video |
11516173, | Dec 26 2018 | SNAP INC | Message composition interface |
11523159, | Feb 28 2018 | Snap Inc. | Generating media content items based on location information |
11532105, | Mar 16 2021 | Snap Inc. | Mirroring device with whole-body outfits |
11532134, | Apr 27 2018 | Systems and methods for generating and facilitating access to a personalized augmented rendering of a user | |
11543939, | Jun 08 2020 | Snap Inc. | Encoded image based messaging system |
11544883, | Jan 16 2017 | Snap Inc. | Coded vision system |
11544885, | Mar 19 2021 | Snap Inc.; SNAP INC | Augmented reality experience based on physical items |
11544902, | Nov 27 2018 | Snap Inc. | Rendering 3D captions within real-world environments |
11551423, | Oct 24 2016 | Snap Inc. | Augmented reality object manipulation |
11557075, | Feb 06 2019 | Snap Inc. | Body pose estimation |
11562548, | Mar 22 2021 | Snap Inc. | True size eyewear in real time |
11563702, | Dec 03 2019 | Snap Inc. | Personalized avatar notification |
11574431, | Feb 26 2019 | Snap Inc. | Avatar based on weather |
11580682, | Jun 30 2020 | Snap Inc. | Messaging system with augmented reality makeup |
11580698, | Nov 27 2018 | Snap Inc. | Rendering 3D captions within real-world environments |
11580700, | Oct 24 2016 | Snap Inc. | Augmented reality object manipulation |
11582176, | Dec 09 2019 | Snap Inc. | Context sensitive avatar captions |
11588769, | Jan 09 2017 | Snap Inc.; SNAP INC | Contextual generation and selection of customized media content |
11588772, | Aug 12 2019 | Snap Inc. | Message reminder interface |
11593980, | Apr 20 2017 | Snap Inc. | Customized user interface for electronic communications |
11594025, | Dec 11 2019 | Snap Inc. | Skeletal tracking using previous frames |
11601783, | Jun 07 2019 | Snap Inc. | Detection of a physical collision between two client devices in a location sharing system |
11607616, | May 08 2012 | SNAP INC | System and method for generating and displaying avatars |
11610354, | Oct 26 2017 | Snap Inc. | Joint audio-video facial animation system |
11610357, | Sep 28 2018 | Snap Inc. | System and method of generating targeted user lists using customizable avatar characteristics |
11615592, | Oct 27 2020 | Snap Inc.; SNAP INC | Side-by-side character animation from realtime 3D body motion capture |
11616745, | Jan 09 2017 | Snap Inc.; SNAP INC | Contextual generation and selection of customized media content |
11619501, | Mar 11 2020 | SNAP INC | Avatar based on trip |
11620325, | Jan 30 2020 | Snap Inc. | Video generation system to render frames on demand using a fleet of servers |
11620791, | Nov 27 2018 | Snap Inc. | Rendering 3D captions within real-world environments |
11620798, | Apr 30 2019 | Systems and methods for conveying virtual content in an augmented reality environment, for facilitating presentation of the virtual content based on biometric information match and user-performed activities | |
11625873, | Mar 30 2020 | SNAP INC | Personalized media overlay recommendation |
11631223, | Apr 30 2019 | Systems, methods, and storage media for conveying virtual content at different locations from external resources in an augmented reality environment | |
11631276, | Mar 31 2016 | Snap Inc. | Automated avatar generation |
11636654, | May 19 2021 | SNAP INC | AR-based connected portal shopping |
11636657, | Dec 19 2019 | Snap Inc. | 3D captions with semantic graphical elements |
11636662, | Sep 30 2021 | Snap Inc.; SNAP INC | Body normal network light and rendering control |
11638115, | Mar 28 2019 | Snap Inc. | Points of interest in a location sharing system |
11651022, | Jan 30 2020 | Snap Inc. | Video generation system to render frames on demand using a fleet of servers |
11651539, | Jan 30 2020 | SNAP INC | System for generating media content items on demand |
11651797, | Feb 05 2014 | Snap Inc. | Real time video processing for changing proportions of an object in the video |
11657580, | Jan 09 2017 | Snap Inc. | Surface aware lens |
11659014, | Jul 28 2017 | Snap Inc. | Software application manager for messaging applications |
11660022, | Oct 27 2020 | Snap Inc. | Adaptive skeletal joint smoothing |
11662890, | Sep 16 2019 | Snap Inc. | Messaging system with battery level sharing |
11662900, | May 31 2016 | Snap Inc. | Application control using a gesture based trigger |
11663792, | Sep 08 2021 | Snap Inc. | Body fitted accessory with physics simulation |
11670059, | Sep 01 2021 | Snap Inc.; SNAP INC | Controlling interactive fashion based on body gestures |
11673054, | Sep 07 2021 | Snap Inc. | Controlling AR games on fashion items |
11676199, | Jun 28 2019 | Snap Inc. | Generating customizable avatar outfits |
11676320, | Sep 30 2019 | Snap Inc. | Dynamic media collection generation |
11683280, | Jun 10 2020 | Snap Inc. | Messaging system including an external-resource dock and drawer |
11688119, | Feb 28 2018 | Snap Inc. | Animated expressive icon |
11693887, | Jan 30 2019 | Snap Inc. | Adaptive spatial density based clustering |
11694402, | Nov 27 2018 | Snap Inc. | Textured mesh building |
11698722, | Nov 30 2018 | Snap Inc. | Generating customized avatars based on location information |
11704005, | Sep 28 2018 | Snap Inc. | Collaborative achievement interface |
11704878, | Jan 09 2017 | Snap Inc. | Surface aware lens |
11706267, | Oct 30 2017 | Snap Inc. | Animated chat presence |
11714524, | Feb 06 2019 | Snap Inc. | Global event-based avatar |
11714535, | Jul 11 2019 | Snap Inc. | Edge gesture interface with smart interactions |
11715268, | Aug 30 2018 | Snap Inc. | Video clip object tracking |
11729441, | Jan 30 2020 | Snap Inc. | Video generation system to render frames on demand |
11734866, | Sep 13 2021 | Snap Inc. | Controlling interactive fashion based on voice |
11734894, | Nov 18 2020 | Snap Inc.; SNAP INC | Real-time motion transfer for prosthetic limbs |
11734959, | Mar 16 2021 | Snap Inc. | Activating hands-free mode on mirroring device |
11748931, | Nov 18 2020 | Snap Inc.; SNAP INC | Body animation sharing and remixing |
11748958, | Dec 07 2021 | Snap Inc. | Augmented reality unboxing experience |
11751015, | Jan 16 2019 | Snap Inc. | Location-based context information sharing in a messaging system |
11752431, | Oct 27 2017 | Systems and methods for rendering a virtual content object in an augmented reality environment | |
11763481, | Oct 20 2021 | Snap Inc.; SNAP INC | Mirror-based augmented reality experience |
11769259, | Jan 23 2018 | Snap Inc. | Region-based stabilized face tracking |
11775165, | Mar 16 2020 | Snap Inc. | 3D cutout image modification |
11782574, | Apr 27 2017 | Snap Inc. | Map-based graphical user interface indicating geospatial activity metrics |
11783494, | Nov 30 2018 | Snap Inc. | Efficient human pose tracking in videos |
11790531, | Feb 24 2021 | Snap Inc.; SNAP INC | Whole body segmentation |
11790614, | Oct 11 2021 | Snap Inc. | Inferring intent from pose and speech input |
11797155, | Jun 30 2021 | Snap Inc. | Hybrid search system for customizable media |
11798201, | Mar 16 2021 | Snap Inc. | Mirroring device with whole-body outfits |
11798238, | Sep 14 2021 | Snap Inc. | Blending body mesh into external mesh |
11798261, | Dec 14 2018 | Snap Inc. | Image face manipulation |
11809624, | Feb 13 2019 | Snap Inc. | Sleep detection in a location sharing system |
11809633, | Mar 16 2021 | Snap Inc. | Mirroring device with pointing based navigation |
11810220, | Dec 19 2019 | Snap Inc. | 3D captions with face tracking |
11810226, | Feb 09 2018 | Systems and methods for utilizing a living entity as a marker for augmented reality content | |
11818286, | Mar 30 2020 | SNAP INC | Avatar recommendation and reply |
11822766, | Jun 08 2020 | Snap Inc. | Encoded image based messaging system |
11822774, | Sep 16 2019 | Snap Inc. | Messaging system with battery level sharing |
11823312, | Sep 18 2017 | Systems and methods for utilizing a device as a marker for augmented reality content | |
11823341, | Jun 28 2019 | Snap Inc. | 3D object camera customization system |
11823346, | Jan 17 2022 | Snap Inc. | AR body part tracking system |
11824822, | Sep 28 2018 | Snap Inc. | Generating customized graphics having reactions to electronic message content |
11830209, | May 26 2017 | Snap Inc. | Neural network-based image stream modification |
11831937, | Jan 30 2020 | Snap Inc. | Video generation system to render frames on demand using a fleet of GPUS |
11833427, | Sep 21 2020 | Snap Inc. | Graphical marker generation system for synchronizing users |
11836859, | Nov 27 2018 | Snap Inc. | Textured mesh building |
11836862, | Oct 11 2021 | Snap Inc. | External mesh with vertex attributes |
11836866, | Sep 20 2021 | Snap Inc. | Deforming real-world object using an external mesh |
11842411, | Apr 27 2017 | SNAP INC | Location-based virtual avatars |
11843456, | Oct 24 2016 | Snap Inc. | Generating and displaying customized avatars in media overlays |
11850511, | Oct 27 2017 | Systems and methods for rendering a virtual content object in an augmented reality environment | |
11852554, | Mar 21 2019 | SNAP INC | Barometer calibration in a location sharing system |
11854069, | Jul 16 2021 | Snap Inc.; SNAP INC | Personalized try-on ads |
11863513, | Aug 31 2020 | Snap Inc. | Media content playback and comments management |
11868414, | Mar 14 2019 | SNAP INC | Graph-based prediction for contact suggestion in a location sharing system |
11868590, | Sep 25 2018 | Snap Inc. | Interface to display shared user groups |
11870743, | Jan 23 2017 | Snap Inc. | Customized digital avatar accessories |
11870745, | Jun 28 2022 | SNAP INC ; Snap Inc. | Media gallery sharing and management |
11875439, | Apr 18 2018 | Snap Inc. | Augmented expression system |
11876762, | Oct 24 2016 | Snap Inc. | Generating and displaying customized avatars in media overlays |
11877211, | Jan 14 2019 | Snap Inc. | Destination sharing in location sharing system |
11880923, | Feb 28 2018 | Snap Inc. | Animated expressive icon |
11880947, | Dec 21 2021 | SNAP INC | Real-time upper-body garment exchange |
11882162, | Jul 28 2017 | Snap Inc. | Software application manager for messaging applications |
11887237, | Nov 28 2018 | Snap Inc. | Dynamic composite user identifier |
11887260, | Dec 30 2021 | SNAP INC | AR position indicator |
11888795, | Sep 21 2020 | Snap Inc. | Chats with micro sound clips |
11893166, | Nov 08 2022 | Snap Inc. | User avatar movement control using an augmented reality eyewear device |
11893208, | Dec 31 2019 | Snap Inc. | Combined map icon with action indicator |
11893301, | Sep 10 2020 | Snap Inc. | Colocated shared augmented reality without shared backend |
11893647, | Apr 27 2017 | SNAP INC | Location-based virtual avatars |
11900506, | Sep 09 2021 | Snap Inc.; SNAP INC | Controlling interactive fashion based on facial expressions |
11908041, | Jan 19 2022 | Object replacement system | |
11908083, | Aug 31 2021 | Snap Inc.; SNAP INC | Deforming custom mesh based on body mesh |
11908093, | Dec 19 2019 | Snap Inc. | 3D captions with semantic graphical elements |
11908243, | Mar 16 2021 | Snap Inc. | Menu hierarchy navigation on electronic mirroring devices |
11910269, | Sep 25 2020 | Snap Inc. | Augmented reality content items including user avatar to share location |
11917495, | Jun 07 2019 | Snap Inc. | Detection of a physical collision between two client devices in a location sharing system |
11922010, | Jun 08 2020 | Snap Inc. | Providing contextual information with keyboard interface for messaging system |
11925869, | May 08 2012 | Snap Inc. | System and method for generating and displaying avatars |
11928783, | Dec 30 2021 | Snap Inc. | AR position and orientation along a plane |
11930055, | Oct 30 2017 | Snap Inc. | Animated chat presence |
11941227, | Jun 30 2021 | Snap Inc. | Hybrid search system for customizable media |
11941767, | May 19 2021 | Snap Inc. | AR-based connected portal shopping |
11949958, | Jan 30 2020 | SNAP INC | Selecting avatars to be included in the video being generated on demand |
11954762, | Jan 19 2022 | Snap Inc. | Object replacement system |
11956190, | May 08 2020 | Snap Inc. | Messaging system with a carousel of related entities |
11956192, | Aug 12 2019 | Snap Inc. | Message reminder interface |
11960784, | Dec 07 2021 | Snap Inc. | Shared augmented reality unboxing experience |
11962598, | Oct 10 2016 | Snap Inc. | Social media post subscribe requests for buffer user accounts |
11969075, | Mar 31 2020 | Snap Inc. | Augmented reality beauty product tutorials |
11973732, | Apr 30 2019 | Snap Inc. | Messaging system with avatar generation |
11978140, | Mar 30 2020 | Snap Inc. | Personalized media overlay recommendation |
11978283, | Mar 16 2021 | Snap Inc. | Mirroring device with a hands-free mode |
11983462, | Aug 31 2021 | SNAP INC | Conversation guided augmented reality experience |
11983826, | Sep 30 2021 | Snap Inc. | 3D upper garment tracking |
11983830, | Apr 27 2018 | | Systems and methods for generating and facilitating access to a personalized augmented rendering of a user |
11989809, | Jan 16 2017 | Snap Inc. | Coded vision system |
11991130, | Jan 18 2017 | Snap Inc. | Customized contextual media content item generation |
11991419, | Jan 30 2020 | SNAP INC | Selecting avatars to be included in the video being generated on demand |
11995288, | Apr 27 2017 | Snap Inc. | Location-based search mechanism in a graphical user interface |
11995757, | Oct 29 2021 | Snap Inc. | Customized animation from video |
11996113, | Oct 29 2021 | SNAP INC | Voice notes with changing effects |
12056760, | Jun 28 2019 | Snap Inc. | Generating customizable avatar outfits |
12056792, | Dec 30 2020 | Snap Inc. | Flow-guided motion retargeting |
12056832, | Sep 01 2021 | Snap Inc. | Controlling interactive fashion based on body gestures |
12058583, | Apr 27 2017 | Snap Inc. | Selective location-based identity communication |
12062144, | May 27 2022 | Snap Inc. | Automated augmented reality experience creation based on sample source and target images |
12062146, | Jul 28 2022 | Snap Inc. | Virtual wardrobe AR experience |
12063569, | Dec 30 2019 | Snap Inc. | Interfaces for relative device positioning |
12067214, | Jun 25 2020 | Snap Inc. | Updating avatar clothing for a user of a messaging system |
12067663, | Jan 30 2020 | Snap Inc. | System for generating media content items on demand |
12067804, | Mar 22 2021 | Snap Inc. | True size eyewear experience in real time |
12070682, | Mar 29 2019 | Snap Inc. | 3D avatar plugin for third-party games |
12079264, | Jan 30 2020 | Snap Inc. | Video generation system to render frames on demand using a fleet of servers |
12079445, | Apr 27 2017 | | Map-based graphical user interface indicating geospatial activity metrics |
12080065, | Nov 22 2019 | SNAP INC | Augmented reality items based on scan |
12081502, | Oct 24 2016 | | Generating and displaying customized avatars in media overlays |
12086381, | Apr 27 2017 | Snap Inc. | Map-based graphical user interface for multi-type social media galleries |
12086916, | Oct 22 2021 | Snap Inc. | Voice note with face tracking |
12086944, | Apr 30 2019 | | Systems and methods for conveying virtual content from external resources and electronic storage in an augmented reality environment at different or same locations |
12086946, | Sep 14 2021 | Snap Inc. | Blending body mesh into external mesh |
12094066, | Mar 30 2022 | | Surface normals for pixel-aligned object |
12096153, | Dec 21 2021 | SNAP INC | Avatar call platform |
12099701, | Aug 05 2019 | Snap Inc. | Message thread prioritization interface |
12099703, | Sep 16 2019 | Snap Inc. | Messaging system with battery level sharing |
12100156, | Apr 12 2021 | Snap Inc. | Garment segmentation |
12105938, | Sep 28 2018 | Snap Inc. | Collaborative achievement interface |
12106441, | Nov 27 2018 | Snap Inc. | Rendering 3D captions within real-world environments |
12106486, | Feb 24 2021 | SNAP INC | Whole body visual effects |
12111863, | Jan 30 2020 | Snap Inc. | Video generation system to render frames on demand using a fleet of servers |
12112013, | Apr 27 2017 | Snap Inc. | Location privacy management on map-based social media platforms |
12113756, | Apr 13 2018 | Snap Inc. | Content suggestion system |
12113760, | Oct 24 2016 | Snap Inc. | Generating and displaying customized avatars in media overlays |
12121811, | Sep 21 2020 | Snap Inc. | Graphical marker generation system for synchronization |
12131003, | Apr 27 2017 | Snap Inc. | Map-based graphical user interface indicating geospatial activity metrics |
12131006, | Feb 06 2019 | Snap Inc. | Global event-based avatar |
12131015, | May 31 2016 | Snap Inc. | Application control using a gesture based trigger |
12136153, | Jun 30 2020 | Snap Inc. | Messaging system with augmented reality makeup |
12136158, | Feb 06 2019 | Snap Inc. | Body pose estimation |
12141215, | Mar 14 2019 | Snap Inc. | Graph-based prediction for contact suggestion in a location sharing system |
12142257, | Feb 08 2022 | Snap Inc. | Emotion-based text to speech |
12147644, | Jun 28 2019 | Snap Inc. | Generating animation overlays in a communication session |
12147654, | Jul 11 2019 | Snap Inc. | Edge gesture interface with smart interactions |
12148105, | Mar 30 2022 | Snap Inc. | Surface normals for pixel-aligned object |
12148108, | Oct 11 2021 | Snap Inc. | Light and rendering of garments |
12149489, | Mar 14 2023 | SNAP INC | Techniques for recommending reply stickers |
12153788, | Nov 30 2018 | Snap Inc. | Generating customized avatars based on location information |
12154232, | Sep 30 2022 | SNAP INC | 9-DoF object tracking |
12164109, | Apr 29 2022 | SNAP INC | AR/VR enabled contact lens |
12164699, | Mar 16 2021 | Snap Inc. | Mirroring device with pointing based navigation |
12165243, | Mar 30 2021 | Snap Inc. | Customizable avatar modification system |
12165335, | Nov 30 2018 | Snap Inc. | Efficient human pose tracking in videos |
12166734, | Sep 27 2019 | Snap Inc. | Presenting reactions from friends |
8830244, | Mar 01 2011 | SONY INTERACTIVE ENTERTAINMENT INC | Information processing device capable of displaying a character representing a user, and information processing method thereof |
8922547, | Dec 22 2010 | Electronics and Telecommunications Research Institute | 3D model shape transformation method and apparatus |
9060093, | Sep 30 2011 | Intel Corporation | Mechanism for facilitating enhanced viewing perspective of video images at computing devices |
9294757, | Mar 15 2013 | GOOGLE LLC | 3-dimensional videos of objects |
9607573, | Sep 17 2014 | International Business Machines Corporation | Avatar motion modification |
D916809, | May 28 2019 | Snap Inc. | Display screen or portion thereof with a transitional graphical user interface |
D916810, | May 28 2019 | Snap Inc. | Display screen or portion thereof with a graphical user interface |
D916811, | May 28 2019 | Snap Inc. | Display screen or portion thereof with a transitional graphical user interface |
D916871, | May 28 2019 | Snap Inc. | Display screen or portion thereof with a transitional graphical user interface |
D916872, | May 28 2019 | Snap Inc. | Display screen or portion thereof with a graphical user interface |
Patent | Priority | Assignee | Title |
5745126, | Mar 31 1995 | The Regents of the University of California | Machine synthesis of a virtual video camera/image of a scene from multiple video cameras/images of the scene in accordance with a particular perspective on the scene, an object in the scene, or an event in the scene |
5889532, | Aug 02 1996 | AUTODESK, Inc | Control solutions for the resolution plane of inverse kinematic chains |
6057859, | Mar 31 1997 | SOVOZ, INC | Limb coordination system for interactive computer animation of articulated characters with blended motion data |
6088042, | Mar 31 1997 | SOVOZ, INC | Interactive motion data animation system |
6191798, | Mar 31 1997 | SOVOZ, INC | Limb coordination system for interactive computer animation of articulated characters |
6522332, | Jul 26 2000 | AUTODESK, Inc | Generating action data for the animation of characters |
6670954, | Dec 28 1998 | Fujitsu Limited | Three-dimensional skeleton data error absorbing apparatus |
7035436, | Aug 09 2001 | UNIVERSITY OF TOKYO, THE | Method of generating poses and motions of a tree structure link system |
7106334, | Feb 13 2001 | Sega Corporation | Animation creation program |
7202869, | Jan 07 2003 | Lucasfilm Entertainment Company Ltd | System and method of creating and animating a computer-generated image of a creature |
8253746, | May 01 2009 | Microsoft Technology Licensing, LLC | Determine intended motions |
20040104935, | |||
20110148858, |
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Jul 13 2010 | TENG, CHIH-JEN | Industrial Technology Research Institute | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 025330 | /0318 |
Jul 14 2010 | LIN, TZUNG-HAN | Industrial Technology Research Institute | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 025330 | /0318 |
Jul 14 2010 | HSIAO, FU-JEN | Industrial Technology Research Institute | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 025330 | /0318 |
Nov 08 2010 | Industrial Technology Research Institute | (assignment on the face of the patent) |
Date | Maintenance Fee Events |
Dec 12 2016 | M1551: Payment of Maintenance Fee, 4th Year, Large Entity. |
Sep 30 2020 | M1552: Payment of Maintenance Fee, 8th Year, Large Entity. |
Date | Maintenance Schedule |
Jun 11 2016 | 4 years fee payment window open |
Dec 11 2016 | 6 months grace period start (w surcharge) |
Jun 11 2017 | patent expiry (for year 4) |
Jun 11 2019 | 2 years to revive unintentionally abandoned end. (for year 4) |
Jun 11 2020 | 8 years fee payment window open |
Dec 11 2020 | 6 months grace period start (w surcharge) |
Jun 11 2021 | patent expiry (for year 8) |
Jun 11 2023 | 2 years to revive unintentionally abandoned end. (for year 8) |
Jun 11 2024 | 12 years fee payment window open |
Dec 11 2024 | 6 months grace period start (w surcharge) |
Jun 11 2025 | patent expiry (for year 12) |
Jun 11 2027 | 2 years to revive unintentionally abandoned end. (for year 12) |