The animation generation system includes: an avatar generation module for generating an avatar in a virtual space, wherein the avatar has a set of skeletons and a skin, and the movable nodes of the set of skeletons are manipulated so that a motion of the skin is induced; and an avatar manipulation module for manipulating the movable nodes, including: a position mark which is moved to at least one first real position in a real space; at least one control mark which is moved to at least one second real position in the real space; a video capturing unit for capturing the images of the real space; and an arithmetic unit for identifying the first real position and the second real position from the images of the real space, and converting the first real position into a first virtual position and the second real position into a second virtual position.

Patent: 8,462,198
Priority: Dec 29, 2009
Filed: Nov 08, 2010
Issued: Jun 11, 2013
Expiry: Aug 24, 2031
Extension: 289 days
Entity: Large
19. A multi-view video generation method for synthesizing a multi-view video and drawing a virtual model in the multi-view video, comprising:
placing a mark on at least one real position in a real space;
capturing the images of the real space by using at least two video capturing units;
identifying at least two real positions from the images of the real space;
converting the at least two real positions into at least two virtual positions in a virtual space; and
synthesizing a multi-view video, comprising:
drawing one of the images captured by the video capturing unit as a background;
drawing a virtual model corresponding to the mark on one of the at least two virtual positions onto the background to form a result video; and
synthesizing the background and the virtual model by using known multi-view synthesizing methods to generate a multi-view video, and
reading out skin information in respect of a skin of an avatar, and skeleton information in respect of a set of template skeletons of the avatar, wherein the skin information is composed of data of a plurality of mesh vertices, and the skeleton information comprises geometric data of each bone of the set of template skeletons and the linkage relationship between the bones of the set of template skeletons;
analyzing the skin information to obtain a plurality of appearance features and a plurality of trunk features of the skin;
adjusting the size of the set of template skeletons according to the appearance features to generate the set of skeletons of the avatar, and fitting the set of skeletons to the appearance features;
fitting the set of skeletons to the trunk features of the skin;
calculating an envelope range of a bone of the set of skeletons according to a plurality of mesh vertices in proximity to the bone;
calculating the weight of a mesh vertex of the skin information relative to a bone according to the envelope range, wherein the weight of the mesh vertex indicates how the mesh vertex is affected by the movement of the bone when the bone is moving; and
outputting the avatar to the arithmetic unit, wherein the outputted avatar comprises data in relation to the skin, the set of skeletons and the relationships therebetween.
12. An animation generation method for generating an avatar in a virtual space, wherein the avatar has a set of skeletons and a skin attached to the set of skeletons, and the set of skeletons has a plurality of movable nodes, and the movable nodes are manipulated so that a motion of the skin is induced, the method comprising:
moving a position mark to at least one first real position in a real space;
moving at least one control mark to at least one second real position in the real space;
capturing the images of the real space;
identifying the first real position and the second real position from the images of the real space;
converting the first real position into a first virtual position where the avatar is in the virtual space;
converting the second real position into a second virtual position where one of the movable nodes of the avatar is in the virtual space, wherein the relative motions between the second virtual position and the first virtual position make the avatar perform a series of successive motions in the virtual space,
drawing one of the images of the real space as a background, and drawing a designated object on the first virtual position onto the background to generate an augmented reality (AR) animation,
reading out skin information in respect of the skin, and skeleton information in respect of a set of template skeletons, wherein the skin information is composed of data of a plurality of mesh vertices, and the skeleton information comprises geometric data of each bone of the set of template skeletons and the linkage relationship between the bones of the set of template skeletons;
analyzing the skin information to obtain a plurality of appearance features and a plurality of trunk features of the skin;
adjusting the size of the set of template skeletons according to the appearance features to generate the set of skeletons of the avatar, and fitting the set of skeletons to the appearance features;
fitting the set of skeletons to the trunk features of the skin;
calculating an envelope range of a bone of the set of skeletons according to a plurality of mesh vertices in proximity to the bone;
calculating the weight of a mesh vertex of the skin information relative to a bone according to the envelope range, wherein the weight of the mesh vertex indicates how the mesh vertex is affected by the movement of the bone when the bone is moving; and
outputting the avatar to the arithmetic unit, wherein the outputted avatar comprises data in relation to the skin, the set of skeletons and the relationships therebetween.
18. A multi-view animation generation system, comprising:
at least one mark which is moved by users to at least one first real position in a real space;
an arithmetic unit, coupled to at least two video capturing units,
wherein the at least two video capturing units capture image streams of the real space and transmit the image streams to the arithmetic unit, and the arithmetic unit identifies at least two first real positions from the images of the real space and converts the at least two real positions into at least two virtual positions in a virtual space;
a multi-view animation synthesizing unit, coupled to the arithmetic unit, for drawing one of the images captured by the video capturing unit as a background;
drawing a virtual model corresponding to the mark on one of the at least two virtual positions onto the background; synthesizing the background and the virtual model to generate an animation; and transmitting the animation to a multi-view display unit,
an avatar generation module, comprising:
a readout unit for reading out skin information in respect of a skin of an avatar, and skeleton information in respect of a set of template skeletons of the avatar, wherein the skin information is composed of data of a plurality of mesh vertices, and the skeleton information comprises geometric data of each bone of the set of template skeletons and the linkage relationship between bones of the set of template skeletons;
an analyzing unit for analyzing the skin information to obtain a plurality of appearance features and a plurality of trunk features of the skin;
a rough fitting unit for adjusting the size of the set of template skeletons according to the appearance features to generate the set of skeletons of the avatar, and fitting the set of skeletons to the appearance features;
a precise fitting unit for fitting the set of skeletons to the trunk features of the skin;
an envelope range calculating unit for calculating an envelope range of a bone of the set of skeletons according to a plurality of mesh vertices in proximity to the bone;
a mesh vertices weight calculating unit for calculating the weight of a mesh vertex of the skin information relative to a bone according to the envelope range, wherein the weight of the mesh vertex indicates how the mesh vertex is affected by the movement of the bone when the bone is moving; and
an output unit for outputting the avatar to the arithmetic unit, wherein the outputted avatar comprises data in relation to the skin, the set of skeletons and the relationships therebetween.
1. An animation generation system, comprising:
an avatar generation module for generating an avatar in a virtual space, wherein the avatar has a set of skeletons and a skin attached to the set of skeletons, and the set of skeletons has a plurality of movable nodes, and the movable nodes are manipulated so that a motion of the skin is induced; and
an avatar manipulation module for manipulating the movable nodes of the avatar, comprising:
a position mark which is moved by users to at least one first real position in a real space;
at least one control mark which is moved by the users to at least one second real position in the real space;
a video capturing unit for capturing the images of the real space;
an arithmetic unit, coupled to the video capturing unit, for identifying the first real position and the second real position from the images of the real space, and converting the first real position into a first virtual position where the avatar is in the virtual space, and the second real position into a second virtual position where one of the movable nodes of the avatar is in the virtual space,
wherein the relative motions between the second virtual position and the first virtual position make the avatar perform a series of successive motions in the virtual space, and
one of the images captured by the video capturing unit is drawn as a background, while one of the images of a designated object corresponding to the position mark is drawn onto the background, according to the first virtual position, to generate an augmented reality (AR) animation,
wherein the avatar generation module further comprises:
a readout unit for reading out skin information in respect of the skin, and skeleton information in respect of a set of template skeletons, wherein the skin information is composed of data of a plurality of mesh vertices, and the skeleton information comprises geometric data of each bone of the set of template skeletons and the linkage relationship between the bones of the set of template skeletons;
an analyzing unit for analyzing the skin information to obtain a plurality of appearance features and a plurality of trunk features of the skin;
a rough fitting unit for adjusting the size of the set of template skeletons according to the appearance features to generate the set of skeletons of the avatar, and fitting the set of skeletons to the appearance features;
a precise fitting unit for fitting the set of skeletons to the trunk features of the skin;
an envelope range calculating unit for calculating an envelope range of a bone of the set of skeletons according to a plurality of mesh vertices in proximity to the bone;
a mesh vertices weight calculating unit for calculating the weight of a mesh vertex of the skin information relative to a bone according to the envelope range, wherein the weight of the mesh vertex indicates how the mesh vertex is affected by the movement of the bone when the bone is moving; and
an output unit for outputting the avatar to the arithmetic unit, wherein the outputted avatar comprises data in relation to the skin, the set of skeletons and the relationships therebetween.
2. The animation generation system as claimed in claim 1, wherein the position mark and the control mark are barcodes or other objects whose visible appearances have identifiable shapes, patterns, dimensions or colors.
3. The animation generation system as claimed in claim 1, wherein the video capturing unit is composed of at least one camcorder.
4. The animation generation system as claimed in claim 1, wherein the AR animation drawn by the arithmetic unit is stored into a storage unit.
5. The animation generation system as claimed in claim 1, wherein the designated object is a successive motion.
6. The animation generation system as claimed in claim 1, wherein the designated object is a virtual three dimension (3D) model.
7. The animation generation system as claimed in claim 1, wherein the envelope range is two-layer-barrel-shaped, and each of the two ends of the envelope range is a concentric hemisphere.
8. The animation generation system as claimed in claim 1, wherein the rules for the mesh vertices weight calculating unit to calculate the mesh vertices weights comprise:
rule (1): when a mesh vertex is inside the inner layer of the envelope of a bone, the weight of the mesh vertex in relation to the bone is 1.0, which indicates that the mesh vertex is totally affected by the movement of the bone when the bone is moving;
rule (2): when a mesh vertex is outside of the outer layer of the envelope of a bone, the weight of the mesh vertex in relation to the bone is 0.0, which indicates that the mesh vertex is totally unaffected by the movement of the bone when the bone is moving;
rule (3): when a mesh vertex is between the inner and outer layers of the envelope range of a bone, the weight of the mesh vertex in relation to the bone decreases from 1.0 to 0.0, which indicates that the mesh vertex is partially affected by the movement of the bone according to the distance between the mesh vertex and the bone;
rule (4): when a mesh vertex does not belong to any envelope range of any bone in accordance with the former three rules, the weight of the mesh vertex in relation to a bone closest to the mesh vertex is 1.0, which indicates that the mesh vertex is totally affected by the movement of the bone closest to the mesh vertex; and
rule (5): when all the weights of the mesh vertices affected by all the bones are recorded to form a weight table, each value of the weight of the mesh vertices on the weight table has to be normalized so that the sum of all the values is 1.0.
9. The animation generation system as claimed in claim 1 further comprising:
a display unit, coupled to the arithmetic unit, for displaying the virtual space and the avatar.
10. The animation generation system as claimed in claim 4, wherein the AR animation is a 2D image stream, or a multi-view image stream.
11. The animation generation system as claimed in claim 10, wherein the synthesizing method of the multi-view image stream is an interlaced signal synthesizing method or a collateral signal synthesizing method.
13. The animation generation method as claimed in claim 12, wherein the designated object is a successive motion.
14. The animation generation method as claimed in claim 12, wherein the designated object is a virtual three dimension (3D) model.
15. The animation generation method as claimed in claim 12, wherein the AR animation is a 2D image stream, or a multi-view image stream.
16. The animation generation method as claimed in claim 12, wherein the envelope range is two-layer-barrel-shaped, and each of the two ends of the envelope range is a concentric hemisphere.
17. The animation generation method as claimed in claim 12, wherein the rules for calculating the mesh vertices weights comprise:
rule (1): when a mesh vertex is inside the inner layer of the envelope of a bone, the weight of the mesh vertex in relation to the bone is 1.0, which indicates that the mesh vertex is totally affected by the movement of the bone when the bone is moving;
rule (2): when a mesh vertex is outside of the outer layer of the envelope of a bone, the weight of the mesh vertex in relation to the bone is 0.0, which indicates that the mesh vertex is totally unaffected by the movement of the bone when the bone is moving;
rule (3): when a mesh vertex is between the inner and outer layers of the envelope range of a bone, the weight of the mesh vertex in relation to the bone decreases from 1.0 to 0.0, which indicates that the mesh vertex is partially affected by the movement of the bone according to the distance between the mesh vertex and the bone;
rule (4): when a mesh vertex does not belong to any envelope range of any bone in accordance with the former three rules, the weight of the mesh vertex in relation to a bone closest to the mesh vertex is 1.0, which indicates that the mesh vertex is totally affected by the movement of the bone closest to the mesh vertex; and
rule (5): when all the weights of the mesh vertices affected by all the bones of the set of skeletons are recorded to form a weight table, each value of the weight of the mesh vertices on the weight table has to be normalized so that the sum of all the values is 1.0.

This Non-provisional application claims priority under 35 U.S.C. §119(a) on Provisional Patent Application No. 61/290,848, filed in the United States of America on Dec. 29, 2009, the entire contents of which are hereby incorporated by reference.

1. Technical Field

The present disclosure relates to animation generation systems and methods, and in particular relates to animation generation systems and methods for manipulating avatars in an animation.

2. Description of the Related Art

Due to the increasing popularity of the Internet, network applications and online multiplayer games have grown in membership and usage. As a result, global revenues for digital content providers offering such network applications and online games have reached around US$35 billion per year.

An avatar represents a computer user on the Internet, in the form of a one-dimensional (1D) username or a two-dimensional (2D) icon (picture). Nowadays, an avatar is usually a three-dimensional (3D) model of the kind commonly used in computer games. Conventionally, the procedure to construct a 3D avatar comprises the steps of producing a 2D image, constructing its 3D mesh details, building its skeleton, etc. All of these steps require considerable time and effort, so it is hard for an ordinary user to construct a personalized 3D virtual avatar.

Accordingly, an integrated system or method with which a personalized avatar can be easily generated and manipulated would fulfill users' enjoyment of network applications and online games.

The purpose of the present disclosure is to provide systems and methods for generating a 3D avatar rapidly and efficiently, and for manipulating the 3D avatar.

The present disclosure provides an animation generation system. The animation generation system comprises an avatar generation module for generating an avatar in a virtual space, wherein the avatar has a set of skeletons and a skin attached to the set of skeletons, and the set of skeletons has a plurality of movable nodes, and the movable nodes are manipulated so that a motion of the skin is induced; and an avatar manipulation module for manipulating the movable nodes of the avatar, comprising: a position mark which is moved by users to at least one first real position in a real space; at least one control mark which is moved by the users to at least one second real position in the real space; a video capturing unit for capturing the images of the real space; and an arithmetic unit, coupled to the video capturing unit, for identifying the first real position and the second real position from the images of the real space, and converting the first real position into a first virtual position where the avatar is in the virtual space, and the second real position into a second virtual position where one of the movable nodes of the avatar is in the virtual space, wherein the relative motions between the second virtual position and the first virtual position make the avatar perform a series of successive motions in the virtual space, and one of the images captured by the video capturing unit is drawn as a background, while one of the images of a designated object corresponding to the position mark is drawn onto the background, according to the first virtual position, to generate an Augmented Reality (AR) animation.

The present disclosure provides an animation generation method for generating an avatar in a virtual space. The avatar has a set of skeletons and a skin attached to the set of skeletons, the set of skeletons has a plurality of movable nodes, and the movable nodes are manipulated so that a motion of the skin is induced. The animation generation method comprises: moving a position mark to at least one first real position in a real space; moving at least one control mark to at least one second real position in the real space; capturing the images of the real space; identifying the first real position and the second real position from the images of the real space; converting the first real position into a first virtual position where the avatar is in the virtual space; converting the second real position into a second virtual position where one of the movable nodes of the avatar is in the virtual space, wherein the relative motions between the second virtual position and the first virtual position make the avatar perform a series of successive motions in the virtual space; drawing one of the images of the real space as a background; and drawing a designated object on the first virtual position onto the background to generate an Augmented Reality (AR) animation.

The present disclosure provides a multi-view animation generation system. The multi-view animation generation system comprises: at least one mark which is moved by users to at least one first real position in a real space; an arithmetic unit, coupled to at least two video capturing units, wherein the at least two video capturing units capture image streams of the real space and transmit the image streams to the arithmetic unit, and the arithmetic unit identifies at least two first real positions from the images of the real space and converts the at least two real positions into at least two virtual positions in a virtual space; and a multi-view animation synthesizing unit, coupled to the arithmetic unit, for drawing one of the images captured by the video capturing units as a background, drawing a virtual model corresponding to the mark on one of the at least two virtual positions onto the background, synthesizing the background and the virtual model by using known 3D techniques to generate an animation, and transmitting the animation to a multi-view display unit.

The present disclosure provides a multi-view video generation method for synthesizing a multi-view video and drawing a virtual model in the multi-view video. The multi-view video generation method comprises: placing a mark on at least one real position in a real space; capturing the images of the real space by using at least two video capturing units; identifying at least two real positions from the images of the real space; converting the at least two real positions into at least two virtual positions in a virtual space; and synthesizing a multi-view video, comprising: drawing one of the images captured by the video capturing units as a background; drawing a virtual model corresponding to the mark on one of the at least two virtual positions onto the background to form a result video; and synthesizing the background and the virtual model by using known multi-view synthesizing methods to generate the multi-view video.

A detailed description is given in the following embodiments with reference to the accompanying drawings.

The present disclosure can be more fully understood by reading the subsequent detailed description and examples with references made to the accompanying drawings, wherein:

FIG. 1 shows an animation generation system according to the present disclosure.

FIG. 2 shows a detailed animation generation system of the present disclosure.

FIG. 3 is a diagram illustrating a multi-view display technology of the present disclosure.

FIG. 4 shows an example which uses the model establishing unit 118 to establish the skin information of the avatar skin.

FIG. 5 shows the appearance features of the avatar.

FIGS. 6A and 6B show the profile of a bone and its envelope range according to an embodiment of the present disclosure.

FIG. 7 is a diagram illustrating an embodiment of the present disclosure which employs the avatar manipulation module to manipulate an avatar.

FIGS. 8A and 8B show a flowchart of the animation generation method according to one embodiment of the present disclosure.

The following description is made for the purpose of illustrating the general principles of the disclosure and should not be taken in a limiting sense. The scope of the disclosure is best determined by reference to the appended claims.

FIG. 1 shows an animation generation system according to the present disclosure. The animation generation system comprises an avatar generation module 110, an avatar manipulation module 120, and a display unit 130. The avatar generation module 110 is used to generate an avatar in a virtual space, where the avatar has a set of skeletons and a skin attached to the set of skeletons. The virtual space and the avatar may be displayed in the display unit 130. For the purpose of illustrating the present disclosure, the avatar in the following description is a human, which has a human-like figure comprising a head, a body and four limbs. The set of skeletons of the avatar has a plurality of movable nodes, and the movable nodes may be manipulated so that a motion of the skin is induced. For example, a point on a femur (thigh bone) may be moved relative to the tibia (shinbone), while a point on the skin attached to the femur may be still relative to the entire femur. The avatar manipulation module 120 is used to manipulate the movable nodes of the avatar, so that the avatar can perform various movements in the virtual space, to generate an animation. The function and structure of the avatar generation module 110, the avatar manipulation module 120 and the display unit 130 will be illustrated in the following paragraphs.

FIG. 2 shows a detailed animation generation system of the present disclosure. As shown in FIG. 2, the avatar generation module 110 of the present disclosure comprises a readout unit 111, an analyzing unit 112, a rough fitting unit 113, a precise fitting unit 114, an envelope range calculating unit 115, a mesh vertices weight calculating unit 116 and an output unit 117. The readout unit 111 is used to read out skin information in respect of the skin, and skeleton information in respect of a set of template skeletons. The skin information is composed of data of a plurality of triangles or mesh vertices. For example, the skin of the avatar may be composed of about 20 thousand triangle meshes, where each of the meshes further comprises color and material information. The skeleton information comprises identification data (such as the name and serial number) of each bone of the set of template skeletons, geometric data (such as the shape and size) of each bone, and the linkage relationships between the bones. In an embodiment, the readout unit 111 may be coupled to a model database (not shown), from which the stored skin information and skeleton information may be read out. In another embodiment, the readout unit 111 may be further coupled to a model establishing unit 118. FIG. 4 shows an example which uses the model establishing unit 118 to establish the skin information of the avatar skin. The model establishing unit 118 shoots a real model (for example, a real human) synchronously with several camcorders (for example, camcorders 1-8) to obtain image information from the model, establishes a three-dimensional (3D) virtual model corresponding to the real human (having skin information but no skeleton information) by using a visual hull algorithm, and then provides the virtual model to the readout unit 111. For the convenience of subsequent processes, the model shot by the camcorders may be asked to face a designated direction with a designated posture. For example, the model may be asked to face ahead and stretch like an eagle, with both hands and feet protruding sideways and outwards, respectively, as shown in FIG. 4. With the model establishing unit 118, customized avatars can easily be drawn into any animation.
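
The visual hull algorithm itself is only named here, not detailed. As a rough illustration of the general idea, the following voxel-carving sketch keeps a voxel only if its projection falls inside every camcorder's silhouette mask. The function name, the numpy dependency, and the pinhole projection model are assumptions of this sketch, not part of the disclosure.

```python
import numpy as np

def visual_hull(silhouettes, projections, voxels):
    """Classic voxel carving: keep the voxels whose projection falls
    inside every camera's silhouette mask.

    silhouettes: list of HxW boolean masks, one per camcorder
    projections: list of 3x4 camera projection matrices (assumed pinhole)
    voxels:      (N, 3) array of voxel center coordinates
    """
    keep = np.ones(len(voxels), dtype=bool)
    homo = np.hstack([voxels, np.ones((len(voxels), 1))])    # homogeneous coords
    for mask, P in zip(silhouettes, projections):
        uvw = homo @ P.T                                     # project into the image
        uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
        h, w = mask.shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = mask[uv[inside, 1], uv[inside, 0]]
        keep &= hit                                          # must be seen by all cameras
    return voxels[keep]
```

The surviving voxels approximate the model's volume; the skin mesh can then be extracted from them, for example with a marching-cubes step.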

The analyzing unit 112 of the present disclosure is used to analyze the skin information to obtain a plurality of appearance features and a plurality of trunk features. Specifically, the appearance features are the protruding points on the skin. FIG. 5 shows the appearance features. The analyzing unit 112 of the present disclosure first analyzes the skin information of the avatar to find the convex hull of the skin, which is the polyhedron covering the avatar, as shown in FIG. 5. In FIG. 5, the appearance features are the most protruding points on the convex hull (marked with circles), i.e., the end-points of the polyhedron (if the skin information of the avatar is obtained by using a 3D image technique, protruding points on the back or the stomach of the avatar may also be found as appearance features). In an embodiment, when the model stretches like an eagle, as described above, the appearance features representing the head, the hands and the legs of the avatar can easily be found by the analyzing unit 112. In practice, however, the analyzing unit 112 may find far more appearance features than expected. In such a case, according to the distances between the appearance features, the analyzing unit 112 may remove unreasonable appearance features one by one, and then control the number of appearance features to be between 5 and 10. After determining the appearance features, the analyzing unit 112 may further find the trunk features of the avatar. The trunk features form a central axis within the skin of the avatar.
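
As a sketch of this analysis, the convex-hull step and the pruning of excess features might look as follows. The use of scipy's ConvexHull and the closest-pair pruning heuristic are assumptions, since the disclosure does not specify either.

```python
import numpy as np
from scipy.spatial import ConvexHull

def appearance_features(vertices, max_count=10):
    """Take the convex-hull vertices of the skin mesh as candidate
    appearance features, then prune the closest pairs until at most
    max_count features remain (the pruning rule is an assumed heuristic).

    vertices: (N, 3) array of skin mesh vertex positions
    """
    hull = ConvexHull(vertices)
    feats = vertices[hull.vertices]
    while len(feats) > max_count:
        d = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        row, _ = np.unravel_index(np.argmin(d), d.shape)
        feats = np.delete(feats, row, axis=0)   # drop one of the closest pair
    return feats
```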

The rough fitting unit 113 of the present disclosure is used to adjust the size of the template skeletons according to the appearance features to generate the skeletons of the avatar, and to fit the template skeletons to the appearance features. In an embodiment, the present disclosure can adjust the size of the template skeletons automatically by using Inverse Kinematics. In another embodiment, however, the size of the template skeletons may be adjusted by users manually. Although the template skeletons are not directly obtained from the model shot by the camcorders, since the shapes of human skeletons are similar to each other, personalized skeletons belonging to the model may be constructed after the rough fitting unit 113 adjusts the scale and the size of the template skeletons. After adjusting the size of the skeletons, the personalized set of skeletons is fitted to the appearance features of the avatar skin. Specifically, this fitting procedure further comprises a rotating procedure and a locating procedure. The rotating procedure first rotates the skin of the avatar toward the +Z axis in the virtual space, and then rotates the top end of the appearance features toward the +Y axis in the virtual space. The locating procedure respectively locates each end of the set of skeletons to a specific coordinate, as in the sketch below. For example, the locating procedure may: (1) locate the top end of the head to the coordinate which has the highest Y value; (2) locate the end of the left hand to the coordinate which has the highest X value; (3) locate the end of the right hand to the coordinate which has the lowest X value; (4) locate the end of the left foot to the coordinate which has a negative Y value and the highest X value; and (5) locate the end of the right foot to the coordinate which has a negative Y value and the lowest X value.
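
For illustration, the locating procedure maps directly onto a few coordinate comparisons. The sketch below assumes the +Y-up, stretched "eagle" posture described earlier; the dictionary keys and function name are chosen for this example.

```python
import numpy as np

def locate_skeleton_ends(features):
    """Assign the five end effectors of the set of skeletons to appearance
    features, following locating rules (1)-(5) in the text.

    features: (N, 3) array of appearance-feature coordinates (X, Y, Z)
    """
    f = np.asarray(features, dtype=float)
    ends = {"head": f[np.argmax(f[:, 1])],          # (1) highest Y value
            "left_hand": f[np.argmax(f[:, 0])],     # (2) highest X value
            "right_hand": f[np.argmin(f[:, 0])]}    # (3) lowest X value
    below = f[f[:, 1] < 0]                          # features with negative Y
    ends["left_foot"] = below[np.argmax(below[:, 0])]   # (4) negative Y, highest X
    ends["right_foot"] = below[np.argmin(below[:, 0])]  # (5) negative Y, lowest X
    return ends
```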

The precise fitting unit 114 of the present disclosure is used to fit the set of skeletons to the trunk features of the skin. Although the rough fitting unit 113 fits specific ends of the skeletons of the avatar to the skin, some bones may still be located outside the skin. Therefore, the precise fitting unit 114 has to fit the set of skeletons to the skin more precisely. The precise fitting unit 114 fits a bone which was located at a wrong place to a correct place according to the closest trunk features by using Inverse Kinematics. The precise fitting unit 114 may repeat the foregoing procedure until all the skeletons of the avatar have been correctly fitted.
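
The disclosure names Inverse Kinematics without fixing a solver. As one concrete possibility, a cyclic coordinate descent (CCD) solver, sketched below for a planar bone chain, iteratively rotates each joint so the chain's end approaches a target such as a trunk feature; CCD is an assumed choice, not stated in the disclosure.

```python
import numpy as np

def ccd_ik(joints, target, iterations=20, tol=1e-3):
    """Cyclic Coordinate Descent IK on a 2D bone chain.

    joints: (N, 2) array of joint positions, root first, end effector last
    target: (2,) position the end effector should reach
    """
    joints = np.asarray(joints, dtype=float).copy()
    target = np.asarray(target, dtype=float)
    for _ in range(iterations):
        for i in range(len(joints) - 2, -1, -1):
            to_end = joints[-1] - joints[i]          # joint -> end effector
            to_tgt = target - joints[i]              # joint -> target
            angle = np.arctan2(to_tgt[1], to_tgt[0]) - np.arctan2(to_end[1], to_end[0])
            c, s = np.cos(angle), np.sin(angle)
            R = np.array([[c, -s], [s, c]])
            # rotate the rest of the chain about this joint
            joints[i + 1:] = (joints[i + 1:] - joints[i]) @ R.T + joints[i]
        if np.linalg.norm(joints[-1] - target) < tol:
            break
    return joints
```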

The envelope range calculating unit 115 of the present disclosure is used to calculate an envelope range of each bone of the set of skeletons according to a plurality of mesh vertices in proximity to the bone. FIGS. 6A and 6B show the profile of a bone and its envelope range according to an embodiment of the present disclosure. A bone 610 has a starting point S and an end point E. In this embodiment, the envelope range 620 respectively forms two concentric hemispheres (called "spheres" hereinafter for brevity) on the starting point S and the end point E, as shown in FIG. 6A. Taking the two concentric spheres on the starting point S for example, the two concentric spheres may both have the starting point S as their centers, the distance between the point S and the furthest mesh vertex in a searching range (the searching range is a sphere having the point S as its center and at most half of the total length of the avatar as its radius) facing away from the point S (i.e., the normal of the mesh vertex points outward from the bone) as the radius RSI of the inner concentric sphere 622, and the distance between the point S and the nearest mesh vertex in the searching range facing toward the point S as the radius RSO of the outer sphere 624. Similarly, the two concentric spheres on the end point E may both have the end point E as their centers, the distance between the point E and the furthest mesh vertex in a searching range (the searching range is a sphere having the point E as its center and at most half of the total length of the avatar as its radius) facing away from the point E as the radius REI of the inner concentric sphere 626, and the distance between the point E and the nearest mesh vertex in the searching range facing toward the point E as the radius REO of the outer sphere 628. Further, the envelope range 620 and the concentric spheres 622-628 have to satisfy the following conditions: (1) the radii of the inner sphere and the outer sphere are not greater than the length of the bone 610; (2) the radii of the inner sphere and the outer sphere are not smaller than the internal radius of the bone 610; (3) the radius of the inner sphere is not greater than that of the outer sphere; (4) the radius of the outer sphere is not smaller than that of the inner sphere; (5) two adjacent envelope ranges do not overlap each other if possible; and (6) the envelope ranges of two symmetrical limbs (for example, the right femur and the left femur) do not overlap each other. After forming the concentric spheres 622-628, the envelope range calculating unit 115 may calculate the inner radius RI and the outer radius RO of the envelope range between the points S and E from the concentric spheres 622-628 by using interpolation. The inner envelope, the outer envelope, and the interpolated space between them form a two-layered space. Finally, the envelope range calculating unit 115 may further remove the half of each concentric sphere 622-628 which exceeds the length of the bone, generating a two-layer-barrel-shaped envelope range, as shown in FIG. 6B.
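
The interpolation between the endpoint spheres can be sketched in a few lines. Linear interpolation is assumed here, as the disclosure does not name the interpolation scheme.

```python
def envelope_radii(rsi, rso, rei, reo, t):
    """Inner and outer envelope radii at parameter t in [0, 1] along the
    bone from the starting point S (t = 0) to the end point E (t = 1),
    linearly interpolated between the endpoint sphere radii.

    rsi, rso: inner/outer sphere radii at S (RSI, RSO)
    rei, reo: inner/outer sphere radii at E (REI, REO)
    """
    r_inner = (1.0 - t) * rsi + t * rei
    r_outer = (1.0 - t) * rso + t * reo
    return r_inner, r_outer
```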

The mesh vertices weight calculating unit 116 of the present disclosure is used to calculate the weight of a mesh vertex of the skin information relative to a bone according to the envelope range, where the weight of the mesh vertex indicates how the mesh vertex is affected by the movement of the bone when the bone is moving. The mesh vertices weight calculating unit 116 calculates the weights of the mesh vertices in accordance with the following rules: rule (1): when a mesh vertex is inside the inner layer of the envelope of a bone, the weight of the mesh vertex in relation to the bone is 1.0, which indicates that the mesh vertex is totally affected by the movement of the bone when the bone is moving; rule (2): when a mesh vertex is outside of the outer layer of the envelope of a bone, the weight of the mesh vertex in relation to the bone is 0.0, which indicates that the mesh vertex is totally unaffected by the movement of the bone when the bone is moving; rule (3): when a mesh vertex is between the inner and outer layers of the envelope range of a bone, the weight of the mesh vertex in relation to the bone decreases as follows: Weight_vi,bi(dist_bi) = Decay(dist_bi), where Weight_vi,bi is the weight of a mesh vertex vi in relation to the bone bi, dist_bi is the distance between the mesh vertex vi and the bone bi, and Decay(x) is a decreasing function, decreasing from 1 to 0; rule (4): when a mesh vertex does not belong to any envelope range of any bone in accordance with the former three rules, the weight of the mesh vertex in relation to the bone closest to the mesh vertex is 1.0, which indicates that the mesh vertex is totally affected by the movement of the bone closest to the mesh vertex; and rule (5): each value of the weight of the mesh vertices on the weight table has to be normalized so that the sum of all the values is 1.0. Through the above rules, the mesh vertices weight calculating unit 116 can establish a weight table to record all the weights of the mesh vertices affected by all the bones.
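
The five rules translate almost directly into code. The sketch below computes one vertex's weights against all bones, assuming a linear decay for Decay(x); all names are illustrative.

```python
import numpy as np

def vertex_bone_weights(dists, inner, outer):
    """Normalized weights of one mesh vertex in relation to every bone.

    dists: (B,) distance from the vertex to each bone
    inner: (B,) inner envelope radius of each bone at the vertex
    outer: (B,) outer envelope radius of each bone at the vertex
    """
    w = np.zeros(len(dists))
    w[dists <= inner] = 1.0                         # rule (1): inside the inner layer
    between = (dists > inner) & (dists <= outer)    # rule (3): between the layers
    w[between] = 1.0 - (dists[between] - inner[between]) / (
        outer[between] - inner[between])            # linear Decay from 1.0 to 0.0
    # rule (2) is implicit: outside the outer layer the weight stays 0.0
    if w.sum() == 0.0:                              # rule (4): no envelope claims it
        w[np.argmin(dists)] = 1.0                   # the closest bone takes it fully
    return w / w.sum()                              # rule (5): normalize to sum 1.0
```

Stacking these per-vertex rows yields the weight table the unit records.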

After the procedures described above, the avatar of the present disclosure comprises not only the skin information (data of mesh vertices) and the skeleton information (identification data and geometric data of each bone of the set of skeletons, and the linkage relationships and movable degrees of freedom between the bones of the set of skeletons), but also the relationship information between the skin and the set of skeletons (i.e., the envelope ranges of the set of skeletons, and the weights of the mesh vertices, which indicate how the mesh vertices are affected by the set of skeletons). The output unit 117 of the avatar generation module 110 is used to output the avatar to the display unit 130 for displaying the avatar, and to output the avatar to the arithmetic unit 124 for further manipulation of the avatar.

In addition to the avatar generation module 110, the present disclosure further comprises an avatar manipulation module 120. As shown in FIG. 1, the avatar manipulation module 120 of the present disclosure further comprises a position mark 121, at least one control mark 122, at least one video capturing unit 123 and an arithmetic unit 124.

FIG. 7 is a diagram illustrating an embodiment of the present disclosure which employs the avatar manipulation module to manipulate an avatar. The avatar manipulation module in FIG. 7 comprises a position mark 121, at least one control mark 122, a video capturing unit 123 and an arithmetic unit 124. The manipulated avatar is shown on the display unit 130. The position mark 121 and the control mark 122 are objects having identifiable shapes, sizes or colors, or parts of a human body (note that a user can use two fingers as the position mark 121 and the control mark 122). The position mark 121 and the control mark 122 can be moved by users in a real space, either manually (by hand) or automatically (by machines). For example, the position mark 121 is moved by users to a first real position, (x0, y0, z0), while the control mark 122 is moved by users to a second real position, (x1, y1, z1).

The video capturing unit 123, e.g., one or more camcorders, is used to shoot the real space to obtain the images of the real space, including the position mark 121 and the control mark 122 in the real space. There may be only one camcorder in an embodiment. In order to achieve a stereoscopic effect or a multi-view effect, two or more camcorders may be employed in other embodiments, as will be described later.

The arithmetic unit 124 of the present disclosure is coupled to the video capturing unit 123, and is used to identify the first real position of the position mark 121 and the second real position of the control mark 122 from the images captured by the video capturing unit 123. The location and direction of the video capturing unit 123 may be fixed, or may change from time to time. In an embodiment, the position mark 121 and the control mark 122 are barcodes or other objects whose visible appearances have identifiable shapes, sizes or colors, in order to make it easier for the video capturing unit 123 to identify them. By checking the shapes and sizes of the two barcodes, the arithmetic unit 124 may easily determine the relative distances and directions between the two marks 121 and 122 and the camcorders (video capturing unit 123). Open source software, such as ARToolKit and ARTag, may work in coordination with the arithmetic unit 124 to identify the marks and calculate the spatial coordinates of the marks. Further, the arithmetic unit 124 may convert the first real position of the position mark 121 in the real space into a first virtual position where the avatar is in the virtual space, and the second real position of the control mark 122 in the real space into a second virtual position where a movable node of the avatar is in the virtual space. It is appreciated that relative motions between the second virtual position and the first virtual position make the avatar perform a series of successive motions in the virtual space. In an embodiment, if a user wants to control the location of the whole avatar in the virtual space, the position mark 121 may be moved. Also, if a user wants to control the forearm of the avatar by the control mark 122, a point on the forearm bone of the avatar may be set as a movable node (which is controlled by the control mark 122) before the control mark 122 is moved. Due to the linkage relationship between the forearm skin and the forearm bone, the whole forearm (including the forearm skin) will move when the control mark 122 is moving in the real space. Note that, due to the function performed by the avatar generation module 110 (especially the mesh vertices weight calculating unit 116), when the forearm of the avatar is moving, skin away from the forearm, for example, the skin of the chest or shoulder, will move accordingly. Thus, the avatar generated by the present disclosure has smoother and more natural movements.
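
Once the weight table exists, moving a bone moves the skin by the standard linear blend skinning computation that the weighted-vertex description implies. The sketch below assumes 4x4 bone motion matrices and is illustrative rather than the disclosure's literal implementation.

```python
import numpy as np

def skin_vertices(rest_vertices, weights, bone_transforms):
    """Linear blend skinning: each deformed vertex is a weight-table blend
    of that vertex moved by every bone's transform.

    rest_vertices:   (N, 3) skin mesh in the rest pose
    weights:         (N, B) weight table, rows normalized to sum to 1.0
    bone_transforms: (B, 4, 4) current motion of each bone
    """
    homo = np.hstack([rest_vertices, np.ones((len(rest_vertices), 1))])  # (N, 4)
    per_bone = np.einsum('bij,nj->bni', bone_transforms, homo)  # (B, N, 4) copies
    blended = np.einsum('nb,bni->ni', weights, per_bone)        # weight-table blend
    return blended[:, :3]
```

Because chest or shoulder vertices carry small but nonzero weights for the forearm bone, they follow the forearm slightly, which is what produces the smoother, more natural motion noted above.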

The display unit 130 of the present disclosure is coupled to the avatar generation module 110 and the avatar manipulation module 120, and is used to display the avatar and the virtual space where the avatar exists. FIG. 3 is a diagram illustrating a multi-view display technology of the present disclosure. In an embodiment, the display unit 330 is a multi-view display unit, a stereo display, or a naked-eye multi-view display. With the video capturing unit 123 having a plurality of camcorders and a multi-view animation synthesizing unit 125, the present disclosure may achieve a stereoscopic effect or a multi-view effect. For example, if the video capturing unit 123 has two camcorders, each may respectively represent the right and left eyes of a human. The multi-view animation synthesizing unit 125 may generate stereo videos by using an interlaced signal synthesizing method or a collateral signal synthesizing method. In an embodiment, the interlaced signal synthesizing method obtains the odd columns of an image of a left sight signal from a left camcorder and the even columns of an image of a right sight signal from a right camcorder, and then combines the odd and even columns together to generate an interlaced image. In another embodiment, the interlaced signal synthesizing method obtains the even columns of an image of a left sight signal from a left camcorder and the odd columns of an image of a right sight signal from a right camcorder, and then combines the odd and even columns together to generate an interlaced image. The collateral signal synthesizing method draws the right and left sight signals side by side in the same image. By using the multi-view animation synthesizing unit, the display unit 130 employing a stereo display or a naked-eye multi-view display may show pictures with depth perception to viewers.
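
Both synthesizing methods are simple column operations. The sketch below uses 1-based column parity as in the text (so the "odd columns" are numpy indices 0, 2, 4, ...), which is a bookkeeping assumption.

```python
import numpy as np

def interlaced(left_img, right_img):
    """Interlaced signal synthesizing: odd (1-based) columns from the left
    sight signal, even columns from the right sight signal."""
    out = np.empty_like(left_img)
    out[:, 0::2] = left_img[:, 0::2]    # 1-based odd columns <- left camcorder
    out[:, 1::2] = right_img[:, 1::2]   # 1-based even columns <- right camcorder
    return out

def collateral(left_img, right_img):
    """Collateral signal synthesizing: both sight signals drawn side by
    side in one image."""
    return np.hstack([left_img, right_img])
```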

The animation generation system 100 of the present disclosure has been discussed above. It is noted that, for the purpose of illustrating the present disclosure, the avatar generation module 110, the avatar manipulation module 120, the display unit 130, the readout unit 111, the analyzing unit 112, the rough fitting unit 113, the precise fitting unit 114, the envelope range calculating unit 115, the mesh vertices weight calculating unit 116, the output unit 117, the model establishing unit 118 in the avatar generation module 110, and the video capturing unit 123 and the arithmetic unit 124 in the avatar manipulation module 120 are described separately. Any combination of the above parts may be integrated in and performed by a single computer, or distributed over a network and performed by a plurality of computers.

The present disclosure also provides an animation generation method. FIGS. 8A and 8B show a flowchart of the animation generation method according to one embodiment of the present disclosure. The animation generation method of the present disclosure is used to generate an avatar in a virtual space, where the avatar has a set of skeletons and a skin attached to the set of skeletons, and the set of skeletons has a plurality of movable nodes. The animation generation method is also used to manipulate the movable nodes of the avatar. Each of the steps of the animation generation method may be respectively performed by the components of the animation generation system 100 described above. The animation generation method comprises: in step S811, reading out skin information in respect of the skin, and skeleton information in respect of a set of template skeletons, wherein the skin information is composed of data of a plurality of mesh vertices, and the skeleton information comprises geometric data of each bone of the set of template skeletons and the linkage relationships between the bones of the set of template skeletons; in step S812, analyzing the skin information to obtain a plurality of appearance features and a plurality of trunk features of the skin, where the appearance features are protruding points of the skin, and the trunk features form a central axis within the skin; in step S813, adjusting the size of the set of template skeletons according to the appearance features to generate the set of skeletons of the avatar, and fitting the set of skeletons to the appearance features; in step S814, fitting the set of skeletons to the trunk features of the skin; in step S815, calculating an envelope range of a bone of the set of skeletons according to a plurality of mesh vertices in proximity to the bone; in step S816, calculating the weight of a mesh vertex of the skin information relative to a bone according to the envelope range, wherein the weight of the mesh vertex indicates how the mesh vertex is affected by the movement of the bone when the bone is moving; and in step S817, outputting the avatar to the arithmetic unit 124 and/or the display unit 130 in FIG. 1, wherein the outputted avatar comprises data in relation to the skin, the set of skeletons and the relationships therebetween. After steps S811-S817, a manipulable avatar is produced. The present disclosure then further manipulates the avatar: in step S821, moving a position mark to at least one first real position in the real space; in step S822, moving at least one control mark to at least one second real position in the real space; in step S823, capturing the images of the real space; and in step S824, identifying the first real position and the second real position from the images of the real space, converting the first real position into a first virtual position where the avatar is in the virtual space, and converting the second real position into a second virtual position where one of the movable nodes of the avatar is in the virtual space, wherein the relative motions between the second virtual position and the first virtual position make the avatar perform a series of successive motions in the virtual space. In an embodiment, the animation generation method also comprises: obtaining image data from a real model, constructing a virtual model in accordance with the image data, and providing the skin information to the other elements. The method for constructing the virtual model comprises employing a visual hull algorithm.
In an embodiment, the present disclosure may fit the set of skeletons to the appearance features and the trunk features by using Inverse Kinematics. In an embodiment, the envelope range provided in step S815 is two-layer-barrel-shaped, and each of the two ends of the envelope range has two concentric hemispheres with the outer halves removed. The shape of, and the method for constructing, the envelope range have been illustrated in the preceding examples with reference to FIGS. 6A and 6B. Further, the rules for calculating the weights of the mesh vertices in step S816 have been illustrated in the preceding example with reference to the mesh vertices weight calculating unit 116. The animation generation method of the present disclosure also comprises displaying the virtual space and the avatar in a step (not shown in the figures). This step may further synthesize a plurality of visual signals obtained by the camcorders by employing multi-view animation synthesizing technology. The multi-view animation synthesizing technology may be an interlaced signal synthesizing method or a collateral signal synthesizing method, as illustrated above. In an embodiment, the animation generation method of the present disclosure may follow the order shown in FIGS. 8A and 8B. However, the present disclosure is not limited to this particular order. For example, steps S821-S824 may be performed at the same time.

While the disclosure has been described by way of example and in terms of the exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. To the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.

Lin, Tzung-Han, Teng, Chih-Jen, Hsiao, Fu-Jen

11388126, Jul 19 2016 Snap Inc. Displaying customized electronic messaging graphics
11392264, Apr 27 2017 SNAP INC Map-based graphical user interface for multi-type social media galleries
11411895, Nov 29 2017 SNAP INC Generating aggregated media content items for a group of users in an electronic messaging application
11418470, Jul 19 2016 Snap Inc. Displaying customized electronic messaging graphics
11418906, Apr 27 2017 Snap Inc. Selective location-based identity communication
11425062, Sep 27 2019 Snap Inc. Recommended content viewed by friends
11425068, Feb 03 2009 Snap Inc. Interactive avatar in messaging environment
11438288, Jul 19 2016 Snap Inc. Displaying customized electronic messaging graphics
11438341, Oct 10 2016 Snap Inc. Social media post subscribe requests for buffer user accounts
11443491, Jun 28 2019 Snap Inc. 3D object camera customization system
11443772, Feb 05 2014 Snap Inc. Method for triggering events in a video
11450051, Nov 18 2020 Snap Inc.; SNAP INC Personalized avatar real-time motion capture
11450349, Feb 05 2014 Snap Inc. Real time video processing for changing proportions of an object in the video
11451956, Apr 27 2017 SNAP INC Location privacy management on map-based social media platforms
11452939, Sep 21 2020 Snap Inc. Graphical marker generation system for synchronizing users
11455081, Aug 05 2019 Snap Inc. Message thread prioritization interface
11455082, Sep 28 2018 Snap Inc. Collaborative achievement interface
11460974, Nov 28 2017 Snap Inc. Content discovery refresh
11468618, Feb 28 2018 Snap Inc. Animated expressive icon
11474663, Apr 27 2017 Snap Inc. Location-based search mechanism in a graphical user interface
11477149, Sep 28 2018 Snap Inc. Generating customized graphics having reactions to electronic message content
11509615, Jul 19 2016 Snap Inc. Generating customized electronic messaging graphics
11514635, Jan 30 2020 SNAP INC System for generating media content items on demand
11514947, Feb 05 2014 Snap Inc. Method for real-time video processing involving changing features of an object in the video
11516173, Dec 26 2018 SNAP INC Message composition interface
11523159, Feb 28 2018 Snap Inc. Generating media content items based on location information
11532105, Mar 16 2021 Snap Inc. Mirroring device with whole-body outfits
11532134, Apr 27 2018 Systems and methods for generating and facilitating access to a personalized augmented rendering of a user
11543939, Jun 08 2020 Snap Inc. Encoded image based messaging system
11544883, Jan 16 2017 Snap Inc. Coded vision system
11544885, Mar 19 2021 Snap Inc.; SNAP INC Augmented reality experience based on physical items
11544902, Nov 27 2018 Snap Inc. Rendering 3D captions within real-world environments
11551423, Oct 24 2016 Snap Inc. Augmented reality object manipulation
11557075, Feb 06 2019 Snap Inc. Body pose estimation
11562548, Mar 22 2021 Snap Inc. True size eyewear in real time
11563702, Dec 03 2019 Snap Inc. Personalized avatar notification
11574431, Feb 26 2019 Snap Inc. Avatar based on weather
11580682, Jun 30 2020 Snap Inc. Messaging system with augmented reality makeup
11580698, Nov 27 2018 Snap Inc. Rendering 3D captions within real-world environments
11580700, Oct 24 2016 Snap Inc. Augmented reality object manipulation
11582176, Dec 09 2019 Snap Inc. Context sensitive avatar captions
11588769, Jan 09 2017 Snap Inc.; SNAP INC Contextual generation and selection of customized media content
11588772, Aug 12 2019 Snap Inc. Message reminder interface
11593980, Apr 20 2017 Snap Inc. Customized user interface for electronic communications
11594025, Dec 11 2019 Snap Inc. Skeletal tracking using previous frames
11601783, Jun 07 2019 Snap Inc. Detection of a physical collision between two client devices in a location sharing system
11607616, May 08 2012 SNAP INC System and method for generating and displaying avatars
11610354, Oct 26 2017 Snap Inc. Joint audio-video facial animation system
11610357, Sep 28 2018 Snap Inc. System and method of generating targeted user lists using customizable avatar characteristics
11615592, Oct 27 2020 Snap Inc.; SNAP INC Side-by-side character animation from realtime 3D body motion capture
11616745, Jan 09 2017 Snap Inc.; SNAP INC Contextual generation and selection of customized media content
11619501, Mar 11 2020 SNAP INC Avatar based on trip
11620325, Jan 30 2020 Snap Inc. Video generation system to render frames on demand using a fleet of servers
11620791, Nov 27 2018 Snap Inc. Rendering 3D captions within real-world environments
11620798, Apr 30 2019 Systems and methods for conveying virtual content in an augmented reality environment, for facilitating presentation of the virtual content based on biometric information match and user-performed activities
11625873, Mar 30 2020 SNAP INC Personalized media overlay recommendation
11631223, Apr 30 2019 Systems, methods, and storage media for conveying virtual content at different locations from external resources in an augmented reality environment
11631276, Mar 31 2016 Snap Inc. Automated avatar generation
11636654, May 19 2021 SNAP INC AR-based connected portal shopping
11636657, Dec 19 2019 Snap Inc. 3D captions with semantic graphical elements
11636662, Sep 30 2021 Snap Inc.; SNAP INC Body normal network light and rendering control
11638115, Mar 28 2019 Snap Inc. Points of interest in a location sharing system
11651022, Jan 30 2020 Snap Inc. Video generation system to render frames on demand using a fleet of servers
11651539, Jan 30 2020 SNAP INC System for generating media content items on demand
11651797, Feb 05 2014 Snap Inc. Real time video processing for changing proportions of an object in the video
11657580, Jan 09 2017 Snap Inc. Surface aware lens
11659014, Jul 28 2017 Snap Inc. Software application manager for messaging applications
11660022, Oct 27 2020 Snap Inc. Adaptive skeletal joint smoothing
11662890, Sep 16 2019 Snap Inc. Messaging system with battery level sharing
11662900, May 31 2016 Snap Inc. Application control using a gesture based trigger
11663792, Sep 08 2021 Snap Inc. Body fitted accessory with physics simulation
11670059, Sep 01 2021 Snap Inc.; SNAP INC Controlling interactive fashion based on body gestures
11673054, Sep 07 2021 Snap Inc. Controlling AR games on fashion items
11676199, Jun 28 2019 Snap Inc. Generating customizable avatar outfits
11676320, Sep 30 2019 Snap Inc. Dynamic media collection generation
11683280, Jun 10 2020 Snap Inc. Messaging system including an external-resource dock and drawer
11688119, Feb 28 2018 Snap Inc. Animated expressive icon
11693887, Jan 30 2019 Snap Inc. Adaptive spatial density based clustering
11694402, Nov 27 2018 Snap Inc. Textured mesh building
11698722, Nov 30 2018 Snap Inc. Generating customized avatars based on location information
11704005, Sep 28 2018 Snap Inc. Collaborative achievement interface
11704878, Jan 09 2017 Snap Inc. Surface aware lens
11706267, Oct 30 2017 Snap Inc. Animated chat presence
11714524, Feb 06 2019 Snap Inc. Global event-based avatar
11714535, Jul 11 2019 Snap Inc. Edge gesture interface with smart interactions
11715268, Aug 30 2018 Snap Inc. Video clip object tracking
11729441, Jan 30 2020 Snap Inc. Video generation system to render frames on demand
11734866, Sep 13 2021 Snap Inc. Controlling interactive fashion based on voice
11734894, Nov 18 2020 Snap Inc.; SNAP INC Real-time motion transfer for prosthetic limbs
11734959, Mar 16 2021 Snap Inc. Activating hands-free mode on mirroring device
11748931, Nov 18 2020 Snap Inc.; SNAP INC Body animation sharing and remixing
11748958, Dec 07 2021 Snap Inc. Augmented reality unboxing experience
11751015, Jan 16 2019 Snap Inc. Location-based context information sharing in a messaging system
11752431, Oct 27 2017 Systems and methods for rendering a virtual content object in an augmented reality environment
11763481, Oct 20 2021 Snap Inc.; SNAP INC Mirror-based augmented reality experience
11769259, Jan 23 2018 Snap Inc. Region-based stabilized face tracking
11775165, Mar 16 2020 Snap Inc. 3D cutout image modification
11782574, Apr 27 2017 Snap Inc. Map-based graphical user interface indicating geospatial activity metrics
11783494, Nov 30 2018 Snap Inc. Efficient human pose tracking in videos
11790531, Feb 24 2021 Snap Inc.; SNAP INC Whole body segmentation
11790614, Oct 11 2021 Snap Inc. Inferring intent from pose and speech input
11797155, Jun 30 2021 Snap Inc. Hybrid search system for customizable media
11798201, Mar 16 2021 Snap Inc. Mirroring device with whole-body outfits
11798238, Sep 14 2021 Snap Inc. Blending body mesh into external mesh
11798261, Dec 14 2018 Snap Inc. Image face manipulation
11809624, Feb 13 2019 Snap Inc. Sleep detection in a location sharing system
11809633, Mar 16 2021 Snap Inc. Mirroring device with pointing based navigation
11810220, Dec 19 2019 Snap Inc. 3D captions with face tracking
11810226, Feb 09 2018 Systems and methods for utilizing a living entity as a marker for augmented reality content
11818286, Mar 30 2020 SNAP INC Avatar recommendation and reply
11822766, Jun 08 2020 Snap Inc. Encoded image based messaging system
11822774, Sep 16 2019 Snap Inc. Messaging system with battery level sharing
11823312, Sep 18 2017 Systems and methods for utilizing a device as a marker for augmented reality content
11823341, Jun 28 2019 Snap Inc. 3D object camera customization system
11823346, Jan 17 2022 Snap Inc. AR body part tracking system
11824822, Sep 28 2018 Snap Inc. Generating customized graphics having reactions to electronic message content
11830209, May 26 2017 Snap Inc. Neural network-based image stream modification
11831937, Jan 30 2020 Snap Inc. Video generation system to render frames on demand using a fleet of GPUs
11833427, Sep 21 2020 Snap Inc. Graphical marker generation system for synchronizing users
11836859, Nov 27 2018 Snap Inc. Textured mesh building
11836862, Oct 11 2021 Snap Inc. External mesh with vertex attributes
11836866, Sep 20 2021 Snap Inc. Deforming real-world object using an external mesh
11842411, Apr 27 2017 SNAP INC Location-based virtual avatars
11843456, Oct 24 2016 Snap Inc. Generating and displaying customized avatars in media overlays
11850511, Oct 27 2017 Systems and methods for rendering a virtual content object in an augmented reality environment
11852554, Mar 21 2019 SNAP INC Barometer calibration in a location sharing system
11854069, Jul 16 2021 Snap Inc.; SNAP INC Personalized try-on ads
11863513, Aug 31 2020 Snap Inc. Media content playback and comments management
11868414, Mar 14 2019 SNAP INC Graph-based prediction for contact suggestion in a location sharing system
11868590, Sep 25 2018 Snap Inc. Interface to display shared user groups
11870743, Jan 23 2017 Snap Inc. Customized digital avatar accessories
11870745, Jun 28 2022 SNAP INC; Snap Inc. Media gallery sharing and management
11875439, Apr 18 2018 Snap Inc. Augmented expression system
11876762, Oct 24 2016 Snap Inc. Generating and displaying customized avatars in media overlays
11877211, Jan 14 2019 Snap Inc. Destination sharing in location sharing system
11880923, Feb 28 2018 Snap Inc. Animated expressive icon
11880947, Dec 21 2021 SNAP INC Real-time upper-body garment exchange
11882162, Jul 28 2017 Snap Inc. Software application manager for messaging applications
11887237, Nov 28 2018 Snap Inc. Dynamic composite user identifier
11887260, Dec 30 2021 SNAP INC AR position indicator
11888795, Sep 21 2020 Snap Inc. Chats with micro sound clips
11893166, Nov 08 2022 Snap Inc. User avatar movement control using an augmented reality eyewear device
11893208, Dec 31 2019 Snap Inc. Combined map icon with action indicator
11893301, Sep 10 2020 Snap Inc. Colocated shared augmented reality without shared backend
11893647, Apr 27 2017 SNAP INC Location-based virtual avatars
11900506, Sep 09 2021 Snap Inc.; SNAP INC Controlling interactive fashion based on facial expressions
11908041, Jan 19 2022 Object replacement system
11908083, Aug 31 2021 Snap Inc.; SNAP INC Deforming custom mesh based on body mesh
11908093, Dec 19 2019 Snap Inc. 3D captions with semantic graphical elements
11908243, Mar 16 2021 Snap Inc. Menu hierarchy navigation on electronic mirroring devices
11910269, Sep 25 2020 Snap Inc. Augmented reality content items including user avatar to share location
11917495, Jun 07 2019 Snap Inc. Detection of a physical collision between two client devices in a location sharing system
11922010, Jun 08 2020 Snap Inc. Providing contextual information with keyboard interface for messaging system
11925869, May 08 2012 Snap Inc. System and method for generating and displaying avatars
11928783, Dec 30 2021 Snap Inc. AR position and orientation along a plane
11930055, Oct 30 2017 Snap Inc. Animated chat presence
11941227, Jun 30 2021 Snap Inc. Hybrid search system for customizable media
11941767, May 19 2021 Snap Inc. AR-based connected portal shopping
11949958, Jan 30 2020 SNAP INC Selecting avatars to be included in the video being generated on demand
11954762, Jan 19 2022 Snap Inc. Object replacement system
11956190, May 08 2020 Snap Inc. Messaging system with a carousel of related entities
11956192, Aug 12 2019 Snap Inc. Message reminder interface
11960784, Dec 07 2021 Snap Inc. Shared augmented reality unboxing experience
11962598, Oct 10 2016 Snap Inc. Social media post subscribe requests for buffer user accounts
11969075, Mar 31 2020 Snap Inc. Augmented reality beauty product tutorials
11973732, Apr 30 2019 Snap Inc. Messaging system with avatar generation
11978140, Mar 30 2020 Snap Inc. Personalized media overlay recommendation
11978283, Mar 16 2021 Snap Inc. Mirroring device with a hands-free mode
11983462, Aug 31 2021 SNAP INC Conversation guided augmented reality experience
11983826, Sep 30 2021 SNAP INC; Snap Inc. 3D upper garment tracking
11983830, Apr 27 2018 Systems and methods for generating and facilitating access to a personalized augmented rendering of a user
11989809, Jan 16 2017 Snap Inc. Coded vision system
11991130, Jan 18 2017 Snap Inc. Customized contextual media content item generation
11991419, Jan 30 2020 SNAP INC Selecting avatars to be included in the video being generated on demand
11995288, Apr 27 2017 Snap Inc. Location-based search mechanism in a graphical user interface
11995757, Oct 29 2021 Snap Inc. Customized animation from video
11996113, Oct 29 2021 SNAP INC Voice notes with changing effects
12056760, Jun 28 2019 Snap Inc. Generating customizable avatar outfits
12056792, Dec 30 2020 Snap Inc. Flow-guided motion retargeting
12056832, Sep 01 2021 Snap Inc. Controlling interactive fashion based on body gestures
12058583, Apr 27 2017 Snap Inc. Selective location-based identity communication
12062144, May 27 2022 Snap Inc. Automated augmented reality experience creation based on sample source and target images
12062146, Jul 28 2022 SNAP INC; Snap Inc. Virtual wardrobe AR experience
12063569, Dec 30 2019 Snap Inc. Interfaces for relative device positioning
12067214, Jun 25 2020 SNAP INC; Snap Inc. Updating avatar clothing for a user of a messaging system
12067663, Jan 30 2020 Snap Inc. System for generating media content items on demand
12067804, Mar 22 2021 Snap Inc. True size eyewear experience in real time
12070682, Mar 29 2019 Snap Inc. 3D avatar plugin for third-party games
12079264, Jan 30 2020 Snap Inc. Video generation system to render frames on demand using a fleet of servers
12079445, Apr 27 2017 Map-based graphical user interface indicating geospatial activity metrics
12080065, Nov 22 2019 SNAP INC Augmented reality items based on scan
12081502, Oct 24 2016 Generating and displaying customized avatars in media overlays
12086381, Apr 27 2017 Snap Inc. Map-based graphical user interface for multi-type social media galleries
12086916, Oct 22 2021 Snap Inc. Voice note with face tracking
12086944, Apr 30 2019 Systems and methods for conveying virtual content from external resources and electronic storage in an augmented reality environment at different or same locations
12086946, Sep 14 2021 Snap Inc. Blending body mesh into external mesh
12094066, Mar 30 2022 Surface normals for pixel-aligned object
12096153, Dec 21 2021 SNAP INC Avatar call platform
12099701, Aug 05 2019 Snap Inc. Message thread prioritization interface
12099703, Sep 16 2019 Snap Inc. Messaging system with battery level sharing
12100156, Apr 12 2021 Snap Inc. Garment segmentation
12105938, Sep 28 2018 Snap Inc. Collaborative achievement interface
12106441, Nov 27 2018 Snap Inc. Rendering 3D captions within real-world environments
12106486, Feb 24 2021 SNAP INC Whole body visual effects
12111863, Jan 30 2020 Snap Inc. Video generation system to render frames on demand using a fleet of servers
12112013, Apr 27 2017 Snap Inc. Location privacy management on map-based social media platforms
12113756, Apr 13 2018 Snap Inc. Content suggestion system
12113760, Oct 24 2016 Snap Inc. Generating and displaying customized avatars in media overlays
12121811, Sep 21 2020 Snap Inc. Graphical marker generation system for synchronization
12131003, Apr 27 2017 Snap Inc. Map-based graphical user interface indicating geospatial activity metrics
12131006, Feb 06 2019 Snap Inc. Global event-based avatar
12131015, May 31 2016 Snap Inc. Application control using a gesture based trigger
12136153, Jun 30 2020 Snap Inc. Messaging system with augmented reality makeup
12136158, Feb 06 2019 Snap Inc. Body pose estimation
12141215, Mar 14 2019 Snap Inc. Graph-based prediction for contact suggestion in a location sharing system
12142257, Feb 08 2022 SNAP INC; Snap Inc. Emotion-based text to speech
12147644, Jun 28 2019 Snap Inc. Generating animation overlays in a communication session
12147654, Jul 11 2019 Snap Inc. Edge gesture interface with smart interactions
12148105, Mar 30 2022 Snap Inc. Surface normals for pixel-aligned object
12148108, Oct 11 2021 Snap Inc. Light and rendering of garments
12149489, Mar 14 2023 SNAP INC Techniques for recommending reply stickers
12153788, Nov 30 2018 Snap Inc. Generating customized avatars based on location information
12154232, Sep 30 2022 SNAP INC 9-DoF object tracking
12164109, Apr 29 2022 SNAP INC AR/VR enabled contact lens
12164699, Mar 16 2021 Snap Inc. Mirroring device with pointing based navigation
12165243, Mar 30 2021 Snap Inc. Customizable avatar modification system
12165335, Nov 30 2018 Snap Inc. Efficient human pose tracking in videos
12166734, Sep 27 2019 Snap Inc. Presenting reactions from friends
8830244, Mar 01 2011 SONY INTERACTIVE ENTERTAINMENT INC Information processing device capable of displaying a character representing a user, and information processing method thereof
8922547, Dec 22 2010 Electronics and Telecommunications Research Institute 3D model shape transformation method and apparatus
9060093, Sep 30 2011 Intel Corporation Mechanism for facilitating enhanced viewing perspective of video images at computing devices
9294757, Mar 15 2013 GOOGLE LLC 3-dimensional videos of objects
9607573, Sep 17 2014 International Business Machines Corporation Avatar motion modification
D916809, May 28 2019 Snap Inc. Display screen or portion thereof with a transitional graphical user interface
D916810, May 28 2019 Snap Inc. Display screen or portion thereof with a graphical user interface
D916811, May 28 2019 Snap Inc. Display screen or portion thereof with a transitional graphical user interface
D916871, May 28 2019 Snap Inc. Display screen or portion thereof with a transitional graphical user interface
D916872, May 28 2019 Snap Inc. Display screen or portion thereof with a graphical user interface
Patent Priority Assignee Title
5745126, Mar 31 1995 The Regents of the University of California Machine synthesis of a virtual video camera/image of a scene from multiple video cameras/images of the scene in accordance with a particular perspective on the scene, an object in the scene, or an event in the scene
5889532, Aug 02 1996 AUTODESK, Inc Control solutions for the resolution plane of inverse kinematic chains
6057859, Mar 31 1997 SOVOZ, INC Limb coordination system for interactive computer animation of articulated characters with blended motion data
6088042, Mar 31 1997 SOVOZ, INC Interactive motion data animation system
6191798, Mar 31 1997 SOVOZ, INC Limb coordination system for interactive computer animation of articulated characters
6522332, Jul 26 2000 AUTODESK, Inc Generating action data for the animation of characters
6670954, Dec 28 1998 Fujitsu Limited Three-dimensional skeleton data error absorbing apparatus
7035436, Aug 09 2001 UNIVERSITY OF TOKYO, THE Method of generating poses and motions of a tree structure link system
7106334, Feb 13 2001 Sega Corporation Animation creation program
7202869, Jan 07 2003 Lucasfilm Entertainment Company Ltd System and method of creating and animating a computer-generated image of a creature
8253746, May 01 2009 Microsoft Technology Licensing, LLC Determine intended motions
20040104935
20110148858
Executed on | Assignor | Assignee | Conveyance | Reel/Frame
Jul 13 2010 | TENG, CHIH-JEN | Industrial Technology Research Institute | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 025330/0318
Jul 14 2010 | LIN, TZUNG-HAN | Industrial Technology Research Institute | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 025330/0318
Jul 14 2010 | HSIAO, FU-JEN | Industrial Technology Research Institute | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 025330/0318
Nov 08 2010 | Industrial Technology Research Institute (assignment on the face of the patent)
Date Maintenance Fee Events
Dec 12 2016 | M1551: Payment of Maintenance Fee, 4th Year, Large Entity.
Sep 30 2020 | M1552: Payment of Maintenance Fee, 8th Year, Large Entity.


Date Maintenance Schedule
Jun 11 2016 | 4 years fee payment window open
Dec 11 2016 | 6 months grace period start (w surcharge)
Jun 11 2017 | patent expiry (for year 4)
Jun 11 2019 | 2 years to revive unintentionally abandoned end. (for year 4)
Jun 11 2020 | 8 years fee payment window open
Dec 11 2020 | 6 months grace period start (w surcharge)
Jun 11 2021 | patent expiry (for year 8)
Jun 11 2023 | 2 years to revive unintentionally abandoned end. (for year 8)
Jun 11 2024 | 12 years fee payment window open
Dec 11 2024 | 6 months grace period start (w surcharge)
Jun 11 2025 | patent expiry (for year 12)
Jun 11 2027 | 2 years to revive unintentionally abandoned end. (for year 12)