A method includes detecting, for each of a plurality of images, a plurality of key points, where each of the plurality of images represents an object of an assembly system. The method includes generating, for each of the plurality of images, a correspondence between the plurality of key points, and generating, for each of the plurality of images, a reference region based on the correspondence between the plurality of key points. The method includes identifying, for each of the plurality of images, a reference key point among the plurality of key points based on the reference region, and determining a pose of the object based on the reference key point of each of the plurality of images and a reference pose of the object.
1. A method comprising:
detecting, for each of a plurality of images, a plurality of key points, wherein each of the plurality of images represents an object of an assembly system;
generating, for each of the plurality of images, a correspondence between the plurality of key points;
generating, for each of the plurality of images, a reference region based on the correspondence between the plurality of key points;
identifying, for each of the plurality of images, a reference key point among the plurality of key points based on the reference region; and
determining a pose of the object based on the reference key point of each of the plurality of images and a reference pose of the object.
9. A system comprising:
a processor; and
a nontransitory computer-readable medium including instructions that are executable by the processor, wherein the instructions include:
detecting, for each of a plurality of images, a plurality of key points, wherein each of the plurality of images represents an object of an assembly system;
generating, for each of the plurality of images, a correspondence between the plurality of key points;
generating, for each of the plurality of images, a reference region based on the correspondence between the plurality of key points;
identifying, for each of the plurality of images, a reference key point among the plurality of key points based on the reference region; and
determining a pose of the object based on the reference key point of each of the plurality of images and a reference pose of the object.
17. A method comprising:
detecting, for each of a plurality of images, a plurality of key points, wherein each of the plurality of images represents an object of an assembly system;
obtaining a plurality of estimated transformation matrices based on a known set of key points from among the plurality of key points;
projecting a remaining set of key points from among the plurality of key points based on the estimated transformation matrices and calibration information of an image sensor;
generating a prior probability distribution for each of the plurality of key points;
generating, for each of the plurality of images, a posterior probability distribution based on the prior probability distribution and a gaussian distribution for each of the plurality of key points;
identifying, for each of the plurality of images, a reference region based on the posterior probability distribution;
identifying, for each of the plurality of images, a reference key point among the plurality of key points based on the reference region; and
determining a pose of the object based on the reference key point of each of the plurality of images and a reference pose of the object.
2. The method of
providing each of the plurality of images to a convolutional layer to generate a plurality of feature maps;
providing each of the plurality of feature maps to a first pooling layer to generate a plurality of reduced images;
providing each of the plurality of reduced images to a second pooling layer to generate a plurality of heat maps; and
providing the plurality of heat maps to a stacking layer to generate the plurality of key points for each of the plurality of images.
3. The method of
obtaining a plurality of estimated transformation matrices based on a known set of key points from among the plurality of key points;
projecting a remaining set of key points from among the plurality of key points based on the estimated transformation matrices and calibration information of an image sensor; and
generating a prior probability distribution for each of the remaining set of key points, wherein the correspondence between the plurality of key points is further based on the prior probability distribution.
4. The method of
5. The method of
6. The method of
7. The method of
8. The method of
for each of the plurality of images, identifying a pixel position of the reference key point;
determining a plurality of disparity values based on the pixel position of the reference key point of each of the plurality of images; and
determining an average of the plurality of disparity values, wherein the pose of the object is further based on the average of the plurality of disparity values.
10. The system of
providing each of the plurality of images to a convolutional layer to generate a plurality of feature maps;
providing each of the plurality of feature maps to a first pooling layer to generate a plurality of reduced images;
providing each of the plurality of reduced images to a second pooling layer to generate a plurality of heat maps; and
providing the plurality of heat maps to a stacking layer to generate the plurality of key points for each of the plurality of images.
11. The system of
obtaining a plurality of estimated transformation matrices based on a known set of key points from among the plurality of key points;
projecting a remaining set of key points from among the plurality of key points based on the estimated transformation matrices and calibration information of an image sensor; and
generating a prior probability distribution for each of the remaining set of key points, wherein the correspondence between the plurality of key points is further based on the prior probability distribution.
12. The system of
13. The system of
14. The system of
15. The system of
16. The system of
for each of the plurality of images, identifying a pixel position of the reference key point;
determining a plurality of disparity values based on the pixel position of the reference key point of each of the plurality of images; and
determining an average of the plurality of disparity values, wherein the pose of the object is further based on the average of the plurality of disparity values.
18. The method of
19. The method of
20. The method of
The present disclosure relates to a system and/or method for image-based component detection.
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
In a manufacturing environment, object detection and pose estimation are used to perform complex and automated assembly tasks. As an example, a control system may perform a machine-learning routine to detect a particular object, determine the pose of the object, and instruct another manufacturing system, such as a robot or machining device, to perform an automated task based on the pose. However, machine-learning routines may require large amounts of training data to properly train the control system to accurately and consistently perform these tasks. These issues associated with machine-learning routines performed by control systems, among other issues, are addressed by the present disclosure.
This section provides a general summary of the disclosure and is not a comprehensive disclosure of its full scope or all of its features.
The present disclosure provides a method including detecting, for each of a plurality of images, a plurality of key points, where each of the plurality of images represents an object of an assembly system. The method includes generating, for each of the plurality of images, a correspondence between the plurality of key points, and generating, for each of the plurality of images, a reference region based on the correspondence between the plurality of key points. The method includes identifying, for each of the plurality of images, a reference key point among the plurality of key points based on the reference region, and determining a pose of the object based on the reference key point of each of the plurality of images and a reference pose of the object.
In some forms, detecting, for each of the plurality of images, the plurality of key points further includes providing each of the plurality of images to a convolutional layer to generate a plurality of feature maps, providing each of the plurality of feature maps to a first pooling layer to generate a plurality of reduced images, providing each of the plurality of reduced images to a second pooling layer to generate a plurality of heat maps, and providing the plurality of heat maps to a stacking layer to generate the plurality of key points for each of the plurality of images.
In some forms, the method further includes obtaining a plurality of estimated transformation matrices based on a known set of key points from among the plurality of key points, projecting a remaining set of key points from among the plurality of key points based on the estimated transformation matrices and calibration information of an image sensor, and generating a prior probability distribution for each of the remaining set of key points, where the correspondence between the plurality of key points is further based on the prior probability distribution.
In some forms, the method further includes generating a posterior probability distribution based on the prior probability distribution and a Gaussian distribution for each of the remaining set of key points, where the reference region is further based on the posterior probability distribution of each of the remaining set of key points.
In some forms, the method further includes identifying a given excitation from among a plurality of excitations of the posterior probability distribution based on an intensity value of the plurality of excitations, where the reference key point among the plurality of key points is further based on the given excitation.
In some forms, the method further includes updating at least one estimated transformation matrix from among the plurality of estimated transformation matrices based on the reference key point, where determining the pose of the object based on the reference key point of each of the plurality of images and the reference pose of the object is further based on the updated at least one estimated transformation matrix.
In some forms, the method further includes identifying the known set of key points from among the plurality of key points, where the known set of key points is identified in response to, for each of the set of key points, a corresponding estimated transformation matrix from among the estimated transformation matrices having an error value that is less than a threshold error value.
In some forms, the method further includes, for each of the plurality of images, identifying a pixel position of the reference key point, determining a plurality of disparity values based on the pixel position of the reference key point of each of the plurality of images, and determining an average of the plurality of disparity values, where the pose of the object is further based on the average of the plurality of disparity values.
The present disclosure provides a system including a processor and a nontransitory computer-readable medium including instructions that are executable by the processor. The instructions include detecting, for each of a plurality of images, a plurality of key points, where each of the plurality of images represents an object of an assembly system. The instructions include generating, for each of the plurality of images, a correspondence between the plurality of key points, and generating, for each of the plurality of images, a reference region based on the correspondence between the plurality of key points. The instructions include identifying, for each of the plurality of images, a reference key point among the plurality of key points based on the reference region, and determining a pose of the object based on the reference key point of each of the plurality of images and a reference pose of the object.
In some forms, the instructions for detecting, for each of the plurality of images, the plurality of key points further include providing each of the plurality of images to a convolutional layer to generate a plurality of feature maps, providing each of the plurality of feature maps to a first pooling layer to generate a plurality of reduced images, providing each of the plurality of reduced images to a second pooling layer to generate a plurality of heat maps, and providing the plurality of heat maps to a stacking layer to generate the plurality of key points for each of the plurality of images.
In some forms, the instructions further include obtaining a plurality of estimated transformation matrices based on a known set of key points from among the plurality of key points, projecting a remaining set of key points from among the plurality of key points based on the estimated transformation matrices and calibration information of an image sensor, and generating a prior probability distribution for each of the remaining set of key points, where the correspondence between the plurality of key points is further based on the prior probability distribution.
In some forms, the instructions further include generating a posterior probability distribution based on the prior probability distribution and a Gaussian distribution for each of the remaining set of key points, where the reference region is further based on the posterior probability distribution of each of the remaining set of key points.
In some forms, the instructions further include identifying a given excitation from among a plurality of excitations of the posterior probability distribution based on an intensity value of the plurality of excitations, where the reference key point among the plurality of key points is further based on the given excitation.
In some forms, the instructions further include updating at least one estimated transformation matrix from among the plurality of estimated transformation matrices based on the reference key point, where determining the pose of the object based on the reference key point of each of the plurality of images and the reference pose of the object is further based on the updated at least one estimated transformation matrix.
In some forms, the instructions further include identifying the known set of key points from among the plurality of key points, where the known set of key points is identified in response to, for each of the set of key points, a corresponding estimated transformation matrix from among the estimated transformation matrices having an error value that is less than a threshold error value.
In some forms, the instructions further include, for each of the plurality of images, identifying a pixel position of the reference key point, determining a plurality of disparity values based on the pixel position of the reference key point of each of the plurality of images, and determining an average of the plurality of disparity values, where the pose of the object is further based on the average of the plurality of disparity values.
The present disclosure provides a method including detecting, for each of a plurality of images, a plurality of key points, where each of the plurality of images represents an object of an assembly system. The method includes obtaining a plurality of estimated transformation matrices based on a known set of key points from among the plurality of key points. The method includes projecting a remaining set of key points from among the plurality of key points based on the estimated transformation matrices and calibration information of an image sensor. The method includes generating a prior probability distribution for each of the plurality of key points. The method includes generating, for each of the plurality of images, a posterior probability distribution based on the prior probability distribution and a Gaussian distribution for each of the plurality of key points. The method includes identifying, for each of the plurality of images, a reference region based on the posterior probability distribution. The method includes identifying, for each of the plurality of images, a reference key point among the plurality of key points based on the reference region. The method includes determining a pose of the object based on the reference key point of each of the plurality of images and a reference pose of the object.
In some forms, the method further includes identifying a given excitation from among a plurality of excitations of the posterior probability distribution based on an intensity value of the plurality of excitations, where the reference key point among the plurality of key points is further based on the given excitation.
In some forms, the method further includes updating at least one estimated transformation matrix from among the plurality of estimated transformation matrices based on the reference key point, where determining the pose of the object based on the reference key point of each of the plurality of images and the reference pose of the object is further based on the updated at least one estimated transformation matrix.
In some forms, the method further includes identifying the known set of key points from among the plurality of key points, where the known set of key points is identified in response to, for each of the set of key points, a corresponding estimated transformation matrix from among the estimated transformation matrices having an error value that is less than a threshold error value.
Further areas of applicability will become apparent from the description provided herein. It should be understood that the description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
In order that the disclosure may be well understood, there will now be described various forms thereof, given by way of example, reference being made to the accompanying drawings, in which:
The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way.
The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features.
The present disclosure provides a pose estimation system configured to detect an object and determine a pose of the object. In one form, the pose estimation system performs deep learning routines to detect a plurality of key points of an image including an object. The pose estimation system is configured to perform a Bayesian inference statistical routine to identify a reference key point of the object and determine a pose of the object based on the reference key point and a reference pose of the object. By performing the Bayesian inference statistical routines described herein, the pose estimation system accurately determines the pose of the object, reduces the number of iterations needed to train the deep learning networks to detect and project key points of the object, and reduces the computing resources needed to determine the pose of the object. It should be readily understood that the pose estimation system of the present disclosure addresses other issues and should not be limited to the examples provided herein.
Referring to
In one form, the image sensors 10 provide image data of the environment 5 to the pose determination system 15 and include, but are not limited to: a two-dimensional (2D) camera, a three-dimensional (3D) camera, a stereo vision camera, an infrared sensor, a radar scanner, a laser scanner, a light detection and ranging (LIDAR) sensor, and/or an ultrasonic sensor. As an example, the image sensors 10 are stereo cameras that are arranged to obtain a left-sided image of an object and a right-sided image of the object. As described below in further detail, the pose determination system 15 is configured to determine the pose of the object based on the left-sided and right-sided images.
In one form, the manufacturing control system 120 is configured to control a manufacturing process associated with the object based on a pose thereof, as determined by the pose determination system 15. As an example, the manufacturing control system 120 may instruct an external system (e.g., a robot, a mobile workpiece, among others) to adjust the orientation and/or position of the object based on the determined pose. As another example, the manufacturing control system 120 may generate a notification representing the determined pose (e.g., a visual and/or auditory alert if the pose of the object deviates from a reference pose beyond a predetermined amount).
In some forms, the pose determination system 15 includes a key point detection module 20, a correspondence module 40, a reference region module 60, a reference key point module 80, and a pose module 100. It should be readily understood that any one of the components of the pose determination system 15 can be provided at the same location or distributed at different locations (e.g., at one or more edge computing devices) and communicably coupled accordingly. Details regarding the pose determination system 15 are described in the following in association with the left-sided image but are also applicable to the right-sided image.
In one form and referring to
In one form, the convolutional layer 22 is configured to perform a convolution routine on the images to generate a plurality of feature maps. The convolutional layer 22 may be defined by any suitable combination of parameters including, but not limited to: kernel dimensions, number of kernels, stride values, padding values, input/output channels, bit depths, feature map widths/lengths, and rectified linear unit (ReLU) activation layers. As an example implementation of the convolutional layer 22, a kernel (e.g., a 7×7 kernel) may be iteratively applied to various pixels of the obtained images in accordance with a defined stride (e.g., a stride value of 2). During each iteration, the convolutional layer 22 performs a convolution routine (e.g., a scalar product routine) based on the set of pixels over which the kernel is overlaid. The result of the convolution routine at each iteration is output as a pixel value of the feature map.
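As a non-limiting illustration of this convolution routine, the following sketch applies a 7×7 kernel with a stride value of 2 and writes each scalar product out as one pixel of the feature map; the 224×224 input size and the random kernel values are assumptions for illustration only and are not taken from the disclosure.

```python
import numpy as np

def convolve2d(image, kernel, stride=2):
    """Slide the kernel over the image and take a scalar product at each
    position; each result becomes one pixel of the output feature map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out_h = (h - kh) // stride + 1
    out_w = (w - kw) // stride + 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
            feature_map[i, j] = np.sum(patch * kernel)  # scalar product routine
    return feature_map

# Example: a 7x7 kernel applied with a stride of 2 to a 224x224 image.
image = np.random.rand(224, 224)
kernel = np.random.rand(7, 7)
print(convolve2d(image, kernel).shape)  # (109, 109)
```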
In one form, the downsampling layer 24 is configured to reduce the width and/or length of the feature maps generated by the convolutional layer 22. The downsampling layer 24 may be defined by any suitable combination of parameters including, but not limited to: the type of downsampling routine (e.g., a maximum pooling routine, an average pooling routine, an L2-norm pooling routine, among other downsampling routines), kernel dimensions, and stride values. As an example, the downsampling layer 24 may be a maximum pooling layer implemented by a 2×2 kernel and a stride value of 2.
In one form, the residual layers 26 are configured to generate a plurality of heat maps based on the reduced images generated by the downsampling layer 24. In some forms, each of the residual layers 26 is a residual block including one or more convolution layers similar to the convolutional layer 22 described above, a downsampling layer similar to the downsampling layer 24 described above, and/or a ReLU activation layer. In some forms, the inputs and outputs of the plurality of convolution layers of the residual layers 26 are selectively combined to increase the accuracy of the key point detection. As an example, each residual layer 26 includes eight convolutional layers that are defined by a 3×3 kernel and a stride value of 1, and at least a set of the outputs of the eight convolutional layers is combined with the inputs of others of the eight convolutional layers. Furthermore, each residual layer 26 may be defined by a maximum pooling layer to further reduce the feature map generated by the eight convolutional layers. In some forms, the first residual layer 26-1 may include N output channels, the second residual layer 26-2 may include 2×N output channels, the third residual layer 26-3 may include 3×N output channels, and the fourth residual layer 26-4 may include 4×N output channels, where N is equal to a number of key points to be detected by the key point detection module 20. As such, the output (i.e., the heat map) generated by each of the residual layers 26 is based on the features of the convolutional layers, the downsampling layer, and/or the number of output channels of each residual layer.
In one form, the intermediate downsampling layers 28 are similar to the downsampling layer 24. The intermediate downsampling layers 28 are configured to reduce the number of output channels of the heat map generated by the corresponding residual layer 26 to N output channels. In one form, the bottleneck layer 30 is configured to generate the plurality of key points for each image. In one form, the bottleneck layer 30 stacks each of the outputs from the intermediate downsampling layers 28 such that an aggregate heat map with 4×N output channels is generated, and then further reduces the number of output channels of the aggregate heat map. In some forms, the aggregate heat map represents the detected key points of the component of the image.
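The following is a minimal sketch of one way the key point detection module 20 could be arranged, assuming a PyTorch-style implementation. Several simplifications are assumptions of the sketch and not details of the disclosure: each residual block uses two 3×3 convolutions rather than eight and omits internal pooling, the intermediate channel reduction is implemented with 1×1 convolutions, and the 224×224 RGB input and n=8 key points are illustrative values.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Simplified stand-in for a residual layer 26: 3x3 convolutions whose
    output is combined with the block input through a skip connection."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1))
        self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # match channel counts for the skip
    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))

class KeyPointDetector(nn.Module):
    """Sketch of the key point detection module 20 for n key points."""
    def __init__(self, n):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, n, kernel_size=7, stride=2, padding=3),  # convolutional layer 22
            nn.MaxPool2d(kernel_size=2, stride=2))                # downsampling layer 24
        widths = [n, n, 2 * n, 3 * n, 4 * n]                      # N, 2N, 3N, 4N output channels
        self.residual = nn.ModuleList(
            ResidualBlock(widths[i], widths[i + 1]) for i in range(4))
        self.project = nn.ModuleList(                             # intermediate downsampling layers 28
            nn.Conv2d(widths[i + 1], n, kernel_size=1) for i in range(4))
        self.bottleneck = nn.Conv2d(4 * n, n, kernel_size=1)      # bottleneck layer 30

    def forward(self, image):
        x = self.stem(image)
        heat_maps = []
        for block, project in zip(self.residual, self.project):
            x = block(x)
            heat_maps.append(project(x))       # per-stage heat map reduced to n channels
        stacked = torch.cat(heat_maps, dim=1)  # aggregate heat map with 4n channels
        return self.bottleneck(stacked)        # one heat map per key point

detector = KeyPointDetector(n=8)
print(detector(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 8, 56, 56])
```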
In an example application and as shown in
In one form and referring to
In one form, the estimated transformation matrices database 42 includes a plurality of estimated transformation entries, where each estimated transformation entry includes one or more rotation matrices and translation matrices for a given set of key points 32. In one form, the rotation matrices and translation matrices indicate a rotation and translation, respectively, utilized to project one of the key points 32 from the given set to additional key points that may not have been detected and/or confidently detected by the key point detection module 20. In some forms, the estimated transformation entries may be updated as additional iterations of the routines performed by the reference key point module 80 described below are performed.
In one form, the prior probability database 44 includes a plurality of prior probability entries, where each prior probability entry includes a prior probability for a given pixel coordinate indicating a likelihood that a key point can be confidently detected at the given pixel coordinate. In some forms, the prior probability for each pixel coordinate is defined while the key point detection module 20 is trained and/or as additional iterations of the routines described herein are performed. In some forms, the prior probabilities of the pixel coordinates collectively sum to 1. It should be understood that the prior probabilities of the pixel coordinates can be defined using other methods (e.g., an uninformative prior, Jeffreys prior, Bernardo's reference prior, conjugate prior, among others) and are not limited to the examples provided herein.
In one form, the image sensor characteristic database 46 includes various characteristics of the image sensors 10 that are utilized for transforming the 2D pixel coordinates of the key points 32 to 3D positions within the environment 5. As an example, the image sensor characteristic database 46 includes a calibration matrix for each of the image sensors 10, where the calibration matrix represents the intrinsic information of the image sensor 10 (i.e., the characteristics of the image sensor 10). Example intrinsic information includes, but is not limited to: focal lengths in the x and y dimensions, skew between the axes of the image sensor 10 due to the image sensor 10 not being mounted perpendicular to an optical axis, an optical center expressed in pixel coordinates, and/or an aspect ratio. It should be understood that the image sensor characteristic database 46 may include other characteristics of the image sensors 10 and is not limited to the characteristics described herein.
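By way of example, a calibration matrix built from such intrinsic information may take the following form; the focal lengths, skew, and optical center values below are assumed for illustration and are not values from the disclosure.

```python
import numpy as np

# Illustrative intrinsic calibration matrix for one image sensor 10; fx, fy are
# focal lengths, s is the skew term, and (cx, cy) is the optical center, all in
# pixels (assumed values).
fx, fy, s, cx, cy = 1400.0, 1380.0, 0.5, 640.0, 360.0
K = np.array([[fx,  s, cx],
              [0., fy, cy],
              [0., 0., 1.]])

# Projecting a 3D camera-frame point into pixel coordinates with K:
X = np.array([0.12, -0.05, 1.8])   # meters in the camera frame (assumed point)
u, v, w = K @ X
print(u / w, v / w)                # pixel coordinates of the projection
```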
In one form, the key point projection module 48 identifies a set of known key points 32 (e.g., a combination of three key points 32) that are confidently determined to be accurate. As an example, to identify the set of known key points 32 that are confidently determined to be accurate, the key point projection module 48 compares the location of the pixel coordinates of the key points 32 and identifies the prior probability entries from the prior probability database 44 having the same pixel coordinates. Accordingly, the key point projection module 48 identifies the set of known key points based on the prior probabilities of the identified prior probability entries. As an example, the key point projection module 48 identifies a key point 32 as part of the set of known key points if the corresponding prior probability is greater than a threshold value. As another example, the key point projection module 48 identifies a predetermined number of key points 32 (e.g., three key points 32) having the highest prior probability as part of the set.
In response to identifying the given set of key points 32, the key point projection module 48 obtains a plurality of estimated transformation matrices associated with the known set of key points 32. As an example, to obtain the plurality of estimated transformation matrices, the key point projection module 48 identifies an estimated transformation entry from the estimated transformation matrices database 42 that matches the known set of key points 32.
In one form, the key point projection module 48 is configured to project additional key points based on the known set of key points 32, the obtained estimated transformation matrices, and the calibration matrix from the image sensor characteristic database 46. As an example, if the key point projection module 48 identifies three key points 32 as part of the known set and a total of eight key points 32 are identified by the key point detection module 20, the key point projection module 48 projects five key points 32 based on the three key points 32 of the known set, the rotation/translation matrices associated with at least one of the three key points 32 of the known set, and the calibration matrix.
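A minimal sketch of such a projection is shown below, assuming the estimated transformation is expressed as a rotation matrix R and a translation vector t and that 3D object-model coordinates of the remaining key points are available; the calibration matrix, the identity rotation, and the numeric point values are hypothetical.

```python
import numpy as np

def project_key_points(model_points, R, t, K):
    """Project 3D object-model points into the image using an estimated
    rotation matrix R (3x3), translation vector t (3,), and the calibration
    matrix K (3x3): p ~ K (R X + t)."""
    cam = (R @ model_points.T).T + t   # object frame -> camera frame
    pix = (K @ cam.T).T                # camera frame -> homogeneous pixel coordinates
    return pix[:, :2] / pix[:, 2:3]    # normalize to 2D pixel coordinates

# Hypothetical example: three key points are known, and five remaining key
# points are projected using the estimated transformation for the known set.
K = np.array([[1400., 0., 640.], [0., 1400., 360.], [0., 0., 1.]])
R = np.eye(3)                                  # assumed estimated rotation
t = np.array([0.0, 0.0, 1.5])                  # assumed estimated translation (m)
remaining_model_points = np.random.rand(5, 3) * 0.1
print(project_key_points(remaining_model_points, R, t, K))
```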
In one form, the prior probability distribution module 50 is configured to generate the correspondence between the set of known key points 32 and the projected additional key points by determining a prior probability distribution for the combination of the known set of key points 32 and the projected additional key points. In some forms, the prior probability distribution module 50 identifies the prior probability in the prior probability database 44 associated with the pixel coordinates of the projected additional key points, as described above. Subsequently, the prior probability distribution module 50 determines the prior probability distribution (denoted as P(Combination)).
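Assuming the prior probabilities are stored as a per-pixel map and that the key point locations are treated as independent, P(Combination) could be evaluated as a simple product of the per-pixel priors, as in the sketch below; the map size, the random prior values, and the pixel coordinates are hypothetical.

```python
import numpy as np

# prior_map[v, u] holds the prior probability that a key point is confidently
# detected at pixel (u, v); the values over all pixels sum to 1 (see above).
prior_map = np.random.rand(480, 640)
prior_map /= prior_map.sum()

def combination_prior(prior_map, pixel_coords):
    """P(Combination): product of the per-pixel priors at the projected key
    point locations, assuming the locations are independent."""
    return float(np.prod([prior_map[v, u] for (u, v) in pixel_coords]))

projected = [(320, 240), (352, 260), (298, 255), (401, 310), (150, 220)]
print(combination_prior(prior_map, projected))
```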
In an example application and as shown in
In one form and referring to
In one form, the Gaussian distribution module 62 is configured to generate a Gaussian distribution for the key points 32-x. In some forms, the Gaussian distribution is assumed to be the likelihood function, as each of the respective key points 32-x has the highest likelihood probability at the corresponding pixel coordinates due to its small projection error, and the likelihood probability exponentially decays at the surrounding pixel coordinates due to the increasing projection error. Accordingly, the posterior probability distribution module 64 is configured to generate the posterior probability distribution based on the prior probability distribution of the key points 32-x (P(Combination)) and the Gaussian distribution generated for each of the key points 32-x.
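A sketch of this Bayesian update is shown below, where the posterior is the prior multiplied by a Gaussian likelihood centered at the projected key point and then renormalized; the standard deviation, the per-pixel prior map, and the projected pixel coordinates are assumptions for illustration.

```python
import numpy as np

def gaussian_likelihood(shape, center, sigma=3.0):
    """Likelihood map for one projected key point: highest at the projected
    pixel and decaying with distance (i.e., with growing projection error)."""
    h, w = shape
    v, u = np.mgrid[0:h, 0:w]
    cu, cv = center
    return np.exp(-((u - cu) ** 2 + (v - cv) ** 2) / (2.0 * sigma ** 2))

def posterior_map(prior_map, center):
    """Bayesian update: posterior is proportional to prior * likelihood."""
    post = prior_map * gaussian_likelihood(prior_map.shape, center)
    return post / post.sum()

prior_map = np.random.rand(480, 640)
prior_map /= prior_map.sum()
post = posterior_map(prior_map, center=(352, 260))  # one projected key point
```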
In an example application and as shown in
Referring to
In one form, the excitation identification module 82 is configured to identify a given excitation from among a plurality of excitations within the reference region 66, as the reference key point, based on the intensity values of the plurality of excitations. As an example, the excitation identification module 82 identifies the excitation having the maximum intensity value within the region 66 as the reference key point.
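As a simple illustration, and assuming the reference region 66 can be described by a rectangular pixel window, the reference key point could be selected as the maximum-intensity excitation inside that window, as sketched below; the posterior map and window coordinates are hypothetical.

```python
import numpy as np

def reference_key_point(posterior, region):
    """Pick the excitation with the maximum intensity inside the reference
    region; `region` is (u_min, v_min, u_max, v_max) in pixel coordinates."""
    u0, v0, u1, v1 = region
    window = posterior[v0:v1, u0:u1]
    dv, du = np.unravel_index(np.argmax(window), window.shape)
    return (u0 + du, v0 + dv)   # pixel coordinates of the reference key point

posterior = np.random.rand(480, 640)   # stand-in posterior probability map
print(reference_key_point(posterior, region=(330, 240, 380, 290)))
```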
In one form, the transformation matrix updating module 84 is configured to update, based on the identified reference key point, the transformation matrix for the given set of key points 32 that is utilized by the correspondence module 40 during the projection of the additional key points 32 described above. As an example, the transformation matrix updating module 84 updates at least one of the rotation matrix and the translation matrix based on the identified reference key point. Similarly, the transformation matrix updating module 84 may update the transformation matrices utilized for projecting the additional key points based on the identified reference key point. As described below in further detail, the pose module 100 is configured to determine the pose of the object 140 based on the updated rotation and/or translation matrices.
Referring to
Referring to
In one form, the pixel coordinate module 102 is configured to identify the pixel coordinates of the reference key point. More particularly, the reference image database 108 includes a predefined pixel coordinate of the reference key points for the left-sided and right-sided images when the corresponding object is at a predefined reference pose. The reference image database 108 further includes a predefined 3D coordinate of the reference key point when the corresponding object is at a predefined reference pose.
In one form, the disparity value module 104 is configured to determine a disparity value for each of the pixel coordinates of the reference key points of the left-sided image and the right-sided image. As an example, to determine a disparity value for the respective image, the disparity value module 104 is configured to perform a scale invariant feature transform (SIFT) routine or a speeded-up robust features (SURF) routine based on the pixel coordinates of the reference key point as determined by the pixel coordinate module 102 and the predefined pixel coordinates of the reference key point from the reference image database 108. Subsequently, the disparity value module 104 may determine an average disparity value based on the disparity values from the right-sided and left-sided images. In some forms, the average disparity value represents the 3D position coordinate of the reference key point.
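One common way to turn an averaged disparity value into a 3D position coordinate, given a calibration matrix and the stereo baseline, is sketched below; the baseline, intrinsic values, and disparity values are assumed for illustration, and the SIFT/SURF-based disparity computation described above is not reproduced here.

```python
import numpy as np

def triangulate_from_disparity(u, v, disparity, K, baseline):
    """Recover a 3D position from an averaged disparity value using the
    standard stereo relation Z = f * B / d, then back-project through K."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    Z = fx * baseline / disparity
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.array([X, Y, Z])

K = np.array([[1400., 0., 640.], [0., 1400., 360.], [0., 0., 1.]])
disparities = [42.3, 41.7]              # per-image disparity values (assumed)
d_avg = float(np.mean(disparities))     # average of the plurality of disparity values
print(triangulate_from_disparity(352, 260, d_avg, K, baseline=0.12))
```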
In one form, the pose determination module 106 determines the pose of the object in the image (e.g., object 140 of image 130) based on the average disparity value and the predefined 3D coordinate of the reference key point. As an example, the pose determination module 106 obtains the updated estimated transformation matrix associated with the reference key point from the estimated transformation matrices database 42 of the correspondence module 40 to determine the rotation and/or translation of the object and, thus, the pose of the object. As provided in
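For illustration only, the following sketch expresses a determined rotation and translation as a deviation from the reference pose, which is one way the comparison against a predetermined amount (described above for the manufacturing control system 120) could be carried out; the rotation and translation values are hypothetical, and this is not presented as the disclosure's exact computation.

```python
import numpy as np

def pose_deviation(R_est, t_est, R_ref, t_ref):
    """Express the determined pose as an offset from the reference pose: a
    relative rotation matrix, its angle in degrees, and a translation offset."""
    R_delta = R_est @ R_ref.T
    angle = np.degrees(np.arccos(np.clip((np.trace(R_delta) - 1.0) / 2.0, -1.0, 1.0)))
    t_delta = t_est - t_ref
    return R_delta, angle, t_delta

# Hypothetical values: a 2-degree rotation about z and a 5 mm translation offset.
theta = np.radians(2.0)
R_est = np.array([[np.cos(theta), -np.sin(theta), 0.],
                  [np.sin(theta),  np.cos(theta), 0.],
                  [0., 0., 1.]])
print(pose_deviation(R_est, np.array([0.005, 0., 1.5]), np.eye(3), np.array([0., 0., 1.5])))
```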
With reference to
Unless otherwise expressly indicated herein, all numerical values indicating mechanical/thermal properties, compositional percentages, dimensions and/or tolerances, or other characteristics are to be understood as modified by the word “about” or “approximately” in describing the scope of the present disclosure. This modification is desired for various reasons including industrial practice; material, manufacturing, and assembly tolerances; and testing capability.
As used herein, the phrase at least one of A, B, and C should be construed to mean a logical (A OR B OR C), using a non-exclusive logical OR, and should not be construed to mean “at least one of A, at least one of B, and at least one of C.”
The description of the disclosure is merely exemplary in nature and, thus, variations that do not depart from the substance of the disclosure are intended to be within the scope of the disclosure. Such variations are not to be regarded as a departure from the spirit and scope of the disclosure.
In the figures, the direction of an arrow, as indicated by the arrowhead, generally demonstrates the flow of information (such as data or instructions) that is of interest to the illustration. For example, when element A and element B exchange a variety of information, but information transmitted from element A to element B is relevant to the illustration, the arrow may point from element A to element B. This unidirectional arrow does not imply that no other information is transmitted from element B to element A. Further, for information sent from element A to element B, element B may send requests for, or receipt acknowledgements of, the information to element A.
In this application, the term “controller” and/or “module” may refer to, be part of, or include: an Application Specific Integrated Circuit (ASIC); a digital, analog, or mixed analog/digital discrete circuit; a digital, analog, or mixed analog/digital integrated circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor circuit (shared, dedicated, or group) that executes code; a memory circuit (shared, dedicated, or group) that stores code executed by the processor circuit; other suitable hardware components that provide the described functionality, such as, but not limited to, movement drivers and systems, transceivers, routers, input/output interface hardware, among others; or a combination of some or all of the above, such as in a system-on-chip.
The term memory is a subset of the term computer-readable medium. The term computer-readable medium, as used herein, does not encompass transitory electrical or electromagnetic signals propagating through a medium (such as on a carrier wave); the term computer-readable medium may therefore be considered tangible and non-transitory. Non-limiting examples of a non-transitory, tangible computer-readable medium are nonvolatile memory circuits (such as a flash memory circuit, an erasable programmable read-only memory circuit, or a mask read-only circuit), volatile memory circuits (such as a static random access memory circuit or a dynamic random access memory circuit), magnetic storage media (such as an analog or digital magnetic tape or a hard disk drive), and optical storage media (such as a CD, a DVD, or a Blu-ray Disc).
The apparatuses and methods described in this application may be partially or fully implemented by a special purpose computer created by configuring a general-purpose computer to execute one or more particular functions embodied in computer programs. The functional blocks, flowchart components, and other elements described above serve as software specifications, which can be translated into the computer programs by the routine work of a skilled technician or programmer.
Upadhyay, Devesh, Soltani Bozchalooi, Iman, Rahimpour, Alireza