A system and method for generating realistic depth images by enhancing simulated images rendered from a 3D model include a rendering engine configured to render noiseless 2.5D images by rendering various poses with respect to a target 3D CAD model, a noise transfer engine configured to apply realistic noise to the noiseless 2.5D images, and a background transfer engine configured to add pseudo-realistic scene-dependent backgrounds to the noiseless 2.5D images. The noise transfer engine is configured to learn noise transfer based on a mapping, by a first generative adversarial network (GAN), of the noiseless 2.5D images to real 2.5D scans generated by a targeted sensor. The background transfer engine is configured to learn background generation based on a processing, by a second GAN, of output data of the first GAN as input data and corresponding real 2.5D scans as target data.
9. A method for generating realistic depth images by enhancing simulated images rendered from a 3D model, comprising:
rendering noiseless 2.5D images by rendering various poses with respect to a target 3D CAD model;
applying, by a noise transfer engine, realistic noise to the noiseless 2.5D images; and
adding, by a background transfer engine, pseudo-realistic scene-dependent backgrounds to the noiseless 2.5D images;
wherein learning by the noise transfer engine includes a training process based on a mapping, by a first generative adversarial network (GAN), of the noiseless 2.5D images to a real 2.5D scan generated by a targeted sensor;
wherein learning by the background transfer engine includes a training process based on a processing, by a second GAN, of output data of the first GAN as input data and the corresponding real 2.5D scan as target data; and
wherein following training of the first and second GANs, forming a pipeline by a serial coupling of the first and second GANs for runtime operation to process rendered noiseless 2.5D images generated by the rendering engine to produce realistic depth images.
1. A system for generating realistic depth images by enhancing simulated images rendered from a 3D model, comprising:
at least one storage device storing computer-executable instructions configured as one or more modules; and
at least one processor configured to access the at least one storage device and execute the instructions,
wherein the modules comprise:
a rendering engine configured to render noiseless 2.5D images by rendering various poses with respect to a target 3D CAD model;
a noise transfer engine configured to apply realistic noise to the noiseless 2.5D images; and
a background transfer engine configured to add pseudo-realistic scene-dependent backgrounds to the noiseless 2.5D images;
wherein the noise transfer engine is configured to learn noise transfer based on a mapping, by a first generative adversarial network (GAN), of the noiseless 2.5D images to a real 2.5D scan generated by a targeted sensor;
wherein the background transfer engine is configured to learn background generation based on a processing, by a second GAN, of output data of the first GAN as input data and the corresponding real 2.5D scan as target data; and
wherein following training of the first and second GANs, a pipeline is formed by a serial coupling of the first and second GANs for runtime operation to process rendered noiseless 2.5D images generated by the rendering engine to produce realistic depth images.
2. The system of
3. The system of
a discriminator network configured as a deep convolutional network with Leaky rectified linear units and an output defined according to a sigmoid activation function, wherein each activation represents a deduction for a patch of input data.
4. The system of
a loss function that executes a binary cross entropy evaluation.
5. The system of
6. The system of
7. The system of
a depth sensor simulation engine configured to generate a simulated 2.5D scan for each generated noiseless 2.5D image, wherein the learning by the noise transfer engine includes stacking the simulated 2.5D scan and the noiseless 2.5D image into a 2-channel depth image as an input for the noise transfer engine.
8. The system of
an object recognition network configured to process the realistic depth images as training data to learn object classifications for target objects;
wherein following training of the object recognition network, sensor scans may be processed by the trained object recognition network to correlate features of the sensor scan with learned object features for identification of objects in the sensor scan.
10. The method of
11. The method of
a discriminator network configured as a deep convolutional network with Leaky rectified linear units and an output defined according to a sigmoid activation function, wherein each activation represents a deduction for a patch of input data.
12. The method of
a loss function that executes a binary cross entropy evaluation.
13. The method of
14. The method of
15. The method of
generating, by a depth sensor simulation engine, a simulated 2.5D scan for each generated noiseless 2.5D image, wherein the learning by the noise transfer engine includes stacking the simulated 2.5D scan and the noiseless 2.5D image into a 2-channel depth image as an input for the noise transfer engine.
This application relates to artificial intelligence. More particularly, this application relates to applying artificial intelligence to image generation.
Recent progress in computer vision applications, such as recognition and reconstruction, has been dominated by deep neural networks trained with large amounts of accurately labeled data. For example, trained neural networks can be used to process a given image input and recognize objects in the image. Identification of objects in an image has a multitude of applications. However, training the neural networks requires collecting and annotating vast datasets with labels, which is a tedious and, in some contexts, impossible task. Typical training datasets require around 10,000 images at minimum to obtain the desired accuracy for object recognition. Considerable effort is needed to obtain more than 10,000 images using sensor devices and to include annotation information, such as capture pose, on each image. Some approaches for generating training data rely solely on synthetically rendered data from 3D models using 3D rendering engines. Results for such methods have been inadequate due to the discrepancies between the synthetic scans and real scans obtained by sensor devices. In particular, synthetic scans are based on clean renderings, and lack the noise and backgrounds found in scans produced by actual sensors. As a result, such synthetic scans cannot properly train the neural networks to recognize objects during runtime when analyzing sensor scans.
Previous works have tried to statistically simulate and apply noise impairments to depth images. Such simulation-based pipelines have difficulty reproducing the scan quality of real devices under particular conditions. For example, some sensors include unknown post-processing and image enhancement processing. Other causes of failure to accurately simulate real scans include gaps between the simulation engine and the real-world environment, such as ambient illumination, surface material, certain optical effects, etc.
Aspects according to embodiments of the present disclosure include a process and a system to generate realistic depth images by enhancing simulated images rendered from a 3D model. A rendering engine is provided to render noiseless 2.5D images by rendering various poses with respect to a target 3D CAD model. A noise transfer engine is provided to apply realistic noise to the noiseless 2.5D images, and a background transfer engine is provided to add pseudo-realistic scene-dependent backgrounds to the noiseless 2.5D images. Training of the noise transfer engine includes learning noise transfer based on a mapping, by a first generative adversarial network (GAN), of the noiseless 2.5D images to real 2.5D scans generated by a targeted sensor. Training of the background transfer engine includes learning background generation based on a processing, by a second GAN, of output data of the first GAN as input data and corresponding real 2.5D scans as target data. An advantage of the trained neural network pipeline according to the embodiments of the disclosure is the ability to generate a very large number of realistic depth images from simulated images based on 3D CAD models. Such depth images are useful to train an application-specific analytic model without relying on access to large amounts of real image data from sensors, which is difficult to obtain and annotate precisely.
Non-limiting and non-exhaustive embodiments of the present embodiments are described with reference to the following FIGURES, wherein like reference numerals refer to like elements throughout the drawings unless otherwise specified.
Methods and systems are disclosed for enhanced depth image generation from synthetic depth images that simulate depth scans of an object by an image sensor. Machine learning by neural networks is used to extract simulated sensor noise from actual sensor scans, which may be transferred onto simulated depth renderings. Another set of neural networks may be trained to extract simulated background information from a training dataset, which may also be transferred onto the simulated depth renderings. Application of the trained neural networks of a formed pipeline in accordance with embodiments of this disclosure includes processing 3D CAD model images, rendering simulated depth images, transferring realistic noise and background information onto the simulated depth images, and thereby generating realistic depth images. The resultant realistic depth images may be useful as training data for learning by neural networks of application-specific analytic models, such as computer vision algorithms related to object recognition. Unlike conventional depth-based object recognition systems that attempt to rely solely on simulated training data, the realistic depth images generated according to the present disclosure provide superior training data to object recognition systems, leading to more accurate object recognition.
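The serial coupling of the trained stages described above may be sketched, for illustration only, as a simple function composition; the stage callables below are hypothetical placeholders standing in for the trained GAN generators, not the actual networks of the disclosure.

```python
# Minimal sketch of the runtime pipeline: two trained generator stages
# applied in series to a rendered noiseless depth image. The stage
# callables here are hypothetical placeholders, not the real networks.
def run_pipeline(rendered_image, noise_generator, background_generator):
    """Serially couple the noise and background generators."""
    noisy = noise_generator(rendered_image)      # transfer realistic sensor noise
    realistic = background_generator(noisy)      # add scene-dependent background
    return realistic

# Toy usage with trivial stand-in stages:
result = run_pipeline(42, lambda x: x + 1, lambda x: x * 2)
print(result)  # 86
```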
In an embodiment, a depth sensor simulation engine 115 may additionally generate input data as a simulated 2.5D depth scan 116, representing a pseudo-realistic depth image for the object, without background information, by using the same 3D CAD model 107 and annotated pose data 105 as for generation of the noiseless 2.5D image 114. As an example, for each shot, the two generated images, the noiseless 2.5D image 114 and the pseudo-realistic scan 116, may be stacked (e.g., collated) into a single image, such as a 2-channel depth image, and received as an input 125 for the noise transfer engine 121. Of the 2-channel image, the noise-free channel provides clean depth information about the pictured scene to the network 121, while the channel already containing simulated noise helps the network 121 converge faster and more efficiently.
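The 2-channel stacking described above may be sketched as follows; this is a minimal illustration using NumPy, and the image sizes and pixel contents are placeholders rather than actual renderings 114 and 116.

```python
import numpy as np

def stack_depth_channels(noiseless_img, simulated_img):
    """Collate a noiseless 2.5D rendering and a simulated 2.5D scan
    (each of shape (H, W)) into a single 2-channel depth image (H, W, 2)."""
    assert noiseless_img.shape == simulated_img.shape
    return np.stack([noiseless_img, simulated_img], axis=-1)

# Placeholder 16-bit depth images standing in for images 114 and 116:
noiseless = np.zeros((480, 640), dtype=np.uint16)  # clean depth channel
simulated = np.ones((480, 640), dtype=np.uint16)   # channel with simulated noise
two_channel = stack_depth_channels(noiseless, simulated)
print(two_channel.shape)  # (480, 640, 2)
```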
The noise transfer engine 121 may be configured as an image-to-image GAN architecture with a discriminator neural network 122 and a generator neural network 123. In an embodiment, the discriminator neural network 122 may be configured as a deep convolutional network with Leaky ReLUs and a sigmoid activation for output. At each iteration of training, the discriminator network 122 may take as input 125 the original synthetic image stacked into a single image with either the real target 124 (a “real” pair) or the enhanced output 126 from the latest state of the generator (a “fake” pair). The discriminator network 122 functionality includes discerning “fake” pairs from “real” pairs, in which each activation of the output layer represents a prediction for a patch of the input data. A binary cross entropy loss function may be applied by the discriminator network 122. In an embodiment, the generator neural network 123 may be configured as a U-Net architecture, with the synthetic depth data as input and an activation layer returning an enhanced image. In order to train the generator 123 to make the simulated input data resemble the real target data and to fool the discriminator 122, the loss function for the generator 123 may be a combination of a cross entropy evaluation of the output and target images and the reversed discriminator loss. Once converged, the weights of the GAN of the noise transfer engine 121 are fixed and saved. Both the discriminator network 122 and the generator network 123 may be configured to process multi-channel depth images (e.g., 16 bpp). While examples are described above for implementing a discriminator neural network 122 and a generator neural network 123, other variations may be used to implement the noise transfer engine 121.
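The combined generator loss described above may be sketched as follows; the weighting factor `adv_weight` is an illustrative assumption (not specified by the disclosure), and the arrays stand in for the generator output, the real target, and the patch discriminator's activations.

```python
import numpy as np

def binary_cross_entropy(pred, target, eps=1e-7):
    """Binary cross entropy over arrays of probabilities in [0, 1]."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def generator_loss(gen_output, real_target, disc_fake_patches, adv_weight=1.0):
    # Reconstruction term: cross entropy between the enhanced output
    # and the real target image.
    recon = binary_cross_entropy(gen_output, real_target)
    # "Reversed" discriminator term: the generator is rewarded when the
    # patch discriminator predicts "real" (label 1) for its fake pair.
    adv = binary_cross_entropy(disc_fake_patches, np.ones_like(disc_fake_patches))
    return recon + adv_weight * adv
```

A usage note: the same `binary_cross_entropy` function can serve as the discriminator loss, with targets of 1 for “real” pairs and 0 for “fake” pairs.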
To initialize the learning of the GAN of background transfer engine 221, the same standard training data set as used for training of the noise transfer engine 121 is generated by a tagged process at stage 101. Given a set of objects for the visual recognition learning, a 3D CAD model 107 for each object may be provided, such as engineering design renderings of an object or a system of components. From the 3D CAD models 107, synthetic images may be generated by the rendering engine 111 and processed by the trained noise transfer engine 121, for input to the GAN of background transfer engine 221. Real depth scans 103 with annotated poses 105 are received as target data for the GAN of background transfer engine 221. Unlike the training of the noise transfer engine 121, in which the backgrounds were removed from the real scans 103, the learning by the background transfer engine is made possible by using target image data that includes the background. The corresponding input images output from the noise transfer engine 121 are single-channel pseudo-realistic images enhanced by the GAN of the noise transfer engine 121.
The background transfer engine 221 may be configured as an image-to-image GAN architecture with a discriminator neural network 222 and a generator neural network 223. In an embodiment, the discriminator network 222 may be configured similarly to the discriminator network 122. The generator network 223 may be configured with the same architecture as the generator network 123, but with a loss function that may be edited to heavily penalize changes to the image foreground by using the input data as a binary mask and a Hadamard product. At each iteration of training, the discriminator network 222 may take the enhanced output of the noise transfer engine 121 as input 225, stacked into a single image with either the real scan 103 as the target 224 (i.e., the “real” pair) or the enhanced output 226 from the latest state of the generator 223 (i.e., the “fake” pair). The discriminator network 222 functionality includes discerning “fake” pairs from “real” pairs, in which each activation of the output layer represents a prediction for a patch of the input data. A binary cross entropy loss function may be applied by the discriminator network 222. In an embodiment, the generator neural network 223 may be configured as a U-Net architecture, with the synthetic depth data from the noise transfer engine 121 as input 225 and an activation layer returning an enhanced image as output 226. In order to train the generator 223 to make the simulated input data resemble the real target data and to fool the discriminator 222, the loss function for the generator 223 may be a combination of a cross entropy evaluation of the output and target images and the reversed discriminator loss. Once converged, the weights of the GAN of background transfer engine 221 may be fixed and saved, which completes the training of the entire pipeline for depth image generation useful for visual recognition applications.
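The foreground-preserving loss modification described above may be sketched with a binary mask and a Hadamard (element-wise) product; the weighting factor `fg_weight` and the squared-error form are illustrative assumptions, not the exact loss of the disclosure.

```python
import numpy as np

def masked_background_loss(gen_output, target, foreground_mask, fg_weight=10.0):
    """Penalize changes to the foreground (mask == 1) more heavily than
    errors in the synthesized background (mask == 0)."""
    diff = gen_output - target
    # Hadamard products isolate the foreground and background residuals.
    fg_error = np.mean((foreground_mask * diff) ** 2)
    bg_error = np.mean(((1.0 - foreground_mask) * diff) ** 2)
    return float(fg_weight * fg_error + bg_error)
```

With this weighting, the generator is strongly discouraged from altering object pixels while remaining free to synthesize background content.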
Following training, should the proposed pipeline be applied to a substantially different environment, such as a new target domain, the background transfer engine 221 may be fine-tuned in a refinement tuning session over a small dataset of real images 103 from the new target domain, to generate additional background data as required.
The processors 620 may include one or more central processing units (CPUs), graphical processing units (GPUs), or any other processor known in the art. More generally, a processor as described herein is a device for executing machine-readable instructions stored on a computer readable medium, for performing tasks, and may comprise any one or combination of hardware and firmware. A processor may also comprise memory storing machine-readable instructions executable for performing tasks. A processor acts upon information by manipulating, analyzing, modifying, converting or transmitting information for use by an executable procedure or an information device, and/or by routing the information to an output device. A processor may use or comprise the capabilities of a computer, controller or microprocessor, for example, and be conditioned using executable instructions to perform special purpose functions not performed by a general purpose computer. A processor may include any type of suitable processing unit including, but not limited to, a central processing unit, a microprocessor, a Reduced Instruction Set Computer (RISC) microprocessor, a Complex Instruction Set Computer (CISC) microprocessor, a microcontroller, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a System-on-a-Chip (SoC), a digital signal processor (DSP), and so forth. Further, the processor(s) 620 may have any suitable microarchitecture design that includes any number of constituent components such as, for example, registers, multiplexers, arithmetic logic units, cache controllers for controlling read/write operations to cache memory, branch predictors, or the like. The microarchitecture design of the processor may be capable of supporting any of a variety of instruction sets. A processor may be coupled (electrically and/or as comprising executable components) with any other processor enabling interaction and/or communication therebetween.
A user interface processor or generator is a known element comprising electronic circuitry or software or a combination of both for generating display images or portions thereof. A user interface comprises one or more display images enabling user interaction with a processor or other device.
The system bus 621 may include at least one of a system bus, a memory bus, an address bus, or a message bus, and may permit exchange of information (e.g., data (including computer-executable code), signaling, etc.) between various components of the computer system 610. The system bus 621 may include, without limitation, a memory bus or a memory controller, a peripheral bus, an accelerated graphics port, and so forth. The system bus 621 may be associated with any suitable bus architecture including, without limitation, an Industry Standard Architecture (ISA), a Micro Channel Architecture (MCA), an Enhanced ISA (EISA), a Video Electronics Standards Association (VESA) architecture, an Accelerated Graphics Port (AGP) architecture, a Peripheral Component Interconnects (PCI) architecture, a PCI-Express architecture, a Personal Computer Memory Card International Association (PCMCIA) architecture, a Universal Serial Bus (USB) architecture, and so forth.
Continuing with reference to
The operating system 634 may be loaded into the memory 630 and may provide an interface between other application software executing on the computer system 610 and hardware resources of the computer system 610. More specifically, the operating system 634 may include a set of computer-executable instructions for managing hardware resources of the computer system 610 and for providing common services to other application programs (e.g., managing memory allocation among various application programs). In certain example embodiments, the operating system 634 may control execution of one or more of the program modules depicted as being stored in the data storage 640. The operating system 634 may include any operating system now known or which may be developed in the future including, but not limited to, any server operating system, any mainframe operating system, or any other proprietary or non-proprietary operating system.
The application programs 635 may include a set of computer-executable instructions for performing synthetic image generation and the training of the noise and background transfer engines for depth scan generation in accordance with embodiments of the disclosure.
The computer system 610 may also include a disk/media controller 643 coupled to the system bus 621 to control one or more storage devices for storing information and instructions, such as a magnetic hard disk 641 and/or a removable media drive 642 (e.g., floppy disk drive, compact disc drive, tape drive, flash drive, and/or solid state drive). Storage devices 640 may be added to the computer system 610 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire). Storage devices 641, 642 may be external to the computer system 610, and may be used to store image processing data in accordance with the embodiments of the disclosure.
The computer system 610 may also include a display controller 665 coupled to the system bus 621 to control a display or monitor 666, such as a cathode ray tube (CRT) or liquid crystal display (LCD), for displaying information to a computer user. The computer system includes a user input interface 660 and one or more input devices, such as a user terminal 661, which may include a keyboard, touchscreen, tablet and/or a pointing device, for interacting with a computer user and providing information to the processors 620. The display 666 may provide a touch screen interface which allows input to supplement or replace the communication of direction information and command selections by the user terminal device 661.
The computer system 610 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 620 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 630. Such instructions may be read into the system memory 630 from another computer readable medium, such as the magnetic hard disk 641 or the removable media drive 642. The magnetic hard disk 641 may contain one or more data stores and data files used by embodiments of the present invention. The data stores may include, but are not limited to, databases (e.g., relational, object-oriented, etc.), file systems, flat files, distributed data stores in which data is stored on more than one node of a computer network, peer-to-peer network data stores, or the like. The processors 620 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 630. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
As stated above, the computer system 610 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein. The term “computer readable medium” as used herein refers to any medium that participates in providing instructions to the processors 620 for execution. A computer readable medium may take many forms including, but not limited to, non-transitory, non-volatile media, volatile media, and transmission media. Non-limiting examples of non-volatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as magnetic hard disk 641 or removable media drive 642. Non-limiting examples of volatile media include dynamic memory, such as system memory 630. Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the system bus 621. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
Computer readable medium instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, may be implemented by computer readable medium instructions.
The computing environment 600 may further include the computer system 610 operating in a networked environment using logical connections to one or more remote computers, such as remote computing device 680 and one or more image sensor devices 681, such as a depth scanning device (e.g., stereo camera) or the like, that may be used to capture real scan images 103. The network interface 670 may enable communication, for example, with other remote devices 680 or systems and/or the storage devices 641, 642 via the network 671. Remote computing device 680 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 610. When used in a networking environment, computer system 610 may include modem 672 for establishing communications over a network 671, such as the Internet. Modem 672 may be connected to system bus 621 via user network interface 670, or via another appropriate mechanism.
Network 671 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 610 and other computers (e.g., remote computing device 680). The network 671 may be wired, wireless or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-6, or any other wired connection generally known in the art. Wireless connections may be implemented using Wi-Fi, WiMAX, and Bluetooth, infrared, cellular networks, satellite or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 671.
It should be appreciated that the program modules, applications, computer-executable instructions, code, or the like depicted in
An executable application, as used herein, comprises code or machine readable instructions for conditioning the processor to implement predetermined functions, such as those of an operating system, a context data acquisition system or other information processing system, for example, in response to user command or input. An executable procedure is a segment of code or machine readable instruction, sub-routine, or other distinct section of code or portion of an executable application for performing one or more particular processes. These processes may include receiving input data and/or parameters, performing operations on received input data and/or performing functions in response to received input parameters, and providing resulting output data and/or parameters.
The functions and process steps herein may be performed automatically or wholly or partially in response to user command. An activity (including a step) performed automatically is performed in response to one or more executable instructions or device operation without user direct initiation of the activity.
The system and processes of the figures are not exclusive. Other systems, processes and menus may be derived in accordance with the principles of the invention to accomplish the same objectives. Although this invention has been described with reference to particular embodiments, it is to be understood that the embodiments and variations shown and described herein are for illustration purposes only. Modifications to the current design may be implemented by those skilled in the art, without departing from the scope of the invention. As described herein, the various systems, subsystems, agents, managers and processes can be implemented using hardware components, software components, and/or combinations thereof. No claim element herein is to be construed under the provisions of 35 U.S.C. 112(f), unless the element is expressly recited using the phrase “means for.”
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Apr 03 2018 | PLANCHE, BENJAMIN | Siemens Aktiengesellschaft | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 051721 | /0425 | |
Apr 09 2018 | WU, ZIYAN | Siemens Corporation | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 051721 | /0366 | |
Aug 07 2018 | Siemens Aktiengesellschaft | (assignment on the face of the patent) | / | |||
Oct 28 2019 | Siemens Corporation | SIEMENS ENERGY, INC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 051975 | /0167 | |
Feb 18 2020 | Siemens Corporation | Siemens Aktiengesellschaft | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 051973 | /0417 |