The invention produces a higher quality image from a rendering system based on the relationship between the output of the rendering system and the parameters used to compute it. Specifically, noise is removed from a rendering by estimating the functional dependency between sample features and the random inputs to the system. Mutual information is applied to a local neighborhood of samples in each part of the image. This dependency is then used to reduce the importance of certain scene features in a cross-bilateral filter, which preserves scene detail. The results produced by the invention are computed in a few minutes, making it reasonably robust for use in production environments.

Patent: RE48083
Priority: Jan 17, 2011
Filed: Apr 29, 2016
Issued: Jul 07, 2020
Expiry: Jan 17, 2032
Entity: Small
1. A method for performing Monte Carlo denoising, comprising the steps of:
rendering one or more image samples or one or more pixels at a given sampling rate;
storing a vector of a plurality of scene features and one or more rendering system inputs for each image sample or each pixel;
choosing the one or more image samples or the one or more pixels to process;
performing pre-processing on the one or more image samples or the one or more pixels using one or more stored features and filter parameters that are automatically adjusted for each image sample or each pixel to compute filter weights;
filtering the one or more image samples or the one or more pixels using the computed filter weights to compute a filtered value; and
outputting a final image, wherein the final image includes the computed filtered value of each image sample or each pixel.
2. The method for performing Monte Carlo denoising according to claim 1, wherein the choosing step further comprises the step of conducting iterations of using a block around the one or more pixels from a large block size to a small block size.
3. The method for performing Monte Carlo denoising according to claim 1, wherein the choosing step further comprises the step of selecting a random subset of image samples within a block around the one or more pixels.
4. The method for performing Monte Carlo denoising according to claim 1, wherein the performing step further comprises the step of clustering the one or more image samples or the one or more pixels into one or more groups.
5. The method for performing Monte Carlo denoising according to claim 4, wherein the clustering step includes the step of calculating a standard deviation of a mean for the one or more pixels.
6. The method for performing Monte Carlo denoising according to claim 1, wherein the performing step further comprises the step of manipulating the vector by removing a mean and dividing by a standard deviation for each feature of the plurality of scene features, wherein each feature is a scene feature for each image sample.
7. The method for performing Monte Carlo denoising according to claim 13, wherein a dependency is a statistical dependency.
8. The method for performing Monte Carlo denoising according to claim 1, wherein each feature is at least one selected from the group consisting of: world position, surface normal, color, texture value, texture coordinate, and shader value.
9. The method for performing Monte Carlo denoising according to claim 1, wherein the one or more rendering system inputs is at least one selected from the group consisting of: screen position and random parameter.
10. The method for performing Monte Carlo denoising according to claim 1, wherein the filtering step further comprises the step of:
classifying the filter weights above a certain value to mean that a feature of the plurality has little or no dependency on the one or more random parameters and the filter weights below a certain value to mean that a feature of the plurality has a significant dependency on the one or more random parameters.
11. The method for performing Monte Carlo denoising according to claim 1, further comprising the step of reconstructing the computed filtered value of each image sample or each pixel to obtain the final image.
12. The method for performing Monte Carlo denoising according to claim 1, further comprising the step of saving, by a rendering system, one or more random parameters for each image sample or each pixel used to calculate the image sample.
13. The method for performing Monte Carlo denoising according to claim 12, further comprising the steps of:
calculating a dependency of a color and a feature of the one or more random parameters and the one or more rendering system inputs to obtain a calculated dependency;
using the calculated dependency to determine a weight for each feature to obtain dependency information; and
modifying the one or more image samples or the one or more pixels using the dependency information to obtain a final modified sample.

In one embodiment, a value of 30 instead of 3 is used when testing the world position, since it varies much more than the other features. Also, this test is only performed when σ^f_{𝒩,k} > 0.1, because it is not desirable to throw away all the samples in cases where the variance is very small, such as constant-valued regions.

Before the statistical dependencies for a set of samples in a neighborhood are computed, the scene features are normalized by removing the mean and dividing by the standard deviation for each of the elements in the sample vector. The reason is that the features in f reside in very different coordinate systems. For example, world positions could be in the range of 0 to 1000, while the normal vector could have components in the range of 0 to 1. If this discrepancy is not corrected, a larger weight could inadvertently be given to certain features when calculating dependency, even though those features may not be more important. Vectors that have been normalized in this manner are denoted with a bar; for example, f becomes f̄.
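The normalization step above can be sketched in Python; `normalize_features` is a hypothetical helper, not code from the patent, and the sample values are illustrative:

```python
import math

def normalize_features(samples):
    """Normalize each feature column: subtract the mean and divide by the
    standard deviation, so features in different units become comparable."""
    n = len(samples)
    m = len(samples[0])
    normalized = [row[:] for row in samples]
    for k in range(m):
        col = [row[k] for row in samples]
        mean = sum(col) / n
        std = math.sqrt(sum((v - mean) ** 2 for v in col) / n)
        for i in range(n):
            # Guard against constant features (zero variance).
            normalized[i][k] = (samples[i][k] - mean) / std if std > 0 else 0.0
    return normalized

# Example: world positions (~hundreds) vs. normal components (~unit scale).
samples = [[100.0, 0.1], [300.0, 0.5], [500.0, 0.9]]
norm = normalize_features(samples)
```

After normalization, both columns have zero mean and unit variance, so a world-position difference no longer dominates a normal-vector difference of the same significance.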

The core of the algorithm according to the invention is the computation of the color weights α and feature weights β as shown in FIG. 5. Mutual information is used to compute the statistical dependencies between a sample feature and the inputs to the Monte Carlo system.

Since it is difficult to derive an exact functional relationship between the scene features and the inputs of the rendering system p_i and r_i for complex scenes, a statistical dependency is proposed based on how much information the inputs provide about the scene features. The invention uses mutual information, an exact measure of dependence between two random variables that indicates how much information one provides about the other. In one embodiment, the mutual information between two random variables X and Y can be calculated as:

μ(X; Y) = Σ_{x∈X} Σ_{y∈Y} p(x, y) log[ p(x, y) / ( p(x) p(y) ) ]   (Eq. 1)
where these probabilities are computed over the neighborhood of samples 𝒩 around a given pixel. To calculate the mutual information between two vectors x and y (specifically, f̄_{𝒩,k} and r̄_{𝒩,l}, respectively), a histogram of each is computed. To compute the histograms, all values are made positive by subtracting the minimum element in the vector, and the elements are quantized into integer bins by rounding their values. The number of times the values of x fall inside each bin is counted, and the probabilities are found by dividing by the length of x. A joint histogram is calculated in a similar way, except with pairs of values (x, y).
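A minimal Python sketch of this histogram-based estimator, following the quantization just described (the function name and test data are illustrative, not from the patent):

```python
import math
from collections import Counter

def mutual_information(x, y):
    """Histogram-based mutual information (Eq. 1, in nats) between two
    equal-length sequences: shift values to be nonnegative, round them
    into integer bins, then count marginal and joint bin frequencies."""
    def quantize(v):
        lo = min(v)
        return [round(e - lo) for e in v]
    xq, yq = quantize(x), quantize(y)
    n = len(xq)
    px = Counter(xq)                  # marginal histogram of x
    py = Counter(yq)                  # marginal histogram of y
    pxy = Counter(zip(xq, yq))        # joint histogram of (x, y) pairs
    mi = 0.0
    for (a, b), c in pxy.items():
        p_joint = c / n
        p_marg = (px[a] / n) * (py[b] / n)
        mi += p_joint * math.log(p_joint / p_marg)
    return mi

# Identical sequences: MI equals the entropy of x (log 2 for a fair bit);
# a constant sequence carries no information, so MI is zero.
x = [0, 1, 0, 1, 0, 1, 0, 1]
```

The quantization makes the estimator cheap but coarse; finer binning trades noise in the histogram against resolution, which is why the patent computes it only over a local neighborhood of samples.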

To estimate statistical dependencies on the inputs, the dependency of the kth scene feature on all random parameters (denoted D^r_{f,k}) is calculated using mutual information. The heuristic approximates this by measuring the dependency on each individual random parameter and adding the results. Therefore, the statistical dependency between the kth scene feature and the lth random parameter is first calculated as D^{r,l}_{f,k} = μ(f̄_{𝒩,k}; r̄_{𝒩,l}), and the dependency of the kth scene feature on all n random parameters is then approximated as:

D^r_{f,k} = Σ_{l=1..n} D^{r,l}_{f,k} = Σ_{l=1..n} μ(f̄_{𝒩,k}; r̄_{𝒩,l})   (Eq. 2)

The dependency of the kth scene feature on screen position (D^p_{f,k}) and the color dependencies D^r_{c,k} and D^p_{c,k} are computed similarly:

D^p_{f,k} = Σ_{l=1..2} D^{p,l}_{f,k} = Σ_{l=1..2} μ(f̄_{𝒩,k}; p̄_{𝒩,l}),   (Eq. 3)
D^r_{c,k} = Σ_{l=1..n} D^{r,l}_{c,k} = Σ_{l=1..n} μ(c̄_{𝒩,k}; r̄_{𝒩,l}),   (Eq. 4)
D^p_{c,k} = Σ_{l=1..2} D^{p,l}_{c,k} = Σ_{l=1..2} μ(c̄_{𝒩,k}; p̄_{𝒩,l}).   (Eq. 5)

The dependency of the kth color channel on all the scene features is computed so that the weight of features that do not contribute to the final color can later be reduced:

D^f_{c,k} = Σ_{l=1..m} D^{f,l}_{c,k} = Σ_{l=1..m} μ(c̄_{𝒩,k}; f̄_{𝒩,l}).   (Eq. 6)

In addition, a related term is computed describing how all color channels depend on the kth scene feature:

D^{f,k}_c = Σ_{l=1..3} D^{f,k}_{c,l} = Σ_{l=1..3} μ(c̄_{𝒩,l}; f̄_{𝒩,k}).   (Eq. 7)

Finally, the terms D^r_c, D^p_c, and D^f_c are calculated by summing over the color channels:

D^r_c = Σ_{k=1..3} D^r_{c,k},   D^p_c = Σ_{k=1..3} D^p_{c,k},   D^f_c = Σ_{k=1..3} D^f_{c,k}.   (Eq. 8)

As shown in FIG. 5, the error of the approximation is determined. Ideally, the statistical dependency of the kth scene feature on all random parameters would be calculated using the joint mutual information μ(r̄_{𝒩,1}, r̄_{𝒩,2}, . . . , r̄_{𝒩,n}; f̄_{𝒩,k}). Unfortunately, this joint mutual information is difficult and expensive to compute as n grows, because the size of the joint histogram grows exponentially with n while the number of samples available for statistics grows only linearly. This means that the ability to compute the histogram accurately degenerates quickly, and the computation becomes much slower. For this reason, the approximation measures the dependency on individual random parameters and adds them up, as described above.

Now the effect this approximation has on the overall calculation of statistical dependency is examined. To keep things simple, assume that two statistically independent random variables R1 and R2 are inputs to the system and produce the output feature Y. To measure μ(R1, R2; Y), it is approximated as μ(R1; Y) + μ(R2; Y). The following derivation shows that the statistical dependence is underestimated:
μ(R1, R2; Y) = H(R1, R2) − H(R1, R2 | Y)
             = H(R1) + H(R2 | R1) − H(R1 | Y) − H(R2 | R1, Y)
             = μ(R1; Y) + H(R2 | R1) − H(R2 | R1, Y)

If R1 and R2 are independent, then H(R2 | R1) = H(R2), so that:
μ(R1, R2; Y) = μ(R1; Y) + H(R2) − H(R2 | R1, Y)
             = μ(R1; Y) + μ(R2; R1, Y)
             = μ(R1; Y) + μ(Y, R1; R2)
             = μ(R1; Y) + μ(R2; Y) + μ(R1; R2 | Y)

Thus, the approximation μ(R1, R2; Y) ≈ μ(R1; Y) + μ(R2; Y) effectively assumes that μ(R1; R2 | Y) = 0. This means the information that the output feature provides about the relationship between the inputs is ignored, and that information might not be zero even though the inputs are statistically independent. To understand why, let the function ƒ( ) act as an XOR gate of two inputs. If one of the inputs and the output are known, the other input can be determined automatically, even though the two inputs are statistically independent. Since μ(R1; R2 | Y) ≥ 0, the approximation is an underestimate of the true joint mutual information between the random parameters and the scene feature. However, the approximation works quite reasonably, even for intricate scenes with complex relationships between the random parameters and the scene features.
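The XOR example can be checked numerically: with two independent fair bits and Y = R1 XOR R2, each pairwise term is zero while the joint mutual information is one bit, so the approximation underestimates by exactly μ(R1; R2 | Y). A small self-contained sketch (names and data are illustrative):

```python
import math
from collections import Counter

def mi(xs, ys):
    """Plug-in mutual information (in bits) from paired observations."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

# Exhaustive truth table: R1, R2 are independent fair bits, Y = R1 XOR R2.
r1 = [0, 0, 1, 1]
r2 = [0, 1, 0, 1]
y = [a ^ b for a, b in zip(r1, r2)]

approx = mi(r1, y) + mi(r2, y)     # pairwise approximation: 0 bits
joint = mi(list(zip(r1, r2)), y)   # true joint mutual information: 1 bit
```

Either input alone tells nothing about the XOR output, yet both together determine it completely, which is the worst case for the summed approximation.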

Since the sample features are functions only of the random parameters r_i and the screen position p_i, the heuristic computes the fractional contribution of the random parameters to the kth scene feature with the following formula:

W^r_{f,k} = D^r_{f,k} / ( D^r_{f,k} + D^p_{f,k} + ε )   (Eq. 9)

The ε term prevents degeneration when the dependencies D^r_{f,k} and D^p_{f,k} are both small. This expression communicates how much the kth feature was affected by the random parameters as a fraction of the contributions from both sets of inputs, under the reasonable assumption that the screen position and random parameters are statistically independent. When the sample is only a function of the random parameters, this value will be close to 1, and when it depends only on the screen position it will be 0. In the common case where both inputs contribute, for example a partially out-of-focus object that depends on both screen position and u, v, the invention simply interpolates between the two.
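Eq. 9 is a one-liner in practice; the function name and the default ε value are assumptions for illustration:

```python
def fractional_contribution(d_random, d_screen, eps=1e-10):
    """W^r_{f,k} (Eq. 9): the fraction of a feature's dependency that is
    attributable to the random parameters; eps prevents 0/0 degeneration
    when both dependencies are small."""
    return d_random / (d_random + d_screen + eps)

# Driven only by random parameters -> near 1; only by screen position
# -> near 0; mixed contributions interpolate between the two.
```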

The invention also includes a similar calculation using the dependencies of the kth sample color channel on the random parameters D^r_{c,k} and on the screen position D^p_{c,k} to compute the fractional contribution of the random parameters to the kth color channel:

W^r_{c,k} = D^r_{c,k} / ( D^r_{c,k} + D^p_{c,k} + ε )   (Eq. 10)

The overall contribution of the random parameters to the color, W^r_c, used to size the filter, is obtained by averaging the W^r_{c,1}, W^r_{c,2}, W^r_{c,3} terms:

W^r_c = (1/3)( W^r_{c,1} + W^r_{c,2} + W^r_{c,3} ).   (Eq. 11)

Lastly, the W^{f,k}_c term is computed to communicate how much the color depends on a specific feature:

W^{f,k}_c = D^{f,k}_c / ( D^r_c + D^p_c + D^f_c )   (Eq. 12)

As shown in FIG. 5, the fractional contributions are then computed. Specifically, once the statistical dependencies have been calculated, the normalized dependencies are computed to determine the α and β parameters. These parameters are adjusted in each iteration as the block size decreases, giving more weight to the dependency on the random parameters. The idea is that when the block sizes are large, the dependency on the spatial screen position increases because of the natural spatial variations in the image; at the same time, the statistics are more corrupted by the mixing that happens with large block sizes. Therefore, more weight is given to the normalized dependency on the random parameters as the block size shrinks with each iteration. This adjustment is expressed as:
α_k = max(1 − 2(1 + 0.1t) W^r_{c,k}, 0)   (Eq. 13)
β_k = W^{f,k}_c · max(1 − (1 + 0.1t) W^r_{f,k}, 0)   (Eq. 14)
where t is the iteration of the multi-pass approach, with t = 0 for the first pass. The t term increases the influence of W^r_{c,k} and W^r_{f,k} on each successive iteration, and the max( ) ensures that the value stays nonnegative.
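Eqs. 13 and 14 can be sketched directly; the helper name is hypothetical:

```python
def alpha_beta(w_c_k_r, w_f_k_r, w_c_f_k, t):
    """Color weight alpha_k (Eq. 13) and feature weight beta_k (Eq. 14)
    at iteration t of the multi-pass approach (first pass: t = 0).
    Larger dependency on the random parameters shrinks the weight, and
    the (1 + 0.1t) factor strengthens that effect on later iterations."""
    alpha = max(1.0 - 2.0 * (1.0 + 0.1 * t) * w_c_k_r, 0.0)
    beta = w_c_f_k * max(1.0 - (1.0 + 0.1 * t) * w_f_k_r, 0.0)
    return alpha, beta
```

A channel whose color is half-driven by the random parameters already gets α_k = 0 on the first pass, while a feature untouched by them keeps its full β_k weight.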

As shown in FIG. 6, the color samples are filtered. The invention filters the color of samples x_i using a weighted bilateral filter in which the importance of the color and scene features is adjusted to reflect their dependence on the random parameters:

w_{ij} = exp[ −(1/(2σ_p²)) Σ_{k=1..2} (p̄_{i,k} − p̄_{j,k})² ] × exp[ −(1/(2σ_c²)) Σ_{k=1..3} α_k (c̄_{i,k} − c̄_{j,k})² ] × exp[ −(1/(2σ_f²)) Σ_{k=1..m} β_k (f̄_{i,k} − f̄_{j,k})² ],   (Eq. 15)
where w_{ij} is the contribution, or weight, of the jth sample to the ith sample during filtering. Because the samples in neighborhood 𝒩 are selected randomly using a Gaussian distribution with standard deviation σ_p (where σ_p = b/4), the first term of this expression is dropped, and it becomes:

w_{ij} = exp[ −(1/(2σ_c²)) Σ_{k=1..3} α_k (c̄_{i,k} − c̄_{j,k})² ] × exp[ −(1/(2σ_f²)) Σ_{k=1..m} β_k (f̄_{i,k} − f̄_{j,k})² ].   (Eq. 16)

The variances of the Gaussians for both the color and the feature are set to the same value:

σ_c² = σ_f² = σ² / (1 − W^r_c)²   (Eq. 17)

The variances are divided by (1 − W^r_c)² because, in the end, only the sample color matters, and a large filter is desired wherever the color depends heavily on the random parameters, i.e., is very noisy. This term adjusts the size of the Gaussian based on the overall noise level, making it large when needed. The terms σ_c² and σ_f² are distinguished from α_k because they depend on all three color channels (through the W^r_c term), whereas the W^r_{c,k} term in α_k varies per color channel. Therefore, σ_c² and σ_f² modulate the overall size of the Gaussian, while α_k and β_k adjust it further based on the dependencies on the random parameters.
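Eqs. 16 and 17 together can be sketched as a per-pair weight; the base variance `sigma2` and the function name are assumed placeholders, not values from the patent:

```python
import math

def pair_weight(c_i, c_j, f_i, f_j, alpha, beta, w_c_r, sigma2=0.02):
    """Cross-bilateral weight between samples i and j (Eq. 16). The shared
    variance is enlarged by the overall color noise level (Eq. 17), so the
    filter widens where the color depends heavily on the random parameters."""
    var = sigma2 / (1.0 - w_c_r) ** 2                        # Eq. 17
    d_color = sum(a * (x - y) ** 2 for a, x, y in zip(alpha, c_i, c_j))
    d_feat = sum(b * (x - y) ** 2 for b, x, y in zip(beta, f_i, f_j))
    return math.exp(-d_color / (2.0 * var)) * math.exp(-d_feat / (2.0 * var))

# Identical samples get weight 1; as w_c_r approaches 1 (very noisy color)
# the Gaussians widen, so the same difference is penalized less.
w_same = pair_weight([0.2] * 3, [0.2] * 3, [1.0], [1.0], [1.0] * 3, [1.0], 0.0)
w_lo = pair_weight([0.0] * 3, [0.1] * 3, [0.0], [0.0], [1.0] * 3, [1.0], 0.0)
w_hi = pair_weight([0.0] * 3, [0.1] * 3, [0.0], [0.0], [1.0] * 3, [1.0], 0.5)
```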

Because the constant σ_s² is divided by the number of samples per pixel s when computing the filter's variance σ², the invention is a biased but consistent estimator, meaning that it converges to the value of the integral as s goes to infinity. As s → ∞, a weight of w_{ij} = 1 is produced only when i = j, and zero everywhere else. The colors of the samples are therefore not filtered at all, so the invention converges to standard Monte Carlo, which is a consistent estimator.

Once the filter weights w_{ij} are obtained, they are used to blend in the color contributions from the neighboring samples:

c′_{i,k} = ( Σ_{j∈𝒩} w_{ij} c_{j,k} ) / ( Σ_{j∈𝒩} w_{ij} )   (Eq. 18)
where the denominator is never zero because at least w_{ii} = 1 (a sample fully contributes to itself). Note that this process filters the colors of individual samples (not pixels) and is performed separately for every pixel in the image, since the statistics change from pixel to pixel. After all samples in the image have been filtered, the process is repeated with a new iteration, as shown in FIG. 3.
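Eq. 18's normalized blend can be sketched as follows (the helper name and data layout are illustrative):

```python
def filter_sample(i, colors, weights):
    """Blend sample colors with normalized weights (Eq. 18). colors[j][k]
    is channel k of sample j; weights[i][j] is w_ij. Since w_ii = 1, the
    denominator is never zero."""
    total = sum(weights[i])
    return [sum(weights[i][j] * colors[j][k] for j in range(len(colors))) / total
            for k in range(len(colors[0]))]

# Two samples blended with equal weight average their colors per channel.
out = filter_sample(0, [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]], [[1.0, 1.0], [1.0, 1.0]])
```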

The invention can be applied to a variety of scenarios, such as using only direct lighting to highlight the illumination contribution from path tracing. Many regions of the image may be completely dark when using only direct lighting because these regions are totally occluded from the sky light source. This means that the visible illumination in these regions is due exclusively to path tracing. The invention is able to denoise these regions by examining the relationship between the sample values and the random parameters used to compute the bounces of each path.

Path tracing is notoriously noisy, and when its Monte Carlo samples are input to the algorithm, much of the detail in the scene's textures is completely gone. This is more evident if the color channel is multiplied by 1000: many of the pixels remain black, which indicates that these pixels have no useful color information. The cross-bilateral filter examines other sample features, such as world position, surface normal, and texture value, each weighted according to its amount of functional dependency on the random parameters. In embodiments where the samples' colors are extremely noisy because the path tracing produces a lot of noise while computing the global illumination, the invention detects the connection between the sample color and the random parameters of the path tracer, and essentially ignores the color when bilateral filtering. The texture value, on the other hand, is found to have little dependence on the random parameters, so it is weighted heavily by the cross-bilateral filter.

Therefore, to filter a sample, its color is ignored while close attention is paid to its texture value. When blending in values from around the filter kernel, only samples with similar texture values are blended together. Therefore, if a sample hits a dark part of the texture, samples from other parts of the texture that are also dark are blended in. Essentially, the filter according to the invention combines many noisy samples of dark texture to approximate a noise-free dark texture. Of course, some blurring of the texture detail occurs when a large filter kernel is used to help denoise a very noisy image; ideally, a small filter is desired to help preserve detail. The invention is reasonably robust and of great importance in production environments.

Although color is treated as a special feature since the color channel is filtered, it is contemplated that only incident illumination may be filtered and used in a surface shader to get an improved result.

FIG. 7 illustrates an exemplary computer rendering system 700 that may be used to implement the methods according to the invention. One or more computer systems 700 may carry out the methods presented herein as computer code.

Computer system 700 includes an input/output display interface 702 connected to a communication infrastructure 704, such as a bus, which forwards data such as graphics, text, and information from the communication infrastructure 704 or from a frame buffer (not shown) to other components of the computer system 700. The input/output display interface 702 may be, for example, a keyboard, touch screen, joystick, trackball, mouse, monitor, speaker, printer, any other computer peripheral device, or any combination thereof, capable of entering and/or viewing data.

Computer system 700 includes one or more processors 706, which may be a special purpose or a general-purpose digital signal processor that processes certain information. Computer system 700 also includes a main memory 708, for example random access memory, read-only memory, mass storage device, or any combination thereof. Computer system 700 may also include a secondary memory 710 such as a hard disk unit 712, a removable storage unit 714, or any combination thereof. Computer system 700 may also include a communication interface 716, for example, a modem, a network interface (such as an Ethernet card or Ethernet cable), a communication port, a PCMCIA slot and card, wired or wireless systems (such as Wi-Fi, Bluetooth, Infrared), local area networks, wide area networks, intranets, etc.

It is contemplated that the main memory 708, secondary memory 710, communication interface 716, or a combination thereof, function as a computer usable storage medium, otherwise referred to as a computer readable storage medium, to store and/or access computer software including computer instructions. For example, computer programs or other instructions may be loaded into the computer system 700 through a removable storage device, for example, a floppy disk, ZIP disk, magnetic tape, portable flash drive, optical disk (such as a CD, DVD, or Blu-ray disc), Micro-Electro-Mechanical Systems storage, or a nanotechnological apparatus. Specifically, computer software including computer instructions may be transferred from the removable storage unit 714 or hard disk unit 712 to the secondary memory 710, or through the communication infrastructure 704 to the main memory 708 of the computer system 700.

Communication interface 716 allows software, instructions and data to be transferred between the computer system 700 and external devices or external networks. Software, instructions, and/or data transferred by the communication interface 716 are typically in the form of signals that may be electronic, electromagnetic, optical or other signals capable of being sent and received by the communication interface 716. Signals may be sent and received using wire or cable, fiber optics, a phone line, a cellular phone link, a Radio Frequency link, wireless link, or other communication channels.

Computer programs, when executed, enable the computer system 700, particularly the processor 706, to implement the methods of the invention according to computer software including instructions.

The computer system 700 described herein may perform any one of, or any combination of, the steps of any of the methods presented herein. It is also contemplated that the methods according to the invention may be performed automatically, or may be invoked by some form of manual intervention.

The computer system 700 of FIG. 7 is provided only for purposes of illustration, such that the invention is not limited to this specific embodiment. It is appreciated that a person skilled in the relevant art knows how to program and implement the invention using any computer system.

The computer system 700 may be a handheld device and include any small-sized computer device including, for example, a personal digital assistant, smart hand-held computing device, cellular telephone, or a laptop or netbook computer, hand held console or MP3 player, tablet, or similar hand held computer device, such as an iPad®, iPad Touch® or iPhone®.

While the invention has been described with reference to particular embodiments, those skilled in the art will recognize that many changes may be made thereto without departing from the scope of the invention. Each of these embodiments and variants thereof is contemplated as falling within the scope of the claimed invention, as set forth in the following claims.

Inventors: Darabi, Aliakbar; Sen, Pradeep

Assignee: STC.UNM (assignment on the face of the patent), Apr 29, 2016