1. Introduction
Monte Carlo (MC) techniques have formed the foundation of realistic image synthesis for decades (Cook et al., 1984). The principle is simple: a function is sampled and the samples are combined to approximate its integral. Standard MC is often referred to as brute force, as its implementation is simple, but the variance of the estimate can be high and convergence slow. One common way to improve the integral approximation is to reconstruct the underlying function from the samples, and much previous work has devoted its attention to particular cases (e.g., shadows (Egan et al., 2009) or depth of field (Soler et al., 2009)). In this work, we revisit the reconstruction process. We derive an easy-to-implement algorithm to compute sample weights that generally improves the approximation when compared to standard weights for general MC integration.
Our observation is that standard sample weights are often less accurate at lower sampling rates because they reflect neither the integration domain nor the local sample density. Our weighting scheme considers all samples of a given set and defines weights based on a geometric partitioning of a low-dimensional integration domain. It results in a consistent estimator that outperforms standard weighting schemes. A major contribution of our work is the derivation of an unbiased estimator. It builds upon this partitioning and applies to sets of independent and identically distributed uniform random samples or stratified samples. Specifically, we propose a novel weighting scheme that is easy to implement and builds upon a sound theoretical derivation. It integrates well into existing rendering pipelines, can be parallelized in conjunction with the unbiased estimator, and we demonstrate its benefit over existing schemes on several rendering problems.
2. Background
MC methods
Since the 80s (Cook et al., 1984), MC integration has played a major role in rendering complex effects, such as motion blur, depth of field, and soft shadows. The complete light transport is described by the rendering equation (Kajiya, 1986), which can be solved using path tracing as an associated MC solution. Nevertheless, not all samples taken during the evaluation of an integral contribute strongly to the result. One strategy is to rely on previous samples to guide subsequent sample choices, i.e., a Markov process. Metropolis sampling (Veach and Guibas, 1997) can handle complex light path configurations by extensively exploring contributing paths once they are discovered. Multidimensional kd-trees (Hachisuka et al., 2008; Guo et al., 2018) can store samples in a global structure, which can then be used as a means to control future sample placement.
While standard MC methods solve a definite integral of a function $f$ over a finite support (here normalized to $[0,1]^d$) by using a random sample set $\{x_i\}_{i=1}^{n}$ with the resulting estimator being $\langle I \rangle = \frac{1}{n}\sum_{i=1}^{n} f(x_i)$, importance sampling influences the sampling process via a probability density function (pdf) $p$ (Veach and Guibas, 1995). The resulting unbiased estimator is:

$\langle I \rangle = \frac{1}{n}\sum_{i=1}^{n} \frac{f(x_i)}{p(x_i)}$,   (1)
which effectively weighs samples differently. Importance sampling is interesting when knowledge about the scene is available. For instance, importance sampling the light source works better in scenes with small or point light sources (Dutre et al., 2006; Debevec, 2008), while sampling according to the BSDF works better on glossy to highly glossy surfaces (Shirley, 1991; Ward, 1992; Lafortune et al., 1997). Multiple importance sampling (MIS) combines different such sampling strategies (Veach and Guibas, 1995).
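To make Eq. 1 concrete, here is a minimal Python sketch of importance sampling on $[0,1]$. The integrand, the pdf, and the helper name `importance_sample` are illustrative choices, not part of any particular renderer:

```python
import math
import random

def importance_sample(f, pdf, inv_cdf, n, seed=0):
    """Estimate the integral of f over [0, 1] by importance sampling.

    Samples are drawn via the inverse-CDF method; each contribution is
    divided by the pdf value, as in Eq. 1.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = inv_cdf(rng.random())   # draw x ~ pdf
        total += f(x) / pdf(x)      # reweight by 1 / pdf(x)
    return total / n

# Integrand peaked near 0; a matching linear pdf reduces variance.
f = lambda x: (1.0 - x) ** 3
pdf = lambda x: 2.0 * (1.0 - x)              # integrates to 1 on [0, 1]
inv_cdf = lambda u: 1.0 - math.sqrt(1.0 - u)  # inverse of P(x) = 1-(1-x)^2

est = importance_sample(f, pdf, inv_cdf, 10_000)
# Exact value of the integral is 1/4.
```

Because the pdf roughly matches the integrand's shape, the reweighted contributions $f(x)/p(x)$ vary less than $f(x)$ itself, which lowers the variance of the estimate.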
Reweighting
Our solution focuses on the weighting of samples interpreted as an improved function reconstruction. Different weight definitions have been shown to be beneficial for rendering, e.g., derived in Sobolev spaces (Marques et al., 2018). However, these previous solutions target hemispherical illumination integrals and are not generally applicable to other problems. A reweighting scheme was also proposed for addressing firefly artifacts (Zirr et al., 2018) but the solution is biased and limited to narrow application scenarios.
Other specialized reconstruction techniques exist, including solutions for soft shadows (Egan et al., 2009), defocus blur (Soler et al., 2009), and motion blur (Egan et al., 2011), which lead to significant improvements. More complex reconstructions for light fields (Lehtinen et al., 2011) have proven very successful but are biased (though consistent).
Our method is independent of the application scenario and unbiased. It handles general functions and links the weights to Voronoi cell volumes. The latter have also been studied in the context of antialiasing (Mitchell, 1990), where the 2D Voronoi cell volumes bounded by a pixel are directly used as sample weights, which improves antialiasing, but the theory was neither further developed towards unbiased solutions, nor generalized to other contexts. Voronoi cell size has also been used as a weight for Monte Carlo integration by Vořechovský et al. (2016), who propose two ways of treating boundaries: the Voronoi cells of a given sample set are either bounded and clipped by the domain boundary, or extended by periodically adding auxiliary samples beyond the domain. Both approaches are shown to improve the numerical performance of MC integration. However, as we show in Sec. 4, directly using the Voronoi cell size as a weight results in a biased estimate. Our solution takes advantage of the Voronoi tessellation and remains unbiased.
3. Formulation and Problem Statement
Referring to Eq. 1, the importance-sampling estimator is a sum of function values times weights $w_i = \frac{1}{n\,p(x_i)}$, generally:

$\langle I \rangle = \sum_{i=1}^{n} w_i\, f(x_i)$.   (2)

Similarly, Riemann integration approximates an integral using function values times weights $w_i$:

$I \approx \sum_{i=1}^{n} w_i\, f(x_i)$.   (3)
The Riemann weights stem from a partitioning of the support into hypervolumes. In 1D, these hypervolumes are intervals. Each hypervolume contains exactly one sample and its volume defines the sample’s weight.
The MC weights $w_i$ are typically easy to compute but cannot be interpreted as hypervolumes; they would overlap or introduce gaps and cannot easily be linked to a partitioning of the support. Only with an increasing number of samples, when, due to the stochastic nature of the process, the samples densely cover the support, does the difference between the weight definitions become negligible. See Fig. 1 (a), (b) and (c) for an illustration. In consequence, especially at low sample counts, the weights do not reflect a good approximation of the function.
4. Geometric Sample Reweighting
Our goal is to associate weights with samples so as to define an improved function reconstruction during integration. We will first define a consistent solution, inspired by Riemann integration. This solution is independent of the sampling pattern and can be applied to any sample set as a post-process to improve the approximation. This reweighting is consistent, but not unbiased for all sampling strategies. We then propose a modification to obtain an unbiased estimator for uniform random samples that are independent and identically distributed (i.i.d.), and for samples with stratification. See Section 6 for the possibility to generalize our method to samples generated with an analytically known pdf.
4.1. Consistent Estimator
Riemann integration typically assumes a regular partitioning of the domain. Using a Voronoi diagram of the sample points, it is possible to partition the domain in a way that takes sample density into account. A Voronoi diagram is a partition into regions such that the points in each region share the same closest sample location. It can be shown that each Voronoi cell corresponds to the intersection of half-spaces defined by hyperplanes that are equidistant to two sample points. The theory of Voronoi diagrams is beyond the scope of this paper; more details can be found in (Aurenhammer, 1991; De Berg et al., 1997).
In our $d$-dimensional problem setting, the diagram is bounded by the hypercube $[0,1]^d$, the domain from which the samples are drawn. The volume $|V_i|$ of each Voronoi cell determines the corresponding sample weight, and given that the cells are intersections of half-spaces, they are convex and their volume can be computed easily. The resulting estimator of our approach is $\langle I \rangle = \sum_{i=1}^{n} |V_i|\, f(x_i)$.
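In 1D, this construction reduces to interval midpoints, which the following sketch illustrates. The function names are ours, and this is the consistent (not yet unbiased) variant:

```python
def voronoi_weights_1d(samples):
    """1D Voronoi cell lengths on [0, 1]: each cell runs between the
    midpoints to the neighbouring samples, clipped to the domain."""
    xs = sorted(samples)
    n = len(xs)
    weights = []
    for i, x in enumerate(xs):
        left = 0.0 if i == 0 else 0.5 * (xs[i - 1] + x)
        right = 1.0 if i == n - 1 else 0.5 * (x + xs[i + 1])
        weights.append(right - left)
    return xs, weights

def voronoi_estimate(f, samples):
    """Consistent (but, as shown below, biased) estimator using the
    cell volumes directly as sample weights."""
    xs, ws = voronoi_weights_1d(samples)
    return sum(w * f(x) for x, w in zip(xs, ws))
```

Note that the weights always sum to one, since the cells tile the domain exactly.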
Implicitly, this construction approximates the integrand via a piecewise-constant representation. Intuitively, to draw the most benefit from this interpretation, samples should be chosen with respect to the gradient of the function. Fig. 1 shows an illustration of this strategy. In principle, even more advanced approximations could be used, yet it turns out that such weight definitions, while consistent, lead to a biased estimate. In the following, we show the reasoning behind this and derive an unbiased estimator for i.i.d. uniform sampling and stratified sampling.
4.2. Deriving an Unbiased Estimator
i.i.d. Uniform Samples
The reason the direct use of the Voronoi cells' volumes is biased lies in the samples whose cells share a boundary with the domain boundary. To illustrate this situation, we will first consider the 1D case with a set of $n$ (with $n \geq 3$) i.i.d. uniform samples before generalizing to $d$ dimensions and then to stratified sampling.
One Dimension
Let us assume that the one-dimensional sample set is sorted from smallest to largest value. We are interested in the expected extent of each Voronoi cell, for which we need to derive the expected distance between two adjacent samples. For this reason, we first determine the expected position of sample $x_i$.
From order statistics (David and Nagaraja, 2004), we know that the $i$-th of $n$ sorted i.i.d. uniform samples follows the beta distribution, i.e., $x_i \sim \mathrm{Beta}(i,\, n - i + 1)$.
The expected position of the ordered $i$-th sample is then:

$E[x_i] = \frac{i}{n+1}$.

Consequently, we have $E[x_{i+1} - x_i] = \frac{1}{n+1}$ for $i = 1$ to $n-1$, and a similar condition holds for the two boundary intervals $[0, x_1]$ and $[x_n, 1]$. The expected weight is then $E[w_i] = \frac{1}{n+1}$ for samples $i = 2$ to $n-1$, and $E[w_1] = E[w_n] = \frac{3}{2(n+1)}$ for samples $x_1$ and $x_n$. The latter weights are larger because their cells contain the two boundaries of the domain. Using these weights directly leads to a consistent but biased estimator.
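The expected positions are easy to verify empirically. The following short script (ours, for illustration only) averages sorted uniform samples and compares against $\frac{i}{n+1}$:

```python
import random

def mean_order_stats(n, trials=20000, seed=1):
    """Empirically average the sorted positions of n i.i.d. uniform
    samples; order statistics predict E[x_i] = i / (n + 1)."""
    rng = random.Random(seed)
    sums = [0.0] * n
    for _ in range(trials):
        xs = sorted(rng.random() for _ in range(n))
        for i, x in enumerate(xs):
            sums[i] += x
    return [s / trials for s in sums]

means = mean_order_stats(4)
# Predicted positions for n = 4: 1/5, 2/5, 3/5, 4/5.
```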
To render the estimator unbiased, we introduce a per-sample correction coefficient $c_i$:

$\langle I \rangle = \sum_{i=1}^{n} c_i\, w_i\, f(x_i)$.   (4)

These factors have to be chosen carefully; for instance, $c_i = 1$ would reproduce the previously derived consistent but biased result. The correction coefficient should instead modify the expected contribution of each sample to equal $\frac{1}{n}$ of the integral. Following the weight derivation, for an unbiased estimator in 1D we define $c_1 = c_n = \frac{2(n+1)}{3n}$ and $c_i = \frac{n+1}{n}$ for all other samples. As most samples still share an identical correction factor, this stays close to the interpretation of the Voronoi cell volume. In higher dimensions, the definition is less straightforward.
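A 1D sketch of the resulting estimator, using boundary coefficients $\frac{2(n+1)}{3n}$ and inner coefficients $\frac{n+1}{n}$ as derived above (the function name is ours), together with an empirical unbiasedness check:

```python
import random

def unbiased_weighted_estimate(f, samples):
    """1D estimator with Voronoi weights and per-sample correction
    coefficients: boundary samples get c = 2(n+1)/(3n), inner samples
    c = (n+1)/n, so each sample's expected contribution is 1/n of the
    integral."""
    xs = sorted(samples)
    n = len(xs)
    total = 0.0
    for i, x in enumerate(xs):
        left = 0.0 if i == 0 else 0.5 * (xs[i - 1] + x)
        right = 1.0 if i == n - 1 else 0.5 * (x + xs[i + 1])
        w = right - left
        c = 2.0 * (n + 1) / (3.0 * n) if i in (0, n - 1) else (n + 1) / n
        total += c * w * f(x)
    return total

# Empirical check: average many estimates of the integral of x, which is 1/2.
rng = random.Random(7)
avg = sum(unbiased_weighted_estimate(lambda x: x,
                                     [rng.random() for _ in range(5)])
          for _ in range(20000)) / 20000
```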
$d$ Dimensions
To derive the correction coefficients from Eq. 4 in $d$ dimensions, we assume a set of $n = m^d$ samples in $[0,1]^d$ (with $m \geq 3$, i.e., intuitively, this results in at least one inner point and two boundary points along each dimension). We define the boundary order $b(x_i)$ of a sample as the number of boundaries of its Voronoi cell that are part of the domain boundary. For instance, in the above one-dimensional example, $b(x_1) = b(x_n) = 1$, and for all other sample points, we have $b(x_i) = 0$.
The cardinality of samples of order $k$ is defined as $N_k := |\{x_i : b(x_i) = k\}|$. For such a sample set of $n = m^d$ samples, the expected cardinality of samples of order $k$ is $E[N_k] = \binom{d}{k}\, 2^k (m-2)^{d-k}$. This formula is the $k$-th term in the binomial expansion of $((m-2) + 2)^d = m^d$. To understand this result, recall that the expected positions of all samples form a regular grid with a resolution of $m$ along each axis. Starting with one axis, we find $m$ samples, of which the two boundary samples have order one and all other samples are inner points of order zero. Repeating these samples $m$ times along a new dimension increments the order of the first and the last repeated set of samples, as these represent a new boundary along this dimension. For all other samples, the boundary order remains unchanged. This process can be repeated for all dimensions, which implies the binomial expansion.
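The expected cardinalities can be checked by brute force on the expected $m^d$ grid. This is a small illustrative script, not part of the method itself:

```python
from itertools import product

def boundary_order_counts(m, d):
    """Count grid points of each boundary order on an m^d grid: each
    coordinate lying on the first or last grid line adds one to the
    order.  Predicted count for order k: C(d, k) * 2^k * (m - 2)^(d - k)."""
    counts = [0] * (d + 1)
    for p in product(range(m), repeat=d):
        k = sum(1 for c in p if c in (0, m - 1))
        counts[k] += 1
    return counts

counts = boundary_order_counts(4, 2)
# Predicted for m = 4, d = 2: 2^2 = 4 inner, 2*2*2 = 8 edge, 4 corner points.
```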
To achieve an unbiased estimator, we first compute the expected Voronoi volume for a sample $x_i$ of boundary order $b$. For $d$ dimensions, we have boundary orders ranging from $0$ to $d$. As we are dealing with an i.i.d. uniform distribution, in each dimension we have $m-1$ intervals between samples and two intervals adjacent to the boundary, leading to a total of $m+1$ intervals. Therefore, we have:

$E[w_i] = \left(\frac{3}{2(m+1)}\right)^{b} \left(\frac{1}{m+1}\right)^{d-b} = \left(\frac{3}{2}\right)^{b} \frac{1}{(m+1)^d}$.

Again, for unbiasedness, we need $c_b\, E[w_i] = \frac{1}{n}$, i.e., each sample should contribute equally in expectation. The following definition of the correction coefficients fulfills this property:

$c_b = \left(\frac{2}{3}\right)^{b} \frac{(m+1)^d}{m^d}$,   (5)

because $c_b\, E[w_i] = \left(\frac{2}{3}\right)^{b} \frac{(m+1)^d}{m^d} \left(\frac{3}{2}\right)^{b} \frac{1}{(m+1)^d} = \frac{1}{m^d} = \frac{1}{n}$.
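Assuming the expected volumes derived above, the identity behind Eq. 5 can also be checked numerically (helper names are ours):

```python
def correction_coefficient(b, m, d):
    """Correction coefficient c_b = (2/3)^b * (m+1)^d / m^d for a sample
    of boundary order b among n = m^d i.i.d. uniform samples (Eq. 5)."""
    return (2.0 / 3.0) ** b * ((m + 1) ** d) / (m ** d)

def expected_volume(b, m, d):
    """Expected Voronoi volume of a sample of boundary order b:
    (3 / (2(m+1)))^b * (1 / (m+1))^(d-b)."""
    return (3.0 / (2.0 * (m + 1))) ** b * (1.0 / (m + 1)) ** (d - b)

# The product is 1/n = 1/m^d, independent of the boundary order b.
```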
Stratified Samples
The extension to stratified sampling is relatively straightforward, as each stratum is considered an independent unit. This means that the function is integrated independently within each stratum, and its whole range is a composition of these units. In consequence, the boundary observation now applies to the boundary of each stratum. For a sample set of size $n$ generated with $s$ strata, each stratum is expected to contain $n_s = n/s$ samples. Let $m_s = n_s^{1/d}$ and $b$ denote the boundary order with respect to the stratum boundary; the integration problem for a stratum with $n_s$ samples then implies the correction coefficients

$c_b = \left(\frac{2}{3}\right)^{b} \frac{(m_s+1)^d}{m_s^d}$.   (6)
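For $d = 1$, the stratified scheme can be sketched as follows; each stratum applies the 1D coefficients within its own boundaries. This is a simplified illustration (names are ours) under the assumption of equal-size strata with a fixed number of samples each:

```python
import random

def stratified_weighted_estimate(f, s, k, rng):
    """1D sketch: s equal strata with k i.i.d. uniform samples each; the
    1D correction coefficients are applied per stratum, with the stratum
    boundaries taking the role of the domain boundaries."""
    total = 0.0
    for j in range(s):
        lo, hi = j / s, (j + 1) / s
        xs = sorted(lo + (hi - lo) * rng.random() for _ in range(k))
        for i, x in enumerate(xs):
            left = lo if i == 0 else 0.5 * (xs[i - 1] + x)
            right = hi if i == k - 1 else 0.5 * (x + xs[i + 1])
            c = 2.0 * (k + 1) / (3.0 * k) if i in (0, k - 1) else (k + 1) / k
            total += c * (right - left) * f(x)
    return total

# Empirical check: average many estimates of the integral of x, which is 1/2.
rng = random.Random(3)
avg = sum(stratified_weighted_estimate(lambda x: x, 4, 4, rng)
          for _ in range(5000)) / 5000
```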
5. Results
In this section, we first evaluate the numerical performance of our scheme (Sec. 5.1) and then show its application to a few rendering scenarios (Sec. 5.2). In all tests, we compare four different estimators:

- Standard MC (i.i.d. uniform sampling)
- Our weighted standard MC (i.i.d. uniform sampling)
- Stratified MC (stratified sampling)
- Our weighted stratified MC (stratified sampling)
5.1. Numerical Performance
The numerical performance is tested with two examples: a 1D MC integration and a 2D MC integration, both plotted in Figure 2. For the 1D case, we use an analytic test function; for the 2D case, we take the Lena image (Munson, 1996). The functions were chosen to include discontinuities, large-scale variations and small-scale changes, and they exhibit behavior representative of the various tests that we performed. Generally, the MSE drops as more samples are added (Column ). Our solutions outperform standard uniform sampling and even stratified sampling by several orders of magnitude, converging roughly 1000 and 100 times faster, respectively, in 1D, and 100 and 10 times faster, respectively, in 2D.
For the case of stratified sampling, we illustrate different numbers of strata for the same sample count (Column ). Our weighting scheme makes this parameter less important, as it achieves a better function approximation.
We next investigate the impact of distributing samples into batches, for which we estimate the integral separately per batch before deriving the overall estimate by averaging, as would typically be done in distributed computations. First, we fix the total number of samples to K (Column ). Notice that the performance of standard uniform sampling remains invariant with respect to the number of samples per batch, as it is already an averaging process. Our solution results in a better approximation with more samples per batch, as it then approximates the function more faithfully, as expected. Similarly, stratified sampling also benefits from more samples per batch, but shows slower convergence.
We also investigate the effect of using different batch sizes for uniform (Column ) and stratified sampling (Column ). More batches imply a higher overall sample count, and all methods improve with additional batches. In all cases, the graphs stop after reaching K samples. Our solution performs best, and the graphs also illustrate its convergence over several batches, owing to its unbiasedness.
5.2. Application to Rendering
We implemented our method in Mitsuba (Jakob, 2010), targeting one- and two-dimensional integration problems, namely motion blur (Sec. 5.2.1), dispersion (Sec. 5.2.2), depth of field (Sec. 5.2.3) and illumination integrals (Sec. 5.2.4). We evaluate MSE and visual appearance, as well as convergence behavior. In all implementations, our reweighting operates at a per-pixel level. We apply our method at the level of primary samples; all applicable local importance sampling techniques are thus still utilized throughout the pipeline.
5.2.1. Motion Blur
To simulate motion blur, distribution rendering samples the time domain: for a pixel $p$, the luminance is given by:

$L(p) = \int_{t_0}^{t_1} a(t)\, L(p, t)\, dt$,

with $t_0$ and $t_1$ being the shutter opening and closing times and $a(t)$ incorporating the shutter function. Since time is 1-dimensional, building a Voronoi partition simply means sorting the samples and measuring the distances between them. We tested our implementation in two animated scenes (Fig. 3 and 4).
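Per pixel, the reweighting of shutter-time samples could then look as follows. This is a sketch; `time_sample_weights` is our illustrative name, the coefficients are the 1D ones from Sec. 4.2, and the weights are normalized by the shutter interval:

```python
def time_sample_weights(times, t0, t1):
    """Voronoi weights for 1D shutter-time samples in [t0, t1], combined
    with the 1D correction coefficients; these replace the uniform 1/n
    weights.  Returns (time, weight) pairs sorted by time."""
    ts = sorted(times)
    n = len(ts)
    out = []
    for i, t in enumerate(ts):
        left = t0 if i == 0 else 0.5 * (ts[i - 1] + t)
        right = t1 if i == n - 1 else 0.5 * (t + ts[i + 1])
        c = 2.0 * (n + 1) / (3.0 * n) if i in (0, n - 1) else (n + 1) / n
        out.append((t, c * (right - left) / (t1 - t0)))
    return out
```

The pixel luminance is then the weighted sum of the per-time radiance evaluations instead of their plain average.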
5.2.2. Spectral Rendering
Light dispersion can occur at reflective or refractive dielectric materials, leading to effects such as rainbows that result from different wavelengths travelling in different directions. Spectral sampling simulates multiple wavelengths in order to capture such effects. To reduce the complexity of the additional spectral dimension (Bergner et al., 2009), hero wavelength spectral sampling (Wilkie et al., 2014) can be used.
Our implementation of spectral sampling uses bin wavelengths. Hero wavelength sampling is used with shifted additional wavelength samples (Wilkie et al., 2014). We tested our method on two scenes configured with dispersive dielectric materials (Fig. 5 and 6).
As the results show, our method reduces colour noise significantly, and dispersive regions look much smoother at low sampling rates.
5.2.3. Defocus Blur
A camera with an aperture leads to defocus blur/depth-of-field effects. The aperture is usually modelled as a 2D shape, e.g., a square, a circle, or a star, which is sampled to determine the origin of each primary sample ray; the ray then passes through the position on the focal plane corresponding to the current pixel. For a lens aperture $A$, we obtain:

$L(p) = \int_{A} L(p, u)\, du$.

To determine our weights, we use a 2D Voronoi diagram based on the aperture samples. We tested a simple glossy sphere illuminated by an environment map (Fig. 7).
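A 2D cell area can be computed by clipping the domain against one perpendicular bisector per other sample, since the cells are convex intersections of half-planes. The following self-contained sketch (names are ours, square domain assumed, O(n^2) cost per cell) illustrates this:

```python
def clip_halfplane(poly, a, b, c):
    """Sutherland-Hodgman clip of a convex polygon against the
    half-plane a*x + b*y <= c."""
    out = []
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        d1, d2 = a * x1 + b * y1 - c, a * x2 + b * y2 - c
        if d1 <= 0:
            out.append((x1, y1))
        if (d1 < 0) != (d2 < 0) and d1 != d2:
            t = d1 / (d1 - d2)
            out.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return out

def voronoi_areas_2d(samples):
    """Areas of the Voronoi cells of `samples` clipped to the unit
    square, obtained by clipping the square against the perpendicular
    bisector towards every other sample."""
    areas = []
    for i, (px, py) in enumerate(samples):
        poly = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
        for j, (qx, qy) in enumerate(samples):
            if j == i:
                continue
            # Points closer to p than q satisfy (q-p).x <= (|q|^2-|p|^2)/2.
            a, b = qx - px, qy - py
            c = 0.5 * (qx * qx + qy * qy - px * px - py * py)
            poly = clip_halfplane(poly, a, b, c)
        # Shoelace formula for the area of the remaining convex polygon.
        area = 0.0
        for k in range(len(poly)):
            (x1, y1), (x2, y2) = poly[k], poly[(k + 1) % len(poly)]
            area += x1 * y2 - x2 * y1
        areas.append(0.5 * abs(area))
    return areas
```

For a production renderer, an incremental Voronoi construction would be preferable, but the clipping view matches the half-space characterization used in Sec. 4.1.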
5.2.4. Direct Illumination
Leaving out irrelevant terms, the luminance at a scattering point $x$ with one bounce is given by:

$L(x) = \int_{\mathcal{L}} f_r(x, \omega_i, \omega_o)\, L_e(\omega_i)\, \cos\theta_i\, d\omega_i$,

where $L_e$ denotes light emission and $\mathcal{L}$ denotes all light sources. In this application, we use light sampling instead of random rays to ensure that the light sources are always sampled. Our unbiased reweighting achieves the best convergence and, as shown in the insets, also the smoothest results (Fig. 8).
Observation
In all cases, our solution leads to smoother visual results and fewer dark holes in the falloff regions. From the MSE plots, we can see that standard MC with uniform sampling performs worst, while our weighted stratified sampling generally performs best. Our method improves both uniform and stratified sampling. Notably, even with uniform sampling as input, our weighted uniform sampling not only improves over the unweighted version but performs on par with our weighted stratified sampling. Precomputing the sample weights keeps the computational overhead negligible.
6. Conclusion
The reweighting scheme in this paper enables a better approximation than standard MC weights. Our solution is general and does not require any prior knowledge about the integrated function. Implicitly, our method approximates this function via a reconstruction from the samples, but does not introduce a bias in the resulting estimator. We showed its practical benefit for various rendering problems.
While we focus on primary samples that are either i.i.d. uniform or stratified in this work, our method can also handle non-uniform sample sets following a distribution $p$. The expected position of the $i$-th sample is then $P^{-1}\!\left(\frac{i}{n+1}\right)$, where $P$ is the cumulative distribution function of $p$. Unfortunately, this makes it necessary to integrate the distribution function. Approximate schemes remain an area of future work. Similarly, using the method in higher dimensions requires the computation of cell volumes in high-dimensional Voronoi diagrams, which can be costly. One could precompute these weights, but we leave such accelerations as future work. Finally, it is an exciting opportunity to exploit the generality of our solution to improve other integration problems.
References
Aurenhammer, F. Voronoi diagrams—a survey of a fundamental geometric data structure. ACM Computing Surveys 23(3), pp. 345–405.
Bergner, S., et al. A tool to create illuminant and reflectance spectra for light-driven graphics and visualization. ACM Transactions on Graphics 28(1), pp. 5.
Cook, R. L., Porter, T., and Carpenter, L. Distributed ray tracing. ACM SIGGRAPH Computer Graphics 18, pp. 137–145.
David, H. A. and Nagaraja, H. N. Order statistics. 3rd edition, Wiley.
De Berg, M., et al. Computational geometry. pp. 1–17.
Debevec, P. Rendering synthetic objects into real scenes: bridging traditional and image-based graphics with global illumination and high dynamic range photography. ACM SIGGRAPH 2008 classes, pp. 32.
Dutré, P., Bala, K., and Bekaert, P. Advanced global illumination. AK Peters/CRC Press.
Egan, K., et al. Frequency analysis and sheared filtering for shadow light fields of complex occluders. ACM Transactions on Graphics 30(2), pp. 9.
Egan, K., et al. Frequency analysis and sheared reconstruction for rendering motion blur. ACM Transactions on Graphics 28, pp. 93.
Guo, J., et al. Primary sample space path guiding. Eurographics Symposium on Rendering - Experimental Ideas and Implementations.
Hachisuka, T., et al. Multidimensional adaptive sampling and reconstruction for ray tracing. ACM Transactions on Graphics 27, pp. 33.
Jakob, W. Mitsuba renderer. http://www.mitsuba-renderer.org
Kajiya, J. T. The rendering equation. ACM SIGGRAPH Computer Graphics 20, pp. 143–150.
Lafortune, E. P. F., et al. Non-linear approximation of reflectance functions. Proceedings of SIGGRAPH '97, pp. 117–126.
Lehtinen, J., et al. Temporal light field reconstruction for rendering distribution effects. ACM Transactions on Graphics 30, pp. 55.
Marques, R., et al. Optimal sample weights for hemispherical integral quadratures. Computer Graphics Forum.
Mitchell, D. P. The antialiasing problem in ray tracing. Advanced Topics in Ray Tracing, SIGGRAPH 1990 Course Notes.
Munson, D. C. A note on Lena. IEEE Transactions on Image Processing 5(1), pp. 3.
Shirley, P. Physically based lighting calculations for computer graphics. Ph.D. thesis, University of Illinois at Urbana-Champaign.
Soler, C., et al. Fourier depth of field. ACM Transactions on Graphics 28(2), pp. 18.
Veach, E. and Guibas, L. J. Optimally combining sampling techniques for Monte Carlo rendering. Proceedings of SIGGRAPH '95, pp. 419–428.
Veach, E. and Guibas, L. J. Metropolis light transport. Proceedings of SIGGRAPH '97, pp. 65–76.
Vořechovský, M., et al. Application of Voronoi weights in Monte Carlo integration with a given sampling plan. Proceedings of REC2016, pp. 441–452.
Ward, G. J. Measuring and modeling anisotropic reflection. ACM SIGGRAPH Computer Graphics 26, pp. 265–272.
Wilkie, A., et al. Hero wavelength spectral sampling. Computer Graphics Forum 33, pp. 123–131.
Zirr, T., et al. Reweighting firefly samples for improved finite-sample Monte Carlo estimates. Computer Graphics Forum 37, pp. 410–421.