
View-Independent Adjoint Light Tracing for Lighting Design Optimization

Published: 22 May 2024

Abstract

Differentiable rendering methods promise the ability to optimize various parameters of three-dimensional (3D) scenes to achieve a desired result. However, lighting design has so far received little attention in this field. In this article, we introduce a method that enables continuous optimization of the arrangement of luminaires in a 3D scene via differentiable light tracing. Our experiments reveal two major issues when attempting to apply existing differentiable path-tracing methods to this problem: First, many rendering methods produce images, restricting a designer's lighting objectives to image space. Second, most previous methods are designed for scene geometry or material optimization and have not been extensively tested for the case of optimizing light sources. In our experience, currently available differentiable ray-tracing methods do not provide satisfactory performance, even on fairly basic test cases. We therefore propose, to the best of our knowledge, a novel adjoint light tracing method that overcomes these challenges and enables gradient-based lighting design optimization in a view-independent (camera-free) way. Thus, we allow the user to paint illumination targets directly onto the 3D scene or use existing baked illumination data (e.g., light maps). Using modern ray-tracing hardware, we achieve interactive performance. We find light tracing advantageous over path tracing in this setting, as it naturally handles irregular geometry, resulting in less noise and improved optimization convergence. We compare our adjoint gradients to state-of-the-art image-based differentiable rendering methods. We also demonstrate that our gradient data works with various common optimization algorithms, providing good convergence behaviour. Qualitative comparisons with real-world scenes underline the practical applicability of our method.

1 Introduction

Lighting plays an often overlooked but central role in our daily lives. It contributes to making a home feel relaxing and a workplace efficient, and it is subject to important requirements in professional environments such as office spaces, hospitals, and factories. While artistic applications usually require manual luminaire placement, simple gridlike patterns are widespread in office spaces. Both scenarios can benefit from automated optimization, reducing the workload on artists or producing more comfortable office lighting. Widely used commercial lighting design tools, such as DIALux [2022] and Relux [2022], are limited to simple light interactions and require manual editing of luminaires to achieve the desired result.
Physically based primal rendering, i.e., simulating light transport in a 3D scene according to the rendering equation, has a long-standing history in computer graphics. Conversely, inverse rendering methods aim to match a scene’s appearance to one or more target images. These existing view-dependent methods are technically capable of optimizing light sources in an image-based framework but have practical limitations: As indicated by our comparisons, their convergence behaviour and runtime performance are insufficient for interactive lighting optimization. Furthermore, they quickly become problematic for complex scenes, as many cameras are required to capture all areas of interest. This approach would also raise the questions of how many cameras to place, where to place them, and how to define their illumination targets. Consequently, we avoid cameras in favour of a view-independent approach.
Currently, no solution for view-independent (i.e., camera-free) differentiable rendering exists. Moving from camera-based inverse renderers (Mitsuba, PSDR) to view-independent optimization requires a spatio-directional data structure that can be updated and edited quickly. To achieve interactive performance, we also require an efficient and robust gradient formulation that can be evaluated on the GPU. In this article, we propose an interactive and fully view-independent inverse rendering framework, building upon an analytical adjoint formulation that is, to the best of our knowledge, novel. The absence of a predefined camera view, together with optimization parameters that are associated with light sources, motivates a light-tracing global-illumination method instead of the more common path-tracing approaches.
Automatic differentiation (AD) has been widely used in differentiable rendering. However, our tests show that applying AD directly to light tracing suffers from systematic errors that prevent optimization convergence. Zhang et al. [2020] describe an AD path-space formulation, which is, in principle, also applicable to paths constructed via light tracing. In contrast, we present an analytically differentiable adjoint light-tracing method, which enables a GPU-accelerated implementation.
Overall, we present the following contributions in this article:
- an efficient adjoint gradient formulation for differentiable light tracing, which outperforms comparable path-tracing approaches, and
- an efficiently updatable view-independent radiance data structure.
This combination allows us to solve lighting design optimization tasks while taking global illumination into account. We also present a novel visualization of pointwise adjoint gradient contributions.

2 Related Work

Here, we briefly summarize relevant work on global-illumination rendering, before discussing recent advances in differentiable rendering. Furthermore, we introduce approaches that have investigated the lighting design problem from other directions, such as procedural modelling.

2.1 Primal Rendering

In the following, we classify related work into four categories, as illustrated in the inset image. Path tracing, one of the most fundamental methods for physically based rendering, applies Monte Carlo integration to solve the rendering equation [Kajiya 1986; Veach 1998]. Similar methods have been extensively used to render physically realistic images (category a). In contrast, pre-computed light-transport methods, such as radiosity [Goral et al. 1984; Greenberg et al. 1986], focus on solving global illumination for all surfaces in a 3D scene (category b) rather than for a view-dependent image. More recently, these radiosity methods have been extended to incorporate glossy surfaces [Hašan et al. 2009; Krivanek et al. 2005; Omidvar et al. 2015; Sillion et al. 1991; Sloan et al. 2002] and also enable fast incremental updates allowing for interactive frame rates [Luksch et al. 2019]. Similarly, photon mapping [Jensen 1995] is a two-pass process, where illumination data are first cast into the 3D scene by tracing light paths, while the second pass gathers these data to compute the final image. We take inspiration from these latter approaches, constructing a spatio-directional radiance data structure to enable view-independent differentiable rendering (category d). The key ideas of our adjoint light-tracing approach would in principle also be applicable to other pre-computed light-transport methods and radiance-field data structures, such as the recent work by Yu et al. [2022], as well as photon tracing, virtual point light, or light probe approaches.

2.2 Differentiable Rendering

Previous work in (physically based) inverse rendering has, so far, focused on camera-based methods (category c), where the goal is to optimize scene parameters (e.g., materials, textures, or geometry) [Gkioulekas et al. 2016, 2013; Khungurn et al. 2015; Li et al. 2018; Liu et al. 2018; Nimier-David et al. 2020, 2019; Zeltner et al. 2021; Zhang et al. 2019]. These methods define an optimization objective on the difference between a rendered and a target image. A strong focus has been on optimizing textures, while only a few methods have considered optimizing light sources, which affect the result much more globally throughout the scene. Nimier-David et al. [2021] describe a texture-space sampling strategy to improve efficiency for multi-camera setups, while our approach removes cameras altogether.
Pioneering works in differentiable path tracing [Loubet et al. 2019; Nimier-David et al. 2019] have relied on code-level AD, recording a large computation graph, which is then traversed in reverse to compute the objective function gradient. However, AD fails for standard light tracing (Algorithm 1; also known as particle tracing), because each ray carries a constant amount of radiative flux (power), independent of its direction or distance travelled. Consequently, parameters that affect these quantities (but not the per-ray flux) cannot be correctly differentiated with AD. The PSDR approach [Zhang et al. 2020] sidesteps this issue by evaluating radiative transport in a path-space formulation before differentiation. One of our key insights is that we can take inspiration from their “material-form differential path integral” and construct a differentiation of light tracing itself for lighting parameters. Instead of using automatic differentiation, we formulate an analytical adjoint state per ray, which allows for an efficient GPU-based implementation.
The basic idea of adjoint methods is to reverse the flow of information through a simulation to find objective function gradients with respect to optimization parameters [Bradley 2019; Geilinger et al. 2020; Stam 2020]. For path tracing, Mitsuba 3 [Jakob et al. 2022] provides a similar adjoint method, called “radiative backpropagation” [Nimier-David et al. 2020], which avoids the prohibitive memory cost of AD. They extend this approach to the reparameterized “path-replay backpropagation” integrator [Vicini et al. 2021].
In our view-independent case, we find that differentiable light tracing often outperforms path tracing in terms of optimization convergence behaviour (Figure 2). Instead of transferring one radiance sample per path from a light source to a sensor, we update our radiance data at each ray-surface intersection and accordingly collect objective function derivatives from each of these locations in our adjoint tracing step, thereby using samples more efficiently. To our knowledge, we present the first view-independent, analytically differentiable light-tracing inverse rendering method. For comparisons to related work, we have implemented a reference differentiable path tracer working on our view-independent data structure. We also compare to existing image-based methods in a baseline test scenario by restricting our objective function to consider only surfaces that are visible to their camera. In this way, we separate the performance of the differentiation method from the benefits of operating on a larger global target.
Fig. 1. Optimizing lighting design parameters for a large office scene with our view-independent differentiable light-tracing method: starting from an initial lighting setup (a) and a given ground-truth illumination target on the scene geometry, we perform gradient-based optimization, which closely recovers the ground truth in our solution (c). The close-up views (b) show the difference to the target before and after optimization (left and right columns, respectively).
Fig. 2. Comparing light tracing to path tracing on a large scene using our radiance data structure: While path tracing (a) would need more advanced sampling strategies to deal with a large non-uniform mesh, light tracing (b) naturally focuses samples on brighter areas, leading to a less noisy result and therefore improved optimization convergence (c) at equal runtime.
One problem that has received a lot of attention recently is how to differentiate pixel values in the presence of discontinuous integrands due to silhouette edges or hard shadows. Various strategies to resolve these issues have been proposed, such as reparametrizing the discontinuous integrands [Loubet et al. 2019], repositioning the samples around discontinuous edges [Bangaru et al. 2020; Zeltner et al. 2021], or separating the affected integrals into interior and boundary terms and applying separate Monte Carlo estimators to each part [Yan et al. 2022; Zhang et al. 2020]. The main focus of these methods is to compute how a pixel value changes as a sharp edge or hard shadow moves within this pixel. Our light-tracing gradient formulation keeps ray-surface intersection points constant while moving light sources. Therefore, rays cannot move across feature edges when their source is perturbed, although lights could still move behind an occluding object. The resulting moving shadows are currently not explicitly handled in our formulation; we leave them for future work. Nevertheless, we achieve good optimization convergence using our gradients. While previous work often deals with situations where optimization parameters only affect a very small number of pixels, in our case, lighting parameters affect many receiving 3D surfaces simultaneously, thereby diminishing the relevance of shadows moving through a few surface elements; see also Section 6.4. Our comparisons show that Mitsuba 3’s reparametrization method and PSDR’s edge sampling resolve discontinuities but introduce noise, leading to reduced optimization convergence rates compared to their standard versions. Using our spatio-directional data structure could in principle allow for extending their code bases to the view-independent case, potentially including advanced discontinuity handling. For the specific case of lighting design optimization, however, our adjoint formulation is more efficient than these methods (even before considering the overhead of discontinuity handling), as shown in Figures 11–13.
Fig. 3. Illustration of our adjoint light-tracing optimization. See also Figure 5 for further details on the primal and adjoint dataflow.
Fig. 4. Iterative refinement of the scene shown in Figure 1. First, desired changes are sketched on the 3D geometry (left); the optimization then adjusts the nearest light’s position accordingly (right).
Fig. 5. Illustration of our primal and adjoint light tracing approach. In the primal pass, we trace radiant flux into the scene, while in the adjoint pass we collect derivatives of the objective function along the light path.
Fig. 6. Validation of our gradient formulation using finite difference approximations at various step sizes (for the light’s 3D position in Figure 14(a)). Our gradient closely matches FD approximations at adequate step sizes, and we observe the expected tradeoff between floating-point errors (too-small FD steps) and approximation errors (too-large FD steps). The naïve version, however, exhibits large systematic errors across a wide range of step sizes.
Fig. 7. Optimizing the position of a spot light so as to find a tradeoff between uniformity and brightness on a Lambertian plane. The naïve gradient calculation (red dashes) fails to place the light at the correct distance from the plane. The inset illustrates how interpolation basis function gradients in the naïve approach push each ray toward the centroid of the intersected triangle to increase brightness equally at all corners. Our method (blue line), in contrast, provides improved gradient information, enabling the optimizer to find the best distance.
Fig. 8. Visualizing gradients on the Stanford Dragon: The top row shows the initial (a) and intended (b) lighting, as well as their difference (c). The second row shows gradient contributions for the x coordinate of a point light using our adjoint approach (d), direct differentiation (e), and a finite difference approximation (f). Image subtitles indicate the calculation; BRDF derivatives are taken into account but not explicitly stated here for brevity.
Fig. 9. Recovering the ground-truth position of a point light illuminating a glossy coin. Initial lighting (a), the target (b), and our result (c); both ADAM and L-BFGS reliably converge to the correct result (d). The insets show differences to the target. Top row: front view; second row: side view. The top-right parts in (a), (b), and (c) show our data structure, while the bottom-left shows a high-quality rendering of the same lighting.
Fig. 10. Optimizing indirect illumination: a spot light placed in a simple labyrinth (a) should illuminate the target (overlaid on the far left wall), which is only reached by indirect light. The ADAM optimizer simultaneously rotates and moves the light source around the corners (b) to find the “exit” (c).
Fig. 11. Baseline test case: moving an area light to a ground-truth position (inset images), showing the convergence of our method, PSDR, and Mitsuba 3 in terms of parameter-space distance. All methods run ADAM with step size \(\alpha = 0.25\), four indirect illumination bounces, and 1 million samples. On this simple scene, restricting our objective function to visible surfaces (‘restricted’) results in the same convergence rate as using the entire radiance data (‘full’). Note that Mitsuba’s ‘prb_reparam’ method eventually converges (around 150 iterations), whereas their ‘ptracer’ does not. The graph on the right shows the total runtime for 100 iterations: Our light tracer is ca. twice as fast as our reference path tracer, while comparing the three Mitsuba integrators shows the runtime impact of discontinuity handling.
Fig. 12. Gradient images wrt. the x coordinate of the area light for the initial state of Figure 11 (primal rendering of this state and the target inset there); colour range: \(-0.05\) (blue) to \(+0.05\) (red). Each image uses 16 million primary samples; additional samples for discontinuity handling are stated per image as applicable. Note that Mitsuba 3’s particle tracer fails to produce correct results. PSDR’s path-space approach works, but light tracing causes more noise than their path tracer. Our gradient calculations ignore the moving shadow (bottom left corner) but produce comparable results to Mitsuba’s standard path tracer. In particular, our adjoint light tracing produces equally clean results, while other light tracers are noticeably noisier.
Fig. 13. Runtime (top) and convergence (middle) comparisons of our method with Mitsuba 3 on the same scene as shown in Figure 1, modified with four cameras and four light sources. We show results for a restricted version of our method consisting only of camera-visible data (bottom, left), as well as a full version. Four different integrators of Mitsuba 3 were used, and all Mitsuba 3 tests were done with 4 million samples, while for our algorithm we chose 1, 4, and 16 million samples. The error is the Euclidean distance between current and target light positions and intensities in parameter space. Note that our full version with just 1 million samples performs similarly to our restricted version using 4 times the amount of samples. Bottom row: We restrict our objective by weighting the contribution of vertices (left; white: \(\alpha _k = 1\), black: \(\alpha _k = 0\)) based on their visibility to the cameras used by Mitsuba. The camera views used when optimizing with Mitsuba are shown on the right.

2.3 Lighting Design

The problem of designing lighting configurations has also been previously explored in computer graphics and tackled from different perspectives. Many approaches use variations of the sketching metaphor, such as sketching the shape of shadows to indirectly move light sources [Poulin et al. 1997] or painting highlights and dark spots to control light parameters in the scene [Pellacini et al. 2007; Shesh and Chen 2007]. Our view-independent differentiable rendering framework brings the sketching metaphor to 3D, providing a more intuitive way to paint the desired illumination directly into the scene. Therefore, the user-painted objective is guaranteed to be free of contradictions that might otherwise occur when editing images from different views corresponding to the same 3D location.
Procedural or hierarchical optimization approaches to luminaire placement have also been explored [Gkaravelis 2016; Jin and Lee 2019; Lin et al. 2013; Schwarz and Wonka 2014]. As we focus on continuous optimization, these methods could be used to estimate a starting configuration before further optimizing the parameters of the generated light sources.
More recent approaches include the user in the optimization process [Sorger et al. 2016; Walch et al. 2019] directly, instead of automatically operating on a predefined target. These methods interactively display the current illumination and provide information about where and how this configuration could be improved. The user is then free to choose which measures to take to bring the scene closer to the desired state. In the future, our approach could be integrated into such a user-centred editing framework to improve the design interaction between the user and the optimization system.

3 Problem Statement

We start with the well-known rendering equation [Kajiya 1986], which models the exitant radiance \(L({\mathbf {x}},{\omega _o})\) from a point \(\mathbf {x}\) on a surface in the outward direction \(\omega _o\) as
\begin{equation} L(\mathbf{x},\omega_o) = L_e(\mathbf{x},\omega_o) + \int_{\mathcal{H}^2} f(\mathbf{x},\omega_i,\omega_o)\,(\mathbf{n}\cdot\omega_i)\,L_i(\mathbf{x},\omega_i)\;d\omega_i. \tag{1} \end{equation}
The bidirectional reflectance distribution function (BRDF), \({f}\), encodes the material properties, and the incident radiance \(L_i({\mathbf {x}},{\omega _i})\) at \(\mathbf {x}\) is related to the radiance exitant from another point \(\mathbf {x}^{\prime }\) by \({L_i}({\mathbf {x}},{\omega _i}) = L({\mathbf {x}^{\prime }}, {-\omega _i})\), such that \(\mathbf {x}^{\prime }\) is the nearest ray-surface intersection when tracing a ray from \(\mathbf {x}\) in direction \(\omega _i\). Here, we focus on reflective light transport for brevity, but in principle, transmissive surfaces could be included in mostly the same way. Consequently, we restrict the directional integration to the hemisphere \({\mathcal {H}^2}\) instead of the entire (unit) sphere \({\mathcal {S}^2}\). We also do not consider any volumetric light-scattering effects.
We then minimize an objective function that measures the quality of a given lighting configuration. More formally, for this inverse rendering application, we consider objective functions O of the form
\begin{equation} O(L) = \frac{1}{2} \int_\Omega \frac{1}{2\pi} \int_{\mathcal{H}^2} \alpha(\mathbf{x},\omega_o)\left(L(\mathbf{x},\omega_o) - L^*(\mathbf{x},\omega_o)\right)^2 d\omega_o\, d\mathbf{x}, \tag{2} \end{equation}
where \(L^*\) is our illumination target, \(\alpha\) is a user-defined weighting function, and \(L({\mathbf {x}},{\omega _o})\) must satisfy the rendering equation. As the emitted radiance \(L_e(\boldsymbol {\mathbf {p}})\), due to a given set of light sources, depends on optimization parameters \(\boldsymbol {\mathbf {p}}\), we look for local minima \(\boldsymbol {\mathbf {p}}^*\),
\begin{equation} \mathbf{p}^* = \arg\min_{\mathbf{p}} O(L(\mathbf{p})), \quad \text{s.t. } L \text{ satisfies Equation (1).} \tag{3} \end{equation}
To address this problem, we compute an approximate solution, \(L({\mathbf {x}},{\omega _o})\), on the surfaces of a virtual 3D scene in a primal rendering pass (Section 4.2). We then derive an adjoint (or backpropagation) rendering pass (Section 4.3) that allows us to compute derivatives of the optimization objective with respect to the parameters, \(d O / d \boldsymbol {\mathbf {p}}\). Consequently, we are able to apply gradient-based continuous optimization methods to the lighting design problem (Figure 3).
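To make the structure of this optimization concrete, the following minimal sketch shows how the primal and adjoint passes described in Section 4 plug into a gradient-based optimizer. The callables `primal_pass` and `adjoint_pass` are hypothetical placeholders for the renderer, not the paper's actual (GPU-based) implementation:

```python
import numpy as np

def optimize_lighting(p0, primal_pass, adjoint_pass, step=0.1, iters=100):
    """Gradient-descent skeleton for Equation (3).

    primal_pass(p)      -> radiance coefficients L (Section 4.2)
    adjoint_pass(p, L)  -> objective gradient dO/dp (Section 4.3)
    Both are placeholders for the light-tracing passes of Section 4.
    """
    p = np.asarray(p0, dtype=np.float64)
    for _ in range(iters):
        L = primal_pass(p)         # approximately solve Equation (1)
        grad = adjoint_pass(p, L)  # dO/dp via adjoint light tracing
        p = p - step * grad        # plain gradient descent update
    return p
```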

4 View-independent Adjoint Light Tracing

In the following sections, we first describe how we discretize the radiance data and optimization target (Section 4.1), how we update this data while light tracing (Section 4.2), and, finally, how we compute the objective gradient through adjoint light tracing (Section 4.3).

4.1 Spatio-directional Data Structure

To represent the spatio-directional radiance field L with a finite number of variables (or degrees of freedom), we first construct an appropriate interpolation scheme. We then use this scheme to find approximate solutions to Equation (1).
For the spatial component, we choose a piecewise linear interpolation, as often used in finite-element (and specifically radiosity) methods [Greenberg et al. 1986; Larson and Bengzon 2013],
\begin{equation} L(\mathbf{x},\omega_o) \approx \sum_k L_k(\omega_o)\,\varphi_k(\mathbf{x}). \tag{4} \end{equation}
Instead of single nodal values, however, we consider \(L_k\) to be a directional function associated with the kth mesh vertex and its nodal basis function \(\varphi _k\).
For the directional component, we discretize each per-vertex function \(L_k(\omega _o)\) using a hemi-spherical harmonic (HSH) basis [Gautron et al. 2004; Green 2003; Krivanek et al. 2005; Sillion et al. 1991], specifically, the \(2\pi\)-normalized hemi-spherical harmonics \({H_m^l}\) [Wieczorek and Meschede 2018]. Each directional function is consequently represented as
\begin{equation} L_k(\omega_o) \approx \sum_{l=0}^{n} \sum_{m=-l}^{l} L_{klm}\, H_m^l(\omega_o), \tag{5} \end{equation}
where n is the maximal order used in this approximation, which implies that for each vertex k, we require \((n+1)^2\) directional coefficients. Consequently, our spatio-directional approximation (using three colour channels, RGB) has \(3 m (n+1)^2\) degrees of freedom, where m is the number of mesh vertices.
For completeness, we summarize the construction of \({H_m^l}\) in the supplementary material. Furthermore, for the special case \(n=0\), which is sufficient for diffuse surfaces, we effectively ignore the directional component and simply store vertex colours. Note that even if we choose \(n=0\), our light tracing method may still include glossy materials when simulating indirect lighting.
Our primary goal is to solve inverse problems based on this discretization, whereas we produce high-quality output images via standard rendering methods. Therefore, we favour the simplicity of this discretization over alternatives such as texturelike bilinear interpolation or meshless bases [Lehtinen et al. 2008]. In contrast to previous work, we store exitant rather than incident radiance. We also define our optimization objective on the exitant radiance, which can be intuitively painted by the user (Figure 4). Discretizing the target radiance field \(L^*(\boldsymbol {\mathbf {x}},\omega _o)\) using the projection derived in the supplement, yields target coefficients \({L_{klm}^*}\) and subsequently a discrete analog of the objective function, Equation (2),
\begin{equation} O(L) = \frac{1}{2} \sum_{klm} A_k\,\alpha_k\,\alpha_l \left(L_{klm} - L_{klm}^*\right)^2, \tag{6} \end{equation}
where \(A_k\) is the area associated with vertex k, i.e., one third of the sum of adjacent triangle areas. Here, we use a shorthand notation for the nested summation over vertices and HSH coefficients according to Equations (4) and (5). We choose to split the weights into a spatial and a directional component, \(\alpha _k\) and \(\alpha _l\), respectively. Thus, the weights \(\alpha _l\) adjust the influence of each HSH band: Over-weighting lower bands, for example, would emphasize low-frequency components of the reflected radiance. Similarly, \(\alpha _k\) allows us to focus the attention of the optimizer on specific parts of the scene geometry. We use a painting interface to allow the user to specify \(L_{klm}^*\) as well as \(\alpha _k\). The HSH coefficients can be efficiently painted using a “directional” brush combined with an angular blur kernel, which is easy to compute, because a closed-form solution to Laplacian smoothing exists in HSH space.
For differentiable rendering, note that the derivative of Equation (6), \({\partial O / \partial L_{klm}}\) is straightforward to evaluate, which we use in Section 4.3.
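As a concrete reference for this discretization, the following NumPy sketch evaluates Equation (6) and its derivative \(\partial O / \partial L_{klm}\). Array shapes and names are our own illustrative choices, not the authors' API:

```python
import numpy as np

def objective_and_grad(L, L_target, A, alpha_k, alpha_l):
    """Discrete objective, Equation (6), and its derivative dO/dL_klm.

    L, L_target : (m, (n+1)**2, 3) HSH coefficients per vertex (RGB last).
    A           : (m,) vertex-associated areas A_k.
    alpha_k     : (m,) spatial weights; alpha_l : ((n+1)**2,) directional
                  weights, expanded from one value per HSH band.
    """
    diff = L - L_target                                  # L_klm - L*_klm
    w = (A * alpha_k)[:, None, None] * alpha_l[None, :, None]
    O = 0.5 * np.sum(w * diff**2)                        # Equation (6)
    dO_dL = w * diff                                     # used in Section 4.3
    return O, dO_dL
```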

4.2 Primal Light Tracing

One key feature of our discretization is the orthogonality of all basis functions. Therefore, we can project each radiance sample computed during light tracing into our data structure quickly and independently. In this section, we discuss how we solve the rendering equation using light tracing and update the coefficients \(\lbrace L_{klm}\rbrace\) of our data structure accordingly. We summarize this procedure in Algorithm 1 and provide a detailed derivation in the supplementary document.
Theoretically, we could regard our data structure as a generalized sensor and construct a path-space formulation of its measurement integral. Here, we instead consider light transport in the simpler particle tracing form, directly discretizing each light source’s radiative power into a set of light rays. In Section 4.4, we then use ideas from the path-space theory to formulate the required derivatives for our light tracing approach. For every light source, we sample N exitant rays, each representing a radiant flux \(\Phi _r\) leaving the light source, distributed according to the light’s emission profile, such that the sum of all these exitant samples equals the total radiative power of the light. Let \({{\mathbf {x}}_0}\) be the origin of a ray. We then construct a light path up to a maximal number of indirect bounces, denoted as a sequence of ray-surface intersection points \({{\mathbf {x}}_1},\ldots ,{{\mathbf {x}}_i},\ldots , {{\mathbf {x}}_\text{max}}\).
The incident flux \(\Phi _i\) arriving at each hit point \({{\mathbf {x}}_i}\) is given by \({\Phi _1} = {\Phi _r}\) (direct illumination) and
\begin{equation} \Phi_{i+1} = \Phi_i\; f_i(-\omega_i, \omega_{i+1})\; (\omega_{i+1} \cdot \mathbf{n}_i) / p_{i+1} \tag{7} \end{equation}
(indirect illumination), where \(f_i\) denotes the BRDF evaluated at \({{\mathbf {x}}_i}\), \({{\mathbf {n}}_i}\) is the surface normal at \({{\mathbf {x}}_i}\) and \({p_{i+1}}\) is the probability density of the exitant sample in direction \({\omega _{i +1}} = ({{\mathbf {x}}_{i+1}} - {{\mathbf {x}}_{i}}) / \left\Vert {{{\mathbf {x}}_{i+1}} - {{\mathbf {x}}_{i}}} \right\Vert\).
At every intersection point \(\boldsymbol {\mathbf {x}}_i\), we sample the exitant radiance distribution due to the incident flux \({\Phi _i}\) along \({N^{\prime }}\) randomly selected outward directions \({\omega _o}\). We then update the coefficients \(\lbrace L_{klm}\rbrace\) by adding the contribution of each sample \({\omega _o}\) to each vertex k, adjacent to \(\boldsymbol {\mathbf {x}}_i\), as follows:
\begin{equation} L_{klm} \leftarrow L_{klm} + \frac{1}{N'}\frac{1}{A_k}\; H_m^l(\omega_o)\, \varphi_k(\mathbf{x}_i)\, f_i(-\omega_i, \omega_o)\, \Phi_i, \tag{8} \end{equation}
where k refers to the vertices of the triangle containing \({{\mathbf {x}}_i}\), with vertex-associated area \(A_k\), while l and m refer to each HSH basis function up to the selected maximal order n. Please refer to our supplement for a detailed derivation of this update rule.
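For illustration, the following sketch mirrors the structure of Algorithm 1 under simplifying assumptions: scalar flux, and hypothetical `light`, `scene`, `sample_hemisphere`, and `hsh_basis` helpers standing in for the ray-tracing backend (the paper's implementation runs on the GPU):

```python
def primal_light_trace(light, scene, L, rng,
                       n_rays=1_000_000, max_bounces=2, n_dirs=4):
    """Sketch of Algorithm 1: trace flux-carrying rays and splat exitant
    radiance into the per-vertex HSH coefficients L via Equation (8).

    light.sample_ray(rng) -> (origin x0, direction w, flux Phi_r);
    scene.intersect(x, w) -> hit record (point, normal, triangle vertices,
    barycentric weights, BRDF). Both are assumed placeholder interfaces.
    """
    for _ in range(n_rays):
        x, w, phi = light.sample_ray(rng)            # Phi_1 = Phi_r
        for _ in range(max_bounces + 1):
            hit = scene.intersect(x, w)
            if hit is None:
                break
            # Splat: N' random outward directions per hit, Equation (8).
            for _ in range(n_dirs):
                w_o = sample_hemisphere(hit.normal, rng)
                f = hit.brdf(-w, w_o)
                for k, phi_k in zip(hit.tri_verts, hit.barycentrics):
                    L[k] += (hsh_basis(w_o) * phi_k * f * phi
                             / (n_dirs * scene.vertex_area[k]))
            # Continue the path with the flux update of Equation (7).
            w_next, pdf = hit.sample_brdf_direction(-w, rng)
            cos_t = max(float(w_next @ hit.normal), 0.0)
            phi = phi * hit.brdf(-w, w_next) * cos_t / pdf
            x, w = hit.point, w_next
    return L
```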

4.3 Adjoint Light Tracing

So far we have discussed how we store the exitant radiance field (Section 4.1) and how each light path affects the degrees of freedom of this field (Section 4.2). We now turn our attention to improving the lighting configuration of a scene via gradient-based optimization and an adjoint formulation for computing the required gradients.
In contrast to previous work on image-based differentiable rendering, we do not handle derivatives along discontinuous edges in our method. This means we primarily ignore direct shadows and instead focus on an efficient derivative formulation for the continuous case. Our results show that these discontinuity problems have limited influence in common lighting design tasks; even in specifically designed test cases (Figures 20 and 21), the optimization algorithm sometimes recovers from receiving these biased gradients.
Fig. 14. A basic lighting design case: The target requires the tables (dashed lines in a) to be lit uniformly (the rest of the scene does not affect the objective, \(\alpha _k=0\)). We plot the objective function on a regular grid in the horizontal plane (a), where the orange dot marks the initial position of the light source, producing the rendered result in the background. Arrows show negative gradients wrt. the in-plane position of the light, which reliably point toward a local minimum. We visually compare these gradients to a finite difference approximation (b). Initial and optimal light position overlaid on the final illumination (d): The light becomes brighter and moves centrally over the tables during optimization. We compare convergence of gradient descent and L-BFGS (both using our gradients) to gradient-free CMA-ES optimization (c). With each method, we first optimize only the light’s 3D position (solid lines) and then its position and intensity combined (dashed lines).
Fig. 15. Equal-time comparison between our adjoint light tracing and our reference differentiable path tracing implementation on the combined optimization of the light source’s position and intensity, as in Figure 14. Using our gradients, both gradient descent (step size \(\alpha = 0.1\)) and L-BFGS converge robustly, whereas differentiable path tracing produces noisier gradients, preventing convergence. (Note that we squeeze the x-axis between iterations 150 and 450 to show the later gradient descent iterations. L-BFGS, however, terminates when it fails to find an improved configuration.)
Fig. 16. Lighting design study on the bust of David: We first generate a flat initial lighting of the laser-scanned model (a), then sketch a desired side lighting in 3D (b) and optimize the position and rotation of the point light source to match this target (d). We then re-create the optimized lighting setup using the real-world specimen (f). Columns (c) and (e) show the difference to the target before and after optimization, respectively.
Fig. 17. Lighting an ancient coin according to a user-defined directional target (top row: front view; bottom row: side view). Initially, the light source is positioned on the left side of the coin (a) when viewed from the front. The user-defined target (b) aims for lighting mostly toward the camera in the front view. Our result (c) matches the desired lighting direction.
Fig. 18. Recovering light parameters for relighting an old PC game, OpenArena [2008], which originally shipped with baked lighting only: the original “look” of the game (c) is produced by combining albedo textures and lightmaps (b). From these data, we construct an optimization target (d) and use it to find a new lighting configuration (e) as close as possible to the original (c). Our optimization starts from a regular grid layout of lights and finds a suitable arrangement for them (a). Ultimately, this enables relighting old games, which often lack dedicated light sources, with more advanced rendering techniques (e.g., ray tracing).
Fig. 19. Lighting a theatre stage; please also refer to our accompanying video for the design interaction. The main image shows the audience view, illuminated by a spot light on the left and an area light on top. The inset on the left shows a top-down view of the stage. The inset on the right shows a part of the user-drawn lighting target.
Fig. 20. Scene from Figure 14 with additional occluders: The light is placed above the ceiling (a) and must only move horizontally to best illuminate the tables (dashed lines), using the same objective as before. Gradient descent and L-BFGS fail to converge (b), while ADAM is more robust due to its momentum term. We find the true optimum by gradient-free CMA-ES optimization for comparison.
Fig. 21. In this test scene, containing a staggered array of occluding cubes (a), the light needs to navigate through this arrangement of cubes to illuminate the uniformly white target on the back wall (b). Depending on the random seed during rendering, ADAM optimization (step size \(\alpha =0.05\)) may get the light stuck inside a cube but occasionally manages to find a way through, due to momentum (“spikes” in the objective function graph (c) indicate that the light is inside a cube). We also optimize (with the same settings) using the central finite difference approximation of gradients (FD step \(h = 0.03\)) for reference, which results in a smooth navigation of the light between the cubes.
In the context of our light tracer, parameters that affect the lighting configuration may cause changes in brightness or colour (\(\Phi _r\)), as well as the ray origin, \(\boldsymbol {\mathbf {x}}_0\). If the origin shifts (infinitesimally), the question is how this change propagates through the scene. Does the first hit, \(\boldsymbol {\mathbf {x}}_1\), also shift, and if so, what about the rest of the light path, \(\boldsymbol {\mathbf {x}}_i\)? Automatic differentiation of Algorithm 1 would answer “yes” and compute derivatives of all hit points, \({ d \boldsymbol {\mathbf {x}}_i / d \boldsymbol {\mathbf {x}}_0 }\), as well as derivatives of the interpolation basis functions. Our test cases (Figures 6 and 7) show that this naïve approach does not lead to useful results. Zhang et al. [2020] (PSDR) work around this issue by moving the evaluation to path space before applying AD. Following a similar idea, we treat the light path \(\lbrace \boldsymbol {\mathbf {x}}_i\rbrace\) as a parameter-independent, constant sample in path space when computing derivatives. Consequently, if a light source moves, then only the origin, \(\boldsymbol {\mathbf {x}}_0\), and hence the first incident direction, \(\omega _1 = {(\boldsymbol {\mathbf {x}}_1-\boldsymbol {\mathbf {x}}_0) / \Vert \boldsymbol {\mathbf {x}}_1-\boldsymbol {\mathbf {x}}_0 \Vert }\), are parameter dependent. All other intersection points, \(\boldsymbol {\mathbf {x}}_i\), remain fixed to the scene geometry. We first derive the general structure of our adjoint tracing method in this section and then compute the required parametric derivatives in Section 4.4.
Our goal is to compute the objective function gradient with respect to the parameters \(\mathbf {p}\), which affect the solution \(L_{klm}(\mathbf {p})\) as follows:
\begin{equation} (\nabla_{\mathbf{p}} O)^\mathsf{T} = \frac{dO}{d\mathbf{p}} = \sum_{klm} \frac{\partial O}{\partial L_{klm}} \frac{dL_{klm}}{d\mathbf{p}}. \tag{9} \end{equation}
The partial derivative \({\partial O / \partial {L_{klm}}}\) can be interpreted as the desired illumination change in the 3D scene per degree of freedom of the discretized radiance field.
We first consider the direct differentiation for \(d {L_{klm}} / d \boldsymbol {\mathbf {p}}\) and then re-arrange the resulting terms to formulate an adjoint state per light path. Every light path resulting from a primary light ray r affects some coefficients of the solution \({L_{klm}}\) according to Equation (8). Consequently, we can split the sensitivity term \(d {L_{klm}} / d \boldsymbol {\mathbf {p}}\) into contributions per path as follows:
\begin{equation} \begin{aligned} \left(\frac{dL_{klm}}{d\mathbf{p}}\right)_{\!r} =\;& \frac{\partial L_{klm}}{\partial \Phi_i}\frac{\partial \Phi_i}{\partial \Phi_r}\frac{d\Phi_r}{d\mathbf{p}} && \text{(flux)} \\ &+ \frac{\partial L_{klm}}{\partial \Phi_i}\frac{\partial \Phi_i}{\partial f_1}\frac{\partial f_1}{\partial \omega_1}\frac{d\omega_1}{d\mathbf{p}} && \text{(1st-bounce BRDF)} \\ &+ \frac{\partial L_{klm}}{\partial f_i}\frac{\partial f_i}{\partial \omega_1}\frac{d\omega_1}{d\mathbf{p}} && \text{(local BRDF).} \end{aligned} \tag{10} \end{equation}
Note that for direct illumination, the direct incident flux \(\Phi _1\) does not depend on the BRDF \(f_1\), and therefore the second term vanishes when \(i=1\), as \(\partial {\Phi _1}/\partial {f_1} = 0\). Similarly, only \(f_1\) depends on the direct incident angle \({\omega _1}\), but later BRDF evaluations do not; therefore, the third term vanishes for indirect illumination, because \(\partial f_i/\partial {\omega _1} = 0\) for \(i \gt 1\). While we have omitted function arguments for brevity, it is important to point out that the direct BRDF is sampled along local outward directions \(\omega _o\), see Equation (8), whereas the indirect path evaluates the BRDF toward the exitant direction of the first bounce, \(\omega _2\), Equation (7). Both of these terms depend on the first incident direction \(\omega _1\), which changes when the origin of the light path moves. This dependence is stated explicitly in Equation (10); in the following we use the notation \({{{df_1({{\omega _2}})}}/{{d{\mathbf {p}}}}}\) and \({{{df_1({{\omega _o}})}}/{{d{\mathbf {p}}}}}\) to distinguish these terms.
Storing the full sensitivity matrix \(d {L_{klm}} / d \boldsymbol {\mathbf {p}} = \sum \nolimits _r (d {L_{klm}} / d\boldsymbol {\mathbf {p}})_r\) by summation over contributions from all paths, Equation (10), would consume a lot of memory as the number of parameters grows. Our formulation avoids this memory cost and instead uses adjoint states per path, which fit into the local memory of each GPU thread.
Instead of tracing derivatives of radiant flux forward along a light path, we collect partial objective derivatives backwards along that path and compute a weighted sum of partial derivatives \({\partial O}/{\partial {\Phi _r}}\). Nimier-David et al. [2020] also refer to a similar quantity as adjoint radiance in their work. Their concept of adjoint radiance projects partial derivatives of the objective function from an image into the scene along camera paths, whereas we collect objective derivatives along light paths. Combining Equations (9) and (10), including summation over light paths noted above, and then re-arranging terms yields
\begin{equation} \frac{dO}{d\mathbf{p}} = \sum_r \left( \frac{\partial O}{\partial \Phi_r}\frac{d\Phi_r}{d\mathbf{p}} + \frac{\partial O}{\partial f_1(\omega_o)}\frac{df_1(\omega_o)}{d\mathbf{p}} + \frac{\partial O}{\partial f_1(\omega_2)}\frac{df_1(\omega_2)}{d\mathbf{p}} \right). \tag{11} \end{equation}
Here, \({\partial O}/{\partial {\Phi _r}}\) and \({\partial O}/{\partial f_1({{\omega _2}})}\) are adjoint states, collecting objective function derivatives along the light path. We update these terms at every ray-surface intersection point \(\boldsymbol {\mathbf {x}}_i\), analogously to the primal simulation, Equation (8),
\begin{equation} \frac{\partial O}{\partial \Phi_r} \leftarrow \frac{\partial O}{\partial \Phi_r} + \frac{\partial O}{\partial L_{klm}} \frac{\partial L_{klm}}{\partial \Phi_i} \frac{\partial \Phi_i}{\partial \Phi_r}, \tag{12} \end{equation}
and
\begin{equation} \frac{\partial O}{\partial f_1(\omega_2)} \leftarrow \frac{\partial O}{\partial f_1(\omega_2)} + \frac{\partial O}{\partial L_{klm}} \frac{\partial L_{klm}}{\partial \Phi_i} \frac{\partial \Phi_i}{\partial f_1(\omega_2)}. \tag{13} \end{equation}
Figure 5 illustrates this idea, which we also outline in Algorithm 2.
Furthermore, the derivative wrt. the BRDF under direct illumination, \({{{\partial O}}/{{\partial f_1({{\omega _o}})}}}\), only occurs at \(\boldsymbol {\mathbf {x}}_1\) and can be calculated directly from Equations (8) and (6). Similarly, the derivative of the BRDF itself wrt. parameters \(\boldsymbol {\mathbf {p}}\) follows directly from the BRDF’s definition. Additionally, we find the derivative terms required to evaluate Equations (12) and (13) as follows: \({\partial O / \partial L_{klm}}\) is the derivative of Equation (6), while \({\partial L_{klm} / \partial \Phi _i}\) follows from differentiating Equation (8). Finally, \({\partial \Phi _i / \partial \Phi _r}\) and \({\partial \Phi _i / \partial f_1(\omega _2)}\) are tracked along the light path, effectively expanding the recursive product induced by Equation (7) and collecting factors.
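To make the dataflow of Equations (11)–(13) explicit, here is a sketch of the per-path adjoint accumulation for one light path with scalar quantities. All record fields (`splat_weights`, `dPhi_i_dPhi_r`, and so on) are hypothetical names for the factors described above, not the paper's data layout:

```python
def adjoint_path_contribution(path, dO_dL, dPhi_r_dp, df1_wo_dp, df1_w2_dp):
    """Per-path adjoint accumulation, Equations (11)-(13).

    path    : replayed intersections; per hit i it stores the splat
              weights dL_klm/dPhi_i (differentiating Equation (8)) and
              the running factors dPhi_i/dPhi_r and dPhi_i/df1(w2)
              expanded from Equation (7).
    dO_dL   : derivative of Equation (6) per radiance coefficient.
    d*_dp   : light-parameter derivatives from Section 4.4.
    """
    dO_dPhi_r = 0.0   # adjoint state, Equation (12)
    dO_df1_w2 = 0.0   # adjoint state, Equation (13)
    dO_df1_wo = 0.0   # direct-illumination BRDF term, occurs at x_1 only
    for i, hit in enumerate(path, start=1):
        # dO/dPhi_i: sum over the coefficients this hit touched,
        # of dO/dL_klm * dL_klm/dPhi_i.
        dO_dPhi_i = sum(dO_dL[klm] * w for klm, w in hit.splat_weights)
        dO_dPhi_r += dO_dPhi_i * hit.dPhi_i_dPhi_r        # Equation (12)
        if i == 1:
            dO_df1_wo = hit.dO_df1_wo       # directly from Eqs (8) and (6)
        else:
            dO_df1_w2 += dO_dPhi_i * hit.dPhi_i_df1_w2    # Equation (13)
    # Per-path contribution to dO/dp, Equation (11):
    return (dO_dPhi_r * dPhi_r_dp
            + dO_df1_wo * df1_wo_dp
            + dO_df1_w2 * df1_w2_dp)
```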
Note that in our formulation, primal and adjoint tracing are entirely separate rendering passes. In contrast, automatic differentiation correlates primal rendering and derivative calculation, and previous methods work around this issue by running the primal pass twice (“decorrelation”). Here, whether we evaluate the adjoint step in a correlated or decorrelated way is simply a question of whether we choose the same or a different random seed compared to the primal step. In validation tests and finite difference comparisons, we use the correlated evaluation (i.e., keeping the same random seed in all rendering passes), while for all other results we use decorrelated evaluation.

4.4 Optimization Parameters

The final step to evaluating the objective function gradient according to Equation (11) is computing \(d\Phi _r/d\boldsymbol {\mathbf {p}}\) for all parameters \(\boldsymbol {\mathbf {p}}\) of the lighting configuration. Note that this is the only term that depends on the type of each light source (point, spot, area, etc.) and on the meaning of each parameter (position, intensity, rotation, etc.). As \(\Phi _r\) denotes the direct illumination, these derivatives can generally be calculated analytically, and we briefly summarize them here while deferring details to the supplementary document.

4.4.1 Intensity and Colour.

Intensity (per colour channel) \({I_e}\) for point-shaped lights (or, analogously, emissive power for area lights) directly affects the emitted flux per light ray, and hence the corresponding derivative is straightforward to calculate. We find it useful to parametrize intensity with a quadratic function, \(I_e = 0.5 \boldsymbol {\mathbf {p}}_j^2\), where \(\boldsymbol {\mathbf {p}}_j\) is the corresponding optimization parameter, to prevent negative values. If the task is to optimize chromaticity while maintaining constant intensity, then we need to find a colour vector \(\boldsymbol {\mathbf {c}} = {(r, g, b)}\) such that \(r + g + b = 1\). We implement this constraint by projecting \(\boldsymbol {\mathbf {c}}\) to \(\boldsymbol {\mathbf {c}}_p = \boldsymbol {\mathbf {c}} / (r + g + b)\) in each optimization step and similarly projecting the derivative by multiplying with the Jacobian \((d{\mathbf {c}_p}/d{{\mathbf {c}}})\).
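A small sketch of these two parametrizations follows; variable names are ours, and the Jacobian is obtained by differentiating \(\mathbf{c}_p = \mathbf{c}/(r+g+b)\):

```python
import numpy as np

def intensity_from_param(p_j):
    """Quadratic parametrization I_e = 0.5 * p_j**2 keeps the intensity
    non-negative; the chain-rule factor is simply dI_e/dp_j = p_j."""
    return 0.5 * p_j**2

def project_chromaticity(c, dO_dcp):
    """Project colour c = (r, g, b) onto r + g + b = 1 and pull the
    objective gradient back through the Jacobian d(c_p)/dc."""
    s = c.sum()
    c_p = c / s
    # d(c_p)_i / dc_j = (delta_ij * s - c_i) / s**2
    J = (np.eye(3) * s - np.outer(c, np.ones(3))) / s**2
    return c_p, J.T @ dO_dcp
```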

4.4.2 Position and Rotation.

Let us now consider the derivative of the objective function with respect to a light’s position \(d O / d \boldsymbol {\mathbf {x}}\). As discussed in Section 4.3, the naïve approach would compute the set of spatial derivatives \(d\boldsymbol {\mathbf {x}}_i / d\boldsymbol {\mathbf {x}}\) for the entire path. Automatic differentiation also produces this result; see Mitsuba 3’s ‘ptracer’ in Figures 11 and 12.
There is, however, a serious issue with this naïve approach: Computing derivatives in this way generally fails to produce reliable descent directions that would be useful for gradient-based optimization of lighting configurations (Figure 7). This issue arises because any spatial derivative of \(\boldsymbol {\mathbf {x}}_i\) results in a vector in the plane of the triangle containing \(\boldsymbol {\mathbf {x}}_i\), with discontinuous jumps between triangles, as illustrated in the inset of Figure 7. Note that this type of discontinuity problem is distinct from the case of moving hard shadows due to occlusions (we intentionally ignore the latter). The former issue, however, causes severe limitations for differentiable light (or particle) tracing and must be addressed.
The PSDR approach [Zhang et al. 2020] avoids this issue by evaluating the entire path according to the path-integral formulation (including all geometric terms explicitly) and then applying automatic differentiation. In contrast to their general formulation, we deal specifically with parameters that affect light sources, which means that we only need to consider the first geometric term, while our adjoint states handle the indirect light path. For the remaining direct part, we now apply a similar idea in explicitly differentiating the geometric term at \(\boldsymbol {\mathbf {x}}_1\). For improved performance, we compute these derivatives analytically rather than relying on code-level AD. The resulting expressions fit into thread-local GPU memory allowing for efficient parallel computation.
Our solution to finding useful gradients wrt. position and rotation of a light source is to consider the “reverse” direction of the primary light ray (\(\boldsymbol {\mathbf {x}}_0 \rightarrow \boldsymbol {\mathbf {x}}_1\)) as if we were tracing the ray in the other direction (via next event estimation from the scene geometry to the light source). Consequently, we treat \(\boldsymbol {\mathbf {x}}_1\) as constant, while the light source moves (or rotates) and consider a “virtual” intensity per ray, \({{\tilde{I}_e} = {\Phi _r r^2 / (\omega _1 \cdot \boldsymbol {\mathbf {n}})}}\), representing the irradiance this ray transports from the light source to \(\boldsymbol {\mathbf {x}}_1\). The direct illumination due to \(\Phi _r\) behaves like \({\tilde{I}_e (\omega _1 \cdot \boldsymbol {\mathbf {n}}) / r^2}\) locally. Consequently, the derivative of the incident flux with respect to the light’s position becomes
\begin{equation} \frac{d\Phi_r}{d\mathbf{x}} = \tilde{I}_e\; \frac{d\left((\omega_1 \cdot \mathbf{n})/r^2\right)}{d\mathbf{x}}. \tag{14} \end{equation}
We compute this derivative using symbolic math software, as shown in our supplementary document.
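The derivation can be reproduced with a few lines of SymPy. The variable names below are ours, and, as described above, the first hit \(\mathbf{x}_1\) and its normal \(\mathbf{n}\) are held fixed:

```python
import sympy as sp

x0 = sp.Matrix(sp.symbols('x0_0:3'))  # light position (the parameter)
x1 = sp.Matrix(sp.symbols('x1_0:3'))  # first intersection point (fixed)
n  = sp.Matrix(sp.symbols('n_0:3'))   # surface normal at x1 (fixed)

d = x1 - x0
r = sp.sqrt(d.dot(d))
w1 = d / r                            # incident direction at x1
g = w1.dot(n) / r**2                  # geometric falloff (w1 . n) / r^2

# Row vector dg/dx0; multiplied by I~_e this yields Equation (14).
dg_dx0 = sp.simplify(sp.Matrix([[sp.diff(g, v) for v in x0]]))
print(dg_dx0)
```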
Figure 6 compares errors of our gradient to the naïve version discussed earlier, relative to finite difference (FD) approximations of various step sizes. As expected, for small step sizes finite differencing suffers from floating-point errors, whereas for large step sizes, FD approximation error dominates. For a suitable range of step sizes, our gradients agree well with finite difference approximations. The naïve approach, however, shows systematic error over a wide range of FD step sizes. This error accumulates during the summation over many rays; for single light paths, we have verified the correct implementation of the naïve gradient calculation. In Figure 7, we compare our improved approach to the naïve version on the basic optimization task of positioning a spot light. Using our method, gradient descent optimization converges quickly and reliably remains in the optimal configuration. We intentionally choose a standard gradient descent approach (i.e., updating \({\mathbf {x}} \leftarrow {\mathbf {x}} - \alpha (dO/d{\mathbf {x}})\)) instead of more elaborate optimization algorithms for this example, because it most clearly exposes the flaws of the naïve differentiation approach.
Moving on to rotations, we parametrize the orientation of light sources by a rotation vector \({\boldsymbol {\mathbf {\theta }}}\), which defines a rotation matrix \({\mathbf {R}(\boldsymbol {\mathbf {\theta }})}\) following Rodrigues’ formula. The world-space orientation (normal \(\mathbf {n}\) and tangent \(\mathbf {t}\)) results from applying this rotation to the material-space normal and tangent \((\boldsymbol {\mathbf {n}}_0, \boldsymbol {\mathbf {t}}_0)\), respectively. We use the linear small-angle approximation to avoid numerical issues when \(\Vert \boldsymbol {\mathbf {\theta }}\Vert\) approaches zero. Applying the chain rule, we find the derivative of the flux wrt. the rotation vector
\begin{equation} \frac{d\Phi_r}{d\boldsymbol{\theta}} = \frac{\partial \Phi_r}{\partial \mathbf{n}}\frac{d\mathbf{n}}{d\boldsymbol{\theta}} + \frac{\partial \Phi_r}{\partial \mathbf{t}}\frac{d\mathbf{t}}{d\boldsymbol{\theta}}, \tag{15} \end{equation}
where we compute the terms \(d\boldsymbol {\mathbf {n}}/d\boldsymbol {\mathbf {\theta }}\) and \(d\boldsymbol {\mathbf {t}}/d\boldsymbol {\mathbf {\theta }}\) using symbolic differentiation of Rodrigues’ formula. Finally, we find \({{\partial \Phi _r }/{\partial {\mathbf {n}}}}\) and \({{\partial \Phi _r }/{\partial {\mathbf {t}}}}\) via symbolic differentiation, again keeping \(\boldsymbol {\mathbf {x}}_1\) fixed. The exact expressions depend on the type of light source: For instance, IES data files [ANSI/IES 2020], which are widely used in architecture, define the emitted intensity by tabulated values, interpolated bilinearly over the unit sphere; spot lights with soft edges use a quadratic attenuation profile between the inner and outer cone angle; and area light sources assume a cosine-weighted emission profile. Please refer to our supplementary document for further details. Note that for area lights the ray origin moves when the light is rotated, i.e., \({\boldsymbol {\mathbf {x}}_0(\boldsymbol {\mathbf {\theta }})}\). (For our purposes, we only consider rigid rotation and translation.) In this case, the derivatives wrt. the light’s orientation include an additional term, analogous to Equation (14), to account for the shift of the ray origin, as described in the supplement.
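The rotation derivatives can be obtained the same way. Below is a SymPy sketch of \(d\mathbf{n}/d\boldsymbol{\theta}\) for Equation (15) (again with illustrative names), including a comment on the small-angle fallback mentioned above:

```python
import sympy as sp

th = sp.Matrix(sp.symbols('th_0:3'))  # rotation vector theta
n0 = sp.Matrix(sp.symbols('n0_0:3'))  # material-space normal (or tangent)

a = sp.sqrt(th.dot(th))
k = th / a
K = sp.Matrix([[0, -k[2], k[1]],
               [k[2], 0, -k[0]],
               [-k[1], k[0], 0]])     # cross-product matrix [k]_x
R = sp.eye(3) * sp.cos(a) + K * sp.sin(a) + k * k.T * (1 - sp.cos(a))

n = R * n0                            # world-space normal n(theta)
dn_dth = n.jacobian(th)               # dn/dtheta, as used in Equation (15)

# The expression is singular as ||theta|| -> 0; there, the linear
# small-angle approximation R ~ I + [theta]_x gives d(Rv)/dtheta = -[v]_x.
```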

4.4.3 Further Parameters.

Because Equation (11) separates derivatives of the objective function from derivatives of the lighting configuration, we can in principle extend our optimization system to include other parameters relatively easily. The main question we have to address for each parameter is how it affects the incident radiance (discretized using many rays) at the first ray-surface intersection; formulating derivatives of each ray’s radiant flux then captures the expected change in the direct illumination. In general, we believe this idea of incorporating knowledge about global behaviour into local, discretized derivative calculations could also be beneficial for other applications.

5 Visualizing Gradient Contributions

In common camera-based differentiable rendering, the gradient of the rendered image with respect to a specific parameter can be visualized as a gradient image or a gradient texture [Nimier-David et al. 2020; Zeltner et al. 2021; Zhang et al. 2020] when optimizing an object’s appearance. Our objective function, in contrast, evaluates the full radiance field rather than a single image. During optimization, we employ an adjoint formulation that avoids computing derivatives of this radiance field explicitly. We can still compute these derivatives for visualization and comparison, but more interestingly, we can also use our per-ray adjoint states for a different kind of visualization, showing adjoint gradient contributions as follows.
For any point \(\boldsymbol {\mathbf {q}}\) on the surface geometry that is directly visible from a selected light source, we construct a light path that originates at that light, passes through \(\boldsymbol {\mathbf {q}}\) as its first ray-surface intersection, and then continues bouncing through the scene. Evaluating Equation (11) restricted to these paths, we find contributions to the derivative \(dO/d\boldsymbol {\mathbf {p}}_j\) that “flow” through the point \(\boldsymbol {\mathbf {q}}\). Intuitively, we can think of each point \(\boldsymbol {\mathbf {q}}\) casting a vote on how the parameter \(\boldsymbol {\mathbf {p}}_j\) should change to improve the objective function (while taking indirect reflections from \(\boldsymbol {\mathbf {q}}\) into account). Note that this quantity is different from adjoint radiance [Nimier-David et al. 2020], which does not include derivatives wrt. parameters, or indirect bounces.
The example in Figure 8 shows which parts of the dragon would improve the objective function if the light moves right (red) or left (blue). The final objective function gradient \(d O / d \boldsymbol {\mathbf {p}}\) can be thought of as the sum of these contributions. We compare our visualization to a reference implementation that directly differentiates the radiance field and to a finite difference approximation in Figure 8, and also to related work in Figure 12. For (mostly) direct lighting, our method and the reference yield nearly identical results. When the objective is strongly influenced by indirect illumination, however, our adjoint visualization draws gradient contributions at the first ray-surface intersection, whereas direct differentiation produces a response at the target surface instead (see the additional example in our supplement).

6 Results

In this section, we first cover verification test cases, as well as comparisons to related methods; we then demonstrate the ability of our approach to address creative lighting design cases. Unless stated otherwise, we trace two indirect bounces in all of our results. Our light transport simulations run on the GPU, specifically an NVIDIA GeForce RTX 3080, using hardware-accelerated ray tracing via the Vulkan API. We also evaluate the objective function on the GPU but transfer the optimization parameters and their derivatives to the CPU to access publicly available optimization libraries. Running the optimization algorithm on the CPU is not a performance bottleneck, as the size of the parameter vector is much smaller than the size of the radiance data.
In general, predicting which optimization algorithm will perform best for a specific task is difficult. In our results we demonstrate that we provide gradient information that works with multiple commonly used optimization methods, most notably gradient descent, ADAM, and L-BFGS [Nocedal 1980]. Our implementation uses the LBFGS++ library [2021], as well as the ADAM algorithm of Kingma and Ba [2014] (using decay rates \(\beta _1 = 0.9\) and \(\beta _2 = 0.999\) as suggested in the original publication for all our results). For comparisons to gradient-free CMA-ES, we use the code by Hansen [2021, 2003].
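For reference, one ADAM update [Kingma and Ba 2014] on the lighting parameters, with the decay rates stated above, amounts to the following (a minimal numpy sketch, not our CPU implementation):

```python
import numpy as np

# One ADAM step on the parameter vector p, given the adjoint gradient dO/dp.
# state is initialized as {"t": 0, "m": np.zeros_like(p), "v": np.zeros_like(p)}.
def adam_step(p, grad, state, alpha, beta1=0.9, beta2=0.999, eps=1e-8):
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1.0 - beta1) * grad      # first-moment estimate
    state["v"] = beta2 * state["v"] + (1.0 - beta2) * grad**2   # second-moment estimate
    m_hat = state["m"] / (1.0 - beta1 ** state["t"])            # bias correction
    v_hat = state["v"] / (1.0 - beta2 ** state["t"])
    return p - alpha * m_hat / (np.sqrt(v_hat) + eps)
```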

6.1 Verification Tests

We first verify that our differentiable light-tracing system is capable of solving various inverse problems where a well-defined solution exists. We perform ground-truth recovery tests, where the illumination target is copied from the result corresponding to a specific lighting configuration in various scenes: a large office (Figure 1), a glossy coin, using hemispherical harmonics (HSH) up to order \(n=5\) (Figure 9), as well as the Stanford bunny (Figure 11). For these examples, we visualize the differences to the ground-truth target before and after optimization or compare convergence of the optimization to related methods (see details in Section 6.2).
Apart from ground-truth tests, we verify that our method correctly finds solutions for well-defined objectives on specific surfaces, Figures 7 and 10, and compare our gradient calculation to finite difference approximations in Figures 6, 8, and 14. In our supplementary material we also provide a visual comparison of different orders of HSH interpolation, which shows that our radiance data structure converges under refinement.
Finally, we test the capabilities of our system in dealing with indirect illumination. In Figure 10, we place a spot light into a small “labyrinth,” such that light can only reach the “exit” via multiple bounces. The side walls of the labyrinth have a dark target value to push the light away from the walls, with a very low weight (\(\alpha _k = 0.004\)), whereas the exit has a very bright target with unit weight to attract the light source. We trace light paths with 10 indirect bounces in this example. Optimizing for position and rotation simultaneously using ADAM with step size \(\alpha = 0.2\) successfully navigates the light through the labyrinth in ca. 10 s and 200 gradient evaluations. In principle, L-BFGS also successfully solves this problem, but the momentum estimated by ADAM results in a smoother, visually appealing motion of the light source.
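The weighting in this example corresponds to a per-region weighted difference between simulated and target illumination; the sketch below shows this weighting for a per-vertex squared-difference objective (an assumption for illustration; our actual objective and its discretization are defined in the preceding sections).

```python
import numpy as np

# Hedged sketch of a weighted illumination objective: dark walls with tiny
# weight (alpha_k = 0.004) gently push the light away, while the bright
# exit target with unit weight dominates the objective.
def weighted_objective(L, L_target, alpha, vertex_area):
    # L, L_target, alpha, vertex_area: shape (m,) per-vertex quantities
    diff = L - L_target
    return float(np.sum(alpha * vertex_area * diff ** 2))
```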

6.2 Comparison to Previous Work

Existing work on inverse rendering invariably uses images to define the optimization objective. We establish a baseline for comparing our (camera-free) view-independent approach to existing methods by restricting our objective function to data available to image-based methods (camera-visible surfaces) on a test scene where one camera sufficiently captures most surfaces (except of course the back of the bunny, Figure 12). In this test, we use ideally diffuse materials for all objects in the scene, and no directional data (\(n=0\)) for our light tracer and reference path tracer, to allow for a fair comparison. As shown in Figure 11, our adjoint light tracing matches state-of-the-art methods in this setting in terms of optimization convergence, while our adjoint formulation, combined with an efficient GPU-based implementation, outperforms other methods in terms of runtime. We use an equal number of samples for these comparisons. Finally, note that for the unrestricted case (‘full’) with \(n=0\), Figure 11 also contains an “object-space” baseline comparison between our light tracer and reference path tracer, which shows that light tracing (0.38 s) outperforms path tracing (0.9 s) for an equal number of samples (delivering equal optimization convergence).
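The restriction to camera-visible surfaces mentioned above can be realized by masking the objective weights (a trivial sketch, assuming a precomputed per-vertex visibility flag):

```python
import numpy as np

# Hedged sketch: zero the objective weights on all vertices that are not
# visible from the comparison camera, so our view-independent objective
# only measures what image-based methods can see.
def restrict_to_camera(alpha, visible_mask):
    # alpha: shape (m,) per-vertex weights; visible_mask: shape (m,) bool
    return np.where(visible_mask, alpha, 0.0)
```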
Mitsuba’s light tracing method (‘ptracer’), using automatic differentiation, fails to converge, similar to our naïve differentiation approach in Figure 7. The differentiable path-integral formulation of PSDR\(^{1}\) [Zhang et al. 2020], however, produces good results in this simple setting for both light and path tracing (with slightly more noise and a reduced convergence rate for their light tracer).
While related image-based methods (Mitsuba, PSDR) handle discontinuous edges in various ways, they also introduce additional noise into the gradient (compared to Mitsuba’s standard AD path tracer), causing reduced optimization convergence (Figure 11). As discontinuity handling also comes with a runtime cost, we choose speed over (theoretical) accuracy for the scope of this article; we analyze the effects of the resulting gradient bias in Section 6.4. Overall, our approach produces less noisy gradient data than existing differentiable light tracing methods.
For further comparison, we also implement a reference differentiable path tracer following the ideas of Stam [2020] on top of our view-independent data structure: We uniformly sample each triangle and trace paths with next event estimation to find the incident radiance. Finally, we sample exitant directions locally, apply the BRDF, and update our data structure according to Equation (8). For a fair comparison of our light tracing method against “object-space” path tracing, this path tracer uses the same sampling strategies as our light tracer to construct indirect illumination paths. The gradients produced by this path tracer closely match Mitsuba’s AD path tracer (Figure 12). For the baseline test scene, we observe similar convergence behaviour between restricted objective function data and full view-independent data (i.e., camera-visible vs. all surfaces in the scene), as intended, while our light-tracing approach is substantially faster than path tracing. We show an extension of this baseline test in the supplement, where adding an off-camera box around the scene highlights the advantage of our view-independent approach. Furthermore, Figure 15 shows an equal-time comparison on the more complex optimization problem of Figure 14, where our light-tracing method yields improved convergence rates due to reduced (gradient) noise (similar behaviour is also visible in Figure 2, as well as in the indirect illumination “labyrinth” example in our video).
Finally, we compare our approach to Mitsuba 3 on a large office scene (Figure 13), where Mitsuba uses four cameras to cover most of the scene. We again restrict our method to surfaces visible from any camera for comparison (Figure 13). Mitsuba’s reparameterization shows reduced optimization performance (likely due to increased noise in the gradient estimates). Using our method without the restriction to visible surfaces improves convergence, even at a lower sample count. This behaviour is due to the global coupling between parameters in the lighting design problem, which is in this sense a more challenging optimization problem than, for instance, optimizing texture colours given a target image.
In summary, in terms of optimization convergence, our adjoint light tracing method at least matches, and on complex scenes outperforms, state-of-the-art inverse rendering systems on the lighting design problem. Our formulation enables a fast GPU implementation, which results in far better runtime performance.

6.3 Lighting Design Applications

6.3.1 Small Office Lighting.

We first show an example of a fully automatic optimization of a single point light in a small office scene, Figure 14. The optimization target specifies bright but uniform lighting on the top surface of both tables, a common regulatory requirement for work spaces. In the initial configuration, a point light is placed off to one side, causing the work space to be too dimly and unevenly lit. We test different optimization methods on two subsequent tasks in this scene: The first optimizes the 3D position of the light, placing it centrally above the tables for a good tradeoff between brightness and uniformity; the second jointly optimizes for the light’s intensity and position. The optimal light placement is now just underneath the ceiling (thereby improving uniformity), with the intensity increased to compensate for the larger distance (Figure 14(d)).
We compare the performance of GD and L-BFGS to gradient-free CMA-ES [Hansen et al. 2003] in Figure 14(c). In the first, position-only, optimization task, both L-BFGS and GD reliably find a good solution, in 6.5 s and 42 objective and gradient evaluations, or 7.9 s and 52 evaluations, respectively. In comparison, CMA-ES requires around 200 objective evaluations to find a good solution (initial standard deviation \(\sigma = 1\) for all parameters). In general, the computation time required to evaluate gradients via an adjoint method is close to the time needed to compute the solution itself. Therefore, even though CMA-ES does not need to evaluate gradients, its total runtime is still significantly higher than that of the gradient-based methods. See also Table 1 for timings of the primal and adjoint evaluations in our examples. In the second task, when we optimize for position and intensity simultaneously, the Hessian approximation built by the L-BFGS method captures the relation between the height (distance) and intensity. Consequently, L-BFGS finds a good solution (Figure 14) in 8.2 s and 45 evaluations. Plain gradient descent and ADAM (as shown in our video), however, converge more slowly, while CMA-ES finds an acceptable solution after about 400 evaluations.
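As a rough cost estimate: writing \(t_{L}\) and \(t_{\nabla }\) for the per-iteration primal and adjoint cost, L-BFGS spends about \(42\,(t_{L} + t_{\nabla })\) on the first task, whereas CMA-ES spends about \(200\,t_{L}\); even with \(t_{\nabla } \approx t_{L}\), this amounts to roughly \(84\,t_{L}\) versus \(200\,t_{L}\), so the gradient-based optimizer is more than twice as fast despite the additional adjoint pass per iteration.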
Table 1. Overview of Our Results

Scene (Fig.) | Optim. | \(\alpha\) | N; m | \(t_{L}\); \(t_{\nabla }\) [ms] | t [s]
Large Office (1, 2) (light trace) | ADAM | 0.1 | 14.7e6; 1 409 381 | 127; 86 | 64.0
Large Office (2) (path trace) | ADAM | 0.1 | 39.7e6; 1 409 381 | 107; 110 | 66.4
Coin (9) (HSH \(n=5\)) | L-BFGS | 1 | 4.2e6; 6 345 | 147; 174 | 18.4
Labyrinth (10) | ADAM | 0.2 | 1.0e6; 1 893 | 31; 22 | 10.8
Labyrinth (path trace, video) | ADAM | 0.2 | 2.4e6; 1 893 | 35; 12 | 9.5
Small Office (14) | L-BFGS | 1 | 4.2e6; 301 009 | 152; 28 | 14.7
Small Office (15) (path trace) | L-BFGS | 1 | 75e6; 301 009 | 127; 62 | 13.5
Small Office (HSH \(n = 3\)) | L-BFGS | 1 | 4.2e6; 301 009 | 673; 61 | 16.2
David (16) | ADAM | 0.14 | 4.2e6; 535 071 | 28; 37 | 6.8
Coin painted (17) (HSH \(n=5\)) | L-BFGS | 1 | 4.2e6; 6 345 | 123; 154 | 22.5
OpenArena (18) | ADAM | 0.15 | 40.5e6; 8 100 | 654; 42 | 72.5
Theatre stage (19) (5x L-B. + 1x A.) | L-BFGS; ADAM | 1; 0.2 | 18e6; 38 448 | 97; 202 | 13.5*; 60.6

Columns: scene reference, optimization algorithm, step size \(\alpha\), number of primary rays (threads) N, number of mesh vertices m, average time per primal and adjoint evaluation \(t_{L}\), \(t_{\nabla }\) [ms], total wall clock time for the entire optimization t [s] (“*” marks average of multiple optimizations).

6.3.2 Directional Lighting of a Glossy Coin.

In Figure 17 we show an example where directional lighting plays an important role. The user specifies a direction-dependent target by painting directly into the HSH-discretized data structure, i.e., the target coefficients \({L_{klm}^*}\). In this example, we set the per-channel weights \(\alpha _l\) so as to under-weight the undirected lighting component represented by \(H_0^0\) by a factor of 0.1 relative to the directional components of the radiance data. Using our adjoint gradients, an L-BFGS optimizer successfully navigates the light source around the coin to find the best matching directional lighting configuration.
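Assuming the HSH coefficients are stored band-major with \(2l+1\) entries per band \(l\) (an assumption for illustration; our storage layout may differ), such a per-channel weighting can be generated as follows.

```python
import numpy as np

# Hedged sketch: build one weight per HSH coefficient up to order n, scaling
# the undirected band (the single H_0^0 coefficient) down by a factor of 0.1
# relative to the directional bands.
def hsh_channel_weights(n, undirected_weight=0.1):
    weights = []
    for l in range(n + 1):
        w = undirected_weight if l == 0 else 1.0
        weights.extend([w] * (2 * l + 1))   # band l has 2l+1 basis functions
    return np.array(weights)                # (n+1)^2 weights in total
```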

6.3.3 Real-world Bust of David.

We demonstrate the applicability of our system to an artistic lighting design process on a small bust of David figurine, Figure 16. We first laser-scan the bust and build a 3D mesh suitable for our simulation using a Metris MCA 3600 articulated arm with a Metris MMD 50 laser scanner. In our interactive user interface, we then sketch a desired shading onto this model. Performing 100 iterations of ADAM optimization (\(\alpha = 0.14\)), which takes about 7.5 s, we find the position and orientation of a spot light that closely matches the painted target. To validate our result, we replicate this lighting configuration on the real-life specimen, Figure 16(f). We use a Cree XLamp CXA1304 LED and a custom-made snout to limit the cone angle to \(45^{\circ }\), matching our simulation. We apply only basic white balance and gradation curve adjustments to the real-world photographs.

6.3.4 Refurbishing Baked Lighting.

Another interesting application of our system is “refurbishing” old video games. In many cases, the diffuse (potentially also indirect) illumination is baked into static lightmaps. However, all information about the original light sources, which would be required to render the scenes with a modern real-time ray tracing method, is often lost. Here, we reconstruct light sources such that the given textures (assumed to represent albedo), combined with the recovered lighting, produce a similar impression as the original game. We demonstrate this approach in Figure 18, where we build lighting configurations for two different scenes from OpenArena [2008]. In each case, we first initialize a regular grid (e.g., \(9 \times 9 \times 2\)) of low-intensity point lights, covering the bounding box of the scene. We then optimize for position, colour, and intensity of all lights simultaneously (up to 972 parameters) using ADAM. Figure 18 shows overviews of the initial and final lighting configurations, as well as interior views of our results.
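The initialization amounts to placing a lattice of dim lights inside the scene’s bounding box; a minimal sketch (folding colour and intensity into one RGB emission per light, which matches the stated parameter count, \(162 \times 6 = 972\)) could look as follows.

```python
import numpy as np

# Hedged sketch: a regular 9x9x2 grid of low-intensity point lights spanning
# the scene's bounding box; positions and RGB emissions are then optimized
# jointly (162 lights x 6 parameters = 972 parameters).
def init_light_grid(bbox_min, bbox_max, counts=(9, 9, 2), intensity=0.01):
    axes = [np.linspace(lo, hi, k) for lo, hi, k in zip(bbox_min, bbox_max, counts)]
    positions = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    emission = np.full_like(positions, intensity)   # dim white RGB emission
    return positions, emission
```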

6.3.5 Large Office.

In a more complex example, we show a lighting optimization of a large office consisting of two floors. This scene is illuminated by two point lights (one over the staircase, one over the lounge area), two spot lights near the reception and statue, an area light over the conference table, and two highly anisotropic lights (using IES intensity profiles [ANSI/IES 2020]) on the top floor. We first show a result that recovers a given ground truth (Figure 1) and compare the optimization convergence to our reference path tracer (Figure 2). We also compare our method to Mitsuba 3 on this scene (replacing the IES lights, which are not supported by Mitsuba, with spot lights) in Section 6.2. We then demonstrate the ability of our system to interactively edit the desired illumination and automatically reconfigure the lighting design accordingly in Figure 4.

6.3.6 Theatre Stage Lighting.

Finally, we show an interactive design session, where the user starts from a completely unlit scene and iteratively adds light sources, selects optimization parameters, and updates the illumination target. The interactive workflow alternates between user manipulations and automatic lighting optimization to realize the intended design, as shown in our video. Figure 19 summarizes the result of this design example. In total, this lighting design session required about 10 minutes to complete, with just over 2 minutes spent on automatic design optimization, and the remaining time on various user interactions including visual inspection of the results. Please also refer to Table 1 for a performance overview of our results.

6.4 Discontinuity Failure Cases

In this section, we analyze specific situations where our method may fail to converge properly because it does not handle occlusion discontinuities (moving shadows). First, we show a modified lighting design example in Figure 20, where the optimizer should navigate the light (in the horizontal plane) in between the occluding panels on the ceiling of a small office. In a second test case, we construct a scene exhibiting a high degree of occlusion, Figure 21, where the light must move through a staggered array of cubes. In both cases, we clearly observe that our gradients lack information about how the moving occlusion boundaries (shadows) affect the objective function, causing most optimization attempts to fail. Gradient descent and L-BFGS in particular exhibit convergence problems in these tests, as they rely on being given a valid descent direction. Adding momentum, as in ADAM, can sometimes recover and “skip over” areas exposed to problematic gradients; in fortunate cases, where the momentum compensates for the gradient bias, ADAM finds acceptable solutions even with inaccurate gradient data.
One additional problem caused by noisy or wrong gradients is the increased chance of a light entering a closed-off scene object, effectively trapping it and stalling optimization. Note, however, that lights passing through a wall cause a truly non-differentiable discontinuity that could still occur even with known discontinuity handling methods and should be addressed by continuous collision detection instead. We leave this line of investigation for future work.

7 Limitations and Future Work

We focus on continuous optimization of a given lighting configuration, relying on the user to provide an initial placement of light sources, either interactively or as part of the original scene description. In the future we will investigate extending our method with mixed-integer optimization approaches to also optimize for non-continuous parameters like the selection of light sources. Currently, we do not handle discontinuities due to moving shadows in our derivatives, which has been done in previous work on camera-based differentiable rendering. Because our lighting objectives measure relatively large areas and our radiance solution is continuously interpolated, our method converges robustly nonetheless. In the future, it will be interesting to investigate discontinuity handling for specific applications, such as lampshade design, that rely on accurately placing an occluding object in front of a light source. In our implementation, we do not employ advanced sampling strategies, like importance sampling, which are often used to increase the efficiency of Monte Carlo methods. Such methods have recently been successfully applied to differentiable image rendering. As the resulting methods may require parameter-dependent sampling, thereby complicating the calculation of derivatives, we leave this line of research in the context of our light-tracing approach for future work. Similarly, we currently use a user-defined number of HSH basis functions to represent the directional component of the radiance field. In the future it could be interesting to investigate choosing the interpolation order adaptively based on the material properties and lighting conditions.

8 Conclusion

In summary, our method enables lighting optimization via differentiable rendering while providing, for the first time to the best of our knowledge, two important features: interactive feedback and easily modifiable, view-independent objectives. We present a novel, analytical adjoint light-tracing formulation rather than relying on automatic differentiation. Our view-independent radiance data structure can be quickly updated, both during light tracing and while painting illumination targets. Combining our adjoint formulation and data structure yields an efficient implementation, which improves per-iteration runtime (at equal sample counts) and convergence rates compared to existing methods. Providing objective function values and gradients in each iteration allows us to use any first-order optimizer in a black-box fashion. Note that we do not use interpolation-basis-function derivatives in our approach; therefore, gradient accuracy does not suffer on low-quality meshes.
Our validation tests show that differentiating the incident flux, while keeping the first ray-surface intersection point constant, captures the gradient information contained in the distribution of discrete light rays. Our method computes more accurate gradients than a naïve approach, as evidenced by finite difference and optimization convergence tests. We also show that, in equal-time comparisons, adjoint light tracing results in better optimization convergence behaviour than differentiable path tracing, and that our method outperforms existing image-based differentiable rendering methods on a baseline test scene. Furthermore, we provide a novel visualization of adjoint gradient contributions to analyze the composition of the objective function gradient. Finally, we show the applicability of our system to various lighting design tasks that cannot be easily handled by state-of-the-art image-based inverse rendering, including large-scale work spaces, artistic installations, and video game refurbishing.

Acknowledgments

We thank our students Mathias Schwengerer and Matthias Preymann for helping with our implementation, our colleagues Balint Istvan Kovacs and Ingrid Erb for their help on the theatre stage example, as well as Henry Ehlers for narrating our video.
The Iberian Coin model (Figures 9 and 17) was created by ‘Itagues’ on Sketchfab. The scenes shown in Figures 1 and 14 were created with pCon.planner [EasternGraphics GmbH 2023] using assets from the pCon Catalog by Vitra, Bene, SIGEL, Object Carpet, EFG, Abstracta, Frovi, about OFFICE, Cascando, Aarsland, Gotessons, Hailo and Bosig. The model of Michelangelo’s ‘David’ (Figure 1) was created by ‘3DWP’ on Sketchfab.
The results in Figure 18 use geometry, texture, and lightmap data from OpenArena maps by Tim Willits, Bob “dmn_clown” Isaac, and Conrad “anyone” Colwood, available under CC-BY-SA licence.

Footnote

1. Most features of the PSDR code are currently only available in its CPU-based implementation. PSDR currently supports only area lights; point, spot, and IES lights are not supported.

Supplementary Material

3662180.supp (3662180.supp.pdf)
Supplementary material
tog-22-0114-File005 (tog-22-0114-file005.mp4)
Supplementary material

References

[1] ANSI/IES. 2020. IES Standard File Format for Photometric Data, ANSI/IES LM-63-19. Retrieved from https://webstore.ansi.org/Standards/IESNA/ANSIIESLM6319
[2] Sai Praveen Bangaru, Tzu-Mao Li, and Frédo Durand. 2020. Unbiased warped-area sampling for differentiable rendering. ACM Trans. Graph. 39, 6 (2020), 1–18.
[3] Andrew M. Bradley. 2019. PDE-constrained Optimization and the Adjoint Method. Retrieved from https://cs.stanford.edu/ambrad/adjoint_tutorial.pdf
[4] DIAL GmbH. 2022. DIALux evo 10.1. Retrieved from https://www.dialux.com/en-GB/dialux
[5] EasternGraphics GmbH. 2023. pCon.Planner. Retrieved from https://pcon-planner.com
[6] Pascal Gautron, Jaroslav Krivanek, Sumanta Pattanaik, and Kadi Bouatouch. 2004. A novel hemispherical basis for accurate and efficient rendering. In Eurographics Workshop on Rendering.
[7] Moritz Geilinger, David Hahn, Jonas Zehnder, Moritz Bächer, Bernhard Thomaszewski, and Stelian Coros. 2020. ADD: Analytically differentiable dynamics for multi-body systems with frictional contact. ACM Trans. Graph. 39, 6 (Nov. 2020), 1–15.
[8] Anastasios Gkaravelis. 2016. Inverse lighting design using a coverage optimization strategy. Visual Comput. 32, 6 (2016), 10.
[9] Ioannis Gkioulekas, Anat Levin, and Todd Zickler. 2016. An evaluation of computational imaging techniques for heterogeneous inverse scattering. In Computer Vision – ECCV 2016. Springer International, 685–701.
[10] Ioannis Gkioulekas, Shuang Zhao, Kavita Bala, Todd Zickler, and Anat Levin. 2013. Inverse volume rendering with material dictionaries. ACM Trans. Graph. 32, 6 (Nov. 2013), 1–13.
[11] Cindy M. Goral, Kenneth E. Torrance, Donald P. Greenberg, and Bennett Battaile. 1984. Modeling the interaction of light between diffuse surfaces. ACM SIGGRAPH Comput. Graph. 18, 3 (Jul. 1984), 213–222.
[12] Robin Green. 2003. Spherical harmonic lighting: The gritty details. In Archives of the Game Developers Conference, Vol. 56.
[13] Donald P. Greenberg, Michael F. Cohen, and Kenneth E. Torrance. 1986. Radiosity: A method for computing global illumination. Visual Comput. 2, 5 (Sep. 1986), 291–297.
[14] Nikolaus Hansen. 2021. c-cmaes. Retrieved from https://github.com/CMA-ES/c-cmaes
[15] Nikolaus Hansen, Sibylle D. Müller, and Petros Koumoutsakos. 2003. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evol. Comput. 11, 1 (Mar. 2003), 1–18.
[16] Miloš Hašan, Jaroslav Křivánek, Bruce Walter, and Kavita Bala. 2009. Virtual spherical lights for many-light rendering of glossy scenes. ACM Trans. Graph. 28, 5 (Dec. 2009), 1–6.
[17] Wenzel Jakob, Sébastien Speierer, Nicolas Roussel, Merlin Nimier-David, Delio Vicini, Tizian Zeltner, Baptiste Nicolet, Miguel Crespo, Vincent Leroy, and Ziyi Zhang. 2022. Mitsuba 3 Renderer. Retrieved from https://mitsuba-renderer.org
[18] Henrik Wann Jensen. 1995. Importance driven path tracing using the photon map. In Eurographics. Springer Vienna, 326–335.
[19] Sam Jin and Sung-Hee Lee. 2019. Lighting layout optimization for 3D indoor scenes. Comput. Graph. Forum 38, 7 (2019), 733–743.
[20] James T. Kajiya. 1986. The rendering equation. ACM SIGGRAPH Comput. Graph. 20, 4 (Aug. 1986), 143–150.
[21] Pramook Khungurn, Daniel Schroeder, Shuang Zhao, Kavita Bala, and Steve Marschner. 2015. Matching real fabrics with micro-appearance models. ACM Trans. Graph. 35, 1 (Dec. 2015), 1–26.
[22] Diederik P. Kingma and Jimmy Ba. 2014. ADAM: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR’14). arXiv:1412.6980 [cs.LG].
[23] J. Krivanek, P. Gautron, S. Pattanaik, and K. Bouatouch. 2005. Radiance caching for efficient global illumination computation. IEEE Trans. Vis. Comput. Graph. 11, 5 (Sep. 2005), 550–561.
[24] Mats Larson and Fredrik Bengzon. 2013. The Finite Element Method: Theory, Implementation, and Applications. Springer-Verlag GmbH.
[25] Jaakko Lehtinen, Matthias Zwicker, Emmanuel Turquin, Janne Kontkanen, Frédo Durand, François X. Sillion, and Timo Aila. 2008. A meshless hierarchical representation for light transport. ACM Trans. Graph. 27, 3 (Aug. 2008), 1–9.
[26] Tzu-Mao Li, Miika Aittala, Frédo Durand, and Jaakko Lehtinen. 2018. Differentiable Monte Carlo ray tracing through edge sampling. ACM Trans. Graph. 37, 6 (Dec. 2018), 1–11.
[27] Wen-Chieh Lin, Tsung-Shian Huang, Tan-Chi Ho, Yueh-Tse Chen, and Jung-Hong Chuang. 2013. Interactive lighting design with hierarchical light representation. Comput. Graph. Forum 32, 4 (2013), 133–142.
[28] Hsueh-Ti Derek Liu, Michael Tao, and Alec Jacobson. 2018. Paparazzi: Surface editing by way of multi-view image processing. ACM Trans. Graph. 37, 6 (Dec. 2018), 1–11.
[29] Guillaume Loubet, Nicolas Holzschuch, and Wenzel Jakob. 2019. Reparameterizing discontinuous integrands for differentiable rendering. ACM Trans. Graph. 38, 6 (2019).
[30] Christian Luksch, Michael Wimmer, and Michael Schwärzler. 2019. Incrementally baked global illumination. In Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games. ACM.
[31] Merlin Nimier-David, Zhao Dong, Wenzel Jakob, and Anton Kaplanyan. 2021. Material and lighting reconstruction for complex indoor scenes with texture-space differentiable rendering. In Eurographics Symposium on Rendering (DL-only Track).
[32] Merlin Nimier-David, Sébastien Speierer, Benoît Ruiz, and Wenzel Jakob. 2020. Radiative backpropagation. ACM Trans. Graph. 39, 4 (Jul. 2020).
[33] Merlin Nimier-David, Delio Vicini, Tizian Zeltner, and Wenzel Jakob. 2019. Mitsuba 2: A retargetable forward and inverse renderer. ACM Trans. Graph. 38, 6 (2019).
[34] Jorge Nocedal. 1980. Updating quasi-Newton matrices with limited storage. Math. Comp. 35, 151 (1980), 773–782.
[35] Mahmoud Omidvar, Mickaël Ribardière, Samuel Carré, Daniel Méneveaux, and Kadi Bouatouch. 2015. A radiance cache method for highly glossy surfaces. Visual Comput. 32, 10 (Oct. 2015), 1239–1250.
[36] OpenArena Team, a FANDOM Games Community. 2008. OpenArena 0.8.1. Retrieved from https://openarena.ws/
[37] Fabio Pellacini, Frank Battaglia, R. Keith Morley, and Adam Finkelstein. 2007. Lighting with paint. ACM Trans. Graph. 26, 2 (Jun. 2007), 9.
[38] P. Poulin, K. Ratib, and M. Jacques. 1997. Sketching shadows and highlights to position lights. In Proceedings of Computer Graphics International. IEEE, Los Alamitos, CA, 56–63.
[39] Yixuan Qiu. 2021. LBFGS++. Retrieved from https://github.com/yixuan/LBFGSpp
[40] Relux Informatik AG. 2022. ReluxDesktop. Retrieved from https://reluxnet.relux.com/en
[41] Michael Schwarz and Peter Wonka. 2014. Procedural design of exterior lighting for buildings with complex constraints. ACM Trans. Graph. 33, 5 (Sep. 2014), 1–16.
[42] Amit Shesh and Baoquan Chen. 2007. Crayon lighting: Sketch-guided illumination of models. In Proceedings of the 5th International Conference on Computer Graphics and Interactive Techniques in Australia and Southeast Asia (GRAPHITE’07). ACM Press, New York, NY, 95.
[43] François X. Sillion, James R. Arvo, Stephen H. Westin, and Donald P. Greenberg. 1991. A global illumination solution for general reflectance distributions. In Proceedings of the 18th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’91). ACM Press.
[44] Peter-Pike Sloan, Jan Kautz, and John Snyder. 2002. Precomputed radiance transfer for real-time rendering in dynamic, low-frequency lighting environments. ACM Trans. Graph. 21, 3 (Jul. 2002), 527–536.
[45] Johannes Sorger, Thomas Ortner, Christian Luksch, Michael Schwärzler, Eduard Gröller, and Harald Piringer. 2016. LiteVis: Integrated visualization for simulation-based decision support in lighting design. IEEE Trans. Vis. Comput. Graph. 22, 1 (Jan. 2016), 290–299.
[46] Jos Stam. 2020. Computing light transport gradients using the adjoint method. arXiv:cs.GR/2006.15059. Retrieved from https://arxiv.org/abs/2006.15059
[47] Eric Veach. 1998. Robust Monte Carlo Methods for Light Transport Simulation. Ph.D. Dissertation. Stanford University, Stanford, CA. AAI9837162.
[48] Delio Vicini, Sébastien Speierer, and Wenzel Jakob. 2021. Path replay backpropagation. ACM Trans. Graph. 40, 4 (Aug. 2021), 1–14.
[49] Andreas Walch, Michael Schwärzler, Christian Luksch, Elmar Eisemann, and Theresia Gschwandtner. 2019. LightGuider: Guiding interactive lighting design using suggestions, provenance, and quality visualization. IEEE Trans. Vis. Comput. Graph. (2019), 1–1.
[50] Mark A. Wieczorek and Matthias Meschede. 2018. SHTools: Tools for working with spherical harmonics. Geochem. Geophys. Geosyst. 19, 8 (Aug. 2018), 2574–2592.
[51] Kai Yan, Christoph Lassner, Brian Budge, Zhao Dong, and Shuang Zhao. 2022. Efficient estimation of boundary integrals for path-space differentiable rendering. ACM Trans. Graph. 41, 4 (Jul. 2022), 1–13.
[52] Sara Fridovich-Keil, Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. 2022. Plenoxels: Radiance fields without neural networks. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’22).
[53] Tizian Zeltner, Sébastien Speierer, Iliyan Georgiev, and Wenzel Jakob. 2021. Monte Carlo estimators for differential light transport. ACM Trans. Graph. 40, 4 (Aug. 2021), 1–16.
[54] Cheng Zhang, Bailey Miller, Kai Yan, Ioannis Gkioulekas, and Shuang Zhao. 2020. Path-space differentiable rendering. ACM Trans. Graph. 39, 4 (2020).
[55] Cheng Zhang, Lifan Wu, Changxi Zheng, Ioannis Gkioulekas, Ravi Ramamoorthi, and Shuang Zhao. 2019. A differential theory of radiative transfer. ACM Trans. Graph. 38, 6 (2019).


Published In

ACM Transactions on Graphics, Volume 43, Issue 3, June 2024, 219 pages. EISSN: 1557-7368. DOI: 10.1145/3613683
This work is licensed under a Creative Commons Attribution International 4.0 License.

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 22 May 2024
Online AM: 03 May 2024
Accepted: 25 March 2024
Revised: 05 March 2024
Received: 29 November 2022

Author Tags

  1. Lighting design
  2. differentiable rendering
  3. global illumination
  4. optimization
  5. ray tracing


Funding Sources

  • Austrian Science Fund (FWF)
