## Overview of the Mathematics

The primary contribution of this paper is a mathematical representation and decomposition of an ordinary differential equation (ODE) that, for the problem studied in the paper, enables the fast computation of refractive light transport through a liquid crystal.
This approach is rather general and can be used for other problems.

The starting point is the operator-valued ODE

$$
\frac{\operatorname{d}}{\operatorname{d}y}\psi\left(y\right) = \mathbf{A}\left(y\right)\psi\left(y\right)
~,
$$

with some initial condition $\psi\left(0\right) = \psi_0$, and where $\mathbf{A}(y)$ is a complex-valued $2\times 2$ matrix that is a function of $y$.
We seek a solution $\psi(y)$.
Assume that $\mathbf{A}$ does not commute with itself at different points, i.e. it does not hold that $\mathbf{A}(y_1)\mathbf{A}(y_2) = \mathbf{A}(y_2)\mathbf{A}(y_1)$ for every $y_1,y_2$.
Otherwise, the solution would reduce to the simple matrix exponential $\psi(y) = e^{\int_0^y \mathbf{A}(s)\operatorname{d}s}\,\psi_0$ (see the theory of matrix differential equations).
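To make the commuting case concrete, here is a minimal numerical sketch (assuming NumPy and SciPy are available; the matrix $\mathbf{X}$ and the coefficient $\sin(y)$ are arbitrary illustrative choices, not from the paper). When $\mathbf{A}(y) = \sin(y)\,\mathbf{X}$ for a fixed $\mathbf{X}$, the matrix trivially commutes with itself, and the closed-form matrix exponential matches direct numerical integration:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Hypothetical example: A(y) = sin(y) * X for a fixed matrix X, so that
# A(y1) A(y2) = A(y2) A(y1) and the solution is a plain matrix exponential.
X = np.array([[0.3, 1.0], [0.2, -0.3]], dtype=complex)
psi0 = np.array([1.0 + 0j, 0.5 + 0j])

def rhs(y, psi):
    return (np.sin(y) * X) @ psi

# Direct numerical integration of the ODE over [0, 2]...
num = solve_ivp(rhs, (0.0, 2.0), psi0, rtol=1e-10, atol=1e-12).y[:, -1]

# ...versus the closed form psi(y) = exp( (integral of sin from 0 to y) X ) psi0,
# where the integral of sin over [0, 2] is 1 - cos(2).
closed = expm((1.0 - np.cos(2.0)) * X) @ psi0

assert np.allclose(num, closed, atol=1e-6)
```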

To proceed, we apply powerful tools that arise from the Magnus expansion:
expand the matrix $\mathbf{A}$ as a linear combination of constant (i.e. independent of $y$), but otherwise arbitrary, matrices, as follows

$$
\mathbf{A}(y) = \sum_{j=1}^m a_j(y) \mathbf{X}_j
~,
$$

where $a_j(y)$ are scalar-valued coefficient functions.
The solution to the ODE can then be written as an ordered product of matrix exponentials

$$
\psi(y) = e^{f_1(y)\mathbf{X}_1} e^{f_2(y)\mathbf{X}_2} \cdots e^{f_m(y)\mathbf{X}_m} \, \psi_0
~.
$$

The relation between the (known) $a_j$ and the (unknown) $f_j$ is given by a rather complicated differential system (see Wei and Norman [1964] or the paper).
To devise a simple enough system, we choose the following matrices:

$$
\mathbf{X}_1 = \begin{bmatrix}
0 & 1 \\
0 & 0
\end{bmatrix} ~, \quad
\mathbf{X}_2 = \begin{bmatrix}
0 & 0 \\
1 & 0
\end{bmatrix} ~, \quad
\mathbf{X}_3 = \begin{bmatrix}
1 & 0 \\
0 & -1
\end{bmatrix} ~, \quad
\mathbf{X}_4 = \mathbf{I}
~.
$$
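With this basis, the coefficients $a_j$ can be read off directly from the entries of $\mathbf{A}$: $a_1$ and $a_2$ are the off-diagonal entries, while $a_3$ and $a_4$ are half the difference and half the sum of the diagonal entries. A minimal sketch (assuming NumPy; the matrix `A` below is an arbitrary example):

```python
import numpy as np

# The four basis matrices defined above.
X1 = np.array([[0, 1], [0, 0]], dtype=complex)
X2 = np.array([[0, 0], [1, 0]], dtype=complex)
X3 = np.array([[1, 0], [0, -1]], dtype=complex)
X4 = np.eye(2, dtype=complex)

def coefficients(A):
    """Read off a_1..a_4 such that A = a1 X1 + a2 X2 + a3 X3 + a4 X4."""
    a1 = A[0, 1]
    a2 = A[1, 0]
    a3 = (A[0, 0] - A[1, 1]) / 2
    a4 = (A[0, 0] + A[1, 1]) / 2
    return a1, a2, a3, a4

# Arbitrary complex 2x2 matrix (illustrative only).
A = np.array([[1 + 2j, 3], [4j, -2]], dtype=complex)
a1, a2, a3, a4 = coefficients(A)
assert np.allclose(a1 * X1 + a2 * X2 + a3 * X3 + a4 * X4, A)
```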

Working through the math yields the following (exact) final solution:

$$
\psi(y) = e^{f_4}\begin{bmatrix}
\left(1+f_1f_2\right)e^{f_3} & f_1e^{-f_3} \\
f_2e^{f_3} & e^{-f_3}
\end{bmatrix}\psi_0
\tag{1}
~,
$$
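The matrix in Eq. 1 is exactly the ordered product of the four elementary exponentials $e^{f_1\mathbf{X}_1} e^{f_2\mathbf{X}_2} e^{f_3\mathbf{X}_3} e^{f_4\mathbf{X}_4}$, which can be checked numerically for arbitrary complex values of $f_{1,2,3,4}$ (a sketch assuming NumPy/SciPy; the values chosen are illustrative):

```python
import numpy as np
from scipy.linalg import expm

X1 = np.array([[0, 1], [0, 0]], dtype=complex)
X2 = np.array([[0, 0], [1, 0]], dtype=complex)
X3 = np.array([[1, 0], [0, -1]], dtype=complex)
X4 = np.eye(2, dtype=complex)

# Arbitrary complex values for f_1..f_4 (illustrative only).
f1, f2, f3, f4 = 0.4 + 0.2j, -0.3j, 0.25, 0.1 + 0.5j

# Ordered product of the four elementary matrix exponentials...
product = expm(f1 * X1) @ expm(f2 * X2) @ expm(f3 * X3) @ expm(f4 * X4)

# ...equals the closed form of Eq. 1.
eq1 = np.exp(f4) * np.array([[(1 + f1 * f2) * np.exp(f3), f1 * np.exp(-f3)],
                             [f2 * np.exp(f3), np.exp(-f3)]])
assert np.allclose(product, eq1)
```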

with the scalar ODEs for $f_{1,2,3,4}$, subject to the initial conditions $f_j(0) = 0$, given by

$$
\begin{aligned}
\frac{\operatorname{d}}{\operatorname{d}{y}} f_1(y) &= -a_2(y)f_1(y)^2 + 2a_3(y)f_1(y) + a_1(y)
~,
\\
\frac{\operatorname{d}}{\operatorname{d}{y}} f_2(y) &= 2\left[a_2(y)f_1(y) - a_3(y)\right]f_2(y) + a_2(y)
~,
\\
\frac{\operatorname{d}}{\operatorname{d}{y}} f_3(y) &= -a_2(y)f_1(y) + a_3(y)
~,
\\
\frac{\operatorname{d}}{\operatorname{d}{y}} f_4(y) &= a_4(y)
~.
\end{aligned}
$$

Once expressions for $f_{1,2,3,4}$ are found, $\psi(y)$ is computed directly via Eq. 1.
We have effectively reduced the original operator-valued ODE to the set of four scalar-valued ODEs above.
Solving these scalar ODEs is, in general, a far easier problem than the operator-valued ODE we started with:
the ODEs for $f_3,f_4$ are separable and reduce to simple integration, while the ODE for $f_2$ is linear.
The only difficulty arises in the quadratic ODE for $f_1$, which is a Riccati equation.
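As a sanity check of the reduction, the sketch below (assuming NumPy and SciPy; the coefficient functions $a_j$ are arbitrary illustrative choices, not those of the paper) integrates the four scalar ODEs with $f_j(0)=0$ and compares the result assembled via Eq. 1 against a direct integration of the original matrix ODE with $\psi_0 = \mathbf{I}$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical coefficient functions a_1..a_4 (illustrative, non-commuting case).
def a(y):
    return np.array([np.sin(y), 0.5 * np.cos(y), 0.3 * y, 0.1], dtype=complex)

# Reassemble A(y) = a1 X1 + a2 X2 + a3 X3 + a4 I from its coefficients.
def A(y):
    a1, a2, a3, a4 = a(y)
    return np.array([[a3 + a4, a1], [a2, -a3 + a4]])

# Reference: integrate the original 2x2 operator-valued ODE directly,
# with psi0 = I (flattened to a vector for the solver).
psi0 = np.eye(2, dtype=complex).ravel()
ref = solve_ivp(lambda y, p: (A(y) @ p.reshape(2, 2)).ravel(),
                (0.0, 1.5), psi0, rtol=1e-10, atol=1e-12).y[:, -1].reshape(2, 2)

# The four scalar ODEs, with f_j(0) = 0.
def f_rhs(y, f):
    f1, f2, f3, f4 = f
    a1, a2, a3, a4 = a(y)
    return [-a2 * f1**2 + 2 * a3 * f1 + a1,   # Riccati equation for f1
            2 * (a2 * f1 - a3) * f2 + a2,     # linear in f2
            -a2 * f1 + a3,                    # reduces to integration
            a4]                               # reduces to integration

f1, f2, f3, f4 = solve_ivp(f_rhs, (0.0, 1.5), np.zeros(4, dtype=complex),
                           rtol=1e-10, atol=1e-12).y[:, -1]

# Assemble Eq. 1 and compare against the direct integration.
M = np.exp(f4) * np.array([[(1 + f1 * f2) * np.exp(f3), f1 * np.exp(-f3)],
                           [f2 * np.exp(f3), np.exp(-f3)]])
assert np.allclose(M, ref, atol=1e-6)
```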

Note also that because $\mathbf{A}$ and the scalar functions $f_{1,2,3,4}$ may be complex, the original operator-valued ODE might be highly oscillatory, frustrating numerical approaches and approximate solutions.
The representation of the solution as a product of matrix exponentials (Eq. 1) “extracts” some of this oscillatory behaviour into the exponents, so it is reasonable to expect the scalar-valued ODEs to be, in general, “better behaved” than the original problem.
This is exactly what happens in the context discussed in the paper, and the scalar ODEs (which are an *exact* representation of the original problem) are orders-of-magnitude faster to solve numerically.

The analysis above applies to an arbitrary operator-valued ODE with a complex-valued $2\times 2$ matrix $\mathbf{A}$, and can be extended in like manner to higher dimensions.

Presented at SIGGRAPH 2020