This page was generated from doc/euclidean/catmull-rom-non-uniform.ipynb.

Non-Uniform Catmull–Rom Splines

[CR74] describes only the uniform case, but it is straightforward to extend the method to non-uniform splines.

The method creates three linear interpolations (and extrapolations) between neighboring pairs of the four relevant control points and then blends the three resulting points with a quadratic B-spline basis function.

As we have seen in the notebook about uniform Catmull–Rom splines and as we will again see in the notebook about the Barry–Goldman algorithm, the respective degrees can be reversed. This means that equivalently, two (overlapping) quadratic Lagrange interpolations can be used, followed by linearly blending the two resulting points.

Since the latter is both easier to implement and easier to wrap one’s head around, we use it in the following derivations.

We will derive the tangent vectors at the segment boundaries (which will serve as the basis for deriving non-uniform Kochanek–Bartels splines later) and the basis matrix. See the notebook about the Barry–Goldman algorithm for an alternative (but closely related) derivation.

[1]:
import sympy as sp
sp.init_printing()
[2]:
x3, x4, x5, x6 = sp.symbols('xbm3:7')
[3]:
t, t3, t4, t5, t6 = sp.symbols('t t3:7')

We use some tools from utility.py:

[4]:
from utility import NamedExpression, NamedMatrix

As shown in the notebook about Lagrange interpolation, it can be implemented using Neville’s algorithm:

[5]:
def lerp(xs, ts, t):
    """Linear interpolation.

    Returns the interpolated value at time *t*,
    given the two values *xs* at times *ts*.

    """
    x_begin, x_end = xs
    t_begin, t_end = ts
    return (x_begin * (t_end - t) + x_end * (t - t_begin)) / (t_end - t_begin)
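
For example (a quick demonstration that is not a cell of the original notebook), linearly interpolating between \(\boldsymbol{x}_3\) and \(\boldsymbol{x}_4\) over the interval from \(t_3\) to \(t_4\):

lerp([x3, x4], [t3, t4], t)

This yields the symbolic expression \(\frac{\boldsymbol{x}_3 (t_4 - t) + \boldsymbol{x}_4 (t - t_3)}{t_4 - t_3}\).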
[6]:
def neville(xs, ts, t):
    """Lagrange interpolation using Neville's algorithm.

    Returns the interpolated value at time *t*,
    given the values *xs* at times *ts*.

    """
    assert len(xs) == len(ts)
    while len(xs) > 1:
        step = len(ts) - len(xs) + 1
        xs = [
            lerp(*args, t)
            for args in zip(zip(xs, xs[1:]), zip(ts, ts[step:]))]
    return xs[0]

Alternatively, sympy.interpolate() could be used.
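
As a quick sanity check (this cell is not part of the original notebook), we can compare our neville() function with sympy.interpolate() for some arbitrary example data:

example_times = [0, 2, 3]
example_values = [1, 5, 0]
assert sp.simplify(
    neville(example_values, example_times, t) -
    sp.interpolate(list(zip(example_times, example_values)), t)) == 0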

We use two overlapping quadratic Lagrange interpolations followed by linear blending:

[7]:
p4 = NamedExpression(
    'pbm4',
    lerp([
        neville([x3, x4, x5], [t3, t4, t5], t),
        neville([x4, x5, x6], [t4, t5, t6], t),
    ], [t4, t5], t))

Note

Since the two invocations of Neville’s algorithm overlap, some values that are used by both are unnecessarily computed by both. It would be more efficient to calculate each of these values only once.

The Barry–Goldman algorithm avoids this repeated computation.

But here, since we are using symbolic expressions, this doesn’t really matter because the redundant expressions should be simplified away by SymPy.

[8]:
p4.simplify()
[8]:
$\displaystyle \boldsymbol{p}_{4} = \frac{\left(t - t_{4}\right) \left(t_{3} - t_{4}\right) \left(t_{3} - t_{5}\right) \left(- \left(t - t_{4}\right) \left(t_{4} - t_{5}\right) \left(- \boldsymbol{x}_{5} \left(t - t_{6}\right) + \boldsymbol{x}_{6} \left(t - t_{5}\right)\right) + \left(t - t_{6}\right) \left(t_{5} - t_{6}\right) \left(- \boldsymbol{x}_{4} \left(t - t_{5}\right) + \boldsymbol{x}_{5} \left(t - t_{4}\right)\right)\right) - \left(t - t_{5}\right) \left(t_{4} - t_{6}\right) \left(t_{5} - t_{6}\right) \left(- \left(t - t_{3}\right) \left(t_{3} - t_{4}\right) \left(- \boldsymbol{x}_{4} \left(t - t_{5}\right) + \boldsymbol{x}_{5} \left(t - t_{4}\right)\right) + \left(t - t_{5}\right) \left(t_{4} - t_{5}\right) \left(- \boldsymbol{x}_{3} \left(t - t_{4}\right) + \boldsymbol{x}_{4} \left(t - t_{3}\right)\right)\right)}{\left(t_{3} - t_{4}\right) \left(t_{3} - t_{5}\right) \left(t_{4} - t_{5}\right)^{2} \left(t_{4} - t_{6}\right) \left(t_{5} - t_{6}\right)}$
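
As a plausibility check (not part of the original notebook), the segment should pass through \(\boldsymbol{x}_4\) at \(t = t_4\) and through \(\boldsymbol{x}_5\) at \(t = t_5\):

assert sp.simplify(p4.expr.subs(t, t4) - x4) == 0
assert sp.simplify(p4.expr.subs(t, t5) - x5) == 0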

The following expressions can be simplified by introducing a few new symbols \(\Delta_i\):

[9]:
delta3, delta4, delta5 = sp.symbols('Delta3:6')
deltas = {
    t4 - t3: delta3,
    t5 - t4: delta4,
    t6 - t5: delta5,
    t5 - t3: delta3 + delta4,
    t6 - t4: delta4 + delta5,
    t6 - t3: delta3 + delta4 + delta5,
    # A few special cases that SymPy has a hard time resolving:
    t4 + t4 - t3: t4 + delta3,
    t6 + t6 - t3: t6 + delta3 + delta4 + delta5,
}

Tangent Vectors

To get the tangent vectors at the control points, we just have to take the first derivative …

[10]:
pd4 = p4.diff(t)

… and evaluate it at \(t_4\) and \(t_5\):

[11]:
start_tangent = pd4.evaluated_at(t, t4)
start_tangent.subs(deltas).simplify()
[11]:
$\displaystyle \left.{\frac{d}{d t} \boldsymbol{p}_{4}}\right\rvert_{t=t_{4}} = \frac{\Delta_{3}^{2} \left(- \boldsymbol{x}_{4} + \boldsymbol{x}_{5}\right) + \Delta_{4}^{2} \left(- \boldsymbol{x}_{3} + \boldsymbol{x}_{4}\right)}{\Delta_{3} \Delta_{4} \left(\Delta_{3} + \Delta_{4}\right)}$
[12]:
end_tangent = pd4.evaluated_at(t, t5)
end_tangent.subs(deltas).simplify()
[12]:
$\displaystyle \left.{\frac{d}{d t} \boldsymbol{p}_{4}}\right\rvert_{t=t_{5}} = \frac{- \Delta_{4}^{2} \boldsymbol{x}_{5} + \Delta_{4}^{2} \boldsymbol{x}_{6} - \Delta_{5}^{2} \boldsymbol{x}_{4} + \Delta_{5}^{2} \boldsymbol{x}_{5}}{\Delta_{4} \Delta_{5} \left(\Delta_{4} + \Delta_{5}\right)}$

Both results lead to the same general expression (which is expected, since the incoming and outgoing tangents are supposed to be equal):

\begin{align*} \boldsymbol{\dot{x}}_i &= \frac{ (t_{i+1} - t_i)^2 (\boldsymbol{x}_i - \boldsymbol{x}_{i-1}) + (t_i - t_{i-1})^2 (\boldsymbol{x}_{i+1} - \boldsymbol{x}_i) }{ (t_{i+1} - t_i)(t_i - t_{i-1})(t_{i+1} - t_{i-1}) } \\ &= \frac{ {\Delta_i}^2 (\boldsymbol{x}_i - \boldsymbol{x}_{i-1}) + {\Delta_{i-1}}^2 (\boldsymbol{x}_{i+1} - \boldsymbol{x}_i) }{ \Delta_i \Delta_{i-1} (\Delta_i + \Delta_{i-1}) } \end{align*}

Equivalently, this can be written as:

\begin{align*} \boldsymbol{\dot{x}}_i &= \frac{ (t_{i+1} - t_i) (\boldsymbol{x}_i - \boldsymbol{x}_{i-1}) }{ (t_i - t_{i-1})(t_{i+1} - t_{i-1}) } + \frac{ (t_i - t_{i-1}) (\boldsymbol{x}_{i+1} - \boldsymbol{x}_i) }{ (t_{i+1} - t_i)(t_{i+1} - t_{i-1}) } \\ &= \frac{ \Delta_i (\boldsymbol{x}_i - \boldsymbol{x}_{i-1}) }{ \Delta_{i-1} (\Delta_i + \Delta_{i-1}) } + \frac{ \Delta_{i-1} (\boldsymbol{x}_{i+1} - \boldsymbol{x}_i) }{ \Delta_i (\Delta_i + \Delta_{i-1}) } \end{align*}
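
As a further check (again not part of the original notebook), SymPy can confirm that this second form agrees with the start tangent computed above for \(i = 4\):

alternative_form = (
    (t5 - t4) * (x4 - x3) / ((t4 - t3) * (t5 - t3)) +
    (t4 - t3) * (x5 - x4) / ((t5 - t4) * (t5 - t3)))
assert sp.simplify(alternative_form - start_tangent.expr) == 0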

An alternative (but very similar) way to derive these tangent vectors is shown in the notebook about the Barry–Goldman algorithm.

And there is yet another way to calculate the tangents, without even needing to obtain a cubic polynomial and its derivative: Since we are using a linear blend of two quadratic polynomials, we know that at the beginning (\(t = t_4\)) only the first quadratic polynomial has an influence and at the end (\(t = t_5\)) only the second quadratic polynomial is relevant. Therefore, to determine the tangent vector at the beginning of the segment, it is sufficient to get the derivative of the first quadratic polynomial.

[13]:
first_quadratic = neville([x3, x4, x5], [t3, t4, t5], t)
[14]:
sp.degree(first_quadratic, t)
[14]:
$\displaystyle 2$
[15]:
first_quadratic.diff(t).subs(t, t4)
[15]:
$\displaystyle \frac{\frac{\left(- t_{3} + t_{4}\right) \left(- \boldsymbol{x}_{4} + \boldsymbol{x}_{5}\right)}{- t_{4} + t_{5}} + \frac{\left(- t_{4} + t_{5}\right) \left(- \boldsymbol{x}_{3} + \boldsymbol{x}_{4}\right)}{- t_{3} + t_{4}}}{- t_{3} + t_{5}}$

This can be written as what is sometimes called the standard three-point difference formula:

\begin{equation*} \boldsymbol{\dot{x}}_i = \frac{ \Delta_i \boldsymbol{v}_{i-1} + \Delta_{i-1} \boldsymbol{v}_i }{ \Delta_{i-1} + \Delta_i }, \end{equation*}

with \(\Delta_i = t_{i+1} - t_i\) and \(\boldsymbol{v}_i = \frac{\boldsymbol{x}_{i+1} - \boldsymbol{x}_i}{\Delta_i}\).

[dB78] calls this piecewise cubic Bessel interpolation, and it has also been called Bessel tangent method, Overhauser method and Bessel–Overhauser splines.

Note

Even though this formula is commonly associated with the name Overhauser, it is not describing the tangents of Overhauser splines (as presented in [Ove68]).

Long story short, it’s the same as we had above:

[16]:
assert sp.simplify(_ - start_tangent.expr) == 0

The first derivative of the second quadratic polynomial can be used to get the tangent vector at the end of the segment.

[17]:
second_quadratic = neville([x4, x5, x6], [t4, t5, t6], t)
second_quadratic.diff(t).subs(t, t5)
[17]:
$\displaystyle \frac{\frac{\left(- t_{4} + t_{5}\right) \left(- \boldsymbol{x}_{5} + \boldsymbol{x}_{6}\right)}{- t_{5} + t_{6}} + \frac{\left(- t_{5} + t_{6}\right) \left(- \boldsymbol{x}_{4} + \boldsymbol{x}_{5}\right)}{- t_{4} + t_{5}}}{- t_{4} + t_{6}}$
[18]:
assert sp.simplify(_ - end_tangent.expr) == 0

You might encounter another way to write the equation for \(\boldsymbol{\dot{x}}_4\) (e.g. at https://stackoverflow.com/a/23980479/) …

[19]:
(x4 - x3) / (t4 - t3) - (x5 - x3) / (t5 - t3) + (x5 - x4) / (t5 - t4)
[19]:
$\displaystyle \frac{- \boldsymbol{x}_{4} + \boldsymbol{x}_{5}}{- t_{4} + t_{5}} - \frac{- \boldsymbol{x}_{3} + \boldsymbol{x}_{5}}{- t_{3} + t_{5}} + \frac{- \boldsymbol{x}_{3} + \boldsymbol{x}_{4}}{- t_{3} + t_{4}}$

… but again, this is equivalent to the equation shown above:

[20]:
assert sp.simplify(_ - start_tangent.expr) == 0
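
To summarize, here is a minimal numeric sketch (not part of the original notebook; the function name tangent() is made up for illustration) of the three-point difference formula shown above:

def tangent(xs, ts):
    """Three-point difference tangent at the middle point.

    Returns the tangent at time ``ts[1]``,
    given the three values *xs* at times *ts*.

    """
    x_before, x_i, x_after = xs
    t_before, t_i, t_after = ts
    delta_before = t_i - t_before  # Delta_{i-1}
    delta_i = t_after - t_i  # Delta_i
    v_before = (x_i - x_before) / delta_before  # v_{i-1}
    v_i = (x_after - x_i) / delta_i  # v_i
    return (delta_i * v_before + delta_before * v_i) / (delta_before + delta_i)

This also works with the symbolic values from above, reproducing the start tangent:

assert sp.simplify(tangent([x3, x4, x5], [t3, t4, t5]) - start_tangent.expr) == 0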

Using Non-Uniform Bézier Segments

Similar to the uniform case, the above equation for the tangent vectors can be used to construct non-uniform Hermite splines or, after multiplying them by the appropriate parameter interval and dividing by 3, to obtain the two additional control points for non-uniform cubic Bézier spline segments:

\begin{align*} \boldsymbol{\tilde{x}}_i^{(+)} &= \boldsymbol{x}_i + \frac{\Delta_i \boldsymbol{\dot{x}}_i}{3} \\ &= \boldsymbol{x}_i + \frac{\Delta_i}{3} \frac{ \Delta_i \boldsymbol{v}_{i-1} + \Delta_{i-1} \boldsymbol{v}_i }{ \Delta_{i-1} + \Delta_i } \\ &= \boldsymbol{x}_i + \frac{ {\Delta_i}^2 (\boldsymbol{x}_i - \boldsymbol{x}_{i-1}) }{ 3 \Delta_{i-1} (\Delta_i + \Delta_{i-1}) } + \frac{ \Delta_{i-1} (\boldsymbol{x}_{i+1} - \boldsymbol{x}_i) }{ 3 (\Delta_i + \Delta_{i-1}) } \\ \boldsymbol{\tilde{x}}_i^{(-)} &= \boldsymbol{x}_i - \frac{\Delta_{i-1} \boldsymbol{\dot{x}}_i}{3} \\ &= \boldsymbol{x}_i - \frac{\Delta_{i-1}}{3} \frac{ \Delta_i \boldsymbol{v}_{i-1} + \Delta_{i-1} \boldsymbol{v}_i }{ \Delta_{i-1} + \Delta_i } \\ &= \boldsymbol{x}_i - \frac{ \Delta_i (\boldsymbol{x}_i - \boldsymbol{x}_{i-1}) }{ 3 (\Delta_i + \Delta_{i-1}) } - \frac{ {\Delta_{i-1}}^2 (\boldsymbol{x}_{i+1} - \boldsymbol{x}_i) }{ 3 \Delta_i (\Delta_i + \Delta_{i-1}) } \end{align*}

This is again using \(\Delta_i = t_{i+1} - t_i\) and \(\boldsymbol{v}_i = \frac{\boldsymbol{x}_{i+1} - \boldsymbol{x}_i}{\Delta_i}\).
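
As a final cross-check (not part of the original notebook; the variable names are made up for illustration), we can verify the expanded form of \(\boldsymbol{\tilde{x}}_4^{(+)}\) against its definition in terms of the start tangent:

x4tilde_plus = x4 + (t5 - t4) * start_tangent.expr / 3
expanded = (
    x4 +
    (t5 - t4)**2 * (x4 - x3) / (3 * (t4 - t3) * (t5 - t3)) +
    (t4 - t3) * (x5 - x4) / (3 * (t5 - t3)))
assert sp.simplify(x4tilde_plus - expanded) == 0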

Animation

To illustrate what two quadratic Lagrange interpolations followed by linear blending might look like, we can generate an animation by means of the file catmull_rom.py:

[21]:
from catmull_rom import animation_2_1, animation_1_2
from IPython.display import HTML
[22]:
vertices = [
    (1, 0),
    (0.5, 1),
    (6, 2),
    (5, 0),
]
[23]:
times = [
    0,
    1,
    6,
    8,
]
[24]:
ani_2_1 = animation_2_1(vertices, times)
[25]:
HTML(ani_2_1.to_jshtml(default_mode='reflect'))
[25]:

At the beginning of this notebook we claimed that two quadratic interpolations followed by linear blending are easier to understand. To prove this, let’s have a look at what three linear interpolations (and extrapolations) followed by quadratic B-spline blending would look like:

[26]:
ani_1_2 = animation_1_2(vertices, times)
[27]:
HTML(ani_1_2.to_jshtml(default_mode='reflect'))
[27]:

Would you agree that this is less straightforward?

If you would rather replace the quadratic B-spline basis function with a bunch of linear interpolations (using De Boor’s algorithm), take a look at the notebook about the Barry–Goldman algorithm.