Calibration
For a basic example, see the `CalibrationProblem` document string below. For an extended workflow, see Calibration workflows.
Overview
By default, calibration is performed by a gradient descent optimiser minimising a least-squares error on the provided clinical measurements, using the adjoint method to auto-differentiate solutions of the underlying ODEs with respect to the ODE parameters and initial conditions being optimised. Alternatively, Levenberg-Marquardt or Powell's dog leg optimisation may be employed. These can be faster for the smaller models, but support fewer features.
Gradient descent optimisation
Calibration using a gradient descent optimiser has these advantages:

- Any updater (`optimiser`) from Optimisers.jl can be used.
- Fine-grained control of iteration, including live plots, is possible using IterationControl.jl.
- Stability issues for models with a singularity at zero volume can be mitigated by specifying a loss `penalty`.
- Instead of minimising a least-squares loss, one can give more weight to recent observations by specifying an appropriate `half_life`.
- Convergence may be faster for models with a large number of parameters (e.g., larger neural ODE models).
- Models can be calibrated even if the number of observations is less than the number of model parameters.
By default, optimisation is by gradient descent using Adaptive Moment Estimation (Adam) with a user-specifiable `learning_rate`.
Levenberg-Marquardt / dog leg optimisation
The main advantage of these methods is speed: they are faster for all the models currently provided, with the exception of large neural ODE models. Users can specify an initial trust region radius (the `radius` keyword argument) to mitigate instability.
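As a minimal sketch (the `times` and `volumes` data here are hypothetical), a Gauss-Newton calibration might look like this:

```julia
using TumorGrowth

times = [0.1, 6.0, 16.0, 24.0, 32.0, 39.0]
volumes = [0.00023, 8.4e-5, 6.1e-5, 4.3e-5, 4.3e-5, 4.3e-5]

# `LevenbergMarquardt()` is provided by LeastSquaresOptim.jl and re-exported
# by TumorGrowth; `Dogleg()` works the same way:
problem = CalibrationProblem(times, volumes, gompertz; optimiser=LevenbergMarquardt())

# with a Gauss-Newton optimiser, pass `0` to arrange automatic stopping:
solve!(problem, 0)
p = solution(problem)
```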
Usage
`TumorGrowth.CalibrationProblem` — Type

```julia
CalibrationProblem(times, volumes, model; learning_rate=0.0001, options...)
```
Specify a problem concerned with optimizing the parameters of a tumor growth `model`, given measured `volumes` and corresponding `times`.

See `TumorGrowth` for a list of possible `model`s.

Default optimisation is by Adam gradient descent, using a sum of squares loss. Call `solve!` on a problem to carry out optimisation, as shown in the example below. See "Extended help" for advanced options, including early stopping.

Initial values of the parameters are inferred by default.

Unless frozen (see "Extended help" below), the calibration process learns an initial condition `v0` which is generally different from `volumes[1]`.
Simple solve

```julia
using TumorGrowth

times = [0.1, 6.0, 16.0, 24.0, 32.0, 39.0]
volumes = [0.00023, 8.4e-5, 6.1e-5, 4.3e-5, 4.3e-5, 4.3e-5]
problem = CalibrationProblem(times, volumes, gompertz; learning_rate=0.01)
solve!(problem, 30)  # apply 30 gradient descent updates

julia> loss(problem)  # sum of squares loss
1.7341026729860452e-9

p = solution(problem)

julia> pretty(p)
"v0=0.0002261 v∞=2.792e-5 ω=0.05731"

extended_times = vcat(times, [42.0, 46.0])

julia> gompertz(extended_times, p)[[7, 8]]
2-element Vector{Float64}:
 3.374100207406809e-5
 3.245628908921241e-5
```
Extended help
Solving with iteration controls
Continuing the example above, we may replace the number of iterations, `n`, in `solve!(problem, n)`, with any control from IterationControl.jl:
```julia
using IterationControl
solve!(
    problem,
    Step(1),             # apply controls every 1 iteration...
    WithLossDo(),        # print loss
    Callback(problem -> print(pretty(solution(problem)))),  # print parameters
    InvalidValue(),      # stop for ±Inf/NaN loss
    NumberSinceBest(5),  # stop when lowest loss so far was 5 steps prior
    TimeLimit(1/60),     # stop after one minute
    NumberLimit(400),    # stop after 400 steps
)
p = solution(problem)

julia> loss(problem)
7.609310030658547e-10
```
See IterationControl.jl for all options.
Controlled iteration as above is not recommended if you specify `optimiser=LevenbergMarquardt()` or `optimiser=Dogleg()`, because the internal state of these optimisers is reset at every `Step`. Instead, to arrange automatic stopping, use `solve!(problem, 0)`.
Visualizing results
```julia
using Plots
scatter(times, volumes, xlab="time", ylab="volume", label="train")
plot!(problem, label="prediction")
```
Keyword options
- `p0=guess_parameters(times, volumes, model)`: initial value of the model parameters.

- `lower`: named tuple indicating lower bounds on components of the model parameter `p`. For example, if `lower=(; v0=0.1)`, then this introduces the constraint `p.v0 > 0.1`. The model-specific default value is `TumorGrowth.lower_default(model)`.

- `upper`: named tuple indicating upper bounds on components of the model parameter `p`. For example, if `upper=(; v0=100)`, then this introduces the constraint `p.v0 < 100`. The model-specific default value is `TumorGrowth.upper_default(model)`.

- `frozen`: a named tuple, such as `(; v0=nothing, λ=1/2)`, indicating parameters to be frozen at specified values during optimisation; a `nothing` value means freeze at the initial value. The model-specific default value is `TumorGrowth.frozen_default(model)`.

- `learning_rate > 0`: learning rate for Adam gradient descent optimisation. Ignored if `optimiser` is explicitly specified.

- `optimiser`: optimisation algorithm, which will be one of two varieties:
  - A gradient descent optimiser: this must be from Optimisers.jl or implement the same API.
  - A Gauss-Newton optimiser: either `LevenbergMarquardt()` or `Dogleg()`, provided by LeastSquaresOptim.jl (but re-exported by `TumorGrowth`).

  The model-specific default value is `TumorGrowth.optimiser_default(model)`, unless `learning_rate` is specified, in which case it will be `Optimisers.Adam(learning_rate)`.

- `scale`: a scaling function with the property that `p = scale(q)` has a value of the same order of magnitude for the model parameters being optimised, whenever `q` has the same form as a model parameter `p` but with all values equal to one. Scaling can help components of `p` converge at a similar rate. Ignored by Gauss-Newton optimisers. The model-specific default is `TumorGrowth.scale_default(model)`.

- `radius > 0`: initial trust region radius. This is ignored unless `optimiser` is a Gauss-Newton optimiser. The model-specific default is `TumorGrowth.radius_default(model, optimiser)`, which is typically `10.0` for `LevenbergMarquardt()` and `1.0` for `Dogleg()`.

- `half_life=Inf`: set to a real positive number to replace the sum of squares loss with a weighted version; weights decay in reverse time with the specified `half_life`. Ignored by Gauss-Newton optimisers.

- `penalty ≥ 0`: the larger the positive value, the more a loss penalty discourages large differences in `v0` and `v∞` on a log scale. Helps discourage `v0` and `v∞` drifting out of bounds in models whose ODEs have a singularity at the origin. The model must include `v0` and `v∞` as parameters. Ignored by Gauss-Newton optimisers. The model-specific default value is `TumorGrowth.penalty_default(model)`.

- `ode_options...`: optional keyword arguments for the ODE solver, `DifferentialEquations.solve`, from DifferentialEquations.jl. Not relevant for models using analytic solutions (see the table at `TumorGrowth`).
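As a sketch of how several of these options combine (the data and the particular option values here are hypothetical, not recommendations):

```julia
using TumorGrowth

times = [0.1, 6.0, 16.0, 24.0, 32.0, 39.0]
volumes = [0.00023, 8.4e-5, 6.1e-5, 4.3e-5, 4.3e-5, 4.3e-5]

problem = CalibrationProblem(
    times, volumes, gompertz;
    frozen=(; v0=nothing),  # freeze v0 at its inferred initial value
    upper=(; ω=1.0),        # introduce the constraint p.ω < 1.0
    half_life=21.0,         # give more weight to recent observations
    learning_rate=0.01,     # implies optimiser=Optimisers.Adam(0.01)
)
solve!(problem, 100)  # apply 100 gradient descent updates
p = solution(problem)
```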
```julia
CalibrationProblem(problem; kwargs...)
```

Construct a new calibration problem out of an existing `problem`, but supply new keyword arguments, `kwargs`. Unspecified keyword arguments fall back to defaults, except for `p0`, which falls back to `solution(problem)`.
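For instance (a sketch, assuming `problem` has already been solved by gradient descent as in the examples above), one might use this constructor to polish a solution with a Gauss-Newton optimiser:

```julia
# the new problem starts from p0 = solution(problem):
finishing_problem = CalibrationProblem(problem; optimiser=LevenbergMarquardt())
solve!(finishing_problem, 0)  # `0` arranges automatic stopping
p = solution(finishing_problem)
```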