Reference

TumorGrowth.CalibrationProblem – Method
    CalibrationProblem(times, volumes, model; learning_rate=0.0001, options...)

Specify a problem concerned with optimizing the parameters of a tumor growth model, given measured volumes and corresponding times.

See TumorGrowth for a list of possible models.

Default optimisation is by Adam gradient descent, using a sum of squares loss. Call solve! on a problem to carry out optimisation, as shown in the example below. See "Extended help" for advanced options, including early stopping.

Initial values of the parameters are inferred by default.

Unless frozen (see "Extended help" below), the calibration process learns an initial condition v0 which is generally different from volumes[1].

Simple solve

using TumorGrowth

times = [0.1, 6.0, 16.0, 24.0, 32.0, 39.0]
volumes = [0.00023, 8.4e-5, 6.1e-5, 4.3e-5, 4.3e-5, 4.3e-5]
problem = CalibrationProblem(times, volumes, gompertz; learning_rate=0.01)
solve!(problem, 30)    # apply 30 gradient descent updates
julia> loss(problem)   # sum of squares loss
1.7341026729860452e-9

p = solution(problem)
julia> pretty(p)
"v0=0.0002261  v∞=2.792e-5  ω=0.05731"


extended_times = vcat(times, [42.0, 46.0])
julia> gompertz(extended_times, p)[[7, 8]]
2-element Vector{Float64}:
 3.374100207406809e-5
 3.245628908921241e-5

Extended help

Solving with iteration controls

Continuing the example above, we may replace the number of iterations, n, in solve!(problem, n), with any control from IterationControl.jl:

using IterationControl
solve!(
  problem,
  Step(1),            # apply controls every 1 iteration...
  WithLossDo(),       # print loss
  Callback(problem -> print(pretty(solution(problem)))), # print parameters
  InvalidValue(),     # stop for ±Inf/NaN loss
  NumberSinceBest(5), # stop when lowest loss so far was 5 steps prior
  TimeLimit(1/60),    # stop after one minute
  NumberLimit(400),   # stop after 400 steps
)
p = solution(problem)
julia> loss(problem)
7.609310030658547e-10

See IterationControl.jl for all options.

Note

Controlled iteration as above is not recommended if you specify optimiser=LevenbergMarquardt() or optimiser=Dogleg() because the internal state of these optimisers is reset at every Step. Instead, to arrange automatic stopping, use solve!(problem, 0).
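
For example, to calibrate with a Gauss-Newton optimiser and automatic stopping (a minimal sketch, reusing times and volumes from the example above):

problem = CalibrationProblem(times, volumes, gompertz; optimiser=LevenbergMarquardt())
solve!(problem, 0)    # n=0: stopping is handled internally by LeastSquaresOptim.jl
p = solution(problem)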

Visualizing results

using Plots
scatter(times, volumes, xlab="time", ylab="volume", label="train")
plot!(problem, label="prediction")

Keyword options

  • p0=guess_parameters(times, volumes, model): initial value of the model parameters.

  • lower: named tuple indicating lower bounds on components of the model parameter p. For example, if lower=(; v0=0.1), then this introduces the constraint p.v0 > 0.1. The model-specific default value is TumorGrowth.lower_default(model).

  • upper: named tuple indicating upper bounds on components of the model parameter p. For example, if upper=(; v0=100), then this introduces the constraint p.v0 < 100. The model-specific default value is TumorGrowth.upper_default(model).

  • frozen: a named tuple, such as (; v0=nothing, λ=1/2), indicating parameters to be frozen at the specified values during optimisation; a nothing value means freeze at the initial value. See the sketch after this list for an example. The model-specific default value is TumorGrowth.frozen_default(model).

  • learning_rate > 0: learning rate for Adam gradient descent optimisation. Ignored if optimiser is explicitly specified.

  • optimiser: optimisation algorithm, which will be one of two varieties:

    • A gradient descent optimiser: This must be from Optimisers.jl or implement the same API.

    • A Gauss-Newton optimiser: Either LevenbergMarquardt(), Dogleg(), provided by LeastSquaresOptim.jl (but re-exported by TumorGrowth).

    The model-specific default value is TumorGrowth.optimiser_default(model), unless learning_rate is specified, in which case it will be Optimisers.Adam(learning_rate).

  • scale: a scaling function with the property that p = scale(q) has values of the order of magnitude expected for the model parameters being optimised, whenever q has the same form as a model parameter p but with all values equal to one. Scaling can help components of p converge at a similar rate. Ignored by Gauss-Newton optimisers. The model-specific default is TumorGrowth.scale_default(times, volumes, model).

  • radius > 0: initial trust region radius. This is ignored unless optimiser is a Gauss-Newton optimiser. The model-specific default is TumorGrowth.radius_default(model, optimiser), which is typically 10.0 for LevenbergMarquardt() and 1.0 for Dogleg().

  • half_life=Inf: set to a real positive number to replace the sum of squares loss with a weighted version; weights decay in reverse time with the specified half_life. Ignored by Gauss-Newton optimisers.

  • penalty ≥ 0: the larger the positive value, the more a loss penalty discourages large differences in v0 and v∞ on a log scale. Helps discourage v0 and v∞ drifting out of bounds in models whose ODE have a singularity at the origin. Model must include v0 and v∞ as parameters. Ignored by Gauss-Newton optimisers. The model-specific default value is TumorGrowth.penalty_default(model).

  • ode_options...: optional keyword arguments for the ODE solver, DifferentialEquations.solve, from DifferentialEquations.jl. Not relevant for models using analytic solutions (see the table at TumorGrowth).
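
To illustrate a few of these options together, here is a minimal sketch, reusing times and volumes from above; the particular values chosen for frozen, lower and learning_rate are illustrative only:

problem = CalibrationProblem(
    times, volumes, gompertz;
    frozen=(; v∞=nothing),   # freeze v∞ at its initial (guessed) value
    lower=(; v0=0.0),        # require p.v0 > 0
    learning_rate=0.01,
)
solve!(problem, 100)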

source
TumorGrowth.CalibrationProblem – Method
CalibrationProblem(problem; kwargs...)

Construct a new calibration problem out of an existing problem, but with new keyword arguments, kwargs. Unspecified keyword arguments fall back to defaults, except for p0, which falls back to solution(problem).
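
For example, continuing the earlier example, a new problem can warm-start from the previous solution with a smaller learning rate (the particular rate is illustrative):

problem2 = CalibrationProblem(problem; learning_rate=0.001)  # p0 falls back to solution(problem)
solve!(problem2, 30)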

source
TumorGrowth.bertalanffy – Method
bertalanffy(times, p)

Return volumes for specified times, based on the analytic solution to the General Bertalanffy model for lesion growth. Here p will have properties v0, v∞, ω, λ, where v0 is the volume at time times[1]. Other parameters are explained below.

Special cases of the model are:

  • gompertz (the case λ=0)
  • logistic (the case λ=-1)
  • classical_bertalanffy (the case λ=1/3)

Underlying ODE

In the General Bertalanffy model, the volume $v > 0$ evolves according to the differential equation

$dv/dt = ω B_λ(v_∞/v) v,$

where $B_λ$ is the Box-Cox transformation, defined by $B_λ(x) = (x^λ - 1)/λ$, unless $λ = 0$, in which case, $B_λ(x) = \log(x)$. Here:

  • $v_∞$=v∞ is the steady state solution, stable and unique, assuming $ω > 0$; this is sometimes referred to as the carrying capacity

  • $1/ω$ has the units of time

  • $λ$ is dimensionless

For a list of all models see TumorGrowth.
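
For example (the parameter values here are illustrative only):

times = [0.1, 6.0, 16.0, 24.0, 32.0, 39.0]
p = (v0=0.00023, v∞=4.3e-5, ω=0.1, λ=1/3)   # λ=1/3 recovers the classical Bertalanffy model
volumes = bertalanffy(times, p)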

source
TumorGrowth.bertalanffy2 – Method
bertalanffy2(times, p; capacity=false, solve_kwargs...)

Return volumes for specified times, based on numerical solutions to a two-dimensional extension of General Bertalanffy model for lesion growth. Here p will have properties v0, v∞, ω, λ, γ, where v0 is the volume at time times[1].

The usual General Bertalanffy model is recovered when γ=0. In that case, using bertalanffy, which is based on an analytic solution, may be preferred. Other parameters are explained below.

Keyword options

  • solve_kwargs: optional keyword arguments for the ODE solver, DifferentialEquations.solve, from DifferentialEquations.jl.

Underlying ODE

In this model the carrying capacity of the bertalanffy model, ordinarily fixed, is introduced as a new latent variable $u(t)$, which is allowed to evolve independently of the volume $v(t)$, at a rate in proportion to its magnitude:

$dv/dt = ω B_λ(u/v) v$

$du/dt = γωu$

Here $B_λ$ is the Box-Cox transformation with exponent $λ$. See bertalanffy. Also:

  • $1/ω$ has units of time
  • $λ$ is dimensionless
  • $γ$ is dimensionless

Since $u$ is a latent variable, its initial value, v∞ ≡ u(times[1]), is an additional model parameter.

For a list of all models see TumorGrowth.
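
For example (the parameter values and solver tolerance are illustrative only):

times = [0.1, 6.0, 16.0, 24.0, 32.0, 39.0]
p = (v0=0.00023, v∞=4.3e-5, ω=0.1, λ=1/3, γ=0.05)
volumes = bertalanffy2(times, p; reltol=1e-6)   # reltol is forwarded to DifferentialEquations.solve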

source
TumorGrowth.bertalanffy_numerical – Method
bertalanffy_numerical(times, p; solve_kwargs...)

Provided for testing purposes.

Return volumes for specified times, based on numerical solutions to the General Bertalanffy model for lesion growth. Here p will have properties v0, v∞, ω, λ, where v0 is the volume at time times[1]; solve_kwargs are optional keyword arguments for the ODE solver, DifferentialEquations.solve, from DifferentialEquations.jl.

Since it is based on analytic solutions, bertalanffy is preferred over this function.

Important

It is assumed without checking that times is ordered: times == sort(times).

See also bertalanffy2.

source
TumorGrowth.classical_bertalanffy – Method
classical_bertalanffy(times, p)

Return volumes for specified times, based on analytic solutions to the classical Bertalanffy model for lesion growth. Here p will have properties v0, v∞, ω, where v0 is the volume at time times[1].

This is the λ=1/3 case of the bertalanffy model.

For a list of all models see TumorGrowth.

source
TumorGrowth.compare – Method
compare(times, volumes, models; holdouts=3, metric=mae, advanced_options...)

By calibrating models using the specified patient times and lesion volumes, compare those models using a hold-out set consisting of the last holdouts data points.

times = [0.1, 6.0, 16.0, 24.0, 32.0, 39.0]
volumes = [0.00023, 8.4e-5, 6.1e-5, 4.3e-5, 4.3e-5, 4.3e-5]

julia> comparison = compare(times, volumes, [gompertz, logistic])
ModelComparison with 3 holdouts:
  metric: mae
  gompertz:     2.198e-6
  logistic:     6.55e-6

julia> errors(comparison)
2-element Vector{Float64}:
 2.197843662660861e-6
 6.549858321487298e-6

julia> p = parameters(comparison)[1]  # calibrated parameter for `gompertz`
(v0 = 0.00022643603114569068, v∞ = 3.8453274218216947e-5, ω = 0.11537512108224635)

julia> gompertz(times, p)
6-element Vector{Float64}:
 0.00022643603114569068
 9.435316392754094e-5
 5.1039159299783234e-5
 4.303209015899451e-5
 4.021112910411027e-5
 3.922743006690166e-5

Visualising comparisons

using Plots
plot(comparison, title="A comparison of two models")

Keyword options

  • holdouts=3: number of time-volume pairs excluded from the end of the calibration data

  • metric=mae: metric applied to the holdout set; the reported error for a model predicting volumes v̂ is metric(v̂, v), where v comprises the last holdouts values of volumes. For example, any regression measure from StatisticalMeasures.jl can be used here. The built-in fallback is mean absolute error.

  • iterations=TumorGrowth.iterations.(models): a vector of iteration counts for the calibration of models

  • calibration_options: a vector of named tuples providing keyword arguments for the CalibrationProblem for each model. Possible keys are: p0, lower, upper, frozen, learning_rate, optimiser, radius, scale, half_life, penalty, and keys corresponding to any ODE solver options. Keys left unspecified fall back to defaults, as these are described in the CalibrationProblem document string.

See also errors, parameters.

source
TumorGrowth.exponential – Method
exponential(times, p)

Return volumes for specified times, based on the analytic solution to the exponential model for lesion growth. Here p will have properties v0 and ω, where v0 is the volume at time times[1] and log(2)/ω is the half life. Use negative ω for growth and positive ω for decay.

Underlying ODE

In the exponential model, the volume $v > 0$ evolves according to the differential equation

$dv/dt = -ω v.$

For a list of all models see TumorGrowth.
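
For example, in this minimal sketch the half life is log(2)/ω = 1, so volumes should roughly halve at each unit time step (values are illustrative):

times = [0.0, 1.0, 2.0]
p = (v0=1.0, ω=log(2))
volumes = exponential(times, p)   # expect approximately [1.0, 0.5, 0.25]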

source
TumorGrowth.flat_patient_data – Method
flat_patient_data()

Return, in row table form, the lesion measurement data collected in Laleh et al. (2022) "Classical mathematical models for prediction of response to chemotherapy and immunotherapy", PLOS Computational Biology.

Each row represents a single measurement of a single lesion on some day.

See also patient_data, in which each row represents all measurements of a single lesion.

source
TumorGrowth.gompertz – Method
gompertz(times, p)

Return volumes for specified times, based on analytic solutions to the classical Gompertz model for lesion growth. Here p will have properties v0, v∞, ω, where v0 is the volume at time times[1].

This is the λ=0 case of the bertalanffy model.

For a list of all models see TumorGrowth.

source
TumorGrowth.guess_parameters – Method
guess_parameters(times, volumes, model)

Apply heuristics to guess parameters p for a model.
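
For example, reusing the data from the CalibrationProblem example:

times = [0.1, 6.0, 16.0, 24.0, 32.0, 39.0]
volumes = [0.00023, 8.4e-5, 6.1e-5, 4.3e-5, 4.3e-5, 4.3e-5]
p0 = guess_parameters(times, volumes, gompertz)   # a guess with the same form as gompertz parameters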

New model implementations

Fallback returns nothing, which will prompt users to explicitly specify initial parameter values in calibration problems.

source
TumorGrowth.logistic – Method
logistic(times, p)

Return volumes for specified times, based on analytic solutions to the classical logistic (Verhulst) model for lesion growth. Here p will have properties v0, v∞, ω, where v0 is the volume at time times[1].

This is the λ=-1 case of the bertalanffy model.

For a list of all models see TumorGrowth.

source
TumorGrowth.neural – Method
neural([rng,] network; transform=log, inverse=exp)

Initialize the Lux.jl neural network, network, and return a callable object, model, for solving the associated one-dimensional neural ODE for volume growth, as detailed under "Underlying ODE" below.

The returned object, model, is called like this:

volumes = model(times, p)

where p should have properties v0, v∞, θ, where v0 is the initial volume (so that volumes[1] = v0), v∞ is a volume scale parameter, and θ is a network-compatible Lux.jl parameter.

It seems that calibration works best if v∞ is frozen.

The form of θ is the same as TumorGrowth.initial_parameters(model), which is also the default initial value used when solving an associated CalibrationProblem.

using Lux, Random

# define neural network with 1 input and 1 output:
network = Lux.Chain(Dense(1, 3, Lux.tanh; init_weight=Lux.zeros64), Dense(3, 1))

rng = Xoshiro(123)
model = neural(rng, network)
θ = TumorGrowth.initial_parameters(model)
times = [0.1, 6.0, 16.0, 24.0, 32.0, 39.0]
v0, v∞ = 0.00023, 0.00015
p = (; v0, v∞, θ)

julia> volumes = model(times, p) # (constant because of zero-initialization)
6-element Vector{Float64}:
 0.00023
 0.00023
 0.00023
 0.00023
 0.00023
 0.00023

Underlying ODE

View the neural network (with fixed parameter θ) as a mathematical function $f$ and write $ϕ$ for the transform function. Then $v(t) = v_∞ ϕ^{-1}(y(t))$, where $y(t)$ evolves according to

$dy/dt = f(y)$

subject to the initial condition $y(t₀) = ϕ(v_0/v_∞)$, where $t₀$ is the initial time, times[1]. We are writing $v₀$=v0 and $v_∞$=v∞.

For a list of all models see TumorGrowth. See also CalibrationProblem.

source
TumorGrowth.neural2 – Method
neural2([rng,] network; transform=log, inverse=exp)

Initialize the Lux.jl neural network, network, and return a callable object, model, for solving the associated two-dimensional neural ODE for volume growth, as detailed under "Underlying ODE" below.

The returned object model is called like this:

volumes = model(times, p)

where p should have properties v0, v∞, θ, where v0 is the initial volume (so that volumes[1] = v0), v∞ is a volume scale parameter, and θ is a network-compatible Lux.jl parameter.

It seems that calibration works best if v∞ is frozen.

The form of θ is the same as TumorGrowth.initial_parameters(model), which is also the default initial value used when solving an associated CalibrationProblem.

using Lux, Random

# define neural network with 2 inputs and 2 outputs:
network = Lux.Chain(Dense(2, 3, Lux.tanh; init_weight=Lux.zeros64), Dense(3, 2))

rng = Xoshiro(123)
model = neural2(rng, network)
θ = TumorGrowth.initial_parameters(model)
times = [0.1, 6.0, 16.0, 24.0, 32.0, 39.0]
v0, v∞ = 0.00023, 0.00015
p = (; v0, v∞, θ)

julia> volumes = model(times, p) # (constant because of zero-initialization)
6-element Vector{Float64}:
 0.00023
 0.00023
 0.00023
 0.00023
 0.00023
 0.00023

Underlying ODE

View the neural network (with fixed parameter θ) as a mathematical function $f$, with components $f₁$ and $f₂$, and write $ϕ$ for the transform function. Then $v(t) = v_∞ ϕ^{-1}(y(t))$, where $y(t)$, and a latent variable $u(t)$, evolve according to

$dy/dt = f₁(y, u)$

$du/dt = f₂(y, u)$

subject to the initial conditions $y(t₀) = ϕ(v₀/v_∞)$, $u(t₀) = 1$, where $t₀$ is the initial time, times[1]. We are writing $v₀$=v0 and $v_∞$=v∞.

For a list of all models see TumorGrowth. See also CalibrationProblem.

source
TumorGrowth.patient_data – Method
patient_data()

Return, in row table form, the lesion measurement data collected in Laleh et al. (2022) "Classical mathematical models for prediction of response to chemotherapy and immunotherapy", PLOS Computational Biology.

Each row represents all measurements for a single lesion for a unique patient.

record = first(patient_data())

julia> record.Pt_hashID # patient identifier
"0218075314855e6ceacca856fcd4c737-S1"

julia> record.T_weeks # measure times, in weeks
7-element Vector{Float64}:
  0.1
  6.0
 12.0
 17.0
 23.0
 29.0
 35.0

julia> record.Lesion_normvol # all volumes measured, normalised by dataset max
7-element Vector{Float64}:
 0.000185364052636979
 0.00011229838600811
 8.4371439525252e-5
 8.4371439525252e-5
 1.05464299406565e-5
 2.89394037571615e-5
 8.4371439525252e-5

See also flat_patient_data.

source
TumorGrowth.solution – Method
solution(problem)

Return the solution to a CalibrationProblem. Normally applied after calling solve!(problem).

Also returns the solution to internally defined problems, as constructed with TumorGrowth.OptimisationProblem or TumorGrowth.CurveOptimisationProblem.

source
TumorGrowth.solve! – Method
solve!(problem, n)

Solve a calibration problem, as constructed with CalibrationProblem. The calibrated parameters are then returned by solution(problem).

If using a Gauss-Newton optimiser (LevenbergMarquardt or Dogleg), specify n=0 to have n chosen automatically.


solve!(problem, controls...)

Solve a calibration problem using one or more iteration controls from the package IterationControl.jl. See the "Extended help" section of CalibrationProblem for examples.

Not recommended for Gauss-Newton optimisers (LevenbergMarquardt or Dogleg).

source
TumorGrowth.TumorGrowth – Module

TumorGrowth.jl provides the following models for tumor growth:

| model                 | description                             | parameters, p        | analytic? |
|:----------------------|:----------------------------------------|:---------------------|:----------|
| bertalanffy           | General Bertalanffy (GB)                | (; v0, v∞, ω, λ)     | yes       |
| bertalanffy_numerical | General Bertalanffy (testing only)      | (; v0, v∞, ω, λ)     | no        |
| bertalanffy2          | 2D extension of General Bertalanffy     | (; v0, v∞, ω, λ, γ)  | no        |
| gompertz              | classical Gompertz (GB, λ=0)            | (; v0, v∞, ω)        | yes       |
| logistic              | classical Logistic/Verhulst (GB, λ=-1)  | (; v0, v∞, ω)        | yes       |
| classical_bertalanffy | classical Bertalanffy (GB, λ=1/3)       | (; v0, v∞, ω)        | yes       |
| exponential           | exponential decay or growth             | (; v0, ω)            | yes       |
| neural(rng, network)  | 1D neural ODE with Lux.jl network       | (; v0, v∞, θ)        | no        |
| neural2(rng, network) | 2D neural ODE with Lux.jl network       | (; v0, v∞, θ)        | no        |

Here a model is a callable object that outputs a sequence of lesion volumes, given times, by solving a related ordinary differential equation with parameters (p below):

using TumorGrowth

times = [0.1, 6.0, 16.0, 24.0, 32.0, 39.0]
p = (v0=0.0002261, v∞=2.792e-5, ω=0.05731) # `v0` is the initial volume
julia> volumes = gompertz(times, p)
6-element Vector{Float64}:
 0.0002261
 0.0001240760197801191
 6.473115210101774e-5
 4.751268597529182e-5
 3.9074807723757934e-5
 3.496675045077041e-5

In every model, v0 is the initial volume, so that volumes[1] == v0.

Where analytic solutions to the underlying ODEs are not known, optional keyword arguments for the DifferentialEquations.jl solver can be passed to the model call.

TumorGrowth.jl also provides a CalibrationProblem tool to calibrate model parameters, and a compare tool to compare models on a holdout set.

source
TumorGrowth.WeightedL2Loss – Type
WeightedL2Loss(times, h=Inf)

Private method.

Return a weighted sum of squares loss function (ŷ, y) -> loss, where the weights decay in reverse time with a half life h.
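
For example, a minimal sketch, assuming the returned function is called as (ŷ, y) per the form above (all values are illustrative):

times = [0.1, 6.0, 16.0]
v = [0.00023, 8.4e-5, 6.1e-5]    # observed volumes
v̂ = [0.00020, 8.0e-5, 6.0e-5]    # predicted volumes
loss = TumorGrowth.WeightedL2Loss(times, 21.0)   # weights halve every 21 time units, in reverse time
loss(v̂, v)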

source
TumorGrowth.bertalanffy2_ode! – Method
bertalanffy2_ode!(dX, X, p, t)

A two-dimensional extension of the ODE describing the General Bertalanffy model for lesion growth. Here X = [v, u], where v is volume at time t and u is the "carrying capacity" at time t, a latent variable. The time derivatives are written to dX. For the specific form of the ODE, see bertalanffy2.

source
TumorGrowth.bertalanffy_ode – Method
bertalanffy_ode(v, p, t)

Based on the General Bertalanffy model, return the rate of change in volume at time t, for a current volume v. For details, see bertalanffy.

Note here that v, and the return value, are vectors with a single element, rather than scalars.

source
TumorGrowth.curvature – Method
curvature(xs, ys)

Return the coefficient a for the parabola x -> a*x^2 + b*x + c of best fit, for abscissae xs and ordinates ys.
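
For example, for data lying exactly on a parabola, the quadratic coefficient is recovered:

xs = [0.0, 1.0, 2.0, 3.0]
ys = 2 .* xs.^2 .+ 3 .* xs .+ 1    # exact parabola with a = 2
TumorGrowth.curvature(xs, ys)      # expect approximately 2.0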

source
TumorGrowth.delete – Method
delete(x, kys)

Private method.

Assuming x is a named tuple, return a copy of x with any key in kys removed. Otherwise, assuming x is a structured object (such as a ComponentArray), first convert to a named tuple and then delete the specified keys.
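
For example, a minimal sketch, assuming kys may be given as an iterable of symbols:

TumorGrowth.delete((a=1, b=2, c=3), (:b,))   # expect (a = 1, c = 3)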

source
TumorGrowth.fill_gaps – Method
fill_gaps(short, long, filler)

Private method.

Here long is a ComponentArray and short is a named tuple with some of the keys of long. The method returns a ComponentArray with the same structure as long, but with the values of short merged in, and with all other (possibly nested) values replaced by filler, for numerical values, or by arrays of Inf, for array values.

long = (a=1, b=rand(1,2), c=(d=4, e=rand(2))) |> ComponentArray
short = (; a=10) # could alternatively be a ComponentArray

julia> filled = TumorGrowth.fill_gaps(short, long, Inf)
ComponentVector{Float64}(a = 10.0, b = [Inf Inf], c = (d = Inf, e = [Inf, Inf]))

julia> all(filled .> long)
true

source
TumorGrowth.frozen_default – Method
frozen_default(model)

Return a named tuple indicating parameter values to be frozen by default when calibrating model. A value of nothing for a parameter indicates freezing at initial value.

New model implementations

Fallback returns an empty named tuple.

source
TumorGrowth.functor – Method
TumorGrowth.functor(x, frozen)

Private method.

For a ComponentArray, x, return a tuple (xfree, reconstruct), where:

  • xfree is a deconstructed version of x with entries corresponding to keys in the ordinary named tuple frozen deleted.

  • reconstruct is a method to reconstruct a ComponentArray from something similar to xfree, ensuring the missing keys get values from the named tuple frozen, as demonstrated in the example below. You can also apply reconstruct to things like xfree wrapped as ComponentArrays.


c = (x=1, y=2, z=3) |> ComponentArray
free, reconstruct = TumorGrowth.functor(c, (; y=20))
julia> free
(x = 1, z = 3)

julia> reconstruct((x=100, z=300))
ComponentVector{Int64}(x = 100, y = 20, z = 300)

julia> reconstruct(ComponentArray(x=100, z=300))
ComponentVector{Int64}(x = 100, y = 20, z = 300)
source
TumorGrowth.functor – Method
TumorGrowth.functor(x) -> destructured_x, recover

Private method.

An extension of Functors.functor from the package Functors.jl, with an overloading for ComponentArrays.

source
TumorGrowth.iterations_default – Method
iterations_default(model, optimiser)

Number of iterations, when calibrating model with optimiser, to be adopted by default in model comparisons. Here optimiser is an optimiser from Optimisers.jl, or an optimiser implementing the same API, or one of LevenbergMarquardt() or Dogleg().

New model implementations

Fallback returns 10000, unless optimiser isa Union{LevenbergMarquardt,Dogleg}, in which case 0 is returned (stopping controlled by LeastSquaresOptim.jl).

source
TumorGrowth.lower_default – Method
lower_default(model)

Return a named tuple with the lower bound constraints on parameters for model.

For example, a return value of (v0 = 0.1,) indicates that p.v0 > 0.1 is a hard constraint for p, in calls of the form model(times, p), but all other components of p are unconstrained.

New model implementations

Fallback returns NamedTuple().

source
TumorGrowth.merge – Method
TumorGrowth.merge(x, y::NamedTuple)

Private method.

Ordinary merge if x is also a named tuple. More generally, first deconstruct x using TumorGrowth.functor, merge as usual, and reconstruct.
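
For example, in the named tuple case this is an ordinary merge:

TumorGrowth.merge((x=1, y=2), (; y=20))   # expect (x = 1, y = 20)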

source
TumorGrowth.neural_ode – Method

neural_ode([rng,] network)

Initialize the Lux.jl neural network, network, and return an associated ODE, ode, with calling syntax dX_dt = ode(X, p, t), where p is a network-compatible parameter.

The initialized parameter value can be recovered with TumorGrowth.initial_parameters(ode). Get the network state with TumorGrowth.state(ode).

using Lux
using Random

rng = Xoshiro(123)
network = Lux.Chain(Lux.Dense(2, 3, Lux.tanh), Lux.Dense(3, 2))
ode = neural_ode(rng, network)
θ = TumorGrowth.initial_parameters(ode)
ode(rand(2), θ, 42.9) # last argument irrelevant as `ode` is autonomous
source
TumorGrowth.optimiser_default – Method
optimiser_default(model)

Return the default choice of optimiser for model.

New model implementations

Must return an optimiser from Optimisers.jl, or an optimiser with the same API, or one of the optimisers from LeastSquaresOptim.jl, such as LevenbergMarquardt() or Dogleg().

The fallback returns Optimisers.Adam(0.0001).

source
TumorGrowth.penalty_default – Method
penalty_default(model)

Return the default loss penalty to be used when calibrating model. The larger the positive value, the more calibration discourages large differences in v0 and v∞ on a log scale. Helps discourage v0 and v∞ drifting out of bounds in models whose ODE have a singularity at the origin.

Ignored by the optimisers LevenbergMarquardt() and Dogleg().

New model implementations

Must return a value in the range $[0, ∞)$, and will typically be less than 1.0. Only implement if model has strictly positive parameters named v0 and v∞.

Fallback returns 0.

source
TumorGrowth.radius_default – Method
radius_default(model, optimiser)

Return the default value of radius when calibrating model using optimiser. This is the initial trust region radius, which is named Δ in LeastSquaresOptim.jl documentation and code.

This parameter is ignored unless optimiser, as passed to the CalibrationProblem, is LevenbergMarquardt() or Dogleg().

New model implementations

The fallback returns:

  • 10.0 if optimiser isa LevenbergMarquardt
  • 1.0 if optimiser isa Dogleg
  • 0 otherwise
source
TumorGrowth.recover – Method
recover(tuple, from)

Private method.

Return a new named tuple obtained by replacing any nothing value with the corresponding value in the from named tuple, whenever a corresponding key exists; nothing values without a corresponding key in from are left unchanged.

julia> recover((x=1, y=nothing, z=3, w=nothing), (x=10, y=2, k=7))
(x = 1, y = 2, z = 3, w = nothing)
source
TumorGrowth.satisfies_constraints – Method
satisfies_constraints(x, lower, upper)

Private method.

Returns true if both of the following are true:

  • x.k < upper.k for each k appearing as a key of upper
  • lower.k < x.k for each k appearing as a key of lower

Otherwise, returns false.
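
For example (a minimal sketch, with the bounds semantics described above):

TumorGrowth.satisfies_constraints((; v0=0.5), (; v0=0.1), (; v0=1.0))   # true, since 0.1 < 0.5 < 1.0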

source
TumorGrowth.scale_default – Method
scale_default(times, volumes, model)

Return an appropriate default scaling function f for model, with the property that p = f(q) has values of the order of magnitude expected for parameters of model, whenever q has the same form as p but with all values equal to one.

Ignored by the optimisers LevenbergMarquardt() and Dogleg().

New model implementations

Fallback returns the identity.

source
TumorGrowth.slope – Method
slope(xs, ys)

Return the slope of the line of least-squares best fit for abscissae xs and ordinates ys.
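
For example, for data lying exactly on a line, the slope is recovered:

TumorGrowth.slope([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])   # expect approximately 2.0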

source
TumorGrowth.upper_default – Method
upper_default(model)

Return a named tuple with the upper bound constraints on the parameters for model.

For example, a return value of (v0 = 1.0,) indicates that p.v0 < 1.0 is a hard constraint for p, in calls of the form model(times, p), but all other components of p are unconstrained.

New model implementations

Fallback returns an empty named tuple.

source