Reference
Index: TumorGrowth.TumorGrowth, TumorGrowth.CalibrationProblem, TumorGrowth.WeightedL2Loss, TumorGrowth.bertalanffy, TumorGrowth.bertalanffy2, TumorGrowth.bertalanffy2_ode!, TumorGrowth.bertalanffy_numerical, TumorGrowth.bertalanffy_ode, TumorGrowth.classical_bertalanffy, TumorGrowth.compare, TumorGrowth.curvature, TumorGrowth.delete, TumorGrowth.errors, TumorGrowth.exponential, TumorGrowth.fill_gaps, TumorGrowth.flat_patient_data, TumorGrowth.force_constraints!, TumorGrowth.frozen_default, TumorGrowth.functor, TumorGrowth.gompertz, TumorGrowth.guess_parameters, TumorGrowth.iterations_default, TumorGrowth.logistic, TumorGrowth.loss, TumorGrowth.lower_default, TumorGrowth.merge, TumorGrowth.neural, TumorGrowth.neural2, TumorGrowth.neural_ode, TumorGrowth.optimiser_default, TumorGrowth.parameters, TumorGrowth.patient_data, TumorGrowth.penalty_default, TumorGrowth.radius_default, TumorGrowth.recover, TumorGrowth.satisfies_constraints, TumorGrowth.scale_default, TumorGrowth.slope, TumorGrowth.solution, TumorGrowth.solve!, TumorGrowth.upper_default
TumorGrowth.CalibrationProblem — Method

CalibrationProblem(times, volumes, model; learning_rate=0.0001, options...)

Specify a problem concerned with optimizing the parameters of a tumor growth model, given measured volumes and corresponding times.
See TumorGrowth for a list of possible models.
Default optimisation is by Adam gradient descent, using a sum of squares loss. Call solve! on a problem to carry out optimisation, as shown in the example below. See "Extended Help" for advanced options, including early stopping.
Initial values of the parameters are inferred by default.
Unless frozen (see "Extended help" below), the calibration process learns an initial condition v0 which is generally different from volumes[1].
Simple solve
using TumorGrowth
times = [0.1, 6.0, 16.0, 24.0, 32.0, 39.0]
volumes = [0.00023, 8.4e-5, 6.1e-5, 4.3e-5, 4.3e-5, 4.3e-5]
problem = CalibrationProblem(times, volumes, gompertz; learning_rate=0.01)
solve!(problem, 30) # apply 30 gradient descent updates
julia> loss(problem) # sum of squares loss
1.7341026729860452e-9
p = solution(problem)
julia> pretty(p)
"v0=0.0002261 v∞=2.792e-5 ω=0.05731"
extended_times = vcat(times, [42.0, 46.0])
julia> gompertz(extended_times, p)[[7, 8]]
2-element Vector{Float64}:
3.374100207406809e-5
3.245628908921241e-5

Extended help
Solving with iteration controls
Continuing the example above, we may replace the number of iterations, n, in solve!(problem, n), with any control from IterationControl.jl:
using IterationControl
solve!(
    problem,
    Step(1),             # apply controls every 1 iteration...
    WithLossDo(),        # print loss
    Callback(problem -> print(pretty(solution(problem)))), # print parameters
    InvalidValue(),      # stop for ±Inf/NaN loss
    NumberSinceBest(5),  # stop when lowest loss so far was 5 steps prior
    TimeLimit(1/60),     # stop after one minute
    NumberLimit(400),    # stop after 400 steps
)
p = solution(problem)
julia> loss(problem)
7.609310030658547e-10

See IterationControl.jl for all options.
Controlled iteration as above is not recommended if you specify optimiser=LevenbergMarquardt() or optimiser=Dogleg() because the internal state of these optimisers is reset at every Step. Instead, to arrange automatic stopping, use solve!(problem, 0).
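For example, the following sketch calibrates the problem above with a Gauss-Newton optimiser and automatic stopping (the choice of LevenbergMarquardt() here is illustrative):

problem = CalibrationProblem(times, volumes, gompertz; optimiser=LevenbergMarquardt())
solve!(problem, 0) # `0` arranges automatic stopping
p = solution(problem)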
Visualizing results
using Plots
scatter(times, volumes, xlab="time", ylab="volume", label="train")
plot!(problem, label="prediction")

Keyword options
- p0=guess_parameters(times, volumes, model): initial value of the model parameters.
- lower: named tuple indicating lower bounds on components of the model parameter p. For example, if lower=(; v0=0.1), then this introduces the constraint p.v0 > 0.1. The model-specific default value is TumorGrowth.lower_default(model).
- upper: named tuple indicating upper bounds on components of the model parameter p. For example, if upper=(; v0=100), then this introduces the constraint p.v0 < 100. The model-specific default value is TumorGrowth.upper_default(model).
- frozen: a named tuple, such as (; v0=nothing, λ=1/2), indicating parameters to be frozen at specified values during optimisation; a nothing value means freeze at the initial value. The model-specific default value is TumorGrowth.frozen_default(model).
- learning_rate > 0: learning rate for Adam gradient descent optimisation. Ignored if optimiser is explicitly specified.
- optimiser: optimisation algorithm, which will be one of two varieties: a gradient descent optimiser, which must be from Optimisers.jl or implement the same API; or a Gauss-Newton optimiser, either LevenbergMarquardt() or Dogleg(), provided by LeastSquaresOptim.jl (but re-exported by TumorGrowth). The model-specific default value is TumorGrowth.optimiser_default(model), unless learning_rate is specified, in which case it will be Optimisers.Adam(learning_rate).
- scale: a scaling function with the property that p = scale(q) has a value of the same order of magnitude as the model parameters being optimised, whenever q has the same form as a model parameter p but with all values equal to one. Scaling can help components of p converge at a similar rate. Ignored by Gauss-Newton optimisers. The model-specific default is TumorGrowth.scale_default(model).
- radius > 0: initial trust region radius. This is ignored unless optimiser is a Gauss-Newton optimiser. The model-specific default is TumorGrowth.radius_default(model, optimiser), which is typically 10.0 for LevenbergMarquardt() and 1.0 for Dogleg().
- half_life=Inf: set to a positive real number to replace the sum of squares loss with a weighted version; weights decay in reverse time with the specified half_life. Ignored by Gauss-Newton optimisers.
- penalty ≥ 0: the larger the positive value, the more a loss penalty discourages large differences in v0 and v∞ on a log scale. Helps discourage v0 and v∞ drifting out of bounds in models whose ODEs have a singularity at the origin. The model must include v0 and v∞ as parameters. Ignored by Gauss-Newton optimisers. The model-specific default value is TumorGrowth.penalty_default(model).
- ode_options...: optional keyword arguments for the ODE solver, DifferentialEquations.solve, from DifferentialEquations.jl. Not relevant for models using analytic solutions (see the table at TumorGrowth).
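For example, a problem with a frozen parameter and a hard lower bound might be set up as follows (a sketch reusing times and volumes from the example above; the particular bound and learning rate are illustrative only):

problem = CalibrationProblem(
    times, volumes, bertalanffy;
    frozen = (; λ=1/3),   # freeze λ at 1/3, recovering the classical Bertalanffy model
    lower = (; v0=1e-8),  # hard constraint: p.v0 > 1e-8
    learning_rate = 0.01,
)
solve!(problem, 100)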
TumorGrowth.CalibrationProblem — Method

CalibrationProblem(problem; kwargs...)

Construct a new calibration problem out of an existing problem, but supply new keyword arguments, kwargs. Unspecified keyword arguments fall back to defaults, except for p0, which falls back to solution(problem).
TumorGrowth.bertalanffy — Method

bertalanffy(times, p)

Return volumes for specified times, based on the analytic solution to the General Bertalanffy model for lesion growth. Here p will have properties v0, v∞, ω, λ, where v0 is the volume at time times[1]. Other parameters are explained below.
Special cases of the model are:
- logistic (λ = -1)
- classical_bertalanffy (λ = 1/3)
- gompertz (λ = 0)
Underlying ODE
In the General Bertalanffy model, the volume $v > 0$ evolves according to the differential equation
$dv/dt = ω B_λ(v_∞/v) v,$
where $B_λ$ is the Box-Cox transformation, defined by $B_λ(x) = (x^λ - 1)/λ$, unless $λ = 0$, in which case, $B_λ(x) = \log(x)$. Here:
- $v_∞$ = v∞ is the steady-state solution, stable and unique, assuming $ω > 0$; this is sometimes referred to as the carrying capacity
- $1/ω$ has the units of time
- $λ$ is dimensionless
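For concreteness, the right-hand side of this ODE can be transcribed directly into Julia, as in the following sketch (for illustration only; the package's own implementation is bertalanffy_ode, documented below):

# Box-Cox transformation B_λ:
box_cox(x, λ) = λ == 0 ? log(x) : (x^λ - 1)/λ

# rate of change dv/dt in the General Bertalanffy model:
general_bertalanffy_rate(v, v∞, ω, λ) = ω * box_cox(v∞/v, λ) * v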
For a list of all models see TumorGrowth.
TumorGrowth.bertalanffy2 — Method

bertalanffy2(times, p; capacity=false, solve_kwargs...)

Return volumes for specified times, based on numerical solutions to a two-dimensional extension of the General Bertalanffy model for lesion growth. Here p will have properties v0, v∞, ω, λ, γ, where v0 is the volume at time times[1].
The usual General Bertalanffy model is recovered when γ=0. In that case, using bertalanffy, which is based on an analytic solution, may be preferred. Other parameters are explained below.
Keyword options
- solve_kwargs: optional keyword arguments for the ODE solver, DifferentialEquations.solve, from DifferentialEquations.jl.
Underlying ODE
In this model the carrying capacity of the bertalanffy model, ordinarily fixed, is introduced as a new latent variable $u(t)$, which is allowed to evolve independently of the volume $v(t)$, at a rate in proportion to its magnitude:
$dv/dt = ω B_λ(u/v) v$
$du/dt = γωu$
Here $B_λ$ is the Box-Cox transformation with exponent $λ$. See bertalanffy. Also:
- $1/ω$ has units of time
- $λ$ is dimensionless
- $γ$ is dimensionless
Since $u$ is a latent variable, its initial value, v∞ ≡ u(times[1]), is an additional model parameter.
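As a sketch (for illustration only; the package's own implementation is bertalanffy2_ode!, documented below), the right-hand side of this system can be written:

box_cox(x, λ) = λ == 0 ? log(x) : (x^λ - 1)/λ

function bertalanffy2_rates(v, u, ω, λ, γ)
    dv = ω * box_cox(u/v, λ) * v # volume
    du = γ * ω * u               # latent "carrying capacity"
    return dv, du
end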
For a list of all models see TumorGrowth.
TumorGrowth.bertalanffy_numerical — Method

bertalanffy_numerical(times, p; solve_kwargs...)

Provided for testing purposes.
Return volumes for specified times, based on numerical solutions to the General Bertalanffy model for lesion growth. Here p will have properties v0, v∞, ω, λ, where v0 is the volume at time times[1]; solve_kwargs are optional keyword arguments for the ODE solver, DifferentialEquations.solve, from DifferentialEquations.jl.
Since it is based on analytic solutions, bertalanffy is the preferred alternative to this function.
It is assumed without checking that times is ordered: times == sort(times).
See also bertalanffy2.
TumorGrowth.classical_bertalanffy — Method

classical_bertalanffy(times, p)

Return volumes for specified times, based on analytic solutions to the classical Bertalanffy model for lesion growth. Here p will have properties v0, v∞, ω, where v0 is the volume at time times[1].
This is the λ=1/3 case of the bertalanffy model.
For a list of all models see TumorGrowth.
TumorGrowth.compare — Method

compare(times, volumes, models; holdouts=3, metric=mae, advanced_options...)

By calibrating models using the specified patient times and lesion volumes, compare those models using a hold-out set consisting of the last holdouts data points.
times = [0.1, 6.0, 16.0, 24.0, 32.0, 39.0]
volumes = [0.00023, 8.4e-5, 6.1e-5, 4.3e-5, 4.3e-5, 4.3e-5]
julia> comparison = compare(times, volumes, [gompertz, logistic])
ModelComparison with 3 holdouts:
metric: mae
gompertz: 2.198e-6
logistic: 6.55e-6
julia> errors(comparison)
2-element Vector{Float64}:
2.197843662660861e-6
6.549858321487298e-6
julia> p = parameters(comparison)[1] # calibrated parameter for `gompertz`
(v0 = 0.00022643603114569068, v∞ = 3.8453274218216947e-5, ω = 0.11537512108224635)
julia> gompertz(times, p)
6-element Vector{Float64}:
0.00022643603114569068
9.435316392754094e-5
5.1039159299783234e-5
4.303209015899451e-5
4.021112910411027e-5
3.922743006690166e-5

Visualising comparisons
using Plots
plot(comparison, title="A comparison of two models")

Keyword options
- holdouts=3: number of time-volume pairs excluded from the end of the calibration data
- metric=mae: metric applied to the holdout set; the reported error on a model predicting volumes v̂ is metric(v̂, v), where v is the last holdouts values of volumes. For example, any regression measure from StatisticalMeasures.jl can be used here. The built-in fallback is mean absolute error.
- iterations=TumorGrowth.iterations.(models): a vector of iteration counts for the calibration of models
- calibration_options: a vector of named tuples providing keyword arguments for the CalibrationProblem for each model. Possible keys are: p0, lower, upper, frozen, learning_rate, optimiser, radius, scale, half_life, penalty, and keys corresponding to any ODE solver options. Keys left unspecified fall back to defaults, as described in the CalibrationProblem document string.
See also errors, parameters.
TumorGrowth.errors — Method

errors(comparison)

Extract the vector of errors from a ModelComparison object, as returned by calls to compare.
TumorGrowth.exponential — Method

exponential(times, p)

Return volumes for specified times, based on the analytic solution to the exponential model for lesion growth. Here p will have properties v0 and ω, where v0 is the volume at time times[1] and log(2)/ω is the half life. Use negative ω for growth and positive ω for decay.
Underlying ODE
In the exponential model, the volume $v > 0$ evolves according to the differential equation
$dv/dt = -ω v.$
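This ODE has the closed-form solution $v(t) = v_0 e^{-ω(t - t_0)}$, with $t_0$ = times[1], which can be transcribed as the following sketch (exponential itself is the package's implementation):

exponential_solution(t, t0, v0, ω) = v0 * exp(-ω * (t - t0))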
For a list of all models see TumorGrowth.
TumorGrowth.flat_patient_data — Method

flat_patient_data()

Return, in row table form, the lesion measurement data collected in Laleh et al. (2022) "Classical mathematical models for prediction of response to chemotherapy and immunotherapy", PLOS Computational Biology.
Each row represents a single measurement of a single lesion on some day.
See also patient_data, in which each row represents all measurements of a single lesion.
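Because the data is returned as a row table, it can be passed directly to table-processing packages; for example, the following sketch assumes DataFrames.jl is installed:

using TumorGrowth, DataFrames
df = DataFrame(flat_patient_data()) # one row per single measurement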
TumorGrowth.gompertz — Method

gompertz(times, p)

Return volumes for specified times, based on analytic solutions to the classical Gompertz model for lesion growth. Here p will have properties v0, v∞, ω, where v0 is the volume at time times[1].
This is the λ=0 case of the bertalanffy model.
For a list of all models see TumorGrowth.
TumorGrowth.guess_parameters — Method

guess_parameters(times, volumes, model)

Apply heuristics to guess parameters p for a model.
New model implementations
The fallback returns nothing, which will prompt users to explicitly specify initial parameter values in calibration problems.
TumorGrowth.logistic — Method

logistic(times, p)

Return volumes for specified times, based on analytic solutions to the classical logistic (Verhulst) model for lesion growth. Here p will have properties v0, v∞, ω, where v0 is the volume at time times[1].
This is the λ=-1 case of the bertalanffy model.
For a list of all models see TumorGrowth.
TumorGrowth.loss — Method

loss(problem)

Return the sum of squares loss for a calibration problem, as constructed with CalibrationProblem.
TumorGrowth.neural — Method

neural([rng,] network; transform=log, inverse=exp)

Initialize the Lux.jl neural network, network, and return a callable object, model, for solving the associated one-dimensional neural ODE for volume growth, as detailed under "Underlying ODE" below.
The returned object, model, is called like this:
volumes = model(times, p)

Here p should have properties v0, v∞, θ: v0 is the initial volume (so that volumes[1] = v0), v∞ is a volume scale parameter, and θ is a network-compatible Lux.jl parameter.
It seems that calibration works best if v∞ is frozen.
The form of θ is the same as TumorGrowth.initial_parameters(model), which is also the default initial value used when solving an associated CalibrationProblem.
using Lux, Random
# define neural network with 1 input and 1 output:
network = Lux.Chain(Dense(1, 3, Lux.tanh; init_weight=Lux.zeros64), Dense(3, 1))
rng = Xoshiro(123)
model = neural(rng, network)
θ = TumorGrowth.initial_parameters(model)
times = [0.1, 6.0, 16.0, 24.0, 32.0, 39.0]
v0, v∞ = 0.00023, 0.00015
p = (; v0, v∞, θ)
julia> volumes = model(times, p) # (constant because of zero-initialization)
6-element Vector{Float64}:
0.00023
0.00023
0.00023
0.00023
0.00023
0.00023

Underlying ODE
View the neural network (with fixed parameter θ) as a mathematical function $f$ and write $ϕ$ for the transform function. Then $v(t) = v_∞ ϕ^{-1}(y(t))$, where $y(t)$ evolves according to
$dy/dt = f(y)$
subject to the initial condition $y(t₀) = ϕ(v_0/v_∞)$, where $t₀$ is the initial time, times[1]. We are writing $v₀$=v0 and $v_∞$=v∞.
For a list of all models see TumorGrowth. See also CalibrationProblem.
TumorGrowth.neural2 — Method

neural2([rng,] network; transform=log, inverse=exp)

Initialize the Lux.jl neural network, network, and return a callable object, model, for solving the associated two-dimensional neural ODE for volume growth, as detailed under "Underlying ODE" below.
The returned object model is called like this:
volumes = model(times, p)

Here p should have properties v0, v∞, θ: v0 is the initial volume (so that volumes[1] = v0), v∞ is a volume scale parameter, and θ is a network-compatible Lux.jl parameter.
It seems that calibration works best if v∞ is frozen.
The form of θ is the same as TumorGrowth.initial_parameters(model), which is also the default initial value used when solving an associated CalibrationProblem.
using Lux, Random
# define neural network with 2 inputs and 2 outputs:
network = Lux.Chain(Dense(2, 3, Lux.tanh; init_weight=Lux.zeros64), Dense(3, 2))
rng = Xoshiro(123)
model = neural2(rng, network)
θ = TumorGrowth.initial_parameters(model)
times = [0.1, 6.0, 16.0, 24.0, 32.0, 39.0]
v0, v∞ = 0.00023, 0.00015
p = (; v0, v∞, θ)
julia> volumes = model(times, p) # (constant because of zero-initialization)
6-element Vector{Float64}:
0.00023
0.00023
0.00023
0.00023
0.00023
0.00023

Underlying ODE
View the neural network (with fixed parameter θ) as a mathematical function $f$, with components $f₁$ and $f₂$, and write $ϕ$ for the transform function. Then $v(t) = v_∞ ϕ^{-1}(y(t))$, where $y(t)$, and a latent variable $u(t)$, evolve according to
$dy/dt = f₁(y, u)$
$du/dt = f₂(y, u)$
subject to the initial conditions $y(t₀) = ϕ(v₀/v_∞)$, $u(t₀) = 1$, where $t₀$ is the initial time, times[1]. We are writing $v₀$=v0 and $v_∞$=v∞.
For a list of all models see TumorGrowth. See also CalibrationProblem.
TumorGrowth.parameters — Method

parameters(comparison)

Extract the vector of parameters from a ModelComparison object, as returned by calls to compare.
TumorGrowth.patient_data — Method

patient_data()

Return, in row table form, the lesion measurement data collected in Laleh et al. (2022) "Classical mathematical models for prediction of response to chemotherapy and immunotherapy", PLOS Computational Biology.
Each row represents all measurements for a single lesion for a unique patient.
record = first(patient_data())
julia> record.Pt_hashID # patient identifier
"0218075314855e6ceacca856fcd4c737-S1"
julia> record.T_weeks # measurement times, in weeks
7-element Vector{Float64}:
0.1
6.0
12.0
17.0
23.0
29.0
35.0
julia> record.Lesion_normvol # all volumes measured, normalised by dataset max
7-element Vector{Float64}:
0.000185364052636979
0.00011229838600811
8.4371439525252e-5
8.4371439525252e-5
1.05464299406565e-5
2.89394037571615e-5
8.4371439525252e-5

See also flat_patient_data.
TumorGrowth.solution — Method

solution(problem)

Return the solution to a CalibrationProblem. Normally applied after calling solve!(problem).
Also returns the solution to internally defined problems, as constructed with TumorGrowth.OptimisationProblem, TumorGrowth.CurveOptimisationProblem.
TumorGrowth.solve! — Method

solve!(problem, n)

Solve a calibration problem, as constructed with CalibrationProblem. The calibrated parameters are then returned by solution(problem).
If using a Gauss-Newton optimiser (LevenbergMarquardt or Dogleg), specify n=0 to choose n automatically.
solve!(problem, controls...)

Solve a calibration problem using one or more iteration controls from the package IterationControl.jl. See the "Extended help" section of CalibrationProblem for examples.
Not recommended for Gauss-Newton optimisers (LevenbergMarquardt or Dogleg).
TumorGrowth.TumorGrowth — Module

TumorGrowth.jl provides the following models for tumor growth:
| model | description | parameters, p | analytic? |
|---|---|---|---|
| bertalanffy | General Bertalanffy (GB) | (; v0, v∞, ω, λ) | yes |
| bertalanffy_numerical | General Bertalanffy (testing only) | (; v0, v∞, ω, λ) | no |
| bertalanffy2 | 2D extension of General Bertalanffy | (; v0, v∞, ω, λ, γ) | no |
| gompertz | classical Gompertz (GB, λ=0) | (; v0, v∞, ω) | yes |
| logistic | classical logistic/Verhulst (GB, λ=-1) | (; v0, v∞, ω) | yes |
| classical_bertalanffy | classical Bertalanffy (GB, λ=1/3) | (; v0, v∞, ω) | yes |
| exponential | exponential decay or growth | (; v0, ω) | yes |
| neural(rng, network) | 1D neural ODE with Lux.jl network | (; v0, v∞, θ) | no |
| neural2(rng, network) | 2D neural ODE with Lux.jl network | (; v0, v∞, θ) | no |
Here a model is a callable object that outputs a sequence of lesion volumes, given times, by solving a related ordinary differential equation with parameters (p below):
using TumorGrowth
times = [0.1, 6.0, 16.0, 24.0, 32.0, 39.0]
p = (v0=0.0002261, v∞=2.792e-5, ω=0.05731) # `v0` is the initial volume
volumes = gompertz(times, p)
6-element Vector{Float64}:
0.0002261
0.0001240760197801191
6.473115210101774e-5
4.751268597529182e-5
3.9074807723757934e-5
3.496675045077041e-5

In every model, v0 is the initial volume, so that volumes[1] == v0.
In cases where analytic solutions to the underlying ODEs are not known, optional keyword arguments for the DifferentialEquations.jl solver can be passed to the model call.
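For example, a solver tolerance might be passed like this (a sketch: reltol is a standard DifferentialEquations.solve keyword, and the λ and γ values here are illustrative only):

p = (v0=0.0002261, v∞=2.792e-5, ω=0.05731, λ=1/3, γ=0.05)
volumes = bertalanffy2(times, p; reltol=1e-8) # `bertalanffy2` has no analytic solution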
TumorGrowth.jl also provides a CalibrationProblem tool to calibrate model parameters, and a compare tool to compare models on a holdout set.
TumorGrowth.WeightedL2Loss — Type

WeightedL2Loss(times, h=Inf)

Private method.
Return a weighted sum of squares loss function (ŷ, y) -> loss, where the weights decay in reverse time with a half life h.
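As a sketch only, such a loss might look like the following, assuming the weight on the observation at time tᵢ is proportional to 2^(-(times[end] - tᵢ)/h) (the package's exact weighting and normalisation may differ):

function weighted_l2(times, h=Inf)
    w = 2 .^ (-(times[end] .- times) ./ h) # most recent observation gets weight 1
    (ŷ, y) -> sum(w .* (ŷ .- y) .^ 2)
end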
TumorGrowth.bertalanffy2_ode! — Method

bertalanffy2_ode!(dX, X, p, t)

A two-dimensional extension of the ODE describing the General Bertalanffy model for lesion growth. Here X = [v, u], where v is volume at time t and u is the "carrying capacity" at time t, a latent variable. The time derivatives are written to dX. For the specific form of the ODE, see bertalanffy2.
TumorGrowth.bertalanffy_ode — Method

bertalanffy_ode(v, p, t)

Based on the General Bertalanffy model, return the rate of change in volume at time t, for a current volume of v. For details, see bertalanffy.
Note here that v, and the return value, are vectors with a single element, rather than scalars.
TumorGrowth.curvature — Method

curvature(xs, ys)

Return the coefficient a for the parabola x -> a*x^2 + b*x + c of best fit, for abscissae xs and ordinates ys.
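For example (a hypothetical illustration; since the data below lies on an exact parabola, the leading coefficient is recovered):

xs = 0.0:0.1:1.0
ys = 3 .* xs .^ 2 .+ 2 .* xs .+ 1
TumorGrowth.curvature(xs, ys) # ≈ 3.0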
TumorGrowth.delete — Method

delete(x, kys)

Private method.
Assuming x is a named tuple, return a copy of x with any key in kys removed. Otherwise, assuming x is a structured object (such as a ComponentArray) first convert to a named tuple and then delete the specified keys.
TumorGrowth.fill_gaps — Method

fill_gaps(short, long, filler)

Private method.
Here long is a ComponentArray and short a named tuple with some of the keys from long. The method returns a ComponentArray with the same structure as long, with the values of short merged in, and with all other (possibly nested) values replaced: numerical values are replaced with filler and array values with arrays filled with filler.
long = (a=1, b=rand(1, 2), c=(d=4, e=rand(2))) |> ComponentArray
short = (; a=10) # could alternatively be a ComponentArray

julia> filled = TumorGrowth.fill_gaps(short, long, Inf)
ComponentVector{Float64}(a = 10.0, b = [Inf Inf], c = (d = Inf, e = [Inf, Inf]))

julia> all(filled .> long)
true
TumorGrowth.force_constraints! — Method

force_constraints!(x_candidate, x, lower, upper)

Private method.
Assumes x is a ComponentArray for which TumorGrowth.satisfies_constraints(x, lower, upper) is true. The method mutates those components of x_candidate which do not satisfy the constraints, moving each from the corresponding component of x halfway towards the violated boundary.
TumorGrowth.frozen_default — Method

frozen_default(model)

Return a named tuple indicating parameter values to be frozen by default when calibrating model. A value of nothing for a parameter indicates freezing at initial value.
New model implementations
Fallback returns an empty named tuple.
TumorGrowth.functor — Method

TumorGrowth.functor(x, frozen)

Private method.
For a ComponentArray, x, return a tuple (xfree, reconstruct), where:

- xfree is a deconstructed version of x with entries corresponding to keys in the ordinary named tuple frozen deleted.
- reconstruct is a method to reconstruct a ComponentArray from something similar to xfree, ensuring the missing keys get values from the named tuple frozen, as demonstrated in the example below. You can also apply reconstruct to things like xfree wrapped as ComponentArrays.
c = (x=1, y=2, z=3) |> ComponentArray
free, reconstruct = TumorGrowth.functor(c, (; y=20))
julia> free
(x = 1, z = 3)
julia> reconstruct((x=100, z=300))
ComponentVector{Int64}(x = 100, y = 20, z = 300)
julia> reconstruct(ComponentArray(x=100, z=300))
ComponentVector{Int64}(x = 100, y = 20, z = 300)

TumorGrowth.functor — Method

TumorGrowth.functor(x) -> destructured_x, recover

Private method.
An extension of Functors.functor from the package Functors.jl, with an overloading for ComponentArrays.
TumorGrowth.iterations_default — Method

iterations_default(model, optimiser)

Number of iterations, when calibrating model and using optimiser, to be adopted by default in model comparisons. Here optimiser is an optimiser from Optimisers.jl, or implements the same API, or is one of LevenbergMarquardt() or Dogleg().
New model implementations
Fallback returns 10000, unless optimiser isa Union{LevenbergMarquardt,Dogleg}, in which case 0 is returned (stopping controlled by LeastSquaresOptim.jl).
TumorGrowth.lower_default — Method

lower_default(model)

Return a named tuple with the lower bound constraints on parameters for model.
For example, a return value of (v0 = 0.1,) indicates that p.v0 > 0.1 is a hard constraint for p, in calls of the form model(times, p), but all other components of p are unconstrained.
New model implementations
Fallback returns NamedTuple().
TumorGrowth.merge — Method

TumorGrowth.merge(x, y::NamedTuple)

Private method.
Ordinary merge if x is also a named tuple. More generally, first deconstruct x using TumorGrowth.functor, merge as usual, and reconstruct.
TumorGrowth.neural_ode — Method

neural_ode([rng,] network)

Initialize the Lux.jl neural network, network, and return an associated ODE, ode, with calling syntax dX_dt = ode(X, p, t), where p is a network-compatible parameter.
The initialized parameter value can be recovered with TumorGrowth.initial_parameters(ode). Get the network state with TumorGrowth.state(ode).
using Lux
using Random
rng = Xoshiro(123)
network = Lux.Chain(Lux.Dense(2, 3, Lux.tanh), Lux.Dense(3, 2))
ode = neural_ode(rng, network)
θ = TumorGrowth.initial_parameters(ode)
ode(rand(2), θ, 42.9) # last argument irrelevant as `ode` is autonomous

TumorGrowth.optimiser_default — Method

optimiser_default(model)

Return the default choice of optimiser for model.
New model implementations
Must return an optimiser from Optimisers.jl, or an optimiser with the same API, or one of the optimisers from LeastSquaresOptim.jl, such as LevenbergMarquardt() or Dogleg().
The fallback returns Optimisers.Adam(0.0001).
TumorGrowth.penalty_default — Method

penalty_default(model)

Return the default loss penalty to be used when calibrating model. The larger the positive value, the more calibration discourages large differences in v0 and v∞ on a log scale. Helps discourage v0 and v∞ drifting out of bounds in models whose ODEs have a singularity at the origin.
Ignored by the optimisers LevenbergMarquardt() and Dogleg().
New model implementations
Must return a value in the range $[0, ∞)$, and will typically be less than 1.0. Only implement if model has strictly positive parameters named v0 and v∞.
Fallback returns 0.
TumorGrowth.radius_default — Method

radius_default(model, optimiser)

Return the default value of radius when calibrating model using optimiser. This is the initial trust region radius, which is named Δ in LeastSquaresOptim.jl documentation and code.
This parameter is ignored unless optimiser, as passed to the CalibrationProblem, is LevenbergMarquardt() or Dogleg().
New model implementations
The fallback returns:
- 10.0 if optimiser isa LevenbergMarquardt
- 1.0 if optimiser isa Dogleg
- 0 otherwise
TumorGrowth.recover — Method

recover(tuple, from)

Private method.
Return a new named tuple by replacing any nothing values of tuple with the corresponding values in the from named tuple, whenever a corresponding key exists; other nothing values are left unchanged.
julia> recover((x=1, y=nothing, z=3, w=nothing), (x=10, y=2, k=7))
(x = 1, y = 2, z = 3, w = nothing)

TumorGrowth.satisfies_constraints — Method

satisfies_constraints(x, lower, upper)

Private method.
Returns true if both of the following are true:
- x.k < upper.k for each k appearing as a key of upper
- lower.k < x.k for each k appearing as a key of lower
Otherwise, returns false.
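For example (a hypothetical illustration, assuming ComponentArrays.jl is loaded):

using ComponentArrays
x = ComponentArray(v0 = 0.5)
TumorGrowth.satisfies_constraints(x, (; v0 = 0.1), (; v0 = 1.0)) # true (0.1 < 0.5 < 1.0)
TumorGrowth.satisfies_constraints(x, (; v0 = 0.6), (; v0 = 1.0)) # false (0.5 < 0.6)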
TumorGrowth.scale_default — Method

scale_default(times, volumes, model)

Return an appropriate default for a scaling function f, so that p = f(q) has a value of the same order of magnitude expected for parameters of model, whenever q has the same form as p but with all values equal to one.
Ignored by the optimisers LevenbergMarquardt() and Dogleg().
New model implementations
Fallback returns the identity.
TumorGrowth.slope — Method

slope(xs, ys)

Return the slope of the line of least-squares best fit for abscissae xs and ordinates ys.
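For example (a hypothetical illustration with exactly linear data):

xs = 0.0:0.1:1.0
ys = 2 .* xs .+ 1
TumorGrowth.slope(xs, ys) # ≈ 2.0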
TumorGrowth.upper_default — Method

upper_default(model)

Return a named tuple with the upper bound constraints on the parameters for model.
For example, a return value of (v0 = 1.0,) indicates that p.v0 < 1.0 is a hard constraint for p, in calls of the form model(times, p), but all other components of p are unconstrained.
New model implementations
Fallback returns an empty named tuple.