ODE modelling


1. Short description

  • Framework for modelling dynamical systems

2. Applicability/restrictions/pitfalls

  • Good for modelling medium-sized systems.
  • Compilation of models with many conditions can take a long time.

3. Code availability and implementations

  • D2D: all
  • -

5. Publications from the Timmer group

  • Raue et al. (2015) - Data2Dynamics: a modeling environment tailored to parameter estimation in dynamical systems

6. Publications from external groups

1. Short description

  • Framework for modelling dynamical systems

2. Applicability/restrictions/pitfalls

  • Good for modelling medium-sized systems.
  • A lot of (arguably too much) freedom in model setup.

3. Code availability and implementations

  • -

5. Publications from the Timmer group

6. Publications from external groups

1. Short description

  • Modelling framework(s) of the Hasenauer Group in Munich / Bonn
  • More flexible (to new approaches) because of modularized design
  • Similar to D2D, but different ;)
  • AMICI: Advanced Multilanguage Interface to CVODES and IDAS
  • PESTO: Parameter EStimation TOolbox
  • MEMOIR: MATLAB toolbox for Mixed Effect Model InfeRence

2. Applicability / restrictions / pitfalls

  • All Hass et al. benchmark paper models are also available
  • AMICI / PESTO / MEMOIR are written for MATLAB; however, the Hasenauer group has decided to move away from MATLAB where possible
  • pyPESTO is the Python version of PESTO (PESTO will be discontinued)

3. Code availability and implementations

4. Publications from external groups

  • AMICI: Fröhlich et al. Bioinformatics (2016) / Fröhlich et al. PLOS Comp Biol (2017)
  • PESTO: Stapor et al. Bioinformatics (2017)
  • MEMOIR: Fröhlich et al. Bioinformatics (2018)

Focus on mixed-effects modelling. Interfaces with NONMEM and MONOLIX, the two standard estimation tools for non-linear mixed-effects models. Supports Quasi-NLME estimation via a dMod backend.

1. Short description

  • Representation of biochemical models in XML data format
  • In particular, the Benchmark collection and models from the Biomodels database are in SBML format (a minimal read example is sketched below)
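
Reading an SBML file programmatically can be done, for example, with the python-libsbml bindings; a minimal sketch (the file name is illustrative):

    import libsbml

    doc = libsbml.readSBML("model.xml")        # file name is illustrative
    if doc.getNumErrors() > 0:
        doc.printErrors()                      # e.g. unspecified fields or unsupported constructs
    model = doc.getModel()
    print(model.getNumSpecies(), model.getNumReactions(), model.getNumParameters())
    for i in range(model.getNumSpecies()):
        species = model.getSpecies(i)
        print(species.getId(), species.getInitialConcentration())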

2. Applicability/restrictions/pitfalls

  • Import and export work for most Biomodels
  • Events, negative fluxes or unspecified fields can throw errors
  • The end time (tend) should be specified beforehand

3. Code availability and implementations

  • D2D: arImportSBML, arExportSBML
  • dMod: not implemented
  • SBML data format, Benchmark models

5. Publications from external groups

1. Short description

  • Specification of a standard for formulating a parameter estimation problem of an ODE model
  • Goal: Import & export in all ODE optimization software should be possible, allowing for easy sharing/publishing between tools
  • Consists of an SBML model and three .tsv files that specify experimental data, measurement conditions and parameter settings (see the layout sketch below)
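
A typical PEtab problem directory might look roughly like this (file names are illustrative; consult the PEtab specification for the exact format and column names):

    model.xml                    # SBML model
    measurement_data.tsv         # experimental data (observable, condition, time, measurement, ...)
    experimental_conditions.tsv  # measurement conditions (condition-specific parameter/input values)
    parameters.tsv               # parameter settings (scale, bounds, nominal value, estimate yes/no)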

2. Applicability/restrictions/pitfalls

  • Multiple special cases like events, multiple models, and complex parameter transformations might not be captured by the standard.
  • As of 2019, still under development.

3. Code availability and implementations

  • The standard is defined in a GitHub repository
  • D2D: arImportSBML, arExportSBML, arExportBenchmark
  • dMod: not implemented (but there is a wrapper for pyPESTO, which might be adaptable for dMod)
  • SBML models, benchmark models

1. Short description

  • Collection of ODE models with experimental data
  • Mainly models from Timmer and Hasenauer group
  • To use for evaluation of methodologies and algorithms

2. Applicability/restrictions/pitfalls

  • Models were made available for D2D in the publication (see below) using a Setup.m file.
  • PEtab (see above) was developed afterwards.
  • Some models take a lot of time to compile / need some tweaking to compile on some machines
  • Additional models would be desirable; the Hasenauer group is supporting efforts in this direction

3. Code availability and implementations

  • D2D: arImportSBML, arExportSBML, arExportBenchmark
  • dMod: not implemented (but there is a wrapper for pyPESTO, which might be adaptable for dMod)

4. Publications from the Timmer group

1. Short description

  • When analytical expressions are not available, steady states can be determined by pre-simulation (a minimal sketch follows below).
  • Simulate the model for a long time.
  • Very robust
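
A minimal sketch of equilibration by pre-simulation for a hypothetical two-state model (not the D2D implementation; arSteadyState.m handles this inside D2D):

    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical two-state model: production, conversion and degradation.
    def rhs(t, x, p):
        k_prod, k_conv, k_deg = p
        return [k_prod - k_conv * x[0],
                k_conv * x[0] - k_deg * x[1]]

    def equilibrate(x0, p, t_end=1e6, tol=1e-8):
        """Simulate for a long time and check that the right-hand side has vanished."""
        sol = solve_ivp(rhs, [0.0, t_end], x0, args=(p,), method="LSODA", rtol=1e-10, atol=1e-12)
        x_ss = sol.y[:, -1]
        if np.linalg.norm(rhs(0.0, x_ss, p)) > tol:
            raise RuntimeError("system did not equilibrate (limit cycle, unstable, ...)")
        return x_ss

    print(equilibrate([0.0, 0.0], (1.0, 0.5, 0.2)))   # expected steady state: [2.0, 5.0]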

2. Applicability/restrictions/pitfalls

  • Equilibration by simulation is not always possible (limit cycles, unstable systems etc). In such cases one can use arPlotEquilibration to debug the situation.
  • It can be slower than alternative approaches such as computing steady states via constraints.
  • Very long equilibration times can lead to loss of conserved moieties in the model as error accumulates.
  • For models with no conserved moieties, steady state sensitivities can be computed without simulating the sensitivity equations (D2D: see arReduce.m).
  • Compilation of models with many conditions can take a long time.

3. Code availability and implementations

  • D2D: arSteadyState.m / arPrintSteadyState.m / arPlotEquilibration.m
  • Steady states by constraints. See also the D2D documentation

5. Publications from the Timmer group

  • -

6. Publications from external groups

  • Fiedler et al. 2016 - Tailored parameter optimization methods for ordinary differential equation models with steady-state constraints

1. Short description

  • Determine steady states by pre-simulation; the steady-state sensitivities are computed without integrating the sensitivity equations (see arFastSensis.m and the sketch below).
  • Can be orders of magnitude faster for big steady-state systems (200x speedup in glycolysis).
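
The speed-up comes from obtaining the steady-state sensitivities from the implicit function theorem, dx*/dp = -(df/dx)^(-1) df/dp, instead of integrating the sensitivity ODEs. A minimal sketch for a hypothetical two-state model (the state Jacobian must be non-singular, which is why conserved moieties are not allowed):

    import numpy as np

    # Hypothetical model with f(x, p) = 0 at steady state, p = (k_prod, k_conv, k_deg)
    def f(x, p):
        k_prod, k_conv, k_deg = p
        return np.array([k_prod - k_conv * x[0],
                         k_conv * x[0] - k_deg * x[1]])

    def dfdx(x, p):                      # Jacobian w.r.t. the states
        _, k_conv, k_deg = p
        return np.array([[-k_conv, 0.0], [k_conv, -k_deg]])

    def dfdp(x, p):                      # Jacobian w.r.t. the parameters
        return np.array([[1.0, -x[0], 0.0], [0.0, x[0], -x[1]]])

    p = (1.0, 0.5, 0.2)
    x_ss = np.array([2.0, 5.0])          # steady state, e.g. from pre-simulation
    # implicit function theorem: dx*/dp = -(df/dx)^(-1) df/dp
    sens = -np.linalg.solve(dfdx(x_ss, p), dfdp(x_ss, p))
    print(sens)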

2. Applicability/restrictions/pitfalls

  • Only applicable to models without conserved moieties; otherwise this method produces invalid sensitivities.
  • Note that new conserved moieties can arise at run time, e.g. when parameters are set to zero.

3. Code availability and implementations

  • D2D: arReduce.m / arSteadyState.m / arFastSensis.m
  • -

5. Publications from the Timmer group

  • -

6. Publications from external groups

  • -

1. Short description

  • Inputs are described by smoothing splines.
  • The spline parameters are estimated jointly with all other model parameters
  • Joint uncertainty analysis is then straightforward (e.g. via likelihood profiles, which take information from all data points and account for the uncertainty of the input)
  • In order to ensure positivity of the spline, a log-transformation is exploited (see the sketch below)
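
A minimal sketch of such a positivity-preserving input spline (knot positions and values are illustrative; in D2D this happens automatically for parametrized inputs):

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Knot times are fixed; the log-values at the knots are the spline parameters
    # that are estimated jointly with the kinetic parameters.
    knots = np.array([0.0, 10.0, 20.0, 40.0, 60.0])

    def make_input(log_values_at_knots):
        spline = CubicSpline(knots, log_values_at_knots)
        return lambda t: np.exp(spline(t))     # exponentiation guarantees u(t) > 0

    u = make_input(np.log([0.1, 1.0, 2.0, 0.5, 0.1]))
    print(u(15.0))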

2. Availability

  • The approach is implemented in D2D and is exploited by default if an input has parameters.

3. Applicability

  • The approach has been tested in many projects and works reliably.
  • Drawing spline parameters might lead to ODE integration problems because of the exponentiation steps (un-log of the parameters, un-log of the spline)

4. Publication

  • doi.org/10.1093/bioinformatics/bts393

1. Evaluation of predictions for all parameter vectors obtained during profile likelihood calculation

2. The DREAM6 approach

3. Implemented in D2D as arPLETrajectories

1. The Monte-Carlo approach (for model discrimination):

  • Drawing the correct model according to available prior knowledge
  • Drawing true parameters according to available prior knowledge
  • Drawing data for the design candidates
  • For each design and data set: Fit both models, apply the likelihood ratio test and evaluate the result

2. The corresponding approach can be used for experimental design for parameter estimation: instead of the likelihood ratio test, a measure of parameter information is evaluated. In the original publication, metrics based on the Fisher information matrix were discussed; profile-likelihood-based approaches can be used instead (a skeleton of the Monte-Carlo loop is sketched below).
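
A skeleton of the Monte-Carlo loop for model discrimination; simulate_data(), fit() and the model objects are hypothetical placeholders that have to be filled in for a concrete application:

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(0)

    def evaluate_design(design, models, prior_model_prob, simulate_data, fit,
                        n_mc=100, alpha=0.05):
        """Fraction of Monte-Carlo runs in which the likelihood ratio test decides correctly."""
        correct = 0
        for _ in range(n_mc):
            true_idx = rng.choice(len(models), p=prior_model_prob)   # draw the true model
            theta = models[true_idx].draw_from_prior()               # draw true parameters
            data = simulate_data(models[true_idx], theta, design)    # draw data for the design
            fits = [fit(m, data, design) for m in models]            # fit both models
            # likelihood ratio statistic of nested models[0] (small) vs. models[1] (large)
            lr = fits[0].neg2loglik - fits[1].neg2loglik
            df = models[1].n_parameters - models[0].n_parameters
            reject_small = lr > chi2.ppf(1.0 - alpha, df)
            correct += (reject_small and true_idx == 1) or (not reject_small and true_idx == 0)
        return correct / n_mc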

3. Publication

1. Short description

  • For an experimental condition of interest, generate a profile which varies the parameter of interest and the validation data point simultaneously
  • Predict the information gain (profile width) for a measurement for this condition
  • Work in progress, coming soon

1. Short description

  • The initial value approach fails for periodic or chaotic processes
  • The likelihood has a huge number of local optima
  • Idea:
    • Divide the time axis into many segments
    • Fit simultaneously in all segments
    • Consider the continuity constraints between segments in a linearised fashion (a simplified sketch follows this list)
  • Very powerful method
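
A simplified sketch of the idea for a toy damped oscillator: each segment gets its own initial state, and the mismatch between the end of one segment and the start of the next enters the residuals as a soft continuity constraint; the full method treats these constraints in a linearised fashion and eliminates them more efficiently.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    # Toy damped oscillator (hypothetical) with parameters p = (k, d)
    def rhs(t, x, p):
        k, d = p
        return [x[1], -k * x[0] - d * x[1]]

    t_data = np.linspace(0, 20, 81)
    rng = np.random.default_rng(0)
    true_sol = solve_ivp(rhs, [0, 20], [1.0, 0.0], t_eval=t_data, args=((1.0, 0.1),), rtol=1e-10)
    data = true_sol.y[0] + 0.05 * rng.standard_normal(t_data.size)

    n_seg = 8
    bounds_idx = np.linspace(0, t_data.size - 1, n_seg + 1).astype(int)   # segment boundaries

    def residuals(theta):
        p, x0s = theta[:2], theta[2:].reshape(n_seg, 2)   # parameters + per-segment initial states
        res = []
        for i in range(n_seg):
            lo, hi = bounds_idx[i], bounds_idx[i + 1]
            ts = t_data[lo:hi + 1]
            sol = solve_ivp(rhs, [ts[0], ts[-1]], x0s[i], t_eval=ts, args=(p,), rtol=1e-8)
            res.append(sol.y[0] - data[lo:hi + 1])        # data mismatch (x[0] is observed)
            if i < n_seg - 1:                             # continuity between adjacent segments
                res.append(10.0 * (sol.y[:, -1] - x0s[i + 1]))
        return np.concatenate(res)

    x0_guess = np.column_stack([data[bounds_idx[:-1]], np.zeros(n_seg)]).ravel()
    fit = least_squares(residuals, np.concatenate([[0.5, 0.5], x0_guess]))
    print(fit.x[:2])    # estimated (k, d)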

2. Applicability/restrictions/pitfalls

  • Applicable only to purely deterministic systems
  • Stochasticity in oscillatory or chaotic processes does not allow for a deterministic description

3. Code availability and implementations

  • Not implemented

5. Publication from the Timmer group

  • Peifer M., Timmer J. Parameter estimation in ordinary differential equations for biochemical processes using the method of multiple shooting. (2007)

6. Publications from external groups

  • Bock - Numerical treatment of inverse problems in chemical reaction kinetics (1981)
  • Bock - Recent advances in parameter identification techniques for ordinary differential equations (1983)

1. Short description

  • Method for finding optimal paths between points in the parameter space
  • Applies Nudged Elastic Band (NEB) method from molecular dynamics community (smooth transition paths in (free) energy landscapes)
  • Allows one to check whether two points, e.g. from a waterfall plot, are connected, i.e. lie in the same convergence region / belong to the same optimum, and thus allows merging fits in the waterfall plot (a simplified sketch follows below)
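
A much simplified sketch of the idea: a plain elastic band of nodes on a toy double-well objective, without the "nudging" force projection of the full NEB method. A barrier along the relaxed band indicates that the two endpoints belong to different optima.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical double-well objective standing in for a -2*log-likelihood surface.
    def objective(theta):
        x, y = theta
        return (x**2 - 1.0)**2 + 5.0 * y**2

    theta_a, theta_b = np.array([-1.0, 0.0]), np.array([1.0, 0.0])   # two local optima
    n_nodes, k_spring = 11, 5.0
    band0 = np.linspace(theta_a, theta_b, n_nodes)                   # straight initial band

    def band_energy(flat_inner):
        band = np.vstack([theta_a, flat_inner.reshape(n_nodes - 2, 2), theta_b])
        obj = sum(objective(node) for node in band)
        spring = k_spring * np.sum(np.diff(band, axis=0)**2)         # keeps nodes spaced
        return obj + spring

    res = minimize(band_energy, band0[1:-1].ravel())
    band = np.vstack([theta_a, res.x.reshape(n_nodes - 2, 2), theta_b])
    print(np.round([objective(node) for node in band], 3))           # objective along the band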

2. Applicability/restrictions/pitfalls

  • Could help to improve 'slippery-stair' waterfall plots by revealing the local optima structure or by indicating a suboptimal optimizer setup
  • Computation of the optimal bands is computationally demanding since model equations and parameters are multiplied by the number of nodes of the band
  • Works for small models (Swameye, Boehm)

3. Code availability and implementations

  • Implemented in D2D (folder NEB, example in Examples/NEB/Swameye)
  • Idea similar to Profile Likelihood
  • Sample size calculation for waterfall plot / LHS-Fit

5. Publications from the Timmer group

  • UNPUBLISHED/Submitted: Tönsing et al., Frontiers in Physics (2019)

6. Publications from external groups

1. Short description

  • Different methods are available to (locally) minimize the objective function (least-squares/likelihood); a multi-start sketch follows this list
  • Deterministic methods include
    • Levenberg-Marquardt
    • Trust-region optimization
    • Line-Search algorithms
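
A minimal multi-start sketch using scipy's trust-region-reflective least-squares optimizer on a toy problem; the sorted final costs are what is usually shown as a waterfall plot (model and noise level are illustrative):

    import numpy as np
    from scipy.optimize import least_squares

    # Toy exponential-decay fit illustrating multi-start local optimization.
    t = np.linspace(0, 10, 30)
    true = np.array([1.0, 0.4])
    rng = np.random.default_rng(1)
    data = true[0] * np.exp(-true[1] * t) + 0.02 * rng.standard_normal(t.size)

    def residuals(log_p):
        a, k = np.exp(log_p)                      # parameters are optimized in log-space
        return a * np.exp(-k * t) - data

    lb, ub = np.log([1e-3, 1e-3]), np.log([1e2, 1e2])
    starts = rng.uniform(lb, ub, size=(50, 2))    # random starts inside the bounds
    fits = [least_squares(residuals, s, bounds=(lb, ub)) for s in starts]
    costs = np.sort([f.cost for f in fits])       # sorted costs give the 'waterfall' plot
    print(costs[:5])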

2. Applicability/restrictions/pitfalls

  • Different algorithms have been used in practice. Often, similar algorithms exist in many implementations.

3. Code availability and implementations

  • Many different optimizers are implemented in D2D; see ar.config.optimizers for the list and set the choice via ar.config.optim.
  • Multi Start optimization
  • Bayesian sampling (MCMC) as alternative to local optimization

5. Publications from the Timmer group

  • Raue A., et al. Lessons Learned from Quantitative Dynamical Modeling in Systems Biology. PLOS ONE, 8(9), e74335, 2013.

6. Publications from external groups

1. Short description

  • Reparametrization of ODE model yields a different geometry in the optimization problem and might be beneficial for optimization
  • This is especially the case when multiple parameters have identical physical units after reparametrization (the effect on the search space is sketched below)
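
A small illustration of the search-space effect: rectangular log-bounds on (k1, k2) correspond to a non-rectangular region for the reparametrized pair (k1, K = k2/k1), so independent bounds in the two parametrizations define different search spaces (bounds are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    log_k = rng.uniform(-3, 3, size=(100000, 2))   # independent bounds 1e-3 ... 1e3 in log-space
    k1, k2 = 10.0 ** log_k.T
    K = k2 / k1                                    # reparametrized quantity
    print(K.min(), K.max())   # spans roughly 1e-6 ... 1e6, far beyond the original 1e-3 ... 1e3 box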

2. Applicability/restrictions/pitfalls

  • For independent parameter bounds, reparametrization results in a different search space
  • Master thesis of L. Refisch (2017) showed that the influence of a different parameter search space is as big as the changed geometry for the problem.
  • Whether it improves optimization behavior is mostly problem-specific
  • The symmetry detection tool of B. Merkt can be used to identify physical units of parameters.
  • Optimization & parameter bounds

4. Publications from the Timmer group

  • Raue A., et al. Lessons Learned from Quantitative Dynamical Modeling in Systems Biology. PLOS ONE, 8(9), e74335, 2013.

1. Short description

  • Parameter bounds specify the search space in which optima of the likelihood are searched
  • Ideally, one would analyze the likelihood only in the vicinity of the global optimum. Its position is usually unknown, though; therefore, there is a trade-off between including the global optimum and keeping the search space small.
  • As multiple orders of magnitude are usually spanned in biological contexts, a transformation of parameters to log-space is the default; this improves optimization performance (see the sketch below)
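
A minimal sketch of the default log-parametrization: the optimizer works on log10 of the parameters (with bounds given in orders of magnitude), while the model sees the linear-scale values (bounds are illustrative):

    import numpy as np

    lb, ub = np.array([1e-5, 1e-3, 1e-1]), np.array([1e3, 1e3, 1e5])   # linear-scale bounds
    lb_log, ub_log = np.log10(lb), np.log10(ub)                        # bounds seen by the optimizer

    def objective_log(theta_log, objective):
        """Wrap an objective so that the optimizer searches in log10-space."""
        return objective(10.0 ** theta_log)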

2. Applicability/restrictions/pitfalls

  • A huge search space has multiple possible drawbacks: Many local optima, infeasibility of model integration, long optimization times for areas far away from optimum
  • On the other hand: Not including the point in parameter space that describes the data best is a worst-case-scenario!
  • In dMod, no hard bounds are specified; instead, a prior for sampling starting points in the search space is given

3. Implementations

  • D2D: ar.lb / ar.ub
  • dMod: …
  • Optimerger
  • Reparametrization
  • Optimization & parameter bounds

5. Publications from the Timmer group

  • Benefit of log-transformation: C. Kreutz: New Concepts for Evaluating the Performance of Computational Methods, IFAC-PapersOnLine (2016) 49(26): 63-70.

1. Short description

  • Parameter estimation by sampling from posterior
  • Uncertainty analysis included in sampling
  • Sampling from the posterior, i.e. likelihood times prior (a minimal Metropolis-Hastings sketch follows below)
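
A minimal Metropolis-Hastings sketch; the log-posterior here is a toy Gaussian stand-in, in practice it is the model log-likelihood plus the log-prior:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical log-posterior: log-likelihood plus log-prior.
    def log_posterior(theta):
        log_prior = -0.5 * np.sum(theta**2 / 10.0**2)                       # wide Gaussian prior
        log_lik = -0.5 * np.sum((theta - np.array([1.0, 2.0]))**2 / 0.3**2)
        return log_lik + log_prior

    def metropolis(theta0, n_steps=20000, step=0.2):
        chain = np.empty((n_steps, len(theta0)))
        theta = np.asarray(theta0, dtype=float)
        lp = log_posterior(theta)
        for i in range(n_steps):
            prop = theta + step * rng.standard_normal(theta.size)
            lp_prop = log_posterior(prop)
            if np.log(rng.uniform()) < lp_prop - lp:                         # accept/reject
                theta, lp = prop, lp_prop
            chain[i] = theta
        return chain

    chain = metropolis([0.0, 0.0])
    print(chain[5000:].mean(axis=0))                                         # posterior mean after burn-in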

2. Applicability/restrictions/pitfalls

  • Non-identifiabilities lead to problems (the posterior is not proper)
  • Slow compared to multistart deterministic parameter estimation - however includes uncertainty analysis
  • Prior has to be defined - uninformative prior usually impossible

3. Code availability and implementations

  • D2D: arMC3, arPlotMarginalized, arPlotCorrelations (in subfolder arFramework3/MCMC)
  • dMod: not implemented
  • MCMC vs. profiles (Uncertainty Analysis)
  • Multistart optimization

5. Publications from the Timmer group

  • Joep Vanlier’s PhD thesis
  • Raue et al. - Joining forces of Bayesian and frequentist methodology (2013)
  • Wieland - Bayesian parameter estimation in systems biology: Markov chain Monte Carlo sampling of biochemical networks - (2018 master thesis)

6. Publications from external groups

  • Ballnus et al. - Comprehensive benchmarking of Markov chain Monte Carlo methods for dynamical systems (2017)

1. See “Optimal transformations”

1. Short description

  • Sloppiness describes the (unexpectedly) large spread of the eigenvalues of the Hessian matrix and originates from structures in the sensitivity matrix, due to the model topology and the choice of data points (see the sketch below)
  • Sloppiness is not correlated with identifiability
  • Gutenkunst et al. 2007 has been cited intensively; in roughly 50% of the cases it is used as an excuse for bad parameter estimation
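
A minimal sketch: build the Gauss-Newton approximation of the Hessian, J^T J, from a sensitivity/residual-Jacobian matrix J and inspect the spread of its eigenvalues; a spread over many orders of magnitude is what is called sloppy (here J is synthetic):

    import numpy as np

    rng = np.random.default_rng(0)
    # synthetic sensitivity matrix (d residuals / d parameters) with very unequal column scales
    J = rng.standard_normal((100, 6)) @ np.diag([1.0, 0.5, 0.1, 1e-2, 1e-3, 1e-4])
    H = J.T @ J                                        # Gauss-Newton Hessian approximation
    eigvals = np.sort(np.linalg.eigvalsh(H))[::-1]
    print(eigvals)
    print("eigenvalue spread (orders of magnitude):", np.log10(eigvals[0] / eigvals[-1]))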

2. Applicability/restrictions/pitfalls

  • Only a local quadratic approximation
  • Very fast: only one eigenvalue calculation
  • But: Overestimates uncertainty - Profile likelihood is more precise
  • Not a good measure for identifiability -> use Profile Likelihood or ITRP

3. Code availability and implementations

  • D2D: arPlotSloppiness
  • Profile Likelihood, Experimental Design

5. Publications from the Timmer group

  • Tönsing et al., Cause and cure of sloppiness in ODE models, PRE (2014)
  • (Hass Benchmark Paper)

6. Publications from external groups

  • “Initial” Paper: Gutenkunst et al., Universally Sloppy Parameter Sensitivities in Systems Biology Models, PLOS Comp Biol, (2007)
  • Sloppiness vs. Identifiability: Chis et al., On the relationship between sloppiness and identifiability, Mathematical Biosciences (2016)

1. Short description

  • Comparison of uncertainty analysis between profile posterior (similar to profile likelihood for posterior) and MCMC marginalization
  • Results: profile posterior similar to MCMC marginalization if no non-identifiabilities exist

2. Applicability/restrictions/pitfalls

  • Non-identifiabilities make posterior improper → sampling impossible
  • Assumes quadratic form of posterior
  • Prior necessary for posterior

3. Code availability and implementations

  • D2D: arMC3, arPlotMCMCProfile, arPlotPLEComparisonToMCMC, arPlotMarginalized
  • dMod: not implemented
  • MCMC (Parameter estimation)
  • Profile likelihood

5. Publications from the Timmer group

  • Raue et al. - Joining forces of Bayesian and frequentist methodology (2013)
  • Wieland - Bayesian parameter estimation in systems biology: Markov chain Monte Carlo sampling of biochemical networks - (2018 master thesis)

6. Publications from external groups

  • -

1. Short description

  • Reduce conserved moieties (sums of species that stay constant over time) from the model.
  • The left null space of the stoichiometric matrix gives the conserved moieties. These can be eliminated by iteratively solving each conservation law for one species and expressing it in terms of the others.
  • Reduces the number of states and preserves pools by reformulating the model.
  • Allows the use of steady-state sensitivity evaluation via the implicit function theorem (a sketch of the null-space computation follows below).
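
A minimal sketch using sympy: the left null space of the stoichiometric matrix (i.e. the null space of S^T) gives the conserved moieties, which can then be used to eliminate one species per moiety (toy reaction network):

    import sympy as sp

    # Toy reaction chain A -> B -> C:
    # rows = species (A, B, C), columns = reactions (A->B, B->C)
    S = sp.Matrix([[-1,  0],
                   [ 1, -1],
                   [ 0,  1]])

    # left null space: vectors g with g^T S = 0, i.e. the null space of S^T;
    # every such g defines a conserved moiety g^T x = const
    for g in S.T.nullspace():
        print(g.T)          # here: [1, 1, 1]  ->  A + B + C = total is conserved
    # the model can then be reduced by replacing e.g. C with (total - A - B)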

2. Applicability/restrictions/pitfalls

  • Currently does not detect conserved moieties that arise from specific parameter choices (e.g. setting a sink to zero).

3. Code availability and implementations

  • D2D: arConservedPools.m / arReduce.m (currently only officially supports simple conserved moieties with no sharing of species between conserved moieties)
  • -

5. Publications from the Timmer group

  • -

1. Short description

  • Model reduction based on negligible reaction fluxes
  • If no or only a negligible flux goes through a certain reaction, this reaction is not used by the model and can be removed (see the sketch below)
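
A minimal sketch of the idea: evaluate all reaction fluxes along the fitted trajectory and flag reactions whose flux is negligible everywhere (toy model, threshold illustrative):

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy model with two parallel conversion routes; the k2 route carries ~no flux.
    def fluxes(x, p):
        k1, k2, k3 = p
        return np.array([k1 * x[0], k2 * x[0], k3 * x[1]])      # v1, v2, v3

    def rhs(t, x, p):
        v = fluxes(x, p)
        return [-v[0] - v[1], v[0] + v[1] - v[2]]

    p = (1.0, 1e-6, 0.5)
    sol = solve_ivp(rhs, [0, 20], [1.0, 0.0], args=(p,), dense_output=True)
    t = np.linspace(0, 20, 200)
    v_max = np.abs(np.array([fluxes(x, p) for x in sol.sol(t).T])).max(axis=0)
    print(v_max)   # reactions with a maximal |flux| below some threshold are removal candidates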

2. Applicability/restrictions/pitfalls

  • No additional computation time needed
  • Only applicable for the removal of reactions; parameter dependencies cannot be visualized

3. Code availability and implementations

  • Implemented in dMod: getFluxes(), plotFluxes()
  • Implemented in D2D: select “PlotV” in arPlotter, arPlotV
  • Profile likelihood

5. Publications from the Timmer group

  • Maiwald et al. Driving the model to its limit: profile likelihood based model reduction. PLoS ONE (2016)
  • Oppelt et al. Model-based identification of TNFα-induced IKKβ- and IκBα-mediated regulation of NFκB signal transduction as a tool to quantify the impact of drug-induced liver injury compounds. npj Systems Biology and Applications (2018)

1. Short description

  • D2D way to implement sudden/discontinuous changes of states or inputs
  • The ODE solver integrates up to the time point of the event; then the states are reinitialized by an affine transformation (see the sketch below)
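
A minimal sketch of the reinitialization idea (not the D2D implementation): integrate up to the event time, apply an affine transformation x -> A x + b to the states, and continue the integration:

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, x, k):                                   # toy one-state decay model
        return [-k * x[0]]

    k, t_event = 0.3, 5.0
    A, b = np.array([[1.0]]), np.array([2.0])           # event: add a bolus of 2 units

    sol1 = solve_ivp(rhs, [0.0, t_event], [1.0], args=(k,))
    x_new = A @ sol1.y[:, -1] + b                       # reinitialize the states at the event
    sol2 = solve_ivp(rhs, [t_event, 20.0], x_new, args=(k,))
    print(sol1.y[0, -1], x_new, sol2.y[0, -1])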

2. Applicability/restrictions/pitfalls

  • Use steady state found via pre-equilibration
  • Applied for reinitialization of integrator at step functions in inputs
  • No recompiling necessary after adding event
  • Event functionality is distributed among different functions:
    • arAddEvent adds a custom event
    • arSteadyState invokes arAddEvent to set the initial condition to the steady state
    • arFindInputs creates events according to step functions in the inputs
    • arLink adds/initializes events
    • arClearEvents

4. Documentation

1. Short description

  • blotIt is used to bring measurements that are on different scales, e.g. due to the use of several western blot gels, to a common scale.
  • Determines the scaling parameters between the different data realisations and scales the data (a simplified sketch follows below).
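
A much simplified sketch of the underlying idea (not the blotIt implementation): each gel g measures s_g * x_t, and the common time course x_t and the gel-specific scalings s_g are estimated jointly on the log scale; blotIt additionally handles partial overlap and error models:

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    x_true = np.array([1.0, 2.0, 4.0, 3.0, 1.5])        # common time course
    s_true = np.array([1.0, 0.2, 5.0])                   # gel-specific scalings
    data = np.exp(np.log(s_true[:, None] * x_true[None, :])
                  + 0.05 * rng.standard_normal((3, 5)))

    def residuals(theta):
        log_x = theta[:5]
        log_s = np.concatenate([[0.0], theta[5:]])       # s_0 fixed to 1 removes the degeneracy
        return (np.log(data) - (log_s[:, None] + log_x[None, :])).ravel()

    fit = least_squares(residuals, np.zeros(7))
    print(np.exp(fit.x[:5]))                             # aligned common-scale time course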

2. Applicability/restrictions/pitfalls

  • Usage is restricted to data realisations with overlap

3. Code availability and implementations

  • R package "blotIt2"
  • Function: alignME()

4. Publications from the Timmer group

  • Implemented by Daniel Kaschek; publication coming soon

1. Short description

  • Generate asymptotic confidence intervals for measurements under a specified experimental condition, which comprise the error in the prediction as well as the error in the measurement

2. Applicability/restrictions/pitfalls

  • Applicable to most parametric models analysed based on the likelihood
  • Interpretation calls for special care; the confidence intervals are only valid for a single measurement

3. Code availability and implementations

  • D2D: work in progress
  • dMod: ?
  • Extension of the prediction profile method
  • Important in the 2D-Profile setting

5. Publications from the Timmer group

  • Kreutz et al. - Likelihood based observability analysis and confidence intervals for predictions of dynamic models (2012)

1. Short description

  • Analogously to the profile likelihood method, prediction profiles are profiles for the model prediction

2. Applicability/restrictions/pitfalls

  • Generally more complicated than a parameter profile, because the optimization constraint is nonlinear; however, there is a relation to the validation profile from which it can be calculated
  • Interpreted in the same way as parameter profiles

3. Code availability and implementations

  • D2D: ?
  • dMod: ?
  • Profile Likelihood
  • Validation Profiles
  • Prediction bands

5. Publications from the Timmer group

  • Kreutz et al. - Likelihood based observability analysis and confidence intervals for predictions of dynamic models (2012)
  • Kreutz et al. - Profile likelihood in systems biology (2013)

1. Short description

  • Technique to model time delays with ordinary differential equations
  • Number of linear chain states determines shape of output signal
  • The optimal/smallest possible number of states can be inferred by identifiability analysis of an auxiliary parameter (see the sketch below)
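
A minimal sketch of the linear chain trick: a delay of length tau is approximated by a chain of n first-order steps with rate n/tau; the larger n, the sharper the delay (model and numbers are illustrative):

    import numpy as np
    from scipy.integrate import solve_ivp

    def chain_rhs(t, x, n, tau):
        k = n / tau
        u = 1.0                              # constant input entering the chain
        dx = np.empty(n)
        dx[0] = k * (u - x[0])
        dx[1:] = k * (x[:-1] - x[1:])
        return dx

    n, tau = 10, 5.0
    sol = solve_ivp(chain_rhs, [0, 20], np.zeros(n), args=(n, tau), dense_output=True)
    t = np.linspace(0, 20, 100)
    # x[-1](t) is the delayed (and smoothed) version of the input step
    print(sol.sol(t)[-1, ::10])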

2. Applicability

  • Time delays in reaction networks, e.g. transcription, translation, etc (see Swameye or Bachmann model)

3. Code availability

  • The method is not implemented in an automated fashion, but an example is available in D2D

4. Publications
