
Part 3. NOVA’s Modeling and Simulation approach

TL;DR
  1. The clinical outcome is the ultimate focus of the M&S process.

  2. The six-step process consists of:

    • Problem formulation (project plan),
    • Systemic knowledge review (knowledge model),
    • Computational model,
    • VPop,
    • Validation,
    • Simulation.
  3. NOVA's approach is transparent: a simulation output can be traced back to the knowledge or data that fed it, and the way the output was obtained can be understood. This traceability rests on clear documentation of the ongoing or completed M&S process, which the jinkō platform generates automatically. Transparency distinguishes NOVA's approach from AI.

  4. The context of use (CoU) is the context of application of the model predictions. It represents the time and space domain of the patient condition and environment (including standard of care) of interest. It is not the model that defines the CoU but the research question for which the model was designed.

  5. Risk assessment in M&S consists of assessing the possible role of M&S in the alternative decisions and its potential detrimental effect on patients.

  6. VPop creation and calibration are two intertwined processes. A VPop can be defined in terms of parameters, VP descriptors, and variables.

  7. Validation is the crucial step: it makes simulation outputs credible, which is essential for regulatory acceptance of in silico approaches. Validation of a model is an ongoing process with no definitive endpoint; it pauses once the model is considered sufficiently credible in the context of use consistent with the research question.

  8. Simulation is the main output of the in silico approach: running the model to answer the question.

  9. To help in operating the six steps, and to meet the demands of transparency and continuous model updating, NOVA has developed JINKŌ, a unified M&S and in silico clinical trial simulation platform.

Overview of the model

An overview of the whole model is displayed in Figure 2. The key functional parts are shown schematically and dynamically, following the administration of a drug (a chemical or biological entity):

  • Pharmacokinetics (PK): “What the body does to the drug”, i.e. the journey of the drug molecule from the intake site (mouth, skin, …) to either its site(s) of action or elimination (chemical transformation or excretion unchanged). A toy PK/PD sketch is given after this list.
  • Pharmacodynamics (PD): “What the drug does to the body”, i.e. how, and where, the molecule interacts with the body. At the desired site of action, the drug interacts with its target(s). The way this interaction takes place is called Mode of Action (MoA) of the drug.
  • Drug effects (DE): to what extent and how the drug alters the course of the disease. The Mechanism of Action of the drug refers to the pathway(s) whose functioning is changed by the drug. However, the changes induced by the drug (DE) extend well beyond the system components involved in the Mechanism of Action; for instance, the DE encompasses the way the drug changes the occurrence of clinical outcomes. All these alterations depend on the properties of the system, written into its components and their functions. If the target of the drug is known, its Mechanism of Action need not be known: the drug effects observed by simulation (letting the whole model run its course) derive from the properties accounted for. It is necessary and sufficient to include in the model the relevant properties of the system; hence the weight, for the model's predictive value, of the balance between false positives and false negatives mentioned below.
  • The Virtual Population (VP): It carries the inter- and intra-patient variabilities seen in real life at any level: genomic, phenotypic, environmental and ageing.
  • Clinical outcome: It should be considered the ultimate focus of the M&S process. Even if the question is addressed at the very beginning of the R&D process, such as “Which is the best target for this condition?”, the answer requires ranking the effects on the clinical outcome. This is achieved through the EM. See Boxes 4 and 5.
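
To make the PK and PD notions above concrete, here is a minimal, self-contained sketch (parameter values are illustrative, not taken from any NOVA model): a one-compartment PK model with first-order absorption and elimination, whose predicted plasma concentration drives a simple Emax effect at the site of action.

```python
# A one-compartment PK model with first-order absorption and elimination,
# coupled to a simple Emax PD effect. All parameter values are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

ka, ke, V = 1.2, 0.15, 40.0   # absorption rate (1/h), elimination rate (1/h), volume (L)
emax, ec50 = 1.0, 0.5         # maximal effect (arbitrary units), half-effect conc. (mg/L)

def pk_rhs(t, y):
    """y[0]: drug amount at the intake site (mg); y[1]: amount in plasma (mg)."""
    gut, central = y
    return [-ka * gut, ka * gut - ke * central]

sol = solve_ivp(pk_rhs, (0.0, 48.0), [100.0, 0.0], dense_output=True)
t = np.linspace(0.0, 48.0, 200)
conc = sol.sol(t)[1] / V                 # PK: plasma concentration over 48 h (mg/L)
effect = emax * conc / (ec50 + conc)     # PD: Emax interaction at the target
print(f"peak concentration {conc.max():.2f} mg/L, peak effect {effect.max():.2f}")
```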

Figure 2: Overview of the whole model

A six-step process

The six-step NOVA M&S approach is shown in Figures 3, 4 and 5 and illustrated in Box 5. The steps are listed below:

  1. Problem formulation. The scope of the model is delineated in order to answer the scientific question(s) raised by the problem. In order not to miss a component of the living system of interest, it is wise to accept false positive components rather than discard relevant ones (false negatives), i.e. to widen the perimeter slightly beyond what seems necessary. The model scope is then split into a series of connected, yet rather independent, submodels. A mandatory submodel is the clinical submodel, which makes the link with the clinical outcome(s) of interest.
  2. Systemic knowledge review. All knowledge within the perimeter defined above that can be extracted from the scientific literature is considered for inclusion and curation. The major issue addressed by the curation of a piece of knowledge is its strength of evidence. The piece of knowledge is then rewritten as an assertion (one, two or more biological entities associated with, or connected by, a function). Two issues at this step determine the relevance of the whole M&S process: 1) the scope of the review should be larger than necessary to prevent missing relevant pieces of knowledge, even if it means that the computational model may be reduced later (for instance after the sensitivity analysis); 2) normal physiology and biology must be integrated into the model before introducing pathophysiology. This step is particularly sensitive and often the most time consuming. It leads to the Knowledge Model (KM), a mixture of figures and text, an arrangement of assertions in structured order (see Box 5 and Figures 4 and 5).
  3. Computational model. The assertions are represented by mathematical equations according to the known laws of biochemistry, physics and biophysics. The equations are translated into computational code in jinkō, the NOVA platform.
  4. Virtual population (VP). The virtual population is designed to capture the variability that is relevant for addressing the question within the related context of use (CoU, see below). At the same time, the model calibration is performed.
  5. Validation. According to the question(s) and the CoU, model validation is performed.
  6. Simulation. Finally, simulations designed to answer the question(s) are run.

 

Figure 3: six-step NOVA M&S approach. jinkō is the platform designed by NOVA to implement the process. 

 
Figure 4: A section of the text of a Knowledge Model

The GALT lies throughout the intestine and represents the largest immune system in the body[1soe1]. The gut epithelium has evolved structurally to form a passive barrier between the antigen-rich environment of the lumen and the sterile core of the organism [2soe1]. Specific entry ports, formed by microfold (M) cells, are present within the epithelial belt overlaying the inductive Peyer’s patches (PPs) and lymphoid follicles [2soe4][3soe1]. Peyer’s patches are small masses of lymphatic tissue that resemble lymph nodes, with a central B cell follicle and surrounding intervening T cell areas [4soe1]. Following antigenic exposure, activated lymphocytes migrate out through lymphatic channels to regional mesenteric lymph nodes (mLNs) [4soe4] where they further differentiate, mature and proliferate [4soe4]. The activated lymphocytes then migrate out through the thoracic duct, into the systemic circulation, and then “home” back to the effector sites of GALT, the lamina propria (LP) [4soe4]; an amorphous collection of vascular and lymphatic-rich connective tissue beneath the gut epithelium [4soe4]. A portion of the circulating cells, originally processed at the level of the gut, migrate out to distant sites within the common mucosa-associated lymphoid tissue (MALT) system (adding to the mass of lymphoid tissue at the mucosal surfaces of such organs as the lung, liver, and urogenital tract) [4soe4].

Legend: Each sentence is an assertion; at the end of each sentence is the SoE of the assertion. By clicking on the SoE button, one reaches the assertion file with its annotations, the original piece of knowledge and a link to this piece of knowledge in the PDF of the full text of the original article.
 

Figure 5: Graphic representation of a Knowledge Model 

Box 5: From knowledge to simulation: illustration

a Scientific articles on the biological system to be modelled are identified and analysed. To apply the process detailed in Box 4, all known and relevant processes from genes to clinical events need to be accounted for, covering both the physiology and the pathology. Relevant knowledge pieces are extracted, curated, annotated and translated into assertions, i.e. function(s) linking biological entities (e.g. “IkB kinase phosphorylates IkB, resulting in a dissociation of NF-kappaB from the complex with its inhibitor”). The curation and annotation process includes the assessment of the strength of evidence (e.g. “Strong: the assertion is unequivocally confirmed by several other experiments or observations; Weak: contradictory findings”).
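
As an illustration of how such a curated assertion might be represented programmatically, here is a minimal sketch; the field names and the two-grade SoE scale are assumptions made for illustration, not jinkō's actual schema.

```python
# Sketch of an assertion record with curation metadata. Field names and the
# SoE grades are illustrative assumptions, not jinkō's actual schema.
from dataclasses import dataclass, field
from enum import Enum

class StrengthOfEvidence(Enum):
    STRONG = "unequivocally confirmed by several other experiments or observations"
    WEAK = "contradictory findings"

@dataclass
class Assertion:
    subject: str                  # acting biological entity
    function: str                 # the function linking the entities
    objects: list[str]            # entities acted upon
    consequence: str              # downstream effect stated by the source
    soe: StrengthOfEvidence       # graded during curation
    source: str                   # traceability to the primary article
    annotations: dict = field(default_factory=dict)

ikk_assertion = Assertion(
    subject="IkB kinase",
    function="phosphorylates",
    objects=["IkB"],
    consequence="dissociation of NF-kappaB from the complex with its inhibitor",
    soe=StrengthOfEvidence.STRONG,
    source="doi:<placeholder>",                     # hypothetical citation slot
    annotations={"protein_bank_id": "<placeholder>"},  # e.g. link to gene/protein banks
)
```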

b The assertions are assembled in a Knowledge Model which takes both a textual and graphical form. A number of appendices to the Knowledge Model are documented (knowledge gaps, simplifying hypotheses, etc.).

c Then, the entities and the relationships between them, contained in the Knowledge Model form the components and the functional relationships, respectively, of the final model. This network represents the system of interest. Functional relationships are translated into functional equations (e.g. Michaelis-Menten equation, ionic exchanges through a channel, etc.). These functional equations are then translated into a series of differential equations in order to reproduce the dynamics of the system of interest. Parameters of these equations will become the patient descriptors of the Virtual Population (see Box 2). The resulting system of equations forms the Formal Model, which is the mathematical representation of the Knowledge Model.

d Equations forming the Formal Model are then translated into computer code to generate the Computational Model.
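
A toy sketch of steps c and d under illustrative assumptions: the Michaelis-Menten functional equation v = Vmax·S/(Km + S) is turned into a differential equation for the substrate S, then into computer code. The parameter values here are placeholders; in the actual process, Vmax and Km would become Virtual Population descriptors.

```python
# The functional equation v = Vmax * S / (Km + S) becomes dS/dt = -v, then code.
# Vmax, Km and the initial substrate S0 are illustrative values only.
from scipy.integrate import solve_ivp

def substrate_rhs(t, y, vmax, km):
    s = y[0]
    v = vmax * s / (km + s)      # Michaelis-Menten functional equation
    return [-v]                  # differential equation: substrate consumption

vmax, km, s0 = 2.0, 0.5, 10.0
sol = solve_ivp(substrate_rhs, (0.0, 20.0), [s0], args=(vmax, km))
print(f"substrate remaining after 20 time units: {sol.y[0][-1]:.3f}")
```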

e The Computational Model is used to run in silico experiments (Figure e) and to simulate a clinical trial and predict the clinical benefit of a drug candidate (see Box 4).

A transparent process

Transparency of an M&S process means that every step is traceable. Ultimately, it means a person not involved in the process can 1) trace a simulation output back to the knowledge or data that fed it and 2) understand how the output was obtained. It has two tightly intertwined aims: credibility and understanding. A building block of transparency is documentation: clear, honest and up-to-date documentation on the ongoing or completed M&S process is mandatory. The jinkō platform, which enables NOVA to implement this approach, generates documentation automatically.

Transparency is one of the properties of NOVA's M&S approach that distinguishes it from AI.

Context of Use of a model

The context of use (CoU) is the context of application of the model predictions. It represents the time and space domain of patient condition and environment (including standard of care) of interest.

An adapted definition of a model's CoU (originally defined in the V&V40 [1] framework for Computational Model validation) with regard to mechanistic modeling in drug development (originally for physiologically based pharmacokinetic, PBPK, models) has been published by Kuemmel et al. [2] The authors wrote that the CoU describes “how the model will be used to address the question of interest, i.e. the specific role and scope of the model”. Further, “ambiguity in the question of interest and COU can result in (i) reluctance to accept modeling and simulation in a given drug development or regulatory review scenario or (ii) undesirably protracted dialogue between drug developers and regulators on the data requirements needed to establish model credibility. It is, therefore, critical to unambiguously and explicitly state the question of interest and how the proposed modeling and simulation approach will address it”.

The definition and Kuemmel et al.'s comments tightly link the CoU to the questions the model is designed to answer. For example, in a phase 2 trial, the question may be: what is the optimal dose? The CoU is then the choice of the dose to be used in a phase 3 trial, in which the treatment at the optimal dose will be compared to its control, as well as the patients who will be included in that trial.

In a phase 3 trial, if the question relates to the effectiveness of the treatment, the CoU is of course the eligible patients, but by extension it also concerns the patients defined as the target population, because this population is delineated according to the findings made during this trial (and previous clinical trials) and, in a MIDD setting, according to M&S findings.

If a post-marketing study is performed, it should include patients corresponding to the target population as defined from the clinical trials performed and other data collected during the drug development, including simulation outputs. If the model is used to answer a question about this post-marketing study, the CoU should be the same as the previous one, unless the post-marketing study is aimed at answering specific questions beyond verifying that the phase 3 results apply in real life, that treatment efficacy is sustained over an extended period of time (for a chronic condition), and that no new side effects emerge.

Contrary to popular belief, it is not the model that defines the CoU but the research question for which the model was designed.

Just as the CoU is associated with a question, the decision attached to the question is equally attached to the CoU.

Decision attached to a question and risk assessment

An M&S process is a nonlinear series of operations, some of which are shown in Figure 6. The main output of the process is meant to support decision making. Directly connected to the decision box is the risk assessment box. Risk assessment in M&S consists of assessing the possible role of M&S in the alternative decisions and its potential detrimental effect on patients. Obviously, risk assessment is linked with M&S validation through the credibility of its predictions.

Figure 6: Connection between the decision and other key elements of the M&S process. The model prediction validation step is not shown in the diagram, although it is linked with almost all the operations shown here.

Virtual Population and Calibration

Overview: two intertwined operations

Calibrating a model and creating a Virtual Population (VP) attached to this model are two intertwined operations. Calibration consists in selecting relevant values for model parameters. These values are, by construction, the VP descriptor values (see Box 2). The VP descriptors carry the inter- and intra-patient variability. In turn, when a simulation is run, the VP descriptor values are model inputs. Relevant values for parameters, and therefore descriptors, are obtained by constraining the model dynamics to be biologically plausible, i.e. to reproduce what is known or has been observed.

Thus, in order to establish quantitatively correct and credible model outputs, plausible parameter value ranges have to be identified. Eventually, parameter identification makes it possible to explore, identify and assign the variability associated with each parameter.

Parameters, descriptors, variables

In the model-VP combination, there are several components that can vary between virtual patients, over time, or both. All can be either inputs or outputs of a simulation, depending on the moment:

Parameters: attached to model components, which are either representations of biological entities or functions connecting them. They are valued through the input of VP descriptor values.

VP descriptors: most of them are images of the parameters, giving them values; others are values of variables (see below) or models specific to these descriptors (e.g. compliance).

Variables: they are of several types:

  • outputs of the model, valued from a simulation run; for instance, the degree of liver fibrosis after a few months of simulated time in a Non-Alcoholic SteatoHepatitis model of liver disease
  • outputs from a construct, either a specific adjunct to the VP descriptors that helps make the VP more realistic, or the result of an allometric scaling of parameter values. Both are special types of VP descriptors with no one-to-one parameter connection.
  • modeled descriptors: in a dynamic VP, some patient descriptors are added and made variable over time through a specific model (e.g. compliance). They are inputs to the model when a simulation is run. Comorbidities can be accounted for this way.

In other words, descriptors are all the scalar values one can extract from the model, either from its dynamics (observed at a given time) or from its inputs. Some descriptors are observable in “real life”, depending on measurement capabilities, but most are not in practice.
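
A schematic sketch of this three-way distinction (illustrative names, not jinkō's data model): descriptors enter the model as parameter values, modeled descriptors such as compliance evolve through their own submodel, and variables are read off the dynamics after a run.

```python
# Schematic types for the distinction above (illustrative, not jinkō's data model).
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class VirtualPatient:
    descriptors: Dict[str, float]         # images of model parameters (one-to-one)
    compliance: Callable[[float], float]  # modeled descriptor: time -> adherence

@dataclass
class SimulationResult:
    variables: Dict[str, float]           # outputs read off the model dynamics

def run_model(vp: VirtualPatient, months: float) -> SimulationResult:
    """Placeholder: a real run would integrate the model equations with the
    descriptor values as inputs and return variables such as the degree of
    liver fibrosis after `months` of simulated time."""
    raise NotImplementedError
```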

Descriptor valuation and calibration

Valuation consists in implementing the process represented in Box 2. This is far from being an automated process. During the literature review, published data (from text, graphs and tables) about expected biological constraints for molecular, cellular and tissue variables and their dynamic behavior are extracted; they will serve as initial control checkpoints or as constraints for automatic parameter identification (e.g. biological variables should be positive). Generally, model parameter values will be initially chosen to obey these constraints.

To choose such parameter values, different techniques, including sensitivity analysis of the model, are applied to reduce the ranges. Eventually, a multi-objective optimization is conducted to determine parameter value distributions that ensure the model satisfies the biological constraints under reference conditions.

The list of constraints implemented as scores, along with their non-violation status, is generated automatically for each Virtual Patient run as part of the model documentation.

The initial parameter identification provides parameter values by using hypotheses to more or less directly infer values (e.g. via analogy or scaling). The plausibility of value ranges, as assessed from knowledge, is essential. Calibration partly overlaps with this, but requires numerical fitting procedures; it refines parameter values (i.e. descriptors) and informs the remaining unknown parameters. Calibration accounts for system behavior and constraints and improves the correspondence of predictions with experimental outputs.

In view of the overall model complexity, not all parameter values (descriptors) can be informed or inferred (accurately) from the literature or preclinical experiments. Therefore, clinical data are additionally used to refine the chosen parameter values. Calibration is an automatic or systematic manual procedure, including one-at-a-time and several-at-a-time changes to model parameters, aimed at achieving a better fit to the desired outputs than the uncalibrated model alone provides. Calibration of the moments of the VP descriptor distributions (population-level calibration), or calibration of a set of discrete patient realizations in the VP followed by inference of parameter distributions, are tasks that can typically be performed using clinical datasets. Data used for this process should be as close as possible to the Context of Use.
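
A deliberately simplified sketch of this constraint-and-fit logic, with a toy model and hypothetical constraints standing in for a real calibration pipeline: candidate parameter sets are scored against positivity and plausibility constraints and against a clinical observation, and the best-scoring sets are retained.

```python
# Toy calibration by constraint scoring: sample candidate parameter sets, score
# them against plausibility constraints and a clinical observation, keep the best.
# The model, constraints and target value are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

def steady_state_level(k_prod, k_clear):
    """Toy model output: steady-state level of a biomarker."""
    return k_prod / k_clear

def score(params, observed=5.0):
    k_prod, k_clear = params
    if k_prod <= 0 or k_clear <= 0:          # constraint: rates must be positive
        return np.inf
    level = steady_state_level(k_prod, k_clear)
    penalty = 0.0 if 1.0 <= level <= 20.0 else 10.0   # plausible-range constraint
    return penalty + (level - observed) ** 2          # fit to the clinical datum

candidates = rng.uniform(0.1, 10.0, size=(1000, 2))   # random (k_prod, k_clear) pairs
scores = np.array([score(p) for p in candidates])
retained = candidates[np.argsort(scores)[:50]]        # best-scoring parameter sets
print("first retained set:", retained[0].round(3))
```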

The interplay between the proportion of data-driven calibrated descriptors, generalizability and the CoU

Unsettled vocabulary issues

Generalizability, external generalizability, robustness, external validity... All of these terms are close to each other, and while we intuitively grasp their meaning, intuition is insufficient to put them into practice in knowledge-based M&S. Some, like external validity, are well defined for statistical models, that is to say models based on fitting to a set of data. For this type of model, which is (rather) bound to the dataset it is fitted to, the domain of generalizability does not easily extend beyond the domain of external validity [3].

Without wishing to resolve these vocabulary questions here, we must propose a general meaning for each of these terms in knowledge-based M&S. The external validity of a model at NOVA corresponds to the domain of validity discussed in another paragraph; it is closely, if not totally, related to its validation. The robustness of the model is its ability to keep producing reliable predictions when the CoU changes somewhat; it has something to do with reusability.

Generalizability is the property of the model to make reliable predictions beyond the domain of validity as it has been checked. Generalizability and robustness are linked, and generalizability is greatly dependent on the material the model is based on. In knowledge-based modelling, generalizability is greater than with data-driven modelling, because knowledge is more stable than data, as explained in a paragraph above. We can assume, as a first approximation, that the robustness of the model is a factor of its generalizability. We can see the role of virtual populations in the verification of these properties.

Finding the compromise

There are two complementary ways to value model parameters (and thus the VP descriptors, and build the VP): 1) derive their values from knowledge; 2) use a scoring process (i.e. an algorithm) to find values that are consistent with knowledge (usually knowledge of the behavior of the modelled systems) or with data, for the parameters that cannot be valued the first way or that the modeler does not want to derive from knowledge. The former is called knowledge-driven scoring and the latter data-driven. The border lies between the knowledge-derived values (obtained either by “direct” valuation or through scoring) and the data-derived values. The relative proportion of the two approaches has a major impact on generalizability.

Let us proceed by way of a proof by contradiction, taking the case where all descriptor distributions (i.e. their joint distribution) derive from a calibration process run on a given set of in vivo data. First, the resulting VP acquires de facto the CoU of the calibration set (e.g. the same population in the same context as the calibration dataset). Second, the margin for using the model and the VP to extrapolate, for instance to seek a larger target population, is quite limited, although the major objective of the NOVA approach is to predict, which means to generalise and extrapolate.

In this case, we are in the same paradigm as that of a statistical model fitted to a set of data. Actually, the only difference between the M&S approach and traditional statistical modelling is the foundation of the models: mathematical equations that represent knowledge on each biological interaction of interest for the former, and a global statistical model that is assumed to fit the data and the way they were collected for the latter. Note that Leo Breiman distinguished two statistical cultures, “statistical modeling” and “algorithmic modeling”, the latter taking more account of the phenomenon behind the data [4]. M&S is a third “culture”, steps beyond Breiman's “algorithmic culture”. In the case we are illustrating, this third culture gets considerably closer to the other two by the fact that all the parameter values result from data fitting. In such an extreme case, the model outputs certainly fit the CoU. But would the generalizability of the model be better than that of the statistical or algorithmic ones? Probably not.

Since a major objective and claimed benefit of the NOVA approach is generalizability (in a form and scope varying with the step of the R&D process), it emerges from this extreme case that, in order to justify a prediction (always the result of an exercise in generalization or extrapolation), the proportion of data-driven calibrated parameters must be as low as possible (the remainder being calibrated from knowledge). The counterpart is that the CoU will then depend on the constitution of the validation population, which will certainly have to include the CoU patients but also other patients (which will help to enlarge the domain of validity).

Thus, finding the right compromise between knowledge-derived and data-derived parameter valuation is a key factor of generalizability.

Besides the role of the VP and the way it is designed (see Box 6), reliable scoring of the SoE is important. One can assume that the generalizability of a knowledge-based model increases with the average SoE of the assertions it is based on.

An M&S project results in more than one VP

Before the ultimate VP, which corresponds to the CoU and enables the research question(s) to be addressed, a series of VPs (i.e. sets of inputs to the model) is used (see Box 6).

Box 6: various Virtual Populations

Legend: M = model; P = parameter; De = descriptor; VP = virtual population; JD = descriptors’ joint distribution; CM = computational model

| Stage | Model (M) | Development population | Parameters | Descriptors | Objectives & comments |
|---|---|---|---|---|---|
| A | uncalibrated M | a single Virtual Patient | all variable | a single set of De values | 1. find a reference Virtual Patient; 2. M proof of concept |
| B | uncalibrated M | VP | all variable | (plausible) uniform distributions (no correlation) JD | find the parameter space (or set) for which the CM converges numerically (not necessarily biologically plausible) = finding the function domain |
| C | uncalibrated M | VP | some are set constant | (plausible) uniform JD for the remaining De | idem |
| D | uncalibrated M | VP | some are set constant | JD derived from knowledge & data | 1. calibration on a virtual VP; 2. step #2 of M proof of concept |
| E | uncalibrated M → type 1 calibrated M | VP | some are set constant | JD derived from knowledge & data and adjusted through scoring | 1. calibration on a virtual VP with constraints; 2. set of scores resulting in JD adjustment |
| F | not fully calibrated M → type 2 calibrated M | real aggregated data | some are set constant | JD derived from knowledge & data and adjusted through scoring | 1. calibration on a virtual VP with constraints; 2. set of scores resulting in JD adjustment |
| G | not fully calibrated M → type 3 calibrated M | real individual data | some are set constant | JD derived from knowledge & data and adjusted through scoring | 1. calibration on a virtual VP with constraints; 2. set of scores resulting in JD adjustment |
| H | calibrated M → type 1 validated M | real aggregated data | some are set constant | JD derived from knowledge & data and adjusted through scoring | 1. validation on summarized data; 2. results in validation metrics values |
| I | calibrated M → type 2 (ultimate) validated M | real individual data | some are set constant | JD derived from knowledge & data and adjusted through scoring | 1. validation on summarized data; 2. results in validation metrics values; 3. and in a final validated M |
| J | validated M | real individual data | some are set constant | JD derived from knowledge & data and adjusted through scoring | 1. simulation(s); 2. answering the questions |

Calibration and validation: strictly non-overlapping knowledge and datasets

In order to avoid tautological derivation and to optimize model prediction credibility, the sets of knowledge and data involved in calibration (valuation of the VP descriptors’ joint distribution) and in validation (the major component of the assessment of model prediction credibility) should not overlap.

Validation

Model validation is crucial

In order to make its simulation predictions credible, a model should be validated. The question of simulation output validity is also central for regulators’ acceptance of in silico clinical trials (see the section below). The principles of science, the experimental method and inferential statistics applied to therapeutic efficacy and safety evaluation justifiably dictate the way people think about the assessment of evidence for in vivo or in vitro experiments and observations [5][6][7]. Beyond the need for unbiased comparison, weighing the strength of evidence requires getting rid of random findings due to the inherent variability of living systems; hence the recourse to the statistical paradigm. That paradigm hardly applies in the in silico world, since one is not working with samples drawn from a theoretical parent or real population, but with the whole (virtual) population. To imagine a parent population of this virtual population seems far-fetched, with the sole benefit of carelessly applying the statistical paradigm of the life and earth sciences: any tiny difference in the studied system’s behaviour, with no clinical relevance, could emerge with enough simulation runs. Additionally, one can reverse-engineer a model by including the expected results in the model itself, which only amounts to a tautological demonstration.

The only acceptable way to assess simulation output validity is through a standard, stakeholder-accepted certification process. A standard quality control procedure serves to evaluate whether a pre-specified set of rules has been properly followed. A summary of these principles is presented in Table 2.

Table 2: In silico approach validation principles

| Principle | Definition |
|---|---|
| Traceability | Knowledge incorporated into the disease model needs to be fully traceable back to each primary source. |
| Consistency | Knowledge incorporated into the disease model needs to be consistent with current science. |
| Bias | The mathematical and computational models need to be unbiased representations of the selected knowledge. |
| Internal validity | The disease model needs to be capable of “reproducing” knowledge and data that have not been used to design it in the first place. |
| Prospective validation | Simulated predictions of clinical trials or in vitro results must be qualitatively and quantitatively in line with future trial/experiment/observation outputs. |

Validation: beneath the surface

Validation of a model is an ongoing process which never truly ends; there is no real “final” validation step. When a model is built to answer a question (which is the right scheme), its “final” validation must be designed so that the predictions on which the answer to the research question relies are sufficiently credible: the predictions are obtained in a context of use consistent with the research question, the transparency of the M&S process has made it possible to check their relevance, and the metrics that measure the gap between predicted and real data are within acceptable confidence intervals.

A new validation will be necessary if a new context of use is formulated to address a new research question. The arrival of new data, more relevant than those used for the “final” validation, could also justify a new model validation. This would be the case, for a context of use covering a phase 3 trial, once data from a post-marketing study become available. Further, in most cases, the post-marketing study will be integrated into the M&S process as a new “final” validation step, especially when Health Technology Assessment (HTA) decisions are to be supported by the model’s predictions.

Context of Use and validity domain

As defined above, validation is the process of determining the degree to which a model, its associated VP and its simulations accurately represent the real world, and especially the part of the real world of interest for the particular R&D project. The process should then be viewed from the perspective of the intended application of the model or simulations under similar conditions. This stage of the M&S protocol is aimed at demonstrating the reliability of the predictions generated by the computational approach in a particular project, related to the question of interest. The question defines the Context of Use (CoU), i.e. the role of M&S in the R&D project as well as the patients, their conditions and the care environment concerned by the question. The output of the validation process defines the credibility and the domain of validity of the model, and thus of the predictions that result from simulations run with the model. This requires including the Virtual Population in the validation process, since simulations are performed on a virtual population. Depending on the validation outputs, the validity domain can restrict the context of use.

Validation and data

Similar to calibration, the quality of the validation process output depends considerably on the quality and relevance of the available data, whether experimental, clinical and/or epidemiological. The final validation step (see below) should compare outcome predictions obtained from simulations to real patient outcomes in the CoU of interest.

A thorny question of metrics

A validation process is a series of steps. Each step provides either a qualitative judgement (e.g. “one can trace this simulation output back to the pieces of knowledge that were inputs to the model design”) or a measure of fit between the reference data and the model predictions. This distance is expressed by a metric. Since the probability is virtually zero that the reference data and predictions will be rigorously equal, the metric must be compared with an acceptable value; in turn, this comparison requires a threshold to decide whether, at this step, the model can be called valid.
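
A sketch of one such quantitative step, under illustrative assumptions (the metric choice and the threshold value are placeholders; each project would pre-specify its own):

```python
# One quantitative validation step: a goodness-of-fit metric between reference
# observations and predictions, compared to a pre-specified threshold.
# The metric (normalized RMSE) and the threshold value are illustrative.
import numpy as np

def normalized_rmse(observed, predicted):
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / (observed.max() - observed.min())

observed = [12.0, 15.5, 19.0, 22.5]    # reference data, e.g. outcomes per trial arm
predicted = [11.2, 16.1, 18.4, 23.9]   # matching simulation outputs
metric = normalized_rmse(observed, predicted)
THRESHOLD = 0.15                       # acceptance value fixed before validation
print(f"NRMSE = {metric:.3f}; step {'passes' if metric <= THRESHOLD else 'fails'}")
```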

Since there are several steps, model validity is a global judgement, which summarises the credibility of the model.

Illustration: outline of the NOVA model validation procedure

The model and virtual population are validated, in accordance with the latest guidelines [VV40, 2018] and publications [Erdemir 2020, Viceconti 2019, Friedrich 2016], through comparison to relevant preclinical, observational and/or experimental clinical data, depending on the question the model is used to address and the availability of data.

Regulatory guidances regarding M&S validity

The NOVA validation plans are designed in accordance with the regulatory guidance regarding the reporting of modeling. The guidance documents most relevant to NOVA’s disease and treatment models are:

  • the ASME V&V40 framework, which has been summarized in view of its application to mechanistic PBPK models by Kuemmel et al. and can be regarded as a regulatory reference for the FDA [VV40, 2018]
  • EMA’s specific guidance on the reporting of PBPK models [8]

Validation is a progressive process

At NOVA, validation of a model and a VP is not a one-time procedure. It is rather a continuous process, which follows general principles presented above and developed in this section. In fact, it is relevant during almost all the steps of the M&S, beginning with the KM.

Validation is necessary following:

  • Any modification, removal or addition of at least one functional relation in the computational model or in the Virtual Population
  • A new use of the model (change of the context of use)

Rather arbitrarily, the validation of a model at NOVA can be described as a series of questions grouped into five categories:

1. Qualitative validation

Qualitative criteria to support credibility in the predictions of a model can be assembled apart from the quantitative comparison of predictions with reference data. They were acknowledged by the FDA during NOVA MIDD meetings in 2019. An assessment of these “qualitative credibility goals” will be added to the validation report.

  • Assessing the model content: How was the Knowledge Model validated? Is the model granularity adapted to the question of interest? Is it possible to access the knowledge justifying the model form and the parameter values? (transparency checking)
  • Model reuse: Has the model been used in a different Context of Use?
  • Assessing model inputs:
    • Uncertainty management: Are the uncertainties associated with the model form and inputs and their impact on model predictions understood and controlled for?
    • Sensitivity analysis: Is the model sensitivity to input parameters and its effect on model prediction understood and controlled for?
    • Risk of tautology: Is there a risk that a bias has been introduced in the model’s structural form or inputs resulting in an answer favoring the desired outcome?
  • Simulation design: Is the in silico experiment design relevant to address the question(s) of interest?
  • Relevance to a clinical outcome: Are the Modeling and Simulation (M&S) results relevant to the clinical purpose?
  • Relevance to the context of use: Are the M&S results relevant to the context of use?

In brief, it is about demonstrating that the computational model, the Virtual Population and the simulation protocol were appropriately developed to support the simulations, in order to eventually be able to make predictions and possibly extrapolations of interest.

2. Semi-quantitative validation of the prediction

The purpose of the semi-quantitative validation is to ensure that submodels reproduce behaviors of the modeled systems that are consistent with their known behaviors, or at least plausible when these behaviors are not precisely known (e.g. an untreated tumor receiving sufficient nutrients and no antagonist signals should grow).

3. Quantitative validation of the prediction

Quantitative validation is the last and most relevant step. It consists of comparing the output predictions of the full model with the observed outcomes in a real group of patients similar to the CoU. The model is run with the baseline data of these patients and simulations are continued until the end of the real-world observation period. Depending on the data quality, the chosen validation metrics, the size of the real patient group and the questions addressed, the compared outputs can be either averaged values (e.g. the rate of a binary outcome, the mean of a continuous outcome) or individual outcomes (e.g. the occurrence of an event for each real patient).
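
A minimal sketch of this comparison on averaged values, with a toy stand-in for the full model run and patient records fabricated purely for illustration:

```python
# Quantitative validation on averaged values: run the model from real patients'
# baseline data and compare the predicted event rate with the observed rate.
# `predict_event_probability` is a toy stand-in for a full model run, and the
# patient records are fabricated for illustration.
import numpy as np

def predict_event_probability(baseline):
    """Placeholder: probability of the clinical event at the end of follow-up."""
    return 1.0 / (1.0 + np.exp(-(0.05 * baseline["age"] - 3.0)))

patients = [
    {"age": 55, "event_observed": 0},
    {"age": 68, "event_observed": 1},
    {"age": 72, "event_observed": 1},
    {"age": 49, "event_observed": 0},
]
predicted_rate = np.mean([predict_event_probability(p) for p in patients])
observed_rate = np.mean([p["event_observed"] for p in patients])
print(f"predicted event rate {predicted_rate:.2f} vs observed {observed_rate:.2f}")
```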

4. Verification

Code verification of the solving algorithm is carried out by automatic tests using randomly generated ordinary differential equation systems and comparing the explicit closed-form solution with the output of the resolution.

Calculation verification consists in estimating and minimizing the numerical error arising from the approximation of the mathematical model of interest. This is done by solving the system with various tolerance values, comparing the outputs and checking the convergence.
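
Both verification activities can be illustrated on a linear ODE dy/dt = -k·y, whose closed-form solution y0·exp(-k·t) is known exactly; a minimal sketch (not jinkō's actual test harness, and with illustrative values):

```python
# Code verification: compare the numerical solution of dy/dt = -k*y with its
# exact solution y0*exp(-k*t). Calculation verification: repeat at several
# tolerances and check that the error converges.
import numpy as np
from scipy.integrate import solve_ivp

k, y0, t_end = 0.7, 1.0, 5.0
exact = y0 * np.exp(-k * t_end)

# Code verification against the explicit closed-form solution
sol = solve_ivp(lambda t, y: [-k * y[0]], (0.0, t_end), [y0], rtol=1e-8, atol=1e-10)
assert abs(sol.y[0][-1] - exact) < 1e-6, "solver disagrees with the exact solution"

# Calculation verification: tolerance sweep and convergence check
for rtol in (1e-3, 1e-6, 1e-9):
    s = solve_ivp(lambda t, y: [-k * y[0]], (0.0, t_end), [y0], rtol=rtol, atol=rtol * 1e-2)
    print(f"rtol={rtol:.0e}: |error| vs exact = {abs(s.y[0][-1] - exact):.2e}")
```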

Verification also includes peer-reviewing of the code and continuous testing to avoid user errors. It may be completed with project specific evaluations, such as a verification of a spatial discretization implemented in the model to mimic diffusion processes.

5. Model input sensitivity and uncertainty analysis

Sensitivity analysis examines the degree to which the computational model outputs are sensitive to the model inputs. Combined with an uncertainty analysis, it is used to identify the impact of parameter uncertainty (parameters that are known or suspected to be associated with a measurement error or a low strength-of-evidence assertion, or for which there is uncertainty about the value after calibration) on the variables of interest and the final predictions. The aim of this sensitivity analysis is to ensure that the model is not sensitive to parameters or variables that carry a high uncertainty, due, for example, to knowledge gaps.
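
A minimal one-at-a-time sketch of this idea, with a toy output function standing in for a full model run: each input is perturbed by ±10% around its reference value and the relative change in the output is recorded.

```python
# One-at-a-time sensitivity sketch: perturb each input by +/-10% around its
# reference value and record the relative change in a model output.
# `model_output` is a hypothetical placeholder for a full model run.
def model_output(params):
    """Toy output standing in for a simulated variable of interest."""
    return params["k_prod"] / params["k_clear"] * (1.0 + 0.1 * params["k_feedback"])

reference = {"k_prod": 2.0, "k_clear": 0.5, "k_feedback": 1.0}
base = model_output(reference)

for name in reference:
    for factor in (0.9, 1.1):                               # +/-10% perturbation
        perturbed = dict(reference, **{name: reference[name] * factor})
        delta = (model_output(perturbed) - base) / base     # relative output change
        print(f"{name} x{factor}: output change {delta:+.1%}")
```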

Simulations

The ultimate purpose of the process described above is simulation. Simulation means running the model to answer the question. It is a matter of carrying out an experiment, the particularity being that it is carried out in silico. Besides this point, which makes it different from wet lab or clinical investigations, this experiment follows the rules of the experimental method, the same as those laid down by Claude Bernard in the 19th century and applied to in vivo or in vitro experiments to ensure the validity of their results [5].

The point that makes an obvious and formidable difference relates to the experimental units: they are virtual. This feature has two consequences that help enhance the validity of the comparison (an experiment always compares a “new” scenario to a control): 1) the compared scenarios can be applied to exactly the same experimental units, allowing the researcher to get rid of any cause of bias; 2) the only limitation on the number of experimental units is the computation time, which is in any case far shorter than the duration of the corresponding in vitro/in vivo experiment.
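
A toy sketch of consequence 1): both scenarios are run on the very same virtual patients, so the paired difference cannot be confounded by allocation. The one-descriptor population and the outcome function are illustrative placeholders, not a real model.

```python
# Both scenarios are run on identical virtual patients, so any outcome
# difference is attributable to the scenario alone.
import numpy as np

rng = np.random.default_rng(42)
virtual_population = rng.uniform(0.5, 1.5, size=1000)   # one descriptor per patient

def simulate_outcome(descriptor, treated):
    """Toy deterministic model: treatment lowers the outcome value."""
    return descriptor * (0.7 if treated else 1.0)

control = np.array([simulate_outcome(d, treated=False) for d in virtual_population])
treated = np.array([simulate_outcome(d, treated=True) for d in virtual_population])
# Paired, bias-free comparison on identical experimental units
print(f"mean within-patient benefit: {(control - treated).mean():.3f}")
```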

As with in vitro or in vivo experiments, a protocol is written prior to running the model (one of the prerequisites of the experimental method according to Claude Bernard).

The JINKŌ platform

In order to help in operating the six steps shown in Figure 3, and to meet the demands of transparency and continuous model updating (according to the virtuous circle principle, see Figure 1), NOVA has developed JINKŌ, its unified M&S and in silico clinical trial simulation platform. With mathematical models and virtual populations at the heart of the company’s approach, the platform takes its name from the fortunate homonymic collision of the Japanese words for ‘artificial’ (人工, jinkō) and ‘population’ (人口, jinkō). The JINKŌ platform is written in Haskell [9], a functional language. It is composed of two modules, detailed hereafter: the first, JINKŌ Knowledge, allows users to store, handle and use bibliographic references and literature reviews; the second, JINKŌ SimWork, is the computational management module.

JINKŌ Knowledge: the knowledge management module

JINKŌ Knowledge operates as a community-driven “knowledge engine” to curate and organize biomedical knowledge extracted from the white and grey literature, with the ultimate objective of building and maintaining state-of-the-art Knowledge Models of pathophysiological processes. Based on human investigation (for the curation) and semantic web formalism (for the exploitation), JINKŌ offers researchers and biomodelers the ability to curate, formalize and share biomedical knowledge extracted from the scientific literature in an open science environment. Through this platform, biomodelers are able to:

  • gather the necessary scientific publications in a dedicated project
  • extract pieces of knowledge from these publications through the creation of assertions
  • grade the assertion’s Strength of Evidence (SoE); annotate the assertion (e.g. link a protein in the assertion with descriptive and gene data-banks)
  • organize these assertions into a discursive format (i.e. plain language) describing the pathophysiology of interest
  • graphically represent the findings

Representing the assertions’ relationships in a network results in a Knowledge Model, associated with a list of knowledge gaps.

The platform is intended as a community-based environment where non-modelers (e.g. clinical experts) are invited to contribute to a given project and to provide direct feedback on the review of the literature conducted by biomodelers.

JINKŌ SimWork: the computational management module

The JINKŌ Model Solver component, at the heart of the platform, runs atomic simulation tasks (a single virtual patient with a single experimental setting on a single model) in a stateless and purely functional fashion (i.e. the same input set will systematically lead to the same outputs, allowing for safe and efficient caching).
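
Since an atomic task is a pure function of its inputs, results can be cached safely by hashing those inputs. A minimal sketch of the idea in Python (JINKŌ itself is written in Haskell, and all names here are illustrative, not JINKŌ's internals):

```python
# Because an atomic simulation task is a pure function of its inputs, a result
# can be keyed by a hash of (model, virtual patient, protocol) and reused.
import hashlib
import json

_cache: dict = {}

def task_key(model_id, patient, protocol):
    payload = json.dumps([model_id, patient, protocol], sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def run_atomic_simulation(model_id, patient, protocol):
    key = task_key(model_id, patient, protocol)
    if key not in _cache:                    # cache miss: run the actual task
        _cache[key] = {"outcome": None}      # placeholder for a real model run
    return _cache[key]                       # identical inputs -> cached result
```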

SimWork also hosts a set of calibration and analytics modules. The Calibration module uses an approach based on genetic algorithms to calibrate the model parameters, using relevant knowledge extracted from the literature and all available preclinical and clinical data. Finally, various analytics modules assist in analyzing simulation results and producing reports in line with traditional clinical trial reports, augmented with insights specifically enabled by the in silico approach, such as the identification of biomarkers.


  1. ASME. Assessing Credibility of Computational Modeling through Verification and Validation: Application to Medical Devices (V&V 40). Retrieved May 30, 2021, from https://www.asme.org/codes-standards/find-codes-standards/v-v-40-assessing-credibility-computational-modeling-verification-validation-application-medical-devices

  2. Kuemmel C, Yang Y, Zhang X, Florian J, Zhu H, Tegenge M, Huang S-M, Wang Y, Morrison T, Zineh I. Consideration of a credibility assessment framework in model-informed drug development: potential application to physiologically-based pharmacokinetic modeling and simulation. CPT Pharmacometrics Syst Pharmacol 2020;9:21–28.

  3. Kukull WA, Ganguli M. Generalizability: the trees, the forest, and the low-hanging fruit. Neurology 2012;78:1886–91.

  4. Breiman L. Statistical modeling: the two cultures. Statistical Science 2001;16:199–231.

  5. Bernard C. An Introduction to the Study of Experimental Medicine (1865).

  6. Neyman J, Pearson ES. In Breakthroughs in Statistics, 73–108 (Springer, 1992).

  7. Popper K. The Logic of Scientific Discovery (Routledge, 2005).

  8. European Medicines Agency. Guideline on the reporting of physiologically based pharmacokinetic (PBPK) modelling and simulation (2018, December 13). Retrieved June 29, 2021, from https://www.ema.europa.eu/en/documents/scientific-guideline/guideline-reporting-physiologically-based-pharmacokinetic-pbpk-modelling-simulation_en.pdf

  9. Marlow S, et al. Haskell 2010 Language Report (2010). https://www.haskell.org/onlinereport/haskell2010/
