How Jinkō efficiently simulates clinical trials
Jinkō transforms clinical trial simulations by integrating advanced computational methods and scalable cloud solutions. Key methodologies include Just-in-Time (JIT) compilation, parallelization of large-scale virtual patient populations, and advanced strategies for solving ordinary differential equations (ODEs). These innovations boost the speed and accuracy of simulations while simplifying setup, allowing for quick and efficient analysis of results.
Just-in-time compilation
Our models (ODE system, events, initial conditions, etc.) are described in a portable file format. We support the SBML file format [1], which is an industry standard, as well as our own file format, including some in-house developed extensions to SBML.
Once loaded, we use Just-In-Time (JIT) compilation to convert the model description into fast native machine instructions. This approach achieves significantly faster execution than interpreted evaluation of the model's abstract syntax tree (AST). JIT compilation improves performance by optimizing code at runtime, taking advantage of the specific details of the model and the execution environment. Unlike traditional interpretation or ahead-of-time compilation, JIT can apply optimizations based on the actual data and usage patterns encountered during execution. Additionally, JIT can leverage modern processor features such as SIMD (Single Instruction, Multiple Data) extensions like SSE (Streaming SIMD Extensions) and AVX (Advanced Vector Extensions). These extensions enable parallel processing of multiple data points with a single instruction, further enhancing performance on the computationally intensive parts of our simulations. Frequently executed paths can thus be heavily optimized, reducing the time spent on computations and allowing our simulations to run much more efficiently.
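To illustrate the difference between interpreting a model description and compiling it once up front, here is a minimal Python sketch. The expression, the helper names, and the use of Python bytecode (rather than true native machine code) are all illustrative assumptions, not jinkō's actual implementation:

```python
import ast

# A toy right-hand side described as text, as it might come from an
# SBML-like model file (the expression itself is made up).
rhs_source = "-0.5 * y + 2.0"

# Interpreted path: walk the AST on every single evaluation.
def eval_ast(node, y):
    if isinstance(node, ast.Expression):
        return eval_ast(node.body, y)
    if isinstance(node, ast.BinOp):
        left, right = eval_ast(node.left, y), eval_ast(node.right, y)
        if isinstance(node.op, ast.Add):
            return left + right
        if isinstance(node.op, ast.Mult):
            return left * right
    if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
        return -eval_ast(node.operand, y)
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.Name) and node.id == "y":
        return y
    raise ValueError("unsupported node")

tree = ast.parse(rhs_source, mode="eval")

# "JIT" path: compile the description once, then call the resulting
# code object directly on every evaluation.
compiled = compile(tree, "<model>", "eval")
def rhs_fast(y):
    return eval(compiled, {"y": y})

assert eval_ast(tree, 1.0) == rhs_fast(1.0) == 1.5
```

The two paths produce identical values; the compiled path simply avoids re-dispatching on the tree structure at every call, which is the same trade-off (taken much further) that JIT compilation to native code exploits.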
Once we use JIT to convert our model to fast native machine code, we integrate this optimized code into Sundials [2], a suite of solvers designed for solving differential equations. Sundials handles the complex mathematical computations (see Section “Solving ODE problems efficiently”) involved in simulating biological reaction networks, allowing us to focus on accurately modeling the system. By leveraging Sundials, we ensure that the intricate numerical details are managed efficiently and robustly, enabling our simulations to perform with high accuracy and speed.
Parallelization
The solving time per patient depends on the model's complexity, such as the number of equations, their stiffness, event frequency (e.g. drug injections), required accuracy, and measurement frequency. Most of our simulations can be solved in less than a second per patient, though highly complex models with over 100k ODEs may take a few minutes.
Improving solving time for an individual patient is achieved through Sundials, our JIT mechanism (see the previous section), our pre-processing phase, and advanced solver techniques (see the next section).
In jinkō, we focus on simulating large populations of virtual patients (up to several million patients), which are independent and fall into the "Embarrassingly Parallel" [3] category. With enough processors, all patients can be solved simultaneously, reducing the overall simulation time to that of the slowest patient.
We leverage AWS [4] to scale the number of processors as needed. Our simulation scheduler provisions the necessary computational resources to optimize cost and time. Recently, we simulated a population of 1.5 million patients using 4000 processors in 45 minutes. Our team continuously works on minimizing the overhead associated with high parallelism, ensuring efficient and scalable simulations.
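The "embarrassingly parallel" pattern can be sketched in a few lines of Python. The function `solve_patient` and its workload are illustrative placeholders, not jinkō's actual scheduler, which provisions workers on AWS rather than a local process pool:

```python
from concurrent.futures import ProcessPoolExecutor
import math

def solve_patient(patient_id):
    # Stand-in for one virtual patient's ODE solve: in jinkō the real
    # work is done by Sundials; here we just burn a little CPU and
    # return a fake per-patient result.
    x = 0.0
    for i in range(1_000):
        x += math.sin(patient_id + i)
    return patient_id, x

if __name__ == "__main__":
    patients = range(8)  # a real population may contain millions of patients
    # Patients are independent, so the population maps directly onto a
    # pool of workers with no communication between them.
    with ProcessPoolExecutor() as pool:
        results = dict(pool.map(solve_patient, patients))
    assert len(results) == 8
```

Because the tasks share nothing, throughput scales almost linearly with the number of workers, up to the point where scheduling overhead and the slowest single patient dominate.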
Solving ODE problems efficiently
Implicit vs Explicit
In jinkō, most computational models consist of reaction networks which are converted to systems of ordinary differential equations (ODE) upon solving. “Solving” a model therefore consists in computing a numerical approximation of the solution to the initial value problem consisting of an ODE system and a set of initial conditions for the ODE unknowns.
In the following, we will use the term “solver” to denote any method that can produce such an approximation.
There exist two main families of ODE solvers: explicit and implicit. Both compute an approximation of the solution by iterating through time. The approximate solution consists of a sequence of pairs of discrete time-steps and approximate solutions at those time-steps. In simple terms, the smaller the time-step (and therefore the more time-steps needed to reach the prescribed tmax, i.e. the end of the simulation) the more accurate the solution is.
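As a concrete example of such time-stepping, here is a fixed-step explicit Euler sketch, the simplest possible solver, shown purely to make the "sequence of (time-step, approximation) pairs" idea tangible (jinkō uses far more sophisticated methods):

```python
import math

# Fixed-step explicit Euler for y' = -y, y(0) = 1, whose exact solution
# is exp(-t). Shrinking the step size shrinks the final error,
# illustrating "smaller time-steps => more accurate solution".
def euler(f, y0, t0, tmax, h):
    t, y = t0, y0
    steps = [(t, y)]          # sequence of (time-step, approximation) pairs
    while t < tmax - 1e-12:
        y += h * f(t, y)      # advance using the slope at the current point
        t += h
        steps.append((t, y))
    return steps

f = lambda t, y: -y
err = lambda h: abs(euler(f, 1.0, 0.0, 1.0, h)[-1][1] - math.exp(-1.0))
assert err(0.001) < err(0.01) < err(0.1)   # finer steps, smaller error
```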
Depending on the type of ODE system, some solvers may be more efficient than others. Here, efficiency means finding a correct answer, as in an approximation that has a low enough error, with as few calculations as possible.
All other things being equal, computing a single iteration is more costly with an implicit solver than with an explicit one. Without going into too much detail, an implicit solver approximates the right-hand side of the ODE system using information from the future, while an explicit one uses information from the past. Using information from the future comes at an extra cost because it requires solving additional equations at each step. However, explicit solvers tend to be unstable for some ODE problems, meaning the error grows in time without bound for large enough time-steps.
In general, for ODE problems exhibiting stiff [5,7] behavior, implicit solvers require fewer time-steps than explicit ones to reach the prescribed accuracy, making them computationally cheaper in practice. Sometimes the step size an explicit solver needs to produce an accurate, stable solution is so small that the solver becomes unusable altogether.
For those reasons, we most often use implicit solvers in Jinkō, even though users may choose explicit solvers, which can prove more efficient in certain situations.
Adaptive solver
In real-world ODE problems, keeping a fixed time-step size throughout the simulation is inefficient. Some parts of the solution exhibit fast-changing dynamics (think of a PKPD model right after a treatment dose is administered), while in others the solution is quieter, sometimes even constant in time (think of a reaction system that has reached equilibrium).
An efficient way of dealing with this consists in adapting the time-step size to the behavior of the solution. This is known as an “adaptive time-step” solver and this is what is used in jinkō.
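A minimal sketch of the idea, using step doubling on an explicit Euler base method to estimate the local error. Production solvers such as Sundials use embedded error estimates and far more careful step-size controllers, so this is only an illustration of the accept/reject-and-resize loop:

```python
# Adaptive time-step sketch: take one full Euler step and two half
# steps, use their difference as a local error estimate, then shrink
# or grow h accordingly.
def adaptive_euler(f, y0, t0, tmax, h0, tol):
    t, y, h = t0, y0, h0
    steps = 0
    while tmax - t > 1e-12:
        h = min(h, tmax - t)
        full = y + h * f(t, y)
        half = y + (h / 2) * f(t, y)
        two_half = half + (h / 2) * f(t + h / 2, half)
        err = abs(two_half - full)       # local error estimate
        if err <= tol:
            t, y = t + h, two_half       # accept the step...
            steps += 1
            h *= 2.0                     # ...and try a larger one next time
        else:
            h /= 2.0                     # reject and retry with a smaller step
    return y, steps

# Quiet dynamics get large steps; fast dynamics get small ones.
y_end, n_steps = adaptive_euler(lambda t, y: -y, 1.0, 0.0, 1.0, 0.5, 1e-4)
```

Starting from a deliberately oversized h0, the controller halves the step until the error estimate passes, then keeps probing larger steps whenever the solution allows it.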
Similarly, the order of the solver can be adaptive as well. Making a gross simplification: the lower the order, the less accurate the solver (meaning smaller time-step sizes are needed to achieve the prescribed accuracy), but the more stable it is. An "adaptive order" solver uses a lower order when the solution becomes unstable and switches to a higher order when it becomes stable enough.
In practice, we use a solver adaptive in time-step and adaptive in order to combine the best of both worlds.
Sundials
In jinkō, we interface with the Sundials ODE solvers CVODE and ARKODE [2] which implement a wide variety of both implicit and explicit solvers.
Among those, our go-to solver is the adaptive-order, adaptive-time-step implicit BDF method [6], which proves both robust and efficient in most situations. BDF stands for "Backward Differentiation Formula" and can be described as a scheme that takes into account the history of the solution. At any given time-step, BDF computes an approximation of the next solution value using anywhere from the current value alone (order 1) up to values from several time-steps in the past (order 5). As explained above, the higher the order, the more accurate the approximation, and therefore the larger the time-steps the solver can take while achieving the prescribed accuracy. However, BDF schemes of order 3 and above may be unstable, which is why Sundials' implementation is also adaptive in order: it adjusts the order of the BDF scheme depending on the solution's behavior through time.
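For a linear test problem the order-2 BDF update can be written in closed form, which makes the role of the solution history explicit. This is a fixed-step, fixed-order illustration only; Sundials' CVODE adapts both the step size and the order, and for nonlinear systems the implicit equation must be solved numerically at each step:

```python
import math

# BDF2 for y' = lam * y:
#   y_{n+1} - (4/3) y_n + (1/3) y_{n-1} = (2/3) h lam y_{n+1}
# Solving for y_{n+1} gives the update below. Note that each new value
# combines the two most recent values: the scheme's "history".
lam, h = -2.0, 0.05

ys = [1.0, math.exp(lam * h)]   # bootstrap: y_0 and an exact y_1
for n in range(1, 40):
    y_next = (4 * ys[n] - ys[n - 1]) / (3 * (1 - 2 * h * lam / 3))
    ys.append(y_next)

t_end = h * (len(ys) - 1)
# Second-order accuracy: close to the exact solution exp(lam * t_end).
assert abs(ys[-1] - math.exp(lam * t_end)) < 1e-3
```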
An illustration of jinkō's simulation speed via a simple benchmark against SimBiology:
To provide a fair illustration of the simulation speed obtained with jinkō against an identifiable baseline, we performed the following test with a basic setup of SimBiology (free online version, with no additional parallelization capabilities):
- We uploaded into both jinkō and SimBiology a public model available from BioModels: Orton 2009 - Modelling cancerous mutations in the EGFR/ERK pathway - EGF by Adrians ME et al. (available here)
- No additional arms were added, as our goal was to observe the individual solving time of variants (i.e. virtual patients) as the reference metric for this test.
- Simulations on both platforms were then run with 10k virtual patients, with the following results:
- With SimBiology, we observed an average solving time per variant of 0.025 s.
- With jinkō, the average solving time per variant was 0.0025 s, i.e. a solver roughly 10 times faster.
-> On top of its solver's efficiency, jinkō's out-of-the-box parallelization capabilities also significantly reduce the overall simulation time for the end user. With 100 CPUs as a base, the overall simulation time is reduced by up to 100 times. In benchmarks, this is particularly apparent on longer simulations (with more complex models and more variants and/or arms), as the initial data loading and trial setup, which does not vary much in duration, is performed once at the beginning of a simulation.
Conclusion
In our jinko.ai platform, we optimize the performance of individual patient simulations using advanced mathematical tools like Sundials and computer science techniques such as Just-In-Time (JIT) compilation. By leveraging AWS services, we can scale to a virtually unlimited number of CPUs, enabling us to solve many patients in parallel. This approach significantly reduces iteration time and simplifies the setup process, allowing users to start a clinical simulation from their cell phone and analyze the results within minutes.
Our platform also provides tools to help users understand and debug model performance. Jinko.ai is continuously evolving, introducing new features for simulation and result analysis. We are dedicated to reducing bottlenecks and the overhead of high parallelism to further enhance performance and efficiency.
References