How Jinkō efficiently simulates clinical trials
Jinkō transforms clinical trial simulations by integrating advanced computational methods and scalable cloud solutions. Key methodologies include Just-in-Time (JIT) compilation, parallelization across large-scale virtual patient populations, and advanced strategies for solving ordinary differential equations (ODEs). These innovations boost the speed and accuracy of simulations while simplifying setup, allowing for quick and efficient analysis of results.
Just-in-time compilation
Our models (ODE system, events, initial conditions, etc.) are described in a portable file format. We support the SBML file format [1], which is an industry standard, as well as our own file format, which includes some in-house extensions to SBML.
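To make this concrete, here is a minimal sketch (plain Python, not our actual implementation) of how a reaction network like those described in an SBML file can be turned into an ODE right-hand side. The two-species model A → B and the `make_rhs` helper are hypothetical, chosen only to keep the example small:

```python
# Sketch: turning a reaction network (as loaded from a model file) into an
# ODE right-hand side. The tiny model A -> B with rate constant k is
# hypothetical; real models contain thousands of reactions.

def make_rhs(reactions):
    """Build dy/dt for mass-action reactions given as
    (rate constant, reactant indices, product indices) triples."""
    def rhs(t, y):
        dydt = [0.0] * len(y)
        for k, reactants, products in reactions:
            flux = k
            for i in reactants:
                flux *= y[i]           # mass-action kinetics
            for i in reactants:
                dydt[i] -= flux        # reactants are consumed
            for i in products:
                dydt[i] += flux        # products are produced
        return dydt
    return rhs

# A (index 0) converts to B (index 1) with rate constant 0.5
rhs = make_rhs([(0.5, [0], [1])])
print(rhs(0.0, [2.0, 0.0]))  # flux = 0.5 * 2.0 = 1.0 -> [-1.0, 1.0]
```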
Once loaded, we use Just-in-Time (JIT) compilation to convert the model description into fast native machine instructions. This allows our simulations to execute significantly faster than uncompiled, interpreted AST evaluation. JIT compilation improves performance by optimizing code at runtime, taking advantage of the specific details of the model and the execution environment. Unlike traditional interpretation or ahead-of-time compilation, JIT can apply optimizations based on the actual data and usage patterns encountered during execution. Additionally, JIT can leverage modern processor features such as SIMD (Single Instruction, Multiple Data) extensions like SSE (Streaming SIMD Extensions) and AVX (Advanced Vector Extensions), which process multiple data points with a single instruction, further speeding up the computationally intensive parts of our simulations. Frequently executed paths can thus be heavily optimized, reducing computation time and allowing our simulations to run much more efficiently.
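The gap between interpreted AST evaluation and compiled code can be illustrated with a stdlib-only Python sketch: the same model expression is evaluated once by walking its AST node by node, and once after a one-time compilation step. Our JIT goes further and emits native machine code, but the trade-off (pay once to compile, then evaluate fast) is the same. The expression and variable names below are made up for illustration:

```python
# Illustration of compiled vs. interpreted evaluation using only the
# standard library. A real JIT emits machine code rather than bytecode,
# but the principle is identical: a one-time compilation cost buys
# much cheaper repeated evaluation.
import ast, operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def interpret(node, env):
    """Naive tree-walking interpreter for arithmetic expressions."""
    if isinstance(node, ast.Expression):
        return interpret(node.body, env)
    if isinstance(node, ast.BinOp):
        return OPS[type(node.op)](interpret(node.left, env),
                                  interpret(node.right, env))
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.Name):
        return env[node.id]
    raise TypeError(node)

src = "k * a * a - d * b"                # hypothetical reaction-rate expression
tree = ast.parse(src, mode="eval")
code = compile(tree, "<model>", "eval")  # one-time compilation step

env = {"k": 0.5, "a": 2.0, "d": 0.1, "b": 4.0}
assert interpret(tree, env) == eval(code, {}, env)  # same result, far less per-call work
```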
Once the model has been JIT-compiled to fast native machine code, we integrate the optimized code into Sundials [2], a suite of solvers designed for solving differential equations. Sundials handles the complex mathematical computations (see the section "Solving ODE problems efficiently") involved in simulating biological reaction networks, allowing us to focus on accurately modeling the system. By leveraging Sundials, we ensure that the intricate numerical details are managed efficiently and robustly, enabling our simulations to perform with high accuracy and speed.
Parallelization
The solving time per patient depends on the model's complexity, such as the number of equations, their stiffness, event frequency (e.g. drug injections), required accuracy, and measurement frequency. Most of our simulations can be solved in less than a second per patient, though highly complex models with over 100k ODEs may take a few minutes.
Improving solving time for an individual patient is achieved through Sundials, our JIT mechanism (see the previous section), our preprocessing phase, and advanced solver techniques (see the next section).
In jinkō, we focus on simulating large populations of virtual patients (up to several million patients), which are independent and fall into the "Embarrassingly Parallel" [3] category. With enough processors, all patients can be solved simultaneously, reducing the overall simulation time to that of the slowest patient.
We leverage AWS [4] to scale the number of processors as needed. Our simulation scheduler provisions the necessary computational resources to optimize cost and time. Recently, we simulated a population of 1.5 million patients using 4000 processors in 45 minutes. Our team continuously works on minimizing the overhead associated with high parallelism, ensuring efficient and scalable simulations.
Solving ODE problems efficiently
Implicit vs Explicit
In jinkō, most computational models consist of reaction networks which are converted to systems of ordinary differential equations (ODEs) upon solving. "Solving" a model therefore consists of computing a numerical approximation of the solution to the initial value problem formed by an ODE system and a set of initial conditions for the ODE unknowns.
In the following, we will use the term “solver” to denote any method that can produce such an approximation.
There exist two main families of ODE solvers: explicit and implicit. Both compute an approximation of the solution by iterating through time. The approximate solution consists of a sequence of pairs of discrete timesteps and approximate solutions at those timesteps. In simple terms, the smaller the timestep (and therefore the more timesteps needed to reach the prescribed tmax, i.e. the end of the simulation), the more accurate the solution is.
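This timestep/accuracy trade-off can be seen with a minimal fixed-step explicit Euler solver (a sketch, not the solver used in jinkō) applied to y' = -y, whose exact solution is exp(-t):

```python
# Sketch of timestep iteration: a fixed-step explicit Euler solver for
# y' = -y, y(0) = 1. Doubling the number of steps (halving the timestep)
# shrinks the error: more steps, more accuracy.
import math

def euler(f, y0, t_max, n_steps):
    """Advance y' = f(t, y) from t = 0 to t_max in n_steps explicit Euler steps."""
    h = t_max / n_steps
    t, y = 0.0, y0
    for _ in range(n_steps):
        y += h * f(t, y)   # step using the slope at the *current* point
        t += h
    return y

f = lambda t, y: -y
exact = math.exp(-1.0)
err_coarse = abs(euler(f, 1.0, 1.0, 100) - exact)
err_fine = abs(euler(f, 1.0, 1.0, 200) - exact)
print(err_fine < err_coarse)  # True: smaller timesteps give a smaller error
```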
Depending on the type of ODE system, some solvers may be more efficient than others. Here, efficiency means finding a correct answer, as in an approximation that has a low enough error, with as few calculations as possible.
All other things being equal, computing a single iteration is more costly with an implicit solver than with an explicit one. Without going into too much detail, an implicit solver approximates the right-hand side of the ODE system using information from the future, while an explicit one uses information from the past. Using information from the future comes with an extra cost because it requires solving some additional equations. However, explicit solvers tend to be less stable for some ODE problems, meaning the error will grow in time without bounds for large enough timesteps.
In general, for ODE problems exhibiting stiff [5,7] behavior, implicit solvers will require fewer timesteps than explicit ones to reach the prescribed accuracy, making them actually computationally cheaper. Sometimes the step size an explicit solver would need to remain accurate is so small that it becomes unusable in practice.
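The stability difference shows up clearly on the classic stiff test problem y' = -50y. With a timestep far too large for the explicit method, forward Euler blows up while backward Euler (the simplest implicit solver) stays stable. This is a toy sketch, not the solvers jinkō actually uses:

```python
# Explicit vs. implicit Euler on the stiff problem y' = -50*y with a
# deliberately large timestep h = 0.1. Forward Euler multiplies y by
# (1 - 50h) = -4 each step and explodes; backward Euler solves the
# implicit equation y_next = y + h*(-50*y_next), i.e.
# y_next = y / (1 + 50h), and decays as the true solution does.

def forward_euler_step(y, h, lam=50.0):
    return y + h * (-lam * y)      # explicit: uses the current value only

def backward_euler_step(y, h, lam=50.0):
    # Implicit: y_next appears on both sides; for this linear problem the
    # "extra equation" can be solved in closed form.
    return y / (1.0 + lam * h)

y_exp, y_imp = 1.0, 1.0
for _ in range(20):
    y_exp = forward_euler_step(y_exp, 0.1)
    y_imp = backward_euler_step(y_imp, 0.1)

print(abs(y_exp), abs(y_imp))  # explicit has exploded, implicit has decayed
```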
For those reasons, we most often use implicit solvers in Jinkō, even though users may choose explicit solvers, which can prove more efficient in certain situations.
Adaptive solver
In real-world ODE problems, keeping a fixed timestep size throughout the simulation is inefficient. Indeed, the solution exhibits fast-changing dynamics in some parts (think of a PKPD model right after administering a treatment dose) and is quieter, sometimes even constant in time, in others (think of a reaction system that has reached equilibrium).
An efficient way of dealing with this is to adapt the timestep size to the behavior of the solution. This is known as an "adaptive timestep" solver, and this is what is used in jinkō.
Similarly, the order of the solver can be adaptive as well. Simplifying greatly: the lower the order, the less accurate the solver (meaning smaller timestep sizes are needed to achieve the prescribed accuracy), but the more stable it is. An "adaptive order" solver will use a lower order when the solution becomes unstable and switch to a higher order when it becomes stable enough.
In practice, we use a solver adaptive in timestep and adaptive in order to combine the best of both worlds.
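A toy version of adaptive timestepping uses step doubling to estimate the local error: take one Euler step of size h and two of size h/2, and let their difference drive the step size up or down. Production solvers such as CVODE use far more sophisticated error control, but the feedback loop is the same idea:

```python
# Toy adaptive-timestep solver: the difference between one step of size h
# and two steps of size h/2 estimates the local error. Small error in a
# quiet region: grow the step. Large error during fast dynamics: shrink it.
import math

def adaptive_euler(f, y0, t_max, tol=1e-4):
    t, y, h = 0.0, y0, 0.01
    n_steps = 0
    while t < t_max:
        h = min(h, t_max - t)                           # do not overshoot t_max
        full = y + h * f(t, y)                          # one step of size h
        half = y + (h / 2) * f(t, y)
        two_half = half + (h / 2) * f(t + h / 2, half)  # two steps of size h/2
        err = abs(two_half - full)                      # local error estimate
        if err <= tol:
            t, y = t + h, two_half                      # accept the step
            n_steps += 1
            if err < tol / 4:
                h *= 2.0                                # quiet region: grow the step
        else:
            h /= 2.0                                    # fast dynamics: shrink the step
    return y, n_steps

y_end, steps = adaptive_euler(lambda t, y: -y, 1.0, 5.0)
print(y_end, steps)
```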
Sundials
In jinkō, we interface with the Sundials ODE solvers CVODE and ARKODE [2] which implement a wide variety of both implicit and explicit solvers.
Among those, our go-to solver is the adaptive-order, adaptive-timestep implicit BDF method [6], which proves both robust and efficient in most situations. BDF stands for "Backward Differentiation Formula" and can be described as a scheme that takes the history of the solution into account. At any given timestep, BDF computes an approximation of the next solution using anywhere from the current timestep alone (order 1) up to the four preceding timesteps as well (order 5). As explained above, the higher the order, the more accurate the approximation, and therefore the larger the timesteps the solver can take to achieve the prescribed accuracy. However, BDF schemes of order 3 and above may be unstable, which is why Sundials' implementation is also adaptive in order: it adjusts the order of the BDF scheme depending on how the solution behaves through time.
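For illustration, here is a hand-rolled fixed-order BDF2 scheme applied to the linear test problem y' = -y. For a linear right-hand side the implicit equation can be solved in closed form, whereas CVODE uses a Newton iteration and varies both the order (1 to 5) and the step size:

```python
# Sketch of a fixed-order BDF scheme: second-order BDF (BDF2) on y' = -y.
# BDF2 combines the two most recent solution values (the "history") with
# the right-hand side evaluated at the *future* timestep:
#   y_{n+1} - (4/3)*y_n + (1/3)*y_{n-1} = (2h/3) * f(t_{n+1}, y_{n+1})
# For f(y) = -lam*y this implicit equation rearranges in closed form.
import math

def bdf2(lam, y0, h, n_steps):
    """Solve y' = -lam*y with fixed-step BDF2."""
    y_prev = y0
    y_curr = y0 / (1.0 + lam * h)      # bootstrap the first step with backward Euler
    for _ in range(n_steps - 1):
        # Closed-form solve of the implicit BDF2 equation for the linear RHS:
        y_next = (4.0 * y_curr - y_prev) / (3.0 + 2.0 * lam * h)
        y_prev, y_curr = y_curr, y_next
    return y_curr

approx = bdf2(1.0, 1.0, 0.01, 100)     # integrate to t = 1
print(abs(approx - math.exp(-1.0)))    # small: BDF2 is second-order accurate
```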
Conclusion
In our jinko.ai platform, we optimize the performance of individual patient simulations using advanced mathematical tools like Sundials and computer science techniques such as JustInTime (JIT) compilation. By leveraging AWS services, we can scale to a virtually unlimited number of CPUs, enabling us to solve many patients in parallel. This approach significantly reduces iteration time and simplifies the setup process, allowing users to start a clinical simulation from their cell phone and analyze the results within minutes.
Our platform also provides tools to help users understand and debug model performance. Jinko.ai is continuously evolving, introducing new features for simulation and result analysis. We are dedicated to reducing bottlenecks and the overhead of high parallelism to further enhance performance and efficiency.
References