Milestone Achievement
We have reached a new milestone by implementing tests for key MultiXscale applications in the EESSI test suite. The EESSI test suite is a suite of portable application tests, implemented in the ReFrame HPC testing framework. While primarily designed to test software in the EESSI software environment, it can be run on any (module-based) software environment, such as those typically provided by HPC sites on their systems. The test suite scans the available modules and generates tests for those modules for which a test case has been implemented. A range of important applications in EESSI was already covered by the test suite. The current milestone reflects that the EESSI test suite, as of version 0.9.0 (https://github.com/EESSI/test-suite/releases), covers all the key applications developed in WPs 2, 3 and 4 of MultiXscale with a test case, as described below:

- Helicopter design
- Load-balancing
- Ionic liquid and hydrodynamics
- Supercapacitors
- Biomedical application

The lbmpy [1] software package is a Python-based domain-specific language which automatically generates performance-portable implementations of key Lattice Boltzmann (LB) ingredients such as velocity-space discretizations, equilibrium and boundary representations, collision models, etc. In this framework, a typical MPI+X-parallelized LB simulation workflow couples highly optimized shared-memory-parallelized CPU/GPU kernels generated by lbmpy with the MPI-parallel waLBerla [2] software. From an LB domain scientist's perspective, lbmpy offers a convenient Python interface in which new developments can be quickly prototyped and preliminary scientific validations can be conducted via OpenMP/CUDA-parallelized tests. Until now, however, the utility of lbmpy has largely been limited to moment-space LB methods, which, compared to traditional population-space methods, offer better stability characteristics, albeit at the cost of computational efficiency. Extensive research has recently been conducted to improve the stability of population-space LB methods, culminating in the development of several advanced population-space models. Consequently, lbmpy has been extended to also allow population-space descriptions; this extension is available in the multixscale-lbmpy [3] repository.

At present, the traditional Bhatnagar-Gross-Krook (BGK) collision model is validated with a test case [4] that simulates the Kelvin-Helmholtz instability, where an initial hyperbolic-tangent velocity profile imposed in a fully periodic 2D square box is slightly perturbed to initiate rolling of the shear layers. The test conducts a validation run and computes the normalized kinetic energy as a validation metric. Furthermore, the performance of the employed stream-collide algorithm is evaluated and reported in mega lattice updates per second (MLUPS); both quantities are illustrated in the sketch after the references below. The test can be executed in two modes, serial or OpenMP-parallel:

Serial run:
    python mixing_layer_2D.py

OpenMP-parallel run (here with 4 threads):
    OMP_NUM_THREADS=4 python mixing_layer_2D.py --openmp

References:
[1] https://i10git.cs.fau.de/pycodegen/lbmpy
[2] https://i10git.cs.fau.de/walberla/walberla
[3] https://github.com/multixscale/multiXscale-lbmpy/tree/population_space
[4] https://github.com/multixscale/multiXscale-lbmpy/blob/population_space/usecases/mixing_layer_2D.py
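To make the validation metric and the performance figure concrete, the following minimal NumPy sketch shows how a normalized kinetic energy and an MLUPS value can be computed for a 2D velocity field. It is an illustration only, not the actual test implementation; the array names, lattice size and timing values are assumptions.

    import numpy as np

    # Illustrative sketch (not the code of the actual test case): compute the
    # kinetic energy of a 2D velocity field, normalize it by a reference value,
    # and report performance in mega lattice updates per second (MLUPS).

    nx, ny = 512, 512                 # assumed lattice size
    ux = np.zeros((nx, ny))           # x-velocity component (placeholder data)
    uy = np.zeros((nx, ny))           # y-velocity component (placeholder data)

    def kinetic_energy(ux, uy):
        # total kinetic energy of the field, up to a constant density factor
        return 0.5 * np.sum(ux**2 + uy**2)

    e_ref = kinetic_energy(ux, uy) or 1.0   # reference value (guard against zero for the placeholder field)
    # ... the stream-collide time loop would update ux and uy here ...
    e_now = kinetic_energy(ux, uy)
    normalized_kinetic_energy = e_now / e_ref

    def mlups(nx, ny, timesteps, elapsed_seconds):
        # lattice-site updates per second, in millions
        return nx * ny * timesteps / elapsed_seconds / 1.0e6

    print("normalized kinetic energy:", normalized_kinetic_energy)
    print("performance: %.1f MLUPS" % mlups(nx, ny, timesteps=1000, elapsed_seconds=10.0))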
Efficient load balancing is critical for large-scale particle simulations on modern HPC systems, as the computational workload and data distribution can change significantly over time due to particle motion, interactions and adaptive resolution. Without dynamic load balancing, these effects result in imbalances in the workload across processes, reduced parallel efficiency and poor scalability at extreme core counts. ALL (A Load Balancing Library) addresses these challenges by providing domain-decomposition load-balancing strategies tailored to particle-based applications. By optimizing the domain decomposition through a range of available methods, ALL helps to maintain an even workload distribution, minimise communication overhead and improve time-to-solution on heterogeneous and massively parallel architectures.

Within this milestone, we report on the integration and coupling of ALL with the LAMMPS particle simulation code. This coupling demonstrates how ALL can be applied to a production-grade molecular dynamics application, enabling improved scalability and performance on EuroHPC systems and highlighting the benefits of a reusable load-balancing library within a Centre of Excellence software ecosystem. In addition, a coupled build integrating LAMMPS, ALL and OBMD (open-boundary molecular dynamics) has been set up. OBMD simulations are characterized by dynamically changing particle populations and strongly non-uniform spatial workloads due to particle insertion, removal and fluxes across open boundaries. In this context, efficient load balancing is particularly critical to sustain scalability and numerical efficiency, as imbalances can rapidly arise during the simulation. The integration of ALL with LAMMPS and OBMD demonstrates how adaptive load-balancing capabilities enable robust and efficient execution of advanced open-boundary particle simulations. The ALL source code can be found at https://gitlab.jsc.fz-juelich.de/SLMS/loadbalancing, while examples using the coupling to LAMMPS are publicly available at https://github.com/yannic-kitten/lammps/tree/ALL-integration/examples/PACKAGES/allpkg.
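To illustrate the kind of imbalance that ALL is designed to mitigate, the following sketch computes a simple load-imbalance factor (maximum over average workload per MPI rank) with mpi4py. It is an illustration of the quantity a dynamic load balancer tries to keep close to 1, not an example of the ALL or LAMMPS APIs; the per-rank workload measure used here is an assumption.

    from mpi4py import MPI

    # Illustrative sketch: quantify workload imbalance across MPI ranks.
    # A dynamic load balancer (such as ALL) adjusts the domain decomposition
    # so that this factor stays close to 1 as particles move.

    comm = MPI.COMM_WORLD

    # Assumed per-rank workload measure, e.g. the number of locally owned
    # particles; in a real run this would come from the simulation code.
    local_work = 1000.0 + 100.0 * comm.Get_rank()

    total_work = comm.allreduce(local_work, op=MPI.SUM)
    max_work = comm.allreduce(local_work, op=MPI.MAX)
    average_work = total_work / comm.Get_size()

    imbalance = max_work / average_work   # 1.0 means perfectly balanced

    if comm.Get_rank() == 0:
        print("load imbalance factor: %.2f" % imbalance)

Saving the sketch as, say, imbalance.py, it can be run with: mpirun -np 4 python imbalance.py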
ESPResSo is a versatile particle-based simulation package for molecular dynamics, fluid dynamics and Monte Carlo reaction schemes [1,2]. It provides numerical solvers for electrostatics, magnetostatics, hydrodynamics, electrokinetics, and diffusion-advection-reaction equations. It is designed as an MPI-parallel and GPU-accelerated simulation core written in C++ and CUDA, with a scripting interface in Python that integrates well with science and visualization packages in the Python ecosystem, such as NumPy and PyOpenGL. Its modularity and extensibility have made it a popular tool in soft matter physics, where it has been used to simulate ionic liquids, polyelectrolytes, liquid crystals, colloids, ferrofluids and biological systems such as DNA and lipid membranes. Some research applications have led to the development of specialized codes that use ESPResSo as a library, such as molecular builders (pyMBE [3], pyOIF [4], pressomancy [5]), reinforcement learning frameworks (SwarmRL [6]) and systematic coarse-graining frameworks (VOTCA [7,8]).

Many soft matter systems are characterized by physical properties that span different time- or length-scales. To fully capture these effects in a simulation, a multiscale approach is needed. Often, long-range interactions can be resolved very efficiently, with minimal loss of accuracy, using grid-based solvers such as the particle–particle–particle mesh (P3M) algorithm for electrostatics and magnetostatics. Similarly, solute–solvent interactions can be approximated using the lattice-Boltzmann (LB) method for hydrodynamics, where the dense solvent is discretized on a grid and exchanges momentum with solid particles that represent the solute. These techniques not only bring the computational cost down; they also consume several orders of magnitude less memory than an atomistically resolved system and require less bandwidth for communication between HPC nodes.

Choice of simulation scenarios

We opted for three main simulation scenarios, which are described in more detail in Deliverable 2.1 [9]. They have been written in a backward-compatible way, such that ESPResSo releases 4.2 and 5.0 can both execute them despite the API changes between the two versions (one possible way to handle this is sketched below). The Lennard-Jones (LJ) scenario consists of soft spheres interacting with a short-range potential [10]. This simple setup underpins …
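As an illustration of the ingredients described above, the following minimal sketch sets up a small, charge-neutral Lennard-Jones system in ESPResSo and attaches the grid-based P3M electrostatics solver, dispatching between the 4.2-style API and the newer attribute-based API. It is not taken from the actual scenarios: all parameter values are placeholders, the hasattr-based dispatch is only one possible way to remain backward compatible, and the attribute name assumed for the newer API may differ; an LB fluid for hydrodynamic coupling would be attached analogously via espressomd.lb.

    import espressomd
    import espressomd.electrostatics

    # Illustrative sketch only; parameters are placeholders, not those of the
    # benchmark scenarios described in Deliverable 2.1.

    system = espressomd.System(box_l=[10.0, 10.0, 10.0])
    system.time_step = 0.01
    system.cell_system.skin = 0.4

    # Soft spheres interacting through a short-range, purely repulsive LJ potential.
    system.non_bonded_inter[0, 0].lennard_jones.set_params(
        epsilon=1.0, sigma=1.0, cutoff=2.0**(1.0 / 6.0), shift="auto")

    # A small, overall neutral set of charges placed on a simple cubic lattice
    # to avoid initial overlaps (placeholder configuration).
    index = 0
    for i in range(4):
        for j in range(4):
            for k in range(4):
                system.part.add(pos=[2.5 * i + 1.0, 2.5 * j + 1.0, 2.5 * k + 1.0],
                                q=(-1.0) ** index)
                index += 1

    # Long-range electrostatics with the grid-based P3M solver.
    p3m = espressomd.electrostatics.P3M(prefactor=1.0, accuracy=1e-3)
    if hasattr(system, "electrostatics"):
        # newer attribute-based API (assumed name of the solver slot)
        system.electrostatics.solver = p3m
    else:
        # 4.2-style API: solvers are registered as actors
        system.actors.add(p3m)

    system.integrator.run(100)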

