Marcin Kirsz authored (commit 2efd979d)

Massively Parallel version of Tadah!MLIP

This project is under active development. The goal is to enable training of machine-learned interatomic potentials (MLIPs) on large datasets distributed across many HPC nodes via MPI. Weight optimization is performed with ScaLAPACK, allowing the fit to scale efficiently with both dataset size and node count.
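To make the fitting step concrete, the sketch below shows the single-node analogue of the distributed solve: fitting linear weights by least squares over a design (descriptor) matrix. ScaLAPACK performs the same kind of solve (e.g. via routines such as `pdgels`) on matrices distributed block-cyclically over an MPI process grid; everything here — the matrix sizes, the synthetic data, the use of NumPy — is an illustrative assumption, not Tadah!MLIP's actual code.

```python
# Single-node sketch of the least-squares weight fit that the
# distributed ScaLAPACK solve performs across an MPI process grid.
# All data is synthetic and for illustration only.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_features = 200, 5               # rows: training observations, cols: descriptor terms
A = rng.normal(size=(n_samples, n_features)) # design (descriptor) matrix
w_true = np.arange(1.0, n_features + 1.0)    # hypothetical "true" weights
b = A @ w_true                               # target vector (e.g. energies/forces)

# Solve min_w ||A w - b||_2 — the serial analogue of a distributed
# least-squares solve such as ScaLAPACK's pdgels.
w_fit, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(w_fit, w_true))  # noiseless system: weights are recovered
```

In the massively parallel version, the design matrix never exists on one node: each MPI rank holds blocks of rows built from its share of the dataset, and the solver operates on that distributed layout directly, which is what makes training on datasets too large for a single node feasible.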