Key Exploitable Results
The “Key Exploitable Results” section of the eFlows4HPC project website highlights the project’s most significant outcomes and the impact these advances have on scientific domains and industrial applications. Funded under the European Union’s Horizon 2020 research and innovation programme, eFlows4HPC brings together leading academic and industrial partners to push the boundaries of HPC workflows and their integration with data analytics and machine learning. The key exploitable results reflect the project’s commitment to addressing complex computational challenges and fostering gains in efficiency, scalability, and performance across Europe’s HPC landscape. Explore this section to discover the project’s milestones and how eFlows4HPC is shaping the future of computational science and engineering.
The project supports full life-cycle management of scientific workflows that combine HPC, Artificial Intelligence, and Data Analytics, simplifying workflow programming and widening access to HPC for newcomers.
eFlows4HPC software stack
The eFlows4HPC software stack can be applied to many scientific and industrial applications that require the integration of computationally intensive phases (HPC), artificial intelligence, and big data.
Container Image Creation service (CIC)
CIC automates the creation of HPC-ready containers to ease the installation of complex software on HPC systems.
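To make the idea concrete, the following sketch generates an Apptainer/Singularity-style definition file from a simple software specification. This is purely illustrative: the `build_definition` function and its fields are invented here for clarity and are not the actual CIC API.

```python
# Illustrative only: render a container definition from a software spec.
# The real CIC service exposes its own interface; this helper is a stand-in.

def build_definition(base_image, packages, commands=()):
    """Render a minimal container definition for an HPC-ready image."""
    lines = [
        "Bootstrap: docker",
        f"From: {base_image}",
        "",
        "%post",
    ]
    if packages:
        lines.append("    apt-get update && apt-get install -y "
                     + " ".join(packages))
    for cmd in commands:
        lines.append(f"    {cmd}")
    return "\n".join(lines) + "\n"

definition = build_definition(
    base_image="ubuntu:22.04",
    packages=["build-essential", "openmpi-bin"],
    commands=["pip install pycompss"],  # hypothetical workflow dependency
)
print(definition)
```

The value of automating this step is that users describe *what* software they need, while the service handles base images, system packages, and build commands for the target HPC machine.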
Data Logistics Service (DLS)
As part of the eFlows4HPC Workflow-as-a-Service, DLS schedules, orchestrates, manages, and monitors data movements. It is integrated with popular data repositories and supports both stage-in to and stage-out from HPC and cloud systems, and it is primarily responsible for data movement and data management operations.
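The stage-in/stage-out pattern can be sketched with local paths standing in for remote repositories and HPC scratch space. The helper names below are assumptions for illustration, not the DLS interface.

```python
# Minimal sketch of stage-in/stage-out data movement. Local directories
# stand in for a remote repository and HPC scratch; the real DLS handles
# remote transfers and monitoring on top of workflow orchestration.
import shutil
import tempfile
from pathlib import Path

def stage_in(source: Path, scratch: Path) -> Path:
    """Copy input data from a repository into the HPC scratch area."""
    dest = scratch / source.name
    shutil.copy2(source, dest)
    return dest

def stage_out(result: Path, repository: Path) -> Path:
    """Copy results from scratch back to long-term storage."""
    dest = repository / result.name
    shutil.copy2(result, dest)
    return dest

# Demonstrate a round trip with temporary directories.
repo = Path(tempfile.mkdtemp())
scratch = Path(tempfile.mkdtemp())
input_file = repo / "input.dat"
input_file.write_text("42\n")

staged = stage_in(input_file, scratch)
staged.write_text(staged.read_text() + "processed\n")  # "compute" step
out = stage_out(staged, repo)
```

The point of centralising these operations in a service is that the workflow only declares *which* data it needs where; scheduling, retries, and monitoring are handled uniformly.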
CONVLIB: Convolution operators on multicore ARM and RISC-V architectures
CONVLIB is a library containing high-performance implementations of convolution algorithms for multicore platforms with ARM and RISC-V architectures. The library contains a driver routine that identifies the best values for four hyper-parameters — micro-kernel, cache configuration parameters, parallelization loop, and algorithm — automatically adapting the call to the dimensions of the convolution operator.
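The driver's best-variant selection can be illustrated by timing candidate implementations on the actual operator dimensions and keeping the fastest. The variants below are plain-Python stand-ins, not CONVLIB's ARM/RISC-V micro-kernels.

```python
# Illustrative autotuning driver in the spirit of CONVLIB's selection step:
# time each candidate convolution variant on the given dimensions and keep
# the fastest. Both variants compute the same valid 1-D convolution.
import time

def conv_direct(signal, kernel):
    n, k = len(signal), len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]

def conv_unrolled(signal, kernel):
    # Same result, different loop order (stands in for another algorithm).
    n, k = len(signal), len(kernel)
    out = [0.0] * (n - k + 1)
    for j, w in enumerate(kernel):
        for i in range(n - k + 1):
            out[i] += signal[i + j] * w
    return out

def pick_best(variants, signal, kernel, reps=3):
    """Return the fastest variant for these problem dimensions."""
    timings = {}
    for fn in variants:
        t0 = time.perf_counter()
        for _ in range(reps):
            fn(signal, kernel)
        timings[fn] = time.perf_counter() - t0
    return min(timings, key=timings.get)

signal = [float(i % 7) for i in range(512)]
kernel = [0.25, 0.5, 0.25]
best = pick_best([conv_direct, conv_unrolled], signal, kernel)
result = best(signal, kernel)
```

In the real library the search space also covers cache blocking and parallelization choices, so the winning configuration genuinely depends on the operator's dimensions and the target core.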
BLEST-ML (BLock size ESTimation through Machine Learning)
BLEST-ML is an advanced implementation of machine learning techniques for block-size estimation in data mapping for HPDA applications. These techniques are useful for specialists and software developers who need to implement scalable executions of data-intensive applications on HPC systems.
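The underlying idea — learning a good block size from past executions — can be sketched as a nearest-neighbour lookup over logged configurations. The training table, features, and predictor below are toy assumptions, not BLEST-ML's actual model or data.

```python
# Hedged sketch of block-size estimation: nearest-neighbour lookup over
# hypothetical logged samples (log10 of dataset rows, core count) -> block
# size. BLEST-ML trains a real ML model on execution logs; this is a toy.
import math

SAMPLES = [
    ((4.0, 4), 1_000),
    ((5.0, 16), 5_000),
    ((6.0, 48), 20_000),
    ((7.0, 96), 100_000),
]

def estimate_block_size(rows: int, cores: int) -> int:
    """Return the block size of the closest known configuration."""
    query = (math.log10(rows), cores)

    def dist(sample):
        (log_rows, c), _ = sample
        # Scale cores so both features contribute comparably.
        return (log_rows - query[0]) ** 2 + ((c - query[1]) / 100) ** 2

    return min(SAMPLES, key=dist)[1]

print(estimate_block_size(2_000_000, 48))
```

A data-driven estimate like this matters because a poorly chosen block size either under-utilises the cores or overwhelms per-node memory, and hand-tuning it per dataset does not scale.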
IO optimization for better model scaling and performance in high resolution simulations
Optimization of IO backends for increased data throughput, facilitating more efficient model scaling, together with the incorporation of parallel and asynchronous IO capabilities that enhance high-resolution climate simulations. These improvements boost the performance and scalability of high-resolution climate simulations through optimized IO throughput.
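The core idea of asynchronous output can be shown with a background-writer pattern: the model loop hands records to a queue and keeps computing while a writer thread drains them to disk. The real optimisation lives inside the model's IO backends; this only illustrates the underlying principle.

```python
# Sketch of asynchronous IO: compute steps enqueue output records and never
# block on disk; a background thread performs the actual writes.
import os
import queue
import tempfile
import threading
from pathlib import Path

fd, path = tempfile.mkstemp(suffix=".log")
os.close(fd)
out_path = Path(path)

records = queue.Queue()  # holds record strings; None is the stop sentinel

def writer():
    with out_path.open("w") as fh:
        while (rec := records.get()) is not None:
            fh.write(rec + "\n")

t = threading.Thread(target=writer)
t.start()

for step in range(5):                     # stands in for model time steps
    records.put(f"step {step} mean=0.0")  # hand off without blocking
records.put(None)                         # sentinel: flush and stop
t.join()

lines = out_path.read_text().splitlines()
```

Decoupling computation from writing in this way is what lets a high-resolution simulation scale: compute nodes no longer sit idle waiting for the filesystem.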
Dynamic ESM Workflow
It performs an optimized execution of a workflow running ensemble experiments of an Earth System Model in HPC environments, and supports the reuse of existing components to automate ensemble execution, model tuning, and postprocessing.
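The ensemble pattern can be sketched as independent model runs followed by common postprocessing. In the actual workflow PyCOMPSs handles this scheduling on HPC resources; here the standard library's thread pool stands in, and the model and diagnostic are invented placeholders.

```python
# Minimal sketch of ensemble execution: run independent perturbed members
# in parallel, then apply a shared postprocessing step to their outputs.
from concurrent.futures import ThreadPoolExecutor

def run_member(member_id, perturbation):
    """Stand-in for one Earth System Model run; returns a diagnostic."""
    return 15.0 + perturbation * member_id   # fake global-mean temperature

def postprocess(results):
    """Ensemble mean, as a stand-in for the real postprocessing step."""
    return sum(results) / len(results)

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(run_member, i, 0.1) for i in range(4)]
    ensemble = [f.result() for f in futures]

ensemble_mean = postprocess(ensemble)
```

Because members are independent, the workflow engine can schedule them across nodes and trigger postprocessing as soon as all results arrive, which is precisely what the dynamic workflow automates.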
ML-based EStimator for ground-shaking maps (MLESmap)
MLESmap is a novel ML-based methodology to generate ground motion shaking maps at the speed of traditional empirical ground motion prediction equations. The first prototype of MLESmap has been developed and tested using one of the largest synthetic datasets generated by the CyberShake software for Southern California. The MLESmap methodology is currently being generalized into a full workflow comprising an offline and an online phase. We use PyCOMPSs to optimize the data extraction and merging step; as the ML engine, the dislib software generates the models and the inferences of MLESmap.
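The offline/online split can be sketched as follows: offline, fit a model to simulated (magnitude, distance) → shaking samples; online, query it near-instantly for a new event. The tiny nearest-neighbour model and training values below are illustrative assumptions — MLESmap trains on large CyberShake datasets using dislib.

```python
# Hedged sketch of MLESmap's two phases with a toy 1-NN model.
# Offline: a hypothetical training set (magnitude, distance km) -> log10 PGA.
TRAINING = [
    ((5.0, 10.0), -1.2),
    ((6.0, 10.0), -0.6),
    ((6.0, 50.0), -1.5),
    ((7.0, 20.0), -0.3),
]

def predict_log_pga(magnitude, distance_km):
    """Online phase: nearest-neighbour lookup, fast enough for urgent use."""
    def dist(sample):
        (m, d), _ = sample
        # Scale distance so both features contribute comparably.
        return (m - magnitude) ** 2 + ((d - distance_km) / 10.0) ** 2

    return min(TRAINING, key=dist)[1]

print(predict_log_pga(6.1, 12.0))
```

The key property this illustrates is that all expensive physics-based simulation happens offline; the online query costs only a model evaluation, which is what makes shaking estimates available at empirical-equation speed.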
The UCIS4EQ workflow aims to provide insights into the impact of a large-magnitude event a few minutes after it occurs, for mitigation and resilience actions. The capability of UCIS4EQ to simulate high frequencies provides high-quality detail of the seismic waves for analyzing the overall potential impact of an earthquake on key infrastructures that could produce collateral risks. UCIS4EQ is a fully automatic HPC workflow orchestrated by PyCOMPSs technology, enhanced to evaluate uncertainty quantification in seismic urgent computing.
Efficient End-to-End parallel workflow to reduce computationally demanding multi-physical models
A parallel workflow able to reduce computationally demanding multi-physics models to a size and speed at which they can run on an edge device in parallel to operation. This workflow enables the owners of a complex physics-based simulation model, which is usually expensive in terms of execution time, to derive fast reduced-order models (ROMs) that can be used for real-time applications.
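The reduce-then-deploy idea can be shown with a minimal surrogate: evaluate the "expensive" model at a few points offline, build a cheap interpolant, and evaluate only the interpolant online. A quadratic interpolant stands in here for a real reduced-order model; the projection techniques used in the actual workflow are not reproduced.

```python
# Sketch of model reduction: replace a costly model with a cheap surrogate
# suitable for real-time evaluation on an edge device.

def expensive_model(x):
    """Stand-in for a costly multi-physics simulation."""
    return 3.0 * x * x + 2.0 * x + 1.0

# Offline: evaluate the full model at three points and build an exact
# quadratic (Lagrange) interpolant as the "reduced" model.
x0, x1, x2 = 0.0, 1.0, 2.0
y0, y1, y2 = (expensive_model(x) for x in (x0, x1, x2))

def reduced_model(x):
    """Online: cheap surrogate, evaluable in real time."""
    l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
    l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
    l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
    return y0 * l0 + y1 * l1 + y2 * l2
```

In practice the full model is far too complex for exact interpolation, so ROM techniques project it onto a low-dimensional basis; the workflow's contribution is parallelising that reduction end to end.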
An advanced Machine Learning approach for TC detection
A Machine Learning (ML) approach to detect the coordinates of a Tropical Cyclone's eye starting from climate variables produced by the CMCC-CM3 model.
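A toy illustration of eye localisation, assuming the eye coincides with the minimum of a sea-level-pressure field; the actual result uses a trained ML model on CMCC-CM3 climate variables, which this simple grid search only mimics.

```python
# Locate a cyclone "eye" as the grid cell with the lowest pressure.
# Illustrative baseline only, not the ML detector described above.

def locate_eye(pressure_grid):
    """Return the (row, col) of the lowest pressure in a 2-D grid."""
    best = None
    for i, row in enumerate(pressure_grid):
        for j, p in enumerate(row):
            if best is None or p < best[0]:
                best = (p, i, j)
    return best[1], best[2]

# Synthetic 4x4 sea-level-pressure field (hPa) with a low at (2, 1).
grid = [
    [1012.0, 1011.0, 1010.5, 1011.0],
    [1010.0, 1002.0,  998.0, 1006.0],
    [1008.0,  985.0,  994.0, 1004.0],
    [1010.0, 1005.0, 1003.0, 1009.0],
]
eye = locate_eye(grid)
```

An ML detector improves on such a baseline by combining several climate variables and learning cyclone structure, rather than relying on a single-field minimum that noise or unrelated lows can fool.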
PTF workflow for HPC-based urgent computing
A seamless end-to-end workflow with vastly increased portability from system to system. A simpler and dynamic evaluation of forecast uncertainty allows a controlled reduction of that uncertainty, thanks to estimates updated with a continuous flux of incoming information and with more accurate simulation methods.
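The update-as-data-arrives idea can be sketched with a standard Bayesian (precision-weighted) fusion of a Gaussian forecast with incoming observations, so the variance shrinks with each arrival. The Gaussian assumption and the numbers are illustrative, not the PTF's actual probabilistic model.

```python
# Sketch of sequential uncertainty reduction: fuse each incoming
# observation into a Gaussian estimate; the variance shrinks as data arrive.

def update(prior_mean, prior_var, obs, obs_var):
    """Fuse one observation into the current Gaussian estimate."""
    w = prior_var / (prior_var + obs_var)
    mean = prior_mean + w * (obs - prior_mean)
    var = (1.0 - w) * prior_var
    return mean, var

mean, var = 2.0, 1.0             # hypothetical initial forecast and variance
for obs in (2.4, 2.2, 2.3):      # continuous flux of incoming measurements
    mean, var = update(mean, var, obs, obs_var=0.5)
```

This is the sense in which the forecast's uncertainty reduction is "controlled": every new observation or refined simulation tightens the estimate by a quantified amount instead of replacing it wholesale.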
An example of the expected deployment of the entire urgent-computing ecosystem
The policy document on the deployment of the urgent computing scenario in an ecosystem consisting of different stakeholders.