By the end of the project, eFlows4HPC had identified and uploaded the following 14 Key Exploitable Results* (KERs) to the Horizon Results Platform:
HPC Workflow as a Service (HPCWaaS) methodology: a service-oriented methodology that helps designers and developers manage the full lifecycle of complex scientific workflows, from design through deployment to execution. It leverages the project software stack to facilitate the reuse of complex workflows on federated HPC infrastructures, supports workflows that combine HPC, Artificial Intelligence, and Data Analytics, simplifies workflow programming, and widens access to HPC for newcomers.
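To make the workflow-lifecycle idea concrete, here is a minimal sketch using PyCOMPSs, the task-based programming model at the core of the eFlows4HPC software stack. The two steps and their logic are placeholders for illustration, not one of the project's workflows:

```python
# Minimal PyCOMPSs sketch: declare workflow steps as tasks and let the
# runtime schedule them on the available HPC resources.
# Run with the COMPSs launcher: runcompss <script>.py
from pycompss.api.task import task
from pycompss.api.api import compss_wait_on

@task(returns=1)
def simulate(config):
    # Placeholder for a compute-intensive HPC phase.
    return sum(config) / len(config)

@task(returns=1)
def analyse(result):
    # Placeholder for an analytics / ML phase consuming the simulation output.
    return result * 2

if __name__ == "__main__":
    partial = simulate([1.0, 2.0, 3.0])   # executes asynchronously as a task
    final = analyse(partial)              # dependency inferred from data flow
    print(compss_wait_on(final))          # synchronise and fetch the result
```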
eFlows4HPC software stack: it can be applied to many scientific and industrial applications that require the integration of compute-intensive phases (HPC), artificial intelligence, and big-data analytics.
Container Image Creation service (CIC): it automates the creation of HPC-ready containers to ease the installation of complex software on HPC systems.
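As a rough illustration of how such a service can be driven, the snippet below posts a build request to a CIC-style REST endpoint. The URL, payload fields, and response handling are assumptions for illustration, not the documented CIC API:

```python
# Hypothetical request to a Container Image Creation (CIC) endpoint.
# Endpoint URL and payload schema below are illustrative assumptions.
import requests

payload = {
    "machine": {"platform": "linux/arm64", "container_engine": "singularity"},
    "workflow": "minimal_workflow",   # workflow whose software stack to package
    "step_id": "simulation",
}
resp = requests.post("https://cic.example.org/build", json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())  # e.g. a build identifier to poll for the finished image
```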
CONVLIB: convolution operators for multicore ARM and RISC-V architectures. CONVLIB is a library of high-performance implementations of convolution algorithms for multicore platforms with ARM and RISC-V architectures. The library contains a driver routine that identifies the best values for four hyper-parameters (micro-kernel, cache configuration parameters, parallelization loop, and algorithm), automatically adapting the call to the dimensions of the convolution operator.
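The driver's selection mechanism can be illustrated in a few lines: benchmark candidate algorithms for the given operator dimensions and keep the fastest. This conceptual Python/SciPy sketch only covers the algorithm hyper-parameter; CONVLIB itself also tunes the micro-kernel, cache blocking, and parallelization loop natively on ARM and RISC-V:

```python
# Conceptual sketch of CONVLIB-style driver logic: time each candidate
# algorithm for the given convolution dimensions and select the fastest.
import time
import numpy as np
from scipy.signal import convolve2d, fftconvolve

def fastest_algorithm(image, kernel):
    candidates = {
        "direct": lambda: convolve2d(image, kernel, mode="same"),
        "fft": lambda: fftconvolve(image, kernel, mode="same"),
    }
    timings = {}
    for name, run in candidates.items():
        t0 = time.perf_counter()
        run()
        timings[name] = time.perf_counter() - t0
    return min(timings, key=timings.get)

image = np.random.rand(512, 512)
kernel = np.random.rand(7, 7)
print(fastest_algorithm(image, kernel))  # choice depends on operator size
```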
Data Logistics Service (DLS): as part of the eFlows4HPC HPCWaaS, the DLS can schedule, orchestrate, manage, and monitor data movements. It is integrated with popular data repositories and supports both stage-in to and stage-out from HPC systems, cloud storage, and other targets. It is mainly responsible for data movement and data management operations.
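A stage-in/stage-out pipeline of the kind DLS manages can be sketched with Apache Airflow, a common engine for such data pipelines. The transfer steps below are hypothetical placeholders, not DLS's own operators:

```python
# Generic Apache Airflow sketch of a stage-in -> stage-out pipeline, in the
# spirit of DLS-managed data movements. Transfer logic is a placeholder.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def stage_in():
    print("copy input data from a repository to HPC scratch storage")

def stage_out():
    print("copy results from HPC scratch to long-term storage")

with DAG(dag_id="stage_in_out_demo",
         start_date=datetime(2024, 1, 1),
         schedule=None, catchup=False) as dag:
    t_in = PythonOperator(task_id="stage_in", python_callable=stage_in)
    t_out = PythonOperator(task_id="stage_out", python_callable=stage_out)
    t_in >> t_out  # stage-out only runs after stage-in succeeds
```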
BLEST-ML (BLock size ESTimation through Machine Learning): BLEST-ML applies machine learning techniques to estimate suitable block sizes for data mapping in HPDA applications. These techniques are useful for specialists and software developers who need to implement scalable executions of data-intensive applications on HPC systems.
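The underlying idea, learning a mapping from dataset and platform features to a good block size, can be sketched with a generic regressor. The features, synthetic target, and model choice below are illustrative assumptions, not the published BLEST-ML pipeline:

```python
# Illustrative sketch of the BLEST-ML idea: learn to predict a good block
# size for a distributed data structure from dataset/platform features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Toy training set: (rows, columns, number of cores) -> observed best block size.
X = rng.integers(1, 10_000, size=(200, 3)).astype(float)
y = np.sqrt(X[:, 0] * X[:, 1]) / X[:, 2]  # synthetic stand-in target

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
print(model.predict([[5_000, 100, 48]]))  # estimated block size for a new run
```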
IO optimization for better model scaling and performance in high-resolution simulations: optimization of IO backends for increased data throughput, facilitating more efficient model scaling. The incorporation of parallel and asynchronous IO capabilities boosts the performance and scalability of high-resolution climate simulations.
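Asynchronous IO can be illustrated with a minimal pattern: the simulation hands snapshots to a background writer so computation does not block on disk. Real ESM IO backends (e.g. parallel NetCDF) are considerably more involved; this sketch only shows the overlap of computation and IO:

```python
# Minimal asynchronous-output sketch: a background thread drains a queue of
# snapshots while the main loop keeps computing.
import queue
import threading

def writer(q):
    while True:
        item = q.get()
        if item is None:          # sentinel: no more snapshots
            break
        step, data = item
        with open(f"snapshot_{step}.txt", "w") as f:
            f.write(str(data))    # stand-in for an expensive write

q = queue.Queue(maxsize=4)        # bounded queue applies back-pressure
t = threading.Thread(target=writer, args=(q,), daemon=True)
t.start()

for step in range(10):
    data = [step] * 1000          # stand-in for a computed model state
    q.put((step, data))           # enqueue; computation continues immediately
q.put(None)
t.join()
```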
Dynamic ESM Workflow: it performs an optimized execution of a workflow running ensemble experiments of an Earth System Model in HPC environments, and supports the reuse of existing components to automate ensemble execution, model tuning, and postprocessing.
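A hedged sketch of such ensemble orchestration with PyCOMPSs: each ensemble member runs as an independent task and a postprocessing task gathers the results. The member logic is a placeholder, not the project's ESM configuration:

```python
# PyCOMPSs sketch of ensemble execution: members run concurrently as tasks;
# a final task aggregates their outputs. Run with: runcompss <script>.py
from pycompss.api.task import task
from pycompss.api.api import compss_wait_on

@task(returns=1)
def run_member(member_id, perturbation):
    # Placeholder for one Earth System Model ensemble member.
    return perturbation * member_id

@task(returns=1)
def postprocess(*results):
    # Placeholder for ensemble statistics / model tuning step.
    return sum(results) / len(results)

if __name__ == "__main__":
    members = [run_member(i, 0.1 * i) for i in range(8)]
    print(compss_wait_on(postprocess(*members)))
```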
ML-based EStimator for ground-shaking maps (MLESmap): MLESmap is a novel ML-based methodology to generate ground-motion shaking maps at the speed of traditional empirical ground-motion prediction equations. The first prototype of MLESmap has been developed and tested using one of the largest synthetic datasets generated by the CyberShake software for Southern California. The methodology is currently being generalized into a full workflow with an offline and an online phase; PyCOMPSs is used to optimize the data extraction and merging step, and the dislib library serves as the ML engine to generate the models and the inferences of MLESmap.
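The training step can be sketched with dislib, the project's distributed ML library on PyCOMPSs, on synthetic data. The feature set and the choice of a linear model are illustrative assumptions, not the MLESmap models themselves:

```python
# Sketch of an MLESmap-style training step with dislib on synthetic data.
import numpy as np
import dislib as ds
from dislib.regression import LinearRegression

rng = np.random.default_rng(0)
features = rng.random((1000, 4))   # e.g. magnitude, distance, depth, site term
targets = features @ np.array([[1.5], [-2.0], [0.3], [0.7]])

x = ds.array(features, block_size=(250, 4))  # blocks distributed by PyCOMPSs
y = ds.array(targets, block_size=(250, 1))

model = LinearRegression()
model.fit(x, y)
pred = model.predict(x)            # shaking estimates at inference speed
print(pred.collect()[:3])
```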
Efficient end-to-end parallel workflow to reduce computationally demanding multi-physics models: a parallel workflow able to reduce computationally demanding multi-physics models to a size and speed such that they can run on an edge device in parallel to operation. It enables the owners of a complex physics-based simulation model, which is usually expensive in execution time, to derive fast reduced order models (ROMs) that can be used for real-time applications.
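The core reduction idea, proper orthogonal decomposition (POD) via an SVD of a snapshot matrix, can be sketched generically in a few lines; this is a textbook POD illustration, not the project's ROM pipeline:

```python
# Generic POD sketch: compress a snapshot matrix of full-order states into a
# small basis so the reduced model is cheap enough for real-time use.
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.random((10_000, 50))   # 50 full-order states of dimension 10000

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.999)) + 1
basis = U[:, :k]                        # reduced basis keeping 99.9% energy

reduced = basis.T @ snapshots           # k-dimensional coordinates
reconstructed = basis @ reduced         # cheap approximation of full states
print(k, np.linalg.norm(snapshots - reconstructed) / np.linalg.norm(snapshots))
```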
UCIS4EQ workflow: UCIS4EQ aims to provide insights into the impact of a large-magnitude event a few minutes after it occurs, for mitigation and resilience actions. Its capability to simulate high frequencies provides high-quality detail of the seismic waves, allowing analysis of the potential impact of an earthquake on key infrastructure that could produce collateral risks. UCIS4EQ is a fully automatic HPC workflow orchestrated by PyCOMPSs, enhanced to evaluate uncertainty quantification in seismic urgent computing.
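The uncertainty-quantification step can be illustrated with a toy ensemble: perturb the uncertain source parameters, run a (here synthetic) simulation per sample, and summarise the spread of the predicted ground motion:

```python
# Toy uncertainty-quantification sketch: the "simulation" is a synthetic
# stand-in for a physics-based solver, not the UCIS4EQ pipeline.
import numpy as np

rng = np.random.default_rng(1)

def simulate_pga(magnitude, depth_km):
    # Stand-in for a simulation returning peak ground acceleration.
    return np.exp(0.9 * magnitude - 0.05 * depth_km + rng.normal(0, 0.3))

# Source parameters are still uncertain minutes after the event: sample them.
magnitudes = rng.normal(7.0, 0.2, size=200)
depths = rng.normal(15.0, 5.0, size=200)

pga = np.array([simulate_pga(m, d) for m, d in zip(magnitudes, depths)])
print(np.percentile(pga, [16, 50, 84]))   # median and 1-sigma band
```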
Advanced Machine Learning approach for TC detection: a Machine Learning (ML) approach to detect the coordinates of a Tropical Cyclone's eye from climate variables produced by the CMCC-CM3 model.
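One plausible shape of such a detector is a small convolutional network regressing the eye coordinates from gridded variables. The architecture, input shape, and data below are illustrative assumptions (Keras is used for brevity), not the project's trained model:

```python
# Illustrative CNN mapping gridded climate variables to (lat, lon) of a
# cyclone eye, trained here on synthetic data.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(40, 40, 6)),        # 6 climate variables on a patch
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2),                  # predicted (lat, lon)
])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(32, 40, 40, 6).astype("float32")  # synthetic patches
y = np.random.rand(32, 2).astype("float32")          # synthetic eye coordinates
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1]))
```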
Urgent Computing policy recommendations: a policy document on the deployment of the urgent computing scenario in an ecosystem consisting of different stakeholders.
Probabilistic Tsunami Forecast (PTF) workflow for HPC-based urgent computing: PTF is a seamless end-to-end workflow with vastly increased portability from system to system. A simpler and dynamic evaluation of forecast uncertainty allows a controlled reduction of that uncertainty, thanks to the update of the estimate with a continuous flux of incoming information and with more accurate simulation methods.
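The continuous update of the forecast can be illustrated as a toy Bayesian re-weighting of precomputed scenarios: as observations arrive, probability mass concentrates on consistent scenarios and the forecast uncertainty narrows. Scenarios, likelihood model, and numbers are assumptions for illustration:

```python
# Toy Bayesian-update sketch: scenario probabilities are re-weighted as new
# observations arrive, narrowing the forecast uncertainty.
import numpy as np

prior = np.full(4, 0.25)                    # prior over precomputed scenarios
predicted = np.array([0.5, 1.0, 2.0, 4.0])  # each scenario's wave height (m)

def update(prob, observed, sigma=0.3):
    # Gaussian likelihood of the observation under each scenario.
    like = np.exp(-0.5 * ((observed - predicted) / sigma) ** 2)
    post = prob * like
    return post / post.sum()

prob = prior
for obs in [1.8, 2.1, 1.9]:                 # incoming sea-level observations
    prob = update(prob, obs)
print(prob)                                  # mass concentrates on the 2.0 m scenario
```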
* Definitions:
- Result: any tangible or intangible output of the action, such as data, knowledge, and information, whatever their form or nature, whether or not they can be protected, which are generated in the action, as well as any attached rights, including intellectual property rights.
- KER: a main interesting result which has been selected and prioritized due to its high potential to be “exploited” (meaning to make use of and derive benefits from) downstream in the value chain of a product, process or solution, or to act as an important input to policy, further research, or education.