Abstraction, Orchestration and Modelling of Data Movement in Heterogeneous Memory Systems

Data movement is a constraining factor in almost all HPC applications and workflows. The reasons for this ubiquity include physical design constraints, power limitations, the relative advancement of processors versus memory, and rapid increases in dataset sizes. While decades of research and innovation in HPC have produced robust and powerful optimizing environments, even basic data-movement optimization remains a challenge. In many cases, fundamental abstractions suited to the expression of data are still missing, as is a model of the various memory types and features. Performance portability on exascale systems requires that heterogeneous memories be used intelligently and abstractly by the middleware/runtime rather than through explicit, laborious hand-coding. To do so, capacity, bandwidth and latency considerations at multiple levels must be understood (and often modeled) at runtime. Furthermore, the semantics of data usage within applications must be evident in the programming model. Several research projects are presenting solutions either to a piece of this problem (EPiGRAM-HS with MAMBA, Tuyere) or to it holistically (DaCE). This minisymposium will present a sample of the most relevant research concerning programming abstractions, models and runtimes for data movement from the perspectives of vendors (HPE/Cray), world-class supercomputer centers (EPCC, ORNL) and programming model developers (ETH).

Organizer(s): Stefano Markidis (KTH Royal Institute of Technology), Jeffrey S. Vetter (Oak Ridge National Laboratory), and Olivier Marsden (ECMWF)

Domain: Computer Science and Applied Mathematics


Advances in Computational Geosciences, Parts I, II, III

Computational geosciences leverage advanced computational methods to improve our understanding of the interiors of Earth and other planets. They combine numerical models to understand the current state of the physical quantities describing a system, to predict their future states, and to infer unknown parameters of those models from data measurements. Such models produce highly nonlinear numerical systems with extremely large numbers of unknowns. The ever-increasing power and availability of High Performance Computing (HPC) facilities offer researchers unprecedented opportunities to continually increase both the spatiotemporal resolution and the physical complexity of their numerical models. However, this requires complex numerical methods, and implementations thereof, that can harness HPC resources efficiently for problem sizes of billions of degrees of freedom. The goal of this minisymposium is to bring together scientists who work on theory, numerical methods, algorithms and scientific software engineering for scalable numerical modelling and inversion within the computational geosciences, including, but not limited to, computational geodynamics, seismology, glaciology, geophysical fluid dynamics, and urgent computing in natural hazard management.

Organizer(s): Marta Pienkowska-Cote (ETH Zurich), Patrick Sanan (ETH Zurich), and Vaclav Hapla (ETH Zurich)

Domain: Solid Earth Dynamics


Algorithmic Developments towards Earth System Modelling on Exascale Supercomputers

Many of the operational weather and climate prediction models worldwide have been developed over multiple decades. The algorithms used in these models were often designed well before the multicore era started. To take full advantage of the emerging massively parallel heterogeneous supercomputers, it is necessary to investigate completely new algorithms which have never been used in operational weather and climate prediction before and which promise significant improvements in scalability and energy efficiency. In particular, high-order discontinuous and continuous Galerkin methods offer increased operational intensity, which allows better use of the available processor performance while reducing memory traffic, typically the bottleneck in most earth system models. At the same time, new time integration methods make it possible to reduce the number of time steps or even to parallelize the computation across time steps. This minisymposium will present innovative work on these new algorithmic approaches and discuss their benefits in light of the upcoming exascale supercomputers.

Organizer(s): Andreas Mueller (ECMWF), and Giovanni Tumolo (ECMWF)

Domain: Climate and Weather


Applied Cutting-Edge Machine Learning in Cosmology and Particle Physics

Cosmology and particle physics both strive to gain a deeper understanding of the history, the composition and the inner workings of the Universe. While complementary in many aspects, the two disciplines also share a great deal: they try to answer the same open questions, they use similar detectors and analysis techniques, and they both have marvelously precise models. Both rely heavily on Monte Carlo simulation techniques and have ever-increasing datasets, and their planned next-generation experiments face yet-unsolved computing challenges related to triggering, data reconstruction and simulation, as well as data storage. Traditional computing approaches do not scale to these new challenges and are limiting the physics output; new computing paradigms are needed to make progress. Recent developments in machine learning techniques, coupled with custom hardware, may offer promising directions for improvement. The minisymposium will highlight several prominent areas in this fast-emerging and thriving field with a diverse set of select international speakers, in both physics and computer science, from research universities, research institutes and industry. This rich diversity will provide different angles from which to shine light on the problem, for maximum clarity and accessibility of the underlying challenges and the proposed methods to address them.

Organizer(s): Tobias Golling (University of Geneva), Slava Voloshynovskiy (University of Geneva), and Danilo Rezende (DeepMind)

Domain: Physics


Artificial Intelligence Enabled Multiscale Molecular Simulations in Biological and Material Sciences

Simulations of physical phenomena consume a large fraction of the computer time on existing supercomputing resources. Today, the challenge of scaling multiscale simulations is primarily addressed by brute-force search-and-sample techniques, which are computationally expensive. Emerging exascale architectures pose challenges for simulations such as efficient and scalable execution of complex workflows, concurrent execution of heterogeneous tasks, robustness of algorithms on millions of processing cores, data and I/O parallelism, and fault tolerance. Incremental approaches to scaling simulations will therefore not achieve the desired throughput and utilization on such machines. Machine learning (ML) techniques can be integrated with system and application changes to deliver many orders of magnitude higher effective performance. We term this convergence of high-performance computing (HPC) and ML methodologies/practice MLforHPC. Nowhere is the impact of MLforHPC methods likely to be greater than in multiscale simulations in the biological and material sciences, with early evidence suggesting several orders of magnitude improvement over traditional methods. Fueled by advances in statistical algorithms and runtime systems, ensemble-based methods have overcome some of the limitations of traditional monolithic simulations. Furthermore, integrating ML approaches with such ensemble methods holds even greater promise for overcoming performance barriers and enabling simulations of complex multiscale phenomena.

Organizer(s): Arvind Ramanathan (Argonne National Laboratory), Shantenu Jha (Rutgers University, Brookhaven National Laboratory), and Geoffrey Fox (Indiana University)

Domain: Life Sciences


Bringing Task-Based Programming to the Mainstream

HPC systems are increasingly heterogeneous and massively parallel. This creates unique challenges in fully utilizing all the resources available on a node. Application developers have to expose enough parallelism to take advantage of increasing core counts. At the same time, communication between both on-node and inter-node components is becoming harder to manage. The fork-join programming paradigm is the preferred choice for most applications because of its simplicity and often straightforward application to serial programs. However, its implicit global barriers impose significant limitations on performance. Asynchrony is becoming a requirement for hiding latencies and is even starting to see wider use in more traditional libraries. Relaxing data and task dependencies is also an important technique for exposing more parallelism in an application. These are all ideas that task-based programming brings to users, ideas which are making their way into more established libraries and pushing applications, libraries, and languages in new directions. This minisymposium brings together implementers and users of task-based programming frameworks and aims to discuss the benefits, recent advances, and remaining challenges in making task-based programming usable and accessible to everyone.

Organizer(s): Mikael Simberg (ETH Zurich / CSCS), John Biddiscombe (ETH Zurich / CSCS), and Auriane Reverdell (ETH Zurich / CSCS)

Domain: Computer Science and Applied Mathematics


Computational Challenges in Nonlinear Macroeconometrics

The estimation of large-scale nonlinear rational expectations models constitutes a major methodological and computational challenge for macroeconometrics. This session addresses the high demand for appropriate, efficient, and computationally feasible methods to bring complex economic models to the data. We discuss nonlinear filtering methods and advanced sampling techniques. We aim to bring together economists and econometricians who work on the filtering and estimation of large-scale nonlinear models. At the same time, we discuss difficulties concerning implementation and the efficient use of computational resources.

Organizer(s): Gregor Boehl (Goethe University Frankfurt)

Domain: Emerging Application Domains


Cosmological N-Body Simulations Beyond Newtonian Physics

Cosmology has undergone a revolution, from a rather philosophical enterprise to a data-driven, observational science. To make sense of the terabytes and petabytes of data streaming in from new and future facilities, we need large-scale N-body simulations that contain the relevant physics and cover a huge dynamic range to reach the required precision; in the future, these simulations will provide a benchmark problem for exascale computing. They combine many numerical challenges, including load-balancing multi-scale dynamical evolution, solving nonlinear finite-difference equations, managing complex data sets and performing on-the-fly statistical analyses. This minisymposium reviews the methods behind the largest current simulations, which evolve over a trillion particles, and ongoing developments in including effects from General Relativity as well as relativistic fields and particles in the simulations. The presentations will provide both an overview of current results relevant for cosmology and a look inside the machinery (algorithms and their implementations) behind the latest generation of cosmological simulation codes.

Organizer(s): Martin Kunz (University of Geneva), Joachim Stadel (University of Zurich), and Julian Adamek (Queen Mary University of London)

Domain: Physics


CP2K: High-Performance Computing in Chemistry and Material Science

CP2K is a quantum chemistry and solid state physics software package that can perform atomistic simulations of condensed phase systems. It can simulate the electronic structure and thermodynamic properties of liquids and solutions, complex materials and soft biological systems. CP2K is written in Fortran 2008 and can be run efficiently in parallel using a combination of shared-memory multi-threading with OpenMP, distributed-memory MPI, and accelerators using, e.g., CUDA. New low-scaling implementations of electronic structure methods enable simulations of systems containing millions of atoms for Density Functional Theory (DFT) and thousands of atoms for the Random Phase Approximation (RPA). These methods are based on sparse linear algebra. Performance, portability and ease of development are ensured by the accompanying development of a general sparse matrix/tensor library (DBCSR). The desire to perform calculations on a large number of materials of interest calls for automated workflows to organize massive amounts of data and calculations. This is enabled by combining CP2K with the Automated Interactive Infrastructure and Database for Computational Science (AiiDA).

Organizer(s): Patrick Seewald (University of Zurich), Beliz Sertcan (University of Zurich), and Maria Bilichenko (University of Zurich)

Domain: Chemistry and Materials


Data Locality Challenges in Astrophysics Applications

Data locality on new and future generation platforms is expected to be of paramount importance for good performance. However, many applications in astrophysics are by nature multiphysics, involving multiple solvers with diverse data layouts and communication patterns. Additionally, because they can have a large dynamic range of scales, different resolutions in different parts of the domain are often necessary for computational efficiency. Thus, the constraints placed on data management by the physics models and their corresponding numerical methods can be at cross purposes with data locality. This mini-symposium will examine constraints on data locality for two major classes of astrophysics applications: cosmology and supernovae. For each class we include two types of discretization approaches, adaptive mesh refinement and smoothed particle hydrodynamics, which have very different data layouts, access patterns and solver characteristics.

Organizer(s): Anshu Dubey (Argonne National Laboratory, University of Chicago), Bronson Messer (Oak Ridge National Laboratory, University of Tennessee), and Sean Couch (Michigan State University)

Domain: Physics


Data Movement Orchestration on HPC Systems

The next HPC systems are expected to feature millions of cores, providing more parallelism for large-scale simulations and workflows in varied scientific domains. However, there is a price to pay: the gap between computing power and data-movement performance will keep widening and will inevitably intensify the bottleneck caused by data movement. It thus becomes necessary to consider data movement within these modern architectures at all stages of the data lifetime and to develop software components for orchestrating this movement. In particular, we can identify several key aspects, among them data semantics for expressing application needs, and memory abstraction for portability and extensibility given the multiplicity of memory and storage tiers. During this minisymposium, we will focus on these two facets and present very promising ongoing projects from academia and industry that propose to address the data orchestration challenge and to facilitate efficient exploitation of future exascale systems. An open discussion with the speakers will close the session.

Organizer(s): François Tessier (ETH Zurich / CSCS), Dirk Pleiter (Forschungszentrum Jülich), and Utz-Uwe Haus (Cray European Research Lab)

Domain: Computer Science and Applied Mathematics


Data-Driven Scale-Bridging for Computational Materials Science

The field of computational materials design has been pushed to new frontiers over the last decade. This is mainly rooted in recent advances in augmenting well-known simulation approaches such as density functional theory (DFT) or molecular dynamics (MD) with data-driven approaches like machine learning. Representing DFT potential energy surfaces with neural networks leads to a great gain in computational efficiency without losing much of the accuracy of the DFT calculation. Beyond describing interatomic interactions, machine learning has further been applied to predict material properties based on first-principles calculations and to classify molecules based on thermodynamic properties. The ultimate goal of this new class of methodological approaches is to aid the development of novel materials. These materials may find applications in drug design, unconventional energy resources or innovative semiconducting materials. This minisymposium will present state-of-the-art examples of the underlying method development, scientific applications, and implementation on high-performance computing platforms.

Organizer(s): Timothy Germann (Los Alamos National Laboratory), Jean-Bernard Maillet (CEA), and David Rosenberger (Los Alamos National Laboratory)

Domain: Engineering


Developing Scientific Codes for Predictive Simulations on Massively Parallel Heterogeneous Computing Platforms: Integrating Extreme-Scale Computation, Data Analysis and Visualization I

In this minisymposium we address an important question: how do we future-proof scientific codes on a rapidly changing hardware landscape of heterogeneous computing platforms, which at present consists of CPU+GPU systems with significant differences between the GPUs? Given that the languages/APIs/pragmas used to offload instructions and data to and from the GPUs on these systems (e.g. SYCL, HIP, OpenMP 5.x) are very different, the task of refactoring large scientific codes, each with its own library dependencies, is a daunting one. Consequently, the questions uppermost in the minds of code developers are: a) how feasible is it to use a high-level hardware abstraction layer (HAL) that would make codes portable across the various heterogeneous computing platforms, and b) will these HALs continue to be developed as other accelerators become part of the hardware landscape? In this minisymposium we shine the spotlight on one such HAL, namely Kokkos, which is being developed by the US Department of Energy as part of the Exascale Computing Project (ECP). We have four talks on the usability of Kokkos, on the development of mesh- and particle-based scientific codes, and on a specialized scientific library, all of which leverage Kokkos for portability.

Organizer(s): Ramesh Balakrishnan (Argonne National Laboratory), and Irina Tezaur (Sandia National Laboratories)

Domain: Engineering


Developing Scientific Codes for Predictive Simulations on Massively Parallel Heterogeneous Computing Platforms: Integrating Extreme-Scale Computation, Data Analysis and Visualization II

Most flow solvers, commercial as well as open-source, that are used for turbulent flow simulations are based on spatial discretizations that are nominally second-order accurate for evolving the compressible and incompressible Navier-Stokes equations on unstructured meshes that represent the underlying complex geometry. For canonical simulations of incompressible turbulent flows, on the other hand, where the geometry of the computational domain is much simpler, solvers usually make use of FFT-based pseudo-spectral methods, which can be used in conjunction with higher-order finite difference schemes. The construction of these solvers for optimal performance on GPU-based platforms, and the hardware abstractions used to offload computations to the GPU, is the subject of this minisymposium. Secondly, this minisymposium will feature a talk that assesses the performance of higher-order discretization schemes (with local support) on GPU-based platforms, and their ability to represent fine-scale turbulent flow features when compared with the pseudo-spectral solvers that have traditionally been used for DNS of canonical flows. Finally, this minisymposium will also present the simulation of multiphase flows with a higher-order lattice Boltzmann method.

Organizer(s): Ramesh Balakrishnan (Argonne National Laboratory), and Irina Tezaur (Sandia National Laboratories)

Domain: Engineering


Developing Scientific Codes for Predictive Simulations on Massively Parallel Heterogeneous Computing Platforms: Integrating Extreme-Scale Computation, Data Analysis and Visualization III

In this minisymposium we present talks on the development of software capabilities for uncertainty quantification, optimization, and machine learning. In particular, we present the development of a framework for the effective use of next-generation computing platforms for ensemble calculations for uncertainty quantification. The development of the MFEM library, which implements finite element solvers for solid and fluid mechanics on GPU-based platforms, will be discussed next. With massively parallel computations generating terabytes of data per second, it becomes infeasible to write out checkpoint files that can later be used for data analysis, visualization, and as “truth” datasets for machine learning. The task of learning from simulations on the fly will be presented in a talk that examines problems related to flow physics and climate modeling. Finally, we present a talk on proxy applications. In instances where a code in its entirety cannot be shared with the hardware vendor, a representative section of the code, or kernel, known as a “proxy-app”, is often used to assess the effectiveness of new hardware or compilers. We examine a newer aspect of proxy-apps, namely the use of optimized scientific kernels as building blocks for scientific code development.

Organizer(s): Ramesh Balakrishnan (Argonne National Laboratory), and Irina Tezaur (Sandia National Laboratories)

Domain: Engineering


Directive-Based Approaches to Port Earth System Models to Accelerators

Weather and climate models typically contain millions of lines of code, in many cases developed over multiple decades. This makes it difficult to adapt these models to the emerging massively parallel heterogeneous supercomputers while keeping the code readable, maintainable and ready for operational use. One major concern for many weather and climate prediction centers is the ability of domain scientists to explore new algorithms without first having to create the infrastructure for those changes. One option to address these difficulties, to some extent, is the use of directive-based approaches, including programming models such as OpenMP and OpenACC. These programming models have the advantage that the original code base can, in principle, still be used on traditional architectures like CPUs or the new NEC SX-Aurora TSUBASA vector engines. This minisymposium presents directive-based porting efforts for four major weather models widely used in the weather and climate community. The speakers will present how they have ported their codes to heterogeneous machines, why they chose directive-based approaches, and how the performance and porting effort compare, in their experience, to hand-tuned optimization and/or domain-specific tools.

Organizer(s): Andreas Mueller (ECMWF)

Domain: Climate and Weather


Disaster Response: HPC for Real-Time Urgent Decision Making

In responding to disasters such as wildfires, hurricanes, flooding, earthquakes, tsunamis, winter weather conditions, the spread of diseases, and accidents, technological advances are creating exciting new opportunities that have the potential to move HPC well beyond traditional computational workloads. While HPC has a long history of simulating disasters after the fact, an exciting possibility is to use these resources to support emergency, urgent decision making in real time. As our ability to capture data continues to grow significantly, it is only now possible to combine high-velocity data and live analytics with HPC models to aid in urgently responding to real-world problems, ultimately saving lives and reducing economic loss. To make this vision a reality, a variety of technical and policy challenges must be identified and overcome. Whether it be developing more interactive simulation codes that include real-time data feeds, improving in-situ data analysis techniques, developing new large-scale data visualisation techniques, or guaranteeing bounded and predictable machine queue times, the challenges here are significant. In this minisymposium, we will discuss this emerging HPC use case by bringing together experts in the field, researchers, practitioners, and interested parties from across our community to identify and tackle the issues involved in using HPC for urgent decision making.

Organizer(s): Nick Brown (EPCC), and Max Kontak (German Aerospace Center)

Domain: Emerging Application Domains


Discontinuous Numerical Methods and High-Performance Computing for Geotechnical Engineering

Numerical simulation has become a necessary tool in the field of geotechnical engineering. Geomaterials, the materials commonly involved, exhibit great discontinuity, heterogeneity, and anisotropy. To describe their mechanical behavior, various computational methods have been developed. As an important branch, discontinuous numerical methods are designed using a bottom-up strategy, in which the computational model is divided into a group of discrete elements to reproduce the response of its physical counterpart. Compared to continuous methods, such as the finite element method (FEM), discontinuous numerical methods are regarded as superior in representing the characteristics of geomaterials and in obtaining results closer to those of laboratory testing. However, their application to practical cases in geotechnical engineering remains hampered by extremely high computational requirements: millions, if not billions, of numerical elements are generally required for a discontinuous numerical model of a large-scale slope or underground cavern. This mini-symposium aims to present new parallel computing algorithms for discontinuous numerical methods in geotechnical engineering, including but not limited to newly developed methods such as discontinuous deformation analysis, the discrete element method and the four-dimensional lattice spring model.

Organizer(s): Gao-Feng Zhao (Tianjin University), Chun Liu (Nanjing University), and Yuyong Jiao (China University of Geosciences)

Domain: Solid Earth Dynamics


Domain-Specific Languages and Compilers for Weather and Climate

Architectural specialization, driven by the limits imposed by the slowdown of Moore's Law, is here to stay. For weather and climate models, the increased complexity of hardware architectures poses a huge challenge. Balancing development speed, performance portability, efficiency and maintenance cost of community-developed weather and climate models using the prevalent programming model of Fortran plus extensions has become increasingly hard and has slowed scientific productivity. A few efforts aim to solve this challenge by developing domain-specific language (DSL) compilers. Higher-level programming increases developer productivity and shifts the burden of generating efficient code for a given hardware architecture to the DSL compiler. Requirements on the DSLs are not unanimous, since the target architecture, the computational patterns from different models, as well as the preferred way of expressing the model, vary among some of the major weather and climate model development efforts. In this mini-symposium, keynote speakers from various efforts around the world will talk about their approaches and the lessons learned from their work. We provide a platform to discuss how the future of domain-specific languages in weather and climate should look and how we can evolve our current ideas.

Organizer(s): Oliver Fuhrer (Vulcan Inc, MeteoSwiss), Tobias Wicky (Vulcan Inc, University of Washington), and Tobias Grosser (ETH Zurich)

Domain: Climate and Weather


Earth System Modelling on the Supercomputer Summit

Weather and climate prediction have made significant progress over the past decades. Despite this progress, there are still substantial shortcomings, including insufficient parallelism, limited scalability, portability limitations, and increasing complexity in the applications. Weather extremes, for example, are still difficult to predict with sufficient lead time, and predicting the impact of climate change at a regional or national level remains a big challenge. Improving these predictions promises important economic benefits. One of the key sources of model error is limited spatial and temporal resolution, and improving resolution translates into significant computational challenges. This makes it necessary to heavily restructure and optimise weather and climate models for the fastest available supercomputers. This mini-symposium gives an overview of work on porting and optimising four popular earth system models for the supercomputer Summit, including optimisation for the NVIDIA V100 GPUs as well as the IBM POWER9 host CPUs and the Mellanox interconnect. Being able to make good use of fat nodes like those on Summit will be highly relevant for many domains within the HPC community.

Organizer(s): Andreas Mueller (ECMWF)

Domain: Climate and Weather


Efficient Solution Methods for Large-Scale Nonlinear Macroeconomic Models

Computational models play a central role in understanding macroeconomic phenomena and in evaluating policy measures. As economic agents are forward-looking, solving models with nonlinear dynamics and/or heterogeneity across agents is typically very costly, with costs increasing sharply in the size of the state space. This minisymposium aims to bring together economists working on solution methods for nonlinear or heterogeneous-agent macroeconomic models, and to focus not only on these methods but also to discuss topics such as computational implementation, the effective use of computational resources, and general advances in the field of computational economics.

Organizer(s): Gregor Boehl (Goethe University Frankfurt)

Domain: Emerging Application Domains


Enriching Earth and Climate Science Simulations Using AI/ML

In this mini-symposium, we examine the increasing role that AI/ML methods are playing in the Earth and climate sciences, with speakers relating how these new tools can be used judiciously. From the atmospheric sciences side, we will see how ML can help understand the variability of the stratospheric polar vortex and thus enhance seasonal forecasts. In terms of solid earth science, we discuss the usefulness of ML in gaining insight into earthquake dynamics. We further cover how to provision ML services for the Earth system sciences as these tools gain more adoption within the scientific communities and need to be made available in a more systematic way. Finally, we peer into the future of both ML and Earth science, with thoughts on how sparsity in both areas will likely grow over time, and on the expected interplay of this sparsity with current and alternative computer architectures.

Organizer(s): Marie-Christine Sawley (Intel Inc.), and Michel Speiser (ICES Foundation)

Domain: Climate and Weather


Excellerat - Extreme CFD for Engineering Applications, Parts I, II

Computational fluid dynamics is one of the main drivers of exascale computing, both because of its high relevance in today's world (from nanofluidics up to planetary flows) and because of the inherent multiscale properties of turbulence. The numerical treatment is notoriously difficult due to the disparate scales in turbulence and the need to resolve local features. In addition, aspects such as the quantification of (internal or external) uncertainties are becoming a necessity, together with in-situ visualisation/postprocessing. The recent trend in numerical methods is towards high-fidelity methods (for instance continuous and discontinuous Galerkin) that are well suited to modern computers; however, relevant issues such as scaling, accelerators and heterogeneous systems, high-order meshing and error control are still far from solved when it comes to the largest-scale simulations, e.g. in automotive and aeronautical applications. This two-part minisymposium brings together eight experts from various international institutions (Europe, America, Japan) to discuss current and future issues of extreme-scale CFD in engineering applications, with a special focus on accurate CFD methods and their implementation on current HPC systems. The interaction between participants of the Horizon 2020 Centre of Excellence Excellerat and external experts will be particularly fruitful.

Organizer(s): Philipp Schlatter (KTH Royal Institute of Technology), and Niclas Jansson (KTH Royal Institute of Technology, RIKEN)

Domain: Engineering


Guilt-by-Association: Using Network Models to Decipher Complex Patterns

A common research problem in diverse domains is the extraction of combinatorial patterns from large datasets. For example, most human diseases arise from the interactions of multiple genetic factors and lifestyle choices; and weather events, such as tornadoes, manifest due to the complex interactions of a host of meteorological states. Data in such domains is rapidly being gathered, yet identification of these high-dimensional patterns remains difficult due to the combinatorial explosion of the number of groups to be considered. One practical approach leverages the concept of guilt-by-association and models the data as a network. These networks typically represent factors as nodes and relationships between pairs of factors as edges between the corresponding nodes. A key benefit of network modeling is that the computation of simple pair-wise relationships can yield knowledge about unknown high-dimensional relationships. Network models have been widely employed, yet several challenges impede their full potential. This minisymposium focuses on these challenges, discusses current state-of-the-art approaches, and presents promising directions for future research, such as the computation of three-way relationships.
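
As a concrete illustration of guilt-by-association, the sketch below (toy data, hypothetical factor names and an arbitrary threshold, not drawn from any of the presented projects) builds a network from pairwise Pearson correlations and reads candidate higher-dimensional groupings off its connected components:

```python
# Guilt-by-association sketch: factors become nodes; an edge links two
# factors whose pairwise Pearson correlation exceeds a threshold; the
# connected components then suggest candidate multi-factor groupings.
from itertools import combinations
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def build_network(profiles, threshold=0.9):
    """Edges between factors whose measurement profiles correlate strongly."""
    return {(u, v) for u, v in combinations(sorted(profiles), 2)
            if abs(pearson(profiles[u], profiles[v])) >= threshold}

def components(nodes, edges):
    """Connected components, found by depth-first search."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for n in nodes:
        if n not in seen:
            stack, comp = [n], set()
            while stack:
                m = stack.pop()
                if m not in comp:
                    comp.add(m)
                    stack.extend(adj[m] - comp)
            seen |= comp
            comps.append(comp)
    return comps

# Toy profiles: factors a, b, c vary together; d is unrelated.
profiles = {
    "a": [1.0, 2.0, 3.0, 4.0],
    "b": [2.1, 3.9, 6.2, 8.0],
    "c": [0.9, 2.2, 2.8, 4.1],
    "d": [5.0, 1.0, 4.0, 2.0],
}
edges = build_network(profiles)
modules = components(profiles.keys(), edges)
```

Only pairwise quantities are ever computed, yet the component {a, b, c} surfaces a three-way association; this is the scalability argument the abstract makes.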

Organizer(s): Sharlee Climer (University of Missouri - St. Louis), and Daniel Jacobson (Oak Ridge National Laboratory)

Domain: Computer Science and Applied Mathematics


Hardware Agnostic Programming Paradigms in HPC

The advent of new supercomputing architectures often challenges current best practices and programming paradigms, and can render state-of-the-art software outdated. Given the research and development man-years invested in scientific applications, this risk should make the HPC community consider more sustainable, long-term development strategies. A case in point is GPU development, where one must carefully decide which platform to target, e.g., CUDA, OpenCL, ROCm. On the other hand, there exist few examples of software that is agnostic to the underlying architecture, offering the possibility of a single code dealing with multiple architectures. Currently, scientists and engineers often develop their applications for one very specific architecture, spending valuable time optimizing and tailoring their codes. Furthermore, as the code moves to different platforms with different accelerators, it branches into multiple development streams, each dealing with its own platform-specific issues that are solved with diverging techniques. Thus, it is critical to open a wide discussion on frameworks, inherently parallel programming languages, compilers, platforms, and combinations thereof that will help the community choose a development pipeline, leading to HPC models that favor flexible, versatile and sustainable solutions.

Organizer(s): Christos Kotsalos (University of Geneva), and Jonas Latt (University of Geneva)

Domain: Computer Science and Applied Mathematics


High Performance Scientific Computing in Aquatic Research

Lakes form an integral component of ecosystems and our communities, with a significant portion of the Swiss population living in their close proximity. A better understanding of internal lake processes can be obtained through the development of more accurate computational models and the use of newly available high-frequency sensor data. Enabled by powerful computational resources, researchers can now test and evaluate a multitude of model paradigms, calibrate models and infer quantities governing the physical and ecological dynamical processes, and study the underlying fine-scale mechanisms. These methodologies can be coupled with state-of-the-art data assimilation techniques, allowing statistical inference of inaccessible quantities of interest and accurate forecasting for early-warning systems, including quantification of the associated uncertainty. The goal of this minisymposium is to foster exchange of recent developments and methodologies pertaining to high performance computing in aquatic research, with a large focus on lake phenomena. These discussions aim to help scientists better understand complex processes that are relevant to improving the quality of lake models and predictive frameworks.

Organizer(s): Jonas Šukys (Eawag), and Artur Safin (Eawag)

Domain: Emerging Application Domains


High-Performance Simulations of Fluid Dynamics with Uncertain Wind for Robust Design in Civil Engineering

Uncertainty in the input data is a reality of engineering practice, albeit not always modelled in simulations. Accurate quantification of the resulting output uncertainty yields finer control on the robustness and cost of designs. However, this quantification requires exploring the parameter space – usually with numerous simulations – which may incur a prohibitive cost, especially for involved studies such as optimal design in fluid dynamics. Parallel computing can make such studies tractable, provided suitable methods are used to leverage it. This is an active field of research especially relevant for fluid dynamics, whose simulations are notoriously expensive and intricate, while being critical to many applications: aeronautics, civil engineering, meteorology, etc. Presented here are novel research developments from ExaQUte, an EU-H2020 project developing HPC methods for robust engineering design. The driving application is the optimisation of building shapes for civil engineering under uncertain wind loads. Therefore, this mini-symposium encompasses unsteady fluid problems, adaptive meshing, uncertainty quantification, robust shape optimisation and more. The methods discussed here are designed to leverage parallelisation with a modern framework for current and future distributed computing environments.
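
The exploration of the parameter space described above can be sketched with plain Monte Carlo sampling. The quadratic drag-load surrogate below is a hypothetical stand-in for an expensive CFD simulation, not part of ExaQUte; all parameter values are illustrative:

```python
# Monte Carlo forward propagation of input uncertainty (toy model):
# wind speed is uncertain, and each sample requires one "simulation" --
# here a cheap quasi-static drag-load formula standing in for a CFD run.
import random
import statistics

def structural_load(wind_speed, drag_coeff=1.2, area=100.0, rho=1.225):
    """Toy surrogate: drag load F = 0.5 * rho * Cd * A * U^2 (in newtons)."""
    return 0.5 * rho * drag_coeff * area * wind_speed ** 2

def monte_carlo_uq(n_samples, mean_speed=25.0, std_speed=3.0, seed=0):
    """Samples are independent, so they parallelise trivially across nodes."""
    rng = random.Random(seed)
    loads = [structural_load(rng.gauss(mean_speed, std_speed))
             for _ in range(n_samples)]
    return statistics.mean(loads), statistics.stdev(loads)

mean_load, std_load = monte_carlo_uq(10_000)
```

The output standard deviation is exactly the quantity a robust-design loop trades off against mean performance; the cost problem the abstract describes arises because each sample is, in practice, a full unsteady simulation rather than a one-line formula.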

Organizer(s): Quentin Ayoul-Guilmard (EPFL), Riccardo Rossi (CIMNE, Polytechnic University of Catalonia), and Andreas Apostolatos (TU Munich)

Domain: Engineering


High-Resolution and Large-Scale Numerical Simulations for Fractured Porous Media, Parts I, II

Fractures are ubiquitous at different scales in subsurface regions and can strongly dominate the hydraulic and mechanical response of such regions. Understanding their distribution, connectivity, initiation, and propagation is fundamental for several applications, such as geothermal energy production, hydrocarbon exploration, hydraulic stimulation and induced-seismicity assessment, and CO2 storage. Modelling realistic fracture networks introduces several challenges, and several methods have been introduced in the literature to handle the multiscale and multiphysics phenomena underlying these geophysical applications, including phase-field models for fracture initiation and propagation, and hydro-mechanical and thermo-hydro-mechano-chemical coupling for fractured poroelastic media. Accurate and realistic discretization methods for fractured porous media hence give rise to large-scale problems for which modern high-performance computing architectures, such as hybrid GPU-CPU supercomputers, are necessary for efficient simulation. The goal of this mini-symposium is to bring together applied researchers and computational scientists working on the simulation of fractured porous media, with a particular focus on geoscientific applications. The presentations will focus on the major challenges of the field and the most recent developments in HPC and large-scale software.

Organizer(s): Marco Favino (University of Lausanne), Maria Giuseppina Chiara Nestola (Università della Svizzera italiana, ETH Zurich), and Dimitrios Karvounis (ETH Zurich)

Domain: Computer Science and Applied Mathematics


In-Silico Medicine

Computer models and HPC simulations are becoming a very important approach to assist clinicians in treating diseases and devising new therapies. The goal of this minisymposium is to investigate several challenges associated with this approach. We propose a selection of talks inspired by the problems raised in the H2020 project INSIST, for instance the estimation of success scores of a medical intervention through modeling (e.g. thrombolysis and thrombectomy processes in the treatment of stroke, or the impact of the lack of oxygen in the brain), and the way to build a virtual population of patients in order to propose new treatments and avoid in-vivo and in-vitro experiments. The problem of validation and uncertainty quantification of the numerical models will also be considered.

Organizer(s): Bastien Chopard (University of Geneva)

Domain: Life Sciences


Machine Learning for Electronic Correlation

The theoretical design of materials and molecules requires a deep understanding of the underlying quantum mechanics, and therefore of the electronic correlation of the molecules and materials considered. However, traditional methods for the treatment of electronic correlation suffer either from an extremely fast scaling of computational complexity with system size, which limits their applicability, or from a lack of accuracy. And even when such methods are available, the materials-design problem remains extremely challenging. In recent years, improvements in algorithms, increases in computational power, and the large amount of data produced by ab-initio methods have led to the rise of machine learning methods for the treatment of correlated electronic systems. A wide variety of approaches, ranging from force fields, through the learning of the Hamiltonian or of various electronic/system properties, to variational Monte Carlo approaches for the electronic wavefunction and, recently, even inverse design methods, have shown a lot of promise. In this mini-symposium, experts will provide some perspective on machine learning approaches to these different facets of the electronic problem.

Organizer(s): Miguel Marques (Martin-Luther Universität Halle-Wittenberg), Jonathan Schmidt (Martin-Luther-Universität Halle-Wittenberg), and Silvana Botti (Friedrich-Schiller-Universität Jena)

Domain: Chemistry and Materials


Modeling Evolution in the Era of Big Data

The availability of genomic data for non-model organisms can be combined with phenotypic data to address fundamental questions in evolutionary biology. This will allow us to test the dynamics of phenotypic and genomic changes during species diversification and to better understand i) what factors affect both the micro- and macro-evolutionary scales, and ii) whether these factors are similar and comparable. The analysis of large-scale genomic and phenotypic data within a phylogenetic context is still in its infancy because of the computational burden of performing such analyses. A large part of the current research focus is on developing novel approaches that enable the testing of hypotheses on such data sets.

Organizer(s): Nicolas Salamin (University of Lausanne)

Domain: Life Sciences


Multiprecision Numerics in Scientific High Performance Computing

Recently, hardware manufacturers have been responding to an increasing demand for low-precision functionality, such as fp16, by integrating special low-precision functional units, e.g., NVIDIA Tensor Cores. These, however, remain unused even for compute-intensive applications if high precision is employed for all arithmetic operations. At the same time, communication-intensive applications suffer from the memory bandwidth of architectures growing at a much slower pace than the arithmetic performance. In both cases, a promising strategy is to abandon the high-precision standard (typically fp64) and employ lower or non-standard precision for arithmetic computations or memory operations whenever possible. While employing formats other than the working precision can yield attractive performance improvements, it also requires careful consideration of the numerical effects. On the other end of the spectrum, precision formats with higher accuracy than the hardware-supported fp64 can be effective in improving the robustness and accuracy of numerical methods. With this breakout minisymposium, we aim to create a platform where those working with multiprecision, or interested in using multiprecision technology, come together and share their expertise and experience.
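
One classical pattern behind the strategy sketched above is mixed-precision iterative refinement. The example below is a generic illustration, not tied to any speaker's software: it emulates fp32 arithmetic in Python by rounding every operation, then recovers fp64-level accuracy by accumulating residuals in full precision.

```python
# Mixed-precision iterative refinement on a tiny 2x2 system:
# the (cheap) solves are done entirely in emulated fp32, while the
# (cheap to compute, accuracy-critical) residual uses native fp64.
import struct

def fp32(x):
    """Round an fp64 value to the nearest fp32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

def solve2x2_fp32(A, b):
    """Gaussian elimination with every operation rounded to fp32."""
    a11, a12 = fp32(A[0][0]), fp32(A[0][1])
    a21, a22 = fp32(A[1][0]), fp32(A[1][1])
    b1, b2 = fp32(b[0]), fp32(b[1])
    m = fp32(a21 / a11)                      # elimination multiplier
    a22p = fp32(a22 - fp32(m * a12))
    b2p = fp32(b2 - fp32(m * b1))
    x2 = fp32(b2p / a22p)
    x1 = fp32(fp32(b1 - fp32(a12 * x2)) / a11)
    return [x1, x2]

def refine(A, b, iters=5):
    """Low-precision solves, high-precision residuals."""
    x = solve2x2_fp32(A, b)
    for _ in range(iters):
        # residual computed in full fp64
        r = [b[i] - A[i][0] * x[0] - A[i][1] * x[1] for i in range(2)]
        d = solve2x2_fp32(A, r)              # cheap correction solve
        x = [x[0] + d[0], x[1] + d[1]]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = refine(A, b)   # exact solution: (1/11, 7/11)
```

For a well-conditioned system like this one, each refinement step shrinks the error by roughly the fp32 unit roundoff, so a handful of low-precision solves reaches fp64 accuracy; this is the kind of trade-off the minisymposium examines at scale.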

Organizer(s): Hartwig Anzt (Karlsruhe Institute of Technology, University of Tennessee), Erin Carson (Charles University), and Ulrike Meier Yang (Lawrence Livermore National Laboratory)

Domain: Computer Science and Applied Mathematics


Multiscale Modeling of Materials, Parts I, II

This minisymposium will focus on the tools and models required to accurately model material behavior under various mechanical stimuli. Continuum-scale models traditionally have difficulty accounting for specific mesoscale deformation behavior due to the larger length scales (tens to hundreds of microns) at which these models are applicable. Accurately modeling fracture in higher-length-scale models is limited in similar ways; the sub-scale features of interest, such as cracks and/or voids, and their interactions along boundaries, cannot be resolved. Furthermore, when complex and extreme loading conditions are considered, the active deformation mechanisms can change, impacting overall material strength and damage evolution. Hence, current state-of-the-art models, particularly those active at larger length scales, cannot accurately predict material behavior, especially under dynamic loading conditions. To get around these issues, many multiscale approaches have been developed in which information is ‘passed’ from lower length scales up to higher length scales. While this approach is reasonable, what information is needed, how different models on different length scales connect, and the fidelity of these connections are still not clear. This minisymposium aims to address these issues by bringing together modelers who have been working on modeling materials across scales.

Organizer(s): Saryu Fensin (Los Alamos National Laboratory), and Abigail Hunter (Los Alamos National Laboratory)

Domain: Chemistry and Materials


Peering HPC and Data-Driven Science: Scalable Cross-Facility and System Workflows

Scientific campaigns are increasingly tightening the feedback and validation loop between simulation and observational data. The linking of experimental and observational data from empirically driven facilities with computational facilities is giving rise to cross-facility workflows, and to the need to peer traditional modeling and simulation with high-performance data science. Dramatic increases in luminosity and data-collection capabilities are also forcing HPC centers to support modes of operation such as interactivity and adaptive computation. As we scale up such pipelines for scientific discovery, cross-facility and intra-facility workflows require each participating facility and system to overcome interface hurdles in order to work seamlessly in an end-to-end manner. We bring together the perspectives of disparate collaborating facilities and experts, explore the needs of HPC and data-intensive science (including observational science), and aim to offer a roadmap for how cross-facility collaborations can be made more effective by peering HPC and data-driven science. The session consists of talks by CSCS, PSI, and ORNL, and concludes with a panel. We summarize each facility's perspective and pose leading questions to our panel to invite targeted responses.

Organizer(s): Cerlane Leong (ETH Zurich / CSCS), Alun Ashton (Paul Scherrer Institute), Sadaf Alam (ETH Zurich / CSCS), Arjun Shankar (Oak Ridge National Laboratory), and Jack Wells (Oak Ridge National Laboratory)

Domain: Computer Science and Applied Mathematics


Performance Optimisation and Productivity for EU HPC Centres of Excellence (and all other European Parallel Application Developers Preparing for Exascale)

While parallel applications in all scientific and engineering domains have always been prone to execution inefficiencies that limit their performance and scalability, exascale computer systems comprising millions of heterogeneous processors/cores present a considerable imminent challenge for academia and industry alike. Ten HPC Centres of Excellence are currently funded by the EU Horizon2020 programme to prepare applications for forthcoming exascale computer systems [https://www.focus-coe.eu/index.php/centres-of-excellence-in-hpc-applications/]. The transversal Performance Optimisation and Productivity Centre of Excellence (POP CoE) [https://www.pop-coe.eu] supports the others, along with the wider European community of application developers, with impartial assessments of parallel execution efficiency and scaling, based on a solid methodology for analysing measurements with open-source performance tools. This minisymposium introduces the POP services and methodology, summarising results provided to date for over 200 customers, with particular focus on those from the HPC CoEs. Engagements with the HPC CoEs will be reviewed in the introductory presentation, covering climate and weather (ESiWACE), chemistry and materials (BioExcel/MaX), and computational fluid dynamics in engineering (EXCELLERAT). The CoEs for Computational Biomedicine (CompBioMed [https://www.compbiomed.eu/]), Solid Earth (ChEESE [https://cheese-coe.eu/]) and Energy-oriented applications (EoCoE [https://www.eocoe.eu/]) will then report their experience of collaborating with POP in preparing their flagship codes for exascale.

Organizer(s): Marta Garcia-Gasulla (Barcelona Supercomputing Center), and Brian Wylie (Jülich Supercomputing Centre, Forschungszentrum Jülich)

Domain: Computer Science and Applied Mathematics


Productivity, Performance, and Portability for Scientific Computing with Continuous Integration, Containers, and Build Systems, Parts I, II

Nowadays, the complexity of scientific computing, in conjunction with the complexity of the hardware, has become significant. On one side, scientific software applications require several dependencies for their compilation. On the other side, users and developers of scientific applications target a wide range of diverse computing platforms, from their laptops to supercomputers. This complexity poses challenges during the entire workflow of the applications. We can distinguish at least five critical areas:

1) applications building, including all dependencies;
2) testing during the application development with Continuous Integration (CI) and automated build and testing techniques;
3) deployment of the applications via Continuous Deployment (CD) techniques;
4) packaging of the applications with dependencies for easy user-level installation and productivity;
5) software performance portability.

The challenge in High Performance Computing is to develop techniques that maximize three characteristics of software applications: productivity, performance, and portability across the five aforementioned areas. In this minisymposium, researchers and developers will discuss their successes and failures concerning this challenge. The minisymposium is split into two sessions of two hours each. The first session will cover build tools and CI/CD techniques, while the second session focuses on packaging applications and performance portability.

Organizer(s): Alfio Lazzaro (HPE), Tiziano Mueller (University of Zurich), and Nina Mujkanovic (HPE)

Domain: Computer Science and Applied Mathematics


Scalable Machine Learning in Economics and Finance

With ever-increasing data sets on the one hand, and sophisticated models accounting for the substantial heterogeneity observed in the real world on the other, researchers in economics and finance have started to leverage recent advances from machine learning to study questions of unprecedented complexity. This minisymposium brings together researchers from different application fields of finance and economics who develop and use scalable approaches from machine learning.

Organizer(s): Simon Scheidegger (University of Lausanne), and Felix Kubler (University of Zurich)

Domain: Emerging Application Domains


Shrinking the Gap between HPC and Emerging Domains: Can Optimization Serve as the Bridge?

Many traditional optimization problems have been shown to be computationally intractable or at least extremely difficult. In many instances, the rise of HPC has allowed us to push the realm of the possible well beyond previous limits. At the same time, researchers in the field of quantum computing (QC) are exploring potential gains to be had through the integration of optimization and QC. To date, however, there has not been a large amount of overlap between the fields. (Note that we would be remiss not to include artificial intelligence as part of the discussion, given its historic link to the field of optimization.) In this minisymposium, four talks will explore whether or not optimization, as a domain and a technique, can be leveraged to bring HPC, AI and quantum computing closer together. Can gains be had by leveraging aspects from each field, as well as by enhancing algorithmic results? If so, there is the potential to apply both AI and QC to a broader set of scientific application domains than at present. Last but not least, the minisymposium will discuss appropriate benchmarks for these new paradigms and whether they are application-specific.

Organizer(s): Sarah Powers (Oak Ridge National Laboratory)

Domain: Computer Science and Applied Mathematics


Swiss Chapter Women in HPC

WHPC is the only international organization working to improve equity, diversity and inclusion in High Performance Computing. The chapter is being formed by senior professional scientists and engineers working in Switzerland, representing academia, large research centres, the national HPC centre and the IT industry. The mission of WHPC is to "promote, build and leverage a diverse and inclusive HPC workforce by enabling and energising those in the HPC community to increase the participation of women and highlight their contribution to the success of supercomputing. To ensure that women are treated fairly and have equal opportunities to succeed in their chosen HPC career". The minisymposium will comprise a short introduction and three talks, followed by a panel discussion with all the speakers and organizers on actions, existing programmes and emerging proposals to enable more diversity and inclusivity.

Organizer(s): Marie-Christine Sawley (Intel Inc.), Sadaf Alam (ETH Zurich / CSCS), and Maria Girone (CERN)

Domain: Emerging Application Domains


Tape in the Cloud: New Paradigms for High-Latency Storage in Distributed Scientific Networks

This mini-symposium focuses on the challenges, opportunities and likely evolution of scientific data storage over the next decade. Since the 1990s, the particle physics community has been at the forefront of distributed data storage, with projects such as the Worldwide LHC Computing Grid (WLCG). The recently published update to the European Strategy for Particle Physics describes how the HEP storage and computing landscape will evolve in the next decade. Unlike the situation twenty years ago, when everything had to be built from scratch, commercial solutions for scientific data storage are now ubiquitous, but not without their drawbacks. Another change in the landscape is that other sciences besides HEP have become much more data-driven and wish to benefit from a collaborative approach to data management infrastructure and organisation. The mini-symposium is based around the following themes: (a) the exponential increase in the scale of data taking and the gap between storage needs and available resources; (b) the emphasis on greater collaboration between different scientific disciplines and the imperative of shared infrastructures (ESCAPE collaboration and European Open Science Cloud); (c) new ways of organising, e.g. federated data storage, public/private partnerships; (d) the latest research in optimising workflows which access high-latency archival storage (tape).

Organizer(s): Michael Davis (CERN)

Domain: Physics


Time Dependent Superfluid Density Functional Theory and Supercomputing: Latest Developments and Challenges

Superfluidity is a generic feature of many quantum systems at low temperatures. It has been experimentally confirmed in condensed-matter systems such as liquid 3He and 4He, in nuclear systems including nuclei and neutron stars, and in both fermionic and bosonic cold atoms in traps. Superfluids exhibit fascinating dynamical properties. Presently, the dynamics can be modelled microscopically via a framework based on time-dependent density functional theory (TDDFT). Superfluid TDDFT is applicable to a wide range of physical processes involving superfluidity, including simulations of nuclear reactions (fission/fusion), modeling of the superfluid interior of neutron stars, and the dynamics of ultracold atomic gases (quantum turbulence, dynamics of topological excitations). Since superfluidity is an emergent phenomenon, a large number of quantum particles is needed in order to simulate it correctly. This places high numerical and technical demands on evaluating superfluid TDDFT with classical computers. This minisymposium will present the most relevant applications of the TDDFT framework achieved with the help of computer systems like Summit (ORNL, USA) and Piz Daint (CSCS, Switzerland), together with the presently utilized numerical and technical solutions. Challenges for future exascale systems in the context of modelling superfluidity/superconductivity will also be highlighted.

Organizer(s): Gabriel Wlazłowski (Warsaw University of Technology, University of Washington)

Domain: Physics


Toward Semantic Integration of Biological Resources

One major potential and promise of big data analysis lies in the simultaneous mining and integration of multiple heterogeneous sources of data. In life sciences, recent years have seen the increasing availability of biological and bioinformatic databases using the Resource Description Framework (RDF), which facilitates automatic data processing and interoperability. However, there are major stumbling blocks on the path to mass adoption. The complexity of general-purpose models, inconsistent data models, and low usability are some of the challenges that hamper the use of RDF resources by the bulk of biological researchers. This mini-symposium brings together specialists on semantic data integration in life science and will provide a forum to explore innovative solutions to fulfil the potential of big data integration.

Organizer(s): Christophe Dessimoz (University of Lausanne, University College London), Tarcisio Mendes de Farias (Swiss Institute of Bioinformatics), and Kurt Stockinger (Zurich University of Applied Sciences)

Domain: Life Sciences


Towards Exascale Computing in Kinetic Simulations of Magnetic Fusion Plasmas. Part I - Code Developments

Magnetic fusion plasmas are subject to a plethora of collective effects, such as electromagnetic waves and instabilities, spanning multiple time- and length-scales. The low collisionality of reactor core plasmas makes a kinetic description mandatory for an accurate representation. Ions and electrons have very different dynamics, and therefore the problem is intrinsically multi-scale, both in space and time. As we go towards the plasma edge, collisionality increases and the relative fluctuation amplitudes become close to unity. Due to the extreme challenges at hand, fusion plasma computations have always exploited the largest HPC resources available at any point in time, and it is anticipated that this will remain so in the foreseeable future. Adapting the codes to the ever-changing landscape of new and emerging computer architectures is a non-trivial challenge. In this Part I of the minisymposium, the emphasis will be on code developments and software advances, in particular for ensuring the efficient exploitation of heterogeneous architectures.

Organizer(s): Stephan Brunner (EPFL, SPC), Laurent Villard (EPFL, SPC), and Claudio Gheller (EPFL, SPC)

Domain: Physics


Towards Exascale Computing in Kinetic Simulations of Magnetic Fusion Plasmas. Part II - Core and Edge

Magnetic fusion plasmas are subject to a plethora of collective effects, such as electromagnetic waves and instabilities, spanning multiple time- and length-scales. The low collisionality of reactor core plasmas makes a kinetic description mandatory for an accurate representation. Ions and electrons have very different dynamics, and therefore the problem is intrinsically multi-scale, both in space and time. As we go towards the plasma edge, collisionality increases and the relative fluctuation amplitudes become close to unity. Core and edge plasmas present very different physical conditions and, until now, these two regions have typically been treated separately. But given that the two regions actually interact strongly with each other, the challenge in achieving realistic simulations is to describe them in a unified framework. This Part II of the minisymposium will specifically address this issue. In particular, the pros and cons of treating this problem with different numerical approaches will be covered.

Organizer(s): Stephan Brunner (EPFL, SPC), Laurent Villard (EPFL, SPC), and Claudio Gheller (EPFL, SPC)

Domain: Physics


Towards Exascale Computing in Kinetic Simulations of Magnetic Fusion Plasmas. Part III - Advanced Numerical Methods and Algorithms

Magnetic fusion plasmas are subject to a plethora of collective effects, such as electromagnetic waves and instabilities, spanning multiple time- and length-scales. The low collisionality of reactor core plasmas makes a kinetic description mandatory for an accurate representation. Ions and electrons have very different dynamics, and therefore the problem is intrinsically multi-scale, both in space and time. As we go towards the plasma edge, collisionality increases and the relative fluctuation amplitudes become close to unity. In this Part III of the minisymposium, we shall focus on the fact that merely porting and optimizing existing codes on new-generation computers will not be sufficient to make a global, full fusion reactor (‘from the magnetic axis to the wall’) description tractable. Ongoing developments in new, innovative mathematical representations, discretizations and algorithms are just as critical. Challenges related to transitioning from reduced to more accurate descriptions where needed will also be addressed, in particular considering the possible transition from fluid to gyrokinetic models, as well as from gyrokinetic to fully kinetic ones.

Organizer(s): Stephan Brunner (EPFL, SPC), Laurent Villard (EPFL, SPC), and Claudio Gheller (EPFL, SPC)

Domain: Physics


Towards Kilometer-Scale Global Storm-Resolving Weather and Climate Simulations, Parts I, II

The predictive skill of weather and climate models has significantly improved over the past few decades, thanks to a huge increase in resolution facilitated by increased supercomputing capacity. A million-fold increase in computational power has allowed the resolution of operational global weather models to increase from 500 km to 10 km since 1980, for example. Further increases towards 1 km resolution would deliver significant improvements in the skill of weather and climate simulations. However, these simulations are still not viable for operational predictions due to the vast increase in computational cost. The computational speed of global kilometer-scale simulations on today’s supercomputers is below a practical level by at least two orders of magnitude. Furthermore, taking advantage of future exascale supercomputers with heterogeneous architectures will require a substantial rethink of traditional coding paradigms.
This two-part minisymposium will bring together researchers on global kilometer-scale atmosphere and ocean models from around the world. Speakers will discuss both the scientific and computational challenges of 1 km resolution. They will present the state-of-the-art of their respective simulation systems and their roadmaps for the future. The challenge of kilometer-scale global simulations can be met, but only by the synthesis of ideas across Earth-System science and supercomputing.
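
The resolution and cost figures above can be connected by a back-of-envelope rule of thumb: refining the grid multiplies the cell count and, via the CFL condition, shortens the time step, so compute cost grows roughly with the fourth power of the refinement ratio. The exponent is an illustrative assumption (consistent with the million-fold figure quoted above for 500 km to 10 km), not a statement by the organizers:

```python
# Back-of-envelope cost scaling for grid refinement in a global model,
# assuming cost ~ (dx_old / dx_new) ** 4 (horizontal resolution in two
# dimensions, vertical resolution, and the CFL-limited time step).

def cost_factor(dx_old_km, dx_new_km, exponent=4):
    """Relative compute cost of refining from dx_old to dx_new."""
    return (dx_old_km / dx_new_km) ** exponent

# 1980s global weather models (~500 km) to today's (~10 km):
historical = cost_factor(500, 10)   # ~6.25 million-fold
# Today's 10 km operational models to 1 km storm-resolving runs:
next_step = cost_factor(10, 1)      # a further ~10,000-fold
```

Under this assumption, the step from 10 km to 1 km alone costs four orders of magnitude, which is why raw hardware growth must be combined with the algorithmic and software rethinking the speakers will discuss.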

Organizer(s): Sam Hatfield (ECMWF), William Sawyer (ETH Zurich / CSCS), Oliver Fuhrer (Vulcan Inc.), Peter Düben (ECMWF), Joachim Biercamp (DKRZ), Chris Bretherton (Vulcan Inc.), and Xavier Lapillonne (MeteoSwiss)

Domain: Climate and Weather