April 03 – April 04, 2025, Kaiserslautern, Germany 
A Workshop co-organized by National HPC Center South-West (NHR@SW)
We are happy to announce the 27th EuroAD Workshop on Algorithmic Differentiation. It will take place at the RPTU Kaiserslautern-Landau from 03.04.2025 at 9:00 to 04.04.2025 at 13:00. The 1.5-day workshop provides a forum for the presentation and discussion of research topics, results, and applications of AD. The workshop is informal and can also be used to present work-in-progress topics. Everybody is welcome and we especially encourage PhD students and people new to the field to attend and present their work and ideas. Remote attendance will also be possible, but we prefer local attendance.
Registration
The workshop fee is 70 EUR. It covers lunch on both days, the workshop dinner on 03.04., and general catering. To register, please send an informal email to max.sagebaum@scicomp.uni-kl.de. We kindly ask you to register as soon as possible, but no later than 20.03.2025. Please include
- your name, email contact, and whether you will attend locally or remotely, and,
- if you intend to present, the title and abstract of your talk.
Updates
- 24.03.2025 – Final program, updated location for breaks
Agenda
Preliminary agenda, subject to change. Slots for talks are either 20 or 30 minutes.
Thursday, April 03 | |
---|---|
09:00 – 09:30 | Registration (Room 32-349) |
09:30 – 10:00 | Welcome and Morning Coffee (Room 32-349) |
10:00 – 12:30 | Talks (Room 42-105) |

10:00 – The GradBench benchmark suite for Automatic Differentiation
Troels Henriksen, DIKU, University of Copenhagen
Abstract: Shared collections of benchmarks are useful community resources, as they allow qualitative and quantitative comparisons between different tools and approaches. For several years, the ADBench suite fulfilled such a role in the AD community, and many new tools and ideas were demonstrated by implementing one or more of the problem specifications from ADBench. Unfortunately, ADBench is no longer maintained, and so the AD community is at risk of losing a useful community resource. In this talk, I will introduce GradBench, a new polyglot AD benchmark suite created by Sam Estep of CMU, which is already close to achieving full parity with ADBench and has in several ways already moved beyond it. I will describe how GradBench’s decoupled design makes it easy to add new benchmark implementations, even for programming languages or tools with exotic dependencies. I will discuss some of the principles behind its design and maintenance, and call on the AD community to submit implementations using their own favourite tool, or to improve the ones that are already present. I will also discuss some of the challenges of bootstrapping a large collection of programs using multiple languages and AD libraries, where you can hardly expect to be an expert on them all.

10:30 – Jacobian sparsity detection in Tapenade
Laurent Hascoet, INRIA
Abstract: Jacobian matrices, Hessian tensors, and other derivative objects can be huge and require care to compute efficiently. One classical solution is to exploit their sparsity. This requires precomputation of a so-called sparsity pattern, a collection of boolean information. The sparsity pattern can be precomputed by a special kind of AD, and this is a classical ingredient of several AD tools, mostly of the “Overloading-Based” sort. We discuss the implementation of a sparsity detection mode in Tapenade. After recalling the principle of a sparsity AD mode, we sketch the very few changes needed in Tapenade for a running sparsity mode. We then discuss some significant improvements to this plain sparsity mode, exploiting specific features of “Source-Transformation” AD tools, namely global data-flow analysis and flow-graph restructuring.

11:00 – Parametric Sensitivities of a Wind-driven Baroclinic Ocean using Neural Surrogates
Sri Hari Krishna Narayanan, Argonne National Laboratory
Abstract:

11:30 – Facilitating online training in Fortran-based climate models
Joe Wallwork, Institute of Computing for Climate Science, University of Cambridge
Abstract: Machine learning (ML) based techniques are becoming increasingly popular in numerical simulation, bringing potentially significant gains in efficiency. Whilst popular ML tools such as PyTorch are written in Python, the climate modelling community continues to make heavy use of Fortran, which lacks native ML support, for its scientific models. This presents an issue for users because of the challenges of language interoperation. One approach to make use of ML models in Fortran is to use a Fortran interface to PyTorch, such as FTorch. FTorch has supported “offline training” for some time, whereby models are designed and trained in Python, saved to file, and then loaded to run inference in Fortran. In this talk, we will be sharing the latest developments to support “online training”, where the training is done while the Fortran model is running. Online training brings the benefits of avoiding unnecessary archiving of large volumes of training data and of being able to define cost functions in terms of errors involving downstream model code. The talk will detail our approach for enabling online training in FTorch by exposing PyTorch’s autograd module, providing a seamless interface which should be familiar to users of both Fortran and PyTorch. The new functionality will be demonstrated in a climate modelling context.

12:00 – Combining Bayesian Probing and Bloom Filters to Determine Jacobian and Hessian Sparsity Patterns
Paul Hovland, Argonne National Laboratory
Abstract: Many techniques for the efficient computation of sparse derivative matrices (Jacobians and Hessians) require knowing the sparsity pattern of the matrix. One of the two main methods for determining the sparsity pattern relies on propagating bit vectors through a computation. In the naive version of bit vector probing, each bit represents one independent variable (column of the derivative matrix). However, Griewank and Mitev showed that one can determine the sparsity pattern with fewer bit probes by using carefully selected probes and Bayes’ theorem. We previously demonstrated that one can also reduce the number of bit probes by using randomized probes based on Bloom filters. In this work, we combine Bayesian probing and Bloom filter probing to overcome the shortcomings of each method in isolation. We also examine how to use symmetry in determining the sparsity pattern of Hessian matrices and how to estimate the number of nonzeros per row.
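As background for the two sparsity-detection talks above, the following Python sketch illustrates naive bit-vector probing: each independent variable seeds one bit, every operation propagates the union of its operands' bit sets, and the output bit sets encode the Jacobian sparsity pattern. This is a generic illustration, not code from Tapenade or from the talks; the closing comments indicate how a Bloom-filter variant compresses the probes.

```python
# Minimal sketch of bit-vector sparsity probing (illustrative only; not
# code from Tapenade or from the talks). Each value carries the set of
# independent variables it depends on, encoded as bits of a Python integer.

class Probe:
    def __init__(self, bits):
        self.bits = bits  # integer used as a bit set

    # Any arithmetic operation makes the result depend on both operands.
    def __add__(self, other): return Probe(self.bits | other.bits)
    def __sub__(self, other): return Probe(self.bits | other.bits)
    def __mul__(self, other): return Probe(self.bits | other.bits)

def jacobian_sparsity(f, n):
    """Naive probing: one bit per independent variable (column)."""
    ys = f([Probe(1 << j) for j in range(n)])
    # Row i has a nonzero in column j iff bit j of ys[i].bits is set.
    return [[(y.bits >> j) & 1 for j in range(n)] for y in ys]

# Example: f(x) = (x0*x1, x1 + x2, x2) has the pattern printed below.
f = lambda x: [x[0] * x[1], x[1] + x[2], x[2]]
print(jacobian_sparsity(f, 3))  # [[1, 1, 0], [0, 1, 1], [0, 0, 1]]

# Bloom-filter variant (the direction of the 12:00 talk): instead of one
# bit per column, hash column j into k bits of a width-w vector with w < n:
#   seed(j) = OR of (1 << hash((j, h)) % w) for h in range(k)
# Column j is then reported nonzero in row i iff all k of its bits are set
# in the propagated value -- possibly yielding false positives (extra
# pattern entries) but never false negatives.
```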
12:30 – 13:30 | Lunch Break (Room 32-349) |
13:45 – 15:45 | Talks (Room 42-105) |

13:45 – Interactive Theorem Prover Lean as an AD Laboratory
Tomas Skrivan
Abstract:

14:15 – 2-D Shock AD
Sasha Fleming, STCE, RWTH Aachen
Abstract: Conservation laws (and other types of PDEs, more generally) have very few strong solutions, but admit many weak solutions that contain discontinuities. A variational calculus using objects called “Generalized Tangent Vectors” (GTVs) was developed in the late 1990s to study these discontinuous solutions. This calculus has been successfully used to compute GTVs for solutions of one-dimensional scalar conservation laws via a shock-tracking time-stepping solver. This talk will show that GTVs exist in two dimensions and that their behavior is analogous to the one-dimensional case. A technique for computing GTVs for steady-state solutions of two-dimensional systems of conservation laws will also be presented.

14:45 – Matrix-Free Jacobian Chaining on Tape-Based AD Tools
Simon Märtens, STCE, RWTH Aachen
Abstract: Representing a program as a sequential composition of differentiable subprograms leads to the well-known Jacobian Chain Bracketing Problem, which asks for an optimal bracketing of the Jacobian chain. We can solve this problem efficiently via dynamic programming. A matrix-free variant of this problem can be formulated when, instead of the elemental Jacobians, we only have access to the corresponding elemental tangent and adjoint models of the subprograms. For tape-based AD tools, these models can be represented as the forward and reverse interpretation of the underlying tape. By cutting the tape into chunks, we can transform the interpretation of the entire tape into a matrix-free Jacobian Chain Bracketing Problem, which can be solved and executed directly on the tape. This can reduce the FLOPs needed to accumulate the entire Jacobian and also allows us to exploit more parallelism during the interpretation.

15:15 – Algorithmic Differentiation of the CFD Solver CODA
Adam Büchner, DLR – Institute of Software Methods for Product Virtualization
Abstract: The new-generation computational fluid dynamics (CFD) software CODA is being developed as part of a collaboration between the French Aerospace Lab ONERA, the German Aerospace Center (DLR), Airbus, and their European research partners. A reverse-mode AD solution for CODA will be presented that retains the HPC capabilities of the primal solver. Memory and CPU performance observations will be discussed for selected aerodynamic applications.
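For readers new to the bracketing problem mentioned in the 14:45 talk: for dense elemental Jacobians, the optimal bracketing can be found with the classic matrix-chain dynamic program. The following Python sketch is purely illustrative (it is not code from the talk and ignores the matrix-free and tape aspects):

```python
# Illustrative dynamic program for the dense Jacobian Chain Bracketing
# Problem (not code from the talk): choose how to parenthesize the product
# J = J0 @ J1 @ ... @ J[n-1] so that the number of scalar multiplications
# is minimal -- the classic matrix-chain ordering recurrence.

def optimal_bracketing(dims):
    """dims[k] x dims[k+1] is the shape of factor Jk; returns (flops, plan)."""
    n = len(dims) - 1
    INF = float("inf")
    cost = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    split = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):          # subchain length
        for i in range(n - length + 1):
            j = i + length - 1
            for k in range(i, j):           # last multiplication splits at k
                c = (cost[i][k] + cost[k + 1][j]
                     + dims[i] * dims[k + 1] * dims[j + 1])
                if c < cost[i][j]:
                    cost[i][j], split[i][j] = c, k

    def show(i, j):
        if i == j:
            return f"J{i}"
        k = split[i][j]
        return f"({show(i, k)} {show(k + 1, j)})"

    return cost[0][n - 1], show(0, n - 1)

# Three Jacobian factors with shapes 2x50, 50x50, and 50x4:
print(optimal_bracketing([2, 50, 50, 4]))   # -> (5400, '((J0 J1) J2)')
```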
15:45 – 16:30 | Coffee Break (Room 32-349) |
16:30 – 17:00 | Talks (Room 42-105) |

16:30 – Adjoint-PETSc
Max Sagebaum, SciComp, RPTU Kaiserslautern-Landau
Abstract: PETSc is a large library that provides matrices, vectors, and linear system solvers, along with many other algorithms and helpers. The ISSM code (Ice-sheet and Sea-level System Model, https://issm.jpl.nasa.gov/) uses PETSc for its linear algebra functionality. Alongside the PETSc implementation, ISSM can also switch to its own implementation of the linear algebra functionality. The AD version of ISSM uses this self-implemented linear algebra library, which restricts the linear solvers available under AD to the MUMPS solver. An ongoing project aims to create an adjoint PETSc library so that AD can use more advanced solvers in ISSM.
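A central building block of such an adjoint linear algebra library is the adjoint of the linear solve itself: for x = A⁻¹b, reverse mode needs one solve with the transposed matrix, giving b̄ = A⁻ᵀx̄ and Ā = −b̄xᵀ. The following NumPy sketch illustrates this generic rule; it is not the Adjoint-PETSc implementation.

```python
# Illustrative adjoint of a linear solve x = A^{-1} b (NumPy sketch, not
# the Adjoint-PETSc implementation). Given the adjoint x_bar = dL/dx of
# some scalar loss L, reverse mode needs a single solve with A transposed:
#   b_bar = A^{-T} x_bar          (adjoint of the right-hand side)
#   A_bar = -b_bar x^T            (adjoint of the matrix)
import numpy as np

def solve_adjoint(A, x, x_bar):
    b_bar = np.linalg.solve(A.T, x_bar)
    A_bar = -np.outer(b_bar, x)
    return A_bar, b_bar

# Check against finite differences on L(b) = sum(A^{-1} b):
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)  # well-conditioned test matrix
b = rng.standard_normal(4)
x = np.linalg.solve(A, b)
A_bar, b_bar = solve_adjoint(A, x, np.ones(4))

eps = 1e-6
db = np.zeros(4); db[2] = eps
fd = (np.linalg.solve(A, b + db).sum() - x.sum()) / eps
print(b_bar[2], fd)  # the two values should agree to roughly 1e-6
```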
19:00 | Workshop dinner |
Friday, April 04 | |
09:00 – 09:30 | Morning Coffee (Room 32-349) |
09:30 – 12:00 | Talks (Room 42-105) |

09:30 – Source transform AD for Julia with Tapenade
Jean-Luc Bouchot, INRIA
Abstract: TBD

10:00 – Composable Sparse AD in Julia
Adrian Hill, TU Berlin, BIFOLD
Abstract: Despite AD’s widespread adoption in the Julia ecosystem, Automatic Sparse Differentiation (ASD) remains an underutilized technique. We present a novel pipeline consisting of three open-source packages that brings ASD capabilities to all major Julia AD backends. The first half of the talk focuses on the unique challenges AD users face in Julia, introducing DifferentiationInterface.jl, a unified interface for over a dozen AD backends. The second half focuses on sparsity pattern detection, a key component of ASD. We present SparseConnectivityTracer.jl (SCT), a performant implementation of Jacobian and Hessian sparsity detection based on operator overloading. SCT computes both local and global sparsity patterns, naturally avoids dead ends in compute graphs, and requires no code modifications. Notably, our ASD pipeline often outperforms standard AD even for one-off computations, which was previously thought impractical in Julia due to slower sparsity detection methods.

10:30 – Effect of simulation and estimator type on accuracy of pathwise derivatives of particle Monte Carlo simulations
Niels Horsten, KU Leuven
Abstract: Particle-based Monte Carlo (MC) methods are an important simulation strategy in a variety of disciplines in science and engineering, including rarefied gas dynamics, plasma physics, financial mathematics, and computer graphics. Using brute-force AD to calculate derivatives often gives undesired results, with large differences between the expected value of the derivative and the derivative of the expected value. In this talk, I show how accurate derivatives can be obtained for kinetic MC simulations by a well-considered choice of simulation and estimator type. Including reflections can lead to diverging derivatives for high-collisional 2D/3D problems. A correction term is proposed to properly include reflections in the derivatives.

11:00 – AD in the wild: Experiences from both implementing and applying AD in Julia for HPC
Valentin Churavy, Universität Augsburg
Abstract: High-performance computing applications are full of challenging code patterns for automatic differentiation frameworks. They often employ mutation, parallel paradigms, and low-level performance tweaks to maximize performance of the primal simulations. Automatic differentiation is typically added after the fact and either has to handle the complicated code patterns or leads to inefficient primal simulations. In this talk, I will discuss this challenge from my experience both developing Enzyme.jl and helping various science teams apply it to their code bases.

11:30 – Efficient optimization of a large compressor module using the adjoint method
Corentin Battistoni, MTU Aero Engines
Abstract: Reaching high levels of efficiency on turbomachinery compressors requires an ever-increasing number of parameters as the studied volume grows. Gradient-based optimization using the adjoint method scales to large numbers of parameters relatively cheaply. However, adjoint calculations are prone to instabilities. The Recursive Projection Method (RPM) enables some diverging setups to converge. It is particularly useful because a compressor has to operate over a large domain, which includes operating points near the surge line where heavy CFD instabilities can be observed.
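To make the operator-overloading approach from the SCT talk concrete, here is a minimal, purely illustrative Python tracer for global Hessian sparsity (an illustration of the general technique, not SparseConnectivityTracer.jl itself): each value carries the set of inputs it depends on plus the set of index pairs with potentially nonzero second derivatives, and every overloaded operation updates both sets conservatively. A real implementation additionally covers the full operator set, compressed set representations, and dead-end pruning.

```python
# Minimal global Hessian sparsity tracer -- an illustration of the
# operator-overloading technique, not SparseConnectivityTracer.jl.
# Each value tracks:
#   grad: set of inputs j with d value / d x_j possibly nonzero
#   hess: set of pairs (j, k) with d^2 value / (d x_j d x_k) possibly nonzero

class Tracer:
    def __init__(self, grad, hess=frozenset()):
        self.grad, self.hess = frozenset(grad), frozenset(hess)

    def __add__(self, other):            # linear: no new second-order terms
        return Tracer(self.grad | other.grad, self.hess | other.hess)

    def __mul__(self, other):            # bilinear: cross terms appear
        cross = {(j, k) for j in self.grad for k in other.grad}
        cross |= {(k, j) for (j, k) in cross}  # keep the pattern symmetric
        return Tracer(self.grad | other.grad,
                      self.hess | other.hess | cross)

    def sin(self):                       # nonlinear unary: self-interactions
        cross = {(j, k) for j in self.grad for k in self.grad}
        return Tracer(self.grad, self.hess | cross)

def hessian_sparsity(f, n):
    y = f([Tracer({j}) for j in range(n)])
    return [[1 if (j, k) in y.hess else 0 for k in range(n)]
            for j in range(n)]

# Example: f(x) = x0*x1 + sin(x2) has Hessian nonzeros at (0,1), (1,0), (2,2).
f = lambda x: x[0] * x[1] + x[2].sin()
print(hessian_sparsity(f, 3))  # [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
```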
12:00 – 13:00 | Lunch Break (Room 32-349) |
Organization
For general questions concerning the organization of the workshop, as well as questions regarding the contents of the agenda, please send an email to max.sagebaum@scicomp.uni-kl.de or nicolas.gauger@scicomp.uni-kl.de.
Travel and Accommodation
The workshop will take place at
Venue: Building 42, Room 42-105 Google maps
Breaks: Building 32, Room 32-349 Google maps. Registration will also take place here.
RPTU Kaiserslautern-Landau
67663 Kaiserslautern
There will be written exams in building 42. Therefore, we cannot host the breaks in the lobby of building 42. Please register in room 32-349 and join us there for the morning coffee. Because of the exams, we have to be quiet outside of room 42-105. We are sorry for this inconvenience.
For a campus map click here!
You can get there
- by air via Frankfurt Airport (FRA), which is approximately 120 km from Kaiserslautern and can be reached conveniently by train,
- by train, as TGV, ICE/IC, and regional trains arrive frequently at Kaiserslautern Hbf, and
- by car (Kaiserslautern is connected to the Autobahn A6).
From the railway station Kaiserslautern Hbf, the city buses 105, 106, 115, and 116 connect to the campus of the RPTU Kaiserslautern-Landau. (Further information – left-hand side.)
Each participant pays for their own lodging; it is not included in the workshop fee. We recommend the following hotels:
- B&B Hotel Kaiserslautern
- Hotel Alcatraz (old prison, unique experience)
- Hotel Zollamt (closest to the venue)
- SAKS Urban Design Hotel Kaiserslautern (Reduced rates with code: RPTU24)
- Best Western Kaiserslautern (Reduced rates with code: RPTU24)