Prof. Dr. Andrea Walther
Profile
Research topics (15)
Adaptives Surrogatmodell für strömungsdynamische Optimierungsaufgaben
Source ↗ · Funder: DFG Excellence Strategy cluster · Period: 01/2026–12/2027 · Project lead: Prof. Dr. Andrea Walther
Ein modellbasiertes Messverfahren zur Charakterisierung der frequenzabhängigen Materialeigenschaften von Piezokeramiken unter Verwendung eines einzelnen Probekörperindividuums
Source ↗ · Funder: DFG research grant (Sachbeihilfe) · Period: 09/2020–01/2021 · Project lead: Prof. Dr. Andrea Walther
Ein modellbasiertes Messverfahren zur Charakterisierung der frequenzabhängigen Materialeigenschaften von Piezokeramiken unter Verwendung eines einzelnen Probekörperindividuums.
Source ↗ · Funder: DFG research grant (Sachbeihilfe) · Period: 02/2021–05/2023 · Project lead: Prof. Dr. Andrea Walther
Entwicklung eines neuartigen Systems für die unterstützte, teilautomatisierte interne und externe Maschinenprogrammierung von Hochtechnologiewerkstücken in der spanenden Fertigung
Source ↗ · Funder: BMWE: ZIM · Period: 02/2020–02/2023 · Project lead: Prof. Dr. Andrea Walther
EXC 2046/1 Math+ "Convolutional Proximal Neural Networks for Solving Inverse Problems"
Source ↗ · Funder: DFG Excellence Strategy cluster · Period: 10/2021–09/2024 · Project lead: Prof. Dr. Andrea Walther
EXC 2046/1: Math+ Professur
Source ↗ · Funder: DFG Excellence Strategy cluster · Period: 01/2020–12/2025 · Project lead: Prof. Dr. Andrea Walther
EXC 2046/1: Sparse Deep Neuronal Networks for the Design of Solar Energy Materials (TP AA2-7)
Source ↗ · Funder: DFG Excellence Strategy cluster · Period: 10/2021–09/2024 · Project lead: Prof. Dr. Andrea Walther
EXC 2046/1: Z-Projekt für Sachmittel
Source ↗ · Funder: DFG Excellence Strategy cluster · Period: 01/2019–12/2025 · Project lead: Prof. Dr. Andrea Walther
Fehlercharakterisierung in CFK-Schalenbauteilen mittels ableitungsbasierten Optimierungsverfahren
Source ↗ · Funder: DFG research grant (Sachbeihilfe) · Period: 03/2020–08/2023 · Project lead: Prof. Dr. Andrea Walther
FOR 5208/1: Angepasste ableitungsbasierte Optimierung zur Charakterisierung des thermopiezoelektrischen Materialverhaltens (TP OPT)
Source ↗ · Funder: DFG Research Unit · Period: 06/2024–09/2026 · Project leads: Prof. Dr. Andrea Walther, Dr. Benjamin Jurgelucks
On a Frank-Wolfe Approach for Abs-Smooth Optimization
Source ↗ · Funder: DFG Excellence Strategy cluster · Period: 04/2023–03/2026 · Project lead: Prof. Dr. Andrea Walther
SFB/TRR 154/2: Optimierung für Gasmarktprobleme (TP B10)
Source ↗ · Funder: DFG Collaborative Research Centre · Period: 08/2020–06/2022 · Project lead: Prof. Dr. Andrea Walther
SFB/TRR 154/3: Gemischt ganzzahlige nichtglatte Optimierung für Bilevel-Probleme (TP B10)
Source ↗ · Funder: DFG Collaborative Research Centre · Period: 07/2022–06/2026 · Project lead: Prof. Dr. Andrea Walther
Shape Optimierung für Maxwell Gleichungen unter Berücksichtigung von Hysterese Effekten in den Materialgesetzen.
Source ↗ · Funder: DFG Priority Programme · Period: 04/2020–09/2020 · Project lead: Prof. Dr. Andrea Walther
Thematic Einstein Semester on Mathematical Optimization for Machine Learning
Source ↗ · Funder: Einstein Center · Period: 04/2023–09/2023 · Project leads: Prof. Dr. Andrea Walther, Prof. Dr. Daniel Walter
Potential industry partners (10)
As of 26.4.2026, 19:48:44 (Top-K=20, Min-Cosine=0.4)
- Lösung gekoppelter Probleme in der Nanoelektronik (nanoCOPS) · 54 hits · 59.8 % match
- Lösung gekoppelter Probleme in der Nanoelektronik (nanoCOPS) · 58 hits · 59.8 % match
- Gamification for Climate Action · 24 hits · 58.6 % match
- Optimierte Natrium-Feststoffbatterien mit neuen Anoden basierend auf Kohlenstoffgerüststrukturen · 15 hits · 58.4 % match
- Interfaces in opto-electronic thin film multilayer devices · 42 hits · 57.9 % match
- Playing beyond CLIL · 5 hits · 57.9 % match
Publications (25)
Top 25 by citations. Source: OpenAlex (BAAI/bge-m3 embeddings used for matching).
2216 citations · DOI
Algorithmic, or automatic, differentiation (AD) is a growing area of theoretical research and software development concerned with the accurate and efficient evaluation of derivatives for function evaluations given as computer programs. The resulting derivative values are useful for all scientific computations that are based on linear, quadratic, or higher order approximations to nonlinear scalar or vector functions. AD has been applied in particular to optimization, parameter identification, nonlinear equation solving, the numerical integration of differential equations, and combinations of these. Apart from quantifying sensitivities numerically, AD also yields structural dependence information, such as the sparsity pattern and generic rank of Jacobian matrices. The field opens up an exciting opportunity to develop new algorithms that reflect the true cost of accurate derivatives and to use them for improvements in speed and reliability. This second edition has been updated and expanded to cover recent developments in applications and theory, including an elegant NP-completeness argument by Uwe Naumann and a brief introduction to scarcity, a generalization of sparsity. There is also added material on checkpointing and iterative differentiation. To improve readability, the more detailed analysis of memory and complexity bounds has been relegated to separate, optional chapters. The book consists of three parts: a stand-alone introduction to the fundamentals of AD and its software; a thorough treatment of methods for sparse problems; and final chapters on program-reversal schedules, higher derivatives, nonsmooth problems and iterative processes. Each of the 15 chapters concludes with examples and exercises. Audience: This volume will be valuable to designers of algorithms and software for nonlinear computational problems. Current numerical software users should gain the insight necessary to choose and deploy existing AD software tools to the best advantage.
Contents: Rules; Preface; Prologue; Mathematical Symbols; Chapter 1: Introduction; Chapter 2: A Framework for Evaluating Functions; Chapter 3: Fundamentals of Forward and Reverse; Chapter 4: Memory Issues and Complexity Bounds; Chapter 5: Repeating and Extending Reverse; Chapter 6: Implementation and Software; Chapter 7: Sparse Forward and Reverse; Chapter 8: Exploiting Sparsity by Compression; Chapter 9: Going beyond Forward and Reverse; Chapter 10: Jacobian and Hessian Accumulation; Chapter 11: Observations on Efficiency; Chapter 12: Reversal Schedules and Checkpointing; Chapter 13: Taylor and Tensor Coefficients; Chapter 14: Differentiation without Differentiability; Chapter 15: Implicit and Iterative Differentiation; Epilogue; List of Figures; List of Tables; Assumptions and Definitions; Propositions, Corollaries, and Lemmas; Bibliography; Index
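The forward mode that the book introduces can be sketched with operator overloading on dual numbers. The class below is an illustrative toy under my own naming; it is not the book's notation or any AD tool's API.

```python
import math

class Dual:
    """Dual number carrying a value and one directional derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule, propagated alongside the value
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def sin(x):
    # chain rule for one smooth elemental
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# differentiate f(x) = x * sin(x) at x = 2 by seeding dx = 1
x = Dual(2.0, 1.0)
y = x * sin(x)
# y.dot now holds sin(2) + 2*cos(2)
```

Reverse mode, the book's second main theme, records the same elemental operations on a tape and propagates adjoints backwards instead.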
Society for Industrial and Applied Mathematics eBooks · 1476 citations · DOI
ACM Transactions on Mathematical Software · 482 citations · DOI
In its basic form, the reverse mode of computational differentiation yields the gradient of a scalar-valued function at a cost that is a small multiple of the computational work needed to evaluate the function itself. However, the corresponding memory requirement is proportional to the run-time of the evaluation program. Therefore, the practical applicability of the reverse mode in its original formulation is limited despite the availability of ever larger memory systems. This observation leads to the development of checkpointing schedules to reduce the storage requirements. This article presents the function revolve, which generates checkpointing schedules that are provably optimal with regard to a primary and a secondary criterion. This routine is intended to be used as an explicit “controller” for running a time-dependent applications program.
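The optimality result behind revolve has a compact combinatorial core: with a fixed number of checkpoints, and each forward step re-evaluated at most a fixed number of times, binomial checkpointing can reverse up to a binomial-coefficient number of steps. A sketch of that bound (function name is mine):

```python
from math import comb

def reversible_steps(snapshots, sweeps):
    """Maximum number of forward time steps that binomial checkpointing
    can reverse with `snapshots` checkpoints when every step may be
    recomputed at most `sweeps` times: C(snapshots + sweeps, snapshots)."""
    return comb(snapshots + sweeps, snapshots)

# 10 checkpoints with at most 3 recomputations per step already cover
# C(13, 10) = 286 time steps, versus 10 for store-everything
steps = reversible_steps(10, 3)
```

This is why the memory requirement grows only logarithmically in the number of time steps for a fixed recomputation factor.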
PAMM · 161 citations · DOI
Abstract Automatic, or algorithmic, differentiation (AD) is a chain rule-based technique for evaluating derivatives of functions given as computer programs. We review the main characteristics and applications of AD and illustrate the methodology on a simple example.
Chapman & Hall/CRC Computational Science Series · 137 citations · DOI
The availability of large-scale computing platforms comprising tens of thousands of multicore processors motivates the need for the next generation of highly scalable sparse linear system solvers. These solvers must optimize parallel performance, processor (serial) performance, as well as memory requirements, while being robust across broad classes of applications and systems. In this chapter, we present a hybrid parallel solver that combines the desirable characteristics of direct methods (robustness) and effective iterative solvers (low computational cost), while alleviating their drawbacks (memory requirements, lack of robustness). The hybrid solver is based on the general sparse direct solver PARDISO [1], and a class of Spike factorization [2, 3, 4, 5, 6, 7, 8, 9, 10, 11] solvers. The resulting algorithm, called PSPIKE, is as robust as direct solvers, more reliable than classical preconditioned Krylov-subspace methods, and much more scalable than direct sparse solvers. We discuss several combinatorial problems that arise in the design of this hybrid solver, present algorithms to solve these combinatorial problems, and demonstrate their impact on a large-scale three-dimensional PDE-constrained optimization problem.
Lecture Notes in Computational Science and Engineering · 124 citations · DOI
Optimization Methods & Software · 78 citations · DOI
In this article we propose a new approach to constrained optimization that is based on direct and adjoint vector-function evaluations in combination with secant updating. The main goal is the avoidance of constraint Jacobian evaluations and the reduction of the linear algebra cost per iteration to $ {\cal O}(n + m)^2 $ operations in the dense, unstructured case. A crucial building block is a transformation invariant two-sided-rank-one update (TR1) for approximations to the (active) constraint Jacobian. In this article we elaborate its basic properties and report preliminary numerical results for the new total quasi-Newton approach on some small equality constrained problems. A nullspace implementation under development is briefly described. The tasks of identifying active constraints, safeguarding convergence and many other important issues in constrained optimization are not addressed in detail.
Mathematics of Computation · 69 citations · DOI
This article considers the problem of evaluating all pure and mixed partial derivatives of some vector function defined by an evaluation procedure. The natural approach to evaluating derivative tensors might appear to be their recursive calculation in the usual forward mode of computational differentiation. However, with the approach presented in this article, much simpler data access patterns and similar or lower computational counts can be achieved through propagating a family of univariate Taylor series of a suitable degree. It is applicable for arbitrary orders of derivatives. Also it is possible to calculate derivatives only in some directions instead of the full derivative tensor. Explicit formulas for all tensor entries as well as estimates for the corresponding computational complexities are given.
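Propagating univariate Taylor polynomials means that each arithmetic operation acts on coefficient vectors; multiplication, for instance, becomes a truncated convolution. A toy sketch under that convention (the k-th derivative is k! times the k-th coefficient; names are mine):

```python
from math import factorial

def taylor_mul(a, b):
    """Taylor coefficients of a product: truncated convolution."""
    d = len(a)
    return [sum(a[j] * b[k - j] for j in range(k + 1)) for k in range(d)]

# propagate x(t) = 3 + t, i.e. coefficients [3, 1, 0, 0], through f(x) = x^2
x = [3.0, 1.0, 0.0, 0.0]
f = taylor_mul(x, x)
derivs = [factorial(k) * c for k, c in enumerate(f)]
# at x = 3: f = 9, f' = 6, f'' = 2, f''' = 0
```

Propagating a family of such univariate series in suitable directions recovers the full derivative tensor, as the article describes.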
INFORMS Journal on Computing · 64 citations · DOI
The computation of a sparse Hessian matrix H using automatic differentiation (AD) can be made efficient using the following four-step procedure: (1) Determine the sparsity structure of H, (2) obtain a seed matrix S that defines a column partition of H using a specialized coloring on the adjacency graph of H, (3) compute the compressed Hessian matrix B ≡ HS, and (4) recover the numerical values of the entries of H from B. The coloring variant used in the second step depends on whether the recovery in the fourth step is direct or indirect: a direct method uses star coloring and an indirect method uses acyclic coloring. In an earlier work, we had designed and implemented effective heuristic algorithms for these two NP-hard coloring problems. Recently, we integrated part of the developed software with the AD tool ADOL-C, which has recently acquired a sparsity detection capability. In this paper, we provide a detailed description and analysis of the recovery algorithms and experimentally demonstrate the efficacy of the coloring techniques in the overall process of computing the Hessian of a given function using ADOL-C as an example of an AD tool. We also present new analytical results on star and acyclic coloring of chordal graphs. The experimental results show that sparsity exploitation via coloring yields enormous savings in runtime and makes the computation of Hessians of very large size feasible. The results also show that evaluating a Hessian via an indirect method is often faster than a direct evaluation. This speedup is achieved without compromising numerical accuracy.
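The compression idea behind step (2) can be illustrated with the simpler, symmetry-ignoring variant: a greedy structurally orthogonal column partition in the spirit of Curtis, Powell, and Reid. The star and acyclic colorings used in the paper are refined, symmetry-exploiting versions that generally need fewer colors; all names below are mine.

```python
def greedy_partition(rows_of_col):
    """Assign columns to groups so that no two columns in a group share
    a nonzero row; each group then needs only one Hessian-vector product."""
    groups = []                         # per group: (rows covered, columns)
    color = {}
    for j, rows in enumerate(rows_of_col):
        for c, (covered, cols) in enumerate(groups):
            if covered.isdisjoint(rows):
                covered |= rows
                cols.append(j)
                color[j] = c
                break
        else:
            groups.append((set(rows), [j]))
            color[j] = len(groups) - 1
    return color, len(groups)

# tridiagonal 6x6 Hessian: column j has nonzeros in rows j-1, j, j+1
n = 6
pattern = [{i for i in (j - 1, j, j + 1) if 0 <= i < n} for j in range(n)]
color, ncolors = greedy_partition(pattern)
# 3 groups suffice, so 6 Hessian-vector products shrink to 3
```

The recovery step then reads each nonzero of H directly out of the compressed matrix B, because within a group the column contributions do not overlap.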
Computational Optimization and Applications · 61 citations · DOI
SIAM Journal on Scientific Computing · 42 citations · DOI
Frequently, the computation of derivatives for optimizing time-dependent problems is based on the integration of the adjoint differential equation. For this purpose, the knowledge of the complete forward solution may be required. Similar information is needed in the context of a posteriori error estimation with respect to a given functional. In the area of flow control, especially for three dimensional problems, it is usually impossible to keep track of the full forward solution due to the lack of storage capacities. Further, for many problems, adaptive time-stepping procedures are needed to obtain efficient integration schemes in time. Therefore, standard optimal offline checkpointing strategies are usually not well suited in that framework. In this paper we present two algorithms for an online checkpointing procedure that determines the checkpoint distribution on the fly. We prove that these approaches yield checkpointing distributions that are either optimal or almost optimal with only a small gap to optimality. Numerical results underline the theoretical results.
Journal of Optimization Theory and Applications · 40 citations · DOI
Computer Physics Communications · 39 citations · DOI
SIAM Journal on Scientific Computing · 36 citations · DOI
The computation of derivatives for optimizing time-dependent flow problems is often based on the integration of the adjoint differential equation. For this purpose, the knowledge of the complete forward solution is required. In the area of flow control, especially for three-dimensional problems, it may be impossible to keep track of the full forward solution due to the lack of storage capacities. One usual method to overcome this problem is checkpointing. Using a checkpointing strategy, only parts of the forward solution are kept in the main memory and additional time steps have to be performed. If one extends this approach such that some checkpoints can be stored on disc too, one can reduce the number of additional time steps. On the other hand, one has to take the access cost to one checkpoint into account. On parallel machines, one may use parallel I/O facilities to store and retrieve checkpoints. In these cases, the access cost of the checkpoints is also no longer negligible. Therefore, in this paper, the write and read counts for each checkpoint in a binomial checkpointing approach are examined. This way, the overall access cost to checkpoints is minimized. Numerical results illustrate the derived checkpointing approaches. They also show that checkpointing techniques may reduce the overall computing time despite the required recalculations.
Optimal Control Applications and Methods · 36 citations · DOI
Abstract This paper discusses approximation schemes for adjoints in control of the instationary Navier–Stokes system. It tackles the storage problem arising in the numerical calculation of the appearing adjoint equations by proposing a low‐storage approach which utilizes optimal checkpointing. For this purpose, a new proof of optimality is given. This new approach gives so far unknown properties of the optimal checkpointing strategies and thus provides new insights. The optimal checkpointing allows a remarkable memory reduction by accepting a slight increase in run‐time caused by repeated forward integrations as illustrated by means of the Navier–Stokes equations. In particular, a memory reduction of two orders of magnitude causes only a slow down factor of 2–3 in run‐time. Copyright © 2005 John Wiley & Sons, Ltd.
Optimization Methods & Software · 34 citations · DOI
We present a sequential quadratic programming (SQP) type algorithm, based on quasi-Newton approximations of Hessian and Jacobian matrices, which is suitable for the solution of general nonlinear programming problems involving equality and inequality constraints. In contrast to most existing SQP methods, no evaluation of the exact constraint Jacobian matrix needs to be performed. Instead, in each SQP iteration only one evaluation of the constraint residuals and two evaluations of the gradient of the Lagrangian function are necessary, the latter of which can efficiently be performed by the reverse mode of automatic differentiation. Factorizations of the Hessian and of the constraint Jacobian are approximated by the recently proposed STR1 update procedure. Inequality constraints are treated by solving within each SQP iteration a quadratic program (QP), the dimension of which equals the number of degrees of freedom. A recently proposed gradient modification in these QPs takes account of Jacobian inexactness in the active set determination. Superlinear convergence of the procedure is shown under mild conditions. The convergence behaviour of the algorithm is analysed using several problems from the Hock–Schittkowski test library. Furthermore, we present numerical results for an optimization problem based on a small periodic adsorption process, where the Jacobian of the equality constraints is dense.
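The rank-one secant machinery at the core of such updates is easy to state. Below is the classical symmetric rank-one (SR1) update, a simpler relative of the two-sided TR1 update used in the paper (the TR1 formula itself is not reproduced here); names are mine.

```python
def sr1_update(B, s, y):
    """Symmetric rank-one secant update on a dense pure-Python matrix:
    returns B+ satisfying the secant condition B+ s = y."""
    n = len(s)
    Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    r = [y[i] - Bs[i] for i in range(n)]           # residual y - B s
    denom = sum(r[i] * s[i] for i in range(n))
    if abs(denom) < 1e-12:
        return B                                   # skip ill-defined update
    return [[B[i][j] + r[i] * r[j] / denom for j in range(n)]
            for i in range(n)]

B = [[1.0, 0.0], [0.0, 1.0]]                       # start from the identity
s, y = [1.0, 2.0], [3.0, -1.0]                     # step and gradient change
Bp = sr1_update(B, s, y)
# Bp @ s reproduces y, and Bp stays symmetric
```

Collecting such cheap low-rank corrections is what lets the SQP method avoid evaluating the exact constraint Jacobian.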
PAMM · 33 citations · DOI
Abstract In this paper, we present two strategies for the implementation of Automatic Differentiation (AD) based on the operator overloading facility in C++. Subsequently, we describe the capabilities of the AD‐tool ADOL‐C that applies operator overloading to differentiate C‐ and C++‐code. Finally, we discuss some applications of ADOL‐C.
Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery · 31 citations · DOI
Abstract Algorithmic differentiation (AD), also known as automatic differentiation, is a technology for accurate and efficient evaluation of derivatives of a function given as a computer model. The evaluations of such models are essential building blocks in numerous scientific computing and data analysis applications, including optimization, parameter identification, sensitivity analysis, uncertainty quantification, nonlinear equation solving, and integration of differential equations. We provide an introduction to AD and present its basic ideas and techniques, some of its most important results, the implementation paradigms it relies on, the connection it has to other domains including machine learning and parallel computing, and a few of the major open problems in the area. Topics we discuss include: forward mode and reverse mode of AD, higher-order derivatives, operator overloading and source transformation, sparsity exploitation, checkpointing, cross-country mode, and differentiating iterative processes.
Optimization Methods & Software · 30 citations · DOI
Any piecewise smooth function that is specified by an evaluation procedure involving smooth elemental functions and piecewise linear functions like min and max can be represented in the so-called abs-normal form. By an extension of algorithmic, or automatic, differentiation, one can then compute certain first- and second-order derivative vectors and matrices that represent a local piecewise linearization and provide additional curvature information. On the basis of these quantities, we characterize local optimality by first- and second-order necessary and sufficient conditions, which generalize the corresponding Karush-Kuhn-Tucker (KKT) theory for smooth problems. The key assumption is the linear independence kink qualification, a generalization of the Linear Independence Constraint Qualification (LICQ) familiar from nonlinear optimization. It implies that the objective has locally a so-called VU decomposition and renders everything tractable in terms of matrix factorizations and other simple linear algebra operations. By yielding descent directions whenever they are violated, the new optimality conditions point the way to a superlinearly convergent generalized Quadratic Program solver, which is currently under development. We exemplify the theory on two nonsmooth examples of Nesterov.
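The abs-normal rewriting can be seen on a two-variable example: max is expressed through a single abs, whose argument becomes the switching variable, and for this function the resulting piecewise linearization is exact. A toy sketch (this particular rewriting and all names are mine):

```python
# max(x1, x2) = (x1 + x2 + |x1 - x2|) / 2: one smooth part, one switch z
def f_abs_normal(x1, x2):
    z = x1 - x2                        # switching variable
    return (x1 + x2 + abs(z)) / 2, z

def f_piecewise_lin(x1, x2, dx1, dx2):
    """Piecewise linearization of f at (x1, x2): smooth elementals are
    linearized, while abs(z) is replaced by abs(z + dz) - abs(z),
    which preserves the kink."""
    z, dz = x1 - x2, dx1 - dx2
    return (dx1 + dx2 + abs(z + dz) - abs(z)) / 2

val, z = f_abs_normal(3.0, 5.0)        # val = 5.0; sign(z) records the branch
# at the kink x1 = x2, the linearization reproduces max(dx1, dx2) exactly
inc = f_piecewise_lin(1.0, 1.0, 0.25, -0.5)
```

The sign pattern of the switching variables plays the role that active constraints play in the classical KKT theory.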
30 citations · DOI
30 citations · DOI
Optimization Methods & Software · 28 citations · DOI
Computer-aided design (CAD) tools are extensively used to design industrial components; however, contrary to, e.g., computational fluid dynamics (CFD) solvers, shape sensitivities for gradient-based optimization of CAD-parametrized geometries have so far only been available through inaccurate and non-robust finite differences. Here, algorithmic differentiation (AD) is applied to the open-source CAD kernel Open CASCADE Technology using the AD software tool ADOL-C (Automatic Differentiation by OverLoading in C++). The differentiated CAD kernel is coupled with a discrete adjoint CFD solver, thus providing the first example of a complete differentiated design chain built from generic, multi-purpose tools. The design chain is demonstrated on the gradient-based optimization of a squared U-bend turbo-machinery cooling duct to minimize the total pressure loss.
Mathematical Programming · 28 citations · DOI
Computer-Aided Design and Applications · 27 citations · DOI
Computer-Aided Design and Applications is an international journal on the applications of CAD and CAM. It publishes papers in the general domain of CAD as well as in emerging fields such as bio-CAD, nano-CAD, soft-CAD, garment-CAD, PLM, PDM, CAD data mining, CAD and the internet, CAD education, genetic algorithms, and CAD engines. The journal is aimed at all developers and users of CAD technology and provides CAD solutions for various stages of design and manufacturing.
ACM Transactions on Mathematical Software · 26 citations · DOI
A new approach for computing a sparsity pattern for a Hessian is presented: nonlinearity information is propagated through the function evaluation yielding the nonzero structure. A complexity analysis of the proposed algorithm is given. Once the sparsity pattern is available, coloring algorithms can be applied to compute a seed matrix. To evaluate the product of the Hessian and the seed matrix, a vector version for evaluating second order adjoints is analysed. New drivers of ADOL-C are provided implementing the presented algorithms. Runtime analyses are given for some problems of the CUTE collection.
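A drastically simplified version of such sparsity propagation: walk a recorded evaluation trace and let each nonlinear elemental create Hessian nonzeros between the index sets of its arguments. The ADOL-C drivers described above are far more refined; the trace format and all names here are mine.

```python
def hessian_pattern(n, trace):
    """Propagate index sets through a trace; nonlinear elementals
    contribute pairs of interacting independents to the Hessian pattern."""
    dep = {('x', i): {i} for i in range(n)}      # dependency index sets
    pattern = set()
    for name, op, args in trace:
        sets = [dep[a] for a in args]
        dep[name] = set().union(*sets)           # plain dependency propagation
        if op == 'mul':                          # bilinear: cross pairs
            pairs = [(i, j) for i in sets[0] for j in sets[1]]
        elif op == 'sin':                        # nonlinear unary: self pairs
            pairs = [(i, j) for i in sets[0] for j in sets[0]]
        else:                                    # linear ops add no nonzeros
            pairs = []
        for i, j in pairs:
            pattern.add((i, j))
            pattern.add((j, i))
    return pattern

# f(x) = x0 * x1 + sin(x2), recorded as a tiny three-statement trace
trace = [('v1', 'mul', [('x', 0), ('x', 1)]),
         ('v2', 'sin', [('x', 2)]),
         ('v3', 'add', ['v1', 'v2'])]
pat = hessian_pattern(3, trace)
# nonzeros (0,1), (1,0) from the product and (2,2) from sin
```

Once the pattern is known, the coloring and compression machinery from the earlier entries takes over.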
Collaborations (6)
Confirmed researcher↔partner pairs from HU-FIS; gold-standard positives for the matching.
Thematic Einstein Semester on Mathematical Optimization for Machine Learning
university
SFB/TRR 154/3: Gemischt ganzzahlige nichtglatte Optimierung für Bilevel-Probleme (TP B10)
university
Thematic Einstein Semester on Mathematical Optimization for Machine Learning
university
SFB/TRR 154/3: Gemischt ganzzahlige nichtglatte Optimierung für Bilevel-Probleme (TP B10)
university
SFB/TRR 154/3: Gemischt ganzzahlige nichtglatte Optimierung für Bilevel-Probleme (TP B10)
other
Thematic Einstein Semester on Mathematical Optimization for Machine Learning
research_institute
Master data
Identity, organization, and contact details from HU-FIS.
- Name
- Prof. Dr. Andrea Walther
- Title
- Prof. Dr.
- Faculty
- Mathematisch-Naturwissenschaftliche Fakultät
- Institute
- Institut für Mathematik
- Research group
- Mathematische Optimierung
- Phone
- +49 30 2093-45333
- HU-FIS profile
- Source ↗
- Last scraped
- 26.4.2026, 01:13:48