Prof. Dr. Nicole Schweikardt
Profile
Research topics (7)
Die Logik der Farbverfeinerung als Rahmenwerk zur Analyse von dynamischen Systemen auf Graphen
Source ↗ · Funder: DFG research grant (Sachbeihilfe) · Period: 04/2026 - 03/2029 · Principal investigator: Prof. Dr. Nicole Schweikardt
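The project title above refers to colour refinement (1-dimensional Weisfeiler-Leman), which repeatedly splits vertex colour classes by the multiset of neighbouring colours until the partition stabilises. A minimal sketch, with an illustrative adjacency-dict graph encoding not taken from the project:

```python
def color_refinement(adj):
    """1-dimensional Weisfeiler-Leman: refine vertex colours until
    the induced partition of the vertex set stabilises."""
    color = {v: 0 for v in adj}          # start with a uniform colouring
    while True:
        # New signature = own colour plus multiset of neighbour colours.
        sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v])))
               for v in adj}
        # Canonically rename signatures to small integers.
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new_color = {v: palette[sig[v]] for v in adj}
        if new_color == color:           # stable: no class was split
            return color
        color = new_color

# Path 0-1-2: the two endpoints end up in one class, the midpoint in another.
stable = color_refinement({0: [1], 1: [0, 2], 2: [1]})
```

The stable colouring is the coarsest partition that colour refinement can distinguish; running it on two graphs and comparing colour-class sizes is the classical 1-WL isomorphism heuristic.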
GRK 2434/1: Facetten der Komplexität
Source ↗ · Funder: DFG research training group (Graduiertenkolleg) · Period: 04/2018 - 09/2022 · Principal investigators: Prof. Dr. Nicole Schweikardt, Prof. Dr. Stefan Kratsch
NW: Grundlagen der Verarbeitung von großen Datenmengen und Datenströmen
Source ↗ · Funder: DFG junior research group (Nachwuchsgruppe) · Period: 04/2005 - 05/2007 · Principal investigator: Prof. Dr. Nicole Schweikardt
NW: Grundlagen der Verarbeitung von großen Datenmengen und Datenströmen II
Source ↗ · Funder: DFG junior research group (Nachwuchsgruppe) · Period: 04/2007 - 03/2009 · Principal investigator: Prof. Dr. Nicole Schweikardt
SFB 1404/1: Grundlagen der Validierung von Datenanalyseworkflows (TP A01)
Source ↗ · Funder: DFG collaborative research centre (Sonderforschungsbereich) · Period: 07/2020 - 06/2024 · Principal investigators: Prof. Dr. Matthias Weidlich, Prof. Dr. Nicole Schweikardt
SFB 1404/2: Validierung verteilter DAWs mittels Ereignisanfragen (TP A01)
Source ↗ · Funder: DFG collaborative research centre (Sonderforschungsbereich) · Period: 07/2024 - 06/2028 · Principal investigators: Prof. Dr. Matthias Weidlich, Prof. Dr. Nicole Schweikardt
Theoretische Grundlagen der effizienten Aufzählung von Anfrageergebnissen
Source ↗ · Funder: DFG research grant (Sachbeihilfe) · Period: 07/2016 - 12/2019 · Principal investigator: Prof. Dr. Nicole Schweikardt
Possible industry partners (10)
As of: 26.4.2026, 19:48:44 (Top-K=20, Min-Cosine=0.4)
- 52 hits · 56.3% · Tiere zum Sprechen bringen. Logistik, Wissenschaft, Präsentation
- 13 hits · 55.3% · Zuwendung im Rahmen des Programms „exist – Existenzgründungen aus der Wissenschaft“ aus dem Bundeshaushalt, Einzelplan 09, Kapitel 02, Titel 68607, Haushaltsjahr 2026, sowie aus Mitteln des Europäischen Strukturfonds (hier Europäischer Sozialfonds Plus – ESF Plus) Förderperiode 2021-2027 – Kofinanzierung für das Vorhaben: „exist Women“
- 39 hits · 55.1% · Datenpraxis zur Gestaltung der Open-Access-Transformation - Analyse, Empfehlung, Training & Vernetzung (OA Datenpraxis)
- 71 hits · 54.4% · Embodied Audition for RobotS
- Higher Functions - Sistemas Informáticos Inteligentes
- 50 hits · 54.1% · QTLeap - Qualitätsübersetzung durch tiefe Sprachanalysemethoden
- 21 hits · 54.0% · WayIn – Der Inklusionswegweiser für Arbeitgeber: Technische Entwicklung und wissenschaftliche Begleitanalyse
- 20 hits · 54.0% · WayIn – Der Inklusionswegweiser für Arbeitgeber: Technische Entwicklung und wissenschaftliche Begleitanalyse
- 25 hits · 53.8% · EU: Printed Logic for Applications of Screen Matrix Activation (PLASMAS)
- 25 hits · 53.8% · EU: Printed Logic for Applications of Screen Matrix Activation (PLASMAS)
- 27 hits · 53.8% · EU: Printed Logic for Applications of Screen Matrix Activation (PLASMAS)
Publications (25)
Top 25 by citations — source: OpenAlex (BAAI/bge-m3 embeddings used for matching).
97 citations · DOI
We consider the task of enumerating and counting answers to k-ary conjunctive queries against relational databases that may be updated by inserting or deleting tuples. We exhibit a new notion of q-hierarchical conjunctive queries and show that these can be maintained efficiently in the following sense. During a linear time pre-processing phase, we can build a data structure that enables constant delay enumeration of the query results; and when the database is updated, we can update the data structure and restart the enumeration phase within constant time. For the special case of self-join free conjunctive queries we obtain a dichotomy: if a query is not q-hierarchical, then query enumeration with sublinear delay and sublinear update time (and arbitrary preprocessing time) is impossible.
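The structural side of this dichotomy is a purely syntactic test on the query. A sketch of that test as we understand it from the underlying paper (the query encoding as (relation, variables) pairs is illustrative; consult the paper for the authoritative definition of q-hierarchical):

```python
def atoms_of(query, var):
    """Indices of the atoms in which `var` occurs."""
    return {i for i, (_, vs) in enumerate(query) if var in vs}

def is_q_hierarchical(query, free_vars):
    """Test the q-hierarchical property of a conjunctive query, given as
    a list of (relation, variables) atoms plus its set of free variables."""
    variables = {v for _, vs in query for v in vs}
    for x in variables:
        ax = atoms_of(query, x)
        for y in variables:
            ay = atoms_of(query, y)
            # Hierarchical: atom sets must nest or be disjoint.
            if not (ax <= ay or ay <= ax or not (ax & ay)):
                return False
            # q-condition: the atoms of a free variable must not be strictly
            # contained in the atoms of a quantified variable.
            if x in free_vars and ax < ay and y not in free_vars:
                return False
    return True

# R(x, y) ∧ S(x, z) with free variable x is q-hierarchical; the triangle
# query R(x, y) ∧ S(y, z) ∧ T(z, x) is not even hierarchical.
q = [("R", ("x", "y")), ("S", ("x", "z"))]
```

The quadratic loop over variable pairs is fine here because the query, not the database, is the input; query sizes are tiny compared to data sizes.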
Elsevier eBooks · 97 citations · DOI
FluXQuery
2004 · Elsevier eBooks · 65 citations · DOI
ACM Transactions on Computational Logic · 58 citations · DOI
This article gives a thorough overview of what is known about first-order logic with counting quantifiers and with arithmetic predicates. As a main theorem we show that Presburger arithmetic is closed under unary counting quantifiers. Precisely, this means that for every first-order formula φ(y, z) over the signature {<, +} there is a first-order formula ψ(x, z) which expresses over the structure 〈ℕ, <, +〉 (respectively, over initial segments of this structure) that the variable x is interpreted exactly by the number of possible interpretations of the variable y for which the formula φ(y, z) is satisfied. Applying this theorem, we obtain an easy proof of Ruhl's result that reachability (and similarly, connectivity) in finite graphs is not expressible in first-order logic with unary counting quantifiers and addition. Furthermore, the above result on Presburger arithmetic helps to show the failure of a particular version of the Crane Beach conjecture.
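The semantics of the unary counting quantifier in question — "x equals the number of y for which φ(y, z) holds" — can be made concrete by brute force over an initial segment of ℕ. A toy sketch only; `phi` is an arbitrary Python predicate standing in for a first-order formula:

```python
def count_quantifier(phi, n, *z):
    """Number of interpretations of y in {0, ..., n-1} for which
    phi(y, *z) holds, i.e. the value the counting quantifier binds to x
    over that initial segment."""
    return sum(1 for y in range(n) if phi(y, *z))

# phi(y, z) := "y + y = z" is first-order over {<, +}; for z = 10 the
# only witness in {0, ..., 99} is y = 5.
witnesses = count_quantifier(lambda y, z: y + y == z, 100, 10)
```

The theorem says this count needs no new machinery: over 〈ℕ, <, +〉 the value of x is itself definable by a first-order formula ψ(x, z).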
Lecture notes in computer science · 47 citations · DOI
43 citations · DOI
Data exchange deals with the following problem: given an instance over a source schema, a specification of the relationship between the source and the target, and dependencies on the target, construct an instance over a target schema that satisfies the given relationships and dependencies. Recently - for data exchange settings without target dependencies - Libkin (PODS'06) introduced a new concept of solutions based on the closed world assumption (so-called CWA-solutions), and showed that, in some respects, this new notion behaves better than the standard notion of solutions considered in previous papers on data exchange. The present paper extends Libkin's notion of CWA-solutions to data exchange settings with target dependencies. We show that, when restricting attention to data exchange settings with weakly acyclic target dependencies, this new notion behaves similarly as before: the core is the unique "minimal" CWA-solution, and computing CWA-solutions as well as certain answers to positive queries is possible in polynomial time and can be PTIME-hard. However, there may be more than one "maximal" CWA-solution. And going beyond the class of positive queries, we obtain that there are conjunctive queries with (just) one inequality, for which evaluating the certain answers is coNP-hard. Finally, we consider the EXISTENCE-OF-CWA-SOLUTIONS problem: while the problem is tractable for data exchange settings with weakly acyclic target dependencies, it turns out to be undecidable for general data exchange settings. As a consequence, we obtain that also the EXISTENCE-OF-UNIVERSAL-SOLUTIONS problem is undecidable in general.
40 citations · DOI
We consider a scenario where we want to query a large dataset that is stored in external memory and does not fit into main memory. The most constrained resources in such a situation are the size of the main memory and the number of random accesses to external memory. We note that sequentially streaming data from external memory through main memory is much less prohibitive. We propose an abstract model of this scenario in which we restrict the size of the main memory and the number of random accesses to external memory, but do not restrict sequential reads. A distinguishing feature of our model is that it admits the usage of unlimited external memory for storing intermediate results, such as several hard disks that can be accessed in parallel. In practice, such auxiliary external memory can be crucial. For example, in a first sequential pass the data can be annotated, and in a second pass this annotation can be used to answer the query. Koch's [9] ARB system for answering XPath queries is based on such a strategy. In this model, we prove lower bounds for sorting the input data. As opposed to related results for models without auxiliary external memory for intermediate results, we cannot rely on communication complexity to establish these lower bounds. Instead, we simulate our model by a non-uniform computation model for which we can establish the lower bounds by combinatorial means.
Logical Methods in Computer Science · 40 citations · DOI
Succinctness is a natural measure for comparing the strength of different logics. Intuitively, a logic L_1 is more succinct than another logic L_2 if all properties that can be expressed in L_2 can be expressed in L_1 by formulas of (approximately) the same size, but some properties can be expressed in L_1 by (significantly) smaller formulas. We study the succinctness of logics on linear orders. Our first theorem is concerned with the finite variable fragments of first-order logic. We prove that: (i) Up to a polynomial factor, the 2- and the 3-variable fragments of first-order logic on linear orders have the same succinctness. (ii) The 4-variable fragment is exponentially more succinct than the 3-variable fragment. Our second main result compares the succinctness of first-order logic on linear orders with that of monadic second-order logic. We prove that the fragment of monadic second-order logic that has the same expressiveness as first-order logic on linear orders is non-elementarily more succinct than first-order logic.
39 citations · DOI
Let φ(X) be a first-order formula in the language of graphs that has a free set variable X, and assume that X only occurs positively in φ(X). Then a natural minimisation problem associated with φ(X) is to find, in a given graph G, a vertex set S of minimum size such that G satisfies φ(S). Similarly, if X only occurs negatively in φ(X), then φ(X) defines a maximisation problem. Many well-known optimisation problems are first-order definable in this sense, for example, minimum dominating set or maximum independent set. We prove that for each class 𝒢 of graphs with excluded minors, in particular for each class of planar graphs, the restriction of a first-order definable optimisation problem to the class 𝒢 has a polynomial time approximation scheme. A crucial building block of the proof of this approximability result is a version of Gaifman's locality theorem for formulas positive in a set variable. This result may be of independent interest.
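Minimum dominating set, one of the problems mentioned above, illustrates what such a formula φ(X) defines. A brute-force baseline for the problem itself (illustrative only — the paper's contribution is a PTAS on minor-closed classes, which this exhaustive search does not provide):

```python
from itertools import combinations

def min_dominating_set(adj):
    """Exhaustively search for a smallest set S such that every vertex
    lies in S or is adjacent to some vertex of S."""
    vertices = set(adj)
    for k in range(len(adj) + 1):          # try sizes 0, 1, 2, ...
        for cand in combinations(adj, k):
            covered = set(cand)
            for v in cand:
                covered.update(adj[v])     # neighbours of S are dominated
            if covered == vertices:
                return set(cand)

# On a 4-cycle no single vertex dominates, but two opposite vertices do.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
solution = min_dominating_set(c4)
```

The search is exponential in the graph size; the point of the paper is that positivity of X plus excluded minors buys a (1+ε)-approximation in polynomial time instead.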
Lecture notes in computer science · 33 citations · DOI
Information and Computation · 31 citations · DOI
ACM SIGLOG News · 30 citations · DOI
This paper is the tutorial we wish we had had available when starting our own research on constant delay enumeration for conjunctive queries. It provides precise statements and detailed, self-contained proofs of the fundamental results in this area.
Theoretical Computer Science · 30 citations · DOI
Journal of Computer and System Sciences · 27 citations · DOI
ACM Transactions on Database Systems · 26 citations · DOI
Data exchange deals with translating data structured in some source format into data structured in some target format, given a specification of the relationship between the source and the target and possibly constraints on the target; and answering queries over the target in a way that is semantically consistent with the information in the source. Theoretical foundations of data exchange have been actively explored recently. It was also noticed that the standard semantics for query answering in data exchange may lead to counterintuitive or anomalous answers. In the present article, we explain that this behavior is due to the fact that solutions can contain invented information (information that is not related to the source instance), and that the presence of incomplete information in target instances has been ignored. In particular, proper query evaluation techniques for databases with nulls have not been used, and the distinction between closed and open world semantics has not been made. We present a concept of solutions, called CWA-solutions, that is based on the closed world assumption. For data exchange settings without constraints on the target, the space of CWA-solutions has two extreme points: the canonical universal solution (the maximal CWA-solution) and the core of the universal solutions (the minimal CWA-solution), both of them well studied in data exchange. In the presence of constraints on the target, the core of the universal solutions is still the minimal CWA-solution, but there may be no unique maximal CWA-solution. We show how to define the semantics of query-answering taking into account incomplete information, and show that some of the well-known anomalies go away with the new semantics. The article also contains results on the complexity of query-answering, upper approximations to queries (maybe-answers), and various extensions.
25 citations · DOI
We introduce the logic FOCN(P) which extends first-order logic by counting and by numerical predicates from a set P, and which can be viewed as a natural generalisation of various counting logics that have been studied in the literature.
25 citations · DOI
We study the randomized version of a computation model (introduced in [9, 10]) that restricts random access to external memory and internal memory space. Essentially, this model can be viewed as a powerful version of a data stream model that puts no cost on sequential scans of external memory (as other models for data streams) and, in addition (like other external memory models, but unlike streaming models), admits several large external memory devices that can be read and written to in parallel. We obtain tight lower bounds for the decision problems set equality, multiset equality, and checksort. More precisely, we show that any randomized one-sided-error bounded Monte Carlo algorithm for these problems must perform Ω(log N) random accesses to external memory devices, provided that the internal memory size is at most O(N^{1/4}/log N), where N denotes the size of the input data. From the lower bound on the set equality problem we can infer lower bounds on the worst case data complexity of query evaluation for the languages XQuery, XPath, and relational algebra on streaming data. More precisely, we show that there exist queries in XQuery, XPath, and relational algebra, such that any (randomized) Las Vegas algorithm that evaluates these queries must perform Ω(log N) random accesses to external memory devices, provided that the internal memory size is at most O(N^{1/4}/log N).
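Why randomisation is the natural tool for multiset equality in a model where sequential scans are cheap can be seen with a classical polynomial fingerprint, computable in one pass per stream. An illustrative sketch, not the construction from the paper:

```python
import random

def multiset_fingerprint(stream, r, p):
    """One sequential pass: evaluate the product of (r - x) over the
    stream, modulo the prime p. Two streams carry the same multiset iff
    these polynomials in r coincide, so matching fingerprints at a
    uniformly random point r witness equality with high probability."""
    fp = 1
    for x in stream:
        fp = fp * (r - x) % p
    return fp

p = (1 << 61) - 1        # a Mersenne prime, comfortably above the values
r = random.randrange(p)  # random evaluation point, chosen once
same = multiset_fingerprint([3, 1, 4, 1, 5], r, p) == \
       multiset_fingerprint([1, 1, 3, 4, 5], r, p)   # same multiset, reordered
```

Distinct multisets of size m collide only when r hits a root of the difference polynomial, i.e. with probability at most m/p; the paper's lower bounds show that even such randomized one-pass tricks cannot avoid Ω(log N) random accesses once answers must be exact in the Las Vegas sense.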
24 citations · DOI
We consider the evaluation of first-order queries over classes of databases that are nowhere dense. The notion of nowhere dense classes was introduced by Nešetřil and Ossona de Mendez as a formalization of classes of "sparse" graphs and generalizes many well-known classes of graphs, such as classes of bounded degree, bounded tree-width, or bounded expansion. It has recently been shown by Grohe, Kreutzer, and Siebertz that over nowhere dense classes of databases, first-order sentences can be evaluated in pseudo-linear time (pseudo-linear time means that for all ε there exists an algorithm working in time O(n^{1+ε}), where n is the size of the database). For first-order queries of higher arities, we show that over any nowhere dense class of databases, the set of their solutions can be enumerated with constant delay after a pseudo-linear time preprocessing. In the same context, we also show that after a pseudo-linear time preprocessing we can, on input of a tuple, test in constant time whether it is a solution to the query.
24 citations · DOI
A class of relational databases has low degree if for all δ, all but finitely many databases in the class have degree at most n^δ, where n is the size of the database. Typical examples are databases of bounded degree or of degree bounded by log n. It is known that over a class of databases having low degree, first-order boolean queries can be checked in pseudo-linear time, i.e. in time bounded by n^{1+ε}, for all ε. We generalise this result by considering query evaluation.
Theoretical Computer Science · 22 citations · DOI
DROPS (Schloss Dagstuhl – Leibniz Center for Informatics) · 21 citations · DOI
We investigate the query evaluation problem for fixed queries over fully dynamic databases where tuples can be inserted or deleted. The task is to design a dynamic data structure that can immediately report the new result of a fixed query after every database update. We consider unions of conjunctive queries (UCQs) and focus on the query evaluation tasks testing (decide whether an input tuple belongs to the query result), enumeration (enumerate, without repetition, all tuples in the query result), and counting (output the number of tuples in the query result).

We identify three increasingly restrictive classes of UCQs which we call t-hierarchical, q-hierarchical, and exhaustively q-hierarchical UCQs. Our main results provide the following dichotomies: If the query's homomorphic core is t-hierarchical (q-hierarchical, exhaustively q-hierarchical), then the testing (enumeration, counting) problem can be solved with constant update time and constant testing time (delay, counting time). Otherwise, it cannot be solved with sublinear update time and sublinear testing time (delay, counting time), unless the OV-conjecture and/or the OMv-conjecture fails.

We also study the complexity of query evaluation in the dynamic setting in the presence of integrity constraints, and we obtain similar dichotomy results for the special case of small domain constraints (i.e., constraints which state that all values in a particular column of a relation belong to a fixed domain of constant size).
21 citations · DOI
This paper gives an overview of recent work on machine models for processing massive amounts of data. The main focus is on generalizations of the classical data stream model where, apart from an "internal memory" of limited size, also a number of (potentially huge) streams may be used as "external memory devices".
19 citations · DOI
As data analytics becomes more crucial to digital systems, so grows the importance of characterizing the database queries that admit a more efficient evaluation. We consider the tractability yardstick of answer enumeration with a polylogarithmic delay after a linear-time preprocessing phase. Such an evaluation is obtained by constructing, in the preprocessing phase, a data structure that supports polylogarithmic-delay enumeration. In this paper, we seek a structure that supports the more demanding task of a "random permutation": polylogarithmic-delay enumeration in truly random order. Enumeration of this kind is required if downstream applications assume that the intermediate results are representative of the whole result set in a statistically valuable manner. An even more demanding task is that of a "random access": polylogarithmic-time retrieval of an answer whose position is given. We establish that the free-connex acyclic CQs are tractable in all three senses: enumeration, random-order enumeration, and random access; and in the absence of self-joins, it follows from past results that every other CQ is intractable by each of the three (under some fine-grained complexity assumptions). However, the three yardsticks are separated in the case of a union of CQs (UCQ): while a union of free-connex acyclic CQs has a tractable enumeration, it may (provably) admit no random access. For such UCQs we devise a random-order enumeration whose delay is logarithmic in expectation. We also identify a subclass of UCQs for which we can provide random access with polylogarithmic access time. Finally, we present an implementation and an empirical study that show a considerable practical superiority of our random-order enumeration approach over state-of-the-art alternatives.
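The step from random access to random-order enumeration described above can be sketched directly: given O(1)-time access to the i-th answer, an online Fisher-Yates shuffle over the index space yields enumeration in uniformly random order while storing only the displaced indices. Illustrative code; `answer` stands in for the paper's random-access structure:

```python
import random

def random_order_enumerate(answer, n, rng=random):
    """Enumerate answer(0), ..., answer(n-1) in uniformly random order
    with constant delay per answer, via an online Fisher-Yates shuffle."""
    slots = {}                    # sparse view of a virtual index array
    for i in range(n):
        j = rng.randrange(i, n)   # the i-th output position, chosen uniformly
        slots[i], slots[j] = slots.get(j, j), slots.get(i, i)
        yield answer(slots[i])

# With plain list indexing playing the role of the random-access structure:
answers = ["a", "b", "c", "d"]
perm = list(random_order_enumerate(answers.__getitem__, len(answers)))
```

Each answer is emitted exactly once and every permutation is equally likely; the memory used is proportional to the number of answers already emitted, which is why, for UCQs without polylogarithmic random access, the paper has to devise a different random-order scheme.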
Journal of the ACM · 19 citations · DOI
We consider a scenario where we want to query a large dataset that is stored in external memory and does not fit into main memory. The most constrained resources in such a situation are the size of the main memory and the number of random accesses to external memory. We note that sequentially streaming data from external memory through main memory is much less prohibitive. We propose an abstract model of this scenario in which we restrict the size of the main memory and the number of random accesses to external memory, but admit arbitrary sequential access. A distinguishing feature of our model is that it allows the usage of unlimited external memory for storing intermediate results, such as several hard disks that can be accessed in parallel. In this model, we prove lower bounds for the problem of sorting a sequence of strings (or numbers), the problem of deciding whether two given sets of strings are equal, and two closely related decision problems. Intuitively, our results say that there is no algorithm for the problems that uses internal memory space bounded by N^{1−ε} and at most o(log N) random accesses to external memory, but unlimited “streaming access”, both for writing to and reading from external memory. (Here, N denotes the size of the input and ε is an arbitrary constant greater than 0.) We even permit randomized algorithms with one-sided bounded error. We also consider the problem of evaluating database queries and prove similar lower bounds for evaluating relational algebra queries against relational databases and XQuery and XPath queries against XML-databases.
ACM Transactions on Database Systems · 17 citations · DOI
We investigate the query evaluation problem for fixed queries over fully dynamic databases, where tuples can be inserted or deleted. The task is to design a dynamic algorithm that immediately reports the new result of a fixed query after every database update. We consider queries in first-order logic (FO) and its extension with modulo-counting quantifiers (FO+MOD) and show that they can be efficiently evaluated under updates, provided that the dynamic database does not exceed a certain degree bound. In particular, we construct a data structure that allows us to answer a Boolean FO+MOD query and to compute the size of the result of a non-Boolean query within constant time after every database update. Furthermore, after every database update, we can update the data structure in constant time such that afterwards we are able to test within constant time for a given tuple whether or not it belongs to the query result, to enumerate all tuples in the new query result, and to enumerate the difference between the old and the new query result with constant delay between the output tuples. The preprocessing time needed to build the data structure is linear in the size of the database. Our results extend earlier work on the evaluation of first-order queries on static databases of bounded degree and rely on an effective Hanf normal form for FO+MOD recently obtained by Heimberg, Kuske, and Schweikardt (LICS 2016).
Cooperations (2)
Confirmed researcher↔partner pairs from HU-FIS — gold-standard positives for the matching.
GRK 2434/1: Facetten der Komplexität
university
GRK 2434/1: Facetten der Komplexität
university
Master data
Identity, organisation, and contact details from HU-FIS.
- Name
- Prof. Dr. Nicole Schweikardt
- Title
- Prof. Dr.
- Faculty
- Mathematisch-Naturwissenschaftliche Fakultät
- Institute
- Institut für Informatik
- Research group
- Theoretische Informatik
- Phone
- +49 30 2093-41102
- HU-FIS profile
- Source ↗
- Last scraped
- 26.4.2026, 01:12:26