Prof. Dr. Richard Kempter
Profile
Research topics (16)
BCCN II, A4 "Theoretical Analysis of Hippocampal Memory Formation"
Source ↗ · Funder: Bundesministerium für Forschung, Technologie und Raumfahrt · Period: 06/2010 - 06/2017 · Project lead: Prof. Dr. Richard Kempter
Centre for Computational Neuroscience Berlin - Subproject B5: Neural rhythms, synaptic plasticity, and network stability
Source ↗ · Funder: Bundesministerium für Forschung, Technologie und Raumfahrt · Period: 09/2004 - 08/2010 · Project lead: Prof. Dr. Richard Kempter
CompLS - Round 5 - Joint project ATLAS - AI and Simulation for Tumor Liver Assessment: development of an AI- and simulation-based clinical decision-support system for the diagnosis and treatment of liver tumors - Subproject B
Source ↗ · Funder: Bundesministerium für Forschung, Technologie und Raumfahrt · Period: 03/2023 - 12/2026 · Project leads: Dr. Matthias König, Prof. Dr. Richard Kempter
The role of grid and place cells and of phase precession in human episodic memory
Source ↗ · Funder: Bundesministerium für Forschung, Technologie und Raumfahrt · Period: 03/2018 - 06/2021 · Project lead: Prof. Dr. Richard Kempter
Field potentials in the auditory system
Source ↗ · Funder: Bundesministerium für Forschung, Technologie und Raumfahrt · Period: 04/2016 - 06/2020 · Project lead: Prof. Dr. Richard Kempter
Microsecond precision in the avian auditory system: analysis and simulation of the neurophonic potential
Source ↗ · Funder: Bundesministerium für Forschung, Technologie und Raumfahrt · Period: 04/2007 - 02/2010 · Project lead: Prof. Dr. Richard Kempter
NW: Plasticity and stability in recurrent systems of spiking neurons (II)
Source ↗ · Funder: DFG junior research group (Nachwuchsgruppe) · Period: 01/2005 - 03/2007 · Project lead: Prof. Dr. Richard Kempter
NW: Plasticity and stability in recurrent systems of spiking neurons (III)
Source ↗ · Funder: DFG junior research group (Nachwuchsgruppe) · Period: 09/2007 - 08/2008 · Project lead: Prof. Dr. Richard Kempter
SFB 1315/1: Neuronal foundations of systems memory consolidation (TP B01)
Source ↗ · Funder: DFG Collaborative Research Centre · Period: 07/2018 - 06/2022 · Project lead: Prof. Dr. Richard Kempter
SFB 1315/1: On the role of synaptic disinhibition in hippocampal memory consolidation (TP A01)
Source ↗ · Funder: DFG Collaborative Research Centre · Period: 07/2018 - 06/2022 · Project leads: Prof. Dr. Richard Kempter, Prof. Dr. Dietmar Schmitz
SFB 1315/2: A neuronal model of the development of schemas and their role in systems memory consolidation (TP B01)
Source ↗ · Funder: DFG Collaborative Research Centre · Period: 07/2022 - 06/2026 · Project leads: Prof. Dr. Richard Kempter, Prof. Dr. Benjamin Lindner
SFB 1315/2: On the role of heterogeneous populations of pyramidal cells and specific connectivity motifs in pattern completion and the replay of memory sequences (TP A01)
Source ↗ · Funder: DFG Collaborative Research Centre · Period: 07/2022 - 06/2026 · Project leads: Prof. Dr. Richard Kempter, Prof. Dr. Dietmar Schmitz
"SimLivA II - Simulation-supported liver assessment for donor organs II: continuum-biomechanical modeling for assessing ischemia-reperfusion injury in liver transplantation and machine perfusion"
Source ↗ · Funder: DFG research grant (Sachbeihilfe) · Period: 10/2025 - 12/2028 · Project leads: Dr. Matthias König, Prof. Dr. Richard Kempter
SPP 1665: Circuit mechanisms of phase precession: experiment and theory
Source ↗ · Funder: DFG Priority Programme · Period: 11/2016 - 12/2019 · Project lead: Prof. Dr. Michael Brecht
SPP 1665: Circuit mechanisms of phase precession: experiment and theory
Source ↗ · Funder: DFG Priority Programme · Period: 01/2017 - 12/2019 · Project lead: Prof. Dr. Richard Kempter
US-German Research Proposal: Collaborative Research: Field Potentials in the Auditory System
Source ↗ · Period: 09/2017 - 08/2019 · Project lead: Prof. Dr. Richard Kempter
Potential industry partners (10)
As of 26 Apr 2026, 19:48:44 (Top-K=20, Min-Cosine=0.4)
- 127 hits · 64.1% · SFB 1315/2: Mechanisms and disturbances of memory consolidation: from synapses to the systems level [T]
- 62 hits · 56.8% · Design & implementation of a neural network for person detection (Transferbonus) [T]
- 52 hits · 56.4% · EU: Hybrid Organic/Inorganic Memory Elements for Integration of Electronic and Photonic Circuitry (HYMEC) [P]
- 9 hits · 56.3% · Digital workflow for the process chain of wide-bandgap semiconductor epitaxial layers for power electronics [P]
- 36 hits · 55.7% · Embodied Audition for RobotS [P]; Promoting Deaf and Hard of Hearing Children's Theory of Mind and Emotion Understanding [P]
- 81 hits · 55.7% · Promoting Deaf and Hard of Hearing Children's Theory of Mind and Emotion Understanding [P]
- 80 hits · 55.7% · Promoting Deaf and Hard of Hearing Children's Theory of Mind and Emotion Understanding [P]
- 81 hits · 55.7% · Promoting Deaf and Hard of Hearing Children's Theory of Mind and Emotion Understanding [P]
- 81 hits · 55.7% · Promoting Deaf and Hard of Hearing Children's Theory of Mind and Emotion Understanding [P]
Publications (25)
Top 25 by citations. Source: OpenAlex (BAAI/bge-m3 embeddings used for matching).
Nature · 1147 citations · DOI
Journal of Neuroscience · 839 citations · DOI
Neuronal oscillations allow for temporal segmentation of neuronal spikes. Interdependent oscillators can integrate multiple layers of information. We examined phase-phase coupling of theta and gamma oscillators in the CA1 region of rat hippocampus during maze exploration and rapid eye movement sleep. Hippocampal theta waves were asymmetric, and estimation of the spatial position of the animal was improved by identifying the waveform-based phase of spiking, compared to traditional methods used for phase estimation. Using the waveform-based theta phase, three distinct gamma bands were identified: slow gamma (γS; 30-50 Hz), midfrequency gamma (γM; 50-90 Hz), and fast gamma (γF; 90-150 Hz, or epsilon band). The amplitude of each sub-band was modulated by the theta phase. In addition, we found reliable phase-phase coupling between theta and both γS and γM but not γF oscillators. We suggest that cross-frequency phase coupling can support multiple time-scale control of neuronal spikes within and across structures.
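The "waveform-based phase" idea can be sketched in a few lines. The variant below is an assumption for illustration, not the paper's exact algorithm: detect successive theta troughs and interpolate phase linearly between them, so that even an asymmetric waveform gets a monotone phase within each cycle.

```python
import math

def trough_indices(signal):
    """Indices of local minima (troughs) of a sampled signal."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] < signal[i - 1] and signal[i] <= signal[i + 1]]

def waveform_phase(signal):
    """Assign each sample a phase in [0, 360) degrees by linear
    interpolation between successive troughs; samples before the first
    or after the last trough get None."""
    troughs = trough_indices(signal)
    phase = [None] * len(signal)
    for a, b in zip(troughs, troughs[1:]):
        for i in range(a, b):
            phase[i] = 360.0 * (i - a) / (b - a)
    return phase

# An asymmetric toy "theta" wave (fast rise, slower fall), 100 samples/cycle:
wave = [math.sin(2 * math.pi * ((t % 100) / 100.0) ** 0.6)
        for t in range(300)]
phase = waveform_phase(wave)
print(phase[62], phase[112])  # at a trough -> 0.0; half a cycle later -> 180.0
```

Because phase is tied to the detected waveform landmarks rather than to a sinusoidal fit, the asymmetry of the wave does not distort the assigned spike phases.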
Physical Review E: Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics · 695 citations · DOI
A correlation-based ("Hebbian") learning rule at a spike level with millisecond resolution is formulated, mathematically analyzed, and compared with learning in a firing-rate description. The relative timing of presynaptic and postsynaptic spikes influences synaptic weights via an asymmetric "learning window." A differential equation for the learning dynamics is derived under the assumption that the time scales of learning and neuronal spike dynamics can be separated. The differential equation is solved for a Poissonian neuron model with stochastic spike arrival. It is shown that correlations between input and output spikes tend to stabilize structure formation. With an appropriate choice of parameters, learning leads to an intrinsic normalization of the average weight and the output firing rate. Noise generates diffusion-like spreading of synaptic weights.
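The asymmetric learning window can be sketched as a pair-based spike-timing-dependent rule. This is a minimal illustration; the amplitudes and time constants are assumptions, not the paper's parametrization.

```python
import math

# Asymmetric exponential learning window W(dt), with dt = t_post - t_pre:
# pre-before-post (dt > 0) potentiates, post-before-pre (dt < 0) depresses.
A_PLUS, A_MINUS = 0.01, 0.012      # illustrative amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # illustrative time constants (ms)

def learning_window(dt):
    if dt >= 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)

def stdp_update(w, pre_spikes, post_spikes, w_min=0.0, w_max=1.0):
    """Accumulate pair-based weight changes over all pre/post spike pairs."""
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            w += learning_window(t_post - t_pre)
    return min(max(w, w_min), w_max)

w = stdp_update(0.5, pre_spikes=[10.0], post_spikes=[15.0])
print(round(w, 4))  # pre leads post by 5 ms -> 0.5078 (weight increases)
```

With A_MINUS slightly larger than A_PLUS, the window integral is negative, the regime the abstract associates with intrinsic rate stabilization.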
European Journal of Neuroscience · 313 citations · DOI
Tinnitus, the perception of a sound in the absence of acoustic stimulation, is often associated with hearing loss. Animal studies indicate that hearing loss through cochlear damage can lead to behavioral signs of tinnitus that are correlated with pathologically increased spontaneous firing rates, or hyperactivity, of neurons in the auditory pathway. Mechanisms that lead to the development of this hyperactivity, however, have remained unclear. We address this question by using a computational model of auditory nerve fibers and downstream auditory neurons. The key idea is that mean firing rates of these neurons are stabilized through a homeostatic plasticity mechanism. This homeostatic compensation can give rise to hyperactivity in the model neurons if the healthy ratio between mean and spontaneous firing rate of the auditory nerve is decreased, for example through a loss of outer hair cells or damage to hair cell stereocilia. Homeostasis can also amplify non-auditory inputs, which then contribute to hyperactivity. Our computational model predicts how appropriate additional acoustic stimulation can reverse the development of such hyperactivity, which could provide a new basis for treatment strategies.
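The homeostatic mechanism described above reduces to a simple gain computation: the response gain is set so that the mean output rate hits its target, and the same gain is then applied to spontaneous activity. The rates below are illustrative assumptions, not fits to data.

```python
# Toy homeostatic-gain sketch (illustrative numbers, not the paper's model
# parameters). A downstream neuron scales its response gain g so that its
# mean rate matches a target. After cochlear damage the mean driven rate
# drops more than the spontaneous rate, so the compensating gain amplifies
# spontaneous activity -> "hyperactivity".

def homeostatic_gain(mean_in, target_mean):
    """Gain that restores the target mean output rate."""
    return target_mean / mean_in

def output_rates(mean_in, spont_in, target_mean):
    g = homeostatic_gain(mean_in, target_mean)
    return g * mean_in, g * spont_in  # (mean output, spontaneous output)

# Healthy: mean drive 50 Hz, spontaneous 10 Hz, target mean 50 Hz.
print(output_rates(50.0, 10.0, 50.0))  # (50.0, 10.0): nothing to compensate
# Hearing loss halves the mean drive but leaves spontaneous input intact:
print(output_rates(25.0, 10.0, 50.0))  # (50.0, 20.0): spontaneous rate doubles
```

The key quantity is the ratio of mean to spontaneous input rate: whenever damage lowers that ratio, restoring the mean necessarily raises spontaneous firing.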
Hearing Research · 245 citations · DOI
Neural Computation · 177 citations · DOI
We study analytically a model of long-term synaptic plasticity where synaptic changes are triggered by presynaptic spikes, postsynaptic spikes, and the time differences between presynaptic and postsynaptic spikes. The changes due to correlated input and output spikes are quantified by means of a learning window. We show that plasticity can lead to an intrinsic stabilization of the mean firing rate of the postsynaptic neuron. Subtractive normalization of the synaptic weights (summed over all presynaptic inputs converging on a postsynaptic neuron) follows if, in addition, the mean input rates and the mean input correlations are identical at all synapses. If the integral over the learning window is positive, firing-rate stabilization requires a non-Hebbian component, whereas such a component is not needed if the integral of the learning window is negative. A negative integral corresponds to anti-Hebbian learning in a model with slowly varying firing rates. For spike-based learning, a strict distinction between Hebbian and anti-Hebbian rules is questionable since learning is driven by correlations on the timescale of the learning window. The correlations between presynaptic and postsynaptic firing are evaluated for a piecewise-linear Poisson model and for a noisy spiking neuron model with refractoriness. While a negative integral over the learning window leads to intrinsic rate stabilization, the positive part of the learning window picks up spatial and temporal correlations in the input.
Journal of Neuroscience Methods · 166 citations · DOI
Journal of Neuroscience · 136 citations · DOI
During the crossing of the place field of a pyramidal cell in the rat hippocampus, the firing phase of the cell decreases with respect to the local theta rhythm. This phase precession is usually studied on the basis of data in which many place field traversals are pooled together. Here we study properties of phase precession in single trials. We found that single-trial and pooled-trial phase precession were different with respect to phase-position correlation, phase-time correlation, and phase range. Whereas pooled-trial phase precession may span 360 degrees, the most frequent single-trial phase range was only approximately 180 degrees. In pooled trials, the correlation between phase and position (r = -0.58) was stronger than the correlation between phase and time (r = -0.27), whereas in single trials these correlations (r = -0.61 for both) were not significantly different. Next, we demonstrated that phase precession exhibited a large trial-to-trial variability. Overall, only a small fraction of the trial-to-trial variability in measures of phase precession (e.g., slope or offset) could be explained by other single-trial properties (such as running speed or firing rate), whereas the larger part of the variability remains to be explained. Finally, we found that surrogate single trials, created by randomly drawing spikes from the pooled data, are not equivalent to experimental single trials: pooling over trials therefore changes basic measures of phase precession. These findings indicate that single trials may be better suited for encoding temporally structured events than is suggested by the pooled data.
Hearing Research · 134 citations · DOI
Neuron · 130 citations · DOI
Neural Computation · 107 citations · DOI
How does a neuron vary its mean output firing rate if the input changes from random to oscillatory coherent but noisy activity? What are the critical parameters of the neuronal dynamics and input statistics? To answer these questions, we investigate the coincidence-detection properties of an integrate-and-fire neuron. We derive an expression indicating how coincidence detection depends on neuronal parameters. Specifically, we show how coincidence detection depends on the shape of the postsynaptic response function, the number of synapses, and the input statistics, and we demonstrate that there is an optimal threshold. Our considerations can be used to predict from neuronal parameters whether and to what extent a neuron can act as a coincidence detector and thus can convert a temporal code into a rate code.
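The question posed above can be illustrated with a crude simulation, a sketch in the spirit of the analysis, with all parameters chosen as assumptions: the same total number of input spikes drives a leaky integrate-and-fire neuron either spread out at random or grouped into near-coincident volleys, and the coincidence-detecting neuron fires far more in the volley case.

```python
import random

def lif_spike_count(spike_times, tau_m=10.0, threshold=1.0, w=0.03,
                    t_max=1000.0, dt=0.1):
    """Count output spikes of a leaky integrate-and-fire neuron driven by
    the given input spike times (all times in ms)."""
    spikes = sorted(spike_times)
    v, out, i, t = 0.0, 0, 0, 0.0
    while t < t_max:
        v *= 1.0 - dt / tau_m                    # membrane leak
        while i < len(spikes) and spikes[i] <= t:
            v += w                                # each input spike adds w
            i += 1
        if v >= threshold:
            out += 1
            v = 0.0                               # fire and reset
        t += dt
    return out

random.seed(1)
n_inputs, n_cycles, period = 100, 20, 50.0        # 100 inputs, 20 Hz, 1 s
# (a) The same number of spikes spread out at random (Poisson-like drive):
random_input = [random.uniform(0.0, 1000.0) for _ in range(n_inputs * n_cycles)]
# (b) Oscillatory drive: spikes jittered (sd 1 ms) around each cycle peak,
# so inputs arrive in near-coincident volleys:
volley_input = [c * period + random.gauss(0.0, 1.0)
                for c in range(n_cycles) for _ in range(n_inputs)]
print(lif_spike_count(random_input), lif_spike_count(volley_input))
```

With these numbers the mean membrane potential under random drive stays below threshold, so output spikes require chance coincidences; the volleys cross threshold reliably on every cycle, which is the rate increase the abstract's coincidence-detection analysis quantifies.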
Journal of Neurophysiology · 94 citations · DOI
Tinnitus is often related to hearing loss, but how hearing loss could lead to tinnitus has remained unclear. Animal studies show that the occurrence of tinnitus is correlated to increased spontaneous firing rates of central auditory neurons, but mechanisms that give rise to such hyperactivity have not been identified yet. Here we present a computational model that reproduces tinnitus-related hyperactivity and predicts tinnitus pitch from the audiograms of tinnitus patients with noise-induced hearing loss and tone-like tinnitus. Our key assumption is that the mean firing rates of central auditory neurons are controlled by homeostatic plasticity. Decreased auditory nerve activity after hearing loss is counteracted through an increase of the neuronal response gain, which restores the mean rate but can also lead to hyperactivity. Hyperactivity patterns calculated from patients' audiograms exhibit distinct peaks at frequencies close to the perceived tinnitus pitch, corroborating hyperactivity through homeostatic plasticity as a mechanism for the development of tinnitus after hearing loss. The model suggests that such hyperactivity, and thus also tinnitus caused by cochlear damage, could be alleviated through additional stimulation.
Neuron · 88 citations · DOI
PLoS Computational Biology · 86 citations · DOI
Complex patterns of neural activity appear during up-states in the neocortex and sharp waves in the hippocampus, including sequences that resemble those during prior behavioral experience. The mechanisms underlying this replay are not well understood. How can small synaptic footprints engraved by experience control large-scale network activity during memory retrieval and consolidation? We hypothesize that sparse and weak synaptic connectivity between Hebbian assemblies is boosted by pre-existing recurrent connectivity within them. To investigate this idea, we connect sequences of assemblies in randomly connected spiking neuronal networks with a balance of excitation and inhibition. Simulations and analytical calculations show that recurrent connections within assemblies allow for a fast amplification of signals that indeed reduces the required number of inter-assembly connections. Replay can be evoked by small sensory-like cues or emerge spontaneously from activity fluctuations. Global, potentially neuromodulatory, alterations of neuronal excitability can switch between network states that favor retrieval and consolidation.
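The core idea, weak feed-forward links between assemblies boosted by recurrent amplification within each assembly, can be sketched with a toy rate model. All weights, thresholds, and the readout criterion below are illustrative assumptions, not the paper's spiking-network model.

```python
def replay_times(w_rec, w_ff=0.2, cue=0.5, n=5, steps=60):
    """Chain of n assemblies as a saturating rate model; returns, per
    assembly, the first time step at which its rate exceeds 0.9
    (None if it never does)."""
    x = [0.0] * n
    first = [None] * n
    for t in range(steps):
        new = []
        for i in range(n):
            drive = w_rec * x[i]                   # recurrent amplification
            if i > 0:
                drive += w_ff * x[i - 1]           # weak feed-forward link
            if i == 0 and t == 0:
                drive += cue                       # brief sensory-like cue
            new.append(min(1.0, max(0.0, drive)))  # saturating rate
        x = new
        for i in range(n):
            if first[i] is None and x[i] >= 0.9:
                first[i] = t
    return first

print(replay_times(w_rec=1.2))  # activation times increase along the chain
print(replay_times(w_rec=0.0))  # without recurrence, the replay dies out
```

The contrast between the two calls is the point: the feed-forward weight alone is too weak to hand activity from one assembly to the next, but supra-unity recurrent gain within each assembly amplifies the weak input until the wave propagates.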
Cell Reports · 84 citations · DOI
The distinctive firing pattern of grid cells in the medial entorhinal cortex (MEC) supports its role in the representation of space. It is widely believed that the hexagonal firing field of grid cells emerges from neural dynamics that depend on the local microcircuitry. However, local networks within the MEC are still not sufficiently characterized. Here, applying up to eight simultaneous whole-cell recordings in acute brain slices, we demonstrate the existence of unitary excitatory connections between principal neurons in the superficial layers of the MEC. In particular, we find prevalent feed-forward excitation from pyramidal neurons in layer III and layer II onto stellate cells in layer II, which might contribute to the generation or the inheritance of grid cell patterns.
Current Opinion in Neurobiology · 78 citations · DOI
Neuroscience · 77 citations · DOI
Frontiers in Systems Neuroscience · 72 citations · DOI
The understanding of tinnitus has progressed considerably in the past decade, but the details of the mechanisms that give rise to this phantom perception of sound without a corresponding acoustic stimulus have not yet been pinpointed. It is now clear that tinnitus is generated in the brain, not in the ear, and that it is correlated with pathologically altered spontaneous activity of neurons in the central auditory system. Both increased spontaneous firing rates and increased neuronal synchrony have been identified as putative neuronal correlates of phantom sounds in animal models, and both phenomena can be triggered by damage to the cochlea. Various mechanisms could underlie the generation of such aberrant activity. At the cellular level, decreased synaptic inhibition and increased neuronal excitability, which may be related to homeostatic plasticity, could lead to an over-amplification of natural spontaneous activity. At the network level, lateral inhibition could amplify differences in spontaneous activity, and structural changes such as reorganization of tonotopic maps could lead to self-sustained activity in recurrently connected neurons. However, it is difficult to disentangle the contributions of different mechanisms in experiments, especially since not all changes observed in animal models of tinnitus are necessarily related to tinnitus. Computational modeling presents an opportunity to evaluate these mechanisms and their relation to tinnitus. Here we review the computational models for the generation of neurophysiological correlates of tinnitus that have been proposed so far, evaluate their predictions, and compare them to available data. We also assess the limits of their explanatory power, thus demonstrating where an understanding is still lacking and where further research may be needed. Identifying appropriate models is important for finding therapies, and we therefore also summarize the implications of the models for approaches to treat tinnitus.
Proceedings of the National Academy of Sciences · 64 citations · DOI
Computational maps are of central importance to a neuronal representation of the outside world. In a map, neighboring neurons respond to similar sensory features. A well-studied example is the computational map of interaural time differences (ITDs), which is essential to sound localization in a variety of species and allows resolution of ITDs on the order of 10 μs. Nevertheless, it is unclear how such an orderly representation of temporal features arises. We address this problem by modeling the ontogenetic development of an ITD map in the laminar nucleus of the barn owl. We show how the owl's ITD map can emerge from a combined action of homosynaptic spike-based Hebbian learning and its propagation along the presynaptic axon. In spike-based Hebbian learning, synaptic strengths are modified according to the timing of pre- and postsynaptic action potentials. In unspecific axonal learning, a synapse's modification gives rise to a factor that propagates along the presynaptic axon and affects the properties of synapses at neighboring neurons. Our results indicate that both Hebbian learning and its presynaptic propagation are necessary for map formation in the laminar nucleus, but the latter can be orders of magnitude weaker than the former. We argue that the algorithm is important for the formation of computational maps, when, in particular, time plays a key role.
Frontiers in Systems Neuroscience · 62 citations · DOI
The entorhinal cortices in the temporal lobe of the brain are key structures relaying memory-related information between the neocortex and the hippocampus. The medial entorhinal cortex (MEC) routes spatial information, whereas the lateral entorhinal cortex (LEC) routes predominantly olfactory information to the hippocampus. Gamma oscillations are known to coordinate information transfer between brain regions by precisely timing population activity of neuronal ensembles. Here, we studied the organization of in vitro gamma oscillations in the MEC and LEC of the transgenic (tg) amyloid precursor protein (APP)-presenilin 1 (PS1) mouse model of Alzheimer's disease (AD) at 4-5 months of age. In vitro gamma oscillations using the kainate model peaked between 30-50 Hz, and therefore we analyzed the oscillatory properties in the 20-60 Hz range. Our results indicate that the LEC shows clear alterations in frequency and power of gamma oscillations at an early stage of AD as compared to the MEC. The gamma-frequency oscillation slows down in the LEC, and the gamma power in dorsal LEC is decreased as early as 4-5 months in the tg APP-PS1 mice. The results of this study suggest that the timing of olfactory inputs from LEC to the hippocampus might be affected at an early stage of AD, resulting in a possible erroneous integration of the information carried by the two input pathways to the hippocampal subfields.
Cerebral Cortex · 62 citations · DOI
Synaptic changes impair previously acquired memory traces. The smaller this impairment, the greater the longevity of memories. Two strategies have been suggested to keep memories from being overwritten too rapidly while preserving receptiveness to new contents: either introducing synaptic meta levels that store the history of synaptic state changes, or reducing the number of synchronously active neurons, which decreases interference. We find that synaptic metaplasticity indeed can prolong memory lifetimes, but only under the restriction that the neuronal population code is not too sparse. For sparse codes, metaplasticity may actually hinder memory longevity. This is important because in memory-related brain regions such as the hippocampus, population codes are sparse. Comparing 2 different synaptic cascade models with binary weights, we find that a serial topology of synaptic state transitions gives rise to larger memory capacities than a model with cross transitions. For the serial model, memory capacity is virtually independent of network size and connectivity.
Neural Computation · 56 citations · DOI
The CA3 region of the hippocampus is a recurrent neural network that is essential for the storage and replay of sequences of patterns that represent behavioral events. Here we present a theoretical framework to calculate a sparsely connected network's capacity to store such sequences. As in CA3, only a limited subset of neurons in the network is active at any one time, pattern retrieval is subject to error, and the resources for plasticity are limited. Our analysis combines an analytical mean field approach, stochastic dynamics, and cellular simulations of a time-discrete McCulloch-Pitts network with binary synapses. To maximize the number of sequences that can be stored in the network, we concurrently optimize the number of active neurons, that is, pattern size, and the firing threshold. We find that for one-step associations (i.e., minimal sequences), the optimal pattern size is inversely proportional to the mean connectivity c, whereas the optimal firing threshold is independent of the connectivity. If the number of synapses per neuron is fixed, the maximum number P of stored sequences in a sufficiently large, nonmodular network is independent of its number N of cells. On the other hand, if the number of synapses scales as the network size to the power of 3/2, the number of sequences P is proportional to N. In other words, sequential memory is scalable. Furthermore, we find that there is an optimal ratio r between silent and nonsilent synapses at which the storage capacity α = P/[c(1+r)N] assumes a maximum. For long sequences, the capacity of sequential memory is about one order of magnitude below the capacity for minimal sequences, but otherwise behaves similarly to the case of minimal sequences. In a biologically inspired scenario, the information content per synapse is far below theoretical optimality, suggesting that the brain trades off error tolerance against information content in encoding sequential memories.
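The capacity definition α = P/[c(1+r)N] can be inverted to estimate the number P of storable sequences for given network parameters. The numerical values below are arbitrary illustrations, not results from the paper.

```python
def stored_sequences(alpha, c, r, n):
    """Invert the storage capacity alpha = P / [c (1 + r) N] for P,
    the number of storable (minimal) sequences."""
    return alpha * c * (1.0 + r) * n

# Illustrative values (assumptions): capacity alpha = 0.05, connectivity
# c = 0.1, silent/nonsilent synapse ratio r = 1.
for n in (10_000, 100_000):
    print(n, stored_sequences(0.05, 0.1, 1.0, n))
# With the connectivity fraction c held fixed, P grows linearly in N;
# if instead the number of synapses per neuron is held fixed (c ~ 1/N),
# P stays constant, as stated in the abstract.
```

Note that c(1+r)N is just the total number of synapses (silent plus nonsilent) per neuron, so α measures stored sequences per synapse.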
Hearing Research · 53 citations · DOI
Neural Computation · 52 citations · DOI
Phase precession is a relational code that is thought to be important for episodic-like memory, for instance, the learning of a sequence of places. In the hippocampus, places are encoded through bursting activity of so-called place cells. The spikes in such a burst exhibit a precession of their firing phases relative to field potential theta oscillations (4-12 Hz); the theta phase of action potentials in successive theta cycles progressively decreases toward earlier phases. The mechanisms underlying the generation of phase precession are, however, unknown. In this letter, we show through mathematical analysis and numerical simulations that synaptic facilitation in combination with membrane potential oscillations of a neuron gives rise to phase precession. This biologically plausible model reproduces experimentally observed features of phase precession, such as (1) the progressive decrease of spike phases, (2) the nonlinear and often also bimodal relation between spike phases and the animal's place, (3) the range of phase precession being smaller than one theta cycle, and (4) the dependence of phase jitter on the animal's location within the place field. The model suggests that the peculiar features of the hippocampal mossy fiber synapse, such as its large efficacy, long-lasting and strong facilitation, and its phase-locked activation, are essential for phase precession in the CA3 region of the hippocampus.
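The proposed mechanism, a ramping (facilitating) drive summed with a subthreshold theta-frequency oscillation, can be sketched as follows; all parameters are illustrative assumptions. As the facilitating drive grows across successive theta cycles, the threshold crossing within a cycle occurs at progressively earlier phases.

```python
import math

THETA_HZ = 8.0
PERIOD = 1000.0 / THETA_HZ  # theta period in ms

def first_crossing_phase(ramp, amplitude=1.0, threshold=1.5, dt=0.1):
    """Phase (degrees) at which ramp + oscillation first exceeds threshold
    within one theta cycle; None if the sum stays subthreshold."""
    t = 0.0
    while t < PERIOD:
        phase = 2.0 * math.pi * t / PERIOD
        v = ramp + amplitude * math.sin(phase)
        if v >= threshold:
            return math.degrees(phase)
        t += dt
    return None

# A growing ramp stands in for synaptic facilitation building up as the
# animal crosses the place field; spikes occur at earlier and earlier phases:
for ramp in (0.6, 0.9, 1.2):
    print(ramp, first_crossing_phase(ramp))
```

This also reproduces the bounded phase range noted in the abstract: crossings are confined to the rising half of the oscillation, so precession spans less than a full theta cycle.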
Neural Computation · 52 citations · DOI
The CA3 region of the hippocampus is a recurrent neural network that is essential for the storage and replay of sequences of patterns that represent behavioral events. Here we present a theoretical framework to calculate a sparsely connected network's capacity to store such sequences. As in CA3, only a limited subset of neurons in the network is active at any one time, pattern retrieval is subject to error, and the resources for plasticity are limited. Our analysis combines an analytical mean field approach, stochastic dynamics, and cellular simulations of a time-discrete McCulloch-Pitts network with binary synapses. To maximize the number of sequences that can be stored in the network, we concurrently optimize the number of active neurons, that is, pattern size, and the firing threshold. We find that for one-step associations (i.e., minimal sequences), the optimal pattern size is inversely proportional to the mean connectivity c, whereas the optimal firing threshold is independent of the connectivity. If the number of synapses per neuron is fixed, the maximum number P of stored sequences in a sufficiently large, nonmodular network is independent of its number N of cells. On the other hand, if the number of synapses scales as the network size to the power of 3/2, the number of sequences P is proportional to N. In other words, sequential memory is scalable. Furthermore, we find that there is an optimal ratio r between silent and nonsilent synapses at which the storage capacity α = P/[c(1+r)N] assumes a maximum. For long sequences, the capacity of sequential memory is about one order of magnitude below the capacity for minimal sequences, but otherwise behaves similarly to the case of minimal sequences. In a biologically inspired scenario, the information content per synapse is far below theoretical optimality, suggesting that the brain trades off error tolerance against information content in encoding sequential memories.
Collaborations (5)
Confirmed researcher↔partner pairs from HU-FIS: gold-standard positives for the matching.
SFB 1315/2: A neuronal model of the development of schemas and their role in systems memory consolidation (TP B01)
university
SFB 1315/1: Neuronal foundations of systems memory consolidation (TP B01)
university
CompLS - Round 5 - Joint project ATLAS - AI and Simulation for Tumor Liver Assessment: development of an AI- and simulation-based clinical decision-support system for the diagnosis and treatment of liver tumors - Subproject B
university
"SimLivA II - Simulation-supported liver assessment for donor organs II: continuum-biomechanical modeling for assessing ischemia-reperfusion injury in liver transplantation and machine perfusion"
university
"SimLivA II - Simulation-supported liver assessment for donor organs II: continuum-biomechanical modeling for assessing ischemia-reperfusion injury in liver transplantation and machine perfusion"
university
Master data
Identity, organization, and contact details from HU-FIS.
- Name
- Prof. Dr. Richard Kempter
- Title
- Prof. Dr.
- Faculty
- Lebenswissenschaftliche Fakultät
- Institute
- Institut für Biologie
- Research group
- Theorie neuronaler Systeme
- Phone
- +49 30 2093-98404
- HU-FIS profile
- Source ↗
- Last scraped
- 26 Apr 2026, 01:07:11