Multiscale intracranial EEG dynamics across sleep-wake states: toward memory-related processing
Sleep is known to support memory consolidation through a complex interplay of neural dynamics across multiple timescales. Using intracranial EEG (iEEG) recordings from patients undergoing clinical monitoring, we characterize spectral activity, neuronal avalanche dynamics, and temporal correlations across sleep-wake states, with a focus on their spatial distribution and potential functional relevance. We observe increased low-frequency power, larger avalanches, and enhanced long-range temporal...
Universal differential equations as a unifying modeling language for neuroscience
The rapid growth of large-scale neuroscience datasets has spurred diverse modeling strategies, ranging from mechanistic models grounded in biophysics, to phenomenological descriptions of neural dynamics, to data-driven deep neural networks (DNNs). Each approach offers distinct strengths: mechanistic models provide interpretability, phenomenological models capture emergent dynamics, and DNNs excel at predictive accuracy. Yet each also comes with limitations when applied in isolation. Universal...
Interleaving cortex-analog mixing improves deep non-negative matrix factorization networks
Considering biological constraints in artificial neural networks has led to dramatic improvements in performance. Nevertheless, to date, the positivity of long-range signals in the cortex has not been shown to yield improvements. While non-negative matrix factorization (NMF) captures the biological constraint of positive long-range interactions, deep convolutional neural networks with NMF modules do not match the performance of conventional convolutional neural networks (CNNs) of a similar size. This work shows...
Triboelectric nanogenerators for neural data interpretation: bridging multi-sensing interfaces with neuromorphic and deep learning paradigms
The rapid growth of computational neuroscience and brain-computer interface (BCI) technologies requires efficient, scalable, and biologically compatible approaches for neural data acquisition and interpretation. Traditional sensors and signal processing pipelines often struggle with the high dimensionality, temporal variability, and noise inherent in neural signals, particularly in elderly populations where continuous monitoring is essential. Triboelectric nanogenerators (TENGs), as self-powered...
Neural heterogeneity as a unifying mechanism for efficient learning in spiking neural networks
The brain is a highly diverse and heterogeneous network, yet the functional role of this neural heterogeneity remains largely unclear. Despite growing interest in neural heterogeneity, a comprehensive understanding of how it influences computation across different neural levels and learning methods is still lacking. In this work, we systematically examine the neural computation of spiking neural networks (SNNs) in three key sources of neural heterogeneity: external, network, and intrinsic...
Common characteristics of variants linked to autism spectrum disorder in the WAVE regulatory complex
Six variants associated with autism spectrum disorder (ASD) abnormally activate the WASP-family Verprolin-homologous protein (WAVE) regulatory complex (WRC), a critical regulator of actin dynamics. This abnormal activation may contribute to the pathogenesis of this disorder. Using molecular dynamics (MD) simulations, we recently investigated the structural dynamics of wild-type (WT) WRC and R87C, A455P, and Q725R WRC disease-linked variants. Here, by extending MD simulations to I664M, E665K, and...
A hierarchical Bayesian inference model for volatile multivariate exponentially distributed signals
Brain activities often follow an exponential family of distributions. The exponential distribution is the maximum entropy distribution of a continuous random variable for a given mean. The memoryless and peakless properties of an exponential distribution impose difficulties for data analysis methods. To estimate the rate parameter of a multivariate exponential distribution from a time series of sensory inputs (i.e., observations), we constructed a hierarchical Bayesian inference model...
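The base-level inference problem here admits a compact illustration. The sketch below is not the authors' hierarchical model (which additionally tracks volatility); it is a minimal conjugate-Bayesian example, assuming independent exponential observations in each dimension with a Gamma prior on each rate, so the posterior update is closed-form.

```python
import numpy as np

def gamma_posterior(observations, a0=1.0, b0=1.0):
    """Posterior Gamma(a, b) over the rate lambda, given x_i ~ Exp(lambda).

    Gamma is the conjugate prior for the exponential likelihood:
    a accumulates the observation count, b accumulates the observation sum.
    The posterior mean of lambda is a / b.
    """
    a = a0 + len(observations)
    b = b0 + float(np.sum(observations))
    return a, b

# Simulated stationary data with true rate 2.5 (scale = 1 / rate).
rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0 / 2.5, size=5000)
a, b = gamma_posterior(x)
print(a / b)  # posterior mean of the rate, close to 2.5
```

For a multivariate signal with independent components, this update is simply applied per dimension; handling volatile (time-varying) rates is exactly what requires the hierarchical machinery of the paper.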
Exploring internal representations of self-supervised networks: few-shot learning abilities and comparison with human semantics and recognition of objects
Recent advances in self-supervised learning have attracted significant attention from both machine learning and neuroscience. This is primarily because self-supervised methods do not require annotated supervisory information, making them applicable to training artificial networks without relying on large amounts of curated data, and potentially offering insights into how the brain adapts to its environment in an unsupervised manner. Although several previous studies have elucidated the...
From generative AI to the brain: five takeaways
The big strides seen in generative AI are based not on obscure algorithms but on clearly defined generative principles. The resulting concrete implementations have proven themselves in large numbers of applications. We suggest that it is imperative to thoroughly investigate which of these generative principles may also be operative in the brain, and hence relevant for cognitive neuroscience. In addition, ML research has led to a range of interesting characterizations of neural...
Simplex polynomial in complex networks and its applications to compute the Euler characteristic
In algebraic topology, a k-dimensional simplex is defined as a convex polytope consisting of k + 1 vertices. If spatial dimensionality is not considered, it corresponds to the complete graph with k + 1 vertices in graph theory. The alternating sum of the number of simplices across dimensions yields a topological invariant known as the Euler characteristic, which has gained significant attention due to its widespread application in fields such as topology, homology theory, complex systems, and...
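The alternating sum described above is easy to demonstrate directly. The sketch below is a brute-force illustration, not the paper's simplex-polynomial method: it treats every (k+1)-clique of a graph as a k-simplex and sums with alternating signs, which is tractable only for small graphs.

```python
from itertools import combinations

def euler_characteristic(vertices, edges):
    """Euler characteristic of the clique complex of a graph.

    A k-simplex is a clique on k + 1 vertices, and
    chi = N_0 - N_1 + N_2 - ...  (vertices - edges + triangles - ...).
    Brute-force subset enumeration: for illustration on small graphs only.
    """
    edge_set = {frozenset(e) for e in edges}
    chi = 0
    for k in range(1, len(vertices) + 1):  # size-k subsets are (k-1)-simplices
        for subset in combinations(vertices, k):
            if all(frozenset(p) in edge_set for p in combinations(subset, 2)):
                chi += (-1) ** (k - 1)
    return chi

# Triangle graph: 3 vertices, 3 edges, 1 triangle, so chi = 3 - 3 + 1 = 1.
print(euler_characteristic([0, 1, 2], [(0, 1), (1, 2), (0, 2)]))  # -> 1
```

A 4-cycle without diagonals gives chi = 4 - 4 = 0, since no triangles exist; efficient computation on large complex networks is the problem the paper addresses.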
A neural network model combining the successor representation and actor-critic methods reveals effective biological use of the representation
In learning goal-directed behavior, state representation is important for adapting to the environment and achieving goals. A predictive state representation called the successor representation (SR) has recently attracted attention as a candidate for state representation in animal brains, especially in the hippocampus. The relationship between the SR and the animal brain has been studied, and several neural network models for computing the SR have been proposed based on the findings. However,...
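The SR itself has a standard tabular definition that a short sketch can make concrete. The code below is not the authors' neural network model; it is a minimal TD(0)-style learner of the SR matrix M, where M[s, j] estimates the expected discounted future occupancy of state j from state s, checked against the analytic fixed point (I - gamma T)^(-1) on a toy deterministic cycle.

```python
import numpy as np

def td_successor_representation(n_states, next_state, gamma=0.9,
                                alpha=0.1, n_steps=20000, seed=0):
    """Learn the successor representation M with a TD(0)-style update."""
    rng = np.random.default_rng(seed)
    M = np.zeros((n_states, n_states))
    s = int(rng.integers(n_states))
    for _ in range(n_steps):
        s_next = next_state(s)
        onehot = np.eye(n_states)[s]
        # TD update: M(s, .) += alpha * [ 1_s + gamma * M(s', .) - M(s, .) ]
        M[s] += alpha * (onehot + gamma * M[s_next] - M[s])
        s = s_next
    return M

# Toy environment: deterministic 3-state cycle 0 -> 1 -> 2 -> 0.
gamma = 0.9
M = td_successor_representation(3, lambda s: (s + 1) % 3, gamma=gamma)

# Analytic SR for this transition matrix: M* = (I - gamma * T)^(-1).
T = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
M_true = np.linalg.inv(np.eye(3) - gamma * T)
print(np.max(np.abs(M - M_true)))  # small after convergence
```

The paper's contribution concerns how such a representation can be computed and exploited by biologically plausible network mechanisms combined with actor-critic learning, which this tabular sketch deliberately leaves out.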
Read more...
In learning goal-directed behavior, state representation is important for adapting to the environment and achieving goals. A predictive state representation called successive representation (SR) has recently attracted attention as a candidate for state representation in animal brains, especially in the hippocampus. The relationship between the SR and the animal brain has been studied, and several neural network models for computing the SR have been proposed based on the findings. However,...
Read more...