Frontiers in Computational Neuroscience
Time delays in computational models of neuronal and synaptic dynamics

No abstract
Read more...
Common characteristics of variants linked to autism spectrum disorder in the WAVE regulatory complex

Six variants associated with autism spectrum disorder (ASD) abnormally activate the WASP-family Verprolin-homologous protein (WAVE) regulatory complex (WRC), a critical regulator of actin dynamics. This abnormal activation may contribute to the pathogenesis of the disorder. Using molecular dynamics (MD) simulations, we recently investigated the structural dynamics of wild-type (WT) WRC and of the disease-linked variants R87C, A455P, and Q725R. Here, by extending the MD simulations to I664M, E665K, and...
Read more...
A hierarchical Bayesian inference model for volatile multivariate exponentially distributed signals

Brain activities often follow an exponential family of distributions. The exponential distribution is the maximum-entropy distribution for a non-negative continuous random variable with a fixed mean. Its memoryless and peakless (monotonically decreasing) density makes it difficult to handle for many data analysis methods. To estimate the rate parameter of a multivariate exponential distribution from a time series of sensory inputs (i.e., observations), we constructed a hierarchical Bayesian inference model...
Read more...
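As a rough illustration of the estimation problem described above, and not the authors' hierarchical model, the sketch below updates a conjugate Gamma posterior over the rate of a (univariate) exponential signal and uses an ad-hoc forgetting factor as a stand-in for volatility; the prior parameters, the forgetting factor, and the function name are illustrative assumptions.

import numpy as np

# Sketch only: conjugate Gamma(alpha, beta) prior on the rate lambda of an
# Exponential(lambda) likelihood. Each observation x gives the exact update
# alpha <- alpha + 1, beta <- beta + x; a forgetting factor < 1 discounts old
# evidence so the estimate can track a drifting (volatile) rate.
def update_rate_posterior(observations, alpha0=1.0, beta0=1.0, forget=0.99):
    alpha, beta = alpha0, beta0
    means = []
    for x in observations:
        alpha = forget * alpha + 1.0
        beta = forget * beta + x
        means.append(alpha / beta)  # posterior mean of the rate
    return np.array(means)

# Example: the true rate switches from 2.0 to 0.5 halfway through the series;
# the posterior mean drifts from ~2.0 toward ~0.5 after the switch.
rng = np.random.default_rng(0)
xs = np.concatenate([rng.exponential(1 / 2.0, 500), rng.exponential(1 / 0.5, 500)])
print(update_rate_posterior(xs)[[250, 990]])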
Exploring internal representations of self-supervised networks: few-shot learning abilities and comparison with human semantics and recognition of objects

Recent advances in self-supervised learning have attracted significant attention from both machine learning and neuroscience. This is primarily because self-supervised methods do not require annotated supervisory information, making them applicable to training artificial networks without relying on large amounts of curated data, and potentially offering insights into how the brain adapts to its environment in an unsupervised manner. Although several previous studies have elucidated the...
Read more...
From generative AI to the brain: five takeaways

The big strides seen in generative AI are based not on obscure algorithms but on clearly defined generative principles. The resulting concrete implementations have proven themselves in a large number of applications. We suggest that it is imperative to thoroughly investigate which of these generative principles may also be operative in the brain, and hence relevant for cognitive neuroscience. In addition, machine learning (ML) research has led to a range of interesting characterizations of neural...
Read more...
Simplex polynomial in complex networks and its applications to compute the Euler characteristic

In algebraic topology, a k-dimensional simplex is defined as a convex polytope consisting of k + 1 vertices. If spatial dimensionality is not considered, it corresponds to the complete graph with k + 1 vertices in graph theory. The alternating sum of the number of simplices across dimensions yields a topological invariant known as the Euler characteristic, which has gained significant attention due to its widespread application in fields such as topology, homology theory, complex systems, and...
Read more...
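As a quick illustration of the Euler characteristic mentioned above (not the paper's simplex-polynomial method), the sketch below treats every (k + 1)-clique of a graph as a k-dimensional simplex of its clique complex and evaluates chi = sum_k (-1)^k n_k; the function name is an illustrative assumption.

import networkx as nx

def euler_characteristic(G):
    # chi = n_0 - n_1 + n_2 - ..., where n_k counts the k-dimensional
    # simplices, i.e. the (k + 1)-cliques of the graph.
    chi = 0
    for clique in nx.enumerate_all_cliques(G):  # cliques of every size
        chi += (-1) ** (len(clique) - 1)
    return chi

# Example: a filled triangle has 3 vertices - 3 edges + 1 face, so chi = 1.
print(euler_characteristic(nx.complete_graph(3)))  # -> 1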
A neural network model combining the successor representation and actor-critic methods reveals effective biological use of the representation

In learning goal-directed behavior, state representation is important for adapting to the environment and achieving goals. A predictive state representation called the successor representation (SR) has recently attracted attention as a candidate for state representation in animal brains, especially in the hippocampus. The relationship between the SR and the animal brain has been studied, and several neural network models for computing the SR have been proposed based on these findings. However,...
Read more...
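As a minimal sketch of the successor representation itself, separate from the paper's combined SR/actor-critic network, the code below computes the SR of a fixed policy both in closed form, M = (I - gamma*T)^{-1} for the policy's state-transition matrix T, and via a TD(0)-style update; the function names and the toy three-state ring are illustrative assumptions.

import numpy as np

def sr_closed_form(T, gamma=0.95):
    # M[s, s'] = expected discounted number of future visits to s' from s.
    n = T.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * T)

def sr_td_update(M, s, s_next, gamma=0.95, lr=0.1):
    # TD(0) update of row s toward the target one_hot(s) + gamma * M[s_next].
    target = np.eye(M.shape[0])[s] + gamma * M[s_next]
    M[s] += lr * (target - M[s])
    return M

# Example: deterministic 3-state ring; for any reward vector r, the state
# values implied by the SR are simply V = M @ r.
T = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
M = sr_closed_form(T)
r = np.array([0.0, 0.0, 1.0])
print(M @ r)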