Frontiers in Computational Neuroscience
Editorial: Advancements in smart diagnostics for understanding neurological behaviors and biosensing applications

No abstract
Read more...
Effects of AC induced electric fields on neuronal firing sensitivity and activity patterns

INTRODUCTION: Understanding how neurons respond to time-varying electric fields is essential for both basic neuroscience and the development of neuromodulation strategies. However, the mechanisms by which alternating-current induced electric fields (AC-IEF) influence neuronal sensitivity and firing remain unclear.
Read more...
Intrinsic calcium resonance and its modulation: insights from computational modeling

Hippocampal neurons generate membrane potential resonance due to specific voltage-gated ion channels, known as resonating conductances, which play crucial physiological roles. However, it is not known whether this phenomenon of resonance is limited to membrane voltage or whether it propagates through molecular signaling components such as calcium dynamics. To test this, we first utilized a single-compartment model neuron to study the oscillatory intrinsic calcium response dynamics of hippocampal...
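The band-pass behavior produced by a resonating conductance can be illustrated with a linearized single-compartment model: the resonating branch acts like a resistor-inductor pair in parallel with the leak and capacitance, and sweeping the impedance across frequency reveals a peak away from zero. A minimal sketch, with illustrative dimensionless parameters that are not taken from the article:

```python
import numpy as np

# Linearized single-compartment membrane: leak conductance G and capacitance C
# in parallel with a resonating branch modeled as a resistor-inductor pair (r, L).
# All values are illustrative, dimensionless units.
G, C, r, L = 1.0, 1.0, 0.5, 10.0

omega = np.linspace(0.01, 2.0, 500)                   # angular frequency sweep
Y = G + 1j * omega * C + 1.0 / (r + 1j * omega * L)   # total membrane admittance
Z = np.abs(1.0 / Y)                                   # impedance magnitude |Z(omega)|

f_res = omega[np.argmax(Z)]                           # resonance frequency (peak of |Z|)
```

The interior peak of |Z| is the signature of resonance; in a fuller model, calcium dynamics driven by this voltage response could inherit a similar band-pass profile.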
Read more...
Circuit-level modeling of prediction error computation of multi-dimensional features in voluntary actions

INTRODUCTION: Predictive processing posits that the brain minimizes discrepancies between internal predictions and sensory inputs, offering a unifying account of perception, cognition, and action. In voluntary actions, it is thought to suppress self-generated sensory outcomes. Although sensory mismatch signals have been extensively investigated and modeled, mechanistic insights into the neural computation of predictive processing in voluntary actions remain limited.
Read more...
CRISP: a correlation-filtered recursive feature elimination and integration of SMOTE pipeline for gait-based Parkinson's disease screening

CONCLUSION: CRISP is the first VGRF-based pipeline to combine correlation-filtered feature pruning, recursive feature elimination, and SMOTE to enhance PD detection performance, while also introducing a subject-wise evaluation protocol that captures patient-level variability for truly personalized diagnostics. These twin novelties deliver clinically significant gains and lay the foundation for real-time, on-device PD detection and severity monitoring.
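Two of CRISP's three components, correlation-filtered pruning and SMOTE-style oversampling, can be sketched in plain numpy (this is an illustrative reimplementation on synthetic data, not the authors' pipeline, and it omits the recursive feature elimination step):

```python
import numpy as np

def correlation_filter(X, threshold=0.9):
    """Keep a feature only if its |Pearson r| with every already-kept feature is <= threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return X[:, keep], keep

def smote_like(X_min, n_new, k=3, rng=None):
    """Synthesize minority-class samples by interpolating toward one of k nearest neighbours."""
    if rng is None:
        rng = np.random.default_rng(0)
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        j = rng.choice(np.argsort(d)[1:k + 1])   # skip the point itself
        lam = rng.random()
        new.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(new)

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 4))
redundant = X[:, :1] * 0.99 + 0.01 * rng.standard_normal((40, 1))
X = np.hstack([X, redundant])                    # feature 4 nearly duplicates feature 0
X_filtered, kept = correlation_filter(X)         # drops the redundant copy
synthetic = smote_like(X[:10], n_new=20, rng=rng)
```

In a full pipeline, the filtered features would then pass through recursive feature elimination before the oversampled training set is handed to a classifier.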
Read more...
Neuron synchronization analyzed through spatial-temporal attention

Neuronal synchronization refers to the temporal coordination of activity across populations of neurons, a process that underlies coherent information processing, supports the encoding of diverse sensory stimuli, and facilitates adaptive behavior in dynamic environments. Previous studies of synchronization have predominantly emphasized rate coding and pairwise interactions between neurons, which have provided valuable insights into emergent network phenomena but remain insufficient for capturing...
Read more...
Using noise to distinguish between system and observer effects in multimodal neuroimaging

INTRODUCTION: It has become increasingly common to record brain activity simultaneously at more than one spatiotemporal scale. Here, we address a central question raised by such cross-scale datasets: do they reflect the same underlying dynamics observed in different ways, or different dynamics observed in the same way? In other words, to what extent can variation between modalities be attributed to system-level versus observer-level effects? System-level effects reflect genuine differences in...
Read more...
Modeling cognition through adaptive neural synchronization: a multimodal framework using EEG, fMRI, and reinforcement learning

CONCLUSION: This work provides a novel and testable approach to modeling thinking as a biologically constrained control problem and lays the groundwork for future applications in cognitive modeling and brain-computer interfaces.
Read more...
Advancing epileptic seizure recognition through bidirectional LSTM networks

Timely and accurate seizure detection remains a primary challenge in clinical neurology, affecting diagnostic planning and patient management. Most traditional methods rely on feature extraction and classical machine learning techniques, which are inefficient at capturing the dynamic characteristics of neural signals. This study aims to address these limitations by designing a deep learning model based on bidirectional Long Short-Term Memory (BiLSTM) networks in a...
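A bidirectional LSTM processes each signal window in both temporal directions and concatenates the hidden states, so every time step sees past and future context. A minimal numpy sketch of the forward pass (random weights and toy shapes only; not the article's trained model):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_pass(x, W, U, b):
    """Run one LSTM direction over x of shape (T, d_in); returns hidden states (T, H)."""
    T = x.shape[0]
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    out = np.zeros((T, H))
    for t in range(T):
        z = W @ x[t] + U @ h + b
        i, f, o, g = np.split(z, 4)              # input, forget, output gates + candidate
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
        out[t] = h
    return out

def bilstm(x, fwd, bwd):
    """Concatenate forward and time-reversed backward hidden states."""
    h_f = lstm_pass(x, *fwd)
    h_b = lstm_pass(x[::-1], *bwd)[::-1]
    return np.concatenate([h_f, h_b], axis=1)

rng = np.random.default_rng(0)
d_in, H, T = 8, 16, 50
make = lambda: (0.1 * rng.standard_normal((4 * H, d_in)),
                0.1 * rng.standard_normal((4 * H, H)),
                np.zeros(4 * H))
eeg = rng.standard_normal((T, d_in))             # one toy EEG window
feats = bilstm(eeg, make(), make())              # (T, 2H) bidirectional features
```

A classification head over these features (e.g., pooling plus a linear layer) would complete a seizure detector.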
Read more...
An AI methodology to reduce training intensity, error rates, and size of neural networks

Massive computing systems are required to train neural networks, and the prodigious amount of energy they consume makes AI development a significant source of pollution. Despite the enormous training effort, neural network error rates limit their use in medical applications, because errors can lead to intolerable morbidity and mortality. Two factors contribute to the excessive training requirements and high error rates: an iterative reinforcement process (tuning) that does not guarantee convergence...
Read more...
Sudden restructuring of memory representations in recurrent neural networks with repeated stimulus presentations

While acquisition curves in human learning averaged at the group level display smooth, gradual changes in performance, individual learning curves across cognitive domains reveal sudden, discontinuous jumps in performance. Similar thresholding effects are a hallmark of a range of nonlinear systems which can be explored using simple, abstract models. Here, I investigate discontinuous changes in learning performance using Amari-Hopfield networks with Hebbian learning rules which are repeatedly...
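The basic setup can be sketched as a plain Amari-Hopfield network with one-shot Hebbian storage and sign-threshold recall (network size, load, and corruption level here are illustrative, not the article's):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5                                  # neurons, stored patterns (low load)
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N                # one-shot Hebbian weight matrix
np.fill_diagonal(W, 0)                         # no self-connections

def recall(state, steps=10):
    """Synchronous sign updates, ideally settling into a stored attractor."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

noisy = patterns[0].astype(float)
flip = rng.choice(N, size=10, replace=False)
noisy[flip] *= -1                              # corrupt 10% of the bits
overlap = recall(noisy) @ patterns[0] / N      # 1.0 means perfect retrieval
```

Repeated presentations in the article's setting amount to re-applying the Hebbian update for the same stimulus, gradually deepening one attractor at the expense of others.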
Read more...
Multiscale intracranial EEG dynamics across sleep-wake states: toward memory-related processing

Sleep is known to support memory consolidation through a complex interplay of neural dynamics across multiple timescales. Using intracranial EEG (iEEG) recordings from patients undergoing clinical monitoring, we characterize spectral activity, neuronal avalanche dynamics, and temporal correlations across sleep-wake states, with a focus on their spatial distribution and potential functional relevance. We observe increased low-frequency power, larger avalanches, and enhanced long-range temporal...
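Neuronal avalanches are conventionally extracted from binned population activity as contiguous runs of suprathreshold bins, with the avalanche size given by the summed activity of the run. A minimal sketch (thresholds and toy data are illustrative, not the article's settings):

```python
import numpy as np

def avalanche_sizes(activity, thresh=0.0):
    """Sum activity over each contiguous run of suprathreshold bins."""
    sizes, current = [], 0.0
    for v in activity:
        if v > thresh:
            current += v
        elif current > 0:
            sizes.append(current)
            current = 0.0
    if current > 0:
        sizes.append(current)
    return sizes

# toy binned population activity containing three avalanches
a = np.array([0, 2, 3, 0, 1, 0, 5, 5, 5, 0], dtype=float)
sizes = avalanche_sizes(a, thresh=0.5)         # → [5.0, 1.0, 15.0]
```

Comparing the size distributions of such events across sleep-wake states is what allows statements like "larger avalanches during sleep."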
Read more...
Universal differential equations as a unifying modeling language for neuroscience

The rapid growth of large-scale neuroscience datasets has spurred diverse modeling strategies, ranging from mechanistic models grounded in biophysics, to phenomenological descriptions of neural dynamics, to data-driven deep neural networks (DNNs). Each approach offers distinct strengths: mechanistic models provide interpretability, phenomenological models capture emergent dynamics, and DNNs excel at predictive accuracy; yet each comes with limitations when applied in isolation. Universal...
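The core idea of a universal differential equation, a mechanistic vector field augmented by a learnable neural-network term, can be sketched in a few lines. Here the network has fixed random weights and the integrator is plain Euler; a real UDE would fit the network parameters to data with a differentiable solver:

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = 0.1 * rng.standard_normal((16, 2)), np.zeros(16)
W2, b2 = 0.1 * rng.standard_normal((2, 16)), np.zeros(2)

def nn_term(x):
    """Small random-weight MLP standing in for the unknown part of the dynamics."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

def vector_field(x):
    mech = np.array([x[1], -x[0]])            # known mechanistic part (harmonic oscillator)
    return mech + nn_term(x)                  # UDE: mechanistic + learnable correction

def euler(x0, dt, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * vector_field(xs[-1]))
    return np.array(xs)

traj = euler(np.array([1.0, 0.0]), dt=0.01, steps=500)
```

The appeal as a unifying language is that the same structure interpolates between the three modeling regimes: a pure mechanistic model (no network term), a pure black box (no mechanistic term), or any hybrid in between.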
Read more...
Interleaving cortex-analog mixing improves deep non-negative matrix factorization networks

Considering biological constraints in artificial neural networks has led to dramatic improvements in performance. Nevertheless, to date, the positivity of long-range signals in the cortex has not been shown to yield improvements. While non-negative matrix factorization (NMF) captures the biological constraint of positive long-range interactions, deep convolutional networks with NMF modules do not match the performance of conventional convolutional neural networks (CNNs) of a similar size. This work shows...
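The non-negativity at the heart of NMF mirrors the positivity of long-range cortical signals, and the classic Lee-Seung multiplicative updates preserve it by construction. A generic NMF sketch on random data (not the article's deep network):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((20, 30))              # non-negative data matrix
k = 5                                 # factorization rank
W = rng.random((20, k)) + 0.1         # positive initial factors
H = rng.random((k, 30)) + 0.1
err0 = np.linalg.norm(V - W @ H)      # reconstruction error before fitting

for _ in range(200):                  # Lee-Seung multiplicative updates
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

err = np.linalg.norm(V - W @ H)       # error after fitting
```

Because the updates only multiply by non-negative ratios, W and H can never go negative, which is what makes NMF a natural module for modeling positive long-range interactions.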
Read more...
Triboelectric nanogenerators for neural data interpretation: bridging multi-sensing interfaces with neuromorphic and deep learning paradigms

The rapid growth of computational neuroscience and brain-computer interface (BCI) technologies requires efficient, scalable, and biologically compatible approaches for neural data acquisition and interpretation. Traditional sensors and signal-processing pipelines often struggle with the high dimensionality, temporal variability, and noise inherent in neural signals, particularly in elderly populations where continuous monitoring is essential. Triboelectric nanogenerators (TENGs), as self-powered...
Read more...
Neural heterogeneity as a unifying mechanism for efficient learning in spiking neural networks

The brain is a highly diverse and heterogeneous network, yet the functional role of this neural heterogeneity remains largely unclear. Despite growing interest in neural heterogeneity, a comprehensive understanding of how it influences computation across different neural levels and learning methods is still lacking. In this work, we systematically examine the neural computation of spiking neural networks (SNNs) in three key sources of neural heterogeneity: external, network, and intrinsic...
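As a minimal illustration of intrinsic heterogeneity (one of the three sources the article examines), a population of leaky integrate-and-fire neurons with diverse membrane time constants produces a spread of firing rates under identical drive. All parameters here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, steps, dt = 50, 1000, 1e-3                 # neurons, time bins, bin width (s)
tau = rng.uniform(0.01, 0.05, n)              # heterogeneous membrane time constants (s)
v = np.zeros(n)
v_th, v_reset, I = 1.0, 0.0, 1.2              # threshold, reset, common suprathreshold drive
spike_count = np.zeros(n)

for _ in range(steps):
    v += dt / tau * (I - v)                   # leaky integration toward the drive
    fired = v >= v_th
    spike_count += fired
    v[fired] = v_reset                        # reset after a spike

rates = spike_count / (steps * dt)            # firing rate in Hz
```

Neurons with faster time constants reach threshold sooner and fire at higher rates, so a single shared input is already encoded with population-level diversity; the article's question is how such diversity interacts with learning.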
Read more...
Time delays in computational models of neuronal and synaptic dynamics

No abstract
Read more...
Common characteristics of variants linked to autism spectrum disorder in the WAVE regulatory complex

Six variants associated with autism spectrum disorder (ASD) abnormally activate the WASP-family Verprolin-homologous protein (WAVE) regulatory complex (WRC), a critical regulator of actin dynamics. This abnormal activation may contribute to the pathogenesis of this disorder. Using molecular dynamics (MD) simulations, we recently investigated the structural dynamics of wild-type (WT) WRC and R87C, A455P, and Q725R WRC disease-linked variants. Here, by extending MD simulations to I664M, E665K, and...
Read more...
A hierarchical Bayesian inference model for volatile multivariate exponentially distributed signals

Brain activities often follow an exponential family of distributions. The exponential distribution is the maximum entropy distribution of continuous random variables with a given mean. Its memoryless and peakless properties impose difficulties for data analysis methods. To estimate the rate parameter of a multivariate exponential distribution from a time series of sensory inputs (i.e., observations), we constructed a hierarchical Bayesian inference model...
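In the one-dimensional, static case the rate of an exponential distribution has a conjugate Gamma posterior, which makes the basic estimation step explicit. This sketch deliberately omits the volatility and hierarchy that the article's model adds:

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 2.0
data = rng.exponential(scale=1.0 / true_rate, size=500)   # observed samples

a0, b0 = 1.0, 1.0                     # Gamma(shape, rate) prior on the rate parameter
a_post = a0 + data.size               # conjugate update: shape += number of observations
b_post = b0 + data.sum()              # conjugate update: rate  += sum of observations
post_mean = a_post / b_post           # posterior mean estimate of the rate
```

A volatile, hierarchical model replaces this one-shot update with a recursive one in which the rate itself is allowed to drift over time and higher levels track that drift.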
Read more...
Exploring internal representations of self-supervised networks: few-shot learning abilities and comparison with human semantics and recognition of objects

Recent advances in self-supervised learning have attracted significant attention from both machine learning and neuroscience. This is primarily because self-supervised methods do not require annotated supervisory information, making them applicable to training artificial networks without relying on large amounts of curated data, and potentially offering insights into how the brain adapts to its environment in an unsupervised manner. Although several previous studies have elucidated the...
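Few-shot ability of a frozen representation is often measured with a nearest-prototype probe: class centroids computed from a handful of support embeddings classify queries by distance. A toy sketch with Gaussian clusters standing in for self-supervised features (entirely synthetic; not the article's networks or data):

```python
import numpy as np

rng = np.random.default_rng(3)
n_cls, dim, shots = 3, 16, 5
centers = 3.0 * rng.standard_normal((n_cls, dim))   # toy class structure

def embed(cls, n):
    """Stand-in for frozen self-supervised embeddings of class `cls`."""
    return centers[cls] + rng.standard_normal((n, dim))

prototypes = np.stack([embed(c, shots).mean(axis=0) for c in range(n_cls)])  # 5-shot centroids
query_x = np.concatenate([embed(c, 20) for c in range(n_cls)])
query_y = np.repeat(np.arange(n_cls), 20)

dists = ((query_x[:, None] - prototypes[None]) ** 2).sum(axis=-1)  # (queries, classes)
pred = dists.argmin(axis=1)
acc = (pred == query_y).mean()
```

High probe accuracy with few shots indicates that the representation already separates the classes, which is one way such studies compare network embeddings with human semantic structure.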
Read more...