the last neural cell
we write about BCI, AI and brain research.

authors:
@kovalev_alvi - visual neural interfaces - UMH, Spain | CEO of ALVI Labs
@Altime - comp neuro phd @ GTC Tübingen

Our chat: @neural_cell_chat
April 10
Review | Smart stimulation patterns for visual prostheses

🔘Towards biologically plausible phosphene simulation

tl;dr: Differentiable PyTorch simulator translating V1 stimulation to phosphene perception for end-to-end optimization
- Fully differentiable pipeline allowing optimization of all stimulation parameters via backpropagation
- Grounded in extensive experimental data.
- Bridges the gap between electrode-level stimulation and the resulting visual percept

link: https://doi.org/10.7554/eLife.85812
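The end-to-end idea can be sketched with a toy forward model: a NumPy stand-in for the paper's differentiable PyTorch simulator, where each electrode evokes a Gaussian phosphene and the stimulation amplitudes are fitted to a target percept by gradient descent. The electrode count, blob width, and learning rate here are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Toy linear phosphene model: each electrode evokes a Gaussian blob whose
# brightness scales with stimulation amplitude (illustrative assumption).
H = W = 32
rng = np.random.default_rng(0)
n_elec = 16
centers = rng.uniform(4, 28, size=(n_elec, 2))
yy, xx = np.mgrid[0:H, 0:W]
# One precomputed Gaussian "phosphene" basis map per electrode (sigma = 2 px).
basis = np.stack([np.exp(-((yy - cy)**2 + (xx - cx)**2) / (2 * 2.0**2))
                  for cy, cx in centers])            # (n_elec, H, W)

def render(amps):
    """Forward model: percept = amplitude-weighted sum of phosphene maps."""
    return np.tensordot(amps, basis, axes=1)         # (H, W)

# End-to-end optimization: fit amplitudes so the percept matches a target.
# The loss is quadratic, so the gradient is analytic (no autograd needed here).
target = np.zeros((H, W)); target[12:20, 12:20] = 1.0
amps = np.zeros(n_elec)
lr = 1e-3
for _ in range(200):
    err = render(amps) - target
    grad = 2 * np.tensordot(basis, err, axes=([1, 2], [0, 1]))
    amps -= lr * grad

loss = float(((render(amps) - target)**2).mean())    # should beat the zero init
```

The paper's simulator plays the role of `render` here, but is nonlinear and biologically grounded, which is exactly why it needs autograd rather than a hand-derived gradient.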

🔘Human-in-the-Loop Optimization for Visual Prostheses

tl;dr: Neural encoder + preferential Bayesian optimization.
- Train a deep stimulus encoder (DSE): transforms images -> stimulation.
- Add 13 "patient parameter" values as an additional input to the DSE.
- Uses preferential Bayesian optimization with a GP prior to update only the patient parameters, from binary comparisons alone
- Achieves 80% preference alignment after only 150 comparisons, despite 20% simulated noise in the human feedback

link: https://arxiv.org/abs/2306.13104
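A minimal sketch of the duel-based loop, with a simple accept/reject rule standing in for the paper's GP-based preferential Bayesian optimization. The 13-dimensional parameter vector, the 150-comparison budget, and the 20% feedback noise follow the post; everything else (step sizes, majority voting, the simulated patient) is assumed for illustration.

```python
import numpy as np

# Duel-based hill climber over 13 hypothetical "patient parameters",
# driven only by binary comparisons. No GP surrogate here, for brevity;
# majority voting over 3 duels per candidate tames the 20% feedback noise.
rng = np.random.default_rng(1)
true_params = rng.normal(size=13)     # unknown patient-specific optimum

def prefers(a, b, noise=0.2):
    """Simulated patient: prefers the candidate closer to true_params,
    but flips the answer with 20% probability (noisy feedback)."""
    better = np.linalg.norm(a - true_params) < np.linalg.norm(b - true_params)
    return better != (rng.random() < noise)

current = np.zeros(13)
step = 0.2
for _ in range(50):                   # 50 candidates x 3 duels = 150 comparisons
    candidate = current + rng.normal(scale=step, size=13)
    wins = sum(prefers(candidate, current) for _ in range(3))
    if wins >= 2:                     # accept on majority vote
        current = candidate
    step *= 0.99                      # slowly anneal the proposal scale

err_before = float(np.linalg.norm(np.zeros(13) - true_params))
err_after = float(np.linalg.norm(current - true_params))
```

The GP prior in the paper replaces this blind random search with a surrogate model of the patient's preferences, which is what makes 150 comparisons enough in practice.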

🔘MiSO: Optimizing brain stimulation for target neural states

tl;dr: ML system that predicts and optimizes multi-electrode stimulation to achieve specific neural activity patterns
- Utah array on monkey PFC
- One- or two-electrode stimulation with fixed frequency/amplitude
- Collect paired (stim, signals) data across multiple sessions
- Extract latent features using Factor Analysis (FA)
- Align latent spaces across sessions using Procrustes method
- Train CNN to predict latent states from stim patterns
- Apply epsilon-greedy optimizer to find optimal stimulation in closed-loop

link: https://www.nature.com/articles/s41467-023-42338-8
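The cross-session alignment step can be illustrated in isolation. Below is a generic orthogonal Procrustes sketch in NumPy; the latent dimensionality and the pure-rotation assumption are illustrative, not taken from the paper.

```python
import numpy as np

# Latent trajectories from two sessions differ by an unknown rotation of the
# factor space; the orthogonal Procrustes solution (via SVD) recovers the map.
rng = np.random.default_rng(0)
latents_day1 = rng.normal(size=(100, 5))   # 100 timepoints, 5 FA factors

# Simulate day 2: the same latent states seen through a rotated basis.
q, _ = np.linalg.qr(rng.normal(size=(5, 5)))
latents_day2 = latents_day1 @ q

# Orthogonal Procrustes: R = argmin_R ||day2 @ R - day1||_F, R orthogonal.
u, _, vt = np.linalg.svd(latents_day2.T @ latents_day1)
R = u @ vt

aligned = latents_day2 @ R
residual = float(np.abs(aligned - latents_day1).max())   # ~0 after alignment
```

With real recordings the alignment is only approximate (electrode drift, cell turnover), but the same SVD-based solution applies; it is what lets a CNN trained on earlier sessions keep predicting latent states in later ones.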

🔘Precise control with dynamically optimized electrical stimulation

tl;dr: Temporal dithering algorithm exploits neural integration window to enhance visual prosthesis performance by 40%
- Uses triphasic pulses at 0.1ms intervals optimized within neural integration time window (10-20ms)
- Implements spatial multiplexing with 200μm exclusion zones to prevent electrode interference
- Achieves 87% specificity in targeting ON vs OFF retinal pathways, solving a fundamental limitation of current implants

link: https://doi.org/10.7554/eLife.83424
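The multiplexing idea can be shown with a toy schedule. The bin width and window length loosely follow the numbers above; the real algorithm optimizes pulse placement within the integration window, whereas the fixed round-robin slots here are an illustrative simplification.

```python
import numpy as np

# Temporal dithering / spatial multiplexing: pulses on different electrodes
# are offset in 0.1 ms bins so no two electrodes fire simultaneously, yet
# each delivers its full charge within one ~10 ms neural integration window.
dt = 0.1                               # ms per time bin
window = int(10 / dt)                  # 10 ms integration window -> 100 bins
n_elec = 8
pulses = np.zeros((n_elec, window))
for e in range(n_elec):
    pulses[e, e] = 1.0                 # electrode e fires in its own bin

max_concurrent = int(pulses.sum(axis=0).max())   # no simultaneous pulses
charge_per_electrode = pulses.sum(axis=1)        # each still delivers 1 unit
```

Because retinal cells integrate over the whole window, the percept is driven by the summed charge, while staggering the pulses avoids the electric-field interference that simultaneous stimulation would cause.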

my thoughts
The field is finally moving beyond simplistic zap-and-see approaches. These papers tackle predicting perception, minimizing patient burden, targeting neural states, and improving power efficiency. What excites me most is how these methods could work together - imagine MiSO's targeting combined with human feedback and efficient stimulation patterns. The missing piece? Understanding how neural activity translates to actual perception. Current approaches optimize for either brain patterns OR what people see, not both. I think the next breakthrough will come from models that bridge this gap, perhaps using contrastive learning to connect brain recordings with what people actually report seeing.
April 15
What does it mean to understand the brain function?
In search of neuroscience paradigms [part 0 - introduction]

A lot of papers on brain function are published daily, spanning multiple levels of analysis. What I find interesting is that each study carries an implicit set of assumptions that belongs to a larger research program. As a result, different researchers mean different things when they talk about generating scientific insight.

This can lead to vastly different interpretations of the same experimental result. The biggest problem, in my opinion, is that these assumptions/paradigms are kept implicit, and researchers are sometimes not even aware of which theories they assume to be true while generating hypotheses and conducting experiments.

I will attempt to bridge this brain-science-to-"meta-science" gap in the next few posts, of course at the level of a beginning PhD student and from the perspective of a neuroscientist (within rather than above science) who seeks precision and awareness of the scientific frameworks we all choose to work within.

Neuroscience occupies a unique position in this regard: unlike physics, we don't really have a coherent picture unifying the different scales at which laws have been established. In fact, we rarely have laws and theories that are universally accepted. This is the beauty of being in this field, but also a curse, because hot debates are unavoidable.

So, in the next posts I will cover some of the old and emerging theories & frameworks about what it means to understand a biological neural network:

1. "Grandmother cells" & single-neuron frameworks
2. Cell-assemblies & Hebbian associations
3. Embodied & ecological cognition, naturalistic settings
4. Predictive coding & Bayesian brain
5. Feedforward processing & I/O relations, decoding
6. Dynamical systems & population codes
7. Connectomics & structural mapping
8. Computations in electric fields vs spiking
9. Cognitive modules vs distributed processing

What I won't cover for now, but maybe will later, is the philosophy of scientific insight (realism vs instrumentalism, functional vs mechanistic, reductionist vs holistic, explanation vs description). I also won't touch on computation in AI for now, though I might in the future when it becomes more relevant to my research.

Hopefully, after this post series you will gain something valuable to apply to your own work. Or, if you're simply curious about the field, you will learn about the existential troubles neuroscientists face 😉

Which topic would you like to read about first?

P.S. For those interested in an extended read, here is the paper that stimulated my deeper exploration. Frankly, I did not enjoy it much, but it definitely asked the right questions and forced me to try to prove the authors wrong.
June 17
The 2025 PNPL Competition: Speech Detection and Phoneme Classification in the LibriBrain Dataset

Another BCI competition, this time on decoding speech from MEG data.

The competition in brief:
Data: LibriBrain - 50+ hours of MEG from a single subject, 306 sensors
Deadlines:
- July 31, 2025: Speech Detection
- September 30, 2025: Phoneme Classification
- December 2025: presentation at NeurIPS
Prizes: at least $10k in prize money, awarded to the top 3 in each track.

What are we solving?
🔘Speech Detection - binary classification: speech or no speech (F1-macro, reference model 68%)
🔘Phoneme Classification - 39 phoneme classes (reference model 60%)

Links, so they don't get lost:
proposal
website
instruction
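For orientation, here is a hedged sketch of the Speech Detection setup, with the competition's F1-macro metric implemented from scratch. The data below is random noise and the power-threshold baseline is purely illustrative; only the 306-sensor count comes from the post, while the window length and sample count are invented.

```python
import numpy as np

# Assumed shapes: 306 MEG sensors (from the post); everything else invented.
rng = np.random.default_rng(0)
n_windows, n_sensors, n_times = 200, 306, 125
X = rng.normal(size=(n_windows, n_sensors, n_times))
y = rng.integers(0, 2, size=n_windows)      # 1 = speech present

# Trivial baseline: threshold mean sensor power (stand-in for a real model).
power = (X ** 2).mean(axis=(1, 2))
pred = (power > np.median(power)).astype(int)

def f1_macro(y_true, y_pred):
    """Macro-averaged F1: unweighted mean of per-class F1 (the track metric)."""
    scores = []
    for c in (0, 1):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * p * r / (p + r) if p + r else 0.0)
    return float(np.mean(scores))

score = f1_macro(y, pred)
```

On random data this baseline hovers around chance; the point is only to show the expected input shapes and the metric you are scored on.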