I was glad to attend the spring school on neurointerfaces — finally received the certificate!

Certificate

On the one hand, I’m amazed at how far science has come; on the other, at how much still remains to be done.

Some brief takeaways for myself (perhaps later I’ll expand on these, still processing the information):

When it comes to controlling prosthetics or other devices with neurointerfaces, it turns out that tapping the brain directly isn’t all that convenient. None of the existing solutions is particularly efficient, because:

1.1 The algorithms that decode signals from brain-implanted chips have to be retrained daily: neurons in the brain shift position, and what the chip was reading from a few hours ago is no longer in the same place.

1.2 Chips implanted in the brain (like the Utah Array or Elon Musk’s design) eventually get surrounded by scar tissue, which significantly interferes with reading electrical signals. They typically last at most 4 years (up to 8 in rare cases), but often much less.

1.3 There’s a lot of noise in the signals.
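The daily-retraining problem from point 1.1 can be illustrated with a toy model. This is purely a hypothetical sketch of my own, not anything shown at the school: a linear least-squares decoder maps simulated neural features to an intended 2-D movement, and "drift" is faked by perturbing the true mixing matrix between sessions, standing in for neurons shifting relative to the electrodes. The stale decoder's error blows up on the second day until it is refit on fresh calibration data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all numbers invented for illustration): 32 recorded
# channels, 2-D intended movement, 500 samples per session.
n_channels, n_targets, n_samples = 32, 2, 500

true_w_day1 = rng.normal(size=(n_channels, n_targets))
# Simulated overnight drift: the tuning matrix shifts between sessions.
true_w_day2 = true_w_day1 + rng.normal(scale=0.5, size=(n_channels, n_targets))

def session(true_w, n=n_samples):
    """Generate one recording session: noisy features -> intended movement."""
    x = rng.normal(size=(n, n_channels))
    y = x @ true_w + rng.normal(scale=0.1, size=(n, n_targets))
    return x, y

x1, y1 = session(true_w_day1)
x2, y2 = session(true_w_day2)

# Fit a least-squares decoder on day-1 calibration data.
w_hat, *_ = np.linalg.lstsq(x1, y1, rcond=None)

def mse(x, y, w):
    return float(np.mean((x @ w - y) ** 2))

print("day-1 error, day-1 decoder:", mse(x1, y1, w_hat))
print("day-2 error, stale decoder:", mse(x2, y2, w_hat))

# Recalibrating on a fresh block of day-2 data restores performance.
w_new, *_ = np.linalg.lstsq(x2, y2, rcond=None)
print("day-2 error, recalibrated:", mse(x2, y2, w_new))
```

The point of the sketch is only that when the mapping between what the electrodes see and what the user intends keeps shifting, any fixed decoder decays and recalibration becomes part of the routine.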

A similar issue exists with the myoelectric sensors used in bionic prostheses (sensors that respond to changes in muscle potential): the algorithms decoding control signals also have to be retrained every few months, since conductivity in the muscles changes over time. Still, the situation here is better than with invasive brain interfaces.

From this, it seems that for controlling physical devices, it might make more sense to focus on neurointerfaces that operate closer to the periphery. Of course, for some patients that won’t be an option, but still.

In general, each lecture was packed with links, terminology, and knowledge that still needs to be dissected: read, reflected upon, and “argued with.” I hadn’t paid as much attention to non-invasive types before, although there was some material on those too.

(Translated with ChatGPT from the original version.)