Prof. Jan Antolik’s path into computational neuroscience began in artificial intelligence, at a time when the field was far from its current prominence. What started as a detour to better understand neural networks became a long-term commitment to studying the brain itself. Today, his work sits at the intersection of biologically grounded modeling and neurotechnology, with a particular focus on how cortical circuits give rise to perception and how they might be engaged to restore vision.
In this interview, he reflects on the ideas that shaped his approach, the challenges of linking stimulation to perception, and the broader vision behind large-scale collaborative efforts like REVICAN.
Interviewer (Michael Beyeler): Thank you so much for taking the time to speak with us. What first drew you to computational neuroscience, and why the visual system in particular?
Prof. Antolik: My original background is in artificial intelligence, but by the end of my master’s studies the field was still in the depths of the AI winter. Machine learning was dominated by Bayesian approaches, and there were few compelling examples of AI systems being used effectively in practice. Even though the brain itself is proof that neural networks can produce powerful intelligence, at the time they were often dismissed as marginal toys for those less mathematically inclined. I was convinced they had to work, so I decided to take a detour and do a PhD in computational neuroscience. I hoped to learn from the brain and eventually bring those insights back to artificial systems. But, as often happens, the detour became a destination in its own right. I became deeply fascinated by the brain’s inner workings, and I remain so to this day. My connection to artificial intelligence was not lost entirely, as my group now uses modern artificial neural networks extensively in our work.
Interviewer: Was there a moment when you realized your work on cortical models could contribute to restoring vision?
Prof. Antolik: I owe this to my former postdoc supervisor, Ryad Benosman, who recognized that my work on biologically detailed models of the visual system could be relevant for vision restoration. He brought me onto a DARPA-funded project focused on restoring vision via cortical implants. I was immediately hooked. This was the first time I could clearly see how my work, which until then had been purely basic science, might translate into something with real-world impact.
“If we wish to intervene in the brain, it is not enough to know what computations we wish to impact [...] we have to take the biology seriously.”
Interviewer: What shaped your approach to building biologically realistic models rather than more abstract ones?
Prof. Antolik: This goes back to my early conviction that the biological brain can teach us a great deal about how to build intelligent systems. If that is true, then we have to take biology seriously, rather than treat it as an inconvenient obstacle on the way to simpler theories or cleaner models. This idea has become a cornerstone of how we approach modeling in my group. I believe that this commitment to biological realism is what sets our models apart and makes them particularly useful for neurotechnology.
Interviewer: You build large-scale, data-driven spiking models of the primary visual cortex. In simple terms, what do these models capture that simpler approaches miss?
Prof. Antolik: In computational neuroscience, the first question is often what computation a given neural subsystem performs. In principle, that question can be answered without incorporating all the biological details, for example by showing that a model captures the system’s input-output relationship. In practice, however, we rarely achieve such a perfect fit. More often, the field works with imperfect models, and we are not always sure how closely they reflect biological reality.
One advantage of incorporating more biological detail is that it gives us an additional set of constraints. Those constraints increase our confidence that, even if the model is still imperfect, it remains close to the biological system we are trying to understand.
The second, and perhaps even more important, advantage is that truly understanding the brain is not just about knowing what computations it performs, but also how those computations are implemented in the underlying neural substrate. Simpler models often cannot provide that level of mechanistic insight. This becomes especially important in clinical applications such as vision restoration, where interventions must operate at the level of the actual biology of the system, not just at an abstract computational level. To put it simply, if we wish to intervene in the brain, it is not enough to know what computations we wish to impact; we must also know which specific biological elements implement those computations, and how to manipulate them.
Interviewer: How can detailed models of V1 help us predict what electrically or optogenetically evoked percepts might look like?
Prof. Antolik: At the moment, our detailed V1 models cannot directly predict what a person will perceive when we stimulate the cortex, electrically or optogenetically. What they can do is tell us, in mechanistic detail, how a given stimulation pattern recruits neurons and propagates through the local cortical network. We can then compare those evoked activity patterns to the patterns produced by natural vision.
The underlying idea is straightforward. If we design stimulation strategies that drive V1 into activity states that are closer to those produced by natural visual stimuli, we have a better chance of eliciting percepts that are closer to the intended natural percept. Today, however, we can evaluate “closeness” mainly in neural activity space, not in perceptual space.
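As a rough illustration of what "closeness" in neural activity space could mean in practice, the sketch below scores a candidate stimulation-evoked population response against the response a natural stimulus would produce, using cosine distance between firing-rate vectors. Everything here, including the choice of metric, the function name, and the synthetic data, is illustrative rather than drawn from Antolik's models.

```python
import numpy as np

def activity_distance(evoked: np.ndarray, natural: np.ndarray) -> float:
    """Cosine distance between two population activity vectors.

    evoked, natural: firing rates (Hz) of the same N neurons under
    stimulation and under the natural stimulus, respectively.
    Returns 0 for identical patterns, up to 2 for opposed ones.
    """
    num = float(evoked @ natural)
    denom = np.linalg.norm(evoked) * np.linalg.norm(natural) + 1e-12
    return 1.0 - num / denom

# Example: score a hypothetical stimulation pattern against the target
# activity that the intended natural stimulus would produce.
rng = np.random.default_rng(0)
target = rng.gamma(shape=2.0, scale=5.0, size=1000)   # placeholder natural response
candidate = target + rng.normal(0.0, 5.0, size=1000)  # placeholder evoked response
candidate = np.clip(candidate, 0, None)               # firing rates are non-negative
print(f"distance: {activity_distance(candidate, target):.3f}")
```

In a real optimization loop one would search over stimulation parameters to minimize such a distance; the point of the toy example is only to show that "closeness" is evaluated between population activity vectors, not between images.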
Looking ahead, one promising direction is to use modern machine-learning methods to learn the relationship between patterns of V1 activity and the visual percepts they elicit. If that works, it could provide a practical bridge from modeled neural activity to predicted percepts. But it is still an open question how accurate such predictions can be, and crucially, how well a decoder trained on naturally evoked activity would generalize to artificially evoked activity.
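A deliberately simplified sketch of the kind of decoder described above, assuming paired recordings of natural images and the V1 population activity they evoke: a ridge-regression readout from firing rates to pixel intensities. The data, dimensions, and regularization strength are all placeholders, and the open question raised above is precisely whether such a readout, fit on natural vision, would transfer to artificially evoked activity.

```python
import numpy as np

# Hypothetical training data: population activity recorded during
# natural vision (X) paired with the images that evoked it (Y).
rng = np.random.default_rng(1)
n_trials, n_neurons, n_pixels = 500, 1000, 16 * 16
X = rng.gamma(2.0, 5.0, size=(n_trials, n_neurons))   # firing rates (Hz)
Y = rng.uniform(0.0, 1.0, size=(n_trials, n_pixels))  # placeholder images

# Ridge-regression decoder: percept_hat = activity @ W.
# The ridge term makes the normal equations solvable even when
# there are more neurons than trials.
lam = 10.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_neurons), X.T @ Y)

# The generalization question: apply the naturally trained decoder
# to a stimulation-evoked activity pattern.
evoked = rng.gamma(2.0, 5.0, size=(1, n_neurons))     # hypothetical evoked pattern
predicted_percept = (evoked @ W).reshape(16, 16)
```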
Interviewer: Many prosthetic systems assume a fairly direct mapping between stimulation and percept. Based on your modeling, what is a misconception the field needs to move past?
Prof. Antolik: A common misconception is that stimulation produces a simple, local “pixel” of activity that the brain then reads out as a matching local dot of light. The field is built on the simple empirical observation, going all the way back to the early 20th century, that focal stimulation of the visual cortex can elicit punctate percepts (phosphenes). But why the brain interprets stimulation that way remains a mystery.
“At the moment we do not have a computationally grounded explanation of phosphenes.”
The first-order intuition is that retinotopy makes this straightforward: stimulate neurons at a particular cortical location and you should evoke a percept at a corresponding location in visual space. The problem is that electrical stimulation does not activate only a small, neatly localized population. It tends to produce sparse activation over a much larger volume of cortex, sometimes millimeters away from the electrode. Even near the electrode, the neurons you recruit are highly heterogeneous. They represent different orientations, spatial frequencies, phases, and other features. In other words, the evoked pattern of activity is not something that any natural visual stimulus would produce.
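A toy simulation of this recruitment picture, with made-up numbers rather than fitted parameters: a dense activation component near the electrode tip plus a sparse component reaching millimeters away, applied to neurons with random orientation preferences.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical neuron positions in a 4 mm x 4 mm patch of cortex,
# electrode at the origin.
n = 5000
pos = rng.uniform(-2.0, 2.0, size=(n, 2))    # positions (mm)
dist = np.linalg.norm(pos, axis=1)           # distance to electrode (mm)
pref_ori = rng.uniform(0.0, 180.0, size=n)   # preferred orientation (deg)

# Toy recruitment rule: activation probability decays steeply with
# distance near the tip, but a small sparse component (e.g., direct
# axonal activation) recruits neurons much farther away.
p_local = np.exp(-(dist / 0.15) ** 2)        # dense, local component
p_sparse = 0.02 * np.exp(-dist / 1.5)        # sparse, wide component
active = rng.random(n) < np.clip(p_local + p_sparse, 0.0, 1.0)

print(f"{active.sum()} neurons recruited; "
      f"farthest at {dist[active].max():.2f} mm; "
      f"orientation spread (std): {pref_ori[active].std():.1f} deg")
```

Even in this crude picture, the recruited population spans the full range of orientation preferences and extends well beyond the electrode, which is exactly why the evoked pattern resembles no natural visual response.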
So the real puzzle is: why does the brain so often turn this very unnatural activity pattern into a relatively simple, localized phosphene, rather than some entirely different percept or even perceptual “nonsense”? At the moment we do not have a computationally grounded explanation of phosphenes, even though they are the basic building blocks of essentially all current and envisioned visual prosthetic systems. If we want to move beyond today’s limitations, I think bridging that gap between stimulation, circuit activation, and percept will be essential.
Interviewer: Congratulations on REVICAN, a Marie Skłodowska-Curie Actions Staff Exchange program that brings together neuroscience, engineering, medicine, computation, and ethics across Europe and the US. What is the central scientific vision that unites the consortium?
Prof. Antolik: The central scientific vision of REVICAN builds directly on the point I made in the previous question. We aim to understand, in a mechanistic and quantitatively grounded way, how external stimulation, whether electrical or optogenetic, perturbs ongoing cortical activity and, through that perturbation, engages (or fails to engage) the brain’s native visual code in cortex.
With that understanding in hand, the consortium’s goal is not only to improve stimulation protocols, but also to inform the design of the underlying brain-machine interfaces for vision restoration so that they communicate with the brain “in its own language.”
Interviewer: What does collaboration at this scale enable that individual labs cannot achieve alone?
Prof. Antolik: This field really does require deep collaboration across many disciplines. What a consortium like REVICAN enables is progress toward our goals not in isolation, but in synchrony across the different conceptual and practical layers that complex neurotechnology demands: basic neuroscience, modeling, device engineering, clinical constraints, and ethical considerations.
Just as importantly, REVICAN creates the infrastructure for sustained dialogue and hands-on transfer of expertise. Researchers can spend time in each other’s laboratories, learn methods directly at the source, and bring that knowledge back to their home teams. Combined with consortium-wide meetings, this makes the collaboration continuous and integrated in a way that no single lab can achieve on its own.