PhD Thesis Defence by Nikos Melanitis-Paraskevas titled "Development of biologically inspired computer vision methods in retinal prosthetics".

On Monday 29.05.2023, Nikos Melanitis-Paraskevas successfully defended his PhD Thesis titled "Development of biologically inspired computer vision methods in retinal prosthetics".

Abstract: In this thesis, we introduce Retinal Ganglion Cell (RGC) models that integrate the current understanding of RGC functions in a preprocessing feature extraction step. The retina performs complex processing of visual information, such as brightness computation, motion detection, and edge detection, which has been explored in neurobiological studies.

Retinal Prosthesis (RP) is an approach to restore vision in blind people affected by degenerative retina diseases, in which, despite the damage to retina cells, at least some RGCs remain functional. RP could potentially benefit a great number of individuals with vision problems, as in the case of Retinitis Pigmentosa, which has a prevalence of approximately 1/4000. Essential steps to transfer RP technology to standard medical care have been taken through clinical trials. Implantees with Retinitis Pigmentosa have been able to detect luminous sources and the direction of motion while experiencing an overall improvement in their orientation and mobility. RP devices consist of: (i) a camera, to capture images of the scene; (ii) a processing unit, to process the camera images and compute the proper retina stimulation pattern; (iii) a telemetry system, to transfer information and power between the external device and the implant; and (iv) an implanted electrode array, to stimulate the retina. The ARGUS II and Alpha IMS RP devices have received approval for medical use. Photovoltaic RPs, which do not require a camera to capture the scene, and cortical implants are studied in ongoing clinical trials. Currently, implants process the images in a simple intensity-based manner, translating image intensity proportionally to stimulation intensity. Progress in vision restoration by RP systems depends on accurately mapping the retina's input to its output.
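To make the intensity-based processing concrete, the following minimal Python sketch maps a camera frame proportionally onto an electrode grid. The grid size, current range, and function names are illustrative assumptions chosen for the example, not the parameters of any approved device.

```python
import numpy as np

def intensity_to_stimulation(frame, n_rows=6, n_cols=10, max_current_ua=100.0):
    # Crop so the frame divides evenly into the electrode grid, then
    # block-average: each electrode covers many camera pixels.
    h, w = frame.shape
    frame = frame[: h - h % n_rows, : w - w % n_cols]
    blocks = frame.reshape(n_rows, h // n_rows, n_cols, w // n_cols).mean(axis=(1, 3))
    # Proportional mapping: pixel intensity (0..255) -> stimulation current.
    return blocks / 255.0 * max_current_ua

frame = np.random.randint(0, 256, size=(60, 100)).astype(float)
amplitudes = intensity_to_stimulation(frame)
print(amplitudes.shape)  # (6, 10): one current value per electrode
```

Block averaging is used here simply because each electrode covers many camera pixels; the key point is the linear intensity-to-current mapping, which the thesis argues is an improper representation of retinal processing.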

Consequently, a fundamental problem in RP is to translate the visual scene to retina neural spike patterns, mimicking the computations normally done by retina neural circuits. Towards improved RP interventions, we propose a Computer Vision (CV) image preprocessing method based on RGC functions and then use the method to reproduce retina output with a standard Generalized Integrate & Fire (GIF) neuron model. The "Virtual Retina" simulation software is used to provide the stimulus-retina response data to train and test our model. We use a sequence of natural images as model input and show that models using the proposed CV image preprocessing outperform models using raw image intensity (interspike-interval distance 0.17 vs 0.27). This result is aligned with our hypothesis that raw image intensity is an improper image representation for RGC response prediction. Moreover, we utilize the aforementioned image features in RGC models that we developed using biological data. In this case, we extracted features over the whole image, leading to an increase in the dimensionality of the feature-based image description, to overcome the unspecified local arrangement of biological RGCs. We improved models of mouse RGCs by localising the RGCs (using deep learning models) and then extracting the features in the image subregions where the RGCs are located. In such models, we showed that features combined with unprocessed images lead to improved RGC models.
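The pipeline described above, feature extraction followed by a spiking neuron model, can be sketched as follows. This is a reduced illustration under stated assumptions: a Sobel edge filter stands in for the retina-inspired features, and a plain leaky integrate-and-fire neuron stands in for the richer GIF model (which additionally includes mechanisms such as adaptation); all names and parameters here are hypothetical.

```python
import numpy as np
from scipy import ndimage

def edge_feature(image):
    # Edge magnitude via Sobel filters: one example retina-inspired feature.
    gx = ndimage.sobel(image, axis=0)
    gy = ndimage.sobel(image, axis=1)
    return np.hypot(gx, gy)

def lif_spike_times(drive, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    # Minimal leaky integrate-and-fire: integrate the drive, then spike
    # and reset whenever the membrane potential crosses threshold.
    v, spikes = 0.0, []
    for step, i_t in enumerate(drive):
        v += dt / tau * (-v + i_t)
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# Drive the neuron with the mean edge energy of each frame in a sequence.
frames = [np.random.rand(32, 32) for _ in range(200)]
drive = np.array([edge_feature(f).mean() for f in frames])
print(len(lif_spike_times(5.0 * drive)))
```

The comparison reported above (interspike-interval distance 0.17 vs 0.27) amounts to swapping the feature-based drive for the raw pixel intensities and measuring how closely the resulting spike trains match the reference retina output.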

In conclusion, we have introduced a Computer Vision image preprocessing method to model RGC functions and have reproduced the retina's spiking output with a GIF neuron model. We show that methods developed over the last decades in the Computer Vision field can be transferred to the area of retinal implants to simulate retina computations. We have demonstrated that using features as input improves performance over raw image intensity, supporting our hypothesis that raw image intensity is an improper visual input representation. Additionally, we have shown that low image resolution can degrade the performance of CV features and that model performance improves when background-only inputs are rejected.

To put further focus on retina models, we trained Linear-Nonlinear (LN) models using response data from biological retinae. We show that augmenting the raw image input with retina-inspired image features leads to performance improvements: in a smaller dataset from the salamander retina, integration of features leads to improved models in approximately 2/3 of the modeled RGCs; in a larger dataset from the mouse retina, we show that utilizing Spike-Triggered Average (STA) analysis to localize RGCs in the input images and extract features in a cell-based manner leads to improved models in all but two of the modeled RGCs.
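For readers unfamiliar with these modeling tools, the sketch below illustrates the two ingredients, the Spike-Triggered Average and an LN model, on hypothetical data; the array shapes and the exponential nonlinearity are assumptions made for the example, not the thesis's actual configuration.

```python
import numpy as np

def spike_triggered_average(stimuli, spike_counts):
    # STA: spike-weighted average of the stimulus frames; its peak gives
    # a rough estimate of the cell's receptive-field location.
    weights = spike_counts / spike_counts.sum()
    return np.tensordot(weights, stimuli, axes=1)

def ln_rate(stimulus, linear_filter, nonlinearity=np.exp):
    # Linear-Nonlinear model: project the stimulus onto the linear filter,
    # then pass the result through a static nonlinearity to get a rate.
    return nonlinearity(np.sum(stimulus * linear_filter))

# Hypothetical data: 500 frames of 16x16 stimuli and one cell's spike counts.
rng = np.random.default_rng(0)
stimuli = rng.standard_normal((500, 16, 16))
counts = rng.poisson(1.0, size=500)
sta = spike_triggered_average(stimuli, counts)
print(ln_rate(stimuli[0], sta))
```

In the cell-based scheme described above, the STA peak would indicate where in the image to extract features for each RGC, so that the feature description stays local to that cell rather than spanning the whole image.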

We explore visual attention, with a focus on improving prosthetic vision, including both retinal and cortical implants in our analysis. Visual attention forms the basis of understanding the visual world. In this work, we follow a computational approach to investigate the biological basis of visual attention. We analyze retinal and cortical electrophysiological data from mice; the visual stimuli are natural images depicting real-world scenes. Our results show that in primary visual cortex (V1), a subset of around 10% of the neurons responds differently to salient versus non-salient visual regions. Visual attention information was not traced in the retinal responses. It appears that the retina remains naive concerning visual attention, while the cortical response is modulated to convey visual attention information.
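A minimal version of such a salience analysis could look like the sketch below: for each neuron, responses to salient and non-salient regions are compared with a rank-sum test, and the fraction of significantly modulated neurons is reported. The data shapes, the choice of test, and the significance criterion are illustrative assumptions, not the thesis's exact analysis.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def salience_selective_fraction(resp_salient, resp_nonsalient, alpha=0.05):
    # For each neuron, test whether responses to salient regions differ
    # from responses to non-salient ones; return the significant fraction.
    n_neurons = resp_salient.shape[0]
    hits = 0
    for n in range(n_neurons):
        _, p = mannwhitneyu(resp_salient[n], resp_nonsalient[n],
                            alternative="two-sided")
        if p < alpha:
            hits += 1
    return hits / n_neurons

# Hypothetical response matrices: neurons x trials (spike counts).
rng = np.random.default_rng(1)
resp_sal = rng.poisson(5.0, size=(100, 40))
resp_non = rng.poisson(4.0, size=(100, 40))
print(salience_selective_fraction(resp_sal, resp_non))
```

Applying the same criterion to retinal and to V1 recordings would yield the contrast reported above: a near-zero selective fraction in the retina versus roughly 10% of neurons in V1.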