
Press Release

APL Retinal Prosthetics Tech Offers Blind Patients a New Outlook

APL is working with Second Sight to enhance the Argus II retinal prosthesis, which consists of a head-mounted camera, electrode array and vision processing system.

Credit: Second Sight

One in 4,000 people suffers from retinitis pigmentosa (RP), a genetic disorder that leads to loss of cells in the retina and progressive loss of vision. In 2014, researchers at the Johns Hopkins University Applied Physics Laboratory (APL) in Laurel, Maryland, were awarded $4 million from the Mann Fund to partner with Second Sight, creator of a retinal prosthesis for people with RP, to design the next generation of the system. Since then, APL researchers have made significant advances in adapting new kinds of sensors and computer vision algorithms to enhance the prosthesis.

The prosthesis consists of a 10-by-6-pixel electrode array implanted on the retina and a head-mounted camera connected to a vision processing system worn on the belt. The vision system processes images from the camera and wirelessly transmits image data to the electrode array, which stimulates the patient's retina. The electrode array covers only a small portion of the visual field (somewhat like looking through a tube), so a key challenge is determining which aspects of a complex scene to portray to the user and how to distill them into a representation suitable for display on the retinal prosthesis.
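To make that distillation step concrete, here is a minimal sketch assuming a simple block-averaging scheme. The 10-by-6 grid size comes from this release; the processing itself is an illustrative assumption, not Second Sight's actual pipeline.

```python
import cv2
import numpy as np

# Electrode grid dimensions noted in this release
ARRAY_COLS, ARRAY_ROWS = 10, 6

def frame_to_electrode_pattern(frame_bgr: np.ndarray) -> np.ndarray:
    """Distill a camera frame to one intensity value per electrode.

    An illustrative stand-in for the real vision processing chain:
    convert to grayscale, then block-average down to the electrode grid.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # INTER_AREA averages the pixels that fall inside each block
    return cv2.resize(gray, (ARRAY_COLS, ARRAY_ROWS), interpolation=cv2.INTER_AREA)

if __name__ == "__main__":
    demo = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # stand-in frame
    print(frame_to_electrode_pattern(demo).shape)  # (6, 10)
```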

APL researchers have tackled this challenge in various ways.

Based on the observation that patients have difficulty finding objects, researchers used a neural network to add the capability to automatically locate objects (such as a cup or cell phone) within the scene. The retinal implant can then highlight these objects so they stand out from a cluttered background, while the system uses speech synthesis to tell the user what each object is. A speech recognition interface further lets the user designate a specific object for the visual prosthesis to selectively highlight. In this case, the system also provides auditory cues to help the user find the object, given the implant's limited field of view. Patients can now identify and locate objects faster.
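A hedged sketch of the highlighting step, assuming some detector has already returned a bounding box. The release does not name the network used, so the box format and brightness values below are illustrative assumptions.

```python
import numpy as np

def highlight_object(gray: np.ndarray, box: tuple) -> np.ndarray:
    """Dim the cluttered background and boost the detected object.

    `box` is (x, y, w, h) as a detector might return it; the detector
    itself, and these brightness factors, are assumptions.
    """
    x, y, w, h = box
    out = (gray * 0.2).astype(np.uint8)                    # suppress background
    region = gray[y:y + h, x:x + w].astype(np.int32)
    out[y:y + h, x:x + w] = np.clip(region + 80, 0, 255)   # brighten the target
    return out
```

The highlighted frame would then pass through the same distillation step sketched above before reaching the implant.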

Before developing the object recognition capabilities, APL researchers developed vision algorithms to declutter and simplify a scene, "redesigning" complex images into a few distinct regions suitable for display on the retinal implant. A face detection algorithm was also implemented to find faces in a scene and visually cue the user to look toward them.
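The release does not say which face detection algorithm APL used; as one plausible sketch, OpenCV's stock Haar cascade can return face bounding boxes that could then drive the visual cueing:

```python
import cv2

# OpenCV ships pretrained Haar cascade models with the library;
# this stands in for whatever algorithm APL actually deployed.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def find_faces(gray):
    """Return (x, y, w, h) bounding boxes for faces in a grayscale frame."""
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```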

The object recognition, scene simplification and face detection algorithms significantly improved the experience for patients. APL researchers are now tackling a different challenge: the awkward and tiring need for patients to continually move their heads to "paint" a scene. The solution is an eye tracking algorithm: inward-facing cameras monitor the position of the pupil, allowing patients to sweep the scene just by moving their eyes.
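One way such an algorithm could use pupil position is to shift the implant's narrow window within the wider camera frame instead of requiring head motion. The coordinate convention, crop size and gain below are all assumptions for illustration:

```python
import numpy as np

def gaze_shifted_crop(frame: np.ndarray, pupil_xy: tuple,
                      crop_hw: tuple = (120, 200), gain: float = 1.0) -> np.ndarray:
    """Select the implant's narrow window based on gaze, not head pose.

    `pupil_xy` is the pupil's (dx, dy) offset from the eye camera's
    center, in pixels; `gain` maps that offset into the scene camera's
    frame. Assumes the frame is larger than the crop.
    """
    h, w = frame.shape[:2]
    ch, cw = crop_hw
    cx = int(w / 2 + gain * pupil_xy[0])
    cy = int(h / 2 + gain * pupil_xy[1])
    # Clamp so the crop window stays inside the frame
    x0 = int(np.clip(cx - cw // 2, 0, w - cw))
    y0 = int(np.clip(cy - ch // 2, 0, h - ch))
    return frame[y0:y0 + ch, x0:x0 + cw]
```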

“After implementing the eye tracking algorithm, patients have demonstrated a 70-percent reduction in head motion when trying to localize an object on the screen,” said Project Manager Kapil Katyal, of APL’s National Health Mission Area.

APL, the Johns Hopkins Wilmer Eye Institute and Second Sight delve into the benefits of eye tracking for blind patients in the February issue of . Their research quantifies the utility of the technology, detailing patients' reduced head motion and improved ability to track moving objects in a scene.

APL researchers are also helping Second Sight redesign the vision-processing hardware. The existing prosthesis uses custom hardware that required years of design and testing and hundreds of thousands of dollars to develop. This makes it difficult to keep up with technology trends, which in turn hinders use of cutting-edge computer vision and artificial intelligence algorithms. The APL solution is to develop a custom hardware interface module that acquires sensor data from the headset and sends it to a commercial off-the-shelf (COTS) mobile processing platform (such as a smartphone) using a standard interface, such as a universal serial bus (USB). With this design, the COTS element of the system can be quickly updated to take advantage of newer processing technology.
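As a rough illustration of that architecture: if the interface module presented the headset as a standard USB video-class camera, a COTS platform could acquire frames with stock APIs. The device index and loop below are assumptions, not the actual design.

```python
import cv2

def acquisition_loop(device_index: int = 0) -> None:
    """Pull headset frames over a standard interface on a COTS platform.

    Assumes the interface module enumerates the sensor as a USB
    video-class camera; the release only specifies "a standard
    interface, such as USB", so this is an illustrative guess.
    """
    cap = cv2.VideoCapture(device_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # ... run the vision pipeline and transmit the electrode
            # pattern wirelessly to the implant (out of scope here) ...
    finally:
        cap.release()
```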

“Redesigning the system to use COTS hardware for the vision processing element will enable future systems to rapidly integrate advances in computational performance and will also greatly reduce ongoing development costs by eliminating a ground-up development cycle each time the system is upgraded,” said Seth Billings, an engineer in APL’s Research and Exploratory Development Department and member of the research team.

Looking Ahead

Katyal and his team are exploring other ways to expand the usability of the retinal implant and its supporting hardware. While details of these continuing activities cannot be publicly disclosed at present, the goal remains the same: to give prosthetic vision users greater independence in their everyday lives.