Article

Ambient Light Rejection Integrated Circuit for Autonomous Adaptation on a Sub-Retinal Prosthetic System

1 Department of Medical Science, Korea University, Seoul 02841, Korea
2 Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, 350-27 Gumi-Daero, Gumi 39253, Korea
3 Department of Biomedical Engineering, Gachon University, Hambakmoe-ro 191, Incheon 21936, Korea
* Authors to whom correspondence should be addressed.
Sensors 2021, 21(16), 5638; https://doi.org/10.3390/s21165638
Submission received: 2 August 2021 / Revised: 18 August 2021 / Accepted: 19 August 2021 / Published: 21 August 2021
(This article belongs to the Special Issue Sensors, Circuit and System for Biomedical Applications)

Abstract

This paper introduces an ambient light rejection (ALR) circuit for the autonomous adaptation of a sub-retinal implant system. Sub-retinal implants, located beneath the bipolar cell layer, offer a significant advantage in spatial resolution over epi-retinal implants because more than a thousand pixels can be integrated. However, challenges remain: current dispersion in high-density retinal implants and pixel saturation induced by ambient light. Conventional image-processing techniques for handling ambient light lead to high power consumption and large area occupation, so a novel image-processing unit is needed that handles ambient light within the power and area constraints of the implant. In this paper, we present an ALR circuit as an image-processing unit for sub-retinal implants. We first introduce an ALR algorithm to reduce the effect of ambient light in conventional retinal implants, and we then implement the algorithm as an application-specific integrated circuit (ASIC). The ALR circuit was fabricated using a standard 0.35-μm CMOS process along with an image-sensor-based stimulator, a sensor pixel, and digital blocks. The ALR circuit occupies an area of approximately 0.19 mm², consumes 3.2 mW, and shows a maximum response time of 1.6 s at a light intensity of 20,000 lux. The proposed ALR circuit has a pixel loss rate of approximately 0.4%. The experimental results show that the ALR circuit allows the sensor pixel (SP) to be adjusted autonomously according to the light intensity.

1. Introduction

Retinal implants hold great promise for restoring vision to blind people who suffer from retinal diseases such as retinitis pigmentosa and age-related macular degeneration [1,2,3,4]. The fundamental idea of retinal prosthetics is to electrically stimulate impaired retinal cells using a microelectrode array and its driving circuitry [5,6,7,8,9]. Retinal prostheses can be classified into epi-retinal [5,6] and sub-retinal implants [7,8,9], based on the anatomical location. While the epi-retinal implant is placed onto the inner retinal layer, known as the ganglion cell layer, the sub-retinal implant is located in the outer retina, at the photoreceptor cell layer. Although both implant approaches have their advantages and disadvantages, it is widely known that sub-retinal implants can pursue a high resolution of more than 1000 pixels, compared with epi-retinal implants [10,11,12].
It has been reported that high-resolution stimulation pixels can support high visual acuity [12]. According to clinical trials [11,12], however, a sub-retinal implant with 1500 stimulation pixels shows vision restoration comparable to an epi-retinal implant with only 60 pixels. This mainly arises from interference between neighboring pixels during stimulation and from strong ambient light projected onto the sub-retinal chip. The first issue, interference that results in current dispersion, becomes more critical when neighboring pixels are stimulated simultaneously [13,14]. To suppress the current dispersion, various methods, such as a wall structure between pixels [15] and a sequential stimulation pattern [5,6,16], have been applied to sub-retinal implants. The second issue, ambient light, induces saturation of all stimulation pixels, especially in bright outdoor environments, which results in low contrast sensitivity. To solve this issue, image-processing techniques were presented in [17,18,19,20]. However, it is challenging to recover contrast sensitivity from fully saturated images. In addition, such a processing unit leads to high power consumption and large area occupation on a size-limited retinal silicon chip. Another method is to manually adjust a contrast control knob, which sets the integration time of the photodiode [21,22]. This is inconvenient for patients and tiresome in daily life. Therefore, it is necessary to autonomously cancel out ambient light in the first stage of the retinal implant system.
The ambient-light cancellation system must meet three design requirements. First, the system architecture should be as small as possible on the limited silicon chip area. An independent image-processing unit that compensates for ambient light occupies a large footprint, which in turn enlarges the retinal chip; this becomes even more critical when the number of stimulation pixels exceeds 1000. A large-area retinal chip requires a large incision to insert the chip into the eyeball, which can cause side effects such as infection around the suture site [23]. Second, the cancellation circuit must operate with low power dissipation. The image-processing unit must precisely acquire raw data from all pixels, analyze them quickly, and properly compensate for pixels saturated by ambient light. Image processing requires an analog-to-digital converter; in the worst case, each stimulation pixel needs its own converter, which can consume considerable power in a high-density stimulation retinal chip. Finally, the ambient light must be removed automatically. In reality, the ambient light surrounding patients with a retinal implant varies with their location. So far, patients have had to control the knob manually to avoid pixel saturation caused by bright ambient light, and many of them find this manual compensation method inconvenient [24]. Accordingly, a low-power, autonomous compensation circuit that removes ambient light must be realized along with high-density stimulation pixels on the retinal chip.
Motivated by this, we propose a novel ambient light rejection (ALR) circuit that autonomously enhances contrast sensitivity. To cancel out the ambient light, we developed a control circuit that adjusts the integration time of 3-Tr complementary metal-oxide-semiconductor (CMOS) image sensors, where the integration time determines the sensed light intensity. The control procedure is divided into detection of pixel saturation and modulation of the integration time. In a bright environment, the 3-Tr CMOS image sensor operates with a short integration time, whereas a long integration time is required in a dim environment. The ALR circuit was designed and fabricated using a DongBu Hi-tek 0.35 μm CMOS process, integrated with stimulator pixels, and tested in a benchtop environment.
The remainder of this paper is organized as follows. First, the image-sensor-based stimulator (ISNS) pixel is optimized to obtain contrast sensitivity as a function of the integration time, and the autonomous adaptation procedure that modulates the integration time through the ALR circuit is described. Second, the ALR circuit implementation is presented together with simulation results. Third, the results measured from the ALR circuit, with the integration time modulated according to the incident light intensity, are presented. Finally, the design constraints of the proposed ALR circuit are discussed.

2. Materials and Methods

2.1. ISNS Pixel Design

Figure 1a,b shows the schematic and simulation of the ISNS pixel. In Figure 1b, the photodiode is replaced with an electrical model consisting of a 6 nA current source in parallel with an 8 pF parasitic capacitor. When the reset signal RST is switched to logic “1”, the integration time starts and the photocurrent is accumulated until RST is switched off again. During the integration time, the voltage at node VPD decreases proportionally, as shown in Equation (1).
$$V_{PD} = V_{DD} - \frac{I_{PD}}{C_{PD} + C_{1}} \cdot T_{int} \tag{1}$$
where CPD and IPD denote the photodiode parasitic capacitance and the photocurrent, respectively. In the reference generator (Figure 1a), C1 and C2 capture the final value of VPD at the end of the integration time and hold the voltages until the next integration time begins. The stored voltages on C1 and C2 generate the cathodic stimulation current, CATH, and the anodic stimulation current, ANO, through the current pulse shaper shown in Figure 1a. The M14 transistor functions as a switch that removes the residual charge after stimulation, which could otherwise lead to harmful effects such as electrode erosion [25] and tissue absorption [16,22]. Figure 1c depicts the stimulation current amplitudes obtained with variable integration times.
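For concreteness, a minimal numerical sketch of Equation (1) is given below in Python. The 6 nA photocurrent and 8 pF parasitic capacitance follow the electrical model of Figure 1b; the values of VDD and C1 are assumptions used only for illustration, not figures from the design.

```python
# Minimal numerical sketch of Equation (1): discharge of the V_PD node during the
# integration time. I_PD = 6 nA and C_PD = 8 pF come from the electrical photodiode
# model in Figure 1b; V_DD and C_1 below are assumed values for illustration only.

V_DD = 3.3      # supply voltage (assumed, V)
I_PD = 6e-9     # photocurrent of the electrical model (A)
C_PD = 8e-12    # photodiode parasitic capacitance (F)
C_1 = 1e-12     # sampling capacitor (assumed, F)

def v_pd(t_int_s):
    """V_PD at the end of an integration time t_int_s (in seconds). The node cannot
    fall below ground; reaching 0 V corresponds to the pixel saturation discussed above."""
    return max(0.0, V_DD - (I_PD / (C_PD + C_1)) * t_int_s)

for t_ms in (1, 2, 4, 8, 16):
    print(f"T_int = {t_ms:4.1f} ms -> V_PD = {v_pd(t_ms * 1e-3):.3f} V")
```

With these assumed values, the node reaches ground within a few milliseconds, which is exactly the saturation behavior that the ALR circuit is designed to avoid.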
Figure 2 shows the sensing dynamic range (SDR) of the ISNS versus the integration time, expressed in Equation (2).
$$\mathrm{SDR} = \frac{1}{T_{int}} \cdot \frac{C_{1} + C_{p}}{K} \cdot \left[\frac{2 \cdot I_{max}}{\mu_{p} C_{ox} \left(\frac{W}{L}\right)_{p} \left(V_{DD} - V_{TH.P}\right)}\right] \tag{2}$$
where Tint, K, Imax, and VTH.P denote the integration time, the quantum efficiency coefficient, the maximum current amplitude of the ISNS, and the threshold voltage of the M2 transistor, respectively. Equation (2) shows that the SDR is inversely proportional to the integration time. If an integration time of 16 ms is applied to the ISNS pixel, the SDR is approximately 300 to 1000 lux. This implies that the sensing dynamic range can be shifted by adjusting the integration time. Therefore, it is important to design a circuit that autonomously varies the integration time according to the ambient light intensity, which prevents saturation of the stimulation current. The next section presents a detailed description of the proposed ALR circuit.
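Before proceeding, the inverse relationship of Equation (2) can be illustrated with a short rescaling sketch. The only anchor is the operating point quoted above (roughly 300–1000 lux at Tint = 16 ms); the other rows are simple 1/Tint rescalings for illustration, not measured values.

```python
# Sketch of the inverse relationship between integration time and sensing dynamic
# range (Equation (2)). The only input is the operating point quoted in the text:
# roughly 300-1000 lux of SDR at T_int = 16 ms; other rows are plain rescalings.

T_REF_MS = 16.0
SDR_LOW_REF, SDR_HIGH_REF = 300.0, 1000.0   # lux at T_REF_MS (from the text)

def sdr_window(t_int_ms):
    """Rescale the sensing window assuming both SDR bounds scale as 1 / T_int."""
    scale = T_REF_MS / t_int_ms
    return SDR_LOW_REF * scale, SDR_HIGH_REF * scale

for t in (20.0, 16.0, 10.0, 5.0, 1.25, 0.625):
    lo, hi = sdr_window(t)
    print(f"T_int = {t:6.3f} ms -> SDR approx. {lo:8.0f} to {hi:8.0f} lux")
```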

2.2. Autonomous Adaptation Optimization

Figure 3a presents the ALR algorithm for adjusting the length of the integration time. The integration time controlled by the ALR algorithm varies according to a pixel saturation indicator, denoted as “yes” or “no”. If there is a saturated sensor in the accumulated image data, the integration time of the next sequence is made shorter; otherwise, it is made longer. TREF is the reference integration time, and TLSB is the time step corresponding to the least significant bit (LSB) of the integration time control. Therefore, the addition (or subtraction) of TLSB to (or from) TREF adjusts the integration time in response to the indicator. This simple yet effective algorithm autonomously provides an adequate integration time for the ISNS pixel.
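The control law of Figure 3a reduces to one conditional update of the integration time per frame. The following behavioral sketch uses the TREF = 10 ms, TLSB = 0.625 ms, and 20 ms maximum reported in Section 3; the lower clamp and the toy saturation model are our own assumptions for illustration only.

```python
# Behavioral sketch of the ALR feedback step in Figure 3a: one update per frame.
# T_REF, T_LSB, and the 20 ms maximum follow Section 3; the lower clamp and the
# toy saturation model below are assumptions made only for this illustration.

T_REF_MS = 10.0
T_LSB_MS = 0.625
T_MAX_MS = 20.0
T_MIN_MS = T_LSB_MS          # assumed lower bound (one LSB)

def next_integration_time(t_int_ms, saturated):
    """Shorten the next integration time if any reference pixel saturated,
    otherwise lengthen it, staying within the control range."""
    if saturated:
        t_int_ms -= T_LSB_MS
    else:
        t_int_ms += T_LSB_MS
    return min(max(t_int_ms, T_MIN_MS), T_MAX_MS)

# Toy run: a sudden change from dim to bright ambient light. A frame is treated
# as saturated when (illuminance x integration time) exceeds a fixed budget.
SATURATION_BUDGET = 800 * 16.25   # lux*ms, loosely anchored to the 800-lux example

t_int = T_REF_MS
for frame, lux in enumerate([400] * 6 + [5000] * 10):
    saturated = lux * t_int > SATURATION_BUDGET
    t_int = next_integration_time(t_int, saturated)
    print(f"frame {frame:2d}: {lux:5d} lux, saturated={saturated}, next T_int = {t_int:6.3f} ms")
```

The run first lengthens the integration time under dim light (dark adaptation) and then walks it back down one LSB per frame once the bright scene saturates the reference pixels (light adaptation).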
Figure 3b shows an example of the ALR algorithm with different VPD decrements during the integration period. We employed 12 individual VPD decrements, corresponding to different incident light levels, and tagged each VPD to express the light intensity: the VPD tagged 12 corresponds to the brightest condition, whereas the VPD tagged 1 indicates the dimmest condition. To show the integration time variation for both dim and bright conditions, we set the reference integration time at the middle of the x-axis. First, assuming that the retinal implant is exposed to bright conditions, the ISNS pixel on tag 12 is saturated because the integration time is initially set to the reference integration time. Second, the ALR algorithm detects the pixel saturation from tag 12 and then controls the integration time, continuing to reduce it until the pixel saturation indicator changes to “no.” Finally, when the integration time generated by the ALR algorithm becomes shorter than the reference integration time, the previously saturated pixels with tags 8–12 operate within the SDR again. This procedure is similar to the light adaptation observed in the human eye. In contrast, dark adaptation occurs when the retinal implant operates under dim conditions, in which case the integration time is extended beyond the reference integration time. To model the dark adaptation, we assumed that the retinal implant was exposed to dim conditions in which the brightest light corresponds to tag 5. The pixel saturation indicator is “no” when the ISNS pixel is driven with the reference integration time; thus, the ALR algorithm extends the integration time until the indicator changes to “yes.” The indicator changes because the ISNS pixel on tag 5, the brightest in this scene, is the first to saturate; at that point, the indicator becomes “yes,” and the integration time stops increasing. By increasing the integration time, the other pixels, tags 1–4, obtain precise visual information.
The ALR algorithm was verified with a computational simulation, shown in Figure 4, performed using MATLAB (MathWorks Inc., Natick, MA, USA). The test images have a dimension of 64 × 64 pixels. We created a brightened image by increasing the brightness of the original image; the brightened image therefore partially loses image data through pixel saturation. The interpolated images were reconstructed from the brightened images using the ALR algorithm. These are visually similar to the original images, so we can quantify the similarity between the original and interpolated images. In this study, we prepared three image sets (shown in Figure 4a) to verify the proposed ALR algorithm on different images. Figure 4b shows the similarity versus the number of pixels, where the x-axis represents the number of pixels employed in the interpolation of Figure 3a. As shown in Figure 4b, the image similarity increases up to 100% beyond 42 pixels; that is, the ALR algorithm achieves its best performance when more than about 1% of the entire pixel array is employed. However, we selected 16 pixels for the ALR algorithm implementation, even though employing more pixels ensures higher image similarity, because the image similarities calculated on the three different images already exceed 90% with 16 pixels. The interpolated image in Figure 4a is, in fact, the result of the ALR interpolation performed with 16 employed pixels. Moreover, the gain in image similarity per additional pixel decreases as the number of pixels increases. This matters for power consumption and area occupation, because an additional circuit is required to capture the image data from each employed ISNS, and we could not allot sufficient power and area to the ALR circuit while still integrating over 1000 ISNS pixels on the retina chip. Consequently, we decided to design the ALR circuit with 16 pixels, taking into account the image similarity, power consumption, and area occupation. In the next section, we present the ALR circuit that autonomously manages the integration time for high contrast sensitivity.
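Before moving to the circuit, the MATLAB verification described above can be mimicked with a short script. The sketch below is a simplified analogue and not the original code: the stand-in image, the brightening model, the reference-pixel selection, and the similarity metric are illustrative assumptions, but it shows why the similarity rises and then levels off as more reference pixels are employed.

```python
# Simplified analogue of the MATLAB verification of the ALR interpolation
# (Section 2.2). The brightening model, reference-pixel selection, and similarity
# metric are illustrative assumptions, not the original verification code.

import numpy as np

rng = np.random.default_rng(0)
original = rng.uniform(0.0, 0.8, size=(64, 64))        # stand-in 64 x 64 image

def brighten_and_clip(img, ambient):
    """Add a uniform ambient offset and clip, mimicking pixel saturation."""
    return np.clip(img + ambient, 0.0, 1.0)

def alr_reconstruct(bright, n_ref):
    """Estimate the ambient offset from a few unsaturated reference pixels and
    subtract it, playing the role of shortening the integration time."""
    flat = bright.ravel()
    refs = flat[rng.choice(flat.size, size=n_ref, replace=False)]
    refs = refs[refs < 1.0]                            # saturated refs carry no offset info
    offset = refs.min() if refs.size else 0.0          # crude ambient estimate
    return np.clip(bright - offset, 0.0, 1.0)

def similarity(a, b):
    """Percentage similarity as 100 * (1 - mean absolute error)."""
    return 100.0 * (1.0 - np.mean(np.abs(a - b)))

bright = brighten_and_clip(original, ambient=0.4)
for n_ref in (4, 16, 42, 128):
    rec = alr_reconstruct(bright, n_ref)
    print(f"{n_ref:3d} reference pixels -> similarity {similarity(original, rec):5.1f}%")
```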
Figure 5a shows a block diagram of the ALR circuit, comprising an N-channel sensor pixel (SP) array, an N-input pseudo OR gate, and a D-flip-flop sequential buffer. We composed 16 SPs with the same structure as the ISNS pixel so that the results can be compared. Each SP consists of an active pixel sensor (APS), a common-source amplifier, and a comparator. We implemented an nMOS-input folded cascode amplifier for the comparator and a common reference bias generator to supply bias voltages to the comparators. VREF, the saturation voltage, is externally controlled to account for the variation of Imax in Equation (2) caused by the impedance variation of the output stage. The image data from the SPs are gathered by the N-input pseudo OR gate to flag pixel saturation. We employed a pseudo OR gate, instead of a conventional logic gate, to efficiently process the multichannel input and to reduce power and area occupation. The SP array and the pseudo OR gate thus realize the indicator presented in Figure 3. In the algorithm shown in Figure 3a, if the indicator detects pixel saturation, the integration time of the next sequence is decreased; conversely, if no pixel saturation is detected, the integration time of the next sequence is increased. With this mechanism, when the pixels operate in a bright environment, a short integration time is applied so that the SDR covers high illuminance; conversely, the SDR shifts to low illuminance when operating in a dark environment. The correlation between the integration time and the SDR is shown in Figure 1c and follows the inverse relationship described in Equation (2). In Figure 5, the output of the D-flip-flop acts as the indicator, and the local processor controls the integration time.
Figure 5b shows the simulation results of the ALR circuit with a single SP. During the integration time, VOUT gradually increases in proportion to the photocurrent, as in the ISNS pixel of Figure 1a. When VOUT rises above VREF, VCO switches to the logic “1” state, indicating that the SP is saturated. VCTO, obtained via the pseudo OR gate, indicates that at least one VCO in the SP array has risen. The value of VCTO is stored at the end of the integration time and utilized for the ALR interpolation. At the end of each integration time, VCTO is refreshed and delivered to a local processor, where the stored VCTO is used to process the ALR interpolation following the same procedure as in Figure 3a. For instance, in the simulation result, VCO is at logic “1” at the end of the integration time, which means that the integration time of the next sequence will be shorter than that of the current sequence. More detailed results for the ALR circuit are presented in the next section.
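Before turning to the measurements, the Figure 5a signal chain can be summarized behaviorally: each SP compares its output against VREF, the pseudo OR gate merges the 16 flags, and the D-flip-flop latches the result for the local processor. In the sketch below, the 2.2 V VREF follows the measurement section, while the linear pixel conversion gain, the supply rail, and the illuminance values are assumptions for illustration only.

```python
# Behavioral model of the Figure 5a signal chain: 16 sensor pixels compare their
# outputs against V_REF, an N-input (pseudo) OR gate merges the flags, and a
# D-flip-flop latches the result (V_CTO) at the end of each integration period.
# The conversion gain, supply rail, and scene illuminances are assumed values.

from dataclasses import dataclass

V_REF = 2.2   # saturation threshold used in the measurements (V)

@dataclass
class SensorPixel:
    gain_v_per_lux_ms: float = 1e-4   # assumed conversion gain of APS + amplifier

    def v_out(self, lux: float, t_int_ms: float) -> float:
        """Output voltage at the end of the integration period, clipped at an assumed rail."""
        return min(3.3, self.gain_v_per_lux_ms * lux * t_int_ms)

    def comparator(self, lux: float, t_int_ms: float) -> bool:
        """V_CO: logic '1' when the pixel output exceeds V_REF (pixel saturated)."""
        return self.v_out(lux, t_int_ms) > V_REF

def pseudo_or(flags):
    """Merge the per-pixel saturation flags, as the N-input pseudo OR gate does."""
    return any(flags)

class LatchedIndicator:
    """D-flip-flop that samples V_CTO once per integration period."""
    def __init__(self):
        self.q = False
    def clock(self, d: bool) -> bool:
        self.q = d
        return self.q

pixels = [SensorPixel() for _ in range(16)]
indicator = LatchedIndicator()
scene = [300 + 100 * i for i in range(16)]            # per-pixel illuminance (lux), assumed
v_cto = pseudo_or(p.comparator(lux, t_int_ms=16.25) for p, lux in zip(pixels, scene))
print("latched saturation indicator:", indicator.clock(v_cto))
```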

3. Results

Figure 6 shows the micrograph of the proposed ALR circuit, in which a 16-pixel ISNS and SP array is integrated as described above. Each 4-pixel ISNS/SP group comprises the ISNS, the SP, a bias generator, and a compensation capacitor. We arranged the 4-pixel ISNS/SP groups separately so that different light intensities could be projected onto them. In the experiment, the ALR chip was tested to determine the VOUT difference between neighboring SPs.
Figure 7 presents the measured results of the ALR circuit for each light intensity. We used custom-made LED light sources and a commercial LED light source (66088-LED, Newport, Irvine, CA, USA) to project uniform light onto the ALR circuit. Continuous light was irradiated to prevent the distortion of the SP associated with line-scan cameras [26,27], and the incident light intensity was measured using a commercial illuminometer (TES 1336A, TES Corp., Taipei, Taiwan). The local processor was implemented with an FPGA board (Basys 3 Artix-7, Digilent Inc., Pullman, WA, USA).
Figure 7a displays the captured oscilloscope traces as the light intensity increases. VOUT1–VOUT3 show the time-dependent increase with incident light during the integration time. The integration time becomes inversely proportional to the incident light once VOUT exceeds the saturation voltage VREF of 2.2 V. Figure 7b shows the integration time for each light intensity. The ALR interpolation takes effect once the time-to-saturation becomes inversely proportional to the light intensity, as described in Equation (2). An increase in the incident light intensity reduces the integration time to the point at which VOUT exceeds VREF (VOUT2 in this study). From the measured results shown in Figure 7a, the integration time is reduced to 16.25 ms to place the SDR near 800 lux, where pixel saturation is detected. When operating at higher illuminance, the integration time keeps getting shorter so that data are obtained within the SDR at each illuminance. Conversely, if VOUT does not reach VREF, the integration time increases until VOUT reaches VREF. We set the modulation range of the integration time to 10 ms, divided into four bits (TLSB = 0.625 ms); therefore, the maximum integration time is 20 ms with a TREF of 10 ms. Once the maximum integration time is reached, the integration time cannot be extended further even if none of the pixels is saturated. In Figure 7b, the measured result shows that the integration time can no longer be extended when the ALR circuit is exposed to 600 lux; the estimated result shows the integration time that would be required below 600 lux to bring VOUT to saturation. A solution that allows the ALR circuit to operate in dim conditions is described in the next section. As shown in Figure 7a, the modulated integration time followed the time-to-saturation of VOUT2; accordingly, the middle point of the SDR also shifted with the incident light. Assuming that the SDR scales with the integration time as in Equation (2), the middle of the SDR moves between 560 lux and 18,200 lux, which corresponds to approximately 30 dB. This means that the ALR circuit offers an additional 30 dB of sensing dynamic range. Considering that the SDR in Figure 1a is about 10 dB, the ALR circuit offers a significant benefit, making it possible to sense a high dynamic range with the ISNS pixel.
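The dynamic-range figures quoted above can be checked with short arithmetic. In the sketch below, interpreting the quoted dB values as 20·log10 of the illuminance ratio is our assumption; it reproduces both the roughly 10 dB fixed-integration-time SDR and the roughly 30 dB ALR-extended range.

```python
import math

# Quick check of the dynamic-range figures quoted above. Treating the dB values
# as 20*log10 of the illuminance ratio is our assumption; it reproduces both the
# ~10 dB fixed-integration-time SDR and the ~30 dB ALR-extended range.

def dyn_range_db(lux_low, lux_high):
    return 20.0 * math.log10(lux_high / lux_low)

print(f"fixed T_int SDR,  300-1000 lux : {dyn_range_db(300, 1000):5.1f} dB")
print(f"ALR-extended SDR, 560-18200 lux: {dyn_range_db(560, 18200):5.1f} dB")

# Integration-time control granularity quoted in the text: T_REF = 10 ms,
# T_LSB = 0.625 ms (4-bit modulation), maximum integration time about 20 ms.
T_REF_MS, T_LSB_MS, T_MAX_MS = 10.0, 0.625, 20.0
print(f"LSB steps between T_REF and the 20 ms maximum: {(T_MAX_MS - T_REF_MS) / T_LSB_MS:.0f}")
```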
Table 1 presents the power and area consumption of the ALR circuit, estimated from the simulation and layout results for the 16 SPs, the pseudo OR gate, and the bias generator. The power consumption and area occupation are dominated by the SPs, because the ALR circuit contains 16 of them. In the ALR circuit shown in Figure 5a, the SP is built with a folded cascode amplifier to achieve a sufficient comparator gain. If the folded cascode amplifier is replaced with a self-biased amplifier [28,29] and the SP is designed without the diode-connected APS (by directly connecting the comparator to the ISNS), the area and power consumption can be reduced further. In the next section, we present a summary of the ALR circuit, a comparison with relevant research, and future work.

4. Conclusions and Discussion

In this paper, we presented a novel ALR circuit that autonomously enhances contrast sensitivity to assist blind people suffering from retinal diseases such as retinitis pigmentosa and age-related macular degeneration. First, we introduced the ALR algorithm and verified its efficacy through MATLAB simulations. The main constraints are power consumption, area occupation, and interpolation efficacy. By optimizing the trade-off between these constraints, we designed an ALR circuit with 16 pixels. Although the 16 pixels are only about 0.4% of all pixels, the image similarity exceeds 90%, and the interpolated image is visually similar to the original image. This optimization procedure for the ALR interpolation provides a guideline for balancing the interpolation efficacy against the power and area occupation. In addition, by performing the optimization procedure, we could reduce the power consumption and area occupation compared to a conventional image-processing unit.
In the experimental results shown in Figure 1, we used a 3-kΩ resistor as the electrode impedance. However, the electrode-tissue interface (ETI), modeled here as an electrode impedance, changes with the size, geometry, and material of the electrode [30,31,32]. To compensate for this electrode variation, we designed the ALR circuit with an adjustable VREF. As mentioned previously, the purpose of the ALR circuit is to keep the ISNS pixel operating within the SDR by preventing pixel saturation. Thus, if VREF is set lower than the output obtained when the ISNS pixel generates the maximum stimulation current Imax, the ALR circuit autonomously keeps all ISNS pixels operating continuously without pixel saturation. The proposed ALR circuit was implemented on a silicon chip using a DongBu Hi-tek 0.35 μm CMOS process and occupies an active area of approximately 0.19 mm². With the 16 reference pixels operating to reject the ambient light, it dissipates 3.2 mW, which is low enough to operate alongside high-density stimulation pixels on a single chip. In addition, a maximum ALR feedback response time of 1.6 s was measured at a light intensity of 20,000 lux, which in the worst case can damage the human retina. Therefore, a response time of less than 1.6 s is sufficient for retinal chip recipients moving around in daily life.
The ALR algorithm implemented in the ASIC was evaluated on a customized test bench. As shown in Figure 7a, different light intensities were irradiated onto each pixel, and the results are clearly shown in Figure 7b. The ALR circuit provides additional sensing dynamic range for the ISNS pixel and operates autonomously: the integration time is regulated automatically according to the incident light intensity. However, the integration time could not be stretched sufficiently to operate under dim conditions, such as below 400 lux. This implies that the ALR circuit provides significant assistance in viewing images under bright conditions but is ineffective under dim conditions. Two possible solutions can be considered: decreasing the accumulation capacitor and increasing the reference integration time TREF. Although each has advantages and disadvantages, we plan to use both solutions, which will be helpful in designing high-density retinal implants. Consequently, the proposed ALR circuit autonomously adapts the ISNS pixel to the incident light, and the presented simulation results confirm the efficacy of the ALR interpolation while taking the power and area consumption into account.
The electrical performance of the proposed work is summarized in Table 2, together with previous works for comparison. The prior works presented in [8,9,21,33] contain only stimulation pixels. Although Park et al. [7] developed an edge stimulation method to increase contrast sensitivity, it cannot compensate for pixel saturation caused by ambient light. Rothermel et al. [34,35] proposed an ambient light rejection technique that operates with 3025 stimulation pixels, showing that ambient-light compensation can be applied to high-density arrays of more than 3000 stimulation pixels. However, it requires 100 reference pixels to measure the saturation status, which can induce high power consumption and a large silicon area. The compensation technique proposed in this work requires only a few reference pixels, owing to the similarity optimization shown in Figure 4b. According to the simulation results presented above, 16 reference pixels among 64 × 64 stimulation pixels are enough to cancel out the deleterious effect of the ambient light. Therefore, our compensation scheme, with a pixel loss rate of 0.4% (=16 reference pixels/4096 stimulation pixels), is more efficient than the previous research [35], which demands a rate of 3.3% (=100 reference pixels/3025 stimulation pixels). In future work, we will design a retinal implant integrating over 2000 ISNSs on a single chip and apply the refined chip in clinical trials.
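For reference, the pixel-loss-rate figures compared above follow directly from a one-line ratio:

```python
# One-line check of the pixel-loss-rate comparison in the concluding paragraph.
this_work = 16 / (64 * 64)     # 16 reference pixels among 64 x 64 stimulation pixels
prior_35 = 100 / 3025          # 100 reference pixels among 3025 stimulation pixels [35]
print(f"this work: {100 * this_work:.2f}%   prior work [35]: {100 * prior_35:.2f}%")
```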

Author Contributions

Conceptualization, H.K. and J.K.; methodology, H.K. and J.K.; formal analysis, H.K. and J.K.; writing—original draft preparation, H.K., H.C. and J.K.; writing—review and editing, H.K., H.C. and J.K.; supervision, H.C. and J.K.; funding acquisition, J.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea grant (NRF-2017M3A9E2056461), the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1A2C4001606), and the Korea government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health and Welfare, the Ministry of Food and Drug Safety) (Project Number: 202017D01).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are included in this article.

Acknowledgments

The authors would like to express their sincere appreciation to the IC Design Education Center for chip fabrication.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ayton, L.N.; Barnes, N.; Dagnelie, G.; Fujikado, T.; Goetz, G.; Hornig, R.; Jones, B.W.; Muqit, M.M.; Rathbun, D.L.; Stingl, K. An update on retinal prostheses. Clin. Neurophysiol. 2020, 131, 1383–1398.
  2. Humayun, M.S.; de Juan, E., Jr.; Weiland, J.D.; Dagnelie, G.; Katona, S.; Greenberg, R.; Suzuki, S. Pattern electrical stimulation of the human retina. Vision Res. 1999, 39, 2569–2576.
  3. Cheng, D.L.; Greenberg, P.B.; Borton, D.A. Advances in retinal prosthetic research: A systematic review of engineering and clinical characteristics of current prosthetic initiatives. Curr. Eye Res. 2017, 42, 334–347.
  4. Edwards, T.L.; Cottriall, C.L.; Xue, K.; Simunovic, M.P.; Ramsden, J.D.; Zrenner, E.; MacLaren, R.E. Assessment of the electronic retinal implant alpha AMS in restoring vision to blind patients with end-stage retinitis pigmentosa. Ophthalmology 2018, 125, 432–443.
  5. Ortmanns, M.; Rocke, A.; Gehrke, M.; Tiedtke, H.-J. A 232-channel epiretinal stimulator ASIC. IEEE J. Solid-State Circuits 2007, 42, 2946–2959.
  6. Chen, K.; Yang, Z.; Hoang, L.; Weiland, J.; Humayun, M.; Liu, W. An integrated 256-channel epiretinal prosthesis. IEEE J. Solid-State Circuits 2010, 45, 1946–1956.
  7. Park, J.H.; Tan, J.S.Y.; Wu, H.; Dong, Y.; Yoo, J. 1225-channel neuromorphic retinal-prosthesis SoC with localized temperature-regulation. IEEE Trans. Biomed. Circuits Syst. 2020, 14, 1230–1240.
  8. Huang, T.W.; Kamins, T.I.; Chen, Z.C.; Wang, B.-Y.; Bhuckory, M.; Galambos, L.; Ho, E.; Ling, T.; Afshar, S.; Shin, A. Vertical-junction photodiodes for smaller pixels in retinal prostheses. J. Neural Eng. 2021, 18, 036015.
  9. Tomioka, K.; Toyoda, K.; Ishizaki, T.; Noda, T.; Ohta, J.; Kimura, M. Retinal Prosthesis Using Thin-Film Devices on a Transparent Substrate and Wireless Power Transfer. IEEE Trans. Electron Devices 2020, 67, 529–534.
  10. Thompson, R.W.; Barnett, G.D.; Humayun, M.S.; Dagnelie, G. Facial recognition using simulated prosthetic pixelized vision. Invest. Ophthalmol. Visual Sci. 2003, 44, 5035–5042.
  11. Ho, A.C.; Humayun, M.S.; Dorn, J.D.; Da Cruz, L.; Dagnelie, G.; Handa, J.; Barale, P.-O.; Sahel, J.-A.; Stanga, P.E.; Hafezi, F. Long-term results from an epiretinal prosthesis to restore sight to the blind. Ophthalmology 2015, 122, 1547–1554.
  12. Daschner, R.; Rothermel, A.; Rudorf, R.; Rudorf, S.; Stett, A. Functionality and performance of the subretinal implant chip Alpha AMS. Sens. Mater. 2018, 30, 179–192.
  13. Matteucci, P.B.; Barriga-Rivera, A.; Eiber, C.D.; Lovell, N.H.; Morley, J.W.; Suaning, G.J. The effect of electric cross-talk in retinal neurostimulation. Invest. Ophthalmol. Visual Sci. 2016, 57, 1031–1037.
  14. Yue, L.; Weiland, J.D.; Roska, B.; Humayun, M.S. Retinal stimulation strategies to restore vision: Fundamentals and systems. Prog. Retin. Eye Res. 2016, 53, 21–47.
  15. Ho, E.; Lei, X.; Flores, T.; Lorach, H.; Huang, T.; Galambos, L.; Kamins, T.; Harris, J.; Mathieson, K.; Palanker, D. Characteristics of prosthetic vision in rats with subretinal flat and pillar electrode arrays. J. Neural Eng. 2019, 16, 066027.
  16. Rothermel, A.; Liu, L.; Aryan, N.P.; Fischer, M.; Wuenschmann, J.; Kibbel, S.; Harscher, A. A CMOS chip with active pixel array and specific test features for subretinal implantation. IEEE J. Solid-State Circuits 2008, 44, 290–300.
  17. Bigas, M.; Cabruja, E.; Forest, J.; Salvi, J. Review of CMOS image sensors. Microelectron. J. 2006, 37, 433–451.
  18. Ikebe, M.; Saito, K. A wide-dynamic-range compression image sensor with negative-feedback resetting. IEEE Sens. J. 2007, 7, 897–904.
  19. Kim, D.; Chae, Y.; Cho, J.; Han, G. A dual-capture wide dynamic range CMOS image sensor using floating-diffusion capacitor. IEEE Trans. Electron Devices 2008, 55, 2590–2594.
  20. Perenzoni, M.; Massari, N.; Stoppa, D.; Pancheri, L.; Malfatti, M.; Gonzo, L. A 160 × 120-Pixels Range Camera with In-Pixel Correlated Double Sampling and Fixed-Pattern Noise Correction. IEEE J. Solid-State Circuits 2011, 46, 1672–1681.
  21. Oh, S.; Ahn, J.-H.; Lee, S.; Ko, H.; Seo, J.M.; Goo, Y.-S. Light-controlled biphasic current stimulator IC using CMOS image sensors for high-resolution retinal prosthesis and in vitro experimental results with rd1 mouse. IEEE Trans. Biomed. Circuits Syst. 2014, 62, 70–79.
  22. Kang, H.; Abbasi, W.H.; Kim, S.-W.; Kim, J. Fully Integrated Light-Sensing Stimulator Design for Subretinal Implants. Sensors 2019, 19, 536.
  23. Katrin, G.; Karl Ulrich, B.; Helmut, S.; Robert, E.M.; Katarina, S.; Eberhart, Z.; Florian, G. Implantation, removal and replacement of subretinal electronic implants for restoration of vision in patients with retinitis pigmentosa. Curr. Opin. Ophthalmol. 2018, 29, 239–247.
  24. Lauren, N.A.; Nick, B.; Gislin, D.; Takashi, F.; Georges, G.; Ralf, H.; Bryan, W.J.; Mahiul, M.K.M.; Daniel, L.R.; Katarina, S.; et al. An update on retinal prostheses. Clin. Neurophysiol. 2020, 131, 1383–1398.
  25. Merrill, D.R.; Bikson, M.; Jefferys, J.G. Electrical stimulation of excitable tissue: Design of efficacious and safe protocols. J. Neurosci. Methods 2005, 141, 171–198.
  26. Yoshihara, S.; Nitta, Y.; Kikuchi, M.; Koseki, K.; Ito, Y.; Inada, Y.; Kuramochi, S.; Wakabayashi, H.; Okano, M.; Kuriyama, H. A 1/1.8-inch 6.4 MPixel 60 frames/s CMOS image sensor with seamless mode change. IEEE J. Solid-State Circuits 2006, 41, 2998–3006.
  27. Nitta, Y.; Muramatsu, Y.; Amano, K.; Toyama, T.; Mishina, K.; Suzuki, A.; Taura, T.; Kato, A.; Kikuchi, M.; Yasui, Y. High-speed digital double sampling with analog CDS on column parallel ADC architecture for low-noise active pixel sensor. In Proceedings of the 2006 IEEE International Solid State Circuits Conference-Digest of Technical Papers, San Francisco, CA, USA, 6–9 February 2006; pp. 2024–2031.
  28. Bazes, M. Two novel fully complementary self-biased CMOS differential amplifiers. IEEE J. Solid-State Circuits 1991, 26, 165–168.
  29. Figueiredo, M.; Santos-Tavares, R.; Santin, E.; Ferreira, J.; Evans, G.; Goes, J. A two-stage fully differential inverter-based self-biased CMOS amplifier with high efficiency. IEEE Trans. Circuits Syst. I Regul. Pap. 2011, 58, 1591–1603.
  30. Franks, W.; Schenker, I.; Schmutz, P.; Hierlemann, A. Impedance characterization and modeling of electrodes for biomedical applications. IEEE Trans. Biomed. Circuits Syst. 2005, 52, 1295–1302.
  31. John, S.E.; Apollo, N.V.; Opie, N.L.; Rind, G.S.; Ronayne, S.M.; May, C.N.; Oxley, T.J.; Grayden, D.B. In vivo impedance characterization of cortical recording electrodes shows dependence on electrode location and size. IEEE Trans. Biomed. Circuits Syst. 2018, 66, 675–681.
  32. Andrea, C.; Thoralf, H.; Günther, Z. Electrode-size dependent thresholds in subretinal neuroprosthetic stimulation. J. Neural Eng. 2021, 15, 045003.
  33. Daniel, P.; Yannick, L.M.; Saddek, M.; Mahiul, M.; Jose, A.S. Photovoltaic Restoration of Central Vision in Atrophic Age-Related Macular Degeneration. Ophthalmology 2020, 127, 1097–1104.
  34. Rothermel, A.; Kaim, H.; Gambach, S.; Schuetz, H.; Moll, S.; Steinhoff, R.; Herrmann, T.; Zeck, G. Subretinal Stimulation Chip Set with 3025 Electrodes, Spatial Peaking Filter, Illumination Adaptation and Implant Lifetime Optimization. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 4310–4313.
  35. Moll, S.; Gambach, S.; Schütz, H.; Steinhoff, R.; Kaim, H.; Rothermel, A. System design of a physiological ambient illumination adaptation for subretinal stimulator. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 1962–1965.
Figure 1. An ISNS pixel circuit with simulation results. (a) The image-sensor-based neural stimulator pixel; (b) simulation results of the ISNS; (c) measured results of the ISNS with variable integration time.
Figure 2. The sensing dynamic range of the ISNS with variable integration time.
Figure 3. (a) The interpolation algorithm with its timing diagram; (b) an implementation example.
Figure 4. (a) Interpolation image examples; (b) image similarity versus the number of pixels.
Figure 5. (a) Block diagram of the ambient light rejection circuit; (b) simulation results of the sensor pixel.
Figure 6. Layout of the proposed ALR circuit.
Figure 7. (a) The measured results versus transient time and (b) the integration time versus light intensity of the ALR circuit.
Table 1. The power consumption and area occupation of the ALR circuit.

Block               | Power Consumption (μW) | Area Occupation (μm²)
ALR circuit         | 1748.88 (100%)         | 189,744.9 (100%)
Sensor pixel        | 93.68 (85.71%)         | 97 × 114 (93.24%)
ISNS pixel          | 56.4                   | 97 × 114
16:1 OR gate        | -                      | 42.4 × 38.21 (0.85%)
Reference generator | 250 (14.77%)           | 156.6 × 71.5 (5.9%)

* The period and duty cycle of the integration time for the ISNS and SP pixels are 50 ms and 40%, respectively, considering flicker-free vision. The power figures exclude the power consumed by the stimulation current.
Table 2. Specification comparison of retinal prosthesis ASICs.

Parameter                            | TBioCAS’20 [7]           | TED’20 [9]             | TBioCAS’14 [21]          | EMBC’20 [34]             | Ophthalmol’20 [8,33] | This Work
Technology                           | 0.18 μm                  | Custom                 | 0.35 μm BCD              | 0.18 μm HV               | Custom               | 0.35 μm
Supply power                         | Wireless coil            | Wireless coil          | Wireless coil            | Wireless coil            | Photovoltaic         | Wireless coil
Electrode location                   | Sub-retina               | Sub-retina             | Sub-retina               | Sub-retina               | Sub-retina           | Sub-retina
Stimulus approach                    | Simultaneous             | Simultaneous           | Sequential               | Sequential               | Simultaneous         | Sequential
Pixel number                         | 1225                     | 100                    | 128                      | 3025                     | 378                  | 256
Pixel size (μm²)                     | 84.3 × 86.6              | 400 × 400              | 50 × 55                  | 51.5 × 51.7              | 7500                 | 97 × 114
Chip size (mm²)                      | 5 × 3.45                 | 4 × 4                  | 2.5 × 1.2                | 3.14 × 3.94              | 2 × 2                | 5 × 4
Stimulus current [loading parameter] | ≤3 mA (10 kΩ resistor)   | ≤3 μA (PBS solution)   | ≤300 μA (10 kΩ resistor) | ≤18 μA (PBS solution)    | N/A                  | ≤150 μA (10 kΩ resistor)
Supply voltage                       | ±1.6 V                   | 5 V                    | 12 V                     | ±1.6 V                   | N/A                  | ±1.6 V
Application                          | Edge-only stimulation; temperature sensor | N/A   | N/A                      | Ambient light rejection; high-pass filter | N/A | Ambient light rejection
Power consumption                    | 2.7 mW                   | 320 μW                 | N/A                      | N/A                      | N/A                  | 3.2 mW

* Power consumption is calculated excluding the stimulation current.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
