
The Mechanism of Orientation Detection Based on Artificial Visual System

Faculty of Engineering, University of Toyama, Toyama-shi 930-8555, Japan
School of Electrical and Computer Engineering, Kanazawa University, Kanazawa-shi 920-1192, Japan
Author to whom correspondence should be addressed.
Electronics 2022, 11(1), 54;
Submission received: 1 December 2021 / Revised: 21 December 2021 / Accepted: 23 December 2021 / Published: 24 December 2021


As an important part of the nervous system, the human visual system provides visual perception for humans. Research on it is of great significance for improving our understanding of biological vision and the human brain. Orientation detection, in which visual cortex neurons respond only to linear stimuli in specific orientations, is an important driving force in both computer vision and biological vision. However, the principle of orientation detection is still unknown. This paper proposes an orientation detection mechanism based on the dendritic computation of local orientation detection neurons. We hypothesized the existence of orientation detection neurons that respond only to specific orientations and designed eight neurons that can detect local orientation information. These neurons interact with each other based on dendritic nonlinearity. Local orientation detection neurons are then used to extract local orientation information, and global orientation information is deduced from the local information. The effectiveness of the mechanism is verified by computer simulation, which shows that the system performs orientation detection well in all experiments, regardless of the size, shape, and position of objects. This is consistent with most known physiological experiments.

1. Introduction

The study of the orientation detection mechanism and the visual nervous system provides strong clues for further understanding the functional mechanisms of the human brain. David Hubel and Torsten Wiesel discovered orientation-selective cells in the primary visual cortex (V1) and presented a simple but powerful model of how such orientation selectivity arises from non-selective thalamocortical inputs [1,2]. The model has become a central frame of reference for understanding cortical computing and its underlying mechanisms [3]. Hubel and Wiesel won the 1981 Nobel Prize in Physiology or Medicine for their landmark discovery of orientation preference and other related work. During this period, they conducted a series of studies and experiments on cortical cells in cats and monkeys and observed some biological phenomena: (1) visual cortex cells respond especially to rectangular light spots and slits; (2) there is a simple type of cortical cell in the visual cortex that responds only to stimulation at a specific angle in the receptive field [4,5]. This property of these neurons is called orientation selectivity. Such a neuron fires for a specific orientation but shows little or no response to other orientations. Orientation detection is one of the basic functions of the visual system and helps us recognize the environment around us and make judgments and choices.
However, questions remain as to how the computations performed in V1 relate to computations performed in other areas of the cerebral cortex, and whether V1 contains highly specific and unique mechanisms for computing orientation from retinal images [6,7,8]. Recent studies provide strong, if indirect, support for the idea that dendrites play important and possibly crucial roles in visual computation in invertebrates [9,10,11,12,13,14,15,16,17,18]. The nonlinear interactions in dendritic trees can implement Boolean logic operations such as AND (conjunction), OR (disjunction), and NOT (negation), as well as soft minimum, soft maximum, and even multiplication [19,20,21,22]. Experiments also show that the dendrites of a single neocortical pyramidal neuron can classify linearly inseparable inputs, a computation traditionally believed to require a multilayer network [23,24,25].
Almost 60 years ago, Hubel and Wiesel observed that individual cortical cells have orientation selectivity; that is, individual neurons respond selectively to bars or gratings at certain orientations and not at others. However, the details of these individual neurons are still unknown [26]. How, to what extent, and by what mechanism cortical processing produces orientation selectivity remains unclear. In this paper, we propose a new quantitative mechanism to explain how a circuit model based on V1 cortical anatomy produces orientation selectivity. We first hypothesized the presence of local orientation detection neurons in the visual nervous system. Each local orientation detection neuron receives input from its own photoreceptor, selectively takes input from adjacent photoreceptors, and responds only to the orientation defined by the selected adjacent photoreceptor inputs. Based on the dendritic neuron model, the local orientation detection neuron is realized and extended to a multiorientation detection neuron. To demonstrate the effectiveness of our mechanism, we conducted a series of experiments on a dataset of 252,000 images containing objects of different shapes, sizes, and positions, oriented at different angles. Computer simulation shows that the system detects object orientation well in all experiments.
In Section 2, the local and global orientation detection mechanisms are developed by establishing the dendritic neuron model, and the artificial visual system (AVS) is proposed. In Section 3, the validity of the mechanism is verified and compared with a CNN.

2. Mechanism

2.1. Dendritic Neuron Model

Artificial neural network (ANN) has been a research hotspot in the field of artificial intelligence since the 1980s [27,28]. By stimulating brain synaptic connection structure and information technology processing mechanisms through mathematical learning models, neural networks play important roles in various fields, such as medical diagnosis, stock index prediction, and autonomous driving, in which they have shown excellent performance [29,30,31].
All of these networks use the traditional McCulloch–Pitts neuron model as their basic computing unit [32]. However, the McCulloch–Pitts model did not take into account the nonlinear mechanism of dendrites [33]. At the same time, recent research on dendrites in neurons plays a key role in the overall calculation, which provides strong support to future research [34,35,36,37,38,39,40,41]. Koch, Poggio, and Torre proposed that, in the dendrites of retinal ganglion cells, if activated inhibitory synapses are closer to the cell body than excitatory synapses, excitatory synapses will be intercepted [42,43]. Thus, the interaction between synapses on the dendritic branch can be regarded as a logical AND operation. The branch node can sum up the current coming from the branch, which can be simulated as a logical OR operation. The outputs of branch nodes are directed to the cell body (soma). When the signal exceeds the threshold, the cell will be fired and send a signal through its axon to other neurons. Figure 1a shows an ideal δ cell model. Here, if the inhibition interaction is described as a NOT gate, the output of the δ cell model can be expressed as follows:
Output = X1X2 + X̄3X4 + X̄5X6X2
where X1, X2, X4, and X6 denote excitatory inputs, whereas X3 and X5 denote inhibitory inputs. Each input can be simulated as a logical 0 or 1 signal. Therefore, the cell body (soma) signal will generate a logical 1 signal only in the following three cases: (i) X1 = 1 and X2 = 1; (ii) X3 = 0 and X4 = 1; (iii) X5 = 0, X6 = 1 and X2 = 1. In addition, the γ cell receives signals from the excitatory and inhibitory synapses, which is presented in Figure 1b. The output of the γ cell model can be described as follows:
Output = X̄1X2X3
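The two Boolean expressions above can be sketched directly in code as a minimal illustration of the dendritic AND/OR/NOT logic (function names are ours, not from the paper):

```python
# Synaptic interactions on a dendritic branch act as AND, branch nodes sum
# as OR, and an inhibitory input acts as NOT on a binary (0/1) signal.

def delta_cell(x1, x2, x3, x4, x5, x6):
    """Output = X1*X2 + NOT(X3)*X4 + NOT(X5)*X6*X2 (Boolean)."""
    branch1 = x1 and x2                        # excitatory pair X1, X2
    branch2 = (not x3) and x4                  # inhibitory X3 gates X4
    branch3 = (not x5) and x6 and x2           # inhibitory X5 gates X6 and X2
    return int(branch1 or branch2 or branch3)  # branch node: OR to the soma

def gamma_cell(x1, x2, x3):
    """Output = NOT(X1)*X2*X3 (Boolean)."""
    return int((not x1) and x2 and x3)
```

For example, `delta_cell` fires when X3 = 0 and X4 = 1 even if all other inputs are silent, matching case (ii) in the text.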

2.2. Local Orientation Detection Neuron

In this section, we describe in detail the structure of the neuron and how it detects orientation. We hypothesized that simple ganglion neurons can provide orientation information by detecting light signals in themselves and around them.
For the sake of simplicity, we set the receptive field as a two-dimensional M × N region; each region corresponds to the smallest visible area [44]. When light hits a receptive region, the electrical signal is transmitted through its photoreceptors to ganglion cells, which process various kinds of visual information. The input signal can be represented as Xij, where i and j denote the position in the two-dimensional receptive field. For the input signal Xij, we used the current region and the eight surrounding regions, from which the local orientation information can be obtained, as presented in Figure 2.
In this study, the receptive field was set to a 3 × 3 matrix. Thus, the active states of eight neurons corresponding to four orientations can be obtained, where 135° and 315° are the 135° incline, 90° and 270° are vertical, 45° and 225° are the 45° incline, and 0° and 180° are horizontal. In addition, more orientation information can be obtained by increasing the size of the receptive field.
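As an illustration, the eight local orientation detection neurons on a 3 × 3 receptive field can be sketched as follows (the neighbour offsets are our assumption of how each neuron selects its adjacent photoreceptor; row 0 is the top of the window):

```python
# Each hypothetical neuron ANDs the centre photoreceptor with one adjacent
# photoreceptor; opposite offsets (e.g. 45 and 225) report the same incline.
OFFSETS = {          # angle -> (row, col) offset of the selected neighbour
      0: (0,  1),   45: (-1,  1),   90: (-1, 0),  135: (-1, -1),
    180: (0, -1),  225: ( 1, -1),  270: ( 1, 0),  315: ( 1,  1),
}

def local_orientations(window):
    """window: 3x3 nested list of 0/1 light signals Xij.
    Returns the set of angles whose detection neuron fires."""
    centre = window[1][1]
    return {angle for angle, (dr, dc) in OFFSETS.items()
            if centre and window[1 + dr][1 + dc]}
```

A lit horizontal bar through the middle row, for instance, activates the 0° and 180° neurons and no others.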

2.3. Global Orientation Detection

As mentioned above, local orientation detection neurons respond to the effects of light falling on their receptive fields. Here, we hypothesized that this local information can be used to determine the global orientation. Therefore, we can measure the activity of all local orientation detection neurons in the receptive field and judge the orientation by summarizing the outputs of neurons for different orientations.
To measure the activity of local orientation detection neurons in the two-dimensional receptive field (M × N), there are four possible schemes: (1) one-neuron scheme: a single local orientation detection retinal ganglion neuron scans the eight orientations at each location; (2) multiple-neuron scheme: eight different neurons scan the eight adjacent positions in different orientations to provide local orientation information; (3) neuron-array scheme: a number of non-overlapping neurons slide over the receptive field to provide orientation information; (4) all-neuron scheme: each photoreceptor corresponds to a 3 × 3 receptive field with its own local orientation detection neurons at the eight positions. In each receptive field, local orientation detection neurons can thus extract basic orientation information, and the global orientation can then be judged from the local orientation information. To illustrate how the system performs orientation detection, we used a simple two-dimensional (5 × 5) image with a target angle of 45 degrees, which is shown in Figure 3. Without loss of generality, we used the first scheme. The local orientation detection retinal ganglion neuron scans each position from (1,1) to (5,5) on the receptive field and generates local orientation information. As shown in Figure 3, the activation level of the 45° neurons is the highest, which is consistent with the target.
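Under the first (one-neuron) scheme, the whole procedure can be sketched as a scan over the image with the eight neighbour checks pooled into four line orientations (a self-contained illustration; the neighbour offsets and function names are our assumptions, not from the paper):

```python
# Slide over every interior pixel, fire the eight hypothetical local
# detectors, and fold opposite directions (0/180, 45/225, 90/270, 135/315)
# into the four line orientations; the most activated orientation wins.
OFFSETS = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1),
           180: (0, -1), 225: (1, -1), 270: (1, 0), 315: (1, 1)}

def global_orientation(image):
    """image: 2-D nested list of 0/1. Returns 0, 45, 90, or 135 degrees."""
    counts = {0: 0, 45: 0, 90: 0, 135: 0}
    for r in range(1, len(image) - 1):
        for c in range(1, len(image[0]) - 1):
            if not image[r][c]:
                continue                       # centre photoreceptor is dark
            for angle, (dr, dc) in OFFSETS.items():
                if image[r + dr][c + dc]:
                    counts[angle % 180] += 1   # pool 225 -> 45, 315 -> 135, ...
    return max(counts, key=counts.get)
```

For a 5 × 5 image with a 45° diagonal, as in Figure 3, only the 45° count accumulates, so the function returns 45.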

2.4. Artificial Visual System (AVS)

The visual system consists of the sensory organs (the eyes), the pathways connecting them to the visual cortex, and other parts of the central nervous system. In the visual system, local visual feature detection neurons extract local orientation information and other basic local visual features. These features are then combined by subsequent layers to detect higher-order features. Based on this mechanism, we developed an artificial visual system (AVS), as shown in Figure 4. Layer 1 neurons (the layer of local feature detector neurons (LFDNs)) correspond to neurons in the V1 region of the cortex, such as local orientation detection neurons, and extract basic local visual features. These features are then sent to a subsequent layer (the global feature detection neuron layer), corresponding to the middle temporal (MT) region of the primate brain, to detect higher-order features, for example, the global orientation of an object. Neurons in this layer can simply sum the outputs of the layer 1 neurons, as with the neurons for orientation detection, motion direction detection, motion speed detection, and binocular vision perception; this can be a single layer, two layers (corresponding to V4 and V6), three layers (corresponding to V2, V3, and V5), or even a multilayer network, for example, for pattern recognition. It is worth noting that AVS is a feedforward neural network, and any feedforward neural network can be trained by error backpropagation. The difference between AVS and traditional multilayer neural networks and convolutional neural networks is that the LFDNs of AVS layer 1 can be designed in advance according to prior knowledge, so in most cases they do not need to learn. Even when learning is required, AVS learning can start from a good starting point, which can greatly improve the efficiency and speed of learning.
In addition, the hardware implementation of AVS is simpler and more efficient than CNN, requiring only simple logical calculations for most applications.

3. Experiment

To prove the validity of our proposed mechanism and the mechanism-based AVS, we randomly generated a large number of different 32 × 32 pixel images for testing. We scanned every pixel of each two-dimensional image through a 3 × 3 window, extracted the local orientation information at every pixel with the eight orientation detection neurons, and judged the global orientation from the local orientation information. In the dataset, we generated 10 groups of images with random widths and positions in 4 orientations. In all experiments, the receptive field was set to 3 × 3, and the step size was set to 1. The experimental parameters are shown in Table 1.
As shown in Figure 5 and Figure 6, the objects of different sizes are both at a 135° angle. Figure 7 and Figure 8 show horizontal and vertical objects, respectively. All activated orientations were counted, and the orientation with the strongest signal, namely, the activated orientation with the largest count, was taken as the output result. The experimental results are shown in Figure 5, Figure 6, Figure 7 and Figure 8.
Finally, we demonstrated the universality of this mechanism by detecting the orientation of objects of different sizes, shapes, and positions in thousands of images, among them six irregular objects, shown in Figure 9, with the corresponding bar chart in Figure 10, where the X axis denotes the distinct items and the Y axis represents the activation rates of the four neurons. By testing two irregular objects in different positions and four arrows of different sizes and shapes, we found that the proposed mechanism can accurately detect the orientations of objects with different shapes, positions, length-width ratios, and sizes.
Furthermore, we also evaluated the performance of the single-layer perceptron AVS for global orientation detection with a larger image dataset in which objects were sized 2 pixels, 3 pixels, 4 pixels, 8 pixels, 12 pixels, 16 pixels, 32 pixels, and more than 48 pixels, and placed at different positions with different angles. We repeated each experiment 30 times and obtained the average as the testing result. The testing result is shown in Table 2. The result indicates that, regardless of the size and position of the object, its orientation angle can be accurately recognized by our single-layer perceptron orientation detection system.
To compare the global orientation detection performance of the single-layer perceptron AVS with other methods, CNNs were selected because they have achieved great success in the detection, segmentation, and recognition of objects in images. The convolutional neural network used in the experiments is one of the most typical CNN architectures for handwritten character recognition [45]. Its architecture comprises: (1) a convolutional layer with 30 filters of size 3 × 3, producing 30 feature maps of size 32 × 32; (2) a pooling layer with 2 × 2 maximum pooling; (3) an affine part with a fully connected network from 8192 (30 × 16 × 16) inputs to a hidden layer of 100 units and then to an output layer of 4 units. As the input was a 32 × 32 pixel image, there were a total of 1024 (32 × 32) inputs to the CNN. The convolutional layer produced 30 feature maps of 32 × 32. After 2 × 2 maximum pooling, 8192 (30 × 16 × 16) inputs were applied to the fully connected network from 8192 to the hidden layer of 100 and then to the output layer of 4. The single-layer perceptron AVS has only two layers: (1) a perceptron layer with 4 neuron types and a total of 4096 (4 × 32 × 32) local orientation detection neurons, producing 4 local orientation feature maps of 32 × 32; (2) a pooling layer summing the four local orientation feature maps into four outputs. Compared with the CNN, which has 820,004 parameters, the single-layer perceptron AVS has only 12 (4 × 3) parameters for its local orientation detection neurons, saving a large portion of the parameters and computational cost.
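The quoted parameter counts can be reproduced by straightforward accounting (weights plus biases per layer; the pooling layers contribute no parameters):

```python
# CNN: 3x3 convolution with 30 filters on a 1-channel input,
# then fully connected layers 8192 -> 100 -> 4.
conv = 30 * (3 * 3 * 1 + 1)      # 30 filters, 9 weights + 1 bias each: 300
fc1 = 8192 * 100 + 100           # 8192 -> 100 hidden units: 819,300
fc2 = 100 * 4 + 4                # 100 -> 4 outputs: 404
cnn_params = conv + fc1 + fc2    # 820,004, as stated in the text

# Single-layer perceptron AVS: 4 orientation detector types with
# 3 parameters each (our reading of the "12 (4 x 3)" figure above).
avs_params = 4 * 3

print(cnn_params, avs_params)
```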
We trained the CNN on global orientation detection. The numbers of samples used to train and test the systems were 15,000 and 5000, respectively. The sizes of the objects ranged from 2 pixels to 256 pixels; the shapes of the objects were all different, and the objects were placed randomly. Learning was performed with backpropagation under the Adam optimizer. Figure 11 shows the loss and accuracy curves of the CNN. From the learning curves in Figure 11, we can see that the CNN learned orientation detection well and reached 99.997% identification accuracy; that is to say, the CNN performed very well, compared with the single-layer perceptron AVS's 100% accuracy without any training. Although for most applications the single-layer perceptron AVS does not need learning, it is nevertheless a learnable network and has clear advantages over the CNN in the following aspects: (1) The parameters that need to be trained are far fewer than in CNNs, which are becoming deeper and deeper, with millions of parameters calculated and optimized by machine; (2) the AVS model can learn from very good initial values, which can be obtained from our prior knowledge of the system and task, for example, how many neurons and what kinds of neurons are needed, whereas a CNN can only start from completely random initial values; (3) the single-layer perceptron AVS is guaranteed to converge within a bounded number of iterations [45], whereas CNN training usually incurs considerable learning time and easily falls into local minima; (4) more importantly, learning of the single-layer perceptron AVS is controllable, and its learning results are understandable and explainable, whereas CNN learning is performed completely in a black box: its learning results are neither explainable nor transparent, and its predictions are not traceable by humans.
In addition, since the CNN requires many layers of computation whereas the single-layer perceptron AVS requires only two layers, the hardware implementation of the single-layer perceptron AVS is obviously much simpler and more efficient than that of the CNN. The comparisons of the CNN and the single-layer perceptron AVS are summarized in Table 3.
Finally, to compare the noise resistance of the CNN and the single-layer perceptron AVS, we added noise to the inputs of both systems and observed their behavior. Table 4 summarizes the noise resistance of both systems. From the table, we can see that when 5% noise was added, the CNN's identification accuracy dropped to 90%, while that of the AVS dropped only to 96%. As the noise level increased to 30%, the CNN's identification accuracy dropped dramatically, to 35%, whereas the single-layer perceptron system kept 43% identification accuracy, showing superior noise resistance.
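The text does not specify the noise model; a plausible sketch, assuming salt-and-pepper noise that flips a given fraction of randomly chosen pixels in a binary image, is:

```python
import random

def add_noise(image, fraction, seed=0):
    """Flip `fraction` of the pixels of a 0/1 image and return a copy.
    This noise model is our assumption; the original experiments may differ."""
    rng = random.Random(seed)
    cells = [(r, c) for r in range(len(image)) for c in range(len(image[0]))]
    noisy = [row[:] for row in image]             # leave the input untouched
    for r, c in rng.sample(cells, int(fraction * len(cells))):
        noisy[r][c] ^= 1                          # toggle 0 <-> 1
    return noisy
```

Applied to a 32 × 32 image, a fraction of 0.05 or 0.30 would flip 51 or 307 pixels, respectively.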
This paper introduced an orientation detection mechanism that can be divided into two aspects—local orientation detection neuron and global orientation detection. In the local receptive field, local orientation detection neurons can extract basic orientation information. The proposed mechanism has many desirable characteristics. It can be used in any orientation detection system and appears to be part of the human orientation detection system. This mechanism may be used as a framework for understanding many other fundamental phenomena in visual perception, such as orientational perception, motion speed perception, and binocular visual perception, as shown in Table 5.
Although the mechanism is based on a highly simplified model and ignores some details of the visual system and our brain function, it provides a way to quantitatively explain many known features in the neurobiology of visual phenomena, which may help neuroanatomy and neurophysiology by allowing researchers to review their observations and find the corresponding structures and functions. Conversely, advances in the biological sciences may also lead to improved and more elaborate mechanisms.

4. Conclusions

In this paper, we proposed a mechanism for global orientation detection that introduces local orientation detection neurons to compute local orientation, and introduced a new orientation detection mechanism based on a single-layer perceptron AVS. Given that neurons can only perform simple neural calculations, we assumed that some neurons can locally detect only specific orientations of objects. We introduced the idea of local receptive fields into our mechanism, where each piece of local information is collected by a single local orientation detection neuron. According to the numbers of activated orientation detection neurons, the global orientation angle of the object was determined by the most activated orientation detection neurons. We used a single-layer perceptron to implement the global orientation detection system, and the effectiveness of the system was proved by many computer experiments. The experimental results showed that it has good recognition accuracy regardless of the size, position, and orientation of the object. This mechanism and the mechanism-based AVS have many desirable properties that can be used in any artificial visual perception system, and they seem to be important parts of the human visual system. The proposed mechanism and mechanism-based AVS can serve as frameworks for understanding many other fundamental phenomena in visual perception, including orientation perception, motion direction perception, motion speed perception, and binocular vision perception. In addition, the proposed mechanism and mechanism-based AVS provide functional frameworks for visual computing in the primary visual cortex, for understanding how visual inputs are segmented and reassembled at different stages of the visual system, and for understanding how functions are divided between different elements of the visual circuit.
The mechanism by which the primary visual cortex acts as a sensory system could also help us understand how other sensory systems, such as smell, taste, and touch, are encoded at the level of cortical circuits. Although the proposed mechanism and mechanism-based AVS rely on a highly simplified model, ignoring some of the known functions of the visual system and lacking detailed information, they provide a way to quantitatively interpret many known neurobiological visual phenomena and experiments and may also help explain neuroanatomy and neurophysiology, by reviewing existing observations or performing new experiments to find the corresponding structures and functions. Conversely, future advances in biological science could also help us further refine the mechanism and mechanism-based AVS. Finally, to demonstrate the superiority of the single-layer perceptron AVS, we compared it with a traditional convolutional neural network (CNN) on the orientation detection task and found that the single-layer perceptron AVS surpasses the CNN in recognition accuracy, noise immunity, computation and learning cost, hardware implementation and reasoning, and biological plausibility. Therefore, we believe that AVS is likely to replace CNN in such tasks in the near future. Subsequent research will focus on broadening the application field while keeping the model simple, adding color recognition and grayscale recognition, and further simulating binocular vision for 3D image recognition, to provide a mechanism that quantitatively explains many known neurobiological visual phenomena and experiments.

Author Contributions

Conceptualization, T.Z. and X.Z.; methodology, Y.T.; software, X.Z.; validation, T.Z., Y.T. and X.Z.; formal analysis, X.Z.; investigation, Y.T.; resources, T.Z.; data curation, X.Z.; writing—original draft preparation, X.Z.; writing—review and editing, T.Z.; visualization, Y.T.; supervision, Y.T.; project administration, T.Z.; funding acquisition, T.Z. All authors have read and agreed to the published version of the manuscript.


Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References


  1. Hubel, D.H.; Wiesel, T.N. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J. Physiol. 1962, 160, 106–154. [Google Scholar] [CrossRef] [PubMed]
  2. Hubel, D.H.; Wiesel, T.N. Receptive fields of single neurones in the cat’s striate cortex. J. Physiol. 1959, 148, 574–591. [Google Scholar] [CrossRef] [PubMed]
  3. Gilbert, C.D.; Li, W. Adult visual cortical plasticity. Neuron 2012, 75, 250–264. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Hubel, D.H.; Wiesel, T.N. Exploration of the primary visual cortex, 1955–1978. Nature 1982, 299, 515–524. [Google Scholar] [CrossRef] [PubMed]
  5. Hubel, D.H.; Wiesel, T.N. Receptive fields and functional architecture of monkey striate cortex. J. Physiol. 1968, 195, 215–243. [Google Scholar] [CrossRef]
  6. Jin, J.; Wang, Y.; Swadlow, H.A.; Alonso, J.M. Population receptive fields of ON and OFF thalamic inputs to an orientation column in visual cortex. Nat. Neurosci. 2011, 14, 232–238. [Google Scholar] [CrossRef]
  7. Priebe, N.J.; Ferster, D. Mechanisms of neuronal computation in mammalian visual cortex. Neuron 2012, 75, 194–208. [Google Scholar] [CrossRef] [Green Version]
  8. Wilson, D.E.; Whitney, D.E.; Scholl, B.; Fitzpatrick, D. Orientation selectivity and the functional clustering of synaptic inputs in primary visual cortex. Nat. Neurosci. 2016, 19, 1003–1009. [Google Scholar] [CrossRef]
  9. McLaughlin, D.; Shapley, R.; Shelley, M.; Wielaard, D.J. A neuronal network model of macaque primary visual cortex (v1): Orientation selectivity and dynamics in the input layer 4cα. Proc. Natl. Acad. Sci. USA 2000, 97, 8087–8092. [Google Scholar] [CrossRef] [Green Version]
  10. Chen, C.; Tonegawa, S. Molecular genetic analysis of synaptic plasticity, activity-dependent neural development, learning, and memory in the mammalian brain. Annu. Rev. Neurosci. 1997, 20, 157–184. [Google Scholar] [CrossRef] [Green Version]
  11. Golding, N.L.; Spruston, N. Dendritic sodium spikes are variable triggers of axonal action potentials in hippocampal CA1 pyramidal neurons. Neuron 1998, 21, 1189–1200. [Google Scholar] [CrossRef] [Green Version]
  12. Häusser, M.; Spruston, N.; Stuart, G.J. Diversity and dynamics of dendritic signaling. Science 2000, 290, 739–744. [Google Scholar] [CrossRef] [Green Version]
  13. Martina, M.; Vida, I.; Jonas, P. Distal initiation and active propagation of action potentials in interneuron dendrites. Science 2000, 287, 295–300. [Google Scholar] [CrossRef] [Green Version]
  14. Schwindt, P.C.; Crill, W.E. Local and propagated dendritic action potentials evoked by glutamate iontophoresis on rat neocortical pyramidal neurons. J. Neurophysiol. 1997, 77, 2466–2483. [Google Scholar] [CrossRef] [PubMed]
  15. Stuart, G.; Spruston, N.; Sakmann, B.; Häusser, M. Action potential initiation and backpropagation in neurons of the mammalian CNS. Trends Neurosci. 1997, 20, 125–131. [Google Scholar] [CrossRef]
  16. Velte, T.J.; Masland, R.H. Action potentials in the dendrites of retinal ganglion cells. J. Neurophysiol. 1999, 81, 1412–1417. [Google Scholar] [CrossRef] [Green Version]
  17. Taylor, W.; Vaney, D.I. New directions in retinal research. Trends Neurosci. 2003, 26, 379–385. [Google Scholar] [CrossRef]
  18. Fried, S.I.; Münch, T.A.; Werblin, F.S. Mechanisms and circuitry underlying directional selectivity in the retina. Nature 2002, 420, 411–414. [Google Scholar] [CrossRef]
  19. Oesch, N.; Euler, T.; Taylor, W.R. Direction-selective dendritic action potentials in rabbit retina. Neuron 2005, 47, 739–750. [Google Scholar] [CrossRef] [Green Version]
  20. Koch, C. Biophysics of Computation: Information Processing in Single Neurons; Oxford University Press: Oxford, UK, 2004. [Google Scholar]
  21. Silver, R.A. Neuronal arithmetic. Nat. Rev. Neurosci. 2010, 11, 474–489. [Google Scholar] [CrossRef] [Green Version]
  22. Todo, Y.; Tamura, H.; Yamashita, K.; Tang, Z. Unsupervised learnable neuron model with nonlinear interaction on dendrites. Neural Netw. 2014, 60, 96–103. [Google Scholar] [CrossRef]
  23. Hebb, D.O. The Organization of Behavior: A Neuropsychological Theory; John Wiley and Sons: Hoboken, NJ, USA, 1949. [Google Scholar]
Figure 1. Structure of the dendritic neuron model with inhibitory inputs (■) and excitatory inputs (•): (a) δ cell and (b) γ cell.
Figure 2. The local orientation detection neurons for (a) 0°, (b) 45°, (c) 90°, and (d) 135°.
Figure 3. An example of global orientation detection.
Figure 4. Artificial visual system (AVS).
Figure 5. Experimental result of the mechanism for detecting a 135° bar with a width of 1.
Figure 6. Experimental result of the mechanism for detecting a 135° bar with a width of 4.
Figure 7. Experimental result of the mechanism for detecting a horizontal (0°) bar.
Figure 8. Experimental result of the mechanism for detecting a vertical (90°) bar.
Figure 9. Six objects with different sizes, positions, and shapes.
Figure 10. The activation ratios of four neurons under different aspects.
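The activation ratios of Figure 10 can be read as each orientation detector's share of the total activations over an object. The following is an illustrative sketch only, not the authors' code: the function name and the example vote counts are hypothetical.

```python
# Hypothetical illustration: an activation ratio is one detector's activation
# count divided by the total activations across all four orientation detectors.
def activation_ratios(votes):
    """votes: dict mapping orientation (degrees) -> activation count."""
    total = sum(votes.values())
    if total == 0:
        return {angle: 0.0 for angle in votes}
    return {angle: n / total for angle, n in votes.items()}

# Example: a mostly horizontal object activates the 0-degree detector most often.
ratios = activation_ratios({0: 30, 45: 2, 90: 4, 135: 2})
print(ratios[0])  # 30/38, roughly 0.789
```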
Figure 11. Learning results of loss (a) and accuracy (b) of the CNN.
Table 1. Experimental parameters.
| Image Pixel | Scan Window | Extracted Orientation Information | Output Orientation Information | Scanning Step |
|---|---|---|---|---|
| 32 × 32 | 3 × 3 | 8 | 4 | 1 |
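The scanning scheme of Table 1 can be sketched in code. This is an illustrative reconstruction, not the authors' implementation: the four 3 × 3 templates, the all-pixels-active firing rule, and the function name are assumptions (the paper's mechanism uses eight dendritic local detection neurons), but the image size, window size, and scanning step follow the table.

```python
import numpy as np

# One illustrative 3x3 template per output orientation (0, 45, 90, 135 degrees).
TEMPLATES = {
    0:   np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]]),  # horizontal line
    45:  np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]]),  # 45-degree diagonal
    90:  np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]]),  # vertical line
    135: np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]),  # 135-degree diagonal
}

def detect_global_orientation(image):
    """Slide a 3x3 window (step 1) over a binary image, let each local
    detector vote, and return the orientation with the most votes."""
    h, w = image.shape
    votes = {angle: 0 for angle in TEMPLATES}
    for r in range(h - 2):
        for c in range(w - 2):
            window = image[r:r + 3, c:c + 3]
            for angle, template in TEMPLATES.items():
                # A local detector fires when every template pixel is active.
                if np.all(window[template == 1] == 1):
                    votes[angle] += 1
    if max(votes.values()) == 0:
        return None  # no oriented structure detected
    return max(votes, key=votes.get)

# A diagonal bar on a 32x32 image, as in Figures 5 and 6 (under the convention,
# assumed here, that the main image diagonal is labeled 135 degrees).
print(detect_global_orientation(np.eye(32, dtype=int)))  # 135
```

The global decision is a simple majority vote over local responses, which is why the result is insensitive to the object's size and position: more pixels only add more concordant votes.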
Table 2. Accuracy analysis of the orientation detection system.
| Object Type | | 0° | 45° | 90° | 135° |
|---|---|---|---|---|---|
| 2 pixels | No. of samples | 928 | 841 | 928 | 841 |
| | Correct numbers | 928 | 841 | 928 | 841 |
| 3 pixels | No. of samples | 960 | 900 | 960 | 900 |
| | Correct numbers | 960 | 900 | 960 | 900 |
| 4 pixels | No. of samples | 992 | 961 | 992 | 961 |
| | Correct numbers | 992 | 961 | 992 | 961 |
| 8 pixels | No. of samples | 1699 | 2249 | 1699 | 2249 |
| | Correct numbers | 1699 | 2249 | 1699 | 2249 |
| 12 pixels | No. of samples | 2379 | 3411 | 2379 | 3411 |
| | Correct numbers | 2379 | 3411 | 2379 | 3411 |
| 16 pixels | No. of samples | 1319 | 1489 | 1319 | 1489 |
| | Correct numbers | 1319 | 1489 | 1319 | 1489 |
| 32 pixels | No. of samples | 1284 | 1645 | 1284 | 1645 |
| | Correct numbers | 1284 | 1645 | 1284 | 1645 |
| ≥48 pixels | No. of samples | 2515 | 1275 | 2515 | 1275 |
| | Correct numbers | 2515 | 1275 | 2515 | 1275 |
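In every row of Table 2 the number of correct detections equals the number of samples, i.e., 100% accuracy for all object widths and orientations. A quick check using the values from the 2-pixel row:

```python
# Values taken from the 2-pixel row of Table 2 (orientations in degrees).
samples = {0: 928, 45: 841, 90: 928, 135: 841}
correct = {0: 928, 45: 841, 90: 928, 135: 841}

per_orientation = {angle: correct[angle] / samples[angle] for angle in samples}
overall = sum(correct.values()) / sum(samples.values())
print(per_orientation, overall)  # 1.0 everywhere: every bar detected correctly
```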
Table 3. Comparison between CNN and the single-layer perceptron AVS.
| | Layers | Parameters | Learning Cost | Reasoning | Bio-Soundness | Noise Resistance |
|---|---|---|---|---|---|---|
| CNN | >7 | 820,004 | High | Black Box | Low | Low |
Table 4. Comparison of noise resistance of both CNN and the single-layer perceptron AVS.
Table 5. Table for notations.
| Notation | Meaning |
|---|---|
| Electronics 11 00054 i001 | The output is 0° |
| Electronics 11 00054 i002 | The output is 45° |
| Electronics 11 00054 i003 | The output is 90° |
| Electronics 11 00054 i004 | The output is 135° |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Zhang, X.; Zheng, T.; Todo, Y. The Mechanism of Orientation Detection Based on Artificial Visual System. Electronics 2022, 11, 54.