Article

Quantum-Chromodynamics-Inspired 2D Multicolor LED Matrix to Camera Communication for User-Centric MIMO

Department of Information Technology, National Institute of Technology Karnataka, Mangaluru 575025, Karnataka, India
*
Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(20), 10204; https://doi.org/10.3390/app122010204
Submission received: 10 August 2022 / Revised: 3 October 2022 / Accepted: 5 October 2022 / Published: 11 October 2022
(This article belongs to the Special Issue Optical Camera Communications and Applications)

Abstract:
With the high availability of low-cost and energy-efficient LEDs and cameras, there is increased interest in optical camera communication (OCC) to provide non-radio-frequency-based communication solutions in the domains of advertisement, vehicular communication, and the Internet of Things (IoT). As per the IEEE 802.15.7-2018 standard, new physical-layer clauses support low-frame-rate camera communication with allowable flickering. This paper proposes an OCC system that provides user-centric multiple-input multiple-output (MIMO) communication loosely based on quantum-chromodynamics (QCD) concepts. A QCD–OCC simulator and prototype are proposed, implemented, and evaluated on the basis of the pixel intensity profile, peak signal-to-noise ratio (PSNR), success of reception (%), bit-error rate (BER), and throughput under different ambient lighting conditions and distances. We observed 100% and 84% success of reception using the proposed prototype and simulator, respectively, for a data rate of 720 bps. The maximal tolerable BER of 1.13 × 10⁻² for IoT applications was observed at a maximal distance of 200 cm and a maximal data rate of 3600 bps. The proposed system was also compared with other existing OCC systems with similar hardware and implementation requirements. The proposed QCD–OCC system provided rotation support up to 90° and a throughput of 4.32 kbps for a 30 fps camera.

1. Introduction

With the advent of compact electronic devices embedded with cameras, there is a driving need to build affordable, efficient, and feature-rich cameras, leading to several camera types, including CCTV, drone cams, and car dashcams. These cameras can be further enhanced to provide communication with minor software changes [1]. Meanwhile, in recent years, radio-frequency (RF) spectral saturation has led to increased issues related to cross-talk and bandwidth allocation [2]. Thus, to reduce RF saturation and provide alternative communication techniques, the IEEE 802.15.7r1 task group was established to investigate camera-based visible-light communication (VLC), also known as optical camera communication (OCC) [3]. Compared to photodiode-based VLC, OCC can be quickly adopted since, unlike the former, it does not need dedicated hardware, and existing devices can be readily used [4].
In OCC, the transmitter (Tx) is typically based on light-emitting diodes (LEDs) that support a range of chromatic values, thus providing access to the visible-light spectrum for modulation. The signal is modulated using light properties such as intensity, blink frequency, polarity, chromaticity, and spatiality [5]. On the receiver (Rx) side, depending on the modulation scheme, these features are extracted from the video stream, and corresponding signal demodulation and data reconstruction are performed. In an environment where the distance and orientation between Tx and Rx are constant, demodulation is performed using preset region-of-interest (RoI) extraction [6]. However, support for variable distance and orientation is required for a practical system.
Moreover, as the signal is demodulated using image frames, the frame rate of the camera defines the data rate and throughput of the OCC link [7]. A theoretical model highlighting the relation between camera properties and LED illuminance was described [8]. Although higher-frame-rate cameras can provide higher data rates, they are costlier than regular cameras, which restricts their deployment to a few specific use cases. Considering these limitations, the performance of OCC can be improved by using spatially arranged multiple neopixel LEDs [9] or display screens [10].
The significant challenges of an OCC system are Tx detection, MIMO, the distance of communication, and rotation mitigation or tolerance. The Tx in OCC can be identified using luminance characteristics or the shape of the Tx aperture. Trong-Hop Do et al. [11] proposed a simple LED panel detection algorithm that identifies the brightest pixel and performs a convex fill to obtain the RoI. Simulated results showed decreasing accuracy in RoI identification with increasing distance; however, the proposed system was not tested under practical scenarios where multiple bright pixels might exist. The author of [12] introduced convolutional neural networks (CNNs) for RoI detection; however, the Tx and Rx were restricted to a parallel arrangement without rotation or skew. Xu Sun et al. [13] proposed a hybrid technique for LED panel detection using a modified YOLOv5 object detection model. Although the technique achieved high accuracy for rotated and skewed normal and blurry images, it was computationally intensive. Nonflickering OCC Txs have also been proposed, but most have distance limitations due to the relation among Tx size, distance, and throughput [4,14,15]. Thus, the IEEE 802.15.7-2018 standard specifies clauses for OCC modulation techniques that allow for flickering and provide kbps data rates using dedicated LED panels and displays. Moreover, these transmitters can offer multiple-input multiple-output (MIMO) for Internet of Things (IoT) devices in an indoor environment.
In an OCC system, MIMO can be provisioned using frequency- or color-based modulation techniques. Huy Nguyen et al. [16] proposed and demonstrated MIMO-OFDM using two Tx sources and achieved a combined throughput of 3.840 kbps for a fixed distance. However, the system does not support rotation, as the spatial arrangement of multiple Txs induces errors in demodulation. Shivani Telli et al. [17] proposed an RS-MIMO for the IoT using grouped LED arrays as a transmitter to provide rotation support in a flicker-free RS–OCC system. Maximal throughputs of 1.9 and 0.9 kbps were obtained for 0° and 50° rotations, respectively; however, the throughput also depended on the distance between Tx and Rx. An LED array-based active marker design was proposed by Lorenz Gorse et al. [18] to provide efficient marker detection for augmented reality and positioning applications under low-ambient-light conditions. In [6], a 2D LED array-based OCC system with rotation support and MIMO was proposed. The system provided a BER of 10⁻⁴ at 11 dB SNR; however, the throughput could be further enhanced by using the chromatic features of light [19,20].
User-centric MIMO provides variable bandwidth depending upon the number of devices under the VLC access point [21,22]. Figure 1a,b shows the bandwidth distribution for three OCC Rx devices and one Rx device, respectively. OCC–MIMO can be achieved by modulating signals using color mapping techniques; notably, a few phenomena in nature inherently exhibit changes in color based on changes in their underlying composition.
Quantum chromodynamics (QCD) is a branch of quantum physics that studies the chromatic ("color") properties of quantum particles [23]. Subatomic particles such as neutrons and protons are hadrons, composed of quarks held together by the strong force mediated by the exchange of gluons. Depending upon the number of quarks, hadrons are classified into baryons and mesons: baryons consist of three quarks, while mesons consist of a quark–antiquark pair [24,25]. Baryons hold their quarks together by exchanging gluons, keeping the overall hadron stable. Loosely inspired by this baryon gluon-exchange model, this paper proposes a 2D 8 × 8 neopixel LED matrix for OCC. The objectives of this paper are as follows:
  • To use quantum-chromodynamics principles for data-to-color mapping using an 8 × 8 LED matrix.
  • To provide rotation support for the OCC using a QR code-inspired anchoring pattern.
  • To build and evaluate a simulator of the proposed QCD-inspired OCC modulation with rotation support and MIMO.
  • To build a low-cost prototype of the proposed QCD–OCC system and compare its success of reception (%) with that of the proposed simulator.
  • To evaluate the performance of the proposed system on the basis of metrics such as PSNR, success of reception (%), intensity profile, throughput, and BER.
The remainder of the paper is organized as follows: Section 2 describes the proposed modulation scheme inspired by QCD and the proposed simulation and prototype design. Section 3 highlights the implementation details of the proposed system. Section 4 discusses the experimental setup, experiments, and corresponding results. Conclusions are drawn in Section 5.

2. Proposed Communication System

Section 1 briefly introduces the concept of QCD. This section provides a detailed discussion on the proposed modulation technique inspired by the behavior of subatomic particles.

2.1. Proposed Technique

At the subatomic level, the exchange of gluons between quarks creates a strong force that keeps the hadron (the constituent of neutrons and protons) stable. Baryons, a type of hadron, are generally represented with white (W) to indicate their stability. Quarks are represented by red (R), green (G), and blue (B), as shown in Figure 2. Cyan, violet, and yellow are used to describe the absence of red, green, and blue from white, respectively; these also represent antired (R̂), antigreen (Ĝ), and antiblue (B̂). Thus, combining an original color and its complement always produces white, representing a stable hadron, as shown in Equation (1):
R + R̂ = G + Ĝ = B + B̂ = W    (1)
Inspired by this analogy, our proposed technique assigns a color to an LED in an 8 × 8 LED matrix on the basis of the absence or presence of a bit in the set of 3 bits. Table 1 shows the color mapping for an LED in the resultant output. These groups of 3 bits can be generated either by using three separate binary data streams or by using a single data stream divided into groups of 3 bits.
Considering three data streams to be transmitted for users U1, U2, and U3, three colors are assigned, namely, R → U1, G → U2, and B → U3. Thus, from Equation (1), the relation between user data streams and the respective assigned colors is: R + G (U1 + U2) → B̂; G + B (U2 + U3) → R̂; R + B (U1 + U3) → Ĝ; and R + G + B (U1 + U2 + U3) → W. For example, if the U1 data bit is 0, and the U2 and U3 data bits are 1, then R̂ is set at the respective LED position in the 8 × 8 neopixel LED matrix.
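As a concrete illustration, the mapping above can be sketched in a few lines, where each RGB channel is driven at full intensity when the corresponding user's bit is 1, as in the transmitter algorithms of Section 3; the `COLOR_NAMES` labels are illustrative and not part of the paper:

```python
# Sketch of the baryon-inspired bit-to-color mapping implied by Equation (1)
# and the user/color relations above.
COLOR_NAMES = {
    (0, 0, 0): "off",
    (1, 0, 0): "red (R)",
    (0, 1, 0): "green (G)",
    (0, 0, 1): "blue (B)",
    (1, 1, 0): "yellow (anti-blue)",
    (0, 1, 1): "cyan (anti-red)",
    (1, 0, 1): "violet (anti-green)",
    (1, 1, 1): "white (W)",
}

def bits_to_rgb(b1, b2, b3):
    """Map the (U1, U2, U3) bit triplet to an 8-bit RGB value."""
    return (255 * b1, 255 * b2, 255 * b3)

# Example from the text: U1 = 0, U2 = U3 = 1 -> G + B = cyan = anti-red
print(bits_to_rgb(0, 1, 1))    # (0, 255, 255)
```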

2.2. Overview of Proposed Communication System

The proposed QCD–OCC system is divided into two parts, a transmitter (Tx) and a receiver (Rx), separated by a free-space channel, as shown in Figure 3. On the transmitter side, the data to be transmitted can take one of two formats, where (R, G, B)_i, i = 1, …, 48, is an array of size 48 containing color information at the respective indices for three individual data streams or a single one:
  • Format 1: In this format, three separate binary data streams, D1, D2, and D3, of variable lengths for users U1, U2, and U3 are used to generate a 3D matrix of size 1 × 48 × 3, as shown in Equation (2), where matrices R, G, and B are submatrices along the Z axis at indices 1, 2, and 3, respectively.
    (R, G, B)_i = (D1_i, D2_i, D3_i),  i = 1, …, 48    (2)
  • Format 2: In this format, a single binary data stream D is folded to obtain a 3D matrix of size 1 × 48 × 3, as shown in Equation (3).
    (R, G, B)_i = (D_i, D_{48+i}, D_{96+i}),  i = 1, …, 48    (3)
Data representation in Format 1 enables MIMO and supports simultaneous communication with three users, while Format 2 provides 3× bandwidth compared to Format 1, directed at one user. Format 1 is used throughout this paper for ease of understanding.
Each 1 × 48 × 3 matrix obtained postmultiplexing is reshaped into a corresponding 6 × 8 × 3 matrix. These 6 × 8 × 3 matrices are converted into 8 × 8 × 3 matrices by integrating anchor patterns. Anchor patterns are corner 2 × 2 submatrices that allow the receiver to recover orientation information from the image. The resultant 8 × 8 × 3 matrix is converted into an 8 × 8 color image using the bit color mapping of Table 1. The color of each resultant cell is mapped onto the LEDs on the basis of the color value and its position in the matrix. An LED driver controls the status of the LEDs as per the input from the mapping and produces an 8 × 8 multicolor LED output.
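One channel of this frame assembly can be sketched as follows, following the frame format of Figure 5a and Algorithms 3–5 (byte 0 into the top 2 × 4 region, byte 1 into the bottom 2 × 4 region, bytes 2–5 into the middle 4 × 8 region, with one all-zero and three all-one 2 × 2 anchor corners); which corner holds the zero anchor is an assumption:

```python
import numpy as np

# Hedged sketch of one channel of Tx frame assembly (48 data bits -> 8x8 frame).
def assemble_frame(bits48):
    bits = list(bits48)
    frame = np.zeros((8, 8), dtype=int)   # top-left corner stays the zero anchor
    frame[0:2, 6:8] = 1                   # all-one anchor corners
    frame[6:8, 0:2] = 1
    frame[6:8, 6:8] = 1
    frame[0:2, 2:6] = np.reshape(bits[0:8], (2, 4))    # top 2x4 data region
    frame[6:8, 2:6] = np.reshape(bits[8:16], (2, 4))   # bottom 2x4 data region
    frame[2:6, :] = np.reshape(bits[16:48], (4, 8))    # middle 4x8 data region
    return frame
```

The same assembly is applied per channel, and the three resulting 8 × 8 matrices are stacked into the 8 × 8 × 3 output.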
A continuous video capturing the transmitter LED panel is recorded on the receiver side. The video is processed frame by frame to extract the transmitter's position using image processing techniques. The detected region of interest (RoI) is resized and converted into an 8 × 8 × 3 image that undergoes channel separation, providing three 8 × 8 binary matrices. The data are regenerated at the receiver side after the removal of anchor patterns.
A simulator and prototype of the proposed QCD–OCC system were implemented, and their performance was evaluated on the basis of the peak signal-to-noise ratio (PSNR), data throughput, success of reception (%), and bit-error rate (BER) at various distances and under different ambient lighting conditions. PSNR serves as a metric for evaluating channel quality. It is computed using the mean square error (MSE) between the images containing the inactive Tx (I_in) and the active Tx (I_ac) [26]. Equation (4) shows the computation of MSE, where Rows and Cols are the height and width of the input images in pixels, respectively.
MSE(I_in, I_ac) = (1 / (Rows × Cols)) × Σ_{i=1}^{Rows} Σ_{j=1}^{Cols} (I_in(i, j) − I_ac(i, j))²    (4)
The peak pixel intensity (I_peak) in the received image I_ac is computed as per Equation (5):
I_peak = max_{Rows}(max_{Cols}(I_ac))    (5)
From Equations (4) and (5), PSNR can be expressed as:
PSNR(I_in, I_ac) = 10 log₁₀(I_peak² / MSE(I_in, I_ac))    (6)
The data throughput (D_t) of the proposed system was evaluated as per Equation (7), where N_LEDs is the number of LEDs on the Tx panel, N_ColorBits is the number of bits required to represent a color, N_AnchorLEDs is the number of LEDs used for the anchor pattern, and fps is the frame rate of the Rx camera.
D_t = (N_LEDs − N_AnchorLEDs) × N_ColorBits × (fps / 2)    (7)
The percentage success of reception is calculated as per Equation (8), where T_b is the total number of transmitted bits and E_b is the number of error bits.
Success of reception (%) = ((T_b − E_b) / T_b) × 100    (8)
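These metrics translate directly into code. The sketch below assumes the Equation (7) reading in which the data LEDs carry N_ColorBits bits each at half the camera frame rate, so the default parameters (64 LEDs, 16 anchor LEDs, 3 color bits, 30 fps) reproduce the 2160 bps combined throughput reported in Table 3:

```python
import numpy as np

# Sketch of the evaluation metrics in Equations (4)-(8).
def psnr(i_in, i_ac):
    """PSNR between the inactive-Tx and active-Tx images (Eqs. (4)-(6))."""
    mse = np.mean((i_in.astype(float) - i_ac.astype(float)) ** 2)  # Eq. (4)
    i_peak = float(i_ac.max())                                     # Eq. (5)
    return 10 * np.log10(i_peak ** 2 / mse)         # Eq. (6); assumes mse > 0

def throughput(n_leds=64, n_color_bits=3, n_anchor_leds=16, fps=30):
    """Data throughput D_t (Eq. (7)): data LEDs x bits per LED x Tx frame rate."""
    return (n_leds - n_anchor_leds) * n_color_bits * fps / 2

def success_of_reception(t_b, e_b):
    """Percentage success of reception (Eq. (8))."""
    return (t_b - e_b) / t_b * 100
```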

3. Realization of Proposed QCD–OCC System

3.1. Implementation of Proposed QCD–OCC Transmitter Simulation

A simulator was built to study the challenges for a practical implementation of the proposed QCD–OCC system. Similar to the proposed system, the simulator has a Tx and an Rx. Figure 4 shows the components of the Tx simulation. A random binary data generator function generates binary streams of variable lengths. For example, if P is a pseudorandom number between 0 and 1, and S is the size limit, then the generated data D_G are as shown in Equation (9):
D_G = [x₁, x₂, x₃, …, x_N]    (9)
where N = P × S and
x = 1 if P < 0.5, and x = 0 otherwise.
These streams are written into three files to simulate three individual data sources. The maximal stream length (L_max) across the three files is computed as per Equation (10), where L1, L2, and L3 are the lengths of the data streams in Files 1, 2, and 3, respectively.
L_max = max(L1, L2, L3)    (10)
The lengths of the data streams are equalized by padding zeros. The number of zeros (Z_i) to be padded is determined according to Equation (11), where i is the file number.
Z_i = 48 × ⌈L_max / 48⌉ − L_i    (11)
The individual data streams are appended with the corresponding number of zeros to obtain new data streams of lengths L_i^new = L_i + Z_i. The resultant data streams are of equal length, a multiple of 48. The total number of frames (N_F) required to transmit each file is L_i^new ÷ 48; thus, each frame carries 48 bits of an individual file. Each 1 × 48 vector is reshaped into a 6 × 8 binary matrix. These 6 × 8 matrices are further converted into 8 × 8 matrices as per the proposed Tx frame format shown in Figure 5a, where A_i and D_i are the anchor and data positions, respectively. The corner 2 × 2 matrices are zeros and are overwritten by an anchor pattern postmultiplexing. Each frame with common indices is forwarded to a color channel multiplexer, where the 8 × 8 matrices from the three file frames, namely, R_{8×8}^Tx, G_{8×8}^Tx, and B_{8×8}^Tx, are multiplexed into a single 8 × 8 × 3 matrix O_{8×8×3}^Tx. Figure 5b represents the steps involved in color channel multiplexing. An 8 × 8 anchoring pattern is added across the three channels, obtaining the output 8 × 8 × 3 matrix. The anchor pattern values are assigned as [A1, A2, A3, A4] = [0, 0, 0, 0] and [A5, …, A16] = [1, …, 1]. In the next step, the color of each cell of the final 8 × 8 matrix is defined per the baryon-color-model-inspired bit color mapping of Table 1.
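Assuming the reading of Equation (11) in which every stream is padded up to the smallest multiple of 48 not less than L_max, the padding computation can be sketched as:

```python
import math

# Sketch of the stream-length equalization of Equations (10)-(11): all
# streams end up the same length, a multiple of 48.
def zeros_to_pad(lengths):
    l_max = max(lengths)                       # Eq. (10)
    target = 48 * math.ceil(l_max / 48)        # common padded length
    return [target - l for l in lengths]       # Eq. (11): zeros per stream

print(zeros_to_pad([100, 130, 97]))    # target 144 -> [44, 14, 47]
```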
As an 8 × 8 image is very small in terms of pixels, it was rescaled to 64 × 64. The final generated color image was overlaid on a camouflage base image sized 1920 × 1080 × 3 using submatrix substitution, and Gaussian noise was added to simulate the process involved in RoI extraction. A magnification factor was integrated to simulate the change in Tx size with distance from the Rx. The resultant image underwent a rotation of 45° or 90° to verify the rotation support of the proposed system. Each image was named according to its frame number and stored in a folder.

3.2. Implementation of Proposed QCD–OCC Receiver Simulation

The steps involved in receiver simulation are shown in Figure 6. The process starts with accessing the folder containing the images generated by the Tx simulator. Each image is processed sequentially as per the naming convention. Since, in a practical system, packet drops are probable due to frame sampling errors, this error is simulated by skipping an image (S(i)) as follows:
S(i) = i, if P < 1 − D_r / (48 × 3 × F_r), and the image is dropped otherwise,    (12)
where i is an image from the folder, F_r is the frame rate of the camera, and D_r is the transmitter data rate. The probability of a packet drop for the selected image decreases with an increase in the camera frame rate and/or a decrease in the transmitter data rate. The selected image undergoes noise removal using a Wiener filter. The extraction of the RoI from the first image is a sequence of preprocessing steps. The input color image is converted into grayscale (Gr) as per Equation (13), where x and y index the rows and columns of pixels, and R, G, and B are the color channels.
Gr_{x,y} = 0.299 R_{x,y} + 0.587 G_{x,y} + 0.114 B_{x,y}    (13)
A threshold (T) for binarization is estimated as per Equation (14), where the maximal grayscale pixel intensity is used as the threshold. Equation (15) provides a binarized image (B):
T = max_x(max_y(Gr_{x,y}))    (14)
B_{x,y} = 1 if Gr_{x,y} ≥ T, and 0 otherwise    (15)
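A minimal sketch of Equations (13)–(15) follows; the reading of Equation (15) under which binarization keeps only pixels reaching the peak intensity is our interpretation:

```python
import numpy as np

# Sketch of the Rx preprocessing in Equations (13)-(15): luma-weighted
# grayscale conversion, then binarization against the peak intensity.
def to_gray(img):
    """Convert an HxWx3 RGB array to grayscale per Equation (13)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def binarize(gray):
    """Binarize against the maximal grayscale intensity (Eqs. (14)-(15))."""
    t = gray.max()                     # Eq. (14): peak intensity as threshold
    return (gray >= t).astype(int)     # Eq. (15)
```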
A square structural element (strel) of size n, as given in Equation (16), is used to dilate the binarized image. Equation (17) provides the dilated binary image.
strel_n = [1]_{n×n}  (an n × n matrix of ones)    (16)
B ⊕ strel = { z | (strel̂)_z ∩ B ≠ ∅ }    (17)
Of the n blobs in the dilated image, the blob with the maximal area (B_M) is selected as per Equation (18), where B_{A_n} is the area of blob n.
B_M = max_n(B_{A_n})    (18)
The coordinates of B_M are extracted from each channel to provide the RoI, as shown in Equation (19), where p_{x,y} is the pixel at position (x, y).
RoI = { p_{x,y} | (R_{x,y} ∩ B_M) + (G_{x,y} ∩ B_M) + (B_{x,y} ∩ B_M) }    (19)
The coordinates of the extracted RoI are applied to all subsequent images to crop the Tx; it is assumed that subsequent images have the same Tx position and magnification as those of the first reference image. A square bounding box is drawn around the Tx, the area of the bounding box is compared with the area of the blob in the RoI, and RoI rotation is performed to align the RoI with the bounding box. A Hough transform is applied to identify the angle (θ) by which the Tx is rotated, and a rotation of the Tx in the opposite direction is performed. Thus, the aligned Tx (A_i^Tx) from the cropped RoI of the i-th image can be expressed as shown in Equation (20), where HT is the Hough transform.
A_i^Tx = rotate(RoI_i, −HT(RoI_i))    (20)
The aligned RoI of the Tx is further quantized into 3-bit color, and a reshaping function is applied to convert the resultant image into an 8 × 8 × 3 matrix. The values at the anchoring positions are extracted and compared with the expected values. A counterclockwise matrix rotation is performed until all anchor values match or four iterations are over. The rotated matrix (O) can be expressed as shown in Equation (21), where T_r is a transpose operation and C_r is a column-reverse operation.
O_{8×8}^Rx = C_r(T_r(A_{8×8}))    (21)
Each cell in the resultant matrix O_{8×8}^Rx consists of RGB values. Through channel separation, O_{8×8×3}^Rx is converted into three 8 × 8 matrices, R_{8×8}^Rx, G_{8×8}^Rx, and B_{8×8}^Rx. Each matrix is binarized, and the resultant 8 × 8 binary matrix is converted into a 64-bit stream. The anchor values are removed from the corresponding positions of these three streams to provide the final 48 bits per stream. The values are written to separate output files, and the entire process is repeated for the next image until no more images exist in the input folder. The output of each iteration is written in append mode to the respective files.
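The anchor-driven de-rotation of Equation (21) can be sketched as below; placing the single all-zero 2 × 2 anchor corner at the top left as the expected orientation is our assumption:

```python
import numpy as np

# Sketch of the anchor-driven de-rotation (Equation (21)). One 90-degree step
# is implemented as a transpose followed by a column reversal, C_r(T_r(A)).
def derotate(a):
    a = np.asarray(a)
    for _ in range(4):                 # at most four 90-degree attempts
        if not a[0:2, 0:2].any():      # anchors match: zero corner at top left
            return a
        a = a.T[:, ::-1]               # C_r(T_r(A)): one 90-degree rotation
    return a                           # give up after four rotations
```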
Figure 7a shows the image generated by the simulated transmitter, which consisted of a simulated Tx LED panel and background. Figure 7b shows a screenshot of the Rx simulator during demodulation steps that consisted of the cropped RoI, channel-separated binary matrices, PSNR, and decoded data.

3.3. Implementation of Proposed QCD–OCC System Prototype

The proposed QCD–OCC system prototype consists of an 8 × 8 neopixel LED matrix (Adafruit WS2812B) controlled by a programmable board (Arduino Uno) as the Tx, and a computer with an attached webcam (Logitech C310) as the Rx. The circuit diagram and the implemented Tx are shown in Figure 8a,b. Pins V+, V−, and IN of the LED matrix were connected to the 5 V, ground (GND), and digital pin 6 of the Arduino board, in that order. Pin 6 was set to output using the pinMode(6, OUTPUT) command. Algorithm 1 shows the steps involved in transmitting data using the QCD–OCC Tx. Three predefined message strings were used as input, namely, F1, F2, and F3. M is the maximal length of these three strings, computed using the length and max functions. N_frames is the number of frames required to transmit the overall data. frame_i is initialized to 0 and used to track the current frame number. Adafruit_NeoPixel is a class of the Adafruit library that provides access to functions for controlling neopixel LEDs. pix is an object of the Adafruit_NeoPixel class, and its constructor accepts the number of LEDs (64), the connected pin number (6), and the type of the neopixel LEDs (NEO_GRB + NEO_KHZ800) as parameters. delayVal is the delay between successive Tx frames; thus, if Tx_frames is 5 per second, then delayVal is 200 ms. The Loop function (Line 10) sets the anchor pattern (Line 11), the top 2 × 4 data pattern (Line 12), the bottom 2 × 4 data pattern (Line 13), and the middle 4 × 8 data pattern (Line 14) of the Tx frame as per the input characters.
Algorithm 1 QCD–OCC for transmitter.
 1: Initialize strings F1, F2, and F3
 2: M = max(F1.length(), max(F2.length(), F3.length()))
 3: N_frames = ceil((M × 8) ÷ 48)
 4: frame_i = 0
 5: Adafruit_NeoPixel pix = Adafruit_NeoPixel(64, 6, type)
 6: delayVal = 1000 / Tx_frames
 7: function Setup
 8:     Set pin mode to output for pin 6
 9: end function
10: function Loop
11:     SetAnchorPattern()
12:     SetTopDataPattern(frame_i)
13:     SetBottomDataPattern(frame_i)
14:     SetMidDataPattern(frame_i)
15:     pix.show()
16:     delay(delayVal)
17:     frame_i++
18:     if mod(frame_i, N_frames) = 0 then
19:         frame_i = 0
20:     end if
21: end function
Algorithms 2–5 show the implementation of the supporting functions. The SetAnchorPattern function sets the pattern of the four corner 2 × 2 positions as zero or unity matrices per the Tx frame format. Each Tx frame represents 6 bytes per channel. The bitRead function reads the bit value of a byte at the specified index.
Algorithm 2 Supporting functions: SetAnchorPattern.
function SetAnchorPattern
    A1(2, 2) = 0, A2(2, 2) = 1, A3(2, 2) = 1, A4(2, 2) = 1
end function
Algorithm 3 Supporting functions: SetTopDataPattern.
function SetTopDataPattern(frame_i)
    char F1c1 = F1.charAt(6 × frame_i)
    char F2c1 = F2.charAt(6 × frame_i)
    char F3c1 = F3.charAt(6 × frame_i)
    for (i = 2; i < 6; i++) do
        TD(0, i) = pix.setPixelColor(i, pix.Color(255 × bitRead(F1c1, 7 − (i − 2)), 255 × bitRead(F2c1, 7 − (i − 2)), 255 × bitRead(F3c1, 7 − (i − 2))))
    end for
    for (i = 10; i < 14; i++) do
        TD(1, i − 8) = pix.setPixelColor(i, pix.Color(255 × bitRead(F1c1, 3 − (i − 10)), 255 × bitRead(F2c1, 3 − (i − 10)), 255 × bitRead(F3c1, 3 − (i − 10))))
    end for
end function
Algorithm 4 Supporting functions: SetBottomDataPattern.
function SetBottomDataPattern(frame_i)
    char F1c1 = F1.charAt(6 × frame_i + 1)
    char F2c1 = F2.charAt(6 × frame_i + 1)
    char F3c1 = F3.charAt(6 × frame_i + 1)
    for (i = 51; i < 54; i++) do
        BD(6, i − 49) = pix.setPixelColor(i, pix.Color(255 × bitRead(F1c1, 7 − (i − 51)), 255 × bitRead(F2c1, 7 − (i − 51)), 255 × bitRead(F3c1, 7 − (i − 51))))
    end for
    for (i = 58; i < 62; i++) do
        BD(7, i − 56) = pix.setPixelColor(i, pix.Color(255 × bitRead(F1c1, 3 − (i − 58)), 255 × bitRead(F2c1, 3 − (i − 58)), 255 × bitRead(F3c1, 3 − (i − 58))))
    end for
end function
Algorithm 5 Supporting functions: SetMidDataPattern.
function SetMidDataPattern(frame_i)
    for (j = 2; j < 6; j++) do
        char F1c1 = F1.charAt(6 × frame_i + j)
        char F2c1 = F2.charAt(6 × frame_i + j)
        char F3c1 = F3.charAt(6 × frame_i + j)
        for (i = 7; i > 0; i−−) do
            MD(2 + j, 7 − i) = pix.setPixelColor(i, pix.Color(255 × bitRead(F1c1, i), 255 × bitRead(F2c1, i), 255 × bitRead(F3c1, i)))
        end for
    end for
end function
The SetTopDataPattern function converts the first byte of each frame per channel into the top 2 × 4 LED color matrix. The value of each color channel is computed as the product of the bit value and 255; thus, the channel value is 255 for bit value 1 and 0 for bit value 0. pix.Color accepts R, G, and B values in the range 0–255; for example, pix.Color(255, 255, 255) represents white. The setPixelColor function sets the color of the corresponding LED. Similar to SetTopDataPattern, the SetBottomDataPattern function converts the second byte of each Tx frame per channel into the bottom 2 × 4 LED color matrix, and the SetMidDataPattern function converts the remaining 4 bytes of each Tx frame per channel into the middle 4 × 8 LED color matrix.
The implementation of the proposed prototype Rx is similar to that of the proposed Rx simulator; the only changes concern the input and the removal of the frame drop function. In the prototype Rx, the input is a direct camera feed whose image capture rate is controlled using the delay function. Figure 9 shows the hardware implementation of the proposed QCD–OCC system, with components such as the Rx camera, 8 × 8 LED Tx, Arduino Uno, and MATLAB-based app frontend highlighted.

4. Experimental Setup and Results

Table 2 shows the parameters and corresponding values used during the performance evaluation of the proposed system. Preliminary experiments were conducted to study the impact of distance on the pixel intensity of the Tx. Figure 10a–c shows the original Tx and the corresponding side and top views of its intensity profiles at distances of 50, 100, and 200 cm, respectively. As shown in Figure 10a,b, for L = 50 and 100 cm, there was no significant difference in pixel intensity within the RoI. The intensities of the white, red, and yellow LEDs were more prominent across the three images, while colors such as blue, green, and cyan had comparatively lower intensity. Figure 10c shows that, for L = 200 cm, the intensity of cyan became negligible, resulting in faulty decoding of bits.
The success of reception at a distance of 50 cm for varying Tx data rates was evaluated for the proposed QCD–OCC simulator and prototype. The magnification factor for the simulated Tx was kept at 0.9 to match the size of the prototype Tx at a 50 cm distance. The ambient light intensity for the prototype was ≈27–31 Lux. The mean of 10 iterations per data rate was used to generate the comparison. Figure 11 shows the obtained results. The success of reception of the proposed QCD–OCC prototype dropped drastically once the Tx frame rate exceeded 15, mainly due to sampling errors that resulted in capturing Tx images during frame transitions. Since the proposed equation used for the simulation of frame drops does not separately handle the condition of Tx frame rate > fps/2, the figure shows a linear drop in the success of reception for the simulator.
The results of the evaluation of PSNR under various lighting conditions are shown in Figure 12a. With the increase in distance, the PSNR value decreased irrespective of lighting conditions. The PSNR value was also highest for the lowest ambient light intensity and gradually reduced with increasing ambient light intensity. As distance and ambient light intensity increased, the difference between the reference and Tx images decreased, lowering the PSNR proportionally. Figure 12b shows the success of reception at various distances under different lighting conditions; the average of 10 samples per distance per lighting condition is plotted. For ambient light of ≈27 Lux, reception success was ≈100% at distances of 50, 100, 150, and 200 cm. The success rate gradually decreased with increasing ambient light and distance. These results show that the proposed system performs well under low-light conditions.
The BER of the proposed prototype was evaluated under a low-light condition of ≈27–30 lux at distances of 50, 100, 150, and 200 cm for Tx data rates of 720, 1440, 2160, 2880, 3600, and 4320 bps. As shown in Figure 13a, the proposed system delivered its minimal BER of 1.37 × 10⁻⁵ at a distance of 50 cm and a Tx data rate of 720 bps. Since a BER of 10⁻³ is sufficient for IoT communication, the BER of 1.21 × 10⁻³ obtained for 2880 bps at a distance of 200 cm is adequate for IoT applications. The performance of the system deteriorated with increasing Tx data rate and distance: increasing the data rate introduces more sampling errors, whereas increasing the distance adds frame drops due to the incorrect detection of LEDs within the group of interfering LEDs.
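BER here is simply the fraction of decoded bits that differ from the transmitted ones; a minimal sketch under that standard definition (the function name is ours):

```python
import numpy as np

def bit_error_rate(sent_bits, received_bits):
    """Fraction of mismatched bits between two equal-length streams."""
    sent = np.asarray(sent_bits)
    received = np.asarray(received_bits)
    if sent.shape != received.shape:
        raise ValueError("bit streams must have the same length")
    return float(np.mean(sent != received))

# One wrong bit out of eight -> BER of 0.125.
example = bit_error_rate([0, 1, 1, 0, 1, 0, 0, 1],
                         [0, 1, 1, 0, 1, 0, 1, 1])
```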
The performance of the proposed system with respect to throughput and rotation support was compared with similar existing OCC techniques that use an 8 × 8 LED panel as the Tx and a 30 fps camera as the Rx, as shown in Figure 13b, where T is the angle by which the Rx is rotated. RS-based OCC systems provide high throughput at a fixed short distance, but the throughput drops as the distance increases, and such systems offer only limited rotation support. The data matrix system provides a throughput of 1.92 × 10³ bps at zero-degree rotation. QR-code-inspired systems provide rotation support; however, their throughput of 1.44 × 10³ bps is lower than that of the data matrix system, since a few LEDs are permanently reserved for representing anchor positions. The proposed QCD–OCC system uses 16 LEDs to represent anchor positions and 3-bit colors to send data, thus achieving a throughput of 4.32 × 10³ bps with rotation support.
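The 4.32 × 10³ bps figure follows directly from the panel geometry: 64 LEDs minus 16 anchors leaves 48 data LEDs, each carrying 3 bits per captured frame at 30 fps. A sketch of that arithmetic (the function name is ours):

```python
def occ_throughput(panel_leds=64, anchor_leds=16, bits_per_led=3, camera_fps=30):
    """Raw throughput in bps of an LED-matrix OCC link."""
    data_leds = panel_leds - anchor_leds
    return data_leds * bits_per_led * camera_fps

# Proposed QCD-OCC: 48 data LEDs x 3 bits x 30 fps = 4320 bps.
qcd_occ = occ_throughput()
# Single-color QR-code-inspired system: 1 bit per LED -> 1440 bps.
qr_single = occ_throughput(bits_per_led=1)
```

The same arithmetic with 1 bit per LED reproduces the 1.44 × 10³ bps of the single-color QR-code-inspired system, showing that the gain comes entirely from the 3-bit color alphabet.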
Table 3 shows the comparison of the proposed system with existing systems based on parameters such as the type of Tx device, Rx camera FPS, maximal distance with tolerable BER, flickering, rotation support, type of camera, MIMO support, BER, and throughput. The proposed QCD–OCC system supports rotation, is compatible with both RS and GS cameras, allows multiple links, and provides a combined throughput of 2160 bps. Among similar systems, the multicolor QR code achieved better performance than the proposed QCD–OCC system, as it uses only 4 LEDs for rotation compensation; however, it relies on an LCD, which consumes more power than an 8 × 8 LED panel, and requires a 60 fps camera at the Rx side.

5. Conclusions and Future Work

In this paper, a novel quantum-chromodynamics-inspired 8 × 8 multicolor LED panel to camera communication system was proposed, implemented, and evaluated. A baryon gluon-exchange color model was used to map sets of 3 bits to their respective colors. QR-code-inspired anchoring patterns were introduced to provide rotation support, thus offering more robustness. Tx and Rx simulators of the proposed system were implemented to model the processes of converting data into LED mappings and reconstructing data from captured images. Two strategies were proposed to achieve user-centric MIMO. Three different files were used to simulate MIMO at the Tx side; these were decoded and logged into three output files, and the contents of the input and output files were compared to evaluate the success of reception of the implemented simulator. A function dependent on the camera frame rate and Tx data rate was used to simulate frame drops. A low-cost prototype was built using an 8 × 8 NeoPixel LED panel and a webcam to test the proposed QCD–OCC system. The intensity profiles of the Tx at various distances were compared to study the impact of color intensity on successful decoding, and the success of reception of the simulator and prototype was compared. The prototype achieved 100% success of reception and a maximal PSNR of ≈33 dB at a distance of 50 cm for a Tx data rate of 720 bps and ambient light intensity of ≈27 lux. The proposed prototype demonstrated its lowest BER of 1.37 × 10⁻⁵ at 50 cm and a 720 bps Tx data rate; the maximal allowable BER of 1.13 × 10⁻² was observed at the maximal distance of 200 cm and the maximal data rate of 3600 bps. The proposed system supported MIMO and achieved a combined throughput of 2.16 kbps with rotation support up to 90 degrees and a maximal communication distance of <3 m.
The performance of the prototype Rx could be further improved by using a specialized lens and high-frame-rate cameras to support longer communication distances. Using 16 × 16 and 32 × 32 LED panels would provide higher throughput. The work could also be made dynamically scalable by integrating laser projectors to provide flexible sizes and colors in the proposed code [28]. The proposed simulator could likewise be improved by integrating LED light-spreading characteristics.

Author Contributions

Conceptualization, G.V. and S.S.; methodology, S.S.; validation, G.V.; formal analysis, S.S.; investigation, S.S.; resources, S.S.; writing—original draft preparation, S.S.; writing—review and editing, S.S. and G.V.; supervision, G.V.; project administration, G.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LED: Light-emitting diode
RF: Radio frequency
VLC: Visible-light communication
OCC: Optical camera communication
QCD: Quantum chromodynamics
MIMO: Multiple-input multiple-output
RoI: Region of interest
Tx: Transmitter
Rx: Receiver
PDM: Pulse duration modulation
PSNR: Peak signal-to-noise ratio
BER: Bit-error ratio
bps: Bits per second

References

  1. Ali, M.O.; Alam, M.M.; Ahmed, M.F.; Jang, Y.M. A New Smart-Meter Data Monitoring System based on Optical Camera Communication. In Proceedings of the 2021 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Jeju Island, Korea, 13–16 April 2021; pp. 477–479. [Google Scholar] [CrossRef]
  2. Cahyadi, W.A.; Chung, Y.H.; Ghassemlooy, Z.; Hassan, N.B. Optical Camera Communications: Principles, Modulations, Potential and Challenges. Electronics 2020, 9, 1339. [Google Scholar] [CrossRef]
  3. Uysal, M.; Miramirkhani, F.; Narmanlioglu, O.; Baykas, T.; Panayirci, E. IEEE 802.15.7r1 Reference Channel Models for Visible Light Communications. IEEE Commun. Mag. 2017, 55, 212–217. [Google Scholar] [CrossRef]
  4. Salvi, S.; Vasantha, G. An Optical Camera Communication Using Novel Hybrid Frequency Shift and Pulse Width Modulation Technique for Li-Fi. Computation 2022, 10, 110. [Google Scholar] [CrossRef]
  5. Hasan, M.K.; Chowdhury, M.Z.; Shahjalal, M.; Nguyen, V.T.; Jang, Y.M. Performance Analysis and Improvement of Optical Camera Communication. Appl. Sci. 2018, 8, 2527. [Google Scholar] [CrossRef] [Green Version]
  6. Nguyen, H.; Nguyen, V.; Nguyen, C.; Bui, V.; Jang, Y. Design and Implementation of 2D MIMO-Based Optical Camera Communication Using a Light-Emitting Diode Array for Long-Range Monitoring System. Sensors 2021, 21, 3023. [Google Scholar] [CrossRef] [PubMed]
  7. Preda, R.O.; Dobre, R.A.; Badea, R.A. Influence of Camera Framerate Variations on an Optical Camera Communication System. In Proceedings of the 2021 44th International Conference on Telecommunications and Signal Processing (TSP), Brno, Czech Republic, 26–28 July 2021; pp. 316–319. [Google Scholar] [CrossRef]
  8. Liu, A.; Shi, W.; Ouyang, M.; Liu, W. Characterization of Optical Camera Communication Based on a Comprehensive System Model. J. Light. Technol. 2022, 40, 6087–6100. [Google Scholar] [CrossRef]
  9. Jung, H.; Kim, S.M. Experimental Demonstration of 3 × 3 MIMO LED-to-LED Communication Using RGB Colors. Sensors 2021, 21, 4921. [Google Scholar] [CrossRef] [PubMed]
  10. Bao, X.; Pan, J.; Cai, Z.; Li, J.; Huang, X.; Chen, R.; Fang, J. Real-time display camera communication system based on LED displays and smartphones. Opt. Express 2021, 29, 23558–23568. [Google Scholar] [CrossRef] [PubMed]
  11. Do, T.H.; Yoo, M. A simple LED panel dection algoritum for Optical Camera Communication systems. In Proceedings of the 2019 International Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Korea, 16–18 October 2019; pp. 747–749. [Google Scholar] [CrossRef]
  12. Guan, W.; Li, J.; Wen, S.; Zhang, X.; Ye, Y.; Zheng, J.; Jiang, J. The Detection and Recognition of RGB-LED-ID Based on Visible Light Communication using Convolutional Neural Network. Appl. Sci. 2019, 9, 1400. [Google Scholar] [CrossRef] [Green Version]
  13. Sun, X.; Shi, W.; Cheng, Q.; Liu, W.; Wang, Z.; Zhang, J. An LED Detection and Recognition Method Based on Deep Learning in Vehicle Optical Camera Communication. IEEE Access 2021, 9, 80897–80905. [Google Scholar] [CrossRef]
  14. Teli, S.R.; Matus, V.; Zvanovec, S.; Perez-Jimenez, R.; Vitek, S.; Ghassemlooy, Z. Optical Camera Communications for IoT–Rolling-Shutter Based MIMO Scheme with Grouped LED Array Transmitter. Sensors 2020, 20, 3361. [Google Scholar] [CrossRef] [PubMed]
  15. Zinda, T.; Ito, K.; Chujo, W. Rolling-Shutter-Based Optical Camera Communication Using Distributed LED Array. In Proceedings of the 2018 11th International Symposium on Communication Systems, Networks and Digital Signal Processing (CSNDSP), Budapest, Hungary, 18–20 July 2018; pp. 1–4. [Google Scholar] [CrossRef]
  16. Nguyen, H.; Jang, Y.M. Design and Implementation of Rolling Shutter MIMO-OFDM scheme for Optical Camera Communication System. In Proceedings of the 2021 International Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Korea, 20–22 October 2021; pp. 798–800. [Google Scholar] [CrossRef]
  17. Teli, S.R.; Zvanovec, S.; Perez-Jimenez, R.; Ghassemlooy, Z. Spatial frequency-based angular behavior of a short-range flicker-free MIMO–OCC link. Appl. Opt. 2020, 59, 10357–10368. [Google Scholar] [CrossRef] [PubMed]
  18. Gorse, L.; Löffler, C.; Mutschler, C.; Philippsen, M. Optical Camera Communication for Active Marker Identification in Camera-based Positioning Systems. In Proceedings of the 2018 15th Workshop on Positioning, Navigation and Communications (WPNC), Bremen, Germany, 25–26 October 2018; pp. 1–6. [Google Scholar] [CrossRef]
  19. Nguyen, T.; Jang, Y.M. Novel 2D-sequential color code system employing Image Sensor Communications for Optical Wireless Communications. ICT Express 2016, 2, 57–62. [Google Scholar] [CrossRef] [Green Version]
  20. Hao, T.; Zhou, R.; Xing, G. COBRA: Color Barcode Streaming for Smartphone Systems. In Proceedings of the 10th International Conference on Mobile Systems, Applications, and Services—MobiSys ’12, Seoul, Korea, 17–21 June 2012; Association for Computing Machinery: New York, NY, USA, 2012; pp. 85–98. [Google Scholar] [CrossRef]
  21. Han, T.; Zhao, D. Energy Efficiency of User-Centric, Cell-Free Massive MIMO-OFDM with Instantaneous CSI. Entropy 2022, 24, 234. [Google Scholar] [CrossRef] [PubMed]
  22. Chen, C.; Yang, H.; Du, P.; Zhong, W.D.; Alphones, A.; Yang, Y.; Deng, X. User-Centric MIMO Techniques for Indoor Visible Light Communication Systems. IEEE Syst. J. 2020, 14, 3202–3213. [Google Scholar] [CrossRef]
  23. Olsen, S.L.; Skwarnicki, T.; Zieminska, D. Nonstandard heavy mesons and baryons: Experimental evidence. Rev. Mod. Phys. 2018, 90, 015003. [Google Scholar] [CrossRef] [Green Version]
  24. Gallmeister, K.; Mosel, U. Hadronization and Color Transparency. Physics 2022, 4, 440–450. [Google Scholar] [CrossRef]
  25. Ahmed, M.F.; Hasan, M.K.; Shahjalal, M.; Alam, M.M.; Jang, Y.M. Experimental Demonstration of Continuous Sensor Data Monitoring Using Neural Network-Based Optical Camera Communications. IEEE Photonics J. 2020, 12, 1–11. [Google Scholar] [CrossRef]
  26. Huynh-Thu, Q.; Ghanbari, M. The accuracy of PSNR in predicting video quality for different video scenes and frame rates. Telecommun. Syst. 2012, 49, 35–48. [Google Scholar] [CrossRef]
  27. Nguyen, H.; Nguyen, H.; Nguyen, V.H.; Jang, Y.M. Design and Implementation of AS-QL Scheme for LED Matrix based Optical Camera Communication. In Proceedings of the 2020 International Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Korea, 21–23 October 2020; pp. 674–676. [Google Scholar] [CrossRef]
  28. Yilmazlar, I.; Sabuncu, M. Implementation of a Current Drive Modulator for Effective Speckle Suppression in a Laser Projection System. IEEE Photonics J. 2015, 7, 1–6. [Google Scholar] [CrossRef]
Figure 1. User-centric MIMO under OCC access point. (a) Three user devices of 1× bandwidth; (b) one user device of 3× bandwidth.
Figure 2. Gluon-exchange-based color changes in quarks, baryon model.
Figure 3. Overview of proposed QCD–OCC system.
Figure 4. Block diagram of the proposed QCD–OCC transmitter simulator.
Figure 5. Proposed OCC modulation scheme frame format and color channel multiplexing process.
Figure 6. Block diagram of the proposed QCD–OCC receiver simulator.
Figure 7. Tx and Rx simulator screenshots of proposed QCD–OCC system.
Figure 8. Implementation of proposed Tx. (a) Circuit diagram; (b) connected hardware components.
Figure 9. Implemented QCD–OCC prototype system.
Figure 10. Tx intensity profile at distances L = 50, 100, and 200 cm.
Figure 11. Comparison of success of reception (%) between implemented proposed QCD–OCC simulator and proposed QCD–OCC prototype.
Figure 12. Evaluation of proposed prototype. (a) PSNR at distances 50, 100, 150, and 200 cm under different ambient lighting conditions. (b) Success of reception (%) at distances 50, 100, 150, and 200 cm under different ambient lighting conditions.
Figure 13. Performance evaluation of proposed prototype. (a) BER vs. distance vs. Tx data rate; (b) data throughput of other similar systems.
Table 1. Bits to color mapping using baryon additive color model.

| Bits | Color        | Bits | Color       |
|------|--------------|------|-------------|
| 000  | R̂ĜB̂ = Black  | 111  | RGB = White |
| 001  | B = Blue     | 110  | B̂ = Yellow  |
| 010  | G = Green    | 101  | Ĝ = Violet  |
| 100  | R = Red      | 011  | R̂ = Cyan    |
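Table 1 describes an additive RGB code: each bit drives one color channel (MSB → R, middle bit → G, LSB → B), so, for example, 110 lights the red and green channels to give yellow. A minimal sketch of the mapping and its inverse (the function names and 128-level decoding threshold are our own assumptions):

```python
ON, OFF = 255, 0  # 8-bit channel levels

def bits_to_color(bits):
    """Map a 3-bit string from Table 1 to an (R, G, B) tuple."""
    if len(bits) != 3 or any(b not in "01" for b in bits):
        raise ValueError("expected a 3-bit string such as '101'")
    return tuple(ON if b == "1" else OFF for b in bits)

def color_to_bits(rgb):
    """Inverse mapping: threshold each captured channel back to a bit."""
    return "".join("1" if channel >= 128 else "0" for channel in rgb)
```

The Rx side applies the inverse mapping per detected LED, so every captured frame recovers 3 bits from each data LED.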
Table 2. Parameters of the experimental setup.

| Parameter               | Value                                                      | Parameter      | Value                |
|-------------------------|------------------------------------------------------------|----------------|----------------------|
| Tx dimension            | 6.5 × 6.5 cm                                               | CPU            | i7-700 3.6 GHz       |
| Tx model                | WS2812B-64 8 × 8 LED matrix                                | RAM            | 16 GB                |
| Transmission data rates | 720, 1440, 2160, 2880, 3600, 4320 bps                      | File sizes     | File 1: 97 bytes; File 2: 123 bytes; File 3: 109 bytes |
| Camera frame rate       | 30 fps, 1/1500 s                                           | Distances      | 50, 100, 150, 200 cm |
| Camera resolution       | 1920 × 1080 pixels                                         | Ambient light  | 27, 149, 304 lux     |
| Exposure                | ISO 100                                                    | Rotation angle | 0, 45, 90            |
Table 3. Performance comparison.

| Technique                  | Tx Device       | Camera FPS | Distance | Flicker | Rotation Support | Type of Camera | Number of Links | BER    | Throughput |
|----------------------------|-----------------|------------|----------|---------|------------------|----------------|-----------------|--------|------------|
| RS-OCC-MIMO [14]           | 8 × 8 LED panel | 30         | <1.5 m   | No      | No               | RS             | 1               | >10⁻⁴  | 1.4 kbps   |
| Data Matrix                | 8 × 8 LED panel | 30         | <2 m     | Yes     | No               | GS             | 1               | >10⁻³  | 1.9 kbps   |
| QR Code (single color)     | 8 × 8 LED panel | 30         | <3 m     | Yes     | Yes              | GS             | 1               | -      | 1.44 kbps  |
| AS-QL [27]                 | 8 × 8 LED panel | 60         | -        | Yes     | Yes              | GS             | 1               | 10⁻³   | 1.180 kbps |
| QR Code (multicolor) [19]  | LCD display     | 60         | <3 m     | Yes     | Yes              | RS+GS          | Multiple        | >10⁻⁴  | 5.4 kbps   |
| Proposed QCD-OCC           | 8 × 8 LED panel | 30         | <3 m     | Yes     | Yes              | RS+GS          | Multiple        | >10⁻⁴  | 4.32 kbps  |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Vasantha, G.; Salvi, S. Quantum-Chromodynamics-Inspired 2D Multicolor LED Matrix to Camera Communication for User-Centric MIMO. Appl. Sci. 2022, 12, 10204. https://doi.org/10.3390/app122010204
