Article

Wideband Spectrum Sensing Using Modulated Wideband Converter and Data Reduction Invariant Algorithms

1 Univ Brest, CNRS, Lab-STICC, CS 93837, 6 Avenue Le Gorgeu, CEDEX 3, 29238 Brest, France
2 ENSTA Bretagne, CNRS, Lab-STICC, 2 rue François Verny, CEDEX 9, 29806 Brest, France
* Author to whom correspondence should be addressed.
Sensors 2023, 23(4), 2263; https://doi.org/10.3390/s23042263
Submission received: 8 December 2022 / Revised: 13 February 2023 / Accepted: 13 February 2023 / Published: 17 February 2023
(This article belongs to the Collection Advanced Techniques for Acquisition and Sensing)

Abstract
Wideband spectrum sensing is a challenging problem in the framework of cognitive radio and spectrum surveillance, mainly because of the high sampling rates required by standard approaches. In this paper, a compressed sensing approach is considered to solve this problem, relying on a sub-Nyquist or Xampling scheme known as the modulated wideband converter. First, a data reduction is performed at its output in order to enable a highly effective processing scheme for spectrum reconstruction. The impact of this data transformation on the behavior of the most popular sparse reconstruction algorithms is then analyzed. A new mathematical approach is proposed to demonstrate that greedy reconstruction algorithms, such as Orthogonal Matching Pursuit, are invariant with respect to the proposed data reduction. Relying on the same formalism, a data reduction invariant version of the LASSO (least absolute shrinkage and selection operator) reconstruction algorithm is also introduced. It is finally demonstrated that the proposed algorithm provides good reconstruction results in a wideband spectrum sensing scenario, using both synthetic and measured data.

1. Introduction

The research work presented in this paper is mainly related to the wideband spectrum sensing problem, which consists of detecting the occupied or active frequency bands, at a given moment and in a given place, over a very large frequency domain (e.g., larger than 1 GHz). This information is necessary for cognitive radio systems [1] but also for some spectrum monitoring-related civil and military applications [2,3].
In this framework, standard spectral analysis methods result in heavy or impractical spectrum sensing architectures because of the very high sampling frequency required and the huge quantity of data to be processed. Since a finite-dimensional signal with a sparse or compressible representation can be recovered exactly from a small set of linear, non-adaptive measurements [4], the compressed sensing approach [5,6,7,8] allows the input sampling constraint to be relaxed by taking advantage of the spectrum sparsity [9]. The new constraint is then that, at a given time and in a given location, only a small part of the whole monitored frequency band is actually occupied. Hence, by taking advantage of this spectrum sparsity, instead of first sampling at a high rate and then compressing the sampled data before processing, the data can be directly sensed at a lower sampling rate in a compressed form.
A recent survey of wideband spectrum sensing approaches with special attention paid to approaches that utilize sub-Nyquist sampling techniques can be found in [10], and ref. [11] provides an overview of recent advances in this domain.
The general wideband spectrum sensing scheme considered in this paper is given in Figure 1. The first stage of the processing chain is the MWC (modulated wideband converter) [12,13], which is able to sample the received signal at a much lower rate than the Nyquist limit ($F_{Nyquist}$) without any information loss, provided that its frequency content is sparse enough. This specific Xampling method is considered here because it was actually used to obtain the experimental results discussed in Section 5, but it is worth noting that other competing techniques have also been proposed over the last few years.
Thus, ref. [14] describes and discusses an Xampling architecture named the analog-to-information converter (AIC), which aims at efficiently acquiring wideband signals. A blind sub-Nyquist sampling approach, referred to as the quadrature analog-to-information converter (QAIC), is proposed in [15]. It relaxes the analog front-end bandwidth requirements, at the cost of some added complexity compared to the MWC, for an overall improvement in sensitivity and energy consumption. A random triggering-based modulated wideband compressive sampling (RT-MWCS) method is also proposed in [16] to facilitate the efficient realization of sub-Nyquist rate compressive sampling systems for sparse wideband signals. Compared to the MWC, RT-MWCS has a simpler system architecture and can be implemented with a single channel, at the cost of a longer sampling time. As a last example, a single channel modulated wideband converter (SCMWC) scheme for the spectrum sensing of band-limited wide-sense stationary (WSS) signals was introduced in [17]. With one antenna or sensor, this scheme reduces not only the sampling rate but also the hardware complexity.
Since the contribution presented in this paper is independent of the type of Xampling scheme, the MWC architecture was selected for the reason mentioned above. It consists of $M$ identical parallel signal processing paths; on each of them, the wideband input signal $x(t)$ is multiplied with a different $T_p$-periodic binary random signal, low-pass filtered, sampled at $F_s \ll F_{Nyquist}$, and analog-to-digital converted. Note that $F_s$, most often equal to $1/T_p$, is also twice the cut-off frequency of the low-pass filter.
The MWC output then consists of an $M \times N$ matrix $\mathbf{Y}$, where $N$ is the number of samples required to ensure a given spectral resolution and $M \ll N$.
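As an illustration, the per-channel operation described above can be sketched numerically. The following toy fragment is our own illustration, not the testbed's processing: the grid size, chip length, and scrambler sequences are arbitrary assumptions. It mixes an input with a $T_p$-periodic $\pm 1$ sequence, applies an ideal low-pass filter in the frequency domain, and decimates to $F_s = 1/T_p$:

```python
import numpy as np

# Toy sketch of the MWC analog front-end (assumed parameters): each channel
# mixes the input with a Tp-periodic +/-1 chip sequence, low-pass filters it
# (ideal FFT mask here) and samples the result at Fs = fnyq / Lp.
rng = np.random.default_rng(7)
fnyq = 1024.0                 # Nyquist-rate grid (arbitrary units)
n = 4096                      # number of Nyquist-rate samples
t = np.arange(n) / fnyq
x = np.cos(2 * np.pi * 200.3 * t) + 0.5 * np.cos(2 * np.pi * 411.7 * t)

M, Lp = 4, 32                 # channels, chips per period (Fs = fnyq / Lp)
Y = np.empty((M, n // Lp))
for m in range(M):
    chips = rng.choice([-1.0, 1.0], Lp)
    p = np.tile(chips, n // Lp)              # Tp-periodic scrambler
    mixed = x * p
    X = np.fft.rfft(mixed)
    cutoff = int(len(X) / Lp)                # keep |f| < Fs / 2
    X[cutoff:] = 0.0                         # ideal low-pass filter
    filtered = np.fft.irfft(X, n)
    Y[m] = filtered[::Lp]                    # sample at the low rate Fs
print(Y.shape)                               # → (4, 128)
```

A practical MWC additionally relates the matrix Y to the unknown spectrum through the matrix W built from the scrambler coefficients; that step is omitted in this sketch.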
The matrix $\mathbf{Y}$ could be directly used as the input of the sparse data reconstructor, which solves the equation below:
$$\mathbf{Y} = \mathbf{W}^H \mathbf{Z},$$
under the sparsity hypothesis for the expected solution, i.e., the $L \times N$ matrix $\mathbf{Z}$. The $L \times M$ matrix $\mathbf{W}$ involved in Equation (1) is known, its elements being calculated directly from the periodic binary random signals (scramblers) used by the MWC.
In the compressed sensing approach, the price to pay for the reduction in the input sampling rate is the additional processing required by the signal recovery. The dimension of the input matrix Y for reconstruction is then of particular importance. Since typical values of N may be very large, it is proposed to reduce the data matrix Y before sparse data reconstruction. Compared to state-of-the-art published research ([18,19,20]), our approach directly exploits the intrinsic sparsity of the matrix Y. Hence, rather than using it as input for the sparse data reconstructor, the following matrix is considered instead:
$$\mathbf{Y}_r = \mathbf{Y}\,\mathbf{V}_Y,$$
where $\mathbf{V}_Y$ is the $N \times M$ matrix provided by the “economy size” singular value decomposition (SVD) of the matrix $\mathbf{Y}$, i.e.:
$$\mathbf{Y} = \mathbf{U}_Y \mathbf{S}_Y \mathbf{V}_Y^H.$$
In this way, the sparse data reconstructor will work with an $M \times M$ instead of an $M \times N$ input matrix, which considerably reduces the computational burden for large values of $N$. Actually, problem (1) can be rewritten as follows:
$$\mathbf{Y}_r = \mathbf{W}^H \mathbf{Z}_r,$$
where $\mathbf{Z}_r = \mathbf{Z}\,\mathbf{V}_Y$.
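For illustration, the data reduction of Equations (2) and (3) can be sketched in a few lines (a minimal sketch with toy dimensions, not the authors' code):

```python
import numpy as np

# Illustrative sketch: reduce the M x N MWC output matrix Y to an M x M
# matrix Y_r = Y @ V_Y using the economy-size SVD, as in Equations (2)-(3).
rng = np.random.default_rng(0)
M, N = 21, 1024
Y = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))

# Economy-size SVD: U_Y is M x M, s holds the M singular values, V_Y is N x M.
U_Y, s, VH = np.linalg.svd(Y, full_matrices=False)
V_Y = VH.conj().T                      # N x M

Y_r = Y @ V_Y                          # M x M reduced data matrix

# Sanity checks: Y_r = U_Y S_Y, and Y is exactly recovered as Y_r @ V_Y^H.
assert Y_r.shape == (M, M)
assert np.allclose(Y_r, U_Y * s)       # broadcasting scales columns by s
assert np.allclose(Y_r @ V_Y.conj().T, Y)
```

Since $\mathbf{Y}_r = \mathbf{U}_Y \mathbf{S}_Y$, no information is lost by the reduction: $\mathbf{Y}$ is exactly recovered as $\mathbf{Y}_r \mathbf{V}_Y^H$.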
Note that the sparse data reconstructor also requires the estimation of the number of active frequency bands $N_b$. This task can be carried out by different algorithms, such as the information-theoretic criteria [21], which make use of the $M$ singular values of $\mathbf{Y}$ provided by the diagonal of the $\mathbf{S}_Y$ matrix.
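As an illustration of this step, the following hedged sketch implements a Wax-Kailath-style MDL criterion on the singular values of Y; the exact criterion used in [21] may differ, and the function name and toy values are ours:

```python
import numpy as np

def estimate_nb_mdl(sing_vals, n_samples):
    """Illustrative MDL-style estimate of the number of active components
    from the singular values of Y (Wax-Kailath-type criterion; the exact
    formula of ref. [21] may differ)."""
    # Sample eigenvalues of the data covariance, sorted in decreasing order.
    lam = np.sort(np.asarray(sing_vals, float) ** 2 / n_samples)[::-1]
    M = lam.size
    mdl = np.empty(M - 1)
    for k in range(M - 1):
        tail = lam[k:]                       # the M - k smallest eigenvalues
        geo = np.exp(np.mean(np.log(tail)))  # geometric mean
        ari = np.mean(tail)                  # arithmetic mean
        mdl[k] = -n_samples * (M - k) * np.log(geo / ari) \
                 + 0.5 * k * (2 * M - k) * np.log(n_samples)
    return int(np.argmin(mdl))

# Toy check: 3 strong components above a flat noise floor, M = 21 paths.
s = np.concatenate([[50.0, 40.0, 30.0], np.full(18, 1.0)])
print(estimate_nb_mdl(s, 1024))   # → 3
```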
The reduced data matrix $\mathbf{Y}_r$ and the estimated number of active frequency bands $N_b$ are then used in the next stage to find the sparse problem solution $\mathbf{Z}_r$ through a greedy algorithm, such as OMP (orthogonal matching pursuit) [22,23], or an optimization-based one, such as LASSO (least absolute shrinkage and selection operator) [24,25]. In [26], the authors showed that LASSO is a suitable choice for compressive spectrum sensing and recovery in wideband 5G cognitive radio networks.
As will be demonstrated in this paper, greedy algorithms are already data reduction invariant and do not require any modification when used in this framework. However, the standard LASSO algorithm does not have this useful property, because of the standard $\ell_1$ norm involved in the optimization process.
In order to overcome this drawback, a new data reduction invariant $\ell_1$ norm is first introduced to replace the standard $\ell_1$ norm in the optimization process. It is then demonstrated that the newly defined version of the LASSO algorithm is data reduction invariant. To the best of our knowledge, this is the only data reduction invariant version of the LASSO algorithm proposed so far in the literature.
Finally, once the matrix Z r is provided by the sparse data reconstructor, the input signal spectrum is estimated, and the threshold is determined in order to make a decision about the active frequency bands.
The rest of the paper is organized as follows. In Section 2, it is demonstrated that greedy algorithms for sparse signal reconstruction are already data reduction invariant. The new version of the LASSO algorithm is introduced in Section 3, and its invariance with respect to data reduction is demonstrated. The performance of the proposed algorithm is evaluated using both simulated and measured data in Section 4 and Section 5, respectively, while Section 6 summarizes the research work presented in this paper and provides some conclusions about its results. Some mathematical preliminaries are provided in Appendix A; the standard OMP algorithm is briefly recalled in Appendix B, while the data reduction invariance of the newly defined $\ell_1$ norm is demonstrated in Appendix C.
The general notations used in this paper are as follows. Matrices and vectors are denoted by symbols in boldface, uppercase for matrices and lowercase for vectors. $(\cdot)^T$ and $(\cdot)^H$ represent the transpose and Hermitian (conjugate transpose) operators, respectively. $\mathbf{I}_M$ denotes the $M \times M$ identity matrix. $\|\cdot\|_k$ and $\|\cdot\|_F$ stand for the $\ell_k$ norm and the Frobenius norm, respectively. Some other specific notations are defined in the next sections.

2. Data Reduction Invariance of Greedy Algorithms

In this section, it is shown that greedy reconstruction algorithms are invariant with respect to data reduction. Although the invariance property is demonstrated for the OMP algorithm only, this result can be extended by similarity to the other algorithms of this class, such as compressive sampling matching pursuit (CoSaMP) [27,28] or iterative hard thresholding (IHT) [8,29]. Also note that while an extensive comparison between the proposed algorithm and OMP is carried out in Section 4, in terms of the mean square error and detection probability, some results obtained by the CoSaMP and IHT algorithms on measured data are provided as well in Section 5.
Let us consider the two problems corresponding to the original and reduced data matrices, respectively:
$$\mathbf{Y} = \mathbf{W}^H \mathbf{Z}, \qquad \mathbf{Y}_r = \mathbf{W}^H \mathbf{Z}_r,$$
where $\mathbf{Y}_r = \mathbf{Y}\mathbf{V}_Y$ and $\mathbf{Z}_r = \mathbf{Z}\mathbf{V}_Y$, as already mentioned above.
Let us also define the following notations:
  • $J_k$: a subset of $\{1, \ldots, L\}$ with $0 \le \mathrm{card}\{J_k\} \le M$, formed by the indices of the non-zero rows of the solution $\hat{\mathbf{Z}}_k$ at the $k$th iteration;
  • $\mathbf{W}_{(k)}^H$: the $M \times \mathrm{card}\{J_k\}$ matrix formed with the columns of $\mathbf{W}^H$ whose indices belong to $J_k$;
  • $\hat{\mathbf{Z}}_{(k)}$: the $\mathrm{card}\{J_k\} \times N$ optimized matrix;
  • $\Pi_Y = \mathbf{V}_Y \mathbf{V}_Y^H$.
It can be readily noticed that the matrix $\Pi_Y$ is a projector, since $\Pi_Y^2 = \Pi_Y \Pi_Y = \mathbf{V}_Y \underbrace{\mathbf{V}_Y^H \mathbf{V}_Y}_{\mathbf{I}_M} \mathbf{V}_Y^H = \mathbf{V}_Y \mathbf{V}_Y^H = \Pi_Y$.
Let us finally denote by $Fix\{Y\}$ the set of all matrices $\mathbf{A}$ invariant with respect to $\Pi_Y$ (fixed points of the projector), so that:
$$\mathbf{A} \in Fix\{Y\} \;\Leftrightarrow\; \mathbf{A}\,\Pi_Y = \mathbf{A}.$$
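These two properties are easy to verify numerically (an illustrative check with toy dimensions, not part of the paper):

```python
import numpy as np

# Illustrative check that Pi_Y = V_Y V_Y^H is a projector and that Y is one
# of its fixed points, i.e. Y @ Pi_Y = Y.
rng = np.random.default_rng(2)
M, N = 8, 64
Y = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
_, _, VH = np.linalg.svd(Y, full_matrices=False)
V_Y = VH.conj().T                       # N x M
Pi_Y = V_Y @ V_Y.conj().T               # N x N projector onto row space of Y

assert np.allclose(Pi_Y @ Pi_Y, Pi_Y)   # idempotent: Pi_Y^2 = Pi_Y
assert np.allclose(Y @ Pi_Y, Y)         # Y belongs to Fix{Y}
```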
For the first problem in Equation (5), the residual can be written as follows:
$$\mathbf{R}^{(k)} = \mathbf{Y} - \mathbf{W}_{(k)}^H \hat{\mathbf{Z}}_{(k)},$$
where:
$$\hat{\mathbf{Z}}_{(k)} = \left(\mathbf{W}_{(k)} \mathbf{W}_{(k)}^H\right)^{-1} \mathbf{W}_{(k)} \mathbf{Y} = \underbrace{\left(\mathbf{W}_{(k)} \mathbf{W}_{(k)}^H\right)^{-1} \mathbf{W}_{(k)} \mathbf{U}_Y \mathbf{S}_Y}_{\hat{\mathbf{Z}}_{r(k)}} \mathbf{V}_Y^H \;\overset{\text{Lemma A2}}{\Longrightarrow}\; \hat{\mathbf{Z}}_{(k)} \in Fix\{Y\}.$$
Since $\mathbf{Y} \in Fix\{Y\}$, according to Lemmas A1 and A3 (see Appendix A), Equations (7) and (8) result in:
$$\mathbf{R}^{(k)} \in Fix\{Y\}.$$
For the second problem in Equation (5), the residual can be written as follows:
$$\mathbf{R}_r^{(k)} = \mathbf{Y}_r - \mathbf{W}_{(k)}^H \hat{\mathbf{Z}}_{r(k)} = \mathbf{Y}\mathbf{V}_Y - \mathbf{W}_{(k)}^H \hat{\mathbf{Z}}_{r(k)},$$
where:
$$\hat{\mathbf{Z}}_{r(k)} = \left(\mathbf{W}_{(k)} \mathbf{W}_{(k)}^H\right)^{-1} \mathbf{W}_{(k)} \mathbf{Y}_r = \underbrace{\left(\mathbf{W}_{(k)} \mathbf{W}_{(k)}^H\right)^{-1} \mathbf{W}_{(k)} \mathbf{Y}}_{\hat{\mathbf{Z}}_{(k)}} \mathbf{V}_Y \;\Longrightarrow\; \mathbf{R}_r^{(k)} = \mathbf{Y}\mathbf{V}_Y - \mathbf{W}_{(k)}^H \hat{\mathbf{Z}}_{(k)} \mathbf{V}_Y = \underbrace{\left(\mathbf{Y} - \mathbf{W}_{(k)}^H \hat{\mathbf{Z}}_{(k)}\right)}_{\mathbf{R}^{(k)}} \mathbf{V}_Y$$
$$\Longrightarrow\; \begin{cases} \hat{\mathbf{Z}}_{r(k)} = \hat{\mathbf{Z}}_{(k)} \mathbf{V}_Y \\ \mathbf{R}_r^{(k)} = \mathbf{R}^{(k)} \mathbf{V}_Y. \end{cases}$$
Since it has already been shown that $\hat{\mathbf{Z}}_{(k)}, \mathbf{R}^{(k)} \in Fix\{Y\}$, multiplying Equation (11) on the right by $\mathbf{V}_Y^H$ results in:
$$\begin{cases} \hat{\mathbf{Z}}_{r(k)} \mathbf{V}_Y^H = \hat{\mathbf{Z}}_{(k)} \Pi_Y \\ \mathbf{R}_r^{(k)} \mathbf{V}_Y^H = \mathbf{R}^{(k)} \Pi_Y \end{cases} \;\Longrightarrow\; \begin{cases} \hat{\mathbf{Z}}_{r(k)} \mathbf{V}_Y^H = \hat{\mathbf{Z}}_{(k)} \\ \mathbf{R}_r^{(k)} \mathbf{V}_Y^H = \mathbf{R}^{(k)}, \end{cases}$$
and finally:
$$\|\mathbf{R}^{(k)}\|_F = \|\mathbf{R}_r^{(k)}\|_F.$$
Hence, reducing the data matrix does not modify the residual norm. Consequently, taking into account the bijective relationship (11) between Z ^ ( k ) and Z ^ r ( k ) , since the OMP algorithm aims at minimizing the residual norm, it can operate as well on the reduced data matrix without changing the final result.
Furthermore, the key elements involved in the OMP algorithm are the scalar products between the columns of the $\mathbf{W}^H$ matrix and the residual. More precisely, the relevant information is contained in the diagonal of the matrix below:
$$(\mathbf{W}\mathbf{R})(\mathbf{W}\mathbf{R})^H = \left(\mathbf{W}\mathbf{R}_r \mathbf{V}_Y^H\right)\left(\mathbf{W}\mathbf{R}_r \mathbf{V}_Y^H\right)^H = \mathbf{W}\mathbf{R}_r \underbrace{\mathbf{V}_Y^H \mathbf{V}_Y}_{\mathbf{I}_M} \mathbf{R}_r^H \mathbf{W}^H$$
$$\Longrightarrow\; (\mathbf{W}\mathbf{R})(\mathbf{W}\mathbf{R})^H = (\mathbf{W}\mathbf{R}_r)(\mathbf{W}\mathbf{R}_r)^H.$$
Hence, this matrix does not change when using a reduced data matrix instead of the original one. Consequently, there exists an isomorphism between the intermediate calculations required by the OMP algorithm running on the two data matrices since all the intermediate variables are linked by bijective relationships, and all the elements involved in the decision-making steps (i.e., residual norm and scalar products) are invariant.
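The invariance can also be checked numerically. The sketch below runs a plain multiple-measurement-vector OMP (our own minimal implementation, with assumed toy dimensions, not the paper's code) on both $\mathbf{Y}$ and $\mathbf{Y}_r$ and verifies that the selected support and the residual norms coincide:

```python
import numpy as np

# Toy numerical check that OMP selects the same support and reaches the same
# residual norm on the original matrix Y and on the reduced matrix Y_r = Y V_Y.
rng = np.random.default_rng(3)
M, L, N, K = 12, 32, 256, 3

WH = rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))   # W^H
Z = np.zeros((L, N), complex)
support = sorted(rng.choice(L, K, replace=False).tolist())
Z[support] = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
Y = WH @ Z

def omp_mmv(WH, Y, K):
    """Greedy MMV-OMP: pick the atom with the largest correlation row norm,
    then re-fit the selected atoms by least squares."""
    R, idx = Y.copy(), []
    for _ in range(K):
        corr = np.linalg.norm(WH.conj().T @ R, axis=1)  # diag of (W R)(W R)^H
        corr[idx] = 0.0                                 # exclude chosen atoms
        idx.append(int(np.argmax(corr)))
        Zk, *_ = np.linalg.lstsq(WH[:, idx], Y, rcond=None)
        R = Y - WH[:, idx] @ Zk
    return sorted(idx), np.linalg.norm(R)

_, _, VH = np.linalg.svd(Y, full_matrices=False)
Y_r = Y @ VH.conj().T

sup_full, res_full = omp_mmv(WH, Y, K)
sup_red, res_red = omp_mmv(WH, Y_r, K)
assert sup_full == sup_red == support      # same selected atoms
assert np.isclose(res_full, res_red)       # same residual norm
```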

3. Data Reduction Invariant Version of LASSO Algorithm

A new version of the LASSO algorithm, invariant to data reduction, is introduced in this section. A key point to keep in mind is that it operates on the reduced data matrix, as explained in the previous section, and therefore optimizes $\mathbf{Z}_r$ instead of $\mathbf{Z}$, which results in a significant complexity reduction. Since $\mathbf{Z} = \mathbf{Z}_r \mathbf{V}_Y^H$, the sparse solution can then be easily recovered from the optimized matrix.
In the case of the standard LASSO algorithm, $\hat{\mathbf{Z}}_r$ is obtained as the solution of the following optimization problem:
$$\hat{\mathbf{Z}}_r = \arg\min_{\mathbf{Z}_r} C(\mathbf{Z}_r) = \arg\min_{\mathbf{Z}_r} \left[ (1/2)\left\|\mathbf{Y}_r - \mathbf{W}^H \mathbf{Z}_r\right\|_2^2 + \lambda \left\|\mathbf{Z}_r\right\|_1 \right].$$
The objective function $C(\mathbf{Z}_r)$ is not invariant with respect to data reduction because of the $\ell_1$ norm $\|\mathbf{Z}_r\|_1$. Indeed, $\|\mathbf{Z}_r\|_1 = \|\mathbf{Z}\mathbf{V}_Y\|_1$, which is not equal to $\|\mathbf{Z}\|_1$. Hence, it is proposed to replace it with the modified $\ell_1$ norm $\|\mathbf{Z}_r\|_{1,inv}$, defined as follows:
$$\left\|\mathbf{Z}_r\right\|_{1,inv} = \mathrm{Tr}\left\{ \sqrt{\mathbf{Z}_r \mathbf{Z}_r^H} \right\}.$$
It can be readily noticed that this newly defined norm is different from the Frobenius norm because of the square root under the trace operator. According to Equations (A4) and (16), it is also data reduction invariant (see Appendix C for further details), so it is called the “invariant $\ell_1$ norm”.
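Numerically, $\mathrm{Tr}\{\sqrt{\mathbf{Z}_r \mathbf{Z}_r^H}\}$ equals the sum of the singular values of $\mathbf{Z}_r$, which gives a convenient way to evaluate it and to check the invariance. The sketch below (our own illustration, under the Fix{Y} assumption of Lemma A2) verifies that the norm is unchanged when $\mathbf{Z}$ is replaced by $\mathbf{Z}\mathbf{V}_Y$:

```python
import numpy as np

# Illustrative check: the "invariant l1 norm" Tr{sqrt(Z Z^H)} is the sum of
# the singular values of Z, so it is unchanged when Z is replaced by
# Z_r = Z V_Y, provided Z belongs to Fix{Y}.
rng = np.random.default_rng(4)
M, L, N = 8, 16, 128

def inv_l1_norm(Z):
    return np.sum(np.linalg.svd(Z, compute_uv=False))  # Tr{sqrt(Z Z^H)}

Y = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
_, _, VH = np.linalg.svd(Y, full_matrices=False)
V_Y = VH.conj().T                                  # N x M

Z_r = rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M))
Z = Z_r @ V_Y.conj().T          # L x N matrix in Fix{Y} (Lemma A2)

assert np.allclose(Z @ V_Y, Z_r)                   # reduction recovers Z_r
assert np.allclose(inv_l1_norm(Z), inv_l1_norm(Z @ V_Y))
```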
Consequently, if the initial solution (LASSO starting point) belongs to $Fix\{Y_r\}$, the final solution of the data reduction invariant LASSO algorithm can be obtained from:
$$\hat{\mathbf{Z}}_r = \arg\min_{\mathbf{Z}_r} C_{inv}(\mathbf{Z}_r) = \arg\min_{\mathbf{Z}_r} \left[ (1/2)\left\|\mathbf{Y}_r - \mathbf{W}^H \mathbf{Z}_r\right\|_2^2 + \lambda \left\|\mathbf{Z}_r\right\|_{1,inv} \right].$$
In order to properly describe the LASSO algorithm in its new invariant form, let us consider the following notations:
  • $\mathbf{Z}^{(\bar{i})}$: the matrix $\mathbf{Z}_r$ without its $i$th row;
  • $\mathbf{Z}_{(\bar{i})}$: the matrix $\mathbf{Z}_r$ without its $i$th column;
  • $\mathbf{Z}^{(i)}$: the $i$th row of the matrix $\mathbf{Z}_r$;
  • $\mathbf{Z}_{(i)}$: the $i$th column of the matrix $\mathbf{Z}_r$.
One of the basic ideas of the LASSO algorithm is to transform the multidimensional optimization problem (17) into a set of mono-dimensional optimization problems.
This is conducted by expressing the objective function as a sum of two terms, the first one depending on only one component of Z r , and the second one depending on all its other components. Thus, the objective function can be optimized successively with respect to each component of Z r , which is equivalent to globally optimizing it with respect to all its components.
The problem related to the introduction of the new invariant $\ell_1$ norm $\|\mathbf{Z}_r\|_{1,inv}$ is that it is no longer possible to separate a given component of $\mathbf{Z}_r$, because of the square root function.
By denoting $\mathbf{T} = \mathbf{W}^H$, the following expression holds:
$$\mathbf{R} = \mathbf{Y}_r - \mathbf{W}^H \hat{\mathbf{Z}}_r = \mathbf{Y}_r - \mathbf{T}\hat{\mathbf{Z}}_r = \mathbf{Y}_r - \left( \mathbf{T}_{(\bar{i})} \hat{\mathbf{Z}}^{(\bar{i})} + \mathbf{T}_{(i)} \hat{\mathbf{Z}}^{(i)} \right)$$
$$\Longrightarrow\; \mathbf{R} = \left( \mathbf{Y}_r - \mathbf{T}_{(\bar{i})} \hat{\mathbf{Z}}^{(\bar{i})} \right) - \mathbf{T}_{(i)} \hat{\mathbf{Z}}^{(i)}.$$
According to the definition of the invariant $\ell_1$ norm, it can also be written as:
$$\left\|\hat{\mathbf{Z}}_r\right\|_{1,inv} = \left\|\hat{\mathbf{Z}}^{(\bar{i})}\right\|_{1,inv} + \left\|\hat{\mathbf{Z}}^{(i)}\right\|_{1,inv} = \left\|\hat{\mathbf{Z}}^{(\bar{i})}\right\|_{1,inv} + \left\|\hat{\mathbf{Z}}^{(i)}\right\|_2.$$
Let us also denote:
$$\mathbf{R}_{part}^{(i)} = \mathbf{Y}_r - \mathbf{T}_{(\bar{i})} \hat{\mathbf{Z}}^{(\bar{i})}.$$
$C_{inv}(\mathbf{Z}_r)$ then becomes:
$$C_{inv}(\mathbf{Z}_r) = (1/2)\left\|\mathbf{R}_{part}^{(i)} - \mathbf{T}_{(i)}\hat{\mathbf{Z}}^{(i)}\right\|_2^2 + \lambda\left(\left\|\hat{\mathbf{Z}}^{(\bar{i})}\right\|_{1,inv} + \left\|\hat{\mathbf{Z}}^{(i)}\right\|_2\right) = (1/2)\left\|\mathbf{R}_{part}^{(i)}\right\|_2^2 - \mathrm{Tr}\left\{\mathrm{Re}\left[\mathbf{R}_{part}^{(i)H}\mathbf{T}_{(i)}\hat{\mathbf{Z}}^{(i)}\right]\right\} + (1/2)\left\|\mathbf{T}_{(i)}\hat{\mathbf{Z}}^{(i)}\right\|_2^2 + \lambda\left(\left\|\hat{\mathbf{Z}}^{(\bar{i})}\right\|_{1,inv} + \left\|\hat{\mathbf{Z}}^{(i)}\right\|_2\right)$$
$$\Longrightarrow\; C_{inv}(\mathbf{Z}_r) = \left[(1/2)\left\|\mathbf{R}_{part}^{(i)}\right\|_2^2 + \lambda\left\|\hat{\mathbf{Z}}^{(\bar{i})}\right\|_{1,inv}\right] + \left[(1/2)\left\|\mathbf{T}_{(i)}\hat{\mathbf{Z}}^{(i)}\right\|_2^2 + \lambda\left\|\hat{\mathbf{Z}}^{(i)}\right\|_2 - \mathrm{Tr}\left\{\mathrm{Re}\left[\mathbf{R}_{part}^{(i)H}\mathbf{T}_{(i)}\hat{\mathbf{Z}}^{(i)}\right]\right\}\right].$$
Let us now focus only on the second term of $C_{inv}(\mathbf{Z}_r)$, since the first one does not depend on $\mathbf{Z}^{(i)}$. Let us also define the following notations:
$$\mu = \left\|\hat{\mathbf{Z}}^{(i)}\right\|_2, \quad \tilde{\mathbf{z}} = \hat{\mathbf{Z}}^{(i)} / \left\|\hat{\mathbf{Z}}^{(i)}\right\|_2, \quad \mathbf{a} = \mathbf{R}_{part}^{(i)H} \mathbf{T}_{(i)}, \quad \mathbf{b} = \mathbf{T}_{(i)}.$$
Hence, the objective function becomes:
$$C_{inv}(\tilde{\mathbf{z}}, \mu) = (1/2)\mu^2 \left\|\mathbf{b}\tilde{\mathbf{z}}\right\|_2^2 + \lambda\mu - \mu\,\mathrm{Re}[\tilde{\mathbf{z}}\mathbf{a}].$$
The value of $\mu$ minimizing $C_{inv}(\mathbf{Z}_r)$ can then be obtained from:
$$\frac{\partial C_{inv}(\tilde{\mathbf{z}}, \mu)}{\partial \mu} = 0 \;\Longrightarrow\; \mu = \frac{\mathrm{Re}[\tilde{\mathbf{z}}\mathbf{a}] - \lambda}{\left\|\mathbf{b}\tilde{\mathbf{z}}\right\|_2^2}.$$
If Equation (24) yields a negative value for $\mu$, take $\mu = 0$, since $\mu \geq 0$ according to Equation (22).
For a fixed value of $\mu$, keeping only the terms depending on $\tilde{\mathbf{z}}$, the objective function to be minimized with respect to $\tilde{\mathbf{z}}$ can be written as:
$$C_{inv}(\tilde{\mathbf{z}}, \mu) = (1/2)\mu\left\|\mathbf{b}\tilde{\mathbf{z}}\right\|_2^2 - \mathrm{Re}[\tilde{\mathbf{z}}\mathbf{a}], \quad \text{subject to} \quad \left\|\tilde{\mathbf{z}}\right\|_2^2 - 1 = 0.$$
Lagrange's multiplier method leads to the following objective function:
$$F(\tilde{\mathbf{z}}, \theta) = (1/2)\mu\left\|\mathbf{b}\tilde{\mathbf{z}}\right\|_2^2 - \mathrm{Re}[\tilde{\mathbf{z}}\mathbf{a}] + (\theta/2)\left(\left\|\tilde{\mathbf{z}}\right\|_2^2 - 1\right),$$
where $\theta$ stands for the Lagrange multiplier.
Developing Equation (26) to make the components of $\tilde{\mathbf{z}}$ appear results in:
$$F(\tilde{z}_1, \tilde{z}_2, \ldots, \tilde{z}_M, \theta) = (1/2)\mu \sum_k \sum_l |b_k|^2 |\tilde{z}_l|^2 - \sum_l \mathrm{Re}[a_l \tilde{z}_l] + (\theta/2)\left(\sum_l |\tilde{z}_l|^2 - 1\right) = (1/2)\mu\left\|\mathbf{b}\right\|_2^2 \sum_l |\tilde{z}_l|^2 - \sum_l \mathrm{Re}[a_l \tilde{z}_l] + (\theta/2)\left(\sum_l |\tilde{z}_l|^2 - 1\right).$$
The phase of $\tilde{z}_l$ is involved only in the product $a_l \tilde{z}_l$, and it can be readily seen that $F$ is minimized when:
$$\arg\{\tilde{z}_l\} = -\arg\{a_l\},$$
so that:
$$F(\tilde{z}_1, \tilde{z}_2, \ldots, \tilde{z}_M, \theta) = (1/2)\mu\left\|\mathbf{b}\right\|_2^2 \sum_l |\tilde{z}_l|^2 - \sum_l |a_l||\tilde{z}_l| + (\theta/2)\left(\sum_l |\tilde{z}_l|^2 - 1\right).$$
The value of $|\tilde{z}_l|$ that minimizes $F$ is then obtained from:
$$\frac{\partial F}{\partial |\tilde{z}_l|} = 0 \;\Longrightarrow\; \mu\left\|\mathbf{b}\right\|_2^2 |\tilde{z}_l| - |a_l| + \theta|\tilde{z}_l| = 0$$
$$\Longrightarrow\; |\tilde{z}_l| = \frac{|a_l|}{\mu\left\|\mathbf{b}\right\|_2^2 + \theta}.$$
From Equations (28) and (30), it can be inferred that:
$$\tilde{\mathbf{z}} = \eta\,\mathbf{a}^H,$$
where $\eta$ is a positive scalar; because $\tilde{\mathbf{z}}$ is a unit vector, it can finally be expressed as:
$$\tilde{\mathbf{z}} = \mathbf{a}^H / \left\|\mathbf{a}\right\|_2.$$
In practice, $\tilde{\mathbf{z}}$ is first calculated using Equation (32); then, $\mu$ is evaluated from Equation (24), with the value of $\lambda$ estimated using the cross-validation method [30]. Finally, $\hat{\mathbf{Z}}^{(i)} = \mu\tilde{\mathbf{z}}$ is computed according to Equation (22).
Algorithm 1 below summarizes the processing flow associated with the proposed data reduction invariant LASSO technique.
Algorithm 1 Processing flow for the proposed data reduction invariant LASSO technique
Input: the $M \times N$ matrix $\mathbf{Y}$ at the output of the MWC scheme, the $M \times L$ Xampling-related matrix $\mathbf{T} = \mathbf{W}^H$, and the $N \times M$ matrix $\mathbf{V}_Y$ obtained from the SVD of $\mathbf{Y}$ using Equation (3).
Initialization:
  Compute $\mathbf{Y}_r = \mathbf{Y}\mathbf{V}_Y$ according to Equation (2).
  Take an initial solution for the $L \times M$ matrix $\hat{\mathbf{Z}}_r$ belonging to $Fix\{Y_r\}$.
  Find the optimal $\lambda$ value using the cross-validation method [30].
For $i \leftarrow 1$ to $L$ do
  Obtain the matrices $\mathbf{T}_{(\bar{i})}$ and $\hat{\mathbf{Z}}^{(\bar{i})}$ by removing the $i$th column from the matrix $\mathbf{T}$ and the $i$th row from the matrix $\hat{\mathbf{Z}}_r$, respectively.
  In addition, obtain the $i$th column of the matrix $\mathbf{T}$ and denote it by $\mathbf{b} = \mathbf{T}_{(i)}$.
  Calculate $\mathbf{R}_{part}^{(i)} = \mathbf{Y}_r - \mathbf{T}_{(\bar{i})}\hat{\mathbf{Z}}^{(\bar{i})}$ according to Equation (20).
  Calculate $\mathbf{a} = \mathbf{R}_{part}^{(i)H}\mathbf{T}_{(i)}$ according to Equation (22).
  Calculate $\tilde{\mathbf{z}} = \mathbf{a}^H/\|\mathbf{a}\|_2$ according to Equation (32), and then $\mu = \left(\mathrm{Re}[\tilde{\mathbf{z}}\mathbf{a}] - \lambda\right)/\|\mathbf{b}\tilde{\mathbf{z}}\|_2^2$ according to Equation (24).
  Calculate $\hat{\mathbf{Z}}^{(i)} = \mu\tilde{\mathbf{z}}$ according to Equation (22).
  Update the estimated solution $\hat{\mathbf{Z}}_r$ by replacing its $i$th row with $\hat{\mathbf{Z}}^{(i)}$.
End for
Output: the final estimated solution, i.e., the $L \times N$ matrix $\hat{\mathbf{Z}} = \hat{\mathbf{Z}}_r\mathbf{V}_Y^H$.
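A compact sketch of Algorithm 1 is given below (our own toy implementation, not the authors' code: the number of sweeps is fixed instead of using a stopping rule, λ is user-supplied instead of cross-validated, and a guard for a vanishing correlation vector is added):

```python
import numpy as np

# Hedged sketch of the data reduction invariant LASSO (Algorithm 1):
# coordinate-wise updates of the rows of Z_r following the closed-form
# expressions of Equations (22), (24) and (32).
def invariant_lasso(Y, T, lam, n_sweeps=100):
    M = Y.shape[0]
    L = T.shape[1]
    _, _, VH = np.linalg.svd(Y, full_matrices=False)
    V_Y = VH.conj().T                        # N x M
    Y_r = Y @ V_Y                            # reduced data matrix, Eq. (2)
    Z_r = np.zeros((L, M), complex)          # initial solution in Fix{Y_r}
    for _ in range(n_sweeps):
        for i in range(L):
            b = T[:, i]                                   # i-th column of T
            R_part = Y_r - np.delete(T, i, 1) @ np.delete(Z_r, i, 0)
            a = R_part.conj().T @ b                       # Equation (22)
            na = np.linalg.norm(a)
            if na < 1e-12:                                # degenerate row
                Z_r[i] = 0.0
                continue
            z = a.conj() / na                             # Equation (32)
            mu = (na - lam) / np.linalg.norm(np.outer(b, z)) ** 2  # Eq. (24)
            Z_r[i] = max(mu, 0.0) * z                     # clipped at zero
    return Z_r @ V_Y.conj().T                # recover Z = Z_r V_Y^H
```

On noiseless toy data with a few active rows, this coordinate-descent loop drives the inactive rows exactly to zero, since μ is clipped at zero.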
A comparison of complexity can finally be performed between the proposed algorithm and the standard one. Based on Algorithm 1 presented above, it can be readily established that the complexity is reduced from $O(MNL^2)$ for the standard LASSO to $O(MN(L+M))$ for the data reduction invariant LASSO algorithm. It can be noticed that the complexity gain increases with the value of $L$, since the proposed algorithm reduces its quadratic dependence on this parameter to a linear one. An additional complexity result is provided in the next section in terms of the number of multiplications for a given set of simulation parameters.
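The leading-order counts can be compared directly (illustrative constants only; the actual counts in Figure 4 also include the SVD and bookkeeping costs, which is why the measured gain grows slightly with N):

```python
# Leading-order multiplication counts for the standard LASSO, O(M N L^2),
# versus the data reduction invariant version, O(M N (L + M)), for the
# Section 4 parameters M = 21, L = 32.
M, L = 21, 32
for N in (512, 1024, 2048, 4096):
    standard = M * N * L**2
    invariant = M * N * (L + M)
    print(f"N={N:5d}  standard={standard:.2e}  invariant={invariant:.2e}  "
          f"gain={standard / invariant:.1f}x")   # gain = L^2/(L+M), ~19.3x
```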

4. Simulation Results

This section aims to illustrate the performance of the proposed invariant LASSO algorithm in a simulated wideband spectrum sensing scenario characterized by the following parameters:
  • Monitored frequency band: $-1\ \text{GHz} \le \nu \le 1\ \text{GHz}$;
  • Number of active frequency bands: $N_b = 8$;
  • Bandwidth of each active frequency band: $B = 20\ \text{MHz}$;
  • Spectral resolution: $\Delta\nu = 30.518\ \text{kHz}$;
  • Number of MWC parallel processing paths: $M = 21$;
  • Sampling frequency on each path: $F_s = F_p = 31.25\ \text{MHz}$;
  • Number of samples acquired on each path: $N = 1024$.
Note that for this set of parameters, the sizes of the matrices $\mathbf{Y}$ and $\mathbf{Z}$ are lowered from $21 \times 1024$ and $32 \times 1024$ to $21 \times 21$ and $32 \times 21$, respectively. It can also be noticed that, given the real nature of the analyzed signal, the eight active frequency bands have to be considered in pairs, so that they actually correspond to four transmitters.
Figure 2 shows the variation in the cost function during the cross-validation process. Its minimum value is obtained for $\lambda \approx 2 \cdot 10^{-4}$. This value of $\lambda$ does not depend on the noise level and is used by the invariant LASSO algorithm, as explained in the previous section.
Figure 3 illustrates the new algorithm's performance for two signal-to-noise ratios (SNRs), i.e., 30 dB and 10 dB. Note that these two values are in-band SNRs, since they are measured only within the active bands. If the same SNRs are calculated over the whole monitored band, they correspond to 19 dB and −1 dB, respectively.
As can be readily seen, the active bands are very well reconstructed for an in-band SNR of 30 dB, and they can be perfectly detected using an appropriate threshold, which is iteratively and blindly updated, as already proposed in [31].
For an in-band SNR of 10 dB, although the results are still exploitable, the algorithm reaches its limits. This can be explained by the fact that the LASSO algorithm introduces an SNR loss of about 11 dB in the reconstructed bands in this configuration. Indeed, as can be noticed from Figure 3b, this SNR loss makes the detection of the active bands increasingly challenging and leads to higher false alarm rates and bandwidth estimation errors.
In order to evaluate the complexity gain for the considered set of simulation parameters ($M = 21$, $L = 32$), the number of multiplications is also shown in Figure 4 for $N \in \{512, 1024, 2048, 4096\}$. It can be noticed that a significant complexity reduction is obtained with the proposed algorithm, and that the gain becomes even slightly larger as $N$ increases.
The performance of the new algorithm was finally evaluated for a wide range of in-band SNRs (5–30 dB) and false alarm probabilities ($10^{-6}$–$10^{-1}$) in terms of the normalized mean error and detection rate (Figure 5). The same parameters provided by the OMP algorithm were also plotted for comparison purposes.
These two performance parameters have been obtained at the output of a “threshold and detect” scheme, using Monte-Carlo simulations with 1000 independent noise realizations and random positions of the active frequency bands. The threshold is calculated to keep the false alarm rate constant at the output of this scheme. The normalized error is then obtained as a complement with respect to one of the relative numbers of threshold overruns inside the active frequency bands. The detection rate is calculated as the relative number of detected frequency bands. Note that a frequency band is considered as being detected if there is at least one threshold overrun inside it.
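The "threshold and detect" metrics described above can be sketched as follows (function name, toy spectrum, and band indices are ours, purely illustrative):

```python
import numpy as np

# Hedged sketch of the detection-rate metric: a band is counted as detected
# if at least one spectrum bin inside it exceeds the threshold.
def detection_rate(spectrum, bands, threshold):
    """spectrum: 1-D power estimate; bands: (start, stop) bin-index pairs of
    the true active bands; returns the fraction of detected bands."""
    hits = sum(np.any(spectrum[a:b] > threshold) for a, b in bands)
    return hits / len(bands)

spectrum = np.array([0.1, 0.2, 3.0, 2.5, 0.1, 0.1, 0.2, 4.0, 0.1, 0.1])
bands = [(2, 4), (6, 8)]                                # two toy active bands
print(detection_rate(spectrum, bands, threshold=1.0))   # → 1.0
```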
The results depicted here were obtained for a false alarm probability of $10^{-3}$, but they are similar for the other false alarm probabilities in the range above. It can be noticed that the performances of the two algorithms are close. However, the proposed algorithm appears to be more robust to noise, while OMP provides a slightly better detection rate at high SNRs.

5. Experimental Results

The proposed data reduction invariant LASSO was also evaluated using measured data. Our experimental testbed is shown in Figure 6, and its block diagram, including the external instruments, is provided in Figure 7. It is based on a 4-channel MWC analog board, described in more detail in a previously published paper [13], which is able to monitor wideband spectral domains up to 1 GHz. Table 1 provides the main parameters of our experimental testbed.
Our analog front-end board for compressed sampling with its four physical channels is shown in Figure 8, while Figure 9 illustrates its operating principle. On input, an SCA-4-10+ splitter from Mini-Circuits©, with less than 7 dB loss, was used to provide the input signal to the four channels.
Similar to channel 2 depicted in Figure 9, each channel includes an M1-0008 mixer from Marki©. The mixer receives the amplified radio-frequency signal to be analyzed on its RF input and a pseudo-random modulating waveform on its LO input.
The mixer output (IF) goes through an SXLP-36+ low-pass filter from Mini-Circuits©, with a 3 dB cut-off frequency of 40 MHz. This filter was chosen because it has a very flat response (variations lower than 1 dB) over the 0–36 MHz band and a sharp cut-off above it.
For our experiments, the radio-frequency signal was provided by a Keysight 81180A arbitrary waveform generator. An Avnet ML605 DSP Kit, as shown in Figure 10, was also used to generate the pseudo-random modulating waveforms. It includes a Xilinx Virtex-6 FPGA, as well as digital-to-analog and analog-to-digital capabilities. Moreover, it enables the selection of each channel's sequence from a compiled list. If necessary, recompilation allows new sequences to be added or some parameters, such as the bit rate, to be changed. The embedded Gigabit Transceiver X (GTX) high-speed serializer-deserializer transceivers 0 to 3 were connected to channels 1–4 of the analog front-end board.
A DSO90404A Agilent Infiniium 4-channel oscilloscope was used to acquire the output signals and save them. To synchronize the acquisition with respect to the modulating waveforms, a pulse signal was generated by the GTX 7 of the ML605 board and plugged into the oscilloscope external trigger.
The acquisition system was calibrated using the approach described in [32]. Figure 11 shows the relative error between the observed and predicted system outputs, as a function of the output frequency, for the calibrated and uncalibrated systems, which clearly demonstrates the interest of the calibration stage. The relative error was evaluated using a formula similar to the criterion considered in [33]:
$$\varepsilon(f) = 20\log_{10}\left(\frac{\left\|o(f) - p(f)\right\|}{\left\|o(f)\right\|}\right).$$
Here o(f) denotes the observed output signal corresponding to the subband centered on f (the whole frequency band of the signal at the output of the acquisition system has been divided into 28 subbands). Similarly, p(f) denotes the predicted output signal corresponding to the subband centered on f, which is predicted by the calibrated model or by the theoretical model. In any case, even for the theoretical model, a calibrated low-pass filter is always included: the true frequency response of the filter is taken into account in the related equations.
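Equation (33) translates directly into code (an illustrative sketch; the subband splitting of the real testbed is not reproduced here):

```python
import numpy as np

# Relative error of Equation (33): eps(f) = 20 log10(||o(f)-p(f)|| / ||o(f)||),
# evaluated here on a toy "subband" signal with ~1% prediction error.
def relative_error_db(observed, predicted):
    return 20 * np.log10(np.linalg.norm(observed - predicted)
                         / np.linalg.norm(observed))

rng = np.random.default_rng(6)
o = rng.standard_normal(256)               # observed subband output
p = o + 0.01 * rng.standard_normal(256)    # prediction with ~1% deviation
print(relative_error_db(o, p))             # about -40 dB
```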
The wideband spectrum sensing results provided by the proposed reconstruction algorithm, using original and reduced data, are shown in Figure 12 and Figure 13 for 2 and 6 active transmitters, respectively.
For the scenario when two active transmitters, i.e., four active frequency bands, are considered (Figure 12), it can be noticed that they are both well detected. The amplitude of the upper-frequency bands is lower than expected because the corresponding transmitter carrier is close to the higher limit of the monitored frequency band. We have noticed that the reconstruction is usually less reliable in this area, probably due to higher non-linear effects in the analog front-end at very high frequencies, and it is interesting to see that the transmitter is detected even in these difficult conditions.
For the scenario with six active transmitters, i.e., 12 active frequency bands (Figure 13), the first five transmitters are well detected, while the last one seems to be lost. In addition to the fact that it is also close to the higher limit of the monitored frequency band, as in the previous case, there is another aspect that explains this result. Actually, with six transmitters to be detected instead of two, the expected solution is significantly less sparse than in the previous case, which leads to some reconstruction quality loss.
However, it can be readily seen that the reconstruction results are slightly better, in terms of SNR and MSE, when the proposed algorithm runs on reduced data, as shown in Figure 13b. The execution time is also about 20 times shorter than when it runs on the original data, which confirms the results presented in Figure 4.
Finally, as already mentioned in Section 2, the reconstruction results obtained with two greedy algorithms, CoSaMP and IHT, are also illustrated in Figure 14 for the same measured data scenario with six transmitters. Note that there seems to be less noise in these images simply because, unlike LASSO, greedy algorithms reconstruct the spectrum only inside the detected active bands. However, as illustrated in Figure 14, they are more likely to miss some transmitters and generate false alarms. They can also be subject to bandwidth estimation errors if further processing is carried out to extract more information about the detected bands. Note that this kind of post-processing is outside the scope of this paper and is mentioned here only to illustrate the limitations of the CoSaMP and IHT algorithms.

6. Conclusions

This paper introduces a new idea for designing a highly effective wideband spectrum sensing system, which consists of reducing the data matrix at the output of the Xsampling MWC scheme. The second contribution is the demonstration that greedy sparse reconstruction algorithms are invariant with respect to this data reduction. Finally, our most important contribution is a new version of the LASSO algorithm that is invariant in the same sense. Coupled with the data reduction idea, the proposed algorithm is a powerful and effective tool in the wideband spectrum sensing framework, especially for low SNR values.
As future work, the newly proposed method should be tested and further improved in the presence of impulsive noise, as has already been done in [34] for greedy algorithms.

Author Contributions

Conceptualization, G.B.; methodology, G.B., E.R., R.G. and D.L.J.; software, E.R.; validation, R.G. and D.L.J.; formal analysis, G.B. and E.R.; investigation, G.B., E.R., R.G. and D.L.J.; resources, R.G. and D.L.J.; data curation, E.R.; writing—original draft preparation, E.R.; writing—review and editing, G.B., R.G. and D.L.J.; visualization, D.L.J.; supervision, G.B.; project administration, R.G.; funding acquisition, R.G. and D.L.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the IBNM (Brest Institute of Computer Science and Mathematics) CyberIoT Chair of Excellence, at the University of Brest.

Acknowledgments

The authors would like to thank the company Syrlinks for the design of the analog board.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Lemma A1.
The following relation holds:
$$Y \in \mathrm{Fix}\{Y\}. \qquad (A1)$$
Proof.
Indeed, $Y \Pi_Y = U_Y S_Y \underbrace{V_Y^H V_Y}_{I_M} V_Y^H = U_Y S_Y V_Y^H = Y$. □
Lemma A2.
$A \in \mathrm{Fix}\{Y\}$ if and only if there exists a matrix $A_r$ (not necessarily square) such that:
$$A = A_r V_Y^H. \qquad (A2)$$
In the following, $A_r$ is said to be a reduced form of $A$, because in our application $A_r$ is considerably smaller than $A$.
Proof.
If $A = A_r V_Y^H$, then $A \Pi_Y = A_r \underbrace{V_Y^H V_Y}_{I_M} V_Y^H = A$. Conversely, if $A \in \mathrm{Fix}\{Y\}$, then $A V_Y V_Y^H = A$, so $A_r = A V_Y$. □
Lemma A3.
If $A, B \in \mathrm{Fix}\{Y\}$, then for any matrices $\Lambda_A$ and $\Lambda_B$ (not necessarily square, nor diagonal), the following relation holds:
$$\Lambda_A A + \Lambda_B B \in \mathrm{Fix}\{Y\}. \qquad (A3)$$
Proof.
$\Lambda_A A + \Lambda_B B = \Lambda_A A_r V_Y^H + \Lambda_B B_r V_Y^H = (\Lambda_A A_r + \Lambda_B B_r) V_Y^H$, where $A_r$ and $B_r$ are the reduced forms of $A$ and $B$, respectively, so Equation (A3) is true according to Lemma A2. □
Lemma A4.
If $A \in \mathrm{Fix}\{Y\}$ and $A_r$ is a reduced form of $A$, then for any norm that can be expressed as:
$$\|A\| = f(A A^H), \qquad (A4)$$
with $f$ any continuous function, the following equality holds:
$$\|A\| = \|A_r\|. \qquad (A5)$$
Proof.
Indeed, $\|A\| = f(A A^H) = f(A_r \underbrace{V_Y^H V_Y}_{I_M} A_r^H) = f(A_r A_r^H) = \|A_r\|$. □
Corollary A1.
The Frobenius norm meets Equation (A5), i.e.:
$$\|A\|_F = \|A_r\|_F. \qquad (A6)$$
Proof.
The proof of this corollary is straightforward, since the Frobenius norm $\|A\|_F = \sqrt{\mathrm{Tr}\{A A^H\}}$ already has the form required in Equation (A4). □
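The lemmas above can be verified numerically. The following NumPy sketch (an illustrative check, not the authors' code; the matrix sizes are only examples) computes the reduced form $Y_r = Y V_Y$ from the economy-size SVD of $Y$ and checks Lemma A1 and Corollary A1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Complex measurement matrix Y of size M x N, with N >> M,
# as produced at the output of the MWC acquisition stage
M, N = 4, 448
Y = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))

# Economy-size SVD: Y = U_Y S_Y V_Y^H, with V_Y of size N x M
U, s, Vh = np.linalg.svd(Y, full_matrices=False)
V = Vh.conj().T

# Lemma A1: Y belongs to Fix{Y}, i.e. Y (V_Y V_Y^H) = Y
assert np.allclose(Y @ (V @ Vh), Y)

# Lemma A2: reduced form Y_r = Y V_Y, of size M x M only
Yr = Y @ V

# Corollary A1: the Frobenius norm is invariant to the reduction
assert np.isclose(np.linalg.norm(Y, 'fro'), np.linalg.norm(Yr, 'fro'))
```

The reduced matrix $Y_r$ is $M \times M$ instead of $M \times N$, which is what makes the subsequent sparse reconstruction much cheaper.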

Appendix B

OMP is a greedy algorithm that is widely used for finding the sparse solution of the problem described by Equation (1). For the sake of simplicity and without any loss of generality, its operating principle is recalled hereafter for the case when the measured input data y and the expected sparse solution z are vectors instead of matrices.
Let us introduce the following notations related to the $k$th iteration: $\hat{z}_k$ for the estimated vector solution, $r_k = y - W^H \hat{z}_k$ for the residual vector, $i_k$ for the index vector of the non-zero components of $\hat{z}_k$, $\underline{\hat{z}}_k = \hat{z}_k(i_k)$ for the estimated vector reduced to its non-zero components, and $\underline{W}_k^H = W^H(:, i_k)$ for the matrix formed with the columns of $W^H$ corresponding to the indices contained in $i_k$.
Basically, OMP starts with a null solution and adds, at each iteration, the new non-zero component that minimizes the norm of the residual vector $r_k$, which measures the reconstruction error. The algorithm stops when the targeted number of non-zero components is reached or when the residual norm falls below a given threshold.
Since $W^H \hat{z}_k = \underline{W}_k^H \underline{\hat{z}}_k$, the residual vector can also be written as $r_k = y - \underline{W}_k^H \underline{\hat{z}}_k$. The problem at the $k$th iteration can then be put in the following equivalent form:
$$\min_{\hat{z}_k} \left\| y - W^H \hat{z}_k \right\|_2 = \min_{\underline{\hat{z}}_k} \left\| y - \underline{W}_k^H \underline{\hat{z}}_k \right\|_2. \qquad (A7)$$
The advantage of this equivalent form is that, contrary to the initial problem, it results in the unique solution:
$$\underline{\hat{z}}_k = \underline{W}_k^{\dagger} y, \qquad (A8)$$
where $\underline{W}_k^{\dagger}$ is the Moore–Penrose pseudo-inverse of $\underline{W}_k^H$.
Note that $\hat{z}_k$ can be obtained from $\underline{\hat{z}}_k$, provided that $i_k$ is known. Actually, only the last component of the vector $i_k$ has to be determined at the $k$th iteration, the other $k-1$ components having been found during the previous iterations.
Keeping in mind that the OMP algorithm aims at minimizing the residual norm $\|r_k\|_2$, and since the product $W^H \hat{z}_k$ can be seen as a linear combination of the columns of the matrix $W^H$ weighted by the non-zero components of $\hat{z}_k$, the $k$th component of $i_k$ can be determined as follows:
$$i_k(k) = \arg\max_l \frac{\left| \langle r_{k-1}, w_l \rangle \right|}{\| r_{k-1} \| \, \| w_l \|}, \qquad (A9)$$
where $\langle r_{k-1}, w_l \rangle$ stands for the scalar product between the residual vector at the $(k-1)$th iteration and the $l$th column of the matrix $W^H$.
In other words, the newly added non-zero component of the vector $\hat{z}_k$ corresponds to the column of the matrix $W^H$ most correlated with the residual vector $r_{k-1}$. Note that all the non-zero components of the vector $\hat{z}_k$ are then updated using Equation (A8).
The convergence of the OMP algorithm toward the global optimum is ensured in the noiseless case, for a random matrix $W^H$ and provided that $M \geq 2 s \log(L)$, where $M$ is the length of the measured vector $y$, $L$ is the length of the sparse vector $z$, and $s$ is the number of its non-zero components.
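The steps above can be sketched in a few lines of code. The following is an illustrative NumPy implementation (not the authors' code; function and variable names are ours), using `numpy.linalg.pinv` for the Moore–Penrose pseudo-inverse of Equation (A8):

```python
import numpy as np

def omp(y, Wh, s, tol=1e-9):
    """Orthogonal Matching Pursuit, vector case of Appendix B.

    y  : measured vector of length M
    Wh : sensing matrix W^H of size M x L
    s  : targeted number of non-zero components
    """
    M, L = Wh.shape
    r = y.astype(complex)          # residual r_0 = y (null initial solution)
    idx = []                       # index set i_k of selected components
    col_norms = np.linalg.norm(Wh, axis=0)
    z_sub = np.zeros(0, dtype=complex)
    for _ in range(s):
        # Equation (A9): select the column of W^H most correlated with
        # the residual (the factor 1/||r|| does not change the argmax)
        corr = np.abs(Wh.conj().T @ r) / col_norms
        idx.append(int(np.argmax(corr)))
        # Equation (A8): pseudo-inverse update of all selected components
        z_sub = np.linalg.pinv(Wh[:, idx]) @ y
        r = y - Wh[:, idx] @ z_sub      # new residual
        if np.linalg.norm(r) < tol:     # residual below threshold
            break
    z = np.zeros(L, dtype=complex)
    z[idx] = z_sub
    return z
```

In the noiseless case, with a random 32 × 50 Gaussian matrix and a 3-sparse vector, this sketch typically recovers the solution exactly, in line with the condition $M \geq 2 s \log(L)$ recalled above.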

Appendix C

This appendix explains in more detail why the modified $\ell_1$ norm $\|\cdot\|_{1,inv}$ is invariant with respect to data reduction.
Let us first consider the case of a $2 \times 2$ matrix $Z$. According to Equation (15), its modified $\ell_1$ norm is the sum of the square roots of the diagonal entries of $Z Z^H$. Since
$$Z Z^H = \begin{bmatrix} z_{11} & z_{12} \\ z_{21} & z_{22} \end{bmatrix} \begin{bmatrix} z_{11}^* & z_{21}^* \\ z_{12}^* & z_{22}^* \end{bmatrix} = \begin{bmatrix} |z_{11}|^2 + |z_{12}|^2 & z_{11} z_{21}^* + z_{12} z_{22}^* \\ z_{21} z_{11}^* + z_{22} z_{12}^* & |z_{21}|^2 + |z_{22}|^2 \end{bmatrix},$$
it can be calculated as follows:
$$\|Z\|_{1,inv} = \sqrt{|z_{11}|^2 + |z_{12}|^2} + \sqrt{|z_{21}|^2 + |z_{22}|^2}. \qquad (A10)$$
It can be readily seen that if the matrix $Z$ reduces to a vector $Z = [z_{11}\ z_{21}]^T$, its modified $\ell_1$ norm coincides with the standard $\ell_1$ norm:
$$\|Z\|_{1,inv} = \sqrt{|z_{11}|^2} + \sqrt{|z_{21}|^2} = |z_{11}| + |z_{21}| = \|Z\|_1. \qquad (A11)$$
Consequently, the newly defined invariant $\ell_1$ norm is equivalent to the standard $\ell_1$ norm whenever $Z$ is a vector, so it is able to take into account the sparsity of the searched solution.
In the case of an $M \times N$ matrix $Z$, the invariant $\ell_1$ norm takes the form:
$$\|Z\|_{1,inv} = \sqrt{|z_{11}|^2 + \cdots + |z_{1N}|^2} + \cdots + \sqrt{|z_{M1}|^2 + \cdots + |z_{MN}|^2} = \|Z^{(1)}\|_2 + \cdots + \|Z^{(M)}\|_2, \qquad (A12)$$
where $Z^{(i)}$ denotes the $i$th row of the matrix $Z$.
For comparison, the standard $\ell_1$ norm in this case yields:
$$\|Z\|_1 = |z_{11}| + \cdots + |z_{1N}| + \cdots + |z_{M1}| + \cdots + |z_{MN}| = \|Z^{(1)}\|_1 + \cdots + \|Z^{(M)}\|_1. \qquad (A13)$$
Once again, it can be readily seen that when $Z$ is a vector (i.e., each of its rows becomes a scalar), $\|Z\|_{1,inv} = \|Z\|_1$; otherwise $\|Z\|_{1,inv} \leq \|Z\|_1$, since the $\ell_2$ norm of each row is bounded by its $\ell_1$ norm.
However, while $\|Z\|_1$ is not invariant with respect to data reduction, $\|Z\|_{1,inv}$ has this property according to Equations (A4) and (16).
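These properties can also be checked numerically. In the sketch below (illustrative; `l1_inv` is our naming, not the authors'), the invariant norm is computed as the sum of the $\ell_2$ norms of the rows of $Z$, i.e., the sum of the square roots of the diagonal entries of $Z Z^H$:

```python
import numpy as np

def l1_inv(Z):
    """Invariant l1 norm: sum of the l2 norms of the rows of Z."""
    return float(np.sum(np.linalg.norm(Z, axis=1)))

rng = np.random.default_rng(0)
Z = rng.standard_normal((8, 64)) + 1j * rng.standard_normal((8, 64))

# Vector case (Equation (A11)): the invariant norm equals the l1 norm
v = Z[:, :1]
assert np.isclose(l1_inv(v), float(np.sum(np.abs(v))))

# Matrix case: the invariant norm is bounded by the standard l1 norm
assert l1_inv(Z) <= np.sum(np.abs(Z))

# Invariance to data reduction: replacing Z by its reduced form Z V
# leaves the norm unchanged, since l1_inv depends on Z only through
# the diagonal of Z Z^H, a continuous function of Z Z^H (Lemma A4)
_, _, Vh = np.linalg.svd(Z, full_matrices=False)
assert np.isclose(l1_inv(Z), l1_inv(Z @ Vh.conj().T))
```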

References

1. Mitola, J.; Maguire, G. Cognitive radio: Making software radios more personal. IEEE Wirel. Commun. 1999, 6, 13–18.
2. Liu, C.; Wang, H.; Zhang, J.; He, Z. Wideband Spectrum Sensing Based on Single-Channel Sub-Nyquist Sampling for Cognitive Radio. Sensors 2018, 18, 2222.
3. Arjoune, Y.; Kaabouch, N. A Comprehensive Survey on Spectrum Sensing in Cognitive Radio Networks: Recent Advances, New Challenges, and Future Research Directions. Sensors 2019, 19, 126.
4. Candes, E.J.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509.
5. Mishali, M.; Eldar, Y.C. Wideband Spectrum Sensing at Sub-Nyquist Rates [Applications Corner]. IEEE Signal Process. Mag. 2011, 28, 102–135.
6. Eldar, Y.; Kutyniok, G. Compressed Sensing: Theory and Applications; Cambridge University Press: Cambridge, UK, 2012.
7. Mishra, K.V.; Eldar, Y.C.; Shoshan, E.; Namer, M.; Meltsin, M. A Cognitive Sub-Nyquist MIMO Radar Prototype. IEEE Trans. Aerosp. Electron. Syst. 2019, 56, 937–955.
8. Wei, J.; Mao, S.; Dai, J.; Wang, Z.; Huang, W.; Yu, Y. A Faster and More Accurate Iterative Threshold Algorithm for Signal Reconstruction in Compressed Sensing. Sensors 2022, 22, 4218.
9. FCC Spectrum Policy Task Force. Report of the Spectrum Efficiency Working Group; Technical Report, 2002. Available online: https://transition.fcc.gov/sptf/files/SEWGFinalReport_1.pdf (accessed on 10 February 2023).
10. Ahmad, B. A Survey of Wideband Spectrum Sensing Algorithms for Cognitive Radio Networks and Sub-Nyquist Approaches. In Multimedia over Cognitive Radio Networks; CRC Press: Boca Raton, FL, USA, 2015; ISBN 978-1-4822-1485-7.
11. Fang, J.; Wang, B.; Li, H.; Liang, Y.-C. Recent Advances on Sub-Nyquist Sampling-Based Wideband Spectrum Sensing. IEEE Wirel. Commun. 2021, 28, 115–121.
12. Mishali, M.; Eldar, Y.C. Sub-Nyquist Sampling. IEEE Signal Process. Mag. 2011, 28, 98–124.
13. Burel, G.; Fiche, A.; Gautier, R.; Martin-Guennou, A. A Modulated Wideband Converter Calibration Technique Based on a Single Measurement of a White Noise Signal with Advanced Resynchronization Preprocessing. Electronics 2022, 11, 774.
14. Haque, T.; Yazicigil, R.T.; Pan, K.J.-L.; Wright, J.; Kinget, P.R. Theory and Design of a Quadrature Analog-to-Information Converter for Energy-Efficient Wideband Spectrum Sensing. IEEE Trans. Circuits Syst. I Regul. Pap. 2014, 62, 527–535.
15. Iadarola, G.; Daponte, P.; De Vito, L.; Rapuano, S. Over the Limits of Traditional Sampling: Advantages and Issues of AICs for Measurement Instrumentation. Sensors 2023, 23, 861.
16. Zhao, Y.; Hu, Y.H.; Liu, J. Random Triggering-Based Sub-Nyquist Sampling System for Sparse Multiband Signal. IEEE Trans. Instrum. Meas. 2017, 66, 1789–1797.
17. Liu, W.; Huang, Z.; Wang, X.; Sun, W. Design of a Single Channel Modulated Wideband Converter for Wideband Spectrum Sensing: Theory, Architecture and Hardware Implementation. Sensors 2017, 17, 1035.
18. Qin, Z.; Gao, Y.; Plumbley, M.D.; Parini, C.G. Wideband Spectrum Sensing on Real-Time Signals at Sub-Nyquist Sampling Rates in Single and Cooperative Multiple Nodes. IEEE Trans. Signal Process. 2015, 64, 3106–3117.
19. Yang, Z.; Li, J.; Stoica, P.; Xie, L. Sparse methods for direction-of-arrival estimation. In Academic Press Library in Signal Processing, Volume 7: Array, Radar and Communications Engineering; Chellappa, R., Theodoridis, S., Eds.; Academic Press: London, UK, 2018; Chapter 11; pp. 509–581.
20. Ren, S.; Zeng, Z.; Guo, C.; Sun, X. A Low Complexity Sensing Algorithm for Wideband Sparse Spectra. IEEE Commun. Lett. 2016, 21, 92–95.
21. Wax, M.; Kailath, T. Detection of signals by information theoretic criteria. IEEE Trans. Acoust. Speech Signal Process. 1985, 33, 387–392.
22. Cai, T.T.; Wang, L. Orthogonal Matching Pursuit for Sparse Signal Recovery with Noise. IEEE Trans. Inf. Theory 2011, 57, 4680–4688.
23. Anupama, R.; Kulkarni, S.Y.; Prasad, S.N. Compressive Spectrum Sensing for Wideband Signals Using Improved Matching Pursuit Algorithms. In International Conference on Artificial Intelligence and Sustainable Engineering; Springer: Singapore, 2022; pp. 241–250.
24. Tibshirani, R. Regression Shrinkage and Selection Via the Lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288.
25. Xu, H.; Caramanis, C.; Mannor, S. Robust Regression and Lasso. IEEE Trans. Inf. Theory 2010, 56, 3561–3574.
26. Koteeshwari, R.; Malarkodi, B. Compressive spectrum sensing for 5G cognitive radio networks—LASSO approach. Heliyon 2022, 8, e09621.
27. Needell, D.; Tropp, J. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 2009, 26, 301–321.
28. Blumensath, T.; Davies, M.E. Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 2009, 27, 265–274.
29. Maleki, A.; Donoho, D.L. Optimally Tuned Iterative Reconstruction Algorithms for Compressed Sensing. IEEE J. Sel. Top. Signal Process. 2010, 4, 330–341.
30. Arlot, S.; Celisse, A. A survey of cross-validation procedures for model selection. Stat. Surv. 2010, 4, 40–79.
31. Stanković, S.; Orović, I.; Stanković, L. An automated signal reconstruction method based on analysis of compressive sensed signals in noisy environment. Signal Process. 2014, 104, 43–50.
32. Burel, G.; Fiche, A.; Gautier, R. A Modulated Wideband Converter Model Based on Linear Algebra and Its Application to Fast Calibration. Sensors 2022, 22, 7381.
33. Daponte, P.; De Vito, L.; Iadarola, G.; Iovini, M.; Rapuano, S. Experimental comparison of two mathematical models for Analog-to-Information Converters. In Proceedings of the 21st IMEKO TC4 International Symposium and 19th International Workshop on ADC Modelling and Testing, Budapest, Hungary, 7–9 September 2016.
34. Stanković, S.; Orović, I.; Amin, M. L-statistics based modification of reconstruction algorithms for compressive sensing in the presence of impulse noise. Signal Process. 2013, 93, 2927–2931.
Figure 1. Signal and data processing block diagram for wideband spectrum sensing.
Figure 2. Choice of optimal λ value (red circle) using the cross-validation method.
Figure 3. Wideband spectrum sensing results obtained using the invariant LASSO algorithm, for an in-band SNR of (a) 30 dB; (b) 10 dB.
Figure 4. Complexity gain comparison in terms of number of multiplications.
Figure 5. Performance comparison between OMP and LASSO algorithms in terms of: (a) Normalized mean error; (b) Mean detection rate.
Figure 6. Experimental testbed for the evaluation of the data reduction invariant LASSO algorithm.
Figure 7. Block diagram of our compressed sampling testbed.
Figure 8. Photo of our analog front-end board.
Figure 9. Illustration of the operating principle of our analog front-end board.
Figure 10. Avnet ML605 DSP Kit used for the generation of modulating waveforms.
Figure 11. Relative error between observed and predicted system output, as a function of output frequency.
Figure 12. Wideband spectrum sensing results provided by the proposed reconstruction algorithm for two active transmitters: (a) Original measured data; (b) Reduced data.
Figure 13. Wideband spectrum sensing results provided by the proposed reconstruction algorithm for six active transmitters: (a) Original measured data; (b) Reduced data.
Figure 14. Wideband spectrum sensing results for six active transmitters obtained using two greedy algorithms: (a) CoSaMP; (b) IHT.
Table 1. Main parameters of the experimental testbed used to evaluate the proposed algorithm.

Parameter                          Symbol                        Value
Number of physical channels        M                             4
Nyquist frequency                  F_Nyq                         1 GHz
Length of scramblers               L                             96
Scramblers repetition frequency    F_p = F_Nyq / L               10.41667 MHz
ADC sampling frequency             F_s                           104.1667 MHz
Measured samples per channel       N                             448
Spectral resolution                Δν = F_Nyq / (L × N)          23.251 kHz
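The derived quantities in Table 1 follow directly from F_Nyq, L and N. The following illustrative snippet checks them:

```python
# Testbed parameters from Table 1
F_nyq = 1e9   # Nyquist frequency, Hz
L = 96        # length of the scrambling sequences
N = 448       # measured samples per channel

F_p = F_nyq / L             # scrambler repetition frequency
delta_nu = F_nyq / (L * N)  # spectral resolution

print(f"F_p = {F_p / 1e6:.5f} MHz")            # 10.41667 MHz
print(f"delta_nu = {delta_nu / 1e3:.3f} kHz")  # 23.251 kHz
```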

Share and Cite

MDPI and ACS Style

Burel, G.; Radoi, E.; Gautier, R.; Le Jeune, D. Wideband Spectrum Sensing Using Modulated Wideband Converter and Data Reduction Invariant Algorithms. Sensors 2023, 23, 2263. https://doi.org/10.3390/s23042263
