Topic Editors

1. CMSE, Michigan State University, East Lansing, MI 48824, USA
2. Department of Mathematics, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
3. Department of Applied Cybernetics, Faculty of Science, University of Hradec Králové, 50003 Hradec Králové, Czech Republic

Application of Deep Learning Method in 6G Communication Technology

Abstract submission deadline
closed (31 January 2024)
Manuscript submission deadline
31 March 2024
Viewed by
6617

Topic Information

Dear Colleagues,

Today’s economic growth relies heavily on innovation in digital technology. Modern society has yet to fully exploit 5G, yet the need for 6G communication is already widespread. Deep learning is widely used in advanced communication infrastructure for data offloading, resource allocation, security, and related tasks. The Internet of Things in smart applications such as vehicles, healthcare, homes, and cities requires deep learning to predict and classify incoming data accurately. Tasks that involve collecting, analyzing, and interpreting large volumes of data call for deep learning, which makes computation fast in a manner reminiscent of human reasoning. Dynamic applications require automatic decision-making, which deep learning techniques make possible. Next-generation ultra-high-speed networking applications demand challenging technology that operates with high reliability. In the 6G era, deep learning can enable communication with ultra-low latency, ultra-high reliability, and low energy consumption. A 6G network must also be able to handle big data with sustainable networking; big data prediction, big data classification, and data analytics using deep learning techniques therefore enable sustainable data management. Smart 6G applications such as smart waste management, smart women’s safety devices, and smart ecosystem monitoring require deep supervised learning algorithms, while disease diagnosis, automatic vehicle routing, and security applications can be computed efficiently using deep reinforcement learning. 6G networking built on cloud computing, edge computing, fog computing, and pervasive computing technologies should likewise be approached with deep learning algorithms. This multidisciplinary Topic invites innovative contributions on the following:

  1. Deep learning techniques for ultra-low-latency communication using 6G
  2. IoT/IoMT/IIoT ideologies for 6G networks using deep learning models
  3. Mobile communication using 6G architectures and artificial intelligence
  4. Smart women’s safety applications using deep 6G technologies
  5. Deep-learning-based 6G optimization techniques
  6. THz communication using deep learning in 6G
  7. Wireless sensor 6G networks
  8. Time, storage cost, and computation cost design using deep learning in 6G applications
  9. Smart city, smart healthcare, and smart banking in the 6G framework
  10. Network security, blockchain, and quantum computing in 6G using deep learning
  11. Edge, cloud, and fog computing for managing big data in 6G networks
  12. Data analytics, data science, and information management using deep learning in 6G
  13. Unmanned aerial vehicle (UAV) and satellite image processing in the 6G environment
  14. Blockchain and supply chain management using 6G networks
  15. Industrial 6G applications and underwater 6G surveillance
  16. Defense application enhancement using 6G and deep learning

Dr. Mohamed Abouhawwash
Dr. K. Venkatachalam
Topic Editors

Keywords

  • IoV using 6G networks
  • quantum deep learning models for 6G communication
  • data security and cryptographic techniques for 6G networks
  • image processing techniques for 6G applications
  • 6G agricultural applications

Participating Journals

Journal Name     Impact Factor  CiteScore  Launched Year  First Decision (median)  APC
Data             2.6            4.6        2016           22 Days                  CHF 1600
Future Internet  3.4            6.7        2009           11.8 Days                CHF 1600
Information      3.1            5.8        2010           18 Days                  CHF 1600
Mathematics      2.4            3.5        2013           16.9 Days                CHF 2600
Symmetry         2.7            4.9        2009           16.2 Days                CHF 2400

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (3 papers)

12 pages, 634 KiB  
Article
Policy Optimization of the Power Allocation Algorithm Based on the Actor–Critic Framework in Small Cell Networks
Mathematics 2023, 11(7), 1702; https://doi.org/10.3390/math11071702 - 02 Apr 2023
Cited by 1 | Viewed by 1207
Abstract
A practical solution to the power allocation problem in ultra-dense small cell networks can be achieved by using deep reinforcement learning (DRL) methods. Unlike traditional algorithms, DRL methods are capable of achieving low latency and operating without the need for global real-time channel state information (CSI). Based on the actor–critic framework, we propose a policy optimization of the power allocation algorithm (POPA) for small cell networks in this paper. The POPA adopts the proximal policy optimization (PPO) algorithm to update the policy, which has been shown to have stable exploration and convergence effects in our simulations. Thanks to our proposed actor–critic architecture with distributed execution and centralized exploration training, the POPA can meet real-time requirements and has multi-dimensional scalability. Through simulations, we demonstrate that the POPA outperforms existing methods in terms of spectral efficiency. Our findings suggest that the POPA can be of practical value for power allocation in small cell networks.
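The clipped surrogate objective at the heart of PPO, which POPA adopts for its policy updates, can be sketched as follows. This is a minimal NumPy illustration of the generic PPO loss, not the authors' POPA implementation; the function name and toy numbers are ours.

```python
import numpy as np

def ppo_clipped_loss(new_probs, old_probs, advantages, eps=0.2):
    """Clipped surrogate objective from PPO.

    new_probs / old_probs: per-action probabilities under the current
    and behavior policies; advantages: estimated advantages A(s, a).
    Returns the loss to *minimize* (negative of the PPO objective).
    """
    ratio = new_probs / old_probs                   # importance ratio r_t
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)  # keep r_t in [1-eps, 1+eps]
    # Pessimistic bound: element-wise minimum of the two surrogates.
    return -np.mean(np.minimum(ratio * advantages, clipped * advantages))

# Toy check: with identical policies the ratio is 1 everywhere,
# so the loss reduces to the negative mean advantage.
adv = np.array([1.0, -0.5, 2.0])
p = np.array([0.3, 0.5, 0.2])
print(ppo_clipped_loss(p, p, adv))  # equals -np.mean(adv)
```

The clipping is what gives PPO its stable updates: even if a power-allocation action suddenly looks much more probable under the new policy, its contribution to the gradient is capped.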

13 pages, 2717 KiB  
Article
Deep-Learning-Based Carrier Frequency Offset Estimation and Its Cross-Evaluation in Multiple-Channel Models
Information 2023, 14(2), 98; https://doi.org/10.3390/info14020098 - 06 Feb 2023
Cited by 2 | Viewed by 2228
Abstract
The most widely used Wi-Fi wireless communication system, which is based on OFDM, is currently developing quickly. The receiver must, however, accurately estimate the carrier frequency offset between the transmitter and the receiver due to the characteristics of the OFDM system that make it sensitive to carrier frequency offset. The autocorrelation of training symbols is typically used by the conventional algorithm to estimate the carrier frequency offset. Although this method is simple to use and low in complexity, it has poor estimation performance at low signal-to-noise ratios, which has a significant negative impact on the performance of the wireless communication system. Meanwhile, the design of the communication physical layer using deep-learning-based (DL-based) methods is receiving more and more attention but is rarely used in carrier frequency offset estimation. In this paper, we propose a DL-based carrier frequency offset (CFO) model architecture for 802.11n standard OFDM systems. With regard to multipath channel models with varied degrees of multipath fading, the estimation error of the proposed model is 70.54% lower on average than that of the conventional method under 802.11n standard channel models, and the DL-based method can outperform the estimation range of conventional methods. In addition, the model trained in one channel environment and tested in another was cross-evaluated to determine which models could be used for deployment in the real world. The cross-evaluation demonstrates that the DL-based model can perform well over a large class of channels without extra training when trained under the worst-case (most severe) multipath channel model.
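The conventional autocorrelation baseline the paper compares against can be sketched as follows: a CFO rotates the second copy of a repeated training symbol relative to the first, and the rotation angle reveals the offset. This is a generic noiseless illustration with made-up parameters, not the paper's 802.11n configuration.

```python
import numpy as np

def estimate_cfo(rx, rep_len, sample_rate):
    """Autocorrelation-based CFO estimator for a repeated preamble.

    Assumes the preamble contains two identical halves of `rep_len`
    samples. An offset f0 rotates the second half by
    exp(j*2*pi*f0*rep_len/fs) relative to the first, so the angle of
    the correlation between the halves recovers f0.
    """
    corr = np.sum(np.conj(rx[:rep_len]) * rx[rep_len:2 * rep_len])
    return np.angle(corr) * sample_rate / (2 * np.pi * rep_len)

# Toy check: a repeated random symbol distorted by a known CFO.
rng = np.random.default_rng(0)
fs, f0, L = 20e6, 50e3, 64        # 20 MHz sampling, 50 kHz offset (assumed values)
sym = rng.standard_normal(L) + 1j * rng.standard_normal(L)
preamble = np.concatenate([sym, sym])
n = np.arange(2 * L)
rx = preamble * np.exp(2j * np.pi * f0 * n / fs)
print(estimate_cfo(rx, L, fs))    # recovers roughly 50e3 in the noiseless case
```

The `np.angle` wrap-around also shows the limited estimation range the abstract mentions: offsets beyond fs/(2*rep_len) alias, which is one motivation for the DL-based estimator.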

17 pages, 3993 KiB  
Article
Collaborative Screening of COVID-19-like Disease from Multi-Institutional Radiographs: A Federated Learning Approach
Mathematics 2022, 10(24), 4766; https://doi.org/10.3390/math10244766 - 15 Dec 2022
Cited by 1 | Viewed by 1024
Abstract
COVID-19-like pandemics are a major threat to the global health system and have the potential to cause high mortality across age groups. The advance of the Internet of Medical Things (IoMT) technologies paves the way toward developing reliable solutions to combat these pandemics. Medical images (i.e., X-rays, computed tomography (CT)) provide an efficient tool for disease detection and diagnosis. The cost, time, and efforts for acquiring and annotating, for instance, large CT datasets make it complicated to obtain large numbers of samples from a single institution. However, owing to the necessity to preserve the privacy of patient data, it is challenging to build a centralized dataset from many institutions, especially during a pandemic. Moreover, heterogeneity between institutions presents a barrier to building efficient screening solutions. Thus, this paper presents a fog-based federated generative domain adaption framework (FGDA), where fog nodes aggregate patients’ data necessary to collaboratively train local deep-learning models for disease screening in medical images from different institutions. Local differential privacy is presented to protect the local gradients against attackers during the global model aggregation. In FGDA, the generative domain adaptation (DA) method is introduced to handle data discrepancies. Experimental evaluation on a case study of COVID-19 segmentation demonstrated the efficiency of FGDA over competing learning approaches with statistical significance.
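The privacy-preserving aggregation pattern described above, where clients perturb their gradients locally before the server averages them, can be sketched as follows. This is an illustrative generic federated-averaging sketch with Gaussian-mechanism noise, not the paper's FGDA aggregation; all names and parameters are ours.

```python
import numpy as np

def dp_fedavg(local_updates, clip_norm, noise_std, rng):
    """Federated averaging with a simple local-differential-privacy step.

    Each client clips its model update to `clip_norm` (bounding
    sensitivity) and adds Gaussian noise before sharing, so the server
    only ever sees perturbed updates; the server then averages them.
    """
    noisy = []
    for u in local_updates:
        norm = np.linalg.norm(u)
        u = u * min(1.0, clip_norm / norm)   # clip to bound sensitivity
        noisy.append(u + rng.normal(0.0, noise_std, u.shape))
    return np.mean(noisy, axis=0)

# Toy check: with zero noise this reduces to plain FedAvg.
rng = np.random.default_rng(42)
updates = [np.array([1.0, 2.0]), np.array([3.0, -1.0])]
print(dp_fedavg(updates, clip_norm=10.0, noise_std=0.0, rng=rng))  # [2.0, 0.5]
```

In practice `noise_std` is calibrated to the clipping norm and the desired privacy budget; the noise makes individual institutions' gradients hard to invert while the average across many clients stays informative.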
