Medical data is often highly sensitive in terms of privacy and security. Federated learning, a machine learning technique in which training data remains distributed across many machines and the learning process is collaborative, has been used to improve the privacy and security of medical data. However, there are numerous privacy attacks on deep learning (DL) models that attackers can use to obtain sensitive information, so DL models should be safeguarded from adversarial attacks, particularly in medical data applications. One answer to this challenge is homomorphic encryption-based protection of the model from an adversarial collaborator. This paper presents a privacy-preserving federated learning system for medical data that uses homomorphic encryption. The proposed technique employs a secure multi-party computation protocol to safeguard the DL model from adversaries. The proposed approach is evaluated in terms of model performance on a real-world medical dataset.
1. Introduction
Machine learning (ML) is widely used in almost all fields; it allows a computer system to learn from data to improve its performance. It is applied in many areas such as image recognition, natural language processing, and machine translation. Federated learning is a machine learning technique in which the training data is distributed across multiple machines and the learning process is performed collaboratively. This technique can be used to improve the privacy and security of medical data.
Medical data is highly sensitive and is often subject to privacy and security concerns. For example, a person’s health information is typically confidential and can be used to identify the person, so it is essential to protect its privacy and security. The Health Insurance Portability and Accountability Act (HIPAA) (US Department of Health and Human Services, 2014) and the General Data Protection Regulation (GDPR) (The European Union, 2018) strictly mandate personal health information privacy. Various methods exist to safeguard private information. Federated learning is one technique that can protect sensitive data during multi-party computation tasks: it improves the privacy and security of medical data by preventing the data from being centralized, where it would be more vulnerable.
Keeping the data local is not sufficient to secure the data and the ML model, because there are several privacy attacks on deep learning models that can expose private data [4,5]. For example, attackers can use the gradient information of a deep learning model to extract sensitive information. Thus, the deep learning model itself should be protected from adversaries as well. One solution to this problem is homomorphic encryption-based model protection from an adversarial collaborator. Homomorphic encryption is a technique in which data can be encrypted and operations can be performed directly on the encrypted data. This technique can be used to protect the deep learning model from adversaries.
This paper proposes a privacy-preserving federated learning algorithm based on a convolutional neural network (CNN) for medical data using homomorphic encryption. The proposed algorithm uses a secure multi-party computation protocol to protect the deep learning model from adversaries. We evaluate the algorithm on a real-world medical dataset and show that it can protect the deep learning model from adversaries. We limited our work to binary classification; in subsequent work, we plan to use multi-class approaches from the literature [7,8].
The main contributions of this paper are as follows:
A homomorphic encryption-based federated learning algorithm is proposed to protect the confidentiality of sensitive medical data.
A secure multi-party computation protocol is proposed to protect deep learning models from adversaries.
A real-world medical dataset is used to evaluate the proposed algorithm. The experimental results show that the proposed algorithm can protect the deep learning model from adversaries.
The rest of the paper is organized as follows: Section 2 describes related work, while Section 3 describes the preliminaries, including homomorphic encryption and federated learning. Section 4 provides detailed information of the proposed system model. Section 5 and Section 6 discuss the results, while Section 7 concludes the paper.
2. Related Work
Data-driven ML models provide unprecedented opportunities for healthcare with the use of sensitive health data. These models are trained locally to protect the sensitive health data. However, it is difficult to build robust models without large and diverse datasets covering the full spectrum of health concerns. Prior work proposed federated learning techniques to overcome this problem. For instance, the studies [9,10,11] reviewed the current applications and technical considerations of federated learning for preserving sensitive biomedical data. The impact of federated learning is examined through stakeholders such as patients, clinicians, healthcare facilities, and manufacturers. In another study, the authors in  utilized federated learning for brain tumor segmentation on the BraTS dataset, which consists of magnetic resonance imaging brain scans; the results show that performance decreases due to privacy-protection costs. The same BraTS dataset is used in  to compare three collaborative training techniques: federated learning, institutional incremental learning (IIL), and cyclic institutional incremental learning (CIIL). In IIL and CIIL, institutions train a shared model successively, where CIIL adds a cyclic loop through the organizations. The results indicate that federated learning achieves Dice scores similar to those of models trained by sharing data, and it outperforms IIL and CIIL, since these methods suffer from catastrophic forgetting and complexity.
Medical data is also safeguarded by encryption techniques such as homomorphic encryption. In , the authors propose an online secure multi-party computation scheme that shares patient information with hospitals using homomorphic encryption. Bocu et al.  proposed a homomorphic encryption model integrated with a personal health information system utilizing heart rate data; the results indicate that the described technique successfully addressed the requirements for secure data processing for 500 patients, with the expected storage and network challenges. In another study, Wang et al.  proposed a data division scheme based on homomorphic encryption for wireless sensor networks; the results show a trade-off between resources and data security. In , the applicability of homomorphic encryption is shown by measuring patients’ vitals with a lightweight encryption scheme: sensor data such as respiration and heart rate are encrypted using homomorphic encryption before transmission to a non-trusting third party, while decryption takes place only in a medical facility. The study in  developed an IoT-based architecture with homomorphic encryption to combat data loss and spoofing attacks in chronic disease monitoring; the results suggest that homomorphic encryption provides cost-effective and straightforward protection of sensitive health information. Blockchain technologies are also used together with homomorphic encryption to secure medical data. The authors in  proposed a practical pandemic infection tracking tool using homomorphic encryption and blockchain technologies in intelligent transportation systems with automatic healthcare monitoring. In another study, Ali et al.  developed a searchable distributed medical database on a blockchain using homomorphic encryption. The increasing need to secure sensitive information leads to the use of various techniques together.
In the scope of this study, a multi-party computation tool using federated learning with homomorphic encryption is developed and analyzed.
3. Preliminaries
3.1. Homomorphic Encryption
The definition of the homomorphic encryption (HE) scheme is given in  as follows:
Definition 1 (Homomorphic Encryption).
A family of schemes (Enc, Dec) is said to be homomorphic with respect to an operator ∘ if, for any two ciphertexts c1 = Enc(m1; r1) and c2 = Enc(m2; r2), the following equality is satisfied:
Dec(c1 ∘ c2) = m1 ∘ m2,
where r1 and r2 are the corresponding randomness.
A homomorphic encryption scheme is a pair of algorithms, Enc and Dec, with the following properties:
Enc takes as input a plaintext m and outputs a ciphertext c such that c is a homomorphic image of m, i.e., c = Enc(m);
Dec takes as input a ciphertext c and outputs the plaintext m of which c is a homomorphic image, i.e., m = Dec(c);
Enc and Dec are computationally efficient.
There are two types of homomorphic encryption: additively homomorphic and multiplicatively homomorphic.
Additively homomorphic encryption consists of a pair of algorithms Enc and Dec such that, for all plaintexts m1 and m2 with ciphertexts c1 = Enc(m1) and c2 = Enc(m2), we have Dec(c1 ⊕ c2) = m1 + m2.
Multiplicatively homomorphic encryption consists of a pair of algorithms Enc and Dec such that, for all plaintexts m1 and m2 with ciphertexts c1 = Enc(m1) and c2 = Enc(m2), we have Dec(c1 ⊗ c2) = m1 · m2.
Partially homomorphic encryption is a variant of homomorphic encryption where homomorphism is only partially supported, i.e., the encryption scheme is homomorphic for some operations while not homomorphic for others.
Somewhat homomorphic encryption is a variant of fully homomorphic encryption in which homomorphism is only partially supported, i.e., the encryption scheme is homomorphic for all operations but only up to a limited number of them.
Fully homomorphic encryption (FHE) is a variant of homomorphic encryption that allows homomorphism over all functions, i.e., the encryption scheme is homomorphic for all operations. In other words, an FHE scheme consists of a pair of algorithms Enc and Dec such that, for all plaintexts m1 and m2 with ciphertexts c1 = Enc(m1) and c2 = Enc(m2), we have both Dec(c1 ⊕ c2) = m1 + m2 and Dec(c1 ⊗ c2) = m1 · m2, for an arbitrary number of operations.
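To make the additive-homomorphism property concrete, the following toy sketch illustrates it with one-time additive masking modulo a public q. This is our own illustration for intuition only: it is not secure and is not how BFV or any real scheme works.

```python
import random

Q = 2**31 - 1  # public modulus (illustrative)

def keygen():
    # One-time additive key; must never be reused in this toy scheme.
    return random.randrange(Q)

def enc(m, k):
    return (m + k) % Q

def dec(c, k):
    return (c - k) % Q

k1, k2 = keygen(), keygen()
c1, c2 = enc(10, k1), enc(32, k2)

# Homomorphic addition: the sum of ciphertexts encrypts the sum of the
# plaintexts under the sum of the keys, i.e., Dec(c1 + c2) = m1 + m2.
c_sum = (c1 + c2) % Q
assert dec(c_sum, (k1 + k2) % Q) == 42
```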
Table 1 shows a summary of the major homomorphic encryption schemes.
3.2. Brakerski–Fan–Vercauteren (BFV) Scheme
Since the work of Brakerski and of Fan and Vercauteren (BFV), the somewhat homomorphic encryption (SHE) scheme has become one of the most important research topics in cryptography. In this section, we give the definition of this scheme.
Definition 2 (BFV scheme). An SHE scheme is said to be in the BFV family of schemes if it consists of the following three algorithms:
Key generation algorithm: It takes the security parameter k as input and outputs a public key pk and a secret key sk.
Encryption algorithm: It takes a message m, a public key pk, and a randomness r as inputs, and outputs a ciphertext c.
Decryption algorithm: It takes a ciphertext c, a secret key sk, and an integer i as inputs, and outputs a message m.
In the above definition, the integer i is called the decryption index. It is introduced to allow efficient decryption of ciphertexts that result from homomorphic operations. For example, when a ciphertext c is the result of homomorphic operations on ciphertexts c1 and c2, then c can be decrypted by choosing the corresponding decryption index i.
In the following, we give a brief description of the BFV scheme.
The key generation algorithm of the BFV scheme consists of the following two steps.
Let t be the security parameter. For a positive integer t, define a number and a positive integer p where is a polynomial, and p is a prime number satisfying .
Let d be a positive integer such that . Choose a monic polynomial of degree d with for some . Let . Choose a quadratic nonresidue b of , and let .
Let . The secret key is chosen to be a nonnegative integer s less than q. The public key is chosen to be the sequence .
The encryption algorithm of the BFV scheme consists of the following three steps.
Let be the public key. Choose a random polynomial of degree less than d.
Given a message , compute .
Choose a random integer , and output the ciphertext .
The decryption algorithm of the BFV scheme consists of the following two steps.
Let be the secret key. Compute .
Given a ciphertext , compute .
In the BFV scheme, the message space is .
3.2.1. Homomorphic Operations
In the BFV scheme, the additive homomorphism is defined as follows:
Definition 3 (Additive homomorphism).
Let c1 and c2 be two ciphertexts. The additive homomorphism is defined to be the ciphertext c1 ⊕ c2.
In the BFV scheme, the standard polynomial addition algorithm implements the additive homomorphism.
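A concrete sketch of that coefficient-wise addition (our toy example; the modulus and coefficients are illustrative values, not real BFV parameters):

```python
Q = 257  # illustrative coefficient modulus

def poly_add(a, b, q=Q):
    """Coefficient-wise addition of two equal-length polynomials mod q."""
    return [(x + y) % q for x, y in zip(a, b)]

c1 = [3, 250, 17, 0]   # coefficients of a toy ciphertext polynomial
c2 = [10, 10, 245, 1]
assert poly_add(c1, c2) == [13, 3, 5, 1]  # 250 + 10 wraps to 3 mod 257
```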
In the BFV scheme, the multiplicative homomorphism is defined as follows:
Definition 4 (Multiplicative homomorphism).
Let c be a ciphertext and m be a plaintext message. The multiplicative homomorphism is defined to be the ciphertext c ⊗ m.
In the BFV scheme, the standard polynomial multiplication algorithm implements the multiplicative homomorphism.
The multiplicative homomorphism is sometimes called the “plaintext multiplication” or the “scalar multiplication”.
Relinearization is an operation used in the BFV scheme to reduce the size of the ciphertexts produced by homomorphic multiplication. In the following, we give the definition of this operation.
Let c1 and c2 be two ciphertexts. The relinearization of the product c1 ⊗ c2 is a ciphertext of the same size as c1 and c2 that decrypts to the same plaintext.
In the BFV scheme, the relinearization homomorphism is implemented by the standard polynomial addition and multiplication algorithms.
Rotation is a homomorphic operation used in the BFV scheme to cyclically shift the slots of an encrypted plaintext vector; it can be used to implement a large class of homomorphic operations on encrypted data. In the following, we give the definition of this operation.
Let c be a ciphertext. The rotation homomorphism by an integer r is defined to be the ciphertext that encrypts the same plaintext vector with its slots cyclically shifted by r positions.
In the BFV scheme, the rotation homomorphism is implemented by applying a power map (a Galois automorphism) to the ciphertext polynomials.
For this reason, the rotation is sometimes called the “power operation”.
3.3. Federated Learning
In this section, we briefly describe the federated learning (FL) framework. We refer to [22,23] for more details.
Definition 7 (FL model). Let N be a positive integer, and let (Ω, F, P) be a probability space. Let m be a positive integer with m ≤ N, and let D1, …, Dm be a collection of random variables on this probability space representing the clients’ local datasets. The FL model consists of the following four algorithms:
Initialization algorithm: It takes the security parameter k as input and outputs the initial global model w(0), where n is the number of free parameters in w(0).
Local training algorithm: It takes the global model w(t), a local dataset Di, and a positive integer t as inputs, and outputs a local model wi(t).
Upload algorithm: It takes the local model wi(t) and a positive integer t as inputs, and outputs a vector ui(t).
Aggregation algorithm: It takes a set of vectors {u1(t), …, um(t)} and a positive integer t as inputs, and outputs the global model w(t+1).
In the above definition, the integer t is called the training round, and the global model is a function of t. The global model is updated from the local models, which are trained on the local datasets; the raw data is never pooled, and aggregation is performed over the uploaded model parameters only. The initialization algorithm provides the initial global model.
In the FL model, the local training, upload, and aggregation algorithms can be implemented with any machine learning algorithm.
In the FL model, the global model is shared among all participating clients, while the local datasets and local models are not shared among the clients.
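A common plaintext instantiation of the aggregation algorithm is federated averaging (FedAvg) [22,23], in which the server averages the uploaded parameter vectors element-wise. A minimal sketch (local training is omitted):

```python
def aggregate(client_weights):
    """FedAvg aggregation: element-wise mean of the clients' parameter vectors."""
    n_clients = len(client_weights)
    n_params = len(client_weights[0])
    return [sum(w[i] for w in client_weights) / n_clients
            for i in range(n_params)]

# Three clients upload their locally trained parameter vectors.
uploads = [[0.2, 1.0, -0.5],
           [0.4, 0.8, -0.1],
           [0.0, 1.2, -0.3]]
global_model = aggregate(uploads)
assert abs(global_model[1] - 1.0) < 1e-9  # (1.0 + 0.8 + 1.2) / 3
```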
4. System Model
This section gives a high-level overview of the proposed BFV-based privacy-preserving federated learning training method for COVID-19 detection. The proposed privacy-preserving scheme is a two-phase approach: (1) local model training at each client and (2) encrypted model weight aggregation at the server. In the local training phase, each client builds a local CNN-based DL model using its local electronic health record dataset and encrypts the model weight matrix using the public key. In the second phase, the server aggregates all clients’ encrypted weight matrices and sends the final matrix back to the clients. Each client then decrypts the aggregated encrypted weight matrix to update the weights of its DL model. Figure 1 shows the system overview.
Figure 2 shows the CNN-based COVID-19 detection model used in the experiments.
4.1. Notation
The following notation is used throughout this paper:
Boldface lowercase letters show the vectors (e.g., );
shows the ciphertext of a matrix W;
⊕ shows the homomorphic encryption-based addition, ⊗ homomorphic encryption-based multiplication;
shows public/private key pairs.
4.2. Client Initialization
Algorithm 1 shows the overall process in the initialization phase. Each client trains its local classifier with its private dataset. The trained model’s weight matrix, W, is encrypted and shared with the server.
Algorithm 1 Model training in each client
The dataset at client c: , public key:
// Create an empty matrix for the encrypted layer weights
// Encrypt the layer weights () with public key.
Return // The encrypted weight matrix
4.3. Model Aggregation
The server collects all encrypted weight matrices, , from the clients. It calculates the average weight value of each neuron in the encrypted domain. Algorithm 2 shows the overall process in the aggregation phase.
Algorithm 2 Model aggregation at the server
public key: , the number of clients: c, client model weights:
// Homomorphic addition
// Homomorphic multiplication.
Return // Return the aggregated weight matrix in the encrypted domain
4.4. Client Decryption
The last step is client decryption in which each client decrypts the aggregated and encrypted weight matrix, , and updates their local model, h. Algorithm 3 shows the overall process in the client decryption phase.
Algorithm 3 Client decryption
private key: , encrypted aggregated weights:
// Get the corresponding row for layer
// Decrypt the row and update the layer weights
// Save the aggregated model as global_model at client.
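Putting Algorithms 1–3 together, the following sketch walks through the three phases with a toy additively homomorphic stand-in for BFV. This is our illustration only: it is not secure, it assumes all clients share one key, and it performs the averaging division client-side after decryption.

```python
import random

Q = 2**61 - 1  # toy public modulus; the real system uses BFV via Pyfhel

def enc(w, key):
    # Toy additively homomorphic "encryption" (NOT secure, illustration only).
    return (w + key) % Q

def dec(c, key):
    return (c - key) % Q

# Algorithm 1: each client encrypts its trained weights (here, one layer).
shared_key = random.randrange(Q)
client_weights = [[4, 8, 12], [2, 4, 6], [0, 0, 0]]
encrypted = [[enc(w, shared_key) for w in ws] for ws in client_weights]

# Algorithm 2: the server sums the encrypted weights without seeing plaintext.
n = len(encrypted)
aggregated = [sum(col) % Q for col in zip(*encrypted)]

# Algorithm 3: each client removes the key (added n times) and averages.
average = [dec(c, (n * shared_key) % Q) / n for c in aggregated]
assert average == [2.0, 4.0, 6.0]
```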
In this work, a COVID-19 radiography dataset collected in previous works on COVID-19 detection [24,25] was used. The dataset contains X-ray lung images with four classes: COVID, Lung_OPACITY, Normal, and Viral Pneumonia. We utilized two of these classes, COVID and Normal, focusing only on COVID-19 detection.
From the original dataset, we took the first 1000 records of each class, using 80% (800 records) as the training set and the remaining 20% (200 records) as the test set. A further 10% of the training data was held out as a train-validation set.
Data preprocessing in this work consists of data augmentation and rescaling for the training dataset, while only rescaling was applied to the test dataset. Data augmentation was necessary to provide variety in the training data. The images were rescaled by multiplying each pixel value by 1/255, transforming the pixel range from [0, 255] to [0, 1] so that all pixels are treated on the same scale.
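The rescaling step is a single multiplication per pixel value; a minimal sketch:

```python
def rescale(pixels):
    # Map 8-bit values in [0, 255] to [0, 1] by multiplying by 1/255.
    return [p * (1 / 255) for p in pixels]

row = [0, 51, 255]
scaled = rescale(row)
assert scaled[0] == 0.0
assert abs(scaled[-1] - 1.0) < 1e-9
```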
6.1. Experimental Setup
Table 2 shows the CNN model used to predict COVID-19 detection.
The implementation was developed in Python 3.8.8 using existing libraries: Keras and TensorFlow for the machine learning processes; NumPy for processing weight arrays and data structures; pickle for serializing exported weights; and, most importantly, Pyfhel for homomorphic encryption. Pyfhel  is a Python wrapper for Microsoft SEAL and provides the same functionality as the Microsoft Simple Encrypted Arithmetic Library (SEAL).
Microsoft SEAL is a homomorphic encryption library developed by Microsoft and first released in 2015. The library implements both the Brakerski–Fan–Vercauteren (BFV) [27,28] and Cheon–Kim–Kim–Song (CKKS)  homomorphic encryption schemes and provides standard SHE functions, including encoding, key generation, encryption, decryption, addition, multiplication, and relinearization.
In the Pyfhel implementation, we used the BFV scheme with the pre-defined default values for the HE context parameters, with the exception of the parameter sec. This parameter determines the bit-level security provided; at the time of writing, there are two possible values, 128 and 192. We experimented with both values and observed the overall model performance in terms of time and accuracy. The parameter also determines the length of the coefficient modulus for a given polynomial degree, as described in Table 3.
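For illustration, a Pyfhel context might be configured as in the fragment below. This is a configuration sketch assuming the Pyfhel 2.x API (contextGen with p, m, and sec); the parameter values shown are illustrative, not necessarily those used in our experiments.

```python
from Pyfhel import Pyfhel

HE = Pyfhel()
# p: plaintext modulus, m: polynomial modulus degree (illustrative values).
# sec selects the 128- or 192-bit security level, which together with m
# fixes the length of the coefficient modulus.
HE.contextGen(p=65537, m=4096, sec=128)  # or sec=192
HE.keyGen()
```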
We implemented the proposed protocols and the classifier training phase in Python, using the Keras/TensorFlow libraries for model building and the Microsoft SEAL library (via Pyfhel) for the somewhat homomorphic encryption. To show the time performance of the training phase, we tested the public COVID-19 X-ray dataset with different numbers of clients and different values of the ciphertext modulus, which determines how much noise can accumulate before decryption fails. Table 4 shows the dataset details.
The dataset is arbitrarily partitioned among the clients, and the prediction performance in the encrypted domain is then compared with that in the plain domain.
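The arbitrary partition of the training records among the clients can be sketched as a round-robin split (our illustration; any balanced split would serve):

```python
def partition(dataset, n_clients):
    """Deal the training records round-robin into n_clients shards."""
    shards = [[] for _ in range(n_clients)]
    for i, record in enumerate(dataset):
        shards[i % n_clients].append(record)
    return shards

records = list(range(10))
shards = partition(records, 3)
assert [len(s) for s in shards] == [4, 3, 3]
```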
6.2. Experimental Results
We first ran the COVID-19 detection model without encryption or federated learning, i.e., with only a single client. Table 5 shows the model performance scores; the running time was 599.169577 s.
We then applied federated learning and observed the model performance based on the evaluation metrics. Table 6 shows the performance results of federated learning without encryption.
We then continued our observations by adjusting the hyperparameter sec between the values 128 and 192.
Table 7 shows the performance measurements of federated learning with encryption at security level 128.
Table 8 shows the performance measurements of federated learning with encryption at security level 192.
Apart from measuring the model performance with the evaluation metrics, we also measured the total processing time, from model training at the clients to the end of the federated learning process and prediction with the aggregated model. Table 9 shows the processing time of federated learning with various numbers of clients.
Figure 5 shows histograms of the evaluation metrics (accuracy, precision, recall, and F1 score) that visualize the influence of encryption and encryption key length on model performance.
From these histograms, we see that homomorphic encryption and its key-length variations have little influence on model performance.
We also used a histogram to visualize the growth in running time under homomorphic encryption and its key-length variations. Figure 6 shows that the running time increased quite significantly when homomorphic encryption was applied, and that it was also influenced by the encryption key length: the longer the key, the longer the execution time.
One last observation was that the pickle file size of the encrypted weights was around 7 GB regardless of execution time.
Privacy preservation has become an essential practice for healthcare institutions, as regulations such as the GDPR and HIPAA mandate it. Federated learning and homomorphic encryption will play a critical role in maintaining data security during model training. By combining both techniques, the proposed model achieves competitive performance, albeit with a significant trade-off in execution time as the number of clients grows. In settings where privacy is crucial, such as healthcare, this trade-off is acceptable.
The classification metrics, i.e., accuracy, F1, precision, and recall, reached over 80% using both encrypted and plain data in every federated learning case, which means that homomorphic encryption, in this case SHE, does not deteriorate model performance.
Privacy attacks can cause immense damage to the security and privacy of patient information and thereby hinder the advancement of data-driven models in healthcare. It is therefore indispensable to take crucial steps to strengthen the safety of the information and the way data is processed. This study demonstrated that federated learning with homomorphic encryption can successfully enable data-driven models while minimizing the sharing of sensitive data. It is envisioned that this study could be helpful for scientists and researchers working on sensitive healthcare data in multi-party computation settings.
Conceptualization, F.W. and F.O.C.; methodology, F.W.; software, F.W.; validation, F.W. and F.O.C.; formal analysis, S.S. and M.K.; writing—original draft preparation, F.W. and F.O.C.; writing—review and editing, F.W., F.O.C., S.S. and M.K.; visualization, F.W.; supervision, F.O.C. All authors have read and agreed to the published version of the manuscript.
Alloghani, M.; Alani, M.M.; Al-Jumeily, D.; Baker, T.; Mustafina, J.; Hussain, A.; Aljaaf, A.J. A systematic review on the status and progress of homomorphic encryption technologies. J. Inf. Secur. Appl. 2019, 48, 102362.
Molina-Carballo, A.; Palacios-López, R.; Jerez-Calero, A.; Augustín-Morales, M.C.; Agil, A.; Muñoz-Hoyos, A.; Muñoz-Gallego, A. Protective Effect of Melatonin Administration against SARS-CoV-2 Infection: A Systematic Review. Curr. Issues Mol. Biol. 2022, 44, 31–45.
Checa-Ros, A.; Muñoz-Hoyos, A.; Molina-Carballo, A.; Muñoz-Gallego, A.; Narbona-Galdó, S.; Jerez-Calero, A.; del Carmen Augustín-Morales, M. Analysis of Different Melatonin Secretion Patterns in Children With Sleep Disorders: Melatonin Secretion Patterns in Children. J. Child Neurol. 2017, 32, 1000–1008.
Xu, J.; Glicksberg, B.S.; Su, C.; Walker, P.; Bian, J.; Wang, F. Federated learning for healthcare informatics. J. Healthc. Inform. Res. 2021, 5, 1–19.
Rieke, N.; Hancox, J.; Li, W.; Milletari, F.; Roth, H.R.; Albarqouni, S.; Bakas, S.; Galtier, M.N.; Landman, B.A.; Maier-Hein, K.; et al. The future of digital health with federated learning. NPJ Digit. Med. 2020, 3, 1–7.
Antunes, R.S.; da Costa, C.A.; Küderle, A.; Yari, I.A.; Eskofier, B. Federated Learning for Healthcare: Systematic Review and Architecture Proposal. ACM Trans. Intell. Syst. Technol. (TIST) 2022, 13, 1–23.
Li, W.; Milletarì, F.; Xu, D.; Rieke, N.; Hancox, J.; Zhu, W.; Baust, M.; Cheng, Y.; Ourselin, S.; Cardoso, M.J.; et al. Privacy-preserving federated brain tumour segmentation. In Proceedings of the International Workshop on Machine Learning in Medical Imaging; Springer: Berlin/Heidelberg, Germany, 2019; pp. 133–141.
Sheller, M.J.; Reina, G.A.; Edwards, B.; Martin, J.; Bakas, S. Multi-institutional deep learning modeling without sharing patient data: A feasibility study on brain tumor segmentation. In Proceedings of the International MICCAI Brainlesion Workshop; Springer: Berlin/Heidelberg, Germany, 2018; pp. 92–104.
Kumar, A.V.; Sujith, M.S.; Sai, K.T.; Rajesh, G.; Yashwanth, D.J.S. Secure Multiparty computation enabled E-Healthcare system with Homomorphic encryption. In Proceedings of the IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2020; Volume 981, p. 022079.
Bocu, R.; Costache, C. A homomorphic encryption-based system for securely managing personal health metrics data. IBM J. Res. Dev. 2018, 62, 1:1–1:10.
Wang, X.; Zhang, Z. Data division scheme based on homomorphic encryption in WSNs for health care. J. Med. Syst. 2015, 39, 1–7.
Kara, M.; Laouid, A.; Yagoub, M.A.; Euler, R.; Medileh, S.; Hammoudeh, M.; Eleyan, A.; Bounceur, A. A fully homomorphic encryption based on magic number fragmentation and El-Gamal encryption: Smart healthcare use case. Expert Syst. 2022, 39, e12767.
Talpur, M.S.H.; Bhuiyan, M.Z.A.; Wang, G. Shared–node IoT network architecture with ubiquitous homomorphic encryption for healthcare monitoring. Int. J. Embed. Syst. 2015, 7, 43–54.
Tan, H.; Kim, P.; Chung, I. Practical homomorphic authentication in cloud-assisted VANETs with blockchain-based healthcare monitoring for pandemic control. Electronics 2020, 9, 1683.
Ali, A.; Pasha, M.F.; Ali, J.; Fang, O.H.; Masud, M.; Jurcut, A.D.; Alzain, M.A. Deep Learning Based Homomorphic Secure Search-Able Encryption for Keyword Search in Blockchain Healthcare System: A Novel Approach to Cryptography. Sensors 2022, 22, 528.
Gentry, C. Fully Homomorphic Encryption Using Ideal Lattices. In Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing, Bethesda, MD, USA, 31 May–2 June 2009; pp. 169–178.
Brendan McMahan, H.; Moore, E.; Ramage, D.; Hampson, S.; Agüera y Arcas, B. Communication-Efficient Learning of Deep Networks from Decentralized Data. arXiv 2016, arXiv:1602.05629.
Konečný, J.; Brendan McMahan, H.; Yu, F.X.; Richtárik, P.; Theertha Suresh, A.; Bacon, D. Federated Learning: Strategies for Improving Communication Efficiency. arXiv 2016, arXiv:1610.05492.
Chowdhury, M.E.H.; Rahman, T.; Khandakar, A.; Mazhar, R.; Kadir, M.A.; Mahbub, Z.B.; Islam, K.R.; Khan, M.S.; Iqbal, A.; Emadi, N.A.; et al. Can AI Help in Screening Viral and COVID-19 Pneumonia? IEEE Access 2020, 8, 132665–132676.
Rahman, T.; Khandakar, A.; Qiblawey, Y.; Tahir, A.; Kiranyaz, S.; Abul Kashem, S.B.; Islam, M.T.; Al Maadeed, S.; Zughaier, S.M.; Khan, M.S.; et al. Exploring the effect of image enhancement techniques on COVID-19 detection using chest X-ray images. Comput. Biol. Med. 2021, 132, 104319.
Ibarrondo, A.; Viand, A. Pyfhel: Python for homomorphic encryption libraries. In Proceedings of the 9th Workshop on Encrypted Computing & Applied Homomorphic Cryptography, Seoul, Korea, 15 November 2021.
Brakerski, Z. Fully Homomorphic Encryption without Modulus Switching from Classical GapSVP. In Advances in Cryptology—CRYPTO 2012; Safavi-Naini, R., Canetti, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 868–886.
Fan, J.; Vercauteren, F. Somewhat Practical Fully Homomorphic Encryption. Cryptology ePrint Archive, Report 2012/144. 2012. Available online: https://ia.cr/2012/144 (accessed on 15 March 2022).
Cheon, J.H.; Kim, A.; Kim, M.; Song, Y. Homomorphic Encryption for Arithmetic of Approximate Numbers. In Advances in Cryptology—ASIACRYPT 2017; Takagi, T., Peyrin, T., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 409–437.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely
those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or
the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas,
methods, instructions or products referred to in the content.