# A Secure Multi-Party Computation Protocol for Graph Editing Distance against Malicious Attacks


## Abstract


## 1. Introduction

- (1) First, an encoding method applicable to the Paillier encryption algorithm is proposed, which is simpler and more efficient than existing encodings.
- (2) Using the additive homomorphism of the Paillier cryptosystem, a secure XOR operation scheme based on Paillier encryption is proposed, which serves as the building block for the confidential computation of the graph editing distance.
- (3) An MPC algorithm for the graph editing distance under the semi-honest model is designed using the encoding method and the XOR operation scheme, and its correctness is analyzed.
- (4) With the help of hash functions, an MPC algorithm for GED that can resist malicious attacks is designed against the malicious behaviors that participants may commit. The security of the algorithm is proved under the real/ideal model paradigm, and its efficiency is verified through efficiency analysis and experimental simulations.

## 2. Related Work

#### 2.1. Paillier Cryptosystem

- (1) Key generation: Set the security parameter $k$ and generate large primes $p,q$ satisfying $\gcd\left(pq,\left(p-1\right)\left(q-1\right)\right)=1$. Compute $N=pq$ and $\lambda =\mathrm{lcm}\left(p-1,q-1\right)$, where $\mathrm{lcm}$ denotes the least common multiple. Randomly choose $g\in {Z}_{{N}^{2}}^{*}$ such that $\gcd\left(L\left({g}^{\lambda}\ \mathrm{mod}\ {N}^{2}\right),N\right)=1$, where $L\left(x\right)=\frac{x-1}{N}$. The public key is $\left(g,N\right)$ and the private key is $\lambda$.
- (2) Encryption: For an arbitrary plaintext message $m\in {Z}_{N}$, choose a random number $r\in {Z}_{N}^{*}$ and compute the ciphertext $C=E\left(m\right)={g}^{m}{r}^{N}\ \mathrm{mod}\ {N}^{2}$.
- (3) Decryption: For a ciphertext $C\in {Z}_{{N}^{2}}^{*}$, compute $m=\frac{L\left({C}^{\lambda}\ \mathrm{mod}\ {N}^{2}\right)}{L\left({g}^{\lambda}\ \mathrm{mod}\ {N}^{2}\right)}\ \mathrm{mod}\ N$.
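The three procedures above can be sketched as a toy implementation. This is an illustration only, with small primes and helper names of our own choosing ($g=N+1$ is a common simple choice satisfying the $L$-condition); real deployments require primes of at least 1024 bits.

```python
import math
import random

def keygen(p, q):
    """Toy Paillier key generation; p, q must satisfy gcd(pq, (p-1)(q-1)) = 1."""
    assert math.gcd(p * q, (p - 1) * (q - 1)) == 1
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    g = n + 1  # simple standard choice; L(g^lam mod n^2) = lam is coprime to n
    return (g, n), lam

def L(x, n):
    return (x - 1) // n

def encrypt(pk, m):
    g, n = pk
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:          # r must be a unit mod n
        r = random.randrange(2, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, lam, c):
    g, n = pk
    num = L(pow(c, lam, n * n), n)
    den = L(pow(g, lam, n * n), n)
    return (num * pow(den, -1, n)) % n  # modular inverse via 3-arg pow

pk, sk = keygen(11, 13)                 # toy primes for illustration only
assert decrypt(pk, sk, encrypt(pk, 5)) == 5
# Additive homomorphism: E(m1) * E(m2) mod N^2 decrypts to m1 + m2.
n2 = pk[1] ** 2
assert decrypt(pk, sk, encrypt(pk, 3) * encrypt(pk, 4) % n2) == 7
```

The last assertion illustrates the additive homomorphism that both algorithms in this paper rely on: multiplying ciphertexts adds the underlying plaintexts.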

**Theorem 1.**

#### 2.2. Hash Function

#### 2.3. Coding Rules

**Example 1.**

#### 2.4. Security under the Malicious Model

**Ideal Algorithm:** ${P}_{1}$ and ${P}_{2}$ have private data $x$ and $y$, respectively, and want to jointly compute the function $f(x,y)=({f}_{1}(x,y),{f}_{2}(x,y))$. The computation requires a trusted third party (TTP), and in the end the two parties obtain the results ${f}_{1}(x,y)$ and ${f}_{2}(x,y)$, respectively. The concrete process is as follows:

- (1) ${P}_{1}$ and ${P}_{2}$ send $x$ and $y$ to the TTP, respectively. If ${P}_{i}(i=1,2)$ is honest, the correct data are sent to the TTP. If ${P}_{i}$ is malicious, it may send a false input ${x}^{\prime}$ or ${y}^{\prime}$ derived from its private data, or it may refuse to execute the algorithm. Such behavior only affects the computation result itself; since it cannot be prevented even in the ideal model, it is not considered an attack.
- (2) The TTP receives $x$ and $y$, calculates $f(x,y)$, sends ${f}_{1}(x,y)$ to ${P}_{1}$, and sends ${f}_{2}(x,y)$ to ${P}_{2}$.

- (1) If ${P}_{1}$ is honest, then $$\gamma (x,y,z,r)=({f}_{1}(x,{y}^{\prime}),{B}_{2}(y,z,r,{f}_{2}(x,{y}^{\prime}))).$$
- (2) If ${P}_{2}$ is honest, then $$\gamma (x,y,z,r)=\left\{\begin{array}{ll}({B}_{1}(x,z,r,{f}_{1}({x}^{\prime},y)),\perp ), & \mathrm{if}\ {B}_{1}(x,z,r,{f}_{1}({x}^{\prime},y))=\perp \\ ({B}_{1}(x,z,r,{f}_{1}({x}^{\prime},y)),{f}_{2}({x}^{\prime},y)), & \mathrm{otherwise.}\end{array}\right.$$

**Definition 1.**

## 3. Secure Computation Algorithm for Graph Editing Distance under the Semi-Honest Model

#### 3.1. Specific Algorithm

**Algorithm 1** The MPC algorithm of graph editing distance under the semi-honest model

**Input:** Alice has the graph ${G}_{A}$ and Bob has the graph ${G}_{B}$.
**Output:** The graph editing distance $GED({G}_{A},{G}_{B})$.
**Algorithm start:**
(1) Alice generates the Paillier public key $g$ and private key $\lambda$, and sends $g$ to Bob.
(2) Alice encodes ${G}_{A}$ into the matrix ${M}_{A}$ according to the encoding method, takes out the elements of ${M}_{A}$ diagonally, and arranges them in rows to obtain a one-dimensional array $A=({a}_{1},{a}_{2},\dots ,{a}_{k})$. She then expands $A$ into ${A}^{*}=({a}_{1}^{*},{a}_{2}^{*},\dots ,{a}_{2k}^{*})=({a}_{1},{a}_{2},\dots ,{a}_{k},{\overline{a}}_{1},{\overline{a}}_{2},\dots ,{\overline{a}}_{k})$, encrypts it to obtain $E({A}^{*})$, and sends $E({A}^{*})$ to Bob.
(3) Bob encodes ${G}_{B}$ into the matrix ${M}_{B}$ in the same way to obtain a one-dimensional array $B=({b}_{1},{b}_{2},\dots ,{b}_{k})$, and then expands $B$ into ${B}^{*}=({b}_{1}^{*},{b}_{2}^{*},\dots ,{b}_{2k}^{*})=({\overline{b}}_{1},{\overline{b}}_{2},\dots ,{\overline{b}}_{k},{b}_{1},{b}_{2},\dots ,{b}_{k})$.
(4) For every position where the element ${b}_{i}^{*}$ of ${B}^{*}$ equals 1, Bob selects the ciphertext $E({a}_{i}^{*})$ at the corresponding position of $E({A}^{*})$, computes $C={\prod}_{{b}_{i}^{*}=1}E({a}_{i}^{*})$, and sends $C$ to Alice.
(5) Alice decrypts $C$ to obtain the editing distance $GED({G}_{A},{G}_{B})$ of the graphs ${G}_{A}$ and ${G}_{B}$, and outputs it.
**End.**
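Algorithm 1 can be sketched end to end as follows. This is a toy simulation (small Paillier primes, short bit arrays, names of our own choosing), not a secure implementation.

```python
import math
import random

# Minimal toy Paillier (illustration only; real keys use large primes).
P, Q = 11, 13
N, N2 = P * Q, (P * Q) ** 2
LAM = (P - 1) * (Q - 1) // math.gcd(P - 1, Q - 1)   # lcm(P-1, Q-1)
G = N + 1

def enc(m):
    r = random.choice([x for x in range(2, N) if math.gcd(x, N) == 1])
    return (pow(G, m, N2) * pow(r, N, N2)) % N2

def dec(c):
    l = lambda x: (x - 1) // N
    return (l(pow(c, LAM, N2)) * pow(l(pow(G, LAM, N2)), -1, N)) % N

# Step (2): Alice expands her bit array A into A* = (A, complement of A).
def expand_alice(A):
    return A + [1 - a for a in A]

# Step (3): Bob expands B into B* = (complement of B, B).
def expand_bob(B):
    return [1 - b for b in B] + B

# Toy edge-indicator arrays (k = 4) standing in for the encoded graphs.
A = [1, 0, 1, 1]
B = [1, 1, 0, 1]

enc_A_star = [enc(a) for a in expand_alice(A)]      # Alice -> Bob

# Step (4): Bob multiplies the ciphertexts at positions where b_i* = 1;
# by additive homomorphism this encrypts the sum of the selected a_i*.
C = 1
for ci, b in zip(enc_A_star, expand_bob(B)):
    if b == 1:
        C = (C * ci) % N2

# Step (5): Alice decrypts C; the result is the Hamming distance of A and
# B, i.e. the graph editing distance under this encoding.
ged = dec(C)
assert ged == sum(a != b for a, b in zip(A, B))     # == 2 here
```

Note that Bob never sees Alice's bits (they travel encrypted), and Alice only learns the final count, not which positions Bob selected.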

#### 3.2. Correctness Analysis

- (1) Alice expands $A$ into ${A}^{*}=({a}_{1}^{*},{a}_{2}^{*},\dots ,{a}_{2k}^{*})=({a}_{1},{a}_{2},\dots ,{a}_{k},{\overline{a}}_{1},{\overline{a}}_{2},\dots ,{\overline{a}}_{k})$, where ${\overline{a}}_{i}={a}_{i}\oplus 1$ is the complement of ${a}_{i}$ obtained by the XOR operation; similarly, Bob generates ${B}^{*}=({b}_{1}^{*},{b}_{2}^{*},\dots ,{b}_{2k}^{*})=({\overline{b}}_{1},{\overline{b}}_{2},\dots ,{\overline{b}}_{k},{b}_{1},{b}_{2},\dots ,{b}_{k})$.
- (2) Bob computes $C={\prod}_{{b}_{i}^{*}=1}E({a}_{i}^{*})$, which by the additive homomorphism encrypts ${\sum}_{{b}_{i}^{*}=1}{a}_{i}^{*}$, and sends it to Alice. Alice decrypts it to obtain the number of elements with value ‘1’ among the elements selected by Bob, i.e., the editing distance of ${G}_{A}$ and ${G}_{B}$.
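The counting argument above can be checked in plaintext (our illustration; array values are made up): selecting ${a}_{i}$ where ${b}_{i}=0$ and $1-{a}_{i}$ where ${b}_{i}=1$ counts exactly the positions where ${a}_{i}\ne {b}_{i}$.

```python
def ged_via_expansion(A, B):
    """Plaintext view of the XOR trick: sum of a_i* over positions
    where b_i* = 1 equals the Hamming distance of A and B."""
    A_star = A + [1 - a for a in A]      # Alice's expansion (A, complement)
    B_star = [1 - b for b in B] + B      # Bob's expansion (complement, B)
    return sum(a for a, b in zip(A_star, B_star) if b == 1)

A = [0, 1, 1, 0, 1]
B = [1, 1, 0, 0, 0]
assert ged_via_expansion(A, B) == sum(a != b for a, b in zip(A, B))  # == 3
```

In the protocol this same sum is evaluated homomorphically over ciphertexts, so neither party sees the other's array.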

## 4. Secure Computation Algorithm for Graph Editing Distance under the Malicious Model

- (1) In Algorithm 1, Alice holds both $g$ and $\lambda$ while Bob only holds $g$, and the final result is decrypted unilaterally by Alice, which is unfair; Alice may also tell Bob a wrong result. The countermeasure is to let both parties decrypt.
- (2) In step (4) of Algorithm 1, Bob may send a false ciphertext to Alice; the solution is to use a hash function to detect this.
- (3) In step (5) of Algorithm 1, Alice may tell Bob a wrong result after decryption, causing him to draw a wrong conclusion. The solution is to give both parties equal status: each generates its own public and private keys, Alice and Bob decrypt the computation results separately to obtain the editing distance of the graphs, and finally both parties verify that they computed the same correct result.

#### 4.1. Specific Algorithm

- (1) ${E}_{p{k}_{A}}({A}^{*})$ ($P{K}_{A}$, Bob): Alice encodes ${G}_{A}$ as ${M}_{A}$, takes the elements below the diagonal and arranges them in rows to obtain a one-dimensional array $A=({a}_{1},{a}_{2},\dots ,{a}_{k})$, expands it to ${A}^{*}=({a}_{1}^{*},{a}_{2}^{*},\dots ,{a}_{2k}^{*})=({a}_{1},{a}_{2},\dots ,{a}_{k},{\overline{a}}_{1},{\overline{a}}_{2},\dots ,{\overline{a}}_{k})$ according to the XOR operation, and sends its encryption to Bob.
- (2) ${E}_{p{k}_{B}}({B}^{*})$ ($P{K}_{B}$, Alice): Bob encodes ${G}_{B}$ as ${M}_{B}$, takes the elements below the diagonal and arranges them in rows to obtain a one-dimensional array $B=({b}_{1},{b}_{2},\dots ,{b}_{k})$, expands it to ${B}^{*}=({b}_{1}^{*},{b}_{2}^{*},\dots ,{b}_{2k}^{*})=({\overline{b}}_{1},{\overline{b}}_{2},\dots ,{\overline{b}}_{k},{b}_{1},{b}_{2},\dots ,{b}_{k})$ according to the XOR operation, and sends its encryption to Alice.
- (3) ${D}_{s{k}_{A}}({C}_{B})$ ($s{k}_{A}$, Bob): Alice decrypts ${C}_{B}$ to obtain ${C}_{b}$ and sends ${C}_{b}$ to Bob for verification.
- (4) ${D}_{s{k}_{B}}({C}_{A})$ ($s{k}_{B}$, Alice): Bob decrypts ${C}_{A}$ to obtain ${C}_{a}$ and sends ${C}_{a}$ to Alice for verification.

#### 4.2. Correctness Analysis

- (1) In step (2), Alice and Bob use their own public keys to encrypt ${A}^{*},{B}^{*}$ item by item to obtain ${E}_{p{k}_{A}}({A}^{*})$ and ${E}_{p{k}_{B}}({B}^{*})$, respectively, and then publish the ciphertexts to each other. This is secure because neither party holds the other's private key.
- (2) In step (5), Alice and Bob decrypt ${C}_{B}$ and ${C}_{A}$ using their own private keys ${\lambda}_{a}$ and ${\lambda}_{b}$, respectively, and send the results ${C}_{b}$ and ${C}_{a}$ to each other.
- (3) After each side receives the message from the other, Alice computes ${G}_{A}={C}_{a}/{r}_{a}$ and sends it to Bob, and Bob computes ${G}_{B}={C}_{b}/{r}_{b}$ and sends it to Alice; both parties thus obtain their respective unblinded values, i.e., ${\prod}_{{a}_{i}^{*}=1}{E}_{p{k}_{B}}({b}_{i}^{*})$ and ${\prod}_{{b}_{i}^{*}=1}{E}_{p{k}_{A}}({a}_{i}^{*})$.
- (4) In step (7), Alice verifies the equation $Hash({C}_{b}/GE{D}_{B})\stackrel{?}{=}{H}_{B}$; if it holds, she outputs the editing distance $GE{D}_{B}({G}_{A},{G}_{B})={G}_{B}$. In step (8), Bob verifies the equation $Hash({C}_{a}/GE{D}_{A})\stackrel{?}{=}{H}_{A}$; if it holds, he outputs the editing distance $GE{D}_{A}({G}_{A},{G}_{B})={G}_{A}$. If $GE{D}_{A}({G}_{A},{G}_{B})=GE{D}_{B}({G}_{A},{G}_{B})$, the result is proven correct.
- (5) No private information is revealed throughout the process, and both parties obtain their respective results, avoiding the unfairness of one party telling the other the result.

#### 4.3. Security Analysis

- (1) Alice expands her matrix ${M}_{A}$ into the one-dimensional array ${A}^{*}$ and may fill in incorrect values for the elements of ${A}^{*}$. This is the case of a participant changing its own input, which is not considered because it cannot be avoided even in the ideal algorithm.
- (2) During the message passing of step (6), the value ${C}_{B}$ that Bob sends is blinded by Bob's random number ${r}_{b}$, so Alice cannot obtain any information about Bob's data from it.
- (3) In step (7), Alice must prove via the hash function that ${G}_{A}={C}_{a}/{r}_{a}$ is correct, so she cannot cheat in this step; after ${G}_{A}$ is announced, Bob can compute $GE{D}_{A}({G}_{A},{G}_{B})={G}_{A}$.
- (4) Thus, all steps of the algorithm are secure; we further prove that the algorithm is secure using the real/ideal model paradigm.

#### 4.4. Security Proof

**Theorem 2.**

**Algorithm 2** The MPC algorithm of graph editing distance under the malicious model

**Input:** Alice has the graph ${G}_{A}$ and Bob has the graph ${G}_{B}$.
**Output:** The graph editing distance $GED({G}_{A},{G}_{B})$.
**Preparation:** Alice and Bob generate their own public keys $({g}_{a},{N}_{a})$, $({g}_{b},{N}_{b})$ and private keys ${\lambda}_{a}$, ${\lambda}_{b}$, respectively, and exchange $({g}_{a},{N}_{a})$ and $({g}_{b},{N}_{b})$.
**Algorithm start:**
(1) Alice encodes ${G}_{A}$ into a matrix ${M}_{A}$ according to the encoding method. She takes the elements of ${M}_{A}$ on and below the diagonal and arranges them in order to obtain a one-dimensional array $A=({a}_{1},{a}_{2},\dots ,{a}_{k})$, then expands $A$ into ${A}^{*}=({a}_{1}^{*},{a}_{2}^{*},\dots ,{a}_{2k}^{*})=({a}_{1},{a}_{2},\dots ,{a}_{k},{\overline{a}}_{1},{\overline{a}}_{2},\dots ,{\overline{a}}_{k})$. Bob operates on ${G}_{B}$ in the same way to obtain the array ${B}^{*}=({b}_{1}^{*},{b}_{2}^{*},\dots ,{b}_{2k}^{*})=({\overline{b}}_{1},{\overline{b}}_{2},\dots ,{\overline{b}}_{k},{b}_{1},{b}_{2},\dots ,{b}_{k})$.
(2) Alice and Bob encrypt ${A}^{*}$, ${B}^{*}$ item by item with their respective public keys. Alice gets ${E}_{p{k}_{A}}({A}^{*})=[{E}_{p{k}_{A}}({a}_{1}),\dots ,{E}_{p{k}_{A}}({a}_{k}),{E}_{p{k}_{A}}({\overline{a}}_{1}),\dots ,{E}_{p{k}_{A}}({\overline{a}}_{k})]$ and Bob gets ${E}_{p{k}_{B}}({B}^{*})=[{E}_{p{k}_{B}}({\overline{b}}_{1}),\dots ,{E}_{p{k}_{B}}({\overline{b}}_{k}),{E}_{p{k}_{B}}({b}_{1}),\dots ,{E}_{p{k}_{B}}({b}_{k})]$. Alice and Bob publish ${E}_{p{k}_{A}}({A}^{*})$ and ${E}_{p{k}_{B}}({B}^{*})$ to each other.
(3) Alice selects a random number ${r}_{a}$, computes ${H}_{A}=Hash({r}_{a})$, and, for each position where ${a}_{i}^{*}=1$ ($i\in \left[1,2k\right]$) in ${A}^{*}$, selects the ciphertext ${E}_{p{k}_{B}}({b}_{i}^{*})$ at the corresponding position of ${E}_{p{k}_{B}}({B}^{*})$ to compute ${C}_{A}=\left[{\prod}_{{a}_{i}^{*}=1}{E}_{p{k}_{B}}({b}_{i}^{*})\right]{r}_{a}$. Alice sends ${H}_{A}$ and ${C}_{A}$ to Bob.
(4) Bob selects a random number ${r}_{b}$, computes ${H}_{B}=Hash({r}_{b})$, and, for each position where ${b}_{i}^{*}=1$ ($i\in \left[1,2k\right]$) in ${B}^{*}$, selects the ciphertext ${E}_{p{k}_{A}}({a}_{i}^{*})$ at the corresponding position of ${E}_{p{k}_{A}}({A}^{*})$ to compute ${C}_{B}=\left[{\prod}_{{b}_{i}^{*}=1}{E}_{p{k}_{A}}({a}_{i}^{*})\right]{r}_{b}$. Bob sends ${H}_{B}$ and ${C}_{B}$ to Alice.
(5) Alice decrypts ${C}_{B}$ with her private key to get ${C}_{b}={D}_{s{k}_{A}}({C}_{B})$. Bob decrypts ${C}_{A}$ with his private key to get ${C}_{a}={D}_{s{k}_{B}}({C}_{A})$. Alice and Bob send ${C}_{b}$ and ${C}_{a}$ to each other.
(6) Alice computes ${G}_{A}={C}_{a}/{r}_{a}$ and sends it to Bob. Bob computes ${G}_{B}={C}_{b}/{r}_{b}$ and sends it to Alice.
(7) Alice verifies $Hash({C}_{b}/GE{D}_{B})\stackrel{?}{=}{H}_{B}$. If the equation holds, Bob is not cheating, and Alice obtains and outputs the editing distance $GE{D}_{B}({G}_{A},{G}_{B})={G}_{B}$ of the two graphs; otherwise the algorithm is terminated.
(8) Bob verifies $Hash({C}_{a}/GE{D}_{A})\stackrel{?}{=}{H}_{A}$. If the equation holds, Alice did not cheat, and Bob obtains and outputs the editing distance $GE{D}_{A}({G}_{A},{G}_{B})={G}_{A}$ of the two graphs; otherwise the algorithm is terminated.
(9) If $GE{D}_{A}({G}_{A},{G}_{B})=GE{D}_{B}({G}_{A},{G}_{B})$, the result is correct; otherwise the result is wrong and is not accepted.
**End.**
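The blind-commit-verify flow of steps (3)–(9) can be modeled at the plaintext level. This is a toy sketch, assuming (as the verification equations in steps (6)–(8) imply) that decrypting a blinded product yields $GED\cdot r$; all numbers and helper names are ours, and integer division stands in for exact division.

```python
import hashlib

def H(x):
    """Hash commitment over an integer (toy encoding via str)."""
    return hashlib.sha256(str(x).encode()).hexdigest()

ged_true = 2                       # the value both parties should compute

# Steps (3)-(4): each party picks a random blinder and commits to it.
r_a, r_b = 101, 577
H_A, H_B = H(r_a), H(r_b)

# Step (5): each party decrypts the other's blinded product.
C_a = ged_true * r_a               # what Bob returns to Alice
C_b = ged_true * r_b               # what Alice returns to Bob

# Step (6): each party unblinds with its own r and announces the result.
GED_A = C_a // r_a                 # Alice's result
GED_B = C_b // r_b                 # Bob's result

# Steps (7)-(8): verify the announced result against the commitment.
assert H(C_b // GED_B) == H_B      # Alice checks Bob did not cheat
assert H(C_a // GED_A) == H_A      # Bob checks Alice did not cheat

# Step (9): cross-check the two outputs.
assert GED_A == GED_B == ged_true

# A party announcing a false result fails the hash check:
assert H(C_b // (GED_B + 1)) != H_B
```

The commitments $H_A$, $H_B$ are what force honesty: a tampered result no longer divides the decrypted value back to the committed blinder.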

**Proof.**

**Case 1.**

**Case 2.**

- (a) ${A}_{1}$ does not disclose the result or ignores the TTP (regarded as ${A}_{1}$ aborting the algorithm), and the TTP sends $\perp$ to ${A}_{2}$. Then $$REA{L}_{\overline{A}}({C}_{b},{C}_{a})=\{{A}_{1}({C}_{b}^{\prime},{C}_{a}^{\prime}),GED\left({G}_{A},{G}_{B}\right),S,\perp \}.$$
- (b) Otherwise, the TTP sends $F({A}_{1}({C}_{b}),{C}_{a})$ to ${A}_{2}$, and $$REA{L}_{\overline{A}}({C}_{b},{C}_{a})=\{{A}_{1}({C}_{b}^{\prime},{C}_{a}^{\prime}),GED\left({G}_{A},{G}_{B}\right),S,F({A}_{1}({C}_{b}),{C}_{a})\}.$$

- (a) Under the ideal model, when ${B}_{1}$ informs the TTP not to send the result to ${B}_{2}$, we obtain $$IDEA{L}_{\overline{B}}({C}_{b},{C}_{a})=\{{A}_{1}({C}_{b}^{\prime},{C}_{a}^{\prime}),GED\left({G}_{A},{G}_{B}\right),{S}^{\prime},\perp \}.$$
- (b) Otherwise, $$IDEA{L}_{\overline{B}}({C}_{b},{C}_{a})=\{{A}_{1}({C}_{b}^{\prime},{C}_{a}^{\prime}),GED\left({G}_{A},{G}_{B}\right),{S}^{\prime},F({A}_{1}({C}_{b}),{C}_{a})\}.$$

## 5. Performance Analysis

#### 5.1. Computational Complexity

#### 5.2. Communication Complexity

#### 5.3. Experimental Simulation

#### 5.4. Applications

- (1) In bioinformatics, gene structure can be represented as a graph, and the similarity between two genes can be measured by the graph editing distance. In practice, most genetic data are private, so such detection and query problems must be solved without compromising privacy, which requires secure computation of graph editing distances. For example, in similarity studies concerning disease, crime, drugs, and social relations, the data involved are highly private. If a suspect's DNA is highly similar to the DNA structure extracted from evidence left at a crime scene, the suspect may be directly related to the crime, so the similarity of the DNA graph structures must be computed confidentially. This problem can be abstracted as the secure computation of the graph editing distance and solved with the algorithm in this paper.
- (2) In artificial intelligence, computer vision simulates biological vision using computers and related equipment. Its main task is to obtain the 3D information of a scene by processing collected pictures or videos. With computer vision, a search can be performed from a picture, and similar or identical pictures can be found quickly. For example, given a game with a terrain map, computer vision can find similarities between the virtual game and reality.

## 6. Summary and Outlook

## Author Contributions

## Funding

## Data Availability Statement

## Conflicts of Interest

## Abbreviations

| Symbol | Description |
|---|---|
| $N$ | $N=pq$, where $p$ and $q$ are large primes |
| $m$ | Plaintext |
| $c$ | Ciphertext |
| $({g}_{a},{N}_{a})$ | The public key of Alice's Paillier encryption system |
| $({g}_{b},{N}_{b})$ | The public key of Bob's Paillier encryption system |
| ${\lambda}_{a}$ | The private key of Alice's Paillier encryption system |
| ${\lambda}_{b}$ | The private key of Bob's Paillier encryption system |
| $E(\cdot)$ | The process of encrypting plaintext into ciphertext |
| $D(\cdot)$ | The process of decrypting ciphertext into plaintext |
| ${r}_{i}$ | Random numbers |
| ${E}_{p{k}_{A}}$ | Encryption with Alice's public key |
| ${E}_{p{k}_{B}}$ | Encryption with Bob's public key |
| $IDEA{L}_{\overline{B}}({C}_{b},{C}_{a})$ | The computation result on ${C}_{b}$ and ${C}_{a}$ in the ideal model |
| $REA{L}_{\overline{A}}({C}_{b},{C}_{a})$ | The computation result on ${C}_{b}$ and ${C}_{a}$ in the real model |
| $F(\cdot)$ | Function computation result |

## References

1. Zhao, C.; Zhao, S.N.; Zhao, M.H.; Chen, Z.X.; Gao, C.Z.; Li, H.W.; Tan, Y.A. Secure multi-party computation: Theory, practice and applications. Inf. Sci. **2019**, 476, 357–372.
2. Knott, B.; Venkataraman, S.; Hannun, A.; Sengupta, S.; Ibrahim, M. Crypten: Secure multi-party computation meets machine learning. Adv. Neural Inf. Process. Syst. **2021**, 34, 4961–4973.
3. Volgushev, N.; Schwarzkopf, M.; Getchell, B.; Varia, M.; Lapets, A.; Bestavros, A. Conclave: Secure multi-party computation on big data. In Proceedings of the Fourteenth EuroSys Conference, Dresden, Germany, 25–28 March 2019.
4. Feng, Q.; He, D.B.; Zeadally, S.; Khan, M.K.; Kumar, N. A survey on privacy protection in blockchain system. J. Netw. Comput. Appl. **2019**, 126, 45–58.
5. Pang, H.P.; Wang, B.C. Privacy-preserving association rule mining using homomorphic encryption in a multikey environment. IEEE Syst. J. **2020**, 15, 3131–3141.
6. Dong, C.Y.; Loukide, G. Approximating private set union/intersection cardinality with logarithmic complexity. IEEE Trans. Inf. Forensics Secur. **2017**, 12, 2792–2806.
7. Goldreich, O. Secure multi-party computation. Manuscript. Prelim. Version **1998**, 78, 110.
8. Prathik, A.; Uma, K.; Anuradha, J. An Overview of application of Graph theory. Int. J. ChemTech Res. **2016**, 9, 242–248.
9. Dou, J.W.; Liu, X.H.; Zhou, S.F.; Li, S.D. Efficient pooled secure multi-party computing protocols and applications. Chin. J. Comput. **2018**, 41, 1844–1860.
10. Wei, Q.; Li, S.D.; Wang, W.L.; Du, R.M. Safe multiparty computation of graph intersections and mergers. J. Cryptologic Res. **2020**, 7, 774–788.
11. He, J.; Erfani, S.; Ma, X.; Bailey, J.; Chi, Y.; Hua, X.S. α-IoU: A Family of Power Intersection over Union Losses for Bounding Box Regression. Adv. Neural Inf. Process. Syst. **2021**, 34, 20230–20242.
12. Zhao, X.L.; Jia, Z.L.; Li, S.D. Safe computation of set intersection problems. J. Cryptologic Res. **2022**, 9, 294–307.
13. Tang, C.M.; Lin, X.H. Privacy Protection Set Intersection Computing protocol. Netinfo Secur. **2020**, 20, 9–15.
14. Gao, A.; Liang, Y.; Xie, X.J.; Wang, Z.S.; Li, J.T. Social network information dissemination methods that support privacy protection. J. Front. Comput. Sci. Technol. **2021**, 15, 233–248.
15. Wang, S.L.; Zheng, Y.F.; Jia, X.H.; Wang, C. OblivGM: Oblivious Attributed Subgraph Matching as a Cloud Service. IEEE Trans. Inf. Forensics Secur. **2022**, 17, 3582–3596.
16. Zuo, X.J.; Li, L.X.; Peng, H.P.; Luo, S.S.; Yang, Y.X. Privacy-preserving subgraph matching scheme with authentication in social networks. IEEE Trans. Cloud Comput. **2020**, 10, 2038–2049.
17. Sharmila, G.; Devi, M.K. BTLA-LSDG: Blockchain-Based Triune Layered Architecture for Authenticated Subgraph Query Search in Large-Scale Dynamic Graphs. IETE J. Res. **2023**, 1–24.
18. Xu, C.; Chen, Q.; Hu, H.B.; Hei, X.J. Authenticating aggregate queries over set-valued data with confidentiality. IEEE Trans. Knowl. Data Eng. **2017**, 30, 630–644.
19. Bringmann, K.; Gawrychowski, P.; Mozes, S.; Weimann, O. Tree edit distance cannot be computed in strongly subcubic time (unless APSP can). TALG **2020**, 16, 1–22.
20. Garcia-Hernandez, C.; Fernandez, A.; Serratosa, F. Ligand-based virtual screening using graph edit distance as molecular similarity measure. J. Chem. Inf. Model. **2019**, 59, 1410–1421.
21. Li, S.D.; Yang, X.L.; Zuo, X.J.; Zhou, S.F.; Kang, J.; Liu, X. Graphical similarity determination for the protection of private information. Acta Electron. Sin. **2017**, 45, 2184–2189.
22. Blumenthal, D.B.; Gamper, J. On the exact computation of the graph edit distance. Pattern Recogn. Lett. **2020**, 134, 46–57.
23. Yuan, Y.; Lian, X.; Wang, G.R.; Ma, Y.L.; Wang, Y.S. Constrained shortest path query in a large time-dependent graph. Proc. Vldb Endow. **2019**, 12, 1058–1070.
24. Dey, R.; Balabantaray, R.C.; Mohanty, S.H. Sliding window based off-line handwritten text recognition using edit distance. Multimed. Tools Appl. **2022**, 81, 22761–22788.
25. Ma, J.C.; Zheng, H.B.; Zhao, J.H.; Chen, X.; Zhai, J.Q.; Zhang, C.H. An islanding detection and prevention method based on path query of distribution network topology graph. IEEE Trans. Sustain. Energy **2021**, 13, 81–90.
26. Ghosh, E.; Kamara, S.; Tamassia, R. Efficient graph encryption scheme for shortest path queries. In Proceedings of the 2021 ACM Asia Conference on Computer and Communications Security, Hong Kong, China, 7–11 June 2021.
27. Zhang, M.W.; Chen, Y.; Susilo, W. PPO-CPQ: A privacy-preserving optimization of clinical pathway query for e-healthcare systems. IEEE Internet Things **2020**, 7, 10660–10672.
28. Zhou, J.; Qin, X.; Ding, Y.; Ma, H. Spatial–Temporal Dynamic Graph Differential Equation Network for Traffic Flow Forecasting. Mathematics **2023**, 11, 2867.
29. Fang, W.T.; Mohsen, Z.; Chen, Z.Y. Secure and privacy preserving consensus for second-order systems based on paillier encryption. Syst. Control Lett. **2021**, 148, 104869.
30. Sobti, R.; Geetha, G. Cryptographic hash functions: A review. IJCSI **2012**, 9, 461.
31. Kociumaka, T.; Pissis, S.P.; Radoszewski, J. Pattern matching and consensus problems on weighted sequences and profiles. Theor. Comput. Syst. **2019**, 63, 506–542.

| Algorithm | Computational Complexity | Communication Complexity | Anti-Malicious Adversaries |
|---|---|---|---|
| Reference [15] | ${n}^{3}$ modular exponentiations | 6 | × |
| Reference [16] | $6h$ | 5 | × |
| Algorithm 1 | $4k+3$ modular exponentiations | 1 | × |
| Algorithm 2 | $8k$ modular exponentiations $+\ 4h$ | 3 | √ |


© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Liu, X.; Kong, J.; Peng, L.; Luo, D.; Xu, G.; Chen, X.; Liu, X.
A Secure Multi-Party Computation Protocol for Graph Editing Distance against Malicious Attacks. *Mathematics* **2023**, *11*, 4847.
https://doi.org/10.3390/math11234847
