Molecular docking excels at generating a plethora of candidate models of protein–protein complexes. Correctly distinguishing the favorable, native-like models from the rest, however, remains a challenge.

Here we assessed whether a protocol based on molecular dynamics (MD) simulations could distinguish native from non-native models, complementing the scoring functions used in docking. To this end, models for 25 protein–protein complexes were first generated using HADDOCK.

Next, MD simulations complemented with machine learning were used to discriminate between native and non-native complexes, based on a combination of metrics reporting on the stability of the initial models. Native models showed higher stability in almost all measured properties, including the key ones used for scoring in the Critical Assessment of PRedicted Interactions (CAPRI) competition, namely the positional root mean square deviations and the fraction of native contacts retained from the initial docked model.
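To illustrate one of these stability metrics, the sketch below computes a fraction-of-native-contacts measure from raw coordinates with NumPy. This is a simplified, hypothetical implementation (atom-level contacts at a 5 Å cutoff on toy coordinates), not the exact definition or code used in the paper:

```python
import numpy as np

def contact_pairs(rec, lig, cutoff=5.0):
    """Return the set of (i, j) receptor-ligand atom pairs within cutoff (Angstrom)."""
    # Pairwise receptor-ligand distance matrix of shape (N_rec, N_lig)
    d = np.linalg.norm(rec[:, None, :] - lig[None, :, :], axis=-1)
    i, j = np.nonzero(d < cutoff)
    return set(zip(i.tolist(), j.tolist()))

def fraction_native_contacts(rec0, lig0, rec_t, lig_t, cutoff=5.0):
    """Fraction of the initial (t=0) interface contacts still present at time t."""
    native = contact_pairs(rec0, lig0, cutoff)
    if not native:
        return 0.0
    current = contact_pairs(rec_t, lig_t, cutoff)
    return len(native & current) / len(native)

# Toy example: two 3-atom "molecules"; one ligand atom drifts away over time.
rec = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0]])
lig0 = np.array([[0.0, 3, 0], [1, 3, 0], [2, 3, 0]])   # all atoms near the interface
lig_t = np.array([[0.0, 3, 0], [1, 3, 0], [2, 9, 0]])  # last atom has moved away
print(fraction_native_contacts(rec, lig0, rec, lig_t))
```

A stable (native-like) model keeps this fraction close to 1 over the trajectory, while non-native interfaces tend to lose contacts.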

A random forest classifier was trained, reaching an accuracy of 0.85 in distinguishing native from non-native complexes. Reasonably modest simulation lengths on the order of 50–100 ns are sufficient to reach this accuracy, which makes this approach applicable in practice.
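The classification step can be sketched with scikit-learn as below. The data here are synthetic and the two features (mean interface RMSD and retained native-contact fraction) are illustrative assumptions; the paper's actual feature set, dataset, and hyperparameters are not reproduced:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200  # docking models, half of them native-like

# Illustrative per-model stability features: native-like models are assumed
# more stable (lower mean RMSD over the trajectory, more contacts retained).
rmsd = np.concatenate([rng.normal(1.5, 0.5, n // 2),    # native-like
                       rng.normal(4.0, 1.0, n // 2)])   # non-native
fnat = np.concatenate([rng.normal(0.85, 0.05, n // 2),
                       rng.normal(0.45, 0.15, n // 2)])
X = np.column_stack([rmsd, fnat])
y = np.array([1] * (n // 2) + [0] * (n // 2))  # 1 = native-like

# Random forest with 5-fold cross-validated accuracy
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

On well-separated synthetic features like these the accuracy is high by construction; the 0.85 reported in the paper reflects real, noisier MD-derived metrics.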



Zuzana Jandova, Attilio Vittorio Vargiu, Alexandre M. J. J. Bonvin (2021):
Native or Non-Native Protein–Protein Docking Models? Molecular Dynamics to the Rescue.
Journal of Chemical Theory and Computation 17(9)

About the author

Stian works in the School of Computer Science at the University of Manchester, in Carole Goble's eScience Lab, as a technical software architect and researcher. In addition to BioExcel, Stian's involvements include Open PHACTS (pharmacological data warehouse), Common Workflow Language (CWL), Apache Taverna (scientific workflow system), Linked Data and identifiers, research objects (open science) and digital preservation, myExperiment (sharing scientific workflows), provenance (where did things come from and who did it) and annotations (who said what).