Background:
On Apr 8, 2013, at 10:38 AM, Jan Jensen wrote:
Dear Dr xxx
You also mention "strong concerns about the methodology". Since all of Reviewer 2's comments are aimed at "significance" (and miss the point of the paper), I assume this refers to Reviewer 1's points. Points 1, 2, and 4 reflect a complete ignorance of the current field of computational enzymology, which I am happy to elaborate on. Point 3 is ridiculous, as we present predictions for close to 400 mutants, so the method is demonstrably high-throughput.
The strongest objection of both reviewers is a demand for further proof, e.g. more experimental data. We describe a theoretical method that offers experimentally testable predictions. Since purely theoretical papers are also appropriate for PLoS ONE (such as our previous PLoS ONE paper on this method, http://dx.doi.org/10.1371/journal.pone.0049849), the only goal of additional experiments must be to establish the significance or impact of the method.
In conclusion, I firmly believe our paper meets all stated criteria for publication in PLoS ONE, and your stated reasons for rejection (echoing those of the reviewers) include a criterion (impact) that is not a review criterion for PLoS ONE. I would therefore like you to reconsider your decision and perhaps consult other editors, keeping in mind the extremely positive comments of reviewer 3.
Best regards,
Jan Jensen
----
On Apr 11, 2013, at 3:24 PM, plosone wrote:
Dear Dr Jensen
Thank you for your email.
I am writing to inquire whether you would be interested in formally appealing the original decision rendered through PLOS ONE regarding the manuscript PONE-D-13-07851R1. While I cannot guarantee that your appeal will be approved by our in-house editors, they will consider appeals via the formal appeals process when you submit a detailed rebuttal letter.
Appeal requests should be made in writing, not by telephone, and should be addressed to plosone@plos.org with the word "appeal" in the subject line. Authors should provide detailed reasons for the appeal and point-by-point responses to the reviewers' and/or Academic Editor's comments. Decisions on appeals are final without exception.
If you have any further questions or concerns, please do not hesitate to contact us.
With kind regards
xxx
Staff EO
PLOS ONE
---
On Apr 17, 2013, at 2:35 PM, Jan Jensen wrote:
This is an appeal request for the decision to reject manuscript PONE-D-13-07851R1. The reason for the appeal is that the primary reason for rejection is the perceived impact of the study, which is not a publication criterion for PLoS ONE (http://www.plosone.org/static/publication). What follows is a point-by-point response to the points raised by the editor and reviewers. I note that there was also a third, very positive, review of the manuscript in the previous round of reviews.
The editor:
** “The Reviewers have considered your responses and revisions not convincing. They raised again strong concerns on the methodology and on the overall significance of the conclusions.”
Our response: “Significance” is not a publication criterion for PLoS ONE. Concerns regarding methodology are addressed in the response to Reviewer 1 below.
Reviewer #2:
** “As the authors claim in their answer, they compare to experiment, the gold standard in science. This is missing for all presented mutants. Unfortunately, no further experimental or computational characterization of the selected mutants was carried out; therefore the study remains inconclusive and incomplete. As is also the case in experimental screening methods, a rescreening of interesting hits is mandatory in any case.”
Our response: This paper offers a computational method for generating hundreds of experimentally testable predictions. PLoS ONE accepts papers in all areas of science, including purely computational studies such as our previous PLoS ONE paper: DOI: 10.1371/journal.pone.0049849. Thus, the absence of any experimental data should not in itself preclude publication in PLoS ONE. However, we do offer some experimental verification, which is in reasonable agreement with our computational results.
We hope that future experimental studies test our predictions. However, even if we are proven wrong, this would not alter the fact that our current conclusions are supported by the current data: (1) Barriers of hundreds of mutants are estimated. (2) There is general qualitative agreement with available experimental data, the best one can expect given the many approximations we make and duly note. (3) We offer experimentally testable predictions for other mutants.
Whether future experimental studies verify these predictions or not will determine the impact of our method, but this is not a criterion for publication in PLoS ONE.
** “Regarding my specific questions, none of them was sufficiently answered and no changes were applied to the manuscript.
E.g. my simple question was why certain active site residues were not considered in the chosen set. The answer, that the criteria are already given in the text, is complete nonsense, because all residues questioned by me fulfill exactly the authors' diffuse criteria, yet were not selected. This is highly disappointing and not scientifically sound, because from the given criteria one is not able to reproduce the expert choice of residues performed by the authors. Especially for the protonation of P38H I expected a more competent answer from the group of Prof. Jensen instead of no answer at all.”
Our response: We test the qualitative agreement between our computed data and experiment for some mutants. These mutants are not selected based on our computational method and could just as well have come from already published results or from randomly chosen mutants. We clearly state that these mutants were picked using heuristic criteria, as is common in rational enzyme design. We never claim that the selection of single mutants is automated, only that double, triple, and quadruple mutants made from this initial selection can be efficiently screened to offer suggestions for promising mutants.
When applied to a new system, single mutants must again be selected heuristically. However, as this is currently how most experimental rational design of enzymatic activity is done, this is hardly a major limitation. Clearly, a reliable automated selection of mutants would increase the impact of the study, but impact is not a criterion for publication in PLoS ONE.
** “According to the quantitative interpretation of the computed results, the authors claim that the intent of the method is not a quantitative ranking. Nevertheless they still give a discrete energy cutoff in the paper, suggesting a quantitatively meaningful barrier to the reader.
If the goal is just to identify N interesting mutants from a larger subset, this should be clarified in the manuscript and not only in the answer to the editor.”
Our response: As we clearly state in the paper (emphasis added): “We note that defining the cutoff is done purely for a post hoc comparison of experimental and computed data. When using the computed barriers to identify promising experimental mutants, one simply chooses the N mutants with the lowest barriers, where N is the number of mutants affordable to do experimentally (e.g. 20 in the discussion of set L).”
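To make the quoted selection rule concrete, here is a toy sketch in Python; the mutant labels and barrier values are invented for illustration and do not come from the paper:

```python
# Rank mutants by computed barrier (invented values, kcal/mol) and keep
# the N candidates with the lowest barriers, per the quoted passage.
barriers = {"A281S": 14.1, "I189K": 15.2, "V190S": 18.9, "L278P": 22.7}
N = 2  # number of mutants affordable to test experimentally
promising = sorted(barriers, key=barriers.get)[:N]
print(promising)  # -> ['A281S', 'I189K']
```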
** “I can see no attempt for a scientific discussion about the accuracy and the aim of the method compared with current state of the art methods to predict enzyme activity and conformational space of protein mutants. Therefore the scientific perspective and evaluation of the scientific contribution with regard to existing methods is completely missing. This is in my view not acceptable for a scientific publication, even from an industrial perspective.”
Our response: As we clearly state in the manuscript: “The computational method used to estimate the reaction barriers of the CalB mutants has been described in detail earlier [1] and is only summarized here. As described previously [1], in order to make the method computationally feasible, relatively approximate treatments of the wave function, structural model, dynamics and reaction path are used. Given this and the automated setup of calculations, some inaccurate results will be unavoidable. However, the intent of the method is similar to experimental high throughput screens of enzyme activity where, for example, negative results may result from issues unrelated to intrinsic activity of the enzyme such as imperfections in the activity assay, low expression yield, protein aggregation, etc. Just like its experimental counterpart our technique is intended to identify potentially interesting mutants for further study.”
The claim that there is “no attempt for a scientific discussion about the accuracy and the aim of the method” is clearly false.
Furthermore, “scientific perspective and evaluation of the scientific contribution” is not a publication criterion for PLoS ONE.
Reviewer #1: This paper is based on unsound methodology.
** “1. As any textbook will show, the transition state is a saddle point and should therefore have only one negative eigenvalue. Calculation of the eigenvalues is therefore a standard and common practice to prove that the transition state was indeed obtained. The authors' claim in the rebuttal that a vibrational analysis would not be valid does not make any sense; without eigenvalues it cannot be proven that a transition state was obtained. Such a proof is absolutely necessary to show that the method works (especially given my other concerns, see below).”
Our response: Adiabatic mapping, which we employ in our study, is the most common way to estimate barriers in QM/MM studies of enzymatic reaction mechanisms. The resulting barriers tend to be in good agreement with experiment, which indicates that this is a reasonable approximation (see for example DOI: 10.1146/annurev.physchem.53.091301.150114 and DOI: 10.1146/annurev.physchem.55.091602.094410). This is common knowledge in the QM/MM community, but we are happy to add text that explains this.
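To illustrate what adiabatic mapping involves, here is a minimal sketch on a toy two-dimensional surface; the energy() function is a hypothetical stand-in (the actual study uses PM6-based QM/MM energies), and this is not the code used in the paper:

```python
import numpy as np
from scipy.optimize import minimize

def energy(x):
    """Toy double-well surface along x[0]; stands in for a QM/MM energy."""
    return (x[0]**2 - 1.0)**2 + 2.0 * x[1]**2

def adiabatic_profile(rc_values, x_rest0):
    """Scan the reaction coordinate x[0], relaxing all remaining degrees
    of freedom at each point (the essence of adiabatic mapping)."""
    profile = []
    x_rest = np.atleast_1d(np.asarray(x_rest0, dtype=float))
    for rc in rc_values:
        # Minimize over the remaining coordinates with the reaction
        # coordinate held fixed at rc.
        res = minimize(lambda y: energy(np.concatenate(([rc], y))), x_rest)
        x_rest = res.x          # warm-start the next point on the scan
        profile.append(res.fun)
    return np.array(profile)

rc = np.linspace(-1.2, 1.2, 25)             # reactant -> product
E = adiabatic_profile(rc, x_rest0=[0.5])
print(f"estimated barrier: {E.max() - E[0]:.3f}")  # profile max vs. reactant
```

The barrier is read off as the maximum of the relaxed profile relative to the reactant minimum, with no Hessian (vibrational) analysis required.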
** “2. The authors cherry-picked data by deciding that certain shapes of the transition barrier should be thrown away, that certain atoms moved too much in the minimization and should be held fixed, etc. How can anyone believe this is a proper procedure with so much arbitrary and manual input, especially without further proof that the transition states were indeed identified?”
Our response: “Cherry pick” implies that we selectively discard data that does not fit with experiment, which we have not done.
As explicitly stated in the manuscript, the aim is “to identify promising mutants for, and to eliminate non-promising mutants from, experimental consideration.” Occasionally the shape of the reaction profile is inconclusive, i.e. it is not clear whether a particular mutation is promising or not. The most conservative choice is to classify the mutation as non-promising, but this was only an issue for mutants where we do not know the experimental answer. Likewise, the same constraints are applied to all mutants, so there can be no question of “cherry picking”.
We are happy to clarify this point in the manuscript.
** “3. All this manual and arbitrary input indicates that the procedure is not robust; therefore, it cannot be used for high-throughput screening.”
Our response: Since we use our method to screen nearly 400 mutants, this statement is demonstrably wrong. Yes, there is some manual intervention, but the method is automated to such a degree that hundreds of mutants can be screened.
** “4. The authors' claim in the rebuttal that an error analysis is not needed since a comparison is made to experiments would be correct if exactly the same property was compared in the experiments as in the computation, but here this is not the case. Experimental activities are characterized by kcat/KM while the computationally obtained number is a barrier height. Since the entropic contribution is missing and since the authors do not know the value of the transmission coefficient, not even kcat can be correctly calculated. Given the arbitrariness of the procedure, and the inherent limitations of a semiempirical method like PM6, an error analysis would be highly appropriate.”
Our response: As we clearly state (emphasis added): “Given the approximations introduced to make the method sufficiently efficient, it is noted that the intent of the method is not a quantitative ranking of the reaction barriers, but to identify promising mutants for, and to eliminate non-promising mutants from, experimental consideration. Therefore only qualitative changes in overall activity are considered.”
The reviewer points out that we cannot compute exactly what is measured, yet insists on a quantitative comparison of computed and experimental data. This does not make any sense.
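For reference, the standard transition-state-theory (Eyring) relation makes the mismatch explicit; the symbols are the conventional ones and are not taken from the manuscript:

```latex
% Eyring equation: the rate depends on the activation *free* energy
% \Delta G^\ddagger (enthalpy and entropy) and on the transmission
% coefficient \kappa, neither of which is fully captured by a computed
% potential-energy barrier alone.
k_{\mathrm{cat}} \approx \kappa \,\frac{k_B T}{h}\,
  \exp\!\left(-\frac{\Delta G^{\ddagger}}{RT}\right)
```

Since the computed barrier approximates only part of the activation free energy and the transmission coefficient is unknown, only qualitative trends across mutants can be compared, which is exactly what the paper claims to do.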
** “In conclusion, the desire to have a high-throughput algorithm has led to way too many concessions on accuracy and robustness; without further proofs, the accuracy of data and conclusion is in question.”
Our response (repeated from above): This paper offers a computational method for generating hundreds of experimentally testable predictions. PLoS ONE accepts papers in all areas of science, including purely computational studies such as our previous PLoS ONE paper: DOI: 10.1371/journal.pone.0049849. Thus, the absence of any experimental data should not in itself preclude publication in PLoS ONE. However, we do offer some experimental verification, which is in reasonable agreement with our computational results.
We hope that future experimental studies test our predictions. However, even if we are proven wrong, this would not alter the fact that our current conclusions are supported by the current data: (1) Barriers of hundreds of mutants are estimated. (2) There is general qualitative agreement with available experimental data, the best one can expect given the many approximations we make and duly note. (3) We offer experimentally testable predictions for other mutants.