Monday, January 21, 2019

Screening for large energy storage capacity of meta-stable dihydroazulene Part 2



This is a follow-up to this post. Briefly, we have computed $\Delta E_{rxn}$ and $\Delta E^\ddagger$ for about 32,500 molecules using xTB and PM3, respectively. We can afford to do a reasonably careful (DFT/TZV) study on at most 50 molecules, so the next question is how to identify the top 50 candidates. In other words, to what extent can we trust the conformational search and the xTB and PM3 energies?

To try to answer the latter question we (i.e. Mads) randomly chose 20, 60, and 20 molecules with high, medium, and low xTB-$\Delta E_{rxn}$ values and recomputed $\Delta E_{rxn}$ at the M06-2X/6-31G(d) level of theory using the lowest xTB-energy DHA and VHF structures as starting geometries. The results are shown below


The vertical and horizontal lines denote $\Delta E_{rxn}$ and $\Delta E^\ddagger$, respectively, for an experimentally characterised "reference" compound that we want to improve upon, i.e. we are looking for compounds that have larger $\Delta E_{rxn}$ and $\Delta E^\ddagger$ values. $\Delta E_{rxn}$ should ideally be at least 5 kcal/mol higher than the reference (but in general as large as possible) and $\Delta E^\ddagger$ should be at least 2 kcal/mol higher than the reference, but anything higher than that is not necessarily better. The half-life depends very sensitively on the barrier and is hard to compute accurately, so we have to be careful not to exclude molecules with high storage capacity just because the barrier is close to the reference.

Using these criteria I would select 5 points for further study, indicated by the red points (let's call them Square, Star, Triangle, Dot, and Diamond). Square is clearly the most promising one, with significantly higher $\Delta E_{rxn}$ and $\Delta E^\ddagger$ than the reference. The remaining 4 points are chosen because they either have reasonably high $\Delta E_{rxn}$ and $\Delta E^\ddagger$ values that are only a few kcal/mol below the reference (Triangle and Star), or reasonably high $\Delta E^\ddagger$ and $\Delta E_{rxn}$ values that are somewhat (ca 3 kcal/mol) higher than the reference. I'd call the last 4 points pseudo-positives.


The corresponding plot for SQM storage energies and barrier heights is shown above. The good news is that Square is still the clear winner, and I would also have picked Star and Triangle for further investigation. However, there are many more points that I would also have picked (false pseudo-positives), and Diamond and Dot would not have been picked (false pseudo-negatives).


The false pseudo-positives are due to PM3 overestimating the barrier, so let's use B3LYP/6-31+G(d)//xTB barriers instead. Square is still the winner, Star would still be picked (but not Triangle), and the false pseudo-positives are gone. However, the barriers for Dot and Diamond are now so low that they would not be picked (false pseudo-negatives).

Summary and outlook
SQM finds the only true positive (Square). PM3 tends to overestimate the barrier relative to the reference, which leads to false pseudo-positives. xTB tends to underestimate the storage capacity, which leads to a few false pseudo-negatives. DFT//PM3 barriers remove the false pseudo-positives but also lead to some false pseudo-negatives.


The above plot shows the molecules with an xTB storage energy that is at least 4 kcal/mol higher than the reference (4 rather than 5 kcal/mol since xTB tends to underestimate the storage energy). The solid line shows the reference barrier and the dotted line lies 2 kcal/mol above that (since PM3 tends to overestimate the barrier). There are 41 points above that line, including good old Square, that we should investigate further, and it's probably also a good idea to look at a few more points with very high storage energy and reasonably high barrier.
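
For concreteness, the kind of threshold-based selection described above can be written in a few lines. This is only a sketch: the file name, column names, and reference values are placeholders, not the actual data used in this study.

```python
# Minimal sketch of the threshold-based selection described above.
# File name, column names, and reference values are placeholders.
import pandas as pd

REF_STORAGE = 30.0  # xTB storage energy of the reference compound (kcal/mol, placeholder)
REF_BARRIER = 25.0  # PM3 barrier of the reference compound (kcal/mol, placeholder)

df = pd.read_csv("screening_results.csv")  # columns: smiles, e_storage_xtb, e_barrier_pm3

# xTB storage energy at least 4 kcal/mol above the reference (4 rather than 5
# because xTB tends to underestimate the storage energy) and PM3 barrier at
# least 2 kcal/mol above the reference barrier.
selected = df[(df["e_storage_xtb"] >= REF_STORAGE + 4.0) &
              (df["e_barrier_pm3"] >= REF_BARRIER + 2.0)]

selected = selected.sort_values("e_storage_xtb", ascending=False)
print(len(selected), "candidates pass both thresholds")
print(selected.head(50).to_string(index=False))
```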

But all this assumes that we can trust the conformational search so this has to be tested next.


This work is licensed under a Creative Commons Attribution 4.0

Wednesday, January 16, 2019

Open access chemistry publishing options in 2019



Here is an updated list of affordable impact-neutral and other selected OA publishing options for chemistry.

Impact neutral journals
$0 (in 2019) PeerJ chemistry journals. Open peer review. (Disclaimer: I am an editor for PeerJ Physical Chemistry.)

$425 (normally $850) Results in Chemistry. Closed peer review.

$750 ACS Omega (+ ACS membership $166/year). Closed peer review. WARNING: not real OA. You still sign away your copyright to the ACS.

$1000 F1000Research. Open peer review. Bio-related.

$1095 PeerJ - Life and Environment. Open peer review. Bio-related. PeerJ also has a membership model, which may be cheaper than the APC.

(The RSC manages "the journal’s chemistry section by commissioning articles and overseeing the peer-review process")

$1350 Cogent Chemistry. Has a "pay what you can" policy. Closed peer review.

$1595 PLoS ONE. Closed peer review.

$1790 Scientific Reports. Closed peer review.



Free or reasonably priced journals that judge perceived impact
$0 Chemical Science. Closed peer review.

$0 Beilstein Journal of Organic Chemistry. Closed peer review.

$0 Beilstein Journal of Nanotechnology. Closed peer review.

$0 ACS Central Science. Closed peer review. ($500-1000 for CC-BY, WARNING: not real OA. You still sign away your copyright to the ACS as far as I know) 

$100 Living Journal of Computational Molecular Science. Closed peer review.

€500 Chemistry2. Closed peer review.

£750 RSC Advances. Closed peer review.


Let me know if I have missed anything.




This work is licensed under a Creative Commons Attribution 4.0

Thursday, January 10, 2019

Screening for large energy storage capacity of meta-stable dihydroazulenes


Here's a summary of where we are at with Mads' project.

The Challenge
Dihydroazulenes (DHAs) are promising candidates for storing solar energy as chemical energy, which can be released as thermal energy when needed. The ideal DHA derivative has a large $\Delta H_{rxn}$ and a $\Delta G^{\ddagger}$ that is large enough to give a half-life of days to months but low enough that the energy release can be induced. Of course, any modification should not affect the light absorption adversely. This presents an interesting optimisation challenge!


We asked Mogens Brøndsted Nielsen to come up with a list of substituents and he suggested 40 different substituents and 7 positions, which results in 164 billion different molecules (we chose to interpret the right hand figure more generally). We decided to start by looking at all single and double substitutions, which amounts to 35,588 different molecules. The first question is what level of theory will allow us to screen this many molecules. 

The initial screen
At a minimum we need to compute $\Delta E_{rxn}$, which involves at least a rudimentary conformational search for both reactants and products. We used an approach similar to this study ($5+5n_{rot}$ RDKit-generated start geometries), which results in over 1 million SQM geometry optimisations, but used GFN-xTB instead of PM3 because the former is about 10 times faster.
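
As an illustration of this step, here is a minimal RDKit sketch that generates $5+5n_{rot}$ starting geometries and writes them to xyz files for subsequent GFN-xTB optimisation. The SMILES string and the MMFF pre-optimisation are placeholders rather than the actual workflow used here.

```python
# Sketch: generate 5 + 5*n_rot starting geometries with RDKit and write xyz files
# for subsequent GFN-xTB optimisation (not shown). The SMILES is a placeholder.
from rdkit import Chem
from rdkit.Chem import AllChem, rdMolDescriptors

smiles = "c1ccccc1CC(=O)N"  # placeholder molecule, not an actual DHA candidate
mol = Chem.AddHs(Chem.MolFromSmiles(smiles))

n_rot = rdMolDescriptors.CalcNumRotatableBonds(mol)
n_confs = 5 + 5 * n_rot

cids = AllChem.EmbedMultipleConfs(mol, numConfs=n_confs, randomSeed=42)
AllChem.MMFFOptimizeMoleculeConfs(mol)  # quick force-field clean-up of the embedded geometries

for cid in cids:
    conf = mol.GetConformer(cid)
    with open(f"conf_{cid:03d}.xyz", "w") as f:
        f.write(f"{mol.GetNumAtoms()}\n\n")
        for atom in mol.GetAtoms():
            p = conf.GetAtomPosition(atom.GetIdx())
            f.write(f"{atom.GetSymbol()} {p.x:.6f} {p.y:.6f} {p.z:.6f}\n")
```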

To find the barriers, we did a 12-point scan along the breaking bond in DHA (out to 3.5 Å) starting from the lowest-energy DHA conformer. The highest-energy structure was then used as a starting point for a TS search using Gaussian and "calcall". We used ORCA for the scan and Gaussian for the TS search, and used PM3 because it is implemented in both programs. We also optimised the lowest-energy VHF structure with PM3 to compute the barrier. We verified each TS by checking whether the imaginary normal mode lies along the reacting bond; this worked in 32,623 cases. The whole thing took roughly 5 days using roughly 250 cores.
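
The imaginary-mode check only needs a few lines of numpy. In this sketch the TS coordinates, the Cartesian displacements of the imaginary mode, and the indices of the two atoms of the breaking bond are assumed to have been parsed from the Gaussian output by some other means, and the 0.5 overlap threshold is an arbitrary illustrative choice.

```python
# Sketch: does the imaginary normal mode of the TS lie along the reacting bond?
# coords: (N, 3) TS coordinates; mode: (N, 3) Cartesian displacements of the
# imaginary mode; i, j: indices of the two atoms of the breaking bond.
# All inputs are assumed to be parsed from the Gaussian output elsewhere.
import numpy as np

def mode_along_bond(coords, mode, i, j, threshold=0.5):
    bond = coords[j] - coords[i]
    bond /= np.linalg.norm(bond)
    rel = mode[j] - mode[i]            # relative displacement of the two bond atoms
    rel /= np.linalg.norm(rel)
    overlap = abs(np.dot(rel, bond))   # 1.0 = perfectly along the bond, 0.0 = perpendicular
    return overlap >= threshold, overlap  # threshold is an arbitrary cutoff for this sketch
```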

Note that this approach finds a TS to cis-VHF, which we assume is in thermal equilibrium with the lower energy trans-VHF form. For both barriers and reaction energies we use the electronic energy differences rather than free energies of activation and reaction enthalpies.

The next step
We can afford to do a reasonably careful (DFT/TZV) study on at most 50 molecules, so the next question is how to identify the top 50 candidates. In other words to what extent can we trust the conformational search and the xTB and PM3 energies? I plan to cover this in a future blog post.

A more efficient initial screen
We now have data with which to test more efficient ways of performing the initial screen:

1. (a) We could perform the conformational search using MMFF and only xTB-optimise the lowest energy DHA and VHF structures.
(b) We could only perform the TS search for molecules with large $ \Delta E_{rxn}$ values.
(c) We could perform the bond-scan with xTB rather than ORCA. 
(d) We could test whether the bond-scan barrier can be used instead of the TS-based barrier.

2. We could test the use of ML-based energy functions such as ANI-1 instead of SQM.

3. We could test whether ML can be trained to predict $\Delta E_{rxn}$ and/or $\Delta E^{\ddagger}$ based on the DHA structure.
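
As a rough illustration of point 3, a fingerprint-based model could look something like the sketch below (RDKit Morgan fingerprints plus a scikit-learn random forest). The data file, column names, and hyperparameters are placeholders, and this is just one of many possible choices of representation and regressor.

```python
# Sketch: train a simple ML model to predict the xTB storage energy from the
# DHA SMILES. File name, column names, and hyperparameters are placeholders.
import numpy as np
import pandas as pd
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

def fingerprint(smiles, radius=2, n_bits=2048):
    mol = Chem.MolFromSmiles(smiles)
    return np.array(AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits))

df = pd.read_csv("dha_screening.csv")      # columns: smiles, e_storage_xtb (placeholders)
X = np.array([fingerprint(s) for s in df["smiles"]])
y = df["e_storage_xtb"].values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
model = RandomForestRegressor(n_estimators=200, random_state=1)
model.fit(X_train, y_train)
print("Test MAE (kcal/mol):", mean_absolute_error(y_test, model.predict(X_test)))
```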

We'd be very happy to collaborate on this.

Beyond double substitution
No matter how efficient we make the initial screen, screening all 164 billion molecules is simply not feasible. Instead we'll need to use search algorithms such as genetic algorithms or random forest.

Other ideas/comments/questions on this or anything else related to this blogpost are very welcome.


This work is licensed under a Creative Commons Attribution 4.0

Friday, January 4, 2019

Planned papers for 2019

A year ago I thought I'd probably publish three papers in 2018:

Accepted
1. Fast and accurate prediction of the regioselectivity of electrophilic aromatic substitution reactions

Probable
2. Random Versus Systematic Errors in Reaction Enthalpies Computed using Semi-empirical and Minimal Basis Set Methods
3. Improving Solvation Energy Predictions using the SMD Solvation Method and Semi-empirical Electronic Structure Methods

Unexpected
5. Empirical corrections and pair interaction energies in the fragment molecular orbital method

The end result was five papers. In addition I also published a proposal.

Here's the plan for 2019

Submitted

Probable
2. Screening for energy storage capacity of meta-stable vinylheptafulvenes
3. Testing algorithms for finding the global minimum of drug-like compounds
4. Towards a barrier height benchmark set for biologically relevant systems - part 2
5. SMILES-based genetic algorithms for chemical space exploration

Maybe
6. Further screening of bicyclo[2.2.2]octane-based molecular insulators
7. Screening for electronic properties using a graph-based genetic algorithm
8. Further screening for energy storage capacity of meta-stable vinylheptafulvenes

I haven't started a draft on any of papers 2-5 so I'm not exactly brimming with confidence that all 4 will make it into print in 2019. However, we have 80-90% of all the data needed to write each paper, and I've blogged a bit about paper 3.

Sunday, November 25, 2018

Conformational search for the global minimum



We've been working on conformational search for a while and are nearing the point where we have enough to write it up. This post is to get my head around the central point of the study.

Motivation
I'm interested in conformational search because I want to compute accurate reaction energies. Therefore I need to find the global energy minimum of both reactants and products (or something very close to them as explained below).

I need to make 2 choices: which conformational search method and which level of theory to use.

Establishing a benchmark set
The only way to make reasonably sure you find the global minimum is to do a systematic search with a relatively fine dihedral angle resolution. This can be painful, even with MMFF, but needs only to be done once. So what if it runs for 24 hours?

A dihedral scan doesn't sample ring conformations directly, so you may have to repeat the systematic search for several ring conformations.
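
To make the systematic-search idea concrete, here is a minimal RDKit sketch that scans a dihedral grid for a hand-picked list of dihedrals and minimises each grid point with MMFF. The molecule, the dihedral definitions, and the 30-degree resolution are placeholders, and ring conformations are not handled.

```python
# Sketch: systematic dihedral scan. Every combination of grid angles is set,
# each geometry is minimised with MMFF, and the lowest energy is kept.
# The molecule and dihedral atom quadruples are hand-picked placeholders.
import itertools
from rdkit import Chem
from rdkit.Chem import AllChem, rdMolTransforms

smiles = "CCOCC"                            # placeholder molecule
dihedrals = [(0, 1, 2, 3), (1, 2, 3, 4)]    # atom indices of the dihedrals to scan
grid = range(0, 360, 30)                    # 30-degree resolution

mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
AllChem.EmbedMolecule(mol, randomSeed=42)

best_energy, best_angles = None, None
for angles in itertools.product(grid, repeat=len(dihedrals)):
    # Note: each grid point starts from the previously minimised geometry; for a
    # stricter scan you could reset to the original embedded conformer each time.
    conf = mol.GetConformer()
    for (i, j, k, l), angle in zip(dihedrals, angles):
        rdMolTransforms.SetDihedralDeg(conf, i, j, k, l, float(angle))
    ff = AllChem.MMFFGetMoleculeForceField(mol, AllChem.MMFFGetMoleculeProperties(mol))
    ff.Minimize()
    energy = ff.CalcEnergy()
    if best_energy is None or energy < best_energy:
        best_energy, best_angles = energy, angles

print(f"Lowest MMFF energy {best_energy:.2f} kcal/mol at dihedrals {best_angles}")
```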

The global energy minimum is obviously fully energy minimised so when I say "search" I mean that you are generating starting geometries that are then fully energy minimised. Also, if you are interested in reaction enthalpies, the global minimum is the structure with the lowest enthalpy and similarly for free energies.

Test your new conformational search method
Here's the only test that matters: run your new conformational search method N times and record how many of the N runs find the global minimum, i.e. the success rate. Do that for M different molecules and calculate an average success rate.

Let's say the average success rate is 90%. That tells me that if I use your method once there is a 10% chance that I won't find the global minimum, but if I use the method twice on the same molecule that chance drops to (0.1 x 0.1 =) 1% (assuming your method is stochastic).
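
Written out, if a single run finds the global minimum with probability $p$, the chance of missing it in $k$ independent runs is

$$P(\mathrm{miss}) = (1-p)^k, \qquad p = 0.9,\ k = 2 \;\Rightarrow\; P(\mathrm{miss}) = 0.1^2 = 0.01 = 1\%.$$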

The larger N and M are, the lower the uncertainty in the average success rates: 5 is no good, 10 is borderline, >15-20 is acceptable.

The success rate will depend on the number of rotatable bonds so it's important that the sizes of your M molecules are representative of the molecules you want to study.

Obviously, if your new method finds a lower energy value, then you need to go back and take another look at the way you constructed your benchmark set.

Depending on the accuracy you can live with, you can loosen the success criterion to finding a structure within, say, 0.5 kcal/mol of the global minimum.

Benchmarking energy functions for conformational search
Most benchmark studies of this kind compare conformational energies and report RMSD or MAE values. The problem with this approach is that large errors for high energy conformers can lead to large RMSD values which are misleading for the purposes of finding global minima.

The only test of an energy function that really matters is whether the "true" global minimum structure is among the predicted structures and, if so, how close in energy it is to the "predicted" global minimum. Here the "true" global minimum is the global minimum predicted by your reference energy function that you trust.

Let's say you want to compare a semiempirical method (SQM) to a DFT method you trust and you can afford to do 100 geometry optimisations with DFT per molecule. The SQM is cheap enough that you can perform either a systematic search or a search using a conformational search method you have found reliable. Now you take the 100 (unique) lowest energy structures and use them as starting points for 100 DFT optimisations. The lowest energy structure found with DFT is your best guess for the "true" global minimum and the question is what is the minimum number of DFT optimisations you need to perform to find the "true" global minimum for each molecule? The lower the number, the better the SQM method for that molecule.

Let's say you do this for 5 molecules and the answer for SQM1 is 3, 1, 4, 10, and 3, while for SQM2 it is 4, 6, 4, 7, and 4. Then I would argue that SQM2 is better because you need to perform 7 DFT optimisations to be 100% sure to find the "true" global minimum for all 5 molecules, compared to 10 DFT optimisations for SQM1.

Another metric would be to say that in practice I can only afford, say, 5 DFT optimisations and compute the energy relative to the "true" global minimum, e.g. 0.0, 0.0, 0.0, 0.5, and 0.0 kcal/mol for SQM1 and 0.0, 0.2, 0.0, 0.8, and 0.0 kcal/mol for SQM2. In this case you could argue that SQM1 is better since the maximum error is smaller. The best metric depends on your computational resources and what kind of error you can live with.
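
A minimal sketch of how the two metrics could be computed, given each molecule's DFT energies listed in the order of the SQM ranking; all numbers below are made-up placeholders, not actual screening data.

```python
# Sketch of the two metrics discussed above. For each molecule, the list holds
# the DFT energies (kcal/mol, relative to an arbitrary reference) of the
# DFT-optimised structures, in the order of the SQM ranking: element 0 comes
# from the SQM global-minimum candidate, and so on. All numbers are placeholders.

def n_dft_opts_needed(dft_energies):
    """Number of DFT optimisations (following the SQM ranking) needed to reach
    the structure with the overall lowest DFT energy."""
    return dft_energies.index(min(dft_energies)) + 1

def error_within_budget(dft_energies, budget=5):
    """Energy of the best structure found within the DFT budget, relative to
    the 'true' (lowest) DFT global minimum."""
    return min(dft_energies[:budget]) - min(dft_energies)

molecules = {
    "mol1": [1.3, 0.0, 0.7, 2.1, 0.4],
    "mol2": [0.5, 0.9, 1.4, 0.0, 0.2],
}
for name, energies in molecules.items():
    print(name, n_dft_opts_needed(energies), error_within_budget(energies, budget=3))
```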

A need for global minimum benchmark sets
Most, if not all, conformer benchmark sets that are currently available are made starting from  semirandomly chosen starting geometries, with no guarantee that the true global minimum is among the structures. What is really needed is a diverse set of molecular structures and total energies, obtained using trustworthy methods, that one is reasonably sure correspond to global minima.

As I mentioned above, the only "sure" way is to perform a systematic search but for large molecules this may be practically impossible for energy functions that you trust.

One option is to perform the systematic search using a cheaper method and then re-optimise the P lowest energy structures with the more expensive method. The danger here is that the global minimum on the expensive energy surface is not a minimum on the cheap energy surface or, more precisely, that none of the P starting geometries leads to the global minimum on the expensive surface. One way to test this is to start the stochastic search, which hopefully is so efficient that you can afford to use a trustworthy energy function, from the global minimum candidate you found.

Additionally, it is useful to use two different stochastic conformational search algorithms, such as Monte Carlo and a genetic algorithm. If both methods locate the same global minimum, then there is a good chance that it truly is the global minimum, since it is very unlikely that both would find the same structure by random chance.



This work is licensed under a Creative Commons Attribution 4.0

Thursday, October 4, 2018

Why I support Plan S


1. The world spends $10 billion annually on scientific publishing.

2. Most scientific papers are only accessible with subscription, which means they are only accessible to academia and large companies in countries with a large per capita GDP. The papers are not accessible to the tax payers who paid for the research and the university subscriptions, nor to small and medium-sized companies.

3. The price of scientific publishing has been increasing at an unsustainable rate.


4. The increased price is not due to increased costs on the side of the publishers, but rather to their aggressive negotiation tactics, leading to profit margins unheard of in any other business.



5. Pushback during price-negotiations with publishers in the EU has resulted in the publishers denying access to their journals in several countries, as a negotiation tactic. This includes access to past issues. We can't do anything about it because they own the papers. They own the papers because scientists signed away their copyright.

I believe this is an untenable situation. Who's going to do anything about it?

6. Publishers are obviously not going to do anything about it on their own accord. Any company would fight to preserve these profit margins, and they do.

7. Scientific societies are not going to do anything about it. Scientific societies derive the bulk of their income from subscriptions and are every bit as ruthless in their negotiations on subscription prices as commercial publishers. For all intents and purposes, they are publishers first, societies second.

8. By and large (one notable exception is PLoS) scientists haven't done anything about it either. In my experience, scientists are first and foremost focused on career advancement and competition for research funds and don't think about the (rising) cost of publishing.

9. Now it seems some EU funders are finally trying to do something about it with Plan S. Plan S is designed to bring about change in the current system.

This is why I fully support Plan S

If Twitter is any indication most scientists are not happy with Plan S. From what I can tell, the worries center mostly on not being able to publish in "good" journals and can be classified into two main categories:

None of the "good" journals will change
Unless something changes within the next 2-3 years researchers would not be able to publish work funded by some EU-based funding agencies in most, but not all, "good" journals.

So your colleagues who are not funded by some EU-based funding agencies publish in "good" journals and get all the recognition. As a result you may not get promoted or get new grants.

It's an unlikely scenario but if this does happen remember that your colleagues, who decide on your promotion or that you are competing against for funds, are in exactly the same boat.

Another worry is that people won't want to collaborate with you because you can't publish together in "good" journals. In my experience, this is not how scientific collaborations work, but if you do happen to meet such a potential collaborator my advice would be to avoid them at all cost.

All of the "good" journals will change
If all the "good" journals change to Gold OA in response to Plan S, then people without funding who can't pay the APC won't be able to publish in "good" journals.

This is also an unlikely scenario, but if it does happen, society would spend considerably less on subscription fees, money that could be used to pay APCs. Notice that Plan S calls for an APC cap, meaning that Plan S-friendly journals should be affordable. Remember that the current APCs are designed to maintain a very large profit margin for the publishers, so there is plenty of "fat" to trim. Finally, APCs are tied to volume: if the number of submissions increases, the cost of each individual paper decreases.

The most likely scenario in my opinion
Plan S will be (sadly) softened a little. Some "good" journals will change to comply with the final version of Plan S, some won't, and some new journals will spring up. Some researchers won't be able to publish some of their work in some of their favorite "good" journals and will have to find a new favorite for some of their papers. Your colleagues are in the same boat, so it won't have much effect on either career advancement or funding success rates. The world of scientific publishing may become a little bit less ridiculous, but not thanks to us scientists.

I view Plan S as a signal to the publishers for the next round of negotiations: we pay the bills and this is how we would like it. It's not an unreasonable position at all. Something needs to change. Publishers will fight this tooth and nail, and their main argument will be that scientists say it will be bad for science. Sadly, many scientists are saying just that. However, the main worry of the publishers is their profit margin and the main worry of the scientists is, I believe, their careers. The only thing they have in common is a fear of change.


This work is licensed under a Creative Commons Attribution 4.0




Tuesday, September 11, 2018

Reviews of Solvation Energy Predictions Using The SMD Solvation Method and Semiempirical Electronic Structure Methods

Really late posting this. The paper is already out at JCP. Here are the reviews for the record.

Reviewer #1 Evaluations:
Recommendation: Revision
New Potential Energy Surface: No

Reviewer #1 (Comments to the Author):

In this contribution, the authors report a set of systematic analyses of semi-empirical (NDDO and DFTB) methods combined with continuum solvation models (COSMO and SMD) for the description of solvation free energies of well-documented benchmark cases (the MNSOL dataset). They found that the performance of NDDO and DFTB continuum solvation models can be substantially improved when the atomic radii are optimized, and that the results are most sensitive to the radii of HCNO. Another interesting observation is that the optimized radii have a considerable degree of transferability to other solvents.

Since an efficient computation of solvation free energies and related quantities (e.g., pKa values) is valuable in many chemical and biological applications, the results of this study are of considerable interest to the computational chemistry community.

In the current form, the ms can benefit from further discussion of several points:

1. The authors chose to optimize the atomic radii based entirely on element type (e.g., HCNOS). In the literature, many solvation models either further consider atom types (e.g., UAKS) or atomic charge distribution (based on either atomic point charges or charge density); in many cases, a higher degree of accuracy appears to be obtained. It would be useful to further clarify the principle behind the current optimization and the expected level of accuracy; for example, to what degree should we expect the same set of radii to work well for both neutral and ionic (especially anionic) species?
2. Although it is well known - it is useful to explicitly point out that the experimental values for neutral and charged species have different magnitudes of errors.
3. It would be informative to further dissect/discuss the physical origins for the errors of NDDO/DFTB continuum solvation models. For example, are the larger errors (as compared to, for example, HF based calculations) due primarily to the less reliable description of the solute charge density (e.g., multipole moments) or solute polarizability? Discussion along this line might be relevant to the transferability of the optimized model to non-aqueous solvents.

4. Cases with very large errors deserve further analysis/discussion - for example, some neutral solutes apparently have very large errors at HF, NDDO and DFTB levels - as much as 20 or even 30 kcal/mol! What are these molecules? Are the same set of molecules problematic for all methods? What is the physical origin for these large errors?


Reviewer #2 Evaluations:
Recommendation: Revision
New Potential Energy Surface: No

Reviewer #2 (Comments to the Author):

In this paper, the authors make the case for efficient solvation models in
fast electronic structure methods (currently heavily utilized for high-throughput
screening approaches). They extend an implementation of PM6 in the Gamess
program to account for d orbitals. The SMD and COSMO continuum models in combination with
various semi-empirical NDDO and also DFT tight-binding approaches are considered.
Their analysis clearly highlights deficiencies of the semi-empirical approaches
compared to HF/DFT. The authors then proceed to propose a remedy (changing the
radii for H, C, O, N, and S). Although this change was driven by data on aqueous
solvation energies, the authors find that other polar solvents (DMSO, CH3CN, CH3OH)
are also improved, which is a sign of transferability of this simple fix.
The prediction of pKa data, as an important application field, concludes the
results section. The paper is clearly written, however, it raises questions that
should be addressed in a revision:
1) Table 2 shows very (too) small Coulomb radii for H and on page 6 this is commented on.
The authors note that for radii smaller than 0.6 A the proton moved into the solvent.
However, no further analysis is provided. I assume that this is due to an increased outlying
charge and this outlying charge should be quantified. Apparently, some error compensation
is in operation. This also relates to the statement 'error for the ions is considerably larger
than for neutral molecules' on page 5. Error compensation also raises a concern about
transferability that the authors must address.
2) The authors should also review their list of references (I assume that the first author's
surname in Ref. 1 is misspelled, the abbreviations of journals are not in JCP style, Ref. 34
appears to lack a journal title, Ref. 16 lacks author names ...).
3) Moreover, figures 1-3 lack a label for the y-axis, figure 4 lacks units
on the y-axis.
4) Few typos need also be removed (see, e.g., "mainly only" -> "mainly on"
on page 2).


This work is licensed under a Creative Commons Attribution 4.0