Friday, March 30, 2018

Monte Carlo tree search

This is an attempt to explain MCTS to myself. If I made any mistakes let me know in the comments.

Let's say we have to make a series of three binary decisions (e.g. left/right). This leads to $2^3$ = 8 possible outcomes (A-H), and let's say three of these (A, E, and F) are desirable (i.e. "winners"). We'd like to find one or more of these without having to go through all the options. The problem can be represented by a tree graph as shown here (the circles are called nodes and the lines are called edges).



We start at the top of the tree and pick one path on each side of the tree at random, basically by flipping a coin at each node to decide whether we go left or right (hence Monte Carlo). Let's say we end up at points A and G.


We now go to the second layer of the graph and on each node record whether we found a winner ($w_i$ = 1) or not ($w_j$ = -1) and the number of times we visited the node ($n_i = n_j$ = 1). In the top node we add the wins and losses (1 + (-1) = 0) and the total number of attempts ($N$ = 2).



Now we compute the "action" ($a_i$) for each edge in the first branch point according to
$$ a_i = \frac{w_i}{n_i}+c \sqrt{\frac{\ln (N)}{n_i}}$$
$c$ is the exploration parameter that, theoretically, should be $\sqrt{2}$ but in practice is treated as an empirical parameter. Here I'll use $c$ = 4.
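Just to make the formula concrete, here it is as a small Python function (my own sketch; the names are just for illustration). With $c$ = 4 and the statistics from the first two rollouts, the left branch comes out ahead:

import math

def action(w, n, N, c=4.0):
    """UCT-style action for a node.

    w : sum of recorded wins (+1) and losses (-1) at the node
    n : number of times the node has been visited
    N : number of times the parent node has been visited
    c : exploration parameter (sqrt(2) in theory, empirical in practice)
    """
    return w / n + c * math.sqrt(math.log(N) / n)

# After the first two random rollouts (a win on the left, a loss on the right):
print(action(w=1, n=1, N=2))    # left branch:  ~4.33
print(action(w=-1, n=1, N=2))   # right branch: ~2.33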

Now we start at the top node and choose the node with the highest action, which in this case is the left one. From this node we choose a direction at random and repeat this process until we hit the end of the tree. Let's say we end up at C. C is not a winner, so we update the $w_i$s and $n_i$s along this path, and the first node we chose at random gets its own $w_i$ and $n_i$ recorded for the first time.



Now we update the actions. Note that because $N$ changes, the action on the right hand side of the tree also changes. In fact, that action is now larger than on the left, so in our fourth exploration we'll now start exploring the right side of the tree further.



We now repeat this process as many times as we can afford. The path with the highest action at the end of the simulation is chosen as the best one.
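Putting the whole procedure together, here is a minimal sketch of the toy example in Python. This is my own illustration, not a reference implementation: the winners A, E, and F are hard-coded as left/right bit strings, the first rollouts are simply random (rather than one forced down each side), and for simplicity every node along the rolled-out path gets statistics, not just the first new one.

import math, random

DEPTH = 3
WINNERS = {"000", "100", "101"}   # leaves A, E and F (0 = left, 1 = right)
stats = {}                        # path prefix -> [wins, visits]

def action(path, parent_visits, c=4.0):
    w, n = stats[path]
    return w / n + c * math.sqrt(math.log(parent_visits) / n)

def select_and_rollout():
    """Walk down the tree: follow the highest action while both children have
    statistics, then continue by coin flips (the Monte Carlo part)."""
    path = ""
    while len(path) < DEPTH:
        children = [path + "0", path + "1"]
        parent_visits = stats.get(path, [0, 0])[1]
        if parent_visits > 0 and all(child in stats for child in children):
            path = max(children, key=lambda child: action(child, parent_visits))
        else:
            path = random.choice(children)
    return path

def backpropagate(path, reward):
    """Add the reward (+1 win, -1 loss) and one visit to every node on the path."""
    for i in range(len(path) + 1):
        w, n = stats.get(path[:i], [0, 0])
        stats[path[:i]] = [w + reward, n + 1]

for _ in range(20):               # as many iterations as we can afford
    leaf = select_and_rollout()
    backpropagate(leaf, 1 if leaf in WINNERS else -1)

# Pick the first move with the highest action at the end of the simulation
N = stats[""][1]
best = max(("0", "1"), key=lambda child: action(child, N) if child in stats else float("-inf"))
print("Best first move:", "left" if best == "0" else "right")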

You'll notice that paths A and B will never be explored again because we (arbitrarily) chose to branch to the right at some point. So another option that is often used is to choose random paths leading in both directions, as we did in the very first step.




Tuesday, March 6, 2018

Reviews of Random Versus Systematic Errors in Reaction Enthalpies Computed Using Semi-empirical and Minimal Basis Set Methods

We submitted this paper to ACS Omega on January 31st and the reviews just came back.

Reviewer: 1

Recommendation: Publish after minor revisions.

Comments:
The authors explored the CBH approach proposed by Sengupta and Raghavachari to compute the reaction enthalpy of a series of organic reactions using semi-empirical and low-cost HF/DFT as the low-level method. They also discussed the origin of errors for several cases that exhibited very large errors. The results will be very useful to the computational chemistry community, in terms of identifying effective means to compute reaction energetics and better ways to improve low-cost methods.

I have only a few minor comments:
1. It appears that dispersion correction was not included for DFTB3 and some NDDO methods. For reactions that involve very large molecules, dispersion may make a non-negligible contribution, as found, for example, for Diels-Alder reactions in the recent benchmark analysis by Gruden et al. (J. Comp. Chem. 38, 2171-2185).

2. There are several typos: line 55 of pg 2, "corrections WERE not included"; line 48 of pg 6, there is one additional "is".

3. It might be useful to report and comment on the computational cost for the different approaches. For example, PBEh-3c is still rather expensive compared to the semi-empirical methods.

4. Is there a "simple" explanation for the difference between xTB and DFTB3? For example, does the improved description of frequencies by xTB make a major difference?


Reviewer: 2

Recommendation: Publish after minor revisions.

Comments:
The authors have carried out an analysis of the performance of highly efficient computational methods for the computation of enthalpies of organic reactions using the connectivity-based hierarchy. The methods considered include DFT, HF, and a range of semi-empirical methods. The analysis is clear and some of the reported findings are indeed significant. While the good performance of DFT and HF is consistent with previous results, the lack of significant improvement with semi-empirical methods is particularly noteworthy. The paper is acceptable for publication after the authors address the following comment.

A more detailed analysis is reported for reaction 19 that is an outlier for some methods such as HF. In this system, the larger errors are attributed partly to the presence of the strained oxirane ring. Similarly, reaction 23 poses problems for some semi-empirical methods due to the presence of larger errors involving allene. In light of these observations, it may be useful to add a cautionary note to the range of problems that can be studied with such methods. I suggest a small paragraph to address the potential limitations of the inexpensive methods for such systems containing unusual bonding situations.



Sunday, March 4, 2018

Improvements to take_elementary_step.py


First of all, I have fixed a bug in the bond order energy function, so the program now suggests significantly fewer structures. Second, I have written some code that creates input files for a QM program (GFN-xTB) and analyses the output. I chose GFN-xTB because it's a super fast semiempirical QM method that is also reasonably accurate. I have tried to keep things fairly modular so that it's easy to interface it with other QM programs (more details in the code comments).
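The interface itself is conceptually simple: write an xyz file, run xtb on it, and pull the energy out of the output. Here is a rough sketch of that idea (my own simplified version, not the actual code in take_elementary_step.py; the command-line flag and the exact wording of the energy line may differ between xtb versions):

import subprocess

def write_xyz(filename, atoms, coordinates, comment=""):
    """Write a plain xyz file that xtb can read."""
    with open(filename, "w") as f:
        f.write("{}\n{}\n".format(len(atoms), comment))
        for atom, (x, y, z) in zip(atoms, coordinates):
            f.write("{} {:.6f} {:.6f} {:.6f}\n".format(atom, x, y, z))

def run_xtb(xyz_file):
    """Run a GFN-xTB geometry optimization and return the text output.
    The '-opt' flag is an assumption; check your xtb version."""
    result = subprocess.run(["xtb", xyz_file, "-opt"], stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, universal_newlines=True)
    return result.stdout

def parse_energy(output):
    """Return the last 'total energy' (in Hartree) found in the xtb output.
    Assumes a line containing the words 'total energy' followed by a number."""
    energy = None
    for line in output.splitlines():
        if "total energy" in line.lower():
            for token in line.split():
                try:
                    energy = float(token)
                    break
                except ValueError:
                    pass
    return energy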

Using the Diels-Alder reaction as an example, starting with ethene and butadiene as the reactants, the program now suggests 28 other structures with energies no more than 100 kcal/mol higher than the reactants. I chose 100 kcal/mol because the energy function is very crude.




All structures are then optimized with GFN-xTB and I now select structures with energies no more than 20 kcal/mol higher than the reactants. I chose a lower value here because I trust this energy function a bit more. I use the electronic energy plus an estimate of the translational entropy, but the code can easily be modified to use free energies. This narrows it down to 21 structures (ranked by energy below) in addition to the reactants, so the bond order energy function is actually doing quite well.
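The energy filter itself is just a relative-energy cutoff. Something along these lines (a sketch; the structure names and energies are hypothetical, energies in Hartree, and the translational-entropy estimate is left out):

HARTREE_TO_KCALMOL = 627.509

def filter_structures(energies, reference_energy, cutoff=20.0):
    """Keep structures no more than `cutoff` kcal/mol above the reference
    (the reactants) and rank them by relative energy."""
    kept = []
    for name, energy in energies.items():
        relative_energy = (energy - reference_energy) * HARTREE_TO_KCALMOL
        if relative_energy <= cutoff:
            kept.append((relative_energy, name))
    return sorted(kept)

# Hypothetical usage, with energies (in Hartree) parsed from the xtb output:
# energies = {"structure_1": -15.012345, "structure_2": -14.998765}
# ranked = filter_structures(energies, reference_energy=reactant_energy)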


You'll notice that some of the structures appear to have two unconnected H atoms, but this is in fact an H2 molecule. The reason is that the code that translates the Cartesian coordinates back to SMILES uses a distance criterion to assign bonds, and that apparently needs some tweaking for H2. It's important to recompute the SMILES because the bonds can rearrange during the geometry optimization.
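For those wondering what the distance criterion looks like: the idea is to assign a bond whenever two atoms are closer than the sum of their covalent radii times some tolerance factor. A simplified sketch (not the actual code; the radii and the tolerance factor are exactly the knobs that apparently need tweaking for H2):

import itertools

# Approximate covalent radii in Angstrom (illustrative values only)
COVALENT_RADII = {"H": 0.38, "C": 0.77, "N": 0.75, "O": 0.73}

def get_bonds(atoms, coordinates, tolerance=1.2):
    """Assign a bond between two atoms if their distance is shorter than the
    sum of their covalent radii times a tolerance factor."""
    bonds = []
    for i, j in itertools.combinations(range(len(atoms)), 2):
        dx, dy, dz = (coordinates[i][k] - coordinates[j][k] for k in range(3))
        distance = (dx * dx + dy * dy + dz * dz) ** 0.5
        cutoff = tolerance * (COVALENT_RADII[atoms[i]] + COVALENT_RADII[atoms[j]])
        if distance < cutoff:
            bonds.append((i, j))
    return bonds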

I also need to fix the code for calculations on single atoms, for which GFN-xTB doesn't print out coordinates.

The next step is to find TSs connecting all these structures to the reactants to find the lowest barrier.



Saturday, March 3, 2018

Very simple way to animate in Jupyter notebooks

I want to use Google Colaboratory for my python course, but the module we use for animation doesn't work there because the necessary libraries are not installed. I had planned to spend part of the weekend seeing if I could find a solution, but by amazing coincidence a tweet in my Twitter stream this morning had the answer!



The answer is simply to display the images sequentially and clear the output each time with clear_output(wait=True) from IPython.display. You can see the demo code I wrote below. This works very nicely in my off-line notebook (where you want to execute %matplotlib inline in a cell above it), while the animation is a bit choppy on Colaboratory, presumably due to network latency. But it's certainly good enough.
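The demo code itself isn't reproduced here, but the basic pattern is just a loop that redraws the figure and clears the previous output on each pass. A minimal sketch (mine, not the original demo):

import time
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import clear_output

x = np.linspace(0, 2 * np.pi, 100)
for phase in np.linspace(0, 2 * np.pi, 30):
    clear_output(wait=True)          # clear the previous frame
    plt.plot(x, np.sin(x + phase))
    plt.ylim(-1.1, 1.1)
    plt.show()                       # draw the current frame
    time.sleep(0.05)                 # crude control of the frame rate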

Thanks again to Twitter for saving me tons of time!


