As I understand it, the Annealer within AMP operates in this sequence:
1. Generate a random initial parameter vector X0
2. Calculate the value of the loss function with X0
3. Generate a trial move resulting in a new parameter vector X1
4. Calculate the value of the loss function with X1
5. Decide whether to accept or reject the new vector X1 based on the Metropolis criterion
6. If rejected, return to step 3; if accepted, update X0 = X1 and return to step 3
So this describes a Monte Carlo search of the model parameter space.
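In pseudocode, a minimal sketch of that loop as I understand it (the function name, step size, and temperature here are illustrative, not AMP internals):
=================================================
import numpy as np

def metropolis_search(loss, x0, niter=1000, stepsize=0.1, T=1.0, seed=None):
    """Plain Metropolis search over a parameter vector (illustrative)."""
    rng = np.random.RandomState(seed)
    x, fx = np.asarray(x0, dtype=float), loss(x0)
    for _ in range(niter):
        x_trial = x + rng.normal(scale=stepsize, size=x.shape)  # trial move
        f_trial = loss(x_trial)
        # Metropolis criterion: always accept downhill moves; accept uphill
        # moves with probability exp(-(f_trial - fx) / T)
        if f_trial <= fx or rng.rand() < np.exp(-(f_trial - fx) / T):
            x, fx = x_trial, f_trial
    return x, fx
====================================================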
Rather than performing the Metropolis test on the loss value of the randomly generated X1, I would like to perform the test on an optimized parameter vector X1' obtained by minimizing LossFunction(X1) with, say, a BFGS minimization.
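That is, each Metropolis comparison would be between locally minimized losses, in the spirit of basin hopping. A sketch of the modified loop (again illustrative, using scipy.optimize.minimize for the BFGS step):
=================================================
import numpy as np
from scipy.optimize import minimize

def basin_hop(loss, x0, niter=100, stepsize=0.5, T=1.0, seed=None):
    """Metropolis test applied to locally minimized losses (illustrative)."""
    rng = np.random.RandomState(seed)
    res = minimize(loss, x0, method='BFGS')           # X0 -> local minimum
    x, fx = res.x, res.fun
    for _ in range(niter):
        x_trial = x + rng.normal(scale=stepsize, size=x.shape)
        res = minimize(loss, x_trial, method='BFGS')  # X1 -> X1'
        # Metropolis criterion on the minimized loss, not the raw trial loss
        if res.fun <= fx or rng.rand() < np.exp(-(res.fun - fx) / T):
            x, fx = res.x, res.fun
    return x, fx
====================================================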
I believe this can be accomplished by passing scipy's basinhopping to the Regressor class, as indicated in the "Adjusting convergence parameters" section of the AMP documentation.
Unfortunately, errors are generated when using this option:
train.py
=================================================
from amp import Amp
from amp.model import LossFunction
from amp.model.neuralnetwork import NeuralNetwork
from amp.descriptor.zernike import Zernike
from amp.regression import Regressor
from scipy.optimize import basinhopping

num_cores = 10
train_file = 'training.traj'

# Converge on energy only; skip the force term
convergence = {'energy_rmse': 0.0005, 'force_rmse': None}

calc = Amp(descriptor=Zernike(cutoff=10),
           model=NeuralNetwork(hiddenlayers=(8, 8, 8)),
           cores=num_cores)

# Replace the default optimizer with scipy's basinhopping
regressor = Regressor(optimizer=basinhopping)
calc.model.regressor = regressor
calc.model.lossfunction = LossFunction(convergence=convergence)

calc.train(images=train_file)
====================================================
produces this error:
Traceback (most recent call last):
  File "training_bhop.py", line 42, in <module>
    calc.train(images=train_file)
  File "/Apps/software/anaconda2/lib/python2.7/site-packages/amp-dev-py2.7.egg/amp/__init__.py", line 311, in train
    parallel=self._parallel)
  File "/Apps/software/anaconda2/lib/python2.7/site-packages/amp-dev-py2.7.egg/amp/model/neuralnetwork.py", line 228, in fit
    result = self.regressor.regress(model=self, log=log)
  File "/Apps/software/anaconda2/lib/python2.7/site-packages/amp-dev-py2.7.egg/amp/regression/__init__.py", line 85, in regress
    **self.optimizer_kwargs)
  File "/Apps/software/anaconda2/lib/python2.7/site-packages/scipy/optimize/_basinhopping.py", line 632, in basinhopping
    niter_success = niter + 2
TypeError: unsupported operand type(s) for +: 'instancemethod' and 'int'
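For what it's worth, I can reproduce the same TypeError outside AMP by passing a callable where basinhopping expects its third positional argument, niter. My guess (only a guess) is that Regressor.regress forwards an extra positional argument, perhaps the loss derivative, that basinhopping does not accept:
=================================================
import numpy as np
from scipy.optimize import basinhopping

def loss(x):
    return float(np.dot(x, x))

def dloss(x):
    return 2.0 * x

# The derivative lands in basinhopping's `niter` slot when passed
# positionally, so `niter_success = niter + 2` raises the same error
# (here 'function' rather than 'instancemethod'):
basinhopping(loss, np.zeros(3), dloss)
====================================================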
I don't believe there is anything wrong in the script itself. Can someone offer some insight into the problem?
Anthony
